Quan, Hui; Zhang, Ji
2003-09-15
Analyses of study variables are frequently based on log transformations. To calculate the power for detecting the between-treatment difference in the log scale, we need an estimate of the standard deviation of the log-transformed variable. However, in many situations a literature search only provides the arithmetic means and the corresponding standard deviations. Without individual log-transformed data to directly calculate the sample standard deviation, we need alternative methods to estimate it. This paper presents methods for estimating and constructing confidence intervals for the standard deviation of a log-transformed variable given the mean and standard deviation of the untransformed variable. It also presents methods for estimating the standard deviation of change from baseline in the log scale given the means and standard deviations of the untransformed baseline value, on-treatment value and change from baseline. Simulations and examples are provided to assess the performance of these estimates. Copyright 2003 John Wiley & Sons, Ltd.
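If the untransformed variable can be assumed lognormal, the standard deviation of the log-transformed variable follows directly from the arithmetic mean m and standard deviation s, since Var[ln X] = ln(1 + s²/m²). A minimal sketch of that conversion (the lognormal assumption is ours; the paper's estimators and confidence intervals are more general):

```python
import math

def log_scale_sd(mean, sd):
    """SD of ln(X) implied by the arithmetic mean and SD of X,
    assuming X is lognormally distributed."""
    return math.sqrt(math.log(1.0 + (sd / mean) ** 2))

# e.g. arithmetic mean 10, SD 4 -> SD in the log scale ~0.385
print(log_scale_sd(10.0, 4.0))
```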
Martin, Jeffrey D.
2002-01-01
Correlation analysis indicates that for most pesticides and concentrations, pooled estimates of relative standard deviation rather than pooled estimates of standard deviation should be used to estimate variability because pooled estimates of relative standard deviation are less affected by heteroscedasticity. The median pooled relative standard deviation was calculated for all pesticides to summarize the typical variability for pesticide data collected for the NAWQA Program. The median pooled relative standard deviation was 15 percent at concentrations less than 0.01 micrograms per liter (µg/L), 13 percent at concentrations near 0.01 µg/L, 12 percent at concentrations near 0.1 µg/L, 7.9 percent at concentrations near 1 µg/L, and 2.7 percent at concentrations greater than 5 µg/L. Pooled estimates of standard deviation or relative standard deviation presented in this report are larger than estimates based on averages, medians, smooths, or regression of the individual measurements of standard deviation or relative standard deviation from field replicates. Pooled estimates, however, are the preferred method for characterizing variability because they provide unbiased estimates of the variability of the population. Assessments of variability based on standard deviation (rather than variance) underestimate the true variability of the population. Because pooled estimates of variability are larger than estimates based on other approaches, users of estimates of variability must be cognizant of the approach used to obtain the estimate and must use caution in the comparison of estimates based on different approaches.
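For context, pooled estimates of SD and relative SD are conventionally formed by pooling per-group variances weighted by degrees of freedom. A generic sketch (the NAWQA report's exact pooling across replicate sets may differ in detail; data values are made up):

```python
import numpy as np

def pooled_sd(groups):
    """Pooled SD: sqrt(sum((n_i - 1) * s_i^2) / sum(n_i - 1))."""
    num = sum((len(g) - 1) * np.var(g, ddof=1) for g in groups)
    den = sum(len(g) - 1 for g in groups)
    return np.sqrt(num / den)

def pooled_rsd_percent(groups):
    """Pooled relative SD (%): pool per-group (s_i / mean_i)^2 the same way."""
    num = sum((len(g) - 1) * (np.std(g, ddof=1) / np.mean(g)) ** 2 for g in groups)
    den = sum(len(g) - 1 for g in groups)
    return 100.0 * np.sqrt(num / den)

replicate_sets = [[0.011, 0.013], [0.095, 0.102], [1.05, 0.98]]  # ug/L
print(pooled_sd(replicate_sets), pooled_rsd_percent(replicate_sets))
```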
Mancia, G; Ferrari, A; Gregorini, L; Parati, G; Pomidossi, G; Bertinieri, G; Grassi, G; Zanchetti, A
1980-12-01
1. Intra-arterial blood pressure and heart rate were recorded for 24 h in ambulant hospitalized patients of variable age who had normal blood pressure or essential hypertension. Mean 24 h values, standard deviations and variation coefficient were obtained as the averages of values separately analysed for 48 consecutive half-hour periods. 2. In older subjects standard deviation and variation coefficient for mean arterial pressure were greater than in younger subjects with similar pressure values, whereas standard deviation and variation coefficient for heart rate were smaller. 3. In hypertensive subjects standard deviation for mean arterial pressure was greater than in normotensive subjects of similar ages, but this was not the case for variation coefficient, which was slightly smaller in the former than in the latter group. Normotensive and hypertensive subjects showed no difference in standard deviation and variation coefficient for heart rate. 4. In both normotensive and hypertensive subjects standard deviation and even more so variation coefficient were slightly or not related to arterial baroreflex sensitivity as measured by various methods (phenylephrine, neck suction etc.). 5. It is concluded that blood pressure variability increases and heart rate variability decreases with age, but that changes in variability are not so obvious in hypertension. Also, differences in variability among subjects are only marginally explained by differences in baroreflex function.
Resistance Training Increases the Variability of Strength Test Scores
2009-06-08
standard deviations for pretest and posttest strength measurements. This information was recorded for every strength test used in a total of 377 samples...significant if the posttest standard deviation consistently was larger than the pretest standard deviation. This condition could be satisfied even if...the difference in the standard deviations was small. For example, the posttest standard deviation might be 1% larger than the pretest standard
Wang, Anxin; Li, Zhifang; Yang, Yuling; Chen, Guojuan; Wang, Chunxue; Wu, Yuntao; Ruan, Chunyu; Liu, Yan; Wang, Yilong; Wu, Shouling
2016-01-01
To investigate the relationship between baseline systolic blood pressure (SBP) and visit-to-visit blood pressure variability in a general population. This is a prospective longitudinal cohort study on cardiovascular risk factors and cardiovascular or cerebrovascular events. Study participants attended a face-to-face interview every 2 years. Blood pressure variability was defined using the standard deviation and coefficient of variation of all SBP values at baseline and follow-up visits. The coefficient of variation is the ratio of the standard deviation to the mean SBP. We used multivariate linear regression models to test the relationships between SBP and standard deviation, and between SBP and coefficient of variation. A total of 43,360 participants (mean age: 48.2±11.5 years) were selected. In multivariate analysis, after adjustment for potential confounders, baseline SBPs <120 mmHg were inversely related to standard deviation (P<0.001) and coefficient of variation (P<0.001). In contrast, baseline SBPs ≥140 mmHg were significantly positively associated with standard deviation (P<0.001) and coefficient of variation (P<0.001). Baseline SBPs of 120-140 mmHg were associated with the lowest standard deviation and coefficient of variation. The associations between baseline SBP and standard deviation, and between SBP and coefficient of variation during follow-ups, showed a U-shaped curve. Both lower and higher baseline SBPs were associated with increased blood pressure variability. To control blood pressure variability, a good target SBP range for a general population might be 120-139 mmHg.
The effects of auditory stimulation with music on heart rate variability in healthy women.
Roque, Adriano L; Valenti, Vitor E; Guida, Heraldo L; Campos, Mônica F; Knap, André; Vanderlei, Luiz Carlos M; Ferreira, Lucas L; Ferreira, Celso; Abreu, Luiz Carlos de
2013-07-01
There are no data in the literature with regard to the acute effects of different styles of music on the geometric indices of heart rate variability. In this study, we evaluated the acute effects of relaxant baroque and excitatory heavy metal music on the geometric indices of heart rate variability in women. We conducted this study in 21 healthy women ranging in age from 18 to 35 years. We excluded persons with previous experience with musical instruments and persons who had an affinity for the song styles. We evaluated two groups: Group 1 (n = 21), who were exposed to relaxant classical baroque musical and excitatory heavy metal auditory stimulation; and Group 2 (n = 19), who were exposed to both styles of music and white noise auditory stimulation. Using earphones, the volunteers were exposed to baroque or heavy metal music for five minutes. After the first music exposure to baroque or heavy metal music, they remained at rest for five minutes; subsequently, they were re-exposed to the opposite music (70-80 dB). A different group of women were exposed to the same music styles plus white noise auditory stimulation (90 dB). The sequence of the songs was randomized for each individual. We analyzed the following indices: triangular index, triangular interpolation of RR intervals and Poincaré plot (standard deviation of instantaneous beat-by-beat variability, standard deviation of the long-term RR interval, standard deviation of instantaneous beat-by-beat variability and standard deviation of the long-term RR interval ratio), low frequency, high frequency, low frequency/high frequency ratio, standard deviation of all the normal RR intervals, root-mean square of differences between the adjacent normal RR intervals and the percentage of adjacent RR intervals with a difference of duration greater than 50 ms. Heart rate variability was recorded at rest for 10 minutes. The triangular index and the standard deviation of the long-term RR interval indices were reduced during exposure to both music styles in the first group and tended to decrease in the second group whereas the white noise exposure decreased the high frequency index. We observed no changes regarding the triangular interpolation of RR intervals, standard deviation of instantaneous beat-by-beat variability and standard deviation of instantaneous beat-by-beat variability/standard deviation in the long-term RR interval ratio. We suggest that relaxant baroque and excitatory heavy metal music slightly decrease global heart rate variability because of the equivalent sound level.
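The Poincaré-plot quantities named above are commonly computed as SD1 (instantaneous beat-by-beat, short-term) and SD2 (long-term) from the RR series, alongside SDNN, RMSSD, and pNN50. A sketch using those standard definitions (synthetic RR data; this is not the authors' analysis code):

```python
import numpy as np

def hrv_indices(rr_ms):
    """Time-domain and Poincare HRV indices from RR intervals in ms."""
    rr = np.asarray(rr_ms, dtype=float)
    d = np.diff(rr)
    sdnn = rr.std(ddof=1)                            # SD of all normal RR intervals
    sd1 = np.sqrt(0.5) * d.std(ddof=1)               # SD of instantaneous beat-by-beat variability
    sd2 = np.sqrt(max(2.0 * sdnn**2 - sd1**2, 0.0))  # SD of the long-term RR interval
    return dict(
        SDNN=sdnn,
        RMSSD=np.sqrt(np.mean(d ** 2)),              # RMS of successive differences
        pNN50=100.0 * np.mean(np.abs(d) > 50),       # % successive differences > 50 ms
        SD1=sd1, SD2=sd2, SD1_SD2=sd1 / sd2,
    )

rr = 1000 + 40 * np.random.default_rng(0).standard_normal(600)
print(hrv_indices(rr))
```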
NetCDF file of the SREF standard deviation of wind speed and direction that was used to inject variability in the FDDA input. Variable U_NDG_OLD contains the standard deviation of wind speed (m/s); variable V_NDG_OLD contains the standard deviation of wind direction (deg). This dataset is associated with the following publication: Gilliam, R., C. Hogrefe, J. Godowitch, S. Napelenok, R. Mathur, and S.T. Rao. Impact of inherent meteorology uncertainty on air quality model predictions. JOURNAL OF GEOPHYSICAL RESEARCH-ATMOSPHERES. American Geophysical Union, Washington, DC, USA, 120(23): 12,259–12,280, (2015).
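A minimal sketch of reading the two named variables with the netCDF4 library; the file name below is a placeholder, not the dataset's actual name:

```python
from netCDF4 import Dataset  # pip install netCDF4

# "sref_stddev_fdda.nc" is a hypothetical file name for this dataset.
with Dataset("sref_stddev_fdda.nc") as nc:
    u_sd = nc.variables["U_NDG_OLD"][:]  # SD of wind speed (m/s)
    v_sd = nc.variables["V_NDG_OLD"][:]  # SD of wind direction (deg)
    print(u_sd.shape, float(u_sd.mean()), float(v_sd.mean()))
```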
Comparing Standard Deviation Effects across Contexts
ERIC Educational Resources Information Center
Ost, Ben; Gangopadhyaya, Anuj; Schiman, Jeffrey C.
2017-01-01
Studies using tests scores as the dependent variable often report point estimates in student standard deviation units. We note that a standard deviation is not a standard unit of measurement since the distribution of test scores can vary across contexts. As such, researchers should be cautious when interpreting differences in the numerical size of…
Association of auricular pressing and heart rate variability in pre-exam anxiety students.
Wu, Wocao; Chen, Junqi; Zhen, Erchuan; Huang, Huanlin; Zhang, Pei; Wang, Jiao; Ou, Yingyi; Huang, Yong
2013-03-25
A total of 30 students scoring between 12 and 20 on the Test Anxiety Scale who had been exhibiting an anxious state > 24 hours, and 30 normal control students were recruited. Indices of heart rate variability were recorded using an Actiheart electrocardiogram recorder at 10 minutes before auricular pressing, in the first half of stimulation and in the second half of stimulation. The results revealed that the standard deviation of all normal to normal intervals and the root mean square of standard deviation of normal to normal intervals were significantly increased after stimulation. The heart rate variability triangular index, very-low-frequency power, low-frequency power, and the ratio of low-frequency to high-frequency power were increased to different degrees after stimulation. Compared with normal controls, the root mean square of standard deviation of normal to normal intervals was significantly increased in anxious students following auricular pressing. These results indicated that auricular pressing can elevate heart rate variability, especially the root mean square of standard deviation of normal to normal intervals in students with pre-exam anxiety.
Size-dependent standard deviation for growth rates: Empirical results and theoretical modeling
NASA Astrophysics Data System (ADS)
Podobnik, Boris; Horvatic, Davor; Pammolli, Fabio; Wang, Fengzhong; Stanley, H. Eugene; Grosse, I.
2008-05-01
We study annual logarithmic growth rates R of various economic variables such as exports, imports, and foreign debt. For each of these variables we find that the distributions of R can be approximated by double exponential (Laplace) distributions in the central parts and power-law distributions in the tails. For each of these variables we further find a power-law dependence of the standard deviation σ(R) on the average size of the economic variable with a scaling exponent surprisingly close to that found for the gross domestic product (GDP) [Phys. Rev. Lett. 81, 3275 (1998)]. By analyzing annual logarithmic growth rates R of wages of 161 different occupations, we find a power-law dependence of the standard deviation σ(R) on the average value of the wages with a scaling exponent β≈0.14, close to those found for the growth of exports, imports, debt, and the growth of the GDP. In contrast to these findings, we observe for payroll data collected from 50 states of the USA that the standard deviation σ(R) of the annual logarithmic growth rate R increases monotonically with the average value of payroll. However, also in this case we observe a power-law dependence of σ(R) on the average payroll with a scaling exponent β≈-0.08. Based on these observations we propose a stochastic process for multiple cross-correlated variables where for each variable (i) the distribution of logarithmic growth rates decays exponentially in the central part, (ii) the distribution of the logarithmic growth rate decays algebraically in the far tails, and (iii) the standard deviation of the logarithmic growth rate depends algebraically on the average size of the stochastic variable.
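A scaling exponent of this kind is typically estimated by binning units by size, computing the SD of growth rates within each bin, and fitting a straight line in log-log space. A sketch on synthetic data with a known exponent (the binning scheme is illustrative):

```python
import numpy as np

def scaling_exponent(sizes, growth_rates, n_bins=20):
    """Estimate beta in sigma(R) ~ S**(-beta) by quantile-binning on size."""
    sizes, growth_rates = np.asarray(sizes), np.asarray(growth_rates)
    edges = np.quantile(sizes, np.linspace(0.0, 1.0, n_bins + 1))
    s_mid, sigma = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (sizes >= lo) & (sizes < hi)
        if m.sum() > 2:
            s_mid.append(sizes[m].mean())
            sigma.append(growth_rates[m].std(ddof=1))
    slope, _ = np.polyfit(np.log(s_mid), np.log(sigma), 1)
    return -slope

rng = np.random.default_rng(1)
S = rng.lognormal(3.0, 1.0, 20000)   # unit sizes
R = rng.normal(0.0, S ** -0.2)       # growth rates built with beta = 0.2
print(scaling_exponent(S, R))        # ~0.2
```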
Use of Standard Deviations as Predictors in Models Using Large-Scale International Data Sets
ERIC Educational Resources Information Center
Austin, Bruce; French, Brian; Adesope, Olusola; Gotch, Chad
2017-01-01
Measures of variability are successfully used in predictive modeling in research areas outside of education. This study examined how standard deviations can be used to address research questions not easily addressed using traditional measures such as group means based on index variables. Student survey data were obtained from the Organisation for…
A Priori Subgrid Scale Modeling for a Droplet Laden Temporal Mixing Layer
NASA Technical Reports Server (NTRS)
Okongo, Nora; Bellan, Josette
2000-01-01
Subgrid analysis of a transitional temporal mixing layer with evaporating droplets has been performed using a direct numerical simulation (DNS) database. The DNS is for a Reynolds number (based on initial vorticity thickness) of 600, with droplet mass loading of 0.2. The gas phase is computed using a Eulerian formulation, with Lagrangian droplet tracking. Since Large Eddy Simulation (LES) of this flow requires the computation of unfiltered gas-phase variables at droplet locations from filtered gas-phase variables at the grid points, it is proposed to model these by assuming the gas-phase variables to be given by the filtered variables plus a correction based on the filtered standard deviation, which can be computed from the sub-grid scale (SGS) standard deviation. This model predicts unfiltered variables at droplet locations better than simply interpolating the filtered variables. Three methods are investigated for modeling the SGS standard deviation: Smagorinsky, gradient and scale-similarity. When properly calibrated, the gradient and scale-similarity methods give results in excellent agreement with the DNS.
Santric-Milicevic, M; Vasic, V; Terzic-Supic, Z
2016-08-15
In times of austerity, the availability of econometric health knowledge assists policy-makers in understanding and balancing health expenditure with health care plans within fiscal constraints. The objective of this study is to explore whether the health workforce supply of the public health care sector, population number, and utilization of inpatient care significantly contribute to total health expenditure. The dependent variable is the total health expenditure (THE) in Serbia from the years 2003 to 2011. The independent variables are the number of health workers employed in the public health care sector, population number, and inpatient care discharges per 100 population. The statistical analyses include the quadratic interpolation method, natural logarithm and differentiation, and multiple linear regression analyses. The level of significance is set at P < 0.05. The regression model captures 90% of all variation in the observed dependent variable (adjusted R square), and the model is significant (P < 0.001). The growth rate of total health expenditure increased by 1.21 standard deviations for a 1-standard-deviation increase in the health workforce growth rate; it decreased by 1.12 standard deviations for a 1-standard-deviation increase in the (negative) population growth rate; and it increased by 0.38 standard deviations for a 1-standard-deviation increase in the growth rate of inpatient care discharges per 100 population (P < 0.001). The study results demonstrate that the government has been making a strong effort to control health budget growth. Exploring causality relationships between health expenditure and health workforce is important for countries that are trying to consolidate their public health finances and achieve universal health coverage at the same time.
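Coefficients expressed in standard deviations, as above, are standardized (beta) coefficients: z-score the response and each predictor before ordinary least squares. A minimal sketch with synthetic data (the study's actual pipeline also interpolates, log-transforms, and differentiates the series):

```python
import numpy as np

def standardized_betas(X, y):
    """SD change in y per 1-SD change in each predictor (OLS on z-scores)."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    yz = (y - y.mean()) / y.std(ddof=1)
    A = np.column_stack([np.ones(len(yz)), Xz])
    coef, *_ = np.linalg.lstsq(A, yz, rcond=None)
    return coef[1:]  # drop the intercept

rng = np.random.default_rng(2)
X = rng.normal(size=(9, 3))  # e.g. workforce, population, discharge growth rates
y = 1.2 * X[:, 0] - 1.1 * X[:, 1] + 0.4 * X[:, 2] + rng.normal(0, 0.1, 9)
print(standardized_betas(X, y))
```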
What to use to express the variability of data: Standard deviation or standard error of mean?
Barde, Mohini P; Barde, Prajakt J
2012-07-01
Statistics plays a vital role in biomedical research. It helps present data precisely and draw meaningful conclusions. While presenting data, one should be aware of using adequate statistical measures. In biomedical journals, the Standard Error of the Mean (SEM) and Standard Deviation (SD) are used interchangeably to express variability, though they measure different parameters. SEM quantifies uncertainty in the estimate of the mean, whereas SD indicates dispersion of the data from the mean. As readers are generally interested in knowing the variability within the sample, descriptive data should be precisely summarized with the SD. Use of the SEM should be limited to computing confidence intervals, which measure the precision of the population estimate. Journals can avoid such errors by requiring authors to adhere to their guidelines.
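The distinction is one line of arithmetic: SD describes the spread of the data, SEM = SD/√n describes the precision of the sample mean, and the confidence interval is built from the SEM. A small illustration with made-up values:

```python
import numpy as np

def sd_sem_ci(x, z=1.96):
    """Return (SD, SEM, 95% CI for the mean) of a sample."""
    x = np.asarray(x, dtype=float)
    sd = x.std(ddof=1)            # dispersion of the data about the mean
    sem = sd / np.sqrt(len(x))    # uncertainty in the estimate of the mean
    m = x.mean()
    return sd, sem, (m - z * sem, m + z * sem)

print(sd_sem_ci([5.1, 4.8, 5.6, 5.0, 4.7, 5.3]))
```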
Single-Station Sigma for the Iranian Strong Motion Stations
NASA Astrophysics Data System (ADS)
Zafarani, H.; Soghrat, M. R.
2017-11-01
In the development of ground motion prediction equations (GMPEs), the residuals are assumed to have a log-normal distribution with a zero mean and a standard deviation, designated as sigma. Sigma has a significant effect on the evaluation of seismic hazard for designing important infrastructures such as nuclear power plants and dams. Both aleatory and epistemic uncertainties are involved in the sigma parameter. However, ground-motion observations over long time periods are not available at specific sites, and GMPEs have been derived using observed data from multiple sites for a small number of well-recorded earthquakes. Therefore, sigma is dominantly related to the statistics of the spatial variability of ground motion instead of temporal variability at a single point (the ergodic assumption). The main purpose of this study is to reduce the variability of the residuals so as to handle it as epistemic uncertainty. In this regard, we attempt to partially apply the non-ergodic assumption by removing repeatable site effects from the total variability of six GMPEs derived from local, Europe-Middle East, and worldwide data. For this purpose, we used 1837 acceleration time histories from 374 shallow earthquakes with moment magnitudes ranging from Mw 4.0 to 7.3, recorded at 370 stations with at least two recordings per station. According to the estimated single-station sigma for the Iranian strong motion stations, the ratio of the event-corrected single-station standard deviation (Φss) to the within-event standard deviation (Φ) is about 0.75. In other words, removing the ergodic assumption on site response resulted in a 25% reduction of the within-event standard deviation, which reduced the total standard deviation by about 15%.
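A simplified sketch of the idea: subtract event means from total residuals to obtain within-event residuals, take the station mean of those as the repeatable site term, and compare Φ with the site-corrected Φss. Rigorous analyses use mixed-effects estimation; the data frame, column names, and synthetic magnitudes here are illustrative:

```python
import numpy as np
import pandas as pd

def single_station_phi(df):
    """Compare within-event SD (phi) with event- and site-corrected SD (phi_ss)."""
    df = df.copy()
    df["within"] = df["resid"] - df.groupby("event")["resid"].transform("mean")
    phi = df["within"].std(ddof=1)
    site = df.groupby("station")["within"].transform("mean")  # repeatable site term
    phi_ss = (df["within"] - site).std(ddof=1)
    return phi, phi_ss, phi_ss / phi

rng = np.random.default_rng(3)
ev, st = rng.integers(0, 60, 2000), rng.integers(0, 300, 2000)
tau = rng.normal(0, 0.35, 60)      # between-event terms
s2s = rng.normal(0, 0.40, 300)     # site-to-site terms
df = pd.DataFrame({"event": ev, "station": st,
                   "resid": tau[ev] + s2s[st] + rng.normal(0, 0.45, 2000)})
print(single_station_phi(df))      # phi_ss / phi well below 1
```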
Robust Confidence Interval for a Ratio of Standard Deviations
ERIC Educational Resources Information Center
Bonett, Douglas G.
2006-01-01
Comparing variability of test scores across alternate forms, test conditions, or subpopulations is a fundamental problem in psychometrics. A confidence interval for a ratio of standard deviations is proposed that performs as well as the classic method with normal distributions and performs dramatically better with nonnormal distributions. A simple…
Castro-Sánchez, Adelaida María; Matarán-Peñarrocha, Guillermo A; Sánchez-Labraca, Nuria; Quesada-Rubio, José Manuel; Granero-Molina, José; Moreno-Lorenzo, Carmen
2011-01-01
Fibromyalgia is a prevalent musculoskeletal disorder associated with widespread mechanical tenderness, fatigue, non-refreshing sleep, depressed mood and pervasive dysfunction of the autonomic nervous system: tachycardia, postural intolerance, Raynaud's phenomenon and diarrhoea. To determine the effects of craniosacral therapy on sensitive tender points and heart rate variability in patients with fibromyalgia. A randomized controlled trial. Ninety-two patients with fibromyalgia were randomly assigned to an intervention group or placebo group. Patients received treatments for 20 weeks. The intervention group underwent a craniosacral therapy protocol and the placebo group received sham treatment with disconnected magnetotherapy equipment. Pain intensity levels were determined by evaluating tender points, and heart rate variability was recorded by 24-hour Holter monitoring. After 20 weeks of treatment, the intervention group showed significant reduction in pain at 13 of the 18 tender points (P < 0.05). Significant differences in temporal standard deviation of RR segments, root mean square deviation of temporal standard deviation of RR segments and clinical global impression of improvement versus baseline values were observed in the intervention group but not in the placebo group. At two months and one year post therapy, the intervention group showed significant differences versus baseline in tender points at left occiput, left-side lower cervical, left epicondyle and left greater trochanter and significant differences in temporal standard deviation of RR segments, root mean square deviation of temporal standard deviation of RR segments and clinical global impression of improvement. Craniosacral therapy improved medium-term pain symptoms in patients with fibromyalgia.
Braithwaite, Susan S; Umpierrez, Guillermo E; Chase, J Geoffrey
2013-09-01
Group metrics are described to quantify blood glucose (BG) variability of hospitalized patients. The "multiplicative surrogate standard deviation" (MSSD) is the reverse-transformed group mean of the standard deviations (SDs) of the logarithmically transformed BG data set of each patient. The "geometric group mean" (GGM) is the reverse-transformed group mean of the means of the logarithmically transformed BG data set of each patient. Before reverse transformation is performed, the mean of means and mean of SDs each has its own SD, which becomes a multiplicative standard deviation (MSD) after reverse transformation. Statistical predictions and comparisons of parametric or nonparametric tests remain valid after reverse transformation. A subset of a previously published BG data set of 20 critically ill patients from the first 72 h of treatment under the SPRINT protocol was transformed logarithmically. After rank ordering according to the SD of the logarithmically transformed BG data of each patient, the cohort was divided into two equal groups, those having lower or higher variability. For the entire cohort, the GGM was 106 (÷/× 1.07) mg/dl, and MSSD was 1.24 (÷/× 1.07). For the subgroups having lower and higher variability, respectively, the GGM did not differ, 104 (÷/× 1.07) versus 109 (÷/× 1.07) mg/dl, but the MSSD differed, 1.17 (÷/× 1.03) versus 1.31 (÷/× 1.05), p = .00004. By using the MSSD with its MSD, groups can be characterized and compared according to glycemic variability of individual patient members. © 2013 Diabetes Technology Society.
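The transform-average-reverse-transform recipe described above is short to reproduce: log each patient's BG values, take per-patient means and SDs, average across patients, then exponentiate. A sketch with illustrative values (not the SPRINT data):

```python
import numpy as np

def ggm_mssd(patients_bg):
    """Geometric group mean (GGM) and multiplicative surrogate SD (MSSD),
    each with its own multiplicative SD (the "divide-or-times" factor)."""
    logs = [np.log(np.asarray(p, dtype=float)) for p in patients_bg]
    means = np.array([l.mean() for l in logs])     # per-patient log means
    sds = np.array([l.std(ddof=1) for l in logs])  # per-patient log SDs
    ggm, ggm_msd = np.exp(means.mean()), np.exp(means.std(ddof=1))
    mssd, mssd_msd = np.exp(sds.mean()), np.exp(sds.std(ddof=1))
    return ggm, ggm_msd, mssd, mssd_msd

cohort = [[95, 110, 130, 88], [100, 105, 98, 115], [80, 160, 120, 95]]  # mg/dl
print(ggm_mssd(cohort))
```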
Yanagihara, Nobuyuki; Seki, Meikan; Nakano, Masahiro; Hachisuga, Toru; Goto, Yukio
2014-06-01
Disturbance of autonomic nervous activity has been thought to play a role in the climacteric symptoms of postmenopausal women. This study was therefore designed to investigate the relationship between autonomic nervous activity and climacteric symptoms in postmenopausal Japanese women. The autonomic nervous activity of 40 Japanese women with climacteric symptoms and 40 Japanese women without climacteric symptoms was measured by power spectral analysis of heart rate variability using a standard hexagonal radar chart. The scores for climacteric symptoms were determined using the simplified menopausal index. Sympathetic excitability and irritability, as well as the standard deviation of mean R-R intervals in supine position, were significantly (P < 0.01, 0.05, and 0.001, respectively) decreased in women with climacteric symptoms. There was a negative correlation between the standard deviation of mean R-R intervals in supine position and the simplified menopausal index score. The lack of control for potential confounding variables was a limitation of this study. In climacteric women, the standard deviation of mean R-R intervals in supine position is negatively correlated with the simplified menopausal index score.
Artes, Paul H; Hutchison, Donna M; Nicolela, Marcelo T; LeBlanc, Raymond P; Chauhan, Balwantray C
2005-07-01
To compare test results from second-generation Frequency-Doubling Technology perimetry (FDT2, Humphrey Matrix; Carl-Zeiss Meditec, Dublin, CA) and standard automated perimetry (SAP) in patients with glaucoma. Specifically, to examine the relationship between visual field sensitivity and test-retest variability and to compare total and pattern deviation probability maps between both techniques. Fifteen patients with glaucoma who had early to moderately advanced visual field loss with SAP (mean MD, -4.0 dB; range, +0.2 to -16.1) were enrolled in the study. Patients attended three sessions. During each session, one eye was examined twice with FDT2 (24-2 threshold test) and twice with SAP (Swedish Interactive Threshold Algorithm [SITA] Standard 24-2 test), in random order. We compared threshold values between FDT2 and SAP at test locations with similar visual field coordinates. Test-retest variability, established in terms of test-retest intervals and standard deviations (SDs), was investigated as a function of visual field sensitivity (estimated by baseline threshold and mean threshold, respectively). The magnitude of visual field defects apparent in total and pattern deviation probability maps were compared between both techniques by ordinal scoring. The global visual field indices mean deviation (MD) and pattern standard deviation (PSD) of FDT2 and SAP correlated highly (r > 0.8; P < 0.001). At test locations with high sensitivity (>25 dB with SAP), threshold estimates from FDT2 and SAP exhibited a close, linear relationship, with a slope of approximately 2.0. However, at test locations with lower sensitivity, the relationship was much weaker and ceased to be linear. In comparison with FDT2, SAP showed a slightly larger proportion of test locations with absolute defects (3.0% vs. 2.2% with SAP and FDT2, respectively, P < 0.001). Whereas SAP showed a significant increase in test-retest variability at test locations with lower sensitivity (P < 0.001), there was no relationship between variability and sensitivity with FDT2 (P = 0.46). In comparison with SAP, FDT2 exhibited narrower test-retest intervals at test locations with lower sensitivity (SAP thresholds <25 dB). A comparison of the total and pattern deviation maps between both techniques showed that the total deviation analyses of FDT2 may slightly underestimate the visual field loss apparent with SAP. However, the pattern-deviation maps of both instruments agreed well with each other. The test-retest variability of FDT2 is uniform over the measurement range of the instrument. These properties may provide advantages for the monitoring of patients with glaucoma that should be investigated in longitudinal studies.
Concistrè, A; Grillo, A; La Torre, G; Carretta, R; Fabris, B; Petramala, L; Marinelli, C; Rebellato, A; Fallo, F; Letizia, C
2018-04-01
Primary hyperparathyroidism is associated with a cluster of cardiovascular manifestations, including hypertension, leading to increased cardiovascular risk. The aim of our study was to investigate ambulatory blood pressure monitoring-derived short-term blood pressure variability in patients with primary hyperparathyroidism, in comparison with patients with essential hypertension and normotensive controls. Twenty-five patients with primary hyperparathyroidism (7 normotensive, 18 hypertensive) underwent ambulatory blood pressure monitoring at diagnosis, and fifteen of them were re-evaluated after parathyroidectomy. Short-term blood pressure variability was derived from ambulatory blood pressure monitoring and calculated as the following: 1) the standard deviation of 24-h, daytime, and night-time BP; 2) the average of the daytime and night-time standard deviations, weighted for the duration of the day and night periods (24-h "weighted" standard deviation of BP); 3) average real variability, i.e., the average of the absolute differences between all consecutive BP measurements. Baseline data of normotensive and essential hypertension patients were matched for age, sex, BMI, and 24-h ambulatory blood pressure monitoring values with normotensive and hypertensive primary hyperparathyroidism patients, respectively. Normotensive primary hyperparathyroidism patients showed a 24-h weighted standard deviation (P < 0.01) and average real variability (P < 0.05) of systolic blood pressure higher than those of 12 normotensive controls. The 24-h average real variability of systolic BP, as well as serum calcium and parathyroid hormone levels, were reduced in operated patients (P < 0.001). A positive correlation of serum calcium and parathyroid hormone with the 24-h average real variability of systolic BP was observed in the entire primary hyperparathyroidism patient group (P = 0.04, P = 0.02, respectively). Systolic blood pressure variability is increased in normotensive patients with primary hyperparathyroidism and is reduced by parathyroidectomy, and may potentially represent an additional cardiovascular risk factor in this disease.
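Two of the listed indices in code form. The duration weighting is approximated here by the number of readings per period, an assumption; the study weights by the actual length of the day and night periods:

```python
import numpy as np

def weighted_24h_sd(day_bp, night_bp):
    """24-h "weighted" SD: day and night SDs weighted by period size."""
    d, n = np.asarray(day_bp, float), np.asarray(night_bp, float)
    return (d.std(ddof=1) * len(d) + n.std(ddof=1) * len(n)) / (len(d) + len(n))

def average_real_variability(bp):
    """ARV: mean absolute difference between consecutive BP readings."""
    bp = np.asarray(bp, dtype=float)
    return float(np.mean(np.abs(np.diff(bp))))

rng = np.random.default_rng(4)
day = 130 + 9 * rng.standard_normal(56)     # e.g. readings every ~15 min by day
night = 115 + 6 * rng.standard_normal(24)   # e.g. readings every ~20 min by night
print(weighted_24h_sd(day, night),
      average_real_variability(np.concatenate([day, night])))
```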
The gait standard deviation, a single measure of kinematic variability.
Sangeux, Morgan; Passmore, Elyse; Graham, H Kerr; Tirosh, Oren
2016-05-01
Measurement of gait kinematic variability provides relevant clinical information in certain conditions affecting the neuromotor control of movement. In this article, we present a measure of overall gait kinematic variability, GaitSD, based on the combination of waveform standard deviations. The waveform standard deviation is the common numerator in established indices of variability such as Kadaba's coefficient of multiple correlation or Winter's waveform coefficient of variation. Gait data were collected on typically developing children aged 6-17 years. A large number of strides was captured for each child: on average 45 (SD: 11) for kinematics and 19 (SD: 5) for kinetics. We used a bootstrap procedure to determine the precision of GaitSD as a function of the number of strides processed. We compared the within-subject (stride-to-stride) variability with the between-subject variability of the normative pattern. Finally, we investigated the correlation between age and gait kinematic, kinetic, and spatio-temporal variability. In typically developing children, the relative precision of GaitSD was 10% as soon as 6 strides were captured. As a comparison, spatio-temporal parameters required 30 strides to reach the same relative precision. The ratio of stride-to-stride to normative-pattern variability was smaller in kinematic variables (smallest for pelvic tilt, 28%) than in kinetic and spatio-temporal variables (largest for normalised stride length, 95%). GaitSD had a strong, negative correlation with age. We show that gait consistency may stabilise only at, or after, skeletal maturity. Copyright © 2016 Elsevier B.V. All rights reserved.
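The abstract does not spell out the combination rule, so the sketch below assumes pointwise SDs across strides are combined by root-mean-square over the gait cycle and across kinematic variables; array shapes are illustrative:

```python
import numpy as np

def gait_sd(strides):
    """strides: (n_strides, n_points, n_angles) time-normalised waveforms.
    Pointwise SD across strides, RMS-combined over cycle and angles
    (assumed combination rule)."""
    sd_curves = strides.std(axis=0, ddof=1)          # (n_points, n_angles)
    return float(np.sqrt(np.mean(sd_curves ** 2)))

rng = np.random.default_rng(5)
strides = rng.normal(0.0, 2.0, size=(45, 101, 9))    # 45 strides, 101 samples, 9 angles
print(gait_sd(strides))                              # ~2.0 degrees by construction
```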
Reliability-Based Design Optimization of a Composite Airframe Component
NASA Technical Reports Server (NTRS)
Pai, Shantaram S.; Coroneos, Rula; Patnaik, Surya N.
2011-01-01
A stochastic design optimization (SDO) methodology has been developed to design airframe structural components made of metallic and composite materials. The design method accommodates uncertainties in load, strength, and material properties that are defined by distribution functions with mean values and standard deviations. A response parameter, such as a failure mode, thereby becomes a function of reliability. The primitive variables, like thermomechanical loads, material properties, and failure theories, as well as variables like depth of beam or thickness of a membrane, are considered random parameters with specified distribution functions defined by mean values and standard deviations.
Christman, Stephen D; Weaver, Ryan
2008-05-01
The nature of temporal variability during speeded finger tapping was examined using linear (standard deviation) and non-linear (Lyapunov exponent) measures. Experiment 1 found that right hand tapping was characterised by lower amounts of both linear and non-linear measures of variability than left hand tapping, and that linear and non-linear measures of variability were often negatively correlated with one another. Experiment 2 found that increased non-linear variability was associated with relatively enhanced performance on a closed-loop motor task (mirror tracing) and relatively impaired performance on an open-loop motor task (pointing in a dark room), especially for left hand performance. The potential uses and significance of measures of non-linear variability are discussed.
A Note on the Estimator of the Alpha Coefficient for Standardized Variables Under Normality
ERIC Educational Resources Information Center
Hayashi, Kentaro; Kamata, Akihito
2005-01-01
The asymptotic standard deviation (SD) of the alpha coefficient with standardized variables is derived under normality. The research shows that the SD of the standardized alpha coefficient becomes smaller as the number of examinees and/or items increase. Furthermore, this research shows that the degree of the dependence of the SD on the number of…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Dong; Campos, Edwin; Liu, Yangang
2014-09-17
Statistical characteristics of cloud variability are examined for their dependence on averaging scales and best representation of probability density function with the decade-long retrieval products of cloud liquid water path (LWP) from the tropical western Pacific (TWP), Southern Great Plains (SGP), and North Slope of Alaska (NSA) sites of the Department of Energy's Atmospheric Radiation Measurement Program. The statistical moments of LWP show some seasonal variation at the SGP and NSA sites but not much at the TWP site. It is found that the standard deviation, relative dispersion (the ratio of the standard deviation to the mean), and skewness all quickly increase with the averaging window size when the window size is small and become more or less flat when the window size exceeds 12 h. On average, the cloud LWP at the TWP site has the largest values of standard deviation, relative dispersion, and skewness, whereas the NSA site exhibits the least. Correlation analysis shows that there is a positive correlation between the mean LWP and the standard deviation. The skewness is found to be closely related to the relative dispersion with a correlation coefficient of 0.6. The comparison further shows that the log normal, Weibull, and gamma distributions reasonably explain the observed relationship between skewness and relative dispersion over a wide range of scales.
Statistical models for estimating daily streamflow in Michigan
Holtschlag, D.J.; Salehi, Habib
1992-01-01
Statistical models for estimating daily streamflow were analyzed for 25 pairs of streamflow-gaging stations in Michigan. Stations were paired by randomly choosing a station operated in 1989 at which 10 or more years of continuous flow data had been collected and at which flow is virtually unregulated; a nearby station was chosen where flow characteristics are similar. Streamflow data from the 25 randomly selected stations were used as the response variables; streamflow data at the nearby stations were used to generate a set of explanatory variables. Ordinary-least squares regression (OLSR) equations, autoregressive integrated moving-average (ARIMA) equations, and transfer function-noise (TFN) equations were developed to estimate the log transform of flow for the 25 randomly selected stations. The precision of each type of equation was evaluated on the basis of the standard deviation of the estimation errors. OLSR equations produce one set of estimation errors; ARIMA and TFN models each produce l sets of estimation errors corresponding to the forecast lead. The lead-l forecast is the estimate of flow l days ahead of the most recent streamflow used as a response variable in the estimation. In this analysis, the standard deviations of lead-l ARIMA and TFN forecast errors were generally lower than the standard deviation of OLSR errors for l < 2 days and l < 9 days, respectively. Composite estimates were computed as a weighted average of forecasts based on TFN equations and backcasts (forecasts of the reverse-ordered series) based on ARIMA equations. The standard deviation of composite errors varied throughout the length of the estimation interval and generally was at a maximum near the center of the interval. For comparison with OLSR errors, the mean standard deviation of composite errors was computed for intervals of length 1 to 40 days. The mean standard deviation of length-l composite errors was generally less than the standard deviation of the OLSR errors for l < 32 days. In addition, the composite estimates ensure a gradual transition between periods of estimated and measured flows. Model performance among stations of differing model-error magnitudes was compared by computing ratios of the mean standard deviation of the length-l composite errors to the standard deviation of OLSR errors. The mean error ratio for the set of 25 selected stations was less than 1 for intervals l < 32 days. Considering the frequency characteristics of the length of intervals of estimated record in Michigan, the effective mean error ratio for intervals < 30 days was 0.52. Thus, for intervals of estimation of 1 month or less, the error of the composite estimate is substantially lower than the error of the OLSR estimate.
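A runnable sketch of the composite idea: forecast forward into the gap from the record before it, "backcast" by fitting on the reversed record after it, and blend the two with weights that shift across the gap. Plain ARIMA stands in for the TFN equations here, and the linear weighting is an assumption, not the report's scheme:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def composite_fill(log_before, log_after, gap_len, order=(1, 0, 1)):
    """Blend an ARIMA forecast with a backcast across a gap in log-flow."""
    fwd = ARIMA(np.asarray(log_before), order=order).fit().forecast(gap_len)
    bwd = ARIMA(np.asarray(log_after)[::-1], order=order).fit().forecast(gap_len)[::-1]
    w = np.arange(1, gap_len + 1) / (gap_len + 1)   # weight shifts 0 -> 1 across the gap
    return (1 - w) * np.asarray(fwd) + w * np.asarray(bwd)

rng = np.random.default_rng(6)
x = 3.0 + np.cumsum(rng.normal(0, 0.05, 400))       # synthetic daily log-flow record
print(composite_fill(x[:180], x[200:], 20))         # estimates for the 20-day gap
```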
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Guangxing; Qian, Yun; Yan, Huiping
One limitation of most global climate models (GCMs) is that with the horizontal resolutions they typically employ, they cannot resolve the subgrid variability (SGV) of clouds and aerosols, adding extra uncertainties to the aerosol radiative forcing estimation. To inform the development of an aerosol subgrid variability parameterization, here we analyze the aerosol SGV over the southern Pacific Ocean simulated by the high-resolution Weather Research and Forecasting model coupled to Chemistry. We find that within a typical GCM grid, the aerosol mass subgrid standard deviation is 15% of the grid-box mean mass near the surface on a 1-month mean basis. The fraction can increase to 50% in the free troposphere. The relationships between the sea-salt mass concentration, meteorological variables, and sea-salt emission rate are investigated in both the clear and cloudy portion. Under clear-sky conditions, marine aerosol subgrid standard deviation is highly correlated with the standard deviations of vertical velocity, cloud water mixing ratio, and sea-salt emission rates near the surface. It is also strongly connected to the grid-box mean aerosol in the free troposphere (between 2 km and 4 km). In the cloudy area, interstitial sea-salt aerosol mass concentrations are smaller, but higher correlation is found between the subgrid standard deviations of aerosol mass and vertical velocity. Additionally, we find that decreasing the model grid resolution can reduce the marine aerosol SGV but strengthen the correlations between the aerosol SGV and the total water mixing ratio (sum of water vapor, cloud liquid, and cloud ice mixing ratios).
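Subgrid variability of this kind can be diagnosed by coarse-graining a high-resolution field into GCM-sized boxes and computing per-box statistics. A generic sketch on a synthetic 2-D field (the block size and field are illustrative, not the WRF-Chem output):

```python
import numpy as np

def subgrid_stats(field, block):
    """Per-block mean and SD of a 2-D field coarse-grained into block x block boxes."""
    ny, nx = field.shape
    f = field[:ny - ny % block, :nx - nx % block]           # trim to whole blocks
    b = f.reshape(f.shape[0] // block, block, f.shape[1] // block, block)
    b = b.transpose(0, 2, 1, 3).reshape(-1, block * block)  # one row per GCM box
    return b.mean(axis=1), b.std(axis=1, ddof=1)

rng = np.random.default_rng(7)
hi_res = rng.lognormal(0.0, 0.5, size=(120, 120))  # e.g. a high-res aerosol mass field
mean, sd = subgrid_stats(hi_res, 30)               # 30x30 cells per GCM box
print(float((sd / mean).mean()))                   # mean subgrid relative SD
```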
First among Others? Cohen's "d" vs. Alternative Standardized Mean Group Difference Measures
ERIC Educational Resources Information Center
Cahan, Sorel; Gamliel, Eyal
2011-01-01
Standardized effect size measures typically employed in behavioral and social sciences research in the multi-group case (e.g., [eta][superscript 2], f[superscript 2]) evaluate between-group variability in terms of either total or within-group variability, such as variance or standard deviation--that is, measures of dispersion about the mean. In…
A Priori Subgrid Analysis of Temporal Mixing Layers with Evaporating Droplets
NASA Technical Reports Server (NTRS)
Okongo, Nora; Bellan, Josette
1999-01-01
Subgrid analysis of a transitional temporal mixing layer with evaporating droplets has been performed using three sets of results from a Direct Numerical Simulation (DNS) database, with Reynolds numbers (based on initial vorticity thickness) as large as 600 and with droplet mass loadings as large as 0.5. In the DNS, the gas phase is computed using a Eulerian formulation, with Lagrangian droplet tracking. The Large Eddy Simulation (LES) equations corresponding to the DNS are first derived, and key assumptions in deriving them are first confirmed by computing the terms using the DNS database. Since LES of this flow requires the computation of unfiltered gas-phase variables at droplet locations from filtered gas-phase variables at the grid points, it is proposed to model these by assuming the gas-phase variables to be the sum of the filtered variables and a correction based on the filtered standard deviation; this correction is then computed from the Subgrid Scale (SGS) standard deviation. This model predicts the unfiltered variables at droplet locations considerably better than simply interpolating the filtered variables. Three methods are investigated for modeling the SGS standard deviation: the Smagorinsky approach, the Gradient model and the Scale-Similarity formulation. When the proportionality constant inherent in the SGS models is properly calculated, the Gradient and Scale-Similarity methods give results in excellent agreement with the DNS.
Hansen, John P
2003-01-01
Healthcare quality improvement professionals need to understand and use inferential statistics to interpret sample data from their organizations. In quality improvement and healthcare research studies all the data from a population often are not available, so investigators take samples and make inferences about the population by using inferential statistics. This three-part series will give readers an understanding of the concepts of inferential statistics as well as the specific tools for calculating confidence intervals for samples of data. This article, Part 1, presents basic information about data including a classification system that describes the four major types of variables: continuous quantitative variable, discrete quantitative variable, ordinal categorical variable (including the binomial variable), and nominal categorical variable. A histogram is a graph that displays the frequency distribution for a continuous variable. The article also demonstrates how to calculate the mean, median, standard deviation, and variance for a continuous variable.
Variability in Wechsler Adult Intelligence Scale-IV subtest performance across age.
Wisdom, Nick M; Mignogna, Joseph; Collins, Robert L
2012-06-01
Interpreting normal Wechsler Adult Intelligence Scale (WAIS)-IV performance relative to average normative scores alone can be an oversimplification, as this fails to recognize the disparate subtest heterogeneity that occurs with increasing age. The purpose of the present study is to characterize the patterns of raw score change and associated variability on WAIS-IV subtests across age groupings. Raw WAIS-IV subtest means and standard deviations for each age group were tabulated from the WAIS-IV normative manual along with the coefficient of variation (CV), a measure of score dispersion calculated by dividing the standard deviation by the mean and multiplying by 100. The CV further conveys the magnitude of variability represented by each standard deviation. Raw mean scores predictably decreased across age groups. Increased variability was noted in Perceptual Reasoning and Processing Speed Index subtests, as Block Design, Matrix Reasoning, Picture Completion, Symbol Search, and Coding had CV percentage increases ranging from 56% to 98%. In contrast, Working Memory and Verbal Comprehension subtests were more homogeneous, with Digit Span, Comprehension, Information, and Similarities percentage increases ranging from 32% to 43%. Little change in the CV was noted on the Cancellation, Arithmetic, Letter/Number Sequencing, Figure Weights, Visual Puzzles, and Vocabulary subtests (<14%). A thorough understanding of age-related subtest variability will help to identify test limitations as well as further our understanding of cognitive domains which remain relatively steady versus those which steadily decline.
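The CV defined above is a one-liner; a short illustration of how it puts the dispersion of two hypothetical age groups with different mean raw scores on a comparable footing:

```python
import numpy as np

def coefficient_of_variation(scores):
    """CV (%) = 100 * SD / mean."""
    s = np.asarray(scores, dtype=float)
    return 100.0 * s.std(ddof=1) / s.mean()

younger, older = [40, 45, 52, 48, 50], [28, 35, 22, 30, 26]  # hypothetical raw scores
print(coefficient_of_variation(younger), coefficient_of_variation(older))
```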
Bogucki, Sz; Noszczyk-Nowak, A
2017-03-28
Heart rate variability is an established risk factor for mortality in both healthy dogs and animals with heart failure. The aim of this study was to compare short-term heart rate variability (ST-HRV) parameters from 60-min electrocardiograms in dogs with sick sinus syndrome (SSS, n=20) or chronic mitral valve disease (CMVD, n=20) and healthy controls (n=50), and to verify the clinical application of ST-HRV analysis. The study groups differed significantly in terms of both time- and frequency-domain ST-HRV parameters. In dogs with SSS versus healthy controls, particularly evident differences pertained to HRV parameters linked directly to the variability of R-R intervals. Lower values of the standard deviation of all R-R intervals (SDNN), the standard deviation of the averaged R-R intervals for all 5-min segments (SDANN), the mean of the standard deviations of all R-R intervals for all 5-min segments (SDNNI), and the percentage of successive R-R interval differences >50 ms (pNN50) corresponded to a decrease in parasympathetic regulation of heart rate in dogs with CMVD. These findings imply that ST-HRV may be useful for the identification of dogs with SSS and for the detection of dysautonomia in animals with CMVD.
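The segment-based indices named above follow from splitting the RR series into 5-min segments by cumulative time. A sketch with synthetic RR data (not the study's recordings):

```python
import numpy as np

def segment_hrv(rr_ms, seg_ms=300_000):
    """SDNN, SDANN, SDNNI and pNN50 from RR intervals (ms) in 5-min segments."""
    rr = np.asarray(rr_ms, dtype=float)
    seg_ids = (np.cumsum(rr) // seg_ms).astype(int)
    seg_means = [rr[seg_ids == k].mean() for k in np.unique(seg_ids)]
    seg_sds = [rr[seg_ids == k].std(ddof=1)
               for k in np.unique(seg_ids) if (seg_ids == k).sum() > 1]
    return dict(
        SDNN=rr.std(ddof=1),               # SD of all R-R intervals
        SDANN=np.std(seg_means, ddof=1),   # SD of the 5-min segment means
        SDNNI=float(np.mean(seg_sds)),     # mean of the 5-min segment SDs
        pNN50=100.0 * np.mean(np.abs(np.diff(rr)) > 50),
    )

rr = 600 + 30 * np.random.default_rng(8).standard_normal(6000)  # ~60 min of beats
print(segment_hrv(rr))
```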
Impacts of temperature and its variability on mortality in New England
NASA Astrophysics Data System (ADS)
Shi, Liuhua; Kloog, Itai; Zanobetti, Antonella; Liu, Pengfei; Schwartz, Joel D.
2015-11-01
Rapid build-up of greenhouse gases is expected to increase Earth’s mean surface temperature, with unclear effects on temperature variability. This makes understanding the direct effects of a changing climate on human health more urgent. However, the effects of prolonged exposures to variable temperatures, which are important for understanding the public health burden, are unclear. Here we demonstrate that long-term survival was significantly associated with both seasonal mean values and standard deviations of temperature among the Medicare population (aged 65+) in New England, and break that down into long-term contrasts between ZIP codes and annual anomalies. A rise in summer mean temperature of 1 °C was associated with a 1.0% higher death rate, whereas an increase in winter mean temperature corresponded to a 0.6% decrease in mortality. Increases in standard deviations of temperature for both summer and winter were harmful. The increased mortality in warmer summers was entirely due to anomalies, whereas it was long-term average differences in the standard deviation of summer temperatures across ZIP codes that drove the increased risk. For future climate scenarios, seasonal mean temperatures may in part account for the public health burden, but the excess public health risk of climate change may also stem from changes of within-season temperature variability.
A Monte Carlo Simulation Study of the Reliability of Intraindividual Variability
Estabrook, Ryne; Grimm, Kevin J.; Bowles, Ryan P.
2012-01-01
Recent research has seen intraindividual variability (IIV) become a useful technique to incorporate trial-to-trial variability into many types of psychological studies. IIV as measured by individual standard deviations (ISDs) has shown unique prediction to several types of positive and negative outcomes (Ram, Rabbit, Stollery, & Nesselroade, 2005). One unanswered question regarding measuring intraindividual variability is its reliability and the conditions under which optimal reliability is achieved. Monte Carlo simulation studies were conducted to determine the reliability of the ISD compared to the intraindividual mean. The results indicate that ISDs generally have poor reliability and are sensitive to insufficient measurement occasions, poor test reliability, and unfavorable amounts and distributions of variability in the population. Secondary analysis of psychological data shows that use of individual standard deviations in unfavorable conditions leads to a marked reduction in statistical power, although careful adherence to underlying statistical assumptions allows their use as a basic research tool. PMID:22268793
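The core of such a simulation fits in a few lines: give each simulated subject a true within-person SD, compute ISDs from two parallel sets of occasions, and take their correlation as a reliability estimate. Parameter values below are illustrative, not those of the study:

```python
import numpy as np

def isd_reliability(n_subj=200, n_occ=10, n_rep=500, spread=0.3, seed=9):
    """Parallel-forms reliability of the individual standard deviation (ISD)."""
    rng = np.random.default_rng(seed)
    rels = []
    for _ in range(n_rep):
        true_sd = rng.lognormal(0.0, spread, n_subj)  # between-person spread of ISDs
        a = rng.normal(0.0, true_sd[:, None], (n_subj, n_occ))
        b = rng.normal(0.0, true_sd[:, None], (n_subj, n_occ))
        rels.append(np.corrcoef(a.std(axis=1, ddof=1), b.std(axis=1, ddof=1))[0, 1])
    return float(np.mean(rels))

print(isd_reliability(n_occ=5))    # few occasions -> lower reliability
print(isd_reliability(n_occ=30))   # more occasions -> higher reliability
```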
Lin, P.-S.; Chiou, B.; Abrahamson, N.; Walling, M.; Lee, C.-T.; Cheng, C.-T.
2011-01-01
In this study, we quantify the reduction in the standard deviation for empirical ground-motion prediction models by removing the ergodic assumption. We partition the modeling error (residual) into five components, three of which represent the repeatable source-location-specific, site-specific, and path-specific deviations from the population mean. A variance estimation procedure for these error components is developed for use with a set of recordings from earthquakes not heavily clustered in space. With most source locations and propagation paths sampled only once, we opt to exploit the spatial correlation of residuals to estimate the variances associated with the path-specific and the source-location-specific deviations. The estimation procedure is applied to ground-motion amplitudes from 64 shallow earthquakes in Taiwan recorded at 285 sites with at least 10 recordings per site. The estimated variance components are used to quantify the reduction in aleatory variability that can be used in hazard analysis for a single site and for a single path. For peak ground acceleration and spectral accelerations at periods of 0.1, 0.3, 0.5, 1.0, and 3.0 s, we find that the single-site standard deviations are 9%-14% smaller than the total standard deviation, whereas the single-path standard deviations are 39%-47% smaller.
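The arithmetic behind such reductions is that removing a repeatable term removes its variance from the total. A minimal sketch with illustrative variance components (chosen only to land near the reported ranges, not estimated from the paper's data):

```python
import numpy as np

# Illustrative values only (natural-log units typical of GMPE residuals).
sigma_total = 0.70     # total aleatory standard deviation
tau_site = 0.33        # SD of the repeatable site-specific term
tau_path = 0.47        # SD of the repeatable path-specific term

# Removing a repeatable term removes its variance from the total.
sigma_single_site = np.sqrt(sigma_total**2 - tau_site**2)
sigma_single_path = np.sqrt(sigma_total**2 - tau_site**2 - tau_path**2)

print(f"single-site reduction: {100 * (1 - sigma_single_site / sigma_total):.0f}%")
print(f"single-path reduction: {100 * (1 - sigma_single_path / sigma_total):.0f}%")
```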
Analysis of in-flight acoustic data for a twin-engined turboprop airplane
NASA Technical Reports Server (NTRS)
Wilby, J. F.; Wilby, E. G.
1988-01-01
Acoustic measurements were made on the exterior and interior of a general aviation turboprop airplane during four flight tests. The test conditions were carefully controlled and repeated for each flight in order to determine data variability. For the first three flights the cabin was untreated, and for the fourth flight the fuselage was treated with glass fiber batts. On the exterior, measured propeller harmonic sound pressure levels showed typical standard deviations of +1.4 dB, -2.3 dB, and turbulent boundary layer pressure levels +1.2 dB, -1.6 dB. Propeller harmonic levels in the cabin showed greater variability, with typical standard deviations of +2.0 dB, -4.2 dB. When interior sound pressure levels from different flights with different cabin treatments were used to evaluate insertion loss, the standard deviations were typically plus or minus 6.5 dB. This is due in part to the variability of the sound pressure level measurements, but is probably also influenced by changes in the modal characteristics of the cabin. Recommendations are made for the planning and performance of future flight tests to measure interior noise of propeller-driven aircraft, whether high-speed advanced turboprops or general aviation propellers.
Evaluation of Two New Indices of Blood Pressure Variability Using Postural Change in Older Fallers.
Goh, Choon-Hian; Ng, Siew-Cheok; Kamaruzzaman, Shahrul B; Chin, Ai-Vyrn; Poi, Philip J H; Chee, Kok Han; Imran, Z Abidin; Tan, Maw Pin
2016-05-01
To evaluate the utility of blood pressure variability (BPV) calculated using previously published and newly introduced indices, using falls and age as comparator variables. While postural hypotension has long been considered a risk factor for falls, there is currently no documented evidence on the relationship between BPV and falls. A case-control study involving 25 fallers and 25 nonfallers was conducted. Systolic (SBPV) and diastolic blood pressure variability (DBPV) were assessed using 5 indices: standard deviation (SD), standard deviation of the most stable continuous 120 beats (staSD), average real variability (ARV), root mean square of real variability (RMSRV), and standard deviation of real variability (SDRV). Continuous beat-to-beat blood pressure was recorded during 10 minutes' supine rest and 3 minutes' standing. Standing SBPV was significantly higher than supine SBPV using 4 indices in both groups. The standing-to-supine-BPV ratio (SSR) was then computed for each subject (staSD, ARV, RMSRV, and SDRV). The standing-to-supine ratio for SBPV was significantly higher among fallers compared to nonfallers using RMSRV and SDRV (P = 0.034 and P = 0.025). Using linear discriminant analysis (LDA), 3 indices (ARV, RMSRV, and SDRV) of SSR SBPV provided accuracies of 61.6%, 61.2%, and 60.0% for the prediction of falls, comparable with the timed up and go (TUG) test at 64.4%. This study suggests that SSR SBPV using RMSRV and SDRV is a potential predictor of falls among older patients, and deserves further evaluation in larger prospective studies.
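ARV has an established definition (the mean absolute successive difference). RMSRV and SDRV are introduced as new indices in this paper, so the successive-difference readings below are assumptions, not the authors' exact formulas. A minimal sketch:

```python
import numpy as np

def bpv_indices(bp):
    """Beat-to-beat blood pressure variability indices.

    ARV is the established average real variability (mean |successive
    difference|). RMSRV and SDRV are interpreted here as the RMS and the
    SD of the successive differences -- an assumption, since the paper
    introduces them as new indices.
    """
    bp = np.asarray(bp, dtype=float)
    d = np.diff(bp)
    return {
        "SD": bp.std(ddof=1),
        "ARV": np.mean(np.abs(d)),
        "RMSRV": np.sqrt(np.mean(d**2)),
        "SDRV": d.std(ddof=1),
    }

def standing_to_supine_ratio(standing_bp, supine_bp, index="RMSRV"):
    """SSR: ratio of a standing BPV index to the same supine index."""
    return bpv_indices(standing_bp)[index] / bpv_indices(supine_bp)[index]
```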
Hendriks, A Jan; Awkerman, Jill A; de Zwart, Dick; Huijbregts, Mark A J
2013-11-01
While variable sensitivity of model species to common toxicants has been addressed in previous studies, a systematic analysis of inter-species variability for different test types, modes of action and species is as yet lacking. Hence, the aim of the present study was to identify similarities and differences in contaminant levels affecting cold-blooded and warm-blooded species administered via different routes. To that end, data on lethal water concentrations LC50, tissue residues LR50 and oral doses LD50 were collected from databases, each representing the largest of its kind. LC50 data were multiplied by a bioconcentration factor (BCF) to convert them to internal concentrations that allow for comparison among species. For each endpoint data set, we calculated the mean and standard deviation of species' lethal levels per compound. Next, the means and standard deviations were averaged by mode of action. Both the means and standard deviations calculated depended on the number of species tested, which is at odds with quality standard setting procedures. Means calculated from (BCF) LC50, LR50 and LD50 were largely similar, suggesting that different administration routes yield roughly similar internal levels. Levels for compounds interfering biochemically with elementary life processes were about one order of magnitude below those of narcotics disturbing membranes, and neurotoxic pesticides and dioxins induced death in even lower amounts. Standard deviations for LD50 data were similar across modes of action, while variability of LC50 values was lower for narcotics than for substances with a specific mode of action. The study indicates several directions for efficient use of available data in risk assessment and reduction of species testing. Copyright © 2013 Elsevier Inc. All rights reserved.
Briehl, Margaret M; Nelson, Mark A; Krupinski, Elizabeth A; Erps, Kristine A; Holcomb, Michael J; Weinstein, John B; Weinstein, Ronald S
2016-01-01
Faculty members from the Department of Pathology at The University of Arizona College of Medicine-Tucson have offered a 4-credit course on enhanced general pathology for graduate students since 1996. The course is titled, "Mechanisms of Human Disease." Between 1997 and 2016, 270 graduate students completed Mechanisms of Human Disease. The students came from 21 programs of study. Analysis of Variance, using course grade as the dependent variable and degree, program, gender, and year (1997-2016) as independent variables, indicated that there was no significant difference in final grade (F = 0.112; P = .8856) as a function of degree (doctorate: mean = 89.60, standard deviation = 5.75; master's: mean = 89.34, standard deviation = 6.00; certificate program: mean = 88.64, standard deviation = 8.25), specific type of degree program (F = 2.066, P = .1316; life sciences: mean = 89.95, standard deviation = 6.40; pharmaceutical sciences: mean = 90.71, standard deviation = 4.57; physical sciences: mean = 87.79, standard deviation = 5.17), or as a function of gender (F = 2.96, P = .0865; males: mean = 88.09, standard deviation = 8.36; females: mean = 89.58, standard deviation = 5.82). Students in the physical and life sciences performed equally well. Mechanisms of Human Disease is a popular course that provides students enrolled in a variety of graduate programs with a medical school-based course on mechanisms of diseases. The addition of 2 new medically oriented Master of Science degree programs has nearly tripled enrollment. This graduate level course also potentially expands the interdisciplinary diversity of participants in our interprofessional education and collaborative practice exercises.
NASA Astrophysics Data System (ADS)
Rock, N. M. S.
ROBUST calculates 53 statistics, plus significance levels for 6 hypothesis tests, on each of up to 52 variables. These together allow the following properties of the data distribution for each variable to be examined in detail: (1) Location. Three means (arithmetic, geometric, harmonic) are calculated, together with the midrange and 19 high-performance robust L-, M-, and W-estimates of location (combined, adaptive, trimmed estimates, etc.). (2) Scale. The standard deviation is calculated along with the H-spread/2 (≈ semi-interquartile range), the mean and median absolute deviations from both mean and median, and a biweight scale estimator. The 23 location and 6 scale estimators programmed cover all possible degrees of robustness. (3) Normality. Distributions are tested against the null hypothesis that they are normal, using the 3rd (√b1) and 4th (b2) moments, Geary's ratio (mean deviation/standard deviation), Filliben's probability plot correlation coefficient, and a more robust test based on the biweight scale estimator. These statistics collectively are sensitive to most usual departures from normality. (4) Presence of outliers. The maximum and minimum values are assessed individually or jointly using Grubbs' maximum Studentized residuals, Harvey's and Dixon's criteria, and the Studentized range. For a single input variable, outliers can be either winsorized or eliminated, and all estimates recalculated iteratively as desired. The following data transformations can also be applied: linear, log10, generalized Box-Cox power (including log, reciprocal, and square root), exponentiation, and standardization. For more than one variable, all results are tabulated in a single run of ROBUST. Further options are incorporated to assess ratios (of two variables) as well as discrete variables, and to handle missing data. Cumulative S-plots (for assessing normality graphically) can also be generated. The mutual consistency or inconsistency of all these measures helps to detect errors in data as well as to assess the data distributions themselves.
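A few of the scale and normality measures named above have compact definitions; the sketch below covers only a small subset of ROBUST's statistics, and the function name is illustrative.

```python
import numpy as np

def scale_and_normality_summary(x):
    """A handful of ROBUST-style scale estimators plus Geary's ratio."""
    x = np.asarray(x, dtype=float)
    sd = x.std(ddof=1)
    mad_median = np.median(np.abs(x - np.median(x)))  # median abs. deviation
    mean_abs_dev = np.mean(np.abs(x - x.mean()))      # mean abs. deviation
    q1, q3 = np.percentile(x, [25, 75])
    semi_iqr = (q3 - q1) / 2                          # ~ H-spread / 2
    geary = mean_abs_dev / sd     # ~ sqrt(2/pi) = 0.798 under normality
    return {"SD": sd, "MAD": mad_median, "meanAD": mean_abs_dev,
            "semi-IQR": semi_iqr, "Geary": geary}
```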
NASA Astrophysics Data System (ADS)
Huang, Dong; Campos, Edwin; Liu, Yangang
2014-09-01
Statistical characteristics of cloud variability are examined for their dependence on averaging scale and for the probability density functions that best represent them, using the decade-long retrieval products of cloud liquid water path (LWP) from the tropical western Pacific (TWP), Southern Great Plains (SGP), and North Slope of Alaska (NSA) sites of the Department of Energy's Atmospheric Radiation Measurement Program. The statistical moments of LWP show some seasonal variation at the SGP and NSA sites but not much at the TWP site. It is found that the standard deviation, relative dispersion (the ratio of the standard deviation to the mean), and skewness all increase quickly with the averaging window size when the window is small and become more or less flat when the window size exceeds 12 h. On average, the cloud LWP at the TWP site has the largest values of standard deviation, relative dispersion, and skewness, whereas the NSA site exhibits the smallest. Correlation analysis shows that there is a positive correlation between the mean LWP and the standard deviation. The skewness is found to be closely related to the relative dispersion, with a correlation coefficient of 0.6. The comparison further shows that the lognormal, Weibull, and gamma distributions reasonably explain the observed relationship between skewness and relative dispersion over a wide range of scales.
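A minimal sketch of the windowed-moment computation, assuming a regularly sampled LWP series with gaps ignored; the block boundaries and textbook moment formulas are illustrative, not necessarily the paper's exact procedure.

```python
import numpy as np

def windowed_moments(lwp, window):
    """Mean, SD, relative dispersion and skewness of an LWP series in
    blocks of `window` consecutive samples (illustrative sketch)."""
    lwp = np.asarray(lwp, dtype=float)
    n = len(lwp) // window
    blocks = lwp[:n * window].reshape(n, window)
    mean = blocks.mean(axis=1)
    sd = blocks.std(axis=1, ddof=1)
    disp = sd / mean                                  # relative dispersion
    skew = (((blocks - mean[:, None]) / sd[:, None])**3).mean(axis=1)
    return mean, sd, disp, skew
```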
PERFORMANCE OF TRICKLING FILTER PLANTS: RELIABILITY, STABILITY, VARIABILITY
Effluent quality variability from trickling filters was examined in this study by statistically analyzing daily effluent BOD5 and suspended solids data from 11 treatment plants. Summary statistics (mean, standard deviation, etc.) were examined to determine the general characteris...
A Computer Program for Preliminary Data Analysis
Dennis L. Schweitzer
1967-01-01
ABSTRACT. -- A computer program written in FORTRAN has been designed to summarize data. Class frequencies, means, and standard deviations are printed for as many as 100 independent variables. Cross-classifications of an observed dependent variable and of a dependent variable predicted by a multiple regression equation can also be generated.
Scenarios for Motivating the Learning of Variability: An Example in Finances
ERIC Educational Resources Information Center
Cordani, Lisbeth K.
2013-01-01
This article explores an example in finance in order to motivate the learning of random variables for beginners in statistics. In addition, it offers a relationship between the standard deviation and the range in a very specific situation.
NASA Astrophysics Data System (ADS)
Wang, Tao; Zhou, Guoqing; Wang, Jianzhou; Zhou, Lei
2018-03-01
The artificial ground freezing (AGF) method is widely used in civil and mining engineering, and the thermal regime of frozen soil around the freezing pipe affects the safety of design and construction. The thermal parameters can be truly random due to the heterogeneity of soil properties, which leads to randomness in the thermal regime of frozen soil around the freezing pipe. The purpose of this paper is to study the one-dimensional (1D) random thermal regime problem on the basis of a stochastic analysis model and the Monte Carlo (MC) method. Modeling the uncertain thermal parameters of frozen soil as random variables, stochastic processes and random fields, the corresponding stochastic thermal regimes of frozen soil around a single freezing pipe are obtained and analyzed. Taking the variability of each stochastic parameter into account individually, the influence of each stochastic thermal parameter on the stochastic thermal regime is investigated. The results show that the mean temperatures of frozen soil around the single freezing pipe are the same for the three analogy methods, while the standard deviations differ. The distributions of standard deviation differ greatly across radial coordinate locations, and the larger standard deviations occur mainly in the phase change area. The temperatures computed with the random variable and stochastic process methods differ greatly from the measured data, while those computed with the random field method agree well with the measured data. Each uncertain thermal parameter has a different effect on the standard deviation of the frozen soil temperature around the single freezing pipe. These results can provide a theoretical basis for the design and construction of AGF.
Comparison of Predictive Modeling Methods of Aircraft Landing Speed
NASA Technical Reports Server (NTRS)
Diallo, Ousmane H.
2012-01-01
Expected increases in air traffic demand have stimulated the development of air traffic control tools intended to assist the air traffic controller in accurately and precisely spacing aircraft landing at congested airports. Such tools will require an accurate landing-speed prediction to increase throughput while decreasing the controller interventions necessary to avoid separation violations. There are many practical challenges to developing an accurate landing-speed model that has acceptable prediction errors. This paper discusses the development of a near-term implementation, using readily available information, to estimate/model final approach speed from the top of the descent phase of flight to the landing runway. As a first approach, all variables found to contribute directly to the landing-speed prediction model are used to build a multi-regression technique of the response surface equation (RSE). Data obtained from the operations of a major airline for a passenger transport aircraft type at the Dallas/Fort Worth International Airport are used to predict the landing speed. The approach was promising because it decreased the standard deviation of the landing-speed prediction error by at least 18% from the standard deviation of the baseline error, depending on the gust condition at the airport. However, when the number of variables is reduced to those most likely obtainable at other major airports, the RSE model shows little improvement over the existing methods. Consequently, a neural network that relies on a nonlinear regression technique is utilized as an alternative modeling approach. For the reduced-variable cases, the standard deviation of the neural network model's errors represents more than a 5% reduction compared to the RSE model errors, and at least a 10% reduction over the baseline predicted landing-speed error standard deviation. Overall, the constructed models predict the landing speed more accurately and precisely than the current state-of-the-art.
9 CFR 439.20 - Criteria for maintaining accreditation.
Code of Federal Regulations, 2011 CFR
2011-01-01
... deviation measure equal to zero when the absolute value of the result's standardized difference, (d), is...) Variability: The absolute value of the standardized difference between the accredited laboratory's result and... constant, is used in place of the absolute value of the standardized difference to determine the CUSUM-V...
9 CFR 439.20 - Criteria for maintaining accreditation.
Code of Federal Regulations, 2013 CFR
2013-01-01
... deviation measure equal to zero when the absolute value of the result's standardized difference, (d), is...) Variability: The absolute value of the standardized difference between the accredited laboratory's result and... constant, is used in place of the absolute value of the standardized difference to determine the CUSUM-V...
9 CFR 439.20 - Criteria for maintaining accreditation.
Code of Federal Regulations, 2012 CFR
2012-01-01
... deviation measure equal to zero when the absolute value of the result's standardized difference, (d), is...) Variability: The absolute value of the standardized difference between the accredited laboratory's result and... constant, is used in place of the absolute value of the standardized difference to determine the CUSUM-V...
9 CFR 439.20 - Criteria for maintaining accreditation.
Code of Federal Regulations, 2014 CFR
2014-01-01
... deviation measure equal to zero when the absolute value of the result's standardized difference, (d), is...) Variability: The absolute value of the standardized difference between the accredited laboratory's result and... constant, is used in place of the absolute value of the standardized difference to determine the CUSUM-V...
9 CFR 439.20 - Criteria for maintaining accreditation.
Code of Federal Regulations, 2010 CFR
2010-01-01
... deviation measure equal to zero when the absolute value of the result's standardized difference, (d), is...) Variability: The absolute value of the standardized difference between the accredited laboratory's result and... constant, is used in place of the absolute value of the standardized difference to determine the CUSUM-V...
Farabi, Sarah S; Carley, David W; Smith, Donald; Quinn, Lauretta
2015-09-01
We measured the effects of a single bout of exercise on diurnal and nocturnal oxidative stress and glycaemic variability in obese subjects with type 2 diabetes mellitus or impaired glucose tolerance versus obese healthy controls. Subjects (in random order) performed either a single 30-min bout of moderate-intensity exercise or remained sedentary for 30 min at two separate visits. To quantify glycaemic variability, the standard deviation of glucose (measured by a continuous glucose monitoring system) and the continuous overlapping net glycaemic action of 1-h intervals (CONGA-1) were calculated for three 12-h intervals during each visit. Oxidative stress was measured by 15-isoprostane F(2t) levels in urine collections for matching 12-h intervals. Exercise reduced daytime glycaemic variability (ΔCONGA-1 = -12.62 ± 5.31 mg/dL, p = 0.04) and urinary isoprostanes (Δ = -0.26 ± 0.12 ng/mg, p = 0.04) in the type 2 diabetes mellitus/impaired glucose tolerance group. The daytime exercise-induced change in urinary 15-isoprostane F(2t) was significantly correlated with both the daytime standard deviation (r = 0.68, p = 0.03) and the subsequent overnight standard deviation (r = 0.73, p = 0.027) in the type 2 diabetes mellitus/impaired glucose tolerance group. Exercise significantly impacts the relationship between diurnal oxidative stress and nocturnal glycaemic variability in individuals with type 2 diabetes mellitus/impaired glucose tolerance. © The Author(s) 2015.
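CONGA-n is commonly defined as the standard deviation of the differences between each glucose reading and the reading n hours earlier. A minimal sketch under that definition, assuming a continuous glucose monitor sampling on a fixed 5-min grid (an assumption, not stated in the abstract):

```python
import numpy as np

def conga(glucose, minutes_apart=60, sample_interval=5):
    """CONGA-n: SD of differences between each glucose reading and the
    reading n minutes earlier (commonly cited definition; fixed-grid
    sampling every `sample_interval` minutes is assumed)."""
    g = np.asarray(glucose, dtype=float)
    lag = minutes_apart // sample_interval
    d = g[lag:] - g[:-lag]
    return d.std(ddof=1)
```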
Zhao, Pengxiang; Zhou, Suhong
2018-01-01
Traditionally, static units of analysis such as administrative units are used when studying obesity. However, using these fixed contextual units ignores environmental influences experienced by individuals in areas beyond their residential neighborhood and may render the results unreliable. This problem has been articulated as the uncertain geographic context problem (UGCoP). This study investigates the UGCoP through exploring the relationships between the built environment and obesity based on individuals’ activity space. First, a survey was conducted to collect individuals’ daily activity and weight information in Guangzhou in January 2016. Then, the data were used to calculate and compare the values of several built environment variables based on seven activity space delineations, including home buffers, workplace buffers (WPB), fitness place buffers (FPB), the standard deviational ellipse at two standard deviations (SDE2), the weighted standard deviational ellipse at two standard deviations (WSDE2), the minimum convex polygon (MCP), and road network buffers (RNB). Lastly, we conducted comparative analysis and regression analysis based on different activity space measures. The results indicate that significant differences exist between variables obtained with different activity space delineations. Further, regression analyses show that the activity space delineations used in the analysis have a significant influence on the results concerning the relationships between the built environment and obesity. The study sheds light on the UGCoP in analyzing the relationships between obesity and the built environment. PMID:29439392
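Among the activity-space delineations listed, the standard deviational ellipse has a compact construction. The eigen-decomposition variant below, and the handling of weights for the weighted version, are common conventions and are assumptions here, not the paper's exact procedure.

```python
import numpy as np

def sd_ellipse(x, y, n_sd=2, weights=None):
    """Standard deviational ellipse of point coordinates.

    Returns (center, semi-axis lengths, rotation angle in radians).
    SDE2/WSDE2 correspond to n_sd=2 without/with weights; this
    covariance eigen-decomposition is one common SDE construction.
    """
    pts = np.column_stack([x, y]).astype(float)
    w = np.ones(len(pts)) if weights is None else np.asarray(weights, float)
    center = np.average(pts, axis=0, weights=w)
    cov = np.cov((pts - center).T, aweights=w)
    eigval, eigvec = np.linalg.eigh(cov)
    axes = n_sd * np.sqrt(eigval)                 # semi-axis lengths
    angle = np.arctan2(eigvec[1, -1], eigvec[0, -1])  # major-axis direction
    return center, axes, angle
```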
Mavilio, Alberto; Sisto, Dario; Ferreri, Paolo; Cardascia, Nicola; Alessio, Giovanni
2017-01-01
A significant variability of the second harmonic (2ndH) phase of the steady-state pattern electroretinogram (SS-PERG) on intrasession retest has recently been described in glaucoma patients (GP), which has not been found in healthy subjects. To evaluate the reliability of phase variability on retest (a procedure called RE-PERG or REPERG) in the presence of cataract, which is known to affect the standard PERG, we tested this procedure in GP, normal controls (NC), and cataract patients (CP). The procedure was performed on 50 GP, 35 NC, and 27 CP. All subjects were examined with RE-PERG and SS-PERG and also with spectral domain optical coherence tomography and standard automated perimetry. The standard deviation of the phase and the amplitude value of the 2ndH were correlated, by means of one-way analysis of variance and Pearson correlation, with the mean deviation and pattern standard deviation assessed by standard automated perimetry and with the retinal nerve fiber layer and ganglion cell complex thickness assessed by spectral domain optical coherence tomography. Receiver operating characteristics were calculated in cohort populations with and without cataract. The standard deviation of the phase of the 2ndH was significantly higher in GP with respect to NC (P < 0.001) and CP (P < 0.001), and it correlated with retinal nerve fiber layer (r = -0.5, P < 0.001) and ganglion cell complex (r = -0.6, P < 0.001) defects in GP. Receiver operating characteristic evaluation showed higher specificity of RE-PERG (86.4%; area under the curve 0.93) with respect to SS-PERG (54.5%; area under the curve 0.68) in CP. RE-PERG may improve the specificity of SS-PERG in clinical practice in the discrimination of GP.
Polarimetric measures of selected variable stars
NASA Astrophysics Data System (ADS)
Elias, N. M., II; Koch, R. H.; Pfeiffer, R. J.
2008-10-01
Aims: The purpose of this paper is to summarize and interpret unpublished optical polarimetry for numerous program stars that were observed over the past decades at the Flower and Cook Observatory (FCO), University of Pennsylvania. We also make the individual calibrated measures available for long-term comparisons with new data. Methods: We employ three techniques to search for intrinsic variability within each dataset. First, when the observations for a given star and filter are numerous enough and when a period has been determined previously via photometry or spectroscopy, the polarimetric measures are plotted versus phase. If a statistically significant pattern appears, we attribute it to intrinsic variability. Second, we compare means of the FCO data to means from other workers. If they are statistically different, we conclude that the object exhibits long-term intrinsic variability. Third, we calculate the standard deviation for each program star and filter and compare it to the standard deviation estimated from comparable polarimetric standards. If the standard deviation of the program star is at least three times the value estimated from the polarimetric standards, the former is considered intrinsically variable. All of these statements are strengthened when variability appears in multiple filters. Results: We confirm the existence of an electron-scattering cloud at L1 in the β Per system, and find that LY Aur and HR 8281 possess scattering envelopes. Intrinsic polarization was detected for Nova Cas 1993 as early as day +3. We detected polarization variability near the primary eclipse of 32 Cyg. There is marginal evidence for polarization variability of the β Cephei-type star γ Peg. The other objects of this class exhibited no variability. All but one of the β Cephei objects (ES Vul) fall on a tight linear relationship between linear polarization and E(B-V), in spite of the fact that the stars lie along different lines of sight. This dependence falls slightly below the classical upper limit of Serkowski, Mathewson, and Ford. The table, which contains the polarization observations of the program stars discussed in this paper, is only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/489/911
High-Throughput RNA Interference Screening: Tricks of the Trade
Nebane, N. Miranda; Coric, Tatjana; Whig, Kanupriya; McKellip, Sara; Woods, LaKeisha; Sosa, Melinda; Sheppard, Russell; Rasmussen, Lynn; Bjornsti, Mary-Ann; White, E. Lucile
2016-01-01
The process of validating an assay for high-throughput screening (HTS) involves identifying sources of variability and developing procedures that minimize the variability at each step in the protocol. The goal is to produce a robust and reproducible assay with good metrics. In all good cell-based assays, this means coefficient of variation (CV) values of less than 10% and a signal window of fivefold or greater. HTS assays are usually evaluated using Z′ factor, which incorporates both standard deviation and signal window. A Z′ factor value of 0.5 or higher is acceptable for HTS. We used a standard HTS validation procedure in developing small interfering RNA (siRNA) screening technology at the HTS center at Southern Research. Initially, our assay performance was similar to published screens, with CV values greater than 10% and Z′ factor values of 0.51 ± 0.16 (average ± standard deviation). After optimizing the siRNA assay, we got CV values averaging 7.2% and a robust Z′ factor value of 0.78 ± 0.06 (average ± standard deviation). We present an overview of the problems encountered in developing this whole-genome siRNA screening program at Southern Research and how equipment optimization led to improved data quality. PMID:23616418
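The metrics quoted here follow standard definitions; in particular, the Z'-factor combines the control means and standard deviations in a single figure of merit. A minimal sketch computing CV, fold signal window, and Z' from plate control wells (function and variable names are illustrative):

```python
import numpy as np

def assay_metrics(pos, neg):
    """CV (%), fold signal window, and Z'-factor from positive- and
    negative-control well readouts."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    cv_pos = 100 * pos.std(ddof=1) / pos.mean()      # coefficient of variation
    window = pos.mean() / neg.mean()                 # fold signal window
    z_prime = 1 - 3 * (pos.std(ddof=1) + neg.std(ddof=1)) \
                  / abs(pos.mean() - neg.mean())     # Z'-factor
    return cv_pos, window, z_prime
```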
Zhu, Y Q; Long, Q; Xiao, Q F; Zhang, M; Wei, Y L; Jiang, H; Tang, B
2018-03-13
Objective: To investigate the association between blood pressure variability and sleep stability in essential hypertensive patients with sleep disorder, using cardiopulmonary coupling. Methods: According to strict inclusion and exclusion criteria, 88 new cases of essential hypertension were enrolled from the international department and the cardiology department of the China-Japan Friendship Hospital. Sleep stability and 24 h ambulatory blood pressure data were collected with a portable sleep monitor based on the cardiopulmonary coupling technique and a 24 h ambulatory blood pressure monitor, and the correlation between blood pressure variability and sleep stability was analyzed. Results: In the nighttime, the systolic blood pressure standard deviation, systolic blood pressure variation coefficient, ratio of the systolic blood pressure minimum to maximum, diastolic blood pressure standard deviation and diastolic blood pressure variation coefficient were positively correlated with unstable sleep duration (r = 0.185, 0.24, 0.237, 0.43, 0.276, P < 0.05). Conclusions: Blood pressure variability is associated with sleep stability, especially at night: the longer the unstable sleep duration, the greater the variability in night blood pressure.
Variability estimation of urban wastewater biodegradable fractions by respirometry.
Lagarde, Fabienne; Tusseau-Vuillemin, Marie-Hélène; Lessard, Paul; Héduit, Alain; Dutrop, François; Mouchel, Jean-Marie
2005-11-01
This paper presents a methodology for assessing the variability of biodegradable chemical oxygen demand (COD) fractions in urban wastewaters. Thirteen raw wastewater samples from combined and separate sewers feeding the same plant were characterised, and two optimisation procedures were applied in order to evaluate the variability in biodegradable fractions and related kinetic parameters. Through an overall optimisation on all the samples, a unique kinetic parameter set was obtained with a three-substrate model including an adsorption stage. This method required powerful numerical treatment, but reduced the identifiability problem compared to the usual sample-to-sample optimisation. The results showed that the fractionation of samples collected in the combined sewer was much more variable (standard deviation of 70% of the mean values) than the fractionation of the separate sewer samples, and that the slowly biodegradable COD fraction was the largest fraction (45% of the total COD on average). Because these samples were collected under various rain conditions, the standard deviations obtained here for the combined sewer biodegradable fractions could be used as a first estimate of the variability of this type of sewer system.
Autonomic regulation in fetuses with Congenital Heart Disease
Siddiqui, Saira; Wilpers, Abigail; Myers, Michael; Nugent, J. David; Fifer, William P.; Williams, Ismée A.
2015-01-01
Background Exposure to antenatal stressors affects autonomic regulation in fetuses. Whether the presence of congenital heart disease (CHD) alters the developmental trajectory of autonomic regulation is not known. Aims/Study Design This prospective observational cohort study aimed to further characterize autonomic regulation in fetuses with CHD; specifically hypoplastic left heart syndrome (HLHS), transposition of the great arteries (TGA), and tetralogy of Fallot (TOF). Subjects From 11/2010 – 11/2012, 92 fetuses were enrolled: 41 controls and 51 with CHD consisting of 19 with HLHS, 12 with TGA, and 20 with TOF. Maternal abdominal fetal electrocardiogram (ECG) recordings were obtained at 3 gestational ages: 19-27 weeks (F1), 28-33 weeks (F2), and 34-38 weeks (F3). Outcome measures Fetal ECG was analyzed for mean heart rate along with 3 measures of autonomic variability of the fetal heart rate: interquartile range, standard deviation, and root mean square of successive differences of the heart rate (RMSSD), a measure of parasympathetic activity. Results During F1 and F2 periods, HLHS fetuses demonstrated significantly lower mean HR than controls (p<0.05). Heart rate variability at F3, as measured by standard deviation, interquartile range, and RMSSD, was lower in HLHS than controls (p<0.05). Other CHD subgroups showed a similar, though non-significant, trend towards lower variability. Conclusions Autonomic regulation in CHD fetuses differs from controls, with HLHS fetuses most markedly affected. PMID:25662702
Autonomic regulation in fetuses with congenital heart disease.
Siddiqui, Saira; Wilpers, Abigail; Myers, Michael; Nugent, J David; Fifer, William P; Williams, Ismée A
2015-03-01
Exposure to antenatal stressors affects autonomic regulation in fetuses. Whether the presence of congenital heart disease (CHD) alters the developmental trajectory of autonomic regulation is not known. This prospective observational cohort study aimed to further characterize autonomic regulation in fetuses with CHD; specifically hypoplastic left heart syndrome (HLHS), transposition of the great arteries (TGA), and tetralogy of Fallot (TOF). From 11/2010 to 11/2012, 92 fetuses were enrolled: 41 controls and 51 with CHD consisting of 19 with HLHS, 12 with TGA, and 20 with TOF. Maternal abdominal fetal electrocardiogram (ECG) recordings were obtained at 3 gestational ages: 19-27 weeks (F1), 28-33 weeks (F2), and 34-38 weeks (F3). Fetal ECG was analyzed for mean heart rate along with 3 measures of autonomic variability of the fetal heart rate: interquartile range, standard deviation, and root mean square of successive differences of the heart rate (RMSSD), a measure of parasympathetic activity. During F1 and F2 periods, HLHS fetuses demonstrated significantly lower mean HR than controls (p<0.05). Heart rate variability at F3, as measured by standard deviation, interquartile range, and RMSSD, was lower in HLHS than controls (p<0.05). Other CHD subgroups showed a similar, though non-significant, trend towards lower variability. Autonomic regulation in CHD fetuses differs from controls, with HLHS fetuses most markedly affected. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Leka, K. D.; Barnes, G.
2003-10-01
We apply statistical tests based on discriminant analysis to the wide range of photospheric magnetic parameters described in a companion paper by Leka & Barnes, with the goal of identifying those properties that are important for the production of energetic events such as solar flares. The photospheric vector magnetic field data from the University of Hawai'i Imaging Vector Magnetograph are well sampled both temporally and spatially, and we include here data covering 24 flare-event and flare-quiet epochs taken from seven active regions. The mean value and rate of change of each magnetic parameter are treated as separate variables, thus evaluating both the parameter's state and its evolution, to determine which properties are associated with flaring. Considering single variables first, Hotelling's T²-tests show small statistical differences between flare-producing and flare-quiet epochs. Even pairs of variables considered simultaneously, which do show a statistical difference for a number of properties, have high error rates, implying a large degree of overlap of the samples. To better distinguish between flare-producing and flare-quiet populations, larger numbers of variables are simultaneously considered; lower error rates result, but no unique combination of variables is clearly the best discriminator. The sample size is too small to directly compare the predictive power of large numbers of variables simultaneously. Instead, we rank all possible four-variable permutations based on Hotelling's T²-test and look for the most frequently appearing variables in the best permutations, with the interpretation that they are most likely to be associated with flaring. These variables include an increasing kurtosis of the twist parameter and a larger standard deviation of the twist parameter, but a smaller standard deviation of the distribution of the horizontal shear angle and a horizontal field that has a smaller standard deviation but a larger kurtosis. To support the "sorting all permutations" method of selecting the most frequently occurring variables, we show that the results of a single 10-variable discriminant analysis are consistent with the ranking. We demonstrate that individually, the variables considered here have little ability to differentiate between flaring and flare-quiet populations, but with multivariable combinations, the populations may be distinguished.
Directional Dependence in Developmental Research
ERIC Educational Resources Information Center
von Eye, Alexander; DeShon, Richard P.
2012-01-01
In this article, we discuss and propose methods that may be of use to determine direction of dependence in non-normally distributed variables. First, it is shown that standard regression analysis is unable to distinguish between explanatory and response variables. Then, skewness and kurtosis are discussed as tools to assess deviation from…
Using Derivative Estimates to Describe Intraindividual Variability at Multiple Time Scales
ERIC Educational Resources Information Center
Deboeck, Pascal R.; Montpetit, Mignon A.; Bergeman, C. S.; Boker, Steven M.
2009-01-01
The study of intraindividual variability is central to the study of individuals in psychology. Previous research has related the variance observed in repeated measurements (time series) of individuals to traitlike measures that are logically related. Intraindividual measures, such as intraindividual standard deviation or the coefficient of…
Mehta, Amar J.; Kloog, Itai; Zanobetti, Antonella; Coull, Brent A.; Sparrow, David; Vokonas, Pantel; Schwartz, Joel
2014-01-01
Background The underlying mechanisms of the association between ambient temperature and cardiovascular morbidity and mortality are not well understood, particularly for daily temperature variability. We evaluated whether daily mean temperature and the standard deviation of temperature were associated with heart rate-corrected QT interval (QTc) duration, a marker of ventricular repolarization, in a prospective cohort of older men. Methods This longitudinal analysis included 487 older men participating in the VA Normative Aging Study with up to three visits between 2000–2008 (n = 743). We analyzed associations between QTc and moving averages (1–7, 14, 21, and 28 days) of the 24-hour mean and standard deviation of temperature as measured from a local weather monitor, and the 24-hour mean temperature estimated from a spatiotemporal prediction model, in time-varying linear mixed-effect regressions. Effect modification by season, diabetes, coronary heart disease, obesity, and age was also evaluated. Results Higher mean temperature, as measured from the local monitor and estimated from the prediction model, was associated with longer QTc at moving averages of 21 and 28 days. Increased 24-hr standard deviation of temperature was associated with longer QTc at moving averages from 4 up to 28 days; a 1.9°C interquartile range increase in the 4-day moving average standard deviation of temperature was associated with a 2.8 msec (95%CI: 0.4, 5.2) longer QTc. Associations between the 24-hr standard deviation of temperature and QTc were stronger in colder months and in participants with diabetes and coronary heart disease. Conclusion/Significance In this sample of older men, elevated mean temperature and increased variability of temperature were associated with longer QTc, particularly during colder months and among individuals with diabetes and coronary heart disease. These findings may offer insight into an important underlying mechanism of temperature-related cardiovascular morbidity and mortality in an older population. PMID:25238150
Barth, Nancy A.; Veilleux, Andrea G.
2012-01-01
The U.S. Geological Survey (USGS) is currently updating at-site flood frequency estimates for USGS streamflow-gaging stations in the desert region of California. The at-site flood-frequency analysis is complicated by short record lengths (less than 20 years is common) and numerous zero flows/low outliers at many sites. Estimates of the three parameters (mean, standard deviation, and skew) required for fitting the log Pearson Type 3 (LP3) distribution are likely to be highly unreliable based on the limited and heavily censored at-site data. In a generalization of the recommendations in Bulletin 17B, a regional analysis was used to develop regional estimates of all three parameters (mean, standard deviation, and skew) of the LP3 distribution. A regional skew value of zero from a previously published report was used with a newly estimated mean squared error (MSE) of 0.20. A weighted least squares (WLS) regression method was used to develop both a regional standard deviation model and a regional mean model based on annual peak-discharge data for 33 USGS stations throughout California’s desert region. At-site standard deviation and mean values were determined by using an expected moments algorithm (EMA) method for fitting the LP3 distribution to the logarithms of annual peak-discharge data. Additionally, a multiple Grubbs-Beck (MGB) test, a generalization of the test recommended in Bulletin 17B, was used for detecting multiple potentially influential low outliers in a flood series. The WLS regression found that no basin characteristics could explain the variability of the standard deviation. Consequently, a constant regional standard deviation model was selected, resulting in a log-space value of 0.91 with a MSE of 0.03 log units. Drainage area, however, was found to be statistically significant in explaining the site-to-site variability in the mean. The linear WLS regional mean model based on drainage area had a pseudo-R² of 51 percent and a MSE of 0.32 log units. The regional parameter estimates were then used to develop a set of equations for estimating flows with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities for ungaged basins. The final equations are functions of drainage area. Average standard errors of prediction for these regression equations range from 214.2 to 856.2 percent.
Evaluation of internal noise methods for Hotelling observers
NASA Astrophysics Data System (ADS)
Zhang, Yani; Pham, Binh T.; Eckstein, Miguel P.
2005-04-01
Including internal noise in computer model observers to degrade model observer performance to human levels is a common method to allow for quantitative comparisons of human and model performance. In this paper, we studied two different types of methods for injecting internal noise into Hotelling model observers. The first method adds internal noise to the output of the individual channels: a) independent non-uniform channel noise, b) independent uniform channel noise. The second method adds internal noise to the decision variable arising from the combination of channel responses: a) internal noise standard deviation proportional to the decision variable's standard deviation due to the external noise, b) internal noise standard deviation proportional to the decision variable's variance caused by the external noise. We tested the square window Hotelling observer (HO), channelized Hotelling observer (CHO), and Laguerre-Gauss Hotelling observer (LGHO). The task studied was the detection of a filling defect of varying size/shape in one of four simulated arterial segment locations with real x-ray angiography backgrounds. Results show that the internal noise method that leads to the best prediction of human performance differs across the studied model observers. The CHO model best predicts human observer performance with the channel internal noise. The HO and LGHO best predict human observer performance with the decision variable internal noise. These results might help explain why previous studies have found different results on the ability of each Hotelling model to predict human performance. Finally, the present results might guide researchers in the choice of method to include internal noise in their Hotelling models.
Prentice, J C; Pizer, S D; Conlin, P R
2016-12-01
To characterize the relationship between HbA1c variability and adverse health outcomes among US military veterans with Type 2 diabetes. This retrospective cohort study used Veterans Affairs and Medicare claims for veterans with Type 2 diabetes taking metformin who initiated a second diabetes medication (n = 50 861). The main exposure of interest was HbA1c variability during a 3-year baseline period. HbA1c variability, categorized into quartiles, was defined as the standard deviation, the coefficient of variation and the adjusted standard deviation, which accounted for the number of, and mean number of days between, HbA1c tests. Cox proportional hazards models predicted mortality, hospitalization for ambulatory care-sensitive conditions, and myocardial infarction or stroke, controlling for mean HbA1c levels and the direction of change in HbA1c levels during the baseline period. Over a mean 3.3 years of follow-up, all HbA1c variability measures significantly predicted each outcome. Using the adjusted standard deviation measure of HbA1c variability, the hazard ratios for the third and fourth quartiles predicting mortality were 1.14 (95% CI 1.04, 1.25) and 1.42 (95% CI 1.28, 1.58); for myocardial infarction and stroke they were 1.25 (95% CI 1.10, 1.41) and 1.23 (95% CI 1.07, 1.42); and for ambulatory care-sensitive condition hospitalization they were 1.10 (95% CI 1.03, 1.18) and 1.11 (95% CI 1.03, 1.20). Higher baseline HbA1c levels independently predicted the likelihood of each outcome. In veterans with Type 2 diabetes, greater HbA1c variability was associated with an increased risk of adverse long-term outcomes, independently of HbA1c levels and direction of change. Limiting HbA1c fluctuations over time may reduce complications. © 2016 Diabetes UK.
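The first two variability measures have standard definitions; the "adjusted standard deviation" correction for the number and spacing of tests is specific to the paper and is not reproduced here. A minimal sketch, assuming a simple series of HbA1c values per patient:

```python
import numpy as np

def hba1c_variability(values):
    """SD and CV of serial HbA1c values for one patient. The paper's
    'adjusted SD' additionally corrects for the number and mean spacing
    of tests; that correction is not reproduced in this sketch."""
    v = np.asarray(values, dtype=float)
    sd = v.std(ddof=1)
    cv = sd / v.mean()      # coefficient of variation
    return {"SD": sd, "CV": cv}
```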
Middle school transition and body weight outcomes: Evidence from Arkansas Public Schoolchildren.
Zeng, Di; Thomsen, Michael R; Nayga, Rodolfo M; Rouse, Heather L
2016-05-01
There is evidence that middle school transition adversely affects educational and psychological outcomes of pre-teen children, but little is known about the impacts of middle school transition on other aspects of health. In this article, we estimate the impact of middle school transition on the body mass index (BMI) of public schoolchildren in Arkansas, United States. Using an instrumental variable approach, we find that middle school transition in grade 6 led to a moderate decrease of 0.04 standard deviations in BMI z-scores for all students. Analysis by subsample indicated that this result was driven by boys (0.06-0.07 standard deviations) and especially by non-minority boys (0.09 standard deviations). We speculate that the changing levels of physical activities associated with middle school transition provide the most reasonable explanation for this result. Copyright © 2015 Elsevier B.V. All rights reserved.
On the linear relation between the mean and the standard deviation of a response time distribution.
Wagenmakers, Eric-Jan; Brown, Scott
2007-07-01
Although it is generally accepted that the spread of a response time (RT) distribution increases with the mean, the precise nature of this relation remains relatively unexplored. The authors show that in several descriptive RT distributions, the standard deviation increases linearly with the mean. Results from a wide range of tasks from different experimental paradigms support a linear relation between RT mean and RT standard deviation. Both R. Ratcliff's (1978) diffusion model and G. D. Logan's (1988) instance theory of automatization provide explanations for this linear relation. The authors identify and discuss 3 specific boundary conditions for the linear law to hold. The law constrains RT models and supports the use of the coefficient of variation to (a) compare variability while controlling for differences in baseline speed of processing and (b) assess whether changes in performance with practice are due to quantitative speedup or qualitative reorganization. Copyright 2007 APA.
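The linear law is easy to check on condition-level summaries: fit SD as a linear function of the mean and inspect the coefficient of variation. In the sketch below, the (mean, SD) pairs are purely illustrative numbers, not data from the paper.

```python
import numpy as np

# Illustrative condition-level RT summaries (ms): one (mean, SD) pair
# per experimental condition, invented solely for this sketch.
means = np.array([420.0, 480.0, 560.0, 650.0, 730.0])
sds = np.array([58.0, 71.0, 90.0, 112.0, 128.0])

slope, intercept = np.polyfit(means, sds, 1)   # fit SD = a + b * mean
cv = sds / means                               # coefficient of variation
print(f"SD = {intercept:.1f} + {slope:.3f} * mean; CV = {cv.round(3)}")
```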
Perturbed effects at radiation physics
NASA Astrophysics Data System (ADS)
Külahcı, Fatih; Şen, Zekâi
2013-09-01
Perturbation methodology is applied in order to assess the behavior of the linear attenuation coefficient, mass attenuation coefficient and cross-section when the basic variables, such as the radiation amounts frequently used in radiation physics and chemistry, contain random components. Additionally, the layer attenuation coefficient (LAC) and perturbed LAC (PLAC) are proposed for different contact materials. Perturbation methodology provides the opportunity to obtain results with random deviations from the average behavior of each variable that enters the whole mathematical expression. The basic photon intensity attenuation expression, the inverse exponential power law (the Beer-Lambert law), is adopted to expound the perturbation method. Perturbed results are presented not only in terms of the mean but also the standard deviation and the correlation coefficients. Such perturbation expressions allow one to assess small random variability in basic variables.
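A minimal Monte Carlo version of this idea, assuming a normally perturbed attenuation coefficient in the Beer-Lambert law I = I0·exp(-μx); all numerical values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

I0, x = 1.0, 2.0              # incident intensity, layer thickness (cm)
mu_mean, mu_sd = 0.5, 0.05    # attenuation coefficient (1/cm), illustrative

mu = rng.normal(mu_mean, mu_sd, 100_000)   # perturbed attenuation coeff.
I = I0 * np.exp(-mu * x)                   # Beer-Lambert law

print(f"mean I  = {I.mean():.4f} (unperturbed {I0 * np.exp(-mu_mean * x):.4f})")
print(f"SD of I = {I.std(ddof=1):.4f}")
print(f"corr(mu, I) = {np.corrcoef(mu, I)[0, 1]:.3f}")  # strongly negative
```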
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gould, A.; Yee, J. C.; Pinsonneault, M. H.
The Galactic bulge source MOA-2010-BLG-523S exhibited short-term deviations from a standard microlensing light curve near the peak of an A_max ≈ 265 high-magnification microlensing event. The deviations originally seemed consistent with expectations for a planetary companion to the principal lens. We combine long-term photometric monitoring with a previously published high-resolution spectrum taken near peak to demonstrate that this is an RS CVn variable, so that planetary microlensing is not required to explain the light-curve deviations. This is the first spectroscopically confirmed RS CVn star discovered in the Galactic bulge.
Climate change enhances interannual variability of the Nile river flow
NASA Astrophysics Data System (ADS)
Siam, Mohamed S.; Eltahir, Elfatih A. B.
2017-04-01
The human population living in the Nile basin countries is projected to double by 2050, approaching one billion. The increase in water demand associated with this burgeoning population will put significant stress on the available water resources. Potential changes in the flow of the Nile River as a result of climate change may further strain this critical situation. Here, we present empirical evidence from observations and consistent projections from climate model simulations suggesting that the standard deviation describing interannual variability of total Nile flow could increase by 50% (±35%) (multi-model ensemble mean ± 1 standard deviation) in the twenty-first century compared to the twentieth century. We attribute the relatively large change in interannual variability of the Nile flow to projected increases in future occurrences of El Niño and La Niña events and to observed teleconnection between the El Niño-Southern Oscillation and Nile River flow. Adequacy of current water storage capacity and plans for additional storage capacity in the basin will need to be re-evaluated given the projected enhancement of interannual variability in the future flow of the Nile river.
NASA Technical Reports Server (NTRS)
Meneghini, Robert; Jones, Jeffrey A.
2010-01-01
We investigate the spatial variability of the normalized radar cross section of the surface (NRCS or σ⁰) derived from measurements of the TRMM Precipitation Radar (PR) for the period from 1998 to 2009. The purpose of the study is to understand the way in which the sample standard deviation of the σ⁰ data changes as a function of spatial resolution, incidence angle, and surface type (land/ocean). The results have implications regarding the accuracy by which the path integrated attenuation from precipitation can be inferred by the use of surface scattering properties.
Family structure and childhood anthropometry in Saint Paul, Minnesota in 1918
Warren, John Robert
2017-01-01
Concern with childhood nutrition prompted numerous surveys of children’s growth in the United States after 1870. The Children’s Bureau’s 1918 “Weighing and Measuring Test” measured two million children to produce the first official American growth norms. Individual data for 14,000 children survive from the Saint Paul, Minnesota survey, whose stature closely approximated national norms. In addition to anthropometry, the survey recorded exact ages, street address and full name. These variables allow linkage to the 1920 census to obtain demographic and socioeconomic information. We matched 72% of children to census families, creating a sample of nearly 10,000 children. Children in the entire survey (linked set) averaged 0.74 (0.72) standard deviations below modern WHO height-for-age standards, and 0.48 (0.46) standard deviations below modern weight-for-age norms. Sibship size strongly influenced height-for-age, and had a weaker influence on weight-for-age. Each additional child aged six or under reduced height-for-age scores by 0.07 standard deviations (95% CI: −0.03, 0.11). Teenage siblings had little effect on height-for-age. Social class effects were substantial. Children of laborers averaged half a standard deviation shorter than children of professionals. Family structure and socio-economic status had compounding impacts on children’s stature. PMID:28943749
Verster, Joris C; Roth, Thomas
2014-01-01
The on-the-road driving test in normal traffic is used to examine the impact of drugs on driving performance. This paper compares the sensitivity of standard deviation of lateral position (SDLP) and SD speed in detecting driving impairment. A literature search was conducted to identify studies applying the on-the-road driving test, examining the effects of anxiolytics, antidepressants, antihistamines, and hypnotics. The proportion of comparisons (treatment versus placebo) where a significant impairment was detected with SDLP and SD speed was compared. About 40% of 53 relevant papers did not report data on SD speed and/or SDLP. After placebo administration, the correlation between SDLP and SD speed was significant but did not explain much variance (r = 0.253, p = 0.0001). A significant correlation was found between ΔSDLP and ΔSD speed (treatment-placebo), explaining 48% of variance. When using SDLP as outcome measure, 67 significant treatment-placebo comparisons were found. Only 17 (25.4%) were significant when SD speed was used as outcome measure. Alternatively, for five treatment-placebo comparisons, a significant difference was found for SD speed but not for SDLP. Standard deviation of lateral position is a more sensitive outcome measure to detect driving impairment than speed variability.
Offshore fatigue design turbulence
NASA Astrophysics Data System (ADS)
Larsen, Gunner C.
2001-07-01
Fatigue damage on wind turbines is mainly caused by stochastic loading originating from turbulence. While onshore sites display large differences in terrain topology, and thereby also in turbulence conditions, offshore sites are far more homogeneous, as the majority of them are likely to be associated with shallow water areas. However, despite this fact, specific recommendations on offshore turbulence intensities, applicable for fatigue design purposes, are lacking in the present IEC code. This article presents specific guidelines for such loading. These guidelines are based on the statistical analysis of a large number of wind data originating from two Danish shallow water offshore sites. The turbulence standard deviation depends on the mean wind speed, upstream conditions, measuring height and thermal convection. Defining a population of turbulence standard deviations, at a given measuring position, uniquely by the mean wind speed, variations in upstream conditions and atmospheric stability will appear as variability of the turbulence standard deviation. Distributions of such turbulence standard deviations, conditioned on the mean wind speed, are quantified by fitting the measured data to logarithmic Gaussian distributions. By combining a simple heuristic load model with the parametrized conditional probability density functions of the turbulence standard deviations, an empirical offshore design turbulence intensity is determined. For pure stochastic loading (as associated with standstill situations), the design turbulence intensity yields a fatigue damage equal to the average fatigue damage caused by the distributed turbulence intensity. If the stochastic loading is combined with a periodic deterministic loading (as in the normal operating situation), the proposed design turbulence intensity is shown to be conservative.
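The distribution-fitting step described in this abstract can be made concrete with a short sketch. The following is a minimal illustration, not the author's code: it bins turbulence standard deviations by mean wind speed and fits a lognormal distribution in each bin by estimating the moments of the log-transformed data. The bin edges, minimum sample count, and function name are assumptions.

```python
import numpy as np

def fit_conditional_lognormal(wind_speed, turb_sd, bin_edges):
    """Fit lognormal distributions to turbulence standard deviations,
    conditioned on mean wind speed bins.

    wind_speed : 1-D array of 10-min mean wind speeds
    turb_sd    : 1-D array of corresponding turbulence standard deviations
    bin_edges  : monotonically increasing wind speed bin edges
    Returns {(lo, hi): (mean_of_log, sd_of_log)} per bin.
    """
    wind_speed = np.asarray(wind_speed, dtype=float)
    turb_sd = np.asarray(turb_sd, dtype=float)
    params = {}
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (wind_speed >= lo) & (wind_speed < hi)
        if in_bin.sum() < 10:
            continue  # too few samples for a stable fit (threshold assumed)
        # A lognormal fit reduces to normal-distribution moments of the log
        log_sd = np.log(turb_sd[in_bin])
        params[(lo, hi)] = (log_sd.mean(), log_sd.std(ddof=1))
    return params
```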
Li, Shengxu; Chen, Wei; Sun, Dianjianyi; Fernandez, Camilo; Li, Jian; Kelly, Tanika; He, Jiang; Krousel-Wood, Marie; Whelton, Paul K
2015-12-01
Body mass index (BMI) in childhood predicts obesity in adults, but it is unknown whether rapid increase and variability in BMI during childhood are independent predictors of adult obesity. The study cohort consisted of 1622 Bogalusa Heart Study participants (aged 20 to 51 years at follow-up) who had been screened at least four times during childhood (aged 4-19 years). BMI rate of change during childhood for each individual was assessed by mixed models; BMI residual standard deviation (RSD) during childhood was used as a measure of variability. The average follow-up period was 20.9 years. One standard deviation increase in rate of change in BMI during childhood was associated with a 1.39 [95% confidence interval (CI): 1.17-1.61] kg/m² increase in adult BMI and a 2.98 (95% CI: 2.42-3.56) cm increase in adult waist circumference, independently of childhood mean BMI. Similarly, one standard deviation increase in RSD in BMI during childhood was associated with a 0.46 (95% CI: 0.23-0.69) kg/m² increase in adult BMI and a 1.42 (95% CI: 0.82-2.02) cm increase in adult waist circumference. The odds ratio for adult obesity progressively increased from the lowest to the highest quartile of BMI rate of change or RSD during childhood (P for trend < 0.05 for both). Rapid increase and greater variability in BMI during childhood appear to be independent risk factors for adult obesity. Our findings have implications for understanding body weight regulation and obesity development from childhood to adulthood. © The Author 2015; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.
NASA Technical Reports Server (NTRS)
Meneghini, Robert; Kim, Hyokyung
2016-01-01
For an airborne or spaceborne radar, the precipitation-induced path attenuation can be estimated from measurements of the normalized surface cross section, σ0, in the presence and absence of precipitation. In one implementation, the mean rain-free estimate and its variability are found from a lookup table (LUT) derived from previously measured data. For the dual-frequency precipitation radar aboard the Global Precipitation Measurement satellite, the nominal table consists of the statistics of the rain-free σ0 over a 0.5° × 0.5° latitude-longitude grid using a three-month set of input data. However, a problem with the LUT is an insufficient number of samples in many cells. An alternative table is constructed by a stepwise procedure that begins with the statistics over a 0.25° × 0.25° grid. If the number of samples at a cell is too few, the area is expanded, cell by cell, choosing at each step the cell that minimizes the variance of the data. The question arises, however, as to whether the selected region corresponds to the smallest variance. To address this question, a second type of variable-averaging grid is constructed using all possible spatial configurations and computing the variance of the data within each region. Comparisons of the standard deviations for the fixed and variable-averaged grids are given as a function of incidence angle and surface type using a three-month set of data. The advantage of variable spatial averaging is that the average standard deviation can be reduced relative to the fixed grid while satisfying the minimum sample requirement.
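The stepwise expansion that the abstract describes is essentially a greedy search. Below is a minimal sketch under stated assumptions (a hypothetical data layout, four-connected neighbours, and no tie-breaking rule); it illustrates the idea rather than the operational GPM algorithm.

```python
import numpy as np

def expand_region(grid_samples, start, min_samples):
    """Greedy variable-averaging: grow a region around a starting cell,
    at each step adding the adjacent cell that minimizes the variance of
    the pooled data, until the minimum sample count is met.

    grid_samples : dict mapping (i, j) cell index -> 1-D array of sigma0 samples
    start        : (i, j) starting cell
    min_samples  : required number of samples
    """
    region = {start}
    pooled = list(grid_samples.get(start, []))
    while len(pooled) < min_samples:
        # Candidate cells: four-connected neighbours of the current region
        candidates = {(i + di, j + dj) for (i, j) in region
                      for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))}
        candidates = {c for c in candidates
                      if c in grid_samples and c not in region}
        if not candidates:
            break  # nothing left to add; return what we have
        # Pick the cell whose addition gives the smallest pooled variance
        best = min(candidates,
                   key=lambda c: np.var(pooled + list(grid_samples[c])))
        region.add(best)
        pooled += list(grid_samples[best])
    return region, np.mean(pooled), np.std(pooled, ddof=1)
```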
ERIC Educational Resources Information Center
Fayombo, Grace Adebisi
2011-01-01
This study examined some student-related variables (interest in higher education, psychological resilience and study habit) as predictors of academic achievement among 131 (M (mean) = 28.17, SD (standard deviation) = 1.61) first year psychology students in the Introduction to Developmental Psychology class in UWI (The University of the West…
Silveira, L M; Basile-Filho, A; Nicolini, E A; Dessotte, C A M; Aguiar, G C S; Stabile, A M
2017-08-01
Sepsis is associated with morbidity and mortality, which implies high costs to the global health system. Metabolic alterations that increase glycaemia and glycaemic variability occur during sepsis. To verify mean body glucose levels and glycaemic variability in Intensive Care Unit (ICU) patients with severe sepsis or septic shock. Retrospective and exploratory study that involved collection of patients' sociodemographic and clinical data and calculation of severity scores. Glycaemia measurements helped to determine glycaemic variability through standard deviation and mean amplitude of glycaemic excursions. Analysis of 116 medical charts and 6730 glycaemia measurements revealed that the majority of patients were male and aged over 60 years. Surgical treatment was the main reason for ICU admission. High blood pressure and diabetes mellitus were the most usual comorbidities. Patients that died during the ICU stay presented the highest SOFA scores and mean glycaemia; they also experienced more hypoglycaemia events. Patients with diabetes had higher mean glycaemia, evaluated through standard deviation and mean amplitude of glycaemia excursions. Organic impairment at ICU admission may underlie glycaemic variability and lead to a less favourable outcome. High glycaemic variability in patients with diabetes indicates that monitoring of these individuals is crucial to ensure better outcomes. Copyright © 2017 Elsevier Ltd. All rights reserved.
Acoustic response variability in automotive vehicles
NASA Astrophysics Data System (ADS)
Hills, E.; Mace, B. R.; Ferguson, N. S.
2009-03-01
A statistical analysis of a series of measurements of the audio-frequency response of a large set of automotive vehicles is presented: a small hatchback model with both a three-door (411 vehicles) and five-door (403 vehicles) derivative and a mid-sized family five-door car (316 vehicles). The sets included vehicles of various specifications, engines, gearboxes, interior trim, wheels and tyres. The tests were performed in a hemianechoic chamber with the temperature and humidity recorded. Two tests were performed on each vehicle and the interior cabin noise measured. In the first, the excitation was acoustically induced by sets of external loudspeakers. In the second test, predominantly structure-borne noise was induced by running the vehicle at a steady speed on a rough roller. For both types of excitation, it is seen that the effects of temperature are small, indicating that manufacturing variability is larger than that due to temperature for the tests conducted. It is also observed that there are no significant outlying vehicles, i.e. there are at most only a few vehicles that consistently have the lowest or highest noise levels over the whole spectrum. For the acoustically excited tests, measured 1/3-octave noise reduction levels typically have a spread of 5 dB or so and the normalised standard deviation of the linear data is typically 0.1 or higher. Regarding the statistical distribution of the linear data, a lognormal distribution is a somewhat better fit than a Gaussian distribution for lower 1/3-octave bands, while the reverse is true at higher frequencies. For the distribution of the overall linear levels, a Gaussian distribution is generally the most representative. As a simple description of the response variability, it is sufficient for this series of measurements to assume that the acoustically induced airborne cabin noise is best described by a Gaussian distribution with a normalised standard deviation between 0.09 and 0.145. There is generally considerable variability in the roller-induced noise, with individual 1/3-octave levels varying by typically 15 dB or so and with the normalised standard deviation being in the range 0.2-0.35 or more. These levels are strongly affected by wheel rim and tyre constructions. For vehicles with nominally identical wheel rims and tyres, the normalised standard deviation for 1/3-octave levels in the frequency range 40-600 Hz is 0.2 or so. The distribution of the linear roller-induced noise level in each 1/3-octave frequency band is well described by a lognormal distribution as is the overall level. As a simple description of the response variability, it is sufficient for this series of measurements to assume that the roller-induced road noise is best described by a lognormal distribution with a normalised standard deviation of 0.2 or so, but that this can be significantly affected by the tyre and rim type, especially at lower frequencies.
Shi, Shaobo; Liu, Tao; Wang, Dandan; Zhang, Yan; Liang, Jinjun; Yang, Bo; Hu, Dan
2017-07-01
The goal of this study was to assess the effects of N-methyl-d-aspartate (NMDA) receptor activation on heart rate variability (HRV) and susceptibility to atrial fibrillation (AF). Rats were randomized for treatment with saline, NMDA (agonist of NMDA receptors), or NMDA plus MK-801 (antagonist of NMDA receptors) for 2 weeks. Heart rate variability was evaluated by using implantable electrocardiogram telemeters. Atrial fibrillation susceptibility was assessed with programmed stimulation in isolated hearts. Compared with the controls, the NMDA-treated rats displayed a decrease in the standard deviation of normal RR intervals, the standard deviation of the average RR intervals, the mean of the 5-min standard deviations of RR intervals, the root mean square of successive differences, and high frequency (HF), and an increase in low frequency (LF) and LF/HF (all P < 0.01). Additionally, the NMDA-treated rats showed prolonged activation latency and a reduced effective refractory period (all P < 0.01). Importantly, AF was induced in all NMDA-treated rats. The NMDA-treated rats also developed atrial fibrosis, with downregulation of connexin40 and upregulation of metalloproteinase 9 (all P < 0.01). Most of the above alterations were mitigated by co-administration of MK-801. These results indicate that NMDA receptor activation reduces HRV and enhances AF inducibility, with cardiac autonomic imbalance, atrial fibrosis, and degradation of gap junction protein identified as potential mechanistic contributors. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2016. For permissions please email: journals.permissions@oup.com.
Chaintreau, Alain; Fieber, Wolfgang; Sommer, Horst; Gilbert, Alexis; Yamada, Keita; Yoshida, Naohiro; Pagelot, Alain; Moskau, Detlef; Moreno, Aitor; Schleucher, Jürgen; Reniero, Fabiano; Holland, Margaret; Guillou, Claude; Silvestre, Virginie; Akoka, Serge; Remaud, Gérald S
2013-07-25
Isotopic ¹³C NMR spectrometry, which is able to measure intra-molecular ¹³C composition, is in growing demand because of the new information provided by the ¹³C site-specific content of a given molecule. A systematic evaluation of instrumental behaviour is important if isotopic ¹³C NMR is to be envisaged as a routine tool. This paper describes the first collaborative study of intra-molecular ¹³C composition by NMR. The main goals of the ring test were to establish the intra- and inter-instrument variability of the spectrometer response. Eight instruments with different configurations were retained for the exercise on the basis of a qualification test. Reproducibility at natural abundance of isotopic ¹³C NMR was then assessed on vanillin from three different origins associated with specific δ¹³Ci profiles. The standard deviation was, on average, between 0.9 and 1.2‰ for intra-instrument variability. The highest standard deviation for inter-instrument variability was 2.1‰. This is significantly higher than the internal precision but can be considered good for a first ring test of a new analytical method. The standard deviation of δ¹³Ci in vanillin was not homogeneous over the eight carbons, with no trend either for the carbon position or for the configuration of the spectrometer. However, since the repeatability for each instrument was satisfactory, correction factors for each carbon in vanillin could be calculated to harmonize the results. Copyright © 2013 Elsevier B.V. All rights reserved.
Levegrün, Sabine; Pöttgen, Christoph; Jawad, Jehad Abu; Berkovic, Katharina; Hepp, Rodrigo; Stuschke, Martin
2013-02-01
To evaluate megavoltage computed tomography (MVCT)-based image guidance with helical tomotherapy in patients with vertebral tumors by analyzing factors influencing interobserver variability, considered as quality criterion of image guidance. Five radiation oncologists retrospectively registered 103 MVCTs in 10 patients to planning kilovoltage CTs by rigid transformations in 4 df. Interobserver variabilities were quantified using the standard deviations (SDs) of the distributions of the correction vector components about the observers' fraction mean. To assess intraobserver variabilities, registrations were repeated after ≥4 weeks. Residual deviations after setup correction due to uncorrectable rotational errors and elastic deformations were determined at 3 craniocaudal target positions. To differentiate observer-related variations in minimizing these residual deviations across the 3-dimensional MVCT from image resolution effects, 2-dimensional registrations were performed in 30 single transverse and sagittal MVCT slices. Axial and longitudinal MVCT image resolutions were quantified. For comparison, image resolution of kilovoltage cone-beam CTs (CBCTs) and interobserver variability in registrations of 43 CBCTs were determined. Axial MVCT image resolution is 3.9 lp/cm. Longitudinal MVCT resolution amounts to 6.3 mm, assessed as full-width at half-maximum of thin objects in MVCTs with finest pitch. Longitudinal CBCT resolution is better (full-width at half-maximum, 2.5 mm for CBCTs with 1-mm slices). In MVCT registrations, interobserver variability in the craniocaudal direction (SD 1.23 mm) is significantly larger than in the lateral and ventrodorsal directions (SD 0.84 and 0.91 mm, respectively) and significantly larger compared with CBCT alignments (SD 1.04 mm). Intraobserver variabilities are significantly smaller than corresponding interobserver variabilities (variance ratio [VR] 1.8-3.1). Compared with 3-dimensional registrations, 2-dimensional registrations have significantly smaller interobserver variability in the lateral and ventrodorsal directions (VR 3.8 and 2.8, respectively) but not in the craniocaudal direction (VR 0.75). Tomotherapy image guidance precision is affected by image resolution and residual deviations after setup correction. Eliminating the effect of residual deviations yields small interobserver variabilities with submillimeter precision in the axial plane. In contrast, interobserver variability in the craniocaudal direction is dominated by the poorer longitudinal MVCT image resolution. Residual deviations after image guidance exist and need to be considered when dose gradients ultimately achievable with image guided radiation therapy techniques are analyzed. Copyright © 2013 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Chadwick, C.
1984-01-01
This paper describes the development and use of an algorithm to compute approximate statistics of the magnitude of a single random trajectory correction maneuver (TCM) Delta v vector. The TCM Delta v vector is modeled as a three-component Cartesian vector, each of whose components is a random variable having a normal (Gaussian) distribution with zero mean and possibly unequal standard deviations. The algorithm uses these standard deviations as input to produce approximations to (1) the mean and standard deviation of the magnitude of Delta v, (2) points of the probability density function of the magnitude of Delta v, and (3) points of the cumulative and inverse cumulative distribution functions of Delta v. The approximations are based on Monte Carlo techniques developed in a previous paper by the author and extended here. The algorithm described is expected to be useful in both pre-flight planning and in-flight analysis of maneuver propellant requirements for space missions.
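The Monte Carlo construction summarized above is straightforward to reproduce in outline. The following sketch, with hypothetical component standard deviations, estimates the mean, standard deviation, and percentile points of the magnitude of a zero-mean Gaussian vector; it illustrates the idea rather than the paper's exact algorithm.

```python
import numpy as np

def delta_v_magnitude_stats(sigmas, n_samples=100_000, seed=0):
    """Monte Carlo statistics of |dv| for a zero-mean Gaussian vector.

    sigmas : (3,) standard deviations of the Cartesian components
             (possibly unequal).
    Returns the mean, sample standard deviation, and the 5th/50th/95th
    percentiles of |dv|.
    """
    rng = np.random.default_rng(seed)
    # Draw each component independently: dv_i ~ N(0, sigma_i^2)
    dv = rng.normal(0.0, sigmas, size=(n_samples, 3))
    mag = np.linalg.norm(dv, axis=1)
    percentiles = np.percentile(mag, [5, 50, 95])
    return mag.mean(), mag.std(ddof=1), percentiles

# Example with hypothetical, unequal component standard deviations (m/s)
mean, sd, (p5, p50, p95) = delta_v_magnitude_stats(np.array([1.0, 2.0, 3.0]))
print(f"mean={mean:.3f}, sd={sd:.3f}, percentiles={p5:.3f}/{p50:.3f}/{p95:.3f}")
```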
An estimator for the standard deviation of a natural frequency. II.
NASA Technical Reports Server (NTRS)
Schiff, A. J.; Bogdanoff, J. L.
1971-01-01
A method has been presented for estimating the variability of a system's natural frequencies arising from the variability of the system's parameters. The only information required to obtain the estimates is the member variability, in the form of second-order properties, and the natural frequencies and mode shapes of the mean system. It has also been established for the systems studied by means of Monte Carlo estimates that the specification of second-order properties is an adequate description of member variability.
Two Computer Programs for the Statistical Evaluation of a Weighted Linear Composite.
ERIC Educational Resources Information Center
Sands, William A.
1978-01-01
Two computer programs (one batch, one interactive) are designed to provide statistics for a weighted linear combination of several component variables. Both programs provide mean, variance, standard deviation, and a validity coefficient. (Author/JKS)
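The statistics these programs report follow from the standard composite formulas: the composite mean is the weighted sum of component means, the composite variance is w'Σw, and the validity coefficient is the correlation of the composite with a criterion. A minimal sketch follows; the function name and interface are illustrative, not taken from the original programs.

```python
import numpy as np

def composite_stats(weights, means, cov, criterion_cov=None, criterion_var=None):
    """Statistics of a weighted linear composite C = sum_i w_i * X_i.

    weights       : (k,) component weights
    means         : (k,) component means
    cov           : (k, k) covariance matrix of the components
    criterion_cov : (k,) covariances of each component with a criterion Y
    criterion_var : variance of the criterion Y
    """
    w = np.asarray(weights, dtype=float)
    mean = w @ np.asarray(means, dtype=float)       # E[C] = w' mu
    var = w @ np.asarray(cov, dtype=float) @ w      # Var[C] = w' Sigma w
    sd = np.sqrt(var)
    validity = None
    if criterion_cov is not None and criterion_var is not None:
        # Corr(C, Y) = w' Cov(X, Y) / (SD(C) * SD(Y))
        validity = (w @ np.asarray(criterion_cov, dtype=float)) / (
            sd * np.sqrt(criterion_var))
    return mean, var, sd, validity
```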
Seay, Joseph F.; Gregorczyk, Karen N.; Hasselquist, Leif
2016-01-01
Influences of load carriage and inclination on spatiotemporal parameters were examined during treadmill and overground walking. Ten soldiers walked on a treadmill and overground with three load conditions (0 kg, 20 kg, 40 kg) during level, uphill (6% grade) and downhill (-6% grade) inclinations at a self-selected speed, which was held constant across conditions. Mean values and standard deviations for double support percentage, stride length and step rate were compared across conditions. Double support percentage increased with load and with the inclination change from uphill to level walking, with a 0.4% of stance greater increase at the 20 kg condition compared to 0 kg. As inclination changed from uphill to downhill, the step rate increased more overground (4.3 ± 3.5 steps/min) than during treadmill walking (1.7 ± 2.3 steps/min). For the 40 kg condition, the standard deviations were larger than for the 0 kg condition for both step rate and double support percentage. There was no change between modes for step rate standard deviation. For overground compared to treadmill walking, the standard deviations for stride length and double support percentage increased and decreased, respectively. Changes in load of up to 40 kg, inclination of 6% grade away from level (i.e., uphill or downhill) and mode (treadmill and overground) produced small, yet statistically significant changes in spatiotemporal parameters. Variability, as assessed by standard deviation, was not systematically lower during treadmill walking compared to overground walking. Due to the small magnitude of the changes, treadmill walking appears to replicate the spatiotemporal parameters of overground walking. PMID:28149338
Hosseininasab, Abufazel; Mohammadi, Mohammadreza; Jouzi, Samira; Esmaeilinasab, Maryam; Delavar, Ali
2016-01-01
Objective: This study aimed to provide a normative study documenting how 114 five- to seven-year-old non-patient Iranian children respond to the Rorschach test. We compared this sample to international normative reference values for the Comprehensive System (CS). Method: One hundred fourteen 5- to 7-year-old non-patient Iranian children were recruited from public schools. Using five child and adolescent samples from five countries, we compared the Iranian normative reference data based on reference means and standard deviations for each sample. Results: Findings revealed how the scores in each sample were distributed and how the samples compared across variables in eight Rorschach Comprehensive System (CS) clusters. We report all descriptive statistics, such as reference means and standard deviations, for all variables. Conclusion: Iranian clinicians could rely on country-specific or “local” norms when assessing children. We discourage Iranian clinicians from using many CS scores to make nomothetic, score-based inferences about psychopathology in children and adolescents. PMID:27928247
The repeatability of mean defect with size III and size V standard automated perimetry.
Wall, Michael; Doyle, Carrie K; Zamba, K D; Artes, Paul; Johnson, Chris A
2013-02-15
The mean defect (MD) of the visual field is a global statistical index used to monitor overall visual field change over time. Our goal was to investigate the relationship of MD and its variability for two clinically used strategies (Swedish Interactive Threshold Algorithm [SITA] standard size III and full threshold size V) in glaucoma patients and controls. We tested one eye, at random, for 46 glaucoma patients and 28 ocularly healthy subjects with Humphrey program 24-2 SITA standard for size III and full threshold for size V each five times over a 5-week period. The standard deviation of MD was regressed against the MD for the five repeated tests, and quantile regression was used to show the relationship of variability and MD. A Wilcoxon test was used to compare the standard deviations of the two testing methods following quantile regression. Both types of regression analysis showed increasing variability with increasing visual field damage. Quantile regression showed modestly smaller MD confidence limits. There was a 15% decrease in SD with size V in glaucoma patients (P = 0.10) and a 12% decrease in ocularly healthy subjects (P = 0.08). The repeatability of size V MD appears to be slightly better than size III SITA testing. When using MD to determine visual field progression, a change of 1.5 to 4 decibels (dB) is needed to be outside the normal 95% confidence limits, depending on the size of the stimulus and the amount of visual field damage.
ERIC Educational Resources Information Center
Bodner, Todd E.
2016-01-01
This article revisits how the end points of plotted line segments should be selected when graphing interactions involving a continuous target predictor variable. Under the standard approach, end points are chosen at ±1 or 2 standard deviations from the target predictor mean. However, when the target predictor and moderator are correlated or the…
Quantifying expert diagnosis variability when grading tumor-infiltrating lymphocytes
NASA Astrophysics Data System (ADS)
Toro, Paula; Corredor, Germán.; Wang, Xiangxue; Arias, Viviana; Velcheti, Vamsidhar; Madabhushi, Anant; Romero, Eduardo
2017-11-01
Tumor-infiltrating lymphocytes (TILs) have been shown to play an important role in predicting prognosis, survival, and response to treatment in patients with a variety of solid tumors. Unfortunately, there is currently no standardized methodology for quantifying the infiltration grade. The aim of this work is to evaluate variability among the reports of TILs given by a group of pathologists who examined a set of digitized non-small cell lung cancer samples (n=60). Twenty-eight pathologists each evaluated a different number of the histopathological images. The agreement among pathologists was evaluated by computing the kappa coefficient and the standard deviation of their estimations. Furthermore, TILs reports were correlated with patients' prognosis and survival using the Pearson correlation coefficient. Overall, the results show that agreement among experts grading TILs in the dataset is low, since kappa values remain below 0.4 and the standard deviation values demonstrate that in none of the images was there full consensus. Finally, the correlation coefficient for each pathologist also reveals a low association between the pathologists' estimates and the prognosis/survival data. The results suggest the need to define standardized, objective, and effective strategies to evaluate TILs so that they can be used as a biomarker in the daily routine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fried, D; Meier, J; Mawlawi, O
Purpose: Use a NEMA-IEC PET phantom to assess the robustness of FDG-PET-based radiomics features to changes in reconstruction parameters across different scanners. Methods: We scanned a NEMA-IEC PET phantom on 3 different scanners (GE Discovery VCT, GE Discovery 710, and Siemens mCT) using a FDG source-to-background ratio of 10:1. Images were retrospectively reconstructed using different iterations (2–3), subsets (21–24), Gaussian filter widths (2, 4, 6 mm), and matrix sizes (128, 192, 256). The 710 and mCT used time-of-flight and point-spread-functions in reconstruction. The axial image through the center of the 6 active spheres was used for analysis. A region-of-interest containing all spheres was able to simulate a heterogeneous lesion due to partial volume effects. Maximum voxel deviations from all retrospectively reconstructed images (18 per scanner) were compared to our standard clinical protocol. PET images from 195 non-small cell lung cancer patients were used to compare feature variation. The ratio of a feature's standard deviation from the patient cohort versus the phantom images was calculated to assess feature robustness. Results: Across all images, the percentage of voxels differing by <1 SUV and <2 SUV ranged from 61–92% and 88–99%, respectively. Voxel-voxel similarity decreased when using higher resolution image matrices (192/256 versus 128) and was comparable across scanners. Taking the ratio of patient and phantom feature standard deviations identified features that were not robust to changes in reconstruction parameters (e.g. co-occurrence correlation). Metrics found to be reasonably robust (standard deviation ratios > 3) included routinely used SUV metrics (e.g. SUVmean and SUVmax) as well as some radiomics features (e.g. co-occurrence contrast, co-occurrence energy, standard deviation, and uniformity). Similar standard deviation ratios were observed across scanners. Conclusions: Our method enabled a comparison of feature variability across scanners and was able to identify features that were not robust to changes in reconstruction parameters.
NASA Astrophysics Data System (ADS)
Stier, P.; Schutgens, N. A. J.; Bian, H.; Boucher, O.; Chin, M.; Ghan, S.; Huneeus, N.; Kinne, S.; Lin, G.; Myhre, G.; Penner, J. E.; Randles, C.; Samset, B.; Schulz, M.; Yu, H.; Zhou, C.
2012-09-01
Simulated multi-model "diversity" in aerosol direct radiative forcing estimates is often perceived as measure of aerosol uncertainty. However, current models used for aerosol radiative forcing calculations vary considerably in model components relevant for forcing calculations and the associated "host-model uncertainties" are generally convoluted with the actual aerosol uncertainty. In this AeroCom Prescribed intercomparison study we systematically isolate and quantify host model uncertainties on aerosol forcing experiments through prescription of identical aerosol radiative properties in nine participating models. Even with prescribed aerosol radiative properties, simulated clear-sky and all-sky aerosol radiative forcings show significant diversity. For a purely scattering case with globally constant optical depth of 0.2, the global-mean all-sky top-of-atmosphere radiative forcing is -4.51 W m-2 and the inter-model standard deviation is 0.70 W m-2, corresponding to a relative standard deviation of 15%. For a case with partially absorbing aerosol with an aerosol optical depth of 0.2 and single scattering albedo of 0.8, the forcing changes to 1.26 W m-2, and the standard deviation increases to 1.21 W m-2, corresponding to a significant relative standard deviation of 96%. However, the top-of-atmosphere forcing variability owing to absorption is low, with relative standard deviations of 9% clear-sky and 12% all-sky. Scaling the forcing standard deviation for a purely scattering case to match the sulfate radiative forcing in the AeroCom Direct Effect experiment, demonstrates that host model uncertainties could explain about half of the overall sulfate forcing diversity of 0.13 W m-2 in the AeroCom Direct Radiative Effect experiment. Host model errors in aerosol radiative forcing are largest in regions of uncertain host model components, such as stratocumulus cloud decks or areas with poorly constrained surface albedos, such as sea ice. Our results demonstrate that host model uncertainties are an important component of aerosol forcing uncertainty that require further attention.
Huh, S.; Dickey, D.A.; Meador, M.R.; Ruhl, K.E.
2005-01-01
A temporal analysis of the number and duration of exceedences of high- and low-flow thresholds was conducted to determine the number of years required to detect a level shift using data from Virginia, North Carolina, and South Carolina. Two methods were used: ordinary least squares assuming a known error variance, and generalized least squares without a known error variance. Using ordinary least squares, the mean number of years required to detect a one standard deviation level shift in measures of low-flow variability was 57.2 (28.6 on either side of the break), compared to 40.0 years for measures of high-flow variability. These means become 57.6 and 41.6 when generalized least squares is used. No significant relations between years and elevation or drainage area were detected (P>0.05). Cluster analysis did not suggest geographic patterns in years related to physiography or major hydrologic regions. Referring to the number of observations required to detect a one standard deviation shift as 'characterizing' the variability, it appears that at least 20 years of record on either side of a shift may be necessary to adequately characterize high-flow variability. A longer streamflow record (about 30 years on either side) may be required to characterize low-flow variability. © 2005 Elsevier B.V. All rights reserved.
[Effect strength variation in the single group pre-post study design: a critical review].
Maier-Riehle, B; Zwingmann, C
2000-08-01
In Germany, studies in rehabilitation research--in particular evaluation studies and examinations of quality of outcome--have so far mostly been executed according to the uncontrolled one-group pre-post design. Assessment of outcome is usually made by comparing the pre- and post-treatment means of the outcome variables. The pre-post differences are checked, and in case of significance, the results are increasingly presented in form of effect sizes. For this reason, this contribution presents different effect size indices used for the one-group pre-post design--in spite of fundamental doubts which exist in relation to that design due to its limited internal validity. The numerator concerning all effect size indices of the one-group pre-post design is defined as difference between the pre- and post-treatment means, whereas there are different possibilities and recommendations with regard to the denominator and hence the standard deviation that serves as the basis for standardizing the difference of the means. Used above all are standardization oriented towards the standard deviation of the pre-treatment scores, standardization oriented towards the pooled standard deviation of the pre- and post-treatment scores, and standardization oriented towards the standard deviation of the pre-post differences. Two examples are given to demonstrate that the different modes of calculating effect size indices in the one-group pre-post design may lead to very different outcome patterns. Additionally, it is pointed out that effect sizes from the uncontrolled one-group pre-post design generally tend to be higher than effect sizes from studies conducted with control groups. Finally, the pros and cons of the different effect size indices are discussed and recommendations are given.
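The three standardizations discussed above differ only in the denominator. As a worked illustration (the data and function name are hypothetical, not from the article), the sketch below computes all three indices from paired pre- and post-treatment scores:

```python
import numpy as np

def pre_post_effect_sizes(pre, post):
    """Three common effect size indices for a one-group pre-post design.

    The numerator for all three is the mean pre-post change; they differ
    only in the standard deviation used to standardize that change.
    """
    pre = np.asarray(pre, dtype=float)
    post = np.asarray(post, dtype=float)
    diff_mean = post.mean() - pre.mean()
    sd_pre = pre.std(ddof=1)                                      # pre-score SD
    sd_pooled = np.sqrt((pre.var(ddof=1) + post.var(ddof=1)) / 2)  # pooled SD
    sd_change = (post - pre).std(ddof=1)                          # SD of differences
    return {
        "es_pre": diff_mean / sd_pre,
        "es_pooled": diff_mean / sd_pooled,
        "es_change": diff_mean / sd_change,
    }
```

When pre and post scores are highly correlated, the SD of the differences is much smaller than the pre-score or pooled SD, so standardizing by it can yield markedly larger effect sizes, which is one reason the three indices produce the divergent outcome patterns the article describes.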
Patel, Sanjay R.; Weng, Jia; Rueschman, Michael; Dudley, Katherine A.; Loredo, Jose S.; Mossavar-Rahmani, Yasmin; Ramirez, Maricelle; Ramos, Alberto R.; Reid, Kathryn; Seiger, Ashley N.; Sotres-Alvarez, Daniela; Zee, Phyllis C.; Wang, Rui
2015-01-01
Study Objectives: While actigraphy is considered objective, the process of setting rest intervals to calculate sleep variables is subjective. We sought to evaluate the reproducibility of actigraphy-derived measures of sleep using a standardized algorithm for setting rest intervals. Design: Observational study. Setting: Community-based. Participants: A random sample of 50 adults aged 18–64 years free of severe sleep apnea participating in the Sueño sleep ancillary study to the Hispanic Community Health Study/Study of Latinos. Interventions: N/A. Measurements and Results: Participants underwent 7 days of continuous wrist actigraphy and completed daily sleep diaries. Studies were scored twice by each of two scorers. Rest intervals were set using a standardized hierarchical approach based on event marker, diary, light, and activity data. Sleep/wake status was then determined for each 30-sec epoch using a validated algorithm, and this was used to generate 11 variables: mean nightly sleep duration, nap duration, 24-h sleep duration, sleep latency, sleep maintenance efficiency, sleep fragmentation index, sleep onset time, sleep offset time, sleep midpoint time, standard deviation of sleep duration, and standard deviation of sleep midpoint. Intra-scorer intraclass correlation coefficients (ICCs) were high, ranging from 0.911 to 0.995 across all 11 variables. Similarly, inter-scorer ICCs were high, also ranging from 0.911 to 0.995, and mean inter-scorer differences were small. Bland-Altman plots did not reveal any systematic disagreement in scoring. Conclusions: With use of a standardized algorithm to set rest intervals, scoring of actigraphy for the purpose of generating a wide array of sleep variables is highly reproducible. Citation: Patel SR, Weng J, Rueschman M, Dudley KA, Loredo JS, Mossavar-Rahmani Y, Ramirez M, Ramos AR, Reid K, Seiger AN, Sotres-Alvarez D, Zee PC, Wang R. Reproducibility of a standardized actigraphy scoring algorithm for sleep in a US Hispanic/Latino population. SLEEP 2015;38(9):1497–1503. PMID:25845697
Are Study and Journal Characteristics Reliable Indicators of "Truth" in Imaging Research?
Frank, Robert A; McInnes, Matthew D F; Levine, Deborah; Kressel, Herbert Y; Jesurum, Julia S; Petrcich, William; McGrath, Trevor A; Bossuyt, Patrick M
2018-04-01
Purpose To evaluate whether journal-level variables (impact factor, cited half-life, and Standards for Reporting of Diagnostic Accuracy Studies [STARD] endorsement) and study-level variables (citation rate, timing of publication, and order of publication) are associated with the distance between primary study results and summary estimates from meta-analyses. Materials and Methods MEDLINE was searched for meta-analyses of imaging diagnostic accuracy studies, published from January 2005 to April 2016. Data on journal-level and primary-study variables were extracted for each meta-analysis. Primary studies were dichotomized by variable as first versus subsequent publication, publication before versus after STARD introduction, STARD endorsement, or by median split. The mean absolute deviation of primary study estimates from the corresponding summary estimates for sensitivity and specificity was compared between groups. Means and confidence intervals were obtained by using bootstrap resampling; P values were calculated by using a t test. Results Ninety-eight meta-analyses summarizing 1458 primary studies met the inclusion criteria. There was substantial variability, but no significant differences, in deviations from the summary estimate between paired groups (P > .0041 in all comparisons). The largest difference found was in mean deviation for sensitivity, which was observed for publication timing, where studies published first on a topic demonstrated a mean deviation that was 2.5 percentage points smaller than subsequently published studies (P = .005). For journal-level factors, the greatest difference found (1.8 percentage points; P = .088) was in mean deviation for sensitivity in journals with impact factors above the median compared with those below the median. Conclusion Journal- and study-level variables considered important when evaluating diagnostic accuracy information to guide clinical decisions are not systematically associated with distance from the truth; critical appraisal of individual articles is recommended. © RSNA, 2017 Online supplemental material is available for this article.
Time-variant random interval natural frequency analysis of structures
NASA Astrophysics Data System (ADS)
Wu, Binhua; Wu, Di; Gao, Wei; Song, Chongmin
2018-02-01
This paper presents a new robust method, namely the unified interval Chebyshev-based random perturbation method, to tackle the hybrid random interval structural natural frequency problem. In the proposed approach, the random perturbation method is implemented to furnish the statistical features (i.e., mean and standard deviation), and a Chebyshev surrogate model strategy is incorporated to formulate the statistical information of natural frequency with regard to the interval inputs. The comprehensive analysis framework combines the strengths of both methods in a way that dramatically reduces computational cost. The presented method is thus capable of accurately and efficiently investigating the day-to-day time-variant natural frequency of structures under concrete creep effects with probabilistic and interval uncertain variables. The extreme bounds of the mean and standard deviation of natural frequency are captured through the optimization strategy embedded within the analysis procedure. Three numerical examples, progressing in complexity in both structure type and uncertainty variables, are presented to demonstrate the applicability, accuracy and efficiency of the proposed method.
NASA Astrophysics Data System (ADS)
Zhu, Xiaowei; Iungo, G. Valerio; Leonardi, Stefano; Anderson, William
2017-02-01
For a horizontally homogeneous, neutrally stratified atmospheric boundary layer (ABL), aerodynamic roughness length, z_0, is the effective elevation at which the streamwise component of mean velocity is zero. A priori prediction of z_0 based on topographic attributes remains an open line of inquiry in planetary boundary-layer research. Urban topographies - the topic of this study - exhibit spatial heterogeneities associated with variability of building height, width, and proximity with adjacent buildings; such variability renders a priori, prognostic z_0 models appealing. Here, large-eddy simulation (LES) has been used in an extensive parametric study to characterize the ABL response (and z_0) to a range of synthetic, urban-like topographies wherein statistical moments of the topography have been systematically varied. Using LES results, we determined the hierarchical influence of topographic moments relevant to setting z_0. We demonstrate that standard deviation and skewness are important, while kurtosis is negligible. This finding is reconciled with a model recently proposed by Flack and Schultz (J Fluids Eng 132:041203-1-041203-10, 2010), who demonstrate that z_0 can be modelled with standard deviation and skewness, and two empirical coefficients (one for each moment). We find that the empirical coefficient related to skewness is not constant, but exhibits a dependence on standard deviation over certain ranges. For idealized, quasi-uniform cubic topographies and for complex, fully random urban-like topographies, we demonstrate strong performance of the generalized Flack and Schultz model against contemporary roughness correlations.
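The two-coefficient model attributed to Flack and Schultz can be written generically as z_0 = α·σ·(1 + Sk)^β, with σ the standard deviation and Sk the skewness of the surface elevation. The sketch below assumes this functional form; the coefficient values, which the abstract notes are not universally constant, are left as inputs rather than hard-coded.

```python
import numpy as np

def roughness_length(heights, alpha, beta):
    """Roughness length from topographic moments, assuming a
    Flack & Schultz-type two-coefficient form:

        z0 = alpha * sigma * (1 + Sk)**beta

    heights : 1-D array of surface elevations
    alpha, beta : empirical coefficients (one per moment; values are
                  fitted, not constants of nature)
    """
    h = np.asarray(heights, dtype=float)
    sigma = h.std(ddof=1)
    # Population skewness of the elevation field
    sk = ((h - h.mean()) ** 3).mean() / h.std() ** 3
    return alpha * sigma * (1.0 + sk) ** beta
```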
Spatial and temporal variability in forest-atmosphere CO2 exchange
D.Y. Hollinger; J. Aber; B. Dail; E.A. Davidson; S.M. Goltz; et al.
2004-01-01
Seven years of carbon dioxide flux measurements indicate that a ∼ 90-year-old spruce dominated forest in Maine, USA, has been sequestering 174±46 gCm-2 yr-1 (mean±1 standard deviation, nocturnal friction velocity (u*) threshold >0.25ms-1...
Children's Use of the Prosodic Characteristics of Infant-Directed Speech.
ERIC Educational Resources Information Center
Weppelman, Tammy L.; Bostow, Angela; Schiffer, Ryan; Elbert-Perez, Evelyn; Newman, Rochelle S.
2003-01-01
Examined whether young children (4 years of age) show prosodic changes when speaking to infants. Measured children's word duration in infant-directed speech compared to adult-directed speech, examined amplitude variability, and examined both average fundamental frequency and fundamental frequency standard deviation. Results indicate that…
Guo, Changning; Doub, William H; Kauffman, John F
2010-08-01
Monte Carlo simulations were applied to investigate the propagation of uncertainty in both input variables and response measurements on model prediction for nasal spray product performance design of experiment (DOE) models in the first part of this study, with an initial assumption that the models perfectly represent the relationship between input variables and the measured responses. In this article, we discard the initial assumption, and extended the Monte Carlo simulation study to examine the influence of both input variable variation and product performance measurement variation on the uncertainty in DOE model coefficients. The Monte Carlo simulations presented in this article illustrate the importance of careful error propagation during product performance modeling. Our results show that the error estimates based on Monte Carlo simulation result in smaller model coefficient standard deviations than those from regression methods. This suggests that the estimated standard deviations from regression may overestimate the uncertainties in the model coefficients. Monte Carlo simulations provide a simple software solution to understand the propagation of uncertainty in complex DOE models so that design space can be specified with statistically meaningful confidence levels. (c) 2010 Wiley-Liss, Inc. and the American Pharmacists Association
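The kind of Monte Carlo error propagation described here can be outlined in a few lines: perturb both the input settings and the measured responses with their assumed measurement noise, refit the model on each draw, and take the spread of the fitted coefficients as the propagated uncertainty. Below is a minimal sketch under these assumptions, using ordinary least squares as a stand-in for the authors' DOE modeling software:

```python
import numpy as np

def mc_coefficient_uncertainty(X, y, sd_x, sd_y, n_draws=2000, seed=0):
    """Monte Carlo propagation of input and response measurement noise
    into least-squares model coefficients.

    X : (n, p) design matrix of input variables
    y : (n,) measured responses
    sd_x : scalar or (p,) measurement SDs of the inputs
    sd_y : scalar measurement SD of the responses
    Returns the standard deviation of each coefficient across draws.
    """
    rng = np.random.default_rng(seed)
    coefs = []
    for _ in range(n_draws):
        X_mc = X + rng.normal(0.0, sd_x, size=X.shape)  # perturb inputs
        y_mc = y + rng.normal(0.0, sd_y, size=y.shape)  # perturb responses
        beta, *_ = np.linalg.lstsq(X_mc, y_mc, rcond=None)
        coefs.append(beta)
    return np.std(coefs, axis=0, ddof=1)
```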
NASA Astrophysics Data System (ADS)
Gustafsson, Johan; Brolin, Gustav; Cox, Maurice; Ljungberg, Michael; Johansson, Lena; Sjögreen Gleisner, Katarina
2015-11-01
A computer model of a patient-specific clinical 177Lu-DOTATATE therapy dosimetry system is constructed and used for investigating the variability of renal absorbed dose and biologically effective dose (BED) estimates. As patient models, three anthropomorphic computer phantoms coupled to a pharmacokinetic model of 177Lu-DOTATATE are used. Aspects included in the dosimetry-process model are the gamma-camera calibration via measurement of the system sensitivity, selection of imaging time points, generation of mass-density maps from CT, SPECT imaging, volume-of-interest delineation, calculation of absorbed-dose rate via a combination of local energy deposition for electrons and Monte Carlo simulations of photons, curve fitting and integration to absorbed dose and BED. By introducing variabilities in these steps the combined uncertainty in the output quantity is determined. The importance of different sources of uncertainty is assessed by observing the decrease in standard deviation when removing a particular source. The obtained absorbed dose and BED standard deviations are approximately 6% and slightly higher if considering the root mean square error. The most important sources of variability are the compensation for partial volume effects via a recovery coefficient and the gamma-camera calibration via the system sensitivity.
Batterham, Philip J; Bunce, David; Mackinnon, Andrew J; Christensen, Helen
2014-01-01
Very few studies have examined the association between intra-individual reaction time variability and subsequent mortality. Furthermore, the ability of simple measures of variability to predict mortality has not been compared with more complex measures. In a prospective cohort study, 896 community-based Australian adults aged 70+ were interviewed up to four times from 1990 to 2002, with vital status assessed until June 2007. From this cohort, 770-790 participants were included in Cox proportional hazards regression models of survival. Vital status and time in study were used to conduct survival analyses. The mean reaction time and three measures of intra-individual reaction time variability were calculated separately across 20 trials of simple and choice reaction time tasks. Models were adjusted for a range of demographic, physical health and mental health measures. Greater intra-individual simple reaction time variability, as assessed by the raw standard deviation (raw SD), coefficient of variation (CV) or the intra-individual standard deviation (ISD), was strongly associated with an increased hazard of all-cause mortality in adjusted Cox regression models. The mean reaction time had no significant association with mortality. Intra-individual variability in simple reaction time appears to have a robust association with mortality over 17 years. Health professionals such as neuropsychologists may benefit in their detection of neuropathology by supplementing neuropsychiatric testing with the straightforward process of testing simple reaction time and calculating the raw SD or CV.
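The simple variability measures named in this abstract are easy to compute from a block of trials. The sketch below computes the raw SD and CV directly; as a stand-in for the full ISD (which typically adjusts for systematic trends such as practice effects), it detrends linearly before taking the within-person SD. The detrending choice is an assumption for illustration, not the study's exact procedure.

```python
import numpy as np

def rt_variability(trials):
    """Simple intra-individual reaction time variability measures
    across a block of trials (e.g., 20 trials per task)."""
    trials = np.asarray(trials, dtype=float)
    raw_sd = trials.std(ddof=1)          # raw standard deviation
    cv = raw_sd / trials.mean()          # coefficient of variation
    # Crude ISD-style measure: SD of residuals after removing a linear
    # trend across trials (a stand-in for the full ISD adjustment).
    t = np.arange(trials.size)
    slope, intercept = np.polyfit(t, trials, 1)
    residuals = trials - (slope * t + intercept)
    isd = residuals.std(ddof=1)
    return raw_sd, cv, isd
```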
Williams, Rachel E; Arabi, Mazdak; Loftis, Jim; Elmund, G Keith
2014-09-01
Implementation of numeric nutrient standards in Colorado has prompted a need for greater understanding of human impacts on ambient nutrient levels. This study explored the variability of annual nutrient concentrations due to upstream anthropogenic influences and developed a mathematical expression for the number of samples required to estimate median concentrations for standard compliance. A procedure grounded in statistical hypothesis testing was developed to estimate the number of annual samples required at monitoring locations while taking into account the difference between the median concentrations and the water quality standard for a lognormal population. For the Cache La Poudre River in northern Colorado, the relationship between the median and standard deviation of total N (TN) and total P (TP) concentrations and the upstream point and nonpoint concentrations and general hydrologic descriptors was explored using multiple linear regression models. Very strong relationships were evident between the upstream anthropogenic influences and annual medians for TN and TP (R2 > 0.85, P < 0.001) and the corresponding standard deviations (R2 > 0.7, P < 0.001). Sample sizes required to demonstrate (non)compliance with the standard depend on the measured water quality conditions. When the median concentration differs from the standard by >20%, few samples are needed to reach a 95% confidence level. When the median is within 20% of the corresponding water quality standard, however, the required sample size increases rapidly, and hundreds of samples may be required. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
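A generic version of the sample size reasoning can be sketched on the log scale, where a lognormal population becomes normal: the required n grows as the squared ratio of the log-scale standard deviation to the log-scale distance between the median and the standard. This is a textbook one-sample calculation offered for intuition, not the paper's exact expression; all names and defaults below are assumptions.

```python
import numpy as np
from scipy import stats

def required_samples(median, standard, sd_log, alpha=0.05, power=0.95):
    """Approximate annual sample size to show that a lognormal
    population's median differs from a water quality standard
    (generic one-sided, one-sample calculation on the log scale).

    median   : measured median concentration
    standard : water quality standard
    sd_log   : standard deviation of log-transformed concentrations
    """
    delta = abs(np.log(median) - np.log(standard))  # log-scale effect size
    z_a = stats.norm.ppf(1 - alpha)                 # one-sided test
    z_b = stats.norm.ppf(power)
    return int(np.ceil(((z_a + z_b) * sd_log / delta) ** 2))

# A median 20% above a standard of 1.0 needs far fewer samples than a
# median only 5% above it (assuming sd_log = 0.5):
#   required_samples(1.20, 1.0, 0.5) -> 82
#   required_samples(1.05, 1.0, 0.5) -> 1137
```

This reproduces the qualitative behaviour the study reports: sample sizes stay small when the median is well away from the standard and grow rapidly once the median falls within about 20% of it.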
de Winter, Joost C F; Gosling, Samuel D; Potter, Jeff
2016-09-01
The Pearson product–moment correlation coefficient ( r p ) and the Spearman rank correlation coefficient ( r s ) are widely used in psychological research. We compare r p and r s on 3 criteria: variability, bias with respect to the population value, and robustness to an outlier. Using simulations across low (N = 5) to high (N = 1,000) sample sizes we show that, for normally distributed variables, r p and r s have similar expected values but r s is more variable, especially when the correlation is strong. However, when the variables have high kurtosis, r p is more variable than r s . Next, we conducted a sampling study of a psychometric dataset featuring symmetrically distributed data with light tails, and of 2 Likert-type survey datasets, 1 with light-tailed and the other with heavy-tailed distributions. Consistent with the simulations, r p had lower variability than r s in the psychometric dataset. In the survey datasets with heavy-tailed variables in particular, r s had lower variability than r p , and often corresponded more accurately to the population Pearson correlation coefficient ( R p ) than r p did. The simulations and the sampling studies showed that variability in terms of standard deviations can be reduced by about 20% by choosing r s instead of r p . In comparison, increasing the sample size by a factor of 2 results in a 41% reduction of the standard deviations of r s and r p . In conclusion, r p is suitable for light-tailed distributions, whereas r s is preferable when variables feature heavy-tailed distributions or when outliers are present, as is often the case in psychological research. PsycINFO Database Record (c) 2016 APA, all rights reserved
Global atmospheric carbon budget: results from an ensemble of atmospheric CO2 inversions
NASA Astrophysics Data System (ADS)
Peylin, P.; Law, R. M.; Gurney, K. R.; Chevallier, F.; Jacobson, A. R.; Maki, T.; Niwa, Y.; Patra, P. K.; Peters, W.; Rayner, P. J.; Rödenbeck, C.; van der Laan-Luijkx, I. T.; Zhang, X.
2013-10-01
Atmospheric CO2 inversions estimate surface carbon fluxes from an optimal fit to atmospheric CO2 measurements, usually including prior constraints on the flux estimates. Eleven sets of carbon flux estimates are compared, generated by different inversion systems that vary in their inversion methods, choice of atmospheric data, transport model and prior information. The inversions were run for at least 5 yr in the period between 1990 and 2010. Mean fluxes for 2001-2004, seasonal cycles, interannual variability and trends are compared for the tropics and northern and southern extra-tropics, and separately for land and ocean. Some continental/basin-scale subdivisions are also considered where the atmospheric network is denser. Four-year mean fluxes are reasonably consistent across inversions at global/latitudinal scale, with a large total (land plus ocean) carbon uptake in the north (-3.4 Pg C yr-1 (±0.5 Pg C yr-1 standard deviation), with slightly more uptake over land than over ocean), a significant although more variable source over the tropics (1.6 ± 0.9 Pg C yr-1) and a compensatory sink of similar magnitude in the south (-1.4 ± 0.5 Pg C yr-1) corresponding mainly to an ocean sink. The largest differences across inversions occur in the balance between tropical land sources and southern land sinks. Interannual variability (IAV) in carbon fluxes is larger for land than ocean regions (standard deviation around 1.06 versus 0.33 Pg C yr-1 for the 1996-2007 period), with much higher consistency among the inversions for the land. While the tropical land explains most of the IAV (standard deviation ~0.65 Pg C yr-1), the northern and southern land also contribute (standard deviation ~0.39 Pg C yr-1). Most inversions tend to indicate an increase of the northern land carbon uptake from the late 1990s to 2008 (around 0.1 Pg C yr-1), predominantly in North Asia. The mean seasonal cycle appears to be well constrained by the atmospheric data over the northern land (at the continental scale), but remains highly dependent on the prior flux seasonality over the ocean. Finally, we provide recommendations for interpreting the regional fluxes, along with the uncertainty estimates.
Nissim, Nir; Shahar, Yuval; Boland, Mary Regina; Tatonetti, Nicholas P; Elovici, Yuval; Hripcsak, George; Moskovitch, Robert
2018-01-01
Background and Objectives Labeling instances by domain experts for classification is often time consuming and expensive. To reduce such labeling efforts, we had proposed the application of active learning (AL) methods, introduced our CAESAR-ALE framework for classifying the severity of clinical conditions, and shown its significant reduction of labeling efforts. The use of any of three AL methods (one well known [SVM-Margin], and two that we introduced [Exploitation and Combination_XA]) significantly reduced (by 48% to 64%) condition labeling efforts, compared to standard passive (random instance-selection) SVM learning. Furthermore, our new AL methods achieved maximal accuracy using 12% fewer labeled cases than the SVM-Margin AL method. However, because labelers have varying levels of expertise, a major issue associated with learning methods, and AL methods in particular, is how to best to use the labeling provided by a committee of labelers. First, we wanted to know, based on the labelers’ learning curves, whether using AL methods (versus standard passive learning methods) has an effect on the Intra-labeler variability (within the learning curve of each labeler) and inter-labeler variability (among the learning curves of different labelers). Then, we wanted to examine the effect of learning (either passively or actively) from the labels created by the majority consensus of a group of labelers. Methods We used our CAESAR-ALE framework for classifying the severity of clinical conditions, the three AL methods and the passive learning method, as mentioned above, to induce the classifications models. We used a dataset of 516 clinical conditions and their severity labeling, represented by features aggregated from the medical records of 1.9 million patients treated at Columbia University Medical Center. We analyzed the variance of the classification performance within (intra-labeler), and especially among (inter-labeler) the classification models that were induced by using the labels provided by seven labelers. We also compared the performance of the passive and active learning models when using the consensus label. Results The AL methods produced, for the models induced from each labeler, smoother Intra-labeler learning curves during the training phase, compared to the models produced when using the passive learning method. The mean standard deviation of the learning curves of the three AL methods over all labelers (mean: 0.0379; range: [0.0182 to 0.0496]), was significantly lower (p = 0.049) than the Intra-labeler standard deviation when using the passive learning method (mean: 0.0484; range: [0.0275 to 0.0724). Using the AL methods resulted in a lower mean Inter-labeler AUC standard deviation among the AUC values of the labelers’ different models during the training phase, compared to the variance of the induced models’ AUC values when using passive learning. The Inter-labeler AUC standard deviation, using the passive learning method (0.039), was almost twice as high as the Inter-labeler standard deviation using our two new AL methods (0.02 and 0.019, respectively). The SVM-Margin AL method resulted in an Inter-labeler standard deviation (0.029) that was higher by almost 50% than that of our two AL methods. The difference in the inter-labeler standard deviation between the passive learning method and the SVM-Margin learning method was significant (p = 0.042). 
The difference between the SVM-Margin and Exploitation methods was not significant (p = 0.29), nor was the difference between the Combination_XA and Exploitation methods (p = 0.67). Finally, using the consensus label led to a learning curve that had a higher mean intra-labeler variance, but eventually resulted in an AUC that was at least as high as the AUC achieved using the gold standard label and that was always higher than the expected mean AUC of a randomly selected labeler, regardless of the choice of learning method (including a passive learning method). Using a paired t-test, the difference between the intra-labeler AUC standard deviation when using the consensus label, versus that value when using the other two labeling strategies, was significant only when using the passive learning method (p = 0.014), but not when using any of the three AL methods. Conclusions The use of AL methods (a) reduces intra-labeler variability in the performance of the induced models during the training phase, and thus reduces the risk of halting the process at a local minimum that is significantly different in performance from the rest of the learned models; and (b) reduces inter-labeler performance variance, and thus reduces the dependence on the use of a particular labeler. In addition, the use of a consensus label, agreed upon by a rather uneven group of labelers, might be at least as good as using the gold standard labeler, who might not be available, and certainly better than randomly selecting one of the group's individual labelers. Finally, using the AL methods with the consensus label reduced the intra-labeler AUC variance during the learning phase, compared to using passive learning. PMID:28456512
Nissim, Nir; Shahar, Yuval; Elovici, Yuval; Hripcsak, George; Moskovitch, Robert
2017-09-01
Labeling instances by domain experts for classification is often time-consuming and expensive. To reduce such labeling efforts, we had proposed the application of active learning (AL) methods, introduced our CAESAR-ALE framework for classifying the severity of clinical conditions, and shown its significant reduction of labeling efforts. The use of any of three AL methods (one well known [SVM-Margin], and two that we introduced [Exploitation and Combination_XA]) significantly reduced (by 48% to 64%) condition labeling efforts, compared to standard passive (random instance-selection) SVM learning. Furthermore, our new AL methods achieved maximal accuracy using 12% fewer labeled cases than the SVM-Margin AL method. However, because labelers have varying levels of expertise, a major issue associated with learning methods, and AL methods in particular, is how best to use the labeling provided by a committee of labelers. First, we wanted to know, based on the labelers' learning curves, whether using AL methods (versus standard passive learning methods) has an effect on the intra-labeler variability (within the learning curve of each labeler) and the inter-labeler variability (among the learning curves of different labelers). Then, we wanted to examine the effect of learning (either passively or actively) from the labels created by the majority consensus of a group of labelers. We used our CAESAR-ALE framework for classifying the severity of clinical conditions, the three AL methods and the passive learning method, as mentioned above, to induce the classification models. We used a dataset of 516 clinical conditions and their severity labeling, represented by features aggregated from the medical records of 1.9 million patients treated at Columbia University Medical Center. We analyzed the variance of the classification performance within (intra-labeler), and especially among (inter-labeler), the classification models that were induced by using the labels provided by seven labelers. We also compared the performance of the passive and active learning models when using the consensus label. The AL methods produced, for the models induced from each labeler, smoother intra-labeler learning curves during the training phase, compared to the models produced when using the passive learning method. The mean standard deviation of the learning curves of the three AL methods over all labelers (mean: 0.0379; range: [0.0182 to 0.0496]) was significantly lower (p=0.049) than the intra-labeler standard deviation when using the passive learning method (mean: 0.0484; range: [0.0275-0.0724]). Using the AL methods resulted in a lower mean inter-labeler AUC standard deviation among the AUC values of the labelers' different models during the training phase, compared to the variance of the induced models' AUC values when using passive learning. The inter-labeler AUC standard deviation using the passive learning method (0.039) was almost twice as high as the inter-labeler standard deviation using our two new AL methods (0.02 and 0.019, respectively). The SVM-Margin AL method resulted in an inter-labeler standard deviation (0.029) that was almost 50% higher than that of our two AL methods. The difference in the inter-labeler standard deviation between the passive learning method and the SVM-Margin learning method was significant (p=0.042). The difference between the SVM-Margin and Exploitation methods was not significant (p=0.29), nor was the difference between the Combination_XA and Exploitation methods (p=0.67).
Finally, using the consensus label led to a learning curve that had a higher mean intra-labeler variance, but eventually resulted in an AUC that was at least as high as the AUC achieved using the gold standard label and that was always higher than the expected mean AUC of a randomly selected labeler, regardless of the choice of learning method (including a passive learning method). Using a paired t-test, the difference between the intra-labeler AUC standard deviation when using the consensus label, versus that value when using the other two labeling strategies, was significant only when using the passive learning method (p=0.014), but not when using any of the three AL methods. The use of AL methods (a) reduces intra-labeler variability in the performance of the induced models during the training phase, and thus reduces the risk of halting the process at a local minimum that is significantly different in performance from the rest of the learned models; and (b) reduces inter-labeler performance variance, and thus reduces the dependence on the use of a particular labeler. In addition, the use of a consensus label, agreed upon by a rather uneven group of labelers, might be at least as good as using the gold standard labeler, who might not be available, and certainly better than randomly selecting one of the group's individual labelers. Finally, using the AL methods with the consensus label reduced the intra-labeler AUC variance during the learning phase, compared to using passive learning. Copyright © 2017 Elsevier B.V. All rights reserved.
The Effect of Lung Volume on Selected Phonatory and Articulatory Variables.
ERIC Educational Resources Information Center
Dromey, Christopher; Ramig, Lorraine Olson
1998-01-01
This study examined effects of manipulating lung volume on phonatory and articulatory kinematic behavior during sentence production in ten healthy adults. Significant differences at different lung volume levels were found for sound pressure level, fundamental frequency, semitone standard deviation, and upper and lower lip displacements and peak…
Min and Max Exponential Extreme Interval Values and Statistics
ERIC Educational Resources Information Center
Jance, Marsha; Thomopoulos, Nick
2009-01-01
The extreme interval values and statistics (expected value, median, mode, standard deviation, and coefficient of variation) for the smallest (min) and largest (max) values of exponentially distributed variables with parameter λ = 1 are examined for different observation (sample) sizes. An extreme interval value g_a is defined as a…
Ishibashi, Hiroki; Takano, Masashi; Sasa, Hidenori; Furuya, Kenichi
2016-01-01
Background Placenta previa, one of the most severe obstetric complications, carries an increased risk of intraoperative massive hemorrhage. Several risk factors for intraoperative hemorrhage have been identified to date. However, the correlation between birth weight and intraoperative hemorrhage has not been investigated. Here we estimate the correlation between birth weight and the occurrence of intraoperative massive hemorrhage in placenta previa. Materials and Methods We included all 256 singleton pregnancies delivered via cesarean section at our hospital because of placenta previa between 2003 and 2015. We calculated not only measured birth weights but also standard deviation values according to the Japanese standard growth curve to adjust for differences in gestational age. We assessed the correlation between birth weight and the occurrence of intraoperative massive hemorrhage (>1500 mL blood loss). Receiver operating characteristic curves were constructed to determine the cutoff value of intraoperative massive hemorrhage. Results Of 256 pregnant women with placenta previa, 96 (38%) developed intraoperative massive hemorrhage. Receiver operating characteristic curves revealed that the area under the curve for the association between the standard deviation of birth weight and intraoperative massive hemorrhage was 0.71. The cutoff value with a sensitivity of 81.3% and specificity of 55.6% was −0.33 standard deviation. The multivariate analysis revealed that a standard deviation of >−0.33 (odds ratio, 5.88; 95% confidence interval, 3.04–12.00), need for hemostatic procedures (odds ratio, 3.31; 95% confidence interval, 1.79–6.25), and placental adhesion (odds ratio, 12.68; 95% confidence interval, 2.85–92.13) were independent risk factors for intraoperative massive hemorrhage. Conclusion In patients with placenta previa, a birth weight >−0.33 standard deviation was a significant risk indicator of massive hemorrhage during cesarean section. Based on this result, further studies are required to investigate whether fetal weight estimated by ultrasonography can predict hemorrhage during cesarean section in patients with placenta previa. PMID:27902772
Soyama, Hiroaki; Miyamoto, Morikazu; Ishibashi, Hiroki; Takano, Masashi; Sasa, Hidenori; Furuya, Kenichi
2016-01-01
Placenta previa, one of the most severe obstetric complications, carries an increased risk of intraoperative massive hemorrhage. Several risk factors for intraoperative hemorrhage have been identified to date. However, the correlation between birth weight and intraoperative hemorrhage has not been investigated. Here we estimate the correlation between birth weight and the occurrence of intraoperative massive hemorrhage in placenta previa. We included all 256 singleton pregnancies delivered via cesarean section at our hospital because of placenta previa between 2003 and 2015. We calculated not only measured birth weights but also standard deviation values according to the Japanese standard growth curve to adjust for differences in gestational age. We assessed the correlation between birth weight and the occurrence of intraoperative massive hemorrhage (>1500 mL blood loss). Receiver operating characteristic curves were constructed to determine the cutoff value of intraoperative massive hemorrhage. Of 256 pregnant women with placenta previa, 96 (38%) developed intraoperative massive hemorrhage. Receiver operating characteristic curves revealed that the area under the curve for the association between the standard deviation of birth weight and intraoperative massive hemorrhage was 0.71. The cutoff value with a sensitivity of 81.3% and specificity of 55.6% was -0.33 standard deviation. The multivariate analysis revealed that a standard deviation of >-0.33 (odds ratio, 5.88; 95% confidence interval, 3.04-12.00), need for hemostatic procedures (odds ratio, 3.31; 95% confidence interval, 1.79-6.25), and placental adhesion (odds ratio, 12.68; 95% confidence interval, 2.85-92.13) were independent risk factors for intraoperative massive hemorrhage. In patients with placenta previa, a birth weight >-0.33 standard deviation was a significant risk indicator of massive hemorrhage during cesarean section. Based on this result, further studies are required to investigate whether fetal weight estimated by ultrasonography can predict hemorrhage during cesarean section in patients with placenta previa.
NASA Astrophysics Data System (ADS)
Stier, P.; Schutgens, N. A. J.; Bellouin, N.; Bian, H.; Boucher, O.; Chin, M.; Ghan, S.; Huneeus, N.; Kinne, S.; Lin, G.; Ma, X.; Myhre, G.; Penner, J. E.; Randles, C. A.; Samset, B.; Schulz, M.; Takemura, T.; Yu, F.; Yu, H.; Zhou, C.
2013-03-01
Simulated multi-model "diversity" in aerosol direct radiative forcing estimates is often perceived as a measure of aerosol uncertainty. However, current models used for aerosol radiative forcing calculations vary considerably in model components relevant for forcing calculations and the associated "host-model uncertainties" are generally convoluted with the actual aerosol uncertainty. In this AeroCom Prescribed intercomparison study we systematically isolate and quantify host model uncertainties on aerosol forcing experiments through prescription of identical aerosol radiative properties in twelve participating models. Even with prescribed aerosol radiative properties, simulated clear-sky and all-sky aerosol radiative forcings show significant diversity. For a purely scattering case with globally constant optical depth of 0.2, the global-mean all-sky top-of-atmosphere radiative forcing is -4.47 Wm-2 and the inter-model standard deviation is 0.55 Wm-2, corresponding to a relative standard deviation of 12%. For a case with partially absorbing aerosol with an aerosol optical depth of 0.2 and single scattering albedo of 0.8, the forcing changes to 1.04 Wm-2, and the standard deviation increases to 1.01 Wm-2, corresponding to a significant relative standard deviation of 97%. However, the top-of-atmosphere forcing variability owing to absorption (subtracting the scattering case from the case with scattering and absorption) is low, with absolute (relative) standard deviations of 0.45 Wm-2 (8%) clear-sky and 0.62 Wm-2 (11%) all-sky. Scaling the forcing standard deviation for a purely scattering case to match the sulfate radiative forcing in the AeroCom Direct Effect experiment demonstrates that host model uncertainties could explain about 36% of the overall sulfate forcing diversity of 0.11 Wm-2 in the AeroCom Direct Radiative Effect experiment. Host model errors in aerosol radiative forcing are largest in regions of uncertain host model components, such as stratocumulus cloud decks or areas with poorly constrained surface albedos, such as sea ice. Our results demonstrate that host model uncertainties are an important component of aerosol forcing uncertainty that requires further attention.
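The relative standard deviations quoted above are simple ratios of the inter-model standard deviation to the magnitude of the ensemble mean. A minimal Python sketch; the per-model forcing values below are invented for illustration (only the -4.47 Wm-2 ensemble mean is taken from the abstract):

```python
import numpy as np

def relative_sd(values):
    """Relative standard deviation in percent: 100 * SD / |mean|."""
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / abs(values.mean())

# Hypothetical per-model all-sky TOA forcings (W m^-2); mean is -4.47.
forcings = np.array([-4.0, -4.3, -4.5, -4.6, -4.9, -5.1, -4.2, -4.4, -4.8, -3.9])
print(relative_sd(forcings))  # ~8.7% here; the reported 0.55/4.47 gives ~12%
```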
Insights from analysis for harmful and potentially harmful constituents (HPHCs) in tobacco products.
Oldham, Michael J; DeSoi, Darren J; Rimmer, Lonnie T; Wagner, Karl A; Morton, Michael J
2014-10-01
A total of 20 commercial cigarette and 16 commercial smokeless tobacco products were assayed for 96 compounds listed as harmful and potentially harmful constituents (HPHCs) by the US Food and Drug Administration. For each product, a single lot was used for all testing. Both International Organization for Standardization and Health Canada smoking regimens were used for cigarette testing. For those HPHCs detected, measured levels were consistent with levels reported in the literature; however, substantial assay variability (measured as average relative standard deviation) was found for most results. Using an abbreviated list of HPHCs, statistically significant differences for most of these HPHCs occurred when results were obtained 4-6 months apart (i.e., temporal variability). The assay variability and temporal variability demonstrate the need for standardized analytical methods with defined repeatability and reproducibility for each HPHC using certified reference standards. Temporal variability also means that simple conventional comparisons, such as two-sample t-tests, are inappropriate for comparing products tested at different points in time from the same laboratory or from different laboratories. Until capable laboratories use standardized assays with established repeatability, reproducibility, and certified reference standards, the resulting HPHC data will be unreliable for product comparisons or other decision making in regulatory science. Copyright © 2014 Elsevier Inc. All rights reserved.
Quantitative image feature variability amongst CT scanners with a controlled scan protocol
NASA Astrophysics Data System (ADS)
Ger, Rachel B.; Zhou, Shouhao; Chi, Pai-Chun Melinda; Goff, David L.; Zhang, Lifei; Lee, Hannah J.; Fuller, Clifton D.; Howell, Rebecca M.; Li, Heng; Stafford, R. Jason; Court, Laurence E.; Mackin, Dennis S.
2018-02-01
Radiomics studies often analyze patient computed tomography (CT) images acquired from different CT scanners. This may result in differences in imaging parameters, e.g., different manufacturers, different acquisition protocols, etc. However, quantifiable differences in radiomics features can occur based on acquisition parameters. A controlled protocol may allow for minimization of these effects, thus allowing for larger patient cohorts from many different CT scanners. In order to test radiomics feature variability across different CT scanners, a radiomics phantom was developed with six different cartridges encased in high-density polystyrene. A harmonized protocol was developed to control for tube voltage, tube current, scan type, pitch, CTDIvol, convolution kernel, display field of view, and slice thickness across different manufacturers. The radiomics phantom was imaged on 18 scanners using the control protocol. A linear mixed effects model was created to assess the impact of inter-scanner variability with decomposition of feature variation between scanners and cartridge materials. The inter-scanner variability was compared to the residual variability (the unexplained variability) and to the inter-patient variability using two different patient cohorts. The patient cohorts consisted of 20 non-small cell lung cancer (NSCLC) and 30 head and neck squamous cell carcinoma (HNSCC) patients. The inter-scanner standard deviation was at least half of the residual standard deviation for 36 of 49 quantitative image features. The ratio of inter-scanner to patient coefficient of variation was above 0.2 for 22 and 28 of the 49 features for NSCLC and HNSCC patients, respectively. Inter-scanner variability was a significant factor compared to patient variation in this small study for many of the features. Further analysis with a larger cohort will allow a more thorough analysis with additional variables in the model to truly isolate the inter-scanner difference.
Floodplain complexity and surface metrics: influences of scale and geomorphology
Scown, Murray W.; Thoms, Martin C.; DeJager, Nathan R.
2015-01-01
Many studies of fluvial geomorphology and landscape ecology examine a single river or landscape and thus lack generality, making it difficult to develop a general understanding of the linkages between landscape patterns and larger-scale driving variables. We examined the spatial complexity of eight floodplain surfaces in widely different geographic settings and determined how patterns measured at different scales relate to different environmental drivers. Floodplain surface complexity is defined as having highly variable surface conditions that are also highly organised in space. These two components of floodplain surface complexity were measured across multiple sampling scales from LiDAR-derived DEMs. The surface character and variability of each floodplain were measured using four surface metrics, namely standard deviation, skewness, coefficient of variation, and standard deviation of curvature, computed from a series of moving-window analyses ranging from 50 to 1000 m in radius. The spatial organisation of each floodplain surface was measured using spatial correlograms of the four surface metrics. Surface character, variability, and spatial organisation differed among the eight floodplains; and random, fragmented, highly patchy, and simple gradient spatial patterns were exhibited, depending upon the metric and window size. Differences in surface character and variability among the floodplains became statistically stronger with increasing sampling scale (window size), as did their associations with environmental variables. Sediment yield was consistently associated with differences in surface character and variability, as were flow discharge and variability at smaller sampling scales. Floodplain width was associated with differences in the spatial organisation of surface conditions at smaller sampling scales, while valley slope was weakly associated with differences in spatial organisation at larger scales. A comparison of floodplain landscape patterns measured at different scales would improve our understanding of the role that different environmental variables play at different scales and in different geomorphic settings.
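A moving-window standard deviation of a DEM, the core of the surface-variability metrics above, can be sketched with box filters. This assumes square windows and a synthetic surface rather than the study's circular 50-1000 m windows and LiDAR data:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_std(dem, size):
    """Standard deviation of elevation in a square moving window,
    via the identity Var[X] = E[X^2] - E[X]^2 with uniform box filters.
    `size` is the window width in pixels."""
    mean = uniform_filter(dem, size)
    mean_sq = uniform_filter(dem * dem, size)
    var = np.maximum(mean_sq - mean * mean, 0.0)  # clip tiny negatives
    return np.sqrt(var)

# Synthetic "floodplain": a gentle cross-valley gradient plus roughness.
rng = np.random.default_rng(0)
dem = np.linspace(0, 5, 200)[None, :] + rng.normal(0, 0.3, (200, 200))
print(local_std(dem, size=11).mean())
```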
Period variability of coupled noisy oscillators
NASA Astrophysics Data System (ADS)
Mori, Fumito; Kori, Hiroshi
2013-03-01
Period variability, quantified by the standard deviation (SD) of the cycle-to-cycle period, is investigated for noisy phase oscillators. We define the checkpoint phase as the beginning or end point of one oscillation cycle and derive an expression for the SD as a function of this phase. We find that the SD is dependent on the checkpoint phase only when oscillators are coupled. The applicability of our theory is verified using a realistic model. Our work clarifies the relationship between period variability and synchronization, from which valuable information regarding coupling can be inferred.
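For a single (uncoupled) noisy phase oscillator, the cycle-to-cycle period SD can be estimated by simulation; per the result above, checkpoint-phase dependence should appear only once oscillators are coupled. A sketch under the simplest model dφ = ω dt + σ dW (Euler-Maruyama, first-passage detection), which is an assumption of this illustration rather than the paper's realistic model:

```python
import numpy as np

def period_sd(omega=2 * np.pi, noise=0.3, t_max=2000.0, dt=1e-3, seed=1):
    """SD of cycle-to-cycle periods for one noisy phase oscillator,
    d(phi) = omega*dt + noise*dW. Periods are first-passage times
    across successive multiples of 2*pi (checkpoint phase 0)."""
    rng = np.random.default_rng(seed)
    steps = omega * dt + noise * np.sqrt(dt) * rng.standard_normal(int(t_max / dt))
    phi = np.maximum.accumulate(np.cumsum(steps))  # first-passage envelope
    crossings = np.flatnonzero(np.diff(np.floor(phi / (2 * np.pi))) > 0)
    periods = np.diff(crossings) * dt
    return periods.std(ddof=1)

# For drifted Brownian phase, theory gives SD = noise*sqrt(2*pi)/omega**1.5,
# here about 0.048; the simulated estimate should land close to that.
print(period_sd())
```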
The use of heart rate variability in assessing precompetitive stress in high-standard judo athletes.
Morales, J; Garcia, V; García-Massó, X; Salvá, P; Escobar, R; Buscà, B
2013-02-01
The objective of this study is to examine the sensitivity to and changes in heart rate variability (HRV) in stressful situations before judo competitions and to observe the differences among judo athletes according to their competitive standards in both official and unofficial competitions. 24 (10 male and 14 female) national- and international-standard athletes were evaluated. Each participant answered the Revised Competitive State Anxiety Inventory (CSAI-2R) and their HRV was recorded during both an official and an unofficial competition. The MANOVA showed significant main effects of the athlete's standard and the type of competition on the CSAI-2R, the HRV time domain, the HRV frequency domain and the HRV nonlinear analysis (p<0.05). International-standard judo athletes have lower somatic anxiety, cognitive anxiety, heart rate and low-high frequency ratio than national-standard athletes (p<0.05). International-standard athletes have higher confidence, mean RR interval, standard deviation of RR, square root of the mean squared difference of successive RR intervals, number of consecutive RR intervals that differ by more than 5 ms, short-term variability, long-term variability, long-range scaling exponents and short-range scaling exponent than national-standard judo athletes. In conclusion, international-standard athletes show less pre-competitive anxiety than national-standard athletes, and HRV analysis is sensitive to changes in pre-competitive anxiety. © Georg Thieme Verlag KG Stuttgart · New York.
Valle, Susanne Collier; Støen, Ragnhild; Sæther, Rannei; Jensenius, Alexander Refsum; Adde, Lars
2015-10-01
A computer-based video analysis has recently been presented for quantitative assessment of general movements (GMs). This method's test-retest reliability, however, has not yet been evaluated. The aim of the current study was to evaluate the test-retest reliability of computer-based video analysis of GMs, and to explore the association between computer-based video analysis and the temporal organization of fidgety movements (FMs). Test-retest reliability study. 75 healthy, term-born infants were recorded twice on the same day during the FMs period using a standardized video set-up. The computer-based movement variables "quantity of motion mean" (Qmean), "quantity of motion standard deviation" (QSD) and "centroid of motion standard deviation" (CSD) were analyzed, reflecting the amount of motion (Qmean, QSD) and the variability of the spatial center of motion of the infant (CSD). In addition, the association between the variable CSD and the temporal organization of FMs was explored. Intraclass correlation coefficients (ICC(1,1) and ICC(3,1)) were calculated to assess test-retest reliability. The ICC values for the variables CSD, Qmean and QSD were 0.80, 0.80 and 0.86 for ICC(1,1), respectively; and 0.80, 0.86 and 0.90 for ICC(3,1), respectively. There were significantly lower CSD values in the recordings with continual FMs compared to the recordings with intermittent FMs (p<0.05). This study showed high test-retest reliability of computer-based video analysis of GMs, and a significant association between our computer-based video analysis and the temporal organization of FMs. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
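Intraclass correlation coefficients of the kind reported above can be computed from an infants-by-sessions array. A minimal one-way random-effects ICC(1,1) sketch (the study also reports ICC(3,1), which uses a two-way model; all numbers below are simulated, not the study's data):

```python
import numpy as np

def icc_1_1(scores):
    """One-way random-effects, single-measure ICC(1,1) for an
    n_subjects x k_sessions array (standard ANOVA formulation)."""
    x = np.asarray(scores, dtype=float)
    n, k = x.shape
    grand = x.mean()
    msb = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)              # between subjects
    msw = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))  # within subjects
    return (msb - msw) / (msb + (k - 1) * msw)

rng = np.random.default_rng(10)
true_scores = rng.normal(0.0, 1.0, size=(75, 1))            # 75 infants
sessions = true_scores + rng.normal(0.0, 0.5, size=(75, 2))  # two recordings
print(icc_1_1(sessions))  # ~0.8 for this signal-to-noise ratio
```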
NASA Astrophysics Data System (ADS)
Montzka, C.; Rötzer, K.; Bogena, H. R.; Vereecken, H.
2017-12-01
Improving the coarse spatial resolution of global soil moisture products from SMOS, SMAP and ASCAT is a topic of active research. Soil texture heterogeneity is known to be one of the main sources of soil moisture spatial variability. A method has been developed that predicts the soil moisture standard deviation as a function of the mean soil moisture based on soil texture information. It is a closed-form expression derived from stochastic analysis of 1D unsaturated gravitational flow in an infinitely long vertical profile, based on the Mualem-van Genuchten model and first-order Taylor expansions. With the recent development of high-resolution maps of basic soil properties such as soil texture and bulk density, relevant information to estimate soil moisture variability within a satellite product grid cell is available. Here, we predict for each SMOS, SMAP and ASCAT grid cell the sub-grid soil moisture variability based on the SoilGrids1km data set. We provide a look-up table that indicates the soil moisture standard deviation for any given soil moisture mean. The resulting data set provides important information for downscaling coarse soil moisture observations of the SMOS, SMAP and ASCAT missions. Downscaling SMAP data by a field capacity proxy indicates adequate accuracy of the sub-grid soil moisture patterns.
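At application time, the look-up-table idea reduces to interpolating a predicted sub-grid SD at the observed cell-mean soil moisture. A sketch with invented table values; the real table comes from the closed-form Mualem-van Genuchten analysis and SoilGrids1km and differs per grid cell:

```python
import numpy as np

# Hypothetical look-up table for one satellite grid cell: mean soil
# moisture (m^3/m^3) versus predicted sub-grid standard deviation.
# Values are illustrative only, not taken from the data set above.
mean_sm = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40])
sd_sm   = np.array([0.015, 0.025, 0.034, 0.040, 0.042, 0.038, 0.030, 0.020])

def subgrid_sd(theta_mean):
    """Interpolate the sub-grid SD for an observed cell-mean soil moisture."""
    return np.interp(theta_mean, mean_sm, sd_sm)

print(subgrid_sd(0.22))
```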
NASA Astrophysics Data System (ADS)
Moreno de Castro, Maria; Schartau, Markus; Wirtz, Kai
2017-04-01
Mesocosm experiments on phytoplankton dynamics under high CO2 concentrations mimic the response of marine primary producers to future ocean acidification. However, detection of potential acidification effects can be hindered by the high standard deviation typically found in the replicates of the same CO2 treatment level. In experiments with multiple unresolved factors and a sub-optimal number of replicates, post-processing statistical inference tools might fail to detect an effect that is present. We propose that in such cases, data-based model analyses might be suitable tools to unearth potential responses to the treatment and identify the uncertainties that could produce the observed variability. As test cases, we used data from two independent mesocosm experiments. Both experiments showed high standard deviations and, according to statistical inference tools, biomass appeared insensitive to changing CO2 conditions. Conversely, our simulations showed earlier and more intense phytoplankton blooms in modeled replicates at high CO2 concentrations and suggested that uncertainties in average cell size, phytoplankton biomass losses, and initial nutrient concentration potentially outweigh acidification effects by triggering strong variability during the bloom phase. We also estimated the thresholds below which uncertainties do not escalate to high variability. This information might help in designing future mesocosm experiments and interpreting controversial results on the effect of acidification or other pressures on ecosystem functions.
Model and parametric uncertainty in source-based kinematic models of earthquake ground motion
Hartzell, Stephen; Frankel, Arthur; Liu, Pengcheng; Zeng, Yuehua; Rahman, Shariftur
2011-01-01
Four independent ground-motion simulation codes are used to model the strong ground motion for three earthquakes: 1994 Mw 6.7 Northridge, 1989 Mw 6.9 Loma Prieta, and 1999 Mw 7.5 Izmit. These 12 sets of synthetics are used to make estimates of the variability in ground-motion predictions. In addition, ground-motion predictions over a grid of sites are used to estimate parametric uncertainty for changes in rupture velocity. We find that the combined model uncertainty and random variability of the simulations is in the same range as the variability of regional empirical ground-motion data sets. The majority of the standard deviations lie between 0.5 and 0.7 natural-log units for response spectra and 0.5 and 0.8 for Fourier spectra. The estimate of model epistemic uncertainty, based on the different model predictions, lies between 0.2 and 0.4, which is about one-half of the estimates for the standard deviation of the combined model uncertainty and random variability. Parametric uncertainty, based on variation of just the average rupture velocity, is shown to be consistent in amplitude with previous estimates, showing percentage changes in ground motion from 50% to 300% when rupture velocity changes from 2.5 to 2.9 km/s. In addition, there is some evidence that mean biases can be reduced by averaging ground-motion estimates from different methods.
Du, Han; Wang, Lijuan
2018-04-23
Intraindividual variability can be measured by the intraindividual standard deviation (iSD), intraindividual variance (iSD²), estimated hth-order autocorrelation coefficient (r_h), and mean square successive difference (MSSD). Unresolved issues exist in the research on reliabilities of intraindividual variability indicators: (1) previous research only studied conditions with 0 autocorrelations in the longitudinal responses; (2) the reliabilities of r_h and MSSD have not been studied. The current study investigates reliabilities of iSD, iSD², r_h, MSSD, and the intraindividual mean, with autocorrelated longitudinal data. Reliability estimates of the indicators were obtained through Monte Carlo simulations. The impact of influential factors on reliabilities of the intraindividual variability indicators is summarized, and the reliabilities are compared across the indicators. Generally, all the studied indicators of intraindividual variability were more reliable with a more reliable measurement scale and more assessments. The reliabilities of r_h were generally lower than those of iSD and iSD², the reliabilities of MSSD were usually between those of iSD and r_h unless the scale reliability was large and/or the interindividual standard deviation in autocorrelation coefficients was large, and the reliabilities of the intraindividual mean were generally the highest. An R function is provided for planning longitudinal studies to ensure sufficient reliabilities of the intraindividual indicators are achieved.
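The four indicators are straightforward to compute for a single individual's series. A sketch using common definitions (the paper's exact estimators and notation may differ):

```python
import numpy as np

def intraindividual_indicators(y, h=1):
    """Intraindividual variability indicators for one person's repeated
    measures y_1..y_T: iSD, iSD^2, lag-h autocorrelation, and the mean
    square successive difference (MSSD)."""
    y = np.asarray(y, dtype=float)
    isd = y.std(ddof=1)
    yc = y - y.mean()
    autocorr_h = np.dot(yc[:-h], yc[h:]) / np.dot(yc, yc)  # lag-h sample autocorrelation
    mssd = np.mean(np.diff(y) ** 2)
    return {"iSD": isd, "iVar": isd ** 2, f"r{h}": autocorr_h, "MSSD": mssd}

rng = np.random.default_rng(2)
print(intraindividual_indicators(rng.normal(50, 5, size=30)))
```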
Phase-I monitoring of standard deviations in multistage linear profiles
NASA Astrophysics Data System (ADS)
Kalaei, Mahdiyeh; Soleimani, Paria; Niaki, Seyed Taghi Akhavan; Atashgar, Karim
2018-03-01
In most modern manufacturing systems, products are often the output of some multistage processes. In these processes, the stages are dependent on each other, where the output quality of each stage depends also on the output quality of the previous stages. This property is called the cascade property. Although there are many studies on multistage process monitoring, there are fewer works on profile monitoring in multistage processes, especially on the variability monitoring of a multistage profile in Phase I, for which no research is found in the literature. In this paper, a new methodology is proposed for Phase-I monitoring of the standard deviation of a simple linear profile in multistage processes with the cascade property. To this end, an autoregressive correlation model between the stages is considered first. Then, the effect of the cascade property on the Phase-I performances of three types of T² control charts with shifts in standard deviation is investigated. As we show that this effect is significant, a U statistic is next used to remove the cascade effect, based on which the investigated control charts are modified. Simulation studies reveal good performances of the modified control charts.
7 CFR 400.204 - Notification of deviation from standards.
Code of Federal Regulations, 2010 CFR
2010-01-01
Title 7, Agriculture. Contract-Standards for Approval, § 400.204 Notification of deviation from standards: A Contractor shall advise the Corporation immediately if the Contractor deviates from the requirements of these standards…
Risk Factors in Early Child Development: Is Prenatal Cocaine/Polydrug Exposure a Key Variable?
ERIC Educational Resources Information Center
Phelps, Leadelle; Wallace, Nancy Virginia; Bontrager, Annie
1997-01-01
Assessed the effect of cocaine/polydrug in utero exposure on early childhood development while controlling for covariant factors. Analysis of two matched samples of preschoolers (20 with drug exposure and 20 without) revealed that both groups scored approximately one standard deviation below the expected mean in social skills, auditory…
Rating Slam Dunks to Visualize the Mean, Median, Mode, Range, and Standard Deviation
ERIC Educational Resources Information Center
Robinson, Nick W.; Castle Bell, Gina
2014-01-01
Among the many difficulties beleaguering the communication research methods instructor is the problem of contextualizing abstract ideas. Comprehension of variable operationalization, the utility of the measures of central tendency, measures of dispersion, and the visual distribution of data sets are difficult, since students have not handled data.…
Valuing a More Rigorous Review of Formative Assessment's Effectiveness
ERIC Educational Resources Information Center
Apthorp, Helen; Klute, Mary; Petrites, Tony; Harlacher, Jason; Real, Marianne
2016-01-01
Prior reviews of evidence for the impact of formative assessment on student achievement suggest widely different estimates of formative assessment's effectiveness, ranging from 0.40 to 0.70 standard deviations in one review. The purpose of this study is to describe variability in the effectiveness of formative assessment for promoting student…
Structure of Pine Stands in the Southeast
William A. Bechtold; Gregory A. Ruark
1988-01-01
Distributional and statistical information associated with stand age, site index, basal area per acre, number of stems per acre, and stand density index is reported for major pine cover types of the Southeastern United States. Means, standard deviations, and ranges of these variables are listed by State and physiographic region for loblolly, slash, longleaf, pond,...
Acoustic Analysis of PD Speech
Chenausky, Karen; MacAuslan, Joel; Goldhor, Richard
2011-01-01
According to the U.S. National Institutes of Health, approximately 500,000 Americans have Parkinson's disease (PD), with roughly another 50,000 receiving new diagnoses each year. 70%–90% of these people also have the hypokinetic dysarthria associated with PD. Deep brain stimulation (DBS) substantially relieves motor symptoms in advanced-stage patients for whom medication produces disabling dyskinesias. This study investigated speech changes as a result of DBS settings chosen to maximize motor performance. The speech of 10 PD patients and 12 normal controls was analyzed for syllable rate and variability, syllable length patterning, vowel fraction, voice-onset time variability, and spirantization. These were normalized by the controls' standard deviation to represent distance from normal and combined into a composite measure. Results show that DBS settings relieving motor symptoms can improve speech, making it up to three standard deviations closer to normal. However, the clinically motivated settings evaluated here show greater capacity to impair, rather than improve, speech. A feedback device developed from these findings could be useful to clinicians adjusting DBS parameters, as a means for ensuring they do not unwittingly choose DBS settings which impair patients' communication. PMID:21977333
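The "distance from normal" normalization described above amounts to z-scoring each speech measure against the control group and then combining. A sketch with hypothetical measures and an unweighted mean of absolute z-scores; the study's exact weighting and combination rule are not specified here:

```python
import numpy as np

def composite_distance(patient, controls):
    """Z-score each speech measure against the control group, then
    average the absolute z-scores into one distance-from-normal value."""
    patient = np.asarray(patient, dtype=float)
    controls = np.asarray(controls, dtype=float)  # n_controls x n_measures
    z = (patient - controls.mean(axis=0)) / controls.std(axis=0, ddof=1)
    return np.mean(np.abs(z))

rng = np.random.default_rng(11)
# Hypothetical control values for three measures (e.g., syllable rate,
# voice-onset-time variability, spirantization), 12 controls.
controls = rng.normal([4.0, 0.2, 60.0], [0.5, 0.05, 10.0], size=(12, 3))
patient = np.array([2.8, 0.35, 85.0])
print(composite_distance(patient, controls))  # in units of control SDs
```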
Uncertainty Quantification in Scale-Dependent Models of Flow in Porous Media: SCALE-DEPENDENT UQ
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tartakovsky, A. M.; Panzeri, M.; Tartakovsky, G. D.
Equations governing flow and transport in heterogeneous porous media are scale-dependent. We demonstrate that it is possible to identify a support scale η*, such that the typically employed approximate formulations of Moment Equations (ME) yield accurate (statistical) moments of a target environmental state variable. Under these circumstances, the ME approach can be used as an alternative to the Monte Carlo (MC) method for Uncertainty Quantification in diverse fields of Earth and environmental sciences. MEs are directly satisfied by the leading moments of the quantities of interest and are defined on the same support scale as the governing stochastic partial differential equations (PDEs). Computable approximations of the otherwise exact MEs can be obtained through perturbation expansion of moments of the state variables in orders of the standard deviation of the random model parameters. As such, their convergence is guaranteed only for standard deviations smaller than one. We demonstrate our approach in the context of steady-state groundwater flow in a porous medium with a spatially random hydraulic conductivity.
The Standard Deviation of Launch Vehicle Environments
NASA Technical Reports Server (NTRS)
Yunis, Isam
2005-01-01
Statistical analysis is used in the development of the launch vehicle environments of acoustics, vibrations, and shock. The standard deviation of these environments is critical to accurate statistical extrema. However, often very little data exists to define the standard deviation, and it is better to use a typical standard deviation than one derived from a few measurements. This paper uses Space Shuttle and expendable launch vehicle flight data to define a typical standard deviation for acoustics and vibrations. The results suggest that 3 dB is a conservative and reasonable standard deviation for the source environment and the payload environment.
Explorations in statistics: the log transformation.
Curran-Everett, Douglas
2018-06-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This thirteenth installment of Explorations in Statistics explores the log transformation, an established technique that rescales the actual observations from an experiment so that the assumptions of some statistical analysis are better met. A general assumption in statistics is that the variability of some response Y is homogeneous across groups or across some predictor variable X. If the variability (the standard deviation) varies in rough proportion to the mean value of Y, a log transformation can equalize the standard deviations. Moreover, if the actual observations from an experiment conform to a skewed distribution, then a log transformation can make the theoretical distribution of the sample mean more consistent with a normal distribution. This is important: the results of a one-sample t test are meaningful only if the theoretical distribution of the sample mean is roughly normal. If we log-transform our observations, then we want to confirm the transformation was useful. We can do this if we use the Box-Cox method, if we bootstrap the sample mean and the statistic t itself, and if we assess the residual plots from the statistical model of the actual and transformed sample observations.
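A quick numerical illustration of the central point, that a log transformation equalizes group standard deviations when the SD grows in proportion to the mean (simulated lognormal data, not from the installment itself):

```python
import numpy as np

rng = np.random.default_rng(3)
# Two lognormal groups whose SDs grow roughly in proportion to their means.
low = rng.lognormal(mean=1.0, sigma=0.5, size=200)
high = rng.lognormal(mean=2.0, sigma=0.5, size=200)

for name, a, b in [("raw", low, high), ("log", np.log(low), np.log(high))]:
    print(name, round(a.std(ddof=1), 3), round(b.std(ddof=1), 3))
# Raw SDs differ several-fold; after the log transformation both are ~0.5.
```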
Enhanced Cumulative Sum Charts for Monitoring Process Dispersion
Abujiya, Mu’azu Ramat; Riaz, Muhammad; Lee, Muhammad Hisyam
2015-01-01
The cumulative sum (CUSUM) control chart is widely used in industry for the detection of small and moderate shifts in process location and dispersion. For efficient monitoring of process variability, we present several CUSUM control charts for monitoring changes in the standard deviation of a normal process. The newly developed control charts, based on well-structured sampling techniques (extreme ranked set sampling, extreme double ranked set sampling and double extreme ranked set sampling), significantly enhance the CUSUM chart's ability to detect a wide range of shifts in process variability. The relative performances of the proposed CUSUM scale charts are evaluated in terms of the average run length (ARL) and standard deviation of run length, for point shifts in variability. Moreover, for overall performance, we employ the average ratio of ARL and the average extra quadratic loss. A comparison of the proposed CUSUM control charts with the classical CUSUM R chart, the classical CUSUM S chart, the fast initial response (FIR) CUSUM R chart, the FIR CUSUM S chart, the ranked set sampling (RSS) based CUSUM R chart and the RSS based CUSUM S chart, among others, is presented. An illustrative example using a real dataset is given to demonstrate the practicability of the proposed schemes. PMID:25901356
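For orientation, a generic two-sided tabular CUSUM for a standard-deviation shift, using simple random sampling and the usual log-variance standardization; the ranked-set-sampling variants proposed above change how each subgroup is drawn, not this accumulation logic. The reference value k and decision interval h below are conventional illustrative choices:

```python
import numpy as np

def cusum(z, k=0.5, h=4.0):
    """Two-sided tabular CUSUM on standardized statistics z_i.
    Returns the index of the first out-of-control signal, or None."""
    cp = cm = 0.0
    for i, zi in enumerate(z):
        cp = max(0.0, cp + zi - k)
        cm = max(0.0, cm - zi - k)
        if cp > h or cm > h:
            return i
    return None

rng = np.random.default_rng(4)
sigma0, n = 1.0, 5
# 20 in-control subgroups of size 5, then the process SD doubles.
sds = np.r_[rng.normal(0, sigma0, (20, n)).std(ddof=1, axis=1),
            rng.normal(0, 2.0 * sigma0, (20, n)).std(ddof=1, axis=1)]
# Standardize log S^2; Var(log S^2) is approximately 2/(n-1) for normal data.
z = (np.log(sds**2) - np.log(sigma0**2)) / np.sqrt(2.0 / (n - 1))
print(cusum(z))  # signals shortly after the shift at subgroup 20
```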
Inter-laboratory validation of bioaccessibility testing for metals.
Henderson, Rayetta G; Verougstraete, Violaine; Anderson, Kim; Arbildua, José J; Brock, Thomas O; Brouwers, Tony; Cappellini, Danielle; Delbeke, Katrien; Herting, Gunilla; Hixon, Greg; Odnevall Wallinder, Inger; Rodriguez, Patricio H; Van Assche, Frank; Wilrich, Peter; Oller, Adriana R
2014-10-01
Bioelution assays are fast, simple alternatives to in vivo testing. In this study, the intra- and inter-laboratory variability in bioaccessibility data generated by bioelution tests were evaluated in synthetic fluids relevant to oral, inhalation, and dermal exposure. Using one defined protocol, five laboratories measured metal release from cobalt oxide, cobalt powder, copper concentrate, Inconel alloy, leaded brass alloy, and nickel sulfate hexahydrate. Standard deviations of repeatability (sr) and reproducibility (sR) were used to evaluate the intra- and inter-laboratory variability, respectively. Examination of the sR:sr ratios demonstrated that, while gastric and lysosomal fluids had reasonably good reproducibility, other fluids did not show as good a concordance between laboratories. Relative standard deviation (RSD) analysis showed more favorable reproducibility outcomes for some data sets; overall, results varied more between than within laboratories. RSD analysis of sr showed good within-laboratory variability for all conditions except some metals in interstitial fluid. In general, these findings indicate that absolute bioaccessibility results in some biological fluids may vary between different laboratories. However, for most applications, measures of relative bioaccessibility are needed, diminishing the requirement for high inter-laboratory reproducibility in absolute metal releases. The inter-laboratory exercise suggests that the degrees of freedom within the protocol need to be addressed. Copyright © 2014 Elsevier Inc. All rights reserved.
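Repeatability and reproducibility standard deviations of this kind follow the usual ISO 5725-style variance decomposition. A balanced-design sketch with made-up metal-release data (the study's own protocol and fluids are not reproduced):

```python
import numpy as np

def repeatability_reproducibility(data):
    """Repeatability (sr) and reproducibility (sR) standard deviations
    from a labs x replicates array, balanced case:
    sr^2 = mean within-lab variance; sR^2 = sr^2 + between-lab variance."""
    data = np.asarray(data, dtype=float)
    p, n = data.shape
    sr2 = data.var(ddof=1, axis=1).mean()       # within-lab variance
    var_means = data.mean(axis=1).var(ddof=1)   # variance of lab means
    sl2 = max(var_means - sr2 / n, 0.0)         # between-lab component
    return np.sqrt(sr2), np.sqrt(sr2 + sl2)

# Five labs, three replicate measurements each (invented values).
rng = np.random.default_rng(5)
lab_bias = rng.normal(0, 0.10, size=(5, 1))
data = 1.0 + lab_bias + rng.normal(0, 0.05, size=(5, 3))
sr, sR = repeatability_reproducibility(data)
print(round(sr, 3), round(sR, 3), round(sR / sr, 2))  # sR:sr ratio
```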
Uncertainty Analysis of Decomposing Polyurethane Foam
NASA Technical Reports Server (NTRS)
Hobbs, Michael L.; Romero, Vicente J.
2000-01-01
Sensitivity/uncertainty analyses are necessary to determine where to allocate resources for improved predictions in support of our nation's nuclear safety mission. Yet, sensitivity/uncertainty analyses are not commonly performed on complex combustion models because the calculations are time consuming, CPU intensive, nontrivial exercises that can lead to deceptive results. To illustrate these ideas, a variety of sensitivity/uncertainty analyses were used to determine the uncertainty associated with thermal decomposition of polyurethane foam exposed to high radiative flux boundary conditions. The polyurethane used in this study is a rigid closed-cell foam used as an encapsulant. Related polyurethane binders such as Estane are used in many energetic materials of interest to the JANNAF community. The complex, finite element foam decomposition model used in this study has 25 input parameters that include chemistry, polymer structure, and thermophysical properties. The response variable was selected as the steady-state decomposition front velocity calculated as the derivative of the decomposition front location versus time. An analytical mean value sensitivity/uncertainty (MV) analysis was used to determine the standard deviation by taking numerical derivatives of the response variable with respect to each of the 25 input parameters. Since the response variable is also a derivative, the standard deviation was essentially determined from a second derivative that was extremely sensitive to numerical noise. To minimize the numerical noise, 50-micrometer element dimensions and approximately 1-msec time steps were required to obtain stable uncertainty results. As an alternative method to determine the uncertainty and sensitivity in the decomposition front velocity, surrogate response surfaces were generated for use with a constrained Latin Hypercube Sampling (LHS) technique. Two surrogate response surfaces were investigated: 1) a linear surrogate response surface (LIN) and 2) a quadratic response surface (QUAD). The LHS techniques do not require derivatives of the response variable and are subsequently relatively insensitive to numerical noise. To compare the LIN and QUAD methods to the MV method, a direct LHS analysis (DLHS) was performed using the full grid and timestep resolved finite element model. The surrogate response models (LIN and QUAD) are shown to give acceptable values of the mean and standard deviation when compared to the fully converged DLHS model.
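The analytical mean value (MV) approach described above is, at its core, first-order variance propagation through numerical derivatives. A generic sketch on a toy response function, not the 25-parameter foam decomposition model; the Arrhenius-like response below is an invented stand-in:

```python
import numpy as np

def mean_value_sd(f, x0, sds, rel_step=1e-4):
    """First-order (mean value) uncertainty propagation:
    Var[f] ~= sum_i (df/dx_i)^2 * sd_i^2, using central differences."""
    x0 = np.asarray(x0, dtype=float)
    grads = np.empty_like(x0)
    for i in range(x0.size):
        step = rel_step * max(abs(x0[i]), 1.0)
        xp, xm = x0.copy(), x0.copy()
        xp[i] += step
        xm[i] -= step
        grads[i] = (f(xp) - f(xm)) / (2 * step)
    return float(np.sqrt(np.sum((grads * np.asarray(sds)) ** 2)))

# Toy response: front velocity ~ A * exp(-E / T) with uncertain A, E, T.
f = lambda x: x[0] * np.exp(-x[1] / x[2])
print(mean_value_sd(f, x0=[1.0, 10.0, 500.0], sds=[0.05, 0.5, 10.0]))
```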
Water vapor over Europe obtained from remote sensors and compared with a hydrostatic NWP model
NASA Astrophysics Data System (ADS)
Johnsen, K.-P.; Kidder, S. Q.
Due to its high variability, water vapor is a crucial parameter in short-term numerical weather prediction. Integrated water vapor (IWV) data obtained from a network of ground-based Global Positioning System (GPS) receivers, mainly over Germany, and passive microwave measurements of the Advanced Microwave Sounding Unit (AMSU-A) are compared with the high-resolution regional weather forecast model HRM of the Deutscher Wetterdienst (DWD). Time series of the IWV at 74 GPS stations obtained during the first complete year of the GFZ/GPS network, between May 2000 and April 2001, are used together with collocated forecasts of the HRM model. The low bias (0.08 kg/m2) between the HRM model and the GPS data can mainly be explained by the bias between the ECMWF analysis data used to initialize the HRM model and the GPS data. The IWV standard deviation between the HRM model and the GPS data during that time is about 2.47 kg/m2. GPS stations equipped with surface pressure sensors show about 0.29 kg/m2 lower standard deviation compared with GPS stations with surface pressure interpolated from synoptic stations. The NOAA/NESDIS Total Precipitable Water algorithm is applied to obtain the IWV and to validate the model above the sea. While the mean IWV obtained from the HRM model is about 2.1 kg/m2 larger than from the AMSU-A data, the standard deviations are 2.46 kg/m2 (NOAA-15) and 2.29 kg/m2 (NOAA-16), similar to the IWV standard deviation between HRM and GPS data.
Within-Event and Between-Events Ground Motion Variability from Earthquake Rupture Scenarios
NASA Astrophysics Data System (ADS)
Crempien, Jorge G. F.; Archuleta, Ralph J.
2017-09-01
Measurement of ground motion variability is essential to estimate seismic hazard. Over-estimation of variability can lead to extremely high annual hazard estimates of ground motion exceedance. We explore different parameters that affect the variability of ground motion, such as the spatial correlations of kinematic rupture parameters on a finite fault and the corner frequency of the moment-rate spectra. To quantify the variability of ground motion, we simulate kinematic rupture scenarios on several vertical strike-slip faults and compute ground motion using the representation theorem. In particular, for the entire suite of rupture scenarios, we quantify the within-event and the between-events ground motion variability of peak ground acceleration (PGA) and response spectra at several periods, at 40 stations, all at approximately equal distances of 20 and 50 km from the fault. Both within-event and between-events ground motion variability increase when the slip correlation length on the fault increases. The probability density functions of ground motion tend to truncate at a finite value when the correlation length of slip decreases on the fault; therefore, we do not observe any long-tailed distribution of peak ground acceleration when performing several rupture simulations for small correlation lengths. Finally, for a correlation length of 6 km, the within-event and between-events PGA log-normal standard deviations are 0.58 and 0.19, respectively, values slightly smaller than those reported by Boore et al. (Earthq Spectra, 30(3):1057-1085, 2014). The between-events standard deviation is consistently smaller than the within-event for all correlation lengths, a feature that agrees with recent ground motion prediction equations.
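A method-of-moments sketch of the within-event/between-events split for simulated ln(PGA), seeded with the standard deviations reported above (0.58 within, 0.19 between); a real analysis would typically use a mixed-effects regression rather than this simple decomposition:

```python
import numpy as np

def event_variability(ln_pga, event_ids):
    """Between-events SD (SD of event means) and within-event SD
    (SD of residuals about each event mean)."""
    ln_pga = np.asarray(ln_pga, dtype=float)
    events = np.unique(event_ids)
    means = np.array([ln_pga[event_ids == e].mean() for e in events])
    resid = ln_pga - means[np.searchsorted(events, event_ids)]
    return means.std(ddof=1), resid.std(ddof=1)

rng = np.random.default_rng(6)
event_ids = np.repeat(np.arange(30), 40)          # 30 ruptures, 40 stations
between = np.repeat(rng.normal(0, 0.19, 30), 40)  # between-events term
ln_pga = between + rng.normal(0, 0.58, between.size)  # within-event term
print(event_variability(ln_pga, event_ids))  # ~ (0.19, 0.58)
```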
Putative golden proportions as predictors of facial esthetics in adolescents.
Kiekens, Rosemie M A; Kuijpers-Jagtman, Anne Marie; van 't Hof, Martin A; van 't Hof, Bep E; Maltha, Jaap C
2008-10-01
In orthodontics, facial esthetics is assumed to be related to golden proportions apparent in the ideal human face. The aim of the study was to analyze the putative relationship between facial esthetics and golden proportions in white adolescents. Seventy-six adult laypeople evaluated sets of photographs of 64 adolescents on a visual analog scale (VAS) from 0 to 100. The facial esthetic value of each subject was calculated as a mean VAS score. Three observers recorded the position of 13 facial landmarks included in 19 putative golden proportions, based on the golden proportions as defined by Ricketts. The proportions and each proportion's deviation from the golden target (1.618) were calculated. This deviation was then related to the VAS scores. Only 4 of the 19 proportions had a significant negative correlation with the VAS scores, indicating that beautiful faces showed less deviation from the golden standard than less beautiful faces. Together, these variables explained only 16% of the variance. Few golden proportions have a significant relationship with facial esthetics in adolescents. The explained variance of these variables is too small to be of clinical importance.
7 CFR 400.174 - Notification of deviation from financial standards.
Code of Federal Regulations, 2010 CFR
2010-01-01
Title 7, Agriculture. Reinsurance Agreement-Standards for Approval; Regulations for the 1997 and Subsequent Reinsurance Years, § 400.174 Notification of deviation from financial standards: An insurer must immediately advise FCIC if it deviates from…
Degrees of Freedom for Allan Deviation Estimates of Multiple Clocks
2016-04-01
Allan deviation will be represented by σ and standard deviation will be represented by δ. In practice, when the Allan deviation of a … the Allan deviation of standard noise types. Once the number of degrees of freedom is known, an approximate confidence interval can be assigned by … measurement errors from paired difference data. We extend this approach by using the Allan deviation to estimate the error in a frequency standard.
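For reference, the non-overlapping Allan deviation underlying the discussion above can be computed directly from fractional-frequency data. A textbook sketch; the report's degrees-of-freedom and multi-clock machinery are not reproduced here:

```python
import numpy as np

def allan_deviation(y, m):
    """Non-overlapping Allan deviation of fractional-frequency data y
    at averaging factor m (tau = m * tau0):
    sigma_y^2(tau) = 0.5 * mean( (ybar_{k+1} - ybar_k)^2 )."""
    y = np.asarray(y, dtype=float)
    nbins = y.size // m
    ybar = y[: nbins * m].reshape(nbins, m).mean(axis=1)
    return np.sqrt(0.5 * np.mean(np.diff(ybar) ** 2))

rng = np.random.default_rng(7)
white_fm = rng.normal(0, 1e-11, 100000)  # white frequency noise
for m in (1, 10, 100):
    print(m, allan_deviation(white_fm, m))  # falls off ~ 1/sqrt(m)
```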
Promoting Increased Pitch Variation in Oral Presentations with Transient Visual Feedback
ERIC Educational Resources Information Center
Hincks, Rebecca; Edlund, Jens
2009-01-01
This paper investigates learner response to a novel kind of intonation feedback generated from speech analysis. Instead of displays of pitch curves, our feedback is flashing lights that show how much pitch variation the speaker has produced. The variable used to generate the feedback is the standard deviation of fundamental frequency as measured…
ERIC Educational Resources Information Center
Mahasneh, Ahmad M.
2014-01-01
The primary purpose of this study is to examine the relationship between goal orientation and parenting styles. Participants of the study completed 650 goal orientation and parenting styles questionnaires. Means, standard deviations, and regression and correlation analyses were used to examine the dependence between the two variables. Results…
Associations between heterozygosity and growth rate variables in three western forest trees
Jeffry B. Milton; Peggy Knowles; Kareen B. Sturgeon; Yan B. Linhart; Martha Davis
1981-01-01
For each of three species, quaking aspen, ponderosa pine, and lodgepole pine, we determined the relationships between a ranking of heterozygosity of individuals and measures of growth rate. Genetic variation was assayed by starch gel electrophoresis of enzymes. Growth rates were characterized by the mean, standard deviation, logarithm of the variance, and coefficient...
ERIC Educational Resources Information Center
Gopin, Chaya B.; Berwid, Olga; Marks, David J.; Mlodnicka, Agnieska; Halperin, Jeffrey M.
2013-01-01
Objective: To examine the impact of reinforcement on reaction time (RT) and RT variability (RT standard deviation [RTSD]) in preschoolers with ADHD with and without oppositional defiant disorder (ODD), and a typically developing (TD) comparison group. Method: Participants were administered a computerized task consisting of two conditions: simple…
Over, Thomas M.; Saito, Riki J.; Soong, David T.
2016-06-30
The observed and adjusted values for each streamgage are tabulated. To illustrate the overall effect of the adjustments, differences in the mean, standard deviation, and skewness of the log-transformed observed and urbanization-adjusted peak discharge series by streamgage are computed. For almost every streamgage where an adjustment was applied (no increase in urbanization was reported for a few streamgages), the mean increased and the standard deviation decreased; the effect on skewness values was more variable, but they usually increased. Significant positive peak discharge trends were common in the observed values, occurring at 27.3 percent of streamgages at a p-value of 0.05 according to a Kendall's tau correlation test; in the adjusted values, the incidence of such trends was reduced to 7.0 percent.
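The trend test used above is a Kendall's tau correlation of annual peak discharge against time at p < 0.05. A sketch with a synthetic peak series (the trend strength and noise level below are invented):

```python
import numpy as np
from scipy.stats import kendalltau

def has_positive_trend(years, peaks, alpha=0.05):
    """Kendall's tau test for a positive monotonic trend in annual peaks."""
    tau, p = kendalltau(years, peaks)
    return tau > 0 and p < alpha

rng = np.random.default_rng(8)
years = np.arange(1960, 2015)
# Lognormal annual peaks with a gradual urbanization-like increase.
peaks = rng.lognormal(5, 0.4, years.size) * (1 + 0.02 * (years - 1960))
print(has_positive_trend(years, peaks))
```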
Acoustic analysis of speech variables during depression and after improvement.
Nilsonne, A
1987-09-01
Speech recordings were made of 16 depressed patients during depression and after clinical improvement. The recordings were analyzed using a computer program which extracts acoustic parameters from the fundamental frequency contour of the voice. The percent pause time, the standard deviation of the voice fundamental frequency distribution, the standard deviation of the rate of change of the voice fundamental frequency and the average speed of voice change were found to correlate with the clinical state of the patient. The mean fundamental frequency, the total reading time and the average rate of change of the voice fundamental frequency did not differ between the depressed and the improved group. The acoustic measures were more strongly correlated with the clinical state of the patient as measured by global depression scores than with single depressive symptoms such as retardation or agitation.
NASA Technical Reports Server (NTRS)
Holland, Frederic A., Jr.
2004-01-01
Modern engineering design practices are tending more toward the treatment of design parameters as random variables as opposed to fixed, or deterministic, values. The probabilistic design approach attempts to account for the uncertainty in design parameters by representing them as a distribution of values rather than as a single value. The motivations for this effort include preventing excessive overdesign as well as assessing and assuring reliability, both of which are important for aerospace applications. However, the determination of the probability distribution is a fundamental problem in reliability analysis. A random variable is often defined by the parameters of the theoretical distribution function that gives the best fit to experimental data. In many cases the distribution must be assumed from very limited information or data. Often the types of information that are available or reasonably estimated are the minimum, maximum, and most likely values of the design parameter. For these situations the beta distribution model is very convenient because the parameters that define the distribution can be easily determined from these three pieces of information. Widely used in the field of operations research, the beta model is very flexible and is also useful for estimating the mean and standard deviation of a random variable given only the aforementioned three values. However, an assumption is required to determine the four parameters of the beta distribution from only these three pieces of information (some of the more common distributions, like the normal, lognormal, gamma, and Weibull distributions, have two or three parameters). The conventional method assumes that the standard deviation is a certain fraction of the range. The beta parameters are then determined by solving a set of equations simultaneously. A new method developed in-house at the NASA Glenn Research Center assumes a value for one of the beta shape parameters based on an analogy with the normal distribution (ref. 1). This new approach allows for a very simple and direct algebraic solution without restricting the standard deviation. The beta parameters obtained by the new method are comparable to those of the conventional method (and identical when the distribution is symmetrical). However, the proposed method generally produces a less peaked distribution with a slightly larger standard deviation (up to 7 percent) than the conventional method in cases where the distribution is asymmetric or skewed. The beta distribution model has now been implemented into the Fast Probability Integration (FPI) module used in the NESSUS computer code for probabilistic analyses of structures (ref. 2).
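For comparison with the methods described above, the classical PERT convention also recovers beta parameters from the minimum, most likely, and maximum values. A sketch of that textbook variant only; it is neither the conventional range-fraction method nor the Glenn method itself:

```python
import numpy as np

def pert_beta(a, m, b, lam=4.0):
    """Beta(alpha, beta) shape parameters on [a, b] from minimum a,
    most likely (mode) m, and maximum b, under the classical PERT
    assumption mean = (a + lam*m + b) / (lam + 2)."""
    alpha = 1.0 + lam * (m - a) / (b - a)
    beta = 1.0 + lam * (b - m) / (b - a)
    mean = (a + lam * m + b) / (lam + 2.0)
    var = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1.0)) * (b - a) ** 2
    return alpha, beta, mean, np.sqrt(var)

# Hypothetical design parameter: min 10, most likely 12, max 20.
print(pert_beta(a=10.0, m=12.0, b=20.0))  # mean 13.0, SD close to range/6
```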
Woo, M A; Moser, D K; Stevenson, L W; Stevenson, W G
1997-09-01
The 6-minute walk and heart rate variability have been used to assess mortality risk in patients with heart failure, but their relationship to each other and their usefulness for predicting mortality at 1 year are unknown. The aim of this study was to assess the relationships between the 6-minute walk test, heart rate variability, and 1-year mortality. A sample of 113 patients in advanced stages of heart failure (New York Heart Association Functional Class III-IV, left ventricular ejection fraction < 0.25) were studied. All 6-minute walks took place in an enclosed, level, measured corridor and were supervised by the same nurse. Heart rate variability was measured by using (1) a standard-deviation method and (2) Poincaré plots. Data on RR intervals obtained by using 24-hour Holter monitoring were analyzed. Survival was determined at 1 year after the Holter recording. The results showed no significant associations between the results of the 6-minute walk and the two measures of heart rate variability. The results of the walk were related to 1-year mortality but not to the risk of sudden death. Both measures of heart rate variability had significant associations with 1-year mortality and with sudden death. However, only heart rate variability measured by using Poincaré plots was a predictor of total mortality and risk of sudden death, independent of left ventricular ejection fraction, serum levels of sodium, results of the 6-minute walk test, and the standard-deviation measure of heart rate variability. Results of the 6-minute walk have poor association with mortality and the two measures of heart rate variability in patients with advanced-stage heart failure and a low ejection fraction. Further studies are needed to determine the optimal clinical usefulness of the 6-minute walk and heart rate variability in patients with advanced-stage heart failure.
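For readers unfamiliar with the two heart rate variability measures named above, the following is a minimal sketch in Python with toy RR data. SDNN is the standard deviation of all intervals, and the Poincaré descriptors SD1/SD2 are computed from the standard difference-based identities; the study's exact preprocessing of the Holter RR series is not specified in the abstract, so this is only the generic calculation.

```python
import numpy as np

def hrv_measures(rr_ms):
    """Time-domain and Poincaré HRV descriptors from RR intervals (ms).
    SDNN is the standard deviation of all intervals; SD1/SD2 describe the
    Poincaré plot (RR[n+1] vs RR[n]) across and along the identity line."""
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)
    d = np.diff(rr)
    sd1 = np.sqrt(0.5 * d.var(ddof=1))   # short-term (beat-to-beat) spread
    sd2 = np.sqrt(max(2.0 * rr.var(ddof=1) - 0.5 * d.var(ddof=1), 0.0))
    return sdnn, sd1, sd2

rng = np.random.default_rng(0)
rr = 800 + np.cumsum(rng.normal(0, 5, 1000))   # toy RR series in ms
print(hrv_measures(rr))
```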
1 CFR 21.14 - Deviations from standard organization of the Code of Federal Regulations.
Code of Federal Regulations, 2010 CFR
2010-01-01
General Provisions, Codification, General Numbering, § 21.14 Deviations from standard organization of the Code of Federal Regulations: (a) Any deviation from standard Code of Federal Regulations designations must be approved in advance...
Qualitative computer aided evaluation of dental impressions in vivo.
Luthardt, Ralph G; Koch, Rainer; Rudolph, Heike; Walter, Michael H
2006-01-01
Clinical investigations dealing with the precision of different impression techniques are rare. The objective of the present study was to develop and evaluate a procedure for the qualitative analysis of three-dimensional impression precision, based on an established in-vitro procedure. The null hypothesis to be tested was that the precision of impressions does not differ depending on the impression technique used (single-step, monophase and two-step techniques) and on clinical variables. Digital surface data of patients' teeth prepared for crowns were gathered from standardized manufactured master casts after impressions with three different techniques were taken in a randomized order. Data sets were analyzed for each patient in comparison with the one-step impression chosen as the reference. The qualitative analysis was limited to data points within the 99.5% range. Based on the color-coded representation, areas with maximum deviations were determined (preparation margin, mantle surface and occlusal surface). To qualitatively analyze the precision of the impression techniques, the hypothesis was tested in linear models for repeated measures factors (p < 0.05). For the positive 99.5% deviations, no variables with significant influence were identified in the statistical analysis. In contrast, the impression technique and the position of the preparation margin significantly influenced the negative 99.5% deviations. The influence of clinical parameters on the deviations between impression techniques can be determined reliably using the 99.5th percentile of the deviations. An analysis of the areas with maximum deviations showed high clinical relevance. The preparation margin was identified as the weak spot of impression taking.
Ekwunife, Obinna Ikechukwu; Ezenduka, Charles C; Uzoma, Bede Emeka
2016-01-12
The EQ-5D instrument is arguably the most well-known and commonly used generic measure of health status internationally. Although the instrument has been employed in outcomes studies of diabetes mellitus in many countries, it has not yet been used in Nigeria. This study was carried out to assess the sensitivity of the EQ-5D instrument in a sample of Nigerian patients with type 2 diabetes mellitus (T2DM). A cross-sectional study was conducted using the EQ-5D instrument to assess the self-reported quality of life of patients with T2DM attending two tertiary healthcare facilities in south-eastern Nigeria. Consenting patients completed the questionnaire while waiting to see a doctor. A priori hypotheses were examined using multiple regression analysis to model the relationship between the dependent variables (EQ VAS and EQ-5D index) and hypothesized independent variables. A total of 226 patients with T2DM participated in the study. The average age of participants was 57 years (standard deviation 10 years) and 61.1% were male. The EQ VAS score and EQ-5D index averaged 66.19 (standard deviation 15.42) and 0.78 (standard deviation 0.21), respectively. Number of diabetic complications, number of co-morbidities, patient's age and being educated predicted the EQ VAS score with coefficients of -6.76, -6.15, -0.22 and 4.51, respectively. Likewise, number of diabetic complications, number of co-morbidities, patient's age and being educated predicted the EQ-5D index with coefficients of -0.12, -0.07, -0.003 and 0.06, respectively. Our findings indicate that the EQ-5D could adequately capture the burden of type 2 diabetes and related complications among Nigerian patients.
Office and 24-hour heart rate and target organ damage in hypertensive patients
2012-01-01
Background We investigated the association between heart rate and its variability and the parameters that assess vascular, renal and cardiac target organ damage. Methods A cross-sectional study was performed including a consecutive sample of 360 hypertensive patients not taking heart-rate-lowering drugs (aged 56 ± 11 years, 64.2% male). Heart rate (HR) and its standard deviation (HRV) in clinical and 24-hour ambulatory monitoring were evaluated. Renal damage was assessed by glomerular filtration rate and albumin/creatinine ratio; vascular damage by carotid intima-media thickness and ankle/brachial index; and cardiac damage by the Cornell voltage-duration product and left ventricular mass index. Results There was a positive correlation between ambulatory, but not clinical, heart rate and its standard deviation with glomerular filtration rate, and a negative correlation with carotid intima-media thickness and with the night/day ratio of systolic and diastolic blood pressure. There was no correlation with albumin/creatinine ratio, ankle/brachial index, Cornell voltage-duration product or left ventricular mass index. In the multiple linear regression analysis, after adjusting for age, the association of glomerular filtration rate and intima-media thickness with ambulatory heart rate and its standard deviation was lost. According to the logistic regression analysis, the predictors of any target organ damage were age (OR = 1.034 and 1.033) and night/day systolic blood pressure ratio (OR = 1.425 and 1.512). Neither 24-hour HR nor 24-hour HRV reached statistical significance. Conclusions High ambulatory heart rate and its variability, but not clinical HR, are associated with decreased carotid intima-media thickness and a higher glomerular filtration rate, although these associations are lost after adjusting for age. Trial Registration ClinicalTrials.gov: NCT01325064 PMID:22439900
Nagel, Christina; Trenk, Lisa; Aurich, Christine; Ille, Natascha; Pichler, Martina; Drillich, Marc; Pohl, Werner; Aurich, Jörg
2016-03-15
Increased cortisol release in parturient cows may either represent a stress response or be part of the endocrine changes that initiate calving. Acute stress elicits an increase in heart rate and a decrease in heart rate variability (HRV). Therefore, we analyzed cortisol concentration, heart rate and the HRV variables standard deviation of beat-to-beat intervals (SDRR) and root mean square of successive beat-to-beat interval differences (RMSSD) in dairy cows allowed to calve spontaneously (SPON, n = 6) or with PGF2α-induced preterm parturition (PG, n = 6). We hypothesized that calving is a stressor, but that induced parturition is less stressful than term calving. Saliva collection for cortisol analysis and electrocardiogram recordings for heart rate and HRV analysis were performed from 32 hours before to 18.3 ± 0.7 hours after delivery. Cortisol concentration increased in SPON and PG cows and peaked 15 minutes after delivery (P < 0.001), but was higher in SPON versus PG cows (P < 0.001) during and within 2 hours after calving. Heart rate peaked during the expulsive phase of labor and was higher in SPON than in PG cows (time × group P < 0.01). SDRR and RMSSD peaked at the end of the expulsive phase of labor (P < 0.001), indicating high vagal activity. SDRR (P < 0.01) and RMSSD (P < 0.05) were higher in SPON versus PG cows. Based on physiological stress parameters, calving is perceived as stressful, but expulsion of the calf is associated with a transiently increased vagal tone which may enhance uterine contractility. Copyright © 2016 Elsevier Inc. All rights reserved.
Miyoshi, Toru; Suetsuna, Ryoji; Tokunaga, Naoto; Kusaka, Masayasu; Tsuzaki, Ryuichiro; Koten, Kazuya; Kunihisa, Kohno; Ito, Hiroshi
2017-07-01
Blood pressure variability (BPV), whether visit-to-visit, day-by-day or ambulatory, has been shown to be a risk factor for future cardiovascular events. However, the effects of antihypertensive therapy on BPV remain unclear. The purpose of this study was to evaluate the effect of azilsartan, after switching from another angiotensin II receptor blocker (ARB), on day-to-day BPV in home BP monitoring. This prospective, multicenter, open-labeled, single-arm study included 28 patients undergoing treatment with an ARB, which was switched to azilsartan after enrollment. The primary outcome was the change in the mean of the standard deviation and the coefficient of variation of morning home BP for 5 consecutive days from baseline to the 24-week follow-up. The secondary outcome was the change in arterial stiffness measured by the cardio-ankle vascular index. The mean BPs in the morning and evening for 5 days did not statistically differ between baseline and 24 weeks. For the morning BP, the means of the standard deviations and coefficients of variation of the systolic BP decreased significantly from 7.4 ± 3.6 mm Hg to 6.1 ± 3.2 mm Hg and from 5.4 ± 2.7% to 4.6 ± 2.3% (mean ± standard deviation, P = 0.04 and P = 0.04, respectively). For the evening BP, no significant change was observed in the systolic or diastolic BPV. The cardio-ankle vascular index significantly decreased from 8.3 ± 0.8 to 8.1 ± 0.8 (P = 0.03). Switching from another ARB to azilsartan reduced day-to-day BPV in the morning and improved arterial stiffness.
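The primary outcome here is straightforward to compute. A minimal sketch follows in Python with synthetic data; the trial's handling of missing readings and the exact averaging order are not described in the abstract, so the function below is only one plausible reading of the definition.

```python
import numpy as np

def day_to_day_bpv(sbp):
    """Day-to-day variability of morning home systolic BP.
    `sbp` is an array of shape (patients x 5 days); returns the
    across-patient means of the per-patient standard deviation (mm Hg)
    and coefficient of variation (%)."""
    sbp = np.asarray(sbp, dtype=float)
    sd = sbp.std(axis=1, ddof=1)
    cv = 100.0 * sd / sbp.mean(axis=1)
    return sd.mean(), cv.mean()

rng = np.random.default_rng(1)
sbp = rng.normal(135, 7, size=(28, 5))   # 28 patients, 5 mornings (toy data)
print(day_to_day_bpv(sbp))
```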
Comparison of three-dimensional multi-segmental foot models used in clinical gait laboratories.
Nicholson, Kristen; Church, Chris; Takata, Colton; Niiler, Tim; Chen, Brian Po-Jung; Lennon, Nancy; Sees, Julie P; Henley, John; Miller, Freeman
2018-05-16
Many skin-mounted three-dimensional multi-segmented foot models are currently in use for gait analysis. Evidence regarding the repeatability of models, including between trials and between assessors, is mixed, and there are no between-model comparisons of kinematic results. This study explores differences in kinematics and repeatability between five three-dimensional multi-segmented foot models: duPont, Heidelberg, Oxford Child, Leardini, and Utah. Hindfoot, forefoot, and hallux angles were calculated with each model for ten individuals. Two physical therapists applied markers three times to each individual to assess within- and between-therapist variability. Standard deviations were used to evaluate marker placement variability; locally weighted regression smoothing with alpha-adjusted serial T-test analysis, yielding p-value curves over the gait cycle, was used to assess kinematic similarities. All five models had similar variability; however, the Leardini model showed high standard deviations in plantarflexion/dorsiflexion angles. The duPont and Oxford models had the most similar kinematics. All models demonstrated similar marker placement variability. Lower variability was noted in the sagittal and coronal planes compared to rotation in the transverse plane, suggesting a higher minimal detectable change when clinically considering rotation and a need for additional research. Between the five models, the duPont and Oxford shared the most kinematic similarities. While patterns of movement were very similar between all models, offsets were often present and need to be considered when evaluating published data. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Lavalle, M.; Lee, A.; Shiroma, G. X. H.; Rosen, P. A.
2017-12-01
The NASA-ISRO SAR (NISAR) mission will deliver unprecedented global maps of L-band HH/HV backscatter every 12 days with resolution ranging from a few to tens of meters in support of ecosystem, solid Earth and cryosphere science and applications. Understanding and modeling the temporal variability of L-band backscatter over temporal scales of years, months and days is critical for developing retrieval algorithms that can robustly extract the biophysical variables of interest (e.g., forest biomass, soil moisture, etc.) from NISAR time series. In this talk, we will focus on the 5-year time series of 60 JPL/UAVSAR polarimetric images collected near the Sacramento Delta to characterize the inter-annual, seasonal and short-scale variability of the L-band polarimetric backscatter for a broad range of land cover types. Our preliminary analysis reveals that backscatter from man-made structures is very stable over time, whereas backscatter from bare soil and herbaceous vegetation fluctuates over time with a standard deviation of 2.3 dB. Land-cover classes with larger biomass, such as trees and tall vegetation, show about 1.5 dB standard deviation in temporal backscatter variability. Closer examination of high-spatial-resolution UAVSAR imagery also reveals that vegetation structure, speckle noise and horizontal forest heterogeneity in the Sacramento Delta area can significantly affect the point-wise backscatter value. In our talk, we will illustrate the long UAVSAR time series, describe our data analysis strategy, show the results of polarimetric variability for different land cover classes and number of looks, and discuss the implications for the development of NISAR L2/L3 retrieval algorithms of ecosystem science.
Upgraded FAA Airfield Capacity Model. Volume 1. Supplemental User’s Guide
1981-02-01
SIGMAR (F4.0) cc 1-4 - standard deviation, in seconds, of arrival runway occupancy time (R.O.T.). SIGMAA (F4.0) cc 5-8 - standard deviation, in seconds... SIGMAC - the standard deviation of the time from departure clearance to start of roll. SIGMAR - the standard deviation of the arrival runway...
Hug, François; Drouet, Jean Marc; Champoux, Yvan; Couturier, Antoine; Dorel, Sylvain
2008-11-01
The aim of this study was to determine whether the high inter-individual variability of electromyographic (EMG) patterns during pedaling is accompanied by variability in the pedal force application patterns. Eleven male experienced cyclists were tested at two submaximal power outputs (150 and 250 W). Pedal force components (effective and total forces) and the index of mechanical effectiveness were measured continuously using instrumented pedals and were synchronized with surface electromyography signals measured in ten lower limb muscles. The intersubject variability of EMG and mechanical patterns was assessed using the standard deviation, mean deviation, variance ratio and coefficient of cross-correlation (R(0), with lag time = 0). The results demonstrated a high intersubject variability of EMG patterns at both exercise intensities for the biarticular muscles as a whole (and especially for Gastrocnemius lateralis and Rectus femoris) and for one monoarticular muscle (Tibialis anterior). However, this heterogeneity of EMG patterns was not accompanied by comparably high intersubject variability in the pedal force application patterns. Very low variability in the three mechanical profiles (effective force, total force and index of mechanical effectiveness) was obtained in the propulsive downstroke phase, although greater variability in these mechanical patterns was found during the upstroke and around the top dead center, and at 250 W compared to 150 W. Overall, these results provide additional evidence for redundancy in the neuromuscular system.
Temporal variability of spectro-temporal receptive fields in the anesthetized auditory cortex.
Meyer, Arne F; Diepenbrock, Jan-Philipp; Ohl, Frank W; Anemüller, Jörn
2014-01-01
Temporal variability of neuronal response characteristics during sensory stimulation is a ubiquitous phenomenon that may reflect processes such as stimulus-driven adaptation, top-down modulation or spontaneous fluctuations. It poses a challenge to functional characterization methods such as the receptive field, since these often assume stationarity. We propose a novel method for estimating sensory neurons' receptive fields that extends the classic static linear receptive field model to the time-varying case. Here, the long-term estimate of the static receptive field serves as the mean of a probabilistic prior distribution from which the short-term, temporally localized receptive field may deviate stochastically with time-varying standard deviation. The corresponding generalized linear model permits robust characterization of temporal variability in receptive field structure even for highly non-Gaussian stimulus ensembles. We computed and analyzed short-term auditory spectro-temporal receptive field (STRF) estimates with a characteristic temporal resolution of 5-30 s, based on model simulations and on responses from a total of 60 single-unit recordings in anesthetized Mongolian gerbil auditory midbrain and cortex. Stimulation was performed with short (100 ms) overlapping frequency-modulated tones. Results demonstrate identification of time-varying STRFs, with obtained predictive model likelihoods exceeding those from baseline static STRF estimation. Quantitative characterization of STRF variability reveals a higher degree thereof in auditory cortex compared to midbrain. Cluster analysis indicates that significant deviations from the long-term static STRF are brief, but reliably estimated. We hypothesize that the observed variability more likely reflects spontaneous or state-dependent internal fluctuations that interact with stimulus-induced processing, rather than experimental or stimulus design.
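The core estimator can be sketched as a regression shrunk toward the long-term receptive field used as the prior mean. The Python sketch below uses a Gaussian-noise simplification of the paper's generalized linear model with synthetic data; the function name is illustrative, and `lam` plays the role of the inverse prior variance controlling how far the short-term estimate may deviate.

```python
import numpy as np

def short_term_rf(X, y, w_long, lam):
    """MAP estimate of a short-term receptive field under a Gaussian prior
    centred on the long-term estimate w_long (Gaussian-noise simplification):
    minimize ||y - X w||^2 + lam * ||w - w_long||^2, whose closed form is
    w = (X'X + lam I)^{-1} (X'y + lam w_long)."""
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y + lam * w_long)

rng = np.random.default_rng(2)
n, p = 500, 40
X = rng.normal(size=(n, p))                    # stimulus design matrix
w_long = rng.normal(size=p)                    # long-term (static) RF
w_true = w_long + rng.normal(0, 0.3, size=p)   # brief stochastic deviation
y = X @ w_true + rng.normal(0, 1.0, size=n)
w_hat = short_term_rf(X, y, w_long, lam=10.0)
print(np.corrcoef(w_hat, w_true)[0, 1])        # recovery of the short-term RF
```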
Segmentation of Natural Gas Customers in Industrial Sector Using Self-Organizing Map (SOM) Method
NASA Astrophysics Data System (ADS)
Masbar Rus, A. M.; Pramudita, R.; Surjandari, I.
2018-03-01
The usage of natural gas, a non-renewable energy source, needs to be more efficient. Customer segmentation therefore becomes necessary to target marketing strategy correctly or to determine appropriate fees. This research was conducted at PT PGN using a data mining method, the Self-Organizing Map (SOM). The clustering process is based on the characteristics of the customers as a reference to create the segmentation of natural gas customers. The input variables of this research are area, type of customer, industrial sector, average usage, standard deviation of usage, and total deviation. As a result, 37 clusters and 9 segments were formed from 838 customer records. These 9 segments were then employed to illustrate the general characteristics of the natural gas customers of PT PGN.
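A minimal numpy implementation of the SOM update rule conveys the clustering step described above. The grid size, learning-rate and neighbourhood schedules, and the standardized toy features below are illustrative assumptions, not the study's settings.

```python
import numpy as np

def train_som(data, grid=(6, 6), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal Self-Organizing Map: grid nodes hold weight vectors; for each
    sample, the best-matching unit (BMU) and its neighbours are pulled toward
    the sample, with learning rate and radius decaying linearly over epochs."""
    rng = np.random.default_rng(seed)
    h, w = grid
    coords = np.array([(i, j) for i in range(h) for j in range(w)], dtype=float)
    W = rng.normal(size=(h * w, data.shape[1]))
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)
        sigma = sigma0 * (1.0 - t / epochs) + 0.5
        for x in rng.permutation(data):
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            W += lr * np.exp(-d2 / (2.0 * sigma ** 2))[:, None] * (x - W)
    return W

# Toy standardized customer features: average usage, usage SD, total deviation.
rng = np.random.default_rng(3)
customers = rng.normal(size=(838, 3))
W = train_som(customers)
# Each customer's cluster is the index of its best-matching unit.
clusters = np.argmin(((customers[:, None, :] - W[None, :, :]) ** 2).sum(-1), axis=1)
print(f"{np.unique(clusters).size} occupied nodes out of {W.shape[0]}")
```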
A Visual Model for the Variance and Standard Deviation
ERIC Educational Resources Information Center
Orris, J. B.
2011-01-01
This paper shows how the variance and standard deviation can be represented graphically by looking at each squared deviation as a graphical object--in particular, as a square. A series of displays show how the standard deviation is the size of the average square.
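The graphical idea translates directly into code. Here is a matplotlib sketch, with illustrative data values, that draws each squared deviation as a literal square and the standard deviation as the side of the square whose area is the average of those squares.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle

data = np.array([2.0, 4.0, 5.0, 7.0, 9.0, 12.0])
mean = data.mean()
var = ((data - mean) ** 2).mean()   # population variance = average square area
sd = np.sqrt(var)                   # side length of the "average square"

fig, ax = plt.subplots(figsize=(8, 3))
for k, x in enumerate(data):
    side = abs(x - mean)
    # Each squared deviation drawn as a square of side |x - mean|.
    ax.add_patch(Rectangle((k * 11.0, 0), side, side,
                           alpha=0.4, edgecolor="black"))
# The standard deviation is the side of the square of average area.
ax.add_patch(Rectangle((len(data) * 11.0, 0), sd, sd,
                       facecolor="none", edgecolor="red", linewidth=2))
ax.set_xlim(-1, (len(data) + 1) * 11.0)
ax.set_ylim(0, np.abs(data - mean).max() + 1)
ax.set_aspect("equal")
ax.set_title("Squared deviations as squares; red square has side = SD")
plt.show()
```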
Ansermot, Nicolas; Rudaz, Serge; Brawand-Amey, Marlyse; Fleury-Souverain, Sandrine; Veuthey, Jean-Luc; Eap, Chin B
2009-08-01
Matrix effects, which represent an important issue in liquid chromatography coupled to mass spectrometry or tandem mass spectrometry detection, should be closely assessed during method development. In the case of quantitative analysis, the use of a stable isotope-labelled internal standard with physico-chemical properties and ionization behaviour similar to the analyte is recommended. In this paper, an example of the choice of a co-eluting deuterated internal standard to compensate for short-term and long-term matrix effects in the case of chiral (R,S)-methadone plasma quantification is reported. The method was fully validated over a concentration range of 5-800 ng/mL for each methadone enantiomer, with satisfactory relative bias (-1.0 to 1.0%), repeatability (0.9-4.9%) and intermediate precision (1.4-12.0%). From the results obtained during validation, a control chart process covering 52 series of routine analysis was established using both the intermediate precision standard deviation and the FDA acceptance criteria. The results of routine quality control samples were generally within the ±15% variability around the target value, and mainly within the two-standard-deviation interval, illustrating the long-term stability of the method. The intermediate precision variability estimated in method validation was found to be consistent with the routine use of the method. During this period, 257 trough-concentration and 54 peak-concentration plasma samples of patients undergoing (R,S)-methadone treatment were successfully analysed for routine therapeutic drug monitoring.
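The control-chart logic described, a two-standard-deviation interval from the intermediate precision plus the ±15% FDA acceptance window, can be sketched as follows in Python. The target and `s_ip` values are illustrative, not the study's.

```python
import numpy as np

def qc_flags(measured, target, s_ip):
    """Control-chart check of routine QC samples: flag a run when the result
    falls outside the two-standard-deviation interval (intermediate precision
    s_ip from validation) or outside the +/-15% acceptance window around the
    nominal target concentration."""
    measured = np.asarray(measured, dtype=float)
    out_2sd = np.abs(measured - target) > 2.0 * s_ip
    out_fda = np.abs(measured - target) > 0.15 * target
    return out_2sd, out_fda

qc = [102.0, 97.5, 118.0, 84.0, 101.0]   # ng/mL, toy QC results
print(qc_flags(qc, target=100.0, s_ip=4.0))
```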
Basic life support: evaluation of learning using simulation and immediate feedback devices.
Tobase, Lucia; Peres, Heloisa Helena Ciqueto; Tomazini, Edenir Aparecida Sartorelli; Teodoro, Simone Valentim; Ramos, Meire Bruna; Polastri, Thatiane Facholi
2017-10-30
To evaluate students' learning in an online course on basic life support with immediate feedback devices, during a simulation of care during cardiorespiratory arrest. A quasi-experimental study, using a before-and-after design. An online course on basic life support was developed and administered to participants as an educational intervention. Theoretical learning was evaluated by means of a pre- and post-test and, to verify practice, simulation with immediate feedback devices was used. There were 62 participants, 87% female, 90% in the first and second year of college, with a mean age of 21.47 years (standard deviation 2.39). With a 95% confidence level, the mean score was 6.4 in the pre-test (standard deviation 1.61) and 9.3 in the post-test (standard deviation 0.82, p < 0.001); in practice, the score was 9.1 (standard deviation 0.95), with performance equivalent to basic cardiopulmonary resuscitation according to the feedback device; mean duration of the compression cycle 43.7 (standard deviation 26.86) by second of 20.5 (standard deviation 9.47); number of compressions 167.2 (standard deviation 57.06); depth of compressions 48.1 millimeters (standard deviation 10.49); ventilation volume 742.7 (standard deviation 301.12); flow fraction percentage 40.3 (standard deviation 10.03). The online course contributed to learning of basic life support. In view of the need for technological innovations in teaching and the systematization of cardiopulmonary resuscitation, simulation and feedback devices are resources that favor learning and performance awareness in carrying out the maneuvers.
Using operations research to plan improvement of the transport of critically ill patients.
Chen, Jing; Awasthi, Anjali; Shechter, Steven; Atkins, Derek; Lemke, Linda; Fisher, Les; Dodek, Peter
2013-01-01
Operations research is the application of mathematical modeling, statistical analysis, and mathematical optimization to understand and improve processes in organizations. The objective of this study was to illustrate how the methods of operations research can be used to identify opportunities to reduce the absolute value and variability of interfacility transport intervals for critically ill patients. After linking data from two patient transport organizations in British Columbia, Canada, for all critical care transports during the calendar year 2006, the steps for transfer of critically ill patients were tabulated into a series of time intervals. Statistical modeling, root-cause analysis, Monte Carlo simulation, and sensitivity analysis were used to test the effect of changes in component intervals on overall duration and variation of transport times. Based on quality improvement principles, we focused on reducing the 75th percentile and standard deviation of these intervals. We analyzed a total of 3808 ground and air transports. Constraining time spent by transport personnel at sending and receiving hospitals was projected to reduce the total time taken by 33 minutes with as much as a 20% reduction in standard deviation of these transport intervals in 75% of ground transfers. Enforcing a policy of requiring acceptance of patients who have life- or limb-threatening conditions or organ failure was projected to reduce the standard deviation of air transport time by 63 minutes and the standard deviation of ground transport time by 68 minutes. Based on findings from our analyses, we developed recommendations for technology renovation, personnel training, system improvement, and policy enforcement. Use of the tools of operations research identifies opportunities for improvement in a complex system of critical care transport.
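A toy Monte Carlo of this kind of policy analysis is easy to set up in Python. The component distributions and the 30-minute cap below are illustrative stand-ins, not the study's fitted British Columbia estimates; the sketch only shows how capping one component shifts the 75th percentile and standard deviation of the total.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 100_000

# Toy lognormal component intervals, in minutes (parameters are illustrative).
dispatch = rng.lognormal(mean=3.0, sigma=0.4, size=N)
at_sending = rng.lognormal(mean=3.4, sigma=0.6, size=N)
travel = rng.lognormal(mean=3.6, sigma=0.5, size=N)
at_receiving = rng.lognormal(mean=3.0, sigma=0.6, size=N)

def summarize(total, label):
    print(f"{label}: 75th pct = {np.percentile(total, 75):.1f} min, "
          f"SD = {total.std(ddof=1):.1f} min")

baseline = dispatch + at_sending + travel + at_receiving
summarize(baseline, "baseline")

# Policy experiment: cap time spent at the sending hospital at 30 minutes,
# then re-examine the 75th percentile and SD of the total transport time.
capped = dispatch + np.minimum(at_sending, 30.0) + travel + at_receiving
summarize(capped, "capped sending-site time")
```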
Middle Atmosphere Program. Handbook for MAP, Volume 5
NASA Technical Reports Server (NTRS)
Sechrist, C. F., Jr. (Editor)
1982-01-01
The variability of the stratosphere during the winter in the Northern Hemisphere is considered. Long term monthly mean 30-mbar maps are presented that include geopotential heights, temperatures, and standard deviations of 15 year averages. Latitudinal profiles of mean zonal winds and temperatures are given along with meridional time sections of derived quantities for the winters 1965/66 to 1980/81.
ERIC Educational Resources Information Center
Bettinger, Eric; Fox, Lindsay; Loeb, Susanna; Taylor, Eric
2015-01-01
Online college courses are a rapidly expanding feature of higher education, yet little research identifies their effects. Using an instrumental variables approach and data from DeVry University, this study finds that, on average, online course-taking reduces student learning by one-third to one-quarter of a standard deviation compared to…
Protocol deviations before and after IV tPA in community hospitals
Adelman, Eric E.; Scott, Phillip A.; Skolarus, Lesli E.; Fox, Allison K.; Frederiksen, Shirley M.; Meurer, William J.
2015-01-01
Background Protocol deviations before and after tPA treatment for ischemic stroke are common. It is unclear whether patient or hospital factors predict protocol deviations. We examined predictors of protocol deviations and the effects of protocol violations on symptomatic intracerebral hemorrhage. Methods We used data from the INSTINCT trial, a cluster-randomized, controlled trial evaluating the efficacy of a barrier assessment and educational intervention to increase appropriate tPA use in 24 Michigan community hospitals, to review tPA treatments between 2007 and 2010. Protocol violations were defined as deviations from the standard tPA protocol, both before and after treatment. Multi-level logistic regression models were fitted to determine whether patient and hospital variables were associated with pre-treatment or post-treatment protocol deviations. Results During the study, 557 patients (mean age 70; 52% male; median NIHSS 12) were treated with tPA. Protocol deviations occurred in 233 (42%) patients: 16% had pre-treatment deviations, 35% had post-treatment deviations, and 9% had both. The most common protocol deviations were elevated post-treatment blood pressure, antithrombotic agent use within 24 hours of treatment, and elevated pre-treatment blood pressure. Protocol deviations were not associated with symptomatic intracerebral hemorrhage, stroke severity, or hospital factors. Older age was associated with lower odds of pre-treatment protocol deviations (adjusted OR 0.52; 95% confidence interval 0.30-0.92). Pre-treatment deviations were associated with post-treatment deviations (adjusted OR 3.20; 95% confidence interval 1.91-5.35). Conclusions Protocol deviations were not associated with symptomatic intracerebral hemorrhage. Aside from age, patient and hospital factors were not associated with protocol deviations. PMID:26419527
Moraes, Eder Rezende; Murta, Luiz Otavio; Baffa, Oswaldo; Wakai, Ronald T; Comani, Silvia
2012-10-01
We analyzed the effectiveness of linear short- and long-term variability time domain parameters, an index of sympatho-vagal balance (SDNN/RMSSD) and entropy in differentiating fetal heart rate patterns (fHRPs) on the fetal heart rate (fHR) series of 5, 3 and 2 min duration reconstructed from 46 fetal magnetocardiograms. Gestational age (GA) varied from 21 to 38 weeks. FHRPs were classified based on the fHR standard deviation. In sleep states, we observed that vagal influence increased with GA, and entropy significantly increased (decreased) with GA (SDNN/RMSSD), demonstrating that a prevalence of vagal activity with autonomous nervous system maturation may be associated with increased sleep state complexity. In active wakefulness, we observed a significant negative (positive) correlation of short-term (long-term) variability parameters with SDNN/RMSSD. ANOVA statistics demonstrated that long-term irregularity and standard deviation of normal-to-normal beat intervals (SDNN) best differentiated among fHRPs. Our results confirm that short- and long-term variability parameters are useful to differentiate between quiet and active states, and that entropy improves the characterization of sleep states. All measures differentiated fHRPs more effectively on very short HR series, as a result of the fMCG high temporal resolution and of the intrinsic timescales of the events that originate the different fHRPs.
A Model Independent General Search for new physics in ATLAS
NASA Astrophysics Data System (ADS)
Amoroso, S.; ATLAS Collaboration
2016-04-01
We present results of a model-independent general search for new phenomena in proton-proton collisions at a centre-of-mass energy of 8 TeV with the ATLAS detector at the LHC. The data set corresponds to a total integrated luminosity of 20.3 fb-1. Event topologies involving isolated electrons, photons and muons, as well as jets, including those identified as originating from b-quarks (b-jets) and missing transverse momentum are investigated. The events are subdivided according to their final states into exclusive event classes. For the 697 classes with a Standard Model expectation greater than 0.1 events, a search algorithm tests the compatibility of data against the Monte Carlo simulated background in three kinematic variables sensitive to new physics effects. No significant deviation is found in data. The number and size of the observed deviations follow the Standard Model expectation obtained from simulated pseudo-experiments.
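A much-simplified counting version of such a scan can be sketched in Python with scipy: a local Poisson p-value per event class, with the global significance of the most extreme class estimated from Standard-Model-only pseudo-experiments (the actual analysis scans full kinematic distributions, not just counts, and all numbers below are synthetic).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

def smallest_p(observed, expected):
    """Local Poisson p-value of an excess per event class; the scan
    statistic is the smallest p-value over all classes."""
    p = stats.poisson.sf(observed - 1, expected)   # P(N >= observed)
    return p.min()

expected = rng.uniform(0.1, 50.0, size=697)   # SM expectation per class (toy)
observed = rng.poisson(expected)              # "data" (here: SM only)
p_data = smallest_p(observed, expected)

# Global significance: how often do SM-only pseudo-experiments produce an
# equally small minimum p-value somewhere among the 697 classes?
trials = 2000
hits = sum(smallest_p(rng.poisson(expected), expected) <= p_data
           for _ in range(trials))
print(f"min local p = {p_data:.2e}, global p = {hits / trials:.3f}")
```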
Measurements of propeller noise in a light turboprop airplane
NASA Technical Reports Server (NTRS)
Wilby, J. F.; Wilby, E. G.
1987-01-01
In-flight acoustic measurements have been made on the exterior and interior of a twin-engined turboprop airplane under controlled conditions to study data repeatability. It is found that the variability of the harmonic sound pressure levels in the cabin is greater than that for the exterior sound pressure levels, typical values for the standard deviation being +2.0 dB and -4.2 dB for the interior, versus +1.4 dB and -2.3 dB for the exterior. When insertion losses are determined for acoustic treatments in the cabin, the standard deviations of the data are typically ±6.5 dB. It is concluded that additional factors, such as accurate and repeatable selection of relative phase between propellers, controlled cabin air temperatures, installation of baseline acoustic absorption, and measurement of aircraft attitude, should be considered in order to reduce uncertainty in the measured data.
Routine sampling and the control of Legionella spp. in cooling tower water systems.
Bentham, R H
2000-10-01
Cooling water samples from 31 cooling tower systems were cultured for Legionella over a 16-week summer period. The selected systems were known to be colonized by Legionella. Mean Legionella counts and standard deviations were calculated and time series correlograms prepared for each system. The standard deviations of Legionella counts in all the systems were very large, indicating great variability in the systems over the time period. Time series analyses demonstrated that in the majority of cases there was no significant relationship between the Legionella counts in the cooling tower at time of collection and the culture result once it was available. In the majority of systems (25/28), culture results from Legionella samples taken from the same systems 2 weeks apart were not statistically related. The data suggest that determinations of health risks from cooling towers cannot be reliably based upon single or infrequent Legionella tests.
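The time-series analysis here reduces to asking whether one fortnight's culture result predicts the next. A minimal Python sketch of a sample correlogram on a log-transformed count series follows; the fortnightly toy counts and the log10 transform are illustrative assumptions, not the study's exact procedure.

```python
import numpy as np

def autocorr(x, max_lag):
    """Sample autocorrelation of a series at lags 1..max_lag; for counts
    taken every 2 weeks, lag 1 corresponds to samples 2 weeks apart."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = (x ** 2).sum()
    return [float((x[:-k] * x[k:]).sum() / denom) for k in range(1, max_lag + 1)]

rng = np.random.default_rng(6)
counts = rng.lognormal(mean=6.0, sigma=1.5, size=8)   # 16 weeks, fortnightly
print(autocorr(np.log10(counts + 1), max_lag=3))
```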
A posteriori noise estimation in variable data sets. With applications to spectra and light curves
NASA Astrophysics Data System (ADS)
Czesla, S.; Molle, T.; Schmitt, J. H. M. M.
2018-01-01
Most physical data sets contain a stochastic contribution produced by measurement noise or other random sources along with the signal. Usually, neither the signal nor the noise is accurately known prior to the measurement, so that both have to be estimated a posteriori. We have studied a procedure to estimate the standard deviation of the stochastic contribution assuming normality and independence, requiring a sufficiently well-sampled data set to yield reliable results. This procedure is based on estimating the standard deviation in a sample of weighted sums of arbitrarily sampled data points and is identical to the so-called DER_SNR algorithm for specific parameter settings. To demonstrate the applicability of our procedure, we present applications to synthetic data, high-resolution spectra, and a large sample of space-based light curves and, finally, give guidelines for applying the procedure in situations not explicitly considered here, to promote its adoption in data analysis.
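For the specific parameter settings mentioned, the procedure reduces to the DER_SNR estimator, which is compact enough to sketch directly in Python; the test signal below is synthetic and the function name is illustrative.

```python
import numpy as np

def der_snr_noise(flux):
    """Noise (standard deviation) estimate of the DER_SNR algorithm:
    a median of second differences taken two pixels apart, scaled so the
    estimator is unbiased for independent Gaussian noise.  Assumes the
    signal varies slowly compared to the two-pixel baseline."""
    f = np.asarray(flux, dtype=float)
    if f.size < 5:
        raise ValueError("need at least 5 samples")
    second_diff = np.abs(2.0 * f[2:-2] - f[:-4] - f[4:])
    return 1.482602 / np.sqrt(6.0) * np.median(second_diff)

rng = np.random.default_rng(7)
x = np.linspace(0, 1, 2000)
spectrum = np.sin(8 * np.pi * x) + rng.normal(0, 0.05, x.size)
print(der_snr_noise(spectrum))   # should be close to the true 0.05
```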
NASA Technical Reports Server (NTRS)
Wetzel, Peter J.; Chang, Jy-Tai
1988-01-01
Observations of surface heterogeneity of soil moisture from scales of meters to hundreds of kilometers are discussed, and a relationship between grid element size and soil moisture variability is presented. An evapotranspiration model is presented which accounts for the variability of soil moisture, standing surface water, and vegetation internal and stomatal resistance to moisture flow from the soil. The mean values and standard deviations of these parameters are required as input to the model. Tests of this model against field observations are reported, and extensive sensitivity tests are presented which explore the importance of including subgrid-scale variability in an evapotranspiration model.
Koch, Iris; Reimer, Kenneth J; Bakker, Martine I; Basta, Nicholas T; Cave, Mark R; Denys, Sébastien; Dodd, Matt; Hale, Beverly A; Irwin, Rob; Lowney, Yvette W; Moore, Margo M; Paquin, Viviane; Rasmussen, Pat E; Repaso-Subang, Theresa; Stephenson, Gladys L; Siciliano, Steven D; Wragg, Joanna; Zagury, Gerald J
2013-01-01
Bioaccessibility is a measurement of a substance's solubility in the human gastro-intestinal system, and is often used in the risk assessment of soils. The present study was designed to determine the variability among laboratories using different methods to measure the bioaccessibility of 24 inorganic contaminants in one standardized soil sample, the standard reference material NIST 2710. Fourteen laboratories used a total of 17 bioaccessibility extraction methods. The variability between methods was assessed by calculating the reproducibility relative standard deviations (RSDs), where reproducibility is the sum of within-laboratory and between-laboratory variability. Whereas within-laboratory repeatability was usually better than (<) 15% for most elements, reproducibility RSDs were much higher, indicating more variability, although for many elements they were comparable to typical uncertainties (e.g., 30% in commercial laboratories). For five trace elements of interest, reproducibility RSDs were: arsenic (As), 22-44%; cadmium (Cd), 11-41%; Cu, 15-30%; lead (Pb), 45-83%; and Zn, 18-56%. Only one method variable, pH, was found to correlate significantly with bioaccessibility for aluminum (Al), Cd, copper (Cu), manganese (Mn), Pb and zinc (Zn) but other method variables could not be examined systematically because of the study design. When bioaccessibility results were directly compared with bioavailability results for As (swine and mouse) and Pb (swine), four methods returned results within uncertainty ranges for both elements: two that were defined as simpler (gastric phase only, limited chemicals) and two were more complex (gastric + intestinal phases, with a mixture of chemicals).
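The reproducibility calculation follows the usual one-way ANOVA decomposition of an interlaboratory study (ISO 5725 style). A Python sketch for a balanced design with toy bioaccessibility values follows; the lab names and numbers are illustrative, and the abstract does not state whether the study's design was balanced.

```python
import numpy as np

def reproducibility_rsd(results):
    """Variance decomposition for a between-laboratory study: repeatability
    s_r (within-lab), between-lab s_L, and reproducibility
    s_R = sqrt(s_r^2 + s_L^2), reported as relative standard deviations (%).
    `results` maps lab -> replicate values; assumes equal replicates per lab."""
    labs = [np.asarray(v, dtype=float) for v in results.values()]
    n = len(labs[0])                                   # replicates per lab
    grand = np.mean([lab.mean() for lab in labs])
    ms_within = np.mean([lab.var(ddof=1) for lab in labs])
    ms_between = n * np.var([lab.mean() for lab in labs], ddof=1)
    s_r2 = ms_within
    s_L2 = max((ms_between - ms_within) / n, 0.0)
    s_R = np.sqrt(s_r2 + s_L2)
    return 100.0 * np.sqrt(s_r2) / grand, 100.0 * s_R / grand

data = {"lab1": [42.0, 44.0, 43.0], "lab2": [55.0, 53.0, 56.0],
        "lab3": [38.0, 40.0, 39.0]}   # toy As bioaccessibility (%)
print(reproducibility_rsd(data))      # (repeatability RSD, reproducibility RSD)
```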
NASA Astrophysics Data System (ADS)
Alamirew, Netsanet K.; Todd, Martin C.; Ryder, Claire L.; Marsham, John H.; Wang, Yi
2018-01-01
The Saharan heat low (SHL) is a key component of the west African climate system and an important driver of the west African monsoon across a range of timescales of variability. The physical mechanisms driving the variability in the SHL remain uncertain, although water vapour has been implicated as of primary importance. Here, we quantify the independent effects of variability in dust and water vapour on the radiation budget and atmospheric heating of the region using a radiative transfer model configured with observational input data from the Fennec field campaign at the location of Bordj Badji Mokhtar (BBM) in southern Algeria (21.4° N, 0.9° E), close to the SHL core, for June 2011. Overall, we find dust aerosol and water vapour to be of similar importance in driving variability in the top-of-atmosphere (TOA) radiation budget and therefore the column-integrated heating over the SHL (~7 W m-2 per standard deviation of dust aerosol optical depth, AOD). As such, we infer that SHL intensity is likely to be similarly enhanced by the effects of dust and water vapour surge events. However, the details of the processes differ. Dust generates substantial radiative cooling at the surface (~11 W m-2 per standard deviation of dust AOD), presumably leading to reduced sensible heat flux in the boundary layer, which is more than compensated by direct radiative heating from shortwave (SW) absorption by dust in the dusty boundary layer. In contrast, water vapour invokes a radiative warming at the surface of ~6 W m-2 per standard deviation of column-integrated water vapour in kg m-2. Net effects involve a pronounced net atmospheric radiative convergence, with heating rates of 0.5 K day-1 on average and up to 6 K day-1 during synoptic/mesoscale dust events from monsoon surges and convective cold-pool outflows (haboobs). On this basis, we make inferences on the processes driving variability in the SHL associated with radiative and advective heating/cooling. Depending on the synoptic context over the region, the driving processes involve both independent effects of water vapour and dust and compensating events in which dust and water vapour co-vary. Forecast models typically have biases of up to 2 kg m-2 in column-integrated water vapour (equivalent to a change of 2.6 W m-2 in TOA net flux) and typically lack variability in dust, and thus are expected to represent these couplings poorly. An improved representation of dust and water vapour, and quantification of the associated radiative impact in models, is thus imperative for further understanding the SHL and related climate processes.
NASA Astrophysics Data System (ADS)
Lavely, Adam; Vijayakumar, Ganesh; Brasseur, James; Paterson, Eric; Kinzel, Michael
2011-11-01
Using large-eddy simulation (LES) of the neutral and moderately convective atmospheric boundary layers (NBL, MCBL), we analyze the impact of coherent turbulence structure of the atmospheric surface layer on the short-time statistics that are commonly collected from wind turbines. The incoming winds are conditionally sampled with a filtering and thresholding algorithm into high/low horizontal and vertical velocity fluctuation coherent events. The time scales of these events are ~5-20 blade rotations and are roughly twice as long in the MCBL as in the NBL. Horizontal velocity events are associated with greater variability in rotor power, lift and blade-bending moment than vertical velocity events. The variability in the industry-standard 10-minute average for rotor power, sectional lift and wind velocity had a standard deviation of ~5% relative to the "infinite time" statistics for the NBL and ~10% for the MCBL. We conclude that turbulence structure associated with atmospheric stability state contributes considerable, quantifiable variability to wind turbine statistics. Supported by NSF and DOE.
NASA Astrophysics Data System (ADS)
Grossi, Claudia; Vogel, Felix R.; Curcoll, Roger; Àgueda, Alba; Vargas, Arturo; Rodó, Xavier; Morguí, Josep-Anton
2018-04-01
The ClimaDat station at Gredos (GIC3) has been continuously measuring atmospheric (dry air) mixing ratios of carbon dioxide (CO2) and methane (CH4), as well as meteorological parameters, since November 2012. In this study we investigate the atmospheric variability of CH4 mixing ratios between 2013 and 2015 at GIC3 with the help of co-located observations of 222Rn concentrations, modelled 222Rn fluxes and modelled planetary boundary layer heights (PBLHs). Both daily and seasonal changes in atmospheric CH4 can be better understood with the help of atmospheric concentrations of 222Rn (and the corresponding fluxes). On a daily timescale, the variation in the PBLH is the main driver of 222Rn and CH4 variability while, on monthly timescales, their atmospheric variability seems to depend on emission changes. To understand (changing) CH4 emissions, nocturnal fluxes of CH4 were estimated using two methods: the radon tracer method (RTM) and a method based on the EDGARv4.2 bottom-up emission inventory, both using FLEXPARTv9.0.2 footprints. The mean value of RTM-based methane fluxes (FR_CH4) is 0.11 mg CH4 m-2 h-1 (standard deviation 0.09 mg CH4 m-2 h-1), or 0.29 mg CH4 m-2 h-1 (standard deviation 0.23 mg CH4 m-2 h-1) when using a rescaled 222Rn map (FR_CH4_rescale). For our observational period, the mean value of methane fluxes based on the bottom-up inventory (FE_CH4) is 0.33 mg CH4 m-2 h-1 with a standard deviation of 0.08 mg CH4 m-2 h-1. Monthly CH4 fluxes based on RTM (both FR_CH4 and FR_CH4_rescale) show a seasonality which is not observed for monthly FE_CH4 fluxes. During January-May, RTM-based CH4 fluxes have mean values 25% lower than during June-December. This seasonal increase in methane fluxes calculated by RTM for the GIC3 area appears to coincide with the arrival of transhumant livestock at GIC3 in the second half of the year.
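The radon tracer method itself reduces to scaling the nocturnal CH4-versus-222Rn regression slope by the radon flux. A minimal Python sketch follows with synthetic accumulation data; unit conversions (ppb to mass concentration) and the footprint weighting used in the study are omitted.

```python
import numpy as np

def rtm_flux(ch4_ppb, rn_bq_m3, j_rn):
    """Radon tracer method (RTM): during a stable nocturnal boundary layer,
    both gases accumulate in proportion to their surface fluxes, so
    j_CH4 = j_Rn * (dCH4/dRn), with the slope from a least-squares fit of
    nighttime CH4 against 222Rn.  Unit conversions omitted for brevity."""
    slope = np.polyfit(rn_bq_m3, ch4_ppb, 1)[0]
    return j_rn * slope

# Toy nocturnal accumulation: 222Rn in Bq m-3, CH4 in ppb, one night of data.
rng = np.random.default_rng(8)
rn = np.linspace(2.0, 8.0, 24) + rng.normal(0, 0.2, 24)
ch4 = 1900.0 + 12.0 * rn + rng.normal(0, 5.0, 24)
print(rtm_flux(ch4, rn, j_rn=50.0))
```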
Schmidt, Alice; Aurich, Jörg; Möstl, Erich; Müller, Jürgen; Aurich, Christine
2010-09-01
Based on cortisol release, a variety of situations to which domestic horses are exposed have been classified as stressors, but studies on stress during equestrian training are limited. In the present study, Warmblood stallions (n = 9) and mares (n = 7) were followed through a 9-week and a 12-week initial training program, respectively, in order to determine potentially stressful training steps. Salivary cortisol concentrations, beat-to-beat (RR) interval and heart rate variability (HRV) were determined. The HRV variables standard deviation of the RR interval (SDRR), RMSSD (root mean square of successive RR differences) and the geometric measures standard deviation 1 (SD1) and standard deviation 2 (SD2) were calculated. Nearly every training unit was associated with an increase in salivary cortisol concentrations (p < 0.01). Cortisol release varied between training units and occasionally was more pronounced in mares than in stallions (p < 0.05). The RR interval decreased slightly in response to lunging before mounting of the rider. A pronounced decrease occurred when the rider was mounting, but before the horse showed physical activity (p < 0.001). The HRV variables SDRR, RMSSD and SD1 decreased in response to training, and the lowest values were reached during mounting of a rider (p < 0.001). Thereafter the RR interval and HRV variables increased again. In contrast, SD2 increased with the beginning of lunging (p < 0.05) and no changes in response to mounting were detectable. In conclusion, initial training is a stressor for horses. The most pronounced reaction occurred in response to mounting by a rider, a situation resembling a potentially lethal threat under natural conditions. Copyright (c) 2010 Elsevier Inc. All rights reserved.
Kessler, Thomas; Neumann, Jörg; Mummendey, Amélie; Berthold, Anne; Schubert, Thomas; Waldzus, Sven
2010-09-01
To explain the determinants of negative behavior toward deviants (e.g., punishment), this article examines how people evaluate others on the basis of two types of standards: minimal and maximal. Minimal standards focus on an absolute cutoff point for appropriate behavior; accordingly, the evaluation of others varies dichotomously between acceptable or unacceptable. Maximal standards focus on the degree of deviation from that standard; accordingly, the evaluation of others varies gradually from positive to less positive. This framework leads to the prediction that violation of minimal standards should elicit punishment regardless of the degree of deviation, whereas punishment in response to violations of maximal standards should depend on the degree of deviation. Four studies assessed or manipulated the type of standard and degree of deviation displayed by a target. Results consistently showed the expected interaction between type of standard (minimal and maximal) and degree of deviation on punishment behavior.
Susanne Winter; Andreas Böck; Ronald E. McRoberts
2012-01-01
Tree diameter and height are commonly measured forest structural variables, and indicators based on them are candidates for assessing forest diversity. We conducted our study on the uncertainty of estimates for mostly large geographic scales for four indicators of forest structural gamma diversity: mean tree diameter, mean tree height, and standard deviations of tree...
Mentors Offering Maternal Support (M.O.M.S.)
2011-08-02
...at Sessions 1, 5, and 8. Table 1: pretest-posttest, randomized, controlled, repeated-measures design (Experimental Intervention Sessions)... theoretical mediators of self-esteem and emotional support (0.6 standard deviation change from pretest to posttest), with reduction of effect to 0.4... always brought back to the designated topic. In order to have statistically significant results for the outcome variables, the study sessions must...
Rivera, Ana Leonor; Estañol, Bruno; Sentíes-Madrid, Horacio; Fossion, Ruben; Toledo-Roy, Juan C.; Mendoza-Temis, Joel; Morales, Irving O.; Landa, Emmanuel; Robles-Cabrera, Adriana; Moreno, Rene; Frank, Alejandro
2016-01-01
Diabetes Mellitus (DM) affects the cardiovascular response of patients. To study this effect, interbeat intervals (IBI) and beat-to-beat systolic blood pressure (SBP) variability of patients during supine, standing and controlled breathing tests were analyzed in the time domain. Simultaneous noninvasive measurements of IBI and SBP for 30 recently diagnosed and 15 long-standing DM patients were compared with the results for 30 rigorously screened healthy subjects (control). A statistically significant distinction between control and diabetic subjects was provided by the standard deviation and the higher moments of the distributions (skewness and kurtosis) with respect to the median. To compare IBI and SBP for different populations, we define a parameter, α, that combines the variability of the heart rate and the blood pressure, as the ratio of the radius of the moments for IBI and the same radius for SBP. As diabetes evolves, α decreases, standard deviation of the IBI detrended signal diminishes (heart rate signal becomes more “rigid”), skewness with respect to the median approaches zero (signal fluctuations gain symmetry), and kurtosis increases (fluctuations concentrate around the median). Diabetes produces not only a rigid heart rate, but also increases symmetry and has leptokurtic distributions. SBP time series exhibit the most variable behavior for recently diagnosed DM with platykurtic distributions. Under controlled breathing, SBP has symmetric distributions for DM patients, while control subjects have non-zero skewness. This may be due to a progressive decrease of parasympathetic and sympathetic activity to the heart and blood vessels as diabetes evolves. PMID:26849653
Ambulatory blood pressure profiles in familial dysautonomia.
Goldberg, Lior; Bar-Aluma, Bat-El; Krauthammer, Alex; Efrati, Ori; Sharabi, Yehonatan
2018-02-12
Familial dysautonomia (FD) is a rare genetic disease that involves extreme blood pressure fluctuations secondary to afferent baroreflex failure. The diurnal blood pressure profile, including the average, variability, and day-night difference, may have implications for long-term end organ damage. The purpose of this study was to describe the circadian pattern of blood pressure in the FD population and its relationships with renal and pulmonary function, use of medications, and overall disability. We analyzed 24-h ambulatory blood pressure monitoring recordings in 22 patients with FD. Information about medications, disease severity, renal function (estimated glomerular filtration rate, eGFR), pulmonary function (forced expiratory volume in 1 s, FEV1) and an index of blood pressure variability (standard deviation of systolic pressure) was analyzed. The mean (± SEM) 24-h blood pressure was 115 ± 5.6/72 ± 2.0 mmHg. The diurnal blood pressure variability was high (daytime systolic pressure standard deviation 22.4 ± 1.5 mmHg, nighttime 17.2 ± 1.6), with a high frequency of a non-dipping pattern (16 patients, 73%). eGFR, use of medications, FEV1, and disability scores were unrelated to the degree of blood pressure variability or to dipping status. This FD cohort had normal average 24-h blood pressure, fluctuating blood pressure, and a high frequency of non-dippers. Although there was evidence of renal dysfunction based on eGFR and proteinuria, the ABPM profile was unrelated to the measures of end organ dysfunction or to reported disability.
Model averaging and muddled multimodel inferences.
Cade, Brian S
2015-09-01
Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effects size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the t statistics on unstandardized estimates also can be used to provide more informative measures of relative importance than sums of AIC weights. Finally, I illustrate how seriously compromised statistical interpretations and predictions can be for all three of these flawed practices by critiquing their use in a recent species distribution modeling technique developed for predicting Greater Sage-Grouse (Centrocercus urophasianus) distribution in Colorado, USA. These model averaging issues are common in other ecological literature and ought to be discontinued if we are to make effective scientific contributions to ecological knowledge and conservation of natural resources.
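The partial-standard-deviation standardization referred to here, following Bring (1994), can be sketched directly in Python. The degrees-of-freedom factor below is one common convention and may differ in detail from the paper's; the data are synthetic and the function name is illustrative.

```python
import numpy as np

def partial_sd(X):
    """Partial standard deviations for the columns of a predictor matrix X:
    s_j * sqrt(1/VIF_j) * sqrt((n-1)/(n-p)), where VIF_j = 1/(1 - R_j^2)
    and R_j^2 comes from regressing predictor j on the other predictors.
    Multiplying a regression coefficient by its partial SD puts estimates
    on a common scale across models with differing multicollinearity."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        xj = X[:, j]
        A = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        resid = xj - A @ np.linalg.lstsq(A, xj, rcond=None)[0]
        r2 = 1.0 - resid.var() / xj.var()
        vif = 1.0 / (1.0 - r2)
        out[j] = xj.std(ddof=1) * np.sqrt(1.0 / vif) * np.sqrt((n - 1) / (n - p))
    return out

rng = np.random.default_rng(9)
z = rng.normal(size=(200, 1))
X = z + 0.3 * rng.normal(size=(200, 3))   # three strongly collinear predictors
print(partial_sd(X))                       # much smaller than the raw SDs
```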
NASA Technical Reports Server (NTRS)
Rind, D.; Suozzo, R.; Balachandran, N. K.
1988-01-01
The variability which arises in the GISS Global Climate-Middle Atmosphere Model on two time scales is reviewed: interannual standard deviations, derived from the five-year control run, and intraseasonal variability as exemplified by stratospheric warmings. The model's extratropical variability for both mean fields and eddy statistics appears reasonable when compared with observations, while the tropical wind variability near the stratopause may be excessive, possibly due to inertial oscillations. Both wave 1 and wave 2 warmings develop, with connections to tropospheric forcing. Variability on both time scales results from a complex set of interactions among planetary waves, the mean circulation, and gravity wave drag. Specific examples of these interactions are presented, which imply that variability in gravity wave forcing and drag may be an important component of the variability of the middle atmosphere.
Noftle, Erik E; Fleeson, William
2010-03-01
In 3 intensive cross-sectional studies, age differences in behavior averages and variabilities were examined. Three questions were posed: Does variability differ among age groups? Does the sizable variability in young adulthood persist throughout the life span? Do past conclusions about trait development, based on trait questionnaires, hold up when actual behavior is examined? Three groups participated: young adults (18-23 years), middle-aged adults (35-55 years), and older adults (65-81 years). In 2 experience-sampling studies, participants reported their current behavior multiple times per day for 1- or 2-week spans. In a 3rd study, participants interacted in standardized laboratory activities on 8 occasions. First, results revealed a sizable amount of intraindividual variability in behavior for all adult groups, with average within-person standard deviations ranging from about half a point to well over 1 point on 6-point scales. Second, older adults were most variable in Openness, whereas young adults were most variable in Agreeableness and Emotional Stability. Third, most specific patterns of maturation-related age differences in actual behavior were more greatly pronounced and differently patterned than those revealed by the trait questionnaire method. When participants interacted in standardized situations, personality differences between young adults and middle-aged adults were larger, and older adults exhibited a more positive personality profile than they exhibited in their everyday lives.
Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 1: January
NASA Astrophysics Data System (ADS)
Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.
1989-07-01
The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analyses produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of January. Included are global analyses of: (1) Mean temperature/standard deviation; (2) Mean geopotential height/standard deviation; (3) Mean density/standard deviation; (4) Height and vector standard deviation (all for 13 levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point/standard deviation for the 13 levels; and (6) Jet stream at levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.
Multiple-wavelength transmission measurements in rocket motor plumes
NASA Astrophysics Data System (ADS)
Kim, Hong-On
1991-09-01
Multiple-wavelength light transmission measurements were used to measure the mean particle size (d32), index of refraction (m), and standard deviation of the small particles in the edge of the plume of a small solid propellant rocket motor. The results have shown that the multiple-wavelength light transmission measurement technique can be used to obtain these variables. The technique was shown to be more sensitive to changes in d32 and standard deviation (sigma) than to m. A GAP/AP/4.7 percent aluminum propellant burned at 25 atm produced particles with d32 = 0.150 +/- 0.006 microns, standard deviation = 1.50 +/- 0.04, and m = 1.63 +/- 0.13. The good correlation of the data indicated that only submicron particles were present in the edge of the plume. In today's budget-conscious industry, the solid propellant rocket motor is an ideal propulsion system due to its low cost and simplicity. The major obstacle for solid rocket motors, however, is their limited specific impulse compared to airbreathing motors. One way to help overcome this limitation is to utilize metal fuel additives. Solid propellant rocket motors can achieve high specific impulse with metal fuel additives such as aluminum. Aluminum propellants also increase propellant densities and suppress transverse modes of combustion oscillations by damping the oscillations with the aluminum agglomerates in the combustion chamber.
The performance of the standard rate turn (SRT) by student naval helicopter pilots.
Chapman, F; Temme, L A; Still, D L
2001-04-01
During flight training, student naval helicopter pilots learn the use of flight instruments through a prescribed series of simulator training events. The training simulator is a 6-degrees-of-freedom, motion-based, high-fidelity instrument trainer. From the final basic instrument simulator flights of student pilots, we selected for evaluation and analysis their performance of the Standard Rate Turn (SRT), a routine flight maneuver. The performance of the SRT was scored with airspeed, altitude, and heading average error from target values and standard deviations. These average errors and standard deviations were used in a multivariate analysis of variance (MANOVA) to evaluate the effects of three independent variables: 1) direction of turn (left vs. right); 2) degree of turn (180 vs. 360 degrees); and 3) segment of turn (roll-in, first 30 s, last 30 s, and roll-out of turn). Only the main effects of the three independent variables were significant; there were no significant interactions. This result greatly reduces the number of different conditions that should be scored separately for the evaluation of SRT performance. The results also showed that the magnitude of the heading and altitude errors at the beginning of the SRT correlated with the magnitude of the heading and altitude errors throughout the turn. This result suggests that for the turn to be well executed, it is important for it to begin with little error in these two response parameters. The observations reported here should be considered when establishing SRT performance norms and comparing student scores. Furthermore, it seems easier for pilots to maintain good performance than to correct poor performance.
Wagner, Julie A; Tennen, Howard; Feinn, Richard; Osborn, Chandra Y
2015-04-01
We investigated whether self-reported racial discrimination was associated with continuous glucose levels and variability in individuals with diabetes, and whether diabetes distress mediated these associations. Seventy-four Black and White women with type 2 diabetes completed the Experience of Discrimination scale, a measure of lifetime racial discrimination, and the Problem Areas in Diabetes, a measure of diabetes distress. Participants wore a continuous glucose monitor for 24 h after 8 h of fasting, a standard meal, and a 4-h run-in period. Higher discrimination predicted higher continuous mean glucose and higher standard deviation of glucose. For both mean and standard deviation of glucose, a race × discrimination interaction indicated a stronger relationship between discrimination and glucose for Whites than for Blacks. Diabetes distress mediated the discrimination-mean glucose relationship. Whites who report discrimination may be uniquely sensitive to distress. These preliminary findings suggest that racial discrimination adversely affects glucose control in women with diabetes, and does so indirectly through diabetes distress. Diabetes distress may be an important therapeutic target to reduce the ill effects of racial discrimination in persons with diabetes.
Gosse, Philippe; Cremer, Antoine; Pereira, Helena; Bobrie, Guillaume; Chatellier, Gilles; Chamontin, Bernard; Courand, Pierre-Yves; Delsart, Pascal; Denolle, Thierry; Dourmap, Caroline; Ferrari, Emile; Girerd, Xavier; Michel Halimi, Jean; Herpin, Daniel; Lantelme, Pierre; Monge, Matthieu; Mounier-Vehier, Claire; Mourad, Jean-Jacques; Ormezzano, Olivier; Ribstein, Jean; Rossignol, Patrick; Sapoval, Marc; Vaïsse, Bernard; Zannad, Faiez; Azizi, Michel
2017-03-01
The DENERHTN trial (Renal Denervation for Hypertension) confirmed the blood pressure (BP) lowering efficacy of renal denervation added to a standardized stepped-care antihypertensive treatment for resistant hypertension at 6 months. We report here the effect of denervation on 24-hour BP and its variability and look for parameters that predicted the BP response. Patients with resistant hypertension were randomly assigned to denervation plus stepped-care treatment or treatment alone (control). Average and standard deviation of 24-hour, daytime, and nighttime BP and the smoothness index were calculated on recordings performed at randomization and 6 months. Responders were defined as a 6-month 24-hour systolic BP reduction ≥20 mm Hg. Analyses were performed on the per-protocol population. The significantly greater BP reduction in the denervation group was associated with a higher smoothness index (P = 0.02). Variability of 24-hour, daytime, and nighttime BP did not change significantly from baseline to 6 months in both groups. The number of responders was greater in the denervation (20/44, 44.5%) than in the control group (11/53, 20.8%; P = 0.01). In the discriminant analysis, baseline average nighttime systolic BP and standard deviation were significant predictors of the systolic BP response in the denervation group only, allowing adequate responder classification of 70% of the patients. Our results show that denervation lowers ambulatory BP homogeneously over 24 hours in patients with resistant hypertension and suggest that nighttime systolic BP and variability are predictors of the BP response to denervation. URL: https://www.clinicaltrials.gov. Unique identifier: NCT01570777. © 2017 American Heart Association, Inc.
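For reference, the smoothness index is commonly computed as the average of the hourly treatment-induced BP reductions divided by their standard deviation; a minimal sketch under that assumption (the hourly averaging and array names are illustrative):

```python
import numpy as np

def smoothness_index(hourly_bp_baseline, hourly_bp_month6):
    # Mean of the 24 hourly BP reductions divided by their standard
    # deviation; higher values indicate a more homogeneous 24-h effect.
    delta = np.asarray(hourly_bp_baseline) - np.asarray(hourly_bp_month6)
    return delta.mean() / delta.std(ddof=1)
```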
Austin, Peter C; Steyerberg, Ewout W
2012-06-20
When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. We derived an analytical expression for the c-statistic under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examined the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal, or uniform distribution in the combined sample of those with and without the condition. Under the assumption of binormality with equality of variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, or uniform in the entire sample of those with and without the condition. The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population.
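The closed form under binormality with equal variances is c = Φ(β·σ/√2), with β the log-odds ratio and σ the common standard deviation; the sketch below checks it against the empirical c-statistic by simulation (parameter values are illustrative):

```python
import numpy as np
from scipy.stats import norm, mannwhitneyu

rng = np.random.default_rng(1)
mu0, mu1, sd, n = 0.0, 1.0, 1.5, 50_000   # binormal setup

x0 = rng.normal(mu0, sd, n)   # explanatory variable, without the condition
x1 = rng.normal(mu1, sd, n)   # explanatory variable, with the condition

# Log-odds ratio implied by equal-variance binormality, and the closed form
# c = Phi(beta * sd / sqrt(2)) = Phi((mu1 - mu0) / (sd * sqrt(2))).
beta = (mu1 - mu0) / sd**2
c_pred = norm.cdf(beta * sd / np.sqrt(2))

# Empirical c-statistic = P(X_case > X_control) = Mann-Whitney U / (n1 * n0).
c_emp = mannwhitneyu(x1, x0).statistic / (len(x1) * len(x0))
print(round(c_pred, 4), round(c_emp, 4))   # the two should agree closely
```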
Determinants of ocular deviation in esotropic subjects under general anesthesia.
Daien, Vincent; Turpin, Chloé; Lignereux, François; Belghobsi, Riadh; Le Meur, Guylene; Lebranchu, Pierre; Pechereau, Alain
2013-01-01
The authors attempted to identify the determinants of ocular deviation in a population of patients with esotropia under general anesthesia. Forty-one patients with esotropia were included. Horizontal ocular deviation was evaluated by the photographic Hirschberg test both in the awakened state and under general anesthesia before surgery. Changes in ocular deviation were measured and a multivariate analysis was used to assess its clinical determinants. The mean age (± standard deviation [SD]) of study subjects was 13 ± 11 years and 51% were females. The mean spherical equivalent refraction of the right eye was 2.44 ± 2.50 diopters (D), with no significant difference between eyes (P = .26). The mean ocular deviation changed significantly, from 33.5 ± 12.5 prism diopters (PD) at preoperative examination to 8.8 ± 11.4 PD under general anesthesia (P = .0001). The changes in ocular deviation positively correlated with the preoperative ocular deviation (correlation coefficient r = 0.59, P = .0001) and negatively correlated with patient age (correlation coefficient r = -0.53, P = .0001). These two determinants remained significant after multivariate adjustment of the following variables: preoperative ocular deviation; age; gender; spherical equivalent refraction; and number of previous strabismus surgeries (model r² = 0.49, P = .0001). The ocular position under general anesthesia was reported as a key factor in the surgical treatment of subjects with esotropia; therefore, its clinical determinants were assessed. The authors observed that preoperative ocular deviation and patient age were the main factors that influenced the ocular position under general anesthesia. Copyright 2013, SLACK Incorporated.
Wadehn, Federico; Carnal, David; Loeliger, Hans-Andrea
2015-08-01
Heart rate variability is one of the key parameters for assessing the health status of a subject's cardiovascular system. This paper presents a local model fitting algorithm used for finding single heart beats in photoplethysmogram recordings. The local fit of exponentially decaying cosines of frequencies within the physiological range is used to detect the presence of a heart beat. Using 42 subjects from the CapnoBase database, the average heart rate error was 0.16 BPM and the standard deviation of the absolute estimation error was 0.24 BPM.
Methodology for the development of normative data for Spanish-speaking pediatric populations.
Rivera, D; Arango-Lasprilla, J C
2017-01-01
To describe the methodology utilized to calculate reliability and the generation of norms for 10 neuropsychological tests for children in Spanish-speaking countries. The study sample consisted of 4,373 healthy children from nine countries in Latin America (Chile, Cuba, Ecuador, Guatemala, Honduras, Mexico, Paraguay, Peru, and Puerto Rico) and Spain. Inclusion criteria for all countries were age 6 to 17 years, an Intelligence Quotient of ≥80 on the Test of Non-Verbal Intelligence (TONI-2), and a score of <19 on the Children's Depression Inventory. Participants completed 10 neuropsychological tests. Reliability and norms were calculated for all tests. Test-retest analysis showed excellent or good reliability on all tests (r's > 0.55; p's < 0.001) except M-WCST perseverative errors, whose coefficient magnitude was fair. All scores were normed using multiple linear regressions and standard deviations of residual values. Age, age², sex, and mean level of parental education (MLPE) were included as predictors in the models by country. Non-significant variables (p > 0.05) were removed and the analyses were run again. This is the largest normative study of Spanish-speaking children and adolescents in the world. For the generation of normative data, the method based on linear regression models and the standard deviation of residual values was used. This method allows determination of the specific variables that predict test scores, helps identify and control for collinearity of predictive variables, and generates continuous and more reliable norms than those of traditional methods.
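The norming method described (regress scores on age, age², sex, and MLPE, then scale residuals by their standard deviation) can be sketched as follows; this is a minimal illustration, not the authors' code, and the variable names are assumptions:

```python
import numpy as np

def fit_norms(age, sex, mlpe, score):
    # Fit score ~ age + age^2 + sex + MLPE and keep the coefficients
    # together with the standard deviation of the residuals.
    X = np.column_stack([np.ones_like(age), age, age**2, sex, mlpe])
    coef, *_ = np.linalg.lstsq(X, score, rcond=None)
    resid = score - X @ coef
    sd_resid = resid.std(ddof=X.shape[1])  # denominator n - number of parameters
    return coef, sd_resid

def norm_z(coef, sd_resid, age, sex, mlpe, observed):
    # Continuous norm: z-score of an observed score against the model.
    predicted = coef @ np.array([1.0, age, age**2, sex, mlpe])
    return (observed - predicted) / sd_resid
```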
Downs, Stephen; Marquez, Jodie; Chiarelli, Pauline
2014-06-01
What is the mean Berg Balance Scale score of healthy elderly people living in the community and how does it vary with age? How much variability in Berg Balance Scale scores is present in groups of healthy elderly people and how does this vary with age? Systematic review with meta-analysis. Any group of healthy community-dwelling people with a mean age of 70 years or greater that has undergone assessment using the Berg Balance Scale. Mean and standard deviations of Berg Balance Scale scores within cohorts of elderly people of known mean age. The search yielded 17 relevant studies contributing data from a total of 1363 participants. The mean Berg Balance Scale scores ranged from 37 to 55 out of a possible maximum score of 56. The standard deviation of Berg Balance Scale scores varied from 1.0 to 9.2. Although participants aged around 70 years had very close to normal Berg Balance Scale scores, there was a significant decline in balance with age at a rate of 0.7 points on the 56-point Berg Balance Scale per year. There was also a strong association between increasing age and increasing variability in balance (R² = 0.56, p < 0.001). Healthy community-dwelling elderly people have modest balance deficits, as measured by the Berg Balance Scale, although balance scores deteriorate and become more variable with age. Copyright © 2014. Published by Elsevier B.V.
Cervical vertebral bone mineral density changes in adolescents during orthodontic treatment.
Crawford, Bethany; Kim, Do-Gyoon; Moon, Eun-Sang; Johnson, Elizabeth; Fields, Henry W; Palomo, J Martin; Johnston, William M
2014-08-01
The cervical vertebral maturation (CVM) stages have been used to estimate facial growth status. In this study, we examined whether cone-beam computed tomography images can be used to detect changes of CVM-related parameters and bone mineral density distribution in adolescents during orthodontic treatment. Eighty-two cone-beam computed tomography images were obtained from 41 patients before (14.47 ± 1.42 years) and after (16.15 ± 1.38 years) orthodontic treatment. Two cervical vertebral bodies (C2 and C3) were digitally isolated from each image, and their volumes, means, and standard deviations of gray-level histograms were measured. The CVM stages and mandibular lengths were also estimated after converting the cone-beam computed tomography images. Significant changes for the examined variables were detected during the observation period (P ≤0.018) except for C3 vertebral body volume (P = 0.210). The changes of CVM stage had significant positive correlations with those of vertebral body volume (P ≤0.021). The change of the standard deviation of bone mineral density (variability) showed significant correlations with those of vertebral body volume and mandibular length for C2 (P ≤0.029). The means and variability of the gray levels account for bone mineral density and active remodeling, respectively. Our results indicate that bone mineral density distribution and the volume of the cervical vertebral body changed because of active bone remodeling during maturation. Copyright © 2014 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.
Analysis of variability of tropical Pacific sea surface temperatures
NASA Astrophysics Data System (ADS)
Davies, Georgina; Cressie, Noel
2016-11-01
Sea surface temperature (SST) in the Pacific Ocean is a key component of many global climate models and the El Niño-Southern Oscillation (ENSO) phenomenon. We shall analyse SST for the period November 1981-December 2014. To study the temporal variability of the ENSO phenomenon, we have selected a subregion of the tropical Pacific Ocean, namely the Niño 3.4 region, as it is thought to be the area where SST anomalies indicate most clearly ENSO's influence on the global atmosphere. SST anomalies, obtained by subtracting the appropriate monthly averages from the data, are the focus of the majority of previous analyses of the Pacific and other oceans' SSTs. Preliminary data analysis showed that not only Niño 3.4 spatial means but also Niño 3.4 spatial variances varied with month of the year. In this article, we conduct an analysis of the raw SST data and introduce diagnostic plots (here, plots of variability vs. central tendency). These plots show strong negative dependence between the spatial standard deviation and the spatial mean. Outliers are present, so we consider robust regression to obtain intercept and slope estimates for the 12 individual months and for all-months-combined. Based on this mean-standard deviation relationship, we define a variance-stabilizing transformation. On the transformed scale, we describe the Niño 3.4 SST time series with a statistical model that is linear, heteroskedastic, and dynamical.
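One way to read the mean-standard deviation step: fit the spatial standard deviation as a function of the spatial mean robustly, then pick g with g'(m) proportional to 1/σ(m) so that Var g(X) is roughly constant (the delta method). A sketch assuming a linear SD-mean relationship, which the reported negative dependence suggests but does not guarantee:

```python
import numpy as np
from scipy.stats import theilslopes

def stabilizing_transform(spatial_means, spatial_sds):
    # Robust (Theil-Sen) fit of sd ~ a + b * mean, then the delta-method
    # transform with g'(m) = 1/(a + b*m), i.e. g(x) = log(|a + b*x|) / b.
    b, a, *_ = theilslopes(spatial_sds, spatial_means)
    def g(x):
        return np.log(np.abs(a + b * np.asarray(x))) / b
    return g
```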
Martinez, Sydney A; Beebe, Laura A; Thompson, David M; Wagener, Theodore L; Terrell, Deirdra R; Campbell, Janis E
2018-01-01
The inverse association between socioeconomic status and smoking is well established, yet the mechanisms that drive this relationship are unclear. We developed and tested four theoretical models of the pathways that link socioeconomic status to current smoking prevalence using a structural equation modeling (SEM) approach. Using data from the 2013 National Health Interview Survey, we selected four indicator variables (poverty ratio, personal earnings, educational attainment, and employment status) that we hypothesize underlie a latent variable, socioeconomic status. We measured direct, indirect, and total effects of socioeconomic status on smoking on four pathways through four latent variables representing social cohesion, financial strain, sleep disturbance, and psychological distress. Results of the model indicated that the probability of being a smoker decreased by 26% of a standard deviation for every one standard deviation increase in socioeconomic status. The direct effects of socioeconomic status on smoking accounted for the majority of the total effects, but the overall model also included significant indirect effects. Of the four mediators, sleep disturbance and psychological distress had the largest total effects on current smoking. We explored the use of structural equation modeling in epidemiology to quantify effects of socioeconomic status on smoking through four social and psychological factors to identify potential targets for interventions. A better understanding of the complex relationship between socioeconomic status and smoking is critical as we continue to reduce the burden of tobacco and eliminate health disparities related to smoking.
Paretti, Nicholas; Coes, Alissa L.; Kephart, Christopher M.; Mayo, Justine
2018-03-05
Tumacácori National Historical Park protects the culturally important Mission, San José de Tumacácori, while also managing a portion of the ecologically diverse riparian corridor of the Santa Cruz River. This report describes the methods and quality assurance procedures used in the collection of water samples for the analysis of Escherichia coli (E. coli), microbial source tracking (MST) markers, suspended sediment, water-quality parameters, turbidity, and the data collection for discharge and stage; the process for data review and approval is also described. Finally, this report provides a quantitative assessment of the quality of the E. coli, MST, and suspended sediment data. The data-quality assessment revealed that bias attributed to field and laboratory contamination was minimal, with E. coli detections in only 3 out of 33 field blank samples analyzed. Concentrations in the field blanks were several orders of magnitude lower than environmental concentrations. The MST field blank was below the detection limit for all MST markers analyzed. Laboratory blanks for E. coli at the USGS Arizona Water Science Center and laboratory blanks for MST markers at the USGS Ohio Water Microbiology Laboratory were all below the detection limit. Replicate data for E. coli and suspended sediment indicated that bias was not introduced to the data by combining samples collected using discrete sampling methods with samples collected using automatic sampling methods. The split and sequential E. coli replicate data showed consistent analytical variability, and a single equation was developed to explain the variability of E. coli concentrations. An additional analysis of analytical variability for E. coli indicated analytical variability around 18 percent relative standard deviation, and no trend was observed in the concentration during the processing and analysis of multiple split-replicates. Two replicate samples were collected for MST, and individual markers were compared for a base flow and flood sample. For the markers found in common between the two types of samples, the relative standard deviation for the base flow sample was more than 3 times greater than that for the flood sample. Sequential suspended sediment replicates had a relative standard deviation of about 1.3 percent, indicating that environmental and analytical variability was minimal. A holding time review and laboratory study analysis supported the extended holding times required for this investigation. Most concentrations for flood and base-flow samples were within the theoretical variability specified in the most probable number approach, suggesting that extended holding times did not overly influence the final concentrations reported.
Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 7: July
NASA Astrophysics Data System (ADS)
Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.
1989-07-01
The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analyses produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of July. Included are global analyses of: (1) Mean temperature/standard deviation; (2) Mean geopotential height/standard deviation; (3) Mean density/standard deviation; (4) Height and vector standard deviation (all at 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point/standard deviation at levels 1000 through 30 mb; and (6) Jet stream at levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.
Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 10: October
NASA Astrophysics Data System (ADS)
Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.
1989-07-01
The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analyses produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of October. Included are global analyses of: (1) Mean temperature/standard deviation; (2) Mean geopotential height/standard deviation; (3) Mean density/standard deviation; (4) Height and vector standard deviation (all at 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point/standard deviation at levels 1000 through 30 mb; and (6) Jet stream at levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.
Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 3: March
NASA Astrophysics Data System (ADS)
Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.
1989-11-01
The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analyses produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of March. Included are global analyses of: (1) Mean temperature/standard deviation; (2) Mean geopotential height/standard deviation; (3) Mean density/standard deviation; (4) Height and vector standard deviation (all for 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point/standard deviation for levels 1000 through 30 mb; and (6) Jet stream for levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.
Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 2: February
NASA Astrophysics Data System (ADS)
Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.
1989-09-01
The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analyses produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of February. Included are global analyses of: (1) Mean temperature/standard deviation; (2) Mean geopotential height/standard deviation; (3) Mean density/standard deviation; (4) Height and vector standard deviation (all for 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point/standard deviation for the 13 levels; and (6) Jet stream for levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.
Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 4: April
NASA Astrophysics Data System (ADS)
Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.
1989-07-01
The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analyses produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of April. Included are global analyses of: (1) Mean temperature/standard deviation; (2) Mean geopotential height/standard deviation; (3) Mean density/standard deviation; (4) Height and vector standard deviation (all for 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point/standard deviation for the 13 levels; and (6) Jet stream for levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.
DeSantis, Michael C; DeCenzo, Shawn H; Li, Je-Luen; Wang, Y M
2010-03-29
Standard deviation measurements of intensity profiles of stationary single fluorescent molecules are useful for studying axial localization, molecular orientation, and a fluorescence imaging system's spatial resolution. Here we report on the analysis of the precision of standard deviation measurements of intensity profiles of single fluorescent molecules imaged using an EMCCD camera. We have developed an analytical expression for the standard deviation measurement error of a single image, which is a function of the total number of detected photons, the background photon noise, and the camera pixel size. The theoretical results agree well with the experimental, simulation, and numerical integration results. Using this expression, we show that single-molecule standard deviation measurements offer nanometer precision for a large range of experimental parameters.
Su, Yushan; Hung, Hayley
2010-11-01
Measurements of semi-volatile organic chemicals (SVOCs) were compared among 21 laboratories from 7 countries through the analysis of standards, a blind sample, an air extract, and an atmospheric dust sample. Measurement accuracy strongly depended on analytes, laboratories, and types of standards and samples. Intra-laboratory precision was generally good, with relative standard deviations (RSDs) of triplicate injections <10% and with median differences of duplicate samples between 2.1 and 22%. Inter-laboratory variability, measured by RSDs of all measurements, was in the range of 2.8-58% for the standards and 6.9-190% for the blind sample and air extract. Inter-laboratory precision was poorer when samples were subject to cleanup processes, or when SVOCs were quantified at low concentrations. In general, inter-laboratory differences up to a factor of 2 can be expected when analyzing atmospheric SVOCs. When comparing air measurements from different laboratories, caution should be exercised if the data variability is less than the inter-laboratory differences. Copyright 2010. Published by Elsevier Ltd. All rights reserved.
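The precision measures quoted are relative standard deviations; for reference, the computation is a one-liner (the values below are made up, not from the study):

```python
import numpy as np

def rsd_percent(x):
    # Relative standard deviation (coefficient of variation), in percent.
    x = np.asarray(x, dtype=float)
    return 100.0 * x.std(ddof=1) / x.mean()

triplicate = [0.98, 1.02, 1.01]        # intra-laboratory: triplicate injections
labs = [0.9, 1.4, 1.1, 0.7, 1.3, 1.0]  # inter-laboratory: one value per lab
print(rsd_percent(triplicate), rsd_percent(labs))
```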
The Variability of Internal Tides in the Northern South China Sea
2013-08-27
[Fragmentary excerpt: mean N(z) profiles from the Generalized Digital Environmental Model (GDEM) climatology (Teague et al. 1990) are used for the eigenmode decomposition; in the referenced figure, shading represents one standard deviation and panel b shows the vertical structures of the first eigenmodes.]
Shen, Meiyu; Russek-Cohen, Estelle; Slud, Eric V
2016-08-12
Bioequivalence (BE) studies are an essential part of the evaluation of generic drugs. The most common in vivo BE study design is the two-period two-treatment crossover design. AUC (area under the concentration-time curve) and Cmax (maximum concentration) are obtained from the observed concentration-time profiles for each subject from each treatment under each sequence. In the BE evaluation of pharmacokinetic crossover studies, the normality of the univariate response variable, e.g., log(AUC) or log(Cmax), is often assumed in the literature without much evidence. Therefore, we investigate the distributional assumption of the normality of response variables, log(AUC) and log(Cmax), by simulating concentration-time profiles from two-stage pharmacokinetic models (commonly used in pharmacokinetic research) for a wide range of pharmacokinetic parameters and measurement error structures. Our simulations show that, under reasonable distributional assumptions on the pharmacokinetic parameters, log(AUC) has heavy tails and log(Cmax) is skewed. Sensitivity analyses are conducted to investigate how the distribution of the standardized log(AUC) (or the standardized log(Cmax)) for a large number of simulated subjects deviates from normality if distributions of errors in the pharmacokinetic model for plasma concentrations deviate from normality and if the plasma concentration can be described by different compartmental models.
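The flavor of these simulations can be reproduced compactly: draw lognormal parameters for a one-compartment oral model, add proportional measurement error, and inspect skewness and excess kurtosis of log(AUC) and log(Cmax). All parameter values below are illustrative assumptions, not those of the paper:

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(7)
n_subj, dose = 5000, 100.0
t = np.linspace(0.25, 24.0, 96)                # sampling times (h)

ka = rng.lognormal(np.log(1.0), 0.3, n_subj)   # absorption rate (1/h)
ke = rng.lognormal(np.log(0.1), 0.3, n_subj)   # elimination rate (1/h)
V = rng.lognormal(np.log(30.0), 0.2, n_subj)   # volume of distribution (L)

# One-compartment oral concentration-time profiles with proportional error.
conc = (dose * ka / (V * (ka - ke)))[:, None] * (
    np.exp(-np.outer(ke, t)) - np.exp(-np.outer(ka, t)))
conc *= rng.lognormal(0.0, 0.1, conc.shape)

auc = trapezoid(conc, t, axis=1)               # observed AUC per subject
cmax = conc.max(axis=1)
for name, x in [("log(AUC)", np.log(auc)), ("log(Cmax)", np.log(cmax))]:
    print(name, "skew:", round(skew(x), 2), "excess kurtosis:", round(kurtosis(x), 2))
```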
Doblas, Sabrina; Almeida, Gilberto S; Blé, François-Xavier; Garteiser, Philippe; Hoff, Benjamin A; McIntyre, Dominick J O; Wachsmuth, Lydia; Chenevert, Thomas L; Faber, Cornelius; Griffiths, John R; Jacobs, Andreas H; Morris, David M; O'Connor, James P B; Robinson, Simon P; Van Beers, Bernard E; Waterton, John C
2015-12-01
To evaluate between-site agreement of apparent diffusion coefficient (ADC) measurements in preclinical magnetic resonance imaging (MRI) systems. A miniaturized thermally stable ice-water phantom was devised. ADC (mean and interquartile range) was measured over several days, on 4.7T, 7T, and 9.4T Bruker, Agilent, and Magnex small-animal MRI systems using a common protocol across seven sites. Day-to-day repeatability was expressed as percent variation of mean ADC between acquisitions. Cross-site reproducibility was expressed as 1.96 × standard deviation of percent deviation of ADC values. ADC measurements were equivalent across all seven sites with a cross-site ADC reproducibility of 6.3%. Mean day-to-day repeatability of ADC measurements was 2.3%, and no site was identified as presenting different measurements than others (analysis of variance [ANOVA] P = 0.02, post-hoc test n.s.). Between-slice ADC variability was negligible and similar between sites (P = 0.15). Mean within-region-of-interest ADC variability was 5.5%, with one site presenting a significantly greater variation than the others (P = 0.0013). Absolute ADC values in preclinical studies are comparable between sites and equipment, provided standardized protocols are employed. © 2015 Wiley Periodicals, Inc.
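The two agreement metrics are simple to state in code; a sketch using the definitions given above (array contents are illustrative):

```python
import numpy as np

def reproducibility_percent(site_means):
    # Cross-site reproducibility: 1.96 x SD of the percent deviations of
    # the site ADC means from the overall mean.
    m = np.asarray(site_means, dtype=float)
    pct_dev = 100.0 * (m - m.mean()) / m.mean()
    return 1.96 * pct_dev.std(ddof=1)

def repeatability_percent(day_means):
    # Day-to-day repeatability: percent variation of the mean ADC
    # between repeated acquisitions at one site.
    d = np.asarray(day_means, dtype=float)
    return 100.0 * d.std(ddof=1) / d.mean()
```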
Visual photometry: accuracy and precision
NASA Astrophysics Data System (ADS)
Whiting, Alan
2018-01-01
Visual photometry, estimation by eye of the brightness of stars, remains an important source of data even in the age of widespread precision instruments. However, the eye-brain system differs from electronic detectors and its results may be expected to differ in several respects. I examine a selection of well-observed variables from the AAVSO database to determine several internal characteristics of this data set. Visual estimates scatter around the fitted curves with a standard deviation of 0.14 to 0.34 magnitudes, most clustered in the 0.21-0.25 range. The variation of the scatter does not seem to correlate with color, type of variable, or depth or speed of variation of the star’s brightness. The scatter of an individual observer’s observations changes from star to star, in step with the overall scatter. The shape of the deviations from the fitted curve is non-Gaussian, with positive excess kurtosis (more outlying observations). These results have implications for use of visual data, as well as other citizen science efforts.
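The two scatter diagnostics used, the standard deviation of residuals about the fitted curve and their excess kurtosis, take only a few lines (a sketch; the magnitude arrays are assumed inputs):

```python
import numpy as np
from scipy.stats import kurtosis

def scatter_stats(observed_mag, fitted_mag):
    # SD of residuals about the fitted light curve, plus excess kurtosis
    # (positive values indicate heavier-than-Gaussian tails).
    resid = np.asarray(observed_mag) - np.asarray(fitted_mag)
    return resid.std(ddof=1), kurtosis(resid)  # scipy returns *excess* kurtosis
```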
NASA Astrophysics Data System (ADS)
Majumdar, Paulami; Greeley, Jeffrey
2018-04-01
Linear scaling relations of adsorbate energies across a range of catalytic surfaces have emerged as a central interpretive paradigm in heterogeneous catalysis. They are, however, typically developed for low adsorbate coverages which are not always representative of realistic heterogeneous catalytic environments. Herein, we present generalized linear scaling relations on transition metals that explicitly consider adsorbate-coadsorbate interactions at variable coverages. The slopes of these scaling relations do not follow the simple bond counting principles that govern scaling on transition metals at lower coverages. The deviations from bond counting are explained using a pairwise interaction model wherein the interaction parameter determines the slope of the scaling relationship on a given metal at variable coadsorbate coverages, and the slope across different metals at fixed coadsorbate coverage is approximated by adding a coverage-dependent correction to the standard bond counting contribution. The analysis provides a compact explanation for coverage-dependent deviations from bond counting in scaling relationships and suggests a useful strategy for incorporation of coverage effects into catalytic trends studies.
Evolving geometrical heterogeneities of fault trace data
NASA Astrophysics Data System (ADS)
Wechsler, Neta; Ben-Zion, Yehuda; Christofferson, Shari
2010-08-01
We perform a systematic comparative analysis of geometrical fault zone heterogeneities using derived measures from digitized fault maps that are not very sensitive to mapping resolution. We employ the digital GIS map of California faults (version 2.0) and analyse the surface traces of active strike-slip fault zones with evidence of Quaternary and historic movements. Each fault zone is broken into segments that are defined as a continuous length of fault bounded by changes of angle larger than 1°. Measurements of the orientations and lengths of fault zone segments are used to calculate the mean direction and misalignment of each fault zone from the local plate motion direction, and to define several quantities that represent the fault zone disorder. These include circular standard deviation and circular standard error of segments, orientation of long and short segments with respect to the mean direction, and normal separation distances of fault segments. We examine the correlations between various calculated parameters of fault zone disorder and the following three potential controlling variables: cumulative slip, slip rate and fault zone misalignment from the plate motion direction. The analysis indicates that the circular standard deviation and circular standard error of segments decrease overall with increasing cumulative slip and increasing slip rate of the fault zones. The results imply that the circular standard deviation and error, quantifying the range or dispersion in the data, provide effective measures of the fault zone disorder, and that the cumulative slip and slip rate (or more generally slip rate normalized by healing rate) represent the fault zone maturity. The fault zone misalignment from plate motion direction does not seem to play a major role in controlling the fault trace heterogeneities. The frequency-size statistics of fault segment lengths can be fitted well by an exponential function over the entire range of observations.
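The dispersion measures come from circular statistics: with R the length of the mean resultant vector of the segment direction unit vectors, the circular standard deviation is sqrt(-2 ln R). A sketch that treats fault azimuths as axial (undirected) data by angle doubling, an assumption about the convention since the summary does not spell it out:

```python
import numpy as np

def circular_sd_deg(azimuths_deg, axial=True):
    # Circular SD = sqrt(-2 ln R); axial data are doubled before averaging
    # and the result halved, the standard trick for undirected lines.
    a = np.radians(np.asarray(azimuths_deg, dtype=float))
    if axial:
        a = 2.0 * a
    R = np.hypot(np.cos(a).mean(), np.sin(a).mean())
    sd = np.sqrt(-2.0 * np.log(R))
    return np.degrees(sd / 2.0 if axial else sd)
```

The circular standard error follows by dividing by sqrt(n) for n segments, which is one conventional choice.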
Baumrind, S; Korn, E L; Ben-Bassat, Y; West, E E
1987-01-01
Lateral skull radiographs for a set of 31 human subjects were examined using computer-aided methods in an attempt to quantify modal trends of maxillary remodeling during the mixed dentition and adolescent growth periods. Cumulative changes in position of anterior nasal spine (ANS), posterior nasal spine (PNS), and Point A are reported at annual intervals relative to superimposition on previously placed maxillary metallic implants. This in vivo longitudinal study confirms at a high level of confidence earlier findings by Enlow, Björk, Melsen, and others to the effect that the superior surface of the maxilla remodels downward during the period of growth and development being investigated. However, the inter-individual variability is relatively large, the mean magnitudes of change are relatively small, and the rate of change appears to diminish by 13.5 years. For the 19 subjects for whom data were available for the time interval from 8.5 to 15.5 years, mean downward remodeling at PNS was 2.50 mm with a standard deviation of 2.23 mm. At ANS, corresponding mean value was 1.56 mm with a standard deviation of 2.92 mm. Mean rotation of the ANS-PNS line relative to the implant line was 1.1 degree in the "forward" direction. However, this rotational change was particularly variable with a standard deviation of 4.6 degrees and a range of 11.3 degrees "forward" to 6.7 degrees "backward." The study provides strong evidence that the palate elongates anteroposteriorly mainly by the backward remodeling of structures located posterior to the region in which the implants were placed. There is also evidence that supports the idea of modal resorptive remodeling at ANS and PNS, but here the data are somewhat more equivocal. It appears likely, but not certain, that there are real differences in the modal patterns of remodeling between treated and untreated subjects. Because of problems associated with overfragmentation of the sample, sex differences were not investigated.
Effects of climate change and variability on population dynamics in a long-lived shorebird.
van de Pol, Martijn; Vindenes, Yngvild; Saether, Bernt-Erik; Engen, Steinar; Ens, Bruno J; Oosterbeek, Kees; Tinbergen, Joost M
2010-04-01
Climate change affects both the mean and variability of climatic variables, but their relative impact on the dynamics of populations is still largely unexplored. Based on a long-term study of the demography of a declining Eurasian Oystercatcher (Haematopus ostralegus) population, we quantify the effect of changes in mean and variance of winter temperature on different vital rates across the life cycle. Subsequently, we quantify, using stochastic stage-structured models, how changes in the mean and variance of this environmental variable affect important characteristics of the future population dynamics, such as the time to extinction. Local mean winter temperature is predicted to strongly increase, and we show that this is likely to increase the population's persistence time via its positive effects on adult survival that outweigh the negative effects that higher temperatures have on fecundity. Interannual variation in winter temperature is predicted to decrease, which is also likely to increase persistence time via its positive effects on adult survival that outweigh the negative effects that lower temperature variability has on fecundity. Overall, a 0.1 degrees C change in mean temperature is predicted to alter median time to extinction by 1.5 times as many years as would a 0.1 degrees C change in the standard deviation in temperature, suggesting that the dynamics of oystercatchers are more sensitive to changes in the mean than in the interannual variability of this climatic variable. Moreover, as climate models predict larger changes in the mean than in the standard deviation of local winter temperature, the effects of future climatic variability on this population's time to extinction are expected to be overwhelmed by the effects of changes in climatic means. We discuss the mechanisms by which climatic variability can either increase or decrease population viability and how this might depend both on species' life histories and on the vital rates affected. This study illustrates that, for making reliable inferences about population consequences in species in which life history changes with age or stage, it is crucial to investigate the impact of climate change on vital rates across the entire life cycle. Disturbingly, such data are unavailable for most species of conservation concern.
1979-01-01
[Garbled scan of a tabular report; the recoverable fragments refer to standard deviations and 99% confidence intervals for gun-model velocity results (percent standard values of 0.23-0.58).]
Exploring Students' Conceptions of the Standard Deviation
ERIC Educational Resources Information Center
delMas, Robert; Liu, Yan
2005-01-01
This study investigated introductory statistics students' conceptual understanding of the standard deviation. A computer environment was designed to promote students' ability to coordinate characteristics of variation of values about the mean with the size of the standard deviation as a measure of that variation. Twelve students participated in an…
7 CFR 801.4 - Tolerances for dockage testers.
Code of Federal Regulations, 2012 CFR
2012-01-01
....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...
7 CFR 801.4 - Tolerances for dockage testers.
Code of Federal Regulations, 2014 CFR
2014-01-01
....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...
7 CFR 801.4 - Tolerances for dockage testers.
Code of Federal Regulations, 2011 CFR
2011-01-01
....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...
7 CFR 801.4 - Tolerances for dockage testers.
Code of Federal Regulations, 2013 CFR
2013-01-01
....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...
Statistics as Unbiased Estimators: Exploring the Teaching of Standard Deviation
ERIC Educational Resources Information Center
Wasserman, Nicholas H.; Casey, Stephanie; Champion, Joe; Huey, Maryann
2017-01-01
This manuscript presents findings from a study about the knowledge for and planned teaching of standard deviation. We investigate how understanding variance as an unbiased (inferential) estimator--not just a descriptive statistic for the variation (spread) in data--is related to teachers' instruction regarding standard deviation, particularly…
7 CFR 801.4 - Tolerances for dockage testers.
Code of Federal Regulations, 2010 CFR
2010-01-01
....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...
7 CFR 801.6 - Tolerances for moisture meters.
Code of Federal Regulations, 2010 CFR
2010-01-01
... moisture, mean deviation from National standard moisture meter using Hard Red Winter wheat Mid ±0.05 percent moisture, mean deviation from National standard moisture meter using Hard Red Winter wheat High ±0.05 percent moisture, mean deviation from National standard moisture meter using Hard Red Winter wheat...
Within-field variability of plant and soil parameters
NASA Technical Reports Server (NTRS)
Ulaby, F. T. (Principal Investigator); Brisco, B.; Dobson, C.
1981-01-01
The variability of ground truth data collected for vegetation experiments was investigated. Two fields of wheat and one field of corn were sampled on two different dates. The variability of crop and soil parameters within a field, between two fields of the same type, and within a field over time were compared statistically. The number of samples per test site required to determine, with confidence, the mean and standard deviation of a given variable was also established. Eight samples were found to be adequate for plant height determinations, while twenty samples were required for plant moisture and soil moisture characterization. Eighteen samples were necessary for detecting within-field variability over time and between-field variability for the same crop. The necessary sample sizes vary according to the physiological growth stage of the crop and recent weather events that affect the moisture and/or height characteristics of the field in question.
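Sample-size statements like these usually come from the classic formula n = (t·s/E)², iterated because t depends on n; a sketch under that assumption, with the tolerance expressed relative to the mean (the numbers in the example are illustrative):

```python
import numpy as np
from scipy.stats import t as t_dist

def samples_needed(cv, rel_error, conf=0.90):
    # Iterate n = (t * s / E)^2 with s and E relative to the mean
    # (cv = s/mean, rel_error = E/mean); t depends on n, so loop to a fixed point.
    n = 2
    for _ in range(100):
        t = t_dist.ppf(1.0 - (1.0 - conf) / 2.0, df=n - 1)
        n_new = max(int(np.ceil((t * cv / rel_error) ** 2)), 2)
        if n_new == n:
            return n
        n = n_new
    return n

print(samples_needed(cv=0.25, rel_error=0.10))  # e.g., 25% CV, 10% allowable error
```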
Gene expression variability in human hepatic drug metabolizing enzymes and transporters.
Yang, Lun; Price, Elvin T; Chang, Ching-Wei; Li, Yan; Huang, Ying; Guo, Li-Wu; Guo, Yongli; Kaput, Jim; Shi, Leming; Ning, Baitang
2013-01-01
Interindividual variability in the expression of drug-metabolizing enzymes and transporters (DMETs) in human liver may contribute to interindividual differences in drug efficacy and adverse reactions. Published studies that analyzed variability in the expression of DMET genes were limited by sample sizes and the number of genes profiled. We systematically analyzed the expression of 374 DMETs from a microarray data set consisting of gene expression profiles derived from 427 human liver samples. The standard deviation of interindividual expression for DMET genes was much higher than that for non-DMET genes. The 20 DMET genes with the largest variability in expression provided examples of the interindividual variation. Gene expression data were also analyzed using network analysis methods, which delineate the similarities of biological functionalities and regulation mechanisms for these highly variable DMET genes. Expression variability of human hepatic DMET genes may affect drug-gene interactions and disease susceptibility, with concomitant clinical implications.
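Ranking genes by interindividual standard deviation, as done above for the top 20, is a few lines (a sketch; `expr` is assumed to be a genes-by-samples array with matching `gene_ids`):

```python
import numpy as np

def top_variable_genes(expr, gene_ids, k=20):
    # SD of each gene's expression across samples, largest first.
    sd = expr.std(axis=1, ddof=1)
    order = np.argsort(sd)[::-1][:k]
    return [(gene_ids[i], float(sd[i])) for i in order]
```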
Assessing the optimality of ASHRAE climate zones using high resolution meteorological data sets
NASA Astrophysics Data System (ADS)
Fils, P. D.; Kumar, J.; Collier, N.; Hoffman, F. M.; Xu, M.; Forbes, W.
2017-12-01
Energy consumed by built infrastructure constitutes a significant fraction of the nation's energy budget. According to a 2015 US Energy Information Administration report, 41% of the energy used in the US went to residential and commercial buildings, and additional research has shown that 32% of commercial building energy goes to heating and cooling. The ANSI/ASHRAE Standard 90.1 climate zones represent the current state of practice, since heating and cooling demands are strongly influenced by spatio-temporal weather variations. For this reason, we have been assessing the optimality of the climate zones using high-resolution daily climate data from NASA's DAYMET database. We analyzed time series of meteorological data sets for all ASHRAE climate zones from 1980 to 2016 inclusive. We computed the mean, standard deviation, and other statistics for a set of meteorological variables (solar radiation, maximum and minimum temperature) within each zone. By plotting the zonal statistics, we analyzed patterns and trends in those data over the past 36 years. We compared the mean of each zone to its standard deviation to determine the range of spatial variability that exists within each zone: if the band around the mean is too wide, regions in the zone experience a wide range of weather conditions, and a common set of building design guidelines could lead to non-optimal energy consumption. We observed strong variation among the climate zones. Some have shown consistent patterns over the past 36 years, indicating that the zone was well constructed, while others have deviated greatly from their mean, indicating that the zone needs to be reconstructed. We also looked at redesigning the climate zones based on high-resolution climate data. We are using building simulation models such as EnergyPlus to develop optimal energy guidelines for each climate zone and to quantify the potential energy savings that can be realized by redesigning climate zones using state-of-the-art data sets.
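The zone-level check described, comparing each zone's mean against the band of its standard deviation, amounts to a coefficient of variation per zone and variable; a minimal sketch (the array layout is an assumption):

```python
import numpy as np

def zone_dispersion(values, zone_ids):
    # Per-zone mean, SD, and coefficient of variation for one variable
    # (e.g., annual heating degree days); a wide CV flags a zone whose
    # member locations experience very different weather.
    out = {}
    for z in np.unique(zone_ids):
        v = values[zone_ids == z]
        m, s = v.mean(), v.std(ddof=1)
        out[z] = (m, s, s / m)
    return out
```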
Oceanographic and meteorological research based on the data products of SEASAT
NASA Technical Reports Server (NTRS)
Pierson, W. J. (Principal Investigator)
1983-01-01
De-aliased SEASAT SASS vector winds obtained during the GOASEX (Gulf of Alaska SEASAT Experiment) program were processed to obtain superobservations centered on a one degree by one degree grid. The results provide values for the combined effects of mesoscale variability and communication noise on the individual SASS winds. Each grid point of the synoptic field provides the mean synoptic east-west and north-south wind components plus estimates of the standard deviations of these means. These superobservation winds are then processed further to obtain synoptic-scale vector wind stress fields, the horizontal divergence of the wind, the curl of the wind stress, and the vertical velocity at 200 m above the sea surface, each with appropriate standard deviations for each grid point value. The resulting fields appear to be consistent over large distances and to agree with, for example, geostationary cloud images obtained concurrently. Their quality is far superior to that of analyses based on conventional data.
NASA Technical Reports Server (NTRS)
Sanders, W. A.; Baaklini, G. Y.
1986-01-01
A sintered Si3N4-SiO2-Y2O3 composition, NASA 6Y, was developed that reached four-point flexural average strength/standard deviation values of 857/36, 544/33, and 462/59 MPa at room temperature, 1200 C, and 1370 C, respectively. These strengths represented improvements of 56, 38, and 21 percent over baseline properties at the three test temperatures. At room temperature the standard deviation was reduced by over a factor of three. These accomplishments were realized by the iterative utilization of conventional x-radiography to characterize structural (density) uniformity as affected by systematic changes in powder processing and sintering parameters. Accompanying the improvement in mechanical properties was a change in the type of flaw causing failure from a pore to a large columnar beta-Si3N4 grain, typically 40 to 80 microns long, 10 to 30 microns wide, and with an aspect ratio of 5:1.
NASA Astrophysics Data System (ADS)
Kürbis, K.; Mudelsee, M.; Tetzlaff, G.; Brázdil, R.
2009-09-01
For the analysis of trends in weather extremes, we introduce a diagnostic index variable, the exceedance product, which combines intensity and frequency of extremes. We separate trends in higher moments from trends in mean or standard deviation and use bootstrap resampling to evaluate statistical significances. The application of the concept of the exceedance product to daily meteorological time series from Potsdam (1893 to 2005) and Prague-Klementinum (1775 to 2004) reveals that extremely cold winters occurred only until the mid-20th century, whereas warm winters show upward trends. These changes were significant in higher moments of the temperature distribution. In contrast, trends in summer temperature extremes (e.g., the 2003 European heatwave) can be explained by linear changes in mean or standard deviation. While precipitation at Potsdam does not show pronounced trends, dew point does exhibit a change from maximum extremes during the 1960s to minimum extremes during the 1970s.
Optimization of Adaptive Intraply Hybrid Fiber Composites with Reliability Considerations
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1994-01-01
The reliability with bounded distribution parameters (mean, standard deviation) was maximized and the reliability-based cost was minimized for adaptive intra-ply hybrid fiber composites by using a probabilistic method. The probabilistic method accounts for all naturally occurring uncertainties, including those in constituent material properties, fabrication variables, structure geometry, and control-related parameters. Probabilistic sensitivity factors were computed and used in the optimization procedures. For actuated change in the angle of attack of an airfoil-like composite shell structure with an adaptive torque plate, the reliability was maximized to 0.9999 probability, with constraints on the mean and standard deviation of the actuation material volume ratio (percentage of actuation composite material in a ply) and the actuation strain coefficient. The reliability-based cost was minimized for an airfoil-like composite shell structure with an adaptive skin and a mean actuation material volume ratio as the design parameter. At a 0.9-mean actuation material volume ratio, the minimum cost was obtained.
NASA Astrophysics Data System (ADS)
Liu, Jiping; Zhang, Zhanhai; Hu, Yongyun; Chen, Liqi; Dai, Yongjiu; Ren, Xiaobo
2008-05-01
The surface air temperature (SAT) over the Arctic Ocean in reanalyses and global climate model simulations was assessed using the International Arctic Buoy Programme/Polar Exchange at the Sea Surface (IABP/POLES) observations for the period 1979-1999. The reanalyses, including the National Centers for Environmental Prediction Reanalysis II (NCEP2) and European Centre for Medium-Range Weather Forecasts 40-year Reanalysis (ERA40), show encouraging agreement with the IABP/POLES observations, although some spatiotemporal discrepancies are noteworthy. The reanalyses have warm annual mean biases and underestimate the observed interannual SAT variability in summer. Additionally, NCEP2 shows an excessive warming trend. Most model simulations (coordinated by the Intergovernmental Panel on Climate Change for its Fourth Assessment Report) reproduce the annual mean, seasonal cycle, and trend of the observed SAT reasonably well, particularly the multi-model ensemble mean. However, large discrepancies are found. Some models have annual mean SAT biases far exceeding the standard deviation of the observed interannual SAT variability and the across-model standard deviation. Spatially, the largest inter-model variance of the annual mean SAT is found over the North Pole, Greenland Sea, Barents Sea and Baffin Bay. Seasonally, a large spread of the simulated SAT among the models is found in winter. The models show interannual variability and decadal trends of various amplitudes, and cannot capture the observed dominant SAT mode variability and cooling trend in winter. Further discussion of the possible causes of the identified SAT errors in some models suggests that a model's performance in the sea ice simulation is an important factor.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKone, Thomas E.; Maddalena, Randy L.
2007-01-01
The role of terrestrial vegetation in transferring chemicals from soil and air into specific plant tissues (stems, leaves, roots, etc.) is still not well characterized. We provide here a critical review of plant-to-soil bioconcentration ratio (BCR) estimates based on models and experimental data. This review includes the conceptual and theoretical formulations of the bioconcentration ratio, constructing and calibrating empirical and mathematical algorithms to describe this ratio, and the experimental data used to quantify BCRs and calibrate the model performance. We first evaluate the theoretical basis for the BCR concept and BCR models and consider how lack of knowledge and data limits the reliability and consistency of BCR estimates. We next consider alternate modeling strategies for BCR. A key focus of this evaluation is the relative contributions to overall uncertainty from model uncertainty versus variability in the experimental data used to develop and test the models. As a case study, we consider a single chemical, hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX), and focus on variability of bioconcentration measurements obtained from 81 experiments with different plant species, different plant tissues, different experimental conditions, and different methods for reporting concentrations in the soil and plant tissues. We use these observations to evaluate the magnitude of experimental variability in plant bioconcentration and compare this to model uncertainty. Among these 81 measurements, the variation of the plant/soil BCR has a geometric standard deviation (GSD) of 3.5 and a coefficient of variability (CV; the ratio of arithmetic standard deviation to mean) of 1.7. These variations are significant but low relative to model uncertainties, which have an estimated GSD of 10 with a corresponding CV of 14.
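For reference, the two dispersion summaries quoted above are easy to compute, and for an exactly lognormal quantity they are linked by CV = sqrt(exp((ln GSD)^2) - 1), which reproduces the pairing of GSD 10 with CV of about 14 quoted for model uncertainty (the empirical CV of 1.7 sits below the lognormal-exact value for GSD 3.5, as measured data need not be exactly lognormal). A minimal sketch with synthetic data:

import numpy as np

def gsd(x):
    # Geometric standard deviation: exp of the SD of the log-values.
    return float(np.exp(np.std(np.log(x), ddof=1)))

def cv(x):
    # Coefficient of variability: arithmetic SD over arithmetic mean.
    return float(np.std(x, ddof=1) / np.mean(x))

# Lognormal-exact relation CV = sqrt(exp((ln GSD)^2) - 1):
for g in (3.5, 10.0):
    print(g, np.sqrt(np.exp(np.log(g) ** 2) - 1.0))  # -> ~1.9 and ~14

rng = np.random.default_rng(0)
bcr = rng.lognormal(mean=-2.0, sigma=np.log(3.5), size=81)  # synthetic BCRs
print(gsd(bcr), cv(bcr))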
Sarfraz, Muhammad Haroon; Mehboob, Mohammad Asim; Haq, Rana Intisar Ul
2017-01-01
To evaluate the correlation between Central Corneal Thickness (CCT) and Visual Field (VF) defect parameters like Mean Deviation (MD) and Pattern Standard Deviation (PSD), Cup-to-Disc Ratio (CDR) and Retinal Nerve Fibre Layer Thickness (RNFL-T) in Primary Open-Angle Glaucoma (POAG) patients. This cross-sectional study was conducted at the Armed Forces Institute of Ophthalmology (AFIO), Rawalpindi, from September 2015 to September 2016. Sixty eyes of 30 patients with diagnosed POAG were analysed. Correlation of CCT with the other variables was studied. Mean age of the study population was 43.13±7.54 years. Of the 30 patients, 19 (63.33%) were males and 11 (36.67%) were females. Mean CCT, MD, PSD, CDR and RNFL-T of the study population were 528.57±25.47µm, -9.11±3.07, 6.93±2.73, 0.63±0.13 and 77.79±10.44µm, respectively. There was significant correlation of CCT with MD, PSD and CDR (r=-0.52, p<0.001; r=-0.59, p<0.001; r=-0.41, p=0.001, respectively). The correlation of CCT with RNFL-T was not statistically significant (r=-0.14, p=0.284). Central corneal thickness had significant correlation with visual field parameters like mean deviation and pattern standard deviation, as well as with cup-to-disc ratio. However, central corneal thickness had no significant relationship with retinal nerve fibre layer thickness.
Comparison of heart rate variability and pulse rate variability detected with photoplethysmography
NASA Astrophysics Data System (ADS)
Rauh, Robert; Limley, Robert; Bauer, Rainer-Dieter; Radespiel-Troger, Martin; Mueck-Weymann, Michael
2004-08-01
This study compares ear photoplethysmography (PPG) and electrocardiography (ECG) in providing accurate heart beat intervals for use in calculations of heart rate variability (HRV, from ECG) or of pulse rate variability (PRV, from PPG), respectively. Simultaneous measurements were taken from 44 healthy subjects at rest during spontaneous breathing and during forced metronomic breathing (6/min). Under both conditions, highly significant (p < 0.001) correlations (1.0 > r > 0.97) were found between all evaluated common HRV and PRV parameters. However, under both conditions the PRV parameters were higher than HRV. In addition, we calculated the limits of agreement according to Bland and Altman between both techniques and found good agreement (< 10% difference) for heart rate and the standard deviation of normal-to-normal intervals (SDNN), but only moderate (10-20%) or even insufficient (> 20%) agreement for other standard HRV and PRV parameters. Thus, PRV data seem to be acceptable for screening purposes but, at least at the current state of knowledge, not for medical decision making. However, further studies are needed before a more definitive determination can be made.
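The agreement analysis mentioned above is straightforward to reproduce. A minimal sketch (hypothetical paired SDNN values) computes the Bland-Altman bias and 95% limits of agreement:

import numpy as np

def bland_altman_limits(a, b):
    # Bias and 95% limits of agreement between two measurement methods.
    d = np.asarray(a, float) - np.asarray(b, float)
    bias, sd = d.mean(), d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired SDNN values (ms) from ECG (HRV) and ear PPG (PRV):
hrv = np.array([48.0, 52.5, 61.0, 39.8, 55.2])
prv = np.array([49.1, 54.0, 62.3, 40.9, 56.7])
print(bland_altman_limits(prv, hrv))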
On the internal target model in a tracking task
NASA Technical Reports Server (NTRS)
Caglayan, A. K.; Baron, S.
1981-01-01
An optimal control model for predicting an operator's dynamic responses and errors in a target tracking task is summarized. The model, which predicts asymmetry in the tracking data, is dependent on target maneuvers and trajectories. The gunner's perception, decision making, control, and estimation of target position and velocity in relation to crossover intervals are discussed. The model provides estimates of means, standard deviations, and variances for the variables investigated and for operator estimates of future target positions and velocities.
Templeton, David W.; Sluiter, Justin B.; Sluiter, Amie; ...
2016-10-18
In an effort to find economical, carbon-neutral transportation fuels, biomass feedstock compositional analysis methods are used to monitor, compare, and improve biofuel conversion processes. These methods are empirical, and the analytical variability seen in the feedstock compositional data propagates into variability in the conversion yields, component balances, mass balances, and ultimately the minimum ethanol selling price (MESP). We report the average composition and standard deviations of 119 individually extracted National Institute of Standards and Technology (NIST) bagasse [Reference Material (RM) 8491] run by seven analysts over 7 years. Two additional datasets, using bulk-extracted bagasse (containing 58 and 291 replicates each), were examined to separate out the effects of batch, analyst, sugar recovery standard calculation method, and extractions from the total analytical variability seen in the individually extracted dataset. We believe this is the world's largest NIST bagasse compositional analysis dataset and it provides unique insight into the long-term analytical variability. Understanding the long-term variability of the feedstock analysis will help determine the minimum difference that can be detected in yield, mass balance, and efficiency calculations. The long-term data show consistent bagasse component values through time and by different analysts. This suggests that the standard compositional analysis methods were performed consistently and that the bagasse RM itself remained unchanged during this time period. The long-term variability seen here is generally higher than short-term variabilities. It is worth noting that the effect of short-term or long-term feedstock compositional variability on MESP is small, about $0.03 per gallon. The long-term analysis variabilities reported here are plausible minimum values for these methods, though not necessarily average or expected variabilities. We must emphasize the importance of the training and good analytical procedures needed to generate these data. As a result, when combined with a robust QA/QC oversight protocol, these empirical methods can be relied upon to generate high-quality data over a long period of time.
Visualizing the Sample Standard Deviation
ERIC Educational Resources Information Center
Sarkar, Jyotirmoy; Rashid, Mamunur
2017-01-01
The standard deviation (SD) of a random sample is defined as the square-root of the sample variance, which is the "mean" squared deviation of the sample observations from the sample mean. Here, we interpret the sample SD as the square-root of twice the mean square of all pairwise half deviations between any two sample observations. This…
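The pairwise interpretation is an exact algebraic identity with the usual definition, as a few lines of Python (illustrative data) confirm:

import numpy as np
from itertools import combinations

x = np.array([2.0, 5.0, 7.0, 11.0, 4.0])

# Usual definition: square root of the sample variance (ddof=1).
sd_usual = np.std(x, ddof=1)

# Pairwise interpretation: square root of twice the mean squared
# half-deviation (xi - xj)/2 over all unordered pairs.
half_devs = [(xi - xj) / 2.0 for xi, xj in combinations(x, 2)]
sd_pairwise = np.sqrt(2.0 * np.mean(np.square(half_devs)))

print(sd_usual, sd_pairwise)  # equal up to floating-point rounding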
Accuracy and precision of Legionella isolation by US laboratories in the ELITE program pilot study.
Lucas, Claressa E; Taylor, Thomas H; Fields, Barry S
2011-10-01
A pilot study for the Environmental Legionella Isolation Techniques Evaluation (ELITE) Program, a proficiency testing scheme for US laboratories that culture Legionella from environmental samples, was conducted September 1, 2008 through March 31, 2009. Participants (n=20) processed panels consisting of six sample types: pure and mixed positive, pure and mixed negative, pure and mixed variable. The majority (93%) of all samples (n=286) were correctly characterized, with 88.5% of samples positive for Legionella and 100% of negative samples identified correctly. Variable samples were incorrectly identified as negative in 36.9% of reports. For all samples reported positive (n=128), participants underestimated the cfu/ml by a mean of 1.25 logs with standard deviation of 0.78 logs, standard error of 0.07 logs, and a range of 3.57 logs compared to the CDC re-test value. Centering results around the interlaboratory mean yielded a standard deviation of 0.65 logs, standard error of 0.06 logs, and a range of 3.22 logs. Sampling protocol, treatment regimen, culture procedure, and laboratory experience did not significantly affect the accuracy or precision of reported concentrations. Qualitative and quantitative results from the ELITE pilot study were similar to reports from a corresponding proficiency testing scheme available in the European Union, indicating these results are probably valid for most environmental laboratories worldwide. The large enumeration error observed suggests that the need for remediation of a water system should not be determined solely by the concentration of Legionella observed in a sample since that value is likely to underestimate the true level of contamination. Published by Elsevier Ltd.
Design, analysis, and interpretation of field quality-control data for water-sampling projects
Mueller, David K.; Schertz, Terry L.; Martin, Jeffrey D.; Sandstrom, Mark W.
2015-01-01
The report provides extensive information about statistical methods used to analyze quality-control data in order to estimate potential bias and variability in environmental data. These methods include construction of confidence intervals on various statistical measures, such as the mean, percentiles and percentages, and standard deviation. The methods are used to compare quality-control results with the larger set of environmental data in order to determine whether the effects of bias and variability might interfere with interpretation of these data. Examples from published reports are presented to illustrate how the methods are applied, how bias and variability are reported, and how the interpretation of environmental data can be qualified based on the quality-control analysis.
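As a small illustration of one such construction, the classical chi-square confidence interval for a standard deviation can be sketched as follows (hypothetical replicate data; the report's own procedures may differ in detail):

import numpy as np
from scipy.stats import chi2

def sd_confidence_interval(x, alpha=0.05):
    # Two-sided chi-square interval for a standard deviation, assuming
    # approximately normal replicate errors.
    x = np.asarray(x, float)
    n, s2 = x.size, np.var(x, ddof=1)
    lo = np.sqrt((n - 1) * s2 / chi2.ppf(1 - alpha / 2, n - 1))
    hi = np.sqrt((n - 1) * s2 / chi2.ppf(alpha / 2, n - 1))
    return lo, hi

# Hypothetical replicate concentrations (ug/L):
reps = np.array([0.012, 0.015, 0.011, 0.018, 0.014, 0.016])
print(sd_confidence_interval(reps))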
Satisfying the Einstein-Podolsky-Rosen criterion with massive particles
NASA Astrophysics Data System (ADS)
Peise, J.; Kruse, I.; Lange, K.; Lücke, B.; Pezzè, L.; Arlt, J.; Ertmer, W.; Hammerer, K.; Santos, L.; Smerzi, A.; Klempt, C.
2016-03-01
In 1935, Einstein, Podolsky and Rosen (EPR) questioned the completeness of quantum mechanics by devising a quantum state of two massive particles with maximally correlated space and momentum coordinates. The EPR criterion qualifies such continuous-variable entangled states, as shown successfully with light fields. Here, we report on the production of massive particles which meet the EPR criterion for continuous phase/amplitude variables. The created quantum state of ultracold atoms shows an EPR parameter of 0.18(3), which is 2.4 standard deviations below the threshold of 1/4. Our state presents a resource for tests of quantum nonlocality with massive particles and a wide variety of applications in the field of continuous-variable quantum information and metrology.
Dopkins, Stephen; Varner, Kaitlin; Hoyer, Darin
2017-10-01
In word recognition, semantic priming of test words increased the false-alarm rate and the mean of confidence ratings to lures. Such priming also increased the standard deviation of confidence ratings to lures and the slope of the z-ROC function, suggesting that the priming increased the standard deviation of the lure evidence distribution. The Unequal Variance Signal Detection (UVSD) model interpreted the priming as increasing the standard deviation of the lure evidence distribution. Without additional parameters, the Dual Process Signal Detection (DPSD) model could only accommodate the results by fitting the data for related and unrelated primes separately, interpreting the priming, implausibly, as decreasing the probability of target recollection. With an additional parameter for the probability of false (lure) recollection, the model could fit the data for related and unrelated primes together, interpreting the priming as increasing the probability of false recollection. These results suggest that DPSD estimates of target recollection probability will decrease with increases in the lure confidence/evidence standard deviation unless a parameter is included for false recollection. Unfortunately, the size of a given lure confidence/evidence standard deviation relative to other possible lure confidence/evidence standard deviations is often unspecified by context. Hence the model often has no way of estimating false recollection probability and thereby correcting its estimates of target recollection probability.
Entropy: A new measure of stock market volatility?
NASA Astrophysics Data System (ADS)
Bentes, Sonia R.; Menezes, Rui
2012-11-01
When uncertainty dominates, understanding stock market volatility is vital, for a number of reasons. On one hand, substantial changes in the volatility of financial market returns are capable of having significant negative effects on risk-averse investors. In addition, such changes can also impact consumption patterns, corporate capital investment decisions and macroeconomic variables. Arguably, volatility is one of the most important concepts in the whole of finance theory. In the traditional approach this phenomenon has been addressed based on the concept of standard deviation (or variance), from which all the famous ARCH-type models (Autoregressive Conditional Heteroskedasticity models) depart. In this context, volatility is often used to describe dispersion from an expected value, price or model; the variability of traded prices from their sample mean is only one example. Although the standard deviation is very popular as a measure of uncertainty and risk, being simple and easy to calculate, it has long been recognized as not fully satisfactory, mainly because it is severely affected by extreme values. This suggests the matter is not a closed issue, and many other questions arise while addressing this subject. One of outstanding importance, from which more sophisticated analyses can be carried out, is how volatility should be evaluated in the first place. If the standard deviation has drawbacks, shall we still rely on it, or shall we look for an alternative measure, perhaps drawing on the insight of other domains of knowledge? In this paper we specifically address whether the concept of entropy, originally developed in physics by Clausius in the nineteenth century, can constitute an effective alternative. Basically, we try to understand the potentialities of entropy compared with the standard deviation. Why entropy? Because research in the domain of Econophysics points out that, as a measure of disorder, distance from equilibrium or even ignorance, entropy might present some advantages. Yet another question arises: since there are several measures of entropy, which one should be used? As a starting point we discuss the potentialities of Shannon entropy and Tsallis entropy. The main difference is that while the Renyi and Tsallis entropies are adequate for anomalous systems, Shannon entropy has proved optimal for equilibrium systems.
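To make the comparison concrete, the sketch below (synthetic returns; the bin width is an arbitrary choice, not the paper's estimator) contrasts the two measures on exactly the point raised above, sensitivity to extreme values: one simulated crash day moves the standard deviation markedly while the binned Shannon entropy barely changes:

import numpy as np

def shannon_entropy(returns, bin_width=0.005):
    # Shannon entropy (nats) of the binned empirical return distribution.
    edges = np.arange(returns.min(), returns.max() + bin_width, bin_width)
    counts, _ = np.histogram(returns, bins=edges)
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log(p)))

rng = np.random.default_rng(1)
r = rng.normal(0.0, 0.01, size=1000)  # synthetic daily returns
r_crash = np.append(r, -0.25)         # one extreme (crash) observation

print(np.std(r, ddof=1), np.std(r_crash, ddof=1))    # SD jumps markedly
print(shannon_entropy(r), shannon_entropy(r_crash))  # entropy barely moves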
Down-Looking Interferometer Study II, Volume I,
1980-03-01
The "reference spectrum" is an estimate of the actual spectrum. According to Eq. (2), Z is the standard deviation of the observed contrast spectral radiance ΔN divided by the effective rms system...
40 CFR 61.207 - Radium-226 sampling and measurement procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... B, Method 114. (3) Calculate the mean, x̄1, and the standard deviation, s1, of the n1 radium-226... owner or operator of a phosphogypsum stack shall report the mean, standard deviation, 95th percentile..., Method 114. (4) Recalculate the mean and standard deviation of the entire set of n2 radium-226...
Schrems, W A; Laemmer, R; Hoesl, L M; Horn, F K; Mardin, C Y; Kruse, F E; Tornow, R P
2011-10-01
To investigate the influence of atypical retardation pattern (ARP) on the distribution of peripapillary retinal nerve fibre layer (RNFL) thickness measured with scanning laser polarimetry in healthy individuals and to compare these results with RNFL thickness from spectral domain optical coherence tomography (OCT) in the same subjects. 120 healthy subjects were investigated in this study. All volunteers received detailed ophthalmological examination, GDx variable corneal compensation (VCC) and Spectralis-OCT. The subjects were divided into four subgroups according to their typical scan score (TSS): very typical with TSS=100, typical with 99 ≥ TSS ≥ 91, less typical with 90 ≥ TSS ≥ 81 and atypical with TSS ≤ 80. Deviations from very typical normal values were calculated for 32 sectors for each group. There was a systematic variation of the RNFL thickness deviation around the optic nerve head in the atypical group for the GDxVCC results. The highest percentage deviation of about 96% appeared temporal with decreasing deviation towards the superior and inferior sectors, and nasal sectors exhibited a deviation of 30%. Percentage deviations from very typical RNFL values decreased with increasing TSS. No systematic variation could be found if the RNFL thickness deviation between different TSS-groups was compared with the OCT results. The ARP has a major impact on the peripapillary RNFL distribution assessed by GDx VCC; thus, the TSS should be included in the standard printout.
Characterizing Accuracy and Precision of Glucose Sensors and Meters
2014-01-01
There is a need for a method to describe the precision and accuracy of glucose measurement as a smooth continuous function of glucose level rather than as a step function for a few discrete ranges of glucose. We propose and illustrate a method to generate a "Glucose Precision Profile" showing absolute relative deviation (ARD) and/or %CV versus glucose level to better characterize measurement errors at any glucose level. We examine the relationship between glucose measured by test and comparator methods using linear regression. We examine bias by plotting deviation = (test - comparator method) versus glucose level. We compute the deviation, absolute deviation (AD), ARD, and standard deviation (SD) for each data pair. We utilize curve smoothing procedures to minimize the effects of random sampling variability and to facilitate identification and display of the underlying relationships between ARD or %CV and glucose level. AD, ARD, SD, and %CV display smooth continuous relationships versus glucose level. Estimates of MARD and %CV are subject to relatively large errors in the hypoglycemic range, due in part to a markedly nonlinear relationship with glucose level and in part to the limited number of observations in the hypoglycemic range. The curvilinear relationships of ARD and %CV versus glucose level are helpful when characterizing and comparing the precision and accuracy of glucose sensors and meters. PMID:25037194
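The per-pair quantities are simple to compute. A minimal sketch (hypothetical paired readings) derives the ARD for each pair and its mean (MARD), illustrating why a single MARD can hide level-dependent error:

import numpy as np

def ard(test, reference):
    # Absolute relative deviation (%) of each paired reading.
    test, reference = np.asarray(test, float), np.asarray(reference, float)
    return 100.0 * np.abs(test - reference) / reference

# Hypothetical paired sensor/reference glucose values (mg/dL):
ref  = np.array([55.0, 80.0, 120.0, 180.0, 250.0])
test = np.array([62.0, 84.0, 115.0, 171.0, 262.0])

per_pair = ard(test, ref)
print(per_pair)         # ARD is largest at the hypoglycemic end here
print(per_pair.mean())  # MARD, the usual single-number summary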
NASA Astrophysics Data System (ADS)
Singh, Gurjeet; Panda, Rabindra K.; Mohanty, Binayak P.; Jana, Raghavendra B.
2016-05-01
Strategic ground-based sampling of soil moisture across multiple scales is necessary to validate remotely sensed quantities such as NASA's Soil Moisture Active Passive (SMAP) product. In the present study, in-situ soil moisture data were collected at two nested scale extents (0.5 km and 3 km) to understand the trend of soil moisture variability across these scales. This ground-based soil moisture sampling was conducted in the 500 km2 Rana watershed situated in eastern India. The study area is characterized by a sub-humid, sub-tropical climate with average annual rainfall of about 1456 mm. Three 3x3 km square grids were sampled intensively once a day at 49 locations each, at a spacing of 0.5 km. These intensive sampling locations were selected on the basis of different topography, soil properties and vegetation characteristics. In addition, measurements were also made at 9 locations around each intensive sampling grid at 3 km spacing to cover a 9x9 km square grid. Intensive fine-scale soil moisture sampling as well as coarser-scale samplings were made using both impedance probes and gravimetric analyses in the study watershed. The ground-based soil moisture samplings were conducted during the day, concurrent with the SMAP descending overpass. Analysis of soil moisture spatial variability in terms of areal mean soil moisture and the statistics of higher-order moments, i.e., the standard deviation and the coefficient of variation, is presented. Results showed that the standard deviation and coefficient of variation of measured soil moisture decreased with increasing mean soil moisture at each extent scale.
NASA Astrophysics Data System (ADS)
Stone, H. B.; Banas, N. S.; Hickey, B. M.; MacCready, P.
2016-02-01
The Pacific Northwest coast is an unusually productive area with a strong river influence and highly variable upwelling-favorable and downwelling-favorable winds, but recent trends in hypoxia and ocean acidification in this region are troubling to both scientists and the general public. A new ROMS hindcast model of this region makes possible a study of interannual variability. This study of the interannual temperature and salinity variability on the Pacific Northwest coast is conducted using a coastal hindcast model (43°N - 50°N) spanning 2002-2009 from the University of Washington Coastal Modeling Group, with a resolution of 1.5 km over the shelf and slope. Analysis of hindcast model results was used to assess the relative importance of source water variability, including the poleward California Undercurrent, local and remote wind forcing, winter wind-driven mixing, and river influence in explaining the interannual variations in the shelf bottom layer (40 - 80 m depth, 10 m thick) and over the slope (150 - 250 m depth, <100 km from shelf break) at each latitude within the model domain. Characterized through tracking of the fraction of Pacific Equatorial Water (PEW) relative to Pacific Subarctic Upper Water (PSUW) present on the slope, slope water properties at all latitudes varied little throughout the time series, with the largest variability due to patterns of large north-south advection of water masses over the slope. Over the time series, the standard deviation of slope temperature was 0.09 ˚C, while slope salinity standard deviation was 0.02 psu. Results suggest that shelf bottom water interannual variability is not driven primarily by interannual variability in slope water as shelf bottom water temperature and salinity vary nearly 10 times more than those over the slope. Instead, interannual variability in shelf bottom water properties is likely driven by other processes, such as local and remote wind forcing, and winter wind-driven mixing. The relative contributions of these processes to interannual variability in shelf bottom water properties will be addressed. Overall, these results highlight the importance of shelf processes relative to large-scale influences on the interannual timescale in particular. Implications for variability in hypoxia and ocean acidification impacts will be discussed.
Ciarleglio, Maria M; Arendt, Christopher D; Peduzzi, Peter N
2016-06-01
When designing studies that have a continuous outcome as the primary endpoint, the hypothesized effect size ([Formula: see text]), that is, the hypothesized difference in means ([Formula: see text]) relative to the assumed variability of the endpoint ([Formula: see text]), plays an important role in sample size and power calculations. Point estimates for [Formula: see text] and [Formula: see text] are often calculated using historical data. However, the uncertainty in these estimates is rarely addressed. This article presents a hybrid classical and Bayesian procedure that formally integrates prior information on the distributions of [Formula: see text] and [Formula: see text] into the study's power calculation. Conditional expected power, which averages the traditional power curve using the prior distributions of [Formula: see text] and [Formula: see text] as the averaging weight, is used, and the value of [Formula: see text] is found that equates the prespecified frequentist power ([Formula: see text]) and the conditional expected power of the trial. This hypothesized effect size is then used in traditional sample size calculations when determining sample size for the study. The value of [Formula: see text] found using this method may be expressed as a function of the prior means of [Formula: see text] and [Formula: see text], [Formula: see text], and their prior standard deviations, [Formula: see text]. We show that the "naïve" estimate of the effect size, that is, the ratio of prior means, should be down-weighted to account for the variability in the parameters. An example is presented for designing a placebo-controlled clinical trial testing the antidepressant effect of alprazolam as monotherapy for major depression. Through this method, we are able to formally integrate prior information on the uncertainty and variability of both the treatment effect and the common standard deviation into the design of the study while maintaining a frequentist framework for the final analysis. Solving for the effect size which the study has a high probability of correctly detecting based on the available prior information on the difference [Formula: see text] and the standard deviation [Formula: see text] provides a valuable, substantiated estimate that can form the basis for discussion about the study's feasibility during the design phase. © The Author(s) 2016.
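A Monte Carlo version of the idea is easy to sketch: draw the mean difference and common standard deviation from their priors, average the classical two-sample power curve over the draws, and read off the conditional expected power. The sketch below uses a normal-approximation power formula and normal priors (all numbers hypothetical; this is not the authors' exact procedure):

import numpy as np
from scipy.stats import norm

def conditional_expected_power(n_per_arm, mu_delta, sd_delta, mu_sigma, sd_sigma,
                               alpha=0.05, n_draws=100_000, seed=0):
    # Average the classical two-sample power curve over prior draws of the
    # mean difference (delta) and common SD (sigma); sigma draws are folded
    # to be positive. Normal-approximation power formula.
    rng = np.random.default_rng(seed)
    delta = rng.normal(mu_delta, sd_delta, n_draws)
    sigma = np.abs(rng.normal(mu_sigma, sd_sigma, n_draws))
    z_crit = norm.ppf(1 - alpha / 2)
    power = norm.cdf(np.abs(delta) / (sigma * np.sqrt(2.0 / n_per_arm)) - z_crit)
    return float(power.mean())

# Prior belief: difference 4 +/- 1.5 points, common SD 8 +/- 2, n = 64/arm.
print(conditional_expected_power(64, 4.0, 1.5, 8.0, 2.0))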
Uncertainty Analysis of Downscaled CMIP5 Precipitation Data for Louisiana, USA
NASA Astrophysics Data System (ADS)
Sumi, S. J.; Tamanna, M.; Chivoiu, B.; Habib, E. H.
2014-12-01
The downscaled CMIP3 and CMIP5 Climate and Hydrology Projections dataset contains fine spatial resolution translations of climate projections over the contiguous United States developed using two downscaling techniques (monthly Bias Correction Spatial Disaggregation (BCSD) and daily Bias Correction Constructed Analogs (BCCA)). The objective of this study is to assess the uncertainty of the CMIP5 downscaled general circulation models (GCM). We performed an analysis of the daily, monthly, seasonal and annual variability of precipitation downloaded from the Downscaled CMIP3 and CMIP5 Climate and Hydrology Projections website for the state of Louisiana, USA at 0.125° x 0.125° resolution. A data set of daily gridded observations of precipitation of a rectangular boundary covering Louisiana is used to assess the validity of 21 downscaled GCMs for the 1950-1999 period. The following statistics are computed using the CMIP5 observed dataset with respect to the 21 models: the correlation coefficient, the bias, the normalized bias, the mean absolute error (MAE), the mean absolute percentage error (MAPE), and the root mean square error (RMSE). A measure of variability simulated by each model is computed as the ratio of its standard deviation, in both space and time, to the corresponding standard deviation of the observation. The correlation and MAPE statistics are also computed for each of the nine climate divisions of Louisiana. Some of the patterns that we observed are: 1) Average annual precipitation rate shows similar spatial distribution for all the models within a range of 3.27 to 4.75 mm/day from Northwest to Southeast. 2) Standard deviation of summer (JJA) precipitation (mm/day) for the models maintains lower value than the observation whereas they have similar spatial patterns and range of values in winter (NDJ). 3) Correlation coefficients of annual precipitation of models against observation have a range of -0.48 to 0.36 with variable spatial distribution by model. 4) Most of the models show negative correlation coefficients in summer and positive in winter. 5) MAE shows similar spatial distribution for all the models within a range of 5.20 to 7.43 mm/day from Northwest to Southeast of Louisiana. 6) Highest values of correlation coefficients are found at seasonal scale within a range of 0.36 to 0.46.
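The validation statistics listed above are all standard. A compact helper (synthetic data; names hypothetical) shows one way to assemble them:

import numpy as np

def skill_stats(model, obs):
    # Model-vs-observation statistics of the kind listed above, computed
    # on flattened 1-D series of paired values.
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    err = model - obs
    return {
        "corr": float(np.corrcoef(model, obs)[0, 1]),
        "bias": float(err.mean()),
        "mae": float(np.abs(err).mean()),
        "mape": float(100.0 * np.mean(np.abs(err) / np.abs(obs))),  # unstable near zero obs
        "rmse": float(np.sqrt(np.mean(err ** 2))),
        "sd_ratio": float(np.std(model, ddof=1) / np.std(obs, ddof=1)),
    }

rng = np.random.default_rng(5)
obs = rng.gamma(2.0, 2.0, size=365)            # synthetic daily precipitation (mm/day)
model = 0.9 * obs + rng.normal(0.0, 1.0, 365)  # biased, noisy pseudo-model
print(skill_stats(model, obs))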
The Statistical Drake Equation
NASA Astrophysics Data System (ADS)
Maccone, Claudio
2010-12-01
We provide the statistical generalization of the Drake equation. From a simple product of seven positive numbers, the Drake equation is now turned into the product of seven positive random variables. We call this "the Statistical Drake Equation". The mathematical consequences of this transformation are then derived. The proof of our results is based on the Central Limit Theorem (CLT) of Statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov Form of the CLT, or the Lindeberg Form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that: The new random variable N, yielding the number of communicating civilizations in the Galaxy, follows the LOGNORMAL distribution. Then, as a consequence, the mean value of this lognormal distribution is the ordinary N in the Drake equation. The standard deviation, mode, and all the moments of this lognormal N are also found. The seven factors in the ordinary Drake equation now become seven positive random variables. The probability distribution of each random variable may be ARBITRARY. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our statistical Drake equation by allowing an arbitrary probability distribution for each factor. This is both physically realistic and practically very useful, of course. An application of our statistical Drake equation then follows. The (average) DISTANCE between any two neighboring and communicating civilizations in the Galaxy may be shown to be inversely proportional to the cubic root of N. Then, in our approach, this distance becomes a new random variable. We derive the relevant probability density function, apparently previously unknown and dubbed "Maccone distribution" by Paul Davies. DATA ENRICHMENT PRINCIPLE. It should be noticed that ANY positive number of random variables in the Statistical Drake Equation is compatible with the CLT. So, our generalization allows for many more factors to be added in the future as long as more refined scientific knowledge about each factor will be known to the scientists. This capability to make room for more future factors in the statistical Drake equation, we call the "Data Enrichment Principle," and we regard it as the key to more profound future results in the fields of Astrobiology and SETI. Finally, a practical example is given of how our statistical Drake equation works numerically. We work out in detail the case where each of the seven random variables is uniformly distributed around its own mean value and has a given standard deviation. For instance, the number of stars in the Galaxy is assumed to be uniformly distributed around (say) 350 billion with a standard deviation of (say) 1 billion. Then, the resulting lognormal distribution of N is computed numerically by virtue of a MathCad file that the author has written. This shows that the mean value of the lognormal random variable N is actually of the same order as the classical N given by the ordinary Drake equation, as one might expect from a good statistical generalization.
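The lognormal claim is easy to check numerically. The sketch below (factor means and spreads are placeholders, except the 350 billion star count taken from the worked example) forms the product of seven independent uniform factors and compares the sample mean of N with the lognormal reconstruction exp(mu + sigma^2/2) of log N:

import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(42)
n_draws = 200_000

# Seven independent factors, each uniform around its own mean value; only
# the 350 billion star count comes from the text, the rest are placeholders.
means = np.array([3.5e11, 0.5, 2.0, 0.3, 0.1, 0.2, 1e-7])
halfw = 0.3 * means  # +/-30% half-widths, an assumption for illustration
factors = rng.uniform(means - halfw, means + halfw, size=(n_draws, 7))
N = factors.prod(axis=1)  # the Drake product as a random variable

logN = np.log(N)
# If N is (approximately) lognormal, exp(mean + var/2) of log N recovers E[N]:
print(N.mean(), np.exp(logN.mean() + 0.5 * logN.var()))
print(skew(logN))  # near zero: log N is roughly Gaussian, as the CLT predicts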
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoenig, M.; Elsen, Y.V.; Cauter, R.V.
The progressive degradation of the pyrolytic graphite surface of atomizers provides variable and misleading results of molybdenum peak-height measurements. The changes in the peak shapes produce no analytical problems during the lifetime of the atomizer (approx. 300 firings) when integrated absorbance (A.s signals) is considered and the possible base-line drifts are controlled. This was demonstrated on plant samples mineralized by simple digestion with a mixture of HNO3 and H2O2. The value of this method was assessed by comparison with a standard dry oxidation method and by molybdenum determination in National Bureau of Standards reference plant samples. The relative standard deviations (n = 5) of the full analytical procedure do not exceed 7%. 13 references, 3 figures, 3 tables.
Spatial variability of soil water content in the covered catchment at Gårdsjön, Sweden
NASA Astrophysics Data System (ADS)
Nyberg, Lars
1996-01-01
The spatial variability of soil water content was investigated for a 6300 m2 covered catchment on the Swedish west coast. The catchment podzol soil is developed in a sandy-silty till with a mean depth of 43 cm and the dominant vegetation is Norway spruce. The acid precipitation is removed by a plastic roof and replaced with lake water irrigated under the tree canopies. On two occasions, in April and May 1993, TDR measurements were made at 57-73 points in the catchment using 15 and 30 cm long vertically installed probes. The water content patterns at the two dates, which occurred during a relatively dry period, were similar. The range of water content was large, from 5 to 60%. In May 1993 measurements were also made in areas of 10 × 10 m, 1 × 1 m and 0.2 × 0.2 m. The range and standard deviation for the 10 × 10 m area, which apart from a small-scale variability in soil hydraulic properties and fine root distribution also had a heterogeneous micro- and macro-topography, were similar to the range and standard deviation for the catchment. The 1 × 1 m and 0.2 × 0.2 m areas had considerably lower variability. Semi-variogram models for the water content had a range of influence of about 20 m. If data were paired in the east-west direction, the semi-variance reflected the topography of the central valley and had a maximum for data pairs with internal distances of 20-40 m. The correlation between soil water content and topographic index, especially when averaged for the eight topographically homogeneous subareas, indicated the macro-topography as the cause of a large part of the water content variability.
Taylor, Diane M; Chow, Fotini K; Delkash, Madjid; Imhoff, Paul T
2018-03-01
The short-term temporal variability of landfill methane emissions is not well understood due to uncertainty in measurement methods. Significant variability is seen over short-term measurement campaigns with the tracer dilution method (TDM), but this variability may be due in part to measurement error rather than fluctuations in the actual landfill emissions. In this study, landfill methane emissions and TDM-measured emissions are simulated over a real landfill in Delaware, USA using the Weather Research and Forecasting model (WRF) for two emissions scenarios. In the steady emissions scenario, a constant landfill emissions rate is prescribed at each model grid point on the surface of the landfill. In the unsteady emissions scenario, emissions are calculated at each time step as a function of the local surface wind speed, resulting in variable emissions over each 1.5-h measurement period. The simulation output is used to assess the standard deviation and percent error of the TDM-measured emissions. Eight measurement periods are simulated over two different days to look at different conditions. Results show that the standard deviation of the TDM-measured emissions does not increase significantly from the steady emissions simulations to the unsteady emissions scenarios, indicating that the TDM may have inherent errors in its prediction of emissions fluctuations. Results also show that TDM error does not increase significantly from the steady to the unsteady emissions simulations. This indicates that introducing variability to the landfill emissions does not increase errors in the TDM at this site. Across all simulations, TDM errors range from -15% to 43%, consistent with the range of errors seen in previous TDM studies. Simulations indicate diurnal variations of methane emissions when wind effects are significant, which may be important when developing daily and annual emissions estimates from limited field data. Copyright © 2017 Elsevier Ltd. All rights reserved.
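For background, the tracer dilution measurement itself reduces to a ratio of cross-plume integrals. A minimal sketch (hypothetical transect data; the molar-mass conversion between tracer and methane is omitted for brevity):

import numpy as np

def tracer_dilution_emission(q_tracer, ch4_xs, tracer_xs):
    # Known tracer release rate scaled by the ratio of background-corrected
    # cross-plume concentration integrals; with a common transect spacing
    # the integral ratio reduces to a ratio of sums.
    return q_tracer * np.sum(ch4_xs) / np.sum(tracer_xs)

# Hypothetical background-corrected transect concentrations (ppb):
ch4    = np.array([0.0, 12.0, 30.0, 41.0, 28.0, 10.0, 0.0])
tracer = np.array([0.0,  2.1,  5.0,  7.2,  4.8,  1.9, 0.0])
print(tracer_dilution_emission(0.2, ch4, tracer))  # tracer released at 0.2 kg/s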
de Castro, Bianca C R; Guida, Heraldo L; Roque, Adriano L; de Abreu, Luiz Carlos; Ferreira, Celso; Marcomini, Renata S; Monteiro, Carlos B M; Adami, Fernando; Ribeiro, Viviane F; Fonseca, Fernando L A; Santos, Vilma N S; Valenti, Vitor E
2014-01-01
The behavior of the geometric indices of heart rate variability (HRV) during musical auditory stimulation is poorly described in the literature. The objective is to investigate the acute effects of classical musical auditory stimulation on the geometric indices of HRV in women in response to the postural change maneuver (PCM). We evaluated 11 healthy women between 18 and 25 years old. We analyzed the following indices: triangular index, triangular interpolation of RR intervals, and Poincaré plot (standard deviation of the instantaneous variability of the beat-to-beat heart rate [SD1], standard deviation of the long-term continuous RR interval variability [SD2], and the ratio between the short- and long-term variations of RR intervals [SD1/SD2]). HRV was recorded at seated rest for 10 min. The women quickly stood up from a seated position in up to 3 s and remained standing still for 15 min. HRV was recorded at the following periods: rest, 0-5 min, 5-10 min and 10-15 min during standing. In the second protocol, the subject was exposed to musical auditory stimulation (Pachelbel, Canon in D) for 10 min in the seated position before standing. The Shapiro-Wilk test was used to verify normality of the data; repeated-measures ANOVA followed by the Bonferroni test was applied to parametric variables, and Friedman's test followed by Dunn's post test to non-parametric distributions. In the first protocol, all indices were reduced at 10-15 min after the volunteers stood up. In the musical auditory stimulation protocol, the SD1 index was reduced at 5-10 min after the volunteers stood up compared with the music period. The SD1/SD2 ratio was decreased in the control and music periods compared with 5-10 min after the volunteers stood up. Musical auditory stimulation attenuates the cardiac autonomic responses to the PCM.
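For orientation, SD1 and SD2 have standard shortcut formulas in terms of successive RR-interval differences (this is the textbook computation, not necessarily the study's software). A minimal sketch with a hypothetical RR series in ms:

import numpy as np

def poincare_sd1_sd2(rr_ms):
    # Shortcut formulas: SD1^2 = var(successive differences)/2,
    # SD2^2 = 2*var(RR) - var(successive differences)/2.
    rr = np.asarray(rr_ms, float)
    var_d = np.var(np.diff(rr), ddof=1)
    sd1 = np.sqrt(0.5 * var_d)
    sd2 = np.sqrt(2.0 * np.var(rr, ddof=1) - 0.5 * var_d)
    return sd1, sd2

rr = np.array([800.0, 810.0, 822.0, 835.0, 845.0, 855.0, 850.0, 840.0])
sd1, sd2 = poincare_sd1_sd2(rr)
print(sd1, sd2, sd1 / sd2)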
Extreme events, trends, and variability in Northern Hemisphere lake-ice phenology (1855-2005)
Benson, Barbara J.; Magnuson, John J.; Jensen, Olaf P.; Card, Virginia M.; Hodgkins, Glenn; Korhonen, Johanna; Livingstone, David M.; Stewart, Kenton M.; Weyhenmeyer, Gesa A.; Granin, Nick G.
2012-01-01
Often extreme events, more than changes in mean conditions, have the greatest impact on the environment and human well-being. Here we examine changes in the occurrence of extremes in the timing of the annual formation and disappearance of lake ice in the Northern Hemisphere. Both changes in the mean condition and in variability around the mean condition can alter the probability of extreme events. Using long-term ice phenology data covering two periods 1855–6 to 2004–5 and 1905–6 to 2004–5 for a total of 75 lakes, we examined patterns in long-term trends and variability in the context of understanding the occurrence of extreme events. We also examined patterns in trends for a 30-year subset (1975–6 to 2004–5) of the 100-year data set. Trends for ice variables in the recent 30-year period were steeper than those in the 100- and 150-year periods, and trends in the 150-year period were steeper than in the 100-year period. Ranges of rates of change (days per decade) among time periods based on linear regression were 0.3−1.6 later for freeze, 0.5−1.9 earlier for breakup, and 0.7−4.3 shorter for duration. Mostly, standard deviation did not change, or it decreased in the 150-year and 100-year periods. During the recent 50-year period, standard deviation calculated in 10-year windows increased for all ice measures. For the 150-year and 100-year periods changes in the mean ice dates rather than changes in variability most strongly influenced the significant increases in the frequency of extreme lake ice events associated with warmer conditions and decreases in the frequency of extreme events associated with cooler conditions.
A practical method to fabricate gold substrates for surface-enhanced Raman spectroscopy.
Tantra, Ratna; Brown, Richard J C; Milton, Martin J T; Gohil, Dipak
2008-09-01
We describe a practical method of fabricating surface-enhanced Raman spectroscopy (SERS) substrates based on dip-coating poly-L-lysine derivatized microscope slides in a gold colloidal suspension. The use of only commercially available starting materials in this preparation is particularly advantageous, aimed at both reducing time and the inconsistency associated with surface modification of substrates. The success of colloid deposition has been demonstrated by scanning electron microscopy (SEM) and the corresponding SERS response (giving performance comparable to the corresponding traditional colloidal SERS substrates). Reproducibility was evaluated by conducting replicate measurements across six different locations on the substrate and assessing the extent of the variability (standard deviation values of spectral parameters: peak width and height), in response to either Rhodamine 6G or Isoniazid. Of particular interest is the observation that some peaks in a given spectrum are more susceptible to data variability than others. For example, in a Rhodamine 6G SERS spectrum, spectral parameters of the peak at 775 cm⁻¹ were shown to have a relative standard deviation (RSD) of <10%, while the peak at 1573 cm⁻¹ has an RSD of ≥10%. This observation is best explained by taking into account spectral variations that arise from the effect of a chemisorption process and the local nature of chemical enhancement mechanisms, which affects the enhancement of some spectral peaks but not others (analogous to the resonant Raman phenomenon).
A Note on Standard Deviation and Standard Error
ERIC Educational Resources Information Center
Hassani, Hossein; Ghodsi, Mansoureh; Howell, Gareth
2010-01-01
Many students confuse the standard deviation and standard error of the mean and are unsure which, if either, to use in presenting data. In this article, we endeavour to address these questions and cover some related ambiguities about these quantities.
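The distinction is one line of arithmetic: the standard error of the mean is the sample SD divided by the square root of n. A tiny sketch with synthetic data:

import numpy as np

rng = np.random.default_rng(7)
sample = rng.normal(100.0, 15.0, size=50)

sd = np.std(sample, ddof=1)      # spread of the individual observations
se = sd / np.sqrt(sample.size)   # uncertainty of the sample mean itself

print(sd, se)  # report SD to describe the data, SE to describe the mean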
Multifocal visual evoked potentials for early glaucoma detection.
Weizer, Jennifer S; Musch, David C; Niziol, Leslie M; Khan, Naheed W
2012-07-01
To compare multifocal visual evoked potentials (mfVEP) with other detection methods in early open-angle glaucoma. Ten patients with suspected glaucoma and 5 with early open-angle glaucoma underwent mfVEP, standard automated perimetry (SAP), short-wave automated perimetry, frequency-doubling technology perimetry, and nerve fiber layer optical coherence tomography. Nineteen healthy control subjects underwent mfVEP and SAP for comparison. Comparisons between groups involving continuous variables were made using independent t tests; for categorical variables, Fisher's exact test was used. Monocular mfVEP cluster defects were associated with an increased SAP pattern standard deviation (P = .0195). Visual fields that showed interocular mfVEP cluster defects were more likely to also show superior quadrant nerve fiber layer thinning by OCT (P = .0152). Multifocal visual evoked potential cluster defects are associated with a functional and an anatomic measure that both relate to glaucomatous optic neuropathy. Copyright 2012, SLACK Incorporated.
Bolann, B J; Asberg, A
2004-01-01
The deviation of test results from patients' homeostatic set points in steady-state conditions may complicate interpretation of the results and the comparison of results with clinical decision limits. In this study the total deviation from the homeostatic set point is defined as the maximum absolute deviation for 95% of measurements, and we present analytical quality requirements that prevent analytical error from increasing this deviation to more than about 12% above the value caused by biology alone. These quality requirements are: 1) The stable systematic error should be approximately 0, and 2) a systematic error that will be detected by the control program with 90% probability, should not be larger than half the value of the combined analytical and intra-individual standard deviation. As a result, when the most common control rules are used, the analytical standard deviation may be up to 0.15 times the intra-individual standard deviation. Analytical improvements beyond these requirements have little impact on the interpretability of measurement results.
Wang, Guiqin; Wu, Yangsiqian; Lin, Yangting
2016-02-28
Nearly 99% of the total content of extraterrestrial metals is composed of Fe and Ni, but with greatly variable trace element contents. The accuracy obtained in the inductively coupled plasma mass spectrometry (ICP-MS) analysis of solutions of these samples can be significantly influenced by matrix contents, polyatomic ion interference, and the concentrations of external standard solutions. An ICP-MS instrument (X Series 2) was used to analyse 30 standard solutions with different concentrations of trace elements and different matrix contents. Based on these measurements, the matrix effects were determined. Three iron meteorites were dissolved separately in aqua regia and HNO3. Deviations due to variation of matrix contents in the external standard solutions were evaluated and the analysis results of the two digestion methods for iron meteorites were assessed. Our results show obvious deviations due to unmatched matrix contents in the external standard solutions. Furthermore, discrepancy in the measurement of some elements was found between the sample solutions prepared with aqua regia and HNO3, due to loss of chloride during sample preparation and/or incomplete digestion of highly siderophile elements in iron meteorites. An accurate ICP-MS analysis method for extraterrestrial metal samples has been established using external standard solutions with matched matrix contents and digesting the samples with HNO3 and aqua regia. Using the data from this work, the Mundrabilla iron meteorite previously classified as IAB-ung is reclassified as IAB-MG. Copyright © 2016 John Wiley & Sons, Ltd.
Code of Federal Regulations, 2010 CFR
2010-01-01
... defined in section 1 of this appendix is as follows: (a) The standard deviation of lateral track errors shall be less than 6.3 NM (11.7 km). Standard deviation is a statistical measure of data about a mean... standard deviation about the mean encompasses approximately 68 percent of the data and plus or minus 2...
2012-01-01
Background When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. Methods An analytical expression was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examined the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal or uniform distribution in the combined sample of those with and without the condition. Results Under the assumption of binormality with equality of variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, and uniform in the entire sample of those with and without the condition. Conclusions The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population. PMID:22716998
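The equal-variance result is easy to check numerically: under binormality the c-statistic should equal Φ(βσ/√2), the standard normal CDF evaluated at the product of the log-odds ratio and the SD (scaled by √2). A minimal simulation sketch consistent with that expression; parameter values and names are ours:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
mu0, mu1, sigma = 0.0, 1.0, 1.0             # binormal model, equal variances

x0 = rng.normal(mu0, sigma, 200_000)         # explanatory variable, without condition
x1 = rng.normal(mu1, sigma, 200_000)         # explanatory variable, with condition

# Empirical c-statistic: probability a random case outranks a random control.
c_emp = np.mean(rng.choice(x1, 1_000_000) > rng.choice(x0, 1_000_000))

beta = (mu1 - mu0) / sigma**2                # log-odds ratio per unit of x
c_theory = norm.cdf(beta * sigma / np.sqrt(2))
print(f"empirical {c_emp:.3f} vs. analytical {c_theory:.3f}")   # both ~0.760
```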
Kook, Michael S; Cho, Hyun-soo; Seong, Mincheol; Choi, Jaewan
2005-11-01
To evaluate the ability of scanning laser polarimetry parameters and a novel deviation map algorithm to discriminate between healthy and early glaucomatous eyes with localized visual field (VF) defects confined to one hemifield. Prospective case-control study. Seventy glaucomatous eyes with localized VF defects and 66 normal controls. A Humphrey field analyzer 24-2 full-threshold test and scanning laser polarimetry with variable corneal compensation were used. We assessed the sensitivity and specificity of scanning laser polarimetry parameters, sensitivity and cutoff values for scanning laser polarimetry deviation map algorithms at different specificity values (80%, 90%, and 95%) in the detection of glaucoma, and correlations between the algorithms of scanning laser polarimetry and of the pattern deviation derived from Humphrey field analyzer testing. There were significant differences between the glaucoma group and normal subjects in the mean parametric values of the temporal, superior, nasal, inferior, temporal (TSNIT) average, superior average, inferior average, and TSNIT standard deviation (SD) (P<0.05). The sensitivity and specificity of each scanning laser polarimetry variable was as follows: TSNIT, 44.3% (95% confidence interval [CI], 39.8%-49.8%) and 100% (95.4%-100%); superior average, 30% (25.5%-34.5%) and 97% (93.5%-100%); inferior average, 45.7% (42.2%-49.2%) and 100% (95.8%-100%); and TSNIT SD, 30% (25.9%-34.1%) and 97% (93.2%-100%), respectively (when abnormal was defined as P<0.05). Based on nerve fiber indicator cutoff values of ≥30 and ≥51 to indicate glaucoma, sensitivities were 54.3% (50.1%-58.5%) and 10% (6.4%-13.6%), and specificities were 97% (93.2%-100%) and 100% (95.8%-100%), respectively. The range of areas under the receiver operating characteristic curves using the scanning laser polarimetry deviation map algorithm was 0.790 to 0.879. Overall sensitivities combining each probability scale and severity score at 80%, 90%, and 95% specificities were 90.0% (95% CI, 86.4%-93.6%), 71.4% (67.4%-75.4%), and 60.0% (56.2%-63.8%), respectively. There was a statistically significant correlation between the scanning laser polarimetry severity score and the VF severity score (R² = 0.360, P<0.001). Scanning laser polarimetry parameters may not be sufficiently sensitive to detect glaucomatous patients with localized VF damage. Our algorithm using the scanning laser polarimetry deviation map may enhance the understanding of scanning laser polarimetry printouts in terms of the locality, deviation size, and severity of localized retinal nerve fiber layer defects in eyes with localized VF loss.
A better norm-referenced grading using the standard deviation criterion.
Chan, Wing-shing
2014-01-01
The commonly used norm-referenced grading assigns grades to rank-ordered students in fixed percentiles. It has the disadvantage of ignoring the actual distance of scores among students. A simple norm-referenced grading via standard deviation is suggested for routine educational grading. The number of standard deviations of a student's score from the class mean is used as the common yardstick to measure achievement level. The cumulative probability of a normal distribution is referenced to help decide the number of students included within a grade. Results from the foremost 12 students in a medical examination were used to illustrate this grading method. Grading by standard deviation seemed to produce better cutoffs in allocating an appropriate grade to students according to their differential achievements, and had less chance of creating arbitrary cutoffs between two similarly scored students, than grading by fixed percentile. Grading by standard deviation has more advantages and is more flexible than grading by fixed percentile for norm-referenced grading.
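A minimal sketch of the idea: convert each score to a z-score against the class mean and SD, then map z-thresholds to grades. The thresholds below are illustrative, not the cutoffs used in the paper:

```python
import statistics

def grade_by_sd(scores, cutoffs=((1.0, "A"), (0.0, "B"), (-1.0, "C"))):
    """Grade by how many SDs a score lies from the class mean; scores below
    the last threshold get 'D'. Thresholds here are illustrative only."""
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)
    return [next((g for t, g in cutoffs if (s - mean) / sd >= t), "D")
            for s in scores]

print(grade_by_sd([92, 88, 85, 81, 80, 79, 74, 70, 69, 65, 60, 55]))
# -> ['A', 'A', 'B', 'B', 'B', 'B', 'C', 'C', 'C', 'C', 'D', 'D']
```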
Johnson, Craig W; Johnson, Ronald; Kim, Mira; McKee, John C
2009-11-01
During 2004 and 2005 orientations, all 187 and 188 new matriculates, respectively, in two southwestern U.S. nursing schools completed Personal Background and Preparation Surveys (PBPS) in the first predictive validity study of a diagnostic and prescriptive instrument for averting adverse academic status events (AASE) among nursing or health science professional students. A one-standard-deviation increase in PBPS risks (p < 0.05) multiplied the odds of a first-year or second-year AASE by approximately 150%, controlling for school affiliation and underrepresented minority student (URMS) status. AASE odds one standard deviation above the mean were 216% to 250% of those one standard deviation below the mean. Odds of first-year or second-year AASE for URMS one standard deviation above the 2004 PBPS mean were 587% of those for non-URMS one standard deviation below the mean. The PBPS consistently and significantly facilitated early identification of nursing students at risk for AASE, enabling proactive targeting of interventions for risk amelioration and AASE or attrition prevention. Copyright 2009, SLACK Incorporated.
Demonstration of the Gore Module for Passive Ground Water Sampling
2014-06-01
[Front matter, acronyms and abbreviations: %RSD, percent relative standard deviation; 12DCA, 1,2-dichloroethane; 112TCA, 1,1,2-trichloroethane; 1122TetCA, ...; Analysis of Variance; ROD, Record of Decision; RSD, relative standard deviation; SBR, Southern Bush River; SVOC, semi-volatile organic compound.] ...replicate samples had a relative standard deviation (RSD) that was 20% or less. For the remaining analytes (PCE, cDCE, and chloroform), at least 70...
Weir, Christopher J; Butcher, Isabella; Assi, Valentina; Lewis, Stephanie C; Murray, Gordon D; Langhorne, Peter; Brady, Marian C
2018-03-07
Rigorous, informative meta-analyses rely on availability of appropriate summary statistics or individual participant data. For continuous outcomes, especially those with naturally skewed distributions, summary information on the mean or variability often goes unreported. While full reporting of original trial data is the ideal, we sought to identify methods for handling unreported mean or variability summary statistics in meta-analysis. We undertook two systematic literature reviews to identify methodological approaches used to deal with missing mean or variability summary statistics. Five electronic databases were searched, in addition to the Cochrane Colloquium abstract books and the Cochrane Statistics Methods Group mailing list archive. We also conducted cited reference searching and emailed topic experts to identify recent methodological developments. Details recorded included the description of the method, the information required to implement the method, any underlying assumptions and whether the method could be readily applied in standard statistical software. We provided a summary description of the methods identified, illustrating selected methods in example meta-analysis scenarios. For missing standard deviations (SDs), following screening of 503 articles, fifteen methods were identified in addition to those reported in a previous review. These included Bayesian hierarchical modelling at the meta-analysis level; summary statistic level imputation based on observed SD values from other trials in the meta-analysis; a practical approximation based on the range; and algebraic estimation of the SD based on other summary statistics. Following screening of 1124 articles for methods estimating the mean, one approximate Bayesian computation approach and three papers based on alternative summary statistics were identified. Illustrative meta-analyses showed that when replacing a missing SD the approximation using the range minimised loss of precision and generally performed better than omitting trials. When estimating missing means, a formula using the median, lower quartile and upper quartile performed best in preserving the precision of the meta-analysis findings, although in some scenarios, omitting trials gave superior results. Methods based on summary statistics (minimum, maximum, lower quartile, upper quartile, median) reported in the literature facilitate more comprehensive inclusion of randomised controlled trials with missing mean or variability summary statistics within meta-analyses.
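As one concrete instance, summary-statistic-level imputation of a missing SD can borrow a pooled value from the trials that did report one. A minimal sketch with invented numbers; the review itself discusses when such borrowing is defensible:

```python
import math

def impute_sd(reported):
    """Pooled SD from trials that reported one, usable as a stand-in for a
    trial with a missing SD (summary-statistic-level imputation)."""
    num = sum((n - 1) * sd**2 for sd, n in reported)
    den = sum(n - 1 for sd, n in reported)
    return math.sqrt(num / den)

# (sd, sample size) pairs for the reporting trials; invented values.
print(round(impute_sd([(4.2, 60), (5.1, 45), (3.8, 80)]), 2))   # -> 4.27
```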
Flexner 3.0—Democratization of Medical Knowledge for the 21st Century
Weinstein, Ronald S; Krupinski, Elizabeth A; Weinstein, John B; Graham, Anna R; Barker, Gail P; Erps, Kristine A; Holtrust, Angelette L; Holcomb, Michael J
2016-01-01
A medical school general pathology course has been reformatted into a K-12 general pathology course. This new course has been implemented at a series of 7 to 12 grade levels and the student outcomes compared. Typically, topics covered mirrored those in a medical school general pathology course serving as an introduction to the mechanisms of diseases. Assessment of student performance was based on their score on a multiple-choice final examination modeled after an examination given to medical students. Two Tucson area schools, in a charter school network, participated in the study. Statistical analysis of examination performances showed that there were no significant differences as a function of school (F = 0.258, P = .6128), with students at school A having an average test score of 87.03 (standard deviation = 8.99) and school B 86.00 (standard deviation = 8.18). Analysis of variance was also conducted on the test scores as a function of gender and class grade. There were no significant differences as a function of gender (F = 0.608, P = .4382), with females having an average score of 87.18 (standard deviation = 7.24) and males 85.61 (standard deviation = 9.85). There were also no significant differences as a function of grade level (F = 0.627, P = .6003), with 7th graders having an average of 85.10 (standard deviation = 8.90), 8th graders 86.00 (standard deviation = 9.95), 9th graders 89.67 (standard deviation = 5.52), and 12th graders 86.90 (standard deviation = 7.52). The results demonstrated that middle and upper school students performed equally well in K-12 general pathology. Student course evaluations showed that the course met the students' expectations. One class voted K-12 general pathology their "elective course-of-the-year." PMID:28725762
Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun
2014-12-19
In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method by incorporating the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios, respectively. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods in the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. We conclude our work with a summary table (an Excel spread sheet including all formulas) that serves as a comprehensive guidance for performing meta-analysis in different situations.
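For the two most common scenarios, the estimators can be written directly. The constants follow the Wan et al. formulas as we recall them; verify against the paper's summary spreadsheet before using this in a real meta-analysis:

```python
from scipy.stats import norm

def mean_sd_from_summaries(n, median, q1=None, q3=None, minimum=None, maximum=None):
    """Mean/SD estimates from order statistics, following the Wan et al.
    (2014) formulas as we recall them; verify against the paper's
    summary spreadsheet before use."""
    if minimum is not None and maximum is not None and q1 is None:
        # Scenario 1: minimum, median, maximum (and n) reported.
        mean = (minimum + 2 * median + maximum) / 4
        sd = (maximum - minimum) / (2 * norm.ppf((n - 0.375) / (n + 0.25)))
    elif q1 is not None and q3 is not None and minimum is None:
        # Scenario 3: first quartile, median, third quartile (and n).
        mean = (q1 + median + q3) / 3
        sd = (q3 - q1) / (2 * norm.ppf((0.75 * n - 0.125) / (n + 0.25)))
    else:
        raise ValueError("unsupported combination of summary statistics")
    return mean, sd

print(mean_sd_from_summaries(50, median=10.2, q1=8.1, q3=12.6))
```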
Automation of the anthrone assay for carbohydrate concentration determinations.
Turula, Vincent E; Gore, Thomas; Singh, Suddham; Arumugham, Rasappa G
2010-03-01
Reported is the adaptation of a manual polysaccharide assay applicable to glycoconjugate vaccines such as Prevenar to an automated liquid handling system (LHS) for improved performance. The anthrone assay is used for carbohydrate concentration determinations and was scaled to the microtiter plate format with appropriate mixing, dispensing, and measuring operations. Adaptation and development of the LHS platform was performed with both dextran polysaccharides of various sizes and pneumococcal serotype 6A polysaccharide (PnPs 6A). A standard plate configuration was programmed such that the LHS diluted both calibration standards and a test sample multiple times with six replicate preparations per dilution. This extent of replication minimized the effect of any single deviation or delivery error that might have occurred. Analysis of the dextran polymers ranging in size from 214 kDa to 3.755 MDa showed that regardless of polymer chain length the hydrolysis was complete, as evident by uniform concentration measurements. No plate positional absorbance bias was observed; of 12 plates analyzed to examine positional bias, the largest deviation observed was 0.02 percent relative standard deviation (%RSD). The high-purity dextran also afforded the opportunity to assess LHS accuracy; nine replicate analyses of dextran yielded a mean accuracy of 101% recovery. As for precision, a total of 22 unique analyses were performed on a single lot of PnPs 6A, and the resulting variability was 2.5% RSD. This work demonstrated the capability of an LHS to perform the anthrone assay consistently and a reduced assay cycle time for greater laboratory capacity.
Intraindividual variability in cognitive performance in persons with chronic fatigue syndrome.
Fuentes, K; Hunter, M A; Strauss, E; Hultsch, D F
2001-05-01
Studies of cognitive performance among persons with chronic fatigue syndrome (CFS) have yielded inconsistent results. We sought to contribute to findings in this area by examining intraindividual variability as well as level of performance in cognitive functioning. A battery of cognitive measures was administered to 14 CFS patients and 16 healthy individuals on 10 weekly occasions. Analyses comparing the two groups in terms of level of performance defined by latency and accuracy scores revealed that the CFS patients were slower but not less accurate than healthy persons. The CFS group showed greater intraindividual variability (as measured by intraindividual standard deviations and coefficients of variation) than the healthy group, although the results varied by task and time frame. Intraindividual variability was found to be stable across time and correlated across tasks at each testing occasion. Intraindividual variability also uniquely differentiated the groups. The present findings support the proposition that intraindividual variability is a meaningful indicator of cognitive functioning in CFS patients.
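For concreteness, the two variability measures used here, the intraindividual SD and the coefficient of variation, reduce to a few lines for one participant's latency series (numbers invented for the sketch):

```python
import numpy as np

# One participant's latencies (ms) across the 10 weekly occasions; invented values.
latencies = np.array([612, 655, 590, 701, 643, 688, 570, 660, 625, 697], dtype=float)

isd = latencies.std(ddof=1)         # intraindividual standard deviation
icv = isd / latencies.mean()        # coefficient of variation (scale-free, so
                                    # slower participants are not penalized)
print(f"iSD = {isd:.1f} ms, iCV = {icv:.3f}")
```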
Variability of indication criteria in knee and hip replacement: an observational study.
Cobos, Raquel; Latorre, Amaia; Aizpuru, Felipe; Guenaga, Jose I; Sarasqueta, Cristina; Escobar, Antonio; García, Lidia; Herrera-Espiñeira, Carmen
2010-10-26
Total knee (TKR) and hip (THR) replacement (arthroplasty) are effective surgical procedures that relieve pain, improve patients' quality of life and increase functional capacity. Studies on variations in medical practice usually find the indications for performing these procedures to be highly variable, because surgeons appear to follow different criteria when recommending surgery in patients with different severity levels. We therefore proposed a study to evaluate inter-hospital variability in arthroplasty indication. The pre-surgical condition of the 1603 patients included was compared by their personal characteristics, clinical situation and self-perceived health status. Patients were asked to complete two health-related quality of life questionnaires: the generic SF-12 (Short Form) and the specific WOMAC (Western Ontario and McMaster Universities) scale. The type of patient undergoing primary arthroplasty was similar in the 15 different hospitals evaluated. The variability in baseline WOMAC score between hospitals in THR and TKR indication was described by range, mean and standard deviation (SD), mean and standard deviation weighted by the number of procedures at each hospital, high/low ratio or extremal quotient (EQ5-95), variation coefficient (CV5-95) and weighted variation coefficient (WCV5-95) for the 5-95 percentile range. The variability in subjective and objective signs was evaluated using median, range and WCV5-95. The appropriateness of the procedures performed was calculated using a specific threshold proposed by Quintana et al for assessing pain and functional capacity. The variability expressed as WCV5-95 was very low, between 0.05 and 0.11, for all three dimensions of the WOMAC scale for both types of procedure in all participating hospitals. The variability in the physical and mental SF-12 components was very low for both types of procedure (0.08 and 0.07 for hip and 0.03 and 0.07 for knee surgery patients). However, moderate-high variability was detected in subjective-objective signs. Among all the surgeries performed, approximately a quarter could be considered inappropriate. A greater inter-hospital variability was observed for objective than for subjective signs for both procedures, suggesting that the differences in clinical criteria followed by surgeons when indicating arthroplasty are the main factors responsible for the variation in surgery rates.
Darajeh, Negisa; Idris, Azni; Fard Masoumi, Hamid Reza; Nourani, Abolfazl; Truong, Paul; Rezania, Shahabaldin
2017-05-04
Artificial neural networks (ANNs) have been widely used to solve such problems because of their reliable, robust, and salient characteristics in capturing the nonlinear relationships between variables in complex systems. In this study, an ANN was applied to model Chemical Oxygen Demand (COD) and biodegradable organic matter (BOD) removal from palm oil mill secondary effluent (POMSE) by a vetiver system. The independent variables, namely POMSE concentration, vetiver slip density, and removal time, were considered as input parameters to optimize the network, while the removal percentages of COD and BOD were selected as outputs. To determine the number of hidden layer nodes, the root mean squared error of the testing set was minimized, and the topologies of the algorithms were compared by coefficient of determination and absolute average deviation. The comparison indicated that the quick propagation (QP) algorithm had the minimum root mean squared error and absolute average deviation, and the maximum coefficient of determination. The importance values of the variables were 42.41% for vetiver slip density, 29.8% for removal time, and 27.79% for POMSE concentration, showing that none of them is negligible. Results show that the ANN has great potential in predicting COD and BOD removal from POMSE, with a residual standard error (RSE) of less than 0.45%.
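The three comparison criteria named above are standard and easy to state in code. A sketch under the assumption that absolute average deviation is the usual mean absolute relative error expressed as a percentage (the paper may define it slightly differently):

```python
import numpy as np

def rmse(y_obs, y_pred):
    return float(np.sqrt(np.mean((y_obs - y_pred) ** 2)))

def r2(y_obs, y_pred):
    ss_res = np.sum((y_obs - y_pred) ** 2)
    ss_tot = np.sum((y_obs - np.mean(y_obs)) ** 2)
    return float(1 - ss_res / ss_tot)

def aad_percent(y_obs, y_pred):
    # Mean absolute relative error in percent; assumed to match the
    # paper's 'absolute average deviation'.
    return float(100 * np.mean(np.abs(y_pred - y_obs) / y_obs))

y_obs = np.array([72.0, 81.5, 64.2, 90.1])    # illustrative removal percentages
y_pred = np.array([70.8, 83.0, 65.5, 88.7])   # illustrative network predictions
print(rmse(y_obs, y_pred), r2(y_obs, y_pred), aad_percent(y_obs, y_pred))
```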
Estimation of the neural drive to the muscle from surface electromyograms
NASA Astrophysics Data System (ADS)
Hofmann, David
Muscle force is highly correlated with the standard deviation of the surface electromyogram (sEMG) produced by the active muscle. Correctly estimating this quantity for non-stationary sEMG and understanding its relation to neural drive and muscle force is of paramount importance. The single constituents of the sEMG are called motor unit action potentials, whose biphasic amplitudes can interfere (termed amplitude cancellation), potentially affecting the standard deviation (Keenan et al. 2005). However, when certain conditions are met, the Campbell-Hardy theorem suggests that amplitude cancellation does not affect the standard deviation. By simulation of the sEMG, we verify the applicability of this theorem to myoelectric signals and investigate deviations from its conditions to obtain a more realistic setting. We find no difference in estimated standard deviation with and without interference, standing in stark contrast to previous results (Keenan et al. 2008, Farina et al. 2010). Furthermore, since the theorem provides us with the functional relationship between standard deviation and neural drive, we conclude that complex methods based on high-density electrode arrays and blind source separation might not bear substantial advantages for neural drive estimation (Farina and Holobar 2016). Funded by NIH Grant Number 1 R01 EB022872 and NSF Grant Number 1208126.
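The Campbell-Hardy claim can be illustrated with a toy shot-noise simulation: sum biphasic waveforms at Poisson firing times and compare the empirical SD with the theorem's prediction sqrt(rate × ∫h²), which holds regardless of how much the positive and negative phases overlap. Kernel shape, rate and duration below are arbitrary choices of ours, not the study's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, dur, rate = 2000.0, 50.0, 120.0            # sampling (Hz), duration (s), pulses/s

t = np.arange(0, 0.02, 1 / fs)                 # 20 ms biphasic MUAP-like kernel
h = np.sin(2 * np.pi * t / 0.02) * np.exp(-t / 0.005)

n = int(dur * fs)
counts = rng.poisson(rate / fs, n).astype(float)   # Poisson firing train
semg = np.convolve(counts, h)[:n]              # linear superposition with overlap

var_campbell = rate * np.sum(h**2) / fs        # Campbell: var = rate * integral(h^2)
print(np.std(semg), np.sqrt(var_campbell))     # nearly identical values
```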
Earth Global Reference Atmospheric Model (Earth-GRAM) GRAM Virtual Meeting
NASA Technical Reports Server (NTRS)
White, Patrick
2017-01-01
What is Earth-GRAM? Provides monthly mean and standard deviation for any point in the atmosphere; Monthly, Geographic, and Altitude Variation. Earth-GRAM is a C++ software package; currently distributed as Earth-GRAM 2016. Atmospheric variables included: pressure, density, temperature, horizontal and vertical winds, speed of sound, and atmospheric constituents. Used by the engineering community because of its ability to create dispersions in the atmosphere at a rapid runtime; often embedded in trajectory simulation software. Not a forecast model. Does not readily capture localized atmospheric effects.
Brumbaugh, William G.; Hammerschmidt, Chad R.; Zanella, Luciana; Rogevich, Emily; Salata, Gregory; Bolek, Radoslaw
2011-01-01
An interlaboratory comparison of acid-volatile sulfide (AVS) and simultaneously extracted nickel (SEM_Ni) measurements of sediments was conducted among five independent laboratories. Relative standard deviations for the seven test samples ranged from 5.6 to 71% (mean = 25%) for AVS and from 5.5 to 15% (mean = 10%) for SEM_Ni. These results are in stark contrast to a recently published study that indicated AVS and SEM analyses were highly variable among laboratories.
Comparison of a novel fixation device with standard suturing methods for spinal cord stimulators.
Bowman, Richard G; Caraway, David; Bentley, Ishmael
2013-01-01
Spinal cord stimulation is a well-established treatment for chronic neuropathic pain of the trunk or limbs. Currently, the standard method of fixation is to affix the leads of the neuromodulation device to soft tissue, fascia or ligament, by manually tying general suture. A novel semiautomated device is proposed that may be advantageous over the current standard. Comparison testing in an excised caprine spine and a simulated bench-top model was performed. Three tests were performed: 1) perpendicular pull from fascia of the caprine spine; 2) axial pull from fascia of the caprine spine; and 3) axial pull from Mylar film. Six samples of each configuration were tested for each scenario. Standard 2-0 Ethibond was compared with a novel semiautomated device (Anulex fiXate). Upon completion of testing, statistical analysis was performed for each scenario. For perpendicular pull in the caprine spine, the failure load for standard suture was 8.95 lbs with a standard deviation of 1.39, whereas for fiXate the load was 15.93 lbs with a standard deviation of 2.09. For axial pull in the caprine spine, the failure load for standard suture was 6.79 lbs with a standard deviation of 1.55, whereas for fiXate the load was 12.31 lbs with a standard deviation of 4.26. For axial pull in Mylar film, the failure load for standard suture was 10.87 lbs with a standard deviation of 1.56, whereas for fiXate the load was 19.54 lbs with a standard deviation of 2.24. These data suggest a novel semiautomated device offers a method of fixation that may be utilized in lieu of standard suturing methods as a means of securing neuromodulation devices. The data suggest the novel semiautomated device may in fact provide more secure fixation than standard suturing methods. © 2012 International Neuromodulation Society.
Standardization of 63Ni by 4πβ Liquid Scintillation Spectrometry With 3H-Standard Efficiency Tracing
Zimmerman, B. E.; Collé, R.
1997-01-01
The low energy (Eβmax = 66.945 keV ± 0.004 keV) β-emitter 63Ni has become increasingly important in the field of radionuclidic metrology. In addition to having a low β-endpoint energy, the relatively long half-life (101.1 a ± 1.4 a) makes it an appealing standard for such applications. This paper describes the recent preparation and calibration of a new solution Standard Reference Material of 63Ni, SRM 4226C, released by the National Institute of Standards and Technology. The massic activity CA for these standards was determined using 4πβ liquid scintillation (LS) spectrometry with 3H-standard efficiency tracing using the CIEMAT/NIST method, and is certified as 50.53 kBq·g−1 ± 0.46 Bq·g−1 at the reference time of 1200 EST August 15, 1995. The uncertainty given is the expanded (coverage factor k = 2 and thus a 2 standard deviation estimate) uncertainty based on the evaluation of 28 different uncertainty components. These components were evaluated on the basis of an exhaustive number (976) of LS counting measurements investigating over 15 variables. Through the study of these variables it was found that LS cocktail water mass fraction and ion concentration play important roles in cocktail stability and consistency of counting results. The results of all of these experiments are discussed. PMID:27805155
Geochemical fingerprinting and source discrimination in soils at the continental scale
NASA Astrophysics Data System (ADS)
Negrel, Philippe; Sadeghi, Martiya; Ladenberger, Anna; Birke, Manfred; Reimann, Clemens
2014-05-01
Agricultural soil (Ap-horizon, 0-20 cm) samples were collected from a large part of Europe (33 countries, 5.6 million km2) at an average density of 1 sample site per 2500 km2. The resulting 2108 soil samples were air dried, sieved to <2 mm, milled and analysed for their major and trace element concentrations by wavelength dispersive X-ray fluorescence spectrometry (WD-XRF). The main goal of this study is to provide a global view of element mobility and source rocks at the continental scale, either by reference to crustal evolution or normalized patterns of element mobility during weathering processes. The survey area includes several sedimentary basins with different geological histories, developed in different climate zones and landscapes and with different land use. In order to normalize the chemical composition of soils, mean values and standard deviations of the selected elements were checked against values for the upper continental crust (UCC). Some elements turned out to be enriched relative to the UCC (Al, P, Zr, Pb) whereas others, like Mg, Na, Sr and Pb, were depleted with regard to the variation represented by the standard deviation. UCC-extended normalization patterns were further used for the selected elements. The mean values of Rb, K, Y, Ti, Al, Si, Zr, Ce and Fe are very close to the UCC model even if the standard deviation suggests slight enrichment or depletion, and Zr shows the best fit with the UCC model using both mean value and standard deviation. Lead and Cr are enriched in European soils when compared to UCC but their standard deviation values show very large variations, particularly towards very low values, which can be interpreted as a lithological effect. Element variability was explored by looking at the variations using indicator elements. Soil data were converted into Al-normalized enrichment factors, and Na was applied as the normalizing element for studying provenance, taking into account the main lithologies of the UCC. This latter normalization highlighted variations related to the soluble and insoluble behavior of some elements (K, Rb versus Ti, Al, Si, V, Y, Zr, Ba, and La, respectively), their reactivity (Fe, Mn, Zn), association with carbonates (Ca and Sr) and with phosphates (P and Ce). The maps of normalized composition revealed some problems with the use of classical element ratios due to genetic differences in the composition of parent material reflected, for example, in large differences in titanium content in bedrock and soil throughout Europe.
Climate change and the detection of trends in annual runoff
McCabe, G.J.; Wolock, D.M.
1997-01-01
This study examines the statistical likelihood of detecting a trend in annual runoff given an assumed change in mean annual runoff, the underlying year-to-year variability in runoff, and serial correlation of annual runoff. Means, standard deviations, and lag-1 serial correlations of annual runoff were computed for 585 stream gages in the conterminous United States, and these statistics were used to compute the probability of detecting a prescribed trend in annual runoff. Assuming a linear 20% change in mean annual runoff over a 100 yr period and a significance level of 95%, the average probability of detecting a significant trend was 28% among the 585 stream gages. The largest probability of detecting a trend was in the northwestern U.S., the Great Lakes region, the northeastern U.S., the Appalachian Mountains, and parts of the northern Rocky Mountains. The smallest probability of trend detection was in the central and southwestern U.S., and in Florida. Low probabilities of trend detection were associated with low ratios of mean annual runoff to the standard deviation of annual runoff and with high lag-1 serial correlation in the data.
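The detection-probability calculation described here can be approximated by Monte Carlo: generate a linear trend plus AR(1) noise with the observed SD and lag-1 correlation, and count how often a regression slope is significant. A sketch of the kind of computation involved, not the authors' exact procedure; the input values are invented:

```python
import numpy as np
from scipy import stats

def trend_detection_prob(mean, sd, rho, years=100, change=0.20,
                         alpha=0.05, n_sim=2000, seed=1):
    """Monte Carlo probability that OLS regression detects a linear trend of
    `change` (fraction of mean) over `years`, with AR(1) noise of the given
    SD and lag-1 correlation; a sketch, not the paper's exact procedure."""
    rng = np.random.default_rng(seed)
    t = np.arange(years)
    trend = mean * change * t / (years - 1)
    hits = 0
    for _ in range(n_sim):
        e = np.empty(years)
        e[0] = rng.normal(0, sd)
        for i in range(1, years):
            e[i] = rho * e[i - 1] + rng.normal(0, sd * np.sqrt(1 - rho**2))
        _, _, _, p, _ = stats.linregress(t, mean + trend + e)
        hits += p < alpha
    return hits / n_sim

# Lower mean/SD ratios and higher rho both depress detection probability.
print(trend_detection_prob(mean=300.0, sd=90.0, rho=0.2))
```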
New Evidence on the Relationship Between Climate and Conflict
NASA Astrophysics Data System (ADS)
Burke, M.
2015-12-01
We synthesize a large new body of research on the relationship between climate and conflict. We consider many types of human conflict, ranging from interpersonal conflict -- domestic violence, road rage, assault, murder, and rape -- to intergroup conflict -- riots, coups, ethnic violence, land invasions, gang violence, and civil war. After harmonizing statistical specifications and standardizing estimated effect sizes within each conflict category, we implement a meta-analysis that allows us to estimate the mean effect of climate variation on conflict outcomes as well as quantify the degree of variability in this effect size across studies. Looking across more than 50 studies, we find that deviations from moderate temperatures and precipitation patterns systematically increase the risk of conflict, often substantially, with average effects that are highly statistically significant. We find that contemporaneous temperature has the largest average effect by far, with each 1 standard deviation increase toward warmer temperatures increasing the frequency of contemporaneous interpersonal conflict by 2% and of intergroup conflict by more than 10%. We also quantify substantial heterogeneity in these effect estimates across settings.
Heart rate variability analysed by Poincaré plot in patients with metabolic syndrome.
Kubičková, Alena; Kozumplík, Jiří; Nováková, Zuzana; Plachý, Martin; Jurák, Pavel; Lipoldová, Jolana
2016-01-01
The SD1 and SD2 indexes (standard deviations in two orthogonal directions of the Poincaré plot) carry similar information to the spectral power of the high and low frequency bands but have the advantage of easier calculation and less dependence on stationarity. ECG signals from metabolic syndrome (MetS) and control group patients during a tilt table test under controlled breathing (20 breaths/minute) were obtained. SD1, SD2, SDRR (standard deviation of RR intervals) and RMSSD (root mean square of successive differences of RR intervals) were evaluated for 31 control group and 33 MetS subjects. Statistically significant lower values were observed in MetS patients in the supine position (SD1: p=0.03, SD2: p=0.002, SDRR: p=0.006, RMSSD: p=0.01) and during tilt (SD2: p=0.004, SDRR: p=0.007). SD1 and SD2, combining the advantages of time and frequency domain methods, distinguish successfully between MetS and control subjects. Copyright © 2016 Elsevier Inc. All rights reserved.
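The quantities compared in this study follow from standard identities on the RR-interval series (SD1² = var(ΔRR)/2 and SD2² = 2·SDRR² − SD1²). A short sketch with an invented RR series:

```python
import numpy as np

def poincare_indexes(rr):
    """SD1/SD2 of the Poincaré plot plus SDRR and RMSSD from an RR series (ms),
    via the identities SD1^2 = var(diff)/2 and SD2^2 = 2*SDRR^2 - SD1^2."""
    rr = np.asarray(rr, dtype=float)
    d = np.diff(rr)
    sd1 = np.sqrt(np.var(d, ddof=1) / 2)
    sdrr = np.std(rr, ddof=1)
    sd2 = np.sqrt(2 * sdrr**2 - sd1**2)
    rmssd = np.sqrt(np.mean(d**2))       # note SD1 is approximately RMSSD / sqrt(2)
    return sd1, sd2, sdrr, rmssd

rr = [812, 790, 845, 830, 805, 870, 822, 841, 799, 835]   # invented RR intervals
print(poincare_indexes(rr))
```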
NASA Astrophysics Data System (ADS)
Issaadi, N.; Hamami, A. A.; Belarbi, R.; Aït-Mokhtar, A.
2017-10-01
In this paper, spatial variabilities of some transfer and storage properties of a concrete wall were assessed. The studied parameters are water porosity, water vapor permeability, intrinsic permeability and water vapor sorption isotherms. For this purpose, a concrete wall was built in the laboratory and specimens were periodically taken and tested. The obtained results provide, for each parameter, statistical estimates of the mean value, the standard deviation and the spatial correlation length of the studied fields. These results were discussed and a statistical analysis was performed in order to identify the appropriate probability density function for each parameter.
Computer Programs for the Semantic Differential: Further Modifications.
ERIC Educational Resources Information Center
Lawson, Edwin D.; And Others
The original nine programs for semantic differential analysis have been condensed into three programs which have been further refined and augmented. They yield: (1) means, standard deviations, and standard errors for each subscale on each concept; (2) Evaluation, Potency, and Activity (EPA) means, standard deviations, and standard errors; (3)…
Determining a one-tailed upper limit for future sample relative reproducibility standard deviations.
McClure, Foster D; Lee, Jung K
2006-01-01
A formula was developed to determine a one-tailed 100p% upper limit for future sample percent relative reproducibility standard deviations (RSD(R),% = 100·s(R)/ȳ), where s(R) is the sample reproducibility standard deviation, i.e., the square root of the sum of the sample repeatability variance (s(r)²) and the sample laboratory-to-laboratory variance (s(L)²), s(R) = [s(r)² + s(L)²]^(1/2), and ȳ is the sample mean. The future RSD(R),% is expected to arise from a population of potential RSD(R),% values whose true mean is ζ(R),% = 100·σ(R)/μ, where σ(R) and μ are the population reproducibility standard deviation and mean, respectively.
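In plain numbers, the quantity being bounded is straightforward to compute once both variance components are known (the values below are illustrative only):

```python
import math

s_r, s_L, ybar = 2.1, 3.4, 58.0     # illustrative repeatability SD, between-lab SD, mean
s_R = math.sqrt(s_r**2 + s_L**2)    # reproducibility SD
rsd_R = 100 * s_R / ybar            # percent relative reproducibility SD
print(f"s_R = {s_R:.2f}, RSD_R = {rsd_R:.1f}%")
```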
Spatial and temporal variability of the Aridity Index in Greece
NASA Astrophysics Data System (ADS)
Nastos, Panagiotis T.; Politi, Nadia; Kapsomenakis, John
2013-01-01
The objective of this paper is to study the spatial and temporal variability of the Aridity Index (AI) in Greece, per decade, during the 50-year period 1951-2000. In addition, the projected changes in ensemble mean AI between the period 1961-1990 (reference period) and the periods 2021-2050 (near future) and 2071-2100 (far future), along with the inter-model standard deviations, are presented, based on simulation results derived from a number of Regional Climate Models (RCMs) within the ENSEMBLES European Project. The projection of the future climate was done under SRES A1B. The climatic data used concern monthly precipitation totals and air temperature from 28 meteorological stations (22 stations from the Hellenic National Meteorological Service and 6 stations from neighboring countries, taken from the Monthly Climatic Data for the World). The estimation of the AI was carried out based on the potential evapotranspiration (PET) defined by Thornthwaite (1948). The data processing was done with the statistical package R and Geographical Information Systems (GIS). The results of the analysis showed that, within the examined period (1951-2000), a progressive shift from the "humid" class, which characterized the wider area of Greece, towards the "sub-humid" and "semi-arid" classes appeared in eastern Crete, the Cyclades complex, Evia and Attica, that is, mainly eastern Greece. The most significant change appears during the period 1991-2000. The future projections at the end of the twenty-first century, using ensemble mean simulations from 8 RCMs, show that drier conditions are expected to establish in regions of Greece (Attica, eastern continental Greece, Cyclades, Dodecanese, eastern Crete and the northern Aegean). The inter-model standard deviation over these regions ranges from 0.02 to 0.05, against high values (0.09-0.15) in western mountainous continental Greece, during 2021-2050. Higher values of inter-model standard deviation appear in 2071-2100, ranging from 0.02 to 0.10 and reaching even 0.50 over mountainous regions of the country.
Hydrological Dynamics of Central America: Time-of-Emergence of the Global Warming Signal
NASA Astrophysics Data System (ADS)
Imbach, P. A.; Georgiou, S.; Calderer, L.; Coto, A.; Nakaegawa, T.; Chou, S. C.; Lyra, A. A.; Hidalgo, H. G.; Ciais, P.
2016-12-01
Central America is among the world's most vulnerable regions to climate variability and change. Country economies are highly dependent on the agricultural sector, and over 40 million people's rural livelihoods directly depend on the use of natural resources. Future climate scenarios show a drier outlook (higher temperatures and lower precipitation) over a region where rural livelihoods are already compromised by water availability and climate variability. Previous efforts to validate modelling of the regional hydrology have been based on high-resolution (1 km2) equilibrium models (Imbach et al., 2010) or dynamic models (Variable Infiltration Capacity) with coarse climate forcing (0.5°) (Hidalgo et al., 2013; Maurer et al., 2009). We present here: (i) validation of the hydrological outputs from high-resolution simulations (10 km2) of a dynamic vegetation model (Orchidee), using 7 different sets of model input forcing data, with monthly runoff observations from 182 catchments across Central America; (ii) the first assessments of the region's hydrological variability using the historical simulations; and (iii) an estimation of the time of emergence of the climate change signal (under the SRES emission scenarios) on the water balance. We found model performance to be comparable with that from studies in other world regions (Yang et al. 2016) when forced with high-resolution precipitation data (monthly values at 5 km2, Funk et al. (2015)) and the Climatic Research Unit (CRU 3.2, Harris et al. (2014)) dataset of meteorological parameters. Validation results showed a Pearson correlation coefficient of ≈ 0.6, a general underestimation of runoff of ≈ 60%, and variability close to observed values (ratio of standard deviations ≈ 0.7). Maps of historical runoff are presented to show areas where high runoff variability follows high mean annual runoff, with opposite trends over the Caribbean. Future scenarios show large areas where future maximum water availability will always fall below minus-one standard deviation of the historical values by mid-century. Additionally, our results highlight the time horizon left to develop adaptation strategies to cope with future reductions in water availability.
Barbado, David; Moreside, Janice; Vera-Garcia, Francisco J
2017-03-01
Although unstable seat methodology has been used to assess trunk postural control, the reliability of the variables that characterize it remains unclear. To analyze the reliability and learning effect of center of pressure (COP) and kinematic parameters that characterize trunk postural control performance in unstable seating. The relationships between kinematic and COP parameters were also explored. Test-retest reliability design. Biomechanics laboratory setting. Twenty-three healthy male subjects. Participants volunteered to perform 3 sessions at 1-week intervals, each consisting of five 70-second balancing trials. A force platform and a motion capture system were used to measure COP and pelvis, thorax, and spine displacements. Reliability was assessed through the standard error of measurement (SEM) and intraclass correlation coefficients (ICC(2,1)) using 3 methods: (1) comparing the last trial score of each day; (2) comparing the best trial score of each day; and (3) calculating the average of the three last trial scores of each day. Standard deviation and mean velocity were calculated to assess balance performance. Although analyses of variance showed some differences in balance performance between days, these differences were not significant between days 2 and 3. The best-result and average methods showed the greatest reliability. Mean velocity of the COP showed high reliability (0.71 < ICC < 0.86; 10.3 < SEM < 13.0), whereas standard deviation showed only low to moderate reliability (0.37 < ICC < 0.61; 14.5 < SEM < 23.0). Regarding the kinematic variables, only pelvis displacement mean velocity achieved high reliability using the average method (0.62 < ICC < 0.83; 18.8 < SEM < 23.1). Correlations between COP and kinematics were high only for mean velocity (0.45
Emergence of the significant local warming of Korea in CMIP5 projections
NASA Astrophysics Data System (ADS)
Boo, Kyung-On; Shim, Sungbo; Kim, Jee-Eun
2016-04-01
According to IPCC AR5, anthropogenic influence on warming is evident at local scales, especially in some tropical regions. Detection of significant local warming is important for the adaptation of society and ecosystems to climate change. Recently much attention has focused on the time of emergence (ToE) of the anthropogenic climate change signal against natural climate variability. Motivated by previous studies, this study analyzes the ToE of regional surface air temperature over Korea. Simulations from 15 CMIP5 models are used for RCP 2.6, 4.5 and 8.5. For each year, JJA and DJF temperature anomalies are calculated relative to the reference period 1900-1929. For the noise of interannual variability, natural-only historical simulations from 12 CMIP5 models are used and the standard deviation of the time series is obtained. For the warming signal, we examine the year when a signal above 2 standard deviations is detected in 80% of the models, using 30-year smoothed time series. According to our results, interannual variability is larger over land than ocean. Seasonally, it is larger in winter than in summer. Accordingly, the ToE of summertime temperature is earlier than that of winter and is expected to appear in the 2030s under all three RCPs. The seasonal difference is consistent with previous studies. Wintertime ToE appears in the 2040s for RCP8.5 and the 2060s for RCP4.5. The different emergence times between RCP8.5 and RCP4.5 reflect the influence of mitigation. In a similar way, daily maximum and minimum temperatures are analyzed. The ToE of Tmin appears earlier than that of Tmax, and the difference is small. Acknowledgements. This study is supported by the National Institute of Meteorological Sciences, Korea Meteorological Administration (NIMR-2012-B-2).
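A simple reading of this ToE definition, the first year after which the 30-year smoothed anomaly permanently exceeds 2 standard deviations of natural variability, can be sketched as follows (toy signal, not CMIP5 output):

```python
import numpy as np

def time_of_emergence(years, anomaly, sigma_nat, k=2.0, window=30):
    """Return the first year after which the `window`-year smoothed anomaly
    stays above k * sigma_nat for good; a simple reading of the ToE
    definition sketched in the abstract."""
    smooth = np.convolve(anomaly, np.ones(window) / window, mode="valid")
    yrs = years[window - 1:]
    above = smooth > k * sigma_nat
    for i, yr in enumerate(yrs):
        if above[i:].all():                 # threshold exceeded permanently
            return int(yr)
    return None

years = np.arange(1900, 2101)
anomaly = 0.02 * np.maximum(years - 1960, 0)     # toy warming signal (deg C)
print(time_of_emergence(years, anomaly, sigma_nat=0.4))   # -> 2015 for this toy case
```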
Fetal heart rate and fetal heart rate variability in Lipizzaner broodmares.
Baska-Vincze, Boglárka; Baska, Ferenc; Szenci, Ottó
2015-03-01
Monitoring fetal heart rate (FHR) and fetal heart rate variability (FHRV) helps to understand and evaluate normal and pathological conditions in the foal. The aim of this study was to establish normal heart rate reference values for the ongoing equine pregnancy and to perform a heart rate variability (HRV) time-domain analysis in Lipizzaner mares. Seventeen middle- and late-term (days 121-333) pregnant Lipizzaner mares were examined using fetomaternal electrocardiography (ECG). The mean FHR (P = 0.004) and the standard deviation of FHR (P = 0.012) significantly decreased during the pregnancy. FHR ± SD values decreased from 115 ± 35 to 79 ± 9 bpm between months 5 and 11. Our data showed that HRV in the foal decreased as the pregnancy progressed, which is in contrast with the findings of earlier equine studies. The standard deviation of normal-normal intervals (SDNN) was higher (70 ± 25 to 166 ± 108 msec) than described previously. The root mean square of successive differences (RMSSD) decreased from 105 ± 69 to 77 ± 37 msec between the 5th and 11th month of gestation. Using telemetric ECG equipment, we could detect equine fetal heartbeat on day 121 for the first time. In addition, the large differences observed in the HR values of four mare-fetus pairs in four consecutive months support the assumption that there might be 'high-HR' and 'low-HR' fetuses in horses. It can be concluded that the analysis of FHR and FHRV is a promising tool for the assessment of fetal well-being but the applicability of these parameters in the clinical setting and in studs requires further investigation.
Ullman, Karen L; Ning, Holly; Susil, Robert C; Ayele, Asna; Jocelyn, Lucresse; Havelos, Jan; Guion, Peter; Xie, Huchen; Li, Guang; Arora, Barbara C; Cannon, Angela; Miller, Robert W; Norman Coleman, C; Camphausen, Kevin; Ménard, Cynthia
2006-01-01
Background We sought to determine the intra- and inter-radiation therapist reproducibility of a previously established matching technique for daily verification and correction of isocenter position relative to intraprostatic fiducial markers (FM). Materials and methods With the patient in the treatment position, anterior-posterior and left lateral electronic images are acquired on an amorphous silicon flat panel electronic portal imaging device. After each portal image is acquired, the therapist manually translates and aligns the fiducial markers in the image to the marker contours on the digitally reconstructed radiograph. The distance between the planned and actual isocenter locations is displayed. In order to determine the reproducibility of this technique, four therapists repeated and recorded this operation two separate times on 20 previously acquired portal image datasets from two patients. The data were analyzed to obtain the mean variability in the distances measured between and within observers. Results The mean and median intra-observer variability ranged from 0.4 to 0.7 mm and 0.3 to 0.6 mm, respectively, with a standard deviation of 0.4 to 1.0 mm. Inter-observer results were similar, with a mean variability of 0.9 mm, a median of 0.6 mm, and a standard deviation of 0.7 mm. When using a 5 mm threshold, only 0.5% of treatments will undergo a table shift due to intra- or inter-observer error, increasing to an error rate of 2.4% if this threshold were reduced to 3 mm. Conclusion We have found high reproducibility with a previously established method for daily verification and correction of isocenter position relative to prostatic fiducial markers using electronic portal imaging. PMID:16722575
Schmidt, A; Biau, S; Möstl, E; Becker-Birck, M; Morillon, B; Aurich, J; Faure, J-M; Aurich, C
2010-04-01
It is widely accepted that transport is stressful for horses, but only a few studies are available involving horses that are transported regularly and are accustomed to transport. We determined salivary cortisol immunoreactivity (IR), fecal cortisol metabolites, beat-to-beat (RR) interval, and heart rate variability (HRV) in transport-experienced horses (N=7) in response to a 2-d outbound road transport over 1370 km and 2-d return transport 8 d later. Salivary cortisol IR was low until 60 min before transport but had increased (P<0.05) 30 min before loading. Transport caused a further marked increase (P<0.001), but the response tended to decrease with each day of transport. Concentrations of fecal cortisol metabolites increased on the second day of both outbound and return transports and reached a maximum the following day (P<0.001). During the first 90 min on Day 1 of outbound transport, mean RR interval decreased (P<0.001). Standard deviations of RR interval (SDRR) decreased transiently (P<0.01). The root mean square of successive RR differences (RMSSD) decreased at the beginning of the outbound and return transports (P<0.01), reflecting reduced parasympathetic tone. On the first day of both outbound and return transports, a transient rise in geometric HRV variable standard deviation 2 (SD2) occurred (P<0.01), indicating increased sympathetic activity. In conclusion, transport of experienced horses leads to increased cortisol release and changes in heart rate and HRV, which is indicative of stress. The degree of these changes tended to be most pronounced on the first day of both outbound and return transport. Copyright 2009 Elsevier Inc. All rights reserved.
Autonomic modulation of arterial pressure and heart rate variability in hypertensive diabetic rats.
Farah, Vera de Moura Azevedo; De Angelis, Kátia; Joaquim, Luis Fernando; Candido, Georgia O; Bernardes, Nathalia; Fazan, Rubens; Schaan, Beatriz D'Agord; Irigoyen, Maria-Claudia
2007-08-01
The aim of the present study was to evaluate the autonomic modulation of the cardiovascular system in streptozotocin (STZ)-induced diabetic spontaneously hypertensive rats (SHR), evaluating baroreflex sensitivity and arterial pressure and heart rate variability. Male SHR were divided into control (SHR) and diabetic (SHR+DM, 5 days after STZ) groups. Arterial pressure (AP) and baroreflex sensitivity (evaluated by tachycardic and bradycardic responses to changes in AP) were monitored. Autoregressive spectral estimation was performed for systolic AP (SAP) and pulse interval (PI), with oscillatory components quantified in the low (LF: 0.2-0.6 Hz) and high (HF: 0.6-3.0 Hz) frequency ranges. Mean AP and heart rate in SHR+DM (131+/-3 mmHg and 276+/-6 bpm) were lower than in SHR (160+/-7 mmHg and 330+/-8 bpm). Baroreflex bradycardia was lower in SHR+DM than in SHR (0.55+/-0.1 vs. 0.97+/-0.1 bpm/mmHg). Overall SAP variability in the time domain (standard deviation of beat-by-beat time series of SAP) was lower in SHR+DM (3.1+/-0.2 mmHg) than in SHR (5.7+/-0.6 mmHg). The standard deviation of the PI was similar between groups. Diabetes reduced the LF power of SAP (3.3+/-0.8 vs. 28.7+/-7.6 mmHg2 in SHR), while the HF power of SAP was unchanged. The power of the oscillatory components of PI did not differ between groups. These results show that the association of hypertension and diabetes causes an impairment of peripheral cardiovascular sympathetic modulation that could be, at least in part, responsible for the reduction in AP levels. Moreover, this study demonstrates that diabetes might further impair the already reduced buffering function of the baroreceptors while reducing blood pressure.
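The band definitions above make the spectral step easy to reproduce approximately. The sketch below integrates a Welch periodogram over the LF and HF ranges; Welch's method stands in for the autoregressive estimator actually used in the study, and the series is synthetic:

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, lo, hi):
    """Integrate the PSD over [lo, hi) Hz; Welch's periodogram stands in
    for the autoregressive spectral estimator used in the study."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 256))
    band = (f >= lo) & (f < hi)
    return float(np.trapz(pxx[band], f[band]))

fs = 10.0                                   # resampled SAP series rate, Hz (our choice)
t = np.arange(0, 300, 1 / fs)
rng = np.random.default_rng(3)
sap = 130 + 2 * np.sin(2 * np.pi * 0.4 * t) + rng.normal(0, 1, t.size)  # synthetic SAP

print("LF power (0.2-0.6 Hz):", band_power(sap, fs, 0.2, 0.6))
print("HF power (0.6-3.0 Hz):", band_power(sap, fs, 0.6, 3.0))
```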
Howard, Charla L; Wallace, Chris; Abbas, James; Stokic, Dobrivoje S
2017-01-01
We developed and evaluated properties of a new measure of variability in stride length and cadence, termed residual standard deviation (RSD). To calculate RSD, stride length and cadence are regressed against velocity to derive the best fit line, from which the variability (SD) of the distances between the actual and predicted data points is calculated. We examined construct, concurrent, and discriminative validity of RSD using a dual-task paradigm in 14 below-knee prosthesis users and 13 age- and education-matched controls. Subjects first walked over an electronic walkway while separately performing a serial subtraction task and a backwards spelling task, and then walked at self-selected slow, normal, and fast speeds used to derive the best fit line for stride length and cadence against velocity. Construct validity was demonstrated by a significantly greater increase in RSD during dual-task gait in prosthesis users than controls (group-by-condition interaction, stride length p=0.0006, cadence p=0.009). Concurrent validity was established against the coefficient of variation (CV) by moderate-to-high correlations (r=0.50-0.87) between dual-task cost RSD and dual-task cost CV for both stride length and cadence in prosthesis users and controls. Discriminative validity was documented by the ability of the dual-task cost calculated from RSD to effectively differentiate prosthesis users from controls (area under the receiver operating characteristic curve, stride length 0.863, p=0.001, cadence 0.808, p=0.007), which was better than the ability of the dual-task cost CV (0.692, 0.648, respectively, not significant). These results validate RSD as a new measure of variability in below-knee prosthesis users. Future studies should include larger cohorts and other populations to ascertain its generalizability. Copyright © 2016 Elsevier B.V. All rights reserved.
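To make the RSD calculation described above concrete, here is a minimal sketch in Python. It follows the abstract's definition (regress stride length on velocity, then take the SD of the residuals); the function name and the example data are illustrative, not from the study.

```python
# Minimal sketch of the residual standard deviation (RSD) idea described above.
import numpy as np

def residual_standard_deviation(velocity, stride_length):
    """Regress stride length on velocity and return the SD of the residuals."""
    slope, intercept = np.polyfit(velocity, stride_length, 1)   # best fit line
    predicted = slope * np.asarray(velocity) + intercept
    residuals = np.asarray(stride_length) - predicted           # actual minus predicted
    return np.std(residuals, ddof=1)                            # sample SD

# Made-up walking trials at slow, normal, and fast speeds:
v = np.array([0.8, 1.0, 1.2, 1.1, 0.9, 1.3])         # velocity, m/s
sl = np.array([1.10, 1.25, 1.38, 1.33, 1.18, 1.45])  # stride length, m
print(residual_standard_deviation(v, sl))
```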
NASA Astrophysics Data System (ADS)
Halkides, D. J.; Waliser, Duane E.; Lee, Tong; Menemenlis, Dimitris; Guan, Bin
2015-02-01
Spatial and temporal variation of processes that determine ocean mixed-layer (ML) temperature (MLT) variability on the timescale of the Madden-Julian Oscillation (MJO) in the Tropical Indian Ocean (TIO) are examined in a heat-conserving ocean state estimate for years 1993-2011. We introduce a new metric for representing spatial variability of the relative importance of processes. In general, horizontal advection is most important at the Equator. Subsurface processes and surface heat flux are more important away from the Equator, with surface heat flux being the more dominant factor. Analyses at key sites are discussed in the context of local dynamics and literature. At 0°, 80.5°E, for MLT events > 2 standard deviations, ocean dynamics account for more than two thirds of the net tendency during cooling and warming phases. Zonal advection alone accounts for ~40% of the net tendency. Moderate events (1-2 standard deviations) show more differences between events, and some are dominated by surface heat flux. At 8°S, 67°E in the Seychelles-Chagos Thermocline Ridge (SCTR) area, surface heat flux accounts for ~70% of the tendency during strong cooling and warming phases; subsurface processes linked to ML depth (MLD) deepening (shoaling) during cooling (warming) account for ~30%. MLT is more sensitive to subsurface processes in the SCTR, due to the thin MLD, thin barrier layer and raised thermocline. Results for 8°S, 67°E support assertions by Vialard et al. (2008) not previously confirmed due to measurement error that prevented budget closure and the small number of events studied. The roles of MLD, barrier layer thickness, and thermocline depth on different timescales are examined.
Inter- and intra-observer variation in soft-tissue sarcoma target definition.
Roberge, D; Skamene, T; Turcotte, R E; Powell, T; Saran, N; Freeman, C
2011-08-01
To evaluate inter- and intra-observer variability in gross tumor volume definition for adult limb/trunk soft tissue sarcomas. Imaging studies of 15 patients previously treated with preoperative radiation were used in this study. Five physicians (radiation oncologists, orthopedic surgeons and a musculoskeletal radiologist) were asked to contour each of the 15 tumors on T1-weighted, gadolinium-enhanced magnetic resonance images. These contours were drawn twice by each physician. The volume and center of mass coordinates for each gross tumor volume were extracted and a Boolean analysis was performed to measure the degree of volume overlap. The median standard deviation in gross tumor volumes across observers was 6.1% of the average volume (range: 1.8%-24.9%). There was remarkably little variation in the 3D position of the gross tumor volume center of mass. For the 15 patients, the standard deviation of the 3D distance between centers of mass ranged from 0.06 mm to 1.7 mm (median 0.1 mm). Boolean analysis demonstrated that 53% to 90% of the gross tumor volume was common to all observers (median overlap: 79%). The standard deviation in gross tumor volumes on repeat contouring was 4.8% (range: 0.1%-14.4%) with a standard deviation change in the position of the center of mass of 0.4 mm (range: 0 mm-2.6 mm) and a median overlap of 93% (range: 73%-98%). Although significant inter-observer differences were seen in gross tumor volume definition of adult soft-tissue sarcoma, the center of mass of these volumes was remarkably consistent. Variations in volume definition did not correlate with tumor size. Radiation oncologists should not hesitate to review their contours with a colleague (surgeon, radiologist or fellow radiation oncologist) to ensure that they are not outliers in sarcoma gross tumor volume definition. Protocols should take into account variations in volume definition when considering tighter clinical target volumes. Copyright © 2011 Société française de radiothérapie oncologique (SFRO). Published by Elsevier SAS. All rights reserved.
Linn, Kristin A; Gaonkar, Bilwaj; Satterthwaite, Theodore D; Doshi, Jimit; Davatzikos, Christos; Shinohara, Russell T
2016-05-15
Normalization of feature vector values is a common practice in machine learning. Generally, each feature value is standardized to the unit hypercube or by normalizing to zero mean and unit variance. Classification decisions based on support vector machines (SVMs) or by other methods are sensitive to the specific normalization used on the features. In the context of multivariate pattern analysis using neuroimaging data, standardization effectively up- and down-weights features based on their individual variability. Since the standard approach uses the entire data set to guide the normalization, it utilizes the total variability of these features. This total variation is inevitably dependent on the amount of marginal separation between groups. Thus, such a normalization may attenuate the separability of the data in high dimensional space. In this work we propose an alternate approach that uses an estimate of the control-group standard deviation to normalize features before training. We study our proposed approach in the context of group classification using structural MRI data. We show that control-based normalization leads to better reproducibility of estimated multivariate disease patterns and improves the classifier performance in many cases. Copyright © 2016 Elsevier Inc. All rights reserved.
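A minimal sketch of the control-based normalization idea described above: every feature is scaled by the mean and standard deviation estimated from the control group only, rather than from the full sample. The function and array names are illustrative, not from the paper.

```python
# Hedged sketch: z-score features using control-group statistics only.
import numpy as np

def control_normalize(X, is_control):
    """Standardize columns of X with mean/SD computed on control rows only."""
    controls = X[is_control]
    mu = controls.mean(axis=0)
    sd = controls.std(axis=0, ddof=1)
    sd[sd == 0] = 1.0                      # guard against constant features
    return (X - mu) / sd

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))              # 100 subjects, 5 imaging features
is_control = np.arange(100) < 50           # first 50 subjects are controls
X_norm = control_normalize(X, is_control)  # train a classifier on X_norm
```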
Stewart, J M
2000-02-01
Invasive arterial monitoring alters autonomic tone. The effects of intravenous (i.v.) insertion are less clear. The author assessed the effects of i.v. insertion on autonomic activity in patients aged 11 to 19 years prior to head-up tilt by measuring heart rate, blood pressure, heart rate variability, blood pressure variability, and baroreceptor gain before and after i.v. insertion with continuous electrocardiography and arterial tonometry in patients with orthostatic tachycardia syndrome (OTS, N = 21), in patients who experienced simple fainting (N = 14), and in normal control subjects (N = 6). Five-minute samples were collected after 30 minutes supine. Fifteen minutes after i.v. insertion, data were collected again. These 5-minute samples were also collected in a separate control population without i.v. insertion after 30 minutes supine and again 30 minutes later. This population included 12 patients with OTS, 13 patients who experienced simple fainting, and 6 normal control subjects. Heart rate variability included the mean RR, the standard deviation of the RR interval (SDNN), and the root mean square of successive RR differences (RMSSD). Autoregressive spectral modeling was used. Low-frequency power (LFP, 0.04-0.15 Hz), high-frequency power (HFP, 0.15-0.40 Hz), and total power (TP, 0.01-0.40 Hz) were compared. Blood pressure variability included standard deviation of systolic blood pressure, LFP, and HFP. Baroreceptor gain at low frequency and high frequency was calculated from cross-spectral transfer function magnitudes when coherence was greater than 0.5. In patients with OTS, RR (790 +/- 50 msec), SDNN (54 +/- 6 msec), RMSSD (55 +/- 5 msec), LFP (422 +/- 200 ms2/Hz), HFP (846 +/- 400 ms2/Hz), and TP (1550 +/- 320 ms2/Hz) were less than in patients who experienced simple fainting (RR, 940 +/- 50 msec; SDNN, 84 +/- 10 msec; RMSSD, 91 +/- 7 msec; LFP, 880 +/- 342 ms2/Hz; HFP, 1720 +/- 210 ms2/Hz; and TP, 3228 +/- 490 ms2/Hz) or normal control subjects (RR, 920 +/- 30 msec; SDNN, 110 +/- 29 msec; RMSSD, 120 +/- 16 msec; LFP, 1600 +/- 331 ms2/Hz; HFP, 2700 +/- 526 ms2/Hz; and TP, 5400 +/- 1017 ms2/Hz). Blood pressure and blood pressure variability were not different in any group. Standard deviation, LFP, and HFP were, respectively, 5.24 +/- 0.8 mm Hg, 1.2 +/- 0.2, and 1.5 +/- 0.3 for patients with OTS; 4.6 +/- 0.4 mm Hg, 1.2 +/- 0.2, and 1.4 +/- 0.3 for patients who experienced simple fainting; and 5.55 +/- 1.0 mm Hg, 1.4 +/- 0.2, and 1.6 +/- 0.3 for normal control subjects. Baroreceptor gain at low frequency and high frequency in patients with OTS (16 +/- 4 msec/mm Hg, 17 +/- 5) was decreased compared with that in patients who experienced simple fainting (33 +/- 4, 32 +/- 3) and that in normal control subjects (31 +/- 8, 37 +/- 9). Heart rate variability differed between patients with OTS and patients who experienced simple fainting or normal control subjects, and blood pressure and blood pressure variability were not different, but no parameter changed after i.v. insertion. There were no differences from the groups that did not receive i.v. insertions. Data suggest, at most, a limited effect of i.v. insertion on autonomic function in adolescents.
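The time-domain HRV indices used in this and several surrounding abstracts (mean RR, SDNN, RMSSD) are simple to compute from a beat-to-beat RR series; a short illustrative sketch, with an invented RR series in milliseconds:

```python
# Standard time-domain HRV indices from an RR-interval series (ms).
import numpy as np

def time_domain_hrv(rr_ms):
    rr = np.asarray(rr_ms, dtype=float)
    mean_rr = rr.mean()                          # mean RR interval
    sdnn = rr.std(ddof=1)                        # SD of all RR intervals
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))   # root mean square of successive differences
    return mean_rr, sdnn, rmssd

rr = [812, 798, 830, 845, 805, 790, 820, 835]    # made-up RR intervals, ms
print(time_domain_hrv(rr))
```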
Packing Fraction of a Two-dimensional Eden Model with Random-Sized Particles
NASA Astrophysics Data System (ADS)
Kobayashi, Naoki; Yamazaki, Hiroshi
2018-01-01
We have performed a numerical simulation of a two-dimensional Eden model with random-size particles. In the present model, the particle radii are generated from a Gaussian distribution with mean μ and standard deviation σ. First, we have examined the bulk packing fraction for the Eden cluster and investigated the effects of the standard deviation and the total number of particles NT. We show that the bulk packing fraction depends on the number of particles and the standard deviation. In particular, for the dependence on the standard deviation, we have determined the asymptotic value of the bulk packing fraction in the limit of the dimensionless standard deviation. This value is larger than the packing fraction obtained in a previous study of the Eden model with uniform-size particles. Secondly, we have investigated the packing fraction of the entire Eden cluster including the effect of the interface fluctuation. We find that the entire packing fraction depends on the number of particles while it is independent of the standard deviation, in contrast to the bulk packing fraction. In a similar way to the bulk packing fraction, we have obtained the asymptotic value of the entire packing fraction in the limit NT → ∞. The obtained value of the entire packing fraction is smaller than that of the bulk value. This fact suggests that the interface fluctuation of the Eden cluster influences the packing fraction.
Complexities of follicle deviation during selection of a dominant follicle in Bos taurus heifers.
Ginther, O J; Baldrighi, J M; Siddiqui, M A R; Araujo, E R
2016-11-01
Follicle deviation during a follicular wave is a continuation in growth rate of the dominant follicle (F1) and decreased growth rate of the largest subordinate follicle (F2). The reliability of using an F1 of 8.5 mm to represent the beginning of expected deviation for experimental purposes during waves 1 and 2 (n = 26 per wave) was studied daily in heifers. Each wave was subgrouped as follows: standard subgroup (F1 larger than F2 for 2 days preceding deviation and F2 > 7.0 mm on the day of deviation), undersized subgroup (F2 did not attain 7.0 mm by the day of deviation), and switched subgroup (F2 larger than F1 at least once on the 2 days before or on the day of deviation). For each wave, mean differences in diameter between F1 and F2 changed abruptly at expected deviation in the standard subgroup but began 1 day before expected deviation in the undersized and switched subgroups. Concentrations of FSH in the wave-stimulating FSH surge and an increase in LH centered on expected deviation did not differ among subgroups. Results for each wave indicated that (1) expected deviation (F1, 8.5 mm) was a reliable representation of actual deviation in the standard subgroup but not in the undersized and switched subgroups; (2) concentrations of the gonadotropins normalized to expected deviation were similar among the three subgroups, indicating that the day of deviation was related to diameter of F1 and not F2; and (3) defining an expected day of deviation for experimental use should consider both diameter of F1 and the characteristics of deviation. Copyright © 2016 Elsevier Inc. All rights reserved.
Quantitative characterization of color Doppler images: reproducibility, accuracy, and limitations.
Delorme, S; Weisser, G; Zuna, I; Fein, M; Lorenz, A; van Kaick, G
1995-01-01
A computer-based quantitative analysis for color Doppler images of complex vascular formations is presented. The red-green-blue-signal from an Acuson XP10 is frame-grabbed and digitized. By matching each image pixel with the color bar, color pixels are identified and assigned to the corresponding flow velocity (color value). Data analysis consists of delineation of a region of interest and calculation of the relative number of color pixels in this region (color pixel density) as well as the mean color value. The mean color value was compared to flow velocities in a flow phantom. The thyroid and carotid artery in a volunteer were repeatedly examined by a single examiner to assess intra-observer variability. The thyroids in five healthy controls were examined by three experienced physicians to assess the extent of inter-observer variability and observer bias. The correlation between the mean color value and flow velocity ranged from 0.94 to 0.96 for a range of velocities determined by pulse repetition frequency. The average deviation of the mean color value from the flow velocity was 22% to 41%, depending on the selected pulse repetition frequency (range of deviations, -46% to +66%). Flow velocity was underestimated with inadequately low pulse repetition frequency, or inadequately high reject threshold. An overestimation occurred with inadequately high pulse repetition frequency. The highest intra-observer variability was 22% (relative standard deviation) for the color pixel density, and 9.1% for the mean color value. The inter-observer variation was approximately 30% for the color pixel density, and 20% for the mean color value. In conclusion, computer assisted image analysis permits an objective description of color Doppler images. However, the user must be aware that image acquisition under in vivo conditions as well as physical and instrumental factors may considerably influence the results.
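A minimal sketch of the two image metrics defined above (color pixel density and mean color value), assuming the frame has already been matched against the color bar so each pixel holds a flow velocity, with NaN marking pixels without a color signal; all names are illustrative.

```python
# Sketch of color pixel density and mean color value within a region of interest.
import numpy as np

def color_doppler_metrics(velocity_map, roi_mask):
    roi = velocity_map[roi_mask]
    color = roi[~np.isnan(roi)]                       # pixels with a color (flow) signal
    color_pixel_density = color.size / roi.size       # fraction of color pixels in ROI
    mean_color_value = color.mean() if color.size else np.nan
    return color_pixel_density, mean_color_value

vm = np.full((8, 8), np.nan)
vm[2:6, 2:6] = 12.5                                   # toy vessel region, cm/s
mask = np.ones((8, 8), dtype=bool)                    # ROI covers the whole frame here
print(color_doppler_metrics(vm, mask))                # (0.25, 12.5)
```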
40 CFR 90.708 - Cumulative Sum (CumSum) procedure.
Code of Federal Regulations, 2010 CFR
2010-07-01
... is 5.0×σ, and is a function of the standard deviation, σ, where σ is the sample standard deviation and is... individual engine. FEL = Family Emission Limit (the standard if no FEL). F = 0.25×σ. (2) After each test pursuant...
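The excerpt above quotes an allowance F = 0.25×σ and an action limit of 5.0×σ. A generic one-sided CumSum statistic built from those quantities looks like the sketch below; this is the standard CUSUM recursion, not a verbatim transcription of 40 CFR 90.708, and the variable names are illustrative.

```python
# Generic one-sided CumSum sketch using the quantities quoted above:
# allowance F = 0.25*sigma, action limit H = 5.0*sigma.
def cumsum_statistic(emissions, fel, sigma):
    F, H = 0.25 * sigma, 5.0 * sigma
    c = 0.0
    for x in emissions:
        c = max(0.0, c + x - (fel + F))   # accumulate exceedances over FEL + F
        if c > H:
            return c, True                # action limit exceeded
    return c, False

print(cumsum_statistic([1.1, 1.3, 1.2, 1.4], fel=1.0, sigma=0.1))
```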
2015-01-01
The goal of this study was to analyse perceptually and acoustically the voices of patients with Unilateral Vocal Fold Paralysis (UVFP) and compare them to the voices of normal subjects. These voices were analysed perceptually with the GRBAS scale and acoustically using the following parameters: mean fundamental frequency (F0), standard deviation of F0, jitter (ppq5), shimmer (apq11), mean harmonics-to-noise ratio (HNR), mean first (F1) and second (F2) formant frequencies, and standard deviations of the F1 and F2 frequencies. Statistically significant differences were found in all of the perceptual parameters. Jitter, shimmer, HNR, the standard deviation of F0, and the standard deviation of the F2 frequency were also statistically different between groups, for both genders. In the male data, differences were also found in the F1 and F2 frequency values and in the standard deviation of the F1 frequency. This study allowed the documentation of the alterations resulting from UVFP and addressed the exploration of parameters with limited information for this pathology. PMID:26557690
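For orientation, the simplest of the acoustic measures above can be computed directly from a sequence of glottal period durations. The sketch below shows mean F0, the SD of F0, and a basic local jitter estimate; the ppq5/apq11 variants used in the paper apply 5- and 11-point averaging and are not reproduced here, and the period values are invented.

```python
# Illustrative F0 statistics and local jitter from glottal periods (s).
import numpy as np

periods = np.array([0.0080, 0.0082, 0.0079, 0.0081, 0.0083])  # made-up data
f0 = 1.0 / periods
mean_f0, sd_f0 = f0.mean(), f0.std(ddof=1)
jitter_local = np.mean(np.abs(np.diff(periods))) / periods.mean()  # relative period perturbation
print(mean_f0, sd_f0, jitter_local)
```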
NASA Astrophysics Data System (ADS)
Krasnenko, N. P.; Kapegesheva, O. F.; Shamanaeva, L. G.
2017-11-01
Spatiotemporal dynamics of the standard deviations of three wind velocity components measured with a mini-sodar in the atmospheric boundary layer are analyzed. During the day on September 16 and at night on September 12, values of the standard deviation ranged from 0.5 to 4 m/s for the x- and y-components and from 0.2 to 1.2 m/s for the z-component. An analysis of the vertical profiles of the standard deviations of the three wind velocity components for a 6-day measurement period has shown that the increase of σx and σy with altitude is well described by a power-law dependence with an exponent varying from 0.22 to 1.3 depending on the time of day, while σz depends linearly on altitude. The approximation constants have been found and their errors estimated. The established physical regularities and the approximation constants describe the spatiotemporal dynamics of the standard deviations of the three wind velocity components in the atmospheric boundary layer and can be recommended for application in ABL models.
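The power-law dependence σ(z) = a·z^b reported above can be fit with an ordinary log-log least-squares regression; a short sketch with an invented profile, where the fitted exponent b plays the role of the 0.22-1.3 values quoted in the abstract:

```python
# Fit sigma(z) = a * z**b by least squares in log-log space.
import numpy as np

z = np.array([50, 100, 150, 200, 250, 300])          # altitude, m (invented)
sigma_x = np.array([0.9, 1.3, 1.6, 1.9, 2.1, 2.3])   # SD of x wind component, m/s

b, log_a = np.polyfit(np.log(z), np.log(sigma_x), 1)
a = np.exp(log_a)
print(f"sigma_x(z) ~= {a:.3f} * z**{b:.2f}")
```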
A proof for Rhiel's range estimator of the coefficient of variation for skewed distributions.
Rhiel, G Steven
2007-02-01
This study proves that the coefficient of variation (CV(high-low)) calculated from the highest and lowest values in a set of data is applicable to specific skewed distributions with varying means and standard deviations. Earlier, Rhiel provided values for d(n), the standardized mean range, and a(n), an adjustment for bias in the range estimator of μ. These values are used in estimating the coefficient of variation from the range for skewed distributions. The d(n) and a(n) values were specified for specific skewed distributions with a fixed mean and standard deviation. This proof shows that the d(n) and a(n) values are applicable for the specific skewed distributions when the mean and standard deviation can take on differing values. This gives the researcher confidence in using this statistic for skewed distributions regardless of the mean and standard deviation.
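A hedged sketch of how a range-based CV estimate of this kind is assembled: the SD is estimated as range/d(n) (the classic standardized-range relationship), and a(n) adjusts the range-based estimate of μ for bias. The placement of a(n) on the midrange and the constant values below are illustrative placeholders; the actual d(n) and a(n) values for skewed distributions come from Rhiel's earlier tables, which are not reproduced here.

```python
# Illustrative range-based CV estimate in the spirit of CV(high-low).
import numpy as np

def cv_high_low(x, d_n, a_n):
    x = np.asarray(x, dtype=float)
    high, low = x.max(), x.min()
    sd_hat = (high - low) / d_n           # range-based SD estimate
    mu_hat = a_n * (high + low) / 2.0     # bias-adjusted midrange (illustrative placement)
    return sd_hat / mu_hat

data = [12.1, 15.3, 9.8, 20.4, 11.7, 14.2]
print(cv_high_low(data, d_n=2.53, a_n=1.0))   # d_n = 2.53 approximates the normal case for n = 6
```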
Sales, Allan R K; Silva, Bruno M; Neves, Fabricia J; Rocha, Natália G; Medeiros, Renata F; Castro, Renata R T; Nóbrega, Antonio C L
2012-09-01
Although mortality from heart disease has been decreasing, the decline in deaths among women remains smaller than in men. Hypertension (HT) is a major risk factor for cardiovascular disease. Therefore, approaches to prevent or delay the onset of HT would be valuable in women. Given this background, we investigated the effect of diet and exercise training on blood pressure (BP) and autonomic modulation in women with prehypertension (PHT). Ten women with PHT (39 ± 6 years, mean ± standard deviation) and ten with normotension (NT) (35 ± 11 years) underwent diet and exercise training for 12 weeks. Autonomic modulation was assessed through heart rate (HR) and systolic BP (SBP) variability, using time and frequency domain analyses. At preintervention, women with PHT had higher SBP (PHT: 128 ± 7 vs. NT: 111 ± 6 mmHg, p < 0.05) and lower HR variability [standard deviation of normal-to-normal beats (SDNN), PHT: 41 ± 18 vs. NT: 60 ± 19 ms, p < 0.05]. At post-intervention, peak oxygen consumption and muscular strength increased (p < 0.05), while body mass index decreased in both groups (p < 0.05). However, SBP decreased (118 ± 8 mmHg, p < 0.05 vs. preintervention) and total HR variability tended to increase (total power: 1,397 ± 570 vs. 2,137 ± 1,110 ms(2), p = 0.08) only in the group with PHT; consequently, HR variability became similar between groups at post-intervention (p > 0.05). Moreover, reduction in SBP was associated with augmentation in SDNN (r = -0.46, p < 0.05) and reduction in low-frequency power [LF (n.u.); r = 0.46, p < 0.05]. In conclusion, diet and exercise training reduced SBP in women with PHT, and this was associated with augmentation in parasympathetic and probably reduction in sympathetic cardiac modulation.
Future Warming Increases Global Maize Yield Variability with Implications for Food Markets
NASA Astrophysics Data System (ADS)
Tigchelaar, M.; Battisti, D. S.; Naylor, R. L.; Ray, D. K.
2017-12-01
If current trends in population growth and dietary shifts continue, the world will need to produce about 70% more food by 2050, while earth's climate is rapidly changing. Rising temperatures in particular are projected to negatively impact agricultural production, as the world's staple crops perform poorly in extreme heat. Theoretical models suggest that as temperatures rise above plants' optimal temperature for performance, not only will mean yields decline rapidly, but the variability of yields will increase, even as interannual variations in climate remain unchanged. Here we use global datasets of maize production and climate variability combined with CMIP5 temperature projections to quantify how yield variability will change in major maize producing countries under 2°C and 4°C of global warming. Maize is the world's most produced crop, and is linked to other staple crops through substitution in consumption and production. We find that in warmer climates - absent any breeding gains in heat tolerance - the Coefficient of Variation (CV) of maize yields increases almost everywhere, to values much larger than present-day. This increase in CV is due both to an increase in the standard deviation of yields, and a decrease in mean yields. In locations where crop failures become the norm under high (4°C) warming (mostly in tropical, low-yield environments), the standard deviation of yields ultimately decreases. The probability that in any given year the most productive areas in the top three maize producing countries (United States, China, Brazil) have simultaneous production losses greater than 10% is virtually zero under present-day climate conditions, but increases to 12% under 2°C warming, and 89% under 4°C warming. This has major implications for global food markets and staple crop prices, affecting especially the 2.5 billion people that comprise the world's poor, who already spend the majority of their disposable income on food and are particularly vulnerable to agricultural price spikes.
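The decomposition invoked above follows directly from the definition of the coefficient of variation; stated as a formula:

```latex
% Coefficient of variation of yields, with mean \mu and standard deviation \sigma:
\mathrm{CV} = \frac{\sigma}{\mu}
% CV therefore increases when \sigma rises, when \mu falls, or both;
% these are the two effects the abstract attributes to warming.
```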
NASA Astrophysics Data System (ADS)
Malanson, G. P.; DeRose, R. J.; Bekker, M. F.
2016-12-01
The consequences of increasing climatic variance while including variability among individuals and populations are explored for range margins of species with a spatially explicit simulation. The model has a single environmental gradient and a single species, and is then extended to two species. Species response to the environment is a Gaussian function with a maximum of 1.0 at the species' optimum on the gradient. The variance in the environment is taken from the total variance in the tree ring series of 399 individuals of Pinus edulis in FIA plots in the western USA. The variability is increased by a multiplier of the standard deviation for various doubling times. The variance of individuals in the simulation is drawn from these same series. Inheritance of individual variability is based on the geographic locations of the individuals. The variance for P. edulis is recomputed as time-dependent conditional standard deviations using the GARCH procedure. Establishment and mortality are simulated in a Monte Carlo process with individual variance. Variance for P. edulis does not show a consistent pattern of heteroscedasticity. An obvious result is that increasing variance has deleterious effects on species persistence because extreme events that result in extinctions cannot be balanced by positive anomalies, but even less extreme negative events cannot be balanced by positive anomalies because of biological and spatial constraints. In the two species model the superior competitor is more affected by increasing climatic variance because its response function is steeper at the point of intersection with the other species and so the uncompensated effects of negative anomalies are greater for it. These theoretical results can guide the anticipated need to mitigate the effects of increasing climatic variability on P. edulis range margins. The trailing edge, here subject to increasing drought stress with increasing temperatures, will be more affected by negative anomalies.
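Estimating time-dependent conditional standard deviations with a GARCH model, as the abstract describes for the ring-width series, can be sketched with the `arch` package. The series below is invented noise standing in for the detrended tree-ring data, and GARCH(1,1) is assumed only for illustration.

```python
# Sketch: time-varying conditional SD from a GARCH(1,1) fit.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(1)
ring_width_anomalies = rng.normal(size=400)   # stand-in for a detrended ring-width series

model = arch_model(ring_width_anomalies, vol="GARCH", p=1, q=1, mean="Constant")
result = model.fit(disp="off")
conditional_sd = result.conditional_volatility  # time-dependent SD estimates
print(conditional_sd[:5])
```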
A cross-scale approach to understand drought-induced variability of sagebrush ecosystem productivity
NASA Astrophysics Data System (ADS)
Assal, T.; Anderson, P. J.
2016-12-01
Sagebrush (Artemisia spp.) mortality has recently been reported in the Upper Green River Basin (Wyoming, USA) of the sagebrush steppe of western North America. Numerous causes have been suggested, but recent drought (2012-13) is the likely mechanism of mortality in this water-limited ecosystem which provides critical habitat for many species of wildlife. An understanding of the variability in patterns of productivity with respect to climate is essential to exploit landscape scale remote sensing for detection of subtle changes associated with mortality in this sparse, uniformly vegetated ecosystem. We used the standardized precipitation index to characterize drought conditions and Moderate Resolution Imaging Spectroradiometer (MODIS) satellite imagery (250-m resolution) to characterize broad characteristics of growing season productivity. We calculated per-pixel growing season anomalies over a 16-year period (2000-2015) to identify the spatial and temporal variability in productivity. Metrics derived from Landsat satellite imagery (30-m resolution) were used to further investigate trends within anomalous areas at local scales. We found evidence to support an initial hypothesis that antecedent winter drought was most important in explaining reduced productivity. The results indicate drought effects were inconsistent over space and time. MODIS derived productivity deviated by more than four standard deviations in heavily impacted areas, but was well within the interannual variability in other areas. Growing season anomalies highlighted dramatic declines in productivity during the 2012 and 2013 growing seasons. However, large negative anomalies persisted in other areas during the 2014 growing season, indicating lag effects of drought. We are further investigating if the reduction in productivity is mediated by local biophysical properties. Our analysis identified spatially explicit patterns of ecosystem properties altered by severe drought which are consistent with field observations of sagebrush mortality. The results provide a theoretical framework for future field based investigation at multiple spatiotemporal scales.
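The per-pixel growing-season anomalies described above amount to expressing each year's value in standard deviations from that pixel's multi-year mean; a minimal numpy sketch with invented array shapes and values:

```python
# Per-pixel z-score anomalies across a 16-year image stack.
import numpy as np

rng = np.random.default_rng(2)
ndvi = rng.normal(0.4, 0.05, size=(16, 200, 200))   # 16 years x 200 x 200 pixels (invented)

mean = ndvi.mean(axis=0)
sd = ndvi.std(axis=0, ddof=1)
anomalies = (ndvi - mean) / sd                      # z-score per pixel per year

severe = np.abs(anomalies) > 4                      # "more than four standard deviations"
print(severe.sum(), "pixel-years beyond 4 SD")
```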
Between-subject variability in asymmetry analysis of macular thickness.
Alluwimi, Muhammed S; Swanson, William H; Malinovsky, Victor E
2014-05-01
To investigate the use of asymmetry analysis to reduce between-subject variability of macular thickness measurements using spectral domain optical coherence tomography. Sixty-three volunteers (33 young subjects [aged 21 to 35 years] and 30 older subjects [aged 45 to 85 years]) free of eye disease were recruited. Macular images were gathered with the Spectralis optical coherence tomography. An overlay 24- by 24-degree grid was divided into five zones per hemifield, and asymmetry analysis was computed as the difference between superior and inferior zone thicknesses. We hypothesized that the lowest variation and the highest density of ganglion cells would be found approximately 3 to 6 degrees from the foveola, corresponding to zones 1 and 2. For each zone and age group, between-subject SDs were compared for retinal thickness versus asymmetry analysis using an F test. To account for repeated comparisons, p < 0.0125 was required for statistical significance. Axial length and corneal curvature were measured with an IOLMaster. For OD, asymmetry analysis reduced between-subject variability in zones 1 and 2 in both groups (F > 3.2, p < 0.001). Standard deviation for zone 1 dropped from 12.0 to 3.0 μm in the young group and from 11.7 to 2.6 μm in the older group. Standard deviation for zone 2 dropped from 13.6 to 5.3 μm in the young group and from 11.1 to 5.8 μm in the older group. Combining all subjects, neither retinal thickness nor asymmetry analysis showed a strong correlation with axial length or corneal curvature (R² < 0.01). Analysis for OS yielded the same pattern of results, as did asymmetry analyses between eyes (F > 3.8, p < 0.0001). Asymmetry analysis reduced between-subject variability in zones 1 and 2. Combining the five zones together produced a higher between-subject variation of the retinal thickness asymmetry analysis; thus, we encourage clinicians to be cautious when interpreting the asymmetry analysis printouts.
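The F test used above compares two between-subject variances as a ratio; a short sketch with invented data, showing the one-sided p-value for the variance of raw thickness exceeding that of the asymmetry measure:

```python
# Variance-ratio (F) comparison of two measures' between-subject SDs.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
thickness = rng.normal(280, 12.0, size=33)   # raw zone thickness, um (invented)
asymmetry = rng.normal(0, 3.0, size=33)      # superior-inferior difference, um (invented)

F = np.var(thickness, ddof=1) / np.var(asymmetry, ddof=1)
p = stats.f.sf(F, len(thickness) - 1, len(asymmetry) - 1)
print(F, p)
```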
Random errors in interferometry with the least-squares method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Qi
2011-01-20
This investigation analyzes random errors in interferometric surface profilers using the least-squares method when random noises are present. Two types of random noise are considered here: intensity noise and position noise. Two formulas have been derived for estimating the standard deviations of the surface height measurements: one is for estimating the standard deviation when only intensity noise is present, and the other is for estimating the standard deviation when only position noise is present. Measurements on simulated noisy interferometric data have been performed, and standard deviations of the simulated measurements have been compared with those theoretically derived. The relationships have also been discussed between random error and the wavelength of the light source and between random error and the amplitude of the interference fringe.
Singh, Manav Deep; Jain, Kanika
2017-11-01
To find out whether 30-2 Swedish Interactive Threshold Algorithm (SITA) Fast is comparable to 30-2 SITA Standard as a tool for perimetry among the patients with intracranial tumors. This was a prospective cross-sectional study involving 80 patients aged ≥18 years with imaging proven intracranial tumors and visual acuity better than 20/60. The patients underwent multiple visual field examinations using the two algorithms until consistent and repeatable results were obtained. A total of 140 eyes of 80 patients were analyzed. Almost 60% of patients undergoing perimetry with SITA Standard required two or more sessions to obtain consistent results, whereas the same could be obtained in 81.42% with SITA Fast in the first session itself. Of 140 eyes, 70 eyes had recordable field defects and the rest had no defects as detected by either of the two algorithms. Mean deviation (MD) (P = 0.56), pattern standard deviation (PSD) (P = 0.22), visual field index (P = 0.83) and number of depressed points at P < 5%, 2%, 1%, and 0.5% on MD and PSD probability plots showed no statistically significant difference between the two algorithms. The Bland-Altman test showed that considerable variability existed between the two algorithms. Perimetry performed by the SITA Standard and SITA Fast algorithms of the Humphrey Field Analyzer gives comparable results among patients with intracranial tumors. Being more time efficient and with a shorter learning curve, SITA Fast may be recommended as a standard test for the purpose of perimetry among these patients.
Zukowski, Lisa A; Christou, Evangelos A; Shechtman, Orit; Hass, Christopher J; Tillman, Mark D
2017-03-01
Wheelchair propulsion has been linked to overuse injuries regardless of propulsion style. Many aspects of the arcing (ARC) and semicircular (SEMI) propulsion styles have been compared, but differences in intracycle movement variability, which have been linked to overuse injuries, have not been examined. To explore how ARC and SEMI affect changes in intracycle wrist movement variability after a fatiguing bout of propulsion. Repeated measures crossover design. Wheelchair rollers and wheelchair fatigue course in a research laboratory. Twenty healthy, nondisabled adult men without previous wheelchair experience. Participants learned ARC and SEMI and used each to perform a wheelchair fatigue protocol. Thirty seconds of propulsion on rollers were recorded by motion-capture cameras before and after a fatigue protocol for each propulsion style on 2 testing days. Angular wrist orientations (flexion/extension and radial/ulnar deviation) and linear wrist trajectories (mediolateral direction) were computed, and intracycle movement variability was calculated as standard deviations of the detrended and filtered values during the push phase beginning and end. Paired samples t tests were used to compare ARC and SEMI based on the percent changes from pre- to postfatigue protocol. Both propulsion styles resulted in increased intracycle wrist movement variability postfatigue, but the observed increases did not significantly differ between ARC and SEMI. This study shows that intersubject variability exceeded the average change in intracycle wrist movement variability for both propulsion styles. The finding that neither propulsion style resulted in a greater change in intracycle movement variability may suggest that no single propulsion style is ideal for everyone. The large intersubject variability may indicate that the propulsion style resulting in the smallest increase in intracycle movement variability after a fatiguing bout of propulsion may differ for each person and may help explain why wheelchair users self-select to use different propulsion styles. Copyright © 2017 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.
Evaluation of Brazilian Sugarcane Bagasse Characterization: An Interlaboratory Comparison Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sluiter, Justin B.; Chum, Helena; Gomes, Absai C.
2016-05-01
This paper describes a study of the variability of measured composition for a single bulk sugarcane bagasse conducted across eight laboratories using similar analytical methods, with the purpose of determining the expected variation for compositional analysis performed by different laboratories. The results show good agreement of measured composition within a single laboratory, but greater variability when results are compared among laboratories. These interlaboratory variabilities do not seem to be associated with a specific method or technique or any single piece of instrumentation. The summary censored statistics provide mean values and pooled standard deviations as follows: total extractives 6.7% (0.6%), whole ash 1.5% (0.2%), glucan 42.3% (1.2%), xylan 22.3% (0.5%), total lignin 21.3% (0.4%), and total mass closure 99.4% (2.9%).
Gamma ray sources observation with the ARGO-YBJ detector
NASA Astrophysics Data System (ADS)
Vernetto, S.; ARGO-YBJ Collaboration
2011-02-01
In this paper we report on the observations of TeV gamma ray sources performed by the air shower detector ARGO-YBJ. The objects studied in this work are the blazar Markarian 421 and the extended galactic source MGROJ1908+06, monitored during ~2 years of operation. Mrk421 has been detected by ARGO-YBJ with a statistical significance of ~11 standard deviations. The observed TeV emission was highly variable, showing large enhancements of the flux during active periods. The study of the spectral behaviour during flares revealed a positive correlation of the hardness with the flux, as already reported in the past by the Whipple telescope, suggesting that this is a long term property of the source. ARGO-YBJ observed a strong correlation between TeV gamma rays and the X-ray flux measured by RXTE/ASM and SWIFT/BAT during the whole period, with a time lag compatible with zero, supporting the one-zone SSC model to describe the emission mechanism. MGROJ1908+06 has been detected by ARGO-YBJ with a significance of ~5 standard deviations. From our data the source appears extended, and the measured extension is σ_ext = 0.48° (+0.26°/−0.28°), in agreement with a previous HESS observation. The average flux is in marginal agreement with that reported by MILAGRO, but significantly higher than that obtained by HESS, suggesting a possible flux variability.
Somatotype, training and performance in Ironman athletes.
Kandel, Michel; Baeyens, Jean Pierre; Clarys, Peter
2014-01-01
The aim of this study was to describe the physiques of Ironman athletes and the relationship between Ironman performance, training and somatotype. A total of 165 male and 22 female competitors of the Ironman Switzerland volunteered for this study. Ten anthropometric dimensions were measured, and 12 training and history variables were recorded with a questionnaire. The variables were compared with the race performance. The somatotype was a strong predictor of Ironman performance (R=0.535; R(2)=0.286; sign. p<0.001) in male athletes. The endomorphy component was the most substantial predictor. A reduction in endomorphy by one standard deviation, as well as an increase in ectomorphy by one standard deviation, led to significant and substantial improvements in Ironman performance (28.1 and 29.8 minutes, respectively). An ideal somatotype of 1.7-4.9-2.8 could be established. Age and quantitative training effort were not significant predictors of Ironman performance. In female athletes, no relationship between somatotype, training and performance was found. The somatotype of a male athlete accounts for 28.6% of the variance in Ironman performance. Athletes not having the ideal somatotype of 1.7-4.9-2.8 could improve their performance by altering their somatotype. Lower ratings in endomorphy, as well as higher ratings in ectomorphy, resulted in significantly better race performance. The impact of somatotype was most distinguished in the run discipline and had a much greater impact on the total race time than the quantitative training effort. These findings could not be replicated in female athletes.
NASA Astrophysics Data System (ADS)
Susanti, Ana; Suhartono; Jati Setyadi, Hario; Taruk, Medi; Haviluddin; Pamilih Widagdo, Putut
2018-03-01
Currency availability at Bank Indonesia can be examined through the inflow and outflow of currency. The objective of this research is to forecast the inflow and outflow of currency in each Representative Office (RO) of BI in East Java by using a hybrid of exponential smoothing based on the state space approach and a calendar variation model. The hybrid model is expected to generate more accurate forecasts. Two studies are discussed in this research. The first examines the hybrid model on simulated data containing trend, seasonal and calendar variation patterns. The second applies the hybrid model to forecasting the inflow and outflow of currency in each RO of BI in East Java. The first set of results indicates that the exponential smoothing model cannot capture the calendar variation pattern, yielding RMSE values of about 10 times the standard deviation of the error. The second set of results indicates that the hybrid model can capture the trend, seasonal and calendar variation patterns, yielding RMSE values approaching the standard deviation of the error. In the applied study, the hybrid model gives more accurate forecasts for five variables: the inflow of currency in Surabaya, Malang and Jember, and the outflow of currency in Surabaya and Kediri. Otherwise, the time series regression model yields better forecasts for three variables: the outflow of currency in Malang and Jember, and the inflow of currency in Kediri.
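A hedged sketch of the exponential smoothing component (state-space ETS) using statsmodels, fit to an invented monthly series; the calendar-variation regression part of the paper's hybrid model is not reproduced here, and the in-sample RMSE printed at the end is the quantity the abstract compares against the error standard deviation.

```python
# Sketch: additive Holt-Winters exponential smoothing on a monthly series.
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(4)
t = np.arange(120)
series = 100 + 0.5 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, 120)

fit = ExponentialSmoothing(series, trend="add", seasonal="add",
                           seasonal_periods=12).fit()
forecast = fit.forecast(12)                                   # 12-month-ahead forecasts
rmse_in_sample = np.sqrt(np.mean((series - fit.fittedvalues) ** 2))
print(rmse_in_sample)
```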
Mishra, Alok; Swati, D
2015-09-01
Variation in the interval between the R-R peaks of the electrocardiogram represents the modulation of the cardiac oscillations by the autonomic nervous system. This variation is contaminated by anomalous signals called ectopic beats, artefacts or noise, which mask the true behaviour of heart rate variability. In this paper, we have proposed a combination filter of a recursive impulse rejection filter and a recursive 20% filter, with recursive application and a preference for replacement over removal of abnormal beats, to improve the pre-processing of the inter-beat intervals. We have tested this novel recursive combinational method, with median replacement, to estimate the standard deviation of normal-to-normal (SDNN) beat intervals of congestive heart failure (CHF) and normal sinus rhythm subjects. This work discusses in detail the improvement in pre-processing over single use of the impulse rejection filter and removal of abnormal beats, for the estimation of SDNN and the Poincaré plot descriptors (SD1, SD2, and SD1/SD2). We found an SDNN value of 22 ms and a Poincaré SD2 value of 36 ms to be clinical indicators for discriminating normal cases from CHF cases. The pre-processing is also useful in the calculation of the Lyapunov exponent, a nonlinear index: Lyapunov exponents calculated after the proposed pre-processing change in a way that begins to follow the expected, less complex behaviour of diseased states.
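The Poincaré descriptors named above have standard closed forms in terms of the RR series: SD1 captures short-term (beat-to-beat) variability and SD2 long-term variability. A small sketch with an invented RR series:

```python
# Standard Poincaré-plot descriptors (SD1, SD2) plus SDNN from RR intervals (ms).
import numpy as np

def poincare_descriptors(rr_ms):
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)
    sd1 = np.sqrt(0.5 * np.var(diff, ddof=1))        # spread perpendicular to the identity line
    sdnn = np.std(rr, ddof=1)
    sd2 = np.sqrt(max(2 * sdnn**2 - sd1**2, 0.0))    # spread along the identity line
    return sdnn, sd1, sd2, sd1 / sd2

rr = [812, 798, 830, 845, 805, 790, 820, 835, 810, 825]   # made-up series, ms
print(poincare_descriptors(rr))
```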
Capture of activation during ventricular arrhythmia using distributed stimulation.
Meunier, Jason M; Ramalingam, Sanjiv; Lin, Shien-Fong; Patwardhan, Abhijit R
2007-04-01
Results of previous studies suggest that pacing strength stimuli can capture activation during ventricular arrhythmia locally near pacing sites. The existence of spatio-temporal distribution of excitable gap during arrhythmia suggests that multiple and timed stimuli delivered over a region may permit capture over larger areas. Our objective in this study was to evaluate the efficacy of using spatially distributed pacing (DP) to capture activation during ventricular arrhythmia. Data were obtained from rabbit hearts which were placed against a lattice of parallel wires through which biphasic pacing stimuli were delivered. Electrical activity was recorded optically. Pacing stimuli were delivered in sequence through the parallel wires starting with the wire closest to the apex and ending with one closest to the base. Inter-stimulus delay was based on conduction velocity. Time-frequency analysis of optical signals was used to determine variability in activation. A decrease in standard deviation of dominant frequencies of activation from a grid of locations that spanned the captured area and a concurrence with paced frequency were used as an index of capture. Results from five animals showed that the average standard deviation decreased from 0.81 Hz during arrhythmia to 0.66 Hz during DP at pacing cycle length of 125 ms (p = 0.03) reflecting decreased spatio-temporal variability in activation during DP. Results of time-frequency analysis during these pacing trials showed agreement between activation and paced frequencies. These results show that spatially distributed and timed stimulation can be used to modify and capture activation during ventricular arrhythmia.
A meta-analysis of the validity of FFQ targeted to adolescents.
Tabacchi, Garden; Filippi, Anna Rita; Amodio, Emanuele; Jemni, Monèm; Bianco, Antonino; Firenze, Alberto; Mammina, Caterina
2016-05-01
The present work is aimed at meta-analysing validity studies of FFQ for adolescents, to investigate their overall accuracy and variables that can affect it negatively. A meta-analysis of sixteen original articles was performed within the ASSO Project (Adolescents and Surveillance System in the Obesity prevention). The articles assessed the validity of FFQ for adolescents, compared with food records or 24 h recalls, with regard to energy and nutrient intakes. Pearson's or Spearman's correlation coefficients, means/standard deviations, kappa agreement, percentiles and mean differences/limits of agreement (Bland-Altman method) were extracted. Pooled estimates were calculated and heterogeneity tested for correlation coefficients and means/standard deviations. A subgroup analysis assessed variables influencing FFQ accuracy. An overall fair/high correlation between FFQ and reference method was found; a good agreement, measured through the intake mean comparison for all nutrients except sugar, carotene and K, was observed. Kappa values showed fair/moderate agreement; an overall good ability to rank adolescents according to energy and nutrient intakes was evidenced by data of percentiles; absolute validity was not confirmed by mean differences/limits of agreement. Interviewer administration mode, consumption interval of the previous year/6 months and high number of food items are major contributors to heterogeneity and thus can reduce FFQ accuracy. The meta-analysis shows that FFQ are accurate tools for collecting data and could be used for ranking adolescents in terms of energy and nutrient intakes. It suggests how the design and the validation of a new FFQ should be addressed.
Heiberg, Einar; Ugander, Martin; Engblom, Henrik; Götberg, Matthias; Olivecrona, Göran K; Erlinge, David; Arheden, Håkan
2008-02-01
Ethics committees approved human and animal study components; informed written consent was provided (prospective human study [20 men; mean age, 62 years]) or waived (retrospective human study [16 men, four women; mean age, 59 years]). The purpose of this study was to prospectively evaluate a clinically applicable method, accounting for the partial volume effect, to automatically quantify myocardial infarction from delayed contrast material-enhanced magnetic resonance images. Pixels were weighted according to signal intensity to calculate infarct fraction for each pixel. Mean bias +/- variability (or standard deviation), expressed as percentage left ventricular myocardium (%LVM), were -0.3 +/- 1.3 (animals), -1.2 +/- 1.7 (phantoms), and 0.3 +/- 2.7 (patients), respectively. Algorithm had lower variability than dichotomous approach (2.7 vs 7.7 %LVM, P < .01) and did not differ from interobserver variability for bias (P = .31) or variability (P = .38). The weighted approach provides automatic quantification of myocardial infarction with higher accuracy and lower variability than a dichotomous algorithm. (c) RSNA, 2007.
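The partial-volume weighting described above can be illustrated with a simple linear mapping from signal intensity to a per-pixel infarct fraction. This is a generic sketch of the idea, not the authors' exact algorithm; the reference intensities and image are invented.

```python
# Sketch: weighted (partial-volume) infarct quantification per pixel.
import numpy as np

def infarct_fraction(image, remote_mean, core_mean):
    """Linearly map signal intensity to a 0-1 infarct fraction per pixel."""
    frac = (image - remote_mean) / (core_mean - remote_mean)
    return np.clip(frac, 0.0, 1.0)

rng = np.random.default_rng(5)
img = rng.normal(300, 150, size=(64, 64))            # made-up DE-MRI intensities
infarct_pct = 100 * infarct_fraction(img, remote_mean=200, core_mean=900).mean()
print(f"{infarct_pct:.1f} % of myocardium (weighted)")
```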
N2/O2/H2 Dual-Pump Cars: Validation Experiments
NASA Technical Reports Server (NTRS)
OByrne, S.; Danehy, P. M.; Cutler, A. D.
2003-01-01
The dual-pump coherent anti-Stokes Raman spectroscopy (CARS) method is used to measure temperature and the relative species densities of N2, O2 and H2 in two experiments. Average values and root-mean-square (RMS) deviations are determined. Mean temperature measurements in a furnace containing air between 300 and 1800 K agreed with thermocouple measurements within 26 K on average, while mean mole fractions agreed to within 1.6% of the expected value. The temperature measurement standard deviation averaged 64 K while the standard deviation of the species mole fractions averaged 7.8% for O2 and 3.8% for N2, based on 200 single-shot measurements. Preliminary measurements have also been performed in a flat-flame burner for fuel-lean and fuel-rich flames. Temperature standard deviations of 77 K were measured, and the ratios of H2 to N2 and O2 to N2 respectively had standard deviations from the mean value of 12.3% and 10% of the measured ratio.
Analyzing Spatial and Temporal Variation in Precipitation Estimates in a Coupled Model
NASA Astrophysics Data System (ADS)
Tomkins, C. D.; Springer, E. P.; Costigan, K. R.
2001-12-01
Integrated modeling efforts at the Los Alamos National Laboratory aim to simulate the hydrologic cycle and study the impacts of climate variability and land use changes on water resources and ecosystem function at the regional scale. The integrated model couples three existing models independently responsible for addressing the atmospheric, land surface, and ground water components: the Regional Atmospheric Model System (RAMS), the Los Alamos Distributed Hydrologic System (LADHS), and the Finite Element and Heat Mass (FEHM). The upper Rio Grande Basin, extending 92,000 km2 over northern New Mexico and southern Colorado, serves as the test site for this model. RAMS uses nested grids to simulate meteorological variables, with the smallest grid over the Rio Grande having 5-km horizontal grid spacing. As LADHS grid spacing is 100 m, a downscaling approach is needed to estimate meteorological variables from the 5km RAMS grid for input into LADHS. This study presents daily and cumulative precipitation predictions, in the month of October for water year 1993, and an approach to compare LADHS downscaled precipitation to RAMS-simulated precipitation. The downscaling algorithm is based on kriging, using topography as a covariate to distribute the precipitation and thereby incorporating the topographical resolution achieved at the 100m-grid resolution in LADHS. The results of the downscaling are analyzed in terms of the level of variance introduced into the model, mean simulated precipitation, and the correlation between the LADHS and RAMS estimates. Previous work presented a comparison of RAMS-simulated and observed precipitation recorded at COOP and SNOTEL sites. The effects of downscaling the RAMS precipitation were evaluated using Spearman and linear correlations and by examining the variance of both populations. The study focuses on determining how the downscaling changes the distribution of precipitation compared to the RAMS estimates. Spearman correlations computed for the LADHS and RAMS cumulative precipitation reveal a disassociation over time, with R equal to 0.74 at day eight and R equal to 0.52 at day 31. Linear correlation coefficients (Pearson) returned a stronger initial correlation of 0.97, decreasing to 0.68. The standard deviations for the 2500 LADHS cells underlying each 5km RAMS cell range from 8 mm to 695 mm in the Sangre de Cristo Mountains and 2 mm to 112 mm in the San Luis Valley. Comparatively, the standard deviations of the RAMS estimates in these regions are 247 mm and 30 mm respectively. The LADHS standard deviations provide a measure of the variability introduced through the downscaling routine, which exceeds RAMS regional variability by a factor of 2 to 4. The coefficient of variation for the average LADHS grid cell values and the RAMS cell values in the Sangre de Cristo Mountains are 0.66 and 0.27, respectively, and 0.79 and 0.75 in the San Luis Valley. The coefficients of variation evidence the uniformity of the higher precipitation estimates in the mountains, especially for RAMS, and also the lower means and variability found in the valley. Additionally, Kolmogorov-Smirnov tests indicate clear spatial and temporal differences in mean simulated precipitation across the grid.
NASA Astrophysics Data System (ADS)
Maher, Nicola; Marotzke, Jochem
2017-04-01
Natural climate variability is found in observations, paleo-proxies, and climate models. Such climate variability can be intrinsic internal variability or externally forced, for example by changes in greenhouse gases or large volcanic eruptions. There are still questions concerning how external forcing, both natural (e.g., volcanic eruptions and solar variability) and anthropogenic (e.g., greenhouse gases and ozone), may excite interannual modes of variability in the climate system. This project aims to address some of these questions, utilising the large ensemble of the MPI-ESM-LR climate model. In this study we investigate the statistics of four modes of interannual variability, namely the North Atlantic Oscillation (NAO), the Indian Ocean Dipole (IOD), the Southern Annular Mode (SAM) and the El Niño Southern Oscillation (ENSO). Using the 100-member ensemble of MPI-ESM-LR, the statistical properties of these modes (amplitude and standard deviation) can be assessed over time. Here we compare the properties in the pre-industrial control run, the historical run and future scenarios (RCP4.5, RCP2.6) and present preliminary results.
Cummings, Jorden A.; Hayes, Adele M.; Cardaciotto, LeeAnn; Newman, Cory F.
2011-01-01
Self-esteem variability is often associated with poor functioning. However, in disorders with entrenched negative views of self and in a context designed to challenge those views, variable self-esteem might represent a marker of change. We examined self-esteem variability in a sample of 27 patients with Avoidant and Obsessive-Compulsive Personality Disorders who received Cognitive Therapy (CT). A therapy coding system was used to rate patients’ positive and negative views of self expressed in the first ten sessions of a 52-week treatment. Ratings of negative (reverse scored) and positive view of self were summed to create a composite score for each session. Self-esteem variability was calculated as the standard deviation of self-esteem scores across sessions. More self-esteem variability predicted more improvement in personality disorder and depression symptoms at the end of treatment, beyond baseline and average self-esteem. Early variability in self-esteem, in this population and context, appeared to be a marker of therapeutic change. PMID:22923855
Comparative study of navigated versus freehand osteochondral graft transplantation of the knee.
Koulalis, Dimitrios; Di Benedetto, Paolo; Citak, Mustafa; O'Loughlin, Padhraig; Pearle, Andrew D; Kendoff, Daniel O
2009-04-01
Osteochondral lesions are a common sports-related injury for which osteochondral grafting, including mosaicplasty, is an established treatment. Computer navigation has been gaining popularity in orthopaedic surgery to improve accuracy and precision. Navigation improves angle and depth matching during harvest and placement of osteochondral grafts compared with conventional freehand open technique. Controlled laboratory study. Three cadaveric knees were used. Reference markers were attached to the femur, tibia, and donor/recipient site guides. Fifteen osteochondral grafts were harvested and inserted into recipient sites with computer navigation, and 15 similar grafts were inserted freehand. The angles of graft removal and placement as well as surface congruity (graft depth) were calculated for each surgical group. The mean harvesting angle at the donor site using navigation was 4 degrees (standard deviation, 2.3 degrees; range, 1-9 degrees) versus 12 degrees (standard deviation, 5.5 degrees; range, 5-24 degrees) using the freehand technique (P < .0001). The recipient plug removal angle using the navigated technique was 3.3 degrees (standard deviation, 2.1 degrees; range, 0-9 degrees) versus 10.7 degrees (standard deviation, 4.9 degrees; range, 2-17 degrees) freehand (P < .0001). The mean navigated recipient plug placement angle was 3.6 degrees (standard deviation, 2.0 degrees; range, 1-9 degrees) versus 10.6 degrees (standard deviation, 4.4 degrees; range, 3-17 degrees) with the freehand technique (P = .0001). The mean height of plug protrusion under navigation was 0.3 mm (standard deviation, 0.2 mm; range, 0-0.6 mm) versus 0.5 mm (standard deviation, 0.3 mm; range, 0.2-1.1 mm) using the freehand technique (P = .0034). Significantly greater accuracy and precision were observed in harvesting and placement of the osteochondral grafts in the navigated procedures. Clinical studies are needed to establish a benefit in vivo. Improvement in the osteochondral harvest and placement is desirable to optimize clinical outcomes. Navigation shows great potential to improve both harvest and placement precision and accuracy, thus optimizing ultimate surface congruity.
Donegan, Thomas M.
2018-01-01
Existing models for assigning species, subspecies, or no taxonomic rank to populations which are geographically separated from one another were analyzed. This was done by subjecting over 3,000 pairwise comparisons of vocal or biometric data based on birds to a variety of statistical tests that have been proposed as measures of differentiation. One current model which aims to test diagnosability (Isler et al. 1998) is highly conservative, applying a hard cut-off, which excludes from consideration differentiation below diagnosis. It also includes non-overlap as a requirement, a measure which penalizes increases to sample size. The “species scoring” model of Tobias et al. (2010) involves less drastic cut-offs, but unlike Isler et al. (1998), does not control adequately for sample size and attributes scores in many cases to differentiation which is not statistically significant. Four different models of assessing effect sizes were analyzed: using both pooled and unpooled standard deviations, and either controlling for sample size using t-distributions or omitting to do so. Pooled standard deviations produced more conservative effect sizes when uncontrolled for sample size but less conservative effect sizes when so controlled. Pooled models require assumptions to be made that are typically elusive or unsupported for taxonomic studies. Modifications to improve these frameworks are proposed, including: (i) introducing statistical significance as a gateway to attributing any weighting to findings of differentiation; (ii) abandoning non-overlap as a test; (iii) recalibrating Tobias et al. (2010) scores based on effect sizes controlled for sample size using t-distributions. A new universal method is proposed for measuring differentiation in taxonomy using continuous variables, together with a formula for ranking allopatric populations. This is based first on calculating effect sizes using unpooled standard deviations, controlled for sample size using t-distributions, for a series of different variables. All non-significant results are excluded by scoring them as zero. Distance between any two populations is calculated using Euclidean summation of non-zeroed effect size scores. If the score of an allopatric pair exceeds that of a related sympatric pair, then the allopatric population can be ranked as species and, if not, then at most subspecies rank should be assigned. A spreadsheet has been programmed and is being made available which allows this and other tests of differentiation and rank studied in this paper to be rapidly analyzed. PMID:29780266
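The ranking recipe can be sketched in Python. This is an illustrative reading of the steps, not the published spreadsheet: Welch's t-test stands in for the significance gateway, the unpooled effect size uses the root mean square of the two sample standard deviations, and the paper's exact t-distribution correction for sample size is not reproduced.

```python
import numpy as np
from scipy import stats

def differentiation_score(pop_a, pop_b, alpha=0.05):
    """Euclidean summation of non-zeroed effect sizes across variables.

    pop_a and pop_b are lists of 1-D arrays, one (a, b) pair per variable.
    Non-significant variables (Welch's t-test, p >= alpha) score zero.
    """
    scores = []
    for a, b in zip(pop_a, pop_b):
        t, p = stats.ttest_ind(a, b, equal_var=False)  # Welch's t-test gateway
        if p >= alpha:
            scores.append(0.0)                         # exclude non-significant results
        else:
            unpooled_sd = np.sqrt((np.var(a, ddof=1) + np.var(b, ddof=1)) / 2)
            scores.append(abs(np.mean(a) - np.mean(b)) / unpooled_sd)
    return float(np.sqrt(np.sum(np.square(scores))))   # Euclidean distance
```

Under the proposed rule, an allopatric pair whose score exceeds that of a related sympatric pair would qualify for species rank; otherwise at most subspecies rank is assigned.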
Kim, Younggy; Walker, W Shane; Lawler, Desmond F
2012-05-01
In electrodialysis desalination, the boundary layer near ion-exchange membranes is the limiting region for the overall rate of ionic separation due to concentration polarization over tens of micrometers in that layer. Under high current conditions, this sharp concentration gradient, creating substantial ionic diffusion, can drive a preferential separation for certain ions depending on their concentration and diffusivity in the solution. Thus, this study tested a hypothesis that the boundary layer affects the competitive transport between di- and mono-valent cations, which is known to be governed primarily by the partitioning with cation-exchange membranes. A laboratory-scale electrodialyzer was operated at steady state with a mixture of 10 mM KCl and 10 mM CaCl2 at various flow rates. Increased flows increased the relative calcium transport. A two-dimensional model was built with analytical solutions of the Nernst-Planck equation. In the model, the boundary layer thickness was considered as a random variable defined with three statistical parameters: mean, standard deviation, and correlation coefficient between the thicknesses of the two boundary layers facing across a spacer. Model simulations with the Monte Carlo method found that a greater calcium separation was achieved with a smaller mean, greater standard deviation, or more negative correlation coefficient. The model and experimental results were compared for the cationic transport number as well as the current and potential relationship. The mean boundary layer thickness was found to decrease from 40 to less than 10 μm as the superficial water velocity increased from 1.06 to 4.24 cm/s. The standard deviation was greater than the mean thickness at slower water velocities and smaller at faster water velocities. Copyright © 2012 Elsevier Ltd. All rights reserved.
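The three-parameter random boundary layer can be illustrated with a short Monte Carlo sketch. Parameter values are hypothetical, and the authors' full model (which couples the sampled thicknesses to Nernst-Planck solutions) is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_thickness_pairs(mean_um, sd_um, rho, n=10_000):
    """Draw pairs of boundary-layer thicknesses (micrometers).

    The two layers face each other across a spacer; their thicknesses are
    modeled as correlated normal variables with the given mean, standard
    deviation, and correlation coefficient, clipped to stay positive.
    """
    cov = sd_um**2 * np.array([[1.0, rho], [rho, 1.0]])
    pairs = rng.multivariate_normal([mean_um, mean_um], cov, size=n)
    return np.clip(pairs, 1e-3, None)  # a thickness cannot be negative

# e.g. mean 40 um, SD 20 um, negatively correlated layers
pairs = sample_thickness_pairs(40.0, 20.0, rho=-0.5)
print(pairs.mean(axis=0), pairs.std(axis=0, ddof=1))
```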
NASA Astrophysics Data System (ADS)
Saarinen, N.; Vastaranta, M.; Näsi, R.; Rosnell, T.; Hakala, T.; Honkavaara, E.; Wulder, M. A.; Luoma, V.; Tommaselli, A. M. G.; Imai, N. N.; Ribeiro, E. A. W.; Guimarães, R. B.; Holopainen, M.; Hyyppä, J.
2017-10-01
Biodiversity is commonly referred to as species diversity but in forest ecosystems variability in structural and functional characteristics can also be treated as measures of biodiversity. Small unmanned aerial vehicles (UAVs) provide a means for characterizing forest ecosystem with high spatial resolution, permitting measuring physical characteristics of a forest ecosystem from a viewpoint of biodiversity. The objective of this study is to examine the applicability of photogrammetric point clouds and hyperspectral imaging acquired with a small UAV helicopter in mapping biodiversity indicators, such as structural complexity as well as the amount of deciduous and dead trees at plot level in southern boreal forests. Standard deviation of tree heights within a sample plot, used as a proxy for structural complexity, was the most accurately derived biodiversity indicator resulting in a mean error of 0.5 m, with a standard deviation of 0.9 m. The volume predictions for deciduous and dead trees were underestimated by 32.4 m3/ha and 1.7 m3/ha, respectively, with standard deviation of 50.2 m3/ha for deciduous and 3.2 m3/ha for dead trees. The spectral features describing brightness (i.e. higher reflectance values) were prevailing in feature selection but several wavelengths were represented. Thus, it can be concluded that structural complexity can be predicted reliably but at the same time can be expected to be underestimated with photogrammetric point clouds obtained with a small UAV. Additionally, plot-level volume of dead trees can be predicted with small mean error whereas identifying deciduous species was more challenging at plot level.
Arday, D R; Brundage, J F; Gardner, L I; Goldenbaum, M; Wann, F; Wright, S
1991-06-15
The authors conducted a population-based study to attempt to estimate the effect of human immunodeficiency virus type 1 (HIV-1) seropositivity on Armed Services Vocational Aptitude Battery test scores in otherwise healthy individuals with early HIV-1 infection. The Armed Services Vocational Aptitude Battery is a 10-test written multiple aptitude battery administered to all civilian applicants for military enlistment prior to serologic screening for HIV-1 antibodies. A total of 975,489 induction testing records containing both Armed Services Vocational Aptitude Battery and HIV-1 results from October 1985 through March 1987 were examined. An analysis data set (n = 7,698) was constructed by choosing five controls for each of the 1,283 HIV-1-positive cases, matched on five-digit ZIP code, and a multiple linear regression analysis was performed to control for demographic and other factors that might influence test scores. Years of education was the strongest predictor of test scores, raising an applicant's score on a composite test nearly 0.16 standard deviation per year. The HIV-1-positive effect on the composite score was -0.09 standard deviation (99% confidence interval -0.17 to -0.02). Separate regressions on each component test within the battery showed HIV-1 effects between -0.39 and +0.06 standard deviation. The two Armed Services Vocational Aptitude Battery component tests felt a priori to be the most sensitive to HIV-1-positive status showed the least decrease with seropositivity. Much of the variability in test scores was not predicted by either HIV-1 serostatus or the demographic and other factors included in the model. There appeared to be little evidence of a strong HIV-1 effect.
Matrix Summaries Improve Research Reports: Secondary Analyses Using Published Literature
ERIC Educational Resources Information Center
Zientek, Linda Reichwein; Thompson, Bruce
2009-01-01
Correlation matrices and standard deviations are the building blocks of many of the commonly conducted analyses in published research, and AERA and APA reporting standards recommend their inclusion when reporting research results. The authors argue that the inclusion of correlation/covariance matrices, standard deviations, and means can enhance…
30 CFR 74.8 - Measurement, accuracy, and reliability requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... concentration, as defined by the relative standard deviation of the distribution of measurements. The relative standard deviation shall be less than 0.1275 without bias for both full-shift measurements of 8 hours or... Standards, Regulations, and Variances, 1100 Wilson Boulevard, Room 2350, Arlington, Virginia 22209-3939...
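The regulatory criterion quoted above reduces to a relative standard deviation check. A minimal sketch with hypothetical replicate measurements:

```python
import numpy as np

def relative_standard_deviation(measurements):
    """RSD = sample standard deviation divided by the mean."""
    x = np.asarray(measurements, dtype=float)
    return x.std(ddof=1) / x.mean()

# Hypothetical replicate full-shift dust measurements (mg/m^3).
replicates = [2.01, 1.88, 2.10, 1.95, 2.05]
rsd = relative_standard_deviation(replicates)
print(f"RSD = {rsd:.4f} -> {'meets' if rsd < 0.1275 else 'fails'} the 0.1275 criterion")
```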
New approach to estimating variability in visual field data using an image processing technique.
Crabb, D P; Edgar, D F; Fitzke, F W; McNaught, A I; Wynn, H P
1995-01-01
AIMS--A new framework for evaluating pointwise sensitivity variation in computerised visual field data is demonstrated. METHODS--A measure of local spatial variability (LSV) is generated using an image processing technique. Fifty-five eyes from a sample of normal and glaucomatous subjects, examined on the Humphrey field analyser (HFA), were used to illustrate the method. RESULTS--Significant correlations between LSV and conventional estimates--namely, HFA pattern standard deviation and short-term fluctuation--were found. CONCLUSION--LSV is not dependent on normals' reference data or repeated threshold determinations, thus potentially reducing test time. Also, the illustrated pointwise maps of LSV could provide a method for identifying areas of fluctuation commonly found in early glaucomatous field loss. PMID:7703196
Effect of Spatio-Temporal Variability of Rainfall on Stream flow Prediction of Birr Watershed
NASA Astrophysics Data System (ADS)
Demisse, N. S.; Bitew, M. M.; Gebremichael, M.
2012-12-01
The effect of rainfall variability on our ability to forecast flooding events has been poorly studied in the complex terrain regions of Ethiopia. In order to establish the relation between rainfall variability and stream flow, we deployed 24 rain gauges across the Birr watershed, a medium-sized mountainous watershed with an area of 3000 km2 and elevations ranging between 1435 m.a.s.l. and 3400 m.a.s.l. in the central Ethiopian highlands. Rainfall from the 2012 summer monsoon, recorded at a high temporal resolution of 15-minute intervals, and stream flow, recorded at an hourly interval at three sub-watershed locations representing different scales, were used in this study. Based on the data obtained from the rain gauges and stream flow observations, we quantify the extent of temporal and spatial variability of rainfall across the watershed using standard statistical measures including the mean, standard deviation and coefficient of variation. We also establish a rainfall-runoff modeling system using a physically based, distributed hydrological model, the Soil and Water Assessment Tool (SWAT), and examine the effect of rainfall variability on stream flow prediction. The accuracy of predicted stream flow is measured through direct comparison with observed flooding events. The results demonstrate the significance of the relation between stream flow prediction and rainfall variability for understanding runoff generation mechanisms at the watershed scale, determining dominant water balance components, and assessing the effect of variability on the accuracy of flood forecasting activities.
USL/DBMS NASA/PC R and D project C programming standards
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Moreau, Dennis R.
1984-01-01
A set of programming standards intended to promote reliability, readability, and portability of C programs written for PC research and development projects is established. These standards must be adhered to except where reasons for deviation are clearly identified and approved by the PC team. Any approved deviation from these standards must also be clearly documented in the pertinent source code.
Vieira, Carlos Felipe Delmondes; Lima, Márcia Maria Oliveira; Costa, Henrique Silveira; Diniz, Karen Marina Alves; Guião, João Paulo Lemos; Alves, Frederico Lopes; Maciel, Emílio Henrique; Brandao, Vanessa Gomes; Figueiredo, Pedro Henrique Scheidt
2016-06-01
The autonomic maneuvers are simple methods to evaluate autonomic balance, but the association between autonomic maneuvers and heart rate variability (HRV) in hemodialysis patients remains unknown. This study aimed to evaluate the correlation between HRV and respiratory sinus arrhythmia (RSA) and Valsalva maneuver (VM) indexes in hemodialysis patients and to compare two methods for RSA index acquisition. Forty-eight volunteers on hemodialysis (66.7 % men) were evaluated by VM, RSA, and 24 h Holter monitoring. At the VM, the Valsalva index (VI) was the variable considered. In the RSA, the ratio and difference between the RR intervals of the inspiratory and expiratory phases (E:I and E-I, respectively) were considered by the traditional form (average of respiratory cycles) and by independent respiratory cycles (E:Iindep and E-Iindep). The HRV indexes evaluated were the standard deviation of all normal RR intervals (SDNN), standard deviation of sequential 5-min RR interval means (SDANN), root mean square of the successive differences (rMSSD) and percentage of adjacent RR intervals with difference of duration greater than 50 ms (pNN50). The SDNN and SDANN showed significant correlation with all classic indexes of RSA (E:I: r = 0.62, 0.55, respectively; E-I: r = 0.64, 0.57, respectively), E:Iindep (r = 0.59, 0.54, respectively), E-Iindep (r = 0.47, 0.43, respectively) and VI (r = 0.42, 0.34, respectively). Significant correlation of rMSSD with E:I (r = 0.37), E-I (r = 0.41) and E:Iindep (r = 0.34) was also observed. There was no association of any variable with pNN50. Higher values were observed for all variables with the independent cycles method (p < 0.05). The autonomic maneuvers, especially RSA, are useful methods to evaluate cardiac autonomic function in hemodialysis patients. The acquisition of the RSA index by independent cycles should not be used in this population.
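The time-domain HRV indexes named in this abstract are simple functions of the RR-interval series. A minimal sketch with hypothetical intervals follows; SDANN, which requires 5-minute segment means over a long recording, is omitted:

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """Common time-domain HRV indexes from RR intervals in milliseconds.

    SDNN  - standard deviation of all normal RR intervals
    rMSSD - root mean square of successive differences
    pNN50 - percentage of adjacent intervals differing by more than 50 ms
    """
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)
    sdnn = rr.std(ddof=1)
    rmssd = np.sqrt(np.mean(diff**2))
    pnn50 = 100.0 * np.mean(np.abs(diff) > 50.0)
    return sdnn, rmssd, pnn50

rr = [812, 790, 845, 860, 802, 778, 825, 870]  # hypothetical RR series (ms)
print(hrv_time_domain(rr))
```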
NASA Astrophysics Data System (ADS)
Heiri, O.; Birks, H. J. B.; Brooks, S. J.; Velle, G.; Willassen, E.
An important aspect when applying organism-based palaeolimnological methods to sediment cores is the inherent variability of fossil assemblages within a lake basin. Subfossil chironomids in lake sediments have been used extensively to quantify past summer air and water temperatures. However, little is known on how heterogeneous fossil distribution affects these estimates. In an effort to assess this variability we took a total of 20 surface sediment samples each in three small and shallow (7-9 m water depth) Norwegian lakes. In every lake two transects of seven samples were taken from the centre of the lake towards the littoral and six samples in the deepest part of the lake basin. Although the fossil assemblages were generally very similar within a lake basin, there was - in all three lakes - a distinct shift in the abundances of chironomid taxa towards the littoral (water depth explaining 10-18% of the total variance in the percentage data as assessed by a Detrended Canonical Correspondence Analysis). When we applied to our data a quantitative chironomid-July air temperature transfer-function based on surface sediments from the deepest parts of 153 Norwegian lakes, the variability of reconstructed temperatures in our three study lakes was only slightly smaller in the 6 deep-water samples (standard deviations (SD) of 0.48, 0.52 and 0.58 °C) than in all the 20 samples (SD of 0.55, 0.56 and 0.59 °C). Our results suggest that within-lake variability of subfossil chironomid assemblages can account for a significant part of the overall prediction error of the chironomid-July air temperature model of 1.03 °C. Furthermore, the lack of a clear trend in inferred values towards the littoral and the similar standard deviation of the total samples as compared to the deep-water samples suggest that the Norwegian transfer-function, though calibrated on samples from the deepest part of the lake, may also be applicable to sediment cores from closer to the lake shore. It remains to be tested, however, if this holds true in deeper lakes than the ones sampled in our study.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tomita, Tomohiko; Yanai, Michio
The link between the Asian monsoon and the El Nino/Southern Oscillation (ENSO) has been demonstrated by a number of studies. This study examines two ENSO withdrawal periods and discusses if the Asian monsoon played a role in the differences between them. The 1986 event occurred in the latter half of 1986 and retreated in 1988. The 1951 and 1991 events were similar to each other and seemed to continue to the second year after onset and not to have the clear La Nina phase after the events. In the central and eastern Pacific, three variables progress in phase with the ENSO cycle: sea surface temperature (SST), heat source (Q1), and divergence. Correlation coefficients were calculated and examined with the mean SST on the equator and with the standard deviation of the interannual components of SST. In the central and eastern Pacific, the standard deviation is large and the three correlation coefficients are large (over 0.6). Strong air-sea interaction associated with the ENSO cycle is deduced. In the Indian Ocean and the western Pacific, the correlation coefficients with SST become small rapidly, while the correlation coefficient between Q1 and the divergence is still large. The interannual variability of SST may not be crucial for those of Q1 and of the divergence in this region because of the potential to generate well organized convection through the high mean SST. This suggests that various factors, such as effects from mid-latitudes, may modify the interannual variability in the region. To examine the effects of the Asian winter monsoon, the anomalous wind field at 850 hPa was investigated. The conditions of the Asian winter monsoon were quite different between the withdrawal periods in the 1986 and 1991 ENSO events. The Asian winter monsoon seems to be a factor modifying the ENSO cycle, especially in the retreat periods. In addition, the SST from the tropical Indian Ocean to the western Pacific may be important for the modulation of the ENSO/monsoon system. 9 refs., 10 figs.
Vuichard, N.; Papale, D.
2015-07-13
In this study, exchanges of carbon, water and energy between the land surface and the atmosphere are monitored by the eddy covariance technique at the ecosystem level. Currently, the FLUXNET database contains more than 500 registered sites, and up to 250 of them share data (free fair-use data set). Many modelling groups use the FLUXNET data set for evaluating ecosystem models' performance, but this requires uninterrupted time series for the meteorological variables used as input. Because original in situ data often contain gaps, from very short (few hours) up to relatively long (some months) ones, we develop a new and robust method for filling the gaps in meteorological data measured at site level. Our approach has the benefit of making use of continuous data available globally (ERA-Interim) and a high temporal resolution spanning from 1989 to today. These data are, however, not measured at site level, and for this reason a method to downscale and correct the ERA-Interim data is needed. We apply this method to the level 4 data (L4) from the La Thuile collection, freely available after registration under a fair-use policy. The performance of the developed method varies across sites and is also a function of the meteorological variable. On average over all sites, applying the bias correction method to the ERA-Interim data reduced the mismatch with the in situ data by 10 to 36 %, depending on the meteorological variable considered. In comparison to the internal variability of the in situ data, the root mean square error (RMSE) between the in situ data and the unbiased ERA-I (ERA-Interim) data remains relatively large (on average over all sites, from 27 to 76 % of the standard deviation of in situ data, depending on the meteorological variable considered). The performance of the method remains poor for the wind speed field, in particular regarding its capacity to conserve a standard deviation similar to the one measured at FLUXNET stations.
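The headline comparison, RMSE between the bias-corrected ERA-Interim series and the in situ series expressed relative to the in situ standard deviation, can be written compactly. This is illustrative only; the paper's downscaling and bias-correction steps are not shown:

```python
import numpy as np

def rmse_relative_to_sd(in_situ, gap_filled):
    """RMSE of a gap-filling product against in situ data, as a fraction of
    the internal variability (standard deviation) of the in situ series.
    The study reported values of roughly 0.27-0.76, depending on variable."""
    in_situ = np.asarray(in_situ, dtype=float)
    gap_filled = np.asarray(gap_filled, dtype=float)
    rmse = np.sqrt(np.mean((gap_filled - in_situ) ** 2))
    return rmse / in_situ.std(ddof=1)
```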
NASA Astrophysics Data System (ADS)
Scheifinger, Helfried; Menzel, Annette; Koch, Elisabeth; Peter, Christian; Ahas, Rein
2002-11-01
A data set of 17 phenological phases from Germany, Austria, Switzerland and Slovenia spanning the time period from 1951 to 1998 has been made available for analysis together with a gridded temperature data set (1° × 1° grid) and the North Atlantic Oscillation (NAO) index time series. The disturbances of the westerlies constitute the main atmospheric source for the temporal variability of phenological events in Europe. The trend, the standard deviation and the discontinuity of the phenological time series at the end of the 1980s can, to a great extent, be explained by the NAO. A number of factors modulate the influence of the NAO in time and space. The seasonal northward shift of the westerlies overlaps with the sequence of phenological spring phases, thereby gradually reducing its influence on the temporal variability of phenological events with progression of spring (temporal loss of influence). This temporal process is reflected by a pronounced decrease in trend and standard deviation values and common variability with the NAO with increasing year-day. The reduced influence of the NAO with increasing distance from the Atlantic coast is not only apparent in studies based on the data set of the International Phenological Gardens, but also in the data set of this study with a smaller spatial extent (large-scale loss of influence). The common variance between phenological and NAO time series displays a discontinuous drop from the European Atlantic coast towards the Alps. On a local and regional scale, mountainous terrain reduces the influence of the large-scale atmospheric flow from the Atlantic via a proposed decoupling mechanism. Valleys in mountainous terrain have the inclination to harbour temperature inversions over extended periods of time during the cold season, which isolate the valley climate from the large-scale atmospheric flow at higher altitudes. Most phenological stations reside at valley bottoms and are thus largely decoupled in their temporal variability from the influence of the westerly flow regime (local-scale loss of influence). This study corroborates an increasing number of similar investigations that find that vegetation does react in a sensitive way to variations of its atmospheric environment across various temporal and spatial scales.
Ran, Yang; Su, Rongtao; Ma, Pengfei; Wang, Xiaolin; Zhou, Pu; Si, Lei
2016-05-10
We present a new quantitative index based on standard deviation to measure the homogeneity of spectral lines in a fiber amplifier system, so as to find the relation between the stimulated Brillouin scattering (SBS) threshold and the homogeneity of the corresponding spectral lines. A theoretical model is built and a simulation framework has been established to estimate the SBS threshold when input spectra with different homogeneities are set. In our experiment, by setting the phase modulation voltage to a constant value and the modulation frequency to different values, spectral lines with different homogeneities can be obtained. The experimental results show that the SBS threshold is negatively correlated with the standard deviation of the modulated spectrum, which is in good agreement with the theoretical results. When the phase modulation voltage is confined to 10 V and the modulation frequency is set to 80 MHz, the standard deviation of the modulated spectrum equals 0.0051, the lowest value in our experiment; accordingly, the highest SBS threshold is achieved there. This standard deviation can be a good quantitative index for evaluating the power-scaling potential of a fiber amplifier system and a design guideline for better suppressing SBS.
Babjack, Destiny L; Cernicky, Brandon; Sobotka, Andrew J; Basler, Lee; Struthers, Devon; Kisic, Richard; Barone, Kimberly; Zuccolotto, Anthony P
2015-09-01
Using differing computer platforms and audio output devices to deliver audio stimuli often introduces (1) substantial variability across labs and (2) variable time between the intended and actual sound delivery (the sound onset latency). Fast, accurate audio onset latencies are particularly important when audio stimuli need to be delivered precisely as part of studies that depend on accurate timing (e.g., electroencephalographic, event-related potential, or multimodal studies), or in multisite studies in which standardization and strict control over the computer platforms used is not feasible. This research describes the variability introduced by using differing configurations and introduces a novel approach to minimizing audio sound latency and variability. A stimulus presentation and latency assessment approach is presented using E-Prime and Chronos (a new multifunction, USB-based data presentation and collection device). The present approach reliably delivers audio stimuli with low latencies that vary by ≤1 ms, independent of hardware and Windows operating system (OS)/driver combinations. The Chronos audio subsystem adopts a buffering, aborting, querying, and remixing approach to the delivery of audio, to achieve a consistent 1-ms sound onset latency for single-sound delivery, and precise delivery of multiple sounds that achieves standard deviations of 1/10th of a millisecond without the use of advanced scripting. Chronos's sound onset latencies are small, reliable, and consistent across systems. Testing of standard audio delivery devices and configurations highlights the need for careful attention to consistency between labs, experiments, and multiple study sites in their hardware choices, OS selections, and adoption of audio delivery systems designed to sidestep the audio latency variability issue.
What is the uncertainty principle of non-relativistic quantum mechanics?
NASA Astrophysics Data System (ADS)
Riggs, Peter J.
2018-05-01
After more than ninety years of discussions over the uncertainty principle, there is still no universal agreement on what the principle states. The Robertson uncertainty relation (incorporating standard deviations) is given as the mathematical expression of the principle in most quantum mechanics textbooks. However, the uncertainty principle is not merely a statement of what any of the several uncertainty relations affirm. It is suggested that a better approach would be to present the uncertainty principle as a statement about the probability distributions of incompatible variables and the resulting restrictions on quantum states.
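For reference, the Robertson relation mentioned in this abstract, with the standard deviations defined in the usual way:

```latex
\sigma_A \,\sigma_B \;\geq\; \tfrac{1}{2}\,\bigl|\langle[\hat{A},\hat{B}]\rangle\bigr|,
\qquad
\sigma_A \;=\; \sqrt{\langle\hat{A}^{2}\rangle-\langle\hat{A}\rangle^{2}} .
```

For position and momentum, the commutator [x̂, p̂] = iħ recovers the familiar σ_x σ_p ≥ ħ/2.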
Earth Global Reference Atmospheric Model (GRAM) Overview and Updates: DOLWG Meeting
NASA Technical Reports Server (NTRS)
White, Patrick
2017-01-01
What is Earth-GRAM (Global Reference Atmospheric Model): Provides monthly mean and standard deviation for any point in atmosphere - Monthly, Geographic, and Altitude Variation; Earth-GRAM is a C++ software package - Currently distributed as Earth-GRAM 2016; Atmospheric variables included: pressure, density, temperature, horizontal and vertical winds, speed of sound, and atmospheric constituents; Used by engineering community because of ability to create dispersions in atmosphere at a rapid runtime - Often embedded in trajectory simulation software; Not a forecast model; Does not readily capture localized atmospheric effects.
Mechanism-Based Design for High-Temperature, High-Performance Composites. Book 3.
1997-09-01
[Equations (77) and (78), which define strain measures in terms of n = e2 and ß = I - nn = e1e1 + e3e3, and the surrounding Cartesian-tensor derivation are garbled in extraction.] ...relation, the particles most susceptible to fracture are those at the larger size range of the population. Thus, with increasing standard deviation of... strength variability is associated exclusively with a single population of flaws. The second is based on comparisons of mean strengths of two or more
Dowry Deaths: Response to Weather Variability in India.
Sekhri, Sheetal; Storeygard, Adam
2014-11-01
We examine the effect of rainfall shocks on dowry deaths using data from 583 Indian districts for 2002-2007. We find that a one standard deviation decline in annual rainfall from the local mean increases reported dowry deaths by 7.8 percent. Wet shocks have no apparent effect. We examine patterns of other crimes to investigate whether an increase in general unrest during economic downturns explains the results but do not find supportive evidence. Women's political representation in the national parliament has no apparent mitigating effect on dowry deaths.
Simulation Study Using a New Type of Sample Variance
NASA Technical Reports Server (NTRS)
Howe, D. A.; Lainson, K. J.
1996-01-01
We evaluate with simulated data a new type of sample variance for the characterization of frequency stability. The new statistic (referred to as TOTALVAR and its square root TOTALDEV) is a better predictor of long-term frequency variations than the present sample Allan deviation. The statistical model uses the assumption that a time series of phase or frequency differences is wrapped (periodic) with overall frequency difference removed. We find that the variability at long averaging times is reduced considerably for the five models of power-law noise commonly encountered with frequency standards and oscillators.
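For context, the conventional statistic that TOTALVAR/TOTALDEV is compared against is the Allan variance, σ_y²(τ) = ½⟨(ȳ_{k+1} − ȳ_k)²⟩. A minimal non-overlapping sketch follows; TOTALDEV's wrapped (periodic) extension of the drift-removed series, described in the abstract, is not reproduced here:

```python
import numpy as np

def allan_deviation(freq, m):
    """Non-overlapping Allan deviation at averaging factor m.

    `freq` is a series of fractional-frequency values sampled at a fixed
    interval tau0; the result corresponds to averaging time m * tau0.
    """
    y = np.asarray(freq, dtype=float)
    n = len(y) // m
    ybar = y[: n * m].reshape(n, m).mean(axis=1)  # averages over m samples
    avar = 0.5 * np.mean(np.diff(ybar) ** 2)      # Allan variance
    return np.sqrt(avar)

# e.g. white-FM-like synthetic data
rng = np.random.default_rng(3)
print(allan_deviation(rng.normal(0.0, 1e-11, 4096), m=16))
```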
Simulated laser fluorosensor signals from subsurface chlorophyll distributions
NASA Technical Reports Server (NTRS)
Venable, D. D.; Khatun, S.; Punjabi, A.; Poole, L.
1986-01-01
A semianalytic Monte Carlo model has been used to simulate laser fluorosensor signals returned from subsurface distributions of chlorophyll. This study assumes the only constituent of the ocean medium is the common coastal zone dinoflagellate Prorocentrum minimum. The concentration is represented by Gaussian distributions in which the location of the distribution maximum and the standard deviation are variable. Most of the qualitative features observed in the fluorescence signal for total chlorophyll concentrations up to 1.0 microg/liter can be accounted for with a simple analytic solution assuming a rectangular chlorophyll distribution function.
Prediction of moisture variation during composting process: A comparison of mathematical models.
Wang, Yongjiang; Ai, Ping; Cao, Hongliang; Liu, Zhigang
2015-10-01
This study was carried out to develop and compare three models for simulating the moisture content during composting. Model 1 described changes in water content using mass balance, while Model 2 introduced a liquid-gas transferred water term. Model 3 predicted changes in moisture content without complex degradation kinetics. Average deviations for Models 1-3 were 8.909, 7.422 and 5.374 kg m(-3), while standard deviations were 10.299, 8.374 and 6.095, respectively. The results showed that Model 1 is complex and involves more state variables, but can be used to reveal the effect of humidity on moisture content. Model 2 tested the hypothesis of liquid-gas transfer and was shown to be capable of predicting moisture content during composting. Model 3 could predict water content well without considering degradation kinetics. Copyright © 2015 Elsevier Ltd. All rights reserved.
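The two comparison criteria used above can be computed directly from model residuals. A minimal sketch, reading "average deviation" as the mean absolute residual (an assumption; the paper's exact definition may differ):

```python
import numpy as np

def deviation_stats(observed, predicted):
    """Mean absolute deviation and standard deviation of model residuals,
    the two criteria used here to compare moisture-content models."""
    resid = np.asarray(predicted, dtype=float) - np.asarray(observed, dtype=float)
    return np.mean(np.abs(resid)), resid.std(ddof=1)
```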
Employing Sensitivity Derivatives for Robust Optimization under Uncertainty in CFD
NASA Technical Reports Server (NTRS)
Newman, Perry A.; Putko, Michele M.; Taylor, Arthur C., III
2004-01-01
A robust optimization is demonstrated on a two-dimensional inviscid airfoil problem in subsonic flow. Given uncertainties in statistically independent, random, normally distributed flow parameters (input variables), an approximate first-order statistical moment method is employed to represent the Computational Fluid Dynamics (CFD) code outputs as expected values with variances. These output quantities are used to form the objective function and constraints. The constraints are cast in probabilistic terms; that is, the probability that a constraint is satisfied is greater than or equal to some desired target probability. Gradient-based robust optimization of this stochastic problem is accomplished through use of both first and second-order sensitivity derivatives. For each robust optimization, the effect of increasing both input standard deviations and target probability of constraint satisfaction are demonstrated. This method provides a means for incorporating uncertainty when considering small deviations from input mean values.
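A minimal sketch of the first-order statistical moment method for independent normal inputs, using finite-difference sensitivity derivatives in place of the CFD code's derivatives (toy function; illustrative only):

```python
import numpy as np

def first_order_moments(f, mu, sigma, h=1e-6):
    """Approximate mean and standard deviation of f(x) for independent,
    normally distributed inputs with means `mu` and standard deviations
    `sigma`, via E[f] ~= f(mu) and Var[f] ~= sum_i (df/dx_i)^2 * sigma_i^2."""
    mu = np.asarray(mu, dtype=float)
    grad = np.empty_like(mu)
    for i in range(mu.size):
        x = mu.copy()
        x[i] += h
        grad[i] = (f(x) - f(mu)) / h  # forward-difference df/dx_i at the mean
    var = np.sum((grad * np.asarray(sigma, dtype=float)) ** 2)
    return f(mu), np.sqrt(var)

# Toy example: f(x) = x0 * x1 with modest input scatter.
mean, sd = first_order_moments(lambda x: x[0] * x[1], [2.0, 3.0], [0.1, 0.15])
print(mean, sd)
```

The output mean and variance computed this way are what feed the probabilistic constraints in the robust optimization described above.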
Sáez, Carlos; Robles, Montserrat; García-Gómez, Juan M
2017-02-01
Biomedical data may be composed of individuals generated from distinct, meaningful sources. Due to possible contextual biases in the processes that generate data, there may exist an undesirable and unexpected variability among the probability distribution functions (PDFs) of the source subsamples, which, when uncontrolled, may lead to inaccurate or unreproducible research results. Classical statistical methods may have difficulty uncovering such variability when dealing with multi-modal, multi-type, multi-variate data. This work proposes two metrics for the analysis of stability among multiple data sources, robust to the aforementioned conditions, and defined in the context of data quality assessment. Specifically, a global probabilistic deviation and a source probabilistic outlyingness metric are proposed. The first provides a bounded degree of the global multi-source variability, designed as an estimator equivalent to the notion of normalized standard deviation of PDFs. The second provides a bounded degree of the dissimilarity of each source to a latent central distribution. The metrics are based on the projection of a simplex geometrical structure constructed from the Jensen-Shannon distances among the source PDFs. The metrics have been evaluated and demonstrated their correct behaviour on a simulated benchmark and with real multi-source biomedical data using the UCI Heart Disease data set. Biomedical data quality assessment based on the proposed stability metrics may improve the efficiency and effectiveness of biomedical data exploitation and research.
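The geometric construction starts from pairwise Jensen-Shannon distances among the source PDFs. A minimal sketch of that first step with hypothetical discrete distributions; the simplex projection and the two stability metrics themselves are not reproduced:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

# Hypothetical per-source PDFs over the same bins (each row sums to 1).
sources = np.array([
    [0.10, 0.20, 0.40, 0.30],
    [0.12, 0.18, 0.42, 0.28],
    [0.30, 0.30, 0.20, 0.20],  # an outlying source
])

n = len(sources)
d = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        d[i, j] = jensenshannon(sources[i], sources[j])  # JS distance

# The paper embeds this distance matrix in a simplex and measures global
# spread (global probabilistic deviation) and each source's distance to a
# latent central distribution (source probabilistic outlyingness).
print(np.round(d, 3))
```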
Continuous performance task in ADHD: Is reaction time variability a key measure?
Levy, Florence; Pipingas, Andrew; Harris, Elizabeth V; Farrow, Maree; Silberstein, Richard B
2018-01-01
To compare the use of the Continuous Performance Task (CPT) reaction time variability (intraindividual variability or standard deviation of reaction time), as a measure of vigilance in attention-deficit hyperactivity disorder (ADHD), and stimulant medication response, utilizing a simple CPT X-task vs an A-X-task. Comparative analyses of two separate X-task vs A-X-task data sets, and subgroup analyses of performance on and off medication were conducted. The CPT X-task reaction time variability had a direct relationship to ADHD clinician severity ratings, unlike the CPT A-X-task. Variability in X-task performance was reduced by medication compared with the children's unmedicated performance, but this effect did not reach significance. When the coefficient of variation was applied, severity measures and medication response were significant for the X-task, but not for the A-X-task. The CPT-X-task is a useful clinical screening test for ADHD and medication response. In particular, reaction time variability is related to default mode interference. The A-X-task is less useful in this regard.
[Blood pressure variability and left ventricular hypertrophy in arterial hypertension].
Amodeo, C; Martins, S M; Silva Júnior, O; Barros, L M; Batlouni, M; Sousa, J E
1993-05-01
To evaluate the correlation of left ventricular hypertrophy with blood pressure variability during day and night time as well as throughout the 24 h period. Fifteen patients with mild to moderate essential hypertension underwent bidimensional echocardiographic study and 24 h ambulatory blood pressure monitoring. Left ventricular mass was calculated according to previously validated formulas. The standard deviations of the mean blood pressures during day-time, night-time and the 24 h period were taken as blood pressure variability indices. The mean age of the group was 42 years; 9 patients were male and all were white. This study showed that only the systolic and diastolic blood pressure variability during the 24 h period correlated significantly with left ventricular mass (r = 0.53 and p < 0.05; r = 0.58 and p < 0.05, respectively). There was no significant correlation of the day-time and night-time pressure variability with left ventricular mass. The systolic and diastolic blood pressure variability during the 24 h period may be one of the many determinants of left ventricular hypertrophy in patients with mild to moderate hypertension.
75 FR 67093 - Iceberg Water Deviating From Identity Standard; Temporary Permit for Market Testing
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-01
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration [Docket No. FDA-2010-P-0517] Iceberg Water Deviating From Identity Standard; Temporary Permit for Market Testing AGENCY: Food and Drug... from the requirements of the standards of identity issued under section 401 of the Federal Food, Drug...
78 FR 2273 - Canned Tuna Deviating From Identity Standard; Temporary Permit for Market Testing
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-10
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration [Docket No. FDA-2012-P-1189] Canned Tuna Deviating From Identity Standard; Temporary Permit for Market Testing AGENCY: Food and Drug... interstate shipment of experimental packs of food varying from the requirements of standards of identity...
Upgraded FAA Airfield Capacity Model. Volume 2. Technical Description of Revisions
1981-02-01
[Extraction fragments: Figure 3-1, a time-axis diagram of single-runway operations, in which t_k denotes the time at which departure k is released; glossary entries defining SIGMAR as the standard deviation of the arrival runway occupancy time (a companion parameter gives the standard deviation of the interarrival time) and SINGLE as a program subroutine (definition truncated).]
Methods of editing cloud and atmospheric layer affected pixels from satellite data
NASA Technical Reports Server (NTRS)
Nixon, P. R.; Wiegand, C. L.; Richardson, A. J.; Johnson, M. P. (Principal Investigator)
1982-01-01
Subvisible cirrus clouds (SCi) were easily distinguished in mid-infrared (MIR) TIROS-N daytime data from south Texas and northeast Mexico. The MIR (3.55-3.93 micrometer) pixel digital count means of the SCi affected areas were more than 3.5 standard deviations on the cold side of the scene means. (These standard deviations were made free of the effects of unusual instrument error by factoring out the Ch 3 MIR noise on the basis of detailed examination of noisy and noise-free pixels). SCi affected areas in the IR Ch 4 (10.5-11.5 micrometer) appeared cooler than the general scene, but were not as prominent as in Ch 3, being less than 2 standard deviations from the scene mean. Ch 3 and 4 standard deviations and coefficients of variation are not reliable indicators, by themselves, of the presence of SCi because land features can have similar statistical properties.
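The screening rule in this record, flagging areas whose MIR digital-count means lie more than 3.5 standard deviations on the cold side of the scene mean, reduces to a z-score test. A minimal sketch; the noise-correction step is omitted, and treating "cold" as low counts is an assumption made here for illustration:

```python
import numpy as np

def flag_cold_outliers(pixel_means, threshold=3.5):
    """Flag areas more than `threshold` standard deviations on the cold
    side of the scene mean (cold assumed to mean low digital counts)."""
    x = np.asarray(pixel_means, dtype=float)
    z = (x - x.mean()) / x.std(ddof=1)
    return z < -threshold

# Hypothetical scene: a few anomalously cold pixel-area means.
scene = np.concatenate([np.random.default_rng(1).normal(200, 5, 500), [150, 148]])
print(np.where(flag_cold_outliers(scene))[0])
```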
A Taxonomy of Delivery and Documentation Deviations During Delivery of High-Fidelity Simulations.
McIvor, William R; Banerjee, Arna; Boulet, John R; Bekhuis, Tanja; Tseytlin, Eugene; Torsher, Laurence; DeMaria, Samuel; Rask, John P; Shotwell, Matthew S; Burden, Amanda; Cooper, Jeffrey B; Gaba, David M; Levine, Adam; Park, Christine; Sinz, Elizabeth; Steadman, Randolph H; Weinger, Matthew B
2017-02-01
We developed a taxonomy of simulation delivery and documentation deviations noted during a multicenter, high-fidelity simulation trial that was conducted to assess practicing physicians' performance. Eight simulation centers sought to implement standardized scenarios over 2 years. Rules, guidelines, and detailed scenario scripts were established to facilitate reproducible scenario delivery; however, pilot trials revealed deviations from those rubrics. A taxonomy with hierarchically arranged terms that define a lack of standardization of simulation scenario delivery was then created to aid educators and researchers in assessing and describing their ability to reproducibly conduct simulations. Thirty-six types of delivery or documentation deviations were identified from the scenario scripts and study rules. Using a Delphi technique and open card sorting, simulation experts formulated a taxonomy of high-fidelity simulation execution and documentation deviations. The taxonomy was iteratively refined and then tested by 2 investigators not involved with its development. The taxonomy has 2 main classes, simulation center deviation and participant deviation, which are further subdivided into as many as 6 subclasses. Inter-rater classification agreement using the taxonomy was 74% or greater for each of the 7 levels of its hierarchy. Cohen kappa calculations confirmed substantial agreement beyond that expected by chance. All deviations were classified within the taxonomy. This is a useful taxonomy that standardizes terms for simulation delivery and documentation deviations, facilitates quality assurance in scenario delivery, and enables quantification of the impact of deviations upon simulation-based performance assessment.
Ground-based total ozone column measurements and their diurnal variability
NASA Astrophysics Data System (ADS)
Silva, Abel A.
2013-07-01
Brewer spectrophotometers were set up in three tropical sites of South America (in the Bolivian Altiplano and seashore and biomass burning areas of Brazil) to measure the total ozone column (TOC). Only TOC measurements with uncertainties ≤1% (1σ) were considered. Typically, the standard deviation for the diurnal sets of measurements was predominantly ≤1% for two of these sites. The average variability in TOC ranged from 6.3 Dobson units (DU) to 16.8 DU, and the largest variability reached 54.3 DU. Comparisons between ground-based and satellite (Total Ozone Mapping Spectrometer (TOMS)) data showed good agreement with coefficients of determination ≤0.83. However, the quality of the ground-based measurements was affected by the weather condition, especially for one of the sites. Visual observation of the sky from the ground during the measurements with one of the Brewers added to the satellite data of reflectivity and aerosol index supports that statement.
Garrido-López, Alvaro; Esquiu, Vanesa; Tena, María Teresa
2006-08-18
A pressurized fluid extraction (PFE) and gas chromatography-flame ionization detection (GC-FID) method is proposed to determine the slip agents in polyethylene (PE) films. The study of PFE variables was performed using a fractional factorial design (FFD) for screening and a central composite design (CCD) for optimizing the main variables obtained from the Pareto charts. The variables studied include temperature, static time, percentage of cyclohexane and the number of extraction cycles. The final condition selected was pure isopropanol (two cycles) at 105 degrees C for 16 min. The recovery of spiked oleamide and erucamide was around 100%. The repeatability of the method, expressed as relative standard deviation, was 9.6% for oleamide and 8% for erucamide. Finally, the method was applied to determine oleamide and erucamide in several polyethylene films and the results were statistically equal to those obtained by pyrolysis and gas-phase chemiluminescence (CL).
Violation of Leggett-type inequalities in the spin-orbit degrees of freedom of a single photon
NASA Astrophysics Data System (ADS)
Cardano, Filippo; Karimi, Ebrahim; Marrucci, Lorenzo; de Lisio, Corrado; Santamato, Enrico
2013-09-01
We report the experimental violation of Leggett-type inequalities for a hybrid entangled state of spin and orbital angular momentum of a single photon. These inequalities give a physical criterion to verify the possible validity of a class of hidden-variable theories, originally named “crypto nonlocal,” that are not excluded by the violation of Bell-type inequalities. In our case, the tested theories assume the existence of hidden variables associated with independent degrees of freedom of the same particle, while admitting the possibility of an influence between the two measurements, i.e., the so-called contextuality of observables. We observe a violation of the Leggett inequalities for a range of experimental inputs, with a maximum violation of seven standard deviations, thus ruling out this class of hidden-variable models with a high level of confidence.
Orpin, Alan R; Ridd, Peter V; Thomas, Séverine; Anthony, Kenneth R N; Marshall, Paul; Oliver, Jamie
2004-10-01
Coastal development activities can cause local increases in turbidity and sedimentation. This study characterises the spatial and temporal variability of turbidity near an inshore fringing coral reef in the central Great Barrier Reef, under a wide range of natural conditions. Based on the observed natural variability, we outline a risk management scheme to minimise the impact of construction-related turbidity increases. Comparison of control and impact sites proved unusable for real-time management of turbidity risks. Instead, we suggest using one standard deviation from ambient conditions as a possible conservative upper limit of an acceptable projected increase in turbidity. In addition, the use of regional weather forecast as a proxy for natural turbidity is assessed. This approach is simple and cheap but also has limitations in very rough conditions, when an anthropogenic turbidity increase could prove fatal to corals that are already stressed under natural conditions.
Spatial generalised linear mixed models based on distances.
Melo, Oscar O; Mateu, Jorge; Melo, Carlos E
2016-10-01
Risk models derived from environmental data have been widely shown to be effective in delineating geographical areas of risk because they are intuitively easy to understand. We present a new method based on distances, which allows the modelling of continuous and non-continuous random variables through distance-based spatial generalised linear mixed models. The parameters are estimated using Markov chain Monte Carlo maximum likelihood, which is a feasible and a useful technique. The proposed method depends on a detrending step built from continuous or categorical explanatory variables, or a mixture among them, by using an appropriate Euclidean distance. The method is illustrated through the analysis of the variation in the prevalence of Loa loa among a sample of village residents in Cameroon, where the explanatory variables included elevation, together with maximum normalised-difference vegetation index and the standard deviation of normalised-difference vegetation index calculated from repeated satellite scans over time. © The Author(s) 2013.
Clemente-Suárez, Vicente Javier; Arroyo-Toledo, Juan Jaime
2018-01-25
The aim of the present research was to analyze the autonomic response in a group of trained swimmers before and after a 4-week period of high-intensity interval training (HIT). Heart rate variability was analyzed in 14 swimmers (16.2 ± 2.6 years, 169.1 ± 10.2 cm and 61.3 ± 9.9 kg) in the basal condition and during a HIT session before and after completing the training period. The HIT session evaluated consisted of 16 × 25 m at maximum speed, resting 30 s between sets. Participants combined aerobic training with tethered swimming and HIT sessions three times per week over a period of 4 weeks. Results showed a significant decrease (p < 0.05) in the standard deviation of the NN intervals (SDNN), the standard deviation of differences between adjacent NN intervals (SDSD), and the number of successive differences of intervals differing by more than 50 ms (NN50) after the training period. Results also showed higher parasympathetic activation as well as improvements in autonomic adaptation after the HIT training period.
Huang, Emily; Chern, Hueylan; O'Sullivan, Patricia; Cook, Brian; McDonald, Erik; Palmer, Barnard; Liu, Terrence; Kim, Edward
2014-10-01
Knot tying is a fundamental and crucial surgical skill. We developed a kinesthetic pedagogical approach that increases precision and economy of motion by explicitly teaching suture-handling maneuvers and studied its effects on novice performance. Seventy-four first-year medical students were randomized to learn knot tying via either the traditional or the novel "kinesthetic" method. After 1 week of independent practice, students were videotaped performing 4 tying tasks. Three raters scored deidentified videos using a validated visual analog scale. The groups were compared using analysis of covariance with practice knots as a covariate and visual analog scale score (range, 0 to 100) as the dependent variable. Partial eta-square was calculated to indicate effect size. Overall rater reliability was .92. The kinesthetic group scored significantly higher than the traditional group for individual tasks and overall, controlling for practice (all P < .004). The kinesthetic overall mean was 64.15 (standard deviation = 16.72) vs traditional 46.31 (standard deviation = 16.20; P < .001; effect size = .28). For novices, emphasizing kinesthetic suture handling substantively improved performance on knot tying. We believe this effect can be extrapolated to more complex surgical skills. Copyright © 2014 Elsevier Inc. All rights reserved.
Xiong, Qingang; Ramirez, Emilio; Pannala, Sreekanth; ...
2015-10-09
The impact of bubbling bed hydrodynamics on temporal variations in the exit tar yield for biomass fast pyrolysis was investigated using computational simulations of an experimental laboratory-scale reactor. A multi-fluid computational fluid dynamics model was employed to simulate the differential conservation equations in the reactor, and this was combined with a multi-component, multi-step pyrolysis kinetics scheme for biomass to account for chemical reactions. The predicted mean tar yields at the reactor exit appear to match corresponding experimental observations. Parametric studies predicted that increasing the fluidization velocity should improve the mean tar yield but increase its temporal variations. Increases in the mean tar yield coincide with reducing the diameter of sand particles or increasing the initial sand bed height. However, trends in tar yield variability are more complex than the trends in mean yield. The standard deviation in tar yield reaches a maximum with changes in sand particle size, and it increases with increases in initial bed height in the freely bubbling state while reaching a maximum in the slugging state.
Tracked ultrasound calibration studies with a phantom made of LEGO bricks
NASA Astrophysics Data System (ADS)
Soehl, Marie; Walsh, Ryan; Rankin, Adam; Lasso, Andras; Fichtinger, Gabor
2014-03-01
In this study, spatial calibration of tracked ultrasound was compared using a calibration phantom made of LEGO® bricks and two 3-D printed N-wire phantoms. METHODS: The accuracy and variance of calibrations were compared under a variety of operating conditions. Twenty trials were performed using an electromagnetic tracking device with a linear probe, and three trials were performed using varied probes, varied tracking devices and the three aforementioned phantoms. The accuracy and variance of spatial calibrations, assessed through the standard deviation and error of the 3-D image reprojection, were used to compare the calibrations produced from the phantoms. RESULTS: This study found no significant difference between the measured variables of the calibrations. The average standard deviation of multiple 3-D image reprojections with the highest performing printed phantom and those from the phantom made of LEGO® bricks differed by 0.05 mm, and the error of the reprojections differed by 0.13 mm. CONCLUSION: Given that the phantom made of LEGO® bricks is significantly less expensive, more readily available, and more easily modified than precision-machined N-wire phantoms, it promises to be a viable calibration tool, especially for quick laboratory research and proof-of-concept implementations of tracked ultrasound navigation.
Antarctic Surface Temperatures Using Satellite Infrared Data from 1979 Through 1995
NASA Technical Reports Server (NTRS)
Comiso, Josefino C.; Stock, Larry
1997-01-01
The large scale spatial and temporal variations of surface ice temperature over the Antarctic region are studied using infrared data derived from the Nimbus-7 Temperature Humidity Infrared Radiometer (THIR) from 1979 through 1985 and from the NOAA Advanced Very High Resolution Radiometer (AVHRR) from 1984 through 1995. Enhanced techniques suitable for the polar regions for cloud masking and atmospheric correction were used before converting radiances to surface temperatures. The observed spatial distribution of surface temperature is highly correlated with surface ice sheet topography and agrees well with ice station temperatures, with 2K to 4K standard deviations. The average surface ice temperature over the entire continent fluctuates by about 30K from summer to winter, while that over the Antarctic Plateau varies by about 45K. Interannual variations in surface temperature are highest at the Antarctic Plateau and the ice shelves (e.g., Ross and Ronne), with a periodic cycle of about 5 years and standard deviations of about 11K and 9K, respectively. Despite large temporal variability, however, especially in some regions, a regression analysis that includes removal of the seasonal cycle shows no apparent trend in temperature during the period 1979 through 1995.
Kwon, Deukwoo; Reis, Isildinha M
2015-08-12
When conducting a meta-analysis of a continuous outcome, estimated means and standard deviations from the selected studies are required in order to obtain an overall estimate of the mean effect and its confidence interval. If these quantities are not directly reported in the publications, they must be estimated from other reported summary statistics, such as the median, the minimum, the maximum, and quartiles. We propose a simulation-based estimation approach using the Approximate Bayesian Computation (ABC) technique for estimating mean and standard deviation based on various sets of summary statistics found in published studies. We conduct a simulation study to compare the proposed ABC method with the existing methods of Hozo et al. (2005), Bland (2015), and Wan et al. (2014). In the estimation of the standard deviation, our ABC method performs better than the other methods when data are generated from skewed or heavy-tailed distributions. The corresponding average relative error (ARE) approaches zero as sample size increases. In data generated from the normal distribution, our ABC performs well. However, the Wan et al. method is best for estimating standard deviation under normal distribution. In the estimation of the mean, our ABC method is best regardless of assumed distribution. ABC is a flexible method for estimating the study-specific mean and standard deviation for meta-analysis, especially with underlying skewed or heavy-tailed distributions. The ABC method can be applied using other reported summary statistics such as the posterior mean and 95 % credible interval when Bayesian analysis has been employed.
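The paper's ABC implementation is not reproduced here, but the idea can be sketched in a few lines: draw candidate (mean, SD) pairs, simulate samples under an assumed data model, and keep the candidates whose simulated summary statistics fall closest to the reported ones. Everything below (the normal data model, the prior ranges, the tolerance-free nearest-draw selection) is an illustrative assumption, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_mean_sd(median, q1, q3, n, n_draws=20_000, keep=200):
    """Toy ABC rejection sampler: recover (mean, sd) of a normal sample
    from its reported median and quartiles (hypothetical inputs)."""
    sd0 = (q3 - q1) / 1.35                      # IQR of a standard normal ~ 1.35 sd
    mus = rng.normal(median, 2 * sd0, n_draws)  # crude priors around moment guesses
    sds = rng.uniform(0.1 * sd0, 5 * sd0, n_draws)
    dist = np.empty(n_draws)
    for i, (m, s) in enumerate(zip(mus, sds)):
        sim = rng.normal(m, s, n)
        sq1, smed, sq3 = np.percentile(sim, [25, 50, 75])
        dist[i] = abs(smed - median) + abs(sq1 - q1) + abs(sq3 - q3)
    best = np.argsort(dist)[:keep]              # keep the closest draws
    return mus[best].mean(), sds[best].mean()

print(abc_mean_sd(median=10.0, q1=8.0, q3=12.5, n=50))
```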
Renz, Erik; Hackney, Madeleine; Hall, Courtney
2016-01-01
Intraocular lenses (IOLs) provide distance and near refraction and are becoming the standard for cataract surgery. Multifocal glasses increase variability of toe clearance in older adults navigating stairs and increase fall risk; however, little is known about the biomechanics of stair navigation in individuals with multifocal IOLs. This study compared clearance while ascending and descending stairs in individuals with monofocal versus multifocal IOLs. Eight participants with multifocal IOLs (4 men, 4 women; mean age = 66.5 yr, standard deviation [SD] = 6.26) and fifteen male participants with monofocal IOLs (mean age = 69.9 yr, SD = 6.9) underwent vision and mobility testing. Motion analysis recorded kinematics, and custom software calculated clearances in three-dimensional space. No significant differences were found between groups on minimum clearance or variability. Clearance differed for ascending versus descending stairs: the first step onto the stair had the greatest toe clearance during ascent, whereas the final step to the floor had the greatest heel clearance during descent. This preliminary study indicates that multifocal IOLs have biomechanical characteristics similar to monofocal IOLs. Given that step characteristics are related to fall risk, we can tentatively speculate that multifocal IOLs may carry no additional fall risk.
Whiteley, Greg S; Derry, Chris; Glasbey, Trevor; Fahey, Paul
2015-06-01
To investigate the reliability of commercial ATP bioluminometers and to document precision and variability measurements using known and quantitated standard materials. Four commercially branded ATP bioluminometers and their consumables were subjected to a series of controlled studies with quantitated materials in multiple repetitions of dilution series. The individual dilutions were applied directly to ATP swabs. To assess precision and reproducibility, each dilution step was tested in triplicate or quadruplicate and the RLU reading from each test point was recorded. Results across the multiple dilution series were normalized using the coefficient of variation. The results for pure ATP and bacterial ATP from suspensions of Staphylococcus epidermidis and Pseudomonas aeruginosa are presented graphically. The data indicate that precision and reproducibility are poor across all brands tested. Standard deviation was as high as 50% of the mean for all brands, and in the field, users are given no indication of this level of imprecision. The variability of commercial ATP bioluminometers and their consumables is unacceptably high with the current technical configuration. The advantage of speed of response is undermined by instrument imprecision expressed in the numerical scale of relative light units (RLU).
On Teaching about the Coefficient of Variation in Introductory Statistics Courses
ERIC Educational Resources Information Center
Trafimow, David
2014-01-01
The standard deviation is related to the mean by virtue of the coefficient of variation. Teachers of statistics courses can make use of that fact to make the standard deviation more comprehensible for statistics students.
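As a concrete illustration of that relationship, a minimal snippet (the measurements are hypothetical):

```python
import numpy as np

x = np.array([12.1, 9.8, 11.4, 10.6, 10.1])   # hypothetical measurements
cv = x.std(ddof=1) / x.mean()                 # coefficient of variation = sd / mean
print(f"mean = {x.mean():.2f}, sd = {x.std(ddof=1):.2f}, cv = {cv:.1%}")
```

Expressing the spread as a fraction of the mean lets students compare variability across variables measured on very different scales.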
Molina, Oswaldo; Saldarriaga, Victor
2017-02-01
The discussion on the effects of climate change on human activity has primarily focused on how increasing temperature levels can impair human health. However, less attention has been paid to the effect of increased climate variability on health. We investigate how in utero exposure to temperature variability, measured as fluctuations relative to the historical local temperature mean, affects birth outcomes in the Andean region. Our results suggest that exposure during pregnancy to temperatures one standard deviation from the municipality's long-term temperature mean reduces birth weight by 20 g and increases the probability that a child is born with low birth weight by 0.7 percentage points. We also explore potential channels driving our results and find some evidence that increased temperature variability can lead to a decrease in health care and increased food insecurity during pregnancy. Copyright © 2016 Elsevier B.V. All rights reserved.
Integrating Solar PV in Utility System Operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mills, A.; Botterud, A.; Wu, J.
2013-10-31
This study develops a systematic framework for estimating the increase in operating costs due to uncertainty and variability in renewable resources, uses the framework to quantify the integration costs associated with sub-hourly solar power variability and uncertainty, and shows how changes in system operations may affect these costs. Toward this end, we present a statistical method for estimating the required balancing reserves to maintain system reliability along with a model for commitment and dispatch of the portfolio of thermal and renewable resources at different stages of system operations. We estimate the costs of sub-hourly solar variability, short-term forecast errors, and day-ahead (DA) forecast errors as the difference in production costs between a case with “realistic” PV (i.e., sub-hourly solar variability and uncertainty are fully included in the modeling) and a case with “well behaved” PV (i.e., PV is assumed to have no sub-hourly variability and can be perfectly forecasted). In addition, we highlight current practices that allow utilities to compensate for the issues encountered at the sub-hourly time frame with increased levels of PV penetration. In this analysis we use the analytical framework to simulate utility operations with increasing deployment of PV in a case study of Arizona Public Service Company (APS), a utility in the southwestern United States. In our analysis, we focus on three processes that are important in understanding the management of PV variability and uncertainty in power system operations. First, we represent the decisions made the day before the operating day through a DA commitment model that relies on imperfect DA forecasts of load and wind as well as PV generation. Second, we represent the decisions made by schedulers in the operating day through hour-ahead (HA) scheduling. Peaking units can be committed or decommitted in the HA schedules and online units can be redispatched using forecasts that are improved relative to DA forecasts, but still imperfect. Finally, we represent decisions within the operating hour by schedulers and transmission system operators as real-time (RT) balancing. We simulate the DA and HA scheduling processes with a detailed unit-commitment (UC) and economic dispatch (ED) optimization model. This model creates a least-cost dispatch and commitment plan for the conventional generating units using forecasts and reserve requirements as inputs. We consider only the generation units and load of the utility in this analysis; we do not consider opportunities to trade power with neighboring utilities. We also do not consider provision of reserves from renewables or from demand-side options. We estimate dynamic reserve requirements in order to meet reliability requirements in the RT operations, considering the uncertainty and variability in load, solar PV, and wind resources. Balancing reserve requirements are based on the 2.5th and 97.5th percentile of 1-min deviations from the HA schedule in a previous year. We then simulate RT deployment of balancing reserves using a separate minute-by-minute simulation of deviations from the HA schedules in the operating year. In the simulations we assume that balancing reserves can be fully deployed in 10 min. The minute-by-minute deviations account for HA forecasting errors and the actual variability of the load, wind, and solar generation.
Using these minute-by-minute deviations and deployment of balancing reserves, we evaluate the impact of PV on system reliability through the calculation of the standard reliability metric called Control Performance Standard 2 (CPS2). Broadly speaking, the CPS2 score measures the percentage of 10-min periods in which a balancing area is able to balance supply and demand within a specific threshold. Compliance with the North American Electric Reliability Corporation (NERC) reliability standards requires that the CPS2 score must exceed 90% (i.e., the balancing area must maintain adequate balance for 90% of the 10-min periods). The combination of representing DA forecast errors in the DA commitments, using 1-min PV data to simulate RT balancing, and estimates of reliability performance through the CPS2 metric, all factors that are important to operating systems with increasing amounts of PV, makes this study unique in its scope.
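The percentile rule for sizing balancing reserves can be sketched directly; the deviation series below is invented, and the study's actual reserve calculation sits inside the full scheduling model rather than this bare computation.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical 1-min deviations (MW) of actual net load from the hour-ahead
# schedule over a prior year: 365 days x 24 h x 60 min.
deviations = rng.normal(0.0, 25.0, 365 * 24 * 60)

# Size reserves to cover the central 95% of deviations, mirroring the
# 2.5th/97.5th-percentile rule described above.
down_req, up_req = np.percentile(deviations, [2.5, 97.5])
print(f"regulation-down: {down_req:.1f} MW, regulation-up: {up_req:.1f} MW")
```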
NASA Astrophysics Data System (ADS)
Johnstone, Doug; Herczeg, Gregory J.; Mairs, Steve; Hatchell, Jennifer; Bower, Geoffrey C.; Kirk, Helen; Lane, James; Bell, Graham S.; Graves, Sarah; Aikawa, Yuri; Chen, Huei-Ru Vivien; Chen, Wen-Ping; Kang, Miju; Kang, Sung-Ju; Lee, Jeong-Eun; Morata, Oscar; Pon, Andy; Scicluna, Peter; Scholz, Aleks; Takahashi, Satoko; Yoo, Hyunju; The JCMT Transient Team
2018-02-01
We analyze results from the first 18 months of monthly submillimeter monitoring of eight star-forming regions in the JCMT Transient Survey. In our search for stochastic variability in 1643 bright peaks, only the previously identified source, EC 53, shows behavior well above the expected measurement uncertainty. Another four sources—two disks and two protostars—show moderately enhanced standard deviations in brightness, as expected for stochastic variables. For the two protostars, this apparent variability is the result of single epochs that are much brighter than the mean. In our search for secular brightness variations that are linear in time, we measure the fractional brightness change per year for 150 bright peaks, 50 of which are protostellar. The ensemble distribution of slopes is well fit by a normal distribution with σ ∼ 0.023. Most sources are not rapidly brightening or fading at submillimeter wavelengths. Comparison against time-randomized realizations shows that the width of the distribution is dominated by the uncertainty in the individual brightness measurements of the sources. A toy model for secular variability reveals that an underlying Gaussian distribution of linear fractional brightness change σ = 0.005 would be unobservable in the present sample, whereas an underlying distribution with σ = 0.02 is ruled out. Five protostellar sources, 10% of the protostellar sample, are found to have robust secular measures deviating from a constant flux. The sensitivity to secular brightness variations will improve significantly with a sample over a longer time duration, with an improvement by a factor of two expected by the conclusion of our 36 month survey.
A single blind randomized control trial on support groups for Chinese persons with mild dementia.
Young, Daniel K W; Kwok, Timothy C Y; Ng, Petrus Y N
2014-01-01
Persons with mild dementia experience multiple losses and manifest depressive symptoms. This research study aimed to evaluate the effectiveness of a support group led by a social worker for Chinese persons with mild dementia. Participants were randomly assigned to either a ten-session support group or a control group. Standardized assessment tools were used for data collection at pretreatment and post-treatment periods by a research assistant who was kept blind to the group assignment of the participants. Upon completion of the study, 20 treatment group participants and 16 control group participants completed all assessments. At baseline, the treatment and control groups did not show any significant difference on all demographic variables, as well as on all baseline measures; over one-half (59%) of all the participants reported having depression, as assessed by a Chinese Geriatric Depression Scale score ≥8. After completing the support group, the depressive mood of the treatment group participants reduced from 8.83 (standard deviation = 2.48) to 7.35 (standard deviation = 2.18), a significant change (Wilcoxon signed-rank test; P = 0.017), while the control group's participants did not show any significant change. The present study supports the efficacy and effectiveness of the support group for persons with mild dementia in Chinese society. In particular, it shows that a support group can reduce depressive symptoms for participants.
Rottman, Benjamin M; Hastie, Reid
2016-06-01
Making judgments by relying on beliefs about the causal relationships between events is a fundamental capacity of everyday cognition. In the last decade, Causal Bayesian Networks have been proposed as a framework for modeling causal reasoning. Two experiments were conducted to provide comprehensive data sets with which to evaluate a variety of different types of judgments in comparison to the standard Bayesian networks calculations. Participants were introduced to a fictional system of three events and observed a set of learning trials that instantiated the multivariate distribution relating the three variables. We tested inferences on chains X1→Y→X2, common cause structures X1←Y→X2, and common effect structures X1→Y←X2, on binary and numerical variables, and with high and intermediate causal strengths. We tested transitive inferences, inferences when one variable is irrelevant because it is blocked by an intervening variable (Markov Assumption), inferences from two variables to a middle variable, and inferences about the presence of one cause when the alternative cause was known to have occurred (the normative "explaining away" pattern). Compared to the normative account, in general, when the judgments should change, they change in the normative direction. However, we also discuss a few persistent violations of the standard normative model. In addition, we evaluate the relative success of 12 theoretical explanations for these deviations. Copyright © 2016 Elsevier Inc. All rights reserved.
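The normative "explaining away" pattern the study tests against can be reproduced by brute-force enumeration on a small common-effect network; the network and all probabilities below are invented for illustration.

```python
from itertools import product

p_x1, p_x2 = 0.3, 0.3                 # priors on the two causes
cpt = {(0, 0): 0.05, (0, 1): 0.8,     # assumed P(Y=1 | X1, X2) for X1 -> Y <- X2
       (1, 0): 0.8,  (1, 1): 0.95}

def joint(x1, x2, y):
    py = cpt[(x1, x2)]
    return ((p_x1 if x1 else 1 - p_x1) * (p_x2 if x2 else 1 - p_x2)
            * (py if y else 1 - py))

def prob(query, evidence):
    """P(query | evidence) by summing the joint over consistent assignments."""
    fixed = {**evidence, **query}
    num = sum(joint(x1, x2, y) for x1, x2, y in product([0, 1], repeat=3)
              if all(dict(x1=x1, x2=x2, y=y)[k] == v for k, v in fixed.items()))
    den = sum(joint(x1, x2, y) for x1, x2, y in product([0, 1], repeat=3)
              if all(dict(x1=x1, x2=x2, y=y)[k] == v for k, v in evidence.items()))
    return num / den

print(prob({'x1': 1}, {'y': 1}))             # ~0.57: effect raises belief in X1
print(prob({'x1': 1}, {'y': 1, 'x2': 1}))    # ~0.34: known X2 "explains away" X1
```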
Garbarino, John R.
1999-01-01
The inductively coupled plasma?mass spectrometric (ICP?MS) methods have been expanded to include the determination of dissolved arsenic, boron, lithium, selenium, strontium, thallium, and vanadium in filtered, acidified natural water. Method detection limits for these elements are now 10 to 200 times lower than by former U.S. Geological Survey (USGS) methods, thus providing lower variability at ambient concentrations. The bias and variability of the method was determined by using results from spike recoveries, standard reference materials, and validation samples. Spike recoveries at 5 to 10 times the method detection limit and 75 micrograms per liter in reagent-water, surface-water, and groundwater matrices averaged 93 percent for seven replicates, although selected elemental recoveries in a ground-water matrix with an extremely high iron sulfate concentration were negatively biased by 30 percent. Results for standard reference materials were within 1 standard deviation of the most probable value. Statistical analysis of the results from about 60 filtered, acidified natural-water samples indicated that there was no significant difference between ICP?MS and former USGS official methods of analysis.
Al Shafouri, N; Narvey, M; Srinivasan, G; Vallance, J; Hansen, G
2015-01-01
In neonatal hypoxic ischemic encephalopathy (HIE), hypo- and hyperglycemia have been associated with poor outcomes. However, glucose variability has not been reported in this population. To examine the association between serum glucose variability within the first 24 hours and two-year neurodevelopmental outcomes in neonates cooled for HIE. In this retrospective cohort study, glucose, clinical and demographic data were documented from 23 term newborns treated with whole body therapeutic hypothermia. Severe neurodevelopmental outcomes from planned two-year assessments were defined as the presence of any one of the following: Gross Motor Function Classification System levels 3 to 5, Bayley III Motor Standard Score <70, Bayley III Language Score <70 and Bayley III Cognitive Standard Score <70. The neurodevelopmental outcomes of 8 of the 23 patients were considered severe, and this group demonstrated a significantly greater mean absolute glucose (MAG) change (95% CI of the difference -0.28 to -0.03, p = 0.032). There were no significant differences between outcome groups with regard to the number of patients with hyperglycemic means, or with one or multiple hypo- or hyperglycemic measurements. There were also no differences between the groups in mean glucose, although the between-group difference in glucose standard deviation approached significance. Poor neurodevelopmental outcomes in whole body cooled HIE neonates are significantly associated with MAG changes. This information may be relevant for prognostication and potential management strategies.
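Mean absolute glucose (MAG) change is commonly computed as the sum of absolute glucose changes divided by the elapsed time; this definition is an assumption here, since the abstract does not spell out the formula used, and the series below is invented.

```python
import numpy as np

# Hypothetical glucose series (mmol/L) at sampling times (hours after birth)
t = np.array([1, 4, 7, 10, 13, 16, 19, 22], dtype=float)
g = np.array([3.1, 5.8, 4.2, 6.5, 5.0, 7.1, 6.2, 5.4])

mag = np.abs(np.diff(g)).sum() / (t[-1] - t[0])   # mmol/L per hour
print(f"MAG change = {mag:.2f} mmol/L/h")
```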
Long-term variability in the date of monsoon onset over western India
NASA Astrophysics Data System (ADS)
Adamson, George C. D.; Nash, David J.
2013-06-01
The date of onset of the southwest monsoon in western India is critical for farmers as it influences the timing of crop plantation and the duration of the summer rainy season. Identifying long-term variability in the date of monsoon onset is difficult, however, as onset dates derived from the reanalysis of instrumental rainfall data are only available for the region from 1879. This study uses documentary evidence and newly uncovered instrumental data to reconstruct annual monsoon onset dates for western India for the period 1781-1878, extending the existing record by 97 years. The mean date of monsoon onset over the Mumbai (Bombay) area during the reconstruction period was 10 June with a standard deviation of 6.9 days. This is similar to the mean and standard deviation of the date of monsoon onset derived from instrumental data for the twentieth century. The earliest identified onset date was 23 May (in 1802 and 1839) and the latest 22 June (in 1825). The longer-term perspective provided by this study suggests that the climatic regime that governs monsoon advance over western India did not change substantially from 1781 to 1955. Monsoon onset over Mumbai has occurred at a generally later date since this time. Our results indicate that this change is unprecedented during the last 230 years. Following a discussion of the results, the nature of the relationship between the date of monsoon onset and the El Niño-Southern Oscillation is discussed. This relationship is shown to have been stable since 1781.
NASA Astrophysics Data System (ADS)
Fredriksen, H. B.; Løvsletten, O.; Rypdal, M.; Rypdal, K.
2014-12-01
Several research groups around the world collect instrumental temperature data and combine them in different ways to obtain global gridded temperature fields. The three best-known datasets are HadCRUT4, produced by the Climatic Research Unit and the Met Office Hadley Centre in the UK; one produced by NASA GISS; and one produced by NOAA. Recently Berkeley Earth has also developed a gridded dataset. All four are compared in our analysis. The statistical properties we focus on are the standard deviation and the Hurst exponent. These two parameters are sufficient to describe the temperatures as long-range memory stochastic processes; the standard deviation describes the general fluctuation level, while the Hurst exponent relates the strength of the long-term variability to the strength of the short-term variability. A higher Hurst exponent means that the slow variations are stronger compared to the fast, and that the autocovariance function will have a stronger tail. Hence the Hurst exponent gives us information about the persistence or memory of the process. We make use of these data to show that data averaged over a larger area exhibit higher Hurst exponents and lower variance than data averaged over a smaller area, which provides information about the relationship between temporal and spatial correlations of the temperature fluctuations. Interpolation in space has some similarities with averaging over space, although interpolation is more weighted towards the measurement locations. We demonstrate that the degree of spatial interpolation used can explain some differences observed between the variances and memory exponents computed from the various datasets.
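One standard way to estimate a Hurst exponent (not necessarily the authors' method) is the aggregated-variance approach: for a long-range-dependent series, the variance of block means of size m scales as m^(2H-2), so H falls out of a log-log regression.

```python
import numpy as np

def hurst_aggvar(x, block_sizes=(2, 4, 8, 16, 32, 64)):
    """Aggregated-variance Hurst estimate: the slope of log var(block means)
    versus log block size equals 2H - 2."""
    x = np.asarray(x, float)
    log_m, log_v = [], []
    for m in block_sizes:
        k = len(x) // m
        means = x[:k * m].reshape(k, m).mean(axis=1)
        log_m.append(np.log(m))
        log_v.append(np.log(means.var(ddof=1)))
    slope = np.polyfit(log_m, log_v, 1)[0]
    return 1.0 + slope / 2.0

rng = np.random.default_rng(2)
print(hurst_aggvar(rng.normal(size=4096)))   # white noise: expect H ~ 0.5
```

A higher H for large-area averages, as reported above, means the slow variations survive spatial averaging while the fast, spatially incoherent ones cancel.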
Evaluation of the Ozone Fields in NASA's MERRA-2 Reanalysis
NASA Technical Reports Server (NTRS)
Wargan, Krzysztof; Labow, Gordon; Frith, Stacey; Pawson, Steven; Livesey, Nathaniel; Partyka, Gary
2017-01-01
We describe and assess the quality of the assimilated ozone product from the Modern-Era Retrospective Analysis for Research and Applications, Version 2 (MERRA-2) produced at NASA's Global Modeling and Assimilation Office (GMAO) spanning the time period from 1980 to present. MERRA-2 assimilates partial column ozone retrievals from a series of Solar Backscatter Ultraviolet (SBUV) radiometers on NASA and NOAA spacecraft between January 1980 and September 2004; starting in October 2004 retrieved ozone profiles from the Microwave Limb Sounder (MLS) and total column ozone from the Ozone Monitoring Instrument on NASA's EOS Aura satellite are assimilated. We compare the MERRA-2 ozone with independent satellite and ozonesonde data focusing on the representation of the spatial and temporal variability of stratospheric and upper tropospheric ozone and on implications of the change in the observing system from SBUV to EOS Aura. The comparisons show agreement within 10% (standard deviation of the difference) between MERRA-2 profiles and independent satellite data in most of the stratosphere. The agreement improves after 2004 when EOS Aura data are assimilated. The standard deviation of the differences between the lower stratospheric and upper tropospheric MERRA-2 ozone and ozonesondes is 11.2% and 24.5%, respectively, with correlations of 0.8 and above, indicative of a realistic representation of the near-tropopause ozone variability in MERRA-2. The agreement improves significantly in the EOS Aura period, however MERRA-2 is biased low in the upper troposphere with respect to the ozonesondes. Caution is recommended when using MERRA-2 ozone for decadal changes and trend studies.
Increasing ICA512 autoantibody titers predict development of abnormal oral glucose tolerance tests.
Sanda, Srinath
2018-03-01
Determine if autoantibody titer magnitude and variability predict glucose abnormalities in subjects at risk for type 1 diabetes. Demographic information, longitudinal autoantibody titers, and oral glucose tolerance test (OGTT) data were obtained from the TrialNet Pathway to Prevention study. Subjects (first- and second-degree relatives of individuals with type 1 diabetes) with at least 2 diabetes autoantibodies were selected for analysis. Autoantibody titer means were calculated for each subject for the duration of study participation and the relationship between titer tertiles and glucose value tertiles from OGTTs (normal, impaired, and diabetes) was assessed with a proportional odds ordinal regression model. A matched pairs analysis was used to examine the relationship between changes in individual autoantibody titers and 120-minute glucose values. Titer variability was quantified using cumulative titer standard deviations. We studied 778 subjects recruited in the TrialNet Pathway to Prevention study between 2006 and 2014. Increased cumulative mean titer values for both ICA512 and GAD65 (estimated increase in proportional odds = 1.61, 95% CI = 1.39, 1.87, P < 1 × 10⁻⁹ and 1.17, 95% CI = 1.03, 1.32, P = .016, respectively) were associated with peak 120-minute glucose values. While fluctuating titer levels were observed in some subjects, no significant relationship between titer standard deviation and glucose values was observed. ICA512 autoantibody titers associate with progressive abnormalities in glucose metabolism in subjects at risk for type 1 diabetes. Fluctuations in autoantibody titers do not correlate with lower rates of progression to clinical disease. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Lem, Annemieke J; de Kort, Sandra W K; de Ridder, Maria A J; Hokken-Koelega, Anita C S
2010-09-01
The criteria for starting growth hormone (GH), an approved treatment for short children born small for gestational age (SGA), differ between Europe and the USA. One European requirement for starting GH, a distance to target height (DTH) of ≥1 standard deviation score (SDS), is controversial. To investigate the influence of DTH on growth during GH treatment in short SGA children and to ascertain whether it is correct to exclude children with a DTH <1 SDS from GH. A large group of short prepubertal SGA children (baseline n = 446; 4 years GH n = 215). We analysed the prepubertal growth response during 4 years of GH. We investigated the influence of the continuous variable DTH SDS on growth response and a possible DTH SDS cut-off level below which point the growth response is insufficient. Height gain SDS during 4 years of GH showed a wide variation at every DTH SDS level. Multiple regression analyses demonstrated that, after correction for other significant variables, an additional DTH of 1 SDS resulted in 0.13 SDS more height gain during 4 years of GH. We found no significant differences in height gain below and above certain DTH SDS cut-off levels. DTH SDS had a weak positive effect on height gain during 4 years of GH, while several other determinants had much larger effects. We found no support for using any DTH cut-off level. Based on our data, excluding children with a DTH <1 SDS from GH treatment is not justified.
Labots, M Maaike; Laarakker, M C Marijke; Schetters, D Dustin; Arndt, S S Saskia; van Lith, H A Hein
2018-01-01
Guilloux et al. introduced integrated behavioral z-scoring, a method for behavioral phenotyping of mice. Using this method, multiple ethological variables can be combined to show an overall description of a certain behavioral dimension or motivational system. However, a problem may occur when the control group used for the calculation has a standard deviation of zero or when no control group is present to act as a reference group. In order to solve these problems, an improved procedure is suggested: taking the pooled data as reference. For this purpose a behavioral study with male mice from three inbred strains was carried out. The integrated behavioral z-scoring methodology was applied, thereby taking five different reference group options. The outcome regarding statistical significance and practical importance was compared. Significant effects and effect sizes were influenced by the choice of the reference group. In some cases it was impossible to use a certain population and condition, because one or more of the behavioral variables in question had a standard deviation of zero. Based on the improved method, male mice from the three inbred strains differed regarding activity and anxiety. Taking the method described by Guilloux et al. as basis, the present procedure improves the generalizability to all types of experimental designs in animal behavioral research. To solve the aforementioned problems and to avoid any appearance of data manipulation, the pooled data (combining the data from all experimental groups in a study) as reference option is recommended. Copyright © 2017 Elsevier B.V. All rights reserved.
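The pooled-reference idea reduces to z-scoring every observation against the combined data from all groups, which stays well defined even when one group has zero spread; a minimal sketch with invented data:

```python
import numpy as np

def pooled_z(groups):
    """Z-score each observation against the pooled data from all groups,
    avoiding a zero-SD control group as reference."""
    pooled = np.concatenate(list(groups.values()))
    mu, sd = pooled.mean(), pooled.std(ddof=1)
    return {name: (vals - mu) / sd for name, vals in groups.items()}

# Hypothetical scores for three inbred strains; strain A has zero spread,
# which would break a control-referenced z-score.
data = {"A": np.array([30.0, 30.0, 30.0, 30.0]),
        "B": np.array([55.0, 62.0, 58.0, 60.0]),
        "C": np.array([12.0, 18.0, 15.0, 14.0])}
for strain, z in pooled_z(data).items():
    print(strain, np.round(z.mean(), 2))
```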
Mars-GRAM Applications for Mars Science Laboratory Mission Site Selection Processes
NASA Technical Reports Server (NTRS)
Justh, Hilary; Justus, C. G.
2007-01-01
An overview is presented of the Mars-Global Reference Atmospheric Model (Mars-GRAM 2005) and its new features. One important new feature is the "auxiliary profile" option, whereby a simple input file is used to replace mean atmospheric values from Mars-GRAM's conventional (General Circulation Model) climatology. An auxiliary profile can be generated from any source of data or alternate model output. Results are presented using auxiliary profiles produced from mesoscale model output (Southwest Research Institute's Mars Regional Atmospheric Modeling System (MRAMS) model and Oregon State University's Mars mesoscale model (MMM5) model) for three candidate Mars Science Laboratory (MSL) landing sites (Terby Crater, Melas Chasma, and Gale Crater). A global Thermal Emission Spectrometer (TES) database has also been generated for purposes of making Mars-GRAM auxiliary profiles. This database contains averages and standard deviations of temperature, density, and thermal wind components, averaged over 5-by-5 degree latitude bins and 15 degree L(sub S) bins, for each of three Mars years of TES nadir data. Comparisons show reasonably good consistency between Mars-GRAM with low dust optical depth and both TES observed and mesoscale model simulated density at the three study sites. Mean winds differ by a more significant degree. Comparisons of mesoscale and TES standard deviations with conventional Mars-GRAM values show that Mars-GRAM density perturbations are somewhat conservative (larger than observed variability), while mesoscale-modeled wind variations are larger than Mars-GRAM model estimates. Input parameters rpscale (for density perturbations) and rwscale (for wind perturbations) can be used to "recalibrate" Mars-GRAM perturbation magnitudes to better replicate observed or mesoscale model variability.
Oerbeck, Beate; Sundet, Kjetil; Kase, Bengt F; Heyerdahl, Sonja
2003-10-01
To describe intellectual, motor, and school-associated outcome in young adults with early treated congenital hypothyroidism (CH) and to study the association between long-term outcome and CH variables acting at different points in time during early development (CH severity and early L-thyroxine treatment levels [0-6 years]). Neuropsychological tests were administered to all 49 subjects with CH identified during the first 3 years of the Norwegian neonatal screening program (1979-1981) at a mean age of 20 years and to 41 sibling control subjects (mean age: 21 years). The CH group attained significantly lower scores than control subjects on intellectual, motor, and school-associated tests (total IQ: 102.4 [standard deviation: 13] vs 111.4 [standard deviation: 13]). Twelve (24%) of the 49 CH subjects had not completed senior high school, in contrast to 6% of the control subjects. CH severity (pretreatment serum thyroxine [T4]) correlated primarily with motor tests, whereas early L-thyroxine treatment levels were related to verbal IQ and school-associated tests. In multiple regression analysis, initial L-thyroxine dose (beta = 0.32) and mean serum T4 level during the second year (beta = 0.48) predicted Verbal IQ, whereas mean serum T4 level during the second year (beta = 0.44) predicted Arithmetic. Long-term outcome revealed enduring cognitive and motor deficits in young adults with CH relative to control subjects. Verbal functions and Arithmetic were associated with L-thyroxine treatment variables, suggesting that more optimal treatment might be possible. Motor outcome was associated with CH severity, indicating a prenatal effect.
Narad, Megan; Garner, Annie A; Brassell, Anne A; Saxby, Dyani; Antonini, Tanya N; O'Brien, Kathleen M; Tamm, Leanne; Matthews, Gerald; Epstein, Jeffery N
2013-10-01
This study extends the literature regarding attention-deficit/hyperactivity disorder (ADHD)-related driving impairments to a newly licensed, adolescent population. To investigate the combined risks of adolescence, ADHD, and distracted driving (cell phone conversation and text messaging) on driving performance. Adolescents aged 16 to 17 years with (n = 28) and without (n = 33) ADHD engaged in a simulated drive under 3 conditions (no distraction, cell phone conversation, and texting). During each condition, one unexpected event (eg, another car suddenly merging into driver's lane) was introduced. Cell phone conversation, texting, and no distraction while driving. Self-report of driving history, average speed, standard deviation of speed, standard deviation of lateral position, and braking reaction time during driving simulation. Adolescents with ADHD reported fewer months of driving experience and a higher proportion of driving violations than control subjects. After controlling for months of driving history, adolescents with ADHD demonstrated more variability in speed and lane position than control subjects. There were no group differences for braking reaction time. Furthermore, texting negatively impacted the driving performance of all participants as evidenced by increased variability in speed and lane position. To our knowledge, this study is one of the first to investigate distracted driving in adolescents with ADHD and adds to a growing body of literature documenting that individuals with ADHD are at increased risk for negative driving outcomes. Furthermore, texting significantly impairs the driving performance of all adolescents and increases existing driving-related impairment in adolescents with ADHD, highlighting the need for education and enforcement of regulations against texting for this age group.
Culpepper, Steven Andrew
2016-06-01
Standardized tests are frequently used for selection decisions, and the validation of test scores remains an important area of research. This paper builds upon prior literature about the effect of nonlinearity and heteroscedasticity on the accuracy of standard formulas for correcting correlations in restricted samples. Existing formulas for direct range restriction require three assumptions: (1) the criterion variable is missing at random; (2) a linear relationship between independent and dependent variables; and (3) constant error variance or homoscedasticity. The results in this paper demonstrate that the standard approach for correcting restricted correlations is severely biased in cases of extreme monotone quadratic nonlinearity and heteroscedasticity. This paper offers at least three significant contributions to the existing literature. First, a method from the econometrics literature is adapted to provide more accurate estimates of unrestricted correlations. Second, derivations establish bounds on the degree of bias attributed to quadratic functions under the assumption of a monotonic relationship between test scores and criterion measurements. New results are presented on the bias associated with using the standard range restriction correction formula, and the results show that the standard correction formula yields estimates of unrestricted correlations that deviate by as much as 0.2 for high to moderate selectivity. Third, Monte Carlo simulation results demonstrate that the new procedure for correcting restricted correlations provides more accurate estimates in the presence of quadratic and heteroscedastic test score and criterion relationships.
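The standard correction for direct range restriction that the paper critiques is the classic Thorndike Case II formula; a minimal sketch with hypothetical numbers (note that it assumes exactly the linearity and homoscedasticity the study shows can fail):

```python
import math

def correct_direct_restriction(r_restricted, sd_unrestricted, sd_restricted):
    """Thorndike Case II correction for direct range restriction."""
    u = sd_unrestricted / sd_restricted      # ratio of test-score SDs
    r = r_restricted
    return r * u / math.sqrt(1 - r**2 + r**2 * u**2)

# Hypothetical: r = .30 among selectees whose test-score SD is half
# that of the full applicant pool.
print(round(correct_direct_restriction(0.30, 1.0, 0.5), 3))   # ~0.53
```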
A SIMPLE METHOD FOR EVALUATING DATA FROM AN INTERLABORATORY STUDY
Large-scale laboratory- and method-performance studies involving more than about 30 laboratories may be evaluated by calculating the HORRAT ratio for each test sample (HORRAT = [experimentally found among-laboratories relative standard deviation] divided by [relative standard deviation predicted from the Horwitz equation]).
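The HORRAT calculation itself is short; a sketch under the usual convention that the predicted relative standard deviation comes from the Horwitz equation, PRSD(%) = 2^(1 - 0.5·log10 C) with C the concentration as a mass fraction (the test values below are invented):

```python
import math

def horrat(rsd_found_percent, conc_mass_fraction):
    """HORRAT = found among-laboratories RSD / Horwitz-predicted RSD."""
    prsd = 2 ** (1 - 0.5 * math.log10(conc_mass_fraction))
    return rsd_found_percent / prsd

# Hypothetical test sample: analyte at 1 ppm (C = 1e-6), found RSD = 18%
print(round(horrat(18.0, 1e-6), 2))   # ~1.12; values of roughly 0.5-2 are typical
```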
Chaput, Ludovic; Martinez-Sanz, Juan; Quiniou, Eric; Rigolet, Pascal; Saettel, Nicolas; Mouawad, Liliane
2016-01-01
In drug design, one may be confronted with the problem of finding hits for targets for which no small inhibiting molecules are known and only low-throughput experiments are available (like ITC or NMR studies), two common difficulties encountered in a typical academic setting. Using a virtual screening strategy like docking can alleviate some of the problems and save a considerable amount of time by selecting only top-ranking molecules, but only if the method is very efficient, i.e. when a good proportion of actives are found in the 1-10 % best ranked molecules. The use of several programs (in our study, Gold, Surflex, FlexX and Glide were considered) shows a divergence of the results, which presents a difficulty in guiding the experiments. To overcome this divergence and increase the yield of the virtual screening, we created the standard deviation consensus (SDC) and variable SDC (vSDC) methods, consisting of the intersection of molecule sets from several virtual screening programs, based on the standard deviations of their ranking distributions. SDC allowed us to find hits for two new protein targets by testing only 9 and 11 small molecules from a chemical library of circa 15,000 compounds. Furthermore, vSDC, when applied to the 102 proteins of the DUD-E benchmarking database, succeeded in finding more hits than any of the four isolated programs for 13-60 % of the targets. In addition, when only 10 molecules of each of the 102 chemical libraries were considered, vSDC performed better in the number of hits found, with an improvement of 6-24 % over the 10 best-ranked molecules given by the individual docking programs. Graphical abstract: the vSDC principle consists of intersecting molecule sets, chosen on the basis of the standard deviations of their ranking distributions obtained from various virtual screening programs (here Glide, Gold, FlexX and Surflex, tested on the 102 targets of the DUD-E database). On average, vSDC was capable of finding 38 % of the findable hits, against 34 % for Glide, 32 % for Gold, 16 % for FlexX and 14 % for Surflex, showing that vSDC can overcome the unpredictability of virtual screening results and improve on them.
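The published vSDC algorithm is more involved, but the core consensus idea, growing a rank cutoff until the programs' top lists share enough molecules, can be sketched as follows (random scores stand in for real docking output, and a lower score is assumed to mean a better rank):

```python
import numpy as np

def consensus_hits(rankings, n_select=10):
    """Simplified rank-intersection consensus (not the authors' exact
    SDC/vSDC): widen the top-list cutoff until the intersection of all
    programs' top lists reaches the requested size."""
    orders = [np.argsort(r) for r in rankings.values()]   # best-scored first
    n_mols = len(orders[0])
    for cutoff in range(1, n_mols + 1):
        chosen = set.intersection(*(set(o[:cutoff]) for o in orders))
        if len(chosen) >= n_select:
            return cutoff, sorted(chosen)
    return n_mols, list(range(n_mols))

rng = np.random.default_rng(3)
scores = {p: rng.normal(size=1000) for p in ("gold", "surflex", "flexx", "glide")}
print(consensus_hits(scores, n_select=5))
```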
Morikawa, Kei; Kurimoto, Noriaki; Inoue, Takeo; Mineshita, Masamichi; Miyazawa, Teruomi
2015-01-01
Endobronchial ultrasonography using a guide sheath (EBUS-GS) is an increasingly common bronchoscopic technique, but currently, no methods have been established to quantitatively evaluate EBUS images of peripheral pulmonary lesions. The purpose of this study was to evaluate whether histogram data collected from EBUS-GS images can contribute to the diagnosis of lung cancer. Histogram-based analyses focusing on the brightness of EBUS images were retrospectively conducted; 60 patients (38 with lung cancer, 22 with inflammatory diseases) with clear EBUS images were included. For each patient, a 400-pixel region of interest was selected, typically located at a 3- to 5-mm radius from the probe, from recorded EBUS images during bronchoscopy. Histogram height, width, height/width ratio, standard deviation, kurtosis and skewness were investigated as diagnostic indicators. Median histogram height, width, height/width ratio and standard deviation were significantly different between lung cancer and benign lesions (all p < 0.01). With a cutoff value for standard deviation of 10.5, lung cancer could be diagnosed with an accuracy of 81.7%. The other characteristics investigated were inferior when compared to histogram standard deviation. Histogram standard deviation appears to be the most useful characteristic for diagnosing lung cancer using EBUS images. © 2015 S. Karger AG, Basel.
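Extracting histogram features from a region of interest takes only a few lines; the ROI below is simulated, and the direction of the 10.5 standard-deviation cutoff is illustrative since the abstract does not state it.

```python
import numpy as np
from scipy.stats import kurtosis, skew

rng = np.random.default_rng(4)
# Hypothetical 400-pixel ROI of 8-bit EBUS brightness values
roi = rng.normal(120, 15, 400).clip(0, 255).astype(np.uint8)

features = {"mean": roi.mean(),
            "sd": roi.std(ddof=1),     # the most discriminative feature above
            "kurtosis": kurtosis(roi),
            "skewness": skew(roi)}
print(features)
print("SD cutoff (10.5) exceeded:", features["sd"] >= 10.5)
```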
Role of the standard deviation in the estimation of benchmark doses with continuous data.
Gaylor, David W; Slikker, William
2004-12-01
For continuous data, risk is defined here as the proportion of animals with values above a large percentile, e.g., the 99th percentile or below the 1st percentile, for the distribution of values among control animals. It is known that reducing the standard deviation of measurements through improved experimental techniques will result in less stringent (higher) doses for the lower confidence limit on the benchmark dose that is estimated to produce a specified risk of animals with abnormal levels for a biological effect. Thus, a somewhat larger (less stringent) lower confidence limit is obtained that may be used as a point of departure for low-dose risk assessment. It is shown in this article that it is important for the benchmark dose to be based primarily on the standard deviation among animals, s(a), apart from the standard deviation of measurement errors, s(m), within animals. If the benchmark dose is incorrectly based on the overall standard deviation among average values for animals, which includes measurement error variation, the benchmark dose will be overestimated and the risk will be underestimated. The bias increases as s(m) increases relative to s(a). The bias is relatively small if s(m) is less than one-third of s(a), a condition achieved in most experimental designs.
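The inflation the article warns about follows from the variance decomposition s_obs² = s_a² + s_m²/m for animal averages of m replicate measurements; a quick simulated check (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(5)
s_a, s_m, m = 1.0, 0.9, 3            # animal SD, measurement SD, replicates

true = rng.normal(0.0, s_a, 100_000)                        # animal true values
obs_means = true + rng.normal(0.0, s_m, (m, 100_000)).mean(axis=0)

print(obs_means.std())               # observed SD among animal averages
print(np.sqrt(s_a**2 + s_m**2 / m))  # ~ same value: inflated relative to s_a alone
```

Using the inflated SD as if it were s_a widens the assumed control distribution, so the dose producing a given tail-probability shift is overestimated and the risk understated.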
How well do the GCMs replicate the historical precipitation variability in the Colorado River Basin?
NASA Astrophysics Data System (ADS)
Guentchev, G.; Barsugli, J. J.; Eischeid, J.; Raff, D. A.; Brekke, L.
2009-12-01
Observed precipitation variability measures are compared to measures obtained using the World Climate Research Programme (WCRP) Coupled Model Intercomparison Project (CMIP3) General Circulation Models (GCM) data from 36 model projections downscaled by Brekke et al. (2007) and 30 model projections downscaled by Jon Eischeid. Three groups of variability measures are considered in this historical period (1951-1999) comparison: a) basic variability measures, such as standard deviation, interdecadal standard deviation; b) exceedance probability values, i.e., 10% (extreme wet years) and 90% (extreme dry years) exceedance probability values of series of n-year running mean annual amounts, where n = 1 to 12; 10% exceedance probability values of annual maximum monthly precipitation (extreme wet months); and c) runs variability measures, e.g., frequency of negative and positive runs of annual precipitation amounts, total number of the negative and positive runs. Two gridded precipitation data sets produced from observations are used: the Maurer et al. (2002) and the Daly et al. (1994) Precipitation Regression on Independent Slopes Method (PRISM) data sets. The data consist of monthly grid-point precipitation averaged on a United States Geological Survey (USGS) hydrological sub-region scale. The statistical significance of the obtained model minus observed measure differences is assessed using a block bootstrapping approach. The analyses were performed on annual, seasonal and monthly scales. The results indicate that the interdecadal standard deviation is underestimated, in general, on all time scales by the downscaled model data. The differences are statistically significant at a 0.05 significance level for several Lower Colorado Basin sub-regions on annual and seasonal scales, and for several sub-regions located mostly in the Upper Colorado River Basin for the months of March, June, July and November. Although the models simulate drier extreme wet years, wetter extreme dry years and drier extreme wet months for the Upper Colorado basin, the differences are mostly not significant. Exceptions are the results about the extreme wet years for n=3 for sub-region White-Yampa, for n=6, 7, and 8 for sub-region Upper Colorado-Dolores, and about the extreme dry years for n=11 for sub-region Great Divide-Upper Green. None of the results for the sub-regions in the Lower Colorado Basin were significant. For most of the Upper Colorado sub-regions the models simulate significantly lower frequency of negative and positive 4-6 year runs, while for several sub-regions a significantly higher frequency of 2-year negative runs is evident in the model versus the Maurer data comparisons. The model projections versus the PRISM data comparison reveals similar results for the negative runs, while for the positive runs the results indicate that the models simulate higher frequency of the 2-6 year runs. The results for the Lower Colorado basin sub-regions are similar, in general, to those for the Upper Colorado sub-regions. The differences between the simulated and the observed total number of negative or positive runs were not significant for almost all of the sub-regions within the Colorado River Basin.
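The block-bootstrap idea used for the significance assessment can be sketched generically: resample contiguous blocks of years so that short-range autocorrelation is preserved, then recompute the statistic of interest. The series and block length below are invented, and the study's exact test construction is not reproduced.

```python
import numpy as np

def moving_block_bootstrap(x, stat, block=5, n_boot=2000, seed=0):
    """Resample contiguous blocks (preserving autocorrelation within blocks)
    and return the bootstrap distribution of stat."""
    rng = np.random.default_rng(seed)
    n = len(x)
    n_blocks = int(np.ceil(n / block))
    out = np.empty(n_boot)
    for i in range(n_boot):
        starts = rng.integers(0, n - block + 1, n_blocks)
        resampled = np.concatenate([x[s:s + block] for s in starts])[:n]
        out[i] = stat(resampled)
    return out

rng = np.random.default_rng(11)
precip = rng.gamma(25.0, 20.0, 49)                 # hypothetical 1951-1999 series
boot_sd = moving_block_bootstrap(precip, np.std)
print(np.percentile(boot_sd, [2.5, 97.5]))         # CI for the SD of annual totals
```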
Norris, Darren; Fortin, Marie-Josée; Magnusson, William E.
2014-01-01
Background Ecological monitoring and sampling optima are context and location specific. Novel applications (e.g. biodiversity monitoring for environmental service payments) call for renewed efforts to establish reliable and robust monitoring in biodiversity rich areas. As there is little information on the distribution of biodiversity across the Amazon basin, we used altitude as a proxy for biological variables to test whether meso-scale variation can be adequately represented by different sample sizes in a standardized, regular-coverage sampling arrangement. Methodology/Principal Findings We used Shuttle-Radar-Topography-Mission digital elevation values to evaluate if the regular sampling arrangement in standard RAPELD (rapid assessments (“RAP”) over the long-term (LTER [“PELD” in Portuguese])) grids captured patterns in meso-scale spatial variation. The adequacy of different sample sizes (n = 4 to 120) was examined within 32,325 km2/3,232,500 ha (1293×25 km2 sample areas) distributed across the legal Brazilian Amazon. Kolmogorov-Smirnov tests, correlation and root-mean-square error were used to measure sample representativeness, similarity and accuracy respectively. Trends and thresholds of these responses in relation to sample size and standard deviation were modeled using Generalized-Additive-Models and conditional-inference-trees respectively. We found that a regular arrangement of 30 samples captured the distribution of altitude values within these areas. Sample size was more important than sample standard deviation for representativeness and similarity. In contrast, accuracy was more strongly influenced by sample standard deviation. Additionally, analysis of spatially interpolated data showed that spatial patterns in altitude were also recovered within areas using a regular arrangement of 30 samples. Conclusions/Significance Our findings show that the logistically feasible sample used in the RAPELD system successfully recovers meso-scale altitudinal patterns. This suggests that the sample size and regular arrangement may also be generally appropriate for quantifying spatial patterns in biodiversity at similar scales across at least 90% (≈5 million km2) of the Brazilian Amazon. PMID:25170894
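The sample-size question can be probed generically with a two-sample Kolmogorov-Smirnov test against the full raster; the elevation values and the use of random (rather than regular-grid) placement below are simplifying assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(6)
area = rng.gamma(4.0, 30.0, 25_000)     # stand-in for one 25 km^2 elevation raster

def representativeness(n, trials=200, alpha=0.05):
    """Fraction of size-n subsamples whose distribution is not rejected
    against the full raster by a two-sample KS test."""
    ok = 0
    for _ in range(trials):
        sample = rng.choice(area, size=n, replace=False)
        if ks_2samp(sample, area).pvalue > alpha:
            ok += 1
    return ok / trials

for n in (4, 10, 30, 120):
    print(n, representativeness(n))
```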
Ngo, Manh-Dan; Aberman, Harold M; Hawes, Michael L; Choi, Bryan; Gertzman, Arthur A
2011-05-01
Incisional hernias commonly occur following abdominal wall surgery. Human acellular dermal matrices (HADM) are widely used in abdominal wall defect repair. Xenograft acellular dermal matrices, particularly those made from porcine tissues (PADM), have recently experienced increased usage. The purpose of this study was to compare the effectiveness of HADM and PADM in the repair of incisional abdominal wall hernias in a rabbit model. A review of earlier work on differences between human allograft acellular dermal matrices (HADM) and porcine xenograft acellular dermal matrices (PADM) demonstrated significant differences (P < 0.05) in mechanical properties: tensile strength 15.7 MPa vs. 7.7 MPa for HADM and PADM, respectively. Cellular (fibroblast) infiltration was significantly greater for HADM vs. PADM (Armour). The HADM exhibited a more natural, less degraded collagen by electrophoresis as compared to PADM. The rabbit model surgically established an incisional hernia, which was repaired with one of the two acellular dermal matrices 3 weeks after the creation of the abdominal hernia. The animals were euthanized at 4 and 20 weeks and the wounds evaluated. Tissue ingrowth into the implant was significantly faster for the HADM as compared to PADM, 54 vs. 16% at 4 weeks, and 58 vs. 20% for HADM and PADM, respectively, at 20 weeks. The original, induced hernia defect (6 cm²) was healed to a greater extent for HADM vs. PADM: 2.7 cm² unremodeled area for PADM vs. 1.0 cm² for HADM at 20 weeks. The inherent uniformity of tissue ingrowth and remodeling over time was very different for the HADM relative to the PADM. No differences were observed at the 4-week end point. However, the 20-week data exhibited a statistically different level of variability in the remodeling rate, with a mean standard deviation of 0.96 for HADM as contrasted to a mean standard deviation of 2.69 for PADM. This was significant with P < 0.05 using a one-tailed F test for the inherent variability of the standard deviation. No significant differences between the PADM and HADM for adhesion, inflammation, fibrous tissue or neovascularization were noted.
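The one-tailed F test on the two standard deviations can be reconstructed as a sketch; the per-group sample size is not reported in the abstract, so n = 8 below is purely an assumption.

```python
from scipy.stats import f

def one_tailed_f_test(sd_large, n_large, sd_small, n_small):
    """One-tailed F test that the first group's variance exceeds the second's."""
    F = (sd_large / sd_small) ** 2
    return F, f.sf(F, n_large - 1, n_small - 1)

# SDs of remodeling rate reported above; group sizes assumed for illustration
F, p = one_tailed_f_test(2.69, 8, 0.96, 8)
print(f"F = {F:.2f}, one-tailed p = {p:.4f}")
```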
Jiménez-Castellanos, Emilio; Orozco-Varo, Ana; Arroyo-Cruz, Gema; Iglesias-Linares, Alejandro
2016-06-01
Deviation from the facial midline and inclination of the dental midline or occlusal plane has been described as extremely influential in the layperson's perceptions of the overall esthetics of the smile. The purpose of this study was to determine the prevalence of deviation from the facial midline and inclination of the dental midline or occlusal plane in a selected sample. White participants from a European population (N=158; 93 women, 65 men) who met specific inclusion criteria were selected for the present study. Standardized 1:1 scale frontal photographs were made, and 3 variables of all participants were measured: midline deviation, midline inclination, and inclination of the occlusal plane. Software was used to measure midline deviation and inclination, taking the bipupillary line and the facial midline as references. Tests for normality of the sample were explored and descriptive statistics (means ±SD) were calculated. The chi-square test was used to evaluate differences in midline deviation, midline inclination, and occlusal plane (α=.05). Frequencies of midline deviation (>2 mm), midline inclination (>3.5 degrees), and occlusal plane inclination (>2 degrees) were 31.64% (mean 2.7±1.23 mm), 10.75% (mean 7.9 degrees ±3.57), and 25.9% (mean 9.07 degrees ±3.16), respectively. No statistically significant differences (P>.05) were found between sex and any of the esthetic smile values. The incidence of alterations with at least 1 altered parameter that affected smile esthetics was 51.9% in a population from southern Europe. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Nick, Todd G
2007-01-01
Statistics is defined by the Medical Subject Headings (MeSH) thesaurus as the science and art of collecting, summarizing, and analyzing data that are subject to random variation. The two broad categories of summarizing and analyzing data are referred to as descriptive and inferential statistics. This chapter considers the science and art of summarizing data where descriptive statistics and graphics are used to display data. In this chapter, we discuss the fundamentals of descriptive statistics, including describing qualitative and quantitative variables. For describing quantitative variables, measures of location and spread, for example the standard deviation, are presented along with graphical presentations. We also discuss distributions of statistics, for example the variance, as well as the use of transformations. The concepts in this chapter are useful for uncovering patterns within the data and for effectively presenting the results of a project.
NASA Technical Reports Server (NTRS)
Suarez, Max J. (Editor); Chang, Alfred T. C.; Chiu, Long S.
1997-01-01
Seventeen months of rainfall data (August 1987-December 1988) from nine satellite rainfall algorithms (Adler, Chang, Kummerow, Prabhakara, Huffman, Spencer, Susskind, and Wu) were analyzed to examine the uncertainty of satellite-derived rainfall estimates. The variability among algorithms, measured as the standard deviation computed from the ensemble of algorithms, shows that regions of high algorithm variability tend to coincide with regions of high rain rates. Histograms of pattern correlation (PC) between algorithms suggest a bimodal distribution, with separation at a PC value of about 0.85. Applying this threshold as a criterion for similarity, our analyses show that algorithms using the same sensor or satellite input tend to be similar, suggesting the dominance of sampling errors in these satellite estimates.
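The two diagnostics described in this abstract, the across-algorithm standard deviation field and the pairwise pattern correlation with a 0.85 similarity threshold, can be sketched as follows; the rainfall fields here are synthetic stand-ins, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_alg, ny, nx = 9, 20, 40                          # 9 algorithms on a lat/lon grid
rain = rng.gamma(2.0, 1.5, size=(n_alg, ny, nx))   # synthetic monthly rain-rate maps

sd_field = rain.std(axis=0)                        # per-cell variability among algorithms

def pattern_correlation(a, b):
    """Pearson correlation between two spatial fields."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

pc = np.array([[pattern_correlation(rain[i], rain[j]) for j in range(n_alg)]
               for i in range(n_alg)])
similar_pairs = pc > 0.85                          # similarity criterion from the study
```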
Effect of signal jitter on the spectrum of rotor impulsive noise
NASA Technical Reports Server (NTRS)
Brooks, Thomas F.
1987-01-01
The effect of randomness or jitter of the acoustic waveform on the spectrum of rotor impulsive noise is studied because of its importance for data interpretation. An acoustic waveform train is modelled representing rotor impulsive noise. The amplitude, shape, and period between occurrences of individual pulses are allowed to be randomized assuming normal probability distributions. Results, in terms of the standard deviations of the variable quantities, are given for the autospectrum as well as special processed spectra designed to separate harmonic and broadband rotor noise components. Consideration is given to the effect of accuracy in triggering or keying to a rotor one per revolution signal. An example is given showing the resultant spectral smearing at the high frequencies due to the pulse signal period variability.
Derkacz, Arkadiusz; Gawrys, Jakub; Gawrys, Karolina; Podgorski, Maciej; Magott-Derkacz, Agnieszka; Poreba, Rafał; Doroszko, Adrian
2018-06-01
The literature on the effect of electromagnetic fields on the cardiovascular system is ambiguous. The aim of this study was to evaluate the effect of the electromagnetic field on heart rate variability (HRV) during examination with magnetic resonance. Forty-two patients underwent Holter ECG heart monitoring for 30 minutes twice: immediately before and after examination with magnetic resonance imaging (MRI). HRV was analysed by assessing selected time-domain and spectral parameters. It has been shown that the standard deviation of NN intervals (SDNN) and very low-frequency power increased, whereas the low-frequency:high-frequency (LF/HF) ratio significantly decreased following the MRI examination. These results show that MRI may affect HRV, most likely by changing the sympathetic-parasympathetic balance.
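For reference, the time-domain statistics named here are straightforward to compute from an NN-interval series; this is a minimal sketch on hypothetical values, not the study's analysis pipeline (the spectral parameters require an additional power spectral density step not shown).

```python
import numpy as np

nn_ms = np.array([812, 798, 805, 790, 820, 801, 808, 795])  # hypothetical NN intervals (ms)

sdnn = nn_ms.std(ddof=1)                        # SDNN: overall HRV
rmssd = np.sqrt(np.mean(np.diff(nn_ms) ** 2))   # RMSSD: short-term HRV
print(f"SDNN={sdnn:.1f} ms, RMSSD={rmssd:.1f} ms")
```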
The retest distribution of the visual field summary index mean deviation is close to normal.
Anderson, Andrew J; Cheng, Allan C Y; Lau, Samantha; Le-Pham, Anne; Liu, Victor; Rahman, Farahnaz
2016-09-01
When modelling optimum strategies for how best to determine visual field progression in glaucoma, it is commonly assumed that the summary index mean deviation (MD) is normally distributed on repeated testing. Here we tested whether this assumption is correct. We obtained 42 reliable 24-2 Humphrey Field Analyzer SITA standard visual fields from one eye of each of five healthy young observers, with the first two fields excluded from analysis. Previous work has shown that although MD variability is higher in glaucoma, the shape of the MD distribution is similar to that found in normal visual fields. A Shapiro-Wilk test was used to detect any deviation from normality, and kurtosis values for the distributions were also calculated. Data from each observer passed the Shapiro-Wilk normality test. Bootstrapped 95% confidence intervals for kurtosis encompassed the value for a normal distribution in four of five observers. When examined with quantile-quantile plots, distributions were close to normal and showed no consistent deviations across observers. The retest distribution of MD is not significantly different from normal in healthy observers, and so is likely also normally distributed - or nearly so - in those with glaucoma. Our results increase our confidence in the results of influential modelling studies where a normal distribution for MD was assumed. © 2016 The Authors Ophthalmic & Physiological Optics © 2016 The College of Optometrists.
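The normality checks reported here, a Shapiro-Wilk test plus a bootstrapped confidence interval for kurtosis, can be sketched as follows; the MD values are synthetic, and scipy's kurtosis is the excess kurtosis, for which a normal distribution gives 0.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
md = rng.normal(-0.5, 0.6, size=40)        # hypothetical retest MD values (dB)

w, p = stats.shapiro(md)                   # H0: the retest MDs are normally distributed
boot = [stats.kurtosis(rng.choice(md, size=md.size, replace=True))
        for _ in range(2000)]              # bootstrap the excess kurtosis
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"Shapiro-Wilk p={p:.3f}; kurtosis 95% CI=({lo:.2f}, {hi:.2f})")
```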
Marine reservoir age variability and water mass distribution in the Iceland Sea
NASA Astrophysics Data System (ADS)
Eiríksson, Jón; Larsen, Gudrún; Knudsen, Karen Luise; Heinemeier, Jan; Símonarson, Leifur A.
2004-11-01
Lateglacial and Holocene tephra markers from Icelandic source volcanoes have been identified in five sediment cores from the North Icelandic shelf and correlated with tephra layers in reference soil sections in North Iceland and the GRIP ice core. Land-sea correlation of tephra markers, that have been radiocarbon dated with terrestrial material or dated by documentary evidence, provides a tool for monitoring reservoir age variability in the region. Age models developed for the shelf sediments north of Iceland, based on offshore tephrochronology on one hand and on calibrated AMS 14C datings of marine molluscs on the other, display major deviations during the last 4500 years. The inferred temporal variability in the reservoir age of the regional water masses exceeds by far the variability expected from the marine model calculations. The observed reservoir ages are generally considerably higher, by up to 450 years, than the standard model ocean. It is postulated that the intervals with increased and variable marine reservoir age reflect incursions of Arctic water masses derived from the East Greenland Current to the Iceland Sea and the North Icelandic shelf.
Heart rate variability: Pre-deployment predictor of post-deployment PTSD symptoms
Pyne, Jeffrey M.; Constans, Joseph I.; Wiederhold, Mark D.; Gibson, Douglas P.; Kimbrell, Timothy; Kramer, Teresa L.; Pitcock, Jeffery A.; Han, Xiaotong; Williams, D. Keith; Chartrand, Don; Gevirtz, Richard N.; Spira, James; Wiederhold, Brenda K.; McCraty, Rollin; McCune, Thomas R.
2017-01-01
Heart rate variability (HRV) is a physiological measure associated with autonomic nervous system activity. This study hypothesized that lower pre-deployment HRV would be associated with higher post-deployment post-traumatic stress disorder (PTSD) symptoms. Data from 343 Army National Guard soldiers enrolled in the Warriors Achieving Resilience (WAR) study were analyzed. The primary outcome was PTSD symptom severity using the PTSD Checklist - Military version (PCL) measured at baseline and at 3 and 12 months post-deployment. Heart rate variability predictor variables included high-frequency power (HF) and the standard deviation of the normal cardiac inter-beat interval (SDNN). Generalized linear mixed models revealed that the pre-deployment PCL*ln(HF) interaction term was significant (p < 0.0001). Pre-deployment SDNN was not a significant predictor of post-deployment PCL. Covariates included age, pre-deployment PCL, race/ethnicity, marital status, tobacco use, childhood abuse, pre-deployment traumatic brain injury, and previous combat zone deployment. Pre-deployment heart rate variability predicts post-deployment PTSD symptoms in the context of higher pre-deployment PCL scores. PMID:27773678
The variable and chaotic nature of professional golf performance.
Stöckl, Michael; Lamb, Peter F
2018-05-01
In golf, unlike most other sports, individual performance is not the result of direct interactions between players. Instead, decision-making and performance are influenced by numerous constraining factors affecting each shot. This study examined the performance of PGA TOUR golfers in 2011 in terms of stability and variability on a shot-by-shot basis. Stability and variability were assessed using Recurrence Quantification Analysis (RQA) and the standard deviation, respectively. About 10% of all shots fell within short stable phases of performance (3.7 ± 1.1 shots per stable phase). Stable phases tended to consist of shots of typical performance rather than poor or exceptional shots; this finding was consistent for all shot categories. Overall, stability measures were not correlated with tournament performance. Variability across all shots was not related to tournament performance; however, variability in tee shots and short approach shots was higher than for other shot categories. Furthermore, tee shot variability was related to tournament standing: decreased variability was associated with better tournament ranking. The findings of this study showed that PGA TOUR golf performance is chaotic. Further research on amateur golf performance is required to determine whether this structure of golf performance is universal.
NASA Astrophysics Data System (ADS)
Peel, M. C.; Srikanthan, R.; McMahon, T. A.; Karoly, D. J.
2015-04-01
Two key sources of uncertainty in projections of future runoff for climate change impact assessments are uncertainty between global climate models (GCMs) and within a GCM. Within-GCM uncertainty is the variability in GCM output that occurs when running a scenario multiple times but each run has slightly different, but equally plausible, initial conditions. The limited number of runs available for each GCM and scenario combination within the Coupled Model Intercomparison Project phase 3 (CMIP3) and phase 5 (CMIP5) data sets limits the assessment of within-GCM uncertainty. In this second of two companion papers, the primary aim is to present a proof-of-concept approximation of within-GCM uncertainty for monthly precipitation and temperature projections and to assess the impact of within-GCM uncertainty on modelled runoff for climate change impact assessments. A secondary aim is to assess the impact of between-GCM uncertainty on modelled runoff. Here we approximate within-GCM uncertainty by developing non-stationary stochastic replicates of GCM monthly precipitation and temperature data. These replicates are input to an off-line hydrologic model to assess the impact of within-GCM uncertainty on projected annual runoff and reservoir yield. We adopt stochastic replicates of available GCM runs to approximate within-GCM uncertainty because large ensembles (hundreds of runs) for a given GCM and scenario are unavailable, other than the Climateprediction.net data set for the Hadley Centre GCM. To date, within-GCM uncertainty has received little attention in the hydrologic climate change impact literature and this analysis provides an approximation of the uncertainty in projected runoff, and reservoir yield, due to within- and between-GCM uncertainty of precipitation and temperature projections. In the companion paper, McMahon et al. (2015) sought to reduce between-GCM uncertainty by removing poorly performing GCMs, resulting in a selection of five better performing GCMs from CMIP3 for use in this paper. Here we present within- and between-GCM uncertainty results in mean annual precipitation (MAP), mean annual temperature (MAT), mean annual runoff (MAR), the standard deviation of annual precipitation (SDP), standard deviation of runoff (SDR) and reservoir yield for five CMIP3 GCMs at 17 worldwide catchments. Based on 100 stochastic replicates of each GCM run at each catchment, within-GCM uncertainty was assessed in relative form as the standard deviation expressed as a percentage of the mean of the 100 replicate values of each variable. The average relative within-GCM uncertainties from the 17 catchments and 5 GCMs for 2015-2044 (A1B) were MAP 4.2%, SDP 14.2%, MAT 0.7%, MAR 10.1% and SDR 17.6%. The Gould-Dincer Gamma (G-DG) procedure was applied to each annual runoff time series for hypothetical reservoir capacities of 1 × MAR and 3 × MAR and the average uncertainties in reservoir yield due to within-GCM uncertainty from the 17 catchments and 5 GCMs were 25.1% (1 × MAR) and 11.9% (3 × MAR). Our approximation of within-GCM uncertainty is expected to be an underestimate due to not replicating the GCM trend. However, our results indicate that within-GCM uncertainty is important when interpreting climate change impact assessments. Approximately 95% of values of MAP, SDP, MAT, MAR, SDR and reservoir yield from 1 × MAR or 3 × MAR capacity reservoirs are expected to fall within twice their respective relative uncertainty (standard deviation/mean).
Within-GCM uncertainty has significant implications for interpreting climate change impact assessments that report future changes within our range of uncertainty for a given variable - these projected changes may be due solely to within-GCM uncertainty. Since within-GCM variability is amplified from precipitation to runoff and then to reservoir yield, climate change impact assessments that do not take into account within-GCM uncertainty risk providing water resources management decision makers with a sense of certainty that is unjustified.
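The relative within-GCM uncertainty measure used above, the standard deviation across stochastic replicates expressed as a percentage of the replicate mean, reduces to a few lines; the replicate values below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
mar = rng.normal(250.0, 25.0, size=100)    # hypothetical mean annual runoff (mm) from 100 replicates

rel_uncertainty = 100.0 * mar.std(ddof=1) / mar.mean()
print(f"relative within-GCM uncertainty: {rel_uncertainty:.1f}%")
# ~95% of replicate values fall within about twice this relative uncertainty of the mean
```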
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Deviations. 961.4 Section 961.4 Energy DEPARTMENT OF ENERGY STANDARD CONTRACT FOR DISPOSAL OF SPENT NUCLEAR FUEL AND/OR HIGH-LEVEL RADIOACTIVE WASTE General § 961.4 Deviations. Requests for authority to deviate from this part shall be submitted in writing to...
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 4 2012-01-01 2012-01-01 false Deviations. 961.4 Section 961.4 Energy DEPARTMENT OF ENERGY STANDARD CONTRACT FOR DISPOSAL OF SPENT NUCLEAR FUEL AND/OR HIGH-LEVEL RADIOACTIVE WASTE General § 961.4 Deviations. Requests for authority to deviate from this part shall be submitted in writing to...
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 4 2014-01-01 2014-01-01 false Deviations. 961.4 Section 961.4 Energy DEPARTMENT OF ENERGY STANDARD CONTRACT FOR DISPOSAL OF SPENT NUCLEAR FUEL AND/OR HIGH-LEVEL RADIOACTIVE WASTE General § 961.4 Deviations. Requests for authority to deviate from this part shall be submitted in writing to...
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 4 2013-01-01 2013-01-01 false Deviations. 961.4 Section 961.4 Energy DEPARTMENT OF ENERGY STANDARD CONTRACT FOR DISPOSAL OF SPENT NUCLEAR FUEL AND/OR HIGH-LEVEL RADIOACTIVE WASTE General § 961.4 Deviations. Requests for authority to deviate from this part shall be submitted in writing to...
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 4 2011-01-01 2011-01-01 false Deviations. 961.4 Section 961.4 Energy DEPARTMENT OF ENERGY STANDARD CONTRACT FOR DISPOSAL OF SPENT NUCLEAR FUEL AND/OR HIGH-LEVEL RADIOACTIVE WASTE General § 961.4 Deviations. Requests for authority to deviate from this part shall be submitted in writing to...
NASA Astrophysics Data System (ADS)
Rodny, Marek; Nolz, Reinhard
2017-04-01
Evapotranspiration (ET) is a fundamental component of the hydrological cycle, but challenging to quantify. Lysimeter facilities, for example, can be installed and operated to determine ET, but they are costly and represent only point measurements. Therefore, lysimeter data are traditionally used to develop, calibrate, and validate models that allow calculating reference evapotranspiration (ET0) from meteorological data, which can be measured more easily. The standardized form of the well-known FAO Penman-Monteith equation (ASCE-EWRI) is recommended as a standard procedure for estimating ET0 and subsequently plant water requirements. Applied and validated under different climatic conditions, the Penman-Monteith equation is generally known to deliver proper results. On the other hand, several studies have documented deviations between measured and calculated ET0 depending on environmental conditions. Potential reasons include differing or varying surface characteristics of the lysimeter and the location where the weather instruments are placed. Advection of sensible heat (transport of dry and hot air from surrounding areas) might be another reason for deviating ET values. However, elaborating the causal processes is complex and requires comprehensive data of high quality and specific analysis techniques. In order to assess influencing factors, we correlated differences between measured and calculated ET0 with pre-selected meteorological parameters and related system parameters. Basic data were hourly ET0 values from a weighing lysimeter (ET0_lys) with a surface area of 2.85 m2 (reference crop: frequently irrigated grass), weather data (air and soil temperature, relative humidity, air pressure, wind velocity, and solar radiation), and soil water content at different depths. ET0_ref was calculated in hourly time steps according to the standardized procedure after ASCE-EWRI (2005). Deviations between the two datasets were calculated as ET0_lys - ET0_ref and separated into positive and negative values. For further interpretation, we calculated daily sums of these values. The respective daily difference (positive or negative) served as the independent variable (x) in linear correlation with a selected parameter as the dependent variable (y). Quality of correlation was evaluated by means of coefficients of determination (R2). When ET0_lys > ET0_ref, the differences were only weakly correlated with the selected parameters. Hence, evaluating the causal processes leading to underestimation of measured hourly ET0 seems to require a more rigorous approach. On the other hand, when ET0_lys < ET0_ref, the differences correlated considerably with the meteorological parameters and related system parameters. Interpreting the particular correlations in detail indicated different (or varying) surface characteristics between the irrigated lysimeter and the nearby (non-irrigated) meteorological station.
The truly remarkable universality of half a standard deviation: confirmation through another look.
Norman, Geoffrey R; Sloan, Jeff A; Wyrwich, Kathleen W
2004-10-01
In this issue of Expert Review of Pharmacoeconomics and Outcomes Research, Farivar, Liu, and Hays present their findings in 'Another look at the half standard deviation estimate of the minimally important difference in health-related quality of life scores' (hereafter referred to as 'Another look'). These researchers have re-examined the May 2003 Medical Care article 'Interpretation of changes in health-related quality of life: the remarkable universality of half a standard deviation' (hereafter referred to as 'Remarkable') in the hope of supporting their hypothesis that the minimally important difference in health-related quality of life measures is undoubtedly closer to 0.3 standard deviations than 0.5. Nonetheless, despite their extensive wranglings with the exclusion of many articles that we included in our review, the inclusion of articles that we did not include in our review, and the recalculation of effect sizes using the absolute value of the mean differences, in our opinion the results of the 'Another look' article confirm the same findings as the 'Remarkable' paper.
Static Scene Statistical Non-Uniformity Correction
2015-03-01
Acronyms: NUC, Non-Uniformity Correction; RMSE, Root Mean Squared Error; RSD, Relative Standard Deviation; S3NUC, Static Scene Statistical Non-Uniformity Correction. The Relative Standard Deviation (RSD) normalizes the standard deviation, σ, to the mean estimated value, µ, using the equation RSD = (σ/µ) × 100. The RSD plot of the gain estimates is shown in Figure 4.1(b). The RSD plot shows that after a sample size of approximately 10, the different photocount values and the inclusion
Effect of multizone refractive multifocal contact lenses on standard automated perimetry.
Madrid-Costa, David; Ruiz-Alcocer, Javier; García-Lázaro, Santiago; Albarrán-Diego, César; Ferrer-Blasco, Teresa
2012-09-01
The aim of this study was to evaluate whether the creation of 2 foci (distance and near) provided by multizone refractive multifocal contact lenses (CLs) for presbyopia correction affects measurements on Humphrey 24-2 Swedish interactive threshold algorithm (SITA) standard automated perimetry (SAP). In this crossover study, 30 subjects were fitted in random order with either a multifocal CL or a monofocal CL. After 1 month, a Humphrey 24-2 SITA standard strategy was performed. The visual field global indices (the mean deviation [MD] and pattern standard deviation [PSD]), reliability indices, test duration, and number of depressed points deviating at P<5%, P<2%, P<1%, and P<0.5% on pattern deviation probability plots were determined and compared between multifocal and monofocal CLs. Thirty eyes of 30 subjects were included in this study. There were no statistically significant differences in reliability indices or test duration. There was a statistically significant reduction in the MD with the multifocal CL compared with the monofocal CL (P=0.001). No differences were found in PSD or in the number of depressed points deviating at P<5%, P<2%, P<1%, and P<0.5% in the pattern deviation probability maps studied. The results of this study suggest that the multizone refractive lens produces a generalized depression in threshold sensitivity as measured by Humphrey 24-2 SITA SAP.
Determination of antibacterial flomoxef in serum by capillary electrophoresis.
Kitahashi, Toshihiro; Furuta, Itaru
2003-04-01
A capillary electrophoresis method for determining flomoxef (FMOX) concentration in serum is developed. Serum samples are extracted with acetonitrile. After pretreatment, they are separated in a fused-silica capillary tube with a 25 mM borate buffer (pH 10.0) as a running buffer containing 50 mM sodium dodecyl sulfate. FMOX and acetaminophen (internal standard) are detected by UV absorbance at 200 nm. Linearity (0-200 mg/L) is good, and the minimum limit of detection is 1.0 mg/L (S/N = 3). The relative standard deviations of intra- and interassay variability are 1.60-4.78% and 2.10-3.31%, respectively, and the recovery rate is 84-98%. This method can be used for determination of FMOX concentration in serum.
Modest weight loss in moderately overweight postmenopausal women improves heart rate variability.
Mouridsen, Mette Rauhe; Bendsen, Nathalie Tommerup; Astrup, Arne; Haugaard, Steen Bendix; Binici, Zeynep; Sajadieh, Ahmad
2013-08-01
To evaluate the effects of weight loss on heart rate (HR) and heart rate variability (HRV) parameters in overweight postmenopausal women. Forty-nine overweight postmenopausal women with an average body mass index of 28.8 ± 1.9 kg/m(2) underwent a 12-week dietary weight-loss programme. Accepted variables for characterization of HRV were analysed before and after the weight loss by 24-h ambulatory ECG monitoring: the mean and standard deviation of the time between normal-to-normal complexes (MeanNN and SDNN, respectively), and the mean of the standard deviations of normal-to-normal intervals for each 5-min period (SDNNindex). Baseline body fat mass (FM%) and changes in body composition were determined by dual X-ray absorptiometry. Before and after the weight-loss period, total abdominal fat, intra-abdominal fat (IAAT), and subcutaneous abdominal fat (SCAT) were measured by single-slice MRI at L3. The weight loss of 3.9 ± 2.0 kg was accompanied by an improvement in HRV. SDNN increased by 9.2% (p = 0.003) and SDNNindex increased by 11.4% (p = 0.0003). MeanNN increased by 2.4%, reflecting a decrease in mean heart rate from 74.1 to 72.3 beats/min (p = 0.033). Systolic blood pressure (SBP) decreased by 2.7%, total cholesterol by 5.1%, and high-sensitivity C-reactive protein (hsCRP) by 15.8% (p = 0.002). Improvements in SDNN and cholesterol were correlated with weight loss (r = -0.329, p = 0.024 and r = 0.327, p = 0.020, respectively), but changes in HR, SBP, and hsCRP were not. IAAT and the IAAT/SCAT ratio were found to be negatively associated with HRV parameters, but changes in body composition were not associated with changes in HRV. The observed improvement in HRV seems to be facilitated by weight loss. IAAT and the IAAT/SCAT ratio were found to be associated with low HRV.
Chuang, Shin-Shin; Wu, Kung-Tai; Lin, Chen-Yang; Lee, Steven; Chen, Gau-Yang; Kuo, Cheng-Deng
2014-08-01
The Poincaré plot of RR intervals (RRI) is obtained by plotting RRI(n+1) against RRI(n). The Pearson correlation coefficient (ρRRI), slope (SRRI), Y-intercept (YRRI), standard deviation of instantaneous beat-to-beat RRI variability (SD1RR), and standard deviation of continuous long-term RRI variability (SD2RR) can be defined to characterize the plot. Similarly, the Poincaré plot of the autocorrelation function (ACF) of RRI can be obtained by plotting ACF(k+1) against ACF(k). The corresponding Pearson correlation coefficient (ρACF), slope (SACF), Y-intercept (YACF), SD1ACF, and SD2ACF can be defined similarly to characterize the plot. By comparing the indices of the Poincaré plots of RRI and ACF between patients with acute myocardial infarction (AMI) and patients with patent coronary artery (PCA), we found that ρACF and SACF were significantly larger, whereas RMSSDACF/SDACF and SD1ACF/SD2ACF were significantly smaller, in AMI patients. The ρACF and SACF correlated significantly and negatively with normalized high-frequency power (nHFP), and significantly and positively with normalized very low-frequency power (nVLFP) of heart rate variability in both groups of patients. On the contrary, the RMSSDACF/SDACF and SD1ACF/SD2ACF correlated significantly and positively with nHFP, and significantly and negatively with nVLFP and the low-/high-frequency power ratio (LHR), in both groups of patients. We concluded that ρACF, SACF, RMSSDACF/SDACF, and SD1ACF/SD2ACF, among many other indices of the ACF Poincaré plot, can be used to differentiate between patients with AMI and patients with PCA, and that the increase in ρACF and SACF and the decrease in RMSSDACF/SDACF and SD1ACF/SD2ACF suggest increased sympathetic and decreased vagal modulation in both groups of patients.
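The basic RRI Poincaré-plot descriptors named here can be computed with standard formulas; the sketch below uses a synthetic RR series rather than patient data, and SD1/SD2 follow the usual definitions in terms of the successive-difference and overall variances.

```python
import numpy as np

rng = np.random.default_rng(3)
rr = 800 + np.cumsum(rng.normal(0, 5, size=300))    # hypothetical RR intervals (ms)
x, y = rr[:-1], rr[1:]                              # RRI(n) vs RRI(n+1)

rho = np.corrcoef(x, y)[0, 1]                       # Pearson correlation coefficient
slope, intercept = np.polyfit(x, y, 1)              # slope and Y-intercept
sd1 = np.std(y - x, ddof=1) / np.sqrt(2)            # short-term (beat-to-beat) variability
sd2 = np.sqrt(max(2 * np.var(rr, ddof=1) - sd1**2, 0.0))  # long-term variability
print(f"rho={rho:.3f} slope={slope:.3f} SD1={sd1:.1f} SD2={sd2:.1f}")
```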
Stoliker, Deborah L.; Liu, Chongxuan; Kent, Douglas B.; Zachara, John M.
2013-01-01
Rates of U(VI) release from individual dry-sieved size fractions of a field-aggregated, field-contaminated composite sediment from the seasonally saturated lower vadose zone of the Hanford 300-Area were examined in flow-through reactors to maintain quasi-constant chemical conditions. The principal source of variability in equilibrium U(VI) adsorption properties of the various size fractions was the impact of variable chemistry on adsorption. This source of variability was represented using surface complexation models (SCMs) with different stoichiometric coefficients with respect to hydrogen ion and carbonate concentrations for the different size fractions. A reactive transport model incorporating equilibrium expressions for cation exchange and calcite dissolution, along with rate expressions for aerobic respiration and silica dissolution, described the temporal evolution of solute concentrations observed during the flow-through reactor experiments. Kinetic U(VI) desorption was well described using a multirate SCM with an assumed lognormal distribution for the mass-transfer rate coefficients. The estimated mean and standard deviation of the rate coefficients were the same for all <2 mm size fractions but differed for the 2–8 mm size fraction. Micropore volumes, assessed using t-plots to analyze N2 desorption data, were also the same for all dry-sieved <2 mm size fractions, indicating a link between micropore volumes and mass-transfer rate properties. Pore volumes for dry-sieved size fractions exceeded values for the corresponding wet-sieved fractions. We hypothesize that repeated field wetting and drying cycles lead to the formation of aggregates and/or coatings containing (micro)pore networks which provided an additional mass-transfer resistance over that associated with individual particles. The 2–8 mm fraction exhibited a larger average and standard deviation in the distribution of mass-transfer rate coefficients, possibly caused by the abundance of microporous basaltic rock fragments.
Stenzel, O; Wilbrandt, S; Wolf, J; Schürmann, M; Kaiser, N; Ristau, D; Ehlers, H; Carstens, F; Schippel, S; Mechold, L; Rauhut, R; Kennedy, M; Bischoff, M; Nowitzki, T; Zöller, A; Hagedorn, H; Reus, H; Hegemann, T; Starke, K; Harhausen, J; Foest, R; Schumacher, J
2017-02-01
Random effects in the repeatability of refractive index and absorption edge position of tantalum pentoxide layers prepared by plasma-ion-assisted electron-beam evaporation, ion beam sputtering, and magnetron sputtering are investigated and quantified. Standard deviations in refractive index between 4×10⁻⁴ and 4×10⁻³ have been obtained. Here, the lowest standard deviations in refractive index, close to our detection threshold, could be achieved by both ion beam sputtering and plasma-ion-assisted deposition. In relation to the corresponding mean values, the standard deviations in band-edge position and refractive index are of similar order.
Closed-form confidence intervals for functions of the normal mean and standard deviation.
Donner, Allan; Zou, G Y
2012-08-01
Confidence interval methods for a normal mean and standard deviation are well known and simple to apply. However, the same cannot be said for important functions of these parameters. These functions include the normal distribution percentiles, the Bland-Altman limits of agreement, the coefficient of variation and Cohen's effect size. We present a simple approach to this problem by using variance estimates recovered from confidence limits computed for the mean and standard deviation separately. All resulting confidence intervals have closed forms. Simulation results demonstrate that this approach performs very well for limits of agreement, coefficients of variation and their differences.
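As I read the approach, the paper recovers variance estimates from separately computed confidence limits for the mean (t-based) and the standard deviation (chi-square-based) and combines them in closed form. The sketch below applies that general recipe (a MOVER-style combination) to the upper 95% limit of agreement, mean + 1.96 SD, on synthetic differences; it is an illustration under those assumptions, not the authors' code.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
d = rng.normal(0.3, 1.2, size=30)          # hypothetical paired differences
n, xbar, s = d.size, d.mean(), d.std(ddof=1)

# Separate 95% confidence limits for the mean (t) and the SD (chi-square)
tcrit = stats.t.ppf(0.975, n - 1)
l1, u1 = xbar - tcrit * s / np.sqrt(n), xbar + tcrit * s / np.sqrt(n)
l2 = s * np.sqrt((n - 1) / stats.chi2.ppf(0.975, n - 1))   # lower limit for sigma
u2 = s * np.sqrt((n - 1) / stats.chi2.ppf(0.025, n - 1))   # upper limit for sigma

# Closed-form limits for theta = mean + 1.96*sigma, combining the two CIs
theta = xbar + 1.96 * s
lower = theta - np.sqrt((xbar - l1) ** 2 + (1.96 * (s - l2)) ** 2)
upper = theta + np.sqrt((u1 - xbar) ** 2 + (1.96 * (u2 - s)) ** 2)
print(f"upper limit of agreement {theta:.2f}, 95% CI ({lower:.2f}, {upper:.2f})")
```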
McClure, Foster D; Lee, Jung K
2005-01-01
Sample size formulas are developed for estimating the repeatability and reproducibility standard deviations (S_r and S_R) such that the actual errors in S_r and S_R relative to their respective true values, σ_r and σ_R, are at predefined levels. The statistical consequences associated with the AOAC INTERNATIONAL required sample size to validate an analytical method are discussed. In addition, formulas to estimate the uncertainties of S_r and S_R were derived and are provided as supporting documentation, including a formula for the number of replicates required for a specified margin of relative error in the estimate of the repeatability standard deviation.
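A standard normal-theory approximation (not necessarily the exact formula derived in this paper) gives the flavor of such a calculation: the relative standard error of a sample SD is roughly 1/sqrt(2(n-1)), so reaching a target relative error eps takes about n = 1 + 1/(2·eps²) replicates.

```python
import math

def replicates_for_relative_sd_error(eps: float) -> int:
    """Approximate n so that SE(s)/sigma is about eps (normal-theory)."""
    return math.ceil(1 + 1 / (2 * eps ** 2))

for eps in (0.20, 0.10, 0.05):
    print(f"eps={eps:.2f} -> n ~ {replicates_for_relative_sd_error(eps)}")
# eps=0.20 -> n ~ 14; eps=0.10 -> n ~ 51; eps=0.05 -> n ~ 201
```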
Exposure to televised alcohol ads and subsequent adolescent alcohol use.
Stacy, Alan W; Zogg, Jennifer B; Unger, Jennifer B; Dent, Clyde W
2004-01-01
To assess the impact of televised alcohol commercials on adolescents' alcohol use. Adolescents completed questionnaires about alcohol commercials and alcohol use in a prospective study. A one standard deviation increase in viewing television programs containing alcohol commercials in seventh grade was associated with an excess risk of beer use (44%), wine/liquor use (34%), and 3-drink episodes (26%) in eighth grade. The strength of associations varied across exposure measures and was most consistent for beer. Although replication is warranted, results showed that exposure was associated with an increased risk of subsequent beer consumption and possibly other consumption variables.
feets: feATURE eXTRACTOR for tIME sERIES
NASA Astrophysics Data System (ADS)
Cabral, Juan; Sanchez, Bruno; Ramos, Felipe; Gurovich, Sebastián; Granitto, Pablo; VanderPlas, Jake
2018-06-01
feets characterizes and analyzes light-curves from astronomical photometric databases for modelling, classification, data cleaning, outlier detection and data analysis. It uses machine learning algorithms to determine the numerical descriptors that characterize and distinguish the different variability classes of light-curves; these range from basic statistical measures such as the mean or standard deviation to complex time-series characteristics such as the autocorrelation function. The library is not restricted to the astronomical field and could also be applied to any kind of time series. This project is a derivative work of FATS (ascl:1711.017).
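The descriptors mentioned here range from simple statistics to time-series features; the sketch below computes a few of them with plain numpy on a synthetic light curve, rather than through the feets API itself, to show what such feature extraction amounts to.

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0, 100, 500)
mag = 12.0 + 0.3 * np.sin(2 * np.pi * t / 7.3) + rng.normal(0, 0.05, t.size)

features = {
    "Mean": mag.mean(),
    "Std": mag.std(ddof=1),                                  # standard deviation
    "Amplitude": (mag.max() - mag.min()) / 2,
    "Autocor_lag1": np.corrcoef(mag[:-1], mag[1:])[0, 1],    # autocorrelation at lag 1
}
print(features)
```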
Irlenbusch, Ulrich; Berth, Alexander; Blatter, Georges; Zenz, Peter
2012-03-01
Most anthropometric data on the proximal humerus has been obtained from deceased healthy individuals with no deformities. Endoprostheses are implanted for primary and secondary osteoarthritis, rheumatoid arthritis, humeral-head necrosis, fracture sequelae and other humeral-head deformities. This indicates that pathologicoanatomical variability may be greater than previously assumed. We therefore investigated a group of patients with typical shoulder replacement diagnoses, including posttraumatic and rheumatic deformities. One hundred and twenty-two patients with a double eccentrically adjustable shaft endoprosthesis served as a specific dimension gauge to determine in vivo the individual humeral-head rotation centres from the position of the adjustable prosthesis taper and the eccentric head. All prosthesis heads were positioned eccentrically. The entire adjustment range of the prosthesis of 12 mm medial/lateral and 6 mm dorsal/ventral was required. Mean values for effective offset were 5.84 mm mediolaterally [standard deviation (SD) 1.95, minimum +2, maximum +11] and 1.71 mm anteroposteriorly (SD 1.71, minimum −3, maximum 3 mm), averaging 5.16 mm (SD 1.76, minimum +2, maximum +10). The posterior offset averaged 1.85 mm (SD 1.85, minimum −1, maximum +6 mm). In summary, variability of the combined medial and dorsal offset of the humeral-head rotational centre determined in patients with typical underlying diagnoses in shoulder replacement was not greater than that recorded in the literature for healthy deceased patients. The range of deviation is substantial and shows the need for an adjustable prosthetic system.
Pleil, Joachim D
2016-01-01
This commentary is the second in a series outlining one specific concept in interpreting biomarker data. In the first, an observational method was presented for assessing the distribution of measurements before making parametric calculations. Here, the discussion revolves around the next step: the choice between the standard error of the mean and the calculated standard deviation for comparing or predicting measurement results.
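The distinction at issue is compact enough to show directly; in this sketch (on hypothetical values), the SD describes the spread of individual measurements, while the SEM = SD/sqrt(n) describes the uncertainty of the group mean.

```python
import numpy as np

x = np.array([2.1, 2.6, 1.9, 2.4, 2.8, 2.2, 2.5, 2.0])  # hypothetical biomarker values

sd = x.std(ddof=1)            # spread of individual measurements
sem = sd / np.sqrt(x.size)    # uncertainty of the sample mean
print(f"mean={x.mean():.2f}, SD={sd:.2f}, SEM={sem:.2f}")
```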
Introducing the Mean Absolute Deviation "Effect" Size
ERIC Educational Resources Information Center
Gorard, Stephen
2015-01-01
This paper revisits the use of effect sizes in the analysis of experimental and similar results, and reminds readers of the relative advantages of the mean absolute deviation as a measure of variation, as opposed to the more complex standard deviation. The mean absolute deviation is easier to use and understand, and more tolerant of extreme…
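A small sketch makes the comparison concrete: the mean absolute deviation is computed directly from absolute deviations about the mean and reacts less to an extreme value than the standard deviation does (values are hypothetical).

```python
import numpy as np

def mean_abs_dev(x):
    x = np.asarray(x, dtype=float)
    return np.mean(np.abs(x - x.mean()))   # mean absolute deviation about the mean

clean = np.array([10, 11, 9, 10, 12, 10, 9, 11], dtype=float)
with_outlier = np.append(clean, 25.0)      # one extreme score added

for label, x in (("clean", clean), ("with outlier", with_outlier)):
    print(f"{label}: MAD={mean_abs_dev(x):.2f}, SD={x.std(ddof=1):.2f}")
```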
Loftus, Loni; Marks, Kelly; Jones-McVey, Rosie; Gonzales, Jose L; Fowler, Veronica L
2016-09-09
Effective training of horses relies on the trainer's awareness of learning theory and equine ethology, and should be undertaken with skill and time. Some trainers, such as Monty Roberts, share their methods through the medium of public demonstrations. This paper describes the opportunistic analysis of beat-to-beat (RR) intervals and heart rate variability (HRV) of ten horses being used in Monty Roberts' public demonstrations within the United Kingdom. RR intervals and HRV were measured in the stable before training and during training. The HRV variables standard deviation of the RR interval (SDRR), root mean square of successive RR differences (RMSSD), the geometric measures standard deviation 1 (SD1) and standard deviation 2 (SD2), and the low-frequency/high-frequency (LF/HF) ratio were calculated. The minimum, average and maximum RR intervals were significantly lower in training (indicative of an increase in heart rate as measured in beats per minute) than in the stable (p = 0.0006; p = 0.01; p = 0.03). SDRR, RMSSD, SD1, SD2 and the LF/HF ratio were all significantly lower in training than in the stable (p = 0.001; p = 0.049; p = 0.049; p = 0.001; p = 0.01). When comparing the HR and HRV of horses during Join-up® to overall training, there were no significant differences in any variable, with the exception of maximum RR, which was significantly lower (p = 0.007) during Join-up®, indicative of short increases in physical exertion (canter) associated with this training exercise. In conclusion, training of horses during public demonstrations is a low-to-moderate physiological, rather than psychological, stressor for horses. The physiological stress responses observed within this study were comparable to or lower than those previously reported in the literature for horses being trained outside of public audience events. Furthermore, there is no evidence that the use of Join-up® alters HR and HRV in a way to suggest that this training method negatively affects the psychological welfare of horses.
Hopper, John L.
2015-01-01
How can the "strengths" of risk factors, in the sense of how well they discriminate cases from controls, be compared when they are measured on different scales such as continuous, binary, and integer? Given that risk estimates take into account other fitted and design-related factors (and that is how risk gradients are interpreted), the presentation of risk gradients should do so as well. Therefore, for each risk factor X0, I propose using appropriate regression techniques to derive from appropriate population data the best-fitting relationship between the mean of X0 and all the other covariates fitted in the model or adjusted for by design (X1, X2, … , Xn). The odds per adjusted standard deviation (OPERA) presents the risk association for X0 in terms of the change in risk per s = standard deviation of X0 adjusted for X1, X2, … , Xn, rather than the unadjusted standard deviation of X0 itself. If the increased risk is relative risk (RR)-fold over A adjusted standard deviations, then OPERA = exp[ln(RR)/A] = RR^(1/A). This unifying approach is illustrated by considering breast cancer and published risk estimates. OPERA estimates are by definition independent and can be used to compare the predictive strengths of risk factors across diseases and populations. PMID:26520360
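The OPERA arithmetic itself is a one-liner; the sketch below evaluates it for a hypothetical risk gradient (a 4-fold relative risk spanning 2.5 adjusted standard deviations).

```python
import math

def opera(rr: float, a_sds: float) -> float:
    """Odds PER Adjusted standard deviation: RR-fold risk over A adjusted SDs."""
    return math.exp(math.log(rr) / a_sds)   # equivalently rr ** (1 / a_sds)

print(f"OPERA = {opera(4.0, 2.5):.2f} per adjusted SD")   # ~1.74
```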
NASA Astrophysics Data System (ADS)
Muji Susantoro, Tri; Wikantika, Ketut; Saepuloh, Asep; Handoyo Harsolumakso, Agus
2018-05-01
Selection of vegetation indices for plant mapping is needed to provide the best information on plant conditions. The methods used in this research are standard deviation analysis and linear regression. This research sought to determine the vegetation indices best suited for mapping sugarcane conditions around oil and gas fields. The data used in this study are Landsat 8 OLI/TIRS imagery. The standard deviation analysis of the 23 vegetation indices with 27 samples yielded the six indices with the highest standard deviations, namely GRVI, SR, NLI, SIPI, GEMI and LAI, with standard deviation values of 0.47, 0.43, 0.30, 0.17, 0.16 and 0.13, respectively. Regression correlation analysis of the 23 vegetation indices with 280 samples yielded six indices, namely NDVI, ENDVI, GDVI, VARI, LAI and SIPI, selected on the basis of regression correlations with R2 values of no less than 0.8. The combined analysis of the standard deviation and the regression correlation identified five vegetation indices: NDVI, ENDVI, GDVI, LAI and SIPI. The results show that combining the two methods is needed to produce a good analysis of sugarcane conditions. This was verified through field surveys, which showed good results for the prediction of microseepages.
Artes, Paul H; Iwase, Aiko; Ohno, Yuko; Kitazawa, Yoshiaki; Chauhan, Balwantray C
2002-08-01
To investigate the distributions of threshold estimates with the Swedish Interactive Threshold Algorithms (SITA) Standard, SITA Fast, and the Full Threshold algorithm (Humphrey Field Analyzer; Zeiss-Humphrey Instruments, Dublin, CA) and to compare the pointwise test-retest variability of these strategies. One eye of 49 patients (mean age, 61.6 years; range, 22-81) with glaucoma (Mean Deviation mean, -7.13 dB; range, +1.8 to -23.9 dB) was examined four times with each of the three strategies. The mean and median SITA Standard and SITA Fast threshold estimates were compared with a "best available" estimate of sensitivity (mean results of three Full Threshold tests). Pointwise 90% retest limits (5th and 95th percentiles of retest thresholds) were derived to assess the reproducibility of individual threshold estimates. The differences between the threshold estimates of the SITA and Full Threshold strategies were largest (approximately 3 dB) for midrange sensitivities (approximately 15 dB). The threshold distributions of SITA were considerably different from those of the Full Threshold strategy. The differences remained of similar magnitude when the analysis was repeated on a subset of 20 locations that are examined early during the course of a Full Threshold examination. With sensitivities above 25 dB, both SITA strategies exhibited lower test-retest variability than the Full Threshold strategy. Below 25 dB, the retest intervals of SITA Standard were slightly smaller than those of the Full Threshold strategy, whereas those of SITA Fast were larger. SITA Standard may be superior to the Full Threshold strategy for monitoring patients with visual field loss. The greater test-retest variability of SITA Fast in areas of low sensitivity is likely to offset the benefit of even shorter test durations with this strategy. The sensitivity differences between the SITA and Full Threshold strategies may relate to factors other than reduced fatigue. They are, however, small in comparison to the test-retest variability.
Morioka, Noriko; Tomio, Jun; Seto, Toshikazu; Kobayashi, Yasuki
2017-01-01
In Japan, the revision of the fee schedules in 2006 introduced a new category of general care ward for more advanced care, with a higher staffing standard: a patient-to-nurse ratio of 7:1. Previous studies have suggested that these changes worsened inequalities in the geographic distribution of nurses, but there have been few quantitative studies evaluating this effect. This study aimed to investigate the association between the distribution of 7:1 beds and the geographic distribution of hospital nursing staff. We conducted a secondary data analysis of hospital reimbursement reports in 2012 in Japan. The study units were secondary medical areas (SMAs) in Japan, which are roughly comparable to hospital service areas in the United States. The outcome variable was the nurse density per 100,000 population in each SMA. The 7:1 bed density per 100,000 population was the main independent variable. To investigate the association between the nurse density and 7:1 bed density, adjusting for other variables, we applied a multiple linear regression model, with nurse density as the outcome variable and the bed densities by functional category of inpatient ward as independent variables, adding other variables related to socio-economic status and nurse workforce. To investigate whether 7:1 bed density made the largest contribution to the nurse density, compared to other bed densities, we estimated the standardized regression coefficients. There were 344 SMAs in the study period, of which 343 were used because of data availability. There were approximately 553,600 full-time equivalent nurses working in inpatient wards in hospitals. The mean (standard deviation) of the full-time equivalent nurse density was 426.4 (147.5), and for 7:1 bed density the figures were 271.9 (185.9). The 7:1 bed density ranged from 0.0 to 1,295.5. After adjusting for the possible confounders, there were more hospital nurses in the areas with higher densities of 7:1 beds (standardized regression coefficient 0.62, 95% confidence interval 0.56-0.68). We found that the 7:1 nurse staffing standard made the largest contribution to the geographic distribution of hospital nurses, adjusted for socio-economic status and nurse workforce-related factors.
Automated EEG sleep staging in the term-age baby using a generative modelling approach.
Pillay, Kirubin; Dereymaeker, Anneleen; Jansen, Katrien; Naulaers, Gunnar; Van Huffel, Sabine; De Vos, Maarten
2018-06-01
We develop a method for automated four-state sleep classification of preterm and term-born babies at term-age of 38-40 weeks postmenstrual age (the age since the last menstrual cycle of the mother) using multichannel electroencephalogram (EEG) recordings. At this critical age, EEG differentiates from broader quiet sleep (QS) and active sleep (AS) stages to four, more complex states, and the quality and timing of this differentiation is indicative of the level of brain development. However, existing methods for automated sleep classification remain focussed only on QS and AS sleep classification. EEG features were calculated from 16 EEG recordings, in 30 s epochs, and personalized feature scaling used to correct for some of the inter-recording variability, by standardizing each recording's feature data using its mean and standard deviation. Hidden Markov models (HMMs) and Gaussian mixture models (GMMs) were trained, with the HMM incorporating knowledge of the sleep state transition probabilities. Performance of the GMM and HMM (with and without scaling) were compared, and Cohen's kappa agreement calculated between the estimates and clinicians' visual labels. For four-state classification, the HMM proved superior to the GMM. With the inclusion of personalized feature scaling, mean kappa (±standard deviation) was 0.62 (±0.16) compared to the GMM value of 0.55 (±0.15). Without feature scaling, kappas for the HMM and GMM dropped to 0.56 (±0.18) and 0.51 (±0.15), respectively. This is the first study to present a successful method for the automated staging of four states in term-age sleep using multichannel EEG. Results suggested a benefit in incorporating transition information using an HMM, and correcting for inter-recording variability through personalized feature scaling. Determining the timing and quality of these states are indicative of developmental delays in both preterm and term-born babies that may lead to learning problems by school age.
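The personalized feature scaling step described here, z-scoring each recording's feature matrix with that recording's own mean and standard deviation before model training, is simple to express; the arrays below are synthetic stand-ins for the 16 recordings' epoch-by-feature matrices.

```python
import numpy as np

rng = np.random.default_rng(6)
recordings = [rng.normal(loc=rng.uniform(-1, 1), scale=rng.uniform(0.5, 2.0),
                         size=(120, 8))            # 120 epochs x 8 EEG features
              for _ in range(16)]

def personalize(feats):
    mu = feats.mean(axis=0)
    sd = feats.std(axis=0, ddof=1)
    return (feats - mu) / sd                       # recording-specific z-scores

scaled = [personalize(f) for f in recordings]      # reduces inter-recording variability
```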
Shariat-Mohaymany, Afshin; Tavakoli-Kashani, Ali; Nosrati, Hadi; Ranjbari, Andisheh
2011-12-01
To identify the significant factors that influence head-on conflicts resulting from dangerous overtaking maneuvers on 2-lane rural roads in Iran. A traffic conflict technique was applied to 12 two-lane rural roads in order to investigate the potential situations for accidents to occur and thus to identify the geometric and traffic factors affecting traffic conflicts. Traffic data were collected via the inductive loop detectors installed on these roads, and geometric characteristics were obtained through field observations. Two groups of data were then analyzed independently by Pearson's chi-square test to evaluate their relationship to traffic conflicts. The independent variables were percentage of time spent following (PTSF), percentage of heavy vehicles, directional distribution of traffic (DDT), mean speed, speed standard deviation, section type, road width, longitudinal slope, holiday or workday, and lighting condition. It was indicated that increasing the PTSF, decreasing the percentage of heavy vehicles, increasing the mean speed (up to 75 km/h), increasing DDT in the range of 0 to 60 percent, and decreasing the standard deviation of speed significantly increased the occurrence of traffic conflicts. It was also revealed that traffic conflicts occur more frequently on curve sections and on workdays. The variables road width, slope, and lighting condition were found to have a minor effect on conflict occurrence. To reduce the number of head-on conflicts on the aforementioned roads, some remedial measures are suggested, such as not constructing long "No Passing" zones and constructing passing lanes where necessary; keeping road width at the standard value; constructing roads with horizontal curves and a high radius and using appropriate road markings and overtaking-forbidden signs where it is impossible to modify the radius; providing enough light and installing caution signs/devices on the roads; and intensifying police control and supervision on workdays, especially in peak hours.
Remote auditing of radiotherapy facilities using optically stimulated luminescence dosimeters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lye, Jessica, E-mail: jessica.lye@arpansa.gov.au; Dunn, Leon; Kenny, John
Purpose: On 1 July 2012, the Australian Clinical Dosimetry Service (ACDS) released its Optically Stimulated Luminescent Dosimeter (OSLD) Level I audit, replacing the previous TLD based audit. The aim of this work is to present the results from this new service and the complete uncertainty analysis on which the audit tolerances are based. Methods: The audit release was preceded by a rigorous evaluation of the InLight® nanoDot OSLD system from Landauer (Landauer, Inc., Glenwood, IL). Energy dependence, signal fading from multiple irradiations, batch variation, reader variation, and dose response factors were identified and quantified for each individual OSLD. The detectors are mailed to the facility in small PMMA blocks, based on the design of the existing Radiological Physics Centre audit. Modeling and measurement were used to determine a factor that could convert the dose measured in the PMMA block to dose in water for the facility's reference conditions. This factor is dependent on the beam spectrum. The TPR20,10 was used as the beam quality index to determine the specific block factor for a beam being audited. The audit tolerance was defined using a rigorous uncertainty calculation. The audit outcome is then determined using a scientifically based two-tiered action level approach. Audit outcomes within two standard deviations were defined as Pass (Optimal Level), within three standard deviations as Pass (Action Level), and outside of three standard deviations the outcome is Fail (Out of Tolerance). Results: To date the ACDS has audited 108 photon beams with TLD and 162 photon beams with OSLD. The TLD audit results had an average deviation from ACDS of 0.0% and a standard deviation of 1.8%. The OSLD audit results had an average deviation of −0.2% and a standard deviation of 1.4%. The relative combined standard uncertainty was calculated to be 1.3% (1σ). Pass (Optimal Level) was reduced to ≤2.6% (2σ), and Fail (Out of Tolerance) was reduced to >3.9% (3σ) for the new OSLD audit. Previously with the TLD audit the Pass (Optimal Level) and Fail (Out of Tolerance) were set at ≤4.0% (2σ) and >6.0% (3σ). Conclusions: The calculated standard uncertainty of 1.3% at one standard deviation is consistent with the measured standard deviation of 1.4% from the audits, confirming the suitability of the uncertainty budget derived audit tolerances. The OSLD audit shows greater accuracy than the previous TLD audit, justifying the reduction in audit tolerances. In the TLD audit, all outcomes were Pass (Optimal Level), suggesting that the tolerances were too conservative. In the OSLD audit 94% of the audits have resulted in Pass (Optimal Level) and 6% of the audits have resulted in Pass (Action Level). All Pass (Action Level) results have been resolved with a repeat OSLD audit, or an on-site ion chamber measurement.
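The two-tiered outcome logic described above reduces to a simple threshold rule; this sketch uses the OSLD audit's combined standard uncertainty of 1.3%, so the 2σ and 3σ action levels are 2.6% and 3.9%.

```python
def audit_outcome(deviation_pct: float, sigma_pct: float = 1.3) -> str:
    """Classify an audit deviation against 2-sigma and 3-sigma tolerances."""
    d = abs(deviation_pct)
    if d <= 2 * sigma_pct:
        return "Pass (Optimal Level)"
    if d <= 3 * sigma_pct:
        return "Pass (Action Level)"
    return "Fail (Out of Tolerance)"

for dev in (0.8, -3.1, 4.5):
    print(f"{dev:+.1f}% -> {audit_outcome(dev)}")
```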
[Anthropometry of elderly people living in geriatric institutions, Brazil].
de Menezes, Tarciana Nobre; de Fátima Nunes Marucci, Maria
2005-04-01
To provide anthropometric and body composition information on elderly people living in geriatric institutions. Three hundred and five elderly people, of both sexes, living in six geriatric institutions in Fortaleza were assessed. The following anthropometric variables were studied: weight, height, body mass index, mid-arm circumference, triceps skinfold thickness, arm muscle circumference, and corrected arm-muscle area. The body mass index was calculated as weight divided by the square of the height (m2). The arm muscle circumference and corrected arm-muscle area were calculated using specific equations. The results are presented as means, standard deviations and percentiles (5th, 10th, 25th, 50th, 75th, 90th and 95th). The analyses included Student's t-test to detect differences in mean values of the variables between sexes. The impact of age was investigated by ANOVA. For all variables, mean values in men were higher than those in women, except for triceps skinfold thickness. The mean differences between sexes in body mass index and mid-arm circumference were not statistically significant (p>0.05). Age contributed significantly to reducing the variables' values, which means that specific reference standards are needed for elderly people. Despite being institutionalized, the study population showed a trend of decreasing anthropometric values similar to that found in other studies of elderly people, though with different values. Such values could therefore be useful in the nutritional assessment of institutionalized elderly people.
Collinearity in Least-Squares Analysis
ERIC Educational Resources Information Center
de Levie, Robert
2012-01-01
How useful are the standard deviations per se, and how reliable are results derived from several least-squares coefficients and their associated standard deviations? When the output parameters obtained from a least-squares analysis are mutually independent, as is often assumed, they are reliable estimators of imprecision and so are the functions…
Standard Deviation for Small Samples
ERIC Educational Resources Information Center
Joarder, Anwar H.; Latif, Raja M.
2006-01-01
Neater representations for variance are given for small sample sizes, especially for 3 and 4. With these representations, variance can be calculated without a calculator if sample sizes are small and observations are integers, and an upper bound for the standard deviation is immediate. Accessible proofs of lower and upper bounds are presented for…
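The calculator-free representations alluded to rest on the pairwise-difference identity for sample variance, which can be checked numerically; the sketch below is our own illustration of that identity, not material from the paper.

```python
from itertools import combinations
from statistics import variance

def variance_pairwise(xs):
    """Sample variance via the pairwise-difference identity:
    s^2 = sum over i<j of (x_i - x_j)^2, divided by n*(n-1)."""
    n = len(xs)
    return sum((a - b) ** 2 for a, b in combinations(xs, 2)) / (n * (n - 1))

data = [4, 7, 9]  # small integer sample, easy to evaluate by hand
assert abs(variance_pairwise(data) - variance(data)) < 1e-12
print(variance_pairwise(data))  # (3^2 + 5^2 + 2^2) / 6 = 38/6
```

An upper bound on the standard deviation in terms of the sample range follows immediately from this form, since every pairwise difference is at most the range.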
Estimating maize water stress by standard deviation of canopy temperature in thermal imagery
USDA-ARS?s Scientific Manuscript database
A new crop water stress index using standard deviation of canopy temperature as an input was developed to monitor crop water status. In this study, thermal imagery was taken from maize under various levels of deficit irrigation treatments in different crop growing stages. The Expectation-Maximizatio...
NASA Astrophysics Data System (ADS)
Taha, Zahari; Muazu Musa, Rabiu; Majeed, Anwar P. P. Abdul; Razali Abdullah, Mohamad; Amirul Abdullah, Muhammad; Hasnun Arif Hassan, Mohd; Khalil, Zubair
2018-04-01
The present study employs a machine learning algorithm, namely the support vector machine (SVM), to classify high and low potential archers from a collection of bio-physiological variables. Fifty youth archers with a mean age and standard deviation of (17.0 ± .056), gathered from various archery programmes, completed a one-end shooting score test. The bio-physiological variables, namely resting heart rate, resting respiratory rate, resting diastolic blood pressure, resting systolic blood pressure, as well as calorie intake, were measured prior to the shooting tests. k-means cluster analysis was applied to cluster the archers based on their scores on the variables assessed. SVM models with linear, quadratic and cubic kernel functions were trained on the aforementioned variables. The k-means analysis clustered the archers into high potential archers (HPA) and low potential archers (LPA), respectively. The linear SVM exhibited good performance, with a classification accuracy of 94%, in comparison with the other tested models. The findings of this investigation can be valuable to coaches and sports managers in recognising high potential athletes from the selected bio-physiological variables examined.
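A rough sketch of the pipeline described (k-means labels, then SVM training) follows; the data are synthetic stand-ins for the five bio-physiological variables and scikit-learn is our assumed toolchain. Note that cross-validated accuracy on labels clustered from the same features will partly reflect the clustering itself.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))  # 50 archers x 5 hypothetical variables

# Cluster archers into two groups (high/low potential) from the same variables
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Train a linear-kernel SVM on the clustered labels and cross-validate
acc = cross_val_score(SVC(kernel="linear"), X, labels, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```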
Butler, Emily E; Saville, Christopher W N; Ward, Robert; Ramsey, Richard
2017-01-01
The human face cues a range of important fitness information, which guides mate selection towards desirable others. Given humans' high investment in the central nervous system (CNS), cues to CNS function should be especially important in social selection. We tested if facial attractiveness preferences are sensitive to the reliability of human nervous system function. Several decades of research suggest an operational measure for CNS reliability is reaction time variability, which is measured by standard deviation of reaction times across trials. Across two experiments, we show that low reaction time variability is associated with facial attractiveness. Moreover, variability in performance made a unique contribution to attractiveness judgements above and beyond both physical health and sex-typicality judgements, which have previously been associated with perceptions of attractiveness. In a third experiment, we empirically estimated the distribution of attractiveness preferences expected by chance and show that the size and direction of our results in Experiments 1 and 2 are statistically unlikely without reference to reaction time variability. We conclude that an operating characteristic of the human nervous system, reliability of information processing, is signalled to others through facial appearance. Copyright © 2016 Elsevier B.V. All rights reserved.
Experiments with central-limit properties of spatial samples from locally covariant random fields
Barringer, T.H.; Smith, T.E.
1992-01-01
When spatial samples are statistically dependent, the classical estimator of sample-mean standard deviation is well known to be inconsistent. For locally dependent samples, however, consistent estimators of sample-mean standard deviation can be constructed. The present paper investigates the sampling properties of one such estimator, designated as the tau estimator of sample-mean standard deviation. In particular, the asymptotic normality properties of standardized sample means based on tau estimators are studied in terms of computer experiments with simulated sample-mean distributions. The effects of both sample size and dependency levels among samples are examined for various values of tau (denoting the size of the spatial kernel for the estimator). The results suggest that even for small degrees of spatial dependency, the tau estimator exhibits significantly stronger normality properties than does the classical estimator of standardized sample means. © 1992.
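One reading of such a kernel-based estimator is a variance estimate for the sample mean that keeps empirical cross-products only for pairs of sites within distance tau. The sketch below implements that generic idea under our own assumptions; it is not the authors' estimator, and the truncation at zero is our own safeguard.

```python
import numpy as np

def tau_sample_mean_sd(values, coords, tau):
    """Kernel-truncated estimate of the standard deviation of the sample
    mean: include empirical cross-products only for pairs of sites whose
    separation is at most tau. A generic sketch of the idea."""
    x = np.asarray(values, float) - np.mean(values)
    pts = np.asarray(coords, float)
    n = len(x)
    var_mean = 0.0
    for i in range(n):
        for j in range(n):
            if np.linalg.norm(pts[i] - pts[j]) <= tau:
                var_mean += x[i] * x[j]
    return np.sqrt(max(var_mean, 0.0)) / n  # sqrt(sum / n^2)

rng = np.random.default_rng(1)
coords = rng.uniform(0, 10, size=(100, 2))  # simulated site locations
vals = rng.normal(size=100)
print(tau_sample_mean_sd(vals, coords, tau=1.5))
```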
Weir, Christopher J.; Rubio, Noah; Rabinovich, Roberto; Pinnock, Hilary; Hanley, Janet; McCloughan, Lucy; Drost, Ellen M.; Mantoani, Leandro C.; MacNee, William; McKinstry, Brian
2016-01-01
Introduction The Bland-Altman limits of agreement method is widely used to assess how well the measurements produced by two raters, devices or systems agree with each other. However, mixed effects versions of the method which take into account multiple sources of variability are less well described in the literature. We address the practical challenges of applying mixed effects limits of agreement to the comparison of several devices to measure respiratory rate in patients with chronic obstructive pulmonary disease (COPD). Methods Respiratory rate was measured in 21 people with a range of severity of COPD. Participants were asked to perform eleven different activities representative of daily life during a laboratory-based standardised protocol of 57 minutes. A mixed effects limits of agreement method was used to assess the agreement of five commercially available monitors (Camera, Photoplethysmography (PPG), Impedance, Accelerometer, and Chest-band) with the current gold standard device for measuring respiratory rate. Results Results produced using mixed effects limits of agreement were compared to results from a fixed effects method based on analysis of variance (ANOVA) and were found to be similar. The Accelerometer and Chest-band devices produced the narrowest limits of agreement (-8.63 to 4.27 and -9.99 to 6.80 respectively) with mean bias -2.18 and -1.60 breaths per minute. These devices also had the lowest within-participant and overall standard deviations (3.23 and 3.29 for Accelerometer and 4.17 and 4.28 for Chest-band respectively). Conclusions The mixed effects limits of agreement analysis enabled us to answer the question of which devices showed the strongest agreement with the gold standard device with respect to measuring respiratory rates. In particular, the estimated within-participant and overall standard deviations of the differences, which are easily obtainable from the mixed effects model results, gave a clear indication that the Accelerometer and Chest-band devices performed best. PMID:27973556
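As a rough illustration of the mixed-effects limits-of-agreement idea described above, the sketch below fits a random-intercept model to simulated device-minus-gold-standard differences; the data, dimensions (21 participants, 11 activities) and the choice of statsmodels are our own assumptions, not the authors' code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_subj, n_act = 21, 11
# Simulated differences: within-subject noise plus a per-subject offset
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_act),
    "diff": rng.normal(-2.0, 1.5, n_subj * n_act)
            + np.repeat(rng.normal(0, 2.5, n_subj), n_act),
})

# Random intercept per participant separates between- and within-subject variance
m = smf.mixedlm("diff ~ 1", df, groups=df["subject"]).fit()
bias = m.params["Intercept"]
sd_total = np.sqrt(m.cov_re.iloc[0, 0] + m.scale)  # between + within components
print(f"bias {bias:.2f}, LoA {bias - 1.96*sd_total:.2f} to {bias + 1.96*sd_total:.2f}")
```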
YALE NATURAL RADIOCARBON MEASUREMENTS. PART VI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stuiver, M.; Deevey, E.S.
1961-01-01
Most of the measurements made since publication of Yale V are included; some measurements, such as a series collected in Greenland, are withheld pending additional information or field work that will make better interpretations possible. In addition to radiocarbon dates of geologic and/or archaeologic interest, recent assays are given of ¹⁴C in lake waters and other lacustrine materials, now normalized for ¹³C content. The newly accepted convention is followed in expressing normalized ¹⁴C values as Δ = δ¹⁴C − (2δ¹³C + 50)(1 + δ¹⁴C/1000), where Δ is the per-mil deviation of the ¹⁴C of the sample from any contemporary standard (whether organic or a carbonate) after correction of sample and/or standard for real age, for the Suess effect, for normal isotopic fractionation, and for deviations of the ¹⁴C content of the age- and pollution-corrected 19th-century wood standard from that of 95% of the NBS oxalic acid standard; δ¹⁴C is the measured deviation from 95% of the NBS standard, and δ¹³C is the deviation from the NBS limestone standard, both in per mil. These assays are variously affected by artificial ¹⁴C resulting from nuclear tests. (auth)
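The convention, as reconstructed above, is a one-line function; the sketch below uses illustrative per-mil values, and the formula reconstruction itself should be checked against the original publication.

```python
def delta_c14(d14c, d13c):
    """Normalized per-mil deviation following the convention quoted above:
    DELTA = d14C - (2*d13C + 50) * (1 + d14C/1000),
    with d14C and d13C in per mil relative to the NBS standards."""
    return d14c - (2 * d13c + 50) * (1 + d14c / 1000)

# Example: d14C = -50 per mil, d13C = -25 per mil; the fractionation term
# vanishes when 2*d13C + 50 = 0, so DELTA equals d14C here.
print(delta_c14(-50.0, -25.0))
```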
Increased beat-to-beat T-wave variability in myocardial infarction patients.
Hasan, Muhammad A; Abbott, Derek; Baumert, Mathias; Krishnan, Sridhar
2018-03-28
The purpose of this study was to investigate the beat-to-beat variability of T-waves (TWV) and to assess the diagnostic capabilities of T-wave-based features for myocardial infarction (MI). A total of 148 recordings of standard 12-lead electrocardiograms (ECGs) from 79 MI patients (22 females, mean age 63±12 years; 57 males, mean age 57±10 years) and 69 recordings from healthy subjects (HS) (17 females, 42±18 years; 52 males, 40±13 years) were studied. For the quantification of beat-to-beat QT intervals in ECG signal, a template-matching algorithm was applied. To study the T-waves beat-to-beat, we measured the angle between T-wave max and T-wave end with respect to Q-wave (∠α) and T-wave amplitudes. We computed the standard deviation (SD) of beat-to-beat T-wave features and QT intervals as markers of variability in T-waves and QT intervals, respectively, for both patients and HS. Moreover, we investigated the differences in the studied features based on gender and age for both groups. Significantly increased TWV and QT interval variability (QTV) were found in MI patients compared to HS (p<0.05). No significant differences were observed based on gender or age. TWV may have some diagnostic attributes that may facilitate identifying patients with MI. In addition, the proposed beat-to-beat angle variability was found to be independent of heart rate variations. Moreover, the proposed feature seems to have higher sensitivity than previously reported feature (QT interval and T-wave amplitude) variability for identifying patients with MI.
Searching for faint AGN in the CDFS: an X-ray (Chandra) vs optical variability (HST) comparison.
NASA Astrophysics Data System (ADS)
Georgantopoulos, I.; Pouliasis, E.; Bonanos, A.; Sokolovsky, K.; Yang, M.; Hatzidimitriou, D.; Bellas, I.; Gavras, P.; Spetsieri, Z.
2017-10-01
X-ray surveys are believed to be the most efficient way to detect AGN. Recently, though, optical variability studies have been claimed to probe even fainter AGN. We present results from an HST study aimed at identifying Active Galactic Nuclei (AGN) through optical variability selection in the CDFS. This work is part of the 'Hubble Catalogue of Variables' project of ESA, which aims to identify variable sources in the Hubble Source Catalogue. In particular, we used Hubble Space Telescope (HST) z-band images taken over 5 epochs and performed aperture photometry to derive the lightcurves of the sources. Two statistical methods (standard deviation and interquartile range) were applied, resulting in a final sample of 175 variable AGN candidates after removing artifacts by visual inspection and excluding known stars and supernovae. The fact that the majority of the sources are extended and variable indicates AGN activity. We assess the efficiency of the method by comparing with the 7Ms Chandra detections. Our work shows that optical variability probes AGN at comparable redshifts but at deeper optical magnitudes. Our candidate AGN (not detected in X-rays) have luminosities of L_x < 6×10^{40} erg/s at z ˜ 0.7, suggesting that these are associated with low-luminosity Seyferts and LINERs.
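A minimal sketch of the kind of variability cut described (standard deviation and interquartile range over epochs); the thresholds, magnitudes and function name are hypothetical.

```python
import numpy as np

def is_variable(mags, sd_thresh, iqr_thresh):
    """Flag a lightcurve as variable if its standard deviation or its
    interquartile range exceeds a threshold (both thresholds assumed)."""
    mags = np.asarray(mags, float)
    sd = mags.std(ddof=1)
    q75, q25 = np.percentile(mags, [75, 25])
    return sd > sd_thresh or (q75 - q25) > iqr_thresh

lightcurve = [21.30, 21.42, 21.28, 21.55, 21.35]  # z-band magnitudes, 5 epochs
print(is_variable(lightcurve, sd_thresh=0.05, iqr_thresh=0.08))
```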
Analysis of proximal and distal muscle activity during handwriting tasks.
Naider-Steinhart, Shoshana; Katz-Leurer, Michal
2007-01-01
In this study we sought to describe upper-extremity proximal and distal muscle activity in typically developing children during a handwriting task and to explore the relationship between muscle activity and speed and quality of writing. We evaluated 35 third- and fourth-grade Israeli children using the Alef-Alef Ktav Yad Hebrew Handwriting Test. Simultaneously, we recorded the participants' upper trapezius and thumb muscle activity by surface electromyography. Using the coefficient of variation (standard deviation divided by mean amplitude) as a measure of variability within each muscle, we analyzed differences in muscle activity variability within and between muscles. The proximal muscle displayed significantly less variability than the distal muscles. Decreased variability in proximal muscle activity was associated with decreased variability in distal muscle activity, and decreased variability in the distal muscles was significantly associated with faster speed of writing. The lower amount of variability exhibited in the proximal muscle compared with the distal muscles seems to indicate that the proximal muscle functions as a stabilizer during a handwriting task. In addition, decreased variability in both proximal and distal muscle activity appears to be more economical and is related to faster writing speed. Knowledge of the type of proximal and distal muscle activity used during handwriting can help occupational therapists plan treatment for children with handwriting disabilities.
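The variability measure used here is the coefficient of variation, standard deviation divided by mean amplitude; a trivial sketch with hypothetical EMG amplitudes follows.

```python
import numpy as np

def coefficient_of_variation(amplitudes):
    """Within-muscle variability: standard deviation divided by mean amplitude."""
    a = np.asarray(amplitudes, float)
    return a.std(ddof=1) / a.mean()

emg_trapezius = [0.42, 0.45, 0.44, 0.43]  # hypothetical EMG amplitudes (mV)
emg_thumb = [0.18, 0.29, 0.12, 0.24]
print(coefficient_of_variation(emg_trapezius))  # proximal: lower variability
print(coefficient_of_variation(emg_thumb))      # distal: higher variability
```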
Ito, Masanori; Kado, Naoki; Suzuki, Toshiaki; Ando, Hiroshi
2013-01-01
[Purpose] The purpose of this study was to investigate the influence of external pacing with periodic auditory stimuli on the control of periodic movement. [Subjects and Methods] Eighteen healthy subjects performed self-paced, synchronization-continuation, and syncopation-continuation tapping. Inter-onset intervals were 1,000, 2,000 and 5,000 ms. The variability of inter-tap intervals was compared between the different pacing conditions and between self-paced tapping and each continuation phase. [Results] There were no significant differences in the mean and standard deviation of the inter-tap interval between pacing conditions. For the 1,000 and 5,000 ms tasks, there were significant differences in the mean inter-tap interval following auditory pacing compared with self-pacing. For the 2,000 ms syncopation condition and 5,000 ms task, there were significant differences from self-pacing in the standard deviation of the inter-tap interval following auditory pacing. [Conclusion] These results suggest that the accuracy of periodic movement with intervals of 1,000 and 5,000 ms can be improved by the use of auditory pacing. However, the consistency of periodic movement is mainly dependent on the inherent skill of the individual; thus, improvement of consistency based on pacing is unlikely. PMID:24259932
Phonological Awareness and Print Knowledge of Preschool Children with Cochlear Implants
Ambrose, Sophie E.; Fey, Marc E.; Eisenberg, Laurie S.
2012-01-01
Purpose To determine whether preschool-age children with cochlear implants have age-appropriate phonological awareness and print knowledge and to examine the relationships of these skills with related speech and language abilities. Method 24 children with cochlear implants (CIs) and 23 peers with normal hearing (NH), ages 36 to 60 months, participated. Children’s print knowledge, phonological awareness, language, speech production, and speech perception abilities were assessed. Results For phonological awareness, the CI group’s mean score fell within 1 standard deviation of the TOPEL’s normative sample mean but was more than 1 standard deviation below our NH group mean. The CI group’s performance did not differ significantly from that of the NH group for print knowledge. For the CI group, phonological awareness and print knowledge were significantly correlated with language, speech production, and speech perception. Together, these predictor variables accounted for 34% of variance in the CI group’s phonological awareness but no significant variance in their print knowledge. Conclusions Children with CIs have the potential to develop age-appropriate early literacy skills by preschool-age but are likely to lag behind their NH peers in phonological awareness. Intervention programs serving these children should target these skills with instruction and by facilitating speech and language development. PMID:22223887
A stochastic visco-hyperelastic model of human placenta tissue for finite element crash simulations.
Hu, Jingwen; Klinich, Kathleen D; Miller, Carl S; Rupp, Jonathan D; Nazmi, Giseli; Pearlman, Mark D; Schneider, Lawrence W
2011-03-01
Placental abruption is the most common cause of fetal deaths in motor-vehicle crashes, but studies on the mechanical properties of human placenta are rare. This study presents a new method of developing a stochastic visco-hyperelastic material model of human placenta tissue using a combination of uniaxial tensile testing, specimen-specific finite element (FE) modeling, and stochastic optimization techniques. In our previous study, uniaxial tensile tests of 21 placenta specimens have been performed using a strain rate of 12/s. In this study, additional uniaxial tensile tests were performed using strain rates of 1/s and 0.1/s on 25 placenta specimens. Response corridors for the three loading rates were developed based on the normalized data achieved by test reconstructions of each specimen using specimen-specific FE models. Material parameters of a visco-hyperelastic model and their associated standard deviations were tuned to match both the means and standard deviations of all three response corridors using a stochastic optimization method. The results show a very good agreement between the tested and simulated response corridors, indicating that stochastic analysis can improve estimation of variability in material model parameters. The proposed method can be applied to develop stochastic material models of other biological soft tissues.
Alves, Vera; Gonçalves, João; Conceição, Carlota; Teixeira, Helena M; Câmara, José S
2015-08-21
A powerful and sensitive method, based on microextraction by packed sorbent (MEPS) and ultra-high performance liquid chromatography (UHPLC) with photodiode array (PDA) detection, is described for the determination of fluoxetine, clomipramine and their active metabolites in human urine samples. The MEPS variables, such as sample volume, pH, number of extraction cycles (draw-eject), and desorption conditions (solvent and solvent volume of elution), were optimized. The analyses were carried out using small sample volumes (500 μL) and in a short time period (5 min for the entire sample preparation step). Good linearity was obtained for all antidepressants, with correlation coefficients (R(2)) above 0.9965. The limits of detection (LOD) ranged from 0.068 to 0.087 μg mL(-1). The recoveries were from 93% to 98%, with relative standard deviations less than 6%. The inter-day precision, expressed as the relative standard deviation, varied between 3.8% and 8.5%, while the intra-day precision varied between 3.0% and 7.1%. In order to evaluate the proposed method for clinical use, the MEPS/UHPLC-PDA method was applied to the analysis of urine samples from depressed patients. Copyright © 2015 Elsevier B.V. All rights reserved.
Szpylka, John; DeVries, Jonathan W.; Bhandari, S.; Bui, M.H.; Ji, D.; Konings, E.; Lewis, R.; Maas, P.; Parish, H.; Post, B.; Schierle, J.; Sullivan, D.; Taylor, A.; Wang, J.; Ware, G.; Woollard, D.; Wu, T.
2008-01-01
Twelve laboratories representing 4 countries participated in an interlaboratory study conducted to determine all-trans-β-carotene and total β-carotene in dietary supplements and raw materials. Thirteen samples were sent as blind duplicates to the collaborators. Results obtained from 11 laboratories are reported. For products composed as softgels and tablets that were analyzed for total β-carotene, the reproducibility relative standard deviation (RSDR) ranged from 3.35 to 23.09% and the HorRat values ranged from 1.06 to 3.72. For these products analyzed for trans β-carotene, the reproducibility relative standard deviation (RSDR) ranged from 4.28 to 22.76% and the HorRat values ranged from 0.92 to 3.37. The RSDr and HorRat values in the analysis of a beadlet raw material were substantial and it is believed that the variability within the material itself introduced significant variation in subsampling. The method uses high pressure liquid chromatography (LC) in the reversed-phase mode with visible light absorbance for detection and quantitation. If high levels of α-carotenes are present, a second LC system is used for additional separation and quantitation of the carotene species. It is recommended that the method be adopted as an AOAC Official Method. PMID:16385976
Hagen, Kristine Amlund; Ogden, Terje
2017-04-01
This non-randomised study examined a set of predictive factors of changes in child behaviour following parent management training (PMTO). Families of 331 Norwegian girls (26%) and boys with clinic-level conduct problems participated. The children ranged in age from 3 to 12 years (M age = 8.69). Retention rate was 72.2% at post-assessment. Child-, parent- and therapy-level variables were entered as predictors of multi-informant reported change in externalising behaviour and social skills. Behavioural improvements following PMTO amounted to 1 standard deviation on parent rated and ½ standard deviation on teacher rated externalising behaviour, while social skills improvements were more modest. Results suggested that children with higher symptom scores and lower social skills score at pre-treatment were more likely to show improvements in these areas. According to both parent- and teacher-ratings, girls tended to show greater improvements in externalising behaviour and social skills following treatment and, according to parents, ADHD symptomology appeared to inhibit improvements in social skills. Finally, observed increases in parental skill encouragement, therapists' satisfaction with treatment and the number of hours spent in therapy by children were also positive and significant predictors of child outcomes. © 2016 International Union of Psychological Science.
Mills, Kathryn; Hunt, Michael A; Ferber, Reed
2013-10-01
To identify which gait deviations are consistently associated with knee osteoarthritis (KOA) and how these are influenced by disease severity, the involved compartment, and sex. Five electronic databases and reference lists of publications were searched. Cross-sectional, observational studies comparing temporospatial variables, joint kinematics, and joint moments between individuals with KOA and healthy controls or between KOA subgroups were considered for review. Only publications scoring ≥50% on a modified methodology quality index were included. Because of the number of gait deviations examined, only biomechanical variables reported by ≥4 publications were further analyzed. Where possible, a meta-analysis was performed using effect sizes (ES) calculated from discrete variables. In total, 41 publications examining 20 variables were included. The majority of consistent gait deviations associated with KOA were exhibited by those with severe disease in the temporospatial domain. Individuals with severe KOA exhibited greater stride duration than controls (ES 1.35 [95% confidence interval (95% CI) 1.03, 1.67]) and a decrease in cadence (ES -0.75 [95% CI -1.12, -0.39]) compared with controls. The evidence for kinematic and joint moment change was primarily limited or conflicting. There was a lack of evidence for alterations in the external knee adduction moment. Individuals with KOA exhibit a range of gait deviations compared with controls. Despite its common usage in KOA gait studies, we did not find consistent evidence that knee adduction moment differs between those with and without KOA or between disease severity levels. Further research examining the reasons for a lack of difference in many gait variables in those with knee OA is needed. Copyright © 2013 by the American College of Rheumatology.
Exploring Statistical Characterizations of Morphologic Change and Variability: Fire Island, New York
NASA Astrophysics Data System (ADS)
Lentz, E. E.; Hapke, C. J.
2012-12-01
A comprehensive understanding of coastal barrier behavior requires high-resolution observations that capture a wide range of morphological changes occurring over a range of spatial and temporal scales. Fire Island National Seashore, located along the coast of Long Island, New York, is a well-studied barrier island coast where understanding how morphological changes contribute to barrier island vulnerability has important implications for coastal land management. Previous work has shown that morphologic differences in eastern and western reaches are attributable to the underlying geology and variations in sediment transport in the system. In this study, we further explore western and eastern differences and variability with lidar-derived topographic surfaces to provide a unique and comprehensive investigation of dune-beach change at Fire Island, New York. Continuous topographic surfaces generated from 12 lidar surveys collected between 1998 and 2011 are used to examine the three-dimensional variability over a range of time periods along the 50 km long island. Because surveys were collected over a range of seasons and in response to a number of storm events, we explore morphologic configurations reflecting seasonality, post-storm state, and replenishment response through the generation of a representative or average surface. These averaged surfaces provide the context for what would be an expected or typical coastal configuration under certain conditions and, through comparison with an individual event, can be used to derive an event-specific spatial-change signature. To investigate anthropogenic influences, differences in morphology between a survey collected after a substantial beach replenishment project and a typical fair-weather configuration averaged from six surveys are determined. Storm response variations are also explored by assessing differences between Tropical Storm Irene (2011), Nor'Ida (2009), and a typical post-storm configuration averaged from five post-storm surveys. In addition to averaged surfaces, surveys are combined to generate a new raster surface reflecting cell-by-cell standard deviations over a defined period. Standard deviation surfaces are generated to highlight 1) where areas of highest and lowest morphologic variation are located over the entire period, and 2) whether spatial similarities exist in variability between storm and non-storm morphologies. Results show there are distinct and variable responses in eastern and western reaches attributable to wave climate, profile gradient, and offshore bathymetry, as well as to a general along-coast increase in sediment availability.
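Cell-by-cell mean and standard deviation surfaces of the kind described reduce to array reductions over a stack of co-registered grids; the sketch below uses synthetic data and assumed grid dimensions.

```python
import numpy as np

# Hypothetical stack of co-registered elevation grids, one per lidar survey
rng = np.random.default_rng(3)
surveys = rng.normal(2.0, 0.3, size=(12, 200, 300))  # 12 surveys, rows x cols

mean_surface = np.nanmean(surveys, axis=0)           # "average" configuration
sd_surface = np.nanstd(surveys, axis=0, ddof=1)      # cell-by-cell variability
print(sd_surface.max())  # highest morphologic variation over the period
```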
An, Shasha; Zheng, Xiaoming; Li, Zhifang; Wang, Yang; Wu, Yuntao; Zhang, Wenyan; Zhao, Haiyan; Wu, Aiping; Wang, Ruixia; Tao, Jie; Gao, Xinying; Wu, Shouling
2015-11-01
To investigate the correlation between long-term systolic blood pressure variability (SBPV) and short-term SBPV in an aged population. A total of 752 subjects aged ≥60 years from the Kailuan Group who took part in the 2006-2007, 2008-2009, 2010-2011 and 2012-2013 health examinations were included by the cluster sampling method. Long-term SBPV was calculated as the standard deviation of the mean systolic blood pressures measured in 2006-2007, 2008-2009, 2010-2011 and 2012-2013; short-term SBPV was represented by the standard deviation of systolic blood pressure (SSD) derived from 24-hour ambulatory blood pressure monitoring. The observation population was divided into three groups according to tertiles of long-term SBPV: the first tertile (<9.09 mmHg (1 mmHg=0.133 kPa)), the second tertile (≥9.09 mmHg and <14.29 mmHg), and the third tertile (≥14.29 mmHg). Multivariate logistic regression analysis was used to analyze the correlation between long-term SBPV and short-term SBPV. (1) The participants' age was (67.0±5.7) years (284 women). (2) The 24-hour and daytime SSD were (14.7±4.0) mmHg, (14.7±3.5) mmHg and (15.7±4.4) mmHg (P=0.010), and (14.1±4.4) mmHg, (14.2±3.5) mmHg and (15.4±4.6) mmHg (P<0.001), respectively, across the tertiles of long-term SBPV; nighttime SSD were (12.0±4.4) mmHg, (11.8±4.8) mmHg and (11.9±4.9) mmHg (P=0.900). (3) Multiple logistic regression analysis showed that the tertile of long-term SBPV was a risk factor for increased daytime SSD>14.00 mmHg (OR=1.51, 95%CI: 1.03-2.23, P=0.037), but not for increased 24-hour SSD>14.41 mmHg (OR=1.10, 95%CI: 0.75-1.61, P=0.639) or nighttime SSD>11.11 mmHg (OR=0.98, 95%CI: 0.67-1.42, P=0.899). Increased long-term SBPV is a risk factor for increased daytime SBPV.
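A schematic of the analysis pipeline described (visit-level standard deviation, tertile grouping, logistic regression) is sketched below; all numbers are simulated, the 14.00 mmHg cut-off is taken from the abstract, and the outcome generation is purely illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 752
# Long-term SBPV: standard deviation of the four visit-mean systolic pressures
visits = rng.normal(140, 12, size=(n, 4))
long_sbpv = visits.std(axis=1, ddof=1)
tertile = pd.qcut(long_sbpv, 3, labels=[1, 2, 3]).astype(int)

# Outcome: daytime SSD above the 14.00 mmHg cut-off (simulated association)
high_day_ssd = (rng.normal(14, 4, n) + 0.5 * (tertile - 1) > 14).astype(int)
df = pd.DataFrame({"tertile": tertile, "high_day_ssd": high_day_ssd})
fit = smf.logit("high_day_ssd ~ tertile", df).fit(disp=0)
print(np.exp(fit.params["tertile"]))  # odds ratio per tertile increase
```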
Epidemiological overview of HIV/AIDS in pregnant women from a state of northeastern Brazil.
Silva, Claúdia Mendes da; Alves, Regina de Souza; Santos, Tâmyssa Simões Dos; Bragagnollo, Gabriela Rodrigues; Tavares, Clodis Maria; Santos, Amuzza Aylla Pereira Dos
2018-01-01
To learn the epidemiological characteristics of HIV infection in pregnant women. Descriptive study with quantitative approach. The study population was composed of pregnant women with HIV/AIDS residing in the state of Alagoas. Data were organized into variables and analyzed according to the measures of dispersion parameter relevant to the arithmetic mean and standard deviation (X ± S). Between 2007 and 2015, 773 cases of HIV/AIDS were recorded in pregnant women in Alagoas. The studied variables identified that most of these pregnant women were young, had low levels of education and faced socioeconomic vulnerability. It is necessary to include actions aimed at increasing the attention paid to women, once the assurance of full care and early diagnosis of HIV are important strategies to promote adequate treatment adherence and reduce the vertical transmission.
NASA Technical Reports Server (NTRS)
Hargraves, W. R.; Delulio, E. B.; Justus, C. G.
1977-01-01
The Global Reference Atmospheric Model is used along with the revised perturbation statistics to evaluate and computer graph various atmospheric statistics along a space shuttle reference mission and abort trajectory. The trajectory plots are height vs. ground range, with height from ground level to 155 km and ground range along the reentry trajectory. Cross sectional plots, height vs. latitude or longitude, are also generated for 80 deg longitude, with heights from 30 km to 90 km and latitude from -90 deg to +90 deg, and for 45 deg latitude, with heights from 30 km to 90 km and longitudes from 180 deg E to 180 deg W. The variables plotted are monthly average pressure, density, temperature, wind components, and wind speed and standard deviations and 99th inter-percentile range for each of these variables.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sleiman, Mohamad; Chen, Sharon; Gilbert, Haley E.
A laboratory method to simulate natural exposure of roofing materials has been reported in a companion article. In the current article, we describe the results of an international, nine-participant interlaboratory study (ILS) conducted in accordance with ASTM Standard E691-09 to establish the precision and reproducibility of this protocol. The accelerated soiling and weathering method was applied four times by each laboratory to replicate coupons of 12 products representing a wide variety of roofing categories (single-ply membrane, factory-applied coating (on metal), bare metal, field-applied coating, asphalt shingle, modified-bitumen cap sheet, clay tile, and concrete tile). Participants reported initial and laboratory-aged values of solar reflectance and thermal emittance. Measured solar reflectances were consistent within and across eight of the nine participating laboratories. Measured thermal emittances reported by six participants exhibited comparable consistency. For solar reflectance, the accelerated aging method is both repeatable and reproducible within an acceptable range of standard deviations: the repeatability standard deviation sr ranged from 0.008 to 0.015 (relative standard deviation of 1.2–2.1%) and the reproducibility standard deviation sR ranged from 0.022 to 0.036 (relative standard deviation of 3.2–5.8%). The ILS confirmed that the accelerated aging method can be reproduced by multiple independent laboratories with acceptable precision. In conclusion, this study supports the adoption of the accelerated aging practice to speed the evaluation and performance rating of new cool roofing materials.
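The E691-style precision statistics sr and sR can be computed from a labs-by-replicates table using the standard pooled-variance formulas; the sketch below does this with synthetic reflectance data (the 9 × 4 layout mirrors the study design, but the numbers are invented).

```python
import numpy as np

def e691_precision(data):
    """Repeatability (sr) and reproducibility (sR) standard deviations in the
    spirit of ASTM E691: data is labs x replicates for one material."""
    data = np.asarray(data, float)
    p, n = data.shape
    lab_means = data.mean(axis=1)
    sr2 = data.var(axis=1, ddof=1).mean()    # pooled within-lab variance
    sxbar2 = lab_means.var(ddof=1)           # variance of lab means
    sL2 = max(sxbar2 - sr2 / n, 0.0)         # between-lab component
    return np.sqrt(sr2), np.sqrt(sL2 + sr2)

# Hypothetical solar reflectance of one aged product, 9 labs x 4 replicates
rng = np.random.default_rng(5)
refl = rng.normal(0.62, 0.01, size=(9, 4)) + rng.normal(0, 0.02, size=(9, 1))
sr, sR = e691_precision(refl)
print(f"sr = {sr:.3f}, sR = {sR:.3f}")
```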
Barbieri, Carlo; Molina, Manuel; Ponce, Pedro; Tothova, Monika; Cattinelli, Isabella; Ion Titapiccolo, Jasmine; Mari, Flavio; Amato, Claudia; Leipold, Frank; Wehmeyer, Wolfgang; Stuard, Stefano; Stopper, Andrea; Canaud, Bernard
2016-08-01
Managing anemia in hemodialysis patients can be challenging because of competing therapeutic targets and individual variability. Because therapy recommendations provided by a decision support system can benefit both patients and doctors, we evaluated the impact of an artificial intelligence decision support system, the Anemia Control Model (ACM), on anemia outcomes. Based on patient profiles, the ACM was built to recommend suitable erythropoietic-stimulating agent doses. Our retrospective study consisted of a 12-month control phase (standard anemia care), followed by a 12-month observation phase (ACM-guided care) encompassing 752 patients undergoing hemodialysis therapy in 3 NephroCare clinics located in separate countries. The percentage of hemoglobin values on target, the median darbepoetin dose, and individual hemoglobin fluctuation (estimated from the intrapatient hemoglobin standard deviation) were deemed primary outcomes. In the observation phase, median darbepoetin consumption significantly decreased from 0.63 to 0.46 μg/kg/month, whereas on-target hemoglobin values significantly increased from 70.6% to 76.6%, reaching 83.2% when the ACM suggestions were implemented. Moreover, ACM introduction led to a significant decrease in hemoglobin fluctuation (intrapatient standard deviation decreased from 0.95 g/dl to 0.83 g/dl). Thus, ACM support helped improve anemia outcomes of hemodialysis patients, minimizing erythropoietic-stimulating agent use with the potential to reduce the cost of treatment. Copyright © 2016 International Society of Nephrology. Published by Elsevier Inc. All rights reserved.
Zaugg, Steven D.; Smith, Steven G.; Schroeder, Michael P.
2006-01-01
A method for the determination of 69 compounds typically found in domestic and industrial wastewater is described. The method was developed in response to increasing concern over the impact of endocrine-disrupting chemicals on aquatic organisms in wastewater. This method also is useful for evaluating the effects of combined sanitary and storm-sewer overflow on the water quality of urban streams. The method focuses on the determination of compounds that are indicators of wastewater or have endocrine-disrupting potential. These compounds include the alkylphenol ethoxylate nonionic surfactants, food additives, fragrances, antioxidants, flame retardants, plasticizers, industrial solvents, disinfectants, fecal sterols, polycyclic aromatic hydrocarbons, and high-use domestic pesticides. Wastewater compounds in whole-water samples were extracted using continuous liquid-liquid extractors and methylene chloride solvent, and then determined by capillary-column gas chromatography/mass spectrometry. Recoveries in reagent-water samples fortified at 0.5 microgram per liter averaged 72 percent ± 8 percent relative standard deviation. The concentration of 21 compounds is always reported as estimated because method recovery was less than 60 percent, variability was greater than 25 percent relative standard deviation, or standard reference compounds were prepared from technical mixtures. Initial method detection limits averaged 0.18 microgram per liter. Samples were preserved by adding 60 grams of sodium chloride and stored at 4 degrees Celsius. The laboratory established a sample holding-time limit prior to sample extraction of 14 days from the date of collection.
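The reporting rule quoted above (estimated values for low recovery, high variability, or technical-mixture reference standards) can be expressed as a small helper; this is a sketch of the stated criteria, not the laboratory's actual code.

```python
def report_flag(recovery_pct, rsd_pct, from_technical_mixture=False):
    """Report a concentration as estimated ('E') when method recovery is
    below 60 percent, variability exceeds 25 percent relative standard
    deviation, or the reference compound came from a technical mixture."""
    if recovery_pct < 60 or rsd_pct > 25 or from_technical_mixture:
        return "E"  # estimated
    return ""       # reported without qualification

print(report_flag(72, 8))   # meets criteria: no flag
print(report_flag(55, 8))   # low recovery: 'E'
```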
Evaluation of the Ozone Fields in NASA's MERRA-2 Reanalysis
NASA Technical Reports Server (NTRS)
Wargan, Krzysztof; Pawson, Steven; Labow, Gordon; Frith, Stacey M.; Livesey, Nathaniel; Partyka, Gary
2017-01-01
The assimilated ozone product from the Modern-Era Retrospective Analysis for Research and Applications, Version 2 (MERRA-2), produced at NASA's Global Modeling and Assimilation Office (GMAO), is summarized. The reanalysis begins in 1980 with the use of retrieved partial-column ozone concentrations from a series of Solar Backscatter Ultraviolet Radiometer (SBUV) instruments on NASA and NOAA spacecraft. Beginning in October 2004, retrieved ozone profiles from the Microwave Limb Sounder (MLS) and total column ozone from the Ozone Monitoring Instrument (OMI) on NASA's EOS Aura satellite are assimilated. While this change in data streams does lead to a discontinuity in the assimilated ozone fields in MERRA-2, making it unsuitable for studies of decadal (secular) trends in ozone, this choice was made to prioritize demonstrating the value of NASA's high-quality research data in the reanalysis context. The MERRA-2 ozone is compared with independent satellite and ozonesonde data, focusing on the representation of the spatial and temporal variability of stratospheric and upper-tropospheric ozone. The comparisons show agreement within 10% (standard deviation of the difference) between MERRA-2 profiles and independent satellite data in most of the stratosphere. The agreement improves after 2004, when EOS Aura data are assimilated. The standard deviations of the differences between the lower-stratospheric and upper-tropospheric MERRA-2 ozone and ozonesondes are 11.2% and 24.5%, respectively, with correlations of 0.8 and above. This is indicative of a realistic representation of the UTLS ozone variability in MERRA-2. After 2004, the upper-tropospheric ozone in MERRA-2 shows a low bias compared to the sondes, but the covariance with independent observations is improved compared to earlier years. Case studies demonstrate the integrity of MERRA-2 analyses in representing important features such as tropopause folds.
Third molar development by measurements of open apices in an Italian sample of living subjects.
De Luca, Stefano; Pacifici, Andrea; Pacifici, Luciano; Polimeni, Antonella; Fischetto, Sara Giulia; Velandia Palacio, Luz Andrea; Vanin, Stefano; Cameriere, Roberto
2016-02-01
The aim of this study is to analyse the age-predicting performance of the third molar index (I3M) in dental age estimation. A multiple regression analysis was developed with chronological age as the independent variable. In order to investigate the relationship between I3M and chronological age, the standard deviation and relative error were examined. Digitalized orthopantomographs (OPTs) of 975 Italian healthy subjects (531 female and 444 male), aged between 9 and 22 years, were studied. Third molar development was determined according to Cameriere et al. (2008). Analysis of covariance (ANCOVA) was applied to study the interaction between I3M and gender. The relationship between age and the third molar index (I3M) was tested with Pearson's correlation coefficient. The I3M, the age and the gender of the subjects were used as predictive variables for age estimation. The small F-value for gender (F = 0.042, p = 0.837) reveals that this factor does not affect the growth of the third molar. Adjusted R(2) (AdjR(2)) was used as the parameter to define the best-fitting function. All the regression models (linear, exponential, and polynomial) showed a similar AdjR(2). The polynomial (2nd order) fit explains about 78% of the total variance and does not add any relevant clinical information to the age estimation process from the third molar. The standard deviation and relative error increase with age. The I3M has its minimum in the youngest group of studied individuals and its maximum in the oldest ones, indicating that its precision and reliability decrease with age. Copyright © 2015 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
Huang, Wei; Zhu, Tong; Pan, Xiaochuan; Hu, Min; Lu, Shou-En; Lin, Yong; Wang, Tong; Zhang, Yuanhang; Tang, Xiaoyan
2012-01-01
The authors conducted a 2-year follow-up of 40 cardiovascular disease patients (mean age = 65.6 years (standard deviation, 5.8)) who underwent repeated measurements of cardiovascular response before and during the 2008 Beijing Olympics (Beijing, China), when air pollution was strictly controlled. Ambient levels of particulate matter with an aerodynamic diameter less than 2.5 µm (PM2.5), black carbon, nitrogen dioxide, sulfur dioxide, ozone, and carbon monoxide were measured continuously, with validation of concurrent real-time measurements of personal exposure to PM2.5 and carbon monoxide. Linear mixed-effects models were used with adjustment for individual risk factors, time-varying factors, and meteorologic effects. Significant heart rate variability reduction and blood pressure elevation were observed in association with exposure to air pollution. Specifically, interquartile-range increases of 51.8 µg/m3, 2.02 µg/m3, and 13.7 ppb in prior 4-hour exposure to PM2.5, black carbon, and nitrogen dioxide were associated with significant reductions in the standard deviation of the normal-to-normal intervals of 4.2% (95% confidence interval (CI): 1.9, 6.4), 4.2% (95% CI: 1.8, 6.6), and 3.9% (95% CI: 2.2, 5.7), respectively. Greater heart rate variability declines were observed among subjects with C-reactive protein values above the 90th percentile, subjects with a body mass index greater than 25, and females. The authors conclude that autonomic and vascular dysfunction may be one of the mechanisms through which air pollution exposure can increase cardiovascular disease risk, especially among persons with systemic inflammation and overweight. PMID:22763390
Narad, Megan; Garner, Annie A.; Brassell, Anne A.; Saxby, Dyani; Antonini, Tanya N.; O'Brien, Kathleen M.; Tamm, Leanne; Matthews, Gerald; Epstein, Jeffery N.
2013-01-01
Importance This study extends the literature regarding Attention-Deficit/Hyperactivity Disorder (ADHD) related driving impairments to a newly-licensed, adolescent population. Objective To investigate the combined risks of adolescence, ADHD, and distracted driving (cell phone conversation and text messaging) on driving performance. Design Adolescents with and without ADHD engaged in a simulated drive under three conditions (no distraction, cell phone conversation, texting). During each condition, one unexpected event (e.g., car suddenly merging into driver's lane) was introduced. Setting Driving simulator. Participants Adolescents aged 16–17 with ADHD (n=28) and controls (n=33). Interventions/Main Exposures Cell phone conversation, texting, and no distraction while driving. Outcome Measures Self-report of driving history; Average speed, standard deviation of speed, standard deviation of lateral position, braking reaction time during driving simulation. Results Adolescents with ADHD reported fewer months of driving experience and a higher proportion of driving violations than controls. After controlling for months of driving history, adolescents with ADHD demonstrated more variability in speed and lane position than controls. There were no group differences for braking reaction time. Further, texting negatively impacted the driving performance of all participants as evidenced by increased variability in speed and lane position. Conclusions This study, one of the first to investigate distracted driving in adolescents with ADHD, adds to a growing body of literature documenting that individuals with ADHD are at increased risk for negative driving outcomes. Furthermore, texting significantly impairs the driving performance of all adolescents and increases existing driving-related impairment in adolescents with ADHD, highlighting the need for education and enforcement of regulations against texting for this age group. PMID:23939758
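The simulator outcomes in this record are simple time-series summaries (mean and standard deviation of speed, standard deviation of lateral position); a generic sketch with hypothetical signals and our own function names follows.

```python
import numpy as np

def driving_metrics(speed_mps, lane_offset_m):
    """Simulator summary metrics: mean and standard deviation of speed,
    and standard deviation of lateral lane position."""
    speed = np.asarray(speed_mps, float)
    lane = np.asarray(lane_offset_m, float)
    return speed.mean(), speed.std(ddof=1), lane.std(ddof=1)

t = np.linspace(0, 60, 600)           # hypothetical 60 s drive, 10 Hz samples
speed = 25 + 1.5 * np.sin(0.3 * t)    # m/s
lane = 0.2 * np.sin(0.8 * t)          # m from lane centre
print(driving_metrics(speed, lane))
```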
Depression and anxiety as predictors of heart rate variability after myocardial infarction.
Martens, E J; Nyklícek, I; Szabó, B M; Kupper, N
2008-03-01
Reduced heart rate variability (HRV) is a prognostic factor for cardiac mortality. Both depression and anxiety have been associated with increased risk for mortality in cardiac patients. Low HRV may act as an intermediary in this association. The present study examined to what extent depression and anxiety differently predict 24-h HRV indices recorded post-myocardial infarction (MI). Ninety-three patients were recruited during hospitalization for MI and assessed on self-reported symptoms of depression and anxiety. Two months post-MI, patients were assessed on clinical diagnoses of lifetime depressive and anxiety disorder. Adequate 24-h ambulatory electrocardiography data were obtained from 82 patients on average 78 days post-MI. In unadjusted analyses, a lifetime diagnosis of major depressive disorder was predictive of lower SDNN [standard deviation of all normal-to-normal (NN) intervals; beta=-0.26, p=0.022] and SDANN (standard deviation of all 5-min mean NN intervals; beta=-0.25, p=0.023), and lifetime anxiety disorder of lower RMSSD (root mean square of successive differences; beta=-0.23, p=0.039). Depression and anxiety symptoms did not significantly predict HRV. After adjustment for age, sex, cardiac history and multi-vessel disease, lifetime depressive disorder was no longer predictive of HRV. Lifetime anxiety disorder predicted reduced high-frequency spectral power (beta=-0.22, p=0.039) and RMSSD (beta=-0.25, p=0.019), even after additional adjustment for anxiety symptoms. Clinical anxiety, but not depression, negatively influenced parasympathetic modulation of heart rate in post-MI patients. These findings elucidate the physiological mechanisms underlying anxiety as a risk factor for adverse outcomes, but also raise questions about the potential role of HRV as an intermediary between depression and post-MI prognosis.
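SDNN and RMSSD, the time-domain indices used here, have standard definitions; a minimal sketch with hypothetical NN intervals follows.

```python
import numpy as np

def hrv_time_domain(nn_ms):
    """Time-domain HRV from normal-to-normal intervals (ms):
    SDNN  - standard deviation of all NN intervals;
    RMSSD - root mean square of successive differences."""
    nn = np.asarray(nn_ms, float)
    sdnn = nn.std(ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(nn) ** 2))
    return sdnn, rmssd

nn = [812, 845, 790, 830, 805, 860, 820]  # hypothetical NN intervals (ms)
print(hrv_time_domain(nn))
```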
Assessment of Tandem Measurements of pH and Total Gut Transit Time in Healthy Volunteers.
Mikolajczyk, Adam E; Watson, Sydeaka; Surma, Bonnie L; Rubin, David T
2015-07-09
The variation of luminal pH and transit time in an individual is unknown, yet is necessary to interpret single measurements. This study aimed to assess the intrasubject variability of gut pH and transit time in healthy volunteers using SmartPill devices (Covidien, Minneapolis, MN). Each subject (n=10) ingested two SmartPill devices separated by 24 h. Mean pH values were calculated for 30 min after gastric emptying (AGE), before the ileocecal (BIC) valve, after the ileocecal (AIC) valve, and before body exit (BBE). Intrasubject variability was determined by comparing mean values from both ingestions for an individual subject using standard deviations, 95% limits of agreement, and Bland-Altman plots. Tandem device ingestion occurred without complication. The median (full range) intrasubject standard deviations for pH were 0.02 (0.0002-0.2048) for AGE, 0.06 (0.0002-0.3445) for BIC, 0.14 (0.0018-0.3042) for AIC, and 0.08 (0.0098-0.5202) for BBE. There was a significant change in pH for AIC (mean difference: -0.45±0.31, P=0.0015) observed across all subjects. The mean coefficients of variation for transit time were 12.0±7.4% and 25.8±15.8% for small and large bowels, respectively (P=0.01). This study demonstrates the safety and feasibility of tandem gut transit and pH assessments using the SmartPill device. In healthy individuals and over 24 h, the gut pH profile does not markedly fluctuate in a given region with more variation seen in the colon compared with the small bowel, which has important implications for future physiology and drug delivery studies.
Chronobiology of death in heart failure.
Ribas, Nuria; Domingo, Maite; Gastelurrutia, Paloma; Ferrero-Gregori, Andreu; Rull, Pilar; Noguero, Mariana; Garcia, Carmen; Puig, Teresa; Cinca, Juan; Bayes-Genis, Antoni
2014-05-01
In the general population, heart events occur more often during early morning, on Mondays, and during winter. However, the chronobiology of death in heart failure has not been analyzed. The aim of this study was to determine the circadian, day of the week, and seasonal variability of all-cause mortality in chronic heart failure. This was an analysis of all consecutive heart failure patients followed in a heart failure unit from January 2003 to December 2008. The circadian moment of death was analyzed at 6-h intervals and was determined by reviewing medical records and by information provided by the relatives. Of 1196 patients (mean [standard deviation] age, 69 [13] years; 62% male), 418 (34.9%) died during a mean (standard deviation) follow-up of 29 (21) months. Survivors were younger, had higher body mass index, left ventricular ejection fraction, glomerular filtration rate, hemoglobin and sodium levels, and lower Framingham risk scores, amino-terminal pro-B type natriuretic peptide, troponin T, and urate values. They were more frequently treated with angiotensin receptor blockers, beta-blockers, mineralocorticoids receptor antagonists, digoxin, nitrates, hydralazine, statins, loop diuretics, and thiazides. The analysis of the circadian and weekly variability did not reveal significant differences between the four 6-h intervals or the days of the week. Mortality occurred more frequently during the winter (30.6%) compared with the other seasons (P = .024). All cause mortality does not follow a circadian pattern, but a seasonal rhythm in patients with heart failure. This finding is in contrast to the circadian rhythmicity of cardiovascular events reported in the general population. Copyright © 2013 Sociedad Española de Cardiología. Published by Elsevier Espana. All rights reserved.
Parikh, Rajul S; Parikh, Shefali R; Kumar, Rajesh S; Prabakaran, S; Babu, J Gansesh; Thomas, Ravi
2008-07-01
To evaluate the diagnostic ability of scanning laser polarimetry (GDx variable corneal compensator [VCC]) for early glaucoma in Asian Indian eyes. Cross-sectional observational study. Two groups of patients (early glaucoma and normal) who satisfied the inclusion and exclusion criteria were included. Early glaucoma was diagnosed in the presence of open angles and characteristic glaucomatous optic disc changes correlating with the visual field (VF) on automated perimetry (VF defect fulfilling at least 2 of 3 of Anderson and Patella's criteria, with mean deviation >or= -6 decibels). Normal subjects had visual acuity >or= 20/30 and intraocular pressure < 22 mmHg, with a normal optic disc and fields and no ocular abnormality. All patients underwent complete ophthalmic evaluation, including VF examination (24-2/30-2 Swedish interactive threshold algorithm standard program) and imaging with GDx VCC. Sensitivity, specificity, positive predictive value and negative predictive value, area under the receiver operating characteristic curve, and likelihood ratios (LRs) were calculated for various GDx VCC parameters. Seventy-four eyes (74 patients) with early glaucoma and 104 eyes (104 normal subjects) were enrolled. TSNIT Std Dev (temporal-superior-nasal-inferior-temporal standard deviation) had the best combination of sensitivity and specificity (61.3% and 95.2%, respectively), followed by nerve fiber index score > 50 (sensitivity, 52.7%; specificity, 99%). Nerve fiber index score > 50 had positive and negative predictive values of 74.3% and 97.6%, respectively, for an assumed glaucoma prevalence of 5%. Nerve fiber index score > 50 had a positive likelihood ratio (+LR) of 54.8 for early glaucoma. GDx VCC has moderate sensitivity, with high specificity, in the diagnosis of early glaucoma. The high +LR for the nerve fiber index score can provide valuable diagnostic information for individual patients.
Ladapo, Joseph A; Elliott, Marc N; Bogart, Laura M; Kanouse, David E; Vestal, Katherine D; Klein, David J; Ratner, Jessica A; Schuster, Mark A
2013-11-01
To examine the cost and cost-effectiveness of implementing Talking Parents, Healthy Teens, a worksite-based parenting program designed to help parents address sexual health with their adolescent children. We enrolled 535 parents with adolescent children at 13 worksites in southern California in a randomized trial. We used time and wage data from employees involved in implementing the program to estimate fixed and variable costs. We determined cost-effectiveness with nonparametric bootstrap analysis. For the intervention, parents participated in eight weekly 1-hour teaching sessions at lunchtime. The program included games, discussions, role plays, and videotaped role plays to help parents learn to communicate with their children about sex-related topics, teach their children assertiveness and decision-making skills, and supervise and interact with their children more effectively. Implementing the program cost $543.03 (standard deviation, $289.98) per worksite in fixed costs, and $28.05 per parent (standard deviation, $4.08) in variable costs. At 9 months, this $28.05 investment per parent yielded improvements in number of sexual health topics discussed, condom teaching, and communication quality and openness. The cost-effectiveness was $7.42 per new topic discussed using parental responses and $9.18 using adolescent responses. Other efficacy outcomes also yielded favorable cost-effectiveness ratios. Talking Parents, Healthy Teens demonstrated the feasibility and cost-effectiveness of a worksite-based parenting program to promote parent-adolescent communication about sexual health. Its cost is reasonable and is unlikely to be a significant barrier to adoption and diffusion for most worksites considering its implementation. Copyright © 2013 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
Autonomic cardiovascular control recovery in quadriplegics after handcycle training.
Abreu, Elizângela Márcia de Carvalho; Alves, Rani de Souza; Borges, Ana Carolina Lacerda; Lima, Fernanda Pupio Silva; Júnior, Alderico Rodrigues de Paula; Lima, Mário Oliveira
2016-07-01
The aim of this study was to investigate the acute cardiovascular autonomic response during recovery after handcycle training in quadriplegics with spinal cord injury (SCI). [Subjects and Methods] Seven quadriplegics (SCIG: level C6-C7, male, age 28.00 ± 6.97 years) and eight healthy subjects (CG: male, age 25.00 ± 7.38 years) were studied. Their heart rate variability (HRV) was assessed before and after one handcycle training session. [Results] After the training, the SCIG showed significant reductions in: intervals between R waves of the electrocardiogram (RR), standard deviation of the NN intervals (SDNN), square root of the mean squared differences of successive NN intervals (rMSSD), low-frequency power (LF), high-frequency power (HF), and Poincaré plot indices (standard deviation of short-term HRV, SD1, and standard deviation of long-term HRV, SD2). The SDNN, LF, and SD2 remained decreased during the recovery time. The CG showed significant reductions in: RR, rMSSD, number of pairs of adjacent NN intervals differing by more than 50 ms (pNN50), LF, HF, SD1, and sample entropy (SampEn). Among these parameters, only RR remained decreased during the recovery time. Comparisons of the mean HRV parameters between the CG and SCIG showed that the SCIG had significantly lower pNN50, LF, HF, and SampEn before training, while immediately after training the SCIG had significantly lower SDNN, LF, HF, and SD2. The rMSSD30s decreased significantly in the 180- and 330-second windows in the SCIG and in the 300-second window in the CG. [Conclusion] There was a reduction of sympathetic and parasympathetic activity in the recovery period after the training in both groups; however, the CG showed a higher HRV. Parasympathetic activity also gradually increased after training, but in the SCIG this activity remained reduced even at three minutes after the end of training, which suggests a deficiency in parasympathetic reactivation in quadriplegics after SCI.
Gale, Catharine R; Cooper, Rachel; Craig, Leone; Elliott, Jane; Kuh, Diana; Richards, Marcus; Starr, John M; Whalley, Lawrence J; Deary, Ian J
2012-01-01
Poorer cognitive ability in youth is a risk factor for later mental health problems but it is largely unknown whether cognitive ability, in youth or in later life, is predictive of mental wellbeing. The purpose of this study was to investigate whether cognitive ability at age 11 years, cognitive ability in later life, or lifetime cognitive change are associated with mental wellbeing in older people. We used data on 8191 men and women aged 50 to 87 years from four cohorts in the HALCyon collaborative research programme into healthy ageing: the Aberdeen Birth Cohort 1936, the Lothian Birth Cohort 1921, the National Child Development Survey, and the MRC National Survey for Health and Development. We used linear regression to examine associations between cognitive ability at age 11, cognitive ability in later life, and lifetime change in cognitive ability and mean score on the Warwick Edinburgh Mental Wellbeing Scale and meta-analysis to obtain an overall estimate of the effect of each. People whose cognitive ability at age 11 was a standard deviation above the mean scored 0.53 points higher on the mental wellbeing scale (95% confidence interval 0.36, 0.71). The equivalent value for cognitive ability in later life was 0.89 points (0.72, 1.07). A standard deviation improvement in cognitive ability in later life relative to childhood ability was associated with 0.66 points (0.39, 0.93) advantage in wellbeing score. These effect sizes equate to around 0.1 of a standard deviation in mental wellbeing score. Adjustment for potential confounding and mediating variables, primarily the personality trait neuroticism, substantially attenuated these associations. Associations between cognitive ability in childhood or lifetime cognitive change and mental wellbeing in older people are slight and may be confounded by personality trait differences.
Mustafa, Gulgun; Kursat, Fidanci Muzaffer; Ahmet, Tas; Alparslan, Genc Fatih; Omer, Gunes; Sertoglu, Erdem; Erkan, Sarı; Ediz, Yesilkaya; Turker, Turker; Ayhan, Kılıc
Childhood obesity is a worldwide health concern. Studies have shown autonomic dysfunction in obese children, but the exact mechanism of this dysfunction is still unknown. The aim of this study was to assess the relationship between erythrocyte membrane fatty acid (EMFA) levels and cardiac autonomic function in obese children using heart rate variability (HRV). A total of 48 obese and 32 healthy children were included in this case-control study. Anthropometric and biochemical data, HRV indices, and EMFA levels in the two groups were compared statistically. HRV parameters, including the standard deviation of normal-to-normal (NN) R-R intervals (SDNN), root mean square of successive differences (RMSSD), the number of pairs of successive NN intervals that differ by >50 ms (NN50), the proportion of NN50 divided by the total number of NN intervals (pNN50), high-frequency power, and low-frequency power, were lower in obese children than in controls, implying parasympathetic impairment. Eicosapentaenoic acid and docosahexaenoic acid levels were lower in the obese group (p<0.001 and p=0.012, respectively). In correlation analysis within the obese group, body mass index standard deviation and linoleic acid, arachidonic acid, triglyceride, and high-density lipoprotein levels showed a linear correlation with one or more HRV parameters, and age, eicosapentaenoic acid, and systolic and diastolic blood pressure correlated with mean heart rate. In linear regression analysis, age, dihomo-gamma-linolenic acid, linoleic acid, arachidonic acid, body mass index standard deviation, systolic blood pressure, triglycerides, and low-density and high-density lipoprotein were related to HRV parameters, implying an effect on cardiac autonomic function. There is impairment of cardiac autonomic function in obese children, and levels of EMFAs such as linoleic acid, arachidonic acid, and dihomo-gamma-linolenic acid appear to play a role in the regulation of cardiac autonomic function in these children. Copyright © 2017 Sociedade Portuguesa de Cardiologia. Publicado por Elsevier España, S.L.U. All rights reserved.
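A minimal sketch of the kind of correlation and regression analysis the abstract describes, using scipy on synthetic data; the variable names, sample size, and values are illustrative, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
linoleic = rng.normal(12.0, 2.0, size=48)           # hypothetical EMFA level (%)
sdnn = 50 + 1.5 * linoleic + rng.normal(0, 5, 48)   # hypothetical HRV index (ms)

r, p = stats.pearsonr(linoleic, sdnn)               # linear correlation
slope, intercept, rval, pval, stderr = stats.linregress(linoleic, sdnn)
print(f"r = {r:.2f} (p = {p:.3g}); SDNN ~ {intercept:.1f} + {slope:.2f} * LA")
```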
Selection and Classification Using a Forecast Applicant Pool.
ERIC Educational Resources Information Center
Hendrix, William H.
The document presents a forecast model of the future Air Force applicant pool. By forecasting applicant quality (means and standard deviations of aptitude scores) and quantity (total number of applicants), a potential enlistee can be compared to the forecast pool. The data used to develop the model consisted of means, standard deviations, and…
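A minimal sketch of the comparison such a model enables: standardizing an applicant's aptitude score against the forecast pool's mean and standard deviation. All numbers and names here are hypothetical, not from the report.

```python
def pool_z_score(score: float, pool_mean: float, pool_sd: float) -> float:
    """Standardized position of an applicant within the forecast pool."""
    return (score - pool_mean) / pool_sd

# A hypothetical enlistee scoring 62 against a forecast pool with mean 50, SD 10
print(pool_z_score(62.0, pool_mean=50.0, pool_sd=10.0))  # 1.2 SD above the pool
```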
NASA Technical Reports Server (NTRS)
Herrman, B. D.; Uman, M. A.; Brantley, R. D.; Krider, E. P.
1976-01-01
The principle of operation of a wideband crossed-loop magnetic-field direction finder is studied by comparing the bearing determined from the NS and EW magnetic fields, at various times up to 155 microsec after return-stroke initiation, with the TV-determined direction of the lightning channel base. For 40 lightning strokes at ranges of 3 to 12 km, the difference between the bearings found from magnetic fields sampled at times between 1 and 10 microsec and the TV channel-base bearings has a standard deviation of 3-4 deg. Included in this standard deviation is a 2-3 deg measurement error. For fields sampled at progressively later times, both the mean and the standard deviation of the difference between the direction-finder bearing and the TV bearing increase; near 150 microsec, means are about 35 deg and standard deviations about 60 deg. The physical reasons for the late-time inaccuracies in the wideband direction finder and the occurrence of these effects in narrow-band VLF direction finders are considered.
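A minimal sketch of the crossed-loop principle, under the usual convention that bearing is measured clockwise from north: the bearing follows from the ratio of the EW and NS field samples. The field values are illustrative, and a real instrument typically resolves the remaining sign ambiguity with an additional measurement such as the electric field.

```python
import math

def bearing_deg(b_ns: float, b_ew: float) -> float:
    """Bearing (deg clockwise from north) from NS- and EW-loop field samples."""
    return math.degrees(math.atan2(b_ew, b_ns)) % 360.0

print(bearing_deg(0.7, 0.7))  # 45.0 deg for equal NS and EW field samples
```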
Wavelength selection method with standard deviation: application to pulse oximetry.
Vazquez-Jaccaud, Camille; Paez, Gonzalo; Strojnik, Marija
2011-07-01
Near-infrared spectroscopy provides useful biological information after the radiation has penetrated the tissue, within the therapeutic window. One significant shortcoming of current applications of spectroscopic techniques to a live subject is that the subject may be uncooperative and the sample undergoes significant temporal variations due to the subject's health status, which, from a radiometric point of view, introduce measurement noise. We describe a novel wavelength selection method for monitoring, based on a standard deviation map, that achieves low sensitivity to noise. It may be used with spectral transillumination, transmission, or reflection signals, including those corrupted by noise and unavoidable temporal effects. We apply it to the selection of two wavelengths for the case of pulse oximetry. Using spectroscopic data, we generate a map of standard deviation that we propose as a figure of merit in the presence of the noise introduced by the living subject. Even in the presence of diverse noise sources, we identify four wavelength domains in which the standard deviation is minimally sensitive to temporal noise, and two wavelength domains with low sensitivity to temporal noise.
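A minimal sketch of a standard-deviation map on synthetic spectra: the per-wavelength standard deviation over repeated measurements flags the bands most corrupted by temporal noise, and the lowest-valued bands are the candidate wavelengths. The wavelength grid, spectra, and noisy band below are all synthetic, not the authors' data.

```python
import numpy as np

rng = np.random.default_rng(2)
wavelengths = np.linspace(600, 1000, 201)                 # nm, illustrative grid
spectra = rng.normal(1.0, 0.02, size=(500, 201))          # 500 repeated spectra
spectra[:, 80:120] += rng.normal(0, 0.2, size=(500, 40))  # a temporally noisy band

std_map = spectra.std(axis=0)                  # per-wavelength standard deviation
stable = wavelengths[np.argsort(std_map)[:2]]  # two least noise-sensitive bands
print(f"candidate wavelengths: {stable.round(1)} nm")
```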