49 CFR 525.11 - Termination of exemption; amendment of alternative average fuel economy standard.
Code of Federal Regulations, 2013 CFR
2013-10-01
... average fuel economy standard. 525.11 Section 525.11 Transportation Other Regulations Relating to... EXEMPTIONS FROM AVERAGE FUEL ECONOMY STANDARDS § 525.11 Termination of exemption; amendment of alternative average fuel economy standard. (a) Any exemption granted under this part for an affected model year does...
49 CFR 525.11 - Termination of exemption; amendment of alternative average fuel economy standard.
Code of Federal Regulations, 2011 CFR
2011-10-01
... average fuel economy standard. 525.11 Section 525.11 Transportation Other Regulations Relating to... EXEMPTIONS FROM AVERAGE FUEL ECONOMY STANDARDS § 525.11 Termination of exemption; amendment of alternative average fuel economy standard. (a) Any exemption granted under this part for an affected model year does...
49 CFR 525.11 - Termination of exemption; amendment of alternative average fuel economy standard.
Code of Federal Regulations, 2014 CFR
2014-10-01
... average fuel economy standard. 525.11 Section 525.11 Transportation Other Regulations Relating to... EXEMPTIONS FROM AVERAGE FUEL ECONOMY STANDARDS § 525.11 Termination of exemption; amendment of alternative average fuel economy standard. (a) Any exemption granted under this part for an affected model year does...
49 CFR 525.11 - Termination of exemption; amendment of alternative average fuel economy standard.
Code of Federal Regulations, 2012 CFR
2012-10-01
... average fuel economy standard. 525.11 Section 525.11 Transportation Other Regulations Relating to... EXEMPTIONS FROM AVERAGE FUEL ECONOMY STANDARDS § 525.11 Termination of exemption; amendment of alternative average fuel economy standard. (a) Any exemption granted under this part for an affected model year does...
40 CFR 86.1865-12 - How to comply with the fleet average CO2 standards.
Code of Federal Regulations, 2014 CFR
2014-07-01
...) Calculating the fleet average carbon-related exhaust emissions. (1) Manufacturers must compute separate production-weighted fleet average carbon-related exhaust emissions at the end of the model year for passenger... for sale, and certifying model types to standards as defined in § 86.1818-12. The model type carbon...
49 CFR 525.11 - Termination of exemption; amendment of alternative average fuel economy standard.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 6 2010-10-01 2010-10-01 false Termination of exemption; amendment of alternative average fuel economy standard. 525.11 Section 525.11 Transportation Other Regulations Relating to Transportation (Continued) NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION EXEMPTIONS FROM AVERAGE FUEL ECONOMY STANDARDS...
40 CFR 86.1865-12 - How to comply with the fleet average CO2 standards.
Code of Federal Regulations, 2013 CFR
2013-07-01
... different strategies are and why they are used. (i) Calculating the fleet average carbon-related exhaust emissions. (1) Manufacturers must compute separate production-weighted fleet average carbon-related exhaust... as defined in § 86.1818-12. The model type carbon-related exhaust emission results determined...
40 CFR 86.1865-12 - How to comply with the fleet average CO2 standards.
Code of Federal Regulations, 2011 CFR
2011-07-01
... different strategies are and why they are used. (i) Calculating the fleet average carbon-related exhaust emissions. (1) Manufacturers must compute separate production-weighted fleet average carbon-related exhaust... as defined in § 86.1818-12. The model type carbon-related exhaust emission results determined...
40 CFR 86.1865-12 - How to comply with the fleet average CO2 standards.
Code of Federal Regulations, 2012 CFR
2012-07-01
... different strategies are and why they are used. (i) Calculating the fleet average carbon-related exhaust emissions. (1) Manufacturers must compute separate production-weighted fleet average carbon-related exhaust... as defined in § 86.1818-12. The model type carbon-related exhaust emission results determined...
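The production-weighted fleet average that § 86.1865-12 describes is an ordinary weighted mean over model types; a minimal sketch with made-up model types and CO2 values (not figures from the rule):

```python
# Hypothetical sketch of a production-weighted fleet average, the basic
# operation 40 CFR 86.1865-12 requires at the end of each model year.
# Production volumes and g/mile values below are illustrative only.

def fleet_average(emissions_by_model):
    """emissions_by_model: list of (production_volume, g_per_mile) pairs."""
    total_production = sum(volume for volume, _ in emissions_by_model)
    weighted_sum = sum(volume * co2 for volume, co2 in emissions_by_model)
    return weighted_sum / total_production

passenger_fleet = [(120_000, 240.0), (80_000, 300.0), (50_000, 180.0)]
print(round(fleet_average(passenger_fleet), 1))  # 247.2
```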
How Young Is Standard Average European?
ERIC Educational Resources Information Center
Haspelmath, Martin
1998-01-01
An analysis of Standard Average European, a European linguistic area, looks at 11 of its features (definite, indefinite articles, have-perfect, participial passive, antiaccusative prominence, nominative experiencers, dative external possessors, negation/negative pronouns, particle comparatives, A-and-B conjunction, relative clauses, verb fronting…
Cosmological ensemble and directional averages of observables
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonvin, Camille; Clarkson, Chris; Durrer, Ruth
We show that at second order, ensemble averages of observables and directional averages do not commute due to gravitational lensing—observing the same thing in many directions over the sky is not the same as taking an ensemble average. In principle this non-commutativity is significant for a variety of quantities that we often use as observables and can lead to a bias in parameter estimation. We derive the relation between the ensemble average and the directional average of an observable, at second order in perturbation theory. We discuss the relevance of these two types of averages for making predictions of cosmological observables, focusing on observables related to distances and magnitudes. In particular, we show that the ensemble average of the distance in a given observed direction is increased by gravitational lensing, whereas the directional average of the distance is decreased. For a generic observable, there exists a particular function of the observable that is not affected by second-order lensing perturbations. We also show that standard areas have an advantage over standard rulers, and we discuss the subtleties involved in averaging in the case of supernova observations.
Methods for estimating flood frequency in Montana based on data through water year 1998
Parrett, Charles; Johnson, Dave R.
2004-01-01
Annual peak discharges having recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years (T-year floods) were determined for 660 gaged sites in Montana and in adjacent areas of Idaho, Wyoming, and Canada, based on data through water year 1998. The updated flood-frequency information was subsequently used in regression analyses, either ordinary or generalized least squares, to develop equations relating T-year floods to various basin and climatic characteristics, equations relating T-year floods to active-channel width, and equations relating T-year floods to bankfull width. The equations can be used to estimate flood frequency at ungaged sites. Montana was divided into eight regions, within which flood characteristics were considered to be reasonably homogeneous, and the three sets of regression equations were developed for each region. A measure of the overall reliability of the regression equations is the average standard error of prediction. The average standard errors of prediction for the equations based on basin and climatic characteristics ranged from 37.4 percent to 134.1 percent. Average standard errors of prediction for the equations based on active-channel width ranged from 57.2 percent to 141.3 percent. Average standard errors of prediction for the equations based on bankfull width ranged from 63.1 percent to 155.5 percent. In most regions, the equations based on basin and climatic characteristics generally had smaller average standard errors of prediction than equations based on active-channel or bankfull width. An exception was the Southeast Plains Region, where all equations based on active-channel width had smaller average standard errors of prediction than equations based on basin and climatic characteristics or bankfull width. Methods for weighting estimates derived from the basin- and climatic-characteristic equations and the channel-width equations also were developed. 
The weights were based on the cross correlation of residuals from the different methods and the average standard errors of prediction. When all three methods were combined, the average standard errors of prediction ranged from 37.4 percent to 120.2 percent. Weighting of estimates reduced the standard errors of prediction for all T-year flood estimates in four regions, reduced the standard errors of prediction for some T-year flood estimates in two regions, and provided no reduction in average standard error of prediction in two regions. A computer program for solving the regression equations, weighting estimates, and determining reliability of individual estimates was developed and placed on the USGS Montana District World Wide Web page. A new regression method, termed Region of Influence regression, also was tested. Test results indicated that the Region of Influence method was not as reliable as the regional equations based on generalized least squares regression. Two additional methods for estimating flood frequency at ungaged sites located on the same streams as gaged sites also are described. The first method, based on a drainage-area-ratio adjustment, is intended for use on streams where the ungaged site of interest is located near a gaged site. The second method, based on interpolation between gaged sites, is intended for use on streams that have two or more streamflow-gaging stations.
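The drainage-area-ratio adjustment mentioned above scales a gaged T-year flood by the ratio of drainage areas raised to an exponent; a sketch with placeholder values (the exponent 0.5 is a stand-in; the report derives region-specific exponents):

```python
# Illustrative sketch of the drainage-area-ratio adjustment for transferring a
# T-year flood estimate from a gaged site to a nearby ungaged site on the same
# stream. The exponent here is a placeholder, not a value from the report.

def area_ratio_estimate(q_gaged, area_gaged, area_ungaged, exponent=0.5):
    return q_gaged * (area_ungaged / area_gaged) ** exponent

# 100-year flood of 5000 ft^3/s at a gage draining 200 mi^2; ungaged site drains 150 mi^2
print(round(area_ratio_estimate(5000.0, 200.0, 150.0), 1))  # 4330.1
```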
NASA Astrophysics Data System (ADS)
Li, Xiongwei; Wang, Zhe; Lui, Siu-Lung; Fu, Yangting; Li, Zheng; Liu, Jianming; Ni, Weidou
2013-10-01
A bottleneck of the wide commercial application of laser-induced breakdown spectroscopy (LIBS) technology is its relatively high measurement uncertainty. A partial least squares (PLS) based normalization method was proposed to improve pulse-to-pulse measurement precision for LIBS based on our previous spectrum standardization method. The proposed model utilized multi-line spectral information of the measured element and characterized the signal fluctuations due to the variation of plasma characteristic parameters (plasma temperature, electron number density, and total number density) for signal uncertainty reduction. The model was validated by the application of copper concentration prediction in 29 brass alloy samples. The results demonstrated an improvement on both measurement precision and accuracy over the generally applied normalization as well as our previously proposed simplified spectrum standardization method. The average relative standard deviation (RSD), average of the standard error (error bar), the coefficient of determination (R2), the root-mean-square error of prediction (RMSEP), and average value of the maximum relative error (MRE) were 1.80%, 0.23%, 0.992, 1.30%, and 5.23%, respectively, while those for the generally applied spectral area normalization were 3.72%, 0.71%, 0.973, 1.98%, and 14.92%, respectively.
NASA Astrophysics Data System (ADS)
Huang, Dong; Campos, Edwin; Liu, Yangang
2014-09-01
Statistical characteristics of cloud variability are examined for their dependence on averaging scales and best representation of probability density function with the decade-long retrieval products of cloud liquid water path (LWP) from the tropical western Pacific (TWP), Southern Great Plains (SGP), and North Slope of Alaska (NSA) sites of the Department of Energy's Atmospheric Radiation Measurement Program. The statistical moments of LWP show some seasonal variation at the SGP and NSA sites but not much at the TWP site. It is found that the standard deviation, relative dispersion (the ratio of the standard deviation to the mean), and skewness all quickly increase with the averaging window size when the window size is small and become more or less flat when the window size exceeds 12 h. On average, the cloud LWP at the TWP site has the largest values of standard deviation, relative dispersion, and skewness, whereas the NSA site exhibits the least. Correlation analysis shows that there is a positive correlation between the mean LWP and the standard deviation. The skewness is found to be closely related to the relative dispersion with a correlation coefficient of 0.6. The comparison further shows that the lognormal, Weibull, and gamma distributions reasonably explain the observed relationship between skewness and relative dispersion over a wide range of scales.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Dong; Campos, Edwin; Liu, Yangang
2014-09-17
Statistical characteristics of cloud variability are examined for their dependence on averaging scales and best representation of probability density function with the decade-long retrieval products of cloud liquid water path (LWP) from the tropical western Pacific (TWP), Southern Great Plains (SGP), and North Slope of Alaska (NSA) sites of the Department of Energy's Atmospheric Radiation Measurement Program. The statistical moments of LWP show some seasonal variation at the SGP and NSA sites but not much at the TWP site. It is found that the standard deviation, relative dispersion (the ratio of the standard deviation to the mean), and skewness all quickly increase with the averaging window size when the window size is small and become more or less flat when the window size exceeds 12 h. On average, the cloud LWP at the TWP site has the largest values of standard deviation, relative dispersion, and skewness, whereas the NSA site exhibits the least. Correlation analysis shows that there is a positive correlation between the mean LWP and the standard deviation. The skewness is found to be closely related to the relative dispersion with a correlation coefficient of 0.6. The comparison further shows that the lognormal, Weibull, and gamma distributions reasonably explain the observed relationship between skewness and relative dispersion over a wide range of scales.
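The window statistics examined above (standard deviation, relative dispersion, skewness) can be sketched for a synthetic series standing in for LWP; the lognormal draws and window sizes below are illustrative only, not ARM data:

```python
# Minimal sketch: compute mean, standard deviation, relative dispersion
# (std/mean), and skewness of a series within averaging windows of
# increasing size. A synthetic lognormal series stands in for retrieved LWP.

import random
import statistics

def window_stats(series, window):
    chunk = series[:window]
    mean = statistics.fmean(chunk)
    std = statistics.pstdev(chunk)
    dispersion = std / mean
    skew = sum((x - mean) ** 3 for x in chunk) / (len(chunk) * std ** 3)
    return mean, std, dispersion, skew

random.seed(0)
lwp = [random.lognormvariate(4.0, 0.8) for _ in range(4096)]
for w in (64, 512, 4096):
    mean, std, disp, skew = window_stats(lwp, w)
    print(f"window={w:5d}  dispersion={disp:.2f}  skewness={skew:.2f}")
```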
Academic status and progress of deaf and hard-of-hearing students in general education classrooms.
Antia, Shirin D; Jones, Patricia B; Reed, Susanne; Kreimeyer, Kathryn H
2009-01-01
The study participants were 197 deaf or hard-of-hearing students with mild to profound hearing loss who attended general education classes for 2 or more hours per day. We obtained scores on standardized achievement tests of math, reading, and language/writing, and standardized teachers' ratings of academic competence annually, for 5 years, together with other demographic and communication data. Results on standardized achievement tests indicated that, over the 5-year period, 63%-79% of students scored in the average or above-average range in math, 48%-68% in reading, and 55%-76% in language/writing. The standardized test scores for the group were, on average, half an SD below hearing norms. Average student progress in each subject area was consistent with or better than that made by the norm group of hearing students, and 79%-81% of students made one or more year's progress annually. Teachers rated 69%-81% of students as average or above average in academic competence over the 5 years. The teachers' ratings also indicated that 89% of students made average or above-average progress. Students' expressive and receptive communication, classroom participation, communication mode, and parental participation in school were significantly, but moderately, related to academic outcomes.
49 CFR 531.5 - Fuel economy standards.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 6 2010-10-01 2010-10-01 false Fuel economy standards. 531.5 Section 531.5 Transportation Other Regulations Relating to Transportation (Continued) NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION PASSENGER AUTOMOBILE AVERAGE FUEL ECONOMY STANDARDS § 531.5 Fuel economy standards. (a) Except as provided...
Jin, Mengtong; Sun, Wenshuo; Li, Qin; Sun, Xiaohong; Pan, Yingjie; Zhao, Yong
2014-04-04
We evaluated the differences among three standard curves for quantifying viable Vibrio parahaemolyticus by real-time reverse-transcriptase PCR (Real-time RT-PCR). Standard curve A was established from 10-fold dilutions of cDNA reverse-transcribed from RNA synthesized in vitro. Standard curves B and C were established from 10-fold dilutions of cDNA synthesized from RNA isolated from V. parahaemolyticus in pure cultures (10^8 CFU/mL) and shrimp samples (10^6 CFU/g), respectively (standard curves A and C were proposed for the first time). The three standard curves were each used to quantify V. parahaemolyticus in six samples: two pure-culture V. parahaemolyticus samples, two artificially contaminated cooked Litopenaeus vannamei samples, and two artificially contaminated Litopenaeus vannamei samples. We then compared the quantitative results of each standard curve with plate-counting results and analyzed the differences. All three standard curves showed a strong linear relationship between the fractional cycle number and V. parahaemolyticus concentration (R² > 0.99). The quantitative results of Real-time PCR were significantly (p < 0.05) lower than the plate-counting results. The relative errors compared with plate counting ranked standard curve A (30.0%) > standard curve C (18.8%) > standard curve B (6.9%). The average differences between standard curve A and standard curves B and C were -2.25 and -0.75 lg CFU/mL, respectively, and the mean relative errors were 48.2% and 15.9%, respectively. The average differences between standard curves B and C ranged from 1.47 to 1.53 lg CFU/mL, with average relative errors of 19.0%-23.8%. Standard curve B could be applied in Real-time RT-PCR when quantifying the number of viable microorganisms in samples.
Model averaging and muddled multimodel inferences.
Cade, Brian S
2015-09-01
Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effects size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. 
The standardized estimates or equivalently the t statistics on unstandardized estimates also can be used to provide more informative measures of relative importance than sums of AIC weights. Finally, I illustrate how seriously compromised statistical interpretations and predictions can be for all three of these flawed practices by critiquing their use in a recent species distribution modeling technique developed for predicting Greater Sage-Grouse (Centrocercus urophasianus) distribution in Colorado, USA. These model averaging issues are common in other ecological literature and ought to be discontinued if we are to make effective scientific contributions to ecological knowledge and conservation of natural resources.
Model averaging and muddled multimodel inferences
Cade, Brian S.
2015-01-01
Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effects size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. 
The standardized estimates or equivalently the t statistics on unstandardized estimates also can be used to provide more informative measures of relative importance than sums of AIC weights. Finally, I illustrate how seriously compromised statistical interpretations and predictions can be for all three of these flawed practices by critiquing their use in a recent species distribution modeling technique developed for predicting Greater Sage-Grouse (Centrocercus urophasianus) distribution in Colorado, USA. These model averaging issues are common in other ecological literature and ought to be discontinued if we are to make effective scientific contributions to ecological knowledge and conservation of natural resources.
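The Akaike weights underlying the summed-weight practice criticized above are computed as exp(-ΔAIC/2), normalized over the candidate model set; a sketch with invented AIC values:

```python
# Standard Akaike weight calculation: each model's weight is
# exp(-(AIC_i - AIC_min)/2), normalized so the weights sum to one.
# The AIC values below are made up for illustration.

import math

def akaike_weights(aics):
    best = min(aics)
    rel = [math.exp(-(a - best) / 2) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

weights = akaike_weights([100.0, 102.0, 110.0])
print([round(w, 3) for w in weights])  # [0.727, 0.268, 0.005]
```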
Salvati, Louis M; McClure, Sean C; Reddy, Todime M; Cellar, Nicholas A
2016-05-01
This method provides simultaneous determination of total vitamins B1, B2, B3, and B6 in infant formula and related nutritionals (adult and infant). The method was given First Action for vitamins B1, B2, and B6, but not B3, during the AOAC Annual Meeting in September 2015. The method uses acid phosphatase to dephosphorylate the phosphorylated vitamin forms. It then measures thiamine (vitamin B1); riboflavin (vitamin B2); nicotinamide and nicotinic acid (vitamin B3); and pyridoxine, pyridoxal, and pyridoxamine (vitamin B6) from digested sample extract by liquid chromatography-tandem mass spectrometry. A single-laboratory validation was performed on 14 matrixes provided by the AOAC Stakeholder Panel on Infant Formula and Adult Nutritionals (SPIFAN) to demonstrate method effectiveness. The method met requirements of the AOAC SPIFAN Standard Method Performance Requirement for each of the three vitamins, including average over-spike recovery of 99.6 ± 3.5%, average repeatability of 1.5 ± 0.8% relative standard deviation, and average intermediate precision of 3.9 ± 1.3% relative standard deviation.
N2/O2/H2 Dual-Pump Cars: Validation Experiments
NASA Technical Reports Server (NTRS)
O'Byrne, S.; Danehy, P. M.; Cutler, A. D.
2003-01-01
The dual-pump coherent anti-Stokes Raman spectroscopy (CARS) method is used to measure temperature and the relative species densities of N2, O2 and H2 in two experiments. Average values and root-mean-square (RMS) deviations are determined. Mean temperature measurements in a furnace containing air between 300 and 1800 K agreed with thermocouple measurements within 26 K on average, while mean mole fractions agreed to within 1.6% of the expected value. The temperature measurement standard deviation averaged 64 K, while the standard deviation of the species mole fractions averaged 7.8% for O2 and 3.8% for N2, based on 200 single-shot measurements. Preliminary measurements have also been performed in a flat-flame burner for fuel-lean and fuel-rich flames. Temperature standard deviations of 77 K were measured, and the ratios of H2 to N2 and O2 to N2 had standard deviations of 12.3% and 10% of the measured ratio, respectively.
Trowbridge, Philip R; Kahl, J Steve; Sassan, Dari A; Heath, Douglas L; Walsh, Edward M
2010-07-01
Six watersheds in New Hampshire were studied to determine the effects of road salt on stream water quality. Specific conductance in streams was monitored every 15 min for one year using dataloggers. Chloride concentrations were calculated from specific conductance using empirical relationships. Stream chloride concentrations were directly correlated with development in the watersheds and were inversely related to streamflow. Exceedances of the EPA water quality standard for chloride were detected in the four watersheds with the most development. The number of exceedances during a year was linearly related to the annual average concentration of chloride. Exceedances of the water quality standard were not predicted for streams with annual average concentrations less than 102 mg L(-1). Chloride was imported into three of the watersheds at rates ranging from 45 to 98 Mg Cl km(-2) yr(-1). Ninety-one percent of the chloride imported was road salt for deicing roadways and parking lots. A simple, mass balance equation was shown to predict annual average chloride concentrations from streamflow and chloride import rates to the watershed. This equation, combined with the apparent threshold for exceedances of the water quality standard, can be used for screening-level TMDLs for road salt in impaired watersheds.
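The mass-balance screening equation described above reduces to the annual chloride load divided by the annual streamflow volume; a sketch with illustrative numbers (not the study's data):

```python
# Sketch of the mass-balance screening calculation: annual average chloride
# concentration = annual chloride load / annual streamflow volume.
# Import rate, watershed area, and runoff depth below are invented.

def avg_chloride_mg_per_l(load_mg_per_yr, flow_l_per_yr):
    return load_mg_per_yr / flow_l_per_yr

# 60 Mg Cl km^-2 yr^-1 over a 20 km^2 watershed, annual runoff of 0.5 m
load_mg = 60e9 * 20               # 60 Mg = 6e10 mg per km^2, times area
flow_l = 0.5 * 20 * 1e6 * 1000    # depth (m) x area (m^2) x 1000 L per m^3
print(avg_chloride_mg_per_l(load_mg, flow_l))  # 120.0
```

At 120 mg/L this hypothetical stream would exceed the 102 mg/L annual-average threshold the study associates with water quality standard exceedances.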
NASA Astrophysics Data System (ADS)
Wang, J.; Shi, M.; Zheng, P.; Xue, Sh.; Peng, R.
2018-03-01
Laser-induced breakdown spectroscopy has been applied for the quantitative analysis of Ca, Mg, and K in the roots of Angelica pubescens Maxim. f. biserrata Shan et Yuan used in traditional Chinese medicine. Ca II 317.993 nm, Mg I 517.268 nm, and K I 769.896 nm spectral lines were chosen to set up calibration models for the analysis using the external standard and artificial neural network methods. The linear correlation coefficients of the predicted concentrations versus the standard concentrations of six samples determined by the artificial neural network method are 0.9896, 0.9945, and 0.9911 for Ca, Mg, and K, respectively, which are better than those for the external standard method. The artificial neural network method also outperforms the external standard method on the average and maximum relative errors, the average relative standard deviations, and most maximum relative standard deviations of the predicted concentrations of Ca, Mg, and K in the six samples. Finally, it is shown that the artificial neural network method gives better performance than the external standard method for the quantitative analysis of Ca, Mg, and K in the roots of Angelica pubescens.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-23
... parts of the risk adjustment process--the risk adjustment model, the calculation of plan average... risk adjustment process. The risk adjustment model calculates individual risk scores. The calculation...'' to mean all data that are used in a risk adjustment model, the calculation of plan average actuarial...
77 FR 26773 - Agency Information Collection Activities: Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-07
... enhanced and standard data collection and a longitudinal cohort design, and will include a comparative study to assess the effectiveness of HTI relative to a similar sample of young persons who did not... total hourly cost of the study. Summary Burden Table Average annual Average 3-year Number of number...
The Use of Standardized Patients to Teach Low-Literacy Communication Skills
ERIC Educational Resources Information Center
Manning, Kimberly D.; Kripalani, Sunil
2007-01-01
Objective: To describe methods for incorporating standardized patients into health literacy training programs. Methods: We discuss aspects of program development that are relatively unique to this educational context. Results: Individuals were recruited to play the role of an average adult with limited health literacy. Methods of recruitment,…
Increased fMRI Sensitivity at Equal Data Burden Using Averaged Shifted Echo Acquisition
Witt, Suzanne T.; Warntjes, Marcel; Engström, Maria
2016-01-01
There is growing evidence as to the benefits of collecting BOLD fMRI data with increased sampling rates. However, many of the newly developed acquisition techniques developed to collect BOLD data with ultra-short TRs require hardware, software, and non-standard analytic pipelines that may not be accessible to all researchers. We propose to incorporate the method of shifted echo into a standard multi-slice, gradient echo EPI sequence to achieve a higher sampling rate with a TR of <1 s with acceptable spatial resolution. We further propose to incorporate temporal averaging of consecutively acquired EPI volumes to both ameliorate the reduced temporal signal-to-noise inherent in ultra-fast EPI sequences and reduce the data burden. BOLD data were collected from 11 healthy subjects performing a simple, event-related visual-motor task with four different EPI sequences: (1) reference EPI sequence with TR = 1440 ms, (2) shifted echo EPI sequence with TR = 700 ms, (3) shifted echo EPI sequence with every two consecutively acquired EPI volumes averaged and effective TR = 1400 ms, and (4) shifted echo EPI sequence with every four consecutively acquired EPI volumes averaged and effective TR = 2800 ms. Both the temporally averaged sequences exhibited increased temporal signal-to-noise over the shifted echo EPI sequence. The shifted echo sequence with every two EPI volumes averaged also had significantly increased BOLD signal change compared with the other three sequences, while the shifted echo sequence with every four EPI volumes averaged had significantly decreased BOLD signal change compared with the other three sequences. The results indicated that incorporating the method of shifted echo into a standard multi-slice EPI sequence is a viable method for achieving increased sampling rate for collecting event-related BOLD data. 
Further, averaging every two consecutively acquired EPI volumes significantly increased the measured BOLD signal change and the subsequently calculated activation map statistics. PMID:27932947
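The pairwise temporal averaging used in sequence (3) above can be sketched as follows, with volumes modeled as flat lists of voxel intensities (an illustration of the operation, not the authors' pipeline):

```python
# Sketch of temporal averaging of EPI volumes: every two consecutively
# acquired volumes are averaged voxelwise, halving the volume count (and
# data burden) while raising temporal signal-to-noise.

def average_pairs(volumes):
    """Average each consecutive pair of volumes (assumes an even count)."""
    return [
        [(a + b) / 2 for a, b in zip(volumes[i], volumes[i + 1])]
        for i in range(0, len(volumes), 2)
    ]

series = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
print(average_pairs(series))  # [[2.0, 3.0], [6.0, 7.0]]
```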
Larsson, Anne; Johansson, Adam; Axelsson, Jan; Nyholm, Tufve; Asklund, Thomas; Riklund, Katrine; Karlsson, Mikael
2013-02-01
The aim of this study was to evaluate MR-based attenuation correction of PET emission data of the head, based on a previously described technique that calculates substitute CT (sCT) images from a set of MR images. Images from eight patients, examined with 18F-FLT PET/CT and MRI, were included. sCT images were calculated and co-registered to the corresponding CT images, and transferred to the PET/CT scanner for reconstruction. The new reconstructions were then compared with the originals. The effect of replacing bone with soft tissue in the sCT images was also evaluated. The average relative difference between the sCT-corrected PET images and the CT-corrected PET images was 1.6% for the head and 1.9% for the brain. The average standard deviations of the relative differences within the head were relatively high, at 13.2%, primarily because of large differences in the nasal septa region. For the brain, the average standard deviation was lower, 4.1%. The global average difference in the head when replacing bone with soft tissue was 11%. The method presented here has a high rate of accuracy, but high-precision quantitative imaging of the nasal septa region is not possible at the moment.
NASA Astrophysics Data System (ADS)
Scarfone, A. M.; Matsuzoe, H.; Wada, T.
2016-09-01
We show the robustness of the Legendre-transform structure of thermodynamics against the replacement of the standard linear average with the Kolmogorov-Nagumo nonlinear average for evaluating the expectation values of the macroscopic physical observables. The consequence of this statement is twofold: 1) the relationships between the expectation values and the corresponding Lagrange multipliers still hold in the present formalism; 2) the universality of the Gibbs equation, as well as of other thermodynamic relations, is unaffected by the structure of the average used in the theory.
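For readers unfamiliar with the Kolmogorov-Nagumo (quasi-arithmetic) average used above, a minimal sketch follows. It is not taken from the paper; the function and variable names are mine. The KN mean of values x_i under a monotone map φ is φ⁻¹((1/N) Σ φ(x_i)), and it reduces to the ordinary linear average when φ is the identity.

```python
import math

def kn_average(values, phi, phi_inv):
    """Kolmogorov-Nagumo (quasi-arithmetic) mean under a monotone map phi."""
    n = len(values)
    return phi_inv(sum(phi(x) for x in values) / n)

data = [1.0, 2.0, 3.0, 4.0]

# With the identity map the KN mean reduces to the standard arithmetic mean.
linear = kn_average(data, lambda x: x, lambda y: y)

# With phi = exp the KN mean is an exponential-type average,
# which weights large values more heavily than the linear mean.
exp_mean = kn_average(data, math.exp, math.log)
```

Replacing the linear average with such a φ-average is exactly the substitution whose effect on the Legendre-transform structure the abstract discusses.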
Wyschkon, Anne; Schulz, Franziska; Gallit, Finja Sunnyi; Poltz, Nadine; Kohn, Juliane; Moraske, Svenja; Bondü, Rebecca; von Aster, Michael; Esser, Günter
2018-03-01
The study examines the 5-year course of children with dyslexia with regard to their sex. Furthermore, the study investigates the impact of dyslexia on the performance in reading and spelling skills and school-related success. A group of 995 6- to 16-year-olds were examined at the initial assessment. Part of the initial sample was then re-examined after 43 and 63 months. The diagnosis of dyslexia was based on the double discrepancy criterion using a standard deviation of 1.5. Though they had no intellectual deficits, the children showed a considerable discrepancy between their reading or writing abilities and (1) their nonverbal intelligence and (2) the mean of their grade norm. Nearly 70 % of those examined had a persisting diagnosis of dyslexia over a period of 63 months. The 5-year course was not influenced by sex. Despite average intelligence, the performance in writing and spelling of children suffering from dyslexia was one standard deviation below a control group without dyslexia with average intelligence and 0.5 standard deviations below a group of children suffering from intellectual deficits. Furthermore, the school-related success of the dyslexics was significantly lower than that of children with average intelligence. Dyslexics showed similar school-related success rates to children suffering from intellectual deficits. Dyslexia represents a considerable developmental risk. The adverse impact of dyslexia on school-related success supports the importance of early diagnostics and intervention. It also underlines the need for reliable and generally accepted diagnostic criteria. It is important to define such criteria in light of the prevalence rates.
Pugsley, Haley R.; Swearingen, Kristian E.; Dovichi, Norman J.
2009-01-01
A number of algorithms have been developed to correct for migration time drift in capillary electrophoresis. Those algorithms require identification of common components in each run. However, not all components may be present or resolved in separations of complex samples, which can confound attempts for alignment. This paper reports the use of fluorescein thiocarbamyl derivatives of amino acids as internal standards for alignment of 3-(2-furoyl)quinoline-2-carboxaldehyde (FQ)-labeled proteins in capillary sieving electrophoresis. The fluorescein thiocarbamyl derivative of aspartic acid migrates before FQ-labeled proteins and the fluorescein thiocarbamyl derivative of arginine migrates after the FQ-labeled proteins. These compounds were used as internal standards to correct for variations in migration time over a two-week period in the separation of a cellular homogenate. The experimental conditions were deliberately manipulated by varying electric field and sample preparation conditions. Three components of the homogenate were used to evaluate the alignment efficiency. Before alignment, the average relative standard deviation in migration time for these components was 13.3%. After alignment, the average relative standard deviation in migration time for these components was reduced to 0.5%. PMID:19249052
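The alignment idea described above, two internal standards that bracket the analyte peaks, admits a simple two-point linear rescaling of each run's time axis. The sketch below is my own illustration of that idea, not code from the paper; all names are hypothetical.

```python
def align_times(times, std_early, std_late, ref_early, ref_late):
    """Linearly rescale migration times so that this run's two internal
    standards (observed at std_early and std_late) land exactly on their
    reference positions (ref_early and ref_late)."""
    scale = (ref_late - ref_early) / (std_late - std_early)
    return [ref_early + (t - std_early) * scale for t in times]

# A drifted run: standards observed at 5.2 and 12.8 min,
# reference positions 5.0 and 12.0 min; one analyte peak at 9.0 min.
aligned = align_times([5.2, 9.0, 12.8], 5.2, 12.8, 5.0, 12.0)
```

Because every peak is mapped by the same affine transform, run-to-run drift that stretches or shifts the whole electropherogram cancels, which is consistent with the reported drop in migration-time relative standard deviation after alignment.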
Shear-stress fluctuations and relaxation in polymer glasses
NASA Astrophysics Data System (ADS)
Kriuchevskyi, I.; Wittmer, J. P.; Meyer, H.; Benzerara, O.; Baschnagel, J.
2018-01-01
We investigate by means of molecular dynamics simulation a coarse-grained polymer glass model focusing on (quasistatic and dynamical) shear-stress fluctuations as a function of temperature T and sampling time Δt. The linear response is characterized using (ensemble-averaged) expectation values of the contributions (time averaged for each shear plane) to the stress-fluctuation relation μsf for the shear modulus and the shear-stress relaxation modulus G(t). Using 100 independent configurations, we pay attention to the respective standard deviations. While the ensemble-averaged modulus μsf(T) decreases continuously with increasing T for all Δt sampled, its standard deviation δμsf(T) is nonmonotonic with a striking peak at the glass transition. The question of whether the shear modulus is continuous or has a jump singularity at the glass transition is thus ill posed. Confirming the effective time-translational invariance of our systems, the Δt dependence of μsf and related quantities can be understood using a weighted integral over G(t).
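The stress-fluctuation relation for the shear modulus referred to above is commonly written as follows. This is a sketch in my own notation, based on the standard form of the relation rather than on the paper's exact definitions:

```latex
\mu_\mathrm{sf} \;=\; \mu_\mathrm{A} \;-\; \mu_\mathrm{F},
\qquad
\mu_\mathrm{F} \;=\; \beta V \left( \langle \hat{\tau}^{2} \rangle - \langle \hat{\tau} \rangle^{2} \right),
```

where μ_A is the affine (Born) contribution, 𝜏̂ the instantaneous shear stress, V the system volume, and β = 1/k_B T; the averages are taken over the sampling time Δt, which is why μsf depends on Δt and can be related to a weighted integral over G(t).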
ERIC Educational Resources Information Center
Zhang, Zhidong; Telese, James
2012-01-01
In this article, we report the regression relations between preservice teachers' academic characteristics and their performance on the Texas Examination of Educator Standards. These academic characteristics include grade point average, reading ability, and critical thinking. The studies indicate that critical thinking was the best predictor…
Neighbors, Charles J; Barnett, Nancy P; Rohsenow, Damaris J; Colby, Suzanne M; Monti, Peter M
2010-05-01
Brief interventions in the emergency department targeting risk-taking youth show promise to reduce alcohol-related injury. This study models the cost-effectiveness of a motivational interviewing-based intervention relative to brief advice to stop alcohol-related risk behaviors (standard care). Average cost-effectiveness ratios were compared between conditions. In addition, a cost-utility analysis examined the incremental cost of motivational interviewing per quality-adjusted life year gained. Microcosting methods were used to estimate marginal costs of motivational interviewing and standard care as well as two methods of patient screening: standard emergency-department staff questioning and proactive outreach by counseling staff. Average cost-effectiveness ratios were computed for drinking and driving, injuries, vehicular citations, and negative social consequences. Using estimates of the marginal effect of motivational interviewing in reducing drinking and driving, estimates of traffic fatality risk from drinking-and-driving youth, and national life tables, the societal costs per quality-adjusted life year saved by motivational interviewing relative to standard care were also estimated. Alcohol-attributable traffic fatality risks were estimated using national databases. Intervention costs per participant were $81 for standard care, $170 for motivational interviewing with standard screening, and $173 for motivational interviewing with proactive screening. The cost-effectiveness ratios for motivational interviewing were more favorable than standard care across all study outcomes and better for men than women. The societal cost per quality-adjusted life year of motivational interviewing was $8,795. Sensitivity analyses indicated that results were robust in terms of variability in parameter estimates. This brief intervention represents a good societal investment compared with other commonly adopted medical interventions.
40 CFR 464.34 - New source performance standards.
Code of Federal Regulations, 2012 CFR
2012-07-01
...-continuous dischargers, annual average mass standards and maximum day and maximum for monthly average concentration (mg/l) standards shall apply. Concentration standards and annual average mass standards shall only... 40 Protection of Environment 31 2012-07-01 2012-07-01 false New source performance standards. 464...
40 CFR 464.34 - New source performance standards.
Code of Federal Regulations, 2014 CFR
2014-07-01
...-continuous dischargers, annual average mass standards and maximum day and maximum for monthly average concentration (mg/l) standards shall apply. Concentration standards and annual average mass standards shall only... 40 Protection of Environment 30 2014-07-01 2014-07-01 false New source performance standards. 464...
40 CFR 464.34 - New source performance standards.
Code of Federal Regulations, 2013 CFR
2013-07-01
...-continuous dischargers, annual average mass standards and maximum day and maximum for monthly average concentration (mg/l) standards shall apply. Concentration standards and annual average mass standards shall only... 40 Protection of Environment 31 2013-07-01 2013-07-01 false New source performance standards. 464...
The Objective Borderline Method: A Probabilistic Method for Standard Setting
ERIC Educational Resources Information Center
Shulruf, Boaz; Poole, Phillippa; Jones, Philip; Wilkinson, Tim
2015-01-01
A new probability-based standard setting technique, the Objective Borderline Method (OBM), was introduced recently. This was based on a mathematical model of how test scores relate to student ability. The present study refined the model and tested it using 2500 simulated data-sets. The OBM was feasible to use. On average, the OBM performed well…
ERIC Educational Resources Information Center
Ziomek, Robert L.; Wright, Benjamin D.
Techniques such as the norm-referenced and average score techniques, commonly used in the identification of educationally disadvantaged students, are critiqued. This study applied latent trait theory, specifically the Rasch Model, along with teacher judgments relative to the mastery of instructional/test decisions, to derive a standard setting…
McCall, Robert B; Muhamedrahimov, Rifkat J; Groark, Christina J; Palmov, Oleg I; Nikiforova, Natalia V; Salaway, Jennifer; Julian, Megan M
2016-02-01
A total of 149 children, who spent an average of 13.8 months in Russian institutions, were transferred to Russian families of relatives and nonrelatives at an average age of 24.7 months. After residing in these families for at least 1 year (average = 43.2 months), parents reported on their attachment, indiscriminately friendly behavior, social-emotional competencies, problem behaviors, and effortful control when they were 1.5-10.7 years of age. They were compared to a sample of 83 Russian parents of noninstitutionalized children, whom they had reared from birth. Generally, institutionalized children were rated similarly to parent-reared children on most measures, consistent with substantial catch-up growth typically displayed by children after transitioning to families. However, institutionalized children were rated more poorly than parent-reared children on certain competencies in early childhood and some attentional skills. There were relatively few systematic differences associated with age at family placement or whether the families were relatives or nonrelatives. Russian parent-reared children were rated as having more problem behaviors than the US standardization sample, which raises cautions about using standards cross-culturally.
Pandit, Jaideep J; Dexter, Franklin
2009-06-01
At multiple facilities including some in the United Kingdom's National Health Service, the following are features of many surgical-anesthetic teams: i) there is sufficient workload for each operating room (OR) list to almost always be fully scheduled; ii) the workdays are organized such that a single surgeon is assigned to each block of time (usually 8 h); iii) one team is assigned per block; and iv) hardly ever would a team "split" to do cases in more than one OR simultaneously. We used Monte Carlo simulation using normal and Weibull distributions to estimate the times to complete lists of cases scheduled into such 8 h sessions. For each combination of mean and standard deviation, inefficiencies of use of OR time were determined for 10 h versus 8 h of staffing. When the mean actual hours of OR time used averages ≤8 h 25 min, 8 h of staffing has higher OR efficiency than 10 h for all combinations of standard deviation and relative cost of over-run to under-run. When the mean is ≥8 h 50 min, 10 h staffing has higher OR efficiency. For 8 h 25 min < mean < 8 h 50 min, the economic break-even point depends on conditions. For example, break-even is: (a) 8 h 27 min for Weibull, standard deviation of 60 min, and relative cost of over-run to under-run of 2.0 versus (b) 8 h 48 min for normal, standard deviation of 0 min, and relative cost ratio of 1.50. Although the simplest decision rule would be to staff for 8 h if the mean workload is ≤8 h 40 min and to staff for 10 h otherwise, its performance was poor. For example, for the Weibull distribution with mean 8 h 40 min, standard deviation 60 min, and relative cost ratio of 2.00, the inefficiency of use of OR time would be 34% larger if staffing were planned for 8 h instead of 10 h. For surgical teams with 8 h sessions, use the following decision rule for anesthesiology and OR nurse staffing: if actual hours of OR time used averages ≤8 h 25 min, plan 8 h staffing; if the average is ≥8 h 50 min, plan 10 h staffing.
For averages in between, perform the full analysis of McIntosh et al. (Anesth Analg 2006;103:1499-516).
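The three-branch decision rule stated in the abstract can be written down directly. This is a simplified sketch of that rule only (thresholds expressed in decimal hours); the in-between zone genuinely requires the full analysis cited above, which this function does not reproduce.

```python
def planned_staffing_hours(mean_or_hours_used):
    """Decision rule for 8 h surgical blocks, per the abstract:
    staff 8 h when average workload is at most 8 h 25 min,
    10 h when it is at least 8 h 50 min, and flag the break-even
    zone in between, where the answer depends on conditions."""
    if mean_or_hours_used <= 8 + 25 / 60:      # <= 8 h 25 min
        return 8
    if mean_or_hours_used >= 8 + 50 / 60:      # >= 8 h 50 min
        return 10
    return None  # 8 h 25 min < mean < 8 h 50 min: full analysis needed

plan_light = planned_staffing_hours(8.0)   # low workload -> 8 h staffing
plan_heavy = planned_staffing_hours(9.0)   # high workload -> 10 h staffing
plan_mid = planned_staffing_hours(8.6)     # break-even zone -> undecided
```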
Intelligent Distributed Systems
2015-10-23
periodic gossiping algorithms by using convex combination rules rather than standard averaging rules. On a ring graph, we have discovered how to sequence...the gossips within a period to achieve the best possible convergence rate and we have related this optimal value to the classic edge coloring problem...consensus. There are three different approaches to distributed averaging: linear iterations, gossiping, and double linear iterations which are also known as
Lopez, M.A.; Giovannelli, R.F.
1984-01-01
Rainfall, runoff, and water quality data were collected at nine urban watersheds in the Tampa Bay area from 1975 to 1980. Watershed drainage area ranged from 0.34 to 0.45 sq mi. Land use was mixed. Development ranged from a mostly residential watershed with a 19% impervious surface, to a commercial-residential watershed with a 61% impervious surface. Average biochemical oxygen demand concentrations of base flow at two sites and of stormwater runoff at five sites exceeded treated sewage effluent standards. Average coliform concentrations of stormwater runoff at all sites were several orders of magnitude greater than standards for Florida Class III receiving water (for recreation or propagation and management of fish and wildlife). Average concentrations of lead and zinc in stormwater runoff were consistently higher than Class III standards. Stormwater-runoff loads and base-flow concentrations of biochemical oxygen demand, chemical oxygen demand, total nitrogen, total organic nitrogen, total phosphorus, and lead were related to runoff volume, land use, urban development, and antecedent daily rainfall by multiple linear regression. Stormwater-runoff volume was related to pervious area, hydraulically connected impervious surfaces, storm rainfall, and soil-infiltration index. Base-flow daily discharge was related to drainage area and antecedent daily rainfall. The flow regression equations of this report were used to compute 1979 water-year loads of biochemical oxygen demand, chemical oxygen demand, total nitrogen, total organic nitrogen, total phosphorus, and total lead for the nine Tampa Bay area urban watersheds.
Code of Federal Regulations, 2010 CFR
2010-07-01
... a daily maximum hourly average ozone measurement that is greater than the level of the standard... determining the expected number of annual exceedances relate to accounting for incomplete sampling. In general... measurement. In some cases, a measurement might actually have been missed but in other cases no measurement...
Garbarino, J.R.; Taylor, Howard E.
1996-01-01
An inductively coupled plasma-mass spectrometry method was developed for the determination of dissolved Al, As, B, Ba, Be, Cd, Co, Cr, Cu, Li, Mn, Mo, Ni, Pb, Sr, Tl, U, V, and Zn in natural waters. Detection limits are generally in the 50-100 picogram per milliliter (pg/mL) range, with the exception of As which is in the 1 microgram per liter (ug/L) range. Interferences associated with spectral overlap from concomitant isotopes or molecular ions and sample matrix composition have been identified. Procedures for interference correction and reduction related to isotope selection, instrumental operating conditions, and mathematical data processing techniques are described. Internal standards are used to minimize instrumental drift. The average analytical precision attainable for 5 times the detection limit is about 16 percent. The accuracy of the method was tested using a series of U.S. Geological Survey Standard Reference Water Samples (SRWS), the National Research Council Canada Riverine Water Standard, and National Institute of Standards and Technology (NIST) Trace Elements in Water Standards. Average accuracies range from 90 to 110 percent of the published mean values.
Intra- and Interobserver Variability of Cochlear Length Measurements in Clinical CT.
Iyaniwura, John E; Elfarnawany, Mai; Riyahi-Alam, Sadegh; Sharma, Manas; Kassam, Zahra; Bureau, Yves; Parnes, Lorne S; Ladak, Hanif M; Agrawal, Sumit K
2017-07-01
The cochlear A-value measurement exhibits significant inter- and intraobserver variability, and its accuracy is dependent on the visualization method in clinical computed tomography (CT) images of the cochlea. An accurate estimate of the cochlear duct length (CDL) can be used to determine electrode choice and to frequency-map the cochlea based on the Greenwood equation. Studies have described estimating the CDL using a single A-value measurement; however, the observer variability has not been assessed. Clinical and micro-CT images of 20 cadaveric cochleae were acquired. Four specialists measured A-values on clinical CT images using both standard views and multiplanar reconstructed (MPR) views. Measurements were repeated to assess for intraobserver variability. Observer variabilities were evaluated using intra-class correlation and absolute differences. Accuracy was evaluated by comparison to the gold standard micro-CT images of the same specimens. Interobserver variability was good (average absolute difference: 0.77 ± 0.42 mm) using standard views and fair (average absolute difference: 0.90 ± 0.31 mm) using MPR views. Intraobserver variability had an average absolute difference of 0.31 ± 0.09 mm for the standard views and 0.38 ± 0.17 mm for the MPR views. MPR view measurements were more accurate than standard views, with average relative errors of 9.5 and 14.5%, respectively. There was significant observer variability in A-value measurements using both the standard and MPR views. Creating the MPR views increased variability between experts, however MPR views yielded more accurate results. Automated A-value measurement algorithms may help to reduce variability and increase accuracy in the future.
NASA Astrophysics Data System (ADS)
Larsson, R.; Milz, M.; Rayer, P.; Saunders, R.; Bell, W.; Booton, A.; Buehler, S. A.; Eriksson, P.; John, V.
2015-10-01
We present a comparison of a reference and a fast radiative transfer model using numerical weather prediction profiles for the Zeeman-affected high-altitude Special Sensor Microwave Imager/Sounder channels 19-22. We find that the models agree well for channels 21 and 22 compared to the channels' system noise temperatures (1.9 and 1.3 K, respectively) and the expected profile errors at the affected altitudes (estimated to be around 5 K). For channel 22 there is a 0.5 K average difference between the models, with a standard deviation of 0.24 K for the full set of atmospheric profiles. For the same channel, there is a 1.2 K average difference between the fast model and the sensor measurement, with 1.4 K standard deviation. For channel 21 there is a 0.9 K average difference between the models, with a standard deviation of 0.56 K. For the same channel, there is a 1.3 K average difference between the fast model and the sensor measurement, with 2.4 K standard deviation. We consider the relatively small model differences as a validation of the fast Zeeman effect scheme for these channels. Both channels 19 and 20 have smaller average differences between the models (at below 0.2 K) and smaller standard deviations (at below 0.4 K) when both models use a two-dimensional magnetic field profile. However, when the reference model is switched to using a full three-dimensional magnetic field profile, the standard deviation to the fast model is increased to almost 2 K due to viewing geometry dependencies causing up to ±7 K differences near the equator. The average differences between the two models remain small despite changing magnetic field configurations. We are unable to compare channels 19 and 20 to sensor measurements due to the limited altitude range of the numerical weather prediction profiles. 
We recommend that numerical weather prediction software using the fast model take the available fast Zeeman scheme into account for data assimilation of the affected sensor channels, to better constrain upper-atmospheric temperatures.
NASA Astrophysics Data System (ADS)
Larsson, Richard; Milz, Mathias; Rayer, Peter; Saunders, Roger; Bell, William; Booton, Anna; Buehler, Stefan A.; Eriksson, Patrick; John, Viju O.
2016-03-01
We present a comparison of a reference and a fast radiative transfer model using numerical weather prediction profiles for the Zeeman-affected high-altitude Special Sensor Microwave Imager/Sounder channels 19-22. We find that the models agree well for channels 21 and 22 compared to the channels' system noise temperatures (1.9 and 1.3 K, respectively) and the expected profile errors at the affected altitudes (estimated to be around 5 K). For channel 22 there is a 0.5 K average difference between the models, with a standard deviation of 0.24 K for the full set of atmospheric profiles. Concerning the same channel, there is 1.2 K on average between the fast model and the sensor measurement, with 1.4 K standard deviation. For channel 21 there is a 0.9 K average difference between the models, with a standard deviation of 0.56 K. Regarding the same channel, there is 1.3 K on average between the fast model and the sensor measurement, with 2.4 K standard deviation. We consider the relatively small model differences as a validation of the fast Zeeman effect scheme for these channels. Both channels 19 and 20 have smaller average differences between the models (at below 0.2 K) and smaller standard deviations (at below 0.4 K) when both models use a two-dimensional magnetic field profile. However, when the reference model is switched to using a full three-dimensional magnetic field profile, the standard deviation to the fast model is increased to almost 2 K due to viewing geometry dependencies, causing up to ±7 K differences near the equator. The average differences between the two models remain small despite changing magnetic field configurations. We are unable to compare channels 19 and 20 to sensor measurements due to limited altitude range of the numerical weather prediction profiles. 
We recommend that numerical weather prediction software using the fast model take the available fast Zeeman scheme into account for data assimilation of the affected sensor channels, to better constrain upper-atmospheric temperatures.
Rhoderick, George C
2007-04-01
New US federal low-level automobile emission requirements, for example zero-level-emission vehicle (ZLEV), for hydrocarbons and other species, have resulted in the need by manufacturers for new certified reference materials. The new emission requirement for hydrocarbons requires the use, by automobile manufacturing testing facilities, of a 100 nmol mol(-1) propane in air gas standard. Emission-measurement instruments are required, by federal law, to be calibrated with National Institute of Standards and Technology (NIST) traceable reference materials. Because a NIST standard reference material (SRM) containing 100 nmol mol(-1) propane was not available, the US Environmental Protection Agency (EPA) and the Automobile Industry/Government Emissions Research Consortium (AIGER) requested that NIST develop such an SRM. A cylinder lot of 30 gas mixtures containing 100 nmol mol(-1) propane in air was prepared in 6-L aluminium gas cylinders by a specialty gas company and delivered to the Gas Metrology Group at NIST. Another mixture, contained in a 30-L aluminium cylinder and included in the lot, was used as a lot standard (LS). Using gas chromatography with flame-ionization detection all 30 samples were compared to the LS to obtain the average of six peak-area ratios to the LS for each sample with standard deviations of <0.31%. The average sample-to-LS ratio determinations resulted in a range of 0.9828 to 0.9888, a spread of 0.0060, which corresponds to a relative standard deviation of 0.15% of the average for all 30 samples. NIST developed its first set of five propane in air primary gravimetric standards covering a concentration range 91 to 103 nmol mol(-1) with relative uncertainties of 0.15%. This new suite of propane gravimetric standards was used to analyze and assign a concentration value to the SRM LS. On the basis of these data each SRM sample was individually certified, furnishing the desired relative expanded uncertainty of +/-0.5%. 
Because automobile companies use total hydrocarbons to make their measurements, it was also vital to assign a methane concentration to the SRM samples. Some of the SRM samples were analyzed and found to contain 1.2 nmol mol(-1) methane. Twenty-five of the samples were certified and released as SRM 2765.
Kumar, M Kishore; Sreekanth, V; Salmon, Maëlle; Tonne, Cathryn; Marshall, Julian D
2018-08-01
This study uses spatiotemporal patterns in ambient concentrations to infer the contribution of regional versus local sources. We collected 12 months of monitoring data for outdoor fine particulate matter (PM2.5) in rural southern India. Rural India includes more than one-tenth of the global population and annually accounts for around half a million air pollution deaths, yet little is known about the relative contribution of local sources to outdoor air pollution. We measured 1-min averaged outdoor PM2.5 concentrations during June 2015-May 2016 in three villages, which varied in population size, socioeconomic status, and type and usage of domestic fuel. The daily geometric-mean PM2.5 concentration was ∼30 μg m⁻³ (geometric standard deviation: ∼1.5). Concentrations exceeded the Indian National Ambient Air Quality standards (60 μg m⁻³) during 2-5% of observation days. Average concentrations were ∼25 μg m⁻³ higher during winter than during monsoon and ∼8 μg m⁻³ higher during morning hours than the diurnal average. A moving average subtraction method based on 1-min average PM2.5 concentrations indicated that local contributions (e.g., nearby biomass combustion, brick kilns) were greater in the most populated village, and that overall the majority of ambient PM2.5 in our study was regional, implying that local air pollution control strategies alone may have limited influence on local ambient concentrations. We compared the relatively new moving average subtraction method against a more established approach. Both methods broadly agree on the relative contribution of local sources across the three sites. The moving average subtraction method has broad applicability across locations. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
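The core of a moving average subtraction approach like the one described above can be sketched as follows. This is my own simplified illustration, not the paper's implementation: the window length, edge handling, and attribution rules used in the study may differ.

```python
def local_regional_split(series, window):
    """Split a 1-min PM2.5 series into a smooth 'regional' baseline
    (centred moving average over `window` points) and the residual
    positive spikes attributed to nearby 'local' sources."""
    n = len(series)
    half = window // 2
    baseline = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        baseline.append(sum(series[lo:hi]) / (hi - lo))
    local = [max(0.0, x - b) for x, b in zip(series, baseline)]
    return baseline, local

# A flat regional background of 30 ug/m3 with one short local spike:
base, local = local_regional_split([30, 30, 30, 90, 30, 30, 30], 7)
```

The intuition is that regional pollution varies slowly and so survives the smoothing, while short-lived plumes from nearby sources show up as excursions above the baseline.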
Code of Federal Regulations, 2012 CFR
2012-10-01
... OF TRANSPORTATION EXEMPTIONS FROM AVERAGE FUEL ECONOMY STANDARDS § 525.1 Scope. This part establishes... automobiles to exempt them from the average fuel economy standards for passenger automobiles and to establish alternative average fuel economy standards for those manufacturers. ...
Code of Federal Regulations, 2014 CFR
2014-10-01
... OF TRANSPORTATION EXEMPTIONS FROM AVERAGE FUEL ECONOMY STANDARDS § 525.1 Scope. This part establishes... automobiles to exempt them from the average fuel economy standards for passenger automobiles and to establish alternative average fuel economy standards for those manufacturers. ...
Code of Federal Regulations, 2012 CFR
2012-10-01
... OF TRANSPORTATION EXEMPTIONS FROM AVERAGE FUEL ECONOMY STANDARDS § 525.2 Purpose. The purpose of this... automobiles which desire to petition the Administrator for exemption from applicable average fuel economy standards and for establishment of appropriate alternative average fuel economy standards and to give...
Code of Federal Regulations, 2014 CFR
2014-10-01
... OF TRANSPORTATION EXEMPTIONS FROM AVERAGE FUEL ECONOMY STANDARDS § 525.2 Purpose. The purpose of this... automobiles which desire to petition the Administrator for exemption from applicable average fuel economy standards and for establishment of appropriate alternative average fuel economy standards and to give...
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 6 2014-10-01 2014-10-01 false Applicability. 531.3 Section 531.3 Transportation Other Regulations Relating to Transportation (Continued) NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION PASSENGER AUTOMOBILE AVERAGE FUEL ECONOMY STANDARDS § 531.3 Applicability. This...
Mancia, G; Ferrari, A; Gregorini, L; Parati, G; Pomidossi, G; Bertinieri, G; Grassi, G; Zanchetti, A
1980-12-01
1. Intra-arterial blood pressure and heart rate were recorded for 24 h in ambulant hospitalized patients of variable age who had normal blood pressure or essential hypertension. Mean 24 h values, standard deviations and variation coefficients were obtained as the averages of values separately analysed for 48 consecutive half-hour periods. 2. In older subjects standard deviation and variation coefficient for mean arterial pressure were greater than in younger subjects with similar pressure values, whereas standard deviation and variation coefficient for heart rate were smaller. 3. In hypertensive subjects standard deviation for mean arterial pressure was greater than in normotensive subjects of similar ages, but this was not the case for variation coefficient, which was slightly smaller in the former than in the latter group. Normotensive and hypertensive subjects showed no difference in standard deviation and variation coefficient for heart rate. 4. In both normotensive and hypertensive subjects standard deviation and even more so variation coefficient were slightly or not related to arterial baroreflex sensitivity as measured by various methods (phenylephrine, neck suction etc.). 5. It is concluded that blood pressure variability increases and heart rate variability decreases with age, but that changes in variability are not so obvious in hypertension. Also, differences in variability among subjects are only marginally explained by differences in baroreflex function.
Song, Mi; Chen, Zeng-Ping; Chen, Yao; Jin, Jing-Wen
2014-07-01
Liquid chromatography-mass spectrometry assays suffer from signal instability caused by the gradual fouling of the ion source, vacuum instability, aging of the ion multiplier, etc. To address this issue, in this contribution, an internal standard was added into the mobile phase. The internal standard was therefore ionized and detected together with the analytes of interest by the mass spectrometer to ensure that variations in measurement conditions and/or instrument have similar effects on the signal contributions of both the analytes of interest and the internal standard. Subsequently, based on the unique strategy of adding internal standard in mobile phase, a multiplicative effects model was developed for quantitative LC-MS assays and tested on a proof of concept model system: the determination of amino acids in water by LC-MS. The experimental results demonstrated that the proposed method could efficiently mitigate the detrimental effects of continuous signal variation, and achieved quantitative results with average relative predictive error values in the range of 8.0-15.0%, which were much more accurate than the corresponding results of conventional internal standard method based on the peak height ratio and partial least squares method (their average relative predictive error values were as high as 66.3% and 64.8%, respectively). Therefore, it is expected that the proposed method can be developed and extended in quantitative LC-MS analysis of more complex systems. Copyright © 2014 Elsevier B.V. All rights reserved.
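The conventional internal-standard correction that the paper uses as its baseline comparison rests on a simple ratio: drift that multiplies both the analyte and internal-standard signals cancels when you quantify from their ratio. The sketch below illustrates that baseline idea only, with hypothetical names; the paper's multiplicative effects model is a more elaborate multivariate extension and is not reproduced here.

```python
def is_ratio_quantify(analyte_signal, is_signal, calib_slope):
    """Conventional internal-standard quantitation: concentration is
    estimated from the analyte/IS signal ratio divided by the slope of
    a ratio-vs-concentration calibration, so multiplicative drift
    common to both channels cancels."""
    return (analyte_signal / is_signal) / calib_slope

# If ion-source fouling halves both signals, the estimate is unchanged:
c1 = is_ratio_quantify(1000.0, 500.0, 0.4)
c2 = is_ratio_quantify(500.0, 250.0, 0.4)
```

Adding the internal standard to the mobile phase, as the paper proposes, ensures the IS is ionized alongside every analyte throughout the run, so this cancellation applies to the whole chromatogram rather than only at the IS retention time.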
Methods for estimating streamflow at mountain fronts in southern New Mexico
Waltemeyer, S.D.
1994-01-01
The infiltration of streamflow is potential recharge to alluvial-basin aquifers at or near mountain fronts in southern New Mexico. Data for 13 streamflow-gaging stations were used to determine a relation between mean annual streamflow and basin and climatic conditions. Regression analysis was used to develop an equation that can be used to estimate mean annual streamflow on the basis of drainage area and mean annual precipitation. The average standard error of estimate for this equation is 46 percent. Regression analysis also was used to develop an equation to estimate mean annual streamflow on the basis of active-channel width. Measurements of the width of active channels were determined for 6 of the 13 gaging stations. The average standard error of estimate for this relation is 29 percent. Streamflow estimates made using a regression equation based on channel geometry are considered more reliable than estimates made from an equation based on regional relations of basin and climatic conditions. The sample size used to develop these relations was small, however, and the reported standard error of estimate may not represent that of the entire population. Active-channel-width measurements were made at 23 ungaged sites along the Rio Grande upstream from Elephant Butte Reservoir. Data for additional sites would be needed for a more comprehensive assessment of mean annual streamflow in southern New Mexico.
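Regressions of this kind are power laws fitted in log space. A minimal sketch, with invented data (the abstract does not give the actual coefficients):

```python
import math

def fit_power_law(widths, flows):
    """Least-squares fit of log Q = b0 + b1*log W, i.e. Q = a * W**b1,
    the form of regression used to relate streamflow to channel width."""
    xs = [math.log(w) for w in widths]
    ys = [math.log(q) for q in flows]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - b1 * mx), b1  # (a, b1)

# Toy data lying exactly on Q = 0.75 * W**2
a, b1 = fit_power_law([2.0, 4.0, 8.0], [3.0, 12.0, 48.0])
```

Because the fit minimizes error in log space, its residuals translate into a multiplicative (percent) standard error of estimate, which is why the report quotes 46 and 29 percent rather than absolute flows.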
ERIC Educational Resources Information Center
Zern, David S.
1987-01-01
Undergraduates reported anonymously their degree of religiousness, their Scholastic Aptitude Test (SAT) scores, and their grade point averages (GPAs). Found religiousness negatively related to ability, and not related to achievement. The students' capacity to maximize their potential, measured by the standard score difference between GPA and SAT,…
Updated U.S. population standard for the Veterans RAND 12-item Health Survey (VR-12).
Selim, Alfredo J; Rogers, William; Fleishman, John A; Qian, Shirley X; Fincke, Benjamin G; Rothendler, James A; Kazis, Lewis E
2009-02-01
The purpose of this project was to develop an updated U.S. population standard for the Veterans RAND 12-item Health Survey (VR-12). We used a well-defined and nationally representative sample of the U.S. population from 52,425 responses to the Medical Expenditure Panel Survey (MEPS) collected between 2000 and 2002. We applied modified regression estimates to update the non-proprietary 1990 scoring algorithms. We applied the updated standard to the Medicare Health Outcomes Survey (HOS) to compute the VR-12 physical (PCS(MEPS standard)) and mental (MCS(MEPS standard)) component summaries based on the MEPS. We compared these scores to the PCS and MCS based on the 1990 U.S. population standard. Using the updated U.S. population standard, the average VR-12 PCS(MEPS standard) and MCS(MEPS standard) scores in the Medicare HOS were 39.82 (standard deviation [SD] = 12.2) and 50.08 (SD = 11.4), respectively. For the same Medicare HOS, the average PCS and MCS scores based on the 1990 standard were 1.40 points higher and 0.99 points lower, respectively, than the VR-12 PCS(MEPS standard) and MCS(MEPS standard). Changes in the U.S. population since 1990 make the old standard obsolete for the VR-12; the updated standard developed here is widely available to serve as a contemporary standard for future health-related quality of life (HRQoL) assessments.
41 CFR 102-34.55 - Are there fleet average fuel economy standards we must meet?
Code of Federal Regulations, 2011 CFR
2011-01-01
... fuel economy standards we must meet? 102-34.55 Section 102-34.55 Public Contracts and Property... average fuel economy standards we must meet? (a) Yes. 49 U.S.C. 32917 and Executive Order 12375 require that each executive agency meet the fleet average fuel economy standards in place as of January 1 of...
Physical capacity of rescue personnel in the mining industry
Stewart, Ian B; McDonald, Michael D; Hunt, Andrew P; Parker, Tony W
2008-01-01
Background The mining industry has one of the highest occupational rates of serious injury and fatality. Mine staff involved with rescue operations are often required to respond to physically challenging situations. This paper describes the physical attributes of mining rescue personnel. Methods 91 rescue personnel (34 ± 8.6 yrs, 1.79 ± 0.07 m, 90 ± 15.0 kg) participating in the Queensland Mines Rescue Challenge completed a series of health-related and rescue-related fitness tasks. Health-related tasks comprised measurements of aerobic capacity (VO2max), abdominal endurance, abdominal strength, flexibility, lower back strength, leg strength, elbow flexion strength, shoulder strength, lower back endurance, and leg endurance. Rescue-related tasks comprised an incremental carry (IC), coal shovel (CS), and a hose drag (HD), completed in this order. Results Cardiovascular capacity (VO2max) and muscular endurance were average or below average compared with the general population. Isometric strength did not decline with age. The rescue-related tasks were all extremely demanding, with heart rate responses averaging greater than 88% of age-predicted maximal heart rates. Heart rate recovery responses were more discriminating than heart rates recorded during the tasks, indicating the hose drag as the most physically demanding of the tasks. Conclusion Relying on actual rescues or mining-related work to provide adequate training is generally insufficient to maintain, let alone increase, physical fitness. It is therefore recommended that standards of required physical fitness be developed and that mines rescue personnel undergo regular training (and assessment) in order to maintain these standards. PMID:18847510
Li, Xiongwei; Wang, Zhe; Fu, Yangting; Li, Zheng; Liu, Jianmin; Ni, Weidou
2014-01-01
Measurement of coal carbon content using laser-induced breakdown spectroscopy (LIBS) is limited by its low precision and accuracy. A modified spectrum standardization method was proposed to achieve both reproducible and accurate results for the quantitative analysis of carbon content in coal using LIBS. The proposed method used the molecular emissions of diatomic carbon (C2) and cyanide (CN) to compensate for the diminution of atomic carbon emissions in high volatile content coal samples caused by matrix effect. The compensated carbon line intensities were further converted into an assumed standard state with standard plasma temperature, electron number density, and total number density of carbon, under which the carbon line intensity is proportional to its concentration in the coal samples. To obtain better compensation for fluctuations of total carbon number density, the segmental spectral area was used and an iterative algorithm was applied that is different from our previous spectrum standardization calculations. The modified spectrum standardization model was applied to the measurement of carbon content in 24 bituminous coal samples. The results demonstrate that the proposed method has superior performance over the generally applied normalization methods. The average relative standard deviation was 3.21%, the coefficient of determination was 0.90, the root mean square error of prediction was 2.24%, and the average maximum relative error for the modified model was 12.18%, showing an overall improvement over the corresponding values for the normalization with segmental spectrum area, 6.00%, 0.75, 3.77%, and 15.40%, respectively.
Standardized versus custom parenteral nutrition: impact on clinical and cost-related outcomes.
Blanchette, Lisa M; Huiras, Paul; Papadopoulos, Stella
2014-01-15
Results of a study comparing clinical and cost outcomes with the use of standardized versus custom-prepared parenteral nutrition (PN) in an acute care setting are reported. In a retrospective pre-post analysis, nutritional target attainment, electrolyte abnormalities, and other outcomes were compared in patients 15 years of age or older who received custom PN (n = 49) or a standardized PN product (n = 57) for at least 72 hours at a large medical center over a 13-month period; overall, 45% of the cases were intensive care unit (ICU) admissions. A time-and-motion assessment was conducted to determine PN preparation times. There were no significant between-group differences in the percentage of patients who achieved estimated caloric requirements or in mean ICU or hospital length of stay. However, patients who received standardized PN were significantly less likely than those who received custom PN to achieve the highest protein intake goal (63% versus 92%, p = 0.003) and more likely to develop hyponatremia (37% versus 14%, p = 0.01). Pharmacy preparation times averaged 20 minutes for standardized PN and 80 minutes for custom PN; unit costs were $61.06 and $57.84, respectively. A standardized PN formulation was as effective as custom PN in achieving estimated caloric requirements, but it was relatively less effective in achieving 90% of estimated protein requirements and was associated with a higher frequency of hyponatremia. The standardized PN product may be a cost-effective formulation for institutions preparing an average of five or fewer PN orders per day.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 20 2012-07-01 2012-07-01 false Fleet average non-methane organic gas....1710-99 Fleet average non-methane organic gas exhaust emission standards for light-duty vehicles and... follows: Table R99-15—Fleet Average Non-Methane Organic Gas Standards (g/mi) for Light-Duty Vehicles and...
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 6 2014-10-01 2014-10-01 false Applicability. 525.3 Section 525.3 Transportation Other Regulations Relating to Transportation (Continued) NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION EXEMPTIONS FROM AVERAGE FUEL ECONOMY STANDARDS § 525.3 Applicability. This part...
49 CFR 525.12 - Public inspection of information.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 6 2014-10-01 2014-10-01 false Public inspection of information. 525.12 Section 525.12 Transportation Other Regulations Relating to Transportation (Continued) NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION EXEMPTIONS FROM AVERAGE FUEL ECONOMY STANDARDS...
49 CFR 531.5 - Fuel economy standards.
Code of Federal Regulations, 2014 CFR
2014-10-01
... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION PASSENGER AUTOMOBILE AVERAGE FUEL ECONOMY STANDARDS § 531.5 Fuel... automobiles shall comply with the fleet average fuel economy standards in Table I, expressed in miles per... passenger automobile fleet shall comply with the fleet average fuel economy level calculated for that model...
49 CFR 531.5 - Fuel economy standards.
Code of Federal Regulations, 2013 CFR
2013-10-01
... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION PASSENGER AUTOMOBILE AVERAGE FUEL ECONOMY STANDARDS § 531.5 Fuel... automobiles shall comply with the fleet average fuel economy standards in Table I, expressed in miles per... passenger automobile fleet shall comply with the fleet average fuel economy level calculated for that model...
The devil is in the details: maximizing revenue for daily trauma care.
Barnes, Stephen L; Robinson, Bryce R H; Richards, J Taliesin; Zimmerman, Cindy E; Pritts, Tim A; Tsuei, Betty J; Butler, Karyn L; Muskat, Peter C; Davis, Kenneth; Johannigman, Jay A
2008-10-01
Falling reimbursement rates for trauma care demand a concerted effort of charge capture for the fiscal survival of trauma surgeons. We compared Current Procedural Terminology (CPT) code distribution and billing patterns for Subsequent Hospital Care (SHC) before and after the institution of standardized documentation. Standardized SHC progress notes were created. The note was formulated with an emphasis on efficiency and accuracy. Documentation was completed by residents in conjunction with attendings following standard guidelines of linkage. Year-to-year patient volume, length of stay (LOS), injury severity, bills submitted, coding of service, work relative value units (wRVUs), revenue stream, and collection rate were compared with and without standardized documentation. A 394% average revenue increase was observed with the standardization of SHC documentation. Submitted charges more than doubled in the first year despite a 14% reduction in admissions and no change in length of stay. Significant increases in level II and level III billing and billing volume (P < .05) were sustained year to year and resulted in an increase in average SHC income per patient admission from $91.85 to $362.31. Use of a standardized daily progress note dramatically increases the accuracy of coding and associated billing of subsequent hospital care for trauma services.
Toprak, Ibrahim; Yaylalı, Volkan; Yildirim, Cem
2017-01-01
To assess diagnostic consistency and the relation between spectral-domain optical coherence tomography (SD-OCT) and standard automated perimetry (SAP) in patients with primary open-angle glaucoma (POAG). This retrospective study comprised 51 eyes of 51 patients with a confirmed diagnosis of POAG. The qualitative and quantitative SD-OCT parameters (retinal nerve fiber layer [RNFL] thicknesses [average, superior, inferior, nasal and temporal], RNFL symmetry, rim area, disc area, average and vertical cup/disc [C/D] ratio and cup volume) were compared with parameters of SAP (mean deviation, pattern standard deviation, visual field index, and glaucoma hemifield test reports). Twenty-nine eyes (56.9%) had consistent RNFL and visual field (VF) damage. However, nine patients (17.6%) showed isolated RNFL damage on SD-OCT and 13 patients (25.5%) had an abnormal VF test with normal RNFL. In patients with a VF defect, age, average C/D ratio, vertical C/D ratio, and cup volume were significantly higher and rim area was lower compared to those of the patients with a normal VF. In addition to these parameters, worsening in average, superior, inferior, and temporal RNFL thicknesses and RNFL symmetry was significantly associated with consistent SD-OCT and SAP outcomes. In routine practice, patients with POAG can present with inconsistent reports between SD-OCT and SAP. Older age, higher C/D ratio, larger cup volume, and lower rim area on SD-OCT appear to be associated with detectable VF damage. Moreover, additional worsening in RNFL parameters might reinforce diagnostic consistency between SD-OCT and SAP.
Martin, Jeffrey D.
2002-01-01
Correlation analysis indicates that for most pesticides and concentrations, pooled estimates of relative standard deviation rather than pooled estimates of standard deviation should be used to estimate variability, because pooled estimates of relative standard deviation are less affected by heteroscedasticity. The median pooled relative standard deviation was calculated for all pesticides to summarize the typical variability of the pesticide data collected for the NAWQA Program. The median pooled relative standard deviation was 15 percent at concentrations less than 0.01 micrograms per liter (µg/L), 13 percent at concentrations near 0.01 µg/L, 12 percent at concentrations near 0.1 µg/L, 7.9 percent at concentrations near 1 µg/L, and 2.7 percent at concentrations greater than 5 µg/L. Pooled estimates of standard deviation or relative standard deviation presented in this report are larger than estimates based on averages, medians, smooths, or regression of the individual measurements of standard deviation or relative standard deviation from field replicates. Pooled estimates, however, are the preferred method for characterizing variability because they provide unbiased estimates of the variability of the population. Assessments of variability based on standard deviation (rather than variance) underestimate the true variability of the population. Because pooled estimates of variability are larger than estimates based on other approaches, users of estimates of variability must be cognizant of the approach used to obtain the estimate and must use caution in the comparison of estimates based on different approaches.
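The pooling the report recommends can be sketched as follows, assuming the standard pooling formula that weights each replicate set's squared RSD by its degrees of freedom (the toy concentrations are invented):

```python
import math
import statistics

def pooled_rsd(replicate_sets):
    """Pooled relative standard deviation (%) across replicate sets,
    weighting each set's squared RSD by its degrees of freedom."""
    weighted_sum, dof = 0.0, 0
    for reps in replicate_sets:
        rsd = 100.0 * statistics.stdev(reps) / statistics.mean(reps)
        weighted_sum += (len(reps) - 1) * rsd ** 2
        dof += len(reps) - 1
    return math.sqrt(weighted_sum / dof)

# Two toy field-replicate pairs (concentrations in µg/L)
result = pooled_rsd([[0.10, 0.12], [0.50, 0.52]])
```

Pooling in the squared (variance) domain before taking the square root is what makes the estimate unbiased, whereas averaging the individual RSDs directly underestimates the population variability, as the report notes.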
[Determination of acetochlor and oxyfluorfen by capillary gas chromatography].
Xiang, Wen-Sheng; Wang, Xiang-Jing; Wang, Jing; Wang, Qing
2002-09-01
A method is described for the determination of acetochlor and oxyfluorfen by capillary gas chromatography with FID and an SE-30 capillary column (60 m x 0.53 mm i.d., 1.5 microm), using dibutyl phthalate as the internal standard. The standard deviations for acetochlor and oxyfluorfen concentration (mass fraction) were 0.44% and 0.47%, respectively. The relative standard deviations for acetochlor and oxyfluorfen were 0.79% and 0.88%, and the average recoveries were 99.3% and 101.1%, respectively. The method is simple, rapid and accurate.
Gröbner, Julian; Rembges, Diana; Bais, Alkiviadis F; Blumthaler, Mario; Cabot, Thierry; Josefsson, Weine; Koskela, Tapani; Thorseth, Trond M; Webb, Ann R; Wester, Ulf
2002-07-20
A program for quality assurance of reference standards has been initiated among nine solar-UV monitoring laboratories. By means of a traveling lamp package that comprises several 1000-W ANSI code DXW-type quartz-halogen lamps, a 0.1-ohm shunt, and a 6-1/2 digit voltmeter, the irradiance scales used by the nine laboratories were compared with one another; a relative uncertainty of 1.2% was found. The comparison of 15 reference standards yielded differences of as much as 9%; the average difference was less than 3%.
49 CFR 525.9 - Duration of exemption.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 6 2014-10-01 2014-10-01 false Duration of exemption. 525.9 Section 525.9 Transportation Other Regulations Relating to Transportation (Continued) NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION EXEMPTIONS FROM AVERAGE FUEL ECONOMY STANDARDS § 525.9 Duration of...
49 CFR 525.10 - Renewal of exemption.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 6 2014-10-01 2014-10-01 false Renewal of exemption. 525.10 Section 525.10 Transportation Other Regulations Relating to Transportation (Continued) NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION EXEMPTIONS FROM AVERAGE FUEL ECONOMY STANDARDS § 525.10 Renewal of...
49 CFR 525.5 - Limitation on eligibility.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 6 2014-10-01 2014-10-01 false Limitation on eligibility. 525.5 Section 525.5 Transportation Other Regulations Relating to Transportation (Continued) NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION EXEMPTIONS FROM AVERAGE FUEL ECONOMY STANDARDS § 525.5 Limitation on...
Wu, Ping-gu; Ma, Bing-jie; Wang, Li-yuan; Shen, Xiang-hong; Zhang, Jing; Tan, Ying; Jiang, Wei
2013-11-01
To establish a method for the simultaneous determination of methylcarbamate (MC) and ethylcarbamate (EC) in yellow rice wine by gas chromatography-mass spectrometry (GC/MS). MC and EC in yellow rice wine were derivatized with 9-xanthydrol, and the derivatives were detected by GC/MS and quantitatively analyzed by the D5-EC isotope internal standard method. The linearity of MC and EC ranged from 2.0 µg/L to 400.0 µg/L, with correlation coefficients of 0.998 and 0.999, respectively. The limits of detection (LOD) and quantitation (LOQ) were 0.67 and 2.0 µg/kg, respectively. When MC and EC were added to yellow rice wine in the range of 2.0-300.0 µg/kg, the intraday average recovery rate was 78.8%-102.3% (relative standard deviation 3.2%-11.6%) and the interday average recovery rate was 75.4%-101.3% (relative standard deviation 3.8%-13.4%). Twenty samples of yellow rice wine from supermarkets were analyzed using this method: the contents of MC were in the range of ND (not detected) to 1.2 µg/kg, with a detection rate of 6% (3/20); the contents of EC were in the range of 18.6 µg/kg to 432.3 µg/kg, with an average level of 135.2 µg/kg. The method is simple, rapid and useful for the simultaneous determination of MC and EC in yellow rice wine.
40 CFR 464.44 - New source performance standards.
Code of Federal Regulations, 2010 CFR
2010-07-01
..., total phenols, oil and grease, and TSS. For non-continuous dischargers, annual average mass standards.... Concentration standards and annual average mass standards shall only apply to non-continuous dischargers. (a... 40 Protection of Environment 29 2010-07-01 2010-07-01 false New source performance standards. 464...
40 CFR 464.14 - New source performance standards.
Code of Federal Regulations, 2013 CFR
2013-07-01
..., total phenols, oil and grease, and TSS. For non-continuous dischargers, annual average mass standards.... Concentration standards and annual average mass standards shall only apply to non-continuous dischargers. (a... 40 Protection of Environment 31 2013-07-01 2013-07-01 false New source performance standards. 464...
40 CFR 464.24 - New source performance standards.
Code of Federal Regulations, 2013 CFR
2013-07-01
..., total phenols, oil and grease, and TSS. For non-continuous dischargers, annual average mass standards.... Concentration standards and annual average mass standards shall only apply to non-continuous dischargers. (a... 40 Protection of Environment 31 2013-07-01 2013-07-01 false New source performance standards. 464...
Alcohol promotions in Australian supermarket catalogues.
Johnston, Robyn; Stafford, Julia; Pierce, Hannah; Daube, Mike
2017-07-01
In Australia, most alcohol is sold as packaged liquor from off-premises retailers, a market increasingly dominated by supermarket chains. Competition between retailers may encourage marketing approaches, for example, discounting, that evidence indicates contribute to alcohol-related harms. This research documented the nature and variety of promotional methods used by two major supermarket retailers to promote alcohol products in their supermarket catalogues. Weekly catalogues from the two largest Australian supermarket chains were reviewed for alcohol-related content over 12 months. Alcohol promotions were assessed for promotion type, product type, number of standard drinks, purchase price and price/standard drink. Each store catalogue included, on average, 13 alcohol promotions/week, with price-based promotions most common. Forty-five percent of promotions required the purchase of multiple alcohol items. Wine was the most frequently promoted product (44%), followed by beer (24%) and spirits (18%). Most (99%) wine cask (2-5 L container) promotions required multiple (two to three) casks to be purchased. The average number of standard drinks required to be purchased to participate in catalogue promotions was 31.7 (SD = 24.9; median = 23.1). The median price per standard drink was $1.49 (range $0.19-$9.81). Cask wines had the lowest cost per standard drink across all product types. Supermarket catalogues' emphasis on low prices/high volumes of alcohol reflects that retailers are taking advantage of limited restrictions on off-premise sales and promotion, which allow them to approach market competition in ways that may increase alcohol-related harms in consumers. Regulation of alcohol marketing should address retailer catalogue promotions. [Johnston R, Stafford J, Pierce H, Daube M. Alcohol promotions in Australian supermarket catalogues. Drug Alcohol Rev 2017;36:456-463]. © 2016 Australasian Professional Society on Alcohol and other Drugs.
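The catalogue metrics used above follow from the Australian definition of a standard drink (10 g of pure alcohol) and the density of ethanol (about 0.789 g/mL). The sketch below is illustrative; the example product is hypothetical, not one from the study:

```python
def standard_drinks(volume_ml, abv_percent):
    """Standard drinks in a container: grams of ethanol divided by 10
    (Australian definition), using ethanol density 0.789 g/mL."""
    return volume_ml * (abv_percent / 100.0) * 0.789 / 10.0

def price_per_standard_drink(price, volume_ml, abv_percent):
    """Purchase price divided by the standard drinks it buys."""
    return price / standard_drinks(volume_ml, abv_percent)

# A hypothetical 4 L cask of 12.5% ABV wine sold for $12.00
drinks = standard_drinks(4000, 12.5)                 # about 39.5 drinks
ppsd = price_per_standard_drink(12.00, 4000, 12.5)   # about $0.30 per drink
```

This arithmetic shows why cask wine dominates the low end of the price-per-standard-drink range reported above: large volumes at moderate strength yield dozens of drinks per unit price.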
Hess, Glen W.
2002-01-01
Techniques for estimating monthly streamflow-duration characteristics at ungaged and partial-record sites in central Nevada have been updated. These techniques were developed using streamflow records at six continuous-record sites, basin physical and climatic characteristics, and concurrent streamflow measurements at four partial-record sites. Two methods, the basin-characteristic method and the concurrent-measurement method, were developed to provide estimating techniques for selected streamflow characteristics at ungaged and partial-record sites in central Nevada. In the first method, logarithmic-regression analyses were used to relate monthly mean streamflows (from all months and by month) from continuous-record gaging sites of various percent exceedence levels or monthly mean streamflows (by month) to selected basin physical and climatic variables at ungaged sites. Analyses indicate that the total drainage area and percent of drainage area at altitudes greater than 10,000 feet are the most significant variables. For the equations developed from all months of monthly mean streamflow, the coefficient of determination averaged 0.84 and the standard error of estimate of the relations for the ungaged sites averaged 72 percent. For the equations derived from monthly means by month, the coefficient of determination averaged 0.72 and the standard error of estimate of the relations averaged 78 percent. If standard errors are compared, the relations developed in this study appear generally to be less accurate than those developed in a previous study. However, the new relations are based on additional data and the slight increase in error may be due to the wider range of streamflow for a longer period of record, 1995-2000. In the second method, streamflow measurements at partial-record sites were correlated with concurrent streamflows at nearby gaged sites by the use of linear-regression techniques. 
Statistical measures of results using the second method typically indicated greater accuracy than for the first method. However, to make estimates for individual months, the concurrent-measurement method requires several years of additional streamflow data at more partial-record sites. Thus, exceedance values for individual months are not yet available because of the small number of concurrent streamflow measurements available. Reliability, limitations, and applications of both estimating methods are described herein.
7 CFR 51.2561 - Average moisture content.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except when...
Geographic Gossip: Efficient Averaging for Sensor Networks
NASA Astrophysics Data System (ADS)
Dimakis, Alexandros D. G.; Sarwate, Anand D.; Wainwright, Martin J.
Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste of energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\epsilon$ using $O(\frac{n^{1.5}}{\sqrt{\log n}} \log \epsilon^{-1})$ radio transmissions, which yields a $\sqrt{\frac{n}{\log n}}$ factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental comparisons between our algorithm and standard methods as applied to various classes of random fields.
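As a point of reference for these results, the following is a minimal sketch of the *standard* randomized pairwise gossip baseline that the paper improves on (the topology, node values, and round count are hypothetical; this is not the authors' geographic algorithm):

```python
import random

def pairwise_gossip(values, edges, rounds=20000, seed=0):
    """Standard randomized gossip: at each step a random edge (i, j) is
    chosen and both endpoints replace their values with their pairwise
    average.  The sum is preserved at every step, so all node values
    converge to the global average."""
    rng = random.Random(seed)
    x = list(values)
    for _ in range(rounds):
        i, j = rng.choice(edges)
        avg = (x[i] + x[j]) / 2.0
        x[i] = x[j] = avg
    return x

# Ring topology on 8 nodes (a slow-mixing case discussed in the abstract)
n = 8
edges = [(i, (i + 1) % n) for i in range(n)]
values = [float(i) for i in range(n)]   # true average = 3.5
result = pairwise_gossip(values, edges)
```

On slow-mixing graphs such as the ring, many rounds are needed before every node is close to the average, which is exactly the inefficiency the geographic scheme targets.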
40 CFR 413.04 - Standards for integrated facilities.
Code of Federal Regulations, 2010 CFR
2010-07-01
... GUIDELINES AND STANDARDS ELECTROPLATING POINT SOURCE CATEGORY General Provisions § 413.04 Standards for... § 403.6(e) of EPA's General Pretreatment Regulations. In cases where electroplating process wastewaters... average standard for the electroplating wastewaters must be used. The 30 day average shall be determined...
Average variograms to guide soil sampling
NASA Astrophysics Data System (ADS)
Kerry, R.; Oliver, M. A.
2004-10-01
To manage land in a site-specific way for agriculture requires detailed maps of the variation in the soil properties of interest. To predict accurately for mapping, the interval at which the soil is sampled should relate to the scale of spatial variation. A variogram can be used to guide sampling in two ways: a sampling interval of less than half the range of spatial dependence can be used, or the variogram can be used with the kriging equations to determine an optimal sampling interval that achieves a given tolerable error. A variogram might not be available for the site, but if variograms of several soil properties were available for a similar parent material and/or particular topographic positions, an average variogram could be calculated from them. Averages of the variogram ranges and standardized average variograms from four different parent materials in southern England were used to suggest suitable sampling intervals, based on half the variogram range, for future surveys in similar pedological settings. The standardized average variograms were also used to determine optimal sampling intervals using the kriging equations. Similar sampling intervals were suggested by each method, and the maps of predictions based on data at different grid spacings were evaluated for the different parent materials. Variograms of loss on ignition (LOI) taken from the literature for other sites in southern England with similar parent materials had ranges close to the average for a given parent material, showing the possible wider application of such averages to guide sampling.
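The half-range rule of thumb described above is simple to apply in practice; a minimal sketch (the example variogram ranges are hypothetical):

```python
def suggested_sampling_interval(variogram_ranges):
    """Return half of the average variogram range, following the rule
    of thumb that the sampling interval should be less than half the
    range of spatial dependence."""
    average_range = sum(variogram_ranges) / len(variogram_ranges)
    return average_range / 2.0

# Hypothetical variogram ranges (in metres) for one parent material
interval = suggested_sampling_interval([100.0, 120.0, 140.0])  # -> 60.0
```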
Manufacturer's Policies Concerning Average Fuel Economy Standards
DOT National Transportation Integrated Search
1979-01-01
The National Highway Traffic Safety Administration (NHTSA) has been given the responsibility for implementing the average fuel economy standards for passenger automobiles mandated by the Energy Policy and Conservation Act (P.L. 94-163). The standards...
Xie, Wei-Qi; Gong, Yi-Xian; Yu, Kong-Xian
2017-08-18
This work investigates a new reaction headspace gas chromatographic (HS-GC) technique for efficiently quantifying the average valence of manganese (Mn) in manganese oxides. The method is based on the oxidation reaction between manganese oxides and sodium oxalate under acidic conditions. The carbon dioxide (CO2) formed from the oxidation reaction can be quantitatively analyzed by headspace gas chromatography. The data showed that the reaction in the closed headspace vial can be completed in 20 min at 80°C. The relative standard deviation of this reaction HS-GC method in the precision testing was within 1.08%, and the relative differences between the new method and the reference method (titration) were no more than 5.71%. The new HS-GC method is automated and efficient, and can be a reliable tool for quantitative analysis of the average valence of manganese in manganese oxide-related research and applications. Copyright © 2017 Elsevier B.V. All rights reserved.
Lead-lag relationships between stock and market risk within linear response theory
NASA Astrophysics Data System (ADS)
Borysov, Stanislav; Balatsky, Alexander
2015-03-01
We study historical correlations and lead-lag relationships between individual stock risks (standard deviation of daily stock returns) and market risk (standard deviation of daily returns of a market-representative portfolio) in the US stock market. We consider the cross-correlation functions averaged over stocks, using historical stock prices from the Standard & Poor's 500 index for 1994-2013. The observed historical dynamics suggests that the dependence between the risks was almost linear during the US stock market downturn of 2002 and after the US housing bubble in 2007, remaining at that level until 2013. Moreover, the averaged cross-correlation function often had an asymmetric shape with respect to zero lag in the periods of high correlation. We develop the analysis by the application of the linear response formalism to study underlying causal relations. The calculated response functions suggest the presence of characteristic regimes near financial crashes, when individual stock risks affect market risk and vice versa. This work was supported by VR 621-2012-2983.
Signal averaging limitations in heterodyne- and direct-detection laser remote sensing measurements
NASA Technical Reports Server (NTRS)
Menyuk, N.; Killinger, D. K.; Menyuk, C. R.
1983-01-01
The improvement in measurement uncertainty brought about by the averaging of increasing numbers of pulse return signals in both heterodyne- and direct-detection lidar systems is investigated. A theoretical analysis is presented which shows the standard deviation of the mean measurement to decrease as the inverse square root of the number of measurements, except in the presence of temporal correlation. Experimental measurements based on a dual-hybrid-TEA CO2 laser differential absorption lidar system are reported which demonstrate that the actual reduction in the standard deviation of the mean in both heterodyne- and direct-detection systems is much slower than the inverse square-root dependence predicted for uncorrelated signals, but is in agreement with predictions in the event of temporal correlation. Results thus favor the use of direct detection at relatively short range where the lower limit of the standard deviation of the mean is about 2 percent, but advantages of heterodyne detection at longer ranges are noted.
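The inverse-square-root law for uncorrelated signals described in this abstract is easy to verify numerically; a hedged Monte Carlo sketch (the unit-variance Gaussian "returns" are hypothetical stand-ins for uncorrelated lidar pulse returns, not the paper's data):

```python
import random
import statistics

def std_of_mean(n_pulses, n_trials=2000, seed=1):
    """Monte Carlo check of the inverse-square-root law: the standard
    deviation of the mean of n_pulses independent, unit-variance
    samples should scale as 1 / sqrt(n_pulses)."""
    rng = random.Random(seed)
    means = [
        statistics.fmean(rng.gauss(0.0, 1.0) for _ in range(n_pulses))
        for _ in range(n_trials)
    ]
    return statistics.pstdev(means)
```

With `n_pulses = 100` the result is close to 0.1, i.e. 1/sqrt(100); temporally correlated returns, as the paper shows, decay more slowly than this.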
LI, FENFANG; WILKENS, LYNNE R.; NOVOTNY, RACHEL; FIALKOWSKI, MARIE K.; PAULINO, YVETTE C.; NELSON, RANDALL; BERSAMIN, ANDREA; MARTIN, URSULA; DEENIK, JONATHAN; BOUSHEY, CAROL J.
2016-01-01
Objectives Anthropometric standardization is essential to obtain reliable and comparable data from different geographical regions. The purpose of this study is to describe anthropometric standardization procedures and findings from the Children’s Healthy Living (CHL) Program, a study on childhood obesity in 11 jurisdictions in the US-Affiliated Pacific Region, including Alaska and Hawai‘i. Methods Zerfas criteria were used to compare the measurement components (height, waist, and weight) between each trainee and a single expert anthropometrist. In addition, intra- and inter-rater technical error of measurement (TEM), coefficient of reliability, and average bias relative to the expert were computed. Results From September 2012 to December 2014, 79 trainees participated in at least 1 of 29 standardization sessions. A total of 49 trainees passed either standard or alternate Zerfas criteria and were qualified to assess all three measurements in the field. Standard Zerfas criteria were difficult to achieve: only 2 of 79 trainees passed at their first training session. Intra-rater TEM estimates for the 49 trainees compared well with the expert anthropometrist. Average biases were within acceptable limits of deviation from the expert. Coefficient of reliability was above 99% for all three anthropometric components. Conclusions Standardization based on comparison with a single expert ensured the comparability of measurements from the 49 trainees who passed the criteria. The anthropometric standardization process and protocols followed by CHL resulted in 49 standardized field anthropometrists and have helped build capacity in the health workforce in the Pacific Region. PMID:26457888
40 CFR 80.1290 - How are standard benzene credits generated?
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 17 2012-07-01 2012-07-01 false How are standard benzene credits... PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Averaging, Banking and Trading (abt) Program § 80.1290 How are standard benzene credits generated? (a) The standard credit averaging...
40 CFR 80.1290 - How are standard benzene credits generated?
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 16 2011-07-01 2011-07-01 false How are standard benzene credits... PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Averaging, Banking and Trading (abt) Program § 80.1290 How are standard benzene credits generated? (a) The standard credit averaging...
40 CFR 80.1290 - How are standard benzene credits generated?
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 17 2014-07-01 2014-07-01 false How are standard benzene credits... PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Averaging, Banking and Trading (abt) Program § 80.1290 How are standard benzene credits generated? (a) The standard credit averaging...
40 CFR 80.1290 - How are standard benzene credits generated?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 16 2010-07-01 2010-07-01 false How are standard benzene credits... PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Averaging, Banking and Trading (abt) Program § 80.1290 How are standard benzene credits generated? (a) The standard credit averaging...
40 CFR 80.1290 - How are standard benzene credits generated?
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 17 2013-07-01 2013-07-01 false How are standard benzene credits... PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Averaging, Banking and Trading (abt) Program § 80.1290 How are standard benzene credits generated? (a) The standard credit averaging...
Bulzacchelli, Maria T; Vernick, Jon S; Webster, Daniel W; Lees, Peter S J
2007-10-01
To evaluate the impact of the United States' federal Occupational Safety and Health Administration's control of hazardous energy (lockout/tagout) standard on rates of machinery-related fatal occupational injury. The standard, which took effect in 1990, requires employers in certain industries to establish an energy control program and sets minimum criteria for energy control procedures, training, inspections, and hardware. An interrupted time-series design was used to determine the standard's effect on fatality rates. Machinery-related fatalities, obtained from the National Traumatic Occupational Fatalities surveillance system for 1980 through 2001, were used as a proxy for lockout/tagout-related fatalities. Linear regression was used to control for changes in demographic and economic factors. The average annual crude rate of machinery-related fatalities in manufacturing changed little from 1980 to 1989, but declined by 4.59% per year from 1990 to 2001. However, when controlling for demographic and economic factors, the regression model estimate of the standard's effect is a small, non-significant increase of 0.05 deaths per 100 000 production worker full-time equivalents (95% CI -0.14 to 0.25). When fatality rates in comparison groups that should not have been affected by the standard are incorporated into the analysis, there is still no significant change in the rate of machinery-related fatalities in manufacturing. There is no evidence that the lockout/tagout standard decreased fatality rates relative to other trends in occupational safety over the study period. A possible explanation is voluntary use of lockout/tagout by some employers before introduction of the standard and low compliance by other employers after.
Bulzacchelli, Maria T; Vernick, Jon S; Webster, Daniel W; Lees, Peter S J
2007-01-01
Objective To evaluate the impact of the United States' federal Occupational Safety and Health Administration's control of hazardous energy (lockout/tagout) standard on rates of machinery‐related fatal occupational injury. The standard, which took effect in 1990, requires employers in certain industries to establish an energy control program and sets minimum criteria for energy control procedures, training, inspections, and hardware. Design An interrupted time‐series design was used to determine the standard's effect on fatality rates. Machinery‐related fatalities, obtained from the National Traumatic Occupational Fatalities surveillance system for 1980 through 2001, were used as a proxy for lockout/tagout‐related fatalities. Linear regression was used to control for changes in demographic and economic factors. Results The average annual crude rate of machinery‐related fatalities in manufacturing changed little from 1980 to 1989, but declined by 4.59% per year from 1990 to 2001. However, when controlling for demographic and economic factors, the regression model estimate of the standard's effect is a small, non‐significant increase of 0.05 deaths per 100 000 production worker full‐time equivalents (95% CI −0.14 to 0.25). When fatality rates in comparison groups that should not have been affected by the standard are incorporated into the analysis, there is still no significant change in the rate of machinery‐related fatalities in manufacturing. Conclusions There is no evidence that the lockout/tagout standard decreased fatality rates relative to other trends in occupational safety over the study period. A possible explanation is voluntary use of lockout/tagout by some employers before introduction of the standard and low compliance by other employers after. PMID:17916891
What Is the Minimum Information Needed to Estimate Average Treatment Effects in Education RCTs?
ERIC Educational Resources Information Center
Schochet, Peter Z.
2014-01-01
Randomized controlled trials (RCTs) are considered the "gold standard" for evaluating an intervention's effectiveness. Recently, the federal government has placed increased emphasis on the use of opportunistic experiments. A key criterion for conducting opportunistic experiments, however, is that there is relatively easy access to data…
77 FR 31574 - Executive-Led Trade Mission to South Africa and Zambia
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-29
... storage and handling [cir] Precision farming technologies Transportation Equipment and Infrastructure [cir... with a growing middle class, particularly in urban areas. Its relatively open economy has averaged more...- economy standards, South Africa continues to lag far behind in its adoption of green building practices...
Adults with reading disabilities: converting a meta-analysis to practice.
Swanson, H Lee
2012-01-01
This article reviews the results of a meta-analysis of the experimental published literature that compares the academic, cognitive, and behavioral performance of adults with reading disabilities (RD) with average-achieving adult readers. The meta-analysis shows that deficits independent of the classification measures emerged for adults with RD on measures of vocabulary, math, spelling, and specific cognitive processes related to naming speed, phonological processing, and verbal memory. The results also showed that adults with high verbal IQs (scores > 100) but low word recognition standard scores (< 90) yielded greater deficits relative to their average-reading counterparts when compared to studies that included adults with RD with verbal IQ and reading scores in the same low range. Implications of the findings for assessment and intervention are discussed.
Evans, Travis C; Britton, Jennifer C
2018-09-01
Abnormal threat-related attention in anxiety disorders is most commonly assessed and modified using the dot-probe paradigm; however, poor psychometric properties of reaction-time measures may contribute to inconsistencies across studies. Typically, standard attention measures are derived using average reaction-times obtained in experimentally-defined conditions. However, current approaches based on experimentally-defined conditions are limited. In this study, the psychometric properties of a novel response-based computation approach to analyze dot-probe data are compared to standard measures of attention. 148 adults (19.19 ± 1.42 years, 84 women) completed a standardized dot-probe task including threatening and neutral faces. We generated both standard and response-based measures of attention bias, attentional orientation, and attentional disengagement. We compared overall internal consistency, number of trials necessary to reach internal consistency, test-retest reliability (n = 72), and criterion validity obtained using each approach. Compared to standard attention measures, response-based measures demonstrated uniformly high levels of internal consistency with relatively few trials and varying improvements in test-retest reliability. Additionally, response-based measures demonstrated specific evidence of anxiety-related associations above and beyond both standard attention measures and other confounds. Future studies are necessary to validate this approach in clinical samples. Response-based attention measures demonstrate superior psychometric properties compared to standard attention measures, which may improve the detection of anxiety-related associations and treatment-related changes in clinical samples. Copyright © 2018 Elsevier Ltd. All rights reserved.
Double resonance calibration of g factor standards: Carbon fibers as a high precision standard
NASA Astrophysics Data System (ADS)
Herb, Konstantin; Tschaggelar, Rene; Denninger, Gert; Jeschke, Gunnar
2018-04-01
The g factor of paramagnetic defects in commercial high-performance carbon fibers was determined by a double resonance experiment based on the Overhauser shift due to hyperfine-coupled protons. Our carbon fibers exhibit a single, narrow, and perfectly Lorentzian-shaped ESR line and a g factor slightly higher than g_free, with g = 2.002644 = g_free · (1 + 162 ppm) and a relative uncertainty of 15 ppm. This precisely known g factor and their inertness qualify them as a high-precision g factor standard for general purposes. The double resonance experiment used for calibration is applicable to other potential standards with a hyperfine interaction averaged by a process with a very short correlation time.
Relativity and the lead-acid battery.
Ahuja, Rajeev; Blomqvist, Andreas; Larsson, Peter; Pyykkö, Pekka; Zaleski-Ejgierd, Patryk
2011-01-07
The energies of the solid reactants in the lead-acid battery are calculated ab initio using two different basis sets at nonrelativistic, scalar-relativistic, and fully relativistic levels, and using several exchange-correlation potentials. The average calculated standard voltage is 2.13 V, compared with the experimental value of 2.11 V. All calculations agree in that 1.7-1.8 V of this standard voltage arise from relativistic effects, mainly from PbO2 but also from PbSO4.
2017-04-21
The S9 CRD/Publications and Presentations Section will route the request form to clinical investigations. 502 ISG/JAC (Ethics Review) and Public...information. 11. The Joint Ethics Regulation (JER), DoD 5500.07-R, Standards of Conduct, provides standards of ethical conduct for all DoD personnel and...a legal ethics review to address any potential conflicts related to DoD personnel participating in non-DoD sponsored conferences, professional
Bellomo, Guido; Bosyk, Gustavo M; Holik, Federico; Zozor, Steeve
2017-11-07
Based on the problem of lossless quantum data compression, we present an operational interpretation for the family of quantum Rényi entropies. To do so, we appeal to a very general quantum encoding scheme that satisfies a quantum version of the Kraft-McMillan inequality. In the standard situation, where one aims to minimize the usual average length of the quantum codewords, we recover the known results, namely that the von Neumann entropy of the source bounds the average length of the optimal codes. Otherwise, we show that by invoking an exponential average length, related to an exponential penalization over large codewords, the quantum Rényi entropies arise as the natural quantities relating the optimal encoding schemes with the source description, playing a role analogous to that of the von Neumann entropy.
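For reference, the classical quantities being generalized here are standard definitions (not taken from the paper itself): the Rényi entropy, its Shannon limit, and Campbell's exponential average code length, whose optimum is classically bounded by the Rényi entropy of order 1/(1+t):

```latex
% Rényi entropy of order \alpha (\alpha > 0, \alpha \neq 1),
% recovering the Shannon entropy as \alpha \to 1:
H_\alpha(p) = \frac{1}{1-\alpha} \log_2 \sum_i p_i^{\alpha},
\qquad
\lim_{\alpha \to 1} H_\alpha(p) = -\sum_i p_i \log_2 p_i .

% Campbell's exponential average code length with penalization t > 0
% over codeword lengths \ell_i:
L(t) = \frac{1}{t} \log_2 \sum_i p_i \, 2^{t \ell_i}.
```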
van Dijk, Christel E; Verheij, Robert A; Swinkels, Ilse C S; Rijken, Mieke; Schellevis, François G; Groenewegen, Peter P; de Bakker, Dinny H
2011-10-01
Disease management programs (DMP) aim at improving coordination and quality of care and reducing healthcare costs for specific chronic diseases. This paper investigates to what extent the total healthcare utilization of type 2 diabetes patients is actually related to diabetes, and the implications for diabetes management programs. Healthcare utilization of diabetes patients was analyzed using 2008 self-reported data (n=316) and data from electronic medical records (EMR) (n=9023), and divided according to whether or not the care was described in the Dutch type 2 diabetes multidisciplinary healthcare standard. On average, 4.3 different disciplines of healthcare providers were involved in the care for diabetes patients. Ninety-six percent contacted a GP-practice, 63% an ophthalmologist, 24% an internist, 32% a physiotherapist and 23% a dietician. Diabetes patients had on average 9.3 contacts with a GP-practice, of which 53% were included in the healthcare standard. Only a limited part of the total healthcare utilization of diabetes patients was included in the healthcare standard and therefore theoretically included in DMPs. Organizing the care for diabetes patients in a DMP might harm the coordination and quality of their overall healthcare. DMPs should be integrated in the overall organization of care.
Mehta, Amar J.; Kloog, Itai; Zanobetti, Antonella; Coull, Brent A.; Sparrow, David; Vokonas, Pantel; Schwartz, Joel
2014-01-01
Background The underlying mechanisms of the association between ambient temperature and cardiovascular morbidity and mortality are not well understood, particularly for daily temperature variability. We evaluated whether daily mean temperature and the standard deviation of temperature were associated with heart rate-corrected QT interval (QTc) duration, a marker of ventricular repolarization, in a prospective cohort of older men. Methods This longitudinal analysis included 487 older men participating in the VA Normative Aging Study with up to three visits between 2000–2008 (n = 743). We analyzed associations between QTc and moving averages (1–7, 14, 21, and 28 days) of the 24-hour mean and standard deviation of temperature as measured from a local weather monitor, and the 24-hour mean temperature estimated from a spatiotemporal prediction model, in time-varying linear mixed-effect regression. Effect modification by season, diabetes, coronary heart disease, obesity, and age was also evaluated. Results Higher mean temperature, as measured from the local monitor and estimated from the prediction model, was associated with longer QTc at moving averages of 21 and 28 days. Increased 24-hr standard deviation of temperature was associated with longer QTc at moving averages from 4 up to 28 days; a 1.9°C interquartile range increase in the 4-day moving average standard deviation of temperature was associated with a 2.8 msec (95% CI: 0.4, 5.2) longer QTc. Associations between the 24-hr standard deviation of temperature and QTc were stronger in colder months and in participants with diabetes and coronary heart disease. Conclusion/Significance In this sample of older men, elevated mean temperature was associated with longer QTc, and increased variability of temperature was associated with longer QTc, particularly during colder months and among individuals with diabetes and coronary heart disease.
These findings may offer insight into an important underlying mechanism of temperature-related cardiovascular morbidity and mortality in an older population. PMID:25238150
Finger, Robert P; Porz, Gabriele; Fleckenstein, Monika; Charbel Issa, Peter; Lechtenfeld, Werner; Brohlburg, Daniela; Scholl, Hendrik P N; Holz, Frank G
2010-04-01
The purpose of this study was to establish and evaluate a nationwide telephone counseling hotline for patients with retinal diseases in Germany, against the background of an increasing demand for information and counseling in the field of retina services as a result of current demographic trends. The telephone Retina Hotline was installed, advertised, and run for 1.5 years at the Department of Ophthalmology, University of Bonn, and was open to callers from the whole of Germany. The hotline was staffed by ophthalmologists. Calls were handled according to standard flow charts, and the counsel given adhered to a list of standardized answers as appropriate in the individual case. All calls were documented in an online database, which was subsequently analyzed and used for evaluation. A total of 1,384 calls were documented, an average of 7.6 calls per afternoon. The average length of calls was 8.5 minutes. The majority of callers were female patients (63%) who had age-related macular degeneration. Only 17% of callers were relatives. Most callers (59%) were >60 years of age. The majority of questions related to therapeutic options for dry or neovascular age-related macular degeneration as well as various forms of retinitis pigmentosa (45%). A service such as the Retina Hotline seems necessary and well justified against the documented need for information and support. However, given an adequate computer program and a standard catalog of answers or flow charts, it may not need to be staffed by ophthalmologists; well-trained nonmedical staff may be sufficient.
NASA Technical Reports Server (NTRS)
Meneghini, Robert; Kim, Hyokyung
2016-01-01
For an airborne or spaceborne radar, the precipitation-induced path attenuation can be estimated from measurements of the normalized surface cross section, σ0, in the presence and absence of precipitation. In one implementation, the mean rain-free estimate and its variability are found from a lookup table (LUT) derived from previously measured data. For the dual-frequency precipitation radar aboard the Global Precipitation Measurement satellite, the nominal table consists of the statistics of the rain-free σ0 over a 0.5 deg x 0.5 deg latitude-longitude grid using a three-month set of input data. However, a problem with the LUT is an insufficient number of samples in many cells. An alternative table is constructed by a stepwise procedure that begins with the statistics over a 0.25 deg x 0.25 deg grid. If the number of samples at a cell is too few, the area is expanded, cell by cell, choosing at each step the cell that minimizes the variance of the data. The question arises, however, as to whether the selected region corresponds to the smallest variance. To address this question, a second type of variable-averaging grid is constructed using all possible spatial configurations and computing the variance of the data within each region. Comparisons of the standard deviations for the fixed and variable-averaged grids are given as a function of incidence angle and surface type using a three-month set of data. The advantage of variable spatial averaging is that the average standard deviation can be reduced relative to the fixed grid while satisfying the minimum sample requirement.
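The stepwise cell-by-cell expansion described above can be sketched as a greedy loop (a hypothetical illustration under assumed data structures; the cell grid, sample values, and function names are not from the actual radar processing code):

```python
import statistics

def expand_region(samples_by_cell, start, neighbors, min_samples):
    """Greedy sketch of the stepwise LUT procedure: starting from one
    grid cell, repeatedly absorb the neighboring cell whose inclusion
    yields the smallest variance of the pooled samples, until the
    minimum sample count is reached."""
    region = {start}
    pooled = list(samples_by_cell[start])
    while len(pooled) < min_samples:
        candidates = {c for cell in region for c in neighbors(cell)} - region
        if not candidates:
            break  # nothing left to absorb
        best = min(
            candidates,
            key=lambda c: statistics.pvariance(pooled + list(samples_by_cell[c])),
        )
        region.add(best)
        pooled += list(samples_by_cell[best])
    return region, pooled

# Toy 2 x 2 grid of cells with made-up sigma-0 samples
samples_by_cell = {
    (0, 0): [1.0, 1.1],
    (0, 1): [1.0, 1.2, 0.9],
    (1, 0): [5.0, 5.2, 4.8],
    (1, 1): [1.05, 0.95],
}

def neighbors(cell):
    i, j = cell
    return [c for c in [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
            if c in samples_by_cell]

region, pooled = expand_region(samples_by_cell, (0, 0), neighbors, min_samples=5)
```

In this toy case the low-variance neighbor (0, 1) is absorbed in preference to the high-variance neighbor (1, 0), mirroring the variance-minimizing choice made at each step of the stepwise procedure.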
40 CFR 86.1865-12 - How to comply with the fleet average CO2 standards.
Code of Federal Regulations, 2010 CFR
2010-07-01
... of § 86.1801-12(j), CO2 fleet average exhaust emission standards apply to: (i) 2012 and later model... businesses meeting certain criteria may be exempted from the greenhouse gas emission standards in § 86.1818... standards applicable in a given model year are calculated separately for passenger automobiles and light...
Evaluation of the 235 U resonance parameters to fit the standard recommended values
Leal, Luiz; Noguere, Gilles; Paradela, Carlos; ...
2017-09-13
A great deal of effort has been dedicated to the revision of the standard values connected with the neutron interaction for some actinides. While standard data compilations have been available for decades, the nuclear data evaluations included in existing nuclear data libraries (ENDF, JEFF, JENDL, etc.) do not follow the standard recommended values. Indeed, the majority of evaluations for major actinides do not conform to the standards whatsoever. In particular, for the n + 235U interaction the only value in agreement with the standard is the thermal fission cross section. We performed a resonance re-evaluation of the n + 235U interaction in order to address the issues regarding standard values in the energy range from 10^-5 eV to 2250 eV. Recently, 235U fission cross-section measurements have been performed at the CERN Neutron Time-of-Flight facility, known as n_TOF, in the energy range from 0.7 eV to 10 keV. The data were normalized according to the recommended standard of the fission integral in the energy range 7.8 eV to 11 eV. As a result, the n_TOF averaged fission cross sections above 100 eV are in good agreement with the standard recommended values. The n_TOF data were included in the 235U resonance analysis that was performed with the code SAMMY. In addition to the average standard values related to the fission cross section, standard thermal values for fission, capture, and elastic cross sections were also included in the evaluation. Our paper presents the procedure used for re-evaluating the 235U resonance parameters, including the recommended standard values as well as new cross-section measurements.
Evaluation of the 235U resonance parameters to fit the standard recommended values
NASA Astrophysics Data System (ADS)
Leal, Luiz; Noguere, Gilles; Paradela, Carlos; Durán, Ignacio; Tassan-Got, Laurent; Danon, Yaron; Jandel, Marian
2017-09-01
A great deal of effort has been dedicated to the revision of the standard values connected with the neutron interaction for some actinides. While standard data compilations have been available for decades, the nuclear data evaluations included in existing nuclear data libraries (ENDF, JEFF, JENDL, etc.) do not follow the standard recommended values. Indeed, the majority of evaluations for major actinides do not conform to the standards at all. In particular, for the n + 235U interaction the only value in agreement with the standard is the thermal fission cross section. A resonance re-evaluation of the n + 235U interaction has been performed to address the issues regarding standard values in the energy range from 10⁻⁵ eV to 2250 eV. Recently, 235U fission cross-section measurements have been performed at the CERN Neutron Time-of-Flight facility (TOF), known as n_TOF, in the energy range from 0.7 eV to 10 keV. The data were normalized according to the recommended standard of the fission integral in the energy range 7.8 eV to 11 eV. As a result, the n_TOF averaged fission cross sections above 100 eV are in good agreement with the standard recommended values. The n_TOF data were included in the 235U resonance analysis that was performed with the code SAMMY. In addition to the average standard values related to the fission cross section, standard thermal values for fission, capture, and elastic cross sections were also included in the evaluation. This paper presents the procedure used for re-evaluating the 235U resonance parameters, including the recommended standard values as well as new cross-section measurements.
Bajpai, Jyoti; Gamnagatti, Shivanand; Kumar, Rakesh; Sreenivas, Vishnubhatla; Sharma, Mehar Chand; Khan, Shah Alam; Rastogi, Shishir; Malhotra, Arun; Safaya, Rajni; Bakhshi, Sameer
2011-04-01
Histological necrosis, the current standard for response evaluation in osteosarcoma, is only attainable after neoadjuvant chemotherapy. The aim was to establish the role of surrogate markers of response prediction and evaluation using MRI in the early phases of the disease. Thirty-one treatment-naïve osteosarcoma patients received three cycles of neoadjuvant chemotherapy followed by surgery during 2006-2008. All patients underwent baseline and post-chemotherapy conventional, diffusion-weighted, and dynamic contrast-enhanced MRI. Taking histological response (good response ≥90% necrosis) as the reference standard, various MRI parameters were compared against it. The tumor was considered ellipsoidal; its volume, average tumor plane, and the relative value of the latter (average tumor plane/body surface area) were calculated using the standard formula for an ellipse. Receiver operating characteristic curves were generated to assess the best threshold and predictability. After deriving thresholds for each parameter in univariable analysis, multivariable analysis was carried out. Both pre- and post-chemotherapy absolute and relative size parameters correlated well with necrosis. The apparent diffusion coefficient did not correlate with necrosis; however, on adjusting for volume, a significant correlation was found. Thus, we could derive a new parameter: diffusion per unit volume. In osteosarcoma, chemotherapy response can be predicted and evaluated by conventional and diffusion-weighted MRI early in the disease course, and it correlates well with necrosis. Further, the newly derived parameter, diffusion per unit volume, appears to be a sensitive substitute for response evaluation in osteosarcoma.
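The size parameters above rest on treating the tumor as an ellipsoid. A sketch of the two derived quantities, ellipsoid volume from three orthogonal diameters and "diffusion per unit volume" (ADC adjusted for volume), is given below; the diameters and ADC value are hypothetical, and the exact formula variant used in the study is assumed rather than quoted.

```python
import math

def ellipsoid_volume(d1, d2, d3):
    """Volume from three orthogonal diameters using the standard
    ellipsoid formula V = (pi / 6) * d1 * d2 * d3."""
    return math.pi / 6.0 * d1 * d2 * d3

def diffusion_per_unit_volume(adc, volume):
    """Derived response parameter: apparent diffusion coefficient
    normalized by tumor volume."""
    return adc / volume

volume_cm3 = ellipsoid_volume(8.0, 6.0, 5.0)         # diameters in cm, ~125.7 cm^3
dpv = diffusion_per_unit_volume(1.2e-3, volume_cm3)  # ADC in mm^2/s (hypothetical)
```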
40 CFR 1042.515 - Test procedures related to not-to-exceed standards.
Code of Federal Regulations, 2012 CFR
2012-07-01
.... (g) For engines equipped with emission controls that include discrete regeneration events, if a regeneration event occurs during the NTE test, the averaging period must be at least as long as the time between the events multiplied by the number of full regeneration events within the sampling period. This...
40 CFR 1042.515 - Test procedures related to not-to-exceed standards.
Code of Federal Regulations, 2011 CFR
2011-07-01
.... (g) For engines equipped with emission controls that include discrete regeneration events, if a regeneration event occurs during the NTE test, the averaging period must be at least as long as the time between the events multiplied by the number of full regeneration events within the sampling period. This...
40 CFR 1042.515 - Test procedures related to not-to-exceed standards.
Code of Federal Regulations, 2013 CFR
2013-07-01
.... (g) For engines equipped with emission controls that include discrete regeneration events, if a regeneration event occurs during the NTE test, the averaging period must be at least as long as the time between the events multiplied by the number of full regeneration events within the sampling period. This...
40 CFR 1042.515 - Test procedures related to not-to-exceed standards.
Code of Federal Regulations, 2014 CFR
2014-07-01
.... (g) For engines equipped with emission controls that include discrete regeneration events, if a regeneration event occurs during the NTE test, the averaging period must be at least as long as the time between the events multiplied by the number of full regeneration events within the sampling period. This...
40 CFR 86.1866-12 - CO2 fleet average credit programs.
Code of Federal Regulations, 2010 CFR
2010-07-01
... technologies designed to reduce air conditioning refrigerant leakage over the useful life of their passenger... implementing specific air conditioning system technologies designed to reduce air conditioning-related CO2... than 10% when compared to previous industry standard designs): 1.1 g/mi. (viii) Oil separator: 0.6 g/mi...
40 CFR 86.1866-12 - CO2 fleet average credit programs.
Code of Federal Regulations, 2011 CFR
2011-07-01
... technologies designed to reduce air conditioning refrigerant leakage over the useful life of their passenger... implementing specific air conditioning system technologies designed to reduce air conditioning-related CO2... than 10% when compared to previous industry standard designs): 1.1 g/mi. (viii) Oil separator: 0.6 g/mi...
40 CFR 1036.150 - Interim provisions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 17, 2013, § 1036.150 was amended by revising paragraphs (d), (g)(2), and (g)(3), effective Aug. 16... certify your entire U.S.-directed production volume within that averaging set to these standards. This...'s CO2 emissions relative to its 2012 baseline level and certify it to an FCL below the applicable...
Peak-flow characteristics of Wyoming streams
Miller, Kirk A.
2003-01-01
Peak-flow characteristics for unregulated streams in Wyoming are described in this report. Frequency relations for annual peak flows through water year 2000 at 364 streamflow-gaging stations in and near Wyoming were evaluated and revised or updated as needed. Analyses of historical floods, temporal trends, and generalized skew were included in the evaluation. Physical and climatic basin characteristics were determined for each gaging station using a geographic information system. Gaging stations with similar peak-flow and basin characteristics were grouped into six hydrologic regions. Regional statistical relations between peak-flow and basin characteristics were explored using multiple-regression techniques. Generalized least squares regression equations for estimating magnitudes of annual peak flows with selected recurrence intervals from 1.5 to 500 years were developed for each region. Average standard errors of estimate range from 34 to 131 percent. Average standard errors of prediction range from 35 to 135 percent. Several statistics for evaluating and comparing the errors in these estimates are described. Limitations of the equations are described. Methods for applying the regional equations for various circumstances are listed and examples are given.
Brix, G; Reinl, M; Brinker, G
2001-07-01
The purpose of the present study was to evaluate a large number of exposure-time courses measured during routine clinical patient examinations against the current IEC standard and the draft version of the revised standard and, moreover, to investigate whether there is a correlation between the subjective heat perception of the patients during the MR examination and the intensity of RF power deposition. To this end, the radiofrequency exposure of 591 patients undergoing MR examinations performed on 1.5-Tesla MR systems was monitored in five clinics and evaluated in accordance with both IEC standards. For each of the 7902 sequences applied, whole-body and partial-body SARs were estimated on the basis of a simple patient model. Following the examinations, 149 patients were willing to provide information in a questionnaire regarding their body weight and their subjective heat perception during the examination. Although the patient masses entered into the MR system were in some cases too high, reliable masses could be estimated by the SAR monitor. In relation to our data, the revision of the IEC standard results in a tightening of the restrictions, but still more than 96% of the examinations did not exceed the SAR limits recommended for the normal operating mode. For the exposure conditions examined, no statistically significant correlation was found between the subjective heat perception of the patients and the intensity of power deposition. Taking advantage of the possibility to compute running SAR averages, MR sequences whose SAR levels exceed the defined IEC limits can be employed in clinical practice, provided the acquisition time is short in relation to the averaging period and energy deposition has been low prior to the applied high-power sequence.
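The closing remark relies on the fact that the IEC SAR limits apply to time-averaged rather than instantaneous power deposition, so a short high-power sequence can be admissible if deposition beforehand was low. A minimal sketch of such a running-average check; the 2 W/kg limit and 360 s window below are illustrative assumptions, not quotations from the standard.

```python
from collections import deque

def running_sar_ok(samples, window_s, limit_w_per_kg):
    """Check that the running average of per-second SAR samples (W/kg)
    never exceeds the limit over any full sliding window."""
    window = deque()
    total = 0.0
    for s in samples:
        window.append(s)
        total += s
        if len(window) > window_s:
            total -= window.popleft()
        if len(window) == window_s and total / window_s > limit_w_per_kg:
            return False
    return True

# A short high-power burst after low deposition stays within the limit:
trace = [0.5] * 300 + [8.0] * 60   # 5 min low, then 1 min high (hypothetical)
ok = running_sar_ok(trace, window_s=360, limit_w_per_kg=2.0)
```

Here the window average is (300 × 0.5 + 60 × 8.0) / 360 = 1.75 W/kg, so the burst passes; a sustained 8 W/kg trace would not.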
DOT National Transportation Integrated Search
1997-04-18
Section 32902(a) of title 49, United States Code, requires the Secretary of Transportation to prescribe by regulation, at least 18 months in advance of each model year, average fuel economy standards (known as "Corporate Average Fuel Economy" or "CAF...
Shen, Jay J; Xu, Yu; Staples, Shelley; Bolstad, Anne L
2014-07-01
To assess the interpersonal skills of internationally educated nurses (IEN) while interacting with standardized patients. Participants included 52 IEN at two community hospitals in the southwestern region of the USA. Standardized patients were used to create patient-nurse encounters. Seventeen items in four domains ("skills in interviewing and collecting information"; "skills in counseling and delivering information"; "rapport"; and "personal manner") of an Interpersonal Skills (IPS) instrument were measured on a Likert scale of 1-4, with 4 indicating the best performance. The average composite score per domain and the scores of the 17 items were compared across the domains. On 10 of the 17 items, the nurses received scores under 3. Counseling (average score 2.10) and closure (average score 2.44) in domain 2, small talk (average score 2.06) in domain 3, and physical exam (average score 2.21) in domain 4 were below 2.5. The average composite score of domain 1 was 3.54, significantly higher than those of domains 2-4 (2.77, 2.81, and 2.71, respectively). Age was moderately related to the average score per domain, with every 10-year increase in age resulting in a 0.1 increase in the average score. Sex and country of origin showed mixed results. The interpersonal skills of IEN in three of the four domains need improvement. Well-designed educational programs may facilitate this improvement, especially in the areas of small talk, counseling, closure, and physical exam. Future research should examine relationships between IPS and demographic factors. © 2013 The Authors. Japan Journal of Nursing Science © 2013 Japan Academy of Nursing Science.
Are greenhouse gas emissions and cognitive skills related? Cross-country evidence.
Omanbayev, Bekhzod; Salahodjaev, Raufhon; Lynn, Richard
2018-01-01
Are greenhouse gas emissions (GHG) and cognitive skills (CS) related? We attempt to answer this question by exploring this relationship, using cross-country data for 150 countries, for the period 1997-2012. After controlling for the level of economic development, quality of political regimes, population size and a number of other controls, we document that CS robustly predict GHG. In particular, when CS at a national level increase by one standard deviation, the average annual rate of air pollution changes by nearly 1.7% (slightly less than one half of a standard deviation). This significance holds for a number of robustness checks. Copyright © 2017 Elsevier Inc. All rights reserved.
National mortality rates: the impact of inequality?
Wilkinson, R G
1992-08-01
Although health is closely associated with income differences within each country there is, at best, only a weak link between national mortality rates and average income among the developed countries. On the other hand, there is evidence of a strong relationship between national mortality rates and the scale of income differences within each society. These three elements are coherent if health is affected less by changes in absolute material standards across affluent populations than it is by relative income or the scale of income differences and the resulting sense of disadvantage within each society. Rather than socioeconomic mortality differentials representing a distribution around given national average mortality rates, it is likely that the degree of income inequality indicates the burden of relative deprivation on national mortality rates.
Li, Fenfang; Wilkens, Lynne R; Novotny, Rachel; Fialkowski, Marie K; Paulino, Yvette C; Nelson, Randall; Bersamin, Andrea; Martin, Ursula; Deenik, Jonathan; Boushey, Carol J
2016-05-01
Anthropometric standardization is essential to obtain reliable and comparable data from different geographical regions. The purpose of this study is to describe anthropometric standardization procedures and findings from the Children's Healthy Living (CHL) Program, a study on childhood obesity in 11 jurisdictions in the US-Affiliated Pacific Region, including Alaska and Hawai'i. Zerfas criteria were used to compare the measurement components (height, waist, and weight) between each trainee and a single expert anthropometrist. In addition, intra- and inter-rater technical error of measurement (TEM), coefficient of reliability, and average bias relative to the expert were computed. From September 2012 to December 2014, 79 trainees participated in at least 1 of 29 standardization sessions. A total of 49 trainees passed either standard or alternate Zerfas criteria and were qualified to assess all three measurements in the field. Standard Zerfas criteria were difficult to achieve: only 2 of 79 trainees passed at their first training session. Intra-rater TEM estimates for the 49 trainees compared well with the expert anthropometrist. Average biases were within acceptable limits of deviation from the expert. Coefficient of reliability was above 99% for all three anthropometric components. Standardization based on comparison with a single expert ensured the comparability of measurements from the 49 trainees who passed the criteria. The anthropometric standardization process and protocols followed by CHL resulted in 49 standardized field anthropometrists and have helped build capacity in the health workforce in the Pacific Region. Am. J. Hum. Biol. 28:364-371, 2016. © 2015 Wiley Periodicals, Inc.
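The reliability statistics named here have standard closed forms: for duplicate measurements, the technical error of measurement is TEM = sqrt(sum(d_i^2) / (2n)), and the coefficient of reliability is R = 1 - TEM^2 / SD^2, where SD is the between-subject standard deviation. A sketch on hypothetical duplicate height measurements (the CHL acceptance thresholds themselves are not reproduced here).

```python
import math

def tem(pairs):
    """Intra-rater technical error of measurement for duplicates:
    TEM = sqrt(sum(d_i ** 2) / (2 * n)), d_i the within-pair difference."""
    n = len(pairs)
    return math.sqrt(sum((a - b) ** 2 for a, b in pairs) / (2 * n))

def reliability(tem_value, all_values):
    """Coefficient of reliability R = 1 - TEM^2 / SD^2, with SD the
    between-subject standard deviation of all measurements."""
    m = sum(all_values) / len(all_values)
    var = sum((v - m) ** 2 for v in all_values) / (len(all_values) - 1)
    return 1.0 - tem_value ** 2 / var

# Hypothetical duplicate height measurements (cm) on five children
pairs = [(101.2, 101.4), (95.0, 95.1), (110.3, 110.0),
         (99.8, 99.9), (105.6, 105.5)]
t = tem(pairs)
r = reliability(t, [v for pair in pairs for v in pair])
```

A coefficient of reliability near 1 (above 0.99, as reported in the abstract) means the measurement error is negligible relative to real between-subject variation.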
49 CFR 525.6 - Requirements for petition.
Code of Federal Regulations, 2014 CFR
2014-10-01
... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION EXEMPTIONS FROM AVERAGE FUEL ECONOMY STANDARDS § 525.6 Requirements... arguments of the petitioner supporting the exemption and alternative average fuel economy standard requested...
49 CFR 531.5 - Fuel economy standards.
Code of Federal Regulations, 2011 CFR
2011-10-01
... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION PASSENGER AUTOMOBILE AVERAGE FUEL ECONOMY STANDARDS § 531.5 Fuel... automobiles shall comply with the average fuel economy standards in Table I, expressed in miles per gallon, in... passenger automobile fleet shall comply with the fuel economy level calculated for that model year according...
49 CFR 531.5 - Fuel economy standards.
Code of Federal Regulations, 2012 CFR
2012-10-01
... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION PASSENGER AUTOMOBILE AVERAGE FUEL ECONOMY STANDARDS § 531.5 Fuel... automobiles shall comply with the average fuel economy standards in Table I, expressed in miles per gallon, in... passenger automobile fleet shall comply with the fuel economy level calculated for that model year according...
Deering, Sean; Liu, Lin; Zamora, Tania; Hamilton, Joanne; Stepnowsky, Carl
2017-12-15
Obstructive sleep apnea (OSA) is a widespread condition that adversely affects physical health and cognitive functioning. The prevailing treatment for OSA is continuous positive airway pressure (CPAP), but therapeutic benefits are dependent on consistent use. Our goal was to investigate the relationship between CPAP adherence and measures of sustained attention in patients with OSA. Our hypothesis was that the Psychomotor Vigilance Task (PVT) would be sensitive to attention-related improvements resulting from CPAP use. This study was a secondary analysis of a larger clinical trial. Treatment adherence was determined from CPAP use data. Validated sleep-related questionnaires and a sustained-attention and alertness test (PVT) were administered to participants at baseline and at the 6-month time point. Over a 6-month time period, the average CPAP adherence was 3.32 h/night (standard deviation [SD] = 2.53), average improvement in PVT minor lapses was -4.77 (SD = 13.2), and average improvement in PVT reaction time was -73.1 milliseconds (SD = 211). Multiple linear regression analysis showed that higher CPAP adherence was significantly associated with a greater reduction in minor lapses in attention after 6 months of continuous treatment with CPAP therapy (β = -0.72, standard error = 0.34, P = .037). The results of this study showed that higher levels of CPAP adherence were associated with significant improvements in vigilance. Because the PVT is a performance-based measure that is not influenced by prior learning and is not subjective, it may be an important supplement to patient self-reported assessments. Name: Effect of Self-Management on Improving Sleep Apnea Outcomes, URL: https://clinicaltrials.gov/ct2/show/NCT00310310, Identifier: NCT00310310. © 2017 American Academy of Sleep Medicine
Cost-effectiveness of the Federal stream-gaging program in Virginia
Carpenter, D.H.
1985-01-01
Data uses and funding sources were identified for the 77 continuous stream gages currently being operated in Virginia by the U.S. Geological Survey with a budget of $446,000. Two stream gages were identified as not being used sufficiently to warrant continuing their operation. Operation of these stations should be considered for discontinuation. Data collected at two other stations were identified as having uses primarily related to short-term studies; these stations should also be considered for discontinuation at the end of the data collection phases of the studies. The remaining 73 stations should be kept in the program for the foreseeable future. The current policy for operation of the 77-station program requires a budget of $446,000/yr. The average standard error of estimation of streamflow records is 10.1%. It was shown that this overall level of accuracy at the 77 sites could be maintained with a budget of $430,500 if resources were redistributed among the gages. A minimum budget of $428,500 is required to operate the 77-gage program; a smaller budget would not permit proper service and maintenance of the gages and recorders. At the minimum budget, with optimized operation, the average standard error would be 10.4%. The maximum budget analyzed was $650,000, which resulted in an average standard error of 5.5%. The study indicates that a major component of error is caused by lost or missing data. If perfect equipment were available, the standard error for the current program and budget could be reduced to 7.6%. This also can be interpreted to mean that the streamflow data have a standard error of this magnitude during times when the equipment is operating properly. (Author's abstract)
49 CFR 525.8 - Processing of petitions.
Code of Federal Regulations, 2014 CFR
2014-10-01
... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION EXEMPTIONS FROM AVERAGE FUEL ECONOMY STANDARDS § 525.8 Processing of... establishment of an alternative average fuel economy standard, or the proposed denial of the petition, specifies... fuel economy standard or the denial of the petition, and the reasons for the decision. (Sec. 301, Pub...
40 CFR 63.5710 - How do I demonstrate compliance using emissions averaging?
Code of Federal Regulations, 2010 CFR
2010-07-01
... (CONTINUED) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES National Emission Standards for Hazardous Air Pollutants for Boat Manufacturing Standards for Open... section to compute the weighted-average MACT model point value for each open molding resin and gel coat...
NASA Astrophysics Data System (ADS)
Zou, Hai-Long; Yu, Zu-Guo; Anh, Vo; Ma, Yuan-Lin
2018-05-01
In recent years, researchers have proposed several methods to transform time series (such as those of fractional Brownian motion) into complex networks. In this paper, we construct horizontal visibility networks (HVNs) based on α-stable Lévy motion. We aim to study how the multifractal and Laplacian spectra of the transformed networks depend on the parameters of the α-stable Lévy motion. First, we employ the sandbox algorithm to compute the mass exponents and multifractal spectrum to investigate the multifractality of these HVNs. Then we perform least squares fits to find possible relations of the average fractal dimension, the average information dimension, and the average correlation dimension to these parameters, using several methods of model selection. We also investigate possible dependence relations on these parameters of the eigenvalues and energy calculated from the Laplacian and normalized Laplacian operators of the constructed HVNs. All of these constructions and estimates will help us to evaluate the validity and usefulness of the mappings between time series and networks, especially between time series of α-stable Lévy motions and HVNs.
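The horizontal visibility construction that underlies these networks links two time indices whenever every value strictly between them is lower than both endpoints. A minimal sketch; any numeric series can be used as input, so a simulated Lévy-motion sample path would simply replace the toy series below.

```python
def horizontal_visibility_edges(series):
    """Horizontal visibility graph of a time series: indices i < j are
    linked iff x_k < min(x_i, x_j) for every k strictly between them."""
    edges = []
    n = len(series)
    for i in range(n):
        inter_max = float("-inf")      # largest value seen between i and j
        for j in range(i + 1, n):
            if inter_max < min(series[i], series[j]):
                edges.append((i, j))
            inter_max = max(inter_max, series[j])
            if inter_max >= series[i]:
                break                  # i cannot see past a taller value
    return edges

edges = horizontal_visibility_edges([3, 1, 2, 4])
```

For the series [3, 1, 2, 4] this yields the adjacent links plus (0, 2) and (0, 3), since the intermediate values 1 and 2 lie below both endpoints in each case.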
Selamat, Rusidah; Zain, Fuziah; Raib, Junidah; Zakaria, Rosini; Marzuki, Mohd Shaffari; Ibrahim, Taziah Fatimah
2011-12-01
To study the validity of the visual clinical assessment of weight relative to length and length relative to age, as compared to the World Health Organization (WHO) 2006 standard and the National Center for Health Statistics (NCHS) 1977 reference, in assessing the physical growth of children younger than 1 year. A prospective cohort study was carried out among 684 infants attending government health clinics in 2 states in Malaysia. Body weight, length, and clinical assessment were measured on the same day for 9 visits, scheduled every month until 6 months of age and every 2 months until 12 months of age. All three z-scores, weight for age (WAZ), length for age (HAZ), and weight for length (WHZ), were calculated using the WHO Anthro for Personal Computers software. The average sensitivity and specificity of the visual clinical assessment for the detection of thinness were higher using the WHO 2006 standard than using the NCHS 1977 reference. However, the overall sensitivity of the visual clinical assessment for the detection of thin and lean children was lower from 1 month of age until a year, as compared with both the WHO 2006 standard and the NCHS 1977 reference. The positive predictive value (PPV) of the visual clinical assessment versus the WHO 2006 standard was almost double the PPV of the visual clinical assessment versus the NCHS 1977 reference. The overall average sensitivity, specificity, PPV, and negative predictive value for the detection of stunting were higher for the visual clinical assessment versus the WHO 2006 standard than versus the NCHS 1977 reference. The sensitivity and specificity of the visual clinical assessment for the detection of wasting and stunting among infants are better for the WHO 2006 standard than the NCHS 1977 reference.
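The validity measures quoted (sensitivity, specificity, PPV, and negative predictive value) all follow from a 2x2 comparison of the clinical assessment against the reference classification. A sketch with hypothetical counts, not the study's actual tallies.

```python
def screening_stats(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from a 2x2 table comparing
    visual clinical assessment against a growth-reference classification."""
    return {
        "sensitivity": tp / (tp + fn),   # detected among truly affected
        "specificity": tn / (tn + fp),   # cleared among truly unaffected
        "ppv": tp / (tp + fp),           # truly affected among positives
        "npv": tn / (tn + fn),           # truly unaffected among negatives
    }

# Hypothetical counts for thinness: assessment vs the WHO 2006 standard
stats = screening_stats(tp=18, fp=9, fn=22, tn=635)
```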
The pathway to RCTs: how many roads are there? Examining the homogeneity of RCT justification.
Chow, Jeffrey Tin Yu; Lam, Kevin; Naeem, Abdul; Akanda, Zarique Z; Si, Francie Fengqin; Hodge, William
2017-02-02
Randomized controlled trials (RCTs) form the foundational background of modern medical practice. They are considered the highest quality of evidence, and their results help inform decisions concerning drug development and use, preventive therapies, and screening programs. However, the inputs that justify conducting an RCT have not been studied. We reviewed the MEDLINE and EMBASE databases across six specialties (Ophthalmology, Otorhinolaryngology (ENT), General Surgery, Psychiatry, Obstetrics-Gynecology (OB-GYN), and Internal Medicine) and randomly chose 25 RCTs from each specialty, except for Otorhinolaryngology (20 studies) and Internal Medicine (28 studies). For each RCT, we recorded information relating to the justification for conducting RCTs, such as the average study size cited, the number of studies cited, and the types of studies cited. The justification varied widely both within and between specialties. For Ophthalmology and OB-GYN, the average study sizes cited were around 1100 patients, whereas they were around 500 patients for Psychiatry and General Surgery. Between specialties, the average number of studies cited ranged from around 4.5 for ENT to around 10 for Ophthalmology, but the standard deviations were large, indicating even more discrepancy within each specialty. When standardizing by the sample size of the RCT, some of the discrepancies between and within specialties can be explained, but not all. On average, Ophthalmology papers cited review articles the most (2.96 studies per RCT), compared to fewer than 1.5 studies per RCT for all other specialties. The justifications for RCTs vary widely both within and between specialties, and the justification for conducting RCTs is not standardized.
Monakhova, Yulia B; Diehl, Bernd W K; Do, Tung X; Schulze, Margit; Witzleben, Steffen
2018-02-05
Apart from the characterization of impurities, the full characterization of heparin and low-molecular-weight heparin (LMWH) also requires the determination of average molecular weight, which is closely related to the pharmaceutical properties of anticoagulant drugs. To determine the average molecular weight of these animal-derived polymer products, partial least squares (PLS) regression was used to model diffusion-ordered spectroscopy (DOSY) NMR data of a representative set of heparin (n=32) and LMWH (n=30) samples. The same sets of samples were measured by gel permeation chromatography (GPC) to obtain reference data. Applying PLS to the data led to calibration models with root mean square errors of prediction of 498 Da and 179 Da for heparin and LMWH, respectively. The average coefficients of variation (CVs) did not exceed 2.1% excluding sample preparation (by successively measuring one solution, n=5) and 2.5% including sample preparation (by preparing and analyzing separate samples, n=5). An advantage of the method is that the sample can be used for the molecular weight determination after standard 1D NMR characterization without further manipulation. The accuracy of the multivariate models is better than previous results for other matrices employing internal standards. Therefore, the DOSY experiment is recommended for the calculation of the molecular weight of heparin products as a complementary measurement to standard 1D NMR quality control. The method can be easily transferred to other matrices as well. Copyright © 2017 Elsevier B.V. All rights reserved.
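PLS compresses the spectral matrix into a few latent components that covary with the response, then regresses through them. The sketch below is a minimal single-response PLS1 (NIPALS) implementation fitted to synthetic data standing in for DOSY spectra and GPC reference molecular weights; it is not the authors' model, software, or data.

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """Minimal PLS1 (NIPALS): returns regression coefficients b so that
    predictions are (X - X.mean(axis=0)) @ b + y.mean()."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc                  # weight vector: covariance direction
        w = w / np.linalg.norm(w)
        t = Xc @ w                     # component scores
        tt = float(t @ t)
        p = Xc.T @ t / tt              # X loadings
        q = float(yc @ t) / tt         # y loading
        Xc = Xc - np.outer(t, p)       # deflate X
        yc = yc - q * t                # deflate y
        W.append(w)
        P.append(p)
        Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    return W @ np.linalg.solve(P.T @ W, Q)

# Synthetic stand-in: 30 "samples" x 50 "spectral points" (hypothetical)
rng = np.random.default_rng(1)
X = rng.normal(size=(30, 50))
y = 3.0 * X[:, 0] + X[:, 1] + rng.normal(scale=0.05, size=30)

b = pls1_fit(X, y, n_components=5)
pred = (X - X.mean(axis=0)) @ b + y.mean()
rmsep = float(np.sqrt(np.mean((pred - y) ** 2)))
```

Here `rmsep` is computed on the training set (strictly a calibration error); the Da figures in the abstract come from prediction on held-out reference measurements.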
Barboni, Mirella Telles Salgueiro; Szepessy, Zsuzsanna; Ventura, Dora Fix; Németh, János
2018-04-01
To establish fluctuation limits, it was considered that not only the overall macular sensitivity but also the fluctuations of individual test points in the macula might have clinical value. Three repeated microperimetry measurements were performed using the Standard Expert test of the Macular Integrity Assessment (MAIA) device in healthy subjects (N = 12, age = 23.8 ± 1.5 years) and in patients with age-related macular degeneration (AMD) (N = 11, age = 68.5 ± 7.4 years). A total of 37 macular points arranged in four concentric rings and four quadrants were analyzed individually and in groups. The data show low fluctuation of the macular sensitivity of individual test points in healthy subjects (average = 1.38 ± 0.28 dB) and AMD patients (average = 2.12 ± 0.60 dB). Lower-sensitivity points are related more to higher fluctuation than to the distance from the central point. Fixation stability showed no effect on the sensitivity fluctuation. The 95th percentile of the standard deviations of healthy subjects was, on average, 2.7 dB, ranging from 1.2 to 4 dB depending on the point tested. Point analysis and regional analysis might be considered prior to evaluating macular sensitivity fluctuation in order to distinguish between normal variation and a clinical change. Statistical methods were used to compare repeated microperimetry measurements and to establish fluctuation limits of the macular sensitivity. This analysis could add information regarding the integrity of different macular areas and provide new insights into fixation points prior to biofeedback fixation training.
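The fluctuation limit reported here, the 95th percentile of per-point standard deviations across repeated exams, is straightforward to compute. A sketch on synthetic values (3 repeats over 37 points, all numbers hypothetical).

```python
import numpy as np

# Hypothetical data: 3 repeated exams x 37 macular test points (dB)
rng = np.random.default_rng(2)
base = rng.uniform(24.0, 30.0, size=37)             # per-point mean sensitivity
exams = base + rng.normal(scale=1.0, size=(3, 37))  # test-retest noise

point_sd = exams.std(axis=0, ddof=1)      # fluctuation of each individual point
mean_fluct = float(point_sd.mean())       # cf. 1.38 dB average in healthy eyes
limit_95 = float(np.percentile(point_sd, 95))  # upper limit of normal fluctuation
```

A follow-up change at a point exceeding `limit_95` would then be read as a candidate clinical change rather than normal test-retest variation.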
Peak-flow frequency relations and evaluation of the peak-flow gaging network in Nebraska
Soenksen, Philip J.; Miller, Lisa D.; Sharpe, Jennifer B.; Watton, Jason R.
1999-01-01
Estimates of peak-flow magnitude and frequency are required for the efficient design of structures that convey flood flows or occupy floodways, such as bridges, culverts, and roads. The U.S. Geological Survey, in cooperation with the Nebraska Department of Roads, conducted a study to update peak-flow frequency analyses for selected streamflow-gaging stations, develop a new set of peak-flow frequency relations for ungaged streams, and evaluate the peak-flow gaging-station network for Nebraska. Data from stations located in or within about 50 miles of Nebraska were analyzed using guidelines of the Interagency Advisory Committee on Water Data in Bulletin 17B. New generalized skew relations were developed for use in frequency analyses of unregulated streams. Thirty-three drainage-basin characteristics related to morphology, soils, and precipitation were quantified using a geographic information system, related computer programs, and digital spatial data. For unregulated streams, eight sets of regional regression equations relating drainage-basin to peak-flow characteristics were developed for seven regions of the state using a generalized least squares procedure. Two sets of regional peak-flow frequency equations were developed for basins with average soil permeability greater than 4 inches per hour, and six sets of equations were developed for specific geographic areas, usually based on drainage-basin boundaries. Standard errors of estimate for the 100-year frequency equations (1-percent probability) ranged from 12.1 to 63.8 percent. For regulated reaches of nine streams, graphs of peak flow for standard frequencies and distance upstream of the mouth were estimated. The regional networks of streamflow-gaging stations on unregulated streams were analyzed to evaluate how additional data might affect the average sampling errors of the newly developed peak-flow equations for the 100-year frequency occurrence.
Results indicated that data from new stations, rather than more data from existing stations, probably would produce the greatest reduction in average sampling errors of the equations.
Cost effectiveness of the U.S. Geological Survey's stream-gaging program in Illinois
Mades, D.M.; Oberg, K.A.
1984-01-01
Data uses and funding sources were identified for 138 continuous-record discharge-gaging stations currently (1983) operated as part of the stream-gaging program in Illinois. Streamflow data from five of those stations are used only for regional hydrology studies. Most streamflow data are used for defining regional hydrology, defining rainfall-runoff relations, flood forecasting, regulating navigation systems, and water-quality sampling. Based on the evaluations of data use and of alternative methods for determining streamflow in place of stream gaging, no stations in the 1983 stream-gaging program should be deactivated. The current budget (in 1983 dollars) for operating the 138-station program is $768,000 per year. The average standard error of instantaneous discharge for the current practice for visiting the gaging stations is 36.5 percent. Missing stage record accounts for one-third of the 36.5 percent average standard error. (USGS)
The study of trace metal absorption using stable isotopes and mass spectrometry
NASA Astrophysics Data System (ADS)
Fennessey, P. V.; Lloyd-Kindstrand, L.; Hambidge, K. M.
1991-12-01
The absorption and excretion of zinc stable isotopes have been followed in more than 120 human subjects. The isotope enrichment determinations were made using a standard VG 7070E HF mass spectrometer. A fast atom bombardment (FAB) gun was used to form the ions from a dry residue on a pure silver probe tip. Isotope ratio measurements were found to have a precision of better than 2% (relative standard deviation) and required a sample size of 1-5 μg. The average true absorption of zinc was found to be 73 ± 12% (2σ) when the metal was taken in a fasting state. This absorption figure was corrected for tracer that had been absorbed and secreted into the gastrointestinal (GI) tract over the time course of the study. The average time for a majority of the stable isotope tracer to pass through the GI tract was 4.7 ± 1.9 (2σ) days.
Bekiroglu, Somer; Myrberg, Olle; Ostman, Kristina; Ek, Marianne; Arvidsson, Torbjörn; Rundlöf, Torgny; Hakkarainen, Birgit
2008-08-05
A 1H-nuclear magnetic resonance (NMR) spectroscopy method for the quantitative determination of benzethonium chloride (BTC) as a constituent of grapefruit seed extract was developed. The method was validated, assessing its specificity, linearity, range, and precision, as well as accuracy, limit of quantification, and robustness. The method includes quantification using an internal reference standard, 1,3,5-trimethoxybenzene, and is regarded as simple, rapid, and easy to implement. A commercial grapefruit seed extract was studied, and the experiments were performed on spectrometers operating at two different fields, 300 and 600 MHz proton frequencies, the former with a broadband (BB) probe and the latter equipped with both a BB probe and a CryoProbe. The average concentration for the product sample was 78.0, 77.8, and 78.4 mg/ml using the 300 MHz BB probe, the 600 MHz BB probe, and the CryoProbe, respectively. The standard deviations and relative standard deviations (R.S.D., in parentheses) for the average concentrations were 0.2 (0.3%), 0.3 (0.4%), and 0.3 mg/ml (0.4%), respectively.
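Internal-standard qNMR rests on the proportionality between a signal's integral and the number of protons contributing to it. A sketch of the general quantitation relation follows; the integrals, proton counts, and signal assignments are hypothetical assumptions (only the molar masses are real values):

```python
# Molar masses (g/mol); these two values are real.
M_BTC = 448.08   # benzethonium chloride, C27H42ClNO2
M_TMB = 168.19   # 1,3,5-trimethoxybenzene (internal standard)

def qnmr_mass(mass_std_mg, integral_analyte, n_h_analyte,
              integral_std, n_h_std, mw_analyte, mw_std):
    """Mass of analyte from the integral ratio against an internal standard:
    m_a = m_s * (I_a / I_s) * (N_s / N_a) * (M_a / M_s)."""
    return (mass_std_mg * (integral_analyte / integral_std)
            * (n_h_std / n_h_analyte) * (mw_analyte / mw_std))

# Assumed setup: 10 mg of internal standard, an analyte signal integrating
# to 1.75 for 2 protons, and the standard's aromatic signal normalized to
# 1.00 for 3 protons. All of these numbers are illustrative.
mass_btc_mg = qnmr_mass(10.0, 1.75, 2, 1.00, 3, M_BTC, M_TMB)
print(round(mass_btc_mg, 1))
```

Dividing the computed analyte mass by the sample volume then gives a concentration directly comparable to the mg/ml figures reported in the abstract.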
Design and preliminary assessment of Vanderbilt hand exoskeleton.
Gasser, Benjamin W; Bennett, Daniel A; Durrough, Christina M; Goldfarb, Michael
2017-07-01
This paper presents the design of a hand exoskeleton intended to enable or facilitate bimanual activities of daily living (ADLs) for individuals with chronic upper extremity hemiparesis resulting from stroke. The paper describes the design of the battery-powered, self-contained exoskeleton and presents the results of initial testing with a single subject with hemiparesis from stroke. Specifically, an experiment was conducted requiring the subject to repeatedly remove the lid from a water bottle both with and without the hand exoskeleton. The time required to remove the lid was considerably lower when using the exoskeleton. Specifically, the average time required to grasp the bottle with the paretic hand without the exoskeleton was 25.9 s, with a standard deviation of 33.5 s, while the corresponding average time with the exoskeleton was 5.1 s, with a standard deviation of 1.9 s. Thus, the task time involving the paretic hand was reduced by a factor of five, while the standard deviation was reduced by a factor of 16.
Bankfull characteristics of Ohio streams and their relation to peak streamflows
Sherwood, James M.; Huitger, Carrie A.
2005-01-01
Regional curves, simple-regression equations, and multiple-regression equations were developed to estimate bankfull width, bankfull mean depth, bankfull cross-sectional area, and bankfull discharge of rural, unregulated streams in Ohio. The methods are based on geomorphic, basin, and flood-frequency data collected at 50 study sites on unregulated natural alluvial streams in Ohio, of which 40 sites are near streamflow-gaging stations. The regional curves and simple-regression equations relate the bankfull characteristics to drainage area. The multiple-regression equations relate the bankfull characteristics to drainage area, main-channel slope, main-channel elevation index, median bed-material particle size, bankfull cross-sectional area, and local-channel slope. Average standard errors of prediction for bankfull width equations range from 20.6 to 24.8 percent; for bankfull mean depth, 18.8 to 20.6 percent; for bankfull cross-sectional area, 25.4 to 30.6 percent; and for bankfull discharge, 27.0 to 78.7 percent. The simple-regression (drainage-area only) equations have the highest average standard errors of prediction. The multiple-regression equations in which the explanatory variables included drainage area, main-channel slope, main-channel elevation index, median bed-material particle size, bankfull cross-sectional area, and local-channel slope have the lowest average standard errors of prediction. Field surveys were done at each of the 50 study sites to collect the geomorphic data. Bankfull indicators were identified and evaluated, cross-section and longitudinal profiles were surveyed, and bed- and bank-material were sampled. Field data were analyzed to determine various geomorphic characteristics such as bankfull width, bankfull mean depth, bankfull cross-sectional area, bankfull discharge, streambed slope, and bed- and bank-material particle-size distribution. 
The various geomorphic characteristics were analyzed by means of a combination of graphical and statistical techniques. The logarithms of the annual peak discharges for the 40 gaged study sites were fit by a Pearson Type III frequency distribution to develop flood-peak discharges associated with recurrence intervals of 2, 5, 10, 25, 50, and 100 years. The peak-frequency data were related to geomorphic, basin, and climatic variables by multiple-regression analysis. Simple-regression equations were developed to estimate 2-, 5-, 10-, 25-, 50-, and 100-year flood-peak discharges of rural, unregulated streams in Ohio from bankfull channel cross-sectional area. The average standard errors of prediction are 31.6, 32.6, 35.9, 41.5, 46.2, and 51.2 percent, respectively. The study and methods developed are intended to improve understanding of the relations between geomorphic, basin, and flood characteristics of streams in Ohio and to aid in the design of hydraulic structures, such as culverts and bridges, where stability of the stream and structure is an important element of the design criteria. The study was done in cooperation with the Ohio Department of Transportation and the U.S. Department of Transportation, Federal Highway Administration.
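The Pearson Type III fit to the logarithms of annual peaks mentioned above is the log-Pearson Type III procedure of Bulletin 17B. A simplified, stdlib-only sketch using hypothetical annual peaks and the Wilson-Hilferty approximation to the frequency factor (a real application would also weight the sample skew with a generalized regional skew):

```python
import math
import statistics

# Hypothetical annual peak discharges (cubic feet per second) at a gaged site.
peaks = [4200, 3100, 5600, 2800, 7400, 3900, 6100, 2500, 4800, 5200,
         3300, 8900, 4100, 3700, 6600, 2900, 5100, 4400, 7100, 3500]

logs = [math.log10(q) for q in peaks]
mean = statistics.fmean(logs)
sd = statistics.stdev(logs)

# Sample skew coefficient of the log discharges (regional-skew weighting
# from Bulletin 17B is omitted here for brevity).
n = len(logs)
skew = (n / ((n - 1) * (n - 2))) * sum(((x - mean) / sd) ** 3 for x in logs)

# Wilson-Hilferty approximation to the Pearson Type III frequency factor K
# for the 1-percent (100-year) exceedance probability; z = 2.326 is the
# standard normal deviate for that probability.
z = 2.326
if abs(skew) > 1e-9:
    k = (2 / skew) * ((1 + skew * z / 6 - skew ** 2 / 36) ** 3 - 1)
else:
    k = z  # zero skew reduces to the normal quantile

q100 = 10 ** (mean + k * sd)
print(round(q100))
```

The 100-year quantile is then `10 ** (mean + K * sd)` in the original discharge units, which is the quantity the regression equations in the study estimate for ungaged sites.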
Performance Evaluation of Five Turbidity Sensors in Three Primary Standards
Snazelle, Teri T.
2015-10-28
Open-File Report 2015-1172 is temporarily unavailable. Five commercially available turbidity sensors were evaluated by the U.S. Geological Survey Hydrologic Instrumentation Facility (HIF) for accuracy and precision in three types of turbidity standards: formazin, StablCal, and AMCO Clear (AMCO-AEPA). The U.S. Environmental Protection Agency (EPA) recognizes all three turbidity standards as primary standards, meaning they are acceptable for reporting purposes. The Forrest Technology Systems (FTS) DTS-12, the Hach SOLITAX sc, the Xylem EXO turbidity sensor, the Yellow Springs Instrument (YSI) 6136 turbidity sensor, and the Hydrolab Series 5 self-cleaning turbidity sensor were evaluated to determine whether turbidity measurements in the three primary standards are comparable to each other and to ascertain whether the primary standards are truly interchangeable. A formazin 4000 nephelometric turbidity unit (NTU) stock was purchased, and dilutions of 40, 100, 400, 800, and 1000 NTU were made fresh on the day of testing. StablCal and AMCO Clear (for the Hach 2100N) standards with corresponding concentrations were also purchased for the evaluation. Sensor performance was not evaluated at turbidity levels less than 40 NTU due to the unavailability of polymer-bead turbidity standards rated for general use. The percent error was calculated as the true (not absolute) difference between the measured turbidity and the standard value, divided by the standard value. The sensors that demonstrated the best overall performance in the evaluation were the Hach SOLITAX and the Hydrolab Series 5 turbidity sensor when the operating range (0.001–4000 NTU for the SOLITAX and 0.1–3000 NTU for the Hydrolab) was considered in addition to sensor accuracy and precision. The average percent error in the three standards was 3.80 percent for the SOLITAX and -4.46 percent for the Hydrolab.
The DTS-12 also demonstrated good accuracy with an average percent error of 2.02 percent and a maximum relative standard deviation of 0.51 percent for the operating range, which was limited to 0.01–1600 NTU at the time of this report. Test results indicated an average percent error of 19.81 percent in the three standards for the EXO turbidity sensor and 9.66 percent for the YSI 6136. The significant variability in sensor performance in the three primary standards suggests that although all three types are accepted as primary calibration standards, they are not interchangeable, and sensor results in the three types of standards are not directly comparable.
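The signed percent-error definition used in the evaluation is straightforward; a sketch with hypothetical sensor readings:

```python
# Signed percent error as defined in the evaluation: the true (not
# absolute) difference between the measured turbidity and the standard
# value, divided by the standard value. Readings are hypothetical.
def percent_error(measured_ntu: float, standard_ntu: float) -> float:
    return 100.0 * (measured_ntu - standard_ntu) / standard_ntu

# A sensor reading 415 NTU in a 400-NTU standard over-reads by 3.75%;
# a reading of 380 NTU under-reads by 5%, and the sign is preserved.
print(round(percent_error(415.0, 400.0), 2))
print(round(percent_error(380.0, 400.0), 2))
```

Keeping the sign (rather than taking the absolute value) is what lets averages such as the Hydrolab's -4.46 percent reveal a systematic low bias instead of hiding it.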
Psychosocial Hazards in UK Universities: Adopting a Risk Assessment Approach
ERIC Educational Resources Information Center
Kinman, Gail; Court, Stephen
2010-01-01
Drawing on the findings of a recent national survey, this article examines the extent to which higher education institutions in the United Kingdom meet the minimum standards recommended by the Health and Safety Executive (HSE) for the management of work-related stressors. A comparison is also made between the average weekly working hours reported…
Noftle, Erik E; Fleeson, William
2010-03-01
In 3 intensive cross-sectional studies, age differences in behavior averages and variabilities were examined. Three questions were posed: Does variability differ among age groups? Does the sizable variability in young adulthood persist throughout the life span? Do past conclusions about trait development, based on trait questionnaires, hold up when actual behavior is examined? Three groups participated: young adults (18-23 years), middle-aged adults (35-55 years), and older adults (65-81 years). In 2 experience-sampling studies, participants reported their current behavior multiple times per day for 1- or 2-week spans. In a 3rd study, participants interacted in standardized laboratory activities on 8 occasions. First, results revealed a sizable amount of intraindividual variability in behavior for all adult groups, with average within-person standard deviations ranging from about half a point to well over 1 point on 6-point scales. Second, older adults were most variable in Openness, whereas young adults were most variable in Agreeableness and Emotional Stability. Third, most specific patterns of maturation-related age differences in actual behavior were more greatly pronounced and differently patterned than those revealed by the trait questionnaire method. When participants interacted in standardized situations, personality differences between young adults and middle-aged adults were larger, and older adults exhibited a more positive personality profile than they exhibited in their everyday lives.
The correlation between relatives on the supposition of genomic imprinting.
Spencer, Hamish G
2002-01-01
Standard genetic analyses assume that reciprocal heterozygotes are, on average, phenotypically identical. If a locus is subject to genomic imprinting, however, this assumption does not hold. We incorporate imprinting into the standard quantitative-genetic model for two alleles at a single locus, deriving expressions for the additive and dominance components of genetic variance, as well as measures of resemblance among relatives. We show that, in contrast to the case with Mendelian expression, the additive and dominance deviations are correlated. In principle, this correlation allows imprinting to be detected solely on the basis of different measures of familial resemblances, but in practice, the standard error of the estimate is likely to be too large for a test to have much statistical power. The effects of genomic imprinting will need to be incorporated into quantitative-genetic models of many traits, for example, those concerned with mammalian birthweight. PMID:12019254
Falling Behind: New Evidence on the Black-White Achievement Gap
ERIC Educational Resources Information Center
Levitt, Steven D.; Fryer, Roland G.
2004-01-01
On average, black students typically score one standard deviation below white students on standardized tests--roughly the difference in performance between the average 4th grader and the average 8th grader. Historically, what has come to be known as the black-white test-score gap has emerged before children enter kindergarten and has tended to…
Kim, Ki-Hyeon; Lee, Bo-Ae; Oh, Deuk-Ja
2018-01-01
The purpose of this study is to verify the effects of aquatic exercise on the health-related physical fitness, blood fat, and immune functions of children with disabilities. To achieve this purpose, the researchers studied 10 children with grade 1 or grade 2 disabilities who do not exercise regularly. The researchers used SPSS 21.0 to calculate the averages and standard deviations of the data and performed a paired t-test to verify the differences in averages before and after the exercise program. The study showed significant differences in lean body weight, muscular strength, cardiovascular endurance, flexibility, and muscular endurance. The researchers also found statistically significant differences in triglycerides and in immunoglobulin G. The findings suggest that aquatic exercise affects the health-related physical fitness, blood fat, and immune functions of children with disabilities. PMID:29740565
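The paired t-test used in this design compares each participant's pre- and post-exercise scores. A stdlib-only sketch with hypothetical scores (not the study's data):

```python
import math
import statistics

# Hypothetical pre/post scores (e.g., a muscular-endurance measure) for
# n = 10 participants; a paired t-test asks whether the mean of the
# within-person differences is zero.
pre = [22.0, 18.5, 25.0, 20.0, 19.5, 23.0, 21.0, 17.5, 24.0, 20.5]
post = [25.5, 20.0, 27.5, 23.0, 21.0, 26.5, 23.5, 19.0, 26.0, 23.0]

diffs = [b - a for a, b in zip(pre, post)]
n = len(diffs)
mean_d = statistics.fmean(diffs)
sd_d = statistics.stdev(diffs)

# t statistic for a paired (dependent-samples) t-test, n - 1 degrees of freedom.
t = mean_d / (sd_d / math.sqrt(n))
print(round(t, 2))
```

The resulting t is compared against the critical value for n - 1 = 9 degrees of freedom (about 2.262 at the 5% level, two-tailed) to decide significance, which is the test SPSS performs behind the scenes.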
Code of Federal Regulations, 2010 CFR
2010-10-01
... OF TRANSPORTATION AUTOMOTIVE FUEL ECONOMY REPORTS § 537.2 Purpose. The purpose of this part is to... manufacturers' plans for complying with average fuel economy standards and in preparing an annual review of the average fuel economy standards. ...
Code of Federal Regulations, 2014 CFR
2014-10-01
... OF TRANSPORTATION AUTOMOTIVE FUEL ECONOMY REPORTS § 537.2 Purpose. The purpose of this part is to... manufacturers' plans for complying with average fuel economy standards and in preparing an annual review of the average fuel economy standards. ...
Code of Federal Regulations, 2012 CFR
2012-10-01
... OF TRANSPORTATION AUTOMOTIVE FUEL ECONOMY REPORTS § 537.2 Purpose. The purpose of this part is to... manufacturers' plans for complying with average fuel economy standards and in preparing an annual review of the average fuel economy standards. ...
Code of Federal Regulations, 2011 CFR
2011-10-01
... OF TRANSPORTATION AUTOMOTIVE FUEL ECONOMY REPORTS § 537.2 Purpose. The purpose of this part is to... manufacturers' plans for complying with average fuel economy standards and in preparing an annual review of the average fuel economy standards. ...
Code of Federal Regulations, 2013 CFR
2013-10-01
... OF TRANSPORTATION AUTOMOTIVE FUEL ECONOMY REPORTS § 537.2 Purpose. The purpose of this part is to... manufacturers' plans for complying with average fuel economy standards and in preparing an annual review of the average fuel economy standards. ...
Double resonance calibration of g factor standards: Carbon fibers as a high precision standard.
Herb, Konstantin; Tschaggelar, Rene; Denninger, Gert; Jeschke, Gunnar
2018-04-01
The g factor of paramagnetic defects in commercial high-performance carbon fibers was determined by a double resonance experiment based on the Overhauser shift due to hyperfine-coupled protons. Our carbon fibers exhibit a single, narrow, and perfectly Lorentzian-shaped ESR line and a g factor slightly higher than g_free, with g = 2.002644 = g_free·(1 + 162 ppm) and a relative uncertainty of 15 ppm. This precisely known g factor and their inertness qualify them as a high-precision g factor standard for general purposes. The double resonance experiment for calibration is applicable to other potential standards with a hyperfine interaction averaged by a process with very short correlation time. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
National mortality rates: the impact of inequality?
Wilkinson, R G
1992-01-01
Although health is closely associated with income differences within each country there is, at best, only a weak link between national mortality rates and average income among the developed countries. On the other hand, there is evidence of a strong relationship between national mortality rates and the scale of income differences within each society. These three elements are coherent if health is affected less by changes in absolute material standards across affluent populations than it is by relative income or the scale of income differences and the resulting sense of disadvantage within each society. Rather than socioeconomic mortality differentials representing a distribution around given national average mortality rates, it is likely that the degree of income inequality indicates the burden of relative deprivation on national mortality rates. PMID:1636827
Response of freshwater algae to water quality in Qinshan Lake within Taihu Watershed, China
NASA Astrophysics Data System (ADS)
Zhang, Jianying; Ni, Wanmin; Luo, Yang; Jan Stevenson, R.; Qi, Jiaguo
Although frequent algal blooms in Taihu Lake in China have become major environmental problems and have drawn national and international attention, little is understood about the relationship between algal blooms and water quality. The goal of this study was to assess the growth and species responses of freshwater algae to variation in water quality in Qinshan Lake, located in the headwaters of the Taihu watershed. Water samples were collected monthly from ten study sites in Qinshan Lake and were analyzed for species distribution of freshwater algae and physicochemical parameters such as total nitrogen (TN), NH4+-N, NO3--N, total phosphorus (TP), chemical oxygen demand (CODMn), and Chl-a. The results showed that average TN was 4.47 mg/L, with 92.2% of values greater than the TN standard set by the Chinese Environmental Protection Agency; average TP was 0.051 mg/L, with 37.9% of values above the TP national standard; and the average trophic level index (TLI) was 53, the lower end of eutrophic condition. Average Chl-a concentration was 12.83 mg/m³. Green algae and diatoms far outweighed other freshwater algae and were dominant most of the year, with highest relative abundances of 96% and 99%, respectively. Blue-green algae, composed mainly of toxic strains such as Microcystis sp., Nostoc sp., and Oscillatoria sp., became most dominant in the summer with a maximum relative abundance of 69%. The blue-green algae sank to the lake bottom to overwinter, and dinoflagellates then became the dominant species in the winter, with a highest relative abundance of 89%. Analysis indicated that nutrients, especially ammonia and co-varying nutrients, were the major factor limiting population growth of blue-green algae, suggesting that control of nutrient enrichment is the major preventive measure against algal blooms in Qinshan Lake.
Francis, Andre; Hugh, Oliver; Gardosi, Jason
2018-02-01
Fetal growth abnormalities are linked to stillbirth and other adverse pregnancy outcomes, and use of the correct birthweight standard is essential for accurate assessment of growth status and perinatal risk. Two competing, conceptually opposite birthweight standards are currently being implemented internationally: customized gestation-related optimal weight (GROW) and INTERGROWTH-21st. We wanted to compare their performance when applied to a multiethnic international cohort, and evaluate their usefulness in the assessment of stillbirth risk at term. We analyzed routinely collected maternity data from 10 countries with a total of 1.25 million term pregnancies in their respective main ethnic groups. The 2 standards were applied to determine small for gestational age (SGA) and large for gestational age (LGA) rates, with associated relative risk and population-attributable risk of stillbirth. The customized standard (GROW) was based on the term optimal weight adjusted for maternal height, weight, parity, and ethnic origin, while INTERGROWTH-21st was a fixed standard derived from a multiethnic cohort of low-risk pregnancies. The customized standard showed an average SGA rate of 10.5% (range 10.1-12.7) and LGA rate of 9.5% (range 7.3-9.9) for the set of cohorts. In contrast, there was a wide variation in SGA and LGA rates with INTERGROWTH-21st, with an average SGA rate of 4.4% (range 3.1-16.8) and LGA rate of 20.6% (range 5.1-27.5). This variation in INTERGROWTH-21st SGA and LGA rates was correlated closely (R = ±0.98) to the birthweights predicted for the 10 country cohorts by the customized method to derive term optimal weight, suggesting that they were mostly due to physiological variation in birthweight. Of the 10.5% of cases defined as SGA according to the customized standard, 4.3% were also SGA by INTERGROWTH-21st and had a relative risk of 3.5 (95% confidence interval, 3.1-4.1) for stillbirth.
A further 6.3% (60% of the whole customized SGA group) were not SGA by INTERGROWTH-21st and had a relative risk of 1.9 (95% confidence interval, 3.1-4.1) for stillbirth. An additional 0.2% of cases were SGA by INTERGROWTH-21st only and had no increased risk of stillbirth. At the other end, customized assessment classified 9.5% of births as large for gestational age, most of which (9.0%) were also LGA by the INTERGROWTH-21st standard. INTERGROWTH-21st identified a further 11.6% as LGA, which, however, had a reduced risk of stillbirth (relative risk, 0.6; 95% confidence interval, 0.5-0.7). Customized assessment resulted in increased identification of small for gestational age and stillbirth risk, while the wide variation in SGA rates using the INTERGROWTH-21st standard appeared mostly to reflect differences in physiological pregnancy characteristics in the 10 maternity populations. Copyright © 2018 Elsevier Inc. All rights reserved.
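Relative risk and population-attributable risk, the two measures reported for the birthweight standards, can be sketched with hypothetical counts (chosen only to produce an RR of 3.5 for illustration, not taken from the study):

```python
# Hypothetical cohort: 10,000 SGA births with 70 stillbirths,
# 90,000 non-SGA births with 180 stillbirths. All counts are made up.
def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Risk in the exposed group divided by risk in the unexposed group."""
    return (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)

rr = relative_risk(70, 10_000, 180, 90_000)

# Population-attributable risk percent: the share of overall risk that
# would disappear if the exposed group had the unexposed group's risk.
overall_risk = (70 + 180) / 100_000
unexposed_risk = 180 / 90_000
par_percent = 100.0 * (overall_risk - unexposed_risk) / overall_risk

print(round(rr, 2), round(par_percent, 1))
```

With these counts the RR is 3.5 and the attributable fraction is 20%, showing how a modest-sized high-risk group can still account for a large share of the adverse outcomes.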
2009-06-05
Acute malnutrition among children aged 6-59 months is a key indicator routinely used for describing the presence and magnitude of humanitarian emergencies. In the past, the prevalence of acute malnutrition and admissions to feeding programs has been determined using the growth reference developed by the World Health Organization (WHO), CDC, and the National Center for Health Statistics (NCHS). In 2006, WHO released new international growth standards and recommended their use in all nutrition programs. To evaluate the impact of transitioning to the new standards, CDC analyzed anthropometric data for children aged 6-59 months from Darfur, Sudan, collected during 2005-2007. This report describes the results of that analysis, which indicated that use of the new standards would have increased the prevalence of global acute malnutrition on average by 14% and would have increased the prevalence of severe acute malnutrition on average by 100%. Admissions to feeding programs would have increased by 56% for moderately malnourished children and by 260% for severely malnourished children. For programs in Darfur, this would have resulted in approximately 23,200 more children eligible for therapeutic feeding programs. For the immediate future, the prevalence of acute malnutrition in children should be reported using both the old WHO/CDC/NCHS reference and the new WHO standards. More research is needed to better ascertain the validity of the admission criteria based on the new WHO standards in predicting malnutrition-related morbidity and mortality.
A note on calculation of efficiency and emissions from wood and wood pellet stoves
NASA Astrophysics Data System (ADS)
Petrocelli, D.; Lezzi, A. M.
2015-11-01
In recent years, national laws and international regulations have introduced strict limits on efficiency and emissions from woody biomass appliances to promote the diffusion of models characterized by low emissions and high efficiency. The evaluation of efficiency and emissions is made during the certification process, which consists of standardized tests. Standards prescribe the procedures to be followed during tests and the relations to be used to determine the mean values of efficiency and emissions. In practice, these values are calculated using flue gas temperature and composition averaged over the whole test period, which lasts from 1 to 6 hours. Typically, in wood appliances the fuel burning rate is not constant, and this leads to considerable variation in time of the composition and flow rate of the flue gas. In this paper we show that this fact may cause significant differences between emission values calculated according to standards and those obtained by integrating the instantaneous mass and energy balances over the test period. In addition, we propose some approximate relations and a method for wood stoves that supply more accurate results than those calculated according to standards. These relations can be easily implemented in computer-controlled data acquisition systems.
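The discrepancy the authors describe arises because the product of averages is not the average of products when concentration and flue-gas flow vary over the burn cycle. A toy illustration with synthetic, anticorrelated profiles (all numbers are made up):

```python
import numpy as np

# Synthetic test: CO concentration (g/m3) and flue-gas flow rate (m3/h)
# sampled each minute over a 3-hour burn cycle with a varying burning rate.
minutes = 180
t = np.linspace(0.0, 1.0, minutes)
flow = 20.0 + 15.0 * np.sin(np.pi * t)        # flow peaks mid-cycle
conc = 2.0 + 3.0 * (1.0 - np.sin(np.pi * t))  # CO highest at start and end

# Standard-style estimate: test-average concentration times test-average
# flow, times the test duration in hours.
emission_standard = conc.mean() * flow.mean() * (minutes / 60.0)

# Integrated instantaneous mass balance: sum of the minute-by-minute mass
# flows, each minute being 1/60 of an hour.
emission_integrated = np.sum(conc * flow) * (1.0 / 60.0)

print(round(emission_standard, 1), round(emission_integrated, 1))
```

Because concentration is highest exactly when flow is lowest, the averaged-quantities estimate overstates the integrated emission; with the opposite correlation it would understate it, which is why the paper argues for integrating the instantaneous balances.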
NASA Technical Reports Server (NTRS)
Otterson, D. A.; Seng, G. T.
1984-01-01
A new high-performance liquid chromatographic (HPLC) method for group-type analysis of middistillate fuels is described. It uses a refractive index detector and standards that are prepared by reacting a portion of the fuel sample with sulfuric acid. A complete analysis of a middistillate fuel for saturates and aromatics (including the preparation of the standard) requires about 15 min if standards for several fuels are prepared simultaneously. From model fuel studies, the method was found to be accurate to within 0.4 vol% saturates or aromatics, and it provides a precision of ±0.4 vol%. Olefin determinations require an additional 15 min of analysis time. However, this determination is needed only for those fuels displaying a significant olefin response at 200 nm (obtained routinely during the saturates/aromatics analysis procedure). The olefin determination uses the responses of the olefins and the corresponding saturates, as well as the average value of their refractive index sensitivity ratios (1.1). Studies indicated that, although the relative error in the olefin results could reach 10 percent using this average sensitivity ratio, it was 5 percent for the fuels used in this study. Olefin concentrations as low as 0.1 vol% have been determined using this method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cavazos-Cadena, Rolando, E-mail: rcavazos@uaaan.m; Salem-Silva, Francisco, E-mail: frsalem@uv.m
2010-04-15
This note concerns discrete-time controlled Markov chains with Borel state and action spaces. Given a nonnegative cost function, the performance of a control policy is measured by the superior limit risk-sensitive average criterion associated with a constant and positive risk sensitivity coefficient. Within such a framework, the discounted approach is used (a) to establish the existence of solutions for the corresponding optimality inequality, and (b) to show that, under mild conditions on the cost function, the optimal value functions corresponding to the superior and inferior limit average criteria coincide on a certain subset of the state space. The approach of the paper relies on standard dynamic programming ideas and on a simple analytical derivation of a Tauberian relation.
Hargrove, John S; Weyl, Olaf L F; Allen, Micheal S; Deacon, Neil R
2015-01-01
Fishes are one of the most commonly introduced aquatic taxa worldwide, and invasive fish species pose threats to biodiversity and ecosystem function in recipient waters. Considerable research efforts have focused on predicting the invasibility of different fish taxa; however, accurate records detailing the establishment and spread of invasive fishes are lacking for large numbers of fish around the globe. In response to these data limitations, a low-cost method of cataloging and quantifying the temporal and spatial status of fish invasions was explored. Specifically, angler catch data derived from competitive bass angling tournaments was used to document the distribution of 66 non-native populations of black bass (Micropterus spp.) in southern Africa. Additionally, catch data from standardized tournament events were used to assess the abundance and growth of non-native bass populations in southern Africa relative to their native distribution (southern and eastern United States). Differences in metrics of catch per unit effort (average number of fish retained per angler per day), daily bag weights (the average weight of fish retained per angler), and average fish weight were assessed using catch data from 14,890 angler days of tournament fishing (11,045 days from South Africa and Zimbabwe; 3,845 days from the United States). No significant differences were found between catch rates, average daily bag weight, or the average fish weight between countries, suggesting that bass populations in southern Africa reach comparable sizes and numbers relative to waters in their native distribution. Given the minimal cost associated with data collection (i.e. records are collected by tournament organizers), the standardized nature of the events, and consistent bias (i.e. selection for the biggest fish in a population), the use of angler catch data represents a novel approach to infer the status and distribution of invasive sport fish.
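The three catch metrics above can be computed directly from per-angler-day tournament records. A minimal sketch (record field names are hypothetical):

```python
import statistics

def tournament_metrics(records):
    """Compute the three catch metrics described in the study from
    per-angler-day tournament records (hypothetical field names).

    Each record: {"fish": number of fish retained, "weight_kg": total bag weight}.
    """
    cpue = statistics.mean(r["fish"] for r in records)        # fish/angler/day
    bag = statistics.mean(r["weight_kg"] for r in records)    # kg/angler/day
    avg_fish = (sum(r["weight_kg"] for r in records)
                / sum(r["fish"] for r in records))            # kg per fish
    return cpue, bag, avg_fish

# Two illustrative angler-days:
days = [{"fish": 4, "weight_kg": 6.0}, {"fish": 2, "weight_kg": 3.0}]
cpue, bag, avg_fish = tournament_metrics(days)
```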
Kuramitsu, Madoka; Okuma, Kazu; Yamochi, Tadanori; Sato, Tomoo; Sasaki, Daisuke; Hasegawa, Hiroo; Umeki, Kazumi; Kubota, Ryuji; Sobata, Rieko; Matsumoto, Chieko; Kaneko, Noriaki; Naruse, Isao; Yamagishi, Makoto; Nakashima, Makoto; Momose, Haruka; Araki, Kumiko; Mizukami, Takuo; Mizusawa, Saeko; Okada, Yoshiaki; Ochiai, Masaki; Utsunomiya, Atae; Koh, Ki-Ryang; Ogata, Masao; Nosaka, Kisato; Uchimaru, Kaoru; Iwanaga, Masako; Sagara, Yasuko; Yamano, Yoshihisa; Satake, Masahiro; Okayama, Akihiko; Mochizuki, Manabu; Izumo, Shuji; Saito, Shigeru; Itabashi, Kazuo; Kamihira, Shimeru; Yamaguchi, Kazunari; Watanabe, Toshiki; Hamaguchi, Isao
2015-01-01
Quantitative PCR (qPCR) analysis of human T-cell leukemia virus type 1 (HTLV-1) was used to assess the amount of HTLV-1 provirus DNA integrated into the genomic DNA of host blood cells. Accumulating evidence indicates that a high proviral load is one of the risk factors for the development of adult T-cell leukemia/lymphoma and HTLV-1-associated myelopathy/tropical spastic paraparesis. However, interlaboratory variability in qPCR results makes it difficult to assess the differences in reported proviral loads between laboratories. To remedy this situation, we attempted to minimize discrepancies between laboratories through standardization of HTLV-1 qPCR in a collaborative study. TL-Om1 cells that harbor the HTLV-1 provirus were serially diluted with peripheral blood mononuclear cells to prepare a candidate standard. By statistically evaluating the proviral loads of the standard and those determined using in-house qPCR methods at each laboratory, we determined the relative ratios of the measured values in the laboratories to the theoretical values of the TL-Om1 standard. The relative ratios of the laboratories ranged from 0.84 to 4.45. Next, we corrected the proviral loads of the clinical samples from HTLV-1 carriers using the relative ratio. As expected, the overall differences between the laboratories were reduced by half, from 7.4-fold to 3.8-fold on average, after applying the correction. HTLV-1 qPCR can be standardized using TL-Om1 cells as a standard and by determining the relative ratio of the measured to the theoretical standard values in each laboratory. PMID:26292315
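The correction scheme described above (scale each laboratory's results by its measured-to-theoretical ratio on the shared standard) can be sketched as follows (function names and values are hypothetical):

```python
def relative_ratio(measured_loads, theoretical_loads):
    """Average ratio of a laboratory's measured proviral loads on the
    candidate standard to the theoretical TL-Om1 values (a sketch; the
    study's statistical evaluation was more involved)."""
    return sum(m / t for m, t in zip(measured_loads, theoretical_loads)) / len(measured_loads)

def correct_load(sample_load, ratio):
    """Correct a clinical sample's proviral load by the laboratory's ratio."""
    return sample_load / ratio

# A lab that reads the standard twice too high gets its clinical values halved:
ratio = relative_ratio([2.0, 4.0], [1.0, 2.0])
corrected = correct_load(10.0, ratio)
```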
ERIC Educational Resources Information Center
Jensen, Arthur R.
Charles Spearman originally suggested in 1927 that the varying magnitudes of the mean differences between whites and blacks in standardized scores on a variety of mental tests are directly related to the size of the tests' loadings on g, the general factor common to all complex tests of mental ability. Several independent large-scale studies…
National Highway Traffic Safety Administration Corporate Average Fuel Economy (CAFE) Standards
DOT National Transportation Integrated Search
2003-01-01
The National Highway Traffic Safety Administration (NHTSA) must set Corporate Average Fuel Economy (CAFE) standards for light trucks. This was authorized by the Energy Policy and Conservation Act, which added Title V: Improving Automotive Fuel Effici...
Multiplets: Their behavior and utility at dacitic and andesitic volcanic centers
Thelen, W.; Malone, S.; West, M.
2011-01-01
Multiplets, or groups of earthquakes with similar waveforms, are commonly observed at volcanoes, particularly those exhibiting unrest. Using triggered seismic data from the 1980-1986 Mount St. Helens (MSH) eruption, we have constructed a catalog of multiplet occurrence. Our analysis reveals that the occurrence of multiplets is related, at least in part, to the viscosity of the magma. We also constructed catalogs of multiplet occurrence using continuous seismic data from the 2004 eruption at MSH and 2007 eruption at Bezymianny Volcano, Russia. Prior to explosions at MSH in 2004 and Bezymianny in 2007, the multiplet proportion of total seismicity (MPTS) declined, while the average amplitudes and standard deviations of the average amplitude increased. The life spans of multiplets (time between the first and last event) were also shorter prior to explosions than during passive lava extrusion. Dome-forming eruptions that include a partially solidified plug, like MSH (1983-1986 and 2004-2008), often possess multiplets with longer life spans and MPTS values exceeding 50%. Conceptually, the relatively unstable environment prior to explosions is characterized by large and variable stress gradients brought about by rapidly changing overpressures within the conduit. We infer that such complex stress fields affect the number of concurrent families, MPTS, average amplitude, and standard deviation of the amplitude of the multiplets. We also argue that multiplet detection may be an important new monitoring tool for determining the timing of explosions and in forecasting the type of eruption.
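The multiplet proportion of total seismicity (MPTS) can be sketched with a simple waveform cross-correlation classifier. This is an illustrative simplification, not the catalog-construction method of the paper; the 0.8 correlation threshold is an assumption:

```python
import numpy as np

def is_multiplet_pair(w1, w2, threshold=0.8):
    """Two events belong to the same multiplet when their waveforms'
    zero-lag correlation coefficient exceeds a threshold (the 0.8 value
    is illustrative; the study's exact criteria may differ)."""
    return np.corrcoef(w1, w2)[0, 1] >= threshold

def mpts(waveforms, threshold=0.8):
    """Multiplet proportion of total seismicity: the fraction of events
    that correlate with at least one other event (simplified definition)."""
    n = len(waveforms)
    flags = [any(is_multiplet_pair(waveforms[i], waveforms[j], threshold)
                 for j in range(n) if j != i) for i in range(n)]
    return sum(flags) / n

# Three synthetic events: two near-identical waveforms and one unrelated.
t = np.linspace(0.0, 1.0, 200)
rng = np.random.default_rng(0)
base = np.sin(2 * np.pi * 5 * t)
events = [base,
          base + 0.05 * rng.standard_normal(t.size),
          np.sin(2 * np.pi * 13 * t + 1.0)]
frac = mpts(events)  # 2 of the 3 events form a multiplet
```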
Allen, Robert C; Rutan, Sarah C
2011-10-31
Simulated and experimental data were used to measure the effectiveness of common interpolation techniques during chromatographic alignment of comprehensive two-dimensional liquid chromatography-diode array detector (LC×LC-DAD) data. Interpolation was used to generate a sufficient number of data points in the sampled first chromatographic dimension to allow for alignment of retention times from different injections. Five different interpolation methods, linear interpolation followed by cross correlation, piecewise cubic Hermite interpolating polynomial, cubic spline, Fourier zero-filling, and Gaussian fitting, were investigated. The fully aligned chromatograms, in both the first and second chromatographic dimensions, were analyzed by parallel factor analysis to determine the relative area for each peak in each injection. A calibration curve was generated for the simulated data set. The standard error of prediction and percent relative standard deviation were calculated for the simulated peak for each technique. The Gaussian fitting interpolation technique resulted in the lowest standard error of prediction and average relative standard deviation for the simulated data. However, upon applying the interpolation techniques to the experimental data, most of the interpolation methods were not found to produce statistically different relative peak areas from each other. While most of the techniques were not statistically different, the performance was improved relative to the PARAFAC results obtained when analyzing the unaligned data. Copyright © 2011 Elsevier B.V. All rights reserved.
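The Gaussian fitting idea (fit the coarsely sampled first-dimension peak with a Gaussian to locate its retention time to sub-sampling precision) can be sketched with log-parabolic interpolation of the three points around the maximum, which is exact for a noise-free Gaussian. This is one simple form of Gaussian fitting; the paper's implementation may differ:

```python
import math

def gaussian_peak_center(times, heights):
    """Estimate a peak's retention time to sub-sampling precision by
    log-parabolic (Gaussian) interpolation of the three samples around
    the maximum. Assumes uniform sampling and a maximum not at an edge."""
    m = max(range(len(heights)), key=lambda i: heights[i])
    dt = times[1] - times[0]
    la, lb, lc = (math.log(heights[m - 1]),
                  math.log(heights[m]),
                  math.log(heights[m + 1]))
    delta = 0.5 * (la - lc) / (la - 2 * lb + lc)  # fractional-sample offset
    return times[m] + delta * dt

# An exact Gaussian sampled every 1.0 time units; the true center is 2.1,
# i.e. between sampling points.
center, sigma = 2.1, 0.8
t = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [math.exp(-((ti - center) ** 2) / (2 * sigma ** 2)) for ti in t]
est = gaussian_peak_center(t, y)
```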
40 CFR 49.125 - Rule for limiting the emissions of particulate matter.
Code of Federal Regulations, 2010 CFR
2010-07-01
... used exclusively for space heating with a rated heat input capacity of less than 400,000 British... average of 0.23 grams per dry standard cubic meter (0.1 grains per dry standard cubic foot), corrected to... boiler stack must not exceed an average of 0.46 grams per dry standard cubic meter (0.2 grains per dry...
Fu, Xi; Qiao, Jia; Girod, Sabine; Niu, Feng; Liu, Jian Feng; Lee, Gordon K; Gui, Lai
2017-09-01
Mandible contour surgery, including reduction gonioplasty and genioplasty, has become increasingly popular in East Asia. However, it is technically challenging and hence has a long learning curve and high complication rates, and it often needs secondary revisions. The increasing use of three-dimensional (3-D) technology makes accurate single-stage mandible contour surgery with minimal complication rates possible with a virtual surgical plan (VSP) and 3-D surgical templates. This study establishes a standardized protocol for VSP and 3-D surgical template-assisted mandible contour surgery and evaluates the accuracy of the protocol. In this study, we enrolled 20 patients for mandible contour surgery. Our protocol is to perform VSP based on 3-D computed tomography data and then to design and 3-D print surgical templates based on the preoperative VSP. The accuracy of the method was analyzed by 3-D comparison of the VSP and the postoperative result using detailed computer analysis. All patients had symmetric, natural osteotomy lines and satisfactory facial ratios in a single-stage operation. The average relative error between the VSP and the postoperative result on the entire skull was 0.41 ± 0.13 mm. The average new left gonial error was 0.43 ± 0.77 mm. The average new right gonial error was 0.45 ± 0.69 mm. The average pogonion error was 0.79 ± 1.21 mm. Patients were very satisfied with the aesthetic results. Surgeons were very satisfied with the performance of the surgical templates in facilitating the operation. Our standardized protocol of VSP and 3-D printed surgical template-assisted single-stage mandible contour surgery results in accurate, safe, and predictable outcomes in a single stage.
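The landmark errors quoted above (gonial and pogonion) amount to distances between planned and achieved coordinates. A simplified point-to-point sketch of that comparison (the study used a full 3-D surface analysis; coordinates below are hypothetical):

```python
import math

def average_landmark_error(planned, actual):
    """Mean Euclidean distance between planned (VSP) and postoperative
    landmark coordinates, a simplified point-to-point version of the
    3-D surface comparison described in the abstract."""
    dists = [math.dist(p, a) for p, a in zip(planned, actual)]
    return sum(dists) / len(dists)

# Two hypothetical landmarks (mm): one displaced by 0.5 mm, one exact.
planned = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]
actual = [(0.3, 0.0, 0.4), (1.0, 1.0, 1.0)]
err = average_landmark_error(planned, actual)
```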
Noise pollution in intensive care units and emergency wards.
Khademi, Gholamreza; Roudi, Masoumeh; Shah Farhat, Ahmad; Shahabian, Masoud
2011-01-01
The improvement of technology has increased noise levels in hospital wards to higher than the international standard levels (35-45 dB). Noise levels above the maximum result in patients' instability and dissatisfaction. Moreover, noise has serious negative effects on the staff's health and the quality of their services. The purpose of this survey was to analyze the noise levels in the intensive care units and emergency wards of the Imam Reza Teaching Hospital, Mashhad. The research was carried out in November 2009 during morning shifts between 7:30 and 12:00. Noise levels were measured 10 times at 30-minute intervals in the nursing stations of 10 wards of the emergency department, the intensive care units, and the Nephrology and Kidney Transplant Departments of Imam Reza University Hospital, Mashhad. The noise level in the nursing stations was recorded as both the maximum level (Lmax) and the equivalent level (Leq). The analysis was based on comparison of the equivalent levels (Leq) because the maximum levels were unstable. In our survey the average level (Leq) in all wards was much higher than the standard level. The maximum level (Lmax) in most wards was 85-86 dB, and in one measurement in the Internal ICU it reached 94 dB. The average Leq across all wards was 60.2 dB; in the emergency units it was 62.2 dB, but this was not time related. The highest average level (Leq) was measured at 11:30 AM, and the peak was measured in the Nephrology nursing station. The average noise levels in the intensive care units and emergency wards were above the standard levels; since these wards have vital roles in treatment procedures, more attention is needed in this area.
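The Leq values reported above are energy averages of the sound level, not arithmetic means, which is why unstable maxima were excluded from the comparison. A minimal sketch of the standard Leq formula (readings below are illustrative):

```python
import math

def leq(levels_db):
    """Equivalent continuous sound level (Leq) from a series of short-term
    readings in dB: a logarithmic (energy) average, not an arithmetic mean."""
    mean_energy = sum(10.0 ** (l / 10.0) for l in levels_db) / len(levels_db)
    return 10.0 * math.log10(mean_energy)

# A single loud interval dominates the energy average: the arithmetic
# mean of these readings is 65 dB, but Leq is noticeably higher.
value = leq([60.0, 60.0, 60.0, 80.0])  # about 74.1 dB
```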
Rhodes, G; Yoshikawa, S; Clark, A; Lee, K; McKay, R; Akamatsu, S
2001-01-01
Averageness and symmetry are attractive in Western faces and are good candidates for biologically based standards of beauty. A hallmark of such standards is that they are shared across cultures. We examined whether facial averageness and symmetry are attractive in non-Western cultures. Increasing the averageness of individual faces, by warping those faces towards an averaged composite of the same race and sex, increased the attractiveness of both Chinese (experiment 1) and Japanese (experiment 2) faces, for Chinese and Japanese participants, respectively. Decreasing averageness by moving the faces away from an average shape decreased attractiveness. We also manipulated the symmetry of Japanese faces by blending each original face with its mirror image to create perfectly symmetric versions. Japanese raters preferred the perfectly symmetric versions to the original faces (experiment 2). These findings show that preferences for facial averageness and symmetry are not restricted to Western cultures, consistent with the view that they are biologically based. Interestingly, it made little difference whether averageness was manipulated by using own-race or other-race averaged composites and there was no preference for own-race averaged composites over other-race or mixed-race composites (experiment 1). We discuss the implications of these results for understanding what makes average faces attractive. We also discuss some limitations of our studies, and consider other lines of converging evidence that may help determine whether preferences for average and symmetric faces are biologically based.
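The symmetry manipulation described above (blend each face with its mirror image) can be sketched at the pixel level. This is an illustrative simplification: the study blended shape-normalized faces, not raw pixel arrays:

```python
import numpy as np

def symmetric_blend(image, weight=0.5):
    """Blend a face image with its left-right mirror image; weight 0.5
    yields the perfectly symmetric version used in the experiments
    (pixel-level sketch only; the study also used shape warping)."""
    return (1.0 - weight) * image + weight * image[:, ::-1]

# A tiny 2x3 grayscale "image": each row becomes left-right symmetric.
face = np.array([[1.0, 2.0, 3.0],
                 [4.0, 5.0, 6.0]])
sym = symmetric_blend(face)
```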
Cabieses, Baltica; Cookson, Richard; Espinoza, Manuel; Santorelli, Gillian; Delgado, Iris
2015-01-01
Chile, a South American country recently defined as a high-income nation, carried out a major healthcare system reform from 2005 onwards that aimed at reducing socioeconomic inequality in health. This study aimed to estimate income-related inequality in self-reported health status (SRHS) in 2000 and 2013, before and after the reform, for the entire adult Chilean population. Using data on equivalized household income and adult SRHS from the 2000 and 2013 CASEN surveys (independent samples of 101 046 and 172 330 adult participants, respectively) we estimated Erreygers concentration indices (CIs) for above average SRHS for both years. We also decomposed the contribution of both "legitimate" standardizing variables (age and sex) and "illegitimate" variables (income, education, occupation, ethnicity, urban/rural, marital status, number of people living in the household, and healthcare entitlement). There was a significant concentration of above average SRHS favoring richer people in Chile in both years, which was less pronounced in 2013 than 2000 (Erreygers corrected CI 0.165 [Standard Error, SE 0.007] in 2000 and 0.047 [SE 0.008] in 2013). To help interpret the magnitude of this decline, adults in the richest fifth of households were 33% more likely than those in the poorest fifth to report above-average health in 2000, falling to 11% in 2013. In 2013, the contribution of illegitimate factors to income-related inequality in SRHS remained higher than the contribution of legitimate factors. Income-related inequality in SRHS in Chile has fallen after the equity-based healthcare reform. Further research is needed to ascertain how far this fall in health inequality can be attributed to the 2005 healthcare reform as opposed to economic growth and other determinants of health that changed during the period.
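The Erreygers corrected concentration index used above can be computed from the covariance form of the standard concentration index plus Erreygers' bounded-outcome correction (textbook formulas, supplied for context; the toy data are hypothetical):

```python
import numpy as np

def concentration_index(h, income):
    """Standard concentration index via the covariance formula
    C = 2*cov(h, r)/mean(h), where r is the fractional income rank."""
    n = len(h)
    r = np.empty(n)
    r[np.argsort(income, kind="stable")] = (np.arange(n) + 0.5) / n
    cov = (h * r).mean() - h.mean() * r.mean()
    return 2.0 * cov / h.mean()

def erreygers_index(h, income, bounds=(0.0, 1.0)):
    """Erreygers' corrected index for a bounded outcome such as a binary
    above-average-health indicator: E = 4*mean(h)/(b - a) * C."""
    a, b = bounds
    return 4.0 * h.mean() / (b - a) * concentration_index(h, income)

# Toy data: above-average health entirely concentrated among the richer
# half of households gives a strongly pro-rich (positive) index.
income = np.array([10.0, 20.0, 30.0, 40.0])
health = np.array([0.0, 0.0, 1.0, 1.0])
e = erreygers_index(health, income)
```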
7 CFR 51.2561 - Average moisture content.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Average moisture content. 51.2561 Section 51.2561 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL...
7 CFR 51.2561 - Average moisture content.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Average moisture content. 51.2561 Section 51.2561 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL...
7 CFR 51.2561 - Average moisture content.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Average moisture content. 51.2561 Section 51.2561 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL...
7 CFR 51.2548 - Average moisture content determination.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Average moisture content determination. 51.2548 Section 51.2548 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE...
7 CFR 51.2548 - Average moisture content determination.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Average moisture content determination. 51.2548 Section 51.2548 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE...
7 CFR 51.2548 - Average moisture content determination.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Average moisture content determination. 51.2548 Section 51.2548 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE...
7 CFR 51.2548 - Average moisture content determination.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Average moisture content determination. 51.2548 Section 51.2548 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE...
7 CFR 51.2561 - Average moisture content.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Average moisture content. 51.2561 Section 51.2561 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL...
A new method to detect event-related potentials based on Pearson's correlation.
Giroldini, William; Pederzoli, Luciano; Bilucaglia, Marco; Melloni, Simone; Tressoldi, Patrizio
2016-12-01
Event-related potentials (ERPs) are widely used in brain-computer interface applications and in neuroscience. Normal EEG activity is rich in background noise, and therefore, in order to detect ERPs, it is usually necessary to average over multiple trials to reduce the effects of this noise. The noise produced by EEG activity itself is not correlated with the ERP waveform, so averaging decreases the noise by a factor inversely proportional to the square root of N, where N is the number of averaged epochs. This is the easiest strategy currently used to detect ERPs: it is based on averaging all the ERP waveforms, which are time- and phase-locked. In this paper, a new method called GW6 is proposed, which calculates the ERP using a mathematical method based only on Pearson's correlation. The result is a graph with the same time resolution as the classical ERP that shows only positive peaks, representing the increase, in consonance with the stimuli, of the EEG signal correlation over all channels. This new method is also useful for selectively identifying and highlighting some hidden components of the ERP response that are not phase-locked and that are usually invisible in the standard, simple method based on averaging all the epochs. These hidden components seem to be caused by variations (between each successive stimulus) of the ERP's inherent phase latency period (jitter), although the same stimulus produces a reasonably constant phase across all EEG channels. For this reason, this new method could be very helpful for investigating these hidden components of the ERP response and for developing applications for scientific and medical purposes. Moreover, this new method is more resistant to EEG artifacts than standard averaging and could be very useful in research and neurology.
The method we propose can be used directly as a routine written in the well-known Matlab programming language and can easily and quickly be rewritten in any other software language.
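The square-root-of-N noise reduction underlying classical ERP averaging can be demonstrated with a short simulation. This illustrates the baseline technique the paper improves on, not the GW6 algorithm itself, and is written in Python rather than the paper's Matlab:

```python
import numpy as np

rng = np.random.default_rng(1)
n_epochs, n_samples = 400, 100

# A phase-locked ERP waveform buried in uncorrelated background noise.
erp = np.sin(np.linspace(0.0, np.pi, n_samples))           # signal, amplitude 1
epochs = erp + rng.standard_normal((n_epochs, n_samples))  # noise sd = 1

# Averaging N epochs shrinks the noise sd by about 1/sqrt(N):
# here 1/sqrt(400) = 0.05, versus 1.0 in a single epoch.
average = epochs.mean(axis=0)
residual_sd = float((average - erp).std())
```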
Melson, Ambrose J; Monk, Rebecca Louise; Heim, Derek
2016-12-01
Data-driven student drinking norms interventions are based on reported normative overestimation of the extent and approval of an average student's drinking. Self-reported differences between personal and perceived normative drinking behaviors and attitudes are taken at face value as evidence of actual levels of overestimation. This study investigates whether commonly used data collection methods and socially desirable responding (SDR) may inadvertently impede establishing "objective" drinking norms. U.K. students (N = 421; 69% female; mean age 20.22 years [SD = 2.5]) were randomly assigned to 1 of 3 versions of a drinking norms questionnaire: the standard multi-target questionnaire assessed respondents' drinking attitudes and behaviors (frequency of consumption, heavy drinking, units on a typical occasion) as well as drinking attitudes and behaviors for an "average student." Two deconstructed versions of this questionnaire assessed identical behaviors and attitudes for participants themselves or an "average student." The Balanced Inventory of Desirable Responding was also administered. Students who answered questions about themselves and peers reported more extreme perceived drinking attitudes for the average student compared with those reporting solely on the "average student." Personal and perceived reports of drinking behaviors did not differ between multi-target and single-target versions of the questionnaire. Among those who completed the multi-target questionnaire, after controlling for demographics and weekly drinking, SDR was positively related to the magnitude of difference between students' own reported behaviors/attitudes and those perceived for the average student. Standard methodological practices and socially desirable responding may be sources of bias in peer norm overestimation research. Copyright © 2016 by the Research Society on Alcoholism.
Time-dependent gravity in Southern California, May 1974 to April 1979
NASA Technical Reports Server (NTRS)
Whitcomb, J. H.; Franzen, W. O.; Given, J. W.; Pechmann, J. C.; Ruff, L. J.
1980-01-01
The Southern California gravity survey, begun in May 1974 to obtain high spatial and temporal density gravity measurements to be coordinated with long-baseline three dimensional geodetic measurements of the Astronomical Radio Interferometric Earth Surveying project, is presented. Gravity data was obtained from 28 stations located in and near the seismically active San Gabriel section of the Southern California Transverse Ranges and adjoining San Andreas Fault at intervals of one to two months using gravity meters relative to a base station standard meter. A single-reading standard deviation of 11 microGal is obtained which leads to a relative deviation of 16 microGal between stations, with data averaging reducing the standard error to 2 to 3 microGal. The largest gravity variations observed are found to correlate with nearby well water variations and smoothed rainfall levels, indicating the importance of ground water variations to gravity measurements. The largest earthquake to occur during the survey, which extended to April, 1979, is found to be accompanied in the station closest to the earthquake by the largest measured gravity changes that cannot be related to factors other than tectonic distortion.
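The reduction from an 11-microGal single-reading standard deviation to a 2-3 microGal standard error follows from averaging repeated readings. A minimal sketch (the number of readings per station is an assumption chosen to match the quoted magnitudes):

```python
import math

def standard_error_of_mean(single_reading_sd, n):
    """Standard error of an averaged value: sigma / sqrt(n)."""
    return single_reading_sd / math.sqrt(n)

# An 11-microGal single-reading sd averaged over roughly 15-30 readings
# gives the 2-3 microGal standard error reported in the abstract.
se = standard_error_of_mean(11.0, 20)
```

Note also that the 16-microGal relative deviation between stations is consistent with differencing two independent 11-microGal readings (11 x sqrt(2) is about 16).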
Fu, Xu Wei; Wu, Yan Jiao; Qu, Jin Rong; Yang, Hong
2012-07-01
A molecularly imprinted polymer (MIP) was prepared using chlorsulfuron (CS), a herbicide, as the template molecule, methacrylic acid as a functional monomer, ethylene glycol dimethacrylate (EDMA) as a cross-linker, methanol and toluene as porogens, and 2,2'-azobisisobutyronitrile as an initiator. The binding behaviors of the template chlorsulfuron and its analog on the MIP were evaluated by equilibrium adsorption experiments, which showed that the MIP particles had specific affinity for the template CS. Solid-phase extraction (SPE) with the chlorsulfuron molecularly imprinted polymer as an adsorbent was investigated. The optimum loading, washing, and eluting conditions for chlorsulfuron molecularly imprinted polymer solid-phase extraction (CS-MISPE) were established. The optimized CS-MISPE procedure was developed to enrich and clean up chlorsulfuron residues in water, soils, and wheat plants. Concentrations of chlorsulfuron in the samples were analyzed by HPLC-UVD. The average recoveries of CS spiked at 0.05~0.2 mg L(-1) in water were 90.2~93.3%, with the relative standard deviation (RSD) being 2.0~3.9% (n=3). The average recoveries of 1.0 mL CS standard spiked at 0.1~0.5 mg L(-1) into 10 g soil were 91.1~94.7%, with the RSD being 3.1~5.6% (n=3). The average recoveries of 1.0 mL CS standard spiked at 0.1~0.5 mg L(-1) into 5 g wheat plant were 82.3~94.3%, with the RSD being 2.9~6.8% (n=3). Overall, our study provides a sensitive and cost-effective method for accurate determination of CS residues in water, soils, and plants.
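The recovery and RSD figures above come from a standard spike-recovery calculation over replicates. A minimal sketch (the triplicate values below are hypothetical, not taken from the study):

```python
import statistics

def recovery_and_rsd(measured, spiked):
    """Average spike recovery (%) and relative standard deviation (%)
    from replicate measured concentrations at a known spike level."""
    recoveries = [100.0 * m / spiked for m in measured]
    mean_rec = statistics.mean(recoveries)
    rsd = 100.0 * statistics.stdev(recoveries) / mean_rec
    return mean_rec, rsd

# A hypothetical triplicate (n = 3) at a 0.1 mg/L spike level:
mean_rec, rsd = recovery_and_rsd([0.092, 0.090, 0.094], 0.1)
```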
Code of Federal Regulations, 2012 CFR
2012-10-01
... or use of fuel injection), and catalyst usage. Limited product line light truck means a light truck..., DEPARTMENT OF TRANSPORTATION LIGHT TRUCK FUEL ECONOMY STANDARDS § 533.4 Definitions. (a) Statutory terms. (1) The terms average fuel economy, average fuel economy standard, fuel economy, import, manufacture...
Code of Federal Regulations, 2014 CFR
2014-10-01
... or use of fuel injection), and catalyst usage. Limited product line light truck means a light truck..., DEPARTMENT OF TRANSPORTATION LIGHT TRUCK FUEL ECONOMY STANDARDS § 533.4 Definitions. (a) Statutory terms. (1) The terms average fuel economy, average fuel economy standard, fuel economy, import, manufacture...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pierce, Karisa M.; Wright, Bob W.; Synovec, Robert E.
2007-02-02
First, simulated chromatographic separations with declining retention time precision were used to study the performance of the piecewise retention time alignment algorithm and to demonstrate an unsupervised parameter optimization method. The average correlation coefficient between the first chromatogram and every other chromatogram in the data set was used to optimize the alignment parameters. This correlation method does not require a training set, so it is unsupervised and automated. This frees the user from needing to provide class information and makes the alignment algorithm more generally applicable to classifying completely unknown data sets. For a data set of simulated chromatograms where the average chromatographic peak was shifted past two neighboring peaks between runs, the average correlation coefficient of the raw data was 0.46 ± 0.25. After automated, optimized piecewise alignment, the average correlation coefficient was 0.93 ± 0.02. Additionally, a relative shift metric and principal component analysis (PCA) were used to independently quantify and categorize the alignment performance, respectively. The relative shift metric was defined as four times the standard deviation of a given peak's retention time in all of the chromatograms, divided by the peak-width-at-base. The raw simulated data sets that were studied contained peaks with average relative shifts ranging between 0.3 and 3.0. Second, a "real" data set of gasoline separations was gathered using three different GC methods to induce severe retention time shifting. In these gasoline separations, retention time precision improved ~8 fold following alignment. Finally, piecewise alignment and the unsupervised correlation optimization method were applied to severely shifted GC separations of reformate distillation fractions. The effect of piecewise alignment on peak heights and peak areas is also reported.
Piecewise alignment either did not change the peak height, or caused it to slightly decrease. The average relative difference in peak height after piecewise alignment was –0.20%. Piecewise alignment caused the peak areas to either stay the same, slightly increase, or slightly decrease. The average absolute relative difference in area after piecewise alignment was 0.15%.
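The two diagnostics described in this abstract are simple to compute. Below is a minimal sketch (an illustrative reconstruction, not the authors' code) of the average-correlation objective used to tune the alignment parameters and of the relative shift metric, assuming each chromatogram is an equal-length intensity vector:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def average_correlation(chromatograms):
    """Average correlation of every chromatogram against the first one;
    used as the unsupervised objective to maximize when tuning the
    piecewise alignment parameters."""
    ref = chromatograms[0]
    others = chromatograms[1:]
    return sum(pearson(ref, c) for c in others) / len(others)

def relative_shift(retention_times, peak_width_at_base):
    """Relative shift metric from the abstract: four times the standard
    deviation of a peak's retention time across runs, divided by the
    peak width at base."""
    n = len(retention_times)
    m = sum(retention_times) / n
    sd = math.sqrt(sum((t - m) ** 2 for t in retention_times) / (n - 1))
    return 4.0 * sd / peak_width_at_base
```

A grid search over the alignment parameters would simply keep the setting with the highest `average_correlation` over the aligned set.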
Relative air temperature analysis external building on Gowa Campus
NASA Astrophysics Data System (ADS)
Mustamin, Tayeb; Rahim, Ramli; Baharuddin; Jamala, Nurul; Kusno, Asniawaty
2018-03-01
This study analyzes the temperature and relative humidity of the air outside the building. Data were retrieved from a Vaisala weather-monitoring device, an RTU (Remote Terminal Unit) that is part of an AWS (Automatic Weather Station). The data were then processed and analyzed with Microsoft Excel and presented as fluctuation graphs showing the average, standard deviation, maximum, and minimum values. The processed data were grouped daily and monthly, at 30-minute intervals. The results show that outside air temperatures in March, April, May, and September 2016 fell within the thermal comfort zone of the SNI standard (Indonesian National Standard) only at 06.00-10.00. From late March to early April, the thermal comfort zone also occurred at 15.30-18.00. The highest maximum air temperature occurred in September 2016 at 11.01-11.30, and the lowest minimum value also in September 2016, at 06.00-06.30. Further analysis shows, for each month, the degree to which the data conform to the SNI thermal comfort zone.
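The per-interval summary statistics described here (mean, standard deviation, maximum, and minimum for each 30-minute interval) can be sketched as follows; this is an illustrative reconstruction, not the authors' Excel workflow:

```python
from statistics import mean, stdev
from datetime import datetime

def half_hour_bin(ts):
    """Key a timestamp to its 30-minute interval of the day, e.g. 06.00-06.30."""
    return (ts.hour, 0 if ts.minute < 30 else 30)

def interval_summary(readings):
    """readings: list of (datetime, temperature) pairs. Returns, for each
    30-minute interval, the mean, standard deviation, maximum and minimum,
    mirroring the fluctuation graphs described in the abstract."""
    bins = {}
    for ts, temp in readings:
        bins.setdefault(half_hour_bin(ts), []).append(temp)
    return {
        k: {"mean": mean(v),
            "sd": stdev(v) if len(v) > 1 else 0.0,
            "max": max(v),
            "min": min(v)}
        for k, v in bins.items()
    }
```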
Cywinska, A; Hannan, M A; Kevan, P G; Roughley, R E; Iranpour, M; Hunter, F F
2010-12-01
This paper reports the first tests of the suitability of the standardized mitochondrial cytochrome c oxidase subunit I (COI) barcoding system for the identification of Canadian deerflies and horseflies. Two additional mitochondrial molecular markers were used to determine whether unambiguous species recognition in tabanids can be achieved. Our 332 Canadian tabanid samples yielded 650 sequences from five genera and 42 species. Standard COI barcodes demonstrated a strong A + T bias (mean 68.1%), especially at third codon positions (mean 93.0%). Our preliminary test of this system showed that the standard COI barcode worked well for Canadian Tabanidae: the target DNA can be easily recovered from small amounts of insect tissue and aligned for all tabanid taxa. Each tabanid species possessed distinctive sets of COI haplotypes which discriminated well among species. Average conspecific Kimura two-parameter (K2P) divergence (0.49%) was 12 times lower than the average divergence within species. Both the neighbour-joining and the Bayesian methods produced trees with identical monophyletic species groups. Two species, Chrysops dawsoni Philip and Chrysops montanus Osten Sacken (Diptera: Tabanidae), showed relatively deep intraspecific sequence divergences (∼ 10 times the average) for all three mitochondrial gene regions analysed. We suggest provisional differentiation of Ch. montanus into two haplotypes, namely, Ch. montanus haplomorph 1 and Ch. montanus haplomorph 2, both defined by their molecular sequences and by newly discovered differences in structural features near their ocelli. © 2010 Brock University. Medical and Veterinary Entomology © 2010 The Royal Entomological Society.
Cost-effectiveness of the stream-gaging program in Missouri
Waite, L.A.
1987-01-01
This report documents the results of an evaluation of the cost-effectiveness of the 1986 stream-gaging program in Missouri. Alternative methods of developing streamflow information and cost-effective resource allocation were used to evaluate the Missouri program. Alternative methods were considered statewide, but the cost-effective resource allocation study was restricted to the area covered by the Rolla field headquarters. The average standard error of estimate for records of instantaneous discharge was 17 percent; assuming the 1986 budget and operating schedule, it was shown that this overall degree of accuracy could be improved to 16 percent by altering the 1986 schedule of station visitations. A minimum budget of $203,870, with a corresponding average standard error of estimate of 17 percent, is required to operate the 1986 program for the Rolla field headquarters; a budget of less than this would not permit proper service and maintenance of the stations or adequate definition of stage-discharge relations. The maximum budget analyzed was $418,870, which resulted in an average standard error of estimate of 14 percent. Improved instrumentation can have a positive effect on streamflow uncertainties by decreasing lost records. An earlier study of data uses found that data uses were sufficient to justify continued operation of all stations. One of the stations investigated, Current River at Doniphan (07068000), was suitable for the application of alternative methods for simulating discharge records. However, the station was continued because of data use requirements. (Author's abstract)
June and August median streamflows estimated for ungaged streams in southern Maine
Lombard, Pamela J.
2010-01-01
Methods for estimating June and August median streamflows were developed for ungaged, unregulated streams in southern Maine. The methods apply to streams with drainage areas ranging in size from 0.4 to 74 square miles, with percentage of basin underlain by a sand and gravel aquifer ranging from 0 to 84 percent, and with distance from the centroid of the basin to a Gulf of Maine line paralleling the coast ranging from 14 to 94 miles. Equations were developed with data from 4 long-term continuous-record streamgage stations and 27 partial-record streamgage stations. Estimates of median streamflows at the continuous-record and partial-record stations are presented. A mathematical technique for estimating standard low-flow statistics, such as June and August median streamflows, at partial-record streamgage stations was applied by relating base-flow measurements at these stations to concurrent daily streamflows at nearby long-term (at least 10 years of record) continuous-record streamgage stations (index stations). Weighted least-squares regression analysis (WLS) was used to relate estimates of June and August median streamflows at streamgage stations to basin characteristics at these same stations to develop equations that can be used to estimate June and August median streamflows on ungaged streams. WLS accounts for different periods of record at the gaging stations. Three basin characteristics-drainage area, percentage of basin underlain by a sand and gravel aquifer, and distance from the centroid of the basin to a Gulf of Maine line paralleling the coast-are used in the final regression equation to estimate June and August median streamflows for ungaged streams. The three-variable equation to estimate June median streamflow has an average standard error of prediction from -35 to 54 percent. The three-variable equation to estimate August median streamflow has an average standard error of prediction from -45 to 83 percent. 
Simpler one-variable equations that use only drainage area to estimate June and August median streamflows were developed for use when less accuracy is acceptable. These equations have average standard errors of prediction from -46 to 87 percent and from -57 to 133 percent, respectively.
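A one-variable model of the kind described (drainage area only, fit by weighted least squares in log space, with weights reflecting record length) can be sketched as follows. The function names, the log10 transform, and the choice of weights are illustrative assumptions, not the report's actual procedure:

```python
import math

def weighted_linear_fit(x, y, w):
    """Weighted least-squares fit of y = a + b*x with weights w
    (e.g. years of record at each streamgage, so longer records count
    more, as in the WLS approach described in the abstract)."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    b = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y)) \
        / sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    a = my - b * mx
    return a, b

def predict_median_flow(a, b, drainage_area):
    """One-variable model of the form Q = 10**a * A**b, i.e. a line
    fit in log10 space: log10(Q) = a + b*log10(A)."""
    return 10 ** a * drainage_area ** b
```

To fit, pass `x = [math.log10(A) for A in areas]` and `y = [math.log10(Q) for Q in median_flows]`; the three-variable equations in the report would add the aquifer and coast-distance terms in the same way.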
Cost-effectiveness of the stream-gaging program in New Jersey
Schopp, R.D.; Ulery, R.L.
1984-01-01
The results of a study of the cost-effectiveness of the stream-gaging program in New Jersey are documented. This study is part of a 5-year nationwide analysis undertaken by the U.S. Geological Survey to define and document the most cost-effective means of furnishing streamflow information. This report identifies the principal uses of the data and relates those uses to funding sources; applies, at selected stations, alternative less costly methods (that is, flow routing and regression analysis) for furnishing the data; and defines a strategy for operating the program that minimizes uncertainty in the streamflow data for specific operating budgets. Uncertainty in streamflow data is primarily a function of the percentage of missing record and the frequency of discharge measurements. In this report, 101 continuous stream gages and 73 crest-stage or stage-only gages are analyzed. A minimum budget of $548,000 is required to operate the present stream-gaging program in New Jersey with an average standard error of 27.6 percent. The maximum budget analyzed was $650,000, which resulted in an average standard error of 17.8 percent. The 1983 budget of $569,000 resulted in a standard error of 24.9 percent under present operating policy. (USGS)
Doyle, E; Fowles, S E; Summerfield, S; White, T J
2002-03-25
A method was developed for the determination of tafenoquine (I) in human plasma using high-performance liquid chromatography-tandem mass spectrometry. Prior to analysis, the protein in plasma samples was precipitated with methanol containing [2H3(15N)]tafenoquine (II) to act as an internal standard. The supernatant was injected onto a Genesis-C18 column without any further clean-up. The mass spectrometer was operated in the positive ion mode, employing a heat assisted nebulisation, electrospray interface. Ions were detected in multiple reaction monitoring mode. The assay required 50 microl of plasma and was precise and accurate within the range 2 to 500 ng/ml. The average within-run and between-run relative standard deviations were < 7% at 2 ng/ml and greater concentrations. The average accuracy of validation standards was generally within +/- 4% of the nominal concentration. There was no evidence of instability of I in human plasma following three complete freeze-thaw cycles and samples can safely be stored for at least 8 months at approximately -70 degrees C. The method was very robust and has been successfully applied to the analysis of clinical samples from patients and healthy volunteers dosed with I.
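The precision and accuracy figures quoted in this validation (within-run/between-run relative standard deviation, and accuracy as deviation from the nominal concentration) are standard calculations; a minimal sketch:

```python
from statistics import mean, stdev

def relative_standard_deviation(values):
    """Relative standard deviation (coefficient of variation) in percent,
    the precision measure quoted in the abstract."""
    return 100.0 * stdev(values) / mean(values)

def accuracy_percent(measured, nominal):
    """Accuracy as the percent deviation of the mean measured
    concentration from the nominal (spiked) concentration."""
    return 100.0 * (mean(measured) - nominal) / nominal
```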
Zaugg, Steven D.; Smith, Steven G.; Schroeder, Michael P.
2006-01-01
A method for the determination of 69 compounds typically found in domestic and industrial wastewater is described. The method was developed in response to increasing concern over the impact of endocrine-disrupting chemicals on aquatic organisms in wastewater. This method also is useful for evaluating the effects of combined sanitary and storm-sewer overflow on the water quality of urban streams. The method focuses on the determination of compounds that are indicators of wastewater or have endocrine-disrupting potential. These compounds include the alkylphenol ethoxylate nonionic surfactants, food additives, fragrances, antioxidants, flame retardants, plasticizers, industrial solvents, disinfectants, fecal sterols, polycyclic aromatic hydrocarbons, and high-use domestic pesticides. Wastewater compounds in whole-water samples were extracted using continuous liquid-liquid extractors and methylene chloride solvent, and then determined by capillary-column gas chromatography/mass spectrometry. Recoveries in reagent-water samples fortified at 0.5 microgram per liter averaged 72 percent ± 8 percent relative standard deviation. The concentrations of 21 compounds are always reported as estimated because method recovery was less than 60 percent, variability was greater than 25 percent relative standard deviation, or standard reference compounds were prepared from technical mixtures. Initial method detection limits averaged 0.18 microgram per liter. Samples were preserved by adding 60 grams of sodium chloride and stored at 4 degrees Celsius. The laboratory established a sample holding-time limit prior to sample extraction of 14 days from the date of collection.
Wong, Mitchell D; Strom, Danielle; Guerrero, Lourdes R; Chung, Paul J; Lopez, Desiree; Arellano, Katherine; Dudovitz, Rebecca N
2017-08-01
We examined whether standardized test scores and grades are related to risky behaviors among low-income minority adolescents and whether social networks and social-emotional factors explained those relationships. We analyzed data from 929 high school students exposed by natural experiment to high- or low-performing academic environments in Los Angeles. We collected information on grade point average (GPA), substance use, sexual behaviors, participation in fights, and carrying a weapon from face-to-face interviews and obtained California math and English standardized test results. Logistic regression and mediation analyses were used to examine the relationship between achievement and risky behaviors. Better GPA and California standardized test scores were strongly associated with lower rates of substance use, high-risk sexual behaviors, and fighting. The unadjusted relative odds of monthly binge drinking was 0.72 (95% confidence interval, 0.56-0.93) for 1 SD increase in standardized test scores and 0.46 (95% confidence interval, 0.29-0.74) for GPA of B- or higher compared with C+ or lower. Most associations disappeared after controlling for social-emotional and social network factors. Averaged across the risky behaviors, mediation analysis revealed social-emotional factors accounted for 33% of the relationship between test scores and risky behaviors and 43% of the relationship between GPA with risky behaviors. Social network characteristics accounted for 31% and 38% of the relationship between behaviors with test scores and GPA, respectively. Demographic factors, parenting, and school characteristics were less important explanatory factors. Social-emotional factors and social network characteristics were the strongest explanatory factors of the achievement-risky behavior relationship and might be important to understanding the relationship between academic achievement and risky behaviors. Published by Elsevier Inc.
Code of Federal Regulations, 2014 CFR
2014-10-01
..., DEPARTMENT OF TRANSPORTATION EXEMPTIONS FROM AVERAGE FUEL ECONOMY STANDARDS § 525.4 Definitions. (a... section 501 of the Act. (2) The terms average fuel economy, fuel economy, and model type are used as... economy standard are requested under this part; Production mix means the number of passenger automobiles...
49 CFR 525.6 - Requirements for petition.
Code of Federal Regulations, 2010 CFR
2010-10-01
... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION EXEMPTIONS FROM AVERAGE FUEL ECONOMY STANDARDS § 525.6 Requirements... arguments of the petitioner supporting the exemption and alternative average fuel economy standard requested... analyses used to develop that information and data. No documents may be incorporated by reference in a...
DOT National Transportation Integrated Search
2008-06-01
The National Highway Traffic Safety Administration (NHTSA) has prepared this Draft Environmental Impact Statement (DEIS) to disclose and analyze the potential environmental impacts of the proposed new Corporate Average Fuel Economy (CAFE) standards a...
NASA Astrophysics Data System (ADS)
Stolz, Douglas C.; Rutledge, Steven A.; Pierce, Jeffrey R.; van den Heever, Susan C.
2017-07-01
The objective of this study is to determine the relative contributions of normalized convective available potential energy (NCAPE), cloud condensation nuclei (CCN) concentrations, warm cloud depth (WCD), vertical wind shear (SHEAR), and environmental relative humidity (RH) to the variability of lightning and radar reflectivity within convective features (CFs) observed by the Tropical Rainfall Measuring Mission (TRMM) satellite. Our approach incorporates multidimensional binned representations of observations of CFs and modeled thermodynamics, kinematics, and CCN as inputs to develop approximations for total lightning density (TLD) and the average height of 30 dBZ radar reflectivity (AVGHT30). The results suggest that TLD and AVGHT30 increase with increasing NCAPE, increasing CCN, decreasing WCD, increasing SHEAR, and decreasing RH. Multiple-linear approximations for lightning and radar quantities using the aforementioned predictors account for significant portions of the variance in the binned data set (R2 ≈ 0.69-0.81). The standardized weights attributed to CCN, NCAPE, and WCD are largest, the standardized weight of RH varies relative to other predictors, while the standardized weight for SHEAR is comparatively small. We investigate these statistical relationships for collections of CFs within various geographic areas and compare the aerosol (CCN) and thermodynamic (NCAPE and WCD) contributions to variations in the CF population in a partial sensitivity analysis based on multiple-linear regression approximations computed herein. A global lightning parameterization is developed; the average difference between predicted and observed TLD decreases from +21.6 to +11.6% when using a hybrid approach to combine separate approximations over continents and oceans, thus highlighting the need for regionally targeted investigations in the future.
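Standardized regression weights of the kind compared in this abstract can be computed from pairwise correlations; for the two-predictor case there is a closed form. A minimal sketch (illustrative, not the study's code, which uses five predictors):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def standardized_weights(y, x1, x2):
    """Standardized (beta) weights for a two-predictor linear model,
    obtained from the pairwise correlations. Because they are unitless,
    they are comparable across predictors with different units, which is
    how the abstract ranks CCN, NCAPE, WCD, SHEAR, and RH."""
    ry1, ry2, r12 = pearson(y, x1), pearson(y, x2), pearson(x1, x2)
    b1 = (ry1 - ry2 * r12) / (1 - r12 ** 2)
    b2 = (ry2 - ry1 * r12) / (1 - r12 ** 2)
    return b1, b2
```

With more than two predictors the same quantities come from ordinary least squares on z-scored variables.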
Code of Federal Regulations, 2010 CFR
2010-07-01
... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average and...
Code of Federal Regulations, 2011 CFR
2011-07-01
... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average and...
Code of Federal Regulations, 2013 CFR
2013-07-01
... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average and...
Code of Federal Regulations, 2014 CFR
2014-07-01
... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average and...
Toward Developing a Relative Value Scale for Medical and Surgical Services
Hsiao, William C.; Stason, William B.
1979-01-01
A methodology has been developed to determine the relative values of surgical procedures and medical office visits on the basis of resource costs. The time taken to perform the service and the complexity of that service are the most critical variables. Interspecialty differences in the opportunity costs of training and overhead expenses are also considered. Results indicate some important differences between the relative values based on resource costs and existing standards, prevailing Medicare charges, and California Relative Value Study values. Most dramatic are discrepancies between existing reimbursement levels and resource cost values for office visits compared to surgical procedures. These vary from procedure to procedure and specialty to specialty but indicate that, on the average, office visits are undervalued (or surgical procedures overvalued) by four- to five-fold. After standardizing the variations in the complexity of different procedures, the hourly reimbursement rate in 1978 ranged from $40 for a general practitioner to $200 for surgical specialists. PMID:10309112
Beljaars, P R; Van Dijk, R; Jonker, K M; Schout, L J
1998-01-01
An interlaboratory study of the liquid chromatographic (LC) determination of histamine in fish, sauerkraut, and wine was conducted. Comminuted and homogenized samples were suspended in water followed by clarification of extracts with perchloric acid, filtration, and dilution with water. After LC separation on a reversed-phase C18 column with phosphate buffer (pH 3.0)-acetonitrile (875 + 125, v/v) as mobile phase, histamine was measured fluorometrically (excitation, 340 nm; emission, 455 nm) in samples and standards after postcolumn derivatization with o-phthaldialdehyde (OPA). Fourteen samples (including 6 blind duplicates and 1 split level) containing histamine at about 10-400 mg/kg or mg/L were analyzed singly according to the proposed procedure by 11 laboratories. Results from one participant were excluded from statistical analysis. For all samples analyzed, repeatability relative standard deviations varied from 2.1 to 5.6%, and reproducibility relative standard deviations ranged from 2.2 to 7.1%. Averaged recoveries of histamine for this concentration range varied from 94 to 100%.
Rauch, Geraldine; Brannath, Werner; Brückner, Matthias; Kieser, Meinhard
2018-05-01
In many clinical trial applications, the endpoint of interest corresponds to a time-to-event endpoint. In this case, group differences are usually expressed by the hazard ratio. Group differences are commonly assessed by the logrank test, which is optimal under the proportional hazard assumption. However, there are many situations in which this assumption is violated. Especially in applications where a full population and several subgroups or a composite time-to-first-event endpoint and several components are considered, the proportional hazard assumption usually does not simultaneously hold true for all test problems under investigation. As an alternative effect measure, Kalbfleisch and Prentice proposed the so-called 'average hazard ratio'. The average hazard ratio is based on a flexible weighting function to modify the influence of time and has a meaningful interpretation even in the case of non-proportional hazards. Despite this favorable property, it is hardly ever used in practice, whereas the standard hazard ratio is commonly reported in clinical trials regardless of whether the proportional hazard assumption holds true or not. There exist two main approaches to construct corresponding estimators and tests for the average hazard ratio, where the first relies on weighted Cox regression and the second on a simple plug-in estimator. The aim of this work is to give a systematic comparison of these two approaches and the standard logrank test for different time-to-event settings with proportional and non-proportional hazards and to illustrate the pros and cons in application. We conduct a systematic comparative study based on Monte-Carlo simulations and by a real clinical trial example. Our results suggest that the properties of the average hazard ratio depend on the underlying weighting function. The two approaches to construct estimators and related tests show very similar performance for adequately chosen weights.
In general, the average hazard ratio defines a more valid effect measure than the standard hazard ratio under non-proportional hazards and the corresponding tests provide a power advantage over the common logrank test. As non-proportional hazards are often met in clinical practice and the average hazard ratio tests often outperform the common logrank test, this approach should be used more routinely in applications. Schattauer GmbH.
Will Commodity Properties Affect Seller's Creditworthy: Evidence in C2C E-commerce Market in China
NASA Astrophysics Data System (ADS)
Peng, Hui; Ling, Min
This paper finds that credit rating levels differ significantly among sub-commodity markets in e-commerce, which leaves room for sellers to obtain a higher credit rating by entering businesses with a higher average credit level before committing fraud. To study the influence of commodity properties on credit rating, this paper analyzes how commodity properties affect the average credit rating through the degree of information asymmetry, the returns and costs of fraud, credibility perception, and fraud tolerance. The empirical study shows that delivery terms, average trading volume, average price, and complaint possibility have decisive impacts on credit performance; brand market share, degree of standardization, and degree of imitation also have a less significant effect on credit rating. Finally, this paper suggests that important commodity properties should be incorporated into the reputation system to prevent credit-rating arbitrage, in which sellers move into low-rating commodities after being assigned a high credit rating.
Outcome quality standards in pancreatic oncologic surgery in Spain.
Sabater, Luis; Mora, Isabel; Gámez Del Castillo, Juan Manuel; Escrig-Sos, Javier; Muñoz-Forner, Elena; Garcés-Albir, Marina; Dorcaratto, Dimitri; Ortega, Joaquín
2018-05-18
To establish quality standards in oncologic surgery is a complex but necessary challenge to improve surgical outcomes. Unlike other tumors, there are no well-defined quality standards in pancreatic cancer. The aim of this study is to identify quality indicators in pancreatic oncologic surgery in Spain as well as their acceptable limits of variability. Quality indicators were selected based on clinical practice guidelines, consensus conferences, reviews and national publications on oncologic pancreatic surgery between the years 2000 and 2016. Variability margins for each indicator have been determined by statistical process control techniques and graphically represented with the 99.8 and 95% confidence intervals above and below the weighted average according to sample size. The following indicators have been determined with their weighted average and acceptable quality limits: resectability rate 71% (>58%), morbidity 58% (<73%), mortality 4% (<10%), biliary leak 6% (<14%), pancreatic fistula rate 18% (<29%), hemorrhage 11% (<21%), reoperation rate 11% (<20%) and mean hospital stay (<21 days). To date, few related series have been published, and they present important methodological limitations. Among the selected indicators, the morbidity and mortality quality limits have come out higher than those obtained in international standards. It is necessary for Spanish pancreatic surgeons to adopt homogeneous criteria regarding indicators and their definitions to allow for the comparison of their results. Copyright © 2018 AEC. Publicado por Elsevier España, S.L.U. All rights reserved.
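Control limits of the kind described (95% and 99.8% intervals around the weighted average, widening as a centre's case volume shrinks) can be sketched for a proportion-type indicator as follows; this is a generic statistical-process-control funnel calculation, not the authors' exact method:

```python
import math

def funnel_limits(p_bar, n, z):
    """Upper and lower control limits for a proportion indicator around
    the weighted average p_bar, for a centre with n cases. z ~ 1.96 gives
    the 95% limits and z ~ 3.09 the 99.8% limits used in the abstract's
    statistical-process-control charts. Limits are clipped to [0, 1]."""
    half_width = z * math.sqrt(p_bar * (1 - p_bar) / n)
    return max(0.0, p_bar - half_width), min(1.0, p_bar + half_width)
```

For example, with the pooled mortality of 4%, a hypothetical centre with 150 resections would be flagged only if its rate fell outside `funnel_limits(0.04, 150, 3.09)`.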
Meijun Li,; Ellis, Geoffrey S.
2015-01-01
Dibenzofuran (DBF), its alkylated homologues, and benzo[b]naphthofurans (BNFs) are common oxygen-heterocyclic aromatic compounds in crude oils and source rock extracts. A series of positional isomers of alkyldibenzofuran and benzo[b]naphthofuran were identified in mass chromatograms by comparison with internal standards and standard retention indices. The response factors of dibenzofuran in relation to internal standards were obtained by gas chromatography-mass spectrometry analyses of a set of mixed solutions with different concentration ratios. Perdeuterated dibenzofuran and dibenzothiophene are optimal internal standards for quantitative analyses of furan compounds in crude oils and source rock extracts. The average concentration of the total DBFs in oils derived from siliciclastic lacustrine rock extracts from the Beibuwan Basin, South China Sea, was 518 μg/g, which is about 5 times that observed in the oils from carbonate source rocks in the Tarim Basin, Northwest China. The BNFs occur ubiquitously in source rock extracts and related oils of various origins. The results of this work suggest that the relative abundance of benzo[b]naphthofuran isomers, that is, the benzo[b]naphtho[2,1-d]furan/{benzo[b]naphtho[2,1-d]furan + benzo[b]naphtho[1,2-d]furan} ratio, may be a potential molecular geochemical parameter to indicate oil migration pathways and distances.
NASA Astrophysics Data System (ADS)
Akimoto, Takuma; Yamamoto, Eiji
2016-12-01
Local diffusion coefficients in disordered systems such as spin glass systems and living cells are highly heterogeneous and may change over time. Such a time-dependent and spatially heterogeneous environment results in irreproducibility of single-particle-tracking measurements. Irreproducibility of time-averaged observables has been theoretically studied in the context of weak ergodicity breaking in stochastic processes. Here, we provide rigorous descriptions of equilibrium and non-equilibrium diffusion processes for the annealed transit time model, which is a heterogeneous diffusion model in living cells. We give analytical solutions for the mean square displacement (MSD) and the relative standard deviation of the time-averaged MSD for equilibrium and non-equilibrium situations. We find that the time-averaged MSD grows linearly with time and that the time-averaged diffusion coefficients are intrinsically random (irreproducible) even in the long-time measurements in non-equilibrium situations. Furthermore, the distribution of the time-averaged diffusion coefficients converges to a universal distribution in the sense that it does not depend on initial conditions. Our findings pave the way for a theoretical understanding of distributional behavior of the time-averaged diffusion coefficients in disordered systems.
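The time-averaged MSD analyzed in this abstract is defined from a single trajectory; a minimal sketch for a 1-D trajectory sampled at unit time steps:

```python
def time_averaged_msd(x, lag):
    """Time-averaged mean square displacement of a 1-D trajectory x:
        TAMSD(lag) = (1/(T - lag)) * sum_t (x[t + lag] - x[t])**2
    The abstract studies how this quantity, and the diffusion coefficient
    read off from its slope in lag, scatters between trajectories."""
    T = len(x)
    return sum((x[t + lag] - x[t]) ** 2 for t in range(T - lag)) / (T - lag)
```

The trajectory-to-trajectory relative standard deviation of `time_averaged_msd` at a fixed lag is the irreproducibility measure the authors compute analytically.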
NASA Astrophysics Data System (ADS)
Huang, Fang
This study examines the coherence of elementary science content standards curricula in the People's Republic of China and the United States of America. Three aspects of curriculum coherence are examined: topic inclusion, topic duration, and curriculum structure. Specifically, the study centers on the following research questions: (1) What science knowledge is intended for elementary students in each country? (2) How long does each topic stay in the curriculum? (3) How do these topics sequence and connect with each other? (4) And finally, what are the implications for elementary science curriculum development? Four intended science curriculum frameworks were selected for each country. The technique of General Topic Trace Mapping (GTTM) was applied to generate composite science content standards from the selected curricula for each country. In comparison, the composite USA and Chinese elementary science content standards form a stark contrast: a broad collection of topics vs. a focus on a set of key topics at each grade; an average topic duration of 3.4 years vs. 1.68 years; a stress on connections among related ideas vs. a discrete disposition of related ideas; a laundry-list organization of topics vs. a hierarchical organization of science topics. In analyzing the interrelationships among these characteristics, this study reached the following implications for developing coherent science content standards: First, for the overall curriculum, topic inclusion should reflect the logical and sequential nature of knowledge in science. Second, at each grade level, fewer, rather than more, science topics should be the focus. Third, however, a balance should be struck between curriculum breadth and depth by considering student needs, subject matter, and child development. Fourth, the topic duration should not be too long.
The lengthy topic duration tends to undermine links among ideas as well as lead to superficial treatment of topics.
Bovolenta, Tânia M; de Azevedo Silva, Sônia Maria Cesar; Saba, Roberta Arb; Borges, Vanderci; Ferraz, Henrique Ballalai; Felicio, Andre C
2017-01-01
Background: Although Parkinson’s disease is the second most prevalent neurodegenerative disease worldwide, its cost in Brazil – South America’s largest country – is unknown. Objective: The goal of this study was to calculate the average annual cost of Parkinson’s disease in the city of São Paulo (Brazil), with a focus on disease-related motor symptoms. Subjects and methods: This was a retrospective, cross-sectional analysis using a bottom-up approach (ie, from the society’s perspective). Patients (N=260) at two tertiary public health centers, who were residents of the São Paulo metropolitan area, completed standardized questionnaires regarding their disease-related expenses. We used simple and multiple generalized linear models to assess the correlations between total cost and patient-related, as well as disease-related, variables. Results: The total average annual cost of Parkinson’s disease was estimated at US$5,853.50 per person, including US$3,172.00 in direct costs (medical and nonmedical) and US$2,681.50 in indirect costs. Costs were directly correlated with disease severity (including the degree of motor symptoms), patients’ age, and time since disease onset. Conclusion: In this study, we determined the cost of Parkinson’s disease in Brazil and observed that disease-related motor symptoms are a significant component of the costs incurred on the public health system, patients, and society in general. PMID:29276379
Yang, Ping; Fan, Chenggui; Wang, Min; Li, Ling
2017-01-01
In simultaneous electroencephalogram (EEG) and functional magnetic resonance imaging (fMRI) studies, average reference (AR), and digitally linked mastoid (LM) are popular re-referencing techniques in event-related potential (ERP) analyses. However, they may introduce their own physiological signals and alter the EEG/ERP outcome. A reference electrode standardization technique (REST) that calculated a reference point at infinity was proposed to solve this problem. To confirm the advantage of REST in ERP analyses of synchronous EEG-fMRI studies, we compared the reference effect of AR, LM, and REST on task-related ERP results of a working memory task during an fMRI scan. As we hypothesized, we found that the adopted reference did not change the topography map of ERP components (N1 and P300 in the present study), but it did alter the task-related effect on ERP components. LM decreased or eliminated the visual working memory (VWM) load effect on P300, and the AR distorted the distribution of VWM location-related effect at left posterior electrodes as shown in the statistical parametric scalp mapping (SPSM) of N1. ERP cortical source estimates, which are independent of the EEG reference choice, were used as the golden standard to infer the relative utility of different references on the ERP task-related effect. By comparison, REST reference provided a more integrated and reasonable result. These results were further confirmed by the results of fMRI activations and a corresponding EEG-only study. Thus, we recommend the REST, especially with a realistic head model, as the optimal reference method for ERP data analysis in simultaneous EEG-fMRI studies. PMID:28529472
40 CFR 86.1702-99 - Definitions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... to a point of first sale in the All States Trading Region. Axle ratio means the number of times the... is below the applicable fleet average NMOG standard, times the applicable production for a given... average NMOG standard, times the applicable production for a given model year. NMOG debits have units of g...
40 CFR 86.1702-99 - Definitions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... to a point of first sale in the All States Trading Region. Axle ratio means the number of times the... is below the applicable fleet average NMOG standard, times the applicable production for a given... average NMOG standard, times the applicable production for a given model year. NMOG debits have units of g...
49 CFR 525.7 - Basis for petition.
Code of Federal Regulations, 2014 CFR
2014-10-01
... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION EXEMPTIONS FROM AVERAGE FUEL ECONOMY STANDARDS § 525.7 Basis for... comply with that average fuel economy standard; and (4) Anticipated consumer demand in the United States... these lubricants, explain the reasons for not so doing. (f) For each affected model year, a fuel economy...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Drzymala, R; Alvarez, P; Bednarz, G
2015-06-15
Purpose: The purpose of this multi-institutional study was to compare two new gamma stereotactic radiosurgery (GSRS) dosimetry protocols to existing calibration methods. The ultimate goal was to guide AAPM Task Group 178 in recommending a standard GSRS dosimetry protocol. Methods: Nine centers (ten GSRS units) participated in the study. Each institution made eight sets of dose rate measurements: six with two different ionization chambers in three different 160mm-diameter spherical phantoms (ABS plastic, Solid Water and liquid water), and two using the same ionization chambers with a custom in-air positioning jig. Absolute dose rates were calculated using a newly proposed formalism by the IAEA working group for small and non-standard radiation fields and with a new air-kerma based protocol. The new IAEA protocol requires an in-water ionization chamber calibration and uses previously reported Monte-Carlo generated factors to account for the material composition of the phantom, the type of ionization chamber, and the unique GSRS beam configuration. Results obtained with the new dose calibration protocols were compared to dose rates determined by the AAPM TG-21 and TG-51 protocols, with TG-21 considered as the standard. Results: Averaged over all institutions, ionization chambers and phantoms, the mean dose rate determined with the new IAEA protocol relative to that determined with TG-21 in the ABS phantom was 1.000 with a standard deviation of 0.008. For TG-51, the average ratio was 0.991 with a standard deviation of 0.013, and for the new in-air formalism it was 1.008 with a standard deviation of 0.012. Conclusion: Average results with both of the new protocols agreed with TG-21 to within one standard deviation. TG-51, which does not take into account the unique GSRS beam configuration or phantom material, was not expected to perform as well as the new protocols. The new IAEA protocol showed remarkably good agreement with TG-21.
Conflict of Interests: Paula Petti, Josef Novotny, Gennady Neyman and Steve Goetsch are consultants for Elekta Instrument A/B; Elekta Instrument AB, PTW Freiburg GmbH, Standard Imaging, Inc., and The Phantom Laboratory, Inc. loaned equipment for use in these experiments; The University of Wisconsin Accredited Dosimetry Calibration Laboratory provided calibration services.
Sacks, Naomi C; Burgess, James F; Cabral, Howard J; McDonnell, Marie E; Pizer, Steven D
2015-08-01
Accurate estimates of the effects of cost sharing on adherence to medications prescribed for use together, also called concurrent adherence, are important for researchers, payers, and policymakers who want to reduce barriers to adherence for chronic condition patients prescribed multiple medications concurrently. But measure definition consensus is lacking, and the effects of different definitions on estimates of cost-related nonadherence are unevaluated. To (a) compare estimates of cost-related nonadherence using different measure definitions and (b) provide guidance for analyses of the effects of cost sharing on concurrent adherence. This is a retrospective cohort study of Medicare Part D beneficiaries aged 65 years and older who used multiple oral antidiabetics concurrently in 2008 and 2009. We compared patients with standard coverage, which contains cost-sharing requirements in deductible (100%), initial (25%), and coverage gap (100%) phases, to patients with a low-income subsidy (LIS) and minimal cost-sharing requirements. Data source was the IMS Health Longitudinal Prescription Database. Patients with standard coverage were propensity matched to controls with LIS coverage. Propensity score was developed using logistic regression to model likelihood of Part D standard enrollment, controlling for sociodemographic and health status characteristics. For analysis, 3 definitions were used for unadjusted and adjusted estimates of adherence: (1) patients adherent to All medications; (2) patients adherent on Average; and (3) patients adherent to Any medication. Analyses were conducted using the full study sample and then repeated in analytic subgroups where patients used (a) 1 or more costly branded oral antidiabetics or (b) inexpensive generics only. We identified 12,771 propensity matched patients with Medicare Part D standard (N = 6,298) or LIS (N = 6,473) coverage who used oral antidiabetics in 2 or more of the same classes in 2008 and 2009. 
In this sample, estimates of the effects of cost sharing on concurrent adherence varied by measure definition, coverage type, and proportion of patients using more costly branded drugs. Adherence rates ranged from 37% (All: standard patients using 1+ branded) to 97% (Any: LIS using generics only). In adjusted estimates, standard patients using branded drugs had 0.63 (95% CI = 0.57-0.70) and 0.70 (95% CI = 0.63-0.77) times the odds of concurrent adherence using All and Average definitions, respectively. The Any subgroup was not significant (OR = 0.89, 95% CI = 0.87-1.17). Estimates also varied in the full-study sample (All: OR = 0.79, 95% CI = 0.74-0.85; Average: OR = 0.83, 95% CI = 0.77-0.89) and generics-only subgroup, although cost-sharing effects were smaller. The Any subgroup generated no significant estimates. Different concurrent adherence measure definitions lead to markedly different findings of the effects of cost sharing on concurrent adherence, with All and Average subgroups sensitive to these effects. However, when more study patients use inexpensive generics, estimates of these effects on adherence to branded medications with higher cost-sharing requirements may be diluted. When selecting a measure definition, researchers, payers, and policy analysts should consider the range of medication prices patients face, use a measure sensitive to the effects of cost sharing on adherence, and perform subgroup analyses for patients prescribed more medications for which they must pay more, since these patients are most vulnerable to cost-related nonadherence.
[Index assessment of airborne VOCs pollution in automobile for transporting passengers].
Chen, Xiao-Kai; Cheng, He-Ming; Luo, Hui-Long
2013-12-01
The passenger car is the most common means of transport, and in-car airborne volatile organic compounds (VOCs) harm health. To analyze the pollution levels of benzene, toluene, ethylbenzene, xylenes, styrene and TVOC, an index evaluation method was used according to domestic and international standards of indoor and in-car air quality (IAQ). Under the Chinese GB/T 18883-2002 IAQ Standard, GB/T 17729-2009 Hygienic Standard for the Air Quality inside Long Distance Coach, GB/T 27630-2011 Guideline for Air Quality Assessment of Passenger Car, and the IAQ standards of South Korea, Norway, Japan and Germany, the heaviest VOC pollutant in passenger cars was TVOC, TVOC, benzene, benzene, TVOC, toluene and TVOC, respectively, and the average pollution grade of automotive IAQ was medium pollution, medium pollution, clean, light pollution, medium pollution, clean and heavy pollution, respectively. Index evaluation can effectively analyze vehicular interior air quality, and the results differ significantly across standards; the German standard is the most stringent, the Chinese GB/T 18883-2002 standard is relatively stringent, and GB/T 27630-2011 is the most relaxed.
Time to harmonize national ambient air quality standards.
Kutlar Joss, Meltem; Eeftens, Marloes; Gintowt, Emily; Kappeler, Ron; Künzli, Nino
2017-05-01
The World Health Organization has developed ambient air quality guidelines at levels considered to be safe or of acceptable risk for human health. These guidelines are meant to support governments in defining national standards. It is unclear how they are followed. We compiled an inventory of ambient air quality standards for 194 countries worldwide for six air pollutants: PM 2.5 , PM 10 , ozone, nitrogen dioxide, sulphur dioxide and carbon monoxide. We conducted literature and internet searches and asked country representatives about national ambient air quality standards. We found information on 170 countries including 57 countries that did not set any air quality standards. Levels varied greatly by country and by pollutant. Ambient air quality standards for PM 2.5 , PM 10 and SO 2 poorly complied with WHO guideline values. The agreement was higher for CO, SO 2 (10-min averaging time) and NO 2 . Regulatory differences mirror the differences in air quality and the related burden of disease around the globe. Governments worldwide should adopt science based air quality standards and clean air management plans to continuously improve air quality locally, nationally, and globally.
Garbarino, J.R.; Jones, B.E.; Stein, G.P.
1985-01-01
In an interlaboratory test, inductively coupled plasma atomic emission spectrometry (ICP-AES) was compared with flame atomic absorption spectrometry and molecular absorption spectrophotometry for the determination of 17 major and trace elements in 100 filtered natural water samples. No unacceptable biases were detected. The analysis precision of ICP-AES was found to be equal to or better than alternative methods. Known-addition recovery experiments demonstrated that the ICP-AES determinations are accurate to between plus or minus 2 and plus or minus 10 percent; four-fifths of the tests yielded average recoveries of 95-105 percent, with an average relative standard deviation of about 5 percent.
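The recovery and precision figures reported above come from standard known-addition (spike) arithmetic. A minimal sketch of that arithmetic, with hypothetical replicate values chosen for illustration (not the study's per-element data):

```python
import statistics

def percent_recovery(spiked, unspiked, added):
    """Known-addition recovery: the share of the added analyte that the
    method reports back, expressed as a percentage."""
    return 100.0 * (spiked - unspiked) / added

def relative_std_dev(values):
    """Relative standard deviation (coefficient of variation), in percent."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical replicates: a 10 ug/L sample spiked with 5 ug/L of analyte
recoveries = [percent_recovery(s, 10.0, 5.0) for s in (14.9, 15.1, 14.8, 15.3, 15.0)]
mean_recovery = statistics.mean(recoveries)   # 100.4%, within the 95-105% band
rsd = relative_std_dev(recoveries)            # ~3.8% relative standard deviation
```

A mean recovery near 100% with an RSD of a few percent is exactly the pattern the interlaboratory test reports for most elements.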
Meteorological Automatic Weather Station (MAWS) Instrument Handbook
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holdridge, Donna J; Kyrouac, Jenni A
The Meteorological Automatic Weather Station (MAWS) is a surface meteorological station, manufactured by Vaisala, Inc., dedicated to the balloon-borne sounding system (BBSS), providing surface measurements of the thermodynamic state of the atmosphere and the wind speed and direction for each radiosonde profile. These data are automatically provided to the BBSS during the launch procedure and included in the radiosonde profile as the surface measurements of record for the sounding. The MAWS core set of measurements is: Barometric Pressure (hPa), Temperature (°C), Relative Humidity (%), Arithmetic-Averaged Wind Speed (m/s), and Vector-Averaged Wind Direction (deg). The sensors that collect the core variables are mounted at the standard heights defined for each variable.
Adibi, Mehrad; Pearle, Margaret S; Lotan, Yair
2012-07-01
Multiple studies have shown an increase in hospital admission rates due to infectious complications after transrectal ultrasonography (TRUS)-guided prostate biopsy (TRUSBx), mostly related to a rise in the prevalence of fluoroquinolone-resistant organisms. As a result, multiple series have advocated the use of more intensive prophylactic antibiotic regimens to augment the effect of the widely used fluoroquinolone prophylaxis for TRUSBx. The present study compares the cost-effectiveness of fluoroquinolone prophylaxis with that of more intensive prophylactic antibiotic regimens, an important consideration for any antibiotic regimen used on a wide scale for TRUSBx prophylaxis. To compare the cost-effectiveness of fluoroquinolones vs intensive antibiotic regimens for transrectal ultrasonography (TRUS)-guided prostate biopsy (TRUSBx) prophylaxis. Risk of hospital admission for infectious complications after TRUSBx was determined from published data. The average cost of hospital admission due to post-biopsy infection was determined from patients admitted to our University hospital within 1 week of TRUSBx. A decision tree analysis was created to compare the cost-effectiveness of standard vs intensive antibiotic prophylactic regimens based on varying risk of infection, cost, and effectiveness of the intensive antibiotic regimen. Baseline assumptions included cost of TRUSBx ($559), admission rate (1%), average cost of admission ($5900), and costs of standard and intensive antibiotic regimens of $1 and $33, respectively. Assuming a 50% risk reduction in admission rates with intensive antibiotics, the standard regimen was slightly less costly, with an average cost of $619 vs $622, but was associated with twice as many infections. Sensitivity analyses found that a 1.1% risk of admission for quinolone-resistant infections or a 54% risk reduction attributed to the more intensive antibiotic regimen will result in cost-equivalence for the two regimens.
Three-way sensitivity analyses showed that small increases in the probability of admission using standard antibiotics, or greater risk reduction using the intensive regimen, result in the intensive prophylactic regimen becoming substantially more cost-effective even at higher costs. As the risk of admission for infectious complications due to TRUSBx increases, use of an intensive prophylactic antibiotic regimen becomes significantly more cost-effective than the current standard antibiotic prophylaxis. © 2011 BJU INTERNATIONAL.
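The decision-tree figures above follow directly from the stated baseline assumptions. A sketch reproducing the expected-cost arithmetic (function and variable names are ours, not from the paper):

```python
def expected_cost(biopsy, antibiotics, p_admission, admission_cost):
    """Expected cost per biopsy episode: fixed costs plus the
    admission-probability-weighted cost of an infection admission."""
    return biopsy + antibiotics + p_admission * admission_cost

# Baseline assumptions stated in the abstract
BIOPSY, ADMISSION = 559.0, 5900.0   # cost of TRUSBx; average cost of admission
P_ADMIT = 0.01                      # admission rate with standard prophylaxis
RISK_REDUCTION = 0.50               # assumed effect of the intensive regimen

standard = expected_cost(BIOPSY, 1.0, P_ADMIT, ADMISSION)                           # $619
intensive = expected_cost(BIOPSY, 33.0, P_ADMIT * (1 - RISK_REDUCTION), ADMISSION)  # $621.50

# Break-even admission rate: the $32 extra antibiotic cost is offset
# once the averted admissions are worth the same amount.
p_breakeven = (33.0 - 1.0) / (RISK_REDUCTION * ADMISSION)   # ~0.011, i.e. 1.1%
```

This reproduces the reported $619 vs $622 comparison and the 1.1% cost-equivalence threshold from the sensitivity analysis.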
Açikgöz, Ayla; Ergör, Gül
2011-01-01
Cervical cancer screening with Pap smear test is a cost-effective method. The Ministry of Health in Turkey recommends that it be performed once every five years after age 35. The purpose of this study was to determine the cervical cancer risk levels of women between 35 and 69, and the intervals they have the Pap smear test, and to investigate the relation between the two. This study was performed on 227 women aged between 35 and 69 living in Balçova District of İzmir province. Using the cervical cancer risk index program of Harvard School of Public Health, the cervical cancer risk level of 70% of the women was found below average, 22.1% average, and 7.9% above average. Only 52% of the women have had Pap smear test at least once in their lives. The percentage screening regularly in conformity with the national screening standard was 39.2%. Women in the 40-49 age group, were married, conformed significantly more (p<0.05) to the national screening standard. Compliance also increased with the level of education and decreased with the cervical cancer risk level (p<0.05). A logistic regression model was constructed including age, education level, menstruation state of the women and the economic level of the family. Not having the Pap smear test in conformity with the national cervical cancer screening standard in 35-39 age group was 2.52 times more than 40-49 age group, while it was 3.26 times more in 60-69 age group (p< 0.05). Not having Pap smear test in 35-39 age group more than other groups might result from lack of information on the cervical cancer national screening standard and the necessity of having Pap smear test. As for 60-69 age group, the low education level might cause not having Pap smear test. Under these circumstances, the cervical cancer risk levels should be determined and the individuals should be informed. 
Providing Pap smear screening to individuals in the target group of the national screening standard as a public service may resolve the inequalities due to age and educational differences.
NASA Astrophysics Data System (ADS)
Muir, B. R.; McEwen, M. R.; Rogers, D. W. O.
2014-10-01
A method is presented to obtain ion chamber calibration coefficients relative to secondary standard reference chambers in electron beams using depth-ionization measurements. Results are obtained as a function of depth and average electron energy at depth in 4, 8, 12 and 18 MeV electron beams from the NRC Elekta Precise linac. The PTW Roos, Scanditronix NACP-02, PTW Advanced Markus and NE 2571 ion chambers are investigated. The challenges and limitations of the method are discussed. The proposed method produces useful data at shallow depths. At depths past the reference depth, small shifts in positioning or drifts in the incident beam energy affect the results, thereby providing a built-in test of incident electron energy drifts and/or chamber set-up. Polarity corrections for ion chambers as a function of average electron energy at depth agree with literature data. The proposed method produces results consistent with those obtained using the conventional calibration procedure while gaining much more information about the behavior of the ion chamber with similar data acquisition time. Measurement uncertainties in calibration coefficients obtained with this method are estimated to be less than 0.5%. These results open up the possibility of using depth-ionization measurements to yield chamber ratios which may be suitable for primary standards-level dissemination.
Towards a standard for the dynamic measurement of pressure based on laser absorption spectroscopy
Douglass, K O; Olson, D A
2016-01-01
We describe an approach for creating a standard for the dynamic measurement of pressure based on the measurement of fundamental quantum properties of molecular systems. From the linewidth and intensities of ro-vibrational transitions we plan on making an accurate determination of pressure and temperature. The goal is to achieve an absolute uncertainty for time-varying pressure of 5 % with a measurement rate of 100 kHz, which will in the future serve as a method for the traceable calibration of pressure sensors used in transient processes. To illustrate this concept we have used wavelength modulation spectroscopy (WMS), due to inherent advantages over direct absorption spectroscopy, to perform rapid measurements of carbon dioxide in order to determine the pressure. The system records the full lineshape profile of a single ro-vibrational transition of CO2 at a repetition rate of 4 kHz and with a systematic measurement uncertainty of 12 % for the linewidth measurement. A series of pressures were measured at a rate of 400 Hz (10 averages) and from these measurements the linewidth was determined with a relative uncertainty of about 0.5 % on average. The pressures measured using WMS have an average difference of 0.6 % from the absolute pressure measured with a capacitance diaphragm sensor. PMID:27881884
Enhanced Cumulative Sum Charts for Monitoring Process Dispersion
Abujiya, Mu’azu Ramat; Riaz, Muhammad; Lee, Muhammad Hisyam
2015-01-01
The cumulative sum (CUSUM) control chart is widely used in industry for the detection of small and moderate shifts in process location and dispersion. For efficient monitoring of process variability, we present several CUSUM control charts for monitoring changes in the standard deviation of a normal process. The newly developed control charts, based on well-structured sampling techniques - extreme ranked set sampling, extreme double ranked set sampling and double extreme ranked set sampling - have significantly enhanced the CUSUM chart's ability to detect a wide range of shifts in process variability. The relative performances of the proposed CUSUM scale charts are evaluated in terms of the average run length (ARL) and standard deviation of run length, for point shifts in variability. Moreover, for overall performance, we employ the average ratio ARL and average extra quadratic loss. A comparison of the proposed CUSUM control charts with the classical CUSUM R chart, the classical CUSUM S chart, the fast initial response (FIR) CUSUM R chart, the FIR CUSUM S chart, the ranked set sampling (RSS) based CUSUM R chart and the RSS based CUSUM S chart, among others, is presented. An illustrative example using a real dataset is given to demonstrate the practicability of the proposed schemes. PMID:25901356
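The paper's charts build on the classical tabular CUSUM applied to subgroup standard deviations. As a minimal illustration of that baseline scheme only (the ranked-set sampling variants are not reproduced here, and the reference value k and decision interval h below are assumed, not taken from the paper):

```python
import math
import statistics

def cusum_scale(subgroups, target_sd, k=0.25, h=4.0):
    """Two-sided tabular CUSUM on subgroup sample standard deviations.
    k is the reference (allowance) value and h the decision interval.
    Returns the 1-based index of the first out-of-control signal, or None."""
    cp = cm = 0.0
    for i, g in enumerate(subgroups, start=1):
        s = statistics.stdev(g)
        cp = max(0.0, cp + (s - target_sd) - k)   # accumulates SD increases
        cm = max(0.0, cm - (s - target_sd) - k)   # accumulates SD decreases
        if cp > h or cm > h:
            return i
    return None

# Size-2 subgroups constructed so the sample SD is exactly 1.0 (in control)
# for 10 subgroups, then exactly 2.0 (doubled dispersion) afterwards.
chart_input = [[0.0, math.sqrt(2)]] * 10 + [[0.0, 2 * math.sqrt(2)]] * 10
signal_at = cusum_scale(chart_input, target_sd=1.0)   # signals at subgroup 16
```

With these settings the chart stays quiet while the dispersion is on target and signals six subgroups after it doubles; the run length from shift to signal is the quantity the ARL comparisons in the paper average over.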
Walorczyk, Stanisław; Drożdżyński, Dariusz; Kierzek, Roman
2015-01-01
A method was developed for pesticide analysis in samples of high chlorophyll content belonging to the group of minor crops. A new type of sorbent, known as ChloroFiltr, was employed for dispersive-solid phase extraction cleanup (dispersive-SPE) to reduce the unwanted matrix background prior to concurrent analysis by gas chromatography and ultra-performance liquid chromatography coupled to tandem quadrupole mass spectrometry (GC-MS/MS and UPLC-MS/MS). Validation experiments were carried out on green, unripe plants of lupin, white mustard and sorghum. The overall recoveries at the three spiking levels of 0.01, 0.05 and 0.5 mg kg(-1) fell in the range between 68 and 120% (98% on average) and 72-104% (93% on average) with relative standard deviation (RSD) values between 2 and 19% (7% on average) and 3-16% (6% on average) by GC-MS/MS and UPLC-MS/MS technique, respectively. Because of strong enhancement or suppression matrix effects (absolute values >20%) which were exhibited by about 80% of the pesticide and matrix combinations, acceptably accurate quantification was achieved by using matrix-matched standards. Up to now, the proposed method has been successfully used to study the dissipation patterns of pesticides after application on lupin, white mustard, soya bean, sunflower and field bean in experimental plot trials conducted in Poland. Copyright © 2014 Elsevier B.V. All rights reserved.
Western Australian students' alcohol consumption and expenditure intentions for Schoolies.
Jongenelis, Michelle I; Pettigrew, Simone; Biagioni, Nicole; Hagger, Martin S
2017-07-01
In Australia, the immediate post-school period (known as 'Schoolies') is associated with heavy drinking and high levels of alcohol-related harm. This study investigated students' intended alcohol consumption during Schoolies to inform interventions to reduce alcohol-related harm among this group. An online survey was administered to students in their senior year of schooling. Included items related to intended daily alcohol consumption during Schoolies, amount of money intended to be spent on alcohol over the Schoolies period, and past drinking behaviour. On average, participants (n=187) anticipated that they would consume eight standard drinks per day, which is substantially higher than the recommended maximum of no more than four drinks on a single occasion. Participants intended to spend an average of A$131 on alcohol over the Schoolies period. Although higher than national guidelines, intended alcohol consumption was considerably lower than has been previously documented during Schoolies events. The substantial amounts of money expected to be spent during Schoolies suggest this group has adequate spending power to constitute an attractive target market for those offering alternative activities that are associated with lower levels of alcohol-related harm.
Dehghani, Mansooreh; Anushiravani, Amir; Hashemi, Hassan; Shamsedini, Narges
2014-06-01
Expanding cities with rapid economic development have increased energy consumption, leading to numerous environmental problems for their residents. The aim of this study was to investigate the correlation between air pollution and the mortality rate due to cardiovascular and respiratory diseases in Shiraz. This is an analytical cross-sectional study of the correlation between major air pollutants (carbon monoxide [CO], sulfur dioxide [SO2], nitrogen dioxide [NO2] and particulate matter with a diameter of less than 10 μm [PM10]) and climatic parameters (temperature and relative humidity) with the number of deaths from cardiopulmonary disease in Shiraz from March 2011 to January 2012. Data regarding the concentration of air pollutants were determined by the Shiraz Environmental Organization. Information about climatic parameters was collected from the database of Iran's Meteorological Organization. The numbers of deaths from cardiopulmonary disease in Shiraz were provided by the Department of Health, Shiraz University of Medical Sciences. We used a non-parametric correlation test to analyze the relationships among these parameters. The results demonstrated that, over the recorded period, the average monthly pollutants standard index (PSI) values of PM10 were higher than standard limits, while the average monthly PSI values of NO2 were lower than standard. There was no significant relationship between the number of cardiopulmonary deaths and the air pollutants (P > 0.05). Air pollution can aggravate chronic cardiopulmonary disease. In the current study, one of the most important air pollutants in Shiraz was the PM10 component. Mechanical processes, such as wind blowing in from neighboring countries, are the most important factor raising PM10 in Shiraz to alarming levels. The average monthly variations in PSI values of air pollutants such as NO2, CO, and SO2 were lower than standard limits.
Moreover, there was no significant correlation between the average monthly variation in PSI of NO2, CO, PM10, and SO2 and the number of deaths from cardiopulmonary disease in Shiraz.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-21
...: Kenneth R. Katz, Fuel Economy Division, Office of International Policy, Fuel Economy and Consumer Programs... Parts 531 and 533 Passenger Car Average Fuel Economy Standards--Model Years 2016-2025; Light Truck Average Fuel Economy Standards--Model Years 2016-2025; Production Plan Data. OMB Control Number: 2127-0655...
40 CFR 464.25 - Pretreatment standards for existing sources.
Code of Federal Regulations, 2011 CFR
2011-07-01
...) EFFLUENT GUIDELINES AND STANDARDS METAL MOLDING AND CASTING POINT SOURCE CATEGORY Copper Casting... Maximum for monthly average kg/1000 kkg (pounds per million pounds) of metal poured Copper (T) 0.0307 0... any 1 day Maximum for monthly average kg/1,000 kkg (pounds per million pounds) of metal poured Copper...
A Robust Interpretation of Teaching Evaluation Ratings
ERIC Educational Resources Information Center
Bi, Henry H.
2018-01-01
There are no absolute standards regarding what teaching evaluation ratings are satisfactory. It is also problematic to compare teaching evaluation ratings with the average or with a cutoff number to determine whether they are adequate. In this paper, we use average and standard deviation charts (X̄–S charts), which are based on the theory…
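For readers unfamiliar with X̄–S charts, the control limits come from standard SPC formulas applied to subgroup means and standard deviations. A minimal sketch using textbook constants (the rating values below are hypothetical, not from the paper):

```python
import math
import statistics

def c4(n):
    """Unbiasing constant for the sample standard deviation (subgroup size n)."""
    return math.sqrt(2.0 / (n - 1)) * math.gamma(n / 2) / math.gamma((n - 1) / 2)

def xbar_s_limits(subgroups):
    """Three-sigma control limits for the X-bar and S charts, estimated
    from subgroup means and sample standard deviations."""
    n = len(subgroups[0])
    means = [statistics.mean(g) for g in subgroups]
    sds = [statistics.stdev(g) for g in subgroups]
    xbarbar, sbar = statistics.mean(means), statistics.mean(sds)
    a3 = 3.0 / (c4(n) * math.sqrt(n))                 # X-bar chart width factor
    width = 3.0 * math.sqrt(1.0 - c4(n) ** 2) / c4(n)  # S chart width factor
    return {"xbar": (xbarbar - a3 * sbar, xbarbar + a3 * sbar),
            "s": (max(0.0, 1.0 - width) * sbar, (1.0 + width) * sbar)}

# Example: three subgroups of five course ratings each (hypothetical data)
limits = xbar_s_limits([[4.1, 4.3, 4.0, 4.4, 4.2],
                        [4.2, 4.4, 3.9, 4.3, 4.1],
                        [4.0, 4.2, 4.3, 4.1, 4.4]])
```

Ratings falling inside these limits are consistent with common-cause variation, which is the kind of relative judgment the paper argues for in place of absolute cutoffs.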
76 FR 28998 - Implementation of Revised Passenger Weight Standards for Existing Passenger Vessels
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-19
... Inspection prior to a change in the assumed average weight per person standard that will become effective in... several factors, including the total weight of people carried based on an Assumed Average Weight per... reason, the policy letter referred to in this notice provides supplemental guidance to the implementation...
Drivers' biased perceptions of speed and safety campaign messages.
Walton, D; McKeown, P C
2001-09-01
One hundred and thirteen drivers were surveyed for their perceptions of driving speed to compare self-reported average speed, perceived average-other speed and the actual average speed, in two conditions (50 and 100 kph zones). These contrasts were used to evaluate whether public safety messages concerning speeding effectively reach their target audience. Evidence is presented supporting the hypothesis that drivers who have a biased perception of their own speed relative to others are more likely to ignore advertising campaigns encouraging people not to speed. A method of self-other-actual comparisons detects biased perceptions when the standard method of self-other comparison does not. In particular, drivers exaggerate the perceived speed of others and this fact is masked using traditional methods. The method of manipulation is proposed as a way to evaluate the effect of future advertising campaigns, and a strategy for such campaigns is proposed based on the results of the self-other comparisons.
Electron heating at interplanetary shocks
NASA Technical Reports Server (NTRS)
Feldman, W. C.; Asbridge, J. R.; Bame, S. J.; Gosling, J. T.; Zwickl, R. D.
1982-01-01
Data for 41 forward interplanetary shocks show that the ratio of downstream to upstream electron temperatures, T_e(d/u), is variable in the range between 1.0 (isothermal) and 3.0. On average, T_e(d/u) = 1.5 with a standard deviation σ_e = 0.5. This ratio is less than the average ratio of proton temperatures across the same shocks, T_p(d/u) = 3.3 with σ_p = 2.5, as well as the average ratio of electron temperatures across the Earth's bow shock. Individual samples of T_e(d/u) and T_p(d/u) appear to be weakly correlated with the number density ratio. However, the amounts of electron and proton heating are well correlated with each other as well as with the bulk velocity difference across each shock. The stronger shocks appear to heat the protons relatively more efficiently than they heat the electrons.
Child wellbeing and income inequality in rich societies: ecological cross sectional study.
Pickett, Kate E; Wilkinson, Richard G
2007-11-24
To examine associations between child wellbeing and material living standards (average income), the scale of differentiation in social status (income inequality), and social exclusion (children in relative poverty) in rich developed societies. Ecological, cross sectional studies. Cross national comparisons of 23 rich countries; cross state comparisons within the United States. Children and young people. The Unicef index of child wellbeing and its components for rich countries; eight comparable measures for the US states and District of Columbia (teenage births, juvenile homicides, infant mortality, low birth weight, educational performance, dropping out of high school, overweight, mental health problems). The overall index of child wellbeing was negatively correlated with income inequality (r=-0.64, P=0.001) and percentage of children in relative poverty (r=-0.67, P=0.001) but not with average income (r=0.15, P=0.50). Many more indicators of child wellbeing were associated with income inequality or children in relative poverty, or both, than with average incomes. Among the US states and District of Columbia all indicators were significantly worse in more unequal states. Only teenage birth rates and the proportion of children dropping out of high school were lower in richer states. Improvements in child wellbeing in rich societies may depend more on reductions in inequality than on further economic growth.
Bertsche, Patricia K; Mensah, Edward; Stevens, Thomas
2006-08-01
The purpose of this study was to determine whether the benefits of early identification of work-related noise-induced hearing loss outweigh the costs of complying with a Global Noise Medical Surveillance Procedure of a large corporation. Hearing is fundamental to language, communication, and socialization. Its loss is a common cause of disability, affecting an estimated 20 to 40 million individuals in the United States (Daniell et al., 1998). NIOSH reported that approximately 30 million U.S. workers are exposed to noise on the job and that noise-induced hearing loss is one of the most common occupational diseases. It is irreversible (NIOSH, 2004). The average cost of a noise-induced hearing loss is reported to range from $4,726 to $25,500. Corporate history indicates a range of $44 to $20,157 per case. During this 4-year study in one plant, the average annual cost of complying with the Global Noise Medical Surveillance Procedure was $19,509 to screen an average of 390 employees, or $50 per worker. The study identified 11 non-work-related standard threshold shifts. All cases were referred for appropriate early intervention. Given the results, this hearing health program is considered beneficial to the corporation for both work- and non-work-related reasons.
Music therapy for people with schizophrenia and schizophrenia-like disorders.
Mössler, Karin; Chen, Xijing; Heldal, Tor Olav; Gold, Christian
2011-12-07
Music therapy is a therapeutic method that uses musical interaction as a means of communication and expression. The aim of the therapy is to help people with serious mental disorders to develop relationships and to address issues they may not be able to address using words alone. To review the effects of music therapy, or music therapy added to standard care, compared with 'placebo' therapy, standard care or no treatment for people with serious mental disorders such as schizophrenia. We searched the Cochrane Schizophrenia Group Trials Register (December 2010) and supplemented this by contacting relevant study authors, handsearching of music therapy journals and manual searches of reference lists. All randomised controlled trials (RCTs) that compared music therapy with standard care, placebo therapy, or no treatment. Studies were reliably selected, quality assessed and data extracted. We excluded data where more than 30% of participants in any group were lost to follow-up. We synthesised non-skewed continuous endpoint data from valid scales using a standardised mean difference (SMD). If statistical heterogeneity was found, we examined treatment 'dosage' and treatment approach as possible sources of heterogeneity. We included eight studies (total 483 participants). These examined effects of music therapy over the short- to medium-term (one to four months), with treatment 'dosage' varying from seven to 78 sessions. Music therapy added to standard care was superior to standard care for global state (medium-term, 1 RCT, n = 72, RR 0.10 95% CI 0.03 to 0.31, NNT 2 95% CI 1.2 to 2.2).
Continuous data identified good effects on negative symptoms (4 RCTs, n = 240, SMD average endpoint Scale for the Assessment of Negative Symptoms (SANS) -0.74 95% CI -1.00 to -0.47); general mental state (1 RCT, n = 69, SMD average endpoint Positive and Negative Symptoms Scale (PANSS) -0.36 95% CI -0.85 to 0.12; 2 RCTs, n=100, SMD average endpoint Brief Psychiatric Rating Scale (BPRS) -0.73 95% CI -1.16 to -0.31); depression (2 RCTs, n = 90, SMD average endpoint Self-Rating Depression Scale (SDS) -0.63 95% CI -1.06 to -0.21; 1 RCT, n = 30, SMD average endpoint Hamilton Depression Scale (Ham-D) -0.52 95% CI -1.25 to -0.21 ); and anxiety (1 RCT, n = 60, SMD average endpoint SAS -0.61 95% CI -1.13 to -0.09). Positive effects were also found for social functioning (1 RCT, n = 70, SMD average endpoint Social Disability Schedule for Inpatients (SDSI) score -0.78 95% CI -1.27 to -0.28). Furthermore, some aspects of cognitive functioning and behaviour seem to develop positively through music therapy. Effects, however, were inconsistent across studies and depended on the number of music therapy sessions as well as the quality of the music therapy provided. Music therapy as an addition to standard care helps people with schizophrenia to improve their global state, mental state (including negative symptoms) and social functioning if a sufficient number of music therapy sessions are provided by qualified music therapists. Further research should especially address the long-term effects of music therapy, dose-response relationships, as well as the relevance of outcomes measures in relation to music therapy.
Role of the standard deviation in the estimation of benchmark doses with continuous data.
Gaylor, David W; Slikker, William
2004-12-01
For continuous data, risk is defined here as the proportion of animals with values above a large percentile, e.g., the 99th percentile or below the 1st percentile, for the distribution of values among control animals. It is known that reducing the standard deviation of measurements through improved experimental techniques will result in less stringent (higher) doses for the lower confidence limit on the benchmark dose that is estimated to produce a specified risk of animals with abnormal levels for a biological effect. Thus, a somewhat larger (less stringent) lower confidence limit is obtained that may be used as a point of departure for low-dose risk assessment. It is shown in this article that it is important for the benchmark dose to be based primarily on the standard deviation among animals, s(a), apart from the standard deviation of measurement errors, s(m), within animals. If the benchmark dose is incorrectly based on the overall standard deviation among average values for animals, which includes measurement error variation, the benchmark dose will be overestimated and the risk will be underestimated. The bias increases as s(m) increases relative to s(a). The bias is relatively small if s(m) is less than one-third of s(a), a condition achieved in most experimental designs.
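The article's central point, that the benchmark dose should rest on the among-animal standard deviation s(a) with the measurement-error component s(m) removed, amounts to a one-way random-effects variance decomposition. A minimal sketch under that assumption, with invented replicate data:

```python
import statistics as st

def separate_variance_components(replicates):
    """Estimate among-animal sd (s_a) and measurement-error sd (s_m) from
    k replicate measurements on each animal (one-way random-effects ANOVA)."""
    k = len(replicates[0])
    means = [st.mean(r) for r in replicates]
    # Within-animal variance estimates the measurement-error variance s_m^2
    s_m2 = st.mean([st.variance(r) for r in replicates])
    # Variance of the animal averages is s_a^2 + s_m^2 / k, so subtract the
    # measurement-error contamination to recover s_a^2
    var_of_means = st.variance(means)
    s_a2 = max(0.0, var_of_means - s_m2 / k)
    return s_a2 ** 0.5, s_m2 ** 0.5
```

Using the raw standard deviation of the animal averages instead, as the article warns, mixes s(m)²/k into the estimate and biases the resulting benchmark dose upward.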
Ar-40/Ar-39 Studies of Martian Meteorite RBT 04262 and Terrestrial Standards
NASA Technical Reports Server (NTRS)
Park, J.; Herzog, G. F.; Turrin, B.; Lindsay, F. N.; Delaney, J. S.; Swisher, C. C., III; Nagao, K.; Nyquist, L. E.
2014-01-01
Park et al. recently presented an Ar-40/Ar-39 dating study of maskelynite separated from the Martian meteorite RBT 04262. Here we report an additional study of Ar-40/Ar-39 patterns for smaller samples, each consisting of only a few maskelynite grains. Considered as a material for Ar-40/Ar-39 dating, the shock-produced glass maskelynite has both an important strength (relatively high K concentration compared to other mineral phases) and some potentially problematic weaknesses. At Rutgers, we have been analyzing small grains consisting of a single phase to explore local effects that might be averaged out and remain hidden in larger samples. Thus, to assess the homogeneity of the RBT maskelynite and for comparison with the earlier results, we analyzed six approx. 30 microgram samples of the same maskelynite separate they studied. Furthermore, because most Ar-40/Ar-39 ages are calculated relative to the age of a standard, we present new Ar-40/Ar-39 age data for six standards. Among the most widely used standards are sanidine from Fish Canyon (FCs) and various hornblendes (hb3gr, MMhb-1, NL-25), which are taken as primary standards because their ages have been determined by independent, direct measurements of K and Ar-40.
Estimating Adolescent Risk for Hearing Loss Based on Data From a Large School-Based Survey
Verschuure, Hans; van der Ploeg, Catharina P. B.; Brug, Johannes; Raat, Hein
2010-01-01
Objectives. We estimated whether and to what extent a group of adolescents were at risk of developing permanent hearing loss as a result of voluntary exposure to high-volume music, and we assessed whether such exposure was associated with hearing-related symptoms. Methods. In 2007, 1512 adolescents (aged 12–19 years) in Dutch secondary schools completed questionnaires about their music-listening behavior and whether they experienced hearing-related symptoms after listening to high-volume music. We used their self-reported data in conjunction with published average sound levels of music players, discotheques, and pop concerts to estimate their noise exposure, and we compared that exposure to our own “loosened” (i.e., less strict) version of current European safety standards for occupational noise exposure. Results. About half of the adolescents exceeded safety standards for occupational noise exposure. About one third of the respondents exceeded safety standards solely as a result of listening to MP3 players. Hearing symptoms that occurred after using an MP3 player or going to a discotheque were associated with exposure to high-volume music. Conclusions. Adolescents often exceeded current occupational safety standards for noise exposure, highlighting the need for specific safety standards for leisure-time noise exposure. PMID:20395587
Forecast of Frost Days Based on Monthly Temperatures
NASA Astrophysics Data System (ADS)
Castellanos, M. T.; Tarquis, A. M.; Morató, M. C.; Saa-Requejo, A.
2009-04-01
Although frost can cause considerable crop damage and mitigation practices against forecasted frost exist, frost forecasting technologies have not changed for many years. This paper reports a new method to forecast the monthly number of frost days (FD) for several meteorological stations in the Community of Madrid (Spain), based on the successive application of two models. The first is a stochastic model, autoregressive integrated moving average (ARIMA), that forecasts monthly minimum absolute temperature (tmin) and the monthly average of minimum temperature (tminav) following the Box-Jenkins methodology. The second model relates these monthly temperatures to the distribution of minimum daily temperature during one month. Three ARIMA models were identified for the time series analyzed, with a seasonal period corresponding to one year. They share the same seasonal behavior (moving average differenced model) and differ in the non-seasonal part: autoregressive model (Model 1), moving average differenced model (Model 2), and autoregressive and moving average model (Model 3). At the same time, the results point out that minimum daily temperature (tdmin), for the meteorological stations studied, followed a normal distribution each month with a very similar standard deviation across years. This standard deviation, obtained for each station and each month, could be used as a risk index for cold months. The application of Model 1 to predict minimum monthly temperatures gave the best FD forecast. This procedure provides a tool for crop managers and crop insurance companies to assess the risk of frost frequency and intensity, so that they can take steps to mitigate frost damage and estimate its cost. This research was supported by Comunidad de Madrid Research Project 076/92. The cooperation of the Spanish National Meteorological Institute and the Spanish Ministerio de Agricultura, Pesca y Alimentación (MAPA) is gratefully acknowledged.
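The second step of the method, turning a forecast monthly mean and the assumed normal distribution of daily minima into an expected number of frost days, can be sketched as follows. Only the normal-CDF step is shown; the monthly forecast itself would come from an ARIMA fit in a time-series library, and the example parameters are placeholders.

```python
import math

def expected_frost_days(mu, sigma, days_in_month=30, threshold=0.0):
    """Expected number of days with daily minimum below `threshold` (deg C),
    assuming daily minima within the month are Normal(mu, sigma)."""
    # P(T < threshold) via the normal CDF, written with math.erf
    p_frost = 0.5 * (1.0 + math.erf((threshold - mu) / (sigma * math.sqrt(2.0))))
    return days_in_month * p_frost
```

Because sigma was found to be nearly constant across years for a given station and month, only the monthly mean needs forecasting; sigma can be taken from the historical record.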
Kado, DM; Huang, MH; Karlamangla, AS; Cawthon, P; Katzman, W; Hillier, TA; Ensrud, K; Cummings, SR
2012-01-01
Age-related hyperkyphosis is thought to be a result of underlying vertebral fractures, but studies suggest that among the most hyperkyphotic women, only one in three have underlying radiographic vertebral fractures. Although commonly observed, there is no widely accepted definition of hyperkyphosis in older persons, and other than vertebral fracture, no major causes have been identified. To identify important correlates of kyphosis and risk factors for its progression over time, we conducted a 15 year retrospective cohort study of 1,196 women, aged 65 years and older at baseline (1986–88), from four communities across the United States: Baltimore County, MD; Minneapolis, MN; Portland, OR; and the Monongahela Valley, PA. Cobb angle kyphosis was measured from radiographs obtained at baseline and an average of 3.7 and 15 years later. Repeated measures, mixed effects analyses were performed. At baseline, the mean kyphosis angle was 44.7 degrees (standard error 0.4, standard deviation 11.9) and significant correlates included a family history of hyperkyphosis, prevalent vertebral fracture, low bone mineral density, greater body weight, degenerative disc disease, and smoking. Over an average of 15 years, the mean increase in kyphosis was 7.1 degrees (standard error 0.25). Independent determinants of greater kyphosis progression were prevalent and incident vertebral fractures, low bone mineral density and concurrent bone density loss, low body weight, and concurrent weight loss. Thus, age-related kyphosis progression may be best prevented by slowing bone density loss and avoiding weight loss. PMID:22865329
Polinder, S; Heijnen, E M E W; Macklon, N S; Habbema, J D F; Fauser, B J C M; Eijkemans, M J C
2008-02-01
BACKGROUND Conventional ovarian stimulation and the transfer of two embryos in IVF exhibit an inherent high probability of multiple pregnancies, resulting in high costs. We evaluated the cost-effectiveness of a mild compared with a conventional strategy for IVF. METHODS Four hundred and four patients were randomly assigned to undergo either mild ovarian stimulation/GnRH antagonist co-treatment combined with single embryo transfer, or standard stimulation/GnRH agonist long protocol and the transfer of two embryos. The main outcome measures are total costs of treatment within a 12 months period after randomization, and the relationship between total costs and proportion of cumulative pregnancies resulting in term live birth within 1 year of randomization. RESULTS Despite a significantly increased average number of IVF cycles (2.3 versus 1.7; P < 0.001), lower average total costs over a 12-month period (€8,333 versus €10,745; P = 0.006) were observed using the mild strategy. This was mainly due to higher costs of the obstetric and post-natal period for the standard strategy, related to multiple pregnancies. The costs per pregnancy leading to term live birth were €19,156 in the mild strategy and €24,038 in the standard strategy. The incremental cost-effectiveness ratio of the standard strategy compared with the mild strategy was €185,000 per extra pregnancy leading to term live birth. CONCLUSIONS Despite an increased mean number of IVF cycles within 1 year, from an economic perspective, the mild treatment strategy is more advantageous per term live birth. It is unlikely, over a wide range of society's willingness-to-pay, that the standard treatment strategy is cost-effective, compared with the mild strategy.
Performance analysis of deciduous morphology for detecting biological siblings.
Paul, Kathleen S; Stojanowski, Christopher M
2015-08-01
Family-centered burial practices influence cemetery structure and can represent social group composition in both modern and ancient contexts. In ancient sites dental phenotypic data are often used as proxies for underlying genotypes to identify potential biological relatives. Here, we test the performance of deciduous dental morphological traits for differentiating sibling pairs from unrelated individuals from the same population. We collected 46 deciduous morphological traits for 69 sibling pairs from the Burlington Growth Centre's long term Family Study. Deciduous crown features were recorded following published standards. After variable winnowing, inter-individual Euclidean distances were generated using 20 morphological traits. To determine whether sibling pairs are more phenotypically similar than expected by chance we used bootstrap resampling of distances to generate P values. Multidimensional scaling (MDS) plots were used to evaluate the degree of clustering among sibling pairs. Results indicate an average distance between siblings of 0.252, which is significantly less than 9,999 replicated averages of 69 resampled pseudo-distances generated from: 1) a sample of non-relative pairs (P < 0.001), and 2) a sample of relative and non-relative pairs (P < 0.001). MDS plots indicate moderate to strong clustering among siblings; families occupied 3.83% of the multidimensional space on average (versus 63.10% for the total sample). Deciduous crown morphology performed well in identifying related sibling pairs. However, there was considerable variation in the extent to which different families exhibited similarly low levels of phenotypic divergence. © 2015 Wiley Periodicals, Inc.
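The resampling test described above, comparing the observed mean sibling distance against replicated averages of pseudo-pair distances, can be sketched as follows. The trait vectors, distance metric (Euclidean, as in the study), and replicate count are illustrative; the real analysis used 20 winnowed morphological traits and 9,999 replicates.

```python
import math
import random

def euclid(a, b):
    """Euclidean distance between two trait vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def bootstrap_p(sib_pairs, pool, n_rep=9999, rng=None):
    """One-sided bootstrap P-value: probability that the mean distance of
    randomly resampled pseudo-pairs is as small as the observed sibling mean."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    obs = sum(euclid(a, b) for a, b in sib_pairs) / len(sib_pairs)
    hits = 0
    for _ in range(n_rep):
        draws = [euclid(*rng.sample(pool, 2)) for _ in sib_pairs]
        if sum(draws) / len(draws) <= obs:
            hits += 1
    return (hits + 1) / (n_rep + 1)
```

A small P-value indicates sibling pairs are more phenotypically similar than random pairs drawn from the same population, mirroring the study's P < 0.001 result.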
Impact of socioeconomic adjustment on physicians' relative cost of care.
Timbie, Justin W; Hussey, Peter S; Adams, John L; Ruder, Teague W; Mehrotra, Ateev
2013-05-01
Ongoing efforts to profile physicians on their relative cost of care have been criticized because they do not account for differences in patients' socioeconomic status (SES). The importance of SES adjustment has not been explored in cost-profiling applications that measure costs using an episode of care framework. We assessed the relationship between SES and episode costs and the impact of adjusting for SES on physicians' relative cost rankings. We analyzed claims submitted to 3 Massachusetts commercial health plans during calendar years 2004 and 2005. We grouped patients' care into episodes, attributed episodes to individual physicians, and standardized costs for price differences across plans. We accounted for differences in physicians' case mix using indicators for episode type and a patient's severity of illness. A patient's SES was measured using an index of 6 indicators based on the zip code in which the patient lived. We estimated each physician's case mix-adjusted average episode cost and percentile rankings with and without adjustment for SES. Patients in the lowest SES quintile had $80 higher unadjusted episode costs, on average, than patients in the highest quintile. Nearly 70% of the variation in a physician's average episode cost was explained by case mix of their patients, whereas the contribution of SES was negligible. After adjustment for SES, only 1.1% of physicians changed relative cost rankings >2 percentiles. Accounting for patients' SES has little impact on physicians' relative cost rankings within an episode cost framework.
Biochemical thermodynamics: applications of Mathematica.
Alberty, Robert A
2006-01-01
The most efficient way to store thermodynamic data on enzyme-catalyzed reactions is to use matrices of species properties. Since equilibrium in enzyme-catalyzed reactions is reached at specified pH values, the thermodynamics of the reactions is discussed in terms of transformed thermodynamic properties. These transformed thermodynamic properties are complicated functions of temperature, pH, and ionic strength that can be calculated from the matrices of species values. The most important of these transformed thermodynamic properties is the standard transformed Gibbs energy of formation of a reactant (sum of species). It is the most important because when this function of temperature, pH, and ionic strength is known, all the other standard transformed properties can be calculated by taking partial derivatives. The species database in this package contains data matrices for 199 reactants. For 94 of these reactants, standard enthalpies of formation of species are known, and so standard transformed Gibbs energies, standard transformed enthalpies, standard transformed entropies, and average numbers of hydrogen atoms can be calculated as functions of temperature, pH, and ionic strength. For reactions between these 94 reactants, the changes in these properties can be calculated over a range of temperatures, pHs, and ionic strengths, and so can apparent equilibrium constants. For the other 105 reactants, only standard transformed Gibbs energies of formation and average numbers of hydrogen atoms at 298.15 K can be calculated. Loading this package provides functions of pH and ionic strength at 298.15 K for standard transformed Gibbs energies of formation and average numbers of hydrogen atoms for 199 reactants.
It also provides functions of temperature, pH, and ionic strength for the standard transformed Gibbs energies of formation, standard transformed enthalpies of formation, standard transformed entropies of formation, and average numbers of hydrogen atoms for 94 reactants. Thus loading this package makes available 774 mathematical functions for these properties. These functions can be added and subtracted to obtain changes in these properties in biochemical reactions and apparent equilibrium constants.
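The link between the standard transformed Gibbs energy of reaction and the apparent equilibrium constant mentioned above is K' = exp(-Δ_r G'°/RT). A minimal numeric sketch (the example Δ_r G'° value is illustrative, not taken from the package):

```python
import math

R = 8.31446e-3  # gas constant, kJ mol^-1 K^-1

def apparent_K(delta_r_g_prime, T=298.15):
    """Apparent equilibrium constant K' from the standard transformed
    Gibbs energy of reaction (kJ/mol) at temperature T (K)."""
    return math.exp(-delta_r_g_prime / (R * T))
```

For instance, a reaction with Δ_r G'° around -30 kJ/mol at 298.15 K has K' on the order of 10^5, which is why such reactions are effectively irreversible at a specified pH.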
Health-Related Benefits of Attaining the 8-Hr Ozone Standard
Hubbell, Bryan J.; Hallberg, Aaron; McCubbin, Donald R.; Post, Ellen
2005-01-01
During the 2000–2002 time period, between 36 and 56% of ozone monitors each year in the United States failed to meet the current ozone standard of 80 ppb for the fourth highest maximum 8-hr ozone concentration. We estimated the health benefits of attaining the ozone standard at these monitors using the U.S. Environmental Protection Agency’s Environmental Benefits Mapping and Analysis Program. We used health impact functions based on published epidemiologic studies, and valuation functions derived from the economics literature. The estimated health benefits for 2000 and 2001 are similar in magnitude, whereas the results for 2002 are roughly twice that of each of the prior 2 years. The simple average of health impacts across the 3 years includes reductions of 800 premature deaths, 4,500 hospital and emergency department admissions, 900,000 school absences, and > 1 million minor restricted activity days. The simple average of benefits (including premature mortality) across the 3 years is $5.7 billion [90% confidence interval (CI), 0.6–15.0] for the quadratic rollback simulation method and $4.9 billion (90% CI, 0.5–14.0) for the proportional rollback simulation method. Results are sensitive to the form of the standard and to assumptions about background ozone levels. If the form of the standard is based on the first highest maximum 8-hr concentration, impacts are increased by a factor of 2–3. Increasing the assumed hourly background from zero to 40 ppb reduced impacts by 30 and 60% for the proportional and quadratic attainment simulation methods, respectively. PMID:15626651
NASA Technical Reports Server (NTRS)
1974-01-01
The standard plate cells exhibited higher average end-of-charge (EOC) voltages than the cells with teflonated negative plates; they also delivered a higher capacity output in ampere-hours following these charges. All the cells reached a pressure of 20 psia before reaching the voltage limit of 1.550 volts during the pressure versus capacity test. At this pressure, the teflonated negative plate cells averaged 33.6 ampere-hours in at 1.505 volts, and the standard plate cells 35.5 ampere-hours at 1.523 volts. All cells exhibited pressure decay in the range of 1 to 7 psia during the last 30 minutes of the 1-hour open-circuit stand. Average capacity out for the teflonated and standard negative plate cells was 29.4 and 29.9 ampere-hours respectively.
Hendel, Michael D; Bryan, Jason A; Barsoum, Wael K; Rodriguez, Eric J; Brems, John J; Evans, Peter J; Iannotti, Joseph P
2012-12-05
Glenoid component malposition for anatomic shoulder replacement may result in complications. The purpose of this study was to define the efficacy of a new surgical method to place the glenoid component. Thirty-one patients were randomized for glenoid component placement with use of either novel three-dimensional computed tomographic scan planning software combined with patient-specific instrumentation (the glenoid positioning system group), or conventional computed tomographic scan, preoperative planning, and surgical technique, utilizing instruments provided by the implant manufacturer (the standard surgical group). The desired position of the component was determined preoperatively. Postoperatively, a computed tomographic scan was used to define and compare the actual implant location with the preoperative plan. In the standard surgical group, the average preoperative glenoid retroversion was -11.3° (range, -39° to 17°). In the glenoid positioning system group, the average glenoid retroversion was -14.8° (range, -27° to 7°). When the standard surgical group was compared with the glenoid positioning system group, patient-specific instrumentation technology significantly decreased (p < 0.05) the average deviation of implant position for inclination and medial-lateral offset. Overall, the average deviation in version was 6.9° in the standard surgical group and 4.3° in the glenoid positioning system group. The average deviation in inclination was 11.6° in the standard surgical group and 2.9° in the glenoid positioning system group. The greatest benefit of patient-specific instrumentation was observed in patients with retroversion in excess of 16°; the average deviation was 10° in the standard surgical group and 1.2° in the glenoid positioning system group (p < 0.001). 
Preoperative planning and patient-specific instrumentation use resulted in a significant improvement in the selection and use of the optimal type of implant and a significant reduction in the frequency of malpositioned glenoid implants. Novel three-dimensional preoperative planning, coupled with patient and implant-specific instrumentation, allows the surgeon to better define the preoperative pathology, select the optimal implant design and location, and then accurately execute the plan at the time of surgery.
March, Rod S.
2003-01-01
The 1996 measured winter snow, maximum winter snow, net, and annual balances in the Gulkana Glacier Basin were evaluated on the basis of meteorological, hydrological, and glaciological data. Averaged over the glacier, the measured winter snow balance was 0.87 meter on April 18, 1996, 1.1 standard deviation below the long-term average; the maximum winter snow balance, 1.06 meters, was reached on May 28, 1996; and the net balance (from August 30, 1995, to August 24, 1996) was -0.53 meter, 0.53 standard deviation below the long-term average. The annual balance (October 1, 1995, to September 30, 1996) was -0.37 meter. Area-averaged balances were reported using both the 1967 and 1993 area altitude distributions (the numbers previously given in this abstract use the 1993 area altitude distribution). Net balance was about 25 percent less negative using the 1993 area altitude distribution than the 1967 distribution. Annual average air temperature was 0.9 degree Celsius warmer than that recorded with the analog sensor used since 1966. Total precipitation catch for the year was 0.78 meter, 0.8 standard deviations below normal. The annual average wind speed was 3.5 meters per second in the first year of measuring wind speed. Annual runoff averaged 1.50 meters over the basin, 1.0 standard deviation below the long-term average. Glacier-surface altitude and ice-motion changes measured at three index sites document seasonal ice-speed and glacier-thickness changes. Both showed a continuation of a slowing and thinning trend present in the 1990s. The glacier terminus and lower ablation area were defined for 1996 with a handheld Global Positioning System survey of 126 locations spread out over about 4 kilometers on the lower glacier margin. From 1949 to 1996, the terminus retreated about 1,650 meters for an average retreat rate of 35 meters per year.
Total ozone trend significance from space time variability of daily Dobson data
NASA Technical Reports Server (NTRS)
Wilcox, R. W.
1981-01-01
Estimates of standard errors of total ozone time and area means, as derived from ozone's natural temporal and spatial variability and autocorrelation in middle latitudes determined from daily Dobson data are presented. Assessing the significance of apparent total ozone trends is equivalent to assessing the standard error of the means. Standard errors of time averages depend on the temporal variability and correlation of the averaged parameter. Trend detectability is discussed, both for the present network and for satellite measurements.
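The dependence of the standard error of a time average on temporal autocorrelation can be sketched with the common AR(1) effective-sample-size approximation. This is a standard textbook approximation under an assumed lag-1 autocorrelation, not the paper's exact computation from the Dobson network data.

```python
import math

def se_of_mean_ar1(s, n, rho):
    """Standard error of the mean of n equally spaced observations with
    standard deviation s and lag-1 autocorrelation rho (AR(1) approximation).
    Positive autocorrelation shrinks the effective sample size and so
    inflates the standard error relative to the independent-data case."""
    n_eff = n * (1.0 - rho) / (1.0 + rho)
    return s / math.sqrt(n_eff)
```

With rho = 0 this reduces to the familiar s/sqrt(n); for strongly autocorrelated daily ozone values the standard error, and hence the threshold for a detectable trend, is substantially larger.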
40 CFR 91.103 - Averaging, banking, and trading of exhaust emission credits.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 20 2011-07-01 2011-07-01 false Averaging, banking, and trading of... Standards and Certification Provisions § 91.103 Averaging, banking, and trading of exhaust emission credits. Regulations regarding averaging, banking, and trading provisions along with applicable recordkeeping...
40 CFR 89.111 - Averaging, banking, and trading of exhaust emissions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 20 2011-07-01 2011-07-01 false Averaging, banking, and trading of... ENGINES Emission Standards and Certification Provisions § 89.111 Averaging, banking, and trading of exhaust emissions. Regulations regarding the availability of an averaging, banking, and trading program...
40 CFR 89.111 - Averaging, banking, and trading of exhaust emissions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 21 2013-07-01 2013-07-01 false Averaging, banking, and trading of... ENGINES Emission Standards and Certification Provisions § 89.111 Averaging, banking, and trading of exhaust emissions. Regulations regarding the availability of an averaging, banking, and trading program...
40 CFR 89.111 - Averaging, banking, and trading of exhaust emissions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 20 2014-07-01 2013-07-01 true Averaging, banking, and trading of... ENGINES Emission Standards and Certification Provisions § 89.111 Averaging, banking, and trading of exhaust emissions. Regulations regarding the availability of an averaging, banking, and trading program...
40 CFR 91.103 - Averaging, banking, and trading of exhaust emission credits.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 20 2014-07-01 2013-07-01 true Averaging, banking, and trading of... Standards and Certification Provisions § 91.103 Averaging, banking, and trading of exhaust emission credits. Regulations regarding averaging, banking, and trading provisions along with applicable recordkeeping...
40 CFR 91.103 - Averaging, banking, and trading of exhaust emission credits.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 21 2012-07-01 2012-07-01 false Averaging, banking, and trading of... Standards and Certification Provisions § 91.103 Averaging, banking, and trading of exhaust emission credits. Regulations regarding averaging, banking, and trading provisions along with applicable recordkeeping...
40 CFR 91.103 - Averaging, banking, and trading of exhaust emission credits.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 21 2013-07-01 2013-07-01 false Averaging, banking, and trading of... Standards and Certification Provisions § 91.103 Averaging, banking, and trading of exhaust emission credits. Regulations regarding averaging, banking, and trading provisions along with applicable recordkeeping...
40 CFR 89.111 - Averaging, banking, and trading of exhaust emissions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 21 2012-07-01 2012-07-01 false Averaging, banking, and trading of... ENGINES Emission Standards and Certification Provisions § 89.111 Averaging, banking, and trading of exhaust emissions. Regulations regarding the availability of an averaging, banking, and trading program...
40 CFR 91.103 - Averaging, banking, and trading of exhaust emission credits.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Averaging, banking, and trading of... Standards and Certification Provisions § 91.103 Averaging, banking, and trading of exhaust emission credits. Regulations regarding averaging, banking, and trading provisions along with applicable recordkeeping...
40 CFR 89.111 - Averaging, banking, and trading of exhaust emissions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Averaging, banking, and trading of... ENGINES Emission Standards and Certification Provisions § 89.111 Averaging, banking, and trading of exhaust emissions. Regulations regarding the availability of an averaging, banking, and trading program...
Langevin equation with fluctuating diffusivity: A two-state model
NASA Astrophysics Data System (ADS)
Miyaguchi, Tomoshige; Akimoto, Takuma; Yamamoto, Eiji
2016-07-01
Recently, anomalous subdiffusion, aging, and scatter of the diffusion coefficient have been reported in many single-particle-tracking experiments, though the origins of these behaviors are still elusive. Here, as a model to describe such phenomena, we investigate a Langevin equation with diffusivity fluctuating between a fast and a slow state. Namely, the diffusivity follows a dichotomous stochastic process. We assume that the sojourn time distributions of these two states are given by power laws. It is shown that, for a nonequilibrium ensemble, the ensemble-averaged mean-square displacement (MSD) shows transient subdiffusion. In contrast, the time-averaged MSD shows normal diffusion, but an effective diffusion coefficient transiently shows aging behavior. The propagator is non-Gaussian for short time and converges to a Gaussian distribution in a long-time limit; this convergence to Gaussian is extremely slow for some parameter values. For equilibrium ensembles, both ensemble-averaged and time-averaged MSDs show only normal diffusion and thus we cannot detect any traces of the fluctuating diffusivity with these MSDs. Therefore, as an alternative approach to characterizing the fluctuating diffusivity, the relative standard deviation (RSD) of the time-averaged MSD is utilized and it is shown that the RSD exhibits slow relaxation as a signature of the long-time correlation in the fluctuating diffusivity. Furthermore, it is shown that the RSD is related to a non-Gaussian parameter of the propagator. To obtain these theoretical results, we develop a two-state renewal theory as an analytical tool.
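A minimal simulation of the two-state fluctuating-diffusivity idea can illustrate the time-averaged MSD. This sketch uses geometric (memoryless) sojourn times rather than the power-law sojourn times analyzed in the paper, so it captures only the dichotomous switching, not the aging behavior:

```python
import random

def simulate_two_state(n_steps, d_fast, d_slow, p_switch, dt=1.0, seed=1):
    """1-D random walk whose diffusivity switches between two states.

    Each step, the walker switches state with probability p_switch
    (geometric sojourn times, a simplification of the power-law case)
    and takes a Gaussian step with variance 2 * D * dt.
    """
    rng = random.Random(seed)
    d = d_fast
    x, traj = 0.0, [0.0]
    for _ in range(n_steps):
        if rng.random() < p_switch:  # dichotomous switching
            d = d_slow if d == d_fast else d_fast
        x += rng.gauss(0.0, (2.0 * d * dt) ** 0.5)
        traj.append(x)
    return traj

def time_averaged_msd(traj, lag):
    """Time-averaged mean-square displacement at a given lag."""
    n = len(traj) - lag
    return sum((traj[i + lag] - traj[i]) ** 2 for i in range(n)) / n

traj = simulate_two_state(20000, d_fast=1.0, d_slow=0.1, p_switch=0.01)
```

For a long trajectory the time-averaged MSD grows roughly linearly in the lag, consistent with the normal diffusion of the time-averaged MSD noted in the abstract.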
Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Castro, Antonio L; Moreno, Oscar; Pascual, Mario
2018-01-01
This research shows a protocol to assess the computational complexity of querying relational and non-relational (NoSQL (not only Structured Query Language)) standardized electronic health record (EHR) medical information database systems (DBMS). It uses a set of three doubling-sized databases, i.e. databases storing 5000, 10,000 and 20,000 realistic standardized EHR extracts, in three different database management systems (DBMS): relational MySQL object-relational mapping (ORM), document-based NoSQL MongoDB, and native extensible markup language (XML) NoSQL eXist. The average response times to six complexity-increasing queries were computed, and the results showed a linear behavior in the NoSQL cases. In the NoSQL field, MongoDB presents a much flatter linear slope than eXist. NoSQL systems may also be more appropriate to maintain standardized medical information systems due to the special nature of the updating policies of medical information, which should not affect the consistency and efficiency of the data stored in NoSQL databases. One limitation of this protocol is the lack of direct results of improved relational systems such as archetype relational mapping (ARM) with the same data. However, the interpolation of doubling-size database results to those presented in the literature and other published results suggests that NoSQL systems might be more appropriate in many specific scenarios and problems to be solved. For example, NoSQL may be appropriate for document-based tasks such as EHR extracts used in clinical practice, or edition and visualization, or situations where the aim is not only to query medical information, but also to restore the EHR in exactly its original form. PMID:29608174
41 CFR 102-34.55 - Are there fleet average fuel economy standards we must meet?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false Are there fleet average fuel economy standards we must meet? 102-34.55 Section 102-34.55 Public Contracts and Property Management Federal Property Management Regulations System (Continued) FEDERAL MANAGEMENT REGULATION PERSONAL PROPERTY 34-MOTOR VEHICLE MANAGEMENT Obtainin...
Ground-level Ozone (Smog) Information | New England | US ...
2017-09-05
Ground-level ozone presents a serious air quality problem in New England. In 2008, EPA revised the ozone standard to a level of 0.075 parts per million, 8-hour average. Over the last 5 years (2006 through 2010), there have been an average of 31 days per summer when New England's air exceeded this standard.
40 CFR 406.16 - Pretreatment standards for new sources.
Code of Federal Regulations, 2014 CFR
2014-07-01
... GUIDELINES AND STANDARDS GRAIN MILLS POINT SOURCE CATEGORY Corn Wet Milling Subcategory § 406.16 Pretreatment... new corn wet milling source to be discharged to the POTW (gallons per one hour for flow and pounds per day for BOD5 and TSS). Q = average existing waste load to POTW. R = average waste load for the new...
40 CFR 406.16 - Pretreatment standards for new sources.
Code of Federal Regulations, 2012 CFR
2012-07-01
... GUIDELINES AND STANDARDS GRAIN MILLS POINT SOURCE CATEGORY Corn Wet Milling Subcategory § 406.16 Pretreatment... new corn wet milling source to be discharged to the POTW (gallons per one hour for flow and pounds per day for BOD5 and TSS). Q = average existing waste load to POTW. R = average waste load for the new...
40 CFR 406.16 - Pretreatment standards for new sources.
Code of Federal Regulations, 2013 CFR
2013-07-01
... GUIDELINES AND STANDARDS GRAIN MILLS POINT SOURCE CATEGORY Corn Wet Milling Subcategory § 406.16 Pretreatment... new corn wet milling source to be discharged to the POTW (gallons per one hour for flow and pounds per day for BOD5 and TSS). Q = average existing waste load to POTW. R = average waste load for the new...
ERIC Educational Resources Information Center
Steenman, Sebastiaan C.; Bakker, Wieger E.; van Tartwijk, Jan W. F.
2016-01-01
The first-year grade point average (FYGPA) is the predominant measure of student success in most studies on university admission. Previous cognitive achievements measured with high school grades or standardized tests have been found to be the strongest predictors of FYGPA. For this reason, standardized tests measuring cognitive achievement are…
Specification for a standard radar sea clutter model
NASA Astrophysics Data System (ADS)
Paulus, Richard A.
1990-09-01
A model for the average sea clutter radar cross section is proposed for the Oceanographic and Atmospheric Master Library. This model is a function of wind speed (or sea state), wind direction relative to the antenna, refractive conditions, radar antenna height, frequency, polarization, horizontal beamwidth, and compressed pulse length. The model is fully described, a FORTRAN 77 computer listing is provided, and test cases are given to demonstrate the proper operation of the program.
Environmental Assessment: Construct New Pavilion Playground at Grand Forks AFB, North Dakota
2003-07-12
...cottonwood, and green ash. Dutch elm disease has killed many of the elms. European buckthorn (a highly invasive exotic species), chokecherry, and...foot. Land at the base is relatively flat, with elevations ranging from 880 to 920 feet mean sea level (MSL) and averaging about 890 feet MSL. The land
Traffic-Related Air Pollution, Blood Pressure, and Adaptive Response of Mitochondrial Abundance.
Zhong, Jia; Cayir, Akin; Trevisi, Letizia; Sanchez-Guerra, Marco; Lin, Xinyi; Peng, Cheng; Bind, Marie-Abèle; Prada, Diddier; Laue, Hannah; Brennan, Kasey J M; Dereix, Alexandra; Sparrow, David; Vokonas, Pantel; Schwartz, Joel; Baccarelli, Andrea A
2016-01-26
Exposure to black carbon (BC), a tracer of vehicular-traffic pollution, is associated with increased blood pressure (BP). Identifying biological factors that attenuate BC effects on BP can inform prevention. We evaluated the role of mitochondrial abundance, an adaptive mechanism compensating for cellular-redox imbalance, in the BC-BP relationship. At ≥ 1 visits among 675 older men from the Normative Aging Study (observations=1252), we assessed daily BP and ambient BC levels from a stationary monitor. To determine blood mitochondrial abundance, we used whole blood to analyze mitochondrial-to-nuclear DNA ratio (mtDNA/nDNA) using quantitative polymerase chain reaction. Every standard deviation increase in the 28-day BC moving average was associated with 1.97 mm Hg (95% confidence interval [CI], 1.23-2.72; P<0.0001) and 3.46 mm Hg (95% CI, 2.06-4.87; P<0.0001) higher diastolic and systolic BP, respectively. Positive BC-BP associations existed throughout all time windows. BC moving averages (5-day to 28-day) were associated with increased mtDNA/nDNA; every standard deviation increase in 28-day BC moving average was associated with 0.12 standard deviation (95% CI, 0.03-0.20; P=0.007) higher mtDNA/nDNA. High mtDNA/nDNA significantly attenuated the BC-systolic BP association throughout all time windows. The estimated effect of 28-day BC moving average on systolic BP was 1.95-fold larger for individuals at the lowest mtDNA/nDNA quartile midpoint (4.68 mm Hg; 95% CI, 3.03-6.33; P<0.0001), in comparison with the top quartile midpoint (2.40 mm Hg; 95% CI, 0.81-3.99; P=0.003). In older adults, short-term to moderate-term ambient BC levels were associated with increased BP and blood mitochondrial abundance. Our findings indicate that increased blood mitochondrial abundance is a compensatory response and attenuates the cardiac effects of BC. © 2015 American Heart Association, Inc.
77 FR 72766 - Small Business Size Standards: Support Activities for Mining
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-06
... its entirety for parties who have an interest in SBA's overall approach to establishing, evaluating....gov, Docket ID: SBA-2009-0008. SBA continues to welcome comments on its methodology from interested.... Average firm size. SBA computes two measures of average firm size: simple average and weighted average...
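The two firm-size measures named in the notice can be sketched as follows. The receipts-share weighting shown is the standard weighted-mean construction and is an assumption as to SBA's exact formula; the receipts figures are made up:

```python
def simple_average(sizes):
    """Simple average: total receipts divided by the number of firms."""
    return sum(sizes) / len(sizes)

def weighted_average(sizes):
    """Receipts-weighted average: each firm is weighted by its share of
    total receipts, so large firms count more than small ones."""
    total = sum(sizes)
    return sum(s * (s / total) for s in sizes)

firms = [1.0, 2.0, 3.0, 10.0]   # annual receipts in $ million (illustrative)
print(simple_average(firms))    # 4.0
print(weighted_average(firms))  # (1 + 4 + 9 + 100) / 16 = 7.125
```

The gap between the two measures (4.0 versus 7.125 here) widens as the size distribution becomes more skewed toward a few large firms.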
NASA Astrophysics Data System (ADS)
Kim, Byung Chan; Park, Seong-Ook
In order to determine exposure compliance with the electromagnetic fields from a base station antenna in the far-field region, we should calculate the spatially averaged field value in a defined space. This value is calculated from measurements obtained at several points within the restricted space. According to the ICNIRP guidelines, at each point in the space, the reference levels are averaged over any 6 min (from 100 kHz to 10 GHz) for the general public. Therefore, the more points we use, the longer the measurement takes. For practical application, it is very advantageous to spend less time on measurement. In this paper, we analyzed the difference between average values over 6 min and over shorter periods and compared it with the standard uncertainty for measurement drift. Based on the standard deviation from the 6 min averaging value, the proposed minimum averaging time is 1 min.
An improved moving average technical trading rule
NASA Astrophysics Data System (ADS)
Papailias, Fotis; Thomakos, Dimitrios D.
2015-06-01
This paper proposes a modified version of the widely used price and moving average cross-over trading strategies. The suggested approach (presented in its 'long only' version) is a combination of cross-over 'buy' signals and a dynamic threshold value which acts as a dynamic trailing stop. The trading behaviour and performance from this modified strategy are different from the standard approach with results showing that, on average, the proposed modification increases the cumulative return and the Sharpe ratio of the investor while exhibiting smaller maximum drawdown and smaller drawdown duration than the standard strategy.
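A minimal sketch of a long-only price/SMA crossover with a dynamic trailing stop. The fixed-fraction stop rule below is a generic trailing stop standing in for the paper's specific dynamic threshold, and the price series is illustrative:

```python
def sma(prices, window):
    """Simple moving average; None until the window fills."""
    out = []
    for i in range(len(prices)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(prices[i + 1 - window:i + 1]) / window)
    return out

def crossover_with_trailing_stop(prices, window=3, stop_frac=0.05):
    """Buy when price rises above its SMA; exit when price falls more
    than stop_frac below the running maximum since entry."""
    ma = sma(prices, window)
    in_pos, peak, signals = False, 0.0, []
    for p, m in zip(prices, ma):
        if not in_pos and m is not None and p > m:
            in_pos, peak = True, p
            signals.append("buy")
        elif in_pos:
            peak = max(peak, p)           # trailing stop tracks the peak
            if p < peak * (1 - stop_frac):
                in_pos = False
                signals.append("sell")
            else:
                signals.append("hold")
        else:
            signals.append("none")
    return signals

print(crossover_with_trailing_stop([10, 9, 8, 9, 10, 11, 12, 11, 10]))
# buy at index 3, sell at index 7
```

Because the stop ratchets up with the running maximum, it locks in gains on the way up rather than exiting only on a crossover back below the moving average.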
Indoor air quality in an automotive assembly plant in Selangor, Malaysia.
Edimansyah, B A; Rusli, B N; Naing, L; Azwan, B A; Aziah, B D
2009-01-01
The purpose of this study was to determine the indoor air quality (IAQ) status of an automotive assembly plant in Rawang, Selangor, Malaysia using selected IAQ parameters: carbon dioxide (CO2), carbon monoxide (CO), temperature, relative humidity (RH) and respirable particulate matter (PM10). A cross-sectional study was conducted in the paint shop and body shop sections of the plant in March 2005. The Q-TRAK Plus IAQ Monitor was used to record CO, CO2, RH and temperature, whilst PM10 was measured using the DUSTTRAK Aerosol Monitor as an 8-hour time-weighted average (8-h TWA). The average temperature, RH and PM10 in both the paint shop and body shop sections exceeded the Department of Safety and Health (DOSH) standards. The average RH and CO concentrations were slightly higher in the body shop section than in the paint shop section, while the average temperature and CO2 concentration were slightly higher in the paint shop section. There was no difference in the average PM10 concentrations between the two sections.
Lowenthal, Mark S; Yen, James; Bunk, David M; Phinney, Karen W
2010-05-01
An isotope-dilution liquid chromatography-tandem mass spectrometry (ID LC-MS/MS) measurement procedure was developed to accurately quantify amino acid concentrations in National Institute of Standards and Technology (NIST) Standard Reference Material (SRM) 2389a, amino acids in 0.1 mol/L hydrochloric acid. Seventeen amino acids were quantified using selected reaction monitoring on a triple quadrupole mass spectrometer. LC-MS/MS results were compared to gravimetric measurements from the preparation of SRM 2389a, a reference material developed at NIST and intended for use in intra-laboratory calibrations and quality control. Quantitative mass spectrometry results and gravimetric values were statistically combined into NIST-certified mass fraction values with associated uncertainty estimates. Coefficients of variation (CV) for the repeatability of the LC-MS/MS measurements ranged from 0.33% to 2.7% across amino acids, with an average CV of 1.2%. The average relative expanded uncertainty of the certified values, including Type A and Type B uncertainties, was 3.5%. The mean accuracy of the LC-MS/MS measurements agreed with the gravimetric preparation values to within |1.1|% for all amino acids. NIST SRM 2389a will be available for characterization of routine methods for amino acid analysis and serves as a standard for higher-order measurement traceability. This is the first time an ID LC-MS/MS methodology has been applied to quantifying amino acids in a NIST SRM material.
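The repeatability statistic reported above, the coefficient of variation, is simply the sample standard deviation relative to the mean; a minimal sketch with illustrative numbers:

```python
import math

def coefficient_of_variation(values):
    """CV = sample standard deviation / mean, expressed in percent."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in values) / (n - 1))
    return 100.0 * sd / mean

# Three replicate measurements of one analyte (illustrative data).
print(coefficient_of_variation([9.0, 10.0, 11.0]))  # 10.0 (%)
```

Being dimensionless, the CV lets repeatability be compared across amino acids whose absolute concentrations differ widely.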
Espinoza, Manuel; Santorelli, Gillian; Delgado, Iris
2015-01-01
Objective Chile, a South American country recently defined as a high-income nation, carried out a major healthcare system reform from 2005 onwards that aimed at reducing socioeconomic inequality in health. This study aimed to estimate income-related inequality in self-reported health status (SRHS) in 2000 and 2013, before and after the reform, for the entire adult Chilean population. Methods Using data on equivalized household income and adult SRHS from the 2000 and 2013 CASEN surveys (independent samples of 101 046 and 172 330 adult participants, respectively) we estimated Erreygers concentration indices (CIs) for above average SRHS for both years. We also decomposed the contribution of both “legitimate” standardizing variables (age and sex) and “illegitimate” variables (income, education, occupation, ethnicity, urban/rural, marital status, number of people living in the household, and healthcare entitlement). Results There was a significant concentration of above average SRHS favoring richer people in Chile in both years, which was less pronounced in 2013 than 2000 (Erreygers corrected CI 0.165 [Standard Error, SE 0.007] in 2000 and 0.047 [SE 0.008] in 2013). To help interpret the magnitude of this decline, adults in the richest fifth of households were 33% more likely than those in the poorest fifth to report above-average health in 2000, falling to 11% in 2013. In 2013, the contribution of illegitimate factors to income-related inequality in SRHS remained higher than the contribution of legitimate factors. Conclusions Income-related inequality in SRHS in Chile has fallen after the equity-based healthcare reform. Further research is needed to ascertain how far this fall in health inequality can be attributed to the 2005 healthcare reform as opposed to economic growth and other determinants of health that changed during the period. PMID:26418354
Kalsi, Harpoonam J; Wang, Yon Jon; Bavisha, Kalpesh; Bartlett, David
2010-03-01
The average number of visits for the construction of metal-based and acrylic dentures by junior hospital staff was 10. Our hypothesis was that supervision would optimise the number of visits and reduce the need for remakes. The first audit cycle was retrospective and included all patients treated by SHOs in the Prosthodontics Department. The standard of care was compared to the guidelines of the British Society for the Study of Prosthetic Dentistry. The re-audit showed that the number of visits to completion was reduced by 2 for both denture types, and the average treatment time fell from 31 weeks to 22 weeks. These improvements were directly related to improved supervision by senior staff.
Computation of ancestry scores with mixed families and unrelated individuals.
Zhou, Yi-Hui; Marron, James S; Wright, Fred A
2018-03-01
The issue of robustness to family relationships in computing genotype ancestry scores such as eigenvector projections has received increased attention in genetic association, and is particularly challenging when sets of both unrelated individuals and closely related family members are included. The current standard is to compute loadings (left singular vectors) using unrelated individuals and to compute projected scores for remaining family members. However, projected ancestry scores from this approach suffer from shrinkage toward zero. We consider two main novel strategies: (i) matrix substitution based on decomposition of a target family-orthogonalized covariance matrix, and (ii) using family-averaged data to obtain loadings. We illustrate the performance via simulations, including resampling from 1000 Genomes Project data, and analysis of a cystic fibrosis dataset. The matrix substitution approach has similar performance to the current standard, but is simple and uses only a genotype covariance matrix, while the family-average method shows superior performance. Our approaches are accompanied by novel ancillary approaches that provide considerable insight, including individual-specific eigenvalue scree plots. © 2017 The Authors. Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.
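Strategy (ii), computing loadings from family-averaged data, begins by collapsing each family's genotype rows to a single averaged row; a minimal sketch of that averaging step, with toy family labels and genotypes:

```python
def family_average(genotypes, family_ids):
    """Collapse genotype rows to one averaged row per family.

    genotypes: list of rows (one per individual), each a list of
    per-variant allele counts; family_ids: parallel list of labels.
    Returns a dict mapping family id -> averaged genotype row.
    """
    groups = {}
    for row, fid in zip(genotypes, family_ids):
        groups.setdefault(fid, []).append(row)
    return {
        fid: [sum(col) / len(rows) for col in zip(*rows)]
        for fid, rows in groups.items()
    }

# Two siblings in family "A", one unrelated individual in "B".
print(family_average([[0, 1, 2], [2, 1, 0], [1, 1, 1]], ["A", "A", "B"]))
```

Loadings would then be computed from these family-level rows (plus the unrelated individuals), rather than from correlated individual rows, avoiding the shrinkage that affects projected scores.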
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lei, Y.; Cheng, T. -L.; Wen, Y. H.
2017-07-05
Microstructure evolution driven by thermal coarsening is an important factor in the loss of oxygen reduction reaction rates in SOFC cathodes. In this work, the effect of the initial microstructure on microstructure evolution in an SOFC cathode is investigated using a recently developed phase field model. Specifically, we tune the phase fraction, the average grain size, the standard deviation of the grain size, and the grain shape in the initial microstructure, and explore their effect on the evolution of the grain size, the density of triple phase boundary (TPB), the specific surface area (SSA), and the effective conductivity in LSM-YSZ cathodes. It is found that the degradation rate of TPB density and SSA of LSM is lower with a smaller LSM phase fraction (with constant porosity assumed) and a greater average grain size, while the degradation rate of effective conductivity can also be tuned by adjusting the standard deviation of the grain size distribution and the grain aspect ratio. The implication of this study for the design of an optimal initial microstructure of SOFC cathodes is discussed.
Adjuvant corneal crosslinking to prevent hyperopic LASIK regression.
Aslanides, Ioannis M; Mukherjee, Achyut N
2013-01-01
To report the long term outcomes, safety, stability, and efficacy in a pilot series of simultaneous hyperopic laser assisted in situ keratomileusis (LASIK) and corneal crosslinking (CXL). A small cohort series of five eyes, with clinically suboptimal topography and/or thickness, underwent LASIK surgery with immediate riboflavin application under the flap, followed by UV light irradiation. Postoperative assessment was performed at 1, 3, 6, and 12 months, with late follow up at 4 years, and results were compared with a matched cohort that received LASIK only. The average age of the LASIK-CXL group was 39 years (26-46), and the average spherical equivalent hyperopic refractive error was +3.45 diopters (standard deviation 0.76; range 2.5 to 4.5). All eyes maintained refractive stability over the 4 years. There were no complications related to CXL, and topographic and clinical outcomes were as expected for standard LASIK. This limited series suggests that simultaneous LASIK and CXL for hyperopia is safe. Outcomes of the small cohort suggest that this technique may be promising for ameliorating hyperopic regression, presumed to be biomechanical in origin, and may also address ectasia risk.
Darsová, Denisa; Pochop, Pavel; Štěpánková, Jana; Dotřelová, Dagmar
2018-01-01
To evaluate the efficacy of pars plana vitrectomy (PPV) as an anti-inflammatory therapy in pediatric recurrent intermediate uveitis. A retrospective study evaluated the long-term results of PPV indicated for intermediate uveitis with a mean observation period of 10.3 years (range 7-15.6 years) in 6 children (mean age 8 years, range 6-12 years). Pars plana vitrectomy was performed on 10 eyes in the standard manner and was initiated by vitreous sampling for laboratory examination. Data recorded were perioperative or postoperative vitrectomy complications, anatomic and functional results of PPV, and preoperative and postoperative best-corrected Snellen visual acuity. No perioperative or postoperative complications were observed. Bacteriologic, virologic, mycotic, and cytologic analysis of the vitreous was negative in all tested children. Five eyes were subsequently operated on for posterior subcapsular cataracts. An average preoperative visual acuity of 0.32 improved to an average postoperative visual acuity of 0.8. In the case of systemic immunosuppressive treatment failure in pediatric uveitis, particularly in eyes with cystoid macular edema, we recommend PPV relatively early.
Potentiometric sensors for the selective determination of sulbutiamine.
Ahmed, M A; Elbeshlawy, M M
1999-11-01
Five novel polyvinyl chloride (PVC) matrix membrane sensors for the selective determination of the sulbutiamine (SBA) cation are described. These sensors are based on molybdate, tetraphenylborate, reineckate, phosphotungstate and phosphomolybdate as possible ion-pairing agents. They display rapid, near-Nernstian, stable response over a relatively wide concentration range of 1x10(-2)-1x10(-6) M sulbutiamine, with calibration slopes of 28-32.6 mV decade(-1) over a reasonable pH range of 2-6. The proposed sensors proved to have good selectivity for SBA over some inorganic and organic cations. The five potentiometric sensors were applied successfully to the determination of SBA in a pharmaceutical preparation (arcalion-200) using both direct potentiometry and potentiometric titration. Direct potentiometric determination of microgram quantities of SBA gave average recoveries of 99.4% and 99.3%, with mean standard deviations of 0.7 and 0.3 for pure SBA and the arcalion-200 formulation, respectively. Potentiometric titration of milligram quantities of SBA gave average recoveries of 99.3% and 98.7%, with mean standard deviations of 0.7 and 1.2 for pure SBA and the arcalion-200 formulation, respectively.
Fujino, Yuri; Asaoka, Ryo; Murata, Hiroshi; Miki, Atsuya; Tanito, Masaki; Mizoue, Shiro; Mori, Kazuhiko; Suzuki, Katsuyoshi; Yamashita, Takehiro; Kashiwagi, Kenji; Shoji, Nobuyuki
2016-04-01
To develop a large-scale real clinical database of glaucoma (Japanese Archive of Multicentral Databases in Glaucoma: JAMDIG) and to investigate the effect of treatment. The study included a total of 1348 eyes of 805 primary open-angle glaucoma patients with 10 visual fields (VFs) measured with 24-2 or 30-2 Humphrey Field Analyzer (HFA) and intraocular pressure (IOP) records in 10 institutes in Japan. Those with 10 reliable VFs were further identified (638 eyes of 417 patients). Mean total deviation (mTD) of the 52 test points in the 24-2 HFA VF was calculated, and the relationship between mTD progression rate and seven variables (age, mTD of baseline VF, average IOP, standard deviation (SD) of IOP, previous argon/selective laser trabeculoplasties (ALT/SLT), previous trabeculectomy, and previous trabeculotomy) was analyzed. The mTD in the initial VF was -6.9 ± 6.2 dB and the mTD progression rate was -0.26 ± 0.46 dB/year. Mean IOP during the follow-up period was 13.5 ± 2.2 mm Hg. Age and SD of IOP were related to mTD progression rate. However, in eyes with average IOP below 15 and also 13 mm Hg, only age and baseline VF mTD were related to mTD progression rate. Age and the degree of VF damage were related to future progression. Average IOP was not related to the progression rate; however, fluctuation of IOP was associated with faster progression, although this was not the case when average IOP was below 15 mm Hg.
Schlunssen, V; Sigsgaard, T; Schaumburg, I; Kromhout, H
2004-01-01
Background: Exposure-response analyses in occupational studies rely on the ability to distinguish workers with regard to exposures of interest. Aims: To evaluate different estimates of current average exposure in an exposure-response analysis on dust exposure and cross-shift decline in FEV1 among woodworkers. Methods: Personal dust samples (n = 2181) as well as data on lung function parameters were available for 1560 woodworkers from 54 furniture industries. The exposure to wood dust for each worker was calculated in eight different ways using individual measurements, group based exposure estimates, a weighted estimate of individual and group based exposure estimates, and predicted values from mixed models. Exposure-response relations on cross-shift changes in FEV1 and exposure estimates were explored. Results: A positive exposure-response relation between average dust exposure and cross-shift FEV1 was shown for non-smokers only and appeared to be most pronounced among pine workers. In general, the highest slope and standard error (SE) was revealed for grouping by a combination of task and factory size, the lowest slope and SE was revealed for estimates based on individual measurements, with the weighted estimate and the predicted values in between. Grouping by quintiles of average exposure for task and factory combinations revealed low slopes and high SE, despite a high contrast. Conclusion: For non-smokers, average dust exposure and cross-shift FEV1 were associated in an exposure dependent manner, especially among pine workers. This study confirms the consequences of using different exposure assessment strategies studying exposure-response relations. It is possible to optimise exposure assessment combining information from individual and group based exposure estimates, for instance by applying predicted values from mixed effects models. PMID:15377768
Grundstein, Andrew J; Hosokawa, Yuri; Casa, Douglas J
2018-01-01
Weather-based activity modification in athletics is an important way to minimize heat illnesses. However, many commonly used heat-safety guidelines include a uniform set of heat-stress thresholds that do not account for geographic differences in acclimatization. To determine if heat-related fatalities among American football players occurred on days with unusually stressful weather conditions based on the local climate and to assess the need for regional heat-safety guidelines. Cross-sectional study. Data from incidents of fatal exertional heat stroke (EHS) in American football players were obtained from the National Center for Catastrophic Sport Injury Research and the Korey Stringer Institute. Sixty-one American football players at all levels of competition with fatal EHSs from 1980 to 2014. We used the wet bulb globe temperature (WBGT) and a z-score WBGT standardized to local climate conditions from 1991 to 2010 to assess the absolute and relative magnitudes of heat stress, respectively. We observed a poleward decrease in exposure WBGTs during fatal EHSs. In milder climates, 80% of cases occurred at above-average WBGTs, and 50% occurred at WBGTs greater than 1 standard deviation from the long-term mean; however, in hotter climates, half of the cases occurred at near average or below average WBGTs. The combination of lower exposure WBGTs and frequent extreme climatic values in milder climates during fatal EHSs indicates the need for regional activity-modification guidelines with lower, climatically appropriate weather-based thresholds. Established activity-modification guidelines, such as those from the American College of Sports Medicine, work well in the hotter climates, such as the southern United States, where hot and humid weather conditions are common.
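The study's z-score WBGT standardizes an event day's heat stress against the local long-term climatology (its mean and standard deviation). A minimal sketch with hypothetical climatology values:

```python
import numpy as np

def wbgt_z_score(wbgt_event, climatology):
    """Event-day WBGT expressed as a z-score against the local
    long-term climatology (its mean and standard deviation)."""
    clim = np.asarray(climatology, dtype=float)
    return (wbgt_event - clim.mean()) / clim.std(ddof=1)

# Hypothetical local climatology with mean 28 C and SD 2 C
print(wbgt_z_score(31.0, [26.0, 28.0, 30.0]))  # 1.5
```

A z-score above 1 flags a day that is unusually stressful for that location, even if the absolute WBGT would be unremarkable in a hotter climate.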
ALCOHOL CONTENT VARIATION OF BAR AND RESTAURANT DRINKS IN NORTHERN CALIFORNIA
Kerr, William C.; Patterson, Deidre; Koenen, Mary Albert; Greenfield, Thomas K.
2008-01-01
Objective To estimate the average of, and sources of variation in, the alcohol content of drinks served on-premise in 10 Northern California counties. Methods Focus groups of bartenders were conducted to evaluate potential sources of drink alcohol content variation. In the main study, 80 establishments were visited by a team of research personnel who purchased and measured the volume of particular beer, wine and spirits drinks. Brand or analysis of a sample of the drink was used to determine the alcohol concentration by volume. Results The average wine drink was found to be 43% larger than a standard drink with no difference between red and white wine. The average draught beer was 22% larger than the standard. Spirits drinks differed by type with the average shot being equal to one standard drink while mixed drinks were 42% larger. Variation in alcohol content was particularly wide for wine and mixed spirits drinks. No significant differences in mean drink alcohol content were seen by county for beer or spirits but one county was lower than two others for wine. Conclusions On premise drinks typically contained more alcohol than the standard drink with the exception of shots and bottled beers. Wine and mixed spirits drinks were the largest with nearly 1.5 times the alcohol of a standard drink on average. Consumers should be made aware of these substantial differences and key sources of variation in drink alcohol content and research studies should utilize this information in the interpretation of reported numbers of drinks. PMID:18616674
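The drink-size comparisons above reduce to served volume times alcohol concentration by volume, divided by the ethanol content of one US standard drink (about 17.7 ml, or 0.6 fl oz, of pure ethanol). A sketch; the pour volume and ABV below are illustrative, not study measurements:

```python
US_STANDARD_DRINK_ML = 17.7  # ~0.6 fl oz of pure ethanol

def standard_drinks(volume_ml, abv):
    """Standard-drink equivalents of a served beverage: served volume
    times alcohol concentration by volume, divided by the ethanol
    in one US standard drink."""
    return volume_ml * abv / US_STANDARD_DRINK_ML

# A hypothetical 175 ml wine pour at 14.5% ABV
print(round(standard_drinks(175, 0.145), 2))  # 1.43
```

A pour like this carries roughly 43% more alcohol than one standard drink, the same magnitude as the average wine drink reported above.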
Accounting for heterogeneous treatment effects in the FDA approval process.
Malani, Anup; Bembom, Oliver; van der Laan, Mark
2012-01-01
The FDA employs an average-patient standard when reviewing drugs: it approves a drug only if it is safe and effective for the average patient in a clinical trial. It is common, however, for patients to respond differently to a drug. Therefore, the average-patient standard can reject a drug that benefits certain patient subgroups (false negatives) and even approve a drug that harms other patient subgroups (false positives). These errors increase the cost of drug development, and thus of health care, by wasting research on unproductive or unapproved drugs. The reason the FDA sticks with an average-patient standard is concern about opportunism by drug companies: with enough data dredging, a drug company can always find some subgroup of patients that appears to benefit from its drug, even if the subgroup truly does not. In this paper we offer alternatives to the average-patient standard that reduce the risk of false negatives without increasing false positives from drug company opportunism. These proposals combine changes to institutional design (evaluation of trial data by an independent auditor) with statistical tools that reinforce the new institutional design, specifically by ensuring the auditor is truly independent of drug companies. We illustrate our proposals by applying them to the results of a recent clinical trial of a cancer drug (motexafin gadolinium). Our analysis suggests that the FDA may have made a mistake in rejecting that drug.
Why is the age-standardized incidence of low-trauma fractures rising in many elderly populations?
Kannus, Pekka; Niemi, Seppo; Parkkari, Jari; Palvanen, Mika; Heinonen, Ari; Sievänen, Harri; Järvinen, Teppo; Khan, Karim; Järvinen, Markku
2002-08-01
Low-trauma fractures of elderly people are a major public health burden worldwide, and as the number and mean age of older adults in the population continue to increase, the number of fractures is also likely to increase. Epidemiologically, however, an additional concern is that, for unknown reasons, the age-standardized incidence (average individual risk) of fracture has also risen in many populations during recent decades. Possible reasons for this rise include a birth cohort effect, deterioration in average bone strength over time, and an increased average risk of (serious) falls. The literature provides evidence that the rise is not due to a birth cohort effect, whereas no study shows whether bone fragility has increased during this relatively short period of time. This osteoporosis hypothesis could, however, be tested if researchers were now to repeat the population measurements of bone mass and density that were made in the late 1980s and the 1990s. If such studies proved that women's and men's age-standardized mean values of bone mass and density have declined over time, the osteoporosis hypothesis would receive scientific support. The third explanation is based on the hypothesis that the number and/or severity of falls has risen in elderly populations during recent decades. Although no study has directly tested this hypothesis, a great deal of indirect epidemiologic evidence supports this contention. For example, the age-standardized incidence of fall-induced severe head injuries, bruises and contusions, and joint distortions and dislocations has increased among elderly people similarly to the low-trauma fractures. The fall hypothesis could also be tested in the coming years, because in the 1990s many research teams reported age- and sex-specific incidences of falling for elderly populations, and the same could be done now to provide data comparing the current incidence rates of falls with the earlier ones.
40 CFR 80.315 - How are credits used and what are the limitations on credit use?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 16 2010-07-01 2010-07-01 false How are credits used and what are the... compliance with the transferee's averaging standard, regardless of the transferee's good faith belief that... standards under § 80.195 during the 2005 and 2006 averaging periods. Such credits may be used to demonstrate...
40 CFR 80.315 - How are credits used and what are the limitations on credit use?
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 17 2013-07-01 2013-07-01 false How are credits used and what are the... compliance with the transferee's averaging standard, regardless of the transferee's good faith belief that... standards under § 80.195 during the 2005 and 2006 averaging periods. Such credits may be used to demonstrate...
40 CFR 80.315 - How are credits used and what are the limitations on credit use?
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 17 2012-07-01 2012-07-01 false How are credits used and what are the... compliance with the transferee's averaging standard, regardless of the transferee's good faith belief that... standards under § 80.195 during the 2005 and 2006 averaging periods. Such credits may be used to demonstrate...
40 CFR 80.315 - How are credits used and what are the limitations on credit use?
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 17 2014-07-01 2014-07-01 false How are credits used and what are the... compliance with the transferee's averaging standard, regardless of the transferee's good faith belief that... standards under § 80.195 during the 2005 and 2006 averaging periods. Such credits may be used to demonstrate...
40 CFR 80.315 - How are credits used and what are the limitations on credit use?
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 16 2011-07-01 2011-07-01 false How are credits used and what are the... compliance with the transferee's averaging standard, regardless of the transferee's good faith belief that... standards under § 80.195 during the 2005 and 2006 averaging periods. Such credits may be used to demonstrate...
Remote determination of the velocity index and mean streamwise velocity profiles
NASA Astrophysics Data System (ADS)
Johnson, E. D.; Cowen, E. A.
2017-09-01
When determining volumetric discharge from surface measurements of currents in a river or open channel, the velocity index is typically used to convert surface velocities to depth-averaged velocities. The velocity index is given by k = Ub/Usurf, where Ub is the depth-averaged velocity and Usurf is the local surface velocity. The USGS (United States Geological Survey) standard value for this coefficient, k = 0.85, was determined from a series of laboratory experiments and has been widely used in field and laboratory measurements of volumetric discharge despite evidence that the velocity index is site-specific. Numerous studies have documented that the velocity index varies with Reynolds number, flow depth, relative bed roughness, and the presence of secondary flows. A remote method of determining depth-averaged velocity, and hence the velocity index, is developed here. The technique leverages the findings of Johnson and Cowen (2017) and permits remote determination of the velocity power-law exponent, thereby enabling remote prediction of the vertical structure of the mean streamwise velocity, the depth-averaged velocity, and the velocity index.
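The conversion described above can be sketched directly; k = 0.85 is the USGS standard value cited in the abstract, while the surface velocity and cross-sectional area below are hypothetical:

```python
def depth_averaged_velocity(u_surface, k=0.85):
    """Depth-averaged velocity Ub = k * Usurf, with the USGS
    standard velocity index k = 0.85 as the default."""
    return k * u_surface

def subsection_discharge(u_surface, area, k=0.85):
    """Volumetric discharge of one channel subsection, Q = Ub * A."""
    return depth_averaged_velocity(u_surface, k) * area

# Hypothetical subsection: surface velocity 1.2 m/s over 10 m^2
print(round(subsection_discharge(1.2, 10.0), 2))  # 10.2 (m^3/s)
```

A site-specific k, such as the remotely determined value the paper develops, would simply replace the 0.85 default.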
Wu, Yifeng; Zhao, Fengmin; Qian, Xujun; Xu, Guozhang; He, Tianfeng; Shen, Yueping; Cai, Yibiao
2015-07-01
To describe the daily average concentration of sulfur dioxide (SO2) in Ningbo and to analyze its health impact on upper respiratory disease. Using outpatient logs matched with air-pollutant monitoring data for 2011-2013, distributed-lag non-linear models were used to estimate the relative risk of upper respiratory outpatient visits associated with SO2, the excess risk, and the inferred number of patients attributable to SO2 pollution. The daily average SO2 concentration did not exceed the limit value for class II areas. The correlation coefficient between the upper respiratory outpatient number and the matched daily average SO2 concentration was 0.44; the excess risk was 10% to 18%, and most SO2 effects lagged by 4 to 6 days. It was estimated that about 30% of all upper respiratory outpatient visits were caused by SO2 pollution. Although the daily average SO2 concentration did not exceed the standard during the 3 years, health impacts still occurred, with a lag effect.
Dental practice during a world cruise: characterisation of oral health at sea.
Sobotta, Bernhard A J; John, Mike T; Nitschke, Ina
2006-01-01
To describe the oral health of passengers and crew attending the dental service aboard during a two-month world cruise. In a retrospective, descriptive epidemiologic study design, the routine documentation of all dental treatment provided at sea was analysed after the voyage. Subjects were n = 57 passengers (3.5% of 1619) with a mean age of 71 (± 9.8) years and n = 56 crew (5.6% of 999) with a mean age of 37 (± 12.0) years. Age, gender, nationality, number of natural teeth, and implants were extracted. The prosthetic status was described by recording the number of teeth replaced by fixed prostheses and the number of teeth replaced by removable prostheses. Oral health-related quality of life (OHRQoL) was measured using the 14-item Oral Health Impact Profile (OHIP-14) and characterised by the OHIP sum score. Women attended for treatment more often than men. Passengers had a mean number of 20 natural teeth plus substantial fixed and removable prosthodontics. Crew had a mean of 26 teeth. British crew and Australian passengers attended the dental service above average. Crew tended to have a higher average OHIP-14 sum score than passengers, indicating an increased rate of perceived problems. Emergency patients from both crew and passengers had a higher sum score than patients attending for routine treatment. In passengers, the average number of teeth appears to be higher than that of an age-matched population of industrialized countries. However, the passengers' socioeconomic status was higher, which has an effect on this finding. Socioeconomic factors also serve to explain the high standard of prosthetic care in passengers. Crew in general presented with less sophisticated prosthetic devices. This is in line with their different socioeconomic status and origin from developing countries. The level of dental fees aboard in comparison to treatment costs in home countries may explain some of the differences in attendance.
Passengers have enjoyed high standards of prosthetic care in the past and will expect a similarly high standard from ship based facilities. The ease of access to quality dental care may explain the relatively low level of perceived problems as characterised by oral health-related quality of life scores. The dental officer aboard has to be prepared to care for very varied diagnostic and treatment needs.
NASA Astrophysics Data System (ADS)
Verma, Aman; Mahesh, Krishnan
2012-08-01
The dynamic Lagrangian averaging approach for the dynamic Smagorinsky model for large eddy simulation is extended to an unstructured grid framework and applied to complex flows. The Lagrangian time scale is dynamically computed from the solution and does not need any adjustable parameter. The time scale used in the standard Lagrangian model contains an adjustable parameter θ. The dynamic time scale is computed based on a "surrogate-correlation" of the Germano-identity error (GIE). Also, a simple material derivative relation is used to approximate GIE at different events along a pathline instead of Lagrangian tracking or multi-linear interpolation. Previously, the time scale for homogeneous flows was computed by averaging along directions of homogeneity. The present work proposes modifications for inhomogeneous flows. This development allows the Lagrangian averaged dynamic model to be applied to inhomogeneous flows without any adjustable parameter. The proposed model is applied to LES of turbulent channel flow on unstructured zonal grids at various Reynolds numbers. Improvement is observed when compared to other averaging procedures for the dynamic Smagorinsky model, especially at coarse resolutions. The model is also applied to flow over a cylinder at two Reynolds numbers and good agreement with previous computations and experiments is obtained. Noticeable improvement is obtained using the proposed model over the standard Lagrangian model. The improvement is attributed to a physically consistent Lagrangian time scale. The model also shows good performance when applied to flow past a marine propeller in an off-design condition; it regularizes the eddy viscosity and adjusts locally to the dominant flow features.
Jessen, Wilko; Wilbert, Stefan; Gueymard, Christian A.; ...
2018-04-10
Reference solar irradiance spectra are needed to specify key parameters of solar technologies, such as photovoltaic cell efficiency, in a comparable way. The IEC 60904-3 and ASTM G173 standards present such spectra for Direct Normal Irradiance (DNI) and Global Tilted Irradiance (GTI) on a 37-degree tilted sun-facing surface for one set of clear-sky conditions with an air mass of 1.5 and low aerosol content. The IEC/G173 standard spectra are the widely accepted references for these purposes. Hence, the authors support the future replacement of the outdated ISO 9845 spectra with the IEC spectra within the ongoing update of this ISO standard. The use of a single reference spectrum per component of irradiance is important for clarity when comparing and rating solar devices such as PV cells. However, at some locations the average spectra can differ strongly from those defined in the IEC/G173 standards due to widely different atmospheric conditions and collector tilt angles. Therefore, additional subordinate standard spectra for other atmospheric conditions and tilt angles are of interest for a rough comparison of product performance under representative field conditions, in addition to using the main standard spectrum for product certification under standard test conditions. This simplifies product selection for solar power systems when a fully detailed performance analysis is not feasible (e.g., small installations). Also, the effort for detailed yield analyses can be reduced by decreasing the number of initial product options. After appropriate testing, this contribution suggests a number of additional spectra related to eight sets of atmospheric conditions and tilt angles that are currently considered within ASTM and ISO working groups. The additional spectra, called subordinate standard spectra, are motivated by significant spectral mismatches compared to the IEC/G173 spectra (up to 6.5% for PV at 37 degrees tilt and 10-15% for CPV).
These mismatches correspond to potential accuracy improvements for a quick estimation of the average efficiency by applying the appropriate subordinate standard spectrum instead of the IEC/G173 spectra. The applicability of these spectra for PV performance analyses is confirmed at five test sites, for which subordinate spectra could be intuitively selected based on the average atmospheric aerosol optical depth (AOD) and precipitable water vapor at those locations. The development of subordinate standard spectra for DNI and concentrating solar power (CSP) and concentrating PV (CPV) is also considered. However, it is found that many more sets of atmospheric conditions would be required to allow the intuitive selection of DNI spectra for the five test sites, due in particular to the stronger effect of AOD on DNI compared to GTI. The matrix of subordinate GTI spectra described in this paper is recommended to appear as an option in the annex of future standards, in addition to the obligatory use of the main spectrum from the ASTM G173 and IEC 60904 standards.
Koenig, Bruce E; Lacey, Douglas S
2014-07-01
In this research project, nine small digital audio recorders were tested using five sets of 30-min recordings at all available recording modes, with consistent audio material, identical source and microphone locations, and identical acoustic environments. The averaged direct current (DC) offset values and standard deviations were measured for 30-sec and 1-, 2-, 3-, 6-, 10-, 15-, and 30-min segments. The research found an inverse association between segment lengths and the standard deviation values and that lengths beyond 30 min may not meaningfully reduce the standard deviation values. This research supports previous studies indicating that measured averaged DC offsets should only be used for exclusionary purposes in authenticity analyses and exhibit consistent values when the general acoustic environment and microphone/recorder configurations were held constant. Measured average DC offset values from exemplar recorders may not be directly comparable to those of submitted digital audio recordings without exactly duplicating the acoustic environment and microphone/recorder configurations. © 2014 American Academy of Forensic Sciences.
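The segment-averaged DC offsets and their standard deviation can be computed along these lines; this is a generic sketch, not the authors' analysis code, and the signal below is simulated:

```python
import numpy as np

def segment_dc_offsets(samples, segment_len):
    """Mean DC offset of each non-overlapping segment of a digital
    audio signal, and the standard deviation of those segment means."""
    n = (len(samples) // segment_len) * segment_len
    segs = np.asarray(samples[:n], dtype=float).reshape(-1, segment_len)
    means = segs.mean(axis=1)
    return means, means.std(ddof=1)

# Hypothetical signal: a small constant DC offset plus zero-mean noise
rng = np.random.default_rng(0)
signal = 0.01 + rng.normal(0.0, 0.5, 48000)
means, sd = segment_dc_offsets(signal, 8000)
```

Longer segments average away more of the noise, which is consistent with the inverse association between segment length and standard deviation reported above.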
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bache, S; Loyer, E; Stauduhar, P
2015-06-15
Purpose: To quantify and compare the noise properties of two GE CT models, the Discovery CT750 HD (HD750) and the LightSpeed VCT, with the overall goal of assessing the impact on clinical diagnostic practice. Methods: Daily QC data from a fleet of 9 CT scanners currently in clinical use were investigated: 5 HD750 and 4 VCT (over 600 total acquisitions for each scanner). A standard GE QC phantom was scanned daily with each scanner over 1 year using two sets of scan parameters. Water CT number and standard deviation were recorded from the image of the water section of the QC phantom. The standard GE QC scan parameters (Pitch = 0.516, 120 kVp, 0.4 s, 335 mA, Small Body SFOV, 5 mm thickness) and an in-house developed protocol (Axial, 120 kVp, 1.0 s, 240 mA, Head SFOV, 5 mm thickness) were used, with the Standard reconstruction algorithm. Noise was measured as the standard deviation in the center of the water phantom image. Inter-model noise distributions and tube output in mR/mAs were compared to assess any relative differences in noise properties. Results: With the in-house protocol, average noise for the five HD750 scanners was ∼9% higher than for the VCT scanners (5.8 vs 5.3). For the GE QC protocol, average noise with the HD750 scanners was ∼11% higher than with the VCT scanners (4.8 vs 4.3). This discrepancy in noise between the two models was found despite comparable tube output in mR/mAs, with the HD750 scanners having only ∼4% lower output (8.0 vs 8.3 mR/mAs). Conclusion: Using identical scan protocols, average noise in images from the HD750 group was higher than that from the VCT group. This confirms an institutional radiologist's feedback regarding grainier patient images from HD750 scanners. Further investigation is warranted to assess the noise texture and distribution, as well as the clinical impact.
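The noise metric used above (standard deviation of pixel values at the center of the water-phantom image) can be sketched generically; the image, noise level, and ROI size below are hypothetical, not the study's data:

```python
import numpy as np

def center_roi_noise(image, roi=64):
    """Noise as the standard deviation of pixel values inside a
    square ROI centered on the water-phantom image."""
    h, w = image.shape
    r0, c0 = (h - roi) // 2, (w - roi) // 2
    return image[r0:r0 + roi, c0:c0 + roi].std(ddof=1)

# Hypothetical 512x512 water image: HU ~ 0 with sigma ~ 5
rng = np.random.default_rng(1)
water = rng.normal(0.0, 5.0, (512, 512))
print(round(float(center_roi_noise(water)), 1))  # close to 5
```

Tracking this single number daily per scanner is what makes the fleet-wide inter-model comparison above possible.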
Nelson, Jonathan M.; Kinzel, Paul J.; Schmeeckle, Mark Walter; McDonald, Richard R.; Minear, Justin T.
2016-01-01
Noncontact methods for measuring water-surface elevation and velocity in laboratory flumes and rivers are presented with examples. Water-surface elevations are measured using an array of acoustic transducers in the laboratory and using laser scanning in field situations. Water-surface velocities are based on using particle image velocimetry or other machine vision techniques on infrared video of the water surface. Using spatial and temporal averaging, results from these methods provide information that can be used to develop estimates of discharge for flows over known bathymetry. Making such estimates requires relating water-surface velocities to vertically averaged velocities; the methods here use standard relations. To examine where these relations break down, laboratory data for flows over simple bumps of three amplitudes are evaluated. As anticipated, discharges determined from surface information can have large errors where nonhydrostatic effects are large. In addition to investigating and characterizing this potential error in estimating discharge, a simple method for correction of the issue is presented. With a simple correction based on bed gradient along the flow direction, remotely sensed estimates of discharge appear to be viable.
NASA Astrophysics Data System (ADS)
Slanina, J.; Möls, J. J.; Baard, J. H.
The results of a wet deposition monitoring experiment, carried out with eight identical wet-only precipitation samplers operating on the basis of 24 h samples, have been used to investigate the accuracy and uncertainties of wet deposition measurements. The experiment was conducted near Lelystad, The Netherlands, over the period 1 March 1983 to 31 December 1985. By rearranging the data for one to eight samplers and sampling periods of 1 day to 1 month, both systematic and random errors were investigated as a function of measuring strategy. A Gaussian distribution of the results was observed. Outliers, detected by a Dixon test (α = 0.05), strongly influenced both the yearly averaged results and the standard deviation of this average as a function of the number of samplers and the length of the sampling period. The systematic bias for bulk elements, using one sampler, varies typically from 2 to 20%, and for trace elements from 10 to 500%. Severe problems are encountered in the case of Zn, Cu, Cr, Ni, and especially Cd. For the sensitive detection of trends, generally more than one sampler per measuring station is necessary, as the standard deviation of the yearly averaged wet deposition is typically 10-20% (relative) for one sampler. Using three identical samplers, trends of, e.g., 3% per year will generally be detected within 6 years.
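If the samplers' random errors are independent, the standard deviation of the yearly average shrinks with the square root of the number of samplers averaged, which is one way to read the one-versus-three-sampler comparison above. A sketch under that independence assumption:

```python
import math

def yearly_mean_rel_sd(single_sampler_rel_sd, n_samplers):
    """Relative standard deviation of the yearly averaged deposition
    when n identical samplers are averaged, assuming independent
    random errors (the SD shrinks with sqrt(n))."""
    return single_sampler_rel_sd / math.sqrt(n_samplers)

# One sampler at 15% relative SD; three samplers bring it near 8.7%
print(round(yearly_mean_rel_sd(15.0, 3), 1))  # 8.7
```

Systematic biases, such as those reported for the trace elements, are not reduced by adding samplers; only the random component scales this way.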
Zhang, Pei-Feng; Hu, Yuan-Man; Xiong, Zai-Ping; Liu, Miao
2011-02-01
Based on the 1:10000 aerial photo from 1997 and three QuickBird images from 2002, 2005, and 2008, and by using Barista software and GIS and RS techniques, the three-dimensional information of the residential community in Tiexi District of Shenyang was extracted, and the variation pattern of the three-dimensional landscape in the district during its reconstruction in 1997-2008 and related affecting factors were analyzed with the following indices: road density, greening rate, average building height, building height standard deviation, building coverage rate, floor area rate, building shape coefficient, population density, and per capita GDP. The results showed that in 1997-2008, the building area for industry decreased, that for commerce and other public affairs increased, and the area for residences, education, and medical care remained basically stable. The building number, building coverage rate, and building shape coefficient decreased, while the floor area rate, average building height, height standard deviation, road density, and greening rate increased. Within the limited space of the residential community, the carrying capacity for population and economic activity increased, and the environmental quality also improved to some extent. The variation degree of average building height increased, but building energy consumption decreased. Population growth and economic development had positive correlations with floor area rate, road density, and greening rate, but a negative correlation with building coverage rate.
Encoding probabilistic brain atlases using Bayesian inference.
Van Leemput, Koen
2009-06-01
This paper addresses the problem of creating probabilistic brain atlases from manually labeled training data. Probabilistic atlases are typically constructed by counting the relative frequency of occurrence of labels in corresponding locations across the training images. However, such an "averaging" approach generalizes poorly to unseen cases when the number of training images is limited, and provides no principled way of aligning the training datasets using deformable registration. In this paper, we generalize the generative image model implicitly underlying standard "average" atlases, using mesh-based representations endowed with an explicit deformation model. Bayesian inference is used to infer the optimal model parameters from the training data, leading to a simultaneous group-wise registration and atlas estimation scheme that encompasses standard averaging as a special case. We also use Bayesian inference to compare alternative atlas models in light of the training data, and show how this leads to a data compression problem that is intuitive to interpret and computationally feasible. Using this technique, we automatically determine the optimal amount of spatial blurring, the best deformation field flexibility, and the most compact mesh representation. We demonstrate, using 2-D training datasets, that the resulting models are better at capturing the structure in the training data than conventional probabilistic atlases. We also present experiments of the proposed atlas construction technique in 3-D, and show the resulting atlases' potential in fully-automated, pulse sequence-adaptive segmentation of 36 neuroanatomical structures in brain MRI scans.
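The frequency-counting ("averaging") atlas construction that the paper generalizes can be sketched in a few lines; the tiny aligned label maps below are illustrative, not neuroimaging data:

```python
import numpy as np

def frequency_atlas(label_maps, n_labels):
    """'Averaging' atlas: relative frequency of each label at each
    voxel across spatially aligned training segmentations."""
    maps = np.stack([np.asarray(m) for m in label_maps])  # (n_images, ...)
    probs = np.stack([(maps == lab).mean(axis=0) for lab in range(n_labels)],
                     axis=-1)                             # (..., n_labels)
    return probs

# Two tiny aligned 2x2 label maps (labels 0 and 1)
a = [[0, 1], [0, 1]]
b = [[0, 1], [1, 1]]
atlas = frequency_atlas([a, b], n_labels=2)
print(atlas[1, 0])  # probabilities at the voxel where the maps disagree
```

With only two training images, the disagreeing voxel gets probability 0.5 for each label, which illustrates why plain counting generalizes poorly when training data are scarce and why the paper replaces it with a deformable, mesh-based generative model.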
Phelps, Geoffrey; Kelcey, Benjamin; Jones, Nathan; Liu, Shuangshuang
2016-10-03
Mathematics professional development is widely offered, typically with the goal of improving teachers' content knowledge, the quality of teaching, and ultimately students' achievement. Recently, new assessments focused on mathematical knowledge for teaching (MKT) have been developed to assist in the evaluation and improvement of mathematics professional development. This study presents empirical estimates of average program change in MKT and its variation with the goal of supporting the design of experimental trials that are adequately powered to detect a specified program effect. The study drew on a large database representing five different assessments of MKT and collectively 326 professional development programs and 9,365 teachers. Results from cross-classified hierarchical growth models found that standardized average change estimates across the five assessments ranged from a low of 0.16 standard deviations (SDs) to a high of 0.26 SDs. Power analyses using the estimated pre- and posttest change estimates indicated that hundreds of teachers are needed to detect changes in knowledge at the lower end of the distribution. Even studies powered to detect effects at the higher end of the distribution will require substantial resources to conduct rigorous experimental trials. Empirical benchmarks that describe average program change and its variation provide a useful preliminary resource for interpreting the relative magnitude of effect sizes associated with professional development programs and for designing adequately powered trials. © The Author(s) 2016.
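The claim that "hundreds of teachers are needed" at the lower end of the change distribution can be illustrated with a standard normal-approximation sample-size formula for a pre-post change test. This sketch assumes the effect is expressed in SD units of the change scores (an assumption, not a detail from the study):

```python
import math
from statistics import NormalDist

def n_for_paired_change(effect_sd, alpha=0.05, power=0.8):
    """Teachers needed for a one-sample (pre-post change) test to
    detect a standardized mean change of effect_sd, via the normal
    approximation n = ((z_{1-alpha/2} + z_{power}) / effect_sd)^2."""
    z = NormalDist().inv_cdf
    return math.ceil(((z(1 - alpha / 2) + z(power)) / effect_sd) ** 2)

n_low = n_for_paired_change(0.16)   # lower end of observed change: ~307 teachers
n_high = n_for_paired_change(0.26)  # upper end: ~117 teachers
```

Even at the upper end of the reported range, well over a hundred teachers are required, consistent with the abstract's conclusion about the resources rigorous trials demand.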
Average blood pressure and cardiovascular disease-related mortality in middle-aged women.
van Trijp, Marijke J C A; Grobbee, Diederick E; Peeters, Petra H M; van Der Schouw, Yvonne T; Bots, Michiel L
2005-02-01
The aim of this study was to assess which average blood pressure (BP) component (i.e., systolic BP [SBP], diastolic BP [DBP], pulse pressure [PP], or mean arterial pressure [MAP]) is most strongly related to cardiovascular disease (CVD)-related mortality, and to evaluate whether the strength of the relation varies with follow-up time. This was a prospective cohort study. The cohort comprised postmenopausal women (n = 7813) aged 49 to 66 years, for whom four BP measurements, obtained at four different time points, were available. Average BP, i.e., the mean of the four measurements divided by its standard deviation, was entered in Cox proportional hazards models to facilitate direct comparison. Hazard ratios (HR) were calculated adjusted for age, body mass index, presence of diabetes mellitus, smoking habit, and use of BP-lowering medication. In addition, analyses were repeated in strata of follow-up time (10, 15, and 20 years). During a mean follow-up of 13.1 years, 463 CVD-related deaths occurred. The highest HRs for CVD mortality were found for SBP and MAP; however, the confidence intervals (CI) overlapped (SBP: HR = 1.43, 95% CI = 1.30 to 1.58; DBP: HR = 1.35, 95% CI = 1.23 to 1.50; PP: HR = 1.30, 95% CI = 1.19 to 1.42; MAP: HR = 1.43, 95% CI = 1.30 to 1.58). Analyses in strata of follow-up time did not show a change in the strength of the associations with increasing follow-up time. In this prospective follow-up study of postmenopausal women, SBP and MAP appeared most strongly related to CVD-related death; however, the CIs of the HRs overlapped.
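The standardization step described above, entering the mean of the four measurements divided by its standard deviation so that each hazard ratio corresponds to a one-SD increase, can be sketched as follows. The readings are hypothetical and the Cox fit itself is omitted:

```python
import statistics

def standardized_average_bp(measurements_per_subject):
    """Average the four BP readings per subject, then divide by the
    cohort standard deviation of those averages, so that a one-unit
    increase in the Cox model corresponds to one SD (making hazard
    ratios for SBP, DBP, PP, and MAP directly comparable)."""
    means = [statistics.mean(m) for m in measurements_per_subject]
    sd = statistics.stdev(means)  # sample SD across subjects
    return [m / sd for m in means]

# Hypothetical systolic readings (mmHg) for three subjects:
sbp = [[120, 124, 118, 122], [140, 138, 145, 141], [160, 158, 162, 164]]
z = standardized_average_bp(sbp)
```

Because each component (SBP, DBP, PP, MAP) is scaled by its own SD, the fitted hazard ratios are on a common per-SD scale and can be compared directly.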
Models for Aircrew Safety Assessment: Uses, Limitations and Requirements
1999-08-01
From the density standard, a linear relationship was determined relating Hounsfield units to the concentration of K2HPO4. The study population consisted of 20 males and 25 females; the average Hounsfield unit in each elliptical region of interest (ROI) was then converted to concentration.
Payami, Haydeh; Kay, Denise M; Zabetian, Cyrus P; Schellenberg, Gerard D; Factor, Stewart A; McCulloch, Colin C
2010-01-01
Age-related variation in marker frequency can be a confounder in association studies, leading to both false-positive and false-negative findings and subsequently to inconsistent reproducibility. We have developed a simple method, based on a novel extension of moving average plots (MAP), which allows investigators to inspect the frequency data for hidden age-related variations. MAP uses the standard case-control association data and generates a birds-eye view of the frequency distributions across the age spectrum; a picture in which one can see if, how, and when the marker frequencies in cases differ from that in controls. The marker can be specified as an allele, genotype, haplotype, or environmental factor; and age can be age-at-onset, age when subject was last known to be unaffected, or duration of exposure. Signature patterns that emerge can help distinguish true disease associations from spurious associations due to age effects, age-varying associations from associations that are uniform across all ages, and associations with risk from associations with age-at-onset. Utility of MAP is illustrated by application to genetic and epidemiological association data for Alzheimer's and Parkinson's disease. MAP is intended as a descriptive method, to complement standard statistical techniques. Although originally developed for age patterns, MAP is equally useful for visualizing any quantitative trait.
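A minimal sketch of the moving-average idea behind MAP: compute the marker frequency within a sliding age window and trace the resulting frequency-by-age curve (done separately for cases and controls in practice). The window width and subject data below are hypothetical:

```python
def moving_average_frequency(ages, carrier, window=10):
    """For each distinct age, compute the marker (allele/genotype/
    exposure) frequency among subjects whose age falls within
    +/- window/2, giving a smoothed frequency-by-age curve as in a
    moving average plot (MAP)."""
    half = window / 2
    curve = []
    for a in sorted(set(ages)):
        in_win = [c for age, c in zip(ages, carrier) if abs(age - a) <= half]
        curve.append((a, sum(in_win) / len(in_win)))
    return curve

# Hypothetical cases in which marker frequency rises with age:
ages    = [50, 55, 60, 65, 70, 75, 80]
carrier = [0,  0,  1,  0,  1,  1,  1]
curve = moving_average_frequency(ages, carrier, window=10)
```

Overlaying such curves for cases and controls gives the "birds-eye view" the abstract describes: a visible divergence at particular ages suggests an age-varying association rather than one uniform across the age spectrum.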
Covariate selection with group lasso and doubly robust estimation of causal effects
Koch, Brandon; Vock, David M.; Wolfson, Julian
2017-01-01
The efficiency of doubly robust estimators of the average causal effect (ACE) of a treatment can be improved by including in the treatment and outcome models only those covariates which are related to both treatment and outcome (i.e., confounders) or related only to the outcome. However, it is often challenging to identify such covariates among the large number that may be measured in a given study. In this paper, we propose GLiDeR (Group Lasso and Doubly Robust Estimation), a novel variable selection technique for identifying confounders and predictors of outcome using an adaptive group lasso approach that simultaneously performs coefficient selection, regularization, and estimation across the treatment and outcome models. The selected variables and corresponding coefficient estimates are used in a standard doubly robust ACE estimator. We provide asymptotic results showing that, for a broad class of data generating mechanisms, GLiDeR yields a consistent estimator of the ACE when either the outcome or treatment model is correctly specified. A comprehensive simulation study shows that GLiDeR is more efficient than doubly robust methods using standard variable selection techniques and has substantial computational advantages over a recently proposed doubly robust Bayesian model averaging method. We illustrate our method by estimating the causal treatment effect of bilateral versus single-lung transplant on forced expiratory volume in one year after transplant using an observational registry. PMID:28636276
Covariate selection with group lasso and doubly robust estimation of causal effects.
Koch, Brandon; Vock, David M; Wolfson, Julian
2018-03-01
The efficiency of doubly robust estimators of the average causal effect (ACE) of a treatment can be improved by including in the treatment and outcome models only those covariates which are related to both treatment and outcome (i.e., confounders) or related only to the outcome. However, it is often challenging to identify such covariates among the large number that may be measured in a given study. In this article, we propose GLiDeR (Group Lasso and Doubly Robust Estimation), a novel variable selection technique for identifying confounders and predictors of outcome using an adaptive group lasso approach that simultaneously performs coefficient selection, regularization, and estimation across the treatment and outcome models. The selected variables and corresponding coefficient estimates are used in a standard doubly robust ACE estimator. We provide asymptotic results showing that, for a broad class of data generating mechanisms, GLiDeR yields a consistent estimator of the ACE when either the outcome or treatment model is correctly specified. A comprehensive simulation study shows that GLiDeR is more efficient than doubly robust methods using standard variable selection techniques and has substantial computational advantages over a recently proposed doubly robust Bayesian model averaging method. We illustrate our method by estimating the causal treatment effect of bilateral versus single-lung transplant on forced expiratory volume in one year after transplant using an observational registry. © 2017, The International Biometric Society.
Hostetler, K.A.; Thurman, E.M.
2000-01-01
Analytical methods using high-performance liquid chromatography-diode array detection (HPLC-DAD) and high-performance liquid chromatography/mass spectrometry (HPLC/MS) were developed for the analysis of the following chloroacetanilide herbicide metabolites in water: alachlor ethanesulfonic acid (ESA); alachlor oxanilic acid; acetochlor ESA; acetochlor oxanilic acid; metolachlor ESA; and metolachlor oxanilic acid. Good precision and accuracy were demonstrated for both the HPLC-DAD and HPLC/MS methods in reagent water, surface water, and ground water. The average HPLC-DAD recoveries of the chloroacetanilide herbicide metabolites from water samples spiked at 0.25, 0.5, and 2.0 µg/l ranged from 84 to 112%, with relative standard deviations of 18% or less. The average HPLC/MS recoveries of the metabolites from water samples spiked at 0.05, 0.2, and 2.0 µg/l ranged from 81 to 118%, with relative standard deviations of 20% or less. The limit of quantitation (LOQ) for all metabolites using the HPLC-DAD method was 0.20 µg/l, whereas the LOQ using the HPLC/MS method was 0.05 µg/l. These metabolite-determination methods are valuable for acquiring information about water quality and the fate and transport of the parent chloroacetanilide herbicides in water. Copyright (C) 2000 Elsevier Science B.V.
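The recovery and relative-standard-deviation figures quoted above follow from standard formulas: percent recovery is the measured concentration over the spiked concentration, and RSD is the standard deviation of replicate recoveries relative to their mean. A sketch with hypothetical triplicate values, not the study's data:

```python
import statistics

def recovery_and_rsd(measured, spiked):
    """Mean percent recovery of replicate measurements and the
    relative standard deviation (RSD, %) across replicates for a
    spiked water sample."""
    recoveries = [100 * m / spiked for m in measured]
    mean_rec = statistics.mean(recoveries)
    rsd = 100 * statistics.stdev(recoveries) / mean_rec
    return mean_rec, rsd

# Hypothetical triplicate measurements for a 0.50 ug/l spike:
mean_rec, rsd = recovery_and_rsd([0.46, 0.50, 0.48], spiked=0.50)
```

Values like these (96% mean recovery, ~4% RSD) would fall comfortably inside the 84-118% recovery and <=20% RSD ranges the abstract reports.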
Chen, Raymond; Ilasi, Nicholas; Sekulic, Sonja S
2011-12-05
Molecular weight distribution is an important quality attribute for hypromellose acetate succinate (HPMCAS), a pharmaceutical excipient used in spray-dried dispersions. Our previous study showed that neither the relative nor the universal calibration method of size exclusion chromatography (SEC) works for HPMCAS polymers. We here report our effort to develop a SEC method using a mass-sensitive multi-angle laser light scattering (MALLS) detector to determine molecular weight distributions of HPMCAS polymers. A solvent screen study revealed that a mixed solvent (60:40, v/v, 50 mM NaH2PO4 with 0.1 M NaNO3 buffer:acetonitrile, pH* 8.0) is the best for the HPMCAS-LF and -MF sub-classes. Use of a mixed solvent creates a challenging condition for the method that uses a refractive index detector. Therefore, we thoroughly evaluated the method performance and robustness. The mean weight-average molecular weight of a polyethylene oxide standard has a 95% confidence interval of (28,443-28,793) g/mol vs. 28,700 g/mol from the Certificate of Analysis. The relative standard deviations of average molecular weights for all polymers are 3-6%. These results and the Design of Experiments study demonstrate that the method is accurate and robust. Copyright © 2011 Elsevier B.V. All rights reserved.
Angly, Florent E; Willner, Dana; Prieto-Davó, Alejandra; Edwards, Robert A; Schmieder, Robert; Vega-Thurber, Rebecca; Antonopoulos, Dionysios A; Barott, Katie; Cottrell, Matthew T; Desnues, Christelle; Dinsdale, Elizabeth A; Furlan, Mike; Haynes, Matthew; Henn, Matthew R; Hu, Yongfei; Kirchman, David L; McDole, Tracey; McPherson, John D; Meyer, Folker; Miller, R Michael; Mundt, Egbert; Naviaux, Robert K; Rodriguez-Mueller, Beltran; Stevens, Rick; Wegley, Linda; Zhang, Lixin; Zhu, Baoli; Rohwer, Forest
2009-12-01
Metagenomic studies characterize both the composition and diversity of uncultured viral and microbial communities. BLAST-based comparisons have typically been used for such analyses; however, sampling biases, high percentages of unknown sequences, and the use of arbitrary thresholds to find significant similarities can decrease the accuracy and validity of estimates. Here, we present Genome relative Abundance and Average Size (GAAS), a complete software package that provides improved estimates of community composition and average genome length for metagenomes in both textual and graphical formats. GAAS implements a novel methodology to control for sampling bias via length normalization, to adjust for multiple BLAST similarities by similarity weighting, and to select significant similarities using relative alignment lengths. In benchmark tests, the GAAS method was robust to both high percentages of unknown sequences and to variations in metagenomic sequence read lengths. Re-analysis of the Sargasso Sea virome using GAAS indicated that standard methodologies for metagenomic analysis may dramatically underestimate the abundance and importance of organisms with small genomes in environmental systems. Using GAAS, we conducted a meta-analysis of microbial and viral average genome lengths in over 150 metagenomes from four biomes to determine whether genome lengths vary consistently between and within biomes, and between microbial and viral communities from the same environment. Significant differences between biomes and within aquatic sub-biomes (oceans, hypersaline systems, freshwater, and microbialites) suggested that average genome length is a fundamental property of environments driven by factors at the sub-biome level. The behavior of paired viral and microbial metagenomes from the same environment indicated that microbial and viral average genome sizes are independent of each other, but indicative of community responses to stressors and environmental conditions.
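The core length-normalization step can be sketched as follows. The genome names, hit counts, and lengths are hypothetical, and GAAS additionally weights by similarity and relative alignment length, which this sketch omits:

```python
def length_normalized_abundance(hit_counts, genome_lengths):
    """Relative abundance estimate in the spirit of GAAS: divide each
    genome's similarity (hit) count by its genome length, then
    renormalize, so that organisms with small genomes are not
    underestimated by raw hit counts."""
    weights = {g: hit_counts[g] / genome_lengths[g] for g in hit_counts}
    total = sum(weights.values())
    return {g: w / total for g, w in weights.items()}

# Hypothetical: equal cell abundance, but genome B is 10x smaller,
# so raw read counts under-represent it 10:1.
hits = {"A": 1000, "B": 100}
lengths = {"A": 5_000_000, "B": 500_000}
abund = length_normalized_abundance(hits, lengths)
```

After normalization both genomes come out at 50% relative abundance, illustrating how standard hit-count methodologies can dramatically underestimate small-genome organisms, as the Sargasso Sea re-analysis found.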
[Observation on atmospheric pollution in Xianghe during Beijing 2008 Olympic Games].
Pan, Yue-Peng; Wang, Yue-Si; Hu, Bo; Liu, Quan; Wang, Ying-Hong; Nan, Wei-Dong
2010-01-01
There is a concern that much of the atmospheric pollution experienced in Beijing is regional in nature and not attributable to local sources. The objective of this study was to examine the contribution of sources outside Beijing to atmospheric pollution levels during the Beijing 2008 Olympic Games. Observations of SO2, NOx, O3, PM2.5, and PM10 were conducted from June 1 to September 30, 2008 in Xianghe, a rural site about 70 km southeast of Beijing. Sources and transport of atmospheric pollution during the experiment were discussed using surface meteorology data and backward trajectories calculated with the HYSPLIT model. The results showed that the daily average maximum (mean +/- standard deviation) concentrations of SO2, NOx, O3, PM2.5, and PM10 during the observation period reached 84.4 (13.4 +/- 15.2), 43.3 (15.9 +/- 9.1), 230 (82 +/- 38), 184 (76 +/- 42), and 248 (113 +/- 52) µg/m3, respectively. In particular, during the pollution episodes from July 20 to August 12, the hourly average concentration of O3 exceeded the National Ambient Air Quality Standard II for 46 h (9%), the daily average concentration of PM10 exceeded the Standard for 11 d (46%), and PM2.5 exceeded the US EPA Standard for 18 d (75%). The daily average concentrations of SO2, NOx, O3, PM2.5, and PM10 decreased from 27.7, 18.6, 96, 90, and 127 µg/m3 in June-July to 5.8, 13.2, 80, 60, and 106 µg/m3 during the Olympic Games (August-September), respectively. The typical diurnal variations of NOx, PM2.5, and PM10 were similar, peaking at 07:00 and 20:00, while the maximum of O3 occurred between 14:00 and 16:00 local time. The findings also suggested that atmospheric pollution in Xianghe is related to local emissions, regional transport, and meteorological conditions: northerly winds and precipitation favor diffusion and wet deposition of pollutants, while sustained southerly flows make the pollution more serious.
Lead-lag correlation analysis during the July 20 to August 12 pollution episodes showed that hourly average PM2.5 in Beijing lagged that in Xianghe by about 6-10 h (0.57 < r < 0.65, p = 0.01), with the correlation maximal at a lag of 8 h, indicating that a real-time atmospheric PM2.5 database from Xianghe might provide early warning of Beijing PM2.5 pollution events.
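A lead-lag analysis of this kind can be sketched by correlating the downwind series against lagged copies of the upwind series and picking the lag with maximal r. The hourly values below are synthetic, constructed with a built-in 8 h delay, not the study's measurements:

```python
import statistics

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def best_lag(upwind, downwind, max_lag=12):
    """Correlate the downwind (Beijing) series against the upwind
    (Xianghe) series shifted by 0..max_lag hours; the lag with the
    highest r estimates the transport delay."""
    scores = {}
    for lag in range(max_lag + 1):
        x = upwind[:len(upwind) - lag] if lag else upwind[:]
        y = downwind[lag:]
        scores[lag] = pearson(x, y)
    return max(scores, key=scores.get), scores

# Synthetic hourly PM2.5: downwind repeats the upwind signal 8 h later.
upwind = [50, 60, 80, 120, 150, 140, 100, 70, 60, 55,
          50, 52, 58, 65, 90, 130, 155, 145, 110, 80]
downwind = [40] * 8 + [v + 5 for v in upwind[:-8]]
lag, scores = best_lag(upwind, downwind, max_lag=10)
```

On the synthetic series the recovered lag is 8 h, mirroring the maximum-correlation lag the abstract reports.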
Target volume and artifact evaluation of a new data-driven 4D CT.
Martin, Rachael; Pan, Tinsu
Four-dimensional computed tomography (4D CT) is often used to define the internal gross target volume (IGTV) for radiation therapy of lung cancer. Traditionally, this technique requires an external motion surrogate; however, a new, image data-driven 4D CT has become available. This study aims to describe this data-driven 4D CT and compare target contours created with it to those created using standard 4D CT. Cine CT data of 35 patients undergoing stereotactic body radiation therapy were collected and sorted into phases using standard and data-driven 4D CT. IGTV contours were drawn using a semiautomated method on maximum intensity projection images of both 4D CT methods. Errors resulting from reproducibility of the method were characterized. A comparison of phase image artifacts was made using a normalized cross-correlation method that assigned a score from +1 (data-driven "better") to -1 (standard "better"). The volume difference between the data-driven and standard IGTVs was not significant (data-driven was 2.1 ± 1.0% smaller, P = .08). The Dice similarity coefficient showed good similarity between the contours (0.949 ± 0.006). The mean surface separation was 0.4 ± 0.1 mm and the Hausdorff distance was 3.1 ± 0.4 mm. An average artifact score of +0.37 indicated that the data-driven method had significantly fewer and/or less severe artifacts than the standard method (P = 1.5 × 10^-5 for difference from 0). On average, the difference between IGTVs derived from data-driven and standard 4D CT was not clinically relevant or statistically significant, suggesting data-driven 4D CT can be used in place of standard 4D CT without adjustments to IGTVs. The relatively large differences in some patients were usually attributed to limitations in automatic contouring or differences in artifacts. Artifact reduction and setup simplicity suggest a clinical advantage to data-driven 4D CT. Published by Elsevier Inc.
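The Dice similarity coefficient reported for the IGTV contours is simply 2|A∩B|/(|A|+|B|) over the two voxel sets. A sketch with hypothetical voxel grids standing in for the actual contours:

```python
def dice(contour_a, contour_b):
    """Dice similarity coefficient between two voxelized contours:
    2*|A intersect B| / (|A| + |B|); 1.0 means identical volumes."""
    a, b = set(contour_a), set(contour_b)
    return 2 * len(a & b) / (len(a) + len(b))

# Hypothetical IGTV voxel index sets from standard vs data-driven 4D CT:
igtv_standard = {(i, j, 0) for i in range(10) for j in range(10)}   # 100 voxels
igtv_data     = {(i, j, 0) for i in range(10) for j in range(9)}    # 90 voxels
d = dice(igtv_standard, igtv_data)
```

Here one contour is a 10% subset of the other, giving a Dice value of about 0.947, close in spirit to the 0.949 ± 0.006 agreement the study reports.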
Intensive glycemic control is not associated with fractures or falls in the ACCORD randomized trial.
Schwartz, Ann V; Margolis, Karen L; Sellmeyer, Deborah E; Vittinghoff, Eric; Ambrosius, Walter T; Bonds, Denise E; Josse, Robert G; Schnall, Adrian M; Simmons, Debra L; Hue, Trisha F; Palermo, Lisa; Hamilton, Bruce P; Green, Jennifer B; Atkinson, Hal H; O'Connor, Patrick J; Force, Rex W; Bauer, Douglas C
2012-07-01
Older adults with type 2 diabetes are at high risk of fractures and falls, but the effect of glycemic control on these outcomes is unknown. To determine the effect of intensive versus standard glycemic control, we assessed fractures and falls as outcomes in the Action to Control Cardiovascular Risk in Diabetes (ACCORD) randomized trial. ACCORD participants were randomized to intensive or standard glycemia strategies, with an achieved median A1C of 6.4 and 7.5%, respectively. In the ACCORD BONE ancillary study, fractures were assessed at 54 of the 77 ACCORD clinical sites that included 7,287 of the 10,251 ACCORD participants. At annual visits, 6,782 participants were asked about falls in the previous year. During an average follow-up of 3.8 (SD 1.3) years, 198 of 3,655 participants in the intensive glycemia and 189 of 3,632 participants in the standard glycemia group experienced at least one nonspine fracture. The average rate of first nonspine fracture was 13.9 and 13.3 per 1,000 person-years in the intensive and standard groups, respectively (hazard ratio 1.04 [95% CI 0.86-1.27]). During an average follow-up of 2.0 years, 1,122 of 3,364 intensive- and 1,133 of 3,418 standard-therapy participants reported at least one fall. The average rate of falls was 60.8 and 55.3 per 100 person-years in the intensive and standard glycemia groups, respectively (1.10 [0.84-1.43]). Compared with standard glycemia, intensive glycemia did not increase or decrease fracture or fall risk in ACCORD.
Correlating methane production to microbiota in anaerobic digesters fed synthetic wastewater.
Venkiteshwaran, K; Milferstedt, K; Hamelin, J; Fujimoto, M; Johnson, M; Zitomer, D H
2017-03-01
A quantitative structure activity relationship (QSAR) between relative abundance values and digester methane production rate was developed. For this, 50 triplicate anaerobic digester sets (150 total digesters) were each seeded with different methanogenic biomass samples obtained from full-scale, engineered methanogenic systems. Although all digesters were operated identically for at least 5 solids retention times (SRTs), their quasi steady-state function varied significantly, with average daily methane production rates ranging from 0.09 ± 0.004 to 1 ± 0.05 L-CH4/LR-day (LR = liter of reactor volume) (average ± standard deviation). Digester microbial community structure was analyzed using more than 4.1 million partial 16S rRNA gene sequences of Archaea and Bacteria. At the genus level, 1300 operational taxonomic units (OTUs) were observed across all digesters, whereas each digester contained 158 ± 27 OTUs. Digester function did not correlate with typical biomass descriptors such as volatile suspended solids (VSS) concentration, microbial richness, diversity or evenness indices. However, methane production rate did correlate notably with relative abundances of one Archaeal and nine Bacterial OTUs. These relative abundances were used as descriptors to develop a multiple linear regression (MLR) QSAR equation to predict methane production rates solely based on microbial community data. The model explained over 66% of the variance in the experimental data set based on 149 anaerobic digesters with a standard error of 0.12 L-CH4/LR-day. This study provides a framework to relate engineered process function and microbial community composition which can be further expanded to include different feed stocks and digester operating conditions in order to develop a more robust QSAR model. Copyright © 2016 Elsevier Ltd. All rights reserved.
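The MLR step of such a QSAR can be sketched with ordinary least squares on synthetic data. The OTU count matches the study's ten descriptors, but the coefficients, abundances, and noise level are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: relative abundances of 10 OTUs (rows =
# digesters) and the methane production rate they help predict.
n_digesters, n_otus = 40, 10
X = rng.uniform(0, 0.1, size=(n_digesters, n_otus))
true_coef = np.array([4.0, 2.5, 1.5, 1.0, 0.8, 0.5, 0.4, 0.3, 0.2, -1.0])
y = 0.1 + X @ true_coef + rng.normal(0, 0.01, n_digesters)  # L-CH4/LR-day

# Multiple linear regression: rate ~ intercept + OTU relative abundances.
A = np.column_stack([np.ones(n_digesters), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Fraction of variance explained (the study's model reached ~66%).
pred = A @ coef
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - np.mean(y)) ** 2)
```

With descriptors that truly drive the response, the fitted coefficients recover the generating values and the R² is high; on real community data, unexplained biological variance pulls R² down toward the ~0.66 the study reports.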
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-23
... standard was set at 15 micrograms per cubic meter (µg/m3), based on the 3-year average of annual... 2.5 standard was set at 65 µg/m3, based on the 3-year average of the 98th percentile of 24... partially approve the submittal based on EPA's independent evaluation of Nevada's impact on receptor states...
SU-E-T-558: Assessing the Effect of Inter-Fractional Motion in Esophageal Sparing Plans.
Williamson, R; Bluett, J; Niedzielski, J; Liao, Z; Gomez, D; Court, L
2012-06-01
To compare esophageal dose distributions in esophageal sparing IMRT plans with predicted dose distributions that include the effect of inter-fraction motion. Seven lung cancer patients were used, each with a standard and an esophageal sparing plan (74 Gy in 2 Gy fractions). The average maximum dose to the esophagus was 8351 cGy and 7758 cGy for the standard and sparing plans, respectively. The average length of esophagus for which the total circumference was treated above 60 Gy (LETT60) was 9.4 cm in the standard plans and 5.8 cm in the sparing plans. In order to simulate inter-fractional motion, a three-dimensional rigid shift was applied to the calculated dose field. A simulated course of treatment consisted of a single systematic shift applied throughout the treatment as well as a random shift for each of the 37 fractions. Both systematic and random shifts were generated from Gaussian distributions of 3 mm and 5 mm standard deviation. Each treatment course was simulated 1000 times to obtain an expected distribution of the delivered dose. The simulated treatment dose received by the esophagus was less than the dose seen in the treatment plan. The average reduction in maximum esophageal dose for the standard plans was 234 cGy and 386 cGy for the 3 mm and 5 mm Gaussian distributions, respectively. The average reduction in LETT60 was 0.6 cm and 1.7 cm for the 3 mm and 5 mm distributions, respectively. For the esophageal sparing plans, the average reduction in maximum esophageal dose was 94 cGy and 202 cGy for the 3 mm and 5 mm Gaussian distributions, respectively. The average change in LETT60 for the esophageal sparing plans was smaller, at 0.1 cm (an increase) and 0.6 cm (a reduction) for the 3 mm and 5 mm distributions, respectively. Inter-fraction motion consistently reduced the maximum doses to the esophagus for both standard and esophageal sparing plans. © 2012 American Association of Physicists in Medicine.
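The simulation procedure described here can be sketched in 1-D: draw one systematic shift per course and an independent random shift per fraction, shift the dose profile accordingly, and accumulate over the 37 fractions and 1000 courses. The dose falloff shape, evaluation point, and SDs below are hypothetical stand-ins for the planned 3-D dose field:

```python
import math
import random

random.seed(1)

def dose_at(x):
    """Hypothetical 1-D dose profile near the esophageal field edge
    (cGy per fraction): ~200 cGy inside the field (x < 0), falling
    off sigmoidally over a few millimetres. x is in cm."""
    return 200.0 / (1.0 + math.exp(x / 0.3))

def simulate_course(x0=-0.5, n_fractions=37, sys_sd=0.3, rand_sd=0.3):
    """One simulated course: a single systematic setup shift for the
    whole course plus an independent random shift each fraction;
    returns the accumulated dose at the high-dose point x0."""
    systematic = random.gauss(0.0, sys_sd)
    return sum(dose_at(x0 + systematic + random.gauss(0.0, rand_sd))
               for _ in range(n_fractions))

planned = 37 * dose_at(-0.5)                      # static-plan dose at x0
delivered = [simulate_course() for _ in range(1000)]
mean_delivered = sum(delivered) / len(delivered)
# Motion blurs the sharp falloff, lowering the accumulated dose at
# this high-dose point, consistent with the reduced maximum doses
# reported in the abstract.
```

The effect is a Jensen's-inequality argument: near a concave high-dose shoulder of the profile, randomly shifted samples average below the static value, so blurring by inter-fraction motion reduces the delivered maximum.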
Trontel, Haley G.; Duffield, Tyler C.; Bigler, Erin D.; Abildskov, Tracy J.; Froehlich, Alyson; Prigge, Molly B.D.; Travers, Brittany G.; Anderson, Jeffrey S.; Zielinski, Brandon A.; Alexander, Andrew; Lange, Nicholas; Lainhart, Janet E.
2015-01-01
Studies have shown that individuals with autism spectrum disorder (ASD) tend to perform significantly below typically developing individuals on standardized measures of memory, even when not significantly different on measures of IQ. The current study sought to examine, within ASD, whether anatomical correlates of memory performance differed between those with average-to-above-average IQ (AIQ group) and those with low-average to borderline ability (LIQ group), as well as in relation to typically developing comparison participants (TDC). Using automated volumetric analyses, we examined regional volumes of classic memory areas, including the hippocampus, parahippocampal gyrus, entorhinal cortex, and amygdala, in an all-male sample of AIQ (n = 38) and LIQ (n = 18) individuals with ASD along with 30 TDC. Memory performance was assessed using the Test of Memory and Learning (TOMAL), compared among the groups, and then correlated with regional brain volumes. Analyses revealed group differences on almost all facets of memory and learning as assessed by the various subtests of the TOMAL. The three groups did not differ on any memory-related ROI brain volumes. However, significant size-memory function interactions were observed: negative correlations were found between the volume of the amygdala and composite, verbal, and delayed memory indices for the LIQ ASD group, indicating that larger volume was related to poorer performance. Implications for general memory functioning and dysfunctional neural connectivity in ASD are discussed. PMID:25749302
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kudish, A.I.; Ianetz, A.
1993-12-01
The authors have utilized concurrently measured global, normal incidence beam, and diffuse radiation data, the latter measured by means of a shadow ring pyranometer, to study the relative magnitude of the anisotropic contribution (circumsolar region and nonuniform sky conditions) to the diffuse radiation. In the case of Beer Sheva, the monthly average hourly anisotropic correction factor varies from 2.9 to 20.9%, whereas the "standard" geometric correction factor varies from 5.6 to 14.0%. The monthly average hourly overall correction factor (combined anisotropic and geometric factors) varies from 8.9 to 37.7%. The data have also been analyzed using a simple model of sky radiance developed by Steven in 1984. His anisotropic correction factor is a function of the relative strength and angular width of the circumsolar radiation region. The results of this analysis are in agreement with those previously reported for Quidron on the Dead Sea, viz., the anisotropy and relative strength of the circumsolar radiation are significantly greater than at any of the sites analyzed by Steven. In addition, the data have been utilized to validate a model developed by LeBaron et al. in 1990 for correcting shadow ring diffuse radiation data. The monthly average deviation between the corrected and true diffuse radiation values varies from 4.55 to 7.92%.
Yeh, Hui-Jung; Shih, Tung-Sheng; Tsai, Perng-Jy; Chang, Ho-Yuan
2002-03-01
To determine nationwide 2,4- and 2,6-toluene diisocyanate (TDI) concentrations among polyurethane (PU) resin, PU foam, and other TDI-related industries in Taiwan. The ratios of 2,4-/2,6-TDI and the noncarcinogenic risk among these three industries were also investigated. Personal and fixed-area monitoring of TDI concentrations, as well as questionnaires, were performed for 26 factories in Taiwan. The modified OSHA 42 method was applied in sampling and analysis. A noncarcinogenic hazard index was estimated for each of the three industries based on the average concentration measurements. Significant differences in TDI concentrations were found among the three industry categories. For personal monitoring, PU foam had the highest TDI levels [18.6 (+/-33.6) and 22.1 (+/-42.3) ppb for 2,4- and 2,6-TDI], the other TDI-related industries were intermediate [8.3 (+/-18.9) and 10.2 (+/-17.2) ppb], and PU resin was lowest [2.0 (+/-3.5) and 0.7 (+/-1.2) ppb]. The estimated average hazard indices were 310-3310. A substantial percentage of airborne TDI concentrations in Taiwan industries exceeded the current TDI occupational exposure limit, and significant differences in TDI levels were found among the three industry categories. Control remedies for the tasks of charging and foaming should be enforced with the highest priority. A separate 2,6-TDI exposure standard is warranted.
SOCIETAL COSTS ASSOCIATED WITH NEOVASCULAR AGE-RELATED MACULAR DEGENERATION IN THE UNITED STATES.
Brown, Melissa M; Brown, Gary C; Lieske, Heidi B; Tran, Irwin; Turpcu, Adam; Colman, Shoshana
2016-02-01
The purpose of this study was to use a cross-sectional prevalence-based health care economic survey to ascertain the annual, incremental, societal ophthalmic costs associated with neovascular age-related macular degeneration. Consecutive patients (n = 200) with neovascular age-related macular degeneration were studied. A Control Cohort included patients with good (20/20-20/25) vision, while Study Cohort vision levels included Subcohort 1: 20/30 to 20/50, Subcohort 2: 20/60 to 20/100, Subcohort 3: 20/200 to 20/400, and Subcohort 4: 20/800 to no light perception. An interviewer-administered, standardized, written survey assessed 1) direct ophthalmic medical, 2) direct nonophthalmic medical, 3) direct nonmedical, and 4) indirect medical costs accrued due solely to neovascular age-related macular degeneration. The mean annual societal cost for the Control Cohort was $6,116 and for the Study Cohort averaged $39,910 (P < 0.001). Study Subcohort 1 costs averaged $20,339, while Subcohort 4 costs averaged $82,984. Direct ophthalmic medical costs comprised 17.9% of Study Cohort societal ophthalmic costs, versus 74.1% of Control Cohort societal ophthalmic costs (P < 0.001) and 10.4% of 20/800 to no light perception subcohort costs. Direct nonmedical costs, primarily caregiver, comprised 67.1% of Study Cohort societal ophthalmic costs, versus 21.3% ($1,302/$6,116) of Control Cohort costs (P < 0.001) and 74.1% of 20/800 to no light perception subcohort costs. Total societal ophthalmic costs associated with neovascular age-related macular degeneration dramatically increase as vision in the better-seeing eye decreases.
Ellegaard, Mai-Britt Bjørklund; Grau, Cai; Zachariae, Robert; Jensen, Anders Bonde
2017-08-01
Follow-up after breast cancer treatment is standard due to the risk of new primary cancers and recurrent disease. The aim of the present study was to evaluate a standard follow-up program in an oncology department by assessing: (1) symptoms or signs of new primary cancer or recurrent disease, (2) disease- and treatment-related physical and psychosocial side or late effects, and (3) relevant actions by oncology staff. In a cross-sectional study, 194 women who came for a follow-up visit after primary surgery were included. The clinical oncologists registered symptoms and signs of recurrent disease or new primary cancer. Side or late effects were assessed both by the patients and by the clinical oncologists. Loco-regional or distant signs of recurrent disease were suspected in eight (5%) patients; further examinations revealed no disease recurrence. Most patients (93%) reported some degree of side or late effects. Significantly more side or late effects were reported by the women (average: 6.9) than were registered by the clinical oncologists (average: 2.4), p < 0.001. The three most frequently patient-reported side or late effects were hot flushes (35%), fatigue (32%), and sleep disturbance (31%). None of the scheduled or additional visits resulted in detection of recurrent disease, while the majority of patients reported side or late effects, significantly more of which were reported by the women themselves than were registered by the clinical oncologists. This suggests a need to rethink follow-up programs, with more emphasis on side and late effects of treatment.
Strategies to Prevent MRSA Transmission in Community-Based Nursing Homes: A Cost Analysis.
Roghmann, Mary-Claire; Lydecker, Alison; Mody, Lona; Mullins, C Daniel; Onukwugha, Eberechukwu
2016-08-01
OBJECTIVE To estimate the costs of 3 MRSA transmission prevention scenarios compared with standard precautions in community-based nursing homes. DESIGN Cost analysis of data collected from a prospective, observational study. SETTING AND PARTICIPANTS Care activity data from 401 residents from 13 nursing homes in 2 states. METHODS Cost components included the quantities of gowns and gloves, time to don and doff gown and gloves, and unit costs. Unit costs were combined with information regarding the type and frequency of care provided over a 28-day observation period. For each scenario, the estimated costs associated with each type of care were summed across all residents to calculate an average cost and standard deviation for the full sample and for subgroups. RESULTS The average cost for standard precautions was $100 (standard deviation [SD], $77) per resident over a 28-day period. If gown and glove use for high-risk care was restricted to those with MRSA colonization or chronic skin breakdown, average costs increased to $137 (SD, $120) and $125 (SD, $109), respectively. If gowns and gloves were used for high-risk care for all residents in addition to standard precautions, the average cost per resident increased substantially to $223 (SD, $127). CONCLUSIONS The use of gowns and gloves for high-risk activities with all residents increased the estimated cost by 123% compared with standard precautions. This increase was ameliorated if specific subsets (eg, those with MRSA colonization or chronic skin breakdown) were targeted for gown and glove use for high-risk activities. Infect Control Hosp Epidemiol 2016;37:962-966.
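The cost construction described in the abstract (quantities of gowns and gloves, donning/doffing time, and unit costs, summed over observed care episodes) can be sketched in a few lines. All numeric values below are hypothetical placeholders for illustration, not the study's data.

```python
# Illustrative sketch of the per-resident cost construction:
# cost = (episodes requiring gown+gloves) * supply unit costs
#        + donning/doffing time * staff wage.
# All unit costs, times, and counts below are invented placeholders.

def scenario_cost(gown_glove_events, unit_cost_gown=0.50, unit_cost_gloves=0.10,
                  don_doff_minutes=1.5, wage_per_minute=0.40):
    """Cost for one resident over a 28-day observation period."""
    supplies = gown_glove_events * (unit_cost_gown + unit_cost_gloves)
    labor = gown_glove_events * don_doff_minutes * wage_per_minute
    return supplies + labor

# Standard precautions: gown/gloves needed for only a subset of care episodes.
standard = scenario_cost(gown_glove_events=80)
# Gowning for all high-risk care with all residents: many more episodes.
expanded = scenario_cost(gown_glove_events=180)
assert expanded > standard
```

Targeting subsets of residents (e.g. those with MRSA colonization) simply reduces `gown_glove_events` for the rest, which is why the targeted scenarios land between the two costs above.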
Ferris, H; Hunt, W A
1979-04-01
The development and productivity of parasitic stages of Meloidogyne arenaria were quantitatively defined in 14 varieties or rootstocks of grapevine. Mean development to maturity was related linearly to the number of degree-hours above 10 C experienced from the time of penetration in all cultivars in which nematode adulthood was achieved. Averaged across varieties, 13,142 heat units were required for development of the mean individual to maturity. The standard deviation of the developing individuals about the mean, expressed as a proportion with 1 representing adulthood, did not differ with time or among varieties after 7,000 degree-hours had elapsed. Earliest egg production was observed after 7,662 degree-hours, averaged across varieties, considerably before mean development to maturity. Varieties were also ranked relative to the number of larvae establishing infection sites and the rate of egg production per adult female. Varieties could be grouped according to their levels of horizontal resistance.
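The degree-hour (heat-unit) accumulation used above is the standard sum of hourly temperature excesses over a base temperature, here 10 C. This is a generic sketch of that calculation; the hourly temperatures are invented for illustration.

```python
# Degree-hours above a 10 C base temperature, the heat-unit measure used
# in the abstract. Hourly temperatures below are invented for illustration.

BASE_C = 10.0

def degree_hours(hourly_temps_c, base=BASE_C):
    """Sum of (T - base) over hours where T exceeds the base temperature."""
    return sum(max(0.0, t - base) for t in hourly_temps_c)

# One illustrative sampling of a day: cool night, warm afternoon.
day = [8, 9, 12, 18, 24, 26, 22, 15]
assert degree_hours(day) == (2 + 8 + 14 + 16 + 12 + 5)
```

Dividing the running total by the 13,142 heat units reported above would give the expected fraction of development to maturity completed at any point in time.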
Dispersion of Heat Flux Sensors Manufactured in Silicon Technology.
Ziouche, Katir; Lejeune, Pascale; Bougrioua, Zahia; Leclercq, Didier
2016-06-09
In this paper, we focus on the dispersion performances related to the manufacturing process of heat flux sensors realized in CMOS (complementary metal-oxide-semiconductor) compatible 3-inch technology. In particular, we have studied the performance dispersion of our sensors and linked it to the dispersion of the physical characteristics of the materials used. This information is mandatory to ensure low-cost manufacturing and especially to reduce production rejects during the fabrication process. The results obtained show that the measured sensitivity of the sensors is in the range 3.15 to 6.56 μV/(W/m²), associated with measured resistances ranging from 485 to 675 kΩ. The dispersions follow a Gaussian-type distribution, with more than 90% of devices lying around the average sensitivity S̄ = 4.5 µV/(W/m²) and average electrical resistance R̄ = 573.5 kΩ, within the interval of the average plus or minus twice the relative standard deviation.
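The ">90% of devices within the average plus or minus twice the standard deviation" claim is a routine yield computation over the measured sensitivities. A minimal sketch, using the absolute standard deviation for simplicity and invented sample values:

```python
# Yield check of the kind described above: fraction of measured
# sensitivities within mean +/- 2 standard deviations.
# Sample values (in uV/(W/m^2)) are invented for illustration.
import statistics

sensitivities = [4.1, 4.4, 4.6, 4.5, 4.3, 4.8, 4.2, 4.7, 5.9, 4.5]
m = statistics.mean(sensitivities)
sd = statistics.stdev(sensitivities)
within = [s for s in sensitivities if m - 2 * sd <= s <= m + 2 * sd]
yield_fraction = len(within) / len(sensitivities)
```

Here one outlying device (5.9) falls outside the band, giving a 90% yield for this toy sample.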
Zhang, Yu-ge; Xiao, Min; Dong, Yi-hua; Jiang, Yong
2012-08-01
A method to determine soil exchangeable calcium (Ca), magnesium (Mg), potassium (K), and sodium (Na) by atomic absorption spectrophotometry (AAS) after extraction with ammonium acetate was developed. Results showed that the exchangeable base cation data obtained with the AAS method agreed well with the national standard reference soil data. The relative errors for parallel samples of exchangeable Ca and Mg across 66 sample pairs ranged from 0.02%-3.14% and 0.06%-4.06%, averaging 1.22% and 1.25%, respectively. The relative errors for exchangeable K and Na between AAS and flame photometry (FP) ranged from 0.06%-8.39% and 0.06%-1.54%, averaging 3.72% and 0.56%, respectively. A case study showed the AAS determination method for exchangeable base cations to be reliable and trustworthy, reflecting the real cation exchange properties of farmland soils.
Size-dependent standard deviation for growth rates: Empirical results and theoretical modeling
NASA Astrophysics Data System (ADS)
Podobnik, Boris; Horvatic, Davor; Pammolli, Fabio; Wang, Fengzhong; Stanley, H. Eugene; Grosse, I.
2008-05-01
We study annual logarithmic growth rates R of various economic variables such as exports, imports, and foreign debt. For each of these variables we find that the distributions of R can be approximated by double exponential (Laplace) distributions in the central parts and power-law distributions in the tails. For each of these variables we further find a power-law dependence of the standard deviation σ(R) on the average size of the economic variable with a scaling exponent surprisingly close to that found for the gross domestic product (GDP) [Phys. Rev. Lett. 81, 3275 (1998)]. By analyzing annual logarithmic growth rates R of wages of 161 different occupations, we find a power-law dependence of the standard deviation σ(R) on the average value of the wages with a scaling exponent β≈0.14 close to those found for the growth of exports, imports, debt, and the growth of the GDP. In contrast to these findings, we observe for payroll data collected from 50 states of the USA that the standard deviation σ(R) of the annual logarithmic growth rate R increases monotonically with the average value of payroll. However, also in this case we observe a power-law dependence of σ(R) on the average payroll with a scaling exponent β≈-0.08 . Based on these observations we propose a stochastic process for multiple cross-correlated variables where for each variable (i) the distribution of logarithmic growth rates decays exponentially in the central part, (ii) the distribution of the logarithmic growth rate decays algebraically in the far tails, and (iii) the standard deviation of the logarithmic growth rate depends algebraically on the average size of the stochastic variable.
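The power-law dependence σ(R) ~ ⟨S⟩^(−β) reported above is conventionally estimated by least-squares regression in log-log coordinates, where the scaling exponent appears as the (negated) slope. A sketch with synthetic, noiseless data generated at β = 0.14:

```python
# Estimating a scaling exponent beta in sigma(R) ~ size**(-beta) by
# least-squares regression of log(sigma) on log(size).
# Data are synthetic, generated noiselessly with beta = 0.14.
import math

sizes = [10 ** k for k in range(2, 8)]        # average sizes of the variable
beta_true = 0.14
sigmas = [s ** (-beta_true) for s in sizes]   # sigma(R) values

xs = [math.log(s) for s in sizes]
ys = [math.log(sg) for sg in sigmas]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
beta_hat = -slope
```

With real growth-rate data the regression would be run on empirical σ(R) estimates binned by average size; the negative exponent found for payroll corresponds to a positive slope in this fit.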
Operational frequency stability of rubidium and cesium frequency standards
NASA Technical Reports Server (NTRS)
Lavery, J. E.
1973-01-01
The frequency stabilities under operational conditions of several commercially available rubidium and cesium frequency standards were determined from experimental data for frequency averaging times up to 10⁷ s and are presented in table and graph form. For frequency averaging times between 10⁵ and 10⁷ s, the rubidium standards tested have a stability of between 1 × 10⁻¹² and 5 × 10⁻¹², while the cesium standards have a stability of between 2 × 10⁻¹³ and 5 × 10⁻¹³.
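The abstract does not name the statistic, but stability of frequency standards as a function of averaging time is conventionally characterized by the two-sample (Allan) deviation. A minimal sketch of that calculation, with an invented fractional-frequency series:

```python
# Two-sample (Allan) deviation: the conventional stability measure for
# frequency standards at a given averaging time.
# The fractional-frequency averages below are invented for illustration.

def allan_deviation(freqs):
    """Allan deviation of consecutive fractional-frequency averages:
    sqrt( mean of (y[k+1] - y[k])^2 / 2 )."""
    diffs = [(b - a) ** 2 for a, b in zip(freqs, freqs[1:])]
    return (sum(diffs) / (2 * len(diffs))) ** 0.5

y = [1e-12, 3e-12, 2e-12, 4e-12, 3e-12]
sigma_y = allan_deviation(y)
```

Repeating this on frequency averages formed at successively longer intervals yields the stability-versus-averaging-time curves the abstract tabulates.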
Statistical density modification using local pattern matching
Terwilliger, Thomas C.
2007-01-23
A computer-implemented method modifies an experimental electron density map. A set of selected known experimental and model electron density maps is provided, and standard templates of electron density are created from these maps by clustering and averaging values of electron density in a spherical region about each point of the grid that defines each map. Histograms are also created from the selected experimental and model electron density maps that relate the value of electron density at the center of each spherical region to a correlation coefficient of the density surrounding each corresponding grid point in each of the standard templates. The standard templates and the histograms are applied to grid points of the experimental electron density map to form new estimates of electron density at each grid point.
In-depth analysis and discussions of water absorption-typed high power laser calorimeter
NASA Astrophysics Data System (ADS)
Wei, Ji Feng
2017-02-01
In high-power and high-energy laser measurement, absorber materials are easily destroyed under long-term direct laser irradiation. To improve the calorimeter's measuring capacity, a measuring system directly using flowing water as the absorbing medium was built. The system's basic principles and the design parameters of its major parts are elaborated. The system's measuring capacity, the laser working modes, and the effects of the major parameters were analyzed in depth. Moreover, the factors that may affect measurement accuracy were analyzed and discussed, and specific control measures and methods are elaborated. Self-calibration and normal calibration experiments show that this calorimeter has very high accuracy: in electrical calibration, the average correction coefficient is only 1.015, with a standard deviation of only 0.5%, and in calibration experiments the standard deviation relative to a middle-power standard calorimeter is only 1.9%.
Capó-Juan, Miguel Ángel; Fiol-Delgado, Rosa Mª; Alzamora-Perelló, Mª Magdalena; Bosch-Gutiérrez, Marta; Serna-López, Lucía; Bennasar-Veny, Miguel; Aguiló-Pons, Antonio; De Pedro-Gómez, Joan E
2016-11-10
The Public Service for the Promotion of Personal Autonomy aims to provide care to users with severe physical and/or physical-mental disabilities, including people with spinal cord injury. These users are in a chronic phase and thus require educational-therapeutic physiotherapy measures. This study aimed to determine the satisfaction of people with spinal cord injury who attend this service. A descriptive, cross-sectional, quantitative study was carried out in the Public Service for the Promotion of Personal Autonomy after a sixteen-month therapeutic monitoring process, which began in March 2015. The final study sample was 25 people with spinal cord injury (17 men and 8 women). At the end of the therapeutic intervention, users responded to the SERVQHOS questionnaire, which consists of nineteen questions that aim to measure the quality of the care services provided. A statistical analysis was conducted, calculating averages and standard deviations or frequencies and percentages. The best-valued external factor was staff appearance (4.5 on average) and the worst-scored external factor was ease of access and/or signposting of the center (2.6 on average). On the other hand, the best-valued internal factor was the kindness of the staff (4.8 on average) and the worst-scored was the speed with which users received what they requested (4.2 on average). We conclude that the quality offered is determined by internal factors (kindness, trust, willingness to help), while weaknesses relate to structural factors of the center (external factors).
40 CFR 463.34 - New source performance standards.
Code of Federal Regulations, 2012 CFR
2012-07-01
... performance standards (i.e., mass of pollutant discharged), which are calculated by multiplying the average... 40 Protection of Environment 31 2012-07-01 2012-07-01 false New source performance standards. 463... GUIDELINES AND STANDARDS (CONTINUED) PLASTICS MOLDING AND FORMING POINT SOURCE CATEGORY Finishing Water...
40 CFR 463.24 - New source performance standards.
Code of Federal Regulations, 2012 CFR
2012-07-01
... performance standards (i.e., mass of pollutant discharged) calculated by multiplying the average process water... 40 Protection of Environment 31 2012-07-01 2012-07-01 false New source performance standards. 463... GUIDELINES AND STANDARDS (CONTINUED) PLASTICS MOLDING AND FORMING POINT SOURCE CATEGORY Cleaning Water...
Establishment of gold-quartz standard GQS-1
Millard, Hugh T.; Marinenko, John; McLane, John E.
1969-01-01
A homogeneous gold-quartz standard, GQS-1, was prepared from a heterogeneous gold-bearing quartz by chemical treatment. The concentration of gold in GQS-1 was determined by both instrumental neutron activation analysis and radioisotope dilution analysis to be 2.61 ± 0.10 parts per million. Analysis of 10 samples of the standard by both methods failed to reveal heterogeneity within the standard. The precision of the analytical methods, expressed as standard error, was approximately 0.1 part per million. The analytical data were also used to estimate the average size of the gold particles. The chemical treatment apparently reduced the average diameter of the gold particles by at least an order of magnitude and increased the concentration of gold grains by a factor of at least 4,000.
Zwanenburg, Jaco JM; Reinink, Rik; Wisse, Laura EM; Luijten, Peter R; Kappelle, L Jaap; Geerlings, Mirjam I; Biessels, Geert Jan
2016-01-01
Cerebral perivascular spaces (PVS) are small physiological structures around blood vessels in the brain. MRI-visible PVS are associated with ageing and cerebral small vessel disease (SVD). 7 Tesla (7T) MRI improves PVS detection. We investigated the association of age, vascular risk factors, and imaging markers of SVD with PVS counts on 7T MRI, in 50 persons aged ≥ 40. The average PVS count ± SD in the right hemisphere was 17 ± 6 in the basal ganglia and 71 ± 28 in the semioval centre. We observed no relation between age or vascular risk factors and PVS counts. The presence of microbleeds was related to more PVS in the basal ganglia (standardized beta 0.32; p = 0.04) and semioval centre (standardized beta 0.39; p = 0.01), and white matter hyperintensity volume to more PVS in the basal ganglia (standardized beta 0.41; p = 0.02). We conclude that PVS counts on 7T MRI are high and are related to SVD markers, but not to age and vascular risk factors. This latter finding may indicate that, owing to the high sensitivity of 7T MRI, the correlation of PVS counts with age or vascular risk factors is attenuated by the detection of "normal", non-pathological PVS. PMID:27154503
Design and Uncertainty Analysis for a PVTt Gas Flow Standard
Wright, John D.; Johnson, Aaron N.; Moldover, Michael R.
2003-01-01
A new pressure, volume, temperature, and time (PVTt) primary gas flow standard at the National Institute of Standards and Technology has an expanded uncertainty (k = 2) of between 0.02 % and 0.05 %. The standard spans the flow range of 1 L/min to 2000 L/min using two collection tanks and two diverter valve systems. The standard measures flow by collecting gas in a tank of known volume during a measured time interval. We describe the significant and novel features of the standard and analyze its uncertainty. The gas collection tanks have a small diameter and are immersed in a uniform, stable, thermostatted water bath. The collected gas achieves thermal equilibrium rapidly and the uncertainty of the average gas temperature is only 7 mK (22 × 10⁻⁶ T). A novel operating method leads to essentially zero mass change in, and very low uncertainty contributions from, the inventory volume. Gravimetric and volume expansion techniques were used to determine the tank and inventory volumes. Gravimetric determinations of collection tank volume made with nitrogen and argon agree with a standard deviation of 16 × 10⁻⁶ V_T. The largest source of uncertainty in the flow measurement is drift of the pressure sensor over time, which contributes a relative standard uncertainty of 60 × 10⁻⁶ to the determinations of the volumes of the collection tanks and to the flow measurements. Throughout the range 3 L/min to 110 L/min, flows were measured independently using the 34 L and the 677 L collection systems, and the two systems agreed within a relative difference of 150 × 10⁻⁶. Double diversions were used to evaluate the 677 L system over a range of 300 L/min to 1600 L/min, and the relative differences between single and double diversions were less than 75 × 10⁻⁶. PMID:27413592
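The PVTt collection principle (flow = gas accumulated in a known volume over a measured time) reduces to an equation-of-state calculation. A sketch using the ideal-gas law; the real standard applies real-gas (compressibility) corrections, and the pressure, volume, and time values below are illustrative only:

```python
# PVTt principle: average mass flow from the pressure rise in a known tank
# volume over a timed collection. Ideal-gas sketch; the NIST standard uses
# real-gas corrections. All numeric inputs below are illustrative.

R = 8.314462618        # J/(mol K), molar gas constant
M_N2 = 0.0280134       # kg/mol, molar mass of nitrogen

def collected_mass(p_pa, v_m3, t_k, molar_mass=M_N2):
    """Mass of ideal gas occupying volume V at pressure p and temperature T."""
    return p_pa * v_m3 * molar_mass / (R * t_k)

# Pressure rises from ~0 to 100 kPa in a 34 L tank at 296.15 K over 60 s.
m = collected_mass(100e3, 0.034, 296.15)
mass_flow_kg_s = m / 60.0
```

This is why the tank volume, the gas temperature, and the pressure sensor dominate the uncertainty budget: each enters the mass determination directly.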
Pan, Xinglu; Dong, Fengshou; Xu, Jun; Liu, Xingang; Chen, Zenglong; Liu, Na; Chen, Xixi; Tao, Yan; Zhang, Hongjun; Zheng, Yongquan
2015-05-01
A reliable and sensitive isotope-labelled internal standard method for simultaneous determination of chlorantraniliprole and cyantraniliprole in fruits (apple and grape), vegetables (cucumber and tomato) and cereals (rice and wheat) using ultra-high-performance liquid chromatography-tandem mass spectrometry was developed. Isotope-labelled internal standards were effective in compensating for the loss in the pretreatment and overcoming the matrix effect. The analytes were extracted with acetonitrile and cleaned up with different kinds of sorbents. The determination of the target compounds was achieved in less than 4 min using a T3 column combined with an electrospray ionization source in positive mode. The overall average relative recoveries in all matrices at three spiking levels (10, 20 and 50 μg kg(-1)) ranged from 95.5 to 106.2 %, with all relative standard deviations being less than 14.4 % for all analytes. The limits of detection did not exceed 0.085 μg kg(-1) and the limits of quantification were below 0.28 μg kg(-1) in all matrices. The method was demonstrated to be convenient and accurate for the routine monitoring of chlorantraniliprole and cyantraniliprole in fruits, vegetables and cereals.
On the Relation Between Sunspot Area and Sunspot Number
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2006-01-01
Often, the relation between monthly or yearly averages of total sunspot area, A, and sunspot number, R, has been described using the formula A = 16.7 R. Such a simple relation, however, is erroneous. The yearly ratio A/R has varied between 5.3 in 1964 and 19.7 in 1926, having a mean of 13.1 with a standard deviation of 3.5. For 1875-1976 (corresponding to the Royal Greenwich Observatory timeframe), the yearly ratio A/R has a mean of 14.1 with a standard deviation of 3.2, and it is found to differ significantly from the mean for 1977-2004 (corresponding to the United States Air Force/National Oceanic and Atmospheric Administration Solar Optical Observing Network timeframe), which equals 9.8 with a standard deviation of 2.1. Scatterplots of yearly values of A versus R are highly correlated for both timeframes and suggest that a value of R = 100 implies A = 1,538 ± 174 during the first timeframe, but only A = 1,076 ± 123 for the second timeframe. Comparison of the yearly ratios adjusted for same-day coverage against yearly ratios using Rome Observatory measures for the interval 1958-1998 indicates that sunspot areas during the second timeframe are inherently too low.
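The era-by-era comparison of the A/R ratio above is a straightforward mean-and-standard-deviation calculation over yearly ratio series. A sketch with invented ratio values (chosen to land near the abstract's reported era means, purely for illustration):

```python
# Mean and standard deviation of yearly A/R ratios compared across two
# observing eras, as in the abstract. Ratio values are invented placeholders.
import statistics

ratios_era1 = [14.0, 13.5, 15.2, 12.8, 15.0]   # RGO-era years (invented)
ratios_era2 = [9.5, 10.2, 9.1, 10.6, 9.6]      # SOON-era years (invented)

mean1, sd1 = statistics.mean(ratios_era1), statistics.stdev(ratios_era1)
mean2, sd2 = statistics.mean(ratios_era2), statistics.stdev(ratios_era2)
assert mean1 > mean2   # later timeframe shows a systematically lower ratio

# Implied area at R = 100 under each era's mean ratio:
area_at_100_era1 = 100 * mean1
area_at_100_era2 = 100 * mean2
```

A fixed conversion like A = 16.7 R fails precisely because this mean ratio differs between eras and varies year to year.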
Bioavailability of zinc in two zinc sulfate by-products of the galvanizing industry.
Edwards, H M; Boling, S D; Emmert, J L; Baker, D H
1998-10-01
Two Zn depletion/repletion assays were conducted with chicks to determine the relative bioavailability (RBV) of Zn from two new by-products of the galvanizing industry. Using a soy concentrate-dextrose diet, slope-ratio methodology was employed to evaluate two different products: Fe-ZnSO4 x H2O with 20.2% Fe and 13.0% Zn, and Zn-FeSO4 x H2O with 14.2% Fe and 20.2% Zn. Feed-grade ZnSO4 x H2O was used as a standard. Weight gain, tibia Zn concentration, and total tibia Zn responded linearly (P < 0.01) to Zn supplementation from all three sources. Slope-ratio calculations based on weight gain established average Zn RBV values of 98% for Fe-ZnSO4 x H2O and 102% for Zn-FeSO4 x H2O, and these values were not different (P > 0.10) from the ZnSO4 standard (100%). Slope-ratio calculations based on total tibia Zn established average Zn RBV values of 126% for Fe-ZnSO4 x H2O and 127% for Zn-FeSO4 x H2O, and these values were greater (P < 0.01) than those of the ZnSO4 standard (100%). It is apparent that both mixed sulfate products of Fe and Zn are excellent sources of bioavailable Zn.
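Slope-ratio methodology, as used above, regresses the response (weight gain or tibia Zn) on supplemental Zn dose separately for each source; relative bioavailability is 100 times the ratio of the test source's slope to the standard's. A sketch with invented dose-response data:

```python
# Slope-ratio sketch: relative bioavailability (RBV) as the ratio of the
# test source's dose-response slope to the standard's, times 100.
# Dose-response points below are invented for illustration.

def slope(xs, ys):
    """Least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

doses = [0, 5, 10, 15]                  # supplemental Zn, mg/kg (invented)
gain_standard = [100, 120, 140, 160]    # response to the ZnSO4 standard
gain_test = [100, 119, 141, 159]        # response to a by-product source

rbv = 100 * slope(doses, gain_test) / slope(doses, gain_standard)
```

An RBV near 100% means the test source delivers Zn as effectively as the reference sulfate, which is what both weight-gain estimates in the abstract showed.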
Experimental comparison of icing cloud instruments
NASA Technical Reports Server (NTRS)
Olsen, W.; Takeuchi, D. M.; Adams, K.
1983-01-01
Icing cloud instruments were tested in the spray cloud of the Icing Research Tunnel (IRT) in order to determine their relative accuracy and their limitations over a broad range of conditions. It was found that the average of the readings from each of the liquid water content (LWC) instruments tested agreed closely with the others and with the IRT calibration, but all have a data scatter (± one standard deviation) of about ±20 percent. The effect of this ±20 percent uncertainty is probably acceptable in aero-penalty and deicer experiments. Existing laser spectrometers proved to be too inaccurate for LWC measurements. The error due to water runoff was the same for all ice accretion LWC instruments. Any given laser spectrometer proved to be highly repeatable in its indications of volume median drop size (DVM), LWC, and drop size distribution. However, there was a significant disagreement between different spectrometers of the same model, even after careful standard calibration and data analysis. The scatter about the mean of the DVM data from five Axial Scattering Spectrometer Probes was ±20 percent (± one standard deviation) and the average was 20 percent higher than the old IRT calibration. The ±20 percent uncertainty in DVM can cause an unacceptable variation in the drag coefficient of an airfoil with ice; however, the variation in a deicer performance test may be acceptable.
Assessing the Genetics Content in the Next Generation Science Standards.
Lontok, Katherine S; Zhang, Hubert; Dougherty, Michael J
2015-01-01
Science standards have a long history in the United States and currently form the backbone of efforts to improve primary and secondary education in science, technology, engineering, and math (STEM). Although there has been much political controversy over the influence of standards on teacher autonomy and student performance, little light has been shed on how well standards cover science content. We assessed the coverage of genetics content in the Next Generation Science Standards (NGSS) using a consensus list of American Society of Human Genetics (ASHG) core concepts. We also compared the NGSS against state science standards. Our goals were to assess the potential of the new standards to support genetic literacy and to determine if they improve the coverage of genetics concepts relative to state standards. We found that expert reviewers cannot identify ASHG core concepts within the new standards with high reliability, suggesting that the scope of content addressed by the standards may be inconsistently interpreted. Given results that indicate that the disciplinary core ideas (DCIs) included in the NGSS documents produced by Achieve, Inc. clarify the content covered by the standards statements themselves, we recommend that the NGSS standards statements always be viewed alongside their supporting disciplinary core ideas. In addition, gaps exist in the coverage of essential genetics concepts, most worryingly concepts dealing with patterns of inheritance, both Mendelian and complex. Finally, state standards vary widely in their coverage of genetics concepts when compared with the NGSS. On average, however, the NGSS support genetic literacy better than extant state standards.
Bohannon, Richard W; Bear-Lehman, Jane; Desrosiers, Johanne; Massy-Westropp, Nicola; Mathiowetz, Virgil
2007-01-01
Although strength diminishes with age, average values for grip strength have not been available heretofore for discrete strata after 75 years. The purpose of this meta-analysis was to provide average values for the left and right hands of men and women 75-79, 80-84, 85-89, and 90-99 years. Contributing to the analysis were 7 studies and 739 subjects with whom the Jamar dynamometer and standard procedures were employed. Based on the analysis, average values for the left and right hands of men and women in each age stratum were derived. The derived values can serve as a standard of comparison for individual patients. An individual whose grip strength is below the lower limit of the confidence intervals of each stratum can be confidently considered to have less than average grip strength.
Benson, Nsikak U; Akintokun, Oyeronke A; Adedapo, Adebusayo E
2017-01-01
Levels of trihalomethanes (THMs) in drinking water from water treatment plants (WTPs) in Nigeria were studied using a gas chromatograph (GC Agilent 7890A with autosampler Agilent 7683B) equipped with an electron capture detector (ECD). The mean concentrations of the trihalomethanes ranged from zero in raw water samples to 950 μg/L in treated water samples. Average concentrations of THMs in primary and secondary disinfection samples exceeded the standard maximum contaminant levels. The average THMs concentrations followed the order TCM > BDCM > DBCM > TBM. EPA-developed models were adopted for the estimation of chronic daily intakes (CDI) and excess cancer incidence through the ingestion pathway. A higher average intake was observed in adults (4.52 × 10⁻² mg/kg-day), while ingestion in children (3.99 × 10⁻² mg/kg-day) showed comparable values. The total lifetime cancer incidence rate was relatively higher in adults than in children, with median values 244 and 199 times the negligible risk level.
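The EPA-style chronic daily intake model for the ingestion pathway is conventionally CDI = (C × IR × EF × ED)/(BW × AT), with excess cancer risk estimated as CDI times a contaminant-specific slope factor. A sketch of that calculation; the concentration, exposure parameters, and slope factor below are generic illustrative values, not the study's inputs:

```python
# EPA-style chronic daily intake (CDI) for the drinking-water ingestion
# pathway: CDI = (C * IR * EF * ED) / (BW * AT); risk = CDI * SF.
# All parameter values below are generic illustrative defaults.

def cdi_ingestion(c_mg_per_l, ir_l_day, ef_days_yr, ed_years, bw_kg, at_days):
    """Chronic daily intake in mg/kg-day."""
    return (c_mg_per_l * ir_l_day * ef_days_yr * ed_years) / (bw_kg * at_days)

# Illustrative adult scenario: 0.08 mg/L total THMs, 2 L/day intake,
# 350 days/yr for 30 years, 70 kg body weight, 70-year lifetime averaging.
cdi_adult = cdi_ingestion(0.08, 2.0, 350, 30, 70.0, 70 * 365)
risk = cdi_adult * 6.2e-3   # illustrative oral slope factor, (mg/kg-day)^-1
```

Children's lower body weight and intake rates shift both terms, which is why the abstract reports adult and child intakes separately.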
Ranking and averaging independent component analysis by reproducibility (RAICAR).
Yang, Zhi; LaConte, Stephen; Weng, Xuchu; Hu, Xiaoping
2008-06-01
Independent component analysis (ICA) is a data-driven approach that has exhibited great utility for functional magnetic resonance imaging (fMRI). Standard ICA implementations, however, do not provide the number and relative importance of the resulting components. In addition, ICA algorithms utilizing gradient-based optimization give decompositions that are dependent on initialization values, which can lead to dramatically different results. In this work, a new method, RAICAR (Ranking and Averaging Independent Component Analysis by Reproducibility), is introduced to address these issues for spatial ICA applied to fMRI. RAICAR utilizes repeated ICA realizations and relies on the reproducibility between them to rank and select components. Different realizations are aligned based on correlations, leading to aligned components. Each component is ranked and thresholded based on between-realization correlations. Furthermore, different realizations of each aligned component are selectively averaged to generate the final estimate of the given component. Reliability and accuracy of this method are demonstrated with both simulated and experimental fMRI data.
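The alignment step described above (matching components across realizations by correlation, then averaging the matched maps) can be sketched in simplified form. This is not the RAICAR implementation itself, just a two-realization toy illustration with 1-D "spatial maps" standing in for real component maps:

```python
# Simplified sketch of RAICAR-style alignment: match components across ICA
# realizations by maximal absolute spatial correlation, then average the
# matched maps. Toy 1-D maps below stand in for real component maps.
import statistics

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

realization_1 = [[0, 1, 2, 1, 0], [-2, -1, 0, 1, 2]]              # bump, ramp
realization_2 = [[-2.1, -0.9, 0, 1.1, 1.9], [0, 1.1, 1.9, 1.0, 0]]  # permuted

# Align: for each component of realization 1, find its best match in
# realization 2 by absolute correlation, then average the pair.
aligned = []
for comp in realization_1:
    best = max(realization_2, key=lambda c: abs(corr(comp, c)))
    aligned.append([(x + y) / 2 for x, y in zip(comp, best)])
```

In the full method the absolute correlation used for matching also serves as the reproducibility score by which components are ranked and thresholded.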
Ranibizumab treatment in age-related macular degeneration: a meta-analysis of one-year results.
Gerding, H
2014-04-01
Although ranibizumab is widely used in age-related macular degeneration, there are no systematic data available on the relation between treatment frequency and functional efficacy within the first 12 months of follow-up. A meta-analysis was performed on the available MEDLINE literature. 47 relevant clinical studies (54 case series) could be identified, covering 11,706 treated eyes. Non-linear and linear regressions were calculated for the relation between treatment frequency and functional outcome (average gain in visual acuity, % of eyes losing less than 15 letters of visual acuity, % of eyes gaining ≥ 15 letters) within the first year of care. The mean average visual gain was +4.9 ± 3.6 letters (mean ± 1 standard deviation; case-weighted: 3.3 letters). The average number of ranibizumab injections until month 12 was 6.3 ± 2.0 (case-weighted: 5.9). 92.4 ± 3.9% of eyes (case-weighted: 91.9%) lost less than three lines of visual acuity and 24.5 ± 8.2% (case-weighted: 23.3%) gained more than 3 lines within the first year. Analysis of the relation between the number of injections and functional improvement indicated the best fit for non-linear equations. A nearly stepwise improvement of functional gain occurred between 6.8 and 7.2 injections/year. A saturation effect of treatment occurred at higher injection frequencies. The results of this meta-analysis clearly indicate a non-linear relation between the number of injections and the functional gain of ranibizumab within the first 12 months of treatment. Treatment saturation seems to occur at a treatment frequency >7.2 injections within the first 12 months.
Sabonghy, Eric Peter; Wood, Robert Michael; Ambrose, Catherine Glauber; McGarvey, William Christopher; Clanton, Thomas Oscar
2003-03-01
Tendon transfer techniques in the foot and ankle are used for tendon ruptures, deformities, and instabilities. This fresh-cadaver study compared tendon fixation strength in 10 paired specimens, using either a tendon-to-tendon fixation technique or a 7 x 20-25 mm bioabsorbable interference-fit screw fixation technique. Load at failure averaged 279 N (standard deviation, 81 N) for the tendon-to-tendon fixation method and 148 N (standard deviation, 72 N) for the bioabsorbable screw (p = 0.0008). Bioabsorbable interference-fit screws in these specimens show decreased fixation strength relative to the traditional fixation technique. However, the mean bioabsorbable screw fixation strength of 148 N provides physiologic strength at the tendon-bone interface.
Zhou, L.; Chao, T.T.; Meier, A.L.
1984-01-01
The sample is fused with lithium metaborate and the melt is dissolved in 15% (v/v) hydrobromic acid. Iron(III) is reduced with ascorbic acid to avoid its coextraction with indium as the bromide into methyl isobutyl ketone. Impregnation of the graphite furnace with sodium tungstate, and the presence of lithium metaborate and ascorbic acid in the reaction medium, improve the sensitivity and precision. The limits of determination are 0.025-16 mg kg⁻¹ indium in the sample. For 22 geological reference samples containing more than 0.1 mg kg⁻¹ indium, relative standard deviations ranged from 3.0 to 8.5% (average 5.7%). Recoveries of indium added to various samples ranged from 96.7 to 105.6% (average 100.2%).
An atomic-force-microscopy study of the structure of surface layers of intact fibroblasts
NASA Astrophysics Data System (ADS)
Khalisov, M. M.; Ankudinov, A. V.; Penniyaynen, V. A.; Nyapshaev, I. A.; Kipenko, A. V.; Timoshchuk, K. I.; Podzorova, S. A.; Krylov, B. V.
2017-02-01
Intact embryonic fibroblasts on a collagen-treated substrate have been studied by atomic-force microscopy (AFM) using probes of two types: (i) standard probes with tip curvature radii of 2-10 nm and (ii) special probes with a calibrated SiO2 ball of 325-nm radius at the tip apex. It is established that, irrespective of probe type, the average maximum fibroblast height is about 1.7 μm and the average stiffness of the probe-cell contact is 16.5 mN/m. The obtained AFM data reveal a peculiarity of the fibroblast structure: its external layers move as a rigid shell relative to the interior and can be pressed inward to a depth that depends only on the load.
[Biomechanical significance of the acetabular roof and its reaction to mechanical injury].
Domazet, N; Starović, D; Nedeljković, R
1999-01-01
The introduction of morphometry into the quantitative analysis of the bone system and the functional adaptation of the acetabulum to mechanical damage and injury enabled a relatively simple and acceptable examination of morphological acetabular changes in patients with damaged hip joints. Measurements of the depth and form of the acetabulum can be made by radiological methods, computerized tomography and ultrasound (1-9). The aim of the study was to obtain data on the behaviour of the acetabular roof, the so-called "eyebrow", by morphometric analyses during different mechanical injuries. Clinical studies of the effect of different loads on the acetabular roof were carried out in 741 patients. Radiographic findings of 400 men and 341 women were analysed. The control group was composed of 148 patients with normal hip joints. The average age of the patients was 54.7 years and that of control subjects 52.0 years. Data processing was done for all examined patients. On the basis of our measurements, the average size of the female "eyebrow" ranged from 24.8 mm to 31.5 mm with a standard deviation of 0.93 and in men from 29.4 mm to 40.3 mm with a standard deviation of 1.54. The average size in the whole population was 32.1 mm with a standard deviation of 15.61. Statistical analyses revealed a statistically significant correlation between age and "eyebrow" size in men (r = 0.124; p < 0.05); it was statistically in inverse proportion (Graph 1). In female patients, however, the correlation coefficient was not statistically significant (r = 0.060; p > 0.05). The examination of the size of the collodiaphyseal angle and the length of the "eyebrow" revealed that "eyebrow" length was in inverse proportion to the size of the collodiaphyseal angle (r = 0.113; p < 0.05). The average "eyebrow" length in relation to the size of the collodiaphyseal angle ranged from 21.3 mm to 35.2 mm with a standard deviation of 1.60.
There was no statistically significant correlation between the "eyebrow" size and Wiberg's angle in male (r = 0.049; p > 0.05) or female (r = 0.005; p > 0.05) patients. The "eyebrow" length was proportionally dependent on the amount of extremity shortening in all examined subjects; this dependence was statistically significant in both female (r = 0.208; p < 0.05) and male (r = 0.193; p < 0.05) patients. The study revealed that the fossa acetabuli was directed forward, downward and laterally, in accordance with the results of other authors (1, 7, 9, 15, 18). The size, form and cross-section of the acetabulum changed under different loads, but the dimensional and morphological changes were slight and insignificant in comparison with the control group. These findings are presented graphically in Figure 5 and numerically in Tables 1 and 2. There was a statistically significant difference in "eyebrow" size between patients and normal subjects (t = 3.88; p < 0.05), with an average difference of 6.892 mm; the larger "eyebrow" was found in subjects with normally loaded hips. There was also a significant difference in "eyebrow" size between female patients and healthy female subjects (t = 4.605; p < 0.05), with a difference of 8.79 mm, again larger in subjects with normally loaded hips. On the basis of our study it can be concluded that findings related to changes in the acetabular roof, the so-called "eyebrow", are important in the diagnosis, follow-up and therapy of these pathogenetic processes.
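The r/p pairs reported above are Pearson correlation coefficients with significance tests; a minimal sketch of computing such a pair on synthetic data (the variable names, effect size, and values are hypothetical, not the study's measurements):

```python
# Sketch: Pearson r and p-value of the kind reported in the abstract.
# Synthetic data only; the relationship strength is made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
shortening = rng.normal(10, 3, 200)                      # hypothetical limb shortening, mm
eyebrow = 30 + 0.8 * shortening + rng.normal(0, 5, 200)  # hypothetical "eyebrow" length, mm

r, p = stats.pearsonr(shortening, eyebrow)
print(r > 0, p < 0.05)
```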
Cost effectiveness of the U.S. Geological Survey's stream-gaging program in Wisconsin
Walker, J.F.; Osen, L.L.; Hughes, P.E.
1987-01-01
A minimum budget of $510,000 is required to operate the program; a budget less than this does not permit proper service and maintenance of the gaging stations. At this minimum budget, the theoretical average standard error of instantaneous discharge is 14.4%. The maximum budget analyzed was $650,000, which resulted in an average standard error of instantaneous discharge of 7.2%.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-23
... standard (NAAQS). This extension is based in part on air quality data for the 4th highest daily 8-hour... attainment date if: (a) For the first one-year extension, the area's 4th highest daily 8-hour average in the... 4th highest daily 8-hour value, averaged over both the original attainment year and the first...
40 CFR 463.24 - New source performance standards.
Code of Federal Regulations, 2011 CFR
2011-07-01
... standards (i.e., mass of pollutant discharged) calculated by multiplying the average process water usage... 40 Protection of Environment 30 2011-07-01 2011-07-01 false New source performance standards. 463... GUIDELINES AND STANDARDS PLASTICS MOLDING AND FORMING POINT SOURCE CATEGORY Cleaning Water Subcategory § 463...
Waltemeyer, Scott D.
2006-01-01
Estimates of the magnitude and frequency of peak discharges are necessary for reliable flood-hazard mapping in the Navajo Nation in Arizona, Utah, Colorado, and New Mexico. The Bureau of Indian Affairs, U.S. Army Corps of Engineers, and Navajo Nation requested that the U.S. Geological Survey update estimates of peak-discharge magnitude for gaging stations in the region and update regional equations for estimating peak discharge and frequency at ungaged sites. Equations were developed for estimating the magnitude of peak discharges for recurrence intervals of 2, 5, 10, 25, 50, 100, and 500 years at ungaged sites using data collected through 1999 at 146 gaging stations (an additional 13 years of peak-discharge data beyond a 1997 investigation, which used gaging-station data through 1986). The equations were developed for flood regions 8, 11, high-elevation, and 6, which are delineated on the basis of the hydrologic codes from the 1997 investigation. Peak discharges for selected recurrence intervals were determined at gaging stations by fitting observed data to a log-Pearson Type III distribution with adjustments for a low-discharge threshold and a zero skew coefficient. A low-discharge threshold was applied to the frequency analysis of 82 of the 146 gaging stations; this application provides an improved fit of the log-Pearson Type III frequency distribution. Use of the low-discharge threshold generally eliminated peak discharges having a recurrence interval of less than 1.4 years in the probability-density function. Within each region, logarithms of the peak discharges for selected recurrence intervals were related to logarithms of basin and climatic characteristics using stepwise ordinary least-squares regression techniques for exploratory data analysis.
Generalized least-squares regression, an improved procedure that accounts for time and spatial sampling errors, then was applied to the same data used in the ordinary least-squares analyses. The average standard error of prediction for the 100-year peak discharge in region 8 was 53 percent. The average standard error of prediction, which includes average sampling error and average standard error of regression, ranged from 45 to 83 percent for the 100-year flood. The estimated standard error of prediction for a hybrid method for region 11 was large in the 1997 investigation, and no distinction of floods produced from a high-elevation region was presented in that investigation. Overall, the equations based on generalized least-squares regression are considered more reliable than those in the 1997 report because of the increased length of record and improved GIS methods. Flood-frequency relations can be transferred to an ungaged site either by direct application of the regional regression equation or, for an ungaged site on a stream with a gaging station upstream or downstream, by using the drainage-area ratio and the drainage-area exponent from the regional regression equation of the respective region.
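The drainage-area-ratio transfer described in the last sentence can be sketched as follows; the discharge, areas, and exponent are hypothetical placeholders, not values from the report's regional equations.

```python
# Sketch: scaling a flood quantile from a gaged site to a nearby ungaged site
# on the same stream using the regional equation's drainage-area exponent.
# All numbers are hypothetical.
def transfer_peak_discharge(q_gaged, area_gaged, area_ungaged, exponent):
    """Scale a gaged peak discharge by the drainage-area ratio raised
    to the regional regression drainage-area exponent."""
    return q_gaged * (area_ungaged / area_gaged) ** exponent

# e.g. a 100-year peak of 5000 ft3/s at a 120 mi2 gage, ungaged site at 80 mi2,
# hypothetical regional exponent 0.55
q_ungaged = transfer_peak_discharge(5000.0, 120.0, 80.0, 0.55)
print(round(q_ungaged, 1))  # -> 4000.6
```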
DOT National Transportation Integrated Search
2005-02-01
Annual average PM10 concentrations at the Greenwood monitoring station in western Phoenix have exceeded EPA's annual average air quality standard and are higher on average than values observed at the West Phoenix monitor, which is located just ...
42 CFR 423.286 - Rules regarding premiums.
Code of Federal Regulations, 2011 CFR
2011-10-01
... section for the difference between the bid and the national average monthly bid amount, any supplemental... percentage as specified in paragraph (b) of this section; and (2) National average monthly bid amount... reflect difference between bid and national average bid. If the amount of the standardized bid amount...
Cost effectiveness of the stream-gaging program in South Carolina
Barker, A.C.; Wright, B.C.; Bennett, C.S.
1985-01-01
The cost effectiveness of the stream-gaging program in South Carolina was documented for the 1983 water year. Data uses and funding sources were identified for the 76 continuous stream gages currently operated in South Carolina. The budget of $422,200 for collecting and analyzing streamflow data also includes the cost of operating stage-only and crest-stage stations. The streamflow records for one stream gage can be determined by alternative, less costly methods, and that gage should be discontinued. The remaining 75 stations should be maintained in the program for the foreseeable future. The current operating policy for the 75 stations, including the crest-stage and stage-only stations, would require a budget of $417,200/yr. The average standard error of estimation of streamflow records is 16.9% for the present budget with missing record included; it would decrease to 8.5% if complete streamflow records could be obtained. The same average standard error of 16.9% could be achieved at the 75 sites with a budget of approximately $395,000 if gaging resources were redistributed among the gages. A minimum budget of $383,500 is required to operate the program; a budget less than this does not permit proper service and maintenance of the gages and recorders. At the minimum budget, the average standard error is 18.6%. The maximum budget analyzed was $850,000, which resulted in an average standard error of 7.6%. (Author's abstract)
Yuta, Atsushi; Ukai, Kotaro; Sakakura, Yasuo; Tani, Hideshi; Matsuda, Fukiko; Yang, Tian-qun; Majima, Yuichi
2002-07-01
We predicted Japanese cedar (Cryptomeria japonica) pollen counts at Tsu city based on the male flower-setting conditions of standard trees. The 69 standard trees, from 23 clones planted at the Mie Prefecture Science and Technology Promotion Center (Hakusan, Mie) in 1964, were selected. Male flower-setting conditions for 276 faces (69 trees x 4 points of the compass) were scored from 0 to 3. The average scores and total pollen counts from 1988 to 2000 were analyzed. The average scores from the standard trees and the total pollen counts, excluding the two mass pollen-scattering years of 1995 and 2000, showed a positive linear correlation (r = 0.914). In mass pollen-scattering years, pollen counts were influenced by the previous year; therefore, the score of the present year minus that of the previous year was used for analysis. The average male flower-setting scores and pollen counts had a strong positive correlation (r = 0.994) when positive scores adjusted for the previous year were analyzed. We conclude that prediction of pollen counts is possible based on the male flower-setting conditions of standard trees.
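The prediction approach, a linear regression of annual pollen counts on year-over-year differences in flower-setting scores, can be sketched as follows with hypothetical score/count pairs (not the Tsu city data):

```python
# Sketch: linear fit of pollen counts vs. differenced flower-setting scores.
# All score/count pairs are hypothetical illustrations.
import numpy as np

score_diff = np.array([-1.0, -0.5, 0.0, 0.2, 0.8, 1.5, 2.1])
pollen = np.array([520, 1450, 2550, 2950, 4050, 5600, 6650], float)

slope, intercept = np.polyfit(score_diff, pollen, 1)   # least-squares line
r = np.corrcoef(score_diff, pollen)[0, 1]              # correlation coefficient
predicted = slope * 1.0 + intercept  # forecast for a score difference of +1.0
print(r > 0.9)
```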
Determination of the anxiety level of women who present for mammography.
Bölükbaş, Nurgül; Erbil, Nülüfer; Kahraman, Azize Nuran
2010-01-01
The aim of this paper was to examine the role of anxiety in mammography screening. Breast cancer screening with mammography has been shown to be effective in preventing breast cancer death; however, mammography screening can be harmful to women, and one of the major problems is anxiety or lack of peace of mind. This study was conducted between November 3, 2007, and December 30, 2007, in Ordu Maternity and Childbirth Hospital; 93 women participated. A 23-item questionnaire and the 20-item State Anxiety Inventory, developed by Spielberger et al., were completed by the participants. All numerical values are given as average ± standard deviation; p < 0.05 was accepted as the level of significance. The average age of the participants was 47.83 ± 7.50 years, the average age at marriage was 20.03 ± 4.18 years, the average number of births was 2.91 ± 1.21, and the average age at menopause was 46.10 ± 4.70 years. The average anxiety level was 46.20 ± 4.9. Significant differences (p < 0.05) in anxiety were found according to education level, age at marriage, practice of breast self-examination, history of mammography for a breast-related complaint, and the number of mammograms done. Women who had mammography had a moderate level of anxiety.
Health benefits from large-scale ozone reduction in the United States.
Berman, Jesse D; Fann, Neal; Hollingsworth, John W; Pinkerton, Kent E; Rom, William N; Szema, Anthony M; Breysse, Patrick N; White, Ronald H; Curriero, Frank C
2012-10-01
Exposure to ozone has been associated with adverse health effects, including premature mortality and cardiopulmonary and respiratory morbidity. In 2008, the U.S. Environmental Protection Agency (EPA) lowered the primary (health-based) National Ambient Air Quality Standard (NAAQS) for ozone to 75 ppb, expressed as the fourth-highest daily maximum 8-hr average over a 24-hr period. Based on recent monitoring data, U.S. ozone levels still exceed this standard in numerous locations, resulting in avoidable adverse health consequences. We sought to quantify the potential human health benefits from achieving the current primary NAAQS standard of 75 ppb and two alternative standard levels, 70 and 60 ppb, which represent the range recommended by the U.S. EPA Clean Air Scientific Advisory Committee (CASAC). We applied health impact assessment methodology to estimate numbers of deaths and other adverse health outcomes that would have been avoided during 2005, 2006, and 2007 if the current (or lower) NAAQS ozone standards had been met. Estimated reductions in ozone concentrations were interpolated according to geographic area and year, and concentration-response functions were obtained or derived from the epidemiological literature. We estimated that annual numbers of avoided ozone-related premature deaths would have ranged from 1,410 to 2,480 at 75 ppb to 2,450 to 4,130 at 70 ppb, and 5,210 to 7,990 at 60 ppb. Acute respiratory symptoms would have been reduced by 3 million cases and school-loss days by 1 million cases annually if the current 75-ppb standard had been attained. Substantially greater health benefits would have resulted if the CASAC-recommended range of standards (70-60 ppb) had been met. Attaining a more stringent primary ozone standard would significantly reduce ozone-related premature mortality and morbidity.
Health Benefits from Large-Scale Ozone Reduction in the United States
Berman, Jesse D.; Fann, Neal; Hollingsworth, John W.; Pinkerton, Kent E.; Rom, William N.; Szema, Anthony M.; Breysse, Patrick N.; White, Ronald H.
2012-01-01
Background: Exposure to ozone has been associated with adverse health effects, including premature mortality and cardiopulmonary and respiratory morbidity. In 2008, the U.S. Environmental Protection Agency (EPA) lowered the primary (health-based) National Ambient Air Quality Standard (NAAQS) for ozone to 75 ppb, expressed as the fourth-highest daily maximum 8-hr average over a 24-hr period. Based on recent monitoring data, U.S. ozone levels still exceed this standard in numerous locations, resulting in avoidable adverse health consequences. Objectives: We sought to quantify the potential human health benefits from achieving the current primary NAAQS standard of 75 ppb and two alternative standard levels, 70 and 60 ppb, which represent the range recommended by the U.S. EPA Clean Air Scientific Advisory Committee (CASAC). Methods: We applied health impact assessment methodology to estimate numbers of deaths and other adverse health outcomes that would have been avoided during 2005, 2006, and 2007 if the current (or lower) NAAQS ozone standards had been met. Estimated reductions in ozone concentrations were interpolated according to geographic area and year, and concentration–response functions were obtained or derived from the epidemiological literature. Results: We estimated that annual numbers of avoided ozone-related premature deaths would have ranged from 1,410 to 2,480 at 75 ppb to 2,450 to 4,130 at 70 ppb, and 5,210 to 7,990 at 60 ppb. Acute respiratory symptoms would have been reduced by 3 million cases and school-loss days by 1 million cases annually if the current 75-ppb standard had been attained. Substantially greater health benefits would have resulted if the CASAC-recommended range of standards (70–60 ppb) had been met. Conclusions: Attaining a more stringent primary ozone standard would significantly reduce ozone-related premature mortality and morbidity. PMID:22809899
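Health impact assessments of this kind typically apply a log-linear concentration-response function to a baseline incidence rate; a minimal sketch, with a hypothetical beta, baseline rate, and population (not this study's inputs):

```python
# Sketch: log-linear health-impact calculation of the kind used in such
# assessments. All parameter values are hypothetical placeholders.
import math

def avoided_deaths(beta, delta_ppb, baseline_rate, population):
    """Attributable fraction 1 - exp(-beta * delta_c), applied to
    baseline deaths (rate * population)."""
    return baseline_rate * population * (1.0 - math.exp(-beta * delta_ppb))

# hypothetical: beta = 0.0004 per ppb, 5 ppb ozone reduction, baseline
# mortality rate 0.008 deaths/person-year, population 50 million
deaths = avoided_deaths(0.0004, 5.0, 0.008, 50_000_000)
print(round(deaths))  # -> 799
```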
40 CFR 80.1603 - Gasoline sulfur standards for refiners and importers.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 17 2014-07-01 2014-07-01 false Gasoline sulfur standards for refiners... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Sulfur § 80.1603 Gasoline sulfur standards for refiners and importers. (a) Sulfur standards—(1) Annual average standard. (i...
7 CFR 31.400 - Samples for wool and wool top grades; method of obtaining.
Code of Federal Regulations, 2010 CFR
2010-01-01
... average and standard deviation of fiber diameter of the bulk sample are within the limits corresponding to... MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE COMMODITY STANDARDS AND STANDARD CONTAINER REGULATIONS PURCHASE OF WOOL AND WOOL TOP SAMPLES § 31.400 Samples for wool...
40 CFR 61.62 - Emission standard for ethylene dichloride plants.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 8 2010-07-01 2010-07-01 false Emission standard for ethylene... Standard for Vinyl Chloride § 61.62 Emission standard for ethylene dichloride plants. (a) Ethylene... used in ethylene dichloride purification is not to exceed 10 ppm (average for 3-hour period), except as...
ERIC Educational Resources Information Center
Larsen, Ralph I.
1973-01-01
Makes recommendations for a single air quality data system (using average time) for interrelating air pollution effects, air quality standards, air quality monitoring, diffusion calculations, source-reduction calculations, and emission standards. (JR)
The fish fauna in tropical rivers: the case of the Sorocaba River basin, São Paulo, Brazil.
Smith, Welber Senteio; Petrere Júnior, Miguel; Barrella, Walter
2003-01-01
A survey was carried out of the fish species in the Sorocaba River basin, the main left-margin tributary of the Tietê River, located in the State of São Paulo, Brazil. The species were collected with gill nets. After identification of the specimens, their relative abundance, weight and standard length were determined. To date, no studies have focused on this subject in this hydrographic basin. Fifty-three species, distributed among eighteen families and six orders, were collected. Characiformes were represented by twenty-eight species, Siluriformes by seventeen, Gymnotiformes by three, Perciformes and Cyprinodontiformes by two each, and Synbranchiformes by one. Two of the collected species were exotic. The most abundant species were Astyanax fasciatus and Hypostomus ancistroides. In terms of total weight, the most representative species were Hoplias malabaricus and Hypostomus ancistroides, while Cyprinus carpio, Prochilodus lineatus, Schizodon nasutus and Hoplias malabaricus were the most representative in average weight. The largest standard lengths were recorded for Sternopygus macrurus, Steindachnerina insculpta, Eigenmannia aff. virescens and Cyprinus carpio.
Standard-Cell, Open-Architecture Power Conversion Systems
2005-10-01
Report fragment (extraction residue from tables and contents pages): PEBB average-model description in VTB, with power terminals for the DC bus and AC pole; maximum junction temperature TLmax = 423 K (Table 5.9); contents entries for switching models and for average modeling of PEBB-based converters.
40 CFR 86.449 - Averaging provisions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Later New Motorcycles, General Provisions § 86.449 Averaging provisions. (a) This section describes how... credits may not be banked for use in later model years, except as specified in paragraph (j) of this... average emission levels are at or below the applicable standards in § 86.410-2006. (2) Compliance with the...
24 CFR 51.103 - Criteria and standards.
Code of Federal Regulations, 2011 CFR
2011-04-01
...-night average sound level produced as the result of the accumulation of noise from all sources contributing to the external noise environment at the site. Day-night average sound level, abbreviated as DNL and symbolized as Ldn, is the 24-hour average sound level, in decibels, obtained after addition of 10...
Analyzing AQP Data to Improve Electronic Flight Bag (EFB) Operations and Training
NASA Technical Reports Server (NTRS)
Seamster, Thomas L.; Kanki, Barbara
2010-01-01
Key points include: initiate data collection and analysis early in the implementation process; use data to identify procedural and training refinements; use a de-identified system to analyze longitudinal data; use longitudinal I/E data to improve I/E standardization; identify above-average pilots and crews and use their performance to specify best practices; and analyze below-average crew performance data to isolate problems with training, evaluator standardization, and pilot proficiency.
40 CFR 464.34 - New source performance standards.
Code of Federal Regulations, 2010 CFR
2010-07-01
... scrubbed) effluent standards for copper, lead, zinc, total phenols, oil and grease, and TSS. For non...) 0.0129 0.0071 Lead (T) 0.0237 0.0116 Zinc (T) 0.0437 0.0165 Oil and grease 1.34 0.446 TSS 0.67 0.536... average Annual average 1 (mg/l) 2 (mg/l) 2 Copper (T) 0.29 0.16 0.0029 Lead (T) 0.53 0.26 0.0067 Zinc (T...
40 CFR 464.34 - New source performance standards.
Code of Federal Regulations, 2011 CFR
2011-07-01
... scrubbed) effluent standards for copper, lead, zinc, total phenols, oil and grease, and TSS. For non...) 0.0129 0.0071 Lead (T) 0.0237 0.0116 Zinc (T) 0.0437 0.0165 Oil and grease 1.34 0.446 TSS 0.67 0.536... average Annual average 1 (mg/l) 2 (mg/l) 2 Copper (T) 0.29 0.16 0.0029 Lead (T) 0.53 0.26 0.0067 Zinc (T...
Lalani, Sanam J; Duffield, Tyler C; Trontel, Haley G; Bigler, Erin D; Abildskov, Tracy J; Froehlich, Alyson; Prigge, Molly B D; Travers, Brittany G; Anderson, Jeffrey S; Zielinski, Brandon A; Alexander, Andrew; Lange, Nicholas; Lainhart, Janet E
2018-06-01
Studies have shown that individuals with autism spectrum disorder (ASD) tend to perform significantly below typically developing individuals on standardized measures of attention, even when controlling for IQ. The current study sought to examine, within ASD, whether anatomical correlates of attention performance differed between those with average to above-average IQ (AIQ group) and those with low-average to borderline ability (LIQ group), as well as in comparison to typically developing controls (TDC). Using automated volumetric analyses, we examined the regional volume of classic attention areas, including the superior frontal gyrus, anterior cingulate cortex, and precuneus, in ASD AIQ (n = 38) and LIQ (n = 18) individuals along with 30 TDC. Auditory attention performance was assessed using subtests of the Test of Memory and Learning (TOMAL), compared among the groups, and then correlated with regional brain volumes. Analyses revealed group differences in attention performance. The three groups did not differ significantly on any auditory attention-related brain volumes; however, trends toward significant volume-attention interactions were observed. Negative correlations were found between precuneus volume and auditory attention performance in the AIQ ASD group, indicating that larger volume was related to poorer performance. Implications for general attention functioning and dysfunctional neural connectivity in ASD are discussed.
Respiratory hospitalizations in association with fine PM and its ...
Despite observed geographic and temporal variation in particulate matter (PM)-related health morbidities, only a small number of epidemiologic studies have evaluated the relation between PM2.5 chemical constituents and respiratory disease. Most assessments are limited by inadequate spatial and temporal resolution of ambient PM measurements and/or by their approaches to examine the role of specific PM components on health outcomes. In a case-crossover analysis using daily average ambient PM2.5 total mass and species estimates derived from the Community Multiscale Air Quality (CMAQ) model and available observations, we examined the association between the chemical components of PM (including elemental and organic carbon, sulfate, nitrate, ammonium, and other remaining) and respiratory hospitalizations in New York State. We evaluated relationships between levels (low, medium, high) of PM constituent mass fractions, and assessed modification of the PM2.5–hospitalization association via models stratified by mass fractions of both primary and secondary PM components. In our results, average daily PM2.5 concentrations in New York State were generally lower than the 24-hr average National Ambient Air Quality Standard (NAAQS). Year-round analyses showed statistically significant positive associations between respiratory hospitalizations and PM2.5 total mass, sulfate, nitrate, and ammonium concentrations at multiple exposure lags (0.5–2.0% per interquartile range [IQR
Impacts of coal burning on ambient PM2.5 pollution in China
NASA Astrophysics Data System (ADS)
Ma, Qiao; Cai, Siyi; Wang, Shuxiao; Zhao, Bin; Martin, Randall V.; Brauer, Michael; Cohen, Aaron; Jiang, Jingkun; Zhou, Wei; Hao, Jiming; Frostad, Joseph; Forouzanfar, Mohammad H.; Burnett, Richard T.
2017-04-01
High concentrations of fine particles (PM2.5), the primary air quality concern in China, are believed to be closely related to China's large consumption of coal. To quantitatively identify the contributions of coal combustion in different sectors to ambient PM2.5, we developed an emission inventory for the year 2013 using up-to-date information on energy consumption and emission controls, and we conducted standard and sensitivity simulations using the chemical transport model GEOS-Chem. According to the simulation, coal combustion contributes 22 µg m-3 (40%) to the total PM2.5 concentration at the national level (averaged over 74 major cities) and up to 37 µg m-3 (50%) in the Sichuan Basin. Among major coal-burning sectors, industrial coal burning is the dominant contributor, with a national average contribution of 10 µg m-3 (17%), followed by coal combustion in power plants and the domestic sector. The national average contribution due to coal combustion is estimated to be 18 µg m-3 (46%) in summer and 28 µg m-3 (35%) in winter. While the contribution of domestic coal burning shows an obvious reduction from winter to summer, the contributions of coal combustion in power plants and the industrial sector remain relatively constant throughout the year.
Searching for the Golden Model of Education: Cross-National Analysis of Math Achievement
Bodovski, Katerina; Byun, Soo-yong; Chykina, Volha; Chung, Hee Jin
2017-01-01
We utilized four waves of TIMSS data in addition to the information we have collected on countries’ educational systems to examine whether different degrees of standardization, differentiation, proportion of students in private schools and governmental spending on education influence students’ math achievement, its variation and socioeconomic status (SES) gaps in math achievement. Findings: A higher level of standardization of educational systems was associated with higher average math achievement. Greater expenditure on education (as % of total government expenditure) was associated with a lower level of dispersion of math achievement and smaller SES gaps in math achievement. Wealthier countries exhibited higher average math achievement and a narrower variation. Higher income inequality (measured by Gini index) was associated with a lower average math achievement and larger SES gaps. Further, we found that higher level of standardization alleviates the negative effects of differentiation in the systems with more rigid tracking. PMID:29151667
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jafarov, E. E.; Parsekian, A. D.; Schaefer, K.
Ground penetrating radar (GPR) has emerged as an effective tool for estimating active layer thickness (ALT) and volumetric water content (VWC) within the active layer. In August 2013, we conducted a series of GPR and probing surveys using a 500 MHz antenna and metallic probe around Barrow, Alaska. Here, we collected about 15 km of GPR data and 1.5 km of probing data. We describe the GPR data processing workflow from raw GPR data to the estimated ALT and VWC. We then include the corresponding uncertainties for each measured and estimated parameter. The estimated average GPR-derived ALT was 41 cm, with a standard deviation of 9 cm. The average probed ALT was 40 cm, with a standard deviation of 12 cm. The average GPR-derived VWC was 0.65, with a standard deviation of 0.14.
Jafarov, E. E.; Parsekian, A. D.; Schaefer, K.; ...
2018-01-09
Ground penetrating radar (GPR) has emerged as an effective tool for estimating active layer thickness (ALT) and volumetric water content (VWC) within the active layer. In August 2013, we conducted a series of GPR and probing surveys using a 500 MHz antenna and metallic probe around Barrow, Alaska. Here, we collected about 15 km of GPR data and 1.5 km of probing data. We describe the GPR data processing workflow from raw GPR data to the estimated ALT and VWC. We then include the corresponding uncertainties for each measured and estimated parameter. The estimated average GPR-derived ALT was 41 cm, with a standard deviation of 9 cm. The average probed ALT was 40 cm, with a standard deviation of 12 cm. The average GPR-derived VWC was 0.65, with a standard deviation of 0.14.
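The depth conversion underlying such GPR workflows is the standard two-way travel-time relation; a minimal sketch, with a hypothetical travel time and soil permittivity (not values from this dataset):

```python
# Sketch: converting a GPR two-way travel time to active layer thickness.
# The permittivity and travel time below are hypothetical placeholders.
C = 0.3  # speed of light in vacuum, m/ns

def alt_from_travel_time(twt_ns, rel_permittivity):
    """Depth = (velocity * two-way travel time) / 2, with v = c / sqrt(eps)."""
    velocity = C / rel_permittivity ** 0.5  # wave velocity in the soil, m/ns
    return velocity * twt_ns / 2.0

# e.g. 9.5 ns two-way travel time in thawed soil with relative permittivity ~12
alt = alt_from_travel_time(9.5, 12.0)
print(round(alt, 2))  # -> 0.41 (meters)
```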
Analysis of Realized Volatility for Nikkei Stock Average on the Tokyo Stock Exchange
NASA Astrophysics Data System (ADS)
Takaishi, Tetsuya; Watanabe, Toshiaki
2016-04-01
We calculate the realized volatility of the Nikkei Stock Average (Nikkei225) Index on the Tokyo Stock Exchange and investigate the return dynamics. To avoid bias in the realized volatility arising from non-trading hours, we calculate realized volatility separately in the two trading sessions of the Tokyo Stock Exchange, i.e. morning and afternoon, and find that the microstructure noise decreases the realized volatility at small sampling frequencies. Using realized volatility as a proxy for the integrated volatility, we standardize returns in the morning and afternoon sessions and investigate the normality of the standardized returns by calculating their variance, kurtosis and 6th moment. We find that the variance, kurtosis and 6th moment are consistent with those of the standard normal distribution, which indicates that the return dynamics of the Nikkei Stock Average are well described by a Gaussian random process with time-varying volatility.
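The core computation can be sketched as follows; this is the generic realized-volatility estimator, not the authors' exact code, and the choice of sampling grid within a session is an assumption left open here.

```python
import math


def realized_volatility(intraday_returns):
    """Realized variance is the sum of squared intraday returns over a
    session; realized volatility is its square root."""
    return math.sqrt(sum(r * r for r in intraday_returns))


def standardize(session_return, session_rv):
    """Standardize a session return by its realized volatility, used as a
    proxy for the integrated volatility of that session."""
    return session_return / session_rv
```

For returns generated by a Gaussian process with time-varying volatility, the standardized returns should have variance near 1 and kurtosis near 3, which is what the study checks.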
Cost-effectiveness of the stream-gaging program in Kentucky
Ruhl, K.J.
1989-01-01
This report documents the results of a study of the cost-effectiveness of the stream-gaging program in Kentucky. The total surface-water program includes 97 daily-discharge stations, 12 stage-only stations, and 35 crest-stage stations and is operated on a budget of $950,700. One station used for research lacks an adequate source of funding and should be discontinued when the research ends. Most stations in the network are multiple-use, with 65 stations operated for the purpose of defining hydrologic systems, 48 for project operation, 47 for definition of regional hydrology, and 43 for hydrologic forecasting purposes. Eighteen stations support water-quality monitoring activities, one station is used for planning and design, and one station is used for research. The average standard error of estimation of streamflow records was determined only for stations in the Louisville Subdistrict. Under current operating policy, with a budget of $223,500, the average standard error of estimation is 28.5%. Altering the travel routes and measurement frequency to reduce the amount of lost stage record would allow a slight decrease in standard error, to 26.9%. The results indicate that the collection of streamflow records in the Louisville Subdistrict is cost effective in its present mode of operation. In the Louisville Subdistrict, a minimum budget of $214,200 is required to operate the current network at an average standard error of 32.7%. A budget less than this does not permit proper service and maintenance of the gages and recorders. The maximum budget analyzed was $268,200, which would result in an average standard error of 16.9%, indicating that if the budget were increased by 20%, the standard error would be reduced by about 40%. (USGS)
Barrett, Bruce; Brown, Roger; Mundt, Marlon
2008-02-01
Evaluative health-related quality-of-life instruments used in clinical trials should be able to detect small but important changes in health status. Several approaches to minimal important difference (MID) and responsiveness have been developed. To compare anchor-based and distributional approaches to important difference and responsiveness for the Wisconsin Upper Respiratory Symptom Survey (WURSS), an illness-specific quality-of-life outcomes instrument. Participants with community-acquired colds self-reported daily using the WURSS-44. Distribution-based methods calculated standardized effect size (ES) and standard error of measurement (SEM). Anchor-based methods compared daily interval changes to global ratings of change, using: (1) standard MID methods based on correspondence to ratings of "a little better" or "somewhat better," and (2) two-level multivariate regression models. About 150 adults were monitored throughout their colds (1,681 sick days): 88% were white, 69% were women, and 50% had completed college. The mean age was 35.5 years (SD = 14.7). WURSS scores increased 2.2 points from the first to second day, and then dropped by an average of 8.2 points per day from days 2 to 7. The SEM averaged 9.1 during these 7 days. Standard methods yielded a between-day MID of 22 points. Regression models of MID projected 11.3-point daily changes. Dividing these estimates of small-but-important difference by pooled SDs yielded coefficients of .425 for standard MID, .218 for the regression model, .177 for SEM, and .157 for ES. These imply per-group sample sizes of 870 using ES, 616 for SEM, 302 for the regression model, and 89 for standard MID, assuming alpha = .05, beta = .20 (80% power), and two-tailed testing. Distribution- and anchor-based approaches provide somewhat different estimates of small but important difference, which in turn can have substantial impact on trial design.
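The link between these standardized coefficients and per-group sample size can be illustrated with the generic two-sample formula below; the abstract's exact figures come from its own trial-specific calculations, so this sketch need not reproduce them.

```python
import math


def n_per_group(d, z_alpha=1.959964, z_beta=0.841621):
    """Per-group sample size for a two-sample comparison of means with
    standardized effect size d, two-tailed alpha = .05 (z = 1.96) and
    80% power (z = 0.84): n = 2 * ((z_alpha + z_beta) / d) ** 2."""
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)
```

The familiar benchmark holds: a medium effect of d = 0.5 requires about 63 participants per group, and halving the detectable effect roughly quadruples the required sample.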
Health impact assessment in the United States: Has practice followed standards?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schuchter, Joseph, E-mail: jws@berkeley.edu; Bhatia, Rajiv; Corburn, Jason
2014-07-01
As an emerging practice, Health Impact Assessment is heterogeneous in purpose, form, and scope and applied in a wide range of decision contexts. This heterogeneity challenges efforts to evaluate the quality and impact of practice. We examined whether information in completed HIA reports reflected objectively-evaluable criteria proposed by the North American HIA Practice Standards Working Group in 2009. From publicly available reports of HIAs conducted in the U.S. and published from 2009 to 2011, we excluded those that were components of, or comment letters on, Environmental Impact Assessments (5) or were demonstration projects or student exercises (8). For the remaining 23 reports, we used the practice standards as a template to abstract data on the steps of HIA, including details on the rationale, authorship, funding, decision and decision-makers, participation, pathways and methods, quality of evidence, and recommendations. Most reports described screening, scoping, and assessment processes, but there was substantial variation in the extent of these processes and the degree of stakeholder participation. Community stakeholders participated in screening or scoping in just two-thirds of the HIAs (16). On average, these HIAs analyzed 5.5 determinants related to 10.6 health impacts. Most HIA reports did not include evaluation or monitoring plans. This study identifies issues for field development and improvement. The standards might be adapted to better account for variability in resources, produce fit-for-purpose HIAs, and facilitate innovation guided by the principles. - Highlights: • Our study examined reported HIAs in the U.S. against published practice standards. • Most HIAs used some screening, scoping and assessment elements from the standards. • The extent of these processes and stakeholder participation varied widely. • The average HIA considered multiple health determinants and impacts. • Evaluation or monitoring plans were generally not included in reports.
Lin, Yu-Kai; Wang, Yu-Chun; Lin, Pay-Liam; Li, Ming-Hsu; Ho, Tsung-Jung
2013-09-01
This study aimed to identify optimal cold-temperature indices that are associated with elevated risks of mortality from, and outpatient visits for, all causes and cardiopulmonary diseases during the cold seasons (November to April) from 2000 to 2008 in Northern, Central and Southern Taiwan. Eight cold-temperature indices, the average, maximum, and minimum temperatures, and the temperature humidity index (THI), wind chill index, apparent temperature, effective temperature (ET), and net effective temperature, and their standardized Z scores were applied to distributed lag non-linear models. Index-specific cumulative 26-day (lag 0-25) mortality risk, cumulative 8-day (lag 0-7) outpatient visit risk, and their 95% confidence intervals were estimated at 1 and 2 standard deviations below the median temperature, compared with the Z score of the lowest risks for mortality and outpatient visits. The average temperature was adequate to evaluate the mortality risk from all causes and circulatory diseases. Excess all-cause mortality increased by 17-24% when the average temperature was at Z=-1, and by 27-41% at Z=-2, among study areas. The cold-temperature indices were inconsistent in estimating the risk of outpatient visits. Average temperature and THI were appropriate indices for measuring the risk for all-cause outpatient visits. The relative risk of all-cause outpatient visits increased slightly, by 2-7%, when the average temperature was at Z=-1, with no significant risk at Z=-2. Minimum temperature yielded the strongest risk estimates associated with outpatient visits for respiratory diseases. In conclusion, the relationships between cold temperatures and health varied among study areas, types of health event, and the cold-temperature indices applied. Mortality from all causes and circulatory diseases and outpatient visits for respiratory diseases have a strong association with cold temperatures in the subtropical island of Taiwan. Copyright © 2013 Elsevier B.V. All rights reserved.
Technique for simulating peak-flow hydrographs in Maryland
Dillow, Jonathan J.A.
1998-01-01
The efficient design and management of many bridges, culverts, embankments, and flood-protection structures may require the estimation of time-of-inundation and (or) storage of floodwater relating to such structures. These estimates can be made on the basis of information derived from the peak-flow hydrograph. Average peak-flow hydrographs corresponding to a peak discharge of specific recurrence interval can be simulated for drainage basins having drainage areas less than 500 square miles in Maryland, using a direct technique of known accuracy. The technique uses dimensionless hydrographs in conjunction with estimates of basin lagtime and instantaneous peak flow. Ordinary least-squares regression analysis was used to develop an equation for estimating basin lagtime in Maryland. Drainage area, main channel slope, forest cover, and impervious area were determined to be the significant explanatory variables necessary to estimate average basin lagtime at the 95-percent confidence interval. Qualitative variables included in the equation adequately correct for geographic bias across the State. The average standard error of prediction associated with the equation is approximated as plus or minus (+/-) 37.6 percent. Volume correction factors may be applied to the basin lagtime on the basis of a comparison between actual and estimated hydrograph volumes prior to hydrograph simulation. Three dimensionless hydrographs were developed and tested using data collected during 278 significant rainfall-runoff events at 81 stream-gaging stations distributed throughout Maryland and Delaware. The data represent a range of drainage area sizes and basin conditions. The technique was verified by applying it to the simulation of 20 peak-flow events and comparing actual and simulated hydrograph widths at 50 and 75 percent of the observed peak-flow levels. The events chosen are considered extreme in that the average recurrence interval of the selected peak flows is 130 years. 
The average standard errors of prediction were +/- 61 and +/- 56 percent at the 50 and 75 percent of peak-flow hydrograph widths, respectively.
Predicting Secchi disk depth from average beam attenuation in a deep, ultra-clear lake
Larson, G.L.; Hoffman, R.L.; Hargreaves, B.R.; Collier, R.W.
2007-01-01
We addressed potential sources of error in estimating the water clarity of mountain lakes by investigating the use of beam transmissometer measurements to estimate Secchi disk depth. The optical properties Secchi disk depth (SD) and beam transmissometer attenuation (BA) were measured in Crater Lake (Crater Lake National Park, Oregon, USA) at a designated sampling station near the maximum depth of the lake. A standard 20 cm black and white disk was used to measure SD. The transmissometer light source had a nearly monochromatic wavelength of 660 nm and a path length of 25 cm. We created a SD prediction model by regression of the inverse SD of 13 measurements recorded on days when environmental conditions were acceptable for disk deployment with BA averaged over the same depth range as the measured SD. The relationship between inverse SD and averaged BA was significant and the average 95% confidence interval for predicted SD relative to the measured SD was ±1.6 m (range = -4.6 to 5.5 m) or ±5.0%. Eleven additional sample dates tested the accuracy of the predictive model. The average 95% confidence interval for these sample dates was ±0.7 m (range = -3.5 to 3.8 m) or ±2.2%. The 1996-2000 time-series means for measured and predicted SD varied by 0.1 m, and the medians varied by 0.5 m. The time-series mean annual measured and predicted SDs also varied little, with intra-annual differences between measured and predicted mean annual SD ranging from -2.1 to 0.1 m. The results demonstrated that this prediction model reliably estimated Secchi disk depths and can be used to significantly expand optical observations in an environment where the conditions for standardized SD deployments are limited. © 2007 Springer Science+Business Media B.V.
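The model's form (inverse Secchi depth fit as a linear function of averaged beam attenuation) can be sketched as below; the calibration numbers are invented for illustration, not the Crater Lake measurements.

```python
# Hypothetical calibration data (illustrative, not the Crater Lake data):
# averaged beam attenuation (1/m) and measured Secchi depth (m).
ba = [0.10, 0.12, 0.15, 0.20, 0.25]
sd = [35.0, 30.0, 25.0, 19.0, 15.0]

# Ordinary least-squares fit of inverse Secchi depth on averaged attenuation.
y = [1.0 / d for d in sd]
n = len(ba)
mx, my = sum(ba) / n, sum(y) / n
slope = (sum((x - mx) * (v - my) for x, v in zip(ba, y))
         / sum((x - mx) ** 2 for x in ba))
intercept = my - slope * mx


def predict_sd(ba_avg):
    """Predicted Secchi depth (m) from averaged beam attenuation (1/m)."""
    return 1.0 / (intercept + slope * ba_avg)
```

The inverse form guarantees the physically sensible behavior that predicted clarity decreases monotonically as attenuation increases.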
Stockwell, Tim; Zhao, Jinhui; Sherk, Adam; Callaghan, Russell C; Macdonald, Scott; Gatley, Jodi
2017-07-01
Saskatchewan's introduction in April 2010 of minimum prices graded by alcohol strength led to an average minimum price increase of 9.1% per Canadian standard drink (=13.45 g ethanol). This increase was shown to be associated with reduced consumption and switching to lower-alcohol-content beverages. Police also informally reported marked reductions in night-time alcohol-related crime. This study aims to assess the impacts of changes to Saskatchewan's minimum alcohol-pricing regulations between 2008 and 2012 on selected crime events often related to alcohol use. Data were obtained from Canada's Uniform Crime Reporting Survey. Auto-regressive integrated moving average time series models were used to test immediate and lagged associations between minimum price increases and rates of night-time and police-identified alcohol-related crimes. Controls were included for simultaneous crime rates in the neighbouring province of Alberta, economic variables, linear trend, seasonality and autoregressive and/or moving-average effects. The introduction of increased minimum alcohol prices was associated with an abrupt decrease in night-time alcohol-related traffic offences for men (-8.0%, P < 0.001), but not for women. No significant immediate changes were observed for non-alcohol-related driving offences, disorderly conduct or violence. Significant monthly lagged effects were observed for violent offences (-19.7% at month 4 to -18.2% at month 6), which broadly corresponded to lagged effects in on-premise alcohol sales. Increased minimum alcohol prices may contribute to reductions in alcohol-related traffic and violent crimes perpetrated by men. Observed lagged effects for violent incidents may be due to a delay in bars passing on increased prices to their customers, perhaps because of inventory stockpiling. [Stockwell T, Zhao J, Sherk A, Callaghan RC, Macdonald S, Gatley J. Assessing the impacts of Saskatchewan's minimum alcohol pricing regulations on alcohol-related crime.
Drug Alcohol Rev 2017;36:492-501]. © 2016 Australasian Professional Society on Alcohol and other Drugs.
Gómez-Cortés, Pilar; Brenna, J Thomas; Sacks, Gavin L
2012-06-19
Optimal accuracy and precision in small-molecule profiling by mass spectrometry generally requires isotopically labeled standards chemically representative of all compounds of interest. However, preparation of mixed standards from commercially available pure compounds is often prohibitively expensive and time-consuming, and many labeled compounds are not available in pure form. We used a single-prototype uniformly labeled [U-(13)C]compound to generate [U-(13)C]-labeled volatile standards for use in subsequent experimental profiling studies. [U-(13)C]-α-Linolenic acid (18:3n-3, ALA) was thermally oxidized to produce labeled lipid degradation volatiles which were subsequently characterized qualitatively and quantitatively. Twenty-five [U-(13)C]-labeled volatiles were identified by headspace solid-phase microextraction-gas chromatography/time-of-flight mass spectrometry (HS-SPME-GC/TOF-MS) by comparison of spectra with unlabeled volatiles. Labeled volatiles were quantified by a reverse isotope dilution procedure. Using the [U-(13)C]-labeled standards, limits of detection comparable to or better than those of previous HS-SPME reports were achieved, 0.010-1.04 ng/g. The performance of the [U-(13)C]-labeled volatile standards was evaluated using a commodity soybean oil (CSO) oxidized at 60 °C from 0 to 15 d. Relative responses of n-decane, an unlabeled internal standard otherwise absent from the mixture, and [U-(13)C]-labeled oxidation products changed by up to 8-fold as the CSO matrix was oxidized, demonstrating that reliance on a single standard in volatile profiling studies yields inaccurate results due to changing matrix effects. The [U-(13)C]-labeled standard mixture was used to quantify 25 volatiles in oxidized CSO and low-ALA soybean oil with an average relative standard deviation of 8.5%. 
Extension of this approach to other labeled substrates, e.g., [U-(13)C]-labeled sugars and amino acids, for profiling studies should be feasible and can dramatically improve quantitative results compared to use of a single standard.
Zieliński, Tomasz G
2015-04-01
This paper proposes and discusses an approach for the design and quality inspection of the morphology of sound-absorbing foams, using a relatively simple technique for the random generation of periodic microstructures representative of open-cell foams with spherical pores. The design is controlled by a few parameters, namely the total open porosity, the average pore size, and the standard deviation of pore size. These design parameters are set exactly and independently; however, setting the standard deviation of pore sizes requires some number of pores in the representative volume element (RVE), and this number is a procedure parameter. Another pore-structure parameter which may be indirectly affected is the average size of the windows linking the pores; it is only weakly controlled by the maximal pore-penetration factor and, moreover, depends on the porosity and pore size. The proposed methodology for testing microstructure designs of sound-absorbing porous media applies multi-scale modeling in which some important transport parameters, responsible for sound propagation in a porous medium, are calculated from the microstructure using the generated RVE, in order to estimate the sound velocity and absorption of the designed material.
Adjuvant corneal crosslinking to prevent hyperopic LASIK regression
Aslanides, Ioannis M; Mukherjee, Achyut N
2013-01-01
Purpose: To report the long-term outcomes, safety, stability, and efficacy in a pilot series of simultaneous hyperopic laser-assisted in situ keratomileusis (LASIK) and corneal crosslinking (CXL). Method: A small cohort series of five eyes, with clinically suboptimal topography and/or thickness, underwent LASIK surgery with immediate riboflavin application under the flap, followed by UV light irradiation. Postoperative assessment was performed at 1, 3, 6, and 12 months, with late follow-up at 4 years, and results were compared with a matched cohort that received LASIK only. Results: The average age of the LASIK-CXL group was 39 years (26–46), and the average spherical equivalent hyperopic refractive error was +3.45 diopters (standard deviation 0.76; range 2.5 to 4.5). All eyes maintained refractive stability over the 4 years. There were no complications related to CXL, and topographic and clinical outcomes were as expected for standard LASIK. Conclusion: This limited series suggests that simultaneous LASIK and CXL for hyperopia is safe. Outcomes of the small cohort suggest that this technique may be promising for ameliorating hyperopic regression, presumed to be biomechanical in origin, and may also address ectasia risk. PMID:23576861
Monitoring Poisson observations using combined applications of Shewhart and EWMA charts
NASA Astrophysics Data System (ADS)
Abujiya, Mu'azu Ramat
2017-11-01
The Shewhart and exponentially weighted moving average (EWMA) charts for nonconformities are the most widely used procedures of choice for monitoring Poisson observations in modern industries. Individually, the Shewhart and EWMA charts are only sensitive to large and small shifts, respectively. To enhance the detection abilities of the two schemes in monitoring all kinds of shifts in Poisson count data, this study examines the performance of combined applications of the Shewhart and EWMA Poisson control charts. Furthermore, the study proposes modifications based on a well-structured statistical data-collection technique, ranked set sampling (RSS), to detect shifts in the mean of a Poisson process more quickly. The relative performance of the proposed Shewhart-EWMA Poisson location charts is evaluated in terms of the average run length (ARL), standard deviation of the run length (SDRL), median run length (MRL), average ratio ARL (ARARL), average extra quadratic loss (AEQL) and performance comparison index (PCI). The new Poisson control charts based on the RSS method are generally superior to most of the existing schemes for monitoring Poisson processes. The use of these combined Shewhart-EWMA Poisson charts is illustrated with an example to demonstrate the practical implementation of the design procedure.
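A minimal sketch of the combined scheme (without the RSS modification) for a known in-control mean mu0 follows; the smoothing constant and limit widths are illustrative choices, not values from the study.

```python
import math


def shewhart_ewma_poisson(counts, mu0, lam=0.2, l_shewhart=3.0, l_ewma=2.7):
    """Joint monitoring of Poisson counts: signal when either the raw count
    breaches Shewhart 3-sigma limits (large shifts) or the EWMA statistic
    breaches its asymptotic limits (small sustained shifts).
    Returns the indices at which the combined chart signals."""
    sigma0 = math.sqrt(mu0)                          # Poisson: variance = mean
    ewma_sigma = sigma0 * math.sqrt(lam / (2 - lam)) # asymptotic EWMA std dev
    z = mu0                                          # EWMA starts at the target
    signals = []
    for t, c in enumerate(counts):
        z = lam * c + (1 - lam) * z                  # EWMA update
        shewhart_hit = abs(c - mu0) > l_shewhart * sigma0
        ewma_hit = abs(z - mu0) > l_ewma * ewma_sigma
        if shewhart_hit or ewma_hit:
            signals.append(t)
    return signals
```

With mu0 = 4, a single count of 12 trips the Shewhart limit immediately, while a sustained run of 7s is caught by the EWMA component after a few observations, illustrating the complementary sensitivities the abstract describes.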
Zheng, Xuan; Wu, Ye; Jiang, Jingkun; Zhang, Shaojun; Liu, Huan; Song, Shaojie; Li, Zhenhua; Fan, Xiaoxiao; Fu, Lixin; Hao, Jiming
2015-11-17
Black carbon (BC) emissions from heavy-duty diesel vehicles (HDDVs) are rarely continuously measured using portable emission measurement systems (PEMSs). In this study, we utilize a PEMS to obtain real-world BC emission profiles for 25 HDDVs in China. The average fuel-based BC emissions of HDDVs certified according to Euro II, III, IV, and V standards are 2224 ± 251, 612 ± 740, 453 ± 584, and 152 ± 3 mg kg(-1), respectively. Notably, HDDVs adopting mechanical pump engines had significantly higher BC emissions than those equipped with electronic injection engines. Applying the useful features of PEMSs, we can relate instantaneous BC emissions to driving conditions using an operating mode binning methodology, and the average emission rates for Euro II to Euro IV diesel trucks can be constructed. From a macroscopic perspective, we observe that average speed is a significant factor affecting BC emissions and is well correlated with distance-based emissions (R(2) = 0.71). Therefore, the average fuel-based and distance-based BC emissions on congested roads are 40 and 125% higher than those on freeways. These results should be taken into consideration in future emission inventory studies.
NASA Astrophysics Data System (ADS)
Muthukumar, Palanisamy; Naik, Bukke Kiran; Goswami, Amarendra
2018-02-01
Mechanical-draft cross-flow cooling towers are generally used in large-scale air-conditioning plants with water-cooled condensers for removing heat from the warm water that comes out of the condensing unit. During this process a considerable amount of water in the form of drift (droplets) and evaporation is carried away along with the circulated air. In this paper, the performance evaluation of a standard cross-flow induced-draft cooling tower in terms of water loss, range, approach and cooling tower efficiency is presented. Extensive experimental studies have been carried out in three cooling towers employed in a 1200 TR A/C plant with water-cooled condensers over a period of time. The daily variation of average water loss and cooling tower performance parameters is reported for selected days. The reported average water loss from the three cooling towers is 4080 l/h, and the estimated average water loss per TR per hour is about 3.1 l at an average relative humidity (RH) of 83%. The water loss during peak hours (2 pm) is about 3.4 l/h-TR, corresponding to 88% RH, and the corresponding efficiency of the cooling towers varied between 25% and 45%.
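The range, approach and efficiency figures follow from standard cooling tower definitions, sketched below with illustrative temperatures rather than measurements from the plant described here.

```python
def cooling_tower_performance(t_water_in, t_water_out, t_wet_bulb):
    """Standard cooling tower metrics (all temperatures in deg C):
    range    = hot-water inlet minus cold-water outlet,
    approach = cold-water outlet minus ambient wet-bulb temperature,
    efficiency (%) = 100 * range / (range + approach)."""
    rng = t_water_in - t_water_out
    approach = t_water_out - t_wet_bulb
    efficiency = 100.0 * rng / (rng + approach)
    return rng, approach, efficiency
```

For example, water cooled from 37 to 32 deg C against a 27 deg C wet-bulb gives a 5 deg range, a 5 deg approach, and 50% efficiency; the 25-45% values reported above correspond to approaches somewhat larger than the range.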
Chen, Yong; Liu, Yulun; Ning, Jing; Cormier, Janice; Chu, Haitao
2014-01-01
Systematic reviews of diagnostic tests often involve a mixture of case-control and cohort studies. The standard methods for evaluating diagnostic accuracy focus only on sensitivity and specificity and ignore the information on disease prevalence contained in cohort studies. Consequently, such methods cannot provide estimates of measures related to disease prevalence, such as population-averaged or overall positive and negative predictive values, which reflect the clinical utility of a diagnostic test. In this paper, we propose a hybrid approach that jointly models the disease prevalence along with the diagnostic test sensitivity and specificity in cohort studies, and the sensitivity and specificity in case-control studies. In order to overcome the potential computational difficulties in the standard full likelihood inference of the proposed hybrid model, we propose an alternative inference procedure based on the composite likelihood. Such composite-likelihood-based inference does not suffer computational problems and maintains high relative efficiency. In addition, it is more robust to model misspecifications compared to the standard full likelihood inference. We apply our approach to a review of the performance of contemporary diagnostic imaging modalities for detecting metastases in patients with melanoma. PMID:25897179
Rudzki, Piotr J; Gniazdowska, Elżbieta; Buś-Kwaśnik, Katarzyna
2018-06-05
Liquid chromatography coupled to mass spectrometry (LC-MS) is a powerful tool for studying pharmacokinetics and toxicokinetics. Reliable bioanalysis requires the characterization of the matrix effect, i.e. the influence of endogenous or exogenous compounds on the analyte signal intensity. We have compared two methods for the quantitation of the matrix effect. The CVs (%) of internal-standard-normalized matrix factors recommended by the European Medicines Agency were evaluated against internal-standard-normalized relative matrix effects derived from Matuszewski et al. (2003). Both methods use post-extraction spiked samples, but matrix factors also require neat solutions. We have tested both approaches using analytes of diverse chemical structures. The study did not reveal relevant differences in the results obtained with the two calculation methods. After normalization with the internal standard, the CV (%) of the matrix factor was on average 0.5% higher than the corresponding relative matrix effect. The method adopted by the European Medicines Agency seems to be slightly more conservative in the analyzed datasets. Nine analytes of different structures enabled a general overview of the problem; still, further studies are encouraged to confirm our observations. Copyright © 2018 Elsevier B.V. All rights reserved.
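The matrix factor calculation described can be sketched as follows; the peak-area numbers in the usage example are invented for illustration, not data from the study.

```python
import statistics


def is_normalized_mf(analyte_matrix, analyte_neat, is_matrix, is_neat):
    """Matrix factor = response in post-extraction spiked matrix divided by
    response in neat solution; internal-standard normalization divides the
    analyte's MF by the internal standard's MF from the same sample."""
    return (analyte_matrix / analyte_neat) / (is_matrix / is_neat)


def cv_percent(values):
    """Coefficient of variation (%) of the normalized matrix factors
    across matrix lots, the quantity compared against acceptance limits."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)
```

For instance, an analyte suppressed to 90% of its neat response while the internal standard is suppressed to 95% gives a normalized matrix factor of about 0.95, and the CV of such factors across lots quantifies lot-to-lot variability of the matrix effect.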
Glennen, Sharon
2015-01-01
This study aimed to determine the relative strengths and weaknesses in language and verbal short-term memory abilities of school-age children who were adopted from Eastern Europe. Children adopted between 1;0 and 4;11 (years;months) of age were assessed with the Clinical Evaluation of Language Fundamentals-Preschool, Second Edition (CELF-P2) and the Clinical Evaluation of Language Fundamentals, Fourth Edition (CELF-4) at age 5 and ages 6-7. Language composites and subtests were compared across time. All CELF-P2 and CELF-4 mean scores fell in the average range. Receptive composites were 102.74 and 103.86, and expressive composites were 100.58 and 98.42, at age 5 and ages 6-7, respectively. Age of adoption did not correlate to test scores. At ages 6-7, receptive language, sentence formulation, and vocabulary were areas of strength, with subtest scores significantly better than test norms. Verbal short-term memory and expressive grammar subtest scores were within the average range but significantly worse than test norms. A high percentage of children scored 1 standard deviation below the mean on these 2 subtests (27.3%-34.1%). Eastern European adoptees had average scores on a variety of language tests. Vocabulary was a relative strength; enriching the environment substantially improved this language area. Verbal short-term memory and expressive grammar were relative weaknesses. Children learning a language later in life may have difficulty with verbal short-term memory, which leads to weaknesses in expressive syntax and grammar.
Becker, R; Lô, I; Sporkert, F; Baumgartner, M
2018-07-01
The increasing demand for hair ethyl glucuronide (HEtG) testing in alcohol consumption monitoring according to cut-off levels set by the Society of Hair Testing (SoHT) has triggered a proficiency testing program based on interlaboratory comparisons (ILC). Here, the outcome of nine consecutive ILC rounds organised by the SoHT on the determination of HEtG between 2011 and 2017 is summarised regarding interlaboratory reproducibility and the influence of procedural variants. Test samples prepared from cut hair (1 mm) with authentic (in-vivo incorporated) and soaked (in-vitro incorporated) HEtG concentrations up to 80 pg/mg were provided to 27-35 participating laboratories. Laboratory results were evaluated according to ISO 5725-5 and provided robust averages and relative reproducibility standard deviations typically between 20 and 35%, in reasonable accordance with the prediction of the Horwitz model. Evaluation of the results regarding the analytical techniques revealed no significant differences between gas and liquid chromatographic methods. In contrast, a detailed evaluation of different sample preparations revealed significantly higher average values when pulverised hair was tested compared with cut hair. This observation was reinforced over the different ILC rounds and can be attributed to the increased acceptance and routine use of hair pulverisation among laboratories. Further, the reproducibility standard deviations among laboratories performing pulverisation were on average in very good agreement with the prediction of the Horwitz model. Use of sonication showed no effect on the HEtG extraction yield. Copyright © 2018 Elsevier B.V. All rights reserved.
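The Horwitz prediction referenced here can be sketched directly; concentration enters the model as a dimensionless mass fraction, so 80 pg of EtG per mg of hair corresponds to 8e-8.

```python
import math


def horwitz_rsd_percent(mass_fraction):
    """Predicted interlaboratory reproducibility RSD (%) from the Horwitz
    model: RSD_R = 2 ** (1 - 0.5 * log10(C)), with C a dimensionless mass
    fraction; equivalently RSD_R = 2 * C ** (-0.1505)."""
    return 2 ** (1 - 0.5 * math.log10(mass_fraction))


# 80 pg/mg = 80e-12 g per 1e-3 g of hair = 8e-8 as a mass fraction
rsd_at_80_pg_per_mg = horwitz_rsd_percent(80e-12 / 1e-3)
```

At 8e-8 the model predicts an RSD of roughly 23%, sitting inside the 20-35% band of reproducibility standard deviations the rounds actually produced.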
Alternative Fuels Data Center: Low Rolling Resistance Tires
Whole-body kinematic and dynamic response of restrained PMHS in frontal sled tests.
Forman, Jason; Lessley, David; Kent, Richard; Bostrom, Ola; Pipkorn, Bengt
2006-11-01
The literature contains a wide range of response data describing the biomechanics of isolated body regions. Current data for the validation of frontal anthropomorphic test devices and human body computational models lack, however, a detailed description of the whole-body response to loading with contemporary restraints in automobile crashes. This study presents data from 14 frontal sled tests describing the physical response of postmortem human surrogates (PMHS) in the following frontal crash environments: A) (5 tests) driver position, force-limited 3-point belt plus airbag restraint (FLB+AB), 48 km/h deltaV. B) (3 tests) passenger position, FLB+AB restraint, 48 km/h deltaV. C) (3 tests) passenger position, standard (not force-limited) 3-point belt plus airbag restraint (SB+AB), 48 km/h deltaV. D) (3 tests) passenger position, standard 3-point belt restraint (SB), 29 km/h deltaV. Reported data include x-axis and z-axis (SAE occupant reference frame) accelerations of the head, spine (upper, middle, and lower), and pelvis; rate of angular rotation of the head about the y-axis; displacements of the head, upper spine, pelvis and knee relative to the vehicle buck; and deformation contours of the upper and lower chest. A variety of kinematic trends are identified across the different test conditions, including a decrease in head and thorax excursion and a change in the nature of the excursion in the driver position compared to the passenger position. Despite this increase in forward excursion when compared to the driver's side FLB+AB tests, the passenger's side FLB+AB tests resulted in greater peak thoracic (T8) x-axis accelerations (passenger's side: -29 g; driver's side: -22 g) and comparable maximum chest deflection (passenger's side: 23+/-3.1% of the undeformed chest depth; driver's side: 23+/-5.6%).
In the 48 km/h passenger's side tests, the head excursion associated with the force-limiting belt system was approximately 15% greater than that for a standard belt system in tests that were otherwise identical. This was accompanied by a decrease in chest deflection of approximately 20% with the force-limiting system. Despite the decrease in test speed, the 29 km/h passenger's side tests with standard (not force-limiting) 3-point belt restraints resulted in maximum chest deflection (16 ± 5.6% average) comparable to that observed in the 48 km/h, FLB+AB, driver's side tests (21 ± 3.1% average). Finally, forward head excursion was slightly higher in the 29 km/h passenger's side tests (33 ± 1.1 cm average) than in the 48 km/h driver's side tests (27 ± 3.7 cm average), and was lower than that in the 48 km/h FLB+AB (58 ± 4.4 cm average) and SB+AB (46 ± 2.1 cm average) passenger's side tests.
Accuracy of acoustic velocity metering systems for measurement of low velocity in open channels
Laenen, Antonius; Curtis, R. E.
1989-01-01
Acoustic velocity meter (AVM) accuracy depends on equipment limitations, the accuracy of acoustic-path length and angle determination, and the stability of the relation between mean velocity and acoustic-path velocity. Equipment limitations depend on path length and angle, transducer frequency, timing-oscillator frequency, and signal-detection scheme. Typically, the velocity error from this source is about ±1 to ±10 mm/s. Error in acoustic-path angle or length will result in a proportional measurement bias. Typically, an angle error of one degree will result in a velocity error of 2%, and a path-length error of one meter in 100 meters will result in an error of 1%. Ray bending (signal refraction) depends on path length and the density gradients present in the stream. Any deviation from a straight acoustic path between transducers will change the unique relation between path velocity and mean velocity. These deviations will then introduce error in the mean velocity computation. Typically, for a 200-meter path length, the resultant error is less than one percent, but for a 1,000-meter path length, the error can be greater than 10%. Recent laboratory and field tests have substantiated assumptions of equipment limitations. Tow-tank tests of an AVM system with a 4.69-meter path length yielded an average standard deviation error of 9.3 mm/s, and field tests of an AVM system with a 20.5-meter path length yielded an average standard deviation error of 4 mm/s. (USGS)
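As a back-of-the-envelope check on the angle-sensitivity figure above, a minimal sketch (assuming the usual crossed-path AVM geometry, where the computed line velocity scales as 1/cos θ, so the relative error is approximately tan θ · δθ; the 45° path angle is an illustrative assumption):

```python
import math

def angle_error_pct(theta_deg, dtheta_deg):
    """Approximate relative velocity error (%) caused by an error in the
    acoustic-path angle. For v proportional to 1/cos(theta),
    dv/v ~= tan(theta) * dtheta (dtheta in radians)."""
    return math.tan(math.radians(theta_deg)) * math.radians(dtheta_deg) * 100.0

# A 1-degree angle error on a typical 45-degree path gives roughly the
# 2% figure quoted in the abstract (~1.7% by this first-order estimate).
err = angle_error_pct(45.0, 1.0)
```

The first-order estimate lands near, but slightly below, the quoted 2%; the exact value depends on the installed path angle.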
Meng, Jie; Zhu, Lijing; Zhu, Li; Wang, Huanhuan; Liu, Song; Yan, Jing; Liu, Baorui; Guan, Yue; Ge, Yun; He, Jian; Zhou, Zhengyang; Yang, Xiaofeng
2016-10-22
To explore the role of apparent diffusion coefficient (ADC) histogram shape-related parameters in early assessment of treatment response during the concurrent chemo-radiotherapy (CCRT) course of advanced cervical cancers. This prospective study was approved by the local ethics committee and informed consent was obtained from all patients. Thirty-two patients with advanced cervical squamous cell carcinomas underwent diffusion-weighted magnetic resonance imaging (b values, 0 and 800 s/mm²) before CCRT, at the end of the 2nd and 4th weeks during CCRT, and immediately after CCRT completion. Whole-lesion ADC histogram analysis generated several histogram shape-related parameters including skewness, kurtosis, s-sDav, width, standard deviation, as well as first-order entropy and second-order entropies. The averaged ADC histograms of the 32 patients were generated to visually observe dynamic changes of the histogram shape following CCRT. All parameters except width and standard deviation showed significant changes during CCRT (all P < 0.05), and their variation trends fell into four different patterns. Skewness and kurtosis both showed high early decline rates (43.10%, 48.29%) at the end of the 2nd week of CCRT. All entropies kept decreasing significantly from 2 weeks after CCRT initiation onward. The shape of the averaged ADC histogram also changed markedly following CCRT. ADC histogram shape analysis holds potential for monitoring early tumor response in patients with advanced cervical cancers undergoing CCRT.
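The first-order histogram shape parameters named above (skewness, kurtosis, standard deviation, entropy) can be sketched in pure NumPy; this is a simplified illustration, not the study's pipeline, and the bin count and use of excess kurtosis are assumptions:

```python
import numpy as np

def histogram_shape_params(adc, bins=128):
    """First-order histogram shape metrics of a whole-lesion ADC map.
    Skewness/kurtosis are computed from the voxel values directly;
    entropy from the normalized histogram (base-2, so units are bits)."""
    adc = np.asarray(adc, dtype=float)
    m, s = adc.mean(), adc.std()
    skewness = ((adc - m) ** 3).mean() / s ** 3
    kurt = ((adc - m) ** 4).mean() / s ** 4 - 3.0   # excess kurtosis
    counts, _ = np.histogram(adc, bins=bins)
    p = counts[counts > 0] / counts.sum()
    entropy = -(p * np.log2(p)).sum()               # first-order entropy
    return {"skewness": skewness, "kurtosis": kurt,
            "std": adc.std(ddof=1), "entropy": entropy}
```

A decline in skewness and kurtosis over serial scans, as reported above, would show up as these values falling between timepoints.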
Change in area of geographic atrophy in the Age-Related Eye Disease Study: AREDS report number 26.
Lindblad, Anne S; Lloyd, Patricia C; Clemons, Traci E; Gensler, Gary R; Ferris, Frederick L; Klein, Michael L; Armstrong, Jane R
2009-09-01
To characterize progression of geographic atrophy (GA) associated with age-related macular degeneration in AREDS as measured by digitized fundus photographs. Fundus photographs from 181 of 4757 AREDS participants with a GA area of at least 0.5 disc areas at baseline, or from participants who developed bilateral GA during follow-up, were scanned, digitized, and evaluated longitudinally. Geographic atrophy area was determined using planimetry. Rates of progression from noncentral to central GA and of vision loss following development of central GA were assessed in the entire AREDS cohort. Median initial lesion size was 4.3 mm². Average change in digital area of GA from baseline was 2.03 mm² (standard error of the mean, 0.24 mm²) at 1 year, 3.78 mm² (0.24 mm²) at 2 years, 5.93 mm² (0.34 mm²) at 3 years, and 1.78 mm² (0.086 mm²) per year overall. Median time to developing central GA after any GA diagnosis was 2.5 years (95% confidence interval, 2.0-3.0). Average visual acuity decreased by 3.7 letters at first documentation of central GA, and by 22 letters at year 5. Growth of GA area can be reliably measured using standard fundus photographs that are digitized and subsequently graded at a reading center. Development of GA is associated with subsequent further growth of GA, development of central GA, and loss in central vision.
Investigating adsorption/desorption of carbon dioxide in aluminum compressed gas cylinders.
Miller, Walter R; Rhoderick, George C; Guenther, Franklin R
2015-02-03
Between June 2010 and June 2011, the National Institute of Standards and Technology (NIST) gravimetrically prepared a suite of 20 carbon dioxide (CO2) in air primary standard mixtures (PSMs). Ambient mole fraction levels were obtained through six levels of dilution beginning with pure (99.999%) CO2. The sixth level covered the ambient range from 355 to 404 μmol/mol. This level will be used to certify cylinder mixtures of compressed dry whole air from both the northern and southern hemispheres as NIST standard reference materials (SRMs). The first five levels of PSMs were verified against existing PSMs in a balance of air or nitrogen with excellent agreement observed (the average percent difference between the calculated and analyzed values was 0.002%). After the preparation of a new suite of PSMs at ambient level, they were compared to an existing suite of PSMs. It was observed that the analyzed concentration of the new PSMs was less than the calculated gravimetric concentration by as much as 0.3% relative. The existing PSMs had been used in a Consultative Committee for Amount of Substance-Metrology in Chemistry Key Comparison (K-52) in which there was excellent agreement (the NIST-analyzed value was -0.09% different from the calculated value, while the average of the difference for all 18 participants was -0.10%) with those of other National Metrology Institutes and World Meteorological Organization designated laboratories. In order to determine the magnitude of these losses at the ambient level, a series of "daughter/mother" tests were initiated and conducted in which the gas mixture containing CO2 from a "mother" cylinder was transferred into an evacuated "daughter" cylinder. These cylinder pairs were then compared using cavity ring-down spectroscopy under high reproducibility conditions (the average percent relative standard deviation of sample response was 0.02). 
A ratio of the daughter instrument response to the mother response was calculated, with the resultant deviation from unity being a measure of the CO2 loss or gain. Cylinders from three specialty gas vendors were tested to find the appropriate cylinder in which to prepare the new PSMs. All cylinders tested showed a loss of CO2, presumably to the walls of the cylinder. The vendor cylinders exhibiting the least loss of CO2 were then purchased to be used to gravimetrically prepare the PSMs, adjusting the calculated mole fraction for the loss bias and an uncertainty calculated from this work.
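The daughter/mother comparison described above reduces to a ratio of mean instrument responses, with the deviation from unity read as CO2 loss or gain; a minimal sketch (function and variable names are illustrative, not NIST's):

```python
import statistics

def response_ratio(daughter, mother):
    """Mean daughter/mother instrument-response ratio from replicate
    spectrometer readings taken under repeatability conditions.
    A ratio below 1 indicates CO2 loss, presumed to the cylinder walls."""
    d = statistics.mean(daughter)
    m = statistics.mean(mother)
    ratio = d / m
    loss_pct = (1.0 - ratio) * 100.0   # positive => loss of CO2
    return ratio, loss_pct
```

A loss on the order of 0.3% relative, as observed for some cylinders, would appear here as a ratio near 0.997.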
Determination of patulin in apple juice by liquid chromatography: collaborative study.
Brause, A R; Trucksess, M W; Thomas, F S; Page, S W
1996-01-01
An AOAC International-International Union of Pure and Applied Chemistry-International Fruit Juice Union (AOAC-IUPAC-IFJU) collaborative study was conducted to evaluate a liquid chromatographic (LC) procedure for determination of patulin in apple juice. Patulin is a mold metabolite found naturally in rotting apples. Patulin is extracted with ethyl acetate, treated with sodium carbonate solution, and determined by reversed-phase LC with UV detection at 254 or 276 nm. Water, water-tetrahydrofuran, or water-acetonitrile was used as mobile phase. Levels determined in spiked test samples were 20, 50, 100, and 200 micrograms/L. A test sample naturally contaminated at 31 micrograms/L was also included. Twenty-two collaborators in 10 countries analyzed 12 test samples of apple juice. Recoveries averaged 96%, with a range of 91-108%. Repeatability relative standard deviations (RSDr) ranged from 10.9 to 53.8%. The reproducibility relative standard deviation (RSDR) ranged from 15.1 to 68.8%. The LC method for determination of patulin in apple juice has been adopted first action by AOAC INTERNATIONAL.
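Repeatability (RSDr) and reproducibility (RSDR) statistics of the kind reported above are conventionally derived from a one-way ANOVA across laboratories; the sketch below assumes a balanced design (equal replicates per laboratory) and may differ in detail from the study's exact computation:

```python
import numpy as np

def rsd_r_and_R(results):
    """Repeatability (RSDr) and reproducibility (RSDR) percentages from a
    balanced collaborative study. `results` has shape (labs, replicates).
    Within-lab mean square gives the repeatability variance; the
    between-lab component is added to form the reproducibility variance."""
    x = np.asarray(results, dtype=float)
    L, n = x.shape
    grand = x.mean()
    ms_within = x.var(axis=1, ddof=1).mean()
    ms_between = n * ((x.mean(axis=1) - grand) ** 2).sum() / (L - 1)
    s_r2 = ms_within
    s_L2 = max((ms_between - ms_within) / n, 0.0)  # clamp negative estimates
    s_R2 = s_r2 + s_L2
    return 100 * np.sqrt(s_r2) / grand, 100 * np.sqrt(s_R2) / grand
```

When all laboratories agree on the mean, the between-lab component vanishes and RSDR collapses to RSDr, as the code's clamp makes explicit.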
NASA Astrophysics Data System (ADS)
Bradbury-Bailey, Mary
With the implementation of No Child Left Behind came a wave of educational reform intended for those working with student populations whose academic performance seemed to indicate an alienation from the educational process. Central to these reforms was the implementation of standards-based instruction and its accompanying standardized assessments; however, in one area reform seemed nonexistent: the teacher's gradebook (Erickson, 2010; Marzano, 2006; Scriffiny, 2008). Given the link between the grading process and achievement motivation, Ames (1992) suggested the use of practices that promote mastery goal orientation. The purpose of this study was to examine the impact of a standards-based grading system, as a factor contributing to mastery goal orientation, on the academic performance of urban African American students. To determine the degree of impact, this study first compared the course content averages and End-of-Course-Test (EOCT) scores for science classes using a traditional grading system to those using a standards-based grading system by employing an Analysis of Covariance (ANCOVA). While there was an increase in all grading areas, two showed a significant difference: the Physical Science course content average (p = 0.024) and the Biology EOCT scores (p = 0.0876). These gains suggest that standards-based grading can have a positive impact on the academic performance of African American students. Secondly, this study examined the correlation between the course content averages and the EOCT scores for both the traditional and standards-based grading systems; for both Physical Science and Biology, there was a stronger correlation between these two scores for the standards-based grading system.
Outlook for Children with Intellectual Disabilities
... intellectually disabled (formerly called mentally retarded). Their general intelligence is significantly below average, and they have difficulty ... As measured by standardized tests, the average IQ (intelligence quotient) is 100; normal ranges from 90 to ...
40 CFR 421.265 - Pretreatment standards for existing sources.
Code of Federal Regulations, 2010 CFR
2010-07-01
... day Maximum for monthly average mg/troy ounce of precious metals, including silver, incinerated or... Pollutant or pollutant property Maximum for any 1 day Maximum for monthly average mg/troy ounce of precious... Maximum for any 1 day Maximum for monthly average mg/troy ounce of gold produced by cyanide stripping...
Investigating DRG cost weights for hospitals in middle income countries.
Ghaffari, Shahram; Doran, Christopher; Wilson, Andrew; Aisbett, Chris; Jackson, Terri
2009-01-01
Identifying the cost of hospital outputs, particularly acute inpatients measured by Diagnosis Related Groups (DRGs), is an important component of casemix implementation. Measuring the relative costliness of specific DRGs is useful for a wide range of policy and planning applications. Estimating the relative use of resources per DRG can be done through different costing approaches depending on the availability of information, time, and budget. This study aims to guide costing efforts in Iran and other countries in the region that are pursuing casemix funding, by identifying the main issues facing cost-finding approaches and introducing costing models compatible with their hospitals' accounting and management structures. The results show that inadequate financial and utilisation information at the patient level, poorly computerized 'feeder systems', and low-quality data make it impossible to estimate reliable DRG costs through clinical costing. A cost modelling approach estimates the average cost of 2.723 million Rials (Iranian currency) per DRG. Using standard linear regression, a coefficient of 0.14 (CI = 0.12-0.16) suggests that the average cost weight increases by 14% for every one-day increase in average length of stay (LOS). We concluded that calculation of DRG cost weights (CWs) using Australian service weights provides a sensible starting place for DRG-based hospital management; but restructuring hospital accounting systems, designing computerized feeder systems, using appropriate software, and developing national service weights that reflect local practice patterns will enhance the accuracy of DRG CWs.
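The LOS coefficient above is an ordinary least-squares slope of cost weight on average length of stay; a minimal sketch with illustrative data (not the study's):

```python
import numpy as np

def cost_weight_slope(alos, cost_weight):
    """OLS fit of DRG cost weight on average length of stay (ALOS).
    The slope is the increase in average cost weight per extra bed-day."""
    slope, intercept = np.polyfit(alos, cost_weight, 1)
    return slope, intercept

# Hypothetical DRGs where each extra bed-day adds 0.14 to the cost weight
slope, intercept = cost_weight_slope([1, 2, 3, 4], [1.14, 1.28, 1.42, 1.56])
```

On these constructed points the fit recovers a slope of 0.14, matching the form of the reported coefficient.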
Remedios, Cheryl; Willenberg, Lisa; Zordan, Rachel; Murphy, Andrea; Hessel, Gail; Philip, Jennifer
2015-03-01
Respite services are recommended as an important support for caregivers of children with life-threatening conditions. However, the benefits of respite have not been convincingly demonstrated through quantitative research. To determine the impact of out-of-home respite care on levels of fatigue, psychological adjustment, quality of life, and relationship satisfaction among caregivers of children with life-threatening conditions, a mixed-methods, pre-test and post-test study was conducted with a consecutive sample of 58 parental caregivers whose children were admitted to a children's hospice for out-of-home respite over an average of 4 days. Caregivers had below-standard levels of quality of life compared to normative populations. Paired t-tests demonstrated that caregivers' average psychological adjustment scores significantly improved from pre-respite (mean = 13.9, standard error = 0.71) to post-respite (mean = 10.7, standard error = 1.0; p < 0.001, 95% confidence interval: 1.25-5.11). Furthermore, caregivers' average fatigue scores significantly improved from pre-respite (mean = 14.3, standard error = 0.85) to post-respite (mean = 10.9, standard error = 1.01; p < 0.001, 95% confidence interval: 1.69-7.94), and caregivers' average mental health quality of life scores significantly improved from pre-respite (mean = 44.2, standard error = 1.8) to post-respite (mean = 49.1, standard error = 1.6; p < 0.01, 95% confidence interval: -9.56 to 0.36). Qualitative data showed caregivers sought respite for relief from intensive care provision and believed this was essential to their well-being. Findings indicate the effectiveness of out-of-home respite care in improving the fatigue and psychological adjustment of caregivers of children with life-threatening conditions. Study outcomes inform service provision and future research efforts in paediatric palliative care. © The Author(s) 2015.
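The paired t statistic used above tests the mean of within-caregiver differences against zero; a minimal sketch with illustrative values (not the study's data):

```python
import math
import statistics

def paired_t(pre, post):
    """Paired t statistic for pre- vs post-intervention scores:
    mean of the paired differences divided by its standard error."""
    diffs = [a - b for a, b in zip(pre, post)]
    n = len(diffs)
    se = statistics.stdev(diffs) / math.sqrt(n)
    return statistics.mean(diffs) / se

# Four hypothetical caregivers whose scores drop (improve) after respite
t = paired_t([14, 15, 13, 14], [11, 10, 12, 11])
```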
Determining the Equation of State (EoS) Parameters for Ballistic Gelatin
2015-09-01
Fragmentary excerpt: the specific heat measured at room temperature reported in (Winter 1975) is approximately 1.13 cal/g/°C (= 4.73 J/g/K); Table 3 of the report lists specific heat capacity Cp [J/(g·K)] by temperature (°C), with average heat capacity and standard deviation (see also Piatt 2010; Fig. 4). Cited work includes Appleby-Thomas GJ, Hazell PJ, and a study of high-density amorphous ice and its implications for pressure-induced amorphization (J Chem Physics. 2005;122:124710).
Roberts, Steven; Martin, Michael A
2010-01-01
Concerns have been raised about findings of associations between particulate matter (PM) air pollution and mortality that have been based on a single "best" model arising from a model selection procedure, because such a strategy may ignore the model uncertainty inherently involved in searching through a set of candidate models to find the best model. Model averaging has been proposed as a method of allowing for model uncertainty in this context. To propose an extension (double BOOT) to a previously described bootstrap model-averaging procedure (BOOT) for use in time series studies of the association between PM and mortality, we compared double BOOT and BOOT with Bayesian model averaging (BMA) and a standard method of model selection [standard Akaike's information criterion (AIC)]. Actual time series data from the United States were used to conduct a simulation study to compare and contrast the performance of double BOOT, BOOT, BMA, and standard AIC. Double BOOT produced estimates of the effect of PM on mortality with smaller root mean squared error than those produced by BOOT, BMA, and standard AIC. This performance boost resulted from estimates produced by double BOOT having smaller variance than those produced by BOOT and BMA. Double BOOT is a viable alternative to BOOT and BMA for producing estimates of the mortality effect of PM.
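The basic BOOT idea (select the best-AIC model on each bootstrap resample, then average the recorded estimates so model-selection noise is absorbed into the estimate) can be sketched as follows; polynomial candidates stand in for the paper's time-series models, and the AIC formula assumes Gaussian OLS errors:

```python
import numpy as np

def boot_model_average(x, y, degrees=(1, 2, 3), n_boot=200, seed=0):
    """Sketch of bootstrap model averaging (BOOT). On each resample, every
    candidate model (here: polynomial degree) is fit, the minimum-AIC model
    is selected, and its linear-term coefficient (the 'exposure effect' in
    this toy setup) is recorded; the final estimate averages the resamples."""
    rng = np.random.default_rng(seed)
    n = len(x)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        xb, yb = x[idx], y[idx]
        best_aic, best_lin = np.inf, None
        for d in degrees:
            coefs = np.polyfit(xb, yb, d)
            rss = float(np.sum((np.polyval(coefs, xb) - yb) ** 2))
            aic = n * np.log(rss / n) + 2 * (d + 1)  # Gaussian-OLS AIC
            if aic < best_aic:
                # coefs are highest-degree first, so coefs[-2] is the x term
                best_aic, best_lin = aic, coefs[-2]
        estimates.append(best_lin)
    return float(np.mean(estimates))
```

On synthetic data with a known linear effect, the averaged estimate recovers that effect even though the selected model varies across resamples.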
Assessing the Genetics Content in the Next Generation Science Standards
Lontok, Katherine S.; Zhang, Hubert; Dougherty, Michael J.
2015-01-01
Science standards have a long history in the United States and currently form the backbone of efforts to improve primary and secondary education in science, technology, engineering, and math (STEM). Although there has been much political controversy over the influence of standards on teacher autonomy and student performance, little light has been shed on how well standards cover science content. We assessed the coverage of genetics content in the Next Generation Science Standards (NGSS) using a consensus list of American Society of Human Genetics (ASHG) core concepts. We also compared the NGSS against state science standards. Our goals were to assess the potential of the new standards to support genetic literacy and to determine if they improve the coverage of genetics concepts relative to state standards. We found that expert reviewers cannot identify ASHG core concepts within the new standards with high reliability, suggesting that the scope of content addressed by the standards may be inconsistently interpreted. Given results that indicate that the disciplinary core ideas (DCIs) included in the NGSS documents produced by Achieve, Inc. clarify the content covered by the standards statements themselves, we recommend that the NGSS standards statements always be viewed alongside their supporting disciplinary core ideas. In addition, gaps exist in the coverage of essential genetics concepts, most worryingly concepts dealing with patterns of inheritance, both Mendelian and complex. Finally, state standards vary widely in their coverage of genetics concepts when compared with the NGSS. On average, however, the NGSS support genetic literacy better than extant state standards. PMID:26222583
Wilson, S.A.; Ridley, W.I.; Koenig, A.E.
2002-01-01
The requirements of standard materials for LA-ICP-MS analysis have been difficult to meet for the determination of trace elements in sulfides. We describe a method for the production of synthetic sulfides by precipitation from solution. The method is illustrated by the production of approximately 200 g of a material, PS-1, with a suite of chalcophilic trace elements in an Fe-Zn-Cu-S matrix. Preliminary composition data, together with an evaluation of the homogeneity for individual elements, suggest that this type of material meets the requirements for a sulfide calibration standard that allows for quantitative analysis. Contamination of the standard with Na suggests that H2S gas may prove a better sulfur source for future experiments. We recommend that calibration data be collected in whatever mode is closest to that employed for the analysis of the unknown material, because of variable fractionation effects as a function of analytical mode. For instance, if individual spot analyses are attempted on an unknown sample, then a raster of several individual spot analyses, not a continuous scan, should be collected and averaged for the standard. Hg and Au are exceptions to the above, and their calibration data should always be collected in a scanning mode. Au is more heterogeneously distributed than the other trace metals, and large-area scans are required to provide an average value for calibration purposes. We emphasize that the values given in Table 1 are preliminary values. Further chemical characterization of this standard, through a round-robin analysis program, will allow the USGS to provide both certified and recommended values for individual elements. The USGS has developed PS-1 as a potential new LA-ICP-MS standard for use by the analytical community, and requests for this material should be addressed to S. Wilson.
However, it is stressed that an important aspect of the method described here is the flexibility for individual investigators to produce sulfides with a wide range of trace metals in variable matrices. For example, PS-1 is not well suited to the analysis of galena, and it would be relatively straightforward for other standards to be developed with Pb present in the matrix as a major constituent. These standards can be made easily and cheaply in a standard wet chemistry laboratory using equipment and chemicals that are readily available.
The difference engine: a model of diversity in speeded cognition.
Myerson, Joel; Hale, Sandra; Zheng, Yingye; Jenkins, Lisa; Widaman, Keith F
2003-06-01
A theory of diversity in speeded cognition, the difference engine, is proposed, in which information processing is represented as a series of generic computational steps. Some individuals tend to perform all of these computations relatively quickly and other individuals tend to perform them all relatively slowly, reflecting the existence of a general cognitive speed factor, but the time required for response selection and execution is assumed to be independent of cognitive speed. The difference engine correctly predicts the positively accelerated form of the relation between diversity of performance, as measured by the standard deviation for the group, and task difficulty, as indexed by the mean response time (RT) for the group. In addition, the difference engine correctly predicts approximately linear relations between the RTs of any individual and average performance for the group, with the regression lines for fast individuals having slopes less than 1.0 (and positive intercepts) and the regression lines for slow individuals having slopes greater than 1.0 (and negative intercepts). Similar predictions are made for comparisons of slow, average, and fast subgroups, regardless of whether those subgroups are formed on the basis of differences in ability, age, or health status. These predictions are consistent with evidence from studies of healthy young and older adults as well as from studies of depressed and age-matched control groups.
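The predicted individual-versus-group relation above is a simple linear regression of one person's response times on the group means across tasks; a sketch with hypothetical RTs illustrating the slow-individual case (slope > 1, negative intercept):

```python
import numpy as np

def individual_vs_group_fit(individual_rts, group_mean_rts):
    """OLS fit of an individual's RTs on group-mean RTs across tasks.
    The difference engine predicts slope > 1 (negative intercept) for
    slow individuals and slope < 1 (positive intercept) for fast ones."""
    slope, intercept = np.polyfit(group_mean_rts, individual_rts, 1)
    return slope, intercept

# Hypothetical slow individual: 20% slower per generic step, constant
# response-selection/execution time reflected in the negative intercept.
group = np.array([300.0, 500.0, 700.0])
slope, intercept = individual_vs_group_fit(1.2 * group - 50.0, group)
```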
Carbon monoxide exposure from aircraft fueling vehicles.
McCammon, C S; Halperin, W F; Lemen, R A
1981-01-01
Investigators from the National Institute for Occupational Safety and Health observed deficiencies in maintenance of fueling trucks at an international airport. The exhaust system is vented under the front bumper, a standard design on fueling trucks intended to minimize the proximity of the exhaust system to the jet fuel in the vehicles. Carbon monoxide levels were measured in the cabs of 17 fueling trucks with windows closed, heaters on, and in different positions relative to the wind. One truck had an average CO level of 300 ppm, two exceeded 100 ppm, and five others exceeded 50 ppm, while levels in the remaining nine averaged less than or equal to 50 ppm. Levels of CO depended on the mechanical condition of the vehicle and the vehicle's orientation to the wind. Stringent maintenance is required as the exhaust design is not fail-safe.
The Family Health Project: psychosocial adjustment of children whose mothers are HIV infected.
Forehand, R; Steele, R; Armistead, L; Morse, E; Simon, P; Clark, L
1998-06-01
The psychosocial adjustment of 87 inner-city African American children 6-11 years old whose mothers were HIV infected was compared with that of 149 children from a similar sociodemographic background whose mothers did not report being HIV infected. Children were not identified as being HIV infected. Mother reports, child reports, and standardized reading achievement scores were used to assess 4 domains of adjustment: externalizing problems, internalizing problems, cognitive competence, and prosocial competence. The results indicated that, on average, children from both groups had elevated levels of behavior problem scores and low reading achievement scores when compared with national averages. Relative to children whose mothers were not infected, those whose mothers were HIV infected were reported to have more difficulties in all domains of psychosocial adjustment. Potential family processes that may explain the findings are discussed.
NASA Astrophysics Data System (ADS)
Juhari, Nurjuliana; Menon, P. Susthitha; Ehsan, Abang Annuar; Shaari, Sahbudin
2015-01-01
An Arrayed Waveguide Grating (AWG) functioning as a demultiplexer is designed on an SOI platform with a rib waveguide structure for use in coarse wavelength division multiplexing-passive optical network (CWDM-PON) systems. Two design approaches, conventional and tapered AWG configurations, were developed with a channel spacing of 20 nm that covers the standard CWDM transmission spectrum from 1311 nm to 1611 nm. The tapered configuration offered the lowest insertion loss, 0.77 dB, while adjacent-channel crosstalk showed no significant difference between the two designs. With an average channel spacing of 20.4 nm, the nominal central wavelengths of this design are close to the standard CWDM wavelength grid over a 484 nm free spectral range (FSR).
Compressive auto-indexing in femtosecond nanocrystallography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maia, Filipe; Yang, Chao; Marchesini, Stefano
2010-09-20
Ultrafast nanocrystallography has the potential to revolutionize biology by enabling structural elucidation of proteins for which it is possible to grow crystals with 10 or fewer unit cells. The success of nanocrystallography depends on robust orientation-determination procedures that allow us to average diffraction data from multiple nanocrystals to produce a 3D diffraction data volume with a high signal-to-noise ratio. Such a 3D diffraction volume can then be phased using standard crystallographic techniques. "Indexing" algorithms used in crystallography enable orientation determination of diffraction data from a single crystal when a relatively large number of reflections are recorded. Here we show that it is possible to obtain the exact lattice geometry from a smaller number of measurements than standard approaches using a basis pursuit solver.
Aging and the discrimination of object weight.
Norman, J Farley; Norman, Hideko F; Swindle, Jessica M; Jennings, L RaShae; Bartholomew, Ashley N
2009-01-01
A single experiment was carried out to evaluate the ability of younger and older observers to discriminate object weights. A 2-alternative forced-choice variant of the method of constant stimuli was used to obtain difference thresholds for lifted weight for twelve younger (mean age = 21.5 years) and twelve older (mean age = 71.3 years) adults. The standard weight was 100 g, whereas the test weights ranged from 85 to 115 g. The difference thresholds of the older observers were 57.6% higher than those of the younger observers: the average difference thresholds were 10.4% and 6.6% of the standard for the older and younger observers, respectively. The current findings of an age-related deterioration in the ability to discriminate lifted weight extend and disambiguate the results of earlier research.
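A difference threshold from 2AFC constant-stimuli data is often read off the point where "test heavier" responses cross 75%; the linear-interpolation sketch below is one common convention and may differ from the fitting procedure the study actually used (response proportions are illustrative):

```python
import numpy as np

def difference_threshold_pct(test_weights, p_heavier, standard=100.0):
    """Weber fraction (% of the standard) from 2AFC constant-stimuli data:
    the test weight at which the proportion of 'test heavier' responses
    crosses 0.75, found by linear interpolation. Assumes p_heavier is
    monotonically increasing with test weight."""
    w75 = np.interp(0.75, p_heavier, test_weights)
    return (w75 - standard) / standard * 100.0

# Hypothetical observer: thresholds in the study averaged 6.6% (younger)
# and 10.4% (older) of the 100 g standard.
thr = difference_threshold_pct([85, 95, 105, 115], [0.1, 0.3, 0.7, 0.9])
```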
Jones, Bruce H; Hauret, Keith G; Dye, Shamola K; Hauschild, Veronique D; Rossi, Stephen P; Richardson, Melissa D; Friedl, Karl E
2017-11-01
To determine the combined effects of physical fitness and body composition on risk of training-related musculoskeletal injuries among Army trainees. Retrospective cohort study. Rosters of soldiers entering Army basic combat training (BCT) from 2010 to 2012 were linked with data from multiple sources for age, sex, physical fitness (heights, weights (mass), body mass index (BMI), 2-mile run times, push-ups), and medical injury diagnoses. Analyses included descriptive means and standard deviations, comparative t-tests, risks of injury, and relative risks (RR) with 95% confidence intervals (CI). Fitness and BMI were divided into quintiles (groups of 20%) and stratified for chi-square (χ²) comparisons and to determine trends. Data were obtained for 143,398 men and 41,727 women. As run times became slower, injury risks increased steadily (men = 9.8-24.3%, women = 26.5-56.0%; χ² trends, p < 0.00001). For both genders, the relationship of BMI to injury risk was bimodal, with the lowest risk in the average BMI group (middle quintile). Injury risks were highest in the slowest groups with the lowest BMIs (male trainees = 26.5%; female trainees = 63.1%). Compared to the lowest-risk group (average BMI with fastest run times), RRs were significant (male trainees = 8.5%, RR 3.1, CI: 2.8-3.4; female trainees = 24.6%, RR 2.6, CI: 2.3-2.8). Trainees with the lowest BMIs exhibited the highest injury risks for both genders and across all fitness levels. While the most aerobically fit Army trainees experience lower risk of training-related injury, at any given aerobic fitness level those with the lowest BMIs are at highest risk. This has implications for recruitment and retention fitness standards. Copyright © 2017. Published by Elsevier Ltd.
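Relative risks with 95% CIs of the kind reported above are conventionally computed on the log scale; a sketch using the standard normal approximation, with illustrative counts (not the study's):

```python
import math

def relative_risk(a_events, a_total, b_events, b_total):
    """Relative risk of group A vs reference group B, with a 95% CI from
    the normal approximation on the log-RR scale."""
    r1, r0 = a_events / a_total, b_events / b_total
    rr = r1 / r0
    # Standard error of log(RR) for two independent binomial risks
    se = math.sqrt(1 / a_events - 1 / a_total + 1 / b_events - 1 / b_total)
    lo = rr * math.exp(-1.96 * se)
    hi = rr * math.exp(1.96 * se)
    return rr, lo, hi

# Hypothetical cohort: 50/100 injured vs 25/100 in the reference group
rr, lo, hi = relative_risk(50, 100, 25, 100)
```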
Liu, Jiakai; Tan, Chin Hon; Badrick, Tony; Loh, Tze Ping
2018-02-01
An increase in analytical imprecision (expressed as CVa) can introduce additional variability (i.e. noise) into patient results, which poses a challenge to the optimal management of patients. Relatively little work has been done to address the need for continuous monitoring of analytical imprecision. Through numerical simulations, we describe the use of the moving standard deviation (movSD) and a recently described moving sum of outlier (movSO) patient results as means for detecting increased analytical imprecision, and compare their performance against internal quality control (QC) and the average of normal (AoN) approaches. The power to detect an increase in CVa is suboptimal under routine internal QC procedures. The AoN technique almost always had the highest average number of patient results affected before error detection (ANPed), indicating that it had generally the worst capability for detecting an increased CVa. On the other hand, the movSD and movSO approaches were able to detect an increased CVa at significantly lower ANPed, particularly for measurands that displayed a relatively small ratio of biological variation to CVa. The movSD and movSO approaches are effective in detecting an increase in CVa for high-risk measurands with small biological variation. Their performance is relatively poor when the biological variation is large; however, the clinical risk of an increase in analytical imprecision is attenuated for these measurands, as an increased analytical imprecision will only add marginally to the total variation and is less likely to impact clinical care. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
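A minimal movSD sketch is below; the window size and control limit are assumptions to be tuned per measurand, and the paper's simulations are more elaborate (this only illustrates the core idea that an increased CVa inflates the spread of a sliding window of patient results):

```python
from collections import deque
import statistics

def moving_sd_monitor(results, window=20, limit=0.5):
    """Moving standard deviation (movSD) over a sliding window of patient
    results. Returns the indices at which movSD exceeds `limit`; a run of
    flags suggests an increase in analytical imprecision (CVa)."""
    buf = deque(maxlen=window)
    flags = []
    for i, x in enumerate(results):
        buf.append(x)
        if len(buf) == window and statistics.stdev(buf) > limit:
            flags.append(i)
    return flags

# Stable results (spread ~0.1) followed by a shift to spread ~1.0 at
# index 60: flags should begin shortly after the imprecision increases.
series = [9.9, 10.1] * 30 + [9.0, 11.0] * 10
flags = moving_sd_monitor(series)
```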
Quantifying cause-related mortality by weighting multiple causes of death
Moreno-Betancur, Margarita; Lamarche-Vadel, Agathe; Rey, Grégoire
2016-01-01
Abstract Objective To investigate a new approach to calculating cause-related standardized mortality rates that involves assigning weights to each cause of death reported on death certificates. Methods We derived cause-related standardized mortality rates from death certificate data for France in 2010 using: (i) the classic method, which considered only the underlying cause of death; and (ii) three novel multiple-cause-of-death weighting methods, which assigned weights to multiple causes of death mentioned on death certificates: the first two multiple-cause-of-death methods assigned non-zero weights to all causes mentioned and the third assigned non-zero weights to only the underlying cause and other contributing causes that were not part of the main morbid process. As the sum of the weights for each death certificate was 1, each death had an equal influence on mortality estimates and the total number of deaths was unchanged. Mortality rates derived using the different methods were compared. Findings On average, 3.4 causes per death were listed on each certificate. The standardized mortality rate calculated using the third multiple-cause-of-death weighting method was more than 20% higher than that calculated using the classic method for five disease categories: skin diseases, mental disorders, endocrine and nutritional diseases, blood diseases and genitourinary diseases. Moreover, this method highlighted the mortality burden associated with certain diseases in specific age groups. Conclusion A multiple-cause-of-death weighting approach to calculating cause-related standardized mortality rates from death certificate data identified conditions that contributed more to mortality than indicated by the classic method. This new approach holds promise for identifying underrecognized contributors to mortality. PMID:27994280
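The key constraint described above (the weights on each certificate sum to 1, so every death has equal influence and the total death count is unchanged) can be illustrated with the simplest equal-weight scheme. The paper's actual weighting methods differ, so treat this purely as a sketch of the constraint; the cause codes are hypothetical:

```python
from collections import Counter

def weighted_cause_counts(certificates):
    """Equal-weight illustration: each death contributes a total weight
    of 1, split evenly across all causes mentioned on its certificate."""
    counts = Counter()
    for causes in certificates:
        w = 1.0 / len(causes)  # weights on one certificate sum to 1
        for c in causes:
            counts[c] += w
    return counts

# hypothetical certificates, each listing one or more ICD-style codes
certs = [["I21", "E11"], ["C50"], ["I21", "N18", "J44"]]
counts = weighted_cause_counts(certs)
```

Because the per-certificate weights sum to 1, summing the weighted counts over all causes recovers exactly the number of deaths, which is what keeps the multiple-cause rates comparable to underlying-cause rates.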
Environmental monitoring at Mound: 1986 report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carfagno, D.G.; Farmer, B.M.
1987-05-11
The local environment around Mound was monitored for tritium and plutonium-238. The results are reported for 1986. Environmental media analyzed included air, water, vegetation, foodstuffs, and sediment. The average concentrations of plutonium-238 and tritium were within the DOE interim air and water Derived Concentration Guides (DCG) for these radionuclides. The average incremental concentrations of plutonium-238 and tritium oxide in air measured at all offsite locations during 1986 were 0.03% and 0.01%, respectively, of the DOE DCGs for uncontrolled areas. The average incremental concentration of plutonium-238 measured at all locations in the Great Miami River during 1986 was 0.0005% of the DOE DCG. The average incremental concentration of tritium measured at all locations in the Great Miami River during 1986 was 0.005% of the DOE DCG. The average incremental concentrations of plutonium-238 found during 1986 in surface and area drinking water were less than 0.00006% of the DOE DCG. The average incremental concentration of tritium in surface water was less than 0.005% of the DOE DCG. All tritium-in-drinking-water data are compared to the US EPA Drinking Water Standard. The average concentrations in local private and municipal drinking water systems were less than 25% and 1.5%, respectively. Although no DOE DCG is available for foodstuffs, the average concentrations are a small fraction of the water DCG (0.04%). The concentrations of sediment samples obtained at offsite surface water sampling locations were extremely low and therefore represent no adverse impact to the environment. The dose equivalent estimates for the average air, water, and foodstuff concentrations indicate that the levels are within 1% of the DOE standard of 100 mrem. None of these exceptions, however, had an adverse impact on the water quality of the Great Miami River or caused the river to exceed Ohio Stream Standards. 20 refs., 5 figs., 31 tabs.
NASA Technical Reports Server (NTRS)
Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Barnes, Robert A.; Eplee, Robert E., Jr.; Biggar, Stuart F.; Thome, Kurtis J.; Zalewski, Edward F.; Slater, Philip N.; Holmes, Alan W.
1999-01-01
The solar radiation-based calibration (SRBC) of the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) was performed on 1 November 1993. Measurements were made outdoors in the courtyard of the instrument manufacturer. SeaWiFS viewed the solar irradiance reflected from the sensor's diffuser in the same manner as viewed on orbit. The calibration included measurements using a solar radiometer designed to determine the transmittances of principal atmospheric constituents. The primary uncertainties in the outdoor measurements are the transmission of the atmosphere and the reflectance of the diffuser. Their combined uncertainty is about 5 or 6%. The SRBC also requires knowledge of the extraterrestrial solar spectrum. Four solar models are used. When averaged over the responses of the SeaWiFS bands, the irradiance models agree at the 3.6% level, with the greatest difference for SeaWiFS band 8. The calibration coefficients from the SRBC are lower than those from the laboratory calibration of the instrument in 1997. For a representative solar model, the ratios of the SRBC coefficients to laboratory values average 0.962 with a standard deviation of 0.012. The greatest relative difference is 0.946 for band 8. These values are within the estimated uncertainties of the calibration measurements. For the transfer-to-orbit experiment, the measurements in the manufacturer's courtyard are used to predict the digital counts from the instrument on its first day on orbit (August 1, 1997). This experiment requires an estimate of the relative change in the diffuser response for the period between the launch of the instrument and its first solar measurements on orbit (September 9, 1997). In relative terms, the counts from the instrument on its first day on orbit averaged 1.3% higher than predicted, with a standard deviation of 1.2% and a greatest difference of 2.4% for band 7. The estimated uncertainty for the transfer-to-orbit experiment is about 3 or 4%.
NASA Astrophysics Data System (ADS)
Wu, Q.
2013-12-01
The MM5-SMOKE-CMAQ model system, developed by the United States Environmental Protection Agency (U.S. EPA) as the Models-3 system, has been used for daily air quality forecasting at the Beijing Municipal Environmental Monitoring Center (Beijing MEMC), as part of the Ensemble Air Quality Forecast System for Beijing (EMS-Beijing), since the Olympic Games year 2008. In this study, we collected the daily CMAQ forecast results for the whole of 2010 for model evaluation. The model performed well on most days but clearly underestimated some air pollution episodes. A typical episode from 11-20 January 2010 was chosen, during which the observed air pollution index (API) for particulate matter (PM10) reported by Beijing MEMC reached 180 while the predicted PM10-API was about 100. Taking into account all stations in Beijing, both urban and suburban, three numerical methods were used to improve the model: first, enhancing the inner 4 km grid domain, extending its coverage from Beijing alone to include its surrounding cities; second, updating the Beijing stationary area emission inventory from statistical county level to village-town level, providing more detailed spatial information for area emissions; and third, adding industrial point emissions in Beijing's surrounding cities (the latter two are improvements to the emissions). As a result, the peak of the PM10-API averaged over the nine national-standard stations, simulated by CMAQ as a daily hindcast, reached 160, much closer to the observation. The new results show better model performance: the correlation coefficient is 0.93 for the national-standard station average and 0.84 across all stations, and the relative error is 15.7% for the national-standard station average and 27% across all stations.
Figures: time series of PM10-API at the nine national-standard stations in urban Beijing, and a scatter diagram of all stations in Beijing (red: original forecast; blue: new result).
DOT National Transportation Integrated Search
2010-05-07
Final Rule to establish a National Program consisting of new standards for light-duty vehicles that will reduce greenhouse gas emissions and improve fuel economy. This joint : Final Rule is consistent with the National Fuel Efficiency Policy announce...
A Visual Model for the Variance and Standard Deviation
ERIC Educational Resources Information Center
Orris, J. B.
2011-01-01
This paper shows how the variance and standard deviation can be represented graphically by looking at each squared deviation as a graphical object--in particular, as a square. A series of displays show how the standard deviation is the size of the average square.
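The paper's geometric idea translates directly into code: the variance is the mean of the squared deviations (the average area of the deviation squares), and the standard deviation is the side length of that average square. A minimal sketch:

```python
def variance(xs):
    """Population variance: the mean of the squared deviations --
    geometrically, the average area of the deviation squares."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def std_dev(xs):
    """Side length of the 'average square'."""
    return variance(xs) ** 0.5
```

For the data [2, 4, 4, 4, 5, 5, 7, 9] the deviation squares average to area 4, so the standard deviation is the side of a 2-by-2 square.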
Evans, D. G.; Perkins, F. T.
1963-01-01
The Fifth International Standard Gas-Gangrene Antitoxin (Perfringens) (Clostridium welchii Type A Antitoxin) was prepared from serum from immunized horses. It was freeze-dried in ampoules each containing 1 ml. Seven laboratories collaborated in assaying its potency in terms of the Fourth International Standard by the intravenous inoculation of mice. The geometric mean value, taking the results of all the laboratories, was 270 International Units per ampoule and the maximum variation between laboratories was 15%. In vitro (lecithinase) tests were also done by three laboratories, giving an average of 261 International Units per ampoule. The dry weight contents of ampoules, determined in three laboratories, varied by less than 3%, with an average of 90.35 mg per ampoule. The standard was stable for 120 hours at 56°C. Each ampoule of the Fifth International Standard for Gas-Gangrene Antitoxin (Perfringens) contains 270 International Units, and one International Unit is contained in 0.3346 mg of the International Standard. PMID:14107745
Computation of Standard Errors
Dowd, Bryan E; Greene, William H; Norton, Edward C
2014-01-01
Objectives We discuss the problem of computing the standard errors of functions involving estimated parameters and provide the relevant computer code for three different computational approaches using two popular computer packages. Study Design We show how to compute the standard errors of several functions of interest: the predicted value of the dependent variable for a particular subject, and the effect of a change in an explanatory variable on the predicted value of the dependent variable for an individual subject and average effect for a sample of subjects. Empirical Application Using a publicly available dataset, we explain three different methods of computing standard errors: the delta method, Krinsky–Robb, and bootstrapping. We provide computer code for Stata 12 and LIMDEP 10/NLOGIT 5. Conclusions In most applications, choice of the computational method for standard errors of functions of estimated parameters is a matter of convenience. However, when computing standard errors of the sample average of functions that involve both estimated parameters and nonstochastic explanatory variables, it is important to consider the sources of variation in the function's values. PMID:24800304
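Of the three approaches, bootstrapping is the most generic and can be sketched without package-specific code; the replication count and seed below are arbitrary illustrative choices, and the statistic shown (the sample mean) stands in for any function of estimated parameters:

```python
import random

def bootstrap_se(data, statistic, n_boot=2000, seed=0):
    """Bootstrap standard error of an arbitrary statistic of the sample.

    Resamples the data with replacement, recomputes the statistic on
    each resample, and returns the standard deviation of the replicates.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible sketch
    n = len(data)
    reps = []
    for _ in range(n_boot):
        resample = [data[rng.randrange(n)] for _ in range(n)]
        reps.append(statistic(resample))
    m = sum(reps) / n_boot
    # sample standard deviation of the bootstrap replicates
    return (sum((r - m) ** 2 for r in reps) / (n_boot - 1)) ** 0.5

# SE of the sample mean; the analytic answer is roughly s / sqrt(n)
se = bootstrap_se(list(range(10)), lambda xs: sum(xs) / len(xs))
```

As the abstract notes, the choice among delta method, Krinsky-Robb, and bootstrap is often one of convenience; the bootstrap's cost is computation, not algebra.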
Is poker a game of skill or chance? A quasi-experimental study.
Meyer, Gerhard; von Meduna, Marc; Brosowski, Tim; Hayer, Tobias
2013-09-01
Due to intensive marketing and the rapid growth of online gambling, poker currently enjoys great popularity among large sections of the population. Although poker is legally a game of chance in most countries, some (particularly operators of private poker web sites) argue that it should be regarded as a game of skill or sport because the outcome of the game primarily depends on individual aptitude and skill. The available findings indicate that skill plays a meaningful role; however, serious methodological weaknesses and the absence of reliable information regarding the relative importance of chance and skill considerably limit the validity of extant research. Adopting a quasi-experimental approach, the present study examined the extent to which the influence of poker playing skill was more important than card distribution. Three average players and three experts sat down at a six-player table and played 60 computer-based hands of the poker variant "Texas Hold'em" for money. In each hand, one of the average players and one expert received (a) better-than-average cards (winner's box), (b) average cards (neutral box) and (c) worse-than-average cards (loser's box). The standardized manipulation of the card distribution controlled the factor of chance to determine differences in performance between the average and expert groups. Overall, 150 individuals participated in a "fixed-limit" game variant, and 150 individuals participated in a "no-limit" game variant. ANOVA results showed that experts did not outperform average players in terms of final cash balance. Rather, card distribution was the decisive factor for successful poker playing. However, expert players were better able to minimize losses when confronted with disadvantageous conditions (i.e., worse-than-average cards). No significant differences were observed between the game variants. 
Furthermore, supplementary analyses confirm differential game-related actions dependent on the card distribution, player status, and game variant. In conclusion, the study findings indicate that poker should be regarded as a game of chance, at least under certain basic conditions, and suggest new directions for further research.
40 CFR 467.24 - New source performance standards.
Code of Federal Regulations, 2013 CFR
2013-07-01
... GUIDELINES AND STANDARDS (CONTINUED) ALUMINUM FORMING POINT SOURCE CATEGORY Rolling With Emulsions... day Maximum for monthly average mg/off-kg (lb/million off-lbs) of aluminum rolled with emulsions...
40 CFR 467.24 - New source performance standards.
Code of Federal Regulations, 2014 CFR
2014-07-01
... GUIDELINES AND STANDARDS (CONTINUED) ALUMINUM FORMING POINT SOURCE CATEGORY Rolling With Emulsions... day Maximum for monthly average mg/off-kg (lb/million off-lbs) of aluminum rolled with emulsions...
40 CFR 467.24 - New source performance standards.
Code of Federal Regulations, 2012 CFR
2012-07-01
... GUIDELINES AND STANDARDS (CONTINUED) ALUMINUM FORMING POINT SOURCE CATEGORY Rolling With Emulsions... day Maximum for monthly average mg/off-kg (lb/million off-lbs) of aluminum rolled with emulsions...
NASA Astrophysics Data System (ADS)
Orwat, J.
2018-01-01
This paper presents calculations of the average values of terrain curvatures measured after the termination of successive exploitation stages in coal bed 338/2, located at medium depth. The curvatures were measured on neighbouring segments of measuring line No. 1, established perpendicular to the runways of four longwalls, No. 001, 002, 005 and 007. The average courses of the measured curvatures were derived from the average courses of the measured inclinations, which were in turn calculated from the average values of the measured subsidence. These were obtained by mean-square approximation using smoothing splines, with reference to the theoretical courses given by the formulas of S. Knothe and J. Bialek, using standard parameter values for the roof-rock subsidence coefficient a, the exploitation rim Aobr and the angle of the main influence range β. The standard deviations between the average and measured curvatures σC and the variability coefficients of the random scattering of curvatures MC were calculated and compared with values reported in the literature; on this basis, the suitability of smoothing splines for determining the average course of the observed curvatures of a mining area was assessed.
Decision analysis with cumulative prospect theory.
Bayoumi, A M; Redelmeier, D A
2000-01-01
Individuals sometimes express preferences that do not follow expected utility theory. Cumulative prospect theory adjusts for some phenomena by using decision weights rather than probabilities when analyzing a decision tree. The authors examined how probability transformations from cumulative prospect theory might alter a decision analysis of a prophylactic therapy in AIDS, eliciting utilities from patients with HIV infection (n = 75) and calculating expected outcomes using an established Markov model. They next focused on transformations of three sets of probabilities: 1) the probabilities used in calculating standard-gamble utility scores; 2) the probabilities of being in discrete Markov states; 3) the probabilities of transitioning between Markov states. The same prophylaxis strategy yielded the highest quality-adjusted survival under all transformations. For the average patient, prophylaxis appeared relatively less advantageous when standard-gamble utilities were transformed. Prophylaxis appeared relatively more advantageous when state probabilities were transformed and relatively less advantageous when transition probabilities were transformed. Transforming standard-gamble and transition probabilities simultaneously decreased the gain from prophylaxis by almost half. Sensitivity analysis indicated that even near-linear probability weighting transformations could substantially alter quality-adjusted survival estimates. The magnitude of benefit estimated in a decision-analytic model can change significantly after using cumulative prospect theory. Incorporating cumulative prospect theory into decision analysis can provide a form of sensitivity analysis and may help describe when people deviate from expected utility theory.
Variability in Wechsler Adult Intelligence Scale-IV subtest performance across age.
Wisdom, Nick M; Mignogna, Joseph; Collins, Robert L
2012-06-01
Normal Wechsler Adult Intelligence Scale (WAIS)-IV performance relative to average normative scores alone can be an oversimplification, as this fails to recognize the disparate subtest heterogeneity that occurs with increasing age. The purpose of the present study is to characterize the patterns of raw score change and associated variability on WAIS-IV subtests across age groupings. Raw WAIS-IV subtest means and standard deviations for each age group were tabulated from the WAIS-IV normative manual along with the coefficient of variation (CV), a measure of score dispersion calculated by dividing the standard deviation by the mean and multiplying by 100. The CV further indicates the magnitude of variability represented by each standard deviation. Raw mean scores predictably decreased across age groups. Increased variability was noted in Perceptual Reasoning and Processing Speed Index subtests, as Block Design, Matrix Reasoning, Picture Completion, Symbol Search, and Coding had CV percentage increases ranging from 56% to 98%. In contrast, Working Memory and Verbal Comprehension subtests were more homogeneous, with Digit Span, Comprehension, Information, and Similarities showing CV percentage increases ranging from 32% to 43%. Little change in the CV was noted on Cancellation, Arithmetic, Letter/Number Sequencing, Figure Weights, Visual Puzzles, and Vocabulary subtests (<14%). A thorough understanding of age-related subtest variability will help to identify test limitations as well as further our understanding of cognitive domains which remain relatively steady versus those which steadily decline.
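The CV computation the study tabulates is simply the standard deviation expressed as a percentage of the mean, and the reported subtest comparisons are relative changes in that ratio across age groups. A minimal sketch (the example values in the assertions are hypothetical, not from the WAIS-IV normative manual):

```python
def coefficient_of_variation(mean, sd):
    """CV = (standard deviation / mean) * 100."""
    return sd / mean * 100.0

def cv_percent_change(cv_younger, cv_older):
    """Relative change in dispersion between two age groups, in percent."""
    return (cv_older - cv_younger) / cv_younger * 100.0
```

Expressing dispersion relative to the mean is what allows subtests with very different raw-score scales to be compared on one footing.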
NASA Astrophysics Data System (ADS)
Plank, David M.; Sussman, Mark A.
2005-06-01
Altered intracellular Ca2+ dynamics are characteristically observed in cardiomyocytes from failing hearts. Studies of Ca2+ handling in myocytes predominantly use Fluo-3 AM, a visible light excitable Ca2+ chelating fluorescent dye, in conjunction with rapid line-scanning confocal microscopy. However, Fluo-3 AM does not allow for traditional ratiometric determination of intracellular Ca2+ concentration and has required the use of mathematical correction factors, with values obtained from separate procedures, to convert Fluo-3 AM fluorescence to appropriate Ca2+ concentrations. This study describes methodology to directly measure intracellular Ca2+ levels using inactivated, Fluo-3-AM-loaded cardiomyocytes equilibrated with Ca2+ concentration standards. Titration of Ca2+ concentration exhibits a linear relationship to increasing Fluo-3 AM fluorescence intensity. Images obtained from individual myocyte confocal scans were recorded, average pixel intensity values were calculated, and a plot was generated relating the average pixel intensity to known Ca2+ concentrations. These standard plots can be used to convert transient Ca2+ fluorescence obtained with experimental cells to Ca2+ concentrations by linear regression analysis. Standards are determined on the same microscope used for acquisition of unknown Ca2+ concentrations, simplifying data interpretation and assuring accuracy of conversion values. This procedure eliminates additional equipment, ratiometric imaging, and mathematical correction factors and should be useful to investigators requiring a straightforward method for measuring Ca2+ concentrations in live cells using Ca2+-chelating dyes exhibiting variable fluorescence intensity.
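The conversion step described above amounts to fitting an ordinary least-squares calibration line to the standards and then applying it to experimental intensities. A minimal sketch; the concentration/intensity pairs below are hypothetical, not the study's measurements:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x (the calibration line)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Hypothetical standards: known [Ca2+] (nM) vs. average pixel intensity
ca = [0, 100, 250, 500, 1000]
intensity = [10, 30, 60, 110, 210]

# Regress concentration on intensity so the fitted line converts an
# experimental intensity directly into an estimated [Ca2+]
a, b = fit_line(intensity, ca)
predicted = a + b * 60  # Ca2+ estimate for an experimental intensity of 60
```

Fitting the standards on the same microscope used for the experiments, as the study emphasizes, keeps instrument-specific gain inside the fitted slope and intercept.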
NASA Technical Reports Server (NTRS)
DeLannoy, Gabrielle J. M.; Reichle, Rolf H.; Vrugt, Jasper A.
2013-01-01
Uncertainties in L-band (1.4 GHz) radiative transfer modeling (RTM) affect the simulation of brightness temperatures (Tb) over land and the inversion of satellite-observed Tb into soil moisture retrievals. In particular, accurate estimates of the microwave soil roughness, vegetation opacity and scattering albedo for large-scale applications are difficult to obtain from field studies and often lack an uncertainty estimate. Here, a Markov Chain Monte Carlo (MCMC) simulation method is used to determine satellite-scale estimates of RTM parameters and their posterior uncertainty by minimizing the misfit between long-term averages and standard deviations of simulated and observed Tb at a range of incidence angles, at horizontal and vertical polarization, and for morning and evening overpasses. Tb simulations are generated with the Goddard Earth Observing System (GEOS-5) and confronted with Tb observations from the Soil Moisture Ocean Salinity (SMOS) mission. The MCMC algorithm suggests that the relative uncertainty of the RTM parameter estimates is typically less than 25% of the maximum a posteriori density (MAP) parameter value. Furthermore, the actual root-mean-square-differences in long-term Tb averages and standard deviations are found to be consistent with the respective estimated total simulation and observation error standard deviations of σm = 3.1 K and σs = 2.4 K. It is also shown that the MAP parameter values estimated through MCMC simulation are in close agreement with those obtained with Particle Swarm Optimization (PSO).
40 CFR 86.1864-10 - How to comply with the fleet average cold temperature NMHC standards.
Code of Federal Regulations, 2011 CFR
2011-07-01
...-Use Light-Duty Vehicles, Light-Duty Trucks, and Complete Otto-Cycle Heavy-Duty Vehicles § 86.1864-10... life requirements. Full useful life requirements for cold temperature NMHC standards are defined in § 86.1805-04(g). There is not an intermediate useful life standard for cold temperature NMHC standards...
40 CFR 86.1864-10 - How to comply with the fleet average cold temperature NMHC standards.
Code of Federal Regulations, 2013 CFR
2013-07-01
...-Use Light-Duty Vehicles, Light-Duty Trucks, and Complete Otto-Cycle Heavy-Duty Vehicles § 86.1864-10... life requirements. Full useful life requirements for cold temperature NMHC standards are defined in § 86.1805-04(g). There is not an intermediate useful life standard for cold temperature NMHC standards...
40 CFR 86.1864-10 - How to comply with the fleet average cold temperature NMHC standards.
Code of Federal Regulations, 2012 CFR
2012-07-01
...-Use Light-Duty Vehicles, Light-Duty Trucks, and Complete Otto-Cycle Heavy-Duty Vehicles § 86.1864-10... life requirements. Full useful life requirements for cold temperature NMHC standards are defined in § 86.1805-04(g). There is not an intermediate useful life standard for cold temperature NMHC standards...
The State "of" State U.S. History Standards 2011
ERIC Educational Resources Information Center
Stern, Sheldon M.; Stern, Jeremy A.
2011-01-01
This study is the Thomas B. Fordham Institute's first review of the quality of state U.S. history standards since 2003. Key findings include: (1) A majority of states' standards are mediocre-to-awful. The average grade across "all" states is barely a D. In twenty-eight jurisdictions--a majority of states--the history standards earn Ds or…
Alcohol-attributable cancer deaths under 80 years of age in New Zealand.
Connor, Jennie; Kydd, Robyn; Maclennan, Brett; Shield, Kevin; Rehm, Jürgen
2017-05-01
Cancer deaths made up 30% of all alcohol-attributable deaths in New Zealanders aged 15-79 years in 2007, more than all other chronic diseases combined. We aimed to estimate alcohol-attributable cancer mortality and years of life lost by cancer site and identify differences between Māori and non-Māori New Zealanders. We applied the World Health Organization's comparative risk assessment methodology at the level of Māori and non-Māori subpopulations. Proportions of specific alcohol-related cancers attributable to alcohol were calculated by combining alcohol consumption estimates from representative surveys with relative risks from recent meta-analyses. These proportions were applied to both 2007 and 2012 mortality data. Alcohol consumption was responsible for 4.2% of all cancer deaths under 80 years of age in 2007. An average of 10.4 years of life was lost per person; 12.7 years for Māori and 10.1 years for non-Māori. Half of the deaths were attributable to average consumption of <4 standard drinks per day. Breast cancer comprised 61% of alcohol-attributable cancer deaths in women, and more than one-third of breast cancer deaths were attributable to average consumption of <2 standard drinks per day. Mortality data from 2012 produced very similar findings. Alcohol is an important and modifiable cause of cancer. Risk of cancer increases with higher alcohol consumption, but there is no safe level of drinking. Reduction in population alcohol consumption would reduce cancer deaths. Additional strategies to reduce ethnic disparities in risk and outcome are needed in New Zealand. [Connor J, Kydd R, Maclennan B, Shield K, Rehm J. Alcohol-attributable cancer deaths under 80 years of age in New Zealand. Drug Alcohol Rev 2017;36:415-423]. © 2016 Australasian Professional Society on Alcohol and other Drugs.
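The comparative risk assessment step combines exposure prevalences with relative risks into a population attributable fraction. A minimal sketch using the standard Levin-type formula for categorical exposure levels; the prevalence and relative-risk values below are hypothetical, not the study's estimates:

```python
def attributable_fraction(prevalence_rr):
    """Population attributable fraction for categorical exposure levels:
    PAF = sum(p_i * (RR_i - 1)) / (sum(p_i * (RR_i - 1)) + 1),
    where p_i is the prevalence of exposure level i and RR_i its
    relative risk versus non-exposure."""
    excess = sum(p * (rr - 1.0) for p, rr in prevalence_rr)
    return excess / (excess + 1.0)

# hypothetical drinking categories: (prevalence, relative risk)
paf = attributable_fraction([(0.30, 1.2), (0.10, 1.8)])
```

Multiplying such a fraction by the observed cause-specific deaths gives the attributable deaths, which is how subpopulation estimates (e.g. by ethnicity) can be derived from subpopulation-specific consumption data.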
Assessing operating characteristics of CAD algorithms in the absence of a gold standard
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roy Choudhury, Kingshuk; Paik, David S.; Yi, Chin A.
2010-04-15
Purpose: The authors examine potential bias when using a reference reader panel as a "gold standard" for estimating operating characteristics of CAD algorithms for detecting lesions. As an alternative, the authors propose latent class analysis (LCA), which does not require an external gold standard to evaluate diagnostic accuracy. Methods: A binomial model for multiple reader detections using different diagnostic protocols was constructed, assuming conditional independence of readings given true lesion status. Operating characteristics of all protocols were estimated by maximum likelihood LCA. Reader panel and LCA based estimates were compared using data simulated from the binomial model for a range of operating characteristics. LCA was applied to 36 thin section thoracic computed tomography data sets from the Lung Image Database Consortium (LIDC): free-search markings of four radiologists were compared to markings from four different CAD assisted radiologists. For real data, bootstrap-based resampling methods, which accommodate dependence in reader detections, are proposed for tests of hypotheses of differences between detection protocols. Results: In simulation studies, reader panel based sensitivity estimates had an average relative bias (ARB) of -23% to -27%, significantly higher (p-value <0.0001) than LCA (ARB -2% to -6%). Specificity was well estimated by both reader panel (ARB -0.6% to -0.5%) and LCA (ARB 1.4%-0.5%). Among the 1145 lesion candidates considered by the LIDC, the LCA estimated sensitivity of reference readers (55%) was significantly lower (p-value 0.006) than that of CAD assisted readers (68%). Average false positives per patient for reference readers (0.95) was not significantly lower (p-value 0.28) than for CAD assisted readers (1.27). Conclusions: Whereas a gold standard based on a consensus of readers may substantially bias sensitivity estimates, LCA may be a significantly more accurate and consistent means for evaluating diagnostic accuracy.
NASA Technical Reports Server (NTRS)
Anspaugh, B. E.; Miyahira, T. F.; Weiss, R. S.
1979-01-01
Computed statistical averages and standard deviations with respect to the measured cells are presented for each intensity-temperature measurement condition. The averages and standard deviations of the cell characteristics are displayed in a two-dimensional array format, one dimension representing incoming light intensity and the other the cell temperature. Programs for calculating the temperature coefficients of the pertinent cell electrical parameters are presented, and postirradiation data are summarized.
40 CFR 421.266 - Pretreatment standards for new sources.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Subcategory Pollutant or pollutant property Maximum for any 1 day Maximum for monthly average mg/troy ounce of... day Maximum for monthly average mg/troy ounce of precious metals, in the granulated raw material... average mg/troy ounce of gold produced by cyanide stripping Copper 4.736 2.257 Cyanide (total) 0.740 0.296...
Code of Federal Regulations, 2012 CFR
2012-01-01
... not exceed 6 inches and the average flame time after removal of the flame source may not exceed 15... means. The average burn length may not exceed 8 inches, and the average flame time after removal of the... Standards Institute, 1430 Broadway, New York, NY 10018). If the film travels through ducts, the ducts must...
Code of Federal Regulations, 2011 CFR
2011-01-01
... not exceed 6 inches and the average flame time after removal of the flame source may not exceed 15... means. The average burn length may not exceed 8 inches, and the average flame time after removal of the... Standards Institute, 1430 Broadway, New York, NY 10018). If the film travels through ducts, the ducts must...
Code of Federal Regulations, 2010 CFR
2010-01-01
... not exceed 6 inches and the average flame time after removal of the flame source may not exceed 15... means. The average burn length may not exceed 8 inches, and the average flame time after removal of the... Standards Institute, 1430 Broadway, New York, NY 10018). If the film travels through ducts, the ducts must...
Code of Federal Regulations, 2013 CFR
2013-01-01
... not exceed 6 inches and the average flame time after removal of the flame source may not exceed 15... means. The average burn length may not exceed 8 inches, and the average flame time after removal of the... Standards Institute, 1430 Broadway, New York, NY 10018). If the film travels through ducts, the ducts must...
24 CFR Appendix I to Subpart B of... - Definition of Acoustical Quantities
Code of Federal Regulations, 2011 CFR
2011-04-01
... National Standard Specification for Type 1 Sound Level Meters S1.4-1971. Fast time-averaging and A...), somewhat as is the ear. With fast time averaging the sound level meter responds particularly to recent... (iii) The maximum sound level obtained with fast averaging time of a sound level meter exceeds the...
24 CFR Appendix I to Subpart B of... - Definition of Acoustical Quantities
Code of Federal Regulations, 2014 CFR
2014-04-01
... National Standard Specification for Type 1 Sound Level Meters S1.4-1971. Fast time-averaging and A...), somewhat as is the ear. With fast time averaging the sound level meter responds particularly to recent... (iii) The maximum sound level obtained with fast averaging time of a sound level meter exceeds the...
24 CFR Appendix I to Subpart B of... - Definition of Acoustical Quantities
Code of Federal Regulations, 2012 CFR
2012-04-01
... National Standard Specification for Type 1 Sound Level Meters S1.4-1971. Fast time-averaging and A...), somewhat as is the ear. With fast time averaging the sound level meter responds particularly to recent... (iii) The maximum sound level obtained with fast averaging time of a sound level meter exceeds the...
24 CFR Appendix I to Subpart B of... - Definition of Acoustical Quantities
Code of Federal Regulations, 2013 CFR
2013-04-01
... National Standard Specification for Type 1 Sound Level Meters S1.4-1971. Fast time-averaging and A...), somewhat as is the ear. With fast time averaging the sound level meter responds particularly to recent... (iii) The maximum sound level obtained with fast averaging time of a sound level meter exceeds the...
41 CFR 109-38.5103 - Motor vehicle utilization standards.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 41 Public Contracts and Property Management 3 2011-01-01 2011-01-01 false Motor vehicle... AVIATION, TRANSPORTATION, AND MOTOR VEHICLES 38-MOTOR EQUIPMENT MANAGEMENT 38.51-Utilization of Motor Equipment § 109-38.5103 Motor vehicle utilization standards. (a) The following average utilization standards...
40 CFR 421.294 - Standards of performance for new sources.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) EFFLUENT GUIDELINES AND STANDARDS NONFERROUS METALS MANUFACTURING POINT SOURCE CATEGORY Secondary Tin... achieve the following new source performance standards: (a) Tin smelter SO2 scrubber. NSPS for the Secondary Tin Subcategory Pollutant or pollutant property Maximum for any 1 day Maximum for monthly average...
40 CFR 421.294 - Standards of performance for new sources.
Code of Federal Regulations, 2013 CFR
2013-07-01
...) EFFLUENT GUIDELINES AND STANDARDS NONFERROUS METALS MANUFACTURING POINT SOURCE CATEGORY Secondary Tin... achieve the following new source performance standards: (a) Tin smelter SO2 scrubber. NSPS for the Secondary Tin Subcategory Pollutant or pollutant property Maximum for any 1 day Maximum for monthly average...
40 CFR 421.294 - Standards of performance for new sources.
Code of Federal Regulations, 2012 CFR
2012-07-01
...) EFFLUENT GUIDELINES AND STANDARDS NONFERROUS METALS MANUFACTURING POINT SOURCE CATEGORY Secondary Tin... achieve the following new source performance standards: (a) Tin smelter SO2 scrubber. NSPS for the Secondary Tin Subcategory Pollutant or pollutant property Maximum for any 1 day Maximum for monthly average...
40 CFR 421.294 - Standards of performance for new sources.
Code of Federal Regulations, 2011 CFR
2011-07-01
...) EFFLUENT GUIDELINES AND STANDARDS NONFERROUS METALS MANUFACTURING POINT SOURCE CATEGORY Secondary Tin... achieve the following new source performance standards: (a) Tin smelter SO2 scrubber. NSPS for the Secondary Tin Subcategory Pollutant or pollutant property Maximum for any 1 day Maximum for monthly average...
40 CFR 421.294 - Standards of performance for new sources.
Code of Federal Regulations, 2014 CFR
2014-07-01
...) EFFLUENT GUIDELINES AND STANDARDS NONFERROUS METALS MANUFACTURING POINT SOURCE CATEGORY Secondary Tin... achieve the following new source performance standards: (a) Tin smelter SO2 scrubber. NSPS for the Secondary Tin Subcategory Pollutant or pollutant property Maximum for any 1 day Maximum for monthly average...
Insights from analysis for harmful and potentially harmful constituents (HPHCs) in tobacco products.
Oldham, Michael J; DeSoi, Darren J; Rimmer, Lonnie T; Wagner, Karl A; Morton, Michael J
2014-10-01
A total of 20 commercial cigarette and 16 commercial smokeless tobacco products were assayed for 96 compounds listed as harmful and potentially harmful constituents (HPHCs) by the US Food and Drug Administration. For each product, a single lot was used for all testing. Both International Organization for Standardization and Health Canada smoking regimens were used for cigarette testing. For those HPHCs detected, measured levels were consistent with levels reported in the literature; however, substantial assay variability (measured as average relative standard deviation) was found for most results. Using an abbreviated list of HPHCs, statistically significant differences for most of these HPHCs occurred when results were obtained 4-6 months apart (i.e., temporal variability). The assay variability and temporal variability demonstrate the need for standardized analytical methods with defined repeatability and reproducibility for each HPHC using certified reference standards. Temporal variability also means that simple conventional comparisons, such as two-sample t-tests, are inappropriate for comparing products tested at different points in time, whether from the same laboratory or from different laboratories. Until capable laboratories use standardized assays with established repeatability, reproducibility, and certified reference standards, the resulting HPHC data will be unreliable for product comparisons or other decision making in regulatory science.
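The variability metric named in the abstract above, relative standard deviation (RSD), can be sketched in a few lines of Python. The replicate values below are hypothetical illustrations, not measurements from the study.

```python
import statistics

def relative_std_dev(values):
    """RSD (%) = 100 * sample standard deviation / mean."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

# hypothetical replicate assay results for one constituent in one product lot
replicates = [1.02, 0.95, 1.10, 0.98, 1.07]
rsd = relative_std_dev(replicates)  # roughly 6% for these values
```

Averaging the RSD across constituents and products gives the "average relative standard deviation" the authors use to summarize assay variability.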
Başkan, Ceyda; Köz, Özlem G; Duman, Rahmi; Gökçe, Sabite E; Yarangümeli, Ahmet A; Kural, Gülcan
2016-12-01
The purpose of this study is to examine the demographics, clinical properties, and the relationships among white-on-white standard automated perimetry (SAP), short-wavelength automated perimetry (SWAP), and optical coherence tomography (OCT) parameters in patients with ocular hypertension. Sixty-one eyes of 61 patients diagnosed with ocular hypertension in the Ankara Numune Education and Research Hospital ophthalmology unit between January 2010 and January 2011 were included in this study. All patients underwent SAP and SWAP testing with the Humphrey visual field analyser using the 30-2 full-threshold test. Retinal nerve fiber layer (RNFL) and optic nerve head parameters were evaluated with Stratus OCT. Positive correlations were detected between the SAP pattern standard deviation value and average intraocular pressure (P=0.017), maximum intraocular pressure (P=0.009), and vertical cup-to-disc (C/D) ratio (P=0.009). Positive correlations were detected between the SWAP mean deviation value and inferior (P=0.032), nasal (P=0.005), and 6 o'clock quadrant RNFL thickness (P=0.028) and the Imax/Tavg ratio (P=0.023), and a negative correlation with the Smax/Navg ratio (P=0.005). There was no correlation between central corneal thickness and peripapillary RNFL thicknesses (P>0.05). There was no relationship between the SAP mean deviation and pattern standard deviation values and the RNFL thicknesses and optic disc parameters of the OCT. By contrast, significant correlations between several SWAP parameters and OCT parameters were detected. SWAP appeared to outperform achromatic SAP when the same 30-2 method was used.
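The statistic reported throughout the abstract above is a correlation coefficient with an associated P-value. A minimal pure-Python sketch of the sample Pearson correlation, using made-up paired values (not the study's data):

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient between paired sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# hypothetical paired values: pattern standard deviation vs. average IOP
psd = [1.8, 2.1, 2.5, 2.9, 3.2]
iop = [22.0, 23.5, 24.0, 26.1, 27.3]
r = pearson_r(psd, iop)  # close to +1: a strong positive correlation
```

A positive r with a small P-value (e.g., the abstract's P=0.017) indicates that higher values of one parameter tend to accompany higher values of the other.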
The Effect of Paid Leave on Maternal Mental Health.
Mandal, Bidisha
2018-06-07
Objectives: I examined the relationship between paid maternity leave and maternal mental health among women returning to work within 12 weeks of childbirth, after 12 weeks, and those returning specifically to full-time work within 12 weeks of giving birth. Methods: I used data from 3850 women who worked full-time before childbirth from the Early Childhood Longitudinal Study-Birth Cohort. I utilized propensity score matching techniques to address selection bias. Mental health was measured using the Center for Epidemiologic Studies Depression (CESD) scale, with high scores indicating greater depressive symptoms. Results: Returning to work after giving birth provided psychological benefits to women who worked full-time before childbirth. The average CESD score of women who returned to work was 0.15 standard deviation (p < 0.01) lower than the average CESD score of all women who worked full-time before giving birth. Shorter leave, on the other hand, was associated with adverse effects on mental health. The average CESD score of women who returned within 12 weeks of giving birth was 0.13 standard deviation higher (p < 0.05) than the average CESD score of all women who rejoined the labor market within 9 months of giving birth. However, receipt of paid leave was associated with an improved mental health outcome. Among all women who returned to work within 12 weeks of childbirth, those who received some paid leave had a CESD score 0.17 standard deviation (p < 0.05) lower than the average. The result was stronger for women who returned to full-time work within 12 weeks of giving birth, with a CESD score 0.32 standard deviation (p < 0.01) lower than the average. Conclusions: The study revealed that the negative psychological effect of early return to work after giving birth was alleviated when women received paid leave.
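The abstract above reports group differences in standard-deviation units: a subgroup's mean CESD score relative to the full sample, scaled by the sample's standard deviation. A minimal sketch of that standardized difference, using invented scores rather than the study's data:

```python
import statistics

def standardized_difference(group_scores, all_scores):
    """(group mean - overall mean) / overall sample standard deviation."""
    overall_mean = statistics.mean(all_scores)
    overall_sd = statistics.stdev(all_scores)
    return (statistics.mean(group_scores) - overall_mean) / overall_sd

all_cesd = [4, 5, 6, 7, 8, 9, 10, 11]  # hypothetical full-sample CESD scores
paid_leave = [4, 5, 6, 7]              # hypothetical paid-leave subgroup
d = standardized_difference(paid_leave, all_cesd)
# d < 0: the subgroup reports fewer depressive symptoms than the sample average
```

A value of d = -0.17 would correspond to the abstract's "0.17 standard deviation lower CESD score" for the paid-leave group.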