Code of Federal Regulations, 2010 CFR
2010-01-01
... defined in section 1 of this appendix is as follows: (a) The standard deviation of lateral track errors shall be less than 6.3 NM (11.7 km). Standard deviation is a statistical measure of data about a mean... standard deviation about the mean encompasses approximately 68 percent of the data and plus or minus 2...
Exploring local regularities for 3D object recognition
NASA Astrophysics Data System (ADS)
Tian, Huaiwen; Qin, Shengfeng
2016-11-01
In order to find better simplicity measurements for 3D object recognition, a new set of local regularities is developed and tested in a stepwise 3D reconstruction method, including localized minimizing standard deviation of angles (L-MSDA), localized minimizing standard deviation of segment magnitudes (L-MSDSM), localized minimum standard deviation of areas of child faces (L-MSDAF), localized minimum sum of segment magnitudes of common edges (L-MSSM), and localized minimum sum of areas of child faces (L-MSAF). Based on their effectiveness measurements in terms of form and size distortions, it is found that when two local regularities, L-MSDA and L-MSDSM, are combined, they produce better performance. In addition, the best weightings for them to work together are identified as 10% for L-MSDSM and 90% for L-MSDA. The test results show that the combined use of L-MSDA and L-MSDSM with the identified weightings has the potential to be applied in other optimization-based 3D recognition methods to improve their efficacy and robustness.
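The reported 90%/10% weighting can be illustrated with a minimal sketch. The function name and the assumption that both regularity scores are pre-normalized to a common scale (lower meaning more regular) are ours, not from the paper:

```python
def combined_regularity(l_msda: float, l_msdsm: float,
                        w_msda: float = 0.9, w_msdsm: float = 0.1) -> float:
    """Weighted combination of the two best-performing local regularities
    reported in the abstract: 90% L-MSDA + 10% L-MSDSM. Both scores are
    assumed normalized to a common scale where lower means more regular."""
    return w_msda * l_msda + w_msdsm * l_msdsm
```

In a reconstruction loop, candidate interpretations would be ranked by this combined score and the lowest-scoring candidate kept.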
2008-06-12
[Garbled table fragment: per-group summary statistics (average, standard deviation (±), minimum, maximum, N) for DMSO, acetone, corn oil, and saline vehicle groups; the original tabular layout is unrecoverable.]
49 CFR 192.943 - When can an operator deviate from these reassessment intervals?
Code of Federal Regulations, 2010 CFR
2010-10-01
... (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Gas Transmission Pipeline Integrity Management § 192.943 When can an operator deviate from these reassessment...
49 CFR 192.913 - When may an operator deviate its program from certain requirements of this subpart?
Code of Federal Regulations, 2010 CFR
2010-10-01
... Transportation (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Gas Transmission Pipeline Integrity Management § 192.913 When may an operator deviate its program...
Organizational Deviance and Multi-Factor Leadership
ERIC Educational Resources Information Center
Aksu, Ali
2016-01-01
Organizational deviant behaviors can be defined as behaviors that deviate from standards and are incongruent with the organization's expectations. Since such behaviors are thought to damage the organization, reducing deviant behaviors to a minimum is necessary for a healthy organization. The aim of this research is…
Silveira, Vladímir de Aquino; Souza, Givago da Silva; Gomes, Bruno Duarte; Rodrigues, Anderson Raiol; Silveira, Luiz Carlos de Lima
2014-01-01
We used psychometric functions to estimate the joint entropy for space discrimination and spatial frequency discrimination. Space discrimination was taken as discrimination of spatial extent. Seven subjects were tested. Gábor functions comprising unidimensional sinusoidal gratings (0.4, 2, and 10 cpd) and bidimensional Gaussian envelopes (1°) were used as reference stimuli. The experiment comprised the comparison between reference and test stimuli that differed in the grating's spatial frequency or the envelope's standard deviation. We tested 21 different envelope standard deviations around the reference standard deviation to study spatial extent discrimination, and 19 different grating spatial frequencies around the reference spatial frequency to study spatial frequency discrimination. Two series of psychometric functions were obtained for 2%, 5%, 10%, and 100% stimulus contrast. The psychometric function data points for spatial extent discrimination or spatial frequency discrimination were fitted with Gaussian functions using the least-squares method, and the spatial extent and spatial frequency entropies were estimated from the standard deviations of these Gaussian functions. Joint entropy was then obtained by multiplying the square root of the spatial extent entropy by the spatial frequency entropy. We compared our results to the theoretical minimum for unidimensional Gábor functions, 1/4π or 0.0796. At low and intermediate spatial frequencies and high contrasts, joint entropy reached levels below the theoretical minimum, suggesting non-linear interactions between two or more visual mechanisms. We concluded that non-linear interactions of visual pathways, such as the M and P pathways, could explain joint entropy values below the theoretical minimum at low and intermediate spatial frequencies and high contrasts.
These non-linear interactions might be at work at intermediate and high contrasts at all spatial frequencies, since there was a substantial decrease in joint entropy for these stimulus conditions when contrast was raised. PMID:24466158
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1978-10-03
This report is a six-part statistical summary of surface weather observations for Torrejon AB, Madrid, Spain. It contains the following parts: (A) Weather Conditions; Atmospheric Phenomena; (B) Precipitation, Snowfall and Snow Depth (daily amounts and extreme values); (C) Surface Winds; (D) Ceiling Versus Visibility; Sky Cover; (E) Psychrometric Summaries (daily maximum and minimum temperatures, extreme maximum and minimum temperatures, psychrometric summary of wet-bulb temperature depression versus dry-bulb temperature, means and standard deviations of dry-bulb, wet-bulb and dew-point temperatures and relative humidity); and (F) Pressure Summary (means, standard deviations, and observation counts of station pressure and sea-level pressure). Data in this report are presented in tabular form, in most cases as percentage frequency of occurrence or cumulative percentage frequency of occurrence tables.
NASA Astrophysics Data System (ADS)
Osipova, Irina Y.; Chyzh, Igor H.
2001-06-01
The influence of eye jumps on the accuracy of estimating Zernike coefficients from measurements of the eye's transverse aberration was investigated. Ametropia and astigmatism were examined by computer modeling. The standard deviation of the wave aberration function was calculated. It was determined that the standard deviation of the wave aberration function reaches its minimum value when the number of scanning points equals the number of eye jumps in the scanning period. Recommendations for the duration of measurement were derived.
NASA Astrophysics Data System (ADS)
Stooksbury, David E.; Idso, Craig D.; Hubbard, Kenneth G.
1999-05-01
Gaps in otherwise regularly scheduled observations are often referred to as missing data. This paper explores the spatial and temporal impacts that data gaps in the recorded daily maximum and minimum temperatures have on the calculated monthly mean maximum and minimum temperatures. For this analysis, 138 climate stations from the United States Historical Climatology Network Daily Temperature and Precipitation Data set were selected. The selected stations had no missing maximum or minimum temperature values during the period 1951-80. The monthly mean maximum and minimum temperatures were calculated for each station and each month. For each month, 1-10 consecutive days of data from each station were randomly removed; this was performed 30 times for each simulated gap period. The spatial and temporal impacts of the 1-10-day data gaps were then compared. The influence of data gaps is most pronounced in the continental regions during the winter and least pronounced in the southeast during the summer. In the north central plains, 10-day data gaps during January produce a standard deviation greater than 2°C about the 'true' mean. In the southeast, 10-day data gaps in July produce a standard deviation less than 0.5°C about the mean. The results of this study will be of value in climate variability and climate trend research, as well as in climate assessment and impact studies.
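The simulation design described above, removing a run of consecutive days at random, repeating 30 times per gap length, and measuring the spread of the resulting monthly means about the "true" mean, can be sketched as follows. This is a minimal illustration of the stated design, not the authors' code:

```python
import random
import statistics

def gap_impact(daily, gap_len, trials=30, seed=0):
    """Monte Carlo sketch of the study design: repeatedly delete
    `gap_len` consecutive daily temperatures from one month's record,
    recompute the monthly mean, and report the standard deviation of
    those gap-affected means about the complete-data ('true') mean."""
    rng = random.Random(seed)
    true_mean = statistics.fmean(daily)
    means = []
    for _ in range(trials):
        start = rng.randrange(len(daily) - gap_len + 1)
        kept = daily[:start] + daily[start + gap_len:]
        means.append(statistics.fmean(kept))
    # Spread measured about the true mean, as in the paper's description
    spread = (sum((m - true_mean) ** 2 for m in means) / trials) ** 0.5
    return true_mean, spread
```

Running this over each station-month with gap lengths 1 through 10 would reproduce the kind of spatial/seasonal comparison the abstract reports.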
Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun
2014-12-19
In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the results using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is often unsatisfactory in practice; inspired by this, we propose a new estimation method that incorporates the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios. For the first two scenarios, our method greatly improves on existing methods, providing a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and offer suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods for estimating the sample mean and standard deviation and propose new estimation methods to improve the existing literature.
We conclude our work with a summary table (an Excel spreadsheet including all formulas) that serves as comprehensive guidance for performing meta-analysis in different situations.
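For the scenario where a trial reports only the minimum a, median m, maximum b, and sample size n, the estimators commonly cited from this line of work are mean ≈ (a + 2m + b)/4 and SD ≈ (b − a) / (2Φ⁻¹((n − 0.375)/(n + 0.25))). A sketch (the function name is ours, and the formulas should be checked against the paper before use):

```python
from statistics import NormalDist

def estimate_mean_sd(a: float, m: float, b: float, n: int):
    """Estimate a sample mean and SD from the minimum (a), median (m),
    maximum (b), and sample size n. The mean uses the (a + 2m + b)/4
    rule; the SD scales the range by the expected range of n standard
    normal draws, as in the sample-size-aware estimator."""
    mean = (a + 2 * m + b) / 4
    # Expected half-range of n standard normal observations
    z = NormalDist().inv_cdf((n - 0.375) / (n + 0.25))
    sd = (b - a) / (2 * z)
    return mean, sd
```

Unlike the fixed range/4 rule, the denominator here grows slowly with n, so large trials are not assigned implausibly large standard deviations.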
Pulse height response of an optical particle counter to monodisperse aerosols
NASA Technical Reports Server (NTRS)
Wilmoth, R. G.; Grice, S. S.; Cuda, V.
1976-01-01
The pulse height response of a right angle scattering optical particle counter has been investigated using monodisperse aerosols of polystyrene latex spheres, di-octyl phthalate and methylene blue. The results confirm previous measurements for the variation of mean pulse height as a function of particle diameter and show good agreement with the relative response predicted by Mie scattering theory. Measured cumulative pulse height distributions were found to fit reasonably well to a log normal distribution with a minimum geometric standard deviation of about 1.4 for particle diameters greater than about 2 micrometers. The geometric standard deviation was found to increase significantly with decreasing particle diameter.
Kwon, Deukwoo; Reis, Isildinha M
2015-08-12
When conducting a meta-analysis of a continuous outcome, estimated means and standard deviations from the selected studies are required in order to obtain an overall estimate of the mean effect and its confidence interval. If these quantities are not directly reported in the publications, they must be estimated from other reported summary statistics, such as the median, the minimum, the maximum, and the quartiles. We propose a simulation-based estimation approach using the Approximate Bayesian Computation (ABC) technique for estimating the mean and standard deviation based on various sets of summary statistics found in published studies. We conduct a simulation study to compare the proposed ABC method with the existing methods of Hozo et al. (2005), Bland (2015), and Wan et al. (2014). In the estimation of the standard deviation, our ABC method performs better than the other methods when data are generated from skewed or heavy-tailed distributions; the corresponding average relative error (ARE) approaches zero as sample size increases. For data generated from the normal distribution, our ABC method performs well, although the Wan et al. method is best for estimating the standard deviation under normality. In the estimation of the mean, our ABC method is best regardless of the assumed distribution. ABC is a flexible method for estimating the study-specific mean and standard deviation for meta-analysis, especially with underlying skewed or heavy-tailed distributions. The ABC method can also be applied using other reported summary statistics, such as the posterior mean and 95% credible interval when Bayesian analysis has been employed.
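A minimal, generic ABC rejection sketch for this task follows. This is not the authors' implementation; the priors, distance function, and acceptance rule are illustrative choices, and a normal data-generating model is assumed:

```python
import random
import statistics

def abc_mean_sd(s_min, s_med, s_max, n, draws=20000, keep=200, seed=1):
    """ABC rejection sketch: sample (mu, sigma) from broad uniform
    priors, simulate a normal sample of size n, and keep the parameter
    draws whose simulated (min, median, max) lie closest to the
    observed summary statistics."""
    rng = random.Random(seed)
    scored = []
    for _ in range(draws):
        mu = rng.uniform(s_min, s_max)
        sigma = rng.uniform(1e-6, s_max - s_min)
        x = [rng.gauss(mu, sigma) for _ in range(n)]
        sim = (min(x), statistics.median(x), max(x))
        # Squared distance between simulated and observed summaries
        d = sum((a - b) ** 2 for a, b in zip(sim, (s_min, s_med, s_max)))
        scored.append((d, mu, sigma))
    scored.sort(key=lambda t: t[0])
    best = scored[:keep]
    mus = [m for _, m, _ in best]
    sigmas = [s for _, _, s in best]
    return statistics.fmean(mus), statistics.fmean(sigmas)
```

Swapping the normal model for a skewed or heavy-tailed one is a one-line change, which is the flexibility the abstract highlights.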
49 CFR 192.1013 - When may an operator deviate from required periodic inspections under this part?
Code of Federal Regulations, 2011 CFR
2011-10-01
... to Transportation (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Gas Distribution Pipeline Integrity Management (IM) § 192.1013 When may an operator...
49 CFR 192.1013 - When may an operator deviate from required periodic inspections under this part?
Code of Federal Regulations, 2013 CFR
2013-10-01
... to Transportation (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Gas Distribution Pipeline Integrity Management (IM) § 192.1013 When may an operator...
49 CFR 192.1013 - When may an operator deviate from required periodic inspections under this part?
Code of Federal Regulations, 2012 CFR
2012-10-01
... to Transportation (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Gas Distribution Pipeline Integrity Management (IM) § 192.1013 When may an operator...
49 CFR 192.1013 - When may an operator deviate from required periodic inspections under this part?
Code of Federal Regulations, 2010 CFR
2010-10-01
... to Transportation (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Gas Distribution Pipeline Integrity Management (IM) § 192.1013 When may an operator...
Cybułka, Bartosz
2017-04-30
With current technological advancement and the availability of synthetic materials for inguinal hernia repair, recurrence after the first intervention is no longer a common or important adverse event. On the other hand, some patients complain of chronic pain at the operated site after surgery using a polypropylene mesh. Many patients are constrained to prolonged use of analgesics and more frequent control visits, which may eventually result in loss of trust in the operator. Every surgical intervention carries a risk of immediate or delayed complications. Genitofemoral neuralgia is associated with dysfunction of the peripheral nerves passing through the inguinal canal or the surrounding tissue, and it is a chronic, troublesome and undesired complication of inguinal hernia repair. The possibility of minimizing chronic inguinal pain through proper management during herniorrhaphy should be considered in all cases of inguinal canal reconstruction. The aim of the study was to investigate whether an intraoperative injection of 0.5% bupivacaine into the operated site (preemptive analgesia) influences postoperative pain assessed on the day of operation as well as on the 1st and 2nd postoperative days after Lichtenstein hernioplasty of an inguinal, scrotal or recurrent hernia. In the studied population, we attempted to identify risk factors affecting pain level after surgical repair of an inguinal, scrotal or recurrent hernia. Between December 2015 and May 2016, 133 patients with a preoperative diagnosis of an inguinal (81.95%, n=109), scrotal (13.53%, n=18) or recurrent hernia (4.51%, n=6) underwent an elective intervention and were randomly allocated to the group which intraoperatively received 20 mL of 0.5% bupivacaine locally at selected anatomical points of the inguinal canal. In the group with a preoperative diagnosis of an inguinal hernia, this intervention was applied in 56.88% of cases (n=62).
In the case of a scrotal or recurrent hernia, a similar intervention was applied in 41.67% (n=10) of patients. During the hospital stay, pain was assessed four times a day using the NRS numeric scale. All patients received preoperative antibiotic prophylaxis, and analgesics and low-molecular-weight heparin were used during observation. In the studied group, risk factors affecting the pain level associated with surgical treatment of an inguinal hernia were identified. The mean pain score on the NRS scale (0-10) for an inguinal hernia was 4.17 on day 0 (standard deviation 2.22; minimum 0; maximum 10), 2.86 on day 1 (standard deviation 1.86; minimum 0; maximum 8), and 0.84 on day 2 (standard deviation 1.21; minimum 0; maximum 5). The corresponding values for scrotal and recurrent hernias were 3.67 on day 0 (standard deviation 1.76; minimum 0; maximum 7), 3.79 on day 1 (standard deviation 1.67; minimum 0; maximum 7), and 2.25 on day 2 (standard deviation 1.54; minimum 0; maximum 4). Intraoperative application of 20 mL of 0.5% bupivacaine did not reduce postoperative pain on postoperative days 0, 1 and 2. Among independent risk factors exacerbating pain, the following variables were identified: local complications of the operated site, including edema, ecchymosis and hematoma of the inguinal region. More frequent dressing changes were directly correlated with increased pain sensation. Postoperative urethral catheterization due to urinary retention was associated with increased pain immediately after surgery. In the case of intraoperative diagnosis of a concurrent direct and indirect hernia (a so-called pantaloon hernia), less intense pain was observed on postoperative day 0. Other parameters such as age, sex, duration of operation, duration of hospitalization and wound drainage did not influence pain sensation.
Local injection of an analgesic into the operated site was not associated with a reduction of pain assessed on postoperative days 0, 1 and 2 after an isolated inguinal, scrotal or recurrent hernia repair. Pathologies of the operated site such as edema, ecchymosis or hematoma were associated with increased pain sensation during observation. Postoperative urinary retention and urethral catheterization also increased pain sensation after inguinal hernia repair. A lack of wound complications significantly decreased pain sensation during the immediate postoperative period after hernia repair.
Reeves, Aaron; McKee, Martin; Mackenbach, Johan; Whitehead, Margaret; Stuckler, David
2017-05-01
Does increasing incomes improve health? In 1999, the UK government implemented minimum wage legislation, increasing hourly wages to at least £3.60. This policy experiment created intervention and control groups that can be used to assess the effects of increasing wages on health. Longitudinal data were taken from the British Household Panel Survey. We compared the health effects of higher wages on recipients of the minimum wage with otherwise similar persons who were likely unaffected because (1) their wages were between 100 and 110% of the eligibility threshold or (2) their firms did not increase wages to meet the threshold. We assessed the probability of mental ill health using the 12-item General Health Questionnaire. We also assessed changes in smoking and blood pressure, as well as hearing ability (a control condition). The intervention group, whose wages rose above the minimum wage, experienced a lower probability of mental ill health compared with both control group 1 and control group 2. This improvement represents 0.37 of a standard deviation, comparable with the effect of antidepressants (0.39 of a standard deviation) on depressive symptoms. The intervention group experienced no change in blood pressure, hearing ability, or smoking. Increasing wages significantly improves mental health by reducing financial strain in low-wage workers. © 2016 The Authors. Health Economics published by John Wiley & Sons Ltd.
2014-03-27
[Fragment of a thesis table of contents and acronym list: sections on Number of Hops, Number of Sensors, Standard deviation vs. Ns, and Bias; acronyms include MTM (multiple taper method), MUSIC (multiple signal classification), MVDR (minimum variance distortionless response), PSK (phase shift keying), and QAM (quadrature amplitude modulation).]
Long-term changes (1980-2003) in total ozone time series over Northern Hemisphere midlatitudes
NASA Astrophysics Data System (ADS)
Białek, Małgorzata
2006-03-01
Long-term changes in total ozone time series for the Arosa, Belsk, Boulder and Sapporo stations are examined. For each station, we analyze time series of the following statistical characteristics of the distribution of daily ozone data: seasonal mean, standard deviation, maximum, and minimum of total daily ozone values for all seasons. An iterative statistical model is proposed to estimate trends and long-term changes in the statistical distribution of the daily total ozone data. The trends are calculated for the period 1980-2003. We observe a lessening of negative trends in the seasonal means compared with those calculated by WMO for 1980-2000. We discuss the possibility of a change in the distribution shape of daily ozone data using the Kolmogorov-Smirnov test and by comparing trend values in the seasonal mean, standard deviation, maximum and minimum time series for the selected stations and seasons. A distribution shift toward lower values without a change in the distribution shape is suggested, with the following exceptions: a spreading of the distribution toward lower values for Belsk during winter, and no decisive result for Sapporo and Boulder in summer.
Assessment of corneal epithelial thickness in dry eye patients.
Cui, Xinhan; Hong, Jiaxu; Wang, Fei; Deng, Sophie X; Yang, Yujing; Zhu, Xiaoyu; Wu, Dan; Zhao, Yujin; Xu, Jianjiang
2014-12-01
To investigate the features of corneal epithelial thickness topography with Fourier-domain optical coherence tomography (OCT) in dry eye patients. In this cross-sectional study, 100 symptomatic dry eye patients and 35 normal subjects were enrolled. All participants answered the Ocular Surface Disease Index questionnaire and underwent OCT, corneal fluorescein staining, tear breakup time, the Schirmer 1 test without anesthetic (S1t), and assessment of meibomian gland morphology. Several epithelial statistics for each eye, including central, superior, inferior, minimum, maximum, minimum - maximum, and map standard deviation, were averaged. Correlations of epithelial thickness with the symptoms of dry eye were calculated. The mean (±SD) central, superior, and inferior corneal epithelial thickness was 53.57 (±3.31) μm, 52.00 (±3.39) μm, and 53.03 (±3.67) μm in normal eyes and 52.71 (±2.83) μm, 50.58 (±3.44) μm, and 52.53 (±3.36) μm in dry eyes, respectively. The superior corneal epithelium was thinner in dry eye patients than in normal subjects (p = 0.037), whereas the central and inferior epithelium were not statistically different. In the dry eye group, patients with higher severity grades had thinner superior (p = 0.017) and minimum (p < 0.001) epithelial thickness, a wider range (p = 0.032), and greater deviation (p = 0.003). The average central epithelial thickness had no correlation with tear breakup time, S1t, or the severity of meibomian gland disease, whereas the average superior epithelial thickness correlated positively with S1t (r = 0.238, p = 0.017). Fourier-domain OCT demonstrated that the corneal epithelium in dry eyes was thinner than in normal eyes in the superior region. In more severe dry eye disease, the superior and minimum epithelium was much thinner, with a greater map standard deviation.
Forecast of Frost Days Based on Monthly Temperatures
NASA Astrophysics Data System (ADS)
Castellanos, M. T.; Tarquis, A. M.; Morató, M. C.; Saa-Requejo, A.
2009-04-01
Although frost can cause considerable crop damage and mitigation practices against forecasted frost exist, frost forecasting technologies have not changed for many years. This paper reports a new method to forecast the monthly number of frost days (FD) for several meteorological stations in the Community of Madrid (Spain), based on the successive application of two models. The first is a stochastic model, an autoregressive integrated moving average (ARIMA), that forecasts the monthly minimum absolute temperature (tmin) and the monthly average of minimum temperatures (tminav) following the Box-Jenkins methodology. The second model relates these monthly temperatures to the distribution of minimum daily temperature within the month. Three ARIMA models were identified for the analyzed time series, with a seasonal period of one year. They share the same seasonal behavior (a differenced moving average model) and differ in the non-seasonal part: an autoregressive model (Model 1), a differenced moving average model (Model 2), and a mixed autoregressive moving average model (Model 3). The results also show that, for the meteorological stations studied, the minimum daily temperature (tdmin) followed a normal distribution in each month, with a very similar standard deviation across years. The standard deviation obtained for each station and month could be used as a risk index for cold months. Applying Model 1 to predict minimum monthly temperatures gave the best FD forecast. This procedure provides a tool for crop managers and crop insurance companies to assess the risk of frost frequency and intensity, so that they can take steps to mitigate frost damage and estimate its cost. This research was supported by Comunidad de Madrid Research Project 076/92. The cooperation of the Spanish National Meteorological Institute and the Spanish Ministerio de Agricultura, Pesca y Alimentación (MAPA) is gratefully acknowledged.
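Since the abstract reports that daily minimum temperatures are approximately normal within each month, the link from monthly statistics to a frost-day forecast can be sketched under that assumption. This is a minimal illustration, not the authors' exact second model:

```python
from statistics import NormalDist

def expected_frost_days(tminav: float, sd: float, days_in_month: int,
                        threshold: float = 0.0) -> float:
    """If daily minimum temperature is ~ N(tminav, sd) within a month,
    the expected number of frost days is the month length times the
    probability that a day's minimum falls below the frost threshold."""
    p_frost = NormalDist(tminav, sd).cdf(threshold)
    return days_in_month * p_frost
```

Feeding ARIMA-forecast values of tminav (with the station's monthly standard deviation acting as the risk index) into this function yields the monthly FD forecast.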
Efficiency and large deviations in time-asymmetric stochastic heat engines
Gingrich, Todd R.; Rotskoff, Grant M.; Vaikuntanathan, Suriyanarayanan; ...
2014-10-24
In a stochastic heat engine driven by a cyclic non-equilibrium protocol, fluctuations in work and heat give rise to a fluctuating efficiency. Using computer simulations and tools from large deviation theory, we have examined these fluctuations in detail for a model two-state engine. We find in general that the form of the efficiency probability distributions is similar to those described by Verley et al (2014 Nat. Commun. 5 4721), in particular featuring a local minimum in the long-time limit. In contrast to the time-symmetric engine protocols studied previously, however, this minimum need not occur at the value characteristic of a reversible Carnot engine. Furthermore, while the local minimum may reside at the global minimum of a large deviation rate function, it does not generally correspond to the least likely efficiency measured over finite time. Lastly, we introduce a general approximation for the finite-time efficiency distribution, P(η), based on large deviation statistics of work and heat, that remains very accurate even when P(η) deviates significantly from its large deviation form.
Estimating missing daily temperature extremes in Jaffna, Sri Lanka
NASA Astrophysics Data System (ADS)
Thevakaran, A.; Sonnadara, D. U. J.
2018-04-01
The accuracy of reconstructing missing daily temperature extremes at the Jaffna climatological station, situated in the northern part of the dry zone of Sri Lanka, is presented. The adopted method uses the standard departures of daily maximum and minimum temperatures at four neighbouring stations (Mannar, Anuradhapura, Puttalam and Trincomalee) to estimate the standard departures of daily maximum and minimum temperatures at the target station, Jaffna. Daily maximum and minimum temperatures from 1966 to 1980 (15 years) were used to test the validity of the method. The accuracy of the estimation is higher for the daily maximum temperature than for the daily minimum temperature: about 95% of the estimated daily maximum temperatures are within ±1.5 °C of the observed values, compared with about 92% for the daily minimum temperature. By calculating the standard deviation of the difference between estimated and observed values, we show that the errors in estimating the daily maximum and minimum temperatures are ±0.7 and ±0.9 °C, respectively. To obtain the best accuracy when estimating the missing daily temperature extremes, it is important to include Mannar, the station nearest to the target station, Jaffna. We conclude from the analysis that the method can be applied successfully to reconstruct the missing daily temperature extremes in Jaffna, where no data are available due to frequent disruptions caused by civil unrest and hostilities in the region during the period 1984 to 2000.
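The standard-departure approach can be sketched as follows: each neighbour's value for the missing day is converted to a z-score using that station's own climatological mean and standard deviation, the z-scores are averaged, and the result is rescaled by the target station's statistics. This is a minimal unweighted version; how the actual method combines the four neighbours is not specified in the abstract:

```python
import statistics

def estimate_missing(neighbor_series, neighbor_today, target_series):
    """Estimate a missing daily temperature at a target station from
    neighbouring stations' standard departures (z-scores) on that day.

    neighbor_series : list of historical series, one per neighbour
    neighbor_today  : that day's observed value at each neighbour
    target_series   : historical series at the target station
    """
    z_scores = []
    for series, today in zip(neighbor_series, neighbor_today):
        mu, sd = statistics.fmean(series), statistics.stdev(series)
        z_scores.append((today - mu) / sd)
    # Rescale the mean departure by the target station's climatology
    mu_t = statistics.fmean(target_series)
    sd_t = statistics.stdev(target_series)
    return mu_t + sd_t * statistics.fmean(z_scores)
```

Working in standard departures rather than raw temperatures removes each station's local mean and variability, which is what lets cooler or more variable neighbours inform a warmer target station.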
Babb, James; Xia, Ding; Chang, Gregory; Krasnokutsky, Svetlana; Abramson, Steven B.; Jerschow, Alexej; Regatte, Ravinder R.
2013-01-01
Purpose: To assess the potential use of sodium magnetic resonance (MR) imaging of cartilage, with and without fluid suppression by using an adiabatic pulse, for classifying subjects with versus subjects without osteoarthritis at 7.0 T. Materials and Methods: The study was approved by the institutional review board and was compliant with HIPAA. The knee cartilage of 19 asymptomatic (control subjects) and 28 symptomatic (osteoarthritis patients) subjects underwent 7.0-T sodium MR imaging with use of two different sequences: one without fluid suppression (radial three-dimensional sequence) and one with fluid suppression (inversion recovery [IR] wideband uniform rate and smooth truncation [WURST]). Fluid suppression was obtained by using IR with an adiabatic inversion pulse (WURST pulse). Mean sodium concentrations and their standard deviations were measured in the patellar, femorotibial medial, and lateral cartilage regions over four consecutive sections for each subject. The minimum, maximum, median, and average means and standard deviations were calculated over all measurements for each subject. The utility of these measures in the detection of osteoarthritis was evaluated by using logistic regression and the area under the receiver operating characteristic curve (AUC). Bonferroni correction was applied to the P values obtained with logistic regression. Results: Measurements from IR WURST were found to be significant predictors of all osteoarthritis (Kellgren-Lawrence score of 1–4) and early osteoarthritis (Kellgren-Lawrence score of 1 or 2). The minimum standard deviation provided the highest AUC (0.83) with the highest accuracy (>78%), sensitivity (>82%), and specificity (>74%) for both all osteoarthritis and early osteoarthritis groups. Conclusion: Quantitative sodium MR imaging at 7.0 T with fluid suppression by using adiabatic IR is a potential biomarker for osteoarthritis. © RSNA, 2013 PMID:23468572
McKenna, D; Kadidlo, D; Sumstad, D; McCullough, J
2003-01-01
Errors and accidents, or deviations from standard operating procedures, other policy, or regulations must be documented and reviewed, with corrective actions taken to assure quality performance in a cellular therapy laboratory. Though expectations and guidance for deviation management exist, a description of the framework for the development of such a program is lacking in the literature. Here we describe our deviation management program, which uses a Microsoft Access database and Microsoft Excel to analyze deviations and notable events, facilitating quality assurance (QA) functions and ongoing process improvement. Data are stored in a Microsoft Access database with an assignment to one of six deviation type categories. Deviation events are evaluated for potential impact on patient and product, and impact scores for each are determined using a 0–4 grading scale. An immediate investigation occurs, and corrective actions are taken to prevent future similar events from taking place. Additionally, deviation data are collectively analyzed on a quarterly basis using Microsoft Excel to identify recurring events or developing trends. Between January 1, 2001 and December 31, 2001, over 2500 products were processed at our laboratory. During this time period, 335 deviations and notable events occurred, affecting 385 products and/or patients. Deviations within the 'technical error' category were most common (37%). Thirteen percent of deviations had a patient and/or a product impact score ≥2, a score indicating, at a minimum, potentially affected patient outcome or moderate effect upon product quality. Real-time analysis and quarterly review of deviations using our deviation management program allows for identification and correction of deviations. Monitoring of deviation trends allows for process improvement and overall successful functioning of the QA program in the cell therapy laboratory. Our deviation management program could serve as a model for other laboratories in need of such a program.
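A minimal sketch of the impact-scoring step described above; the record fields, category names, and the review threshold are assumptions for illustration, not the laboratory's actual schema.

```python
# Illustrative deviation records scored on the 0-4 impact scale described
# above. Field and category names are hypothetical.
REVIEW_THRESHOLD = 2  # scores >= 2 indicate at least moderate impact

def needs_review(deviation):
    """Flag a deviation whose patient or product impact score is >= 2."""
    return max(deviation["patient_impact"],
               deviation["product_impact"]) >= REVIEW_THRESHOLD

events = [
    {"category": "technical error", "patient_impact": 0, "product_impact": 1},
    {"category": "technical error", "patient_impact": 2, "product_impact": 0},
    {"category": "documentation",   "patient_impact": 0, "product_impact": 3},
]
flagged = [e for e in events if needs_review(e)]  # 2 of the 3 events
```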
Zhao, Pengxiang; Zhou, Suhong
2018-01-01
Traditionally, static units of analysis such as administrative units are used when studying obesity. However, using these fixed contextual units ignores environmental influences experienced by individuals in areas beyond their residential neighborhood and may render the results unreliable. This problem has been articulated as the uncertain geographic context problem (UGCoP). This study investigates the UGCoP through exploring the relationships between the built environment and obesity based on individuals’ activity space. First, a survey was conducted to collect individuals’ daily activity and weight information in Guangzhou in January 2016. Then, the data were used to calculate and compare the values of several built environment variables based on seven activity space delineations, including home buffers, workplace buffers (WPB), fitness place buffers (FPB), the standard deviational ellipse at two standard deviations (SDE2), the weighted standard deviational ellipse at two standard deviations (WSDE2), the minimum convex polygon (MCP), and road network buffers (RNB). Lastly, we conducted comparative analysis and regression analysis based on different activity space measures. The results indicate that significant differences exist between variables obtained with different activity space delineations. Further, regression analyses show that the activity space delineations used in the analysis have a significant influence on the results concerning the relationships between the built environment and obesity. The study sheds light on the UGCoP in analyzing the relationships between obesity and the built environment. PMID:29439392
Michael L. Hoppus; Rachel I. Riemann; Andrew J. Lister; Mark V. Finco
2002-01-01
The panchromatic bands of Landsat 7, SPOT, and IRS satellite imagery provide an opportunity to evaluate the effectiveness of texture analysis of satellite imagery for mapping land use/cover, especially forest cover. A variety of texture algorithms, including standard deviation, the Ryherd-Woodcock minimum-variance adaptive window, and low-pass filters, were applied to moving...
Change in mean temperature as a predictor of extreme temperature change in the Asia-Pacific region
NASA Astrophysics Data System (ADS)
Griffiths, G. M.; Chambers, L. E.; Haylock, M. R.; Manton, M. J.; Nicholls, N.; Baek, H.-J.; Choi, Y.; della-Marta, P. M.; Gosai, A.; Iga, N.; Lata, R.; Laurent, V.; Maitrepierre, L.; Nakamigawa, H.; Ouprasitwong, N.; Solofa, D.; Tahani, L.; Thuy, D. T.; Tibig, L.; Trewin, B.; Vediapan, K.; Zhai, P.
2005-08-01
Trends (1961-2003) in daily maximum and minimum temperatures, extremes and variance were found to be spatially coherent across the Asia-Pacific region. The majority of stations exhibited significant trends: increases in mean maximum and mean minimum temperature, decreases in cold nights and cool days, and increases in warm nights. No station showed a significant increase in cold days or cold nights, but a few sites showed significant decreases in hot days and warm nights. Significant decreases were observed in both maximum and minimum temperature standard deviation in China, Korea and some stations in Japan (probably reflecting urbanization effects), but also for some Thailand and coastal Australian sites. The South Pacific convergence zone (SPCZ) region between Fiji and the Solomon Islands showed a significant increase in maximum temperature variability. Correlations between mean temperature and the frequency of extreme temperatures were strongest in the tropical Pacific Ocean from French Polynesia to Papua New Guinea, Malaysia, the Philippines, Thailand and southern Japan. Correlations were weaker at continental or higher latitude locations, which may partly reflect urbanization. For non-urban stations, the dominant distribution change for both maximum and minimum temperature involved a change in the mean, impacting on one or both extremes, with no change in standard deviation. This occurred from French Polynesia to Papua New Guinea (except for maximum temperature changes near the SPCZ), in Malaysia, the Philippines, and several outlying Japanese islands. For urbanized stations the dominant change was a change in the mean and variance, impacting on one or both extremes. This result was particularly evident for minimum temperature. The results presented here, for non-urban tropical and maritime locations in the Asia-Pacific region, support the hypothesis that changes in mean temperature may be used to predict changes in extreme temperatures. At urbanized or higher latitude locations, changes in variance should be incorporated.
Statistical considerations for grain-size analyses of tills
Jacobs, A.M.
1971-01-01
Relative percentages of sand, silt, and clay from samples of the same till unit are not identical because of different lithologies in the source areas, sorting in transport, random variation, and experimental error. Random variation and experimental error can be isolated from the other two as follows. For each particle-size class of each till unit, a standard population is determined by using a normally distributed, representative group of data. New measurements are compared with the standard population and, if they compare satisfactorily, the experimental error is not significant and random variation is within the expected range for the population. The outcome of the comparison depends on numerical criteria derived from a graphical method rather than on a more commonly used one-way analysis of variance with two treatments. If the number of samples and the standard deviation of the standard population are substituted in a t-test equation, a family of hyperbolas is generated, each of which corresponds to a specific number of subsamples taken from each new sample. The axes of the graphs of the hyperbolas are the standard deviation of new measurements (horizontal axis) and the difference between the means of the new measurements and the standard population (vertical axis). The area between the two branches of each hyperbola corresponds to a satisfactory comparison between the new measurements and the standard population. Measurements from a new sample can be tested by plotting their standard deviation vs. difference in means on axes containing a hyperbola corresponding to the specific number of subsamples used. If the point lies between the branches of the hyperbola, the measurements are considered reliable. But if the point lies outside this region, the measurements are repeated. 
Because the critical segment of the hyperbola is approximately a straight line parallel to the horizontal axis, the test is simplified to a comparison between the means of the standard population and the means of the subsample. The minimum number of subsamples required to prove significant variation between samples caused by different lithologies in the source areas and sorting in transport can be determined directly from the graphical method. The minimum number of subsamples required is the maximum number to be run for economy of effort. ?? 1971 Plenum Publishing Corporation.
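The acceptance test described above reduces to a one-sample t comparison between the subsample mean and the standard-population mean. The sketch below is a generic illustration of that test, not the paper's graphical procedure; the measurements and the critical value (2.776, the two-sided 95% value for 4 degrees of freedom) are assumed for the example.

```python
from math import sqrt
from statistics import mean, stdev

def comparison_ok(new_values, std_mean, t_critical):
    """Compare subsample measurements against a standard population mean
    using a one-sample t statistic. Returns True when the point
    (standard deviation, difference in means) falls inside the
    acceptance region, i.e. |t| < t_critical."""
    n = len(new_values)
    s = stdev(new_values)               # standard deviation of new measurements
    diff = mean(new_values) - std_mean  # difference between the means
    t = diff / (s / sqrt(n))
    return abs(t) < t_critical

# Hypothetical subsample of 5 clay percentages vs. a standard mean of 20%:
ok = comparison_ok([19.0, 21.0, 20.5, 19.5, 20.0],
                   std_mean=20.0, t_critical=2.776)
```

If `ok` is False, the measurements lie outside the branches of the corresponding hyperbola and would be repeated.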
Training Methods and Tactical Decision-Making Simulations
2007-09-01
TDS TDG ALL Standard Deviation 100.60 14.85 87.40 Puzzle, Card, Board Subjects Responding 5 3 8 Total # of Hours/Year 805 774 1579 Minimum... Table 7 shows that participants had the most commercial game experience with puzzle, card, board, and adventure/fantasy type games. Participants... (circle all that apply) 1. first person shooter 2. flight simulations 3. racing 4. other sports 5. puzzle, strategy, card, board
Code of Federal Regulations, 2014 CFR
2014-01-01
... OPERATING RULES GENERAL OPERATING AND FLIGHT RULES Pt. 91, App. C Appendix C to Part 91—Operations in the... Oceanic Control Area, excluding the areas west of 60 degrees west and south of 38 degrees 30 minutes north... shall be less than 6.3 NM (11.7 Km). Standard deviation is a statistical measure of data about a mean...
Code of Federal Regulations, 2013 CFR
2013-01-01
... OPERATING RULES GENERAL OPERATING AND FLIGHT RULES Pt. 91, App. C Appendix C to Part 91—Operations in the... Oceanic Control Area, excluding the areas west of 60 degrees west and south of 38 degrees 30 minutes north... shall be less than 6.3 NM (11.7 Km). Standard deviation is a statistical measure of data about a mean...
Code of Federal Regulations, 2011 CFR
2011-01-01
... OPERATING RULES GENERAL OPERATING AND FLIGHT RULES Pt. 91, App. C Appendix C to Part 91—Operations in the... Oceanic Control Area, excluding the areas west of 60 degrees west and south of 38 degrees 30 minutes north... shall be less than 6.3 NM (11.7 Km). Standard deviation is a statistical measure of data about a mean...
Code of Federal Regulations, 2012 CFR
2012-01-01
... OPERATING RULES GENERAL OPERATING AND FLIGHT RULES Pt. 91, App. C Appendix C to Part 91—Operations in the... Oceanic Control Area, excluding the areas west of 60 degrees west and south of 38 degrees 30 minutes north... shall be less than 6.3 NM (11.7 Km). Standard deviation is a statistical measure of data about a mean...
ERIC Educational Resources Information Center
Waldenstrom, S.; Naqvi, K. Razi
1978-01-01
Proposes an alternative to the classical minimum-deviation method for determining the refractive index of a prism. This new "fixed angle of incidence method" may find applications in research. (Author/GA)
Loewe, Axel; Schulze, Walther H. W.; Jiang, Yuan; Wilhelms, Mathias; Luik, Armin; Dössel, Olaf; Seemann, Gunnar
2015-01-01
In case of chest pain, immediate diagnosis of myocardial ischemia is required to respond with an appropriate treatment. The diagnostic capability of the electrocardiogram (ECG), however, is strongly limited for ischemic events that do not lead to ST elevation. This computational study investigates the potential of different electrode setups in detecting early ischemia at 10 minutes after onset: standard 3-channel and 12-lead ECG as well as body surface potential maps (BSPMs). Further, it was assessed if an additional ECG electrode with optimized position or the right-sided Wilson leads can improve sensitivity of the standard 12-lead ECG. To this end, a simulation study was performed for 765 different locations and sizes of ischemia in the left ventricle. Improvements by adding a single, subject specifically optimized electrode were similar to those of the BSPM: 2–11% increased detection rate depending on the desired specificity. Adding right-sided Wilson leads had negligible effect. Absence of ST deviation could not be related to specific locations of the ischemic region or its transmurality. As alternative to the ST time integral as a feature of ST deviation, the K point deviation was introduced: the baseline deviation at the minimum of the ST-segment envelope signal, which increased 12-lead detection rate by 7% for a reasonable threshold. PMID:26587538
S-193 scatterometer backscattering cross section precision/accuracy for Skylab 2 and 3 missions
NASA Technical Reports Server (NTRS)
Krishen, K.; Pounds, D. J.
1975-01-01
Procedures for measuring the precision and accuracy with which the S-193 scatterometer measured the background cross section of ground scenes are described. Homogeneous ground sites were selected, and data from Skylab missions were analyzed. The precision was expressed as the standard deviation of the scatterometer-acquired backscattering cross section. In special cases, inference of the precision of measurement was made by considering the total range from the maximum to minimum of the backscatter measurements within a data segment, rather than the standard deviation. For Skylab 2 and 3 missions a precision better than 1.5 dB is indicated. This procedure indicates an accuracy of better than 3 dB for the Skylab 2 and 3 missions. The estimates of precision and accuracy given in this report are for backscattering cross sections from -28 to 18 dB. Outside this range the precision and accuracy decrease significantly.
Relative air temperature analysis external building on Gowa Campus
NASA Astrophysics Data System (ADS)
Mustamin, Tayeb; Rahim, Ramli; Baharuddin; Jamala, Nurul; Kusno, Asniawaty
2018-03-01
This study analyzes the relative temperature and humidity of the air outside the building. Data were retrieved from a Vaisala weather-monitoring device, an RTU (Remote Terminal Unit) that is part of the AWS (Automatic Weather Stations) network, and then processed and analyzed with Microsoft Excel to produce fluctuation graphs showing the average value, standard deviation, maximum value, and minimum value. The processed data were grouped daily and monthly, at 30-minute intervals. The results showed that the outside air temperatures in March, April, May and September 2016 fell within the thermal comfort zone according to the SNI standard (Indonesian National Standard) only at 06.00-10.00. From late March to early April the thermal comfort zone also occurred at 15.30-18.00. The highest maximum air temperature occurred in September 2016 at 11.01-11.30, and the lowest minimum value in September 2016 at 06.00-06.30. Further analysis shows the level of agreement of the data with the thermal comfort zone based on SNI (Indonesian National Standard) for each month.
Temperature effects on wavelength calibration of the optical spectrum analyzer
NASA Astrophysics Data System (ADS)
Mongkonsatit, Kittiphong; Ranusawud, Monludee; Srikham, Sitthichai; Bhatranand, Apichai; Jiraraksopakun, Yuttapong
2018-03-01
This paper presents an investigation of temperature effects on the wavelength calibration of an optical spectrum analyzer (OSA). The characteristics of wavelength dependence on temperature are described and demonstrated under the guidance of IEC 62129-1:2006, the international standard for the calibration of wavelength/optical frequency measurement instruments - Part 1: Optical spectrum analyzer. Three distributed-feedback lasers emitting light at wavelengths of 1310 nm, 1550 nm, and 1600 nm were used as light sources in this work. Each beam was split by a 1 x 2 fiber splitter, with one end connected to a standard wavelength meter and the other to the OSA under test. Two experimental setups were arranged to analyze the wavelength-reading deviations between the standard wavelength meter and the OSA under a variety of temperature and humidity conditions. The experimental results showed that, for wavelengths of 1550 nm and 1600 nm, the wavelength deviations were proportional to temperature, with a minimum and maximum of -0.015 and 0.030 nm, respectively, while the deviations at 1310 nm did not change much with temperature, remaining in the range of -0.003 nm to 0.010 nm. The measurement uncertainty was also evaluated according to IEC 62129-1:2006; its main contribution was the wavelength deviation. The uncertainty of measurement in this study is 0.023 nm with coverage factor k = 2.
Analysis of Nuclear Propagation Effects Utilizing Wideband Satellite Data.
1981-04-01
integrated phase spectral energy on scales shorter than ~30 km. Like the TEC, the standard deviation of phase depends on the effective thickness of... Vila, P., "Etude Experimentale de l'Anomalie Ionospherique Equatoriale en Afrique en Periode de Minimum Solaire," Annales de Geophysique, Vol. 22, No...
NASA Astrophysics Data System (ADS)
Kuruliuk, K. A.; Kulesh, V. P.
2016-10-01
An optical videogrammetry method using one digital camera was developed for non-contact measurements of the geometric shape parameters, position and motion of models and structural elements of aircraft in experimental aerodynamics. Tests using this method to measure six components (three linear and three angular) of the real position of a helicopter device in a wind tunnel flow were conducted, with a distance of 15 meters between the camera and the test object. It was shown in practice that, under the conditions of an aerodynamic experiment, the instrumental measurement error (standard deviation) for angular and linear displacements of the helicopter device does not exceed 0.02° and 0.3 mm, respectively. Analysis of the results shows that at the minimum rotor thrust the deviations are systematic and generally lie within ±0.2 degrees; deviations of the angle values grow as rotor thrust increases.
Irlenbusch, Ulrich; Berth, Alexander; Blatter, Georges; Zenz, Peter
2012-03-01
Most anthropometric data on the proximal humerus has been obtained from deceased healthy individuals with no deformities. Endoprostheses are implanted for primary and secondary osteoarthritis, rheumatoid arthritis, humeral-head necrosis, fracture sequelae and other humeral-head deformities. This indicates that pathologico-anatomical variability may be greater than previously assumed. We therefore investigated a group of patients with typical shoulder replacement diagnoses, including posttraumatic and rheumatic deformities. One hundred and twenty-two patients with a double eccentrically adjustable shaft endoprosthesis served as a specific dimension gauge to determine in vivo the individual humeral-head rotation centres from the position of the adjustable prosthesis taper and the eccentric head. All prosthesis heads were positioned eccentrically. The entire adjustment range of the prosthesis of 12 mm medial/lateral and 6 mm dorsal/ventral was required. Mean values for effective offset were 5.84 mm mediolaterally [standard deviation (SD) 1.95, minimum +2, maximum +11] and 1.71 mm anteroposteriorly (SD 1.71, minimum −3, maximum 3 mm), averaging 5.16 mm (SD 1.76, minimum +2, maximum +10). The posterior offset averaged 1.85 mm (SD 1.85, minimum −1, maximum +6 mm). In summary, variability of the combined medial and dorsal offset of the humeral-head rotational centre determined in patients with typical underlying diagnoses in shoulder replacement was not greater than that recorded in the literature for healthy deceased patients. The range of deviation is substantial and shows the need for an adjustable prosthetic system.
Lourenço, Anália; Coenye, Tom; Goeres, Darla M; Donelli, Gianfranco; Azevedo, Andreia S; Ceri, Howard; Coelho, Filipa L; Flemming, Hans-Curt; Juhna, Talis; Lopes, Susana P; Oliveira, Rosário; Oliver, Antonio; Shirtliff, Mark E; Sousa, Ana M; Stoodley, Paul; Pereira, Maria Olivia; Azevedo, Nuno F
2014-04-01
The minimum information about a biofilm experiment (MIABiE) initiative has arisen from the need to find an adequate and scientifically sound way to control the quality of the documentation accompanying the public deposition of biofilm-related data, particularly those obtained using high-throughput devices and techniques. Thereby, the MIABiE consortium has initiated the identification and organization of a set of modules containing the minimum information that needs to be reported to guarantee the interpretability and independent verification of experimental results and their integration with knowledge coming from other fields. MIABiE does not intend to propose specific standards on how biofilms experiments should be performed, because it is acknowledged that specific research questions require specific conditions which may deviate from any standardization. Instead, MIABiE presents guidelines about the data to be recorded and published in order for the procedure and results to be easily and unequivocally interpreted and reproduced. Overall, MIABiE opens up the discussion about a number of particular areas of interest and attempts to achieve a broad consensus about which biofilm data and metadata should be reported in scientific journals in a systematic, rigorous and understandable manner. © 2014 Federation of European Microbiological Societies. Published by John Wiley & Sons Ltd. All rights reserved.
Refractive index and birefringence of 2H silicon carbide
NASA Technical Reports Server (NTRS)
Powell, J. A.
1972-01-01
The refractive indices of 2H SiC were measured over the wavelength range 435.8 to 650.9 nm by the method of minimum deviation. At the wavelength lambda = 546.1 nm, the ordinary index n sub 0 was 2.6480 and the extraordinary index n sub e was 2.7237. The estimated error (standard deviation) in the measured values is 0.0006 for n sub 0 and 0.0009 for n sub e. The experimental data were curve fitted to the Cauchy equation for the index of refraction as a function of wavelength. The birefringence of 2H SiC was found to vary from 0.0719 at lambda = 650.9 nm to 0.0846 at lambda = 435.8 nm.
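The minimum-deviation method quoted above rests on the standard prism relation n = sin((A + D_min)/2) / sin(A/2), where A is the prism apex angle and D_min the angle of minimum deviation. The sketch below illustrates the relation with generic numbers (a hypothetical 60° prism), not the paper's SiC data.

```python
from math import asin, degrees, radians, sin

def refractive_index(apex_deg, min_dev_deg):
    """Refractive index from the prism minimum-deviation relation
    n = sin((A + D_min)/2) / sin(A/2)."""
    a = radians(apex_deg)
    d = radians(min_dev_deg)
    return sin((a + d) / 2) / sin(a / 2)

def min_deviation(apex_deg, n):
    """Inverse relation: angle of minimum deviation for a given index."""
    a = radians(apex_deg)
    return degrees(2 * asin(n * sin(a / 2)) - a)

# Round trip for a hypothetical 60-degree prism with n = 1.5:
d = min_deviation(60.0, 1.5)   # about 37.18 degrees
n = refractive_index(60.0, d)  # recovers 1.5
```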
NASA Astrophysics Data System (ADS)
Pournoury, M.; Zamiri, A.; Kim, T. Y.; Yurlov, V.; Oh, K.
2016-03-01
Capacitive touch screens using metal materials have recently become qualified as substitutes for ITO; however, several obstacles remain to be solved, one of the most important being the moiré phenomenon. The visibility problem of the metal mesh in a touch sensor module (TSM) is considered numerically in this paper. Based on the contrast sensitivity function (CSF) of the human eye, the moiré pattern of the TSM electrode mesh structure is simulated in MATLAB for an 8-inch screen display in oblique view. The standard deviation of the moiré generated by the superposition of the electrode mesh and the screen image is calculated to find the optimal parameters that provide the minimum moiré visibility. A rectangular function is used to create the screen pixel array and the mesh electrode. The filtered image, in the frequency domain, is obtained by multiplying the Fourier transform of the finite mesh pattern (the product of screen pixels and mesh electrode) by the calculated CSF function for three observer distances (L = 200, 300 and 400 mm). The discrepancy between analytical and numerical results is less than 0.6% at a 400 mm viewing distance. Moreover, in the oblique-view case, because the thickness of the finite film between the mesh electrodes and the screen is taken into account, different points of minimum standard deviation of the moiré pattern are predicted compared to the normal view.
Quantification of functional abilities in Rett syndrome: a comparison between stages III and IV
Monteiro, Carlos BM; Savelsbergh, Geert JP; Smorenburg, Ana RP; Graciani, Zodja; Torriani-Pasin, Camila; de Abreu, Luiz Carlos; Valenti, Vitor E; Kok, Fernando
2014-01-01
We aimed to evaluate the functional abilities of persons with Rett syndrome (RTT) in stages III and IV. The group consisted of 60 females who had been diagnosed with RTT: 38 in stage III, mean age (years) of 9.14, with a standard deviation of 5.84 (minimum 2.2/maximum 26.4); and 22 in stage IV, mean age of 12.45, with a standard deviation of 6.17 (minimum 5.3/maximum 26.9). The evaluation was made using the Pediatric Evaluation of Disability Inventory, which has 197 items in the areas of self-care, mobility, and social function. The results showed that in the area of self-care, stage III and stage IV RTT persons had a level of 24.12 and 18.36 (P=0.002), respectively. In the area of mobility, stage III had 37.22 and stage IV had 14.64 (P<0.001), while in the area of social function, stage III had 17.72 and stage IV had 12.14 (P=0.016). In conclusion, although persons with stage III RTT have better functional abilities when compared with stage IV, the areas of mobility, self-care, and social function are quite affected, which shows a great functional dependency and need for help in basic activities of daily life. PMID:25061307
Relationship between fluid bed aerosol generator operation and the aerosol produced
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carpenter, R.L.; Yerkes, K.
1980-12-01
The relationships between bed operation in a fluid bed aerosol generator and aerosol output were studied. A two-inch diameter fluid bed aerosol generator (FBG) was constructed using stainless steel powder as a fluidizing medium. Fly ash from coal combustion was aerosolized and the influence of FBG operating parameters on aerosol mass median aerodynamic diameter (MMAD), geometric standard deviation (σg) and concentration was examined. In an effort to extend observations on large fluid beds to small beds using fine bed particles, minimum fluidizing velocities and elutriation constants were computed. Although the FBG minimum fluidizing velocity agreed well with calculations, the FBG elutriation constant did not. The results of this study show that the properties of aerosols produced by a FBG depend on fluid bed height and air flow through the bed after the minimum fluidizing velocity is exceeded.
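Minimum fluidizing velocity is commonly estimated with the Wen–Yu correlation, Re_mf = sqrt(33.7² + 0.0408·Ar) − 33.7, where Ar is the Archimedes number. This is a sketch of that textbook correlation, not necessarily the calculation the authors used, and the particle and gas property values below are illustrative assumptions.

```python
from math import sqrt

G = 9.81  # gravitational acceleration, m/s^2

def u_mf_wen_yu(d_p, rho_p, rho_g, mu_g):
    """Minimum fluidizing velocity (m/s) via the Wen-Yu correlation.

    d_p   : bed particle diameter, m
    rho_p : particle density, kg/m^3
    rho_g : gas density, kg/m^3
    mu_g  : gas dynamic viscosity, Pa*s
    """
    # Archimedes number
    ar = rho_g * (rho_p - rho_g) * G * d_p**3 / mu_g**2
    # Particle Reynolds number at minimum fluidization
    re_mf = sqrt(33.7**2 + 0.0408 * ar) - 33.7
    # Convert back to a superficial gas velocity
    return re_mf * mu_g / (rho_g * d_p)

# Illustrative: 200-micron stainless steel powder fluidized by room air
u = u_mf_wen_yu(d_p=200e-6, rho_p=7800.0, rho_g=1.2, mu_g=1.8e-5)
```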
Schroeder, A A; Ford, N L; Coil, J M
2017-03-01
To determine whether post space preparation deviated from the root canal preparation in canals filled with Thermafil, GuttaCore or warm vertically compacted gutta-percha. Forty-two extracted human permanent maxillary lateral incisors were decoronated, and their root canals instrumented using a standardized protocol. Samples were divided into three groups and filled with Thermafil (Dentsply Tulsa Dental Specialties, Johnson City, TN, USA), GuttaCore (Dentsply Tulsa Dental Specialties) or warm vertically compacted gutta-percha, before post space preparation was performed with a GT Post drill (Dentsply Tulsa Dental Specialties). Teeth were scanned using micro-computed tomography after root filling and again after post space preparation. Scans were examined for the number of samples with post space deviation, linear deviation of post space preparation, and minimum root thickness before and after post space preparation. Parametric data were analysed with one-way analysis of variance (ANOVA) or one-tailed paired Student's t-tests, whilst nonparametric data were analysed with Fisher's exact test. Deviation occurred in eight of forty-two teeth (19%): seven of fourteen from the Thermafil group (50%), one of fourteen from the GuttaCore group (7%), and none from the gutta-percha group. Deviation occurred significantly more often in the Thermafil group than in each of the other two groups (P < 0.05). Linear deviation of post space preparation was greater in the Thermafil group than in both of the other groups and was significantly greater than that of the gutta-percha group (P < 0.05). Minimum root thickness before post space preparation was significantly greater than after post space preparation for all groups (P < 0.01).
The differences between the Thermafil, GuttaCore and gutta-percha groups in the number of samples with post space deviation and in linear deviation of post space preparation were associated with the presence or absence of a carrier as well as the different carrier materials. © 2016 International Endodontic Journal. Published by John Wiley & Sons Ltd.
Damrau, D.L.
1993-01-01
Increased awareness of the quality of water in the United States has led to the development of a method for determining low levels (0.2-5.0 microg/L) of silver in water samples. Use of graphite furnace atomic absorption spectrophotometry provides a sensitive, precise, and accurate method for determining low-level silver in samples of low ionic-strength water, precipitation water, and natural water. The minimum detection limit determined for low-level silver is 0.2 microg/L. Precision data were collected on natural-water samples and SRWS (Standard Reference Water Samples). The overall percent relative standard deviation for natural-water samples with silver concentrations more than 0.2 microg/L was less than 40 percent throughout the analytical range. For the SRWS with concentrations more than 0.2 microg/L, the overall percent relative standard deviation was less than 25 percent throughout the analytical range. The accuracy of the results was determined by spiking 6 natural-water samples with different known concentrations of the silver standard. The recoveries ranged from 61 to 119 percent at the 0.5-microg/L spike level. At the 1.25-microg/L spike level, the recoveries ranged from 92 to 106 percent. For the high spike level at 3.0 microg/L, the recoveries ranged from 65 to 113 percent. The measured concentrations of silver obtained from known samples were within the Branch of Quality Assurance accepted limits of 1 1/2 standard deviations on the basis of the SRWS program for Inter-Laboratory studies.
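The percent relative standard deviation reported above is simply 100 times the sample standard deviation divided by the mean. A minimal sketch in Python, using hypothetical replicate values rather than the report's data:

```python
import statistics

def percent_rsd(values):
    """Percent relative standard deviation: 100 * sample std / mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical replicate silver determinations (microg/L), not the report's data
replicates = [0.52, 0.48, 0.55, 0.50, 0.45]
rsd = percent_rsd(replicates)
```

Values well below the 25-40 percent ceilings quoted above would indicate comparatively precise replicates.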
NASA Astrophysics Data System (ADS)
Kürbis, K.; Mudelsee, M.; Tetzlaff, G.; Brázdil, R.
2009-09-01
For the analysis of trends in weather extremes, we introduce a diagnostic index variable, the exceedance product, which combines intensity and frequency of extremes. We separate trends in higher moments from trends in mean or standard deviation and use bootstrap resampling to evaluate statistical significances. The application of the concept of the exceedance product to daily meteorological time series from Potsdam (1893 to 2005) and Prague-Klementinum (1775 to 2004) reveals that extremely cold winters occurred only until the mid-20th century, whereas warm winters show upward trends. These changes were significant in higher moments of the temperature distribution. In contrast, trends in summer temperature extremes (e.g., the 2003 European heatwave) can be explained by linear changes in mean or standard deviation. While precipitation at Potsdam does not show pronounced trends, dew point does exhibit a change from maximum extremes during the 1960s to minimum extremes during the 1970s.
Optimization of Adaptive Intraply Hybrid Fiber Composites with Reliability Considerations
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1994-01-01
The reliability with bounded distribution parameters (mean, standard deviation) was maximized and the reliability-based cost was minimized for adaptive intra-ply hybrid fiber composites by using a probabilistic method. The probabilistic method accounts for all naturally occurring uncertainties including those in constituent material properties, fabrication variables, structure geometry, and control-related parameters. Probabilistic sensitivity factors were computed and used in the optimization procedures. For actuated change in the angle of attack of an airfoil-like composite shell structure with an adaptive torque plate, the reliability was maximized to 0.9999 probability, with constraints on the mean and standard deviation of the actuation material volume ratio (percentage of actuation composite material in a ply) and the actuation strain coefficient. The reliability-based cost was minimized for an airfoil-like composite shell structure with an adaptive skin and a mean actuation material volume ratio as the design parameter. At a 0.9-mean actuation material volume ratio, the minimum cost was obtained.
Pope, Larry M.; Diaz, A.M.
1982-01-01
Quality-of-water data, collected October 21-23, 1980, and a statistical summary are presented for 42 coal-mined strip pits in Crawford and Cherokee Counties, southeastern Kansas. The statistical summary includes minimum and maximum observed values, mean, and standard deviation. Simple linear regression equations relating specific conductance, dissolved solids, and acidity to concentrations of dissolved solids, sulfate, calcium, magnesium, potassium, aluminum, and iron are also presented. (USGS)
Predictive model for disinfection by-product in Alexandria drinking water, northern west of Egypt.
Abdullah, Ali M; Hussona, Salah El-dien
2013-10-01
Chlorine has been utilized in the early stages of water treatment processes as a disinfectant. Disinfection of drinking water reduces the risk of pathogenic infection but may pose a chemical threat to human health due to disinfection residues and their by-products (DBP) when organic and inorganic precursors are present in the water. In the last two decades, many modeling attempts have been made to predict the occurrence of DBP in drinking water. Models have been developed based on data generated in laboratory-scale and field-scale investigations. The objective of this paper is to develop a predictive model for DBP formation in the Alexandria governorate, in the northwest of Egypt, based on field-scale investigations as well as laboratory-controlled experimentations. The present study showed that the correlation coefficient between predicted and measured trihalomethanes (THM) was R² = 0.88; the minimum deviation percentage between predicted and measured THM was 0.8%, the maximum 89.3%, and the average 17.8%. The correlation coefficient between predicted and measured dichloroacetic acid (DCAA) was R² = 0.98; the minimum deviation percentage was 1.3%, the maximum 47.2%, and the average 16.6%. In addition, the correlation coefficient between predicted and measured trichloroacetic acid (TCAA) was R² = 0.98; the minimum deviation percentage was 4.9%, the maximum 43.0%, and the average 16.0%.
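The minimum, maximum, and average deviation percentages reported above can be sketched in a few lines; the predicted and measured concentrations below are hypothetical, not the study's data:

```python
def deviation_stats(predicted, measured):
    """Minimum, maximum, and average percentage deviation of predictions."""
    devs = [100.0 * abs(p - m) / m for p, m in zip(predicted, measured)]
    return min(devs), max(devs), sum(devs) / len(devs)

# Hypothetical THM concentrations (microg/L), not the study's data
pred = [42.0, 55.0, 30.0]
meas = [40.0, 50.0, 33.0]
dev_min, dev_max, dev_avg = deviation_stats(pred, meas)
```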
Piva, Sara R.; Gil, Alexandra B.; Moore, Charity G.; Fitzgerald, G. Kelley
2016-01-01
Objective To assess internal and external responsiveness of the Activity of Daily Living Scale of the Knee Outcome Survey and Numeric Pain Rating Scale on patients with patellofemoral pain. Design One group pre-post design. Subjects A total of 60 individuals with patellofemoral pain (33 women; mean age 29.9 (standard deviation 9.6) years). Methods The Activity of Daily Living Scale and the Numeric Pain Rating Scale were assessed before and after 8 weeks of physical therapy program. Patients completed a global rating of change scale at the end of therapy. The standardized effect size, Guyatt responsiveness index, and the minimum clinical important difference were calculated. Results Standardized effect size of the Activity of Daily Living Scale was 0.63, Guyatt responsiveness index was 1.4, area under the curve was 0.83 (95% confidence interval: 0.72, 0.94), and the minimum clinical important difference corresponded to an increase of 7.1 percentile points. Standardized effect size of the Numeric Pain Rating Scale was 0.72, Guyatt responsiveness index was 2.2, area under the curve was 0.80 (95% confidence interval: 0.70, 0.92), and the minimum clinical important difference corresponded to a decrease of 1.16 points. Conclusion Information from this study may be helpful to therapists when evaluating the effectiveness of rehabilitation intervention on physical function and pain, and to power future clinical trials on patients with patellofemoral pain. PMID:19229444
NASA Technical Reports Server (NTRS)
Holland, Frederic A., Jr.
2004-01-01
Modern engineering design practices are tending more toward the treatment of design parameters as random variables as opposed to fixed, or deterministic, values. The probabilistic design approach attempts to account for the uncertainty in design parameters by representing them as a distribution of values rather than as a single value. The motivations for this effort include preventing excessive overdesign as well as assessing and assuring reliability, both of which are important for aerospace applications. However, the determination of the probability distribution is a fundamental problem in reliability analysis. A random variable is often defined by the parameters of the theoretical distribution function that gives the best fit to experimental data. In many cases the distribution must be assumed from very limited information or data. Often the types of information that are available or reasonably estimated are the minimum, maximum, and most likely values of the design parameter. For these situations the beta distribution model is very convenient because the parameters that define the distribution can be easily determined from these three pieces of information. Widely used in the field of operations research, the beta model is very flexible and is also useful for estimating the mean and standard deviation of a random variable given only the aforementioned three values. However, an assumption is required to determine the four parameters of the beta distribution from only these three pieces of information (some of the more common distributions, like the normal, lognormal, gamma, and Weibull distributions, have two or three parameters). The conventional method assumes that the standard deviation is a certain fraction of the range. The beta parameters are then determined by solving a set of equations simultaneously. 
A new method developed in-house at the NASA Glenn Research Center assumes a value for one of the beta shape parameters based on an analogy with the normal distribution (ref. 1). This new approach allows for a very simple and direct algebraic solution without restricting the standard deviation. The beta parameters obtained by the new method are comparable to the conventional method (and identical when the distribution is symmetrical). However, the proposed method generally produces a less peaked distribution with a slightly larger standard deviation (up to 7 percent) than the conventional method in cases where the distribution is asymmetric or skewed. The beta distribution model has now been implemented into the Fast Probability Integration (FPI) module used in the NESSUS computer code for probabilistic analyses of structures (ref. 2).
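The conventional method described above can be sketched as follows, under the common PERT-style assumptions that the mean is (a + 4m + b)/6 and the standard deviation is one-sixth of the range; the method-of-moments conversion to beta shape parameters is standard, but the specific values below are illustrative only:

```python
def beta_from_three_points(a, m, b):
    """Beta shape parameters from minimum a, most likely m, and maximum b,
    assuming mean = (a + 4m + b)/6 and std = (b - a)/6 (PERT-style)."""
    mean = (a + 4.0 * m + b) / 6.0
    var = ((b - a) / 6.0) ** 2
    mu = (mean - a) / (b - a)      # mean mapped to the unit interval
    v = var / (b - a) ** 2         # variance mapped to the unit interval
    common = mu * (1.0 - mu) / v - 1.0
    return mu * common, (1.0 - mu) * common, mean, var ** 0.5

# Symmetric example: min 0, most likely 0.5, max 1 gives Beta(4, 4)
alpha, beta, mean, std = beta_from_three_points(0.0, 0.5, 1.0)
```

The NASA in-house variant instead fixes one shape parameter by analogy with the normal distribution, which removes the need to assume the standard deviation.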
NASA Astrophysics Data System (ADS)
Rodrigo, Fernando S.
2010-05-01
In this work, a reconstruction of winter rainfall and temperature in Andalusia (southern Iberian Peninsula) during the period 1750-1850 is presented. The reconstruction is based on the analysis of a wide variety of documentary data. This period is interesting because it is characterized by a minimum in solar irradiance (the Dalton Minimum, around 1800) as well as intense volcanic activity (for instance, the eruption of Tambora in 1815), when increasing atmospheric CO2 concentrations were of minor importance. The reconstruction methodology is based on counting the number of extreme events in the past and inferring the mean value and standard deviation under the assumption of a normal distribution for the climate variables. Results are compared with the behaviour of regional series for the reference period 1960-1990. The comparison of the distribution functions corresponding to the 1790-1820 and 1960-1990 periods indicates that during the Dalton Minimum the frequency of droughts and warm winters was lower than during the reference period, while the frequencies of wet and cold winters were similar. Future research work is outlined.
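The inference step, recovering a mean and standard deviation from the observed frequency of extremes under the normality assumption, can be sketched as follows; the thresholds and frequencies below are hypothetical, not the paper's data:

```python
from statistics import NormalDist

def normal_from_extreme_counts(t_lo, p_lo, t_hi, p_hi):
    """Infer mean and standard deviation of a normal variable from the
    observed frequencies of extremes: p_lo = P(X < t_lo), p_hi = P(X > t_hi)."""
    z_lo = NormalDist().inv_cdf(p_lo)
    z_hi = NormalDist().inv_cdf(1.0 - p_hi)
    sigma = (t_hi - t_lo) / (z_hi - z_lo)
    return t_lo - z_lo * sigma, sigma

# Hypothetical winter-temperature thresholds (deg C): 10% of winters below
# 2.0 and 10% above 8.0 imply a distribution symmetric about 5.0
mu, sigma = normal_from_extreme_counts(2.0, 0.10, 8.0, 0.10)
```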
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rana, R; Bednarek, D; Rudin, S
2015-06-15
Purpose: Anti-scatter grid-line artifacts are more prominent for high-resolution x-ray detectors since the fraction of a pixel blocked by the grid septa is large. Direct logarithmic subtraction of the artifact pattern is limited by residual scattered radiation and we investigate an iterative method for scatter correction. Methods: A stationary Smit-Röntgen anti-scatter grid was used with a high resolution Dexela 1207 CMOS X-ray detector (75 µm pixel size) to image an artery block (Nuclear Associates, Model 76-705) placed within a uniform head equivalent phantom as the scattering source. The image of the phantom was divided by a flat-field image obtained without scatter but with the grid to eliminate grid-line artifacts. Constant scatter values were subtracted from the phantom image before dividing by the averaged flat-field-with-grid image. The standard deviation of pixel values for a fixed region of the resultant images with different subtracted scatter values provided a measure of the remaining grid-line artifacts. Results: A plot of the standard deviation of image pixel values versus the subtracted scatter value shows that the image structure noise reaches a minimum before going up again as the scatter value is increased. This minimum corresponds to a minimization of the grid-line artifacts as demonstrated in line profile plots obtained through each of the images perpendicular to the grid lines. Artifact-free images of the artery block were obtained with the optimal scatter value obtained by this iterative approach. Conclusion: Residual scatter subtraction can provide improved grid-line artifact elimination when using the flat-field with grid “subtraction” technique. The standard deviation of image pixel values can be used to determine the optimal scatter value to subtract to obtain a minimization of grid line artifacts with high resolution x-ray imaging detectors.
This study was supported by NIH Grant R01EB002873 and an equipment grant from Toshiba Medical Systems Corp.
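The iterative search described above can be sketched with synthetic data: subtract a candidate scatter value, divide by the flat-field image, and keep the scatter value that minimizes the standard deviation of the result. All numbers below are illustrative stand-ins, not detector data:

```python
import math
import random

random.seed(0)

# Synthetic stand-ins, not detector data: a flat-field image with grid-line
# structure, and a phantom image containing a constant scatter contribution
flat = [800.0 if i % 4 == 0 else 1000.0 for i in range(256)]  # grid septa
true_scatter = 300.0
phantom = [0.8 * f + true_scatter + random.gauss(0.0, 1.0) for f in flat]

def structure_noise(scatter):
    """Std of the scatter-subtracted, flat-field-divided image; residual
    grid-line structure inflates this value."""
    ratio = [(p - scatter) / f for p, f in zip(phantom, flat)]
    mean = sum(ratio) / len(ratio)
    return math.sqrt(sum((r - mean) ** 2 for r in ratio) / len(ratio))

# Sweep candidate scatter values; the minimum marks the best estimate
best_scatter = min(range(0, 600, 10), key=structure_noise)
```

Only at the true scatter value does the division by the flat field cancel the grid pattern exactly, which is why the standard deviation dips there.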
Simulated laser fluorosensor signals from subsurface chlorophyll distributions
NASA Technical Reports Server (NTRS)
Venable, D. D.; Khatun, S.; Punjabi, A.; Poole, L.
1986-01-01
A semianalytic Monte Carlo model has been used to simulate laser fluorosensor signals returned from subsurface distributions of chlorophyll. This study assumes the only constituent of the ocean medium is the common coastal zone dinoflagellate Prorocentrum minimum. The concentration is represented by Gaussian distributions in which the location of the distribution maximum and the standard deviation are variable. Most of the qualitative features observed in the fluorescence signal for total chlorophyll concentrations up to 1.0 microg/liter can be accounted for with a simple analytic solution assuming a rectangular chlorophyll distribution function.
Anesthesiologists' perceptions of minimum acceptable work habits of nurse anesthetists.
Logvinov, Ilana I; Dexter, Franklin; Hindman, Bradley J; Brull, Sorin J
2017-05-01
Work habits are non-technical skills that are an important part of job performance. Although non-technical skills are usually evaluated on a relative basis (i.e., "grading on a curve"), validity of evaluation on an absolute basis (i.e., "minimum passing score") needs to be determined. Survey and observational study. None. None. The theme of "work habits" was assessed using a modification of Dannefer et al.'s 6-item scale, with scores ranging from 1 (lowest performance) to 5 (highest performance). E-mail invitations were sent to all consultant and fellow anesthesiologists at Mayo Clinic in Florida, Arizona, and Minnesota. Because work habits expectations can be generational, the survey was designed for adjustment based on all invited (responding or non-responding) anesthesiologists' year of graduation from residency. The overall mean±standard deviation of the score for anesthesiologists' minimum expectations of nurse anesthetists' work habits was 3.64±0.66 (N=48). Minimum acceptable scores were correlated with the year of graduation from anesthesia residency (linear regression P=0.004). Adjusting for survey non-response using all N=207 anesthesiologists, the mean of the minimum acceptable work habits adjusted for year of graduation was 3.69 (standard error 0.02). The minimum expectations for nurse anesthetists' work habits were compared with observational data obtained from the University of Iowa. Among 8940 individual nurse anesthetist work habits scores, only 2.6% were <3.69. All N=65 of the Iowa nurse anesthetists' mean work habits scores were significantly greater than the Mayo estimate (3.69) for the minimum expectations; all P<0.00024. Our results suggest that routinely evaluated work habits of nurse anesthetists within departments should not be compared with an appropriate minimum score (i.e., 3.69). Instead, work habits scores should be analyzed based on relative reporting among anesthetists. Copyright © 2017 Elsevier Inc. All rights reserved.
Osei, Ernest; Barnett, Rob
2015-01-01
The aim of this study is to provide guidelines for the selection of external‐beam radiation therapy target margins to compensate for target motion in the lung during treatment planning. A convolution model was employed to predict the effect of target motion on the delivered dose distribution. The accuracy of the model was confirmed with radiochromic film measurements in both static and dynamic phantom modes. 502 unique patient breathing traces were recorded and used to simulate the effect of target motion on a dose distribution. A 1D probability density function (PDF) representing the position of the target throughout the breathing cycle was generated from each breathing trace obtained during 4D CT. Changes in the target D95 (the minimum dose received by 95% of the treatment target) due to target motion were analyzed and shown to correlate with the standard deviation of the PDF. Furthermore, the amount of target D95 recovered per millimeter of increased field width was also shown to correlate with the standard deviation of the PDF. The sensitivity of changes in dose coverage with respect to target size was also determined. Margin selection recommendations that can be used to compensate for loss of target D95 were generated based on the simulation results. These results are discussed in the context of clinical plans. We conclude that, for PDF standard deviations less than 0.4 cm with target sizes greater than 5 cm, little or no additional margins are required. Targets which are smaller than 5 cm with PDF standard deviations larger than 0.4 cm are most susceptible to loss of coverage. The largest additional required margin in this study was determined to be 8 mm. PACS numbers: 87.53.Bn, 87.53.Kn, 87.55.D‐, 87.55.Gh
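The convolution model above can be sketched in one dimension: blur an idealized static dose profile with a motion PDF and read off the target D95. The field size, target size, and 4 mm PDF standard deviation below are assumptions for illustration, not clinical data:

```python
import math

# 1 mm grid; idealized static profile: 100% dose inside a 50 mm field
xs = list(range(-100, 101))
dose = [100.0 if abs(x) <= 25 else 0.0 for x in xs]

# Motion PDF: Gaussian with a 4 mm standard deviation (cf. the 0.4 cm figure)
sigma = 4.0
kernel = {dx: math.exp(-0.5 * (dx / sigma) ** 2) for dx in range(-20, 21)}
norm = sum(kernel.values())

def blurred_dose(i):
    """Delivered dose at index i: static dose convolved with the motion PDF."""
    total = 0.0
    for dx, w in kernel.items():
        if 0 <= i + dx < len(dose):
            total += w * dose[i + dx]
    return total / norm

# D95 of a hypothetical 30 mm target centred in the field: the minimum dose
# received by 95% of the target (5th percentile of the target dose values)
target = sorted(blurred_dose(i) for i, x in enumerate(xs) if abs(x) <= 15)
d95 = target[int(0.05 * len(target))]
```

Widening the field or shrinking the PDF standard deviation pushes D95 back toward 100%, which is the trade-off the margin recommendations quantify.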
Zhu, Y Q; Long, Q; Xiao, Q F; Zhang, M; Wei, Y L; Jiang, H; Tang, B
2018-03-13
Objective: To investigate the association between blood pressure variability and sleep stability, assessed by cardiopulmonary coupling, in essential hypertensive patients with sleep disorder. Methods: Eighty-eight new cases of essential hypertension, recruited according to strict inclusion and exclusion criteria from the international department and the cardiology department of the China-Japan Friendship Hospital, were enrolled. Sleep stability and 24 h ambulatory blood pressure data were collected with a portable sleep monitor based on the cardiopulmonary coupling technique and a 24 h ambulatory blood pressure monitor, and the correlation between blood pressure variability and sleep stability was analysed. Results: In the nighttime, systolic blood pressure standard deviation, systolic blood pressure variation coefficient, the ratio of the systolic blood pressure minimum to the maximum, diastolic blood pressure standard deviation, and diastolic blood pressure variation coefficient were positively correlated with unstable sleep duration (r = 0.185, 0.24, 0.237, 0.43, 0.276, P < 0.05). Conclusions: Blood pressure variability is associated with sleep stability, especially at night: the longer the unstable sleep duration, the greater the variability in night blood pressure.
Planned delayed relaxing retinotomy for proliferative vitreoretinopathy.
Williamson, Tom H; Gupta, Bhaskar
2010-01-01
A program involving three operations was tested: the first to reattach most of the retina under silicone oil, the second to reattach the remaining retina by planned delayed relaxing retinectomy (PDRR), and the third to remove silicone oil. Review of electronic records of patients receiving PDRR for proliferative vitreoretinopathy (PVR). The primary end point was reattached retina without silicone oil. Eighty-seven patients had PVR and 27 received PDRR (mean age: 66.6 years; mean follow-up: 2.3 years). Ten patients had grade B PVR, 8 had CP1 to CP6, and 7 had CA2 to CA6. Twenty-four (89%) patients achieved a reattached retina without silicone oil. Mean logarithm of the minimum angle of resolution visual acuities were 1.41 (standard deviation = 0.67) at presentation and 1.21 (standard deviation = 0.58) at final follow-up. Four patients had glaucoma and 1 had scleromalacia. The overall success rate for all patients with PVR was 85% reattached retina without oil tamponade. PDRR contributes to a high chance of reattached retina and oil removal in PVR. Copyright 2010, SLACK Incorporated.
High-precision temperature control and stabilization using a cryocooler.
Hasegawa, Yasuhiro; Nakamura, Daiki; Murata, Masayuki; Yamamoto, Hiroya; Komine, Takashi
2010-09-01
We describe a method for precisely controlling temperature using a Gifford-McMahon (GM) cryocooler that involves inserting fiber-reinforced-plastic dampers into a conventional cryosystem. Temperature fluctuations in a GM cryocooler without a large heat bath or a stainless-steel damper at 4.2 K are typically of the order of 200 mK. It is particularly difficult to control the temperature of a GM cryocooler at low temperatures. The fiber-reinforced-plastic dampers enabled us to dramatically reduce temperature fluctuations at low temperatures. A standard deviation of the temperature fluctuations of 0.21 mK could be achieved when the temperature was controlled at 4.2000 K using a feedback temperature control system with two heaters. Adding the dampers increased the minimum achievable temperature from 3.2 to 3.3 K. Precise temperature control between 4.2000 and 300.000 K was attained using the GM cryocooler, and the standard deviation of the temperature fluctuations was less than 1.2 mK even at 300 K. This technique makes it possible to control and stabilize the temperature using a GM cryocooler.
Extremes in Otolaryngology Resident Surgical Case Numbers: An Update.
Baugh, Tiffany P; Franzese, Christine B
2017-06-01
Objectives The purpose of this study is to examine the effect of minimum case numbers on otolaryngology resident case log data and understand differences in minimum, mean, and maximum among certain procedures as a follow-up to a prior study. Study Design Cross-sectional survey using a national database. Setting Academic otolaryngology residency programs. Subjects and Methods Review of otolaryngology resident national data reports from the Accreditation Council for Graduate Medical Education (ACGME) resident case log system performed from 2004 to 2015. Minimum, mean, standard deviation, and maximum values for total number of supervisor and resident surgeon cases and for specific surgical procedures were compared. Results The mean total number of resident surgeon cases for residents graduating from 2011 to 2015 ranged from 1833.3 ± 484 in 2011 to 2072.3 ± 548 in 2014. The minimum total number of cases ranged from 826 in 2014 to 1004 in 2015. The maximum total number of cases increased from 3545 in 2011 to 4580 in 2015. Multiple key indicator procedures had less than the required minimum reported in 2015. Conclusion Despite the ACGME instituting required minimum numbers for key indicator procedures, residents have graduated without meeting these minimums. Furthermore, there continues to be large variations in the minimum, mean, and maximum numbers for many procedures. Variation among resident case numbers is likely multifactorial. Ensuring proper instruction on coding and case role as well as emphasizing frequent logging by residents will ensure programs have the most accurate data to evaluate their case volume.
NASA Astrophysics Data System (ADS)
Harudin, N.; Jamaludin, K. R.; Muhtazaruddin, M. Nabil; Ramlie, F.; Muhamad, Wan Zuki Azman Wan
2018-03-01
The T-Method is one of the techniques governed under the Mahalanobis-Taguchi System, developed specifically for multivariate data prediction. Prediction using the T-Method is possible even with a very limited sample size. Users of the T-Method need to understand the population data trend clearly, since the method does not account for the effect of outliers; outliers may cause apparent non-normality, under which classical methods break down. Robust parameter estimates exist that provide satisfactory results both when the data contain outliers and when they are free of them, among them the robust location and scale estimators of Shamos-Bickel (SB) and Hodges-Lehmann (HL), which serve as counterparts to the classical mean and standard deviation. Embedding these into the T-Method normalization stage could help enhance the accuracy of the T-Method as well as test the robustness of the T-Method itself. In the higher-sample-size case study, the T-Method had the lowest average error percentage (3.09%) on data with extreme outliers, while HL and SB had the lowest error percentage (4.67%) on data without extreme outliers, with minimal error differences compared with the T-Method. The prediction error trend was reversed in the lower-sample-size case study. The results show that at minimum sample sizes, where the risk from outliers is always low, the T-Method performs better, and at higher sample sizes with extreme outliers the T-Method also shows better prediction than the alternatives. For the case studies conducted in this research, T-Method normalization gives satisfactory results, and adapting HL and SB (or the normal mean and standard deviation) into it is not worthwhile, since they change the percentage errors only minimally. Normalization using the T-Method is still considered to carry a lower risk from outlier effects.
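The Hodges-Lehmann location estimator and the Shamos scale estimator mentioned above have compact definitions: the median of all pairwise (Walsh) averages, and a scaled median of all pairwise absolute differences. A minimal sketch with illustrative data:

```python
import statistics
from itertools import combinations

def hodges_lehmann(x):
    """Robust location: median of all Walsh averages (pairwise means,
    including each point paired with itself)."""
    pairs = [(a + b) / 2.0 for a, b in combinations(x, 2)]
    return statistics.median(pairs + list(x))

def shamos(x):
    """Robust scale: median of pairwise absolute differences, scaled by
    ~1.048 for consistency with the normal standard deviation."""
    diffs = [abs(a - b) for a, b in combinations(x, 2)]
    return 1.048 * statistics.median(diffs)

data = [9.8, 10.1, 10.0, 9.9, 10.2, 55.0]  # one extreme outlier
loc = hodges_lehmann(data)                 # stays near the bulk of the data
scale = shamos(data)                       # barely affected by the outlier
```

The classical mean of the same data is about 17.5 and the standard deviation about 18.3, which illustrates why these robust counterparts are attractive when outliers are present.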
Moore, C.R.
1989-01-01
This report presents physical, chemical, and biological data collected at 50 sampling sites on selected streams in Chester County, Pennsylvania from 1969 to 1980. The physical data consist of air and water temperature, stream discharge, suspended sediment, pH, specific conductance, and dissolved oxygen. The chemical data consist of laboratory determinations of total nutrients, major ions, and trace metals. The biological data consist of total coliform, fecal coliform, and fecal streptococcus bacteriological analyses, and benthic-macroinvertebrate population analyses. Brillouin's diversity index, maximum diversity, minimum diversity, and evenness for each sample, and the median and mean Brillouin's diversity index, standard deviation, and standard error of the mean were calculated for the benthic-macroinvertebrate data for each site.
Measuring (subglacial) bedform orientation, length, and longitudinal asymmetry - Method assessment.
Jorge, Marco G; Brennand, Tracy A
2017-01-01
Geospatial analysis software provides a range of tools that can be used to measure landform morphometry. Often, a metric can be computed with different techniques that may give different results. This study is an assessment of 5 different methods for measuring longitudinal, or streamlined, subglacial bedform morphometry: orientation, length and longitudinal asymmetry, all of which require defining a longitudinal axis. The methods use the standard deviational ellipse (not previously applied in this context), the longest straight line fitting inside the bedform footprint (2 approaches), the minimum-size footprint-bounding rectangle, and Euler's approximation. We assess how well these methods replicate morphometric data derived from a manually mapped (visually interpreted) longitudinal axis, which, though subjective, is the most typically used reference. A dataset of 100 subglacial bedforms covering the size and shape range of those in the Puget Lowland, Washington, USA is used. For bedforms with elongation > 5, deviations from the reference values are negligible for all methods but Euler's approximation (length). For bedforms with elongation < 5, most methods had small mean absolute error (MAE) and median absolute deviation (MAD) for all morphometrics and thus can be confidently used to characterize the central tendencies of their distributions. However, some methods are better than others. The least precise methods are the ones based on the longest straight line and Euler's approximation; using these for statistical dispersion analysis is discouraged. Because the standard deviational ellipse method is relatively shape invariant and closely replicates the reference values, it is the recommended method. Speculatively, this study may also apply to negative-relief, and fluvial and aeolian bedforms.
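The recommended standard deviational ellipse method derives a bedform's long-axis orientation from the principal eigenvector of the coordinate covariance matrix, which can be sketched as follows; the points below are hypothetical footprint vertices, not the Puget Lowland data:

```python
import math

def sde_orientation(points):
    """Orientation (degrees, 0-180) of the long axis of the standard
    deviational ellipse: direction of the principal eigenvector of the
    coordinate covariance matrix."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    theta = 0.5 * math.atan2(2.0 * sxy, sxx - syy)  # principal-axis angle
    return math.degrees(theta) % 180.0

# Hypothetical vertices of an elongated footprint trending along y = x
pts = [(0, 0.0), (1, 1.1), (2, 1.9), (3, 3.05), (4, 4.0)]
angle = sde_orientation(pts)  # close to 45 degrees
```

Because the covariance matrix averages over all footprint vertices, this orientation is relatively insensitive to footprint shape, consistent with the shape invariance noted above.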
Refractive index and birefringence of 2H silicon carbide.
NASA Technical Reports Server (NTRS)
Powell, J. A.
1972-01-01
The refractive indices of 2H SiC were measured over the wavelength range from 435.8 to 650.9 nm by the method of minimum deviation. A curve fit of the experimental data to the Cauchy dispersion equation yielded, for the ordinary index, n_o = 2.5513 + 25,850/λ² + 8.928 × 10⁸/λ⁴ and, for the extraordinary index, n_e = 2.6161 + 28,230/λ² + 11.490 × 10⁸/λ⁴, where λ is expressed in nm. The estimated error (standard deviation) in these values is ±0.0006 for n_o and ±0.0009 for n_e. The birefringence calculated from these expressions is about 20% less than previously published values.
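The fitted Cauchy equations are straightforward to evaluate; for example (the 500 nm wavelength is an arbitrary point inside the measured range, chosen for illustration):

```python
def n_ordinary(lam_nm):
    """Ordinary index of 2H SiC from the fitted Cauchy equation (lambda in nm)."""
    return 2.5513 + 25850.0 / lam_nm**2 + 8.928e8 / lam_nm**4

def n_extraordinary(lam_nm):
    """Extraordinary index from the fitted Cauchy equation (lambda in nm)."""
    return 2.6161 + 28230.0 / lam_nm**2 + 11.490e8 / lam_nm**4

no = n_ordinary(500.0)
ne = n_extraordinary(500.0)
birefringence = ne - no
```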
Resistance Training Increases the Variability of Strength Test Scores
2009-06-08
standard deviations for pretest and posttest strength measurements. This information was recorded for every strength test used in a total of 377 samples...significant if the posttest standard deviation consistently was larger than the pretest standard deviation. This condition could be satisfied even if...the difference in the standard deviations was small. For example, the posttest standard deviation might be 1% larger than the pretest standard
Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gopich, Irina V.
2015-01-21
Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated.
Houseknecht, D.W.; Bensley, D.F.; Hathon, L.A.; Kastens, P.H.
1993-01-01
Analysis and interpretation of dispersed vitrinite reflectance data in regions of high thermal maturity (> 2% vitrinite reflectance) have been equivocal partly because of an increase in width and complexity of reflectance histograms with increasing mean reflectance. Such complexity is illustrated by random reflectance (Rran) data from the Arkoma Basin that display a linear increase in standard deviation of Rran with an increase in mean Rran from 1 to 5%. Evaluating how much of the dispersion in these data is the result of vitrinite anisotropy and how much is the result of mixing of kerogen populations by sedimentary processes and/or sampling procedures has been problematic. Automated collection of reflectance data during polarizer rotation provides preliminary data for solution of this problem. Rotational reflectance data collected from a subset of Arkoma Basin samples reveal positive, linear relationships among maximum (Rmax), random (Rran), rotational (Rrot), and minimum (Rmin) reflectance, as well as a systematic increase in bireflectance (Rmax − Rmin) with increasing reflectance. Rmax and Rrot display lower standard deviations and narrower, more nearly unimodal histograms than Rran and Rmin, suggesting that Rmax and Rrot are superior (less ambiguous) indices of thermal maturity. These data patterns are inferred to be mostly an indication of increasing vitrinite anisotropy with increasing thermal maturity, suggesting that the linear covariance observed between mean Rran and standard deviation in dispersed organic data sets from regions of high thermal maturity may be explained mostly as the result of increasing vitrinite anisotropy with increasing thermal maturity. © 1993.
Quan, Hui; Zhang, Ji
2003-09-15
Analyses of study variables are frequently based on log transformations. To calculate the power for detecting the between-treatment difference in the log scale, we need an estimate of the standard deviation of the log-transformed variable. However, in many situations a literature search only provides the arithmetic means and the corresponding standard deviations. Without individual log-transformed data to directly calculate the sample standard deviation, we need alternative methods to estimate it. This paper presents methods for estimating and constructing confidence intervals for the standard deviation of a log-transformed variable given the mean and standard deviation of the untransformed variable. It also presents methods for estimating the standard deviation of change from baseline in the log scale given the means and standard deviations of the untransformed baseline value, on-treatment value and change from baseline. Simulations and examples are provided to assess the performance of these estimates. Copyright 2003 John Wiley & Sons, Ltd.
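When the untransformed variable is assumed lognormal, a standard moment-matching identity gives the SD of the log-transformed variable from the arithmetic mean and SD. A sketch of that special case (the paper treats more general settings; the mu/sigma values below are illustrative):

```python
import math

def sd_of_log(mean_x, sd_x):
    """Estimate the standard deviation of ln(X) from the arithmetic mean
    and standard deviation of X, assuming X is lognormally distributed
    (moment matching): sigma_log = sqrt(ln(1 + (sd/mean)^2))."""
    cv2 = (sd_x / mean_x) ** 2      # squared coefficient of variation
    return math.sqrt(math.log1p(cv2))

# Sanity check against a known lognormal: if ln X ~ N(mu, sigma^2) then
# E[X] = exp(mu + sigma^2/2) and Var[X] = (exp(sigma^2) - 1) * E[X]^2.
mu, sigma = 1.0, 0.4
mean_x = math.exp(mu + sigma**2 / 2)
sd_x = mean_x * math.sqrt(math.expm1(sigma**2))
print(round(sd_of_log(mean_x, sd_x), 6))   # recovers sigma exactly
```

This lets a power calculation on the log scale proceed from literature values reported only on the original scale, at the cost of the lognormality assumption.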
Extraction of Coastlines with Fuzzy Approach Using SENTINEL-1 SAR Image
NASA Astrophysics Data System (ADS)
Demir, N.; Kaynarca, M.; Oy, S.
2016-06-01
Coastlines are important features for water resources, sea products, energy resources, etc. Because coastlines change dynamically, automated methods are necessary for analysing and detecting changes along them. In this study, a Sentinel-1 C-band SAR image has been used to extract the coastline with a fuzzy logic approach. The SAR image has VH polarisation and 10 m × 10 m spatial resolution, and covers a 57 km² area in the south-east of Puerto Rico. Radiometric calibration is applied to reduce atmospheric and orbit errors, and a speckle filter is used to reduce noise. The image is then terrain-corrected using the SRTM digital surface model. Classification of SAR imagery is a challenging task since SAR and optical sensors have very different properties; even between different bands of SAR sensors, the images look very different, so traditional unsupervised methods perform poorly. Here, a fuzzy approach is applied to distinguish coastal pixels from land surface pixels. The standard deviation, mean, and median values are calculated for use as parameters in the fuzzy approach. The Mean-standard-deviation (MS) Large membership function is used because land and ocean pixels, which dominate the SAR image, have large mean and standard deviation values. The pixel values are multiplied by 1000 to simplify the calculations. The mean is calculated as 23 and the standard deviation as 12 for the whole image. The multiplier parameters are selected as a = 0.58 and b = 0.05 to maximize the land surface membership. The result is evaluated against airborne LIDAR data, for the areas where the LIDAR dataset is available, and secondly against a manually digitized coastline. Laser points below 0.5 m are classified as ocean points, and the 3D alpha-shapes algorithm is used to detect coastline points from the LIDAR data.
Minimum distances are calculated between the LIDAR coastline points and the extracted coastline. The statistics of these distances are as follows: the mean is 5.82 m, the standard deviation is 5.83 m, and the median is 4.08 m. Secondly, the extracted coastline is evaluated against lines manually digitized on the SAR image. Both lines are converted to dense points at a 1 m interval, and the closest distances are calculated between the points of the extracted and manually created coastlines. The mean is 5.23 m, the standard deviation is 4.52 m, and the median is 4.13 m. For both quality assessment approaches, the evaluation values are within the accuracy of the SAR data used.
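One common formulation of the MS Large membership function (e.g., as implemented in GIS fuzzy-overlay tools; treated here as an assumption, not necessarily the authors' exact expression) can be sketched with the abstract's parameters:

```python
def ms_large(x, mean, std, a, b):
    """Mean-standard-deviation (MS) Large membership, one common
    formulation: membership is 0 up to a*mean and rises toward 1
    above it, with the rise scaled by b*std. Assumes b > 0."""
    if x <= a * mean:
        return 0.0
    return 1.0 - (b * std) / (x - a * mean + b * std)

# Parameters from the abstract: mean 23, std 12 (pixel values x 1000),
# a = 0.58, b = 0.05, so the threshold a*mean is 13.34.
m, s, a, b = 23.0, 12.0, 0.58, 0.05
for x in (10, 14, 23, 60):
    print(x, round(ms_large(x, m, s, a, b), 3))
```

With the small b used here, membership climbs steeply just above the threshold, which is consistent with the stated goal of maximizing land-surface membership.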
Sink fast and swim harder! Round-trip cost-of-transport for buoyant divers.
Miller, Patrick J O; Biuw, Martin; Watanabe, Yuuki Y; Thompson, Dave; Fedak, Mike A
2012-10-15
Efficient locomotion between prey resources at depth and oxygen at the surface is crucial for breath-hold divers to maximize time spent in the foraging layer, and thereby net energy intake rates. The body density of divers, which changes with body condition, determines the apparent weight (buoyancy) of divers, which may affect round-trip cost-of-transport (COT) between the surface and depth. We evaluated alternative predictions from external-work and actuator-disc theory of how non-neutral buoyancy affects round-trip COT to depth, and the minimum COT speed for steady-state vertical transit. Not surprisingly, the models predict that one-way COT decreases (increases) when buoyancy aids (hinders) one-way transit. At extreme deviations from neutral buoyancy, gliding at terminal velocity is the minimum COT strategy in the direction aided by buoyancy. In the transit direction hindered by buoyancy, the external-work model predicted that minimum COT speeds would not change at greater deviations from neutral buoyancy, but minimum COT speeds were predicted to increase under the actuator disc model. As previously documented for grey seals, we found that vertical transit rates of 36 elephant seals increased in both directions as body density deviated from neutral buoyancy, indicating that actuator disc theory may more closely predict the power requirements of divers affected by gravity than an external work model. For both models, minor deviations from neutral buoyancy did not affect minimum COT speed or round-trip COT itself. However, at body-density extremes, both models predict that savings in the aided direction do not fully offset the increased COT imposed by the greater thrusting required in the hindered direction.
The Laplace method for probability measures in Banach spaces
NASA Astrophysics Data System (ADS)
Piterbarg, V. I.; Fatalov, V. R.
1995-12-01
Contents §1. Introduction Chapter I. Asymptotic analysis of continual integrals in Banach space, depending on a large parameter §2. The large deviation principle and logarithmic asymptotics of continual integrals §3. Exact asymptotics of Gaussian integrals in Banach spaces: the Laplace method 3.1. The Laplace method for Gaussian integrals taken over the whole Hilbert space: isolated minimum points ([167], I) 3.2. The Laplace method for Gaussian integrals in Hilbert space: the manifold of minimum points ([167], II) 3.3. The Laplace method for Gaussian integrals in Banach space ([90], [174], [176]) 3.4. Exact asymptotics of large deviations of Gaussian norms §4. The Laplace method for distributions of sums of independent random elements with values in Banach space 4.1. The case of a non-degenerate minimum point ([137], I) 4.2. A degenerate isolated minimum point and the manifold of minimum points ([137], II) §5. Further examples 5.1. The Laplace method for the local time functional of a Markov symmetric process ([217]) 5.2. The Laplace method for diffusion processes, a finite number of non-degenerate minimum points ([116]) 5.3. Asymptotics of large deviations for Brownian motion in the Hölder norm 5.4. Non-asymptotic expansion of a strong stable law in Hilbert space ([41]) Chapter II. The double sum method - a version of the Laplace method in the space of continuous functions §6. Pickands' method of double sums 6.1. General situations 6.2. Asymptotics of the distribution of the maximum of a Gaussian stationary process 6.3. Asymptotics of the probability of a large excursion of a Gaussian non-stationary process §7. Probabilities of large deviations of trajectories of Gaussian fields 7.1. Homogeneous fields and fields with constant dispersion 7.2. Finitely many maximum points of dispersion 7.3. Manifold of maximum points of dispersion 7.4. Asymptotics of distributions of maxima of Wiener fields §8. 
Exact asymptotics of large deviations of the norm of Gaussian vectors and processes with values in the spaces L_k^p and l^2. Gaussian fields with the set of parameters in Hilbert space 8.1. Exact asymptotics of the distribution of the l_k^p-norm of a Gaussian finite-dimensional vector with dependent coordinates, p > 1 8.2. Exact asymptotics of probabilities of high excursions of trajectories of processes of type χ² 8.3. Asymptotics of the probabilities of large deviations of Gaussian processes with a set of parameters in Hilbert space [74] 8.4. Asymptotics of distributions of maxima of the norms of l^2-valued Gaussian processes 8.5. Exact asymptotics of large deviations for the l^2-valued Ornstein-Uhlenbeck process Bibliography
Impulse damping control of an experimental structure
NASA Technical Reports Server (NTRS)
Redmond, J.; Meyer, J. L.; Silverberg, L.
1993-01-01
The characteristics associated with the fuel-optimal control of a harmonic oscillator are extended to develop a near-minimum-fuel control algorithm for the vibration suppression of spacecraft. The operation of single-level thrusters is regulated by recursive calculations of the standard deviations of displacement and velocity, resulting in a bang-off-bang controller. A vertically suspended 16-ft cantilevered beam was used in the experiment. Results show that the structure's response was easily manipulated by minor alterations in the control law, and the control system performance was not seriously degraded in the presence of multiple actuator failures.
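Recursive (online) computation of a standard deviation, as such a controller requires, can be done with Welford's update. A minimal sketch (the control law itself is not reproduced here; the sample sequence is illustrative):

```python
import math

class RunningStd:
    """Welford's online algorithm: update mean and standard deviation
    one sample at a time, as a thresholding controller might."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)   # numerically stable accumulation

    @property
    def std(self):
        return math.sqrt(self.m2 / self.n) if self.n else 0.0

rs = RunningStd()
for x in [0.0, 1.0, -1.0, 2.0, -2.0]:   # e.g. sampled displacement
    rs.update(x)
print(round(rs.mean, 6), round(rs.std, 6))
```

A bang-off-bang rule would then fire a thruster whenever the tracked displacement or velocity deviation exceeds some multiple of this running standard deviation.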
7 CFR 400.204 - Notification of deviation from standards.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 6 2010-01-01 2010-01-01 false Notification of deviation from standards. 400.204... Contract-Standards for Approval § 400.204 Notification of deviation from standards. A Contractor shall advise the Corporation immediately if the Contractor deviates from the requirements of these standards...
The Standard Deviation of Launch Vehicle Environments
NASA Technical Reports Server (NTRS)
Yunis, Isam
2005-01-01
Statistical analysis is used in the development of the launch vehicle environments of acoustics, vibrations, and shock. The standard deviation of these environments is critical to accurate statistical extrema. However, often very little data exists to define the standard deviation and it is better to use a typical standard deviation than one derived from a few measurements. This paper uses Space Shuttle and expendable launch vehicle flight data to define a typical standard deviation for acoustics and vibrations. The results suggest that 3dB is a conservative and reasonable standard deviation for the source environment and the payload environment.
NASA Astrophysics Data System (ADS)
Rock, N. M. S.
ROBUST calculates 53 statistics, plus significance levels for 6 hypothesis tests, on each of up to 52 variables. These together allow the following properties of the data distribution for each variable to be examined in detail: (1) Location. Three means (arithmetic, geometric, harmonic) are calculated, together with the midrange and 19 high-performance robust L-, M-, and W-estimates of location (combined, adaptive, trimmed estimates, etc.) (2) Scale. The standard deviation is calculated along with the H-spread/2 (≈ semi-interquartile range), the mean and median absolute deviations from both mean and median, and a biweight scale estimator. The 23 location and 6 scale estimators programmed cover all possible degrees of robustness. (3) Normality: Distributions are tested against the null hypothesis that they are normal, using the 3rd (√b1) and 4th (b2) moments, Geary's ratio (mean deviation/standard deviation), Filliben's probability plot correlation coefficient, and a more robust test based on the biweight scale estimator. These statistics collectively are sensitive to most usual departures from normality. (4) Presence of outliers. The maximum and minimum values are assessed individually or jointly using Grubbs' maximum Studentized residuals, Harvey's and Dixon's criteria, and the Studentized range. For a single input variable, outliers can be either winsorized or eliminated and all estimates recalculated iteratively as desired. The following data transformations can also be applied: linear, log10, generalized Box-Cox power (including log, reciprocal, and square root), exponentiation, and standardization. For more than one variable, all results are tabulated in a single run of ROBUST. Further options are incorporated to assess ratios (of two variables) as well as discrete variables, and to handle missing data. Cumulative S-plots (for assessing normality graphically) can also be generated.
The mutual consistency or inconsistency of all these measures helps to detect errors in data as well as to assess data-distributions themselves.
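As an illustration of why robust scale estimators matter, a short sketch comparing the classical standard deviation with the median absolute deviation (MAD), one of the estimators in ROBUST's repertoire (the data values are invented for illustration):

```python
def median(v):
    s = sorted(v)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

def mad(v, center=None):
    """Median absolute deviation from the median (or a given center),
    a highly robust scale estimator."""
    c = median(v) if center is None else center
    return median([abs(x - c) for x in v])

data = [9.1, 9.4, 9.2, 9.3, 9.0, 9.2, 42.0]   # one gross outlier
mean = sum(data) / len(data)
sd = (sum((x - mean) ** 2 for x in data) / (len(data) - 1)) ** 0.5
print(round(sd, 2), round(mad(data), 2))   # SD is inflated; MAD is not
```

The single outlier blows up the classical SD while barely moving the MAD, which is exactly the kind of mutual inconsistency between estimators that flags suspect data.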
Determination of antibacterial flomoxef in serum by capillary electrophoresis.
Kitahashi, Toshihiro; Furuta, Itaru
2003-04-01
A method for determining flomoxef (FMOX) concentration in serum by capillary electrophoresis is developed. Serum samples are extracted with acetonitrile. After pretreatment, they are separated in a fused-silica capillary tube with a 25 mM borate buffer (pH 10.0) as a running buffer that contains 50 mM sodium dodecyl sulfate. FMOX and acetaminophen (internal standard) are detected by UV absorbance at 200 nm. Linearity (0-200 mg/L) is good, and the minimum limit of detection is 1.0 mg/L (S/N = 3). The relative standard deviations of intra- and interassay variability are 1.60-4.78% and 2.10-3.31%, respectively, and the recovery rate is 84-98%. This method can be used for determination of FMOX concentration in serum.
Reliability and Minimum Detectable Change of the Gait Deviation Index (GDI) in post-stroke patients.
Correa, Katren Pedroso; Devetak, Gisele Francini; Martello, Suzane Ketlyn; de Almeida, Juliana Carla; Pauleto, Ana Carolina; Manffra, Elisangela Ferretti
2017-03-01
The Gait Deviation Index (GDI) is a summary measure that provides a global picture of gait kinematic data. Since the ability to walk is critical for post-stroke patients, the aim of this study was to determine the reliability and Minimum Detectable Change (MDC) of the GDI in this patient population. Twenty post-stroke patients (11 males, 9 females; mean age, 55.2±9.9 years) participated in this study. Patients presented with either right- (n=14) or left-sided (n=6) hemiparesis. Kinematic gait data were collected in two sessions (test and retest) that were 2 to 7 days apart. GDI values in the first and second sessions were, respectively, 59.0±8.1 and 60.2±9.4 for the paretic limb and 53.3±8.3 and 53.4±8.3 for the non-paretic limb. The reliability in each session was determined by the intra-class correlation coefficient (ICC) of three strides and, in the test session, their values were 0.91 and 0.97 for the paretic and non-paretic limbs, respectively. Between-session reliability and MDC were determined using the average GDI of three strides from each session. For the paretic limb, between-session ICC, standard error of measurement (SEM), and MDC were 0.84, 3.4 and 9.4, respectively. The non-paretic lower limb exhibited between-session ICC, SEM, and MDC of 0.89, 2.7 and 7.5, respectively. These MDC values indicate that very large changes in GDI are required to identify gait improvement. Therefore, the clinical usefulness of GDI with stroke patients is questionable. Copyright © 2017 Elsevier B.V. All rights reserved.
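The reported SEM and MDC follow from standard reliability formulas (SEM = SD·√(1−ICC), MDC95 = 1.96·√2·SEM). A sketch using the paretic-limb values from the abstract (pooling the two session SDs by RMS is an assumption, since the paper's exact pooling is not stated):

```python
import math

def sem(sd, icc):
    """Standard error of measurement from an SD and a reliability (ICC)."""
    return sd * math.sqrt(1.0 - icc)

def mdc95(sem_value):
    """Minimum detectable change at 95% confidence."""
    return 1.96 * math.sqrt(2.0) * sem_value

# Paretic-limb values from the abstract: between-session ICC 0.84 and
# session SDs of 8.1 and 9.4 GDI points (pooled here by RMS).
sd_pooled = math.sqrt((8.1**2 + 9.4**2) / 2.0)
s = sem(sd_pooled, 0.84)
print(round(s, 1), round(mdc95(s), 1))   # close to the reported 3.4 and 9.4
```

Plugging the paper's own SEM of 3.4 into the MDC formula reproduces the reported 9.4, which confirms the MDC95 convention was used.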
NASA Astrophysics Data System (ADS)
Nåvik, Petter; Rønnquist, Anders; Stichel, Sebastian
2017-09-01
The contact force between the pantograph and the contact wire ensures energy transfer between the two. Too small a force leads to arcing and unstable energy transfer, while too large a force leads to unnecessary wear on both parts. Thus, obtaining the correct contact force is important for both field measurements and estimates using numerical analysis. The field contact force time series is derived from measurements performed by a self-propelled diagnostic vehicle containing overhead line recording equipment. The measurements are not sampled at the actual contact surface of the interaction but by force transducers beneath the collector strips. Methods exist for obtaining more realistic measurements by adding inertia and aerodynamic effects to the measurements. The variation in predicting the pantograph-catenary interaction contact force is studied in this paper by evaluating the effect of the force sampling location and the effects of signal processing such as filtering. A numerical model validated by field measurements is used to study these effects. First, this paper shows that the numerical model can reproduce a train passage with high accuracy. Second, this study introduces three different options for contact force predictions from numerical simulations. Third, this paper demonstrates that the standard deviation and the maximum and minimum values of the contact force are sensitive to a low-pass filter. For a specific case, an 80 Hz cut-off frequency is compared to a 20 Hz cut-off frequency, as required by EN 50317:2012; the results show an 11% increase in standard deviation, a 36% increase in the maximum value and a 19% decrease in the minimum value.
Optimizing probability of detection point estimate demonstration
NASA Astrophysics Data System (ADS)
Koshti, Ajay M.
2017-04-01
The paper discusses optimizing probability of detection (POD) demonstration experiments using the point estimate method. The optimization provides an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false calls (POF) while keeping the flaw sizes in the set as small as possible. The POD point estimate method is used by NASA for qualifying special NDE procedures and uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. Traditionally, the largest flaw size in the set is considered a conservative estimate of the flaw size with minimum 90% probability and 95% confidence, denoted α90/95PE. The paper investigates the relationship between the range of flaw sizes and α90, i.e., the 90% probability flaw size, needed to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution, as is the difference between the median or average of the 29 flaws and α90. In general, it is concluded that, if the probability of detection increases with flaw size, the average of the 29 flaw sizes is always larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, the 29-flaw set can be optimized to meet requirements on minimum required PPD, maximum allowable POF, flaw size tolerance about the mean flaw size, and flaw size detectability. The paper provides a procedure for optimizing flaw sizes in the point estimate demonstration flaw set.
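The binomial arithmetic behind the 29-flaw demonstration can be checked directly. A sketch (the `max_misses` generalization is illustrative, not from the paper):

```python
from math import comb

def prob_pass(n, pod, max_misses=0):
    """Probability that a demonstration passes: at most `max_misses`
    missed detections in n trials, each detected with probability pod."""
    return sum(comb(n, k) * (1 - pod)**k * pod**(n - k)
               for k in range(max_misses + 1))

# The classic 29-of-29 rule: if the true POD at the demonstrated flaw
# size were only 0.90, detecting all 29 flaws happens with probability
# 0.9^29 < 5%, which is what yields 90% POD at 95% confidence.
p = prob_pass(29, 0.90)
print(round(p, 4))
print(p < 0.05)
```

The same function shows why allowing even one miss (e.g., a 45-of-46 scheme) requires more trials to keep the passing probability under 5% at POD = 0.90.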
7 CFR 400.174 - Notification of deviation from financial standards.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 6 2010-01-01 2010-01-01 false Notification of deviation from financial standards... Agreement-Standards for Approval; Regulations for the 1997 and Subsequent Reinsurance Years § 400.174 Notification of deviation from financial standards. An insurer must immediately advise FCIC if it deviates from...
Degrees of Freedom for Allan Deviation Estimates of Multiple Clocks
2016-04-01
Allan deviation . Allan deviation will be represented by σ and standard deviation will be represented by δ. In practice, when the Allan deviation of a...the Allan deviation of standard noise types. Once the number of degrees of freedom is known, an approximate confidence interval can be assigned by...measurement errors from paired difference data. We extend this approach by using the Allan deviation to estimate the error in a frequency standard
Giżyńska, Marta K.; Kukołowicz, Paweł F.; Kordowski, Paweł
2014-01-01
Aim The aim of this work is to present a method of beam weight and wedge angle optimization for patients with prostate cancer. Background 3D-CRT is usually realized with forward planning based on a trial-and-error method. Several authors have published methods of beam weight optimization applicable to 3D-CRT; still, none of these methods is in common use. Materials and methods Optimization is based on the assumption that the best plan is achieved if the dose gradient at the ICRU point is equal to zero. Our optimization algorithm requires the beam quality index, depth of maximum dose, profiles of wedged fields, and maximum dose to the femoral heads. The method was tested for 10 patients with prostate cancer treated with the 3-field technique. Optimized plans were compared with plans prepared by 12 experienced planners. Dose standard deviation in the target volume and minimum and maximum doses were analyzed. Results The quality of plans obtained with the proposed optimization algorithm was comparable to that of plans prepared by experienced planners. The mean difference in target dose standard deviation was 0.1% in favor of the plans prepared by planners when both beam weights and wedge angles were optimized. Introducing a correction factor for the patient body outline in the dose gradient at the ICRU point improved dose distribution homogeneity; on average, a 0.1% lower standard deviation was achieved with the optimization algorithm. No significant difference in the mean dose-volume histogram for the rectum was observed. Conclusions Optimization greatly shortens planning time: the average planning time was 5 min for forward planning and less than a minute for computer optimization. PMID:25337411
A Bayesian Method for Identifying Contaminated Detectors in Low-Level Alpha Spectrometers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maclellan, Jay A.; Strom, Daniel J.; Joyce, Kevin E.
2011-11-02
Analyses used for radiobioassay and other radiochemical tests are normally designed to meet specified quality objectives, such as relative bias, precision, and minimum detectable activity (MDA). In the case of radiobioassay analyses for alpha-emitting radionuclides, a major determiner of the process MDA is the instrument background. Alpha spectrometry detectors are often restricted to only a few counts over multi-day periods in order to meet required MDAs for nuclides such as plutonium-239 and americium-241. A detector background criterion is often set empirically based on experience, or frequentist or classical statistics are applied to the calculated background count necessary to meet a required MDA. An acceptance criterion for the detector background is set at the multiple of the estimated background standard deviation above the assumed mean that provides an acceptably small probability of observation if the mean and standard deviation estimate are correct. The major problem with this method is that the observed background counts used to estimate the mean, and thereby the standard deviation when a Poisson distribution is assumed, are often in the range of zero to three counts. At those expected count levels it is impossible to obtain a good estimate of the true mean from a single measurement. As an alternative, Bayesian statistical methods allow calculation of the expected detector background count distribution based on historical counts from new, uncontaminated detectors. This distribution can then be used to identify detectors showing an increased probability of contamination. The effect of varying the assumed range of background counts (i.e., the prior probability distribution) from new, uncontaminated detectors is discussed.
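A gamma-Poisson sketch of the kind of Bayesian machinery described (the prior, historical counts, and flagging threshold below are hypothetical, not the authors' values):

```python
from math import lgamma, exp, log

def neg_binom_pmf(k, r, p):
    """Predictive pmf for a Poisson count whose rate has a Gamma posterior:
    Gamma(shape r, rate beta) mixing gives a negative binomial with
    p = beta / (beta + 1)."""
    return exp(lgamma(k + r) - lgamma(r) - lgamma(k + 1)
               + r * log(p) + k * log(1.0 - p))

# Hypothetical history: 12 clean detectors counted for equal periods,
# 7 background counts in total, under a Jeffreys Gamma(0.5, 0) prior.
# Posterior for the rate: Gamma(0.5 + 7, rate 12); the predictive count
# distribution for one more detector-period is negative binomial.
r, beta = 0.5 + 7, 12.0
p = beta / (beta + 1.0)
tail = 1.0 - sum(neg_binom_pmf(k, r, p) for k in range(4))
print(round(tail, 4))   # P(>= 4 counts): small => 4+ counts is suspicious
```

A detector whose observed background lands far in this predictive tail is flagged as possibly contaminated, without ever needing a per-detector mean estimated from two or three counts.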
NASA Technical Reports Server (NTRS)
Meneghini, Robert; Kim, Hyokyung
2016-01-01
For an airborne or spaceborne radar, the precipitation-induced path attenuation can be estimated from measurements of the normalized surface cross section, σ0, in the presence and absence of precipitation. In one implementation, the mean rain-free estimate and its variability are found from a lookup table (LUT) derived from previously measured data. For the dual-frequency precipitation radar aboard the Global Precipitation Measurement satellite, the nominal table consists of the statistics of the rain-free σ0 over a 0.5 deg x 0.5 deg latitude-longitude grid using a three-month set of input data. However, a problem with the LUT is an insufficient number of samples in many cells. An alternative table is constructed by a stepwise procedure that begins with the statistics over a 0.25 deg x 0.25 deg grid. If the number of samples at a cell is too few, the area is expanded, cell by cell, choosing at each step the cell that minimizes the variance of the data. The question arises, however, as to whether the selected region corresponds to the smallest variance. To address this question, a second type of variable-averaging grid is constructed using all possible spatial configurations and computing the variance of the data within each region. Comparisons of the standard deviations for the fixed and variable-averaged grids are given as a function of incidence angle and surface type using a three-month set of data. The advantage of variable spatial averaging is that the average standard deviation can be reduced relative to the fixed grid while satisfying the minimum sample requirement.
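The stepwise cell-by-cell expansion can be sketched as a greedy search; the grid, samples, and stopping threshold below are illustrative only:

```python
def variance(v):
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / len(v)

def grow_region(grid, start, min_samples):
    """Greedy variable averaging: starting from one cell, repeatedly
    absorb the 4-neighbour cell that keeps the pooled variance lowest,
    until the region holds at least min_samples samples.
    `grid` maps (i, j) -> list of sigma0 samples."""
    region = {start}
    data = list(grid[start])
    while len(data) < min_samples:
        candidates = set()
        for (i, j) in region:
            for nb in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                if nb in grid and nb not in region:
                    candidates.add(nb)
        if not candidates:
            break
        best = min(candidates, key=lambda c: variance(data + grid[c]))
        region.add(best)
        data += grid[best]
    return region, data

# Toy grid: a quiet patch next to a noisy cell; growth prefers quiet cells.
grid = {(0, 0): [10.0, 10.1], (0, 1): [10.0, 9.9],
        (1, 0): [15.0, 5.0],  (1, 1): [10.1, 9.9]}
region, data = grow_region(grid, (0, 0), 6)
print(sorted(region), len(data))
```

The paper's question is precisely whether this greedy path reaches the lowest-variance region of a given size, which an exhaustive search over spatial configurations can verify.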
Sigma Routing Metric for RPL Protocol.
Sanmartin, Paul; Rojas, Aldo; Fernandez, Luis; Avila, Karen; Jabba, Daladier; Valle, Sebastian
2018-04-21
This paper presents the adaptation of a specific metric for the RPL protocol in the objective function MRHOF. Among the functions standardized by IETF, we find OF0, which is based on the minimum hop count, as well as MRHOF, which is based on the Expected Transmission Count (ETX). However, when the network becomes denser or the number of nodes increases, both OF0 and MRHOF introduce long hops, which can generate a bottleneck that restricts the network. The adaptation is proposed to optimize both OFs through a new routing metric. To solve the above problem, the metrics of the minimum number of hops and the ETX are combined by designing a new routing metric called SIGMA-ETX, in which the best route is calculated using the standard deviation of ETX values between each node, as opposed to working with the ETX average along the route. This method ensures a better routing performance in dense sensor networks. The simulations are done through the Cooja simulator, based on the Contiki operating system. The simulations showed that the proposed optimization outperforms at a high margin in both OF0 and MRHOF, in terms of network latency, packet delivery ratio, lifetime, and power consumption.
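The core of the SIGMA-ETX idea, computing route cost from the spread of per-hop ETX rather than its mean, can be sketched as follows (a simplification of the protocol logic; the routes are invented for illustration):

```python
import math

def sigma_etx(etx_per_hop):
    """SIGMA-ETX-style route cost: the standard deviation of the per-hop
    ETX values along a candidate route. A lower spread means more
    uniform links, so such a route is preferred."""
    n = len(etx_per_hop)
    mean = sum(etx_per_hop) / n
    return math.sqrt(sum((e - mean) ** 2 for e in etx_per_hop) / n)

# Two candidate routes with the SAME average ETX (2.0): a plain
# average-based metric cannot separate them, but the route with one
# weak, long hop is penalized under the spread-based metric.
uniform_route = [2.0, 2.0, 2.0, 2.0]
long_hop_route = [1.0, 1.0, 1.0, 5.0]
print(sigma_etx(uniform_route), round(sigma_etx(long_hop_route), 3))
```

This captures why the metric discourages the long-hop bottlenecks that OF0 and MRHOF can introduce in dense networks.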
1 CFR 21.14 - Deviations from standard organization of the Code of Federal Regulations.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 1 General Provisions 1 2010-01-01 2010-01-01 false Deviations from standard organization of the... CODIFICATION General Numbering § 21.14 Deviations from standard organization of the Code of Federal Regulations. (a) Any deviation from standard Code of Federal Regulations designations must be approved in advance...
Fisher information and Cramér-Rao lower bound for experimental design in parallel imaging.
Bouhrara, Mustapha; Spencer, Richard G
2018-06-01
The Cramér-Rao lower bound (CRLB) is widely used in the design of magnetic resonance (MR) experiments for parameter estimation. Previous work has considered only Gaussian or Rician noise distributions in this calculation. However, the noise distribution for multi-coil acquisitions, such as in parallel imaging, obeys the noncentral χ-distribution under many circumstances. The purpose of this paper is to present the CRLB calculation for parameter estimation from multi-coil acquisitions. We perform explicit calculations of Fisher matrix elements and the associated CRLB for noise distributions following the noncentral χ-distribution. The special case of diffusion kurtosis is examined as an important example. For comparison with analytic results, Monte Carlo (MC) simulations were conducted to evaluate experimental minimum standard deviations (SDs) in the estimation of diffusion kurtosis model parameters. Results were obtained for a range of signal-to-noise ratios (SNRs), and for both the conventional case of Gaussian noise distribution and the noncentral χ-distribution with different numbers of coils, m. At low-to-moderate SNR, the noncentral χ-distribution deviates substantially from the Gaussian distribution. Our results indicate that this departure is more pronounced for larger values of m. As expected, the minimum SDs (i.e., CRLB) in derived diffusion kurtosis model parameters assuming a noncentral χ-distribution provided a closer match to the MC simulations as compared to the Gaussian results. Estimates of minimum variance for parameter estimation and experimental design provided by the CRLB must account for the noncentral χ-distribution of noise in multi-coil acquisitions, especially in the low-to-moderate SNR regime. Magn Reson Med 79:3249-3255, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
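As a reminder of what the CRLB asserts in the simple Gaussian case that the paper generalizes, a quick Monte Carlo check shows the sample mean attaining the bound σ²/n. This is a sketch of the bound's meaning only; the paper's noncentral-χ Fisher matrix is not reproduced here:

```python
import random

def crlb_mean_gaussian(sigma, n):
    """CRLB for estimating the mean of n i.i.d. Gaussian samples:
    the Fisher information is I = n / sigma^2, so var >= sigma^2 / n."""
    return sigma ** 2 / n

def mc_variance_of_mean(sigma, n, trials=5000, seed=4):
    """Monte Carlo estimate of the variance of the sample mean."""
    rng = random.Random(seed)
    means = [sum(rng.gauss(0, sigma) for _ in range(n)) / n
             for _ in range(trials)]
    m = sum(means) / trials
    return sum((x - m) ** 2 for x in means) / trials

bound = crlb_mean_gaussian(1.0, 10)      # 0.1
observed = mc_variance_of_mean(1.0, 10)
# The sample mean is an efficient estimator: observed variance ~= CRLB.
print(abs(observed - bound) / bound < 0.1)
```

For noncentral-χ noise the Fisher information, and hence the bound, changes with the number of coils, which is the paper's point.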
Scatter-Reducing Sounding Filtration Using a Genetic Algorithm and Mean Monthly Standard Deviation
NASA Technical Reports Server (NTRS)
Mandrake, Lukas
2013-01-01
Retrieval algorithms like that used by the Orbiting Carbon Observatory (OCO)-2 mission generate massive quantities of data of varying quality and reliability. A computationally efficient, simple method of labeling problematic datapoints or predicting soundings that will fail is required for basic operation, given that only 6% of the retrieved data may be operationally processed. This method automatically obtains a filter designed to reduce scatter based on a small number of input features. Most machine-learning filter construction algorithms attempt to predict error in the CO2 value. By using the surrogate goal of mean monthly standard deviation (MMS), the goal is to reduce the retrieved CO2 scatter rather than to solve the harder problem of reducing CO2 error. This lends itself to improved interpretability and performance. This software reduces the scatter of retrieved CO2 values globally based on a minimum number of input features. It can be used as a prefilter to reduce the number of soundings requested, or as a post-filter to label data quality. The use of MMS provides a much cleaner, clearer filter than the standard ABS(CO2-truth) metrics previously employed by competing methods. The software's main strength lies in a clearer (i.e., fewer features required) filter that more efficiently reduces scatter in retrieved CO2 rather than focusing on the more complex (and easily removed) bias issues.
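A brute-force, single-feature stand-in for the filter-construction goal might look like the following. The actual software searches many features with a genetic algorithm; the data, feature, and threshold rule here are entirely synthetic:

```python
import random

def std(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

def best_scatter_filter(feature, co2, keep_frac=0.5):
    """Pick a threshold on one feature that minimizes the standard
    deviation (scatter) of the retained CO2 values while keeping at
    least keep_frac of the soundings (a toy sample requirement)."""
    n = len(co2)
    best_scatter, best_t = std(co2), None  # unfiltered baseline
    for t in sorted(feature):
        kept = [c for f, c in zip(feature, co2) if f <= t]
        if len(kept) >= keep_frac * n:
            s = std(kept)
            if s < best_scatter:
                best_scatter, best_t = s, t
    return best_scatter, best_t

random.seed(5)
feature = [random.random() for _ in range(200)]
# Soundings with a large feature value retrieve noisier CO2.
co2 = [400 + random.gauss(0, 1 if f < 0.5 else 5) for f in feature]
scatter, threshold = best_scatter_filter(feature, co2)
print(scatter < std(co2))
```

The filter targets scatter directly, not per-sounding error, which is the surrogate-goal idea the abstract describes.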
Topology Trivialization and Large Deviations for the Minimum in the Simplest Random Optimization
NASA Astrophysics Data System (ADS)
Fyodorov, Yan V.; Le Doussal, Pierre
2014-01-01
Finding the global minimum of a cost function given by the sum of a quadratic and a linear form in N real variables over the (N-1)-dimensional sphere is one of the simplest, yet paradigmatic, problems in Optimization Theory, known as the "trust region subproblem" or "constrained least squares problem". When both terms in the cost function are random this amounts to studying the ground state energy of the simplest spherical spin glass in a random magnetic field. We first identify and study two distinct large-N scaling regimes in which the linear term (magnetic field) leads to a gradual topology trivialization, i.e. reduction in the total number {N}_{tot} of critical (stationary) points in the cost function landscape. In the first regime {N}_{tot} remains of order N and the cost function (energy) has generically two almost degenerate minima with Tracy-Widom (TW) statistics. In the second regime the number of critical points is of order unity, with a finite probability for a single minimum. In that case the mean total number of extrema (minima and maxima) of the cost function is given by the Laplace transform of the TW density, and the distribution of the global minimum energy is expected to take a universal scaling form generalizing the TW law. Though the full form of that distribution is not yet known to us, one of its far tails can be inferred from the large deviation theory for the global minimum. In the rest of the paper we show how to use the replica method to obtain the probability density of the minimum energy in the large-deviation approximation by finding both the rate function and the leading pre-exponential factor.
EARLY, LATE OR NEVER? WHEN DOES PARENTAL EDUCATION IMPACT CHILD OUTCOMES?
Dickson, Matt; Gregg, Paul; Robinson, Harriet
2017-01-01
We estimate the causal effect of parents’ education on their children’s education and examine the timing of the impact. We identify the causal effect by exploiting the exogenous shift in (parents’) education levels induced by the 1972 minimum school leaving age reform in England. Increasing parental education has a positive causal effect on children’s outcomes that is evident in preschool assessments at age 4 and continues to be visible up to and including high-stakes examinations taken at age 16. Children of parents affected by the reform attain results around 0.1 standard deviations higher than those whose parents were not impacted. PMID:28736454
Upgraded FAA Airfield Capacity Model. Volume 1. Supplemental User’s Guide
1981-02-01
SIGMAR (F4.0) cc 1-4 - standard deviation, in seconds, of arrival runway occupancy time (R.O.T.). SIGMAA (F4.0) cc 5-8 - standard deviation, in seconds... SIGMAC - the standard deviation of the time from departure clearance to start of roll. SIGMAR - the standard deviation of the arrival runway
A Visual Model for the Variance and Standard Deviation
ERIC Educational Resources Information Center
Orris, J. B.
2011-01-01
This paper shows how the variance and standard deviation can be represented graphically by looking at each squared deviation as a graphical object--in particular, as a square. A series of displays show how the standard deviation is the size of the average square.
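The average-square picture corresponds directly to the defining computation; a minimal sketch:

```python
def variance_and_std(data):
    """Variance via the 'average square': each deviation from the mean
    is the side of a square; the variance is the mean square area, and
    the standard deviation is the side length of that average square."""
    mean = sum(data) / len(data)
    squares = [(x - mean) ** 2 for x in data]  # one square per point
    variance = sum(squares) / len(data)        # average square area
    return variance, variance ** 0.5           # side of average square

var, std = variance_and_std([2, 4, 4, 4, 5, 5, 7, 9])
print(var, std)  # 4.0 2.0
```

Here the mean is 5, the squared deviations sum to 32 over 8 points, so the "average square" has area 4 and side length 2, the standard deviation.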
Martin, Jeffrey D.
2002-01-01
Correlation analysis indicates that for most pesticides and concentrations, pooled estimates of relative standard deviation rather than pooled estimates of standard deviation should be used to estimate variability because pooled estimates of relative standard deviation are less affected by heteroscedasticity. The median pooled relative standard deviation was calculated for all pesticides to summarize the typical variability for pesticide data collected for the NAWQA Program. The median pooled relative standard deviation was 15 percent at concentrations less than 0.01 micrograms per liter (µg/L), 13 percent at concentrations near 0.01 µg/L, 12 percent at concentrations near 0.1 µg/L, 7.9 percent at concentrations near 1 µg/L, and 2.7 percent at concentrations greater than 5 µg/L. Pooled estimates of standard deviation or relative standard deviation presented in this report are larger than estimates based on averages, medians, smooths, or regression of the individual measurements of standard deviation or relative standard deviation from field replicates. Pooled estimates, however, are the preferred method for characterizing variability because they provide unbiased estimates of the variability of the population. Assessments of variability based on standard deviation (rather than variance) underestimate the true variability of the population. Because pooled estimates of variability are larger than estimates based on other approaches, users of estimates of variability must be cognizant of the approach used to obtain the estimate and must use caution in the comparison of estimates based on different approaches.
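One common way to pool relative standard deviations from replicate sets is to weight the squared per-set RSDs by their degrees of freedom. This is a textbook pooling shown for illustration; the report's exact formula may differ:

```python
import math

def pooled_rsd(replicate_sets):
    """Pooled relative standard deviation across replicate sets.
    Each set is a list of replicate concentrations; the squared RSD of
    each set is weighted by its degrees of freedom (n - 1)."""
    num = 0.0
    dof = 0
    for reps in replicate_sets:
        n = len(reps)
        mean = sum(reps) / n
        var = sum((x - mean) ** 2 for x in reps) / (n - 1)  # sample var
        rsd = math.sqrt(var) / mean                         # relative SD
        num += (n - 1) * rsd ** 2
        dof += n - 1
    return math.sqrt(num / dof)

# Hypothetical field-replicate pairs at three concentration levels.
sets = [[0.10, 0.12], [0.50, 0.55], [1.00, 0.95]]
print(round(100 * pooled_rsd(sets), 1))  # pooled RSD, in percent
```

Dividing each set's standard deviation by its own mean is what makes the pooled estimate robust to heteroscedasticity across concentration levels.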
Basic life support: evaluation of learning using simulation and immediate feedback devices.
Tobase, Lucia; Peres, Heloisa Helena Ciqueto; Tomazini, Edenir Aparecida Sartorelli; Teodoro, Simone Valentim; Ramos, Meire Bruna; Polastri, Thatiane Facholi
2017-10-30
To evaluate students' learning in an online course on basic life support with immediate feedback devices, during a simulation of care during cardiorespiratory arrest. A quasi-experimental study, using a before-and-after design. An online course on basic life support was developed and administered to participants as an educational intervention. Theoretical learning was evaluated by means of a pre- and post-test and, to assess practice, simulation with immediate feedback devices was used. There were 62 participants, 87% female, 90% in the first and second year of college, with a mean age of 21.47 (standard deviation 2.39). With a 95% confidence level, the mean score was 6.4 in the pre-test (standard deviation 1.61) and 9.3 in the post-test (standard deviation 0.82, p < 0.001); in practice, 9.1 (standard deviation 0.95), a performance equivalent to basic cardiopulmonary resuscitation according to the feedback device; mean duration of the compression cycle 43.7 (standard deviation 26.86), by second of 20.5 (standard deviation 9.47); number of compressions 167.2 (standard deviation 57.06); depth of compressions 48.1 millimeters (standard deviation 10.49); ventilation volume 742.7 (standard deviation 301.12); flow fraction percentage 40.3 (standard deviation 10.03). The online course contributed to learning of basic life support. In view of the need for technological innovations in teaching and systematization of cardiopulmonary resuscitation, simulation and feedback devices are resources that favor learning and performance awareness in performing the maneuvers.
Oshorov, A V; Popugaev, K A; Savin, I A; Potapov, A A
2016-01-01
The "standard" assessment of ICP by measuring ventricular CSF pressure has recently been questioned. The objective of the study was to compare the values of ventricular and parenchymal ICP with the ventricular drain closed and during active CSF drainage. Seven patients with TBI and intracranial hypertension syndrome were examined (GCS 5.6 ± 1.2 points, age 33 ± 4.2 years). Parenchymal and ventricular ICP were compared in three time periods: 1 - with the ventricular drain closed; 2 - with the drain open and draining at a level of 14-15 mmHg; 3 - during active drainage. The Bland-Altman method was used to compare the two methods of measurement. 1. During the closed-drainage period the correlation coefficient was r = 0.83, p < 0.001; by the Bland-Altman method, the difference between the two measurements was minimal, 0.7 mmHg, with a standard deviation of 2.02 mmHg. 2. During the open-drainage period the correlation coefficient fell to r = 0.46, p < 0.01; by the Bland-Altman method, the difference between the two measurements increased to -0.84 mmHg, with a standard deviation of 2.8 mmHg. 3. During the period of active cerebrospinal fluid drainage there was a marked difference between the methods of measurement: by the Bland-Altman method, the difference was 8.64 mmHg, with a standard deviation of 2.6 mmHg. Conclusions: 1. With the ventricular drain closed there was good correlation between ventricular and parenchymal ICP. 2. With the drain open, the correlation between the two methods of measuring intracranial pressure is reduced. 3. During active CSF drainage the correlation between the two methods can be completely lost; under these conditions, CSF pressure does not correctly reflect the ICP. 4. For accurate and continuous measurement of intracranial pressure during active CSF drainage, simultaneous parenchymal ICP measurement should be carried out.
Li, Boyan; Ou, Longwen; Dang, Qi; Meyer, Pimphan; Jones, Susanne; Brown, Robert; Wright, Mark
2015-11-01
This study evaluates the techno-economic uncertainty in cost estimates for two emerging technologies for biofuel production: in situ and ex situ catalytic pyrolysis. The probability distributions for the minimum fuel-selling price (MFSP) indicate that in situ catalytic pyrolysis has an expected MFSP of $1.11 per liter with a standard deviation of 0.29, while the ex situ catalytic pyrolysis has a similar MFSP with a smaller deviation ($1.13 per liter and 0.21 respectively). These results suggest that a biorefinery based on ex situ catalytic pyrolysis could have a lower techno-economic uncertainty than in situ pyrolysis compensating for a slightly higher MFSP cost estimate. Analysis of how each parameter affects the NPV indicates that internal rate of return, feedstock price, total project investment, electricity price, biochar yield and bio-oil yield are parameters which have substantial impact on the MFSP for both in situ and ex situ catalytic pyrolysis. Copyright © 2015 Elsevier Ltd. All rights reserved.
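The reported comparison, a similar expected MFSP but a smaller spread for ex situ pyrolysis, can be mimicked with a toy Monte Carlo. Normal distributions are assumed here purely for illustration; the study's actual input distributions are not reproduced:

```python
import random

def mfsp_distribution(mean, stdev, n=10000, seed=3):
    """Draw a toy Monte Carlo sample of minimum fuel-selling prices
    (normality assumed for illustration only)."""
    rng = random.Random(seed)
    return [rng.gauss(mean, stdev) for _ in range(n)]

def sd(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

# Expected MFSP and standard deviation from the abstract ($/liter).
in_situ = mfsp_distribution(1.11, 0.29)
ex_situ = mfsp_distribution(1.13, 0.21)

# Ex situ: similar expected MFSP but a tighter spread, i.e. lower
# techno-economic uncertainty.
print(sd(ex_situ) < sd(in_situ))
```

The narrower distribution is what the abstract means by lower techno-economic uncertainty despite a slightly higher expected price.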
Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D
2002-07-01
Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
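The bias-variance tradeoff described here can be reproduced in miniature with a one-parameter ridge estimator. This is a toy analogue of a regularized inverse problem, not the NIR reconstruction algorithm itself:

```python
import random

def ridge_mse(lam, trials=200, n=30, noise=0.5):
    """Monte Carlo bias^2 and variance of a one-parameter ridge
    estimator: large lambda -> bias dominates, small lambda ->
    variance dominates (toy analogue of the regularized inversion)."""
    true_w = 2.0
    random.seed(1)  # reseed so both lambda values see the same noise
    xs = [i / n for i in range(1, n + 1)]
    estimates = []
    for _ in range(trials):
        ys = [true_w * x + random.gauss(0, noise) for x in xs]
        sxx = sum(x * x for x in xs)
        sxy = sum(x * y for x, y in zip(xs, ys))
        estimates.append(sxy / (sxx + lam))  # ridge solution
    mean_est = sum(estimates) / trials
    bias2 = (mean_est - true_w) ** 2
    var = sum((e - mean_est) ** 2 for e in estimates) / trials
    return bias2, var

b_hi, v_hi = ridge_mse(lam=50.0)    # heavy regularization
b_lo, v_lo = ridge_mse(lam=0.001)   # near-unregularized
print(b_hi > b_lo and v_hi < v_lo)  # True
```

As in the paper's observation, the squared bias dominates at large regularization while the variance dominates as the regularization parameter shrinks; the minimum total error lies between the extremes.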
Du, Yiping P; Jin, Zhaoyang
2009-10-01
To develop a robust algorithm for tissue-air segmentation in magnetic resonance imaging (MRI) using the statistics of phase and magnitude of the images. A multivariate measure based on the statistics of phase and magnitude was constructed for tissue-air volume segmentation. The standard deviation of first-order phase difference and the standard deviation of magnitude were calculated in a 3 x 3 x 3 kernel in the image domain. To improve differentiation accuracy, the uniformity of phase distribution in the kernel was also calculated and linear background phase introduced by field inhomogeneity was corrected. The effectiveness of the proposed volume segmentation technique was compared to a conventional approach that uses the magnitude data alone. The proposed algorithm was shown to be more effective and robust in volume segmentation in both synthetic phantom and susceptibility-weighted images of human brain. Using our proposed volume segmentation method, veins in the peripheral regions of the brain were well depicted in the minimum-intensity projection of the susceptibility-weighted images. Using the additional statistics of phase, tissue-air volume segmentation can be substantially improved compared to that using the statistics of magnitude data alone. (c) 2009 Wiley-Liss, Inc.
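The per-voxel statistic at the heart of the method, the standard deviation of magnitudes within a 3 x 3 x 3 kernel, can be sketched as follows. The volume is synthetic, and the paper's full multivariate measure additionally uses phase statistics not shown here:

```python
import math
import random

def local_std(volume, i, j, k):
    """Standard deviation of magnitude values within a 3x3x3 kernel
    centered at voxel (i, j, k), clipped at the volume boundary."""
    vals = []
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            for dk in (-1, 0, 1):
                a, b, c = i + di, j + dj, k + dk
                if (0 <= a < len(volume) and 0 <= b < len(volume[0])
                        and 0 <= c < len(volume[0][0])):
                    vals.append(volume[a][b][c])
    mean = sum(vals) / len(vals)
    return math.sqrt(sum((v - mean) ** 2 for v in vals) / len(vals))

# Synthetic volume: tissue slabs (high, smooth signal) vs air slabs
# (low mean, noise-dominated magnitude).
random.seed(2)
vol = [[[random.gauss(100, 2) if i < 2 else random.gauss(5, 5)
         for _ in range(4)] for _ in range(4)] for i in range(4)]
tissue_sd = local_std(vol, 0, 1, 1)
air_sd = local_std(vol, 3, 1, 1)
print(tissue_sd < air_sd)
```

Thresholding such a local-statistics map is one way a magnitude-based tissue-air separation can be built; combining it with phase statistics is the paper's contribution.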
NASA Astrophysics Data System (ADS)
Nielsen, R. L.; Ghiorso, M. S.; Trischman, T.
2015-12-01
The database traceDs is designed to provide a transparent and accessible resource of experimental partitioning data. It now includes ~90% of all the experimental trace element partitioning data (~4000 experiments) produced over the past 45 years, and is accessible through a web-based interface (using the portal lepr.ofm-research.org). We set a minimum standard for inclusion, with the threshold criteria being the inclusion of: experimental conditions (temperature, pressure, device, container, time, etc.); major element composition of the phases; and trace element analyses of the phases. Data sources that did not report these minimum components were not included. The rationale for not including such data is that the degree of equilibration is unknown and, more important, no rigorous approach to modeling the behavior of trace elements is possible without knowledge of the composition of the phases and the temperature and pressure of formation/equilibration. The data are stored using a schema derived from that of the Library of Experimental Phase Relations (LEPR), modified to account for additional metadata, and restructured to permit multiple analytical entries for various element/technique/standard combinations. In the process of populating the database, we have learned a number of things about the existing published experimental partitioning data. Most important: ~20% of the papers do not satisfy one or more of the threshold criteria; the standard format for presenting data is the average, a convention developed when publication space was constrained, even though all the information can now be published as electronic supplements; and the uncertainties published with the compositional data are often not adequately explained (e.g., 1 or 2 sigma, standard deviation of the average, etc.).
We propose a new set of publication standards for experimental data that include the minimum criteria described above, the publication of all analyses with error based on peak count rates and background, plus information on the structural state of the mineral (e.g. orthopyroxene vs. pigeonite).
On the variation of the Nimbus 7 total solar irradiance
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
1992-01-01
For the interval December 1978 to April 1991, the value of the mean total solar irradiance, as measured by the Nimbus-7 Earth Radiation Budget Experiment channel 10C, was 1,372.02 Wm(exp -2), having a standard deviation of 0.65 Wm(exp -2), a coefficient of variation (the standard deviation divided by the mean) of 0.047 percent, and a normal deviate z (a measure of the randomness of the data) of -8.019 (inferring a highly significant non-random variation in the solar irradiance measurements, presumably related to the action of the solar cycle). Comparison of the 12-month moving average (also called the 13-month running mean) of solar irradiance to those of the usual descriptors of the solar cycle (i.e., sunspot number, 10.7-cm solar radio flux, and total corrected sunspot area) suggests possibly significant temporal differences. For example, solar irradiance is found to have been greatest on or before mid 1979 (leading solar maximum for cycle 21), lowest in early 1987 (lagging solar minimum for cycle 22), and was rising again through late 1990 (thus, lagging solar maximum for cycle 22), having last reported values below those that were seen in 1979 (even though cycles 21 and 22 were of comparable strength). Presuming a genuine correlation between solar irradiance and the solar cycle (in particular, sunspot number) one infers that the correlation is weak (having a coefficient of correlation r less than 0.84) and that major excursions (both as 'excesses' and 'deficits') have occurred (about every 2 to 3 years, perhaps suggesting a pulsating Sun).
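The quoted coefficient of variation can be checked directly from the reported mean and standard deviation:

```python
def coefficient_of_variation(mean, stdev):
    """CV = standard deviation / mean, expressed in percent."""
    return 100.0 * stdev / mean

# Nimbus-7 channel 10C figures from the abstract.
cv = coefficient_of_variation(1372.02, 0.65)
print(round(cv, 3))  # 0.047 (percent), matching the reported value
```

The arithmetic confirms the reported 0.047 percent, which is only consistent with CV defined as standard deviation over mean, not the reverse.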
Kessler, Thomas; Neumann, Jörg; Mummendey, Amélie; Berthold, Anne; Schubert, Thomas; Waldzus, Sven
2010-09-01
To explain the determinants of negative behavior toward deviants (e.g., punishment), this article examines how people evaluate others on the basis of two types of standards: minimal and maximal. Minimal standards focus on an absolute cutoff point for appropriate behavior; accordingly, the evaluation of others varies dichotomously between acceptable or unacceptable. Maximal standards focus on the degree of deviation from that standard; accordingly, the evaluation of others varies gradually from positive to less positive. This framework leads to the prediction that violation of minimal standards should elicit punishment regardless of the degree of deviation, whereas punishment in response to violations of maximal standards should depend on the degree of deviation. Four studies assessed or manipulated the type of standard and degree of deviation displayed by a target. Results consistently showed the expected interaction between type of standard (minimal and maximal) and degree of deviation on punishment behavior.
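The predicted interaction, all-or-none punishment under minimal standards versus graded punishment under maximal standards, can be stated as a small function. The units and functional forms are arbitrary and purely illustrative:

```python
def punishment(standard_type, deviation, threshold=0.0):
    """Sketch of the predicted pattern: minimal standards yield an
    all-or-none response to any violation; maximal standards yield
    punishment graded by the degree of deviation (arbitrary units)."""
    if standard_type == "minimal":
        return 1.0 if deviation > threshold else 0.0  # dichotomous
    return min(1.0, deviation)                        # graded

# Minimal standard: small and large violations draw equal punishment.
print(punishment("minimal", 0.2) == punishment("minimal", 0.9))  # True
# Maximal standard: punishment scales with the degree of deviation.
print(punishment("maximal", 0.2) < punishment("maximal", 0.9))   # True
```

The dichotomous branch captures the cutoff-point logic of minimal standards; the graded branch captures the degree-of-deviation logic of maximal standards.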
Templeton, David W.; Sluiter, Justin B.; Sluiter, Amie; ...
2016-10-18
In an effort to find economical, carbon-neutral transportation fuels, biomass feedstock compositional analysis methods are used to monitor, compare, and improve biofuel conversion processes. These methods are empirical, and the analytical variability seen in the feedstock compositional data propagates into variability in the conversion yields, component balances, mass balances, and ultimately the minimum ethanol selling price (MESP). We report the average composition and standard deviations of 119 individually extracted National Institute of Standards and Technology (NIST) bagasse [Reference Material (RM) 8491] run by seven analysts over 7 years. Two additional datasets, using bulk-extracted bagasse (containing 58 and 291 replicates each), were examined to separate out the effects of batch, analyst, sugar recovery standard calculation method, and extractions from the total analytical variability seen in the individually extracted dataset. We believe this is the world's largest NIST bagasse compositional analysis dataset and it provides unique insight into the long-term analytical variability. Understanding the long-term variability of the feedstock analysis will help determine the minimum difference that can be detected in yield, mass balance, and efficiency calculations. The long-term data show consistent bagasse component values through time and by different analysts. This suggests that the standard compositional analysis methods were performed consistently and that the bagasse RM itself remained unchanged during this time period. The long-term variability seen here is generally higher than short-term variabilities. It is worth noting that the effect of short-term or long-term feedstock compositional variability on MESP is small, about $0.03 per gallon. The long-term analysis variabilities reported here are plausible minimum values for these methods, though not necessarily average or expected variabilities.
We must emphasize the importance of the training and good analytical procedures needed to generate this data. When combined with a robust QA/QC oversight protocol, these empirical methods can be relied upon to generate high-quality data over a long period of time.
5 CFR 551.601 - Minimum age standards.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 5 Administrative Personnel 1 2011-01-01 2011-01-01 false Minimum age standards. 551.601 Section... ADMINISTRATION UNDER THE FAIR LABOR STANDARDS ACT Child Labor § 551.601 Minimum age standards. (a) 16-year minimum age. The Act, in section 3(l), sets a general 16-year minimum age, which applies to all employment...
5 CFR 551.601 - Minimum age standards.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 5 Administrative Personnel 1 2014-01-01 2014-01-01 false Minimum age standards. 551.601 Section... ADMINISTRATION UNDER THE FAIR LABOR STANDARDS ACT Child Labor § 551.601 Minimum age standards. (a) 16-year minimum age. The Act, in section 3(l), sets a general 16-year minimum age, which applies to all employment...
5 CFR 551.601 - Minimum age standards.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 5 Administrative Personnel 1 2013-01-01 2013-01-01 false Minimum age standards. 551.601 Section... ADMINISTRATION UNDER THE FAIR LABOR STANDARDS ACT Child Labor § 551.601 Minimum age standards. (a) 16-year minimum age. The Act, in section 3(l), sets a general 16-year minimum age, which applies to all employment...
5 CFR 551.601 - Minimum age standards.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 5 Administrative Personnel 1 2012-01-01 2012-01-01 false Minimum age standards. 551.601 Section... ADMINISTRATION UNDER THE FAIR LABOR STANDARDS ACT Child Labor § 551.601 Minimum age standards. (a) 16-year minimum age. The Act, in section 3(l), sets a general 16-year minimum age, which applies to all employment...
Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 1: January
NASA Astrophysics Data System (ADS)
Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.
1989-07-01
The upper atmosphere was studied based on 1980 to 1985 twice-daily gridded analyses produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of January. Included are global analyses of: (1) Mean temperature standard deviation; (2) Mean geopotential height standard deviation; (3) Mean density standard deviation (all for 13 levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (4) Mean dew point standard deviation for the 13 levels; and (5) Jet stream at levels 500 through 30 mb. Also included are global 5-degree grid point wind roses for the 13 pressure levels.
CONSTRAINTS ON HYBRID METRIC-PALATINI GRAVITY FROM BACKGROUND EVOLUTION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lima, N. A.; Barreto, V. S., E-mail: ndal@roe.ac.uk, E-mail: vsm@roe.ac.uk
2016-02-20
In this work, we introduce two models of the hybrid metric-Palatini theory of gravitation. We explore their background evolution, showing explicitly that one recovers standard General Relativity with an effective cosmological constant at late times. This happens because the Palatini Ricci scalar evolves toward and asymptotically settles at the minimum of its effective potential during cosmological evolution. We then use a combination of cosmic microwave background, supernovae, and baryon acoustic oscillation background data to constrain the models' free parameters. For both models, we are able to constrain the maximum deviation from the gravitational constant G one can have at early times to be around 1%.
Binding, N; Schilder, K; Czeschinski, P A; Witting, U
1998-08-01
The 2,4-dinitrophenylhydrazine (2,4-DNPH) derivatization method mainly used for the determination of airborne formaldehyde was extended for acetaldehyde, acetone, 2-butanone, and cyclohexanone, the next four carbonyl compounds of industrial importance. Sampling devices and sampling conditions were adjusted for the respective limit value regulations. Analytical reliability criteria were established and compared to those of other recommended methods. With a minimum analytical range from one tenth to the 3-fold limit value in all cases and with relative standard deviations below 5%, the adjusted method meets all requirements for the reliable quantification of the four compounds in workplace air as well as in ambient air.
NASA Astrophysics Data System (ADS)
Kim, Byung Chan; Park, Seong-Ook
To determine exposure compliance with the electromagnetic fields from a base station's antenna in the far-field region, the spatially averaged field value in a defined space must be calculated. This value is calculated from measurements obtained at several points within the restricted space. According to the ICNIRP guidelines, at each point in the space, the reference levels are averaged over any 6 min (from 100 kHz to 10 GHz) for the general public. Therefore, the more points used, the longer the measurement takes, so in practice it is advantageous to reduce the measurement time. In this paper, we analyzed the difference between average values over 6 min and over shorter periods and compared it with the standard uncertainty for measurement drift. Based on the standard deviation from the 6 min averaging value, the proposed minimum averaging time is 1 min.
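The averaging-time comparison described above can be sketched numerically. The following is a minimal Python illustration with entirely hypothetical field-strength samples; the drift and noise levels are assumptions for illustration, not values from the paper:

```python
import random
import statistics

random.seed(0)

# Hypothetical field-strength samples (V/m), one per second over 6 min (360 s),
# with a slow drift plus measurement noise (both assumed magnitudes).
samples = [1.0 + 0.0001 * t + random.gauss(0, 0.02) for t in range(360)]

avg_6min = statistics.mean(samples)       # reference: full 6-min average
avg_1min = statistics.mean(samples[:60])  # candidate: first 1-min average

# Deviation of the shorter averaging period from the 6-min reference;
# the paper compares this kind of difference against the drift uncertainty.
deviation = abs(avg_1min - avg_6min)
print(f"6-min avg = {avg_6min:.4f} V/m, 1-min avg = {avg_1min:.4f} V/m, "
      f"deviation = {deviation:.4f} V/m")
```

In this toy setup the 1-min average differs from the 6-min reference mainly through the assumed drift term, which is the trade-off the abstract quantifies.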
Novel Approach to Analyzing MFE of Noncoding RNA Sequences
George, Tina P.; Thomas, Tessamma
2016-01-01
Genomic studies have become noncoding RNA (ncRNA) centric after the study of different genomes provided enormous information on ncRNA over the past decades. The function of ncRNA is decided by its secondary structure, and across organisms, the secondary structure is more conserved than the sequence itself. In this study, the optimal secondary structure or the minimum free energy (MFE) structure of ncRNA was found based on the thermodynamic nearest neighbor model. MFE of over 2600 ncRNA sequences was analyzed in view of its signal properties. Mathematical models linking MFE to the signal properties were found for each of the four classes of ncRNA analyzed. MFE values computed with the proposed models were in concordance with those obtained with the standard web servers. A total of 95% of the sequences analyzed had deviation of MFE values within ±15% relative to those obtained from standard web servers. PMID:27695341
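The ±15% concordance criterion above reduces to a simple relative-deviation check. A minimal sketch with hypothetical MFE values (not data from the study) follows:

```python
# Hypothetical MFE values (kcal/mol): model predictions vs. web-server references.
model_mfe  = [-25.1, -40.3, -18.2, -33.0, -55.7]
server_mfe = [-24.0, -42.0, -17.5, -25.0, -54.9]

def within_tolerance(pred, ref, tol=0.15):
    """True if pred deviates from ref by at most tol (relative to ref)."""
    return abs(pred - ref) <= tol * abs(ref)

n_ok = sum(within_tolerance(p, r) for p, r in zip(model_mfe, server_mfe))
concordance = 100.0 * n_ok / len(model_mfe)
print(f"{concordance:.0f}% of sequences within ±15% of the reference MFE")
```

Here four of the five toy pairs pass the ±15% check, so the printed concordance is 80%.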
Comparing Standard Deviation Effects across Contexts
ERIC Educational Resources Information Center
Ost, Ben; Gangopadhyaya, Anuj; Schiman, Jeffrey C.
2017-01-01
Studies using test scores as the dependent variable often report point estimates in student standard deviation units. We note that a standard deviation is not a standard unit of measurement since the distribution of test scores can vary across contexts. As such, researchers should be cautious when interpreting differences in the numerical size of…
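The point that standard deviation units are not comparable across contexts can be shown with a toy calculation (all numbers are hypothetical):

```python
# The same 5-point raw score gain expressed in standard deviation units
# under two contexts whose test-score distributions have different spreads.
raw_gain = 5.0
sd_context_a = 10.0  # narrow score distribution
sd_context_b = 20.0  # wide score distribution

effect_a = raw_gain / sd_context_a  # 0.50 SD
effect_b = raw_gain / sd_context_b  # 0.25 SD

# The identical raw improvement looks twice as large in context A,
# which is why SD-unit effect sizes should not be compared naively.
print(f"context A: {effect_a:.2f} SD, context B: {effect_b:.2f} SD")
```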
Mancia, G; Ferrari, A; Gregorini, L; Parati, G; Pomidossi, G; Bertinieri, G; Grassi, G; Zanchetti, A
1980-12-01
1. Intra-arterial blood pressure and heart rate were recorded for 24 h in ambulant hospitalized patients of variable age who had normal blood pressure or essential hypertension. Mean 24 h values, standard deviations and variation coefficients were obtained as the averages of values separately analysed for 48 consecutive half-hour periods. 2. In older subjects standard deviation and variation coefficient for mean arterial pressure were greater than in younger subjects with similar pressure values, whereas standard deviation and variation coefficient for heart rate were smaller. 3. In hypertensive subjects standard deviation for mean arterial pressure was greater than in normotensive subjects of similar ages, but this was not the case for variation coefficient, which was slightly smaller in the former than in the latter group. Normotensive and hypertensive subjects showed no difference in standard deviation and variation coefficient for heart rate. 4. In both normotensive and hypertensive subjects standard deviation and even more so variation coefficient were slightly or not related to arterial baroreflex sensitivity as measured by various methods (phenylephrine, neck suction, etc.). 5. It is concluded that blood pressure variability increases and heart rate variability decreases with age, but that changes in variability are not so obvious in hypertension. Also, differences in variability among subjects are only marginally explained by differences in baroreflex function.
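The two variability measures used in the study, standard deviation and variation coefficient (SD divided by the mean), can be sketched as follows with hypothetical half-hour mean arterial pressures:

```python
import statistics

# Hypothetical half-hour mean arterial pressures (mmHg); the study used
# 48 consecutive half-hour periods, abbreviated here for illustration.
map_values = [96, 102, 99, 110, 105, 94, 98, 101, 107, 103, 100, 97]

mean_map = statistics.mean(map_values)
sd_map = statistics.stdev(map_values)    # variability in absolute units
cv_map = 100.0 * sd_map / mean_map       # variation coefficient (%), scale-free

print(f"mean = {mean_map:.1f} mmHg, SD = {sd_map:.1f} mmHg, CV = {cv_map:.1f}%")
```

The distinction matters for the abstract's point 3: hypertensives can have a larger SD (higher pressures inflate absolute spread) yet a similar or smaller CV once the spread is normalized by the mean.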
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-20
... Committee 147: Minimum Operational Performance Standards for Traffic Alert and Collision Avoidance Systems... Committee 147: Minimum Operational Performance Standards for Traffic Alert and Collision Avoidance Systems... RTCA Special Committee 147: Minimum Operational Performance Standards for Traffic Alert and Collision...
Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 7: July
NASA Astrophysics Data System (ADS)
Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.
1989-07-01
The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analyses produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of July. Included are global analyses of: (1) Mean temperature/standard deviation; (2) Mean geopotential height/standard deviation; (3) Mean density/standard deviation; (4) Height and vector standard deviation (all at 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point/standard deviation at levels 1000 through 30 mb; and (6) Jet stream at levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.
Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 10: October
NASA Astrophysics Data System (ADS)
Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.
1989-07-01
The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analyses produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of October. Included are global analyses of: (1) Mean temperature/standard deviation; (2) Mean geopotential height/standard deviation; (3) Mean density/standard deviation; (4) Height and vector standard deviation (all at 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point/standard deviation at levels 1000 through 30 mb; and (6) Jet stream at levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.
Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 3: March
NASA Astrophysics Data System (ADS)
Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.
1989-11-01
The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analyses produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of March. Included are global analyses of: (1) Mean temperature standard deviation; (2) Mean geopotential height standard deviation; (3) Mean density standard deviation; (4) Height and vector standard deviation (all for 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point standard deviation for levels 1000 through 30 mb; and (6) Jet stream for levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.
Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 2: February
NASA Astrophysics Data System (ADS)
Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.
1989-09-01
The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analyses produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of February. Included are global analyses of: (1) Mean temperature standard deviation; (2) Mean geopotential height standard deviation; (3) Mean density standard deviation; (4) Height and vector standard deviation (all for 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point standard deviation for the 13 levels; and (6) Jet stream for levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.
Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 4: April
NASA Astrophysics Data System (ADS)
Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.
1989-07-01
The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analyses produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of April. Included are global analyses of: (1) Mean temperature standard deviation; (2) Mean geopotential height standard deviation; (3) Mean density standard deviation; (4) Height and vector standard deviation (all for 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point standard deviation for the 13 levels; and (6) Jet stream for levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.
DeSantis, Michael C; DeCenzo, Shawn H; Li, Je-Luen; Wang, Y M
2010-03-29
Standard deviation measurements of intensity profiles of stationary single fluorescent molecules are useful for studying axial localization, molecular orientation, and a fluorescence imaging system's spatial resolution. Here we report on the analysis of the precision of standard deviation measurements of intensity profiles of single fluorescent molecules imaged using an EMCCD camera. We have developed an analytical expression for the standard deviation measurement error of a single image which is a function of the total number of detected photons, the background photon noise, and the camera pixel size. The theoretical results agree well with the experimental, simulation, and numerical integration results. Using this expression, we show that single-molecule standard deviation measurements offer nanometer precision for a large range of experimental parameters.
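The kind of precision analysis described above can be sketched by simulation, assuming an idealized camera with no background noise or pixelation (a deliberate simplification of the paper's model; all parameter values are assumptions):

```python
import random
import statistics

random.seed(1)

TRUE_SD = 150.0    # nm, assumed SD of the molecule's Gaussian intensity profile
N_PHOTONS = 2000   # assumed detected photons per single image
N_IMAGES = 200     # repeated single-image measurements

estimates = []
for _ in range(N_IMAGES):
    # Each detected photon position is a draw from the Gaussian intensity profile.
    photons = [random.gauss(0.0, TRUE_SD) for _ in range(N_PHOTONS)]
    estimates.append(statistics.stdev(photons))  # one SD measurement per image

# Spread of the SD estimates across images = the measurement precision.
precision = statistics.stdev(estimates)
print(f"mean SD estimate = {statistics.mean(estimates):.1f} nm, "
      f"precision = {precision:.1f} nm")
```

With these toy numbers the precision comes out at a few nanometers, consistent with the scale of the "nanometer precision" claim; adding background noise and finite pixel size, as the paper does, would degrade it.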
5 CFR 890.202 - Minimum standards for health benefits carriers.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 5 Administrative Personnel 2 2012-01-01 2012-01-01 false Minimum standards for health benefits... SERVICE REGULATIONS (CONTINUED) FEDERAL EMPLOYEES HEALTH BENEFITS PROGRAM Health Benefits Plans § 890.202 Minimum standards for health benefits carriers. The minimum standards for health benefits carriers for the...
5 CFR 890.202 - Minimum standards for health benefits carriers.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 5 Administrative Personnel 2 2011-01-01 2011-01-01 false Minimum standards for health benefits... SERVICE REGULATIONS (CONTINUED) FEDERAL EMPLOYEES HEALTH BENEFITS PROGRAM Health Benefits Plans § 890.202 Minimum standards for health benefits carriers. The minimum standards for health benefits carriers for the...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-03
... Committee 147: Minimum Operational Performance Standards for Traffic Alert and Collision Avoidance Systems... Committee 147 meeting: Minimum Operational Performance Standards for Traffic Alert and Collision Avoidance... RTCA Special Committee 147: Minimum Operational Performance Standards for Traffic Alert and Collision...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-26
... Committee 147: Minimum Operational Performance Standards for Traffic Alert and Collision Avoidance Systems... Committee 147: Minimum Operational Performance Standards for Traffic Alert and Collision Avoidance Systems... RTCA Special Committee 147: Minimum Operational Performance Standards for Traffic Alert and Collision...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-19
... Committee 147: Minimum Operational Performance Standards for Traffic Alert and Collision Avoidance Systems... Committee 147 meeting: Minimum Operational Performance Standards for Traffic Alert and Collision Avoidance... RTCA Special Committee 147: Minimum Operational Performance Standards for Traffic Alert and Collision...
Analysis of Global Urban Temperature Trends and Urbanization Impacts
NASA Astrophysics Data System (ADS)
Lee, K. I.; Ryu, J.; Jeon, S. W.
2018-04-01
Due to urbanization, green space in urban areas is shrinking while concrete and asphalt pavement expand, so urban climates differ from those of non-urban areas. In addition, long-term macroscopic studies of urban climate change are becoming more important as global urbanization affects global warming. This requires analyzing the effect of urbanization on temporal changes in urban temperature using the same temperature data and standards for urban areas around the world. In this study, time series analysis was performed on the maximum, minimum, mean, and standard deviation values of surface temperature from 1980 to 2010, and the effect of urbanization was analyzed through linear regression with explanatory variables (population, night light, NDVI, urban area). As a result, the minimum surface temperature of urban areas increased at a rate of 0.28 K decade-1 over the past 31 years, the maximum at 0.372 K decade-1, and the mean at 0.208 K decade-1, while the standard deviation decreased at a rate of 0.023 K decade-1. The change of surface temperature in urban areas is affected by urbanization related to land cover, such as the decrease of greenery and the increase of paved area, but socioeconomic variables were less influential than NDVI in this study. This study is expected to provide an approach for future research and policy planning on urban temperature change and urbanization impacts.
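Decadal trends like those reported above are typically obtained from an ordinary least-squares fit of temperature against year. A minimal sketch with synthetic data follows; the 0.28 K decade-1 slope is built in for illustration and is not the study's data:

```python
# Synthetic annual minimum surface temperatures (K) for one urban cell, 1980-2010,
# constructed with a 0.028 K/yr (0.28 K/decade) linear rise.
years = list(range(1980, 2011))
temps = [285.0 + 0.028 * (y - 1980) for y in years]

# Ordinary least-squares slope: cov(year, temp) / var(year).
n = len(years)
mean_y = sum(years) / n
mean_t = sum(temps) / n
slope = (sum((y - mean_y) * (t - mean_t) for y, t in zip(years, temps))
         / sum((y - mean_y) ** 2 for y in years))  # K per year

print(f"trend = {slope * 10:.3f} K per decade")
```

On real data the same fit would be applied per statistic (minimum, maximum, mean, standard deviation) to produce the per-decade rates quoted in the abstract.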
Thomas-Gibson, Siwan; Bugajski, Marek; Bretthauer, Michael; Rees, Colin J; Dekker, Evelien; Hoff, Geir; Jover, Rodrigo; Suchanek, Stepan; Ferlitsch, Monika; Anderson, John; Roesch, Thomas; Hultcranz, Rolf; Racz, Istvan; Kuipers, Ernst J; Garborg, Kjetil; East, James E; Rupinski, Maciej; Seip, Birgitte; Bennett, Cathy; Senore, Carlo; Minozzi, Silvia; Bisschops, Raf; Domagk, Dirk; Valori, Roland; Spada, Cristiano; Hassan, Cesare; Dinis-Ribeiro, Mario; Rutter, Matthew D
2017-01-01
The European Society of Gastrointestinal Endoscopy and United European Gastroenterology present a short list of key performance measures for lower gastrointestinal endoscopy. We recommend that endoscopy services across Europe adopt the following seven key performance measures for lower gastrointestinal endoscopy for measurement and evaluation in daily practice at a center and endoscopist level: (1) rate of adequate bowel preparation (minimum standard 90%); (2) cecal intubation rate (minimum standard 90%); (3) adenoma detection rate (minimum standard 25%); (4) appropriate polypectomy technique (minimum standard 80%); (5) complication rate (minimum standard not set); (6) patient experience (minimum standard not set); (7) appropriate post-polypectomy surveillance recommendations (minimum standard not set). Other identified performance measures have been listed as less relevant based on an assessment of their importance, scientific acceptability, feasibility, usability, and comparison to competing measures. PMID:28507745
24 CFR 200.933 - Changes in minimum property standards.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Register. As the changes are made, they will be incorporated into the volumes of the Minimum Property... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Changes in minimum property... Changes in minimum property standards. Changes in the Minimum Property Standards will generally be made...
24 CFR 200.933 - Changes in minimum property standards.
Code of Federal Regulations, 2011 CFR
2011-04-01
... Register. As the changes are made, they will be incorporated into the volumes of the Minimum Property... 24 Housing and Urban Development 2 2011-04-01 2011-04-01 false Changes in minimum property... Changes in minimum property standards. Changes in the Minimum Property Standards will generally be made...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-30
... Information Collection: Comment Request; Minimum Property Standards for Multifamily and Care-Type Facilities...: Minimum Property Standards for Multifamily and Care-type facilities. OMB Control Number, if applicable... Housing and Urban Development (HUD) developed the Minimum Property Standards (MPS) program in order to...
7 CFR 953.43 - Minimum standards of quality.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Minimum standards of quality. 953.43 Section 953.43... SOUTHEASTERN STATES Order Regulating Handling Regulations § 953.43 Minimum standards of quality. (a) Recommendation. Whenever the committee deems it advisable to establish and maintain minimum standards of quality...
7 CFR 953.43 - Minimum standards of quality.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 8 2013-01-01 2013-01-01 false Minimum standards of quality. 953.43 Section 953.43... SOUTHEASTERN STATES Order Regulating Handling Regulations § 953.43 Minimum standards of quality. (a) Recommendation. Whenever the committee deems it advisable to establish and maintain minimum standards of quality...
7 CFR 953.43 - Minimum standards of quality.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 8 2011-01-01 2011-01-01 false Minimum standards of quality. 953.43 Section 953.43... SOUTHEASTERN STATES Order Regulating Handling Regulations § 953.43 Minimum standards of quality. (a) Recommendation. Whenever the committee deems it advisable to establish and maintain minimum standards of quality...
7 CFR 953.43 - Minimum standards of quality.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 8 2014-01-01 2014-01-01 false Minimum standards of quality. 953.43 Section 953.43... SOUTHEASTERN STATES Order Regulating Handling Regulations § 953.43 Minimum standards of quality. (a) Recommendation. Whenever the committee deems it advisable to establish and maintain minimum standards of quality...
7 CFR 953.43 - Minimum standards of quality.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 8 2012-01-01 2012-01-01 false Minimum standards of quality. 953.43 Section 953.43... SOUTHEASTERN STATES Order Regulating Handling Regulations § 953.43 Minimum standards of quality. (a) Recommendation. Whenever the committee deems it advisable to establish and maintain minimum standards of quality...
25 CFR 36.20 - Standard V-Minimum academic programs/school calendar.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 25 Indians 1 2012-04-01 2011-04-01 true Standard V-Minimum academic programs/school calendar. 36... ACADEMIC STANDARDS FOR THE BASIC EDUCATION OF INDIAN CHILDREN AND NATIONAL CRITERIA FOR DORMITORY SITUATIONS Minimum Program of Instruction § 36.20 Standard V—Minimum academic programs/school calendar. (a...
25 CFR 36.20 - Standard V-Minimum academic programs/school calendar.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 1 2013-04-01 2013-04-01 false Standard V-Minimum academic programs/school calendar. 36... ACADEMIC STANDARDS FOR THE BASIC EDUCATION OF INDIAN CHILDREN AND NATIONAL CRITERIA FOR DORMITORY SITUATIONS Minimum Program of Instruction § 36.20 Standard V—Minimum academic programs/school calendar. (a...
25 CFR 36.20 - Standard V-Minimum academic programs/school calendar.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 1 2014-04-01 2014-04-01 false Standard V-Minimum academic programs/school calendar. 36... ACADEMIC STANDARDS FOR THE BASIC EDUCATION OF INDIAN CHILDREN AND NATIONAL CRITERIA FOR DORMITORY SITUATIONS Minimum Program of Instruction § 36.20 Standard V—Minimum academic programs/school calendar. (a...
NASA Astrophysics Data System (ADS)
Chen, Wo-Hsing; Sanghvi, Narendra T.; Carlson, Roy; Uchida, Toyoaki
2011-09-01
Sonablate® 500 (SB-500) HIFU devices have been successfully used to treat prostate cancer non-invasively. In addition, Visually Directed HIFU with the SB-500 has demonstrated higher efficacy. Visually Directed HIFU works by displaying hyperechoic changes on the B-mode ultrasound images. However, small changes in the grey-scale images are not detectable by Visually Directed HIFU. To detect all tissue changes reliably, the SB-500 was enhanced with quantitative, real-time Tissue Change Monitoring (TCM) software. TCM uses pulse-echo ultrasound backscattered RF signals in 2D to estimate changes in the tissue properties caused by HIFU. The RF signal energy difference is calculated in selected frequency bands (pre- and post-HIFU) for each treatment site. The results are overlaid on the real-time ultrasound image in green, yellow and orange to represent low, medium and high degrees of change in backscattered energy levels. The color mapping scheme was derived from measured temperature and backscattered RF signals in in vitro chicken tissue experiments. The TCM software was installed and tested in a clinical device to obtain human RF data. Post-HIFU contrast-enhanced MRI scans verified necrotic regions of the prostate. The color mapping success rate at higher HIFU power levels was 94% in the initial clinical test. Based on these results, TCM software has been released for wider usage. The clinical studies with TCM in Japan and The Bahamas have provided the following PSA (ng/ml) results (pre-treatment/post-treatment). Japan (n = 97): minimum 0.7/0.0, maximum 76.0/4.73, median 6.89/0.07, standard deviation 11.19/0.62. The Bahamas (n = 59): minimum 0.4/0.0, maximum 13.0/1.4, median 4.7/0.1, standard deviation 2.8/0.3.
PTV margin determination in conformal SRT of intracranial lesions
Parker, Brent C.; Shiu, Almon S.; Maor, Moshe H.; Lang, Frederick F.; Liu, H. Helen; White, R. Allen; Antolak, John A.
2002-01-01
The planning target volume (PTV) includes the clinical target volume (CTV) to be irradiated and a margin to account for uncertainties in the treatment process. Uncertainties in miniature multileaf collimator (mMLC) leaf positioning, CT scanner spatial localization, CT‐MRI image fusion spatial localization, and Gill‐Thomas‐Cosman (GTC) relocatable head frame repositioning were quantified for the purpose of determining a minimum PTV margin that still delivers a satisfactory CTV dose. The measured uncertainties were then incorporated into a simple Monte Carlo calculation for evaluation of various margin and fraction combinations. Satisfactory CTV dosimetric criteria were selected to be a minimum CTV dose of 95% of the PTV dose and at least 95% of the CTV receiving 100% of the PTV dose. The measured uncertainties were assumed to be Gaussian distributions. Systematic errors were added linearly and random errors were added in quadrature assuming no correlation to arrive at the total combined error. The Monte Carlo simulation written for this work examined the distribution of cumulative dose volume histograms for a large patient population using various margin and fraction combinations to determine the smallest margin required to meet the established criteria. The program examined 5 and 30 fraction treatments, since those are the only fractionation schemes currently used at our institution. The fractionation schemes were evaluated using no margin, a margin of just the systematic component of the total uncertainty, and a margin of the systematic component plus one standard deviation of the total uncertainty. 
It was concluded that (i) a margin of the systematic error plus one standard deviation of the total uncertainty is the smallest PTV margin necessary to achieve the established CTV dose criteria, and (ii) it is necessary to determine the uncertainties introduced by the specific equipment and procedures used at each institution since the uncertainties may vary among locations. PACS number(s): 87.53.Kn, 87.53.Ly PMID:12132939
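The error-combination rule stated above, systematic components added linearly and random components added in quadrature, can be sketched as follows. The component values are hypothetical placeholders, not the measured uncertainties from the study:

```python
import math

# Hypothetical uncertainty components (mm): (systematic, random) per source,
# named after the sources quantified in the abstract.
sources = {
    "mMLC leaf positioning": (0.2, 0.3),
    "CT localization":       (0.3, 0.4),
    "CT-MRI fusion":         (0.5, 0.6),
    "frame repositioning":   (0.4, 0.7),
}

systematic_total = sum(s for s, _ in sources.values())              # added linearly
random_total = math.sqrt(sum(r ** 2 for _, r in sources.values()))  # in quadrature

# One margin recipe the abstract evaluates: systematic component plus
# one standard deviation of the (random) uncertainty.
margin = systematic_total + random_total
print(f"systematic = {systematic_total:.2f} mm, random (1 SD) = {random_total:.2f} mm, "
      f"margin = {margin:.2f} mm")
```

The abstract's Monte Carlo step then wraps this combination in repeated sampling of patient setups to check the CTV dose criteria; only the deterministic combination rule is shown here.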
Gerdes, Lars; Busch, Ulrich; Pecoraro, Sven
2014-12-14
According to Regulation (EU) No 619/2011, trace amounts of non-authorised genetically modified organisms (GMO) in feed are tolerated within the EU if certain prerequisites are met. Tolerable traces must not exceed the so-called 'minimum required performance limit' (MRPL), which was defined according to the mentioned regulation to correspond to 0.1% mass fraction per ingredient. Therefore, not yet authorised GMO (and some GMO whose approvals have expired) have to be quantified at very low level following the qualitative detection in genomic DNA extracted from feed samples. As the results of quantitative analysis can imply severe legal and financial consequences for producers or distributors of feed, the quantification results need to be utterly reliable. We developed a statistical approach to investigate the experimental measurement variability within one 96-well PCR plate. This approach visualises the frequency distribution as zygosity-corrected relative content of genetically modified material resulting from different combinations of transgene and reference gene Cq values. One application of it is the simulation of the consequences of varying parameters on measurement results. Parameters could be, for example, replicate numbers or baseline and threshold settings; measurement results could be, for example, median (class) and relative standard deviation (RSD). All calculations can be done using the built-in functions of Excel without any need for programming. The developed Excel spreadsheets are available (see section 'Availability of supporting data' for details). In most cases, the combination of four PCR replicates for each of the two DNA isolations already resulted in a relative standard deviation of 15% or less. The aims of the study are scientifically based suggestions for minimisation of uncertainty of measurement especially in, but not limited to, the field of GMO quantification at low concentration levels.
Four PCR replicates for each of the two DNA isolations seem to be a reasonable minimum number to narrow down the possible spread of results.
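The core of the calculation, relative GM content from transgene and reference-gene Cq values plus the replicate RSD, can be sketched as follows. The Cq values are hypothetical, 100% PCR efficiency is assumed, and the paper's zygosity correction is omitted:

```python
import statistics

# Hypothetical Cq values from four PCR replicates (transgene vs. reference gene).
cq_transgene = [33.1, 33.4, 33.0, 33.3]
cq_reference = [23.2, 23.1, 23.3, 23.2]

# Per-replicate relative GM content under the standard delta-Cq model with
# 100% efficiency: content = 2^-(Cq_transgene - Cq_reference).
contents = [2 ** -(t - r) for t, r in zip(cq_transgene, cq_reference)]

mean_pct = 100.0 * statistics.mean(contents)
rsd_pct = 100.0 * statistics.stdev(contents) / statistics.mean(contents)
print(f"GM content = {mean_pct:.3f}%, RSD = {rsd_pct:.1f}%")
```

With a delta-Cq near 10 cycles the estimated content lands near the 0.1% MRPL, which is exactly the regime where the paper shows replicate-to-replicate Cq scatter translates into a large RSD.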
Three Essays In and Tests of Theoretical Urban Economics
NASA Astrophysics Data System (ADS)
Zhao, Weihua
This dissertation consists of three essays on urban economics. The three essays are related to urban spatial structure change, energy consumption, greenhouse gas emissions, and housing redevelopment. Chapter 1 answers the question: Does the classic Standard Urban Model still describe the growth of cities? Chapter 2 derives the implications of telework on urban spatial structure, energy consumption, and greenhouse gas emissions. Chapter 3 investigates the long run effects of minimum lot size zoning on neighborhood redevelopment. Chapter 1 identifies a new implication of the classic Standard Urban Model, the "unitary elasticity property (UEP)", which holds that the sum of the elasticity of central density and the elasticity of land area with respect to population change is approximately equal to unity. When this implication of the SUM is tested, it fits US cities fairly well. Further analysis demonstrates that topographic barriers and age of housing stock are the key factors explaining deviation from the UEP. Chapter 2 develops a numerical urban simulation model with households that are able to telework to investigate the urban form, congestion, energy consumption and greenhouse gas emission implications of telework. Simulation results suggest that by reducing transportation costs, telework causes sprawl, with associated longer commutes and consumption of larger homes, both of which increase energy consumption. Overall effects depend on who captures the gains from telework (workers versus firms), urban land use regulation such as height limits or greenbelts, and the fraction of workers participating in telework. The net effects of telework on energy use and GHG emissions are generally negligible. Chapter 3 applies dynamic programming to investigate the long run effects of minimum lot size zoning on neighborhood redevelopment.
With numerical simulation, comparative dynamic results show that minimum lot size zoning can delay initial land conversion and slow down demolition and housing redevelopment. Initially, minimum lot size zoning is not binding. However, as the city grows, it becomes binding and can effectively distort housing supply. It can lower both the floor area ratio and residential density, and reduce aggregate housing supply. Overall, minimum lot size zoning can stabilize the path of structure/land ratios, housing service levels, structure density, and housing prices. In addition, minimum lot size zoning gives developers more incentive to maintain the building, slowing structure deterioration and raising the minimum level of housing services provided over the life cycle of development.
A comparative appraisal of two equivalence tests for multiple standardized effects.
Shieh, Gwowen
2016-04-01
Equivalence testing is recommended as a better alternative to the traditional difference-based methods for demonstrating the comparability of two or more treatment effects. Although equivalence tests for two groups are widely discussed, the natural extensions for assessing equivalence between several groups have not been well examined. This article provides a detailed and schematic comparison of the ANOVA F and the studentized range tests for evaluating the comparability of several standardized effects. Power and sample size appraisals of the two grossly distinct approaches are conducted in terms of a constraint on the range of the standardized means when the standard deviation of the standardized means is fixed. Although neither method is uniformly more powerful, the studentized range test has a clear advantage in sample size requirements necessary to achieve a given power when the underlying effect configurations are close to the a priori minimum difference for determining equivalence. For actual application of equivalence tests and advance planning of equivalence studies, both SAS and R computer codes are available as supplementary files to implement the calculations of critical values, p-values, power levels, and sample sizes. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
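A schematic of the range-based idea, not the exact statistic or critical values from the article, might look like this; the group means and common group size are assumptions for illustration:

```python
# Hypothetical standardized group means (effect sizes) for four treatments.
std_means = [0.10, 0.15, 0.22, 0.30]
n_per_group = 30  # assumed common group size

# A studentized-range-type quantity: the range of standardized means,
# scaled by sqrt(n) so it can be referred to a critical value.
rng = max(std_means) - min(std_means)
q_stat = rng * (n_per_group ** 0.5)

# For equivalence, the decision is inverted relative to a difference test:
# a SMALL range statistic (below the critical value) supports comparability.
print(f"range = {rng:.2f}, q statistic = {q_stat:.3f}")
```

The article's actual procedure additionally requires noncentral distribution calculations for critical values and power, which the supplementary SAS and R codes provide.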
Exploring Students' Conceptions of the Standard Deviation
ERIC Educational Resources Information Center
delMas, Robert; Liu, Yan
2005-01-01
This study investigated introductory statistics students' conceptual understanding of the standard deviation. A computer environment was designed to promote students' ability to coordinate characteristics of variation of values about the mean with the size of the standard deviation as a measure of that variation. Twelve students participated in an…
7 CFR 801.4 - Tolerances for dockage testers.
Code of Federal Regulations, 2012 CFR
2012-01-01
....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...
7 CFR 801.4 - Tolerances for dockage testers.
Code of Federal Regulations, 2014 CFR
2014-01-01
....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...
7 CFR 801.4 - Tolerances for dockage testers.
Code of Federal Regulations, 2011 CFR
2011-01-01
....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...
7 CFR 801.4 - Tolerances for dockage testers.
Code of Federal Regulations, 2013 CFR
2013-01-01
....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...
Statistics as Unbiased Estimators: Exploring the Teaching of Standard Deviation
ERIC Educational Resources Information Center
Wasserman, Nicholas H.; Casey, Stephanie; Champion, Joe; Huey, Maryann
2017-01-01
This manuscript presents findings from a study about the knowledge for and planned teaching of standard deviation. We investigate how understanding variance as an unbiased (inferential) estimator--not just a descriptive statistic for the variation (spread) in data--is related to teachers' instruction regarding standard deviation, particularly…
7 CFR 801.4 - Tolerances for dockage testers.
Code of Federal Regulations, 2010 CFR
2010-01-01
....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...
7 CFR 801.6 - Tolerances for moisture meters.
Code of Federal Regulations, 2010 CFR
2010-01-01
... moisture, mean deviation from National standard moisture meter using Hard Red Winter wheat Mid ±0.05 percent moisture, mean deviation from National standard moisture meter using Hard Red Winter wheat High ±0.05 percent moisture, mean deviation from National standard moisture meter using Hard Red Winter wheat...
Wisneski, Kimberly J; Johnson, Michelle J
2007-03-23
Robotic therapy is at the forefront of stroke rehabilitation. The Activities of Daily Living Exercise Robot (ADLER) was developed to improve carryover of gains after training by combining the benefits of Activities of Daily Living (ADL) training (motivation and functional task practice with real objects) with the benefits of robot-mediated therapy (repeatability and reliability). In combining these two therapy techniques, we seek to develop a new model for trajectory generation that will support functional movements to real objects during robot training. We studied natural movements to real objects and report on how initial reaching movements are affected by real objects and how these movements deviate from the straight-line paths predicted by the minimum jerk model, typically used to generate trajectories in robot training environments. We highlight key issues that need to be considered in modelling natural trajectories. Movement data were collected as eight normal subjects completed ADLs such as drinking and eating. Three conditions were considered: object absent, imagined, and present. These data were compared to predicted trajectories generated by implementing the minimum jerk model. The deviations in both the plane of the table (XY) and the sagittal plane of the torso (XZ) were examined for both reaches to a cup and to a spoon. Velocity profiles and curvature were also quantified for all trajectories. We hypothesized that movements performed with functional task constraints and objects would deviate from the minimum jerk trajectory model more than those performed under imaginary or object-absent conditions. Trajectory deviations from the predicted minimum jerk model for these reaches were shown to depend on three variables: object presence, object orientation, and plane of movement. When subjects completed the cup reach, their movements were more curved than for the spoon reach.
The object present condition for the cup reach showed more curvature than in the object imagined and absent conditions. Curvature in the XZ plane of movement was greater than curvature in the XY plane for all movements. The implemented minimum jerk trajectory model was not adequate for generating functional trajectories for these ADLs. The deviations caused by object affordance and functional task constraints must be accounted for in order to allow subjects to perform functional task training in robotic therapy environments. The major differences that we have highlighted include trajectory dependence on: object presence, object orientation, and the plane of movement. With the ability to practice ADLs on the ADLER environment we hope to provide patients with a therapy paradigm that will produce optimal results and recovery.
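The minimum jerk model referenced above predicts a straight-line path in space with a bell-shaped speed profile. A minimal sketch of the standard polynomial form (the function name is ours):

```python
import numpy as np


def minimum_jerk(x0, xf, n=101):
    """Minimum jerk trajectory between points x0 and xf (any dimension).

    Position follows x0 + (xf - x0) * (10 tau^3 - 15 tau^4 + 6 tau^5)
    in normalized time tau, which gives zero velocity and acceleration
    at both endpoints and a bell-shaped speed profile - the prediction
    the observed reaches are compared against.
    """
    x0, xf = np.asarray(x0, float), np.asarray(xf, float)
    tau = np.linspace(0.0, 1.0, n)[:, None]          # normalized time
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5       # smooth 0 -> 1 blend
    return x0 + (xf - x0) * s
```

By construction the path is a straight line regardless of movement duration, which is why curvature of recorded reaches is a direct measure of deviation from the model.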
NASA Astrophysics Data System (ADS)
Viana, Liviany; Herdies, Dirceu; Muller, Gabriela
2017-04-01
An observational study was carried out to quantify cold air outbreak events moving across the Equator from 1980 to 2013 during the austral winter period (May, June, July, August, and September), and then to analyze the behavior of the circulation responsible for this displacement. Observational datasets from the Sector of Climatological Studies of the Institute of Airspace Control for the city of Iauarete (0.61N, 69.0W; 120 m), located at the extreme north of the Brazilian Amazon Basin, were used for the analyses. The meteorological variables used were the minimum temperature, the maximum temperature, and the maximum atmospheric pressure. A new methodology was used to identify these events: an event is flagged when the air temperature extremes fall below the monthly average minus two standard deviations, and the maximum atmospheric pressure exceeds the monthly average plus one standard deviation. As a result, a total of 11 cold events were recorded that reached the extreme north of the Brazilian Amazon Basin, with a recorded minimum temperature of 17.8 °C, a maximum temperature of 21.0 °C, and a maximum atmospheric pressure reaching 1021.2 hPa. These reductions and this augmentation are equivalent to negative anomalies of 5.9 and 8.7 °C in the minimum and maximum temperatures, respectively, while a positive anomaly of 7.1 hPa was observed in the maximum pressure. Regarding the dynamic behavior of the large-scale circulation, a Rossby-wave-type configuration propagating from west to east over subtropical latitudes was observed in European Centre for Medium-Range Weather Forecasts (ECMWF) data in the days before the arrival of each event in the city of Iauarete. This behavior was observed both in the geopotential anomalies (250 hPa and 850 hPa) and in the meridional component of the wind (250 hPa and 850 hPa), both showing statistical significance at the 99% level (Student's t-test).
Therefore, the new criterion for the identification of "friagens" at tropical latitudes was able to represent the effects of cold air outbreaks and the advance of the cold air mass, which are supported by the large-scale circulation and consequently contribute to changes in the weather and the life of the population of this Equatorial region.
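The detection criterion just described can be sketched in a few lines. This is our reading of the method, not the authors' code: whether the temperature and pressure thresholds must be met jointly on the same day is an assumption, and the monthly stratification is collapsed to a single series for brevity.

```python
import numpy as np


def cold_event_mask(tmin, tmax, pmax):
    """Flag days matching a 'friagem'-style criterion (our reading):
    minimum and maximum temperature below (mean - 2 SD) of their series
    and maximum pressure above (mean + 1 SD) of its series."""
    tmin, tmax, pmax = map(np.asarray, (tmin, tmax, pmax))
    cold_tmin = tmin < tmin.mean() - 2 * tmin.std()
    cold_tmax = tmax < tmax.mean() - 2 * tmax.std()
    high_p = pmax > pmax.mean() + 1 * pmax.std()
    return cold_tmin & cold_tmax & high_p
```

A real implementation would compute the mean and standard deviation per calendar month before thresholding, as the abstract describes.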
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Boyan; Ou, Longwen; Dang, Qi
This study evaluates the techno-economic uncertainty in cost estimates for two emerging biorefinery technologies for biofuel production: in situ and ex situ catalytic pyrolysis. Stochastic simulations based on process and economic parameter distributions are applied to calculate biorefinery performance and production costs. The probability distributions for the minimum fuel-selling price (MFSP) indicate that in situ catalytic pyrolysis has an expected MFSP of $4.20 per gallon with a standard deviation of 1.15, while ex situ catalytic pyrolysis has a similar MFSP with a smaller deviation ($4.27 per gallon and 0.79, respectively). These results suggest that a biorefinery based on ex situ catalytic pyrolysis could have a lower techno-economic risk than in situ pyrolysis despite a slightly higher MFSP cost estimate. Analysis of how each parameter affects the NPV indicates that internal rate of return, feedstock price, total project investment, electricity price, biochar yield, and bio-oil yield are significant parameters with substantial impact on the MFSP for both in situ and ex situ catalytic pyrolysis.
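A stochastic cost simulation of this kind can be sketched with a toy Monte Carlo. All distributions and the cost model below are illustrative placeholders, not the study's process model; the point is only the mechanics of propagating parameter uncertainty into an MFSP distribution.

```python
import numpy as np


def mfsp_distribution(n_draws=100_000, seed=0):
    """Toy Monte Carlo for a minimum fuel-selling price: draw uncertain
    inputs, push them through a simple cost model, and summarize the
    resulting MFSP distribution (all numbers illustrative)."""
    rng = np.random.default_rng(seed)
    feedstock = rng.normal(80.0, 10.0, n_draws)   # $/ton, assumed
    yield_gal = rng.normal(55.0, 5.0, n_draws)    # gal fuel per ton, assumed
    other = rng.normal(1.5, 0.3, n_draws)         # $/gal other costs, assumed
    mfsp = feedstock / np.clip(yield_gal, 1.0, None) + other
    return float(mfsp.mean()), float(mfsp.std())
```

Comparing the standard deviations of two such distributions is exactly the in situ vs. ex situ risk comparison made above.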
Detection of Epileptic Seizure Event and Onset Using EEG
Ahammad, Nabeel; Fathima, Thasneem; Joseph, Paul
2014-01-01
This study proposes a method for automatic detection of epileptic seizure events and onset using wavelet-based features and certain statistical features computed without wavelet decomposition. Normal and epileptic EEG signals were classified using a linear classifier. For seizure event detection, the Bonn University EEG database was used. Three types of EEG signals (recorded from healthy volunteers with eyes open, from epilepsy patients in the epileptogenic zone during a seizure-free interval, and from epilepsy patients during epileptic seizures) were classified. Features such as energy, entropy, standard deviation, maximum, minimum, and mean at different subbands were computed, and classification was done using a linear classifier. The performance of the classifier was determined in terms of specificity, sensitivity, and accuracy. The overall accuracy was 84.2%. For seizure onset detection, the CHB-MIT scalp EEG database was used. Along with wavelet-based features, the interquartile range (IQR) and mean absolute deviation (MAD) without wavelet decomposition were extracted. Latency was used to study the performance of seizure onset detection. The classifier gave a sensitivity of 98.5% with an average latency of 1.76 seconds. PMID:24616892
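The statistical features listed above are simple to compute. A minimal sketch (ours, not the authors' pipeline): a real implementation would first apply a wavelet decomposition and compute these per subband, which we omit here, and the histogram-based entropy is only one of several common definitions.

```python
import numpy as np


def segment_features(x, n_bins=32):
    """Statistical features of one EEG segment (our sketch): energy,
    Shannon entropy of the amplitude histogram, standard deviation,
    maximum, minimum, and mean."""
    x = np.asarray(x, float)
    counts, _ = np.histogram(x, bins=n_bins)
    p = counts[counts > 0] / counts.sum()      # histogram probabilities
    return {
        "energy": float(np.sum(x ** 2)),
        "entropy": float(-np.sum(p * np.log2(p))),
        "std": float(x.std()),
        "max": float(x.max()),
        "min": float(x.min()),
        "mean": float(x.mean()),
    }
```

Feature vectors like this, concatenated across subbands, are what the linear classifier would be trained on.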
78 FR 21060 - Appeal Proceedings Before the Commission
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-09
... adoption of alternate standards from those required by the Commission's minimum internal control standards... adoption of alternate standards from those required by the Commission's minimum internal control standards... TGRAs' adoption of alternate standards from those required by the Commission's minimum internal control...
25 CFR 547.16 - What are the minimum standards for game artwork, glass, and rules?
Code of Federal Regulations, 2011 CFR
2011-04-01
... 25 Indians 2 2011-04-01 2011-04-01 false What are the minimum standards for game artwork, glass... HUMAN SERVICES MINIMUM TECHNICAL STANDARDS FOR GAMING EQUIPMENT USED WITH THE PLAY OF CLASS II GAMES § 547.16 What are the minimum standards for game artwork, glass, and rules? This section provides...
25 CFR 547.16 - What are the minimum standards for game artwork, glass, and rules?
Code of Federal Regulations, 2012 CFR
2012-04-01
... 25 Indians 2 2012-04-01 2012-04-01 false What are the minimum standards for game artwork, glass... HUMAN SERVICES MINIMUM TECHNICAL STANDARDS FOR GAMING EQUIPMENT USED WITH THE PLAY OF CLASS II GAMES § 547.16 What are the minimum standards for game artwork, glass, and rules? This section provides...
25 CFR 547.16 - What are the minimum standards for game artwork, glass, and rules?
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 2 2010-04-01 2010-04-01 false What are the minimum standards for game artwork, glass... HUMAN SERVICES MINIMUM TECHNICAL STANDARDS FOR GAMING EQUIPMENT USED WITH THE PLAY OF CLASS II GAMES § 547.16 What are the minimum standards for game artwork, glass, and rules? This section provides...
Assessing the optimality of ASHRAE climate zones using high resolution meteorological data sets
NASA Astrophysics Data System (ADS)
Fils, P. D.; Kumar, J.; Collier, N.; Hoffman, F. M.; Xu, M.; Forbes, W.
2017-12-01
Energy consumed by built infrastructure constitutes a significant fraction of the nation's energy budget. According to a 2015 US Energy Information Administration report, 41% of the energy used in the US went to residential and commercial buildings, and additional research has shown that 32% of commercial building energy goes into heating and cooling. The American National Standards Institute and American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) Standard 90.1 provides the climate zones used in current state of practice, since heating and cooling demands are strongly influenced by spatio-temporal weather variations. For this reason, we have been assessing the optimality of the climate zones using high-resolution daily climate data from NASA's DAYMET database. We analyzed time series of meteorological data sets for all ASHRAE climate zones from 1980 through 2016. We computed the mean, standard deviation, and other statistics for a set of meteorological variables (solar radiation, maximum and minimum temperature) within each zone. By plotting all the zonal statistics, we analyzed patterns and trends in those data over the past 36 years. We compared the mean of each zone to its standard deviation to determine the range of spatial variability that exists within each zone. If the band around the mean is too large, it indicates that regions in the zone experience a wide range of weather conditions, and perhaps a common set of building design guidelines would lead to a non-optimal energy consumption scenario. In this study we observed strong variation across the climate zones. Some have shown consistent patterns over the past 36 years, indicating that the zone was well constructed, while others have deviated greatly from their mean, indicating that the zone needs to be reconstructed. We also looked at redesigning the climate zones based on high-resolution climate data.
We are using building simulations models like EnergyPlus to develop optimal energy guidelines for each climate zone and quantify potential energy savings that can be realized by redesigning climate zones using state-of-the art data sets.
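The zone-quality check described above, comparing each zone's standard deviation band to its mean, amounts to screening on the coefficient of variation. A minimal sketch with an assumed data layout (zone name mapped to an array of station or grid-cell values); the 0.25 threshold is an arbitrary illustration, not ASHRAE's.

```python
import numpy as np


def zone_spread(values_by_zone, cv_threshold=0.25):
    """Flag climate zones whose within-zone spread is large relative to
    the zonal mean (coefficient of variation above a chosen threshold),
    suggesting the zone mixes dissimilar weather regimes."""
    report = {}
    for zone, vals in values_by_zone.items():
        vals = np.asarray(vals, float)
        cv = vals.std() / abs(vals.mean())     # coefficient of variation
        report[zone] = {"mean": float(vals.mean()),
                        "std": float(vals.std()),
                        "too_wide": bool(cv > cv_threshold)}
    return report
```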
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tatum, J.L.; Strash, A.M.; Sugerman, H.J.
Using a canine oleic acid model, a computerized gamma scintigraphic technique was evaluated to determine (1) the ability to detect pulmonary capillary protein leak in a model temporally consistent with clinical adult respiratory distress syndrome (ARDS), (2) the possibility of providing a quantitative index of leak, and (3) the feasibility of closely spaced repeat evaluations. Study animals received oleic acid (controls, n = 10; 0.05 ml/kg, n = 10; 0.10 ml/kg, n = 12; 0.15 ml/kg, n = 6) 3 hours prior to a tracer dose of technetium-99m (99mTc) HSA. One animal in each dose group also received two repeat tracer injections spaced a minimum of 45 minutes apart. Digital images were obtained with a conventional gamma camera interfaced to a dedicated medical computer. Lung:heart ratio versus time curves were generated, and a slope index was calculated for each curve. Slope index values for all doses were significantly greater than control values (P < 0.0001). Each incremental dose increase was also significantly greater than the previous dose level. Oleic acid dose versus slope index fitted a linear regression model with r = 0.94. Repeat dosing produced index values with standard deviations less than the group sample standard deviations. We feel this technique may have application in the clinical study of pulmonary permeability edema.
Visualizing the Sample Standard Deviation
ERIC Educational Resources Information Center
Sarkar, Jyotirmoy; Rashid, Mamunur
2017-01-01
The standard deviation (SD) of a random sample is defined as the square-root of the sample variance, which is the "mean" squared deviation of the sample observations from the sample mean. Here, we interpret the sample SD as the square-root of twice the mean square of all pairwise half deviations between any two sample observations. This…
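The interpretation stated above follows from the identity Σ_{i<j}(x_i − x_j)² = n(n−1)s² for the sample variance s² with divisor n−1; a quick numerical check (function name ours):

```python
import numpy as np
from itertools import combinations


def sd_from_pairs(x):
    """Sample SD as the square root of twice the mean square of all
    pairwise half deviations (x_i - x_j)/2, taken over all i < j."""
    half_sq = [((a - b) / 2.0) ** 2 for a, b in combinations(x, 2)]
    return float(np.sqrt(2.0 * np.mean(half_sq)))
```

For any sample with n >= 2 this agrees with `np.std(x, ddof=1)`, the usual divisor-(n-1) sample standard deviation.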
Global optimization of minority game by intelligent agents
NASA Astrophysics Data System (ADS)
Xie, Yan-Bo; Wang, Bing-Hong; Hu, Chin-Kun; Zhou, Tao
2005-10-01
We propose a new model of the minority game with intelligent agents who use a trial and error method to make choices, such that the standard deviation σ² and the total loss in this model reach their theoretical minimum values in the long-time limit and the global optimization of the system is reached. This suggests that economic systems can self-organize into a highly optimized state through agents who make decisions based on inductive thinking, limited knowledge, and limited capabilities. When other kinds of agents are also present, the simulation results and analytic calculations show that the intelligent agents can gain profits from producers and are much more competent than the noise traders and conventional agents in the original minority game proposed by Challet and Zhang.
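For context, the baseline Challet-Zhang game that the intelligent agents are compared against can be simulated in a few lines. This sketch implements only the conventional-agent minority game (fixed random strategy tables scored by virtual payoffs), not the paper's trial-and-error agents; σ² here is the variance of the attendance A(t).

```python
import numpy as np


def minority_game(n_agents=101, memory=3, n_strategies=2,
                  n_steps=2000, seed=0):
    """Baseline Challet-Zhang minority game (sketch). Each agent holds
    fixed lookup tables from the last `memory` outcomes to an action in
    {-1, +1} and plays its currently best-scoring one; the minority side
    wins each round. Returns the variance of the attendance A(t)."""
    rng = np.random.default_rng(seed)
    n_hist = 2 ** memory
    # strategies[a, s, h] = action of agent a's strategy s in history h
    strategies = rng.choice([-1, 1], size=(n_agents, n_strategies, n_hist))
    scores = np.zeros((n_agents, n_strategies))
    history = rng.integers(n_hist)        # encoded last `memory` outcomes
    attendance = []
    for _ in range(n_steps):
        best = scores.argmax(axis=1)      # each agent's best strategy
        actions = strategies[np.arange(n_agents), best, history]
        a_t = actions.sum()               # attendance (never 0: odd N)
        attendance.append(a_t)
        winner = -np.sign(a_t)            # minority side wins
        scores += strategies[:, :, history] * winner   # virtual payoffs
        outcome = 1 if winner > 0 else 0
        history = (int(history) * 2 + outcome) % n_hist
    return float(np.var(attendance))
```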
Kawata, K; Ibaraki, T; Tanabe, A; Yagoh, H; Shinoda, A; Suzuki, H; Yasuhara, A
2001-03-09
Simple gas chromatographic-mass spectrometric determination of hydrophilic organic compounds in environmental water was developed. A cartridge containing activated carbon fiber felt was made by way of trial and was evaluated for solid-phase extraction of the compounds in water. The hydrophilic compounds investigated were acrylamide, N,N-dimethylacetamide, N,N-dimethylformamide, 1,4-dioxane, furfural, furfuryl alcohol, N-nitrosodiethylamine and N-nitrosodimethylamine. Overall recoveries were good (80-100%) from groundwater and river water. The relative standard deviations ranged from 4.5 to 16% for the target compounds. The minimum detectable concentrations were 0.02 to 0.03 microg/l. This method was successfully applied to several river water samples.
Dopkins, Stephen; Varner, Kaitlin; Hoyer, Darin
2017-10-01
In word recognition, semantic priming of test words increased the false-alarm rate and the mean of confidence ratings to lures. Such priming also increased the standard deviation of confidence ratings to lures and the slope of the z-ROC function, suggesting that the priming increased the standard deviation of the lure evidence distribution. The Unequal Variance Signal Detection (UVSD) model interpreted the priming as increasing the standard deviation of the lure evidence distribution. Without additional parameters, the Dual Process Signal Detection (DPSD) model could only accommodate the results by fitting the data for related and unrelated primes separately, interpreting the priming, implausibly, as decreasing the probability of target recollection. With an additional parameter for the probability of false (lure) recollection, the model could fit the data for related and unrelated primes together, interpreting the priming as increasing the probability of false recollection. These results suggest that DPSD estimates of target recollection probability will decrease with increases in the lure confidence/evidence standard deviation unless a parameter is included for false recollection. Unfortunately, the size of a given lure confidence/evidence standard deviation relative to other possible lure confidence/evidence standard deviations is often unspecified by context. Hence the model often has no way of estimating false recollection probability and thereby correcting its estimates of target recollection probability.
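The asserted link between lure evidence SD and z-ROC slope can be checked analytically: under an unequal-variance Gaussian model, the z-ROC slope (z(hit) regressed on z(false alarm)) equals σ_lure/σ_target, so widening the lure distribution steepens the slope. A minimal sketch; the criteria and parameter values are illustrative.

```python
import numpy as np
from scipy.stats import norm


def zroc_slope(d_prime, sigma_target, sigma_lure, criteria):
    """z-ROC slope implied by an unequal-variance signal detection model.

    Hit and false-alarm rates are computed at each confidence criterion,
    z-transformed, and fit with a line; analytically the slope equals
    sigma_lure / sigma_target."""
    c = np.asarray(criteria, float)
    hits = norm.sf(c, loc=d_prime, scale=sigma_target)  # P(target > c)
    fas = norm.sf(c, loc=0.0, scale=sigma_lure)         # P(lure > c)
    return float(np.polyfit(norm.ppf(fas), norm.ppf(hits), 1)[0])
```

Increasing the lure SD (as the priming result suggests) raises the slope, with no change in the target distribution.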
Combined experiment Phase 2 data characterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, M.S.; Shipley, D.E.; Young, T.S.
1995-11-01
The National Renewable Energy Laboratory's "Combined Experiment" has yielded a large quantity of experimental data on the operation of a downwind horizontal axis wind turbine under field conditions. To fully utilize this valuable resource and identify particular episodes of interest, a number of databases were created that characterize individual data events and rotational cycles over a wide range of parameters. Each of the 59 five-minute data episodes collected during Phase II of the Combined Experiment has been characterized by the mean, minimum, maximum, and standard deviation of all data channels, except the blade surface pressures. Inflow condition, aerodynamic force coefficient, and minimum leading edge pressure coefficient databases have also been established, characterizing each of nearly 21,000 blade rotational cycles. In addition, a number of tools have been developed for searching these databases for particular episodes of interest. Due to their extensive size, only a portion of the episode characterization databases are included in an appendix, and examples of the cycle characterization databases are given. The search tools are discussed and the FORTRAN or C code for each is included in appendices.
25 CFR 542.14 - What are the minimum internal control standards for the cage?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false What are the minimum internal control standards for the cage? 542.14 Section 542.14 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.14 What are the minimum internal control standards for the cage? (a) Computer applications. For...
25 CFR 543.8 - What are the minimum internal control standards for bingo?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false What are the minimum internal control standards for bingo? 543.8 Section 543.8 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS FOR CLASS II GAMING § 543.8 What are the minimum internal control standards for bingo? (a) Supervision....
25 CFR 542.17 - What are the minimum internal control standards for complimentary services or items?
Code of Federal Regulations, 2011 CFR
2011-04-01
... 25 Indians 2 2011-04-01 2011-04-01 false What are the minimum internal control standards for complimentary services or items? 542.17 Section 542.17 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.17 What are the minimum internal control standards for complimentary...
25 CFR 542.17 - What are the minimum internal control standards for complimentary services or items?
Code of Federal Regulations, 2012 CFR
2012-04-01
... 25 Indians 2 2012-04-01 2012-04-01 false What are the minimum internal control standards for complimentary services or items? 542.17 Section 542.17 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.17 What are the minimum internal control standards for complimentary...
25 CFR 542.17 - What are the minimum internal control standards for complimentary services or items?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false What are the minimum internal control standards for complimentary services or items? 542.17 Section 542.17 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.17 What are the minimum internal control standards for complimentary...
25 CFR 543.8 - What are the minimum internal control standards for bingo?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false What are the minimum internal control standards for bingo? 543.8 Section 543.8 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS FOR CLASS II GAMING § 543.8 What are the minimum internal control standards for bingo? (a) Supervision....
25 CFR 542.17 - What are the minimum internal control standards for complimentary services or items?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false What are the minimum internal control standards for complimentary services or items? 542.17 Section 542.17 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.17 What are the minimum internal control standards for complimentary...
25 CFR 542.8 - What are the minimum internal control standards for pull tabs?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false What are the minimum internal control standards for pull tabs? 542.8 Section 542.8 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.8 What are the minimum internal control standards for pull tabs? (a) Computer applications. For...
25 CFR 542.8 - What are the minimum internal control standards for pull tabs?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false What are the minimum internal control standards for pull tabs? 542.8 Section 542.8 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.8 What are the minimum internal control standards for pull tabs? (a) Computer applications. For...
25 CFR 542.8 - What are the minimum internal control standards for pull tabs?
Code of Federal Regulations, 2012 CFR
2012-04-01
... 25 Indians 2 2012-04-01 2012-04-01 false What are the minimum internal control standards for pull tabs? 542.8 Section 542.8 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.8 What are the minimum internal control standards for pull tabs? (a) Computer applications. For...
A New Control Paradigm for Stochastic Differential Equations
NASA Astrophysics Data System (ADS)
Schmid, Matthias J. A.
This study presents a novel comprehensive approach to the control of dynamic systems under uncertainty governed by stochastic differential equations (SDEs). Large Deviations (LD) techniques are employed to arrive at a control law for a large class of nonlinear systems minimizing sample path deviations. Thereby, a paradigm shift is suggested from point-in-time to sample path statistics on function spaces. A suitable formal control framework which leverages embedded Freidlin-Wentzell theory is proposed and described in detail. This includes the precise definition of the control objective and comprises an accurate discussion of the adaptation of the Freidlin-Wentzell theorem to the particular situation. The new control design is enabled by the transformation of an ill-posed control objective into a well-conditioned sequential optimization problem. A direct numerical solution process is presented using quadratic programming, but the emphasis is on the development of a closed-form expression reflecting the asymptotic deviation probability of a particular nominal path. This is identified as the key factor in the success of the new paradigm. An approach employing the second variation and the differential curvature of the effective action is suggested for small deviation channels leading to the Jacobi field of the rate function and the subsequently introduced Jacobi field performance measure. This closed-form solution is utilized in combination with the supplied parametrization of the objective space. For the first time, this allows for an LD based control design applicable to a large class of nonlinear systems. Thus, Minimum Large Deviations (MLD) control is effectively established in a comprehensive structured framework. The construction of the new paradigm is completed by an optimality proof for the Jacobi field performance measure, an interpretive discussion, and a suggestion for efficient implementation. 
The potential of the new approach is exhibited by its extension to scalar systems subject to state-dependent noise and to systems of higher order. The suggested control paradigm is further advanced when a sequential application of MLD control is considered. This technique yields a nominal path corresponding to the minimum total deviation probability on the entire time domain. It is demonstrated that this sequential optimization concept can be unified in a single objective function which is revealed to be the Jacobi field performance index on the entire domain subject to an endpoint deviation. The emerging closed-form term replaces the previously required nested optimization and, thus, results in a highly efficient application-ready control design. This effectively substantiates Minimum Path Deviation (MPD) control. The proposed control paradigm allows the specific problem of stochastic cost control to be addressed as a special case. This new technique is employed within this study for the stochastic cost problem giving rise to Cost Constrained MPD (CCMPD) as well as to Minimum Quadratic Cost Deviation (MQCD) control. An exemplary treatment of a generic scalar nonlinear system subject to quadratic costs is performed for MQCD control to demonstrate the elementary expandability of the new control paradigm. This work concludes with a numerical evaluation of both MPD and CCMPD control for three exemplary benchmark problems. Numerical issues associated with the simulation of SDEs are briefly discussed and illustrated. The numerical examples furnish proof of the successful design. This study is complemented by a thorough review of statistical control methods, stochastic processes, Large Deviations techniques and the Freidlin-Wentzell theory, providing a comprehensive, self-contained account. The presentation of the mathematical tools and concepts is of a unique character, specifically addressing an engineering audience.
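The central object in the Freidlin-Wentzell machinery used above is the action functional S(φ) = ½∫|φ̇ − b(φ)|² dt, whose minimizer over paths with fixed endpoints approximates the most likely transition path. A toy discretized sketch follows; the drift, endpoints, and discretization are illustrative, and this is a generic direct action minimization, not the dissertation's Jacobi field method.

```python
import numpy as np
from scipy.optimize import minimize


def fw_action(path, dt, drift):
    """Discretized Freidlin-Wentzell action 0.5 * sum (dx/dt - b(x))^2 dt."""
    dx = np.diff(path) / dt
    mid = 0.5 * (path[:-1] + path[1:])      # midpoint evaluation of drift
    return 0.5 * np.sum((dx - drift(mid)) ** 2) * dt


def most_likely_path(x0, x1, t_final=1.0, n=50, drift=lambda x: -x):
    """Minimize the action over interior path points (endpoints fixed) to
    approximate the most likely transition path of dX = b(X)dt + eps dW."""
    dt = t_final / n
    t = np.linspace(0.0, t_final, n + 1)
    init = x0 + (x1 - x0) * t / t_final     # straight-line initial guess

    def objective(interior):
        full = np.concatenate(([x0], interior, [x1]))
        return fw_action(full, dt, drift)

    res = minimize(objective, init[1:-1], method="L-BFGS-B")
    return np.concatenate(([x0], res.x, [x1])), float(res.fun)
```

For the stable drift b(x) = −x, pushing the path from 0 to 1 costs action, and the optimizer always does at least as well as the straight-line guess.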
Down-Looking Interferometer Study II, Volume I,
1980-03-01
[Garbled excerpt; recoverable content:] T_rm is the "reference spectrum," an estimate of the actual spectrum. According to Eq. (2), Z is the standard deviation of the observed contrast spectral radiance ΔN divided by the effective rms system noise.
40 CFR 61.207 - Radium-226 sampling and measurement procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... B, Method 114. (3) Calculate the mean, x̄1, and the standard deviation, s1, of the n1 radium-226... owner or operator of a phosphogypsum stack shall report the mean, standard deviation, 95th percentile..., Method 114. (4) Recalculate the mean and standard deviation of the entire set of n2 radium-226...
Limited School Drinking Water Access for Youth.
Kenney, Erica L; Gortmaker, Steven L; Cohen, Juliana F W; Rimm, Eric B; Cradock, Angie L
2016-07-01
Providing children and youth with safe, adequate drinking water access during school is essential for health. This study used objectively measured data to investigate the extent to which schools provide drinking water access that meets state and federal policies. We visited 59 middle and high schools in Massachusetts during spring 2012. Trained research assistants documented the type, location, and working condition of all water access points throughout each school building using a standard protocol. School food service directors (FSDs) completed surveys reporting water access in cafeterias. We evaluated school compliance with state plumbing codes and federal regulations and compared FSD self-reports of water access with direct observation; data were analyzed in 2014. On average, each school had 1.5 (standard deviation: .6) water sources per 75 students; 82% (standard deviation: 20) were functioning and fewer (70%) were both clean and functioning. Less than half of the schools met the federal Healthy Hunger-Free Kids Act requirement for free water access during lunch; 18 schools (31%) provided bottled water for purchase but no free water. Slightly over half (59%) met the Massachusetts state plumbing code. FSDs overestimated free drinking water access compared to direct observation (96% FSD reported vs. 48% observed, kappa = .07, p = .17). School drinking water access may be limited. In this study, many schools did not meet state or federal policies for minimum student drinking water access. School administrative staff may not accurately report water access. Public health action is needed to increase school drinking water access.
Briehl, Margaret M; Nelson, Mark A; Krupinski, Elizabeth A; Erps, Kristine A; Holcomb, Michael J; Weinstein, John B; Weinstein, Ronald S
2016-01-01
Faculty members from the Department of Pathology at The University of Arizona College of Medicine-Tucson have offered a 4-credit course on enhanced general pathology for graduate students since 1996. The course is titled, "Mechanisms of Human Disease." Between 1997 and 2016, 270 graduate students completed Mechanisms of Human Disease. The students came from 21 programs of study. Analysis of Variance, using course grade as the dependent and degree, program, gender, and year (1997-2016) as independent variables, indicated that there was no significant difference in final grade (F = 0.112; P = .8856) as a function of degree (doctorate: mean = 89.60, standard deviation = 5.75; master's: mean = 89.34, standard deviation = 6.00; certificate program: mean = 88.64, standard deviation = 8.25), specific type of degree program (F = 2.066, P = .1316; life sciences: mean = 89.95, standard deviation = 6.40; pharmaceutical sciences: mean = 90.71, standard deviation = 4.57; physical sciences: mean = 87.79, standard deviation = 5.17), or as a function of gender (F = 2.96, P = .0865; males: mean = 88.09, standard deviation = 8.36; females: mean = 89.58, standard deviation = 5.82). Students in the physical and life sciences performed equally well. Mechanisms of Human Disease is a popular course that provides students enrolled in a variety of graduate programs with a medical school-based course on mechanisms of diseases. The addition of 2 new medically oriented Master of Science degree programs has nearly tripled enrollment. This graduate level course also potentially expands the interdisciplinary diversity of participants in our interprofessional education and collaborative practice exercises.
24 CFR 200.925a - Multifamily and care-type minimum property standards.
Code of Federal Regulations, 2013 CFR
2013-04-01
... Standards § 200.925a Multifamily and care-type minimum property standards. (a) Construction standards. Multifamily or care-type properties shall comply with the minimum property standards contained in the handbook.... If the developer or other interested party so chooses, then the multifamily or care-type property...
24 CFR 200.925a - Multifamily and care-type minimum property standards.
Code of Federal Regulations, 2012 CFR
2012-04-01
... Standards § 200.925a Multifamily and care-type minimum property standards. (a) Construction standards. Multifamily or care-type properties shall comply with the minimum property standards contained in the handbook.... If the developer or other interested party so chooses, then the multifamily or care-type property...
24 CFR 200.925a - Multifamily and care-type minimum property standards.
Code of Federal Regulations, 2014 CFR
2014-04-01
... Standards § 200.925a Multifamily and care-type minimum property standards. (a) Construction standards. Multifamily or care-type properties shall comply with the minimum property standards contained in the handbook.... If the developer or other interested party so chooses, then the multifamily or care-type property...
24 CFR 200.925a - Multifamily and care-type minimum property standards.
Code of Federal Regulations, 2011 CFR
2011-04-01
... Standards § 200.925a Multifamily and care-type minimum property standards. (a) Construction standards. Multifamily or care-type properties shall comply with the minimum property standards contained in the handbook.... If the developer or other interested party so chooses, then the multifamily or care-type property...
NASA Technical Reports Server (NTRS)
Chiou, E. W.; Bhartia, P. K.; McPeters, R. D.; Loyola, D. G.; Coldewey-Egbers, M.; Fioletov, V. E.; Van Roozendael, M.; Spurr, R.; Lerot, C.; Frith, S. M.
2014-01-01
This paper describes the comparison of the variability of total column ozone inferred from three independent multi-year data records, namely, (i) Solar Backscatter Ultraviolet Instrument (SBUV) v8.6 profile total ozone, (ii) GTO (GOME-type total ozone), and (iii) ground-based total ozone data records covering the 16-year overlap period (March 1996 through June 2011). Analyses are conducted based on area-weighted zonal means for 0-30degS, 0-30degN, 50-30degS, and 30-60degN. It has been found that, on average, the differences in monthly zonal mean total ozone vary between -0.3 and 0.8% and are well within 1%. For GTO minus SBUV, the standard deviations and ranges (maximum minus minimum) of the differences in monthly zonal mean total ozone vary between 0.6-0.7% and 2.8-3.8%, respectively, depending on the latitude band. The corresponding standard deviations and ranges of the differences in monthly zonal mean anomalies show values between 0.4-0.6% and 2.2-3.5%. The standard deviations and ranges of the differences ground-based minus SBUV, for both monthly zonal means and anomalies, are larger by a factor of 1.4-2.9 in comparison to GTO minus SBUV. The ground-based zonal means show larger scatter in the monthly data than the satellite-based records. The differences in scatter are significantly reduced if seasonal zonal averages are analyzed. The trends of the differences GTO minus SBUV and ground-based minus SBUV are found to vary between -0.04 and 0.1%/yr (-0.1 and 0.3 DU/yr). These negligibly small trends provide strong evidence that there are no significant time-dependent differences among these multi-year total ozone data records. Analyses of the annual deviations from the pre-1980 level indicate that, for the 15-year period 1996 to 2010, all three data records show a gradual increase at 30-60degN from -5% in 1996 to -2% in 2010.
In contrast, at 50-30degS and 30degS-30degN there has been a leveling off in the 15 years after 1996. The deviations inferred from GTO and SBUV agree within 1%, but a slight increase has been found in the differences during the period 1996-2010.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-21
... 228--Minimum Operational Performance Standards for Unmanned Aircraft Systems AGENCY: Federal Aviation...--Minimum Operational Performance Standards for Unmanned Aircraft Systems. SUMMARY: The FAA is issuing this notice to advise the public of a meeting of RTCA Special Committee 228--Minimum Operational Performance...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-25
... 228--Minimum Operational Performance Standards for Unmanned Aircraft Systems AGENCY: Federal Aviation...--Minimum Operational Performance Standards for Unmanned Aircraft Systems. SUMMARY: The FAA is issuing this notice to advise the public of a meeting of RTCA Special Committee 228--Minimum Operational Performance...
12 CFR 564.4 - Minimum appraisal standards.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 5 2010-01-01 2010-01-01 false Minimum appraisal standards. 564.4 Section 564.4 Banks and Banking OFFICE OF THRIFT SUPERVISION, DEPARTMENT OF THE TREASURY APPRAISALS § 564.4 Minimum appraisal standards. For federally related transactions, all appraisals shall, at a minimum: (a...
NASA Astrophysics Data System (ADS)
Batzias, Dimitris F.; Karvounis, Sotirios
2012-12-01
Technology transfer may take place in parallel with cooperative action between companies participating in the same organizational scheme or using one another as a subcontractor (outsourcing). In this case, cooperation should be realized by means of Standard Methods and Recommended Practices (SRPs) to achieve (i) quality of intermediate/final products according to specifications and (ii) industrial process control as required to guarantee such quality with minimum deviation (corresponding to maximum reliability) from preset mean values of representative quality parameters. This work deals with the design of the network of SRPs needed in each case for successful cooperation, implying also the corresponding technology transfer, effectuated through a methodological framework developed in the form of an algorithmic procedure with 20 activity stages and 8 decision nodes. The functionality of this methodology is proved by presenting the path leading from (and relating) a standard test method for toluene, as petrochemical feedstock in toluene diisocyanate production, to the (six generations distance upstream) performance evaluation of industrial process control systems (i.e., from ASTM D5606 to BS EN 61003-1:2004 in the SRPs network).
25 CFR 543.17 - What are the minimum internal control standards for drop and count?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false What are the minimum internal control standards for drop and count? 543.17 Section 543.17 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS FOR CLASS II GAMING § 543.17 What are the minimum internal control standards for drop and count?...
25 CFR 543.15 - What are the minimum internal control standards for lines of credit?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false What are the minimum internal control standards for lines of credit? 543.15 Section 543.15 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS FOR CLASS II GAMING § 543.15 What are the minimum internal control standards for lines of credi...
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false What are the minimum internal control standards for internal audit for Tier B gaming operations? 542.32 Section 542.32 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.32 What are the minimum internal control standards for...
25 CFR 543.9 - What are the minimum internal control standards for pull tabs?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false What are the minimum internal control standards for pull tabs? 543.9 Section 543.9 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS FOR CLASS II GAMING § 543.9 What are the minimum internal control standards for pull tabs? (a)...
Code of Federal Regulations, 2012 CFR
2012-04-01
... 25 Indians 2 2012-04-01 2012-04-01 false What are the minimum internal control standards for internal audit for Tier A gaming operations? 542.22 Section 542.22 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.22 What are the minimum internal control standards for...
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false What are the minimum internal control standards for internal audit for Tier A gaming operations? 542.22 Section 542.22 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.22 What are the minimum internal control standards for...
25 CFR 543.13 - What are the minimum internal control standards for complimentary services or items?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false What are the minimum internal control standards for complimentary services or items? 543.13 Section 543.13 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS FOR CLASS II GAMING § 543.13 What are the minimum internal control standards fo...
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false What are the minimum internal control standards for internal audit for Tier A gaming operations? 542.22 Section 542.22 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.22 What are the minimum internal control standards for...
Code of Federal Regulations, 2011 CFR
2011-04-01
... 25 Indians 2 2011-04-01 2011-04-01 false What are the minimum internal control standards for internal audit for Tier A gaming operations? 542.22 Section 542.22 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.22 What are the minimum internal control standards for...
Code of Federal Regulations, 2011 CFR
2011-04-01
... 25 Indians 2 2011-04-01 2011-04-01 false What are the minimum internal control standards for internal audit for Tier B gaming operations? 542.32 Section 542.32 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.32 What are the minimum internal control standards for...
25 CFR 543.9 - What are the minimum internal control standards for pull tabs?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false What are the minimum internal control standards for pull tabs? 543.9 Section 543.9 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS FOR CLASS II GAMING § 543.9 What are the minimum internal control standards for pull tabs? (a)...
25 CFR 543.15 - What are the minimum internal control standards for lines of credit?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false What are the minimum internal control standards for lines of credit? 543.15 Section 543.15 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS FOR CLASS II GAMING § 543.15 What are the minimum internal control standards for lines of credi...
25 CFR 543.17 - What are the minimum internal control standards for drop and count?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false What are the minimum internal control standards for drop and count? 543.17 Section 543.17 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS FOR CLASS II GAMING § 543.17 What are the minimum internal control standards for drop and count?...
Code of Federal Regulations, 2012 CFR
2012-04-01
... 25 Indians 2 2012-04-01 2012-04-01 false What are the minimum internal control standards for internal audit for Tier B gaming operations? 542.32 Section 542.32 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.32 What are the minimum internal control standards for...
25 CFR 543.13 - What are the minimum internal control standards for complimentary services or items?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false What are the minimum internal control standards for complimentary services or items? 543.13 Section 543.13 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS FOR CLASS II GAMING § 543.13 What are the minimum internal control standards fo...
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false What are the minimum internal control standards for internal audit for Tier B gaming operations? 542.32 Section 542.32 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.32 What are the minimum internal control standards for...
77 FR 58707 - Minimum Internal Control Standards
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-21
... Gaming Commission 25 CFR Part 543 Minimum Internal Control Standards; Final Rule #0;#0;Federal Register... Control Standards AGENCY: National Indian Gaming Commission, Interior. ACTION: Final rule. SUMMARY: The National Indian Gaming Commission (NIGC) amends its minimum internal control standards for Class II gaming...
77 FR 43196 - Minimum Internal Control Standards and Technical Standards
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-24
... NATIONAL INDIAN GAMING COMMISSION 25 CFR Parts 543 and 547 Minimum Internal Control Standards [email protected] . SUPPLEMENTARY INFORMATION: Part 543 addresses minimum internal control standards (MICS) for Class II gaming operations. The regulations require tribes to establish controls and implement...
78 FR 11793 - Minimum Internal Control Standards
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-20
... Internal Control Standards AGENCY: National Indian Gaming Commission. ACTION: Proposed rule. SUMMARY: The National Indian Gaming Commission (NIGC) proposes to amend its minimum internal control standards for Class... NIGC published a final rule in the Federal Register called Minimum Internal Control Standards. 64 FR...
SU-E-T-677: Reproducibility of Production of Ionization Chambers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kukolowicz, P; Bulski, W; Ulkowski, P
Purpose: To compare the reproducibility of the production of several cylindrical and plane-parallel chambers popular in Poland in terms of a calibration coefficient. Methods: The investigation was performed for PTW30013 (20 chambers), 30001 (10 chambers), and FC65-G (17 chambers) cylindrical chambers and for PPC05 (14 chambers) and Roos 34001 (8 chambers) plane-parallel chambers. The calibration factors were measured at the same accredited secondary standard laboratory in terms of dose to water. All the measurements were carried out at the same laboratory, by the same staff, in accordance with the same IAEA recommendations. All the chambers were calibrated in the Co-60 beam. Reproducibility was described in terms of the mean value, its standard deviation, and the ratio of the maximum to the minimum value of the calibration factors for each set of chambers separately. The combined uncertainty budget (1 SD) of the calibration factor, calculated according to IAEA-TECDOC-1585, was 0.25%. Results: The calibration coefficients for the PTW30013, 30001, and FC65-G chambers were 5.36±0.03, 5.28±0.06, and 4.79±0.015 nC/Gy, respectively, and for the PPC05 and Roos chambers were 59±2 and 8.3±0.1 nC/Gy, respectively. The maximum/minimum ratios of the calibration factors for the PTW30013, 30001, FC65-G, PPC05, and Roos chambers were 1.03, 1.03, 1.01, 1.14, and 1.03, respectively. Conclusion: The production of all ion chambers was very reproducible except for the Markus-type PPC05, for which a maximum/minimum ratio of calibration coefficients of 1.14 was obtained.
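The reproducibility metrics used in the abstract above (mean, standard deviation, and the maximum/minimum ratio of calibration factors for one chamber model) can be sketched in a few lines. The factor values below are hypothetical, not the study's measurements:

```python
import statistics

def reproducibility_summary(calibration_factors):
    """Mean, sample standard deviation, and max/min ratio of a set of
    calibration factors for one chamber model."""
    return {
        "mean": statistics.mean(calibration_factors),
        "sd": statistics.stdev(calibration_factors),
        "max_min_ratio": max(calibration_factors) / min(calibration_factors),
    }

# Hypothetical calibration factors (nC/Gy), not the study's data
factors = [5.33, 5.36, 5.39, 5.35, 5.37]
summary = reproducibility_summary(factors)
```

A max/min ratio close to 1.0 indicates tightly reproducible production; the study's PPC05 outlier (1.14) would stand out clearly on this metric.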
77 FR 32444 - Minimum Internal Control Standards
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-01
... Internal Control Standards AGENCY: National Indian Gaming Commission. ACTION: Proposed rule. SUMMARY: The National Indian Gaming Commission (NIGC) proposes to amend its minimum internal control standards for Class... the Federal Register called Minimum Internal Control Standards. 64 FR 590. The rule added a new part...
Gauging the Nearness and Size of Cycle Minimum
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.; Reichmann, Edwin J.
1997-01-01
By definition, the conventional onset for the start of a sunspot cycle is the time when the smoothed sunspot number (i.e., the 12-month moving average) has decreased to its minimum value (called the minimum amplitude) prior to the rise to its maximum value (called the maximum amplitude) for the given sunspot cycle. On the basis of the modern era sunspot cycles 10-22, and on the presumption that cycle 22 is a short-period cycle having a cycle length of 120 to 126 months (the observed range of short-period modern era cycles), conventional onset for cycle 23 should not occur until sometime between September 1996 and March 1997, certainly between June 1996 and June 1997, based on the 95-percent confidence level deduced from the mean and standard deviation of period for the sample of six short-period modern era cycles. Also, because the first occurrence of a new-cycle, high-latitude (greater than or equal to 25 degrees) spot has always preceded conventional onset of the new cycle by at least 3 months (for the data-available interval of cycles 12-22), conventional onset for cycle 23 is not expected until about August 1996 or later, based on the first occurrence of a new cycle 23, high-latitude spot during the decline of old cycle 22 in May 1996. Although much excitement for an earlier-occurring minimum (about March 1996) for cycle 23 was voiced earlier this year, the present study shows that this exuberance is unfounded. The decline of cycle 22 continues to favor cycle 23 minimum sometime during the latter portion of 1996 to the early portion of 1997.
A Note on Standard Deviation and Standard Error
ERIC Educational Resources Information Center
Hassani, Hossein; Ghodsi, Mansoureh; Howell, Gareth
2010-01-01
Many students confuse the standard deviation and standard error of the mean and are unsure which, if either, to use in presenting data. In this article, we endeavour to address these questions and cover some related ambiguities about these quantities.
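The distinction discussed above can be made concrete with a short sketch (the data values are illustrative only): the standard deviation describes the spread of individual observations, while the standard error of the mean (SD divided by the square root of n) describes how precisely the sample mean is estimated.

```python
import math

def sd(xs):
    """Sample standard deviation (n - 1 in the denominator)."""
    n = len(xs)
    m = sum(xs) / n
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))

def se(xs):
    """Standard error of the mean: SD / sqrt(n)."""
    return sd(xs) / math.sqrt(len(xs))

# Illustrative data only
data = [9, 10, 10, 9, 8, 12, 11, 10]
spread_of_observations = sd(data)   # how far individual values scatter
uncertainty_of_mean = se(data)      # how precisely the mean is estimated
```

The SE is always smaller than the SD and shrinks as n grows, which is exactly why reporting one in place of the other misleads readers about variability.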
Bolann, B J; Asberg, A
2004-01-01
The deviation of test results from patients' homeostatic set points in steady-state conditions may complicate interpretation of the results and the comparison of results with clinical decision limits. In this study the total deviation from the homeostatic set point is defined as the maximum absolute deviation for 95% of measurements, and we present analytical quality requirements that prevent analytical error from increasing this deviation to more than about 12% above the value caused by biology alone. These quality requirements are: 1) The stable systematic error should be approximately 0, and 2) a systematic error that will be detected by the control program with 90% probability, should not be larger than half the value of the combined analytical and intra-individual standard deviation. As a result, when the most common control rules are used, the analytical standard deviation may be up to 0.15 times the intra-individual standard deviation. Analytical improvements beyond these requirements have little impact on the interpretability of measurement results.
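As a rough check of where a figure of that order can come from (this is an illustration, not necessarily the authors' exact derivation): if an analytical component equal to half the biological (intra-individual) standard deviation is added in quadrature, the combined spread grows by sqrt(1.25) - 1, roughly 12%.

```python
import math

def total_sd_inflation(s_biological, s_analytical):
    """Relative growth of the combined SD when an analytical component
    is added in quadrature to the biological (intra-individual) SD."""
    combined = math.sqrt(s_biological ** 2 + s_analytical ** 2)
    return combined / s_biological - 1.0

# Analytical component at half the biological SD -> sqrt(1.25) - 1, ~12%
inflation_at_half = total_sd_inflation(1.0, 0.5)
```

By contrast, an analytical SD of 0.15 times the intra-individual SD (the abstract's limit for random error alone) inflates the combined SD by only about 1%, which is why the 12% bound is dominated by the allowed systematic component.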
Comparative study of the physical properties of core materials.
Saygili, Gülbin; Mahmali, Sevil M
2002-08-01
This study was undertaken to measure physical properties of materials used for direct core buildups, including high-copper amalgam, visible light-cured resin composite, autocured titanium-containing composite, polyacid-modified composite, resin-modified glass-ionomer, and silver cermet cement. Compressive strength, diametral tensile strength, and flexural strength of six core materials of various material classes were measured for each material as a function of time up to 3 months at different storage conditions, using a standard specification test designed for the materials. Three different storage conditions (dry, humid, wet) at 37 degrees C were chosen. Materials were manipulated according to manufacturers' instructions for use as cores. Mean compressive, diametral tensile, and flexural strengths with associated standard deviations were calculated for each material. Multiple comparison and Newman-Keuls tests discerned many differences among materials. All materials were found to meet the minimum specification requirements, except in terms of flexural strength for amalgam after 1 hour and the silver cermet at all time intervals.
Protection from sunburn with beta-Carotene--a meta-analysis.
Köpcke, Wolfgang; Krutmann, Jean
2008-01-01
Nutritional protection against skin damage from sunlight is increasingly advocated to the general public, but its effectiveness is controversial. In this meta-analysis, we have systematically reviewed the existing literature on human supplementation studies on dietary protection against sunburn by beta-carotene. A review of literature until June 2007 was performed in PubMed, ISI Web of Science and EBM Cochrane library and identified a total of seven studies which evaluated the effectiveness of beta-carotene in protection against sunburn. Data were abstracted from these studies by means of a standardized data collection protocol. The subsequent meta-analysis showed that (1) beta-carotene supplementation protects against sunburn and (2) the study duration had a significant influence on the effect size. Regression plot analysis revealed that protection required a minimum of 10 weeks of supplementation, with a mean increase of the protective effect of 0.5 standard deviations with every additional month of supplementation. Thus, dietary supplementation of humans with beta-carotene provides protection against sunburn in a time-dependent manner.
Lin, P.-S.; Chiou, B.; Abrahamson, N.; Walling, M.; Lee, C.-T.; Cheng, C.-T.
2011-01-01
In this study, we quantify the reduction in the standard deviation for empirical ground-motion prediction models by removing the ergodic assumption. We partition the modeling error (residual) into five components, three of which represent the repeatable source-location-specific, site-specific, and path-specific deviations from the population mean. A variance estimation procedure for these error components is developed for use with a set of recordings from earthquakes not heavily clustered in space. With most source locations and propagation paths sampled only once, we opt to exploit the spatial correlation of residuals to estimate the variances associated with the path-specific and the source-location-specific deviations. The estimation procedure is applied to ground-motion amplitudes from 64 shallow earthquakes in Taiwan recorded at 285 sites with at least 10 recordings per site. The estimated variance components are used to quantify the reduction in aleatory variability that can be used in hazard analysis for a single site and for a single path. For peak ground acceleration and spectral accelerations at periods of 0.1, 0.3, 0.5, 1.0, and 3.0 s, we find that the single-site standard deviations are 9%-14% smaller than the total standard deviation, whereas the single-path standard deviations are 39%-47% smaller.
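As an illustration of the variance partition described above: removing a repeatable component in quadrature from the total sigma yields the reduced (e.g., single-site) sigma. The component values below are hypothetical, chosen only so the result lands in the reported 9%-14% range; they are not the study's estimates.

```python
import math

def reduced_sigma(total_sigma, removed_sigmas):
    """Sigma remaining after repeatable variance components are removed
    in quadrature from the total aleatory sigma."""
    remaining_var = total_sigma ** 2 - sum(s ** 2 for s in removed_sigmas)
    return math.sqrt(remaining_var)

# Hypothetical values: total sigma of 0.70 (natural-log units) and a
# repeatable site-term sigma of 0.30
total = 0.70
single_site = reduced_sigma(total, [0.30])
reduction = 1.0 - single_site / total   # fractional reduction in sigma
```

Because variances, not sigmas, add, even a sizeable repeatable component removes a proportionally smaller share of the total standard deviation.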
Chang, Hsiao‐Han; Lee, Hsiao‐Fei; Sung, Chien‐Cheng; Liao, Tsung‐I
2013-01-01
A frameless radiosurgery system uses a set of thermoplastic masks for fixation and stereoscopic X-ray imaging for alignment. The accuracy depends on mask fixation and imaging. Under certain circumstances, the guidance images may contain insufficient bony structures, resulting in lower accuracy. A virtual isocenter function is designed for such scenarios. In this study, we investigated the immobilization and the indications for using a virtual isocenter. Twenty-four arbitrary imaginary treatment targets (ITTs) in a phantom were evaluated. The external localizer with positioner films was used as reference. The alignments obtained by using the actual and virtual isocenters in image guidance were compared. The deviation of the alignment after removing and then resetting the mask was also checked. The results showed that the mean deviation between the alignment by image guidance using the actual isocenter (Isoimg) and the localizer (Isoloc) was 2.26 mm ± 1.16 mm (standard deviation, SD), and 1.66 mm ± 0.83 mm using the virtual isocenter. The deviation of the alignment by image guidance using the actual isocenter relative to the localizer, before and after mask resetting, was 7.02 mm ± 5.8 mm. The deviations before and after mask resetting were insignificant when the distance of the target center from the skull edge was larger than 80 mm in the craniocaudal direction. The deviations between the alignments using the actual and virtual isocenters in image guidance were not significant if the minimum distance from the target center to the skull edge was larger than or equal to 30 mm. Because of an unacceptable deviation after mask resetting, image guidance is necessary to improve the accuracy of frameless immobilization. A treatment isocenter less than 30 mm from the skull bone should be an indication for using the virtual isocenter for alignment in image guidance. The virtual isocenter should be set as caudally as possible, and the sella of the skull should be the ideal point. PACS numbers: 87.55.kh, 87.55.ne, 87.55.tm. PMID: 23835379
A better norm-referenced grading using the standard deviation criterion.
Chan, Wing-shing
2014-01-01
The commonly used norm-referenced grading assigns grades to rank-ordered students in fixed percentiles. It has the disadvantage of ignoring the actual distance of scores among students. A simple norm-referenced grading via the standard deviation is suggested for routine educational grading. The number of standard deviations of a student's score from the class mean was used as the common yardstick to measure achievement level. The cumulative probability of a normal distribution was referenced to help decide the number of students included within a grade. Results of the top 12 students from a medical examination were used to illustrate this grading method. Grading by standard deviation seemed to produce better cutoffs, allocating grades to students more according to their differential achievements, and was less likely to create arbitrary cutoffs between two similarly scored students than grading by fixed percentile. Grading by standard deviation has more advantages and is more flexible than grading by fixed percentile for norm-referenced grading.
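A minimal sketch of grading by standard deviation as described above; the letter-grade cutoffs here are illustrative assumptions, not the paper's:

```python
def grade_by_sd(score, class_mean, class_sd):
    """Assign a letter grade from the number of standard deviations a
    score lies above or below the class mean (its z-score)."""
    z = (score - class_mean) / class_sd
    if z >= 1.0:
        return "A"
    if z >= 0.0:
        return "B"
    if z >= -1.0:
        return "C"
    return "D"

# With a class mean of 80 and SD of 8, a score of 92 sits 1.5 SD above
# the mean and earns an "A" under these illustrative cutoffs.
example = grade_by_sd(92, 80, 8)
```

Unlike fixed-percentile grading, two students with nearly identical scores get nearly identical z-scores, so a grade boundary is unlikely to fall arbitrarily between them.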
Johnson, Craig W; Johnson, Ronald; Kim, Mira; McKee, John C
2009-11-01
During 2004 and 2005 orientations, all 187 and 188 new matriculates, respectively, in two southwestern U.S. nursing schools completed Personal Background and Preparation Surveys (PBPS) in the first predictive validity study of a diagnostic and prescriptive instrument for averting adverse academic status events (AASE) among nursing or health science professional students. One standard deviation increases in PBPS risks (p < 0.05) multiplied odds of first-year or second-year AASE by approximately 150%, controlling for school affiliation and underrepresented minority student (URMS) status. AASE odds one standard deviation above mean were 216% to 250% those one standard deviation below mean. Odds of first-year or second-year AASE for URMS one standard deviation above the 2004 PBPS mean were 587% those for non-URMS one standard deviation below mean. The PBPS consistently and significantly facilitated early identification of nursing students at risk for AASE, enabling proactive targeting of interventions for risk amelioration and AASE or attrition prevention. Copyright 2009, SLACK Incorporated.
Spatial variability in airborne pollen concentrations.
Raynor, G S; Ogden, E C; Hayes, J V
1975-03-01
Tests were conducted to determine the relationship between airborne pollen concentrations and distance. Simultaneous samples were taken in 171 tests with sets of eight rotoslide samplers spaced from 1 to 486 m apart in straight lines. Use of all possible pairs gave 28 separation distances. Tests were conducted over a 2-year period in urban and rural locations distant from major pollen sources, during both tree and ragweed pollen seasons. Samples were taken at a height of 1.5 m during 5- to 20-minute periods. Tests were grouped by pollen type, location, year, and direction of the wind relative to the line. Data were analyzed to evaluate variability without regard to sampler spacing and variability as a function of separation distance. The mean, standard deviation, coefficient of variation, ratio of the maximum to the mean, and ratio of the minimum to the mean were calculated for each test, each group of tests, and all cases. The average coefficient of variation is 0.21, the maximum over the mean 1.39, and the minimum over the mean 0.69. No relationship was found with experimental conditions. Samples taken at the minimum separation distance had a mean difference of 18 percent. Differences between pairs of samples increased with distance in 10 of 13 groups. These results suggest that airborne pollens are not always well mixed in the lower atmosphere and that a sample becomes less representative with increasing distance from the sampling location.
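The per-test dispersion measures named above (coefficient of variation and the max/mean and min/mean ratios) can be sketched as follows; the sampler counts are hypothetical:

```python
import statistics

def dispersion_stats(counts):
    """Per-test dispersion measures used in the pollen study: coefficient
    of variation, max/mean ratio, and min/mean ratio across samplers."""
    mean = statistics.mean(counts)
    sd = statistics.stdev(counts)
    return {
        "cv": sd / mean,
        "max_over_mean": max(counts) / mean,
        "min_over_mean": min(counts) / mean,
    }

# Hypothetical pollen counts from one test's set of samplers:
stats = dispersion_stats([8, 10, 12])
```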
Muhamad, Hairul Masrini; Xu, Xiaomei; Zhang, Xuelei; Jaaman, Saifullah Arifin; Muda, Azmi Marzuki
2018-05-01
Studies of Irrawaddy dolphin acoustics assist in understanding the behaviour of the species and thereby its conservation. Whistle signals emitted by Irrawaddy dolphins within the Bay of Brunei in Malaysian waters were characterized. A total of 199 whistles were analysed from seven sightings between January and April 2016. Six types of whistle contours, named constant, upsweep, downsweep, concave, convex, and sine, were detected when the dolphins engaged in traveling, foraging, and socializing activities. The whistle durations ranged between 0.06 and 3.86 s. The minimum frequency recorded was 443 Hz [mean = 6000 Hz, standard deviation (SD) = 2320 Hz] and the maximum frequency recorded was 16,071 Hz (mean = 7139 Hz, SD = 2522 Hz). The mean frequency range (F.R.) of the whistles was 1148 Hz (minimum F.R. = 0 Hz, maximum F.R. = 4446 Hz; SD = 876 Hz). Whistles in the Bay of Brunei were compared with populations recorded in the waters of Matang and Kalimantan. The comparisons showed differences in whistle duration, minimum frequency, start frequency, and number of inflection points. Variation in whistle occurrence and frequency may be associated with surface behaviour, ambient noise, and recording limitations. This will be an important element when planning a monitoring program.
Demonstration of the Gore Module for Passive Ground Water Sampling
2014-06-01
ix ACRONYMS AND ABBREVIATIONS % RSD percent relative standard deviation 12DCA 1,2-dichloroethane 112TCA 1,1,2-trichloroethane 1122TetCA...Analysis of Variance ROD Record of Decision RSD relative standard deviation SBR Southern Bush River SVOC semi-volatile organic compound...replicate samples had a relative standard deviation ( RSD ) that was 20% or less. For the remaining analytes (PCE, cDCE, and chloroform), at least 70
12 CFR 3.11 - Standards for determination of appropriate individual minimum capital ratios.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 1 2010-01-01 2010-01-01 false Standards for determination of appropriate individual minimum capital ratios. 3.11 Section 3.11 Banks and Banking COMPTROLLER OF THE CURRENCY, DEPARTMENT OF THE TREASURY MINIMUM CAPITAL RATIOS; ISSUANCE OF DIRECTIVES Establishment of Minimum Capital Ratios for an Individual Bank § 3.11 Standards...
Wang, Anxin; Li, Zhifang; Yang, Yuling; Chen, Guojuan; Wang, Chunxue; Wu, Yuntao; Ruan, Chunyu; Liu, Yan; Wang, Yilong; Wu, Shouling
2016-01-01
To investigate the relationship between baseline systolic blood pressure (SBP) and visit-to-visit blood pressure variability in a general population. This is a prospective longitudinal cohort study on cardiovascular risk factors and cardiovascular or cerebrovascular events. Study participants attended a face-to-face interview every 2 years. Blood pressure variability was defined using the standard deviation and coefficient of variation of all SBP values at baseline and follow-up visits. The coefficient of variation is the ratio of the standard deviation to the mean SBP. We used multivariate linear regression models to test the relationships between SBP and standard deviation, and between SBP and coefficient of variation. In total, 43,360 participants (mean age: 48.2 ± 11.5 years) were included. In multivariate analysis, after adjustment for potential confounders, baseline SBPs <120 mmHg were inversely related to standard deviation (P<0.001) and coefficient of variation (P<0.001). In contrast, baseline SBPs ≥140 mmHg were significantly positively associated with standard deviation (P<0.001) and coefficient of variation (P<0.001). Baseline SBPs of 120-140 mmHg were associated with the lowest standard deviation and coefficient of variation. The associations between baseline SBP and standard deviation, and between SBP and coefficient of variation during follow-ups, showed a U curve. Both lower and higher baseline SBPs were associated with increased blood pressure variability. To control blood pressure variability, a good target SBP range for a general population might be 120-139 mmHg.
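A minimal sketch of the variability definitions used in the study, i.e. the SD of all SBP values across visits and the CV as SD over mean; the readings below are hypothetical:

```python
import statistics

def bp_variability(sbp_readings):
    """Visit-to-visit variability as defined in the study: the SD of all
    SBP values and the coefficient of variation CV = SD / mean."""
    mean = statistics.mean(sbp_readings)
    sd = statistics.stdev(sbp_readings)
    return sd, sd / mean

# Hypothetical SBP readings (mmHg) across follow-up visits:
sd, cv = bp_variability([120.0, 130.0, 140.0])
```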
40 CFR 131.6 - Minimum requirements for water quality standards submission.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 21 2010-07-01 2010-07-01 false Minimum requirements for water quality standards submission. 131.6 Section 131.6 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS WATER QUALITY STANDARDS General Provisions § 131.6 Minimum requirements for water quality standards submission. The...
40 CFR 131.6 - Minimum requirements for water quality standards submission.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 22 2011-07-01 2011-07-01 false Minimum requirements for water quality standards submission. 131.6 Section 131.6 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS WATER QUALITY STANDARDS General Provisions § 131.6 Minimum requirements for water quality standards submission. The...
Phase noise characterization of a QD-based diode laser frequency comb.
Vedala, Govind; Al-Qadi, Mustafa; O'Sullivan, Maurice; Cartledge, John; Hui, Rongqing
2017-07-10
We measure, simultaneously, the phases of a large set of comb lines from a passively mode-locked, InAs/InP, quantum dot laser frequency comb (QDLFC) by comparing the lines to a stable comb reference using multi-heterodyne coherent detection. Simultaneity permits the separation of differential and common-mode phase noise and a straightforward determination of the wavelength corresponding to the minimum width of the comb line. We find that the common-mode and differential phases are uncorrelated, and measure for the first time for a QDLFC that the intrinsic differential-mode phase (IDMP) between adjacent subcarriers is substantially the same for all subcarrier pairs. The latter observation supports an interpretation of 4.4 ps as the standard deviation of the IDMP over a 200 µs time interval for this laser.
A cloud physics investigation utilizing Skylab data
NASA Technical Reports Server (NTRS)
Alishouse, J.; Jacobowitz, H.; Wark, D. (Principal Investigator)
1975-01-01
The author has identified the following significant results. The Lowtran 2 program, S191 spectral response, and solar spectrum were used to compute the expected absorption by 2.0 micron band for a variety of cloud pressure levels and solar zenith angles. Analysis of the three long wavelength data channels continued in which it was found necessary to impose a minimum radiance criterion. It was also found necessary to modify the computer program to permit the computation of mean values and standard deviations for selected subsets of data on a given tape. A technique for computing the integrated absorption in the A band was devised. The technique normalizes the relative maximum at approximately .78 micron to the solar irradiance curve and then adjusts the relative maximum at approximately .74 micron to fit the solar curve.
Weinstein, Ronald S; Krupinski, Elizabeth A; Weinstein, John B; Graham, Anna R; Barker, Gail P; Erps, Kristine A; Holtrust, Angelette L; Holcomb, Michael J
2016-01-01
A medical school general pathology course has been reformatted into a K-12 general pathology course. This new course has been implemented at a series of 7 to 12 grade levels and the student outcomes compared. Typically, topics covered mirrored those in a medical school general pathology course serving as an introduction to the mechanisms of disease. Assessment of student performance was based on their score on a multiple-choice final examination modeled after an examination given to medical students. Two Tucson area schools, in a charter school network, participated in the study. Statistical analysis of examination performances showed that there were no significant differences as a function of school (F = 0.258, P = .6128), with students at school A having an average test score of 87.03 (standard deviation = 8.99) and school B 86.00 (standard deviation = 8.18). Analysis of variance was also conducted on the test scores as a function of gender and class grade. There were no significant differences as a function of gender (F = 0.608, P = .4382), with females having an average score of 87.18 (standard deviation = 7.24) and males 85.61 (standard deviation = 9.85). There were also no significant differences as a function of grade level (F = 0.627, P = .6003), with 7th graders having an average of 85.10 (standard deviation = 8.90), 8th graders 86.00 (standard deviation = 9.95), 9th graders 89.67 (standard deviation = 5.52), and 12th graders 86.90 (standard deviation = 7.52). The results demonstrated that middle and upper school students performed equally well in K-12 general pathology. Student course evaluations showed that the course met the students' expectations. One class voted K-12 general pathology their "elective course-of-the-year."
Flexner 3.0—Democratization of Medical Knowledge for the 21st Century
Krupinski, Elizabeth A.; Weinstein, John B.; Graham, Anna R.; Barker, Gail P.; Erps, Kristine A.; Holtrust, Angelette L.; Holcomb, Michael J.
2016-01-01
PMID:28725762
NASA Astrophysics Data System (ADS)
Pustil'Nik, Lev A.; Dorman, L. I.; Yom Din, G.
2003-07-01
The database of Professor Rogers, containing wheat prices in England in the Middle Ages (1249-1703), was used to search for possible manifestations of solar activity and cosmic ray variations. The main object of the statistical analysis is the investigation of price bursts. We present a conceptual model of possible modes of sensitivity of wheat prices to weather conditions, caused by solar-cycle variations in cosmic rays, and compare the expected price fluctuations with wheat price variations recorded in Medieval England. We compared the statistical properties of the intervals between price bursts with those of the intervals between extremes (minima) of solar cycles during the years 1700-2000. The statistical properties of these two samples are similar both in the average/median values of the intervals and in their standard deviations. We show that the histograms of interval distributions for price bursts and solar minima coincide with a high confidence level. We analyzed direct links between wheat prices and solar activity in the 17th century, for which wheat prices and solar activity data, as well as cosmic ray intensity (from the 10Be isotope), are available. We show that for all seven solar activity minima the observed prices were higher than the prices for the nine intervals of maximal solar activity preceding the minima. This result, combined with the conclusion on the similarity of the statistical properties of the price bursts and the solar activity extremes, we consider direct evidence of a causal connection between wheat price bursts and solar activity.
Estimation of the neural drive to the muscle from surface electromyograms
NASA Astrophysics Data System (ADS)
Hofmann, David
Muscle force is highly correlated with the standard deviation of the surface electromyogram (sEMG) produced by the active muscle. Correctly estimating this quantity for non-stationary sEMG and understanding its relation to neural drive and muscle force is of paramount importance. The single constituents of the sEMG are called motor unit action potentials, whose biphasic amplitudes can interfere (termed amplitude cancellation), potentially affecting the standard deviation (Keenan et al. 2005). However, when certain conditions are met, the Campbell-Hardy theorem suggests that amplitude cancellation does not affect the standard deviation. By simulating the sEMG, we verify the applicability of this theorem to myoelectric signals and investigate deviations from its conditions to obtain a more realistic setting. We find no difference in estimated standard deviation with and without interference, standing in stark contrast to previous results (Keenan et al. 2008, Farina et al. 2010). Furthermore, since the theorem provides us with the functional relationship between standard deviation and neural drive, we conclude that complex methods based on high-density electrode arrays and blind source separation might not bear substantial advantages for neural drive estimation (Farina and Holobar 2016). Funded by NIH grant 1 R01 EB022872 and NSF grant 1208126.
Comparison of a novel fixation device with standard suturing methods for spinal cord stimulators.
Bowman, Richard G; Caraway, David; Bentley, Ishmael
2013-01-01
Spinal cord stimulation is a well-established treatment for chronic neuropathic pain of the trunk or limbs. Currently, the standard method of fixation is to affix the leads of the neuromodulation device to soft tissue, fascia or ligament, through manually tied general suture. A novel semiautomated device is proposed that may be advantageous over the current standard. Comparison testing in an excised caprine spine and a simulated benchtop model was performed. Three tests were performed: 1) perpendicular pull from fascia of the caprine spine; 2) axial pull from fascia of the caprine spine; and 3) axial pull from Mylar film. Six samples of each configuration were tested for each scenario. Standard 2-0 Ethibond was compared with a novel semiautomated device (Anulex fiXate). Upon completion of testing, statistical analysis was performed for each scenario. For perpendicular pull in the caprine spine, the failure load for standard suture was 8.95 lbs with a standard deviation of 1.39, whereas for fiXate the load was 15.93 lbs with a standard deviation of 2.09. For axial pull in the caprine spine, the failure load for standard suture was 6.79 lbs with a standard deviation of 1.55, whereas for fiXate the load was 12.31 lbs with a standard deviation of 4.26. For axial pull in Mylar film, the failure load for standard suture was 10.87 lbs with a standard deviation of 1.56, whereas for fiXate the load was 19.54 lbs with a standard deviation of 2.24. These data suggest that the novel semiautomated device offers a method of fixation that may be utilized in lieu of standard suturing methods for securing neuromodulation devices, and in fact may provide more secure fixation than standard suturing methods. © 2012 International Neuromodulation Society.
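The abstract reports means and standard deviations for n = 6 samples per configuration but does not name the statistical test used. As an illustration only (a sketch, not necessarily the authors' analysis), Welch's t statistic can be computed from those summary statistics alone:

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic and degrees of freedom from summary statistics
    (means, standard deviations, sample sizes) of two groups."""
    v1, v2 = s1**2 / n1, s2**2 / n2
    t = (m2 - m1) / math.sqrt(v1 + v2)
    df = (v1 + v2)**2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
    return t, df

# Perpendicular pull: suture 8.95 +/- 1.39 vs. fiXate 15.93 +/- 2.09, n = 6 each
t, df = welch_t(8.95, 1.39, 6, 15.93, 2.09, 6)
```

A t of roughly 6.8 on about 8.7 degrees of freedom is consistent with the abstract's claim that the difference is meaningful.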
The minimum test battery to screen for binocular vision anomalies: report 3 of the BAND study.
Hussaindeen, Jameel Rizwana; Rakshit, Archayeeta; Singh, Neeraj Kumar; Swaminathan, Meenakshi; George, Ronnie; Kapur, Suman; Scheiman, Mitchell; Ramani, Krishna Kumar
2018-03-01
This study aims to report the minimum test battery needed to screen non-strabismic binocular vision anomalies (NSBVAs) in a community set-up. When large numbers are to be screened we aim to identify the most useful test battery when there is no opportunity for a more comprehensive and time-consuming clinical examination. The prevalence estimates and normative data for binocular vision parameters were estimated from the Binocular Vision Anomalies and Normative Data (BAND) study, following which cut-off estimates and receiver operating characteristic curves to identify the minimum test battery have been plotted. In the receiver operating characteristic phase of the study, children between nine and 17 years of age were screened in two schools in the rural arm using the minimum test battery, and the prevalence estimates with the minimum test battery were found. Receiver operating characteristic analyses revealed that near point of convergence with penlight and red filter (> 7.5 cm), monocular accommodative facility (< 10 cycles per minute), and the difference between near and distance phoria (> 1.25 prism dioptres) were significant factors with cut-off values for best sensitivity and specificity. This minimum test battery was applied to a cohort of 305 children. The mean (standard deviation) age of the subjects was 12.7 (2) years, with 121 males and 184 females. Using the minimum battery of tests obtained through the receiver operating characteristic analyses, the prevalence of NSBVAs was found to be 26 per cent. Near point of convergence with penlight and red filter > 10 cm was found to have the highest sensitivity (80 per cent) and specificity (73 per cent) for the diagnosis of convergence insufficiency. For the diagnosis of accommodative infacility, monocular accommodative facility with a cut-off of less than seven cycles per minute was the best predictor for screening (92 per cent sensitivity and 90 per cent specificity).
The minimum test battery of near point of convergence with penlight and red filter, difference between distance and near phoria, and monocular accommodative facility yield good sensitivity and specificity for diagnosis of NSBVAs in a community set-up. © 2017 Optometry Australia.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowers, Robert M.; Kyrpides, Nikos C.; Stepanauskas, Ramunas
We present two standards developed by the Genomic Standards Consortium (GSC) for reporting bacterial and archaeal genome sequences. Both are extensions of the Minimum Information about Any (x) Sequence (MIxS). The standards are the Minimum Information about a Single Amplified Genome (MISAG) and the Minimum Information about a Metagenome-Assembled Genome (MIMAG), covering, but not limited to, assembly quality and estimates of genome completeness and contamination. These standards can be used in combination with other GSC checklists, including the Minimum Information about a Genome Sequence (MIGS), Minimum Information about a Metagenomic Sequence (MIMS), and Minimum Information about a Marker Gene Sequence (MIMARKS). Community-wide adoption of MISAG and MIMAG will facilitate more robust comparative genomic analyses of bacterial and archaeal diversity.
Bowers, Robert M.; Kyrpides, Nikos C.; Stepanauskas, Ramunas; ...
2017-08-08
Computer Programs for the Semantic Differential: Further Modifications.
ERIC Educational Resources Information Center
Lawson, Edwin D.; And Others
The original nine programs for semantic differential analysis have been condensed into three programs which have been further refined and augmented. They yield: (1) means, standard deviations, and standard errors for each subscale on each concept; (2) Evaluation, Potency, and Activity (EPA) means, standard deviations, and standard errors; (3)…
Determining a one-tailed upper limit for future sample relative reproducibility standard deviations.
McClure, Foster D; Lee, Jung K
2006-01-01
A formula was developed to determine a one-tailed 100p% upper limit for future sample percent relative reproducibility standard deviations (RSD_R% = 100·s_R/y), where s_R is the sample reproducibility standard deviation, equal to the square root of the sum of the sample repeatability variance (s_r²) and the sample laboratory-to-laboratory variance (s_L²), i.e., s_R = √(s_r² + s_L²), and y is the sample mean. The future RSD_R% is expected to arise from a population of potential RSD_R% values whose true mean is ζ_R% = 100·σ_R/μ, where σ_R and μ are the population reproducibility standard deviation and mean, respectively.
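The definitions above combine directly; the following is a transcription of those definitions with hypothetical inputs:

```python
import math

def rsd_r_percent(s_r, s_L, y_mean):
    """Sample percent relative reproducibility standard deviation:
    s_R = sqrt(s_r^2 + s_L^2), RSD_R% = 100 * s_R / y_mean."""
    s_R = math.sqrt(s_r**2 + s_L**2)
    return 100.0 * s_R / y_mean

# Hypothetical repeatability SD 3, lab-to-lab SD 4, sample mean 100:
rsd = rsd_r_percent(3.0, 4.0, 100.0)   # s_R = 5, so RSD_R% = 5.0
```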
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false What are the minimum internal control standards for patron deposit accounts and cashless systems? 543.14 Section 543.14 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS FOR CLASS II GAMING § 543.14 What are the minimum internal control...
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 2 2010-04-01 2010-04-01 false How do these regulations affect minimum internal control standards established in a Tribal-State compact? 542.4 Section 542.4 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.4 How do these regulations affect minimum internal...
Code of Federal Regulations, 2011 CFR
2011-04-01
... 25 Indians 2 2011-04-01 2011-04-01 false How do these regulations affect minimum internal control standards established in a Tribal-State compact? 542.4 Section 542.4 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.4 How do these regulations affect minimum internal...
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false What are the minimum internal control standards for gaming promotions and player tracking systems? 543.12 Section 543.12 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS FOR CLASS II GAMING § 543.12 What are the minimum internal contro...
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false How do these regulations affect minimum internal control standards established in a Tribal-State compact? 542.4 Section 542.4 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.4 How do these regulations affect minimum internal...
Code of Federal Regulations, 2012 CFR
2012-04-01
... 25 Indians 2 2012-04-01 2012-04-01 false How do these regulations affect minimum internal control standards established in a Tribal-State compact? 542.4 Section 542.4 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.4 How do these regulations affect minimum internal...
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false What are the minimum internal control standards for patron deposit accounts and cashless systems? 543.14 Section 543.14 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS FOR CLASS II GAMING § 543.14 What are the minimum internal control...
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false How do these regulations affect minimum internal control standards established in a Tribal-State compact? 542.4 Section 542.4 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.4 How do these regulations affect minimum internal...
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false What are the minimum internal control standards for gaming promotions and player tracking systems? 543.12 Section 543.12 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS FOR CLASS II GAMING § 543.12 What are the minimum internal contro...
Xie, Wangshen; Orozco, Modesto; Truhlar, Donald G; Gao, Jiali
2009-02-17
A recently proposed electronic structure-based force field called the explicit polarization (X-Pol) potential is used to study many-body electronic polarization effects in a protein, in particular by carrying out a molecular dynamics (MD) simulation of bovine pancreatic trypsin inhibitor (BPTI) in water with periodic boundary conditions. The primary unit cell is cubic with dimensions ~54 × 54 × 54 Å³, and the total number of atoms in this cell is 14,281. An approximate electronic wave function, consisting of 29,026 basis functions for the entire system, is variationally optimized to give the minimum Born-Oppenheimer energy at every MD step; this allows the efficient evaluation of the required analytic forces for the dynamics. Intramolecular and intermolecular polarization and intramolecular charge transfer effects are examined and are found to be significant; for example, 17 out of 58 backbone carbonyls differ from neutrality on average by more than 0.1 electron, and the average charge on the six alanines varies from -0.05 to +0.09. The instantaneous excess charges vary even more widely; the backbone carbonyls have standard deviations in their fluctuating net charges from 0.03 to 0.05, and more than half of the residues have excess charges whose standard deviation exceeds 0.05. We conclude that the new-generation X-Pol force field permits the inclusion of time-dependent quantum mechanical polarization and charge transfer effects in much larger systems than was previously possible.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-09
...). Accordingly, AMS published a notice of review and request for written comments on the Standards in the April...; FV10-996-610 Review] Minimum Quality and Handling Standards for Domestic and Imported Peanuts Marketed... the Minimum Quality and Handling Standards for Domestic and Imported Peanuts Marketed in the United...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Markovic, M; Stathakis, S; Jurkovic, I
Purpose: The aim of the study was to compare intrinsic characteristics of nine detectors and evaluate their performance in non-equilibrium radiation dosimetry. Methods: The intrinsic characteristics evaluated are based on the composition and size of the active volume, operating voltage, initial recombination of the collected charge, temperature, and the effective cross section of the detectors. The short-term stability and collection efficiency were investigated. The minimum radiation detection sensitivity and detector leakage current were measured. The sensitivity to changes in energy spectrum as well as changes in incident beam angle were measured and analyzed. Results: The short-term stability of the measurements within every detector showed consistency in the measured values, with the highest value of the standard deviation of the mean not exceeding 0.5%. Air ion chamber detectors showed minimum sensitivity to changes in incident beam angle, while diode detectors underestimated measurements by up to 16%. Comparing the slopes of the tangents to the detectors' sensitivity curves, diode detectors show more sensitivity to changes in the photon spectrum than ion chamber detectors. A change in radiation detection sensitivity with increasing delivered dose was observed for semiconductor detectors, with a maximum deviation of 0.01% for doses between 1 Gy and 10 Gy. Leakage current was mainly influenced by bias voltage (ion chamber detectors) and room light intensity (diode detectors). With dose per pulse varying from 1.47E−4 to 5.1E−4 Gy/pulse, the maximum change in collection efficiency ranged from 1.4% for the air ion chambers up to 8% for the liquid-filled ion chamber. Conclusion: The broad range of measurements performed showed all the detectors to be subject to some limitations; while they are suitable for use in a broad scope of applications, careful selection has to be made for a particular range of measurements.
Effect of Variable Spatial Scales on USLE-GIS Computations
NASA Astrophysics Data System (ADS)
Patil, R. J.; Sharma, S. K.
2017-12-01
Use of an appropriate spatial scale is very important in Universal Soil Loss Equation (USLE) based spatially distributed soil erosion modelling. This study aimed at assessing annual rates of soil erosion at different spatial scales/grid sizes and analysing how changes in spatial scale affect USLE-GIS computations, using simulation and statistical variabilities. Efforts have been made in this study to recommend an optimum spatial scale for further USLE-GIS computations for management and planning in the study area. The present research study was conducted in Shakkar River watershed, situated in Narsinghpur and Chhindwara districts of Madhya Pradesh, India. Remote Sensing and GIS techniques were integrated with the USLE to predict the spatial distribution of soil erosion in the study area at four different spatial scales, viz. 30 m, 50 m, 100 m, and 200 m. Rainfall data, a soil map, a digital elevation model (DEM), an executable C++ program, and a satellite image of the area were used for preparation of the thematic maps for the various USLE factors. Annual rates of soil erosion were estimated for 15 years (1992 to 2006) at the four grid sizes. Statistical analysis of the four estimated datasets showed that the sediment loss dataset at the 30 m spatial scale had the minimum standard deviation (2.16), variance (4.68), and percent deviation from observed values (2.68-18.91%), and the highest coefficient of determination (R2 = 0.874) among the four datasets. Thus, it is recommended to adopt this spatial scale for USLE-GIS computations in the study area due to its minimum statistical variability and better agreement with the observed sediment loss data. This study also indicates large scope for use of finer spatial scales in spatially distributed soil erosion modelling.
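The scale-selection criterion described above (minimum standard deviation and percent deviation, highest R² against observed sediment loss) can be sketched as follows; the sediment-loss values below are hypothetical, not the study's data:

```python
import statistics

def percent_deviation(predicted, observed):
    """Absolute percent deviation of each predicted value from its observation."""
    return [abs(p - o) / o * 100.0 for p, o in zip(predicted, observed)]

def r_squared(predicted, observed):
    """Coefficient of determination (R^2) of predictions against observations."""
    mean_obs = statistics.fmean(observed)
    ss_res = sum((o - p) ** 2 for p, o in zip(predicted, observed))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Hypothetical annual sediment-loss values (t/ha) at one candidate grid size
observed = [5.2, 7.9, 6.1, 9.4, 4.8]
predicted = [5.5, 7.5, 6.4, 9.0, 5.1]

sd = statistics.stdev(predicted)            # spread of the simulated dataset
pdev = percent_deviation(predicted, observed)
r2 = r_squared(predicted, observed)         # agreement with observations
```

Repeating these statistics for each grid size and choosing the scale with the lowest variability and highest R² reproduces the selection logic used in the study.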
Ueno, Tamio; Matuda, Junichi; Yamane, Nobuhisa
2013-03-01
To evaluate the occurrence of out-of-acceptable-range results and the accuracy of antimicrobial susceptibility tests, we applied a new statistical tool to the Inter-Laboratory Quality Control Program established by the Kyushu Quality Control Research Group. First, we defined acceptable ranges of minimum inhibitory concentration (MIC) for broth microdilution tests and of inhibitory zone diameter for disk diffusion tests on the basis of Clinical and Laboratory Standards Institute (CLSI) M100-S21. In the analysis, more than two out-of-acceptable-range results in 20 tests were considered not allowable according to the CLSI document. Of the 90 participating laboratories, 46 (51%) experienced one or more out-of-acceptable-range results. Then, a binomial test was applied to each participating laboratory. The results indicated that the occurrences of out-of-acceptable-range results in 11 laboratories were significantly higher than the CLSI recommendation (allowable rate < or = 0.05). Standard deviation indices (SDI) were calculated using the reported results and the mean and standard deviation values for the respective antimicrobial agents tested. In the evaluation of accuracy, the mean value from each laboratory was statistically compared with zero using a Student's t-test. The results revealed that 5 of the 11 above laboratories reported erroneous test results that systematically drifted toward the resistant side. In conclusion, our statistical approach has enabled us to detect significantly higher occurrences and sources of interpretive errors in antimicrobial susceptibility tests; therefore, this approach can provide additional information that can improve the accuracy of test results in clinical microbiology laboratories.
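The binomial screening and SDI scoring described above can be sketched as follows; the laboratory counts and peer-group statistics are hypothetical:

```python
from math import comb

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): chance of k or more out-of-range
    results in n tests if the true out-of-range rate is p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def sdi(value, mean, sd):
    """Standard deviation index: how many SDs a reported result lies from
    the peer-group mean."""
    return (value - mean) / sd

# Hypothetical laboratory: 4 out-of-range results in 20 tests,
# screened against an allowable rate of 0.05
p_value = binom_tail(4, 20, 0.05)
flagged = p_value < 0.05
```

With these hypothetical counts the tail probability is about 0.016, so the laboratory would be flagged as exceeding the allowable rate.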
[Optimization of cluster analysis based on drug resistance profiles of MRSA isolates].
Tani, Hiroya; Kishi, Takahiko; Gotoh, Minehiro; Yamagishi, Yuka; Mikamo, Hiroshige
2015-12-01
We examined 402 methicillin-resistant Staphylococcus aureus (MRSA) strains isolated from clinical specimens in our hospital between November 19, 2010 and December 27, 2011 to evaluate the similarity between cluster analysis of drug susceptibility tests and pulsed-field gel electrophoresis (PFGE). The 402 strains tested were classified into 27 PFGE patterns (151 subtypes of patterns). Cluster analyses of drug susceptibility tests with a cut-off distance yielding a similar classification capability showed favorable results. When the MIC method was used, in which minimum inhibitory concentration (MIC) values enter the analysis directly, the level of agreement with PFGE was 74.2% when 15 drugs were tested; the Unweighted Pair Group Method with Arithmetic mean (UPGMA) was effective when the cut-off distance was 16. Using the SIR method, in which susceptible (S), intermediate (I), and resistant (R) were coded as 0, 2, and 3, respectively, according to the Clinical and Laboratory Standards Institute (CLSI) criteria, the level of agreement with PFGE was 75.9% when 17 drugs were tested, the clustering method was UPGMA, and the cut-off distance was 3.6. In addition, to assess reproducibility, 10 strains were randomly sampled from the overall test set and subjected to cluster analysis; this was repeated 100 times under the same conditions. The results indicated good reproducibility, with the level of agreement with PFGE showing a mean of 82.0%, standard deviation of 12.1%, and mode of 90.0% for the MIC method, and a mean of 80.0%, standard deviation of 13.4%, and mode of 90.0% for the SIR method. In summary, cluster analysis of drug susceptibility tests is useful for the epidemiological analysis of MRSA.
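The SIR coding (S → 0, I → 2, R → 3) can be sketched as below; the drug profiles, distance metric, and cut-off application are illustrative, since the abstract does not specify the exact distance computation:

```python
# SIR coding from the study: susceptible S -> 0, intermediate I -> 2, resistant R -> 3
SIR = {"S": 0, "I": 2, "R": 3}

def sir_vector(profile):
    """Numeric drug-resistance vector for one isolate's S/I/R profile string."""
    return [SIR[c] for c in profile]

def euclidean(u, v):
    """Illustrative inter-isolate distance (the abstract does not name the metric)."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

# Hypothetical 17-drug profiles for three isolates
iso_a = sir_vector("SSRRSIRSSSRRSISRS")
iso_b = sir_vector("SSRRSIRSSSRRSISRR")  # differs from iso_a in one drug
iso_c = sir_vector("RRSSRSIRRRSSRSRSS")

d_ab = euclidean(iso_a, iso_b)
d_ac = euclidean(iso_a, iso_c)
# With the study's cut-off distance of 3.6, iso_a and iso_b would cluster together
same_cluster_ab = d_ab <= 3.6
```

A full UPGMA dendrogram would then be built from the pairwise distance matrix and cut at the chosen distance.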
Nissim, Nir; Shahar, Yuval; Boland, Mary Regina; Tatonetti, Nicholas P; Elovici, Yuval; Hripcsak, George; Moskovitch, Robert
2018-01-01
Background and Objectives Labeling instances by domain experts for classification is often time consuming and expensive. To reduce such labeling efforts, we had proposed the application of active learning (AL) methods, introduced our CAESAR-ALE framework for classifying the severity of clinical conditions, and shown its significant reduction of labeling efforts. The use of any of three AL methods (one well known [SVM-Margin], and two that we introduced [Exploitation and Combination_XA]) significantly reduced (by 48% to 64%) condition labeling efforts, compared to standard passive (random instance-selection) SVM learning. Furthermore, our new AL methods achieved maximal accuracy using 12% fewer labeled cases than the SVM-Margin AL method. However, because labelers have varying levels of expertise, a major issue associated with learning methods, and AL methods in particular, is how best to use the labeling provided by a committee of labelers. First, we wanted to know, based on the labelers' learning curves, whether using AL methods (versus standard passive learning methods) has an effect on the intra-labeler variability (within the learning curve of each labeler) and inter-labeler variability (among the learning curves of different labelers). Then, we wanted to examine the effect of learning (either passively or actively) from the labels created by the majority consensus of a group of labelers. Methods We used our CAESAR-ALE framework for classifying the severity of clinical conditions, the three AL methods and the passive learning method, as mentioned above, to induce the classification models. We used a dataset of 516 clinical conditions and their severity labeling, represented by features aggregated from the medical records of 1.9 million patients treated at Columbia University Medical Center. 
We analyzed the variance of the classification performance within (intra-labeler), and especially among (inter-labeler), the classification models that were induced using the labels provided by seven labelers. We also compared the performance of the passive and active learning models when using the consensus label. Results The AL methods produced, for the models induced from each labeler, smoother intra-labeler learning curves during the training phase, compared to the models produced when using the passive learning method. The mean standard deviation of the learning curves of the three AL methods over all labelers (mean: 0.0379; range: [0.0182 to 0.0496]) was significantly lower (p = 0.049) than the intra-labeler standard deviation when using the passive learning method (mean: 0.0484; range: [0.0275 to 0.0724]). Using the AL methods resulted in a lower mean inter-labeler AUC standard deviation among the AUC values of the labelers' different models during the training phase, compared to the variance of the induced models' AUC values when using passive learning. The inter-labeler AUC standard deviation using the passive learning method (0.039) was almost twice as high as the inter-labeler standard deviation using our two new AL methods (0.02 and 0.019, respectively). The SVM-Margin AL method resulted in an inter-labeler standard deviation (0.029) that was almost 50% higher than that of our two AL methods. The difference in the inter-labeler standard deviation between the passive learning method and the SVM-Margin learning method was significant (p = 0.042). The difference between the SVM-Margin and Exploitation methods was insignificant (p = 0.29), as was the difference between the Combination_XA and Exploitation methods (p = 0.67). 
Finally, using the consensus label led to a learning curve that had a higher mean intra-labeler variance, but eventually resulted in an AUC that was at least as high as the AUC achieved using the gold standard label and that was always higher than the expected mean AUC of a randomly selected labeler, regardless of the choice of learning method (including a passive learning method). Using a paired t-test, the difference between the intra-labeler AUC standard deviation when using the consensus label, versus that value when using the other two labeling strategies, was significant only when using the passive learning method (p = 0.014), but not when using any of the three AL methods. Conclusions The use of AL methods (a) reduces intra-labeler variability in the performance of the induced models during the training phase, and thus reduces the risk of halting the process at a local minimum that is significantly different in performance from the rest of the learned models; and (b) reduces inter-labeler performance variance, and thus reduces the dependence on the use of a particular labeler. In addition, the use of a consensus label, agreed upon by a rather uneven group of labelers, might be at least as good as using the gold standard labeler, who might not be available, and is certainly better than randomly selecting one of the group's individual labelers. Finally, using the AL methods when provided with the consensus label reduced the intra-labeler AUC variance during the learning phase, compared to using passive learning. PMID:28456512
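The intra- versus inter-labeler variability computation can be sketched on hypothetical AUC learning curves (the study's actual curves came from the seven labelers' induced models):

```python
import statistics

# Hypothetical AUC learning curves: one row per labeler, one column per
# training step
curves = [
    [0.70, 0.74, 0.77, 0.79, 0.80],
    [0.66, 0.71, 0.75, 0.78, 0.79],
    [0.72, 0.75, 0.76, 0.78, 0.81],
]

# Intra-labeler variability: SD within each labeler's curve, averaged over labelers
intra = statistics.fmean(statistics.stdev(c) for c in curves)

# Inter-labeler variability: SD across labelers at each step, averaged over steps
inter = statistics.fmean(statistics.stdev(step) for step in zip(*curves))
```

Comparing these two quantities between the AL and passive runs is exactly the comparison reported above (e.g. 0.0379 vs 0.0484 intra-labeler).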
Code of Federal Regulations, 2012 CFR
2012-10-01
... manufactured passenger automobile minimum standard. 536.9 Section 536.9 Transportation Other Regulations... domestically manufactured passenger automobile minimum standard. (a) Each manufacturer is responsible for..., the domestically manufactured passenger automobile compliance category credit excess or shortfall is...
SU-E-J-161: Inverse Problems for Optical Parameters in Laser Induced Thermal Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fahrenholtz, SJ; Stafford, RJ; Fuentes, DT
Purpose: Magnetic resonance-guided laser-induced thermal therapy (MRgLITT) is being investigated as a neurosurgical intervention for oncological applications throughout the body in active post-market studies. Real-time MR temperature imaging is used to monitor ablative thermal delivery in the clinic. Additionally, brain MRgLITT could improve through effective planning for laser fiber placement. Mathematical bioheat models have been extensively investigated but require reliable patient-specific physical parameter data, e.g. optical parameters. This abstract applies an inverse problem algorithm to characterize optical parameter data obtained from previous MRgLITT interventions. Methods: The implemented inverse problem has three primary components: a parameter-space search algorithm, a physics model, and training data. First, the parameter-space search algorithm uses a gradient-based quasi-Newton method to optimize the effective optical attenuation coefficient, μ_eff. A parameter reduction reduces the amount of optical parameter-space the algorithm must search. Second, the physics model is a simplified bioheat model for homogeneous tissue where closed-form Green's functions represent the exact solution. Third, the training data were temperature imaging data from 23 MRgLITT oncological brain ablations (980 nm wavelength) from seven different patients. Results: To three significant figures, the descriptive statistics for μ_eff were: mean 1470 m⁻¹, median 1360 m⁻¹, standard deviation 369 m⁻¹, minimum 933 m⁻¹, and maximum 2260 m⁻¹. The standard deviation normalized by the mean was 25.0%. The inverse problem took <30 minutes to optimize all 23 datasets. Conclusion: As expected, the inferred average is biased by the underlying physics model. However, the standard deviation normalized by the mean is smaller than literature values and indicates increased precision in the characterization of the optical parameters needed to plan MRgLITT procedures. 
This investigation demonstrates the potential for the optimization and validation of more sophisticated bioheat models that incorporate the uncertainty of the data into the predictions, e.g. stochastic finite element methods.
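A minimal sketch of the inverse-problem structure: the toy point-source model below stands in for the closed-form Green's-function bioheat solution, and a coarse 1D search stands in for the quasi-Newton optimizer over the reduced parameter space; the radii, amplitude, and temperatures are assumptions, not clinical data:

```python
import math

def model_temp(mu_eff, r, amplitude=40.0):
    """Toy point-source temperature rise ~ exp(-mu_eff * r) / r: a stand-in
    for the closed-form Green's-function bioheat solution (amplitude assumed)."""
    return amplitude * math.exp(-mu_eff * r) / r

# Synthetic 'training' temperatures generated with mu_eff = 1470 1/m
radii = [0.003, 0.005, 0.008, 0.012]          # distances from the fiber, m
data = [model_temp(1470.0, r) for r in radii]

def sse(mu):
    """Sum of squared differences between model and training temperatures."""
    return sum((model_temp(mu, r) - t) ** 2 for r, t in zip(radii, data))

# Coarse 1D search over the reduced parameter space (stand-in for quasi-Newton)
mu_opt = min((sse(m), m) for m in range(500, 3001, 5))[1]
```

Because the synthetic data were generated with μ_eff = 1470 m⁻¹, the search recovers that value exactly; with real temperature maps the minimizer would inherit the model bias noted in the conclusion.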
Anti-inflammatory drugs and prediction of new structures by comparative analysis.
Bartzatt, Ronald
2012-01-01
Nonsteroidal anti-inflammatory drugs (NSAIDs) are a group of agents important for their analgesic, anti-inflammatory, and antipyretic properties. This study presents several approaches to predict and elucidate new molecular structures of NSAIDs based on 36 known and proven anti-inflammatory compounds. For the 36 known NSAIDs, the mean value of Log P is 3.338 (standard deviation = 1.237), the mean polar surface area is 63.176 Å² (standard deviation = 20.951 Å²), and the mean molecular weight is 292.665 (standard deviation = 55.627). Nine molecular properties are determined for these 36 NSAID agents, including Log P, number of -OH and -NHn groups, violations of the Rule of 5, number of rotatable bonds, and number of oxygens and nitrogens. Statistical analysis of these nine molecular properties provides numerical parameters to conform to in the design of novel NSAID drug candidates. Multiple regression analysis is performed using these properties of the 36 agents, followed by examples of predicted molecular weight based on minimum and maximum property values. Hierarchical cluster analysis indicated that licofelone, tolfenamic acid, meclofenamic acid, droxicam, and aspirin are substantially distinct from all remaining NSAIDs. Analysis of similarity (ANOSIM) produced R = 0.4947, which indicates a low to moderate level of dissimilarity among these 36 NSAIDs. Non-hierarchical K-means cluster analysis separated the 36 NSAIDs into four groups whose members show the greatest similarity. Likewise, discriminant analysis divided the 36 agents into two groups showing the greatest level of distinction (discrimination) based on the nine properties. Together, these two multivariate methods provide investigators a means to compare novel drug designs against the 36 proven compounds and ascertain which of them are most analogous in pharmacodynamics. 
In addition, artificial neural network modeling is demonstrated as an approach to predict numerous molecular properties of new drug designs, based on training a network on the 36 proven NSAIDs. Comprehensive and effective approaches are presented in this study for the design of new NSAID-type agents, which are important for the inhibition of the COX-2 and COX-1 isoenzymes.
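The reported means and standard deviations can be used directly as a conformity check for a new candidate, scoring each property in standard-deviation units; the candidate's property values below are hypothetical:

```python
# Descriptive statistics for the 36 NSAIDs reported in the study: (mean, SD)
stats_36 = {
    "logP": (3.338, 1.237),
    "PSA": (63.176, 20.951),    # polar surface area, A^2
    "MW": (292.665, 55.627),    # molecular weight
}

def z_scores(candidate):
    """How many SDs each property of a candidate lies from the 36-NSAID mean."""
    return {k: (candidate[k] - m) / s for k, (m, s) in stats_36.items()}

# Hypothetical drug candidate
candidate = {"logP": 3.9, "PSA": 49.0, "MW": 310.0}
z = z_scores(candidate)
conforms = all(abs(v) <= 2.0 for v in z.values())  # within 2 SD on every property
```

A candidate within roughly two standard deviations on every property stays inside the numerical envelope of the 36 proven agents.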
Packing Fraction of a Two-dimensional Eden Model with Random-Sized Particles
NASA Astrophysics Data System (ADS)
Kobayashi, Naoki; Yamazaki, Hiroshi
2018-01-01
We have performed a numerical simulation of a two-dimensional Eden model with random-size particles. In the present model, the particle radii are generated from a Gaussian distribution with mean μ and standard deviation σ. First, we have examined the bulk packing fraction for the Eden cluster and investigated the effects of the standard deviation and the total number of particles NT. We show that the bulk packing fraction depends on the number of particles and the standard deviation. In particular, for the dependence on the standard deviation, we have determined the asymptotic value of the bulk packing fraction in the limit of the dimensionless standard deviation. This value is larger than the packing fraction obtained in a previous study of the Eden model with uniform-size particles. Secondly, we have investigated the packing fraction of the entire Eden cluster including the effect of the interface fluctuation. We find that the entire packing fraction depends on the number of particles while it is independent of the standard deviation, in contrast to the bulk packing fraction. In a similar way to the bulk packing fraction, we have obtained the asymptotic value of the entire packing fraction in the limit NT → ∞. The obtained value of the entire packing fraction is smaller than that of the bulk value. This fact suggests that the interface fluctuation of the Eden cluster influences the packing fraction.
Complexities of follicle deviation during selection of a dominant follicle in Bos taurus heifers.
Ginther, O J; Baldrighi, J M; Siddiqui, M A R; Araujo, E R
2016-11-01
Follicle deviation during a follicular wave is a continuation in growth rate of the dominant follicle (F1) and decreased growth rate of the largest subordinate follicle (F2). The reliability of using an F1 of 8.5 mm to represent the beginning of expected deviation for experimental purposes during waves 1 and 2 (n = 26 per wave) was studied daily in heifers. Each wave was subgrouped as follows: standard subgroup (F1 larger than F2 for 2 days preceding deviation and F2 > 7.0 mm on the day of deviation), undersized subgroup (F2 did not attain 7.0 mm by the day of deviation), and switched subgroup (F2 larger than F1 at least once on the 2 days before or on the day of deviation). For each wave, mean differences in diameter between F1 and F2 changed abruptly at expected deviation in the standard subgroup but began 1 day before expected deviation in the undersized and switched subgroups. Concentrations of FSH in the wave-stimulating FSH surge and an increase in LH centered on expected deviation did not differ among subgroups. Results for each wave indicated that (1) expected deviation (F1, 8.5 mm) was a reliable representation of actual deviation in the standard subgroup but not in the undersized and switched subgroups; (2) concentrations of the gonadotropins normalized to expected deviation were similar among the three subgroups, indicating that the day of deviation was related to diameter of F1 and not F2; and (3) defining an expected day of deviation for experimental use should consider both diameter of F1 and the characteristics of deviation. Copyright © 2016 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bakhtiari, M; Schmitt, J; Sarfaraz, M
2015-06-15
Purpose: To establish the minimum number of patients required to obtain statistically accurate Planning Target Volume (PTV) margins for prostate Intensity Modulated Radiation Therapy (IMRT). Methods: A total of 320 prostate patients, comprising 9311 daily setups, were analyzed. These patients had undergone IMRT treatment. Daily localization was done using skin marks, and the proper shifts were determined by CBCT to match the prostate gland. The Van Herk formalism was used to obtain the margins from the systematic and random setup variations. The total patient population was divided into different grouping sizes, varying from 1 group of 320 patients to 64 groups of 5 patients. Each grouping was used to determine the average PTV margin and its associated standard deviation. Results: Analyzing all 320 patients led to an average Superior-Inferior margin of 1.15 cm. The grouping with 10 patients per group (32 groups) resulted in average PTV margins between 0.6 and 1.7 cm, with a mean value of 1.09 cm and a standard deviation (STD) of 0.30 cm. As the number of patients per group increases, the mean value of the average margin between groups converges to the true average PTV margin of 1.15 cm and the STD decreases. For groups of 20, 64, and 160 patients, Superior-Inferior margins of 1.12, 1.14, and 1.16 cm with STDs of 0.22, 0.11, and 0.01 cm were found, respectively. A similar tendency was observed for the Left-Right and Anterior-Posterior margins. Conclusion: The estimation of the required PTV margin strongly depends on the number of patients studied. According to this study, at least ∼60 patients are needed to calculate a statistically acceptable PTV margin for a criterion of STD < 0.1 cm. Numbers greater than ∼60 patients do little to increase the accuracy of the PTV margin estimation.
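The Van Herk margin recipe (2.5Σ + 0.7σ, with Σ the systematic and σ the random setup SD) and the group-size effect can be sketched on synthetic shifts; the SD values below are assumptions chosen to give margins near the reported 1.15 cm, not the study's data:

```python
import random
import statistics

def van_herk_margin(sigma_sys, sigma_rand):
    """Van Herk margin recipe: 2.5 * systematic SD + 0.7 * random SD."""
    return 2.5 * sigma_sys + 0.7 * sigma_rand

# Synthetic per-patient mean SI shifts (cm); their spread is the systematic error
random.seed(1)
patient_means = [random.gauss(0.0, 0.35) for _ in range(320)]
sigma_rand = 0.45  # assumed pooled day-to-day (random) SD, cm

def margin_for(group):
    return van_herk_margin(statistics.stdev(group), sigma_rand)

full_margin = margin_for(patient_means)
# Margins from small groups (10 patients each) scatter around the full estimate
small_margins = [margin_for(patient_means[i:i + 10]) for i in range(0, 320, 10)]
scatter = statistics.stdev(small_margins)
```

The scatter of the 10-patient margins around the full-population value mirrors the reported 0.6 to 1.7 cm spread, and it shrinks as group size grows.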
Bogucki, Artur J
2014-01-01
The knee joint is a bicondylar hinge two-level joint with six degrees of freedom. The location of the functional axis of flexion-extension motion is still a subject of research and discussion. During the swing phase, the femoral condyles do not have direct contact with the tibial articular surfaces, and the intra-articular space narrows with increasing weight bearing. The geometry of knee movements is determined by the shape of the articular surfaces. A digital recording of the gait of a healthy volunteer was analysed. In the first experimental variant, the subject wore a knee orthosis controlling flexion and extension with a hinge-type single-axis joint. In the second variant, the examination involved a hinge-type double-axis orthosis. Statistical analysis involved mathematically calculated values of displacement P. Scatter graphs with a fourth-order polynomial trend line and a confidence interval of 0.95 (due to noise) were prepared for each experimental variant. In Variant 1, the average displacement was 15.1 mm, the number of tests was 43, the standard deviation was 8.761, and the confidence interval was 2.2. The maximum value of displacement was 30.9 mm and the minimum value was 0.7 mm. In Variant 2, the average displacement was 13.4 mm, the number of tests was 44, the standard deviation was 7.275, and the confidence interval was 1.8. The maximum value of displacement was 30.2 mm and the minimum value was 3.4 mm. An analysis of moving averages for both experimental variants revealed that displacement trends for both types of orthosis were compatible from the mid-stance to the mid-swing phase. 1. The method employed in the experiment allows for determining the alignment between the axis of the knee joint and that of shin and thigh orthoses. 2. Migration of both the single- and double-axis orthoses during the gait cycle exceeded 3 cm. 3. During weight bearing, the double-axis orthosis was positioned more correctly. 4. The study results may be helpful in designing new hinge-type knee joints.
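For reference, the reported confidence intervals follow the normal-approximation half-width z·s/√n; the reported values (2.2 and 1.8) are reproduced with z ≈ 1.645, while a conventional two-sided 95% interval would use z = 1.96:

```python
import math

def ci_half_width(sd, n, z=1.645):
    """Normal-approximation confidence-interval half-width for a mean."""
    return z * sd / math.sqrt(n)

# Variant 1: sd = 8.761 mm, n = 43; Variant 2: sd = 7.275 mm, n = 44
v1 = ci_half_width(8.761, 43)
v2 = ci_half_width(7.275, 44)
```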
NASA Astrophysics Data System (ADS)
Lee, H.; Sheen, D.; Kim, S.
2013-12-01
The b-value in the Gutenberg-Richter relation is an important parameter widely used not only in the interpretation of regional tectonic structure but also in seismic hazard analysis. In this study, we tested four methods for estimating a stable b-value from a small number of events using the Monte-Carlo method. One is the least-squares method (LSM), which minimizes the observation error. The others are based on the maximum likelihood method (MLM), which maximizes the likelihood function: Utsu's (1965) method for continuous magnitudes and an infinite maximum magnitude, Page's (1968) for continuous magnitudes and a finite maximum magnitude, and Weichert's (1980) for interval magnitudes and a finite maximum magnitude. A synthetic parent population of one million events from magnitude 2.0 to 7.0, binned at an interval of 0.1, was generated for the Monte-Carlo simulation. Samples, whose number was increased from 25 to 1000, were extracted randomly from the parent population. The resampling procedure was applied 1000 times with different random seed numbers. The mean and the standard deviation of the b-value were estimated for each group of samples of the same size. As expected, the more samples were used, the more stable the estimated b-value. However, for a small number of events, the LSM generally gave a low b-value with a large standard deviation, while the MLMs gave more accurate and stable values. Utsu's (1965) method gave the most accurate and stable b-value even for a small number of events. It was also found that the selection of the minimum magnitude can be critical for estimating the correct b-value with Utsu's (1965) and Page's (1968) methods if magnitudes are binned into an interval. Therefore, we applied Utsu's (1965) method to estimate the b-value using two instrumental earthquake catalogs of events that occurred around the southern part of the Korean Peninsula from 1978 to 2011. 
By a careful choice of the minimum magnitude, the b-values of the earthquake catalogs of the Korea Meteorological Administration and Kim (2012) are estimated to be 0.72 and 0.74, respectively.
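Utsu's maximum-likelihood estimator favoured above has a simple closed form, b = log10(e) / (mean(M) - Mmin). A minimal Python sketch follows; the optional half-bin correction for interval magnitudes is the commonly used adjustment and is an assumption not spelled out in the abstract:

```python
import math

def utsu_b_value(mags, m_min, bin_width=0.0):
    """Maximum-likelihood b-value (Utsu, 1965):
    b = log10(e) / (mean(M) - M_min), using only events with M >= M_min.
    For magnitudes binned into intervals, half the bin width is
    subtracted from M_min (a common correction, assumed here)."""
    sample = [m for m in mags if m >= m_min]
    mean_m = sum(sample) / len(sample)
    return math.log10(math.e) / (mean_m - (m_min - bin_width / 2.0))
```

As the abstract notes, the result is sensitive to the choice of the minimum magnitude, which should sit at or above the catalog's completeness threshold.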
40 CFR 90.708 - Cumulative Sum (CumSum) procedure.
Code of Federal Regulations, 2010 CFR
2010-07-01
... is 5.0×σ, and is a function of the standard deviation, σ. σ=is the sample standard deviation and is... individual engine. FEL=Family Emission Limit (the standard if no FEL). F=.25×σ. (2) After each test pursuant...
2015-01-01
The goal of this study was to analyse perceptually and acoustically the voices of patients with Unilateral Vocal Fold Paralysis (UVFP) and compare them to the voices of normal subjects. These voices were analysed perceptually with the GRBAS scale and acoustically using the following parameters: mean fundamental frequency (F0), standard deviation of F0, jitter (ppq5), shimmer (apq11), mean harmonics-to-noise ratio (HNR), mean first (F1) and second (F2) formant frequencies, and standard deviations of the F1 and F2 frequencies. Statistically significant differences were found in all of the perceptual parameters. Jitter, shimmer, HNR, the standard deviation of F0 and the standard deviation of the F2 frequency were also statistically different between groups, for both genders. In the male data, differences were also found in the F1 and F2 frequency values and in the standard deviation of the F1 frequency. This study documented the alterations resulting from UVFP and explored parameters for which information on this pathology is limited. PMID:26557690
NASA Astrophysics Data System (ADS)
Krasnenko, N. P.; Kapegesheva, O. F.; Shamanaeva, L. G.
2017-11-01
The spatiotemporal dynamics of the standard deviations of the three wind velocity components measured with a mini-sodar in the atmospheric boundary layer are analyzed. During the day on September 16 and at night on September 12, the standard deviation varied from 0.5 to 4 m/s for the x- and y-components and from 0.2 to 1.2 m/s for the z-component. An analysis of the vertical profiles of the standard deviations of the three wind velocity components over a 6-day measurement period showed that the increase of σx and σy with altitude is well described by a power-law dependence with an exponent varying from 0.22 to 1.3 depending on the time of day, while σz depends linearly on altitude. The approximation constants have been found and their errors estimated. The established physical regularities and approximation constants describe the spatiotemporal dynamics of the standard deviations of the three wind velocity components in the atmospheric boundary layer and can be recommended for use in ABL models.
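The power-law profile fit described above (σ = a·z^p) reduces to a linear least-squares fit in log-log space. A small illustration with NumPy, assuming strictly positive profile data; the function name is illustrative:

```python
import numpy as np

def fit_power_law(z, sigma):
    """Fit sigma = a * z**p by least squares on
    log(sigma) = log(a) + p * log(z)."""
    p, log_a = np.polyfit(np.log(z), np.log(sigma), 1)
    return np.exp(log_a), p

# Example: a synthetic profile with a = 0.1, p = 0.5
z = np.linspace(50.0, 500.0, 20)
sigma = 0.1 * z ** 0.5
a_hat, p_hat = fit_power_law(z, sigma)
```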
5 CFR 890.201 - Minimum standards for health benefits plans.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 5 Administrative Personnel 2 2011-01-01 2011-01-01 false Minimum standards for health benefits... SERVICE REGULATIONS (CONTINUED) FEDERAL EMPLOYEES HEALTH BENEFITS PROGRAM Health Benefits Plans § 890.201 Minimum standards for health benefits plans. (a) To qualify for approval by OPM, a health benefits plan...
5 CFR 890.201 - Minimum standards for health benefits plans.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 5 Administrative Personnel 2 2012-01-01 2012-01-01 false Minimum standards for health benefits... SERVICE REGULATIONS (CONTINUED) FEDERAL EMPLOYEES HEALTH BENEFITS PROGRAM Health Benefits Plans § 890.201 Minimum standards for health benefits plans. (a) To qualify for approval by OPM, a health benefits plan...
A proof for Rhiel's range estimator of the coefficient of variation for skewed distributions.
Rhiel, G Steven
2007-02-01
This research proves that the coefficient of variation (CV(high-low)) calculated from the highest and lowest values in a set of data is applicable to specific skewed distributions with varying means and standard deviations. Earlier, Rhiel provided values for d(n), the standardized mean range, and a(n), an adjustment for bias in the range estimator of μ. These values are used in estimating the coefficient of variation from the range for skewed distributions. The d(n) and a(n) values were specified for specific skewed distributions with a fixed mean and standard deviation. This proof shows that the d(n) and a(n) values remain applicable to those skewed distributions when the mean and standard deviation take on differing values. This gives the researcher confidence in using this statistic for skewed distributions regardless of the mean and standard deviation.
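The core of the range estimator is that the expected sample range equals d(n)·σ, so σ can be estimated as (high − low)/d(n) and the CV follows by dividing by the mean. A minimal sketch; the d(n) values and the a(n) bias adjustment are tabulated in Rhiel's work and are not reproduced here, so the d_n argument below is a caller-supplied placeholder:

```python
def cv_from_range(high, low, mean, d_n):
    """Range-based coefficient of variation:
    sigma_hat = (high - low) / d_n, CV = sigma_hat / mean,
    where d_n is the tabulated standardized mean range for the
    sample size and distribution shape (supplied by the caller)."""
    sigma_hat = (high - low) / d_n
    return sigma_hat / mean
```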
Random errors in interferometry with the least-squares method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Qi
2011-01-20
This investigation analyzes random errors in interferometric surface profilers using the least-squares method when random noise is present. Two types of random noise are considered: intensity noise and position noise. Two formulas are derived for estimating the standard deviation of the surface height measurement: one for the case in which only intensity noise is present, and the other for the case in which only position noise is present. Measurements on simulated noisy interferometric data have been performed, and the standard deviations of the simulated measurements have been compared with those derived theoretically. The relationships between the random error and the wavelength of the light source, and between the random error and the amplitude of the interference fringe, are also discussed.
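One common concrete form of the least-squares method in phase-shifting interferometry fits I_i = a0 + a1·cos(δ_i) + a2·sin(δ_i) at each pixel and recovers the phase from the fitted coefficients. A sketch under that assumption (the abstract does not give its exact formulation; names are illustrative):

```python
import numpy as np

def ls_phase(intensities, shifts):
    """Least-squares phase from phase-shifted intensities.
    Model: I_i = a0 + B*cos(phi + d_i)
               = a0 + a1*cos(d_i) + a2*sin(d_i),
    with a1 = B*cos(phi), a2 = -B*sin(phi), so phi = atan2(-a2, a1)."""
    d = np.asarray(shifts, dtype=float)
    A = np.column_stack([np.ones_like(d), np.cos(d), np.sin(d)])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(intensities, dtype=float),
                                 rcond=None)
    a0, a1, a2 = coeffs
    return np.arctan2(-a2, a1)
```

Noise in the intensities or in the shifts δ_i propagates into the fitted phase and hence into the height measurement, which is what the two derived formulas quantify.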
Determination of selected anions in water by ion chromatography
Fishman, Marvin J.; Pyen, Grace
1979-01-01
Ion chromatography is a rapid, sensitive, precise, and accurate method for the determination of major anions in rain water and surface waters. Simultaneous analyses of a single sample for bromide, chloride, fluoride, nitrate, nitrite, orthophosphate, and sulfate require approximately 20 minutes to obtain a chromatogram.Minimum detection limits range from 0.01 milligrams per liter for fluoride to 0.20 milligrams per liter for chloride and sulfate. Percent relative standard deviations were less than nine percent for all anions except nitrite in Standard Reference Water Samples. Only one reference sample contained nitrite and its concentration was near the minimum level of detection. Similar precision was found for chloride, nitrate, and sulfate at concentrations less than 5 milligrams per liter in rainfall samples. Precision for fluoride ranged from 12 to 22 percent, but is attributed to the low concentrations in these samples. The other anions were not detected.To determine accuracy of results, several samples were spiked with known concentrations of fluoride, chloride, nitrate, and sulfate; recoveries ranged from 96 to 103 percent. Known amounts of bromide and phosphate were added, separately, to several other waters, which contained bromide or phosphate. Recovery of added bromide and phosphate ranged from approximately 95 to 104 percent. No recovery data were obtained for nitrite.Chloride, nitrate, nitrite, orthophosphate, and sulfate, in several samples, were also determined independently by automated colorimetric procedures. An automated ion-selective electrode method was used to determine fluoride. Results are in agreement with results obtained by ion chromatography.
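The spike-recovery accuracy check described above reduces to a simple calculation; a sketch with illustrative variable names:

```python
def percent_recovery(spiked_result, unspiked_result, amount_added):
    """Spike recovery in percent: the fraction of a known addition
    that is measured back, e.g. the 96-103 percent range reported
    for the anion determinations above."""
    return 100.0 * (spiked_result - unspiked_result) / amount_added
```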
Zhang, You; Yin, Fang-Fang; Ren, Lei
2015-08-01
Lung cancer treatment is susceptible to treatment errors caused by interfractional anatomical and respirational variations of the patient. On-board treatment dose verification is especially critical for the lung stereotactic body radiation therapy due to its high fractional dose. This study investigates the feasibility of using cone-beam (CB)CT images estimated by a motion modeling and free-form deformation (MM-FD) technique for on-board dose verification. Both digital and physical phantom studies were performed. Various interfractional variations featuring patient motion pattern change, tumor size change, and tumor average position change were simulated from planning CT to on-board images. The doses calculated on the planning CT (planned doses), the on-board CBCT estimated by MM-FD (MM-FD doses), and the on-board CBCT reconstructed by the conventional Feldkamp-Davis-Kress (FDK) algorithm (FDK doses) were compared to the on-board dose calculated on the "gold-standard" on-board images (gold-standard doses). The absolute deviations of minimum dose (ΔDmin), maximum dose (ΔDmax), and mean dose (ΔDmean), and the absolute deviations of prescription dose coverage (ΔV100%) were evaluated for the planning target volume (PTV). In addition, 4D on-board treatment dose accumulations were performed using 4D-CBCT images estimated by MM-FD in the physical phantom study. The accumulated doses were compared to those measured using optically stimulated luminescence (OSL) detectors and radiochromic films. Compared with the planned doses and the FDK doses, the MM-FD doses matched much better with the gold-standard doses. For the digital phantom study, the average (± standard deviation) ΔDmin, ΔDmax, ΔDmean, and ΔV100% (values normalized by the prescription dose or the total PTV) between the planned and the gold-standard PTV doses were 32.9% (±28.6%), 3.0% (±2.9%), 3.8% (±4.0%), and 15.4% (±12.4%), respectively. 
The corresponding values of FDK PTV doses were 1.6% (±1.9%), 1.2% (±0.6%), 2.2% (±0.8%), and 17.4% (±15.3%), respectively. In contrast, the corresponding values of MM-FD PTV doses were 0.3% (±0.2%), 0.9% (±0.6%), 0.6% (±0.4%), and 1.0% (±0.8%), respectively. Similarly, for the physical phantom study, the average ΔDmin, ΔDmax, ΔDmean, and ΔV100% of planned PTV doses were 38.1% (±30.8%), 3.5% (±5.1%), 3.0% (±2.6%), and 8.8% (±8.0%), respectively. The corresponding values of FDK PTV doses were 5.8% (±4.5%), 1.6% (±1.6%), 2.0% (±0.9%), and 9.3% (±10.5%), respectively. In contrast, the corresponding values of MM-FD PTV doses were 0.4% (±0.8%), 0.8% (±1.0%), 0.5% (±0.4%), and 0.8% (±0.8%), respectively. For the 4D dose accumulation study, the average (± standard deviation) absolute dose deviation (normalized by local doses) between the accumulated doses and the OSL measured doses was 3.3% (±2.7%). The average gamma index (3%/3 mm) between the accumulated doses and the radiochromic film measured doses was 94.5% (±2.5%). MM-FD estimated 4D-CBCT enables accurate on-board dose calculation and accumulation for lung radiation therapy. It can potentially be valuable for treatment quality assessment and adaptive radiation therapy.
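A plausible reading of the PTV dose-deviation metrics used above (ΔDmin, ΔDmax and ΔDmean normalized by the prescription dose, ΔV100% by the PTV volume) can be sketched as follows; the exact normalization is an assumption, not taken from the paper:

```python
import numpy as np

def ptv_dose_deviations(test_dose, gold_dose, rx_dose):
    """Absolute deviations between a test dose and the gold-standard
    dose over the same PTV voxels (1-D arrays of voxel doses)."""
    t = np.asarray(test_dose, dtype=float)
    g = np.asarray(gold_dose, dtype=float)
    d_min = abs(t.min() - g.min()) / rx_dose       # dDmin, fraction of Rx
    d_max = abs(t.max() - g.max()) / rx_dose       # dDmax
    d_mean = abs(t.mean() - g.mean()) / rx_dose    # dDmean
    dv100 = abs(np.mean(t >= rx_dose) - np.mean(g >= rx_dose))  # dV100%
    return d_min, d_max, d_mean, dv100
```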
N2/O2/H2 Dual-Pump Cars: Validation Experiments
NASA Technical Reports Server (NTRS)
O'Byrne, S.; Danehy, P. M.; Cutler, A. D.
2003-01-01
The dual-pump coherent anti-Stokes Raman spectroscopy (CARS) method is used to measure temperature and the relative species densities of N2, O2 and H2 in two experiments. Average values and root-mean-square (RMS) deviations are determined. Mean temperature measurements in a furnace containing air between 300 and 1800 K agreed with thermocouple measurements within 26 K on average, while mean mole fractions agree to within 1.6% of the expected value. The temperature measurement standard deviation averaged 64 K while the standard deviation of the species mole fractions averaged 7.8% for O2 and 3.8% for N2, based on 200 single-shot measurements. Preliminary measurements have also been performed in a flat-flame burner for fuel-lean and fuel-rich flames. Temperature standard deviations of 77 K were measured, and the ratios of H2 to N2 and O2 to N2 respectively had standard deviations from the mean value of 12.3% and 10% of the measured ratio.
Sphalerons in composite and nonstandard Higgs models
NASA Astrophysics Data System (ADS)
Spannowsky, Michael; Tamarit, Carlos
2017-01-01
After the discovery of the Higgs boson and the rather precise measurement of all the electroweak bosons' masses, the local structure of the electroweak symmetry breaking potential is already quite well established. However, despite being a key ingredient of a fundamental understanding of the underlying mechanism of electroweak symmetry breaking, the global structure of the electroweak potential remains entirely unknown. The existence of sphalerons, unstable solutions of the classical equations of motion that interpolate between topologically distinct vacua, is a direct consequence of the Standard Model's SU(2)L gauge group. Nevertheless, the sphaleron energy depends on the shape of the Higgs potential away from the minimum and can therefore be a litmus test for its global structure. Focusing on two scenarios, the minimal composite Higgs model SO(5)/SO(4) and an elementary Higgs with a deformed electroweak potential, we calculate the change of the sphaleron energy compared to the Standard Model prediction. We find that the sphaleron energy would have to be measured to O(10)% accuracy to exclude sizeable global deviations from the Standard Model Higgs potential. We further find that, because of the periodicity of the scalar potential in composite Higgs models, a second sphaleron branch with larger energy arises.
Comparative study of navigated versus freehand osteochondral graft transplantation of the knee.
Koulalis, Dimitrios; Di Benedetto, Paolo; Citak, Mustafa; O'Loughlin, Padhraig; Pearle, Andrew D; Kendoff, Daniel O
2009-04-01
Osteochondral lesions are a common sports-related injury for which osteochondral grafting, including mosaicplasty, is an established treatment. Computer navigation has been gaining popularity in orthopaedic surgery to improve accuracy and precision. Navigation improves angle and depth matching during harvest and placement of osteochondral grafts compared with the conventional freehand open technique. Controlled laboratory study. Three cadaveric knees were used. Reference markers were attached to the femur, tibia, and donor/recipient site guides. Fifteen osteochondral grafts were harvested and inserted into recipient sites with computer navigation, and 15 similar grafts were inserted freehand. The angles of graft removal and placement as well as surface congruity (graft depth) were calculated for each surgical group. The mean harvesting angle at the donor site using navigation was 4 degrees (standard deviation, 2.3 degrees; range, 1-9 degrees) versus 12 degrees (standard deviation, 5.5 degrees; range, 5-24 degrees) using the freehand technique (P < .0001). The recipient plug removal angle using the navigated technique was 3.3 degrees (standard deviation, 2.1 degrees; range, 0-9 degrees) versus 10.7 degrees (standard deviation, 4.9 degrees; range, 2-17 degrees) freehand (P < .0001). The mean navigated recipient plug placement angle was 3.6 degrees (standard deviation, 2.0 degrees; range, 1-9 degrees) versus 10.6 degrees (standard deviation, 4.4 degrees; range, 3-17 degrees) with the freehand technique (P = .0001). The mean height of plug protrusion under navigation was 0.3 mm (standard deviation, 0.2 mm; range, 0-0.6 mm) versus 0.5 mm (standard deviation, 0.3 mm; range, 0.2-1.1 mm) using the freehand technique (P = .0034). Significantly greater accuracy and precision were observed in harvesting and placement of the osteochondral grafts in the navigated procedures. 
Clinical studies are needed to establish a benefit in vivo. Improvement in the osteochondral harvest and placement is desirable to optimize clinical outcomes. Navigation shows great potential to improve both harvest and placement precision and accuracy, thus optimizing ultimate surface congruity.
Code of Federal Regulations, 2010 CFR
2010-04-01
... requirements to maintain minimum standards for Tribe/Consortium management systems? 1000.396 Section 1000.396... AGREEMENTS UNDER THE TRIBAL SELF-GOVERNMENT ACT AMENDMENTS TO THE INDIAN SELF-DETERMINATION AND EDUCATION ACT... minimum standards for Tribe/Consortium management systems? Yes, the Tribe/Consortium must maintain...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-09
... Proposed Information Collection to OMB Minimum Property Standards for Multifamily and Care-Type Occupancy... Lists the Following Information Title of Proposal: Minimum Property Standards for Multifamily and Care-Type Occupancy Housing. OMB Approval Number: 2502-0321. Form Numbers: None. Description of the Need for...
USDA-ARS?s Scientific Manuscript database
We present two standards developed by the Genomic Standards Consortium (GSC) for reporting bacterial and archaeal genome sequences. Both are extensions of the minimum information about any (x) sequence (MIxS). The standards are the minimum information about a single amplified genome (MISAG) and the ...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-30
... Parts 30 and 3400 SAFE Mortgage Licensing Act: Minimum Licensing Standards and Oversight... No. FR-5271-F-03] RIN 2502-A170 SAFE Mortgage Licensing Act: Minimum Licensing Standards and... pursuant to the Secure and Fair Enforcement Mortgage Licensing Act of 2008 (SAFE Act or Act), to ensure...
The effect of time in use on the display performance of the iPad.
Caffery, Liam J; Manthey, Kenneth L; Sim, Lawrence H
2016-07-01
The aim of this study was to evaluate changes to the luminance, luminance uniformity and conformance to the digital imaging and communications in medicine greyscale standard display function (GSDF) as a function of time in use for the iPad. Luminance measurements of the American Association of Physicists in Medicine (AAPM) Task Group 18 (TG18) luminance uniformity and luminance test patterns (TG18-UNL and TG18-LN8) were performed using a calibrated near-range luminance meter. Nine sets of measurements were taken, with the time in use of the iPad ranging from 0 to 2500 h. The maximum luminance (Lmax) of the display decreased (from 367 to 338 cd m(-2)) as a function of time. The minimum luminance remained constant. The maximum non-uniformity coefficient was 11%. Luminance uniformity decreased slightly as a function of time in use. The conformance of the iPad deviated from the GSDF curve at commencement of use. The deviation did not increase as a function of time in use. This study has demonstrated that the iPad display exhibits luminance degradation typical of liquid crystal displays. The Lmax of the iPad fell below the American College of Radiology-AAPM-Society for Imaging Informatics in Medicine recommendation for primary displays (>350 cd m(-2)) at approximately 1000 h in use. The Lmax recommendation for secondary displays (>250 cd m(-2)) was exceeded during the entire study. The maximum non-uniformity coefficient did not exceed the recommendations for either primary or secondary displays. The deviation from the GSDF exceeded the recommendations of TG18 for use as either a primary or secondary display. The brightness, uniformity and contrast response are reasonably stable over the useful lifetime of the device; however, the device fails to meet the contrast response standard for either a primary or secondary display.
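The non-uniformity coefficient reported above is commonly computed from the corner and centre measurements of a uniformity pattern as 200·(Lmax − Lmin)/(Lmax + Lmin); a sketch under that assumption:

```python
def nonuniformity_percent(luminances):
    """Luminance non-uniformity coefficient in percent across the
    measured positions of a uniformity test pattern (e.g. TG18-UNL):
    200 * (Lmax - Lmin) / (Lmax + Lmin)."""
    l_max, l_min = max(luminances), min(luminances)
    return 200.0 * (l_max - l_min) / (l_max + l_min)
```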
Matrix Summaries Improve Research Reports: Secondary Analyses Using Published Literature
ERIC Educational Resources Information Center
Zientek, Linda Reichwein; Thompson, Bruce
2009-01-01
Correlation matrices and standard deviations are the building blocks of many of the commonly conducted analyses in published research, and AERA and APA reporting standards recommend their inclusion when reporting research results. The authors argue that the inclusion of correlation/covariance matrices, standard deviations, and means can enhance…
30 CFR 74.8 - Measurement, accuracy, and reliability requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... concentration, as defined by the relative standard deviation of the distribution of measurements. The relative standard deviation shall be less than 0.1275 without bias for both full-shift measurements of 8 hours or... Standards, Regulations, and Variances, 1100 Wilson Boulevard, Room 2350, Arlington, Virginia 22209-3939...
The effects of auditory stimulation with music on heart rate variability in healthy women.
Roque, Adriano L; Valenti, Vitor E; Guida, Heraldo L; Campos, Mônica F; Knap, André; Vanderlei, Luiz Carlos M; Ferreira, Lucas L; Ferreira, Celso; Abreu, Luiz Carlos de
2013-07-01
There are no data in the literature with regard to the acute effects of different styles of music on the geometric indices of heart rate variability. In this study, we evaluated the acute effects of relaxant baroque and excitatory heavy metal music on the geometric indices of heart rate variability in women. We conducted this study in 21 healthy women ranging in age from 18 to 35 years. We excluded persons with previous experience with musical instruments and persons who had an affinity for the song styles. We evaluated two groups: Group 1 (n = 21), who were exposed to relaxant classical baroque musical and excitatory heavy metal auditory stimulation; and Group 2 (n = 19), who were exposed to both styles of music and white noise auditory stimulation. Using earphones, the volunteers were exposed to baroque or heavy metal music for five minutes. After the first music exposure to baroque or heavy metal music, they remained at rest for five minutes; subsequently, they were re-exposed to the opposite music (70-80 dB). A different group of women were exposed to the same music styles plus white noise auditory stimulation (90 dB). The sequence of the songs was randomized for each individual. We analyzed the following indices: triangular index, triangular interpolation of RR intervals and Poincaré plot (standard deviation of instantaneous beat-by-beat variability, standard deviation of the long-term RR interval, standard deviation of instantaneous beat-by-beat variability and standard deviation of the long-term RR interval ratio), low frequency, high frequency, low frequency/high frequency ratio, standard deviation of all the normal RR intervals, root-mean square of differences between the adjacent normal RR intervals and the percentage of adjacent RR intervals with a difference of duration greater than 50 ms. Heart rate variability was recorded at rest for 10 minutes. 
The triangular index and the standard deviation of the long-term RR interval indices were reduced during exposure to both music styles in the first group and tended to decrease in the second group whereas the white noise exposure decreased the high frequency index. We observed no changes regarding the triangular interpolation of RR intervals, standard deviation of instantaneous beat-by-beat variability and standard deviation of instantaneous beat-by-beat variability/standard deviation in the long-term RR interval ratio. We suggest that relaxant baroque and excitatory heavy metal music slightly decrease global heart rate variability because of the equivalent sound level.
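The Poincaré descriptors named above (the standard deviation of instantaneous beat-by-beat variability, SD1, and the standard deviation of the long-term RR interval, SD2) can be computed directly from an RR series. A minimal sketch using the standard identities SD1 = SDSD/√2 and SD2 = √(2·SDNN² − SD1²):

```python
import math
import statistics

def poincare_sd1_sd2(rr_intervals):
    """SD1/SD2 of the Poincare plot from RR intervals (e.g. in ms)."""
    diffs = [b - a for a, b in zip(rr_intervals, rr_intervals[1:])]
    sdsd = statistics.pstdev(diffs)         # SD of successive differences
    sdnn = statistics.pstdev(rr_intervals)  # SD of all RR intervals
    sd1 = sdsd / math.sqrt(2.0)
    sd2 = math.sqrt(max(2.0 * sdnn ** 2 - sd1 ** 2, 0.0))
    return sd1, sd2
```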
NASA Astrophysics Data System (ADS)
He, Jianbin; Zhang, Zhiyong; Shi, Yunyu; Liu, Haiyan
2003-08-01
We describe a method for efficient sampling of the energy landscape of a protein in atomic molecular dynamics simulations. A simulation is divided into alternately occurring relaxation phases and excitation phases. In the relaxation phase (conventional simulation), we use a frequently updated reference structure and deviations from this reference structure to mark whether the system has been trapped in a local minimum. In that case, the simulation enters the excitation phase, during which a few slow collective modes of the system are coupled to a higher-temperature bath. After the system has escaped from the minimum (also judged by deviations from the reference structure), the simulation reenters the relaxation phase. The collective modes are obtained from a coarse-grained Gaussian elastic network model. The scheme, which we call ACM-AME (amplified collective motion-assisted minimum escaping), is compared with conventional simulations as well as an alternative scheme that elevates the temperature of all degrees of freedom during the excitation phase (amplified overall motion-assisted minimum escaping, or AOM-AME). Comparison is made using simulations of four peptides starting from non-native extended or all-helical structures. In terms of sampling low-energy conformations and continuously sampling new conformations throughout a simulation, the ACM-AME scheme demonstrates very good performance while the AOM-AME scheme shows little improvement over conventional simulations. Limited success is achieved in producing structures close to the native structures of the peptides: for an S-peptide analog, the ACM-AME approach is able to reproduce its native helical structure, and starting from an all-helical structure of the villin headpiece subdomain (HP-36) in implicit solvent, two out of three 150 ns ACM-AME runs are able to sample structures with 3-4 Å backbone root-mean-square deviations from the nuclear magnetic resonance structure of the protein.
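The phase-switching logic described above (relax until the deviation from an updated reference signals trapping, excite until it signals escape) can be caricatured as follows; the propagators and thresholds are hypothetical stand-ins for the actual MD integrator and collective-mode excitation, not the authors' implementation:

```python
import numpy as np

def rmsd(x, ref):
    """Root-mean-square deviation between two coordinate arrays."""
    return float(np.sqrt(np.mean((x - ref) ** 2)))

def run_acm_ame(step_relax, step_excite, x0, n_steps,
                trap_rmsd=0.1, escape_rmsd=1.0, patience=10):
    """Alternate relaxation and excitation phases.  If, after `patience`
    relaxation steps, the RMSD from the reference stays below trap_rmsd,
    the system is deemed trapped and excitation starts; once the RMSD
    exceeds escape_rmsd, the reference is updated and relaxation resumes."""
    x, ref = x0.copy(), x0.copy()
    exciting, since_ref = False, 0
    for _ in range(n_steps):
        x = (step_excite if exciting else step_relax)(x)
        d = rmsd(x, ref)
        since_ref += 1
        if not exciting and since_ref >= patience and d < trap_rmsd:
            exciting = True                  # stuck in a local minimum
        elif exciting and d > escape_rmsd:
            exciting, since_ref = False, 0   # escaped: re-reference
            ref = x.copy()
    return x
```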
25 CFR 547.13 - What are the minimum technical standards for program storage media?
Code of Federal Regulations, 2011 CFR
2011-04-01
... 25 Indians 2 2011-04-01 2011-04-01 false What are the minimum technical standards for program storage media? 547.13 Section 547.13 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM TECHNICAL STANDARDS FOR GAMING EQUIPMENT USED WITH THE PLAY OF CLASS II GAMES...
25 CFR 547.16 - What are the minimum standards for game artwork, glass, and rules?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false What are the minimum standards for game artwork, glass... the minimum standards for game artwork, glass, and rules? (a) Rules, instructions, and prize schedules...: (1) Game name, rules, and options such as the purchase or wager amount stated clearly and...
25 CFR 547.13 - What are the minimum technical standards for program storage media?
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 2 2010-04-01 2010-04-01 false What are the minimum technical standards for program storage media? 547.13 Section 547.13 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM TECHNICAL STANDARDS FOR GAMING EQUIPMENT USED WITH THE PLAY OF CLASS II GAMES...
25 CFR 547.13 - What are the minimum technical standards for program storage media?
Code of Federal Regulations, 2012 CFR
2012-04-01
... 25 Indians 2 2012-04-01 2012-04-01 false What are the minimum technical standards for program storage media? 547.13 Section 547.13 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM TECHNICAL STANDARDS FOR GAMING EQUIPMENT USED WITH THE PLAY OF CLASS II GAMES...
25 CFR 547.16 - What are the minimum standards for game artwork, glass, and rules?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false What are the minimum standards for game artwork, glass... the minimum standards for game artwork, glass, and rules? (a) Rules, instructions, and prize schedules...: (1) Game name, rules, and options such as the purchase or wager amount stated clearly and...
12 CFR 366.12 - What are the FDIC's minimum standards of ethical responsibility?
Code of Federal Regulations, 2011 CFR
2011-01-01
... 12 Banks and Banking 4 2011-01-01 2011-01-01 false What are the FDIC's minimum standards of ethical responsibility? 366.12 Section 366.12 Banks and Banking FEDERAL DEPOSIT INSURANCE CORPORATION... § 366.12 What are the FDIC's minimum standards of ethical responsibility? (a) You and any person who...
Code of Federal Regulations, 2010 CFR
2010-04-01
... and enabling Class II gaming system components? 547.6 Section 547.6 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM TECHNICAL STANDARDS FOR GAMING EQUIPMENT USED WITH THE PLAY OF CLASS II GAMES § 547.6 What are the minimum technical standards for enrolling and...
48 CFR 22.1002-4 - Application of the Fair Labor Standards Act minimum wage.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Application of the Fair Labor Standards Act minimum wage. 22.1002-4 Section 22.1002-4 Federal Acquisition Regulations System... Service Contract Act of 1965, as Amended 22.1002-4 Application of the Fair Labor Standards Act minimum...
12 CFR 366.12 - What are the FDIC's minimum standards of ethical responsibility?
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 4 2010-01-01 2010-01-01 false What are the FDIC's minimum standards of ethical responsibility? 366.12 Section 366.12 Banks and Banking FEDERAL DEPOSIT INSURANCE CORPORATION... § 366.12 What are the FDIC's minimum standards of ethical responsibility? (a) You and any person who...
Code of Federal Regulations, 2010 CFR
2010-04-01
... reconciliation process; (ii) Pull tabs, including but not limited to, statistical records, winner verification... 25 Indians 2 2010-04-01 2010-04-01 false What are the minimum internal control standards for... COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.42 What are...
12 CFR 366.12 - What are the FDIC's minimum standards of ethical responsibility?
Code of Federal Regulations, 2014 CFR
2014-01-01
... 12 Banks and Banking 5 2014-01-01 2014-01-01 false What are the FDIC's minimum standards of ethical responsibility? 366.12 Section 366.12 Banks and Banking FEDERAL DEPOSIT INSURANCE CORPORATION... § 366.12 What are the FDIC's minimum standards of ethical responsibility? (a) You and any person who...
12 CFR 366.12 - What are the FDIC's minimum standards of ethical responsibility?
Code of Federal Regulations, 2012 CFR
2012-01-01
... 12 Banks and Banking 5 2012-01-01 2012-01-01 false What are the FDIC's minimum standards of ethical responsibility? 366.12 Section 366.12 Banks and Banking FEDERAL DEPOSIT INSURANCE CORPORATION... § 366.12 What are the FDIC's minimum standards of ethical responsibility? (a) You and any person who...
12 CFR 366.12 - What are the FDIC's minimum standards of ethical responsibility?
Code of Federal Regulations, 2013 CFR
2013-01-01
... 12 Banks and Banking 5 2013-01-01 2013-01-01 false What are the FDIC's minimum standards of ethical responsibility? 366.12 Section 366.12 Banks and Banking FEDERAL DEPOSIT INSURANCE CORPORATION... § 366.12 What are the FDIC's minimum standards of ethical responsibility? (a) You and any person who...
25 CFR 543.23 - What are the minimum internal control standards for audit and accounting?
Code of Federal Regulations, 2014 CFR
2014-04-01
... supervision, bingo cards, bingo card sales, draw, prize payout; cash and equivalent controls, technologic aids... 25 Indians 2 2014-04-01 2014-04-01 false What are the minimum internal control standards for audit... INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS FOR CLASS II GAMING § 543.23 What are the...
25 CFR 543.23 - What are the minimum internal control standards for audit and accounting?
Code of Federal Regulations, 2013 CFR
2013-04-01
... supervision, bingo cards, bingo card sales, draw, prize payout; cash and equivalent controls, technologic aids... 25 Indians 2 2013-04-01 2013-04-01 false What are the minimum internal control standards for audit... INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS FOR CLASS II GAMING § 543.23 What are the...
USL/DBMS NASA/PC R and D project C programming standards
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Moreau, Dennis R.
1984-01-01
A set of programming standards intended to promote reliability, readability, and portability of C programs written for PC research and development projects is established. These standards must be adhered to except where reasons for deviation are clearly identified and approved by the PC team. Any approved deviation from these standards must also be clearly documented in the pertinent source code.
NASA Technical Reports Server (NTRS)
Corbett, Lee B.; Bierman, Paul R.; Graly, Joseph A.; Neumann, Thomas A.; Rood, Dylan H.
2013-01-01
High-latitude landscape evolution processes have the potential to preserve old, relict surfaces through burial by cold-based, nonerosive glacial ice. To investigate landscape history and age in the high Arctic, we analyzed in situ cosmogenic Be(sup 10) and Al(sup 26) in 33 rocks from Upernavik, northwest Greenland. We sampled adjacent bedrock-boulder pairs along a 100 km transect at elevations up to 1000 m above sea level. Bedrock samples gave significantly older apparent exposure ages than corresponding boulder samples, and minimum limiting ages increased with elevation. Two-isotope Al(sup 26)/Be(sup 10) calculations on 20 of the 33 samples yielded minimum limiting exposure durations up to 112 k.y., minimum limiting burial durations up to 900 k.y., and minimum limiting total histories up to 990 k.y. The prevalence of Be(sup 10) and Al(sup 26) inherited from previous periods of exposure, especially in bedrock samples at high elevation, indicates that these areas record long and complex surface exposure histories, including significant periods of burial with little subglacial erosion. The long total histories suggest that these high-elevation surfaces were largely preserved beneath cold-based, nonerosive ice or snowfields for at least the latter half of the Quaternary. Because of high concentrations of inherited nuclides, only the six youngest boulder samples appear to record the timing of ice retreat. These six samples suggest deglaciation of the Upernavik coast at 11.3 +/- 0.5 ka (average +/- 1 standard deviation). There is no difference in deglaciation age along the 100 km sample transect, indicating that the ice-marginal position retreated rapidly, at rates of approximately 120 m yr(sup -1).
Vocal Parameters of Elderly Female Choir Singers
Aquino, Fernanda Salvatico de; Ferreira, Léslie Piccolotto
2015-01-01
Introduction Due to increased life expectancy among the population, studying the vocal parameters of the elderly is key to promoting vocal health in old age. Objective This study aims to analyze the speech range profile of elderly female choristers, according to age group. Method The study included 25 elderly female choristers from the Choir of the Messianic Church of São Paulo, with ages varying between 63 and 82 years and an average of 71 years (standard deviation of 5.22). The elders were divided into two groups: G1 aged 63 to 71 years and G2 aged 72 to 82. We asked each participant to count from 20 to 30 in weak, medium, strong, and very strong intensities. Their speech was registered by the software Vocalgrama, which allows the evaluation of the speech range profile. We then submitted the frequency and intensity parameters to descriptive analysis, for minimum and maximum levels as well as the range of the spoken voice. Results The averages of minimum and maximum frequencies were respectively 134.82–349.96 Hz for G1 and 137.28–348.59 Hz for G2; the averages of minimum and maximum intensities were respectively 40.28–95.50 dB for G1 and 40.63–94.35 dB for G2; the vocal range used in speech was 215.14 Hz for G1 and 211.30 Hz for G2. Conclusion The minimum and maximum frequencies, maximum intensity, and vocal range presented differences in favor of the younger elder group. PMID:26722341
Ran, Yang; Su, Rongtao; Ma, Pengfei; Wang, Xiaolin; Zhou, Pu; Si, Lei
2016-05-10
We present a new quantitative index, based on the standard deviation, to measure the homogeneity of spectral lines in a fiber amplifier system, so as to find the relation between the stimulated Brillouin scattering (SBS) threshold and the homogeneity of the corresponding spectral lines. A theoretical model is built and a simulation framework has been established to estimate the SBS threshold when input spectra with different homogeneities are set. In our experiment, by setting the phase modulation voltage to a constant value and the modulation frequency to different values, spectral lines with different homogeneities can be obtained. The experimental results show that the SBS threshold decreases as the standard deviation of the modulated spectrum increases, in good agreement with the theoretical results. When the phase modulation voltage is confined to 10 V and the modulation frequency is set to 80 MHz, the standard deviation of the modulated spectrum equals 0.0051, the lowest value in our experiment; at this setting, the highest SBS threshold is achieved. This standard deviation can be a good quantitative index for evaluating the power scaling potential of a fiber amplifier system, and also serves as a design guideline for better SBS suppression.
Kidney function endpoints in kidney transplant trials: a struggle for power.
Ibrahim, A; Garg, A X; Knoll, G A; Akbari, A; White, C A
2013-03-01
Kidney function endpoints are commonly used in randomized controlled trials (RCTs) in kidney transplantation (KTx). We conducted this study to estimate the proportion of ongoing RCTs with kidney function endpoints in KTx where the proposed sample size is large enough to detect meaningful differences in glomerular filtration rate (GFR) with adequate statistical power. RCTs were retrieved using the key word "kidney transplantation" from the National Institute of Health online clinical trial registry. Included trials had at least one measure of kidney function tracked for at least 1 month after transplant. We determined the proportion of two-arm parallel trials that had sufficient sample sizes to detect a minimum 5, 7.5 and 10 mL/min difference in GFR between arms. Fifty RCTs met inclusion criteria. Only 7% of the trials were above a sample size of 562, the number needed to detect a minimum 5 mL/min difference between the groups should one exist (assumptions: α = 0.05; power = 80%, 10% loss to follow-up, common standard deviation of 20 mL/min). The result increased modestly to 36% of trials when a minimum 10 mL/min difference was considered. Only a minority of ongoing trials have adequate statistical power to detect between-group differences in kidney function using conventional sample size estimating parameters. For this reason, some potentially effective interventions which ultimately could benefit patients may be abandoned from future assessment. © Copyright 2013 The American Society of Transplantation and the American Society of Transplant Surgeons.
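The sample-size threshold quoted above can be approximated with the standard two-sample formula n per arm = 2σ²(z₁₋α/₂ + z₁₋β)²/δ², inflated for loss to follow-up. A minimal stdlib-only sketch (the function name and the exact rounding convention are our own, which is presumably why the result lands near, rather than exactly on, the 562 reported):

```python
from math import ceil
from statistics import NormalDist

def total_sample_size(delta, sd=20.0, alpha=0.05, power=0.80, loss=0.10):
    """Total N for a two-arm parallel trial detecting a mean GFR
    difference `delta` (mL/min) with common standard deviation `sd`,
    inflated for anticipated loss to follow-up."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # two-sided significance
    z_b = z.inv_cdf(power)           # desired power
    n_per_arm = 2 * (sd / delta) ** 2 * (z_a + z_b) ** 2
    return ceil(2 * n_per_arm / (1 - loss))

# Minimum detectable differences of 5, 7.5 and 10 mL/min
sizes = {d: total_sample_size(d) for d in (5, 7.5, 10)}
```

With δ = 5 mL/min the total comes out close to the 562 cited; at δ = 10 mL/min the requirement drops to roughly a quarter of that, which is the abstract's point about how few trials clear even the looser bar.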
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suckling, Tara; Smith, Tony; Reed, Warren
2013-06-15
Optimal arterial opacification is crucial in imaging the pulmonary arteries using computed tomography (CT). This poses the challenge of precisely timing data acquisition to coincide with the transit of the contrast bolus through the pulmonary vasculature. The aim of this quality assurance exercise was to investigate if a change in CT pulmonary angiography (CTPA) scanning protocol resulted in improved opacification of the pulmonary arteries. Comparison was made between the smart prep protocol (SPP) and the test bolus protocol (TBP) for opacification in the pulmonary trunk. A total of 160 CTPA examinations (80 using each protocol) performed between January 2010 and February 2011 were assessed retrospectively. CT attenuation coefficients were measured in Hounsfield Units (HU) using regions of interest at the level of the pulmonary trunk. The average pixel value, standard deviation (SD), maximum, and minimum were recorded. For each of these variables a mean value was then calculated and compared for these two CTPA protocols. Minimum opacification of 200 HU was achieved in 98% of the TBP sample but only 90% of the SPP sample. The average CT attenuation over the pulmonary trunk for the SPP was 329 (SD = ±21) HU, whereas for the TBP it was 396 (SD = ±22) HU (P = 0.0017). The TBP also recorded higher maximum (P = 0.0024) and minimum (P = 0.0039) levels of opacification. This study has found that a TBP resulted in significantly better opacification of the pulmonary trunk than the SPP.
Complete 3D kinematics of upper extremity functional tasks.
van Andel, Carolien J; Wolterbeek, Nienke; Doorenbosch, Caroline A M; Veeger, DirkJan H E J; Harlaar, Jaap
2008-01-01
Upper extremity (UX) movement analysis by means of 3D kinematics has the potential to become an important clinical evaluation method. However, no standardized protocol for clinical application has yet been developed, that includes the whole upper limb. Standardization problems include the lack of a single representative function, the wide range of motion of joints and the complexity of the anatomical structures. A useful protocol would focus on the functional status of the arm and particularly the orientation of the hand. The aim of this work was to develop a standardized measurement method for unconstrained movement analysis of the UX that includes hand orientation, for a set of functional tasks for the UX and obtain normative values. Ten healthy subjects performed four representative activities of daily living (ADL). In addition, six standard active range of motion (ROM) tasks were executed. Joint angles of the wrist, elbow, shoulder and scapula were analyzed throughout each ADL task and minimum/maximum angles were determined from the ROM tasks. Characteristic trajectories were found for the ADL tasks, standard deviations were generally small and ROM results were consistent with the literature. The results of this study could form the normative basis for the development of a 'UX analysis report' equivalent to the 'gait analysis report' and would allow for future comparisons with pediatric and/or pathologic movement patterns.
Minimum reporting standards for clinical research on groin pain in athletes
Delahunt, Eamonn; Thorborg, Kristian; Khan, Karim M; Robinson, Philip; Hölmich, Per; Weir, Adam
2015-01-01
Groin pain in athletes is a priority area for sports physiotherapy and sports medicine research. Heterogeneous studies with low methodological quality dominate research related to groin pain in athletes. Low-quality studies undermine the external validity of research findings and limit the ability to generalise findings to the target patient population. Minimum reporting standards for research on groin pain in athletes are overdue. We propose a set of minimum reporting standards based on best available evidence to be utilised in future research on groin pain in athletes. Minimum reporting standards are provided in relation to: (1) study methodology, (2) study participants and injury history, (3) clinical examination, (4) clinical assessment and (5) radiology. Adherence to these minimum reporting standards will strengthen the quality and transparency of research conducted on groin pain in athletes. This will allow an easier comparison of outcomes across studies in the future. PMID:26031644
NetCDF file of the SREF standard deviation of wind speed and direction that was used to inject variability in the FDDA input. Variable U_NDG_OLD contains the standard deviation of wind speed (m/s); variable V_NDG_OLD contains the standard deviation of wind direction (deg). This dataset is associated with the following publication: Gilliam, R., C. Hogrefe, J. Godowitch, S. Napelenok, R. Mathur, and S.T. Rao. Impact of inherent meteorology uncertainty on air quality model predictions. JOURNAL OF GEOPHYSICAL RESEARCH-ATMOSPHERES. American Geophysical Union, Washington, DC, USA, 120(23): 12,259–12,280, (2015).
75 FR 67093 - Iceberg Water Deviating From Identity Standard; Temporary Permit for Market Testing
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-01
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration [Docket No. FDA-2010-P-0517] Iceberg Water Deviating From Identity Standard; Temporary Permit for Market Testing AGENCY: Food and Drug... from the requirements of the standards of identity issued under section 401 of the Federal Food, Drug...
78 FR 2273 - Canned Tuna Deviating From Identity Standard; Temporary Permit for Market Testing
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-10
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration [Docket No. FDA-2012-P-1189] Canned Tuna Deviating From Identity Standard; Temporary Permit for Market Testing AGENCY: Food and Drug... interstate shipment of experimental packs of food varying from the requirements of standards of identity...
Upgraded FAA Airfield Capacity Model. Volume 2. Technical Description of Revisions
1981-02-01
[Extraction fragment from the report body: Figure 3-1, "Time Axis Diagram of Single Runway Operations"; glossary entries define t(k) as the time at which departure k is released, the standard deviation of the interarrival time, SIGMAR as the standard deviation of the arrival runway occupancy time, and SINGLE as a program subroutine.]
Code of Federal Regulations, 2010 CFR
2010-04-01
... but not limited to, bingo card control, payout procedures, and cash reconciliation process; (ii) Pull... 25 Indians 2 2010-04-01 2010-04-01 false What are the minimum internal control standards for... COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.22 What are...
Code of Federal Regulations, 2010 CFR
2010-04-01
... but not limited to, bingo card control, payout procedures, and cash reconciliation process; (ii) Pull... 25 Indians 2 2010-04-01 2010-04-01 false What are the minimum internal control standards for... COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.32 What are...
Methods of editing cloud and atmospheric layer affected pixels from satellite data
NASA Technical Reports Server (NTRS)
Nixon, P. R.; Wiegand, C. L.; Richardson, A. J.; Johnson, M. P. (Principal Investigator)
1982-01-01
Subvisible cirrus clouds (SCi) were easily distinguished in mid-infrared (MIR) TIROS-N daytime data from south Texas and northeast Mexico. The MIR (3.55-3.93 micrometer) pixel digital count means of the SCi affected areas were more than 3.5 standard deviations on the cold side of the scene means. (These standard deviations were made free of the effects of unusual instrument error by factoring out the Ch 3 MIR noise on the basis of detailed examination of noisy and noise-free pixels). SCi affected areas in the IR Ch 4 (10.5-11.5 micrometer) appeared cooler than the general scene, but were not as prominent as in Ch 3, being less than 2 standard deviations from the scene mean. Ch 3 and 4 standard deviations and coefficients of variation are not reliable indicators, by themselves, of the presence of SCi because land features can have similar statistical properties.
A Taxonomy of Delivery and Documentation Deviations During Delivery of High-Fidelity Simulations.
McIvor, William R; Banerjee, Arna; Boulet, John R; Bekhuis, Tanja; Tseytlin, Eugene; Torsher, Laurence; DeMaria, Samuel; Rask, John P; Shotwell, Matthew S; Burden, Amanda; Cooper, Jeffrey B; Gaba, David M; Levine, Adam; Park, Christine; Sinz, Elizabeth; Steadman, Randolph H; Weinger, Matthew B
2017-02-01
We developed a taxonomy of simulation delivery and documentation deviations noted during a multicenter, high-fidelity simulation trial that was conducted to assess practicing physicians' performance. Eight simulation centers sought to implement standardized scenarios over 2 years. Rules, guidelines, and detailed scenario scripts were established to facilitate reproducible scenario delivery; however, pilot trials revealed deviations from those rubrics. A taxonomy with hierarchically arranged terms that define a lack of standardization of simulation scenario delivery was then created to aid educators and researchers in assessing and describing their ability to reproducibly conduct simulations. Thirty-six types of delivery or documentation deviations were identified from the scenario scripts and study rules. Using a Delphi technique and open card sorting, simulation experts formulated a taxonomy of high-fidelity simulation execution and documentation deviations. The taxonomy was iteratively refined and then tested by 2 investigators not involved with its development. The taxonomy has 2 main classes, simulation center deviation and participant deviation, which are further subdivided into as many as 6 subclasses. Inter-rater classification agreement using the taxonomy was 74% or greater for each of the 7 levels of its hierarchy. Cohen kappa calculations confirmed substantial agreement beyond that expected by chance. All deviations were classified within the taxonomy. This is a useful taxonomy that standardizes terms for simulation delivery and documentation deviations, facilitates quality assurance in scenario delivery, and enables quantification of the impact of deviations upon simulation-based performance assessment.
Aqueous and vitreous penetration of linezolid and levofloxacin after oral administration.
George, Jomy M; Fiscella, Richard; Blair, Michael; Rodvold, Keith; Ulanski, Lawrence; Stokes, John; Blair, Norman; Pontiggia, Laura
2010-12-01
To evaluate the time course of drug concentrations achieved in aqueous (AQ), vitreous (V), and serum (S) compartments after oral administration of linezolid and levofloxacin. Randomized, clinical trial. Clinical practice. Sixteen patients (16 eyes) undergoing vitrectomy who had not had a prior pars plana vitrectomy in the study eye were randomly assigned to one of 4 groups. AQ, V, and S samples were obtained from all subjects after single concomitant doses of linezolid 600 mg and levofloxacin 750 mg between 1 and 12 h before the procedure: group A = 1-3 h; group B = 3-6 h; group C = 6-9 h; group D = 9-12 h. AQ, V, and S concentrations of linezolid and levofloxacin. Overall mean concentrations ± standard deviation (μg/mL) achieved by linezolid in AQ, V, and S compartments were 3.32 ± 2.06, 2.98 ± 1.87, and 7.91 ± 3.94, respectively. Overall mean concentrations ± standard deviation (μg/mL) achieved by levofloxacin in AQ, V, and S compartments were 2.19 ± 1.92, 1.95 ± 1.27, and 7.38 ± 3.47, respectively. Single concomitant doses of linezolid and levofloxacin achieved AQ and V concentrations above the minimum inhibitory concentration for 90% of common ocular gram-positive and gram-negative pathogens up to 12 h after administration. The combination of linezolid and levofloxacin represents a viable option for the prophylaxis and management of endophthalmitis.
Luo, Dehui; Wan, Xiang; Liu, Jiming; Tong, Tiejun
2018-06-01
The era of big data is coming, and evidence-based medicine is attracting increasing attention as a way to improve decision making in medical practice by integrating evidence from well designed and conducted clinical research. Meta-analysis is a statistical technique widely used in evidence-based medicine for analytically combining the findings from independent clinical trials to provide an overall estimate of treatment effectiveness. The sample mean and standard deviation are two commonly used statistics in meta-analysis, but some trials report the median, the minimum and maximum values, or sometimes the first and third quartiles instead. Thus, to pool results in a consistent format, researchers need to transform that information back to the sample mean and standard deviation. In this article, we investigate the optimal estimation of the sample mean for meta-analysis from both theoretical and empirical perspectives. A major drawback in the literature is that the sample size, despite its importance, is either ignored or used in a stepwise but somewhat arbitrary manner, e.g. in the famous method proposed by Hozo et al. We solve this issue by incorporating the sample size in a smoothly changing weight in the estimators to reach the optimal estimation. Our proposed estimators not only improve the existing ones significantly but also share the same virtue of simplicity. The real data application indicates that our proposed estimators can serve as "rules of thumb" and will be widely applied in evidence-based medicine.
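For context, the stepwise Hozo et al. approach that this abstract improves upon estimates the mean from the three-number summary {min a, median m, max b} as (a + 2m + b)/4, and the SD from the range with a sample-size-dependent divisor. A simplified sketch (the SD rule below is a reduced two-branch version of Hozo's original three-branch rule, shown only to illustrate the stepwise sample-size handling the abstract criticizes):

```python
def hozo_mean(minimum, median, maximum):
    """Hozo et al.'s simple estimate of the sample mean from the
    three-number summary {min, median, max}."""
    return (minimum + 2 * median + maximum) / 4

def hozo_sd(minimum, maximum, n):
    """Range-based SD estimate with an abrupt, stepwise dependence on
    the sample size n -- the kind of discontinuity the abstract's
    smooth-weight estimators are designed to remove."""
    rng = maximum - minimum
    return rng / 4 if n <= 70 else rng / 6
```

Note the discontinuity: the SD estimate jumps as n crosses 70 even though nothing about the data changed, which is exactly the arbitrariness a smoothly changing weight avoids.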
NASA Astrophysics Data System (ADS)
Wang, Y.
2011-01-01
The direct topographic effect (DTE) and indirect topographic effect (ITE) of Helmert's 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, S; Wang, Y; Weng, H
Purpose To evaluate the image quality and radiation dose of routine abdomen computed tomography exams performed with the automatic tube current modulation (ATCM) technique on two different brand 64-slice CT scanners at our site. Materials and Methods A retrospective review of routine abdomen CT exams performed with two scanners, scanner A and scanner B, at our site. The standard deviation at the portal hepatic level, measured in a 12.5 mm x 12.5 mm region of interest, represented the image noise. The radiation dose was obtained from the CT DICOM image information, with the computed tomography dose index volume (CTDIv) representing the CT radiation dose. The patients in this study were of normal weight (about 65–75 kg). Results The standard deviation of scanner A was smaller than that of scanner B, so scanner A may provide better image quality. On the other hand, with ATCM the radiation dose of scanner A was about 50–60% higher than that of scanner B. For both scanners, the radiation dose was under the diagnostic reference level. Conclusion The ATCM systems in modern CT scanners can contribute a significant reduction in radiation dose to the patient, but the reduction achieved by ATCM systems from different CT scanner manufacturers varies slightly. Whatever CT scanner is used, it is necessary to find the acceptable threshold of image quality with the minimum possible radiation exposure to the patient, in agreement with the ALARA principle.
NASA Astrophysics Data System (ADS)
Jahanianl, Nahid; Aram, Majid; Morshedian, Nader; Mehramiz, Ahmad
2018-03-01
In this report, the distribution of and deviation in the electric field were investigated in the active medium of a TE CO2 laser. The variation in the electric field is due to the injection of net electron and proton charges as a plasma generator. The charged-particle beam density is assumed to be Gaussian. The electric potential and electric field distribution were simulated by solving Poisson's equation using the SOR numerical method. The minimum deviation of the electric field obtained was about 2.2% and 6% for the electron and proton beams, respectively, for a charged-particle beam density of 10^6 cm^-3. This result was obtained for a system geometry ensuring a mean free path of the beam particles of 15 mm. It was also found that the field deviation increases for a mean free path smaller than that value or larger than 25 mm. Moreover, the electric field deviation decreases when the electron beam density exceeds 10^6 cm^-3.
On Teaching about the Coefficient of Variation in Introductory Statistics Courses
ERIC Educational Resources Information Center
Trafimow, David
2014-01-01
The standard deviation is related to the mean by virtue of the coefficient of variation. Teachers of statistics courses can make use of that fact to make the standard deviation more comprehensible for statistics students.
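The relationship is simply CV = s / x̄, i.e. the standard deviation expressed as a fraction of the mean, which makes spread comparable across samples measured on different scales:

```python
from statistics import mean, stdev

def coefficient_of_variation(data):
    """Coefficient of variation: the (sample) standard deviation
    expressed as a fraction of the mean. Unitless, so spreads are
    comparable across samples with different means or units."""
    return stdev(data) / mean(data)

# Two samples with the same absolute spread (s = 2) but different means:
cv_small_mean = coefficient_of_variation([8, 10, 12])     # s=2, mean=10
cv_large_mean = coefficient_of_variation([98, 100, 102])  # s=2, mean=100
```

Both samples have identical standard deviations, yet the CV shows the spread is ten times larger relative to the mean in the first — the kind of contrast that can make the standard deviation click for students.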
78 FR 63873 - Minimum Internal Control Standards
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-25
... Internal Control Standards AGENCY: National Indian Gaming Commission, Interior. ACTION: Final rule. SUMMARY: The National Indian Gaming Commission (NIGC) amends its minimum internal control standards for Class... Internal Control Standards. 64 FR 590. The rule added a new part to the Commission's regulations...
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 1 2010-04-01 2010-04-01 false Are there any minimum employment standards for Indian country law enforcement personnel? 12.31 Section 12.31 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER INDIAN COUNTRY LAW ENFORCEMENT Qualifications and Training Requirements § 12.31 Are there any minimum employment standards...
Darajeh, Negisa; Idris, Azni; Fard Masoumi, Hamid Reza; Nourani, Abolfazl; Truong, Paul; Rezania, Shahabaldin
2017-05-04
Artificial neural networks (ANNs) have been widely used to solve problems because of their reliable, robust, and salient characteristics in capturing the nonlinear relationships between variables in complex systems. In this study, an ANN was applied for modeling of Chemical Oxygen Demand (COD) and biodegradable organic matter (BOD) removal from palm oil mill secondary effluent (POMSE) by a vetiver system. The independent variables, including POMSE concentration, vetiver slips density, and removal time, were considered as input parameters to optimize the network, while the removal percentages of COD and BOD were selected as outputs. To determine the number of hidden layer nodes, the root mean squared error of the testing set was minimized, and the topologies of the algorithms were compared by coefficient of determination and absolute average deviation. The comparison indicated that the quick propagation (QP) algorithm had the minimum root mean squared error and absolute average deviation, and the maximum coefficient of determination. The importance values of the variables were 42.41% for vetiver slips density, 29.8% for time, and 27.79% for POMSE concentration, showing that none of them is negligible. Results show that the ANN has great potential for predicting COD and BOD removal from POMSE, with a residual standard error (RSE) of less than 0.45%.
Statistical Analysis of 30 Years Rainfall Data: A Case Study
NASA Astrophysics Data System (ADS)
Arvind, G.; Ashok Kumar, P.; Girish Karthi, S.; Suribabu, C. R.
2017-07-01
Rainfall is a prime input for various engineering designs such as hydraulic structures, bridges and culverts, canals, storm water sewers and road drainage systems. A detailed statistical analysis of each region is essential to estimate the relevant input values for the design and analysis of engineering structures and also for crop planning. A rain gauge station located in Trichy district, where agriculture is the prime occupation, is selected for statistical analysis. Daily rainfall data for a period of 30 years are used to characterize normal rainfall, deficit rainfall, excess rainfall and seasonal rainfall for the selected circle headquarters. Further, the various plotting position formulae available are used to evaluate the return period of monthly, seasonal and annual rainfall. This analysis will provide useful information for water resources planners, farmers and urban engineers to assess the availability of water and create storage accordingly. The mean, standard deviation and coefficient of variation of monthly and annual rainfall were calculated to check the rainfall variability. From the calculated results, the rainfall pattern is found to be erratic. The best-fit probability distribution was identified based on the minimum deviation between actual and estimated values. The scientific results and the analysis paved the way to determine the proper onset and withdrawal of the monsoon, results which were used for land preparation and sowing.
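The plotting-position step mentioned above is commonly done with the Weibull formula: the m-th largest value gets exceedance probability P = m/(n+1), hence return period T = 1/P = (n+1)/m years. A sketch under that assumption (the authors compare several plotting-position formulae; this shows only Weibull's):

```python
def weibull_return_periods(annual_totals):
    """Rank annual rainfall totals (largest first) and pair each with
    its Weibull plotting-position return period (n + 1) / m in years,
    where m is the rank and n the record length."""
    n = len(annual_totals)
    ranked = sorted(annual_totals, reverse=True)
    return [(x, (n + 1) / m) for m, x in enumerate(ranked, start=1)]

# Toy 4-year record (mm): the wettest year has a 5-year return period.
periods = weibull_return_periods([800, 1200, 1000, 900])
```

For a 30-year record like the one in the study, the largest annual total would be assigned a 31-year return period and the smallest roughly a 1-year one.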
A SIMPLE METHOD FOR EVALUATING DATA FROM AN INTERLABORATORY STUDY
Large-scale laboratory- and method-performance studies involving more than about 30 laboratories may be evaluated by calculating the HORRAT ratio for each test sample (HORRAT = [experimentally found among-laboratories relative standard deviation] divided by [relative standard deviat...
Frey, Allison; Mika, Stephanie; Nuzum, Rachel; Schoen, Cathy
2009-06-01
Many proposed health insurance reforms would establish a federal minimum benefit standard--a baseline set of benefits to ensure that people have adequate coverage and financial protection when they purchase insurance. Currently, benefit mandates are set at the state level; these vary greatly across states and generally target specific areas rather than set an overall standard for what qualifies as health insurance. This issue brief considers what a broad federal minimum standard might look like by comparing existing state benefit mandates with the services and providers covered under the Federal Employees Health Benefits Program (FEHBP) Blue Cross and Blue Shield standard benefit package, an example of minimum creditable coverage that reflects current standard practice among employer-sponsored health plans. With few exceptions, benefits in the FEHBP standard option either meet or exceed those that state mandates require-indicating that a broad-based national benefit standard would include most existing state benefit mandates.
Morikawa, Kei; Kurimoto, Noriaki; Inoue, Takeo; Mineshita, Masamichi; Miyazawa, Teruomi
2015-01-01
Endobronchial ultrasonography using a guide sheath (EBUS-GS) is an increasingly common bronchoscopic technique, but currently, no methods have been established to quantitatively evaluate EBUS images of peripheral pulmonary lesions. The purpose of this study was to evaluate whether histogram data collected from EBUS-GS images can contribute to the diagnosis of lung cancer. Histogram-based analyses focusing on the brightness of EBUS images were retrospectively conducted: 60 patients (38 lung cancer; 22 inflammatory diseases), with clear EBUS images were included. For each patient, a 400-pixel region of interest was selected, typically located at a 3- to 5-mm radius from the probe, from recorded EBUS images during bronchoscopy. Histogram height, width, height/width ratio, standard deviation, kurtosis and skewness were investigated as diagnostic indicators. Median histogram height, width, height/width ratio and standard deviation were significantly different between lung cancer and benign lesions (all p < 0.01). With a cutoff value for standard deviation of 10.5, lung cancer could be diagnosed with an accuracy of 81.7%. Other characteristics investigated were inferior when compared to histogram standard deviation. Histogram standard deviation appears to be the most useful characteristic for diagnosing lung cancer using EBUS images. © 2015 S. Karger AG, Basel.
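The histogram features compared in the study can be sketched from the ROI pixel brightnesses. The exact definitions of "height" and "width" used below (modal bin count and number of occupied brightness bins) are our assumption for illustration, not taken from the paper, and the 10.5 cutoff applies to the pixel-brightness standard deviation:

```python
from collections import Counter
from statistics import stdev

def histogram_features(roi_pixels):
    """Brightness-histogram features for a region of interest:
    height = count of the most frequent brightness value,
    width = number of distinct brightness values, plus their ratio
    and the pixel standard deviation (the study's best discriminator)."""
    counts = Counter(roi_pixels)
    height = max(counts.values())
    width = len(counts)
    return {"height": height, "width": width,
            "height_width_ratio": height / width,
            "sd": stdev(roi_pixels)}

# A narrow, peaked brightness distribution (low sd, as in a benign lesion
# under the study's cutoff):
features = histogram_features([5, 5, 5, 6, 7])
```

A heterogeneous (malignant-looking) ROI would spread its brightness over many bins, pushing the standard deviation above a cutoff such as 10.5.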
Role of the standard deviation in the estimation of benchmark doses with continuous data.
Gaylor, David W; Slikker, William
2004-12-01
For continuous data, risk is defined here as the proportion of animals with values above a large percentile (e.g., the 99th) or below a small percentile (e.g., the 1st) of the distribution of values among control animals. It is known that reducing the standard deviation of measurements through improved experimental techniques will result in less stringent (higher) doses for the lower confidence limit on the benchmark dose that is estimated to produce a specified risk of animals with abnormal levels for a biological effect. Thus, a somewhat larger (less stringent) lower confidence limit is obtained that may be used as a point of departure for low-dose risk assessment. It is shown in this article that it is important for the benchmark dose to be based primarily on the standard deviation among animals, s(a), apart from the standard deviation of measurement errors, s(m), within animals. If the benchmark dose is incorrectly based on the overall standard deviation among average values for animals, which includes measurement error variation, the benchmark dose will be overestimated and the risk will be underestimated. The bias increases as s(m) increases relative to s(a). The bias is relatively small if s(m) is less than one-third of s(a), a condition achieved in most experimental designs.
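The bias described above follows from simple variance addition: if measurement error is not separated out, the observed standard deviation among per-animal averages inflates to sqrt(s(a)² + s(m)²/m) for m measurements per animal. A small numeric sketch (illustrative only; the function names are not from the paper):

```python
import math

def observed_sd(s_a, s_m, m=1):
    """SD among per-animal averages when each animal is measured m times:
    the animal-to-animal component s_a inflated by measurement error s_m."""
    return math.sqrt(s_a ** 2 + s_m ** 2 / m)

def relative_inflation(s_a, s_m, m=1):
    """Fractional overstatement of the biological SD if measurement error
    is not separated out."""
    return observed_sd(s_a, s_m, m) / s_a - 1

# With s_m = s_a / 3 (the condition cited in the abstract), the overall SD
# overstates s(a) by only about 5%, so the benchmark-dose bias is small.
print(round(relative_inflation(1.0, 1.0 / 3.0), 3))  # ≈ 0.054
# With s_m = s_a, the overstatement grows to about 41%.
print(round(relative_inflation(1.0, 1.0), 3))  # ≈ 0.414
```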
Statistical models for estimating daily streamflow in Michigan
Holtschlag, D.J.; Salehi, Habib
1992-01-01
Statistical models for estimating daily streamflow were analyzed for 25 pairs of streamflow-gaging stations in Michigan. Stations were paired by randomly choosing a station operated in 1989 at which 10 or more years of continuous flow data had been collected and at which flow is virtually unregulated; a nearby station was chosen where flow characteristics are similar. Streamflow data from the 25 randomly selected stations were used as the response variables; streamflow data at the nearby stations were used to generate a set of explanatory variables. Ordinary least-squares regression (OLSR) equations, autoregressive integrated moving-average (ARIMA) equations, and transfer function-noise (TFN) equations were developed to estimate the log transform of flow for the 25 randomly selected stations. The precision of each type of equation was evaluated on the basis of the standard deviation of the estimation errors. OLSR equations produce one set of estimation errors; ARIMA and TFN models each produce one set of estimation errors per forecast lead. The lead-l forecast is the estimate of flow l days ahead of the most recent streamflow used as a response variable in the estimation. In this analysis, the standard deviation of lead-l ARIMA and TFN forecast errors was generally lower than the standard deviation of OLSR errors for l < 2 days and l < 9 days, respectively. Composite estimates were computed as a weighted average of forecasts based on TFN equations and backcasts (forecasts of the reverse-ordered series) based on ARIMA equations. The standard deviation of composite errors varied throughout the length of the estimation interval and generally was at a maximum near the center of the interval. For comparison with OLSR errors, the mean standard deviation of composite errors was computed for intervals of length 1 to 40 days.
The mean standard deviation of length-l composite errors was generally less than the standard deviation of the OLSR errors for l < 32 days. In addition, the composite estimates ensure a gradual transition between periods of estimated and measured flows. Model performance among stations of differing model error magnitudes was compared by computing ratios of the mean standard deviation of the length-l composite errors to the standard deviation of OLSR errors. The mean error ratio for the set of 25 selected stations was less than 1 for intervals l < 32 days. Considering the frequency characteristics of the lengths of intervals of estimated record in Michigan, the effective mean error ratio for intervals < 30 days was 0.52. Thus, for intervals of estimation of 1 month or less, the error of the composite estimate is substantially lower than the error of the OLSR estimate.
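The composite estimate described above can be sketched as a weighted blend of a forward forecast and a backcast across the record gap. The linear weighting below is an assumption for illustration only; the study's actual weights depend on forecast lead and model error, with errors largest mid-interval:

```python
def composite_estimates(forecasts, backcasts):
    """Blend a forward forecast series (from the record before the gap) with
    a backcast series (a forecast of the reverse-ordered record after the
    gap) into one composite estimate per day of the ungauged interval.

    Hypothetical linear scheme: full trust in the forecast at the start of
    the gap, full trust in the backcast at the end."""
    n = len(forecasts)
    out = []
    for i in range(n):
        w = (n - 1 - i) / (n - 1) if n > 1 else 0.5  # weight on the forecast
        out.append(w * forecasts[i] + (1 - w) * backcasts[i])
    return out

# 5-day gap: the forecast is trusted near the start, the backcast near the end.
print(composite_estimates([10, 10, 10, 10, 10], [14, 14, 14, 14, 14]))
# → [10.0, 11.0, 12.0, 13.0, 14.0]
```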
Code of Federal Regulations, 2010 CFR
2010-07-01
... the minimum wage required by section 6(a) of the Fair Labor Standards Act? 520.200 Section 520.200... lower than the minimum wage required by section 6(a) of the Fair Labor Standards Act? Section 14(a) of..., for the payment of special minimum wage rates to workers employed as messengers, learners (including...
Minimum Standards for Tribal Child Care Centers.
ERIC Educational Resources Information Center
Administration on Children, Youth, and Families (DHHS), Washington, DC. Child Care Bureau.
These minimum standards for tribal child care centers are being issued as guidance. An interim period of at least 1 year will allow tribal agencies to identify implementation issues, ensure that the standards reflect tribal needs, and guarantee that the standards provide adequate protection for children. The standards will be issued as regulations…
Method for Household Refrigerators Efficiency Increasing
NASA Astrophysics Data System (ADS)
Lebedev, V. V.; Sumzina, L. V.; Maksimov, A. V.
2017-11-01
This work addresses the optimization of working-process parameters in air conditioning systems, using simulation modeling. The optimization criteria are examined, the target functions are analyzed, and the key factors of technical and economic optimization are considered. The multi-objective optimum is found by minimizing a dual-target vector, constructed by the Pareto method of linear and weighted compromises from the target functions for total capital costs and total operating costs; the computations are carried out in the MathCAD environment. The results show that, outside the neighborhood of the optimal solutions, the technical and economic parameters of air conditioning systems deviate considerably from the minimum values, and the deviations in both capital investment and operating costs grow significantly as the technical parameters move away from their optimal values. Producing and operating conditioners with parameters that deviate considerably from the optimal values therefore increases material and power costs. The study makes it possible to establish the boundaries of the optimal-value region for the technical and economic parameters in air conditioning system design.
7 CFR 51.311 - Marking requirements.
Code of Federal Regulations, 2012 CFR
2012-01-01
... STANDARDS) United States Standards for Grades of Apples Marking Requirements § 51.311 Marking requirements... minimum diameter of apples packed in a closed container shall be indicated on the container. For apple... varieties, the minimum diameter and minimum weight of apples packed in a closed container shall be indicated...
7 CFR 51.311 - Marking requirements.
Code of Federal Regulations, 2010 CFR
2010-01-01
... STANDARDS) United States Standards for Grades of Apples Marking Requirements § 51.311 Marking requirements... minimum diameter of apples packed in a closed container shall be indicated on the container. For apple... varieties, the minimum diameter and minimum weight of apples packed in a closed container shall be indicated...
7 CFR 51.311 - Marking requirements.
Code of Federal Regulations, 2011 CFR
2011-01-01
... STANDARDS) United States Standards for Grades of Apples Marking Requirements § 51.311 Marking requirements... minimum diameter of apples packed in a closed container shall be indicated on the container. For apple... varieties, the minimum diameter and minimum weight of apples packed in a closed container shall be indicated...
A damage analysis for brittle materials using stochastic micro-structural information
NASA Astrophysics Data System (ADS)
Lin, Shih-Po; Chen, Jiun-Shyan; Liang, Shixue
2016-03-01
In this work, a micro-crack informed stochastic damage analysis is performed to consider the failures of material with stochastic microstructure. The derivation of the damage evolution law is based on the Helmholtz free energy equivalence between cracked microstructure and homogenized continuum. The damage model is constructed under the stochastic representative volume element (SRVE) framework. The characteristics of SRVE used in the construction of the stochastic damage model have been investigated based on the principle of the minimum potential energy. The mesh dependency issue has been addressed by introducing a scaling law into the damage evolution equation. The proposed methods are then validated through the comparison between numerical simulations and experimental observations of a high strength concrete. It is observed that the standard deviation of porosity in the microstructures has stronger effect on the damage states and the peak stresses than its effect on the Young's and shear moduli in the macro-scale responses.
Spread of risk across financial markets: better to invest in the peripheries
NASA Astrophysics Data System (ADS)
Pozzi, F.; Di Matteo, T.; Aste, T.
2013-04-01
Risk is not uniformly spread across financial markets and this fact can be exploited to reduce investment risk contributing to improve global financial stability. We discuss how, by extracting the dependency structure of financial equities, a network approach can be used to build a well-diversified portfolio that effectively reduces investment risk. We find that investments in stocks that occupy peripheral, poorly connected regions in financial filtered networks, namely Minimum Spanning Trees and Planar Maximally Filtered Graphs, are most successful in diversifying, improving the ratio between returns' average and standard deviation, reducing the likelihood of negative returns, while keeping profits in line with the general market average even for small baskets of stocks. On the contrary, investments in subsets of central, highly connected stocks are characterized by greater risk and worse performance. This methodology has the added advantage of visualizing portfolio choices directly over the graphic layout of the network.
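A minimal sketch of the filtering step, assuming the common correlation-to-distance mapping d = sqrt(2(1 - ρ)) for building a Minimum Spanning Tree; the toy correlation matrix and the leaf-based notion of "peripheral" are illustrative assumptions, not the authors' exact procedure:

```python
import math

def mst_edges(corr):
    """Prim's algorithm on the correlation-derived distance
    d_ij = sqrt(2 * (1 - rho_ij)), the standard metric used to build
    Minimum Spanning Trees of financial equities."""
    n = len(corr)
    dist = [[math.sqrt(2.0 * (1.0 - corr[i][j])) for j in range(n)]
            for i in range(n)]
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        # cheapest edge leaving the current tree
        _, i, j = min((dist[i][j], i, j) for i in in_tree
                      for j in range(n) if j not in in_tree)
        edges.append((i, j))
        in_tree.add(j)
    return edges

def peripheral_nodes(edges, n):
    """Leaves of the tree: the poorly connected stocks the study favors."""
    degree = [0] * n
    for i, j in edges:
        degree[i] += 1
        degree[j] += 1
    return [k for k in range(n) if degree[k] == 1]

# Toy correlation matrix: stocks 0-2 form a tight cluster; 3 and 4 are
# loosely correlated satellites that land on the MST's periphery.
corr = [
    [1.0, 0.9, 0.8, 0.2, 0.1],
    [0.9, 1.0, 0.85, 0.25, 0.15],
    [0.8, 0.85, 1.0, 0.3, 0.2],
    [0.2, 0.25, 0.3, 1.0, 0.1],
    [0.1, 0.15, 0.2, 0.1, 1.0],
]
edges = mst_edges(corr)
print(sorted(peripheral_nodes(edges, 5)))  # → [0, 3, 4]
```

The peripheral set here also contains stock 0, the end of the cluster chain: leaves of the tree, not just weakly correlated stocks, are what "peripheral" means in this construction.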
Aurelia aurita bio-inspired tilt sensor
NASA Astrophysics Data System (ADS)
Smith, Colin; Villanueva, Alex; Priya, Shashank
2012-10-01
The quickly expanding field of mobile robots, unmanned underwater vehicles, and micro-air vehicles urgently needs a cheap and effective means for measuring vehicle inclination. Commonly, tilt or inclination has been mathematically derived from accelerometers; however, there is inherent error in any indirect measurement. This paper reports a bio-inspired tilt sensor that mimics the natural balance organ of jellyfish, called the ‘statocyst’. Biological statocysts from the species Aurelia aurita were characterized by scanning electron microscopy to investigate the morphology and size of the natural sensor. An artificial tilt sensor was then developed using printed electronics that incorporates a novel voltage divider concept in conjunction with small surface mount devices. This sensor was found to have a minimum sensitivity of 4.21° with a standard deviation of 1.77°. These results open the possibility of developing an elegant tilt sensor architecture for both air and water based platforms.
Kueseng, Pamornrat; Pawliszyn, Janusz
2013-11-22
A new thin-film, carboxylated multiwalled carbon nanotubes/polydimethylsiloxane (MWCNTs-COOH/PDMS) coating was developed for a 96-blade solid-phase microextraction (SPME) system, followed by high performance liquid chromatography with ultraviolet detection (HPLC-UV). The method provided good extraction efficiency (64-90%) for three spiked levels, with relative standard deviations (RSD) ≤ 6%, and detection limits between 1 and 2 μg/L for three phenolic compounds. The MWCNTs-COOH/PDMS 96-blade SPME system presents advantages over traditional methods due to its simplicity of use, easy coating preparation, low cost and high sample throughput (2.1 min per sample). The developed coating is reusable for a minimum of 110 extractions with good extraction efficiency, and provided higher extraction efficiency (3-8 times greater) than pure PDMS coatings. Copyright © 2013 Elsevier B.V. All rights reserved.
Implicit Large-Eddy Simulations of Zero-Pressure Gradient, Turbulent Boundary Layer
NASA Technical Reports Server (NTRS)
Sekhar, Susheel; Mansour, Nagi N.
2015-01-01
A set of direct simulations of zero-pressure gradient, turbulent boundary layer flows is conducted using various span widths (62-630 wall units) to document their influence on the generated turbulence. The FDL3DI code, which solves the compressible Navier-Stokes equations using a high-order compact-difference scheme and filter, is used with the standard recycling/rescaling method of turbulence generation. Results are analyzed at two different Re values (500 and 1,400) and compared with spectral DNS data. They show that a minimum span width is required for the mere initiation of numerical turbulence. Narrower domains (< 100 wall units) result in relaminarization. Wider spans (> 600 wall units) are required for the turbulent statistics to match reference DNS. The upper-wall boundary condition for this setup spawns marginal deviations in the mean velocity and Reynolds stress profiles, particularly in the buffer region.
Yönt, Gülendam Hakverdioğlu; Korhan, Esra Akin; Dizer, Berna; Gümüş, Fatma; Koyuncu, Rukiye
2014-01-01
Nurses often face the dilemma of whether or not to resort to physical restraints and have a hard time making that decision. This is a descriptive study in which a total of 55 nurses participated. For data collection, a question form developed by the researchers to determine nurses' perceptions of ethical dilemmas in the application of physical restraint was used. A descriptive analysis was made by calculating the mean, standard deviation, and maximum and minimum values. 36.4% of the nurses reported having difficulty in deciding whether to use physical restraint. Nurses reported that they experience ethical dilemmas mainly in relation to the ethical principles of nonmaleficence, beneficence, and convenience. We conclude that the majority of nurses working in critical care units apply physical restraint to patients, although they face ethical dilemmas concerning harm and benefit principles during the application.
Grimmett, Paul E; Munch, Jean W
2009-01-01
1,4-Dioxane has been identified as a probable human carcinogen and an emerging contaminant in drinking water. The United States Environmental Protection Agency's (U.S. EPA) National Exposure Research Laboratory (NERL) has developed a method for the analysis of 1,4-dioxane in drinking water at ng/L concentrations. The method consists of an activated carbon solid-phase extraction of 500-mL or 100-mL water samples using dichloromethane as the elution solvent. The extracts are analyzed by gas chromatography-mass spectrometry (GC-MS) in selected ion monitoring (SIM) mode. In the NERL laboratory, recovery of 1,4-dioxane ranged from 94-110% in fortified laboratory reagent water and recoveries of 96-102% were demonstrated for fortified drinking water samples. The relative standard deviations for replicate analyses were less than 6% at concentrations exceeding the minimum reporting level.
NASA Astrophysics Data System (ADS)
Borys, Damian; Serafin, Wojciech; Gorczewski, Kamil; Kijonka, Marek; Frackiewicz, Mariusz; Palus, Henryk
2018-04-01
The aim of this work was to test the most popular and essential algorithms for intensity nonuniformity correction in breast MRI imaging. In this type of MRI imaging, especially in the proximity of the coil, the signal is strong but can also exhibit inhomogeneities. The evaluated signal-correction methods were N3, N3FCM, N4, Nonparametric, and SPM. For testing purposes, a uniform phantom object was imaged with a breast MRI coil. To quantify the results, two measures were used: integral uniformity and standard deviation. For each algorithm, minimum, average and maximum values of both evaluation factors were calculated using a binary mask created for the phantom. Two methods, N3FCM and N4, obtained the lowest values in these measures; however, the phantom was visually the most uniform after correction with N4.
NASA Astrophysics Data System (ADS)
Felix Pereira, B.; Girish, T. E.
2004-05-01
The solar cycle variations in the characteristics of the GSE latitudinal angles of the Interplanetary Magnetic Field (θGSE) observed near 1 AU have been studied for the period 1967-2000. It is observed that the statistical parameters mean, standard deviation, skewness and kurtosis vary with the sunspot cycle. The θGSE distribution resembles a Gaussian curve during sunspot maximum and is clearly non-Gaussian during sunspot minimum. The width of the θGSE distribution is found to increase with sunspot activity, which is likely to depend on the occurrence of solar transients. Solar cycle variations in skewness are ordered by the solar polar magnetic field changes. This can be explained in terms of the dependence of the dominant polarity of the north-south component of the IMF in the GSE system near 1 AU on the IMF sector polarity and the structure of the heliospheric current sheet.
Optimal random search for a single hidden target.
Snider, Joseph
2011-01-01
A single target is hidden at a location chosen from a predetermined probability distribution. Then, a searcher must find a second probability distribution from which random search points are sampled such that the target is found in the minimum number of trials. Here it will be shown that if the searcher must get very close to the target to find it, then the best search distribution is proportional to the square root of the target distribution regardless of dimension. For a Gaussian target distribution, the optimum search distribution is approximately a Gaussian with a standard deviation that varies inversely with how close the searcher must be to the target to find it. For a network where the searcher randomly samples nodes and looks for the fixed target along edges, the optimum is either to sample a node with probability proportional to the square root of the out-degree plus 1 or not to do so at all.
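The square-root rule above can be checked numerically: with search distribution q, the expected number of independent draws needed to hit a target at cell i is 1/q_i, so the expected cost averaged over target locations p is Σ p_i/q_i, which (by Cauchy-Schwarz) is minimized by q ∝ √p. A small sketch on a hypothetical discrete target distribution:

```python
import math

def expected_trials(p, q):
    """Expected number of independent draws from search distribution q until
    the draw hits the target, averaged over the target location ~ p.
    Hitting cell i, searched with probability q_i, takes 1/q_i draws on average."""
    return sum(pi / qi for pi, qi in zip(p, q))

def normalize(w):
    s = sum(w)
    return [x / s for x in w]

p = normalize([0.5, 0.25, 0.125, 0.125])       # target distribution
sqrt_rule = normalize([math.sqrt(x) for x in p])  # q ∝ sqrt(p)
same_as_target = p                                # naive: search where target is likely
uniform = normalize([1.0] * len(p))

costs = {name: expected_trials(p, q) for name, q in
         [("sqrt", sqrt_rule), ("target", same_as_target), ("uniform", uniform)]}
print(min(costs, key=costs.get))  # → sqrt
```

Note that searching directly from the target distribution costs exactly n expected trials (Σ p_i/p_i), the same as uniform search; the √p rule strictly beats both whenever p is non-uniform.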
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Deviations. 961.4 Section 961.4 Energy DEPARTMENT OF ENERGY STANDARD CONTRACT FOR DISPOSAL OF SPENT NUCLEAR FUEL AND/OR HIGH-LEVEL RADIOACTIVE WASTE General § 961.4 Deviations. Requests for authority to deviate from this part shall be submitted in writing to...
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 4 2012-01-01 2012-01-01 false Deviations. 961.4 Section 961.4 Energy DEPARTMENT OF ENERGY STANDARD CONTRACT FOR DISPOSAL OF SPENT NUCLEAR FUEL AND/OR HIGH-LEVEL RADIOACTIVE WASTE General § 961.4 Deviations. Requests for authority to deviate from this part shall be submitted in writing to...
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 4 2014-01-01 2014-01-01 false Deviations. 961.4 Section 961.4 Energy DEPARTMENT OF ENERGY STANDARD CONTRACT FOR DISPOSAL OF SPENT NUCLEAR FUEL AND/OR HIGH-LEVEL RADIOACTIVE WASTE General § 961.4 Deviations. Requests for authority to deviate from this part shall be submitted in writing to...
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 4 2013-01-01 2013-01-01 false Deviations. 961.4 Section 961.4 Energy DEPARTMENT OF ENERGY STANDARD CONTRACT FOR DISPOSAL OF SPENT NUCLEAR FUEL AND/OR HIGH-LEVEL RADIOACTIVE WASTE General § 961.4 Deviations. Requests for authority to deviate from this part shall be submitted in writing to...
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 4 2011-01-01 2011-01-01 false Deviations. 961.4 Section 961.4 Energy DEPARTMENT OF ENERGY STANDARD CONTRACT FOR DISPOSAL OF SPENT NUCLEAR FUEL AND/OR HIGH-LEVEL RADIOACTIVE WASTE General § 961.4 Deviations. Requests for authority to deviate from this part shall be submitted in writing to...
Santric-Milicevic, M; Vasic, V; Terzic-Supic, Z
2016-08-15
In times of austerity, econometric health knowledge assists policy-makers in understanding and balancing health expenditure with health care plans within fiscal constraints. The objective of this study is to explore whether the health workforce supply of the public health care sector, population number, and utilization of inpatient care significantly contribute to total health expenditure. The dependent variable is the total health expenditure (THE) in Serbia from 2003 to 2011. The independent variables are the number of health workers employed in the public health care sector, population number, and inpatient care discharges per 100 population. The statistical analyses include the quadratic interpolation method, natural logarithm and differentiation, and multiple linear regression analyses. The level of significance is set at P < 0.05. The regression model captures 90% of all variation in the observed dependent variable (adjusted R square), and the model is significant (P < 0.001). The growth rate of total health expenditure increased by 1.21 standard deviations for a one-standard-deviation increase in the health workforce growth rate, decreased by 1.12 standard deviations for a one-standard-deviation increase in the (negative) population growth rate, and increased by 0.38 standard deviations for a one-standard-deviation increase in the growth rate of inpatient care discharges per 100 population (P < 0.001). The results demonstrate that the government has been making an effort to strongly control health budget growth. Exploring causal relationships between health expenditure and the health workforce is important for countries that are trying to consolidate their public health finances while achieving universal health coverage.
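The coefficients quoted above are standardized regression coefficients: the change in the response, in standard deviations, per one-standard-deviation change in a predictor. They relate to unstandardized slopes by beta_j = b_j · sd(x_j) / sd(y). A minimal sketch with hypothetical data (not the Serbian series):

```python
import math

def standardized_betas(X_cols, y, b):
    """Convert unstandardized multiple-regression slopes b into standardized
    coefficients: beta_j = b_j * sd(x_j) / sd(y), i.e. standard deviations of
    y per standard deviation of x_j. X_cols is a list of predictor columns."""
    def sd(v):
        m = sum(v) / len(v)
        return math.sqrt(sum((x - m) ** 2 for x in v) / len(v))
    sy = sd(y)
    return [bj * sd(col) / sy for bj, col in zip(b, X_cols)]

# Hypothetical single-predictor case: y = 2 * x exactly, so the fitted slope
# is 2 and the standardized coefficient must be 1 (a perfect 1-SD-per-SD fit).
x1 = [1, 2, 3, 4, 5]
y = [2 * v for v in x1]
print([round(beta, 6) for beta in standardized_betas([x1], y, [2.0])])  # → [1.0]
```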
Missouri Minimum Standards for School Buses
ERIC Educational Resources Information Center
Nicastro, Chris L.
2008-01-01
The revised minimum standards for school bus chassis and school bus bodies have been prepared in conformity with the Revised Statutes of Missouri (RSMo) for school bus transportation. The standards recommended by the 2005 National Conference on School Transportation and the Federal Motor Vehicle Safety Standards (FMVSS) promulgated by the U. S.…
Davy, John L
2010-02-01
This paper presents a revised theory for predicting the sound insulation of double leaf cavity walls. It removes an approximation that is usually made, when deriving the sound insulation of a double leaf cavity wall above the critical frequencies of the wall leaves, for the airborne transmission across the wall cavity. This revised theory is also used as a correction below the critical frequencies of the wall leaves, in place of the correction due to Sewell [(1970). J. Sound Vib. 12, 21-32]. It is found necessary to include the "stud"-borne transmission of the window frames when modeling wide air gap double glazed windows. A minimum value of stud transmission is introduced for use with resilient connections such as steel studs. Empirical equations are derived for predicting the effective sound absorption coefficient of wall cavities without sound absorbing material. The theory is compared with experimental results for double glazed windows and gypsum plasterboard cavity walls with and without sound absorbing material in their cavities. The overall mean, standard deviation, maximum, and minimum of the differences between experiment and theory are -0.6 dB, 3.1 dB, 10.9 dB at 1250 Hz, and -14.9 dB at 160 Hz, respectively.
Abuasbi, Falastine; Lahham, Adnan; Abdel-Raziq, Issam Rashid
2018-05-01
In this study, levels of extremely low-frequency electric and magnetic fields originating from overhead power lines were investigated in the outdoor environment of Ramallah city, Palestine. Spot measurements were applied to record field intensities over a 6-min period. The NF-5035 spectrum analyzer was used to perform measurements at 1 m above ground level, directly underneath 40 randomly selected power lines distributed fairly evenly across the city. Levels of electric fields varied with the line's category (power line, transformer or distributor): a minimum mean electric field of 3.9 V/m was found under a distributor line, and a maximum of 769.4 V/m under a high-voltage power line (66 kV). The electric fields followed a log-normal distribution with a geometric mean of 35.9 V/m and a geometric standard deviation of 2.8. Magnetic fields measured at power lines, in contrast, were not log-normally distributed; the minimum and maximum mean magnetic fields under power lines were 0.89 and 3.5 μT, respectively. None of the measured fields exceeded the ICNIRP guidelines recommended for general public exposure to extremely low-frequency fields.
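The geometric mean and geometric standard deviation reported above are the natural summary for a log-normal sample: the exponentials of the mean and standard deviation of the logarithms. A minimal sketch with hypothetical measurements (not the study's data):

```python
import math

def geometric_stats(values):
    """Geometric mean and geometric standard deviation:
    GM = exp(mean(ln x)), GSD = exp(sd(ln x)).
    GSD is dimensionless and always >= 1; for a log-normal sample, about 68%
    of values fall between GM/GSD and GM*GSD."""
    logs = [math.log(v) for v in values]
    n = len(logs)
    mu = sum(logs) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in logs) / n)
    return math.exp(mu), math.exp(sigma)

# Hypothetical spot measurements (V/m) spanning the reported 3.9-769.4 range.
fields = [3.9, 12.0, 25.0, 35.0, 60.0, 110.0, 769.4]
gm, gsd = geometric_stats(fields)
print(round(gm, 1), round(gsd, 2))
```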
Ilper, H; Kunz, T; Walcher, F; Zacharowski, K; Byhahn, C
2013-04-01
German emergency patients are treated by (emergency) physicians (EPs), but the entry routes into emergency medicine differ; experience with manual skills (e.g., tracheal intubation) and knowledge of guidelines are minimum requirements. It is currently unclear who works as an EP and what medical experience he or she has. The anonymous survey was online from 10/15/2010 to 11/16/2011; distribution was supported by leading physicians informing society members, and online networks informed independent physicians. 2091 EPs took part; 1991 datasets were evaluated and 100 excluded. All results are shown as mean ± standard deviation and range (minimum to maximum). Mean age of the EPs was 42 ± 8 years (26-71 years); 80% (n = 1604) were male and 20% (n = 387) were female. Participants finished medical school in 1997 ± 8 years (1964-2010). Base specialty during rotation was anesthesiology in 59%, internal medicine in 32%, surgery in 26%, trauma surgery/orthopedics in 21%, and others in 16%. Consultants made up 75%. The main source of income was "hospital physician" for 77%, "resident doctor" for 15%, and "professional emergency physician" for 7%. The participants appear experienced in medicine and emergency medicine, make widespread use of continuing medical education (CME) opportunities, and most work in anaesthesiology. © Georg Thieme Verlag KG Stuttgart · New York.
47 CFR 90.403 - General operating requirements.
Code of Federal Regulations, 2012 CFR
2012-10-01
... service providers pursuant to part 20 of this chapter, each licensee must restrict all transmissions to the minimum practical transmission time and must employ an efficient operating procedure designed to... such deviation is corrected. For transmissions concerning the imminent safety-of-life or property, the...
47 CFR 90.403 - General operating requirements.
Code of Federal Regulations, 2014 CFR
2014-10-01
... service providers pursuant to part 20 of this chapter, each licensee must restrict all transmissions to the minimum practical transmission time and must employ an efficient operating procedure designed to... such deviation is corrected. For transmissions concerning the imminent safety-of-life or property, the...
47 CFR 90.403 - General operating requirements.
Code of Federal Regulations, 2013 CFR
2013-10-01
... service providers pursuant to part 20 of this chapter, each licensee must restrict all transmissions to the minimum practical transmission time and must employ an efficient operating procedure designed to... such deviation is corrected. For transmissions concerning the imminent safety-of-life or property, the...
47 CFR 90.403 - General operating requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... service providers pursuant to part 20 of this chapter, each licensee must restrict all transmissions to the minimum practical transmission time and must employ an efficient operating procedure designed to... such deviation is corrected. For transmissions concerning the imminent safety-of-life or property, the...
NASA Astrophysics Data System (ADS)
Leviton, Douglas B.; Madison, Timothy J.; Petrone, Peter
1998-10-01
Refractive index measurements using the minimum deviation method have been carried out for prisms of a variety of far ultraviolet optical materials used in the manufacture of Solar Blind Channel (SBC) filters for the HST Advanced Camera for Surveys (ACS). Some of the materials measured are gaining popularity in a variety of high technology applications including high power excimer lasers and advanced microlithography optics operating in a wavelength region where high quality knowledge of optical material properties is sparse yet critical. Our measurements are of unusually high accuracy and precision for this wavelength region owing to advanced instrumentation in the large vacuum chamber of the Diffraction Grating Evaluation Facility (DGEF) at Goddard Space Flight Center (GSFC) used to implement a minimum deviation method refractometer. Index values for CaF2, BaF2, LiF, and far ultraviolet grades of synthetic sapphire and synthetic fused silica are reported and compared with values from the literature.
The truly remarkable universality of half a standard deviation: confirmation through another look.
Norman, Geoffrey R; Sloan, Jeff A; Wyrwich, Kathleen W
2004-10-01
In this issue of Expert Review of Pharmacoeconomics and Outcomes Research, Farivar, Liu, and Hays present their findings in 'Another look at the half standard deviation estimate of the minimally important difference in health-related quality of life scores' (hereafter referred to as 'Another look'). These researchers have re-examined the May 2003 Medical Care article 'Interpretation of changes in health-related quality of life: the remarkable universality of half a standard deviation' (hereafter referred to as 'Remarkable') in the hope of supporting their hypothesis that the minimally important difference in health-related quality of life measures is closer to 0.3 standard deviations than to 0.5. Nonetheless, despite their extensive wranglings (the exclusion of many articles that we included in our review, the inclusion of articles that we did not, and the recalculation of effect sizes using the absolute value of the mean differences), in our opinion the results of the 'Another look' article confirm the findings of the 'Remarkable' paper.
Static Scene Statistical Non-Uniformity Correction
2015-03-01
NUC: Non-Uniformity Correction; RMSE: Root Mean Squared Error; RSD: Relative Standard Deviation; S3NUC: Static Scene Statistical Non-Uniformity Correction. The Relative Standard Deviation (RSD) normalizes the standard deviation, σ, to the mean estimated value, µ, using the equation RSD = (σ/µ) × 100. The RSD plot of the gain estimates is shown in Figure 4.1(b); it shows that after a sample size of approximately 10, the different photocount values and the inclusion
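The RSD formula quoted in this excerpt can be sketched directly (illustrative values, not data from the report):

```python
import statistics

def relative_standard_deviation(values):
    """RSD = (sigma / mu) * 100: normalizes the spread (sample standard
    deviation) to the mean estimated value, as in the excerpt above."""
    mu = statistics.mean(values)
    sigma = statistics.stdev(values)  # sample standard deviation
    return sigma / mu * 100.0

# Five hypothetical gain estimates clustered around 100.
print(round(relative_standard_deviation([98, 101, 100, 99, 102]), 2))  # 1.58
```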
Effect of multizone refractive multifocal contact lenses on standard automated perimetry.
Madrid-Costa, David; Ruiz-Alcocer, Javier; García-Lázaro, Santiago; Albarrán-Diego, César; Ferrer-Blasco, Teresa
2012-09-01
The aim of this study was to evaluate whether the creation of 2 foci (distance and near) provided by multizone refractive multifocal contact lenses (CLs) for presbyopia correction affects measurements on Humphrey 24-2 Swedish interactive threshold algorithm (SITA) standard automated perimetry (SAP). In this crossover study, 30 subjects were fitted in random order with either a multifocal CL or a monofocal CL. After 1 month, a Humphrey 24-2 SITA standard strategy was performed. The visual field global indices (the mean deviation [MD] and pattern standard deviation [PSD]), reliability indices, test duration, and number of depressed points deviating at P<5%, P<2%, P<1%, and P<0.5% on pattern deviation probability plots were determined and compared between multifocal and monofocal CLs. Thirty eyes of 30 subjects were included in this study. There were no statistically significant differences in reliability indices or test duration. There was a statistically significant reduction in the MD with the multifocal CL compared with the monofocal CL (P=0.001). Differences were not found in PSD or in the number of depressed points deviating at P<5%, P<2%, P<1%, and P<0.5% in the pattern deviation probability maps studied. The results of this study suggest that the multizone refractive lens produces a generalized depression in threshold sensitivity as measured by Humphrey 24-2 SITA SAP.
The volume-outcome relationship and minimum volume standards--empirical evidence for Germany.
Hentschker, Corinna; Mennicken, Roman
2015-06-01
For decades, there has been an ongoing discussion about the quality of hospital care, leading, among other things, to the introduction of minimum volume standards in various countries. In this paper, we analyze the volume-outcome relationship for patients with intact abdominal aortic aneurysm and hip fracture. We define hypothetical minimum volume standards for both conditions and assess the consequences for access to hospital services in Germany. The results show clearly that patients treated in hospitals with a higher case volume have, on average, a significantly lower probability of death for both conditions. Furthermore, we show that the hypothetical minimum volume standards do not compromise overall access, measured by changes in travel times. Copyright © 2014 John Wiley & Sons, Ltd.
Two-point method uncertainty during control and measurement of cylindrical element diameters
NASA Astrophysics Data System (ADS)
Glukhov, V. I.; Shalay, V. V.; Radev, H.
2018-04-01
The article addresses the pressing problem of the reliability of measurements of technical products' geometric specifications. Its purpose is to improve the quality of control of parts' linear sizes by the two-point measurement method, and its task is to investigate methodical extended uncertainties in measuring the linear sizes of cylindrical elements. The investigation method is geometric modeling of the shape and location deviations of element surfaces in a rectangular coordinate system. The studies were carried out for elements of various service uses, taking into account their informativeness, corresponding to the kinematic pair classes of theoretical mechanics and the number of constrained degrees of freedom in the datum element function. Cylindrical elements with informativeness of 4, 2, 1, and 0 (zero) were investigated. The uncertainties in two-point measurements were estimated by comparing the results of linear dimension measurements with the maximum and minimum functional diameters of the element material. Methodical uncertainty arises when cylindrical elements with maximum informativeness have shape deviations of the cut and curvature types, and it arises in measuring the element's average size for all types of shape deviations. The two-point measurement method cannot take into account the location deviations of a dimensional element, so its use for elements with less than maximum informativeness creates unacceptable methodical uncertainties in measurements of the maximum, minimum, and mean linear dimensions. Similar methodical uncertainties also exist in the arbitration control of the linear dimensions of cylindrical elements by limiting two-point gauges.
Emmanuel, Samson; Shantaram, Kulkarni; Sushil, Kumar C; Manoj, Likhitkar
2013-04-01
Success of non-surgical root canal treatment is predicated on meticulous cleaning and shaping of the root canal system, three-dimensional obturation, and a well-fitting "leakage-free" coronal restoration. The available obturation techniques each have their own place in the historical development of filling methods. Over the years, pitfalls with one technique have often led to the development of newer methods of obturation, along with the recognition that no one method of obturation may satisfy all clinical cases. A total of 120 extracted human permanent anterior maxillary and mandibular single-rooted teeth were selected for the present study and divided into 3 groups based on the obturation technique. Following preparation, patency at the apical foramen was confirmed by passing a #15 file. After obturation of all three groups, the teeth were immersed in 1% aqueous methylene blue dye for a period of two weeks, and the samples were then subjected to spectrophotometric analysis. The present study was conducted to quantitatively analyze, by spectrophotometric analysis in vitro, the relative amount of dye penetration using lateral condensation (Group I), Obtura II (Group II), and the Thermafil obturating technique (Group III), with ZOE sealer used in all groups. Teeth obturated with lateral condensation (Group I) showed a mean value of 0.0243 and a standard deviation of 0.0056. Group II, thermoplasticized injection-molded gutta-percha (Obtura II), showed a mean of 0.0239 and a standard deviation of 0.0045, and Group III, the Thermafil obturation technique, showed a mean of 0.0189 and a standard deviation of 0.0035. The following conclusions were drawn from the present study.
Group III, i.e., the Thermafil obturating technique, showed the minimum mean apical dye penetration compared with Group II (Obtura II) and Group I (lateral condensation). Lateral condensation showed the maximum mean apical dye penetration of the three groups. There was no significant difference between the apical dye penetration of lateral condensation and Obtura II. Keywords: obturation, lateral condensation, Obtura II, Thermafil, spectrophotometer, dye penetration. How to cite this article: Samson E, Kulkarni S, Sushil K C, Likhitkar M. An In-Vitro Evaluation and Comparison of Apical Sealing Ability of Three Different Obturation Technique - Lateral Condensation, Obtura II, and Thermafil. J Int Oral Health 2013; 5(2):35-43.
Barker, C.E.; Pawlewicz, M.J.
1993-01-01
In coal samples, published recommendations based on statistical methods suggest 100 measurements are needed to estimate the mean random vitrinite reflectance (Rv-r) to within ±2%. Our survey of published thermal maturation studies indicates that those using dispersed organic matter (DOM) mostly have an objective of acquiring 50 reflectance measurements. This smaller objective size in DOM versus that for coal samples poses a statistical contradiction, because the standard deviations of DOM reflectance distributions are typically larger, indicating that a greater sample size is needed to accurately estimate Rv-r in DOM. However, in studies of thermal maturation using DOM, even 50 measurements can be an unrealistic requirement given the small amount of vitrinite often found in such samples. Furthermore, there is generally a reduced need for assuring precision like that needed for coal applications. Therefore, a key question in thermal maturation studies using DOM is how many measurements of Rv-r are needed to adequately estimate the mean. Our empirical approach to this problem is to compute the reflectance distribution statistics: mean, standard deviation, skewness, and kurtosis in increments of 10 measurements. This study compares these intermediate computations of Rv-r statistics with a final one computed using all measurements for that sample. Vitrinite reflectance was measured on mudstone and sandstone samples taken from borehole M-25 in the Cerro Prieto, Mexico geothermal system, which was selected because the rocks have a wide range of thermal maturation and a comparable humic DOM with depth. The results of this study suggest that after only 20-30 measurements the mean Rv-r is generally known to within 5% and always to within 12% of the mean Rv-r calculated using all of the measured particles.
Thus, even in the worst case, the precision after measuring only 20-30 particles is in good agreement with the general precision of one decimal place recommended for mean Rv-r measurements on DOM. The coefficient of variation (V = standard deviation/mean) is proposed as a statistic to indicate the reliability of the mean Rv-r estimates made at n ≈ 20. This preliminary study suggests that a V > 0.2 indicates an unreliable mean in such small samples. © 1993.
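The reliability statistic proposed in this abstract, V = standard deviation/mean, is simple to compute. A minimal sketch with hypothetical reflectance readings (not data from borehole M-25):

```python
import statistics

def coefficient_of_variation(measurements):
    """V = sample standard deviation / mean; the abstract proposes V as a
    reliability indicator for small-sample mean Rv-r estimates."""
    return statistics.stdev(measurements) / statistics.mean(measurements)

# Hypothetical vitrinite reflectance readings (% Rv-r), for illustration.
readings = [0.52, 0.55, 0.49, 0.51, 0.53]
v = coefficient_of_variation(readings)
print(v > 0.2)  # a large V would flag an unreliable small-sample mean
```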
Stenzel, O; Wilbrandt, S; Wolf, J; Schürmann, M; Kaiser, N; Ristau, D; Ehlers, H; Carstens, F; Schippel, S; Mechold, L; Rauhut, R; Kennedy, M; Bischoff, M; Nowitzki, T; Zöller, A; Hagedorn, H; Reus, H; Hegemann, T; Starke, K; Harhausen, J; Foest, R; Schumacher, J
2017-02-01
Random effects in the repeatability of refractive index and absorption edge position of tantalum pentoxide layers prepared by plasma-ion-assisted electron-beam evaporation, ion beam sputtering, and magnetron sputtering are investigated and quantified. Standard deviations in refractive index between 4×10⁻⁴ and 4×10⁻³ have been obtained. Here, the lowest standard deviations in refractive index, close to our detection threshold, could be achieved by both ion beam sputtering and plasma-ion-assisted deposition. In relation to the corresponding mean values, the standard deviations in band-edge position and refractive index are of similar order.
Closed-form confidence intervals for functions of the normal mean and standard deviation.
Donner, Allan; Zou, G Y
2012-08-01
Confidence interval methods for a normal mean and standard deviation are well known and simple to apply. However, the same cannot be said for important functions of these parameters. These functions include the normal distribution percentiles, the Bland-Altman limits of agreement, the coefficient of variation and Cohen's effect size. We present a simple approach to this problem by using variance estimates recovered from confidence limits computed for the mean and standard deviation separately. All resulting confidence intervals have closed forms. Simulation results demonstrate that this approach performs very well for limits of agreement, coefficients of variation and their differences.
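A sketch of the recovered-variance idea for one such function, an upper Bland-Altman limit of agreement θ = μ + 1.96σ. The large-sample SD interval s ± z·s/√(2(n−1)) used here is an assumption for illustration and not necessarily the exact interval the authors recover limits from:

```python
import math

def ci_limit_of_agreement(xbar, s, n, c=1.96, z=1.96):
    """Closed-form CI for theta = mu + c*sigma (an upper Bland-Altman limit
    of agreement), combining separate interval half-widths for the mean and
    the standard deviation, in the spirit of the abstract's approach."""
    half_mu = z * s / math.sqrt(n)           # CI half-width for the mean
    half_s = z * s / math.sqrt(2 * (n - 1))  # approx. CI half-width for the SD
    theta = xbar + c * s
    half_theta = math.sqrt(half_mu ** 2 + (c * half_s) ** 2)
    return theta - half_theta, theta, theta + half_theta

lo, est, hi = ci_limit_of_agreement(xbar=0.0, s=1.0, n=30)
print(lo < est < hi)  # True: the closed-form interval brackets the estimate
```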
McClure, Foster D; Lee, Jung K
2005-01-01
Sample size formulas are developed to estimate the repeatability and reproducibility standard deviations (s_r and s_R) such that the actual errors in s_r and s_R relative to their respective true values, σ_r and σ_R, are at predefined levels. The statistical consequences associated with the sample size required by AOAC INTERNATIONAL to validate an analytical method are discussed. In addition, formulas to estimate the uncertainties of s_r and s_R were derived and are provided as supporting documentation. Formula for the Number of Replicates Required for a Specified Margin of Relative Error in the Estimate of the Repeatability Standard Deviation.
Predicting the Earth encounters of (99942) Apophis
NASA Technical Reports Server (NTRS)
Giorgini, Jon D.; Benner, Lance A. M.; Ostro, Steven J.; Nolan, Michael C.; Busch, Michael W.
2007-01-01
Arecibo delay-Doppler measurements of (99942) Apophis in 2005 and 2006 resulted in a five standard-deviation trajectory correction to the optically predicted close approach distance to Earth in 2029. The radar measurements reduced the volume of the statistical uncertainty region entering the encounter to 7.3% of the pre-radar solution, but increased the trajectory uncertainty growth rate across the encounter by 800% due to the closer predicted approach to the Earth. A small estimated Earth impact probability remained for 2036. With standard-deviation plane-of-sky position uncertainties for 2007-2010 already less than 0.2 arcsec, the best near-term ground-based optical astrometry can only weakly affect the trajectory estimate. While the potential for impact in 2036 will likely be excluded in 2013 (if not 2011) using ground-based optical measurements, approximations within the Standard Dynamical Model (SDM) used to estimate and predict the trajectory from the current era are sufficient to obscure the difference between a predicted impact and a miss in 2036 by altering the dynamics leading into the 2029 encounter. Normal impact probability assessments based on the SDM become problematic without knowledge of the object's physical properties; impact could be excluded while the actual dynamics still permit it. Calibrated position uncertainty intervals are developed to compensate for this by characterizing the minimum and maximum effect of physical parameters on the trajectory. Uncertainty in accelerations related to solar radiation can cause between 82 and 4720 Earth-radii of trajectory change relative to the SDM by 2036. If an actionable hazard exists, alteration by 2-10% of Apophis' total absorption of solar radiation in 2018 could be sufficient to produce a six standard-deviation trajectory change by 2036 given physical characterization; even a 0.5% change could produce a trajectory shift of one Earth-radius by 2036 for all possible spin-poles and likely masses. 
Planetary ephemeris uncertainties are the next greatest source of systematic error, causing up to 23 Earth-radii of uncertainty. The SDM Earth point-mass assumption introduces an additional 2.9 Earth-radii of prediction error by 2036. Unmodeled asteroid perturbations produce as much as 2.3 Earth-radii of error. We find no future small-body encounters likely to yield an Apophis mass determination prior to 2029. However, asteroid (144898) 2004 VD17, itself having a statistical chance of Earth impact in 2102, will probably encounter Apophis at 6.7 lunar distances in 2034, their uncertainty regions coming as close as 1.6 lunar distances near the center of both SDM probability distributions.
Pleil, Joachim D
2016-01-01
This commentary is the second of a series outlining one specific concept in interpreting biomarkers data. In the first, an observational method was presented for assessing the distribution of measurements before making parametric calculations. Here, the discussion revolves around the next step, the choice of using standard error of the mean or the calculated standard deviation to compare or predict measurement results.
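The distinction the commentary draws, standard deviation versus standard error of the mean, can be shown in a few lines (illustrative numbers):

```python
import math
import statistics

values = [9.8, 10.1, 10.0, 9.9, 10.2]  # illustrative measurements
sd = statistics.stdev(values)           # spread of individual measurements
sem = sd / math.sqrt(len(values))       # precision of the estimated mean

# SD describes the variability of single measurements; SEM shrinks with n
# and describes how well the mean itself is pinned down. Which one to
# report depends on whether you are comparing measurements or means.
print(sem < sd)  # True
```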
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 1 2010-07-01 2010-07-01 true Payment of minimum wage specified in section 6(a)(1) of the... and Procedures § 4.2 Payment of minimum wage specified in section 6(a)(1) of the Fair Labor Standards... employees shall pay any employees engaged in such work less than the minimum wage specified in section 6(a...
Introducing the Mean Absolute Deviation "Effect" Size
ERIC Educational Resources Information Center
Gorard, Stephen
2015-01-01
This paper revisits the use of effect sizes in the analysis of experimental and similar results, and reminds readers of the relative advantages of the mean absolute deviation as a measure of variation, as opposed to the more complex standard deviation. The mean absolute deviation is easier to use and understand, and more tolerant of extreme…
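A minimal sketch of a mean-absolute-deviation-based "effect" size in the spirit of this paper; scaling the mean difference by the control group's mean absolute deviation is an assumption made here for illustration:

```python
def mean_absolute_deviation(xs):
    """Average absolute distance from the mean -- simpler to compute and
    explain than the standard deviation."""
    m = sum(xs) / len(xs)
    return sum(abs(x - m) for x in xs) / len(xs)

def mad_effect_size(treatment, control):
    """Mean difference scaled by the control group's mean absolute
    deviation (a hypothetical scaling choice for this sketch)."""
    diff = sum(treatment) / len(treatment) - sum(control) / len(control)
    return diff / mean_absolute_deviation(control)

print(mad_effect_size([12, 14, 13, 15], [10, 11, 9, 10]))  # 7.0
```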
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-18
... Document--Draft DO-XXX, Minimum Aviation Performance Standards (MASPS) for an Enhanced Flight Vision System... Discussion (9:00 a.m.-5:00 p.m.) Provide Comment Resolution of Document--Draft DO-XXX, Minimum Aviation.../Approve FRAC Draft for PMC Consideration--Draft DO- XXX, Minimum Aviation Performance Standards (MASPS...
2013 Missouri Minimum Standards for School Buses
ERIC Educational Resources Information Center
Nicastro, Chris L.
2012-01-01
The revised minimum standards for school bus chassis and school bus bodies have been prepared in conformity with the Revised Statutes of Missouri (RSMo) for school bus transportation. The standards recommended by the 2010 National Conference on School Transportation and the Federal Motor Vehicle Safety Standards (FMVSS) promulgated by the U. S.…
Hopper, John L.
2015-01-01
How can the “strengths” of risk factors, in the sense of how well they discriminate cases from controls, be compared when they are measured on different scales such as continuous, binary, and integer? Risk estimates take into account other fitted and design-related factors, and that is how risk gradients are interpreted, so the presentation of risk gradients should do the same. Therefore, for each risk factor X0, I propose using appropriate regression techniques to derive from appropriate population data the best-fitting relationship between the mean of X0 and all the other covariates fitted in the model or adjusted for by design (X1, X2, … , Xn). The odds per adjusted standard deviation (OPERA) presents the risk association for X0 in terms of the change in risk per s = standard deviation of X0 adjusted for X1, X2, … , Xn, rather than the unadjusted standard deviation of X0 itself. If the increased risk is relative risk (RR)-fold over A adjusted standard deviations, then OPERA = exp[ln(RR)/A] = RR^(1/A). This unifying approach is illustrated by considering breast cancer and published risk estimates. OPERA estimates are by definition independent and can be used to compare the predictive strengths of risk factors across diseases and populations. PMID:26520360
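The OPERA formula from the abstract in executable form (illustrative values, not estimates from the paper):

```python
import math

def opera(rr: float, a: float) -> float:
    """OPERA = exp(ln(RR)/A): the per-adjusted-standard-deviation risk
    ratio when risk increases RR-fold over A adjusted standard deviations."""
    return math.exp(math.log(rr) / a)

# E.g. a 4-fold risk over 2 adjusted SDs corresponds to a 2-fold risk
# per adjusted standard deviation.
print(round(opera(4.0, 2.0), 6))  # 2.0
```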
NASA Astrophysics Data System (ADS)
Muji Susantoro, Tri; Wikantika, Ketut; Saepuloh, Asep; Handoyo Harsolumakso, Agus
2018-05-01
Selection of vegetation indices for plant mapping is needed to provide the best information on plant conditions. The methods used in this research are standard deviation analysis and linear regression. This research sought to determine which vegetation indices to use for mapping sugarcane conditions around oil and gas fields. The data used in this study are Landsat 8 OLI/TIRS imagery. Standard deviation analysis of the 23 vegetation indices with 27 samples yielded the six indices with the highest standard deviations, termed GRVI, SR, NLI, SIPI, GEMI, and LAI, with standard deviation values of 0.47, 0.43, 0.30, 0.17, 0.16, and 0.13. Regression correlation analysis of the 23 vegetation indices with 280 samples yielded six indices, termed NDVI, ENDVI, GDVI, VARI, LAI, and SIPI, selected on the basis of regression correlations with R² values no lower than 0.8. The combined analysis of standard deviation and regression correlation yielded five vegetation indices: NDVI, ENDVI, GDVI, LAI, and SIPI. The results of the two methods show that they need to be combined to produce a good analysis of sugarcane conditions. This was verified through field surveys, which showed good results for the prediction of microseepages.
Monitor unit settings for intensity modulated beams delivered using a step-and-shoot approach.
Sharpe, M B; Miller, B M; Yan, D; Wong, J W
2000-12-01
Two linear accelerators have been commissioned for delivering IMRT treatments using a step-and-shoot approach. To assess beam startup stability for 6 and 18 MV x-ray beams, dose delivered per monitor unit (MU), beam flatness, and beam symmetry were measured as a function of the total number of MU delivered at a clinical dose rate of 400 MU per minute. Relative to a 100 MU exposure, the dose delivered per MU by both linear accelerators was found to be within +/-2% for exposures larger than 4 MU. Beam flatness and symmetry also met accepted quality assurance standards for a minimum exposure of 4 MU. We have found that the performance of the two machines under study is well suited to the delivery of step-and-shoot IMRT. A system of dose calculation has also been commissioned for applying head scatter corrections to fields as small as 1x1 cm2. The accuracy and precision of the relative output calculations in water were validated for small fields and fields offset from the axis of collimator rotation. For both 6 and 18 MV x-ray beams, the dose per MU calculated in a water phantom agrees with measured data to within 1% on average, with a maximum deviation of 2.5%. The largest output factor discrepancies were seen when the actual radiation field size deviated from the set field size. The measured output in water can vary by as much as 16% for 1x1 cm2 fields when the measured field size deviates from the set field size by 2 mm; for a 1 mm deviation, this discrepancy was reduced to 8%. Steps should be taken to ensure collimator precision is tightly controlled when using such small fields. If this is not possible, very small fields should not contribute a significant portion of the treatment, or uncertainties in the collimator position may affect the accuracy of the dose delivered.
Tensho, Keiji; Shimodaira, Hiroki; Akaoka, Yusuke; Koyama, Suguru; Hatanaka, Daisuke; Ikegami, Shota; Kato, Hiroyuki; Saito, Naoto
2018-05-02
The tibial tubercle deviation associated with recurrent patellar dislocation (RPD) has not been studied sufficiently. New methods of evaluation were used to verify the extent of tubercle deviation in a group with patellar dislocation compared with that in a control group, the frequency of patients who demonstrated a cutoff value indicating that tubercle transfer was warranted on the basis of the control group distribution, and the validity of these methods of evaluation for diagnosing RPD. Sixty-six patients with a history of patellar dislocation (single in 19 [SPD group] and recurrent in 47 [RPD group]) and 66 age and sex-matched controls were analyzed with the use of computed tomography (CT). The tibial tubercle-posterior cruciate ligament (TT-PCL) distance, TT-PCL ratio, and tibial tubercle lateralization (TTL) in the SPD and RPD groups were compared with those in the control group. Cutoff values to warrant 10 mm of transfer were based on either the minimum or -2SD (2 standard deviations below the mean) value in the control group, and the prevalences of patients in the RPD group with measurements above these cutoff values were calculated. The area under the curve (AUC) in receiver operating characteristic (ROC) curve analysis was used to assess the effectiveness of the measurements as predictors of RPD. The mean TT-PCL distance, TT-PCL ratio, and TTL were all significantly greater in the RPD group than in the control group. The numbers of patients in the RPD group who satisfied the cutoff criteria when they were based on the minimum TT-PCL distance, TT-PCL ratio, and TTL in the control group were 11 (23%), 7 (15%), and 6 (13%), respectively. When the cutoff values were based on the -2SD values in the control group, the numbers of patients were 8 (17%), 6 (13%), and 0, respectively. The AUC of the ROC curve for TT-PCL distance, TT-PCL ratio, and TTL was 0.66, 0.72, and 0.72, respectively. 
The extent of TTL in the RPD group was not substantial, and the percentages of patients for whom 10 mm of medial transfer was indicated were small. Prognostic Level III. See Instructions for Authors for a complete description of levels of evidence.
Development of minimum standards for hardwoods used in producing underground coal mine timbers
Floyd G. Timson
1978-01-01
This note presents minimum standards for raw material used in the production of sawn, split, and round timbers for the underground mining industry. The standards are based on a summary of information gathered from many mine-timber producers.
Impact of HIPAA’s Minimum Necessary Standard on Genomic Data Sharing
Evans, Barbara J.; Jarvik, Gail P.
2017-01-01
Purpose This article provides a brief introduction to the HIPAA Privacy Rule’s minimum necessary standard, which applies to sharing of genomic data, particularly clinical data, following 2013 Privacy Rule revisions. Methods This research used the Thomson Reuters Westlaw™ database and law library resources in its legal analysis of the HIPAA privacy tiers and the impact of the minimum necessary standard on genomic data-sharing. We considered relevant example cases of genomic data-sharing needs. Results In a climate of stepped-up HIPAA enforcement, this standard is of concern to laboratories that generate, use, and share genomic information. How data-sharing activities are characterized—whether for research, public health, or clinical interpretation and medical practice support—affects how the minimum necessary standard applies and its overall impact on data access and use. Conclusion There is no clear regulatory guidance on how to apply HIPAA’s minimum necessary standard when considering the sharing of information in the data-rich environment of genomic testing. Laboratories that perform genomic testing should engage with policy-makers to foster sound, well-informed policies and appropriate characterization of data-sharing activities to minimize adverse impacts on day-to-day workflows. PMID:28914268
Impact of HIPAA's minimum necessary standard on genomic data sharing.
Evans, Barbara J; Jarvik, Gail P
2018-04-01
This article provides a brief introduction to the Health Insurance Portability and Accountability Act of 1996 (HIPAA) Privacy Rule's minimum necessary standard, which applies to sharing of genomic data, particularly clinical data, following 2013 Privacy Rule revisions. This research used the Thomson Reuters Westlaw database and law library resources in its legal analysis of the HIPAA privacy tiers and the impact of the minimum necessary standard on genomic data sharing. We considered relevant example cases of genomic data-sharing needs. In a climate of stepped-up HIPAA enforcement, this standard is of concern to laboratories that generate, use, and share genomic information. How data-sharing activities are characterized-whether for research, public health, or clinical interpretation and medical practice support-affects how the minimum necessary standard applies and its overall impact on data access and use. There is no clear regulatory guidance on how to apply HIPAA's minimum necessary standard when considering the sharing of information in the data-rich environment of genomic testing. Laboratories that perform genomic testing should engage with policy makers to foster sound, well-informed policies and appropriate characterization of data-sharing activities to minimize adverse impacts on day-to-day workflows.
Remote auditing of radiotherapy facilities using optically stimulated luminescence dosimeters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lye, Jessica, E-mail: jessica.lye@arpansa.gov.au; Dunn, Leon; Kenny, John
Purpose: On 1 July 2012, the Australian Clinical Dosimetry Service (ACDS) released its Optically Stimulated Luminescent Dosimeter (OSLD) Level I audit, replacing the previous TLD based audit. The aim of this work is to present the results from this new service and the complete uncertainty analysis on which the audit tolerances are based. Methods: The audit release was preceded by a rigorous evaluation of the InLight® nanoDot OSLD system from Landauer (Landauer, Inc., Glenwood, IL). Energy dependence, signal fading from multiple irradiations, batch variation, reader variation, and dose response factors were identified and quantified for each individual OSLD. The detectors are mailed to the facility in small PMMA blocks, based on the design of the existing Radiological Physics Centre audit. Modeling and measurement were used to determine a factor that could convert the dose measured in the PMMA block to dose in water for the facility's reference conditions. This factor is dependent on the beam spectrum. The TPR20,10 was used as the beam quality index to determine the specific block factor for a beam being audited. The audit tolerance was defined using a rigorous uncertainty calculation. The audit outcome is then determined using a scientifically based two tiered action level approach. Audit outcomes within two standard deviations were defined as Pass (Optimal Level), within three standard deviations as Pass (Action Level), and outside of three standard deviations as Fail (Out of Tolerance). Results: To date, the ACDS has audited 108 photon beams with TLD and 162 photon beams with OSLD. The TLD audit results had an average deviation from the ACDS of 0.0% and a standard deviation of 1.8%. The OSLD audit results had an average deviation of −0.2% and a standard deviation of 1.4%. The relative combined standard uncertainty was calculated to be 1.3% (1σ).
Pass (Optimal Level) was reduced to ≤2.6% (2σ), and Fail (Out of Tolerance) was reduced to >3.9% (3σ) for the new OSLD audit. Previously, with the TLD audit, Pass (Optimal Level) and Fail (Out of Tolerance) were set at ≤4.0% (2σ) and >6.0% (3σ). Conclusions: The calculated standard uncertainty of 1.3% at one standard deviation is consistent with the measured standard deviation of 1.4% from the audits, confirming the suitability of the uncertainty-budget-derived audit tolerances. The OSLD audit shows greater accuracy than the previous TLD audit, justifying the reduction in audit tolerances. In the TLD audit, all outcomes were Pass (Optimal Level), suggesting that the tolerances were too conservative. In the OSLD audit, 94% of the audits have resulted in Pass (Optimal Level) and 6% in Pass (Action Level). All Pass (Action Level) results have been resolved with a repeat OSLD audit or an on-site ion chamber measurement.
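The two-tiered action-level classification described in this abstract can be sketched as follows (σ = 1.3% per the abstract; the function name is illustrative):

```python
def audit_outcome(deviation_pct: float, sigma_pct: float = 1.3) -> str:
    """Two-tiered audit classification from the abstract: within 2 sigma is
    Pass (Optimal Level), within 3 sigma is Pass (Action Level), and beyond
    3 sigma is Fail (Out of Tolerance). With sigma = 1.3%, the tiers are
    <=2.6% and <=3.9%."""
    d = abs(deviation_pct)
    if d <= 2 * sigma_pct:
        return "Pass (Optimal Level)"
    if d <= 3 * sigma_pct:
        return "Pass (Action Level)"
    return "Fail (Out of Tolerance)"

print(audit_outcome(1.0))  # Pass (Optimal Level)
print(audit_outcome(3.0))  # Pass (Action Level)
print(audit_outcome(4.5))  # Fail (Out of Tolerance)
```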
Collinearity in Least-Squares Analysis
ERIC Educational Resources Information Center
de Levie, Robert
2012-01-01
How useful are the standard deviations per se, and how reliable are results derived from several least-squares coefficients and their associated standard deviations? When the output parameters obtained from a least-squares analysis are mutually independent, as is often assumed, they are reliable estimators of imprecision and so are the functions…
Robust Confidence Interval for a Ratio of Standard Deviations
ERIC Educational Resources Information Center
Bonett, Douglas G.
2006-01-01
Comparing variability of test scores across alternate forms, test conditions, or subpopulations is a fundamental problem in psychometrics. A confidence interval for a ratio of standard deviations is proposed that performs as well as the classic method with normal distributions and performs dramatically better with nonnormal distributions. A simple…
Standard Deviation for Small Samples
ERIC Educational Resources Information Center
Joarder, Anwar H.; Latif, Raja M.
2006-01-01
Neater representations for variance are given for small sample sizes, especially for 3 and 4. With these representations, variance can be calculated without a calculator if sample sizes are small and observations are integers, and an upper bound for the standard deviation is immediate. Accessible proofs of lower and upper bounds are presented for…
Estimating maize water stress by standard deviation of canopy temperature in thermal imagery
USDA-ARS?s Scientific Manuscript database
A new crop water stress index using standard deviation of canopy temperature as an input was developed to monitor crop water status. In this study, thermal imagery was taken from maize under various levels of deficit irrigation treatments in different crop growing stages. The Expectation-Maximizatio...
The Need for Higher Minimum Staffing Standards in U.S. Nursing Homes
Harrington, Charlene; Schnelle, John F.; McGregor, Margaret; Simmons, Sandra F.
2016-01-01
Many U.S. nursing homes have serious quality problems, in part, because of inadequate levels of nurse staffing. This commentary focuses on two issues. First, there is a need for higher minimum nurse staffing standards for U.S. nursing homes based on multiple research studies showing a positive relationship between nursing home quality and staffing and the benefits of implementing higher minimum staffing standards. Studies have identified the minimum staffing levels necessary to provide care consistent with the federal regulations, but many U.S. facilities have dangerously low staffing. Second, the barriers to staffing reform are discussed. These include economic concerns about costs and a focus on financial incentives. The enforcement of existing staffing standards has been weak, and strong nursing home industry political opposition has limited efforts to establish higher standards. Researchers should study the ways to improve staffing standards and new payment, regulatory, and political strategies to improve nursing home staffing and quality. PMID:27103819
Experiments with central-limit properties of spatial samples from locally covariant random fields
Barringer, T.H.; Smith, T.E.
1992-01-01
When spatial samples are statistically dependent, the classical estimator of sample-mean standard deviation is well known to be inconsistent. For locally dependent samples, however, consistent estimators of sample-mean standard deviation can be constructed. The present paper investigates the sampling properties of one such estimator, designated as the tau estimator of sample-mean standard deviation. In particular, the asymptotic normality properties of standardized sample means based on tau estimators are studied in terms of computer experiments with simulated sample-mean distributions. The effects of both sample size and dependency levels among samples are examined for various values of tau (denoting the size of the spatial kernel for the estimator). The results suggest that even for small degrees of spatial dependency, the tau estimator exhibits significantly stronger normality properties than does the classical estimator of standardized sample means. © 1992.
Emergence of the significant local warming of Korea in CMIP5 projections
NASA Astrophysics Data System (ADS)
Boo, Kyung-On; Shim, Sungbo; Kim, Jee-Eun
2016-04-01
According to IPCC AR5, anthropogenic influence on warming is evident at local scales, especially in some tropical regions. Detection of significant local warming is important for the adaptation of society and ecosystems to climate change. Recently, much attention has focused on the time of emergence (ToE) of the signal of anthropogenic climate change against natural climate variability. Motivated by previous studies, this study analyzes the ToE of regional surface air temperature over Korea. Simulations from 15 CMIP5 models are used for RCP 2.6, 4.5, and 8.5. For each year, JJA and DJF temperature anomalies are calculated relative to the 1900-1929 base period. For the noise of interannual variability, natural-only historical simulations from 12 CMIP5 models are used and the standard deviation of the time series is obtained. For the warming signal, we examine the year when a signal above 2 standard deviations is detected in 80% of the models, using 30-year smoothed time series. According to our results, interannual variability is larger over land than over the ocean. Seasonally, it is larger in winter than in summer. Accordingly, the ToE of summertime temperature is earlier than that in winter and is expected to appear in the 2030s for all three RCPs. The seasonal difference is consistent with previous studies. Wintertime ToE appears in the 2040s for RCP8.5 and the 2060s for RCP4.5. The different emergence times between RCP8.5 and RCP4.5 reflect the influence of mitigation. In a similar way, daily maximum and minimum temperatures are analyzed. The ToE of Tmin appears earlier than that of Tmax, and the difference is small. Acknowledgements. This study is supported by the National Institute of Meteorological Sciences, Korea Meteorological Administration (NIMR-2012-B-2).
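As a rough single-series illustration of the ToE idea described above (not the study's actual multi-model procedure, which additionally requires agreement across 80% of the CMIP5 ensemble), one can look for the first year whose smoothed anomaly exceeds 2 standard deviations of natural variability; the trend and noise level below are arbitrary synthetic choices:

```python
import numpy as np

def time_of_emergence(years, anomaly, sigma_noise, window=30, k=2.0):
    # First year whose `window`-year running mean of the temperature anomaly
    # exceeds k standard deviations of natural variability; returns None if
    # the signal never emerges within the series.
    kernel = np.ones(window) / window
    smoothed = np.convolve(anomaly, kernel, mode="valid")
    end_years = years[window - 1:]          # year closing each averaging window
    above = smoothed > k * sigma_noise
    if not above.any():
        return None
    return int(end_years[np.argmax(above)])

# Synthetic example: a 0.02 K/yr trend on top of noise with sigma = 0.5 K.
rng = np.random.default_rng(0)
years = np.arange(1900, 2101)
anomaly = 0.02 * (years - 1900) + rng.normal(0.0, 0.5, years.size)
print(time_of_emergence(years, anomaly, sigma_noise=0.5))
```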
Third molar development by measurements of open apices in an Italian sample of living subjects.
De Luca, Stefano; Pacifici, Andrea; Pacifici, Luciano; Polimeni, Antonella; Fischetto, Sara Giulia; Velandia Palacio, Luz Andrea; Vanin, Stefano; Cameriere, Roberto
2016-02-01
The aim of this study is to analyse the age-predicting performance of the third molar index (I3M) in dental age estimation. A multiple regression analysis was developed with chronological age as the independent variable. In order to investigate the relationship between I3M and chronological age, the standard deviation and relative error were examined. Digitalized orthopantomographs (OPTs) of 975 Italian healthy subjects (531 female and 444 male), aged between 9 and 22 years, were studied. Third molar development was determined according to Cameriere et al. (2008). Analysis of covariance (ANCOVA) was applied to study the interaction between I3M and gender. The difference between age and the third molar index (I3M) was tested with Pearson's correlation coefficient. The I3M, age, and gender of the subjects were used as predictive variables for age estimation. The small F-value for gender (F = 0.042, p = 0.837) reveals that this factor does not affect the growth of the third molar. Adjusted R(2) (AdjR(2)) was used as the parameter to define the best-fitting function. All the regression models (linear, exponential, and polynomial) showed a similar AdjR(2). The polynomial (2nd order) fitting explains about 78% of the total variance and does not add any relevant clinical information to the age estimation process from the third molar. The standard deviation and relative error increase with age. The I3M has its minimum in the younger group of studied individuals and its maximum in the oldest ones, indicating that its precision and reliability decrease with age. Copyright © 2015 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
Gwynne-Jones, David P; Hutton, Liam R; Stout, Kirsten M; Abbott, J Haxby
2018-04-01
There are increasing problems with access to both outpatient assessment and joint replacement surgery for patients with hip or knee osteoarthritis. Data were collected on all patients seen at the Joint Clinic over a 2-year period with minimum 12-month follow-up. Patients were assessed by a nurse and a physiotherapist, baseline scores and demographic details collected, and an individualized personal care plan developed. Patients could be referred for a first specialist assessment (FSA) if their severity justified surgical assessment. Three hundred fifty-eight patients were seen at Joint Clinic, of whom 150 (44%) had hip and 189 (56%) had knee OA. The mean age was 67.4 years and there were 152 men (45%) and 187 women (55%). The mean baseline Oxford score was 19.8 (standard deviation 8.2). Fifty-four patients were referred directly to FSA (mean Oxford score 13.0, standard deviation 6.7) and 89 after a subsequent review. The scores of patients referred for FSA were significantly worse than those managed in the Joint Clinic (P < .001). Of the 143 referred for FSA, 115 underwent or were awaiting surgery, 18 were recommended surgery but scored below prioritization threshold, and 10 were not recommended surgery. The Oxford scores of the 194 patients managed non-operatively improved from 22.0 to 25.0 (P = .0013). This study shows that the Joint Clinic was effective as a triage tool with 93% of those referred for FSA being recommended surgery. This has freed up surgeon time to see only those patients most in need of surgical assessment. Copyright © 2017 Elsevier Inc. All rights reserved.
Kim, Hye Jeong; Kwak, Mi Kyung; Choi, In Ho; Jin, So-Young; Park, Hyeong Kyu; Byun, Dong Won; Suh, Kyoil; Yoo, Myung Hi
2018-02-23
The aim of this study was to address the role of the elasticity index as a possible predictive marker for detecting papillary thyroid carcinoma (PTC) and quantitatively assess shear wave elastography (SWE) as a tool for differentiating PTC from benign thyroid nodules. One hundred and nineteen patients with thyroid nodules undergoing SWE before ultrasound-guided fine needle aspiration and core needle biopsy were analyzed. The mean (EMean), minimum (EMin), maximum (EMax), and standard deviation (ESD) of SWE elasticity indices were measured. Among 105 nodules, 14 were PTC and 91 were benign. The EMean, EMin, and EMax values were significantly higher in PTCs than benign nodules (EMean 37.4 in PTC vs. 23.7 in benign nodules, p = 0.005; EMin 27.9 vs. 17.8, p = 0.034; EMax 46.7 vs. 31.5, p < 0.001). The EMean, EMin, and EMax were significantly associated with PTC with diagnostic odds ratios varying from 6.74 to 9.91, high specificities (86.4%, 86.4%, and 88.1%, respectively), and positive likelihood ratios (4.21, 3.69, and 4.82, respectively). The ESD values were significantly higher in PTC than in benign nodules (6.3 vs. 2.6, p < 0.001). ESD had the highest specificity (96.6%) when applied with a cut-off value of 6.5 kPa. It had a positive likelihood ratio of 14.75 and a diagnostic odds ratio of 28.50. The shear elasticity index of ESD, with higher likelihood ratios for PTC, will probably identify nodules that have a high potential for malignancy. It may help to identify and select malignant nodules, while reducing unnecessary fine needle aspiration and core needle biopsies of benign nodules.
Crovelli, R.A.; Balay, R.H.
1991-01-01
A general risk-analysis method was developed for petroleum-resource assessment and other applications. The triangular probability distribution is used as a model with an analytic aggregation methodology based on probability theory rather than Monte-Carlo simulation. Among the advantages of the analytic method are its computational speed and flexibility, and the saving of time and cost on a microcomputer. The input into the model consists of a set of components (e.g. geologic provinces) and, for each component, three potential resource estimates: minimum, most likely (mode), and maximum. Assuming a triangular probability distribution, the mean, standard deviation, and seven fractiles (F100, F95, F75, F50, F25, F5, and F0) are computed for each component, where for example, the probability of more than F95 is equal to 0.95. The components are aggregated by combining the means, standard deviations, and respective fractiles under three possible siutations (1) perfect positive correlation, (2) complete independence, and (3) any degree of dependence between these two polar situations. A package of computer programs named the TRIAGG system was written in the Turbo Pascal 4.0 language for performing the analytic probabilistic methodology. The system consists of a program for processing triangular probability distribution assessments and aggregations, and a separate aggregation routine for aggregating aggregations. The user's documentation and program diskette of the TRIAGG system are available from USGS Open File Services. TRIAGG requires an IBM-PC/XT/AT compatible microcomputer with 256kbyte of main memory, MS-DOS 3.1 or later, either two diskette drives or a fixed disk, and a 132 column printer. A graphics adapter and color display are optional. ?? 1991.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Standards for determination of appropriate individual institution minimum capital ratios. 615.5351 Section 615.5351 Banks and Banking FARM CREDIT ADMINISTRATION FARM CREDIT SYSTEM FUNDING AND FISCAL AFFAIRS, LOAN POLICIES AND OPERATIONS, AND FUNDING OPERATIONS Establishment of Minimum Capital...
YALE NATURAL RADIOCARBON MEASUREMENTS. PART VI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stuiver, M.; Deevey, E.S.
1961-01-01
Most of the measurements made since publication of Yale V are included; some measurements, such as a series collected in Greenland, are withheld pending additional information or field work that will make better interpretations possible. In addition to radiocarbon dates of geologic and/or archaeologic interest, recent assays are given of ¹⁴C in lake waters and other lacustrine materials, now normalized for ¹³C content. The newly accepted convention is followed in expressing normalized ¹⁴C values as Δ = δ¹⁴C − (2δ¹³C + 50)[1 + (δ¹⁴C/1000)], where Δ is the per mil deviation of the ¹⁴C of the sample from any contemporary standard (whether organic or a carbonate) after correction of sample and/or standard for real age, for the Suess effect, for normal isotopic fractionation, and for deviations of ¹⁴C content of the age- and pollution-corrected 19th-century wood standard from that of 95% of the NBS oxalic acid standard; δ¹⁴C is the measured deviation from 95% of the NBS standard, and δ¹³C is the deviation from the NBS limestone standard, both in per mil. These assays are variously affected by artificial ¹⁴C resulting from nuclear tests. (auth)
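The normalization convention quoted in the abstract can be coded directly; this is a minimal sketch with illustrative variable names, taking both delta values in per mil:

```python
def big_delta_c14(delta_c14, delta_c13):
    # Normalized per-mil deviation, per the convention quoted in the text:
    #   Delta = d14C - (2*d13C + 50) * (1 + d14C/1000),
    # where d14C and d13C are both expressed in per mil.
    return delta_c14 - (2.0 * delta_c13 + 50.0) * (1.0 + delta_c14 / 1000.0)

# A sample at the reference isotopic composition (d13C = -25 per mil) needs
# no fractionation correction: the factor (2*(-25) + 50) vanishes.
print(big_delta_c14(-10.0, -25.0))   # -10.0
```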
Yanagihara, Nobuyuki; Seki, Meikan; Nakano, Masahiro; Hachisuga, Toru; Goto, Yukio
2014-06-01
Disturbance of autonomic nervous activity has been thought to play a role in the climacteric symptoms of postmenopausal women. This study was therefore designed to investigate the relationship between autonomic nervous activity and climacteric symptoms in postmenopausal Japanese women. The autonomic nervous activity of 40 Japanese women with climacteric symptoms and 40 Japanese women without climacteric symptoms was measured by power spectral analysis of heart rate variability using a standard hexagonal radar chart. The scores for climacteric symptoms were determined using the simplified menopausal index. Sympathetic excitability and irritability, as well as the standard deviation of mean R-R intervals in supine position, were significantly (P < 0.01, 0.05, and 0.001, respectively) decreased in women with climacteric symptoms. There was a negative correlation between the standard deviation of mean R-R intervals in supine position and the simplified menopausal index score. The lack of control for potential confounding variables was a limitation of this study. In climacteric women, the standard deviation of mean R-R intervals in supine position is negatively correlated with the simplified menopausal index score.
10 CFR 600.341 - Monitoring and reporting program and financial performance.
Code of Federal Regulations, 2012 CFR
2012-01-01
... will be taken to address the deviations. (2) A final technical report if the award is for research and... dates for reports. At a minimum, requirements must include: (1) Periodic progress reports (at least... follows: (i) The program portions of the reports must address progress toward achieving program...
10 CFR 600.341 - Monitoring and reporting program and financial performance.
Code of Federal Regulations, 2014 CFR
2014-01-01
... will be taken to address the deviations. (2) A final technical report if the award is for research and... dates for reports. At a minimum, requirements must include: (1) Periodic progress reports (at least... follows: (i) The program portions of the reports must address progress toward achieving program...
10 CFR 600.341 - Monitoring and reporting program and financial performance.
Code of Federal Regulations, 2013 CFR
2013-01-01
... will be taken to address the deviations. (2) A final technical report if the award is for research and... dates for reports. At a minimum, requirements must include: (1) Periodic progress reports (at least... follows: (i) The program portions of the reports must address progress toward achieving program...
10 CFR 600.341 - Monitoring and reporting program and financial performance.
Code of Federal Regulations, 2011 CFR
2011-01-01
... will be taken to address the deviations. (2) A final technical report if the award is for research and... dates for reports. At a minimum, requirements must include: (1) Periodic progress reports (at least... follows: (i) The program portions of the reports must address progress toward achieving program...
77 FR 60625 - Minimum Internal Control Standards for Class II Gaming
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-04
...-37 Minimum Internal Control Standards for Class II Gaming AGENCY: National Indian Gaming Commission... Internal Control Standards that were published on September 21, 2012. DATES: The effective date [email protected] . FOR FURTHER INFORMATION CONTACT: Jennifer Ward, Attorney, NIGC Office of General Counsel, at...
14 CFR 121.335 - Equipment standards.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Equipment standards. (a) Reciprocating engine powered airplanes. The oxygen apparatus, the minimum rates of oxygen flow, and the supply of oxygen necessary to comply with § 121.327 must meet the standards...) Turbine engine powered airplanes. The oxygen apparatus, the minimum rate of oxygen flow, and the supply of...
14 CFR 121.335 - Equipment standards.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Equipment standards. (a) Reciprocating engine powered airplanes. The oxygen apparatus, the minimum rates of oxygen flow, and the supply of oxygen necessary to comply with § 121.327 must meet the standards...) Turbine engine powered airplanes. The oxygen apparatus, the minimum rate of oxygen flow, and the supply of...
14 CFR 121.335 - Equipment standards.
Code of Federal Regulations, 2012 CFR
2012-01-01
... Equipment standards. (a) Reciprocating engine powered airplanes. The oxygen apparatus, the minimum rates of oxygen flow, and the supply of oxygen necessary to comply with § 121.327 must meet the standards...) Turbine engine powered airplanes. The oxygen apparatus, the minimum rate of oxygen flow, and the supply of...
14 CFR 121.335 - Equipment standards.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Equipment standards. (a) Reciprocating engine powered airplanes. The oxygen apparatus, the minimum rates of oxygen flow, and the supply of oxygen necessary to comply with § 121.327 must meet the standards...) Turbine engine powered airplanes. The oxygen apparatus, the minimum rate of oxygen flow, and the supply of...
14 CFR 121.335 - Equipment standards.
Code of Federal Regulations, 2014 CFR
2014-01-01
... Equipment standards. (a) Reciprocating engine powered airplanes. The oxygen apparatus, the minimum rates of oxygen flow, and the supply of oxygen necessary to comply with § 121.327 must meet the standards...) Turbine engine powered airplanes. The oxygen apparatus, the minimum rate of oxygen flow, and the supply of...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sleiman, Mohamad; Chen, Sharon; Gilbert, Haley E.
A laboratory method to simulate natural exposure of roofing materials has been reported in a companion article. In the current article, we describe the results of an international, nine-participant interlaboratory study (ILS) conducted in accordance with ASTM Standard E691-09 to establish the precision and reproducibility of this protocol. The accelerated soiling and weathering method was applied four times by each laboratory to replicate coupons of 12 products representing a wide variety of roofing categories (single-ply membrane, factory-applied coating (on metal), bare metal, field-applied coating, asphalt shingle, modified-bitumen cap sheet, clay tile, and concrete tile). Participants reported initial and laboratory-aged values of solar reflectance and thermal emittance. Measured solar reflectances were consistent within and across eight of the nine participating laboratories. Measured thermal emittances reported by six participants exhibited comparable consistency. For solar reflectance, the accelerated aging method is both repeatable and reproducible within an acceptable range of standard deviations: the repeatability standard deviation sr ranged from 0.008 to 0.015 (relative standard deviation of 1.2-2.1%) and the reproducibility standard deviation sR ranged from 0.022 to 0.036 (relative standard deviation of 3.2-5.8%). The ILS confirmed that the accelerated aging method can be reproduced by multiple independent laboratories with acceptable precision. In conclusion, this study supports the adoption of the accelerated aging practice to speed the evaluation and performance rating of new cool roofing materials.
NASA Astrophysics Data System (ADS)
Gautam, Girish Dutt; Pandey, Arun Kumar
2018-03-01
Kevlar is the most popular aramid fiber and is commonly used in technologically advanced industries for various applications. However, precise cutting of Kevlar composite laminates is a difficult task. Conventional cutting methods suffer from defects such as delamination, burr formation, and fiber pullout with poor surface quality, and the mechanical performance of the laminates is greatly affected by these defects. Laser beam machining may be an alternative to conventional cutting processes due to its non-contact nature, low specific energy requirement, and higher production rate. However, this process also faces some problems, which may be minimized by operating the machine at optimum parameter levels. This research paper examines the effective utilization of an Nd:YAG laser cutting system on difficult-to-cut Kevlar-29 composite laminates. The objective of the proposed work is to find the optimum process parameter settings for obtaining the minimum kerf deviation on both sides. The experiments were conducted on Kevlar-29 composite laminates of 1.25 mm thickness using a Box-Behnken design with two center points. The experimental data were used for optimization by the proposed methodology. For the optimization, a teaching-learning-based optimization approach was employed to obtain the minimum kerf deviation at the bottom and top sides. A self-coded MATLAB program was developed using the proposed methodology and used for the optimization. Finally, confirmation tests were performed to compare the experimental and optimum results obtained by the proposed methodology. The comparison shows that machining performance in the laser beam cutting process is remarkably improved through the proposed approach. The influence of different laser cutting parameters, such as lamp current, pulse frequency, pulse width, compressed air pressure, and cutting speed, on the top and bottom kerf deviations during Nd:YAG laser cutting of Kevlar-29 laminates has also been discussed.
Selection and Classification Using a Forecast Applicant Pool.
ERIC Educational Resources Information Center
Hendrix, William H.
The document presents a forecast model of the future Air Force applicant pool. By forecasting applicants' quality (means and standard deviations of aptitude scores) and quantity (total number of applicants), a potential enlistee could be compared to the forecasted pool. The data used to develop the model consisted of means, standard deviation, and…
Assessment of the NASA Flight Assurance Review Program
NASA Technical Reports Server (NTRS)
Holmes, J.; Pruitt, G.
1983-01-01
The NASA flight assurance review program was assessed in order to develop minimum standard guidelines for flight assurance reviews. Documents from NASA centers and NASA headquarters were evaluated to determine current design review practices and procedures. Six reviews were identified for the recommended minimum. The practices and procedures used at the different centers were analyzed to incorporate the most effective ones into the minimum standard review guidelines, and guidelines were defined for procedures, personnel and responsibilities, review items/data checklists, and feedback and closeout. The six recommended reviews and the minimum standard guidelines developed for flight assurance reviews are presented. Observations and conclusions for further improving the NASA review and quality assurance process are outlined.
NASA Technical Reports Server (NTRS)
Herrman, B. D.; Uman, M. A.; Brantley, R. D.; Krider, E. P.
1976-01-01
The principle of operation of a wideband crossed-loop magnetic-field direction finder is studied by comparing the bearing determined from the NS and EW magnetic fields at various times up to 155 microsec after return stroke initiation with the TV-determined lightning channel base direction. For 40 lightning strokes in the 3 to 12 km range, the difference between the bearings found from magnetic fields sampled at times between 1 and 10 microsec and the TV channel-base data has a standard deviation of 3-4 deg. Included in this standard deviation is a 2-3 deg measurement error. For fields sampled at progressively later times, both the mean and the standard deviation of the difference between the direction-finder bearing and the TV bearing increase. Near 150 microsec, means are about 35 deg and standard deviations about 60 deg. The physical reasons for the late-time inaccuracies in the wideband direction finder and the occurrence of these effects in narrow-band VLF direction finders are considered.
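The underlying principle, recovering a bearing from the two orthogonal loop signals, can be sketched as follows; this is a simplification that treats the signal amplitudes as proportional to the magnetic-field components and ignores the polarity ambiguity and the late-time propagation effects the study discusses:

```python
import math

def bearing_deg(b_ns, b_ew):
    # Bearing in degrees clockwise from north, recovered from the NS and
    # EW loop signals of a crossed-loop direction finder. The 180-degree
    # polarity ambiguity of real loop antennas is ignored here.
    return math.degrees(math.atan2(b_ew, b_ns)) % 360.0

print(bearing_deg(1.0, 1.0))   # a source to the northeast, near 45 degrees
```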
Wavelength selection method with standard deviation: application to pulse oximetry.
Vazquez-Jaccaud, Camille; Paez, Gonzalo; Strojnik, Marija
2011-07-01
Near-infrared spectroscopy provides useful biological information after the radiation has penetrated the tissue, within the therapeutic window. One of the significant shortcomings of current applications of spectroscopic techniques to a live subject is that the subject may be uncooperative and the sample undergoes significant temporal variations due to his or her health status, which, from a radiometric point of view, introduce measurement noise. We describe a novel wavelength selection method for monitoring, based on a standard deviation map, that allows low sensitivity to noise. It may be used with spectral transillumination, transmission, or reflection signals, including those corrupted by noise and unavoidable temporal effects. We apply it to the selection of two wavelengths for the case of pulse oximetry. Using spectroscopic data, we generate a map of standard deviation that we propose as a figure of merit in the presence of the noise introduced by the living subject. Even in the presence of diverse sources of noise, we identify four wavelength domains whose standard deviation is minimally sensitive to temporal noise, and two wavelength domains with low sensitivity to temporal noise.
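A minimal sketch of the standard-deviation-based selection idea follows; it is illustrative only (the paper builds a full standard-deviation map over wavelength pairs for pulse oximetry), ranking wavelengths by the temporal standard deviation of repeated measurements and keeping the least noisy ones:

```python
import numpy as np

def low_noise_wavelengths(spectra, wavelengths, k=2):
    # spectra: 2-D array, rows = repeated (temporal) measurements,
    # columns = wavelengths. Rank wavelengths by the standard deviation
    # of the signal over time and keep the k least temporally noisy ones.
    sd_map = spectra.std(axis=0)
    order = np.argsort(sd_map)
    return wavelengths[order[:k]]

# Toy data: measurement noise grows with wavelength, so the method should
# favor the short-wavelength end of the band.
rng = np.random.default_rng(1)
wavelengths = np.arange(600, 1000, 10)            # nm
noise = np.linspace(0.01, 0.5, wavelengths.size)  # per-wavelength noise scale
spectra = rng.normal(0.0, noise, size=(200, wavelengths.size))
print(low_noise_wavelengths(spectra, wavelengths))
```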
How random is a random vector?
NASA Astrophysics Data System (ADS)
Eliazar, Iddo
2015-12-01
Over 80 years ago Samuel Wilks proposed that the "generalized variance" of a random vector is the determinant of its covariance matrix. To date, the notion and use of the generalized variance is confined only to very specific niches in statistics. In this paper we establish that the "Wilks standard deviation", the square root of the generalized variance, is indeed the standard deviation of a random vector. We further establish that the "uncorrelation index", a derivative of the Wilks standard deviation, is a measure of the overall correlation between the components of a random vector. Both the Wilks standard deviation and the uncorrelation index are, respectively, special cases of two general notions that we introduce: "randomness measures" and "independence indices" of random vectors. In turn, these general notions give rise to "randomness diagrams": tangible planar visualizations that answer the question of how random a random vector is. The notion of "independence indices" yields a novel measure of correlation for Lévy laws. In general, the concepts and results presented in this paper are applicable to any field of science and engineering with random-vector empirical data.
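The two quantities can be sketched directly from the definitions given in the abstract; note that the normalization of the uncorrelation index below (dividing by the product of component standard deviations, so that uncorrelated components give 1) is an assumption of this sketch, not necessarily the paper's exact definition:

```python
import numpy as np

def wilks_sd(cov):
    # Wilks standard deviation: the square root of the generalized
    # variance, i.e. sqrt(det(covariance matrix)).
    return np.sqrt(np.linalg.det(cov))

def uncorrelation_index(cov):
    # Wilks SD divided by the product of the component standard
    # deviations: equals 1 for uncorrelated components and shrinks as
    # correlation grows. (This normalization is the sketch's assumption.)
    return wilks_sd(cov) / np.prod(np.sqrt(np.diag(cov)))

cov = np.array([[4.0, 1.0],
                [1.0, 1.0]])
print(wilks_sd(cov))              # sqrt(det) = sqrt(3), about 1.732
print(uncorrelation_index(cov))   # about 0.866, reflecting the correlation
```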
Estimation of Tooth Size Discrepancies among Different Malocclusion Groups.
Hasija, Narender; Bala, Madhu; Goyal, Virender
2014-05-01
Regards and Tribute: Late Dr Narender Hasija was a mentor and visionary in the light of knowledge and experience. We pay our regards with deepest gratitude to the departed soul; may he rest in peace. Bolton's ratios help in estimating overbite and overjet relationships, the effects of contemplated extractions on posterior occlusion, incisor relationships, and the identification of occlusal misfit produced by tooth size discrepancies. The aim was to determine any difference in tooth size discrepancy in the anterior as well as the overall ratio in different malocclusions, and to compare the results with Bolton's study. After measuring the teeth of all 100 patients, Bolton's analysis was performed. Results were compared with Bolton's means and standard deviations and were also subjected to statistical analysis. Results show that the means and standard deviations of ideal occlusion cases are comparable with those of Bolton, but when the means and standard deviations of the malocclusion groups are compared with those of Bolton, the values of standard deviation are higher, though the means are comparable. How to cite this article: Hasija N, Bala M, Goyal V. Estimation of Tooth Size Discrepancies among Different Malocclusion Groups. Int J Clin Pediatr Dent 2014;7(2):82-85.
Association of auricular pressing and heart rate variability in pre-exam anxiety students.
Wu, Wocao; Chen, Junqi; Zhen, Erchuan; Huang, Huanlin; Zhang, Pei; Wang, Jiao; Ou, Yingyi; Huang, Yong
2013-03-25
A total of 30 students scoring between 12 and 20 on the Test Anxiety Scale who had been exhibiting an anxious state > 24 hours, and 30 normal control students were recruited. Indices of heart rate variability were recorded using an Actiheart electrocardiogram recorder at 10 minutes before auricular pressing, in the first half of stimulation and in the second half of stimulation. The results revealed that the standard deviation of all normal to normal intervals and the root mean square of standard deviation of normal to normal intervals were significantly increased after stimulation. The heart rate variability triangular index, very-low-frequency power, low-frequency power, and the ratio of low-frequency to high-frequency power were increased to different degrees after stimulation. Compared with normal controls, the root mean square of standard deviation of normal to normal intervals was significantly increased in anxious students following auricular pressing. These results indicated that auricular pressing can elevate heart rate variability, especially the root mean square of standard deviation of normal to normal intervals in students with pre-exam anxiety.
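The two heart rate variability indices named in the abstract, the standard deviation of normal-to-normal (NN) intervals (commonly called SDNN) and the root mean square of successive differences (RMSSD), are conventionally computed as follows; this is a sketch, and note that tools vary between the population and sample divisor for SDNN:

```python
import math
import statistics

def sdnn(rr_ms):
    # Standard deviation of all NN (normal-to-normal) intervals, in ms.
    # (Population divisor used here; some tools use the sample divisor.)
    return statistics.pstdev(rr_ms)

def rmssd(rr_ms):
    # Root mean square of successive NN-interval differences, in ms.
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rr = [800, 810, 790, 820, 805]    # NN intervals in milliseconds
print(round(sdnn(rr), 2), round(rmssd(rr), 2))   # 10.0 20.16
```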
Havelaar, Arie H; Vazquez, Kathleen M; Topalcengiz, Zeynal; Muñoz-Carpena, Rafael; Danyluk, Michelle D
2017-10-09
The U.S. Food and Drug Administration (FDA) has defined standards for the microbial quality of agricultural surface water used for irrigation. According to the FDA produce safety rule (PSR), a microbial water quality profile requires analysis of a minimum of 20 samples for Escherichia coli over 2 to 4 years. The geometric mean (GM) level of E. coli should not exceed 126 CFU/100 mL, and the statistical threshold value (STV) should not exceed 410 CFU/100 mL. The water quality profile should be updated by analysis of a minimum of five samples per year. We used an extensive set of data on levels of E. coli and other fecal indicator organisms, the presence or absence of Salmonella, and physicochemical parameters in six agricultural irrigation ponds in West Central Florida to evaluate the empirical and theoretical basis of this PSR. We found highly variable log-transformed E. coli levels, with standard deviations exceeding those assumed in the PSR by up to threefold. Lognormal distributions provided an acceptable fit to the data in most cases but may underestimate extreme levels. Replacing censored data with the detection limit of the microbial tests underestimated the true variability, leading to biased estimates of GM and STV. Maximum likelihood estimation using truncated lognormal distributions is recommended. Twenty samples are not sufficient to characterize the bacteriological quality of irrigation ponds, and a rolling data set of five samples per year used to update GM and STV values results in highly uncertain results and delays in detecting a shift in water quality. In these ponds, E. coli was an adequate predictor of the presence of Salmonella in 150-mL samples, and turbidity was a second significant variable. The variability in levels of E. coli in agricultural water was higher than that anticipated when the PSR was finalized, and more detailed information based on mechanistic modeling is necessary to develop targeted risk management strategies.
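The GM/STV water quality profile described above can be sketched as follows. This is a simplified illustration, not the rule's verbatim procedure: the STV is computed here as the estimated 90th percentile of a lognormal fit (z = 1.282), one common formulation, and the sample data are hypothetical. Censored (non-detect) values would need the truncated-lognormal MLE treatment the authors recommend, which is not shown.

```python
import math

def water_quality_profile(cfu_per_100ml):
    """Compute GM and STV from E. coli samples (CFU/100 mL).

    GM is the geometric mean; STV is taken as the estimated 90th
    percentile of a lognormal fit to the data (z = 1.282). Zeros or
    non-detects must be handled (e.g. by truncated-MLE) before use.
    """
    logs = [math.log10(x) for x in cfu_per_100ml]
    n = len(logs)
    mean = sum(logs) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in logs) / (n - 1))
    gm = 10 ** mean
    stv = 10 ** (mean + 1.282 * sd)
    return gm, stv

# Hypothetical 20-sample profile for one irrigation pond:
samples = [35, 120, 60, 410, 90, 25, 150, 75, 200, 55,
           40, 310, 65, 110, 85, 22, 130, 95, 180, 70]
gm, stv = water_quality_profile(samples)
print(f"GM = {gm:.0f} CFU/100 mL, STV = {stv:.0f} CFU/100 mL")
```

The PSR criteria are then GM <= 126 and STV <= 410 CFU/100 mL; the study's point is that 20 samples leave both estimates highly uncertain when the log-scale standard deviation is large.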
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-03
... Committee 147, Minimum Operational Performance Standards for Traffic Alert and Collision Avoidance Systems... Traffic Alert and Collision Avoidance Systems Airborne Equipment. SUMMARY: The FAA is issuing this notice... Performance Standards for Traffic Alert and Collision Avoidance Systems Airborne Equipment. DATES: The meeting...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-05
... Committee 147, Minimum Operational Performance Standards for Traffic Alert and Collision Avoidance Systems... Traffic Alert and Collision Avoidance Systems Airborne Equipment. SUMMARY: The FAA is issuing this notice... Performance Standards for Traffic Alert and Collision Avoidance Systems Airborne Equipment. DATES: The meeting...
Code of Federal Regulations, 2011 CFR
2011-04-01
... or tribal organization's financial management system contain to meet these standards? 900.45 Section... ASSISTANCE ACT Standards for Tribal or Tribal Organization Management Systems Standards for Financial Management Systems § 900.45 What specific minimum requirements shall an Indian tribe or tribal organization's...
25 CFR 36.22 - Standard VII-Elementary instructional program.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 25 Indians 1 2012-04-01 2011-04-01 true Standard VII-Elementary instructional program. 36.22 Section 36.22 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR EDUCATION MINIMUM ACADEMIC STANDARDS FOR THE BASIC EDUCATION OF INDIAN CHILDREN AND NATIONAL CRITERIA FOR DORMITORY SITUATIONS Minimum...
25 CFR 36.21 - Standard VI-Kindergarten instructional program.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 25 Indians 1 2012-04-01 2011-04-01 true Standard VI-Kindergarten instructional program. 36.21 Section 36.21 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR EDUCATION MINIMUM ACADEMIC STANDARDS FOR THE BASIC EDUCATION OF INDIAN CHILDREN AND NATIONAL CRITERIA FOR DORMITORY SITUATIONS Minimum...
Minimum Essential Requirements and Standards in Medical Education.
ERIC Educational Resources Information Center
Wojtczak, Andrzej; Schwarz, M. Roy
2000-01-01
Reviews the definition of standards in general, and proposes a definition of standards and global minimum essential requirements for use in medical education. Aims to serve as a tool for the improvement of quality and international comparisons of basic medical programs. Explains the IIME (Institute for International Medical Education) project…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-30
... Committee 147, Minimum Operational Performance Standards for Traffic Alert and Collision Avoidance Systems... Traffic Alert and Collision Avoidance Systems Airborne Equipment. SUMMARY: The FAA is issuing this notice... Performance Standards for Traffic Alert and Collision Avoidance Systems Airborne Equipment. DATES: The meeting...
25 CFR 36.22 - Standard VII-Elementary instructional program.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 1 2013-04-01 2013-04-01 false Standard VII-Elementary instructional program. 36.22 Section 36.22 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR EDUCATION MINIMUM ACADEMIC STANDARDS FOR THE BASIC EDUCATION OF INDIAN CHILDREN AND NATIONAL CRITERIA FOR DORMITORY SITUATIONS Minimum...
25 CFR 36.21 - Standard VI-Kindergarten instructional program.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 1 2014-04-01 2014-04-01 false Standard VI-Kindergarten instructional program. 36.21 Section 36.21 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR EDUCATION MINIMUM ACADEMIC STANDARDS FOR THE BASIC EDUCATION OF INDIAN CHILDREN AND NATIONAL CRITERIA FOR DORMITORY SITUATIONS Minimum...
25 CFR 36.21 - Standard VI-Kindergarten instructional program.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 1 2013-04-01 2013-04-01 false Standard VI-Kindergarten instructional program. 36.21 Section 36.21 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR EDUCATION MINIMUM ACADEMIC STANDARDS FOR THE BASIC EDUCATION OF INDIAN CHILDREN AND NATIONAL CRITERIA FOR DORMITORY SITUATIONS Minimum...
25 CFR 36.22 - Standard VII-Elementary instructional program.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 1 2014-04-01 2014-04-01 false Standard VII-Elementary instructional program. 36.22 Section 36.22 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR EDUCATION MINIMUM ACADEMIC STANDARDS FOR THE BASIC EDUCATION OF INDIAN CHILDREN AND NATIONAL CRITERIA FOR DORMITORY SITUATIONS Minimum...
25 CFR 63.12 - What are minimum standards of character?
Code of Federal Regulations, 2010 CFR
2010-04-01
... PROTECTION AND FAMILY VIOLENCE PREVENTION Minimum Standards of Character and Suitability for Employment § 63... guilty to any offense under Federal, state, or tribal law involving crimes of violence, sexual assault...
Offshore fatigue design turbulence
NASA Astrophysics Data System (ADS)
Larsen, Gunner C.
2001-07-01
Fatigue damage on wind turbines is mainly caused by stochastic loading originating from turbulence. While onshore sites display large differences in terrain topology, and thereby also in turbulence conditions, offshore sites are far more homogeneous, as the majority of them are likely to be associated with shallow water areas. However, despite this fact, specific recommendations on offshore turbulence intensities, applicable for fatigue design purposes, are lacking in the present IEC code. This article presents specific guidelines for such loading. These guidelines are based on the statistical analysis of a large number of wind data originating from two Danish shallow water offshore sites. The turbulence standard deviation depends on the mean wind speed, upstream conditions, measuring height and thermal convection. Defining a population of turbulence standard deviations, at a given measuring position, uniquely by the mean wind speed, variations in upstream conditions and atmospheric stability will appear as variability of the turbulence standard deviation. Distributions of such turbulence standard deviations, conditioned on the mean wind speed, are quantified by fitting the measured data to logarithmic Gaussian distributions. By combining a simple heuristic load model with the parametrized conditional probability density functions of the turbulence standard deviations, an empirical offshore design turbulence intensity is determined. For pure stochastic loading (as associated with standstill situations), the design turbulence intensity yields a fatigue damage equal to the average fatigue damage caused by the distributed turbulence intensity. If the stochastic loading is combined with a periodic deterministic loading (as in the normal operating situation), the proposed design turbulence intensity is shown to be conservative.
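The conditional-distribution step above (fitting logarithmic Gaussian distributions to turbulence standard deviations within a mean-wind-speed bin) can be sketched as follows. The data, bin, and quantile choice are hypothetical illustrations, not the paper's fitted values:

```python
import math
import statistics

def fit_log_gaussian(sigmas):
    """Fit a logarithmic Gaussian to the turbulence standard deviations
    observed in one mean-wind-speed bin: returns the mean and standard
    deviation of ln(sigma)."""
    logs = [math.log(s) for s in sigmas]
    return statistics.mean(logs), statistics.stdev(logs)

def sigma_quantile(mu, s, z):
    """Turbulence SD at the quantile given by standard normal deviate z."""
    return math.exp(mu + z * s)

# Hypothetical 10-min turbulence SDs (m/s) in one wind-speed bin:
sigmas = [0.61, 0.72, 0.55, 0.80, 0.66, 0.70, 0.58, 0.75]
mu, s = fit_log_gaussian(sigmas)
print(f"median sigma = {math.exp(mu):.2f} m/s, "
      f"90th percentile = {sigma_quantile(mu, s, 1.282):.2f} m/s")
```

A design turbulence intensity would then be chosen so that the fatigue damage it implies matches the damage averaged over this fitted distribution, as the article describes.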
Estimating extreme stream temperatures by the standard deviate method
NASA Astrophysics Data System (ADS)
Bogan, Travis; Othmer, Jonathan; Mohseni, Omid; Stefan, Heinz
2006-02-01
It is now widely accepted that global climate warming is taking place on the earth. Among many other effects, a rise in air temperatures is expected to increase stream temperatures. However, due to evaporative cooling, stream temperatures do not increase linearly with air temperatures indefinitely. Within the anticipated bounds of climate warming, extreme stream temperatures may therefore not rise substantially. With this concept in mind, past extreme temperatures measured at 720 USGS stream gauging stations were analyzed by the standard deviate method. In this method the highest stream temperatures are expressed as the mean temperature of a measured partial maximum stream temperature series plus its standard deviation multiplied by a factor KE (standard deviate). Various KE-values were explored; values of KE larger than 8 were found physically unreasonable. It is concluded that the value of KE should be in the range from 7 to 8. A unit error in estimating KE translates into a typical stream temperature error of about 0.5 °C. Using a logistic model for the stream temperature/air temperature relationship, a one degree error in air temperature gives a typical error of 0.16 °C in stream temperature. With a projected error in the enveloping standard deviate dKE=1.0 (range 0.5-1.5) and an error in projected high air temperature dTa=2 °C (range 0-4 °C), the total projected stream temperature error is estimated as dTs=0.8 °C.
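The standard deviate method reduces to a one-line estimate: extreme temperature = mean of the partial maximum series + KE times its standard deviation. A minimal sketch with hypothetical gauge data; KE = 7.5 is simply the midpoint of the 7-8 range the study recommends:

```python
import statistics

def extreme_stream_temp(partial_maxima, k_e=7.5):
    """Standard deviate method: extreme stream temperature estimated as
    the mean of a partial maximum temperature series plus K_E times its
    standard deviation. K_E in the 7-8 range per the study."""
    mu = statistics.mean(partial_maxima)
    sd = statistics.stdev(partial_maxima)
    return mu + k_e * sd

# Hypothetical annual maximum stream temperatures (deg C) at one gauge:
series = [27.1, 26.4, 28.0, 27.5, 26.9, 27.8, 26.2, 27.3]
print(round(extreme_stream_temp(series), 1))
```

Per the abstract, a unit error in KE maps to roughly 0.5 °C of error in the estimated extreme, which is why the choice of KE within 7-8 matters less than it might appear.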
NASA Technical Reports Server (NTRS)
Rhoads, James E.; Rigby, Jane Rebecca; Malhotra, Sangeeta; Allam, Sahar; Carilli, Chris; Combes, Francoise; Finkelstein, Keely; Finkelstein, Steven; Frye, Brenda; Gerin, Maryvonne;
2014-01-01
We report on two regularly rotating galaxies at redshift z ≈ 2, using high-resolution spectra of the bright [C II] 158 μm emission line from the HIFI instrument on the Herschel Space Observatory. Both SDSS090122.37+181432.3 ("S0901") and SDSSJ120602.09+514229.5 ("the Clone") are strongly lensed and show the double-horned line profile that is typical of rotating gas disks. Using a parametric disk model to fit the emission line profiles, we find that S0901 has a rotation speed of v sin(i) ≈ 120 ± 7 km s⁻¹ and a gas velocity dispersion of σg < 23 km s⁻¹ (1σ). The best-fitting model for the Clone is a rotationally supported disk having v sin(i) ≈ 79 ± 11 km s⁻¹ and σg < 4 km s⁻¹ (1σ). However, the Clone is also consistent with a family of dispersion-dominated models having σg = 92 ± 20 km s⁻¹. Our results showcase the potential of the [C II] line as a kinematic probe of high-redshift galaxy dynamics: [C II] is bright, accessible to heterodyne receivers with exquisite velocity resolution, and traces dense star-forming interstellar gas. Future [C II] line observations with ALMA would offer the further advantage of spatial resolution, allowing a clearer separation between rotation and velocity dispersion.
Renz, Erik; Hackney, Madeleine; Hall, Courtney
2016-01-01
Intraocular lenses (IOLs) provide distance and near refraction and are becoming the standard for cataract surgery. Multifocal glasses increase variability of toe clearance in older adults navigating stairs and increase fall risk; however, little is known about the biomechanics of stair navigation in individuals with multifocal IOLs. This study compared clearance while ascending and descending stairs in individuals with monofocal versus multifocal IOLs. Eight participants with multifocal IOLs (4 men, 4 women; mean age = 66.5 yr, standard deviation [SD] = 6.26) and fifteen male participants with monofocal IOLs (mean age = 69.9 yr, SD = 6.9) underwent vision and mobility testing. Motion analysis recorded kinematics, and custom software calculated clearances in three-dimensional space. No significant differences were found between groups on minimum clearance or variability. Clearance differed for ascending versus descending stairs: the first step onto the stair had the greatest toe clearance during ascent, whereas the final step to the floor had the greatest heel clearance during descent. This preliminary study indicates that multifocal IOLs have biomechanical characteristics similar to those of monofocal IOLs. Given that step characteristics are related to fall risk, we can tentatively speculate that multifocal IOLs may carry no additional fall risk.
[Aquatic Ecological Index based on freshwater (ICE(RN-MAE)) for the Rio Negro watershed, Colombia].
Forero, Laura Cristina; Longo, Magnolia; John Jairo, Ramirez; Guillermo, Chalar
2014-04-01
Available indices to assess the ecological status of rivers in Colombia are mostly based on subjective hypotheses about macroinvertebrate tolerance to pollution, which have important limitations. Here we present the application of a method to establish an index of ecological quality for lotic systems in Colombia. The index, based on macroinvertebrate abundance and physicochemical variables, was developed as an alternative to the BMWP-Col index. The method consists of determining an environmental gradient from correlations between physicochemical variables and abundance. The scores obtained at each sampling point are used, after standardization, in a weighted averaging (WA) model. In the WA model, abundances are weighted to estimate the optimum and tolerance values of each taxon; using this information we estimated the macroinvertebrate-based index of ecological quality (ICE(RN-MAE)) at each sampling site. Subsequently, we classified all sites using the index and concentrations of total phosphorus (TP) in a cluster analysis. Using the mean, maximum, minimum and standard deviation of TP and ICE(RN-MAE), we defined threshold values corresponding to three categories of ecological status: good, fair and critical.
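The weighted averaging (WA) step described above has a standard form in ecology: a taxon's optimum is the abundance-weighted mean of the site scores, and its tolerance is the abundance-weighted standard deviation. A sketch under that assumption; variable names and data are illustrative, not the authors' code:

```python
import math

def wa_optimum_tolerance(scores, abundances):
    """Abundance-weighted optimum and tolerance of one taxon.

    scores: environmental-gradient score of each site;
    abundances: the taxon's abundance at those sites.
    """
    total = sum(abundances)
    u = sum(y * a for y, a in zip(scores, abundances)) / total
    t = math.sqrt(sum(a * (y - u) ** 2
                      for y, a in zip(scores, abundances)) / total)
    return u, t

# Hypothetical gradient scores and abundances of one taxon at 5 sites:
site_scores = [0.2, 0.8, 1.5, 2.1, 2.9]
abundance = [12, 30, 25, 8, 2]
opt, tol = wa_optimum_tolerance(site_scores, abundance)
print(f"optimum = {opt:.2f}, tolerance = {tol:.2f}")
```

Site-level index values are then obtained by averaging taxon optima, weighted by the abundances observed at that site.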
Amberg, Alexander; Barrett, Dave; Beale, Michael H.; Beger, Richard; Daykin, Clare A.; Fan, Teresa W.-M.; Fiehn, Oliver; Goodacre, Royston; Griffin, Julian L.; Hankemeier, Thomas; Hardy, Nigel; Harnly, James; Higashi, Richard; Kopka, Joachim; Lane, Andrew N.; Lindon, John C.; Marriott, Philip; Nicholls, Andrew W.; Reily, Michael D.; Thaden, John J.; Viant, Mark R.
2013-01-01
There is a general consensus that supports the need for standardized reporting of metadata or information describing large-scale metabolomics and other functional genomics data sets. Reporting of standard metadata provides a biological and empirical context for the data, facilitates experimental replication, and enables the re-interrogation and comparison of data by others. Accordingly, the Metabolomics Standards Initiative is building a general consensus concerning the minimum reporting standards for metabolomics experiments; the Chemical Analysis Working Group (CAWG) is a member of this community effort. This article proposes the minimum reporting standards related to the chemical analysis aspects of metabolomics experiments, including: sample preparation, experimental analysis, quality control, metabolite identification, and data pre-processing. These minimum standards currently focus mostly upon mass spectrometry and nuclear magnetic resonance spectroscopy due to the popularity of these techniques in metabolomics. However, additional input concerning other techniques is welcomed and can be provided via the CAWG on-line discussion forum at http://msi-workgroups.sourceforge.net/ or http://Msi-workgroups-feedback@lists.sourceforge.net. Further, community input related to this document can also be provided via this electronic forum. PMID:24039616
28 CFR 50.24 - Annuity broker minimum qualifications.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Annuity broker minimum qualifications. 50....24 Annuity broker minimum qualifications. (a) Minimum standards. The Civil Division, United States Department of Justice, shall establish a list of annuity brokers who meet minimum qualifications for...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fried, D; Meier, J; Mawlawi, O
Purpose: Use a NEMA-IEC PET phantom to assess the robustness of FDG-PET-based radiomics features to changes in reconstruction parameters across different scanners. Methods: We scanned a NEMA-IEC PET phantom on 3 different scanners (GE Discovery VCT, GE Discovery 710, and Siemens mCT) using an FDG source-to-background ratio of 10:1. Images were retrospectively reconstructed using different iterations (2-3), subsets (21-24), Gaussian filter widths (2, 4, 6 mm), and matrix sizes (128, 192, 256). The 710 and mCT used time-of-flight and point-spread functions in reconstruction. The axial image through the center of the 6 active spheres was used for analysis. A region of interest containing all spheres was able to simulate a heterogeneous lesion due to partial volume effects. Maximum voxel deviations from all retrospectively reconstructed images (18 per scanner) were compared to our standard clinical protocol. PET images from 195 non-small cell lung cancer patients were used to compare feature variation. The ratio of a feature's standard deviation from the patient cohort versus the phantom images was calculated to assess feature robustness. Results: Across all images, the percentage of voxels differing by <1 SUV and <2 SUV ranged from 61-92% and 88-99%, respectively. Voxel-voxel similarity decreased when using higher-resolution image matrices (192/256 versus 128) and was comparable across scanners. Taking the ratio of patient and phantom feature standard deviations identified features that were not robust to changes in reconstruction parameters (e.g. co-occurrence correlation). Reasonably robust metrics (standard deviation ratios > 3) included routinely used SUV metrics (e.g. SUVmean and SUVmax) as well as some radiomics features (e.g. co-occurrence contrast, co-occurrence energy, standard deviation, and uniformity). Similar standard deviation ratios were observed across scanners.
Conclusions: Our method enabled a comparison of feature variability across scanners and was able to identify features that were not robust to changes in reconstruction parameters.
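The robustness screen described above is a simple ratio of two standard deviations. A sketch with hypothetical feature values (the function and data are illustrative, not the study's pipeline):

```python
import statistics

def robustness_ratio(patient_values, phantom_values):
    """Ratio of the across-patient SD of a feature to its
    across-reconstruction SD in the phantom. Ratios well above 1
    (the study used > 3) suggest biological variation dominates
    reconstruction-induced variation."""
    return statistics.stdev(patient_values) / statistics.stdev(phantom_values)

# Hypothetical SUVmax values across a patient cohort...
suv_max_patients = [4.2, 9.8, 6.5, 12.1, 3.3, 7.7, 5.9]
# ...and across phantom reconstructions with varied parameters:
suv_max_phantom = [7.9, 8.1, 8.0, 8.2, 7.8, 8.0]
print(round(robustness_ratio(suv_max_patients, suv_max_phantom), 1))
```

Features whose ratio falls near or below 1 vary as much with reconstruction settings as across patients and are poor candidates for multi-scanner studies.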
NASA Astrophysics Data System (ADS)
Stier, P.; Schutgens, N. A. J.; Bian, H.; Boucher, O.; Chin, M.; Ghan, S.; Huneeus, N.; Kinne, S.; Lin, G.; Myhre, G.; Penner, J. E.; Randles, C.; Samset, B.; Schulz, M.; Yu, H.; Zhou, C.
2012-09-01
Simulated multi-model "diversity" in aerosol direct radiative forcing estimates is often perceived as a measure of aerosol uncertainty. However, current models used for aerosol radiative forcing calculations vary considerably in model components relevant for forcing calculations, and the associated "host-model uncertainties" are generally conflated with the actual aerosol uncertainty. In this AeroCom Prescribed intercomparison study we systematically isolate and quantify host model uncertainties on aerosol forcing experiments through prescription of identical aerosol radiative properties in nine participating models. Even with prescribed aerosol radiative properties, simulated clear-sky and all-sky aerosol radiative forcings show significant diversity. For a purely scattering case with globally constant optical depth of 0.2, the global-mean all-sky top-of-atmosphere radiative forcing is -4.51 W m-2 and the inter-model standard deviation is 0.70 W m-2, corresponding to a relative standard deviation of 15%. For a case with partially absorbing aerosol with an aerosol optical depth of 0.2 and single scattering albedo of 0.8, the forcing changes to 1.26 W m-2, and the standard deviation increases to 1.21 W m-2, corresponding to a significant relative standard deviation of 96%. However, the top-of-atmosphere forcing variability owing to absorption is low, with relative standard deviations of 9% clear-sky and 12% all-sky. Scaling the forcing standard deviation for a purely scattering case to match the sulfate radiative forcing in the AeroCom Direct Effect experiment demonstrates that host model uncertainties could explain about half of the overall sulfate forcing diversity of 0.13 W m-2 in the AeroCom Direct Radiative Effect experiment. Host model errors in aerosol radiative forcing are largest in regions of uncertain host model components, such as stratocumulus cloud decks or areas with poorly constrained surface albedos, such as sea ice.
Our results demonstrate that host model uncertainties are an important component of aerosol forcing uncertainty that require further attention.
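The relative standard deviations quoted above follow directly from the reported means and inter-model spreads:

```python
def relative_sd_percent(mean, sd):
    """Inter-model relative standard deviation, in percent."""
    return 100.0 * sd / abs(mean)

# Scattering-only case: mean -4.51 W m^-2, SD 0.70 W m^-2 (text: ~15%)
# Absorbing case:       mean  1.26 W m^-2, SD 1.21 W m^-2 (text: ~96%)
print(f"{relative_sd_percent(-4.51, 0.70):.1f}%  "
      f"{relative_sd_percent(1.26, 1.21):.1f}%")
```

Note the absorbing case looks far more "uncertain" in relative terms mainly because its mean forcing is close to zero, not because the absolute spread grew proportionally.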
Robust Alternatives to the Standard Deviation in Processing of Physics Experimental Data
NASA Astrophysics Data System (ADS)
Shulenin, V. P.
2016-10-01
Properties of robust estimators of the scale parameter are studied. It is noted that the median of absolute deviations and the modified estimator of the average Gini difference have asymptotically normal distributions and bounded influence functions, and are B-robust; hence, unlike the standard deviation, they are protected against outliers in the sample. Results comparing scale-parameter estimators are given for a Gaussian model with contamination. An adaptive variant of the modified estimator of the average Gini difference is considered.
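The two robust scale estimators named above have simple textbook forms; a sketch, with a Gaussian-consistency factor for the MAD and the plain (unmodified) Gini mean difference, since the paper's exact modification is not specified here:

```python
import statistics

def mad_scale(x):
    """Median absolute deviation, scaled by 1.4826 so it estimates the
    standard deviation consistently under a Gaussian model."""
    med = statistics.median(x)
    return 1.4826 * statistics.median(abs(v - med) for v in x)

def gini_mean_difference(x):
    """Mean of absolute pairwise differences (Gini mean difference)."""
    n = len(x)
    s = sum(abs(x[i] - x[j]) for i in range(n) for j in range(i + 1, n))
    return s / (n * (n - 1) / 2)

data = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 45.0]  # one gross outlier
# The outlier inflates the sample SD but barely moves the MAD:
print(statistics.stdev(data), mad_scale(data), gini_mean_difference(data))
```

This contamination behavior is exactly the B-robustness point of the abstract: the sample SD has an unbounded influence function, while the MAD's is bounded.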
[Minimum Standards for the Spatial Accessibility of Primary Care: A Systematic Review].
Voigtländer, S; Deiters, T
2015-12-01
Regional disparities in access to primary care are substantial in Germany, especially in terms of spatial accessibility. However, there is no legally or generally binding minimum standard for the spatial accessibility effort that is still acceptable. Our objective is to analyse existing minimum standards, the methods used, as well as their empirical basis. A systematic literature review was undertaken of publications regarding minimum standards for the spatial accessibility of primary care, based on a title word and keyword search using PubMed, SSCI/Web of Science, EMBASE and Cochrane Library. 8 minimum standards from the USA, Germany and Austria could be identified. All of them specify the acceptable spatial accessibility effort in terms of travel time; almost half also include distance(s). The maximum acceptable travel time is 30 min, and it tends to be lower in urban areas. Primary care is, according to the identified minimum standards, part of the local area (Nahbereich) of so-called central places (Zentrale Orte) providing basic goods and services. The consideration of means of transport, e.g. public transport, is heterogeneous. The standards are based on empirical studies, consultation with service providers, practical experiences, and regional planning/central place theory, as well as on legal or political regulations. The identified minimum standards provide important insights into the effort that is still acceptable regarding spatial accessibility, i.e. travel time, distance and means of transport. It seems reasonable to complement the current planning system for outpatient care, which is based on provider-to-population ratios, with a gravity-model method to identify places as well as populations with insufficient spatial accessibility. Due to the lack of a common minimum standard we propose, subject to further discussion, to begin with a threshold based on the spatial accessibility limit of the local area, i.e. 30 min to the next primary care provider for at least 90% of the regional population. Exceeding this threshold would necessitate a discussion of a health care deficit and, in line with this, a potential need for intervention, e.g. in terms of alternative forms of health care provision. © Georg Thieme Verlag KG Stuttgart · New York.
Tracking of white-tailed deer migration by Global Positioning System
Nelson, M.E.; Mech, L.D.; Frame, P.F.
2004-01-01
Based on global positioning system (GPS) radiocollars in northeastern Minnesota, deer migrated 23-45 km in spring during 31-356 h, deviating a maximum 1.6-4.0 km perpendicular from a straight line of travel between their seasonal ranges. They migrated a minimum of 2.1-18.6 km/day over 11-56 h during 2-14 periods of travel. Minimum travel during 1-h intervals averaged 1.5 km/h. Deer paused 1-12 times, averaging 24 h/pause. Deer migrated similar distances in autumn with comparable rates and patterns of travel.
Dispersive effects from a comparison of electron and positron scattering from ¹²C
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paul Gueye; M. Bernheim; J. F. Danel
1998-05-01
Dispersive effects have been investigated by comparing elastic scattering of electrons and positrons from ¹²C at the Saclay Linear Accelerator. The results demonstrate that dispersive effects at energies of 262 MeV and 450 MeV are less than 2% below the first diffraction minimum [0.95 < q_eff (fm⁻¹) < 1.66], in agreement with the prediction of Friar and Rosen. At the position of this minimum (q_eff = 1.84 fm⁻¹), the deviation between the positron scattering cross section and the cross section derived from the electron results is -44% ± 30%.
40 CFR 63.7751 - What reports must I submit and when?
Code of Federal Regulations, 2010 CFR
2010-07-01
... deviations from any emissions limitations (including operating limit), work practice standards, or operation and maintenance requirements, a statement that there were no deviations from the emissions limitations...-of-control during the reporting period. (7) For each deviation from an emissions limitation...
Feeney, Joanne; Savva, George M; O'Regan, Claire; King-Kallimanis, Bellinda; Cronin, Hilary; Kenny, Rose Anne
2016-05-31
Knowing the reliability of cognitive tests, particularly those commonly used in clinical practice, is important in order to interpret the clinical significance of a change in performance or a low score on a single test. We report the intra-class correlation (ICC), standard error of measurement (SEM) and minimum detectable change (MDC) for the Mini-Mental State Examination (MMSE), Montreal Cognitive Assessment (MoCA), and Color Trails Test (CTT) among community-dwelling older adults. A total of 130 participants aged 55 and older without severe cognitive impairment underwent two cognitive assessments between two and four months apart. Half the group changed rater between assessments and half changed time of day. Mean (standard deviation) MMSE was 28.1 (2.1) at baseline and 28.4 (2.1) at repeat. Mean (SD) MoCA increased from 24.8 (3.6) to 25.2 (3.6). There was a rater effect on the CTT, but not on the MMSE or MoCA. The SEM of the MMSE was 1.0, leading to an MDC (based on a 95% confidence interval) of 3 points. The SEM of the MoCA was 1.5, implying an MDC95 of 4 points. The MoCA (ICC = 0.81) was more reliable than the MMSE (ICC = 0.75), but all tests examined showed substantial within-patient variation. An individual's score would have to change by 3 or more points on the MMSE and 4 or more points on the MoCA for the rater to be confident that the change was not due to measurement error. This has important implications for epidemiologists and clinicians in dementia screening and diagnosis.
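The SEM and MDC values above follow from the standard test-retest formulas SEM = SD·sqrt(1 - ICC) and MDC95 = 1.96·sqrt(2)·SEM; plugging in the abstract's SDs and ICCs reproduces its rounded figures (SEM ≈ 1.0 and 1.5; MDC95 ≈ 3 and 4 points):

```python
import math

def sem(sd, icc):
    """Standard error of measurement from test-retest reliability."""
    return sd * math.sqrt(1 - icc)

def mdc95(sem_value):
    """Minimum detectable change at 95% confidence."""
    return 1.96 * math.sqrt(2) * sem_value

# Values from the abstract (MMSE: SD 2.1, ICC 0.75; MoCA: SD 3.6, ICC 0.81)
for name, sd, icc in [("MMSE", 2.1, 0.75), ("MoCA", 3.6, 0.81)]:
    s = sem(sd, icc)
    print(f"{name}: SEM = {s:.2f}, MDC95 = {mdc95(s):.2f}")
```

The sqrt(2) appears because a change score is the difference of two measurements, each carrying one SEM of error.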
Aad, G.; Abbott, B.; Abdallah, J.; ...
2016-03-08
A search is performed for the production of high-mass resonances decaying into a photon and a jet in 3.2 fb -1 of proton-proton collisions at a centre-of-mass energy of √s =13 TeV collected by the ATLAS detector at the Large Hadron Collider. Selected events have an isolated photon and a jet, each with transverse momentum above 150 GeV. No significant deviation of the γ+jet invariant mass distribution from the background-only hypothesis is found. Limits are set at 95% confidence level on the cross sections of generic Gaussian-shaped signals and of a few benchmark phenomena beyond the Standard Model: excited quarks with vector-like couplings to the Standard Model particles, and non-thermal quantum black holes in two models of extra spatial dimensions. The minimum excluded visible cross sections for Gaussian-shaped resonances with width-to-mass ratios of 2% decrease from about 6 fb for a mass of 1.5 TeV to about 0.8 fb for a mass of 5 TeV. The minimum excluded visible cross sections for Gaussian-shaped resonances with width-to-mass ratios of 15% decrease from about 50 fb for a mass of 1.5 TeV to about 1.0 fb for a mass of 5 TeV. As a result, excited quarks are excluded below masses of 4.4 TeV, and non-thermal quantum black holes are excluded below masses of 3.8 (6.2) TeV for Randall-Sundrum (Arkani-Hamed-Dimopoulos-Dvali) models with one (six) extra dimensions.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-07
... Current List of Laboratories Which Meet Minimum Standards To Engage in Urine Drug Testing for Federal... Drug Testing Programs (Mandatory Guidelines). The Mandatory Guidelines were first published in the... of Laboratories Engaged in Urine Drug Testing for Federal Agencies,'' sets strict standards that...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-14
Use of Standard Deviations as Predictors in Models Using Large-Scale International Data Sets
ERIC Educational Resources Information Center
Austin, Bruce; French, Brian; Adesope, Olusola; Gotch, Chad
2017-01-01
Measures of variability are successfully used in predictive modeling in research areas outside of education. This study examined how standard deviations can be used to address research questions not easily addressed using traditional measures such as group means based on index variables. Student survey data were obtained from the Organisation for…
Screen Twice, Cut Once: Assessing the Predictive Validity of Teacher Selection Tools
ERIC Educational Resources Information Center
Goldhaber, Dan; Grout, Cyrus; Huntington-Klein, Nick
2015-01-01
It is well documented that teachers can have profound effects on student outcomes. Empirical estimates find that a one standard deviation increase in teacher quality raises student test achievement by 10 to 25 percent of a standard deviation. More recent evidence shows that the effectiveness of teachers can affect long-term student outcomes, such…
Comparing Measurement Error between Two Different Methods of Measurement of Various Magnitudes
ERIC Educational Resources Information Center
Zavorsky, Gerald S.
2010-01-01
Measurement error is a common problem in several fields of research such as medicine, physiology, and exercise science. The standard deviation of repeated measurements on the same person is the measurement error. One way of presenting measurement error is called the repeatability, which is 2.77 multiplied by the within subject standard deviation.…
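The repeatability statistic described in this abstract (2.77 times the within-subject standard deviation) can be sketched in a few lines; the sample data below are invented for illustration:

```python
import math

def within_subject_sd(pairs):
    """Within-subject SD from duplicate measurements on each subject:
    sqrt(mean over subjects of (difference^2 / 2))."""
    return math.sqrt(sum((a - b) ** 2 / 2 for a, b in pairs) / len(pairs))

def repeatability(pairs):
    """Repeatability = 2.77 * within-subject SD; two measurements on the
    same person are expected to differ by less than this about 95% of the time."""
    return 2.77 * within_subject_sd(pairs)

# Hypothetical duplicate measurements (trial 1, trial 2) on five subjects
measurements = [(10.1, 10.4), (9.8, 9.5), (11.0, 11.2), (10.5, 10.0), (9.9, 10.1)]
print(round(repeatability(measurements), 3))  # → 0.626
```

The 2.77 factor is sqrt(2) * 1.96, i.e. the 95% limit for the difference between two measurements that each carry the within-subject error.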
Parabolic trough receiver heat loss and optical efficiency round robin 2015/2016
NASA Astrophysics Data System (ADS)
Pernpeintner, Johannes; Schiricke, Björn; Sallaberry, Fabienne; de Jalón, Alberto García; López-Martín, Rafael; Valenzuela, Loreto; de Luca, Antonio; Georg, Andreas
2017-06-01
A round robin for parabolic trough receiver heat loss and optical efficiency in the laboratory was performed between five institutions using five receivers in 2015/2016. Heat loss testing was performed at three cartridge heater test benches and one Joule heating test bench in the temperature range between 100 °C and 550 °C. Optical efficiency testing was performed with two spectrometric test benches and one calorimetric test bench. Heat loss testing results showed standard deviations on the order of 6% to 12% for most temperatures and receivers, and a standard deviation of 17% for one receiver at 100 °C. Optical efficiency is presented normalized across laboratories, showing standard deviations of 0.3% to 1.3% depending on the receiver.
Benign positional vertigo and hyperuricaemia.
Adam, A M
2005-07-01
To find out if there is any association between serum uric acid level and positional vertigo. A prospective, case-controlled study. A private neurological clinic. All patients presenting with vertigo. Ninety patients were seen in this period, with 78 males and 19 females. Mean age was 47 +/- 3 years (at the 95% confidence level) with a standard deviation of 12.4. Their mean uric acid level was 442 +/- 16 (at the 95% confidence level) with a standard deviation of 79.6 umol/l, as compared to 291 +/- 17 (at the 95% confidence level) with a standard deviation of 79.7 umol/l in the control group. The P-value was less than 0.001. There is a significant association between high uric acid and benign positional vertigo.
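The confidence limits quoted in this abstract follow from the reported standard deviations via the usual large-sample formula, half-width = z * SD / sqrt(n). A minimal sketch, assuming n ≈ 97 for the patient group (the function name and sample size are illustrative assumptions, not taken from the paper):

```python
import math

def ci_half_width(sd, n, z=1.96):
    """Half-width of an approximate 95% confidence interval for a mean:
    z * (standard deviation / sqrt(n))."""
    return z * sd / math.sqrt(n)

# Reported patient-group values: SD = 79.6 umol/l, assumed n ~ 97
print(round(ci_half_width(79.6, 97)))  # close to the reported +/- 16
```

This reproduces the abstract's +/- 16 limit for the patient group, which is how a "mean +/- x at the 95% confidence level" statement relates to the reported standard deviation.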
NASA Technical Reports Server (NTRS)
Clark, P. E.; Andre, C. G.; Adler, I.; Weidner, J.; Podwysocki, M.
1976-01-01
The positive correlation between Al/Si X-ray fluorescence intensity ratios determined during the Apollo 15 lunar mission and a broad-spectrum visible albedo of the moon is quantitatively established. Linear regression analysis performed on 246 one-degree geographic cells of X-ray fluorescence intensity and visible albedo data points produced a statistically significant correlation coefficient of 0.78. Three distinct distributions of data were identified as (1) within one standard deviation of the regression line, (2) greater than one standard deviation below the line, and (3) greater than one standard deviation above the line. The latter two distributions of data were found to occupy distinct geographic areas in the Palus Somni region.
Screening Samples for Arsenic by Inductively Coupled Plasma-Mass Spectrometry for Treaty Samples
2014-02-01
[Table residue: replicate arsenic recovery data with standard deviations and relative standard deviations (% RSD) ranging from approximately 2.6% to 15.9% across concentration levels.]
Quinn, TA; Granite, S; Allessie, MA; Antzelevitch, C; Bollensdorff, C; Bub, G; Burton, RAB; Cerbai, E; Chen, PS; Delmar, M; DiFrancesco, D; Earm, YE; Efimov, IR; Egger, M; Entcheva, E; Fink, M; Fischmeister, R; Franz, MR; Garny, A; Giles, WR; Hannes, T; Harding, SE; Hunter, PJ; Iribe, G; Jalife, J; Johnson, CR; Kass, RS; Kodama, I; Koren, G; Lord, P; Markhasin, VS; Matsuoka, S; McCulloch, AD; Mirams, GR; Morley, GE; Nattel, S; Noble, D; Olesen, SP; Panfilov, AV; Trayanova, NA; Ravens, U; Richard, S; Rosenbaum, DS; Rudy, Y; Sachs, F; Sachse, FB; Saint, DA; Schotten, U; Solovyova, O; Taggart, P; Tung, L; Varró, A; Volders, PG; Wang, K; Weiss, JN; Wettwer, E; White, E; Wilders, R; Winslow, RL; Kohl, P
2011-01-01
Cardiac experimental electrophysiology is in need of a well-defined Minimum Information Standard for recording, annotating, and reporting experimental data. As a step toward establishing this, we present a draft standard, called Minimum Information about a Cardiac Electrophysiology Experiment (MICEE). The ultimate goal is to develop a useful tool for cardiac electrophysiologists that facilitates and improves dissemination of the minimum information necessary for reproduction of cardiac electrophysiology research, allowing for easier comparison and utilisation of findings by others. It is hoped that this will enhance the integration of individual results into experimental, computational, and conceptual models. In its present form, this draft is intended for assessment and development by the research community. We invite the reader to join this effort, and, if deemed productive, implement the Minimum Information about a Cardiac Electrophysiology Experiment standard in their own work. PMID:21745496
MINIMUM STANDARDS FOR APPROVAL OF GUIDANCE PROGRAMS IN SMALL HIGH SCHOOLS.
ERIC Educational Resources Information Center
New Mexico State Dept. of Education, Santa Fe.
A small high school is defined as one with an enrollment of 150 students or less in grades 7-12 or in grades 9-12. The minimum guidance program standards for small high schools, as prescribed by the New Mexico Department of Education, include the following requirements--(1) one person with a minimum of six semester hours in guidance must be…
NASA Astrophysics Data System (ADS)
Won, An-Na; Song, Hae-Eun; Yang, Young-Kwon; Park, Jin-Chul; Hwang, Jung-Ha
2017-07-01
After the outbreak of the MERS (Middle East Respiratory Syndrome) epidemic, issues were raised regarding the response capabilities of medical institutions, including the lack of isolation rooms at hospitals. Since then, the government of Korea has been revising regulations to enforce medical laws in order to expand the operation of isolation rooms and to strengthen standards regarding their mandatory installation at hospitals. Among general and tertiary hospitals in Korea, a total of 159 are estimated to be required to install isolation rooms to meet minimum standards. For the purpose of contributing to hospital construction plans in the future, this study conducted a questionnaire survey of experts and analysed the environment and devices necessary in isolation rooms, to determine their appropriate minimum size to treat patients. The result of the analysis is as follows: First, isolation rooms at hospitals are required to have a minimum 3,300 mm minor axis and a minimum 5,000 mm major axis for the isolation room itself, and a minimum 1,800 mm minor axis for the antechamber where personal protective equipment is donned and removed. Second, the 15 m²-or-larger standard for the floor area of isolation rooms will have to be reviewed, and standards for the minimum width of isolation rooms will have to be established.
A deviation display method for visualising data in mobile gamma-ray spectrometry.
Kock, Peder; Finck, Robert R; Nilsson, Jonas M C; Ostlund, Karl; Samuelsson, Christer
2010-09-01
A real time visualisation method, to be used in mobile gamma-spectrometric search operations using standard detector systems, is presented. The new method, called deviation display, uses a modified waterfall display to present relative changes in spectral data over energy and time. Using unshielded (137)Cs and (241)Am point sources and different natural background environments, the behaviour of the deviation displays is demonstrated and analysed for two standard detector types (NaI(Tl) and HPGe). The deviation display enhances positive significant changes while suppressing the natural background fluctuations. After an initialization time of about 10 min this technique leads to a homogeneous display dominated by the background colour, where even small changes in spectral data are easy to discover. As this paper shows, the deviation display method works well for all tested gamma energies and natural background radiation levels and with both tested detector systems.
Han, Xu; Suo, Shiteng; Sun, Yawen; Zu, Jinyan; Qu, Jianxun; Zhou, Yan; Chen, Zengai; Xu, Jianrong
2017-03-01
To compare four methods of region-of-interest (ROI) placement for apparent diffusion coefficient (ADC) measurements in distinguishing low-grade gliomas (LGGs) from high-grade gliomas (HGGs). Two independent readers measured ADC parameters using four ROI methods (single-slice [single-round, five-round and freehand] and whole-volume) on 43 patients (20 LGGs, 23 HGGs) who had undergone 3.0 Tesla diffusion-weighted imaging, and the time required for each method of ADC measurement was recorded. Intraclass correlation coefficients (ICCs) were used to assess interobserver variability of ADC measurements. Mean and minimum ADC values and time required were compared using paired Student's t-tests. All ADC parameters (mean/minimum ADC values of the three single-slice methods; mean/minimum/standard deviation/skewness/kurtosis/10th and 25th percentiles/median/maximum of the whole-volume method) were correlated with tumor grade (low versus high) by unpaired Student's t-tests. Discriminative ability was determined by receiver operating characteristic curves. All ADC measurements except the minimum, skewness, and kurtosis of the whole-volume ROI differed significantly between LGGs and HGGs (all P < 0.05). The mean ADC value of the single-round ROI had the highest effect size (0.72) and the greatest area under the curve (0.872). The three single-slice methods had good to excellent ICCs (0.67-0.89) and the whole-volume method fair to excellent ICCs (0.32-0.96). Minimum ADC values differed significantly between whole-volume and single-round ROIs (P = 0.003) and between whole-volume and five-round ROIs (P = 0.001). The whole-volume method took significantly longer than all single-slice methods (all P < 0.001). ADC measurements are influenced by ROI determination methods. Whole-volume histogram analysis did not yield better results than single-slice methods and took longer. The mean ADC value derived from a single-round ROI is the most optimal parameter for differentiating LGGs from HGGs. 3 J. Magn. Reson. Imaging 2017;45:722-730. © 2016 International Society for Magnetic Resonance in Medicine.
Castro-Sánchez, Adelaida María; Matarán-Peñarrocha, Guillermo A; Sánchez-Labraca, Nuria; Quesada-Rubio, José Manuel; Granero-Molina, José; Moreno-Lorenzo, Carmen
2011-01-01
Fibromyalgia is a prevalent musculoskeletal disorder associated with widespread mechanical tenderness, fatigue, non-refreshing sleep, depressed mood and pervasive dysfunction of the autonomic nervous system: tachycardia, postural intolerance, Raynaud's phenomenon and diarrhoea. To determine the effects of craniosacral therapy on sensitive tender points and heart rate variability in patients with fibromyalgia. A randomized controlled trial. Ninety-two patients with fibromyalgia were randomly assigned to an intervention group or placebo group. Patients received treatments for 20 weeks. The intervention group underwent a craniosacral therapy protocol and the placebo group received sham treatment with disconnected magnetotherapy equipment. Pain intensity levels were determined by evaluating tender points, and heart rate variability was recorded by 24-hour Holter monitoring. After 20 weeks of treatment, the intervention group showed significant reduction in pain at 13 of the 18 tender points (P < 0.05). Significant differences in temporal standard deviation of RR segments, root mean square deviation of temporal standard deviation of RR segments and clinical global impression of improvement versus baseline values were observed in the intervention group but not in the placebo group. At two months and one year post therapy, the intervention group showed significant differences versus baseline in tender points at left occiput, left-side lower cervical, left epicondyle and left greater trochanter and significant differences in temporal standard deviation of RR segments, root mean square deviation of temporal standard deviation of RR segments and clinical global impression of improvement. Craniosacral therapy improved medium-term pain symptoms in patients with fibromyalgia.
Code of Federal Regulations, 2010 CFR
2010-07-01
... which are distinct from the standard deviation process and specific to the requirements of the Federal... agency request a deviation from the provisions of this part? 102-38.30 Section 102-38.30 Public Contracts... executive agency request a deviation from the provisions of this part? Refer to §§ 102-2.60 through 102-2...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-14
... August 31, 2012. Ray Towles, Deputy Director, Flight Standards Service. Adoption of the Amendment... Muni, RNAV (GPS) RWY 36, Amdt 1 Mountain View, CA, Moffett Federal Airfield, Takeoff Minimums and...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-16
... agents and toxins list; whether minimum standards for personnel reliability, physical and cyber security... toxins list; (3) whether minimum standards for personnel reliability, physical and cyber security should...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-20
... CONTACT: Richard A. Dunham III, Flight Procedure Standards Branch (AFS-420), Flight Technologies and... 2012 Red Cloud, NE., Red Cloud Muni, Takeoff Minimums and Obstacle DP, Orig Effective 23 AUGUST 2012...
A Comparison of Heuristic Procedures for Minimum within-Cluster Sums of Squares Partitioning
ERIC Educational Resources Information Center
Brusco, Michael J.; Steinley, Douglas
2007-01-01
Perhaps the most common criterion for partitioning a data set is the minimization of the within-cluster sums of squared deviation from cluster centroids. Although optimal solution procedures for within-cluster sums of squares (WCSS) partitioning are computationally feasible for small data sets, heuristic procedures are required for most practical…
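The within-cluster sum of squares (WCSS) criterion this abstract refers to is straightforward to evaluate for a given partition; a minimal sketch (the data points and labels below are illustrative, not from the paper):

```python
def wcss(points, labels):
    """Within-cluster sum of squared deviations from cluster centroids."""
    # Group points by cluster label
    clusters = {}
    for p, lab in zip(points, labels):
        clusters.setdefault(lab, []).append(p)
    total = 0.0
    for members in clusters.values():
        # Centroid = coordinate-wise mean of the cluster's members
        centroid = [sum(c) / len(members) for c in zip(*members)]
        # Add squared Euclidean deviation of each member from the centroid
        total += sum(sum((x - m) ** 2 for x, m in zip(p, centroid))
                     for p in members)
    return total

points = [(0.0, 0.0), (0.0, 2.0), (10.0, 10.0), (10.0, 12.0)]
print(wcss(points, [0, 0, 1, 1]))  # → 4.0
```

Heuristics such as k-means search over partitions to drive this quantity down; the optimal procedures the abstract mentions minimize it exactly, which is only tractable for small data sets.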
Computation of rare transitions in the barotropic quasi-geostrophic equations
NASA Astrophysics Data System (ADS)
Laurie, Jason; Bouchet, Freddy
2015-01-01
We investigate the theoretical and numerical computation of rare transitions in simple geophysical turbulent models. We consider the barotropic quasi-geostrophic and two-dimensional Navier-Stokes equations in regimes where bistability between two coexisting large-scale attractors exist. By means of large deviations and instanton theory with the use of an Onsager-Machlup path integral formalism for the transition probability, we show how one can directly compute the most probable transition path between two coexisting attractors analytically in an equilibrium (Langevin) framework and numerically otherwise. We adapt a class of numerical optimization algorithms known as minimum action methods to simple geophysical turbulent models. We show that by numerically minimizing an appropriate action functional in a large deviation limit, one can predict the most likely transition path for a rare transition between two states. By considering examples where theoretical predictions can be made, we show that the minimum action method successfully predicts the most likely transition path. Finally, we discuss the application and extension of such numerical optimization schemes to the computation of rare transitions observed in direct numerical simulations and experiments and to other, more complex, turbulent systems.
Family structure and childhood anthropometry in Saint Paul, Minnesota in 1918
Warren, John Robert
2017-01-01
Concern with childhood nutrition prompted numerous surveys of children’s growth in the United States after 1870. The Children’s Bureau’s 1918 “Weighing and Measuring Test” measured two million children to produce the first official American growth norms. Individual data for 14,000 children survive from the Saint Paul, Minnesota survey, whose stature closely approximated national norms. As well as anthropometry, the survey recorded exact ages, street address and full name. These variables allow linkage to the 1920 census to obtain demographic and socioeconomic information. We matched 72% of children to census families, creating a sample of nearly 10,000 children. Children in the entire survey (linked set) averaged 0.74 (0.72) standard deviations below modern WHO height-for-age standards, and 0.48 (0.46) standard deviations below modern weight-for-age norms. Sibship size strongly influenced height-for-age, and had a weaker influence on weight-for-age. Each additional child aged six or under reduced height-for-age scores by 0.07 standard deviations (95% CI: −0.03, 0.11). Teenage siblings had little effect on height-for-age. Social class effects were substantial. Children of laborers averaged half a standard deviation shorter than children of professionals. Family structure and socio-economic status had compounding impacts on children’s stature. PMID:28943749
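The height-for-age figures in this abstract are standard-deviation (z) scores relative to reference growth norms. A generic sketch of the computation, with invented reference values (the WHO norms themselves are age- and sex-specific tables, not shown here):

```python
def z_score(value, ref_mean, ref_sd):
    """Standard-deviation (z) score of a measurement relative to a
    reference population with the given mean and SD."""
    return (value - ref_mean) / ref_sd

# Invented reference: mean height 110 cm, SD 5 cm for a given age/sex group
print(round(z_score(106.3, 110.0, 5.0), 2))  # → -0.74
```

A child scoring −0.74 on this scale sits 0.74 reference standard deviations below the reference median, which is how the survey-wide averages above should be read.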
NASA Technical Reports Server (NTRS)
Spera, David A.
2008-01-01
Equations are developed with which to calculate lift and drag coefficients along the spans of torsionally-stiff rotating airfoils of the type used in wind turbine rotors and wind tunnel fans, at angles of attack in both the unstalled and stalled aerodynamic regimes. Explicit adjustments are made for the effects of aspect ratio (length to chord width) and airfoil thickness ratio. Calculated lift and drag parameters are compared to measured parameters for 55 airfoil data sets including 585 test points. Mean deviation was found to be -0.4 percent and standard deviation was 4.8 percent. When the proposed equations were applied to the calculation of power from a stall-controlled wind turbine tested in a NASA wind tunnel, mean deviation from 54 data points was -1.3 percent and standard deviation was 4.0 percent. Pressure-rise calculations for a large wind tunnel fan deviated by 2.7 percent (mean) and 4.4 percent (standard). The assumption that a single set of lift and drag coefficient equations can represent the stalled aerodynamic behavior of a wide variety of airfoils was found to be satisfactory.
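The error statistics quoted above (mean deviation and standard deviation of percent differences between calculated and measured values) follow a simple pattern; a sketch with invented data, using the population form of the standard deviation (whether the paper used n or n−1 in the denominator is an assumption here):

```python
import math

def deviation_stats(calculated, measured):
    """Mean and (population) standard deviation of the percent deviations
    of calculated values from measured values."""
    devs = [100.0 * (c - m) / m for c, m in zip(calculated, measured)]
    mean = sum(devs) / len(devs)
    var = sum((d - mean) ** 2 for d in devs) / len(devs)
    return mean, math.sqrt(var)

# Hypothetical calculated vs. measured lift coefficients at four test points
calc = [1.02, 0.98, 1.05, 0.97]
meas = [1.00, 1.00, 1.00, 1.00]
mean_dev, sd_dev = deviation_stats(calc, meas)
print(round(mean_dev, 2), round(sd_dev, 2))  # → 0.5 3.2
```

A small mean deviation with a larger standard deviation, as in the abstract's −0.4% / 4.8% figures, indicates scatter around the measurements rather than a systematic bias.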
Corneal Epithelium Thickness Profile in 614 Normal Chinese Children Aged 7-15 Years Old.
Ma, Yingyan; He, Xiangui; Zhu, Xiaofeng; Lu, Lina; Zhu, Jianfeng; Zou, Haidong
2016-03-23
The purpose of the study is to describe the values and distribution of corneal epithelium thickness (CET) in normal Chinese school-aged children, and to explore associated factors with CET. CET maps were measured by Fourier-domain optical coherence tomography (FD-OCT) in normal Chinese children aged 7 to 15 years old from two randomly selected schools in Shanghai, China. Children with normal intraocular pressure were further examined for cycloplegic autorefraction, corneal curvature radius (CCR) and axial length. Central (2-mm diameter area), para-central (2- to 5-mm diameter area), and peripheral (5- to 6-mm diameter area) CET in the superior, superotemporal, temporal, inferotemporal, inferior, inferonasal, nasal, superonasal cornea; minimum, maximum, range, and standard deviation of CET within the 5-mm diameter area were recorded. The CET was thinner in the superior than in the inferior and was thinner in the temporal than in the nasal. The maximum CET was located in the inferior zone, and the minimum CET was in the superior zone. A thicker central CET was associated with male gender (p = 0.009) and older age (p = 0.037) but not with CCR (p = 0.061), axial length (p = 0.253), or refraction (p = 0.351) in the multiple regression analyses. CCR, age, and gender were correlated with para-central and peripheral CET.