Pelham, Sabra D
2011-03-01
English-acquiring children frequently make pronoun case errors, while German-acquiring children rarely do. Nonetheless, German-acquiring children frequently make article case errors. It is proposed that when child-directed speech contains a high percentage of case-ambiguous forms, case errors are common in child language; when percentages are low, case errors are rare. Input to English and German children was analyzed for percentage of case-ambiguous personal pronouns on adult tiers of corpora from 24 English-acquiring and 24 German-acquiring children. Also analyzed for German was the percentage of case-ambiguous articles. Case-ambiguous pronouns averaged 63.3% in English, compared with 7.6% in German. The percentage of case-ambiguous articles in German was 77.0%. These percentages align with the children's errors reported in the literature. It appears children may be sensitive to levels of ambiguity such that low ambiguity may aid error-free acquisition, while high ambiguity may blind children to case distinctions, resulting in errors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fuangrod, T; Simpson, J; Greer, P
Purpose: A real-time patient treatment delivery verification system using EPID (Watchdog) has been developed as an advanced patient safety tool. In a pilot study, data were acquired for 119 prostate and head and neck (HN) IMRT patient deliveries to generate body-site-specific action limits using statistical process control. The purpose of this study is to determine the sensitivity of Watchdog to detect clinically significant errors during treatment delivery. Methods: Watchdog utilizes a physics-based model to generate a series of predicted transit cine EPID images as a reference data set, and compares these in real-time to measured transit cine EPID images acquired during treatment using chi comparison (4%, 4mm criteria) after the initial 2s of treatment to allow for dose ramp-up. Four study cases were used: dosimetric (monitor unit) errors in prostate (7 fields) and HN (9 fields) IMRT treatments of (5%, 7%, 10%) and positioning (systematic displacement) errors in the same treatments of (5mm, 7mm, 10mm). These errors were introduced by modifying the patient CT scan and re-calculating the predicted EPID data set. The error-embedded predicted EPID data sets were compared to the measured EPID data acquired during patient treatment. The treatment delivery percentage (measured from 2s) at which Watchdog detected the error was determined. Results: Watchdog detected all simulated errors for all fields during delivery. The dosimetric errors were detected at average treatment delivery percentages of (4%, 0%, 0%) and (7%, 0%, 0%) for prostate and HN, respectively. For patient positional errors, the average treatment delivery percentages were (52%, 43%, 25%) and (39%, 16%, 6%). Conclusion: These results suggest that Watchdog can detect significant dosimetric and positioning errors in prostate and HN IMRT treatments in real-time, allowing for treatment interruption. Displacements of the patient take longer to detect; however, an incorrect body site or very large geographic misses will be detected rapidly.
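For readers unfamiliar with the comparison step, the sketch below illustrates a simplified Bakai-style chi comparison between a predicted and a measured image frame. It is a minimal illustration under assumed inputs (aligned 2-D arrays on a common pixel grid, toy data), not the Watchdog implementation itself.

```python
import numpy as np

def chi_map(measured, predicted, pixel_mm, dose_tol=0.04, dist_tol_mm=4.0):
    """Simplified Bakai-style chi comparison of two 2-D image frames.

    dose_tol is a fraction of the maximum predicted signal (4%), and
    dist_tol_mm is the distance-to-agreement criterion (4 mm).
    Illustrative sketch only, not the Watchdog algorithm itself.
    """
    delta_d = dose_tol * predicted.max()          # absolute signal tolerance
    gy, gx = np.gradient(predicted, pixel_mm)     # spatial gradient of the prediction
    denom = np.sqrt(delta_d**2 + dist_tol_mm**2 * (gx**2 + gy**2))
    return (measured - predicted) / denom         # |chi| <= 1 means "pass"

# toy usage: count the fraction of pixels with |chi| <= 1 in one frame
rng = np.random.default_rng(0)
pred = np.outer(np.hanning(64), np.hanning(64))
meas = pred * (1.0 + 0.02 * rng.standard_normal(pred.shape))
passing = np.mean(np.abs(chi_map(meas, pred, pixel_mm=1.0)) <= 1.0)
print(f"chi passing rate: {passing:.1%}")
```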
SU-F-T-465: Two Years of Radiotherapy Treatments Analyzed Through MLC Log Files
DOE Office of Scientific and Technical Information (OSTI.GOV)
Defoor, D; Kabat, C; Papanikolaou, N
Purpose: To present treatment statistics of a Varian Novalis Tx using more than 90,000 Varian Dynalog files collected over the past 2 years. Methods: Varian Dynalog files are recorded for every patient treated on our Varian Novalis Tx. The files are collected and analyzed daily to check interfraction agreement of treatment deliveries. This is accomplished by creating fluence maps from the data contained in the Dynalog files. From the Dynalog files we have also compiled statistics for treatment delivery times, MLC errors, gantry errors and collimator errors. Results: The mean treatment time for VMAT patients was 153 ± 86 seconds, while the mean treatment time for step & shoot was 256 ± 149 seconds. Patients' treatment times showed a variation of 0.4% over their treatment course for VMAT and 0.5% for step & shoot. The average field sizes were 40 cm2 and 26 cm2 for VMAT and step & shoot, respectively. VMAT beams contained an average overall leaf travel of 34.17 meters, while step & shoot beams averaged less than half of that at 15.93 meters. When comparing planned and delivered fluence maps generated using the Dynalog files, VMAT plans showed an average gamma passing percentage of 99.85 ± 0.47. Step & shoot plans showed an average gamma passing percentage of 97.04 ± 0.04. 5.3% of beams contained an MLC error greater than 1 mm and 2.4% had an error greater than 2 mm. The mean gantry speed for VMAT plans was 1.01 degrees/s with a maximum of 6.5 degrees/s. Conclusion: Varian Dynalog files are useful for monitoring machine performance and treatment parameters. The Dynalog files have shown that the performance of the Novalis Tx is consistent over the course of a patient's treatment, with only slight variations in patient treatment times and a low rate of MLC errors.
NASA Astrophysics Data System (ADS)
Wu, Kang-Hung; Su, Ching-Lun; Chu, Yen-Hsyang
2015-03-01
In this article, we use the International Reference Ionosphere (IRI) model to simulate temporal and spatial distributions of global E region electron densities retrieved by the FORMOSAT-3/COSMIC satellites by means of the GPS radio occultation (RO) technique. Despite regional discrepancies in the magnitudes of the E region electron density, the IRI model simulations can, on the whole, describe the COSMIC measurements in quality and quantity. On the basis of the global ionosonde network and the IRI model, the retrieval errors of the global COSMIC-measured E region peak electron density (NmE) from July 2006 to July 2011 are examined and simulated. The COSMIC measurements and the IRI model simulations both reveal that the magnitudes of the percentage error (PE) and root-mean-square error (RMSE) of the relative RO retrieval errors of the NmE values are dependent on local time (LT) and geomagnetic latitude, with minima in the early morning and at high latitudes and maxima in the afternoon and at middle latitudes. In addition, the seasonal variation of the PE and RMSE values seems to be latitude dependent. After removing the IRI model-simulated GPS RO retrieval errors from the original COSMIC measurements, the average values of the annual and monthly mean percentage errors of the RO retrieval errors of the COSMIC-measured E region electron density are substantially reduced by factors of about 2.95 and 3.35, respectively, and the corresponding root-mean-square errors show average decreases of 15.6% and 15.4%, respectively. It is found that, with this process, the largest reduction in the PE and RMSE of the COSMIC-measured NmE occurs at the equatorial anomaly latitudes 10°N-30°N in the afternoon from 14 to 18 LT, with factors of 25 and 2, respectively. Statistics show that the residual errors that remain in the corrected COSMIC-measured NmE vary in a range of -20% to 38%, which is comparable to or larger than the percentage errors of the IRI-predicted NmE, which fluctuate in a range of -6.5% to 20%.
Saathoff, April M; MacDonald, Ryan; Krenzischek, Erundina
2018-03-01
The objective of this study was to evaluate the impact of specimen collection technology implementation featuring computerized provider order entry, positive patient identification, bedside specimen label printing, and barcode scanning on the reduction of mislabeled specimens and collection turnaround times in the emergency, medical-surgical, critical care, and maternal child health departments at a community teaching hospital. A quantitative analysis of a nonrandomized, pre-post intervention study design evaluated the statistical significance of reduction of mislabeled specimen percentages and collection turnaround times affected by the implementation of specimen collection technology. Mislabeled specimen percentages in all areas decreased from an average of 0.020% preimplementation to an average of 0.003% postimplementation, with a P < .001. Collection turnaround times longer than 60 minutes decreased after the implementation of specimen collection technology by an average of 27%, with a P < .001. Specimen collection and identification errors are a significant problem in healthcare, contributing to incorrect diagnoses, delayed care, lack of essential treatments, and patient injury or death. Collection errors can also contribute to an increased length of stay, increased healthcare costs, and decreased patient satisfaction. Specimen collection technology has structures in place to prevent collection errors and improve the overall efficiency of the specimen collection process.
19 CFR 351.224 - Disclosure of calculations and procedures for the correction of ministerial errors.
Code of Federal Regulations, 2012 CFR
2012-04-01
... least five absolute percentage points in, but not less than 25 percent of, the weighted-average dumping... margin or countervailable subsidy rate (whichever is applicable) of zero (or de minimis) and a weighted...
19 CFR 351.224 - Disclosure of calculations and procedures for the correction of ministerial errors.
Code of Federal Regulations, 2010 CFR
2010-04-01
... least five absolute percentage points in, but not less than 25 percent of, the weighted-average dumping... margin or countervailable subsidy rate (whichever is applicable) of zero (or de minimis) and a weighted...
19 CFR 351.224 - Disclosure of calculations and procedures for the correction of ministerial errors.
Code of Federal Regulations, 2014 CFR
2014-04-01
... least five absolute percentage points in, but not less than 25 percent of, the weighted-average dumping... margin or countervailable subsidy rate (whichever is applicable) of zero (or de minimis) and a weighted...
19 CFR 351.224 - Disclosure of calculations and procedures for the correction of ministerial errors.
Code of Federal Regulations, 2013 CFR
2013-04-01
... least five absolute percentage points in, but not less than 25 percent of, the weighted-average dumping... margin or countervailable subsidy rate (whichever is applicable) of zero (or de minimis) and a weighted...
19 CFR 351.224 - Disclosure of calculations and procedures for the correction of ministerial errors.
Code of Federal Regulations, 2011 CFR
2011-04-01
... least five absolute percentage points in, but not less than 25 percent of, the weighted-average dumping... margin or countervailable subsidy rate (whichever is applicable) of zero (or de minimis) and a weighted...
Zumsteg, Zachary; DeMarco, John; Lee, Steve P; Steinberg, Michael L; Lin, Chun Shu; McBride, William; Lin, Kevin; Wang, Pin-Chieh; Kupelian, Patrick; Lee, Percy
2012-06-01
On-board cone-beam computed tomography (CBCT) is currently available for alignment of patients with head-and-neck cancer before radiotherapy. However, daily CBCT is time intensive and increases the overall radiation dose. We assessed the feasibility of using the average couch shifts from the first several CBCTs to estimate and correct for the presumed systematic setup error. 56 patients with head-and-neck cancer who received daily CBCT before intensity-modulated radiation therapy had recorded shift values in the medial-lateral, superior-inferior, and anterior-posterior dimensions. The average displacements in each direction were calculated for each patient based on the first five or 10 CBCT shifts and were presumed to represent the systematic setup error. The residual error after this correction was determined by subtracting the calculated shifts from the shifts obtained using daily CBCT. The magnitude of the average daily residual three-dimensional (3D) error was 4.8 ± 1.4 mm, 3.9 ± 1.3 mm, and 3.7 ± 1.1 mm for uncorrected, five CBCT corrected, and 10 CBCT corrected protocols, respectively. With no image guidance, 40.8% of fractions would have been >5 mm off target. Using the first five CBCT shifts to correct subsequent fractions, this percentage decreased to 19.0% of all fractions delivered and decreased the percentage of patients with average daily 3D errors >5 mm from 35.7% to 14.3% vs. no image guidance. Using an average of the first 10 CBCT shifts did not significantly improve this outcome. Using the first five CBCT shift measurements as an estimation of the systematic setup error improves daily setup accuracy for a subset of patients with head-and-neck cancer receiving intensity-modulated radiation therapy and primarily benefited those with large 3D correction vectors (>5 mm). Daily CBCT is still necessary until methods are developed that more accurately determine which patients may benefit from alternative imaging strategies. Copyright © 2012 Elsevier Inc. All rights reserved.
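The correction scheme described above can be sketched in a few lines: average the first several CBCT couch shifts, treat that average as the systematic setup error, and measure the residual 3-D error of the remaining fractions. The array layout and simulated shift values below are assumptions for illustration, not the study's data.

```python
import numpy as np

def residual_3d_error(daily_shifts_mm, n_baseline=5):
    """Estimate the systematic setup error from the first n_baseline CBCT
    shifts and return the residual 3-D error of each remaining fraction.

    daily_shifts_mm: (n_fractions, 3) array of couch shifts
    (medial-lateral, superior-inferior, anterior-posterior) in mm.
    Illustrative sketch only, not the authors' analysis code.
    """
    shifts = np.asarray(daily_shifts_mm, dtype=float)
    systematic = shifts[:n_baseline].mean(axis=0)   # presumed systematic error
    residual = shifts[n_baseline:] - systematic     # what daily CBCT would still correct
    return np.linalg.norm(residual, axis=1)         # 3-D vector magnitude per fraction

# toy usage with simulated shifts (mm)
rng = np.random.default_rng(1)
shifts = rng.normal(loc=[2.0, -1.0, 3.0], scale=1.5, size=(30, 3))
r3d = residual_3d_error(shifts)
print(f"mean residual 3D error: {r3d.mean():.1f} mm; "
      f"fractions >5 mm: {np.mean(r3d > 5):.0%}")
```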
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-21
... other errors, would result in (1) a change of at least five absolute percentage points in, but not less...) preliminary determination, or (2) a difference between a weighted-average dumping margin of zero or de minimis...
Mauntel, Timothy C; Padua, Darin A; Stanley, Laura E; Frank, Barnett S; DiStefano, Lindsay J; Peck, Karen Y; Cameron, Kenneth L; Marshall, Stephen W
2017-11-01
The Landing Error Scoring System (LESS) can be used to identify individuals with an elevated risk of lower extremity injury. The limitation of the LESS is that raters identify movement errors from video replay, which is time-consuming and, therefore, may limit its use by clinicians. A markerless motion-capture system may be capable of automating LESS scoring, thereby removing this obstacle. To determine the reliability of an automated markerless motion-capture system for scoring the LESS. Cross-sectional study. United States Military Academy. A total of 57 healthy, physically active individuals (47 men, 10 women; age = 18.6 ± 0.6 years, height = 174.5 ± 6.7 cm, mass = 75.9 ± 9.2 kg). Participants completed 3 jump-landing trials that were recorded by standard video cameras and a depth camera. Their movement quality was evaluated by expert LESS raters (standard video recording) using the LESS rubric and by software that automates LESS scoring (depth-camera data). We recorded an error for a LESS item if it was present on at least 2 of 3 jump-landing trials. We calculated κ statistics, prevalence- and bias-adjusted κ (PABAK) statistics, and percentage agreement for each LESS item. Interrater reliability was evaluated between the 2 expert rater scores and between a consensus expert score and the markerless motion-capture system score. We observed reliability between the 2 expert LESS raters (average κ = 0.45 ± 0.35, average PABAK = 0.67 ± 0.34; percentage agreement = 0.83 ± 0.17). The markerless motion-capture system had similar reliability with consensus expert scores (average κ = 0.48 ± 0.40, average PABAK = 0.71 ± 0.27; percentage agreement = 0.85 ± 0.14). However, reliability was poor for 5 LESS items in both LESS score comparisons. A markerless motion-capture system had the same level of reliability as expert LESS raters, suggesting that an automated system can accurately assess movement. Therefore, clinicians can use the markerless motion-capture system to reliably score the LESS without being limited by the time requirements of manual LESS scoring.
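The agreement statistics reported here (Cohen's κ, PABAK, and percentage agreement) can be computed per LESS item from two binary ratings, as in the hedged sketch below; the simulated ratings are placeholders, not the study's data.

```python
import numpy as np

def kappa_pabak_agreement(rater_a, rater_b):
    """Cohen's kappa, prevalence- and bias-adjusted kappa (PABAK), and
    percentage agreement for two binary ratings of the same LESS item.
    Illustrative sketch; assumes 1 = error present, 0 = error absent.
    """
    a = np.asarray(rater_a, dtype=int)
    b = np.asarray(rater_b, dtype=int)
    po = np.mean(a == b)                               # observed agreement
    pe = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())  # chance agreement
    kappa = (po - pe) / (1 - pe) if pe < 1 else 1.0
    pabak = 2 * po - 1                                 # PABAK for a 2x2 table
    return kappa, pabak, po

# toy usage: agreement on one item across 57 participants
rng = np.random.default_rng(2)
expert = rng.integers(0, 2, 57)
system = np.where(rng.random(57) < 0.85, expert, 1 - expert)  # ~85% raw agreement
print(kappa_pabak_agreement(expert, system))
```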
Demand forecasting of electricity in Indonesia with limited historical data
NASA Astrophysics Data System (ADS)
Dwi Kartikasari, Mujiati; Rohmad Prayogi, Arif
2018-03-01
Demand forecasting of electricity is an important activity for electricity agencies, giving a picture of future electricity demand. Prediction of electricity demand can be done using time series models. In this paper, the double moving average model, Holt's exponential smoothing model, and the grey model GM(1,1) are used to predict electricity demand in Indonesia under the condition of limited historical data. The result shows that the grey model GM(1,1) has the smallest values of MAE (mean absolute error), MSE (mean squared error), and MAPE (mean absolute percentage error).
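As a rough illustration of the GM(1,1) model and the error measures used for comparison, the sketch below implements the standard GM(1,1) formulation together with MAE, MSE, and MAPE; the demand figures are made-up placeholders, not the Indonesian data.

```python
import numpy as np

def gm11_forecast(x0, horizon):
    """Grey model GM(1,1) forecast for a short, positive time series.
    Minimal sketch of the standard GM(1,1) formulation, not the authors' code.
    """
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                  # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                       # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]    # grey parameters
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.empty_like(x1_hat)
    x0_hat[0] = x0[0]
    x0_hat[1:] = np.diff(x1_hat)                        # de-accumulate
    return x0_hat                                       # fitted values, then forecasts

def mae(y, f):  return np.mean(np.abs(np.asarray(y) - np.asarray(f)))
def mse(y, f):  return np.mean((np.asarray(y) - np.asarray(f)) ** 2)
def mape(y, f): return 100 * np.mean(np.abs((np.asarray(y) - np.asarray(f)) / np.asarray(y)))

# toy usage: fit a short demand history and report in-sample errors
demand = [180.2, 191.5, 203.9, 216.0, 229.8, 244.1]     # hypothetical values
fitted = gm11_forecast(demand, horizon=3)
print(mae(demand, fitted[:6]), mse(demand, fitted[:6]), mape(demand, fitted[:6]))
```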
Ecological footprint model using the support vector machine technique.
Ma, Haibo; Chang, Wenjuan; Cui, Guangbai
2012-01-01
The per capita ecological footprint (EF) is one of the most widely recognized measures of environmental sustainability. It aims to quantify the Earth's biological resources required to support human activity. In this paper, we summarize relevant previous literature and present five factors that influence per capita EF. These factors are: national gross domestic product (GDP), urbanization (independent of economic development), distribution of income (measured by the Gini coefficient), export dependence (measured by the percentage of exports to total GDP), and service intensity (measured by the percentage of services to total GDP). A new ecological footprint model based on a support vector machine (SVM), a machine-learning method based on the structural risk minimization principle from statistical learning theory, was constructed to calculate the per capita EF of 24 nations using data from 123 nations. The calculation accuracy was measured by average absolute error and average relative error, which were 0.004883 and 0.351078%, respectively. Our results demonstrate that the EF model based on SVM has good calculation performance.
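A minimal sketch of this kind of SVM regression, using scikit-learn's SVR with five predictors and the two reported error measures, is shown below; the feature values and train/test split are synthetic stand-ins, not the 123-nation data set.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Sketch of an SVM regression of per capita EF on five national indicators
# (GDP, urbanization, Gini, export share, service share). Random placeholder data.
rng = np.random.default_rng(3)
X_train = rng.random((99, 5))                                   # "training nations"
y_train = 1.5 + 3.0 * X_train[:, 0] + rng.normal(0, 0.2, 99)    # synthetic EF values
X_test = rng.random((24, 5))                                    # 24 held-out nations
y_test = 1.5 + 3.0 * X_test[:, 0] + rng.normal(0, 0.2, 24)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X_train, y_train)
pred = model.predict(X_test)

abs_err = np.abs(pred - y_test)
rel_err = 100 * abs_err / np.abs(y_test)
print(f"average absolute error: {abs_err.mean():.4f}, "
      f"average relative error: {rel_err.mean():.4f}%")
```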
Water quality management using statistical analysis and time-series prediction model
NASA Astrophysics Data System (ADS)
Parmar, Kulwinder Singh; Bhardwaj, Rashmi
2014-12-01
This paper deals with water quality management using statistical analysis and a time-series prediction model. The monthly variation of water quality standards has been used to compare the statistical mean, median, mode, standard deviation, kurtosis, skewness, and coefficient of variation at the Yamuna River. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, normalized Bayesian information criterion, Ljung-Box analysis, predicted values and confidence limits. Using an autoregressive integrated moving average model, future values of the water quality parameters have been estimated. It is observed that the predictive model is useful at the 95% confidence limits and that the curve is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen and water temperature (WT), and leptokurtic for chemical oxygen demand and biochemical oxygen demand. Also, it is observed that the predicted series is close to the original series, which provides a perfect fit. All parameters except pH and WT cross the prescribed limits of the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agriculture or industrial use.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-28
... errors, (1) would result in a change of at least five absolute percentage points in, but not less than 25... determination; or (2) would result in a difference between a weighted-average dumping margin of zero or de...
Evaluation of causes and frequency of medication errors during information technology downtime.
Hanuscak, Tara L; Szeinbach, Sheryl L; Seoane-Vazquez, Enrique; Reichert, Brendan J; McCluskey, Charles F
2009-06-15
The causes and frequency of medication errors occurring during information technology downtime were evaluated. Individuals from a convenience sample of 78 hospitals who were directly responsible for supporting and maintaining clinical information systems (CISs) and automated dispensing systems (ADSs) were surveyed using an online tool between February 2007 and May 2007 to determine if medication errors were reported during periods of system downtime. The errors were classified using the National Coordinating Council for Medication Error Reporting and Prevention severity scoring index. The percentage of respondents reporting downtime was estimated. Of the 78 eligible hospitals, 32 respondents with CIS and ADS responsibilities completed the online survey, for a response rate of 41%. For computerized prescriber order entry, patch installations and system upgrades caused an average downtime of 57% over a 12-month period. Lost interfaces and interface malfunctions were reported for centralized and decentralized ADSs, with average downtime responses of 34% and 29%, respectively. The average downtime response was 31% for software malfunctions linked to clinical decision-support systems. Although patient harm did not result from 30 (54%) medication errors, the potential for harm was present for 9 (16%) of these errors. Medication errors occurred during CIS and ADS downtime despite the availability of backup systems and standard protocols to handle periods of system downtime. Efforts should be directed to reducing the frequency and length of downtime in order to minimize medication errors during such downtime.
Implementation of trigonometric function using CORDIC algorithms
NASA Astrophysics Data System (ADS)
Mokhtar, A. S. N.; Ayub, M. I.; Ismail, N.; Daud, N. G. Nik
2018-02-01
In 1959, Jack E. Volder presented a new algorithm for the real-time solution of the equations arising in navigation systems. This algorithm enabled the replacement of analog navigation systems by digital ones. The CORDIC (Coordinate Rotation Digital Computer) algorithm is used for the rapid calculation of elementary functions such as trigonometric functions, multiplication, division and logarithms, as well as various conversions such as rectangular-to-polar coordinate conversion and conversions between binary-coded formats. At present, the CORDIC algorithm has many applications in communication, signal processing, 3-D graphics, and other fields. This paper presents a trigonometric function implementation using the CORDIC algorithm in rotation mode for the circular coordinate system. The CORDIC technique is used to generate outputs for input angles in the range 0° to 90°, and the error is analysed. The results showed that the average percentage error is about 0.042% for angles between 0° and 90°, but rose to about 45% for angles of 90° and above. The method is therefore very accurate in the first quadrant, and the mirror-property method is used to obtain angles in the 2nd, 3rd and 4th quadrants.
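A software sketch of CORDIC in rotation mode for the circular coordinate system is given below; it reproduces the first-quadrant behaviour described above but is not the authors' hardware implementation, and the iteration count is an assumption.

```python
import math

def cordic_sin_cos(angle_deg, iterations=24):
    """Compute sin and cos with CORDIC in rotation mode (circular system).
    Valid for angles in roughly [-90, 90] degrees without argument reduction.
    Software sketch of the algorithm, not a hardware implementation.
    """
    # precomputed micro-rotation angles atan(2^-i) and the CORDIC gain correction
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    K = 1.0
    for i in range(iterations):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y = K, 0.0                       # start on the x-axis, pre-scaled by 1/gain
    z = math.radians(angle_deg)         # residual angle still to rotate through
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0     # rotate toward zero residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return y, x                         # (sin, cos)

# compare against math.sin over the first quadrant
worst = 0.0
for deg in range(1, 90):
    s, _ = cordic_sin_cos(deg)
    ref = math.sin(math.radians(deg))
    worst = max(worst, 100 * abs(s - ref) / ref)
print(f"worst-case percentage error, 1-89 degrees: {worst:.6f}%")
```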
Nanidis, Theodore G; Ridha, Hyder; Jallali, Navid
2014-10-01
Estimation of the volume of abdominal tissue is desirable when planning autologous abdominal based breast reconstruction. However, this can be difficult clinically. The aim of this study was to develop a simple, yet reliable method of calculating the deep inferior epigastric artery perforator flap weight using the routine preoperative computed tomography angiogram (CTA) scan. Our mathematical formula is based on the shape of a DIEP flap resembling that of an isosceles triangular prism. Thus its volume can be calculated with a standard mathematical formula. Using bony landmarks three measurements were acquired from the CTA scan to calculate the flap weight. This was then compared to the actual flap weight harvested in both a retrospective feasibility and prospective study. In the retrospective group 17 DIEP flaps in 17 patients were analyzed. Average predicted flap weight was 667 g (range 293-1254). The average actual flap weight was 657 g (range 300-1290) giving an average percentage error of 6.8% (p-value for weight difference 0.53). In the prospective group 15 DIEP flaps in 15 patients were analyzed. Average predicted flap weight was 618 g (range 320-925). The average actual flap weight was 624 g (range 356-970) giving an average percentage error of 6.38% (p-value for weight difference 0.57). This formula is a quick, reliable and accurate way of estimating the volume of abdominal tissue using the preoperative CTA scan. Copyright © 2014 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
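The prism approximation can be written compactly. In the hedged formula below, the symbols b, h, and L stand for the three CTA measurements (base width, projection, and length of the flap) and ρ for an assumed tissue density; the abstract does not specify which landmarks map to which symbol, so this mapping is an assumption of the sketch.

```latex
% DIEP flap approximated as an isosceles triangular prism.
% b = base width, h = height (projection), L = prism length; the mapping of the
% three CTA measurements onto b, h, L is an assumption of this sketch, as is the
% tissue density rho used to convert volume to weight.
V \approx \tfrac{1}{2}\, b \, h \, L,
\qquad
\text{estimated flap weight} \approx \rho_{\text{tissue}} \, V
```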
Guo, Xiang; Wang, Ming Tian; Zhang, Guo Zhi
2017-12-01
The winter reproductive areas of Puccinia striiformis var. striiformis in the Sichuan Basin are often the places most affected by wheat stripe rust. Using data on the meteorological conditions and stripe rust situation at typical stations in the winter reproductive area in the Sichuan Basin from 1999 to 2016, this paper classified the meteorological conditions inducing wheat stripe rust into 5 grades, based on the incidence area ratio of the disease. The meteorological factors that were biologically related to wheat stripe rust were determined through multiple analytical methods, and a meteorological grade model for forecasting wheat stripe rust was created. The results showed that wheat stripe rust in the Sichuan Basin was significantly correlated with many meteorological factors, such as the average (maximum and minimum) temperature, precipitation and its anomaly percentage, relative humidity and its anomaly percentage, average wind speed and sunshine duration. Among these, the average temperature and the anomaly percentage of relative humidity were the determining factors. According to a historical retrospective test, the accuracy of the forecast based on the model was 64% for samples in the county-level test, and 89% for samples in the municipal-level test. In a meteorological grade forecast of wheat stripe rust in the winter reproductive areas of the Sichuan Basin in 2017, the prediction was accurate for 62.8% of the samples, with 27.9% in error by one grade and only 9.3% in error by two or more grades. As a result, the model can deliver satisfactory forecast results and predict future wheat stripe rust from a meteorological point of view.
Colour application on mammography image segmentation
NASA Astrophysics Data System (ADS)
Embong, R.; Aziz, N. M. Nik Ab.; Karim, A. H. Abd; Ibrahim, M. R.
2017-09-01
The segmentation process is one of the most important steps in image processing and computer vision since it is vital in the initial stage of image analysis. Segmentation of medical images involves complex structures and requires precise segmentation results, which are necessary for clinical diagnosis such as the detection of tumour, oedema, and necrotic tissues. Since mammography images are grayscale, researchers are looking at the effect of colour in the segmentation process of medical images. Colour is known to play a significant role in the perception of object boundaries in non-medical colour images. Processing colour images requires handling more data, hence providing a richer description of objects in the scene. Colour images contain ten percent (10%) additional edge information as compared to their grayscale counterparts. Nevertheless, edge detection in colour images is more challenging than in grayscale images as colour space is considered a vector space. In this study, we applied red, green, yellow, and blue colour maps to grayscale mammography images with the purpose of testing the effect of colour on the segmentation of abnormality regions in the mammography images. We applied the segmentation process using the Fuzzy C-means algorithm and evaluated the percentage of average relative error of area for each colour type. The results showed that all segmentations with the colour maps could be done successfully, even for blurred and noisy images. Also, the size of the abnormality region is reduced when compared to the segmentation area without the colour map. The green colour map segmentation produced the smallest percentage of average relative error (10.009%), while the yellow colour map segmentation gave the largest percentage of relative error (11.367%).
[Conversion methods of freshwater snail tissue dry mass and ash free dry mass].
Zhao, Wei-Hua; Wang, Hai-Jun; Wang, Hong-Zhu; Liu, Xue-Qin
2009-06-01
Mollusk biomass is usually expressed as wet mass with shell, but this expression fails to represent real biomass due to the high calcium carbonate content in shells. Tissue dry mass and ash free dry mass are relatively close to real biomass. However, the determination process of these two parameters is very complicated, and thus, it is necessary to establish simple and practical conversion methods for these two parameters. A total of six taxa of freshwater snails (Bellamya sp., Alocinma longicornis, Parafossarulus striatulus, Parafossarulus eximius, Semisulcospira cancellata, and Radix sp.) common in the Yangtze Basin were selected to explore the relations of their five shell dimension parameters, dry and wet mass with shells with their tissue dry mass and ash free dry mass. The regressions of the tissue dry mass and ash free dry mass with the five shell dimension parameters were all exponential (y = ax^b). Among them, shell width and shell length were more precise (the average percentage error between observed and predicted value being 22.0% and 22.5%, respectively) than the other three parameters in the conversion of dry mass. Wet mass with shell could be directly converted to tissue dry mass and ash free dry mass, with an average percentage error of 21.7%. According to the essence of definition and the errors of conversion, ash free dry mass would be the optimum parameter to express snail biomass.
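A regression of the form y = ax^b can be fitted by ordinary least squares on log-transformed data, as in the sketch below; the shell-width and dry-mass values are invented for illustration, not the study's measurements.

```python
import numpy as np

def fit_power_law(x, y):
    """Fit y = a * x**b by ordinary least squares on log-transformed data.
    Sketch of the regression form reported in the paper; not the authors' code.
    """
    b, log_a = np.polyfit(np.log(x), np.log(y), 1)   # slope = b, intercept = log(a)
    return np.exp(log_a), b

# hypothetical shell-width vs. tissue dry mass data
shell_width_mm = np.array([4.1, 5.3, 6.8, 8.2, 9.9, 11.4])
tissue_dry_mass_mg = np.array([2.1, 4.6, 9.8, 17.5, 30.2, 46.0])

a, b = fit_power_law(shell_width_mm, tissue_dry_mass_mg)
pred = a * shell_width_mm ** b
ape = 100 * np.abs(pred - tissue_dry_mass_mg) / tissue_dry_mass_mg
print(f"y = {a:.3f} * x^{b:.3f}; average percentage error = {ape.mean():.1f}%")
```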
Forecasting Error Calculation with Mean Absolute Deviation and Mean Absolute Percentage Error
NASA Astrophysics Data System (ADS)
Khair, Ummul; Fahmi, Hasanul; Hakim, Sarudin Al; Rahim, Robbi
2017-12-01
Prediction using a forecasting method is one of the most important activities for an organization. The selection of an appropriate forecasting method is important, but the percentage error of a method is more important if decision makers are to adopt the right approach. Using the Mean Absolute Deviation and the Mean Absolute Percentage Error to calculate the percentage of mistakes in the least squares method resulted in a percentage error of 9.77%, and it was decided that the least squares method works for time series and trend data.
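A minimal sketch of the two error measures applied to a least squares trend forecast is given below; the sales series is a made-up example, not the data analysed in the paper.

```python
import numpy as np

def least_squares_trend(y):
    """Fit y_t = a + b*t by least squares and return the fitted values."""
    t = np.arange(1, len(y) + 1)
    b, a = np.polyfit(t, y, 1)
    return a + b * t

def mad(actual, forecast):
    """Mean absolute deviation between actual and forecast values."""
    return np.mean(np.abs(np.asarray(actual) - np.asarray(forecast)))

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100 * np.mean(np.abs((actual - forecast) / actual))

# toy usage with a hypothetical trending series
sales = [112, 118, 121, 130, 133, 141, 144, 152]
fitted = least_squares_trend(sales)
print(f"MAD = {mad(sales, fitted):.2f}, MAPE = {mape(sales, fitted):.2f}%")
```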
NASA Astrophysics Data System (ADS)
Harudin, N.; Jamaludin, K. R.; Muhtazaruddin, M. Nabil; Ramlie, F.; Muhamad, Wan Zuki Azman Wan
2018-03-01
T-Method is one of the techniques governed under the Mahalanobis Taguchi System that was developed specifically for multivariate data prediction. Prediction using T-Method is always possible, even with a very limited sample size. The user of T-Method is required to clearly understand the trend of the population data, since this method does not consider the effect of outliers within it. Outliers may cause apparent non-normality, and the entire set of classical methods then breaks down. There exist robust parameter estimates that provide satisfactory results when the data contain outliers, as well as when the data are free of them. Among them are the robust location and scale estimates called Shamos-Bickel (SB) and Hodges-Lehmann (HL), which are used as comparable alternatives to the mean and standard deviation of classical statistics. Embedding these into the T-Method normalization stage can feasibly help to enhance the accuracy of the T-Method, as well as to analyse the robustness of the T-Method itself. However, the results of the higher-sample-size case study show that T-Method has the lowest average error percentage (3.09%) on data with extreme outliers, while HL and SB have the lowest error percentage (4.67%) for data without extreme outliers, with minimal error difference compared to T-Method. The prediction error trend is reversed for the lower-sample-size case study. The results show that with a minimum sample size, where outliers are always at low risk, T-Method performs much better, while for a higher sample size with extreme outliers, T-Method also shows better prediction compared to the others. For the case studies conducted in this research, the normalization used in T-Method shows satisfactory results, and it is not necessary to adapt HL and SB, or the normal mean and standard deviation, into it, since they provide only a minimal change in the percentage errors. Normalization using T-Method is still considered to carry a lower risk with respect to outlier effects.
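The two robust estimators can be sketched directly from their definitions: the Hodges-Lehmann location estimate is the median of the Walsh (pairwise) averages, and the Shamos-Bickel scale estimate is the median of pairwise absolute differences, here multiplied by an approximate normal-consistency constant that is an assumption of this sketch.

```python
import numpy as np
from itertools import combinations

def hodges_lehmann(x):
    """Hodges-Lehmann location estimate: median of all Walsh averages
    (pairwise means, including each point with itself); a robust
    alternative to the sample mean."""
    x = np.asarray(x, dtype=float)
    pairs = [(a + b) / 2.0 for a, b in combinations(x, 2)]
    return np.median(np.concatenate([x, pairs]))

def shamos_bickel(x):
    """Shamos-Bickel scale estimate: median of pairwise absolute differences,
    scaled by ~1.0483 to be roughly consistent with the standard deviation
    under normality (the constant is an approximation)."""
    x = np.asarray(x, dtype=float)
    diffs = [abs(a - b) for a, b in combinations(x, 2)]
    return 1.0483 * np.median(diffs)

# a sample with one extreme outlier: the robust estimates barely move
clean = [9.8, 10.1, 10.0, 9.9, 10.2, 10.05]
dirty = clean + [55.0]
print(hodges_lehmann(clean), hodges_lehmann(dirty))   # location
print(shamos_bickel(clean), shamos_bickel(dirty))     # scale
```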
Improving the Glucose Meter Error Grid With the Taguchi Loss Function.
Krouwer, Jan S
2016-07-01
Glucose meters often have similar performance when compared by error grid analysis. This is one reason that other statistics such as mean absolute relative deviation (MARD) are used to further differentiate performance. The problem with MARD is that too much information is lost. But additional information is available within the A zone of an error grid by using the Taguchi loss function. Applying the Taguchi loss function gives each glucose meter difference from reference a value ranging from 0 (no error) to 1 (error reaches the A zone limit). Values are averaged over all data which provides an indication of risk of an incorrect medical decision. This allows one to differentiate glucose meter performance for the common case where meters have a high percentage of values in the A zone and no values beyond the B zone. Examples are provided using simulated data. © 2015 Diabetes Technology Society.
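A hedged sketch of the idea follows: each difference from reference is scaled by the A-zone limit at that reference value, squared, and capped at 1, then averaged across the data set. The ±15 mg/dL / ±15% boundary function below is a placeholder assumption, not the error grid used in the paper.

```python
import numpy as np

def a_zone_limit(reference_mgdl, abs_limit=15.0, rel_limit=0.15):
    """Allowed deviation at the A-zone boundary. The +/-15 mg/dL / +/-15%
    rule used here is a placeholder assumption, not the paper's grid."""
    return np.maximum(abs_limit, rel_limit * reference_mgdl)

def taguchi_loss(measured, reference):
    """Quadratic (Taguchi-style) loss per measurement, scaled so that 0 means
    no error and 1 means the error just reaches the A-zone limit."""
    measured = np.asarray(measured, float)
    reference = np.asarray(reference, float)
    scaled = (measured - reference) / a_zone_limit(reference)
    return np.clip(scaled ** 2, 0.0, 1.0)

# two meters with similar A-zone pass rates can now be separated by average loss
rng = np.random.default_rng(4)
ref = rng.uniform(60, 300, 500)
meter_a = ref + rng.normal(0, 4, 500)     # tight meter
meter_b = ref + rng.normal(0, 9, 500)     # looser meter, still mostly in the A zone
print(taguchi_loss(meter_a, ref).mean(), taguchi_loss(meter_b, ref).mean())
```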
Li, Yan; Li, Na; Han, Qunying; He, Shuixiang; Bae, Ricard S.; Liu, Zhengwen; Lv, Yi; Shi, Bingyin
2014-01-01
This study was conducted to evaluate the performance of physical examination (PE) skills during our diagnostic medicine course and analyze the characteristics of the data collected to provide information for practical guidance to improve the quality of teaching. Seventy-two fourth-year medical students were enrolled in the study. All received an assessment of PE skills after receiving a 17-week formal training course and systematic teaching. Their performance was evaluated and recorded in detail using a checklist, which included 5 aspects of PE skills: examination techniques, communication and care skills, content items, appropriateness of examination sequence, and time taken. Error frequency and type were designated as the assessment parameters in the survey. The results showed that the distribution and the percentage in examination errors between male and female students and among the different body parts examined were significantly different (p<0.001). The average error frequency per student in females (0.875) was lower than in males (1.375) although the difference was not statistically significant (p = 0.167). The average error frequency per student in cardiac (1.267) and pulmonary (1.389) examinations was higher than in abdominal (0.867) and head, neck and nervous system examinations (0.917). Female students had a lower average error frequency than males in cardiac examinations (p = 0.041). Additionally, error in examination techniques was the highest type of error among the 5 aspects of PE skills irrespective of participant gender and assessment content (p<0.001). These data suggest that PE skills in cardiac and pulmonary examinations and examination techniques may be included in the main focus of improving the teaching of diagnostics in these medical students. PMID:25329685
Ramalingam, Shivaji G; Pré, Pascaline; Giraudet, Sylvain; Le Coq, Laurence; Le Cloirec, Pierre; Baudouin, Olivier; Déchelotte, Stéphane
2012-02-29
Regeneration experiments of dichloromethane from an activated carbon bed were carried out with both hot nitrogen and steam to evaluate the regeneration performance and the operating cost of the regeneration step. A Factorial Experimental Design (FED) tool was implemented to optimize the temperature and the superficial velocity of the nitrogen to achieve maximum regeneration at an optimized operating cost. All experimental results of the adsorption step and of the hot nitrogen and steam regeneration steps were validated with the simulation model PROSIM. The average error percentage between simulation and experiment based on the mass of dichloromethane adsorbed was 2.6%. The average error percentages between simulations and experiments based on the mass of dichloromethane regenerated by nitrogen regeneration and steam regeneration were 3 and 12%, respectively. The experiments showed that both hot nitrogen and steam regeneration recovered 84% of the dichloromethane. The choice of hot nitrogen or steam regeneration, however, depends on the regeneration time, operating costs, and purity of the dichloromethane regenerated. A thorough investigation was made of the advantages and limitations of both hot nitrogen and steam regeneration of dichloromethane. Copyright © 2011 Elsevier B.V. All rights reserved.
Time series forecasting of future claims amount of SOCSO's employment injury scheme (EIS)
NASA Astrophysics Data System (ADS)
Zulkifli, Faiz; Ismail, Isma Liana; Chek, Mohd Zaki Awang; Jamal, Nur Faezah; Ridzwan, Ahmad Nur Azam Ahmad; Jelas, Imran Md; Noor, Syamsul Ikram Mohd; Ahmad, Abu Bakar
2012-09-01
The Employment Injury Scheme (EIS) provides protection to employees who are injured in accidents while working, while commuting between home and the workplace, while taking a break during an authorized recess, or while travelling in connection with their work. The main purpose of this study is to forecast the claims amount of the EIS for the years 2011 to 2015 using appropriate models. These models were tested on the actual EIS data from 1972 to 2010. Three different forecasting models were chosen for comparison: the Naïve with Trend Model, the Average Percent Change Model and the Double Exponential Smoothing Model. The best model was selected based on the smallest values of the error measures, the Mean Squared Error (MSE) and the Mean Absolute Percentage Error (MAPE). The results show that the model that best fits the EIS forecast is the Average Percent Change Model. Furthermore, the results also show that the claims amount of the EIS for the years 2011 to 2015 continues to trend upwards from 2010.
Development of sampling plans for cotton bolls injured by stink bugs (Hemiptera: Pentatomidae).
Reay-Jones, F P F; Toews, M D; Greene, J K; Reeves, R B
2010-04-01
Cotton, Gossypium hirsutum L., bolls were sampled in commercial fields for stink bug (Hemiptera: Pentatomidae) injury during 2007 and 2008 in South Carolina and Georgia. Across both years of this study, boll-injury percentages averaged 14.8 +/- 0.3 (SEM). At average boll injury treatment levels of 10, 20, 30, and 50%, the percentage of samples with at least one injured boll was 82, 97, 100, and 100%, respectively. Percentage of field-sampling date combinations with average injury < 10, 20, 30, and 50% was 35, 80, 95, and 99%, respectively. At the average of 14.8% boll injury or 2.9 injured bolls per 20-boll sample, 112 samples at Dx = 0.1 (within 10% of the mean) were required for population estimation, compared with only 15 samples at Dx = 0.3. Using a sample size of 20 bolls, our study indicated that, at the 10% threshold and alpha = beta = 0.2 (with 80% confidence), control was not needed when <1.03 bolls were injured. The sampling plan required continued sampling for a range of 1.03-3.8 injured bolls per 20-boll sample. Only when injury was > 3.8 injured bolls per 20-boll sample was a control measure needed. Sequential sampling plans were also determined for thresholds of 20, 30, and 50% injured bolls. Sample sizes for sequential sampling plans were significantly reduced when compared with a fixed sampling plan (n=10) for all thresholds and error rates.
Mobarakabadi, Sedigheh Sedigh; Ebrahimipour, Hosein; Najar, Ali Vafaie; Janghorban, Roksana; Azarkish, Fatemeh
2017-03-01
Patient safety is one of the main objectives in healthcare services; however, medical errors are a prevalent potential occurrence for patients in treatment systems. Medical errors lead to an increase in the mortality rate of patients and to challenges such as prolonged inpatient stays and increased costs. Controlling medical errors is very important, because these errors, besides being costly, threaten patient safety. The aim was to evaluate the attitudes of nurses and midwives toward the causes and rates of medical error reporting. It was a cross-sectional observational study. The study population was 140 midwives and nurses employed in Mashhad Public Hospitals. Data collection was done through the Goldstone 2001 revised questionnaire. SPSS 11.5 software was used for data analysis. Descriptive and inferential statistics were used to analyze the data. Standard deviation and relative frequency distribution were used for calculation of the means, and the results were presented as tables and charts. The chi-square test was used for the inferential analysis of the data. Most of the midwives and nurses (39.4%) were in the age range of 25 to 34 years, and the lowest percentage (2.2%) were in the age range of 55-59 years. The highest average number of medical errors was related to employees with three to four years of work experience, while the lowest was related to those with one to two years of work experience. The highest average number of medical errors occurred during the evening shift, while the lowest occurred during the night shift. Three main causes of medical errors were identified: illegible physician prescription orders, similarity of names between different drugs, and nurse fatigue. The most important causes of medical errors from the viewpoint of nurses and midwives are illegible physician's orders, drug name similarity with other drugs, nurse fatigue, and damaged labels or packaging of the drug, respectively. Head nurse feedback, peer feedback, and fear of punishment or job loss were considered reasons for under-reporting of medical errors. This research demonstrates the need for greater attention to be paid to the causes of medical errors.
NASA Astrophysics Data System (ADS)
Huang, C. L.; Hsu, N. S.; Hsu, F. C.; Liu, H. J.
2016-12-01
This study develops a novel methodology for the spatiotemporal groundwater calibration of mega-quantitative recharge and parameters by coupling a specialized numerical model and an analytical empirical orthogonal function (EOF). The actual spatiotemporal patterns of groundwater pumpage are estimated by an originally developed back-propagation neural network-based response matrix with electrical consumption analysis. The spatiotemporal patterns of the recharge from surface water and the hydrogeological parameters (i.e. horizontal hydraulic conductivity and vertical leakance) are calibrated by EOF with the simulated error hydrograph of groundwater storage, in order to qualify the multiple error sources and quantify the revised volume. The objective function of the optimization model minimizes the root mean square error of the simulated storage error percentage across multiple aquifers, subject to mass balance of the groundwater budget and the governing equation in the transient state. The established method was applied to the groundwater system of the Chou-Shui River Alluvial Fan. The simulation period is from January 2012 to December 2014. The total numbers of hydraulic conductivity, vertical leakance and surface-water recharge values among the four aquifers are 126, 96 and 1080, respectively. Results showed that the RMSE decreased dramatically during the calibration process and converged within six iterations, because of efficient filtration of the transmission induced by the estimated error and recharge across the boundary. Moreover, the average simulated error percentage of the groundwater level corresponding to the calibrated budget variables and parameters of aquifer one is as small as 0.11%. This indicates that the developed methodology not only can effectively detect the flow tendency and error sources in all aquifers to achieve accurate spatiotemporal calibration, but also can capture the peak and fluctuation of the groundwater level in the shallow aquifer.
Foster, Ken; Anwar, Nasim; Pogue, Rhea; Morré, Dorothy M.; Keenan, T. W.; Morré, D. James
2003-01-01
Seasonal decomposition analyses were applied to the statistical evaluation of an oscillating activity for a plasma membrane NADH oxidase activity with a temperature-compensated period of 24 min. The decomposition fits were used to validate the cyclic oscillatory pattern. Three measured values, the mean absolute percentage error (MAPE), a measure of the periodic oscillation, the mean absolute deviation (MAD), a measure of the absolute average deviations from the fitted values, and the mean squared deviation (MSD), a measure of the deviation from the fitted values, plus R-squared and the Henriksson-Merton p value, were used to evaluate accuracy. Decomposition was carried out by fitting a trend line to the data, then detrending the data, if necessary, by subtracting the trend component. The data, with or without detrending, were then smoothed by subtracting a centered moving average of length equal to the period length determined by Fourier analysis. Finally, the time series were decomposed into cyclic and error components. The findings not only validate the periodic nature of the major oscillations but suggest, as well, that the minor intervening fluctuations also recur within each period with a reproducible pattern of recurrence. PMID:19330112
[Prediction of schistosomiasis infection rates of population based on ARIMA-NARNN model].
Ke-Wei, Wang; Yu, Wu; Jin-Ping, Li; Yu-Yu, Jiang
2016-07-12
To explore the effect of the autoregressive integrated moving average model-nonlinear auto-regressive neural network (ARIMA-NARNN) model on predicting schistosomiasis infection rates of population. The ARIMA model, NARNN model and ARIMA-NARNN model were established based on monthly schistosomiasis infection rates from January 2005 to February 2015 in Jiangsu Province, China. The fitting and prediction performances of the three models were compared. Compared to the ARIMA model and NARNN model, the mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) of the ARIMA-NARNN model were the least with the values of 0.011 1, 0.090 0 and 0.282 4, respectively. The ARIMA-NARNN model could effectively fit and predict schistosomiasis infection rates of population, which might have a great application value for the prevention and control of schistosomiasis.
A Variable Flow Modelling Approach To Military End Strength Planning
2016-12-01
Acronyms (partial list): ...programming; MAPE, mean average percentage error; MLRPS, Manpower Long-Range Planning System; MT, marine technician; OR, operations research; RAN, Royal... Reference: OR Practice—The Army Manpower Long-Range Planning System. Operations Research, 36(1), 5–17. http://dx.doi.org/10.1287/opre.36.1.5; Guerry, M. A... Abstract: The purpose of this thesis is to develop a model to assist military manpower planners in
A new method to estimate average hourly global solar radiation on the horizontal surface
NASA Astrophysics Data System (ADS)
Pandey, Pramod K.; Soupir, Michelle L.
2012-10-01
A new model, Global Solar Radiation on Horizontal Surface (GSRHS), was developed to estimate the average hourly global solar radiation on horizontal surfaces (Gh). The GSRHS model uses the transmission function (Tf,ij), which was developed to control hourly global solar radiation, for predicting solar radiation. The inputs of the model were: hour of day, day (Julian) of year, optimized parameter values, solar constant (H0), latitude, and longitude of the location of interest. The parameter values used in the model were optimized at one location (Albuquerque, NM), and these values were applied in the model for predicting average hourly global solar radiation at four other locations (Austin, TX; El Paso, TX; Desert Rock, NV; Seattle, WA) in the United States. The model performance was assessed using the correlation coefficient (r), Mean Absolute Bias Error (MABE), Root Mean Square Error (RMSE), and coefficient of determination (R2). The sensitivity of the parameters to the predictions was estimated. Results show that the model performed very well. The correlation coefficients (r) range from 0.96 to 0.99, while the coefficients of determination (R2) range from 0.92 to 0.98. For daily and monthly predictions, the error percentages (i.e., MABE and RMSE) were less than 20%. The approach we propose here can be potentially useful for predicting average hourly global solar radiation on horizontal surfaces at different locations, using readily available data (i.e., latitude and longitude of the location) as inputs.
Thompson, Nicola D; Edwards, Jonathan R; Bamberg, Wendy; Beldavs, Zintars G; Dumyati, Ghinwa; Godine, Deborah; Maloney, Meghan; Kainer, Marion; Ray, Susan; Thompson, Deborah; Wilson, Lucy; Magill, Shelley S
2013-03-01
To evaluate the accuracy of weekly sampling of central line-associated bloodstream infection (CLABSI) denominator data to estimate central line-days (CLDs). Obtained CLABSI denominator logs showing daily counts of patient-days and CLD for 6-12 consecutive months from participants and CLABSI numerators and facility and location characteristics from the National Healthcare Safety Network (NHSN). Convenience sample of 119 inpatient locations in 63 acute care facilities within 9 states participating in the Emerging Infections Program. Actual CLD and estimated CLD obtained from sampling denominator data on all single-day and 2-day (day-pair) samples were compared by assessing the distributions of the CLD percentage error. Facility and location characteristics associated with increased precision of estimated CLD were assessed. The impact of using estimated CLD to calculate CLABSI rates was evaluated by measuring the change in CLABSI decile ranking. The distribution of CLD percentage error varied by the day and number of days sampled. On average, day-pair samples provided more accurate estimates than did single-day samples. For several day-pair samples, approximately 90% of locations had CLD percentage error of less than or equal to ±5%. A lower number of CLD per month was most significantly associated with poor precision in estimated CLD. Most locations experienced no change in CLABSI decile ranking, and no location's CLABSI ranking changed by more than 2 deciles. Sampling to obtain estimated CLD is a valid alternative to daily data collection for a large proportion of locations. Development of a sampling guideline for NHSN users is underway.
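The estimation step itself is simple, as the sketch below shows: the mean daily count over the sampled days is multiplied by the number of days in the month, and the percentage error is taken against the full monthly count. The daily counts and the sampled days below are invented for illustration, not the study's data.

```python
import numpy as np

def estimated_cld(daily_cld, sample_days):
    """Estimate monthly central line-days from a small sample of days:
    mean of the sampled daily counts times the number of days in the month.
    Sketch of the sampling idea; sample_days holds 0-based day indices."""
    daily_cld = np.asarray(daily_cld, dtype=float)
    return daily_cld[list(sample_days)].mean() * len(daily_cld)

# toy month of daily central line counts and a two-day ("day-pair") sample
rng = np.random.default_rng(5)
daily = rng.poisson(lam=18, size=30)               # hypothetical ICU month
actual = daily.sum()
est = estimated_cld(daily, sample_days=(1, 15))
pct_error = 100 * (est - actual) / actual
print(f"actual={actual}, estimated={est:.0f}, percentage error={pct_error:+.1f}%")
```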
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morley, Steven Karl
This report reviews existing literature describing forecast accuracy metrics, concentrating on those based on relative errors and percentage errors. We then review how the most common of these metrics, the mean absolute percentage error (MAPE), has been applied in recent radiation belt modeling literature. Finally, we describe metrics based on the ratios of predicted to observed values (the accuracy ratio) that address the drawbacks inherent in using MAPE. Specifically, we define and recommend the median log accuracy ratio as a measure of bias and the median symmetric accuracy as a measure of accuracy.
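The metrics discussed in the report can be written down directly; the sketch below implements MAPE, the median log accuracy ratio, and the median symmetric accuracy on synthetic positive-valued data.

```python
import numpy as np

def mape(obs, pred):
    """Mean absolute percentage error, in percent."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return 100 * np.mean(np.abs((pred - obs) / obs))

def median_log_accuracy_ratio(obs, pred):
    """Median of log(pred/obs); ~0 means unbiased, the sign gives the direction of bias."""
    return np.median(np.log(np.asarray(pred, float) / np.asarray(obs, float)))

def median_symmetric_accuracy(obs, pred):
    """100*(exp(median|log(pred/obs)|) - 1): a symmetric, percentage-style
    accuracy measure based on the accuracy ratio."""
    q = np.log(np.asarray(pred, float) / np.asarray(obs, float))
    return 100 * (np.exp(np.median(np.abs(q))) - 1)

# toy example: predictions that are mostly ~20% high
rng = np.random.default_rng(6)
obs = rng.lognormal(mean=2.0, sigma=0.8, size=200)      # positive data, as required
pred = obs * 1.2 * rng.lognormal(0, 0.1, size=200)
print(mape(obs, pred), median_log_accuracy_ratio(obs, pred),
      median_symmetric_accuracy(obs, pred))
```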
Urine specific gravity measurement: reagent strip versus refractometer.
Brandon, C A
1994-01-01
To compare the results of urinalysis screenings for specific gravity (SG) using the reagent strip and the refractometer. United Hospital, Grand Forks, North Dakota. United Hospital is a 384-bed teaching hospital. PRODUCT COMPARISON: The Ames Multistix 10 SG reagent strip (Miles, Inc., Elkhart, IN 46515) was compared with the TS Meter (Leica, Inc., Deerfield, IL 60015). The degree of correlation between the results produced by each method. The percentage of difference between the means of the direct strip readings and the refractometer readings was 9.68%. The percentage of difference between the means of the adjusted strip readings and the refractometer readings was 22.58%, which was significantly different. When the direct strip readings and the refractometer readings were plotted together on a graph, the points were widely scattered; this fact, and a correlation coefficient of 0.725, suggest that random error occurred in both methods. Analysis of the slope and intercept of the correlation indicated systematic error. The reagent strip method of measuring SG is accurate only in a narrow range of "average" values, and should not be used as the basis for medical diagnoses.
Measurement effects of seasonal and monthly variability on pedometer-determined data.
Kang, Minsoo; Bassett, David R; Barreira, Tiago V; Tudor-Locke, Catrine; Ainsworth, Barbara E
2012-03-01
The seasonal and monthly variability of pedometer-determined physical activity and its effects on accurate measurement have not been examined. The purpose of the study was to reduce measurement error in step-count data by controlling a) the length of the measurement period and b) the season or month of the year in which sampling was conducted. Twenty-three middle-aged adults were instructed to wear a Yamax SW-200 pedometer over 365 consecutive days. The step-count measurement periods of various lengths (eg, 2, 3, 4, 5, 6, 7 days, etc.) were randomly selected 10 times for each season and month. To determine accurate estimates of yearly step-count measurement, mean absolute percentage error (MAPE) and bias were calculated. The year-round average was considered as a criterion measure. A smaller MAPE and bias represent a better estimate. Differences in MAPE and bias among seasons were trivial; however, they varied among different months. The months in which seasonal changes occur presented the highest MAPE and bias. Targeting the data collection during certain months (eg, May) may reduce pedometer measurement error and provide more accurate estimates of year-round averages.
Cost-effectiveness of the stream-gaging program in New Jersey
Schopp, R.D.; Ulery, R.L.
1984-01-01
The results of a study of the cost-effectiveness of the stream-gaging program in New Jersey are documented. This study is part of a 5-year nationwide analysis undertaken by the U.S. Geological Survey to define and document the most cost-effective means of furnishing streamflow information. This report identifies the principal uses of the data and relates those uses to funding sources, applies, at selected stations, alternative less costly methods (that is flow routing, regression analysis) for furnishing the data, and defines a strategy for operating the program which minimizes uncertainty in the streamflow data for specific operating budgets. Uncertainty in streamflow data is primarily a function of the percentage of missing record and the frequency of discharge measurements. In this report, 101 continuous stream gages and 73 crest-stage or stage-only gages are analyzed. A minimum budget of $548,000 is required to operate the present stream-gaging program in New Jersey with an average standard error of 27.6 percent. The maximum budget analyzed was $650,000, which resulted in an average standard error of 17.8 percent. The 1983 budget of $569,000 resulted in a standard error of 24.9 percent under present operating policy. (USGS)
Statistical Modeling and Prediction for Tourism Economy Using Dendritic Neural Network.
Yu, Ying; Wang, Yirui; Gao, Shangce; Tang, Zheng
2017-01-01
With the impact of global internationalization, the tourism economy has also developed rapidly. The growing interest in more advanced forecasting methods motivates us to innovate. In this paper, the seasonal trend autoregressive integrated moving average with dendritic neural network model (SA-D model) is proposed for tourism demand forecasting. First, we use the seasonal trend autoregressive integrated moving average (SARIMA) model to remove the long-term linear trend, and then train the dendritic neural network model on the residual data to make a short-term prediction. As the results in this paper show, the SA-D model achieves considerably better predictive performance. To demonstrate the effectiveness of the SA-D model, we also apply it to the data that other authors used with other models and compare the results. The SA-D model again achieves good predictive performance in terms of the normalized mean square error, absolute percentage error, and correlation coefficient.
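A rough Python sketch of the hybrid idea, assuming monthly data: a SARIMA model removes the linear/seasonal component and a neural network is then trained on lagged residuals. A standard scikit-learn MLP stands in for the paper's dendritic neural network, and the model orders, lag count and synthetic series are illustrative assumptions only.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX
from sklearn.neural_network import MLPRegressor

def fit_hybrid(y, order=(1, 1, 1), seasonal_order=(0, 1, 1, 12), lags=12):
    """Fit a SARIMA model for the seasonal/linear part, then train a neural
    network on lagged residuals to capture the remaining nonlinearity."""
    sarima = SARIMAX(y, order=order, seasonal_order=seasonal_order).fit(disp=False)
    resid = np.asarray(sarima.resid)
    # Lagged design matrix: each row holds the `lags` previous residuals.
    X = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
    target = resid[lags:]
    nn = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                      random_state=0).fit(X, target)
    return sarima, nn

# Hypothetical usage with a synthetic monthly series standing in for tourism demand.
rng = np.random.default_rng(0)
y = 100 + 10 * np.sin(np.arange(120) * 2 * np.pi / 12) + rng.normal(0, 2, 120)
sarima_fit, residual_nn = fit_hybrid(y)
```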
Optical matrix-matrix multiplication method demonstrated by the use of a multifocus hololens
NASA Technical Reports Server (NTRS)
Liu, H. K.; Liang, Y.-Z.
1984-01-01
A method of optical matrix-matrix multiplication is presented. The feasibility of the method is also experimentally demonstrated by the use of a dichromated-gelatin multifocus holographic lens (hololens). With the specific values of matrices chosen, the average percentage error between the theoretical and experimental data of the elements of the output matrix of the multiplication of some specific pairs of 3 x 3 matrices is 0.4 percent, which corresponds to an 8-bit accuracy.
45 CFR 98.100 - Error Rate Report.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND... rates, which is defined as the percentage of cases with an error (expressed as the total number of cases with an error compared to the total number of cases); the percentage of cases with an improper payment...
Neural network versus classical time series forecasting models
NASA Astrophysics Data System (ADS)
Nor, Maria Elena; Safuan, Hamizah Mohd; Shab, Noorzehan Fazahiyah Md; Asrul, Mohd; Abdullah, Affendi; Mohamad, Nurul Asmaa Izzati; Lee, Muhammad Hisyam
2017-05-01
Artificial neural networks (ANN) have an advantage in time series forecasting because of their potential to solve complex forecasting problems: an ANN is a data-driven approach that can be trained to map past values of a time series. In this study, the forecast performance of a neural network was compared with that of a classical time series forecasting method, namely the seasonal autoregressive integrated moving average model, using gold price data. Moreover, the effect of different data preprocessing on the forecast performance of the neural network was examined. Forecast accuracy was evaluated using mean absolute deviation, root mean square error and mean absolute percentage error. It was found that the ANN produced the most accurate forecast when the Box-Cox transformation was used for data preprocessing.
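As a sketch of the kind of preprocessing compared here, the Box-Cox transform can be applied before training and inverted afterwards; the prices below are made-up placeholders and the "forecast" is a stand-in, not the study's network output.

```python
import numpy as np
from scipy.stats import boxcox
from scipy.special import inv_boxcox

# Hypothetical gold-price series (Box-Cox requires strictly positive values).
prices = np.array([1200.5, 1210.3, 1198.7, 1225.0, 1230.2, 1240.8, 1236.4, 1252.9])

transformed, lam = boxcox(prices)     # stabilise the variance before training the ANN
# ... an ANN would be trained on `transformed` here; a placeholder stands in below ...
forecast_transformed = transformed[-1]
forecast = inv_boxcox(forecast_transformed, lam)   # back-transform to the price scale
print(round(float(forecast), 1))
```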
State-of-the-Art pH Electrode Quality Control for Measurements of Acidic, Low Ionic Strength Waters.
ERIC Educational Resources Information Center
Stapanian, Martin A.; Metcalf, Richard C.
1990-01-01
Described is the derivation of the relationship between the pH measurement error and the resulting percentage error in hydrogen ion concentration including the use of variable activity coefficients. The relative influence of the ionic strength of the solution on the percentage error is shown. (CW)
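Since the abstract only names the relationship, here is a minimal sketch of the constant-activity-coefficient case (it ignores the variable activity coefficients the article emphasizes): from [H+] = 10^(-pH), a pH error of delta_pH changes the apparent concentration by a factor of 10^(-delta_pH).

```python
def hydrogen_ion_pct_error(delta_pH):
    """Percentage error in [H+] implied by a pH measurement error delta_pH,
    using [H+] = 10**(-pH) with a constant activity coefficient (assumption)."""
    return (10 ** (-delta_pH) - 1) * 100.0

# A pH reading 0.05 units too high understates [H+] by about 11%;
# a reading 0.05 units too low overstates it by about 12%.
print(round(hydrogen_ion_pct_error(+0.05), 1))   # ~ -10.9
print(round(hydrogen_ion_pct_error(-0.05), 1))   # ~ +12.2
```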
Measures of model performance based on the log accuracy ratio
Morley, Steven Karl; Brito, Thiago Vasconcelos; Welling, Daniel T.
2018-01-03
Quantitative assessment of modeling and forecasting of continuous quantities uses a variety of approaches. We review existing literature describing metrics for forecast accuracy and bias, concentrating on those based on relative errors and percentage errors. Of these accuracy metrics, the mean absolute percentage error (MAPE) is one of the most common across many fields and has been widely applied in recent space science literature; we highlight the benefits and drawbacks of MAPE and proposed alternatives. We then introduce the log accuracy ratio and derive from it two metrics: the median symmetric accuracy and the symmetric signed percentage bias. Robust methods for estimating the spread of a multiplicative linear model using the log accuracy ratio are also presented. The developed metrics are shown to be easy to interpret, robust, and to mitigate the key drawbacks of their more widely used counterparts based on relative errors and percentage errors. Their use is illustrated with radiation belt electron flux modeling examples.
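A small Python sketch of the two derived metrics as they are usually defined from the log accuracy ratio Q = ln(predicted/observed); treat the exact conventions (sign, percentage scaling) as assumptions rather than a transcription of the paper, and the flux values as invented.

```python
import numpy as np

def median_symmetric_accuracy(pred, obs):
    """MSA = 100*(exp(median(|ln(pred/obs)|)) - 1), in percent."""
    q = np.log(np.asarray(pred, float) / np.asarray(obs, float))
    return 100.0 * (np.exp(np.median(np.abs(q))) - 1.0)

def symmetric_signed_percentage_bias(pred, obs):
    """SSPB = 100*sign(M)*(exp(|M|) - 1), with M the median log accuracy ratio."""
    m = np.median(np.log(np.asarray(pred, float) / np.asarray(obs, float)))
    return 100.0 * np.sign(m) * (np.exp(np.abs(m)) - 1.0)

# Hypothetical electron-flux predictions and observations (arbitrary units).
pred = [1.2e4, 8.0e3, 5.5e4, 2.1e3]
obs = [1.0e4, 1.1e4, 4.0e4, 2.0e3]
print(median_symmetric_accuracy(pred, obs), symmetric_signed_percentage_bias(pred, obs))
```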
Akrami, Mohammad; Qian, Zhihui; Zou, Zhemin; Howard, David; Nester, Chris J; Ren, Lei
2018-04-01
The objective of this study was to develop and validate a subject-specific framework for modelling the human foot. This was achieved by integrating medical image-based finite element modelling, individualised multi-body musculoskeletal modelling and 3D gait measurements. A 3D ankle-foot finite element model comprising all major foot structures was constructed based on MRI of one individual. A multi-body musculoskeletal model and 3D gait measurements for the same subject were used to define loading and boundary conditions. Sensitivity analyses were used to investigate the effects of key modelling parameters on model predictions. Prediction errors of average and peak plantar pressures were below 10% in all ten plantar regions at five key gait events with only one exception (lateral heel, in early stance, error of 14.44%). The sensitivity analyses results suggest that predictions of peak plantar pressures are moderately sensitive to material properties, ground reaction forces and muscle forces, and significantly sensitive to foot orientation. The maximum region-specific percentage change ratios (peak stress percentage change over parameter percentage change) were 1.935-2.258 for ground reaction forces, 1.528-2.727 for plantar flexor muscles and 4.84-11.37 for foot orientations. This strongly suggests that loading and boundary conditions need to be very carefully defined based on personalised measurement data.
45 CFR 98.102 - Content of Error Rate Reports.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Funds and State Matching and Maintenance-of-Effort (MOE Funds): (1) Percentage of cases with an error... cases in the sample with an error compared to the total number of cases in the sample; (2) Percentage of cases with an improper payment (both over and under payments), expressed as the total number of cases in...
June and August median streamflows estimated for ungaged streams in southern Maine
Lombard, Pamela J.
2010-01-01
Methods for estimating June and August median streamflows were developed for ungaged, unregulated streams in southern Maine. The methods apply to streams with drainage areas ranging in size from 0.4 to 74 square miles, with percentage of basin underlain by a sand and gravel aquifer ranging from 0 to 84 percent, and with distance from the centroid of the basin to a Gulf of Maine line paralleling the coast ranging from 14 to 94 miles. Equations were developed with data from 4 long-term continuous-record streamgage stations and 27 partial-record streamgage stations. Estimates of median streamflows at the continuous-record and partial-record stations are presented. A mathematical technique for estimating standard low-flow statistics, such as June and August median streamflows, at partial-record streamgage stations was applied by relating base-flow measurements at these stations to concurrent daily streamflows at nearby long-term (at least 10 years of record) continuous-record streamgage stations (index stations). Weighted least-squares regression analysis (WLS) was used to relate estimates of June and August median streamflows at streamgage stations to basin characteristics at these same stations to develop equations that can be used to estimate June and August median streamflows on ungaged streams. WLS accounts for different periods of record at the gaging stations. Three basin characteristics-drainage area, percentage of basin underlain by a sand and gravel aquifer, and distance from the centroid of the basin to a Gulf of Maine line paralleling the coast-are used in the final regression equation to estimate June and August median streamflows for ungaged streams. The three-variable equation to estimate June median streamflow has an average standard error of prediction from -35 to 54 percent. The three-variable equation to estimate August median streamflow has an average standard error of prediction from -45 to 83 percent. Simpler one-variable equations that use only drainage area to estimate June and August median streamflows were developed for use when less accuracy is acceptable. These equations have average standard errors of prediction from -46 to 87 percent and from -57 to 133 percent, respectively.
Darajeh, Negisa; Idris, Azni; Fard Masoumi, Hamid Reza; Nourani, Abolfazl; Truong, Paul; Rezania, Shahabaldin
2017-05-04
Artificial neural networks (ANNs) have been widely used to solve problems because of their reliable, robust, and salient characteristics in capturing the nonlinear relationships between variables in complex systems. In this study, an ANN was applied for modeling Chemical Oxygen Demand (COD) and biodegradable organic matter (BOD) removal from palm oil mill secondary effluent (POMSE) by a vetiver system. The independent variables, namely POMSE concentration, vetiver slip density, and removal time, were considered as input parameters to optimize the network, while the removal percentages of COD and BOD were selected as outputs. To determine the number of hidden-layer nodes, the root mean squared error of the testing set was minimized, and the topologies of the algorithms were compared by coefficient of determination and absolute average deviation. The comparison indicated that the quick propagation (QP) algorithm had the minimum root mean squared error and absolute average deviation, and the maximum coefficient of determination. The importance values of the variables were 42.41% for vetiver slip density, 29.8% for removal time, and 27.79% for POMSE concentration, showing that none of them is negligible. The results show that the ANN has great potential for predicting COD and BOD removal from POMSE, with a residual standard error (RSE) of less than 0.45%.
Phillips, Steven P.; Belitz, Kenneth
1991-01-01
The occurrence of selenium in agricultural drain water from the western San Joaquin Valley, California, has focused concern on the semiconfined ground-water flow system, which is underlain by the Corcoran Clay Member of the Tulare Formation. A two-step procedure is used to calibrate a preliminary model of the system for the purpose of determining the steady-state hydraulic properties. Horizontal and vertical hydraulic conductivities are modeled as functions of the percentage of coarse sediment, hydraulic conductivities of coarse-textured (Kcoarse) and fine-textured (Kfine) end members, and averaging methods used to calculate equivalent hydraulic conductivities. The vertical conductivity of the Corcoran (Kcorc) is an additional parameter to be evaluated. In the first step of the calibration procedure, the model is run by systematically varying the following variables: (1) Kcoarse/Kfine, (2) Kcoarse/Kcorc, and (3) choice of averaging methods in the horizontal and vertical directions. Root mean square error and bias values calculated from the model results are functions of these variables. These measures of error provide a means for evaluating model sensitivity and for selecting values of Kcoarse, Kfine, and Kcorc for use in the second step of the calibration procedure. In the second step, recharge rates are evaluated as functions of Kcoarse, Kcorc, and a combination of averaging methods. The associated Kfine values are selected so that the root mean square error is minimized on the basis of the results from the first step. The results of the two-step procedure indicate that the spatial distribution of hydraulic conductivity that best produces the measured hydraulic head distribution is created through the use of arithmetic averaging in the horizontal direction and either geometric or harmonic averaging in the vertical direction. The equivalent hydraulic conductivities resulting from either combination of averaging methods compare favorably to field- and laboratory-based values.
Applications and Comparisons of Four Time Series Models in Epidemiological Surveillance Data
Young, Alistair A.; Li, Xiaosong
2014-01-01
Public health surveillance systems provide valuable data for reliable prediction of future epidemic events. This paper describes a study that used nine types of infectious disease data collected through a national public health surveillance system in mainland China to evaluate and compare the performances of four time series methods, namely, two decomposition methods (regression and exponential smoothing), autoregressive integrated moving average (ARIMA) and support vector machine (SVM). The data obtained from 2005 to 2011 and in 2012 were used as modeling and forecasting samples, respectively. The performances were evaluated based on three metrics: mean absolute error (MAE), mean absolute percentage error (MAPE), and mean square error (MSE). The accuracy of the statistical models in forecasting future epidemic disease proved their effectiveness in epidemiological surveillance. Although the comparisons found that no single method is completely superior to the others, the present study indeed highlighted that the SVMs outperform the ARIMA model and decomposition methods in most cases. PMID:24505382
The heterogeneity statistic I(2) can be biased in small meta-analyses.
von Hippel, Paul T
2015-04-14
Estimated effects vary across studies, partly because of random sampling error and partly because of heterogeneity. In meta-analysis, the fraction of variance that is due to heterogeneity is estimated by the statistic I(2). We calculate the bias of I(2), focusing on the situation where the number of studies in the meta-analysis is small. Small meta-analyses are common; in the Cochrane Library, the median number of studies per meta-analysis is 7 or fewer. We use Mathematica software to calculate the expectation and bias of I(2). I(2) has a substantial bias when the number of studies is small. The bias is positive when the true fraction of heterogeneity is small, but the bias is typically negative when the true fraction of heterogeneity is large. For example, with 7 studies and no true heterogeneity, I(2) will overestimate heterogeneity by an average of 12 percentage points, but with 7 studies and 80 percent true heterogeneity, I(2) can underestimate heterogeneity by an average of 28 percentage points. Biases of 12-28 percentage points are not trivial when one considers that, in the Cochrane Library, the median I(2) estimate is 21 percent. The point estimate I(2) should be interpreted cautiously when a meta-analysis has few studies. In small meta-analyses, confidence intervals should supplement or replace the biased point estimate I(2).
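For reference, a minimal sketch of the usual point estimate of I^2 computed from Cochran's Q with inverse-variance weights; this is the standard formula, not the bias calculation the authors performed in Mathematica, and the example data are hypothetical.

```python
import numpy as np

def i_squared(effects, variances):
    """Point estimate I^2 = max(0, (Q - (k-1))/Q)*100 from study effect sizes
    and their within-study variances (inverse-variance weighted Cochran's Q)."""
    w = 1.0 / np.asarray(variances, float)
    eff = np.asarray(effects, float)
    pooled = np.sum(w * eff) / np.sum(w)      # fixed-effect pooled estimate
    Q = np.sum(w * (eff - pooled) ** 2)       # Cochran's heterogeneity statistic
    k = len(eff)
    return 0.0 if Q <= 0 else max(0.0, (Q - (k - 1)) / Q) * 100.0

# Hypothetical meta-analysis of 7 studies.
print(i_squared([0.2, 0.35, 0.1, 0.5, 0.25, 0.4, 0.15],
                [0.02, 0.03, 0.025, 0.04, 0.02, 0.05, 0.03]))
```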
Wilde, M C; Boake, C; Sherer, M
2000-01-01
Final broken configuration errors on the Wechsler Adult Intelligence Scale-Revised (WAIS-R; Wechsler, 1981) Block Design subtest were examined in 50 moderate and severe nonpenetrating traumatically brain injured adults. Patients were divided into left (n = 15) and right hemisphere (n = 19) groups based on a history of unilateral craniotomy for treatment of an intracranial lesion and were compared to a group with diffuse or negative brain CT scan findings and no history of neurosurgery (n = 16). The percentage of final broken configuration errors was related to injury severity, Benton Visual Form Discrimination Test (VFD; Benton, Hamsher, Varney, & Spreen, 1983) total score and the number of VFD rotation and peripheral errors. The percentage of final broken configuration errors was higher in the patients with right craniotomies than in the left or no craniotomy groups, which did not differ. Broken configuration errors did not occur more frequently on designs without an embedded grid pattern. Right craniotomy patients did not show a greater percentage of broken configuration errors on nongrid designs as compared to grid designs.
van Walbeek, Corné
2014-01-01
Background: The tobacco industry claims that illicit trade in cigarettes has increased sharply since the 1990s and that government has lost substantial tax revenue. Objectives: (1) To determine whether cigarette excise tax revenue has been below budget in recent years, compared with previous decades. (2) To determine trends in the size of the illicit market since 1995. Methods: For (1), mean percentage errors and root mean square percentage errors were calculated for budget revenue deviation for three products (cigarettes, beer and spirits), for various subperiods. For (2), predicted changes in total consumption, using actual cigarette price and GDP changes and previously published price and income elasticity estimates, were calculated and compared with changes in tax-paid consumption. Results: Cigarette excise revenues were 0.7% below budget for 2000–2012 on average, compared with 3.0% below budget for beer and 4.7% below budget for spirits. There is no evidence that illicit trade in cigarettes in South Africa increased between 2002 and 2009. There is a substantial increase in illicit trade in 2010, probably peaking in 2011. In 2012 tax-paid consumption of cigarettes increased 2.6%, implying that the illicit market share decreased an estimated 0.6 percentage points. Conclusions: Other than in 2010, there is no evidence that illicit trade is significantly undermining government revenue. Claims that illicit trade has consistently increased over the past 15 years, and has continued its sharp increase since 2010, are not supported. PMID:24431121
NASA Astrophysics Data System (ADS)
Kato, Takeyoshi; Sone, Akihito; Shimakage, Toyonari; Suzuoki, Yasuo
A microgrid (MG) is one of the measures for enabling high penetration of renewable energy (RE)-based distributed generators (DGs). To construct a MG economically, the capacity optimization of controllable DGs against RE-based DGs is essential. Using a numerical simulation model developed from demonstrative studies of a MG with a PAFC and a NaS battery as controllable DGs and a photovoltaic power generation system (PVS) as an RE-based DG, this study discusses the influence of the forecast accuracy of PVS output on capacity optimization and daily operation, evaluated in terms of cost. The main results are as follows. The required capacity of the NaS battery must be increased by 10-40% relative to the ideal situation without forecast error in PVS power output. The influence of forecast error on the received grid electricity is not significant on an annual basis, because positive and negative forecast errors vary from day to day. The annual total cost of facilities and operation increases by 2-7% due to the forecast errors applied in this study. The impacts of forecast error on facility optimization and on operation optimization are almost the same, each amounting to a few percent, implying that forecast accuracy should be improved in terms of both the number of days with large forecast errors and the average error.
26 CFR 1.410(b)-5 - Average benefit percentage test.
Code of Federal Regulations, 2011 CFR
2011-04-01
... percentage of a group of employees for a testing period is the average of the employee benefit percentages... different definitions of average annual compensation; (C) Use of different testing ages; (D) Use of...) Restriction on use of separate testing group determination method. A plan does not satisfy the average benefit...
Cecconi, Maurizio; Rhodes, Andrew; Poloniecki, Jan; Della Rocca, Giorgio; Grounds, R Michael
2009-01-01
Bland-Altman analysis is used for assessing agreement between two measurements of the same clinical variable. In the field of cardiac output monitoring, its results, in terms of bias and limits of agreement, are often difficult to interpret, leading clinicians to use a cutoff of 30% in the percentage error in order to decide whether a new technique may be considered a good alternative. This percentage error of +/- 30% arises from the assumption that the commonly used reference technique, intermittent thermodilution, has a precision of +/- 20% or less. The combination of two precisions of +/- 20% equates to a total error of +/- 28.3%, which is commonly rounded up to +/- 30%. Thus, finding a percentage error of less than +/- 30% should equate to the new tested technique having an error similar to the reference, which therefore should be acceptable. In a worked example in this paper, we discuss the limitations of this approach, in particular with regard to the situation in which the reference technique may be either more or less precise than would normally be expected. This can lead to inappropriate conclusions being drawn from data acquired in validation studies of new monitoring technologies. We conclude that it is not acceptable to present comparison studies quoting percentage error as an acceptability criterion without reporting the precision of the reference technique.
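A hedged sketch of the arithmetic: the percentage error as commonly used in cardiac output validation (1.96 x SD of the differences over the mean cardiac output) and the error obtained by combining two +/-20% precisions in quadrature. The data and the exact denominator convention are assumptions, not the paper's worked example.

```python
import numpy as np

def percentage_error(reference, test):
    """Percentage error as commonly used in cardiac output validation studies
    (assumed convention): 1.96 * SD of the differences / mean cardiac output."""
    reference = np.asarray(reference, float)
    test = np.asarray(test, float)
    diffs = test - reference
    mean_co = np.mean((reference + test) / 2.0)
    return 100.0 * 1.96 * np.std(diffs, ddof=1) / mean_co

# Combining two techniques that each carry +/-20% precision (in quadrature):
combined = np.hypot(20.0, 20.0)
print(round(float(combined), 1))   # ~ 28.3, commonly rounded up to 30
```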
Ries, Kernell G.; Eng, Ken
2010-01-01
The U.S. Geological Survey, in cooperation with the Maryland Department of the Environment, operated a network of 20 low-flow partial-record stations during 2008 in a region that extends from southwest of Baltimore to the northeastern corner of Maryland to obtain estimates of selected streamflow statistics at the station locations. The study area is expected to face a substantial influx of new residents and businesses as a result of military and civilian personnel transfers associated with the Federal Base Realignment and Closure Act of 2005. The estimated streamflow statistics, which include monthly 85-percent duration flows, the 10-year recurrence-interval minimum base flow, and the 7-day, 10-year low flow, are needed to provide a better understanding of the availability of water resources in the area to be affected by base-realignment activities. Streamflow measurements collected for this study at the low-flow partial-record stations and measurements collected previously for 8 of the 20 stations were related to concurrent daily flows at nearby index streamgages to estimate the streamflow statistics. Three methods were used to estimate the streamflow statistics and two methods were used to select the index streamgages. Of the three methods used to estimate the streamflow statistics, two of them--the Moments and MOVE1 methods--rely on correlating the streamflow measurements at the low-flow partial-record stations with concurrent streamflows at nearby, hydrologically similar index streamgages to determine the estimates. These methods, recommended for use by the U.S. Geological Survey, generally require about 10 streamflow measurements at the low-flow partial-record station. The third method transfers the streamflow statistics from the index streamgage to the partial-record station based on the average of the ratios of the measured streamflows at the partial-record station to the concurrent streamflows at the index streamgage. This method can be used with as few as one pair of streamflow measurements made on a single streamflow recession at the low-flow partial-record station, although additional pairs of measurements will increase the accuracy of the estimates. Errors associated with the two correlation methods generally were lower than the errors associated with the flow-ratio method, but the advantages of the flow-ratio method are that it can produce reasonably accurate estimates from streamflow measurements much faster and at lower cost than estimates obtained using the correlation methods. The two index-streamgage selection methods were (1) selection based on the highest correlation coefficient between the low-flow partial-record station and the index streamgages, and (2) selection based on Euclidean distance, where the Euclidean distance was computed as a function of geographic proximity and the basin characteristics: drainage area, percentage of forested area, percentage of impervious area, and the base-flow recession time constant, t. Method 1 generally selected index streamgages that were significantly closer to the low-flow partial-record stations than method 2. The errors associated with the estimated streamflow statistics generally were lower for method 1 than for method 2, but the differences were not statistically significant. The flow-ratio method for estimating streamflow statistics at low-flow partial-record stations was shown to be independent from the two correlation-based estimation methods. 
As a result, final estimates were determined for eight low-flow partial-record stations by weighting estimates from the flow-ratio method with estimates from one of the two correlation methods according to the respective variances of the estimates. Average standard errors of estimate for the final estimates ranged from 90.0 to 7.0 percent, with an average value of 26.5 percent. Average standard errors of estimate for the weighted estimates were, on average, 4.3 percent less than the best average standard errors of estima
Sensitivity of feedforward neural networks to weight errors
NASA Technical Reports Server (NTRS)
Stevenson, Maryhelen; Widrow, Bernard; Winter, Rodney
1990-01-01
An analysis is made of the sensitivity of feedforward layered networks of Adaline elements (threshold logic units) to weight errors. An approximation is derived which expresses the probability of error for an output neuron of a large network (a network with many neurons per layer) as a function of the percentage change in the weights. As would be expected, the probability of error increases with the number of layers in the network and with the percentage change in the weights. The probability of error is essentially independent of the number of weights per neuron and of the number of neurons per layer, as long as these numbers are large (on the order of 100 or more).
Using a Hybrid Model to Forecast the Prevalence of Schistosomiasis in Humans.
Zhou, Lingling; Xia, Jing; Yu, Lijing; Wang, Ying; Shi, Yun; Cai, Shunxiang; Nie, Shaofa
2016-03-23
We previously proposed a hybrid model combining both the autoregressive integrated moving average (ARIMA) and the nonlinear autoregressive neural network (NARNN) models in forecasting schistosomiasis. Our purpose in the current study was to forecast the annual prevalence of human schistosomiasis in Yangxin County, using our ARIMA-NARNN model, thereby further certifying the reliability of our hybrid model. We used the ARIMA, NARNN and ARIMA-NARNN models to fit and forecast the annual prevalence of schistosomiasis. The modeling time range included was the annual prevalence from 1956 to 2008 while the testing time range included was from 2009 to 2012. The mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) were used to measure the model performance. We reconstructed the hybrid model to forecast the annual prevalence from 2013 to 2016. The modeling and testing errors generated by the ARIMA-NARNN model were lower than those obtained from either the single ARIMA or NARNN models. The predicted annual prevalence from 2013 to 2016 demonstrated an initial decreasing trend, followed by an increase. The ARIMA-NARNN model can be well applied to analyze surveillance data for early warning systems for the control and elimination of schistosomiasis.
NASA Astrophysics Data System (ADS)
Salehi, Mohammad Reza; Noori, Leila; Abiri, Ebrahim
2016-11-01
In this paper, a subsystem consisting of a microstrip bandpass filter and a microstrip low noise amplifier (LNA) is designed for WLAN applications. The proposed filter has a small implementation area (49 mm2), small insertion loss (0.08 dB) and wide fractional bandwidth (FBW) (61%). To design the proposed LNA, compact microstrip cells, a field-effect transistor, and only a lumped capacitor are used. It has a low supply voltage and a low return loss (-40 dB) at the operation frequency. The matching condition of the proposed subsystem is predicted using subsystem analysis, an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS). To design the proposed filter, the transmission matrix of the proposed resonator is obtained and analysed. The performance of the proposed ANN and ANFIS models is tested against the numerical data using four performance measures, namely the correlation coefficient (CC), the mean absolute error (MAE), the average percentage error (APE) and the root mean square error (RMSE). The obtained results show that these models are in good agreement with the numerical data, with only a small error between the predicted values and the numerical solution.
Scatter and veiling glare corrections for quantitative digital subtraction angiography
NASA Astrophysics Data System (ADS)
Ersahin, Atila; Molloi, Sabee Y.; Qian, Yao-Jin
1994-05-01
In order to quantitate anatomical and physiological parameters such as vessel dimensions and volumetric blood flow, it is necessary to make corrections for scatter and veiling glare (SVG), which are the major sources of nonlinearities in videodensitometric digital subtraction angiography (DSA). A convolution filtering technique has been investigated to estimate the SVG distribution in DSA images without the need to sample the SVG for each patient. This technique utilizes exposure parameters and image gray levels to estimate SVG intensity by predicting the total thickness for every pixel in the image. Corrections were also made for the variation of SVG fraction with beam energy and field size. To test its ability to estimate SVG intensity, the correction technique was applied to images of a Lucite step phantom, an anthropomorphic chest phantom, a head phantom, and animal models at different thicknesses, projections, and beam energies. The root-mean-square (rms) percentage errors of these estimates were obtained by comparison with direct SVG measurements made behind a lead strip. The average rms percentage errors in the SVG estimate for the 25 phantom studies and for the 17 animal studies were 6.22% and 7.96%, respectively. These results indicate that the SVG intensity can be estimated for a wide range of thicknesses, projections, and beam energies.
Chae, Jin Seok; Park, Jin; So, Wi-Young
2017-07-28
The purpose of this study was to suggest a ranking prediction model using the competition records of Ladies Professional Golf Association (LPGA) players. The top 100 players on the tour money list from the 2013-2016 US Open were analyzed in this model. Stepwise regression analysis was conducted to examine the effect of performance and independent variables (i.e., driving accuracy, green in regulation, putts per round, driving distance, percentage of sand saves, par-3 average, par-4 average, par-5 average, birdies average, and eagle average) on dependent variables (i.e., scoring average, official money, top-10 finishes, winning percentage, and 60-strokes average). The following prediction models were suggested:
Y (Scoring average) = 55.871 - 0.947 (Birdies average) + 4.576 (Par-4 average) - 0.028 (Green in regulation) - 0.012 (Percentage of sand saves) + 2.088 (Par-3 average) - 0.026 (Driving accuracy) - 0.017 (Driving distance) + 0.085 (Putts per round)
Y (Official money) = 6628736.723 + 528557.907 (Birdies average) - 1831800.821 (Par-4 average) + 11681.739 (Green in regulation) + 6476.344 (Percentage of sand saves) - 688115.074 (Par-3 average) + 7375.971 (Driving accuracy)
Y (Top-10 finish%) = 204.462 + 12.562 (Birdies average) - 47.745 (Par-4 average) + 1.633 (Green in regulation) - 5.151 (Putts per round) + 0.132 (Percentage of sand saves)
Y (Winning percentage) = 49.949 + 3.191 (Birdies average) - 15.023 (Par-4 average) + 0.043 (Percentage of sand saves)
Y (60-strokes average) = 217.649 + 13.978 (Birdies average) - 44.855 (Par-4 average) - 22.433 (Par-3 average) + 0.16 (Green in regulation)
Applying the five prediction models to the 2016 Women's Golf Olympic competition in Rio revealed a significant correlation between the predicted and actual rankings (r = 0.689, p < 0.001) and between the predicted and actual average scores (r = 0.653, p < 0.001). Our ranking prediction model using LPGA data may help coaches and players identify which players are likely to participate in Olympic and World competitions, based on their performance.
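The scoring-average equation above can be wrapped directly in a function; a minimal Python sketch follows, with the coefficients copied verbatim from the abstract (the argument names and the sample input values are mine).

```python
def predicted_scoring_average(birdies_avg, par4_avg, green_in_regulation,
                              sand_save_pct, par3_avg, driving_accuracy,
                              driving_distance, putts_per_round):
    """Scoring-average equation reported in the abstract; coefficients copied
    verbatim, argument names are illustrative."""
    return (55.871
            - 0.947 * birdies_avg
            + 4.576 * par4_avg
            - 0.028 * green_in_regulation
            - 0.012 * sand_save_pct
            + 2.088 * par3_avg
            - 0.026 * driving_accuracy
            - 0.017 * driving_distance
            + 0.085 * putts_per_round)

# Hypothetical season averages for one player:
print(round(predicted_scoring_average(3.8, 4.05, 70.2, 45.0, 3.02, 78.5, 252.0, 29.8), 2))
```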
Zhang, Xujun; Pang, Yuanyuan; Cui, Mengjing; Stallones, Lorann; Xiang, Huiyun
2015-02-01
Road traffic injuries have become a major public health problem in China. This study aimed to develop statistical models for predicting road traffic deaths and to analyze seasonality of deaths in China. A seasonal autoregressive integrated moving average (SARIMA) model was used to fit the data from 2000 to 2011. Akaike Information Criterion, Bayesian Information Criterion, and mean absolute percentage error were used to evaluate the constructed models. Autocorrelation function and partial autocorrelation function of residuals and Ljung-Box test were used to compare the goodness-of-fit between the different models. The SARIMA model was used to forecast monthly road traffic deaths in 2012. The seasonal pattern of road traffic mortality data was statistically significant in China. SARIMA (1, 1, 1) (0, 1, 1)12 model was the best fitting model among various candidate models; the Akaike Information Criterion, Bayesian Information Criterion, and mean absolute percentage error were -483.679, -475.053, and 4.937, respectively. Goodness-of-fit testing showed nonautocorrelations in the residuals of the model (Ljung-Box test, Q = 4.86, P = .993). The fitted deaths using the SARIMA (1, 1, 1) (0, 1, 1)12 model for years 2000 to 2011 closely followed the observed number of road traffic deaths for the same years. The predicted and observed deaths were also very close for 2012. This study suggests that accurate forecasting of road traffic death incidence is possible using SARIMA model. The SARIMA model applied to historical road traffic deaths data could provide important evidence of burden of road traffic injuries in China. Copyright © 2015 Elsevier Inc. All rights reserved.
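A minimal statsmodels sketch of fitting the SARIMA(1,1,1)(0,1,1)12 order reported above and checking the residuals with the Ljung-Box test and information criteria; the monthly series below is synthetic, standing in for the unavailable death counts.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX
from statsmodels.stats.diagnostic import acorr_ljungbox

# Synthetic monthly counts stand in for the road traffic death series.
index = pd.date_range("2000-01-01", periods=144, freq="MS")
deaths = pd.Series(np.random.default_rng(1).poisson(600, size=144), index=index)

results = SARIMAX(deaths, order=(1, 1, 1), seasonal_order=(0, 1, 1, 12)).fit(disp=False)
print(results.aic, results.bic)                                   # information criteria
print(acorr_ljungbox(results.resid, lags=[12], return_df=True))   # large p => white noise
```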
Yang, Jie; Liu, Qingquan; Dai, Wei; Ding, Renhui
2016-08-01
Due to the solar radiation effect, current air temperature sensors inside a thermometer screen or radiation shield may produce measurement errors of 0.8 °C or higher. To improve the observation accuracy, an aspirated temperature measurement platform is designed. A computational fluid dynamics (CFD) method is implemented to analyze and calculate the radiation error of the aspirated temperature measurement platform under various environmental conditions. Then, a radiation error correction equation is obtained by fitting the CFD results using a genetic algorithm (GA) method. In order to verify the performance of the temperature sensor, the aspirated temperature measurement platform, temperature sensors with a naturally ventilated radiation shield, and a thermometer screen are characterized in the same environment to conduct the intercomparison. The average radiation errors of the sensors in the naturally ventilated radiation shield and the thermometer screen are 0.44 °C and 0.25 °C, respectively. In contrast, the radiation error of the aspirated temperature measurement platform is as low as 0.05 °C. This aspirated temperature sensor reduces the radiation error by approximately 88.6% compared to the naturally ventilated radiation shield, and by approximately 80% compared to the thermometer screen. The mean absolute error and root mean square error between the correction equation and experimental results are 0.032 °C and 0.036 °C, respectively, which demonstrates the accuracy of the CFD and GA methods proposed in this research.
Comparative study of four time series methods in forecasting typhoid fever incidence in China.
Zhang, Xingyu; Liu, Yuanyuan; Yang, Min; Zhang, Tao; Young, Alistair A; Li, Xiaosong
2013-01-01
Accurate incidence forecasting of infectious disease is critical for early prevention and for better government strategic planning. In this paper, we present a comprehensive study of different forecasting methods based on the monthly incidence of typhoid fever. The seasonal autoregressive integrated moving average (SARIMA) model and three different models inspired by neural networks, namely, back propagation neural networks (BPNN), radial basis function neural networks (RBFNN), and Elman recurrent neural networks (ERNN) were compared. The differences as well as the advantages and disadvantages, among the SARIMA model and the neural networks were summarized and discussed. The data obtained for 2005 to 2009 and for 2010 from the Chinese Center for Disease Control and Prevention were used as modeling and forecasting samples, respectively. The performances were evaluated based on three metrics: mean absolute error (MAE), mean absolute percentage error (MAPE), and mean square error (MSE). The results showed that RBFNN obtained the smallest MAE, MAPE and MSE in both the modeling and forecasting processes. The performances of the four models ranked in descending order were: RBFNN, ERNN, BPNN and the SARIMA model.
Forecasting Daily Patient Outflow From a Ward Having No Real-Time Clinical Data
Tran, Truyen; Luo, Wei; Phung, Dinh; Venkatesh, Svetha
2016-01-01
Background: Modeling patient flow is crucial in understanding resource demand and prioritization. We study patient outflow from an open ward in an Australian hospital, where bed allocation is currently carried out by a manager relying on past experience and observed demand. Automatic methods that provide a reasonable estimate of total next-day discharges can aid efficient bed management. The challenges in building such methods lie in dealing with the large amount of discharge noise introduced by the nonlinear nature of hospital procedures, and in the nonavailability of real-time clinical information in wards. Objective: Our study investigates different models to forecast the total number of next-day discharges from an open ward having no real-time clinical data. Methods: We compared 5 popular regression algorithms to model total next-day discharges: (1) autoregressive integrated moving average (ARIMA), (2) the autoregressive moving average with exogenous variables (ARMAX), (3) k-nearest neighbor regression, (4) random forest regression, and (5) support vector regression. Whereas the autoregressive integrated moving average model relied on the past 3 months of discharges, nearest neighbor forecasting used the median of similar past discharges to estimate the next-day discharge. In addition, the ARMAX model used the day of the week and the number of patients currently in the ward as exogenous variables. For the random forest and support vector regression models, we designed a predictor set of 20 patient features and 88 ward-level features. Results: Our data consisted of 12,141 patient visits over 1826 days. Forecasting quality was measured using mean forecast error, mean absolute error, symmetric mean absolute percentage error, and root mean square error. When compared with a moving average prediction model, all 5 models demonstrated superior performance, with the random forests achieving a 22.7% improvement in mean absolute error for all days in the year 2014. Conclusions: In the absence of clinical information, our study recommends using patient-level and ward-level data in predicting next-day discharges. Random forest and support vector regression models are able to use all available features from such data, resulting in superior performance over traditional autoregressive methods. An intelligent estimate of available beds in wards plays a crucial role in relieving access block in emergency departments. PMID:27444059
Becerra-Luna, Brayans; Martínez-Memije, Raúl; Cartas-Rosado, Raúl; Infante-Vázquez, Oscar
To improve the identification of peaks and feet in photoplethysmographic (PPG) pulses deformed by myokinetic noise, through the implementation of a modified fingertip and the application of adaptive filtering. PPG signals were recorded from 10 healthy volunteers using two photoplethysmography systems placed on the index finger of each hand. Recordings lasted three minutes and were done as follows: during the first minute, both hands were at rest, and for the remaining two minutes only the left hand was allowed to make quasi-periodic movements in order to add myokinetic noise. Two methodologies were employed to process the signals off-line. One consisted of an adaptive filter based on the Least Mean Square (LMS) algorithm, and the other included a preprocessing stage in addition to the same LMS filter. Both filtering methods were compared, and the one with the lowest error was chosen to assess the improvement in the identification of peaks and feet of PPG pulses. The average percentage errors obtained were 22.94% with the first filtering methodology and 3.72% with the second one. On identifying peaks and feet of PPG pulses before filtering, the error percentages obtained were 24.26% and 48.39%, respectively; once filtered, the error percentages decreased to 2.02% for peaks and 3.77% for feet. The attenuation of myokinetic noise in PPG pulses through LMS filtering, plus a preprocessing stage, increases the effectiveness of identifying peaks and feet of PPG pulses, which are of great importance for medical assessment. Copyright © 2016 Instituto Nacional de Cardiología Ignacio Chávez. Published by Masson Doyma México S.A. All rights reserved.
Nitschke, J E; Nattrass, C L; Disler, P B; Chou, M J; Ooi, K T
1999-02-01
Repeated measures design for intra- and interrater reliability. To determine the intra- and interrater reliability of the lumbar spine range of motion measured with a dual inclinometer, and the thoracolumbar spine range of motion measured with a long-arm goniometer, as recommended in the American Medical Association Guides. The American Medical Association Guides (2nd and 4th editions) recommend using measurements of thoracolumbar and lumbar range of movement, respectively, to estimate the percentage of permanent impairment in patients with chronic low back pain. However, the reliability of this method of estimating impairment has not been determined. In all, 34 subjects participated in the study, 21 women with a mean age of 40.1 years (SD, +/- 11.1) and 13 men with a mean age of 47.7 years (SD, +/- 12.1). Measures of thoracolumbar flexion, extension, lateral flexion, and rotation were obtained with a long-arm goniometer. Lumbar flexion, extension, and lateral flexion were measured with a dual inclinometer. Measurements were taken by two examiners on one occasion and by one examiner on two occasions approximately 1 week apart. The results showed poor intra- and interrater reliability for all measurements taken with both instruments. Measurement error expressed in degrees showed that measurements taken by different raters exhibited systematic as well as random differences. As a result, subjects measured by two different examiners on the same day, with either instrument, could give impairment ratings ranging between 0% and 18% of the whole person (excluding rotation), in which percentage impairment is calculated using the average range of motion and the average systematic and random error in degrees for the group for each movement (flexion, extension, and lateral flexion). The poor reliability of the American Medical Association Guides' spinal range of motion model can result in marked variation in the percentage of whole-body impairment. These findings have implications for compensation bodies in Australia and other countries that use the American Medical Association Guides' procedure to estimate impairment in chronic low back pain patients.
Holló, Gábor; Shu-Wei, Hsu; Naghizadeh, Farzaneh
2016-06-01
To compare the current (6.3) and a novel software version (6.12) of the RTVue-100 optical coherence tomograph (RTVue-OCT) for ganglion cell complex (GCC) and retinal nerve fiber layer thickness (RNFLT) image segmentation and detection of glaucoma in high myopia. RNFLT and GCC scans were acquired with software version 6.3 of the RTVue-OCT on 51 highly myopic eyes (spherical refractive error ≤-6.0 D) of 51 patients, and were analyzed with both software versions. Twenty-two eyes were nonglaucomatous, 13 were ocular hypertensive and 16 eyes had glaucoma. No difference was seen for any RNFLT or average GCC parameter between the software versions (paired t test, P≥0.084). Global loss volume was significantly lower (more normal) with version 6.12 than with version 6.3 (Wilcoxon signed-rank test, P<0.001). The agreement (κ) between the clinical classification (normal and ocular hypertensive vs. glaucoma) and the software-provided classification (normal and borderline vs. outside normal limits) was 0.3219 and 0.4442 for average RNFLT, and 0.2926 and 0.4977 for average GCC, with versions 6.3 and 6.12, respectively (McNemar symmetry test, P≥0.289). No difference in average RNFLT and GCC classification (McNemar symmetry test, P≥0.727) or in the number of eyes with at least 1 segmentation error (P≥0.109) was found between the software versions. Although GCC segmentation improved with software version 6.12 compared with the current version in highly myopic eyes, this did not result in a significant change of the average RNFLT and GCC values, and did not significantly improve the software-provided classification for glaucoma.
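Kappa-based agreement of the kind reported here is commonly computed with Cohen's kappa; a small sketch with hypothetical labels follows (the real classifications are not reproduced in the abstract).

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical labels, both raters collapsed to the same two categories:
# clinical status (glaucoma vs. not) and software call (outside normal limits vs. not).
clinical = ["glaucoma", "no", "no", "glaucoma", "no", "glaucoma", "no", "no"]
software = ["glaucoma", "no", "glaucoma", "glaucoma", "no", "no", "no", "no"]

print(round(cohen_kappa_score(clinical, software), 4))
```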
Quality Aware Compression of Electrocardiogram Using Principal Component Analysis.
Gupta, Rajarshi
2016-05-01
Electrocardiogram (ECG) compression finds wide application in various patient monitoring purposes. Quality control in ECG compression ensures reconstruction quality and its clinical acceptance for diagnostic decision making. In this paper, a quality aware compression method of single lead ECG is described using principal component analysis (PCA). After pre-processing, beat extraction and PCA decomposition, two independent quality criteria, namely, bit rate control (BRC) or error control (EC) criteria were set to select optimal principal components, eigenvectors and their quantization level to achieve desired bit rate or error measure. The selected principal components and eigenvectors were finally compressed using a modified delta and Huffman encoder. The algorithms were validated with 32 sets of MIT Arrhythmia data and 60 normal and 30 sets of diagnostic ECG data from PTB Diagnostic ECG data ptbdb, all at 1 kHz sampling. For BRC with a CR threshold of 40, an average Compression Ratio (CR), percentage root mean squared difference normalized (PRDN) and maximum absolute error (MAE) of 50.74, 16.22 and 0.243 mV respectively were obtained. For EC with an upper limit of 5 % PRDN and 0.1 mV MAE, the average CR, PRDN and MAE of 9.48, 4.13 and 0.049 mV respectively were obtained. For mitdb data 117, the reconstruction quality could be preserved up to CR of 68.96 by extending the BRC threshold. The proposed method yields better results than recently published works on quality controlled ECG compression.
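A short sketch of the PRDN measure as it is usually defined for ECG compression (mean-removed normalisation assumed, not necessarily the paper's exact convention), together with the compression ratio; the ECG snippet is synthetic.

```python
import numpy as np

def prdn(original, reconstructed):
    """Percentage root-mean-squared difference, normalised by removing the
    signal mean (assumed PRDN convention)."""
    x = np.asarray(original, float)
    xr = np.asarray(reconstructed, float)
    return 100.0 * np.sqrt(np.sum((x - xr) ** 2) / np.sum((x - x.mean()) ** 2))

def compression_ratio(original_bits, compressed_bits):
    """CR: size of the raw record over the size of the compressed record."""
    return original_bits / compressed_bits

# Hypothetical 1 kHz snippet and an imperfect reconstruction.
t = np.arange(0, 1, 0.001)
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.sin(2 * np.pi * 50 * t)
recon = ecg + np.random.default_rng(2).normal(0, 0.02, ecg.size)
print(round(prdn(ecg, recon), 2))
```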
Liang, Hao; Gao, Lian; Liang, Bingyu; Huang, Jiegang; Zang, Ning; Liao, Yanyan; Yu, Jun; Lai, Jingzhen; Qin, Fengxiang; Su, Jinming; Ye, Li; Chen, Hui
2016-01-01
Background: Hepatitis is a serious public health problem with increasing cases and property damage in Heng County. It is necessary to develop a model to predict the hepatitis epidemic that could be useful for preventing this disease. Methods: The autoregressive integrated moving average (ARIMA) model and the generalized regression neural network (GRNN) model were used to fit the incidence data from the Heng County CDC (Center for Disease Control and Prevention) from January 2005 to December 2012. Then, the ARIMA-GRNN hybrid model was developed. The incidence data from January 2013 to December 2013 were used to validate the models. Several parameters, including mean absolute error (MAE), root mean square error (RMSE), mean absolute percentage error (MAPE) and mean square error (MSE), were used to compare the performance among the three models. Results: The morbidity of hepatitis from January 2005 to December 2012 showed seasonal variation and a slightly rising trend. The ARIMA(0,1,2)(1,1,1)12 model was the most appropriate one, with a residual test showing a white noise sequence. The smoothing factors of the basic GRNN model and the combined model were 1.8 and 0.07, respectively. The four parameters of the hybrid model were lower than those of the two single models in the validation. The parameter values of the GRNN model were the lowest in the fitting of the three models. Conclusions: The hybrid ARIMA-GRNN model showed better hepatitis incidence forecasting in Heng County than the single ARIMA model and the basic GRNN model. It is a potential decision-supportive tool for controlling hepatitis in Heng County. PMID:27258555
Balaban, M O; Aparicio, J; Zotarelli, M; Sims, C
2008-11-01
The average colors of mangos and apples were measured using machine vision. A method to quantify the perception of nonhomogeneous colors by sensory panelists was developed. Untrained panelists selected three colors out of several reference colors and their perceived percentage of the total sample area. Differences between the average colors perceived by panelists and those from the machine vision were reported as DeltaE values (color difference error). The effects on DeltaE of color nonhomogeneity and of using real samples versus their images in the sensory panels were evaluated. In general, samples with more nonuniform colors had higher DeltaE values, suggesting that panelists had more difficulty evaluating more nonhomogeneous colors. There was no significant difference in DeltaE values between the real fruits and their screen images; therefore, images can be used to evaluate color instead of the real samples.
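The color-difference error can be illustrated with the CIE76 Delta-E, i.e. the Euclidean distance between two L*a*b* colors; the abstract does not state which Delta-E formula was used, so CIE76 is an assumption, and the sample values are invented.

```python
import numpy as np

def delta_e_cie76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance between two L*a*b* colours."""
    return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))

# Hypothetical average colours (L*, a*, b*) from the panel and from machine vision.
panel_avg = (52.0, 41.3, 35.2)
machine_avg = (55.1, 38.9, 33.0)
print(round(delta_e_cie76(panel_avg, machine_avg), 2))
```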
Automated documentation error detection and notification improves anesthesia billing performance.
Spring, Stephen F; Sandberg, Warren S; Anupama, Shaji; Walsh, John L; Driscoll, William D; Raines, Douglas E
2007-01-01
Documentation of key times and events is required to obtain reimbursement for anesthesia services. The authors installed an information management system to improve record keeping and billing performance but found that a significant number of their records still could not be billed in a timely manner, and some records were never billed at all because they contained documentation errors. Computer software was developed that automatically examines electronic anesthetic records and alerts clinicians to documentation errors by alphanumeric page and e-mail. The software's efficacy was determined retrospectively by comparing billing performance before and after its implementation. Staff satisfaction with the software was assessed by survey. After implementation of this software, the percentage of anesthetic records that could never be billed declined from 1.31% to 0.04%, and the median time to correct documentation errors decreased from 33 days to 3 days. The average time to release an anesthetic record to the billing service decreased from 3.0+/-0.1 days to 1.1+/-0.2 days. More than 90% of staff found the system to be helpful and easier to use than the previous manual process for error detection and notification. This system allowed the authors to reduce the median time to correct documentation errors and the number of anesthetic records that were never billed by at least an order of magnitude. The authors estimate that these improvements increased their department's revenue by approximately $400,000 per year.
Using a Hybrid Model to Forecast the Prevalence of Schistosomiasis in Humans
Zhou, Lingling; Xia, Jing; Yu, Lijing; Wang, Ying; Shi, Yun; Cai, Shunxiang; Nie, Shaofa
2016-01-01
Background: We previously proposed a hybrid model combining the autoregressive integrated moving average (ARIMA) and the nonlinear autoregressive neural network (NARNN) models for forecasting schistosomiasis. Our purpose in the current study was to forecast the annual prevalence of human schistosomiasis in Yangxin County using our ARIMA-NARNN model, thereby further certifying the reliability of our hybrid model. Methods: We used the ARIMA, NARNN and ARIMA-NARNN models to fit and forecast the annual prevalence of schistosomiasis. The modeling period covered the annual prevalence from 1956 to 2008, while the testing period covered 2009 to 2012. The mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) were used to measure model performance. We reconstructed the hybrid model to forecast the annual prevalence from 2013 to 2016. Results: The modeling and testing errors generated by the ARIMA-NARNN model were lower than those obtained from either the single ARIMA or NARNN models. The predicted annual prevalence from 2013 to 2016 demonstrated an initial decreasing trend, followed by an increase. Conclusions: The ARIMA-NARNN model can be well applied to analyze surveillance data for early warning systems for the control and elimination of schistosomiasis. PMID:27023573
Anomalous annealing of floating gate errors due to heavy ion irradiation
NASA Astrophysics Data System (ADS)
Yin, Yanan; Liu, Jie; Sun, Youmei; Hou, Mingdong; Liu, Tianqi; Ye, Bing; Ji, Qinggang; Luo, Jie; Zhao, Peixiong
2018-03-01
Using the heavy ions provided by the Heavy Ion Research Facility in Lanzhou (HIRFL), the annealing of heavy-ion-induced floating gate (FG) errors in 34 nm and 25 nm NAND Flash memories has been studied. The single event upset (SEU) cross section of the FG and the evolution of the errors after irradiation as a function of the ion linear energy transfer (LET) value, data pattern and feature size of the device are presented. Different annealing rates for different ion LETs and different patterns are observed in the 34 nm and 25 nm memories. The variation with annealing time of the percentage of different error patterns in the 34 nm and 25 nm memories shows that the annealing of heavy-ion-induced FG errors mainly takes place in the cells directly hit under low-LET ion exposure, and also in other cells affected by the heavy ions when the ion LET is higher. The influence of multiple cell upsets (MCUs) on the annealing of FG errors is analyzed. MCUs with high error multiplicity, which account for the majority of the errors, can induce a large percentage of annealed errors.
Code of Federal Regulations, 2011 CFR
2011-01-01
... these same swine. Average lean percentage. The term “average lean percentage” means the value equal to the average percentage of the carcass weight comprised of lean meat for the swine slaughtered during the applicable reporting period. Whenever the packer changes the manner in which the average lean...
Code of Federal Regulations, 2012 CFR
2012-01-01
... these same swine. Average lean percentage. The term “average lean percentage” means the value equal to the average percentage of the carcass weight comprised of lean meat for the swine slaughtered during the applicable reporting period. Whenever the packer changes the manner in which the average lean...
NASA Astrophysics Data System (ADS)
Rasim; Junaeti, E.; Wirantika, R.
2018-01-01
Accurate forecasting of product sales depends on the forecasting method used. The purpose of this research is to build a motorcycle sales forecasting application using the Fuzzy Time Series method combined with interval determination using an automatic clustering algorithm. Forecasting is done using motorcycle sales data from the last ten years. The error rate of the forecasts is then measured using the Mean Percentage Error (MPE) and Mean Absolute Percentage Error (MAPE). The forecasts for the one-year period obtained in this study fall within a good accuracy range.
MEASURING ECONOMIC GROWTH FROM OUTER SPACE.
Henderson, J Vernon; Storeygard, Adam; Weil, David N
2012-04-01
GDP growth is often measured poorly for countries and rarely measured at all for cities or subnational regions. We propose a readily available proxy: satellite data on lights at night. We develop a statistical framework that uses lights growth to augment existing income growth measures, under the assumption that measurement error in using observed light as an indicator of income is uncorrelated with measurement error in national income accounts. For countries with good national income accounts data, information on growth of lights is of marginal value in estimating the true growth rate of income, while for countries with the worst national income accounts, the optimal estimate of true income growth is a composite with roughly equal weights. Among poor-data countries, our new estimate of average annual growth differs by as much as 3 percentage points from official data. Lights data also allow for measurement of income growth in sub- and supranational regions. As an application, we examine growth in Sub Saharan African regions over the last 17 years. We find that real incomes in non-coastal areas have grown faster by 1/3 of an annual percentage point than coastal areas; non-malarial areas have grown faster than malarial ones by 1/3 to 2/3 annual percent points; and primate city regions have grown no faster than hinterland areas. Such applications point toward a research program in which "empirical growth" need no longer be synonymous with "national income accounts."
Çizmeci, Hülya; Çiprut, Ayça
2018-06-01
This study aimed to (1) evaluate the gap-filling skills and reading mistakes of students with cochlear implants, and (2) compare their results with those of their normal-hearing peers. The effects of implantation age and total time of cochlear implant use were analyzed in relation to the subjects' reading skills development. The study included 19 students who underwent cochlear implantation and 20 students with normal hearing, enrolled in the 6th to 8th grades. The subjects' ages ranged between 12 and 14 years. Their reading skills were evaluated using the Informal Reading Inventory. A significant relationship was found between implanted and normal-hearing students in terms of the percentages of reading errors and the percentages of gap-filling scores. The average rank of reading errors of students using cochlear implants was higher than that of normal-hearing students. As for gap filling, the performance of implanted students on the passages was lower than that of their normal-hearing peers. No significant relationship was found between the variables of age and duration of implantation and the reading performance of implanted students. Even when implanted early, implanted students in the older grades showed significant differences in reading performance compared with their normal-hearing peers. Copyright © 2018 Elsevier B.V. All rights reserved.
Road traffic accidents prediction modelling: An analysis of Anambra State, Nigeria.
Ihueze, Chukwutoo C; Onwurah, Uchendu O
2018-03-01
One of the major problems in the world today is the rate of road traffic crashes and deaths on our roads. The majority of these deaths occur in low- and middle-income countries, including Nigeria. This study analyzed road traffic crashes in Anambra State, Nigeria, with the intention of developing accurate predictive models for forecasting crash frequency in the State using autoregressive integrated moving average (ARIMA) and autoregressive integrated moving average with explanatory variables (ARIMAX) modelling techniques. The results showed that the ARIMAX model outperformed the ARIMA(1,1,1) model when their performances were compared using the lower Bayesian information criterion, mean absolute percentage error and root mean square error, and the higher coefficient of determination (R-squared), as accuracy measures. The findings of this study reveal that incorporating human, vehicle and environmental factors in time series analysis of crash data produces a more robust predictive model than solely using aggregated crash counts. This study contributes to the body of knowledge on road traffic safety and provides an approach to forecasting that incorporates many human, vehicle and environmental factors. The recommendations made in this study, if applied, will help to reduce the number of road traffic crashes in Nigeria. Copyright © 2017 Elsevier Ltd. All rights reserved.
Chen, Chieh-Fan; Ho, Wen-Hsien; Chou, Huei-Yin; Yang, Shu-Mei; Chen, I-Te; Shi, Hon-Yi
2011-01-01
This study analyzed meteorological, clinical and economic factors in terms of their effects on monthly ED revenue and visitor volume. Monthly data from January 1, 2005 to September 30, 2009 were analyzed. Spearman correlation and cross-correlation analyses were performed to identify the correlation between each independent variable, ED revenue, and visitor volume. Autoregressive integrated moving average (ARIMA) model was used to quantify the relationship between each independent variable, ED revenue, and visitor volume. The accuracies were evaluated by comparing model forecasts to actual values with mean absolute percentage of error. Sensitivity of prediction errors to model training time was also evaluated. The ARIMA models indicated that mean maximum temperature, relative humidity, rainfall, non-trauma, and trauma visits may correlate positively with ED revenue, but mean minimum temperature may correlate negatively with ED revenue. Moreover, mean minimum temperature and stock market index fluctuation may correlate positively with trauma visitor volume. Mean maximum temperature, relative humidity and stock market index fluctuation may correlate positively with non-trauma visitor volume. Mean maximum temperature and relative humidity may correlate positively with pediatric visitor volume, but mean minimum temperature may correlate negatively with pediatric visitor volume. The model also performed well in forecasting revenue and visitor volume. PMID:22203886
Chen, Yasheng; Juttukonda, Meher; Su, Yi; Benzinger, Tammie; Rubin, Brian G.; Lee, Yueh Z.; Lin, Weili; Shen, Dinggang; Lalush, David
2015-01-01
Purpose To develop a positron emission tomography (PET) attenuation correction method for brain PET/magnetic resonance (MR) imaging by estimating pseudo computed tomographic (CT) images from T1-weighted MR and atlas CT images. Materials and Methods In this institutional review board–approved and HIPAA-compliant study, PET/MR/CT images were acquired in 20 subjects after obtaining written consent. A probabilistic air segmentation and sparse regression (PASSR) method was developed for pseudo CT estimation. Air segmentation was performed with assistance from a probabilistic air map. For nonair regions, the pseudo CT numbers were estimated via sparse regression by using atlas MR patches. The mean absolute percentage error (MAPE) on PET images was computed as the normalized mean absolute difference in PET signal intensity between a method and the reference standard continuous CT attenuation correction method. Friedman analysis of variance and Wilcoxon matched-pairs tests were performed for statistical comparison of MAPE between the PASSR method and Dixon segmentation, CT segmentation, and population averaged CT atlas (mean atlas) methods. Results The PASSR method yielded a mean MAPE ± standard deviation of 2.42% ± 1.0, 3.28% ± 0.93, and 2.16% ± 1.75, respectively, in the whole brain, gray matter, and white matter, which were significantly lower than the Dixon, CT segmentation, and mean atlas values (P < .01). Moreover, 68.0% ± 16.5, 85.8% ± 12.9, and 96.0% ± 2.5 of whole-brain volume had within ±2%, ±5%, and ±10% percentage error by using PASSR, respectively, which was significantly higher than other methods (P < .01). Conclusion PASSR outperformed the Dixon, CT segmentation, and mean atlas methods by reducing PET error owing to attenuation correction. © RSNA, 2014 PMID:25521778
Knowing what to expect, forecasting monthly emergency department visits: A time-series analysis.
Bergs, Jochen; Heerinckx, Philipe; Verelst, Sandra
2014-04-01
To evaluate an automatic forecasting algorithm in order to predict the number of monthly emergency department (ED) visits one year ahead. We collected retrospective data of the number of monthly visiting patients for a 6-year period (2005-2011) from 4 Belgian Hospitals. We used an automated exponential smoothing approach to predict monthly visits during the year 2011 based on the first 5 years of the dataset. Several in- and post-sample forecasting accuracy measures were calculated. The automatic forecasting algorithm was able to predict monthly visits with a mean absolute percentage error ranging from 2.64% to 4.8%, indicating an accurate prediction. The mean absolute scaled error ranged from 0.53 to 0.68 indicating that, on average, the forecast was better compared with in-sample one-step forecast from the naïve method. The applied automated exponential smoothing approach provided useful predictions of the number of monthly visits a year in advance. Copyright © 2013 Elsevier Ltd. All rights reserved.
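A minimal sketch of the two post-sample accuracy measures cited above (MAPE and the mean absolute scaled error), assuming a non-seasonal one-step naive benchmark in the MASE denominator; the authors' exact implementation may differ, and the series below is simulated.

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs((actual - forecast) / actual)) * 100

def mase(actual, forecast, insample):
    """Mean absolute scaled error: forecast MAE divided by the in-sample MAE of a
    one-step naive forecast; values below 1 beat the naive method on average."""
    actual = np.asarray(actual, float)
    forecast = np.asarray(forecast, float)
    insample = np.asarray(insample, float)
    naive_mae = np.mean(np.abs(np.diff(insample)))
    return np.mean(np.abs(actual - forecast)) / naive_mae

# Hypothetical monthly ED visits: 5 training years (in-sample) and a 12-month test year.
rng = np.random.default_rng(0)
insample = 3000 + 200 * np.sin(np.arange(60) * 2 * np.pi / 12) + rng.normal(0, 50, 60)
actual = 3050 + 200 * np.sin(np.arange(12) * 2 * np.pi / 12) + rng.normal(0, 50, 12)
forecast = 3050 + 200 * np.sin(np.arange(12) * 2 * np.pi / 12)
print(round(mape(actual, forecast), 2), round(mase(actual, forecast, insample), 2))
```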
Forecasting of Water Consumptions Expenditure Using Holt-Winter’s and ARIMA
NASA Astrophysics Data System (ADS)
Razali, S. N. A. M.; Rusiman, M. S.; Zawawi, N. I.; Arbin, N.
2018-04-01
This study is carried out to forecast water consumption expenditure of Malaysian university specifically at University Tun Hussein Onn Malaysia (UTHM). The proposed Holt-Winter’s and Auto-Regressive Integrated Moving Average (ARIMA) models were applied to forecast the water consumption expenditure in Ringgit Malaysia from year 2006 until year 2014. The two models were compared and performance measurement of the Mean Absolute Percentage Error (MAPE) and Mean Absolute Deviation (MAD) were used. It is found that ARIMA model showed better results regarding the accuracy of forecast with lower values of MAPE and MAD. Analysis showed that ARIMA (2,1,4) model provided a reasonable forecasting tool for university campus water usage.
Wang, K W; Deng, C; Li, J P; Zhang, Y Y; Li, X Y; Wu, M C
2017-04-01
Tuberculosis (TB) affects people globally and is being reconsidered as a serious public health problem in China. Reliable forecasting is useful for the prevention and control of TB. This study proposes a hybrid model combining autoregressive integrated moving average (ARIMA) with a nonlinear autoregressive (NAR) neural network for forecasting the incidence of TB from January 2007 to March 2016. Prediction performance was compared between the hybrid model and the ARIMA model. The best-fit hybrid model was combined with an ARIMA (3,1,0) × (0,1,1)12 and NAR neural network with four delays and 12 neurons in the hidden layer. The ARIMA-NAR hybrid model, which exhibited lower mean square error, mean absolute error, and mean absolute percentage error of 0·2209, 0·1373, and 0·0406, respectively, in the modelling performance, could produce more accurate forecasting of TB incidence compared to the ARIMA model. This study shows that developing and applying the ARIMA-NAR hybrid model is an effective method to fit the linear and nonlinear patterns of time-series data, and this model could be helpful in the prevention and control of TB.
Anderson, N G; Jolley, I J; Wells, J E
2007-08-01
To determine the major sources of error in ultrasonographic assessment of fetal weight and whether they have changed over the last decade. We performed a prospective observational study in 1991 and again in 2000 of a mixed-risk pregnancy population, estimating fetal weight within 7 days of delivery. In 1991, the Rose and McCallum formula was used for 72 deliveries. Inter- and intraobserver agreement was assessed within this group. Bland-Altman measures of agreement from log data were calculated as ratios. We repeated the study in 2000 in 208 consecutive deliveries, comparing predicted and actual weights for 12 published equations using Bland-Altman and percentage error methods. We compared bias (mean percentage error), precision (SD percentage error), and their consistency across the weight ranges. 95% limits of agreement ranged from - 4.4% to + 3.3% for inter- and intraobserver estimates, but were - 18.0% to 24.0% for estimated and actual birth weight. There was no improvement in accuracy between 1991 and 2000. In 2000 only six of the 12 published formulae had overall bias within 7% and precision within 15%. There was greater bias and poorer precision in nearly all equations if the birth weight was < 1,000 g. Observer error is a relatively minor component of the error in estimating fetal weight; error due to the equation is a larger source of error. Improvements in ultrasound technology have not improved the accuracy of estimating fetal weight. Comparison of methods of estimating fetal weight requires statistical methods that can separate out bias, precision and consistency. Estimating fetal weight in the very low birth weight infant is subject to much greater error than it is in larger babies. Copyright (c) 2007 ISUOG. Published by John Wiley & Sons, Ltd.
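A minimal sketch of the error summaries described above: bias as mean percentage error, precision as its standard deviation, and 95% limits of agreement computed on log data and expressed as ratios. Variable names and figures are illustrative, not taken from the study.

```python
import numpy as np

def weight_error_summary(estimated, actual):
    """Bias, precision and 95% limits of agreement for fetal weight estimates."""
    estimated = np.asarray(estimated, float)
    actual = np.asarray(actual, float)
    pct_err = 100 * (estimated - actual) / actual
    bias = pct_err.mean()                 # systematic over/under-estimation (%)
    precision = pct_err.std(ddof=1)       # scatter of the percentage errors (%)
    log_diff = np.log(estimated / actual)
    loa = np.exp(log_diff.mean() + np.array([-1.96, 1.96]) * log_diff.std(ddof=1))
    return bias, precision, loa           # loa expressed as estimated/actual ratios

# Hypothetical estimated vs. actual birth weights (g):
est = [3200, 2850, 3600, 1450, 4100, 2500]
act = [3050, 2950, 3400, 1600, 3900, 2450]
print(weight_error_summary(est, act))
```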
26 CFR 1.410(b)-5 - Average benefit percentage test.
Code of Federal Regulations, 2010 CFR
2010-04-01
...) INCOME TAX (CONTINUED) INCOME TAXES Pension, Profit-Sharing, Stock Bonus Plans, Etc. § 1.410(b)-5 Average... average annual compensation; (C) Use of different testing ages; (D) Use of different fresh-start dates; (E... testing group determination method. A plan does not satisfy the average benefit percentage test using the...
Automated drug dispensing system reduces medication errors in an intensive care setting.
Chapuis, Claire; Roustit, Matthieu; Bal, Gaëlle; Schwebel, Carole; Pansu, Pascal; David-Tchouda, Sandra; Foroni, Luc; Calop, Jean; Timsit, Jean-François; Allenet, Benoît; Bosson, Jean-Luc; Bedouch, Pierrick
2010-12-01
We aimed to assess the impact of an automated dispensing system on the incidence of medication errors related to picking, preparation, and administration of drugs in a medical intensive care unit. We also evaluated the clinical significance of such errors and user satisfaction. Preintervention and postintervention study involving a control and an intervention medical intensive care unit. Two medical intensive care units in the same department of a 2,000-bed university hospital. Adult medical intensive care patients. After a 2-month observation period, we implemented an automated dispensing system in one of the units (study unit) chosen randomly, with the other unit being the control. The overall error rate was expressed as a percentage of total opportunities for error. The severity of errors was classified according to National Coordinating Council for Medication Error Reporting and Prevention categories by an expert committee. User satisfaction was assessed through self-administered questionnaires completed by nurses. A total of 1,476 medications for 115 patients were observed. After automated dispensing system implementation, we observed a reduced percentage of total opportunities for error in the study compared to the control unit (13.5% and 18.6%, respectively; p<.05); however, no significant difference was observed before automated dispensing system implementation (20.4% and 19.3%, respectively; not significant). Before-and-after comparisons in the study unit also showed a significantly reduced percentage of total opportunities for error (20.4% and 13.5%; p<.01). An analysis of detailed opportunities for error showed a significant impact of the automated dispensing system in reducing preparation errors (p<.05). Most errors caused no harm (National Coordinating Council for Medication Error Reporting and Prevention category C). The automated dispensing system did not reduce errors causing harm. Finally, the mean for working conditions improved from 1.0±0.8 to 2.5±0.8 on the four-point Likert scale. The implementation of an automated dispensing system reduced overall medication errors related to picking, preparation, and administration of drugs in the intensive care unit. Furthermore, most nurses favored the new drug dispensation organization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, H; Sarkar, V; Paxton, A
Purpose: To explore the feasibility of supraclavicular field treatment by investigating the variation of the junction position between tangential and supraclavicular fields during left breast radiation using the DIBH technique. Methods: Six patients with left breast cancer treated using the DIBH technique were included in this study. The AlignRT system was used to track the patient's breast surface. During daily treatment, when the patient's DIBH reached the preset AlignRT tolerance of ±3mm for all principal directions (vertical, longitudinal, and lateral), the remaining longitudinal offset was recorded. The average with standard deviation and the range of the daily longitudinal offset for the entire treatment course were calculated for all six patients (93 fractions in total). The ranges of average ±1σ and ±2σ were calculated; these represent the longitudinal field edge error at confidence levels of 68% and 95%. Based on these longitudinal errors, the dose at the junction between the breast tangential and supraclavicular fields with variable gap/overlap sizes was calculated as a percentage of prescription (on a representative patient treatment plan). Results: The average longitudinal offset for all patients was 0.16±1.32mm, and the range of the longitudinal offset was −2.6 to 2.6mm. The range of the longitudinal field edge error at the 68% confidence level was −1.48 to 1.16mm, and at the 95% confidence level was −2.80 to 2.48mm. With a 5mm and 1mm gap, the junction dose could be as low as 37.5% and 84.9% of the prescription dose; with a 5mm and 1mm overlap, the junction dose could be as high as 169.3% and 117.6%. Conclusion: We observed a longitudinal field edge error at the 95% confidence level of about ±2.5mm, and the junction dose could deviate by roughly 70% of prescription (hot or cold) between different DIBHs. However, over the entire course of treatment, the average junction variation for all patients was within 0.2mm. The results of our study show it is potentially feasible to treat the supraclavicular field with breast tangents.
Incorporating GIS and remote sensing for census population disaggregation
NASA Astrophysics Data System (ADS)
Wu, Shuo-Sheng 'Derek'
Census data are the primary source of demographic data for a variety of research and applications. For confidentiality and administrative purposes, census data are usually released to the public in aggregated areal units. In the United States, the smallest census unit is the census block. Due to data aggregation, users of census data may have problems visualizing population distribution within census blocks and estimating population counts for areas not coinciding with census block boundaries. The main purpose of this study is to develop methodology for estimating sub-block areal populations and assessing the estimation errors. The City of Austin, Texas, was used as a case study area. Based on tax parcel boundaries and parcel attributes derived from ancillary GIS and remote sensing data, detailed urban land use classes were first classified using a per-field approach. After that, statistical models by land use class were built to infer population density from other predictor variables, including four census demographic statistics (the Hispanic percentage, the married percentage, the unemployment rate, and per capita income) and three physical variables derived from remote sensing images and building footprint vector data (a landscape heterogeneity statistic, a building pattern statistic, and a building volume statistic). In addition to statistical models, deterministic models were proposed to directly infer populations from building volumes and three housing statistics, including the average space per housing unit, the housing unit occupancy rate, and the average household size. After the population models were derived or proposed, how well they predict populations for another set of sample blocks was assessed. The results show that the deterministic models were more accurate than the statistical models. Further, by simulating the base unit for modeling from aggregating blocks, I assessed how well the deterministic models estimate sub-unit-level populations. I also assessed the aggregation effects and the rescaling effects on sub-unit estimates. Lastly, from another set of mixed-land-use sample blocks, a mixed-land-use model was derived and compared with a residential-land-use model. The results of the per-field land use classification are satisfactory, with a Kappa accuracy statistic of 0.747. Model assessments by land use show that population estimates for multi-family land use areas have higher errors than those for single-family land use areas, and population estimates for mixed land use areas have higher errors than those for residential land use areas. The assessments of sub-unit estimates using a simulation approach indicate that smaller areas show higher estimation errors, estimation errors do not relate to the base unit size, and rescaling improves all levels of sub-unit estimates.
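As a rough illustration of the deterministic approach described above, the sketch below infers population from residential building volume and the three housing statistics; the variable names and figures are illustrative assumptions, not values from the study.

```python
def deterministic_population(building_volume_m3, space_per_unit_m3,
                             occupancy_rate, household_size):
    """Population ~ housing units x occupancy rate x persons per household, with the
    number of units inferred from residential building volume."""
    housing_units = building_volume_m3 / space_per_unit_m3
    return housing_units * occupancy_rate * household_size

# e.g. a block with 90,000 m3 of residential building volume, 450 m3 per unit,
# 93% occupancy and 2.4 persons per household:
print(round(deterministic_population(90_000, 450, 0.93, 2.4)))   # ~446 people
```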
NASA Technical Reports Server (NTRS)
Helmreich, R. L.
1991-01-01
Formal cockpit resource management training in crew coordination concepts increases the percentage of crews rated as above average in performance and decreases the percentage of crews rated as below average.
Efficient Measurement of Quantum Gate Error by Interleaved Randomized Benchmarking
NASA Astrophysics Data System (ADS)
Magesan, Easwar; Gambetta, Jay M.; Johnson, B. R.; Ryan, Colm A.; Chow, Jerry M.; Merkel, Seth T.; da Silva, Marcus P.; Keefe, George A.; Rothwell, Mary B.; Ohki, Thomas A.; Ketchen, Mark B.; Steffen, M.
2012-08-01
We describe a scalable experimental protocol for estimating the average error of individual quantum computational gates. This protocol consists of interleaving random Clifford gates between the gate of interest and provides an estimate as well as theoretical bounds for the average error of the gate under test, so long as the average noise variation over all Clifford gates is small. This technique takes into account both state preparation and measurement errors and is scalable in the number of qubits. We apply this protocol to a superconducting qubit system and find a bounded average error of 0.003 [0,0.016] for the single-qubit gates Xπ/2 and Yπ/2. These bounded values provide better estimates of the average error than those extracted via quantum process tomography.
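For reference, a compact statement of the interleaved estimate in the notation commonly used for this protocol (the notation is an assumption here, not quoted from the abstract): with p the depolarizing parameter fitted from the reference Clifford sequences, p_C̄ the parameter fitted from the sequences interleaved with the gate C under test, and d = 2^n for n qubits, the point estimate of the gate error is

```latex
% Interleaved randomized benchmarking point estimate (the bounds quoted in the
% abstract additionally account for variation of the noise over the Clifford group).
r_C^{\mathrm{est}} = \frac{(d-1)\left(1 - p_{\bar{C}}/p\right)}{d}, \qquad d = 2^{n}.
```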
Masked and unmasked error-related potentials during continuous control and feedback
NASA Astrophysics Data System (ADS)
Lopes Dias, Catarina; Sburlea, Andreea I.; Müller-Putz, Gernot R.
2018-06-01
The detection of error-related potentials (ErrPs) in tasks with discrete feedback is well established in the brain–computer interface (BCI) field. However, the decoding of ErrPs in tasks with continuous feedback is still in its early stages. Objective. We developed a task in which subjects have continuous control of a cursor's position by means of a joystick. The cursor's position was shown to the participants in two different modalities of continuous feedback: normal and jittered. The jittered feedback was created to mimic the instability that could exist if participants controlled the trajectory directly with brain signals. Approach. This paper studies the electroencephalographic (EEG)-measurable signatures caused by a loss of control over the cursor's trajectory, causing a target miss. Main results. In both feedback modalities, time-locked potentials revealed the typical frontal-central components of error-related potentials. Errors occurring during the jittered feedback (masked errors) were delayed in comparison to errors occurring during normal feedback (unmasked errors). Masked errors displayed lower peak amplitudes than unmasked errors. Time-locked classification analysis allowed a good distinction between correct and error classes (average Cohen's κ, average TPR = 81.8% and average TNR = 96.4%). Time-locked classification analysis between masked error and unmasked error classes revealed results at chance level (average Cohen's κ, average TPR = 60.9% and average TNR = 58.3%). Afterwards, we performed asynchronous detection of ErrPs, combining both masked and unmasked trials. The asynchronous detection of ErrPs in a simulated online scenario resulted in an average TNR of 84.0% and an average TPR of 64.9%. Significance. The time-locked classification results suggest that the masked and unmasked errors were indistinguishable in terms of classification. The asynchronous classification results suggest that the feedback modality did not hinder the asynchronous detection of ErrPs.
Measurements of stem diameter: implications for individual- and stand-level errors.
Paul, Keryn I; Larmour, John S; Roxburgh, Stephen H; England, Jacqueline R; Davies, Micah J; Luck, Hamish D
2017-08-01
Stem diameter is one of the most common measurements made to assess the growth of woody vegetation, and the commercial and environmental benefits that it provides (e.g. wood or biomass products, carbon sequestration, landscape remediation). Yet inconsistency in its measurement is a continuing source of error in estimates of stand-scale measures such as basal area, biomass, and volume. Here we assessed errors in stem diameter measurement through repeated measurements of individual trees and shrubs of varying size and form (i.e. single- and multi-stemmed) across a range of contrasting stands, from complex mixed-species plantings to commercial single-species plantations. We compared a standard diameter tape with a Stepped Diameter Gauge (SDG) for time efficiency and measurement error. Measurement errors in diameter were slightly (but significantly) influenced by size and form of the tree or shrub, and stem height at which the measurement was made. Compared to standard tape measurement, the mean systematic error with SDG measurement was only -0.17 cm, but varied between -0.10 and -0.52 cm. Similarly, random error was relatively large, with standard deviations (and percentage coefficients of variation) averaging only 0.36 cm (and 3.8%), but varying between 0.14 and 0.61 cm (and 1.9 and 7.1%). However, at the stand scale, sampling errors (i.e. how well individual trees or shrubs selected for measurement of diameter represented the true stand population in terms of the average and distribution of diameter) generally had at least a tenfold greater influence on random errors in basal area estimates than errors in diameter measurements. This supports the use of diameter measurement tools that have high efficiency, such as the SDG. Use of the SDG almost halved the time required for measurements compared to the diameter tape. Based on these findings, recommendations include the following: (i) use of a tape to maximise accuracy when developing allometric models, or when monitoring relatively small changes in permanent sample plots (e.g. National Forest Inventories), noting that care is required in irregular-shaped, large-single-stemmed individuals, and (ii) use of a SDG to maximise efficiency when using inventory methods to assess basal area, and hence biomass or wood volume, at the stand scale (i.e. in studies of impacts of management or site quality) where there are budgetary constraints, noting the importance of sufficient sample sizes to ensure that the population sampled represents the true population.
NASA Astrophysics Data System (ADS)
Jiang, Y.; Chen, F.; Gao, Y.; Barlage, M. J.
2017-12-01
Snow cover in the Qinghai-Tibetan Plateau (QTP) is a critical component of the water cycle and affects the regional climate of East Asia. Satellite data from three different sources (i.e., FY3A/B/C, MODIS and IMS) were used to analyze the QTP fractional snow cover (FSC) change and associated uncertainties over the last decade. To reduce the high percentage of cloud in FY3A/B/C and MODIS, a four-step cloud removal procedure was applied, effectively reducing the cloud percentage from 40.8-56.1% to 2.2-3.3%. The average error introduced by the cloud removal procedure was about 2%, estimated by a random sampling method. Results show that snow cover in the QTP significantly decreased in the most recent 5 years. Three data sets (FY3B, MODIS and IMS) showed significantly decreased annual FSC at all elevation bands from 2012-2016, and a significantly shorter snow season with delayed snow onset and earlier melting. Both IMS and MODIS showed a slight decline in annual FSC from 2000 to 3000 m, while MODIS FSC slightly decreased in 2002-2016 and IMS FSC slightly increased from 2006-2016 in the region with elevation higher than 3000 m. Results also show significant uncertainties among the five data sets (FY3A/B/C, MODIS, IMS), although they showed similar fluctuations of daily FSC. IMS had the largest snow-cover extent and highest daily FSC due to its multiple data sources. FY3A/C and MODIS (observed in the morning) had around 5% higher mean FSC than FY3B (observed in the afternoon) due to the 3-hour detection time gap. The relative error of daily FSC (taking MODIS as `truth') for FY3A, FY3B, FY3C and IMS is 23%, -35%, 8% and 63%, respectively, averaged over five elevation bands in 2015-2017.
Erlewein, Daniel; Bruni, Tommaso; Gadebusch Bondio, Mariacarla
2018-06-07
In 1983, McIntyre and Popper underscored the need for more openness in dealing with errors in medicine. Since then, much has been written on individual medical errors. Furthermore, at the beginning of the 21st century, researchers and medical practitioners increasingly approached individual medical errors through health information technology. Hence, the question arises whether the attention of biomedical researchers shifted from individual medical errors to health information technology. We ran a study to determine publication trends concerning individual medical errors and health information technology in medical journals over the last 40 years. We used the Medical Subject Headings (MeSH) taxonomy in the database MEDLINE. Each year, we analyzed the percentage of relevant publications to the total number of publications in MEDLINE. The trends identified were tested for statistical significance. Our analysis showed that the percentage of publications dealing with individual medical errors increased from 1976 until the beginning of the 21st century but began to drop in 2003. Both the upward and the downward trends were statistically significant (P < 0.001). A breakdown by country revealed that it was the weight of the US and British publications that determined the overall downward trend after 2003. On the other hand, the percentage of publications dealing with health information technology doubled between 2003 and 2015. The upward trend was statistically significant (P < 0.001). The identified trends suggest that the attention of biomedical researchers partially shifted from individual medical errors to health information technology in the USA and the UK. © 2018 Chinese Cochrane Center, West China Hospital of Sichuan University and John Wiley & Sons Australia, Ltd.
Malpractice suits in chest radiology: an evaluation of the histories of 8265 radiologists.
Baker, Stephen R; Patel, Ronak H; Yang, Lily; Lelkes, Valdis M; Castro, Alejandro
2013-11-01
The aim of this study was to present rates of claims, causes of error, percentage of cases resulting in a judgment, and average payments made by radiologists in chest-related malpractice cases in a survey of 8265 radiologists. The malpractice histories of 8265 radiologists were evaluated from the credentialing files of One-Call Medical Inc., a preferred provider organization for computed tomography/magnetic resonance imaging in workers' compensation cases. Of the 8265 radiologists, 2680 (32.4%) had at least 1 malpractice suit. Of those who were sued, the rate of claims was 55.1 per 1000 person years. The rate of thorax-related suits was 6.6 claims per 1000 radiology practice years (95% confidence interval, 6.0-7.2). There were 496 suits encompassing 48 different causes. Errors in diagnosis comprised 78.0% of the causes. Failure to diagnose lung cancer was by far the most frequent diagnostic error, representing 211 cases or 42.5%. Of the 496 cases, an outcome was known in 417. Sixty-one percent of these were settled in favor of the plaintiff, with a mean payment of $277,230 (95% confidence interval, 226,967-338,614). Errors in diagnosis, and among them failure to diagnose lung cancer, were by far the most common reasons for initiating a malpractice suit against radiologists related to the thorax and its contents.
5 CFR 550.707 - Computation of severance pay fund.
Code of Federal Regulations, 2011 CFR
2011-01-01
... pay for standby duty regularly varies throughout the year, compute the average standby duty premium...), compute the weekly average percentage, and multiply that percentage by the weekly scheduled rate of pay in... hours in a pay status (excluding overtime hours) and multiply that average by the hourly rate of basic...
Securebox: a multibiopsy sample container for specimen identification and transport.
Palmieri, Beniamino; Sblendorio, Valeriana; Saleh, Farid; Al-Sebeih, Khalid
2008-01-01
To describe an original multicompartment disposable container for tissue surgical specimens or serial biopsy samples (Securebox). The increasing number of pathology samples from a single patient required for an accurate diagnosis led us to design and manufacture a unique container with 4 boxes; in each box, 1 or more biopsy samples can be lodged. A magnification lens on a convex segment of the plastic framework allows inspection of macroscopic details of the recovered specimens. We investigated 400 randomly selected cases (compared with 400 controls) who underwent multiple biopsies from January 2006 to January 2007 to evaluate compliance with the new procedure and to detect errors resulting from the loss of some of the multiple specimens or from technical mistakes during the procedure or delivery that might have compromised the final diagnosis. Using our Securebox, the percentage of patients whose diagnosis failed or could not be reached was 0.5%, compared to 4% with the traditional method (p = 0.0012). Moreover, the percentages of medical and nursing staff who were satisfied with the Securebox versus the traditional method were 85% and 15%, respectively (p < 0.0001). The average number of days required to reach a proper diagnosis with the Securebox was 3.38 +/- 1.16 SD, compared to 6.76 +/- 0.52 SD with the traditional method (p < 0.0001). The compact Securebox makes it safer and easier to introduce the specimens and to ship them to the pathology laboratories, reducing the risk of error.
Yoon, Jai-Woong; Sawant, Amit; Suh, Yelin; Cho, Byung-Chul; Suh, Tae-Suk; Keall, Paul
2011-07-01
In dynamic multileaf collimator (MLC) motion tracking with complex intensity-modulated radiation therapy (IMRT) fields, target motion perpendicular to the MLC leaf travel direction can cause beam holds, which increase beam delivery time by up to a factor of 4. As a means to balance delivery efficiency and accuracy, a moving average algorithm was incorporated into a dynamic MLC motion tracking system (i.e., moving average tracking) to account for target motion perpendicular to the MLC leaf travel direction. An experimental investigation of the moving average algorithm compared with real-time tracking and no-compensation beam delivery is described. The properties of the moving average algorithm were measured and compared with those of real-time tracking (dynamic MLC motion tracking accounting for target motion both parallel and perpendicular to the leaf travel direction) and no-compensation beam delivery. The algorithm was investigated using a synthetic motion trace with a baseline drift and four patient-measured 3D tumor motion traces representing regular and irregular motions with varying baseline drifts. Each motion trace was reproduced by a moving platform. The delivery efficiency, geometric accuracy, and dosimetric accuracy were evaluated for conformal, step-and-shoot IMRT, and dynamic sliding window IMRT treatment plans using the synthetic and patient motion traces. The dosimetric accuracy was quantified via a gamma-test with a 3%/3 mm criterion. The delivery efficiency ranged from 89 to 100% for moving average tracking, 26%-100% for real-time tracking, and 100% (by definition) for no compensation. The root-mean-square geometric error ranged from 3.2 to 4.0 mm for moving average tracking, 0.7-1.1 mm for real-time tracking, and 3.7-7.2 mm for no compensation. The percentage of dosimetric points failing the gamma-test ranged from 4 to 30% for moving average tracking, 0%-23% for real-time tracking, and 10%-47% for no compensation. The delivery efficiency of moving average tracking was up to four times higher than that of real-time tracking and approached the efficiency of no compensation for all cases. The geometric accuracy and dosimetric accuracy of the moving average algorithm were between those of real-time tracking and no compensation, with approximately half the percentage of dosimetric points failing the gamma-test compared with no compensation.
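The sketch below is an illustrative running-mean filter of the kind the moving-average approach implies, applied to the target position component perpendicular to leaf travel; the window length and the motion trace are assumptions, not values from the study.

```python
import math
from collections import deque

def moving_average_positions(samples, window=30):
    """Yield a running-mean position for each incoming position sample (mm)."""
    buf = deque(maxlen=window)
    for x in samples:
        buf.append(x)
        yield sum(buf) / len(buf)

# Hypothetical perpendicular motion trace: 0.25 Hz sinusoid with a slow baseline
# drift, sampled at 30 Hz for 10 s.
trace = [3.0 * math.sin(2 * math.pi * 0.25 * t / 30.0) + 0.02 * t for t in range(300)]
smoothed = list(moving_average_positions(trace, window=30))
print(round(max(abs(a - b) for a, b in zip(trace, smoothed)), 2))  # residual offset (mm)
```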
García-Molina Sáez, C; Urbieta Sanz, E; Madrigal de Torres, M; Vicente Vera, T; Pérez Cárceles, M D
2016-04-01
It is well known that medication reconciliation at discharge is a key strategy to ensure proper drug prescription and the effectiveness and safety of any treatment. Different types of interventions to reduce reconciliation errors at discharge have been tested, many of which are based on the use of electronic tools as they are useful to optimize the medication reconciliation process. However, not all countries are progressing at the same speed in this task and not all tools are equally effective. So it is important to collate updated country-specific data in order to identify possible strategies for improvement in each particular region. Our aim therefore was to analyse the effectiveness of a computerized pharmaceutical intervention to reduce reconciliation errors at discharge in Spain. A quasi-experimental interrupted time-series study was carried out in the cardio-pneumology unit of a general hospital from February to April 2013. The study consisted of three phases: pre-intervention, intervention and post-intervention, each involving 23 days of observations. At the intervention period, a pharmacist was included in the medical team and entered the patient's pre-admission medication in a computerized tool integrated into the electronic clinical history of the patient. The effectiveness was evaluated by the differences between the mean percentages of reconciliation errors in each period using a Mann-Whitney U test accompanied by Bonferroni correction, eliminating autocorrelation of the data by first using an ARIMA analysis. In addition, the types of error identified and their potential seriousness were analysed. A total of 321 patients (119, 105 and 97 in each phase, respectively) were included in the study. For the 3966 medicaments recorded, 1087 reconciliation errors were identified in 77·9% of the patients. The mean percentage of reconciliation errors per patient in the first period of the study was 42·18%, falling to 19·82% during the intervention period (P = 0·000). When the intervention was withdrawn, the mean percentage of reconciliation errors increased again to 27·72% (P = 0·008). The difference between the percentages of pre- and post-intervention periods was statistically significant (P = 0·000). Most reconciliation errors were due to omission (46·7%) or incomplete prescription (43·8%), and 35·3% of which could have caused harm to the patient. A computerized pharmaceutical intervention is shown to reduce reconciliation errors in the context of a high incidence of such errors. © 2016 John Wiley & Sons Ltd.
Gopal, S; Do, T; Pooni, J S; Martinelli, G
2014-03-01
The Mostcare monitor is a non-invasive cardiac output monitor. It has been well validated in cardiac surgical patients, but there is limited evidence on its use in patients with severe sepsis and septic shock. The study included the first 22 consecutive patients with severe sepsis and septic shock in whom flotation of a pulmonary artery catheter was deemed necessary to guide clinical management. Cardiac output, cardiac index and stroke volume were simultaneously calculated and recorded from a thermodilution pulmonary artery catheter and from the Mostcare monitor. The two methods of measuring cardiac output were compared by Bland-Altman statistics and linear regression analysis. A percentage error of less than 30% was defined as acceptable for this study. Bland-Altman analysis for cardiac output showed a bias of 0.31 L.min-1, precision (=SD) of 1.97 L.min-1 and a percentage error of 62.54%. For cardiac index the bias was 0.21 L.min-1.m-2, precision 1.10 L.min-1.m-2 and percentage error 64%. For stroke volume the bias was 5 mL, precision 24.46 mL and percentage error 70.21%. Linear regression produced correlation coefficients (r2) for cardiac output, cardiac index, and stroke volume of 0.403, 0.306, and 0.3, respectively. Compared to thermodilution cardiac output, cardiac output measurements obtained from the Mostcare monitor had an unacceptably high percentage error. The Mostcare monitor proved to be an unreliable device for measuring cardiac output in patients with severe sepsis and septic shock in an intensive care unit.
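The percentage error figures above appear consistent with the definition commonly used for cardiac output method comparisons (1.96 × SD of the paired differences divided by the mean cardiac output); the sketch below computes bias, precision and percentage error under that assumption, with hypothetical paired readings.

```python
import numpy as np

def method_comparison(reference, test):
    """Bland-Altman bias and precision, plus the percentage error criterion."""
    reference = np.asarray(reference, float)
    test = np.asarray(test, float)
    diff = test - reference
    bias = diff.mean()
    precision = diff.std(ddof=1)
    mean_co = np.concatenate([reference, test]).mean()
    pct_error = 100 * 1.96 * precision / mean_co   # acceptable if < 30%
    return bias, precision, pct_error

# Hypothetical paired cardiac output readings (L/min): thermodilution vs. monitor.
thermo = [5.2, 7.8, 6.1, 4.3, 8.5, 6.9]
monitor = [6.0, 6.5, 7.4, 3.8, 9.6, 6.2]
print(method_comparison(thermo, monitor))
```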
The importance of temporal inequality in quantifying vegetated filter strip removal efficiencies
NASA Astrophysics Data System (ADS)
Gall, H. E.; Schultz, D.; Mejia, A.; Harman, C. J.; Raj, C.; Goslee, S.; Veith, T.; Patterson, P. H.
2017-12-01
Vegetated filter strips (VFSs) are best management practices (BMPs) commonly implemented adjacent to row-cropped fields to trap overland transport of sediment and other constituents often present in agricultural runoff. VFSs are generally reported to have high sediment removal efficiencies (i.e., 70 - 95%); however, these values are typically calculated as an average of removal efficiencies observed or simulated for individual events. We argue that due to: (i) positively correlated sediment concentration-discharge relationships; (ii) strong temporal inequality exhibited by sediment transport; and (iii) decreasing VFS performance with increasing flow rates, VFS removal efficiencies over annual time scales may be significantly lower than the per-event values or averages typically reported in the literature and used in decision-making models. By applying a stochastic approach to a two-component VFS model, we investigated the extent of the disparity between two calculation methods: averaging efficiencies from each event over the course of one year, versus reporting the total annual load reduction. We examined the effects of soil texture, concentration-discharge relationship, and VFS slope to reveal the potential errors that may be incurred by ignoring the effects of temporal inequality in quantifying VFS performance. Simulation results suggest that errors can be as low as < 2% and as high as > 20%, with the differences between the two methods of removal efficiency calculations greatest for: (i) soils with high percentage of fine particulates; (ii) VFSs with higher slopes; and (iii) strongly positive concentration-discharge relationships. These results can aid in annual-scale decision making for achieving downstream water quality goals.
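The gap between the two reporting conventions can be seen with a small numerical sketch: averaging per-event efficiencies weights every event equally, whereas the annual load reduction is dominated by the large, poorly-treated events. Event loads and efficiencies below are hypothetical.

```python
import numpy as np

inflow_load = np.array([5.0, 8.0, 12.0, 300.0, 40.0])   # sediment entering the VFS per event (kg)
event_eff = np.array([0.95, 0.92, 0.90, 0.55, 0.75])    # per-event removal fraction

mean_of_event_efficiencies = event_eff.mean()                                # ~0.81
annual_load_reduction = (inflow_load * event_eff).sum() / inflow_load.sum()  # ~0.60

print(round(mean_of_event_efficiencies, 2), round(annual_load_reduction, 2))
```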
Passive quantum error correction of linear optics networks through error averaging
NASA Astrophysics Data System (ADS)
Marshman, Ryan J.; Lund, Austin P.; Rohde, Peter P.; Ralph, Timothy C.
2018-02-01
We propose and investigate a method of error detection and noise correction for bosonic linear networks using a method of unitary averaging. The proposed error averaging does not rely on ancillary photons or control and feedforward correction circuits, remaining entirely passive in its operation. We construct a general mathematical framework for this technique and then give a series of proof of principle examples including numerical analysis. Two methods for the construction of averaging are then compared to determine the most effective manner of implementation and probe the related error thresholds. Finally we discuss some of the potential uses of this scheme.
Use of scan overlap redundancy to enhance multispectral aircraft scanner data
NASA Technical Reports Server (NTRS)
Lindenlaub, J. C.; Keat, J.
1973-01-01
Two criteria were suggested for optimizing the resolution error versus signal-to-noise-ratio tradeoff. The first criterion uses equal weighting coefficients and chooses n, the number of lines averaged, so as to make the average resolution error equal to the noise error. The second criterion adjusts both the number and relative sizes of the weighting coefficients so as to minimize the total error (resolution error plus noise error). The optimum set of coefficients depends upon the geometry of the resolution element, the number of redundant scan lines, the scan line increment, and the original signal-to-noise ratio of the channel. Programs were developed to find the optimum number and relative weights of the averaging coefficients. A working definition of signal-to-noise ratio was given and used to try line averaging on a typical set of data. Line averaging was evaluated only with respect to its effect on classification accuracy.
Wilberg, Dale E.; Stolp, Bernard J.
2005-01-01
This report contains the results of an October 2001 seepage investigation conducted along a reach of the Escalante River in Utah extending from the U.S. Geological Survey streamflow-gaging station near Escalante to the mouth of Stevens Canyon. Discharge was measured at 16 individual sites along 15 consecutive reaches. Total reach length was about 86 miles. A reconnaissance-level sampling of water for tritium and chlorofluorocarbons was also done. In addition, hydrologic and water-quality data previously collected and published by the U.S. Geological Survey for the 2,020-square-mile Escalante River drainage basin were compiled and are presented in 12 tables. These data were collected from 64 surface-water sites and 28 springs from 1909 to 2002. None of the 15 consecutive reaches along the Escalante River had a measured loss or gain that exceeded the measurement error. All discharge measurements taken during the seepage investigation were assigned a qualitative rating of accuracy that ranged from 5 percent to greater than 8 percent of the actual flow. Summing the potential error for each measurement and dividing by the maximum of either the upstream discharge and any tributary inflow, or the downstream discharge, determined the normalized error for a reach. This was compared to the computed loss or gain, which was also normalized to the maximum discharge. A loss or gain for a specified reach is considered significant when the loss or gain (normalized percentage difference) is greater than the measurement error (normalized percentage error). The percentage difference and percentage error were normalized to allow comparison between reaches with different amounts of discharge. The plate that accompanies the report is 36" by 40" and can be printed in 16 tiles, 8.5 by 11 inches. An index for the tiles is located on the lower left-hand side of the plate. Using Adobe Acrobat, the plate can be viewed independent of the report; all Acrobat functions are available.
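A minimal sketch of the reach-level significance test described above; discharge and error values are hypothetical and units are arbitrary.

```python
def reach_gain_loss_significant(q_up, q_trib, q_down, err_up, err_trib, err_down):
    """A measured gain or loss is considered significant only when the normalized
    percentage difference exceeds the normalized percentage error for the reach."""
    q_max = max(q_up + q_trib, q_down)
    pct_difference = 100 * abs(q_down - (q_up + q_trib)) / q_max
    pct_error = 100 * (err_up + err_trib + err_down) / q_max
    return pct_difference > pct_error, round(pct_difference, 1), round(pct_error, 1)

# Hypothetical reach: upstream 10.0, tributary 1.5, downstream 10.8, with a 5% rated
# error on each measurement.
print(reach_gain_loss_significant(10.0, 1.5, 10.8, 0.50, 0.075, 0.54))
```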
48 CFR 52.241-6 - Service Provisions.
Code of Federal Regulations, 2014 CFR
2014-10-01
... such errors. However, any meter which registers not more than __ percent slow or fast shall be deemed... the Government if the percentage of errors is found to be not more than __ percent slow or fast. (3...
48 CFR 52.241-6 - Service Provisions.
Code of Federal Regulations, 2010 CFR
2010-10-01
... such errors. However, any meter which registers not more than __ percent slow or fast shall be deemed... the Government if the percentage of errors is found to be not more than __ percent slow or fast. (3...
48 CFR 52.241-6 - Service Provisions.
Code of Federal Regulations, 2012 CFR
2012-10-01
... such errors. However, any meter which registers not more than __ percent slow or fast shall be deemed... the Government if the percentage of errors is found to be not more than __ percent slow or fast. (3...
48 CFR 52.241-6 - Service Provisions.
Code of Federal Regulations, 2011 CFR
2011-10-01
... such errors. However, any meter which registers not more than __ percent slow or fast shall be deemed... the Government if the percentage of errors is found to be not more than __ percent slow or fast. (3...
48 CFR 52.241-6 - Service Provisions.
Code of Federal Regulations, 2013 CFR
2013-10-01
... such errors. However, any meter which registers not more than __ percent slow or fast shall be deemed... the Government if the percentage of errors is found to be not more than __ percent slow or fast. (3...
Yousef, Nadin; Yousef, Farah
2017-09-04
One of the predominant causes of medication errors is drug administration error; a previous study related to our investigations and reviews estimated that medication errors occurred in 6.7 of every 100 administered medication doses. Therefore, using a Six Sigma approach, we aimed to propose a way to reduce these errors to fewer than 1 per 100 administered doses by improving healthcare professional education and the legibility of handwritten prescriptions. The study was held in a general government hospital. First, we systematically studied the current medication use process. Second, we used the Six Sigma approach, utilizing the five-step DMAIC process (Define, Measure, Analyze, Improve, Control), to find out the real reasons behind such errors and to identify a useful solution for avoiding medication error incidences in daily healthcare professional practice. A data sheet was used as the data-collection tool and Pareto diagrams were used as the analysis tool. In our investigation, we identified the real cause behind administered medication errors. The Pareto diagrams showed that the error percentage in the administration phase was 24.8%, while the percentage of errors related to the prescribing phase was 42.8%, 1.7 times higher. This means that mistakes in the prescribing phase, especially poorly handwritten prescriptions (17.6% of the errors in this phase), are responsible for consequent mistakes later in the treatment process. Therefore, we proposed in this study an effective low-cost strategy based on the behavior of healthcare workers, framed as guideline recommendations to be followed by physicians. This method can act as an upstream safeguard that decreases errors in the prescribing phase and may in turn reduce administered medication error incidences to less than 1%. This behavioral improvement can help produce clearer handwritten prescriptions and decrease the consequent errors in administered medication doses to below the global standard; as a result, it enhances patient safety. However, we hope that further studies will be carried out in hospitals to evaluate in practice how effective our proposed systematic strategy is in comparison with other suggested remedies in this field.
Molina, Sergio L; Stodden, David F
2018-04-01
This study examined variability in throwing speed and spatial error to test the prediction of an inverted-U function (i.e., impulse-variability [IV] theory) and the speed-accuracy trade-off. Forty-five 9- to 11-year-old children were instructed to throw at a specified percentage of maximum speed (45%, 65%, 85%, and 100%) and hit the wall target. Results indicated no statistically significant differences in variable error across the target conditions (p = .72), failing to support the inverted-U hypothesis. Spatial accuracy results indicated no statistically significant differences with mean radial error (p = .18), centroid radial error (p = .13), and bivariate variable error (p = .08) also failing to support the speed-accuracy trade-off in overarm throwing. As neither throwing performance variability nor accuracy changed across percentages of maximum speed in this sample of children as well as in a previous adult sample, current policy and practices of practitioners may need to be reevaluated.
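For readers unfamiliar with the accuracy measures named above, the sketch below computes mean radial error, centroid radial error, and bivariate variable error from throw impact coordinates using their standard motor-control definitions; the coordinates are invented, and the study's exact formulas may differ in detail.

```python
import math

# Hypothetical impact coordinates (m) relative to the target centre; not study data.
hits = [(0.10, -0.05), (-0.20, 0.15), (0.05, 0.30), (-0.10, -0.25)]

n = len(hits)
xbar = sum(x for x, _ in hits) / n
ybar = sum(y for _, y in hits) / n

# Mean radial error: average distance of each hit from the target centre.
mre = sum(math.hypot(x, y) for x, y in hits) / n
# Centroid radial error: distance of the hit centroid from the target centre (bias).
cre = math.hypot(xbar, ybar)
# Bivariate variable error: spread of the hits about their own centroid.
bve = math.sqrt(sum((x - xbar) ** 2 + (y - ybar) ** 2 for x, y in hits) / n)

print(f"MRE={mre:.3f} m  CRE={cre:.3f} m  BVE={bve:.3f} m")
```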
Mohebian, Zohreh; Farhang Dehghan, Somayeh; Dehghan, Habiballah
2018-01-01
Heat exposure and unsuitable lighting are two physical hazards in many workplaces for which there is some evidence of mental effects. The purpose of this study was to assess the combined effect of heat exposure and different lighting levels on attention and reaction time in a climatic chamber. The study was conducted on 33 healthy students (17 M/16 F) with a mean (±SD) age of 22.1 ± 2.3 years. Attention and reaction time were assessed with a continuous performance test and an RT meter, respectively, under different exposure conditions combining dry temperatures (22°C and 37°C) and lighting levels (200, 500, and 1500 lux). Findings demonstrated that increases in heat and lighting level decreased the average attention percentage and the number of correct responses and increased commission errors, omission errors, and response time (P < 0.05). The averages of the simple, diagnostic, two-color selective, and two-sound selective reaction times increased after combined exposure to heat and lighting (P < 0.05). The results indicate that, in job tasks requiring cognitive functions such as attention, vigilance, concentration, cautiousness, and quick reaction, the work environment must be optimized in terms of heat and lighting level.
Shmueli, Amir; Israeli, Avi
2013-02-20
Compared to OECD countries, Israel has a remarkably low percentage of GDP and of government expenditure spent on health, which are not reflected in worse national outcomes. Israel is also characterized by a relatively high share of GDP spent on security expenses and payment of public debt. To determine to what extent differences between Israel and the OECD countries in security expenses and payment of the public debt might account for the gaps in the percentage of GDP and of government expenditures spent on health. We compare the percentages of GDP and of government expenditures spent on health in the OECD countries with the respective percentages when using primary civilian GDP and government expenditures (i.e., when security expenses and interest payment are deducted). We compared Israel with the OECD average and examined the ranking of the OECD countries under the two measures over time. While as a percentage of GDP, the national expenditure on health in Israel was well below the average of the OECD countries, as a percentage of primary civilian GDP it was above the average until 2003 and below the average thereafter. When the OECD countries were ranked according to decreasing percent of GDP and of government expenditure spent on health, adjusting for security and debt payment expenditures changed the Israeli rank from 23rd to 17th and from 27th to 25th, respectively. Adjusting for security expenditures and interest payment, Israel's low spending on health as a percentage of GDP and as a percentage of government's spending increases and is closer to the OECD average. Further analysis should explore the effect of additional population and macroeconomic differences on the remaining gaps.
Hospitalization for Suicide Ideation or Attempt: 2008-2015.
Plemmons, Gregory; Hall, Matthew; Doupnik, Stephanie; Gay, James; Brown, Charlotte; Browning, Whitney; Casey, Robert; Freundlich, Katherine; Johnson, David P; Lind, Carrie; Rehm, Kris; Thomas, Susan; Williams, Derek
2018-06-01
Suicide ideation (SI) and suicide attempts (SAs) have been reported as increasing among US children over the last decade. We examined trends in emergency and inpatient encounters for SI and SA at US children's hospitals from 2008 to 2015. We used retrospective analysis of administrative billing data from the Pediatric Health Information System database. There were 115 856 SI and SA encounters during the study period. Annual percentage of all visits for SI and SA almost doubled, increasing from 0.66% in 2008 to 1.82% in 2015 (average annual increase 0.16 percentage points [95% confidence intervals (CIs) 0.15 to 0.17]). Significant increases were noted in all age groups but were higher in adolescents 15 to 17 years old (average annual increase 0.27 percentage points [95% CI 0.23 to 0.30]) and adolescents 12 to 14 years old (average annual increase 0.25 percentage points [95% CI 0.21 to 0.27]). Increases were noted in girls (average annual increase 0.14 percentage points [95% CI 0.13 to 0.15]) and boys (average annual increase 0.10 percentage points [95% CI 0.09 to 0.11]), but were higher for girls. Seasonal variation was also observed, with the lowest percentage of cases occurring during the summer and the highest during spring and fall. Encounters for SI and SA at US children's hospitals increased steadily from 2008 to 2015 and accounted for an increasing percentage of all hospital encounters. Increases were noted across all age groups, with consistent seasonal patterns that persisted over the study period. The growing impact of pediatric mental health disorders has important implications for children's hospitals and health care delivery systems. Copyright © 2018 by the American Academy of Pediatrics.
Effects of the liver volume and donor steatosis on errors in the estimated standard liver volume.
Siriwardana, Rohan Chaminda; Chan, See Ching; Chok, Kenneth Siu Ho; Lo, Chung Mau; Fan, Sheung Tat
2011-12-01
An accurate assessment of donor and recipient liver volumes is essential in living donor liver transplantation. Many liver donors are affected by mild to moderate steatosis, and steatotic livers are known to have larger volumes. This study analyzes errors in liver volume estimation by commonly used formulas and the effects of donor steatosis on these errors. Three hundred twenty-five Asian donors who underwent right lobe donor hepatectomy were the subjects of this study. The percentage differences between the liver volumes from computed tomography (CT) and the liver volumes estimated with each formula (ie, the error percentages) were calculated. Five popular formulas were tested. The degrees of steatosis were categorized as follows: no steatosis [n = 178 (54.8%)], ≤ 10% steatosis [n = 128 (39.4%)], and >10% to 20% steatosis [n = 19 (5.8%)]. The median errors ranged from 0.6% (7 mL) to 24.6% (360 mL). The lowest was seen with the locally derived formula. All the formulas showed a significant association between the error percentage and the CT liver volume (P < 0.001). Overestimation was seen with smaller liver volumes, whereas underestimation was seen with larger volumes. The locally derived formula was most accurate when the liver volume was 1001 to 1250 mL. A multivariate analysis showed that the estimation error was dependent on the liver volume (P = 0.001) and the anthropometric measurement that was used in the calculation (P < 0.001) rather than steatosis (P ≥ 0.07). In conclusion, all the formulas have a similar pattern of error that is possibly related to the anthropometric measurement. Clinicians should be aware of this pattern of error and the liver volume with which their formula is most accurate. Copyright © 2011 American Association for the Study of Liver Diseases.
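The error percentage discussed above is just the formula estimate compared against the CT-measured volume; a minimal sketch of that calculation follows, with a made-up CT volume and an illustrative BSA-based formula (the coefficients resemble a widely cited published formula, but they are shown here only for illustration and should be verified before any real use).

```python
# Hypothetical CT-measured liver volume (mL); not study data.
ct_volume_ml = 1180.0

def estimated_liver_volume(body_surface_area_m2: float) -> float:
    # Illustrative linear BSA-based formula of the common a*BSA + b form.
    return 706.2 * body_surface_area_m2 + 2.4

est = estimated_liver_volume(1.73)
# Error percentage: signed difference between estimate and CT volume,
# expressed relative to the CT volume.
error_pct = 100.0 * (est - ct_volume_ml) / ct_volume_ml
print(f"estimated={est:.0f} mL, error={error_pct:+.1f}%")
```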
Koltun, G.F.; Holtschlag, David J.
2010-01-01
Bootstrapping techniques employing random subsampling were used with the AFINCH (Analysis of Flows In Networks of CHannels) model to gain insights into the effects of variation in streamflow-gaging-network size and composition on the accuracy and precision of streamflow estimates at ungaged locations in the 0405 (Southeast Lake Michigan) hydrologic subregion. AFINCH uses stepwise-regression techniques to estimate monthly water yields from catchments based on geospatial-climate and land-cover data in combination with available streamflow and water-use data. Calculations are performed on a hydrologic-subregion scale for each catchment and stream reach contained in a National Hydrography Dataset Plus (NHDPlus) subregion. Water yields from contributing catchments are multiplied by catchment areas and resulting flow values are accumulated to compute streamflows in stream reaches which are referred to as flow lines. AFINCH imposes constraints on water yields to ensure that observed streamflows are conserved at gaged locations. Data from the 0405 hydrologic subregion (referred to as Southeast Lake Michigan) were used for the analyses. Daily streamflow data were measured in the subregion for 1 or more years at a total of 75 streamflow-gaging stations during the analysis period which spanned water years 1971–2003. The number of streamflow gages in operation each year during the analysis period ranged from 42 to 56 and averaged 47. Six sets (one set for each censoring level), each composed of 30 random subsets of the 75 streamflow gages, were created by censoring (removing) approximately 10, 20, 30, 40, 50, and 75 percent of the streamflow gages (the actual percentage of operating streamflow gages censored for each set varied from year to year, and within the year from subset to subset, but averaged approximately the indicated percentages). Streamflow estimates for six flow lines each were aggregated by censoring level, and results were analyzed to assess (a) how the size and composition of the streamflow-gaging network affected the average apparent errors and variability of the estimated flows and (b) whether results for certain months were more variable than for others. The six flow lines were categorized into one of three types depending upon their network topology and position relative to operating streamflow-gaging stations. Statistical analysis of the model results indicates that (1) less precise (that is, more variable) estimates resulted from smaller streamflow-gaging networks as compared to larger streamflow-gaging networks, (2) precision of AFINCH flow estimates at an ungaged flow line is improved by operation of one or more streamflow gages upstream and (or) downstream in the enclosing basin, (3) no consistent seasonal trend in estimate variability was evident, and (4) flow lines from ungaged basins appeared to exhibit the smallest absolute apparent percent errors (APEs) and smallest changes in average APE as a function of increasing censoring level. The counterintuitive results described in item (4) above likely reflect both the nature of the base-streamflow estimate from which the errors were computed and insensitivity in the average model-derived estimates to changes in the streamflow-gaging-network size and composition. Another analysis demonstrated that errors for flow lines in ungaged basins have the potential to be much larger than indicated by their APEs if measured relative to their true (but unknown) flows.
“Missing gage” analyses, based on examination of censoring subset results where the streamflow gage of interest was omitted from the calibration data set, were done to better understand the true error characteristics for ungaged flow lines as a function of network size. Results examined for 2 water years indicated that the probability of computing a monthly streamflow estimate within 10 percent of the true value with AFINCH decreased from greater than 0.9 at about a 10-percent network-censoring level to less than 0.6 as the censoring level approached 75 percent. In addition, estimates for typically dry months tended to be characterized by larger percent errors than typically wetter months.
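The censoring procedure described above amounts to repeated random subsampling of the streamflow-gage network; the sketch below shows one way such censored subsets could be generated, with placeholder gage IDs and entirely independent of the AFINCH code itself.

```python
import random

# Placeholder list of streamflow-gage IDs; the real analysis used 75 gages.
gages = [f"gage_{i:02d}" for i in range(75)]

def censored_subsets(gage_ids, censor_fraction, n_subsets=30, seed=0):
    """Return n_subsets random networks with ~censor_fraction of gages removed."""
    rng = random.Random(seed)
    n_remove = round(censor_fraction * len(gage_ids))
    subsets = []
    for _ in range(n_subsets):
        removed = set(rng.sample(gage_ids, n_remove))
        subsets.append([g for g in gage_ids if g not in removed])
    return subsets

# Six censoring levels, 30 random subsets each, mirroring the study design.
networks = {f: censored_subsets(gages, f) for f in (0.10, 0.20, 0.30, 0.40, 0.50, 0.75)}
print({f: len(nets[0]) for f, nets in networks.items()})  # gages retained per subset
```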
An Improved Method of Heterogeneity Compensation for the Convolution / Superposition Algorithm
NASA Astrophysics Data System (ADS)
Jacques, Robert; McNutt, Todd
2014-03-01
Purpose: To improve the accuracy of convolution/superposition (C/S) in heterogeneous material by developing a new algorithm: heterogeneity compensated superposition (HCS). Methods: C/S has proven to be a good estimator of the dose deposited in a homogeneous volume. However, near heterogeneities electron disequilibrium occurs, leading to the faster fall-off and re-buildup of dose. We propose to filter the actual patient density in a position- and direction-sensitive manner, allowing the dose deposited near interfaces to be increased or decreased relative to C/S. We implemented the effective density function as a multivariate first-order recursive filter and incorporated it into a GPU-accelerated, multi-energetic C/S implementation. We compared HCS against C/S using the ICCR 2000 Monte-Carlo accuracy benchmark, 23 similar accuracy benchmarks and 5 patient cases. Results: Multi-energetic HCS increased the dosimetric accuracy for the vast majority of voxels; in many cases near Monte-Carlo results were achieved. We defined the per-voxel error, %|mm, as the minimum of the distance to agreement in mm and the dosimetric percentage error relative to the maximum MC dose. HCS improved the average mean error by 0.79 %|mm for the patient volumes, reducing it from 1.93 %|mm to 1.14 %|mm. Very low densities (i.e., < 0.1 g/cm³) remained problematic, but may be solvable with a better filter function. Conclusions: HCS improved upon C/S's density-scaled heterogeneity correction with a position- and direction-sensitive density filter. This method significantly improved the accuracy of the GPU-based algorithm, reaching the accuracy levels of Monte Carlo-based methods with performance in a few tenths of seconds per beam. Acknowledgement: Funding for this research was provided by the NSF Cooperative Agreement EEC9731748, Elekta / IMPAC Medical Systems, Inc. and the Johns Hopkins University. James Satterthwaite provided the Monte Carlo benchmark simulations.
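The per-voxel error %|mm defined above takes, voxel by voxel, the smaller of the dose difference (as a percentage of the maximum Monte Carlo dose) and the distance to agreement in mm. The simplified 1-D sketch below illustrates that composite metric under assumptions of uniform voxel spacing and a brute-force nearest-dose-match DTA; it is not the authors' implementation.

```python
import numpy as np

def per_voxel_error(dose_eval, dose_mc, voxel_mm=2.0, dose_tol_frac=0.01):
    """min(percent dose error relative to max MC dose, distance-to-agreement in mm),
    computed per voxel on a 1-D profile as a simplified illustration."""
    dose_eval = np.asarray(dose_eval, dtype=float)
    dose_mc = np.asarray(dose_mc, dtype=float)
    d_max = dose_mc.max()
    positions = np.arange(dose_mc.size) * voxel_mm

    pct_err = 100.0 * np.abs(dose_eval - dose_mc) / d_max

    dta = np.empty_like(pct_err)
    for i, d in enumerate(dose_eval):
        # Distance to the nearest MC voxel whose dose matches d within tolerance.
        matches = np.where(np.abs(dose_mc - d) <= dose_tol_frac * d_max)[0]
        dta[i] = np.abs(positions[matches] - positions[i]).min() if matches.size else np.inf
    return np.minimum(pct_err, dta)

# Toy 1-D depth-dose-like profiles (arbitrary units), not benchmark data.
mc = np.array([1.00, 0.98, 0.93, 0.80, 0.55, 0.30, 0.18, 0.12])
cs = np.array([1.00, 0.97, 0.90, 0.76, 0.58, 0.33, 0.19, 0.12])
print(np.round(per_voxel_error(cs, mc), 2))
```

A full implementation would search in 3-D and interpolate between voxels when computing the DTA; the nearest-match search above is only meant to make the min(dose error, distance) idea concrete.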
Extreme wind-wave modeling and analysis in the south Atlantic ocean
NASA Astrophysics Data System (ADS)
Campos, R. M.; Alves, J. H. G. M.; Guedes Soares, C.; Guimaraes, L. G.; Parente, C. E.
2018-04-01
A set of wave hindcasts is constructed using two different types of wind calibration, followed by an additional test retuning the input source term Sin in the wave model. The goal is to improve the simulation of extreme wave events in the South Atlantic Ocean without compromising average conditions. Wind fields are based on the Climate Forecast System Reanalysis (CFSR/NCEP). The first wind calibration applies a simple linear regression model, with coefficients obtained from the comparison of CFSR against buoy data. The second is a method where deficiencies of the CFSR associated with severe sea state events are remedied, whereby "defective" winds are replaced with satellite data within cyclones. A total of six wind datasets forced WAVEWATCH-III, and an additional three tests with modified Sin in WAVEWATCH-III led to a total of nine wave hindcasts that are evaluated against satellite and buoy data for ambient and extreme conditions. The target variable considered is the significant wave height (Hs). Increasing sea-state severity shows a progressive increase of the hindcast underestimation, which can be quantified as a function of percentiles. The wind calibration using a linear regression function shows results similar to the adjustment of the Sin term (increase of the βmax parameter) in WAVEWATCH-III: it effectively reduces the average bias of Hs but cannot avoid the increase of errors with percentiles. The use of blended scatterometer winds within cyclones could reduce the increasing wave hindcast errors mainly above the 93rd percentile and leads to a better representation of Hs at the peak of the storms. The combination of linear regression calibration of non-cyclonic winds with scatterometer winds within the cyclones generated a wave hindcast with small errors from calm to extreme conditions. This approach led to a reduction of the percentage error of Hs from 14% to less than 8% for extreme waves, while also improving the RMSE.
Arima model and exponential smoothing method: A comparison
NASA Astrophysics Data System (ADS)
Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri
2013-04-01
This study compares the Autoregressive Integrated Moving Average (ARIMA) model and the Exponential Smoothing Method in making predictions. The comparison focuses on the ability of both methods to make forecasts with different numbers of data sources and different lengths of forecasting period. For this purpose, data on the Price of Crude Palm Oil (RM/tonne), the Exchange Rate of the Ringgit Malaysia (RM) against the Great Britain Pound (GBP), and the Price of SMR 20 Rubber (cents/kg), covering three different time series, are used in the comparison process. The forecasting accuracy of each model is then measured by examining the prediction errors using the Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and Mean Absolute Deviation (MAD). The study shows that the ARIMA model can produce better predictions for long-term forecasting with limited data sources, but cannot produce a better prediction for a time series with a narrow range from one point to another, as in the Exchange Rates series. On the contrary, the Exponential Smoothing Method produces better forecasts for the Exchange Rates series, which has a narrow range from one point to another, while it cannot produce a better prediction for a longer forecasting period.
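The three accuracy measures named above have standard definitions; a small sketch computing them for an invented actual-versus-forecast series is given below.

```python
# Standard forecast-accuracy measures used in the comparison above.
# The actual and forecast values here are made up for illustration.
actual   = [2450.0, 2510.0, 2603.0, 2580.0, 2690.0]
forecast = [2400.0, 2535.0, 2570.0, 2610.0, 2655.0]

n = len(actual)
errors = [a - f for a, f in zip(actual, forecast)]

mse  = sum(e ** 2 for e in errors) / n                              # Mean Squared Error
mad  = sum(abs(e) for e in errors) / n                              # Mean Absolute Deviation
mape = 100.0 * sum(abs(e) / a for e, a in zip(errors, actual)) / n  # Mean Absolute Percentage Error

print(f"MSE={mse:.1f}  MAD={mad:.1f}  MAPE={mape:.2f}%")
```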
NASA Astrophysics Data System (ADS)
Maheshwera Reddy Paturi, Uma; Devarasetti, Harish; Abimbola Fadare, David; Reddy Narala, Suresh Kumar
2018-04-01
In the present paper, an artificial neural network (ANN) and response surface methodology (RSM) are used to model surface roughness in WS2 (tungsten disulphide) solid-lubricant-assisted minimal quantity lubrication (MQL) machining. The experimental data for real-time MQL turning of Inconel 718 considered in this paper are available in the literature [1]. In the ANN modeling, performance parameters such as mean square error (MSE), mean absolute percentage error (MAPE) and average error in prediction (AEP) for the experimental data were determined using the Levenberg–Marquardt (LM) feed-forward back-propagation training algorithm with tansig as the transfer function. The MATLAB toolbox was utilized for training and testing the neural network model. A neural network with three input neurons, one hidden layer with five neurons and one output neuron (3-5-1 architecture) was found to be the most reliable and optimal. The coefficients of determination (R2) for the ANN and RSM models were 0.998 and 0.982 respectively. The surface roughness predictions from the ANN and RSM models were compared with experimentally measured values and found to be in good agreement with them. However, the prediction efficacy of the ANN model is relatively high compared with the RSM model predictions.
The 'Soil Cover App' - a new tool for fast determination of dead and living biomass on soil
NASA Astrophysics Data System (ADS)
Bauer, Thomas; Strauss, Peter; Riegler-Nurscher, Peter; Prankl, Johann; Prankl, Heinrich
2017-04-01
Worldwide, many agricultural practices aim at soil protection strategies using living or dead biomass as soil cover. Especially when management practices focus on soil erosion mitigation, the effectiveness of these practices is directly driven by the amount of soil cover left on the soil surface. Hence there is a need for quick and reliable methods of soil cover estimation, not only for living biomass but particularly for dead biomass (mulch). Available methods for soil cover measurement are either subjective, depending on an educated guess, or time consuming, e.g., if the image is analysed manually at grid points. We therefore developed a mobile application using an algorithm based on entangled forest classification. The final output of the algorithm gives classified labels for each pixel of the input image as well as the percentage of each class, the classes being living biomass, dead biomass, stones and soil. Our training dataset consisted of more than 250 different images and their annotated class information. Images were taken under a range of environmental conditions, covering different light, soil coverage from 0% to 100%, and different materials such as living plants, residues, straw material and stones. We compared the results provided by our mobile application with a data set of 180 manually annotated images. A comparison between both methods revealed a regression slope of 0.964 with a coefficient of determination R2 = 0.92, corresponding to an average error of about 4%. While the average error of living plant classification was about 3%, dead residue classification resulted in an 8% error. Thus the new mobile application offers a fast and easy way to obtain information on the protective potential of a particular agricultural management site.
SU-F-T-431: Dosimetric Validation of Acuros XB Algorithm for Photon Dose Calculation in Water
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, L; Yadav, G; Kishore, V
2016-06-15
Purpose: To validate the Acuros XB algorithm implemented in the Eclipse treatment planning system, version 11 (Varian Medical Systems, Inc., Palo Alto, CA, USA), for photon dose calculation. Methods: Acuros XB is a linear Boltzmann transport equation (LBTE) solver that solves the LBTE explicitly and gives results equivalent to Monte Carlo. A 6 MV photon beam from a Varian Clinac-iX (2300CD) was used for dosimetric validation of Acuros XB. Percentage depth dose (PDD) and profile (at dmax, 5, 10, 20 and 30 cm) measurements were performed in water for field sizes of 2×2, 4×4, 6×6, 10×10, 20×20, 30×30 and 40×40 cm². Acuros XB results were compared against measurements and against the anisotropic analytical algorithm (AAA). Results: Acuros XB showed good agreement with measurements and was comparable to the AAA algorithm. PDDs and profiles differed by less than one percent from measurements, and from the PDDs and profiles calculated by the AAA algorithm, for all field sizes. For the TPS-calculated gamma error histograms, the average gamma errors in the PDD curves before and after dmax were 0.28 and 0.15 for Acuros XB and 0.24 and 0.17 for AAA, respectively; the average gamma errors in the profile curves in the central region, penumbra region and outside-field region were 0.17, 0.21 and 0.42 for Acuros XB and 0.10, 0.22 and 0.35 for AAA, respectively. Conclusion: The dosimetric validation of the Acuros XB algorithm in a water medium was satisfactory. Acuros XB has the potential to perform photon dose calculation with high accuracy, which is desirable in a modern radiotherapy environment.
Brito, Thiago V.; Morley, Steven K.
2017-10-25
A method for comparing and optimizing the accuracy of empirical magnetic field models using in situ magnetic field measurements is presented in this paper. The optimization method minimizes a cost function—τ—that explicitly includes both a magnitude and an angular term. A time span of 21 days, including periods of mild and intense geomagnetic activity, was used for this analysis. A comparison between five magnetic field models (T96, T01S, T02, TS04, and TS07) widely used by the community demonstrated that the T02 model was, on average, the most accurate when driven by the standard model input parameters. The optimization procedure, performed in all models except TS07, generally improved the results when compared to unoptimized versions of the models. Additionally, using more satellites in the optimization procedure produces more accurate results. This procedure reduces the number of large errors in the model, that is, it reduces the number of outliers in the error distribution. The TS04 model shows the most accurate results after the optimization in terms of both the magnitude and direction, when using at least six satellites in the fitting. It gave a smaller error than its unoptimized counterpart 57.3% of the time and outperformed the best unoptimized model (T02) 56.2% of the time. Its median percentage error in |B| was reduced from 4.54% to 3.84%. Finally, the difference among the models analyzed, when compared in terms of the median of the error distributions, is not very large. However, the unoptimized models can have very large errors, which are much reduced after the optimization.
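The paper describes the cost function τ only as combining a magnitude term and an angular term; the sketch below shows one plausible form of such a cost for a modelled versus observed magnetic field vector. The equal weighting and the exact functional form are assumptions, not the authors' definition.

```python
import numpy as np

def field_cost(b_model, b_obs, w_mag=1.0, w_ang=1.0):
    """Illustrative cost combining a relative magnitude error and the
    angle (radians) between modelled and observed B vectors."""
    b_model = np.asarray(b_model, dtype=float)
    b_obs = np.asarray(b_obs, dtype=float)
    mag_model, mag_obs = np.linalg.norm(b_model), np.linalg.norm(b_obs)

    mag_term = abs(mag_model - mag_obs) / mag_obs
    cos_angle = np.clip(np.dot(b_model, b_obs) / (mag_model * mag_obs), -1.0, 1.0)
    ang_term = np.arccos(cos_angle)
    return w_mag * mag_term + w_ang * ang_term

# Toy field vectors (nT) at one satellite location; not mission data.
print(field_cost([20.0, -5.0, 95.0], [22.0, -4.0, 90.0]))
```

In an optimization of the kind described above, a cost of this form would be summed over all satellites and time steps and minimized with respect to the model input parameters.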
NASA Technical Reports Server (NTRS)
Olson, William S.; Kummerow, Christian D.; Yang, Song; Petty, Grant W.; Tao, Wei-Kuo; Bell, Thomas L.; Braun, Scott A.; Wang, Yansen; Lang, Stephen E.; Johnson, Daniel E.;
2006-01-01
A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and nonconvective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud-resolving model simulations, and from the Bayesian formulation itself. Synthetic rain-rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in TMI instantaneous rain-rate estimates at 0.5° resolution range from approximately 50% at 1 mm/h to 20% at 14 mm/h. Errors in collocated spaceborne radar rain-rate estimates are roughly 50%-80% of the TMI errors at this resolution. The estimated algorithm random error in TMI rain rates at monthly, 2.5° resolution is relatively small (less than 6% at 5 mm day⁻¹) in comparison with the random error resulting from infrequent satellite temporal sampling (8%-35% at the same rain rate). Percentage errors resulting from sampling decrease with increasing rain rate, and sampling errors in latent heating rates follow the same trend. Averaging over 3 months reduces sampling errors in rain rates to 6%-15% at 5 mm day⁻¹, with proportionate reductions in latent heating sampling errors.
Measuring in-use ship emissions with international and U.S. federal methods.
Khan, M Yusuf; Ranganathan, Sindhuja; Agrawal, Harshit; Welch, William A; Laroo, Christopher; Miller, J Wayne; Cocker, David R
2013-03-01
Regulatory agencies have shifted their emphasis from measuring emissions during certification cycles to measuring emissions during actual use. Emission measurements in this research were made from two different large ships at sea to compare the Simplified Measurement Method (SMM) compliant with the International Maritime Organization (IMO) NOx Technical Code to the Portable Emission Measurement Systems (PEMS) compliant with the U.S. Environmental Protection Agency (EPA) 40 Code of Federal Regulations (CFR) Part 1065 for on-road emission testing. Emissions of nitrogen oxides (NOx), carbon dioxide (CO2), and carbon monoxide (CO) were measured at load points specified by the International Organization for Standardization (ISO) to compare the two measurement methods. The average percentage errors calculated for PEMS measurements were 6.5%, 0.6%, and 357% for NOx, CO2, and CO, respectively. The NOx percentage error of 6.5% corresponds to a 0.22 to 1.11 g/kW-hr error in moving from Tier III (3.4 g/kW-hr) to Tier I (17.0 g/kW-hr) emission limits. Emission factors (EFs) of NOx and CO2 measured via SMM were comparable to other studies and regulatory agencies' estimates. However, the EF(PM2.5) for this study was up to 26% higher than that currently used by regulatory agencies. The PM2.5 was composed predominantly of hydrated sulfate (70-95%), followed by organic carbon (11-14%), ash (6-11%), and elemental carbon (0.4-0.8%). This research provides a direct comparison between the International Maritime Organization and U.S. Environmental Protection Agency reference methods for quantifying in-use emissions from ships. It provides correlations for NOx, CO2, and CO measured by a PEMS unit (certified by U.S. EPA for on-road testing) against IMO's Simplified Measurement Method for on-board certification, substantiates the measurements of NOx by PEMS, and quantifies measurement error. This study also provides in-use modal and overall weighted emission factors of gaseous (NOx, CO, CO2, total hydrocarbons [THC], and SO2) and particulate pollutants from the main engine of a container ship, which are helpful in the development of emission inventories.
Complete Bouguer gravity map of the Medicine Lake Quadrangle, California
Finn, C.
1981-01-01
A mathematical technique, called kriging, was programmed for a computer to interpolate hydrologic data based on a network of measured values in west-central Kansas. The computer program generated estimated values at the center of each 1-mile section in the Western Kansas Groundwater Management District No. 1 and facilitated contouring of selected values that are needed in the effective management of ground water for irrigation. The kriging technique produced objective and reproducible maps that illustrated hydrologic conditions in the Ogallala aquifer, the principal source of water in west-central Kansas. Maps of the aquifer, which use a 3-year average, included the 1978-80 water-table altitudes, which ranged from about 2,580 to 3,720 feet; the 1978-80 saturated thicknesses, which ranged from about 0 to 250 feet; and the percentage changes in saturated thickness from 1950 to 1978-80, which ranged from about a 50-percent increase to a 100-percent decrease. A map showing errors of estimate also was provided as a measure of reliability for the 1978-80 water-table altitudes. Errors of estimate ranged from 2 to 24 feet. (USGS)
Development of Bio-impedance Analyzer (BIA) for Body Fat Calculation
NASA Astrophysics Data System (ADS)
Riyadi, Munawar A.; Nugraha, A.; Santoso, M. B.; Septaditya, D.; Prakoso, T.
2017-04-01
Common weight scales cannot assess body composition or determine the fat mass and fat-free mass that make up body weight. This research proposes a bio-impedance analysis (BIA) tool capable of body composition assessment. The tool uses four electrodes: two are used to pass a 50 kHz sine-wave current through the body, and the other two measure the voltage produced by the body for impedance analysis. Parameters such as height, weight, age, and gender are provided individually. These parameters, together with the impedance measurements, are then processed to produce a body fat percentage. The experimental results show impressive repeatability for successive measurements (stdev ≤ 0.25% fat mass). Moreover, results for the hand-to-hand node scheme reveal an average absolute difference between the two analyzer tools, across all subjects, of 0.48% (fat mass), with a maximum absolute discrepancy of 1.22% (fat mass). In addition, the relative error normalized to the Omron HBF-306 used as a comparison tool is less than 2%. As a result, the system offers a good evaluation tool for body fat mass.
Preston, Jonathan L; Hull, Margaret; Edwards, Mary Louise
2013-05-01
To determine if speech error patterns in preschoolers with speech sound disorders (SSDs) predict articulation and phonological awareness (PA) outcomes almost 4 years later. Twenty-five children with histories of preschool SSDs (and normal receptive language) were tested at an average age of 4;6 (years;months) and were followed up at age 8;3. The frequency of occurrence of preschool distortion errors, typical substitution and syllable structure errors, and atypical substitution and syllable structure errors was used to predict later speech sound production, PA, and literacy outcomes. Group averages revealed below-average school-age articulation scores and low-average PA but age-appropriate reading and spelling. Preschool speech error patterns were related to school-age outcomes. Children for whom >10% of their speech sound errors were atypical had lower PA and literacy scores at school age than children who produced <10% atypical errors. Preschoolers who produced more distortion errors were likely to have lower school-age articulation scores than preschoolers who produced fewer distortion errors. Different preschool speech error patterns predict different school-age clinical outcomes. Many atypical speech sound errors in preschoolers may be indicative of weak phonological representations, leading to long-term PA weaknesses. Preschoolers' distortions may be resistant to change over time, leading to persisting speech sound production problems.
Consistency of gene starts among Burkholderia genomes
2011-01-01
Background Evolutionary divergence in the position of the translational start site among orthologous genes can have significant functional impacts. Divergence can alter the translation rate, degradation rate, subcellular location, and function of the encoded proteins. Results Existing Genbank gene maps for Burkholderia genomes suggest that extensive divergence has occurred--53% of ortholog sets based on Genbank gene maps had inconsistent gene start sites. However, most of these inconsistencies appear to be gene-calling errors. Evolutionary divergence was the most plausible explanation for only 17% of the ortholog sets. Correcting probable errors in the Genbank gene maps decreased the percentage of ortholog sets with inconsistent starts by 68%, increased the percentage of ortholog sets with extractable upstream intergenic regions by 32%, increased the sequence similarity of intergenic regions and predicted proteins, and increased the number of proteins with identifiable signal peptides. Conclusions Our findings highlight an emerging problem in comparative genomics: single-digit percent errors in gene predictions can lead to double-digit percentages of inconsistent ortholog sets. The work demonstrates a simple approach to evaluate and improve the quality of gene maps. PMID:21342528
Which skills and factors better predict winning and losing in high-level men's volleyball?
Peña, Javier; Rodríguez-Guerra, Jorge; Buscà, Bernat; Serra, Núria
2013-09-01
The aim of this study was to determine which skills and factors better predicted the outcomes of regular season volleyball matches in the Spanish "Superliga" and were significant for obtaining positive results in the game. The study sample consisted of 125 matches played during the 2010-11 Spanish men's first division volleyball championship. Matches were played by 12 teams composed of 148 players from 17 different nations from October 2010 to March 2011. The variables analyzed were the result of the game, team category, home/away court factors, points obtained in the break point phase, number of service errors, number of service aces, number of reception errors, percentage of positive receptions, percentage of perfect receptions, reception efficiency, number of attack errors, number of blocked attacks, attack points, percentage of attack points, attack efficiency, and number of blocks performed by both teams participating in the match. The results showed that the variables of team category, points obtained in the break point phase, number of reception errors, and number of blocked attacks by the opponent were significant predictors of winning or losing the matches. Odds ratios indicated that the odds of winning a volleyball match were 6.7 times greater for the teams belonging to higher rankings and that every additional point in Complex II increased the odds of winning a match by 1.5 times. Every reception and blocked ball error decreased the possibility of winning by 0.6 and 0.7 times, respectively.
Figueira, Bruno; Gonçalves, Bruno; Folgado, Hugo; Masiulis, Nerijus; Calleja-González, Julio; Sampaio, Jaime
2018-06-14
The present study aims to assess the accuracy of the NBN23® system, an indoor tracking system based on radio-frequency and standard Bluetooth Low Energy channels. Twelve capture tags were attached to a custom cart at fixed distances of 0.5, 1.0, 1.5, and 1.8 m. The cart was pushed along a predetermined course following the lines of a standard-dimensions basketball court. The course was performed at low speed (<10.0 km/h), medium speed (>10.0 km/h and <20.0 km/h) and high speed (>20.0 km/h). Root mean square error (RMSE) and percentage of variance accounted for (%VAF) were used as accuracy measures. The obtained data showed acceptable accuracy for both RMSE and %VAF, despite the expected degree of error in position measurement at higher speeds. The RMSE for all distances and velocities showed an average absolute error of 0.30 ± 0.13 cm with a %VAF of 90.61 ± 8.34, in line with most available systems and considered acceptable for indoor sports. Processing the data with filter correction seemed to reduce noise and promote a lower relative error, increasing the %VAF for each measured distance. Research using position-derived variables in basketball is still very scarce; thus, this independent test of the NBN23® tracking system provides accuracy details and opens up opportunities to develop new performance indicators that help to optimize training adaptations and performance.
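RMSE and %VAF are conventional accuracy measures for tracking systems; the short sketch below uses their usual definitions, with %VAF taken as 100·(1 − var(error)/var(reference)), which may differ in detail from the authors' computation, on invented reference-versus-measured positions.

```python
import numpy as np

# Invented reference vs. measured 1-D positions (m); not NBN23 data.
reference = np.array([0.50, 1.00, 1.50, 1.80, 1.50, 1.00, 0.50])
measured  = np.array([0.52, 0.97, 1.53, 1.77, 1.46, 1.04, 0.49])

error = measured - reference
rmse = np.sqrt(np.mean(error ** 2))                      # root mean square error
vaf = 100.0 * (1.0 - np.var(error) / np.var(reference))  # % variance accounted for

print(f"RMSE = {rmse * 100:.1f} cm, %VAF = {vaf:.1f}%")
```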
Dajani, Hilmi R; Hosokawa, Kazuya; Ando, Shin-Ichi
2016-11-01
Lung-to-finger circulation time of oxygenated blood during nocturnal periodic breathing in heart failure patients measured using polysomnography correlates negatively with cardiac function but possesses limited accuracy for cardiac output (CO) estimation. CO was recalculated from lung-to-finger circulation time using a multivariable linear model with information on age and average overnight heart rate in 25 patients who underwent evaluation of heart failure. The multivariable model decreased the percentage error to 22.3% relative to invasive CO measured during cardiac catheterization. This improved automated noninvasive CO estimation using multiple variables meets a recently proposed performance criterion for clinical acceptability of noninvasive CO estimation, and compares very favorably with other available methods. Copyright © 2016 Elsevier Inc. All rights reserved.
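A multivariable linear model of the kind described above can be fitted by ordinary least squares; the sketch below regresses CO on lung-to-finger circulation time, age, and overnight heart rate using invented patient data, purely to illustrate the model form rather than the published coefficients, and it reports a mean absolute percentage error rather than the study's exact error metric.

```python
import numpy as np

# Invented patient data: LFCT (s), age (yr), mean overnight HR (bpm), invasive CO (L/min).
lfct = np.array([18.0, 25.0, 32.0, 21.0, 28.0, 36.0])
age  = np.array([61.0, 70.0, 75.0, 58.0, 66.0, 79.0])
hr   = np.array([72.0, 65.0, 60.0, 80.0, 70.0, 58.0])
co   = np.array([4.8, 3.9, 3.1, 5.2, 4.0, 2.7])

# Design matrix with an intercept column; fit CO ~ LFCT + age + HR by least squares.
X = np.column_stack([np.ones_like(lfct), lfct, age, hr])
coef, *_ = np.linalg.lstsq(X, co, rcond=None)
co_hat = X @ coef

# Percentage error of the fitted estimates relative to the invasive reference.
pct_err = 100.0 * np.mean(np.abs(co_hat - co) / co)
print("coefficients:", np.round(coef, 3), " mean abs % error:", round(pct_err, 1))
```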
Adaptive aperture for Geiger mode avalanche photodiode flash ladar systems.
Wang, Liang; Han, Shaokun; Xia, Wenze; Lei, Jieyu
2018-02-01
Although the Geiger-mode avalanche photodiode (GM-APD) flash ladar system offers the advantages of high sensitivity and simple construction, its detection performance is influenced not only by the incoming signal-to-noise ratio but also by the absolute number of noise photons. In this paper, we deduce a hyperbolic approximation to estimate the noise-photon number from the false-firing percentage in a GM-APD flash ladar system under dark conditions. By using this hyperbolic approximation function, we introduce a method to adapt the aperture to reduce the number of incoming background-noise photons. Finally, the simulation results show that the adaptive-aperture method decreases the false probability in all cases, increases the detection probability provided that the signal exceeds the noise, and decreases the average ranging error per frame.
Accuracy of measurement in electrically evoked compound action potentials.
Hey, Matthias; Müller-Deile, Joachim
2015-01-15
Electrically evoked compound action potentials (ECAP) in cochlear implant (CI) patients are characterized by the amplitude of the N1P1 complex. The measurement of evoked potentials yields a combination of the measured signal with various noise components but for ECAP procedures performed in the clinical routine, only the averaged curve is accessible. To date no detailed analysis of error dimension has been published. The aim of this study was to determine the error of the N1P1 amplitude and to determine the factors that impact the outcome. Measurements were performed on 32 CI patients with either CI24RE (CA) or CI512 implants using the Software Custom Sound EP (Cochlear). N1P1 error approximation of non-averaged raw data consisting of recorded single-sweeps was compared to methods of error approximation based on mean curves. The error approximation of the N1P1 amplitude using averaged data showed comparable results to single-point error estimation. The error of the N1P1 amplitude depends on the number of averaging steps and amplification; in contrast, the error of the N1P1 amplitude is not dependent on the stimulus intensity. Single-point error showed smaller N1P1 error and better coincidence with 1/√(N) function (N is the number of measured sweeps) compared to the known maximum-minimum criterion. Evaluation of N1P1 amplitude should be accompanied by indication of its error. The retrospective approximation of this measurement error from the averaged data available in clinically used software is possible and best done utilizing the D-trace in forward masking artefact reduction mode (no stimulation applied and recording contains only the switch-on-artefact). Copyright © 2014 Elsevier B.V. All rights reserved.
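The 1/√N behaviour mentioned above is simply the standard error of a mean over N averaged sweeps; the sketch below simulates averaging noisy sweeps around an arbitrary waveform and shows the residual noise shrinking roughly as 1/√N. The signal and noise parameters are arbitrary, not ECAP recordings.

```python
import numpy as np

rng = np.random.default_rng(1)
true_signal = np.sin(np.linspace(0, np.pi, 64))  # stand-in for an ECAP waveform
noise_sd = 0.5                                   # arbitrary single-sweep noise level

for n_sweeps in (50, 200, 800):
    sweeps = true_signal + rng.normal(0.0, noise_sd, size=(n_sweeps, true_signal.size))
    residual = sweeps.mean(axis=0) - true_signal
    # Residual noise of the averaged trace vs. the 1/sqrt(N) prediction.
    print(f"N={n_sweeps:4d}  measured sd={residual.std():.4f}  "
          f"predicted sd={noise_sd / np.sqrt(n_sweeps):.4f}")
```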
Guelpa, Anina; Bevilacqua, Marta; Marini, Federico; O'Kennedy, Kim; Geladi, Paul; Manley, Marena
2015-04-15
It has been established in this study that the Rapid Visco Analyser (RVA) can describe maize hardness, irrespective of the RVA profile, when used in association with appropriate multivariate data analysis techniques. Therefore, the RVA can complement or replace current and/or conventional methods as a hardness descriptor. Hardness modelling based on RVA viscograms was carried out using seven conventional hardness methods (hectoliter mass (HLM), hundred kernel mass (HKM), particle size index (PSI), percentage vitreous endosperm (%VE), protein content, percentage chop (%chop) and near infrared (NIR) spectroscopy) as references and three different RVA profiles (hard, soft and standard) as predictors. An approach using locally weighted partial least squares (LW-PLS) was followed to build the regression models. The resulted prediction errors (root mean square error of cross-validation (RMSECV) and root mean square error of prediction (RMSEP)) for the quantification of hardness values were always lower or in the same order of the laboratory error of the reference method. Copyright © 2014 Elsevier Ltd. All rights reserved.
Wagner, Julia Y; Körner, Annmarie; Schulte-Uentrop, Leonie; Kubik, Mathias; Reichenspurner, Hermann; Kluge, Stefan; Reuter, Daniel A; Saugel, Bernd
2018-04-01
The CNAP technology (CNSystems Medizintechnik AG, Graz, Austria) allows continuous noninvasive arterial pressure waveform recording based on the volume clamp method and estimation of cardiac output (CO) by pulse contour analysis. We compared CNAP-derived CO measurements (CNCO) with intermittent invasive CO measurements (pulmonary artery catheter; PAC-CO) in postoperative cardiothoracic surgery patients. In 51 intensive care unit patients after cardiothoracic surgery, we measured PAC-CO (criterion standard) and CNCO at three different time points. We conducted two separate comparative analyses: (1) CNCO auto-calibrated to biometric patient data (CNCObio) versus PAC-CO and (2) CNCO calibrated to the first simultaneously measured PAC-CO value (CNCOcal) versus PAC-CO. The agreement between the two methods was statistically assessed by Bland-Altman analysis and the percentage error. In a subgroup of patients, a passive leg raising maneuver was performed for clinical indications and we present the changes in PAC-CO and CNCO in four-quadrant plots (exclusion zone 0.5 L/min) in order to evaluate the trending ability of CNCO. The mean difference between CNCObio and PAC-CO was +0.5 L/min (standard deviation ± 1.3 L/min; 95% limits of agreement -1.9 to +3.0 L/min). The percentage error was 49%. The concordance rate was 100%. For CNCOcal, the mean difference was -0.3 L/min (±0.5 L/min; -1.2 to +0.7 L/min) with a percentage error of 19%. In this clinical study in cardiothoracic surgery patients, CNCOcal showed good agreement when compared with PAC-CO. For CNCObio, we observed a higher percentage error and good trending ability (concordance rate 100%).
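Bland-Altman limits of agreement and the percentage error are computed from the paired CO measurements; a minimal sketch with invented paired values follows, assuming the commonly used form of the percentage error (1.96 times the SD of the differences divided by the mean reference CO).

```python
import numpy as np

# Invented paired CO values (L/min): pulmonary artery catheter vs. noninvasive estimate.
pac_co = np.array([4.2, 5.1, 3.8, 6.0, 4.6, 5.4])
cnco   = np.array([4.6, 5.5, 4.1, 6.8, 4.9, 6.0])

diff = cnco - pac_co
bias = diff.mean()
sd = diff.std(ddof=1)
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd
# Percentage error per the commonly used 1.96*SD / mean reference CO convention.
pct_error = 100.0 * 1.96 * sd / pac_co.mean()

print(f"bias={bias:+.2f} L/min, LoA=[{loa_low:+.2f}, {loa_high:+.2f}], PE={pct_error:.0f}%")
```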
Bailey, Stephanie L; Bono, Rose S; Nash, Denis; Kimmel, April D
2018-01-01
Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Standard error-checking techniques may not identify all errors in spreadsheet-based models. Comparing parallel model versions can aid in identifying unintentional errors and promoting reliable model projections, particularly when resources are limited.
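The comparison step described above reduces to checking whether corresponding outputs of the parallel model versions differ by more than the ±5% material-error threshold; a small sketch of that check follows, with placeholder version names, output names, and values rather than the study's figures.

```python
# Placeholder outputs from three parallel model versions (not the study's values).
versions = {
    "named_single_cells": {"in_care": 1040.0, "on_treatment": 812.0},
    "column_row_refs":    {"in_care": 1210.0, "on_treatment": 655.0},
    "named_matrices":     {"in_care": 1040.0, "on_treatment": 812.0},
}

reference = versions["named_matrices"]  # placeholder choice of an error-free reference
THRESHOLD = 5.0                         # +/- percent difference defining a material error

for name, outputs in versions.items():
    for key, value in outputs.items():
        pct_diff = 100.0 * (value - reference[key]) / reference[key]
        flag = "MATERIAL" if abs(pct_diff) > THRESHOLD else "ok"
        print(f"{name:20s} {key:12s} {pct_diff:+6.1f}%  {flag}")
```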
NASA Astrophysics Data System (ADS)
Sharma, Prabhat Kumar
2016-11-01
A framework is presented for the analysis of average symbol error rate (SER) for M-ary quadrature amplitude modulation in a free-space optical communication system. The standard probability density function (PDF)-based approach is extended to evaluate the average SER by representing the Q-function through its Meijer's G-function equivalent. Specifically, a converging power series expression for the average SER is derived considering the zero-boresight misalignment errors in the receiver side. The analysis presented here assumes a unified expression for the PDF of channel coefficient which incorporates the M-distributed atmospheric turbulence and Rayleigh-distributed radial displacement for the misalignment errors. The analytical results are compared with the results obtained using Q-function approximation. Further, the presented results are supported by the Monte Carlo simulations.
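The key step in the PDF-based approach described above is replacing the Gaussian Q-function by its Meijer G-function equivalent so that the averaging integral over the channel PDF can be carried out in closed form. The standard identities commonly used for this purpose are reproduced below as a reference point; the paper's exact power-series result is not reproduced here.

```latex
% Gaussian Q-function via the complementary error function
Q(x) = \tfrac{1}{2}\,\operatorname{erfc}\!\left(\frac{x}{\sqrt{2}}\right),
\qquad
% Meijer G-function representation of erfc, used to average over the channel PDF
\operatorname{erfc}\!\left(\sqrt{z}\right)
  = \frac{1}{\sqrt{\pi}}\,
    G_{1,2}^{\,2,0}\!\left( z \,\middle|\, \begin{matrix} 1 \\ 0,\ \tfrac{1}{2} \end{matrix} \right)
\;\;\Longrightarrow\;\;
Q(x) = \frac{1}{2\sqrt{\pi}}\,
    G_{1,2}^{\,2,0}\!\left( \frac{x^{2}}{2} \,\middle|\, \begin{matrix} 1 \\ 0,\ \tfrac{1}{2} \end{matrix} \right).
```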
42 CFR 486.318 - Condition: Outcome measures.
Code of Federal Regulations, 2011 CFR
2011-10-01
... rate of eligible donors as a percentage of eligible deaths is no more than 1.5 standard deviations below the mean national donation rate of eligible donors as a percentage of eligible deaths, averaged...'s donation rate ratio are adjusted by adding a 1 for each donation after cardiac death donor and...
42 CFR 486.318 - Condition: Outcome measures.
Code of Federal Regulations, 2012 CFR
2012-10-01
... rate of eligible donors as a percentage of eligible deaths is no more than 1.5 standard deviations below the mean national donation rate of eligible donors as a percentage of eligible deaths, averaged...'s donation rate ratio are adjusted by adding a 1 for each donation after cardiac death donor and...
42 CFR 486.318 - Condition: Outcome measures.
Code of Federal Regulations, 2013 CFR
2013-10-01
... rate of eligible donors as a percentage of eligible deaths is no more than 1.5 standard deviations below the mean national donation rate of eligible donors as a percentage of eligible deaths, averaged...'s donation rate ratio are adjusted by adding a 1 for each donation after cardiac death donor and...
42 CFR 486.318 - Condition: Outcome measures.
Code of Federal Regulations, 2010 CFR
2010-10-01
... rate of eligible donors as a percentage of eligible deaths is no more than 1.5 standard deviations below the mean national donation rate of eligible donors as a percentage of eligible deaths, averaged...'s donation rate ratio are adjusted by adding a 1 for each donation after cardiac death donor and...
Franklin, Bryony Dean; O'Grady, Kara; Donyai, Parastou; Jacklin, Ann; Barber, Nick
2007-08-01
To assess the impact of a closed-loop electronic prescribing, automated dispensing, barcode patient identification and electronic medication administration record (EMAR) system on prescribing and administration errors, confirmation of patient identity before administration, and staff time. Before-and-after study in a surgical ward of a teaching hospital, involving patients and staff of that ward. Closed-loop electronic prescribing, automated dispensing, barcode patient identification and EMAR system. Percentage of new medication orders with a prescribing error, percentage of doses with medication administration errors (MAEs) and percentage given without checking patient identity. Time spent prescribing and providing a ward pharmacy service. Nursing time on medication tasks. Prescribing errors were identified in 3.8% of 2450 medication orders pre-intervention and 2.0% of 2353 orders afterwards (p<0.001; chi(2) test). MAEs occurred in 7.0% of 1473 non-intravenous doses pre-intervention and 4.3% of 1139 afterwards (p = 0.005; chi(2) test). Patient identity was not checked for 82.6% of 1344 doses pre-intervention and 18.9% of 1291 afterwards (p<0.001; chi(2) test). Medical staff required 15 s to prescribe a regular inpatient drug pre-intervention and 39 s afterwards (p = 0.03; t test). Time spent providing a ward pharmacy service increased from 68 min to 98 min each weekday (p = 0.001; t test); 22% of drug charts were unavailable pre-intervention. Time per drug administration round decreased from 50 min to 40 min (p = 0.006; t test); nursing time on medication tasks outside of drug rounds increased from 21.1% to 28.7% (p = 0.006; chi(2) test). A closed-loop electronic prescribing, dispensing and barcode patient identification system reduced prescribing errors and MAEs, and increased confirmation of patient identity before administration. Time spent on medication-related tasks increased.
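The before-and-after comparisons above are chi-squared tests on 2x2 tables. The snippet below reconstructs the prescribing-error comparison from the reported percentages and denominators (3.8% of 2450 vs 2.0% of 2353); the rounded counts are derived from those figures, not taken from the paper's raw data.

```python
# Sketch: before/after error-rate comparison as a chi-squared test on a
# 2x2 contingency table, using counts reconstructed from the reported rates.
from scipy.stats import chi2_contingency

pre_errors, pre_total = round(0.038 * 2450), 2450     # ~93 of 2450 orders
post_errors, post_total = round(0.020 * 2353), 2353   # ~47 of 2353 orders

table = [
    [pre_errors, pre_total - pre_errors],
    [post_errors, post_total - post_errors],
]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}")
```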
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-01-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek obtained from the iterative two-stage method also improved predictive performance of the individual models and model averaging in both synthetic and experimental studies.
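The "best model gets ~100% weight" behaviour described above follows directly from how information-criterion values are turned into weights. The sketch below shows the usual exp(-delta/2) weighting rule with hypothetical criterion values (not the study's numbers); large criterion differences drive the weights to a single model.

```python
# Sketch: information-criterion-based model averaging weights via the usual
# exp(-delta/2) rule. The criterion values are illustrative only.
import numpy as np

def ic_weights(ic_values):
    ic = np.asarray(ic_values, dtype=float)
    delta = ic - ic.min()          # differences from the best model
    w = np.exp(-0.5 * delta)
    return w / w.sum()

aic = [1012.4, 1015.1, 1030.8]     # hypothetical values for three models
print(ic_weights(aic))             # the best model dominates when deltas are large
```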
Tao, Qian; Milles, Julien; Zeppenfeld, Katja; Lamb, Hildo J; Bax, Jeroen J; Reiber, Johan H C; van der Geest, Rob J
2010-08-01
Accurate assessment of the size and distribution of a myocardial infarction (MI) from late gadolinium enhancement (LGE) MRI is of significant prognostic value for postinfarction patients. In this paper, an automatic MI identification method combining both intensity and spatial information is presented in a clear framework of (i) initialization, (ii) false acceptance removal, and (iii) false rejection removal. The method was validated on LGE MR images of 20 chronic postinfarction patients, using manually traced MI contours from two independent observers as reference. Good agreement was observed between automatic and manual MI identification. Validation results showed that the average Dice indices, which describe the percentage of overlap between two regions, were 0.83 +/- 0.07 and 0.79 +/- 0.08 between the automatic identification and the manual tracing from observer 1 and observer 2, and the errors in estimated infarct percentage were 0.0 +/- 1.9% and 3.8 +/- 4.7% compared with observer 1 and observer 2. The difference between the automatic method and manual tracing is in the order of interobserver variation. In conclusion, the developed automatic method is accurate and robust in MI delineation, providing an objective tool for quantitative assessment of MI in LGE MR imaging.
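The Dice index reported above is a simple overlap measure between binary masks. The snippet below computes it, together with an infarct-percentage error, for two toy numpy masks; the mask shapes and the "myocardium area" normaliser are made up for illustration.

```python
# Sketch: Dice overlap between an automatic and a manual infarct mask,
# plus the infarct-percentage error, for two binary numpy arrays.
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((64, 64), bool); auto[20:40, 20:42] = True     # toy masks
manual = np.zeros((64, 64), bool); manual[22:40, 20:40] = True

myocardium_px = 64 * 64   # stand-in for the segmented myocardium area
print(f"Dice = {dice(auto, manual):.3f}")
print(f"infarct % error = {(auto.sum() - manual.sum()) / myocardium_px * 100:.2f}")
```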
Models for estimating daily rainfall erosivity in China
NASA Astrophysics Data System (ADS)
Xie, Yun; Yin, Shui-qing; Liu, Bao-yuan; Nearing, Mark A.; Zhao, Ying
2016-04-01
The rainfall erosivity factor (R) represents the multiplication of rainfall energy and maximum 30 min intensity by event (EI30) and year. This rainfall erosivity index is widely used for empirical soil loss prediction. Its calculation, however, requires high temporal resolution rainfall data that are not readily available in many parts of the world. The purpose of this study was to parameterize models suitable for estimating erosivity from daily rainfall data, which are more widely available. One-minute resolution rainfall data recorded in sixteen stations over the eastern water erosion impacted regions of China were analyzed. The R-factor ranged from 781.9 to 8258.5 MJ mm ha-1 h-1 y-1. A total of 5942 erosive events from one-minute resolution rainfall data of ten stations were used to parameterize three models, and 4949 erosive events from the other six stations were used for validation. A threshold of daily rainfall between days classified as erosive and non-erosive was suggested to be 9.7 mm based on these data. Two of the models (I and II) used power law functions that required only daily rainfall totals. Model I used different model coefficients in the cool season (Oct.-Apr.) and warm season (May-Sept.), and Model II was fitted with a sinusoidal curve of seasonal variation. Both Model I and Model II estimated the erosivity index for average annual, yearly, and half-month temporal scales reasonably well, with the symmetric mean absolute percentage error MAPEsym ranging from 10.8% to 32.1%. Model II predicted slightly better than Model I. However, the prediction efficiency for the daily erosivity index was limited, with the symmetric mean absolute percentage error being 68.0% (Model I) and 65.7% (Model II) and Nash-Sutcliffe model efficiency being 0.55 (Model I) and 0.57 (Model II). Model III, which used the combination of daily rainfall amount and daily maximum 60-min rainfall, improved predictions significantly, and produced a Nash-Sutcliffe model efficiency for daily erosivity index prediction of 0.93. Thus daily rainfall data was generally sufficient for estimating annual average, yearly, and half-monthly time scales, while sub-daily data was needed when estimating daily erosivity values.
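To make the model forms and fit statistics above concrete, the sketch below implements a daily power-law erosivity estimate with the 9.7 mm erosive-day threshold, plus the symmetric MAPE and Nash-Sutcliffe efficiency used for evaluation. The alpha/beta coefficients and the sample rainfall/erosivity values are placeholders, not the fitted values from the study.

```python
# Sketch: power-law daily erosivity and the two fit statistics named above.
import numpy as np

def daily_erosivity(p_daily, alpha=0.3, beta=1.7, threshold=9.7):
    """Power-law estimate EI = alpha * P^beta for erosive days (P >= threshold mm)."""
    p = np.asarray(p_daily, float)
    return np.where(p >= threshold, alpha * p ** beta, 0.0)

def smape(obs, est):
    obs, est = np.asarray(obs, float), np.asarray(est, float)
    return 100.0 * np.mean(np.abs(est - obs) / ((np.abs(obs) + np.abs(est)) / 2))

def nash_sutcliffe(obs, est):
    obs, est = np.asarray(obs, float), np.asarray(est, float)
    return 1.0 - np.sum((obs - est) ** 2) / np.sum((obs - obs.mean()) ** 2)

rain = [5.0, 12.0, 30.0, 55.0]          # daily rainfall, mm (invented)
obs_ei = [0.0, 25.0, 120.0, 390.0]      # "observed" daily erosivity (invented)
est_ei = daily_erosivity(rain)
print(est_ei)
print(round(smape(obs_ei[1:], est_ei[1:]), 1), round(nash_sutcliffe(obs_ei, est_ei), 2))
```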
Validation of an automated colony counting system for group A Streptococcus.
Frost, H R; Tsoi, S K; Baker, C A; Laho, D; Sanderson-Smith, M L; Steer, A C; Smeesters, P R
2016-02-08
The practice of counting bacterial colony forming units on agar plates has long been used as a method to estimate the concentration of live bacteria in culture. However, due to the laborious and potentially error-prone nature of this measurement technique, an alternative method is desirable. Recent technologic advancements have facilitated the development of automated colony counting systems, which reduce errors introduced during the manual counting process and recording of information. An additional benefit is the significant reduction in time taken to analyse colony counting data. Whilst automated counting procedures have been validated for a number of microorganisms, the process has not been successful for all bacteria due to the requirement for a relatively high contrast between bacterial colonies and growth medium. The purpose of this study was to validate an automated counting system for use with group A Streptococcus (GAS). Twenty-one different GAS strains, representative of major emm-types, were selected for assessment. In order to introduce the required contrast for automated counting, 2,3,5-triphenyl-2H-tetrazolium chloride (TTC) dye was added to Todd-Hewitt broth with yeast extract (THY) agar. Growth on THY agar with TTC was compared with growth on blood agar and THY agar to ensure the dye was not detrimental to bacterial growth. Automated colony counts using a ProtoCOL 3 instrument were compared with manual counting to confirm accuracy over the stages of the growth cycle (latent, mid-log and stationary phases) and in a number of different assays. The average percentage differences between plating and counting methods were analysed using the Bland-Altman method. A percentage difference of ±10% was determined as the cut-off for a critical difference between plating and counting methods. All strains measured had an average difference of less than 10% when plated on THY agar with TTC. This consistency was also observed over all phases of the growth cycle and when plated in blood following bactericidal assays. Agreement between these methods suggests that the use of an automated colony counting technique for GAS will significantly reduce time spent counting bacteria to enable a more efficient and accurate measurement of bacteria concentration in culture.
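The Bland-Altman comparison used above reduces to computing the mean percentage difference between paired counts and its limits of agreement. A minimal sketch follows; the colony counts are fabricated and the ±10% cut-off is the acceptance criterion quoted in the abstract.

```python
# Sketch: Bland-Altman-style comparison of manual vs automated colony counts.
import numpy as np

manual    = np.array([152, 98, 210, 64, 180], float)
automated = np.array([149, 103, 205, 61, 176], float)

pct_diff = (automated - manual) / ((automated + manual) / 2) * 100
mean_diff = pct_diff.mean()
sd = pct_diff.std(ddof=1)
loa = (mean_diff - 1.96 * sd, mean_diff + 1.96 * sd)   # 95% limits of agreement

print(f"mean % difference = {mean_diff:.1f}, limits of agreement = ({loa[0]:.1f}, {loa[1]:.1f})")
print("within +/-10% criterion:", abs(mean_diff) <= 10)
```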
A Hybrid Model for Predicting the Prevalence of Schistosomiasis in Humans of Qianjiang City, China
Wang, Ying; Lu, Zhouqin; Tian, Lihong; Tan, Li; Shi, Yun; Nie, Shaofa; Liu, Li
2014-01-01
Background/Objective Schistosomiasis is still a major public health problem in China, despite the fact that the government has implemented a series of strategies to prevent and control the spread of the parasitic disease. Advanced warning and reliable forecasting can help policymakers to adjust and implement strategies more effectively, which will lead to the control and elimination of schistosomiasis. Our aim is to explore the application of a hybrid forecasting model to track the trends of the prevalence of schistosomiasis in humans, which provides a methodological basis for predicting and detecting schistosomiasis infection in endemic areas. Methods A hybrid approach combining the autoregressive integrated moving average (ARIMA) model and the nonlinear autoregressive neural network (NARNN) model was used to forecast the prevalence of schistosomiasis over the next four years. Forecasting performance was compared between the hybrid ARIMA-NARNN model, and the single ARIMA or the single NARNN model. Results The modelling mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) of the ARIMA-NARNN model were 0.1869×10⁻⁴, 0.0029 and 0.0419, with corresponding testing errors of 0.9375×10⁻⁴, 0.0081 and 0.9064, respectively. These error values generated with the hybrid model were all lower than those obtained from the single ARIMA or NARNN model. The forecast values were 0.75%, 0.80%, 0.76% and 0.77% over the next four years, indicating no downward trend. Conclusion The hybrid model has high-quality prediction accuracy for the prevalence of schistosomiasis, which provides a methodological basis for future schistosomiasis monitoring and control strategies in the study area. It is worth applying the hybrid scheme in other schistosomiasis-endemic areas and to other infectious diseases. PMID:25119882
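A hybrid of the kind described above is commonly built by fitting a linear ARIMA model and then training a small neural network on its lagged residuals. The sketch below does exactly that with statsmodels and scikit-learn on a simulated prevalence series; the series, lag count and network size are illustrative assumptions, and an MLP regressor stands in for the NARNN.

```python
# Sketch of an ARIMA + neural-network hybrid in the spirit of ARIMA-NARNN:
# ARIMA captures the linear structure, a small neural network models the
# nonlinear remainder from lagged residuals. All settings are illustrative.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
y = 1.2 + 0.3 * np.sin(np.arange(60) / 3.0) + rng.normal(0, 0.05, 60)  # toy prevalence series

arima_res = ARIMA(y, order=(1, 1, 1)).fit()
resid = arima_res.resid

lags = 3
X = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
t = resid[lags:]
nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, t)

steps = 4
linear_fc = arima_res.forecast(steps=steps)
history = list(resid[-lags:])
resid_fc = []
for _ in range(steps):                       # iterated one-step residual forecasts
    nxt = nn.predict(np.array(history[-lags:]).reshape(1, -1))[0]
    resid_fc.append(nxt)
    history.append(nxt)

hybrid_fc = linear_fc + np.array(resid_fc)
print(hybrid_fc)
```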
Exploiting Domain Knowledge to Forecast Heating Oil Consumption
NASA Astrophysics Data System (ADS)
Corliss, George F.; Sakauchi, Tsuginosuke; Vitullo, Steven R.; Brown, Ronald H.
2011-11-01
The GasDay laboratory at Marquette University provides forecasts of energy consumption. One such service is the Heating Oil Forecaster, a service for a heating oil or propane delivery company. Accurate forecasts can help reduce the number of trucks and drivers while providing efficient inventory management by stretching the time between deliveries. Accurate forecasts help retain valuable customers. If a customer runs out of fuel, the delivery service incurs costs for an emergency delivery and often a service call. Further, the customer probably changes providers. The basic modeling is simple: Fit delivery amounts sₖ to cumulative Heating Degree Days (HDDₖ = Σ max(0, 60 °F − daily average temperature)), with wind adjustment, for each delivery period: sₖ ≈ ŝₖ = β₀ + β₁HDDₖ. For the first few deliveries, there is not enough data to provide a reliable estimate of K = 1/β₁, so we use Bayesian techniques with priors constructed from historical data. A fresh model is trained for each customer with each delivery, producing daily consumption forecasts using actual and forecast weather until the next delivery. In practice, a delivery may not fill the oil tank if the delivery truck runs out of oil or the automatic shut-off activates prematurely. Special outlier detection and recovery based on domain knowledge addresses this and other special cases. The error at each delivery is the difference between that delivery and the aggregate of daily forecasts using actual weather since the preceding delivery. Out-of-sample testing yields MAPE = 21.2% and an average error of 6.0% of tank capacity for Company A. The MAPE and average error as a percentage of tank capacity for Company B are 31.5% and 6.6%, respectively. One heating oil delivery company that uses this forecasting service [1] reported that instances of a customer running out of oil fell from about 250 per 50,000 deliveries per year before contracting for our service to about 10 with our service. They delivered slightly more oil with 20% fewer trucks and drivers, citing 250,000 in annual operational cost savings.
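The core regression above is a one-predictor least-squares fit of delivery volume against cumulative heating degree days. A minimal sketch follows; the temperatures and delivery volumes are invented, the 60 °F base follows the abstract, and the wind adjustment and Bayesian priors for early deliveries are omitted.

```python
# Sketch: fit delivery amounts against cumulative heating degree days (HDD).
import numpy as np

BASE_TEMP_F = 60.0

def heating_degree_days(daily_avg_temps_f):
    t = np.asarray(daily_avg_temps_f, float)
    return np.maximum(0.0, BASE_TEMP_F - t).sum()

# One entry per delivery period: (daily average temps, gallons delivered) - invented
periods = [
    ([35, 40, 38, 30, 28, 33] * 5, 180.0),
    ([25, 22, 30, 27, 24, 26] * 5, 245.0),
    ([45, 50, 48, 44, 46, 47] * 5, 115.0),
]
hdd = np.array([heating_degree_days(t) for t, _ in periods])
delivered = np.array([s for _, s in periods])

beta1, beta0 = np.polyfit(hdd, delivered, 1)   # s_k ~= beta0 + beta1 * HDD_k
k_factor = 1.0 / beta1                         # degree days per gallon
print(f"beta0 = {beta0:.1f} gal, beta1 = {beta1:.3f} gal/HDD, K = {k_factor:.1f}")
```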
NASA Astrophysics Data System (ADS)
Wu, Wei; Xu, An-Ding; Liu, Hong-Bin
2015-01-01
Climate data in gridded format are critical for understanding climate change and its impact on the eco-environment. The aim of the current study is to develop spatial databases for three climate variables (maximum and minimum temperatures, and relative humidity) over a large region with complex topography in southwestern China. Five widely used approaches including inverse distance weighting, ordinary kriging, universal kriging, co-kriging, and thin-plate smoothing spline were tested. Root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE) showed that thin-plate smoothing spline with latitude, longitude, and elevation outperformed other models. Average RMSE, MAE, and MAPE of the best models were 1.16 °C, 0.74 °C, and 7.38% for maximum temperature; 0.826 °C, 0.58 °C, and 6.41% for minimum temperature; and 3.44, 2.28, and 3.21% for relative humidity, respectively. Spatial datasets of annual and monthly climate variables with 1-km resolution covering the period 1961-2010 were then obtained using the best-performing methods. A comparative study showed that the current outcomes were in good agreement with public datasets. Based on the gridded datasets, changes in temperature variables were investigated across the study area. Future studies might be needed to capture the uncertainty induced by environmental conditions through remote sensing and knowledge-based methods.
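Of the interpolators compared above, inverse distance weighting is the simplest to express in code. The sketch below interpolates a scattered station variable to one target point; the station coordinates and values are made up, and elevation (a covariate in the study's best-performing spline) is ignored here.

```python
# Sketch: inverse distance weighting of scattered station values.
import numpy as np

def idw(xy_stations, values, xy_target, power=2.0, eps=1e-12):
    d = np.linalg.norm(np.asarray(xy_stations, float) - np.asarray(xy_target, float), axis=1)
    if np.any(d < eps):                       # target coincides with a station
        return float(values[int(np.argmin(d))])
    w = 1.0 / d ** power
    return float(np.dot(w, values) / w.sum())

stations = [(106.5, 29.6), (107.1, 30.2), (105.9, 28.9), (106.9, 29.1)]   # lon, lat (invented)
tmax = np.array([33.1, 31.8, 34.0, 32.6])                                  # deg C at the stations
print(idw(stations, tmax, (106.6, 29.4)))
```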
Wu, Wei; Guo, Junqiao; An, Shuyi; Guan, Peng; Ren, Yangwu; Xia, Linzi; Zhou, Baosen
2015-01-01
Cases of hemorrhagic fever with renal syndrome (HFRS) are widely distributed in eastern Asia, especially in China, Russia, and Korea. It has proved difficult to eliminate HFRS completely because of the diverse animal reservoirs and the effects of global warming. Reliable forecasting is useful for the prevention and control of HFRS. Two hybrid models, one combining a nonlinear autoregressive neural network (NARNN) with an autoregressive integrated moving average (ARIMA) model and the other combining a generalized regression neural network (GRNN) with ARIMA, were constructed to predict the incidence of HFRS over the following year. Performances of the two hybrid models were compared with the ARIMA model. The ARIMA, ARIMA-NARNN and ARIMA-GRNN models fitted and predicted the seasonal fluctuation well. Among the three models, the mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) of the ARIMA-NARNN hybrid model were the lowest in both the modeling stage and the forecasting stage. As for the ARIMA-GRNN hybrid model, the MSE, MAE and MAPE of modeling performance and the MSE and MAE of forecasting performance were lower than those of the ARIMA model, but the MAPE of forecasting performance did not improve. Developing and applying the ARIMA-NARNN hybrid model is an effective way to better understand the epidemic characteristics of HFRS and could be helpful for its prevention and control.
Stone, Will J R; Campo, Joseph J; Ouédraogo, André Lin; Meerstein-Kessel, Lisette; Morlais, Isabelle; Da, Dari; Cohuet, Anna; Nsango, Sandrine; Sutherland, Colin J; van de Vegte-Bolmer, Marga; Siebelink-Stoter, Rianne; van Gemert, Geert-Jan; Graumans, Wouter; Lanke, Kjerstin; Shandling, Adam D; Pablo, Jozelyn V; Teng, Andy A; Jones, Sophie; de Jong, Roos M; Fabra-García, Amanda; Bradley, John; Roeffen, Will; Lasonder, Edwin; Gremo, Giuliana; Schwarzer, Evelin; Janse, Chris J; Singh, Susheel K; Theisen, Michael; Felgner, Phil; Marti, Matthias; Drakeley, Chris; Sauerwein, Robert; Bousema, Teun; Jore, Matthijs M
2018-04-11
The original version of this Article contained errors in Fig. 3. In panel a, bars from a chart depicting the percentage of antibody-positive individuals in non-infectious and infectious groups were inadvertently included in place of bars depicting the percentage of infectious individuals, as described in the Article and figure legend. However, the p values reported in the Figure and the resulting conclusions were based on the correct dataset. The corrected Fig. 3a now shows the percentage of infectious individuals in antibody-negative and -positive groups, in both the PDF and HTML versions of the Article. The incorrect and correct versions of Figure 3a are also presented for comparison in the accompanying Publisher Correction as Figure 1. The HTML version of the Article also omitted a link to Supplementary Data 6. The error has now been fixed, and Supplementary Data 6 is available to download.
Work-related accidents among the Iranian population: a time series analysis, 2000–2011
Karimlou, Masoud; Imani, Mehdi; Hosseini, Agha-Fatemeh; Dehnad, Afsaneh; Vahabi, Nasim; Bakhtiyari, Mahmood
2015-01-01
Background Work-related accidents result in human suffering and economic losses and are considered a major health problem worldwide, especially in the economically developing world. Objectives To introduce seasonal autoregressive integrated moving average (ARIMA) models for time series analysis of work-related accident data for workers insured by the Iranian Social Security Organization (ISSO) between 2000 and 2011. Methods In this retrospective study, all insured people experiencing at least one work-related accident during a 10-year period were included in the analyses. We used Box–Jenkins modeling to develop a time series model of the total number of accidents. Results There was an average of 1476 accidents per month (1476·05±458·77, mean±SD). The final ARIMA(p,d,q)(P,D,Q)ₛ model fitted to the data was ARIMA(1,1,1)×(0,1,1)₁₂, consisting of first-order autoregressive, moving average and seasonal moving average parameters, with a mean absolute percentage error (MAPE) of 20·942. Conclusions The final model showed that time series analysis with ARIMA models was useful for forecasting the number of work-related accidents in Iran. In addition, the forecasted number of work-related accidents for 2011 explained the stability of occurrence of these accidents in recent years, indicating a need for preventive occupational health and safety policies such as safety inspection. PMID:26119774
Work-related accidents among the Iranian population: a time series analysis, 2000-2011.
Karimlou, Masoud; Salehi, Masoud; Imani, Mehdi; Hosseini, Agha-Fatemeh; Dehnad, Afsaneh; Vahabi, Nasim; Bakhtiyari, Mahmood
2015-01-01
Work-related accidents result in human suffering and economic losses and are considered a major health problem worldwide, especially in the economically developing world. To introduce seasonal autoregressive integrated moving average (ARIMA) models for time series analysis of work-related accident data for workers insured by the Iranian Social Security Organization (ISSO) between 2000 and 2011. In this retrospective study, all insured people experiencing at least one work-related accident during a 10-year period were included in the analyses. We used Box-Jenkins modeling to develop a time series model of the total number of accidents. There was an average of 1476 accidents per month (1476·05±458·77, mean±SD). The final ARIMA(p,d,q)(P,D,Q)ₛ model fitted to the data was ARIMA(1,1,1)×(0,1,1)₁₂, consisting of first-order autoregressive, moving average and seasonal moving average parameters, with a mean absolute percentage error (MAPE) of 20·942. The final model showed that time series analysis with ARIMA models was useful for forecasting the number of work-related accidents in Iran. In addition, the forecasted number of work-related accidents for 2011 explained the stability of occurrence of these accidents in recent years, indicating a need for preventive occupational health and safety policies such as safety inspection.
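The seasonal ARIMA form reported above maps directly onto statsmodels' SARIMAX interface. The sketch below fits an ARIMA(1,1,1)×(0,1,1)₁₂ model to a simulated monthly accident series and scores it with MAPE; the series is synthetic, not the ISSO data.

```python
# Sketch: fit the ARIMA(1,1,1)x(0,1,1)12 form with statsmodels and compute MAPE.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)
months = np.arange(120)
y = 1476 + 300 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 150, 120)   # toy monthly counts

res = SARIMAX(y, order=(1, 1, 1), seasonal_order=(0, 1, 1, 12)).fit(disp=False)
fitted = res.predict(start=13, end=len(y) - 1)          # skip the start-up period
mape = np.mean(np.abs((y[13:] - fitted) / y[13:])) * 100
print(f"MAPE = {mape:.2f}%")
```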
The Zero Product Principle Error.
ERIC Educational Resources Information Center
Padula, Janice
1996-01-01
Argues that the challenge for teachers of algebra in Australia is to find ways of making the structural aspects of algebra accessible to a greater percentage of students. Uses the zero product principle to provide an example of a common student error grounded in the difficulty of understanding the structure of algebra. (DDR)
Algebra Students' Difficulty with Fractions: An Error Analysis
ERIC Educational Resources Information Center
Brown, George; Quinn, Robert J.
2006-01-01
An analysis of the 1990 National Assessment of Educational Progress (NAEP) found that only 46 percent of all high school seniors demonstrated success with a grasp of decimals, percentages, fractions and simple algebra. This article investigates error patterns that emerge as students attempt to answer questions involving the ability to apply…
Khozani, Zohreh Sheikh; Bonakdari, Hossein; Zaji, Amir Hossein
2016-01-01
Two new soft computing models, namely genetic programming (GP) and a genetic artificial algorithm (GAA) neural network (a combination of a modified genetic algorithm and artificial neural network methods), were developed in order to predict the percentage of shear force in a rectangular channel with non-homogeneous roughness. The ability of these methods to estimate the percentage of shear force was investigated. Moreover, the independent parameters' effectiveness in predicting the percentage of shear force was determined using sensitivity analysis. According to the results, the GP model demonstrated superior performance to the GAA model. A comparison was also made between the GP program determined as the best model and five equations obtained in prior research. The GP model, which had the lowest error values (root mean square error (RMSE) of 0.0515), performed best compared with the other equations presented for rough and smooth channels as well as smooth ducts. The equation proposed for rectangular channels with rough boundaries (RMSE of 0.0642) outperformed the prior equations for smooth boundaries.
42 CFR 413.157 - Return on equity capital of proprietary providers.
Code of Federal Regulations, 2012 CFR
2012-10-01
... RENAL DISEASE SERVICES; OPTIONAL PROSPECTIVELY DETERMINED PAYMENT RATES FOR SKILLED NURSING FACILITIES... percentage equal to one and one-half times the average of the rates of interest on special issues of public... inpatient hospital services is a percentage of the average of the rates of interest described in paragraph...
42 CFR 413.157 - Return on equity capital of proprietary providers.
Code of Federal Regulations, 2011 CFR
2011-10-01
... RENAL DISEASE SERVICES; OPTIONAL PROSPECTIVELY DETERMINED PAYMENT RATES FOR SKILLED NURSING FACILITIES... percentage equal to one and one-half times the average of the rates of interest on special issues of public... inpatient hospital services is a percentage of the average of the rates of interest described in paragraph...
42 CFR 413.157 - Return on equity capital of proprietary providers.
Code of Federal Regulations, 2013 CFR
2013-10-01
... RENAL DISEASE SERVICES; OPTIONAL PROSPECTIVELY DETERMINED PAYMENT RATES FOR SKILLED NURSING FACILITIES... percentage equal to one and one-half times the average of the rates of interest on special issues of public... inpatient hospital services is a percentage of the average of the rates of interest described in paragraph...
42 CFR 486.318 - Condition: Outcome measures.
Code of Federal Regulations, 2014 CFR
2014-10-01
... donation rate of eligible donors as a percentage of eligible deaths is no more than 1.5 standard deviations below the mean national donation rate of eligible donors as a percentage of eligible deaths, averaged...'s donation rate ratio are adjusted by adding a 1 for each donation after cardiac death donor and...
2017-01-01
Background Laboratory testing is roughly divided into three phases: a pre-analytical phase, an analytical phase and a post-analytical phase. Most analytical errors have been attributed to the analytical phase. However, recent studies have shown that up to 70% of analytical errors reflect the pre-analytical phase. The pre-analytical phase comprises all processes from the time a laboratory request is made by a physician until the specimen is analyzed at the lab. Generally, the pre-analytical phase includes patient preparation, specimen transportation, specimen collection and storage. In the present study, we report the first comprehensive assessment of the frequency and types of pre-analytical errors at the Sulaimani diagnostic labs in Iraqi Kurdistan. Materials and Methods Over 2 months, 5500 venous blood samples were observed in 10 public diagnostic labs of Sulaimani City. The percentages of rejected samples and types of sample inappropriateness were evaluated. The percentage of each of the following pre-analytical errors was recorded: delay in sample transportation, clotted samples, expired reagents, hemolyzed samples, samples not on ice, incorrect sample identification, insufficient sample, tube broken in centrifuge, request procedure errors, sample mix-ups, communication conflicts, misinterpreted orders, lipemic samples, contaminated samples and missed physician’s request orders. The difference between the relative frequencies of errors observed in the hospitals considered was tested using a proportional Z test. In particular, the survey aimed to discover whether analytical errors were recorded and examine the types of platforms used in the selected diagnostic labs. Results The analysis showed a high prevalence of improper sample handling during the pre-analytical phase. The percentage of inappropriate samples was as high as 39%. The major reasons for rejection were hemolyzed samples (9%), incorrect sample identification (8%) and clotted samples (6%). Most quality control schemes at Sulaimani hospitals focus only on the analytical phase, and none of the pre-analytical errors were recorded. Interestingly, none of the labs were internationally accredited; therefore, corrective actions are needed at these hospitals to ensure better health outcomes. Internal and External Quality Assessment Schemes (EQAS) for the pre-analytical phase at Sulaimani clinical laboratories should be implemented at public hospitals. Furthermore, lab personnel, particularly phlebotomists, need continuous training on the importance of sample quality to obtain accurate test results. PMID:28107395
Najat, Dereen
2017-01-01
Laboratory testing is roughly divided into three phases: a pre-analytical phase, an analytical phase and a post-analytical phase. Most analytical errors have been attributed to the analytical phase. However, recent studies have shown that up to 70% of analytical errors reflect the pre-analytical phase. The pre-analytical phase comprises all processes from the time a laboratory request is made by a physician until the specimen is analyzed at the lab. Generally, the pre-analytical phase includes patient preparation, specimen transportation, specimen collection and storage. In the present study, we report the first comprehensive assessment of the frequency and types of pre-analytical errors at the Sulaimani diagnostic labs in Iraqi Kurdistan. Over 2 months, 5500 venous blood samples were observed in 10 public diagnostic labs of Sulaimani City. The percentages of rejected samples and types of sample inappropriateness were evaluated. The percentage of each of the following pre-analytical errors was recorded: delay in sample transportation, clotted samples, expired reagents, hemolyzed samples, samples not on ice, incorrect sample identification, insufficient sample, tube broken in centrifuge, request procedure errors, sample mix-ups, communication conflicts, misinterpreted orders, lipemic samples, contaminated samples and missed physician's request orders. The difference between the relative frequencies of errors observed in the hospitals considered was tested using a proportional Z test. In particular, the survey aimed to discover whether analytical errors were recorded and examine the types of platforms used in the selected diagnostic labs. The analysis showed a high prevalence of improper sample handling during the pre-analytical phase. The percentage of inappropriate samples was as high as 39%. The major reasons for rejection were hemolyzed samples (9%), incorrect sample identification (8%) and clotted samples (6%). Most quality control schemes at Sulaimani hospitals focus only on the analytical phase, and none of the pre-analytical errors were recorded. Interestingly, none of the labs were internationally accredited; therefore, corrective actions are needed at these hospitals to ensure better health outcomes. Internal and External Quality Assessment Schemes (EQAS) for the pre-analytical phase at Sulaimani clinical laboratories should be implemented at public hospitals. Furthermore, lab personnel, particularly phlebotomists, need continuous training on the importance of sample quality to obtain accurate test results.
The random coding bound is tight for the average code.
NASA Technical Reports Server (NTRS)
Gallager, R. G.
1973-01-01
The random coding bound of information theory provides a well-known upper bound to the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upperbounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upperbounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.
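As a small numerical companion to the discussion above, the sketch below evaluates Gallager's random coding exponent E_r(R) = max over rho in [0,1] of [E_0(rho) - rho*R] for a binary symmetric channel with equiprobable inputs; the ensemble-average error probability then satisfies P_e <= exp(-N*E_r(R)). The crossover probability and rate fractions are arbitrary choices for illustration.

```python
# Sketch: random coding exponent for a binary symmetric channel (natural log units).
import numpy as np

def e0_bsc(rho, p):
    # Gallager's E_0 for a BSC with equiprobable inputs
    s = p ** (1.0 / (1.0 + rho)) + (1.0 - p) ** (1.0 / (1.0 + rho))
    return rho * np.log(2.0) - (1.0 + rho) * np.log(s)

def random_coding_exponent(rate_nats, p, grid=1001):
    rho = np.linspace(0.0, 1.0, grid)
    return float(np.max(e0_bsc(rho, p) - rho * rate_nats))

p = 0.05
capacity = np.log(2) + p * np.log(p) + (1 - p) * np.log(1 - p)   # nats per channel use
for frac in (0.5, 0.8, 0.95):
    R = frac * capacity
    print(f"R = {frac:.2f} C: E_r = {random_coding_exponent(R, p):.4f} nats")
```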
Estimation of open water evaporation using land-based meteorological data
NASA Astrophysics Data System (ADS)
Li, Fawen; Zhao, Yong
2017-10-01
Water surface evaporation is an important process in the hydrologic and energy cycles. Accurate simulation of water evaporation is important for the evaluation of water resources. In this paper, using meteorological data from the Aixinzhuang reservoir, the main factors affecting water surface evaporation were determined by the principal component analysis method. To illustrate the influence of these factors on water surface evaporation, the paper first adopted the Dalton model to simulate water surface evaporation. The results showed that the simulation precision was poor for the peak value zone. To improve the model simulation's precision, a modified Dalton model considering relative humidity was proposed. The results show that the 10-day average relative error is 17.2%, assessed as qualified; the monthly average relative error is 12.5%, assessed as qualified; and the yearly average relative error is 3.4%, assessed as excellent. To validate its applicability, the meteorological data of Kuancheng station in the Luan River basin were selected to test the modified model. The results show that the 10-day average relative error is 15.4%, assessed as qualified; the monthly average relative error is 13.3%, assessed as qualified; and the yearly average relative error is 6.0%, assessed as good. These results showed that the modified model had good applicability and versatility. The research results can provide technical support for the calculation of water surface evaporation in northern China or similar regions.
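To make the Dalton-type structure concrete, the sketch below computes a bulk evaporation estimate E = f(u)·(e_s − e_a) with a Magnus saturation-vapour-pressure formula and a simple relative-humidity adjustment. The wind-function coefficients and the humidity adjustment are placeholders, not the calibrated values from the modified model in the paper.

```python
# Sketch: Dalton-type bulk evaporation with a crude relative-humidity adjustment.
import math

def sat_vapour_pressure_hpa(t_c):
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))   # Magnus formula, hPa

def dalton_evaporation(t_water_c, t_air_c, rh_pct, wind_ms, a=0.13, b=0.094, c=0.15):
    e_s = sat_vapour_pressure_hpa(t_water_c)                 # at the water surface
    e_a = rh_pct / 100.0 * sat_vapour_pressure_hpa(t_air_c)  # actual vapour pressure
    wind_fn = a + b * wind_ms                                # placeholder wind function
    humidity_adj = 1.0 + c * (1.0 - rh_pct / 100.0)          # placeholder RH modification
    return wind_fn * humidity_adj * (e_s - e_a)              # nominal mm/day

print(round(dalton_evaporation(22.0, 25.0, 55.0, 2.5), 2))
```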
A comparative effectiveness analysis of three continuous glucose monitors.
Damiano, Edward R; El-Khatib, Firas H; Zheng, Hui; Nathan, David M; Russell, Steven J
2013-02-01
To compare three continuous glucose monitoring (CGM) devices in subjects with type 1 diabetes under closed-loop blood glucose (BG) control. Six subjects with type 1 diabetes (age 52 ± 14 years, diabetes duration 32 ± 14 years) each participated in two 51-h closed-loop BG control experiments in the hospital. Venous plasma glucose (PG) measurements (GlucoScout, International Biomedical) obtained every 15 min (2,360 values) were paired in time with corresponding CGM glucose (CGMG) measurements obtained from three CGM devices, the Navigator (Abbott Diabetes Care), the Seven Plus (DexCom), and the Guardian (Medtronic), worn simultaneously by each subject. Errors in paired PG-CGMG measurements and data reporting percentages were obtained for each CGM device. The Navigator had the best overall accuracy, with an aggregate mean absolute relative difference (MARD) of all paired points of 11.8 ± 11.1% and an average MARD across all 12 experiments of 11.8 ± 3.8%. The Seven Plus and Guardian produced aggregate MARDs of all paired points of 16.5 ± 17.8% and 20.3 ± 18.0%, respectively, and average MARDs across all 12 experiments of 16.5 ± 6.7% and 20.2 ± 6.8%, respectively. Data reporting percentages, a measure of reliability, were 76% for the Seven Plus and nearly 100% for the Navigator and Guardian. A comprehensive head-to-head-to-head comparison of three CGM devices for BG values from 36 to 563 mg/dL revealed marked differences in performance characteristics that include accuracy, precision, and reliability. The Navigator outperformed the other two in these areas.
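The headline accuracy metric above, MARD, is just the mean absolute relative difference over paired CGM and reference readings. A minimal sketch follows; the glucose pairs are fabricated.

```python
# Sketch: mean absolute relative difference (MARD) between paired CGM and
# reference plasma glucose values.
import numpy as np

reference = np.array([110, 95, 150, 210, 70, 130], float)   # mg/dL (invented)
cgm       = np.array([118, 90, 138, 228, 78, 127], float)

mard = np.mean(np.abs(cgm - reference) / reference) * 100
print(f"MARD = {mard:.1f}%")
```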
Aguirre-Urreta, Miguel I; Ellis, Michael E; Sun, Wenying
2012-03-01
This research investigates the performance of a proportion-based approach to meta-analytic moderator estimation through a series of Monte Carlo simulations. This approach is most useful when the moderating potential of a categorical variable has not been recognized in primary research and thus heterogeneous groups have been pooled together as a single sample. Alternative scenarios representing different distributions of group proportions are examined along with varying numbers of studies, subjects per study, and correlation combinations. Our results suggest that the approach is largely unbiased in its estimation of the magnitude of between-group differences and performs well with regard to statistical power and type I error. In particular, the average percentage bias of the estimated correlation for the reference group is positive and largely negligible, in the 0.5% to 1.8% range; the average percentage bias of the difference between correlations is also minimal, in the −0.1% to 1.2% range. Further analysis also suggests both biases decrease as the magnitude of the underlying difference increases, as the number of subjects in each simulated primary study increases, and as the number of simulated studies in each meta-analysis increases. The bias was most evident when the number of subjects and the number of studies were the smallest (80 and 36, respectively). A sensitivity analysis that examines its performance in scenarios down to 12 studies and 40 primary subjects is also included. This research is the first that thoroughly examines the adequacy of the proportion-based approach. Copyright © 2012 John Wiley & Sons, Ltd.
Vehicular traffic noise prediction using soft computing approach.
Singh, Daljeet; Nigam, S P; Agrawal, V P; Kumar, Maneek
2016-12-01
A new approach for the development of vehicular traffic noise prediction models is presented. Four different soft computing methods, namely, Generalized Linear Model, Decision Trees, Random Forests and Neural Networks, have been used to develop models to predict the hourly equivalent continuous sound pressure level, Leq, at different locations in the Patiala city in India. The input variables include the traffic volume per hour, percentage of heavy vehicles and average speed of vehicles. The performance of the four models is compared on the basis of performance criteria of coefficient of determination, mean square error and accuracy. 10-fold cross validation is done to check the stability of the Random Forest model, which gave the best results. A t-test is performed to check the fit of the model with the field data. Copyright © 2016 Elsevier Ltd. All rights reserved.
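The best-performing model above, a random forest scored by 10-fold cross-validation, is straightforward to reproduce in outline. The sketch below trains on synthetic stand-ins for the three named predictors (traffic volume, percentage of heavy vehicles, average speed); the data-generating formula and model settings are illustrative only.

```python
# Sketch: random-forest prediction of hourly Leq with 10-fold cross-validation.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 200
volume = rng.uniform(200, 3000, n)          # vehicles/hour (synthetic)
heavy_pct = rng.uniform(2, 25, n)           # % heavy vehicles
speed = rng.uniform(20, 70, n)              # km/h
leq = 55 + 8 * np.log10(volume) + 0.12 * heavy_pct + 0.05 * speed + rng.normal(0, 1.2, n)

X = np.column_stack([volume, heavy_pct, speed])
model = RandomForestRegressor(n_estimators=200, random_state=0)
r2 = cross_val_score(model, X, leq, cv=10, scoring="r2")
print(f"10-fold R^2: {r2.mean():.3f} +/- {r2.std():.3f}")
```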
NASA Astrophysics Data System (ADS)
Singh, Navneet K.; Singh, Asheesh K.; Tripathy, Manoj
2012-05-01
For power industries, electricity load forecasting plays an important role in real-time control, security, optimal unit commitment, economic scheduling, maintenance, energy management, and plant structure planning.
Trends in Self-Reported Sleep Duration among US Adults from 1985 to 2012.
Ford, Earl S; Cunningham, Timothy J; Croft, Janet B
2015-05-01
The trend in sleep duration in the United States population remains uncertain. Our objective was to examine changes in sleep duration from 1985 to 2012 among US adults. Trend analysis. Civilian noninstitutional population of the United States. 324,242 US adults aged ≥ 18 y of the National Health Interview Survey (1985, 1990, and 2004-2012). Sleep duration was defined on the basis of the question "On average, how many hours of sleep do you get in a 24-h period?" The age-adjusted mean sleep duration was 7.40 h (standard error [SE] 0.01) in 1985, 7.29 h (SE 0.01) in 1990, 7.18 h (SE 0.01) in 2004, and 7.18 h (SE 0.01) in 2012 (P 2012 versus 1985 < 0.001; P trend 2004-2012 = 0.982). The age-adjusted percentage of adults sleeping ≤ 6 h was 22.3% (SE 0.3) in 1985, 24.4% (SE 0.3) in 1990, 28.6% (SE 0.3) in 2004, and 29.2% (SE 0.3) in 2012 (P 2012 versus 1985 < 0.001; P trend 2004-2012 = 0.050). In 2012, approximately 70.1 million US adults reported sleeping ≤ 6 h. Since 1985, age-adjusted mean sleep duration has decreased slightly and the percentage of adults sleeping ≤ 6 h increased by 31%. Since 2004, however, mean sleep duration and the percentage of adults sleeping ≤ 6 h have changed little. © 2015 Associated Professional Sleep Societies, LLC.
A New Black Carbon Sensor for Dense Air Quality Monitoring Networks
Caubel, Julien J.; Cados, Troy E.; Kirchstetter, Thomas W.
2018-01-01
Low-cost air pollution sensors are emerging and increasingly being deployed in densely distributed wireless networks that provide more spatial resolution than is typical in traditional monitoring of ambient air quality. However, a low-cost option to measure black carbon (BC)—a major component of particulate matter pollution associated with adverse human health risks—is missing. This paper presents a new BC sensor designed to fill this gap, the Aerosol Black Carbon Detector (ABCD), which incorporates a compact weatherproof enclosure, solar-powered rechargeable battery, and cellular communication to enable long-term, remote operation. This paper also demonstrates a data processing methodology that reduces the ABCD’s sensitivity to ambient temperature fluctuations, and therefore improves measurement performance in unconditioned operating environments (e.g., outdoors). A fleet of over 100 ABCDs was operated outdoors in collocation with a commercial BC instrument (Magee Scientific, Model AE33) housed inside a regulatory air quality monitoring station. The measurement performance of the 105 ABCDs is comparable to the AE33. The fleet-average precision and accuracy, expressed in terms of mean absolute percentage error, are 9.2 ± 0.8% (relative to the fleet average data) and 24.6 ± 0.9% (relative to the AE33 data), respectively (fleet-average ± 90% confidence interval). PMID:29494528
A New Black Carbon Sensor for Dense Air Quality Monitoring Networks.
Caubel, Julien J; Cados, Troy E; Kirchstetter, Thomas W
2018-03-01
Low-cost air pollution sensors are emerging and increasingly being deployed in densely distributed wireless networks that provide more spatial resolution than is typical in traditional monitoring of ambient air quality. However, a low-cost option to measure black carbon (BC)-a major component of particulate matter pollution associated with adverse human health risks-is missing. This paper presents a new BC sensor designed to fill this gap, the Aerosol Black Carbon Detector (ABCD), which incorporates a compact weatherproof enclosure, solar-powered rechargeable battery, and cellular communication to enable long-term, remote operation. This paper also demonstrates a data processing methodology that reduces the ABCD's sensitivity to ambient temperature fluctuations, and therefore improves measurement performance in unconditioned operating environments (e.g., outdoors). A fleet of over 100 ABCDs was operated outdoors in collocation with a commercial BC instrument (Magee Scientific, Model AE33) housed inside a regulatory air quality monitoring station. The measurement performance of the 105 ABCDs is comparable to the AE33. The fleet-average precision and accuracy, expressed in terms of mean absolute percentage error, are 9.2 ± 0.8% (relative to the fleet average data) and 24.6 ± 0.9% (relative to the AE33 data), respectively (fleet-average ± 90% confidence interval).
NASA Astrophysics Data System (ADS)
Wang, Wen-Chuan; Chau, Kwok-Wing; Cheng, Chun-Tian; Qiu, Lin
2009-08-01
Summary Developing a hydrological forecasting model based on past records is crucial to effective hydropower reservoir management and scheduling. Traditionally, time series analysis and modeling are used for building mathematical models to generate hydrologic records in hydrology and water resources. Artificial intelligence (AI), as a branch of computer science, is capable of analyzing long-series and large-scale hydrological data. In recent years, applying AI technology to hydrological forecasting modeling has become a leading research topic. In this paper, autoregressive moving-average (ARMA) models, artificial neural network (ANN) approaches, adaptive neural-based fuzzy inference system (ANFIS) techniques, genetic programming (GP) models and the support vector machine (SVM) method are examined using long-term observations of monthly river flow discharges. Four quantitative standard statistical performance evaluation measures, the coefficient of correlation (R), Nash-Sutcliffe efficiency coefficient (E), root mean squared error (RMSE) and mean absolute percentage error (MAPE), are employed to evaluate the performances of the various models developed. Two case study river sites are also provided to illustrate their respective performances. The results indicate that the best performance can be obtained by ANFIS, GP and SVM, in terms of different evaluation criteria during the training and validation phases.
Combining forecast weights: Why and how?
NASA Astrophysics Data System (ADS)
Yin, Yip Chee; Kok-Haur, Ng; Hock-Eam, Lim
2012-09-01
This paper proposes a procedure called forecast weight averaging, a specific combination of forecast weights obtained from different methods of constructing forecast weights, for the purpose of improving the accuracy of pseudo out-of-sample forecasting. It is found that under certain specified conditions, forecast weight averaging can lower the mean squared forecast error obtained from model averaging. In addition, we show that in a linear and homoskedastic environment, this superior predictive ability of forecast weight averaging holds true irrespective of whether the coefficients are tested by a t statistic or a z statistic, provided the significance level is within the 10% range. Through theoretical proofs and a simulation study, we show that model averaging methods such as variance model averaging, simple model averaging and standard error model averaging each produce a mean squared forecast error larger than that of forecast weight averaging. Finally, this result also holds marginally when applied to empirical business and economic data sets: the Gross Domestic Product (GDP) growth rate, Consumer Price Index (CPI) and Average Lending Rate (ALR) of Malaysia.
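In outline, forecast weight averaging means averaging the weight vectors produced by several weighting schemes and then forming a single combined forecast. The sketch below shows that mechanical step; the individual schemes, weight values and model forecasts are placeholders, not the paper's specifications.

```python
# Sketch: average the weight vectors from several weighting schemes, then combine forecasts.
import numpy as np

model_forecasts = np.array([2.1, 2.4, 1.9])          # e.g. GDP growth from three models (invented)

weights_by_scheme = {
    "simple":         np.array([1/3, 1/3, 1/3]),
    "variance_based": np.array([0.50, 0.30, 0.20]),  # placeholder weights
    "se_based":       np.array([0.45, 0.35, 0.20]),
}

avg_weights = np.mean(list(weights_by_scheme.values()), axis=0)
avg_weights /= avg_weights.sum()                      # renormalise
combined_forecast = float(avg_weights @ model_forecasts)
print(avg_weights, round(combined_forecast, 3))
```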
Pietrzak, Robert H; Scott, James Cobb; Harel, Brian T; Lim, Yen Ying; Snyder, Peter J; Maruff, Paul
2012-11-01
Alprazolam is a benzodiazepine that, when administered acutely, results in impairments in several aspects of cognition, including attention, learning, and memory. However, the profile (i.e., component processes) that underlie alprazolam-related decrements in visual paired associate learning has not been fully explored. In this double-blind, placebo-controlled, randomized cross-over study of healthy older adults, we used a novel, "process-based" computerized measure of visual paired associate learning to examine the effect of a single, acute 1-mg dose of alprazolam on component processes of visual paired associate learning and memory. Acute alprazolam challenge was associated with a large magnitude reduction in visual paired associate learning and memory performance (d = 1.05). Process-based analyses revealed significant increases in distractor, exploratory, between-search, and within-search error types. Analyses of percentages of each error type suggested that, relative to placebo, alprazolam challenge resulted in a decrease in the percentage of exploratory errors and an increase in the percentage of distractor errors, both of which reflect memory processes. Results of this study suggest that acute alprazolam challenge decreases visual paired associate learning and memory performance by reducing the strength of the association between pattern and location, which may reflect a general breakdown in memory consolidation, with less evidence of reductions in executive processes (e.g., working memory) that facilitate visual paired associate learning and memory. Copyright © 2012 John Wiley & Sons, Ltd.
The Relationship among Correct and Error Oral Reading Rates and Comprehension.
ERIC Educational Resources Information Center
Roberts, Michael; Smith, Deborah Deutsch
1980-01-01
Eight learning disabled boys (10 to 12 years old) who were seriously deficient in both their oral reading and comprehension performances participated in the study which investigated, through an applied behavior analysis model, the interrelationships of three reading variables--correct oral reading rates, error oral reading rates, and percentage of…
1980-03-01
interpreting/smoothing data containing a significant percentage of gross errors, and thus is ideally suited for applications in automated image ... analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of the paper describes the application of
NASA Astrophysics Data System (ADS)
Li, Yinlin; Kundu, Bijoy K.
2018-03-01
The three-compartment model with spillover (SP) and partial volume (PV) corrections has been widely used for noninvasive kinetic parameter studies of dynamic 2-[18F]fluoro-2-deoxy-D-glucose (FDG) positron emission tomography images of small animal hearts in vivo. However, the approach still suffers from estimation uncertainty or slow convergence caused by the commonly used optimization algorithms. The aim of this study was to develop an improved optimization algorithm with better estimation performance. Femoral artery blood samples, image-derived input functions from heart ventricles and myocardial time-activity curves (TACs) were derived from data on 16 C57BL/6 mice obtained from the UCLA Mouse Quantitation Program. Parametric equations of the average myocardium and the blood pool TACs with SP and PV corrections in a three-compartment tracer kinetic model were formulated. A hybrid method integrating artificial immune-system and interior-reflective Newton methods was developed to solve the equations. Two penalty functions and one late time-point tail vein blood sample were used to constrain the objective function. The estimation accuracy of the method was validated by comparing results with experimental values using the errors in the areas under curves (AUCs) of the model-corrected input function (MCIF) and the 18F-FDG influx constant Ki. Moreover, the elapsed time was used to measure the convergence speed. The overall AUC error of MCIF for the 16 mice averaged -1.4 ± 8.2%, with correlation coefficients of 0.9706. Similar results can be seen in the overall Ki error percentage, which was 0.4 ± 5.8% with a correlation coefficient of 0.9912. The t-test P value for both showed no significant difference. The mean and standard deviation of the MCIF AUC and Ki percentage errors were lower than those of previously published methods. The computation time of the hybrid method is also several times lower than using just a stochastic algorithm. The proposed method significantly improved the model estimation performance in terms of the accuracy of the MCIF and Ki, as well as the convergence speed.
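A hybrid global-plus-local fit of this kind can be prototyped with off-the-shelf optimizers. In the sketch below, scipy's differential evolution stands in for the artificial-immune-system stage and a trust-region-reflective least-squares refinement stands in for the interior-reflective Newton stage; the "model" is a toy two-exponential curve, not the three-compartment FDG model with SP/PV corrections, and all parameter values are invented.

```python
# Sketch: hybrid global/local curve fit (global search, then local refinement).
import numpy as np
from scipy.optimize import differential_evolution, least_squares

t = np.linspace(0.5, 60, 40)                         # minutes
true = (0.8, 0.15, 0.3, 0.02)
def model(p, t):
    a1, k1, a2, k2 = p
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

rng = np.random.default_rng(3)
tac = model(true, t) + rng.normal(0, 0.01, t.size)   # noisy toy time-activity curve

residuals = lambda p: model(p, t) - tac
cost = lambda p: np.sum(residuals(p) ** 2)

bounds = [(0, 2), (0, 1), (0, 2), (0, 1)]
global_fit = differential_evolution(cost, bounds, seed=0, tol=1e-8)   # global stage
local_fit = least_squares(residuals, global_fit.x,
                          bounds=([0, 0, 0, 0], [2, 1, 2, 1]),
                          method="trf")                               # trust-region reflective refinement
print(np.round(local_fit.x, 4))
```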
Azeez, Adeboye; Obaromi, Davies; Odeyemi, Akinwumi; Ndege, James; Muntabayi, Ruffin
2016-07-26
Tuberculosis (TB) is a deadly infectious disease caused by Mycobacterium tuberculosis. Tuberculosis, a chronic and highly infectious disease, is prevalent in almost every part of the globe. More than 95% of TB mortality occurs in low/middle-income countries. In 2014, approximately 10 million people were diagnosed with active TB and two million died from the disease. In this study, our aim is to compare the predictive powers of the seasonal autoregressive integrated moving average (SARIMA) model and a hybrid SARIMA-neural network auto-regression (SARIMA-NNAR) model for TB incidence and to analyse its seasonality in South Africa. TB incidence data from January 2010 to December 2015 were extracted from the Eastern Cape Health facility report of the electronic Tuberculosis Register (ERT.Net). A SARIMA model and a combined SARIMA-neural network auto-regression (SARIMA-NNAR) model were used to analyse and predict the TB data from 2010 to 2015. Simulation performance parameters of mean square error (MSE), root mean square error (RMSE), mean absolute error (MAE), mean percent error (MPE), mean absolute scaled error (MASE) and mean absolute percentage error (MAPE) were used to compare the predictive performance of the models. Though both models could predict TB incidence in practice, the combined model displayed better performance. For the combined model, the Akaike information criterion (AIC), second-order AIC (AICc) and Bayesian information criterion (BIC) are 288.56, 308.31 and 299.09 respectively, lower than the corresponding SARIMA model values of 329.02, 327.20 and 341.99. The SARIMA-NNAR model forecast a slightly higher seasonal trend in TB incidence than the single model. The combined model gave better TB incidence forecasts with a lower AICc. The model also indicates the need for resolute interventions to reduce disease transmission, particularly for patients co-infected with HIV and other concomitant diseases and during peak festival periods.
On the development of voluntary and reflexive components in human saccade generation.
Fischer, B; Biscaldi, M; Gezeck, S
1997-04-18
The saccadic performance of a large number (n = 281) of subjects of different ages (8-70 years) was studied applying two saccade tasks: the prosaccade overlap (PO) task and the antisaccade gap (AG) task. From the PO task, the mean reaction times and the percentage of express saccades were determined for each subject. From the AG task, the mean reaction time of the correct antisaccades and of the erratic prosaccades were measured. In addition, we determined the error rate and the mean correction time, i.e. the time between the end of the first erratic prosaccade and the following corrective antisaccade. These variables were measured separately for stimuli presented (in random order) at the right or left side. While strong correlations were seen between variables for the right and left sides, considerable side asymmetries were obtained from many subjects. A factor analysis revealed that the seven variables (six eye movement variables plus age) were mainly determined by only two factors, V and F. The V factor was dominated by the variables from the AG task (reaction time, correction time, error rate) the F factor by variables from the PO task (reaction time, percentage express saccades) and the reaction time of the errors (prosaccades!) from the AG task. The relationship between the percentage number of express saccades and the percentage number of errors was completely asymmetric: high numbers of express saccades were accompanied by high numbers of errors but not vice versa. Only the variables in the V factor covaried with age. A fast decrease of the antisaccade reaction time (by 50 ms), of the correction times (by 70 ms) and of the error rate (from 60 to 22%) was observed between age 9 and 15 years, followed by a further period of slower decrease until age 25 years. The mean time a subject needed to reach the side opposite to the stimulus as required by the antisaccade task decreased from approximately 350 to 250 ms until age 15 years and decreased further by 20 ms before it increased again to approximately 280 ms. At higher ages, there was a slight indication for a return development. Subjects with high error rates had long antisaccade latencies and needed a long time to reach the opposite side on error trials. The variables obtained from the PO task varied also significantly with age but by smaller amounts. The results are discussed in relation to the subsystems controlling saccade generation: a voluntary and a reflex component the latter being suppressed by active fixation. Both systems seem to develop differentially. The data offer a detailed baseline for clinical studies using the pro- and antisaccade tasks as an indication of functional impairments, circumscribed brain lesions, neurological and psychiatric diseases and cognitive deficits.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Noel, Camille E.; Gutti, VeeraRajesh; Bosch, Walter
Purpose: To quantify the potential impact of the Integrating the Healthcare Enterprise–Radiation Oncology Quality Assurance with Plan Veto (QAPV) profile on the patient safety of external beam radiation therapy (RT) operations. Methods and Materials: An institutional database of events (errors and near-misses) was used to evaluate the ability of QAPV to prevent clinically observed events. We analyzed reported events that were related to Digital Imaging and Communications in Medicine RT plan parameter inconsistencies between the intended treatment (on the treatment planning system) and the delivered treatment (on the treatment machine). Critical Digital Imaging and Communications in Medicine RT plan parameters were identified. Each event was scored for importance using the Failure Mode and Effects Analysis methodology. Potential error occurrence (frequency) was derived from the collected event data, along with the potential event severity and the probability of detection with and without the theoretical implementation of the QAPV plan comparison check. Failure Mode and Effects Analysis Risk Priority Numbers (RPNs) with and without QAPV were compared to quantify the potential benefit of clinical implementation of QAPV. Results: The implementation of QAPV could reduce the RPN values for 15 of 22 (71%) evaluated parameters, with an overall average reduction in RPN of 68 (range, 0-216). For the 6 high-risk parameters (RPN > 200), the average reduction in RPN value was 163 (range, 108-216). The RPN value reduction for the intermediate-risk (200 > RPN > 100) parameters ranged from 0 to 140. With QAPV, the largest RPN value, for "Beam Meterset", was reduced from 324 to 108. The maximum absolute reduction in RPN value was for Beam Meterset (216, 66.7%), whereas the maximum percentage reduction was for Cumulative Meterset Weight (80, 88.9%). Conclusion: This analysis quantifies the value of the Integrating the Healthcare Enterprise–Radiation Oncology QAPV implementation in the clinical workflow. We demonstrate that although QAPV does not provide a comprehensive solution for error prevention in RT, it can have a significant impact on a subset of the most severe clinically observed events.
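For readers unfamiliar with the Failure Mode and Effects Analysis arithmetic, the risk priority number is simply the product of the severity, occurrence and detectability scores. A minimal sketch follows; the individual 1-10 scores are illustrative assumptions chosen only to reproduce the reported 324 to 108 reduction for Beam Meterset, not values taken from the study.

```python
# Minimal FMEA sketch: RPN = severity x occurrence x detectability.
# The scores below are illustrative assumptions, not the study's data.
def rpn(severity, occurrence, detectability):
    return severity * occurrence * detectability

# Hypothetical parameter: wrong "Beam Meterset" transferred to the machine.
before = rpn(severity=9, occurrence=6, detectability=6)   # no plan-veto check
after  = rpn(severity=9, occurrence=6, detectability=2)   # automated comparison improves detection
print(before, after, before - after)  # 324 108 216
```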
NASA Astrophysics Data System (ADS)
Radziukynas, V.; Klementavičius, A.
2016-04-01
The paper analyses the performance of the recently developed short-term forecasting suite for the Latvian power system. The system load and wind power are forecasted using ANN and ARIMA models, respectively, and the forecasting accuracy is evaluated in terms of errors, mean absolute errors and mean absolute percentage errors. The influence of additional input variables on load forecasting errors is investigated. The interplay of hourly load and wind power forecasting errors is also evaluated for the Latvian power system with historical loads (the year 2011) and planned wind power capacities (the year 2023).
Topping, David J.; Rubin, David M.; Wright, Scott A.; Melis, Theodore S.
2011-01-01
Several common methods for measuring suspended-sediment concentration in rivers in the United States use depth-integrating samplers to collect a velocity-weighted suspended-sediment sample in a subsample of a river cross section. Because depth-integrating samplers are always moving through the water column as they collect a sample, and can collect only a limited volume of water and suspended sediment, they collect only minimally time-averaged data. Four sources of error exist in the field use of these samplers: (1) bed contamination, (2) pressure-driven inrush, (3) inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration, and (4) inadequate time averaging. The first two of these errors arise from misuse of suspended-sediment samplers, and the third has been the subject of previous study using data collected in the sand-bedded Middle Loup River in Nebraska. Of these four sources of error, the least understood source of error arises from the fact that depth-integrating samplers collect only minimally time-averaged data. To evaluate this fourth source of error, we collected suspended-sediment data between 1995 and 2007 at four sites on the Colorado River in Utah and Arizona, using a P-61 suspended-sediment sampler deployed in both point- and one-way depth-integrating modes, and D-96-A1 and D-77 bag-type depth-integrating suspended-sediment samplers. These data indicate that the minimal duration of time averaging during standard field operation of depth-integrating samplers leads to an error that is comparable in magnitude to that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. This random error arising from inadequate time averaging is positively correlated with grain size and does not largely depend on flow conditions or, for a given size class of suspended sediment, on elevation above the bed. Averaging over time scales >1 minute is the likely minimum duration required to result in substantial decreases in this error. During standard two-way depth integration, a depth-integrating suspended-sediment sampler collects a sample of the water-sediment mixture during two transits at each vertical in a cross section: one transit while moving from the water surface to the bed, and another transit while moving from the bed to the water surface. As the number of transits is doubled at an individual vertical, this error is reduced by ~30 percent in each size class of suspended sediment. For a given size class of suspended sediment, the error arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration depends only on the number of verticals collected, whereas the error arising from inadequate time averaging depends on both the number of verticals collected and the number of transits collected at each vertical. Summing these two errors in quadrature yields a total uncertainty in an equal-discharge-increment (EDI) or equal-width-increment (EWI) measurement of the time-averaged velocity-weighted suspended-sediment concentration in a river cross section (exclusive of any laboratory-processing errors). By virtue of how the number of verticals and transits influences the two individual errors within this total uncertainty, the error arising from inadequate time averaging slightly dominates that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. 
Adding verticals to an EDI or EWI measurement is slightly more effective in reducing the total uncertainty than adding transits only at each vertical, because a new vertical contributes both temporal and spatial information. However, because collection of depth-integrated samples at more transits at each vertical is generally easier and faster than at more verticals, adding a combination of verticals and transits is likely a more practical approach to reducing the total uncertainty in most field situations.
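As described above, the spatial and temporal error components are treated as independent and combine in quadrature, and doubling the number of transits reduces the random time-averaging error by roughly 30% (a factor of one over the square root of two). A short sketch of that arithmetic, with made-up error magnitudes, is given below.

```python
import numpy as np

def total_uncertainty(spatial_err, temporal_err):
    """Combine independent error sources in quadrature."""
    return np.sqrt(spatial_err ** 2 + temporal_err ** 2)

# Doubling the number of transits at each vertical reduces the random
# time-averaging error roughly as 1/sqrt(2), i.e. by about 30%.
e_t = 10.0                       # illustrative time-averaging error (percent)
print(e_t / np.sqrt(2))          # ~7.07, a ~29% reduction
print(total_uncertainty(8.0, e_t))
```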
Althomali, Talal A
2018-01-01
Refractive errors are a form of optical defect affecting more than 2.3 billion people worldwide. As refractive errors are a major contributor to mild to moderate vision impairment, assessment of their relative proportions would be helpful in the strategic planning of health programs. The aim was to determine the pattern of the relative proportion of types of refractive errors among adult candidates seeking laser-assisted refractive correction in a private clinic setting in Saudi Arabia. The clinical charts of 687 patients (1374 eyes) with mean age 27.6 ± 7.5 years who desired laser vision correction and underwent a pre-LASIK work-up were reviewed retrospectively. Refractive errors were classified as myopia, hyperopia and astigmatism. Manifest refraction spherical equivalent (MRSE) was applied to define refractive errors. The outcome measure was the distribution percentage of the different types of refractive errors: myopia, hyperopia and astigmatism. The mean spherical equivalent for the 1374 eyes was -3.11 ± 2.88 D. Of the total 1374 eyes, 91.8% (n = 1262) had myopia, 4.7% (n = 65) had hyperopia and 3.4% (n = 47) had emmetropia with astigmatism. The distribution percentage of astigmatism (cylinder error of ≥ 0.50 D) was 78.5% (1078/1374 eyes), of which 69.1% (994/1374) had low to moderate astigmatism and 9.4% (129/1374) had high astigmatism. Of the adult candidates seeking laser refractive correction in a private setting in Saudi Arabia, myopia represented the greatest burden with more than 90% of eyes myopic, compared with hyperopia in nearly 5% of eyes. Astigmatism was present in more than 78% of eyes.
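MRSE-based classification reduces to simple arithmetic on the sphere and cylinder components of the refraction. The sketch below is purely illustrative; the ±0.50 D spherical-equivalent cut-offs are assumptions (the abstract only states the 0.50 D cylinder criterion for astigmatism).

```python
def spherical_equivalent(sphere, cylinder):
    # Manifest refraction spherical equivalent (MRSE) = sphere + cylinder / 2
    return sphere + cylinder / 2.0

def classify_eye(sphere, cylinder, cutoff=0.50):
    """Illustrative classification; the +/-0.50 D MRSE cut-offs are assumptions."""
    mrse = spherical_equivalent(sphere, cylinder)
    if mrse <= -cutoff:
        refractive = "myopia"
    elif mrse >= cutoff:
        refractive = "hyperopia"
    else:
        refractive = "emmetropia"
    astigmatism = abs(cylinder) >= 0.50
    return refractive, astigmatism

print(classify_eye(sphere=-2.75, cylinder=-0.75))  # ('myopia', True)
```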
Assessing performance of an Electronic Health Record (EHR) using Cognitive Task Analysis.
Saitwal, Himali; Feng, Xuan; Walji, Muhammad; Patel, Vimla; Zhang, Jiajie
2010-07-01
Many Electronic Health Record (EHR) systems fail to provide user-friendly interfaces due to the lack of systematic consideration of human-centered computing issues. Such interfaces can be improved to provide easy-to-use, easy-to-learn, and error-resistant EHR systems to users. The aim was to evaluate the usability of an EHR system and suggest areas of improvement in the user interface. The user interface of the AHLTA (Armed Forces Health Longitudinal Technology Application) was analyzed using the Cognitive Task Analysis (CTA) method called GOMS (Goals, Operators, Methods, and Selection rules) and an associated technique called KLM (Keystroke Level Model). The GOMS method was used to evaluate the AHLTA user interface by classifying each step of a given task as a Mental (Internal) or Physical (External) operator. This analysis was performed by two analysts independently, and the inter-rater reliability was computed to verify the reliability of the GOMS method. Further evaluation was performed using KLM to estimate the execution time required to perform the given task through application of its standard set of operators. The results are based on the analysis of 14 prototypical tasks performed by AHLTA users. They show that on average a user needs to go through 106 steps to complete a task. To perform all 14 tasks, a user would spend about 22 min (independent of system response time) on data entry, of which 11 min are spent on the more effortful mental operators. The inter-rater reliability for all 14 tasks was 0.8 (kappa), indicating good reliability of the method. This paper empirically reveals and identifies the following findings related to the performance of AHLTA: (1) a large number of average total steps to complete common tasks, (2) a high average execution time and (3) a large percentage of mental operators. The user interface can be improved by reducing (a) the total number of steps and (b) the percentage of mental effort required for the tasks. 2010 Elsevier Ireland Ltd. All rights reserved.
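KLM estimates execution time by summing standard operator durations over the sequence of physical and mental steps. The following sketch uses the commonly cited textbook operator times, which are assumptions here rather than values reported in the study; the hypothetical operator string is likewise illustrative.

```python
# Minimal KLM sketch: total execution time is the sum of operator times.
# Operator durations are commonly cited textbook estimates (assumed here),
# not values taken from the AHLTA study itself.
KLM_TIMES = {
    "K": 0.20,   # keystroke
    "P": 1.10,   # point with mouse
    "B": 0.10,   # mouse button press/release
    "H": 0.40,   # home hands between keyboard and mouse
    "M": 1.35,   # mental preparation (internal operator)
}

def klm_time(sequence):
    """sequence is a string of operators, e.g. 'MHPBMKKKK'."""
    return sum(KLM_TIMES[op] for op in sequence)

seq = "MHPBM" + "K" * 8                          # hypothetical data-entry step
print(round(klm_time(seq), 2), "seconds")
print(sum(op == "M" for op in seq) / len(seq))   # share of mental operators
```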
Jin, X; Yan, H; Han, C; Zhou, Y; Yi, J; Xie, C
2015-03-01
To investigate comparatively the percentage gamma passing rate (%GP) of two-dimensional (2D) and three-dimensional (3D) pre-treatment volumetric modulated arc therapy (VMAT) dosimetric verification and their correlation and sensitivity with percentage dosimetric errors (%DE). %GP of 2D and 3D pre-treatment VMAT quality assurance (QA) with different acceptance criteria was obtained by ArcCHECK® (Sun Nuclear Corporation, Melbourne, FL) for 20 patients with nasopharyngeal cancer (NPC) and 20 patients with oesophageal cancer. %DE were calculated from planned dose-volume histogram (DVH) and patients' predicted DVH calculated by 3DVH® software (Sun Nuclear Corporation). Correlation and sensitivity between %GP and %DE were investigated using Pearson's correlation coefficient (r) and receiver operating characteristics (ROCs). Relatively higher %DE on some DVH-based metrics were observed for both patients with NPC and oesophageal cancer. Except for 2%/2 mm criterion, the average %GPs for all patients undergoing VMAT were acceptable with average rates of 97.11% ± 1.54% and 97.39% ± 1.37% for 2D and 3D 3%/3 mm criteria, respectively. The number of correlations for 3D was higher than that for 2D (21 vs 8). However, the general correlation was still poor for all the analysed metrics (9 out of 26 for 3D 3%/3 mm criterion). The average area under the curve (AUC) of ROCs was 0.66 ± 0.12 and 0.71 ± 0.21 for 2D and 3D evaluations, respectively. There is a lack of correlation between %GP and %DE for both 2D and 3D pre-treatment VMAT dosimetric evaluation. DVH-based dose metrics evaluation obtained from 3DVH will provide more useful analysis. Correlation and sensitivity of %GP with %DE for VMAT QA were studied for the first time.
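The two statistics at the heart of this analysis, the Pearson correlation between %GP and %DE and the ROC area under the curve for detecting larger dosimetric errors, are easy to reproduce in outline. The sketch below uses fabricated values and a median split purely for illustration; it is not the study's data or its exact error-classification rule.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
gp = rng.normal(97.1, 1.5, 40)          # hypothetical %GP values (3%/3 mm criterion)
de = rng.normal(2.0, 1.0, 40)           # hypothetical |%DE| for one DVH metric

r, p = pearsonr(gp, de)                 # strength of the %GP-%DE association
flag = (de > np.median(de)).astype(int) # median split, purely illustrative
auc = roc_auc_score(flag, -gp)          # lower %GP should flag larger %DE
print(f"r = {r:.2f} (p = {p:.3f}), AUC = {auc:.2f}")
```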
Confidence intervals in Flow Forecasting by using artificial neural networks
NASA Astrophysics Data System (ADS)
Panagoulia, Dionysia; Tsekouras, George
2014-05-01
One of the major inadequacies in the implementation of Artificial Neural Networks (ANNs) for flow forecasting is the development of confidence intervals, because the relevant estimation cannot be implemented directly, in contrast to classical forecasting methods. The variation in the ANN output is a measure of uncertainty in the model predictions based on the training data set. Different methods for uncertainty analysis, such as bootstrap, Bayesian and Monte Carlo approaches, have already been proposed for hydrologic and geophysical models, while methods for confidence intervals, such as error output, re-sampling and multi-linear regression adapted to ANN, have been used for power load forecasting [1-2]. The aim of this paper is to present the re-sampling method for ANN prediction models and to develop it for next-day flow forecasting. The re-sampling method is based on the ascending sorting of the errors between real and predicted values for all input vectors. The cumulative sample distribution function of the prediction errors is calculated and the confidence intervals are estimated by keeping the intermediate values, rejecting the extreme values according to the desired confidence levels, and holding the intervals symmetrical in probability. For application of the confidence intervals issue, input vectors are used from the Mesochora catchment in western-central Greece. The ANN's training algorithm is the stochastic back-propagation process with decreasing functions of learning rate and momentum term, for which an optimization process is conducted with respect to the crucial parameter values, such as the number of neurons, the kind of activation functions, and the initial values and time parameters of the learning rate and momentum term. Input variables are historical data of previous days, such as flows, nonlinearly weather-related temperatures and nonlinearly weather-related rainfalls, selected on the basis of correlation analysis between the flow under prediction and each candidate input variable of different ANN structures [3]. The performance of each ANN structure is evaluated by a voting analysis based on eleven criteria: the root mean square error (RMSE), the correlation index (R), the mean absolute percentage error (MAPE), the mean percentage error (MPE), the mean error (ME), the percentage volume in errors (VE), the percentage error in peak (MF), the normalized mean bias error (NMBE), the normalized root mean square error (NRMSE), the Nash-Sutcliffe model efficiency coefficient (E) and the modified Nash-Sutcliffe model efficiency coefficient (E1). The next-day flow for the test set is calculated using the best ANN structure's model. Consequently, the confidence intervals of various confidence levels for the training, evaluation and test sets are compared in order to explore the generalisation dynamics of confidence intervals from the training and evaluation sets. [1] H.S. Hippert, C.E. Pedreira, R.C. Souza, "Neural networks for short-term load forecasting: A review and evaluation," IEEE Trans. on Power Systems, vol. 16, no. 1, 2001, pp. 44-55. [2] G. J. Tsekouras, N.E. Mastorakis, F.D. Kanellos, V.T. Kontargyri, C.D. Tsirekis, I.S. Karanasiou, Ch.N. Elias, A.D. Salis, P.A. Kontaxis, A.A.
Gialketsi: "Short term load forecasting in Greek interconnected power system using ANN: Confidence Interval using a novel re-sampling technique with corrective Factor", WSEAS International Conference on Circuits, Systems, Electronics, Control & Signal Processing, (CSECS '10), Vouliagmeni, Athens, Greece, December 29-31, 2010. [3] D. Panagoulia, I. Trichakis, G. J. Tsekouras: "Flow Forecasting via Artificial Neural Networks - A Study for Input Variables conditioned on atmospheric circulation", European Geosciences Union, General Assembly 2012 (NH1.1 / AS1.16 - Extreme meteorological and hydrological events induced by severe weather and climate change), Vienna, Austria, 22-27 April 2012.
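The re-sampling interval described above amounts to taking empirical quantiles of the sorted prediction errors. A minimal sketch of that idea follows (NumPy assumed; the function and variable names are illustrative, not the authors' code).

```python
import numpy as np

def resampling_interval(y_true, y_pred, level=0.95):
    """Empirical, probability-symmetric confidence interval for prediction
    errors, in the spirit of the re-sampling approach described above."""
    errors = np.sort(np.asarray(y_true) - np.asarray(y_pred))  # ascending sort
    alpha = 1.0 - level
    lo = np.quantile(errors, alpha / 2)        # reject lower extreme tail
    hi = np.quantile(errors, 1 - alpha / 2)    # reject upper extreme tail
    return lo, hi

# A forecast interval for a new prediction y_hat is then [y_hat + lo, y_hat + hi].
```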
Franklin, Bryony Dean; O'Grady, Kara; Donyai, Parastou; Jacklin, Ann; Barber, Nick
2007-01-01
Objectives To assess the impact of a closed‐loop electronic prescribing, automated dispensing, barcode patient identification and electronic medication administration record (EMAR) system on prescribing and administration errors, confirmation of patient identity before administration, and staff time. Design, setting and participants Before‐and‐after study in a surgical ward of a teaching hospital, involving patients and staff of that ward. Intervention Closed‐loop electronic prescribing, automated dispensing, barcode patient identification and EMAR system. Main outcome measures Percentage of new medication orders with a prescribing error, percentage of doses with medication administration errors (MAEs) and percentage given without checking patient identity. Time spent prescribing and providing a ward pharmacy service. Nursing time on medication tasks. Results Prescribing errors were identified in 3.8% of 2450 medication orders pre‐intervention and 2.0% of 2353 orders afterwards (p<0.001; χ2 test). MAEs occurred in 7.0% of 1473 non‐intravenous doses pre‐intervention and 4.3% of 1139 afterwards (p = 0.005; χ2 test). Patient identity was not checked for 82.6% of 1344 doses pre‐intervention and 18.9% of 1291 afterwards (p<0.001; χ2 test). Medical staff required 15 s to prescribe a regular inpatient drug pre‐intervention and 39 s afterwards (p = 0.03; t test). Time spent providing a ward pharmacy service increased from 68 min to 98 min each weekday (p = 0.001; t test); 22% of drug charts were unavailable pre‐intervention. Time per drug administration round decreased from 50 min to 40 min (p = 0.006; t test); nursing time on medication tasks outside of drug rounds increased from 21.1% to 28.7% (p = 0.006; χ2 test). Conclusions A closed‐loop electronic prescribing, dispensing and barcode patient identification system reduced prescribing errors and MAEs, and increased confirmation of patient identity before administration. Time spent on medication‐related tasks increased. PMID:17693676
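The before-and-after comparisons of error proportions reported above are standard chi-squared tests on 2x2 tables. As an illustration, the prescribing-error result can be approximately reproduced from the percentages and denominators given; the counts are rounded reconstructions, not the authors' exact data.

```python
from scipy.stats import chi2_contingency

# Prescribing errors before vs after the closed-loop system:
# 3.8% of 2450 orders -> ~93 errors; 2.0% of 2353 orders -> ~47 errors (rounded).
table = [[93, 2450 - 93],
         [47, 2353 - 47]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.4f}")   # p < 0.001, consistent with the reported result
```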
NASA Astrophysics Data System (ADS)
Zhu, Likai; Radeloff, Volker C.; Ives, Anthony R.
2017-06-01
Mapping crop types is of great importance for assessing agricultural production, land-use patterns, and the environmental effects of agriculture. Both the radiometric and the spatial resolution of Landsat sensor imagery are well suited to cropland monitoring. However, accurate mapping of crop types requires frequent cloud-free images during the growing season, which are often not available, and this raises the question of whether Landsat data can be combined with data from other satellites. Here, our goal is to evaluate to what degree fusing Landsat with MODIS Nadir Bidirectional Reflectance Distribution Function (BRDF)-Adjusted Reflectance (NBAR) data can improve crop-type classification. Choosing either one or two images from all cloud-free Landsat observations available for the Arlington Agricultural Research Station area in Wisconsin from 2010 to 2014, we generated 87 combinations of images, and used each combination as input into the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) algorithm to predict Landsat-like images at the nominal dates of each 8-day MODIS NBAR product. Both the original Landsat and STARFM-predicted images were then classified with a support vector machine (SVM), and we compared the classification errors of three scenarios: 1) classifying the one or two original Landsat images of each combination only, 2) classifying the one or two original Landsat images plus all STARFM-predicted images, and 3) classifying the one or two original Landsat images together with STARFM-predicted images for key dates. Our results indicated that using two Landsat images as the input of STARFM did not significantly improve the STARFM predictions compared to using only one, and that predictions using Landsat images between July and August as input were most accurate. Including all STARFM-predicted images together with the Landsat images significantly increased average classification error by 4 percentage points (from 21% to 25%) compared to using only Landsat images. However, incorporating only STARFM-predicted images for key dates decreased average classification error by 2 percentage points (from 21% to 19%) compared to using only Landsat images. In particular, if only a single Landsat image was available, adding STARFM predictions for key dates significantly decreased the average classification error by 4 percentage points, from 30% to 26% (p < 0.05). We conclude that adding STARFM-predicted images can be effective for improving crop-type classification when only limited Landsat observations are available, but carefully selecting images from a full set of STARFM predictions is crucial. We developed an approach to identify the optimal subsets of all STARFM predictions, which provides an alternative method of feature selection for future research.
NASA Technical Reports Server (NTRS)
Kimes, D. S.; Kerber, A. G.; Sellers, P. J.
1993-01-01
Spatial averaging errors that may occur when creating hemispherical reflectance maps for different cover types using direct nadir techniques to estimate hemispherical reflectance are assessed by comparing the results with those obtained with a knowledge-based system called VEG (Kimes et al., 1991, 1992). It was found that the hemispherical reflectance errors obtained using VEG are much smaller than those from the direct nadir techniques, depending on conditions. Suggestions are made concerning sampling and averaging strategies for creating hemispherical reflectance maps for photosynthetic, carbon cycle, and climate change studies.
Experimental fault characterization of a neural network
NASA Technical Reports Server (NTRS)
Tan, Chang-Huong
1990-01-01
The effects of a variety of faults on a neural network are quantified via simulation. The neural network consists of a single-layered clustering network and a three-layered classification network. The percentage of vectors mistagged by the clustering network, the percentage of vectors misclassified by the classification network, the time taken for the network to stabilize, and the output values are all measured. The results show that both transient and permanent faults have a significant impact on the performance of the measured network. The corresponding mistag and misclassification percentages are typically within 5 to 10 percent of each other. The average mistag percentage and the average misclassification percentage are both about 25 percent. After relearning, the percentage of misclassifications is reduced to 9 percent. In addition, transient faults are found to cause the network to become increasingly unstable as the duration of a transient is increased. The impact of link faults is relatively insignificant in comparison with node faults (1 versus 19 percent misclassified after relearning). There is a linear increase in the mistag and misclassification percentages with decreasing hardware redundancy. In addition, the mistag and misclassification percentages decrease linearly with increasing network size.
Bertacche, Vittorio; Pini, Elena; Stradi, Riccardo; Stratta, Fabio
2006-01-01
The purpose of this study was the development of a quantification method to detect the amount of amorphous cyclosporine using Fourier transform infrared (FTIR) spectroscopy. Mixing different percentages of crystalline cyclosporine with amorphous cyclosporine was used to obtain a set of standards composed of cyclosporine samples characterized by different percentages of amorphous cyclosporine. Using a wavelength range of 450-4,000 cm(-1), FTIR spectra were obtained from samples in potassium bromide pellets, and a partial least squares (PLS) model was then used to correlate the features of the FTIR spectra with the percentage of amorphous cyclosporine in the samples. This model gave a standard error of estimate (SEE) of 0.3562, with an r value of 0.9971, and a standard error of prediction (SEP) of 0.4168, the latter derived from the cross-validation procedure used to check the precision of the model. These statistical values demonstrate the applicability of the method to the quantitative determination of amorphous cyclosporine in crystalline cyclosporine samples.
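A calibration of this kind can be prototyped with an off-the-shelf PLS implementation. The sketch below uses synthetic spectra purely for illustration; the SEE and SEP formulas shown (residual standard error of the fit and RMSE of cross-validated predictions) are common conventions that may differ in detail from the software used in the study.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Hypothetical calibration set: rows are FTIR spectra (absorbance values),
# y is the known percentage of amorphous cyclosporine in each standard.
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 300))        # 20 standards x 300 wavenumbers (illustrative)
y = np.linspace(0, 20, 20)            # 0-20% amorphous content
X += 0.05 * y[:, None]                # embed a weak linear signal

pls = PLSRegression(n_components=3).fit(X, y)
fitted = pls.predict(X).ravel()
cv_pred = cross_val_predict(pls, X, y, cv=5).ravel()

see = np.sqrt(np.sum((y - fitted) ** 2) / (len(y) - 2))   # standard error of estimate
sep = np.sqrt(np.mean((y - cv_pred) ** 2))                # standard error of prediction (CV)
print(round(see, 3), round(sep, 3))
```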
Lead theft--a study of the "uniqueness" of lead from church roofs.
Bond, John W; Hainsworth, Sarah V; Lau, Tien L
2013-07-01
In the United Kingdom, theft of lead is common, particularly from churches and other public buildings with lead roofs. To assess the potential to distinguish lead from different sources, 41 samples of lead from 24 church roofs in Northamptonshire, U.K., were analyzed for the relative abundance of trace elements and isotopes of lead using X-ray fluorescence (XRF) and inductively coupled plasma mass spectrometry, respectively. XRF revealed the overall presence of 12 trace elements, with the four most abundant, calcium, phosphorus, silicon, and sulfur, showing a large weight percentage standard error of the mean across all samples, suggesting variation in the weight percentage of these elements between different church roofs. Multiple samples from the same roofs, but different lead sheets, showed much lower weight percentage standard errors of the mean, suggesting similar trace element concentrations. Lead isotope ratios were similar for all samples. Factors likely to affect the occurrence of these trace elements are discussed. © 2013 American Academy of Forensic Sciences.
NASA Technical Reports Server (NTRS)
Chelton, Dudley B.; Schlax, Michael G.
1991-01-01
The sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location is quantified through a formalism. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average, formed from the simple average of all observations within the averaging period, and the optimal estimate, formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. Suboptimal estimates, constructed using approximate values of the signal and measurement-error statistics required for the optimal estimate, are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates using the correct signal and measurement error variances and correlation functions for realistic ranges of these parameters, which makes the suboptimal estimate a viable practical alternative to the composite average method generally employed at present.
NASA Astrophysics Data System (ADS)
Zempila, Melina-Maria; Taylor, Michael; Bais, Alkiviadis; Kazadzis, Stelios
2016-10-01
We report on the construction of generic models to calculate photosynthetically active radiation (PAR) from global horizontal irradiance (GHI), and vice versa. Our study took place at stations of the Greek UV network (UVNET) and the Hellenic solar energy network (HNSE), with measurements from NILU-UV multi-filter radiometers and CM pyranometers, chosen for their long (≈1 M records/site), high temporal resolution (≈1 min) records that capture a broad range of atmospheric environments and cloudiness conditions. The uncertainty of the PAR measurements is quantified to be ±6.5%, while the uncertainty of the GHI measurements is up to ≈±7% according to the manufacturer. We show how multi-linear regression and nonlinear neural network (NN) models, trained at a calibration site (Thessaloniki), can be made generic provided that the input-output time series are processed with multi-channel singular spectrum analysis (M-SSA). Without M-SSA, both linear and nonlinear models perform well only locally. M-SSA with 50 time-lags is found to be sufficient for identification of trend, periodic and noise components in aerosol, cloud parameters and irradiance, and for constructing regularized noise models of PAR from GHI irradiances. Reconstructed PAR and GHI time series capture ≈95% of the variance of the cross-validated target measurements and have median absolute percentage errors <2%. The intra-site median absolute error of the M-SSA processed models was ≈8.2±1.7 W/m2 for PAR and ≈9.2±4.2 W/m2 for GHI. When applying the models trained at Thessaloniki to other stations, the average absolute mean bias between the model estimates and measured values was found to be ≈1.2 W/m2 for PAR and ≈0.8 W/m2 for GHI. For the models, percentage errors are well within the uncertainty of the measurements at all sites. Generic NN models were found to perform marginally better than their linear counterparts.
Wu, Wei; Guo, Junqiao; An, Shuyi; Guan, Peng; Ren, Yangwu; Xia, Linzi; Zhou, Baosen
2015-01-01
Background: Cases of hemorrhagic fever with renal syndrome (HFRS) are widely distributed in eastern Asia, especially in China, Russia, and Korea. It has proved to be a difficult task to eliminate HFRS completely because of the diverse animal reservoirs and the effects of global warming. Reliable forecasting is useful for the prevention and control of HFRS. Methods: Two hybrid models, one composed of a nonlinear autoregressive neural network (NARNN) and an autoregressive integrated moving average (ARIMA) model, the other composed of a generalized regression neural network (GRNN) and ARIMA, were constructed to predict the incidence of HFRS over the following year. The performance of the two hybrid models was compared with that of the ARIMA model. Results: The ARIMA, ARIMA-NARNN and ARIMA-GRNN models all fitted and predicted the seasonal fluctuation well. Among the three models, the mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) of the ARIMA-NARNN hybrid model were the lowest in both the modeling stage and the forecasting stage. For the ARIMA-GRNN hybrid model, the MSE, MAE and MAPE of the modeling performance and the MSE and MAE of the forecasting performance were lower than those of the ARIMA model, but the MAPE of the forecasting performance did not improve. Conclusion: Developing and applying the ARIMA-NARNN hybrid model is an effective way to better understand the epidemic characteristics of HFRS and could be helpful for the prevention and control of HFRS. PMID:26270814
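The general recipe behind such hybrids is to let the ARIMA component capture the linear and seasonal structure and then train a neural network on the lagged residuals. A minimal sketch follows, assuming statsmodels and scikit-learn and using a plain MLP in place of the NARNN/GRNN architectures used in the study; the orders and lag length are illustrative.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

def fit_hybrid(y, order=(1, 0, 1), seasonal_order=(1, 1, 1, 12), lags=12):
    """Hybrid sketch: a (S)ARIMA model captures the linear/seasonal part,
    and a small neural network is trained on lagged residuals for the rest."""
    arima = ARIMA(y, order=order, seasonal_order=seasonal_order).fit()
    resid = arima.resid
    X = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
    target = resid[lags:]
    nn = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                      random_state=0).fit(X, target)
    return arima, nn

# One-step-ahead hybrid prediction = ARIMA forecast + NN residual correction.
```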
Hsu, Chi-Pin; Lin, Shang-Chih; Shih, Kao-Shang; Huang, Chang-Hung; Lee, Chian-Her
2014-12-01
After total knee replacement, the model-based Roentgen stereophotogrammetric analysis (RSA) technique has been used to monitor the status of prosthetic wear, misalignment, and even failure. However, the overlap of the prosthetic outlines inevitably increases errors in the estimation of prosthetic poses due to the limited amount of available outlines. In the literature, quite a few studies have investigated the problems induced by the overlapped outlines, and manual adjustment is still the mainstream. This study proposes two methods to automate the image processing of overlapped outlines prior to the pose registration of prosthetic models. The outline-separated method defines the intersected points and segments the overlapped outlines. The feature-recognized method uses the point and line features of the remaining outlines to initiate registration. Overlap percentage is defined as the ratio of overlapped to non-overlapped outlines. The simulated images with five overlapping percentages are used to evaluate the robustness and accuracy of the proposed methods. Compared with non-overlapped images, overlapped images reduce the number of outlines available for model-based RSA calculation. The maximum and root mean square errors for a prosthetic outline are 0.35 and 0.04 mm, respectively. The mean translation and rotation errors are 0.11 mm and 0.18°, respectively. The errors of the model-based RSA results are increased when the overlap percentage is beyond about 9%. In conclusion, both outline-separated and feature-recognized methods can be seamlessly integrated to automate the calculation of rough registration. This can significantly increase the clinical practicability of the model-based RSA technique.
[Errors in Peruvian medical journals references].
Huamaní, Charles; Pacheco-Romero, José
2009-01-01
References are fundamental to our studies; adequate selection is as important as adequate description. The objective was to determine the number of errors in a sample of references found in Peruvian medical journals. We reviewed 515 references from scientific papers, selected by systematic random sampling, and corroborated the reference information against the original document or its citation in PubMed, LILACS or SciELO-Peru. We found errors in 47.6% (245) of the references and identified 372 types of errors; the most frequent were errors in presentation style (120), authorship (100) and title (100), mainly due to spelling mistakes (91). The percentage of reference errors was high, and the errors were varied and multiple. We suggest systematic revision of references in the editorial process, as well as extending the discussion of this topic. Keywords: references, periodicals, research, bibliometrics.
Tahmasebi Birgani, Mohamad J; Chegeni, Nahid; Zabihzadeh, Mansoor; Hamzian, Nima
2014-01-01
The equivalent field is frequently used for central-axis depth-dose calculations of rectangular- and irregular-shaped photon beams. As most of the proposed models to calculate the equivalent square field are dosimetry based, a simple physics-based method to calculate the equivalent square field size was used as the basis of this study. A table of the sides of the equivalent square of rectangular fields was constructed and then compared with the well-known tables of BJR and Venselaar et al., with average relative error percentages of 2.5 ± 2.5% and 1.5 ± 1.5%, respectively. To evaluate the accuracy of this method, percentage depth doses (PDDs) were measured for several irregular symmetric and asymmetric treatment fields and their equivalent squares on a Siemens Primus Plus linear accelerator at both energies, 6 and 18 MV. The mean relative difference between the measured PDDs for these fields and their equivalent squares was approximately 1% or less. As a result, this method can be employed to calculate the equivalent field not only for rectangular fields but also for any irregular symmetric or asymmetric field. © 2013 American Association of Medical Dosimetrists. Published by American Association of Medical Dosimetrists. All rights reserved.
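For orientation, the most widely used rule of thumb for the equivalent square of a rectangular field is the area-to-perimeter (Sterling) approximation. Note that this is the textbook shortcut, not necessarily the physics-based method developed in the study above.

```python
def equivalent_square_side(a, b):
    """Side of the 'equivalent square' of an a x b rectangular field using
    the common area-to-perimeter (Sterling) rule: s = 4A/P = 2ab/(a+b).
    Textbook approximation, not the study's physics-based method."""
    return 2.0 * a * b / (a + b)

print(round(equivalent_square_side(5, 20), 2))   # 8.0 cm for a 5 cm x 20 cm field
```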
Computing resonant frequency of C-shaped compact microstrip antennas by using ANFIS
NASA Astrophysics Data System (ADS)
Akdagli, Ali; Kayabasi, Ahmet; Develi, Ibrahim
2015-03-01
In this work, the resonant frequency of C-shaped compact microstrip antennas (CCMAs) operating at UHF band is computed by using the adaptive neuro-fuzzy inference system (ANFIS). For this purpose, 144 CCMAs with various relative dielectric constants and different physical dimensions were simulated by the XFDTD software package based on the finite-difference time domain (FDTD) method. One hundred and twenty-nine CCMAs were employed for training, while the remaining 15 CCMAs were used for testing of the ANFIS model. Average percentage error (APE) values were obtained as 0.8413% and 1.259% for training and testing, respectively. In order to demonstrate its validity and accuracy, the proposed ANFIS model was also tested over the simulation data given in the literature, and APE was obtained as 0.916%. These results show that ANFIS can be successfully used to compute the resonant frequency of CCMAs.
Trends in incidence of borderline ovarian tumors in Denmark 1978-2006.
Hannibal, Charlotte Gerd; Huusom, Lene Drasbek; Kjaerbye-Thygesen, Anette; Tabor, Ann; Kjaer, Susanne K
2011-04-01
To examine period-, age- and histology-specific trends in the incidence rate of borderline ovarian tumors in Denmark in 1978-2006. Register-based cohort study, Denmark, 1978-2006. 5079 women diagnosed with a borderline ovarian tumor in at least one of two nationwide registries (4312 epithelial tumors and 767 non-epithelial/unspecified tumors). Estimation of overall incidence rates and period-, age- and histology-specific incidence rates. Age-adjustment was done using the World Standard Population. To evaluate incidence trends over time, we estimated the average annual percentage change and 95% confidence intervals (CI) using log-linear Poisson models. Main outcome measures were age-standardized and age-specific incidence rates and the average annual percentage change. The incidence of epithelial borderline ovarian tumors increased from 2.6 to 5.5 per 100,000 women-years between 1978 and 2006, with an average annual percentage change of 2.6% (95% CI: 2.2-3.0). The median age at diagnosis was 52 years. Women 40 years or older had a higher average annual percentage change than women younger than 40 years. Most tumors were mucinous (49.9%) and serous (44.4%). Women with mucinous tumors were younger at diagnosis (50 years) than women with serous tumors (53 years). Women with serous tumors had a higher average annual percentage incidence change than women with mucinous tumors. The incidence rate of borderline ovarian tumors increased significantly in Denmark in 1978-2006. In line with results for ovarian cancer, Denmark had a higher incidence rate of borderline ovarian tumors than the other Nordic countries in 1978-2006. © 2011 The Authors. Acta Obstetricia et Gynecologica Scandinavica © 2011 Nordic Federation of Societies of Obstetrics and Gynecology.
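The average annual percentage change (AAPC) quoted above comes from a log-linear Poisson model of case counts with person-years as an offset, so that AAPC = (exp(b) - 1) x 100, where b is the calendar-year coefficient. A sketch with statsmodels and fabricated illustrative counts (not the registry data) is:

```python
import numpy as np
import statsmodels.api as sm

# Illustrative data: annual case counts and person-years, not the registry's.
years = np.arange(1978, 2007)
rng = np.random.default_rng(2)
person_years = np.full(years.size, 2.0e6)              # women-years at risk (assumed)
true_rate = 2.6e-5 * np.exp(0.026 * (years - 1978))    # ~2.6% increase per year
cases = rng.poisson(true_rate * person_years)

# Log-linear Poisson model: log(rate) = a + b * year; AAPC = (exp(b) - 1) * 100.
X = sm.add_constant(years - years[0])
model = sm.GLM(cases, X, family=sm.families.Poisson(),
               offset=np.log(person_years)).fit()
b = model.params[1]
lo, hi = model.conf_int()[1]
print(f"AAPC = {(np.exp(b) - 1) * 100:.1f}% "
      f"(95% CI {(np.exp(lo) - 1) * 100:.1f} to {(np.exp(hi) - 1) * 100:.1f})")
```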
Burke, Danielle L; Ensor, Joie; Snell, Kym I E; van der Windt, Danielle; Riley, Richard D
2018-06-01
Percentage study weights in meta-analysis reveal the contribution of each study toward the overall summary results and are especially important when some studies are considered outliers or at high risk of bias. In meta-analyses of test accuracy reviews, such as a bivariate meta-analysis of sensitivity and specificity, the percentage study weights are not currently derived. Rather, the focus is on representing the precision of study estimates on receiver operating characteristic plots by scaling the points relative to the study sample size or to their standard error. In this article, we recommend that researchers should also provide the percentage study weights directly, and we propose a method to derive them based on a decomposition of the Fisher information matrix. This method also generalises to a bivariate meta-regression so that percentage study weights can also be derived for estimates of study-level modifiers of test accuracy. Application is made to two meta-analyses examining test accuracy: one of ear temperature for diagnosis of fever in children and the other of positron emission tomography for diagnosis of Alzheimer's disease. These highlight that the percentage study weights provide important information that is otherwise hidden if the presentation only focuses on precision based on sample size or standard errors. Software code is provided for Stata, and we suggest that our proposed percentage weights should be routinely added to forest and receiver operating characteristic plots for sensitivity and specificity, to provide transparency of the contribution of each study toward the results. This has implications for the PRISMA-diagnostic test accuracy guidelines that are currently being produced. Copyright © 2017 John Wiley & Sons, Ltd.
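As a point of reference, in the simpler univariate fixed-effect setting the percentage weight of each study is just its inverse variance as a share of the total; the article's contribution is to generalise this idea to the bivariate sensitivity/specificity model through the Fisher information decomposition. The sketch below shows only the familiar univariate special case, with made-up standard errors.

```python
import numpy as np

# Univariate fixed-effect illustration only; the paper's method decomposes the
# Fisher information matrix of a bivariate model, which is not reproduced here.
se = np.array([0.10, 0.25, 0.15, 0.40])   # hypothetical standard errors (logit scale)
w = 1.0 / se ** 2
print(np.round(100 * w / w.sum(), 1))     # percentage study weights
```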
NASA Astrophysics Data System (ADS)
Sadi, Maryam
2018-01-01
In this study a group method of data handling (GMDH) model has been successfully developed to predict the heat capacity of ionic-liquid-based nanofluids, using the reduced temperature, acentric factor and molecular weight of the ionic liquids, and the nanoparticle concentration, as input parameters. To accomplish the modeling, 528 experimental data points extracted from the literature were divided into training and testing subsets. The training set was used to estimate the model coefficients and the testing set was applied for model validation. The ability and accuracy of the developed model were evaluated by comparing the model predictions with experimental values using different statistical parameters such as the coefficient of determination, mean square error and mean absolute percentage error. The mean absolute percentage errors of the developed model for the training and testing sets are 1.38% and 1.66%, respectively, which indicate excellent agreement between the model predictions and the experimental data. The results estimated by the developed GMDH model also exhibit higher accuracy when compared to the available theoretical correlations.
Raab, Stephen S; Andrew-Jaja, Carey; Condel, Jennifer L; Dabbs, David J
2006-01-01
The objective of the study was to determine whether the Toyota production system process improves Papanicolaou test quality and patient safety. An 8-month nonconcurrent cohort study that included 464 case and 639 control women who had a Papanicolaou test was performed. Office workflow was redesigned using Toyota production system methods by introducing a 1-by-1 continuous flow process. We measured the frequency of Papanicolaou tests without a transformation zone component, follow-up and Bethesda System diagnostic frequency of atypical squamous cells of undetermined significance, and diagnostic error frequency. After the intervention, the percentage of Papanicolaou tests lacking a transformation zone component decreased from 9.9% to 4.7% (P = .001). The percentage of Papanicolaou tests with a diagnosis of atypical squamous cells of undetermined significance decreased from 7.8% to 3.9% (P = .007). The frequency of error per correlating cytologic-histologic specimen pair decreased from 9.52% to 7.84%. The introduction of the Toyota production system process resulted in improved Papanicolaou test quality.
Clark, Ross A; Paterson, Kade; Ritchie, Callan; Blundell, Simon; Bryant, Adam L
2011-03-01
Commercial timing light systems (CTLS) provide precise measurement of athletes' running velocity; however, they are often expensive and difficult to transport. In this study an inexpensive, wireless and portable timing light system was created using the infrared camera in Nintendo Wii hand controllers (NWHC). System creation with gold-standard validation. A Windows-based software program using NWHC to replicate a dual-beam timing gate was created. Firstly, data collected during 2 m walking and running trials were validated against a 3D kinematic system. Secondly, data recorded during 5 m running trials at various intensities from standing or flying starts were compared with a single-beam CTLS and the independent and average scores of three handheld stopwatch (HS) operators. Intraclass correlation coefficients (ICC) and Bland-Altman plots were used to assess validity. Absolute error quartiles and the percentage of trials within absolute error threshold ranges were used to determine accuracy. The NWHC system was valid when compared against the 3D kinematic system (ICC=0.99, median absolute error (MAR)=2.95%). For the flying 5 m trials the NWHC system possessed excellent validity and precision (ICC=0.97, MAR<3%) when compared with the CTLS. In contrast, the NWHC system and the HS values during standing-start trials possessed only modest validity (ICC<0.75) and accuracy (MAR>8%). A NWHC timing light system is inexpensive, portable and valid for assessing running velocity. Errors in the 5 m standing-start trials may have been due to erroneous event detection by either the commercial or the NWHC-based timing light systems. Copyright © 2010 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
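Agreement between two timing systems of this kind is typically summarised by the Bland-Altman bias and 95% limits of agreement. A minimal sketch with fabricated sprint times (not the study's data):

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman bias and 95% limits of agreement between two timing systems."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

wii  = [1.02, 1.15, 0.98, 1.21, 1.05]    # hypothetical 5 m sprint times (s)
ctls = [1.01, 1.17, 0.99, 1.19, 1.06]
print(bland_altman(wii, ctls))
```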
Method of estimating natural recharge to the Edwards Aquifer in the San Antonio area, Texas
Puente, Celso
1978-01-01
The principal errors in the estimates of annual recharge are related to errors in estimating runoff in ungaged areas, which represent about 30 percent of the infiltration area. The estimated long-term average annual recharge in each basin, however, is probably representative of the actual recharge because the averaging procedure tends to cancel out the major errors.
Reconstruction of regional mean temperature for East Asia since 1900s and its uncertainties
NASA Astrophysics Data System (ADS)
Hua, W.
2017-12-01
The regional average surface air temperature (SAT) is one of the key variables often used to investigate climate change. Unfortunately, because of the limited observations over East Asia, there are gaps in the observational data sampling for regional mean SAT analysis, which is important for estimating past climate change. In this study, the regional average temperature of East Asia since the 1900s is calculated with an Empirical Orthogonal Function (EOF)-based optimal interpolation method that takes the data errors into account. The results show that our estimate is more precise and robust than that obtained from a simple average, providing a better approach to past climate reconstruction. In addition to the reconstructed regional average SAT anomaly time series, we also estimated the uncertainties of the reconstruction. The root mean square error (RMSE) results show that the error decreases with time and is not sufficiently large to alter the conclusions on the persistent warming in East Asia during the twenty-first century. Moreover, a test of the influence of data error on the reconstruction clearly shows the sensitivity of the reconstruction to the size of the data error.
NASA Astrophysics Data System (ADS)
Liu, Yao; Liu, Baoliang; Lei, Jilin; Guan, Changtao; Huang, Bin
2017-07-01
A three-dimensional numerical model was established to simulate the hydrodynamics within an octagonal tank of a recirculating aquaculture system. The realizable k-ɛ turbulence model was applied to describe the flow, the discrete phase model (DPM) was applied to generate particle trajectories, and the governing equations were solved using the finite volume method. To validate this model, the numerical results were compared with data obtained from a full-scale physical model. The results show that: (1) the realizable k-ɛ model describes the flow pattern in octagonal tanks well, giving an average relative error between simulated and measured velocities of 18% from contour maps of velocity magnitudes; (2) the DPM was applied to obtain particle trajectories and to simulate the rate of particle removal from the tank, with an average relative error between simulated and measured removal rates of 11%, so the DPM can be used to assess the self-cleaning capability of an octagonal tank; (3) a comprehensive account of the hydrodynamics within an octagonal tank can be obtained from the simulations. The velocity distribution was uniform, with an average velocity of 15 cm/s; the velocity reached 0.8 m/s near the inlet pipe, which can result in energy losses and cause wall abrasion; the velocity in the tank corners was more than 15 cm/s, which suggests good water mixing, and there was no particle sedimentation. The percentage of particle removal for the octagonal tank was 90%, with the exception of a small accumulation of ≤ 5 mm particles in the area between the inlet pipe and the wall. This study demonstrates a consistent numerical model of the hydrodynamics within octagonal tanks that can be further used in their design and optimization and promote the wider use of computational fluid dynamics in aquaculture engineering.
Ellen M. Hines; Janet Franklin
1997-01-01
Using a Geographic Information System (GIS), a sensitivity analysis was performed on estimated mapping errors in vegetation type, forest canopy cover percentage, and tree crown size to determine the possible effects error in these data might have on delineating suitable habitat for the California Spotted Owl (Strix occidentalis occidentalis) in...
Levine, C; Younglove, T; Barth, M
2000-10-01
Recent studies have shown large increases in vehicle emissions when the air conditioner (AC) compressor is engaged. Factors that affect the compressor-on percentage can have a significant impact on vehicle emissions and can also lead to prediction errors in current emissions models if not accounted for properly. During 1996 and 1997, the University of California, Riverside, College of Engineering-Center for Environmental Research and Technology (CE-CERT) conducted a vehicle activity study for the California Air Resources Board (CARB) in the Sacramento, CA, region. The vehicles were randomly selected from all registered vehicles in the region. As part of this study, ten vehicles were instrumented to collect AC compressor on/off data on a second-by-second basis in the summer of 1997. Temperature and humidity data were obtained and averaged on an hourly basis. The ten drivers were asked to complete a short survey about AC operational preferences. This paper examines the effects of temperature, humidity, refrigerant type, and driver preferences on air conditioning compressor activity. Overall, AC was in use in 69.1% of the trips monitored. The compressor was on an average of 64% of the time during the trips. The personal preference settings had a significant effect on the AC compressor-on percentage but did not interact with temperature. The refrigerant types, however, exhibited a differential response across temperature, which may necessitate separate modeling of the R12 refrigerant-equipped vehicles from the R134A-equipped vehicles. It should be noted that some older vehicles do get retrofitted with new compressors that use R134A; however, none of the vehicles in this study had been retrofitted.
A univariate model of river water nitrate time series
NASA Astrophysics Data System (ADS)
Worrall, F.; Burt, T. P.
1999-01-01
Four time series were taken from three catchments in the North and South of England. The sites chosen included two in predominantly agricultural catchments, one at the tidal limit and one downstream of a sewage treatment works. A time series model was constructed for each of these series as a means of decomposing the elements controlling river water nitrate concentrations and to assess whether this approach could provide a simple management tool for protecting water abstractions. Autoregressive (AR) modelling of the detrended and deseasoned time series showed a "memory effect". This memory effect expressed itself as an increase in the winter-summer difference in nitrate levels that was dependent upon the nitrate concentration 12 or 6 months previously. Autoregressive moving average (ARMA) modelling showed that one of the series contained seasonal, non-stationary elements that appeared as an increasing trend in the winter-summer difference. The ARMA model was used to predict nitrate levels and predictions were tested against data held back from the model construction process - predictions gave average percentage errors of less than 10%. Empirical modelling can therefore provide a simple, efficient method for constructing management models for downstream water abstraction.
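The ARMA workflow described above, fitting a seasonal model to the detrended and deseasoned record and checking held-back predictions against observations, can be sketched with standard tooling. The series, model orders and error check below are illustrative stand-ins for the catchment data, not the authors' analysis.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Illustrative monthly nitrate series (mg/l): trend + seasonality + noise,
# standing in for the catchment data described above.
rng = np.random.default_rng(3)
t = np.arange(120)
nitrate = 20 + 0.02 * t + 4 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, t.size)

train, test = nitrate[:108], nitrate[108:]
model = ARIMA(train, order=(1, 0, 0), seasonal_order=(1, 1, 0, 12)).fit()
forecast = model.forecast(steps=test.size)

ape = np.abs((test - forecast) / test) * 100   # percentage error of held-back data
print(f"mean absolute percentage error: {ape.mean():.1f}%")
```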
The correlation between indoor and in soil radon concentrations in a desert climate
NASA Astrophysics Data System (ADS)
Al-Khateeb, H. M.; Aljarrah, K. M.; Alzoubi, F. Y.; Alqadi, M. K.; Ahmad, A. A.
2017-01-01
This study examines the levels of and the correlation between indoor and in-soil radon concentrations in a desert climate. The measurements were carried out in the Jordan desert in the AlMafraq district using the passive integrated technique. An intelligent automated track-counting system, modified recently by our group, was used to estimate overlapping tracks and to decrease the counting percentage error. Results show that the radon concentration in soil ranges from 4.09 to 11.30 kBq m-3, with an average of 7.53 kBq m-3. Indoor radon concentrations vary from 20.2 Bq m-3 in AlMafraq city to 46.7 Bq m-3 in Housha village, with an average of 29.6 Bq m-3. All individual indoor radon concentrations are lower than the limit (100 Bq m-3) recommended by the WHO, except for two dwellings in Housha village, which were found to exceed this limit. A moderate linear correlation (R2=0.66) was observed between indoor and in-soil radon concentrations in the investigated region. Our results show that in-soil radon measurements can be a satisfactory predictor of indoor radon potential.
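A linear correlation of this kind can be quantified with an ordinary least-squares fit. The sketch below uses fabricated paired values (not the study's measurements) to show how R² and the slope would be obtained.

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical paired measurements: soil radon in kBq/m3, indoor radon in Bq/m3.
soil   = np.array([4.1, 5.2, 6.0, 7.1, 7.9, 8.8, 9.6, 10.4, 11.3])
indoor = np.array([20.5, 22.0, 24.8, 27.1, 29.0, 32.5, 35.2, 40.1, 46.7])

fit = linregress(soil, indoor)
print(f"R^2 = {fit.rvalue**2:.2f}, slope = {fit.slope:.2f} Bq/m3 per kBq/m3")
```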
Forecasting Daily Volume and Acuity of Patients in the Emergency Department.
Calegari, Rafael; Fogliatto, Flavio S; Lucini, Filipe R; Neyeloff, Jeruza; Kuchenbecker, Ricardo S; Schaan, Beatriz D
2016-01-01
This study aimed at analyzing the performance of four forecasting models in predicting the demand for medical care in terms of daily visits in an emergency department (ED) that handles high complexity cases, testing the influence of climatic and calendrical factors on demand behavior. We tested different mathematical models to forecast ED daily visits at Hospital de Clínicas de Porto Alegre (HCPA), which is a tertiary care teaching hospital located in Southern Brazil. Model accuracy was evaluated using mean absolute percentage error (MAPE), considering forecasting horizons of 1, 7, 14, 21, and 30 days. The demand time series was stratified according to patient classification using the Manchester Triage System's (MTS) criteria. Models tested were the simple seasonal exponential smoothing (SS), seasonal multiplicative Holt-Winters (SMHW), seasonal autoregressive integrated moving average (SARIMA), and multivariate autoregressive integrated moving average (MSARIMA). Performance of models varied according to patient classification, such that SS was the best choice when all types of patients were jointly considered, and SARIMA was the most accurate for modeling demands of very urgent (VU) and urgent (U) patients. The MSARIMA models taking into account climatic factors did not improve the performance of the SARIMA models, independent of patient classification.
Reilly, K A; Beard, D J; Barker, K L; Dodd, C A F; Price, A J; Murray, D W
2005-10-01
Unicompartmental knee arthroplasty (UKA) is appropriate for one in four patients with osteoarthritic knees. This study was performed to compare the safety, effectiveness and economic viability of a new accelerated protocol with current standard care in a state healthcare system. A single-blind RCT design was used. Eligible patients were screened for NSAID tolerance, social circumstances and geographical location before allocation to an accelerated recovery group (A) or a standard care group (S). The primary outcome was the Oxford Knee Assessment at 6 months post-operation, compared using independent Mann-Whitney U-tests. A simple difference in costs incurred was calculated. The study power was sufficient to avoid type 2 errors. Forty-one patients were included. The average stay for Group A was 1.5 days; Group S averaged 4.3 days. No significant difference in outcomes was found between groups. The new protocol achieved cost savings of 27% and significantly reduced hospital bed occupancy. In addition, patient satisfaction was greater with the accelerated discharge than with the routine discharge time. The strict inclusion criteria meant that 75% of eligible patients were excluded; a large percentage of these exclusions were due to the distances patients lived from the hospital.
Land use policy and agricultural water management of the previous half of century in Africa
NASA Astrophysics Data System (ADS)
Valipour, Mohammad
2015-12-01
This paper examines land use policy and agricultural water management in Africa from 1962 to 2011. For this purpose, data were gathered from the Food and Agriculture Organization of the United Nations (FAO) and the World Bank Group. Using the FAO database, ten indices were selected: permanent crops to cultivated area (%), rural population to total population (%), total economically active population in agriculture to total economically active population (%), human development index, national rainfall index (mm/year), value added to gross domestic product by agriculture (%), irrigation water requirement (mm/year), percentage of total cultivated area drained (%), difference between national rainfall index and irrigation water requirement (mm/year), and area equipped for irrigation to cultivated area, or land use policy index (%). These indices were analyzed for all 53 countries in the study area, and the land use policy index was estimated by two different formulas. The results show that the relative error is <20%. In addition, an average index was calculated using various methods to assess countries' conditions for agricultural water management. The capability of irrigation and drainage systems was studied using eight other indices with more limited information. These indices are surface irrigation (%), sprinkler irrigation (%), localized irrigation (%), spate irrigation (%), agricultural water withdrawal (10 km3/year), conservation agriculture area as percentage of cultivated area (%), percentage of area equipped for irrigation salinized (%), and area waterlogged by irrigation (%). Finally, the tendency of farmers to use irrigation systems for cultivated crops is presented. The results show that Africa needs government policies that encourage farmers to use irrigation systems and raise cropping intensity in irrigated areas.
Althomali, Talal A.
2018-01-01
Background: Refractive errors are a form of optical defect affecting more than 2.3 billion people worldwide. As refractive errors are a major contributor to mild to moderate vision impairment, assessment of their relative proportion would be helpful in the strategic planning of health programs. Purpose: To determine the pattern of the relative proportion of types of refractive errors among adult candidates seeking laser-assisted refractive correction in a private clinic setting in Saudi Arabia. Methods: The clinical charts of 687 patients (1374 eyes) with mean age 27.6 ± 7.5 years who desired laser vision correction and underwent a pre-LASIK work-up were reviewed retrospectively. Refractive errors were classified as myopia, hyperopia and astigmatism. Manifest refraction spherical equivalent (MRSE) was applied to define refractive errors. Outcome Measures: Distribution percentage of different types of refractive errors: myopia, hyperopia and astigmatism. Results: The mean spherical equivalent for 1374 eyes was -3.11 ± 2.88 D. Of the total 1374 eyes, 91.8% (n = 1262) had myopia, 4.7% (n = 65) had hyperopia and 3.4% (n = 47) had emmetropia with astigmatism. The distribution percentage of astigmatism (cylinder error of ≥ 0.50 D) was 78.5% (1078/1374 eyes), of which 69.1% (994/1374) had low to moderate astigmatism and 9.4% (129/1374) had high astigmatism. Conclusion and Relevance: Among adult candidates seeking laser refractive correction in a private setting in Saudi Arabia, myopia represented the greatest burden, with more than 90% of eyes being myopic, compared with hyperopia in nearly 5% of eyes. Astigmatism was present in more than 78% of eyes. PMID:29872484
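For illustration only, the sketch below classifies a single eye by its manifest refraction spherical equivalent (MRSE = sphere + cylinder/2); the ±0.50 D cut-offs are common conventions assumed here, not necessarily the exact criteria used in the paper.

# Illustrative sketch: MRSE-based classification of refractive error for one eye.
def classify_refraction(sphere_d, cylinder_d):
    mrse = sphere_d + cylinder_d / 2.0
    if mrse <= -0.50:
        refractive_type = "myopia"
    elif mrse >= 0.50:
        refractive_type = "hyperopia"
    else:
        refractive_type = "emmetropia"
    astigmatism = abs(cylinder_d) >= 0.50   # assumed cylinder threshold
    return refractive_type, astigmatism

print(classify_refraction(-3.25, -0.75))   # ('myopia', True)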
"First, know thyself": cognition and error in medicine.
Elia, Fabrizio; Aprà, Franco; Verhovez, Andrea; Crupi, Vincenzo
2016-04-01
Although error is an integral part of the world of medicine, physicians have always been little inclined to take into account their own mistakes, and the extraordinary technological progress observed in the last decades does not seem to have resulted in a significant reduction in the percentage of diagnostic errors. The failure to reduce diagnostic errors, notwithstanding the considerable investment in human and economic resources, has paved the way to new strategies made available by the development of cognitive psychology, the branch of psychology that aims at understanding the mechanisms of human reasoning. This new approach led us to realize that we are not fully rational agents able to take decisions on the basis of logical and probabilistically appropriate evaluations. In us, two different and mostly independent modes of reasoning coexist: a fast or non-analytical mode, which tends to be largely automatic and fast-reactive, and a slow or analytical mode, which permits rationally founded answers. One of the features of the fast mode of reasoning is the employment of standardized rules, termed "heuristics." Heuristics lead physicians to correct choices in a large percentage of cases. Unfortunately, cases exist wherein the triggered heuristic fails to fit the target problem, so that the fast mode of reasoning can lead us to unreflectively perform actions exposing us and others to variable degrees of risk. Cognitive errors arise as a result of these cases. Our review illustrates how cognitive errors can cause diagnostic problems in clinical practice.
Methods for estimating flood frequency in Montana based on data through water year 1998
Parrett, Charles; Johnson, Dave R.
2004-01-01
Annual peak discharges having recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years (T-year floods) were determined for 660 gaged sites in Montana and in adjacent areas of Idaho, Wyoming, and Canada, based on data through water year 1998. The updated flood-frequency information was subsequently used in regression analyses, either ordinary or generalized least squares, to develop equations relating T-year floods to various basin and climatic characteristics, equations relating T-year floods to active-channel width, and equations relating T-year floods to bankfull width. The equations can be used to estimate flood frequency at ungaged sites. Montana was divided into eight regions, within which flood characteristics were considered to be reasonably homogeneous, and the three sets of regression equations were developed for each region. A measure of the overall reliability of the regression equations is the average standard error of prediction. The average standard errors of prediction for the equations based on basin and climatic characteristics ranged from 37.4 percent to 134.1 percent. Average standard errors of prediction for the equations based on active-channel width ranged from 57.2 percent to 141.3 percent. Average standard errors of prediction for the equations based on bankfull width ranged from 63.1 percent to 155.5 percent. In most regions, the equations based on basin and climatic characteristics generally had smaller average standard errors of prediction than equations based on active-channel or bankfull width. An exception was the Southeast Plains Region, where all equations based on active-channel width had smaller average standard errors of prediction than equations based on basin and climatic characteristics or bankfull width. Methods for weighting estimates derived from the basin- and climatic-characteristic equations and the channel-width equations also were developed. The weights were based on the cross correlation of residuals from the different methods and the average standard errors of prediction. When all three methods were combined, the average standard errors of prediction ranged from 37.4 percent to 120.2 percent. Weighting of estimates reduced the standard errors of prediction for all T-year flood estimates in four regions, reduced the standard errors of prediction for some T-year flood estimates in two regions, and provided no reduction in average standard error of prediction in two regions. A computer program for solving the regression equations, weighting estimates, and determining reliability of individual estimates was developed and placed on the USGS Montana District World Wide Web page. A new regression method, termed Region of Influence regression, also was tested. Test results indicated that the Region of Influence method was not as reliable as the regional equations based on generalized least squares regression. Two additional methods for estimating flood frequency at ungaged sites located on the same streams as gaged sites also are described. The first method, based on a drainage-area-ratio adjustment, is intended for use on streams where the ungaged site of interest is located near a gaged site. The second method, based on interpolation between gaged sites, is intended for use on streams that have two or more streamflow-gaging stations.
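As a hedged illustration of how estimates from different methods can be weighted by their reliability, the sketch below combines two hypothetical 100-year flood estimates using simple inverse-variance weights on their standard errors of prediction; the published USGS method additionally accounts for the cross-correlation of residuals between methods, which this simplification ignores.

# Illustrative sketch only: inverse-variance weighting of two flood estimates.
import numpy as np

def combine_estimates(q_estimates, se_percent):
    # Work in log space, where percentage standard errors behave approximately additively.
    q = np.log(np.asarray(q_estimates, float))
    var = (np.asarray(se_percent, float) / 100.0) ** 2
    weights = (1.0 / var) / np.sum(1.0 / var)
    q_combined = np.exp(np.sum(weights * q))
    se_combined = 100.0 * np.sqrt(1.0 / np.sum(1.0 / var))
    return q_combined, se_combined

# Hypothetical 100-year flood estimates (cfs) from the regression and channel-width methods.
print(combine_estimates([5200.0, 4700.0], [60.0, 90.0]))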
Wiegmann, D A; Shappell, S A
1999-12-01
The present study examined the role of human error and crew-resource management (CRM) failures in U.S. Naval aviation mishaps. All tactical jet (TACAIR) and rotary wing Class A flight mishaps between fiscal years 1990-1996 were reviewed. Results indicated that over 75% of both TACAIR and rotary wing mishaps were attributable, at least in part, to some form of human error, of which 70% were associated with aircrew human factors. Of these aircrew-related mishaps, approximately 56% involved at least one CRM failure. These percentages are very similar to those observed prior to the implementation of aircrew coordination training (ACT) in the fleet, suggesting that the initial benefits of the program have not persisted and that CRM failures continue to plague Naval aviation. Closer examination of these CRM-related mishaps suggests that the type of flight operation (preflight, routine, emergency) does play a role in the etiology of CRM failures. A larger percentage of CRM failures occurred during non-routine or extremis flight situations when TACAIR mishaps were considered. In contrast, a larger percentage of rotary wing CRM mishaps involved failures that occurred during routine flight operations. These findings illustrate the complex etiology of CRM failures within Naval aviation and support the need for ACT programs tailored to the unique problems faced by specific communities in the fleet.
Conklin, Annalijn I; Ponce, Ninez A; Frank, John; Nandi, Arijit; Heymann, Jody
2016-01-01
To describe the relationship between minimum wage and overweight and obesity across countries at different levels of development. A cross-sectional analysis of 27 countries with data on the legislated minimum wage level linked to socio-demographic and anthropometric data of 190,892 non-pregnant adult women (24-49 y) from the Demographic and Health Survey. We used multilevel logistic regression models to condition on country- and individual-level potential confounders, and post-estimation of average marginal effects to calculate the adjusted prevalence difference. We found that the association between minimum wage and overweight/obesity was independent of individual-level SES and confounders, and showed a reversed pattern by country development stage. The adjusted overweight/obesity prevalence difference in low-income countries was an average increase of about 0.1 percentage points (PD 0.075 [0.065, 0.084]), and an average decrease of 0.01 percentage points in middle-income countries (PD -0.014 [-0.019, -0.009]). The adjusted obesity prevalence difference in low-income countries was an average increase of 0.03 percentage points (PD 0.032 [0.021, 0.042]) and an average decrease of 0.03 percentage points in middle-income countries (PD -0.032 [-0.036, -0.027]). This is among the first studies to examine the potential impact of improved wages on an important precursor of non-communicable diseases globally. Among countries with a modest level of economic development, higher minimum wage was associated with lower levels of obesity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Damato, AL; Bhagwat, MS; Buzurovic, I
Purpose: To investigate the use of a system using EM tracking, postprocessing and error-detection algorithms for measuring brachytherapy catheter locations and for detecting errors and resolving uncertainties in treatment-planning catheter digitization. Methods: An EM tracker was used to localize 13 catheters in a clinical surface applicator (A) and 15 catheters inserted into a phantom (B). Two pairs of catheters in (B) crossed paths at a distance <2 mm, producing an undistinguishable catheter artifact in that location. EM data was post-processed for noise reduction and reformatted to provide the dwell location configuration. CT-based digitization was automatically extracted from the brachytherapy plan DICOM files (CT). EM dwell digitization error was characterized in terms of the average and maximum distance between corresponding EM and CT dwells per catheter. The error detection rate (detected errors / all errors) was calculated for 3 types of errors: swap of two catheter numbers; incorrect catheter number identification superior to the closest position between two catheters (mix); and catheter-tip shift. Results: The averages ± 1 standard deviation of the average and maximum registration error per catheter were 1.9±0.7 mm and 3.0±1.1 mm for (A) and 1.6±0.6 mm and 2.7±0.8 mm for (B). The error detection rate was 100% (A and B) for swap errors, mix errors, and shift >4.5 mm (A) and >5.5 mm (B); errors were detected for shifts on average >2.0 mm (A) and >2.4 mm (B). Both mix errors associated with undistinguishable catheter artifacts were detected and at least one of the involved catheters was identified. Conclusion: We demonstrated the use of an EM tracking system for localization of brachytherapy catheters, detection of digitization errors and resolution of undistinguishable catheter artifacts. Automatic digitization may be possible with a registration between the imaging and the EM frame of reference. Research funded by the Kaye Family Award 2012.
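The per-catheter error metric described above (average and maximum distance between corresponding EM and CT dwell positions) can be illustrated with the hypothetical sketch below; the dwell coordinates are invented and this is not the authors' software.

# Illustrative sketch: per-catheter registration error between EM and CT dwell positions (mm).
import numpy as np

def dwell_registration_error(em_dwells_mm, ct_dwells_mm):
    d = np.linalg.norm(np.asarray(em_dwells_mm) - np.asarray(ct_dwells_mm), axis=1)
    return d.mean(), d.max()

em = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 5.0], [0.0, 0.0, 10.0]])   # hypothetical EM dwells
ct = np.array([[1.2, 0.5, 0.3], [0.8, -0.9, 5.4], [1.5, 0.7, 10.9]])  # hypothetical CT dwells
avg_err, max_err = dwell_registration_error(em, ct)
print(f"average error {avg_err:.1f} mm, maximum error {max_err:.1f} mm")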
Tey, Wei Keat; Kuang, Ye Chow; Ooi, Melanie Po-Leen; Khoo, Joon Joon
2018-03-01
Interstitial fibrosis in renal biopsy samples is a scarring tissue structure that may be visually quantified by pathologists as an indicator of the presence and extent of chronic kidney disease. The standard method of quantification by visual evaluation presents reproducibility issues in the diagnoses due to the uncertainties in human judgement. An automated quantification system for accurately measuring the amount of interstitial fibrosis in renal biopsy images is presented as a consistent basis of comparison among pathologists. The system identifies the renal tissue structures through knowledge-based rules employing colour space transformations and structural feature extraction from the images. In particular, renal glomerulus identification is based on a multiscale textural feature analysis and a support vector machine. The regions in the biopsy representing interstitial fibrosis are deduced through the elimination of non-interstitial fibrosis structures from the biopsy area and quantified as a percentage of the total area of the biopsy sample. The experiments conducted evaluate the system in terms of quantification accuracy, intra- and inter-observer variability in visual quantification by pathologists, and the effect introduced by the automated quantification system on the pathologists' diagnosis. A 40-image ground truth dataset was manually prepared by consulting an experienced pathologist for the validation of the segmentation algorithms. The results from experiments involving experienced pathologists demonstrated an average error of 9 percentage points between the automated system's quantification and the pathologists' visual evaluation. Experiments investigating the variability in pathologists, involving samples from 70 kidney patients, also showed the automated quantification error rate to be on par with the average intra-observer variability in pathologists' quantification. The accuracy of the proposed quantification system has been validated with the ground truth dataset and compared against the pathologists' quantification results.
It has been shown that the correlation between different pathologists' estimates of the interstitial fibrosis area improved significantly, demonstrating the effectiveness of the quantification system as a diagnostic aid. Copyright © 2017 Elsevier B.V. All rights reserved.
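A minimal sketch of the final quantification step, assuming binary segmentation masks are already available; the paper's actual pipeline (knowledge-based rules, colour transformations, SVM-based glomerulus detection) is not reproduced here and the masks below are synthetic.

# Illustrative sketch: interstitial fibrosis as a percentage of the biopsy area.
import numpy as np

def fibrosis_percentage(biopsy_mask, non_fibrosis_mask):
    # Fibrosis region = biopsy area with all non-fibrosis structures eliminated.
    fibrosis_mask = biopsy_mask & ~non_fibrosis_mask
    return 100.0 * fibrosis_mask.sum() / biopsy_mask.sum()

rng = np.random.default_rng(0)
biopsy = np.ones((200, 200), dtype=bool)
non_fibrosis = rng.random((200, 200)) < 0.7   # e.g. glomeruli, tubules, vessels (synthetic)
print(f"interstitial fibrosis: {fibrosis_percentage(biopsy, non_fibrosis):.1f}% of biopsy area")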
Conditions for the optical wireless links bit error ratio determination
NASA Astrophysics Data System (ADS)
Kvíčala, Radek
2017-11-01
To determine the quality of Optical Wireless Links (OWL), it is necessary to establish their availability and probability of interruption. This quality can be characterized by the bit error rate (BER) of the optical beam. The bit error rate expresses the fraction of transmitted bits that are received in error. In practice, BER measurement runs into the problem of determining the integration time (measuring time). For measuring and recording BER on OWL, a bit error ratio tester (BERT) has been developed. A 1-second integration time for 64 kbps radio links is mentioned in the accessible literature; however, this integration time cannot be used here because of the singular character of coherent beam propagation.
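As a minimal illustration of the quantity being measured (not part of the study itself), the sketch below computes a BER over one integration window from synthetic bit streams; the bit rate, window length and channel error probability are all assumed values.

# Illustrative sketch: bit error ratio = errored bits / total transmitted bits in one window.
import numpy as np

def bit_error_ratio(sent_bits, received_bits):
    sent, received = np.asarray(sent_bits), np.asarray(received_bits)
    return np.count_nonzero(sent != received) / sent.size

rng = np.random.default_rng(1)
sent = rng.integers(0, 2, size=64_000)            # e.g. one second at 64 kbps (assumed)
errors = rng.random(sent.size) < 1e-3             # hypothetical channel error probability
received = sent ^ errors.astype(sent.dtype)       # flip the errored bits
print(f"BER = {bit_error_ratio(sent, received):.2e}")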
Anisotropic scattering of discrete particle arrays.
Paul, Joseph S; Fu, Wai Chong; Dokos, Socrates; Box, Michael
2010-05-01
Far-field intensities of light scattered from a linear centro-symmetric array illuminated by a plane wave of incident light are estimated at a series of detector angles. The intensities are computed from the superposition of E-fields scattered by the individual array elements. An average scattering phase function is used to model the scattered fields of individual array elements. The nature of scattering from the array is investigated using an image (theta-phi plot) of the far-field intensities computed at a series of locations obtained by rotating the detector angle from 0 degrees to 360 degrees, corresponding to each angle of incidence in the interval [0 degrees, 360 degrees]. The diffraction patterns observed from the theta-phi plot are compared with those for isotropic scattering. In the absence of prior information on the array geometry, the intensities corresponding to theta-phi pairs satisfying the Bragg condition are used to estimate the phase function. An algorithmic procedure is presented for this purpose and tested using synthetic data. The relative error between estimated and theoretical values of the phase function is shown to be determined by the mean spacing factor, the number of elements, and the far-field distance. An empirical relationship is presented to calculate the optimal far-field distance for a given specification of the percentage error.
Xu, Peng; Gordon, Mark S
2014-09-04
Anionic water clusters are generally considered to be extremely challenging to model using fragmentation approaches due to the diffuse nature of the excess electron distribution. The local correlation coupled cluster (CC) framework cluster-in-molecule (CIM) approach combined with the completely renormalized CR-CC(2,3) method [abbreviated CIM/CR-CC(2,3)] is shown to be a viable alternative for computing the vertical electron binding energies (VEBE). CIM/CR-CC(2,3) with the threshold parameter ζ set to 0.001, as a trade-off between accuracy and computational cost, demonstrates the reliability of predicting the VEBE, with an average percentage error of ∼15% compared to the full ab initio calculation at the same level of theory. The errors are predominantly from the electron correlation energy. The CIM/CR-CC(2,3) approach provides the ease of a black-box type calculation with few threshold parameters to manipulate. The cluster sizes that can be studied by high-level ab initio methods are significantly increased in comparison with full CC calculations. Therefore, the VEBE computed by the CIM/CR-CC(2,3) method can be used as benchmarks for testing model potential approaches in small-to-intermediate-sized water clusters.
Ultrasound biofeedback treatment for persisting childhood apraxia of speech.
Preston, Jonathan L; Brick, Nickole; Landi, Nicole
2013-11-01
The purpose of this study was to evaluate the efficacy of a treatment program that includes ultrasound biofeedback for children with persisting speech sound errors associated with childhood apraxia of speech (CAS). Six children ages 9-15 years participated in a multiple baseline experiment for 18 treatment sessions during which treatment focused on producing sequences involving lingual sounds. Children were cued to modify their tongue movements using visual feedback from real-time ultrasound images. Probe data were collected before, during, and after treatment to assess word-level accuracy for treated and untreated sound sequences. As participants reached preestablished performance criteria, new sequences were introduced into treatment. All participants met the performance criterion (80% accuracy for 2 consecutive sessions) on at least 2 treated sound sequences. Across the 6 participants, performance criterion was met for 23 of 31 treated sequences in an average of 5 sessions. Some participants showed no improvement in untreated sequences, whereas others showed generalization to untreated sequences that were phonetically similar to the treated sequences. Most gains were maintained 2 months after the end of treatment. The percentage of phonemes correct increased significantly from pretreatment to the 2-month follow-up. A treatment program including ultrasound biofeedback is a viable option for improving speech sound accuracy in children with persisting speech sound errors associated with CAS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gustafson, William I.; Qian, Yun; Fast, Jerome D.
2011-07-13
Recent improvements to many global climate models include detailed, prognostic aerosol calculations intended to better reproduce the observed climate. However, the trace gas and aerosol fields are treated at the grid-cell scale with no attempt to account for sub-grid impacts on the aerosol fields. This paper begins to quantify the error introduced by the neglected sub-grid variability for the shortwave aerosol radiative forcing for a representative climate model grid spacing of 75 km. An analysis of the value added in downscaling aerosol fields is also presented to give context to the WRF-Chem simulations used for the sub-grid analysis. We found that 1) the impact of neglected sub-grid variability on the aerosol radiative forcing is strongest in regions of complex topography and complicated flow patterns, and 2) scale-induced differences in emissions contribute strongly to the impact of neglected sub-grid processes on the aerosol radiative forcing. These two effects together, when simulated at 75 km vs. 3 km in WRF-Chem, result in an average daytime mean bias of over 30% error in top-of-atmosphere shortwave aerosol radiative forcing for a large percentage of central Mexico during the MILAGRO field campaign.
Accuracy Study of a Robotic System for MRI-guided Prostate Needle Placement
Seifabadi, Reza; Cho, Nathan BJ.; Song, Sang-Eun; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M.; Fichtinger, Gabor; Iordachita, Iulian
2013-01-01
Background Accurate needle placement is the first concern in percutaneous MRI-guided prostate interventions. In this phantom study, different sources contributing to the overall needle placement error of a MRI-guided robot for prostate biopsy have been identified, quantified, and minimized to the possible extent. Methods and Materials The overall needle placement error of the system was evaluated in a prostate phantom. This error was broken into two parts: the error associated with the robotic system (called before-insertion error) and the error associated with needle-tissue interaction (called due-to-insertion error). The before-insertion error was measured directly in a soft phantom and different sources contributing into this part were identified and quantified. A calibration methodology was developed to minimize the 4-DOF manipulator’s error. The due-to-insertion error was indirectly approximated by comparing the overall error and the before-insertion error. The effect of sterilization on the manipulator’s accuracy and repeatability was also studied. Results The average overall system error in phantom study was 2.5 mm (STD=1.1mm). The average robotic system error in super soft phantom was 1.3 mm (STD=0.7 mm). Assuming orthogonal error components, the needle-tissue interaction error was approximated to be 2.13 mm thus having larger contribution to the overall error. The average susceptibility artifact shift was 0.2 mm. The manipulator’s targeting accuracy was 0.71 mm (STD=0.21mm) after robot calibration. The robot’s repeatability was 0.13 mm. Sterilization had no noticeable influence on the robot’s accuracy and repeatability. Conclusions The experimental methodology presented in this paper may help researchers to identify, quantify, and minimize different sources contributing into the overall needle placement error of an MRI-guided robotic system for prostate needle placement. In the robotic system analyzed here, the overall error of the studied system remained within the acceptable range. PMID:22678990
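As a quick arithmetic check of the orthogonal-error assumption stated above, the due-to-insertion error can be recovered from the reported overall and before-insertion errors:

# Illustrative check: due-to-insertion error = sqrt(overall^2 - before-insertion^2).
import math

overall_mm = 2.5            # average overall system error in the phantom study
before_insertion_mm = 1.3   # average robotic-system error
due_to_insertion_mm = math.sqrt(overall_mm**2 - before_insertion_mm**2)
print(f"due-to-insertion error ~ {due_to_insertion_mm:.2f} mm")   # ~ 2.13 mm, as reported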
A review on Black-Scholes model in pricing warrants in Bursa Malaysia
NASA Astrophysics Data System (ADS)
Gunawan, Nur Izzaty Ilmiah Indra; Ibrahim, Siti Nur Iqmal; Rahim, Norhuda Abdul
2017-01-01
This paper studies the accuracy of the Black-Scholes (BS) model and the dilution-adjusted Black-Scholes (DABS) model in pricing selected warrants traded in the Malaysian market. Mean Absolute Error (MAE) and Mean Absolute Percentage Error (MAPE) are used to compare the two models. Results show that the DABS model is more accurate than the BS model for the selected data.
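For illustration, the sketch below prices a call-style warrant with the plain Black-Scholes formula and computes MAE and MAPE against hypothetical market prices; the dilution adjustment of the DABS model is not included, and all inputs (strike, maturity, rate, volatility, prices) are invented.

# Illustrative sketch: plain Black-Scholes call price plus MAE/MAPE against market prices.
import math

def bs_call(S, K, T, r, sigma):
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))   # standard normal CDF
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

market = [0.35, 0.52, 0.80]                                    # hypothetical warrant prices
model = [bs_call(S, 2.0, 0.5, 0.03, 0.4) for S in (1.9, 2.1, 2.3)]
mae = sum(abs(m - p) for m, p in zip(market, model)) / len(market)
mape = 100.0 * sum(abs(m - p) / m for m, p in zip(market, model)) / len(market)
print(f"MAE = {mae:.3f}, MAPE = {mape:.1f}%")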
Selected Oral Health Indicators in the United States, 2005-2008
... errors of the percentages were estimated using Taylor series linearization, to take into account the complex sampling design. The statistical significance of differences between estimates were ...
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)
2000-01-01
Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating RMS error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.
An Experimental Study of Small-Scale Variability of Raindrop Size Distribution
NASA Technical Reports Server (NTRS)
Tokay, Ali; Bashor, Paul G.
2010-01-01
An experimental study of small-scale variability of raindrop size distributions (DSDs) has been carried out at Wallops Island, Virginia. Three Joss-Waldvogel disdrometers were operated at distances of 0.65, 1.05, and 1.70 km in a nearly straight line. The main purpose of the study was to examine the variability of DSDs and its integral parameters of liquid water content, rainfall, and reflectivity within a 2-km array: a typical size of a Cartesian radar pixel. The composite DSD of rain events showed very good agreement among the disdrometers except where there were noticeable differences in midsize and large drops in a few events. For consideration of partial beam filling where the radar pixel was not completely covered by rain, a single disdrometer reported just over 10% more rainy minutes than the rainy minutes when all three disdrometers reported rainfall. Similarly, two out of three disdrometers reported 5% more rainy minutes than when all three were reporting rainfall. These percentages were based on a 1-min average, and were less for longer averaging periods. Considering only the minutes when all three disdrometers were reporting rainfall, just over one quarter of the observations showed an increase in the difference in rainfall with distance. This finding was based on a 15-min average and was even less for shorter averaging periods. The probability and cumulative distributions of a gamma-fitted DSD and integral rain parameters between the three disdrometers had a very good agreement and no major variability. This was mainly due to the high percentage of light stratiform rain and to the number of storms that traveled along the track of the disdrometers. At a fixed time step, however, both DSDs and integral rain parameters showed substantial variability. The standard deviation (SD) of rain rate was near 3 mm/h, while the SD of reflectivity exceeded 3 dBZ at the longest separation distance. These standard deviations were based on a 6-min average and were higher at shorter averaging periods. The correlations decreased with increasing separation distance. For rain rate, the correlations were higher than in previous gauge-based studies. This was attributed to differences in data processing and differences in rainfall characteristics in different climate regions. It was also considered that gauge sampling errors could be a factor. In this regard, gauge measurements were simulated employing an existing disdrometer dataset. While a difference was noticed in the cumulative distribution of rain occurrence between the simulated gauge and disdrometer observations, the correlations in simulated gauge measurements did not differ from the disdrometer measurements.
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Abdullah, A.; Martin, Russell L.; North, Gerald R.
1990-01-01
Estimates of monthly average rainfall based on satellite observations from a low earth orbit will differ from the true monthly average because the satellite observes a given area only intermittently. This sampling error inherent in satellite monitoring of rainfall would occur even if the satellite instruments could measure rainfall perfectly. The size of this error is estimated for a satellite system being studied at NASA, the Tropical Rainfall Measuring Mission (TRMM). First, the statistical description of rainfall on scales from 1 to 1000 km is examined in detail, based on rainfall data from the Global Atmospheric Research Project Atlantic Tropical Experiment (GATE). A TRMM-like satellite is flown over a two-dimensional time-evolving simulation of rainfall using a stochastic model with statistics tuned to agree with GATE statistics. The distribution of sampling errors found from many months of simulated observations is found to be nearly normal, even though the distribution of area-averaged rainfall is far from normal. For a range of orbits likely to be employed in TRMM, sampling error is found to be less than 10 percent of the mean for rainfall averaged over a 500 x 500 sq km area.
3D-modelling of the thermal circumstances of a lake under artificial aeration
NASA Astrophysics Data System (ADS)
Tian, Xiaoqing; Pan, Huachen; Köngäs, Petrina; Horppila, Jukka
2017-12-01
A 3D model was developed to study the effects of hypolimnetic aeration on the temperature profile of the thermally stratified Lake Vesijärvi (southern Finland). Aeration was conducted by pumping epilimnetic water through the thermocline to the hypolimnion without breaking the thermal stratification. The model used a time-transient formulation based on the Navier-Stokes equations. The model was fitted to the vertical temperature distribution and environmental parameters (wind, air temperature, and solar radiation) before the onset of aeration, and the model was used to predict the vertical temperature distribution 3 and 15 days after the onset of aeration (1 August and 22 August). The difference between the modelled and observed temperature was on average 0.6 °C. The average percentage model error was 4.0% on 1 August and 3.7% on 22 August. In the epilimnion, model accuracy depended on the difference between the observed temperature and the boundary conditions. In the hypolimnion, the model residual decreased with increasing depth. On 1 August, the model predicted a homogeneous temperature profile in the hypolimnion, while the observed temperature decreased moderately from the thermocline to the bottom. This was because the effect of sediment was not included in the model. On 22 August, the modelled and observed temperatures near the bottom were identical, demonstrating that the heat transfer by the aerator masked the effect of sediment and that the exclusion of sediment heat from the model does not cause considerable error unless very short-term effects of aeration are studied. In all, the model successfully described the effects of the aerator on the lake's temperature profile. The results confirmed the validity of the applied computational fluid dynamics approach for artificial aeration; based on the simulated results, the effect of aeration can be predicted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gierga, David P., E-mail: dgierga@partners.org; Harvard Medical School, Boston, Massachusetts; Turcotte, Julie C.
2012-12-01
Purpose: Breath-hold (BH) treatments can be used to reduce cardiac dose for patients with left-sided breast cancer and unfavorable cardiac anatomy. A surface imaging technique was developed for accurate patient setup and reproducible real-time BH positioning. Methods and Materials: Three-dimensional surface images were obtained for 20 patients. Surface imaging was used to correct the daily setup for each patient. Initial setup data were recorded for 443 fractions and were analyzed to assess random and systematic errors. Real-time monitoring was used to verify surface placement during BH. The radiation beam was not turned on if the BH position difference was greater than 5 mm. Real-time surface data were analyzed for 2398 BHs and 363 treatment fractions. The mean and maximum differences were calculated. The percentage of BHs greater than tolerance was calculated. Results: The mean shifts for initial patient setup were 2.0 mm, 1.2 mm, and 0.3 mm in the vertical, longitudinal, and lateral directions, respectively. The mean 3-dimensional vector shift was 7.8 mm. Random and systematic errors were less than 4 mm. Real-time surface monitoring data indicated that 22% of the BHs were outside the 5-mm tolerance (range, 7%-41%), and there was a correlation with breast volume. The mean difference between the treated and reference BH positions was 2 mm in each direction. For out-of-tolerance BHs, the average difference in the BH position was 6.3 mm, and the average maximum difference was 8.8 mm. Conclusions: Daily real-time surface imaging ensures accurate and reproducible positioning for BH treatment of left-sided breast cancer patients with unfavorable cardiac anatomy.
Canovas, Carmen; van der Mooren, Marrie; Rosén, Robert; Piers, Patricia A; Wang, Li; Koch, Douglas D; Artal, Pablo
2015-05-01
To determine the impact of the equivalent refractive index (ERI) on intraocular lens (IOL) power prediction for eyes with previous myopic laser in situ keratomileusis (LASIK) using custom ray tracing. AMO B.V., Groningen, the Netherlands, and the Department of Ophthalmology, Baylor College of Medicine, Houston, Texas, USA. Retrospective data analysis. The ERI was calculated individually from the post-LASIK total corneal power. Two methods to account for the posterior corneal surface were tested; that is, calculation from pre-LASIK data or from post-LASIK data only. Four IOL power predictions were generated using a computer-based ray-tracing technique, including individual ERI results from both calculation methods, a mean ERI over the whole population, and the ERI for normal patients. For each patient, IOL power results calculated from the four predictions as well as those obtained with the Haigis-L were compared with the optimum IOL power calculated after cataract surgery. The study evaluated 25 patients. The mean and range of ERI values determined using post-LASIK data were similar to those determined from pre-LASIK data. Introducing individual or an average ERI in the ray-tracing IOL power calculation procedure resulted in mean IOL power errors that were not significantly different from zero. The ray-tracing procedure that includes an average ERI gave a greater percentage of eyes with an IOL power prediction error within ±0.5 diopter than the Haigis-L (84% versus 52%). For IOL power determination in post-LASIK patients, custom ray tracing including a modified ERI was an accurate procedure that exceeded the current standards for normal eyes. Copyright © 2015 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Clinical time series prediction: Toward a hierarchical dynamical system framework.
Liu, Zitao; Hauskrecht, Milos
2015-09-01
Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding of the patient condition, the dynamics of a disease, effect of various patient management interventions and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Our hierarchical dynamical system framework for modeling clinical time series combines advantages of the two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series by using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. We tested our framework by first learning the time series model from data for the patients in the training set, and then using it to predict future time series values for the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when it was compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered. A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive performance. Copyright © 2014 Elsevier B.V. All rights reserved.
Does the Market Value Racial and Ethnic Concordance in Physician–Patient Relationships?
Brown, Timothy T; Scheffler, Richard M; Tom, Sarah E; Schulman, Kevin A
2007-01-01
Objective To determine if the market-determined earnings per hour of physicians is sensitive to the degree of area-level racial/ethnic concordance (ALREC) in the local physician labor market. Data Sources 1998–1999 and 2000–2001 Community Tracking Study Physician Surveys and Household Surveys, 2000 U.S. Census, and the Area Resource File. Study Design Population-averaged regression models with area-level fixed effects were used to estimate the determinants of log earnings per hour for physicians in a two-period panel (N = 12,886). ALREC for a given racial/ethnic group is measured as the percentage of physicians who are of a given race/ethnicity less the percentage of the population who are of the corresponding race/ethnicity. Relevant control variables were included. Principal Findings Average earnings per hour for Hispanic and Asian physicians varies with the degree of ALREC that corresponds to a physician's race/ethnicity. Both Hispanic and Asian physicians earn more per hour in areas where corresponding ALREC is negative, other things equal. ALREC varies from negative to positive for all groups. ALREC for Hispanics is negative, on average, due to the small percentage of the physician workforce that is Hispanic. This results in an average 5.6 percent earnings-per-hour premium for Hispanic physicians. However, ALREC for Asians is positive, on average, due to the large percentage of the physician workforce that is Asian. This results in an average 4.0 percent earnings-per-hour discount for Asian physicians. No similar statistically significant results were found for black physicians. Conclusions The market-determined earnings per hour of Hispanic and Asian physicians are sensitive to the degree of ALREC in the local labor market. Larger sample sizes may be needed to find statistically significant results for black physicians. PMID:17362214
Improvable method for Halon 1301 concentration measurement based on infrared absorption
NASA Astrophysics Data System (ADS)
Hu, Yang; Lu, Song; Guan, Yu
2015-09-01
Halon 1301 has attracted much interest because of its pervasive use as an effective fire suppressant agent in aircraft-related fires, and the study of fire suppressant agent concentration measurement is especially of interest. In this work, a Halon 1301 concentration measurement method based on the Beer-Lambert law is developed. IR light is transmitted through the mixed gas, and the light intensity with and without the agent present is measured. The intensity ratio is a function of the volume percentage of Halon 1301, and the voltage output of the detector is proportional to light intensity. As such, the relationship between the volume percentage and the voltage ratio can be established. The concentration measurement system shows a relative error of less than ±2.50% and a full-scale error within 1.20%. This work also discusses the effect of temperature and relative humidity (RH) on the calibration. The experimental results of the voltage ratio versus Halon 1301 volume percentage relationship show that the voltage ratio drops significantly as temperature rises from 25 to 100 °C, and it decreases as RH rises from 0% to 100%.
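A minimal sketch of the Beer-Lambert relationship underlying the method, assuming a single lumped calibration coefficient; the coefficient value is a placeholder, since in practice the calibration is determined experimentally and varies with temperature and humidity.

# Illustrative sketch: Beer-Lambert mapping between detector voltage ratio and volume fraction.
import math

K_CALIBRATION = 8.0   # hypothetical lumped coefficient (absorptivity x path length)

def voltage_ratio(volume_fraction):
    # Detector voltage is proportional to transmitted intensity, so V/V0 = exp(-k*c).
    return math.exp(-K_CALIBRATION * volume_fraction)

def volume_percentage(v_with_agent, v_without_agent):
    return 100.0 * (-math.log(v_with_agent / v_without_agent) / K_CALIBRATION)

ratio = voltage_ratio(0.05)                      # 5% Halon 1301 by volume
print(f"ratio = {ratio:.3f}, recovered concentration = {volume_percentage(ratio, 1.0):.2f}%")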
Imbery, Terence A; Diaz, Nicholas; Greenfield, Kristy; Janus, Charles; Best, Al M
2016-10-01
Preclinical fixed prosthodontics is taught by Department of Prosthodontics faculty members at Virginia Commonwealth University School of Dentistry; however, 86% of all clinical cases in academic year 2012 were staffed by faculty members from the Department of General Practice. The aims of this retrospective study were to quantify the quality of impressions, accuracy of laboratory work authorizations, and most common errors and to determine if there were differences between the rate of errors in cases supervised by the prosthodontists and the general dentists. A total of 346 Fixed Prosthodontic Laboratory Tracking Sheets for the 2012 academic year were reviewed. The results showed that, overall, 73% of submitted impressions were acceptable at initial evaluation, 16% had to be poured first and re-evaluated for quality prior to pindexing, 7% had multiple impressions submitted for transfer dies, and 4% were rejected for poor quality. There were higher acceptance rates for impressions and work authorizations for cases staffed by prosthodontists than by general dentists, but the differences were not statistically significant (p=0.0584 and p=0.0666, respectively). Regarding the work authorizations, 43% overall did not provide sufficient information or had technical errors that delayed prosthesis fabrication. The most common errors were incorrect mountings, absence of solid casts, inadequate description of margins for porcelain fused to metal crowns, inaccurate die trimming, and margin marking. The percentages of errors in cases supervised by general dentists and prosthodontists were similar for 17 of the 18 types of errors identified; only for margin description was the percentage of errors statistically significantly higher for general dentist-supervised than prosthodontist-supervised cases. These results highlighted the ongoing need for faculty development and calibration to ensure students receive the highest quality education from all faculty members teaching fixed prosthodontics.
On Time/Space Aggregation of Fine-Scale Error Estimates (Invited)
NASA Astrophysics Data System (ADS)
Huffman, G. J.
2013-12-01
Estimating errors inherent in fine time/space-scale satellite precipitation data sets is still an on-going problem and a key area of active research. Complicating features of these data sets include the intrinsic intermittency of the precipitation in space and time and the resulting highly skewed distribution of precipitation rates. Additional issues arise from the subsampling errors that satellites introduce, the errors due to retrieval algorithms, and the correlated error that retrieval and merger algorithms sometimes introduce. Several interesting approaches have been developed recently that appear to make progress on these long-standing issues. At the same time, the monthly averages over 2.5°x2.5° grid boxes in the Global Precipitation Climatology Project (GPCP) Satellite-Gauge (SG) precipitation data set follow a very simple sampling-based error model (Huffman 1997) with coefficients that are set using coincident surface and GPCP SG data. This presentation outlines the unsolved problem of how to aggregate the fine-scale errors (discussed above) to an arbitrary time/space averaging volume for practical use in applications, reducing in the limit to simple Gaussian expressions at the monthly 2.5°x2.5° scale. Scatter diagrams with different time/space averaging show that the relationship between the satellite and validation data improves due to the reduction in random error. One of the key, and highly non-linear, issues is that fine-scale estimates tend to have large numbers of cases with points near the axes on the scatter diagram (one of the values is exactly or nearly zero, while the other value is higher). Averaging 'pulls' the points away from the axes and towards the 1:1 line, which usually happens for higher precipitation rates before lower rates. Given this qualitative observation of how aggregation affects error, we observe that existing aggregation rules, such as the Steiner et al. (2003) power law, only depend on the aggregated precipitation rate. Is this sufficient, or is it necessary to aggregate the precipitation error estimates across the time/space data cube used for averaging? At least for small time/space data cubes it would seem that the detailed variables that affect each precipitation error estimate in the aggregation, such as sensor type, land/ocean surface type, convective/stratiform type, and so on, drive variations that must be accounted for explicitly.
A practical method of estimating standard error of age in the fission track dating method
Johnson, N.M.; McGee, V.E.; Naeser, C.W.
1979-01-01
A first-order approximation formula for the propagation of error in the fission track age equation is given by P_A = C[P_s^2 + P_i^2 + P_phi^2 - 2rP_sP_i]^(1/2), where P_A, P_s, P_i and P_phi are the percentage errors of age, of spontaneous track density, of induced track density, and of neutron dose, respectively, and C is a constant. The correlation, r, between spontaneous and induced track densities is a crucial element in the error analysis, acting generally to improve the standard error of age. In addition, the correlation parameter r is instrumental in specifying the level of neutron dose, a controlled variable, which will minimize the standard error of age. The results from the approximation equation agree closely with the results from an independent statistical model for the propagation of errors in the fission-track dating method. © 1979.
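Evaluating the propagation formula with assumed percentage errors (and C set to 1 purely for illustration) gives, for example:

# Illustrative evaluation of P_A = C * sqrt(P_s^2 + P_i^2 + P_phi^2 - 2*r*P_s*P_i).
import math

def age_percentage_error(p_s, p_i, p_phi, r, c=1.0):
    return c * math.sqrt(p_s**2 + p_i**2 + p_phi**2 - 2.0 * r * p_s * p_i)

# Hypothetical inputs: 5% spontaneous, 4% induced, 3% neutron-dose error, correlation 0.6.
print(f"P_A ~ {age_percentage_error(5.0, 4.0, 3.0, 0.6):.1f}%")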
Scolletta, Sabino; Franchi, Federico; Romagnoli, Stefano; Carlà, Rossella; Donati, Abele; Fabbri, Lea P; Forfori, Francesco; Alonso-Iñigo, José M; Laviola, Silvia; Mangani, Valerio; Maj, Giulia; Martinelli, Giampaolo; Mirabella, Lucia; Morelli, Andrea; Persona, Paolo; Payen, Didier
2016-07-01
Echocardiography and pulse contour methods allow, respectively, noninvasive and less invasive cardiac output estimation. The aim of the present study was to compare Doppler echocardiography with the pulse contour method MostCare for cardiac output estimation in a large and nonselected critically ill population. A prospective multicenter observational comparison study. The study was conducted in 15 European medicosurgical ICUs. We assessed cardiac output in 400 patients in whom an echocardiographic evaluation was performed as a routine need or for cardiocirculatory assessment. There were no interventions. One echocardiographic cardiac output measurement was compared with the corresponding MostCare cardiac output value per patient, considering different ICU admission categories and clinical conditions. For statistical analysis, we used Bland-Altman and linear regression analyses. To assess heterogeneity in the results of individual centers, the Cochran Q and I2 statistics were applied. A total of 400 paired echocardiographic cardiac output and MostCare cardiac output measures were compared. MostCare cardiac output values ranged from 1.95 to 9.90 L/min, and echocardiographic cardiac output ranged from 1.82 to 9.75 L/min. A significant correlation was found between echocardiographic cardiac output and MostCare cardiac output (r = 0.85; p < 0.0001). Among the different ICUs, the mean bias between echocardiographic cardiac output and MostCare cardiac output ranged from -0.40 to 0.45 L/min, and the percentage error ranged from 13.2% to 47.2%. Overall, the mean bias was -0.03 L/min, with 95% limits of agreement of -1.54 to 1.47 L/min and a relative percentage error of 30.1%. The percentage error was 24% in the sepsis category, 26% in the trauma category, 30% in the surgical category, and 33% in the medical admission category. The final overall percentage error was 27.3% with a 95% CI of 22.2-32.4%. Our results suggest that MostCare could be an alternative to echocardiography to assess cardiac output in ICU patients with a large spectrum of clinical conditions.
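For illustration, the Bland-Altman quantities reported above can be computed as in the sketch below; the paired cardiac output values are invented and the percentage error follows the common convention of 1.96 times the SD of the differences divided by the mean cardiac output.

# Illustrative sketch: Bland-Altman bias, 95% limits of agreement and percentage error.
import numpy as np

echo_co = np.array([4.2, 5.1, 3.6, 6.0, 4.8])       # hypothetical echo cardiac output (L/min)
mostcare_co = np.array([4.0, 5.4, 3.9, 5.7, 4.9])   # hypothetical MostCare values (L/min)

diff = mostcare_co - echo_co
bias = diff.mean()
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)
percentage_error = 100.0 * 1.96 * sd / np.concatenate([echo_co, mostcare_co]).mean()
print(f"bias {bias:.2f} L/min, LoA {loa[0]:.2f} to {loa[1]:.2f} L/min, PE {percentage_error:.1f}%")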
An educational and audit tool to reduce prescribing error in intensive care.
Thomas, A N; Boxall, E M; Laha, S K; Day, A J; Grundy, D
2008-10-01
To reduce prescribing errors in an intensive care unit by providing prescriber education in tutorials, ward-based teaching and feedback in 3-monthly cycles with each new group of trainee medical staff. Prescribing audits were conducted three times in each 3-month cycle, once pretraining, once post-training and a final audit after 6 weeks. The audit information was fed back to prescribers with their correct prescribing rates, rates for individual error types and total error rates together with anonymised information about other prescribers' error rates. The percentage of prescriptions with errors decreased over each 3-month cycle (pretraining 25%, 19%, (one missing data point), post-training 23%, 6%, 11%, final audit 7%, 3%, 5% (p<0.0005)). The total number of prescriptions and error rates varied widely between trainees (data collection one; cycle two: range of prescriptions written: 1-61, median 18; error rate: 0-100%; median: 15%). Prescriber education and feedback reduce manual prescribing errors in intensive care.
Githinji, Sophie; Kigen, Samwel; Memusi, Dorothy; Nyandigisi, Andrew; Mbithi, Agneta M.; Wamari, Andrew; Muturi, Alex N.; Jagoe, George; Barrington, Jim; Snow, Robert W.; Zurovac, Dejan
2013-01-01
Background Health facility stock-outs of life saving malaria medicines are common across Africa. Innovative ways of addressing this problem are urgently required. We evaluated whether SMS based reporting of stocks of artemether-lumefantrine (AL) and rapid diagnostic tests (RDT) can result in reduction of stock-outs at peripheral facilities in Kenya. Methods/Findings All 87 public health facilities in five Kenyan districts were included in a 26 week project. Weekly facility stock counts of four AL packs and RDTs were sent via structured incentivized SMS communication process from health workers’ personal mobile phones to a web-based system accessed by district managers. The mean health facility response rate was 97% with a mean formatting error rate of 3%. Accuracy of stock count reports was 79% while accuracy of stock-out reports was 93%. District managers accessed the system 1,037 times at an average of eight times per week. The system was accessed in 82% of the study weeks. Comparing weeks 1 and 26, stock-out of one or more AL packs declined by 38 percentage-points. Total AL stock-out declined by 5 percentage-points and was eliminated by the end of the project. Stock-out declines of individual AL packs ranged from 14 to 32 percentage-points while decline in RDT stock-outs was 24 percentage-points. District managers responded to 44% of AL and 73% of RDT stock-out signals by redistributing commodities between facilities. In comparison with national trends, stock-out declines in study areas were greater, sharper and more sustained. Conclusions Use of simple SMS technology ensured high reporting rates of reasonably accurate, real-time facility stock data that were used by district managers to undertake corrective actions to reduce stock-outs. Future work on stock monitoring via SMS should focus on assessing response rates without use of incentives and demonstrating effectiveness of such interventions on a larger scale. PMID:23349786
Penm, Jonathan; Chaar, Betty; Moles, Rebekah
2015-06-01
Clinical pharmacy services have been associated with decreased mortality rates, length of stay, medication errors, adverse drug reactions and total cost of care. Such services have recently been introduced to the Western Pacific Region (WPR), particularly in Asia. A survey to measure clinical pharmacy services that influence prescribing has been validated in the WPR and can be used to explore the implementation of such services. To explore the implementation of clinical pharmacy services that influence prescribing in the WPR and the barriers and facilitators involved in their implementation. Hospital pharmacies in the WPR. Hospital pharmacy directors in the WPR were emailed a link to the validated survey. Surveys were available in English, Japanese, Chinese, Vietnamese, Lao, Khmer, French and Mongolian. (1) Percentage of hospitals offering clinical pharmacy services. (2) Percentage of in-patients receiving a medication history, review or discharge counselling by a pharmacist. In total, 726 responses were received from 31 countries and nations. Nearly all hospitals, 90.6 % (658/726), stated they provided clinical pharmacy services. On average 28 % of their clinical pharmacists attended medical rounds regularly. The median percentage of inpatients receiving a medication history and discharge counselling by a pharmacist was 40 and 30 % respectively. Higher internal facilitator factor scores significantly increased the likelihood of offering clinical services and having pharmacists attend medical rounds regularly. Internal facilitators included individual pharmacist traits and pharmacy departmental structure/resources. Higher environmental facilitator factor scores and having a higher percentage of pharmacists attend medical rounds regularly significantly increased the likelihood of inpatients receiving a medication history, a medication review and discharge counselling by a pharmacist. Environment facilitators included government support, patient and physician expectations. A large proportion of hospitals in the WPR have implemented clinical pharmacy services. Although internal facilitators were shown to be important for initiating such services, the addition of environmental facilitators and ward round participation by pharmacists allowed clinical services to be integrated throughout the hospitals.
Dexter, Franklin; Jarvie, Craig; Epstein, Richard H
2017-11-01
Percentage utilization of operating room (OR) time is not an appropriate endpoint for planning additional OR time for surgeons with high caseloads, and cannot be measured accurately for surgeons with low caseloads. Nonetheless, many OR directors claim that their hospitals make decisions based on individual surgeons' OR utilizations. This incongruity could be explained by the OR managers considering the earlier mathematical studies, performed using data from a few large teaching hospitals, as irrelevant to their hospitals. The important mathematical parameter for the prior observations is the percentage of surgeon lists of elective cases that include 1 or 2 cases; "list" meaning a combination of surgeon, hospital, and date. We measure the incidence among many hospitals. Observational cohort study. 117 hospitals in Iowa from July 2013 through September 2015. Surgeons with same identifier among hospitals. Surgeon lists of cases including at least one outpatient surgical case, so that Relative Value Units (RVU's) could be measured. Averaging among hospitals in Iowa, more than half of the surgeons' lists included 1 or 2 cases (77%; P<0.00001 vs. 50%). Approximately half had 1 case (54%; P=0.0012 vs. 50%). These percentages exceeded 50% even though nearly all the surgeons operated at just 1 hospital on days with at least 1 case (97.74%; P<0.00001 vs. 50%). The cases were not of long durations; among the 82,928 lists with 1 case, the median was 6 intraoperative RVUs (e.g., adult inguinal herniorrhaphy). Accurate confidence intervals for raw or adjusted utilizations are so wide for individual surgeons that decisions based on utilization are equivalent to decisions based on random error. The implication of the current study is generalizability of that finding from the largest teaching hospital in the state to the other hospitals in the state. Copyright © 2017 Elsevier Inc. All rights reserved.
Intra-rater reliability of hallux flexor strength measures using the Nintendo Wii Balance Board.
Quek, June; Treleaven, Julia; Brauer, Sandra G; O'Leary, Shaun; Clark, Ross A
2015-01-01
The purpose of this study was to investigate the intra-rater reliability of a new method in combination with the Nintendo Wii Balance Board (NWBB) to measure the strength of the hallux flexor muscle. Thirty healthy individuals (age: 34.9 ± 12.9 years, height: 170.4 ± 10.5 cm, weight: 69.3 ± 15.3 kg, female = 15) participated. Repeated testing was completed within 7 days. Participants performed strength testing in sitting using a wooden platform in combination with the NWBB. This new method was set up to selectively recruit an intrinsic muscle of the foot, specifically the flexor hallucis brevis muscle. Statistical analysis was performed using intraclass correlation coefficients (ICC) and ordinary least product analysis. To estimate measurement error, the standard error of measurement (SEM), minimal detectable change (MDC) and percentage error were calculated. Results indicate excellent intra-rater reliability (ICC = 0.982, CI = 0.96-0.99) with an absence of systematic bias. SEM, MDC and percentage error values were 0.5, 1.4 and 12%, respectively. This study demonstrates that a new method in combination with the NWBB application is reliable to measure hallux flexor strength and has potential to be used for future research and clinical application.
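As background on these reliability indices, the sketch below applies the standard relations SEM = SD·sqrt(1 − ICC) and MDC95 = 1.96·sqrt(2)·SEM; expressing the MDC relative to the sample mean is one common way to obtain a percentage error. The input numbers are assumptions for illustration, not the study's data, and the paper's exact definitions may differ.

```python
import math

def reliability_indices(sd_between_subjects, icc, mean_score):
    """Standard error of measurement, minimal detectable change (95%) and
    a percentage error expressed relative to the sample mean."""
    sem = sd_between_subjects * math.sqrt(1.0 - icc)
    mdc95 = 1.96 * math.sqrt(2.0) * sem
    pct_error = 100.0 * mdc95 / mean_score   # one common convention
    return sem, mdc95, pct_error

# Hypothetical values: between-subject SD of 4.0 kg, ICC of 0.982, mean 11.5 kg
print(reliability_indices(4.0, 0.982, 11.5))
```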
Measuring the Accuracy of Simple Evolving Connectionist System with Varying Distance Formulas
NASA Astrophysics Data System (ADS)
Al-Khowarizmi; Sitompul, O. S.; Suherman; Nababan, E. B.
2017-12-01
Simple Evolving Connectionist System (SECoS) is a minimal implementation of Evolving Connectionist Systems (ECoS) in artificial neural networks. The three-layer network architecture of the SECoS can be built based on the given input. In this study, the activation value for the SECoS learning process, which is commonly calculated using the normalized Hamming distance, is also calculated using the normalized Manhattan distance and the normalized Euclidean distance in order to compare the smallest error value and best learning rate obtained. The measurement accuracy resulting from the three distance formulas is calculated using the mean absolute percentage error. In the training phase with several parameters, such as sensitivity threshold, error threshold, first learning rate, and second learning rate, it was found that the normalized Euclidean distance is more accurate than both the normalized Hamming distance and the normalized Manhattan distance. In the case of beta fibrinogen gene -455 G/A polymorphism patients used as training data, the highest mean absolute percentage error value is obtained with the normalized Manhattan distance compared to the normalized Euclidean distance and the normalized Hamming distance. However, the differences are so small that it can be concluded that the three distance formulas used in SECoS do not have a significant effect on the accuracy of the training results.
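The distance measures and the accuracy metric named above can be written compactly; the sketch below uses common normalized forms (mean absolute difference, root-mean-square difference, and fraction of differing components), which may not match the exact normalization used inside SECoS, together with the mean absolute percentage error.

```python
import numpy as np

def norm_manhattan(a, b):
    # mean absolute difference (one common normalization)
    return np.mean(np.abs(a - b))

def norm_euclidean(a, b):
    # root-mean-square difference
    return np.sqrt(np.mean((a - b) ** 2))

def norm_hamming(a, b, tol=1e-6):
    # fraction of components that differ by more than a tolerance
    return np.mean(np.abs(a - b) > tol)

def mape(actual, predicted):
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

x = np.array([0.20, 0.40, 0.90])   # hypothetical input vector
w = np.array([0.25, 0.35, 0.80])   # hypothetical connection weights
print(norm_manhattan(x, w), norm_euclidean(x, w), norm_hamming(x, w))
print(mape([10, 20, 30], [11, 19, 33]))   # ~8.3 %
```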
Conklin, Annalijn I.; Ponce, Ninez A.; Frank, John; Nandi, Arijit; Heymann, Jody
2016-01-01
Objectives To describe the relationship between minimum wage and overweight and obesity across countries at different levels of development. Methods A cross-sectional analysis of 27 countries with data on the legislated minimum wage level linked to socio-demographic and anthropometry data of 190,892 non-pregnant adult women (24–49 y) from the Demographic and Health Survey. We used multilevel logistic regression models to condition on country- and individual-level potential confounders, and post-estimation of average marginal effects to calculate the adjusted prevalence difference. Results We found that the association between minimum wage and overweight/obesity was independent of individual-level SES and confounders, and showed a reversed pattern by country development stage. The adjusted overweight/obesity prevalence difference in low-income countries was an average increase of about 0.1 percentage points (PD 0.075 [0.065, 0.084]), and an average decrease of 0.01 percentage points in middle-income countries (PD -0.014 [-0.019, -0.009]). The adjusted obesity prevalence difference in low-income countries was an average increase of 0.03 percentage points (PD 0.032 [0.021, 0.042]) and an average decrease of 0.03 percentage points in middle-income countries (PD -0.032 [-0.036, -0.027]). Conclusion This is among the first studies to examine the potential impact of improved wages on an important precursor of non-communicable diseases globally. Among countries with a modest level of economic development, higher minimum wage was associated with lower levels of obesity. PMID:26963247
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, J; Kerns, J; Nute, J
Purpose: To evaluate three commercial metal artifact reduction methods (MAR) in the context of radiation therapy treatment planning. Methods: Three MAR strategies were evaluated: Philips O-MAR, monochromatic imaging using Gemstone Spectral Imaging (GSI) dual energy CT, and monochromatic imaging with metal artifact reduction software (GSI-MARs). The Gammex RMI 467 tissue characterization phantom with several metal rods and two anthropomorphic phantoms (pelvic phantom with hip prosthesis and head phantom with dental fillings) were scanned with and without (baseline) metals. Each MAR method was evaluated based on CT number accuracy, metal size accuracy, and reduction in the severity of streak artifacts. CT number difference maps between the baseline and metal scan images were calculated, and the severity of streak artifacts was quantified using the percentage of pixels with >40 HU error (“bad pixels”). Results: Philips O-MAR generally reduced HU errors in the RMI phantom. However, increased errors and induced artifacts were observed for lung materials. GSI monochromatic 70keV images generally showed similar HU errors as 120kVp imaging, while 140keV images reduced errors. GSI-MARs systematically reduced errors compared to GSI monochromatic imaging. All imaging techniques preserved the diameter of a stainless steel rod to within ±1.6mm (2 pixels). For the hip prosthesis, O-MAR reduced the average % bad pixels from 47% to 32%. For GSI 140keV imaging, the percent of bad pixels was reduced from 37% to 29% compared to 120kVp imaging, while GSI-MARs further reduced it to 12%. For the head phantom, none of the MAR methods were particularly successful. Conclusion: The three MAR methods all improve CT images for treatment planning to some degree, but none of them are globally effective for all conditions. The MAR methods were successful for large metal implants in a homogeneous environment (hip prosthesis) but were not successful for the more complicated case of dental artifacts.
NASA Astrophysics Data System (ADS)
Yang, Yanqiu; Yu, Lin; Zhang, Yixin
2017-04-01
A model of the average capacity of an optical wireless communication link with pointing errors for ground-to-train communication on a curved track is established based on the non-Kolmogorov turbulence spectrum. By adopting the gamma-gamma distribution model, we derive the average capacity expression for this channel. The numerical analysis reveals that heavier fog reduces the average capacity of the link. The strength of atmospheric turbulence, the variance of pointing errors, and the covered track length need to be reduced to obtain a larger average link capacity, while the normalized beamwidth and the average signal-to-noise ratio (SNR) of the turbulence-free link need to be increased. The transmit aperture can be increased to expand the beamwidth and enhance the signal intensity, thereby decreasing the impact of beam wander. When the system adopts automatic beam tracking at the receiver positioned on the roof of the train, eliminating the pointing errors caused by beam wander and train vibration, the equivalent average capacity of the channel reaches a maximum value. The impact of variation in the non-Kolmogorov spectral index on the average capacity of the link can be ignored.
Force-Time Entropy of Isometric Impulse.
Hsieh, Tsung-Yu; Newell, Karl M
2016-01-01
The relation between force and temporal variability in discrete impulse production has been viewed as independent (R. A. Schmidt, H. Zelaznik, B. Hawkins, J. S. Frank, & J. T. Quinn, 1979 ) or dependent on the rate of force (L. G. Carlton & K. M. Newell, 1993 ). Two experiments in an isometric single finger force task investigated the joint force-time entropy with (a) fixed time to peak force and different percentages of force level and (b) fixed percentage of force level and different times to peak force. The results showed that the peak force variability increased either with the increment of force level or through a shorter time to peak force that also reduced timing error variability. The peak force entropy and entropy of time to peak force increased on the respective dimension as the parameter conditions approached either maximum force or a minimum rate of force production. The findings show that force error and timing error are dependent but complementary when considered in the same framework with the joint force-time entropy at a minimum in the middle parameter range of discrete impulse.
NASA Astrophysics Data System (ADS)
Li, X.; Zhang, C.; Li, W.
2017-12-01
Long-term spatiotemporal analysis and modeling of aerosol optical depth (AOD) distribution is of paramount importance to study radiative forcing, climate change, and human health. This study is focused on the trends and variations of AOD over six stations located in the United States and China from 2003 to 2015, using satellite-retrieved Moderate Resolution Imaging Spectroradiometer (MODIS) Collection 6 retrievals and ground measurements derived from the Aerosol Robotic NETwork (AERONET). An autoregressive integrated moving average (ARIMA) model is applied to simulate and predict AOD values. The R², adjusted R², Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and Bayesian Information Criterion (BIC) are used as indices to select the best-fitted model. Results show that there is a persistent decreasing trend in AOD for both MODIS data and AERONET data over three stations. Monthly and seasonal AOD variations reveal consistent aerosol patterns over stations along mid-latitudes. Regional differences impacted by climatology and land cover types are observed for the selected stations. Statistical validation of time series models indicates that the non-seasonal ARIMA model performs better for AERONET AOD data than for MODIS AOD data over most stations, suggesting the method works better for data with higher quality. By contrast, the seasonal ARIMA model reproduces the seasonal variations of MODIS AOD data much more precisely. Overall, the reasonably predicted results indicate the applicability and feasibility of the stochastic ARIMA modeling technique to forecast future and missing AOD values.
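A minimal sketch of the model-selection loop described above, using the ARIMA implementation in statsmodels on a synthetic monthly AOD-like series; the candidate orders, the BIC-based selection, and the error metrics are illustrative and do not reproduce the paper's configuration.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
# Hypothetical monthly AOD series with a weak seasonal cycle plus noise
months = pd.date_range("2003-01", periods=156, freq="MS")
aod = 0.25 + 0.05 * np.sin(2 * np.pi * months.month / 12) + rng.normal(0, 0.02, 156)
series = pd.Series(aod, index=months)

train, test = series.iloc[:-12], series.iloc[-12:]

best = None
for order in [(1, 0, 0), (1, 1, 1), (2, 1, 2)]:      # candidate (p, d, q) orders
    res = ARIMA(train, order=order).fit()
    if best is None or res.bic < best[1].bic:        # keep the lowest-BIC fit
        best = (order, res)

order, res = best
forecast = res.forecast(steps=12)
mae = np.mean(np.abs(forecast - test))
rmse = np.sqrt(np.mean((forecast - test) ** 2))
mape = 100 * np.mean(np.abs((forecast - test) / test))
print(order, round(res.bic, 1), round(mae, 4), round(rmse, 4), round(mape, 2))
```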
Tsuchida, Satoshi; Thome, Kurtis
2017-01-01
Radiometric cross-calibration between the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and the Terra-Moderate Resolution Imaging Spectroradiometer (MODIS) has been partially used to derive the ASTER radiometric calibration coefficient (RCC) curve as a function of date on visible to near-infrared bands. However, cross-calibration is not sufficiently accurate, since the effects of the differences in the sensors' spectral and spatial responses are not fully mitigated. The present study attempts to evaluate radiometric consistency across the two sensors using an improved cross-calibration algorithm to address the spectral and spatial effects and derive cross-calibration-based RCCs, which increases the ASTER calibration accuracy. Overall, radiances measured with ASTER bands 1 and 2 are on average 3.9% and 3.6% greater than the ones measured on the same scene with their MODIS counterparts, and ASTER band 3N (nadir) is 0.6% smaller than its MODIS counterpart in current radiance/reflectance products. The percentage root mean squared errors (%RMSEs) between the radiances of the two sensors are 3.7, 4.2, and 2.3 for ASTER bands 1, 2, and 3N, respectively, which are slightly greater or smaller than the required ASTER radiometric calibration accuracy (4%). The uncertainty of the cross-calibration is analyzed by elaborating the error budget table to evaluate the International System of Units (SI)-traceability of the results. The use of the derived RCCs will allow further reduction of errors in ASTER radiometric calibration and subsequently improve interoperability across sensors for synergistic applications. PMID:28777329
Measures of Muscular Strength in U.S. Children and Adolescents, 2012
... errors of the percentages were estimated using Taylor series linearization, a method that incorporates the sample weights and sample design. Differences between groups were evaluated using a t ...
O'Neill, Liam; Dexter, Franklin; Park, Sae-Hwan; Epstein, Richard H
2017-09-01
Most surgical discharges (54%) at the average hospital are for procedures performed no more often than once per month at that hospital. We hypothesized that such uncommon procedures would be associated with an even greater percentage of the total cost of performing all surgical procedures at that hospital. Observational study. State of Texas hospital discharge abstract data: 4th quarter of 2015 and 1st quarter of 2016. Inpatients discharged with a major therapeutic ("operative") procedure. For each of N=343 hospitals, counts of discharges, sums of lengths of stay (LOS), sums of diagnosis related group (DRG) case-mix weights, and sums of charges were obtained for each procedure or combination of procedures, classified by International Classification of Diseases version 10 Procedure Coding System (ICD-10-PCS). Each discharge was classified into 2 categories, uncommon versus not, defined as a procedure performed at most once per month versus those performed more often than once per month. Major procedures performed at most once per month per hospital accounted for an average among hospitals of 68% of the total inpatient costs associated with all major therapeutic procedures. On average, the percentage of total costs associated with uncommon procedures was 26% greater than expected based on their share of total discharges (P<0.00001). Average percentage differences were insensitive to the endpoint, with similar results for the percentage of patient days and percentage of DRG case-mix weights. Approximately 2/3rd (mean 68%) of inpatient costs among surgical patients can be attributed to procedures performed at most once per month per hospital. The finding that such uncommon procedures account for a large percentage of costs is important because methods of cost accounting by procedure are generally unsuitable for them. Copyright © 2017 Elsevier Inc. All rights reserved.
Predictive model for disinfection by-product in Alexandria drinking water, northern west of Egypt.
Abdullah, Ali M; Hussona, Salah El-dien
2013-10-01
Chlorine has been utilized in the early stages of water treatment processes as a disinfectant. Disinfection for drinking water reduces the risk of pathogenic infection but may pose a chemical threat to human health due to disinfection residues and their by-products (DBP) when the organic and inorganic precursors are present in water. In the last two decades, many modeling attempts have been made to predict the occurrence of DBP in drinking water. Models have been developed based on data generated in laboratory-scale and field-scale investigations. The objective of this paper is to develop a predictive model for DBP formation in the Alexandria governorate located at the northern west of Egypt based on field-scale investigations as well as laboratory-controlled experimentations. The present study showed that the correlation coefficient between THM predicted and THM measured for trihalomethanes (THM) was R² = 0.88, and the minimum deviation percentage between THM predicted and THM measured was 0.8%, the maximum deviation percentage was 89.3%, and the average deviation was 17.8%, while the correlation coefficient between dichloroacetic acid (DCAA) predicted and DCAA measured was R² = 0.98, and the minimum deviation percentage between DCAA predicted and DCAA measured was 1.3%, the maximum deviation percentage was 47.2%, and the average deviation was 16.6%. In addition, the correlation coefficient between trichloroacetic acid (TCAA) predicted and TCAA measured was R² = 0.98, and the minimum deviation percentage between TCAA predicted and TCAA measured was 4.9%, the maximum deviation percentage was 43.0%, and the average deviation was 16.0%.
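The deviation percentages reported above are simple relative differences between predicted and measured concentrations; a short sketch with placeholder values (not the study's measurements) shows how the minimum, maximum and average deviations can be summarized.

```python
import numpy as np

def deviation_stats(measured, predicted):
    """Min, max and mean absolute percentage deviation of predictions."""
    measured, predicted = np.asarray(measured, float), np.asarray(predicted, float)
    dev = 100.0 * np.abs(predicted - measured) / measured
    return dev.min(), dev.max(), dev.mean()

# Placeholder THM concentrations (µg/L), purely illustrative
thm_measured  = [42.0, 55.0, 61.0, 38.0]
thm_predicted = [44.1, 50.2, 63.0, 36.5]
print(deviation_stats(thm_measured, thm_predicted))
```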
Total ozone trend significance from space time variability of daily Dobson data
NASA Technical Reports Server (NTRS)
Wilcox, R. W.
1981-01-01
Estimates of standard errors of total ozone time and area means, as derived from ozone's natural temporal and spatial variability and autocorrelation in middle latitudes determined from daily Dobson data are presented. Assessing the significance of apparent total ozone trends is equivalent to assessing the standard error of the means. Standard errors of time averages depend on the temporal variability and correlation of the averaged parameter. Trend detectability is discussed, both for the present network and for satellite measurements.
The calculation of average error probability in a digital fibre optical communication system
NASA Astrophysics Data System (ADS)
Rugemalira, R. A. M.
1980-03-01
This paper deals with the problem of determining the average error probability in a digital fibre optical communication system, in the presence of message-dependent inhomogeneous non-stationary shot noise, additive Gaussian noise and intersymbol interference. A zero-forcing equalization receiver filter is considered. Three techniques for error rate evaluation are compared. The Chernoff bound and the Gram-Charlier series expansion methods are compared to the characteristic function technique. The latter predicts a higher receiver sensitivity.
[Impact of a software application to improve medication reconciliation at hospital discharge].
Corral Baena, S; Garabito Sánchez, M J; Ruíz Rómero, M V; Vergara Díaz, M A; Martín Chacón, E R; Fernández Moyano, A
2014-01-01
To assess the impact of a software application to improve the quality of information concerning current patient medications and changes on the discharge report after hospitalization. To analyze the incidence of errors and to classify them. Quasi-experimental pre/post study with a non-equivalent control group. Medical patients at hospital discharge. Implementation of a software application. Percentage of reconciled patient medication on discharge, and percentage of patients with more than one unjustified discrepancy. A total of 349 patients were assessed; 199 (pre-intervention phase) and 150 (post-intervention phase). Before the implementation of the application, medication reconciliation had been completed in 157 patients (78.8%), with reconciliation errors found in 99 (63.0%). The most frequent type of error, 339 (78.5%), was missing dose or administration frequency information. After implementation, all the patient prescriptions were reconciled when the software was used. The percentage of patients with unjustified discrepancies decreased from 63.0% to 11.8% with the use of the application (p<.001). The main type of discrepancy found on using the application was a confusing prescription, due to the fact that the professionals were not used to using the new tool. The use of a software application has been shown to improve the quality of the information on patient treatment on the hospital discharge report, but it is still necessary to continue development as a strategy for improving medication reconciliation. Copyright © 2014 SECA. Published by Elsevier Espana. All rights reserved.
Venugopal, Paramaguru; Kasimani, Ramesh; Chinnasamy, Suresh
2018-06-21
Transportation demand in India is increasing tremendously, raising energy consumption by 4.1 to 6.1% each year from 2010 to 2050. In addition, private vehicle ownership has kept increasing by almost 10% per year during the last decade, reaching 213 million tons of oil consumption in 2016 and making India the third largest importer of crude oil in the world. Because of this problem, there is a need to promote alternative fuels (biodiesel) produced from different feedstocks for transportation. These alternative fuels have better emission characteristics than neat diesel, so biodiesel can be used as a direct alternative for diesel or blended with diesel to obtain better performance. However, the compression ratio, injection timing, injection pressure, composition-blend ratio, air-fuel ratio, and the shape of the cylinder may affect the performance and emission characteristics of the diesel engine. This article deals with the effect of compression ratio on the performance of an engine fueled with a Honne oil diesel blend and aims to find the optimum compression ratio. Experiments were conducted using a Honne oil diesel blend-fueled CI engine at variable load conditions and constant speed. To find the optimum compression ratio, experiments were carried out on a single-cylinder, four-stroke, variable compression ratio diesel engine, and a compression ratio of 18:1 was found to give better performance than the lower compression ratios. Engine performance tests were carried out at different compression ratio values. Using the experimental data, a regression model was developed and the values were predicted using response surface methodology. The predicted values were then validated against the experimental results, giving a maximum error percentage of 6.057 and an average error percentage of 3.57. The optimum numeric factors for the different responses were also selected using RSM.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, R; Bai, W
Purpose: Because of statistical noise in Monte Carlo dose calculations, effective point doses may not be accurate. Volume spheres are useful for evaluating dose in Monte Carlo plans, which have an inherent statistical uncertainty. We use a user-defined sphere volume instead of a point and sample spheres around the effective point to compute dose statistics and decrease the stochastic errors. Methods: Direct dose measurements were made using a 0.125cc Semiflex ion chamber (IC) 31010 isocentrically placed in the center of a homogeneous cylindrical sliced RW3 phantom (PTW, Germany). In the scanned CT phantom series, the sensitive volume length of the IC (6.5mm) was delineated and the isocenter was defined as the simulation effective point. All beams were simulated in Monaco in accordance with the measured model. In our simulation we used a 2mm voxel calculation grid spacing, calculated dose to medium, and requested a relative standard deviation ≤0.5%. Three different assigned IC over-densities (air electron density (ED) of 0.01g/cm3, the default CT-scanned ED, and esophageal lumen ED of 0.21g/cm3) were tested at different sampling sphere radii (2.5, 2, 1.5 and 1 mm), and the statistical doses were compared with the measured doses. Results: The results show that in the Monaco TPS, for the IC using esophageal lumen ED 0.21g/cm3 and a sampling sphere radius of 1.5mm, the statistical value is in best accordance with the measured value; the absolute average percentage deviation is 0.49%. When the IC uses an air ED of 0.01g/cm3 or the default CT-scanned ED, the recommended statistical sampling sphere radius is 2.5mm, with percentage deviations of 0.61% and 0.70%, respectively. Conclusion: In the Monaco treatment planning system, for the ionization chamber 31010 we recommend assigning the air cavity an ED of 0.21g/cm3 and sampling a 1.5mm sphere volume instead of a point dose to decrease the stochastic errors. Funding Support No.C201505006.
Trends in Self-Reported Sleep Duration among US Adults from 1985 to 2012
Ford, Earl S.; Cunningham, Timothy J.; Croft, Janet B.
2015-01-01
Study Objective: The trend in sleep duration in the United States population remains uncertain. Our objective was to examine changes in sleep duration from 1985 to 2012 among US adults. Design: Trend analysis. Setting: Civilian noninstitutional population of the United States. Participants: 324,242 US adults aged ≥ 18 y of the National Health Interview Survey (1985, 1990, and 2004–2012). Measurements and Results: Sleep duration was defined on the basis of the question “On average, how many hours of sleep do you get in a 24-h period?” The age-adjusted mean sleep duration was 7.40 h (standard error [SE] 0.01) in 1985, 7.29 h (SE 0.01) in 1990, 7.18 h (SE 0.01) in 2004, and 7.18 h (SE 0.01) in 2012 (P 2012 versus 1985 < 0.001; P trend 2004–2012 = 0.982). The age-adjusted percentage of adults sleeping ≤ 6 h was 22.3% (SE 0.3) in 1985, 24.4% (SE 0.3) in 1990, 28.6% (SE 0.3) in 2004, and 29.2% (SE 0.3) in 2012 (P 2012 versus 1985 < 0.001; P trend 2004–2012 = 0.050). In 2012, approximately 70.1 million US adults reported sleeping ≤ 6 h. Conclusions: Since 1985, age-adjusted mean sleep duration has decreased slightly and the percentage of adults sleeping ≤ 6 h increased by 31%. Since 2004, however, mean sleep duration and the percentage of adults sleeping ≤ 6 h have changed little. Citation: Ford ES, Cunningham TJ, Croft JB. Trends in self-reported sleep duration among US adults from 1985 to 2012. SLEEP 2015;38(5):829–832. PMID:25669182
Xu, Hang; Su, Shi; Tang, Wuji; Wei, Meng; Wang, Tao; Wang, Dongjin; Ge, Weihong
2015-09-01
A large number of warfarin pharmacogenetics algorithms have been published. Our research aimed to evaluate the performance of the selected pharmacogenetic algorithms in patients undergoing heart valve replacement and heart valvuloplasty surgery during the phase of initial and stable anticoagulation treatment. 10 pharmacogenetic algorithms were selected by searching PubMed. We compared the performance of the selected algorithms in a cohort of 193 patients during the phase of initial and stable anticoagulation therapy. Predicted dose was compared to therapeutic dose using the percentage of predicted doses falling within a 20% threshold of the actual dose (percentage within 20%) and the mean absolute error (MAE). The average warfarin dose for patients was 3.05±1.23mg/day for initial treatment and 3.45±1.18mg/day for stable treatment. The percentages of the predicted dose within 20% of the therapeutic dose were 44.0±8.8% and 44.6±9.7% for the initial and stable phases, respectively. The MAEs of the selected algorithms were 0.85±0.18mg/day and 0.93±0.19mg/day, respectively. All algorithms had better performance in the ideal group than in the low dose and high dose groups. The only exception was the Wadelius et al. algorithm, which had better performance in the high dose group. The algorithms had similar performance except for the Wadelius et al. and Miao et al. algorithms, which had poor accuracy in our study cohort. The Gage et al. algorithm had better performance in both phases of initial and stable treatment. Algorithms had relatively higher accuracy in the >50 years group of patients in the stable phase. Copyright © 2015 Elsevier Ltd. All rights reserved.
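The two accuracy metrics used in this evaluation are easy to compute from paired dose data; the sketch below uses invented therapeutic and predicted doses, not the study cohort.

```python
import numpy as np

def dose_accuracy(actual, predicted, threshold=0.20):
    """Percentage of predictions within ±threshold of the actual dose, and MAE."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    within = np.abs(predicted - actual) <= threshold * actual
    pct_within = 100.0 * within.mean()
    mae = np.mean(np.abs(predicted - actual))
    return pct_within, mae

# Hypothetical therapeutic vs algorithm-predicted warfarin doses (mg/day)
actual    = [3.0, 2.5, 4.0, 5.5, 3.5]
predicted = [3.4, 2.0, 4.5, 4.2, 3.6]
print(dose_accuracy(actual, predicted))   # (80.0, 0.56) for these made-up values
```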
Error Reduction Methods for Integrated-path Differential-absorption Lidar Measurements
NASA Technical Reports Server (NTRS)
Chen, Jeffrey R.; Numata, Kenji; Wu, Stewart T.
2012-01-01
We report new modeling and error reduction methods for differential-absorption optical-depth (DAOD) measurements of atmospheric constituents using direct-detection integrated-path differential-absorption lidars. Errors from laser frequency noise are quantified in terms of the line center fluctuation and spectral line shape of the laser pulses, revealing relationships verified experimentally. A significant DAOD bias is removed by introducing a correction factor. Errors from surface height and reflectance variations can be reduced to tolerable levels by incorporating altimetry knowledge and "log after averaging", or by pointing the laser and receiver to a fixed surface spot during each wavelength cycle to shorten the time of "averaging before log".
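The benefit of "log after averaging" can be illustrated with a toy simulation: for a noisy on/off energy ratio, averaging the ratios before taking the logarithm gives a less biased optical-depth estimate than averaging the per-pulse log-ratios (a consequence of Jensen's inequality). The noise model and numbers below are synthetic assumptions, unrelated to the instrument described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
true_daod = 0.5                      # assumed "true" differential absorption optical depth
n_pulses = 10_000

# Synthetic on/off energy ratio with multiplicative noise (e.g. speckle, detector noise)
ratio = np.exp(-true_daod) * (1.0 + rng.normal(0.0, 0.3, n_pulses))
ratio = ratio[ratio > 0]             # keep physically meaningful samples

daod_log_after_avg  = -np.log(ratio.mean())      # average first, then take the log
daod_avg_before_log = -np.log(ratio).mean()      # log each pulse, then average

print(daod_log_after_avg)   # close to 0.5
print(daod_avg_before_log)  # biased high at this noise level
```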
Wind power error estimation in resource assessments.
Rodríguez, Osvaldo; Del Río, Jesús A; Jaramillo, Oscar A; Martínez, Manuel
2015-01-01
Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment based on 28 wind turbine power curves, which were fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the power production leveled cost or the investment time return. The implementation of this method increases the reliability of techno-economic resource assessment studies.
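A rough sketch of the error-propagation idea: interpolate a power curve with Lagrange's method and push a ±10% wind-speed error through it. The five-point curve and the resulting error are made-up illustrations, not one of the 28 manufacturer curves or the paper's result.

```python
import numpy as np
from scipy.interpolate import lagrange

# Made-up power-curve points: wind speed (m/s) -> power (kW)
speeds = np.array([4.0, 6.0, 8.0, 10.0, 12.0])
powers = np.array([50.0, 250.0, 650.0, 1200.0, 1800.0])
curve = lagrange(speeds, powers)      # polynomial passing through the points

def power_with_error(v, rel_speed_error=0.10):
    """Propagate a relative wind-speed error through the interpolated curve."""
    p = curve(v)
    p_lo, p_hi = curve(v * (1 - rel_speed_error)), curve(v * (1 + rel_speed_error))
    rel_power_error = (abs(p_hi - p) + abs(p - p_lo)) / (2 * p)
    return p, rel_power_error

print(power_with_error(9.0))   # estimated power (kW) and its relative error
```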
Izquierdo, M; González-Badillo, J J; Häkkinen, K; Ibáñez, J; Kraemer, W J; Altadill, A; Eslava, J; Gorostiaga, E M
2006-09-01
The purpose of this study was to examine the effect of different loads on repetition speed during single sets of repetitions to failure in bench press and parallel squat. Thirty-six physical active men performed 1-repetition maximum in a bench press (1 RM (BP)) and half squat position (1 RM (HS)), and performed maximal power-output continuous repetition sets randomly every 10 days until failure with a submaximal load (60 %, 65 %, 70 %, and 75 % of 1RM, respectively) during bench press and parallel squat. Average velocity of each repetition was recorded by linking a rotary encoder to the end part of the bar. The values of 1 RM (BP) and 1 RM (HS) were 91 +/- 17 and 200 +/- 20 kg, respectively. The number of repetitions performed for a given percentage of 1RM was significantly higher (p < 0.001) in half squat than in bench press performance. Average repetition velocity decreased at a greater rate in bench press than in parallel squat. The significant reductions observed in the average repetition velocity (expressed as a percentage of the average velocity achieved during the initial repetition) were observed at higher percentage of the total number of repetitions performed in parallel squat (48 - 69 %) than in bench press (34 - 40 %) actions. The major finding in this study was that, for a given muscle action (bench press or parallel squat), the pattern of reduction in the relative average velocity achieved during each repetition and the relative number of repetitions performed was the same for all percentages of 1RM tested. However, relative average velocity decreased at a greater rate in bench press than in parallel squat performance. This would indicate that in bench press the significant reductions observed in the average repetition velocity occurred when the number of repetitions was over one third (34 %) of the total number of repetitions performed, whereas in parallel squat it was nearly one half (48 %). Conceptually, this would indicate that for a given exercise (bench press or squat) and percentage of maximal dynamic strength (1RM), the pattern of velocity decrease can be predicted over a set of repetitions, so that a minimum repetition threshold to ensure maximal speed performance is determined.
Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B.; Kirkman, M. Sue; Kovatchev, Boris
2014-01-01
Introduction: Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. Methods: A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. Results: SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to the data plotted on the CEG and PEG produced risk estimates that were more granular and reflective of a continuously increasing risk scale. Discussion: The SEG is a modern metric for clinical risk assessments of BG monitor errors that assigns a unique risk score to each monitor data point when compared to a reference value. The SEG allows the clinical accuracy of a BG monitor to be portrayed in many ways, including as the percentages of data points falling into custom-defined risk zones. For modeled data the SEG, compared with the CEG and PEG, allows greater precision for quantifying risk, especially when the risks are low. This tool will be useful to allow regulators and manufacturers to monitor and evaluate glucose monitor performance in their surveillance programs. PMID:25562886
Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris
2014-07-01
Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to the data plotted on the CEG and PEG produced risk estimates that were more granular and reflective of a continuously increasing risk scale. The SEG is a modern metric for clinical risk assessments of BG monitor errors that assigns a unique risk score to each monitor data point when compared to a reference value. The SEG allows the clinical accuracy of a BG monitor to be portrayed in many ways, including as the percentages of data points falling into custom-defined risk zones. For modeled data the SEG, compared with the CEG and PEG, allows greater precision for quantifying risk, especially when the risks are low. This tool will be useful to allow regulators and manufacturers to monitor and evaluate glucose monitor performance in their surveillance programs. © 2014 Diabetes Technology Society.
Cawyer, Chase R; Anderson, Sarah B; Szychowski, Jeff M; Neely, Cherry; Owen, John
2018-03-01
To compare the accuracy of a new regression-derived formula developed from the National Fetal Growth Studies data to the common alternative method that uses the average of the gestational ages (GAs) calculated for each fetal biometric measurement (biparietal diameter, head circumference, abdominal circumference, and femur length). This retrospective cross-sectional study identified nonanomalous singleton pregnancies that had a crown-rump length plus at least 1 additional sonographic examination with complete fetal biometric measurements. With the use of the crown-rump length to establish the referent estimated date of delivery, each method's error (National Institute of Child Health and Human Development regression versus Hadlock average [Radiology 1984; 152:497-501]) at every examination was computed. Error, defined as the difference between the crown-rump length-derived GA and each method's predicted GA (weeks), was compared in 3 GA intervals: 1 (14 weeks-20 weeks 6 days), 2 (21 weeks-28 weeks 6 days), and 3 (≥29 weeks). In addition, the proportion of each method's examinations that had errors outside prespecified (±) day ranges was computed by using odds ratios. A total of 16,904 sonograms were identified. The overall and prespecified GA range subset mean errors were significantly smaller for the regression compared to the average (P < .01), and the regression had significantly lower odds of observing examinations outside the specified range of error in GA intervals 2 (odds ratio, 1.15; 95% confidence interval, 1.01-1.31) and 3 (odds ratio, 1.24; 95% confidence interval, 1.17-1.32) than the average method. In a contemporary unselected population of women dated by a crown-rump length-derived GA, the National Institute of Child Health and Human Development regression formula produced fewer estimates outside a prespecified margin of error than the commonly used Hadlock average; the differences were most pronounced for GA estimates at 29 weeks and later. © 2017 by the American Institute of Ultrasound in Medicine.
Noel, Camille E; Gutti, Veerarajesh; Bosch, Walter; Mutic, Sasa; Ford, Eric; Terezakis, Stephanie; Santanam, Lakshmi
2014-04-01
To quantify the potential impact of the Integrating the Healthcare Enterprise-Radiation Oncology Quality Assurance with Plan Veto (QAPV) on patient safety of external beam radiation therapy (RT) operations. An institutional database of events (errors and near-misses) was used to evaluate the ability of QAPV to prevent clinically observed events. We analyzed reported events that were related to Digital Imaging and Communications in Medicine RT plan parameter inconsistencies between the intended treatment (on the treatment planning system) and the delivered treatment (on the treatment machine). Critical Digital Imaging and Communications in Medicine RT plan parameters were identified. Each event was scored for importance using the Failure Mode and Effects Analysis methodology. Potential error occurrence (frequency) was derived according to the collected event data, along with the potential event severity, and the probability of detection with and without the theoretical implementation of the QAPV plan comparison check. Failure Mode and Effects Analysis Risk Priority Numbers (RPNs) with and without QAPV were compared to quantify the potential benefit of clinical implementation of QAPV. The implementation of QAPV could reduce the RPN values for 15 of 22 (71%) of evaluated parameters, with an overall average reduction in RPN of 68 (range, 0-216). For the 6 high-risk parameters (>200), the average reduction in RPN value was 163 (range, 108-216). The RPN value reduction for the intermediate-risk (200 > RPN > 100) parameters was (0-140). With QAPV, the largest RPN value for "Beam Meterset" was reduced from 324 to 108. The maximum reduction in RPN value was for Beam Meterset (216, 66.7%), whereas the maximum percentage reduction was for Cumulative Meterset Weight (80, 88.9%). This analysis quantifies the value of the Integrating the Healthcare Enterprise-Radiation Oncology QAPV implementation in clinical workflow. We demonstrate that although QAPV does not provide a comprehensive solution for error prevention in RT, it can have a significant impact on a subset of the most severe clinically observed events. Copyright © 2014 Elsevier Inc. All rights reserved.
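For context, an FMEA risk priority number is the product of severity, occurrence and detectability ratings; the sketch below shows one decomposition that is numerically consistent with the Beam Meterset figures quoted above, but the individual ratings are assumptions for illustration, not values taken from the paper.

```python
def rpn(severity, occurrence, detectability):
    """FMEA risk priority number: product of the three 1-10 ratings."""
    return severity * occurrence * detectability

# Hypothetical failure mode: wrong "Beam Meterset" transferred to the machine
before = rpn(severity=9, occurrence=4, detectability=9)   # poor detection without QAPV
after  = rpn(severity=9, occurrence=4, detectability=3)   # automated plan check flags it
print(before, after, before - after)   # 324 108 216
```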
Improving laboratory data entry quality using Six Sigma.
Elbireer, Ali; Le Chasseur, Julie; Jackson, Brooks
2013-01-01
The Uganda Makerere University provides clinical laboratory support to over 70 clients in Uganda. With increased volume, manual data entry errors have steadily increased, prompting laboratory managers to employ the Six Sigma method to evaluate and reduce their problems. The purpose of this paper is to describe how laboratory data entry quality was improved by using Six Sigma. The Six Sigma Quality Improvement (QI) project team followed a sequence of steps, starting with defining project goals, measuring data entry errors to assess current performance, analyzing data and determining data-entry error root causes. Finally the team implemented changes and control measures to address the root causes and to maintain improvements. Establishing the Six Sigma project required considerable resources and maintaining the gains requires additional personnel time and dedicated resources. After initiating the Six Sigma project, there was a 60.5 percent reduction in data entry errors from 423 errors a month (i.e. 4.34 Six Sigma) in the first month, down to an average 166 errors/month (i.e. 4.65 Six Sigma) over 12 months. The team estimated the average cost of identifying and fixing a data entry error to be $16.25 per error. Thus, reducing errors by an average of 257 errors per month over one year has saved the laboratory an estimated $50,115 a year. The Six Sigma QI project provides a replicable framework for Ugandan laboratory staff and other resource-limited organizations to promote quality environment. Laboratory staff can deliver excellent care at a lower cost, by applying QI principles. This innovative QI method of reducing data entry errors in medical laboratories may improve the clinical workflow processes and make cost savings across the health care continuum.
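The quoted annual saving follows from simple arithmetic on the numbers reported in the abstract, as the short check below shows.

```python
errors_avoided_per_month = 423 - 166      # reduction reported after the Six Sigma project
cost_per_error = 16.25                    # estimated cost to find and fix one error ($)
annual_saving = errors_avoided_per_month * cost_per_error * 12
print(errors_avoided_per_month, annual_saving)   # 257 errors/month, $50,115 per year
```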
MO-FG-202-05: Identifying Treatment Planning System Errors in IROC-H Phantom Irradiations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerns, J; Followill, D; Howell, R
Purpose: Treatment Planning System (TPS) errors can affect large numbers of cancer patients receiving radiation therapy. Using an independent recalculation system, the Imaging and Radiation Oncology Core-Houston (IROC-H) can identify institutions that have not sufficiently modelled their linear accelerators in their TPS model. Methods: Linear accelerator point measurement data from IROC-H’s site visits was aggregated and analyzed from over 30 linear accelerator models. Dosimetrically similar models were combined to create “classes”. The class data was used to construct customized beam models in an independent treatment dose verification system (TVS). Approximately 200 head and neck phantom plans from 2012 to 2015 were recalculated using this TVS. Comparison of plan accuracy was evaluated by comparing the measured dose to the institution’s TPS dose as well as the TVS dose. In cases where the TVS was more accurate than the institution by an average of >2%, the institution was identified as having a non-negligible TPS error. Results: Of the ∼200 recalculated plans, the average improvement using the TVS was ∼0.1%; i.e. the recalculation, on average, slightly outperformed the institution’s TPS. Of all the recalculated phantoms, 20% were identified as having a non-negligible TPS error. Fourteen plans failed current IROC-H criteria; the average TVS improvement of the failing plans was ∼3% and 57% were found to have non-negligible TPS errors. Conclusion: IROC-H has developed an independent recalculation system to identify institutions that have considerable TPS errors. A large number of institutions were found to have non-negligible TPS errors. Even institutions that passed IROC-H criteria could be identified as having a TPS error. Resolution of such errors would improve dose delivery for a large number of IROC-H phantoms and ultimately, patients.
Worldwide Survey of Alcohol and Nonmedical Drug Use among Military Personnel: 1982,
1983-01-01
cell. The first number is an estimate of the percentage of the population with the characteristics that define the cell. The second number, in...multiplying 1.96 times the standard error for that cell. (Obviously, for very small or very large estimates, the respective smallest or largest value in...that the cell proportions estimate the true population value more precisely, and larger standard errors indicate that the true population value is
Cloud-free resolution element statistics program
NASA Technical Reports Server (NTRS)
Liley, B.; Martin, C. D.
1971-01-01
A computer program computes the number of cloud-free elements in the field-of-view and the percentage of the total field-of-view occupied by clouds. The human error associated with visual estimation of cloud statistics from aerial photographs is eliminated.
NASA Astrophysics Data System (ADS)
Purbowati, E.; Lestari, C. M. S.; Ma'ruf, M. J.; Sutaryo, S.
2018-02-01
The objective of this study was to evaluate the breed, age, sex, slaughter weight, carcass weight, and carcass percentage of cattle slaughtered at the Slaughter House in Salatiga, Central Java. The materials used in the study were 156 head of cattle. Incidental sampling was used to identify the breed, age, sex, slaughter weight and carcass weight. The data gathered were analyzed descriptively. The results showed that all of the cattle slaughtered were male. The breeds of the cattle were Frisian Holstein Grade (70.51%), Simmental (15.38+3.21), Simmental-Ongole Grade (5.13%), and Limousine-Ongole Grade (5.77%). The average age of the cattle was 2.34 years, with an average slaughter weight of 529.34 kg and an average carcass weight of 277.61 kg. The average carcass percentage was 52.56%. The conclusion of the study was that most of the cattle slaughtered at the Slaughter House in Salatiga were young Frisian Holstein, their body weights fell within the large frame score, and their carcass percentage was moderate.
NASA Astrophysics Data System (ADS)
Mirbaha, Babak; Saffarzadeh, Mahmoud; AmirHossein Beheshty, Seyed; Aniran, MirMoosa; Yazdani, Mirbahador; Shirini, Bahram
2017-10-01
Analysis of vehicle speed under different weather conditions and traffic characteristics is very useful for traffic planning. Since weather conditions and traffic characteristics vary every day, the prediction of average speed can be useful in traffic management plans. In this study, traffic and weather data for a two-lane highway located in the Northwest of Iran were selected for analysis. After merging the traffic and weather data, a linear regression model was calibrated for speed prediction using STATA 12.1 statistical and data analysis software. Vehicle flow, percentage of heavy vehicles, vehicle flow in the opposing lane, percentage of heavy vehicles in the opposing lane, rainfall (mm), snowfall, and maximum daily wind speed above 13 m/s were found to be significant variables in the model. Results showed that vehicle flow and heavy vehicle percentage had positive coefficients, indicating that as these variables increase, the average vehicle speed in every weather condition also increases. Vehicle flow in the opposing lane, percentage of heavy vehicles in the opposing lane, rainfall amount (mm), snowfall, and maximum daily wind speed above 13 m/s had negative coefficients, indicating that as these variables increase, the average vehicle speed decreases.
A new age-based formula for estimating weight of Korean children.
Park, Jungho; Kwak, Young Ho; Kim, Do Kyun; Jung, Jae Yun; Lee, Jin Hee; Jang, Hye Young; Kim, Hahn Bom; Hong, Ki Jeong
2012-09-01
The objective of this study was to develop and validate a new age-based formula for estimating body weights of Korean children. We obtained body weight and age data from a survey conducted in 2005 by the Korean Pediatric Society that was performed to establish normative values for Korean children. Children aged 0-14 were enrolled, and they were divided into three groups according to age: infants (<12 months), preschool-aged (1-4 years) and school-aged children (5-14 years). Seventy-five percent of all subjects were randomly selected to make a derivation set. Regression analysis was performed in order to produce equations that predict the weight from the age for each group. The linear equations derived from this analysis were simplified to create a weight-estimating formula for Korean children. This formula was then validated using the remaining 25% of the study subjects with mean percentage error and absolute error. To determine whether the new formula accurately predicts actual weights of Korean children, we also compared this new formula to other weight estimation methods (APLS, Shann formula, Leffler formula, Nelson formula and Broselow tape). Data from a total of 124,095 children were included, and 19,854 (16.0%), 40,612 (32.7%) and 63,629 (51.3%) were classified into the infant, preschool-aged and school-aged groups, respectively. Three equations, (age in months+9)/2, 2×(age in years)+9 and 4×(age in years)-1, were derived for the infant, preschool and school-aged groups, respectively. When these equations were applied to the validation set, the actual average weight of those children was 0.4 kg heavier than our estimated weight (95% CI=0.37-0.43, p<0.001). The mean percentage error of our model (+0.9%) was lower than APLS (-11.5%), Shann formula (-8.6%), Leffler formula (-1.7%), Nelson formula (-10.0%), Best Guess formula (+5.0%) and Broselow tape (-4.8%) for all age groups. We developed and validated a simple formula to estimate body weight from the age of Korean children and found that this new formula was more accurate than other weight estimating methods. However, care should be taken when applying this formula to older children because of a large standard deviation of estimated weight. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
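The reported age-based equations can be applied directly; the sketch below encodes them as a single helper, with the age-group boundaries taken from the abstract's grouping (under 12 months, 1-4 years, 5-14 years).

```python
# Sketch of the age-based weight estimates reported in the abstract.
def estimate_weight_kg(age_years: float) -> float:
    """Estimated body weight (kg) of a Korean child from age alone."""
    if age_years < 1:
        age_months = age_years * 12
        return (age_months + 9) / 2       # infants: (months + 9) / 2
    elif age_years < 5:
        return 2 * age_years + 9          # preschool-aged: 2 x age + 9
    else:
        return 4 * age_years - 1          # school-aged: 4 x age - 1

print(estimate_weight_kg(0.5))   # 6-month-old -> 7.5 kg
print(estimate_weight_kg(3))     # -> 15 kg
print(estimate_weight_kg(10))    # -> 39 kg
```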
Liu, Chao; Yao, Yong; Sun, Yun Xu; Xiao, Jun Jun; Zhao, Xin Hui
2010-10-01
A model is proposed to study the average capacity optimization in free-space optical (FSO) channels, accounting for effects of atmospheric turbulence and pointing errors. For a given transmitter laser power, it is shown that both transmitter beam divergence angle and beam waist can be tuned to maximize the average capacity. Meanwhile, their optimum values strongly depend on the jitter and operation wavelength. These results can be helpful for designing FSO communication systems.
Use of failure mode effect analysis (FMEA) to improve medication management process.
Jain, Khushboo
2017-03-13
Purpose Medication management is a complex process, at high risk of error with life-threatening consequences. The focus should be on devising strategies to avoid errors and make the process self-reliable by ensuring prevention of errors and/or error detection at subsequent stages. The purpose of this paper is to use failure mode effect analysis (FMEA), a systematic proactive tool, to identify the likelihood and the causes for the process to fail at various steps and prioritise them to devise risk reduction strategies to improve patient safety. Design/methodology/approach The study was designed as an observational analytical study of the medication management process in the inpatient area of a multi-speciality hospital in Gurgaon, Haryana, India. A team was formed to study the complex process of medication management in the hospital. The FMEA tool was used. Corrective actions were developed based on the prioritised failure modes, which were implemented and monitored. Findings The percentage distribution of medication errors as observed by the team was found to be highest for transcription errors (37 per cent) followed by administration errors (29 per cent), indicating the need to identify the causes and effects of their occurrence. In all, 11 failure modes were identified, out of which the major five were prioritised based on the risk priority number (RPN). The process was repeated after corrective actions were taken, which resulted in a reduction of about 40 per cent on average (and up to around 60 per cent) in the RPN of the prioritised failure modes. Research limitations/implications FMEA is a time-consuming process and requires a multidisciplinary team with a good understanding of the process being analysed. FMEA only helps in identifying the possibilities of a process to fail; it does not eliminate them, and additional efforts are required to develop action plans and implement them. Frank discussion and agreement among the team members are required not only for successfully conducting FMEA but also for implementing the corrective actions. Practical implications FMEA is an effective proactive risk-assessment tool and is a continuous process which can be continued in phases. The corrective actions taken resulted in a reduction in RPN, subject to further evaluation and use by others depending on the facility type. Originality/value The application of the tool helped the hospital in identifying failures in the medication management process, thereby prioritising and correcting them, leading to improvement.
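A minimal FMEA prioritisation sketch. The abstract cites the risk priority number (RPN) without giving its formula, so the conventional severity x occurrence x detection product is assumed; the failure modes and scores below are illustrative, not the hospital's data.

```python
# Minimal FMEA prioritisation sketch. Assumes the conventional
# RPN = severity x occurrence x detection scoring on 1-10 scales.
failure_modes = [
    {"step": "transcription",  "severity": 8, "occurrence": 7, "detection": 6},
    {"step": "administration", "severity": 9, "occurrence": 5, "detection": 5},
    {"step": "dispensing",     "severity": 7, "occurrence": 4, "detection": 4},
]

for fm in failure_modes:
    fm["rpn"] = fm["severity"] * fm["occurrence"] * fm["detection"]

# Highest-RPN failure modes are addressed first.
for fm in sorted(failure_modes, key=lambda fm: fm["rpn"], reverse=True):
    print(f'{fm["step"]:15s} RPN = {fm["rpn"]}')
```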
Effects of orthokeratology on the progression of low to moderate myopia in Chinese children.
He, Mengmei; Du, Yaru; Liu, Qingyu; Ren, Chengda; Liu, Junling; Wang, Qianyi; Li, Li; Yu, Jing
2016-07-27
To investigate the effectiveness of orthokeratology (ortho-k) in reducing the development of myopia in Chinese children with low to moderate myopia. This was a retrospective study. In the ortho-k group, there were 141 subjects, and the average age was (9.43 ± 1.10) years. The average spherical equivalent refractive error (SER) was (-2.74 ± 1.15)D, with examinations performed 1, 7, 30, and 90 days and 12 months after the patients started wearing ortho-k lenses. In the control group, there were 130 subjects, and the average age was (9.37 ± 1.00) years. The average SER was (-2.88 ± 1.39)D, with examinations performed every 6 months. Axial elongation, which is an important parameter reflecting the progression of myopia, was measured at baseline with the same IOLMaster each time by the same masked examiner and was compared between the groups after 1 year. The subjects were divided into two sub-groups according to age to further study the development of myopia at different ages. An unpaired t-test, paired t-test, Chi-square test and Spearman test were performed to analyze the data. After 1 year, the average axial elongation was (0.27 ± 0.17) mm in the ortho-k lens group and (0.38 ± 0.13) mm in the control group, with a significant difference between the groups (P < 0.001). Axial elongation was not correlated with SER but had a negative correlation with initial age (ortho-k group: rs = -0.309, p < 0.01; control group: rs = -0.472, p < 0.01). The percentages of individuals with fast myopic progression (axial elongation > 0.36 mm per year) were 38.0% among younger children (7.00 to 9.40 years) and 24.3% among older children (9.40 to 12.00 years), whereas the respective percentages were 76.5% and 12.9% in the control group. When SER ranged from -5.0D to -6.0D, the axial elongation in the ortho-k group was 57.1% slower than that in the control group. Ortho-k lenses are effective in controlling myopic progression in Chinese children, particularly in younger children and in children with higher myopia.
NASA Astrophysics Data System (ADS)
Son, Young-Sun; Kim, Hyun-cheol
2018-05-01
Chlorophyll (Chl) concentration is one of the key indicators identifying changes in the Arctic marine ecosystem. However, current Chl algorithms are not accurate in the Arctic Ocean due to bio-optical properties that differ from those in lower-latitude oceans. In this study, we evaluated the current Chl algorithms and analyzed the cause of the error in the western coastal waters of Svalbard, which are known to be sensitive to climate change. The NASA standard algorithms were shown to overestimate the Chl concentration in the region. This was due to the high non-algal particle (NAP) absorption and colored dissolved organic matter (CDOM) variability at the blue wavelengths. In addition, at lower Chl concentrations (0.1-0.3 mg m-3), chlorophyll-specific absorption coefficients were ∼2.3 times higher than those of other Arctic oceans. This was another reason for the overestimation of Chl concentration. An OC4-based, regionally tuned Svalbard Chl (SC4) algorithm for retrieving more accurate Chl estimates reduced the mean absolute percentage difference (APD) error from 215% to 49%, the mean relative percentage difference (RPD) error from 212% to 16%, and the normalized root mean square (RMS) error from 211% to 68%. This region has abundant suspended matter due to the melting of tidal glaciers. We evaluated the performance of total suspended matter (TSM) algorithms. Previously published TSM algorithms generally overestimated the TSM concentration in this region. The Svalbard TSM single-band algorithm for the low TSM range (ST-SB-L) decreased the APD and RPD errors by 52% and 14%, respectively, but the RMS error still remained high (105%).
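The validation statistics named above can be computed as below; these are common forms of APD, RPD, and normalized RMS error, and the paper's exact definitions may differ slightly, so treat this as a sketch.

```python
# Common forms of the validation statistics named in the abstract.
import numpy as np

def apd(est, obs):    # mean absolute percentage difference
    return 100 * np.mean(np.abs(est - obs) / obs)

def rpd(est, obs):    # mean relative (signed) percentage difference
    return 100 * np.mean((est - obs) / obs)

def nrms(est, obs):   # RMS difference normalised by the mean observation
    return 100 * np.sqrt(np.mean((est - obs) ** 2)) / np.mean(obs)

obs = np.array([0.12, 0.20, 0.25, 0.30])   # in situ Chl (mg m-3), illustrative
est = np.array([0.30, 0.45, 0.40, 0.55])   # satellite retrieval, illustrative
print(apd(est, obs), rpd(est, obs), nrms(est, obs))
```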
Ten years in the library: new data confirm paleontological patterns
NASA Technical Reports Server (NTRS)
Sepkoski, J. J. Jr; Sepkoski JJ, J. r. (Principal Investigator)
1993-01-01
A comparison is made between compilations of times of origination and extinction of fossil marine animal families published in 1982 and 1992. As a result of ten years of library research, half of the information in the compendia has changed: families have been added and deleted, low-resolution stratigraphic data have been improved, and intervals of origination and extinction have been altered. Despite these changes, apparent macroevolutionary patterns for the entire marine fauna have remained constant. Diversity curves compiled from the two data bases are very similar, with a goodness-of-fit of 99%; the principal difference is that the 1992 curve averages 13% higher than the older curve. Both numbers and percentages of origination and extinction also match well, with fits ranging from 83% to 95%. All major events of radiation and extinction are identical. Therefore, errors in large paleontological data bases and arbitrariness of included taxa are not necessarily impediments to the analysis of pattern in the fossil record, so long as the data are sufficiently numerous.
Total body composition by dual-photon (153Gd) absorptiometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mazess, R.B.; Peppler, W.W.; Gibbons, M.
1984-10-01
The lean-fat composition (%FATR) of soft tissue and the mineral mass of the skeleton were determined in vivo using dual-photon (153Gd) absorptiometry (dose under 2 mrem). A rectilinear raster scan was made over the entire body in 18 subjects (14 female, 4 male). Single-photon absorptiometry (125I) measured bone mineral content on the radius. Percentage fat (%FATD) was determined in the same subjects using body density (from underwater weighing with correction for residual lung volume). Lean body mass (LBM) was determined using both %FATR and %FATD. Percentage fat from absorptiometry and from underwater density were correlated (r = 0.87). The deviation of %FATD from %FATR was due to the amount of skeletal mineral as a percentage of the LBM (r = 0.90). Therefore, skeletal variability, even in normal subjects, where mineral ranges only from 4 to 8% of the LBM, essentially precludes use of body density as a composition indicator unless skeletal mass is measured. Anthropometry (fatfolds and weight) predicted %FATR and LBM at least as well as did underwater density. The predictive error of %FATR from fatfolds was 5% while the predictive error in predicting LBM from anthropometry was 2 to 3 kg (3%).
Developing a Measure of Traffic Calming Associated with Elementary School Students’ Active Transport
Nicholson, Lisa M.; Turner, Lindsey; Slater, Sandy J.; Abuzayd, Haytham; Chriqui, Jamie F.; Chaloupka, Frank
2014-01-01
The objective of this study is to develop a measure of traffic calming with nationally available GIS data from NAVTEQ and to validate the traffic calming index with the percentage of children reported by school administrators as walking or biking to school, using data from a nationally representative sample of elementary schools in 2006-2010. Specific models, with and without correlated errors, examined associations of objective GIS measures of the built environment, nationally available from NAVTEQ, with the latent construct of traffic calming. The best fit model for the latent traffic calming construct was determined to be a five factor model including objective measures of intersection density, count of medians/dividers, count of low mobility streets, count of roundabouts, and count of on-street parking availability, with no correlated errors among items. This construct also proved to be a good fit for the full measurement model when the outcome measure of percentage of students walking or biking to school was added to the model. The traffic calming measure was strongly, significantly, and positively correlated with the percentage of students reported as walking or biking to school. Applicability of results to public health and transportation policies and practices are discussed. PMID:25506255
42 CFR 423.286 - Rules regarding premiums.
Code of Federal Regulations, 2011 CFR
2011-10-01
... section for the difference between the bid and the national average monthly bid amount, any supplemental... percentage as specified in paragraph (b) of this section; and (2) National average monthly bid amount... reflect difference between bid and national average bid. If the amount of the standardized bid amount...
NASA Astrophysics Data System (ADS)
Magnuson, Brian
A proof-of-concept software-in-the-loop study is performed to assess the accuracy of predicted net and charge-gaining energy consumption for potential effective use in optimizing powertrain management of hybrid vehicles. With promising results of improving the fuel efficiency of a thermostatic control strategy for a series plug-in hybrid-electric vehicle by 8.24%, the route and speed prediction machine learning algorithms are redesigned and implemented for real-world testing in a stand-alone C++ code-base to ingest map data, learn and predict driver habits, and store driver data for fast startup and shutdown of the controller or computer used to execute the compiled algorithm. Speed prediction is performed using a multi-layer, multi-input, multi-output neural network using feed-forward prediction and gradient descent through back-propagation training. Route prediction utilizes a Hidden Markov Model with a recurrent forward algorithm for prediction and multi-dimensional hash maps to store state and state distribution, constraining associations between atomic road segments and end destinations. Predicted energy is calculated using the predicted time-series speed and elevation profile over the predicted route and the road-load equation. Testing of the code-base is performed over a known road network spanning 24x35 blocks on the south hill of Spokane, Washington. A large set of training routes is traversed once to add randomness to the route prediction algorithm, and a subset of the training routes, the testing routes, is traversed to assess the accuracy of the net and charge-gaining predicted energy consumption. Each test route is traveled a random number of times with varying speed conditions from traffic and pedestrians to add randomness to speed prediction. Prediction data are stored and analyzed in a post-process Matlab script. The aggregated results and analysis of all traversals of all test routes reflect the performance of the Driver Prediction algorithm. The error of average energy gained through charge-gaining events is 31.3% and the error of average net energy consumed is 27.3%. The average delta and average standard deviation of the delta of predicted energy gained through charge-gaining events are 0.639 and 0.601 Wh, respectively, for individual time-series calculations. Similarly, the average delta and average standard deviation of the delta of the predicted net energy consumed are 0.567 and 0.580 Wh, respectively, for individual time-series calculations. The average delta and standard deviation of the delta of the predicted speed are 1.60 and 1.15, respectively, also for the individual time-series measurements. The accuracy of route prediction is 91%. Overall, test routes are traversed 151 times for a total test distance of 276.4 km.
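A minimal sketch of the energy step described above: integrating road-load power over a predicted speed and elevation profile. The vehicle parameters are illustrative assumptions, not values from the thesis, and drivetrain losses and regeneration limits are ignored.

```python
# Sketch of the energy calculation: integrate road-load power over a predicted
# speed/elevation profile. Parameters are illustrative, not from the thesis.
import numpy as np

def road_load_energy_wh(speed_mps, elevation_m, dt_s,
                        mass=1600.0, cd=0.30, area=2.3, crr=0.010,
                        rho=1.2, g=9.81):
    v = np.asarray(speed_mps, dtype=float)
    slope = np.gradient(np.asarray(elevation_m, dtype=float), dt_s) / np.maximum(v, 0.1)
    accel = np.gradient(v, dt_s)
    force = (0.5 * rho * cd * area * v ** 2   # aerodynamic drag
             + crr * mass * g                 # rolling resistance
             + mass * g * slope               # grade load (small-angle approx.)
             + mass * accel)                  # inertial load
    power_w = force * v                       # tractive power at the wheels
    # Negative power (downhill / braking) shows up as charge-gaining energy.
    return np.sum(power_w * dt_s) / 3600.0    # watt-hours

t = np.arange(0, 60, 1.0)                     # 60 s segment, 1 s steps
speed = 10 + 3 * np.sin(t / 10)               # m/s, illustrative profile
elevation = 0.5 * t                           # steady climb, illustrative
print(round(road_load_energy_wh(speed, elevation, dt_s=1.0), 1), "Wh")
```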
Andreski, Michael; Myers, Megan; Gainer, Kate; Pudlo, Anthony
Determine the effects of an 18-month pilot project using tech-check-tech in 7 community pharmacies on 1) the rate of dispensing errors not identified during refill prescription final product verification; 2) pharmacist workday task composition; and 3) the amount of patient care services provided and the reimbursement status of those services. Pretest-posttest quasi-experimental study in which baseline and study periods were compared. Pharmacists and pharmacy technicians in 7 community pharmacies in Iowa. The outcome measures were 1) percentage of technician-verified refill prescriptions where dispensing errors were not identified on final product verification; 2) percentage of time spent by pharmacists in dispensing, management, patient care, practice development, and other activities; 3) the number of pharmacist patient care services provided per pharmacist hours worked; and 4) percentage of time that technician product verification was used. There was no significant difference in overall errors (0.2729% vs. 0.5124%, P = 0.513), patient safety errors (0.0525% vs. 0.0651%, P = 0.837), or administrative errors (0.2204% vs. 0.4784%, P = 0.411). Pharmacists' time in dispensing significantly decreased (67.3% vs. 49.06%, P = 0.005), and time in direct patient care (19.96% vs. 34.72%, P = 0.003) increased significantly. Time in other activities did not significantly change. Reimbursable services per pharmacist hour (0.11 vs. 0.30, P = 0.129) did not significantly change. Non-reimbursable services increased significantly (2.77 vs. 4.80, P = 0.042). Total services significantly increased (2.88 vs. 5.16, P = 0.044). Pharmacy technician product verification of refill prescriptions preserved dispensing safety while significantly increasing the time spent in delivery of pharmacist-provided patient care services. The total number of pharmacist services provided per hour also increased significantly, driven primarily by a significant increase in the number of non-reimbursed services. This was most likely due to the increased time available to provide patient care. Reimbursed services per hour did not increase significantly, most likely due to a lack of payers. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
The Use of Neural Networks in Identifying Error Sources in Satellite-Derived Tropical SST Estimates
Lee, Yung-Hsiang; Ho, Chung-Ru; Su, Feng-Chun; Kuo, Nan-Jung; Cheng, Yu-Hsin
2011-01-01
A neural network model of data mining is used to identify error sources in satellite-derived tropical sea surface temperature (SST) estimates from thermal infrared sensors onboard the Geostationary Operational Environmental Satellite (GOES). By using the Back Propagation Network (BPN) algorithm, it is found that air temperature, relative humidity, and wind speed variation are the major factors causing the errors of GOES SST products in the tropical Pacific. The accuracy of SST estimates is also improved by the model. The root mean square error (RMSE) for the daily SST estimate is reduced from 0.58 K to 0.38 K, and the mean absolute percentage error (MAPE) is 1.03%. For the hourly mean SST estimate, the RMSE is also reduced from 0.66 K to 0.44 K, and the MAPE is 1.3%. PMID:22164030
Investigation of writing error in staggered heated-dot magnetic recording systems
NASA Astrophysics Data System (ADS)
Tipcharoen, W.; Warisarn, C.; Tongsomporn, D.; Karns, D.; Kovintavewat, P.
2017-05-01
To achieve an ultra-high storage capacity, heated-dot magnetic recording (HDMR) has been proposed, which heats a bit-patterned medium before recording data. Generally, an error during the HDMR writing process comes from several sources; however, we only investigate the effects of staggered island arrangement, island size fluctuation caused by imperfect fabrication, and main pole position fluctuation. Simulation results demonstrate that the writing error can be minimized by using a staggered array (hexagonal lattice) instead of a square array. Under the effect of main pole position fluctuation, the writing error is higher than in the system without main pole position fluctuation. Finally, we found that the error percentage can drop below 10% when the island size is 8.5 nm and the standard deviation of the island size is 1 nm in the absence of main pole jitter.
Estimation of Rainfall Sampling Uncertainty: A Comparison of Two Diverse Approaches
NASA Technical Reports Server (NTRS)
Steiner, Matthias; Zhang, Yu; Baeck, Mary Lynn; Wood, Eric F.; Smith, James A.; Bell, Thomas L.; Lau, William K. M. (Technical Monitor)
2002-01-01
The spatial and temporal intermittence of rainfall causes the averages of satellite observations of rain rate to differ from the "true" average rain rate over any given area and time period, even if the satellite observations are perfectly accurate. The difference between satellite averages based on occasional observation by satellite systems and the continuous-time average of rain rate is referred to as sampling error. In this study, rms sampling error estimates are obtained for average rain rates over boxes 100 km, 200 km, and 500 km on a side, for averaging periods of 1 day, 5 days, and 30 days. The study uses a multi-year, merged radar data product provided by Weather Services International Corp. at a resolution of 2 km in space and 15 min in time, over an area of the central U.S. extending from 35N to 45N in latitude and 100W to 80W in longitude. The intervals between satellite observations are assumed to be equal, and similar in size to what present and future satellite systems are able to provide (from 1 h to 12 h). The sampling error estimates are obtained using a resampling method called "resampling by shifts," and are compared to sampling error estimates proposed by Bell based on earlier work by Laughlin. The resampling estimates are found to scale with areal size and time period as the theory predicts. The dependence on average rain rate and time interval between observations is also similar to what the simple theory suggests.
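One plausible reading of "resampling by shifts", sketched on synthetic data: subsample a dense rain-rate series at a fixed interval at every possible phase shift, and take the rms deviation of the sampled means from the continuous mean. The data and intervals below are illustrative only.

```python
# Minimal interpretation of "resampling by shifts" on a synthetic rain series.
import numpy as np

rng = np.random.default_rng(1)
# 30 days of 15-min rain rates (mm/h), intermittent by construction.
rain = rng.exponential(2.0, size=30 * 96) * (rng.random(30 * 96) < 0.1)

def rms_sampling_error(series, interval):
    true_mean = series.mean()
    sampled = [series[shift::interval].mean() for shift in range(interval)]
    return np.sqrt(np.mean((np.array(sampled) - true_mean) ** 2))

for hours in (1, 3, 6, 12):
    k = hours * 4                      # 15-min steps per sampling interval
    print(f"{hours:2d} h sampling: rms error = {rms_sampling_error(rain, k):.3f} mm/h")
```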
Quantifying the Validity of Routine Neonatal Healthcare Data in the Greater Accra Region, Ghana
Kayode, Gbenga A.; Amoakoh-Coleman, Mary; Brown-Davies, Charles; Grobbee, Diederick E.; Agyepong, Irene Akua; Ansah, Evelyn; Klipstein-Grobusch, Kerstin
2014-01-01
Objectives The District Health Information Management System–2 (DHIMS–2) is the database for storing health service data in Ghana, and similar to other low and middle income countries, paper-based data collection is being used by the Ghana Health Service. As the DHIMS-2 database has not been validated before, this study aimed to evaluate its validity. Methods Seven out of ten districts in the Greater Accra Region were randomly sampled; the district hospital and a polyclinic in each district were recruited for validation. Seven pre-specified neonatal health indicators were considered for validation: antenatal registrants, deliveries, total births, live birth, stillbirth, low birthweight, and neonatal death. Data were extracted on these health indicators from the primary data (hospital paper-registers) recorded from January to March 2012. We examined all the data captured during this period as these data have been uploaded to the DHIMS-2 database. The differences between the values of the health indicators obtained from the primary data and those of the facility and DHIMS–2 database were used to assess the accuracy of the database, while its completeness was estimated by the percentage of missing data in the primary data. Results About 41,000 data points were assessed and, in almost all the districts, the error rates of the DHIMS-2 data were less than 2.1% while the percentages of missing data were below 2%. At the regional level, almost all the health indicators had an error rate below 1%, while the overall error rate of the DHIMS-2 database was 0.68% (95% CI = 0.61–0.75) and the percentage of missing data was 3.1% (95% CI = 2.96–3.24). Conclusion This study demonstrated that the percentage of missing data in the DHIMS-2 database was negligible while its accuracy was close to the acceptable range for high quality data. PMID:25144222
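A small sketch of the validation arithmetic described above (error rate from register-database disagreements, completeness from missing register fields); the data frame contents and the completeness denominator are illustrative assumptions.

```python
# Sketch of the error-rate and completeness calculations with illustrative data.
import pandas as pd

df = pd.DataFrame({
    "indicator": ["live_birth", "stillbirth", "low_birthweight", "neonatal_death"],
    "register":  [412, 9, 35, 4],     # counts abstracted from paper registers
    "dhims2":    [410, 9, 36, 4],     # counts found in the database
    "missing":   [3, 0, 1, 0],        # register entries with the field blank
})

df["abs_diff"] = (df["register"] - df["dhims2"]).abs()
error_rate = 100 * df["abs_diff"].sum() / df["register"].sum()
missing_pct = 100 * df["missing"].sum() / (df["register"].sum() + df["missing"].sum())
print(f"error rate {error_rate:.2f}%, missing data {missing_pct:.2f}%")
```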
Sam, Aaseer Thamby; Lian Jessica, Looi Li; Parasuraman, Subramani
2015-01-01
Objectives: To retrospectively determine the extent and types of adverse drug events (ADEs) from patient case sheets and identify the contributing factors of medication errors. To assess causality and severity using the World Health Organization (WHO) probability scale and Hartwig's scale, respectively. Methods: One hundred patient case sheets were randomly selected, and a modified version of the Institute for Healthcare Improvement (IHI) Global Trigger Tool was utilized to identify the ADEs; causality and severity were assessed utilizing the WHO probability scale and Hartwig's severity assessment scale, respectively. Results: In total, 153 adverse events (AEs) were identified using the IHI Global Trigger Tool. The majority of the AEs were due to medication errors (46.41%), followed by 60 adverse drug reactions (ADRs), 15 therapeutic failure incidents, and 7 overdose cases. Out of the 153 AEs, 60 were due to ADRs such as rashes, nausea, and vomiting. Therapeutic failure contributed 9.80% of the AEs, while overdose contributed 4.58% of the total 153 AEs. Using the trigger tools, we were able to detect 45 positive triggers in 36 patient records. Among them, 19 AEs were identified in 15 patient records. The rate of AEs per 100 patients was 17%. The average ADEs/1000 doses was 2.03% (calculated). Conclusion: The IHI Global Trigger Tool is an effective method to help provisionally registered pharmacists identify ADEs more quickly. PMID:25767366
Zhang, Shengzhi; Yu, Shuai; Liu, Chaojun; Liu, Sheng
2016-06-01
Tracking the position of a pedestrian is urgently demanded when the most commonly used GPS (Global Positioning System) is unavailable. Benefiting from their small size, low power consumption, and relatively high reliability, micro-electro-mechanical system sensors are well suited for GPS-denied indoor pedestrian heading estimation. In this paper, a real-time miniature orientation determination system (MODS) was developed for indoor heading and trajectory tracking based on a novel dual-linear Kalman filter. The proposed filter precludes the impact of geomagnetic distortions on the pitch and roll that the heading is subject to. A robust calibration approach was designed to improve the accuracy of the sensor measurements based on a unified sensor model. Online tests were performed on the MODS with an improved turntable. The results demonstrate that the average RMSE (root-mean-square error) of heading estimation is less than 1°. Indoor heading experiments were carried out with the MODS mounted on the shoe of a pedestrian. In addition, we integrated the existing MODS into an indoor pedestrian dead-reckoning application as an example of its utility in realistic actions. A human attitude-based walking model was developed to calculate the walking distance. Test results indicate that the mean percentage error of indoor trajectory tracking is 2% of the total walking distance.
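A minimal dead-reckoning update consistent with the application described above: advance a 2-D position from a heading estimate and a per-step stride length. The dual-linear Kalman filter and the attitude-based stride model themselves are not reproduced; the headings and strides below are illustrative.

```python
# Minimal pedestrian dead-reckoning update from heading and stride length.
import math

def dead_reckon(start_xy, headings_deg, stride_lengths_m):
    x, y = start_xy
    track = [(x, y)]
    for heading, stride in zip(headings_deg, stride_lengths_m):
        x += stride * math.sin(math.radians(heading))   # east component
        y += stride * math.cos(math.radians(heading))   # north component
        track.append((x, y))
    return track

# Four steps heading roughly north-east, ~0.7 m stride each (illustrative).
print(dead_reckon((0.0, 0.0), [45, 44, 46, 45], [0.70, 0.72, 0.69, 0.70]))
```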
Estimating tree bole volume using artificial neural network models for four species in Turkey.
Ozçelik, Ramazan; Diamantopoulou, Maria J; Brooks, John R; Wiant, Harry V
2010-01-01
Tree bole volumes of 89 Scots pine (Pinus sylvestris L.), 96 Brutian pine (Pinus brutia Ten.), 107 Cilicica fir (Abies cilicica Carr.) and 67 Cedar of Lebanon (Cedrus libani A. Rich.) trees were estimated using Artificial Neural Network (ANN) models. Neural networks offer a number of advantages including the ability to implicitly detect complex nonlinear relationships between input and output variables, which is very helpful in tree volume modeling. Two different neural network architectures were used and produced the Back propagation (BPANN) and the Cascade Correlation (CCANN) Artificial Neural Network models. In addition, tree bole volume estimates were compared to other established tree bole volume estimation techniques including the centroid method, taper equations, and existing standard volume tables. An overview of the features of ANNs and traditional methods is presented and the advantages and limitations of each one of them are discussed. For validation purposes, actual volumes were determined by aggregating the volumes of measured short sections (average 1 meter) of the tree bole using Smalian's formula. The results reported in this research suggest that the selected cascade correlation artificial neural network (CCANN) models are reliable for estimating the tree bole volume of the four examined tree species since they gave unbiased results and were superior to almost all methods in terms of error (%) expressed as the mean of the percentage errors. 2009 Elsevier Ltd. All rights reserved.
Microgrid optimal scheduling considering impact of high penetration wind generation
NASA Astrophysics Data System (ADS)
Alanazi, Abdulaziz
The objective of this thesis is to study the impact of high-penetration wind energy on the economic and reliable operation of microgrids. Wind power is variable, i.e., constantly changing, and nondispatchable, i.e., it cannot be controlled by the microgrid controller. Thus, accurate forecasting of wind power is an essential task in studying its impacts on microgrid operation. Two commonly used forecasting methods, the Autoregressive Integrated Moving Average (ARIMA) and the Artificial Neural Network (ANN), are used in this thesis to improve the wind power forecasting. The forecasting error is calculated using the Mean Absolute Percentage Error (MAPE) and is improved using the ANN. The wind forecast is further used in the microgrid optimal scheduling problem. The microgrid optimal scheduling is performed by developing a viable model for security-constrained unit commitment (SCUC) based on a mixed-integer linear programming (MILP) method. The proposed SCUC is solved for various wind penetration levels, and the relationship between the total cost and the wind power penetration is found. In order to reduce microgrid power transfer fluctuations, an additional constraint is proposed and added to the SCUC formulation. The new constraint controls the time-based fluctuations. The impact of the constraint on the microgrid SCUC results is tested and validated with numerical analysis. Finally, the applicability of the proposed models is demonstrated through numerical simulations.
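A toy single-period commitment and dispatch with a fixed wind forecast, sketched with PuLP. It is far simpler than the thesis's SCUC (no time coupling, reserves, or fluctuation constraint), and the unit data are illustrative assumptions.

```python
# Toy single-period commitment/dispatch with a nondispatchable wind forecast.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

units = {"G1": {"pmin": 10, "pmax": 60, "cost": 40},
         "G2": {"pmin": 5,  "pmax": 40, "cost": 55}}
demand = 70.0          # MW
wind_forecast = 25.0   # MW, cannot be controlled by the microgrid controller

prob = LpProblem("toy_commitment", LpMinimize)
u = {g: LpVariable(f"u_{g}", cat="Binary") for g in units}   # on/off status
p = {g: LpVariable(f"p_{g}", lowBound=0) for g in units}     # dispatch (MW)

prob += lpSum(units[g]["cost"] * p[g] for g in units)        # fuel cost
prob += lpSum(p.values()) + wind_forecast == demand          # power balance
for g in units:                                              # capacity limits
    prob += p[g] <= units[g]["pmax"] * u[g]
    prob += p[g] >= units[g]["pmin"] * u[g]

prob.solve()
for g in units:
    print(g, "on" if value(u[g]) > 0.5 else "off", value(p[g]), "MW")
```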
Leak detection in medium density polyethylene (MDPE) pipe using pressure transient method
NASA Astrophysics Data System (ADS)
Amin, M. M.; Ghazali, M. F.; PiRemli, M. A.; Hamat, A. M. A.; Adnan, N. F.
2015-12-01
Water is an essential commodity in the daily life of an average person, from personal uses by residential or commercial consumers to industrial utilization. This study focuses on the detection of leaks in medium density polyethylene (MDPE) pipe using the pressure transient method. This type of pipe is used to analyze the position of the leakage in the pipeline by means of the Ensemble Empirical Mode Decomposition (EEMD) method with signal masking. A water hammer induces an impulse throughout the pipeline that turns into a surge of pressure waves in the system. Thus, a solenoid valve is used to create a water hammer through the pipelines. The data from the pressure sensor are collected using DASYLab software. The pressure signal is decomposed into a series of wave components using the EEMD signal-masking method in MATLAB software. The decomposed signals are then carefully selected to identify the intrinsic mode functions (IMFs). These IMFs are displayed using the Hilbert transform (HT) spectrum. The IMF signals are analysed to capture the differences. The analyzed data are compared with the actual measurement of the leakage location in terms of percentage error. The recorded error is below 1%, which shows that this method is highly reliable and accurate for leak detection.
Changes in mineral composition of eggshells from black ducks and mallards fed DDE in the diet
Longcore, J.R.; Samson, F.B.; Kreitzer, J.F.; Spann, J.W.
1971-01-01
Diets containing 10 and 30 ppm (dry weight) DDE were fed to black ducks, and diets containing 1, 5, and 10 ppm (dry weight) DDE were fed to mallards. Among the results were the following changes in black duck eggshell composition: (a) significant increase in the percentage of Mg, (b) significant decreases in Ba and Sr, (c) increases (which approached significance) in average percentage of eggshell Na and Cu, (d) a decrease in shell Ca which approached significance, (e) patterns of mineral correlations which in some instances were distinct to dosage groups, and (f) inverse correlations in the control group between eggshell thickness Mg and Na. Changes in mallard eggshells were: (a) significant increase in percentage of magnesium at 5 and 10 ppm DDE, (b) significant decrease in Al at 5 and 10 ppm DDE, (c) a significant decrease in Ca from eggshells from the 10 ppm DDE group, and (d) an increase in average percentage of Na in eggshells from DDE dosed ducks which approached significance.
A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.
Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing
2018-01-15
Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and the support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were always set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to raise its ability to avoid local optima. To verify the performance of NAPSO-SVM, three types of algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are applied as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has better prediction precision and smaller prediction errors, and it is an effective method for predicting the dynamic measurement errors of sensors.
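A sketch of the prediction task only: an SVR on lagged values of a synthetic dynamic-error series, with C and gamma tuned by a plain random search standing in for the paper's NAPSO, and scored with RMSE and MAPE. Everything here is illustrative, not the authors' setup.

```python
# SVR on lagged error values; random search replaces NAPSO in this sketch.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import RandomizedSearchCV, TimeSeriesSplit

rng = np.random.default_rng(5)
t = np.arange(400)
error_series = 2.0 + 0.5 * np.sin(t / 15) + 0.05 * rng.standard_normal(400)

lags = 5
X = np.column_stack([error_series[i:len(error_series) - lags + i] for i in range(lags)])
y = error_series[lags:]
X_train, X_test, y_train, y_test = X[:300], X[300:], y[:300], y[300:]

search = RandomizedSearchCV(
    SVR(kernel="rbf"),
    param_distributions={"C": np.logspace(-1, 3, 50), "gamma": np.logspace(-3, 1, 50)},
    n_iter=20, cv=TimeSeriesSplit(n_splits=3), random_state=0,
)
search.fit(X_train, y_train)
pred = search.predict(X_test)

rmse = np.sqrt(np.mean((pred - y_test) ** 2))
mape = 100 * np.mean(np.abs((pred - y_test) / y_test))
print(search.best_params_, round(rmse, 4), round(mape, 2))
```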
ERIC Educational Resources Information Center
Gaines, Gale F.
This report presents teacher salary data from the Southern Regional Education Board (SREB). There is a gap between SREB states' average teacher salaries and the national average. Over the last 5 years, SREB teacher salaries increased by an average of 14.4 percent; the national increase was nearly 2 percentage points lower. Georgia and North…
7 CFR 1714.8 - Hardship rate loans.
Code of Federal Regulations, 2011 CFR
2011-01-01
... either the average per capita income of the residents receiving electric service from the borrower is less than the average per capita income of the residents of the state in which the borrower provides... the consumer income tests will be determined on a weighted average based on the percentage of the...
Hammami, Naïma; Mertens, Karl; Overholser, Rosanna; Goetghebeur, Els; Catry, Boudewijn; Lambert, Marie-Laurence
2016-05-01
Surveillance of central-line-associated bloodstream infections requires the labor-intensive counting of central-line days (CLDs). This workload could be reduced by sampling. Our objective was to evaluate the accuracy of various sampling strategies in the estimation of CLDs in intensive care units (ICUs) and to establish a set of rules to identify optimal sampling strategies depending on ICU characteristics. We analyzed existing data collected according to the European protocol for patient-based surveillance of ICU-acquired infections in Belgium between 2004 and 2012. CLD data were reported by 56 ICUs in 39 hospitals during 364 trimesters. We compared estimated CLD data obtained from weekly and monthly sampling schemes with the observed exhaustive CLD data over the trimester by assessing the CLD percentage error (i.e., [observed CLDs - estimated CLDs]/observed CLDs). We identified predictors of improved accuracy using linear mixed models. When sampling once per week or 3 times per month, 80% of ICU trimesters had a CLD percentage error within 10%. When sampling twice per week, this was true for >90% of ICU trimesters. Sampling on Tuesdays provided the best estimations. In the linear mixed model, the observed CLD count was the best predictor of a smaller percentage error. The following sampling strategies provided an estimate within 10% of the actual CLD for 97% of the ICU trimesters with 90% confidence: 3 times per month in an ICU with >650 CLDs per trimester, or each Tuesday in an ICU with >480 CLDs per trimester. Sampling of CLDs provides an acceptable alternative to daily collection of CLD data.
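The sampling idea reduces to simple arithmetic; the sketch below estimates a trimester CLD total from one fixed weekday per week and reports the percentage error against the exhaustive daily count, using synthetic daily counts for illustration.

```python
# Estimate a trimester CLD total from weekly (Tuesday) samples; synthetic data.
import numpy as np

rng = np.random.default_rng(2)
daily_clds = rng.poisson(8, size=91)           # ~13 weeks of daily CLD counts

observed_total = daily_clds.sum()
tuesday_counts = daily_clds[1::7]              # one fixed weekday per week
estimated_total = tuesday_counts.mean() * len(daily_clds)

pct_error = 100 * (observed_total - estimated_total) / observed_total
print(observed_total, round(estimated_total, 1), f"{pct_error:.1f}% error")
```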
Are Nomothetic or Ideographic Approaches Superior in Predicting Daily Exercise Behaviors?
Cheung, Ying Kuen; Hsueh, Pei-Yun Sabrina; Qian, Min; Yoon, Sunmoo; Meli, Laura; Diaz, Keith M; Schwartz, Joseph E; Kronish, Ian M; Davidson, Karina W
2017-01-01
The understanding of how stress influences health behavior can provide insights into developing healthy lifestyle interventions. This understanding is traditionally attained through observational studies that examine associations at a population level. This nomothetic approach, however, is fundamentally limited by the fact that the environment-person milieu that constitutes stress exposure and experience can vary substantially between individuals, and the modifiable elements of these exposures and experiences are individual-specific. With recent advances in smartphone and sensing technologies, it is now possible to conduct idiographic assessment in users' own environment, leveraging the full-range observations of actions and experiences that result in differential response to naturally occurring events. The aim of this paper is to explore the hypothesis that an ideographic N-of-1 model can better capture an individual's stress-behavior pathway (or the lack thereof) and provide useful person-specific predictors of exercise behavior. This paper used the data collected in an observational study in 79 participants who were followed for up to a 1-year period, wherein their physical activity was continuously and objectively monitored by actigraphy and their stress experience was recorded via ecological momentary assessment on a mobile app. In addition, our analyses considered exogenous and environmental variables retrieved from public archives, such as day of the week, daylight time, temperature and precipitation. Leveraging the multiple data sources, we developed prediction algorithms for exercise behavior using random forest and classification tree techniques under both a nomothetic approach and an N-of-1 approach. The two approaches were compared based on classification errors in predicting personalized exercise behavior. Eight factors were selected by random forest for the nomothetic decision model, which was used to predict whether a participant would exercise on a particular day. The predictors included previous exercise behavior, emotional factors (e.g., midday stress), external factors such as weather (e.g., temperature), and self-determination factors (e.g., expectation of exercise). The nomothetic model yielded an average classification error of 36%. The ideographic N-of-1 models used on average about two predictors for each individual and had an average classification error of 25%, which represented an improvement of 11 percentage points. Compared to the traditional one-size-fits-all, nomothetic model that generalizes population evidence to individuals, the proposed N-of-1 model can better capture individual differences in stress-behavior pathways. In this paper, we demonstrate it is feasible to perform personalized exercise behavior prediction, mainly made possible by mobile health technology and machine learning analytics. Schattauer GmbH.
NASA Technical Reports Server (NTRS)
Maslanik, J. A.
1992-01-01
Effects of wind, water vapor, and cloud liquid water on ice concentration and ice type calculated from passive microwave data are assessed through radiative transfer calculations and observations. These weather effects can cause overestimates in ice concentration and more substantial underestimates in multi-year ice percentage by decreasing polarization and by decreasing the gradient between frequencies. The effect of surface temperature and air temperature on the magnitudes of weather-related errors is small for ice concentration and substantial for multiyear ice percentage. The existing weather filter in the NASA Team Algorithm addresses only weather effects over open ocean; the additional use of local open-ocean tie points and an alternative weather correction for the marginal ice zone can further reduce errors due to weather. Ice concentrations calculated using 37 versus 18 GHz data show little difference in total ice covered area, but greater differences in intermediate concentration classes. Given the magnitude of weather-related errors in ice classification from passive microwave data, corrections for weather effects may be necessary to detect small trends in ice covered area and ice type for climate studies.
McCowan, Peter M; Asuni, Ganiyu; Van Uytven, Eric; VanBeek, Timothy; McCurdy, Boyd M C; Loewen, Shaun K; Ahmed, Naseer; Bashir, Bashir; Butler, James B; Chowdhury, Amitava; Dubey, Arbind; Leylek, Ahmet; Nashed, Maged
2017-04-01
To report findings from an in vivo dosimetry program implemented for all stereotactic body radiation therapy patients over a 31-month period and discuss the value and challenges of utilizing in vivo electronic portal imaging device (EPID) dosimetry clinically. From December 2013 to July 2016, 117 stereotactic body radiation therapy-volumetric modulated arc therapy patients (100 lung, 15 spine, and 2 liver) underwent 602 EPID-based in vivo dose verification events. A developed model-based dose reconstruction algorithm calculates the 3-dimensional dose distribution to the patient by back-projecting the primary fluence measured by the EPID during treatment. The EPID frame-averaging was optimized in June 2015. For each treatment, a 3%/3-mm γ comparison between our EPID-derived dose and the Eclipse AcurosXB-predicted dose to the planning target volume (PTV) and the ≥20% isodose volume was performed. Alert levels were defined as γ pass rates <85% (lung and liver) and <80% (spine). Investigations were carried out for all fractions exceeding the alert level and were classified as follows: EPID-related, algorithmic, patient setup, anatomic change, or unknown/unidentified errors. The percentages of fractions exceeding the alert levels were 22.6% for lung before frame-average optimization and 8.0% for lung, 20.0% for spine, and 10.0% for liver after frame-average optimization. Overall, mean (± standard deviation) planning target volume γ pass rates were 90.7% ± 9.2%, 87.0% ± 9.3%, and 91.2% ± 3.4% for the lung, spine, and liver patients, respectively. Results from the clinical implementation of our model-based in vivo dose verification method using on-treatment EPID images are reported. The method is demonstrated to be valuable for routine clinical use for verifying delivered dose as well as for detecting errors. Copyright © 2017 Elsevier Inc. All rights reserved.
Time series analysis of temporal networks
NASA Astrophysics Data System (ADS)
Sikdar, Sandipan; Ganguly, Niloy; Mukherjee, Animesh
2016-01-01
A common but important feature of all real-world networks is that they are temporal in nature, i.e., the network structure changes over time. Due to this dynamic nature, it becomes difficult to propose suitable growth models that can explain the various important characteristic properties of these networks. In fact, in many application-oriented studies only knowing these properties is sufficient. For instance, if one wishes to launch a targeted attack on a network, this can be done even without knowledge of the full network structure; rather, an estimate of some of the properties is sufficient to launch the attack. We, in this paper, show that even if the network structure at a future time point is not available, one can still manage to estimate its properties. We propose a novel method to map a temporal network to a set of time series instances, analyze them, and, using a standard forecast model of time series, try to predict the properties of a temporal network at a later time instance. To this aim, we consider eight properties such as number of active nodes, average degree, clustering coefficient, etc., and apply our prediction framework to them. We mainly focus on the temporal network of human face-to-face contacts and observe that it represents a stochastic process with memory that can be modeled as Auto-Regressive-Integrated-Moving-Average (ARIMA). We use cross-validation techniques to find the percentage accuracy of our predictions. An important observation is that the frequency-domain properties of the time series obtained from spectrogram analysis could be used to refine the prediction framework by identifying beforehand the cases where the error in prediction is likely to be high. This leads to an improvement of 7.96% (for error level ≤20%) in prediction accuracy on average across all datasets. As an application we show how such a prediction scheme can be used to launch targeted attacks on temporal networks. Contribution to the Topical Issue "Temporal Network Theory and Applications", edited by Petter Holme.
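A minimal sketch of the forecasting step on a synthetic "average degree" series using statsmodels ARIMA; the model order and data are illustrative and not taken from the paper.

```python
# Fit an ARIMA model to a synthetic network-property series and forecast ahead.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
# Synthetic average-degree series with slow drift plus noise.
avg_degree = 5 + 0.02 * np.arange(200) + rng.normal(0, 0.3, 200)

train, test = avg_degree[:190], avg_degree[190:]
fit = ARIMA(train, order=(1, 1, 1)).fit()
forecast = fit.forecast(steps=len(test))

pct_err = 100 * np.abs(forecast - test) / test
print("mean % error over the held-out snapshots:", round(pct_err.mean(), 2))
```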
Kehl, Sven; Eckert, Sven; Sütterlin, Marc; Neff, K Wolfgang; Siemer, Jörn
2011-06-01
Three-dimensional (3D) sonographic volumetry is established in gynecology and obstetrics. Assessment of the fetal lung volume by magnetic resonance imaging (MRI) in congenital diaphragmatic hernias has become a routine examination. In vitro studies have shown a good correlation between 3D sonographic measurements and MRI. The aim of this study was to compare the lung volumes of healthy fetuses assessed by 3D sonography to MRI measurements and to investigate the impact of different rotation angles. A total of 126 fetuses between 20 and 40 weeks' gestation were measured by 3D sonography, and 27 of them were also assessed by MRI. The sonographic volumes were calculated by the rotational technique (virtual organ computer-aided analysis) with rotation angles of 6° and 30°. To evaluate the accuracy of 3D sonographic volumetry, percentage error and absolute percentage error values were calculated using MRI volumes as reference points. Formulas to calculate total, right, and left fetal lung volumes according to gestational age and biometric parameters were derived by stepwise regression analysis. Three-dimensional sonographic volumetry showed a high correlation compared to MRI (6° angle, R² = 0.971; 30° angle, R² = 0.917) with no systematic error for the 6° angle. Moreover, using the 6° rotation angle, the median absolute percentage error was significantly lower compared to the 30° angle (P < .001). The new formulas to calculate total lung volume in healthy fetuses only included gestational age and no biometric parameters (R² = 0.853). Three-dimensional sonographic volumetry of lung volumes in healthy fetuses showed a good correlation with MRI. We recommend using an angle of 6° because it assessed the lung volume more accurately. The specifically designed equations help estimate lung volumes in healthy fetuses.
MO-D-213-05: Sensitivity of Routine IMRT QA Metrics to Couch and Collimator Rotations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alaei, P
Purpose: To assess the sensitivity of gamma index and other IMRT QA metrics to couch and collimator rotations. Methods: Two brain IMRT plans with couch and/or collimator rotations in one or more of the fields were evaluated using the IBA MatriXX ion chamber array and its associated software (OmniPro-I’mRT). The plans were subjected to routine QA by 1) Creating a composite planar dose in the treatment planning system (TPS) with the couch/collimator rotations and 2) Creating the planar dose after “zeroing” the rotations. Plan deliveries to MatriXX were performed with all rotations set to zero on a Varian 21ex linear accelerator. This in effect created TPS-created planar doses with an induced rotation error. Point dose measurements for the delivered plans were also performed in a solid water phantom. Results: The IMRT QA of the plans with couch and collimator rotations showed clear discrepancies in the planar dose and 2D dose profile overlays. The gamma analysis, however, did pass with the criteria of 3%/3mm (for 95% of the points), albeit with a lower percentage pass rate, when one or two of the fields had a rotation. Similar results were obtained with tighter criteria of 2%/2mm. Other QA metrics such as percentage difference or distance-to-agreement (DTA) histograms produced similar results. The point dose measurements did not obviously indicate the error due to location of dose measurement (on the central axis) and the size of the ion chamber used (0.6 cc). Conclusion: Relying on Gamma analysis, percentage difference, or DTA to determine the passing of an IMRT QA may miss critical errors in the plan delivery due to couch/collimator rotations. A combination of analyses for composite QA plans, or per-beam analysis, would detect these errors.
12 CFR 327.9 - Assessment risk categories and pricing methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... and a weighted average of CAMELS component ratings will be multiplied by a corresponding pricing... CAMELS component ratings is created by multiplying each component by the following percentages and adding... CAMELS Component Rating 1.095 * Ratios are expressed as percentages. ** Multipliers are rounded to three...
Emergency department discharge prescription errors in an academic medical center
Belanger, April; Devine, Lauren T.; Lane, Aaron; Condren, Michelle E.
2017-01-01
This study described discharge prescription medication errors written for emergency department patients. This study used content analysis in a cross-sectional design to systematically categorize prescription errors found in a report of 1000 discharge prescriptions submitted in the electronic medical record in February 2015. Two pharmacy team members reviewed the discharge prescription list for errors. Open-ended data were coded by an additional rater for agreement on coding categories. Coding was based upon majority rule. Descriptive statistics were used to address the study objective. Categories evaluated were patient age, provider type, drug class, and type and time of error. The discharge prescription error rate out of 1000 prescriptions was 13.4%, with “incomplete or inadequate prescription” being the most commonly detected error (58.2%). The adult and pediatric error rates were 11.7% and 22.7%, respectively. The antibiotics reviewed had the highest number of errors. The highest within-class error rates were with antianginal medications, antiparasitic medications, antacids, appetite stimulants, and probiotics. Emergency medicine residents wrote the highest percentage of prescriptions (46.7%) and had an error rate of 9.2%. Residents of other specialties wrote 340 prescriptions and had an error rate of 20.9%. Errors occurred most often between 10:00 am and 6:00 pm. PMID:28405061
Global Surface Temperature Change and Uncertainties Since 1861
NASA Technical Reports Server (NTRS)
Shen, Samuel S. P.; Lau, William K. M. (Technical Monitor)
2002-01-01
The objective of this talk is to analyze the warming trend, and its uncertainties, of the global and hemispheric surface temperatures. Using a statistical optimal averaging scheme, the land surface air temperature and sea surface temperature observational data are used to compute the spatially averaged annual mean surface air temperature. The optimal averaging method is derived from the minimization of the mean square error between the true and estimated averages and uses the empirical orthogonal functions. The method can accurately estimate the errors of the spatial average due to observational gaps and random measurement errors. In addition, three independent uncertainty factors are quantified: urbanization, changes in the in situ observational practices, and sea surface temperature data corrections. Based on these uncertainties, the best linear fit to the annual global surface temperature gives an increase of 0.61 ± 0.16 C between 1861 and 2000. This lecture will also touch on the impact of global change on nature and the environment, as well as the latest assessment methods for the attribution of global change.
Time series model for forecasting the number of new admission inpatients.
Zhou, Lingling; Zhao, Ping; Wu, Dongdong; Cheng, Cheng; Huang, Hao
2018-06-15
Hospital crowding is a rising problem; effective prediction and management can help to reduce it. Our team has successfully proposed a hybrid model combining the autoregressive integrated moving average (ARIMA) and the nonlinear autoregressive neural network (NARNN) models in forecasting studies of schistosomiasis and hand, foot, and mouth disease. In this paper, our aim is to explore the application of the hybrid ARIMA-NARNN model to track the trends of new admission inpatients, which provides a methodological basis for reducing crowding. We used the single seasonal ARIMA (SARIMA), NARNN and the hybrid SARIMA-NARNN models to fit and forecast the monthly and daily number of new admission inpatients. The root mean square error (RMSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) were used to compare the forecasting performance among the three models. The monthly modeling data covered January 2010 to June 2016, with July to October 2016 as the corresponding testing set. The daily modeling data set covered January 4 to September 4, 2016, with September 5 to October 2, 2016 as the testing period. For the monthly data, the modeling RMSE and the testing RMSE, MAE and MAPE of the SARIMA-NARNN model were less than those obtained from the single SARIMA or NARNN model, but the MAE and MAPE of the modeling performance of the SARIMA-NARNN model did not improve. For the daily data, all RMSE, MAE and MAPE of the NARNN model were the lowest in both the modeling stage and the testing stage. A hybrid model does not necessarily outperform its constituent models. It is worth attempting to explore reliable models for forecasting the number of new admission inpatients from different data.
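For reference, the three comparison metrics named above can be computed as in the sketch below; the admission counts and forecasts shown are invented placeholders, not the study's data.

```python
import numpy as np

def forecast_errors(actual, predicted):
    """Return RMSE, MAE and MAPE (%) for paired actual/forecast series."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    err = actual - predicted
    rmse = np.sqrt(np.mean(err ** 2))            # root mean square error
    mae = np.mean(np.abs(err))                   # mean absolute error
    mape = np.mean(np.abs(err / actual)) * 100   # mean absolute percentage error
    return rmse, mae, mape

# Hypothetical monthly admission counts vs. hybrid-model forecasts
actual = [1210, 1185, 1302, 1275]
predicted = [1190, 1205, 1280, 1300]
print(forecast_errors(actual, predicted))
```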
A hybrid SVM-FFA method for prediction of monthly mean global solar radiation
NASA Astrophysics Data System (ADS)
Shamshirband, Shahaboddin; Mohammadi, Kasra; Tong, Chong Wen; Zamani, Mazdak; Motamedi, Shervin; Ch, Sudheer
2016-07-01
In this study, a hybrid support vector machine-firefly optimization algorithm (SVM-FFA) model is proposed to estimate monthly mean horizontal global solar radiation (HGSR). The merit of SVM-FFA is assessed statistically by comparing its performance with three previously used approaches. Using each approach and long-term measured HGSR, three models are calibrated by considering different sets of meteorological parameters measured for Bandar Abbass, situated in Iran. It is found that model (3), utilizing the combination of relative sunshine duration, difference between maximum and minimum temperatures, relative humidity, water vapor pressure, average temperature, and extraterrestrial solar radiation, shows superior performance with all approaches. Moreover, the extraterrestrial radiation is introduced as a significant parameter for accurately estimating the global solar radiation. The survey results reveal that the developed SVM-FFA approach is capable of providing favorable predictions with significantly higher precision than the other examined techniques. For SVM-FFA (3), the statistical indicators of mean absolute percentage error (MAPE), root mean square error (RMSE), relative root mean square error (RRMSE), and coefficient of determination (R²) are 3.3252%, 0.1859 kWh/m², 3.7350%, and 0.9737, respectively, which according to the RRMSE criterion indicates excellent performance. As a further evaluation of SVM-FFA (3), the ratio of estimated to measured values is computed; 47 out of 48 months considered as testing data fall between 0.90 and 1.10. A further verification also concludes that SVM-FFA (3) offers absolute superiority over the empirical models using relatively similar input parameters. In a nutshell, the hybrid SVM-FFA approach would be considered highly efficient for estimating the HGSR.
Comparison of pitot traverses taken at varying distances downstream of obstructions.
Guffey, S E; Booth, D W
1999-01-01
This study determined the deviations between pitot traverses taken under "ideal" conditions--at least seven duct diameters (i.e., distance = 7D) from obstructions, elbows, junction fittings, and other disturbances to flow--and those taken downstream from commonplace disturbances. Two perpendicular 10-point, log-linear velocity pressure traverses were taken at various distances downstream of tested upstream conditions. Upstream conditions included a plain duct opening, a junction fitting, a single 90 degree elbow, and two elbows rotated 90 degrees from each other into two orthogonal planes. Airflows determined from those values were compared with the values measured more than 40D downstream of the same obstructions under ideal conditions. The ideal measurements were taken on three traverse diameters in the same plane separated by 120 degrees in honed drawn-over-mandrel tubing. In all cases the pitot tubes were held in place by devices that effectively eliminated alignment errors and insertion depth errors. Duct velocities ranged from 1500 to 4500 ft/min. Results were surprisingly good when two perpendicular traverses were employed. When the average of two perpendicular traverses was taken, deviations from the ideal value were 6% or less even for traverses taken as close as 2D from the upstream disturbances. At 3D, deviations seldom exceeded 5%. With single-diameter traverses, errors seldom exceeded 5% at 6D or more downstream from the disturbance. Interestingly, percentage deviations were about the same at high and low velocities. This study demonstrated that two perpendicular pitot traverses can be taken as close as 3D from these disturbances with acceptable (< or = 5%) deviations from measurements taken under ideal conditions.
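A minimal sketch of the airflow calculation implied above, assuming standard air density (so velocity in ft/min ≈ 4005·√VP, with VP in inches of water) and two perpendicular 10-point traverses; the velocity-pressure readings below are invented for illustration, not the study's data.

```python
import math

def traverse_airflow(vp_readings_a, vp_readings_b, duct_diameter_ft):
    """Average two perpendicular 10-point traverses and return airflow (cfm).

    vp_readings_*: velocity pressures (inches of water) at the log-linear
    insertion points of each traverse; standard air density is assumed.
    """
    velocities = [4005.0 * math.sqrt(vp) for vp in vp_readings_a + vp_readings_b]
    v_avg = sum(velocities) / len(velocities)          # ft/min
    area = math.pi * (duct_diameter_ft / 2.0) ** 2     # ft^2
    return v_avg * area                                # cfm

# Hypothetical readings 3D downstream of an elbow (inches w.g.)
traverse_a = [0.52, 0.58, 0.61, 0.63, 0.64, 0.64, 0.62, 0.60, 0.57, 0.50]
traverse_b = [0.48, 0.55, 0.60, 0.63, 0.65, 0.65, 0.63, 0.59, 0.54, 0.47]
print(round(traverse_airflow(traverse_a, traverse_b, duct_diameter_ft=1.0)), "cfm")
```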
National trends in safety performance of electronic health record systems in children's hospitals.
Chaparro, Juan D; Classen, David C; Danforth, Melissa; Stockwell, David C; Longhurst, Christopher A
2017-03-01
To evaluate the safety of computerized physician order entry (CPOE) and associated clinical decision support (CDS) systems in electronic health record (EHR) systems at pediatric inpatient facilities in the US using the Leapfrog Group's pediatric CPOE evaluation tool. The Leapfrog pediatric CPOE evaluation tool, a previously validated tool to assess the ability of a CPOE system to identify orders that could potentially lead to patient harm, was used to evaluate 41 pediatric hospitals over a 2-year period. Evaluation of the last available test for each institution was performed, assessing performance overall as well as by decision support category (eg, drug-drug, dosing limits). Longitudinal analysis of test performance was also carried out to assess the impact of testing and the overall trend of CPOE performance in pediatric hospitals. Pediatric CPOE systems were able to identify 62% of potential medication errors in the test scenarios, but ranged widely from 23-91% in the institutions tested. The highest scoring categories included drug-allergy interactions, dosing limits (both daily and cumulative), and inappropriate routes of administration. We found that hospitals with longer periods since their CPOE implementation did not have better scores upon initial testing, but after initial testing there was a consistent improvement in testing scores of 4 percentage points per year. Pediatric computerized physician order entry (CPOE) systems on average are able to intercept a majority of potential medication errors, but vary widely among implementations. Prospective and repeated testing using the Leapfrog Group's evaluation tool is associated with improved ability to intercept potential medication errors. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
Zeng, Yi; Land, Kenneth C.; Wang, Zhenglian; Gu, Danan
2012-01-01
This article presents the core methodological ideas, empirical assessments, and applications of an extended cohort-component approach (known as the “ProFamy model”) to simultaneously project household composition, living arrangements, and population sizes at the subnational level in the United States. Comparisons of projections from 1990 to 2000 using this approach with census counts in 2000 for each of the 50 states and Washington, DC show that 68.0 %, 17.0 %, 11.2 %, and 3.8 % of the absolute percentage errors are <3.0 %, 3.0 % to 4.99 %, 5.0 % to 9.99 %, and ≥10.0 %, respectively. Another analysis compares average forecast errors between the extended cohort-component approach and the still widely used classic headship-rate method, by projecting number-of-bedrooms–specific housing demands from 1990 to 2000 and then comparing those projections with census counts in 2000 for each of the 50 states and Washington, DC. The results demonstrate that, compared with the extended cohort-component approach, the headship-rate method produces substantially more serious forecast errors because it cannot project households by size while the extended cohort-component approach projects detailed household sizes. We also present illustrative household and living arrangement projections for the five decades from 2000 to 2050, with medium-, small-, and large-family scenarios for each of the 50 states; Washington, DC; six counties of southern California, and the Minneapolis–St. Paul metropolitan area. Among many interesting numerical outcomes of household and living arrangement projections with medium, low, and high bounds, the aging of American households over the next few decades across all states/areas is particularly striking. Finally, the limitations of the present study and potential future lines of research are discussed. PMID:23208782
Jensen, Jonas; Olesen, Jacob Bjerring; Stuart, Matthias Bo; Hansen, Peter Møller; Nielsen, Michael Bachmann; Jensen, Jørgen Arendt
2016-08-01
A method for vector velocity volume flow estimation is presented, along with an investigation of its sources of error and correction of actual volume flow measurements. Volume flow errors are quantified theoretically by numerical modeling, through flow phantom measurements, and studied in vivo. This paper investigates errors from estimating volumetric flow using a commercial ultrasound scanner and the common assumptions made in the literature. The theoretical model shows, e.g. that volume flow is underestimated by 15%, when the scan plane is off-axis with the vessel center by 28% of the vessel radius. The error sources were also studied in vivo under realistic clinical conditions, and the theoretical results were applied for correcting the volume flow errors. Twenty dialysis patients with arteriovenous fistulas were scanned to obtain vector flow maps of fistulas. When fitting an ellipsis to cross-sectional scans of the fistulas, the major axis was on average 10.2mm, which is 8.6% larger than the minor axis. The ultrasound beam was on average 1.5mm from the vessel center, corresponding to 28% of the semi-major axis in an average fistula. Estimating volume flow with an elliptical, rather than circular, vessel area and correcting the ultrasound beam for being off-axis, gave a significant (p=0.008) reduction in error from 31.2% to 24.3%. The error is relative to the Ultrasound Dilution Technique, which is considered the gold standard for volume flow estimation for dialysis patients. The study shows the importance of correcting for volume flow errors, which are often made in clinical practice. Copyright © 2016 Elsevier B.V. All rights reserved.
Trommer, J.T.; Loper, J.E.; Hammett, K.M.
1996-01-01
Several traditional techniques have been used for estimating stormwater runoff from ungaged watersheds. Applying these techniques to watersheds in west-central Florida requires that some of the empirical relationships be extrapolated beyond tested ranges. As a result, there is uncertainty as to the accuracy of these estimates. Sixty-six storms occurring in 15 west-central Florida watersheds were initially modeled using the Rational Method, the U.S. Geological Survey Regional Regression Equations, the Natural Resources Conservation Service TR-20 model, the U.S. Army Corps of Engineers Hydrologic Engineering Center-1 model, and the Environmental Protection Agency Storm Water Management Model. The techniques were applied according to the guidelines specified in the user manuals or standard engineering textbooks as though no field data were available and the selection of input parameters was not influenced by observed data. Computed estimates were compared with observed runoff to evaluate the accuracy of the techniques. One watershed was eliminated from further evaluation when it was determined that the area contributing runoff to the stream varies with the amount and intensity of rainfall. Therefore, further evaluation and modification of the input parameters were made for only 62 storms in 14 watersheds. Runoff ranged from 1.4 to 99.3 percent of rainfall. The average runoff for all watersheds included in this study was about 36 percent of rainfall. The average runoff for the urban, natural, and mixed land-use watersheds was about 41, 27, and 29 percent, respectively. Initial estimates of peak discharge using the Rational Method produced average watershed errors that ranged from an underestimation of 50.4 percent to an overestimation of 767 percent. The coefficient of runoff ranged from 0.20 to 0.60. Calibration of the technique produced average errors that ranged from an underestimation of 3.3 percent to an overestimation of 1.5 percent. The average calibrated coefficient of runoff for each watershed ranged from 0.02 to 0.72. The average values of the coefficient of runoff necessary to calibrate the urban, natural, and mixed land-use watersheds were 0.39, 0.16, and 0.08, respectively. The U.S. Geological Survey regional regression equations for determining peak discharge produced errors that ranged from an underestimation of 87.3 percent to an overestimation of 1,140 percent. The regression equations for determining runoff volume produced errors that ranged from an underestimation of 95.6 percent to an overestimation of 324 percent. Regression equations developed from data used for this study produced errors that ranged between an underestimation of 82.8 percent and an overestimation of 328 percent for peak discharge, and from an underestimation of 71.2 percent to an overestimation of 241 percent for runoff volume. Use of the equations developed for west-central Florida streams produced average errors for each type of watershed that were lower than errors associated with use of the U.S. Geological Survey equations. Initial estimates of peak discharges and runoff volumes using the Natural Resources Conservation Service TR-20 model produced average errors of 44.6 and 42.7 percent, respectively, for all the watersheds. Curve numbers and times of concentration were adjusted to match estimated and observed peak discharges and runoff volumes. The average change in the curve number for all the watersheds was a decrease of 2.8 percent.
The average change in the time of concentration was an increase of 59.2 percent. The shape of the input dimensionless unit hydrograph also had to be adjusted to match the shape and peak time of the estimated and observed flood hydrographs. Peak rate factors for the modified input dimensionless unit hydrographs ranged from 162 to 454. The mean errors for peak discharges and runoff volumes were reduced to 18.9 and 19.5 percent, respectively, using the average calibrated input parameters for ea
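For context, the Rational Method referenced above estimates peak discharge from a runoff coefficient, rainfall intensity and drainage area (Q ≈ C·i·A, with Q in ft³/s when i is in in/hr and A in acres). The sketch below applies it with the study's average calibrated urban coefficient; the intensity and area values are invented for illustration.

```python
def rational_peak_discharge(c, intensity_in_per_hr, area_acres):
    """Rational Method: Q (cfs) ~= C * i (in/hr) * A (acres)."""
    return c * intensity_in_per_hr * area_acres

# Hypothetical urban watershed using the study's average calibrated C = 0.39
print(rational_peak_discharge(c=0.39, intensity_in_per_hr=2.5, area_acres=320.0), "cfs")
```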
Sensitivity analysis of Jacobian determinant used in treatment planning for lung cancer
NASA Astrophysics Data System (ADS)
Shao, Wei; Gerard, Sarah E.; Pan, Yue; Patton, Taylor J.; Reinhardt, Joseph M.; Durumeric, Oguz C.; Bayouth, John E.; Christensen, Gary E.
2018-03-01
Four-dimensional computed tomography (4DCT) is regularly used to visualize tumor motion in radiation therapy for lung cancer. These 4DCT images can be analyzed to estimate local ventilation by finding a dense correspondence map between the end-inhalation and end-exhalation CT image volumes using deformable image registration. Lung regions with ventilation values above a threshold are labeled as regions of high pulmonary function and are avoided when possible in the radiation plan. This paper presents a sensitivity analysis of the relative Jacobian error with respect to small registration errors. We present a linear approximation of the relative Jacobian error. Next, we give a formula for the sensitivity of the relative Jacobian error with respect to the Jacobian of the perturbation displacement field. Preliminary sensitivity analysis results are presented using 4DCT scans from 10 individuals. For each subject, we generated 6400 random smooth biologically plausible perturbation vector fields using a cubic B-spline model. We showed that the correlation between the Jacobian determinant and the Frobenius norm of the sensitivity matrix is close to -1, which implies that the relative Jacobian error in high-functional regions is less sensitive to noise. We also showed that small displacement errors averaging 0.53 mm may lead to a 10% relative change in the Jacobian determinant. We finally showed that the average relative Jacobian error and the sensitivity of the system for all subjects are positively correlated (close to +1), i.e., regions with high sensitivity have more error in the Jacobian determinant on average.
Field and laboratory procedures used in a soil chronosequence study
Singer, Michael J.; Janitzky, Peter
1986-01-01
In 1978, the late Denis Marchand initiated a research project entitled "Soil Correlation and Dating at the U.S. Geological Survey" to determine the usefulness of soils in solving geologic problems. Marchand proposed to establish soil chronosequences that could be dated independently of soil development by using radiometric and other numeric dating methods. In addition, by comparing dated chronosequences in different environments, rates of soil development could be studied and compared among varying climates and mineralogical conditions. The project was fundamental in documenting the value of soils in studies of mapping, correlating, and dating late Cenozoic deposits and in studying soil genesis. All published reports by members of the project are included in the bibliography. The project demanded that methods be adapted or developed to ensure comparability over a wide variation in soil types. Emphasis was placed on obtaining professional expertise and on establishing consistent techniques, especially for the field, laboratory, and data-compilation methods. Since 1978, twelve chronosequences have been sampled and analyzed by members of this project, and methods have been established and used consistently for analysis of the samples. The goals of this report are to: (1) document the methods used for the study on soil chronosequences, (2) present the results of tests that were run for precision, accuracy, and effectiveness, and (3) discuss our modifications to standard procedures. Many of the methods presented herein are standard and have been reported elsewhere. However, we assume less prior analytical knowledge in our descriptions; thus, the manual should be easy to follow for the inexperienced analyst. Each chapter presents one or more references for the basic principle, an equipment and reagents list, and the detailed procedure. In some chapters this is followed by additional remarks or example calculations. The flow diagram in figure 1 outlines the step-by-step procedures used to obtain and analyze soil samples for this study. The soils analyzed had a wide range of characteristics (such as clay content, mineralogy, salinity, and acidity). Initially, a major task was to test and select methods that could be applied and interpreted similarly for the various types of soils. Tests were conducted to establish the effectiveness and comparability of analytical techniques, and the data for such tests are included in figures, tables, and discussions. In addition, many replicate analyses of samples have established a "standard error" or "coefficient of variation" which indicates the average reproducibility of each laboratory procedure. These averaged errors are reported as a percentage of a given value. For example, in particle-size determination, a 3 percent error for 10 percent clay content equals 10 ± 0.3 percent clay. The error sources were examined to determine, for example, if the error in particle-size determination was dependent on clay content. No such biases were found, and data are reported as percent error in the text and in tables of reproducibility.
Peppas, Kostas P; Lazarakis, Fotis; Alexandridis, Antonis; Dangakis, Kostas
2012-08-01
In this Letter we investigate the error performance of multiple-input multiple-output free-space optical communication systems employing intensity modulation/direct detection and operating over strong atmospheric turbulence channels. Atmospheric-induced strong turbulence fading is modeled using the negative exponential distribution. For the considered system, an approximate yet accurate analytical expression for the average bit error probability is derived and an efficient method for its numerical evaluation is proposed. Numerically evaluated and computer simulation results are further provided to demonstrate the validity of the proposed mathematical analysis.
Handgrip force steadiness in young and older adults: a reproducibility study.
Blomkvist, Andreas W; Eika, Fredrik; de Bruin, Eling D; Andersen, Stig; Jorgensen, Martin
2018-04-02
Force steadiness is a quantitative measure of the ability to control muscle tonus. It is an independent predictor of functional performance and has been shown to correlate well with different degrees of motor impairment following stroke. Despite being clinically relevant, few studies have assessed the validity of measuring force steadiness. The aim of this study was to explore the reproducibility of handgrip force steadiness, and to assess age differences in steadiness. Intrarater reproducibility (the degree to which a rating gives consistent results on separate occasions) was investigated in a test-retest design with seven days between sessions. Ten young and thirty older adults were recruited and handgrip steadiness was tested at 5%, 10% and 25% of maximum voluntary contraction (MVC) using the Nintendo Wii Balance Board (WBB). Coefficients of variation were calculated from the mean force produced (CVM) and the target force (CVT). The area between the force curve and the target force line (Area) was also calculated. For the older adults we explored reliability using the intraclass correlation coefficient (ICC) and agreement using the standard error of measurement (SEM), limits of agreement (LOA) and smallest real difference (SRD). A systematic improvement in handgrip steadiness was found between sessions for all measures (CVM, CVT, Area). CVM and CVT at 5% of MVC showed good to high reliability, while Area had poor reliability for all percentages of MVC. Averaged ICC for CVM, CVT and Area was 0.815, 0.806 and 0.464, respectively. Averaged ICC at 5%, 10%, and 25% of MVC was 0.751, 0.667 and 0.668, respectively. Measures of agreement showed similar trends, with better results for CVM and CVT than for Area. Young adults had better handgrip steadiness than older adults across all measures. The CVM and CVT measures demonstrated good reproducibility at lower percentages of MVC using the WBB, and could become relevant measures in the clinical setting. The Area measure had poor reproducibility. Young adults have better handgrip steadiness than older adults.
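A minimal sketch of the two coefficient-of-variation measures described above; the force samples are invented, and the exact windowing and Area integration used in the study are not reproduced (the Area here is a simplified summed absolute deviation).

```python
import numpy as np

def steadiness_measures(force, target):
    """Return CVM, CVT (%) and a simplified Area for a force trace vs. a constant target.

    CVM: SD of the force relative to its own mean.
    CVT: SD of the force relative to the target force (assumed definition).
    Area: summed absolute deviation between the force curve and the target line.
    """
    force = np.asarray(force, dtype=float)
    sd = force.std(ddof=1)
    cvm = 100.0 * sd / force.mean()
    cvt = 100.0 * sd / target
    area = np.sum(np.abs(force - target))
    return cvm, cvt, area

# Hypothetical grip-force samples (N) while tracking 25% of a 200 N MVC (target = 50 N)
trace = [49.2, 50.5, 51.1, 48.7, 50.2, 49.8, 50.9, 49.5]
print(steadiness_measures(trace, target=50.0))
```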
Validation of GPU based TomoTherapy dose calculation engine.
Chen, Quan; Lu, Weiguo; Chen, Yu; Chen, Mingli; Henderson, Douglas; Sterpin, Edmond
2012-04-01
The graphic processing unit (GPU) based TomoTherapy convolution/superposition (C/S) dose engine (GPU dose engine) achieves a dramatic performance improvement over the traditional CPU-cluster based TomoTherapy dose engine (CPU dose engine). Besides the architecture difference between the GPU and CPU, there are several algorithm changes from the CPU dose engine to the GPU dose engine. These changes made the GPU dose slightly different from the CPU-cluster dose. Before the commercial release of the GPU dose engine, its accuracy had to be validated. Thirty-eight TomoTherapy phantom plans and 19 patient plans were calculated with both dose engines to evaluate the equivalency between the two dose engines. Gamma indices (Γ) were used for the equivalency evaluation. The GPU dose was further verified with absolute point dose measurement with an ion chamber and film measurements for phantom plans. Monte Carlo calculation was used as a reference for both dose engines in the accuracy evaluation in heterogeneous phantoms and actual patients. The GPU dose engine showed excellent agreement with the current CPU dose engine. The majority of cases had over 99.99% of voxels with Γ(1%, 1 mm) < 1. The worst case observed in the phantom had 0.22% of voxels violating the criterion. In patient cases, the worst percentage of voxels violating the criterion was 0.57%. For absolute point dose verification, all cases agreed with measurement to within ±3% with an average error magnitude within 1%. All cases passed the acceptance criterion that more than 95% of the pixels have Γ(3%, 3 mm) < 1 in film measurement, and the average passing pixel percentage was 98.5%-99%. The GPU dose engine also showed a similar degree of accuracy in heterogeneous media as the current TomoTherapy dose engine. It is verified and validated that the ultrafast TomoTherapy GPU dose engine can safely replace the existing TomoTherapy cluster based dose engine without degradation in dose accuracy.
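For readers unfamiliar with the Γ criterion used above, the sketch below computes a global 1D gamma index by brute force over synthetic dose profiles; it is a simplified illustration, not the TomoTherapy implementation.

```python
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dose_tol=0.03, dist_tol_mm=3.0):
    """Global 1D gamma: for each evaluated point, minimise the combined
    dose-difference / distance-to-agreement metric over all reference points."""
    norm_dose = dose_tol * d_ref.max()          # global dose normalisation
    gammas = []
    for xe, de in zip(x_eval, d_eval):
        dist2 = ((x_ref - xe) / dist_tol_mm) ** 2
        dose2 = ((d_ref - de) / norm_dose) ** 2
        gammas.append(np.sqrt((dist2 + dose2).min()))
    return np.array(gammas)

# Synthetic example: evaluated profile shifted by 1 mm and scaled by 1%
x = np.linspace(0.0, 100.0, 201)                # positions in mm
ref = np.exp(-((x - 50.0) / 15.0) ** 2)         # reference dose profile
ev = 1.01 * np.exp(-((x - 51.0) / 15.0) ** 2)   # evaluated dose profile
g = gamma_1d(x, ref, x, ev)
print("passing rate:", np.mean(g < 1.0) * 100, "%")
```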
5 CFR 340.101 - Principal statutory requirements.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Findings and Purpose Sec. 2. (a) The Congress finds that— (1) many individuals in our society possess great... be equal to the percentage which bears the same ratio to the percentage determined under this... scheduled workweek bears to the average number of hours in the regularly scheduled workweek of an employee...
Vital Statistics for Ohio Appalachian School Districts, Fiscal Year 1999.
ERIC Educational Resources Information Center
Ohio Univ., Athens. Coalition of Rural and Appalachian Schools.
This document compiles school district data on 18 factors for the 29 southeastern Ohio counties designated as "Appalachian." Data tables present state means, Appalachian means and ranges, and individual district data for fall enrollment; percentage of minority students; percentage of Aid to Dependent Children; average income; property…
Detecting and Characterizing Semantic Inconsistencies in Ported Code
NASA Technical Reports Server (NTRS)
Ray, Baishakhi; Kim, Miryung; Person, Suzette; Rungta, Neha
2013-01-01
Adding similar features and bug fixes often requires porting program patches from reference implementations and adapting them to target implementations. Porting errors may result from faulty adaptations or inconsistent updates. This paper investigates (1) the types of porting errors found in practice, and (2) how to detect and characterize potential porting errors. Analyzing version histories, we define five categories of porting errors, including incorrect control- and data-flow, code redundancy, inconsistent identifier renamings, etc. Leveraging this categorization, we design a static control- and data-dependence analysis technique, SPA, to detect and characterize porting inconsistencies. Our evaluation on code from four open-source projects shows that SPA can detect porting inconsistencies with 65% to 73% precision and 90% recall, and identify inconsistency types with 58% to 63% precision and 92% to 100% recall. In a comparison with two existing error detection tools, SPA improves precision by 14 to 17 percentage points.
Morrison, Aileen P; Tanasijevic, Milenko J; Goonan, Ellen M; Lobo, Margaret M; Bates, Michael M; Lipsitz, Stuart R; Bates, David W; Melanson, Stacy E F
2010-06-01
Ensuring accurate patient identification is central to preventing medical errors, but it can be challenging. We implemented a bar code-based positive patient identification system for use in inpatient phlebotomy. A before-after design was used to evaluate the impact of the identification system on the frequency of mislabeled and unlabeled samples reported in our laboratory. Labeling errors fell from 5.45 in 10,000 before implementation to 3.2 in 10,000 afterward (P = .0013). An estimated 108 mislabeling events were prevented by the identification system in 1 year. Furthermore, a workflow step requiring manual preprinting of labels, which was accompanied by potential labeling errors in about one quarter of blood "draws," was removed as a result of the new system. After implementation, a higher percentage of patients reported having their wristband checked before phlebotomy. Bar code technology significantly reduced the rate of specimen identification errors.
NASA Astrophysics Data System (ADS)
Liliawati, W.; Utama, J. A.; Fauziah, H.
2016-08-01
The curriculum in Indonesia recommends that science teachers in elementary and intermediate schools have interdisciplinary ability in science. However, integrated learning has still not been implemented optimally. This research designs and applies integrated learning based on the Susan Loucks-Horsley model with a light pollution theme. It shows students' achievements according to the new taxonomy of science education, with five domains: knowing & understanding, science process skills, creativity, attitude, and connecting & applying. The research used mixed methods with a concurrent embedded design. The subjects were 27 grade 8 junior high school students in Bandung. The instruments employed were a 28-question concept mastery test, observation sheets, and a moral dilemma test. The results show that integrated learning with the Susan Loucks-Horsley model is able to increase students' achievement and positive character traits on the light pollution theme. Specifically, the average normalized gain in the knowing and understanding domain was in the low category; the average percentages for the science process skills, creativity, and connecting domains were in the good category; and in the attitudinal domain the average percentage was over 75% for moral knowing and moral feeling.
NASA Technical Reports Server (NTRS)
Piersol, Allan G.
1991-01-01
Analytical expressions have been derived to describe the mean square error in the estimation of the maximum rms value computed from a step-wise (or running) time average of a nonstationary random signal. These analytical expressions have been applied to the problem of selecting the optimum averaging times that will minimize the total mean square errors in estimates of the maximum sound pressure levels measured inside the Titan IV payload fairing (PLF) and the Space Shuttle payload bay (PLB) during lift-off. Based on evaluations of typical Titan IV and Space Shuttle launch data, it has been determined that the optimum averaging times for computing the maximum levels are (1) T_o = 1.14 s for the maximum overall level and T_oi = 4.88 f_i^(-0.2) s for the maximum 1/3-octave band levels inside the Titan IV PLF, and (2) T_o = 1.65 s for the maximum overall level and T_oi = 7.10 f_i^(-0.2) s for the maximum 1/3-octave band levels inside the Space Shuttle PLB, where f_i is the 1/3-octave band center frequency. However, the results for both vehicles indicate that the total rms error in the maximum level estimates will be within 25 percent of the minimum error for all averaging times within plus or minus 50 percent of the optimum averaging time, so a precise selection of the exact optimum averaging time is not critical. Based on these results, linear averaging times (T) are recommended for computing the maximum sound pressure level during lift-off.
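A minimal sketch of the step-wise rms averaging described above, applied to a synthetic nonstationary signal; the 1.14 s value is the optimum reported for the Titan IV overall level, and everything else (sampling rate, envelope shape) is illustrative.

```python
import numpy as np

def max_running_rms(signal, fs_hz, avg_time_s):
    """Maximum rms level from contiguous (step-wise) averages of length avg_time_s."""
    n = int(round(avg_time_s * fs_hz))
    n_blocks = len(signal) // n
    blocks = np.asarray(signal[: n_blocks * n]).reshape(n_blocks, n)
    rms_per_block = np.sqrt(np.mean(blocks ** 2, axis=1))
    return rms_per_block.max()

# Synthetic lift-off-like transient: noise with a time-varying envelope
fs = 2000.0
t = np.arange(0.0, 20.0, 1.0 / fs)
envelope = np.exp(-((t - 6.0) / 3.0) ** 2)
x = envelope * np.random.default_rng(0).standard_normal(t.size)
print(max_running_rms(x, fs, avg_time_s=1.14))
```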
The Performance of Noncoherent Orthogonal M-FSK in the Presence of Timing and Frequency Errors
NASA Technical Reports Server (NTRS)
Hinedi, Sami; Simon, Marvin K.; Raphaeli, Dan
1993-01-01
Practical M-FSK systems experience a combination of time and frequency offsets (errors). This paper assesses the deleterious effect of these offsets, first individually and then combined, on the average bit error probability performance of the system.
The Whole Warps the Sum of Its Parts.
Corbett, Jennifer E
2017-01-01
The efficiency of averaging properties of sets without encoding redundant details is analogous to gestalt proposals that perception is parsimoniously organized as a function of recurrent order in the world. This similarity suggests that grouping and averaging are part of a broader set of strategies allowing the visual system to circumvent capacity limitations. To examine how gestalt grouping affects the manner in which information is averaged and remembered, I compared the error in observers' adjustments of remembered sizes of individual circles in two different mean-size sets defined by similarity, proximity, connectedness, or a common region. Overall, errors were more similar within the same gestalt-defined groups than between different gestalt-defined groups, such that the remembered sizes of individual circles were biased toward the mean size of their respective gestalt-defined groups. These results imply that gestalt grouping facilitates perceptual averaging to minimize the error with which individual items are encoded, thereby optimizing the efficiency of visual short-term memory.
Computation of Standard Errors
Dowd, Bryan E; Greene, William H; Norton, Edward C
2014-01-01
Objectives We discuss the problem of computing the standard errors of functions involving estimated parameters and provide the relevant computer code for three different computational approaches using two popular computer packages. Study Design We show how to compute the standard errors of several functions of interest: the predicted value of the dependent variable for a particular subject, and the effect of a change in an explanatory variable on the predicted value of the dependent variable for an individual subject and average effect for a sample of subjects. Empirical Application Using a publicly available dataset, we explain three different methods of computing standard errors: the delta method, Krinsky–Robb, and bootstrapping. We provide computer code for Stata 12 and LIMDEP 10/NLOGIT 5. Conclusions In most applications, choice of the computational method for standard errors of functions of estimated parameters is a matter of convenience. However, when computing standard errors of the sample average of functions that involve both estimated parameters and nonstochastic explanatory variables, it is important to consider the sources of variation in the function's values. PMID:24800304
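As a hedged illustration of the bootstrap approach mentioned above (the article's dataset, model and Stata/LIMDEP code are not reproduced here), the sketch below bootstraps the standard error of a predicted value from a simple OLS regression.

```python
import numpy as np

def bootstrap_se_of_prediction(x, y, x0, n_boot=2000, seed=0):
    """Bootstrap the standard error of the OLS-predicted value at x0."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)
    preds = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(x), len(x))       # resample rows with replacement
        slope, intercept = np.polyfit(x[idx], y[idx], 1)
        preds.append(intercept + slope * x0)
    return np.std(preds, ddof=1)

# Hypothetical data: standard error of the predicted outcome at x0 = 5
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 100)
y = 2.0 + 0.5 * x + rng.normal(0, 1, 100)
print(bootstrap_se_of_prediction(x, y, x0=5.0))
```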
Ten Essential Concepts for Remediation in Mathematics.
ERIC Educational Resources Information Center
Roseman, Louis
1985-01-01
Ten crucial mathematical concepts with which errors are made are listed, with methods used to teach them to high school students. The concepts concern order, place values, inverse operations, multiplication and division, remainders, identity elements, fractions, conversions, decimal points, and percentages. (MNS)
Mori, Shinichiro; Shibayama, Kouichi; Tanimoto, Katsuyuki; Kumagai, Motoki; Matsuzaki, Yuka; Furukawa, Takuji; Inaniwa, Taku; Shirai, Toshiyuki; Noda, Koji; Tsuji, Hiroshi; Kamada, Tadashi
2012-09-01
Our institute has constructed a new treatment facility for carbon ion scanning beam therapy. The first clinical trials were successfully completed at the end of November 2011. To evaluate patient setup accuracy, positional errors between the reference Computed Tomography (CT) scan and final patient setup images were calculated using 2D-3D registration software. Eleven patients with tumors of the head and neck, prostate and pelvis receiving carbon ion scanning beam treatment participated. The patient setup process takes orthogonal X-ray flat panel detector (FPD) images and the therapists adjust the patient table position in six degrees of freedom to register the reference position by manual or auto- (or both) registration functions. We calculated residual positional errors with the 2D-3D auto-registration function using the final patient setup orthogonal FPD images and treatment planning CT data. Residual error averaged over all patients in each fraction decreased from the initial to the last treatment fraction [1.09 mm/0.76° (averaged in the 1st and 2nd fractions) to 0.77 mm/0.61° (averaged in the 15th and 16th fractions)]. 2D-3D registration calculation time was 8.0 s on average throughout the treatment course. Residual errors in translation and rotation averaged over all patients as a function of date decreased with the passage of time (1.6 mm/1.2° in May 2011 to 0.4 mm/0.2° in December 2011). This retrospective residual positional error analysis shows that the accuracy of patient setup during the first clinical trials of carbon ion beam scanning therapy was good and improved with increasing therapist experience.
Demerath, E W; Guo, S S; Chumlea, W C; Towne, B; Roche, A F; Siervogel, R M
2002-03-01
The purpose of the study was to compare estimates of body density and percentage body fat from air displacement plethysmography (ADP) with those from hydrodensitometry (HD) in adults and children and to provide a review of similar recent studies. Body density and percentage body fat (%BF) were assessed by ADP and HD on the same day in 87 adults aged 18-69 y (41 males and 46 females) and 39 children aged 8-17 y (19 males and 20 females). Differences between measured and predicted thoracic gas volumes determined during the ADP procedure, and the resultant effects of those differences on body composition estimates, were also compared. In a subset of 50 individuals (31 adults and 19 children), the reliability of ADP was measured and the relative ease or difficulty of ADP and HD were probed with a questionnaire. The coefficient of reliability between %BF on day 1 and day 2 was 96.4 in adults and 90.1 in children, and the technical error of measurement was 1.6% in adults and 1.8% in children. Using a predicted rather than a measured thoracic gas volume did not significantly affect percentage body fat estimates in adults, but resulted in overestimates of percentage body fat in children. Mean percentage body fat from ADP was higher than percentage body fat from HD, although this was statistically significant only in adults (29.3 vs 27.7%, P<0.05). The 95% confidence interval of the between-method differences for all subjects was -7 to +9% body fat, and the root mean square error (r.m.s.e.) was approximately 4% body fat. In the subset of individuals who were asked to compare the two methods, 46 out of 50 (92%) indicated that they preferred ADP to HD. ADP is a reliable method of measuring body composition that subjects found preferable to underwater weighing. However, as shown here and in most other studies, there are differences in percentage body fat estimates assessed by the two methods, perhaps related to body size, age or other factors, that are sufficient to preclude ADP from being used interchangeably with underwater weighing on an individual basis.
Cost effectiveness of the stream-gaging program in South Carolina
Barker, A.C.; Wright, B.C.; Bennett, C.S.
1985-01-01
The cost effectiveness of the stream-gaging program in South Carolina was documented for the 1983 water year. Data uses and funding sources were identified for the 76 continuous stream gages currently being operated in South Carolina. The budget of $422,200 for collecting and analyzing streamflow data also includes the cost of operating stage-only and crest-stage stations. The streamflow records for one stream gage can be determined by alternate, less costly methods, and that gage should be discontinued. The remaining 75 stations should be maintained in the program for the foreseeable future. The current policy for the operation of the 75 stations, including the crest-stage and stage-only stations, would require a budget of $417,200/yr. The average standard error of estimation of streamflow records is 16.9% for the present budget with missing record included. However, the standard error of estimation would decrease to 8.5% if complete streamflow records could be obtained. It was shown that the average standard error of estimation of 16.9% could be obtained at the 75 sites with a budget of approximately $395,000 if the gaging resources were redistributed among the gages. A minimum budget of $383,500 is required to operate the program; a budget less than this does not permit proper service and maintenance of the gages and recorders. At the minimum budget, the average standard error is 18.6%. The maximum budget analyzed was $850,000, which resulted in an average standard error of 7.6%. (Author's abstract)
Unforced errors and error reduction in tennis
Brody, H
2006-01-01
Only at the highest level of tennis is the number of winners comparable to the number of unforced errors. As the average player loses many more points due to unforced errors than due to winners by an opponent, if the rate of unforced errors can be reduced, it should lead to an increase in points won. This article shows how players can improve their game by understanding and applying the laws of physics to reduce the number of unforced errors. PMID:16632568
Cost effectiveness of the US Geological Survey stream-gaging program in Alabama
Jeffcoat, H.H.
1987-01-01
A study of the cost effectiveness of the stream-gaging program in Alabama identified data uses and funding sources for 72 surface-water stations (including dam stations, slope stations, and continuous-velocity stations) operated by the U.S. Geological Survey in Alabama with a budget of $393,600. Of these, 58 gaging stations were used in all phases of the analysis at a funding level of $328,380. For the current policy of operation of the 58-station program, the average standard error of estimation of instantaneous discharge is 29.3%. This overall level of accuracy can be maintained with a budget of $319,800 by optimizing routes and implementing some policy changes. The maximum budget considered in the analysis was $361,200, which gave an average standard error of estimation of 20.6%. The minimum budget considered was $299,360, with an average standard error of estimation of 36.5%. The study indicates that a major source of error in the stream-gaging records is lost or missing data resulting from streamside equipment failure. If perfect equipment were available, the standard error in estimating instantaneous discharge under the current program and budget could be reduced to 18.6%. This can also be interpreted to mean that the streamflow data records have a standard error of this magnitude during times when the equipment is operating properly. (Author's abstract)
Aslam, Naveed; Tipu, Muhammad Yasin; Ishaq, Muhammad; Cowling, Ann; McGill, David; Warriach, Hassan Mahmood; Wynn, Peter
2016-01-01
The present study was conducted to observe the seasonal variation in aflatoxin M1 and nutritional quality of milk along informal marketing chains. Milk samples (485) were collected from three different chains over a period of one year. The average concentrations of aflatoxin M1 during the autumn and monsoon seasons (2.60 and 2.59 ppb) were found to be significantly higher (standard error of the difference, SED = 0.21: p = 0.003) than in the summer (1.93 ppb). The percentage of added water in milk was significantly lower (SED = 1.54: p < 0.001) in summer (18.59%) than in the monsoon season (26.39%). There was a significantly different (SED = 2.38: p < 0.001) mean percentage of water added by farmers (6.23%), small collectors (14.97%), large collectors (27.96%) and retailers (34.52%). This was reflected in changes in milk quality along the marketing chain. There was no difference (p = 0.178) in concentration of aflatoxin M1 in milk collected from the farmers (2.12 ppb), small collectors (2.23 ppb), large collectors (2.36 ppb) and retailers (2.58 ppb). The high levels of contamination found in this study, which exceed the standards set by European Union (0.05 ppb) and USFDA (0.5 ppb), demand radical intervention by regulatory authorities and mass awareness of the consequences for consumer health and safety. PMID:27929386
[Pregnant women's food safety and nutritional status in Cartagena, Colombia 2011].
López-Sáleme, Rossana; Díaz-Montes, Carmen E; Bravo-Aljuriz, Leidy; Londoño-Hio, Nataly P; Salguedo-Pájaro, Maireng del Carmen; Camargo-Marín, Casandra C; Osorio-Espitia, Eider
2012-01-01
Establishing an association between food safety and nutritional status in pregnant women in Cartagena. This was a cross-sectional study, using a sample of 413 pregnant women living in urban areas who were affiliated to healthcare-providing companies in Cartagena. A 95% confidence level, 5% error and 0.41 prevalence were used. The sample was stratified by proportional allocation; nutritional status was identified by anthropometric indicators plotted on a Rosso-Mardones nomogram, and food safety was determined by a national survey of the situation. Stata 9.2 statistical software was used for a descriptive analysis of the data using frequencies, percentages, averages and standard deviations. The odds ratio (OR) and p < 0.05 significance level were estimated in bivariate analysis. Mean age was 24.3 years, 72.2% were living with a partner and 52% belonged to stratum 1; it was determined that 70.2% had food safety. Regarding nutritional status, it was observed that 42% had maintained appropriate weight during pregnancy. Food safety was not associated with nutritional status (OR 0.8; 0.5-1.3 95%CI). A high percentage of pregnant women reported as having food safety nonetheless had altered nutritional status, tending towards deficit or towards excess. This may be because the study assessed food safety in terms of availability: even though the pregnant women may have had food available, this did not guarantee that they consumed it in suitable quantities and/or of suitable quality, aspects that were not evaluated in this study.
ten Haaf, Twan; Weijs, Peter J. M.
2014-01-01
Introduction: Resting energy expenditure (REE) is expected to be higher in athletes because of their relatively high fat-free mass (FFM). Therefore, an REE predictive equation specific to recreational athletes may be required. The aim of this study was to validate existing REE predictive equations and to develop a new recreational-athlete-specific equation. Methods: 90 (53M, 37F) adult athletes, exercising on average 9.1±5.0 hours a week and 5.0±1.8 times a week, were included. REE was measured using indirect calorimetry (Vmax Encore n29); FFM and FM were measured using air displacement plethysmography. Multiple linear regression analysis was used to develop a new FFM-based and a new weight-based REE predictive equation. The percentage of accurate predictions (within 10% of measured REE), percentage bias, root mean square error and limits of agreement were calculated. Results: The Cunningham equation, the new weight-based equation and the new FFM-based equation performed equally well. De Lorenzo's equation predicted REE less accurately, but better than the other generally used REE predictive equations. Harris-Benedict, WHO, Schofield, Mifflin and Owen all showed less than 50% accuracy. Conclusion: For a population of (Dutch) recreational athletes, REE can accurately be predicted with the existing Cunningham equation. Since body composition measurement is not always possible, and other generally used equations fail, the new weight-based equation is advised for use in sports nutrition. PMID:25275434
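A hedged sketch of the validation metrics used above — percentage of accurate predictions (within 10% of measured REE), bias and RMSE — applied to the Cunningham equation as it is commonly cited (REE ≈ 500 + 22 × FFM in kcal/day; confirm against the original source before reuse). The measured values below are invented placeholders.

```python
import numpy as np

def cunningham_ree(ffm_kg):
    """Cunningham equation (as commonly cited): REE in kcal/day from FFM in kg."""
    return 500.0 + 22.0 * np.asarray(ffm_kg, dtype=float)

def validation_metrics(measured, predicted):
    measured, predicted = np.asarray(measured, float), np.asarray(predicted, float)
    pct_accurate = 100.0 * np.mean(np.abs(predicted - measured) / measured <= 0.10)
    bias_pct = 100.0 * np.mean((predicted - measured) / measured)
    rmse = np.sqrt(np.mean((predicted - measured) ** 2))
    return pct_accurate, bias_pct, rmse

# Hypothetical athletes: fat-free mass (kg) and indirect-calorimetry REE (kcal/day)
ffm = [62.0, 55.5, 70.2, 48.9]
measured_ree = [1900.0, 1725.0, 2080.0, 1540.0]
print(validation_metrics(measured_ree, cunningham_ree(ffm)))
```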
DOE Office of Scientific and Technical Information (OSTI.GOV)
Falconer, David A.; Tiwari, Sanjiv K.; Moore, Ronald L.
Projection errors limit the use of vector magnetograms of active regions (ARs) far from the disk center. In this Letter, for ARs observed up to 60° from the disk center, we demonstrate a method for measuring and reducing the projection error in the magnitude of any whole-AR parameter that is derived from a vector magnetogram that has been deprojected to the disk center. The method assumes that the center-to-limb curve of the average of the parameter's absolute values, measured from the disk passage of a large number of ARs and normalized to each AR's absolute value of the parameter at central meridian, gives the average fractional projection error at each radial distance from the disk center. To demonstrate the method, we use a large set of large-flux ARs and apply the method to a whole-AR parameter that is among the simplest to measure: whole-AR magnetic flux. We measure 30,845 SDO/Helioseismic and Magnetic Imager vector magnetograms covering the disk passage of 272 large-flux ARs, each having whole-AR flux >10^22 Mx. We obtain the center-to-limb radial-distance run of the average projection error in measured whole-AR flux from a Chebyshev fit to the radial-distance plot of the 30,845 normalized measured values. The average projection error in the measured whole-AR flux of an AR at a given radial distance is removed by multiplying the measured flux by the correction factor given by the fit. The correction is important for both the study of the evolution of ARs and for improving the accuracy of forecasts of an AR's major flare/coronal mass ejection productivity.
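A minimal sketch of the correction procedure described above, using NumPy's Chebyshev fit on synthetic normalized whole-AR flux values; the real center-to-limb curve, fit degree and the form of the correction factor are assumptions, not the paper's results.

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

# Synthetic stand-in data: normalized whole-AR flux vs. radial distance (0 = disk center)
rng = np.random.default_rng(0)
r = rng.uniform(0.0, 0.85, 5000)                      # radial distance / solar radius
assumed_curve = 1.0 - 0.35 * r**2                     # assumed projection-error shape
norm_flux = assumed_curve + rng.normal(0.0, 0.05, r.size)

# Fit the average fractional projection error as a function of radial distance
coeffs = cheb.chebfit(r, norm_flux, deg=4)

def corrected_flux(measured_flux, radial_distance):
    """Remove the average projection error by dividing by the fitted center-to-limb factor."""
    factor = cheb.chebval(radial_distance, coeffs)
    return measured_flux / factor

print(corrected_flux(measured_flux=8.0e21, radial_distance=0.6))  # Mx, hypothetical AR
```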
Cost-effectiveness of the stream-gaging program in Kentucky
Ruhl, K.J.
1989-01-01
This report documents the results of a study of the cost-effectiveness of the stream-gaging program in Kentucky. The total surface-water program includes 97 daily-discharge stations, 12 stage-only stations, and 35 crest-stage stations and is operated on a budget of $950,700. One station used for research lacks an adequate source of funding and should be discontinued when the research ends. Most stations in the network are multiple-use, with 65 stations operated for the purpose of defining hydrologic systems, 48 for project operation, 47 for definition of regional hydrology, and 43 for hydrologic forecasting purposes. Eighteen stations support water-quality monitoring activities, one station is used for planning and design, and one station is used for research. The average standard error of estimation of streamflow records was determined only for stations in the Louisville Subdistrict. Under current operating policy, with a budget of $223,500, the average standard error of estimation is 28.5%. Altering the travel routes and measurement frequency to reduce the amount of lost stage record would allow a slight decrease in standard error to 26.9%. The results indicate that the collection of streamflow records in the Louisville Subdistrict is cost effective in its present mode of operation. In the Louisville Subdistrict, a minimum budget of $214,200 is required to operate the current network at an average standard error of 32.7%. A budget less than this does not permit proper service and maintenance of the gages and recorders. The maximum budget analyzed was $268,200, which would result in an average standard error of 16.9%, indicating that if the budget were increased by 20%, the standard error would be reduced by about 40%. (USGS)
1994-01-01
[Garbled scanned text: fragment discussing lidar versus radio measurements and raingauge records from Spanish stations (Jaen, Badajoz, Valladolid, Zamora, Ciudad Real); only the figure captions are recoverable: Figure 9, Relative errors for Western Andalusia; Figure 10, caption truncated.]
A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM
Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei
2018-01-01
Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were always set manually, which cannot guarantee the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to raise its ability to avoid local optima. To verify the performance of NAPSO-SVM, three types of algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are applied as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has better prediction precision and lower prediction errors, and it is an effective method for predicting the dynamic measurement errors of sensors. PMID:29342942
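A hedged, much-simplified stand-in for the NAPSO-SVM idea: a plain particle swarm search (without the natural-selection or annealing steps) over SVR hyperparameters, evaluated by RMSE on a held-out split. The data, parameter bounds and swarm settings are all illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def pso_tune_svr(X, y, n_particles=12, n_iter=20, seed=0):
    """Tune log10(C) and log10(gamma) for an RBF SVR with a basic PSO loop."""
    rng = np.random.default_rng(seed)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=seed)

    def fitness(pos):                      # pos = [log10(C), log10(gamma)]
        model = SVR(C=10.0 ** pos[0], gamma=10.0 ** pos[1]).fit(X_tr, y_tr)
        return rmse(y_te, model.predict(X_te))

    lo, hi = np.array([-2.0, -4.0]), np.array([3.0, 1.0])   # assumed search bounds
    pos = rng.uniform(lo, hi, (n_particles, 2))
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()

    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 2))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        f = np.array([fitness(p) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Synthetic "sensor error" series regressed on its two previous samples
rng = np.random.default_rng(1)
e = np.cumsum(rng.normal(0, 0.1, 300))
X = np.column_stack([e[:-2], e[1:-1]])
y = e[2:]
print(pso_tune_svr(X, y))
```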
Cost effectiveness of the U.S. Geological Survey's stream-gaging program in Wisconsin
Walker, J.F.; Osen, L.L.; Hughes, P.E.
1987-01-01
A minimum budget of $510,000 is required to operate the program; a budget less than this does not permit proper service and maintenance of the gaging stations. At this minimum budget, the theoretical average standard error of instantaneous discharge is 14.4%. The maximum budget analyzed was $650,000 and resulted in an average standard error of instantaneous discharge of 7.2%.
Topographic analysis of individual activation patterns in medial frontal cortex in schizophrenia
Stern, Emily R.; Welsh, Robert C.; Fitzgerald, Kate D.; Taylor, Stephan F.
2009-01-01
Individual variability in the location of neural activations poses a unique problem for neuroimaging studies employing group averaging techniques to investigate the neural bases of cognitive and emotional functions. This may be especially challenging for studies examining patient groups, which often have limited sample sizes and increased intersubject variability. In particular, medial frontal cortex (MFC) dysfunction is thought to underlie performance monitoring dysfunction among patients with schizophrenia, yet previous studies using group averaging to compare schizophrenic patients to controls have yielded conflicting results. To examine individual activations in MFC associated with two aspects of performance monitoring, interference and error processing, functional magnetic resonance imaging (fMRI) data were acquired while 17 patients with schizophrenia and 21 healthy controls performed an event-related version of the multi-source interference task. Comparisons of averaged data revealed few differences between the groups. By contrast, topographic analysis of individual activations for errors showed that control subjects exhibited activations spanning across both posterior and anterior regions of MFC while patients primarily activated posterior MFC, possibly reflecting an impaired emotional response to errors in schizophrenia. This discrepancy between topographic and group-averaged results may be due to the significant dispersion among individual activations, particularly among healthy controls, highlighting the importance of considering intersubject variability when interpreting the medial frontal response to error commission. PMID:18819107
NASA Astrophysics Data System (ADS)
Gholipour Peyvandi, R.; Islami Rad, S. Z.
2017-12-01
The determination of the volume fraction percentage of the different phases flowing in vessels using transmission gamma rays is a conventional method in the petroleum and oil industries. In some cases, with access to only one side of the vessel, attention has turned toward backscattered gamma rays as a desirable alternative. In this research, the volume fraction percentage was measured precisely in water-gasoil-air three-phase flows by using the backscatter gamma-ray technique and the multilayer perceptron (MLP) neural network. Volume fraction determination in three-phase flows normally requires two gamma radioactive sources or a dual-energy source (with different energies), while in this study we used just a 137Cs source (with a single energy) and a NaI detector to analyze the backscattered gamma rays. The experimental set-up provides the required data for training and testing the network. Using the presented method, the volume fraction was predicted with a mean relative error percentage of less than 6.47%. Also, the root mean square error was calculated as 1.60. The presented set-up is applicable in industries with limited access. In addition, with this technique the cost, radiation safety, and shielding requirements are reduced compared with the other proposed methods.
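A minimal sketch, assuming synthetic stand-in features rather than the paper's backscatter spectra, of training an MLP regressor and reporting the mean relative error percentage quoted above; the feature construction and network size are illustrative only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Synthetic stand-in: three "detector" features loosely tied to a water volume fraction.
n = 400
frac = rng.uniform(0.05, 0.95, size=n)                   # true volume fraction
features = np.column_stack([
    0.8 * frac + 0.05 * rng.normal(size=n),
    np.exp(-2.0 * frac) + 0.05 * rng.normal(size=n),
    frac ** 2 + 0.05 * rng.normal(size=n),
])

X_train, X_test = features[:300], features[300:]
y_train, y_test = frac[:300], frac[300:]

mlp = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
pred = mlp.fit(X_train, y_train).predict(X_test)

# Mean relative error percentage (the abstract reports < 6.47%) and RMSE.
mre = np.mean(np.abs(pred - y_test) / y_test) * 100.0
rmse = np.sqrt(np.mean((pred - y_test) ** 2))
print(f"mean relative error: {mre:.2f}%   RMSE: {rmse:.3f}")
```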
Gebreyesus, G; Lund, M S; Janss, L; Poulsen, N A; Larsen, L B; Bovenhuis, H; Buitenhuis, A J
2016-04-01
Genetic parameters were estimated for the major milk proteins using bivariate and multi-trait models based on genomic relationships between animals. The analyses included, apart from total protein percentage, αS1-casein (CN), αS2-CN, β-CN, κ-CN, α-lactalbumin, and β-lactoglobulin, as well as the posttranslational sub-forms of glycosylated κ-CN and αS1-CN-8P (phosphorylated). Standard errors of the estimates were used to compare the models. In total, 650 Danish Holstein cows across 4 parities and days in milk ranging from 9 to 481d were selected from 21 herds. The multi-trait model generally resulted in lower standard errors of heritability estimates, suggesting that genetic parameters can be estimated with high accuracy using multi-trait analyses with genomic relationships for scarcely recorded traits. The heritability estimates from the multi-trait model ranged from low (0.05 for β-CN) to high (0.78 for κ-CN). Genetic correlations between the milk proteins and the total milk protein percentage were generally low, suggesting the possibility to alter protein composition through selective breeding with little effect on total milk protein percentage. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
The Limits of Coding with Joint Constraints on Detected and Undetected Error Rates
NASA Technical Reports Server (NTRS)
Dolinar, Sam; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush
2008-01-01
We develop a remarkably tight upper bound on the performance of a parameterized family of bounded angle maximum-likelihood (BA-ML) incomplete decoders. The new bound for this class of incomplete decoders is calculated from the code's weight enumerator, and is an extension of Poltyrev-type bounds developed for complete ML decoders. This bound can also be applied to bound the average performance of random code ensembles in terms of an ensemble average weight enumerator. We also formulate conditions defining a parameterized family of optimal incomplete decoders, defined to minimize both the total codeword error probability and the undetected error probability for any fixed capability of the decoder to detect errors. We illustrate the gap between optimal and BA-ML incomplete decoding via simulation of a small code.
Post processing of optically recognized text via second order hidden Markov model
NASA Astrophysics Data System (ADS)
Poudel, Srijana
In this thesis, we describe a postprocessing system for Optical Character Recognition (OCR)-generated text. A second-order Hidden Markov Model (HMM) approach is used to detect and correct OCR-related errors. The second-order HMM was chosen so that the model keeps track of bigrams and can therefore represent the system more accurately. Based on experiments with training data of 159,733 characters and testing on 5,688 characters, the model was able to correct 43.38% of the errors with a precision of 75.34%. However, the precision value indicates that the model also introduced some new errors, decreasing the net correction percentage to 26.4%.
NASA Astrophysics Data System (ADS)
Xu, Yadong; Serre, Marc L.; Reyes, Jeanette M.; Vizuete, William
2017-10-01
We have developed a Bayesian Maximum Entropy (BME) framework that integrates observations from a surface monitoring network and predictions from a Chemical Transport Model (CTM) to create improved exposure estimates that can be resolved into any spatial and temporal resolution. The flexibility of the framework allows for input of data in any choice of time scales and CTM predictions of any spatial resolution with varying associated degrees of estimation error and cost in terms of implementation and computation. This study quantifies the impact on exposure estimation error due to these choices by first comparing estimation errors when BME relied on ozone concentration data either as an hourly average, the daily maximum 8-h average (DM8A), or the daily 24-h average (D24A). Our analysis found that the use of DM8A and D24A data, although less computationally intensive, reduced estimation error more when compared to the use of hourly data. This was primarily due to the poorer CTM model performance in the hourly average predicted ozone. Our second analysis compared spatial variability and estimation errors when BME relied on CTM predictions with a grid cell resolution of 12 × 12 km2 versus a coarser resolution of 36 × 36 km2. Our analysis found that integrating the finer grid resolution CTM predictions not only reduced estimation error, but also increased the spatial variability in daily ozone estimates by 5 times. This improvement was due to the improved spatial gradients and model performance found in the finer resolved CTM simulation. The integration of observational and model predictions that is permitted in a BME framework continues to be a powerful approach for improving exposure estimates of ambient air pollution. The results of this analysis demonstrate the importance of also understanding model performance variability and its implications on exposure error.
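A minimal sketch of how the DM8A and D24A metrics discussed above are typically derived from an hourly ozone series with pandas; the series, units, and window-labelling convention are assumptions, not the study's processing chain.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)

# Hourly ozone series (ppb); in practice this would come from the monitoring network.
idx = pd.date_range("2011-06-01", periods=24 * 30, freq="H")
hourly = pd.Series(
    40 + 20 * np.sin(2 * np.pi * (idx.hour - 6) / 24) + 5 * rng.normal(size=idx.size),
    index=idx, name="ozone_ppb",
)

# DM8A: daily maximum of the 8-hour running mean (pandas labels each window at its last hour).
run8 = hourly.rolling(window=8, min_periods=6).mean()
dm8a = run8.groupby(run8.index.date).max()

# D24A: plain daily 24-hour average.
d24a = hourly.groupby(hourly.index.date).mean()

print(dm8a.head())
print(d24a.head())
```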
26 CFR 1.448-2 - Nonaccrual of certain amounts by service providers.
Code of Federal Regulations, 2012 CFR
2012-04-01
...-experience method is not allowed. (3) Safe harbor 3: modified Black Motor method. A taxpayer may use a... accounts receivable balance at the end of the current taxable year by a percentage (modified Black Motor... modified Black Motor moving average percentage is computed by dividing the total bad debts sustained...
26 CFR 1.448-2 - Nonaccrual of certain amounts by service providers.
Code of Federal Regulations, 2014 CFR
2014-04-01
...-experience method is not allowed. (3) Safe harbor 3: modified Black Motor method. A taxpayer may use a... accounts receivable balance at the end of the current taxable year by a percentage (modified Black Motor... modified Black Motor moving average percentage is computed by dividing the total bad debts sustained...
26 CFR 1.448-2 - Nonaccrual of certain amounts by service providers.
Code of Federal Regulations, 2013 CFR
2013-04-01
...-experience method is not allowed. (3) Safe harbor 3: modified Black Motor method. A taxpayer may use a... accounts receivable balance at the end of the current taxable year by a percentage (modified Black Motor... modified Black Motor moving average percentage is computed by dividing the total bad debts sustained...
26 CFR 1.448-2 - Nonaccrual of certain amounts by service providers.
Code of Federal Regulations, 2011 CFR
2011-04-01
...-experience method is not allowed. (3) Safe harbor 3: modified Black Motor method. A taxpayer may use a... accounts receivable balance at the end of the current taxable year by a percentage (modified Black Motor... modified Black Motor moving average percentage is computed by dividing the total bad debts sustained...
ERIC Educational Resources Information Center
Tekkaya, Ceren
2003-01-01
Investigates the effectiveness of combining conceptual change text and concept mapping strategies on students' understanding of diffusion and osmosis. Results indicate that while the average percentage of students in the experimental group holding a scientifically correct view rose, the percentage of correct responses in the control group…
Smooth empirical Bayes estimation of observation error variances in linear systems
NASA Technical Reports Server (NTRS)
Martz, H. F., Jr.; Lian, M. W.
1972-01-01
A smooth empirical Bayes estimator was developed for estimating the unknown random scale component of each of a set of observation error variances. It is shown that the estimator possesses a smaller average squared error loss than other estimators for a discrete time linear system.
ACT Average Composite by State: 2000 ACT-Tested Graduates.
ERIC Educational Resources Information Center
American Coll. Testing Program, Iowa City, IA.
This table contains average composite scores by state for high school graduates who took the ACT Assessment in 2000. For each state the percentage of graduates taking the ACT Assessment and the average composite score are given, with the same information for those who completed the recommended core curriculum and those who did not, as well as for…
Lysandropoulos, Andreas P; Absil, Julie; Metens, Thierry; Mavroudakis, Nicolas; Guisset, François; Van Vlierberghe, Eline; Smeets, Dirk; David, Philippe; Maertens, Anke; Van Hecke, Wim
2016-02-01
There is emerging evidence that brain atrophy is part of the pathophysiology of Multiple Sclerosis (MS) and correlates with several clinical outcomes of the disease, both physical and cognitive. Consequently, brain atrophy is becoming an important parameter in patients' follow-up. Since in clinical practice both 1.5 Tesla (T) and 3T magnetic resonance imaging (MRI) systems are used for MS patients' follow-up, questions arise regarding compatibility and a possible need for standardization. Therefore, in this study 18 MS patients were scanned on the same day on a 1.5T and a 3T scanner. For each scanner, a 3D T1 and a 3D FLAIR were acquired. As no atrophy is expected within 1 day, these datasets can be used to evaluate the median percentage error of the brain volume measurement for gray matter (GM) volume and parenchymal volume (PV) between 1.5T and 3T scanners. The results are obtained with MSmetrix, which was developed especially for use in the MS clinical care path, and compared to Siena (FSL), a widely used software package for research purposes. The MSmetrix median percentage error of the brain volume measurement between a 1.5T and a 3T scanner is 0.52% for GM and 0.35% for PV. For Siena this error equals 2.99%. When data from the same scanner are compared, the error is on the order of 0.06-0.08% for both MSmetrix and Siena. MSmetrix appears robust on both the 1.5T and 3T systems, and the measurement error becomes an order of magnitude higher between scanners with different field strengths.
Achievable accuracy of hip screw holding power estimation by insertion torque measurement.
Erani, Paolo; Baleani, Massimiliano
2018-02-01
To ensure stability of proximal femoral fractures, the hip screw must firmly engage the femoral head. Some studies suggested that screw holding power in trabecular bone could be evaluated intraoperatively through measurement of the screw insertion torque. However, those studies used synthetic bone, instead of trabecular bone, as the host material, or they did not evaluate the accuracy of the predictions. We determined prediction accuracy, also assessing the impact of screw design and host material. We measured, under highly repeatable experimental conditions that disregard the complexities of the clinical procedure, the insertion torque and pullout strength of four screw designs, in 120 synthetic and 80 trabecular bone specimens of variable density. For both host materials, we calculated the root-mean-square error and the mean-absolute-percentage error of predictions based on the best-fitting model of torque-pullout data, for both single-screw and merged datasets. Predictions based on screw-specific regression models were the most accurate. Host material affects prediction accuracy: replacing synthetic with trabecular bone decreased both root-mean-square errors, from 0.54-0.76 kN to 0.21-0.40 kN, and mean-absolute-percentage errors, from 14-21% to 10-12%. However, holding power predicted from low insertion torque remained inaccurate, with errors up to 40% for torques below 1 Nm. In poor-quality trabecular bone, tissue inhomogeneities likely affect pullout strength and insertion torque to different extents, limiting the predictive power of the latter. This bias decreases when the screw engages good-quality bone. Under this condition, predictions become more accurate, although this result must be confirmed by close in-vitro simulation of the clinical procedure. Copyright © 2018 Elsevier Ltd. All rights reserved.
Abdullah, Nasreen; Laing, Robert S; Hariri, Susan; Young, Collette M; Schafer, Sean
2016-04-01
Human papillomavirus (HPV) vaccine should reduce cervical dysplasia before cervical cancer. However, dysplasia diagnosis is screening-dependent, so accurate screening estimates are needed. The objective was to estimate the percentage of women in a geographic population that has had cervical cancer screening. We analyzed claims data for Papanicolaou (Pap) tests from 2008-2012 to estimate the percentage of insured women aged 18-39 years screened. We estimated screening in uninsured women by dividing the percentage of insured Behavioral Risk Factor Surveillance Survey respondents reporting previous-year testing by the percentage of uninsured respondents reporting previous-year testing, and multiplying this ratio by claims-based estimates of insured women with previous-year screening. We calculated a simple weighted average of the two estimates to estimate the overall screening percentage. We estimated credible intervals using Monte Carlo simulations. During 2008-2012, an annual average of 29.6% of women aged 18-39 years were screened. Screening increased from 2008 to 2009 in all age groups. During 2009-2012, the screening percentages decreased for all groups, but declined most in women aged 18-20 years, from 21.5% to 5.4%. Within age groups, compared to 2009, credible intervals did not overlap during 2011 (except age group 21-29 years) and 2012, and credible intervals in the 18-20 year group did not overlap with older groups in any year. This study introduces a novel method to estimate population-level cervical cancer screening. Overall, the percentage of women screened in Portland, Oregon fell following changes in screening recommendations released in 2009 and later modified in 2012. Copyright © 2016 Elsevier Ltd. All rights reserved.
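A hedged sketch of the ratio adjustment and Monte Carlo credible interval described above, using placeholder inputs; here the claims-based insured estimate is scaled by the uninsured-to-insured survey ratio (one plausible reading of the adjustment), and the two estimates are combined with equal weights rather than the study's population weights.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sim = 10_000

# Placeholder inputs (not the study's data), each drawn with rough uncertainty:
insured_claims = rng.normal(0.35, 0.01, n_sim)           # claims-based screening, insured women
brfss_insured = rng.binomial(600, 0.40, n_sim) / 600      # survey: insured reporting a test
brfss_uninsured = rng.binomial(200, 0.25, n_sim) / 200    # survey: uninsured reporting a test

# Scale the claims-based estimate by the uninsured-to-insured survey ratio.
uninsured_est = insured_claims * brfss_uninsured / brfss_insured

# Combine the two estimates (equal weights here; the study uses a weighted average).
overall = 0.5 * (insured_claims + uninsured_est)

lo, med, hi = np.percentile(overall, [2.5, 50, 97.5])
print(f"screened: {med:.1%} (95% credible interval {lo:.1%}-{hi:.1%})")
```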
Anandakrishnan, Ramu; Onufriev, Alexey
2008-03-01
In statistical mechanics, the equilibrium properties of a physical system of particles can be calculated as the statistical average over accessible microstates of the system. In general, these calculations are computationally intractable since they involve summations over an exponentially large number of microstates. Clustering algorithms are one of the methods used to numerically approximate these sums. The most basic clustering algorithms first sub-divide the system into a set of smaller subsets (clusters). Then, interactions between particles within each cluster are treated exactly, while all interactions between different clusters are ignored. These smaller clusters have far fewer microstates, making the summation over these microstates tractable. These algorithms have been previously used for biomolecular computations, but remain relatively unexplored in this context. Presented here is a theoretical analysis of the error and computational complexity for the two most basic clustering algorithms that were previously applied in the context of biomolecular electrostatics. We derive a tight, computationally inexpensive error bound for the equilibrium state of a particle computed via these clustering algorithms. For some practical applications, it is the root mean square error, which can be significantly lower than the error bound, that may be more important. We show that there is a strong empirical relationship between the error bound and the root mean square error, suggesting that the error bound could be used as a computationally inexpensive metric for predicting the accuracy of clustering algorithms in practical applications. An example of error analysis for such an application, the computation of the average charge of ionizable amino acids in proteins, is given, demonstrating that the clustering algorithm can be accurate enough for practical purposes.
Speeding up Coarse Point Cloud Registration by Threshold-Independent Baysac Match Selection
NASA Astrophysics Data System (ADS)
Kang, Z.; Lindenbergh, R.; Pu, S.
2016-06-01
This paper presents an algorithm for the automatic registration of terrestrial point clouds by match selection using an efficient conditional sampling method, threshold-independent BaySAC (BAYes SAmpling Consensus), and employs the error metric of the average point-to-surface residual to reduce the random measurement error and thereby approach the real registration error. BaySAC and other basic sampling algorithms usually need an artificially determined threshold by which inlier points are identified, which leads to a threshold-dependent verification process. Therefore, we applied the LMedS method to construct the cost function that is used to determine the optimum model, reducing the influence of human factors and improving the robustness of the model estimate. Point-to-point and point-to-surface error metrics are most commonly used. However, point-to-point error in general consists of at least two components: random measurement error and systematic error resulting from a remaining error in the estimated rigid-body transformation. Thus we employ the average point-to-surface residual to evaluate the registration accuracy. The proposed approaches, together with a traditional RANSAC approach, are tested on four data sets acquired by three different scanners in terms of their computational efficiency and the quality of the final registration. The registration results show that the standard deviation of the average point-to-surface residuals is reduced from 1.4 cm (plain RANSAC) to 0.5 cm (threshold-independent BaySAC). The results also show that, compared to the performance of RANSAC, our BaySAC strategies need fewer iterations and a lower computational cost when the hypothesis set is contaminated with more outliers.
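The key idea, scoring hypotheses by the median of squared residuals so that no inlier threshold is needed, can be illustrated on a toy 2-D line-fitting problem rather than point-cloud registration; the sketch below is an LMedS-style selection loop, not the paper's BaySAC implementation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy data: points near y = 2x + 1, contaminated with ~30% gross outliers.
n = 200
x = rng.uniform(-5, 5, n)
y = 2 * x + 1 + 0.1 * rng.normal(size=n)
outliers = rng.random(n) < 0.3
y[outliers] += rng.uniform(-20, 20, outliers.sum())

best_cost, best_model = np.inf, None
for _ in range(500):
    i, j = rng.choice(n, size=2, replace=False)   # minimal sample: two points
    if np.isclose(x[i], x[j]):
        continue
    a = (y[j] - y[i]) / (x[j] - x[i])
    b = y[i] - a * x[i]
    cost = np.median((y - (a * x + b)) ** 2)      # LMedS cost: no inlier threshold
    if cost < best_cost:
        best_cost, best_model = cost, (a, b)

print("estimated (slope, intercept):", best_model, " LMedS cost:", best_cost)
```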
Error reduction in EMG signal decomposition
Kline, Joshua C.
2014-01-01
Decomposition of the electromyographic (EMG) signal into constituent action potentials and the identification of individual firing instances of each motor unit in the presence of ambient noise are inherently probabilistic processes, whether performed manually or with automated algorithms. Consequently, they are subject to errors. We set out to classify and reduce these errors by analyzing 1,061 motor-unit action-potential trains (MUAPTs), obtained by decomposing surface EMG (sEMG) signals recorded during human voluntary contractions. Decomposition errors were classified into two general categories: location errors representing variability in the temporal localization of each motor-unit firing instance and identification errors consisting of falsely detected or missed firing instances. To mitigate these errors, we developed an error-reduction algorithm that combines multiple decomposition estimates to determine a more probable estimate of motor-unit firing instances with fewer errors. The performance of the algorithm is governed by a trade-off between the yield of MUAPTs obtained above a given accuracy level and the time required to perform the decomposition. When applied to a set of sEMG signals synthesized from real MUAPTs, the identification error was reduced by an average of 1.78%, improving the accuracy to 97.0%, and the location error was reduced by an average of 1.66 ms. The error-reduction algorithm in this study is not limited to any specific decomposition strategy. Rather, we propose it be used for other decomposition methods, especially when analyzing precise motor-unit firing instances, as occurs when measuring synchronization. PMID:25210159
Quantification and characterization of leakage errors
NASA Astrophysics Data System (ADS)
Wood, Christopher J.; Gambetta, Jay M.
2018-03-01
We present a general framework for the quantification and characterization of leakage errors that result when a quantum system is encoded in the subspace of a larger system. To do this we introduce metrics for quantifying the coherent and incoherent properties of the resulting errors and we illustrate this framework with several examples relevant to superconducting qubits. In particular, we propose two quantities, the leakage and seepage rates, which together with average gate fidelity allow for characterizing the average performance of quantum gates in the presence of leakage and show how the randomized benchmarking protocol can be modified to enable the robust estimation of all three quantities for a Clifford gate set.
Sullivan, James S.; Ball, Don G.
1997-01-01
The instantaneous V_co signal on a charging capacitor is sampled and the charge voltage on capacitor C_o is captured just prior to its discharge into the first stage of magnetic modulator. The captured signal is applied to an averaging circuit with a long time constant and to the positive input terminal of a differential amplifier. The averaged V_co signal is split between a gain stage (G=0.975) and a feedback stage that determines the slope of the voltage ramp applied to the high speed comparator. The 97.5% portion of the averaged V_co signal is applied to the negative input of a differential amplifier gain stage (G=10). The differential amplifier produces an error signal by subtracting 97.5% of the averaged V_co signal from the instantaneous value of sampled V_co signal and multiplying the difference by ten. The resulting error signal is applied to the positive input of a high speed comparator. The error signal is then compared to a voltage ramp that is proportional to the averaged V_co values squared divided by the total volt-second product of the magnetic compression circuit.
Sullivan, J.S.; Ball, D.G.
1997-09-09
The instantaneous V_co signal on a charging capacitor is sampled and the charge voltage on capacitor C_o is captured just prior to its discharge into the first stage of magnetic modulator. The captured signal is applied to an averaging circuit with a long time constant and to the positive input terminal of a differential amplifier. The averaged V_co signal is split between a gain stage (G = 0.975) and a feedback stage that determines the slope of the voltage ramp applied to the high speed comparator. The 97.5% portion of the averaged V_co signal is applied to the negative input of a differential amplifier gain stage (G = 10). The differential amplifier produces an error signal by subtracting 97.5% of the averaged V_co signal from the instantaneous value of sampled V_co signal and multiplying the difference by ten. The resulting error signal is applied to the positive input of a high speed comparator. The error signal is then compared to a voltage ramp that is proportional to the averaged V_co values squared divided by the total volt-second product of the magnetic compression circuit. 11 figs.
Improved estimation of anomalous diffusion exponents in single-particle tracking experiments
NASA Astrophysics Data System (ADS)
Kepten, Eldad; Bronshtein, Irena; Garini, Yuval
2013-05-01
The mean square displacement is a central tool in the analysis of single-particle tracking experiments, shedding light on various biophysical phenomena. Frequently, parameters are extracted by performing time averages on single-particle trajectories followed by ensemble averaging. This procedure, however, suffers from two systematic errors when applied to particles that perform anomalous diffusion. The first is significant at short-time lags and is induced by measurement errors. The second arises from the natural heterogeneity in biophysical systems. We show how to estimate and correct these two errors and improve the estimation of the anomalous parameters for the whole particle distribution. As a consequence, we manage to characterize ensembles of heterogeneous particles even for rather short and noisy measurements where regular time-averaged mean square displacement analysis fails. We apply this method to both simulations and in vivo measurements of telomere diffusion in 3T3 mouse embryonic fibroblast cells. The motion of telomeres is found to be subdiffusive with an average exponent constant in time. Individual telomere exponents are normally distributed around the average exponent. The proposed methodology has the potential to improve experimental accuracy while maintaining lower experimental costs and complexity.
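For reference, a minimal sketch of the baseline analysis the paper improves on: compute the time-averaged MSD of a single trajectory and fit the anomalous exponent from a log-log slope (no correction here for localization noise or ensemble heterogeneity); the trajectory and noise levels are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy 2-D trajectory: Brownian-like steps plus localization (measurement) noise.
n_steps = 2000
steps = rng.normal(scale=0.05, size=(n_steps, 2))
traj = np.cumsum(steps, axis=0) + rng.normal(scale=0.02, size=(n_steps, 2))

def ta_msd(traj, max_lag):
    """Time-averaged MSD for lags 1..max_lag."""
    lags = np.arange(1, max_lag + 1)
    return lags, np.array([
        np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1)) for lag in lags
    ])

lags, msd = ta_msd(traj, max_lag=100)

# Anomalous exponent: slope of log(MSD) vs log(lag); measurement noise biases the short lags.
alpha, log_K = np.polyfit(np.log(lags[5:]), np.log(msd[5:]), 1)
print(f"fitted anomalous exponent alpha ~ {alpha:.2f}")
```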
Olson, Scott A.; with a section by Veilleux, Andrea G.
2014-01-01
This report provides estimates of flood discharges at selected annual exceedance probabilities (AEPs) for streamgages in and adjacent to Vermont and equations for estimating flood discharges at AEPs of 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent (recurrence intervals of 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-years, respectively) for ungaged, unregulated, rural streams in Vermont. The equations were developed using generalized least-squares regression. Flood-frequency and drainage-basin characteristics from 145 streamgages were used in developing the equations. The drainage-basin characteristics used as explanatory variables in the regression equations include drainage area, percentage of wetland area, and the basin-wide mean of the average annual precipitation. The average standard errors of prediction for estimating the flood discharges at the 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent AEP with these equations are 34.9, 36.0, 38.7, 42.4, 44.9, 47.3, 50.7, and 55.1 percent, respectively. Flood discharges at selected AEPs for streamgages were computed by using the Expected Moments Algorithm. To improve estimates of the flood discharges for given exceedance probabilities at streamgages in Vermont, a new generalized skew coefficient was developed. The new generalized skew for the region is a constant, 0.44. The mean square error of the generalized skew coefficient is 0.078. This report describes a technique for using results from the regression equations to adjust an AEP discharge computed from a streamgage record. This report also describes a technique for using a drainage-area adjustment to estimate flood discharge at a selected AEP for an ungaged site upstream or downstream from a streamgage. The final regression equations and the flood-discharge frequency data used in this study will be available in StreamStats. StreamStats is a World Wide Web application providing automated regression-equation solutions for user-selected sites on streams.
Clinical time series prediction: towards a hierarchical dynamical system framework
Liu, Zitao; Hauskrecht, Milos
2014-01-01
Objective: Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding the patient's condition, the dynamics of a disease, the effect of various patient management interventions, and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Materials and methods: Our hierarchical dynamical system framework for modeling clinical time series combines the advantages of two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series by using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. Results: We tested our framework by first learning the time series model from data for the patients in the training set, and then applying the model to predict future time series values for the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered. Conclusion: A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive performance. PMID:25534671
Field Comparison between Sling Psychrometer and Meteorological Measuring Set AN/TMQ-22
the ML-224 Sling Psychrometer. From a series of independent tests designed to minimize error it was concluded that the AN/TMQ-22 yielded a more accurate...dew point reading. The average relative humidity error using the sling psychrometer was +9% while the AN/TMQ-22 had a plus or minus 2% error. Even with cautious measurement the sling yielded a +4% error.
Demand Forecasting: An Evaluation of DOD's Accuracy Metric and Navy's Procedures
2016-06-01
Keywords: inventory management improvement plan, mean of absolute scaled error, lead time adjusted squared error, forecast accuracy, benchmarking, naïve method. Acronyms from the report's front matter include: JASA (Journal of the American Statistical Association), LASE (Lead-time Adjusted Squared Error), LCI (Life Cycle Indicator), MA (Moving Average), MAE, Mean Squared Error, NAVSUP (Naval Supply Systems Command), NDAA (National Defense Authorization Act), and NIIN (National Individual Identification Number).
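The record centers on the mean absolute scaled error and the naïve method; as a hedged sketch, MASE is commonly computed as the forecast MAE scaled by the in-sample MAE of the naïve (previous-value) forecast, as below with illustrative numbers.

```python
import numpy as np

def mase(y_train, y_test, y_pred, m=1):
    """Mean absolute scaled error: forecast MAE divided by the in-sample MAE of the
    seasonal naive forecast with period m (m=1 is the plain previous-value naive method)."""
    naive_mae = np.mean(np.abs(y_train[m:] - y_train[:-m]))
    return np.mean(np.abs(y_test - y_pred)) / naive_mae

# Illustrative demand history and a flat (mean) forecast for the next three periods.
y_train = np.array([12, 15, 14, 16, 19, 18, 20, 22, 21, 24], dtype=float)
y_test = np.array([25, 23, 26], dtype=float)
y_pred = np.full_like(y_test, y_train.mean())

print(f"MASE = {mase(y_train, y_test, y_pred):.2f}")   # > 1 means worse than the in-sample naive method
```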
Spüler, Martin; Rosenstiel, Wolfgang; Bogdan, Martin
2012-01-01
The goal of a Brain-Computer Interface (BCI) is to control a computer by pure brain activity. Recently, BCIs based on code-modulated visual evoked potentials (c-VEPs) have shown great potential to establish high-performance communication. In this paper we present a c-VEP BCI that uses online adaptation of the classifier to reduce calibration time and increase performance. We compare two different approaches for online adaptation of the system: an unsupervised method and a method that uses the detection of error-related potentials. Both approaches were tested in an online study, in which an average accuracy of 96% was achieved with adaptation based on error-related potentials. This accuracy corresponds to an average information transfer rate of 144 bit/min, which is the highest bitrate reported so far for a non-invasive BCI. In a free-spelling mode, the subjects were able to write with an average of 21.3 error-free letters per minute, which shows the feasibility of the BCI system in a normal-use scenario. In addition we show that a calibration of the BCI system solely based on the detection of error-related potentials is possible, without knowing the true class labels.
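Bit-rate figures such as the 144 bit/min above are usually computed with the Wolpaw information transfer rate formula; the sketch below applies it with an assumed target count and selection rate (not the study's exact settings), so the printed value is only indicative.

```python
import math

def wolpaw_itr(n_targets, accuracy, selections_per_min):
    """Wolpaw information transfer rate in bit/min."""
    p = accuracy
    bits = math.log2(n_targets)
    if 0 < p < 1:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n_targets - 1))
    return bits * selections_per_min

# Assumed numbers for illustration only (the paper's target count and trial timing may differ):
# 32 targets, 96% accuracy, about 30 selections per minute.
print(f"{wolpaw_itr(32, 0.96, 30):.0f} bit/min")
```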
NASA Technical Reports Server (NTRS)
Oreopoulos, Lazaros
2004-01-01
The MODIS Level-3 optical thickness and effective radius cloud product is a gridded 1 deg. x 1 deg. dataset that is derived from aggregation and subsampling at 5 km of 1-km resolution Level-2 orbital swath data (Level-2 granules). This study examines the impact of the 5 km subsampling on the mean, standard deviation and inhomogeneity parameter statistics of optical thickness and effective radius. The methodology is simple and consists of estimating mean errors for a large collection of Terra and Aqua Level-2 granules by taking the difference of the statistics at the original and subsampled resolutions. It is shown that the Level-3 sampling does not affect the various quantities investigated to the same degree, with second-order moments suffering greater subsampling errors, as expected. Mean errors drop dramatically when averages over a sufficient number of regions (e.g., monthly and/or latitudinal averages) are taken, pointing to a dominance of errors of a random nature. When histograms built from subsampled data with the same binning rules as in the Level-3 dataset are used to reconstruct the quantities of interest, the mean errors do not deteriorate significantly. The results in this paper provide guidance to users of MODIS Level-3 optical thickness and effective radius cloud products on the range of errors due to subsampling they should expect and perhaps account for in scientific work with this dataset. In general, subsampling errors should not be a serious concern when moderate temporal and/or spatial averaging is performed.
NASA Astrophysics Data System (ADS)
Santer, B. D.; Mears, C. A.; Gleckler, P. J.; Solomon, S.; Wigley, T.; Arblaster, J.; Cai, W.; Gillett, N. P.; Ivanova, D. P.; Karl, T. R.; Lanzante, J.; Meehl, G. A.; Stott, P.; Taylor, K. E.; Thorne, P.; Wehner, M. F.; Zou, C.
2010-12-01
We perform the most comprehensive comparison to date of simulated and observed temperature trends. Comparisons are made for different latitude bands, timescales, and temperature variables, using information from a multi-model archive and a variety of observational datasets. Our focus is on temperature changes in the lower troposphere (TLT), the mid- to upper troposphere (TMT), and at the sea surface (SST). For SST, TLT, and TMT, trend comparisons over the satellite era (1979 to 2009) always yield closest agreement in mid-latitudes of the Northern Hemisphere. There are pronounced discrepancies in the tropics and in the Southern Hemisphere: in both regions, the multi-model average warming is consistently larger than observed. At high latitudes in the Northern Hemisphere, the observed tropospheric warming exceeds multi-model average trends. The similarity in the latitudinal structure of this discrepancy pattern across different temperature variables and observational data sets suggests that these trend differences are real, and are not due to residual inhomogeneities in the observations. The interpretation of these results is hampered by the fact that the CMIP-3 multi-model archive analyzed here convolves errors in key external forcings with errors in the model response to forcing. Under a "forcing error" interpretation, model-average temperature trends in the Southern Hemisphere extratropics are biased warm because many models neglect (and/or inaccurately specify) changes in stratospheric ozone and the indirect effects of aerosols. An alternative "response error" explanation for the model trend errors is that there are fundamental problems with model clouds and ocean heat uptake over the Southern Ocean. When SST changes are compared over the longer period 1950 to 2009, there is close agreement between simulated and observed trends poleward of 50°S. This result is difficult to reconcile with the hypothesis that the trend discrepancies over 1979 to 2009 are primarily attributable to response errors. Our results suggest that biases in multi-model average temperature trends over the satellite era can be plausibly linked to forcing errors. Better partitioning of the forcing and response components of model errors will require a systematic program of numerical experimentation, with a focus on exploring the climate response to uncertainties in key historical forcings.
Center-to-Limb Variation of Deprojection Errors in SDO/HMI Vector Magnetograms
NASA Astrophysics Data System (ADS)
Falconer, David; Moore, Ronald; Barghouty, Nasser; Tiwari, Sanjiv K.; Khazanov, Igor
2015-04-01
For use in investigating the magnetic causes of coronal heating in active regions and for use in forecasting an active region's productivity of major CME/flare eruptions, we have evaluated various sunspot-active-region magnetic measures (e.g., total magnetic flux, free-magnetic-energy proxies, magnetic twist measures) from HMI Active Region Patches (HARPs) after the HARP has been deprojected to disk center. From a few tens of thousands of HARP vector magnetograms (of a few hundred sunspot active regions) that have been deprojected to disk center, we have determined that the errors in the whole-HARP magnetic measures from deprojection are negligibly small for HARPs deprojected from distances out to 45 heliocentric degrees. For some purposes the errors from deprojection are tolerable out to 60 degrees. We obtained this result by the following process. For each whole-HARP magnetic measure: 1) for each HARP disk passage, normalize the measured values by the measured value for that HARP at central meridian; 2) then for each 0.05 Rs annulus, average the values from all the HARPs in the annulus. This results in an average normalized value as a function of radius for each measure. Assuming no deprojection errors and that, among a large set of HARPs, the measure is as likely to decrease as to increase with HARP distance from disk center, the average of each annulus is expected to be unity, and, for a statistically large sample, the amount of deviation of the average from unity estimates the error from deprojection effects. The deprojection errors arise from 1) errors in the transverse field being deprojected into the vertical field for HARPs observed at large distances from disk center, 2) increasingly larger foreshortening at larger distances from disk center, and 3) possible errors in transverse-field-direction ambiguity resolution. From the compiled set of measured values of whole-HARP magnetic nonpotentiality parameters measured from deprojected HARPs, we have examined the relation between each nonpotentiality parameter and the speed of CMEs from the measured active regions. For several different nonpotentiality parameters we find there is an upper limit to the CME speed, the limit increasing as the value of the parameter increases.
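A hedged sketch of the normalize-then-bin procedure described above: each HARP's measure is normalized by its value nearest disk center (standing in for the central-meridian value of that disk passage) and averaged in 0.05 Rs annuli, with the deviation of each annulus mean from unity read as the deprojection error; the table and column names are synthetic.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)

# Synthetic stand-in: one row per HARP magnetogram, with the HARP id, its distance
# from disk center (in solar radii), and some whole-HARP magnetic measure.
n = 5000
df = pd.DataFrame({
    "harp": rng.integers(0, 300, n),
    "r": rng.uniform(0.0, 0.9, n),
    "measure": rng.lognormal(mean=0.0, sigma=0.3, size=n),
})

# Step 1: normalize each HARP's measure by its value nearest disk center.
ref = df.sort_values("r").groupby("harp")["measure"].transform("first")
df["norm"] = df["measure"] / ref

# Step 2: average the normalized values in 0.05 Rs annuli; the deviation of each
# annulus mean from unity estimates the deprojection error at that distance.
df["annulus"] = (df["r"] // 0.05) * 0.05
profile = df.groupby("annulus")["norm"].mean()
print((profile - 1.0).abs())
```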
Chen, Zhiqiang; Wang, Hongcheng; Chen, Zhaobo; Ren, Nanqi; Wang, Aijie; Shi, Yue; Li, Xiaoming
2011-01-30
A full-scale test was conducted with an up-flow anaerobic sludge blanket (UASB) reactor pre-treating pharmaceutical wastewater containing 6-aminopenicillanic acid (6-APA) and amoxicillin. The aim of the study was to investigate the performance of the UASB under a high chemical oxygen demand (COD) loading rate of 12.57 to 21.02 kg m⁻³ d⁻¹ and a wide pH range of 5.57 to 8.26, in order to provide a reference for treating similar chemically synthesized pharmaceutical wastewater containing 6-APA and amoxicillin. The results demonstrated that the UASB average percentage reductions in COD, 6-APA and amoxicillin were 52.2%, 26.3% and 21.6%, respectively. In addition, three models, built on back propagation neural network (BPNN) theory and linear regression techniques, were developed to simulate the UASB system performance in the biodegradation of pharmaceutical wastewater containing 6-APA and amoxicillin. The average errors for COD, 6-APA and amoxicillin were -0.63%, 2.19% and 5.40%, respectively. The results indicated that the models built on BPNN theory fitted the measured data well and were able to simulate and predict the removal of COD, 6-APA and amoxicillin by the UASB. Crown Copyright © 2010. Published by Elsevier B.V. All rights reserved.
Converting Multi-Shell and Diffusion Spectrum Imaging to High Angular Resolution Diffusion Imaging
Yeh, Fang-Cheng; Verstynen, Timothy D.
2016-01-01
Multi-shell and diffusion spectrum imaging (DSI) are becoming increasingly popular methods of acquiring diffusion MRI data in a research context. However, single-shell acquisitions, such as diffusion tensor imaging (DTI) and high angular resolution diffusion imaging (HARDI), still remain the most common acquisition schemes in practice. Here we tested whether multi-shell and DSI data are flexible enough to be converted into corresponding HARDI data. We acquired multi-shell and DSI data on both a phantom and in vivo human tissue and converted them to HARDI. The correlation and difference between their diffusion signals, anisotropy values, diffusivity measurements, fiber orientations, connectivity matrices, and network measures were examined. Our analysis showed that the diffusion signals, anisotropy, diffusivity, and connectivity matrix of the HARDI converted from multi-shell and DSI were highly correlated with those of the HARDI acquired on the MR scanner, with correlation coefficients around 0.8~0.9. The average angular error between converted and original HARDI was 20.7° at voxels with signal-to-noise ratios greater than 5. The network topology measures differed by less than 2%, whereas the average nodal measures had a percentage difference of around 4~7%. In general, multi-shell and DSI acquisitions can be converted to their corresponding single-shell HARDI with high fidelity. This supports multi-shell and DSI acquisitions over HARDI acquisition as the scheme of choice for diffusion acquisitions. PMID:27683539
Hysterectomy trends in Australia, 2000-2001 to 2013-2014: joinpoint regression analysis.
Wilson, Louise F; Pandeya, Nirmala; Mishra, Gita D
2017-10-01
Hysterectomy is a common gynecological procedure, particularly in middle and high income countries. The aim of this paper was to describe and examine hysterectomy trends in Australia from 2000-2001 to 2013-2014. For women aged 25 years and over, data on the number of hysterectomies performed in Australia annually were sourced from the National Hospital and Morbidity Database. Age-specific and age-standardized hysterectomy rates per 10 000 women were estimated with adjustment for hysterectomy prevalence in the population. Using joinpoint regression analysis, we estimated the average annual percentage change over the whole study period (2000-2014) and the annual percentage change for each identified trend line segment. A total of 431 162 hysterectomy procedures were performed between 2000-2001 and 2013-2014; an annual average of 30 797 procedures (for women aged 25+ years). The age-standardized hysterectomy rate, adjusted for underlying hysterectomy prevalence, decreased significantly over the whole study period [average annual percentage change -2.8%; 95% confidence interval (CI) -3.5%, -2.2%]. The trend was not linear with one joinpoint detected in 2008-2009. Between 2000-2001 and 2008-2009 there was a significant decrease in incidence (annual percentage change -4.4%; 95% CI -5.2%, -3.7%); from 2008-2009 to 2013-2014 the decrease was minimal and not significantly different from zero (annual percentage change -0.1%; 95% CI -1.7%, 1.5%). A similar change in trend was seen in all age groups. Hysterectomy rates in Australian women aged 25 years and over have declined in the first decade of the 21st century. However, in the last 5 years, rates appear to have stabilized. © 2017 Nordic Federation of Societies of Obstetrics and Gynecology.
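Joinpoint software fits log-linear segments to the rates; as a hedged sketch of a single segment, the annual percentage change can be recovered from the slope b of a regression of log(rate) on year via APC = 100(e^b - 1), illustrated below with made-up rates.

```python
import numpy as np

# Illustrative age-standardized hysterectomy rates per 10,000 women for one trend segment.
years = np.arange(2000, 2009)
rates = np.array([88, 85, 81, 79, 76, 72, 70, 67, 64], dtype=float)

# Log-linear fit: log(rate) = a + b*year; annual percentage change = 100 * (exp(b) - 1).
b, a = np.polyfit(years, np.log(rates), 1)
apc = 100.0 * (np.exp(b) - 1.0)
print(f"annual percentage change ~ {apc:.1f}% per year")
```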
Martin, Gary R.; Fowler, Kathleen K.; Arihood, Leslie D.
2016-09-06
Information on low-flow characteristics of streams is essential for the management of water resources. This report provides equations for estimating the 1-, 7-, and 30-day mean low flows for a recurrence interval of 10 years and the harmonic-mean flow at ungaged, unregulated stream sites in Indiana. These equations were developed using the low-flow statistics and basin characteristics for 108 continuous-record streamgages in Indiana with at least 10 years of daily mean streamflow data through the 2011 climate year (April 1 through March 31). The equations were developed in cooperation with the Indiana Department of Environmental Management.Regression techniques were used to develop the equations for estimating low-flow frequency statistics and the harmonic-mean flows on the basis of drainage-basin characteristics. A geographic information system was used to measure basin characteristics for selected streamgages. A final set of 25 basin characteristics measured at all the streamgages were evaluated to choose the best predictors of the low-flow statistics.Logistic-regression equations applicable statewide are presented for estimating the probability that selected low-flow frequency statistics equal zero. These equations use the explanatory variables total drainage area, average transmissivity of the full thickness of the unconsolidated deposits within 1,000 feet of the stream network, and latitude of the basin outlet. The percentage of the streamgage low-flow statistics correctly classified as zero or nonzero using the logistic-regression equations ranged from 86.1 to 88.9 percent.Generalized-least-squares regression equations applicable statewide for estimating nonzero low-flow frequency statistics use total drainage area, the average hydraulic conductivity of the top 70 feet of unconsolidated deposits, the slope of the basin, and the index of permeability and thickness of the Quaternary surficial sediments as explanatory variables. The average standard error of prediction of these regression equations ranges from 55.7 to 61.5 percent.Regional weighted-least-squares regression equations were developed for estimating the harmonic-mean flows by dividing the State into three low-flow regions. The Northern region uses total drainage area and the average transmissivity of the entire thickness of unconsolidated deposits as explanatory variables. The Central region uses total drainage area, the average hydraulic conductivity of the entire thickness of unconsolidated deposits, and the index of permeability and thickness of the Quaternary surficial sediments. The Southern region uses total drainage area and the percent of the basin covered by forest. The average standard error of prediction for these equations ranges from 39.3 to 66.7 percent.The regional regression equations are applicable only to stream sites with low flows unaffected by regulation and to stream sites with drainage basin characteristic values within specified limits. Caution is advised when applying the equations for basins with characteristics near the applicable limits and for basins with karst drainage features and for urbanized basins. Extrapolations near and beyond the applicable basin characteristic limits will have unknown errors that may be large. Equations are presented for use in estimating the 90-percent prediction interval of the low-flow statistics estimated by use of the regression equations at a given stream site.The regression equations are to be incorporated into the U.S. 
Geological Survey StreamStats Web-based application for Indiana. StreamStats allows users to select a stream site on a map and automatically measure the needed basin characteristics and compute the estimated low-flow statistics and associated prediction intervals.
NASA Technical Reports Server (NTRS)
Smith, James A.
1992-01-01
The inversion of the leaf area index (LAI) canopy parameter from optical spectral reflectance measurements is obtained using a backpropagation artificial neural network trained using input-output pairs generated by a multiple scattering reflectance model. The problem of LAI estimation over sparse canopies (LAI < 1.0) with varying soil reflectance backgrounds is particularly difficult. Standard multiple regression methods applied to canopies within a single homogeneous soil type yield good results but perform unacceptably when applied across soil boundaries, resulting in absolute percentage errors of >1000 percent for low LAI. Minimization methods applied to merit functions constructed from differences between measured reflectances and predicted reflectances using multiple-scattering models are unacceptably sensitive to a good initial guess for the desired parameter. In contrast, the neural network reported generally yields absolute percentage errors of <30 percent when weighting coefficients trained on one soil type were applied to predicted canopy reflectance at a different soil background.
Performance of some numerical Laplace inversion methods on American put option formula
NASA Astrophysics Data System (ADS)
Octaviano, I.; Yuniar, A. R.; Anisa, L.; Surjanto, S. D.; Putri, E. R. M.
2018-03-01
Numerical inversion of the Laplace transform is used to obtain a semi-analytic solution. Mathematical inversion methods such as Durbin-Crump, Widder, and Papoulis can be used to calculate American put options through the optimal exercise price in the Laplace space. The comparison of the methods on some simple functions is intended to establish the accuracy and the parameters used in the calculation of American put options. The result is an assessment of each method's accuracy and computational speed. The Durbin-Crump method has an average relative error of 2.006e-004 with a computational speed of 0.04871 seconds, the Widder method has an average relative error of 0.0048 with a computational speed of 3.100181 seconds, and the Papoulis method has an average relative error of 9.8558e-004 with a computational speed of 0.020793 seconds.
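The same accuracy-versus-speed measurement can be illustrated with a different classical inversion, the Gaver-Stehfest method (not one of the three methods compared in the paper), applied to F(s) = 1/(s+1), whose exact inverse is e^(-t).

```python
import math
import time
import numpy as np

def stehfest_coefficients(n):
    """Gaver-Stehfest weights V_k for even n."""
    v = np.zeros(n)
    half = n // 2
    for k in range(1, n + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            s += (j ** half * math.factorial(2 * j)) / (
                math.factorial(half - j) * math.factorial(j) *
                math.factorial(j - 1) * math.factorial(k - j) *
                math.factorial(2 * j - k))
        v[k - 1] = (-1) ** (k + half) * s
    return v

def stehfest_invert(F, t, n=14):
    """Approximate f(t) from its Laplace transform F(s) on the real axis."""
    v = stehfest_coefficients(n)
    ln2_t = math.log(2.0) / t
    return ln2_t * sum(v[k - 1] * F(k * ln2_t) for k in range(1, n + 1))

F = lambda s: 1.0 / (s + 1.0)          # Laplace transform of exp(-t)
ts = np.linspace(0.1, 5.0, 50)

start = time.perf_counter()
approx = np.array([stehfest_invert(F, t) for t in ts])
elapsed = time.perf_counter() - start

rel_err = np.mean(np.abs(approx - np.exp(-ts)) / np.exp(-ts))
print(f"average relative error {rel_err:.2e} in {elapsed:.4f} s")
```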
... risk for a heart attack. A is for A1C The A1C test tells you your average blood glucose over ... blood glucose may be reported in 2 ways: ■ A1C (as a percentage) ■ estimated Average Glucose (eAG) in ...
NASA Astrophysics Data System (ADS)
Elfarnawany, Mai; Alam, S. Riyahi; Agrawal, Sumit K.; Ladak, Hanif M.
2017-02-01
Cochlear implant surgery is a hearing restoration procedure for patients with profound hearing loss. In this surgery, an electrode is inserted into the cochlea to stimulate the auditory nerve and restore the patient's hearing. Clinical computed tomography (CT) images are used for planning and evaluation of electrode placement, but their low resolution limits the visualization of internal cochlear structures. Therefore, high resolution micro-CT images are used to develop atlas-based segmentation methods to extract these nonvisible anatomical features in clinical CT images. Accurate registration of the high and low resolution CT images is a prerequisite for reliable atlas-based segmentation. In this study, we evaluate and compare different non-rigid B-spline registration parameters using micro-CT and clinical CT images of five cadaveric human cochleae. The varied registration parameters are the cost function (normalized correlation (NC), mutual information (MI) and mean square error (MSE)), the interpolation method (linear, windowed-sinc and B-spline) and the sampling percentage (1%, 10% and 100%). We compare the registration results visually and quantitatively using the Dice similarity coefficient (DSC), Hausdorff distance (HD) and absolute percentage error in cochlear volume. Using MI or MSE cost functions and linear or windowed-sinc interpolation resulted in visually undesirable deformation of internal cochlear structures. Quantitatively, the transforms using a 100% sampling percentage yielded the highest DSC and smallest HD (0.828+/-0.021 and 0.25+/-0.09 mm, respectively). Therefore, B-spline registration with cost function NC, interpolation B-spline and sampling percentage 100% can be the foundation of developing an optimized atlas-based segmentation algorithm of intracochlear structures in clinical CT images.
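A minimal sketch of how the DSC and HD quoted above are typically computed for binary segmentation masks, using toy masks in place of the registered cochlear segmentations; distances are in voxels and would be scaled by the voxel spacing to obtain millimetres.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

# Toy binary masks standing in for a registered and a reference segmentation.
a = np.zeros((64, 64, 64), dtype=bool)
b = np.zeros_like(a)
a[20:40, 20:40, 20:40] = True
b[22:42, 21:41, 20:40] = True

# Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|).
dice = 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Symmetric Hausdorff distance on the voxel coordinates of each mask.
pa, pb = np.argwhere(a), np.argwhere(b)
hd = max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

print(f"DSC = {dice:.3f}, HD = {hd:.2f} voxels")
```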
Measurement-based analysis of error latency. [in computer operating system
NASA Technical Reports Server (NTRS)
Chillarege, Ram; Iyer, Ravishankar K.
1987-01-01
This paper demonstrates a practical methodology for the study of error latency under a real workload. The method is illustrated with sampled data on the physical memory activity, gathered by hardware instrumentation on a VAX 11/780 during the normal workload cycle of the installation. These data are used to simulate fault occurrence and to reconstruct the error discovery process in the system. The technique provides a means to study the system under different workloads and for multiple days. An approach to determine the percentage of undiscovered errors is also developed and a verification of the entire methodology is performed. This study finds that the mean error latency, in the memory containing the operating system, varies by a factor of 10 to 1 (in hours) between the low and high workloads. It is found that of all errors occurring within a day, 70 percent are detected in the same day, 82 percent within the following day, and 91 percent within the third day. The increase in failure rate due to latency is not so much a function of remaining errors but is dependent on whether or not there is a latent error.
Single-ping ADCP measurements in the Strait of Gibraltar
NASA Astrophysics Data System (ADS)
Sammartino, Simone; García Lafuente, Jesús; Naranjo, Cristina; Sánchez Garrido, José Carlos; Sánchez Leal, Ricardo
2016-04-01
In most Acoustic Doppler Current Profiler (ADCP) user manuals, it is widely recommended to apply ensemble averaging of the single-ping measurements in order to obtain reliable observations of the current speed. The random error of a single-ping measurement is typically too high for it to be used directly, while the averaging operation reduces the ensemble error by a factor of approximately √N, with N the number of averaged pings. A 75 kHz ADCP moored at the western exit of the Strait of Gibraltar, included in the long-term monitoring of the Mediterranean outflow, has recently served as a test setup for a different approach to current measurements. The ensemble averaging has been disabled, while maintaining the internal coordinate conversion made by the instrument, and a series of single-ping measurements has been collected every 36 seconds during a period of approximately 5 months. The huge amount of data has been handled smoothly by the instrument, and no abnormal battery consumption has been recorded. On the other hand, a long and unique series of very high frequency current measurements has been collected. Results of this novel approach have been exploited in a dual way: from a statistical point of view, the availability of single-ping measurements allows a real estimate of the (a posteriori) ensemble average error of both current and ancillary variables. While the theoretical random error for horizontal velocity is estimated a priori as ~2 cm s⁻¹ for a 50-ping ensemble, the value obtained by the a posteriori averaging is ~15 cm s⁻¹, with an asymptotic behavior starting from an averaging size of 10 pings per ensemble. This result suggests the presence of external sources of random error (e.g., turbulence), of higher magnitude than the internal sources (ADCP intrinsic precision), which cannot be reduced by the ensemble averaging. On the other hand, although the instrumental configuration is clearly not suitable for a precise estimation of turbulent parameters, some hints of the turbulent structure of the flow can be obtained by the empirical computation of the zonal Reynolds stress (along the predominant direction of the current) and the rates of production and dissipation of turbulent kinetic energy. All the parameters show a clear correlation with tidal fluctuations of the current, with maximum values coinciding with flood tides, during the maxima of the outflow Mediterranean current.
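The √N argument and the observed error floor can be illustrated with a hedged simulation: synthetic single-ping velocities combine independent instrument noise, which ensemble averaging suppresses, with a slowly varying environmental signal, which it cannot; all amplitudes are illustrative, not the mooring's values.

```python
import numpy as np

rng = np.random.default_rng(8)

n_pings = 200_000
true_velocity = 0.30                                   # m/s, taken constant here

# Independent single-ping Doppler noise (suppressed by averaging) plus a slowly
# varying "environmental" signal whose correlation time far exceeds one ping.
instrument_noise = rng.normal(0.0, 0.14, n_pings)
environment = 0.10 * np.sin(2 * np.pi * np.arange(n_pings) / 5000.0)

pings = true_velocity + instrument_noise + environment

for n in (1, 10, 50, 100):
    ens = pings[: (n_pings // n) * n].reshape(-1, n).mean(axis=1)
    print(f"N={n:4d}  std of ensemble means = {ens.std():.3f} m/s")
```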
NASA Astrophysics Data System (ADS)
Heckman, S.
2015-12-01
Modern lightning locating systems (LLS) provide real-time monitoring and early warning of lightning activities. In addition, LLS provide valuable data for statistical analysis in lightning research. It is important to know the performance of such LLS. In the present study, the performance of the Earth Networks Total Lightning Network (ENTLN) is studied using rocket-triggered lightning data acquired at the International Center for Lightning Research and Testing (ICLRT), Camp Blanding, Florida. In the present study, 18 flashes triggered at ICLRT in 2014 were analyzed, comprising 78 negative cloud-to-ground return strokes. The geometric mean, median, minimum, and maximum of the peak currents of the 78 return strokes are 13.4 kA, 13.6 kA, 3.7 kA, and 38.4 kA, respectively. The peak currents are typical of subsequent return strokes in natural cloud-to-ground lightning. Earth Networks has developed a new data processor to improve the performance of their network. In this study, results are presented for the ENTLN data using the old processor (originally reported in 2014) and the ENTLN data simulated using the new processor. The flash detection efficiency, stroke detection efficiency, percentage of misclassification, median location error, median peak current estimation error, and median absolute peak current estimation error for the originally reported data from the old processor are 100%, 94%, 49%, 271 m, 5%, and 13%, respectively, and those for the simulated data using the new processor are 100%, 99%, 9%, 280 m, 11%, and 15%, respectively. The use of the new processor resulted in a higher stroke detection efficiency and a lower percentage of misclassification. It is worth noting that the slight differences in median location error, median peak current estimation error, and median absolute peak current estimation error between the two processors are due to the fact that the new processor detected more return strokes than the old processor.
Automated body weight prediction of dairy cows using 3-dimensional vision.
Song, X; Bokkers, E A M; van der Tol, P P J; Groot Koerkamp, P W G; van Mourik, S
2018-05-01
The objectives of this study were to quantify the error of body weight prediction using automatically measured morphological traits in a 3-dimensional (3-D) vision system and to assess the influence of various sources of uncertainty on body weight prediction. In this case study, an image acquisition setup was created in a cow selection box equipped with a top-view 3-D camera. Morphological traits of hip height, hip width, and rump length were automatically extracted from the raw 3-D images taken of the rump area of dairy cows (n = 30). These traits combined with days in milk, age, and parity were used in multiple linear regression models to predict body weight. To find the best prediction model, an exhaustive feature selection algorithm was used to build intermediate models (n = 63). Each model was validated by leave-one-out cross-validation, giving the root mean square error and mean absolute percentage error. The model consisting of hip width (measurement variability of 0.006 m), days in milk, and parity was the best model, with the lowest errors of 41.2 kg of root mean square error and 5.2% mean absolute percentage error. Our integrated system, including the image acquisition setup, image analysis, and the best prediction model, predicted the body weights with a performance similar to that achieved using semi-automated or manual methods. Moreover, the variability of our simplified morphological trait measurement showed a negligible contribution to the uncertainty of body weight prediction. We suggest that dairy cow body weight prediction can be improved by incorporating more predictive morphological traits and by improving the prediction model structure. The Authors. Published by FASS Inc. and Elsevier Inc. on behalf of the American Dairy Science Association®. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).
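As a rough illustration of the validation scheme described above (multiple linear regression with leave-one-out cross-validation scored by RMSE and MAPE), the following scikit-learn sketch uses invented hip-width/days-in-milk/parity data; the numbers are placeholders, not the study's measurements.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    # Hypothetical predictors: hip width (m), days in milk, parity -> body weight (kg)
    X = np.array([[0.58, 120, 2], [0.61, 45, 3], [0.55, 200, 1], [0.63, 80, 4],
                  [0.57, 150, 2], [0.60, 30, 3], [0.56, 250, 1], [0.62, 60, 3]])
    y = np.array([620., 680., 580., 710., 615., 665., 570., 690.])

    pred = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
    rmse = np.sqrt(np.mean((y - pred) ** 2))
    mape = 100 * np.mean(np.abs(y - pred) / y)
    print(f"LOOCV RMSE = {rmse:.1f} kg, MAPE = {mape:.1f}%")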
Medhanyie, Araya Abrha; Spigt, Mark; Yebyo, Henock; Little, Alex; Tadesse, Kidane; Dinant, Geert-Jan; Blanco, Roman
2017-05-01
Mobile phone based applications are considered by many as potentially useful for addressing challenges and improving the quality of data collection in developing countries. Yet very little evidence is available supporting or refuting the potential and widely perceived benefits on the use of electronic forms on smartphones for routine patient data collection by health workers at primary health care facilities. A facility based cross sectional study using a structured paper checklist was prepared to assess the completeness and accuracy of 408 electronic records completed and submitted to a central database server using electronic forms on smartphones by 25 health workers. The 408 electronic records were selected randomly out of a total of 1772 maternal health records submitted by the health workers to the central database over a period of six months. Descriptive frequencies and percentages of data completeness and error rates were calculated. When compared to paper records, the use of electronic forms significantly improved data completeness by 209 (8%) entries. Of a total 2622 entries checked for completeness, 2602 (99.2%) electronic record entries were complete, while 2393 (91.3%) paper record entries were complete. A very small percentage of error rates, which was easily identifiable, occurred in both electronic and paper forms although the error rate in the electronic records was more than double that of paper records (2.8% vs. 1.1%). More than half of entry errors in the electronic records related to entering a text value. With minimal training, supervision, and no incentives, health care workers were able to use electronic forms for patient assessment and routine data collection appropriately and accurately with a very small error rate. Minimising the number of questions requiring text responses in electronic forms would be helpful in minimizing data errors. Copyright © 2017 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prabhakar, Ramachandran; Department of Nuclear Medicine, All India Institute of Medical Sciences, New Delhi; Department of Radiology, All India Institute of Medical Sciences, New Delhi
Setup error plays a significant role in the final treatment outcome in radiotherapy. The effect of setup error on the planning target volume (PTV) and surrounding critical structures has been studied and the maximum allowed tolerance in setup error with minimal complications to the surrounding critical structure and acceptable tumor control probability is determined. Twelve patients were selected for this study after breast conservation surgery, wherein 8 patients were right-sided and 4 were left-sided breast. Tangential fields were placed on the 3-dimensional-computed tomography (3D-CT) dataset by isocentric technique and the dose to the PTV, ipsilateral lung (IL), contralateral lung (CLL),more » contralateral breast (CLB), heart, and liver were then computed from dose-volume histograms (DVHs). The planning isocenter was shifted for 3 and 10 mm in all 3 directions (X, Y, Z) to simulate the setup error encountered during treatment. Dosimetric studies were performed for each patient for PTV according to ICRU 50 guidelines: mean doses to PTV, IL, CLL, heart, CLB, liver, and percentage of lung volume that received a dose of 20 Gy or more (V20); percentage of heart volume that received a dose of 30 Gy or more (V30); and volume of liver that received a dose of 50 Gy or more (V50) were calculated for all of the above-mentioned isocenter shifts and compared to the results with zero isocenter shift. Simulation of different isocenter shifts in all 3 directions showed that the isocentric shifts along the posterior direction had a very significant effect on the dose to the heart, IL, CLL, and CLB, which was followed by the lateral direction. The setup error in isocenter should be strictly kept below 3 mm. The study shows that isocenter verification in the case of tangential fields should be performed to reduce future complications to adjacent normal tissues.« less
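The dose-volume metrics used above (V20, V30, V50) are simple threshold fractions of a structure's dose distribution. A hedged sketch, with randomly generated per-voxel doses standing in for real DVH data:

    import numpy as np

    def vx(dose_gy, threshold_gy):
        """Percentage of structure volume receiving at least threshold_gy (e.g. V20)."""
        dose_gy = np.asarray(dose_gy, dtype=float)
        return 100.0 * np.mean(dose_gy >= threshold_gy)

    # Hypothetical per-voxel doses (Gy) for an ipsilateral lung and a heart contour
    lung_dose = np.random.default_rng(1).gamma(shape=2.0, scale=6.0, size=5000)
    heart_dose = np.random.default_rng(2).gamma(shape=1.5, scale=5.0, size=3000)

    print(f"Lung  mean dose = {lung_dose.mean():.1f} Gy, V20 = {vx(lung_dose, 20):.1f}%")
    print(f"Heart mean dose = {heart_dose.mean():.1f} Gy, V30 = {vx(heart_dose, 30):.1f}%")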
Using video recording to identify management errors in pediatric trauma resuscitation.
Oakley, Ed; Stocker, Sergio; Staubli, Georg; Young, Simon
2006-03-01
To determine the ability of video recording to identify management errors in trauma resuscitation and to compare this method with medical record review. The resuscitation of children who presented to the emergency department of the Royal Children's Hospital between February 19, 2001, and August 18, 2002, for whom the trauma team was activated was video recorded. The tapes were analyzed, and management was compared with Advanced Trauma Life Support guidelines. Deviations from these guidelines were recorded as errors. Fifty video recordings were analyzed independently by 2 reviewers. Medical record review was undertaken for a cohort of the most seriously injured patients, and errors were identified. The errors detected with the 2 methods were compared. Ninety resuscitations were video recorded and analyzed. An average of 5.9 errors per resuscitation was identified with this method (range: 1-12 errors). Twenty-five children (28%) had an injury severity score of >11; there was an average of 2.16 errors per patient in this group. Only 10 (20%) of these errors were detected in the medical record review. Medical record review detected an additional 8 errors that were not evident on the video recordings. Concordance between independent reviewers was high, with 93% agreement. Video recording is more effective than medical record review in detecting management errors in pediatric trauma resuscitation. Management errors in pediatric trauma resuscitation are common and often involve basic resuscitation principles. Resuscitation of the most seriously injured children was associated with fewer errors. Video recording is a useful adjunct to trauma resuscitation auditing.
The Physiological Profile of Junior Soccer Players at SSBB Surabaya Bhakti
NASA Astrophysics Data System (ADS)
Nashirudin, M.; Kusnanik, N. W.
2018-01-01
Soccer players are required to have good physical fitness in order to achieve optimum accomplishment; physical fitness stands as the foundation of technical and tactical proficiency as well as mental maturity during matches. The purpose of this study was to identify the physiological profile of junior soccer players of SSB Surabaya Bhakti, age 16-17. The research was conducted on 20 junior soccer players. This research was quantitative with descriptive analysis. Data were collected by physiological testing (anaerobic power and capacity, including explosive leg power, speed, and agility; aerobic capacity: cardiovascular endurance). Data were analyzed using percentages. The results showed that the percentage of explosive leg power of the junior soccer players was 30% (good category), speed was 85% (average category), right agility was 90% (average category), and left agility was 75% (average category). On the other hand, the aerobic power and capacity of the junior soccer players in this study was 50% (average category). The conclusion of this research is that the physiological profile of junior soccer players at SSB Surabaya Bhakti, age 16-17, was predominantly in the average category.
Accuracy of acoustic velocity metering systems for measurement of low velocity in open channels
Laenen, Antonius; Curtis, R. E.
1989-01-01
Acoustic velocity meter (AVM) accuracy depends on equipment limitations, the accuracy of acoustic-path length and angle determination, and the stability of the relation between mean velocity and acoustic-path velocity. Equipment limitations depend on path length and angle, transducer frequency, timing oscillator frequency, and signal-detection scheme. Typically, the velocity error from this source is about ±1 to ±10 mm/s. Error in acoustic-path angle or length will result in a proportional measurement bias. Typically, an angle error of one degree will result in a velocity error of 2%, and a path-length error of one meter in 100 meters will result in an error of 1%. Ray bending (signal refraction) depends on path length and density gradients present in the stream. Any deviation from a straight acoustic path between transducers will change the unique relation between path velocity and mean velocity. These deviations will then introduce error into the mean velocity computation. Typically, for a 200-meter path length, the resultant error is less than one percent, but for a 1,000-meter path length, the error can be greater than 10%. Recent laboratory and field tests have substantiated assumptions of equipment limitations. Tow-tank tests of an AVM system with a 4.69-meter path length yielded an average standard deviation error of 9.3 mm/s, and field tests of an AVM system with a 20.5-meter path length yielded an average standard deviation error of 4 mm/s. (USGS)
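A small sketch of the standard reciprocal travel-time relation used by acoustic velocity meters, illustrating the roughly 2%-per-degree bias quoted above for a 45-degree path angle; the sound speed, flow speed and geometry are assumed values.

    import numpy as np

    def path_velocity(t_up, t_down, L, theta_deg):
        """Flow velocity along the channel axis from reciprocal acoustic travel times."""
        theta = np.radians(theta_deg)
        return L / (2.0 * np.cos(theta)) * (1.0 / t_down - 1.0 / t_up)

    c, v_true, L, theta = 1480.0, 0.50, 100.0, 45.0          # m/s, m/s, m, degrees
    theta_r = np.radians(theta)
    t_down = L / (c + v_true * np.cos(theta_r))              # pulse travelling with the flow
    t_up   = L / (c - v_true * np.cos(theta_r))              # pulse travelling against the flow

    v_ok  = path_velocity(t_up, t_down, L, theta)            # correct path angle
    v_bad = path_velocity(t_up, t_down, L, theta + 1.0)      # 1-degree angle error
    print(f"bias from a 1-degree angle error: {100 * (v_bad - v_ok) / v_ok:.1f}%")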
Comparison of algorithms for automatic border detection of melanoma in dermoscopy images
NASA Astrophysics Data System (ADS)
Srinivasa Raghavan, Sowmya; Kaur, Ravneet; LeAnder, Robert
2016-09-01
Melanoma is one of the most rapidly accelerating cancers in the world [1]. Early diagnosis is critical to an effective cure. We propose a new algorithm for more accurately detecting melanoma borders in dermoscopy images. Proper border detection requires eliminating occlusions like hair and bubbles by processing the original image. The preprocessing step involves transforming the RGB image to the CIE L*u*v* color space, in order to decouple brightness from color information, then increasing contrast using contrast-limited adaptive histogram equalization (CLAHE), followed by artifact removal using a Gaussian filter. After preprocessing, the Chan-Vese technique segments the preprocessed images to create a lesion mask, which undergoes a morphological closing operation. Next, the largest central blob in the lesion is detected, after which the blob is dilated to generate an image output mask. Finally, the automatically generated mask is compared to the manual mask by calculating the XOR error [3]. Our border detection algorithm was developed using training and test sets of 30 and 20 images, respectively. This detection method was compared to the SRM method [4] by calculating the average XOR error for each of the two algorithms. The average error for test images was 0.10 using the new algorithm and 0.99 using the SRM method. In comparing the average error values produced by the two algorithms, it is evident that the average XOR error for our technique is lower than that of the SRM method, thereby implying that the new algorithm detects borders of melanomas more accurately than the SRM algorithm.
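A rough scikit-image sketch of a similar preprocessing/segmentation chain (L*u*v* conversion, CLAHE, Gaussian smoothing, Chan-Vese, morphological closing, XOR error). It is not the authors' implementation: the blob-selection and dilation steps are omitted, and the parameter values and toy image are assumptions.

    import numpy as np
    from skimage import color, exposure, filters, morphology
    from skimage.segmentation import morphological_chan_vese

    def segment_lesion(rgb, manual_mask):
        luv = color.rgb2luv(rgb)                        # decouple brightness from colour
        l_chan = luv[..., 0] / 100.0                    # L* channel scaled to [0, 1]
        l_chan = exposure.equalize_adapthist(l_chan)    # CLAHE contrast enhancement
        l_chan = filters.gaussian(l_chan, sigma=2)      # suppress hair/bubble artifacts
        mask = morphological_chan_vese(l_chan, 100, init_level_set='checkerboard').astype(bool)
        if mask.mean() > 0.5:                           # assume the lesion is the smaller region
            mask = ~mask
        mask = morphology.binary_closing(mask, morphology.disk(5))
        # XOR error: mismatched pixels relative to the manually drawn border
        return np.logical_xor(mask, manual_mask).sum() / manual_mask.sum()

    # Toy example: dark circular "lesion" on a brighter background
    yy, xx = np.mgrid[:128, :128]
    truth = (yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2
    rgb = np.where(truth[..., None], 0.3, 0.8) * np.ones((128, 128, 3))
    print("XOR error:", segment_lesion(rgb, truth))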
NASA Astrophysics Data System (ADS)
Fujiwara, Takahiro; Uchiito, Haruki; Tokairin, Tomoya; Kawai, Hiroyuki
2017-04-01
Regarding Structural Health Monitoring (SHM) for seismic acceleration, Wireless Sensor Networks (WSN) are a promising tool for low-cost monitoring. Compressed sensing and transmission schemes have been drawing attention as a way to achieve effective data collection in WSN. In particular, SHM systems installing massive numbers of WSN nodes require efficient data transmission due to restricted communications capability. The dominant frequency band of seismic acceleration is confined to 100 Hz or less. In addition, the response motions on upper floors of a structure are activated at a natural frequency, resulting in induced shaking in a specific narrow band. Focusing on these vibration characteristics of structures, we introduce data compression techniques for seismic acceleration monitoring in order to reduce the amount of transmitted data. We carry out a compressed sensing and transmission scheme based on band-pass filtering of the seismic acceleration data. The algorithm executes the discrete Fourier transform to move to the frequency domain and band-pass filtering for the compressed transmission. Assuming that the compressed data are transmitted through computer networks, restoration of the data is performed by the inverse Fourier transform in the receiving node. This paper discusses the evaluation of the compressed sensing for seismic acceleration by way of an average error. The results show the average error was 0.06 or less for the horizontal acceleration, under conditions where the acceleration was compressed to 1/32. In particular, the average error on the 4th floor achieved a small value of 0.02. Those results indicate that the compressed sensing and transmission technique is effective in reducing the amount of data while maintaining a small average error.
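A compact sketch of the band-pass compression idea (keep only the Fourier coefficients inside the pass band, transmit them, and restore by inverse FFT). The sampling rate, pass band and synthetic floor-response signal are assumptions for illustration.

    import numpy as np

    def compress(acc, fs, f_lo, f_hi):
        """Keep only the rFFT coefficients inside [f_lo, f_hi] Hz."""
        spec = np.fft.rfft(acc)
        freq = np.fft.rfftfreq(len(acc), d=1.0 / fs)
        keep = (freq >= f_lo) & (freq <= f_hi)
        return spec[keep], keep, len(acc)

    def restore(coeffs, keep, n):
        spec = np.zeros(keep.size, dtype=complex)
        spec[keep] = coeffs
        return np.fft.irfft(spec, n=n)

    fs = 100.0                                            # Hz, assumed sampling rate
    t = np.arange(0, 60, 1 / fs)                          # 60 s record
    acc = np.sin(2 * np.pi * 2.1 * t) + 0.1 * np.random.default_rng(3).standard_normal(t.size)

    coeffs, keep, n = compress(acc, fs, 0.5, 10.0)        # hypothetical pass band
    rec = restore(coeffs, keep, n)
    print("fraction of coefficients kept:", keep.sum() / keep.size)
    print("average reconstruction error:", np.mean(np.abs(acc - rec)))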
Spatial Assessment of Model Errors from Four Regression Techniques
Lianjun Zhang; Jeffrey H. Gove; Jeffrey H. Gove
2005-01-01
Forest modelers have attempted to account for the spatial autocorrelations among trees in growth and yield models by applying alternative regression techniques such as linear mixed models (LMM), generalized additive models (GAM), and geographically weighted regression (GWR). However, the model errors are commonly assessed using average errors across the entire study...
Flavour and identification threshold detection overview of Slovak adepts for certified testing.
Vietoris, Vladimir; Barborova, Petra; Jancovicova, Jana; Eliasova, Lucia; Karvaj, Marian
2016-07-01
During the certification process of sensory assessors by the Slovak certification body, we obtained results for basic taste thresholds and lifestyle habits. 500 adults with a food industry background were screened during the experiment. For the analysis of basic and non-basic tastes, we used the standardized procedure of ISO 8586-1:1993. In the flavour test experiment, the 26-35 y.o. group produced the lowest error ratio (1.438), while the 56+ y.o. group had the highest (2.0). The average error value for women was 1.510, compared with 1.477 for men. People with allergies had an average error ratio of 1.437, compared with 1.511 for people without allergies. Non-smokers produced fewer errors (1.484) than smokers (1.576). Another flavour threshold identification test detected differences among age groups (values increased with age). The highest percentage of errors made by men was in metallic taste (24%), about the same as that made by women (22%). A higher error rate for men occurred in salty taste (19%) compared with women (10%). The analysis detected some differences between the allergic/non-allergic and smoker/non-smoker groups.
Consequences of common data analysis inaccuracies in CNS trauma injury basic research.
Burke, Darlene A; Whittemore, Scott R; Magnuson, David S K
2013-05-15
The development of successful treatments for humans after traumatic brain or spinal cord injuries (TBI and SCI, respectively) requires animal research. This effort can be hampered when promising experimental results cannot be replicated because of incorrect data analysis procedures. To identify and hopefully avoid these errors in future studies, the articles in seven journals with the highest number of basic science central nervous system TBI and SCI animal research studies published in 2010 (N=125 articles) were reviewed for their data analysis procedures. After identifying the most common statistical errors, the implications of those findings were demonstrated by reanalyzing previously published data from our laboratories using the identified inappropriate statistical procedures, then comparing the two sets of results. Overall, 70% of the articles contained at least one type of inappropriate statistical procedure. The highest percentage involved incorrect post hoc t-tests (56.4%), followed by inappropriate parametric statistics (analysis of variance and t-test; 37.6%). Repeated Measures analysis was inappropriately missing in 52.0% of all articles and, among those with behavioral assessments, 58% were analyzed incorrectly. Reanalysis of our published data using the most common inappropriate statistical procedures resulted in a 14.1% average increase in significant effects compared to the original results. Specifically, an increase of 15.5% occurred with Independent t-tests and 11.1% after incorrect post hoc t-tests. Utilizing proper statistical procedures can allow more-definitive conclusions, facilitate replicability of research results, and enable more accurate translation of those results to the clinic.
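The inflation of false positives from uncorrected post hoc t-tests, one of the errors counted above, can be illustrated with a small Monte Carlo simulation (the group sizes, number of groups and number of simulations are arbitrary choices):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n_sims, n_groups, n_per_group, alpha = 2000, 4, 10, 0.05
    false_positive_families = 0

    for _ in range(n_sims):
        # all groups drawn from the same population, so every "significant"
        # pairwise difference is a false positive
        groups = [rng.normal(0, 1, n_per_group) for _ in range(n_groups)]
        pvals = [stats.ttest_ind(groups[i], groups[j]).pvalue
                 for i in range(n_groups) for j in range(i + 1, n_groups)]
        if min(pvals) < alpha:                     # uncorrected post hoc t-tests
            false_positive_families += 1

    print(f"family-wise false-positive rate without correction: "
          f"{false_positive_families / n_sims:.2f} (nominal alpha = {alpha})")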
Spot measurement of heart rate based on morphology of PhotoPlethysmoGraphic (PPG) signals.
Madhan Mohan, P; Nagarajan, V; Vignesh, J C
2017-02-01
Due to increasing health consciousness among people, it is imperative to have low-cost health care devices to measure the vital parameters like heart rate and arterial oxygen saturation (SpO 2 ). In this paper, an efficient heart rate monitoring algorithm based on the morphology of photoplethysmography (PPG) signals to measure the spot heart rate (HR) and its real-time implementation is proposed. The algorithm does pre-processing and detects the onsets and systolic peaks of the PPG signal to estimate the heart rate of the subject. Since the algorithm is based on the morphology of the signal, it works well when the subject is not moving, which is a typical test case. So, this algorithm is developed mainly to measure the heart rate at on-demand applications. Real-time experimental results indicate the heart rate accuracy of 99.5%, mean absolute percentage error (MAPE) of 1.65%, mean absolute error (MAE) of 1.18 BPM and reference closeness factor (RCF) of 0.988. The results further show that the average response time of the algorithm to give the spot HR is 6.85 s, so that the users need not wait longer to see their HR. The hardware implementation results show that the algorithm only requires 18 KBytes of total memory and runs at high speed with 0.85 MIPS. So, this algorithm can be targeted to low-cost embedded platforms.
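A hedged SciPy sketch of the general approach (band-pass pre-processing, systolic-peak detection, heart rate from inter-beat intervals); the filter design, peak spacing, synthetic PPG and reference rate are assumptions rather than the authors' algorithm.

    import numpy as np
    from scipy.signal import butter, filtfilt, find_peaks

    def spot_heart_rate(ppg, fs):
        """Estimate heart rate (BPM) from systolic peaks of a short PPG segment."""
        b, a = butter(3, [0.5 / (fs / 2), 5.0 / (fs / 2)], btype='band')   # pre-processing
        clean = filtfilt(b, a, ppg)
        peaks, _ = find_peaks(clean, distance=int(0.4 * fs))               # peaks >= 0.4 s apart
        ibi = np.diff(peaks) / fs                                          # inter-beat intervals (s)
        return 60.0 / np.mean(ibi)

    fs = 100.0
    t = np.arange(0, 10, 1 / fs)
    ppg = np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.default_rng(7).standard_normal(t.size)

    hr, ref = spot_heart_rate(ppg, fs), 72.0      # 1.2 Hz synthetic pulse, ~72 BPM reference
    print(f"HR = {hr:.1f} BPM, absolute percentage error = {100 * abs(hr - ref) / ref:.1f}%")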
Voice recognition technology implementation in surgical pathology: advantages and limitations.
Singh, Meenakshi; Pal, Timothy R
2011-11-01
Voice recognition technology (VRT) has been in use for medical transcription outside of laboratories for many years, and in recent years it has evolved to a level where it merits consideration by surgical pathologists. To determine the feasibility and impact of making a transition from a transcriptionist-based service to VRT in surgical pathology. We have evaluated VRT in a phased manner for sign out of general and subspecialty surgical pathology cases after conducting a pilot study. We evaluated the effect on turnaround time, workflow, staffing, typographical error rates, and the overall ability of VRT to be adapted for use in surgical pathology. The stepwise implementation of VRT has resulted in real-time sign out of cases and improvement in average turnaround time from 4 to 3 days. The percentage of cases signed out in 1 day improved from 22% to 37%. Amendment rates for typographical errors have decreased. Use of templates and synoptic reports has been facilitated. The transcription staff has been reassigned to other duties and is successfully assisting in other areas. Resident involvement and exposure to complete case sign out has been achieved resulting in a positive impact on resident education. Voice recognition technology allows for a seamless workflow in surgical pathology, with improvements in turnaround time and a positive impact on competency-based resident education. Individual practices may assess the value of VRT and decide to implement it, potentially with gains in many aspects of their practice.
Is there a successful business case for telepharmacy?
Khan, Shamima; Snyder, Herbert W; Rathke, Ann M; Scott, David M; Peterson, Charles D
2008-04-01
The purpose of this study was to assess the financial operation of a Single Business Unit (SBU), consisting of one central retail pharmacy and two remote retail telepharmacies. Analyses of income statements and balance sheets for three consecutive years (2002-2004) were conducted. Several items from these statements were compared to the industry average. Gross profit increased from $260,093 in 2002 to $502,262 in 2004. The net operating income percent was 2.9 percentage points below the industry average in 2002, 3.9 percentage points below in 2003, and 1.3 percentage points above in 2004. The inventory turnover ratio remained consistently below the industry average, but it also increased over the period. This is an area of concern, given the high cost of pharmaceuticals and a higher likelihood of obsolescence that exists with a time-sensitive inventory. Despite these concerns, the overall trend for the SBU is positive. The rate of growth between 2002 and 2004 shows that it is getting close to median sales as reported in the NCPA Digest. The results of this study indicate that multiple locations become profitable when a sufficient volume of patients (sales) is reached, combined with efficient use of the pharmacist's time.
Techniques for small-bone lengthening in congenital anomalies of the hand and foot.
Minguella, J; Cabrera, M; Escolá, J
2001-10-01
The purpose of this study is to analyse three different lengthening techniques used in 31 small bones for congenital malformations of the hand and foot: 15 metacarpals, 12 metatarsals, 1 foot stump and 3 spaces between a previously transplanted phalanx end of the carpus or the metacarpal. Progressive lengthening with an external fixator device was performed in 23 cases: the callus distraction (callotasis) technique was used in 15 cases, whereas in the other 8 cases the speed of lengthening was faster and the defect bridged with a bone graft as a second stage. In another eight cases, a one-stage lengthening was performed. In the callotasis group, the total length gained ranged from 9 mm to 30 mm and the percentage of lengthening obtained (compared with the initial bone length) averaged 53.4%; in the fast lengthening group, the length gained ranged from 8 mm to 15 mm, and the average percentage of lengthening was 53.1%; and in the one-stage group, the length gained ranged from 7 mm to 15 mm, and the average percentage of lengthening was 43%. The overall complication rate was 22.5%.
On the error probability of general tree and trellis codes with applications to sequential decoding
NASA Technical Reports Server (NTRS)
Johannesson, R.
1973-01-01
An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.
Schoenberg, Mike R; Rum, Ruba S
2017-11-01
Rapid, clear, and efficient communication of neuropsychological results is essential to benefit patient care. Errors in communication are a leading cause of medical errors; nevertheless, there remains a lack of consistency in how neuropsychological scores are communicated. A major limitation in the communication of neuropsychological results is the inconsistent use of qualitative descriptors for standardized test scores and the use of vague terminology. A PubMed search from 1 Jan 2007 to 1 Aug 2016 was conducted to identify guidelines or consensus statements for the description and reporting of qualitative terms to communicate neuropsychological test scores. The review found the use of confusing and overlapping terms to describe various ranges of percentile standardized test scores. In response, we propose a simplified set of qualitative descriptors for normalized test scores (Q-Simple) as a means to reduce errors in communicating test results. The Q-Simple qualitative terms are: 'very superior', 'superior', 'high average', 'average', 'low average', 'borderline' and 'abnormal/impaired'. A case example illustrates the proposed Q-Simple qualitative classification system to communicate neuropsychological results for neurosurgical planning. The Q-Simple qualitative descriptor system is aimed at improving and standardizing communication of standardized neuropsychological test scores. Further research is needed to evaluate neuropsychological communication errors. Conveying the clinical implications of neuropsychological results in a manner that minimizes risk for communication errors is a quintessential component of evidence-based practice. Copyright © 2017 Elsevier B.V. All rights reserved.
Government Expenditures on Education as the Percentage of GDP in the EU
ERIC Educational Resources Information Center
Galetic, Fran
2015-01-01
This paper analyzes the government expenditures as the percentage of gross domestic product across countries of the European Union. There is a statistical model based on Z-score, whose aim is to calculate how much each EU country deviates from the average value. The model shows that government expenditures on education vary significantly between…
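A minimal sketch of the Z-score deviation measure described above, using invented expenditure figures for a handful of countries:

    import numpy as np

    # Hypothetical education expenditure as % of GDP for a few EU countries
    spend = {"Country A": 6.5, "Country B": 4.1, "Country C": 5.0, "Country D": 3.8, "Country E": 5.6}
    values = np.array(list(spend.values()))
    mu, sigma = values.mean(), values.std(ddof=1)

    for country, x in spend.items():
        z = (x - mu) / sigma                 # deviation from the EU average in SD units
        print(f"{country}: z = {z:+.2f}")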
Bragança, F M; Bosch, S; Voskamp, J P; Marin-Perianu, M; Van der Zwaag, B J; Vernooij, J C M; van Weeren, P R; Back, W
2017-07-01
Inertial measurement unit (IMU) sensor-based techniques are becoming more popular in horses as a tool for objective locomotor assessment. To describe, evaluate and validate a method of stride detection and quantification at walk and trot using distal limb mounted IMU sensors. Prospective validation study comparing IMU sensors and motion capture with force plate data. A total of seven Warmblood horses equipped with metacarpal/metatarsal IMU sensors and reflective markers for motion capture were hand walked and trotted over a force plate. Using four custom built algorithms hoof-on/hoof-off timing over the force plate were calculated for each trial from the IMU data. Accuracy of the computed parameters was calculated as the mean difference in milliseconds between the IMU or motion capture generated data and the data from the force plate, precision as the s.d. of these differences and percentage of error with accuracy of the calculated parameter as a percentage of the force plate stance duration. Accuracy, precision and percentage of error of the best performing IMU algorithm for stance duration at walk were 28.5, 31.6 ms and 3.7% for the forelimbs and -5.5, 20.1 ms and -0.8% for the hindlimbs, respectively. At trot the best performing algorithm achieved accuracy, precision and percentage of error of -27.6/8.8 ms/-8.4% for the forelimbs and 6.3/33.5 ms/9.1% for the hindlimbs. The described algorithms have not been assessed on different surfaces. Inertial measurement unit technology can be used to determine temporal kinematic stride variables at walk and trot justifying its use in gait and performance analysis. However, precision of the method may not be sufficient to detect all possible lameness-related changes. These data seem promising enough to warrant further research to evaluate whether this approach will be useful for appraising the majority of clinically relevant gait changes encountered in practice. © 2016 The Authors. Equine Veterinary Journal published by John Wiley & Sons Ltd on behalf of EVJ Ltd.
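The accuracy/precision/percentage-of-error definitions used above reduce to a few lines of arithmetic on paired event times; a sketch with invented hoof-on timings and an assumed stance duration:

    import numpy as np

    # Hypothetical hoof-on times (ms) from the force plate and from an IMU algorithm
    force_plate = np.array([1000.0, 1650.0, 2310.0, 2955.0, 3600.0])
    imu = np.array([1026.0, 1681.0, 2335.0, 2989.0, 3622.0])
    stance_ms = 750.0                                  # assumed mean stance duration

    diff = imu - force_plate
    accuracy = diff.mean()                             # mean difference (ms)
    precision = diff.std(ddof=1)                       # s.d. of the differences (ms)
    pct_error = 100.0 * accuracy / stance_ms           # accuracy as % of stance duration
    print(f"accuracy = {accuracy:.1f} ms, precision = {precision:.1f} ms, error = {pct_error:.1f}%")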
Forecasting incidence of hemorrhagic fever with renal syndrome in China using ARIMA model
2011-01-01
Background China is the country most seriously affected by hemorrhagic fever with renal syndrome (HFRS), accounting for 90% of HFRS cases reported globally. At present, HFRS is getting worse, with increasing cases and natural foci in China. Therefore, there is an urgent need for monitoring and predicting HFRS incidence to make the control of HFRS more effective. In this study, we applied a stochastic autoregressive integrated moving average (ARIMA) model with the objective of monitoring and short-term forecasting of HFRS incidence in China. Methods Chinese HFRS data from 1975 to 2008 were used to fit the ARIMA model. The Akaike Information Criterion (AIC) and Ljung-Box test were used to evaluate the constructed models. Subsequently, the fitted ARIMA model was applied to obtain the fitted HFRS incidence from 1978 to 2008 and compare it with the corresponding observed values. To assess the validity of the proposed model, the mean absolute percentage error (MAPE) between the observed and fitted HFRS incidence (1978-2008) was calculated. Finally, the fitted ARIMA model was used to forecast the incidence of HFRS for the years 2009 to 2011. All analyses were performed using SAS 9.1 with a significance level of p < 0.05. Results The goodness-of-fit test of the optimum ARIMA (0,3,1) model showed non-significant autocorrelations in the residuals of the model (Ljung-Box Q statistic = 5.95, P = 0.3113). The fitted values made by the ARIMA (0,3,1) model for the years 1978-2008 closely followed the observed values for the same years, with a mean absolute percentage error (MAPE) of 12.20%. The forecast values from 2009 to 2011 were 0.69, 0.86, and 1.21 per 100,000 population, respectively. Conclusion ARIMA models applied to historical HFRS incidence data are an important tool for HFRS surveillance in China. This study shows that accurate forecasting of HFRS incidence is possible using an ARIMA model. If the predicted values from this study are accurate, China can expect a rise in HFRS incidence. PMID:21838933
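A short statsmodels sketch of the same workflow (fit an ARIMA(0,3,1), compute MAPE between fitted and observed values, forecast ahead). The incidence series below is invented; only the model order follows the study.

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    # Hypothetical annual incidence per 100,000 (stand-in for the 1975-2008 HFRS series)
    y = np.array([3.1, 2.8, 2.5, 2.9, 2.2, 2.0, 1.8, 1.9, 1.5, 1.4, 1.3, 1.1, 1.0, 0.9])

    model = ARIMA(y, order=(0, 3, 1)).fit()            # same (p, d, q) as the study
    fitted = model.predict(start=3, end=len(y) - 1)    # skip the first d = 3 values
    mape = 100 * np.mean(np.abs((y[3:] - fitted) / y[3:]))
    print(f"MAPE = {mape:.1f}%")
    print("3-year-ahead forecast:", model.forecast(steps=3))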
Manguin, Sylvie; Foumane, Vincent; Besnard, Patrick; Fortes, Filomeno; Carnevale, Pierre
2017-07-01
Microscopic blood smear examinations done in health centers of Angola demonstrated a large overdiagnosis of malaria cases, with an average error rate as high as 85%. Overall, 83% of patients who received Coartem® had an inappropriate treatment. Overestimated malaria diagnosis was noticed even when specific symptoms were part of the clinical observation, antimalarial treatments being subsequently given. Malaria overdiagnosis therefore has three main consequences: (i) the lack of data reliability is of great concern, impeding epidemiological records and evaluation of the actual influence of operations as scheduled by the National Malaria Control Programme; (ii) the large misuse of antimalarial drugs can increase the selective pressure for resistant strains and can give a false impression of a drug-resistant P. falciparum crisis; and (iii) national health centers need to be strengthened in terms of human resources, with training in microscopy, and equipment, to improve malaria diagnosis through large-scale use of rapid diagnostic tests associated with thick blood smears, backed up by a "quality control" developed by the national health authorities. Monitoring of malaria cases was done in three Angolan health centers of Alto Liro (Lobito town) and the neighboring villages of Cambambi and Asseque (Benguéla Province) to evaluate the real burden of malaria. Carriers of Plasmodium among patients from newborns to 14 years old, with or without fever, were analyzed and compared to presumptive malaria cases diagnosed in these health centers. Presumptive malaria cases were diagnosed six times more often than positive thick blood smears done on the same children. In the Alto Liro health center, the percentage of diagnosis error reached 98%, while in Cambambi and Asseque it was 79% and 78%, respectively. The percentage of confirmed malaria cases was significantly higher during the dry (20.2%) than the rainy (13.2%) season. These observations in three peripheral health centers confirm what has already been noticed in other malaria endemic regions, and highlight the need for an accurate evaluation of the malaria control programme implemented in Angola. Copyright © 2017 Elsevier B.V. All rights reserved.
Comparability of [0-Level] GCE Grades in 1968 and 1973.
ERIC Educational Resources Information Center
Backhouse, John K.
1978-01-01
Willmott's comparison of General Certificate of Education (GCE) scores in 1968 and 1973 is reexamined. The trend toward an increasing percentage of students who pass is confirmed, but estimates of standard errors indicate that subtest differences may be attributed to the sampling plan. (CP)
Enlistment Early Warning System and Accession Crisis Prevention Process. Volumes 4, 5, 6, and 7.
1984-06-15
EXHIBIT 4.8: Forecasting accuracy summary, April-December 1983 (root mean square percentage error, total).
Eigenvector method for umbrella sampling enables error analysis
Thiede, Erik H.; Van Koten, Brian; Weare, Jonathan; Dinner, Aaron R.
2016-01-01
Umbrella sampling efficiently yields equilibrium averages that depend on exploring rare states of a model by biasing simulations to windows of coordinate values and then combining the resulting data with physical weighting. Here, we introduce a mathematical framework that casts the step of combining the data as an eigenproblem. The advantage to this approach is that it facilitates error analysis. We discuss how the error scales with the number of windows. Then, we derive a central limit theorem for averages that are obtained from umbrella sampling. The central limit theorem suggests an estimator of the error contributions from individual windows, and we develop a simple and computationally inexpensive procedure for implementing it. We demonstrate this estimator for simulations of the alanine dipeptide and show that it emphasizes low free energy pathways between stable states in comparison to existing approaches for assessing error contributions. Our work suggests the possibility of using the estimator and, more generally, the eigenvector method for umbrella sampling to guide adaptation of the simulation parameters to accelerate convergence. PMID:27586912
Michael, Claire W; Naik, Kalyani; McVicker, Michael
2013-05-01
We developed a value stream map (VSM) of the Papanicolaou test procedure to identify opportunities to reduce waste and errors, created a new VSM, and implemented a new process emphasizing Lean tools. Preimplementation data revealed the following: (1) processing time (PT) for 1,140 samples averaged 54 hours; (2) 27 accessioning errors were detected on review of 357 random requisitions (7.6%); (3) 5 of the 20,060 tests had labeling errors that had gone undetected in the processing stage. Four were detected later during specimen processing but 1 reached the reporting stage. Postimplementation data were as follows: (1) PT for 1,355 samples averaged 31 hours; (2) 17 accessioning errors were detected on review of 385 random requisitions (4.4%); and (3) no labeling errors were undetected. Our results demonstrate that implementation of Lean methods, such as first-in first-out processes and minimizing batch size by staff actively participating in the improvement process, allows for higher quality, greater patient safety, and improved efficiency.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, J; Shi, W; Andrews, D
2016-06-15
Purpose: To compare online image registrations of TrueBeam cone-beam CT (CBCT) and BrainLab ExacTrac x-ray imaging systems for cranial radiotherapy. Method: Phantom and patient studies were performed on a Varian TrueBeam STx linear accelerator (Version 2.5), which is integrated with a BrainLab ExacTrac imaging system (Version 6.1.1). The phantom study was based on a Rando head phantom, which was designed to evaluate isocenter-location dependence of the image registrations. Ten isocenters were selected at various locations in the phantom, which represented clinical treatment sites. CBCT and ExacTrac x-ray images were taken when the phantom was located at each isocenter. The patientmore » study included thirteen patients. CBCT and ExacTrac x-ray images were taken at each patient’s treatment position. Six-dimensional image registrations were performed on CBCT and ExacTrac, and residual errors calculated from CBCT and ExacTrac were compared. Results: In the phantom study, the average residual-error differences between CBCT and ExacTrac image registrations were: 0.16±0.10 mm, 0.35±0.20 mm, and 0.21±0.15 mm, in the vertical, longitudinal, and lateral directions, respectively. The average residual-error differences in the rotation, roll, and pitch were: 0.36±0.11 degree, 0.14±0.10 degree, and 0.12±0.10 degree, respectively. In the patient study, the average residual-error differences in the vertical, longitudinal, and lateral directions were: 0.13±0.13 mm, 0.37±0.21 mm, 0.22±0.17 mm, respectively. The average residual-error differences in the rotation, roll, and pitch were: 0.30±0.10 degree, 0.18±0.11 degree, and 0.22±0.13 degree, respectively. Larger residual-error differences (up to 0.79 mm) were observed in the longitudinal direction in the phantom and patient studies where isocenters were located in or close to frontal lobes, i.e., located superficially. Conclusion: Overall, the average residual-error differences were within 0.4 mm in the translational directions and were within 0.4 degree in the rotational directions.« less
Study of 1-min rain rate integration statistic in South Korea
NASA Astrophysics Data System (ADS)
Shrestha, Sujan; Choi, Dong-You
2017-03-01
The design of millimeter-wave communication links and the study of propagation impairments at higher frequencies due to hydrometeors, particularly rain, require knowledge of 1-min. rainfall rate data. Signal attenuation in space communication results from absorption and scattering of radio wave energy. Estimating radio wave attenuation due to rain relies on rain rates with a 1-min. integration time. However, in practice, securing these data over a wide range of areas is difficult. Long-term precipitation data are readily available, but rain attenuation prediction models need a 1-min. rainfall rate for a better estimation of the attenuation. In this paper, we classify and survey the prominent 1-min. rain rate models. Regression analysis was performed on cumulative rainfall data measured experimentally over a decade in nine different regions of South Korea, at 93 different locations, using the experimental 1-min. rainfall accumulation. To visualize the 1-min. rainfall rate applicable to the whole region for 0.01% of the time, we considered the variation in the rain rate for 40 stations across South Korea. The Kriging interpolation method was used for spatial interpolation of the rain rate values for 0.01% of the time onto a regular grid, to obtain a highly consistent and predictable rainfall variation. The rain rate exceedance at the 1-min. interval measured by the rain gauge was compared to the rainfall data estimated using the International Telecommunication Union Radiocommunication Sector model (ITU-R P.837-6) along with empirical methods such as Segal, Burgueno et al., Chebil and Rahman, logarithmic, exponential and global coefficients, second- and third-order polynomial fits, and Model 1 for the Icheon region under regional and average coefficient sets. ITU-R P.837-6 exhibits lower relative error percentages of 3.32% and 12.59% for the 5- and 10-min. to 1-min. conversions, whereas higher error percentages of 24.64%, 46.44% and 58.46% were obtained for the 20-, 30- and 60-min. to 1-min. conversions in the Icheon region. The available experimental rainfall data were sampled at equiprobable rain-rate values, and the application of these models to the experimentally obtained data exhibits a variable error rate. This paper aims to provide a better survey of the various conversion methods for modeling a 1-min. rain rate applicable to the South Korean regions, with a suitable contour plot at 0.01% of the time.
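Several of the empirical conversion methods named above take the form of a power-law relation between equiprobable rain rates, which can be fitted in log-log space; a hedged sketch with invented 10-min. and 1-min. equiprobable rain rates (the coefficients and data are illustrative, not the South Korean results):

    import numpy as np

    # Equiprobable rain rates (mm/h) at the same exceedance probabilities:
    # hypothetical 10-min averages and the corresponding 1-min values
    r_10min = np.array([12.0, 20.0, 33.0, 48.0, 70.0])
    r_1min = np.array([18.0, 31.0, 52.0, 78.0, 118.0])

    # Fit the power-law conversion R1 = a * R10**b in log-log space
    b, log_a = np.polyfit(np.log(r_10min), np.log(r_1min), 1)
    a = np.exp(log_a)

    est = a * r_10min ** b
    rel_err = 100 * np.mean(np.abs(est - r_1min) / r_1min)
    print(f"R1 = {a:.3f} * R10^{b:.3f}, mean relative error = {rel_err:.1f}%")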
Rosso, Nicholas; Giabbanelli, Philippe
2018-05-30
National surveys in public health nutrition commonly record the weight of every food consumed by an individual. However, if the goal is to identify whether individuals are in compliance with the 5 main national nutritional guidelines (sodium, saturated fats, sugars, fruit and vegetables, and fats), much less information may be needed. A previous study showed that tracking only 2.89% of all foods (113/3911) was sufficient to accurately identify compliance. Further reducing the data needs could lower participation burden, thus decreasing the costs for monitoring national compliance with key guidelines. This study aimed to assess whether national public health nutrition surveys can be further simplified by only recording whether a food was consumed, rather than having to weigh it. Our dataset came from a generalized sample of inhabitants in the United Kingdom, more specifically from the National Diet and Nutrition Survey 2008-2012. After simplifying food consumptions to a binary value (1 if an individual consumed a food and 0 otherwise), we built and optimized decision trees to find whether the foods could accurately predict compliance with the major 5 nutritional guidelines. When using decision trees of a similar size to previous studies (ie, involving as many foods), we were able to correctly infer compliance for the 5 guidelines with an average accuracy of 80.1%. This is an average increase of 2.5 percentage points over a previous study, showing that further simplifying the surveys can actually yield more robust estimates. When we allowed the new decision trees to use slightly more foods than in previous studies, we were able to optimize the performance with an average increase of 3.1 percentage points. Although one may expect a further simplification of surveys to decrease accuracy, our study found that public health dietary surveys can be simplified (from accurately weighing items to simply checking whether they were consumed) while improving accuracy. One possibility is that the simplification reduced noise and made it easier for patterns to emerge. Using simplified surveys will allow to monitor public health nutrition in a more cost-effective manner and possibly decrease the number of errors as participation burden is reduced. ©Nicholas Rosso, Philippe Giabbanelli. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 30.05.2018.
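A toy scikit-learn sketch of the idea: a size-limited decision tree fed only binary consumed/not-consumed indicators and scored by cross-validated accuracy. The food matrix and the compliance rule are simulated placeholders, not the survey data.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    n_people, n_foods = 500, 40
    eaten = rng.integers(0, 2, size=(n_people, n_foods))        # 1 = food was consumed
    # hypothetical compliance label loosely tied to a handful of "marker" foods
    compliant = (eaten[:, :5].sum(axis=1) <= 2).astype(int)

    tree = DecisionTreeClassifier(max_leaf_nodes=16, random_state=0)
    acc = cross_val_score(tree, eaten, compliant, cv=5).mean()
    print(f"mean cross-validated accuracy: {acc:.2f}")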
Stable estimate of primary OC/EC ratios in the EC tracer method
NASA Astrophysics Data System (ADS)
Chu, Shao-Hang
In fine particulate matter studies, the primary OC/EC ratio plays an important role in estimating the secondary organic aerosol contribution to PM2.5 concentrations using the EC tracer method. In this study, numerical experiments are carried out to test and compare various statistical techniques in the estimation of primary OC/EC ratios. The influence of random measurement errors in both primary OC and EC measurements on the estimation of the expected primary OC/EC ratios is examined. It is found that random measurement errors in EC generally create an underestimation of the slope and an overestimation of the intercept of the ordinary least-squares regression line. The Deming regression analysis performs much better than the ordinary regression, but it tends to overcorrect the problem by slightly overestimating the slope and underestimating the intercept. Averaging the ratios directly is usually undesirable because the average is strongly influenced by unrealistically high values of OC/EC ratios resulting from random measurement errors at low EC concentrations. The errors generally result in a skewed distribution of the OC/EC ratios even if the parent distributions of OC and EC are close to normal. When measured OC contains a significant amount of non-combustion OC Deming regression is a much better tool and should be used to estimate both the primary OC/EC ratio and the non-combustion OC. However, if the non-combustion OC is negligibly small the best and most robust estimator of the OC/EC ratio turns out to be the simple ratio of the OC and EC averages. It not only reduces random errors by averaging individual variables separately but also acts as a weighted average of ratios to minimize the influence of unrealistically high OC/EC ratios created by measurement errors at low EC concentrations. The median of OC/EC ratios ranks a close second, and the geometric mean of ratios ranks third. This is because their estimations are insensitive to questionable extreme values. A real world example is given using the ambient data collected from an Atlanta STN site during the winter of 2001-2002.
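A minimal sketch of the Deming estimator and the ratio-of-averages estimator discussed above, applied to simulated OC/EC data with a known primary ratio and non-combustion OC (the error variances and sample values are assumptions):

    import numpy as np

    def deming(x, y, delta=1.0):
        """Deming regression slope and intercept; delta is the ratio of the error
        variances var(err_y)/var(err_x) (delta = 1 gives orthogonal regression)."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        sxx, syy = np.var(x, ddof=1), np.var(y, ddof=1)
        sxy = np.cov(x, y, ddof=1)[0, 1]
        slope = (syy - delta * sxx + np.sqrt((syy - delta * sxx) ** 2
                 + 4 * delta * sxy ** 2)) / (2 * sxy)
        return slope, y.mean() - slope * x.mean()

    # Simulated EC and primary OC (ug/m3): true OC/EC ratio 2.2, non-combustion OC 0.5
    rng = np.random.default_rng(5)
    ec_true = rng.uniform(0.2, 3.0, 200)
    ec = ec_true + rng.normal(0, 0.15, 200)                  # noisy EC measurement
    oc = 2.2 * ec_true + 0.5 + rng.normal(0, 0.15, 200)      # noisy OC measurement

    print("Deming slope, intercept:", deming(ec, oc))
    print("ratio of averages (negligible non-combustion OC case):", oc.mean() / ec.mean())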
An empirical model for estimating solar radiation in the Algerian Sahara
NASA Astrophysics Data System (ADS)
Benatiallah, Djelloul; Benatiallah, Ali; Bouchouicha, Kada; Hamouda, Messaoud; Nasri, Bahous
2018-05-01
The present work aims to apply the empirical R.sun model to evaluate the solar radiation fluxes on a horizontal plane under clear-sky conditions for the Adrar city site (27°18 N and 0°11 W) in Algeria, and to compare the results with measurements at that site. The expected results of this comparison are important for investment studies of solar systems (solar power plants for electricity production, CSP) and also for the design and performance analysis of any system using solar energy. The statistical indicators used to evaluate the accuracy of the model were the mean bias error (MBE), root mean square error (RMSE), and coefficient of determination. The results show that, for global radiation, the daily correlation coefficient is 0.9984. The mean absolute percentage error is 9.44%. The daily mean bias error is -7.94%. The daily root mean square error is 12.31%.
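A small sketch of the statistical indicators mentioned above; expressing MBE and RMSE as percentages of the mean measured value is an assumption made here for consistency with the quoted figures, and the radiation values are invented.

    import numpy as np

    def model_stats(measured, estimated):
        """MBE and RMSE as percentages of the mean measured value, plus MAPE."""
        measured, estimated = np.asarray(measured, float), np.asarray(estimated, float)
        diff = estimated - measured
        mbe = 100 * diff.mean() / measured.mean()
        rmse = 100 * np.sqrt(np.mean(diff ** 2)) / measured.mean()
        mape = 100 * np.mean(np.abs(diff / measured))
        return mbe, rmse, mape

    # Hypothetical daily global radiation (Wh/m2): measured vs. clear-sky model estimate
    meas = np.array([6100, 6400, 5900, 7000, 6800, 7200])
    est = np.array([5800, 6300, 5600, 6500, 6400, 6900])
    print("MBE = %.2f%%, RMSE = %.2f%%, MAPE = %.2f%%" % model_stats(meas, est))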
NASA Astrophysics Data System (ADS)
Siirila, E. R.; Fernandez-Garcia, D.; Sanchez-Vila, X.
2014-12-01
Particle tracking (PT) techniques, often considered favorable over Eulerian techniques due to artificial smoothening in breakthrough curves (BTCs), are evaluated in a risk-driven framework. Recent work has shown that given a relatively few number of particles (np), PT methods can yield well-constructed BTCs with kernel density estimators (KDEs). This work compares KDE and non-KDE BTCs simulated as a function of np (102-108) and averaged as a function of the exposure duration, ED. Results show that regardless of BTC shape complexity, un-averaged PT BTCs show a large bias over several orders of magnitude in concentration (C) when compared to the KDE results, remarkably even when np is as low as 102. With the KDE, several orders of magnitude less np are required to obtain the same global error in BTC shape as the PT technique. PT and KDE BTCs are averaged as a function of the ED with standard and new methods incorporating the optimal h (ANA). The lowest error curve is obtained through the ANA method, especially for smaller EDs. Percent error of peak of averaged-BTCs, important in a risk framework, is approximately zero for all scenarios and all methods for np ≥105, but vary between the ANA and PT methods, when np is lower. For fewer np, the ANA solution provides a lower error fit except when C oscillations are present during a short time frame. We show that obtaining a representative average exposure concentration is reliant on an accurate representation of the BTC, especially when data is scarce.
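A brief SciPy illustration of the contrast discussed above: a raw particle-tracking BTC built as a histogram of arrival times versus a KDE-smoothed BTC from the same particles (here with SciPy's default Scott bandwidth rather than an optimised h, and a synthetic lognormal arrival-time sample):

    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(11)
    n_particles = 500                                   # deliberately small np
    arrival = rng.lognormal(mean=3.0, sigma=0.4, size=n_particles)   # arrival times (days)

    t = np.linspace(0, 100, 400)
    hist, edges = np.histogram(arrival, bins=40, range=(0, 100), density=True)  # raw PT BTC
    centers = 0.5 * (edges[:-1] + edges[1:])
    btc_kde = gaussian_kde(arrival)(t)                  # KDE-smoothed BTC

    print("raw-histogram BTC peak at t =", centers[np.argmax(hist)], "days")
    print("KDE BTC peak at t =", t[np.argmax(btc_kde)], "days")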
Propagation of Radiosonde Pressure Sensor Errors to Ozonesonde Measurements
NASA Technical Reports Server (NTRS)
Stauffer, R. M.; Morris, G.A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.
2014-01-01
Several previous studies highlight pressure (or, equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde-ozonesonde launches from the Southern Hemisphere subtropics to Northern mid-latitudes are considered, with launches between 2005-2013 from both longer-term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI-Droplet Measurement Technologies; DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are at least 0.6 hPa in the free troposphere, with nearly a third reaching 1.0 hPa at 26 km, where a 1.0 hPa error represents 5 percent of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96 percent of launches lie within 5 percent O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (30 km) can exceed 10 percent (about 25 percent of launches that reach 30 km exceed this threshold). These errors cause disagreement between the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profiles by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with the addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when the O3 profile is integrated to 10 hPa with subsequent addition of the O3 climatology above 10 hPa. The RS92 radiosondes are superior in performance compared to the other radiosondes, with average 26 km errors of -0.12 hPa or +0.61 percent O3MR error. The iMet-P radiosondes had average 26 km errors of -1.95 hPa or +8.75 percent O3MR error. Based on our analysis, we suggest that ozonesondes always be coupled with a GPS-enabled radiosonde and that pressure-dependent variables, such as O3MR, be recalculated/reprocessed using the GPS-measured altitude, especially when 26 km pressure offsets exceed 1.0 hPa (5 percent).
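The sensitivity quoted above follows directly from the definition of the mixing ratio, since O3MR in ppmv is 10 times the ECC partial pressure (mPa) divided by the ambient pressure (hPa); a sketch with assumed values near 26 km:

    def o3_mixing_ratio_ppmv(p_o3_mpa, p_air_hpa):
        """Ozone volume mixing ratio (ppmv): 1 mPa of O3 per 1 hPa of air = 10 ppmv."""
        return 10.0 * p_o3_mpa / p_air_hpa

    p_o3 = 12.0       # mPa, assumed ozone partial pressure near 26 km
    p_true = 20.0     # hPa, roughly the ambient pressure at 26 km
    offset = -1.0     # hPa, radiosonde pressure error

    mr_true = o3_mixing_ratio_ppmv(p_o3, p_true)
    mr_biased = o3_mixing_ratio_ppmv(p_o3, p_true + offset)
    print(f"O3MR error from a {offset:+.1f} hPa offset: "
          f"{100 * (mr_biased - mr_true) / mr_true:+.1f}%")     # about +5% at 20 hPa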
Eldyasti, Ahmed; Nakhla, George; Zhu, Jesse
2012-05-01
Biofilm models are valuable tools for process engineers to simulate biological wastewater treatment. In order to enhance the use of biofilm models implemented in contemporary simulation software, model calibration is both necessary and helpful. The aim of this work was to develop a calibration protocol of the particulate biofilm model with a help of the sensitivity analysis of the most important parameters in the biofilm model implemented in BioWin® and verify the predictability of the calibration protocol. A case study of a circulating fluidized bed bioreactor (CFBBR) system used for biological nutrient removal (BNR) with a fluidized bed respirometric study of the biofilm stoichiometry and kinetics was used to verify and validate the proposed calibration protocol. Applying the five stages of the biofilm calibration procedures enhanced the applicability of BioWin®, which was capable of predicting most of the performance parameters with an average percentage error (APE) of 0-20%. Copyright © 2012 Elsevier Ltd. All rights reserved.
ANN-PSO Integrated Optimization Methodology for Intelligent Control of MMC Machining
NASA Astrophysics Data System (ADS)
Chandrasekaran, Muthumari; Tamang, Santosh
2017-08-01
Metal Matrix Composites (MMC) show improved properties in comparison with non-reinforced alloys and have found increased application in automotive and aerospace industries. The selection of optimum machining parameters to produce components of desired surface roughness is of great concern considering the quality and economy of manufacturing process. In this study, a surface roughness prediction model for turning Al-SiCp MMC is developed using Artificial Neural Network (ANN). Three turning parameters viz., spindle speed ( N), feed rate ( f) and depth of cut ( d) were considered as input neurons and surface roughness was an output neuron. ANN architecture having 3 -5 -1 is found to be optimum and the model predicts with an average percentage error of 7.72 %. Particle Swarm Optimization (PSO) technique is used for optimizing parameters to minimize machining time. The innovative aspect of this work is the development of an integrated ANN-PSO optimization method for intelligent control of MMC machining process applicable to manufacturing industries. The robustness of the method shows its superiority for obtaining optimum cutting parameters satisfying desired surface roughness. The method has better convergent capability with minimum number of iterations.
Forecasting incidence of dengue in Rajasthan, using time series analyses.
Bhatnagar, Sunil; Lal, Vivek; Gupta, Shiv D; Gupta, Om P
2012-01-01
To develop a prediction model for dengue fever/dengue haemorrhagic fever (DF/DHF) using time series data over the past decade in Rajasthan and to forecast monthly DF/DHF incidence for 2011. Seasonal autoregressive integrated moving average (SARIMA) model was used for statistical modeling. During January 2001 to December 2010, the reported DF/DHF cases showed a cyclical pattern with seasonal variation. SARIMA (0,0,1) (0,1,1) 12 model had the lowest normalized Bayesian information criteria (BIC) of 9.426 and mean absolute percentage error (MAPE) of 263.361 and appeared to be the best model. The proportion of variance explained by the model was 54.3%. Adequacy of the model was established through Ljung-Box test (Q statistic 4.910 and P-value 0.996), which showed no significant correlation between residuals at different lag times. The forecast for the year 2011 showed a seasonal peak in the month of October with an estimated 546 cases. Application of SARIMA model may be useful for forecast of cases and impending outbreaks of DF/DHF and other infectious diseases, which exhibit seasonal pattern.
New developments in supra-threshold perimetry.
Henson, David B; Artes, Paul H
2002-09-01
To describe a series of recent enhancements to supra-threshold perimetry. Computer simulations were used to develop an improved algorithm (HEART) for the setting of the supra-threshold test intensity at the beginning of a field test, and to evaluate the relationship between various pass/fail criteria and the test's performance (sensitivity and specificity) and how they compare with modern threshold perimetry. Data were collected in optometric practices to evaluate HEART and to assess how the patient's response times can be analysed to detect false positive response errors in visual field test results. The HEART algorithm shows improved performance (reduced between-eye differences) over current algorithms. A pass/fail criterion of '3 stimuli seen of 3-5 presentations' at each test location reduces test/retest variability and combines high sensitivity and specificity. A large percentage of false positive responses can be detected by comparing their latencies to the average response time of a patient. Optimised supra-threshold visual field tests can perform as well as modern threshold techniques. Such tests may be easier to perform for novice patients, compared with the more demanding threshold tests.
Multi-step prediction for influenza outbreak by an adjusted long short-term memory.
Zhang, J; Nawata, K
2018-05-01
Influenza results in approximately 3-5 million annual cases of severe illness and 250 000-500 000 deaths. We urgently need an accurate multi-step-ahead time-series forecasting model to help hospitals perform dynamic assignments of beds to influenza patients for the annually varying influenza season, and to aid pharmaceutical companies in formulating a flexible vaccine-manufacturing plan for each year's different influenza vaccine. In this study, we utilised four different multi-step prediction algorithms based on the long short-term memory (LSTM) network. The results showed that implementing multiple single-output predictions in a six-layer LSTM structure achieved the best accuracy. The mean absolute percentage errors from two- to 13-step-ahead prediction for the US influenza-like illness rates were all <15%, averaging 12.930%. To the best of our knowledge, this is the first time that LSTM has been applied and refined to perform multi-step-ahead prediction for influenza outbreaks. Hopefully, this modelling methodology can be applied in other countries and therefore help prevent and control influenza worldwide.
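For orientation only, the sketch below shows one way to set up the "multiple single-output prediction" strategy mentioned above: a separate LSTM model per forecast horizon, each evaluated by MAPE. It uses Keras, a much smaller two-layer network than the six-layer structure in the study, and a synthetic weekly ILI-like series, so the numbers it prints are not comparable to the reported 12.930%.

```python
import numpy as np
import tensorflow as tf

def make_windows(series, n_lags, horizon):
    """Supervised pairs: n_lags past values -> the value `horizon` steps ahead."""
    X, y = [], []
    for t in range(len(series) - n_lags - horizon + 1):
        X.append(series[t:t + n_lags])
        y.append(series[t + n_lags + horizon - 1])
    return np.array(X)[..., None], np.array(y)

def build_lstm(n_lags):
    # Small stand-in for the paper's deeper LSTM structure
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_lags, 1)),
        tf.keras.layers.LSTM(32, return_sequences=True),
        tf.keras.layers.LSTM(32),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mae")
    return model

# Synthetic weekly ILI-like rate (illustration only)
t = np.arange(520)
ili = 2 + 1.5 * np.sin(2 * np.pi * t / 52) + np.random.default_rng(0).normal(0, 0.1, t.size)

# One single-output model per horizon ("multiple single-output prediction")
n_lags = 26
for h in range(2, 14):
    X, y = make_windows(ili, n_lags, h)
    model = build_lstm(n_lags)
    model.fit(X[:-52], y[:-52], epochs=5, verbose=0)
    pred = model.predict(X[-52:], verbose=0).ravel()
    mape = np.mean(np.abs((y[-52:] - pred) / y[-52:])) * 100
    print(f"{h}-step-ahead MAPE: {mape:.1f}%")
```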
Error Analysis Of Students Working About Word Problem Of Linear Program With NEA Procedure
NASA Astrophysics Data System (ADS)
Santoso, D. A.; Farid, A.; Ulum, B.
2017-06-01
Evaluation and assessment are an important part of learning. In the evaluation of learning, written tests are still commonly used. However, the tests are usually not followed up by further evaluation; the process stops at the grading stage and does not examine the process and the errors made by students. If a student shows an error pattern or a process error, the actions taken can be focused on the fault and on why it happened. The NEA procedure provides a way for educators to evaluate student progress more comprehensively. In this study, students' mistakes in working on word problems about linear programming were analyzed. The results show that the mistakes students make most often occur in the modeling (transformation) phase and in process skills, with overall percentage distributions of 20% and 15%, respectively. According to the observations, these errors occur most commonly because of students' lack of precision in modeling and haste in calculation. Through this error analysis, it is expected that educators can determine or use the right approach to address these issues in the next lesson.
The precision of a special purpose analog computer in clinical cardiac output determination.
Sullivan, F J; Mroz, E A; Miller, R E
1975-01-01
Three hundred dye-dilution curves taken during our first year of clinical experience with the Waters CO-4 cardiac output computer were analyzed to estimate the errors involved in its use. Provided that calibration is accurate and 5.0 mg of dye are injected for each curve, the percentage standard deviation of measurement using this computer is about 8.7%. Included in this are the errors inherent in the computer, errors due to baseline drift, errors in the injection of dye and actual variation of cardiac output over a series of successive determinations. The size of this error is comparable to that involved in manual calculation. The mean value of five successive curves will be within 10% of the real value in 99 cases out of 100. Advances in methodology and equipment are discussed which make calibration simpler and more accurate, and which should also improve the quality of computer determination. A list of suggestions is given to minimize the errors involved in the clinical use of this equipment. PMID:1089394
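A quick numeric check of the precision figures quoted above, assuming independent curves and an approximately normal error: with an 8.7% per-curve standard deviation, the standard error of the mean of five curves and the corresponding two-sided 99% interval follow directly.

```python
import math

per_curve_sd = 8.7        # percentage SD of a single dye-dilution curve (from the abstract)
n_curves = 5
sem = per_curve_sd / math.sqrt(n_curves)   # SD of the 5-curve mean, about 3.9%
z99 = 2.576                                # two-sided 99% normal quantile
print(f"SD of 5-curve mean: {sem:.1f}%")
print(f"99% interval half-width: {z99 * sem:.1f}%")   # about 10%, consistent with the abstract
```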
SU-E-P-49: Evaluation of Image Quality and Radiation Dose of Various Unenhanced Head CT Protocols
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, L; Khan, M; Alapati, K
2015-06-15
Purpose: To evaluate the diagnostic value of various unenhanced head CT protocols and to predict an acceptable radiation dose level for head CT exams. Methods: Our retrospective analysis included 3 groups, 20 patients per group, who underwent clinical routine unenhanced adult head CT examination. All exams were performed axially with 120 kVp. Three protocols (380 mAs without iterative reconstruction and without automAs, 340 mAs with iterative reconstruction without automAs, and 340 mAs with iterative reconstruction and automAs) were applied to the three groups of patients, respectively. The images were reconstructed with H30, J30 for the brain window and H60, J70 for the bone window. Images acquired with the three protocols were randomized and blindly reviewed by three radiologists. A 5-point scale was used to rate each exam. The percentage of exams scored above 3 and the average scores of each protocol were calculated for each reviewer and tissue type. Results: For protocols without automAs, the average scores of the bone window with iterative reconstruction were higher than those without iterative reconstruction for each reviewer, although the radiation dose was 10% lower. 100% of exams were scored 3 or higher and the average scores were above 4 for both brain and bone reconstructions. The CTDIvols are 64.4 and 57.8 mGy for 380 and 340 mAs, respectively. With automAs, the radiation dose varied with head size, resulting in an average CTDIvol of 47.5 mGy (range 39.5 to 56.5 mGy). 93% and 98% of exams were scored greater than 3 for the brain and bone windows, respectively. The diagnostic confidence level and image quality of exams with automAs were lower than those without automAs for each reviewer. Conclusion: Based on these results, the mAs was reduced to 300 with automAs off for the head CT exam. The radiation dose was 20% lower than the original protocol and the CTDIvol was reduced to 51.2 mGy.
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Kundu, Prasun K.; Einaudi, Franco (Technical Monitor)
2000-01-01
Estimates from TRMM satellite data of monthly total rainfall over an area are subject to substantial sampling errors due to the limited number of visits to the area by the satellite during the month. Quantitative comparisons of TRMM averages with data collected by other satellites and by ground-based systems require some estimate of the size of this sampling error. A method of estimating this sampling error based on the actual statistics of the TRMM observations and on some modeling work has been developed. "Sampling error" in TRMM monthly averages is defined here relative to the monthly total that a hypothetical satellite permanently stationed above the area would have reported. "Sampling error" therefore includes contributions from the random and systematic errors introduced by the satellite remote sensing system. As part of our long-term goal of providing error estimates for each grid point accessible to the TRMM instruments, sampling error estimates for TRMM based on rain retrievals from TRMM microwave (TMI) data are compared for different times of the year and different oceanic areas (to minimize changes in the statistics due to algorithmic differences over land and ocean). Changes in sampling error estimates due to changes in rain statistics caused (1) by the evolution of the official algorithms used to process the data and (2) by differences from other remote sensing systems, such as the Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I), are analyzed.
ERIC Educational Resources Information Center
Birjandi, Parviz; Siyyari, Masood
2016-01-01
This paper presents the results of an investigation into the role of two personality traits (i.e. Agreeableness and Conscientiousness from the Big Five personality traits) in predicting rating error in the self-assessment and peer-assessment of composition writing. The average self/peer-rating errors of 136 Iranian English major undergraduates…
The Accuracy of Aggregate Student Growth Percentiles as Indicators of Educator Performance
ERIC Educational Resources Information Center
Castellano, Katherine E.; McCaffrey, Daniel F.
2017-01-01
Mean or median student growth percentiles (MGPs) are a popular measure of educator performance, but they lack rigorous evaluation. This study investigates the error in MGP due to test score measurement error (ME). Using analytic derivations, we find that errors in the commonly used MGP are correlated with average prior latent achievement: Teachers…
ERIC Educational Resources Information Center
Schumacher, Robin F.; Malone, Amelia S.
2017-01-01
The goal of this study was to describe fraction-calculation errors among fourth-grade students and to determine whether error patterns differed as a function of problem type (addition vs. subtraction; like vs. unlike denominators), orientation (horizontal vs. vertical), or mathematics-achievement status (low-, average-, or high-achieving). We…
Retrieval of the aerosol optical thickness from UV global irradiance measurements
NASA Astrophysics Data System (ADS)
Costa, M. J.; Salgueiro, V.; Bortoli, D.; Obregón, M. A.; Antón, M.; Silva, A. M.
2015-12-01
UV irradiance has been measured at Évora for several years, where a CIMEL sunphotometer integrated in AERONET is also installed. In the present work, measurements of UVA (315-400 nm) irradiances taken with Kipp&Zonen radiometers, as well as satellite data of ozone total column values, are used in combination with radiative transfer calculations to estimate the aerosol optical thickness (AOT) in the UV. The retrieved UV AOT in Évora is compared with AERONET AOT (at 340 and 380 nm) and a fairly good agreement is found, with a root mean square error of 0.05 (normalized root mean square error of 8.3%) and a mean absolute error of 0.04 (mean percentage error of 2.9%). The methodology is then used to estimate the UV AOT in Sines, an industrialized site on the Atlantic western coast, where UV irradiance has been monitored since 2013 but no aerosol information is available.
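The comparison statistics quoted above can be reproduced with a few lines of code once retrieved and reference AOT series are paired up; the sketch below uses common textbook definitions of the metrics (the paper's exact normalization conventions may differ) and invented AOT values.

```python
import numpy as np

def aot_error_metrics(retrieved, reference):
    """RMSE, normalized RMSE (%), MAE and mean percentage error between two AOT series."""
    retrieved, reference = np.asarray(retrieved, float), np.asarray(reference, float)
    diff = retrieved - reference
    rmse = np.sqrt(np.mean(diff ** 2))
    nrmse = 100 * rmse / np.mean(reference)
    mae = np.mean(np.abs(diff))
    mpe = 100 * np.mean(diff / reference)
    return rmse, nrmse, mae, mpe

# Invented AOT values, only to show the call pattern
uv_aot      = [0.42, 0.35, 0.51, 0.28]
aeronet_aot = [0.40, 0.37, 0.48, 0.27]
print(aot_error_metrics(uv_aot, aeronet_aot))
```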
Examining Impulse-Variability in Kicking.
Chappell, Andrew; Molina, Sergio L; McKibben, Jonathon; Stodden, David F
2016-07-01
This study examined variability in kicking speed and spatial accuracy to test the impulse-variability theory prediction of an inverted-U function and the speed-accuracy trade-off. Twenty-eight 18- to 25-year-old adults kicked a playground ball at various percentages (50-100%) of their maximum speed at a wall target. Speed variability and spatial error were analyzed using repeated-measures ANOVA with built-in polynomial contrasts. Results indicated a significant inverse linear trajectory for speed variability (p < .001, η2= .345) where 50% and 60% maximum speed had significantly higher variability than the 100% condition. A significant quadratic fit was found for spatial error scores of mean radial error (p < .0001, η2 = .474) and subject-centroid radial error (p < .0001, η2 = .453). Findings suggest variability and accuracy of multijoint, ballistic skill performance may not follow the general principles of impulse-variability theory or the speed-accuracy trade-off.
ERIC Educational Resources Information Center
Huprich, Julia; Green, Ravonne
2007-01-01
The Council on Public Liberal Arts Colleges (COPLAC) libraries' websites were assessed for Section 508 errors using the online WebXACT tool. Only three of the twenty-one institutions (14%) had zero accessibility errors. Eighty-six percent of the COPLAC institutions had an average of 1.24 errors. Section 508 compliance is required for institutions…
SU-F-J-206: Systematic Evaluation of the Minimum Detectable Shift Using a Range- Finding Camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Platt, M; Platt, M; Lamba, M
2016-06-15
Purpose: The robotic table used for patient alignment in proton therapy is calibrated only at commissioning under well-defined conditions and table shifts may vary over time and with differing conditions. The purpose of this study is to systematically investigate minimum detectable shifts using a time-of-flight (TOF) range-finding camera for table position feedback. Methods: A TOF camera was used to acquire one hundred 424 × 512 range images from a flat surface before and after known shifts. Range was assigned by averaging central regions of the image across multiple images. Depth resolution was determined by evaluating the difference between the actual shift of the surface and the measured shift. Depth resolution was evaluated for number of images averaged, area of sensor over which depth was averaged, distance from camera to surface, central versus peripheral image regions, and angle of surface relative to camera. Results: For one to one thousand images with a shift of one millimeter the range in error was 0.852 ± 0.27 mm to 0.004 ± 0.01 mm (95% C.I.). For varying regions of the camera sensor the range in error was 0.02 ± 0.05 mm to 0.47 ± 0.04 mm. The following results are for 10 image averages. For areas ranging from one pixel to 9 × 9 pixels the range in error was 0.15 ± 0.09 to 0.29 ± 0.15 mm (1σ). For distances ranging from two to four meters the range in error was 0.15 ± 0.09 to 0.28 ± 0.15 mm. For an angle of incidence between thirty degrees and ninety degrees the average range in error was 0.11 ± 0.08 to 0.17 ± 0.09 mm. Conclusion: It is feasible to use a TOF camera for measuring shifts in flat surfaces under clinically relevant conditions with submillimeter precision.
A mathematical approach to beam matching
Manikandan, A; Nandy, M; Gossman, M S; Sureka, C S; Ray, A; Sujatha, N
2013-01-01
Objective: This report provides the mathematical commissioning instructions for the evaluation of beam matching between two different linear accelerators. Methods: Test packages were first obtained, including an open beam profile, a wedge beam profile and a depth–dose curve, each from a 10×10 cm2 beam. From these plots, a spatial error (SE) and a percentage dose error were introduced to form new plots. These three test package curves and the associated error curves were then differentiated in space with respect to dose for a first and second derivative to determine the slope and curvature of each data set. The derivatives, also known as bandwidths, were analysed to determine the level of acceptability for the beam matching test described in this study. Results: The open and wedged beam profiles and the depth–dose curve in the build-up region were determined to match within 1% dose error and 1-mm SE for 71.4% and 70.8% of all points, respectively. For the depth–dose analysis specifically, beam matching was achieved for 96.8% of all points at 1%/1 mm beyond the depth of maximum dose. Conclusion: To quantify the beam matching procedure in any clinic, the user needs merely to generate test packages from their reference linear accelerator. It then follows that if the bandwidths are smooth and continuous across the profile and depth, there is greater likelihood of beam matching. Differentiated spatial and percentage variation analysis is appropriate, ideal and accurate for this commissioning process. Advances in knowledge: We report a mathematically rigorous formulation for the qualitative evaluation of beam matching between linear accelerators. PMID:23995874
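The "bandwidths" described above are simply first and second spatial derivatives of the measured curves; the sketch below forms them with numpy.gradient on an invented crossline profile (the analytic profile shape and field size are assumptions, not the report's test packages).

```python
import numpy as np

def bandwidths(position_mm, dose_pct):
    """First and second spatial derivatives (slope and curvature) of a beam profile."""
    d1 = np.gradient(dose_pct, position_mm)   # %/mm
    d2 = np.gradient(d1, position_mm)         # %/mm^2
    return d1, d2

# Invented crossline profile of a 10x10 cm2 field (position in mm, dose in %)
x = np.linspace(-80, 80, 161)
profile = 100 / (1 + np.exp((np.abs(x) - 50) / 3))  # flat top with sigmoid penumbrae
slope, curvature = bandwidths(x, profile)

# A matched pair of machines should give smooth, nearly coincident bandwidth curves;
# a 1 mm spatial error or 1% dose error shows up as a shift or scaling of these curves.
print(slope[:5], curvature[:5])
```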
Fukuda, David H; Wray, Mandy E; Kendall, Kristina L; Smith-Ryan, Abbie E; Stout, Jeffrey R
2017-07-01
This investigation aimed to compare hydrostatic weighing (HW) with near-infrared interactance (NIR) and skinfold measurements (SKF) in estimating body fat percentage (FAT%) in rowing athletes. FAT% was estimated in 20 elite male rowers (mean ± SD: age = 24·8 ± 2·2 years, height = 191·0 ± 6·8 cm, weight = 86·8 ± 11·3 kg, HW FAT% = 11·50 ± 3·16%) using HW with residual volume, 3-site SKF and NIR on the biceps brachii. Predicted FAT% values for NIR and SKF were validated against the criterion method of HW. Constant error was not significant for NIR (-0·06, P = 0·955) or SKF (-0·20, P = 0·813). Neither NIR (r = 0·045) nor SKF (r = 0·229) demonstrated significant validity coefficients when compared to HW. The standard error of the estimate values for NIR and SKF were both less than 3·5%, while total error was 4·34% and 3·60%, respectively. SKF and NIR provide mean values similar to those from HW, but the lack of apparent relationships between individual values and the borderline unacceptable total error may limit their application in this population. © 2015 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.
Factors Influencing Consonant Acquisition in Brazilian Portuguese-Speaking Children
ERIC Educational Resources Information Center
Ceron, Marizete Ilha; Gubiani, Marileda Barichello; de Oliveira, Camila Rosa; Keske-Soares, Márcia
2017-01-01
Purpose: We sought to provide valid and reliable data on the acquisition of consonant sounds in speakers of Brazilian Portuguese. Method: The sample comprised 733 typically developing monolingual speakers of Brazilian Portuguese (ages 3;0-8;11 [years;months]). The presence of surface speech error patterns, the revised percentage consonants…
NASA Astrophysics Data System (ADS)
Strojnik, Marija; Páez, Gonzalo; Granados, Juan C.
2006-08-01
We determined the temperature distribution within the flame as a function of position, as well as the flame length, by dual-wavelength thermometry at 470 nm and 515 nm. The percentage errors of the temperature and flame-length measurements are 1.9% compared with the predicted thermodynamic results.
NASA Astrophysics Data System (ADS)
Patel, Vinay Kumar; Chauhan, Shivani; Katiyar, Jitendra Kumar
2018-04-01
In this study, a novel natural fiber, Sour-weed (botanically known as Rumex acetosella), has been introduced for the first time as a natural reinforcement to a polyester matrix. The natural fiber based polyester composites were fabricated by the hand lay-up technique using different fiber sizes and different weight percentages. For the Sour-weed/polyester composites, physical properties (density, water absorption and hardness), mechanical properties (tensile and impact properties) and wear properties (sand abrasion and sliding wear) were investigated for sour-weed fiber sizes of 0.6 mm, 5 mm, 10 mm, 15 mm and 20 mm at 3, 6 and 9 weight percent loading in the polyester matrix. Furthermore, based on the average values of the results, the multi-criteria optimization technique TOPSIS was employed to decide the ranking of the composites. From the optimized results, it was observed that the Sour-weed composite reinforced with a fiber size of 15 mm at 6 wt% loading was the best-ranked composite, exhibiting the best overall properties: an average tensile strength of 34.33 MPa, average impact strength of 10 J, average hardness of 12 Hv, average specific sand abrasion wear rate of 0.0607 mm3 N-1 m-1, average specific sliding wear rate of 0.00290 mm3 N-1 m-1, average water absorption of 3.446% and average density of 1.013 among all the fabricated composites.
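TOPSIS, named above, ranks alternatives by their closeness to an ideal solution; the sketch below is a generic implementation with an invented decision matrix, weights and benefit/cost flags, not the composite data or weighting actually used in the study.

```python
import numpy as np

def topsis(decision_matrix, weights, benefit):
    """Rank alternatives with TOPSIS.

    decision_matrix: rows = composites, columns = criteria (average test results)
    weights:         criterion weights summing to 1
    benefit:         True where larger is better (e.g. strength), False where
                     smaller is better (e.g. wear rate, water absorption)
    """
    M = np.asarray(decision_matrix, dtype=float)
    norm = M / np.sqrt((M ** 2).sum(axis=0))            # vector normalization per criterion
    weighted = norm * weights
    ideal = np.where(benefit, weighted.max(axis=0), weighted.min(axis=0))
    anti = np.where(benefit, weighted.min(axis=0), weighted.max(axis=0))
    d_pos = np.sqrt(((weighted - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((weighted - anti) ** 2).sum(axis=1))
    closeness = d_neg / (d_pos + d_neg)
    return np.argsort(-closeness), closeness            # best alternative first

# Invented example: 3 composites x 4 criteria
# [tensile strength, impact strength, sliding wear rate, water absorption]
matrix = [[30.0,  8.0, 0.0040, 4.0],
          [34.3, 10.0, 0.0029, 3.4],
          [28.0,  9.0, 0.0050, 5.0]]
ranking, score = topsis(matrix,
                        weights=np.array([0.3, 0.3, 0.2, 0.2]),
                        benefit=np.array([True, True, False, False]))
print(ranking, score.round(3))
```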
Gadoury, R.A.; Smath, J.A.; Fontaine, R.A.
1985-01-01
The report documents the results of a study of the cost-effectiveness of the U.S. Geological Survey's continuous-record stream-gaging programs in Massachusetts and Rhode Island. Data uses and funding sources were identified for the 91 gaging stations being operated. Some stations in Massachusetts are being operated to provide data for two special-purpose hydrologic studies and are planned to be discontinued at the conclusion of those studies. Cost-effectiveness analyses were performed on 63 continuous-record gaging stations in Massachusetts and 15 stations in Rhode Island, at budgets of $353,000 and $60,500, respectively. Current operations policies result in average standard errors per station of 12.3% in Massachusetts and 9.7% in Rhode Island. The minimum possible budgets to maintain the present numbers of gaging stations in the two States are estimated to be $340,000 and $59,000, with average errors per station of 12.8% and 10.0%, respectively. If the present budget levels were doubled, average standard errors per station would decrease to 8.1% and 4.2%, respectively. Further budget increases would not improve the standard errors significantly. (USGS)
NASA Astrophysics Data System (ADS)
Li, Xiongwei; Wang, Zhe; Lui, Siu-Lung; Fu, Yangting; Li, Zheng; Liu, Jianming; Ni, Weidou
2013-10-01
A bottleneck of the wide commercial application of laser-induced breakdown spectroscopy (LIBS) technology is its relatively high measurement uncertainty. A partial least squares (PLS) based normalization method was proposed to improve pulse-to-pulse measurement precision for LIBS based on our previous spectrum standardization method. The proposed model utilized multi-line spectral information of the measured element and characterized the signal fluctuations due to the variation of plasma characteristic parameters (plasma temperature, electron number density, and total number density) for signal uncertainty reduction. The model was validated by the application of copper concentration prediction in 29 brass alloy samples. The results demonstrated an improvement on both measurement precision and accuracy over the generally applied normalization as well as our previously proposed simplified spectrum standardization method. The average relative standard deviation (RSD), average of the standard error (error bar), the coefficient of determination (R2), the root-mean-square error of prediction (RMSEP), and average value of the maximum relative error (MRE) were 1.80%, 0.23%, 0.992, 1.30%, and 5.23%, respectively, while those for the generally applied spectral area normalization were 3.72%, 0.71%, 0.973, 1.98%, and 14.92%, respectively.
Evaluation and optimization of sampling errors for the Monte Carlo Independent Column Approximation
NASA Astrophysics Data System (ADS)
Räisänen, Petri; Barker, W. Howard
2004-07-01
The Monte Carlo Independent Column Approximation (McICA) method for computing domain-average broadband radiative fluxes is unbiased with respect to the full ICA, but its flux estimates contain conditional random noise. McICA's sampling errors are evaluated here using a global climate model (GCM) dataset and a correlated-k distribution (CKD) radiation scheme. Two approaches to reduce McICA's sampling variance are discussed. The first is to simply restrict all of McICA's samples to cloudy regions. This avoids wasting precious few samples on essentially homogeneous clear skies. Clear-sky fluxes need to be computed separately for this approach, but this is usually done in GCMs for diagnostic purposes anyway. Second, accuracy can be improved by repeated sampling, and averaging, of those CKD terms with large cloud radiative effects. Although this naturally increases computational costs over the standard CKD model, random errors for fluxes and heating rates are reduced by typically 50% to 60%, for the present radiation code, when the total number of samples is increased by 50%. When both variance reduction techniques are applied simultaneously, globally averaged flux and heating rate random errors are reduced by a factor of approximately 3.
NASA Technical Reports Server (NTRS)
Rahmat-Samii, Y.
1983-01-01
Based on the works of Ruze (1966) and Vu (1969), a novel mathematical model has been developed to determine efficiently the average power pattern degradations caused by random surface errors. In this model, both nonuniform root mean square (rms) surface errors and nonuniform illumination functions are employed. In addition, the model incorporates the dependence on F/D in the construction of the solution. The mathematical foundation of the model rests on the assumption that in each prescribed annular region of the antenna, the geometrical rms surface value is known. It is shown that closed-form expressions can then be derived, which result in a very efficient computational method for the average power pattern. Detailed parametric studies are performed with these expressions to determine the effects of different random errors and illumination tapers on parameters such as gain loss and sidelobe levels. The results clearly demonstrate that as sidelobe levels decrease, their dependence on the surface rms/wavelength becomes much stronger and, for a specified tolerance level, a considerably smaller rms/wavelength is required to maintain the low sidelobes within the required bounds.
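As a reminder of the classical starting point cited above, the uniform-error Ruze relation gives the on-axis gain loss as a function of the rms surface error over wavelength, G/G0 = exp(-(4πε/λ)²); the model in the abstract generalizes this to nonuniform rms errors and illumination, which is not reproduced in this minimal sketch.

```python
import numpy as np

def ruze_gain_loss_db(rms_over_wavelength):
    """Gain loss (dB) from the classical Ruze formula G/G0 = exp(-(4*pi*eps/lambda)^2)."""
    efficiency = np.exp(-(4 * np.pi * np.asarray(rms_over_wavelength)) ** 2)
    return -10 * np.log10(efficiency)

for eps in (1 / 100, 1 / 50, 1 / 30, 1 / 20):
    print(f"rms/wavelength = {eps:.3f} -> gain loss = {ruze_gain_loss_db(eps):.2f} dB")
```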
Cost-effectiveness of the streamflow-gaging program in Wyoming
Druse, S.A.; Wahl, K.L.
1988-01-01
This report documents the results of a cost-effectiveness study of the streamflow-gaging program in Wyoming. Regression analysis or hydrologic flow-routing techniques were considered for 24 combinations of stations from a 139-station network operated in 1984 to investigate the suitability of these techniques for simulating streamflow records. Only one station was determined to have sufficient accuracy in the regression analysis to consider discontinuance of the gage. The evaluation of the gaging-station network, which included the use of associated uncertainty in streamflow records, is limited to the nonwinter operation of the 47 stations operated by the Riverton Field Office of the U.S. Geological Survey. The current (1987) travel routes and measurement frequencies require a budget of $264,000 and result in an average standard error in streamflow records of 13.2%. Changes in routes and station visits, using the same budget, could optimally reduce the standard error by 1.6%. Budgets evaluated ranged from $235,000 to $400,000. A $235,000 budget increased the optimal average standard error per station from 11.6% to 15.5%, and a $400,000 budget could reduce it to 6.6%. For all budgets considered, lost record accounts for about 40% of the average standard error. (USGS)
Validation of the Kp Geomagnetic Index Forecast at CCMC
NASA Astrophysics Data System (ADS)
Frechette, B. P.; Mays, M. L.
2017-12-01
The Community Coordinated Modeling Center (CCMC) Space Weather Research Center (SWRC) sub-team provides space weather services to NASA robotic mission operators and science campaigns and prototypes new models, forecasting techniques, and procedures. The Kp index is a measure of geomagnetic disturbances in the magnetosphere, such as geomagnetic storms and substorms. In this study, we performed validation of the Newell et al. (2007) Kp prediction equation from December 2010 to July 2017. The purpose of this research is to understand the Kp forecast performance, because it is critical for NASA missions to have confidence in the space weather forecast. This was done by computing the Kp error for each forecast (average, minimum, maximum) and each synoptic period. Then, to quantify forecast performance, we computed the mean error, mean absolute error, root mean square error, multiplicative bias and correlation coefficient. A contingency table was made for each forecast and skill scores were computed. The results are compared to the perfect score and to the reference forecast skill score. In conclusion, the skill score and error results show that the minimum of the predicted Kp over each synoptic period from the Newell et al. (2007) Kp prediction equation performed better than the maximum or average of the prediction. However, persistence (the reference forecast) outperformed all of the Kp forecasts (minimum, maximum, and average). Overall, the Newell Kp prediction still predicts Kp to within about 1 unit, even though persistence beats it.
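The continuous scores listed above are standard and easy to compute once forecast and observed Kp values are paired per synoptic period; the categorical scores shown below (probability of detection, false alarm ratio, equitable threat score) are common choices but only assumptions about which skill scores the study used, and all input values here are invented.

```python
import numpy as np

def continuous_scores(forecast, observed):
    f, o = np.asarray(forecast, float), np.asarray(observed, float)
    err = f - o
    return {
        "mean_error": err.mean(),
        "mean_abs_error": np.abs(err).mean(),
        "rmse": np.sqrt((err ** 2).mean()),
        "multiplicative_bias": f.mean() / o.mean(),
        "correlation": np.corrcoef(f, o)[0, 1],
    }

def skill_from_contingency(hits, misses, false_alarms, correct_negatives):
    """A few common categorical scores from a 2x2 contingency table."""
    pod = hits / (hits + misses)                  # probability of detection
    far = false_alarms / (hits + false_alarms)    # false alarm ratio
    n = hits + misses + false_alarms + correct_negatives
    hits_random = (hits + misses) * (hits + false_alarms) / n
    ets_denom = hits + misses + false_alarms - hits_random
    ets = (hits - hits_random) / ets_denom        # equitable threat score
    return pod, far, ets

# Invented Kp values per synoptic period, to show the call pattern only
predicted_kp = [2, 3, 5, 4, 1, 6, 3]
observed_kp  = [2, 4, 5, 3, 2, 5, 3]
print(continuous_scores(predicted_kp, observed_kp))
print(skill_from_contingency(hits=20, misses=5, false_alarms=8, correct_negatives=60))
```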
A Case Study of Women Presidents of Texas Private Colleges and Universities and Their Followership
ERIC Educational Resources Information Center
Gregory, Shelley E.
2016-01-01
The purpose of this current case study was to document how women presidents of private Texas colleges and universities describe their leadership. The percentage of Texas women presidents (24.3%) closely mirrors the U.S. national average (26.4%) of women presidents of colleges and universities, yet the percentage of Texas women presidents of…
ERIC Educational Resources Information Center
Morgan, Ali Zaremba; Keiley, Margaret K.; Ryan, Aubrey E.; Radomski, Juliana Groves; Gropper, Sareen S.; Connell, Lenda Jo; Simmons, Karla P.; Ulrich, Pamela V.
2012-01-01
Obesity and high body fat percentages are a major public health issue. The percentage of obese and overweight Americans has increased over the past 30 years. On average, overweight individuals with higher percent body fat than normal weight individuals are at increased risk for numerous negative outcomes both physically and mentally. A prime time…
NASA Technical Reports Server (NTRS)
Chang, Alfred T. C.; Chiu, Long S.; Wilheit, Thomas T.
1993-01-01
Global averages and random errors associated with the monthly oceanic rain rates derived from the Special Sensor Microwave/Imager (SSM/I) data using the technique developed by Wilheit et al. (1991) are computed. Accounting for the beam-filling bias, a global annual average rain rate of 1.26 m is computed. The error estimation scheme is based on the existence of independent (morning and afternoon) estimates of the monthly mean. Calculations show overall random errors of about 50-60 percent for each 5 deg x 5 deg box. The results are insensitive to different sampling strategies (odd and even days of the month). Comparison of the SSM/I estimates with raingage data collected at the Pacific atoll stations showed a low bias of about 8 percent, a correlation of 0.7, and an rms difference of 55 percent.
NASA Astrophysics Data System (ADS)
Woodford, Curtis; Yartsev, Slav; Van Dyk, Jake
2007-08-01
This study aims to investigate the settings that provide optimum registration accuracy when registering megavoltage CT (MVCT) studies acquired on tomotherapy with planning kilovoltage CT (kVCT) studies of patients with lung cancer. For each experiment, the systematic difference between the actual and planned positions of the thorax phantom was determined by setting the phantom up at the planning isocenter, generating and registering an MVCT study. The phantom was translated by 5 or 10 mm, MVCT scanned, and registration was performed again. A root-mean-square equation that calculated the residual error of the registration based on the known shift and systematic difference was used to assess the accuracy of the registration process. The phantom study results for 18 combinations of different MVCT/kVCT registration options are presented and compared to clinical registration data from 17 lung cancer patients. MVCT studies acquired with coarse (6 mm), normal (4 mm) and fine (2 mm) slice spacings could all be registered with similar residual errors. No specific combination of resolution and fusion selection technique resulted in a lower residual error. A scan length of 6 cm with any slice spacing registered with the full image fusion selection technique and fine resolution will result in a low residual error most of the time. On average, large corrections made manually by clinicians to the automatic registration values are infrequent. Small manual corrections within the residual error averages of the registration process occur, but their impact on the average patient position is small. Registrations using the full image fusion selection technique and fine resolution of 6 cm MVCT scans with coarse slices have a low residual error, and this strategy can be clinically used for lung cancer patients treated on tomotherapy. Automatic registration values are accurate on average, and a quick verification on a sagittal MVCT slice should be enough to detect registration outliers.
Zonal average earth radiation budget measurements from satellites for climate studies
NASA Technical Reports Server (NTRS)
Ellis, J. S.; Haar, T. H. V.
1976-01-01
Data from 29 months of satellite radiation budget measurements, taken intermittently over the period 1964 through 1971, are composited into mean month, season and annual zonally averaged meridional profiles. Individual months, which comprise the 29 month set, were selected as representing the best available total flux data for compositing into large scale statistics for climate studies. A discussion of spatial resolution of the measurements along with an error analysis, including both the uncertainty and standard error of the mean, are presented.
Effect of satellite formations and imaging modes on global albedo estimation
NASA Astrophysics Data System (ADS)
Nag, Sreeja; Gatebe, Charles K.; Miller, David W.; de Weck, Olivier L.
2016-05-01
We confirm the applicability of using small satellite formation flight for multi-angular earth observation to retrieve global, narrow band, narrow field-of-view albedo. The value of formation flight is assessed using a coupled systems engineering and science evaluation model, driven by Model Based Systems Engineering and Observing System Simulation Experiments. Albedo errors are calculated against bi-directional reflectance data obtained from NASA airborne campaigns made by the Cloud Absorption Radiometer for the seven major surface types, binned using MODIS' land cover map: water, forest, cropland, grassland, snow, desert and cities. A full tradespace of architectures with three to eight satellites, maintainable orbits and imaging modes (collective payload pointing strategies) is assessed. For an arbitrary 4-satellite formation, changing the reference, nadir-pointing satellite dynamically reduces the average albedo error to 0.003, from 0.006 found in the static reference case. Tracking pre-selected waypoints with all the satellites reduces the average error further to 0.001, allows better polar imaging and permits continued operations even with a broken formation. An albedo error of 0.001 translates to 1.36 W/m2, or 0.4%, in Earth's outgoing radiation error. Estimation errors are found to be independent of the satellites' altitude and inclination if the nadir-looking satellite is changed dynamically. The formation satellites are restricted to differ only in right ascension of planes and mean anomalies within slotted bounds. Three satellites in some specific formations show average albedo errors of less than 2% with respect to airborne and ground data, and seven satellites in any slotted formation outperform the monolithic error of 3.6%. In fact, the maximum possible albedo error, purely based on angular sampling, of 12% for monoliths is outperformed by a five-satellite formation in any slotted arrangement, and an eight-satellite formation can bring that error down fourfold to 3%. More than 70% ground spot overlap between the satellites is possible with 0.5° of pointing accuracy, 2 km of GPS accuracy and commands uplinked once a day. The formations can be maintained at less than 1 m/s of monthly ΔV per satellite.
26 CFR 1.401(l)-1 - Permitted disparity in employer-provided contributions or benefits.
Code of Federal Regulations, 2014 CFR
2014-04-01
... with respect to an employee's average annual compensation at or below the integration level (expressed... or below the integration level (expressed as a percentage of such plan year compensation). (5... plan with respect to an employee's average annual compensation above the integration level (expressed...
NASA Astrophysics Data System (ADS)
Wilson, M. D.; Durand, M.; Jung, H. C.; Alsdorf, D.
2015-04-01
The Surface Water and Ocean Topography (SWOT) mission, scheduled for launch in 2020, will provide a step-change improvement in the measurement of terrestrial surface-water storage and dynamics. In particular, it will provide the first routine two-dimensional measurements of water-surface elevations. In this paper, we aimed to (i) characterise and illustrate in two dimensions the errors which may be found in SWOT swath measurements of terrestrial surface water, (ii) simulate the spatio-temporal sampling scheme of SWOT for the Amazon, and (iii) assess the impact of each of these on estimates of water-surface slope and river discharge which may be obtained from SWOT imagery. We based our analysis on a virtual mission for a ~260 km reach of the central Amazon (Solimões) River, using a hydraulic model to provide water-surface elevations according to SWOT spatio-temporal sampling, to which errors were added based on a two-dimensional height error spectrum derived from the SWOT design requirements. We thereby obtained water-surface elevation measurements for the Amazon main stem as may be observed by SWOT. Using these measurements, we derived estimates of river slope and discharge and compared them to those obtained directly from the hydraulic model. We found that cross-channel and along-reach averaging of SWOT measurements using reach lengths greater than 4 km for the Solimões and 7.5 km for the Purus reduced the effect of systematic height errors, enabling discharge to be reproduced accurately from the water height, assuming known bathymetry and friction. Using cross-sectional averaging and 20 km reach lengths, results show Nash-Sutcliffe model efficiency values of 0.99 for the Solimões and 0.88 for the Purus, with 2.6% and 19.1% average overall error in discharge, respectively. We extend the results to other rivers worldwide and infer that SWOT-derived discharge estimates may be more accurate for rivers with larger channel widths (permitting a greater level of cross-sectional averaging and the use of shorter reach lengths) and higher water-surface slopes (reducing the proportional impact of slope errors on discharge calculation).
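The Nash-Sutcliffe efficiency and percentage discharge error quoted above can be evaluated directly from paired simulated and reference series; the sketch below uses the standard NSE definition and a simple mean absolute percentage error (the paper's "average overall error" may be defined differently), with invented discharge values.

```python
import numpy as np

def nash_sutcliffe(simulated, observed):
    """Nash-Sutcliffe model efficiency between simulated and observed discharge."""
    s, o = np.asarray(simulated, float), np.asarray(observed, float)
    return 1 - np.sum((s - o) ** 2) / np.sum((o - o.mean()) ** 2)

def mean_percentage_error(simulated, observed):
    s, o = np.asarray(simulated, float), np.asarray(observed, float)
    return 100 * np.mean(np.abs(s - o) / o)

# Invented daily discharge series (m3/s), only to show the call pattern
q_model = [95000, 102000, 110000, 118000, 121000]   # hydraulic-model reference
q_swot  = [94000, 104500, 108000, 120500, 119000]   # SWOT-derived estimate
print(nash_sutcliffe(q_swot, q_model), mean_percentage_error(q_swot, q_model))
```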
Percentage entrainment of constituent loads in urban runoff, south Florida
Miller, R.A.
1985-01-01
Runoff quantity and quality data from four urban basins in south Florida were analyzed to determine the entrainment of total nitrogen, total phosphorus, total carbon, chemical oxygen demand, suspended solids, and total lead within the stormwater runoff. The land uses of the homogeneously developed basins are residential (single family), highway, commercial, and apartment (multifamily). A computational procedure was used to calculate, for all storms that had water-quality data, the percentage of constituent load entrained in specified depths of runoff. The plot of percentage of constituent load entrained as a function of runoff is termed the percentage-entrainment curve. Percentage-entrainment curves were developed for three different source areas of basin runoff: (1) the hydraulically effective impervious area, (2) the contributing area, and (3) the drainage area. With basin runoff expressed in inches over the contributing area, the depth of runoff required to remove 90 percent of the constituent load ranged from about 0.4 inch to about 1.4 inches; and to remove 80 percent, from about 0.3 to 0.9 inch. Analysis of variance, using depth of runoff from the contributing area as the response variable, showed that the factor 'basin' is statistically significant, but that the factor 'constituent' is not statistically significant in the forming of the percentage-entrainment curve. Evidently the sewerage design, whether elongated or concise in plan, dictates the shape of the percentage-entrainment curve. The percentage-entrainment curves for all constituents were averaged for each basin and plotted against basin runoff for the three source areas of runoff: the hydraulically effective impervious area, the contributing area, and the drainage area. The relative positions of the three curves are directly related to the relative sizes of the three source areas considered. One general percentage-entrainment curve based on runoff from the contributing area was formed by averaging across both constituents and basins. Its coordinates are: 0.25 inch of runoff for 50-percent entrainment, 0.65 inch of runoff for 80-percent entrainment, and 0.95 inch of runoff for 90-percent entrainment. The general percentage-entrainment curve based on runoff from the hydraulically effective impervious area has runoff values of 0.35, 0.95, and 1.6 inches, respectively.
Error in telemetry studies: Effects of animal movement on triangulation
Schmutz, Joel A.; White, Gary C.
1990-01-01
We used Monte Carlo simulations to investigate the effects of animal movement on error of estimated animal locations derived from radio-telemetry triangulation of sequentially obtained bearings. Simulated movements of 0-534 m resulted in up to 10-fold increases in average location error but <10% decreases in location precision when observer-to-animal distances were <1,000 m. Location error and precision were minimally affected by censorship of poor locations with Chi-square goodness-of-fit tests. Location error caused by animal movement can only be eliminated by taking simultaneous bearings.
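A minimal Monte Carlo sketch in the spirit of the simulations described above: two fixed observers take sequential bearings with angular noise while the animal moves a fixed distance in a random direction between bearings, and the triangulated location is compared with the position at the first bearing. The observer geometry, 2° bearing standard deviation and movement distances are assumptions for illustration, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def intersect(p1, b1, p2, b2):
    """Intersection of two bearing rays (angles in radians, math convention)."""
    d1 = np.array([np.cos(b1), np.sin(b1)])
    d2 = np.array([np.cos(b2), np.sin(b2)])
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    t = ((p2 - p1)[0] * d2[1] - (p2 - p1)[1] * d2[0]) / denom
    return p1 + t * d1

def average_error(move_dist, n=5000, bearing_sd_deg=2.0):
    """Average location error when the animal moves between sequential bearings."""
    obs1, obs2 = np.array([0.0, 0.0]), np.array([800.0, 0.0])
    errors = []
    for _ in range(n):
        animal = np.array([400.0, 700.0])                  # position at first bearing
        theta = rng.uniform(0, 2 * np.pi)                  # random movement direction
        animal2 = animal + move_dist * np.array([np.cos(theta), np.sin(theta)])
        b1 = np.arctan2(*(animal - obs1)[::-1]) + np.radians(rng.normal(0, bearing_sd_deg))
        b2 = np.arctan2(*(animal2 - obs2)[::-1]) + np.radians(rng.normal(0, bearing_sd_deg))
        est = intersect(obs1, b1, obs2, b2)
        errors.append(np.linalg.norm(est - animal))
    return np.mean(errors)

for move in (0, 100, 300, 500):
    print(f"movement {move:>3} m -> average location error {average_error(move):.0f} m")
```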
The Affordable Care Act versus Medicare for All.
Seidman, Laurence
2015-08-01
Many problems facing the Affordable Care Act would disappear if the nation were instead implementing Medicare for All - the extension of Medicare to every age group. Every American would be automatically covered for life. Premiums would be replaced with a set of Medicare taxes. There would be no patient cost sharing. Individuals would have free choice of doctors. Medicare's single-payer bargaining power would slow price increases and reduce medical cost as a percentage of gross domestic product (GDP). Taxes as a percentage of GDP would rise from below average to average for economically advanced nations. Medicare for All would be phased in by age. Copyright © 2015 by Duke University Press.
Ortiz-Hernández, Luis; Vega López, A Valeria; Ramos-Ibáñez, Norma; Cázares Lara, L Joana; Medina Gómez, R Joab; Pérez-Salgado, Diana
To develop and validate equations to estimate the percentage of body fat of children and adolescents from Mexico using anthropometric measurements. A cross-sectional study was carried out with 601 children and adolescents from Mexico aged 5-19 years. The participants were randomly divided into the following two groups: the development sample (n=398) and the validation sample (n=203). The validity of previously published equations (e.g., Slaughter) was also assessed. The percentage of body fat was estimated by dual-energy X-ray absorptiometry. The anthropometric measurements included height, sitting height, weight, waist and arm circumferences, skinfolds (triceps, biceps, subscapular, supra-iliac, and calf), and elbow and bitrochanteric breadth. Linear regression models were estimated with the percentage of body fat as the dependent variable and the anthropometric measurements as the independent variables. Equations were created based on combinations of six to nine anthropometric variables and had coefficients of determination (r2) equal to or higher than 92.4% for boys and 85.8% for girls. In the validation sample, the developed equations had high r2 values (≥85.6% in boys and ≥78.1% in girls) in all age groups, low standard errors (SE≤3.05% in boys and ≤3.52% in girls), and the intercepts were not different from the origin (p>0.050). Using the previously published equations, the coefficients of determination were lower, and/or the intercepts were different from the origin. The equations developed in this study can be used to assess the percentage of body fat of Mexican schoolchildren and adolescents, as they demonstrate greater validity and lower error compared with previously published equations. Copyright © 2017 Sociedade Brasileira de Pediatria. Published by Elsevier Editora Ltda. All rights reserved.
A comparison of hepatic segmental anatomy as revealed by cross-sections and MPR CT imaging.
Liu, Xue-Jing; Zhang, Jian-Fei; Sui, Hong-Jin; Yu, Sheng-Bo; Gong, Jin; Liu, Jie; Wu, Le-Bin; Liu, Cheng; Bai, Jian; Shi, Bing-Yi
2013-05-01
To compare the areas of human liver horizontal sections with computed tomography (CT) images and to evaluate whether the subsegments determined by CT are consistent with the actual anatomy. Six human cadaver livers were made into horizontal slices, and multislice spiral CT three-dimensional (3D) reconstruction was used during the infusion process. Each liver segment was displayed using a different color, and 3D images of the portal and hepatic veins were reconstructed. Each segmental area was measured on the CT-reconstructed images, which were compared with the actual area on the sections of the same liver. The measurements were performed at four key levels, namely: (1) the three hepatic veins, (2) the left and (3) the right branch of the portal vein (PV), and (4) caudal to the bifurcation of the PV. By dividing the sum of these areas by the total area of the liver, the authors obtained the percentage of the incorrectly determined subsegmental areas. In addition to these percentage values, the maximum distances of the radiologically determined intersegmental boundaries from the true anatomic boundaries were measured. At the four key levels, an average of 28.64 ± 10.26% of the hepatic area on the CT images was attributed to an incorrect segment. The mean maximum error between the artificial segments on the images and the actual anatomical segments was 3.81 ± 1.37 cm. The correlation between the radiological segmenting method and the actual anatomy was poor. Dividing the hepatic segments strictly according to the branching points of the PV could be more informative during segmental liver resection. Copyright © 2012 Wiley Periodicals, Inc.
Digitization of Electrocardiogram From Telemetry Prior to In-hospital Cardiac Arrest: A Pilot Study.
Attin, Mina; Wang, Lu; Soroushmehr, S M Reza; Lin, Chii-Dean; Lemus, Hector; Spadafore, Maxwell; Najarian, Kayvan
2016-03-01
Analyzing telemetry electrocardiogram (ECG) data over an extended period is often time-consuming because digital records are not widely available at hospitals. Investigating trends and patterns in the ECG data could lead to establishing predictors that would shorten response time to in-hospital cardiac arrest (I-HCA). This study was conducted to validate a novel method of digitizing paper ECG tracings from telemetry systems in order to facilitate the use of heart rate as a diagnostic feature prior to I-HCA. This multicenter study used telemetry to investigate full-disclosure ECG papers of 44 cardiovascular patients obtained within 1 hr of I-HCA with initial rhythms of pulseless electrical activity and asystole. Digital ECGs were available for seven of these patients. An algorithm to digitize the full-disclosure ECG papers was developed using the shortest path method. The heart rate was measured manually (averaging R-R intervals) for ECG papers and automatically for digitized and digital ECGs. Significant correlations were found between manual and automated measurements of digitized ECGs (p < .001) and between digitized and digital ECGs (p < .001). Bland-Altman methods showed bias = .001 s, SD = .0276 s, lower and upper 95% limits of agreement for digitized and digital ECGs = .055 and -.053 s, and percentage error = 0.22%. Root mean square (rms), percentage rms difference, and signal to noise ratio values were in acceptable ranges. The digitization method was validated. Digitized ECG provides an efficient and accurate way of measuring heart rate over an extended period of time. © The Author(s) 2015.
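The Bland-Altman agreement statistics reported above are straightforward to compute from paired R-R interval (or heart rate) measurements; the sketch below uses the conventional bias and 1.96-SD limits of agreement, and a common percentage-error definition that may differ from the one used in the study. All values are invented.

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bias, SD of differences, 95% limits of agreement and a percentage error."""
    a, b = np.asarray(method_a, float), np.asarray(method_b, float)
    diff = a - b
    bias, sd = diff.mean(), diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    pct_error = 100 * 1.96 * sd / np.mean((a + b) / 2)   # one common convention
    return bias, sd, loa, pct_error

# Invented R-R intervals (s) from the digitized paper ECG vs the digital reference
rr_digitized = [0.80, 0.82, 0.78, 0.85, 0.79, 0.81]
rr_digital   = [0.80, 0.81, 0.79, 0.84, 0.80, 0.81]
print(bland_altman(rr_digitized, rr_digital))
```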
DNA-DNA hybridization values and their relationship to whole-genome sequence similarities.
Goris, Johan; Konstantinidis, Konstantinos T; Klappenbach, Joel A; Coenye, Tom; Vandamme, Peter; Tiedje, James M
2007-01-01
DNA-DNA hybridization (DDH) values have been used by bacterial taxonomists since the 1960s to determine relatedness between strains and are still the most important criterion in the delineation of bacterial species. Since the extent of hybridization between a pair of strains is ultimately governed by their respective genomic sequences, we examined the quantitative relationship between DDH values and genome sequence-derived parameters, such as the average nucleotide identity (ANI) of common genes and the percentage of conserved DNA. A total of 124 DDH values were determined for 28 strains for which genome sequences were available. The strains belong to six important and diverse groups of bacteria for which the intra-group 16S rRNA gene sequence identity was greater than 94 %. The results revealed a close relationship between DDH values and ANI and between DNA-DNA hybridization and the percentage of conserved DNA for each pair of strains. The recommended cut-off point of 70 % DDH for species delineation corresponded to 95 % ANI and 69 % conserved DNA. When the analysis was restricted to the protein-coding portion of the genome, 70 % DDH corresponded to 85 % conserved genes for a pair of strains. These results reveal extensive gene diversity within the current concept of "species". Examination of reciprocal values indicated that the level of experimental error associated with the DDH method is too high to reveal the subtle differences in genome size among the strains sampled. It is concluded that ANI can accurately replace DDH values for strains for which genome sequences are available.
Assessment of Satellite Surface Radiation Products in Highland Regions with Tibet Instrumental Data
NASA Technical Reports Server (NTRS)
Yang, Kun; Koike, Toshio; Stackhouse, Paul; Mikovitz, Colleen
2006-01-01
This study presents results of comparisons between instrumental radiation data in the elevated Tibetan Plateau and two global satellite products: the Global Energy and Water Cycle Experiment - Surface Radiation Budget (GEWEX-SRB) and the International Satellite Cloud Climatology Project - Flux Data (ISCCP-FD). In general, shortwave radiation (SW) is estimated better by ISCCP-FD while longwave radiation (LW) is estimated better by GEWEX-SRB, but all the radiation components in both products are under-estimated. Severe and systematic errors were found in monthly-mean SRB SW (on plateau-average, -48 W/sq m for downward SW and -18 W/sq m for upward SW) and FD LW (on plateau-average, -37 W/sq m for downward LW and -62 W/sq m for upward LW). Errors in the monthly-mean diurnal variations are even larger than the monthly mean errors. Though the LW errors can be reduced by about 10 W/sq m after a correction for the altitude difference between the sites and the SRB and FD grids, these errors are still higher than those for other regions. The large errors in SRB SW were mainly due to a processing mistake for the elevation effect, while the errors in SRB LW were mainly due to significant errors in the input data. We suggest reprocessing satellite surface radiation budget data, at least for highland areas like Tibet.
NASA Astrophysics Data System (ADS)
Rahmat, R. F.; Nasution, F. R.; Seniman; Syahputra, M. F.; Sitompul, O. S.
2018-02-01
Weather is the condition of the air in a certain region over a relatively short period of time, measured with various parameters such as temperature, air pressure, wind velocity, humidity, and other atmospheric phenomena. Extreme weather due to global warming can lead to drought, floods, hurricanes and other weather events, which directly affect social and economic activities. Hence, a forecasting technique is needed to predict the weather with a distinctive output, particularly a GIS-based mapping process that provides the current weather status at specific coordinates of each region, with the capability to forecast seven days ahead. The data used in this research are retrieved in real time from the openweathermap server and BMKG. In order to obtain a low error rate and high forecasting accuracy, the authors use the Bayesian Model Averaging (BMA) method. The results show that the BMA method has good accuracy. The forecasting error is calculated as the mean square error (MSE). The error for minimum temperature is 0.28 and for maximum temperature is 0.15. Meanwhile, the error for minimum humidity is 0.38 and for maximum humidity is 0.04. Finally, the forecasting error for wind speed is 0.076. The lower the forecasting error, the better the accuracy.
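As a highly simplified illustration of combining forecast sources in a BMA-like way (not the authors' implementation), the sketch below weights two member forecasts by their Gaussian likelihood on a training window and scores the combination with MSE. Full BMA fits the weights and member variances jointly (typically with EM), which is not reproduced here, and all forecast values below are invented.

```python
import numpy as np
from scipy.stats import norm

def bma_weights(member_forecasts, observations, sigma=1.0):
    """Crude BMA-style weights, proportional to each member's Gaussian likelihood
    on a training window (full BMA fits weights and variances jointly)."""
    F = np.asarray(member_forecasts, float)      # shape (n_members, n_times)
    o = np.asarray(observations, float)
    loglik = norm.logpdf(o, loc=F, scale=sigma).sum(axis=1)
    w = np.exp(loglik - loglik.max())
    return w / w.sum()

def mse(forecast, observed):
    forecast, observed = np.asarray(forecast, float), np.asarray(observed, float)
    return np.mean((forecast - observed) ** 2)

# Invented minimum-temperature forecasts (deg C) from two sources and the outcomes
members = [[22.1, 23.0, 21.5, 22.8],    # e.g. an openweathermap-style feed (hypothetical)
           [21.8, 22.6, 21.9, 23.1]]    # e.g. a BMKG-style feed (hypothetical)
obs = [22.0, 22.8, 21.7, 23.0]

w = bma_weights(members, obs)
combined = w @ np.asarray(members)      # weighted-average forecast
print(w.round(2), mse(combined, obs))
```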
Schwantes-An, Tae-Hwi; Sung, Heejong; Sabourin, Jeremy A; Justice, Cristina M; Sorant, Alexa J M; Wilson, Alexander F
2016-01-01
In this study, the effects of (a) the minor allele frequency of the single nucleotide variant (SNV), (b) the degree of departure from normality of the trait, and (c) the position of the SNVs on type I error rates were investigated in the Genetic Analysis Workshop (GAW) 19 whole exome sequence data. To test the distribution of the type I error rate, 5 simulated traits were considered: standard normal and gamma distributed traits; 2 transformed versions of the gamma trait (log10 and rank-based inverse normal transformations); and trait Q1 provided by GAW 19. Each trait was tested with 313,340 SNVs. Tests of association were performed with simple linear regression and average type I error rates were determined for minor allele frequency classes. Rare SNVs (minor allele frequency < 0.05) showed inflated type I error rates for non-normally distributed traits that increased as the minor allele frequency decreased. The inflation of average type I error rates increased as the significance threshold decreased. Normally distributed traits did not show inflated type I error rates with respect to the minor allele frequency for rare SNVs. There was no consistent effect of transformation on the uniformity of the distribution of the location of SNVs with a type I error.
Economic measurement of medical errors using a hospital claims database.
David, Guy; Gunnarsson, Candace L; Waters, Heidi C; Horblyuk, Ruslan; Kaplan, Harold S
2013-01-01
The primary objective of this study was to estimate the occurrence and costs of medical errors from the hospital perspective. Methods from a recent actuarial study of medical errors were used to identify medical injuries. A visit qualified as an injury visit if at least 1 of 97 injury groupings occurred at that visit, and the percentage of injuries caused by medical error was estimated. Visits with more than four injuries were removed from the population to avoid overestimation of cost. Population estimates were extrapolated from the Premier hospital database to all US acute care hospitals. There were an estimated 161,655 medical errors in 2008 and 170,201 medical errors in 2009. Extrapolated to the entire US population, there were more than 4 million unique injury visits containing more than 1 million unique medical errors each year. This analysis estimated that the total annual cost of measurable medical errors in the United States was $985 million in 2008 and just over $1 billion in 2009. The median cost per error to hospitals was $892 for 2008 and rose to $939 in 2009. Nearly one third of all medical injuries were due to error in each year. Medical errors directly impact patient outcomes and hospitals' profitability, especially since 2008 when Medicare stopped reimbursing hospitals for care related to certain preventable medical errors. Hospitals must rigorously analyze causes of medical errors and implement comprehensive preventative programs to reduce their occurrence as the financial burden of medical errors shifts to hospitals. Copyright © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Shields, Richard K.; Dudley-Javoroski, Shauna; Boaldin, Kathryn M.; Corey, Trent A.; Fog, Daniel B.; Ruen, Jacquelyn M.
2012-01-01
Objectives: To determine (1) the error attributable to external tibia-length measurements by using peripheral quantitative computed tomography (pQCT) and (2) the effect these errors have on scan location and tibia trabecular bone mineral density (BMD) after spinal cord injury (SCI). Design: Blinded comparison and criterion standard in matched cohorts. Setting: Primary care university hospital. Participants: Eight able-bodied subjects underwent tibia length measurement. A separate cohort of 7 men with SCI and 7 able-bodied age-matched male controls underwent pQCT analysis. Interventions: Not applicable. Main Outcome Measures: The projected worst-case tibia-length measurement error translated into a pQCT slice placement error of ±3 mm. We collected pQCT slices at the distal 4% tibia site, 3 mm proximal and 3 mm distal to that site, and then quantified the BMD error attributable to slice placement. Results: The absolute BMD error was greater for able-bodied than for SCI subjects (5.87 mg/cm3 vs 4.5 mg/cm3). However, the percentage error in BMD was larger for SCI than for able-bodied subjects (4.56% vs 2.23%). Conclusions: During cross-sectional studies of various populations, BMD differences up to 5% may be attributable to variation in limb-length measurement error. PMID:17023249
Quantitative evaluation of patient-specific quality assurance using online dosimetry system
NASA Astrophysics Data System (ADS)
Jung, Jae-Yong; Shin, Young-Ju; Sohn, Seung-Chang; Min, Jung-Whan; Kim, Yon-Lae; Kim, Dong-Su; Choe, Bo-Young; Suh, Tae-Suk
2018-01-01
In this study, we investigated the clinical performance of an online dosimetry system (Mobius FX system, MFX) by 1) dosimetric plan verification using gamma passing rates and dose volume metrics and 2) evaluation of error-detection capability using deliberately introduced machine errors. Eighteen volumetric modulated arc therapy (VMAT) plans were studied. To evaluate the clinical performance of the MFX, we used gamma analysis and dose volume histogram (DVH) analysis. In addition, to evaluate the error-detection capability, we used gamma analysis and DVH analysis utilizing three types of deliberately introduced errors (Type 1: gantry angle-independent multi-leaf collimator (MLC) error, Type 2: gantry angle-dependent MLC error, and Type 3: gantry angle error). In the dosimetric verification comparison of the physical dosimetry system (Delta4PT) and the online dosimetry system (MFX), the gamma passing rates of the two dosimetry systems showed very good agreement with the treatment planning system (TPS) calculation. For the average dose difference between the TPS calculation and the MFX measurement, most of the dose metrics showed good agreement within a tolerance of 3%. In the error-detection comparison of Delta4PT and MFX, the gamma passing rates of the two dosimetry systems did not meet the 90% acceptance criterion when the magnitude of error exceeded 2 mm and 1.5°, respectively, for error plans of Types 1, 2, and 3. For delivery with all error types, the average dose difference of the PTV due to error magnitude showed agreement between the TPS calculation and the MFX measurement within 1%. Overall, the results of the online dosimetry system showed very good agreement with those of the physical dosimetry system. Our results suggest that a log file-based online dosimetry system is a very suitable verification tool for accurate and efficient clinical routines for patient-specific quality assurance (QA).
Evaluation of statistical models for forecast errors from the HBV model
NASA Astrophysics Data System (ADS)
Engeland, Kolbjørn; Renard, Benjamin; Steinsland, Ingelin; Kolberg, Sjur
2010-04-01
Three statistical models for the forecast errors for inflow into the Langvatn reservoir in Northern Norway were constructed and tested according to the agreement between (i) the forecast distribution and the observations and (ii) the median values of the forecast distribution and the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first-order auto-regressive model was constructed for the forecast errors; the parameters were conditioned on weather classes. In the second model, the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first-order auto-regressive model was constructed for the forecast errors. For the third model, positive and negative errors were modeled separately; the errors were first NQT-transformed before the mean error values were conditioned on climate, forecasted inflow and yesterday's error. To test the three models we applied three criteria: we wanted (a) the forecast distribution to be reliable; (b) the forecast intervals to be narrow; (c) the median values of the forecast distribution to be close to the observed values. Models 1 and 2 gave almost identical results. The median values improved the forecast, with the Nash-Sutcliffe efficiency (Reff) increasing from 0.77 for the original forecast to 0.87 for the corrected forecasts. Models 1 and 2 over-estimated the forecast intervals but gave the narrowest intervals; their main drawback was that their distributions were less reliable than those of Model 3. For Model 3 the median values did not fit well, since the auto-correlation was not accounted for. Because Model 3 did not benefit from the potential variance reduction that lies in bias estimation and removal, it gave on average wider forecast intervals than the two other models. At the same time, Model 3 on average slightly under-estimated the forecast intervals, which is probably explained by the use of average measures to evaluate the fit.
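As a rough sketch of the Model-1 idea (transform the flows, then fit a first-order auto-regressive model to the forecast errors), the snippet below uses synthetic data; the Box-Cox lambda, the distributions and all parameter values are assumptions, not values from the paper.

```python
# Minimal Box-Cox + AR(1) forecast-error sketch on synthetic inflow data.
import numpy as np

def boxcox(q, lam=0.3):
    return (q**lam - 1.0) / lam if lam != 0 else np.log(q)

rng = np.random.default_rng(0)
obs = rng.gamma(shape=4.0, scale=20.0, size=400)            # synthetic observed inflow
fcst = obs * rng.lognormal(mean=0.0, sigma=0.15, size=400)  # synthetic forecasted inflow

err = boxcox(obs) - boxcox(fcst)                 # forecast error in transformed space

# Fit AR(1): e_t = phi * e_{t-1} + eps_t
e_lag, e_now = err[:-1], err[1:]
phi = np.sum(e_lag * e_now) / np.sum(e_lag**2)
sigma = np.std(e_now - phi * e_lag, ddof=1)

# One-step-ahead error forecast and an approximate 90% interval (transformed space)
e_next = phi * err[-1]
interval = (e_next - 1.645 * sigma, e_next + 1.645 * sigma)
print(f"phi={phi:.3f}, sigma={sigma:.3f}, 90% interval={interval}")
```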
Assessment of Levels of Ultraviolet A Light Protection in Automobile Windshields and Side Windows.
Boxer Wachler, Brian S
2016-07-01
Ultraviolet A (UV-A) light is associated with the risks of cataract and skin cancer. To assess the level of UV-A light protection in the front windshields and side windows of automobiles. In this cross-sectional study, 29 automobiles from 15 automobile manufacturers were analyzed. The outside ambient UV-A radiation, along with UV-A radiation behind the front windshield and behind the driver's side window of all automobiles, was measured. The years of the automobiles ranged from 1990 to 2014, with an average year of 2010. The automobile dealerships were located in Los Angeles, California. Amount of UV-A blockage from windshields and side windows. The average percentage of front-windshield UV-A blockage was 96% (range, 95%-98% [95% CI, 95.7%-96.3%]) and was higher than the average percentage of side-window blockage, which was 71% (range, 44%-96% [95% CI, 66.4%-75.6%]). The difference between these average percentages is 25% (95% CI, 21%-30% [P < .001]). A high level of side-window UV-A blockage (>90%) was found in 4 of 29 automobiles (13.8%). The level of front-windshield UV-A protection was consistently high among automobiles. The level of side-window UV-A protection was lower and highly variable. These results may in part explain the reported increased rates of cataract in left eyes and left-sided facial skin cancer. Automakers may wish to consider increasing the degree of UV-A protection in the side windows of automobiles.
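The blockage percentages reported above reduce to a simple ratio of the reading behind the glass to the ambient reading. A small hedged sketch of that computation follows; the example readings are invented, not measurements from the study.

```python
# Percentage UV-A blockage from an ambient reading and a reading behind the glass.
def uva_blockage_percent(ambient, behind_glass):
    return 100.0 * (1.0 - behind_glass / ambient)

print(uva_blockage_percent(ambient=1000.0, behind_glass=40.0))   # windshield-like: 96.0
print(uva_blockage_percent(ambient=1000.0, behind_glass=290.0))  # side-window-like: 71.0
```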
ERIC Educational Resources Information Center
Schumacher, Robin F.; Malone, Amelia S.
2017-01-01
The goal of the present study was to describe fraction-calculation errors among 4th-grade students and determine whether error patterns differed as a function of problem type (addition vs. subtraction; like vs. unlike denominators), orientation (horizontal vs. vertical), or mathematics-achievement status (low- vs. average- vs. high-achieving). We…
Precoded spatial multiplexing MIMO system with spatial component interleaver.
Gao, Xiang; Wu, Zhanji
In this paper, the performance of a precoded bit-interleaved coded modulation (BICM) spatial multiplexing multiple-input multiple-output (MIMO) system with a spatial component interleaver is investigated. For the ideal precoded spatial multiplexing MIMO system with spatial component interleaver, based on singular value decomposition (SVD) of the MIMO channel, the average pairwise error probability (PEP) of the coded bits is derived. Based on the PEP analysis, the optimum spatial Q-component interleaver design criterion is provided to achieve the minimum error probability. For the proposed limited-feedback precoded scheme with a linear zero-forcing (ZF) receiver, a novel effective signal-to-noise ratio (SNR)-based precoding matrix selection criterion and a simplified criterion are proposed in order to minimize a bound on the average probability of a symbol vector error. Based on the average mutual information (AMI)-maximization criterion, the optimal constellation rotation angles are investigated. Simulation results indicate that the optimized spatial multiplexing MIMO system with a spatial component interleaver achieves significant performance advantages over the conventional spatial multiplexing MIMO system.
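The ideal precoder in the abstract rests on the SVD of the MIMO channel, which turns the link into parallel scalar streams. The numpy sketch below illustrates only that decomposition step; the 2x2 channel, noise level and symbols are assumptions for the example, not the paper's simulation setup.

```python
# Illustrative SVD-based precoding for a 2x2 spatial multiplexing MIMO link.
import numpy as np

rng = np.random.default_rng(1)
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
U, S, Vh = np.linalg.svd(H)                      # H = U @ diag(S) @ Vh

s = np.array([1 + 1j, -1 - 1j]) / np.sqrt(2)     # two QPSK symbols (one per stream)
x = Vh.conj().T @ s                              # precode with V
n = 0.05 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
y = H @ x + n                                    # pass through the channel

r = U.conj().T @ y                               # receiver combining with U^H
s_hat = r / S                                    # effective channel is diagonal
print(np.round(s_hat, 3))                        # close to the transmitted symbols
```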
Streamflow simulation studies of the Hillsborough, Alafia, and Anclote Rivers, west-central Florida
Turner, J.F.
1979-01-01
A modified version of the Georgia Tech Watershed Model was applied to simulate flow in three large river basins of west-central Florida. Calibrations were evaluated by comparing the following synthesized and observed data: annual hydrographs for the 1959, 1960, 1973 and 1974 water years, flood hydrographs (maximum daily discharge and flood volume), and long-term annual flood-peak discharges (1950-72). Annual hydrographs, excluding the 1973 water year, were compared using the average absolute error in annual runoff and daily flows and correlation coefficients of monthly and daily flows. Correlation coefficients for simulated and observed maximum daily discharges and flood volumes used for calibration range from 0.91 to 0.98, and average standard errors of estimate range from 18 to 45 percent. Correlation coefficients for simulated and observed annual flood-peak discharges range from 0.60 to 0.74, and average standard errors of estimate range from 33 to 44 percent. (Woodard-USGS)
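The calibration statistics quoted above are a correlation coefficient and a standard error of estimate in percent. The sketch below computes a correlation coefficient and an RMSE-based approximation of the percent standard error for simulated versus observed flood peaks; the arrays are invented example values, not the study's data.

```python
# Correlation coefficient and an approximate percent standard error of estimate
# for synthetic simulated vs. observed annual flood-peak discharges.
import numpy as np

observed = np.array([120.0, 340.0, 95.0, 210.0, 480.0, 160.0])   # e.g. ft3/s
simulated = np.array([140.0, 300.0, 110.0, 190.0, 430.0, 175.0])

r = np.corrcoef(observed, simulated)[0, 1]
see_percent = 100.0 * np.sqrt(np.mean((simulated - observed) ** 2)) / observed.mean()
print(f"correlation coefficient = {r:.2f}, standard error of estimate ~ {see_percent:.0f}%")
```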
ERROR IN ANNUAL AVERAGE DUE TO USE OF LESS THAN EVERYDAY MEASUREMENTS
Long term averages of the concentration of PM mass and components are of interest for determining compliance with annual averages, for developing exposure surrogates for cross-sectional epidemiologic studies of the long-term effects of PM, and for determination of aerosol sources by chem...
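As a hedged illustration of the question the title raises, the sketch below compares an annual average computed from everyday measurements with one computed from an assumed 1-in-6-day sampling schedule; the daily PM series and the schedule are synthetic assumptions, not data or methods from the abstract.

```python
# Error in the annual average introduced by less-than-everyday sampling (synthetic data).
import numpy as np

rng = np.random.default_rng(42)
daily_pm = rng.lognormal(mean=2.5, sigma=0.5, size=365)   # synthetic daily PM mass

annual_everyday = daily_pm.mean()
annual_subsampled = daily_pm[::6].mean()                  # assumed 1-in-6-day sampling
error_percent = 100.0 * (annual_subsampled - annual_everyday) / annual_everyday
print(f"error in annual average from 1-in-6-day sampling: {error_percent:.1f}%")
```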