Mendiburu, Andrés Z; de Carvalho, João A; Coronado, Christian R
2015-03-21
The objective of this study was to estimate the lower flammability limits of C-H compounds at 25 °C and 1 atm, at moderately elevated temperatures, and in the presence of diluents. A set of 120 C-H compounds was divided into a correlation set and a prediction set of 60 compounds each. The absolute average relative error was 7.89% for the total set, 6.09% for the correlation set, and 9.68% for the prediction set. However, it was shown that by considering different sources of experimental data these values were reduced to 6.5% for the prediction set and 6.29% for the total set. The method was consistent with Le Chatelier's law for binary mixtures of C-H compounds. When tested over a temperature range from 5 °C to 100 °C, the absolute average relative errors were 2.41% for methane, 4.78% for propane, 0.29% for iso-butane, and 3.86% for propylene. When nitrogen was added, the absolute average relative errors were 2.48% for methane, 5.13% for propane, 0.11% for iso-butane, and 0.15% for propylene. When carbon dioxide was added, the absolute average relative errors were 1.80% for methane, 5.38% for propane, 0.86% for iso-butane, and 1.06% for propylene. Copyright © 2014 Elsevier B.V. All rights reserved.
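The Le Chatelier mixing rule referenced above has a simple closed form: LFL_mix = 1 / Σ(x_i / LFL_i), where x_i are the fuel mole fractions. A minimal sketch (the function name and the pure-component LFL values in the example are illustrative assumptions, not taken from the study):

```python
def le_chatelier_lfl(fractions, lfls):
    """Lower flammability limit (vol%) of a fuel blend via Le Chatelier's law.

    fractions: mole fractions of each fuel in the blend (must sum to 1)
    lfls: pure-component LFLs in vol% (e.g. ~5.0 for methane, ~2.1 for propane)
    """
    if abs(sum(fractions) - 1.0) > 1e-9:
        raise ValueError("fuel fractions must sum to 1")
    return 1.0 / sum(x / lfl for x, lfl in zip(fractions, lfls))

# 50/50 methane-propane blend with illustrative pure-component LFLs
print(round(le_chatelier_lfl([0.5, 0.5], [5.0, 2.1]), 2))
```

The blend LFL always lies between the pure-component limits, which is one easy consistency check for any predictive method.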
Demand forecasting of electricity in Indonesia with limited historical data
NASA Astrophysics Data System (ADS)
Dwi Kartikasari, Mujiati; Rohmad Prayogi, Arif
2018-03-01
Demand forecasting of electricity is an important activity for electricity agents seeking a picture of future electricity demand. Electricity demand can be predicted using time series models. In this paper, the double moving average model, Holt's exponential smoothing model, and the grey model GM(1,1) are used to predict electricity demand in Indonesia under the condition of limited historical data. The results show that the grey model GM(1,1) has the smallest values of MAE (mean absolute error), MSE (mean squared error), and MAPE (mean absolute percentage error).
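The grey model GM(1,1) singled out above fits a first-order grey differential equation to the accumulated series by least squares, which is why it works with very short histories. A minimal pure-Python sketch (not the authors' implementation; the function name is illustrative):

```python
import math

def gm11_forecast(x0, steps=1):
    """Grey model GM(1,1) fit and forecast -- a minimal sketch.

    x0: observed non-negative series (at least 4 points recommended)
    steps: number of values to forecast beyond the series
    Returns fitted values plus forecasts, length len(x0) + steps.
    """
    n = len(x0)
    # 1-AGO: accumulated generating operation
    x1 = [sum(x0[:k + 1]) for k in range(n)]
    # background values z1(k): mean of consecutive AGO points
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]
    y = x0[1:]
    # least squares for a, b in x0(k) + a*z1(k) = b (normal equations)
    u = [-zk for zk in z]
    A11 = sum(uk * uk for uk in u); A12 = sum(u); A22 = len(u)
    b1 = sum(uk * yk for uk, yk in zip(u, y)); b2 = sum(y)
    det = A11 * A22 - A12 * A12
    a = (b1 * A22 - A12 * b2) / det
    b = (A11 * b2 - A12 * b1) / det
    # time-response function, then inverse AGO to recover the series
    c = x0[0] - b / a
    x1_hat = [c * math.exp(-a * k) + b / a for k in range(n + steps)]
    return [x0[0]] + [x1_hat[k] - x1_hat[k - 1] for k in range(1, n + steps)]
```

On a near-exponential demand series the model tracks closely; for example, feeding it the geometric series 2, 2.2, 2.42, 2.662 forecasts the next value to within about 0.1%.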
Water quality management using statistical analysis and time-series prediction model
NASA Astrophysics Data System (ADS)
Parmar, Kulwinder Singh; Bhardwaj, Rashmi
2014-12-01
This paper deals with water quality management using statistical analysis and a time-series prediction model. The monthly variation of water quality standards has been used to compare the statistical mean, median, mode, standard deviation, kurtosis, skewness, and coefficient of variation at the Yamuna River. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, normalized Bayesian information criterion, Ljung-Box analysis, predicted values, and confidence limits. Using an autoregressive integrated moving average model, future values of the water quality parameters have been estimated. It is observed that the predictive model is useful at 95% confidence limits, and the distribution is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen, and water temperature (WT), and leptokurtic for chemical oxygen demand and biochemical oxygen demand. Also, the predicted series is close to the original series, indicating a good fit. All parameters except pH and WT cross the limits prescribed by the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agricultural, or industrial use.
Demand Forecasting: An Evaluation of DODs Accuracy Metric and Navys Procedures
2016-06-01
inventory management improvement plan, mean of absolute scaled error, lead time adjusted squared error, forecast accuracy, benchmarking, naïve method...
Discrete distributed strain sensing of intelligent structures
NASA Technical Reports Server (NTRS)
Anderson, Mark S.; Crawley, Edward F.
1992-01-01
Techniques are developed for the design of discrete highly distributed sensor systems for use in intelligent structures. First the functional requirements for such a system are presented. Discrete spatially averaging strain sensors are then identified as satisfying the functional requirements. A variety of spatial weightings for spatially averaging sensors are examined, and their wave number characteristics are determined. Preferable spatial weightings are identified. Several numerical integration rules used to integrate such sensors in order to determine the global deflection of the structure are discussed. A numerical simulation is conducted using point and rectangular sensors mounted on a cantilevered beam under static loading. Gage factor and sensor position uncertainties are incorporated to assess the absolute error and standard deviation of the error in the estimated tip displacement found by numerically integrating the sensor outputs. An experiment is carried out using a statically loaded cantilevered beam with five point sensors. It is found that in most cases the actual experimental error is within one standard deviation of the absolute error as found in the numerical simulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Falconer, David A.; Tiwari, Sanjiv K.; Moore, Ronald L.
Projection errors limit the use of vector magnetograms of active regions (ARs) far from the disk center. In this Letter, for ARs observed up to 60° from the disk center, we demonstrate a method for measuring and reducing the projection error in the magnitude of any whole-AR parameter that is derived from a vector magnetogram that has been deprojected to the disk center. The method assumes that the center-to-limb curve of the average of the parameter's absolute values, measured from the disk passage of a large number of ARs and normalized to each AR's absolute value of the parameter at central meridian, gives the average fractional projection error at each radial distance from the disk center. To demonstrate the method, we use a large set of large-flux ARs and apply the method to a whole-AR parameter that is among the simplest to measure: whole-AR magnetic flux. We measure 30,845 SDO/Helioseismic and Magnetic Imager vector magnetograms covering the disk passage of 272 large-flux ARs, each having whole-AR flux >10²² Mx. We obtain the center-to-limb radial-distance run of the average projection error in measured whole-AR flux from a Chebyshev fit to the radial-distance plot of the 30,845 normalized measured values. The average projection error in the measured whole-AR flux of an AR at a given radial distance is removed by multiplying the measured flux by the correction factor given by the fit. The correction is important both for the study of the evolution of ARs and for improving the accuracy of forecasts of an AR's major flare/coronal mass ejection productivity.
NASA Astrophysics Data System (ADS)
Yang, Juqing; Wang, Dayong; Fan, Baixing; Dong, Dengfeng; Zhou, Weihu
2017-03-01
In-situ intelligent manufacturing of large-volume equipment requires industrial robots capable of high-accuracy absolute positioning and orientation steering control. Conventional robots mainly employ an offline calibration technology to identify and compensate key robotic parameters. However, the dynamic and static parameters of a robot change nonlinearly, so it is not possible to acquire a robot's actual parameters and control its absolute pose with high accuracy over a large workspace by offline calibration in real time. This study proposes a real-time online absolute pose steering control method for an industrial robot based on six-degrees-of-freedom laser tracking measurement, which adopts comprehensive compensation and correction of differential movement variables. First, the pose steering control system and robot kinematics error model are constructed, and then the pose error compensation mechanism and algorithm are introduced in detail. By accurately measuring the position and orientation of the robot end-tool, mapping the error through the computed Jacobian matrix of the joint variables, and correcting the joint variables, real-time online absolute pose compensation for an industrial robot is accurately implemented in simulations and experimental tests. The average positioning error is 0.048 mm and the orientation accuracy is better than 0.01 deg. The results demonstrate that the proposed method is feasible and that the online absolute accuracy of a robot is substantially enhanced.
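The correction loop described above, mapping the measured end-tool error through the Jacobian and updating the joint variables, can be illustrated on a toy planar 2-link arm. This is only a sketch: the real system is a 6-DOF robot with laser-tracker feedback, and the link lengths, target, and iteration count here are all illustrative assumptions.

```python
import math

def fk(q, l1=1.0, l2=1.0):
    """Forward kinematics of a planar 2-link arm (toy stand-in for a 6-DOF robot)."""
    x = l1 * math.cos(q[0]) + l2 * math.cos(q[0] + q[1])
    y = l1 * math.sin(q[0]) + l2 * math.sin(q[0] + q[1])
    return x, y

def correct_pose(q, target, iters=20, damping=1.0):
    """Iteratively correct joint variables from the measured end-point error:
    q <- q + damping * J^-1 * (target - fk(q))."""
    for _ in range(iters):
        x, y = fk(q)
        ex, ey = target[0] - x, target[1] - y
        s1, s12 = math.sin(q[0]), math.sin(q[0] + q[1])
        c1, c12 = math.cos(q[0]), math.cos(q[0] + q[1])
        # analytic Jacobian of the 2-link arm (unit link lengths)
        J = [[-s1 - s12, -s12], [c1 + c12, c12]]
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]  # = sin(q2)
        dq0 = (J[1][1] * ex - J[0][1] * ey) / det
        dq1 = (-J[1][0] * ex + J[0][0] * ey) / det
        q = [q[0] + damping * dq0, q[1] + damping * dq1]
    return q
```

In the real method the "measured" pose comes from the laser tracker rather than from `fk`, so the loop compensates the unmodeled kinematic errors as well.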
Huang, David; Tang, Maolong; Wang, Li; Zhang, Xinbo; Armour, Rebecca L.; Gattey, Devin M.; Lombardi, Lorinna H.; Koch, Douglas D.
2013-01-01
Purpose: To use optical coherence tomography (OCT) to measure corneal power and improve the selection of intraocular lens (IOL) power in cataract surgeries after laser vision correction. Methods: Patients with previous myopic laser vision corrections were enrolled in this prospective study from two eye centers. Corneal thickness and power were measured by Fourier-domain OCT. Axial length, anterior chamber depth, and automated keratometry were measured by a partial coherence interferometer. An OCT-based IOL formula was developed. The mean absolute error of the OCT-based formula in predicting postoperative refraction was compared to two regression-based IOL formulae for eyes with previous laser vision correction. Results: Forty-six eyes of 46 patients all had uncomplicated cataract surgery with monofocal IOL implantation. The mean arithmetic prediction error of postoperative refraction was 0.05 ± 0.65 diopter (D) for the OCT formula, 0.14 ± 0.83 D for the Haigis-L formula, and 0.24 ± 0.82 D for the no-history Shammas-PL formula. The mean absolute error was 0.50 D for OCT compared to a mean absolute error of 0.67 D for Haigis-L and 0.67 D for Shammas-PL. The adjusted mean absolute error (average prediction error removed) was 0.49 D for OCT, 0.65 D for Haigis-L (P=.031), and 0.62 D for Shammas-PL (P=.044). For OCT, 61% of the eyes were within 0.5 D of prediction error, whereas 46% were within 0.5 D for both Haigis-L and Shammas-PL (P=.034). Conclusions: The predictive accuracy of OCT-based IOL power calculation was better than Haigis-L and Shammas-PL formulas in eyes after laser vision correction. PMID:24167323
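The "adjusted mean absolute error" reported above removes each formula's mean arithmetic prediction error before averaging the absolute values, so formulas are compared on spread rather than on a constant offset. A sketch of that bookkeeping (the function name is an assumption):

```python
def prediction_errors_summary(predicted, actual):
    """Mean error, MAE, and adjusted MAE (mean prediction error removed).

    predicted/actual: postoperative refractions in diopters. The adjusted MAE
    subtracts the mean arithmetic error from every error before taking |.|.
    """
    errors = [p - a for p, a in zip(predicted, actual)]
    n = len(errors)
    mean_err = sum(errors) / n
    mae = sum(abs(e) for e in errors) / n
    adj_mae = sum(abs(e - mean_err) for e in errors) / n
    return mean_err, mae, adj_mae
```

A formula with a large constant bias can still have a small adjusted MAE, since the bias could in principle be calibrated away.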
19 CFR 351.224 - Disclosure of calculations and procedures for the correction of ministerial errors.
Code of Federal Regulations, 2012 CFR
2012-04-01
... least five absolute percentage points in, but not less than 25 percent of, the weighted-average dumping... margin or countervailable subsidy rate (whichever is applicable) of zero (or de minimis) and a weighted...
[Prediction of schistosomiasis infection rates of population based on ARIMA-NARNN model].
Ke-Wei, Wang; Yu, Wu; Jin-Ping, Li; Yu-Yu, Jiang
2016-07-12
To explore the effect of the autoregressive integrated moving average model-nonlinear autoregressive neural network (ARIMA-NARNN) model on predicting schistosomiasis infection rates in the population. The ARIMA model, NARNN model, and ARIMA-NARNN model were established based on monthly schistosomiasis infection rates from January 2005 to February 2015 in Jiangsu Province, China. The fitting and prediction performances of the three models were compared. Compared to the ARIMA model and the NARNN model, the mean square error (MSE), mean absolute error (MAE), and mean absolute percentage error (MAPE) of the ARIMA-NARNN model were the lowest, with values of 0.0111, 0.0900, and 0.2824, respectively. The ARIMA-NARNN model could effectively fit and predict schistosomiasis infection rates in the population, which might have great application value for the prevention and control of schistosomiasis.
Radiometric properties of the NS001 Thematic Mapper Simulator aircraft multispectral scanner
NASA Technical Reports Server (NTRS)
Markham, Brian L.; Ahmad, Suraiya P.
1990-01-01
Laboratory tests of the NS001 TM are described, emphasizing absolute calibration to determine the radiometry of the simulator's reflective channels. In-flight calibration of the data is accomplished with the NS001 internal integrating-sphere source, although instabilities in the source can limit the absolute calibration. The data from 1987-89 indicate uncertainties of up to 25 percent, with an apparent average uncertainty of about 15 percent. Also identified are dark-current drift and sensitivity changes along the scan line, random noise, and nonlinearity, which contribute errors of 1-2 percent. Hysteresis-like uncertainties are also noted, especially in the 2.08-2.35-micron range, which can reduce sensitivity and cause errors. The NS001 TM Simulator demonstrates a polarization sensitivity that can generate errors of up to about 10 percent depending on the wavelength.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-21
... other errors, would result in (1) a change of at least five absolute percentage points in, but not less...) preliminary determination, or (2) a difference between a weighted-average dumping margin of zero or de minimis...
Accuracy of free energies of hydration using CM1 and CM3 atomic charges.
Udier-Blagović, Marina; Morales De Tirado, Patricia; Pearlman, Shoshannah A; Jorgensen, William L
2004-08-01
Absolute free energies of hydration (DeltaGhyd) have been computed for 25 diverse organic molecules using partial atomic charges derived from AM1 and PM3 wave functions via the CM1 and CM3 procedures of Cramer, Truhlar, and coworkers. Comparisons are made with results using charges fit to the electrostatic potential surface (EPS) from ab initio 6-31G* wave functions and from the OPLS-AA force field. OPLS Lennard-Jones parameters for the organic molecules were used together with the TIP4P water model in Monte Carlo simulations with free energy perturbation theory. Absolute free energies of hydration were computed for OPLS united-atom and all-atom methane by annihilating the solutes in water and in the gas phase, and absolute DeltaGhyd values for all other molecules were computed via transformation to one of these references. Optimal charge scaling factors were determined by minimizing the unsigned average error between experimental and calculated hydration free energies. The PM3-based charge models do not lead to lower average errors than obtained with the EPS charges for the subset of 13 molecules in the original study. However, improvement is obtained by scaling the CM1A partial charges by 1.14 and the CM3A charges by 1.15, which leads to average errors of 1.0 and 1.1 kcal/mol for the full set of 25 molecules. The scaled CM1A charges also yield the best results for the hydration of amides including the E/Z free-energy difference for N-methylacetamide in water. Copyright 2004 Wiley Periodicals, Inc.
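One way to minimize the unsigned average error over a single scale factor, as done above for the charge models, uses the identity Σ|s·c_i − e_i| = Σ|c_i|·|s − e_i/c_i|: the minimizer is a weighted median of the ratios e_i/c_i with weights |c_i|. This is a sketch of that observation, not the authors' actual fitting procedure:

```python
def optimal_scale(calc, expt):
    """Scale factor s minimizing the unsigned average error mean(|s*c - e|).

    calc: calculated values (e.g. hydration free energies from one charge model)
    expt: corresponding experimental values
    The minimizer is the weighted median of e_i/c_i with weights |c_i|.
    """
    pairs = sorted((e / c, abs(c)) for c, e in zip(calc, expt))
    half = sum(w for _, w in pairs) / 2.0
    acc = 0.0
    for ratio, w in pairs:
        acc += w
        if acc >= half:
            return ratio
```

If every calculated value is off by the same multiplicative factor, the weighted median recovers that factor exactly.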
Forecasting Daily Patient Outflow From a Ward Having No Real-Time Clinical Data
Tran, Truyen; Luo, Wei; Phung, Dinh; Venkatesh, Svetha
2016-01-01
Background: Modeling patient flow is crucial in understanding resource demand and prioritization. We study patient outflow from an open ward in an Australian hospital, where currently bed allocation is carried out by a manager relying on past experiences and looking at demand. Automatic methods that provide a reasonable estimate of total next-day discharges can aid in efficient bed management. The challenges in building such methods lie in dealing with large amounts of discharge noise introduced by the nonlinear nature of hospital procedures, and the nonavailability of real-time clinical information in wards. Objective: Our study investigates different models to forecast the total number of next-day discharges from an open ward having no real-time clinical data. Methods: We compared 5 popular regression algorithms to model total next-day discharges: (1) autoregressive integrated moving average (ARIMA), (2) autoregressive moving average with exogenous variables (ARMAX), (3) k-nearest neighbor regression, (4) random forest regression, and (5) support vector regression. Although the autoregressive integrated moving average model relied on the past 3 months' discharges, nearest neighbor forecasting used the median of similar past discharges in estimating next-day discharge. In addition, the ARMAX model used the day of the week and the number of patients currently in the ward as exogenous variables. For the random forest and support vector regression models, we designed a predictor set of 20 patient features and 88 ward-level features. Results: Our data consisted of 12,141 patient visits over 1826 days. Forecasting quality was measured using mean forecast error, mean absolute error, symmetric mean absolute percentage error, and root mean square error. When compared with a moving average prediction model, all 5 models demonstrated superior performance, with the random forests achieving a 22.7% improvement in mean absolute error for all days in the year 2014.
Conclusions: In the absence of clinical information, our study recommends using patient-level and ward-level data in predicting next-day discharges. Random forest and support vector regression models are able to use all available features from such data, resulting in superior performance over traditional autoregressive methods. An intelligent estimate of available beds in wards plays a crucial role in relieving access block in emergency departments. PMID:27444059
Neural network versus classical time series forecasting models
NASA Astrophysics Data System (ADS)
Nor, Maria Elena; Safuan, Hamizah Mohd; Shab, Noorzehan Fazahiyah Md; Asrul, Mohd; Abdullah, Affendi; Mohamad, Nurul Asmaa Izzati; Lee, Muhammad Hisyam
2017-05-01
Artificial neural networks (ANN) have an advantage in time series forecasting, as they have the potential to solve complex forecasting problems. This is because an ANN is a data-driven approach that can be trained to map past values of a time series. In this study, the forecast performance of a neural network and of a classical time series forecasting method, namely seasonal autoregressive integrated moving average models, was compared using gold price data. Moreover, the effect of different data preprocessing on the forecast performance of the neural network was examined. The forecast accuracy was evaluated using mean absolute deviation, root mean square error, and mean absolute percentage error. It was found that the ANN produced the most accurate forecast when the Box-Cox transformation was used as data preprocessing.
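The Box-Cox preprocessing found to work best above has the standard form (x^λ − 1)/λ, reducing to log x at λ = 0; forecasts must be mapped back through the inverse before computing errors on the original scale. A sketch with both directions (function names are assumptions):

```python
import math

def boxcox(series, lam):
    """Box-Cox transform, a common preprocessing step before model fitting.

    lam = 0 gives the log transform; otherwise (x**lam - 1) / lam.
    Requires strictly positive data (e.g. gold prices).
    """
    if any(x <= 0 for x in series):
        raise ValueError("Box-Cox requires strictly positive data")
    if lam == 0:
        return [math.log(x) for x in series]
    return [(x ** lam - 1.0) / lam for x in series]

def inv_boxcox(series, lam):
    """Invert the transform to return forecasts to the original scale."""
    if lam == 0:
        return [math.exp(y) for y in series]
    return [(lam * y + 1.0) ** (1.0 / lam) for y in series]
```

The round trip inv_boxcox(boxcox(x, λ), λ) recovers the original series for any valid λ, which is a quick sanity check before training.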
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-28
... errors, (1) would result in a change of at least five absolute percentage points in, but not less than 25... determination; or (2) would result in a difference between a weighted-average dumping margin of zero or de...
An Improved Computational Method for the Calculation of Mixture Liquid-Vapor Critical Points
NASA Astrophysics Data System (ADS)
Dimitrakopoulos, Panagiotis; Jia, Wenlong; Li, Changjun
2014-05-01
Knowledge of critical points is important to determine the phase behavior of a mixture. This work proposes a reliable and accurate method in order to locate the liquid-vapor critical point of a given mixture. The theoretical model is developed from the rigorous definition of critical points, based on the SRK equation of state (SRK EoS) or alternatively, on the PR EoS. In order to solve the resulting system of nonlinear equations, an improved method is introduced into an existing Newton-Raphson algorithm, which can calculate all the variables simultaneously in each iteration step. The improvements mainly focus on the derivatives of the Jacobian matrix, on the convergence criteria, and on the damping coefficient. As a result, all equations and related conditions required for the computation of the scheme are illustrated in this paper. Finally, experimental data for the critical points of 44 mixtures are adopted in order to validate the method. For the SRK EoS, average absolute errors of the predicted critical-pressure and critical-temperature values are 123.82 kPa and 3.11 K, respectively, whereas the commercial software package Calsep PVTSIM's prediction errors are 131.02 kPa and 3.24 K. For the PR EoS, the two above mentioned average absolute errors are 129.32 kPa and 2.45 K, while the PVTSIM's errors are 137.24 kPa and 2.55 K, respectively.
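The damped, simultaneous-update Newton-Raphson scheme described above can be sketched on a small stand-in system; the EoS-specific residuals, Jacobian derivatives, and convergence criteria of the paper are not reproduced here, and the toy equations below are purely illustrative:

```python
def damped_newton(F, J, x0, damping=0.8, tol=1e-10, max_iter=100):
    """Damped Newton-Raphson for a 2x2 nonlinear system, updating all
    variables simultaneously each step (critical-point systems are larger).

    F: residual function, J: analytic Jacobian, damping in (0, 1].
    """
    x = list(x0)
    for _ in range(max_iter):
        f0, f1 = F(x)
        if abs(f0) + abs(f1) < tol:
            break
        (a, b), (c, d) = J(x)
        det = a * d - b * c
        dx0 = (d * f0 - b * f1) / det
        dx1 = (-c * f0 + a * f1) / det
        x = [x[0] - damping * dx0, x[1] - damping * dx1]
    return x

# Toy system standing in for the critical-point equations:
# x^2 + y^2 = 4 and x*y = 1
F = lambda v: (v[0] ** 2 + v[1] ** 2 - 4.0, v[0] * v[1] - 1.0)
J = lambda v: ((2 * v[0], 2 * v[1]), (v[1], v[0]))
root = damped_newton(F, J, [2.0, 0.3])
```

The damping coefficient trades convergence speed for robustness far from the solution, which is the role it plays in the improved algorithm described above.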
Darajeh, Negisa; Idris, Azni; Fard Masoumi, Hamid Reza; Nourani, Abolfazl; Truong, Paul; Rezania, Shahabaldin
2017-05-04
Artificial neural networks (ANNs) have been widely used to solve problems because of their reliable, robust, and salient characteristics in capturing the nonlinear relationships between variables in complex systems. In this study, an ANN was applied to model Chemical Oxygen Demand (COD) and biodegradable organic matter (BOD) removal from palm oil mill secondary effluent (POMSE) by a vetiver system. The independent variables, POMSE concentration, vetiver slip density, and removal time, were considered as input parameters to optimize the network, while the removal percentages of COD and BOD were selected as outputs. To determine the number of hidden layer nodes, the root mean squared error of the testing set was minimized, and the topologies of the algorithms were compared by coefficient of determination and absolute average deviation. The comparison indicated that the quick propagation (QP) algorithm had the minimum root mean squared error and absolute average deviation, and the maximum coefficient of determination. The importance values of the variables were 42.41% for vetiver slip density, 29.8% for removal time, and 27.79% for POMSE concentration, showing that none of them is negligible. The results show that the ANN has great potential for predicting COD and BOD removal from POMSE, with a residual standard error (RSE) of less than 0.45%.
Applications and Comparisons of Four Time Series Models in Epidemiological Surveillance Data
Young, Alistair A.; Li, Xiaosong
2014-01-01
Public health surveillance systems provide valuable data for reliable prediction of future epidemic events. This paper describes a study that used nine types of infectious disease data collected through a national public health surveillance system in mainland China to evaluate and compare the performances of four time series methods, namely, two decomposition methods (regression and exponential smoothing), autoregressive integrated moving average (ARIMA), and support vector machine (SVM). The data obtained from 2005 to 2011 and in 2012 were used as modeling and forecasting samples, respectively. The performances were evaluated based on three metrics: mean absolute error (MAE), mean absolute percentage error (MAPE), and mean square error (MSE). The accuracy of the statistical models in forecasting future epidemic disease proved their effectiveness in epidemiological surveillance. Although the comparisons found that no single method is completely superior to the others, the present study indeed highlighted that the SVM outperforms the ARIMA model and the decomposition methods in most cases. PMID:24505382
Streamflow simulation studies of the Hillsborough, Alafia, and Anclote Rivers, west-central Florida
Turner, J.F.
1979-01-01
A modified version of the Georgia Tech Watershed Model was applied for the purpose of flow simulation in three large river basins of west-central Florida. Calibrations were evaluated by comparing the following synthesized and observed data: annual hydrographs for the 1959, 1960, 1973 and 1974 water years, flood hydrographs (maximum daily discharge and flood volume), and long-term annual flood-peak discharges (1950-72). Annual hydrographs, excluding the 1973 water year, were compared using average absolute error in annual runoff and daily flows and correlation coefficients of monthly and daily flows. Correlation coefficients for simulated and observed maximum daily discharges and flood volumes used for calibrating range from 0.91 to 0.98, and average standard errors of estimate range from 18 to 45 percent. Correlation coefficients for simulated and observed annual flood-peak discharges range from 0.60 to 0.74, and average standard errors of estimate range from 33 to 44 percent. (Woodard-USGS)
Error analysis of 3D-PTV through unsteady interfaces
NASA Astrophysics Data System (ADS)
Akutina, Yulia; Mydlarski, Laurent; Gaskin, Susan; Eiff, Olivier
2018-03-01
The feasibility of stereoscopic flow measurements through an unsteady optical interface is investigated. Position errors produced by a wavy optical surface are determined analytically, as are the optimal viewing angles of the cameras to minimize such errors. Two methods of measuring the resulting velocity errors are proposed. These methods are applied to 3D particle tracking velocimetry (3D-PTV) data obtained through the free surface of a water flow within a cavity adjacent to a shallow channel. The experiments were performed using two sets of conditions, one having no strong surface perturbations, and the other exhibiting surface gravity waves. In the latter case, the amplitude of the gravity waves was 6% of the water depth, resulting in water surface inclinations of about 0.2°. (The water depth is used herein as a relevant length scale, because the measurements are performed in the entire water column. In a more general case, the relevant scale is the maximum distance from the interface to the measurement plane, H, which here is the same as the water depth.) It was found that the contribution of the waves to the overall measurement error is low. The absolute position errors of the system were moderate (1.2% of H). However, given that the velocity is calculated from the relative displacement of a particle between two frames, the errors in the measured water velocities were reasonably small, because the error in the velocity is the relative position error over the average displacement distance. The relative position error was measured to be 0.04% of H, resulting in small velocity errors of 0.3% of the free-stream velocity (equivalent to 1.1% of the average velocity in the domain). It is concluded that even though the absolute positions to which the velocity vectors are assigned is distorted by the unsteady interface, the magnitude of the velocity vectors themselves remains accurate as long as the waves are slowly varying (have low curvature). 
The stronger the disturbances on the interface (higher amplitude, shorter wavelength), the smaller the distance from the interface at which the measurements can be performed.
Virtual sensors for on-line wheel wear and part roughness measurement in the grinding process.
Arriandiaga, Ander; Portillo, Eva; Sánchez, Jose A; Cabanes, Itziar; Pombo, Iñigo
2014-05-19
Grinding is an advanced machining process for the manufacturing of valuable complex and accurate parts for high added value sectors such as aerospace, wind generation, etc. Due to the extremely severe conditions inside grinding machines, critical process variables such as part surface finish or grinding wheel wear cannot be easily and cheaply measured on-line. In this paper a virtual sensor for on-line monitoring of those variables is presented. The sensor is based on the modelling ability of Artificial Neural Networks (ANNs) for stochastic and non-linear processes such as grinding; the selected architecture is the Layer-Recurrent neural network. The sensor makes use of the relation between the variables to be measured and power consumption in the wheel spindle, which can be easily measured. A sensor calibration methodology is presented, and the levels of error that can be expected are discussed. Validation of the new sensor is carried out by comparing the sensor's results with actual measurements carried out in an industrial grinding machine. Results show excellent estimation performance for both wheel wear and surface roughness. In the case of wheel wear, the absolute error is within the range of microns (average value 32 μm). In the case of surface finish, the absolute error is well below Ra 1 μm (average value 0.32 μm). The present approach can be easily generalized to other grinding operations.
Using a Hybrid Model to Forecast the Prevalence of Schistosomiasis in Humans.
Zhou, Lingling; Xia, Jing; Yu, Lijing; Wang, Ying; Shi, Yun; Cai, Shunxiang; Nie, Shaofa
2016-03-23
We previously proposed a hybrid model combining both the autoregressive integrated moving average (ARIMA) and the nonlinear autoregressive neural network (NARNN) models in forecasting schistosomiasis. Our purpose in the current study was to forecast the annual prevalence of human schistosomiasis in Yangxin County, using our ARIMA-NARNN model, thereby further certifying the reliability of our hybrid model. We used the ARIMA, NARNN and ARIMA-NARNN models to fit and forecast the annual prevalence of schistosomiasis. The modeling time range included was the annual prevalence from 1956 to 2008 while the testing time range included was from 2009 to 2012. The mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) were used to measure the model performance. We reconstructed the hybrid model to forecast the annual prevalence from 2013 to 2016. The modeling and testing errors generated by the ARIMA-NARNN model were lower than those obtained from either the single ARIMA or NARNN models. The predicted annual prevalence from 2013 to 2016 demonstrated an initial decreasing trend, followed by an increase. The ARIMA-NARNN model can be well applied to analyze surveillance data for early warning systems for the control and elimination of schistosomiasis.
Liu, Min Hsien; Chen, Cheng; Hong, Yaw Shun
2005-02-08
A three-parametric modification equation and the least-squares approach are adopted to calibrate hybrid density-functional theory energies of C(1)-C(10) straight-chain aldehydes, alcohols, and alkoxides to accurate enthalpies of formation (ΔHf) and Gibbs free energies of formation (ΔGf), respectively. All calculated energies of the C-H-O composite compounds were obtained from B3LYP/6-311++G(3df,2pd) single-point energies and the related thermal corrections of B3LYP/6-31G(d,p) optimized geometries. This investigation revealed that all compounds had a 0.05% average absolute relative error (ARE) for the atomization energies, with a mean absolute error (MAE) of just 2.1 kJ/mol (0.5 kcal/mol) for ΔHf and 2.4 kJ/mol (0.6 kcal/mol) for ΔGf.
Reliable absolute analog code retrieval approach for 3D measurement
NASA Astrophysics Data System (ADS)
Yu, Shuang; Zhang, Jing; Yu, Xiaoyang; Sun, Xiaoming; Wu, Haibin; Chen, Deyun
2017-11-01
The wrapped phase of the phase-shifting approach can be unwrapped using Gray code, but both the wrapped phase error and the Gray code decoding error can result in period jump errors, which lead to gross measurement error. Therefore, this paper presents a reliable absolute analog code retrieval approach. A combination of unequal-period Gray code and phase-shifting patterns is used at high frequencies to obtain a high-frequency absolute analog code, and the same unequal-period combination patterns are used at low frequencies to obtain a low-frequency absolute analog code. The difference between the two absolute analog codes is then employed to eliminate period jump errors, so that a reliable unwrapped result can be obtained. Error analysis was used to determine the applicable conditions, and the approach was verified both theoretically and experimentally. The results demonstrate that the proposed approach achieves reliable analog code unwrapping.
Forecasting Error Calculation with Mean Absolute Deviation and Mean Absolute Percentage Error
NASA Astrophysics Data System (ADS)
Khair, Ummul; Fahmi, Hasanul; Hakim, Sarudin Al; Rahim, Robbi
2017-12-01
Prediction using a forecasting method is one of the most important activities for an organization. The selection of an appropriate forecasting method is important, but the percentage error of a method is more important if decision makers are to adopt the right course of action. Using the Mean Absolute Deviation and the Mean Absolute Percentage Error to calculate the error of the least-squares method resulted in a percentage of 9.77%, and it was decided that the least-squares method works for time series and trend data.
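The two error measures can be stated concretely. A minimal sketch in Python, using a hypothetical demand series; the 9.77% figure above comes from the paper's own data, not from this example:

```python
# Least-squares trend forecast evaluated with MAD and MAPE.
# The demand series below is invented, for illustration only.

def least_squares_trend(y):
    """Fit y = a + b*t by ordinary least squares over t = 0..n-1."""
    n = len(y)
    t = list(range(n))
    t_bar = sum(t) / n
    y_bar = sum(y) / n
    b = (sum((ti - t_bar) * (yi - y_bar) for ti, yi in zip(t, y))
         / sum((ti - t_bar) ** 2 for ti in t))
    a = y_bar - b * t_bar
    return a, b

def mad(actual, forecast):
    """Mean absolute deviation."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs((a - f) / a)
                     for a, f in zip(actual, forecast)) / len(actual)

demand = [112, 118, 121, 130, 136, 141, 150, 153]  # hypothetical
a, b = least_squares_trend(demand)
fitted = [a + b * t for t in range(len(demand))]
print(round(mad(demand, fitted), 3), round(mape(demand, fitted), 3))
```

The same two functions apply unchanged to any other forecasting method's fitted values.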
NASA Astrophysics Data System (ADS)
Braun, Jaroslav; Štroner, Martin; Urban, Rudolf
2015-05-01
All surveying instruments and their measurements suffer from errors. To refine the measurement results, it is necessary to use procedures that restrict the influence of instrument errors on the measured values, or to apply numerical corrections. In precise engineering surveying for industrial applications, the accuracy of distances, usually realized over relatively short ranges, is a key parameter limiting the resulting accuracy of the determined values (coordinates, etc.). To determine the size of the systematic and random errors of the measured distances, tests were made with the idea of suppressing the random error by averaging repeated measurements, and of reducing the influence of systematic errors by identifying their absolute size on the absolute baseline realized in the geodetic laboratory at the Faculty of Civil Engineering, CTU in Prague. Sixteen concrete pillars with forced centerings were set up, and the absolute distances between the points were determined with a standard deviation of 0.02 mm using a Leica Absolute Tracker AT401. For any distance measured by the calibrated instruments (up to the length of the testing baseline, i.e., 38.6 m), the error correction of the distance meter can now be determined in two ways: by interpolation on the raw data, or by using a correction function derived from a previous FFT transformation. The quality of this calibration and correction procedure was tested experimentally on three instruments (Trimble S6 HP, Topcon GPT-7501, Trimble M3) against the Leica Absolute Tracker AT401. The correction procedure reduced the standard deviation of the measured distances significantly, to less than 0.6 mm.
For the Topcon GPT-7501, the nominal standard deviation is 2 mm; 2.8 mm was achieved without corrections and 0.55 mm after corrections. For the Trimble M3, the nominal standard deviation is 3 mm; 1.1 mm was achieved without corrections and 0.58 mm after corrections. Finally, for the Trimble S6, the nominal standard deviation is 1 mm; 1.2 mm was achieved without corrections and 0.51 mm after corrections. The proposed calibration and correction procedure is, in our opinion, very suitable for increasing the accuracy of electronic distance measurement and allows a common surveying instrument to achieve uncommonly high precision.
Hauschild, L; Lovatto, P A; Pomar, J; Pomar, C
2012-07-01
The objective of this study was to develop and evaluate a mathematical model used to estimate the daily amino acid requirements of individual growing-finishing pigs. The model includes empirical and mechanistic components. The empirical component estimates daily feed intake (DFI), BW, and daily gain (DG) based on individual pig information collected in real time. Based on the DFI, BW, and DG estimates, the mechanistic component uses classic factorial equations to estimate the optimal concentration of amino acids that must be offered to each pig to meet its requirements. The model was evaluated with data from a study that investigated the effect of feeding pigs with a 3-phase or daily multiphase system. The DFI and BW values measured in this study were compared with those estimated by the empirical component of the model. The coherence of the values estimated by the mechanistic component was evaluated by analyzing whether they followed a normal pattern of requirements. Lastly, the proposed model was evaluated by comparing its estimates with those generated by an existing growth model (InraPorc). The precision of the proposed model and InraPorc in estimating DFI and BW was evaluated through the mean absolute error. The empirical component results indicated that the DFI and BW trajectories of individual pigs fed ad libitum could be predicted 1 d (DFI) or 7 d (BW) ahead with average mean absolute errors of 12.45% and 1.85%, respectively. The average mean absolute error obtained with InraPorc for the average individual of the population was 14.72% for DFI and 5.38% for BW. Major differences were observed when estimates from InraPorc were compared with individual observations. The proposed model, however, was effective in tracking the change in DFI and BW for each individual pig. The mechanistic model component estimated the optimal standardized ileal digestible Lys to NE ratio with reasonable between-animal (average CV = 7%) and over-time (average CV = 14%) variation.
Thus, the amino acid requirements estimated by the model are animal- and time-dependent and follow, in real time, the individual DFI and BW growth patterns. The proposed model can follow the feed intake and body weight trajectory of each individual pig in real time with good accuracy. Based on these trajectories and using classical factorial equations, the model makes it possible to estimate dynamically the amino acid requirements of each animal, taking into account changes in its intake and growth.
NASA Astrophysics Data System (ADS)
Nooruddin, Hasan A.; Anifowose, Fatai; Abdulraheem, Abdulazeez
2014-03-01
Soft computing techniques have recently become very popular in the oil industry. A number of computational intelligence-based predictive methods have been widely applied in the industry with high prediction capabilities, including feed-forward neural networks, radial basis function networks, generalized regression neural networks, functional networks, support vector regression and adaptive network fuzzy inference systems. A comparative study among the most popular soft computing techniques is presented using a large dataset published in the literature describing multimodal pore systems in the Arab D formation. The inputs to the models are air porosity, grain density, and Thomeer parameters obtained from mercury injection capillary pressure profiles. Corrected air permeability is the target variable. Applying the developed permeability models in a recent reservoir characterization workflow ensures consistency between micro- and macro-scale information, represented mainly by the Thomeer parameters and absolute permeability. The dataset was divided into two parts, with 80% of the data used for training and 20% for testing. The target permeability variable was transformed to the logarithmic scale as a pre-processing step, which also gave better correlations with the input variables. Statistical and graphical analyses of the results, including permeability cross-plots and detailed error measures, were created. In general, the comparative study showed very close results among the developed models. The feed-forward neural network permeability model showed the lowest average relative error, average absolute relative error, standard deviation of error and root mean square error, making it the best model for such problems. The adaptive network fuzzy inference system also showed very good results.
Comparative study of four time series methods in forecasting typhoid fever incidence in China.
Zhang, Xingyu; Liu, Yuanyuan; Yang, Min; Zhang, Tao; Young, Alistair A; Li, Xiaosong
2013-01-01
Accurate incidence forecasting of infectious disease is critical for early prevention and for better government strategic planning. In this paper, we present a comprehensive study of different forecasting methods based on the monthly incidence of typhoid fever. The seasonal autoregressive integrated moving average (SARIMA) model and three different models inspired by neural networks, namely, back propagation neural networks (BPNN), radial basis function neural networks (RBFNN), and Elman recurrent neural networks (ERNN) were compared. The differences as well as the advantages and disadvantages, among the SARIMA model and the neural networks were summarized and discussed. The data obtained for 2005 to 2009 and for 2010 from the Chinese Center for Disease Control and Prevention were used as modeling and forecasting samples, respectively. The performances were evaluated based on three metrics: mean absolute error (MAE), mean absolute percentage error (MAPE), and mean square error (MSE). The results showed that RBFNN obtained the smallest MAE, MAPE and MSE in both the modeling and forecasting processes. The performances of the four models ranked in descending order were: RBFNN, ERNN, BPNN and the SARIMA model.
VizieR Online Data Catalog: R absolute magnitudes of Kuiper Belt objects (Peixinho+, 2012)
NASA Astrophysics Data System (ADS)
Peixinho, N.; Delsanti, A.; Guilbert-Lepoutre, A.; Gafeira, R.; Lacerda, P.
2012-06-01
Compilation of the absolute magnitudes HRα, B-R colors, and spectral features used in this work. For each object, we computed the average color index from the different papers presenting data obtained simultaneously in the B and R bands (e.g. contiguous observations within the same night). When an individual R apparent magnitude and date were available, we computed HRα = R - 5log(rΔ), where R is the R-band magnitude and r and Δ are the helio- and geocentric distances at the time of observation in AU, respectively. When V and V-R colors were available, we derived an R and then an HRα value. We did not correct for the phase-angle α effect. This table also includes spectral information on the presence of water ice, methanol, methane, or confirmed featureless spectra, as available in the literature. We highlight only the cases with clear bands in the spectrum, which were reported/confirmed by some other work. The 1st column indicates the object identification number and name or provisional designation; the 2nd column indicates the dynamical class; the 3rd column indicates the average HRα value and 1-σ error bars; the 4th column indicates the average B-R color and 1-σ error bars; the 5th column indicates the most important spectral features detected; and the 6th column points to the bibliographic references used for each object. (3 data files).
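The reduction to absolute magnitude used in the catalog is a one-line formula. A minimal sketch in Python (the example values are hypothetical, and the phase-angle correction is omitted exactly as in the catalog):

```python
import math

def absolute_magnitude_R(R, r_au, delta_au):
    """HRα = R - 5*log10(r * Δ), with r and Δ the helio- and
    geocentric distances in AU. No phase-angle correction is applied."""
    return R - 5 * math.log10(r_au * delta_au)

# Hypothetical object: R = 23.0 observed at r = 40 AU, Δ = 39 AU
print(round(absolute_magnitude_R(23.0, 40.0, 39.0), 2))
```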
Zabaleta, Haritz; Valencia, David; Perry, Joel; Veneman, Jan; Keller, Thierry
2011-01-01
ArmAssist is a wireless robot for post-stroke upper limb rehabilitation. Knowing the position of the arm is essential for any rehabilitation device. In this paper, we describe a method based on an artificial landmark navigation system. The navigation system uses three optical mouse sensors, which enables the building of a cheap but reliable position sensor. Two of the sensors are the data source for odometry calculations, and the third optical mouse sensor takes very low resolution pictures of a custom-designed mat. These pictures are processed by an optical symbol recognition (OSR) algorithm, which estimates the orientation of the robot and recognizes the landmarks placed on the mat. A data fusion strategy is described that detects misclassifications of the landmarks so that only reliable information is fused. The orientation given by the OSR algorithm is used to significantly improve the odometry, and the recognition of the landmarks is used to reference the odometry to an absolute coordinate system. The system was tested using a 3D motion capture system. With the actual mat configuration, in a field of motion of 710 × 450 mm, the maximum error in position estimation was 49.61 mm, with an average error of 36.70 ± 22.50 mm. The average test duration was 36.5 seconds and the average path length was 4173 mm.
Worthmann, Brian M; Song, H C; Dowling, David R
2015-12-01
Matched field processing (MFP) is an established technique for source localization in known multipath acoustic environments. Unfortunately, in many situations, particularly those involving high frequency signals, imperfect knowledge of the actual propagation environment prevents accurate propagation modeling and source localization via MFP fails. For beamforming applications, this actual-to-model mismatch problem was mitigated through a frequency downshift, made possible by a nonlinear array-signal-processing technique called frequency difference beamforming [Abadi, Song, and Dowling (2012). J. Acoust. Soc. Am. 132, 3018-3029]. Here, this technique is extended to conventional (Bartlett) MFP using simulations and measurements from the 2011 Kauai Acoustic Communications MURI experiment (KAM11) to produce ambiguity surfaces at frequencies well below the signal bandwidth where the detrimental effects of mismatch are reduced. Both the simulation and experimental results suggest that frequency difference MFP can be more robust against environmental mismatch than conventional MFP. In particular, signals of frequency 11.2 kHz-32.8 kHz were broadcast 3 km through a 106-m-deep shallow ocean sound channel to a sparse 16-element vertical receiving array. Frequency difference MFP unambiguously localized the source in several experimental data sets with average peak-to-side-lobe ratio of 0.9 dB, average absolute-value range error of 170 m, and average absolute-value depth error of 10 m.
Wadehn, Federico; Carnal, David; Loeliger, Hans-Andrea
2015-08-01
Heart rate variability is one of the key parameters for assessing the health status of a subject's cardiovascular system. This paper presents a local model fitting algorithm used for finding single heart beats in photoplethysmogram recordings. The local fit of exponentially decaying cosines of frequencies within the physiological range is used to detect the presence of a heart beat. Using 42 subjects from the CapnoBase database, the average heart rate error was 0.16 BPM and the standard deviation of the absolute estimation error was 0.24 BPM.
WE-G-BRA-04: Common Errors and Deficiencies in Radiation Oncology Practice
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kry, S; Dromgoole, L; Alvarez, P
Purpose: Dosimetric errors in radiotherapy dose delivery lead to suboptimal treatments and outcomes. This work reviews the frequency and severity of dosimetric and programmatic errors identified by on-site audits performed by the IROC Houston QA center. Methods: IROC Houston on-site audits evaluate absolute beam calibration, relative dosimetry data compared to the treatment planning system data, and processes such as machine QA. Audits conducted from 2000 to present were abstracted for recommendations, including the type of recommendation and the magnitude of error when applicable. Dosimetric recommendations corresponded to absolute dose errors >3% and relative dosimetry errors >2%. On-site audits of 1020 accelerators at 409 institutions were reviewed. Results: A total of 1280 recommendations were made (average 3.1/institution). The most common recommendation was for inadequate QA procedures per TG-40 and/or TG-142 (82% of institutions), with the most commonly noted deficiency being x-ray and electron off-axis constancy versus gantry angle. Dosimetrically, the most common errors in relative dosimetry were in small-field output factors (59% of institutions), wedge factors (33% of institutions), off-axis factors (21% of institutions), and photon PDD (18% of institutions). Errors in calibration were also problematic: 20% of institutions had an error in electron beam calibration, 8% had an error in photon beam calibration, and 7% had an error in brachytherapy source calibration. Almost all types of data reviewed included errors up to 7%, although 20 institutions had errors in excess of 10%, and 5 had errors in excess of 20%. The frequency of electron calibration errors decreased significantly with time, but all other errors show non-significant changes. Conclusion: There are many common and often serious errors made during the establishment and maintenance of a radiotherapy program that can be identified through independent peer review.
Physicists should be cautious, particularly in areas highlighted herein that show a tendency for errors.
Virtual Sensors for On-line Wheel Wear and Part Roughness Measurement in the Grinding Process
Arriandiaga, Ander; Portillo, Eva; Sánchez, Jose A.; Cabanes, Itziar; Pombo, Iñigo
2014-01-01
Grinding is an advanced machining process for the manufacturing of valuable complex and accurate parts for high added value sectors such as aerospace, wind generation, etc. Due to the extremely severe conditions inside grinding machines, critical process variables such as part surface finish or grinding wheel wear cannot be easily and cheaply measured on-line. In this paper a virtual sensor for on-line monitoring of those variables is presented. The sensor is based on the modelling ability of Artificial Neural Networks (ANNs) for stochastic and non-linear processes such as grinding; the selected architecture is the Layer-Recurrent neural network. The sensor makes use of the relation between the variables to be measured and power consumption in the wheel spindle, which can be easily measured. A sensor calibration methodology is presented, and the levels of error that can be expected are discussed. Validation of the new sensor is carried out by comparing the sensor's results with actual measurements carried out in an industrial grinding machine. Results show excellent estimation performance for both wheel wear and surface roughness. In the case of wheel wear, the absolute error is within the range of microns (average value 32 μm). In the case of surface finish, the absolute error is well below Ra 1 μm (average value 0.32 μm). The present approach can be easily generalized to other grinding operations. PMID:24854055
Validation of the Kp Geomagnetic Index Forecast at CCMC
NASA Astrophysics Data System (ADS)
Frechette, B. P.; Mays, M. L.
2017-12-01
The Community Coordinated Modeling Center (CCMC) Space Weather Research Center (SWRC) sub-team provides space weather services to NASA robotic mission operators and science campaigns, and prototypes new models, forecasting techniques, and procedures. The Kp index is a measure of geomagnetic disturbances in the magnetosphere, such as geomagnetic storms and substorms. In this study, we performed validation of the Newell et al. (2007) Kp prediction equation from December 2010 to July 2017. The purpose of this research is to understand Kp forecast performance, because it is critical for NASA missions to have confidence in the space weather forecast. We computed the Kp error for each forecast (average, minimum, maximum) and each synoptic period, and then quantified forecast performance with the mean error, mean absolute error, root mean square error, multiplicative bias and correlation coefficient. A contingency table was made for each forecast, skill scores were computed, and the results were compared to the perfect score and to the reference forecast skill score. In conclusion, the skill score and error results show that the minimum of the predicted Kp over each synoptic period performed better than the maximum or average of the prediction. However, persistence (the reference forecast) outperformed all of the Kp forecasts (minimum, maximum, and average). Overall, the Newell Kp prediction is still typically within 1 Kp unit, even though persistence beats it.
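The verification statistics named above are standard and easy to state in code. A minimal sketch (the Kp values below are hypothetical, and the contingency-table skill scores are omitted):

```python
import math

def verification_stats(forecast, observed):
    """Mean error, mean absolute error, RMSE, multiplicative bias,
    and Pearson correlation of a forecast against observations."""
    n = len(forecast)
    errors = [f - o for f, o in zip(forecast, observed)]
    me = sum(errors) / n
    mae = sum(abs(e) for e in errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    f_mean = sum(forecast) / n
    o_mean = sum(observed) / n
    mult_bias = f_mean / o_mean
    cov = sum((f - f_mean) * (o - o_mean) for f, o in zip(forecast, observed))
    corr = cov / math.sqrt(
        sum((f - f_mean) ** 2 for f in forecast)
        * sum((o - o_mean) ** 2 for o in observed))
    return me, mae, rmse, mult_bias, corr

# Hypothetical 3-hour Kp forecasts vs. observed values
kp_forecast = [2, 3, 4, 3, 5, 4]
kp_observed = [2, 2, 4, 4, 6, 3]
print(verification_stats(kp_forecast, kp_observed))
```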
Using a Hybrid Model to Forecast the Prevalence of Schistosomiasis in Humans
Zhou, Lingling; Xia, Jing; Yu, Lijing; Wang, Ying; Shi, Yun; Cai, Shunxiang; Nie, Shaofa
2016-01-01
Background: We previously proposed a hybrid model combining both the autoregressive integrated moving average (ARIMA) and the nonlinear autoregressive neural network (NARNN) models in forecasting schistosomiasis. Our purpose in the current study was to forecast the annual prevalence of human schistosomiasis in Yangxin County, using our ARIMA-NARNN model, thereby further certifying the reliability of our hybrid model. Methods: We used the ARIMA, NARNN and ARIMA-NARNN models to fit and forecast the annual prevalence of schistosomiasis. The modeling time range included was the annual prevalence from 1956 to 2008 while the testing time range included was from 2009 to 2012. The mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) were used to measure the model performance. We reconstructed the hybrid model to forecast the annual prevalence from 2013 to 2016. Results: The modeling and testing errors generated by the ARIMA-NARNN model were lower than those obtained from either the single ARIMA or NARNN models. The predicted annual prevalence from 2013 to 2016 demonstrated an initial decreasing trend, followed by an increase. Conclusions: The ARIMA-NARNN model can be well applied to analyze surveillance data for early warning systems for the control and elimination of schistosomiasis. PMID:27023573
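The hybrid idea — fit a linear time-series model, then fit a nonlinear autoregressive network to its residuals and sum the two fits — can be sketched. The following is a simplified illustration only: an OLS autoregression stands in for full ARIMA, a tiny feedforward network for the NARNN, and the prevalence series is invented, so this is not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_ar(y, p):
    """Ordinary least-squares AR(p) fit: y_t ≈ c + sum_i a_i * y_{t-i}."""
    X = np.column_stack([y[p - i - 1 : len(y) - i - 1] for i in range(p)])
    X = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return coef, X @ coef

def fit_narnn(resid, p, hidden=4, epochs=3000, lr=0.05):
    """Tiny one-hidden-layer tanh network trained on lagged residuals
    by gradient descent (a stand-in for the NARNN); returns in-sample fit."""
    X = np.column_stack([resid[p - i - 1 : len(resid) - i - 1] for i in range(p)])
    t = resid[p:]
    W1 = rng.normal(0, 0.1, (p, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, hidden); b2 = 0.0
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)
        pred = h @ W2 + b2
        g = 2 * (pred - t) / len(t)             # d(MSE)/d(pred)
        gh = np.outer(g, W2) * (1 - h ** 2)     # backprop through tanh
        W2 -= lr * (h.T @ g); b2 -= lr * g.sum()
        W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(axis=0)
    return np.tanh(X @ W1 + b1) @ W2 + b2

# Invented annual prevalence series (percent), for illustration only
y = np.array([4.1, 3.8, 3.5, 3.6, 3.2, 2.9, 3.0, 2.7, 2.5, 2.6,
              2.3, 2.1, 2.2, 1.9, 1.8, 1.9, 1.7, 1.6, 1.7, 1.5])
p = 2
coef, linear_fit = fit_ar(y, p)
resid = y[p:] - linear_fit                      # nonlinear part left over
nn_fit = fit_narnn(resid, p)
hybrid_fit = linear_fit[p:] + nn_fit            # linear + nonlinear components
mse_linear = np.mean(resid[p:] ** 2)
mse_hybrid = np.mean((y[2 * p:] - hybrid_fit) ** 2)
print(float(mse_linear), float(mse_hybrid))
```

On this toy series the residual network can only lower the in-sample error; whether it helps out of sample is exactly what the paper's testing-period comparison checks.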
Improving the Glucose Meter Error Grid With the Taguchi Loss Function.
Krouwer, Jan S
2016-07-01
Glucose meters often have similar performance when compared by error grid analysis. This is one reason that other statistics, such as the mean absolute relative deviation (MARD), are used to further differentiate performance. The problem with MARD is that too much information is lost. But additional information is available within the A zone of an error grid by using the Taguchi loss function. Applying the Taguchi loss function assigns each glucose meter difference from reference a value ranging from 0 (no error) to 1 (the error reaches the A-zone limit). Values are averaged over all data, which provides an indication of the risk of an incorrect medical decision. This allows one to differentiate glucose meter performance in the common case where meters have a high percentage of values in the A zone and no values beyond the B zone. Examples are provided using simulated data. © 2015 Diabetes Technology Society.
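A minimal sketch of the idea in Python. The quadratic (Taguchi-style) scaling and a fixed, symmetric A-zone half-width are assumptions here; in a real error grid the zone limit varies with the reference glucose value:

```python
def taguchi_loss(measured, reference, a_zone_limit):
    """Quadratic (Taguchi-style) loss: 0 for no error, rising to 1 when
    the difference from reference reaches the A-zone limit, capped at 1."""
    d = abs(measured - reference)
    return min((d / a_zone_limit) ** 2, 1.0)

def mean_loss(pairs, a_zone_limit):
    """Average loss over (measured, reference) pairs: an indication of
    the risk of an incorrect medical decision."""
    return sum(taguchi_loss(m, r, a_zone_limit) for m, r in pairs) / len(pairs)

# Two hypothetical meters with all readings inside the A zone:
# identical by error grid analysis, but different average loss.
meter_a = [(102, 100), (95, 100), (108, 100)]
meter_b = [(114, 100), (85, 100), (112, 100)]
print(mean_loss(meter_a, 15), mean_loss(meter_b, 15))
```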
Error Analysis of non-TLD HDR Brachytherapy Dosimetric Techniques
NASA Astrophysics Data System (ADS)
Amoush, Ahmad
The American Association of Physicists in Medicine Task Group Report 43 (AAPM TG-43) and its updated version TG-43U1 rely on the LiF TLD detector to determine the experimental absolute dose rate for brachytherapy. The recommended uncertainty estimates associated with TLD experimental dosimetry include 5% for statistical errors (Type A) and 7% for systematic errors (Type B). The TG-43U1 protocol does not include recommendations for other experimental dosimetric techniques to calculate the absolute dose for brachytherapy. This research used two independent experimental methods and Monte Carlo simulations to investigate and analyze the uncertainties and errors associated with absolute dosimetry of HDR brachytherapy for a Tandem applicator. An A16 MicroChamber* and a OneDose MOSFET detector† were selected to meet the TG-43U1 recommendations for experimental dosimetry. Statistical and systematic uncertainties associated with each experimental technique were analyzed quantitatively using MCNPX 2.6‡ to evaluate source positional error, Tandem positional error, the source spectrum, phantom size effect, reproducibility, temperature and pressure effects, volume averaging, stem and wall effects, and the Tandem effect. Absolute dose calculations for clinical use are based on the Treatment Planning System (TPS) with no corrections for the above uncertainties. Absolute dose and uncertainties along the transverse plane were predicted for the A16 microchamber. The generated overall uncertainties are 22%, 17%, 15%, 15%, 16%, 17%, and 19% at 1cm, 2cm, 3cm, 4cm, and 5cm, respectively. Predicting the dose beyond 5cm is complicated due to the low signal-to-noise ratio, cable effect, and stem effect of the A16 microchamber. Since dose beyond 5cm adds no clinical information, it has been ignored in this study. The absolute dose was predicted for the MOSFET detector from 1cm to 7cm along the transverse plane.
The generated overall uncertainties are 23%, 11%, 8%, 7%, 7%, 9%, and 8% at 1cm, 2cm, 3cm, 4cm, 5cm, 6cm, and 7cm, respectively. The Nucletron Freiburg flap applicator is used with the Nucletron remote afterloader HDR machine to deliver dose to surface cancers. Dosimetric data for the Nucletron 192Ir source were generated using Monte Carlo simulation and compared with the published data. Two-dimensional dosimetric data were calculated at two source positions: at the center of a sphere of the applicator, and between two adjacent spheres. Unlike the TPS dose algorithm, the Monte Carlo code developed for this research accounts for the applicator material, secondary electrons and delta particles, and the air gap between the skin and the applicator. *Standard Imaging, Inc., Middleton, Wisconsin USA † OneDose MOSFET, Sicel Technologies, Morrisville NC ‡ Los Alamos National Laboratory, NM USA
Zhang, Min; Xing, Yimeng; Zhang, Zhiguo; Chen, Qiguan
2014-12-12
A scheme for monitoring icing on overhead transmission lines with fiber Bragg grating (FBG) strain sensors is designed and evaluated both theoretically and experimentally. The influences of temperature and wind are considered. The results of field experiments using simulated ice loading on windless days indicate that the scheme is capable of monitoring icing thickness within 0-30 mm with an accuracy of ±1 mm, a load cell error of 0.0308v, a repeatability error of 0.3328v and a hysteresis error of 0.026%. To improve the measurement during windy weather, a correction factor is added to the effective gravitational acceleration, and the absolute FBG strain is replaced by its statistical average.
Flow interference in a variable porosity trisonic wind tunnel.
NASA Technical Reports Server (NTRS)
Davis, J. W.; Graham, R. F.
1972-01-01
Pressure data from a 20-degree cone-cylinder in a variable porosity wind tunnel over the Mach range 0.2 to 5.0 are compared to an interference-free standard in order to determine wall interference effects. Four 20-degree cone-cylinder models, representing an approximate range of blockage from one to six percent, were compared to curve fits of the interference-free standard at each Mach number, and errors were determined at each pressure tap location. The average of the absolute values of the percent error over the length of the model was determined and used as the criterion for evaluating model blockage interference effects. The results are presented in the form of the percent error as a function of model blockage and Mach number.
Liang, Hao; Gao, Lian; Liang, Bingyu; Huang, Jiegang; Zang, Ning; Liao, Yanyan; Yu, Jun; Lai, Jingzhen; Qin, Fengxiang; Su, Jinming; Ye, Li; Chen, Hui
2016-01-01
Background: Hepatitis is a serious public health problem with increasing cases and property damage in Heng County. It is necessary to develop a model to predict the hepatitis epidemic that could be useful for preventing this disease. Methods: The autoregressive integrated moving average (ARIMA) model and the generalized regression neural network (GRNN) model were used to fit the incidence data from the Heng County CDC (Center for Disease Control and Prevention) from January 2005 to December 2012. Then, the ARIMA-GRNN hybrid model was developed. The incidence data from January 2013 to December 2013 were used to validate the models. Several parameters, including mean absolute error (MAE), root mean square error (RMSE), mean absolute percentage error (MAPE) and mean square error (MSE), were used to compare the performance among the three models. Results: The morbidity of hepatitis from January 2005 to December 2012 showed seasonal variation and a slightly rising trend. The ARIMA(0,1,2)(1,1,1)12 model was the most appropriate one, with the residual test showing a white noise sequence. The smoothing factors of the basic GRNN model and the combined model were 1.8 and 0.07, respectively. The four parameters of the hybrid model were lower than those of the two single models in the validation. The parameter values of the GRNN model were the lowest in the fitting of the three models. Conclusions: The hybrid ARIMA-GRNN model showed better hepatitis incidence forecasting in Heng County than the single ARIMA model and the basic GRNN model. It is a potential decision-supportive tool for controlling hepatitis in Heng County. PMID:27258555
Azeez, Adeboye; Obaromi, Davies; Odeyemi, Akinwumi; Ndege, James; Muntabayi, Ruffin
2016-07-26
Tuberculosis (TB) is a deadly infectious disease caused by Mycobacteria tuberculosis. Tuberculosis as a chronic and highly infectious disease is prevalent in almost every part of the globe. More than 95% of TB mortality occurs in low/middle income countries. In 2014, approximately 10 million people were diagnosed with active TB and two million died from the disease. In this study, our aim is to compare the predictive powers of the seasonal autoregressive integrated moving average (SARIMA) and neural network auto-regression (SARIMA-NNAR) models of TB incidence and analyse its seasonality in South Africa. TB incidence cases data from January 2010 to December 2015 were extracted from the Eastern Cape Health facility report of the electronic Tuberculosis Register (ERT.Net). A SARIMA model and a combined model of SARIMA model and a neural network auto-regression (SARIMA-NNAR) model were used in analysing and predicting the TB data from 2010 to 2015. Simulation performance parameters of mean square error (MSE), root mean square error (RMSE), mean absolute error (MAE), mean percent error (MPE), mean absolute scaled error (MASE) and mean absolute percentage error (MAPE) were applied to assess the better performance of prediction between the models. Though practically, both models could predict TB incidence, the combined model displayed better performance. For the combined model, the Akaike information criterion (AIC), second-order AIC (AICc) and Bayesian information criterion (BIC) are 288.56, 308.31 and 299.09 respectively, which were lower than the SARIMA model with corresponding values of 329.02, 327.20 and 341.99, respectively. The seasonality trend of TB incidence was forecast to have a slightly increased seasonal TB incidence trend from the SARIMA-NNAR model compared to the single model. The combined model indicated a better TB incidence forecasting with a lower AICc. 
The model also indicates the need for resolute intervention to reduce transmission of the infection, particularly where there is co-infection with HIV and other concomitant diseases, and at festival peak periods.
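The information-criterion comparison above (AIC, AICc, BIC) follows directly from each model's maximized log-likelihood. A minimal sketch, with an illustrative log-likelihood, parameter count, and sample size rather than the study's values:

```python
import math

def information_criteria(log_lik, k, n):
    """AIC, small-sample corrected AICc, and BIC for a fitted model with
    k estimated parameters, n observations, and maximized log-likelihood."""
    aic = 2 * k - 2 * log_lik
    aicc = aic + (2 * k * (k + 1)) / (n - k - 1)
    bic = k * math.log(n) - 2 * log_lik
    return aic, aicc, bic

# Illustrative values only (72 monthly observations, 5-parameter model):
print(information_criteria(-140.0, 5, 72))
```

Lower values favor a model; the AICc correction adds a penalty that matters when the sample size is small relative to the number of parameters.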
Intra- and Interobserver Variability of Cochlear Length Measurements in Clinical CT.
Iyaniwura, John E; Elfarnawany, Mai; Riyahi-Alam, Sadegh; Sharma, Manas; Kassam, Zahra; Bureau, Yves; Parnes, Lorne S; Ladak, Hanif M; Agrawal, Sumit K
2017-07-01
The cochlear A-value measurement exhibits significant inter- and intraobserver variability, and its accuracy is dependent on the visualization method in clinical computed tomography (CT) images of the cochlea. An accurate estimate of the cochlear duct length (CDL) can be used to determine electrode choice and to frequency-map the cochlea based on the Greenwood equation. Studies have described estimating the CDL using a single A-value measurement; however, the observer variability has not been assessed. Clinical and micro-CT images of 20 cadaveric cochleae were acquired. Four specialists measured A-values on clinical CT images using both standard views and multiplanar reconstructed (MPR) views. Measurements were repeated to assess intraobserver variability. Observer variabilities were evaluated using intra-class correlation and absolute differences. Accuracy was evaluated by comparison to gold-standard micro-CT images of the same specimens. Interobserver variability was good (average absolute difference: 0.77 ± 0.42 mm) using standard views and fair (average absolute difference: 0.90 ± 0.31 mm) using MPR views. Intraobserver variability had an average absolute difference of 0.31 ± 0.09 mm for the standard views and 0.38 ± 0.17 mm for the MPR views. MPR view measurements were more accurate than standard views, with average relative errors of 9.5% and 14.5%, respectively. There was significant observer variability in A-value measurements using both the standard and MPR views. Creating the MPR views increased variability between experts; however, MPR views yielded more accurate results. Automated A-value measurement algorithms may help to reduce variability and increase accuracy in the future.
Astigmatism error modification for absolute shape reconstruction using Fourier transform method
NASA Astrophysics Data System (ADS)
He, Yuhang; Li, Qiang; Gao, Bo; Liu, Ang; Xu, Kaiyuan; Wei, Xiaohong; Chai, Liqun
2014-12-01
A method is proposed to correct astigmatism errors in the absolute shape reconstruction of an optical flat using the Fourier transform method. If a transmission flat and a reflection flat are used in an absolute test, two translation measurements allow the absolute shapes to be obtained by exploiting the characteristic relationship between the differential and original shapes in the spatial frequency domain. However, because the translation device cannot guarantee that the test and reference flats remain rigidly parallel after the translations, a tilt error exists in the obtained differential data, which introduces power and astigmatism errors into the reconstructed shapes. To correct the astigmatism errors, a rotation measurement is added. Based on the rotational invariance of the form of the Zernike polynomial in a circular domain, the astigmatism terms are calculated by solving polynomial-coefficient equations related to the rotation differential data, and the erroneous astigmatism terms are then corrected. Computer simulation proves the validity of the proposed method.
Wetherbee, G.A.; Latysh, N.E.; Gordon, J.D.
2005-01-01
Data from the U.S. Geological Survey (USGS) collocated-sampler program for the National Atmospheric Deposition Program/National Trends Network (NADP/NTN) are used to estimate the overall error of NADP/NTN measurements. Absolute errors are estimated by comparison of paired measurements from collocated instruments. Spatial and temporal differences in absolute error were identified and are consistent with longitudinal distributions of NADP/NTN measurements and spatial differences in precipitation characteristics. The magnitude of error for calcium, magnesium, ammonium, nitrate, and sulfate concentrations, specific conductance, and sample volume is of minor environmental significance to data users. Data collected after a 1994 sample-handling protocol change are prone to less absolute error than data collected prior to 1994. Absolute errors are smaller during non-winter months than during winter months for selected constituents at sites where frozen precipitation is common. Minimum resolvable differences are estimated for different regions of the USA to aid spatial and temporal watershed analyses.
Knowing what to expect, forecasting monthly emergency department visits: A time-series analysis.
Bergs, Jochen; Heerinckx, Philipe; Verelst, Sandra
2014-04-01
To evaluate an automatic forecasting algorithm for predicting the number of monthly emergency department (ED) visits one year ahead. We collected retrospective data on the number of monthly visiting patients over a 6-year period (2005-2011) from four Belgian hospitals. We used an automated exponential smoothing approach to predict monthly visits during the year 2011 based on the first 5 years of the dataset. Several in- and post-sample forecasting accuracy measures were calculated. The automatic forecasting algorithm was able to predict monthly visits with a mean absolute percentage error ranging from 2.64% to 4.8%, indicating an accurate prediction. The mean absolute scaled error ranged from 0.53 to 0.68, indicating that, on average, the forecast was better than the in-sample one-step forecast from the naïve method. The applied automated exponential smoothing approach provided useful predictions of the number of monthly visits a year in advance. Copyright © 2013 Elsevier Ltd. All rights reserved.
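The two headline accuracy measures above, MAPE and MASE, can be sketched in plain Python; the monthly counts below are hypothetical, not the Belgian hospital data:

```python
def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs(a - f) / abs(a)
                       for a, f in zip(actual, forecast)) / len(actual)

def mase(actual, forecast, insample):
    """Mean absolute scaled error: out-of-sample MAE divided by the
    in-sample MAE of the one-step naive forecast. Values below 1 mean
    the forecast beats the naive method on average."""
    mae = sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)
    naive_mae = sum(abs(insample[i] - insample[i - 1])
                    for i in range(1, len(insample))) / (len(insample) - 1)
    return mae / naive_mae

history = [3100, 3250, 3180, 3400, 3350, 3500]  # hypothetical training months
actual = [3420, 3560]
forecast = [3450, 3500]
print(round(mape(actual, forecast), 2), round(mase(actual, forecast, history), 2))
```

A MASE below 1, as reported in the abstract (0.53 to 0.68), indicates the method outperforms the in-sample one-step naive forecast.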
Forecasting of Water Consumptions Expenditure Using Holt-Winter’s and ARIMA
NASA Astrophysics Data System (ADS)
Razali, S. N. A. M.; Rusiman, M. S.; Zawawi, N. I.; Arbin, N.
2018-04-01
This study forecasts the water consumption expenditure of a Malaysian university, specifically Universiti Tun Hussein Onn Malaysia (UTHM). The proposed Holt-Winter's and Auto-Regressive Integrated Moving Average (ARIMA) models were applied to forecast the water consumption expenditure in Ringgit Malaysia from 2006 until 2014. The two models were compared using the Mean Absolute Percentage Error (MAPE) and Mean Absolute Deviation (MAD) as performance measures. The ARIMA model produced more accurate forecasts, with lower MAPE and MAD values. Analysis showed that the ARIMA(2,1,4) model provides a reasonable forecasting tool for university campus water usage.
On the accuracy of the Head Impact Telemetry (HIT) System used in football helmets.
Jadischke, Ron; Viano, David C; Dau, Nathan; King, Albert I; McCarthy, Joe
2013-09-03
On-field measurement of head impacts has relied on the Head Impact Telemetry (HIT) System, which uses helmet mounted accelerometers to determine linear and angular head accelerations. HIT is used in youth and collegiate football to assess the frequency and severity of helmet impacts. This paper evaluates the accuracy of HIT for individual head impacts. Most HIT validations used a medium helmet on a Hybrid III head. However, the appropriate helmet is large based on the Hybrid III head circumference (58 cm) and manufacturer's fitting instructions. An instrumented skull cap was used to measure the pressure between the head of football players (n=63) and their helmet. The average pressure with a large helmet on the Hybrid III was comparable to the average pressure from helmets used by players. A medium helmet on the Hybrid III produced average pressures greater than the 99th percentile volunteer pressure level. Linear impactor tests were conducted using a large and medium helmet on the Hybrid III. Testing was conducted by two independent laboratories. HIT data were compared to data from the Hybrid III equipped with a 3-2-2-2 accelerometer array. The absolute and root mean square error (RMSE) for HIT were computed for each impact (n=90). Fifty-five percent (n=49) had an absolute error greater than 15% while the RMSE was 59.1% for peak linear acceleration. Copyright © 2013 Elsevier Ltd. All rights reserved.
Wang, K W; Deng, C; Li, J P; Zhang, Y Y; Li, X Y; Wu, M C
2017-04-01
Tuberculosis (TB) affects people globally and is being reconsidered as a serious public health problem in China. Reliable forecasting is useful for the prevention and control of TB. This study proposes a hybrid model combining autoregressive integrated moving average (ARIMA) with a nonlinear autoregressive (NAR) neural network for forecasting the incidence of TB from January 2007 to March 2016. Prediction performance was compared between the hybrid model and the ARIMA model. The best-fit hybrid model combined an ARIMA(3,1,0) × (0,1,1)12 model with a NAR neural network with four delays and 12 neurons in the hidden layer. The ARIMA-NAR hybrid model, which exhibited lower mean square error, mean absolute error, and mean absolute percentage error of 0.2209, 0.1373, and 0.0406, respectively, in the modelling performance, could produce more accurate forecasts of TB incidence than the ARIMA model. This study shows that developing and applying the ARIMA-NAR hybrid model is an effective way to fit the linear and nonlinear patterns of time-series data, and this model could be helpful in the prevention and control of TB.
Clark, Ross A; Paterson, Kade; Ritchie, Callan; Blundell, Simon; Bryant, Adam L
2011-03-01
Commercial timing light systems (CTLS) provide precise measurement of athletes' running velocity; however, they are often expensive and difficult to transport. In this study an inexpensive, wireless and portable timing light system was created using the infrared camera in Nintendo Wii hand controllers (NWHC). System creation with gold-standard validation. A Windows-based software program using NWHC to replicate a dual-beam timing gate was created. Firstly, data collected during 2 m walking and running trials were validated against a 3D kinematic system. Secondly, data recorded during 5 m running trials at various intensities from standing or flying starts were compared to a single-beam CTLS and the independent and average scores of three handheld stopwatch (HS) operators. Intraclass correlation coefficients (ICC) and Bland-Altman plots were used to assess validity. Absolute error quartiles and the percentage of trials within absolute error threshold ranges were used to determine accuracy. The NWHC system was valid when compared against the 3D kinematic system (ICC=0.99, median absolute error (MAR)=2.95%). For the flying 5 m trials the NWHC system possessed excellent validity and precision (ICC=0.97, MAR<3%) when compared with the CTLS. In contrast, the NWHC system and the HS values during standing-start trials possessed only modest validity (ICC<0.75) and accuracy (MAR>8%). A NWHC timing light system is inexpensive, portable and valid for assessing running velocity. Errors in the 5 m standing-start trials may have been due to erroneous event detection by either the commercial or NWHC-based timing light systems. Copyright © 2010 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
Donati, Marco; Camomilla, Valentina; Vannozzi, Giuseppe; Cappozzo, Aurelio
2008-07-19
The quantitative description of joint mechanics during movement requires the reconstruction of the position and orientation of selected anatomical axes with respect to a laboratory reference frame. These anatomical axes are identified through an ad hoc anatomical calibration procedure and their position and orientation are reconstructed relative to bone-embedded frames normally derived from photogrammetric marker positions and used to describe movement. The repeatability of anatomical calibration, both within and between subjects, is crucial for kinematic and kinetic end results. This paper illustrates an anatomical calibration approach, which does not require anatomical landmark manual palpation, described in the literature to be prone to great indeterminacy. This approach allows for the estimate of subject-specific bone morphology and automatic anatomical frame identification. The experimental procedure consists of digitization through photogrammetry of superficial points selected over the areas of the bone covered with a thin layer of soft tissue. Information concerning the location of internal anatomical landmarks, such as a joint center obtained using a functional approach, may also be added. The data thus acquired are matched with the digital model of a deformable template bone. Consequently, the repeatability of pelvis, knee and hip joint angles is determined. Five volunteers, each of whom performed five walking trials, and six operators, with no specific knowledge of anatomy, participated in the study. Descriptive statistics analysis was performed during upright posture, showing a limited dispersion of all angles (less than 3 deg) except for hip and knee internal-external rotation (6 deg and 9 deg, respectively). During level walking, the ratio of inter-operator and inter-trial error and an absolute subject-specific repeatability were assessed. 
For the pelvic and hip angles and knee flexion-extension, the inter-operator error was equal to the inter-trial error, with the absolute error ranging from 0.1 deg to 0.9 deg. Knee internal-external rotation and ab-adduction showed, on average, inter-operator errors 8% and 28% greater than the relevant inter-trial errors, respectively. The absolute error was in the range 0.9-2.9 deg.
Crawford, Charles G.
1985-01-01
The modified tracer technique was used to determine reaeration-rate coefficients in the Wabash River in reaches near Lafayette and Terre Haute, Indiana, at streamflows ranging from 2,310 to 7,400 cu ft/sec. Chemically pure (CP grade) ethylene was used as the tracer gas, and rhodamine-WT dye was used as the dispersion-dilution tracer. Reaeration coefficients determined for a 13.5-mi reach near Terre Haute, Indiana, at streamflows of 3,360 and 7,400 cu ft/sec (71% and 43% flow duration) were 1.4/day and 1.1/day at 20 °C, respectively. Reaeration-rate coefficients determined for an 18.4-mi reach near Lafayette, Indiana, at streamflows of 2,310 and 3,420 cu ft/sec (70% and 53% flow duration) were 1.2/day and 0.8/day at 20 °C, respectively. None of the commonly used equations found in the literature predicted reaeration-rate coefficients similar to those measured for reaches of the Wabash River near Lafayette and Terre Haute. The average absolute prediction error for 10 commonly used reaeration equations ranged from 22% to 154%. Prediction error was much smaller in the reach near Terre Haute than in the reach near Lafayette. The overall average of the absolute prediction error for all 10 equations was 22% for the reach near Terre Haute and 128% for the reach near Lafayette. Confidence limits of results obtained from the modified tracer technique were smaller than those obtained from the equations in the literature.
Zhang, Min; Xing, Yimeng; Zhang, Zhiguo; Chen, Qiguan
2014-01-01
A scheme for monitoring icing on overhead transmission lines with fiber Bragg grating (FBG) strain sensors is designed and evaluated both theoretically and experimentally. The influences of temperature and wind are considered. The results of field experiments using simulated ice loading on windless days indicate that the scheme is capable of monitoring the icing thickness within 0–30 mm with an accuracy of ±1 mm, a load cell error of 0.0308v, a repeatability error of 0.3328v and a hysteresis error of 0.026%. To improve the measurement during windy weather, a correction factor is added to the effective gravity acceleration, and the absolute FBG strain is replaced by its statistical average. PMID:25615733
Estimating the densities of benzene-derived explosives using atomic volumes.
Ghule, Vikas D; Nirwan, Ayushi; Devi, Alka
2018-02-09
The application of average atomic volumes to predict the crystal densities of benzene-derived energetic compounds of general formula C_aH_bN_cO_d is presented, along with the reliability of this method. The densities of 119 neutral nitrobenzenes, energetic salts, and cocrystals with diverse compositions were estimated and compared with experimental data. Of the 74 nitrobenzenes for which direct comparisons could be made, the % error in the estimated density was within 0-3% for 54 compounds, 3-5% for 12 compounds, and 5-8% for the remaining 8 compounds. Among 45 energetic salts and cocrystals, the % error in the estimated density was within 0-3% for 25 compounds, 3-5% for 13 compounds, and 5-7.4% for 7 compounds. The absolute error surpassed 0.05 g/cm³ for 27 of the 119 compounds (22%). The largest errors occurred for compounds containing fused rings and for compounds with three -NH2 or -OH groups. Overall, the present approach for estimating the densities of benzene-derived explosives with different functional groups was found to be reliable. Graphical abstract: Application and reliability of average atomic volume in the crystal density prediction of energetic compounds containing a benzene ring.
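The volume-additivity idea is simple arithmetic: molar mass divided by Avogadro's number times the summed average atomic volumes. A minimal sketch, where the per-element volumes are placeholder values for illustration, not the paper's fitted parameters:

```python
# Illustrative average atomic volumes (cubic angstroms); placeholder
# values, NOT the paper's fitted parameters.
ATOMIC_VOLUME = {"C": 13.87, "H": 5.31, "N": 11.8, "O": 11.39}
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}
AVOGADRO = 6.02214076e23

def estimated_density(formula):
    """Crystal density (g/cm^3) from summed average atomic volumes.
    `formula` maps element symbol -> atom count."""
    mass_g = sum(n * ATOMIC_MASS[el] for el, n in formula.items()) / AVOGADRO
    vol_cm3 = sum(n * ATOMIC_VOLUME[el] for el, n in formula.items()) * 1e-24
    return mass_g / vol_cm3

# C7H5N3O6 (a nitroaromatic composition) as a worked example:
print(round(estimated_density({"C": 7, "H": 5, "N": 3, "O": 6}), 2))
```

Even with placeholder volumes, the estimate lands in the typical nitroaromatic density range, which is the behavior the paper quantifies systematically.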
Time series forecasting of future claims amount of SOCSO's employment injury scheme (EIS)
NASA Astrophysics Data System (ADS)
Zulkifli, Faiz; Ismail, Isma Liana; Chek, Mohd Zaki Awang; Jamal, Nur Faezah; Ridzwan, Ahmad Nur Azam Ahmad; Jelas, Imran Md; Noor, Syamsul Ikram Mohd; Ahmad, Abu Bakar
2012-09-01
The Employment Injury Scheme (EIS) provides protection to employees who are injured in accidents while working, while commuting between home and the workplace, during an authorized recess, or while travelling on work-related business. The main purpose of this study is to forecast the claims amount of the EIS for the years 2011 until 2015 using appropriate models. These models were tested on the actual EIS data from 1972 until 2010. Three different forecasting models were chosen for comparison: the Naïve with Trend Model, the Average Percent Change Model and the Double Exponential Smoothing Model. The best model was selected based on the smallest values of the error measures, the Mean Squared Error (MSE) and Mean Absolute Percentage Error (MAPE). From the results, the model that best fits the EIS forecast is the Average Percent Change Model. Furthermore, the results also show that the claims amount of the EIS for 2011 to 2015 continues to trend upwards from 2010.
Esophageal motion during radiotherapy: quantification and margin implications.
Cohen, R J; Paskalev, K; Litwin, S; Price, R A; Feigenberg, S J; Konski, A A
2010-08-01
The purpose was to evaluate interfraction and intrafraction esophageal motion in the right-left (RL) and anterior-posterior (AP) directions using computed tomography (CT) in esophageal cancer patients. Eight patients underwent CT simulation and CT-on-rails imaging before and after radiotherapy. Interfraction displacement was defined as differences between pretreatment and simulation images. Intrafraction displacement was defined as differences between pretreatment and posttreatment images. Images were fused using bone registries, adjusted to the carina. The mean, average of the absolute, and range of esophageal motion were calculated in the RL and AP directions, above and below the carina. Thirty-one CT image sets were obtained. The incidence of esophageal interfraction motion ≥5 mm was 24% and ≥10 mm was 3%; intrafraction motion ≥5 mm was 13% and ≥10 mm was 4%. The average RL motion was 1.8 ± 5.1 mm, favoring leftward movement, and the average AP motion was 0.6 ± 4.8 mm, favoring posterior movement. Average absolute motion was 4.2 mm or less in the RL and AP directions. Motion was greatest in the RL direction above the carina. Coverage of 95% of esophageal mobility requires 12 mm left, 8 mm right, 10 mm posterior, and 9 mm anterior margins. In all directions, the average of the absolute interfraction and intrafraction displacement was 4.2 mm or less. These results support a 12 mm left, 8 mm right, 10 mm posterior, and 9 mm anterior margin for the internal target volume (ITV) and can guide margins for future intensity-modulated radiation therapy (IMRT) trials to account for organ motion and setup error in three-dimensional planning.
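A margin that covers a chosen fraction of motion, like the 95% coverage margins reported above, can be sketched as a per-direction percentile of the signed displacements. The displacement samples below are hypothetical, not the patient data:

```python
import math

def percentile(xs, q):
    """Nearest-rank percentile: the smallest sample value with at least
    q percent of the sample at or below it."""
    xs = sorted(xs)
    k = min(len(xs) - 1, max(0, math.ceil(q / 100.0 * len(xs)) - 1))
    return xs[k]

def itv_margins(displacements, coverage=95):
    """(positive-direction, negative-direction) margins covering
    `coverage`% of the displacement magnitudes in each direction."""
    pos = [d for d in displacements if d > 0] or [0.0]
    neg = [-d for d in displacements if d < 0] or [0.0]
    return percentile(pos, coverage), percentile(neg, coverage)

# Hypothetical signed RL displacements in mm (leftward positive):
rl = [-6, -3, -1, 0, 1, 2, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 1, -2, 0, 3]
print(itv_margins(rl))  # → (11, 6)
```

Asymmetric margins fall out naturally when the motion distribution is skewed toward one direction, as with the leftward-favoring RL motion in the study.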
Determination and error analysis of emittance and spectral emittance measurements by remote sensing
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Kumar, R.
1977-01-01
The author has identified the following significant results. From the theory of remote sensing of surface temperatures, an equation for the upper bound of the absolute error of emittance was determined. It showed that the absolute error decreased with an increase in contact temperature, whereas it increased with an increase in environmental integrated radiant flux density. A change in emittance had little effect on the absolute error. A plot of the difference between temperature and band radiance temperature vs. emittance was provided for the wavelength intervals 4.5 to 5.5 microns, 8 to 13.5 microns, and 10.2 to 12.5 microns.
Li, Beiwen; Liu, Ziping; Zhang, Song
2016-10-03
We propose a hybrid computational framework to reduce motion-induced measurement error by combining the Fourier transform profilometry (FTP) and phase-shifting profilometry (PSP). The proposed method is composed of three major steps: Step 1 is to extract continuous relative phase maps for each isolated object with single-shot FTP method and spatial phase unwrapping; Step 2 is to obtain an absolute phase map of the entire scene using PSP method, albeit motion-induced errors exist on the extracted absolute phase map; and Step 3 is to shift the continuous relative phase maps from Step 1 to generate final absolute phase maps for each isolated object by referring to the absolute phase map with error from Step 2. Experiments demonstrate the success of the proposed computational framework for measuring multiple isolated rapidly moving objects.
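Step 1's spatial phase unwrapping can be illustrated in one dimension: successive samples are shifted by whole multiples of 2π so that neighboring differences stay small. A generic sketch of the idea, not the authors' implementation:

```python
import math

def unwrap_phase(phases):
    """1-D phase unwrapping: remove 2*pi jumps between neighbors so the
    result is continuous (relative to the starting value)."""
    out = [phases[0]]
    for p in phases[1:]:
        d = p - out[-1]
        d -= 2 * math.pi * round(d / (2 * math.pi))  # nearest-multiple shift
        out.append(out[-1] + d)
    return out

# Wrap a linear phase ramp into (-pi, pi], then recover it:
true_phase = [0.0, 2.0, 4.0, 6.0, 8.0]
wrapped = [((t + math.pi) % (2 * math.pi)) - math.pi for t in true_phase]
print([round(u, 6) for u in unwrap_phase(wrapped)])  # → [0.0, 2.0, 4.0, 6.0, 8.0]
```

This succeeds only when true neighboring phase differences stay below π, which is why the FTP step produces a relative phase map that must then be anchored to the PSP absolute phase map.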
Chen, Ting; Zhang, Miao; Jabbour, Salma; Wang, Hesheng; Barbee, David; Das, Indra J; Yue, Ning
2018-04-10
Through-plane motion introduces uncertainty in three-dimensional (3D) motion monitoring when using single-slice on-board imaging (OBI) modalities such as cine MRI. We propose a principal component analysis (PCA)-based framework to determine the optimal imaging plane to minimize the through-plane motion for single-slice imaging-based motion monitoring. Four-dimensional computed tomography (4DCT) images of eight thoracic cancer patients were retrospectively analyzed. The target volumes were manually delineated at different respiratory phases of 4DCT. We performed automated image registration to establish the 4D respiratory target motion trajectories for all patients. PCA was conducted using the motion information to define the three principal components of the respiratory motion trajectories. Two imaging planes were determined perpendicular to the second and third principal component, respectively, to avoid imaging with the primary principal component of the through-plane motion. Single-slice images were reconstructed from 4DCT in the PCA-derived orthogonal imaging planes and were compared against the traditional AP/Lateral image pairs on through-plane motion, residual error in motion monitoring, absolute motion amplitude error and the similarity between target segmentations at different phases. We evaluated the significance of the proposed motion monitoring improvement using paired t test analysis. The PCA-determined imaging planes had overall less through-plane motion compared against the AP/Lateral image pairs. For all patients, the average through-plane motion was 3.6 mm (range: 1.6-5.6 mm) for the AP view and 1.7 mm (range: 0.6-2.7 mm) for the Lateral view. With PCA optimization, the average through-plane motion was 2.5 mm (range: 1.3-3.9 mm) and 0.6 mm (range: 0.2-1.5 mm) for the two imaging planes, respectively. 
The absolute residual error of the reconstructed max-exhale-to-inhale motion averaged 0.7 mm (range: 0.4-1.3 mm, 95% CI: 0.4-1.1 mm) using optimized imaging planes, averaged 0.5 mm (range: 0.3-1.0 mm, 95% CI: 0.2-0.8 mm) using an imaging plane perpendicular to the minimal motion component only and averaged 1.3 mm (range: 0.4-2.8 mm, 95% CI: 0.4-2.3 mm) in AP/Lateral orthogonal image pairs. The root-mean-square error of reconstructed displacement was 0.8 mm for optimized imaging planes, 0.6 mm for imaging plane perpendicular to the minimal motion component only, and 1.6 mm for AP/Lateral orthogonal image pairs. When using the optimized imaging planes for motion monitoring, there was no significant absolute amplitude error of the reconstructed motion (P = 0.0988), while AP/Lateral images had significant error (P = 0.0097) with a paired t test. The average surface distance (ASD) between overlaid two-dimensional (2D) tumor segmentation at end-of-inhale and end-of-exhale for all eight patients was 0.6 ± 0.2 mm in optimized imaging planes and 1.4 ± 0.8 mm in AP/Lateral images. The Dice similarity coefficient (DSC) between overlaid 2D tumor segmentation at end-of-inhale and end-of-exhale for all eight patients was 0.96 ± 0.03 in optimized imaging planes and 0.89 ± 0.05 in AP/Lateral images. Both ASD (P = 0.034) and DSC (P = 0.022) were significantly improved in the optimized imaging planes. Motion monitoring using imaging planes determined by the proposed PCA-based framework had significantly improved performance. Single-slice image-based motion tracking can be used for clinical implementations such as MR image-guided radiation therapy (MR-IGRT). © 2018 American Association of Physicists in Medicine.
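The heart of the PCA step above is finding the dominant motion component; the two imaging planes are then chosen perpendicular to the two minor components so the dominant motion stays in-plane. A sketch using power iteration on the 3x3 trajectory covariance, with a synthetic trajectory whose amplitudes are made up:

```python
import math

def principal_axis(points):
    """Dominant principal component of a 3-D trajectory via power
    iteration on the 3x3 covariance matrix."""
    n = len(points)
    mean = [sum(p[i] for p in points) / n for i in range(3)]
    centered = [[p[i] - mean[i] for i in range(3)] for p in points]
    cov = [[sum(c[i] * c[j] for c in centered) / n for j in range(3)]
           for i in range(3)]
    v = [1.0, 1.0, 1.0]
    for _ in range(200):
        w = [sum(cov[i][j] * v[j] for j in range(3)) for i in range(3)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# Synthetic respiratory trajectory: dominant SI motion (z), small RL/AP:
traj = [(0.2 * math.sin(t), 0.5 * math.sin(t), 8.0 * math.sin(t))
        for t in [k * 0.1 for k in range(63)]]
axis = principal_axis(traj)
print([round(abs(a), 3) for a in axis])  # → [0.025, 0.062, 0.998]
```

An imaging plane containing this axis keeps the 8 mm component in-plane, leaving only the sub-millimeter components as through-plane motion, which mirrors the reduction the study reports.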
Hybrid empirical mode decomposition- ARIMA for forecasting exchange rates
NASA Astrophysics Data System (ADS)
Abadan, Siti Sarah; Shabri, Ani; Ismail, Shuhaida
2015-02-01
This paper studied the forecasting of monthly Malaysian Ringgit (MYR)/United States Dollar (USD) exchange rates using a hybrid of two methods: empirical mode decomposition (EMD) and the autoregressive integrated moving average (ARIMA). The MYR was pegged to the USD during the Asian financial crisis, fixing the exchange rate at 3.800 from 2 September 1998 until 21 July 2005. Thus, the data chosen in this paper are the post-July 2005 data, from August 2005 to July 2010. A comparative study using the root mean square error (RMSE) and mean absolute error (MAE) showed that the EMD-ARIMA outperformed the single ARIMA and the random-walk benchmark model.
Students' Mathematical Work on Absolute Value: Focusing on Conceptions, Errors and Obstacles
ERIC Educational Resources Information Center
Elia, Iliada; Özel, Serkan; Gagatsis, Athanasios; Panaoura, Areti; Özel, Zeynep Ebrar Yetkiner
2016-01-01
This study investigates students' conceptions of absolute value (AV), their performance in various items on AV, their errors in these items and the relationships between students' conceptions and their performance and errors. The Mathematical Working Space (MWS) is used as a framework for studying students' mathematical work on AV and the…
Arima model and exponential smoothing method: A comparison
NASA Astrophysics Data System (ADS)
Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri
2013-04-01
This study compares the Autoregressive Integrated Moving Average (ARIMA) model and the Exponential Smoothing Method in making predictions. The comparison focuses on the ability of both methods to produce forecasts with different numbers of data sources and different lengths of forecasting period. For this purpose, data on The Price of Crude Palm Oil (RM/tonne), the Exchange Rate of the Ringgit Malaysia (RM) against the Great Britain Pound (GBP), and The Price of SMR 20 Rubber Type (cents/kg), three different time series, are used in the comparison. The forecasting accuracy of each model is then measured by examining the prediction error produced, using the Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and Mean Absolute Deviation (MAD). The study shows that the ARIMA model can produce better predictions for long-term forecasting with limited data sources, but cannot produce better predictions for a time series with a narrow range from one point to another, as in the Exchange Rate series. On the contrary, the Exponential Smoothing Method produces better forecasts for the Exchange Rate, which has a narrow range from one point to another in its time series, while it cannot produce better predictions for a longer forecasting period.
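Holt's linear-trend (double) exponential smoothing, one member of the smoothing family compared against ARIMA here, can be sketched in a few lines; the series and smoothing constants are arbitrary illustrations:

```python
def holt_forecast(series, alpha=0.5, beta=0.3, horizon=3):
    """Holt's linear-trend (double) exponential smoothing.
    Returns point forecasts for `horizon` steps ahead."""
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        last_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
    return [level + (h + 1) * trend for h in range(horizon)]

data = [10.0, 12.0, 13.0, 15.0, 16.0, 18.0]  # made-up upward-trending series
print([round(f, 2) for f in holt_forecast(data)])  # → [19.7, 21.33, 22.96]
```

Because the forecast extrapolates the last level and trend linearly, the method tracks trending series well but, as the study notes, flattens out over long horizons and narrow-range series.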
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, W; Jiang, M; Yin, F
Purpose: Dynamic tracking of moving organs, such as lung and liver tumors, under radiation therapy requires prediction of organ motion prior to delivery. The displacement of a moving organ may change considerably because respiration differs greatly between periods. This study aims to reduce the influence of those changes using adjustable training signals and a multi-layer perceptron neural network (ASMLP). Methods: Respiratory signals obtained using a Real-time Position Management (RPM) device were used for this study. The ASMLP uses two multi-layer perceptron neural networks (MLPs) to infer respiration position alternately, and the training sample is updated over time. Firstly, a Savitzky-Golay finite impulse response smoothing filter was established to smooth the respiratory signal. Secondly, two identical MLPs were developed to estimate respiratory position from its previous positions separately. Weights and thresholds were updated to minimize network errors according to the Levenberg-Marquardt optimization algorithm through the backward propagation method. Finally, MLP 1 was used to predict the 120-150 s respiration positions using the 0-120 s training signals. At the same time, MLP 2 was trained using the 30-150 s training signals. MLP 2 was then used to predict the 150-180 s signals from the 30-150 s training signals. The respiration position was predicted in this way until the signal was finished. Results: In this experiment, the two methods were used to predict 2.5 minutes of respiratory signals. For predicting 1 s ahead of response time, the correlation coefficient was improved from 0.8250 (MLP method) to 0.8856 (ASMLP method). Besides, a 30% improvement in mean absolute error between MLP (0.1798 on average) and ASMLP (0.1267 on average) was achieved. For predicting 2 s ahead of response time, the correlation coefficient was improved from 0.61415 to 0.7098. The mean absolute error of the MLP method (0.3111 on average) was reduced by 35% using the ASMLP method (0.2020 on average).
Conclusion: The preliminary results demonstrate that the ASMLP respiratory prediction method is more accurate than the MLP method and can improve the respiration forecast accuracy.
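The Savitzky-Golay smoothing step can be illustrated with the standard window-5, quadratic-fit convolution weights; this is a generic sketch, not the study's filter configuration:

```python
def savgol5(signal):
    """Savitzky-Golay smoothing, window 5, quadratic fit, using the
    closed-form weights (-3, 12, 17, 12, -3)/35. Endpoints are left
    unsmoothed for simplicity."""
    w = (-3.0, 12.0, 17.0, 12.0, -3.0)
    out = list(signal)
    for i in range(2, len(signal) - 2):
        out[i] = sum(w[k] * signal[i - 2 + k] for k in range(5)) / 35.0
    return out

# A degree-2 filter reproduces any quadratic exactly on interior points:
print(savgol5([float(t * t) for t in range(7)]))  # → [0.0, 1.0, 4.0, 9.0, 16.0, 25.0, 36.0]
```

Unlike a plain moving average, the polynomial fit preserves peak heights and curvature, which matters when the smoothed signal feeds a position predictor.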
NASA Astrophysics Data System (ADS)
Lahmiri, Salim; Boukadoum, Mounir
2015-08-01
We present a new ensemble system for stock market returns prediction in which the continuous wavelet transform (CWT) is used to analyze return series and backpropagation neural networks (BPNNs) are used for processing the CWT-based coefficients, determining the optimal ensemble weights, and providing final forecasts. Particle swarm optimization (PSO) is used to find optimal weights and biases for each BPNN. To capture symmetry/asymmetry in the underlying data, three wavelet functions with different shapes are adopted. The proposed ensemble system was tested on three Asian stock markets: the Hang Seng, KOSPI, and Taiwan stock market data. Three statistical metrics were used to evaluate the forecasting accuracy: mean of absolute errors (MAE), root mean of squared errors (RMSE), and mean of absolute deviations (MAD). Experimental results showed that our proposed ensemble system outperformed the individual CWT-ANN models, each with a different wavelet function. In addition, the proposed ensemble system outperformed the conventional autoregressive moving average process. As a result, the proposed ensemble system is suitable for capturing symmetry/asymmetry in financial data fluctuations for better prediction accuracy.
Development of Bio-impedance Analyzer (BIA) for Body Fat Calculation
NASA Astrophysics Data System (ADS)
Riyadi, Munawar A.; Nugraha, A.; Santoso, M. B.; Septaditya, D.; Prakoso, T.
2017-04-01
Common weight scales cannot assess body composition or determine the fat mass and fat-free mass that make up body weight. This research proposes a bio-impedance analysis (BIA) tool capable of body composition assessment. The tool uses four electrodes: two pass a 50 kHz sine-wave current through the body, and the other two measure the voltage produced by the body for impedance analysis. Parameters such as height, weight, age, and gender are provided individually. These parameters, together with the impedance measurements, are then processed to produce a body fat percentage. The experimental results show impressive repeatability for successive measurements (stdev ≤ 0.25% fat mass). Moreover, results for the hand-to-hand node scheme reveal an average absolute difference between the two analyzer tools of 0.48% (fat mass) across all subjects, with a maximum absolute discrepancy of 1.22% (fat mass). On the other hand, the relative error normalized to an Omron HBF-306 as the comparison tool is less than 2%. As a result, the system offers a good evaluation tool for fat mass in the body.
Foster, Ken; Anwar, Nasim; Pogue, Rhea; Morré, Dorothy M.; Keenan, T. W.; Morré, D. James
2003-01-01
Seasonal decomposition analyses were applied to the statistical evaluation of an oscillating plasma membrane NADH oxidase activity with a temperature-compensated period of 24 min. The decomposition fits were used to validate the cyclic oscillatory pattern. Three measured values were used to evaluate accuracy, together with R-squared and the Henriksson-Merton p value: the mean absolute percentage error (MAPE), a measure of the periodic oscillation; the mean absolute deviation (MAD), a measure of the absolute average deviations from the fitted values; and the mean standard deviation (MSD), the standard deviation from the fitted values. Decomposition was carried out by fitting a trend line to the data, then detrending the data, if necessary, by subtracting the trend component. The data, with or without detrending, were then smoothed by subtracting a centered moving average of length equal to the period length determined by Fourier analysis. Finally, the time series were decomposed into cyclic and error components. The findings not only validate the periodic nature of the major oscillations but suggest, as well, that the minor intervening fluctuations also recur within each period with a reproducible pattern of recurrence. PMID:19330112
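The centered-moving-average step at the heart of this decomposition can be sketched as follows. This is a generic additive decomposition (even-length windows use the standard half-weighted endpoints), not the authors' exact procedure:

```python
def decompose(series, period):
    """Split a series into a smoothed trend (centered moving average of
    length `period`) and a cyclic component (series minus trend).
    Positions where the window does not fit are left as None."""
    n = len(series)
    half = period // 2
    trend = [None] * n
    for i in range(half, n - half):
        if period % 2:   # odd window: plain centered mean
            trend[i] = sum(series[i - half:i + half + 1]) / period
        else:            # even window: half-weight the two endpoints (2xMA)
            w = series[i - half:i + half + 1]
            trend[i] = (w[0] / 2 + sum(w[1:-1]) + w[-1] / 2) / period
    cyclic = [s - t if t is not None else None
              for s, t in zip(series, trend)]
    return trend, cyclic
```

For a perfectly periodic series the trend is flat and the cyclic component carries the whole oscillation, which is the behavior the decomposition fit validates.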
Accuracy assessment of the global TanDEM-X Digital Elevation Model with GPS data
NASA Astrophysics Data System (ADS)
Wessel, Birgit; Huber, Martin; Wohlfart, Christian; Marschalk, Ursula; Kosmann, Detlev; Roth, Achim
2018-05-01
The primary goal of the German TanDEM-X mission is the generation of a highly accurate and global Digital Elevation Model (DEM) with a global accuracy of at least 10 m absolute height error (linear 90% error). The global TanDEM-X DEM, acquired with single-pass SAR interferometry, was finished in September 2016. This paper provides a unique accuracy assessment of the final TanDEM-X global DEM using two different GPS point reference data sets, which are distributed across all continents, to fully characterize the absolute height error. Firstly, the absolute vertical accuracy is examined by about three million globally distributed kinematic GPS (KGPS) points derived from 19 KGPS tracks covering a total length of about 66,000 km. Secondly, a comparison is performed with more than 23,000 "GPS on Bench Marks" (GPS-on-BM) points provided by the US National Geodetic Survey (NGS), scattered across 14 different land cover types of the US National Land Cover Database (NLCD). Both GPS comparisons prove an absolute vertical mean error of the TanDEM-X DEM smaller than ±0.20 m, a Root Mean Square Error (RMSE) smaller than 1.4 m and an excellent absolute 90% linear height error below 2 m. The RMSE values are sensitive to land cover types. For low vegetation the RMSE is ±1.1 m, whereas it is slightly higher for developed areas (±1.4 m) and for forests (±1.8 m). This validation confirms an outstanding absolute height error at the 90% confidence level of the global TanDEM-X DEM, outperforming the requirement by a factor of five. Due to its extensive and globally distributed reference data sets, this study is of considerable interest for scientific and commercial applications.
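The three summary statistics used in such a validation (mean error or bias, RMSE, and the 90% linear error) can be sketched as below. LE90 is computed here as the 90th percentile of the absolute height differences, one common convention; the study's exact estimator may differ:

```python
import math

def height_error_stats(dem_heights, gps_heights):
    """Vertical-accuracy summary for DEM validation against GPS points:
    mean error (bias), RMSE, and LE90 (90th percentile of |dh|)."""
    dh = [d - g for d, g in zip(dem_heights, gps_heights)]
    n = len(dh)
    mean = sum(dh) / n
    rmse = math.sqrt(sum(e * e for e in dh) / n)
    abs_sorted = sorted(abs(e) for e in dh)
    le90 = abs_sorted[min(n - 1, math.ceil(0.9 * n) - 1)]
    return mean, rmse, le90
```

The bias tells you whether the DEM sits systematically above or below the reference; RMSE mixes bias and scatter; LE90 is the quantity the mission requirement is stated in.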
Monthly streamflow forecasting with auto-regressive integrated moving average
NASA Astrophysics Data System (ADS)
Nasir, Najah; Samsudin, Ruhaidah; Shabri, Ani
2017-09-01
Forecasting of streamflow is one of the many ways that can contribute to better decision making for water resource management. The auto-regressive integrated moving average (ARIMA) model was selected in this research for monthly streamflow forecasting, with an enhancement made by pre-processing the data using singular spectrum analysis (SSA). This study also proposed an extension of the SSA technique to include a step where clustering was performed on the eigenvector pairs before reconstruction of the time series. The monthly streamflow data of Sungai Muda at Jeniang, Sungai Muda at Jambatan Syed Omar and Sungai Ketil at Kuala Pegang were gathered from the Department of Irrigation and Drainage Malaysia. A ratio of 9:1 was used to divide the data into training and testing sets. The ARIMA, SSA-ARIMA and Clustered SSA-ARIMA models were all developed in R software. Results from the proposed model were then compared to a conventional auto-regressive integrated moving average model using the root-mean-square error and mean absolute error values. It was found that the proposed model can outperform the conventional model.
Cotter, Christopher; Turcotte, Julie Catherine; Crawford, Bruce; Sharp, Gregory; Mah'D, Mufeed
2015-01-01
This work aims at three goals: first, to define a set of statistical parameters and plan structures for a 3D pretreatment thoracic and prostate intensity-modulated radiation therapy (IMRT) quality assurance (QA) protocol; second, to test whether the 3D QA protocol is able to detect certain clinical errors; and third, to compare the 3D QA method with QA performed with a single ion chamber and a 2D gamma test in detecting those errors. The 3D QA protocol measurements were performed on 13 prostate and 25 thoracic IMRT patients using IBA's COMPASS system. For each treatment planning structure included in the protocol, the following statistical parameters were evaluated: average absolute dose difference (AADD), percent structure volume with absolute dose difference greater than 6% (ADD6), and a 3D gamma test. To test the 3D QA protocol error sensitivity, two prostate and two thoracic step-and-shoot IMRT patients were investigated. Errors introduced to each of the treatment plans included energy switched from 6 MV to 10 MV, multileaf collimator (MLC) leaf errors, linac jaw errors, monitor unit (MU) errors, MLC and gantry angle errors, and detector shift errors. QA was performed on each plan using a single ion chamber and a 2D array of ion chambers for 2D and 3D QA. Based on the measurements performed, we established a uniform set of tolerance levels to determine whether QA passes for each IMRT treatment plan structure: the maximum allowed AADD is 6%; at most 4% of any structure volume may have an absolute dose difference greater than 6% (ADD6); and at most 4% of any structure volume may fail the 3D gamma test with test parameters 3%/3 mm DTA. Of the three QA methods tested, the single ion chamber performed the worst, detecting 4 of 18 introduced errors; 2D QA detected 11 of 18 errors, and 3D QA detected 14 of 18 errors. PACS number: 87.56.Fc PMID:26699299
NASA Technical Reports Server (NTRS)
Benedict, G. F.; McArthur, Barbara E.; Napiwotzki, Ralf; Harrison, Thomas E.; Harris, Hugh C.; Nelan, Edmund; Bond, Howard E.; Patterson, Richard J.; Ciardullo, Robin
2009-01-01
We present absolute parallaxes and relative proper motions for the central stars of the planetary nebulae NGC 6853 (The Dumbbell), NGC 7293 (The Helix), Abell 31, and DeHt 5. This paper details our reduction and analysis using DeHt 5 as an example. We obtain these planetary nebula nuclei (PNNi) parallaxes with astrometric data from Fine Guidance Sensors FGS 1r and FGS 3, white-light interferometers on the Hubble Space Telescope. Proper motions, spectral classifications and VJHKT2M and DDO51 photometry of the stars comprising the astrometric reference frames provide spectrophotometric estimates of reference star absolute parallaxes. Introducing these into our model as observations with error, we determine absolute parallaxes for each PNN. Weighted averaging with previous independent parallax measurements yields an average parallax precision of σ_π/π = 5%. Derived distances are: d(NGC 6853) = 405 (+28/−25) pc, d(NGC 7293) = 216 (+14/−12) pc, d(Abell 31) = 621 (+91/−70) pc, and d(DeHt 5) = 345 (+19/−17) pc. These PNNi distances are all smaller than previously derived from spectroscopic analyses of the central stars. To obtain absolute magnitudes from these distances requires estimates of interstellar extinction. We average extinction measurements culled from the literature, from reddening based on PNNi intrinsic colors derived from model SEDs, and an assumption that each PNN experiences the same rate of extinction as a function of distance as do the reference stars nearest (in angular separation) to each central star. We also apply Lutz-Kelker bias corrections. The absolute magnitudes and effective temperatures permit estimates of PNNi radii through both the Stefan-Boltzmann relation and Eddington fluxes. Comparing absolute magnitudes with post-AGB models provides mass estimates. Masses cluster around 0.57 solar masses, close to the peak of the white dwarf mass distribution.
Adding a few more PNNi with well-determined distances and masses, we compare all the PNNi with cooler white dwarfs of similar mass, and confirm, as expected, that PNNi have larger radii than white dwarfs that have reached their final cooling tracks.
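Weighted averaging of independent parallax measurements is conventionally done with inverse-variance weights; a minimal sketch (the authors' actual weighting scheme may differ):

```python
def weighted_parallax(parallaxes, sigmas):
    """Inverse-variance weighted mean of independent parallax measurements
    and the combined uncertainty of that mean."""
    weights = [1.0 / s ** 2 for s in sigmas]
    wsum = sum(weights)
    pi = sum(w * p for w, p in zip(weights, parallaxes)) / wsum
    sigma = wsum ** -0.5   # uncertainty of the weighted mean
    return pi, sigma
```

With the parallax in arcseconds, the distance in parsecs is simply 1/π; the quoted precision is the ratio σ_π/π.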
Fine-scale structure of the San Andreas fault zone and location of the SAFOD target earthquakes
Thurber, C.; Roecker, S.; Zhang, H.; Baher, S.; Ellsworth, W.
2004-01-01
We present results from the tomographic analysis of seismic data from the Parkfield area using three different inversion codes. The models provide a consistent view of the complex velocity structure in the vicinity of the San Andreas, including a sharp velocity contrast across the fault. We use the inversion results to assess our confidence in the absolute location accuracy of a potential target earthquake. We derive two types of accuracy estimates, one based on a consideration of the location differences from the three inversion methods, and the other based on the absolute location accuracy of "virtual earthquakes." Location differences are on the order of 100-200 m horizontally and up to 500 m vertically. Bounds on the absolute location errors based on the "virtual earthquake" relocations are approximately 50 m horizontally and vertically. The average of our locations places the target event epicenter within about 100 m of the SAF surface trace. Copyright 2004 by the American Geophysical Union.
Juodzbaliene, Vilma; Darbutas, Tomas; Skurvydas, Albertas
2016-01-01
The aim of the study was to determine the effect of different muscle lengths and visual feedback information (VFI) on the accuracy of isometric contraction of the elbow flexors in men after an ischemic stroke (IS). Materials and Methods. Maximum voluntary muscle contraction force (MVMCF) and a set target muscle force (20% of MVMCF) developed during an isometric contraction of the elbow flexors at 90° and 60° of elbow flexion were measured by an isokinetic dynamometer in healthy subjects (MH, n = 20) and subjects after an IS during their postrehabilitation period (MS, n = 20). Results. To evaluate the accuracy of the isometric contraction of the elbow flexors, absolute errors were calculated. The absolute errors provide information about the difference between the target and achieved muscle force. Conclusions. Both MH and MS subjects tended to make greater absolute errors when generating the target force at the greater elbow flexor length, even in the presence of VFI. Absolute errors also increased in both groups at the greater elbow flexor length without VFI. Without VFI, MS subjects made greater absolute errors than MH subjects when generating the target force at the shorter elbow flexor length. PMID:27042670
A new method to estimate average hourly global solar radiation on the horizontal surface
NASA Astrophysics Data System (ADS)
Pandey, Pramod K.; Soupir, Michelle L.
2012-10-01
A new model, Global Solar Radiation on Horizontal Surface (GSRHS), was developed to estimate the average hourly global solar radiation on horizontal surfaces (Gh). The GSRHS model uses a transmission function (Tf,ij), developed to control hourly global solar radiation, for predicting solar radiation. The inputs of the model were: hour of day, day of year (Julian), optimized parameter values, solar constant (H0), and the latitude and longitude of the location of interest. The parameter values used in the model were optimized at one location (Albuquerque, NM), and these values were applied in the model for predicting average hourly global solar radiation at four other locations (Austin, TX; El Paso, TX; Desert Rock, NV; Seattle, WA) in the United States. The model performance was assessed using the correlation coefficient (r), Mean Absolute Bias Error (MABE), Root Mean Square Error (RMSE), and coefficient of determination (R2). The sensitivity of the predictions to the parameters was estimated. Results show that the model performed very well. The correlation coefficients (r) range from 0.96 to 0.99, while coefficients of determination (R2) range from 0.92 to 0.98. For daily and monthly prediction, error percentages (i.e., MABE and RMSE) were less than 20%. The approach we propose here can be potentially useful for predicting average hourly global solar radiation on the horizontal surface at different locations, with the use of readily available data (i.e., latitude and longitude of the location) as inputs.
Calculating tumor trajectory and dose-of-the-day using cone-beam CT projections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Bernard L., E-mail: bernard.jones@ucdenver.edu; Westerly, David; Miften, Moyed
2015-02-15
Purpose: Cone-beam CT (CBCT) projection images provide anatomical data in real-time over several respiratory cycles, forming a comprehensive picture of tumor movement. The authors developed and validated a method which uses these projections to determine the trajectory of and dose to highly mobile tumors during each fraction of treatment. Methods: CBCT images of a respiration phantom were acquired, the trajectory of which mimicked a lung tumor with high amplitude (up to 2.5 cm) and hysteresis. A template-matching algorithm was used to identify the location of a steel BB in each CBCT projection, and a Gaussian probability density function for the absolute BB position was calculated which best fit the observed trajectory of the BB in the imager geometry. Two modifications of the trajectory reconstruction were investigated: first, using respiratory phase information to refine the trajectory estimation (Phase), and second, using the Monte Carlo (MC) method to sample the estimated Gaussian tumor position distribution. The accuracies of the proposed methods were evaluated by comparing the known and calculated BB trajectories in phantom-simulated clinical scenarios using abdominal tumor volumes. Results: With all methods, the mean position of the BB was determined with accuracy better than 0.1 mm, and root-mean-square trajectory errors averaged 3.8% ± 1.1% of the marker amplitude. Dosimetric calculations using Phase methods were more accurate, with mean absolute error less than 0.5%, and with error less than 1% in the highest-noise trajectory. MC-based trajectories prevent the overestimation of dose, but when viewed in an absolute sense, add a small amount of dosimetric error (<0.1%). Conclusions: Marker trajectory and target dose-of-the-day were accurately calculated using CBCT projections.
This technique provides a method to evaluate highly mobile tumors using ordinary CBCT data, and could facilitate better strategies to mitigate or compensate for motion during stereotactic body radiotherapy.
Bathymetric surveying with GPS and heave, pitch, and roll compensation
Work, P.A.; Hansen, M.; Rogers, W.E.
1998-01-01
Field and laboratory tests of a shipborne hydrographic survey system were conducted. The system consists of two 12-channel GPS receivers (one on-board, one fixed on shore), a digital acoustic fathometer, and a digital heave-pitch-roll (HPR) recorder. Laboratory tests of the HPR recorder and fathometer are documented. Results of field tests of the isolated GPS system and then of the entire suite of instruments are presented. A method for data reduction is developed to account for vertical errors introduced by roll and pitch of the survey vessel, which can be substantial (decimeters). The GPS vertical position data are found to be reliable to 2-3 cm and the fathometer to 5 cm in the laboratory. The field test of the complete system in shallow water (<2 m) indicates absolute vertical accuracy of 10-20 cm. Much of this error is attributed to the fathometer. Careful surveying and equipment setup can minimize systematic error and yield much smaller average errors.
Ecological footprint model using the support vector machine technique.
Ma, Haibo; Chang, Wenjuan; Cui, Guangbai
2012-01-01
The per capita ecological footprint (EF) is one of the most widely recognized measures of environmental sustainability. It aims to quantify the Earth's biological resources required to support human activity. In this paper, we summarize relevant previous literature and present five factors that influence per capita EF. These factors are: national gross domestic product (GDP), urbanization (independent of economic development), distribution of income (measured by the Gini coefficient), export dependence (measured by the percentage of exports to total GDP), and service intensity (measured by the percentage of services to total GDP). A new ecological footprint model based on a support vector machine (SVM), a machine-learning method based on the structural risk minimization principle from statistical learning theory, was constructed to calculate the per capita EF of 24 nations using data from 123 nations. The calculation accuracy was measured by the average absolute error and average relative error, which were 0.004883 and 0.351078%, respectively. Our results demonstrate that the EF model based on SVM has good calculation performance.
Statistical Modeling and Prediction for Tourism Economy Using Dendritic Neural Network
Yu, Ying; Wang, Yirui; Tang, Zheng
2017-01-01
With the impact of global internationalization, the tourism economy has also developed rapidly. The increasing interest aroused by more advanced forecasting methods leads us to innovate forecasting methods. In this paper, the seasonal trend autoregressive integrated moving averages with dendritic neural network model (SA-D model) is proposed to perform tourism demand forecasting. First, we use the seasonal trend autoregressive integrated moving averages model (SARIMA model) to exclude the long-term linear trend, and then train the residual data with the dendritic neural network model to make a short-term prediction. As the results in this paper show, the SA-D model can achieve considerably better predictive performance. In order to demonstrate the effectiveness of the SA-D model, we also use the data that other authors used with other models and compare the results. This also proved that the SA-D model achieved good predictive performance in terms of the normalized mean square error, absolute percentage error, and correlation coefficient. PMID:28246527
Interobserver Reliability of the Total Body Score System for Quantifying Human Decomposition.
Dabbs, Gretchen R; Connor, Melissa; Bytheway, Joan A
2016-03-01
Several authors have tested the accuracy of the Total Body Score (TBS) method for quantifying decomposition, but none have examined the reliability of the method as a scoring system by testing interobserver error rates. Sixteen participants used the TBS system to score 59 observation packets including photographs and written descriptions of 13 human cadavers in different stages of decomposition (postmortem interval: 2-186 days). Data analysis used a two-way random model intraclass correlation in SPSS (v. 17.0). The TBS method showed "almost perfect" agreement between observers, with average absolute correlation coefficients of 0.990 and average consistency correlation coefficients of 0.991. While the TBS method may have sources of error, scoring reliability is not one of them. Individual component scores were examined, and the influences of education and experience levels were investigated. Overall, the trunk component scores were the least concordant. Suggestions are made to improve the reliability of the TBS method. © 2016 American Academy of Forensic Sciences.
Statistical Modeling and Prediction for Tourism Economy Using Dendritic Neural Network.
Yu, Ying; Wang, Yirui; Gao, Shangce; Tang, Zheng
2017-01-01
With the impact of global internationalization, the tourism economy has also developed rapidly. The increasing interest aroused by more advanced forecasting methods leads us to innovate forecasting methods. In this paper, the seasonal trend autoregressive integrated moving averages with dendritic neural network model (SA-D model) is proposed to perform tourism demand forecasting. First, we use the seasonal trend autoregressive integrated moving averages model (SARIMA model) to exclude the long-term linear trend, and then train the residual data with the dendritic neural network model to make a short-term prediction. As the results in this paper show, the SA-D model can achieve considerably better predictive performance. In order to demonstrate the effectiveness of the SA-D model, we also use the data that other authors used with other models and compare the results. This also proved that the SA-D model achieved good predictive performance in terms of the normalized mean square error, absolute percentage error, and correlation coefficient.
[Simulation of cropland soil moisture based on an ensemble Kalman filter].
Liu, Zhao; Zhou, Yan-Lian; Ju, Wei-Min; Gao, Ping
2011-11-01
By using an ensemble Kalman filter (EnKF) to assimilate observed soil moisture data, the modified boreal ecosystem productivity simulator (BEPS) model was adopted to simulate the dynamics of soil moisture in winter wheat root zones at Xuzhou Agro-meteorological Station, Jiangsu Province of China, during the growth seasons of 2000-2004. After assimilation of the observed data, the determination coefficient, root mean square error, and average absolute error of the simulated soil moisture were in the ranges of 0.626-0.943, 0.018-0.042, and 0.021-0.041, respectively, with the simulation precision improved significantly compared with that before assimilation, indicating the applicability of data assimilation in improving the simulation of soil moisture. The experimental results at a single point showed that the errors in the forcing data and observations, and the frequency and soil depth of the assimilation of the observed data, all had obvious effects on the simulated soil moisture.
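The EnKF analysis step that blends a model forecast ensemble with an observation can be sketched for a single scalar observation. This is a textbook stochastic (perturbed-observation) EnKF, not the specifics of the BEPS assimilation; the fixed random seed is only for reproducibility of the sketch:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_var, H):
    """One stochastic EnKF analysis step for a scalar observation.
    ensemble: (n_state, n_ens) forecast members; H: (1, n_state) operator."""
    rng = np.random.default_rng(0)          # fixed seed: reproducible sketch
    n_state, n_ens = ensemble.shape
    mean = ensemble.mean(axis=1, keepdims=True)
    A = ensemble - mean                      # ensemble anomalies
    HX = H @ ensemble                        # observed part of each member
    HA = H @ A
    # Innovation covariance and Kalman gain from ensemble statistics.
    P_hh = (HA @ HA.T) / (n_ens - 1) + obs_var
    K = (A @ HA.T) / (n_ens - 1) / P_hh
    # Each member is nudged toward its own perturbed copy of the observation.
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), n_ens)
    return ensemble + K @ (perturbed - HX)
```

With a very accurate observation (small `obs_var`) the gain approaches one and the posterior ensemble collapses onto the observation, which is the behavior that pulls the simulated soil moisture toward the measurements.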
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aldegunde, Manuel, E-mail: M.A.Aldegunde-Rodriguez@warwick.ac.uk; Kermode, James R., E-mail: J.R.Kermode@warwick.ac.uk; Zabaras, Nicholas
This paper presents the development of a new exchange–correlation functional from the point of view of machine learning. Using atomization energies of solids and small molecules, we train a linear model for the exchange enhancement factor using a Bayesian approach which allows for the quantification of uncertainties in the predictions. A relevance vector machine is used to automatically select the most relevant terms of the model. We then test this model on atomization energies and also on bulk properties. The average model provides a mean absolute error of only 0.116 eV for the test points of the G2/97 set but a larger 0.314 eV for the test solids. In terms of bulk properties, the prediction for transition metals and monovalent semiconductors has a very low test error. However, as expected, predictions for types of materials not represented in the training set such as ionic solids show much larger errors.
Pole of rotation analysis of present-day Juan de Fuca plate motion
NASA Technical Reports Server (NTRS)
Nishimura, C.; Wilson, D. S.; Hey, R. N.
1984-01-01
Convergence rates between the Juan de Fuca and North American plates are calculated by means of their relative, present-day pole of rotation. A method of calculating the propagation of errors, in addition to the instantaneous poles of rotation, is also formulated and applied to determine the Euler pole for Pacific-Juan de Fuca. This pole is vectorially added to previously published poles for North America-Pacific and 'hot spot'-Pacific to obtain North America-Juan de Fuca and 'hot spot'-Juan de Fuca, respectively. The errors associated with these resultant poles are determined by propagating the errors of the two summed angular velocity vectors. Under the assumption that hot spots are fixed with respect to a mantle reference frame, the average absolute velocity of the Juan de Fuca plate is computed at approximately 15 mm/yr, thereby making it the slowest-moving of the oceanic plates.
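Composing relative rotations, as done here to obtain North America-Juan de Fuca, is plain vector addition of angular-velocity (Euler) vectors, and the surface velocity follows from v = ω × r. A generic sketch with convenient units (a pole rate in deg/Myr and a radius in km give mm/yr directly, since 1 km/Myr = 1 mm/yr); the pole positions and rates below are illustrative, not the paper's values:

```python
import numpy as np

def euler_vector(lat_deg, lon_deg, rate_deg_per_myr):
    """Angular-velocity vector (rad/Myr) from a pole position and rate."""
    lat, lon = np.radians([lat_deg, lon_deg])
    w = np.radians(rate_deg_per_myr)
    return w * np.array([np.cos(lat) * np.cos(lon),
                         np.cos(lat) * np.sin(lon),
                         np.sin(lat)])

def surface_velocity(omega, lat_deg, lon_deg, radius_km=6371.0):
    """Plate velocity v = omega x r at a surface point, in mm/yr."""
    lat, lon = np.radians([lat_deg, lon_deg])
    r = radius_km * np.array([np.cos(lat) * np.cos(lon),
                              np.cos(lat) * np.sin(lon),
                              np.sin(lat)])
    return np.cross(omega, r)   # km/Myr, numerically equal to mm/yr

# Composition of relative motions: omega_AC = omega_AB + omega_BC.
# e.g. w_na_jdf = w_na_pac + w_pac_jdf  (hypothetical poles)
```

A rotation of 1 deg/Myr about the north pole gives an equatorial speed of about 111 mm/yr, a handy sanity check on the unit bookkeeping.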
Measurement effects of seasonal and monthly variability on pedometer-determined data.
Kang, Minsoo; Bassett, David R; Barreira, Tiago V; Tudor-Locke, Catrine; Ainsworth, Barbara E
2012-03-01
The seasonal and monthly variability of pedometer-determined physical activity and its effects on accurate measurement have not been examined. The purpose of the study was to reduce measurement error in step-count data by controlling a) the length of the measurement period and b) the season or month of the year in which sampling was conducted. Twenty-three middle-aged adults were instructed to wear a Yamax SW-200 pedometer over 365 consecutive days. The step-count measurement periods of various lengths (eg, 2, 3, 4, 5, 6, 7 days, etc.) were randomly selected 10 times for each season and month. To determine accurate estimates of yearly step-count measurement, mean absolute percentage error (MAPE) and bias were calculated. The year-round average was considered as a criterion measure. A smaller MAPE and bias represent a better estimate. Differences in MAPE and bias among seasons were trivial; however, they varied among different months. The months in which seasonal changes occur presented the highest MAPE and bias. Targeting the data collection during certain months (eg, May) may reduce pedometer measurement error and provide more accurate estimates of year-round averages.
Analysis and application of classification methods of complex carbonate reservoirs
NASA Astrophysics Data System (ADS)
Li, Xiongyan; Qin, Ruibao; Ping, Haitao; Wei, Dan; Liu, Xiaomei
2018-06-01
There are abundant carbonate reservoirs from the Cenozoic to Mesozoic era in the Middle East. Due to variation in the sedimentary environment and diagenetic processes of carbonate reservoirs, several porosity types coexist. Because of the complex lithologies and pore types, as well as the impact of microfractures, the pore structure is very complicated, and it is therefore difficult to accurately calculate reservoir parameters. In order to accurately evaluate carbonate reservoirs, classification methods based on capillary pressure curves and on flow units are analyzed, building on the pore structure evaluation of carbonate reservoirs. Although carbonate reservoirs can be classified on the basis of capillary pressure curves, the resulting relationship between porosity and permeability is not ideal. On the basis of flow units, a high-precision functional relationship between porosity and permeability after classification can be established, so carbonate reservoirs can be quantitatively evaluated through the classification of flow units. In the dolomite reservoirs, the average absolute error of calculated permeability decreases from 15.13 to 7.44 mD. Similarly, the average absolute error of calculated permeability of the limestone reservoirs is reduced from 20.33 to 7.37 mD. Only by accurately characterizing pore structures and classifying reservoir types can reservoir parameters be calculated accurately. Therefore, characterizing pore structures and classifying reservoir types are very important for accurate evaluation of complex carbonate reservoirs in the Middle East.
Prediction of stream volatilization coefficients
Rathbun, Ronald E.
1990-01-01
Equations are developed for predicting the liquid-film and gas-film reference-substance parameters for quantifying volatilization of organic solutes from streams. Molecular weight and molecular-diffusion coefficients of the solute are used as correlating parameters. Equations for predicting molecular-diffusion coefficients of organic solutes in water and air are developed, with molecular weight and molal volume as parameters. Mean absolute errors of prediction for diffusion coefficients in water are 9.97% for the molecular-weight equation, 6.45% for the molal-volume equation. The mean absolute error for the diffusion coefficient in air is 5.79% for the molal-volume equation. Molecular weight is not a satisfactory correlating parameter for diffusion in air because two equations are necessary to describe the values in the data set. The best predictive equation for the liquid-film reference-substance parameter has a mean absolute error of 5.74%, with molal volume as the correlating parameter. The best equation for the gas-film parameter has a mean absolute error of 7.80%, with molecular weight as the correlating parameter.
Ding, Yi; Peng, Kai; Yu, Miao; Lu, Lei; Zhao, Kun
2017-08-01
The performance of two selected-spatial-frequency phase unwrapping methods is limited by a phase error bound beyond which errors occur in the fringe order, leading to a significant error in the recovered absolute phase map. In this paper, we propose a method to detect and correct wrong fringe orders. Two constraints are introduced during the fringe order determination of the two selected-spatial-frequency phase unwrapping methods, and a strategy to detect and correct wrong fringe orders is described. Compared with existing methods, we do not need to estimate a threshold on absolute phase values to determine fringe order errors, which makes the approach more reliable and avoids a search procedure when detecting and correcting successive fringe order errors. The effectiveness of the proposed method is validated by experimental results.
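For context, the basic two-frequency fringe-order rule that such methods build on can be sketched as below. This is the standard temporal unwrapping step, shown generically; the paper's two selected-frequency scheme and its correction constraints are more involved:

```python
import math

def fringe_order(phi_high, phi_low, freq_ratio):
    """Fringe order of the wrapped high-frequency phase, obtained by
    scaling the (absolute) low-frequency phase by the frequency ratio."""
    return round((freq_ratio * phi_low - phi_high) / (2 * math.pi))

def unwrap(phi_high, phi_low, freq_ratio):
    """Absolute high-frequency phase = wrapped phase + 2*pi*fringe order."""
    k = fringe_order(phi_high, phi_low, freq_ratio)
    return phi_high + 2 * math.pi * k
```

When noise pushes the argument of `round` near a half-integer, the fringe order jumps by one, which is exactly the error class the paper's constraints are designed to detect and correct.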
Instrument Pointing Capabilities: Past, Present, and Future
NASA Technical Reports Server (NTRS)
Blackmore, Lars; Murray, Emmanuell; Scharf, Daniel P.; Aung, Mimi; Bayard, David; Brugarolas, Paul; Hadaegh, Fred; Lee, Allan; Milman, Mark; Sirlin, Sam;
2011-01-01
This paper surveys the instrument pointing capabilities of past, present and future space telescopes and interferometers. As an important aspect of this survey, we present a taxonomy for "apples-to-apples" comparisons of pointing performances. First, pointing errors are defined relative to either an inertial frame or a celestial target. Pointing error can then be further sub-divided into DC, that is, steady state, and AC components. We refer to the magnitude of the DC error relative to the inertial frame as absolute pointing accuracy, and we refer to the magnitude of the DC error relative to a celestial target as relative pointing accuracy. The magnitude of the AC error is referred to as pointing stability. While an AC/DC partition is not new, we leverage previous work by some of the authors to quantitatively clarify and compare varying definitions of jitter and time window averages. With this taxonomy and for sixteen past, present, and future missions, pointing accuracies and stabilities, both required and achieved, are presented. In addition, we describe the attitude control technologies used to and, for future missions, planned to achieve these pointing performances.
NASA Astrophysics Data System (ADS)
Wu, Wei; Xu, An-Ding; Liu, Hong-Bin
2015-01-01
Climate data in gridded format are critical for understanding climate change and its impact on the eco-environment. The aim of the current study is to develop spatial databases for three climate variables (maximum temperature, minimum temperature, and relative humidity) over a large region with complex topography in southwestern China. Five widely used approaches, including inverse distance weighting, ordinary kriging, universal kriging, co-kriging, and thin-plate smoothing spline, were tested. Root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE) showed that thin-plate smoothing spline with latitude, longitude, and elevation outperformed the other models. Average RMSE, MAE, and MAPE of the best models were 1.16 °C, 0.74 °C, and 7.38 % for maximum temperature; 0.826 °C, 0.58 °C, and 6.41 % for minimum temperature; and 3.44, 2.28, and 3.21 % for relative humidity, respectively. Spatial datasets of annual and monthly climate variables with 1-km resolution covering the period 1961-2010 were then obtained using the best-performing methods. A comparative study showed that the current outcomes were in good agreement with public datasets. Based on the gridded datasets, changes in temperature variables were investigated across the study area. Future study might be needed to capture the uncertainty induced by environmental conditions through remote sensing and knowledge-based methods.
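The three error metrics used in this record (and in several others below) have standard definitions; a minimal Python sketch, with function names that are illustrative rather than taken from any of the studies:

```python
import numpy as np

def rmse(obs, pred):
    """Root mean square error."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(np.sqrt(np.mean((obs - pred) ** 2)))

def mae(obs, pred):
    """Mean absolute error."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(np.mean(np.abs(obs - pred)))

def mape(obs, pred):
    """Mean absolute percentage error, in percent (observations must be nonzero)."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(np.mean(np.abs((obs - pred) / obs)) * 100.0)
```

For example, with observations [10, 20] and predictions [12, 18], MAE and RMSE are both 2.0 and MAPE is 15%.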
Wu, Wei; Guo, Junqiao; An, Shuyi; Guan, Peng; Ren, Yangwu; Xia, Linzi; Zhou, Baosen
2015-01-01
Cases of hemorrhagic fever with renal syndrome (HFRS) are widely distributed in eastern Asia, especially in China, Russia, and Korea. Eliminating HFRS completely has proved difficult because of the diverse animal reservoirs and the effects of global warming. Reliable forecasting is useful for the prevention and control of HFRS. Two hybrid models, one combining a nonlinear autoregressive neural network (NARNN) with an autoregressive integrated moving average (ARIMA) model, and the other combining a generalized regression neural network (GRNN) with ARIMA, were constructed to predict the incidence of HFRS in the following year. The performance of the two hybrid models was compared with that of the ARIMA model. The ARIMA, ARIMA-NARNN, and ARIMA-GRNN models all fitted and predicted the seasonal fluctuation well. Among the three models, the mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) of the ARIMA-NARNN hybrid model were the lowest in both the modeling and forecasting stages. For the ARIMA-GRNN hybrid model, the MSE, MAE and MAPE of the modeling performance and the MSE and MAE of the forecasting performance were lower than those of the ARIMA model, but the MAPE of the forecasting performance did not improve. Developing and applying the ARIMA-NARNN hybrid model is an effective way to better understand the epidemic characteristics of HFRS and could be helpful for its prevention and control.
Why GPS makes distances bigger than they are
Ranacher, Peter; Brunauer, Richard; Trutschnig, Wolfgang; Van der Spek, Stefan; Reich, Siegfried
2016-01-01
Global navigation satellite systems such as the Global Positioning System (GPS) are among the most important sensors for movement analysis. GPS is widely used to record the trajectories of vehicles, animals and human beings. However, all GPS movement data are affected by both measurement and interpolation errors. In this article we show that measurement error causes a systematic bias in distances recorded with a GPS; the distance between two points recorded with a GPS is, on average, bigger than the true distance between these points. This systematic ‘overestimation of distance’ becomes relevant if the influence of interpolation error can be neglected, which in practice is the case for movement sampled at high frequencies. We provide a mathematical explanation of this phenomenon and illustrate that it functionally depends on the autocorrelation of GPS measurement error (C). We argue that C can be interpreted as a quality measure for movement data recorded with a GPS. If there is a strong autocorrelation between any two consecutive position estimates, they have very similar errors. These errors cancel out when average speed, distance or direction is calculated along the trajectory. Based on our theoretical findings we introduce a novel approach to determine C in real-world GPS movement data sampled at high frequencies. We apply our approach to pedestrian trajectories and car trajectories. We found that the measurement error in the data was strongly spatially and temporally autocorrelated, and give a quality estimate of the data. Most importantly, our findings are not limited to GPS alone. The systematic bias and its implications are bound to occur in any movement data collected with absolute positioning if interpolation error can be neglected. PMID:27019610
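The systematic overestimation described in this abstract is easy to reproduce numerically: adding independent zero-mean noise to two endpoints makes the expected measured distance exceed the true distance. A hedged Monte Carlo sketch, with parameters that are illustrative rather than taken from the article:

```python
import numpy as np

rng = np.random.default_rng(0)
true_a = np.array([0.0, 0.0])
true_b = np.array([10.0, 0.0])
true_dist = np.linalg.norm(true_b - true_a)  # true separation: 10 units

# Independent 2-D Gaussian measurement noise on each endpoint.
n, sigma = 100_000, 1.0
noisy_a = true_a + rng.normal(0.0, sigma, size=(n, 2))
noisy_b = true_b + rng.normal(0.0, sigma, size=(n, 2))
measured = np.linalg.norm(noisy_b - noisy_a, axis=1)

# Positive bias: on average, the measured distance overestimates the truth.
bias = measured.mean() - true_dist
```

With these settings the bias comes out near +0.1. Strongly autocorrelated errors on consecutive fixes would partially cancel in the difference, which is exactly the role the article assigns to C.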
A Hybrid Model for Predicting the Prevalence of Schistosomiasis in Humans of Qianjiang City, China
Wang, Ying; Lu, Zhouqin; Tian, Lihong; Tan, Li; Shi, Yun; Nie, Shaofa; Liu, Li
2014-01-01
Background/Objective Schistosomiasis is still a major public health problem in China, despite the fact that the government has implemented a series of strategies to prevent and control the spread of the parasitic disease. Advance warning and reliable forecasting can help policymakers to adjust and implement strategies more effectively, which will lead to the control and elimination of schistosomiasis. Our aim is to explore the application of a hybrid forecasting model to track the trends of the prevalence of schistosomiasis in humans, which provides a methodological basis for predicting and detecting schistosomiasis infection in endemic areas. Methods A hybrid approach combining the autoregressive integrated moving average (ARIMA) model and the nonlinear autoregressive neural network (NARNN) model was used to forecast the prevalence of schistosomiasis over the following four years. Forecasting performance was compared among the hybrid ARIMA-NARNN model, the single ARIMA model and the single NARNN model. Results The modeling mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) of the ARIMA-NARNN model were 0.1869×10−4, 0.0029 and 0.0419, with corresponding testing errors of 0.9375×10−4, 0.0081 and 0.9064, respectively. These error values generated with the hybrid model were all lower than those obtained from the single ARIMA or NARNN model. The forecast values were 0.75%, 0.80%, 0.76% and 0.77% for the following four years, indicating no downward trend. Conclusion The hybrid model has high prediction accuracy for the prevalence of schistosomiasis, which provides a methodological basis for future schistosomiasis monitoring and control strategies in the study area. It is worth attempting to apply the hybrid detection scheme in other schistosomiasis-endemic areas and to other infectious diseases. PMID:25119882
Unbiased symmetric metrics provide a useful measure to quickly compare two datasets, with similar interpretations for both under and overestimations. Two examples include the normalized mean bias factor and normalized mean absolute error factor. However, the original formulations...
Globular Clusters: Absolute Proper Motions and Galactic Orbits
NASA Astrophysics Data System (ADS)
Chemel, A. A.; Glushkova, E. V.; Dambis, A. K.; Rastorguev, A. S.; Yalyalieva, L. N.; Klinichev, A. D.
2018-04-01
We cross-match objects from several different astronomical catalogs to determine the absolute proper motions of stars within the 30-arcmin radius fields of 115 Milky-Way globular clusters with an accuracy of 1-2 mas yr-1. The proper motions are based on positional data recovered from the USNO-B1, 2MASS, URAT1, ALLWISE, UCAC5, and Gaia DR1 surveys with up to ten positions spanning an epoch difference of up to about 65 years, and are reduced to the Gaia DR1 TGAS frame using UCAC5 as the reference catalog. Cluster members are photometrically identified by selecting horizontal- and red-giant-branch stars on color-magnitude diagrams, and the mean absolute proper motions of the clusters, with a typical formal error of about 0.4 mas yr-1, are computed by averaging the proper motions of selected members. The inferred absolute proper motions of the clusters are combined with available radial-velocity data and heliocentric distance estimates to compute the cluster orbits in terms of Galactic potential models based on a Miyamoto-Nagai disk, a Hernquist spheroid, and a modified isothermal dark-matter halo (an axisymmetric model without a bar), and the same model plus a rotating Ferrers bar (non-axisymmetric). Five distant clusters have higher-than-escape velocities, most likely due to large errors in the computed transverse velocities, whereas the computed orbits of all other clusters remain bound to the Galaxy. Unlike previously published results, we find the bar to substantially affect the orbits of most of the clusters, even those at large Galactocentric distances, bringing appreciable chaotization, especially in the portions of the orbits close to the Galactic center, and stretching out the orbits of some of the thick-disk clusters.
A soft-computing methodology for noninvasive time-spatial temperature estimation.
Teixeira, César A; Ruano, Maria Graça; Ruano, António E; Pereira, Wagner C A
2008-02-01
The safe and effective application of thermal therapies is restricted by the lack of reliable noninvasive temperature estimators. In this paper, the temporal echo shifts of backscattered ultrasound signals, collected from a gel-based phantom, were tracked and, together with past temperature values, used as input information for radial basis function neural networks. The phantom was heated using a piston-like therapeutic ultrasound transducer. The neural models were assigned to estimate the temperature at different intensities and at points arranged along the therapeutic transducer's radial line (60 mm from the transducer face). Model inputs, as well as the number of neurons, were selected using a multiobjective genetic algorithm (MOGA). The best attained models present, on average, a maximum absolute error of less than 0.5 degrees C, which is regarded as the borderline between a reliable and an unreliable estimator in hyperthermia/diathermia. In order to test the spatial generalization capacity, the best models were tested using spatial points not previously assessed, and some of them presented a maximum absolute error below 0.5 degrees C, being "elected" as the best models. It should also be stressed that these best models have low implementational complexity, as desired for real-time applications.
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
2016-02-01
Multiresolution analysis techniques including continuous wavelet transform, empirical mode decomposition, and variational mode decomposition are tested in the context of interest rate next-day variation prediction. In particular, multiresolution analysis techniques are used to decompose interest rate actual variation and feedforward neural network for training and prediction. Particle swarm optimization technique is adopted to optimize its initial weights. For comparison purpose, autoregressive moving average model, random walk process and the naive model are used as main reference models. In order to show the feasibility of the presented hybrid models that combine multiresolution analysis techniques and feedforward neural network optimized by particle swarm optimization, we used a set of six illustrative interest rates; including Moody's seasoned Aaa corporate bond yield, Moody's seasoned Baa corporate bond yield, 3-Month, 6-Month and 1-Year treasury bills, and effective federal fund rate. The forecasting results show that all multiresolution-based prediction systems outperform the conventional reference models on the criteria of mean absolute error, mean absolute deviation, and root mean-squared error. Therefore, it is advantageous to adopt hybrid multiresolution techniques and soft computing models to forecast interest rate daily variations as they provide good forecasting performance.
Multiple regression technique for Pth degree polynomials with and without linear cross products
NASA Technical Reports Server (NTRS)
Davis, J. W.
1973-01-01
A multiple regression technique was developed by which the nonlinear behavior of specified independent variables can be related to a given dependent variable. The polynomial expression can be of Pth degree and can incorporate N independent variables. Two cases are treated such that mathematical models can be studied both with and without linear cross products. The resulting surface fits can be used to summarize trends for a given phenomenon and provide a mathematical relationship for subsequent analysis. To implement this technique, separate computer programs were developed for the case without linear cross products and for the case incorporating such cross products; these programs evaluate the various constants in the model regression equation. In addition, the significance of the estimated regression equation is considered, and the standard deviation, the F statistic, the maximum absolute percent error, and the average of the absolute values of the percent error are evaluated. The computer programs and their manner of utilization are described. Sample problems are included to illustrate the use and capability of the technique; they show the output formats and typical plots comparing computer results to each set of input data.
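The regression described here reduces to ordinary least squares over polynomial terms. The original computer programs are not reproduced in this record; the following sketch (second degree, two variables, names illustrative) shows the with/without-cross-product distinction:

```python
import numpy as np

def fit_poly2(x1, x2, y, cross=True):
    """Least-squares fit of y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2
    (+ b5*x1*x2 when cross=True). Returns coefficients and fitted values."""
    x1, x2, y = (np.asarray(v, float) for v in (x1, x2, y))
    cols = [np.ones_like(x1), x1, x2, x1 ** 2, x2 ** 2]
    if cross:
        cols.append(x1 * x2)          # the linear cross-product term
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef, X @ coef
```

Setting `cross=False` drops the x1*x2 column, mirroring the two cases treated in the report; error summaries such as the maximum absolute percent error can then be computed from `y` and the fitted values.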
Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam Sm, Jahangir
2016-10-14
A piezo-resistive pressure sensor is made of silicon, whose nature is considerably influenced by ambient temperature. The effect of temperature should be eliminated during operation if a linear output is expected. To deal with this issue, an approach consisting of a hybrid kernel Least Squares Support Vector Machine (LSSVM) optimized by a chaotic ions motion algorithm is presented. To achieve excellent learning and generalization performance, a hybrid kernel function, constructed from a local kernel (a Radial Basis Function (RBF) kernel) and a global kernel (a polynomial kernel), is incorporated into the Least Squares Support Vector Machine. The chaotic ions motion algorithm is introduced to find the best hyper-parameters of the LSSVM. Temperature data from a calibration experiment are used to validate the proposed method. With attention to algorithm robustness and engineering applications, the compensation results show that the proposed scheme outperforms the other compared methods on several performance measures, such as the maximum absolute relative error, the minimum absolute relative error, and the mean and variance of the averaged value over fifty runs. Furthermore, the proposed temperature compensation approach lays a foundation for more extensive research.
Improved accuracy of intraocular lens power calculation with the Zeiss IOLMaster.
Olsen, Thomas
2007-02-01
This study aimed to demonstrate how the level of accuracy in intraocular lens (IOL) power calculation can be improved with optical biometry using partial optical coherence interferometry (PCI) (Zeiss IOLMaster) and current anterior chamber depth (ACD) prediction algorithms. Intraocular lens power in 461 consecutive cataract operations was calculated using both PCI and ultrasound, and the accuracy of the results of each technique was compared. To illustrate the importance of ACD prediction per se, predictions were calculated using both a recently published 5-variable method and the Haigis 2-variable method, and the results compared. All calculations were optimized in retrospect to account for systematic errors, including IOL constants and other offset errors. The average absolute IOL prediction error (observed minus expected refraction) was 0.65 dioptres with ultrasound and 0.43 D with PCI using the 5-variable ACD prediction method (p < 0.00001). The number of predictions within +/- 0.5 D, +/- 1.0 D and +/- 2.0 D of the expected outcome was 62.5%, 92.4% and 99.9% with PCI, compared with 45.5%, 77.3% and 98.4% with ultrasound, respectively (p < 0.00001). The 2-variable ACD method resulted in an average error in PCI predictions of 0.46 D, which was significantly higher than the error of the 5-variable method (p < 0.001). The accuracy of IOL power calculation can be significantly improved using calibrated axial length readings obtained with PCI and modern IOL power calculation formulas incorporating the latest generation of ACD prediction algorithms.
Calculating the Solubilities of Drugs and Drug-Like Compounds in Octanol.
Alantary, Doaa; Yalkowsky, Samuel
2016-09-01
A modification of the Van't Hoff equation is used to predict the solubility of organic compounds in dry octanol. The new equation describes a linear relationship between the logarithm of the solubility of a solute in octanol and its melting temperature. More than 620 experimentally measured octanol solubilities, collected from the literature, are used to validate the equation without using any regression or fitting. The average absolute error of the prediction is 0.66 log units. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Gao, Zhi-yu; Kang, Yu; Li, Yan-shuai; Meng, Chao; Pan, Tao
2018-04-01
The elevated-temperature flow behavior of a novel Ni-Cr-Mo-B ultra-heavy-plate steel was investigated by conducting hot compressive deformation tests on a Gleeble-3800 thermo-mechanical simulator over a temperature range of 1123 K–1423 K, with a strain rate range from 0.01 s‑1 to 10 s‑1 and a height reduction of 70%. Based on the experimental results, a classic strain-compensated Arrhenius-type model, a new revised strain-compensated Arrhenius-type model and a classic modified Johnson-Cook constitutive model were developed for predicting the high-temperature deformation behavior of the steel. The predictability of these models was comparatively evaluated in terms of statistical parameters including the correlation coefficient (R), average absolute relative error (AARE), average root mean square error (RMSE), normalized mean bias error (NMBE) and relative error. The statistical results indicate that the new revised strain-compensated Arrhenius-type model accurately predicts the elevated-temperature flow stress of the steel over the entire range of process conditions. The values predicted by the classic modified Johnson-Cook model, however, do not agree well with the experimental values; the classic strain-compensated Arrhenius-type model tracks the deformation behavior more accurately than the modified Johnson-Cook model, but less accurately than the new revised strain-compensated Arrhenius-type model. In addition, the reasons for the differences in predictability of these models are discussed in detail.
NASA Astrophysics Data System (ADS)
Radziukynas, V.; Klementavičius, A.
2016-04-01
The paper analyses the performance of the recently developed short-term forecasting suite for the Latvian power system. The system load and wind power are forecasted using ANN and ARIMA models, respectively, and the forecasting accuracy is evaluated in terms of errors, mean absolute errors and mean absolute percentage errors. The influence of additional input variables on load forecasting errors is investigated. The interplay of hourly load and wind power forecasting errors is also evaluated for the Latvian power system with historical loads (the year 2011) and planned wind power capacities (the year 2023).
The computer speed of SMVGEAR II was improved markedly on scalar and vector machines with relatively little loss in accuracy. The improvement was due to a method of frequently recalculating the absolute error tolerance instead of keeping it constant for a given set of chemistry. ...
3D measurement using combined Gray code and dual-frequency phase-shifting approach
NASA Astrophysics Data System (ADS)
Yu, Shuang; Zhang, Jing; Yu, Xiaoyang; Sun, Xiaoming; Wu, Haibin; Liu, Xin
2018-04-01
The combined Gray code and phase-shifting approach is a commonly used 3D measurement technique. In this technique, an error that equals integer multiples of the phase-shifted fringe period, i.e. period jump error, often exists in the absolute analog code, which can lead to gross measurement errors. To overcome this problem, the present paper proposes 3D measurement using a combined Gray code and dual-frequency phase-shifting approach. Based on 3D measurement using the combined Gray code and phase-shifting approach, one set of low-frequency phase-shifted fringe patterns with an odd-numbered multiple of the original phase-shifted fringe period is added. Thus, the absolute analog code measured value can be obtained by the combined Gray code and phase-shifting approach, and the low-frequency absolute analog code measured value can also be obtained by adding low-frequency phase-shifted fringe patterns. Then, the corrected absolute analog code measured value can be obtained by correcting the former by the latter, and the period jump errors can be eliminated, resulting in reliable analog code unwrapping. For the proposed approach, we established its measurement model, analyzed its measurement principle, expounded the mechanism of eliminating period jump errors by error analysis, and determined its applicable conditions. Theoretical analysis and experimental results show that the proposed approach can effectively eliminate period jump errors, reliably perform analog code unwrapping, and improve the measurement accuracy.
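A common way to implement the correction idea described above is to use the low-frequency absolute phase to select the fringe order of the high-frequency wrapped phase. The following is a generic dual-frequency unwrapping sketch, not necessarily the exact scheme of this paper:

```python
import numpy as np

def unwrap_dual_freq(phi_high, phi_low, ratio):
    """Recover the absolute high-frequency phase from a wrapped
    high-frequency phase and an (already absolute) low-frequency phase.
    ratio = period_low / period_high, here assumed an odd integer.
    The fringe order k is the integer that best reconciles the two maps."""
    k = np.round((ratio * phi_low - phi_high) / (2.0 * np.pi))
    return phi_high + 2.0 * np.pi * k
```

Because k is obtained by rounding a consistency residual rather than by thresholding, small phase noise does not change k until the residual exceeds half a period, which is the sense in which the low-frequency map suppresses period jump errors.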
NASA Technical Reports Server (NTRS)
Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan
2013-01-01
A goal of the Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission is to observe high-accuracy, long-term climate change trends over decadal time scales. The key to such a goal is improving the accuracy of SI-traceable absolute calibration across infrared and reflected solar wavelengths, allowing climate change to be separated from the limit of natural variability. The advances required to reach the on-orbit absolute accuracy that would allow climate change observations to survive data gaps exist at NIST in the laboratory, but it still needs to be demonstrated that they can be transferred successfully from the laboratory to NASA and/or instrument vendor capabilities for spaceborne instruments. The current work describes the radiometric calibration error budget for the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. The goal of the CDS is to allow the testing and evaluation of calibration approaches, alternate design and/or implementation approaches, and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. The resulting SI-traceable error budget for reflectance retrieval using solar irradiance as a reference, and methods for laboratory-based absolute calibration suitable for climate-quality data collections, are given. Key components in the error budget are geometry differences between the solar and earth views, knowledge of attenuator behavior when viewing the sun, and sensor behavior such as detector linearity and noise. Methods for demonstrating this error budget are also presented.
Cossich, Victor; Mallrich, Frédéric; Titonelli, Victor; de Sousa, Eduardo Branco; Velasques, Bruna; Salles, José Inácio
2014-01-01
To ascertain whether the proprioceptive deficit in the sense of joint position continues to be present when patients with a limb presenting a deficient anterior cruciate ligament (ACL) are assessed by testing their active reproduction of joint position, in comparison with the contralateral limb. Twenty patients with unilateral ACL tearing participated in the study. Their active reproduction of joint position in the limb with the deficient ACL and in the healthy contralateral limb was tested. Meta-positions of 20% and 50% of the maximum joint range of motion were used. Proprioceptive performance was determined through the values of the absolute error, variable error and constant error. Significant differences in absolute error were found at both of the positions evaluated, and in constant error at 50% of the maximum joint range of motion. When evaluated in terms of absolute error, the proprioceptive deficit continues to be present even when an active evaluation of the sense of joint position is made. Consequently, this sense involves activity of both intramuscular and tendon receptors.
NASA Astrophysics Data System (ADS)
Baltzer, M.; Craig, D.; den Hartog, D. J.; Nornberg, M. D.; Munaretto, S.
2015-11-01
An Ion Doppler Spectrometer (IDS) is used on MST for high time-resolution passive and active measurements of impurity ion emission. Absolutely calibrated measurements of flow are difficult because the spectrometer records data within 0.3 nm of the C+5 line of interest, and commercial calibration lamps do not produce lines in this narrow range. A novel optical system was designed to absolutely calibrate the IDS. The device uses a UV LED to produce a broad emission curve in the desired region. A Fabry-Perot etalon filters this light, cutting transmittance peaks into the pattern of the LED emission. An optical train of fused silica lenses focuses the light into the IDS at f/4. A holographic diffuser blurs the light cone to increase homogeneity. Using this light source, the absolute Doppler shift of ion emissions can be measured in MST plasmas. In combination with charge exchange recombination spectroscopy, localized ion velocities can now be measured. Previously, a time-averaged measurement along the chord bisecting the poloidal plane was used to calibrate the IDS; the quality of these central-chord calibrations can be characterized with our absolute calibration. Calibration errors may also be quantified and minimized by optimizing the curve-fitting process. Preliminary measurements of toroidal velocity in locked and rotating plasmas will be shown. This work has been supported by the US DOE.
Impact of Spatial Soil and Climate Input Data Aggregation on Regional Yield Simulations
Hoffmann, Holger; Zhao, Gang; Asseng, Senthold; Bindi, Marco; Biernath, Christian; Constantin, Julie; Coucheney, Elsa; Dechow, Rene; Doro, Luca; Eckersten, Henrik; Gaiser, Thomas; Grosz, Balázs; Heinlein, Florian; Kassie, Belay T.; Kersebaum, Kurt-Christian; Klein, Christian; Kuhnert, Matthias; Lewan, Elisabet; Moriondo, Marco; Nendel, Claas; Priesack, Eckart; Raynal, Helene; Roggero, Pier P.; Rötter, Reimund P.; Siebert, Stefan; Specka, Xenia; Tao, Fulu; Teixeira, Edmar; Trombi, Giacomo; Wallach, Daniel; Weihermüller, Lutz; Yeluripati, Jagadeesh; Ewert, Frank
2016-01-01
We show the error in water-limited yields simulated by crop models which is associated with spatially aggregated soil and climate input data. Crop simulations at large scales (regional, national, continental) frequently use input data of low resolution. Therefore, climate and soil data are often generated via averaging and sampling by area majority. This may bias simulated yields at large scales, varying largely across models. Thus, we evaluated the error associated with spatially aggregated soil and climate data for 14 crop models. Yields of winter wheat and silage maize were simulated under water-limited production conditions. We calculated this error from crop yields simulated at spatial resolutions from 1 to 100 km for the state of North Rhine-Westphalia, Germany. Most models showed yields biased by <15% when aggregating only soil data. The relative mean absolute error (rMAE) of most models using aggregated soil data was in the range or larger than the inter-annual or inter-model variability in yields. This error increased further when both climate and soil data were aggregated. Distinct error patterns indicate that the rMAE may be estimated from few soil variables. Illustrating the range of these aggregation effects across models, this study is a first step towards an ex-ante assessment of aggregation errors in large-scale simulations. PMID:27055028
Absolute calibration of optical flats
Sommargren, Gary E.
2005-04-05
The invention uses the phase shifting diffraction interferometer (PSDI) to provide a true point-by-point measurement of absolute flatness over the surface of optical flats. Beams exiting the fiber optics in a PSDI have perfect spherical wavefronts. The measurement beam is reflected from the optical flat and passed through an auxiliary optic to then be combined with the reference beam on a CCD. The combined beams include phase errors due to both the optic under test and the auxiliary optic. Standard phase extraction algorithms are used to calculate this combined phase error. The optical flat is then removed from the system and the measurement fiber is moved to recombine the two beams. The newly combined beams include only the phase errors due to the auxiliary optic. When the second phase measurement is subtracted from the first phase measurement, the absolute phase error of the optical flat is obtained.
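The core of the PSDI calibration described above is a two-measurement subtraction. A schematic numerical sketch, with synthetic phase maps standing in for the CCD measurements:

```python
import numpy as np

# Synthetic phase maps (radians) over a 64x64 pupil. In the real instrument
# these come from phase extraction on the interferograms; here they are random.
rng = np.random.default_rng(1)
flat_error = rng.normal(0.0, 0.05, size=(64, 64))  # unknown figure error of the flat
aux_error = rng.normal(0.0, 0.10, size=(64, 64))   # error of the auxiliary optic

measurement_1 = flat_error + aux_error  # flat in the beam path: both errors combine
measurement_2 = aux_error               # flat removed, fibers recombined: aux only

# Subtracting the second measurement cancels the auxiliary-optic contribution,
# leaving the absolute phase error of the optical flat.
recovered_flat = measurement_1 - measurement_2
```

The subtraction is what makes the measurement "absolute": no reference flat is assumed, only the spherical wavefronts from the fibers and the repeatability of the auxiliary optic between the two configurations.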
Modeling Heavy/Medium-Duty Fuel Consumption Based on Drive Cycle Properties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Lijuan; Duran, Adam; Gonder, Jeffrey
This paper presents multiple methods for predicting heavy/medium-duty vehicle fuel consumption based on driving cycle information. A polynomial model, a black box artificial neural net model, a polynomial neural network model, and a multivariate adaptive regression splines (MARS) model were developed and verified using data collected from chassis testing performed on a parcel delivery diesel truck operating over the Heavy Heavy-Duty Diesel Truck (HHDDT), City Suburban Heavy Vehicle Cycle (CSHVC), New York Composite Cycle (NYCC), and hydraulic hybrid vehicle (HHV) drive cycles. Each model was trained using one of the four drive cycles as a training cycle and the other three as testing cycles. By comparing the training and testing results, a representative training cycle was chosen and used to further tune each method. HHDDT as the training cycle gave the best predictive results, because HHDDT contains a variety of drive characteristics, such as high speed, acceleration, idling, and deceleration. Among the four model approaches, MARS gave the best predictive performance, with an average absolute percent error of -1.84% over the four chassis dynamometer drive cycles. To further evaluate the accuracy of the predictive models, the approaches were applied to real-world data. MARS outperformed the other three approaches, providing an average absolute percent error of -2.2% over four real-world road segments. The MARS model performance over the HHDDT, CSHVC, NYCC, and HHV drive cycles was then compared with that of the Future Automotive System Technology Simulator (FASTSim). The results indicated that the MARS method achieved predictive performance comparable to FASTSim.
Instrumentation for measuring dynamic spinal load moment exposures in the workplace.
Marras, William S; Lavender, Steven A; Ferguson, Sue A; Splittstoesser, Riley E; Yang, Gang; Schabo, Pete
2010-02-01
Prior research has shown load moment exposure to be one of the strongest predictors of low back disorder risk in manufacturing jobs. However, extending these findings to the manual lifting and handling of materials in distribution centers, where the layout of the lifting task changes from one lift to the next and the lifts are highly dynamic, would be very challenging without an automated means of quantifying reach distances and item weights. The purpose of this paper is to describe the development and validation of automated instrumentation, the Moment Exposure Tracking System (METS), designed to capture the dynamic load moment exposures and spine postures used in distribution center jobs. This multiphase process started by obtaining baseline data describing the accuracy of existing manual methods for obtaining moment arms during the observation of dynamic lifting, for the purpose of benchmarking the automated system. The process continued with the development and calibration of an ultrasonic system to track hand location and the development of load-sensing handles that could be used to assess item weights. The final version of the system yielded an average absolute error in the load's moment arm of 4.1 cm under conditions of trunk flexion and load asymmetry. This compares well with the average absolute error of 10.9 cm obtained using manual methods of measuring moment arms. With item mass estimates accurate to within half a kilogram, the instrumentation provides a reliable and valid means of assessing dynamic load moment exposures in dynamic distribution center lifting tasks.
In Situ Height and Width Estimation of Sorghum Plants from 2.5D Infrared Images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baharav, Tavor; Bariya, Mohini; Zakhor, Avideh
2017-01-29
Plant phenotyping, or the measurement of plant traits such as stem width and plant height, is a critical step in the development and evaluation of higher-yield biofuel crops. Phenotyping allows biologists to quantitatively estimate the biomass of plant varieties and therefore their potential for biofuel production. Manual phenotyping is costly, time-consuming, and error-prone, requiring a person to walk through the fields measuring individual plants with a tape measure and notebook. In this work we describe an alternative system consisting of an autonomous robot equipped with two infrared cameras that travels through fields, collecting 2.5D image data of sorghum plants. We develop novel image-processing algorithms to estimate plant height and stem width from the image data. Our proposed method has the advantage of working in situ using images of plants from only one side. This allows phenotypic data to be collected nondestructively throughout the growing cycle, providing biologists with valuable information on crop growth patterns. Our approach first estimates plant heights and stem widths from individual frames. It then uses tracking algorithms to refine these estimates across frames and avoid double counting the same plant in multiple frames. The result is a histogram of stem widths and plant heights for each plot of a particular genetically engineered sorghum variety. In-field testing and comparison with human-collected ground truth data demonstrate that our system achieves 13% average absolute error for stem width estimation and 15% average absolute error for plant height estimation.
Absolute wavelength calibration of a Doppler spectrometer with a custom Fabry-Perot optical system
NASA Astrophysics Data System (ADS)
Baltzer, M. M.; Craig, D.; Den Hartog, D. J.; Nishizawa, T.; Nornberg, M. D.
2016-11-01
An Ion Doppler Spectrometer (IDS) is used for fast measurements of C VI line emission (343.4 nm) in the Madison Symmetric Torus. Absolutely calibrated flow measurements are difficult because the IDS records data within 0.25 nm of the line. Commercial calibration lamps do not produce lines in this narrow range. A light source using an ultraviolet LED and etalon was designed to provide a fiducial marker 0.08 nm wide. The light is coupled into the IDS at f/4, and a holographic diffuser increases homogeneity of the final image. Random and systematic errors in data analysis were assessed. The calibration is accurate to 0.003 nm, allowing for flow measurements accurate to 3 km/s. This calibration is superior to the previous method, which used a time-averaged measurement along a chord believed to have zero net Doppler shift.
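The quoted 0.003 nm calibration accuracy maps onto a velocity uncertainty through the nonrelativistic Doppler relation v = c·Δλ/λ. A sketch using only numbers given in the abstract:

```python
C_M_PER_S = 2.998e8   # speed of light

def doppler_velocity(delta_lambda_nm, line_nm=343.4):
    """Velocity corresponding to a wavelength shift near the C VI 343.4 nm
    line (nonrelativistic: v = c * dlambda / lambda; the nm units cancel)."""
    return C_M_PER_S * delta_lambda_nm / line_nm

v_err = doppler_velocity(0.003)   # ~2.6 km/s for the 0.003 nm accuracy
```

The ~2.6 km/s result is consistent with the abstract's "flow measurements accurate to 3 km/s".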
Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation
ERIC Educational Resources Information Center
Prentice, J. S. C.
2012-01-01
An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm,…
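A minimal sketch of the idea behind this record: solve Poisson's equation with the five-point finite-difference stencil at two grid resolutions and compare the errors. The test problem, grid sizes, and Jacobi solver are illustrative choices, not the paper's three-grid algorithm:

```python
import numpy as np

def fd_error(n, iters=6000):
    """Solve laplacian(u) = f on the unit square (zero Dirichlet boundary)
    with the five-point stencil and Jacobi iteration; return the max
    absolute error against the known solution sin(pi x)*sin(pi y)."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    X, Y = np.meshgrid(x, x, indexing="ij")
    exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
    f = -2.0 * np.pi ** 2 * exact              # laplacian of the exact solution
    u = np.zeros_like(exact)
    for _ in range(iters):                     # Jacobi sweep (RHS uses old u)
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2]
                                + u[1:-1, 2:] - h * h * f[1:-1, 1:-1])
    return np.max(np.abs(u - exact))

coarse = fd_error(8)    # discretization error on an 8x8-cell grid
fine = fd_error(16)     # halving h cuts the error roughly fourfold
```

Comparing solutions at two (or, as in the paper, three) resolutions is what makes an a posteriori absolute/relative error estimate possible for a second-order method.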
Assessing Suturing Skills in a Self-Guided Learning Setting: Absolute Symmetry Error
ERIC Educational Resources Information Center
Brydges, Ryan; Carnahan, Heather; Dubrowski, Adam
2009-01-01
Directed self-guidance, whereby trainees independently practice a skill-set in a structured setting, may be an effective technique for novice training. Currently, however, most evaluation methods require an expert to be present during practice. The study aim was to determine if absolute symmetry error, a clinically important measure that can be…
Li, Jun; Shi, Wenyin; Andrews, David; Werner-Wasik, Maria; Lu, Bo; Yu, Yan; Dicker, Adam; Liu, Haisong
2017-06-01
This study aimed to compare online 6 degree-of-freedom image registrations of TrueBeam cone-beam computed tomography and BrainLab ExacTrac X-ray imaging systems for intracranial radiosurgery. Phantom and patient studies were performed on a Varian TrueBeam STx linear accelerator (version 2.5), which is integrated with a BrainLab ExacTrac imaging system (version 6.1.1). The phantom study was based on a Rando head phantom and was designed to evaluate the isocenter-location dependence of the image registrations. Ten isocenters at various locations representing clinical treatment sites were selected in the phantom. Cone-beam computed tomography and ExacTrac X-ray images were taken with the phantom located at each isocenter. The patient study included 34 patients. Cone-beam computed tomography and ExacTrac X-ray images were taken at each patient's treatment position. The 6 degree-of-freedom image registrations were performed on cone-beam computed tomography and ExacTrac, and the residual errors calculated from cone-beam computed tomography and ExacTrac were compared. In the phantom study, the average residual error differences (absolute values) between cone-beam computed tomography and ExacTrac image registrations were 0.17 ± 0.11 mm, 0.36 ± 0.20 mm, and 0.25 ± 0.11 mm in the vertical, longitudinal, and lateral directions, respectively. The average residual error differences in rotation, roll, and pitch were 0.34° ± 0.08°, 0.13° ± 0.09°, and 0.12° ± 0.10°, respectively. In the patient study, the average residual error differences in the vertical, longitudinal, and lateral directions were 0.20 ± 0.16 mm, 0.30 ± 0.18 mm, and 0.21 ± 0.18 mm, respectively. The average residual error differences in rotation, roll, and pitch were 0.40° ± 0.16°, 0.17° ± 0.13°, and 0.20° ± 0.14°, respectively. Overall, the average residual error differences were <0.4 mm in the translational directions and <0.5° in the rotational directions.
ExacTrac X-ray image registration is comparable to TrueBeam cone-beam computed tomography image registration in intracranial treatments.
Rosenblum, Uri; Melzer, Itshak
2017-01-01
About 90% of people with multiple sclerosis (PwMS) have gait instability and 50% fall. Reliable and clinically feasible methods of gait instability assessment are needed. This study investigated the reliability and validity of the Narrow Path Walking Test (NPWT) under single-task (ST) and dual-task (DT) conditions for PwMS. Thirty PwMS performed the NPWT on 2 different occasions, a week apart. Number of Steps, Trial Time, Trial Velocity, Step Length, Number of Step Errors, Number of Cognitive Task Errors, and Number of Balance Losses were measured. Intraclass correlation coefficients (ICC2,1) were calculated from the average values of NPWT parameters. Absolute reliability was quantified from the standard error of measurement (SEM) and smallest real difference (SRD). Concurrent validity of the NPWT with the Functional Reach Test, Four Square Step Test (FSST), 12-item Multiple Sclerosis Walking Scale (MSWS-12), and 2 Minute Walking Test (2MWT) was determined using partial correlations. ICCs for most NPWT parameters during ST and DT ranged from 0.46-0.94 and 0.55-0.95, respectively. The highest relative reliability was found for Number of Step Errors (ICC = 0.94 and 0.93 for ST and DT, respectively) and Trial Velocity (ICC = 0.83 and 0.86 for ST and DT, respectively). Absolute reliability was low for Number of Step Errors in ST (SEM% = 19.53%) and DT (SEM% = 18.14%) and high for Trial Velocity in ST (SEM% = 6.88%) and DT (SEM% = 7.29%). Significant correlations for Number of Step Errors and Trial Velocity were found with the FSST, MSWS-12, and 2MWT. In PwMS performing the NPWT, Number of Step Errors and Trial Velocity were highly reliable parameters.
Based on correlations with other measures of gait instability, Number of Step Errors was the most valid parameter of dynamic balance under the conditions of our test.Video Abstract available for more insights from the authors (see Supplemental Digital Content 1, available at: http://links.lww.com/JNPT/A159).
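The SEM and SRD quoted above follow from the between-subject SD and the ICC by standard formulas: SEM = SD·√(1 − ICC) and SRD = 1.96·√2·SEM. A sketch; the ICC is the single-task Number of Step Errors value reported above, while the SD of 5.0 is hypothetical:

```python
import math

def sem_srd(between_sd, icc):
    """Standard error of measurement and smallest real difference (95% level)
    from a between-subject SD and a relative-reliability ICC."""
    sem = between_sd * math.sqrt(1.0 - icc)
    srd = 1.96 * math.sqrt(2.0) * sem
    return sem, srd

# ICC = 0.94 is quoted in the abstract; the SD of 5.0 step errors is hypothetical.
sem, srd = sem_srd(5.0, 0.94)
```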
Green-Ampt approximations: A comprehensive analysis
NASA Astrophysics Data System (ADS)
Ali, Shakir; Islam, Adlul; Mishra, P. K.; Sikka, Alok K.
2016-04-01
The Green-Ampt (GA) model and its modifications are widely used for simulating the infiltration process. Several explicit approximate solutions to the implicit GA model have been developed, with varying degrees of accuracy. In this study, the performance of nine explicit approximations to the GA model is compared with the implicit GA model using published data for a broad range of soil classes and infiltration times. The explicit GA models considered are Li et al. (1976) (LI), Stone et al. (1994) (ST), Salvucci and Entekhabi (1994) (SE), Parlange et al. (2002) (PA), Barry et al. (2005) (BA), Swamee et al. (2012) (SW), Ali et al. (2013) (AL), Almedeij and Esen (2014) (AE), and Vatankhah (2015) (VA). Six statistical indicators (percent relative error, maximum absolute percent relative error, average absolute percent relative error, percent bias, index of agreement, and Nash-Sutcliffe efficiency) and relative computation time are used to assess model performance. Models are ranked by an overall performance index (OPI). The BA model is found to be the most accurate, followed by the PA and VA models, across the soil classes and infiltration periods considered. The AE, SW, SE, and LI models also performed comparatively well. Based on the overall performance index, the explicit models are ranked BA > PA > VA > LI > AE > SE > SW > ST > AL. Results of this study will be helpful in selecting accurate and simple explicit approximate GA models for a variety of hydrological problems.
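The implicit GA model the explicit formulas approximate is commonly written F = Kt + ψΔθ·ln(1 + F/ψΔθ) and solved iteratively. A Newton-iteration sketch with hypothetical, loam-like parameters (not values from the paper's data):

```python
import math

def green_ampt_F(t, K, psi_dtheta, tol=1e-10):
    """Cumulative infiltration F(t) from the implicit Green-Ampt equation
    F = K*t + psi_dtheta*ln(1 + F/psi_dtheta), solved by Newton iteration."""
    F = K * t + psi_dtheta          # positive starting guess
    for _ in range(100):
        g = F - psi_dtheta * math.log(1.0 + F / psi_dtheta) - K * t
        dg = F / (F + psi_dtheta)   # derivative dg/dF
        step = g / dg
        F -= step
        if abs(step) < tol:
            break
    return F

# Hypothetical parameters: K = 1.0 cm/h, psi*dtheta = 5.0 cm, t = 2 h.
F = green_ampt_F(2.0, 1.0, 5.0)     # cumulative infiltration in cm
```

The explicit approximations compared in the study replace this iteration with closed-form expressions, trading a little accuracy for speed.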
Wu, Wei; Guo, Junqiao; An, Shuyi; Guan, Peng; Ren, Yangwu; Xia, Linzi; Zhou, Baosen
2015-01-01
Background Cases of hemorrhagic fever with renal syndrome (HFRS) are widely distributed in eastern Asia, especially in China, Russia, and Korea. Eliminating HFRS completely has proved difficult because of the diverse animal reservoirs and the effects of global warming. Reliable forecasting is useful for the prevention and control of HFRS. Methods Two hybrid models, one combining a nonlinear autoregressive neural network (NARNN) with an autoregressive integrated moving average (ARIMA) model and the other combining a generalized regression neural network (GRNN) with ARIMA, were constructed to predict the incidence of HFRS one year ahead. The performance of the two hybrid models was compared with that of the ARIMA model. Results The ARIMA, ARIMA-NARNN, and ARIMA-GRNN models all fitted and predicted the seasonal fluctuation well. Among the three models, the mean square error (MSE), mean absolute error (MAE), and mean absolute percentage error (MAPE) of the ARIMA-NARNN hybrid model were the lowest in both the modeling stage and the forecasting stage. For the ARIMA-GRNN hybrid model, the MSE, MAE, and MAPE of the modeling performance and the MSE and MAE of the forecasting performance were lower than those of the ARIMA model, but the MAPE of the forecasting performance did not improve. Conclusion Developing and applying the ARIMA-NARNN hybrid model is an effective way to better understand the epidemic characteristics of HFRS and could be helpful in its prevention and control. PMID:26270814
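A sketch of the three comparison metrics named above (MSE, MAE, MAPE), with hypothetical incidence values:

```python
def forecast_errors(actual, predicted):
    """MSE, MAE, and MAPE (in percent): the three criteria used to compare
    the ARIMA and hybrid models."""
    n = len(actual)
    mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n
    mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / n
    mape = 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / n
    return mse, mae, mape

# Hypothetical monthly HFRS incidence (actual) vs. one model's fit (predicted).
mse, mae, mape = forecast_errors([100.0, 120.0, 80.0], [110.0, 115.0, 85.0])
```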
The Estimation of Gestational Age at Birth in Database Studies.
Eberg, Maria; Platt, Robert W; Filion, Kristian B
2017-11-01
Studies on the safety of prenatal medication use require valid estimation of the pregnancy duration. However, gestational age is often incompletely recorded in administrative and clinical databases. Our objective was to compare different approaches to estimating the pregnancy duration. Using data from the Clinical Practice Research Datalink and Hospital Episode Statistics, we examined the following four approaches to estimating missing gestational age: (1) generalized estimating equations for longitudinal data; (2) multiple imputation; (3) estimation based on fetal birth weight and sex; and (4) conventional approaches that assigned a fixed value (39 weeks for all or 39 weeks for full term and 35 weeks for preterm). The gestational age recorded in Hospital Episode Statistics was considered the gold standard. We conducted a simulation study comparing the described approaches in terms of estimated bias and mean square error. A total of 25,929 infants from 22,774 mothers were included in our "gold standard" cohort. The smallest average absolute bias was observed for the generalized estimating equation that included birth weight, while the largest absolute bias occurred when assigning 39-week gestation to all those with missing values. The smallest mean square errors were detected with generalized estimating equations while multiple imputation had the highest mean square errors. The use of generalized estimating equations resulted in the most accurate estimation of missing gestational age when birth weight information was available. In the absence of birth weight, assignment of fixed gestational age based on term/preterm status may be the optimal approach.
Design of a Two-Step Calibration Method of Kinematic Parameters for Serial Robots
NASA Astrophysics Data System (ADS)
WANG, Wei; WANG, Lei; YUN, Chao
2017-03-01
Serial robots are used to handle workpieces with large dimensions, and calibrating kinematic parameters is one of the most efficient ways to upgrade their accuracy. Many models have been set up to investigate how many kinematic parameters can be identified to meet the minimality principle, but the base frame and the kinematic parameters are usually calibrated indistinctly, in a single step. A two-step method of calibrating kinematic parameters is proposed to improve the accuracy of the robot's base frame and kinematic parameters. The forward kinematics, described with respect to the measuring coordinate frame, are established based on the product-of-exponentials (POE) formula. In the first step, the robot's base coordinate frame is calibrated in unit-quaternion form. The errors of both the robot's reference configuration and the base coordinate frame's pose are equivalently transformed to zero-position errors of the robot's joints. A simplified model of the robot's positioning error is established in second-order explicit expressions. The identification model is then solved by the least-squares method, requiring measured position coordinates only. The complete subtask of calibrating the robot's 39 kinematic parameters is finished in the second step. A group of calibration experiments shows that the proposed two-step method improves the average absolute accuracy of the industrial robot to 0.23 mm. The paper concludes that a robot's base frame should be calibrated before its kinematic parameters in order to upgrade its absolute positioning accuracy.
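The product-of-exponentials formula composes one matrix exponential per joint with the zero-position tool frame. A planar (SE(2)) sketch with a hypothetical two-link arm, not the 39-parameter industrial robot of the paper:

```python
import numpy as np

def exp_se2(qx, qy, theta):
    """SE(2) exponential of a revolute twist: rotation by theta about (qx, qy)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    T = np.eye(3)
    T[:2, :2] = R
    T[:2, 2] = (np.eye(2) - R) @ np.array([qx, qy])
    return T

# Hypothetical planar two-link arm: joints at (0,0) and (1,0), tool at (1.8,0).
M = np.eye(3)
M[0, 2] = 1.8                                  # zero-position tool frame

def fk(theta1, theta2):
    """Forward kinematics as a product of joint exponentials times the home pose."""
    return exp_se2(0.0, 0.0, theta1) @ exp_se2(1.0, 0.0, theta2) @ M

tip = fk(np.pi / 2, 0.0)[:2, 2]                # straight arm swung 90 degrees
```

Calibration in the POE setting amounts to adjusting the twist and home-pose parameters so that predicted tip positions match measured ones.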
Dantas, Jose Luiz; Pereira, Gleber; Nakamura, Fabio Yuzo
2015-09-01
The five-kilometer time trial (TT5km) has been used to assess aerobic endurance performance without further investigation of its validity. This study aimed to perform a preliminary validation of the TT5km for ranking well-trained cyclists by aerobic endurance fitness and for assessing changes in aerobic endurance performance. After an incremental test, 20 cyclists (age = 31.3 ± 7.9 years; body mass index = 22.7 ± 1.5 kg/m²; maximal aerobic power = 360.5 ± 49.5 W) performed the TT5km twice, during which performance variables (time to complete, absolute and relative power output, average speed) and physiological responses (heart rate and electromyographic activity) were collected. The validation criteria were pacing strategy, absolute and relative reliability, validity, and sensitivity. The sensitivity index was obtained as the ratio of the smallest worthwhile change to the typical error. The TT5km showed high absolute (coefficient of variation < 3%) and relative (intraclass correlation coefficient > 0.95) reliability of the performance variables, whereas reliability of the physiological responses was low. The TT5km performance variables were highly correlated with the aerobic endurance indices obtained from the incremental test (r > 0.70) and showed an adequate sensitivity index (> 1). The TT5km is a valid test for ranking the aerobic endurance fitness of well-trained cyclists and for detecting changes in aerobic endurance performance. Coaches can detect performance changes through absolute (± 17.7 W) or relative power output (± 0.3 W·kg⁻¹), time to complete the test (± 13.4 s), and average speed (± 1.0 km·h⁻¹). Furthermore, TT5km performance can also be used to rank athletes according to their aerobic endurance fitness.
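The sensitivity index above is the ratio of the smallest worthwhile change to the typical error. A sketch using common conventions (SWC = 0.2 × between-subject SD; TE = SD of test-retest differences / √2) and hypothetical values, since the abstract does not report the underlying SDs:

```python
import math

def sensitivity_index(between_sd, diff_sd):
    """Smallest worthwhile change (0.2 x between-subject SD, a common
    convention) divided by the typical error (SD of test-retest differences
    over sqrt(2)); values > 1 indicate adequate sensitivity."""
    swc = 0.2 * between_sd
    typical_error = diff_sd / math.sqrt(2.0)
    return swc / typical_error

# Hypothetical absolute power outputs: between-subject SD 40 W,
# test-retest difference SD 8 W.
si = sensitivity_index(40.0, 8.0)
```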
NASA Technical Reports Server (NTRS)
Otterson, D. A.; Seng, G. T.
1985-01-01
A high-performance liquid chromatography (HPLC) method for estimating four aromatic classes in middistillate fuels is presented. Average refractive indices are used in a correlation to obtain the concentrations of each of the aromatic classes from HPLC data. The aromatic class concentrations can be obtained in about 15 min when the concentration of the aromatic group is known. Seven fuels with a wide range of compositions were used to test the method. Relative errors in the concentrations of the two major aromatic classes were not over 10 percent. Absolute errors for the minor classes were all less than 0.3 percent. The data show that errors in group-type analyses using sulfuric acid derived standards are greater for fuels containing high concentrations of polycyclic aromatics. Corrections are based on the change in refractive index of the aromatic fraction that can occur when sulfuric acid and the fuel react. These corrections improved both the precision and the accuracy of the group-type results.
Indirect Validation of Probe Speed Data on Arterial Corridors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eshragh, Sepideh; Young, Stanley E.; Sharifi, Elham
This study aimed to estimate the accuracy of probe speed data on arterial corridors on the basis of roadway geometric attributes and functional classification. It was assumed that functional class (medium and low) along with other road characteristics (such as weighted average of the annual average daily traffic, average signal density, average access point density, and average speed) were available as correlation factors to estimate the accuracy of probe traffic data. This study tested these factors as predictors of the fidelity of probe traffic data by using the results of an extensive validation exercise. This study showed strong correlations between these geometric attributes and the accuracy of probe data when they were assessed by using average absolute speed error. Linear models were regressed to existing data to estimate appropriate models for medium- and low-type arterial corridors. The proposed models for medium- and low-type arterials were validated further on the basis of the results of a slowdown analysis. These models can be used to predict the accuracy of probe data indirectly in medium and low types of arterial corridors.
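A sketch of the kind of linear model regressed on the corridor attributes named above. All predictor values are synthetic, and the response is generated from a known linear rule so that ordinary least squares recovers it exactly; the paper's actual coefficients and data are not reproduced here:

```python
import numpy as np

# Synthetic corridor attributes: weighted-average AADT, signal density,
# access-point density, and average speed (all values hypothetical).
X = np.array([[20000.0, 2.1, 10.0, 45.0],
              [8000.0, 3.5, 18.0, 30.0],
              [15000.0, 1.2, 6.0, 55.0],
              [5000.0, 4.0, 22.0, 25.0],
              [12000.0, 2.8, 12.0, 40.0],
              [30000.0, 1.0, 4.0, 60.0]])
# Average absolute speed error per corridor, generated from the linear rule
# 0.0001*AADT + 0.5*signal + 0.1*access - 0.02*speed + 1 for illustration.
y = np.array([4.15, 4.75, 2.60, 5.20, 4.00, 3.70])

A = np.hstack([X, np.ones((len(X), 1))])      # append intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # ordinary least squares fit
predicted = A @ coef                          # fitted accuracy estimates
```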
Evaluation and Applications of the Prediction of Intensity Model Error (PRIME) Model
NASA Astrophysics Data System (ADS)
Bhatia, K. T.; Nolan, D. S.; Demaria, M.; Schumacher, A.
2015-12-01
Forecasters and end users of tropical cyclone (TC) intensity forecasts would greatly benefit from a reliable expectation of model error to counteract the lack of consistency in TC intensity forecast performance. As a first step towards producing error predictions to accompany each TC intensity forecast, Bhatia and Nolan (2013) studied the relationship between synoptic parameters, TC attributes, and forecast errors. In this study, we build on previous results of Bhatia and Nolan (2013) by testing the ability of the Prediction of Intensity Model Error (PRIME) model to forecast the absolute error and bias of four leading intensity models available for guidance in the Atlantic basin. PRIME forecasts are independently evaluated at each 12-hour interval from 12 to 120 hours during the 2007-2014 Atlantic hurricane seasons. The absolute error and bias predictions of PRIME are compared to their respective climatologies to determine their skill. In addition to these results, we will present the performance of the operational version of PRIME run during the 2015 hurricane season. PRIME verification results show that it can reliably anticipate situations where particular models excel, and therefore could lead to a more informed protocol for hurricane evacuations and storm preparations. These positive conclusions suggest that PRIME forecasts also have the potential to lower the error in the original intensity forecasts of each model. As a result, two techniques are proposed to develop a post-processing procedure for a multimodel ensemble based on PRIME. The first approach is to inverse-weight models using PRIME absolute error predictions (higher predicted absolute error corresponds to lower weights). The second multimodel ensemble applies PRIME bias predictions to each model's intensity forecast and the mean of the corrected models is evaluated. 
The forecasts of both of these experimental ensembles are compared to those of the equal-weight ICON ensemble, which currently provides the most reliable forecasts in the Atlantic basin.
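The first experimental ensemble described above inverse-weights each model by its predicted absolute error. A sketch with hypothetical forecasts and error predictions:

```python
def inverse_error_ensemble(forecasts, predicted_abs_errors, eps=1e-6):
    """Blend model intensity forecasts with weights proportional to the
    inverse of each model's predicted absolute error (smaller predicted
    error gives a larger weight); eps guards against division by zero."""
    weights = [1.0 / (e + eps) for e in predicted_abs_errors]
    return sum(w * f for w, f in zip(weights, forecasts)) / sum(weights)

# Hypothetical 48-h intensity forecasts (kt) from four models and the
# PRIME-style predicted absolute errors attached to them.
blend = inverse_error_ensemble([95.0, 105.0, 90.0, 100.0], [5.0, 15.0, 10.0, 8.0])
```

With equal predicted errors the blend reduces to the equal-weight (ICON-style) mean, so the scheme only departs from the baseline when PRIME discriminates between models.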
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xing, Y; Macq, B; Bondar, L
Purpose: To quantify the accuracy in predicting the Bragg peak position using simulated in-room measurements of prompt gamma (PG) emissions for realistic treatment error scenarios that combine several sources of errors. Methods: Prompt gamma measurements by a knife-edge slit camera were simulated using an experimentally validated analytical simulation tool. Simulations were performed, for 143 treatment error scenarios, on an anthropomorphic phantom and a pencil beam scanning plan for the nasal cavity. Three types of errors were considered: translation along each axis, rotation around each axis, and CT-calibration errors, with magnitudes ranging, respectively, between −3 and 3 mm, −5 and 5 degrees, and −5 and +5%. We investigated the correlation between the Bragg peak (BP) shift and the horizontal shift of PG profiles. The shifts were calculated between the planned (reference) position and the position given by the error scenario. The prediction error for one spot was calculated as the absolute difference between the PG profile shift and the BP shift. Results: The PG shift was significantly and strongly correlated with the BP shift for 92% of the cases (p<0.0001, Pearson correlation coefficient R>0.8). Moderate but significant correlations were obtained for all cases that considered only CT-calibration errors and for 1 case that combined translation and CT errors (p<0.0001, R ranged between 0.61 and 0.8). The average prediction errors for the simulated scenarios ranged between 0.08±0.07 and 1.67±1.3 mm (grand mean 0.66±0.76 mm). The prediction error was moderately correlated with the value of the BP shift (p=0, R=0.64). For the simulated scenarios the average BP shift ranged between −8±6.5 mm and 3±1.1 mm. Scenarios that considered combinations of the largest treatment errors were associated with large BP shifts.
Conclusion: Simulations of in-room measurements demonstrate that prompt gamma profiles provide reliable estimation of the Bragg peak position for complex error scenarios. Yafei Xing and Luiza Bondar are funded by BEWARE grants from the Walloon Region. The work presents simulation results for a prompt gamma camera prototype developed by IBA.
Propagation of Radiosonde Pressure Sensor Errors to Ozonesonde Measurements
NASA Technical Reports Server (NTRS)
Stauffer, R. M.; Morris, G.A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.
2014-01-01
Several previous studies highlight pressure (or, equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and in percent pressure differences. To investigate this problem further, a total of 731 radiosonde-ozonesonde launches from the Southern Hemisphere subtropics to Northern mid-latitudes are considered, with launches between 2005 and 2013 from both longer-term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S; and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation: SPC; and ENSCI-Droplet Measurement Technologies: DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are within 0.6 hPa in the free troposphere, and nearly a third exceed 1.0 hPa at 26 km, where a 1.0 hPa error represents 5 percent of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96 percent of launches lie within 5 percent O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (30 km) can exceed 10 percent (about 25 percent of launches that reach 30 km exceed this threshold). These errors cause disagreement between the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profiles by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology.
Total column differences are reduced to an average of -0.5 DU when the O3 profile is integrated to 10 hPa with subsequent addition of the O3 climatology above 10 hPa. The RS92 radiosondes are superior in performance to the other radiosondes, with average 26 km errors of -0.12 hPa or +0.61 percent O3MR error. iMet-P radiosondes had average 26 km errors of -1.95 hPa or +8.75 percent O3MR error. Based on our analysis, we suggest that ozonesondes always be coupled with a GPS-enabled radiosonde and that pressure-dependent variables, such as O3MR, be recalculated and reprocessed using the GPS-measured altitude, especially when 26 km pressure offsets exceed 1.0 hPa (5 percent).
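The propagation mechanism is simple: the ECC cell measures ozone partial pressure, so a radiosonde pressure offset biases the mixing ratio pO3/p directly. A sketch using the roughly 20 hPa ambient pressure at 26 km implied by the abstract:

```python
def o3mr_percent_error(p_hpa, offset_hpa):
    """Percent error in ozone mixing ratio O3MR = pO3 / p caused by a
    radiosonde pressure offset (pO3 from the ECC cell is unaffected)."""
    true_mr = 1.0 / p_hpa                 # per unit ozone partial pressure
    biased_mr = 1.0 / (p_hpa + offset_hpa)
    return 100.0 * (biased_mr - true_mr) / true_mr

# At ~26 km the ambient pressure is about 20 hPa, so a +1.0 hPa offset
# (the 5 percent case in the abstract) biases O3MR by roughly -4.8 percent.
err_pct = o3mr_percent_error(20.0, 1.0)
```

Because p falls with altitude while typical offsets do not shrink as fast, the same absolute offset does more damage higher up, matching the abstract's altitude dependence.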
Absolute Timing of the Crab Pulsar with RXTE
NASA Technical Reports Server (NTRS)
Rots, Arnold H.; Jahoda, Keith; Lyne, Andrew G.
2004-01-01
We have monitored the phase of the main X-ray pulse of the Crab pulsar with the Rossi X-ray Timing Explorer (RXTE) for almost eight years, since the start of the mission in January 1996. The absolute time of RXTE's clock is sufficiently accurate to allow this phase to be compared directly with the radio profile. Our monitoring observations of the pulsar took place bi-weekly (during the periods when it was at least 30 degrees from the Sun) and we correlated the data with radio timing ephemerides derived from observations made at Jodrell Bank. We have determined the phase of the X-ray main pulse for each observation with a typical error in the individual data points of 50 microseconds. The total ensemble is consistent with a phase that is constant over the monitoring period, with the X-ray pulse leading the radio pulse by 0.01025 plus or minus 0.00120 period in phase, or 344 plus or minus 40 microseconds in time. The error estimate is dominated by a systematic error of 40 microseconds, most likely constant, arising from uncertainties in the instrumental calibration of the radio data. The statistical error is 0.00015 period, or 5 microseconds. The separation of the main pulse and interpulse appears to be unchanging at time scales of a year or less, with an average value of 0.4001 plus or minus 0.0002 period. There is no apparent variation in these values with energy over the 2-30 keV range. The lag between the radio and X-ray pulses may be constant in phase (i.e., rotational in nature) or constant in time (i.e., due to a pathlength difference). We are not (yet) able to distinguish between these two interpretations.
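The phase lead converts to a time lead via the pulsar's rotation period. A sketch assuming the Crab's roughly 33.6 ms period, a well-known value but one not stated in the abstract:

```python
CRAB_PERIOD_S = 0.0336  # ~33.6 ms rotation period (assumed, not in the abstract)

def phase_to_us(phase, period_s=CRAB_PERIOD_S):
    """Convert a pulse-phase offset (fraction of one period) to microseconds."""
    return phase * period_s * 1e6

lead_us = phase_to_us(0.01025)      # phase lead of the X-ray main pulse
uncert_us = phase_to_us(0.00120)    # quoted phase uncertainty
```

With this assumed period the conversion reproduces the abstract's "344 plus or minus 40 microseconds".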
Murata, Hiroshi; Araie, Makoto; Asaoka, Ryo
2014-11-20
We generated a variational Bayes model to predict visual field (VF) progression in glaucoma patients. This retrospective study included VF series from 911 eyes of 547 glaucoma patients as test data, and VF series from 5049 eyes of 2858 glaucoma patients as training data. Using training data, variational Bayes linear regression (VBLR) was created to predict VF progression. The performance of VBLR was compared against ordinary least-squares linear regression (OLSLR) by predicting VFs in the test dataset. The total deviation (TD) values of test patients' 11th VFs were predicted using TD values from their second to 10th VFs (VF2-10), the root mean squared error (RMSE) associated with each approach then was calculated. Similarly, mean TD (mTD) of test patients' 11th VFs was predicted using VBLR and OLSLR, and the absolute prediction errors compared. The RMSE resulting from VBLR averaged 3.9 ± 2.1 (SD) and 4.9 ± 2.6 dB for prediction based on the second to 10th VFs (VF2-10) and the second to fourth VFs (VF2-4), respectively. The RMSE resulting from OLSLR was 4.1 ± 2.0 (VF2-10) and 19.9 ± 12.0 (VF2-4) dB. The absolute prediction error (SD) for mTD using VBLR was 1.2 ± 1.3 (VF2-10) and 1.9 ± 2.0 (VF2-4) dB, while the prediction error resulting from OLSLR was 1.2 ± 1.3 (VF2-10) and 6.2 ± 6.6 (VF2-4) dB. The VBLR more accurately predicts future VF progression in glaucoma patients compared to conventional OLSLR, especially in short VF series. © ARVO.
Kolehmainen, V; Vauhkonen, M; Karjalainen, P A; Kaipio, J P
1997-11-01
In electrical impedance tomography (EIT), difference imaging is often preferred over static imaging. This is because of the many unknowns in the forward modelling which make it difficult to obtain reliable absolute resistivity estimates. However, static imaging and absolute resistivity values are needed in some potential applications of EIT. In this paper we demonstrate by simulation the effects of different error components that are included in the reconstruction of static EIT images. All simulations are carried out in two dimensions with the so-called complete electrode model. Errors that are considered are the modelling error in the boundary shape of an object, errors in the electrode sizes and localizations and errors in the contact impedances under the electrodes. Results using both adjacent and trigonometric current patterns are given.
Robust sleep quality quantification method for a personal handheld device.
Shin, Hangsik; Choi, Byunghun; Kim, Doyoon; Cho, Jaegeol
2014-06-01
The purpose of this study was to develop and validate a novel method for sleep quality quantification using personal handheld devices. The proposed method used 3- or 6-axis signals, including acceleration and angular velocity, obtained from built-in sensors in a smartphone, and applied a real-time wavelet denoising technique to minimize nonstationary noise. Sleep or wake status was decided on each axis, and the totals were summed to calculate sleep efficiency (SE), regarded in general as sleep quality. A sleep experiment with 14 participants was carried out to evaluate the performance of the proposed method. An experimental protocol was designed for comparative analysis: activity during sleep was recorded simultaneously by the proposed method and by well-known commercial applications, and on different mattresses and locations, to verify reliability in practical use. Every calculated SE was compared with the SE of a clinically certified medical device, the Philips (Amsterdam, The Netherlands) Actiwatch. In these experiments, the proposed method proved reliable in quantifying sleep quality. Compared with the Actiwatch, the accuracy and average bias error of SE calculated by the proposed method were 96.50% and -1.91%, respectively. The proposed method outperformed the comparative applications by at least 11.41% in average accuracy and at least 6.10% in average absolute bias error; the average accuracy and average absolute bias error of the comparative applications were 76.33% and 17.52%, respectively.
Beat-to-beat heart rate estimation fusing multimodal video and sensor data
Antink, Christoph Hoog; Gao, Hanno; Brüser, Christoph; Leonhardt, Steffen
2015-01-01
Coverage and accuracy of unobtrusively measured biosignals are generally relatively low compared to clinical modalities. This can be improved by exploiting redundancies in multiple channels with methods of sensor fusion. In this paper, we demonstrate that two modalities, skin color variation and head motion, can be extracted from the video stream recorded with a webcam. Using a Bayesian approach, these signals are fused with a ballistocardiographic signal obtained from the seat of a chair with a mean absolute beat-to-beat estimation error below 25 milliseconds and an average coverage above 90% compared to an ECG reference. PMID:26309754
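The abstract does not detail the Bayesian fusion itself, so the following inverse-variance-weighted combination of per-channel beat-interval estimates is only a generic stand-in for that idea; the channel names and uncertainty model are illustrative assumptions, not the paper's method.

```python
import numpy as np

def fuse_interval_estimates(estimates_ms, variances):
    """Inverse-variance weighted fusion of beat-interval estimates.

    A minimal sketch of Bayesian sensor fusion: each channel (e.g. skin
    color variation, head motion, BCG) supplies a beat-interval estimate
    in milliseconds plus an uncertainty; the fused estimate weights each
    channel by 1/variance, and the fused variance shrinks accordingly.
    """
    estimates = np.asarray(estimates_ms, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(weights * estimates) / np.sum(weights)
    fused_var = 1.0 / np.sum(weights)
    return fused, fused_var
```

With equal variances this reduces to a plain average; a noisier channel is automatically down-weighted.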
Pyrometer with tracking balancing
NASA Astrophysics Data System (ADS)
Ponomarev, D. B.; Zakharenko, V. A.; Shkaev, A. G.
2018-04-01
Currently, one of the main metrological challenges in noncontact temperature measurement is emissivity uncertainty. This paper describes a pyrometer that diminishes the effect of emissivity through a measuring scheme with tracking balancing, in which the radiation receiver serves as a null-indicator. Results of an absolute-error study with the prototype pyrometer in measuring the surface temperature of aluminum and nickel samples are presented, and absolute errors calculated from tabulated emissivity values are compared with the errors of experimental measurements made by the proposed method. The practical implementation of the proposed technical solution reduced the error due to emissivity uncertainty by a factor of two.
Uncertainty analysis technique for OMEGA Dante measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
May, M. J.; Widmann, K.; Sorce, C.
2010-10-15
The Dante is an 18 channel x-ray filtered diode array which records the spectrally and temporally resolved radiation flux from various targets (e.g., hohlraums) at x-ray energies between 50 eV and 10 keV. It is a main diagnostic installed on the OMEGA laser facility at the Laboratory for Laser Energetics, University of Rochester. The absolute flux is determined from the photometric calibration of the x-ray diodes, filters and mirrors, and an unfold algorithm. Understanding the errors on this absolute measurement is critical for understanding hohlraum energetic physics. We present a new method for quantifying the uncertainties on the determined flux using a Monte Carlo parameter variation technique. This technique combines the uncertainties in both the unfold algorithm and the error from the absolute calibration of each channel into a one sigma Gaussian error function. One thousand test voltage sets are created using these error functions and processed by the unfold algorithm to produce individual spectra and fluxes. Statistical methods are applied to the resultant set of fluxes to estimate error bars on the measurements.
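The Monte Carlo parameter variation described above can be sketched as follows. The unfold here is a trivial placeholder (the real Dante unfold is far more involved), and the function names and per-channel fractional error model are illustrative assumptions, not the actual analysis code.

```python
import numpy as np

def unfold_flux(voltages):
    # Placeholder for the Dante unfold algorithm: a simple weighted sum
    # standing in for the spectral unfold and flux integration.
    return float(np.sum(voltages) * 0.1)

def monte_carlo_flux_error(voltages, sigma_frac, n_trials=1000, seed=0):
    """Propagate per-channel one-sigma Gaussian errors through the unfold.

    voltages   : measured channel voltages
    sigma_frac : one-sigma fractional error per channel, combining the
                 calibration and unfold uncertainties
    Returns the mean flux and its standard deviation over the trials.
    """
    rng = np.random.default_rng(seed)
    voltages = np.asarray(voltages, dtype=float)
    fluxes = []
    for _ in range(n_trials):
        trial = voltages * (1.0 + sigma_frac * rng.standard_normal(voltages.size))
        fluxes.append(unfold_flux(trial))
    fluxes = np.asarray(fluxes)
    return fluxes.mean(), fluxes.std(ddof=1)
```

The spread of the resulting flux set gives the error bar on the measurement, exactly as the abstract describes statistically.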
NASA Astrophysics Data System (ADS)
Hu, Qing-Qing; Freier, Christian; Leykauf, Bastian; Schkolnik, Vladimir; Yang, Jun; Krutzik, Markus; Peters, Achim
2017-09-01
Precisely evaluating the systematic error induced by the quadratic Zeeman effect is important for developing atom interferometer gravimeters aiming at an accuracy in the μGal regime (1 μGal = 10^-8 m/s^2 ≈ 10^-9 g). This paper reports on the experimental investigation of Raman spectroscopy-based magnetic field measurements and the evaluation of the systematic error in the gravimetric atom interferometer (GAIN) due to the quadratic Zeeman effect. We discuss Raman duration and frequency step-size-dependent magnetic field measurement uncertainty, present vector light shift and tensor light shift induced magnetic field measurement offsets, and map the absolute magnetic field inside the interferometer chamber of GAIN with an uncertainty of 0.72 nT and a spatial resolution of 12.8 mm. We evaluate the quadratic Zeeman-effect-induced gravity measurement error in GAIN as 2.04 μGal. The methods shown in this paper are important for precisely mapping the absolute magnetic field in vacuum and reducing the quadratic Zeeman-effect-induced systematic error in Raman transition-based precision measurements, such as atomic interferometer gravimeters.
A new accuracy measure based on bounded relative error for time series forecasting
Chen, Chao; Twycross, Jamie; Garibaldi, Jonathan M.
2017-01-01
Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues such as poor resistance to outliers and scale dependence. In this paper, while summarising commonly used accuracy measures, a special review is made on the symmetric mean absolute percentage error. Moreover, a new accuracy measure called the Unscaled Mean Bounded Relative Absolute Error (UMBRAE), which combines the best features of various alternative measures, is proposed to address the common issues of existing measures. A comparative evaluation on the proposed and related measures has been made with both synthetic and real-world data. The results indicate that the proposed measure, with user selectable benchmark, performs as well as or better than other measures on selected criteria. Though it has been commonly accepted that there is no single best accuracy measure, we suggest that UMBRAE could be a good choice to evaluate forecasting methods, especially for cases where measures based on geometric mean of relative errors, such as the geometric mean relative absolute error, are preferred. PMID:28339480
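Based on the construction the abstract describes (each error bounded by comparison with a benchmark forecast's error, then the mean unscaled), a plausible sketch is below. The exact definition in the paper, in particular the handling of zero-error ties, may differ in detail, so treat this as illustrative rather than the published UMBRAE.

```python
import numpy as np

def umbrae(actual, forecast, benchmark):
    """Unscaled Mean Bounded Relative Absolute Error (sketch).

    Each error is bounded to [0, 1] by comparing the forecast's absolute
    error with the benchmark's: brae = |e| / (|e| + |e*|). The mean is
    then "unscaled" so that UMBRAE < 1 means better than the benchmark,
    UMBRAE = 1 equal, and UMBRAE > 1 worse.
    """
    actual = np.asarray(actual, dtype=float)
    e = np.abs(actual - np.asarray(forecast, dtype=float))
    e_star = np.abs(actual - np.asarray(benchmark, dtype=float))
    denom = e + e_star
    # Tie convention (an assumption here): if both errors are zero,
    # score the pair as equal performance (0.5).
    brae = np.where(denom > 0, e / np.where(denom > 0, denom, 1.0), 0.5)
    mbrae = brae.mean()
    return mbrae / (1.0 - mbrae)
```

A perfect forecast against an imperfect benchmark yields 0; identical error magnitudes yield exactly 1.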
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morley, Steven
The PyForecastTools package provides Python routines for calculating metrics for model validation, forecast verification and model comparison. For continuous predictands the package provides functions for calculating bias (mean error, mean percentage error, median log accuracy, symmetric signed bias) and for calculating accuracy (mean squared error, mean absolute error, mean absolute scaled error, normalized RMSE, median symmetric accuracy). Convenience routines to calculate the component parts (e.g., forecast error, scaled error) of each metric are also provided. To compare models the package provides: generic skill score; percent better. Robust measures of scale, including median absolute deviation, robust standard deviation, robust coefficient of variation and the Sn estimator, are all provided by the package. Finally, the package implements Python classes for NxN contingency tables. In the case of a multi-class prediction, accuracy and skill metrics such as proportion correct and the Heidke and Peirce skill scores are provided as object methods. The special case of a 2x2 contingency table inherits from the NxN class and provides many additional metrics for binary classification: probability of detection, probability of false detection, false alarm ratio, threat score, equitable threat score, bias. Confidence intervals for many of these quantities can be calculated using either the Wald method or Agresti-Coull intervals.
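A few of the listed metrics can be sketched directly from their textbook definitions; these are illustrative implementations, not PyForecastTools' actual function names or signatures.

```python
import numpy as np

def median_symmetric_accuracy(obs, pred):
    """100 * (exp(median(|ln(pred/obs)|)) - 1): a robust, scale-free
    percentage error (assumes strictly positive obs and pred)."""
    q = np.abs(np.log(np.asarray(pred, float) / np.asarray(obs, float)))
    return 100.0 * (np.exp(np.median(q)) - 1.0)

def mean_absolute_scaled_error(obs, pred):
    """MAE scaled by the in-sample MAE of the naive one-step forecast."""
    obs = np.asarray(obs, dtype=float)
    pred = np.asarray(pred, dtype=float)
    scale = np.mean(np.abs(np.diff(obs)))
    return np.mean(np.abs(obs - pred)) / scale

def binary_table_metrics(hits, misses, false_alarms, correct_negatives):
    """Probability of detection, false alarm ratio and bias
    for a 2x2 contingency table."""
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    bias = (hits + false_alarms) / (hits + misses)
    return pod, far, bias
```

MASE < 1 means the model beats the naive persistence forecast on average; bias > 1 means the event is forecast more often than it occurs.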
Kenney, Terry A.
2010-01-01
Operational procedures at U.S. Geological Survey gaging stations include periodic leveling checks to ensure that gages are accurately set to the established gage datum. Differential leveling techniques are used to determine elevations for reference marks, reference points, all gages, and the water surface. The techniques presented in this manual provide guidance on instruments and methods that ensure gaging-station levels are run to both a high precision and accuracy. Levels are run at gaging stations whenever differences in gage readings are unresolved, stations may have been damaged, or according to a pre-determined frequency. Engineer's levels, both optical levels and electronic digital levels, are commonly used for gaging-station levels. Collimation tests should be run at least once a week for any week that levels are run, and the absolute value of the collimation error cannot exceed 0.003 foot/100 feet (ft). An acceptable set of gaging-station levels consists of a minimum of two foresights, each from a different instrument height, taken on at least two independent reference marks, all reference points, all gages, and the water surface. The initial instrument height is determined from another independent reference mark, known as the origin, or base reference mark. The absolute value of the closure error of a leveling circuit must be less than or equal to ft, where n is the total number of instrument setups, and may not exceed |0.015| ft regardless of the number of instrument setups. Closure error for a leveling circuit is distributed by instrument setup and adjusted elevations are determined. Side shots in a level circuit are assessed by examining the differences between the adjusted first and second elevations for each objective point in the circuit. The absolute value of these differences must be less than or equal to 0.005 ft. Final elevations for objective points are determined by averaging the valid adjusted first and second elevations. 
If final elevations indicate that the reference gage is off by |0.015| ft or more, it must be reset.
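The explicit tolerances above lend themselves to simple checks. In the sketch below, the equal per-setup distribution of closure error is one common convention and an assumption here, not necessarily the manual's exact allocation rule; the thresholds themselves come from the text.

```python
def collimation_ok(error_ft_per_100ft):
    """|Collimation error| must not exceed 0.003 ft per 100 ft."""
    return abs(error_ft_per_100ft) <= 0.003

def distribute_closure(closure_ft, n_setups):
    """Distribute circuit closure error by instrument setup.

    Returns the cumulative correction applied after each setup,
    assuming an equal share per setup (a common convention).
    """
    per_setup = -closure_ft / n_setups
    return [per_setup * (i + 1) for i in range(n_setups)]

def side_shot_ok(first_elev_ft, second_elev_ft):
    """Adjusted first and second elevations must agree within 0.005 ft."""
    return abs(first_elev_ft - second_elev_ft) <= 0.005

def gage_needs_reset(gage_error_ft):
    """The reference gage must be reset when off by |0.015| ft or more."""
    return abs(gage_error_ft) >= 0.015
```

For example, a 0.012 ft closure over four setups applies a cumulative correction of -0.003, -0.006, -0.009 and -0.012 ft at successive setups.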
Sub-nanometer periodic nonlinearity error in absolute distance interferometers
NASA Astrophysics Data System (ADS)
Yang, Hongxing; Huang, Kaiqi; Hu, Pengcheng; Zhu, Pengfei; Tan, Jiubin; Fan, Zhigang
2015-05-01
Periodic nonlinearity, which can produce errors at the nanometer scale, has become a main problem limiting absolute distance measurement accuracy. In order to eliminate this error, a new integrated interferometer with a non-polarizing beam splitter is developed. This eliminates frequency and/or polarization mixing, and the strict requirement on laser source polarization is greatly relaxed. By combining a retro-reflector and an angle prism, the reference and measuring beams can be spatially separated so that their optical paths do not overlap. Thus the main cause of the periodic nonlinearity error, i.e., frequency and/or polarization mixing and beam leakage, is eliminated. Experimental results indicate that the periodic phase error is kept within 0.0018°.
1984-05-01
The control ignored any error of 1/10th degree or less. This was done by setting the error term E and the integral sum PREINT to zero when the absolute value of the error fell within this band. Fragments of the assembly listing survive: compare the signs of the two errors; jeq tdiff (if equal, jump); clr @preint (else zero the integral sum); tdiff: mov @diff,r1 (fetch the absolute value of OAT-RAT); ci r1,25 (compare). The design includes a heating coil and thermostatic control to maintain the air in this path at an elevated temperature, typically around 80 degrees Fahrenheit (80 F).
NASA Astrophysics Data System (ADS)
Pernot, Pascal; Savin, Andreas
2018-06-01
Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, the distributions of model errors being neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
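The two statistics advocated above can be computed directly from the empirical distribution of unsigned errors. A minimal numpy sketch (the function names are ours, not the paper's):

```python
import numpy as np

def prob_error_below(errors, threshold):
    """P(|error| < threshold), estimated from the empirical CDF
    of the unsigned errors."""
    return float(np.mean(np.abs(errors) < threshold))

def error_amplitude_at_confidence(errors, confidence=0.95):
    """Maximal error amplitude expected at the chosen confidence level,
    i.e. the `confidence` quantile of the unsigned-error distribution."""
    return float(np.quantile(np.abs(errors), confidence))
```

Unlike the mean signed or unsigned error, both quantities remain meaningful when the error distribution is skewed or off-center.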
Poster - 49: Assessment of Synchrony respiratory compensation error for CyberKnife liver treatment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Ming; Cygler,
The goal of this work is to quantify respiratory motion compensation errors for liver tumor patients treated by the CyberKnife system with Synchrony tracking, to identify patients with the smallest tracking errors and to eventually help coach patients' breathing patterns to minimize dose delivery errors. The accuracy of CyberKnife Synchrony respiratory motion compensation was assessed for 37 patients treated for liver lesions by analyzing data from system logfiles. A predictive model is used to modulate the direction of individual beams during dose delivery based on the positions of internally implanted fiducials determined using an orthogonal x-ray imaging system and the current location of LED external markers. For each x-ray pair acquired, system logfiles report the prediction error, the difference between the measured and predicted fiducial positions, and the delivery error, which is an estimate of the statistical error in the model overcoming the latency between x-ray acquisition and robotic repositioning. The total error was calculated at the time of each x-ray pair, averaged over the number of treatment fractions and the number of patients, giving the average respiratory motion compensation error in three dimensions. The 99th percentile for the total radial error is 3.85 mm, with the highest contribution of 2.79 mm in the superior/inferior (S/I) direction. The absolute mean compensation error is 1.78 mm radially, with a 1.27 mm contribution in the S/I direction. Regions of high total error may provide insight into features predicting groups of patients with larger or smaller total errors.
A review on Black-Scholes model in pricing warrants in Bursa Malaysia
NASA Astrophysics Data System (ADS)
Gunawan, Nur Izzaty Ilmiah Indra; Ibrahim, Siti Nur Iqmal; Rahim, Norhuda Abdul
2017-01-01
This paper studies the accuracy of the Black-Scholes (BS) model and the dilution-adjusted Black-Scholes (DABS) model to pricing some warrants traded in the Malaysian market. Mean Absolute Error (MAE) and Mean Absolute Percentage Error (MAPE) are used to compare the two models. Results show that the DABS model is more accurate than the BS model for the selected data.
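A minimal sketch of the two models follows, assuming the common dilution adjustment that scales the Black-Scholes call price by N/(N+M) for N outstanding shares and M warrants; the paper's exact DABS formulation may differ, so this is illustrative only.

```python
from math import log, sqrt, exp, erf

def _norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Standard Black-Scholes European call price."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * _norm_cdf(d1) - K * exp(-r * T) * _norm_cdf(d2)

def dabs_warrant(S, K, T, r, sigma, n_shares, n_warrants):
    """Dilution-adjusted warrant price: one common form simply scales
    the call price by N / (N + M). The published DABS model may use a
    more elaborate adjustment."""
    return n_shares / (n_shares + n_warrants) * bs_call(S, K, T, r, sigma)
```

Comparing model prices against market prices then reduces to computing MAE and MAPE over the warrant sample.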
Earthquakes Magnitude Predication Using Artificial Neural Network in Northern Red Sea Area
NASA Astrophysics Data System (ADS)
Alarifi, A. S.; Alarifi, N. S.
2009-12-01
Earthquakes are natural hazards that do not happen very often, but they may cause huge losses of life and property. Early preparation for these hazards is a key factor in reducing their damage and consequences. Since early ages, people have tried to predict earthquakes using simple observations such as strange or atypical animal behavior. In this paper, we study data collected from an existing earthquake catalogue to give better forecasting of future earthquakes. The 16,000 events cover a time span from 1970 to 2009; magnitudes range from greater than 0 to less than 7.2, while depths range from greater than 0 to less than 100 km. We propose a new artificial-intelligence prediction system based on an artificial neural network, which can be used to predict the magnitude of future earthquakes in the northern Red Sea area, including the Sinai Peninsula, the Gulf of Aqaba, and the Gulf of Suez. We propose a feed-forward neural network model with multiple hidden layers to predict earthquake occurrences and magnitudes in the northern Red Sea area. Although similar models have been published before for other areas, to the best of our knowledge this is the first neural network model to predict earthquakes in the northern Red Sea area. Furthermore, we present other forecasting methods such as moving averages over different intervals, a normally distributed random predictor, and a uniformly distributed random predictor. In addition, we present different statistical methods and data fitting such as linear, quadratic, and cubic regression. We present a detailed performance analysis of the proposed methods for different evaluation metrics. The results show that the neural network model provides higher forecast accuracy than the other proposed methods: it achieves an average absolute error of 2.6%, compared with average absolute errors of 3.8%, 7.3% and 6.17% for the moving average, linear regression and cubic regression, respectively.
In this work, we also present an analysis of the earthquake data in the northern Red Sea area for different statistical parameters such as correlation, mean, and standard deviation, to provide a deeper understanding of the seismicity of the area and its existing patterns.
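The simpler baseline predictors mentioned (moving average and polynomial regression) and an average-absolute-error metric can be sketched as follows; this is an illustrative reconstruction, not the paper's neural network, data, or exact error definition.

```python
import numpy as np

def moving_average_forecast(series, window):
    """Predict each value as the mean of the previous `window` values.
    Returns (predictions, matching actual values)."""
    series = np.asarray(series, dtype=float)
    preds = [series[i - window:i].mean() for i in range(window, series.size)]
    return np.asarray(preds), series[window:]

def polynomial_forecast(series, degree):
    """Fit magnitude against time index (linear/quadratic/cubic
    regression) and evaluate the fit in-sample."""
    t = np.arange(len(series), dtype=float)
    coeffs = np.polyfit(t, series, degree)
    return np.polyval(coeffs, t)

def average_absolute_error_pct(actual, predicted):
    """Average absolute error expressed as a percentage of the
    mean magnitude (one plausible normalization)."""
    actual = np.asarray(actual, dtype=float)
    return 100.0 * np.mean(np.abs(actual - predicted)) / np.mean(np.abs(actual))
```

Against such baselines, any learned model must at least beat the moving average to justify its complexity.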
Noise-Enhanced Eversion Force Sense in Ankles With or Without Functional Instability.
Ross, Scott E; Linens, Shelley W; Wright, Cynthia J; Arnold, Brent L
2015-08-01
Force sense impairments are associated with functional ankle instability. Stochastic resonance stimulation (SRS) may have implications for correcting these force sense deficits. To determine if SRS improved force sense. Case-control study. Research laboratory. Twelve people with functional ankle instability (age = 23 ± 3 years, height = 174 ± 8 cm, mass = 69 ± 10 kg) and 12 people with stable ankles (age = 22 ± 2 years, height = 170 ± 7 cm, mass = 64 ± 10 kg). The eversion force sense protocol required participants to reproduce a targeted muscle tension (10% of maximum voluntary isometric contraction). This protocol was assessed under SRSon and SRSoff (control) conditions. During SRSon, random subsensory mechanical noise was applied to the lower leg at a customized optimal intensity for each participant. Constant error, absolute error, and variable error measures quantified accuracy, overall performance, and consistency of force reproduction, respectively. With SRS, we observed main effects for force sense absolute error (SRSoff = 1.01 ± 0.67 N, SRSon = 0.69 ± 0.42 N) and variable error (SRSoff = 1.11 ± 0.64 N, SRSon = 0.78 ± 0.56 N) (P < .05). No other main effects or treatment-by-group interactions were found (P > .05). Although SRS reduced the overall magnitude (absolute error) and variability (variable error) of force sense errors, it had no effect on the directionality (constant error). Clinically, SRS may enhance muscle tension ability, which could have treatment implications for ankle stability.
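The three error measures used above have standard motor-control definitions, which can be sketched as:

```python
import numpy as np

def force_sense_errors(target, reproduced):
    """Constant, absolute and variable error for repeated force trials.

    constant error : mean signed error (directionality/bias)
    absolute error : mean unsigned error (overall performance)
    variable error : standard deviation of signed errors (consistency)
    """
    e = np.asarray(reproduced, dtype=float) - float(target)
    return e.mean(), np.abs(e).mean(), e.std(ddof=1)
```

Note how errors of opposite sign cancel in the constant error but not in the absolute error, which is why SRS can reduce absolute and variable error while leaving constant error unchanged.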
NASA Astrophysics Data System (ADS)
Jung, Jae Hong; Jung, Joo-Young; Cho, Kwang Hwan; Ryu, Mi Ryeong; Bae, Sun Hyun; Moon, Seong Kwon; Kim, Yong Ho; Choe, Bo-Young; Suh, Tae Suk
2017-02-01
The purpose of this study was to analyze the glottis rotational error (GRE) by using a thermoplastic mask for patients with glottic cancer undergoing intensity-modulated radiation therapy (IMRT). We selected 20 patients with glottic cancer who had received IMRT using tomotherapy. Image modalities with both kilovoltage computed tomography (planning kVCT) and megavoltage CT (daily MVCT) images were used for evaluating the error. Six anatomical landmarks in the image were defined to evaluate the correlation between the absolute GRE (°) and the length of contact between the mask and the underlying skin of the patient (mask, mm). We also statistically analyzed the results by using Pearson's correlation coefficient and a linear regression analysis (P < 0.05). The mask and the absolute GRE were verified to have a statistical correlation (P < 0.01). We found statistical significance for each parameter in the linear regression analysis (mask versus absolute roll: P = 0.004 [P < 0.05]; mask versus 3D-error: P = 0.000 [P < 0.05]). The range of the 3D-errors with contact by the mask was from 1.2% to 39.7% between the maximum-contact and no-contact cases in this study. A thermoplastic mask with a tight, increased contact area may contribute to the uncertainty of reproducibility as a variation of the absolute GRE. Thus, we suggest that a modified mask, such as one that covers only the glottis area, can significantly reduce patients' setup errors during treatment.
NASA Astrophysics Data System (ADS)
Guha, Daipayan; Jakubovic, Raphael; Gupta, Shaurya; Yang, Victor X. D.
2017-02-01
Computer-assisted navigation (CAN) may guide spinal surgeries, reliably reducing screw breach rates. Definitions of screw breach, if reported at all, vary widely across studies. Absolute quantitative error is theoretically a more precise and generalizable metric of navigation accuracy, but has been computed variably and reported in fewer than 25% of clinical studies of CAN-guided pedicle screw accuracy. We reviewed a prospectively-collected series of 209 pedicle screws placed with CAN guidance to characterize the correlation between clinical pedicle screw accuracy, based on postoperative imaging, and absolute quantitative navigation accuracy. We found that acceptable screw accuracy was achieved for significantly fewer screws based on the 2 mm grade than the Heary grade, particularly in the lumbar spine. Inter-rater agreement was good for the Heary classification and moderate for the 2 mm grade, and significantly greater among radiologist than surgeon raters. Mean absolute translational/angular accuracies were 1.75 mm/3.13° and 1.20 mm/3.64° in the axial and sagittal planes, respectively. There was no correlation between clinical and absolute navigation accuracy, in part because surgeons appear to compensate for perceived translational navigation error by adjusting the screw medialization angle. Future studies of navigation accuracy should therefore report absolute translational and angular errors. Clinical screw grades based on post-operative imaging, if reported, may be more reliable if assigned by multiple radiologist raters.
NASA Astrophysics Data System (ADS)
Sánchez-Doblado, Francisco; Capote, Roberto; Leal, Antonio; Roselló, Joan V.; Lagares, Juan I.; Arráns, Rafael; Hartmann, Günther H.
2005-03-01
Intensity modulated radiotherapy (IMRT) has become a treatment of choice in many oncological institutions. Small fields or beamlets with sizes of 1 to 5 cm2 are now routinely used in IMRT delivery. Therefore small ionization chambers (IC) with sensitive volumes <= 0.1 cm3 are generally used for dose verification of an IMRT treatment. The measurement conditions during verification may be quite different from reference conditions normally encountered in clinical beam calibration, so dosimetry of these narrow photon beams pertains to the so-called non-reference conditions for beam calibration. This work aims at estimating the error made when measuring the organ at risk's (OAR) absolute dose by a micro ion chamber (μIC) in a typical IMRT treatment. The dose error comes from the assumption that the dosimetric parameters determining the absolute dose are the same as for the reference conditions. We have selected two clinical cases, treated by IMRT, for our dose error evaluations. Detailed geometrical simulation of the μIC and the dose verification set-up was performed. The Monte Carlo (MC) simulation allows us to calculate the dose measured by the chamber as a dose averaged over the air cavity within the ion-chamber active volume (Dair). The absorbed dose to water (Dwater) is derived as the dose deposited inside the same volume, in the same geometrical position, filled and surrounded by water in the absence of the ion chamber. Therefore, the Dwater/Dair dose ratio is the MC estimator of the total correction factor needed to convert the absorbed dose in air into the absorbed dose in water. The dose ratio was calculated for the μIC located at the isocentre within the OARs for both clinical cases. The clinical impact of the calculated dose error was found to be negligible for the studied IMRT treatments.
NASA Astrophysics Data System (ADS)
Mitra, Ashis; Majumdar, Prabal Kumar; Bannerjee, Debamalya
2013-03-01
This paper presents a comparative analysis of two modeling methodologies for the prediction of air permeability of plain woven handloom cotton fabrics. Four basic fabric constructional parameters, namely ends per inch, picks per inch, warp count and weft count, have been used as inputs for artificial neural network (ANN) and regression models. Out of the four regression models tried, the interaction model showed very good prediction performance with a meager mean absolute error of 2.017%. However, ANN models demonstrated superiority over the regression models both in terms of correlation coefficient and mean absolute error. The ANN model with 10 nodes in the single hidden layer showed very good correlation coefficients of 0.982 and 0.929 and mean absolute errors of only 0.923% and 2.043% for the training and testing data, respectively.
The PMA Catalogue: 420 million positions and absolute proper motions
NASA Astrophysics Data System (ADS)
Akhmetov, V. S.; Fedorov, P. N.; Velichko, A. B.; Shulga, V. M.
2017-07-01
We present a catalogue that contains about 420 million absolute proper motions of stars. It was derived from the combination of positions from Gaia DR1 and 2MASS, with a mean difference of epochs of about 15 yr. Most of the systematic zonal errors inherent in the 2MASS Catalogue were eliminated before deriving the absolute proper motions. The absolute calibration procedure (zero-pointing of the proper motions) was carried out using about 1.6 million positions of extragalactic sources. The mean formal error of the absolute calibration is less than 0.35 mas yr⁻¹. The derived proper motions cover the whole celestial sphere without gaps for a range of stellar magnitudes from 8 to 21 mag. In the sky areas where the extragalactic sources are invisible (the avoidance zone), a dedicated procedure was used that transforms the relative proper motions into absolute ones. The rms error of proper motions depends on stellar magnitude and ranges from 2-5 mas yr⁻¹ for stars with 10 mag < G < 17 mag to 5-10 mas yr⁻¹ for faint ones. The present catalogue contains the Gaia DR1 positions of stars for the J2015 epoch. The system of the PMA proper motions does not depend on the systematic errors of the 2MASS positions, and in the range from 14 to 21 mag represents an independent realization of a quasi-inertial reference frame in the optical and near-infrared wavelength range. The Catalogue also contains stellar magnitudes taken from the Gaia DR1 and 2MASS catalogues. A comparison of the PMA proper motions of stars with similar data from certain recent catalogues has been undertaken.
Akhtar, Saeed; Rozi, Shafquat
2009-01-01
AIM: To identify the stochastic autoregressive integrated moving average (ARIMA) model for short-term forecasting of hepatitis C virus (HCV) seropositivity among volunteer blood donors in Karachi, Pakistan. METHODS: Ninety-six months (1998-2005) of data on HCV seropositive cases (per 1000 donors per month) among male volunteer blood donors tested at four major blood banks in Karachi, Pakistan were subjected to ARIMA modeling. Subsequently, the fitted ARIMA model was used to forecast HCV seropositive donors for months 91-96, to contrast with the observed series for the same months. To assess the forecast accuracy, the mean absolute error rate (%) between the observed and predicted HCV seroprevalence was calculated. Finally, the fitted ARIMA model was used for short-term forecasts beyond the observed series. RESULTS: The goodness-of-fit test of the optimum ARIMA (2,1,7) model showed non-significant autocorrelations in the residuals of the model. The forecasts by ARIMA for months 91-96 closely followed the pattern of the observed series for the same months, with a mean monthly absolute forecast error (%) over 6 months of 6.5%. The short-term forecasts beyond the observed series adequately captured the pattern in the data and showed an increasing tendency of HCV seropositivity, with a mean ± SD HCV seroprevalence (per 1000 donors per month) of 24.3 ± 1.4 over the forecast interval. CONCLUSION: To curtail HCV spread, public health authorities need to educate communities and health care providers about HCV transmission routes based on known HCV epidemiology in Pakistan and its neighboring countries. Future research may focus on factors associated with hyperendemic levels of HCV infection. PMID:19340903
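The forecast-accuracy measure described above — the mean absolute error rate (%) between observed and predicted monthly seroprevalence — has a standard form; a minimal sketch (the series values below are hypothetical, not the study's data):

```python
def mean_absolute_error_pct(observed, predicted):
    """Mean absolute forecast error (%) between observed and predicted series."""
    errors = [abs(o - p) / o * 100 for o, p in zip(observed, predicted)]
    return sum(errors) / len(errors)

# Hypothetical monthly seroprevalence (per 1000 donors) for months 91-96
observed = [23.0, 24.1, 22.8, 25.0, 24.4, 23.7]
predicted = [24.5, 23.0, 24.0, 23.5, 25.5, 24.9]
print(round(mean_absolute_error_pct(observed, predicted), 1))
```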
NASA Astrophysics Data System (ADS)
Sahu, Neelesh Kumar; Andhare, Atul B.; Andhale, Sandip; Raju Abraham, Roja
2018-04-01
Present work deals with prediction of surface roughness using cutting parameters along with in-process measured cutting force and tool vibration (acceleration) during turning of Ti-6Al-4V with cubic boron nitride (CBN) inserts. A full factorial design is used for the design of experiments, with cutting speed, feed rate and depth of cut as design variables. The prediction model for surface roughness is developed using response surface methodology (RSM) with cutting speed, feed rate, depth of cut, resultant cutting force and acceleration as control variables. Analysis of variance (ANOVA) is performed to find the significant terms in the model. Insignificant terms are removed after statistical testing using a backward elimination approach. The effect of each control variable on surface roughness is also studied. A prediction correlation coefficient (R²pred) of 99.4% shows that the model correctly explains the experimental results and behaves well even when factors are adjusted, added or eliminated. Validation of the model is done with five fresh experiments and the measured force and acceleration values. The average absolute error between the RSM model and the experimentally measured surface roughness is found to be 10.2%. Additionally, an artificial neural network (ANN) model is also developed for prediction of surface roughness, and its prediction results are compared with those of the modified regression model. It is found that the RSM model and the ANN (average absolute error 7.5%) predict roughness with more than 90% accuracy. From the results obtained, it is found that including cutting force and vibration for prediction of surface roughness gives better prediction than considering only cutting parameters. Also, the ANN gives better prediction than the RSM models.
Zhang, Xujun; Pang, Yuanyuan; Cui, Mengjing; Stallones, Lorann; Xiang, Huiyun
2015-02-01
Road traffic injuries have become a major public health problem in China. This study aimed to develop statistical models for predicting road traffic deaths and to analyze seasonality of deaths in China. A seasonal autoregressive integrated moving average (SARIMA) model was used to fit the data from 2000 to 2011. Akaike Information Criterion, Bayesian Information Criterion, and mean absolute percentage error were used to evaluate the constructed models. The autocorrelation function and partial autocorrelation function of residuals and the Ljung-Box test were used to compare the goodness-of-fit between the different models. The SARIMA model was used to forecast monthly road traffic deaths in 2012. The seasonal pattern of road traffic mortality data was statistically significant in China. The SARIMA (1, 1, 1)(0, 1, 1)₁₂ model was the best fitting model among the candidate models; its Akaike Information Criterion, Bayesian Information Criterion, and mean absolute percentage error were -483.679, -475.053, and 4.937, respectively. Goodness-of-fit testing showed no autocorrelations in the residuals of the model (Ljung-Box test, Q = 4.86, P = .993). The fitted deaths using the SARIMA (1, 1, 1)(0, 1, 1)₁₂ model for years 2000 to 2011 closely followed the observed number of road traffic deaths for the same years. The predicted and observed deaths were also very close for 2012. This study suggests that accurate forecasting of road traffic death incidence is possible using a SARIMA model. The SARIMA model applied to historical road traffic death data could provide important evidence of the burden of road traffic injuries in China. Copyright © 2015 Elsevier Inc. All rights reserved.
Forecasting influenza in Hong Kong with Google search queries and statistical model fusion.
Xu, Qinneng; Gel, Yulia R; Ramirez Ramirez, L Leticia; Nezafati, Kusha; Zhang, Qingpeng; Tsui, Kwok-Leung
2017-01-01
The objective of this study is to investigate the predictive utility of online social media and web search queries, particularly Google search data, to forecast new cases of influenza-like illness (ILI) in general outpatient clinics (GOPC) in Hong Kong. To mitigate the impact of sensitivity to self-excitement (i.e., fickle media interest) and other artifacts of online social media data, our approach fuses multiple offline and online data sources. Four individual models: generalized linear model (GLM), least absolute shrinkage and selection operator (LASSO), autoregressive integrated moving average (ARIMA), and deep learning (DL) with feedforward neural networks (FNN) are employed to forecast ILI-GOPC both one week and two weeks in advance. The covariates include Google search queries, meteorological data, and previously recorded offline ILI. To our knowledge, this is the first study that introduces deep learning methodology into surveillance of infectious diseases and investigates its predictive utility. Furthermore, to exploit the strengths of the individual forecasting models, we use statistical model fusion via Bayesian model averaging (BMA), which allows a systematic integration of multiple forecast scenarios. For each model, an adaptive approach is used to capture the recent relationship between ILI and the covariates. DL with FNN appears to deliver the most competitive predictive performance among the four individual models considered. Combining all four models in a comprehensive BMA framework further improves such predictive evaluation metrics as root mean squared error (RMSE) and mean absolute predictive error (MAPE). Nevertheless, DL with FNN remains the preferred method for predicting locations of influenza peaks. The proposed approach can be viewed as a feasible alternative for forecasting ILI in Hong Kong or other countries where ILI has no constant seasonal trend and influenza data resources are limited.
The proposed methodology is easily tractable and computationally efficient.
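The BMA fusion step can be illustrated with a common BIC-based approximation to the posterior model weights; this is a generic sketch, not the authors' exact implementation, and all forecast and BIC values below are hypothetical:

```python
import math

def bma_weights(bic_scores):
    """Approximate posterior model weights from BIC: w_k ∝ exp(-0.5 * ΔBIC_k)."""
    best = min(bic_scores)
    raw = [math.exp(-0.5 * (b - best)) for b in bic_scores]
    total = sum(raw)
    return [r / total for r in raw]

def bma_forecast(forecasts, weights):
    """Weighted average of the individual model forecasts."""
    return sum(f * w for f, w in zip(forecasts, weights))

# Hypothetical one-week-ahead ILI forecasts from GLM, LASSO, ARIMA, DL models
forecasts = [120.0, 131.0, 118.0, 126.0]
weights = bma_weights([210.2, 208.9, 214.5, 207.1])
print(round(bma_forecast(forecasts, weights), 1))
```

Models with lower BIC receive exponentially larger weight, so the fused forecast leans toward the best-supported model without discarding the others.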
Clinical time series prediction: Toward a hierarchical dynamical system framework.
Liu, Zitao; Hauskrecht, Milos
2015-09-01
Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding of the patient condition, the dynamics of a disease, effect of various patient management interventions and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Our hierarchical dynamical system framework for modeling clinical time series combines advantages of the two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series by using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. We tested our framework by first learning the time series model from data for the patients in the training set, and then using it to predict future time series values for the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when it was compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered. A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive performance. Copyright © 2014 Elsevier B.V. All rights reserved.
Error Analysis of Wind Measurements for the University of Illinois Sodium Doppler Temperature System
NASA Technical Reports Server (NTRS)
Pfenninger, W. Matthew; Papen, George C.
1992-01-01
Four-frequency lidar measurements of temperature and wind velocity require accurate frequency tuning to an absolute reference and long-term frequency stability. We quantify frequency tuning errors for the Illinois sodium system, which uses a sodium vapor cell to measure absolute frequencies and a reference interferometer to measure relative frequencies. To determine laser tuning errors, we monitor the vapor cell and interferometer during lidar data acquisition and analyze the two signals for variations as functions of time. Both the sodium cell and the interferometer are the same as those used to frequency tune the laser. By quantifying the frequency variations of the laser during data acquisition, an error analysis of temperature and wind measurements can be performed. These error bounds determine the confidence in the calculated temperatures and wind velocities.
Wavelet regression model in forecasting crude oil price
NASA Astrophysics Data System (ADS)
Hamid, Mohd Helmie; Shabri, Ani
2017-05-01
This study presents the performance of the wavelet multiple linear regression (WMLR) technique in daily crude oil price forecasting. The WMLR model was developed by integrating the discrete wavelet transform (DWT) and the multiple linear regression (MLR) model. The original time series was decomposed into sub-time series with different scales by wavelet theory. Correlation analysis was conducted to assist in the selection of optimal decomposed components as inputs for the WMLR model. The daily WTI crude oil price series was used in this study to test the prediction capability of the proposed model. The forecasting performance of the WMLR model was also compared with regular multiple linear regression (MLR), autoregressive integrated moving average (ARIMA) and generalized autoregressive conditional heteroscedasticity (GARCH) models using root mean square error (RMSE) and mean absolute error (MAE). Based on the experimental results, the WMLR model performs better than the other forecasting techniques tested in this study.
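The decomposition step at the heart of WMLR can be illustrated with a one-level Haar DWT, the simplest wavelet; this is a minimal sketch (the study does not specify which mother wavelet was used, so Haar here is an assumption), and in the WMLR scheme such sub-series become candidate regressors for the MLR model:

```python
def haar_dwt(series):
    """One-level Haar DWT of an even-length series.

    Returns (approximation, detail) sub-series capturing the smooth trend
    and the fine-scale fluctuations, respectively.
    """
    approx = [(series[i] + series[i + 1]) / 2 ** 0.5 for i in range(0, len(series), 2)]
    detail = [(series[i] - series[i + 1]) / 2 ** 0.5 for i in range(0, len(series), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse transform, reconstructing the original series exactly."""
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) / 2 ** 0.5)
        out.append((a - d) / 2 ** 0.5)
    return out
```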
All the adiabatic bound states of NO₂
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salzgeber, R.F.; Mandelshtam, V.; Schlier, C.
1998-07-01
We calculated all 2967 even and odd bound states of the adiabatic ground state of NO₂, using a modification of the ab initio potential energy surface of Leonardi et al. [J. Chem. Phys. 105, 9051 (1996)]. The calculation was performed by harmonic inversion of the Chebyshev correlation function generated by a DVR Hamiltonian in Radau coordinates. The relative error for the computed eigenenergies (measured from the potential minimum) is 10⁻⁴ or better, corresponding to an absolute error of less than about 2.5 cm⁻¹. Near the dissociation threshold the average density of states is about 0.2/cm⁻¹ for each symmetry. Statistical analysis of the states shows some interesting structure of the rigidity parameter Δ₃ as a function of energy. © 1998 American Institute of Physics.
Zhang, Tangtang; Wen, Jun; van der Velde, Rogier; Meng, Xianhong; Li, Zhenchao; Liu, Yuanyong; Liu, Rong
2008-01-01
The total atmospheric water vapor content (TAWV) and land surface temperature (LST) play important roles in meteorology, hydrology, ecology and other disciplines. In this paper, ENVISAT/AATSR (Advanced Along-Track Scanning Radiometer) thermal data are used to estimate the TAWV and LST over the Loess Plateau in China by using a practical split window algorithm. The distribution of the TAWV accords with that of the MODIS TAWV products, which indicates that the estimation of the total atmospheric water vapor content is reliable. Validation of the LST against ground measurements indicates that the maximum absolute deviation, the maximum relative error and the average relative error are 4.0 K, 11.8% and 5.0%, respectively, which shows that the retrievals are credible; this algorithm can provide a new way to estimate the LST from AATSR data. PMID:27879795
Gade, Venkata; Allen, Jerome; Cole, Jeffrey L; Barrance, Peter J
2016-07-01
To characterize the ability of patients with symptomatic knee osteoarthritis (OA) to perform a weight-bearing activity compatible with upright magnetic resonance imaging (MRI) scanning and how this ability is affected by knee pain symptoms and flexion angles. Cross-sectional observational study assessing effects of knee flexion angle, pain level, and study sequence on accuracy and duration of performing a task used in weight-bearing MRI evaluation. Visual feedback of knee position from an MRI compatible sensor was provided. Pain levels were self-reported on a standardized scale. Simulated MRI setup in a research laboratory. Convenience sample of individuals (N=14; 9 women, 5 men; mean, 69±14y) with symptomatic knee OA. Not applicable. Averaged absolute and signed angle error from target knee flexion for each minute of trial and duration tolerance (the duration that subjects maintained position within a prescribed error threshold). Absolute targeting error increased at longer trial durations (P<.001). Duration tolerance decreased with increasing pain (mean ± SE, no pain: 3min 19s±11s; severe pain: 1min 49s±23s; P=.008). Study sequence affected duration tolerance (first knee: 3min 5s±9.1s; second knee: 2min 19s±9.7s; P=.015). The study provided evidence that weight-bearing MRI evaluations based on imaging protocols in the range of 2 to 3 minutes are compatible with patients reporting mild to moderate knee OA-related pain. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, X.; Zhang, C.; Li, W.
2017-12-01
Long-term spatiotemporal analysis and modeling of aerosol optical depth (AOD) distribution is of paramount importance to study radiative forcing, climate change, and human health. This study is focused on the trends and variations of AOD over six stations located in the United States and China during 2003 to 2015, using satellite-retrieved Moderate Resolution Imaging Spectroradiometer (MODIS) Collection 6 retrievals and ground measurements derived from the Aerosol Robotic NETwork (AERONET). An autoregressive integrated moving average (ARIMA) model is applied to simulate and predict AOD values. The R², adjusted R², Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and Bayesian Information Criterion (BIC) are used as indices to select the best fitted model. Results show that there is a persistent decreasing trend in AOD for both MODIS data and AERONET data over three stations. Monthly and seasonal AOD variations reveal consistent aerosol patterns over stations along mid-latitudes. Regional differences impacted by climatology and land cover types are observed for the selected stations. Statistical validation of the time series models indicates that the non-seasonal ARIMA model performs better for AERONET AOD data than for MODIS AOD data over most stations, suggesting the method works better for data with higher quality. By contrast, the seasonal ARIMA model reproduces the seasonal variations of MODIS AOD data much more precisely. Overall, the reasonably predicted results indicate the applicability and feasibility of the stochastic ARIMA modeling technique to forecast future and missing AOD values.
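Three of the model-selection indices named above (RMSE, MAE, MAPE) have standard definitions; a minimal sketch:

```python
import math

def rmse(y, yhat):
    """Root mean square error."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

def mae(y, yhat):
    """Mean absolute error."""
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def mape(y, yhat):
    """Mean absolute percentage error (observed values must be non-zero)."""
    return 100 / len(y) * sum(abs((a - b) / a) for a, b in zip(y, yhat))
```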
Tan, Ting; Chen, Lizhang; Liu, Fuqiang
2014-11-01
To establish a multiple seasonal autoregressive integrated moving average (ARIMA) model for the hand-foot-mouth disease incidence in Changsha, and to explore the feasibility of the multiple seasonal ARIMA in predicting the hand-foot-mouth disease incidence. EVIEWS 6.0 was used to establish the multiple seasonal ARIMA from the hand-foot-mouth disease incidence from May 2008 to August 2013 in Changsha; the data on the hand-foot-mouth disease incidence from September 2013 to February 2014 served as the test samples for the model, and the errors between the forecasted incidence and the real values were compared. Finally, the incidence of hand-foot-mouth disease from March 2014 to August 2014 was predicted by the model. After the data sequence was handled by smoothing, model identification and model diagnosis, the multiple seasonal ARIMA (1, 0, 1)×(0, 1, 1)₁₂ was established. The R² value of the model fitting degree was 0.81, the root mean square prediction error was 8.29 and the mean absolute error was 5.83. The multiple seasonal ARIMA is a good prediction model with a good fitting degree. It can provide a reference for prevention and control work on hand-foot-mouth disease.
NASA Astrophysics Data System (ADS)
Salehi, Mohammad Reza; Noori, Leila; Abiri, Ebrahim
2016-11-01
In this paper, a subsystem consisting of a microstrip bandpass filter and a microstrip low noise amplifier (LNA) is designed for WLAN applications. The proposed filter has a small implementation area (49 mm²), small insertion loss (0.08 dB) and wide fractional bandwidth (FBW) (61%). To design the proposed LNA, compact microstrip cells, a field-effect transistor, and only a lumped capacitor are used. It has a low supply voltage and a low return loss (-40 dB) at the operating frequency. The matching condition of the proposed subsystem is predicted using subsystem analysis, an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS). To design the proposed filter, the transmission matrix of the proposed resonator is obtained and analysed. The performance of the proposed ANN and ANFIS models is tested against the numerical data using four performance measures, namely the correlation coefficient (CC), the mean absolute error (MAE), the average percentage error (APE) and the root mean square error (RMSE). The obtained results show that these models are in good agreement with the numerical data, with only a small error between the predicted values and the numerical solution.
Predictability of the Arctic sea ice edge
NASA Astrophysics Data System (ADS)
Goessling, H. F.; Tietsche, S.; Day, J. J.; Hawkins, E.; Jung, T.
2016-02-01
Skillful sea ice forecasts from days to years ahead are becoming increasingly important for the operation and planning of human activities in the Arctic. Here we analyze the potential predictability of the Arctic sea ice edge in six climate models. We introduce the integrated ice-edge error (IIEE), a user-relevant verification metric defined as the area where the forecast and the "truth" disagree on the ice concentration being above or below 15%. The IIEE lends itself to decomposition into an absolute extent error, corresponding to the common sea ice extent error, and a misplacement error. We find that the often-neglected misplacement error makes up more than half of the climatological IIEE. In idealized forecast ensembles initialized on 1 July, the IIEE grows faster than the absolute extent error. This means that the Arctic sea ice edge is less predictable than sea ice extent, particularly in September, with implications for the potential skill of end-user relevant forecasts.
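The IIEE and its decomposition as defined above can be sketched directly; this minimal version assumes concentration grids on equal-area cells flattened to 1-D lists (the grid representation is an assumption for illustration, not from the paper):

```python
def iiee_decomposition(forecast, truth, cell_area=1.0, threshold=0.15):
    """Integrated ice-edge error (IIEE) and its decomposition.

    forecast, truth: sea ice concentration per grid cell (0..1).
    IIEE = area where the two disagree on concentration exceeding threshold.
    Absolute extent error = |overestimated area - underestimated area|;
    misplacement error = IIEE - absolute extent error.
    """
    over = sum(cell_area for f, t in zip(forecast, truth)
               if f > threshold and t <= threshold)
    under = sum(cell_area for f, t in zip(forecast, truth)
                if f <= threshold and t > threshold)
    iiee = over + under
    aee = abs(over - under)        # absolute extent error
    return iiee, aee, iiee - aee   # misplacement error

# Toy 1-D strip of ice concentrations
f = [0.9, 0.5, 0.1, 0.0, 0.4]
t = [0.9, 0.1, 0.3, 0.0, 0.1]
print(iiee_decomposition(f, t))
```

Note how a forecast ice edge that is merely displaced (equal over- and underestimated areas) yields zero extent error but a large misplacement error, which is why the IIEE grows faster than the common extent error.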
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ellefson, S; Department of Human Oncology, University of Wisconsin, Madison, WI; Culberson, W
Purpose: Discrepancies in absolute dose values have been detected between the ViewRay treatment planning system and ArcCHECK readings when performing delivery quality assurance on the ViewRay system with the ArcCHECK-MR diode array (SunNuclear Corporation). In this work, we investigate whether these discrepancies are due to errors in the ViewRay planning and/or delivery system or due to errors in the ArcCHECK's readings. Methods: Gamma analysis was performed on 19 ViewRay patient plans using the ArcCHECK. Frequency analysis on the dose differences was performed. To investigate whether discrepancies were due to measurement or delivery error, 10 diodes in low-gradient dose regions were chosen to compare with ion chamber measurements in a PMMA phantom with the same size and shape as the ArcCHECK, provided by SunNuclear. The diodes chosen all had significant discrepancies in absolute dose values compared to the ViewRay TPS. Absolute doses to PMMA were compared between the ViewRay TPS calculations, ArcCHECK measurements, and measurements in the PMMA phantom. Results: Three of the 19 patient plans had 3%/3mm gamma passing rates less than 95%, and ten of the 19 plans had 2%/2mm passing rates less than 95%. Frequency analysis implied a non-random error process. Out of the 10 diode locations measured, ion chamber measurements were all within 2.2% error relative to the TPS and had a mean error of 1.2%. ArcCHECK measurements ranged from 4.5% to over 15% error relative to the TPS and had a mean error of 8.0%. Conclusion: The ArcCHECK performs well for quality assurance on the ViewRay under most circumstances. However, under certain conditions the absolute dose readings are significantly higher than the planned doses. As the ion chamber measurements consistently agree with the TPS, it can be concluded that the discrepancies are due to ArcCHECK measurement error and not TPS or delivery system error.
This work was funded by the Bhudatt Paliwal Professorship and the University of Wisconsin Medical Radiation Research Center.
Very-short-term wind power prediction by a hybrid model with single- and multi-step approaches
NASA Astrophysics Data System (ADS)
Mohammed, E.; Wang, S.; Yu, J.
2017-05-01
Very-short-term wind power prediction (VSTWPP) plays an essential role in the operation of electric power systems. This paper aims at improving and applying a hybrid method of VSTWPP based on historical data. The hybrid method combines multiple linear regression and least squares (MLR&LS) and is intended to reduce prediction errors. The predicted values are obtained through two sub-processes: 1) transform the time-series data of actual wind power into the power ratio, and then predict the power ratio; 2) use the predicted power ratio to predict the wind power. In addition, the proposed method includes two prediction approaches: single-step prediction (SSP) and multi-step prediction (MSP). The WPP is tested comparatively against an auto-regressive moving average (ARMA) model on the predicted values and errors. The validity of the proposed hybrid method is confirmed in terms of error analysis using the probability density function (PDF), mean absolute percent error (MAPE) and mean square error (MSE). Meanwhile, comparison of the correlation coefficients between the actual values and the predicted values for different prediction times and windows confirms that the MSP approach using the hybrid model is the most accurate compared to the SSP approach and ARMA. The MLR&LS method is accurate and promising for solving problems in WPP.
SU-E-T-261: Plan Quality Assurance of VMAT Using Fluence Images Reconstituted From Log-Files
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katsuta, Y; Shimizu, E; Matsunaga, K
2014-06-01
Purpose: A successful VMAT plan delivery includes precise modulation of dose rate, gantry rotation and multi-leaf collimator (MLC) shapes. One of the main problems in plan quality assurance is that dosimetric errors associated with leaf-positional errors are difficult to analyze because they vary with the MU delivered and the leaf number. In this study, we calculated an integrated fluence error image (IFEI) from log-files and evaluated plan quality in the areas scanned by all and by individual MLC leaves. Methods: The log-file reported the expected and actual positions for the inner 20 MLC leaves and the dose fraction every 0.25 seconds during prostate VMAT on an Elekta Synergy. These data were imported into in-house software developed to calculate expected and actual fluence images from the difference of opposing leaf trajectories and the dose fraction at each time. The IFEI was obtained by adding all of the absolute values of the differences between the corresponding expected and actual fluence images. Results: In the area scanned by all MLC leaves in the IFEI, the average and root mean square (rms) were 2.5 and 3.6 MU, the areas with errors below 10, 5 and 3 MU were 98.5, 86.7 and 68.1%, and 95% of the area was covered with an error of less than 7.1 MU. In the areas scanned by individual MLC leaves in the IFEI, the average and rms values were 2.1-3.0 and 3.1-4.0 MU, the areas with errors below 10, 5 and 3 MU were 97.6-99.5, 81.7-89.5 and 51.2-72.8%, and 95% of the area was covered with an error of less than 6.6-8.2 MU. Conclusion: The analysis of the IFEI reconstituted from log-files provided detailed information about the delivery in the areas scanned by all and by individual MLC leaves.
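The IFEI construction — accumulating the absolute differences between expected and actual fluence images over all time samples — can be sketched as follows, with each fluence image flattened to a list of pixel values (a representational assumption for illustration):

```python
def integrated_fluence_error(expected_frames, actual_frames):
    """Integrated fluence error image: per-pixel sum over time of
    |expected - actual| fluence, in MU."""
    n_pix = len(expected_frames[0])
    ifei = [0.0] * n_pix
    for exp_f, act_f in zip(expected_frames, actual_frames):
        for i in range(n_pix):
            ifei[i] += abs(exp_f[i] - act_f[i])
    return ifei

# Two time samples of a 3-pixel fluence strip (hypothetical values, MU)
expected = [[1.0, 2.0, 0.5], [1.0, 2.0, 0.5]]
actual = [[0.5, 2.0, 0.5], [1.5, 2.0, 0.7]]
print(integrated_fluence_error(expected, actual))
```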
Popov, I; Valašková, J; Štefaničková, J; Krásnik, V
2017-01-01
A substantial part of the population suffers from some kind of refractive error. It is envisaged that their prevalence may change with the development of society. The aim of this study is to determine the prevalence of refractive errors using calculations based on the Gullstrand schematic eye model. We used the Gullstrand schematic eye model to calculate refraction retrospectively. Refraction was presented as the need for glasses correction at a vertex distance of 12 mm. The necessary data was obtained using the optical biometer Lenstar LS900. Data which could not be obtained due to the limitations of the device was substituted by theoretical data from the Gullstrand schematic eye model. Only analyses from the right eyes were presented. The data was interpreted using descriptive statistics, Pearson correlation and t-tests. The statistical tests were conducted at a level of significance of 5%. Our sample included 1663 patients (665 male, 998 female) within the age range of 19 to 96 years. Average age was 70.8 ± 9.53 years. Average refraction of the eye was 2.73 ± 2.13 D (males 2.49 ± 2.34, females 2.90 ± 2.76). The mean absolute error from emmetropia was 3.01 ± 1.58 (males 2.83 ± 2.95, females 3.25 ± 3.35). 89.06% of the sample was hyperopic, 6.61% myopic and 4.33% emmetropic. We did not find any correlation between refraction and age. Females were more hyperopic than males. We did not find any statistically significant hypermetropic shift of refraction with age. According to our estimation, the calculations of refractive errors using the Gullstrand schematic eye model showed a significant hypermetropic shift of more than +2 D. Our results could be used in future for comparing the prevalence of refractive errors determined using the same methods. Key words: refractive errors, refraction, Gullstrand schematic eye model, population, emmetropia.
Modeling the Error of the Medtronic Paradigm Veo Enlite Glucose Sensor.
Biagi, Lyvia; Ramkissoon, Charrise M; Facchinetti, Andrea; Leal, Yenny; Vehi, Josep
2017-06-12
Continuous glucose monitors (CGMs) are prone to inaccuracy due to time lags, sensor drift, calibration errors, and measurement noise. The aim of this study is to derive the model of the error of the second generation Medtronic Paradigm Veo Enlite (ENL) sensor and compare it with the Dexcom SEVEN PLUS (7P), G4 PLATINUM (G4P), and advanced G4 for Artificial Pancreas studies (G4AP) systems. An enhanced methodology based on a previously employed technique was utilized to dissect the sensor error into several components. The dataset used included 37 inpatient sessions in 10 subjects with type 1 diabetes (T1D), in which CGMs were worn in parallel and blood glucose (BG) samples were analyzed every 15 ± 5 min. Calibration error and sensor drift of the ENL sensor were best described by a linear relationship related to the gain and offset. The mean time lag estimated by the model is 9.4 ± 6.5 min. The overall average mean absolute relative difference (MARD) of the ENL sensor was 11.68 ± 5.07%. Calibration error had the highest contribution to total error in the ENL sensor. This was also reported in the 7P, G4P, and G4AP. The model of the ENL sensor error will be useful to test the in silico performance of CGM-based applications, i.e., the artificial pancreas, employing this kind of sensor.
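The MARD figure quoted above has a standard definition — the mean of |CGM − BG|/BG, expressed in percent — which can be sketched as follows (the example readings are hypothetical):

```python
def mard(cgm, bg):
    """Mean absolute relative difference (%) between CGM readings and
    reference blood glucose samples."""
    return sum(abs(c - b) / b for c, b in zip(cgm, bg)) / len(bg) * 100

# Hypothetical paired CGM and reference BG values (mg/dL)
cgm_readings = [112.0, 95.0, 150.0, 178.0]
bg_samples = [100.0, 102.0, 141.0, 170.0]
print(round(mard(cgm_readings, bg_samples), 2))
```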
Systematic errors of EIT systems determined by easily-scalable resistive phantoms.
Hahn, G; Just, A; Dittmar, J; Hellige, G
2008-06-01
We present a simple method to determine systematic errors that will occur in the measurements by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16 electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurements and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal to noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise since the Goe-MF II system had been optimized for a sufficient signal to noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT) systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT) where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design.
Kumar, K Vasanth; Porkodi, K; Rocha, F
2008-01-15
A comparison of the linear and non-linear regression methods for selecting the optimum isotherm was made using the experimental equilibrium data of basic red 9 sorption by activated carbon. The r² was used to select the best-fit linear theoretical isotherm. In the case of the non-linear regression method, six error functions, namely the coefficient of determination (r²), hybrid fractional error function (HYBRID), Marquardt's percent standard deviation (MPSD), average relative error (ARE), sum of the errors squared (ERRSQ) and sum of the absolute errors (EABS), were used to estimate the parameters of the two- and three-parameter isotherms and to select the optimum isotherm. Non-linear regression was found to be a better way to obtain both the isotherm parameters and the optimum isotherm. For the two-parameter isotherms, MPSD was found to be the best error function for minimizing the error distribution between the experimental equilibrium data and the predicted isotherms. In the case of the three-parameter isotherms, r² was found to be the best error function for minimizing the error distribution between experimental equilibrium data and theoretical isotherms. The present study showed that the magnitude of the error function alone is not a deciding factor in choosing the optimum isotherm. In addition to the magnitude of the error function, the theory behind the predicted isotherm should be verified against the experimental data when selecting the optimum isotherm. A coefficient of non-determination, K², was introduced and found to be very useful in identifying the best error function when selecting the optimum isotherm.
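The error functions named above have widely used forms in the sorption literature, though exact conventions vary between papers. A minimal sketch under those common definitions (q_exp and q_calc are experimental and model-predicted uptakes; n_params is the number of isotherm parameters):

```python
import math

def error_functions(q_exp, q_calc, n_params):
    """Common isotherm-fitting error functions (usual literature forms;
    conventions differ slightly between papers, so treat as illustrative)."""
    n = len(q_exp)
    diffs = [e - c for e, c in zip(q_exp, q_calc)]
    errsq = sum(d * d for d in diffs)                                  # ERRSQ
    eabs = sum(abs(d) for d in diffs)                                  # EABS
    are = (100.0 / n) * sum(abs(d) / e for d, e in zip(diffs, q_exp))  # ARE
    hybrid = (100.0 / (n - n_params)) * sum(
        d * d / e for d, e in zip(diffs, q_exp))                       # HYBRID
    mpsd = 100.0 * math.sqrt(
        sum((d / e) ** 2 for d, e in zip(diffs, q_exp)) / (n - n_params))  # MPSD
    return {"ERRSQ": errsq, "EABS": eabs, "ARE": are,
            "HYBRID": hybrid, "MPSD": mpsd}
```

For a two-parameter isotherm such as Langmuir, one would minimize any of these over the two parameters and compare the resulting fits.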
Benchmark quality total atomization energies of small polyatomic molecules
NASA Astrophysics Data System (ADS)
Martin, Jan M. L.; Taylor, Peter R.
1997-05-01
Successive coupled-cluster [CCSD(T)] calculations in basis sets of spdf, spdfg, and spdfgh quality, combined with separate Schwartz-type extrapolations A + B/(l + 1/2)^α of the self-consistent field (SCF) and correlation energies, permit the calculation of molecular total atomization energies (TAEs) with a mean absolute error as low as 0.12 kcal/mol. For the largest molecule treated, C2H4, we find ΣD0 = 532.0 kcal/mol, in perfect agreement with experiment. The aug-cc-pV5Z basis set recovers on average about 99% of the valence correlation contribution to the TAE, and essentially the entire SCF contribution.
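The two-point form of such an extrapolation has a closed-form solution: given energies at successive angular-momentum truncations l and l+1 and an assumed exponent α, the basis-set limit A follows directly. A hedged sketch (the paper fits SCF and correlation energies separately, each with its own exponent; the numbers below are synthetic):

```python
def schwartz_extrapolate(e_l, e_lp1, l, alpha):
    """Two-point extrapolation of E(l) = A + B/(l + 1/2)**alpha to the
    basis-set limit A, from energies at truncations l and l+1."""
    x1 = (l + 0.5) ** (-alpha)
    x2 = (l + 1.5) ** (-alpha)
    b = (e_l - e_lp1) / (x1 - x2)
    return e_l - b * x1  # A, the extrapolated limit

# synthetic check: energies generated from A = -0.500, B = 1.2, alpha = 3
A, B, alpha = -0.500, 1.2, 3.0
e3 = A + B / 3.5 ** alpha  # spdf-level (l = 3)
e4 = A + B / 4.5 ** alpha  # spdfg-level (l = 4)
print(round(schwartz_extrapolate(e3, e4, 3, alpha), 6))  # → -0.5
```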
NASA Technical Reports Server (NTRS)
Held, D.; Werner, C.; Wall, S.
1983-01-01
The absolute amplitude calibration of the spaceborne Seasat SAR data set is presented based on previous relative calibration studies. A scale factor making it possible to express the perceived radar brightness of a scene in units of sigma-zero is established. The system components are analyzed for error contribution, and the calibration techniques are introduced for each stage. These include: A/D converter saturation tests; prevention of clipping in the processing step; and converting the digital image into the units of received power. Experimental verification was performed by screening and processing the data of the lava flow surrounding the Pisgah Crater in Southern California, for which previous C-130 airborne scatterometer data were available. The average backscatter difference between the two data sets is estimated to be 2 dB in the brighter, and 4 dB in the dimmer regions. For the SAR a calculated uncertainty of 3 dB is expected.
Surveying implicit solvent models for estimating small molecule absolute hydration free energies
Knight, Jennifer L.
2011-01-01
Implicit solvent models are powerful tools in accounting for the aqueous environment at a fraction of the computational expense of explicit solvent representations. Here, we compare the ability of common implicit solvent models (TC, OBC, OBC2, GBMV, GBMV2, GBSW, GBSW/MS, GBSW/MS2 and FACTS) to reproduce experimental absolute hydration free energies for a series of 499 small neutral molecules that are modeled using AMBER/GAFF parameters and AM1-BCC charges. Given optimized surface tension coefficients for scaling the surface area term in the nonpolar contribution, most implicit solvent models demonstrate reasonable agreement with extensive explicit solvent simulations (average difference 1.0-1.7 kcal/mol and R² = 0.81-0.91) and with experimental hydration free energies (average unsigned errors = 1.1-1.4 kcal/mol and R² = 0.66-0.81). Chemical classes of compounds are identified that need further optimization of their ligand force field parameters and others that require improvement in the physical parameters of the implicit solvent models themselves. More sophisticated nonpolar models are also likely necessary to more effectively represent the underlying physics of solvation and take the quality of hydration free energies estimated from implicit solvent models to the next level. PMID:21735452
Interobserver error involved in independent attempts to measure cusp base areas of Pan M1s
Bailey, Shara E; Pilbrow, Varsha C; Wood, Bernard A
2004-01-01
Cusp base areas measured from digitized images increase the amount of detailed quantitative information one can collect from post-canine crown morphology. Although this method is gaining wide usage for taxonomic analyses of extant and extinct hominoids, the techniques for digitizing images and taking measurements differ between researchers. The aim of this study was to investigate interobserver error in order to help assess the reliability of cusp base area measurement within extant and extinct hominoid taxa. Two of the authors measured individual cusp base areas and total cusp base area of 23 maxillary first molars (M1) of Pan. From these, relative cusp base areas were calculated. No statistically significant interobserver differences were found for either absolute or relative cusp base areas. On average the hypocone and paracone showed the least interobserver error (< 1%) whereas the protocone and metacone showed the most (2.6–4.5%). We suggest that the larger measurement error in the metacone/protocone is due primarily to either weakly defined fissure patterns and/or the presence of accessory occlusal features. Overall, levels of interobserver error are similar to those found for intraobserver error. The results of our study suggest that if certain prescribed standards are employed then cusp and crown base areas measured by different individuals can be pooled into a single database. PMID:15447691
An Alternative Time Metric to Modified Tau for Unmanned Aircraft System Detect And Avoid
NASA Technical Reports Server (NTRS)
Wu, Minghong G.; Bageshwar, Vibhor L.; Euteneuer, Eric A.
2017-01-01
A new horizontal time metric, Time to Protected Zone, is proposed for use in the Detect and Avoid (DAA) systems carried by unmanned aircraft systems (UAS). This time metric has three advantages over the currently adopted metric, modified tau: it corresponds to a physical event, it is linear in time, and it can be used directly to prioritize intruding aircraft. The protected zone defines an area around the UAS that can be a function of each intruding aircraft's surveillance measurement errors. Even with these advantages, the Time to Protected Zone depends explicitly on encounter geometry and may be more sensitive to surveillance sensor errors than modified tau. To quantify its sensitivity, simulation of 972 encounters using realistic sensor models and a proprietary fusion tracker was performed. Two sensitivity metrics, the probability of time reversal and the average absolute time error, were computed for both the Time to Protected Zone and modified tau. Results show that the sensitivity of the Time to Protected Zone is comparable to that of modified tau if the dimensions of the protected zone are adequately defined.
Performance appraisal of VAS radiometry for GOES-4, -5 and -6
NASA Technical Reports Server (NTRS)
Chesters, D.; Robinson, W. D.
1983-01-01
The first three VISSR Atmospheric Sounders (VAS) were launched on GOES-4, -5, and -6 in 1980, 1981 and 1983. Postlaunch radiometric performance is assessed for noise, biases, registration and reliability, with special attention to calibration and problems in the data processing chain. The postlaunch performance of the VAS radiometer meets its prelaunch design specifications, particularly those related to image formation and noise reduction. The best instrument is carried on GOES-5, currently operational as GOES-EAST. Single-sample noise is lower than expected, especially for the small longwave and large shortwave detectors. Detector-to-detector offsets are correctable to within the resolution limits of the instrument. Truncation, zero point and droop errors are insignificant. Absolute calibration errors, estimated from HIRS and from radiation transfer calculations, indicate moderate but stable biases. Relative calibration errors from scanline to scanline are noticeable, but meet sounding requirements for temporally and spatially averaged sounding fields of view. The VAS instrument is a potentially useful radiometer for mesoscale sounding operations. Image quality is very good. Soundings derived from quality controlled data meet prelaunch requirements when calculated with noise and bias resistant algorithms.
Hong, KyungPyo; Jeong, Eun-Kee; Wall, T. Scott; Drakos, Stavros G.; Kim, Daniel
2015-01-01
Purpose To develop and evaluate a wideband arrhythmia-insensitive-rapid (AIR) pulse sequence for cardiac T1 mapping without image artifacts induced by implantable-cardioverter-defibrillator (ICD). Methods We developed a wideband AIR pulse sequence by incorporating a saturation pulse with wide frequency bandwidth (8.9 kHz), in order to achieve uniform T1 weighting in the heart with ICD. We tested the performance of original and “wideband” AIR cardiac T1 mapping pulse sequences in phantom and human experiments at 1.5T. Results In 5 phantoms representing native myocardium and blood and post-contrast blood/tissue T1 values, compared with the control T1 values measured with an inversion-recovery pulse sequence without ICD, T1 values measured with original AIR with ICD were considerably lower (absolute percent error >29%), whereas T1 values measured with wideband AIR with ICD were similar (absolute percent error <5%). Similarly, in 11 human subjects, compared with the control T1 values measured with original AIR without ICD, T1 measured with original AIR with ICD was significantly lower (absolute percent error >10.1%), whereas T1 measured with wideband AIR with ICD was similar (absolute percent error <2.0%). Conclusion This study demonstrates the feasibility of a wideband pulse sequence for cardiac T1 mapping without significant image artifacts induced by ICD. PMID:25975192
Absolute color scale for improved diagnostics with wavefront error mapping.
Smolek, Michael K; Klyce, Stephen D
2007-11-01
Wavefront data are expressed in micrometers and referenced to the pupil plane, but current methods to map wavefront error lack standardization. Many use normalized or floating scales that may confuse the user by generating ambiguous, noisy, or varying information. An absolute scale that combines consistent clinical information with statistical relevance is needed for wavefront error mapping. The color contours should correspond better to current corneal topography standards to improve clinical interpretation. Retrospective analysis of wavefront error data. Historic ophthalmic medical records. Topographic modeling system topographical examinations of 120 corneas across 12 categories were used. Corneal wavefront error data in micrometers from each topography map were extracted at 8 Zernike polynomial orders and for 3 pupil diameters expressed in millimeters (3, 5, and 7 mm). Both total aberrations (orders 2 through 8) and higher-order aberrations (orders 3 through 8) were expressed in the form of frequency histograms to determine the working range of the scale across all categories. The standard deviation of the mean error of normal corneas determined the map contour resolution. Map colors were based on corneal topography color standards and on the ability to distinguish adjacent color contours through contrast. Higher-order and total wavefront error contour maps for different corneal conditions. An absolute color scale was produced that encompassed a range of ±6.5 µm and a contour interval of 0.5 µm. All aberrations in the categorical database were plotted with no loss of clinical information necessary for classification. In the few instances where mapped information was beyond the range of the scale, the type and severity of aberration remained legible. 
When wavefront data are expressed in micrometers, this absolute scale facilitates the determination of the severity of aberrations present compared with a floating scale, particularly for distinguishing normal from abnormal levels of wavefront error. The new color palette makes it easier to identify disorders. The corneal mapping method can be extended to mapping whole eye wavefront errors. When refraction data are expressed in diopters, the previously published corneal topography scale is suggested.
Reliability study of biometrics "do not contact" in myopia.
Migliorini, R; Fratipietro, M; Comberiati, A M; Pattavina, L; Arrico, L
The aim of the study is a comparison between the refractive condition of the eye actually achieved after surgery and the expected refractive condition calculated with a biometer. The study was conducted in a random group of 38 eyes of patients undergoing surgery by phacoemulsification. The mean absolute error between the values predicted from the optical biometer measurements and those obtained post-operatively was approximately 0.47. Our study shows results not far from those reported in the literature, and the mean absolute error of 0.47 ± 0.11 (SEM) is among the lowest values reported.
Comparison of Dst Forecast Models for Intense Geomagnetic Storms
NASA Technical Reports Server (NTRS)
Ji, Eun-Young; Moon, Y.-J.; Gopalswamy, N.; Lee, D.-H.
2012-01-01
We have compared six disturbance storm time (Dst) forecast models using 63 intense geomagnetic storms (Dst <= -100 nT) that occurred from 1998 to 2006. For the comparison, we estimated linear correlation coefficients and RMS errors between the observed and predicted Dst during each geomagnetic storm period, as well as the difference of the minimum Dst values (ΔDst_min) and the absolute difference of the Dst minimum times (Δt_Dst) between the observed and the predicted. As a result, we found that the model by Temerin and Li gives the best prediction for all parameters when all 63 events are considered. The model gives the following average values: a linear correlation coefficient of 0.94, an RMS error of 14.8 nT, a ΔDst_min of 7.7 nT, and a |Δt_Dst| of 1.5 hours. For further comparison, we classified the storm events into two groups according to the magnitude of Dst. We found that the model of Temerin and Li is better than the other models for the events having -200 nT < Dst <= -100 nT, and three recent models (the model of Wang et al., the model of Temerin and Li, and the model of Boynton et al.) are better than the other three models for the events having Dst <= -200 nT.
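The comparison metrics named above are straightforward to compute from an observed and a predicted hourly Dst series. A minimal sketch with assumed conventions (hourly time stamps; the example values are synthetic, not storm data):

```python
import math

def compare_dst(observed, predicted, times):
    """Linear correlation, RMS error, difference of Dst minima, and absolute
    difference of the times of minimum Dst between two series."""
    n = len(observed)
    mo, mp = sum(observed) / n, sum(predicted) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(observed, predicted))
    var_o = sum((o - mo) ** 2 for o in observed)
    var_p = sum((p - mp) ** 2 for p in predicted)
    r = cov / math.sqrt(var_o * var_p)
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)
    d_min = min(predicted) - min(observed)
    t_obs = times[observed.index(min(observed))]
    t_pred = times[predicted.index(min(predicted))]
    return r, rmse, d_min, abs(t_pred - t_obs)

# synthetic 4-hour storm segment (nT)
r, rmse, d_min, dt = compare_dst([-20, -80, -150, -90],
                                 [-25, -70, -140, -100], [0, 1, 2, 3])
```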
Yang, Eunjoo; Park, Hyun Woo; Choi, Yeon Hwa; Kim, Jusim; Munkhdalai, Lkhagvadorj; Musa, Ibrahim; Ryu, Keun Ho
2018-05-11
Early detection of infectious disease outbreaks is one of the most important issues in syndromic surveillance systems. It helps to provide a rapid epidemiological response and reduce morbidity and mortality. In order to upgrade the current system at the Korea Centers for Disease Control and Prevention (KCDC), a comparative study of state-of-the-art techniques is required. We compared four different temporal outbreak detection algorithms: the cumulative sum (CUSUM), the Early Aberration Reporting System (EARS), the autoregressive integrated moving average (ARIMA), and the Holt-Winters algorithm. The comparison was performed on 42 different time series generated to include trends, seasonality, and randomly occurring outbreaks, as well as on real-world daily and weekly data related to diarrheal infection. The algorithms were evaluated using several metrics, namely sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), F1 score, symmetric mean absolute percent error (sMAPE), root-mean-square error (RMSE), and mean absolute deviation (MAD). Although the comparison showed better performance for the EARS C3 method overall, regardless of the characteristics of the underlying time series data, Holt-Winters performed better when the baseline frequency and the dispersion parameter were less than 1.5 and 2, respectively.
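Of the detectors compared, CUSUM is the simplest to illustrate. A minimal one-sided sketch (the baseline mean/standard deviation and the reference value k and threshold h are assumed parameters, not the KCDC configuration):

```python
def cusum_alarms(counts, mean0, std0, k=0.5, h=4.0):
    """One-sided CUSUM over a count series: accumulate standardized
    exceedances above the baseline mean and flag indices where the
    cumulative sum crosses threshold h."""
    s, alarms = 0.0, []
    for t, x in enumerate(counts):
        z = (x - mean0) / std0
        s = max(0.0, s + z - k)
        if s > h:
            alarms.append(t)
            s = 0.0  # reset after signalling
    return alarms

# flat baseline with a 3-day injected outbreak starting at index 10
print(cusum_alarms([10] * 10 + [30, 30, 30] + [10] * 3, 10, 2))  # → [10, 11, 12]
```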
VizieR Online Data Catalog: WISE/NEOWISE Mars-crossing asteroids (Ali-Lagoa+, 2017)
NASA Astrophysics Data System (ADS)
Ali-Lagoa, V.; Delbo, M.
2017-07-01
We fitted the near-Earth asteroid thermal model of Harris (1998, Icarus, 131, 29) to WISE/NEOWISE thermal infrared data (see, e.g., Mainzer et al. 2011ApJ...736..100M, and Masiero et al. 2014, Cat. J/ApJ/791/121). The table contains the best-fitting values of size and beaming parameter. We note that the beaming parameter is a strictly positive quantity, but a negative sign is given to indicate whenever we could not fit it and had to assume a default value. We also provide the visible geometric albedos computed from the diameter and the tabulated absolute magnitudes. Minimum relative errors of 10, 15, and 20 percent should be considered for size, beaming parameter and albedo in those cases for which the beaming parameter could be fitted. Otherwise, the minimum relative errors in size and albedo increase to 20 and 40 percent (see, e.g., Mainzer et al. 2011ApJ...736..100M). The asteroid absolute magnitudes and slope parameters retrieved from the Minor Planet Center (MPC) are included, as well as the number of observations used in each WISE band (nW2, nW3, nW4) and the corresponding average values of heliocentric and geocentric distances and phase angle of the observations. The ephemerides were retrieved from the MIRIADE service (http://vo.imcce.fr/webservices/miriade/?ephemph). (1 data file).
Yang, Jie; Liu, Qingquan; Dai, Wei; Ding, Renhui
2016-08-01
Due to the solar radiation effect, current air temperature sensors inside a thermometer screen or radiation shield may produce measurement errors of 0.8 °C or higher. To improve the observation accuracy, an aspirated temperature measurement platform is designed. A computational fluid dynamics (CFD) method is implemented to analyze and calculate the radiation error of the aspirated temperature measurement platform under various environmental conditions. Then, a radiation error correction equation is obtained by fitting the CFD results using a genetic algorithm (GA) method. In order to verify the performance of the temperature sensor, the aspirated temperature measurement platform, temperature sensors with a naturally ventilated radiation shield, and a thermometer screen are characterized in the same environment to conduct the intercomparison. The average radiation errors of the sensors in the naturally ventilated radiation shield and the thermometer screen are 0.44 °C and 0.25 °C, respectively. In contrast, the radiation error of the aspirated temperature measurement platform is as low as 0.05 °C. This aspirated temperature sensor reduces the radiation error by approximately 88.6% compared to the naturally ventilated radiation shield, and by approximately 80% compared to the thermometer screen. The mean absolute error and root mean square error between the correction equation and experimental results are 0.032 °C and 0.036 °C, respectively, which demonstrates the accuracy of the CFD and GA methods proposed in this research.
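The quoted reduction percentages follow directly from the three average radiation errors; a quick check of the arithmetic:

```python
def percent_reduction(baseline, improved):
    """Relative reduction of the mean radiation error (in percent)."""
    return 100.0 * (baseline - improved) / baseline

# naturally ventilated shield (0.44 °C) and thermometer screen (0.25 °C)
# versus the aspirated platform (0.05 °C)
print(round(percent_reduction(0.44, 0.05), 1))  # → 88.6
print(round(percent_reduction(0.25, 0.05), 1))  # → 80.0
```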
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Jie, E-mail: yangjie396768@163.com; School of Atmospheric Physics, Nanjing University of Information Science and Technology, Nanjing 210044; Liu, Qingquan
Automated estimation of abdominal effective diameter for body size normalization of CT dose.
Cheng, Phillip M
2013-06-01
Most CT dose data aggregation methods do not currently adjust dose values for patient size. This work proposes a simple heuristic for reliably computing an effective diameter of a patient from an abdominal CT image. Evaluation of this method on 106 patients scanned on Philips Brilliance 64 and Brilliance Big Bore scanners demonstrates close correspondence between computed and manually measured patient effective diameters, with a mean absolute error of 1.0 cm (error range +2.2 to -0.4 cm). This level of correspondence was also demonstrated for 60 patients on Siemens, General Electric, and Toshiba scanners. A calculated effective diameter in the middle slice of an abdominal CT study was found to be a close approximation of the mean calculated effective diameter for the study, with a mean absolute error of approximately 1.0 cm (error range +3.5 to -2.2 cm). Furthermore, the mean absolute error for an adjusted mean volume computed tomography dose index (CTDIvol) using a mid-study calculated effective diameter, versus a mean per-slice adjusted CTDIvol based on the calculated effective diameter of each slice, was 0.59 mGy (error range 1.64 to -3.12 mGy). These results are used to calculate approximate normalized dose length product values in an abdominal CT dose database of 12,506 studies.
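The effective diameter used here is conventionally the diameter of the circle with the same area as the patient cross-section. An illustrative sketch of that final step from a binary slice mask (the paper's segmentation heuristic for isolating the patient is not reproduced; the mask and pixel spacing below are assumptions):

```python
import math

def effective_diameter_cm(mask, pixel_spacing_mm):
    """Effective diameter (cm) of a segmented patient cross-section:
    diameter of the circle with the same area as the mask region."""
    n_pixels = sum(sum(row) for row in mask)
    area_mm2 = n_pixels * pixel_spacing_mm[0] * pixel_spacing_mm[1]
    return 2.0 * math.sqrt(area_mm2 / math.pi) / 10.0  # mm -> cm

# hypothetical 100 x 100 pixel square region at 1 mm x 1 mm spacing
d = effective_diameter_cm([[1] * 100 for _ in range(100)], (1.0, 1.0))
print(round(d, 2))  # → 11.28
```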
Karunaratne, Nicholas
2013-12-01
To compare the accuracy of the Pentacam Holladay equivalent keratometry readings with the IOL Master 500 keratometry in calculating intraocular lens power. Non-randomized, prospective clinical study conducted in private practice. Forty-five consecutive normal patients undergoing cataract surgery. Forty-five consecutive patients had Pentacam equivalent keratometry readings at the 2-, 3- and 4.5-mm corneal zones and IOL Master keratometry measurements prior to cataract surgery. For each Pentacam equivalent keratometry reading zone and IOL Master measurement the difference between the observed and expected refractive error was calculated using the Holladay 2 and Sanders, Retzlaff and Kraff theoretic (SRKT) formulas. Mean keratometric value and mean absolute refractive error. There was a statistically significant difference between the mean keratometric values of the IOL Master and the Pentacam equivalent keratometry reading 2-, 3- and 4.5-mm measurements (P < 0.0001, analysis of variance). There was no statistically significant difference between the mean absolute refraction error for the IOL Master and equivalent keratometry readings 2 mm, 3 mm and 4.5 mm zones for either the Holladay 2 formula (P = 0.14) or SRKT formula (P = 0.47). The lowest mean absolute refraction error for Holladay 2 equivalent keratometry reading was the 4.5 mm zone (mean 0.25 D ± 0.17 D). The lowest mean absolute refraction error for SRKT equivalent keratometry reading was the 4.5 mm zone (mean 0.25 D ± 0.19 D). Comparing the absolute refraction error of IOL Master and Pentacam equivalent keratometry reading, best agreement was with Holladay 2 and equivalent keratometry reading 4.5 mm, with mean of the difference of 0.02 D and 95% limits of agreement of -0.35 and 0.39 D. The IOL Master keratometry and Pentacam equivalent keratometry reading were not equivalent when used only for corneal power measurements. 
However, the keratometry measurements of the IOL Master and Pentacam equivalent keratometry reading 4.5 mm may be similarly effective when used in intraocular lens power calculation formulas, following constant optimization. © 2013 Royal Australian and New Zealand College of Ophthalmologists.
Smith, Erik A.; Kiesling, Richard L.; Ziegeweid, Jeffrey R.
2017-07-20
Fish habitat can degrade in many lakes due to summer blue-green algal blooms. Predictive models are needed to better manage and mitigate loss of fish habitat due to these changes. The U.S. Geological Survey (USGS), in cooperation with the Minnesota Department of Natural Resources, developed predictive water-quality models for two agricultural land-use dominated lakes in Minnesota—Madison Lake and Pearl Lake, which are part of Minnesota’s sentinel lakes monitoring program—to assess algal community dynamics, water quality, and fish habitat suitability of these two lakes under recent (2014) meteorological conditions. The interactions of basin processes with these two lakes, through the delivery of nutrient loads, were simulated using CE-QUAL-W2, a carbon-based, laterally averaged, two-dimensional water-quality model that predicts distribution of temperature and oxygen from interactions between nutrient cycling, primary production, and trophic dynamics. The CE-QUAL-W2 models successfully predicted water temperature and dissolved oxygen on the basis of the two metrics of mean absolute error and root mean square error. For Madison Lake, the mean absolute error and root mean square error were 0.53 and 0.68 degree Celsius, respectively, for the vertical temperature profile comparisons; for Pearl Lake, the mean absolute error and root mean square error were 0.71 and 0.95 degree Celsius, respectively, for the vertical temperature profile comparisons. Temperature and dissolved oxygen were key metrics for calibration targets. These calibrated lake models also simulated algal community dynamics and water quality. 
The model simulations presented potential explanations for persistently large total phosphorus concentrations in Madison Lake, key differences in nutrient concentrations between these lakes, and summer blue-green algal bloom persistence. Fish habitat suitability simulations for cool-water and warm-water fish indicated that, in general, both lakes contained a large proportion of good-growth habitat and a sustained period of optimal growth habitat in the summer, without any periods of lethal oxythermal habitat. For Madison and Pearl Lakes, examples of important cool-water fish, particularly game fish, include northern pike (Esox lucius), walleye (Sander vitreus), and black crappie (Pomoxis nigromaculatus); examples of important warm-water fish include bluegill (Lepomis macrochirus), largemouth bass (Micropterus salmoides), and smallmouth bass (Micropterus dolomieu). Sensitivity analyses were completed to understand lake response effects through the use of controlled departures on certain calibrated model parameters and input nutrient loads. These sensitivity analyses also operated as land-use change scenarios because alterations in agricultural practices, for example, could potentially increase or decrease nutrient loads.
NASA Astrophysics Data System (ADS)
Yong, Yan Ling; Tan, Li Kuo; McLaughlin, Robert A.; Chee, Kok Han; Liew, Yih Miin
2017-12-01
Intravascular optical coherence tomography (OCT) is an optical imaging modality commonly used in the assessment of coronary artery diseases during percutaneous coronary intervention. Manual segmentation to assess luminal stenosis from OCT pullback scans is challenging and time consuming. We propose a linear-regression convolutional neural network to automatically perform vessel lumen segmentation, parameterized in terms of radial distances from the catheter centroid in polar space. Benchmarked against gold-standard manual segmentation, our proposed algorithm achieves average locational accuracy of the vessel wall of 22 microns, and 0.985 and 0.970 in Dice coefficient and Jaccard similarity index, respectively. The average absolute error of luminal area estimation is 1.38%. The processing rate is 40.6 ms per image, suggesting the potential to be incorporated into a clinical workflow and to provide quantitative assessment of vessel lumen in an intraoperative time frame.
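The Dice coefficient and Jaccard index quoted above have standard definitions over binary masks. A minimal sketch (masks as flat 0/1 sequences; the example masks are synthetic):

```python
def dice_jaccard(a, b):
    """Dice coefficient and Jaccard index between two binary masks,
    given as flat sequences of 0/1 integers."""
    inter = sum(x & y for x, y in zip(a, b))
    sa, sb = sum(a), sum(b)
    dice = 2.0 * inter / (sa + sb)
    jaccard = inter / (sa + sb - inter)
    return dice, jaccard

# toy masks: 2 of the 3 "on" pixels of a overlap with b
d, j = dice_jaccard([1, 1, 1, 0], [1, 1, 0, 0])
print(round(d, 3), round(j, 3))  # → 0.8 0.667
```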
Lee, Seung-Jong; Kim, Euiseong
2012-08-01
The maintenance of the healthy periodontal ligament cells of the root surface of the donor tooth and intimate surface contact between the donor tooth and the recipient bone are the key factors for successful tooth transplantation. In order to achieve these purposes, a duplicated donor tooth model can be utilized to reduce the extra-oral time using the computer-aided rapid prototyping (CARP) technique. Briefly, a three-dimensional digital imaging and communication in medicine (DICOM) image with the real dimensions of the donor tooth was obtained from a computed tomography (CT) scan, and a life-sized resin tooth model was fabricated. Dimensional errors between the real tooth, the 3D CT image model and the CARP model were calculated, and extra-oral time was recorded during the autotransplantation of the teeth. The average extra-oral time was 7 min 25 sec, with a range of immediate to 25 min, in cases in which extra-oral root canal treatments were not performed, while it was 9 min 15 sec when extra-oral root canal treatments were performed. The average radiographic distance between the root surface and the alveolar bone was 1.17 mm and 1.35 mm at the mesial cervix and apex; it was 0.98 mm and 1.26 mm at the distal cervix and apex. When the dimensional errors between the real tooth, the 3D CT image model and the CARP model were measured in cadavers, the average absolute error was 0.291 mm between the real teeth and the CARP model. These data indicate that CARP may be of value in minimizing the extra-oral time and the gap between the donor tooth and the recipient alveolar bone in tooth transplantation.
Baek, Tae Seong; Chung, Eun Ji; Son, Jaeman; Yoon, Myonggeun
2014-12-04
The aim of this study is to evaluate the ability of transit dosimetry using a commercial treatment planning system (TPS) and an electronic portal imaging device (EPID) with a simple calibration method to verify beam delivery based on detection of large errors in the treatment room. Twenty-four fields of intensity modulated radiotherapy (IMRT) plans were selected from four lung cancer patients and used in the irradiation of an anthropomorphic phantom. The proposed method was evaluated by comparing the calculated dose map from the TPS and the EPID measurement on the same plane using a gamma index method with a 3% dose and 3 mm distance-to-dose agreement tolerance limit. In a simulation using a homogeneous plastic water phantom, performed to verify the effectiveness of the proposed method, the average passing rate of the transit dose based on the gamma index was high, averaging 94.2%, when there was no error during beam delivery. The passing rate of the transit dose for the 24 IMRT fields was lower with the anthropomorphic phantom, averaging 86.8% ± 3.8%, a reduction partially due to the inaccuracy of TPS calculations for inhomogeneity. Compared with the TPS, the absolute value of the transit dose at the beam center differed by -0.38% ± 2.1%. The simulation study indicated that the passing rate of the gamma index was significantly reduced, to less than 40%, when a wrong field was erroneously delivered to the patient in the treatment room. This feasibility study suggested that transit dosimetry based on calculation with a commercial TPS and EPID measurement with simple calibration can provide information about large errors in treatment beam delivery.
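The 3%/3 mm gamma criterion combines a dose-difference and a distance-to-agreement term. A simplified 1-D sketch of a global gamma analysis (clinical implementations are 2-D and interpolate between points; this is illustrative only):

```python
import math

def gamma_pass_rate(dose_ref, dose_eval, spacing_mm, dd_pct=3.0, dta_mm=3.0):
    """1-D global gamma analysis: for each reference point, find the minimum
    combined dose-difference/distance metric over evaluated points and count
    points with gamma <= 1. Dose difference is normalized to the reference max."""
    d_max = max(dose_ref)
    passed = 0
    for i, dr in enumerate(dose_ref):
        best = float("inf")
        for j, de in enumerate(dose_eval):
            dd = (de - dr) / (d_max * dd_pct / 100.0)   # dose term
            dta = (j - i) * spacing_mm / dta_mm          # distance term
            best = min(best, math.hypot(dd, dta))
        if best <= 1.0:
            passed += 1
    return 100.0 * passed / len(dose_ref)

# identical profiles pass everywhere
print(gamma_pass_rate([1, 2, 3, 2, 1], [1, 2, 3, 2, 1], 1.0))  # → 100.0
```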
NASA Technical Reports Server (NTRS)
Keitz, J. F.
1982-01-01
The impact of more timely and accurate weather data on airline flight planning with the emphasis on fuel savings is studied. This volume of the report discusses the results of Task 3 of the four major tasks included in the study. Task 3 compares flight plans developed on the Suitland forecast with actual data observed by the aircraft (and averaged over 10 degree segments). The results show that the average difference between the forecast and observed wind speed is 9 kts. without considering direction, and the average difference in the component of the forecast wind parallel to the direction of the observed wind is 13 kts. - both indicating that the Suitland forecast underestimates the wind speeds. The Root Mean Square (RMS) vector error is 30.1 kts. The average absolute difference in direction between the forecast and observed wind is 26 degrees and the temperature difference is 3 degree Centigrade. These results indicate that the forecast model as well as the verifying analysis used to develop comparison flight plans in Tasks 1 and 2 is a limiting factor and that the average potential fuel savings or penalty are up to 3.6 percent depending on the direction of flight.
Exploiting data representation for fault tolerance
Hoemmen, Mark Frederick; Elliott, J.; Sandia National Lab.; ...
2015-01-06
Incorrect computer hardware behavior may corrupt intermediate computations in numerical algorithms, possibly resulting in incorrect answers. Prior work models misbehaving hardware by randomly flipping bits in memory. We start by accepting this premise, and present an analytic model for the error introduced by a bit flip in an IEEE 754 floating-point number. We then relate this finding to the linear algebra concepts of normalization and matrix equilibration. In particular, we present a case study illustrating that normalizing both vector inputs of a dot product minimizes the probability of a single bit flip causing a large error in the dot product's result. Moreover, the absolute error is either less than one or very large, which allows detection of large errors. Then, we apply this to the GMRES iterative solver. We count all possible errors that can be introduced through faults in arithmetic in the computationally intensive orthogonalization phase of GMRES, and show that when the matrix is equilibrated, the absolute error is bounded above by one.
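The single-bit upset modeled above is easy to reproduce. The sketch below is an illustration under the paper's premise, not the authors' code: it flips one bit of an IEEE 754 binary64 value and shows that a low significand bit barely perturbs it while a high exponent bit destroys it.

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Flip one bit (0 = significand LSB, 52-62 = exponent, 63 = sign)
    in the IEEE 754 binary64 representation of x."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    (flipped,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))
    return flipped

# Flipping the significand LSB of 1.0 changes it by one ulp (2**-52)...
tiny_error = abs(flip_bit(1.0, 0) - 1.0)
# ...while flipping the top exponent bit turns 1.0 into +infinity:
huge_error = flip_bit(1.0, 62)
```

This is the bimodal behavior the abstract describes: the absolute error is either tiny or enormous, which is what makes large faults detectable.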
Dental age estimation in Japanese individuals combining permanent teeth and third molars.
Ramanan, Namratha; Thevissen, Patrick; Fieuws, Steffen; Willems, G
2012-12-01
The aims of this study were, first, to verify the Willems et al. model on a Japanese reference sample; second, to develop a Japanese reference model based on the Willems et al. method and to verify it; and third, to analyze the age prediction performance of adding tooth development information from third molars to that of permanent teeth. Retrospectively, 1877 panoramic radiographs were selected in the age range between 1 and 23 years (1248 children, 629 sub-adults). Dental development was registered applying Demirjian's stages to the mandibular left permanent teeth in children and Köhler's stages to the third molars. The children's data were, first, used to validate the Willems et al. model (developed on a Belgian reference sample) and, second, split into a training and a test sample. On the training sample a Japanese reference model was developed based on the Willems method. The developed model and the Willems et al. model were verified on the test sample. Regression analysis was used to detect the age prediction performance of adding third molar scores to permanent tooth scores. The validated Willems et al. model provided a mean absolute error of 0.85 and 0.75 years in females and males, respectively. The mean absolute errors in the verified Willems et al. model and the developed Japanese reference model were 0.85 and 0.77 years, and 0.79 and 0.75 years, in females and males, respectively. On average a negligible change in root mean square error values was detected when adding third molar scores to permanent teeth scores. The Belgian sample could be used as a reference model to estimate the age of Japanese individuals. Combining information from the third molars and permanent teeth did not provide clinically significant improvement over age predictions based on permanent teeth information alone.
Chen, Yasheng; Juttukonda, Meher; Su, Yi; Benzinger, Tammie; Rubin, Brian G.; Lee, Yueh Z.; Lin, Weili; Shen, Dinggang; Lalush, David
2015-01-01
Purpose To develop a positron emission tomography (PET) attenuation correction method for brain PET/magnetic resonance (MR) imaging by estimating pseudo computed tomographic (CT) images from T1-weighted MR and atlas CT images. Materials and Methods In this institutional review board–approved and HIPAA-compliant study, PET/MR/CT images were acquired in 20 subjects after obtaining written consent. A probabilistic air segmentation and sparse regression (PASSR) method was developed for pseudo CT estimation. Air segmentation was performed with assistance from a probabilistic air map. For nonair regions, the pseudo CT numbers were estimated via sparse regression by using atlas MR patches. The mean absolute percentage error (MAPE) on PET images was computed as the normalized mean absolute difference in PET signal intensity between a method and the reference standard continuous CT attenuation correction method. Friedman analysis of variance and Wilcoxon matched-pairs tests were performed for statistical comparison of MAPE between the PASSR method and Dixon segmentation, CT segmentation, and population averaged CT atlas (mean atlas) methods. Results The PASSR method yielded a mean MAPE ± standard deviation of 2.42% ± 1.0, 3.28% ± 0.93, and 2.16% ± 1.75, respectively, in the whole brain, gray matter, and white matter, which were significantly lower than the Dixon, CT segmentation, and mean atlas values (P < .01). Moreover, 68.0% ± 16.5, 85.8% ± 12.9, and 96.0% ± 2.5 of whole-brain volume had within ±2%, ±5%, and ±10% percentage error by using PASSR, respectively, which was significantly higher than other methods (P < .01). Conclusion PASSR outperformed the Dixon, CT segmentation, and mean atlas methods by reducing PET error owing to attenuation correction. © RSNA, 2014 PMID:25521778
Time series model for forecasting the number of new admission inpatients.
Zhou, Lingling; Zhao, Ping; Wu, Dongdong; Cheng, Cheng; Huang, Hao
2018-06-15
Hospital crowding is a rising problem; effective prediction and detection can help management reduce crowding. Our team previously proposed a hybrid model combining the autoregressive integrated moving average (ARIMA) and nonlinear autoregressive neural network (NARNN) models in forecasting studies of schistosomiasis and hand, foot, and mouth disease. In this paper, our aim is to explore the application of the hybrid ARIMA-NARNN model to track trends in new admission inpatients, which provides a methodological basis for reducing crowding. We used the single seasonal ARIMA (SARIMA), NARNN and hybrid SARIMA-NARNN models to fit and forecast the monthly and daily numbers of new admission inpatients. The root mean square error (RMSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) were used to compare the forecasting performance among the three models. The modeling time range for the monthly data was from January 2010 to June 2016, with July to October 2016 as the corresponding testing data set. The daily modeling data set was from January 4 to September 4, 2016, while the testing time range was from September 5 to October 2, 2016. For the monthly data, the modeling RMSE and the testing RMSE, MAE and MAPE of the SARIMA-NARNN model were less than those obtained from the single SARIMA or NARNN model, but the modeling MAE and MAPE of the SARIMA-NARNN model did not improve. For the daily data, all RMSE, MAE and MAPE of the NARNN model were the lowest in both the modeling stage and the testing stage. A hybrid model does not necessarily outperform its constituent models. It is worth attempting to explore reliable models for forecasting the number of new admission inpatients from different data.
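The three comparison criteria used here are standard and easy to compute. A minimal sketch, with hypothetical admission counts rather than the study's data:

```python
def forecast_errors(actual, predicted):
    """Return (RMSE, MAE, MAPE%) for paired actual/predicted series."""
    n = len(actual)
    rmse = (sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n) ** 0.5
    mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / n
    mape = 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / n
    return rmse, mae, mape

# Hypothetical monthly admission counts (illustration only):
actual = [120, 135, 128, 140]
predicted = [118, 130, 131, 138]
rmse, mae, mape = forecast_errors(actual, predicted)
```

Note that MAPE divides by the actual values, so it is undefined when any observation is zero; RMSE weights large errors more heavily than MAE, which is why the three criteria can rank models differently, as in this study.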
A hybrid SVM-FFA method for prediction of monthly mean global solar radiation
NASA Astrophysics Data System (ADS)
Shamshirband, Shahaboddin; Mohammadi, Kasra; Tong, Chong Wen; Zamani, Mazdak; Motamedi, Shervin; Ch, Sudheer
2016-07-01
In this study, a hybrid support vector machine-firefly optimization algorithm (SVM-FFA) model is proposed to estimate monthly mean horizontal global solar radiation (HGSR). The merit of SVM-FFA is assessed statistically by comparing its performance with three previously used approaches. Using each approach and long-term measured HGSR, three models are calibrated by considering different sets of meteorological parameters measured for Bandar Abbas, Iran. It is found that model (3), utilizing the combination of relative sunshine duration, difference between maximum and minimum temperatures, relative humidity, water vapor pressure, average temperature, and extraterrestrial solar radiation, shows superior performance with all approaches. Moreover, the extraterrestrial radiation is introduced as a significant parameter for accurately estimating global solar radiation. The survey results reveal that the developed SVM-FFA approach is highly capable of providing favorable predictions with significantly higher precision than the other examined techniques. For SVM-FFA (3), the statistical indicators of mean absolute percentage error (MAPE), root mean square error (RMSE), relative root mean square error (RRMSE), and coefficient of determination (R²) are 3.3252%, 0.1859 kWh/m², 3.7350%, and 0.9737, respectively, which, according to the RRMSE, indicates excellent performance. As a further evaluation of SVM-FFA (3), the ratio of estimated to measured values is computed, and 47 out of the 48 months considered as testing data fall between 0.90 and 1.10. Also, a further verification concludes that SVM-FFA (3) offers absolute superiority over the empirical models using relatively similar input parameters. In a nutshell, the hybrid SVM-FFA approach can be considered highly efficient for estimating the HGSR.
Energy and Quality-Aware Multimedia Signal Processing
NASA Astrophysics Data System (ADS)
Emre, Yunus
Today's mobile devices have to support computation-intensive multimedia applications with a limited energy budget. In this dissertation, we present architecture-level and algorithm-level techniques that reduce energy consumption of these devices with minimal impact on system quality. First, we present novel techniques to mitigate the effects of SRAM memory failures in JPEG2000 implementations operating at scaled voltages. We investigate error control coding schemes and propose an unequal error protection scheme tailored for JPEG2000 that reduces overhead without affecting the performance. Furthermore, we propose algorithm-specific techniques for error compensation that exploit the fact that in JPEG2000 the discrete wavelet transform outputs have larger values for low frequency subband coefficients and smaller values for high frequency subband coefficients. Next, we present the use of voltage overscaling to reduce the datapath power consumption of JPEG codecs. We propose an algorithm-specific technique which exploits the characteristics of the quantized coefficients after zig-zag scan to mitigate errors introduced by aggressive voltage scaling. Third, we investigate the effect of reducing dynamic range for datapath energy reduction. We analyze the effect of truncation error and propose a scheme that estimates the mean value of the truncation error during the pre-computation stage and compensates for this error. Such a scheme is very effective for reducing the noise power in applications that are dominated by additions and multiplications such as FIR filtering and transform computation. We also present a novel sum of absolute difference (SAD) scheme that is based on most significant bit truncation. The proposed scheme exploits the fact that most of the absolute difference (AD) calculations result in small values, and most of the large AD values do not contribute to the SAD values of the blocks that are selected.
Such a scheme is highly effective in reducing the energy consumption of motion estimation and intra-prediction kernels in video codecs. Finally, we present several hybrid energy-saving techniques based on combination of voltage scaling, computation reduction and dynamic range reduction that further reduce the energy consumption while keeping the performance degradation very low. For instance, a combination of computation reduction and dynamic range reduction for Discrete Cosine Transform shows on average, 33% to 46% reduction in energy consumption while incurring only 0.5dB to 1.5dB loss in PSNR.
NASA Astrophysics Data System (ADS)
Zempila, Melina-Maria; Taylor, Michael; Bais, Alkiviadis; Kazadzis, Stelios
2016-10-01
We report on the construction of generic models to calculate photosynthetically active radiation (PAR) from global horizontal irradiance (GHI), and vice versa. Our study took place at stations of the Greek UV network (UVNET) and the Hellenic solar energy network (HNSE) with measurements from NILU-UV multi-filter radiometers and CM pyranometers, chosen due to their long (≈1 M record/site) high temporal resolution (≈1 min) record that captures a broad range of atmospheric environments and cloudiness conditions. The uncertainty of the PAR measurements is quantified to be ±6.5% while the uncertainty involved in GHI measurements is up to ≈±7% according to the manufacturer. We show how multi-linear regression and nonlinear neural network (NN) models, trained at a calibration site (Thessaloniki) can be made generic provided that the input-output time series are processed with multi-channel singular spectrum analysis (M-SSA). Without M-SSA, both linear and nonlinear models perform well only locally. M-SSA with 50 time-lags is found to be sufficient for identification of trend, periodic and noise components in aerosol, cloud parameters and irradiance, and to construct regularized noise models of PAR from GHI irradiances. Reconstructed PAR and GHI time series capture ≈95% of the variance of the cross-validated target measurements and have median absolute percentage errors <2%. The intra-site median absolute error of M-SSA processed models were ≈8.2±1.7 W/m2 for PAR and ≈9.2±4.2 W/m2 for GHI. When applying the models trained at Thessaloniki to other stations, the average absolute mean bias between the model estimates and measured values was found to be ≈1.2 W/m2 for PAR and ≈0.8 W/m2 for GHI. For the models, percentage errors are well within the uncertainty of the measurements at all sites. Generic NN models were found to perform marginally better than their linear counterparts.
NASA Astrophysics Data System (ADS)
Opshaug, Guttorm Ringstad
There are times and places where conventional navigation systems, such as the Global Positioning System (GPS), are unavailable due to anything from temporary signal occultations to lack of navigation system infrastructure altogether. The goal of the Leapfrog Navigation System (LNS) is to provide localized positioning services for such cases. The concept behind leapfrog navigation is to advance a group of navigation units teamwise into an area of interest. In a practical 2-D case, leapfrogging assumes known initial positions of at least two currently stationary navigation units. Two or more mobile units can then start to advance into the area of interest. The positions of the mobiles are constantly being calculated based on cross-range distance measurements to the stationary units, as well as cross-ranges among the mobiles themselves. At some point the mobile units stop, and the stationary units are released to move. This second team of units (now mobile) can then overtake the first team (now stationary) and travel even further towards the common goal of the group. Since there always is one stationary team, the position of any unit can be referenced back to the initial positions. Thus, LNS provides absolute positioning. I developed the navigation algorithms needed to solve leapfrog positions based on cross-range measurements. I used statistical tools to predict how position errors would grow as a function of navigation unit geometry, cross-range measurement accuracy and previous position errors. Using this knowledge I predicted that a 4-unit Leapfrog Navigation System using 100 m baselines and 200 m leap distances could travel almost 15 km before accumulating absolute position errors of 10 m (1σ). Finally, I built a prototype leapfrog navigation system using 4 GPS transceiver ranging units. I placed the 4 units at the vertices of a 10 m x 10 m square, and leapfrogged the group 20 meters forward, and then back again (40 m total travel).
Average horizontal RMS position errors never exceeded 16 cm during these field tests.
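The core geometric step in leapfrog positioning, fixing a mobile unit from cross-range measurements to two known stationary units, reduces in 2-D to intersecting two range circles. The sketch below is illustrative only; the actual system uses additional ranges and statistical filtering to resolve the two-fold ambiguity and suppress measurement noise.

```python
import math

def trilaterate_2d(p1, p2, r1, r2):
    """Intersect two range circles centered at known stationary units p1, p2;
    return the two candidate positions of the mobile unit."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    d = math.hypot(dx, dy)
    a = (r1 ** 2 - r2 ** 2 + d ** 2) / (2 * d)   # along-baseline distance from p1
    h = math.sqrt(max(r1 ** 2 - a ** 2, 0.0))    # perpendicular offset
    xm, ym = p1[0] + a * dx / d, p1[1] + a * dy / d
    return ((xm + h * dy / d, ym - h * dx / d),
            (xm - h * dy / d, ym + h * dx / d))

# Hypothetical geometry: stationary units 100 m apart, mobile truly at (50, 80).
p1, p2, truth = (0.0, 0.0), (100.0, 0.0), (50.0, 80.0)
r1 = math.hypot(truth[0] - p1[0], truth[1] - p1[1])
r2 = math.hypot(truth[0] - p2[0], truth[1] - p2[1])
candidates = trilaterate_2d(p1, p2, r1, r2)   # truth plus its mirror image
```

The mirror-image candidate on the other side of the baseline is why a practical system needs either a third range or continuity with the previous position fix.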
An optimized network for phosphorus load monitoring for Lake Okeechobee, Florida
Gain, W.S.
1997-01-01
Phosphorus load data were evaluated for Lake Okeechobee, Florida, for water years 1982 through 1991. Standard errors for load estimates were computed from available phosphorus concentration and daily discharge data. Components of error were associated with uncertainty in concentration and discharge data and were calculated for existing conditions and for 6 alternative load-monitoring scenarios for each of 48 distinct inflows. Benefit-cost ratios were computed for each alternative monitoring scenario at each site by dividing estimated reductions in load uncertainty by the 5-year average costs of each scenario in 1992 dollars. Absolute and marginal benefit-cost ratios were compared in an iterative optimization scheme to determine the most cost-effective combination of discharge and concentration monitoring scenarios for the lake. If the current (1992) discharge-monitoring network around the lake is maintained, the water-quality sampling at each inflow site twice each year is continued, and the nature of loading remains the same, the standard error of computed mean-annual load is estimated at about 98 metric tons per year compared to an absolute loading rate (inflows and outflows) of 530 metric tons per year. This produces a relative uncertainty of nearly 20 percent. The standard error in load can be reduced to about 20 metric tons per year (4 percent) by adopting an optimized set of monitoring alternatives at a cost of an additional $200,000 per year. The final optimized network prescribes changes to improve both concentration and discharge monitoring. These changes include the addition of intensive sampling with automatic samplers at 11 sites, the initiation of event-based sampling by observers at another 5 sites, the continuation of periodic sampling 12 times per year at 1 site, the installation of acoustic velocity meters to improve discharge gaging at 9 sites, and the improvement of a discharge rating at 1 site.
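The iterative benefit-cost optimization described above can be illustrated by a simple greedy selection: rank candidate monitoring upgrades by uncertainty reduction per dollar and take them in order while the budget allows. The upgrade names and numbers below are hypothetical, not the study's data, and the real scheme also compares marginal ratios across alternative scenarios at each site.

```python
def greedy_monitoring_plan(upgrades, budget):
    """Select monitoring upgrades by descending benefit-cost ratio
    (load-uncertainty reduction per dollar) within an annual budget.
    upgrades: list of (name, benefit, cost) tuples."""
    chosen, spent = [], 0.0
    for name, benefit, cost in sorted(upgrades, key=lambda u: u[1] / u[2],
                                      reverse=True):
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen, spent

# Hypothetical upgrades: (name, metric tons/yr of uncertainty removed, $/yr)
upgrades = [("auto-sampler", 30.0, 50000.0),
            ("observer sampling", 12.0, 15000.0),
            ("acoustic velocity meter", 8.0, 20000.0)]
plan, cost = greedy_monitoring_plan(upgrades, budget=70000.0)
```

Greedy selection by ratio is a heuristic for this knapsack-style problem; it matches the iterative marginal-ratio scheme in spirit but is not guaranteed optimal when upgrade costs are lumpy.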
NASA Technical Reports Server (NTRS)
Oreopoulos, L.; Chou, M.-D.; Khairoutdinov, M.; Barker, H. W.; Cahalan, R. F.
2003-01-01
We test the performance of the shortwave (SW) and longwave (LW) Column Radiation Models (CORAMs) of Chou and collaborators with heterogeneous cloud fields from a global single-day dataset produced by NCAR's Community Atmospheric Model with a 2-D CRM installed in each gridbox. The original SW version of the CORAM performs quite well compared to reference Independent Column Approximation (ICA) calculations for boundary fluxes, largely due to the success of a combined overlap and cloud scaling parameterization scheme. The absolute magnitude of errors relative to ICA are even smaller for the LW CORAM which applies similar overlap. The vertical distribution of heating and cooling within the atmosphere is also simulated quite well with daily-averaged zonal errors always below 0.3 K/d for SW heating rates and 0.6 K/d for LW cooling rates. The SW CORAM's performance improves by introducing a scheme that accounts for cloud inhomogeneity. These results suggest that previous studies demonstrating the inaccuracy of plane-parallel models may have unfairly focused on worst scenario cases, and that current radiative transfer algorithms of General Circulation Models (GCMs) may be more capable than previously thought in estimating realistic spatial and temporal averages of radiative fluxes, as long as they are provided with correct mean cloud profiles. However, even if the errors of the particular CORAMs are small, they seem to be systematic, and the impact of the biases can be fully assessed only with GCM climate simulations.
Quality Aware Compression of Electrocardiogram Using Principal Component Analysis.
Gupta, Rajarshi
2016-05-01
Electrocardiogram (ECG) compression finds wide application in various patient monitoring settings. Quality control in ECG compression ensures reconstruction quality and its clinical acceptance for diagnostic decision making. In this paper, a quality-aware compression method for single-lead ECG is described using principal component analysis (PCA). After pre-processing, beat extraction and PCA decomposition, two independent quality criteria, namely, a bit rate control (BRC) or an error control (EC) criterion, were set to select the optimal principal components, eigenvectors and their quantization level to achieve the desired bit rate or error measure. The selected principal components and eigenvectors were finally compressed using a modified delta and Huffman encoder. The algorithms were validated with 32 sets of MIT-BIH arrhythmia database (mitdb) data and 60 normal and 30 sets of diagnostic ECG data from the PTB diagnostic ECG database (ptbdb), all at 1 kHz sampling. For BRC with a CR threshold of 40, an average compression ratio (CR), percentage root mean squared difference normalized (PRDN) and maximum absolute error (MAE) of 50.74, 16.22 and 0.243 mV, respectively, were obtained. For EC with an upper limit of 5% PRDN and 0.1 mV MAE, an average CR, PRDN and MAE of 9.48, 4.13 and 0.049 mV, respectively, were obtained. For mitdb record 117, the reconstruction quality could be preserved up to a CR of 68.96 by extending the BRC threshold. The proposed method yields better results than recently published works on quality-controlled ECG compression.
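The PRDN quality measure used here is the RMS reconstruction error normalized by the mean-removed signal energy, expressed as a percentage; note that in this paper MAE denotes the maximum (not mean) absolute error. A minimal sketch with a toy signal, not real ECG data:

```python
def prdn(original, reconstructed):
    """Percentage root-mean-square difference, normalized: RMS reconstruction
    error relative to the energy of the mean-removed original signal."""
    mu = sum(original) / len(original)
    num = sum((o - r) ** 2 for o, r in zip(original, reconstructed))
    den = sum((o - mu) ** 2 for o in original)
    return 100.0 * (num / den) ** 0.5

def max_abs_error(original, reconstructed):
    """Maximum absolute sample error (the 'MAE' reported in this paper)."""
    return max(abs(o - r) for o, r in zip(original, reconstructed))

original = [0.0, 1.0, 0.0, -1.0]        # toy signal (mV)
reconstructed = [0.0, 0.9, 0.0, -0.9]   # toy reconstruction
```

Removing the mean in the denominator matters for ECG, where a baseline offset would otherwise inflate the signal energy and make the error look artificially small.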
A suggestion for computing objective function in model calibration
Wu, Yiping; Liu, Shuguang
2014-01-01
A parameter-optimization process (model calibration) is usually required for numerical model applications, which involves the use of an objective function to determine the model cost (model-data errors). The sum of square errors (SSR) has been widely adopted as the objective function in various optimization procedures. However, ‘square error’ calculation was found to be more sensitive to extreme or high values. Thus, we proposed that the sum of absolute errors (SAR) may be a better option than SSR for model calibration. To test this hypothesis, we used two case studies—a hydrological model calibration and a biogeochemical model calibration—to investigate the behavior of a group of potential objective functions: SSR, SAR, sum of squared relative deviation (SSRD), and sum of absolute relative deviation (SARD). Mathematical evaluation of model performance demonstrates that ‘absolute error’ (SAR and SARD) are superior to ‘square error’ (SSR and SSRD) in calculating objective function for model calibration, and SAR behaved the best (with the least error and highest efficiency). This study suggests that SSR might be overly used in real applications, and SAR may be a reasonable choice in common optimization implementations without emphasizing either high or low values (e.g., modeling for supporting resources management).
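The sensitivity difference claimed above is easy to demonstrate: a squared-error objective is dominated by a single large residual, while an absolute-error objective weighs total deviation linearly. A minimal sketch with made-up residuals:

```python
def ssr(obs, sim):
    """Sum of squared errors between observed and simulated series."""
    return sum((o - s) ** 2 for o, s in zip(obs, sim))

def sar(obs, sim):
    """Sum of absolute errors between observed and simulated series."""
    return sum(abs(o - s) for o, s in zip(obs, sim))

obs = [0.0] * 10
even = [1.0] * 10            # ten modest residuals of 1
spiky = [10.0] + [0.0] * 9   # one extreme residual of 10
# SAR rates both fits as equally wrong (total deviation 10 in each case),
# while SSR penalizes the single outlier ten times more heavily.
```

Under SSR a calibration would adjust parameters mainly to fix the one spike, even at the cost of everywhere else, which is exactly the bias toward high values that the study argues against.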
Retrieving Storm Electric Fields from Aircraft Field Mill Data: Part II: Applications
NASA Technical Reports Server (NTRS)
Koshak, William; Mach, D. M.; Christian, H. J.; Stewart, M. F.; Bateman, M. G.
2006-01-01
The Lagrange multiplier theory developed in Part I of this study is applied to complete a relative calibration of a Citation aircraft that is instrumented with six field mill sensors. When side constraints related to average fields are used, the Lagrange multiplier method performs well in computer simulations. For mill measurement errors of 1 V m⁻¹ and a 5 V m⁻¹ error in the mean fair-weather field function, the 3D storm electric field is retrieved to within an error of about 12%. A side constraint that involves estimating the detailed structure of the fair-weather field was also tested using computer simulations. For mill measurement errors of 1 V m⁻¹, the method retrieves the 3D storm field to within an error of about 8% if the fair-weather field estimate is typically within 1 V m⁻¹ of the true fair-weather field. Using this type of side constraint and data from fair-weather field maneuvers taken on 29 June 2001, the Citation aircraft was calibrated. Absolute calibration was completed using the pitch-down method developed in Part I and conventional analyses. The resulting calibration matrices were then used to retrieve storm electric fields during a Citation flight on 2 June 2001. The storm field results are encouraging and agree favorably in many respects with results derived from earlier (iterative) calibration techniques.
A quantitative evaluation of the three dimensional reconstruction of patients' coronary arteries.
Klein, J L; Hoff, J G; Peifer, J W; Folks, R; Cooke, C D; King, S B; Garcia, E V
1998-04-01
Through extensive training and experience, angiographers learn to mentally reconstruct the three-dimensional (3D) relationships of the coronary arterial branches. Graphic computer technology can assist angiographers to more quickly visualize the coronary 3D structure from limited initial views and then help to determine additional helpful views by predicting subsequent angiograms before they are obtained. A new computer method for facilitating 3D reconstruction and visualization of human coronary arteries was evaluated by reconstructing biplane left coronary angiograms from 30 patients. The accuracy of the reconstruction was assessed in two ways: 1) by comparing the vessel centerlines of the actual angiograms with the centerlines of a 2D projection of the 3D model projected into the exact angle of the actual angiogram; and 2) by comparing two 3D models generated from different simultaneous pairs of angiograms. The inter- and intraobserver variability of reconstruction were evaluated by mathematically comparing the 3D model centerlines of repeated reconstructions. The average absolute corrected displacement of 14,662 vessel centerline points in 2D from 30 patients was 1.64 ± 2.26 mm. The average absolute corrected displacement of 3D models generated from different biplane pairs was 7.08 ± 3.21 mm. The intraobserver variability of absolute 3D corrected displacement was 5.22 ± 3.39 mm. The interobserver variability was 6.6 ± 3.1 mm. The centerline analyses show that the reconstruction algorithm is mathematically accurate and reproducible. The figures presented in this report put these measurement errors into clinical perspective, showing that they yield an accurate representation of the clinically relevant information seen on the actual angiograms. These data show that this technique can be clinically useful by accurately displaying in three dimensions the complex relationships of the branches of the coronary arterial tree.
A hybrid ARIMA and neural network model applied to forecast catch volumes of Selar crumenophthalmus
NASA Astrophysics Data System (ADS)
Aquino, Ronald L.; Alcantara, Nialle Loui Mar T.; Addawe, Rizavel C.
2017-11-01
The Selar crumenophthalmus, known in English as the big-eyed scad and locally as matang-baka, is one of the fishes commonly caught along the waters of La Union, Philippines. The study deals with forecasting catch volumes of big-eyed scad for commercial consumption. The data used are quarterly catch volumes of big-eyed scad from 2002 to the first quarter of 2017. These actual data are available from the OpenSTAT database published by the Philippine Statistics Authority (PSA), whose task is to collect, compile, analyze and publish information concerning different aspects of the Philippine setting. Autoregressive Integrated Moving Average (ARIMA) models, an Artificial Neural Network (ANN) model and a hybrid model consisting of ARIMA and ANN were developed to forecast catch volumes of big-eyed scad. Statistical errors such as Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) were computed and compared to choose the most suitable model for forecasting the catch volume for the next few quarters. A comparison of the results of each model and the corresponding statistical errors reveals that the hybrid model, ARIMA-ANN (2,1,2)(6:3:1), is the most suitable for forecasting the catch volumes of big-eyed scad for the next few quarters.
Detection and Dynamic Analysis of Space Debris in the Geo Ring
NASA Astrophysics Data System (ADS)
Lacruz, E.; Abad, C.; Downes, J. J.; Casanova, D.; Tresaco, E.
2018-01-01
There are different populations of space debris (SD) in the geostationary (GEO) region. It is of great interest to know their dynamics, in order to contribute to tasks such as alerting against possible collisions, repositioning GEO satellites, or placing satellites that come into service. In this contribution we present a study of the detection and dynamic analysis of SD located in the GEO ring. Using the telescopes of the Venezuelan National Observatory (VON), a large number of astrometric observations have been acquired. A preliminary dynamic analysis of them has been carried out, which reveals the average relative motion of these orbiters with a mean absolute error in the coordinates of ≈ 0.09 pix.
Fundamental principles of absolute radiometry and the philosophy of this NBS program (1968 to 1971)
NASA Technical Reports Server (NTRS)
Geist, J.
1972-01-01
A description is given of work performed on a program to develop an electrically calibrated detector (also called an absolute radiometer, absolute detector, or electrically calibrated radiometer) that could be used to realize, maintain, and transfer a scale of total irradiance. The program includes a comprehensive investigation of the theoretical basis of absolute detector radiometry, as well as the design and construction of a number of detectors. A theoretical analysis of the sources of error is also included.
Grierson, Lawrence E M; Roberts, James W; Welsher, Arthur M
2017-05-01
There is much evidence to suggest that skill learning is enhanced by skill observation. Recent research on this phenomenon indicates a benefit of observing variable/erred demonstrations. In this study, we explore whether it is variability within the relative organization or the absolute parameterization of a movement that facilitates skill learning through observation. To do so, participants were randomly allocated into groups that observed a model with no variability, absolute timing variability, relative timing variability, or variability in both absolute and relative timing. All participants performed a four-segment movement pattern with specific absolute and relative timing goals prior to and following the observational intervention, as well as in a 24h retention test and transfer tests that featured new relative and absolute timing goals. Absolute timing error indicated that all groups initially acquired the absolute timing, maintained their performance at 24h retention, and exhibited performance deterioration in both transfer tests. Relative timing error revealed that the observation of no variability and of relative timing variability produced greater performance at the post-test, 24h retention and relative timing transfer tests, although performance of the no-variability group deteriorated at the absolute timing transfer test. The results suggest that the learning of absolute timing following observation unfolds irrespective of model variability. However, the learning of relative timing benefits from holding the absolute features constant, while the observation of no variability partially fails in transfer. We suggest learning by observing no-variability and variable/erred models unfolds via similar neural mechanisms, although the latter benefits from the additional coding of information pertaining to movements that require a correction. Copyright © 2017 Elsevier B.V. All rights reserved.
Liu, Zun-lei; Yuan, Xing-wei; Yang, Lin-lin; Yan, Li-ping; Zhang, Hui; Cheng, Jia-hua
2015-02-01
Multiple hypotheses are available to explain recruitment rate. Model selection methods can be used to identify the model that best supports a particular hypothesis. However, using a single model for estimating recruitment success is often inadequate for overexploited populations because of high model uncertainty. In this study, stock-recruitment data of small yellow croaker in the East China Sea, collected from fishery-dependent and independent surveys between 1992 and 2012, were used to examine density-dependent effects on recruitment success. Model selection methods based on frequentist (AIC, maximum adjusted R2 and P-values) and Bayesian (Bayesian model averaging, BMA) approaches were applied to identify the relationship between recruitment and environmental conditions. Interannual variability of the East China Sea environment was indicated by sea surface temperature (SST), meridional wind stress (MWS), zonal wind stress (ZWS), sea surface pressure (SPP) and runoff of the Changjiang River (RCR). Mean absolute error, mean squared predictive error and continuous ranked probability score were calculated to evaluate the predictive performance of recruitment success. The results showed that model structures were not consistent across the three model selection methods: the selected predictive variables were spawning abundance and MWS under AIC; spawning abundance alone under P-values; and spawning abundance, MWS and RCR under maximum adjusted R2. Recruitment success decreased linearly with stock abundance (P < 0.01), suggesting an overcompensation effect that might be due to cannibalism or food competition. Meridional wind intensity showed a marginally significant positive effect on recruitment success (P = 0.06), while runoff of the Changjiang River showed a marginally negative effect (P = 0.07).
Based on mean absolute error and continuous ranked probability score, the predictive error of the models obtained from BMA was the smallest among the approaches, while that of the models selected on the P-values of the independent variables was the highest; by mean squared predictive error, however, the models selected on the maximum adjusted R2 performed worst. We found that the BMA method could improve the prediction of recruitment success, derive more accurate prediction intervals and quantitatively evaluate model uncertainty.
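The weighting idea behind such model averaging can be sketched with AIC-based weights, a common frequentist stand-in for the BMA posterior model probabilities used in the study; all fit statistics below are hypothetical.

```python
import math

def aic(rss, n, k):
    """Akaike information criterion for a least-squares fit with
    residual sum of squares rss, n observations and k parameters."""
    return n * math.log(rss / n) + 2 * k

def aic_weights(aics):
    """Normalize AIC values into model weights that sum to 1."""
    best = min(aics)
    rel = [math.exp(-0.5 * (a - best)) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

def averaged_prediction(per_model_preds, weights):
    """Weight-average the per-model predictions for one observation."""
    return sum(p * w for p, w in zip(per_model_preds, weights))

# Three hypothetical recruitment models: (residual sum of squares, parameters).
fits = [(4.2, 2), (3.9, 3), (5.0, 1)]
weights = aic_weights([aic(rss, n=21, k=k) for rss, k in fits])
```

The weighted forecast then downweights, rather than discards, the weaker candidate models, which is why averaged predictions tend to carry more honest uncertainty than a single selected model.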
A novel validation and calibration method for motion capture systems based on micro-triangulation.
Nagymáté, Gergely; Tuchband, Tamás; Kiss, Rita M
2018-06-06
Motion capture systems are widely used to measure human kinematics. Nevertheless, users must consider system errors when evaluating their results. Most validation techniques for these systems are based on relative distance and displacement measurements. In contrast, our study aimed to analyse the absolute volume accuracy of optical motion capture systems by means of an engineering surveying reference measurement of the marker coordinates (uncertainty: 0.75 mm). The method is exemplified on an 18-camera OptiTrack Flex13 motion capture system. The absolute accuracy was defined by the root mean square error (RMSE) between the coordinates measured by the camera system and by engineering surveying (micro-triangulation). The original RMSE of 1.82 mm, dominated by scaling error, was reduced to 0.77 mm, while the correlation of the errors with distance from the origin fell from 0.855 to 0.209. A simpler but less accurate absolute accuracy compensation method, using a tape measure over large distances, was also tested; it produced a scaling compensation similar to that of the surveying method or of direct wand size compensation by a high-precision 3D scanner. The presented validation methods can be less precise in some respects than previous techniques, but they address an error type that has not been and cannot be studied with the previous validation methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
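The absolute accuracy metric above, the RMSE between camera-measured and surveyed marker coordinates, can be sketched as follows (marker positions are hypothetical):

```python
import math

def rmse_3d(measured, reference):
    """Root mean square error between paired 3-D marker coordinates
    (two equal-length lists of (x, y, z) tuples, same marker order)."""
    sq_dists = [
        (mx - rx) ** 2 + (my - ry) ** 2 + (mz - rz) ** 2
        for (mx, my, mz), (rx, ry, rz) in zip(measured, reference)
    ]
    return math.sqrt(sum(sq_dists) / len(sq_dists))

# Hypothetical marker positions in millimetres.
surveyed = [(0.0, 0.0, 0.0), (100.0, 0.0, 0.0)]
captured = [(0.5, 0.0, 0.0), (100.0, 0.5, 0.0)]
error_mm = rmse_3d(captured, surveyed)
```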
New method of extracting information of arterial oxygen saturation based on ∑|Δ|
NASA Astrophysics Data System (ADS)
Dai, Wenting; Lin, Ling; Li, Gang
2017-04-01
Noninvasive detection of oxygen saturation with near-infrared spectroscopy has been widely used in clinics. To further enhance its detection precision and reliability, this paper proposes a time domain absolute difference summation (∑|Δ|) method based on a dynamic spectrum. In this method, the ratios of the absolute differences between two differential sampling points, taken at the same instants on the logarithmic photoplethysmography signals of red and infrared light, are computed in turn to form a ratio sequence, which is then screened with a statistical method. Finally, the summation of the screened ratio sequence is used as the oxygen saturation coefficient Q. We collected 120 reference samples of SpO2 and compared the results of two methods, ∑|Δ| and peak-peak. The average root-mean-square errors of the two methods were 3.02% and 6.80%, respectively, in 20 randomly selected cases. In addition, the average variance of Q for the 10 samples obtained by the new method was reduced to 22.77% of that obtained by the peak-peak method. Compared with the commercial product, the new method makes the results more accurate. Theoretical and experimental analysis indicates that the application of the ∑|Δ| method could enhance the precision and reliability of oxygen saturation detection in real time.
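A minimal sketch of the ∑|Δ| idea, assuming a simple mean ± k·std screening rule (the paper's exact statistical screening criterion is not specified in the abstract):

```python
import math

def sigma_abs_delta(red_ppg, ir_ppg, k=1.5):
    """Compute the oxygen saturation coefficient Q: ratios of absolute
    first differences of the log-PPG signals, screened by an assumed
    mean +/- k*std rule, then summed."""
    log_r = [math.log(v) for v in red_ppg]
    log_i = [math.log(v) for v in ir_ppg]
    ratios = [
        abs(log_r[n + 1] - log_r[n]) / abs(log_i[n + 1] - log_i[n])
        for n in range(len(log_r) - 1)
        if log_i[n + 1] != log_i[n]  # skip flat infrared segments
    ]
    mean = sum(ratios) / len(ratios)
    std = math.sqrt(sum((r - mean) ** 2 for r in ratios) / len(ratios))
    screened = [r for r in ratios if abs(r - mean) <= k * std]
    return sum(screened)
```

Summing many screened ratios, rather than relying on a single peak-to-peak amplitude, is what gives the method its robustness to isolated sampling artifacts.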
Prediction of boiling points of organic compounds by QSPR tools.
Dai, Yi-min; Zhu, Zhi-ping; Cao, Zhong; Zhang, Yue-fei; Zeng, Ju-lan; Li, Xun
2013-07-01
Novel electronegativity topological descriptors YC and WC were derived from molecular structure using the equilibrium electronegativity of atoms and the relative bond lengths of the molecule. Quantitative structure-property relationships (QSPR) between the descriptors YC and WC, together with the path number parameter P3, and the normal boiling points of 80 alkanes, 65 unsaturated hydrocarbons and 70 alcohols were obtained separately. The high quality of the prediction models was evidenced by the coefficient of determination (R(2)), the standard error (S), average absolute errors (AAE) and predictive parameters (Qext(2), RCV(2), Rm(2)). According to the regression equations, the influences of the length of the carbon backbone, the size and degree of branching of a molecule, and the role of functional groups on the normal boiling point were analyzed. Comparison with reference models demonstrated that the novel topological descriptors based on the equilibrium electronegativity of atoms and relative bond lengths are useful molecular descriptors for predicting the normal boiling points of organic compounds. Copyright © 2013 Elsevier Inc. All rights reserved.
Accuracy assessment of TanDEM-X IDEM using airborne LiDAR on the area of Poland
NASA Astrophysics Data System (ADS)
Woroszkiewicz, Małgorzata; Ewiak, Ireneusz; Lulkowska, Paulina
2017-06-01
The TerraSAR-X add-on for Digital Elevation Measurement (TanDEM-X) mission launched in 2010 is another programme - after the Shuttle Radar Topography Mission (SRTM) in 2000 - that uses space-borne radar interferometry to build a global digital surface model. This article presents the accuracy assessment of the TanDEM-X intermediate Digital Elevation Model (IDEM) provided by the German Aerospace Center (DLR) under the project "Accuracy assessment of a Digital Elevation Model based on TanDEM-X data" for the southwestern territory of Poland. The study area included open, urban and forested terrain. Based on a set of 17,498 reference points acquired by airborne laser scanning, the mean height errors and standard deviations were calculated for areas with a terrain slope below 2 degrees, between 2 and 6 degrees, and above 6 degrees. The absolute accuracy of the IDEM data for the analysed area, expressed as a root mean square error (Total RMSE), was 0.77 m.
Near-lossless multichannel EEG compression based on matrix and tensor decompositions.
Dauwels, Justin; Srinivasan, K; Reddy, M Ramasubba; Cichocki, Andrzej
2013-05-01
A novel near-lossless compression algorithm for multichannel electroencephalogram (MC-EEG) is proposed based on matrix/tensor decomposition models. MC-EEG is represented in suitable multiway (multidimensional) forms to efficiently exploit temporal and spatial correlations simultaneously. Several matrix/tensor decomposition models are analyzed in view of efficient decorrelation of the multiway forms of MC-EEG. A compression algorithm is built based on the principle of “lossy plus residual coding,” consisting of a matrix/tensor decomposition-based coder in the lossy layer followed by arithmetic coding in the residual layer. This approach guarantees a specifiable maximum absolute error between original and reconstructed signals. The compression algorithm is applied to three different scalp EEG datasets and an intracranial EEG dataset, each with different sampling rate and resolution. The proposed algorithm achieves attractive compression ratios compared to compressing individual channels separately. For similar compression ratios, the proposed algorithm achieves nearly fivefold lower average error compared to a similar wavelet-based volumetric MC-EEG compression algorithm.
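The "lossy plus residual coding" principle with a guaranteed maximum absolute error can be sketched by quantizing the residual with an odd step of 2·ε + 1 (integer samples assumed; the actual coder uses matrix/tensor decompositions for the lossy layer and arithmetic coding for the residual):

```python
def encode_residual(signal, lossy_approx, max_abs_err):
    """Quantize the residual between integer samples and their lossy
    approximation with step 2*max_abs_err + 1; the quantized residual
    is what the residual layer would entropy-code."""
    step = 2 * max_abs_err + 1
    return [round((s - a) / step) for s, a in zip(signal, lossy_approx)]

def decode(lossy_approx, residual_q, max_abs_err):
    """Reconstruct samples; the error never exceeds max_abs_err."""
    step = 2 * max_abs_err + 1
    return [a + q * step for a, q in zip(lossy_approx, residual_q)]

samples = [10, 13, 7, 22]   # hypothetical EEG samples
approx = [9, 15, 7, 20]     # hypothetical lossy-layer output
codes = encode_residual(samples, approx, max_abs_err=1)
restored = decode(approx, codes, max_abs_err=1)
```

The better the lossy layer approximates the signal, the more residual codes cluster near zero and the smaller the entropy-coded residual stream becomes, while the error bound holds regardless.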
NASA Astrophysics Data System (ADS)
Cai, Jun; Wang, Kuaishe; Shi, Jiamin; Wang, Wen; Liu, Yingying
2018-01-01
Constitutive analysis for hot working of BFe10-1-2 alloy was carried out using experimental stress-strain data from isothermal hot compression tests over a wide temperature range of 1,023-1,273 K and a strain rate range of 0.001-10 s-1. A constitutive equation based on modified double multiple nonlinear regression was proposed, considering the independent effects of strain, strain rate and temperature as well as their interrelation. The flow stress predicted by the developed equation was compared with the experimental data. The correlation coefficient (R), average absolute relative error (AARE) and relative errors were introduced to verify the validity of the developed constitutive equation. Subsequently, a comparative study was made against the capability of a strain-compensated Arrhenius-type constitutive model. The results showed that the developed constitutive equation based on modified double multiple nonlinear regression could predict the flow stress of BFe10-1-2 alloy with good correlation and generalization.
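The two validity metrics named above, R and AARE, can be computed as follows (a generic sketch, not the authors' code; the stress values would be experimental/predicted pairs):

```python
import math

def aare(predicted, experimental):
    """Average absolute relative error, in percent."""
    return 100.0 / len(predicted) * sum(
        abs((e - p) / e) for p, e in zip(predicted, experimental)
    )

def correlation(predicted, experimental):
    """Pearson correlation coefficient R between predicted and
    experimental flow stress."""
    n = len(predicted)
    mp = sum(predicted) / n
    me = sum(experimental) / n
    cov = sum((p - mp) * (e - me) for p, e in zip(predicted, experimental))
    sp = math.sqrt(sum((p - mp) ** 2 for p in predicted))
    se = math.sqrt(sum((e - me) ** 2 for e in experimental))
    return cov / (sp * se)
```

AARE is generally preferred over R alone for constitutive models, since a high R can mask a systematic bias that AARE exposes directly.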
Human Age Recognition by Electrocardiogram Signal Based on Artificial Neural Network
NASA Astrophysics Data System (ADS)
Dasgupta, Hirak
2016-12-01
The objective of this work is to build a neural network function approximation model to detect human age from the electrocardiogram (ECG) signal. The input vectors of the neural network are the Katz fractal dimension of the ECG signal, the frequencies in the QRS complex, sex (male or female, represented by a numeric constant) and the average of successive R-R peak distances of a particular ECG signal. The QRS complex has been detected by a short-time Fourier transform algorithm. The successive R peaks have been detected by first cutting the signal into periods with an autocorrelation method and then finding the point of largest absolute value in each period. The neural network used in this problem consists of two layers, with sigmoid neurons in the input layer and a linear neuron in the output layer. The results show means of errors of -0.49, 1.03 and 0.79 years, and standard deviations of errors of 1.81, 1.77 and 2.70 years, during training, cross validation and testing with unknown data sets, respectively.
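The period-cutting and peak-finding steps can be sketched as follows (a simplified stand-in for the authors' autocorrelation procedure, using a toy periodic signal):

```python
def autocorr_period(x, min_lag=1):
    """Estimate the dominant period of a signal as the lag (>= min_lag)
    with the largest autocorrelation of the mean-removed signal."""
    n = len(x)
    mean = sum(x) / n
    xc = [v - mean for v in x]
    best_lag, best_val = min_lag, float("-inf")
    for lag in range(min_lag, n // 2):
        r = sum(xc[i] * xc[i + lag] for i in range(n - lag))
        if r > best_val:
            best_lag, best_val = lag, r
    return best_lag

def abs_peaks_per_period(x, period):
    """Index of the sample with the largest absolute value in each period
    (the R-peak candidate within that beat)."""
    peaks = []
    for start in range(0, len(x), period):
        chunk = range(start, min(start + period, len(x)))
        peaks.append(max(chunk, key=lambda i: abs(x[i])))
    return peaks
```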
Road traffic accidents prediction modelling: An analysis of Anambra State, Nigeria.
Ihueze, Chukwutoo C; Onwurah, Uchendu O
2018-03-01
One of the major problems in the world today is the rate of road traffic crashes and deaths on our roads. The majority of these deaths occur in low- and middle-income countries, including Nigeria. This study analyzed road traffic crashes in Anambra State, Nigeria, with the intention of developing accurate predictive models for forecasting crash frequency in the State using autoregressive integrated moving average (ARIMA) and autoregressive integrated moving average with explanatory variables (ARIMAX) modelling techniques. The results showed that the ARIMAX model outperformed the ARIMA(1,1,1) model when their performances were compared using the lower Bayesian information criterion, mean absolute percentage error and root mean square error, and the higher coefficient of determination (R-squared), as accuracy measures. The findings of this study reveal that incorporating human, vehicle and environmental factors in time series analysis of crash datasets produces a more robust predictive model than solely using aggregated crash counts. This study contributes to the body of knowledge on road traffic safety and provides an approach to forecasting that uses human, vehicle and environmental factors. The recommendations made in this study, if applied, will help reduce the number of road traffic crashes in Nigeria. Copyright © 2017 Elsevier Ltd. All rights reserved.
Chen, Chieh-Fan; Ho, Wen-Hsien; Chou, Huei-Yin; Yang, Shu-Mei; Chen, I-Te; Shi, Hon-Yi
2011-01-01
This study analyzed meteorological, clinical and economic factors in terms of their effects on monthly ED revenue and visitor volume. Monthly data from January 1, 2005 to September 30, 2009 were analyzed. Spearman correlation and cross-correlation analyses were performed to identify the correlation between each independent variable, ED revenue, and visitor volume. Autoregressive integrated moving average (ARIMA) model was used to quantify the relationship between each independent variable, ED revenue, and visitor volume. The accuracies were evaluated by comparing model forecasts to actual values with mean absolute percentage of error. Sensitivity of prediction errors to model training time was also evaluated. The ARIMA models indicated that mean maximum temperature, relative humidity, rainfall, non-trauma, and trauma visits may correlate positively with ED revenue, but mean minimum temperature may correlate negatively with ED revenue. Moreover, mean minimum temperature and stock market index fluctuation may correlate positively with trauma visitor volume. Mean maximum temperature, relative humidity and stock market index fluctuation may correlate positively with non-trauma visitor volume. Mean maximum temperature and relative humidity may correlate positively with pediatric visitor volume, but mean minimum temperature may correlate negatively with pediatric visitor volume. The model also performed well in forecasting revenue and visitor volume. PMID:22203886
VizieR Online Data Catalog: Yale Trigonometric Parallaxes Preliminary (van Altena+ 1991)
NASA Astrophysics Data System (ADS)
van Altena, W. F.; Lee, J. T.; Hoffleit, D.
1995-10-01
The preliminary edition of the General Catalogue of Trigonometric Stellar Parallaxes, containing 15349 parallaxes for 7879 stars, has been prepared at the Yale University Observatory. In this edition 1480 stars have been added to those contained in the previous edition of the catalog by Jenkins (1952, 1963). This relatively small increase in the number of stars is more than compensated for by the increased accuracy of the newer trigonometric parallaxes. The authors have attempted to include here all trigonometric parallaxes made available to them by March 1991 and will provide for each listed parallax in the final version the reference to its source of publication. For each star it lists the equatorial coordinates for B1900 and the secular variation for 100 years, the proper motion in x and y, the weighted average absolute parallax and its standard error, the number of parallax observations, the quality of interagreement among the different values, the visual magnitude, and various cross identifications with other catalogs. The B1900 equinox has been maintained to avoid assigning yet another star number. Ancillary information, including UBV photometry, MK spectral types, data on the variability and binary nature of the stars, orbits when available, and miscellaneous information to aid in determining the reliability of the data, will be listed in the final version. The relative parallaxes are corrected to absolute parallax using newly computed corrections that are based on an improved model of the galaxy. An analysis of the resulting absolute parallaxes has been made to study the accidental and systematic errors of the parallaxes. The results of that investigation are used to arrive at a weighting system for the catalog, which then yields weighted absolute parallaxes for each star. The weighting system is still under investigation; therefore, the weighted parallaxes may change a bit in the final version. 
Printed copies of the catalog will be available from the Yale University Observatory when the work has been completed (late 1993?). See the file cdrom.doc which provides the original documentation by W. van Altena. (1 data file).
Jones, J.W.; Jarnagin, T.
2009-01-01
Given the relatively high cost of mapping impervious surfaces at regional scales, substantial effort is being expended in the development of moderate-resolution, satellite-based methods for estimating impervious surface area (ISA). To rigorously assess the accuracy of these data products, high-quality, independently derived validation data are needed. High-resolution data were collected across a gradient of development within the Mid-Atlantic region to assess the accuracy of National Land Cover Data (NLCD) Landsat-based ISA estimates. Absolute error (satellite-predicted area minus reference area) and relative error [(satellite-predicted area minus reference area)/reference area] were calculated for each of 240 sample regions that are each more than 15 Landsat pixels on a side. The ability to compile and examine ancillary data in a geographic information system environment allowed evaluation of both the validation and NLCD data and afforded efficient exploration of observed errors. In a minority of cases, errors could be explained by temporal discontinuities between the date of satellite image capture and the validation source data in rapidly changing places. In others, errors were created by vegetation cover over impervious surfaces and by other factors that bias the satellite processing algorithms. On average in the Mid-Atlantic region, the NLCD product underestimates ISA by approximately 5%. While the error range varies between 2 and 8%, this underestimation occurs regardless of development intensity. Through such analyses the errors, strengths, and weaknesses of particular satellite products can be explored to suggest appropriate uses for regional, satellite-based data in rapidly developing areas of environmental significance. © 2009 ASCE.
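The two error definitions above reduce to a simple pair of formulas (the sample values below are hypothetical):

```python
def isa_errors(predicted_area, reference_area):
    """Absolute error (predicted - reference) and relative error
    ((predicted - reference) / reference) of an ISA estimate."""
    absolute = predicted_area - reference_area
    relative = absolute / reference_area
    return absolute, relative

# Hypothetical sample region: satellite predicts 95 ha, reference is 100 ha.
abs_err, rel_err = isa_errors(95.0, 100.0)
```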
LACIE - An application of meteorology for United States and foreign wheat assessment
NASA Technical Reports Server (NTRS)
Hill, J. D.; Strommen, N. D.; Sakamoto, C. M.; Leduc, S. K.
1980-01-01
This paper describes the overall Large Area Crop Inventory Experiment technical approach utilizing the global weather-reporting network and the Landsat satellite to make a quasi-operational application of existing research results, and the accomplishments of this cooperative experiment in utilizing the weather information. Global weather data were utilized in preparing timely yield estimates for selected areas of the U.S. Great Plains, the U.S.S.R. and Canada. Additionally, wheat yield models were developed and pilot tested for Brazil, Australia, India and Argentina. The results of the work show that heading dates for wheat in North America can be predicted with an average absolute error of about 5 days for winter wheat and 4 days for spring wheat. Independent tests of wheat yield models over a 10-year period for the U.S. Great Plains produced a root-mean-square error of 1.12 quintals per hectare (q/ha), while similar tests for the U.S.S.R. produced an error of 1.31 q/ha. Research designed to improve the initial capability is described, as is the rationale for further evolution of a capability to monitor global climate and assess its impact on world food supplies.
Groundwater recharge estimation in semi-arid zone: a study case from the region of Djelfa (Algeria)
NASA Astrophysics Data System (ADS)
Ali Rahmani, S. E.; Chibane, Brahim; Boucefiène, Abdelkader
2017-09-01
Deficiency of surface water resources in semi-arid areas makes groundwater the preferred resource for meeting the needs of a growing population. In this research, we quantify the groundwater recharge rate using a new hybrid model that takes into account annual rainfall, average annual temperature and the geological characteristics of the area. The hybrid model, a combination of a general hydrogeological model and a hydrological model, was tested and calibrated against a chemical tracer method, the chloride mass balance (CMB). We applied the model to an aquifer complex in the region of Djelfa (Algeria). Model performance was verified with five criteria: the Nash efficiency, mean absolute error (MAE), root mean square error (RMSE), the coefficient of determination and the arithmetic mean error (AME). These approximations facilitate groundwater management in semi-arid areas; the model is a refinement and improvement of the model developed by Chibane et al., and it gives very interesting results with low uncertainty. A new recharge class diagram was established from the model to obtain the groundwater recharge value quickly for any area in a semi-arid region, using temperature and rainfall.
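Two of the five verification criteria, the Nash efficiency and the arithmetic mean error (AME), can be sketched as follows (MAE, RMSE and the coefficient of determination are standard; the Nash-Sutcliffe form of the efficiency is assumed here):

```python
def nash_sutcliffe(simulated, observed):
    """Nash-Sutcliffe efficiency: 1 for a perfect fit; values <= 0 mean
    the model predicts no better than the mean of the observations."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for s, o in zip(simulated, observed))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

def ame(simulated, observed):
    """Arithmetic mean error: the mean signed residual, which exposes a
    systematic over- or under-estimation bias."""
    return sum(o - s for s, o in zip(simulated, observed)) / len(observed)
```

Using a signed criterion such as AME alongside squared-error criteria is useful here because a recharge model can have a small RMSE yet still consistently over- or under-estimate recharge.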
Spot measurement of heart rate based on morphology of PhotoPlethysmoGraphic (PPG) signals.
Madhan Mohan, P; Nagarajan, V; Vignesh, J C
2017-02-01
Due to increasing health consciousness among people, it is imperative to have low-cost health care devices to measure vital parameters such as heart rate and arterial oxygen saturation (SpO2). In this paper, an efficient heart rate monitoring algorithm based on the morphology of photoplethysmography (PPG) signals to measure the spot heart rate (HR), and its real-time implementation, is proposed. The algorithm performs pre-processing and detects the onsets and systolic peaks of the PPG signal to estimate the heart rate of the subject. Since the algorithm is based on the morphology of the signal, it works well when the subject is not moving, which is the typical test case. The algorithm is therefore developed mainly to measure heart rate in on-demand applications. Real-time experimental results indicate a heart rate accuracy of 99.5%, mean absolute percentage error (MAPE) of 1.65%, mean absolute error (MAE) of 1.18 BPM and reference closeness factor (RCF) of 0.988. The results further show that the average response time of the algorithm to give the spot HR is 6.85 s, so users need not wait long to see their HR. The hardware implementation results show that the algorithm requires only 18 KBytes of total memory and runs at high speed at 0.85 MIPS. This algorithm can therefore be targeted to low-cost embedded platforms.
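The MAE and MAPE metrics reported above can be computed as follows (the exact definition of RCF is not given in the abstract and is omitted; the heart rate values below are hypothetical):

```python
def mae_bpm(estimated, reference):
    """Mean absolute error of heart rate estimates, in beats per minute."""
    return sum(abs(e - r) for e, r in zip(estimated, reference)) / len(reference)

def mape_pct(estimated, reference):
    """Mean absolute percentage error, in percent."""
    return 100.0 / len(reference) * sum(
        abs(e - r) / r for e, r in zip(estimated, reference)
    )

# Hypothetical spot-HR readings vs. a reference monitor, in BPM.
est = [72.0, 80.0, 65.0]
ref = [70.0, 82.0, 65.0]
```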
Quantum efficiency measurement of the Transiting Exoplanet Survey Satellite (TESS) CCD detectors
NASA Astrophysics Data System (ADS)
Krishnamurthy, A.; Villasenor, J.; Thayer, C.; Kissel, S.; Ricker, G.; Seager, S.; Lyle, R.; Deline, A.; Morgan, E.; Sauerwein, T.; Vanderspek, R.
2016-07-01
Very precise on-ground characterization and calibration of TESS CCD detectors will significantly assist in the analysis of the science data from the mission. An accurate optical test bench with very high photometric stability has been developed to perform precise measurements of the absolute quantum efficiency. The setup consists of a vacuum dewar with a single MIT Lincoln Lab CCID-80 device mounted on a cold plate with the calibrated reference photodiode mounted next to the CCD. A very stable laser-driven light source is integrated with a closed-loop intensity stabilization unit to control variations of the light source down to a few parts-per-million when averaged over 60 s. Light from the stabilization unit enters a 20 inch integrating sphere. The output light from the sphere produces near-uniform illumination on the cold CCD and on the calibrated reference photodiode inside the dewar. The ratio of the CCD and photodiode signals provides the absolute quantum efficiency measurement. The design, key features, error analysis, and results from the test campaign are presented.
Forecasting influenza in Hong Kong with Google search queries and statistical model fusion
Ramirez Ramirez, L. Leticia; Nezafati, Kusha; Zhang, Qingpeng; Tsui, Kwok-Leung
2017-01-01
Background The objective of this study is to investigate the predictive utility of online social media and web search queries, particularly Google search data, to forecast new cases of influenza-like illness (ILI) in general outpatient clinics (GOPC) in Hong Kong. To mitigate the impact of sensitivity to self-excitement (i.e., fickle media interest) and other artifacts of online social media data, our approach fuses multiple offline and online data sources. Methods Four individual models: generalized linear model (GLM), least absolute shrinkage and selection operator (LASSO), autoregressive integrated moving average (ARIMA), and deep learning (DL) with feedforward neural networks (FNN) are employed to forecast ILI-GOPC both one week and two weeks in advance. The covariates include Google search queries, meteorological data, and previously recorded offline ILI. To our knowledge, this is the first study that introduces deep learning methodology into surveillance of infectious diseases and investigates its predictive utility. Furthermore, to exploit the strengths of each individual forecasting model, we use statistical model fusion via Bayesian model averaging (BMA), which allows a systematic integration of multiple forecast scenarios. For each model, an adaptive approach is used to capture the recent relationship between ILI and the covariates. Results DL with FNN appears to deliver the most competitive predictive performance among the four individual models considered. Combining all four models in a comprehensive BMA framework further improves predictive evaluation metrics such as root mean squared error (RMSE) and mean absolute predictive error (MAPE). Nevertheless, DL with FNN remains the preferred method for predicting the locations of influenza peaks. Conclusions The proposed approach can be viewed as a feasible alternative for forecasting ILI in Hong Kong or other countries where ILI has no constant seasonal trend and influenza data resources are limited.
The proposed methodology is easily tractable and computationally efficient. PMID:28464015
Modeling micro-droplet formation in near-field electrohydrodynamic jet printing
NASA Astrophysics Data System (ADS)
Popell, George Colin
Near-field electrohydrodynamic jet (E-jet) printing has recently gained significant interest within the manufacturing research community because of its ability to produce micro/sub-micron-scale droplets using a wide variety of inks and substrates. However, the process currently operates in open loop and as a result suffers from unpredictable printing quality. The use of physics-based, control-oriented process models is expected to enable closed-loop control of this printing technique. The objective of this research is to perform a fundamental study of the substrate-side droplet shape evolution in near-field E-jet printing and to develop a physics-based model of it that links input parameters such as voltage magnitude and ink properties to the height and diameter of the printed droplet. To achieve this objective, a synchronized high-speed imaging and substrate-side current-detection system was implemented to enable a correlation between the droplet shape parameters and the measured current signal. The experimental data reveal characteristic process signatures and droplet spreading regimes. The results of these studies are then used as the basis for a model that predicts the droplet diameter and height using the measured current signal as the input. A unique scaling factor based on the measured current signal is used in this model instead of relying on empirical scaling laws found in the literature. For each of the three inks tested in this study, the average absolute error in the model predictions is under 4.6% for diameter and under 10.6% for height of the steady-state droplet. When printing under non-conducive ambient conditions of low humidity and high temperature, the use of an environmental correction factor in the model results in average absolute errors of 10.35% and 12.5% for diameter and height predictions, respectively.
Clinical time series prediction: towards a hierarchical dynamical system framework
Liu, Zitao; Hauskrecht, Milos
2014-01-01
Objective Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding the patient's condition, the dynamics of a disease, the effects of various patient management interventions, and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Materials and methods Our hierarchical dynamical system framework for modeling clinical time series combines the advantages of two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series by using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. Results We tested our framework by first learning the time series model from data for the patients in the training set, and then applying the model to predict future time series values for the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered.
Conclusion A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive performance. PMID:25534671
Twice cutting method reduces tibial cutting error in unicompartmental knee arthroplasty.
Inui, Hiroshi; Taketomi, Shuji; Yamagami, Ryota; Sanada, Takaki; Tanaka, Sakae
2016-01-01
Bone cutting error can be one of the causes of malalignment in unicompartmental knee arthroplasty (UKA). The amount of cutting error in total knee arthroplasty has been reported; however, none have investigated cutting error in UKA. The purpose of this study was to reveal the amount of cutting error in UKA when an open cutting guide was used and to clarify whether cutting the tibia horizontally twice using the same cutting guide reduced the cutting errors in UKA. We measured the alignment of the tibial cutting guides, the first-cut cutting surfaces and the second-cut cutting surfaces using the navigation system in 50 UKAs. Cutting error was defined as the angular difference between the cutting guide and the cutting surface. The mean absolute first-cut cutting error was 1.9° (1.1° varus) in the coronal plane and 1.1° (0.6° anterior slope) in the sagittal plane, whereas the mean absolute second-cut cutting error was 1.1° (0.6° varus) in the coronal plane and 1.1° (0.4° anterior slope) in the sagittal plane. Cutting the tibia horizontally twice reduced the cutting errors in the coronal plane significantly (P<0.05). Our study demonstrated that in UKA, cutting the tibia horizontally twice using the same cutting guide reduced cutting error in the coronal plane. Copyright © 2014 Elsevier B.V. All rights reserved.
Validation of SenseWear Armband in children, adolescents, and adults.
Lopez, G A; Brønd, J C; Andersen, L B; Dencker, M; Arvidsson, D
2018-02-01
SenseWear Armband (SW) is a multisensor monitor to assess physical activity and energy expenditure. Its prediction algorithms have been updated periodically. The aim was to validate SW in children, adolescents, and adults. The most recent SW algorithm 5.2 (SW5.2) and the previous version 2.2 (SW2.2) were evaluated for estimation of energy expenditure during semi-structured activities in 35 children, 31 adolescents, and 36 adults with indirect calorimetry as reference. Energy expenditure estimated from waist-worn ActiGraph GT3X+ data (AG) was used for comparison. Improvements in measurement errors were demonstrated with SW5.2 compared to SW2.2, especially in children and for biking. The overall mean absolute percent error with SW5.2 was 24% in children, 23% in adolescents, and 20% in adults. The error was larger for sitting and standing (23%-32%) and for basketball and biking (19%-35%), compared to walking and running (8%-20%). The overall mean absolute error with AG was 28% in children, 22% in adolescents, and 28% in adults. The absolute percent error for biking was 32%-74% with AG. In general, SW and AG underestimated energy expenditure. However, both methods demonstrated a proportional bias, with increasing underestimation for increasing energy expenditure level, in addition to the large individual error. SW provides measures of energy expenditure level with similar accuracy in children, adolescents, and adults with the improvements in the updated algorithms. Although SW captures biking better than AG, these methods share remaining measurement errors requiring further improvements for accurate measures of physical activity and energy expenditure in clinical and epidemiological research. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Teo, Troy P; Ahmed, Syed Bilal; Kawalec, Philip; Alayoubi, Nadia; Bruce, Neil; Lyn, Ethan; Pistorius, Stephen
2018-02-01
The accurate prediction of intrafraction lung tumor motion is required to compensate for system latency in image-guided adaptive radiotherapy systems. The goal of this study was to identify an optimal prediction model that has a short learning period so that prediction and adaptation can commence soon after treatment begins, and requires minimal reoptimization for individual patients. Specifically, the feasibility of predicting tumor position using a combination of a generalized (i.e., averaged) neural network, optimized using historical patient data (i.e., tumor trajectories) obtained offline, coupled with the use of real-time online tumor positions (obtained during treatment delivery) was examined. A 3-layer perceptron neural network was implemented to predict tumor motion for a prediction horizon of 650 ms. A backpropagation algorithm and batch gradient descent approach were used to train the model. Twenty-seven 1-min lung tumor motion samples (selected from a CyberKnife patient dataset) were sampled at a rate of 7.5 Hz (0.133 s) to emulate the frame rate of an electronic portal imaging device (EPID). A sliding temporal window was used to sample the data for learning. The sliding window length was set to be equivalent to the first breathing cycle detected from each trajectory. Performing a parametric sweep, an averaged error surface of mean square errors (MSE) was obtained from the prediction responses of seven trajectories used for the training of the model (Group 1). An optimal input data size and number of hidden neurons were selected to represent the generalized model. To evaluate the prediction performance of the generalized model on unseen data, twenty tumor traces (Group 2) that were not involved in the training of the model were used for leave-one-out cross-validation. An input data size of 35 samples (4.6 s) and 20 hidden neurons were selected for the generalized neural network. An average sliding window length of 28 data samples was used.
The average initial learning period prior to the availability of the first predicted tumor position was 8.53 ± 1.03 s. Average mean absolute errors (MAE) of 0.59 ± 0.13 mm and 0.56 ± 0.18 mm were obtained from Groups 1 and 2, respectively, giving an overall MAE of 0.57 ± 0.17 mm. The average root-mean-square error (RMSE) of 0.67 ± 0.36 mm for all the traces (0.76 ± 0.34 mm, Group 1 and 0.63 ± 0.36 mm, Group 2) is comparable to previously published results. Prediction errors are mainly due to the irregular periodicities between cycles. Since the errors from Groups 1 and 2 are within the same range, the model can generalize and predict on unseen data. This is a first attempt to use an averaged MSE error surface (obtained from the prediction of different patients' tumor trajectories) to determine the parameters of a generalized neural network. This network could be deployed as a plug-and-play predictor for tumor trajectory during treatment delivery, eliminating the need for optimizing individual networks with pretreatment patient data. © 2017 American Association of Physicists in Medicine.
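A minimal sketch of the data handling described above: sliding-window samples from a 7.5 Hz trace with a 650 ms (5-sample) prediction horizon, evaluated by MAE. A linear least-squares predictor stands in for the authors' 3-layer perceptron, and the sinusoidal breathing trace is synthetic; only the window size (35 samples) and horizon follow the abstract.

```python
import numpy as np

def make_windows(trace, window=35, horizon=5):
    """Input rows are `window` past samples; the target is the sample `horizon` steps ahead."""
    X, y = [], []
    for i in range(len(trace) - window - horizon + 1):
        X.append(trace[i:i + window])
        y.append(trace[i + window + horizon - 1])
    return np.array(X), np.array(y)

# Synthetic 60 s breathing-like trace at 7.5 Hz (0.133 s per frame);
# 5 samples ahead ~ 0.65 s, emulating the 650 ms prediction horizon.
t = np.arange(0, 60, 1 / 7.5)
trace = 5.0 * np.sin(2 * np.pi * t / 4.0)  # 4 s breathing period, 5 mm amplitude

X, y = make_windows(trace)
n_train = len(X) // 2
coef, *_ = np.linalg.lstsq(X[:n_train], y[:n_train], rcond=None)  # fit linear predictor
pred = X[n_train:] @ coef
mae_mm = float(np.mean(np.abs(pred - y[n_train:])))
```

On a perfectly periodic trace a linear predictor is near-exact; the irregular cycle-to-cycle periodicity the authors cite is what pushes real-world MAE to the ~0.6 mm level.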
Altitude Registration of Limb-Scattered Radiation
NASA Technical Reports Server (NTRS)
Moy, Leslie; Bhartia, Pawan K.; Jaross, Glen; Loughman, Robert; Kramarova, Natalya; Chen, Zhong; Taha, Ghassan; Chen, Grace; Xu, Philippe
2017-01-01
One of the largest constraints to the retrieval of accurate ozone profiles from UV backscatter limb sounding sensors is altitude registration. Two methods, the Rayleigh scattering attitude sensing (RSAS) and the absolute radiance residual method (ARRM), are able to determine altitude registration to the accuracy necessary for long-term ozone monitoring. The methods compare model calculations of radiances to measured radiances and are independent of onboard tracking devices. RSAS determines absolute altitude errors, but, because the method is susceptible to aerosol interference, it is limited to latitudes and time periods with minimal aerosol contamination. ARRM, a new technique introduced in this paper, can be applied across all seasons and altitudes. However, it is only appropriate for relative altitude error estimates. The application of RSAS to Limb Profiler (LP) measurements from the Ozone Mapping and Profiler Suite (OMPS) on board the Suomi NPP (SNPP) satellite indicates tangent height (TH) errors greater than 1 km with an absolute accuracy of +/-200 m. Results using ARRM indicate an approximately 300 to 400 m intra-orbital TH change varying seasonally by +/-100 m, likely due to either errors in the spacecraft pointing or in the geopotential height (GPH) data that we use in our analysis. ARRM shows a change of approximately 200 m over 5 years with a relative accuracy (a long-term accuracy) of 100 m outside the polar regions.
The absolute radiometric calibration of the advanced very high resolution radiometer
NASA Technical Reports Server (NTRS)
Slater, P. N.; Teillet, P. M.; Ding, Y.
1988-01-01
The need for independent, redundant absolute radiometric calibration methods is discussed with reference to the Thematic Mapper. Uncertainty requirements for absolute calibration of between 0.5 and 4 percent are defined based on the accuracy of reflectance retrievals at an agricultural site. It is shown that even very approximate atmospheric corrections can reduce the error in reflectance retrieval to 0.02 over the reflectance range 0 to 0.4.
Accuracy of Robotic Radiosurgical Liver Treatment Throughout the Respiratory Cycle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winter, Jeff D.; Wong, Raimond; Swaminath, Anand
Purpose: To quantify random uncertainties in robotic radiosurgical treatment of liver lesions with real-time respiratory motion management. Methods and Materials: We conducted a retrospective analysis of 27 liver cancer patients treated with robotic radiosurgery over 118 fractions. The robotic radiosurgical system uses orthogonal x-ray images to determine internal target position and correlates this position with an external surrogate to provide robotic corrections of linear accelerator positioning. Verification and update of this internal–external correlation model was achieved using periodic x-ray images collected throughout treatment. To quantify random uncertainties in targeting, we analyzed logged tracking information and isolated x-ray images collected immediately before beam delivery. For translational correlation errors, we quantified the difference between correlation model–estimated target position and actual position determined by periodic x-ray imaging. To quantify prediction errors, we computed the mean absolute difference between the predicted coordinates and actual modeled position calculated 115 milliseconds later. We estimated overall random uncertainty by quadratically summing correlation, prediction, and end-to-end targeting errors. We also investigated relationships between tracking errors and motion amplitude using linear regression. Results: The 95th percentile absolute correlation errors in each direction were 2.1 mm left–right, 1.8 mm anterior–posterior, 3.3 mm cranio–caudal, and 3.9 mm 3-dimensional radial, whereas 95th percentile absolute radial prediction errors were 0.5 mm. Overall 95th percentile random uncertainty was 4 mm in the radial direction. Prediction errors were strongly correlated with modeled target amplitude (r=0.53-0.66, P<.001), whereas only weak correlations existed for correlation errors.
Conclusions: Study results demonstrate that model correlation errors are the primary random source of uncertainty in Cyberknife liver treatment and, unlike prediction errors, are not strongly correlated with target motion amplitude. Aggregate 3-dimensional radial position errors presented here suggest the target will be within 4 mm of the target volume for 95% of the beam delivery.
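The quadratic summation step described above is a one-liner. In this sketch the correlation (3.9 mm) and prediction (0.5 mm) values are the 95th percentile radial errors from the abstract, while the end-to-end targeting error is a hypothetical placeholder, since the abstract does not report its value:

```python
import math

def quadrature_sum(*components):
    """Combine independent random error components by summing in quadrature."""
    return math.sqrt(sum(c * c for c in components))

# 3.9 mm correlation and 0.5 mm prediction errors are from the abstract;
# 0.9 mm end-to-end error is an assumed illustrative value.
overall_mm = quadrature_sum(3.9, 0.5, 0.9)  # dominated by the correlation term
```

Because the terms add in quadrature, the 3.9 mm correlation error dominates: the result stays close to 4 mm almost regardless of the smaller components.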
Accounting for hardware imperfections in EIT image reconstruction algorithms.
Hartinger, Alzbeta E; Gagnon, Hervé; Guardo, Robert
2007-07-01
Electrical impedance tomography (EIT) is a non-invasive technique for imaging the conductivity distribution of a body section. Different types of EIT images can be reconstructed: absolute, time difference and frequency difference. Reconstruction algorithms are sensitive to many errors which translate into image artefacts. These errors generally result from incorrect modelling or inaccurate measurements. Every reconstruction algorithm incorporates a model of the physical set-up which must be as accurate as possible since any discrepancy with the actual set-up will cause image artefacts. Several methods have been proposed in the literature to improve the model realism, such as creating anatomically shaped meshes, adding a complete electrode model and tracking changes in electrode contact impedances and positions. Absolute and frequency difference reconstruction algorithms are particularly sensitive to measurement errors and generally assume that measurements are made with an ideal EIT system. Real EIT systems have hardware imperfections that cause measurement errors. These errors translate into image artefacts since the reconstruction algorithm cannot properly discriminate genuine measurement variations produced by the medium under study from those caused by hardware imperfections. We therefore propose a method for eliminating these artefacts by integrating a model of the system hardware imperfections into the reconstruction algorithms. The effectiveness of the method has been evaluated by reconstructing absolute, time difference and frequency difference images with and without the hardware model from data acquired on a resistor mesh phantom. Results have shown that artefacts are smaller for images reconstructed with the model, especially for frequency difference imaging.
Comparison of the biometric formulas used for applanation A-scan ultrasound biometry.
Özcura, Fatih; Aktaş, Serdar; Sağdık, Hacı Murat; Tetikoğlu, Mehmet
2016-10-01
The purpose of the study was to compare the accuracy of various biometric formulas for predicting postoperative refraction determined using applanation A-scan ultrasound. This retrospective comparative study included 485 eyes that underwent uneventful phacoemulsification with intraocular lens (IOL) implantation. Applanation A-scan ultrasound biometry and postoperative manifest refraction were obtained in all eyes. Biometric data were entered into each of the five IOL power calculation formulas: SRK-II, SRK/T, Holladay I, Hoffer Q, and Binkhorst II. All eyes were divided into three groups according to axial length: short (≤22.0 mm), average (22.0-25.0 mm), and long (≥25.0 mm) eyes. The postoperative spherical equivalent was calculated and compared with the predicted refractive error using each biometric formula. The results showed that all formulas had significantly lower mean absolute error (MAE) in comparison with the Binkhorst II formula (P < 0.01). The lowest MAE was obtained with the SRK-II for average (0.49 ± 0.40 D) and short (0.67 ± 0.54 D) eyes and the SRK/T for long (0.61 ± 0.50 D) eyes. The highest postoperative hyperopic shift was seen with the SRK-II for average (46.8 %), short (28.1 %), and long (48.4 %) eyes. The highest postoperative myopic shift was seen with the Holladay I for average (66.4 %) and long (71.0 %) eyes and the SRK/T for short eyes (80.6 %). In conclusion, the SRK-II formula produced the lowest MAE in average and short eyes and the SRK/T formula produced the lowest MAE in long eyes. The SRK-II has the highest postoperative hyperopic shift in all eyes. The highest postoperative myopic shift is with the Holladay I for average and long eyes and the SRK/T for short eyes.
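Of the formulas compared above, SRK-II is the simplest to state: the linear SRK regression P = A - 2.5L - 0.9K, with the A-constant stepped by axial length. The step thresholds below follow the commonly quoted published values, but they are reproduced from memory as a sketch and should be verified against the original SRK II publication before any real use:

```python
def srk2_iol_power(axial_length_mm, mean_keratometry_d, a_constant):
    """SRK II sketch: P = A1 - 2.5*L - 0.9*K, with A1 the axial-length-adjusted A-constant."""
    L, K = axial_length_mm, mean_keratometry_d
    if L < 20.0:
        adjustment = 3.0      # very short eyes
    elif L < 21.0:
        adjustment = 2.0
    elif L < 22.0:
        adjustment = 1.0
    elif L <= 24.5:
        adjustment = 0.0      # average eyes: plain SRK
    else:
        adjustment = -0.5     # long eyes
    return (a_constant + adjustment) - 2.5 * L - 0.9 * K
```

The upward adjustment for short eyes is the correction that made SRK II outperform plain SRK outside the average axial-length range.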
Modeling and forecasting of KLCI weekly return using WT-ANN integrated model
NASA Astrophysics Data System (ADS)
Liew, Wei-Thong; Liong, Choong-Yeun; Hussain, Saiful Izzuan; Isa, Zaidi
2013-04-01
The forecasting of weekly returns is one of the most challenging tasks in investment since the time series are volatile and non-stationary. In this study, an integrated model of the wavelet transform and an artificial neural network, WT-ANN, is studied for modeling and forecasting the KLCI weekly return. First, the WT is applied to decompose the weekly return time series in order to eliminate noise. Then, a mathematical model of the time series is constructed using the ANN. The performance of the suggested model is evaluated by root mean squared error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE). The results show that the WT-ANN model can be considered a feasible and powerful model for time series modeling and prediction.
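The three evaluation metrics named above (and reused by several other studies in this collection) are standard and easy to state exactly; a minimal plain-Python sketch, with no assumptions about the KLCI data itself:

```python
import math

def mae(actual, predicted):
    """Mean absolute error."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root mean squared error; penalizes large errors more heavily than MAE."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mape(actual, predicted):
    """Mean absolute percentage error, in percent.
    Undefined when an actual value is zero, which matters for return series."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)
```

MAPE's zero-denominator problem is worth flagging here, since weekly returns routinely cross zero; RMSE and MAE are safe regardless.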
[Application of wavelet neural networks model to forecast incidence of syphilis].
Zhou, Xian-Feng; Feng, Zi-Jian; Yang, Wei-Zhong; Li, Xiao-Song
2011-07-01
To apply the wavelet neural network (WNN) model to forecast the incidence of syphilis. A back propagation neural network (BPNN) and a WNN were developed based on the monthly incidence of syphilis in Sichuan province from 2004 to 2008. The accuracy of the forecasts was compared between the two models. In the training approximation, the mean absolute error (MAE), root mean square error (RMSE) and mean absolute percentage error (MAPE) were 0.0719, 0.0862 and 11.52% respectively for the WNN, and 0.0892, 0.1183 and 14.87% respectively for the BPNN. The three indexes for the generalization of the models were 0.0497, 0.0513 and 4.60% for the WNN, and 0.0816, 0.1119 and 7.25% for the BPNN. The WNN is the better model for short-term forecasting of syphilis.
Admire, Brittany; Lian, Bo; Yalkowsky, Samuel H
2015-01-01
The UPPER (Unified Physicochemical Property Estimation Relationships) model uses enthalpic and entropic parameters to estimate 20 biologically relevant properties of organic compounds. The model has been validated by Lian and Yalkowsky on a data set of 700 hydrocarbons. The aim of this work is to expand the UPPER model to estimate the boiling and melting points of polyhalogenated compounds. In this work, 19 new group descriptors are defined and used to predict the transition temperatures of an additional 1288 compounds. The boiling points of 808 and the melting points of 742 polyhalogenated compounds are predicted with average absolute errors of 13.56 K and 25.85 K, respectively. Copyright © 2014 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barna, B.A.; Ginn, R.F.
1985-05-01
In computer programs which perform shortcut calculations for multicomponent distillation, the Gilliland correlation continues to be used even though errors of up to 60% (compared with rigorous plate-to-plate calculations) were shown by Erbar and Maddox. Average absolute differences were approximately 30% for Gilliland's correlation versus 4% for the Erbar-Maddox method. The Gilliland correlation appears to remain in use because of the availability of an equation by Eduljee which facilitates the correlation's use in computer programs. A new equation is presented in this paper that represents the Erbar-Maddox correlation of trays with reflux for multicomponent distillation. At low reflux ratios, results show more trays are needed than would be estimated by Gilliland's method.
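The Eduljee equation mentioned above is the usual closed-form fit of the Gilliland correlation: Y = 0.75(1 - X^0.5668), with X = (R - Rmin)/(R + 1) and Y = (N - Nmin)/(N + 1). A sketch assuming that standard form (the paper's new Erbar-Maddox fit is not reproduced here):

```python
def gilliland_stages_eduljee(R, R_min, N_min):
    """Estimate theoretical stage count N from reflux ratio R via Eduljee's
    fit of the Gilliland correlation."""
    X = (R - R_min) / (R + 1.0)
    Y = 0.75 * (1.0 - X ** 0.5668)
    return (N_min + Y) / (1.0 - Y)  # invert Y = (N - N_min) / (N + 1)
```

At minimum reflux (R = Rmin, so X = 0) the fit gives Y = 0.75 and the stage count blows up toward its infinite-stage limit, which is the qualitative behavior the correlation is meant to capture.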
Estes, Lyndon; Chen, Peng; Debats, Stephanie; Evans, Tom; Ferreira, Stefanus; Kuemmerle, Tobias; Ragazzo, Gabrielle; Sheffield, Justin; Wolf, Adam; Wood, Eric; Caylor, Kelly
2018-01-01
Land cover maps increasingly underlie research into socioeconomic and environmental patterns and processes, including global change. It is known that map errors impact our understanding of these phenomena, but quantifying these impacts is difficult because many areas lack adequate reference data. We used a highly accurate, high-resolution map of South African cropland to assess (1) the magnitude of error in several current generation land cover maps, and (2) how these errors propagate in downstream studies. We first quantified pixel-wise errors in the cropland classes of four widely used land cover maps at resolutions ranging from 1 to 100 km, and then calculated errors in several representative "downstream" (map-based) analyses, including assessments of vegetative carbon stocks, evapotranspiration, crop production, and household food security. We also evaluated maps' spatial accuracy based on how precisely they could be used to locate specific landscape features. We found that cropland maps can have substantial biases and poor accuracy at all resolutions (e.g., at 1 km resolution, up to ∼45% underestimates of cropland (bias) and nearly 50% mean absolute error (MAE, describing accuracy); at 100 km, up to 15% underestimates and nearly 20% MAE). National-scale maps derived from higher-resolution imagery were most accurate, followed by multi-map fusion products. Constraining mapped values to match survey statistics may be effective at minimizing bias (provided the statistics are accurate). Errors in downstream analyses could be substantially amplified or muted, depending on the values ascribed to cropland-adjacent covers (e.g., with forest as adjacent cover, carbon map error was 200%-500% greater than in input cropland maps, but ∼40% less for sparse cover types). The average locational error was 6 km (600%). 
These findings provide deeper insight into the causes and potential consequences of land cover map error, and suggest several recommendations for land cover map users. © 2017 John Wiley & Sons Ltd.
Dual-wavelengths photoacoustic temperature measurement
NASA Astrophysics Data System (ADS)
Liao, Yu; Jian, Xiaohua; Dong, Fenglin; Cui, Yaoyao
2017-02-01
Thermal therapy is an approach applied in cancer treatment that heats local tissue to kill tumor cells, which requires high-sensitivity temperature monitoring during therapy. Current clinical methods for temperature measurement, such as fMRI, near-infrared, or ultrasound, still have limitations on penetration depth or sensitivity. Photoacoustic temperature sensing is a newly developed method with potential application in thermal therapy; it usually employs a single-wavelength laser for signal generation and temperature detection. Because of system disturbances, including laser intensity, ambient temperature and the complexity of the target, random measurement errors are unavoidable. To address these problems, we propose a new method of photoacoustic temperature sensing that uses two wavelengths to reduce random error and increase measurement accuracy. A brief theoretical analysis is first presented. In the experiment, a temperature measurement resolution of about 1° in the range of 23-48° in ex vivo pig blood was achieved, and an obvious decrease in absolute error was observed: 1.7° on average in the single-wavelength mode versus nearly 1° in the dual-wavelength mode. These results indicate that dual-wavelength photoacoustic temperature sensing can reduce random error and improve measurement accuracy, making it a more efficient method for photoacoustic temperature sensing in thermal therapy of tumors.
Estimating error statistics for Chambon-la-Forêt observatory definitive data
NASA Astrophysics Data System (ADS)
Lesur, Vincent; Heumez, Benoît; Telali, Abdelkader; Lalanne, Xavier; Soloviev, Anatoly
2017-08-01
We propose a new algorithm for calibrating definitive observatory data with the goal of providing users with estimates of the data error standard deviations (SDs). The algorithm has been implemented and tested using Chambon-la-Forêt observatory (CLF) data. The calibration process uses all available data. It is set as a large, weakly non-linear, inverse problem that ultimately provides estimates of baseline values in three orthogonal directions, together with their expected standard deviations. For this inverse problem, absolute data error statistics are estimated from two series of absolute measurements made within a day. Similarly, variometer data error statistics are derived by comparing variometer data time series between different pairs of instruments over a few years. The comparisons of these time series led us to use an autoregressive process of order 1 (AR1 process) as a prior for the baselines. Therefore the obtained baselines do not vary smoothly in time. They have relatively small SDs, well below 300 pT when absolute data are recorded twice a week - i.e. within the daily to weekly measures recommended by INTERMAGNET. The algorithm was tested against the process traditionally used to derive baselines at CLF observatory, suggesting that statistics are less favourable when this latter process is used. Finally, two sets of definitive data were calibrated using the new algorithm. Their comparison shows that the definitive data SDs are less than 400 pT and may be slightly overestimated by our process: an indication that more work is required to have proper estimates of absolute data error statistics. For magnetic field modelling, the results show that even on isolated sites like CLF observatory, there are very localised signals over a large span of temporal frequencies that can be as large as 1 nT. The SDs reported here encompass signals with wavelengths of a few hundred metres and periods of less than a day.
Predicting Real-Valued Protein Residue Fluctuation Using FlexPred.
Peterson, Lenna; Jamroz, Michal; Kolinski, Andrzej; Kihara, Daisuke
2017-01-01
The conventional view of a protein structure as static provides only a limited picture. There is increasing evidence that protein dynamics are often vital to protein function including interaction with partners such as other proteins, nucleic acids, and small molecules. Considering flexibility is also important in applications such as computational protein docking and protein design. While residue flexibility is partially indicated by experimental measures such as the B-factor from X-ray crystallography and ensemble fluctuation from nuclear magnetic resonance (NMR) spectroscopy as well as computational molecular dynamics (MD) simulation, these techniques are resource-intensive. In this chapter, we describe the web server and stand-alone version of FlexPred, which rapidly predicts absolute per-residue fluctuation from a three-dimensional protein structure. On a set of 592 nonredundant structures, comparing the fluctuations predicted by FlexPred to the observed fluctuations in MD simulations showed an average correlation coefficient of 0.669 and an average root mean square error of 1.07 Å. FlexPred is available at http://kiharalab.org/flexPred/ .
Yong, Yan Ling; Tan, Li Kuo; McLaughlin, Robert A; Chee, Kok Han; Liew, Yih Miin
2017-12-01
Intravascular optical coherence tomography (OCT) is an optical imaging modality commonly used in the assessment of coronary artery diseases during percutaneous coronary intervention. Manual segmentation to assess luminal stenosis from OCT pullback scans is challenging and time consuming. We propose a linear-regression convolutional neural network to automatically perform vessel lumen segmentation, parameterized in terms of radial distances from the catheter centroid in polar space. Benchmarked against gold-standard manual segmentation, our proposed algorithm achieves average locational accuracy of the vessel wall of 22 microns, and 0.985 and 0.970 in Dice coefficient and Jaccard similarity index, respectively. The average absolute error of luminal area estimation is 1.38%. The processing time is 40.6 ms per image, suggesting the potential to be incorporated into a clinical workflow and to provide quantitative assessment of vessel lumen in an intraoperative time frame. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
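The two overlap scores reported above are direct set-overlap computations on binary segmentation masks; a minimal sketch:

```python
def dice_and_jaccard(mask_a, mask_b):
    """Dice coefficient and Jaccard index for two flat binary masks (0/1 values)."""
    intersection = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    size_a, size_b = sum(mask_a), sum(mask_b)
    dice = 2.0 * intersection / (size_a + size_b)
    jaccard = intersection / (size_a + size_b - intersection)
    return dice, jaccard
```

The two scores are interchangeable via J = D / (2 - D), so reporting both, as the abstract does, is a readability convention rather than independent information.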
Ionescu, Crina-Maria; Geidl, Stanislav; Svobodová Vařeková, Radka; Koča, Jaroslav
2013-10-28
We focused on the parametrization and evaluation of empirical models for fast and accurate calculation of conformationally dependent atomic charges in proteins. The models were based on the electronegativity equalization method (EEM), and the parametrization procedure was tailored to proteins. We used large protein fragments as reference structures and fitted the EEM model parameters using atomic charges computed by three population analyses (Mulliken, Natural, iterative Hirshfeld), at the Hartree-Fock level with two basis sets (6-31G*, 6-31G**) and in two environments (gas phase, implicit solvation). We parametrized and successfully validated 24 EEM models. When tested on insulin and ubiquitin, all models reproduced quantum mechanics level charges well and were consistent with respect to population analysis and basis set. Specifically, the models showed on average a correlation of 0.961, RMSD 0.097 e, and average absolute error per atom 0.072 e. The EEM models can be used with the freely available EEM implementation EEM_SOLVER.
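EEM reduces conformationally dependent charge calculation to a single linear solve: each atom's effective electronegativity is equalized across the molecule subject to total-charge conservation. The sketch below uses made-up parameter values and a plain 1/r Coulomb kernel; the 24 fitted parameter sets from the paper are not reproduced:

```python
import numpy as np

def eem_charges(chi, eta, coords, total_charge=0.0):
    """Solve the EEM system: chi_i + 2*eta_i*q_i + sum_{j!=i} q_j/r_ij = chi_mol
    for all atoms i, with sum_i q_i = total_charge.
    Returns (charges, molecular electronegativity chi_mol)."""
    n = len(chi)
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    for i in range(n):
        for j in range(n):
            A[i, j] = 2.0 * eta[i] if i == j else 1.0 / np.linalg.norm(coords[i] - coords[j])
        A[i, n] = -1.0          # unknown molecular electronegativity chi_mol
        b[i] = -chi[i]
    A[n, :n] = 1.0              # charge conservation row
    b[n] = total_charge
    x = np.linalg.solve(A, b)
    return x[:n], x[n]

# Hypothetical two-atom example; chi and eta are illustrative, not fitted values.
q, chi_mol = eem_charges(chi=np.array([1.0, 2.0]),
                         eta=np.array([1.0, 1.0]),
                         coords=np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]]))
```

Because the charges follow from one small linear system per conformation, EEM is orders of magnitude cheaper than the quantum-mechanical population analyses it is fitted to, which is the point of the parametrization effort described above.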
Electrostatically Embedded Many-Body Expansion for Neutral and Charged Metalloenzyme Model Systems.
Kurbanov, Elbek K; Leverentz, Hannah R; Truhlar, Donald G; Amin, Elizabeth A
2012-01-10
The electrostatically embedded many-body (EE-MB) method has proven accurate for calculating cohesive and conformational energies in clusters, and it has recently been extended to obtain bond dissociation energies for metal-ligand bonds in positively charged inorganic coordination complexes. In the present paper, we present four key guidelines that maximize the accuracy and efficiency of EE-MB calculations for metal centers. Then, following these guidelines, we show that the EE-MB method can also perform well for bond dissociation energies in a variety of neutral and negatively charged inorganic coordination systems representing metalloenzyme active sites, including a model of the catalytic site of the zinc-bearing anthrax toxin lethal factor, a popular target for drug development. In particular, we find that the electrostatically embedded three-body (EE-3B) method is able to reproduce conventionally calculated bond-breaking energies in a series of pentacoordinate and hexacoordinate zinc-containing systems with an average absolute error (averaged over 25 cases) of only 0.98 kcal/mol.
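The many-body expansion underlying the EE-MB family approximates a system's energy from calculations on small fragment groups; truncating at second order looks as follows (EE-3B adds the analogous three-body corrections). The `energy` callable is a stand-in for whatever electronic-structure call computes a fragment group's embedded energy:

```python
from itertools import combinations

def two_body_expansion(fragments, energy):
    """Second-order many-body expansion:
    E ~ sum_i E(i) + sum_{i<j} [E(i,j) - E(i) - E(j)]."""
    e1 = [energy([f]) for f in fragments]
    total = sum(e1)
    for i, j in combinations(range(len(fragments)), 2):
        total += energy([fragments[i], fragments[j]]) - e1[i] - e1[j]
    return total
```

For a system whose interactions are purely pairwise the truncation is exact; three-body and higher terms are precisely what the EE-3B level adds back, at the cost of many more fragment-group energy calls.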
Testing Metal-Poor Stellar Models and Isochrones with HST Parallaxes of Metal-Poor Stars
NASA Astrophysics Data System (ADS)
Chaboyer, B.; McArthur, B. E.; O'Malley, E.; Benedict, G. F.; Feiden, G. A.; Harrison, T. E.; McWilliam, A.; Nelan, E. P.; Patterson, R. J.; Sarajedini, A.
2017-02-01
Hubble Space Telescope (HST) fine guidance sensor observations were used to obtain parallaxes of eight metal-poor ([Fe/H] < -1.4) stars. The parallaxes of these stars determined by the new Hipparcos reduction average 17% accuracy, in contrast to our new HST parallaxes, which average 1% accuracy and have errors on the individual parallaxes ranging from 85 to 144 μas. These parallax data were combined with HST Advanced Camera for Surveys photometry in the F606W and F814W filters to obtain the absolute magnitudes of the stars with an accuracy of 0.02-0.03 mag. Six of these stars are on the main sequence (MS) (with -2.7 < [Fe/H] < -1.8) and are suitable for testing metal-poor stellar evolution models and determining the distances to metal-poor globular clusters (GCs). Using the abundances obtained by O’Malley et al., we find that standard stellar models using the VandenBerg & Clem color transformation do a reasonable job of matching five of the MS stars, with HD 54639 ([Fe/H] = -2.5) being anomalous in its location in the color-magnitude diagram. Stellar models and isochrones were generated using a Monte Carlo analysis to take into account uncertainties in the models. Isochrones that fit the parallax stars were used to determine the distances and ages of nine GCs (with -2.4 ≤ [Fe/H] ≤ -1.9). Averaging together the age of all nine clusters led to an absolute age of the oldest, most metal-poor GCs of 12.7 ± 1.0 Gyr, where the quoted uncertainty takes into account the known uncertainties in the stellar models and isochrones, along with the uncertainty in the distance and reddening of the clusters.
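Converting parallaxes and apparent magnitudes to absolute magnitudes uses the standard distance-modulus relation, and the ~1% parallax accuracy quoted above propagates to roughly the stated 0.02-0.03 mag uncertainty. A sketch (extinction is neglected here; the study treats reddening separately):

```python
import math

def absolute_magnitude(apparent_mag, parallax_mas):
    """Absolute magnitude from a parallax, ignoring extinction:
    d [pc] = 1 / parallax [arcsec]; M = m - 5*log10(d) + 5."""
    distance_pc = 1000.0 / parallax_mas
    return apparent_mag - 5.0 * math.log10(distance_pc) + 5.0

def parallax_mag_error(parallax_mas, sigma_mas):
    """Magnitude error induced by the parallax error alone:
    sigma_M = (5 / ln 10) * (sigma_p / p)."""
    return 5.0 / math.log(10.0) * (sigma_mas / parallax_mas)
```

A 1% parallax error maps to about 0.022 mag, consistent with the 0.02-0.03 mag accuracy the abstract reports for the HST parallaxes.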
First Impressions of CARTOSAT-1
NASA Technical Reports Server (NTRS)
Lutes, James
2007-01-01
CARTOSAT-1 RPCs need special handling. Absolute accuracy of uncontrolled scenes is poor (biases > 300 m). There is a noticeable cross-track scale error (±3-4 m across a stereo pair). Most errors are either biases or linear in line/sample; these are easier to correct with ground control.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, B; Miften, M
2014-06-15
Purpose: Cone-beam CT (CBCT) projection images provide anatomical data in real-time over several respiratory cycles, forming a comprehensive picture of tumor movement. We developed a method using these projections to determine the trajectory and dose of highly mobile tumors during each fraction of treatment. Methods: CBCT images of a respiration phantom were acquired, where the trajectory mimicked a lung tumor with high amplitude (2.4 cm) and hysteresis. A template-matching algorithm was used to identify the location of a steel BB in each projection. A Gaussian probability density function for tumor position was calculated which best fit the observed trajectory of the BB in the imager geometry. Two methods to improve the accuracy of tumor track reconstruction were investigated: first, using respiratory phase information to refine the trajectory estimation, and second, using the Monte Carlo method to sample the estimated Gaussian tumor position distribution. 15 clinically-drawn abdominal/lung CTV volumes were used to evaluate the accuracy of the proposed methods by comparing the known and calculated BB trajectories. Results: With all methods, the mean position of the BB was determined with accuracy better than 0.1 mm, and root-mean-square (RMS) trajectory errors were lower than 5% of marker amplitude. Use of respiratory phase information decreased RMS errors by 30%, and decreased the fraction of large errors (>3 mm) by half. Mean dose to the clinical volumes was calculated with an average error of 0.1% and average absolute error of 0.3%. Dosimetric parameters D90/D95 were determined within 0.5% of maximum dose. Monte-Carlo sampling increased RMS trajectory and dosimetric errors slightly, but prevented over-estimation of dose in trajectories with high noise. Conclusions: Tumor trajectory and dose-of-the-day were accurately calculated using CBCT projections.
This technique provides a widely-available method to evaluate highly-mobile tumors, and could facilitate better strategies to mitigate or compensate for motion during SBRT.
NASA Astrophysics Data System (ADS)
Haldren, H. A.; Perey, D. F.; Yost, W. T.; Cramer, K. E.; Gupta, M. C.
2018-05-01
A digitally controlled instrument for conducting single-frequency and swept-frequency ultrasonic phase measurements has been developed based on a constant-frequency pulsed phase-locked-loop (CFPPLL) design. This instrument uses a pair of direct digital synthesizers to generate an ultrasonically transceived tone-burst and an internal reference wave for phase comparison. Real-time, constant-frequency phase tracking in an interrogated specimen is possible with a resolution of 0.000 38 rad (0.022°), and swept-frequency phase measurements can be obtained. Using phase measurements, an absolute thickness measurement in borosilicate glass is presented to show the instrument's efficacy, and these results are compared to conventional ultrasonic pulse-echo time-of-flight (ToF) measurements. The newly developed instrument predicted the thickness with a mean error of -0.04 μm and a standard deviation of error of 1.35 μm. Additionally, the CFPPLL instrument shows a lower measured phase error in the absence of changing temperature and couplant thickness than high-resolution cross-correlation ToF measurements at a similar signal-to-noise ratio. By showing higher accuracy and precision than conventional pulse-echo ToF measurements and lower phase errors than cross-correlation ToF measurements, the new digitally controlled CFPPLL instrument provides high-resolution absolute ultrasonic velocity or path-length measurements in solids or liquids, as well as tracking of material property changes with high sensitivity. The ability to obtain absolute phase measurements allows for many new applications beyond those possible with previous ultrasonic pulsed phase-locked loop instruments. In addition to improved resolution, swept-frequency phase measurements add useful capability in measuring properties of layered structures, such as bonded joints, or materials which exhibit non-linear frequency-dependent behavior, such as dispersive media.
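The swept-frequency capability suggests one way an ambiguity-free path length can be recovered: for a pulse-echo round trip the phase grows linearly with frequency, φ = 2πf·(2d/v), so the slope dφ/df gives d directly, with no 2π ambiguity. A sketch under that assumption, using synthetic, already-unwrapped phases and an illustrative sound velocity (not the instrument's actual processing):

```python
import math

def thickness_from_phase_slope(freqs_hz, phases_rad, velocity_m_s):
    """Path length from the least-squares slope of unwrapped phase vs.
    frequency. Pulse-echo round trip: phase = 2*pi*f*(2*d/v), so
    d = (v / (4*pi)) * dphi/df. Assumes phases are already unwrapped."""
    n = len(freqs_hz)
    fm = sum(freqs_hz) / n
    pm = sum(phases_rad) / n
    num = sum((f - fm) * (p - pm) for f, p in zip(freqs_hz, phases_rad))
    den = sum((f - fm) ** 2 for f in freqs_hz)
    slope = num / den                      # dphi/df [rad/Hz]
    return velocity_m_s * slope / (4.0 * math.pi)

# Synthetic sweep: 5 mm path, assumed velocity of 5500 m/s (illustrative).
v = 5500.0
d_true = 0.005
freqs = [4.0e6 + k * 1.0e4 for k in range(50)]
phases = [4.0 * math.pi * f * d_true / v for f in freqs]
d_est = thickness_from_phase_slope(freqs, phases, v)
```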
A Simple Model Predicting Individual Weight Change in Humans
Thomas, Diana M.; Martin, Corby K.; Heymsfield, Steven; Redman, Leanne M.; Schoeller, Dale A.; Levine, James A.
2010-01-01
Excessive weight in adults is a national concern, with over 2/3 of the US population deemed overweight. Because being overweight has been correlated with numerous diseases such as heart disease and type 2 diabetes, there is a need to understand mechanisms and predict outcomes of weight change and weight maintenance. A simple mathematical model that accurately predicts individual weight change offers opportunities to understand how individuals lose and gain weight and can be used to foster patient adherence to diets in clinical settings. For this purpose, we developed a one-dimensional differential equation model of weight change based on the energy balance equation, paired with an algebraic relationship between fat-free mass and fat mass derived from a large nationally representative sample of recently released data collected by the Centers for Disease Control. We validate the model's ability to predict individual participants' weight change by comparing model estimates against final weight data from two recent underfeeding studies and one overfeeding study. The mean absolute error and standard deviation between model predictions and observed measurements of final weights are less than 1.8 ± 1.3 kg for the underfeeding studies and 2.5 ± 1.6 kg for the overfeeding study. Comparison of the model predictions to other one-dimensional models of weight change shows improvement in mean absolute error, standard deviation of mean absolute error, and group mean predictions. The maximum absolute individual error decreased by approximately 60%, substantiating reliability in individual weight change predictions. The model provides a viable method for estimating individual weight change as a result of changes in intake and determining individual dietary adherence during weight change studies. PMID:24707319
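A generic one-dimensional energy-balance model of the kind described can be integrated with a simple Euler loop. This is a sketch, not the paper's fitted equations; the expenditure model and every parameter value here (22 kcal/kg/day resting rate, 1.5 activity factor, 7700 kcal/kg tissue energy density) are illustrative assumptions:

```python
def simulate_weight(weight_kg, intake_kcal_d, days, rmr_kcal_per_kg=22.0,
                    activity=1.5, kcal_per_kg=7700.0):
    """Euler integration of a one-dimensional energy-balance model:

        dW/dt = (intake - expenditure(W)) / energy density of tissue

    Expenditure is taken proportional to body weight (an assumption),
    so a sustained deficit decays toward a new steady-state weight.
    """
    w = weight_kg
    for _ in range(days):
        expenditure = activity * rmr_kcal_per_kg * w   # kcal/day
        w += (intake_kcal_d - expenditure) / kcal_per_kg
    return w
```

At intake equal to expenditure the weight is constant; cutting intake moves the subject toward the lower steady state intake/(activity·rmr), which is the qualitative behaviour such models use to assess dietary adherence.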
Estimating Rain Rates from Tipping-Bucket Rain Gauge Measurements
NASA Technical Reports Server (NTRS)
Wang, Jianxin; Fisher, Brad L.; Wolff, David B.
2007-01-01
This paper describes the cubic spline based operational system for the generation of the TRMM one-minute rain rate product 2A-56 from Tipping Bucket (TB) gauge measurements. Methodological issues associated with applying the cubic spline to the TB gauge rain rate estimation are closely examined. A simulated TB gauge from a Joss-Waldvogel (JW) disdrometer is employed to evaluate effects of time scales and rain event definitions on errors of the rain rate estimation. The comparison between rain rates measured from the JW disdrometer and those estimated from the simulated TB gauge shows good overall agreement; however, the TB gauge suffers sampling problems, resulting in errors in the rain rate estimation. These errors are very sensitive to the time scale of rain rates. One-minute rain rates suffer substantial errors, especially at low rain rates. When one minute rain rates are averaged to 4-7 minute or longer time scales, the errors dramatically reduce. The rain event duration is very sensitive to the event definition but the event rain total is rather insensitive, provided that the events with less than 1 millimeter rain totals are excluded. Estimated lower rain rates are sensitive to the event definition whereas the higher rates are not. The median relative absolute errors are about 22% and 32% for 1-minute TB rain rates higher and lower than 3 mm per hour, respectively. These errors decrease to 5% and 14% when TB rain rates are used at 7-minute scale. The radar reflectivity-rainrate (Ze-R) distributions drawn from large amount of 7-minute TB rain rates and radar reflectivity data are mostly insensitive to the event definition.
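The sampling problem described, that fine time scales resolve individual bucket tips rather than a smooth rain rate, is easy to reproduce with simple bucket-count differencing (a crude stand-in for the operational cubic spline; the 0.254 mm tip volume is the usual 0.01-inch bucket, an assumption here):

```python
def rain_rates(tip_times_min, tip_mm=0.254, window_min=1.0, total_min=60.0):
    """Rain-rate series (mm/h) from tipping-bucket tip times by counting
    tips in fixed windows (illustrative stand-in for the cubic spline)."""
    n_win = int(total_min / window_min)
    rates = [0.0] * n_win
    for t in tip_times_min:
        k = min(int(t / window_min), n_win - 1)
        rates[k] += tip_mm * (60.0 / window_min)  # mm in window -> mm/h
    return rates

# Steady ~3 mm/h rain: one 0.254 mm tip roughly every 5.08 minutes.
tips = [5.08 * k for k in range(1, 12)]
r1 = rain_rates(tips, window_min=1.0)   # 1-minute rates: spiky
r6 = rain_rates(tips, window_min=6.0)   # coarser windows: much smoother
```

The 1-minute series alternates between 0 and ~15 mm/h even though the rain is steady, matching the abstract's point that 1-minute TB rates suffer substantial sampling errors while multi-minute averages largely do not; both series conserve the same rain total.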
A nonlinear model of gold production in Malaysia
NASA Astrophysics Data System (ADS)
Ramli, Norashikin; Muda, Nora; Umor, Mohd Rozi
2014-06-01
Malaysia is a country rich in natural resources, and one of them is gold. Gold has become an important national commodity. This study was conducted to determine a model that fits the gold production in Malaysia well from 1995 to 2010. Five nonlinear models are presented in this study: the Logistic, Gompertz, Richards, Weibull, and Chapman-Richards models. These models are used to fit the cumulative gold production in Malaysia. The best model is then selected based on model performance. The performance of the fitted models is measured by the sum of squared errors, root mean squared error, coefficient of determination, mean relative error, mean absolute error, and mean absolute percentage error. This study found that the Weibull model significantly outperforms the other models. To confirm that Weibull is the best model, the latest data were fitted to the model. Once again, the Weibull model gives the lowest readings on all measures of error. We conclude that future gold production in Malaysia can be predicted with the Weibull model, and this could be an important finding for Malaysia in planning its economic activities.
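Ranking the candidate growth curves comes down to computing the error measures listed. A sketch of those metrics, plus one common Weibull-type growth parametrization (the functional form and parameter names are assumptions; the abstract does not state its exact parametrization):

```python
import math

def fit_metrics(obs, pred):
    """Goodness-of-fit measures of the kind used to rank the candidate
    growth curves: SSE, RMSE, MAE, MAPE (%), and R^2."""
    n = len(obs)
    resid = [o - p for o, p in zip(obs, pred)]
    sse = sum(r * r for r in resid)
    mean_o = sum(obs) / n
    sst = sum((o - mean_o) ** 2 for o in obs)
    return {
        "SSE": sse,
        "RMSE": math.sqrt(sse / n),
        "MAE": sum(abs(r) for r in resid) / n,
        "MAPE": 100.0 * sum(abs(r / o) for r, o in zip(resid, obs)) / n,
        "R2": 1.0 - sse / sst,
    }

def weibull_growth(t, A, B, k, m):
    """Weibull growth curve in one common parametrization (assumed):
    y(t) = A - B * exp(-k * t**m); y(0) = A - B, y -> A as t grows."""
    return A - B * math.exp(-k * t ** m)
```

The model with the lowest SSE/RMSE/MAE/MAPE across both the fitting data and the held-out latest data would be selected, which is the procedure the abstract describes for choosing Weibull.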
Loaiza-Echeverri, A M; Bergmann, J A G; Toral, F L B; Osorio, J P; Carmo, A S; Mendonça, L F; Moustacas, V S; Henry, M
2013-03-15
The objective was to use various nonlinear models to describe scrotal circumference (SC) growth in Guzerat bulls on three farms in the state of Minas Gerais, Brazil. The nonlinear models were: Brody, Logistic, Gompertz, Richards, Von Bertalanffy, and Tanaka, where parameter A is the estimated testis size at maturity, B is the integration constant, k is a maturating index and, for the Richards and Tanaka models, m determines the inflection point. In Tanaka, A is an indefinite size of the testis, and B and k adjust the shape and inclination of the curve. A total of 7410 SC records were obtained every 3 months from 1034 bulls with ages varying between 2 and 69 months (<240 days of age = 159; 241-365 days = 451; 366-550 days = 1443; 551-730 days = 1705; and >731 days = 3652 SC measurements). Goodness of fit was evaluated by coefficients of determination (R²), error sum of squares, average prediction error (APE), and mean absolute deviation. The Richards model did not reach the convergence criterion. The R² were similar for all models (0.68-0.69). The error sum of squares was lowest for the Tanaka model. All models fit the SC data poorly in the early and late periods. Logistic was the model which best estimated SC in the early phase (based on APE and mean absolute deviation). The Tanaka and Logistic models had the lowest APE between 300 and 1600 days of age. The Logistic model was chosen for analysis of the environmental influence on parameters A and k. Based on absolute growth rate, SC increased from 0.019 cm/d, peaking at 0.025 cm/d between 318 and 435 days of age. Farm, year, and season of birth significantly affected size of adult SC and SC growth rate. An increase in SC adult size (parameter A) was accompanied by decreased SC growth rate (parameter k). In conclusion, SC growth in Guzerat bulls was characterized by an accelerated growth phase, followed by decreased growth; this was best represented by the Logistic model.
The inflection point occurred at approximately 376 days of age (mean SC of 17.9 cm). We inferred that early selection of testicular size might result in smaller testes at maturity. Copyright © 2013 Elsevier Inc. All rights reserved.
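The inflection behaviour quoted above (peak absolute growth rate near one year, mean SC at the inflection equal to half the asymptote) follows directly from the standard logistic parametrization y(t) = A / (1 + B·e^(−kt)): the inflection falls at t* = ln(B)/k, where y = A/2 and dy/dt peaks at A·k/4. A sketch with illustrative parameter values, not the fitted Guzerat estimates:

```python
import math

def logistic_sc(t, A, B, k):
    """Logistic growth curve y(t) = A / (1 + B*exp(-k*t)) in one common
    parametrization (assumed; parameter roles follow the abstract)."""
    return A / (1.0 + B * math.exp(-k * t))

def logistic_inflection(A, B, k):
    """Inflection point of the logistic: t* = ln(B)/k, where the curve
    reaches A/2 and the absolute growth rate peaks at A*k/4."""
    t_star = math.log(B) / k
    return t_star, A / 2.0, A * k / 4.0

# Illustrative values chosen so that A/2 matches the 17.9 cm quoted above.
t_star, half_sc, peak_rate = logistic_inflection(35.8, 50.0, 0.01)
```

With these toy values the inflection lands near 391 days, the same order as the ~376 days reported, and the curve value there is exactly A/2.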
Absolute Parameters for the F-type Eclipsing Binary BW Aquarii
NASA Astrophysics Data System (ADS)
Maxted, P. F. L.
2018-05-01
BW Aqr is a bright eclipsing binary star containing a pair of F7V stars. The absolute parameters of this binary (masses, radii, etc.) are known to good precision so they are often used to test stellar models, particularly in studies of convective overshooting. ... Maxted & Hutcheon (2018) analysed the Kepler K2 data for BW Aqr and noted that it shows variability between the eclipses that may be caused by tidally induced pulsations. ... Table 1 shows the absolute parameters for BW Aqr derived from an improved analysis of the Kepler K2 light curve plus the RV measurements from both Imbert (1979) and Lester & Gies (2018). ... The values in Table 1 with their robust error estimates from the standard deviation of the mean are consistent with the values and errors from Maxted & Hutcheon (2018) based on the PPD calculated using emcee for a fit to the entire K2 light curve.
Absolute measurement of the extreme UV solar flux
NASA Technical Reports Server (NTRS)
Carlson, R. W.; Ogawa, H. S.; Judge, D. L.; Phillips, E.
1984-01-01
A windowless rare-gas ionization chamber has been developed to measure the absolute value of the solar extreme UV flux in the 50-575-Å region. Successful results were obtained on a solar-pointing sounding rocket. The ionization chamber, operated in total absorption, is an inherently stable absolute detector of ionizing UV radiation and was designed to be independent of effects from secondary ionization and gas effusion. The net error of the measurement is ±7.3 percent, which is primarily due to residual outgassing in the instrument; other errors, such as multiple ionization, photoelectron collection, and extrapolation to zero atmospheric optical depth, are small in comparison. For the day of the flight, Aug. 10, 1982, the solar irradiance (50-575 Å), normalized to unit solar distance, was found to be (5.71 ± 0.42) × 10¹⁰ photons cm⁻² s⁻¹.
Wide-field absolute transverse blood flow velocity mapping in vessel centerline
NASA Astrophysics Data System (ADS)
Wu, Nanshou; Wang, Lei; Zhu, Bifeng; Guan, Caizhong; Wang, Mingyi; Han, Dingan; Tan, Haishu; Zeng, Yaguang
2018-02-01
We propose a wide-field absolute transverse blood flow velocity measurement method in vessel centerline based on absorption intensity fluctuation modulation effect. The difference between the light absorption capacities of red blood cells and background tissue under low-coherence illumination is utilized to realize the instantaneous and average wide-field optical angiography images. The absolute fuzzy connection algorithm is used for vessel centerline extraction from the average wide-field optical angiography. The absolute transverse velocity in the vessel centerline is then measured by a cross-correlation analysis according to instantaneous modulation depth signal. The proposed method promises to contribute to the treatment of diseases, such as those related to anemia or thrombosis.
Altitude registration of limb-scattered radiation
NASA Astrophysics Data System (ADS)
Moy, Leslie; Bhartia, Pawan K.; Jaross, Glen; Loughman, Robert; Kramarova, Natalya; Chen, Zhong; Taha, Ghassan; Chen, Grace; Xu, Philippe
2017-01-01
One of the largest constraints to the retrieval of accurate ozone profiles from UV backscatter limb sounding sensors is altitude registration. Two methods, the Rayleigh scattering attitude sensing (RSAS) and absolute radiance residual method (ARRM), are able to determine altitude registration to the accuracy necessary for long-term ozone monitoring. The methods compare model calculations of radiances to measured radiances and are independent of onboard tracking devices. RSAS determines absolute altitude errors, but, because the method is susceptible to aerosol interference, it is limited to latitudes and time periods with minimal aerosol contamination. ARRM, a new technique introduced in this paper, can be applied across all seasons and altitudes. However, it is only appropriate for relative altitude error estimates. The application of RSAS to Limb Profiler (LP) measurements from the Ozone Mapping and Profiler Suite (OMPS) on board the Suomi NPP (SNPP) satellite indicates tangent height (TH) errors greater than 1 km with an absolute accuracy of ±200 m. Results using ARRM indicate a ~300 to 400 m intra-orbital TH change varying seasonally ±100 m, likely due to either errors in the spacecraft pointing or in the geopotential height (GPH) data that we use in our analysis. ARRM shows a change of ~200 m over ~5 years with a relative accuracy (a long-term accuracy) of ±100 m outside the polar regions.
Assessing the accuracy of ANFIS, EEMD-GRNN, PCR, and MLR models in predicting PM2.5
NASA Astrophysics Data System (ADS)
Ausati, Shadi; Amanollahi, Jamil
2016-10-01
Since Sanandaj is considered one of the polluted cities of Iran, prediction of any type of pollution, especially of suspended particles of PM2.5, which are the cause of many diseases, could contribute to the health of society through timely announcements prior to increases of PM2.5. In order to predict the PM2.5 concentration in the Sanandaj air, hybrid models consisting of an ensemble empirical mode decomposition and general regression neural network (EEMD-GRNN), an Adaptive Neuro-Fuzzy Inference System (ANFIS), principal component regression (PCR), and a linear model, multiple linear regression (MLR), were used. In these models the data of suspended particles of PM2.5 were the dependent variable, and the data related to air quality, including PM2.5, PM10, SO2, NO2, CO, and O3, and meteorological data, including average minimum temperature (Min T), average maximum temperature (Max T), average atmospheric pressure (AP), daily total precipitation (TP), daily relative humidity (RH), and daily wind speed (WS), for the year 2014 in Sanandaj were the independent variables. Among the models used, the EEMD-GRNN model, with values of R2 = 0.90, root mean square error (RMSE) = 4.9218, and mean absolute error (MAE) = 3.4644 in the training phase, and values of R2 = 0.79, RMSE = 5.0324, and MAE = 3.2565 in the testing phase, exhibited the best performance in predicting this phenomenon. It can be concluded that hybrid models give more accurate results in predicting PM2.5 concentration than the linear model.
44 CFR 67.6 - Basis of appeal.
Code of Federal Regulations, 2010 CFR
2010-10-01
... absolute (except where mathematical or measurement error or changed physical conditions can be demonstrated... a mathematical or measurement error or changed physical conditions, then the specific source of the... registered professional engineer or licensed land surveyor, of the new data necessary for FEMA to conduct a...
An, Zhao; Wen-Xin, Zhang; Zhong, Yao; Yu-Kuan, Ma; Qing, Liu; Hou-Lang, Duan; Yi-di, Shang
2016-06-29
The objective was to optimize and simplify the survey method for Oncomelania hupensis snails in marshland endemic regions of schistosomiasis, and to increase the precision, efficiency, and economy of the snail survey. A 50 m × 50 m experimental field was selected in the Chayegang marshland near Henghu farm in the Poyang Lake region, and a whole-coverage method was adopted to survey the snails. The simple random sampling, systematic sampling, and stratified random sampling methods were applied to calculate the minimum sample size, relative sampling error, and absolute sampling error. The minimum sample sizes of the simple random sampling, systematic sampling, and stratified random sampling methods were 300, 300, and 225, respectively. The relative sampling errors of the three methods were all less than 15%. The absolute sampling errors were 0.221 7, 0.302 4, and 0.047 8, respectively. Spatial stratified sampling with altitude as the stratum variable is an efficient approach, with lower cost and higher precision, for the snail survey.
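For the simple-random-sampling arm, a minimum sample size can be obtained from the textbook proportion-estimation formula with finite-population correction. The survey's exact design inputs (expected proportion, tolerance, confidence level) are not given above, so the defaults here are assumptions:

```python
import math

def min_sample_size(N, p=0.5, d=0.05, z=1.96):
    """Minimum sample size for estimating a proportion under simple
    random sampling, with finite-population correction.

    N : population size (e.g., number of survey quadrats)
    p : expected proportion (0.5 is the conservative choice)
    d : absolute error tolerance
    z : normal quantile for the confidence level (1.96 ~ 95%)
    """
    n0 = z * z * p * (1.0 - p) / (d * d)           # infinite-population size
    return math.ceil(n0 / (1.0 + (n0 - 1.0) / N))  # finite-population correction
```

Tightening the tolerance d raises the required sample size, and a finite population lowers it, which is the trade-off the three sampling designs above are balancing.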
Wang, Yanjun; Li, Haoyu; Liu, Xingbin; Zhang, Yuhui; Xie, Ronghua; Huang, Chunhui; Hu, Jinhai; Deng, Gang
2016-10-14
First, the measuring principle, the weight function, and the magnetic field of the novel downhole inserted electromagnetic flowmeter (EMF) are described. Second, the basic design of the EMF is described. Third, dynamic experiments with two EMFs in oil-water two-phase flow are carried out. The experimental errors are analyzed in detail. The experimental results show that the maximum absolute value of the full-scale errors is better than 5% when the total flowrate is 5-60 m³/d and the water-cut is higher than 60%, and better than 7% when the total flowrate is 2-60 m³/d and the water-cut is higher than 70%. Finally, onsite experiments in high-water-cut oil-producing wells are conducted, and the possible reasons for the errors in the onsite experiments are analyzed. It is found that the EMF can provide an effective technology for measuring downhole oil-water two-phase flow.
NASA Technical Reports Server (NTRS)
Li, Rongsheng (Inventor); Kurland, Jeffrey A. (Inventor); Dawson, Alec M. (Inventor); Wu, Yeong-Wei A. (Inventor); Uetrecht, David S. (Inventor)
2004-01-01
Methods and structures are provided that enhance attitude control during gyroscope substitutions by insuring that a spacecraft's attitude control system does not drive its absolute-attitude sensors out of their capture ranges. In a method embodiment, an operational process-noise covariance Q of a Kalman filter is temporarily replaced with a substantially greater interim process-noise covariance Q. This replacement increases the weight given to the most recent attitude measurements and hastens the reduction of attitude errors and gyroscope bias errors. The error effect of the substituted gyroscopes is reduced and the absolute-attitude sensors are not driven out of their capture range. In another method embodiment, this replacement is preceded by the temporary replacement of an operational measurement-noise variance R with a substantially larger interim measurement-noise variance R to reduce transients during the gyroscope substitutions.
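The mechanism described, temporarily inflating the process-noise covariance Q so that recent measurements carry more weight, can be seen in a scalar Kalman filter: a larger Q keeps the predicted covariance high, which raises the steady-state gain. A toy sketch, not the spacecraft attitude filter:

```python
def kalman_gain_sequence(P0, Q, R, steps):
    """Scalar Kalman filter tracking a constant state with a random-walk
    process model. Predict: P <- P + Q; update: K = P/(P+R), P <- (1-K)*P.
    A larger process-noise covariance Q keeps P high and hence the gain
    high, weighting the most recent measurements more heavily."""
    P = P0
    gains = []
    for _ in range(steps):
        P = P + Q                 # predict: process noise inflates covariance
        K = P / (P + R)           # update: Kalman gain
        P = (1.0 - K) * P
        gains.append(K)
    return gains

g_operational = kalman_gain_sequence(1.0, 0.01, 1.0, 50)[-1]  # small Q
g_interim = kalman_gain_sequence(1.0, 1.0, 1.0, 50)[-1]       # inflated Q
```

With the inflated Q the steady gain sits near 0.6 rather than below 0.1, so attitude and bias errors shrink in a few measurements, which is the hastened convergence the abstract exploits during gyroscope substitution.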
2016-01-01
Modeling and prediction of polar organic chemical integrative sampler (POCIS) sampling rates (Rs) for 73 compounds using artificial neural networks (ANNs) is presented for the first time. Two models were constructed: the first was developed ab initio using a genetic algorithm (GSD-model) to shortlist 24 descriptors covering constitutional, topological, geometrical and physicochemical properties and the second model was adapted for Rs prediction from a previous chromatographic retention model (RTD-model). Mechanistic evaluation of descriptors showed that models did not require comprehensive a priori information to predict Rs. Average predicted errors for the verification and blind test sets were 0.03 ± 0.02 L d⁻¹ (RTD-model) and 0.03 ± 0.03 L d⁻¹ (GSD-model) relative to experimentally determined Rs. Prediction variability in replicated models was the same or less than for measured Rs. Networks were externally validated using a measured Rs data set of six benzodiazepines. The RTD-model performed best in comparison to the GSD-model for these compounds (average absolute errors of 0.0145 ± 0.008 L d⁻¹ and 0.0437 ± 0.02 L d⁻¹, respectively). Improvements to generalizability of modeling approaches will be reliant on the need for standardized guidelines for Rs measurement. The use of in silico tools for Rs determination represents a more economical approach than laboratory calibrations. PMID:27363449
Jacobian-Based Iterative Method for Magnetic Localization in Robotic Capsule Endoscopy
Di Natali, Christian; Beccani, Marco; Simaan, Nabil; Valdastri, Pietro
2016-01-01
The purpose of this study is to validate a Jacobian-based iterative method for real-time localization of magnetically controlled endoscopic capsules. The proposed approach applies finite-element solutions to the magnetic field problem and least-squares interpolations to obtain closed-form and fast estimates of the magnetic field. By defining a closed-form expression for the Jacobian of the magnetic field relative to changes in the capsule pose, we are able to obtain an iterative localization at a faster computational time when compared with prior works, without suffering from the inaccuracies stemming from dipole assumptions. This new algorithm can be used in conjunction with an absolute localization technique that provides initialization values at a slower refresh rate. The proposed approach was assessed via simulation and experimental trials, adopting a wireless capsule equipped with a permanent magnet, six magnetic field sensors, and an inertial measurement unit. The overall refresh rate, including sensor data acquisition and wireless communication was 7 ms, thus enabling closed-loop control strategies for magnetic manipulation running faster than 100 Hz. The average localization error, expressed in cylindrical coordinates was below 7 mm in both the radial and axial components and 5° in the azimuthal component. The average error for the capsule orientation angles, obtained by fusing gyroscope and inclinometer measurements, was below 5°. PMID:27087799
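The Jacobian-based iteration is a Newton-type root solve: update the pose estimate by the Jacobian-inverse times the field residual. A 2-D toy sketch, with an analytic stand-in field in place of the paper's finite-element/least-squares field model:

```python
def localize(field, jacobian, b_meas, x0, iters=20):
    """Newton-type iterative localization:
        x <- x + J(x)^-1 * (b_meas - b(x))
    for a 2-D pose and a 2-D field (toy version of the Jacobian-based
    update; the paper's field is a finite-element/least-squares fit)."""
    x, y = x0
    for _ in range(iters):
        bx, by = field(x, y)
        rx, ry = b_meas[0] - bx, b_meas[1] - by
        (a, b), (c, d) = jacobian(x, y)
        det = a * d - b * c
        x += (d * rx - b * ry) / det    # 2x2 solve of J * delta = residual
        y += (-c * rx + a * ry) / det
    return x, y

# Toy near-identity "field" and its analytic Jacobian (illustrative only).
field = lambda x, y: (x + 0.1 * y * y, y + 0.1 * x * x)
jac = lambda x, y: ((1.0, 0.2 * y), (0.2 * x, 1.0))
est = localize(field, jac, field(0.8, -0.5), (0.0, 0.0))
```

Because each step is a small linear solve against a precomputed field model rather than a fresh global optimization, the update is cheap, which is the source of the fast refresh rate quoted above.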
Optimal quantum error correcting codes from absolutely maximally entangled states
NASA Astrophysics Data System (ADS)
Raissi, Zahra; Gogolin, Christian; Riera, Arnau; Acín, Antonio
2018-02-01
Absolutely maximally entangled (AME) states are pure multi-partite generalizations of the bipartite maximally entangled states with the property that all reduced states of at most half the system size are in the maximally mixed state. AME states are of interest for multipartite teleportation and quantum secret sharing and have recently found new applications in the context of high-energy physics in toy models realizing the AdS/CFT-correspondence. We work out in detail the connection between AME states of minimal support and classical maximum distance separable (MDS) error correcting codes and, in particular, provide explicit closed form expressions for AME states of n parties with local dimension \
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Heng, E-mail: hengli@mdanderson.org; Zhu, X. Ronald; Zhang, Xiaodong
Purpose: To develop and validate a novel delivery strategy for reducing the respiratory motion–induced dose uncertainty of spot-scanning proton therapy. Methods and Materials: The spot delivery sequence was optimized to reduce dose uncertainty. The effectiveness of the delivery sequence optimization was evaluated using measurements and patient simulation. One hundred ninety-one 2-dimensional measurements using different delivery sequences of a single-layer uniform pattern were obtained with a detector array on a 1-dimensional moving platform. Intensity modulated proton therapy plans were generated for 10 lung cancer patients, and dose uncertainties for different delivery sequences were evaluated by simulation. Results: Without delivery sequence optimization, the maximum absolute dose error can be up to 97.2% in a single measurement, whereas the optimized delivery sequence results in a maximum absolute dose error of ≤11.8%. In patient simulation, the optimized delivery sequence reduces the mean of fractional maximum absolute dose error compared with the regular delivery sequence by 3.3% to 10.6% (32.5-68.0% relative reduction) for different patients. Conclusions: Optimizing the delivery sequence can reduce dose uncertainty due to respiratory motion in spot-scanning proton therapy, assuming the 4-dimensional CT is a true representation of the patients' breathing patterns.
Quantitative endoscopy: initial accuracy measurements.
Truitt, T O; Adelman, R A; Kelly, D H; Willging, J P
2000-02-01
The geometric optics of an endoscope can be used to determine the absolute size of an object in an endoscopic field without knowing the actual distance from the object. This study explores the accuracy of a technique that estimates absolute object size from endoscopic images. Quantitative endoscopy involves calibrating a rigid endoscope to produce size estimates from 2 images taken with a known traveled distance between the images. The heights of 12 samples, ranging in size from 0.78 to 11.80 mm, were estimated with this calibrated endoscope. Backup distances of 5 mm and 10 mm were used for comparison. The mean percent error for all estimated measurements when compared with the actual object sizes was 1.12%. The mean errors for 5-mm and 10-mm backup distances were 0.76% and 1.65%, respectively. The mean errors for objects <2 mm and > or =2 mm were 0.94% and 1.18%, respectively. Quantitative endoscopy estimates endoscopic image size to within 5% of the actual object size. This method remains promising for quantitatively evaluating object size from endoscopic images. It does not require knowledge of the absolute distance of the endoscope from the object, rather, only the distance traveled by the endoscope between images.
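The two-image geometry can be made concrete with a similar-triangles model: if image height scales as y = f·H/z, then two views separated by a known backup distance determine the object height H without knowing either absolute distance. A sketch; f is a per-endoscope calibration constant, assumed known from the calibration step the abstract describes:

```python
def object_height(y_near, y_far, backup, f=1.0):
    """Object height from two image heights with a known backup distance
    between views. With image height y = f*H/z and z_far = z_near + backup:
        backup = f*H*(1/y_far - 1/y_near)  =>  H = backup / (f*(1/y_far - 1/y_near))
    y_near is the (larger) image height at the closer position."""
    return backup / (f * (1.0 / y_far - 1.0 / y_near))

def object_distance(y_near, height, f=1.0):
    """Once H is known, the absolute distance follows: z = f*H/y."""
    return f * height / y_near
```

Note that the absolute distances drop out of the height estimate, which is exactly the property the abstract highlights: only the distance traveled between the two images is needed.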
Hassett, Michael J; Uno, Hajime; Cronin, Angel M; Carroll, Nikki M; Hornbrook, Mark C; Ritzwoller, Debra
2017-12-01
Recurrent cancer is common, costly, and lethal, yet we know little about it in community-based populations. Electronic health records and tumor registries contain vast amounts of data regarding community-based patients, but usually lack recurrence status. Existing algorithms that use structured data to detect recurrence have limitations. We developed algorithms to detect the presence and timing of recurrence after definitive therapy for stages I-III lung and colorectal cancer using 2 data sources that contain a widely available type of structured data (claims or electronic health record encounters) linked to gold-standard recurrence status: Medicare claims linked to the Cancer Care Outcomes Research and Surveillance study, and the Cancer Research Network Virtual Data Warehouse linked to registry data. Twelve potential indicators of recurrence were used to develop separate models for each cancer in each data source. Detection models maximized area under the ROC curve (AUC); timing models minimized average absolute error. Algorithms were compared by cancer type/data source, and contrasted with an existing binary detection rule. Detection model AUCs (>0.92) exceeded existing prediction rules. Timing models yielded absolute prediction errors that were small relative to follow-up time (<15%). Similar covariates were included in all detection and timing algorithms, though differences by cancer type and dataset challenged efforts to create 1 common algorithm for all scenarios. Valid and reliable detection of recurrence using big data is feasible. These tools will enable extensive, novel research on quality, effectiveness, and outcomes for lung and colorectal cancer patients and those who develop recurrence.
Wu, J; Awate, S P; Licht, D J; Clouchoux, C; du Plessis, A J; Avants, B B; Vossough, A; Gee, J C; Limperopoulos, C
2015-07-01
Traditional methods of dating a pregnancy based on history or sonographic assessment have a large variation in the third trimester. We aimed to assess the ability of various quantitative measures of brain cortical folding on MR imaging in determining fetal gestational age in the third trimester. We evaluated 8 different quantitative cortical folding measures to predict gestational age in 33 healthy fetuses by using T2-weighted fetal MR imaging. We compared the accuracy of the prediction of gestational age by these cortical folding measures with the accuracy of prediction by brain volume measurement and by a previously reported semiquantitative visual scale of brain maturity. Regression models were constructed, and measurement biases and variances were determined via a cross-validation procedure. The cortical folding measures are accurate in the estimation and prediction of gestational age (mean of the absolute error, 0.43 ± 0.45 weeks) and perform better than (P = .024) brain volume (mean of the absolute error, 0.72 ± 0.61 weeks) or sonography measures (SDs approximately 1.5 weeks, as reported in the literature). Prediction accuracy is comparable with that of the semiquantitative visual assessment score (mean, 0.57 ± 0.41 weeks). Quantitative cortical folding measures such as global average curvedness can be an accurate and reliable estimator of gestational age and brain maturity for healthy fetuses in the third trimester and have the potential to be an indicator of brain-growth delays for at-risk fetuses and preterm neonates. © 2015 by American Journal of Neuroradiology.
Automated time series forecasting for biosurveillance.
Burkom, Howard S; Murphy, Sean Patrick; Shmueli, Galit
2007-09-30
For robust detection performance, traditional control chart monitoring for biosurveillance is based on input data free of trends, day-of-week effects, and other systematic behaviour. Time series forecasting methods may be used to remove this behaviour by subtracting forecasts from observations to form residuals for algorithmic input. We describe three forecast methods and compare their predictive accuracy on each of 16 authentic syndromic data streams. The methods are (1) a non-adaptive regression model using a long historical baseline, (2) an adaptive regression model with a shorter, sliding baseline, and (3) the Holt-Winters method for generalized exponential smoothing. Criteria for comparing the forecasts were the root-mean-square error, the median absolute per cent error (MedAPE), and the median absolute deviation. The median-based criteria showed best overall performance for the Holt-Winters method. The MedAPE measures over the 16 test series averaged 16.5, 11.6, and 9.7 for the non-adaptive regression, adaptive regression, and Holt-Winters methods, respectively. The non-adaptive regression forecasts were degraded by changes in the data behaviour in the fixed baseline period used to compute model coefficients. The mean-based criterion was less conclusive because of the effects of poor forecasts on a small number of calendar holidays. The Holt-Winters method was also most effective at removing serial autocorrelation, with most 1-day-lag autocorrelation coefficients below 0.15. The forecast methods were compared without tuning them to the behaviour of individual series. We achieved improved predictions with such tuning of the Holt-Winters method, but practical use of such improvements for routine surveillance will require reliable data classification methods.
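The median-based comparison criteria described above can be sketched as follows; `medape` and `lag1_autocorr` are illustrative helpers for the median absolute percent error and the 1-day-lag residual autocorrelation, not the study's code.

```python
import statistics

def medape(actual, forecast):
    """Median absolute percent error (MedAPE) of a forecast series."""
    return statistics.median(
        abs(a - f) / a * 100.0 for a, f in zip(actual, forecast)
    )

def lag1_autocorr(residuals):
    """1-day-lag autocorrelation of forecast residuals (observation - forecast).
    Values near zero indicate the forecaster has removed serial correlation."""
    n = len(residuals)
    mean = sum(residuals) / n
    var = sum((r - mean) ** 2 for r in residuals)
    cov = sum((residuals[i] - mean) * (residuals[i + 1] - mean)
              for i in range(n - 1))
    return cov / var

# Toy daily syndromic counts and forecasts (made-up numbers).
actual = [100.0, 120.0, 90.0, 110.0]
forecast = [90.0, 126.0, 99.0, 110.0]
print(round(medape(actual, forecast), 1))  # median of [10, 5, 10, 0] -> 7.5
```

Because MedAPE takes a median, a few badly forecast calendar holidays do not dominate the score, which is why the abstract finds the median-based criteria more conclusive than the mean-based one.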
NASA Astrophysics Data System (ADS)
Androsov, Alexey; Nerger, Lars; Schnur, Reiner; Schröter, Jens; Albertella, Alberta; Rummel, Reiner; Savcenko, Roman; Bosch, Wolfgang; Skachko, Sergey; Danilov, Sergey
2018-05-01
General ocean circulation models are not perfect. Forced with observed atmospheric fluxes they gradually drift away from measured distributions of temperature and salinity. We suggest data assimilation of absolute dynamical ocean topography (DOT) observed from space geodetic missions as an option to reduce these differences. Sea surface information of DOT is transferred into the deep ocean by defining the analysed ocean state as a weighted average of an ensemble of fully consistent model solutions using an error-subspace ensemble Kalman filter technique. Success of the technique is demonstrated by assimilation into a global configuration of the ocean circulation model FESOM over 1 year. The dynamic ocean topography data are obtained from a combination of multi-satellite altimetry and geoid measurements. The assimilation result is assessed using independent temperature and salinity analyses derived from profiling buoys of the Argo float data set. The largest impact of the assimilation occurs at the first few analysis steps where both the model ocean topography and the steric height (i.e. temperature and salinity) are improved. The continued data assimilation over 1 year further improves the model state gradually. Deep ocean fields quickly adjust in a sustained manner: A model forecast initialized from the model state estimated by the data assimilation after only 1 month shows that improvements induced by the data assimilation remain in the model state for a long time. Even after 11 months, the modelled ocean topography and temperature fields show smaller errors than the model forecast without any data assimilation.
Influence of non-level walking on pedometer accuracy.
Leicht, Anthony S; Crowther, Robert G
2009-05-01
The YAMAX Digiwalker pedometer has been previously confirmed as a valid and reliable monitor during level walking; however, little is known about its accuracy during non-level walking activities or between genders. Subsequently, this study examined the influence of non-level walking and gender on pedometer accuracy. Forty-six healthy adults completed 3-min bouts of treadmill walking at their normal walking pace at 11 inclines (0-10%), while another 123 healthy adults walked up and down 47 stairs. During walking, participants wore a YAMAX Digiwalker SW-700 pedometer, and the number of steps taken and the number registered by the pedometer were recorded. Pedometer difference (steps registered - steps taken), net error (% of steps taken), absolute error (absolute % of steps taken) and gender were examined by repeated-measures two-way ANOVA and Tukey's post hoc tests. During incline walking, pedometer accuracy indices were similar between inclines and genders except for a significantly greater step difference (-7 ± 5 steps vs. 1 ± 4 steps) and net error (-2.4 ± 1.8% for the 9% incline vs. 0.4 ± 1.2% for the 2% incline). Step difference and net error were significantly greater during stair descent compared with stair ascent, while absolute error was significantly greater during stair ascent compared with stair descent. The current study demonstrated that the YAMAX Digiwalker SW-700 pedometer exhibited good accuracy during incline walking up to 10%, whereas it overestimated steps taken during stair ascent/descent, with greater overestimation during stair descent. Stair walking activity should be documented in field studies as the YAMAX Digiwalker SW-700 pedometer overestimates this activity type.
Satellite SAR geocoding with refined RPC model
NASA Astrophysics Data System (ADS)
Zhang, Lu; Balz, Timo; Liao, Mingsheng
2012-04-01
Recent studies have proved that the Rational Polynomial Camera (RPC) model is able to act as a reliable replacement of the rigorous Range-Doppler (RD) model for the geometric processing of satellite SAR datasets. But its capability in absolute geolocation of SAR images has not been evaluated quantitatively. Therefore, in this article the problems of error analysis and refinement of SAR RPC model are primarily investigated to improve the absolute accuracy of SAR geolocation. Range propagation delay and azimuth timing error are identified as two major error sources for SAR geolocation. An approach based on SAR image simulation and real-to-simulated image matching is developed to estimate and correct these two errors. Afterwards a refined RPC model can be built from the error-corrected RD model and then used in satellite SAR geocoding. Three experiments with different settings are designed and conducted to comprehensively evaluate the accuracies of SAR geolocation with both ordinary and refined RPC models. All the experimental results demonstrate that with RPC model refinement the absolute location accuracies of geocoded SAR images can be improved significantly, particularly in Easting direction. In another experiment the computation efficiencies of SAR geocoding with both RD and RPC models are compared quantitatively. The results show that by using the RPC model such efficiency can be remarkably improved by at least 16 times. In addition the problem of DEM data selection for SAR image simulation in RPC model refinement is studied by a comparative experiment. The results reveal that the best choice should be using the proper DEM datasets of spatial resolution comparable to that of the SAR images.
Application of the Bayesian Model Averaging method to an ensemble system for Poland
NASA Astrophysics Data System (ADS)
Guzikowski, Jakub; Czerwinska, Agnieszka
2014-05-01
The aim of the project is to evaluate methods for generating numerical ensemble weather predictions using meteorological data from the Weather Research & Forecasting (WRF) model and calibrating these data by means of the Bayesian Model Averaging (WRF BMA) approach. We construct high-resolution short-range ensemble forecasts using meteorological data (temperature) generated by nine WRF model configurations. The WRF models have 35 vertical levels and a 2.5 km x 2.5 km horizontal resolution. The main emphasis is that the ensemble members use different parameterizations of the physical phenomena occurring in the boundary layer. To calibrate the ensemble forecast we use the Bayesian Model Averaging (BMA) approach. The BMA predictive probability density function (PDF) is a weighted average of the predictive PDFs associated with the individual ensemble members, with weights that reflect each member's relative skill. As a test case we chose a period of heat-wave and convective weather conditions over Poland from 23 July to 1 August 2013. From 23 July to 29 July 2013, temperatures oscillated below or above 30 °C at many meteorological stations and new temperature records were set. During this time an increase in the number of patients hospitalized with cardiovascular problems was registered. On 29 July 2013 an advection of moist tropical air masses over Poland caused a strong convection event with a mesoscale convective system (MCS). The MCS caused local flooding, damage to transport infrastructure, destroyed buildings and trees, and led to injuries and a direct threat to life. A comparison of the meteorological data from the ensemble system with data recorded at 74 weather stations located in Poland is made. We prepare a set of model-observation pairs. The data obtained from the single ensemble members and the median of the WRF BMA system are then evaluated using the deterministic error statistics root mean square error (RMSE) and mean absolute error (MAE). To evaluate the probabilistic data, the Brier score (BS) and the continuous ranked probability score (CRPS) were used. Finally, a comparison between the BMA-calibrated data and the data from the ensemble members is displayed.
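A minimal sketch of the BMA predictive PDF described above, assuming Gaussian kernels with a common spread; the member forecasts, weights, and `sigma` here are made-up illustrations, whereas in practice the weights and variance are fit (typically by the EM algorithm) on training forecast-observation pairs.

```python
import math

def bma_pdf(x, member_forecasts, weights, sigma):
    """BMA predictive PDF: a weighted average of Gaussian densities centred
    on the (bias-corrected) member forecasts, with weights reflecting each
    member's relative skill."""
    def gauss(x, mu, s):
        return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))
    return sum(w * gauss(x, f, sigma) for w, f in zip(weights, member_forecasts))

# Nine hypothetical member forecasts of 2 m temperature (deg C).
forecasts = [29.1, 30.4, 28.7, 31.0, 29.9, 30.2, 28.9, 30.8, 29.5]
weights = [1.0 / 9.0] * 9  # equal skill, for illustration only
print(round(bma_pdf(30.0, forecasts, weights, sigma=1.0), 3))
```

The mixture form is what lets BMA produce a calibrated, possibly multimodal predictive distribution rather than a single deterministic value.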
Application of a Hybrid Model for Predicting the Incidence of Tuberculosis in Hubei, China
Zhang, Guoliang; Huang, Shuqiong; Duan, Qionghong; Shu, Wen; Hou, Yongchun; Zhu, Shiyu; Miao, Xiaoping; Nie, Shaofa; Wei, Sheng; Guo, Nan; Shan, Hua; Xu, Yihua
2013-01-01
Background A prediction model for tuberculosis incidence is needed in China which may be used as a decision-supportive tool for planning health interventions and allocating health resources. Methods The autoregressive integrated moving average (ARIMA) model was first constructed with the data of the tuberculosis report rate in Hubei Province from Jan 2004 to Dec 2011. The data from Jan 2012 to Jun 2012 were used to validate the model. Then the generalized regression neural network (GRNN)-ARIMA combination model was established based on the constructed ARIMA model. Finally, the fitting and prediction accuracy of the two models was evaluated. Results A total of 465,960 cases were reported between Jan 2004 and Dec 2011 in Hubei Province. The report rate of tuberculosis was highest in 2005 (119.932 per 100,000 population) and lowest in 2010 (84.724 per 100,000 population). The time series of the tuberculosis report rate shows a gradual secular decline and a striking seasonal variation. The ARIMA (2, 1, 0) × (0, 1, 1)₁₂ model was selected from several plausible ARIMA models. The residual mean square errors of the GRNN-ARIMA model and the ARIMA model were 0.4467 and 0.6521 in the training part, and 0.0958 and 0.1133 in the validation part, respectively. The mean absolute error and mean absolute percentage error of the hybrid model were also less than those of the ARIMA model. Discussion and Conclusions The gradual decline in the tuberculosis report rate may be attributed to the effect of intensive measures on tuberculosis. The striking seasonal variation may have resulted from several factors. We suppose that a delay in the surveillance system may also have contributed to the variation. According to the fitting and prediction accuracy, the hybrid model outperforms the traditional ARIMA model, which may facilitate the allocation of health resources in China. PMID:24223232
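The differencing that underlies the selected ARIMA (2, 1, 0) × (0, 1, 1)₁₂ model can be sketched as below; the synthetic monthly series and the `difference` helper are illustrative, and the AR/MA fitting step itself (done on the differenced series) is omitted.

```python
def difference(series, lag=1):
    """Apply one differencing pass at the given lag."""
    return [series[i] - series[i - lag] for i in range(lag, len(series))]

# ARIMA(2,1,0)x(0,1,1)_12: one regular difference (d=1) and one seasonal
# difference at lag 12 (D=1) are applied before the AR/MA terms are fit.
# Toy monthly series with a linear trend and an annual spike.
monthly = [10.0 + 0.1 * t + (5.0 if t % 12 == 0 else 0.0) for t in range(48)]
stationary = difference(difference(monthly, lag=1), lag=12)
print(len(stationary))  # 48 - 1 - 12 = 35
```

For this deterministic toy series the trend and the periodic spike are removed exactly, which is the stationarity the (d=1, D=1) differencing is meant to achieve on real report-rate data.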
Wiesinger, Florian; Bylund, Mikael; Yang, Jaewon; Kaushik, Sandeep; Shanbhag, Dattesh; Ahn, Sangtae; Jonsson, Joakim H; Lundman, Josef A; Hope, Thomas; Nyholm, Tufve; Larson, Peder; Cozzini, Cristina
2018-02-18
To describe a method for converting Zero TE (ZTE) MR images into X-ray attenuation information in the form of pseudo-CT images and demonstrate its performance for (1) attenuation correction (AC) in PET/MR and (2) dose planning in MR-guided radiation therapy planning (RTP). Proton density-weighted ZTE images were acquired as input for MR-based pseudo-CT conversion, providing (1) efficient capture of short-lived bone signals, (2) flat soft-tissue contrast, and (3) fast and robust 3D MR imaging. After bias correction and normalization, the images were segmented into bone, soft-tissue, and air by means of thresholding and morphological refinements. Fixed Hounsfield replacement values were assigned for air (-1000 HU) and soft-tissue (+42 HU), whereas continuous linear mapping was used for bone. The obtained ZTE-derived pseudo-CT images accurately resembled the true CT images (i.e., Dice coefficient for bone overlap of 0.73 ± 0.08 and mean absolute error of 123 ± 25 HU evaluated over the whole head, including errors from residual registration mismatches in the neck and mouth regions). The linear bone mapping accounted for bone density variations. Averaged across five patients, ZTE-based AC demonstrated a PET error of -0.04 ± 1.68% relative to CT-based AC. Similarly, for RTP assessed in eight patients, the absolute dose difference over the target volume was found to be 0.23 ± 0.42%. The described method enables MR to pseudo-CT image conversion for the head in an accurate, robust, and fast manner without relying on anatomical prior knowledge. Potential applications include PET/MR-AC, and MR-guided RTP. © 2018 International Society for Magnetic Resonance in Medicine.
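The segmentation-plus-mapping step can be sketched as follows; the fixed air and soft-tissue Hounsfield values come from the abstract, while the bone slope and intercept below are hypothetical placeholders (the paper derives its own continuous linear bone mapping from the ZTE data).

```python
def pseudo_ct(zte_norm, klass):
    """Map a normalized ZTE intensity and its tissue class to Hounsfield units.

    Fixed replacement values for air (-1000 HU) and soft tissue (+42 HU)
    follow the described method; the bone coefficients are illustrative only.
    """
    if klass == "air":
        return -1000.0
    if klass == "soft":
        return 42.0
    # Bone: continuous linear mapping; lower ZTE signal is assumed to mean
    # denser bone and thus higher HU (hypothetical slope/intercept).
    a, b = -2000.0, 2000.0
    return a * zte_norm + b

print(pseudo_ct(0.0, "air"))  # -1000.0
```

The continuous bone mapping (rather than a single fixed bone value) is what lets the pseudo-CT account for bone-density variations, as the abstract notes.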
Farsalinos, Konstantinos E; Daraban, Ana M; Ünlü, Serkan; Thomas, James D; Badano, Luigi P; Voigt, Jens-Uwe
2015-10-01
This study was planned by the EACVI/ASE/Industry Task Force to Standardize Deformation Imaging to (1) test the variability of speckle-tracking global longitudinal strain (GLS) measurements among different vendors and (2) compare GLS measurement variability with conventional echocardiographic parameters. Sixty-two volunteers were studied using ultrasound systems from seven manufacturers. Each volunteer was examined by the same sonographer on all machines. Inter- and intraobserver variability was determined in a true test-retest setting. Conventional echocardiographic parameters were acquired for comparison. Using the software packages of the respective manufacturer and of two software-only vendors, endocardial GLS was measured because it was the only GLS parameter that could be provided by all manufacturers. We compared GLSAV (the average from the three apical views) and GLS4CH (measured in the four-chamber view) measurements among vendors and with the conventional echocardiographic parameters. Absolute values of GLSAV ranged from 18.0% to 21.5%, while GLS4CH ranged from 17.9% to 21.4%. The absolute difference between vendors for GLSAV was up to 3.7% strain units (P < .001). The interobserver relative mean errors were 5.4% to 8.6% for GLSAV and 6.2% to 11.0% for GLS4CH, while the intraobserver relative mean errors were 4.9% to 7.3% and 7.2% to 11.3%, respectively. These errors were lower than for left ventricular ejection fraction and most other conventional echocardiographic parameters. Reproducibility of GLS measurements was good and in many cases superior to conventional echocardiographic measurements. The small but statistically significant variation among vendors should be considered in performing serial studies and reflects a reference point for ongoing standardization efforts. Copyright © 2015 American Society of Echocardiography. Published by Elsevier Inc. All rights reserved.
Extensive TD-DFT Benchmark: Singlet-Excited States of Organic Molecules.
Jacquemin, Denis; Wathelet, Valérie; Perpète, Eric A; Adamo, Carlo
2009-09-08
Extensive Time-Dependent Density Functional Theory (TD-DFT) calculations have been carried out in order to obtain a statistically meaningful analysis of the merits of a large number of functionals. To reach this goal, a very extended set of molecules (∼500 compounds, >700 excited states) covering a broad range of (bio)organic molecules and dyes have been investigated. Likewise, 29 functionals including LDA, GGA, meta-GGA, global hybrids, and long-range-corrected hybrids have been considered. Comparisons with both theoretical references and experimental measurements have been carried out. On average, the functionals providing the best match with reference data are, on the one hand, global hybrids containing between 22% and 25% of exact exchange (X3LYP, B98, PBE0, and mPW1PW91) and, on the other hand, a long-range-corrected hybrid with a less-rapidly increasing HF ratio, namely LC-ωPBE(20). Pure functionals tend to be less consistent, whereas functionals incorporating a larger fraction of exact exchange tend to underestimate significantly the transition energies. For most treated cases, the M05 and CAM-B3LYP schemes deliver fairly small deviations but do not outperform standard hybrids such as X3LYP or PBE0, at least within the vertical approximation. With the optimal functionals, one obtains mean absolute deviations smaller than 0.25 eV, though the errors significantly depend on the subset of molecules or states considered. As an illustration, PBE0 and LC-ωPBE(20) provide a mean absolute error of only 0.14 eV for the 228 states related to neutral organic dyes but are completely off target for cyanine-like derivatives. On the basis of comparisons with theoretical estimates, it also turned out that CC2 and TD-DFT errors are of the same order of magnitude, once the above-mentioned hybrids are selected.
Leverentz, Hannah R; Truhlar, Donald G
2009-06-09
This work tests the capability of the electrostatically embedded many-body (EE-MB) method to calculate accurate (relative to conventional calculations carried out at the same level of electronic structure theory and with the same basis set) binding energies of mixed clusters (as large as 9-mers) consisting of water, ammonia, sulfuric acid, and ammonium and bisulfate ions. This work also investigates the dependence of the accuracy of the EE-MB approximation on the type and origin of the charges used for electrostatically embedding these clusters. The conclusions reached are that for all of the clusters and sets of embedding charges studied in this work, the electrostatically embedded three-body (EE-3B) approximation is capable of consistently yielding relative errors of less than 1% and an average relative absolute error of only 0.3%, and that the performance of the EE-MB approximation does not depend strongly on the specific set of embedding charges used. The electrostatically embedded pairwise approximation has errors about an order of magnitude larger than EE-3B. This study also explores the question of why the accuracy of the EE-MB approximation depends so little on the types of embedding charges employed.
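The structure of a many-body expansion like EE-MB can be sketched at the two-body level; `mb_energy` and `pair_energy` are hypothetical names, and the electrostatic embedding itself (which enters through how each fragment energy is computed) is not shown.

```python
from itertools import combinations

def mb_energy(monomers, pair_energy):
    """Two-body truncation of the many-body expansion:
    E ~ sum_i E_i + sum_{i<j} (E_ij - E_i - E_j).

    monomers: dict mapping fragment id -> monomer energy
    pair_energy(i, j): dimer energy (in EE-MB these come from
    electrostatically embedded fragment calculations)
    """
    e1 = sum(monomers.values())
    e2 = sum(
        pair_energy(i, j) - monomers[i] - monomers[j]
        for i, j in combinations(sorted(monomers), 2)
    )
    return e1 + e2
```

The EE-3B level adds an analogous sum of trimer corrections, E_ijk minus all contained one- and two-body terms, which is what drives its order-of-magnitude accuracy gain over the pairwise truncation.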
NASA Astrophysics Data System (ADS)
Maheshwera Reddy Paturi, Uma; Devarasetti, Harish; Abimbola Fadare, David; Reddy Narala, Suresh Kumar
2018-04-01
In the present paper, the artificial neural network (ANN) and response surface methodology (RSM) are used in the modeling of surface roughness in WS2 (tungsten disulphide) solid lubricant assisted minimal quantity lubrication (MQL) machining. The experimental data for real-time MQL turning of Inconel 718 considered in this paper were taken from the literature [1]. In ANN modeling, performance parameters such as mean square error (MSE), mean absolute percentage error (MAPE) and average error in prediction (AEP) for the experimental data were determined based on the Levenberg–Marquardt (LM) feed-forward back-propagation training algorithm with tansig as the transfer function. The MATLAB toolbox has been utilized in training and testing of the neural network model. A neural network model with three input neurons, one hidden layer with five neurons and one output neuron (3-5-1 architecture) is found to be optimal and to give the most confident predictions. The coefficients of determination (R2) for the ANN and RSM models were 0.998 and 0.982, respectively. The surface roughness predictions from the ANN and RSM models were compared with experimentally measured values and found to be in good agreement with each other. However, the prediction efficacy of the ANN model is relatively high when compared with the RSM model predictions.
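The reported R2 and percentage-error statistics can be computed as below; `r_squared` and `mape` are generic illustrations of the metrics named in the abstract, not the paper's MATLAB workflow.

```python
def r_squared(measured, predicted):
    """Coefficient of determination between measured and model values."""
    mean = sum(measured) / len(measured)
    ss_res = sum((m - p) ** 2 for m, p in zip(measured, predicted))
    ss_tot = sum((m - mean) ** 2 for m in measured)
    return 1.0 - ss_res / ss_tot

def mape(measured, predicted):
    """Mean absolute percentage error."""
    return 100.0 / len(measured) * sum(
        abs(m - p) / abs(m) for m, p in zip(measured, predicted)
    )

# Toy roughness values (um): measured vs. model predictions.
measured = [0.52, 0.61, 0.48, 0.70]
predicted = [0.50, 0.63, 0.47, 0.69]
print(round(r_squared(measured, predicted), 3), round(mape(measured, predicted), 2))
```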
Intrinsic coincident linear polarimetry using stacked organic photovoltaics.
Roy, S Gupta; Awartani, O M; Sen, P; O'Connor, B T; Kudenov, M W
2016-06-27
Polarimetry has widespread applications within atmospheric sensing, telecommunications, biomedical imaging, and target detection. Several existing methods of imaging polarimetry trade off the sensor's spatial resolution for polarimetric resolution, and often have some form of spatial registration error. To mitigate these issues, we have developed a system using oriented polymer-based organic photovoltaics (OPVs) that can preferentially absorb linearly polarized light. Additionally, the OPV cells can be made semitransparent, enabling multiple detectors to be cascaded along the same optical axis. Since each device performs a partial polarization measurement of the same incident beam, high temporal resolution is maintained with the potential for inherent spatial registration. In this paper, a Mueller matrix model of the stacked OPV design is provided. Based on this model, a calibration technique is developed and presented. This calibration technique and model are validated with experimental data, taken with a cascaded three cell OPV Stokes polarimeter, capable of measuring incident linear polarization states. Our results indicate polarization measurement error of 1.2% RMS and an average absolute radiometric accuracy of 2.2% for the demonstrated polarimeter.
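A sketch of linear Stokes-vector recovery from three analysing detectors, assuming ideal polarizer rows; the real instrument uses the calibrated Mueller matrix model of the stacked OPV cells rather than this idealized measurement matrix.

```python
import math

def linear_stokes(intensities, angles_deg):
    """Recover (S0, S1, S2) from three polarization-analysing detectors.

    An ideal analyser at angle t measures I = 0.5*(S0 + S1*cos 2t + S2*sin 2t);
    stacking three such rows gives a 3x3 system solved here by Cramer's rule.
    """
    rows = [
        (0.5,
         0.5 * math.cos(2.0 * math.radians(t)),
         0.5 * math.sin(2.0 * math.radians(t)))
        for t in angles_deg
    ]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(rows)
    out = []
    for col in range(3):
        m = [list(r) for r in rows]
        for i in range(3):
            m[i][col] = intensities[i]
        out.append(det3(m) / d)
    return tuple(out)

# Horizontal linear polarization seen by analysers at 0, 60, 120 degrees.
print(linear_stokes([1.0, 0.25, 0.25], [0.0, 60.0, 120.0]))
```

Because each semitransparent cell samples the same beam along one optical axis, all three measurements share the same line of sight, which is the registration advantage the abstract describes.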
A probabilistic approach to the drag-based model
NASA Astrophysics Data System (ADS)
Napoletano, Gianluca; Forte, Roberta; Moro, Dario Del; Pietropaolo, Ermanno; Giovannelli, Luca; Berrilli, Francesco
2018-02-01
The forecast of the time of arrival (ToA) of a coronal mass ejection (CME) at Earth is of critical importance for our high-technology society and for any future manned exploration of the Solar System. As critical as the forecast accuracy is the knowledge of its precision, i.e. the error associated with the estimate. We propose a statistical approach for the computation of the ToA using the drag-based model by introducing probability distributions, rather than exact values, as input parameters, thus allowing the evaluation of the uncertainty on the forecast. We test this approach using a set of CMEs whose transit times are known, and obtain extremely promising results: the average value of the absolute differences between measured and forecast ToA is 9.1 h, and half of these residuals are within the estimated errors. These results suggest that this approach deserves further investigation. We are working to realize a real-time implementation which ingests the outputs of automated CME tracking algorithms as inputs to create a database of events useful for a further validation of the approach.
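The probabilistic drag-based approach can be sketched by Monte Carlo sampling of the model inputs; the analytic drag-based-model (DBM) solution below is standard, but the Gaussian input distributions, starting radius, and parameter values are illustrative stand-ins for the paper's empirically derived PDFs.

```python
import math
import random

AU_KM = 1.496e8  # 1 AU in km

def dbm_toa_hours(v0, w, gamma, r0_km=20.0 * 6.96e5):
    """Time of arrival at 1 AU from the analytic DBM solution
    r(t) = r0 + w*t + ln(1 + gamma*(v0 - w)*t) / gamma   (for v0 > w),
    solved for r(t) = 1 AU by bisection (r is monotone in t)."""
    def r(t):
        return r0_km + w * t + math.log(1.0 + gamma * (v0 - w) * t) / gamma
    lo, hi = 0.0, 1.0e7  # seconds; r(hi) far exceeds 1 AU
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if r(mid) < AU_KM:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi) / 3600.0

def toa_distribution(n=2000, seed=1):
    """Propagate input uncertainty through the DBM via Monte Carlo.
    The Gaussians below are illustrative, not the paper's input PDFs."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        v0 = rng.gauss(1000.0, 100.0)            # CME launch speed, km/s
        w = rng.gauss(400.0, 50.0)               # solar wind speed, km/s
        gamma = abs(rng.gauss(0.2e-7, 0.05e-7))  # drag parameter, km^-1
        samples.append(dbm_toa_hours(v0, w, gamma))
    return samples
```

The spread of the resulting ToA samples directly provides the forecast uncertainty that a single deterministic DBM run cannot.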
NASA Astrophysics Data System (ADS)
Prentice, Boone M.; Chumbley, Chad W.; Caprioli, Richard M.
2017-01-01
Matrix-assisted laser desorption/ionization imaging mass spectrometry (MALDI IMS) allows for the visualization of molecular distributions within tissue sections. While providing excellent molecular specificity and spatial information, absolute quantification by MALDI IMS remains challenging. Especially in the low molecular weight region of the spectrum, analysis is complicated by matrix interferences and ionization suppression. Though tandem mass spectrometry (MS/MS) can be used to ensure chemical specificity and improve sensitivity by eliminating chemical noise, typical MALDI MS/MS modalities only scan for a single MS/MS event per laser shot. Herein, we describe TOF/TOF instrumentation that enables multiple fragmentation events to be performed in a single laser shot, allowing the intensity of the analyte to be referenced to the intensity of the internal standard in each laser shot while maintaining the benefits of MS/MS. This approach is illustrated by the quantitative analyses of rifampicin (RIF), an antibiotic used to treat tuberculosis, in pooled human plasma using rifapentine (RPT) as an internal standard. The results show greater than 4-fold improvements in relative standard deviation as well as improved coefficients of determination (R2) and accuracy (>93% quality controls, <9% relative errors). This technology is used as an imaging modality to measure absolute RIF concentrations in liver tissue from an animal dosed in vivo. Each microspot in the quantitative image measures the local RIF concentration in the tissue section, providing absolute pixel-to-pixel quantification from different tissue microenvironments. The average concentration determined by IMS is in agreement with the concentration determined by HPLC-MS/MS, showing a percent difference of 10.6%.
Dosimetric Implications of Residual Tracking Errors During Robotic SBRT of Liver Metastases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chan, Mark; Tuen Mun Hospital, Hong Kong; Grehn, Melanie
Purpose: Although the metric precision of robotic stereotactic body radiation therapy in the presence of breathing motion is widely known, we investigated the dosimetric implications of breathing phase–related residual tracking errors. Methods and Materials: In 24 patients (28 liver metastases) treated with the CyberKnife, we recorded the residual correlation, prediction, and rotational tracking errors from 90 fractions and binned them into 10 breathing phases. The average breathing phase errors were used to shift and rotate the clinical tumor volume (CTV) and planning target volume (PTV) for each phase to calculate a pseudo 4-dimensional error dose distribution for comparison with the original planned dose distribution. Results: The median systematic directional correlation, prediction, and absolute aggregate rotation errors were 0.3 mm (range, 0.1-1.3 mm), 0.01 mm (range, 0.00-0.05 mm), and 1.5° (range, 0.4°-2.7°), respectively. Dosimetrically, 44%, 81%, and 92% of all voxels differed by less than 1%, 3%, and 5% of the planned local dose, respectively. The median coverage reduction for the PTV was 1.1% (range in coverage difference, −7.8% to +0.8%), significantly depending on correlation (P=.026) and rotational (P=.005) error. With a 3-mm PTV margin, the median coverage change for the CTV was 0.0% (range, −1.0% to +5.4%), not significantly depending on any investigated parameter. In 42% of patients, the 3-mm margin did not fully compensate for the residual tracking errors, resulting in a CTV coverage reduction of 0.1% to 1.0%. Conclusions: For liver tumors treated with robotic stereotactic body radiation therapy, a safety margin of 3 mm is not always sufficient to cover all residual tracking errors. Dosimetrically, this translates into only small CTV coverage reductions.
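The voxel-difference statistic reported above can be computed as in this sketch; `within_tolerance_fractions` is a hypothetical helper operating on flattened dose arrays.

```python
def within_tolerance_fractions(planned, perturbed, tolerances=(1.0, 3.0, 5.0)):
    """Fraction of voxels whose dose differs from the plan by less than each
    tolerance, expressed in percent of the planned local dose."""
    fractions = []
    for tol in tolerances:
        ok = sum(
            1 for p, q in zip(planned, perturbed)
            if p > 0 and abs(q - p) / p * 100.0 < tol
        )
        fractions.append(ok / len(planned))
    return fractions

# Toy 4-voxel example: local dose differences of 0.5%, 2%, 4%, and 10%.
print(within_tolerance_fractions([100.0] * 4, [100.5, 102.0, 104.0, 110.0]))
```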
Development of a Dual-Pump CARS System for Measurements in a Supersonic Combusting Free Jet
NASA Technical Reports Server (NTRS)
Magnotti, Gaetano; Cutler, Andrew D.; Danehy, Paul
2012-01-01
This work describes the development of a dual-pump CARS system for simultaneous measurements of temperature and absolute mole fraction of N2, O2 and H2 in a laboratory scale supersonic combusting free jet. Changes to the experimental set-up and the data analysis to improve the quality of the measurements in this turbulent, high-temperature reacting flow are described. The accuracy and precision of the instrument have been determined using data collected in a Hencken burner flame. For temperature above 800 K, errors in absolute mole fraction are within 1.5, 0.5, and 1% of the total composition for N2, O2 and H2, respectively. Estimated standard deviations based on 500 single shots are between 10 and 65 K for the temperature, between 0.5 and 1.7% of the total composition for O2, and between 1.5 and 3.4% for N2. The standard deviation of H2 is 10% of the average measured mole fraction. Results obtained in the jet with and without combustion are illustrated, and the capabilities and limitations of the dual-pump CARS instrument discussed.
Xin, Yong; Wang, Jia-Yang; Li, Liang; Tang, Tian-You; Liu, Gui-Hong; Wang, Jian-She; Xu, Yu-Mei; Chen, Yong; Zhang, Long-Zhen
2012-01-01
To assess the feasibility of (18F)FDG PET/CT-guided dynamic intensity-modulated radiation therapy (IMRT) for nasopharyngeal carcinoma patients by means of dosimetric verification before treatment. Eleven patients with stage III~IVA nasopharyngeal carcinoma were treated with functional image-guided IMRT, and absolute and relative dosimetric verification was carried out with a Varian 23EX linear accelerator, an ionization chamber, the 2D ion chamber array (2DICA) of the I'mRT Matrixx, and an IBA detachable phantom. Contour delineation and treatment planning were based on different imaging techniques (CT and (18F)FDG PET/CT). The dose distributions of the various regions were realized by SMART. The absolute mean error of the regions of interest was 2.39% ± 0.66 using a 0.6 cc ionization chamber. Using the DTA method, the average relative-dose pass rate within our protocol (3%, 3 mm) was 87.64% at 300 MU/min in all fields. Dosimetric verification before IMRT is obligatory and necessary. The ionization chamber and the 2DICA of the I'mRT Matrixx were effective dosimetric verification tools for the primary focal hypermetabolism in functional image-guided dynamic IMRT for nasopharyngeal carcinoma. Our preliminary evidence indicates that functional image-guided dynamic IMRT is feasible.
Rapid rotators revisited: absolute dimensions of KOI-13
NASA Astrophysics Data System (ADS)
Howarth, Ian D.; Morello, Giuseppe
2017-09-01
We analyse Kepler light-curves of the exoplanet Kepler Object of Interest no. 13b (KOI-13b) transiting its moderately rapidly rotating (gravity-darkened) parent star. A physical model, with minimal ad hoc free parameters, reproduces the time-averaged light-curve at the ~10 parts per million level. We demonstrate that this Roche-model solution allows the absolute dimensions of the system to be determined from the star's projected equatorial rotation speed, ve sin I*, without any additional assumptions; we find a planetary radius RP = (1.33 ± 0.05) R♃, stellar polar radius Rp★ = (1.55 ± 0.06) R⊙, combined mass M* + MP( ≃ M*) = (1.47 ± 0.17) M⊙ and distance d ≃ (370 ± 25) pc, where the errors are dominated by uncertainties in the relative flux contribution of the visual-binary companion KOI-13B. The implied stellar rotation period is within ~5 per cent of the non-orbital, 25.43-hr signal found in the Kepler photometry. We show that the model accurately reproduces independent tomographic observations, and yields an offset between orbital and stellar-rotation angular-momentum vectors of 60.25° ± 0.05°.
Inertial Sensor Error Reduction through Calibration and Sensor Fusion.
Lambrecht, Stefan; Nogueira, Samuel L; Bortole, Magdo; Siqueira, Adriano A G; Terra, Marco H; Rocon, Eduardo; Pons, José L
2016-02-17
This paper presents a comparison between cooperative and local Kalman filters (KF) for estimating the absolute segment angle under two calibration conditions: a simplified calibration that can be replicated in most laboratories, and a complex calibration similar to that applied by commercial vendors. The cooperative filters use information either from all inertial sensors attached to the body (Matricial KF) or from the inertial sensors and the potentiometers of an exoskeleton (Markovian KF). A one-minute walking trial of a subject walking with a 6-DoF exoskeleton was used to assess the absolute segment angles of the trunk, thigh, shank, and foot. The results indicate that regardless of the segment and filter applied, the more complex calibration always results in significantly better performance than the simplified calibration. The interaction between filter and calibration suggests that when the quality of the calibration is unknown, the Markovian KF is recommended. With the complex calibration, the Matricial and Markovian KF perform similarly, with average RMSE below 1.22 degrees. Cooperative KFs perform better than, or at least as well as, local KFs; we therefore recommend cooperative KFs over local KFs for the control or analysis of walking.
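As background, the predict/update cycle at the heart of any such Kalman filter can be sketched in scalar form. This is a generic textbook filter with made-up noise values, not the Matricial or Markovian formulation of the paper:

```python
def kalman_step(x, P, z, Q=1e-4, R=0.1):
    """One predict/update cycle of a scalar Kalman filter.

    x, P : prior state estimate (e.g. a segment angle, deg) and its variance
    z    : new measurement (e.g. an inertially derived angle)
    Q, R : process and measurement noise variances (hypothetical values)
    """
    # Predict: constant-state model, so uncertainty simply grows by Q
    P = P + Q
    # Update: blend prediction and measurement via the Kalman gain
    K = P / (P + R)
    x = x + K * (z - x)
    P = (1 - K) * P
    return x, P

x, P = 0.0, 1.0                      # vague prior
for z in [1.2, 0.9, 1.1, 1.0]:       # noisy readings of an angle near 1.0 deg
    x, P = kalman_step(x, P, z)
```

Cooperative formulations generalize this scalar case by stacking all segment angles into one state vector, so that sensors can correct each other through the shared covariance.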
Adaptive aperture for Geiger mode avalanche photodiode flash ladar systems.
Wang, Liang; Han, Shaokun; Xia, Wenze; Lei, Jieyu
2018-02-01
Although the Geiger-mode avalanche photodiode (GM-APD) flash ladar system offers the advantages of high sensitivity and simple construction, its detection performance is influenced not only by the incoming signal-to-noise ratio but also by the absolute number of noise photons. In this paper, we deduce a hyperbolic approximation to estimate the noise-photon number from the false-firing percentage in a GM-APD flash ladar system under dark conditions. By using this hyperbolic approximation function, we introduce a method to adapt the aperture to reduce the number of incoming background-noise photons. Finally, the simulation results show that the adaptive-aperture method decreases the false probability in all cases, increases the detection probability provided that the signal exceeds the noise, and decreases the average ranging error per frame.
Bay of Fundy verification of a system for multidate Landsat measurement of suspended sediment
NASA Technical Reports Server (NTRS)
Munday, J. C., Jr.; Afoldi, T. T.; Amos, C. L.
1981-01-01
A system for automated multidate Landsat CCT MSS measurement of suspended sediment concentration (S) has been implemented and verified on nine sets (108 points) of data from the Bay of Fundy, Canada. The system employs 'chromaticity analysis' to provide automatic pixel-by-pixel adjustment for atmospheric variations, permitting reference calibration data from one or several dates to be spatially and temporally extrapolated to other regions and other dates. For verification, each data set was used in turn as test data against the remainder as a calibration set: the average absolute error was 44 percent of S over the range 1-1000 mg/l. The system can be used to measure chlorophyll (in the absence of atmospheric variations), Secchi disk depth, and turbidity.
Model-based registration for assessment of spinal deformities in idiopathic scoliosis
NASA Astrophysics Data System (ADS)
Forsberg, Daniel; Lundström, Claes; Andersson, Mats; Knutsson, Hans
2014-01-01
Detailed analysis of spinal deformity is important within orthopaedic healthcare, in particular for assessment of idiopathic scoliosis. This paper addresses this challenge by proposing an image analysis method, capable of providing a full three-dimensional spine characterization. The proposed method is based on the registration of a highly detailed spine model to image data from computed tomography. The registration process provides an accurate segmentation of each individual vertebra and the ability to derive various measures describing the spinal deformity. The derived measures are estimated from landmarks attached to the spine model and transferred to the patient data according to the registration result. Evaluation of the method provides an average point-to-surface error of 0.9 mm ± 0.9 (comparing segmentations), and an average target registration error of 2.3 mm ± 1.7 (comparing landmarks). Comparing automatic and manual measurements of axial vertebral rotation provides a mean absolute difference of 2.5° ± 1.8, which is on a par with other computerized methods for assessing axial vertebral rotation. A significant advantage of our method, compared to other computerized methods for rotational measurements, is that it does not rely on vertebral symmetry for computing the rotational measures. The proposed method is fully automatic and computationally efficient, only requiring three to four minutes to process an entire image volume covering vertebrae L5 to T1. Given the use of landmarks, the method can be readily adapted to estimate other measures describing a spinal deformity by changing the set of employed landmarks. In addition, the method has the potential to be utilized for accurate segmentations of the vertebrae in routine computed tomography examinations, given the relatively low point-to-surface error.
A Typology for Charting Socioeconomic Mortality Gradients: "Go Southwest".
Blakely, Tony; Disney, George; Atkinson, June; Teng, Andrea; Mackenbach, Johan P
2017-07-01
Holistic depiction of time-trends in average mortality rates, and absolute and relative inequalities, is challenging. We outline a typology for situations with falling average mortality rates (m↓; e.g., cardiovascular disease), rates stable over time (m-; e.g., some cancers), and increasing average mortality rates (m↑; e.g., suicide in some contexts). If we consider inequality trends on both the absolute (a) and relative (r) scales, there are 13 possible combinations of m, a, and r trends over time. They can be mapped to graphs with relative inequality (log relative index of inequality [RII]; r) on the y axis, log average mortality rate on the x axis (m), and absolute inequality (slope index of inequality [SII]; a) as contour lines. We illustrate this by plotting adult mortality trends: (1) by household income from 1981 to 2011 for New Zealand, and (2) by education for European countries. Types range from the "best" m↓a↓r↓ (average, absolute, and relative inequalities all decreasing; southwest movement in graphs) to the "worst" m↑a↑r↑ (northeast). Mortality typologies in New Zealand (all-cause, cardiovascular disease, nonlung cancer, and unintentional injury) were all m↓r↑ (northwest), but variable with respect to absolute inequality. Most European typologies were m↓r↑ types (northwest; e.g., Finland), but with notable exceptions of m-a↑r↑ (north; e.g., Hungary) and "best" or southwest m↓a↓r↓ for Spain (Barcelona) females. Our typology and corresponding graphs provide a convenient way to summarize and understand past trends in inequalities in mortality, and hold potential for projecting future trends and target setting.
Treleaven, Julia; Takasaki, Hiroshi
2015-02-01
Subjective visual vertical (SVV) assesses visual dependence for spatial orientation via vertical perception testing. Using the computerized rod-and-frame test (CRFT), SVV is thought to be an important measure of cervical proprioception and might be greater in those with whiplash associated disorder (WAD), but to date research findings are inconsistent. The aim of this study was to investigate the most sensitive SVV error measurement to detect group differences between no neck pain control, idiopathic neck pain (INP) and WAD subjects. Cross-sectional study. Neck Disability Index (NDI), Dizziness Handicap Inventory short form (DHIsf) and the average constant error (CE), absolute error (AE), root mean square error (RMSE), and variable error (VE) of the SVV were obtained from 142 subjects (48 asymptomatic, 36 INP, 42 WAD). The INP group had significantly (p < 0.03) greater VE and RMSE when compared to both the control and WAD groups. There were no differences between the WAD group and controls. The results demonstrated that people with INP (not WAD) had an altered strategy for maintaining the perception of vertical, with increased variability of performance. This may be due to the complexity of the task. Further, SVV performance was not related to reported pain or dizziness handicap. These findings are inconsistent with other measures of cervical proprioception in neck pain, and more research is required before the SVV can be considered an important measure and utilized clinically. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
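For reference, the four error measures compared in this study are standard summaries of a set of signed trial errors. A minimal sketch (the trial values are illustrative, not the study's data):

```python
def error_measures(errors):
    """CE, AE, RMSE and VE of a list of signed errors (e.g. degrees off vertical)."""
    n = len(errors)
    ce = sum(errors) / n                                  # constant error: mean signed error
    ae = sum(abs(e) for e in errors) / n                  # absolute error: mean magnitude
    rmse = (sum(e * e for e in errors) / n) ** 0.5        # root mean square error
    ve = (sum((e - ce) ** 2 for e in errors) / n) ** 0.5  # variable error: spread about CE
    return ce, ae, rmse, ve

ce, ae, rmse, ve = error_measures([2.0, -1.0, 3.0, -2.0])
# With these definitions RMSE**2 == CE**2 + VE**2, so RMSE mixes bias and variability,
# which is why VE and RMSE can separate groups that CE alone does not.
```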
Microwave Resonator Measurements of Atmospheric Absorption Coefficients: A Preliminary Design Study
NASA Technical Reports Server (NTRS)
Walter, Steven J.; Spilker, Thomas R.
1995-01-01
A preliminary design study examined the feasibility of using microwave resonator measurements to improve the accuracy of atmospheric absorption coefficients and refractivity between 18 and 35 GHz. Increased accuracies would improve the capability of water vapor radiometers to correct for radio signal delays caused by Earth's atmosphere. Calibration of delays incurred by radio signals traversing the atmosphere has applications to both deep space tracking and planetary radio science experiments. Currently, the Cassini gravity wave search requires 0.8-1.0% absorption coefficient accuracy. This study examined current atmospheric absorption models and estimated that current model accuracy ranges from 5% to 7%. The refractivity of water vapor is known to 1% accuracy, while the refractivity of many dry gases (oxygen, nitrogen, etc.) are known to better than 0.1%. Improvements to the current generation of models will require that both the functional form and absolute absorption of the water vapor spectrum be calibrated and validated. Several laboratory techniques for measuring atmospheric absorption and refractivity were investigated, including absorption cells, single and multimode rectangular cavity resonators, and Fabry-Perot resonators. Semi-confocal Fabry-Perot resonators were shown to provide the most cost-effective and accurate method of measuring atmospheric gas refractivity. The need for accurate environmental measurement and control was also addressed. A preliminary design for the environmental control and measurement system was developed to aid in identifying significant design issues. The analysis indicated that overall measurement accuracy will be limited by measurement errors and imprecise control of the gas sample's thermodynamic state, thermal expansion and vibration- induced deformation of the resonator structure, and electronic measurement error. The central problem is to identify systematic errors because random errors can be reduced by averaging. 
Calibrating the resonator measurements by checking the refractivity of dry gases, which are known to better than 0.1%, provides a method of controlling the systematic errors to 0.1%. The primary remaining source of error in absorptivity and refractivity measurements is thus the ability to measure the concentration of water vapor in the resonator path. Over the whole thermodynamic range of interest the accuracy of water vapor measurement is 1.5%. However, over the range responsible for most of the radio delay (i.e., conditions in the bottom two kilometers of the atmosphere) the accuracy of water vapor measurements ranges from 0.5% to 1.0%. Therefore the precision of the resonator measurements could be held to 0.3%, and the overall absolute accuracy of resonator-based absorption and refractivity measurements will range from 0.6% to 1.0%.
Proprioceptive deficit in patients with complete tearing of the anterior cruciate ligament.
Godinho, Pedro; Nicoliche, Eduardo; Cossich, Victor; de Sousa, Eduardo Branco; Velasques, Bruna; Salles, José Inácio
2014-01-01
To investigate the existence of proprioceptive deficits between the injured limb and the uninjured (i.e. contralateral normal) limb in individuals who suffered complete tearing of the anterior cruciate ligament (ACL), using a strength reproduction test. Sixteen patients with complete tearing of the ACL participated in the study. A maximum voluntary isometric strength test was performed, with reproduction of the muscle strength in the limb with complete tearing of the ACL and in the healthy contralateral limb, with the knee flexed at 60°. The target intensity used for the reproduction procedure was 20% of the maximum voluntary isometric strength. Proprioceptive performance was determined by means of absolute error, variable error and constant error values. Significant differences were found between the control group and the ACL group for the variables of absolute error (p = 0.05) and constant error (p = 0.01). No difference was found in relation to variable error (p = 0.83). Our data corroborate the hypothesis that there is a proprioceptive deficit in subjects with complete tearing of the ACL in the injured limb, in comparison with the uninjured limb, during evaluation of the sense of strength. This deficit can be explained in terms of partial or total loss of the mechanoreceptors of the ACL.
Hannula, Manne; Huttunen, Kerttu; Koskelo, Jukka; Laitinen, Tomi; Leino, Tuomo
2008-01-01
In this study, the performances of artificial neural network (ANN) analysis and multilinear regression (MLR) model-based estimation of heart rate were compared in an evaluation of individual cognitive workload. The data comprised electrocardiography (ECG) measurements and an evaluation of cognitive load that induces psychophysiological stress (PPS), collected from 14 interceptor fighter pilots during complex simulated F/A-18 Hornet air battles. In our data, the mean absolute error of the ANN estimate was 11.4 as a visual analog scale score, being 13-23% better than the mean absolute error of the MLR model in the estimation of cognitive workload.
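The comparison metric here, mean absolute error, can be sketched as follows (the workload scores are hypothetical, not the pilots' data):

```python
def mae(y_true, y_pred):
    """Mean absolute error between observed and estimated values."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

observed = [40, 55, 70, 30]   # e.g. visual analog scale workload scores
ann_est  = [45, 50, 65, 35]   # hypothetical ANN estimates
mlr_est  = [50, 45, 80, 20]   # hypothetical MLR estimates

# Relative improvement of one model over another, as a percentage of the worse MAE
improvement = 100 * (mae(observed, mlr_est) - mae(observed, ann_est)) / mae(observed, mlr_est)
```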
Figueira, Bruno; Gonçalves, Bruno; Folgado, Hugo; Masiulis, Nerijus; Calleja-González, Julio; Sampaio, Jaime
2018-06-14
The present study aims to identify the accuracy of the NBN23® system, an indoor tracking system based on radio frequency and standard Bluetooth Low Energy channels. Twelve capture tags were attached to a custom cart with fixed distances of 0.5, 1.0, 1.5, and 1.8 m. The cart was pushed along a predetermined course following the lines of a standard-dimensions basketball court. The course was performed at low speed (<10.0 km/h), medium speed (>10.0 km/h and <20.0 km/h) and high speed (>20.0 km/h). Root mean square error (RMSE) and percentage of variance accounted for (%VAF) were used as accuracy measures. The obtained data showed acceptable accuracy for both RMSE and %VAF, despite the expected degree of error in position measurement at higher speeds. Across all distances and velocities, the RMSE showed an average absolute error of 0.30 ± 0.13 cm with a %VAF of 90.61 ± 8.34, in line with most available systems and considered acceptable for indoor sports. Processing the data with filter correction seemed to reduce the noise and produce a lower relative error, increasing the %VAF for each measured distance. Research using positional-derived variables in basketball is still very scarce; thus, this independent test of the NBN23® tracking system provides accuracy details and opens up opportunities to develop new performance indicators that help to optimize training adaptations and performance.
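The two accuracy measures used here can be sketched as follows. %VAF conventions vary; this sketch takes the raw error power relative to the reference signal's variance, and the positions are illustrative, not the study's data:

```python
def rmse(ref, meas):
    """Root mean square error between reference and measured positions."""
    return (sum((r - m) ** 2 for r, m in zip(ref, meas)) / len(ref)) ** 0.5

def vaf_percent(ref, meas):
    """Percentage of the reference signal's variance accounted for.

    Simplified convention: 1 - (error power / reference variance), times 100.
    """
    mean_ref = sum(ref) / len(ref)
    var_ref = sum((r - mean_ref) ** 2 for r in ref) / len(ref)
    err_power = sum((r - m) ** 2 for r, m in zip(ref, meas)) / len(ref)
    return 100 * (1 - err_power / var_ref)

ref  = [0.0, 0.5, 1.0, 1.5, 2.0]   # reference positions along the course (m)
meas = [0.1, 0.4, 1.1, 1.5, 1.9]   # hypothetical tracked positions (m)
```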
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morelli, Marco, E-mail: marco.morelli1@unimi.it; Masini, Andrea, E-mail: andrea.masini@flyby.it; Ruffini, Fabrizio, E-mail: fabrizio.ruffini@i-em.eu
We present innovative web tools, developed also in the frame of the FP7 ENDORSE (ENergy DOwnstReam SErvices) project, for the performance analysis and the support in planning of solar energy plants (PV, CSP, CPV). These services are based on the combination between the detailed physical model of each part of the plants and the near real-time satellite remote sensing of incident solar irradiance. Starting from the solar Global Horizontal Irradiance (GHI) data provided by the Monitoring Atmospheric Composition and Climate (GMES-MACC) Core Service and based on the elaboration of Meteosat Second Generation (MSG) satellite optical imagery, the Global Tilted Irradiance (GTI) or the Beam Normal Irradiance (BNI) incident on plant's solar PV panels (or solar receivers for CSP or CPV) is calculated. Combining these parameters with the model of the solar power plant, using also air temperature values, we can assess in near-real-time the daily evolution of the alternate current (AC) power produced by the plant. We are therefore able to compare this satellite-based AC power yield with the actually measured one and, consequently, to readily detect any possible malfunctions and to evaluate the performances of the plant (so-called “Controller” service). Besides, the same method can be applied to satellite-based averaged environmental data (solar irradiance and air temperature) in order to provide a Return on Investment analysis in support to the planning of new solar energy plants (so-called “Planner” service). This method has been successfully applied to three test solar plants (in North, Centre and South Italy respectively) and it has been validated by comparing satellite-based and in-situ measured hourly AC power data for several months in 2013 and 2014.
The results show a good accuracy: the overall Normalized Bias (NB) is − 0.41%, the overall Normalized Mean Absolute Error (NMAE) is 4.90%, the Normalized Root Mean Square Error (NRMSE) is 7.66% and the overall Correlation Coefficient (CC) is 0.9538. The maximum value of the Normalized Absolute Error (NAE) is about 30% and occurs for time periods with highly variable meteorological conditions. - Highlights: • We developed an online service (Controller) dedicated to solar energy plants real-time monitoring • We developed an online service (Planner) that supports the planning of new solar energy plants • The services are based on the elaboration of satellite optical imagery in near real-time • The validation with respect to in-situ measured hourly AC power data for three test solar plants shows good accuracy • The maximum value of the Normalized Absolute Error is about 30% and occurs for highly variable meteorological conditions.
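The validation metrics quoted above can be sketched with a minimal implementation. Normalization here is by the mean measured value, one common convention; the power values are hypothetical:

```python
def validation_metrics(measured, modeled):
    """Normalized Bias, NMAE and NRMSE (percent), normalized by the mean measured value."""
    n = len(measured)
    mean_meas = sum(measured) / n
    nb    = 100 * sum(p - m for m, p in zip(measured, modeled)) / (n * mean_meas)
    nmae  = 100 * sum(abs(p - m) for m, p in zip(measured, modeled)) / (n * mean_meas)
    nrmse = 100 * (sum((p - m) ** 2 for m, p in zip(measured, modeled)) / n) ** 0.5 / mean_meas
    return nb, nmae, nrmse

measured = [100.0, 200.0, 300.0]   # hypothetical hourly AC power (kW)
modeled  = [110.0, 190.0, 300.0]
nb, nmae, nrmse = validation_metrics(measured, modeled)
```

Note how offsetting over- and under-predictions can leave NB near zero while NMAE and NRMSE remain nonzero, which is why all three are reported together.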
Systematic error of the Gaia DR1 TGAS parallaxes from data for the red giant clump
NASA Astrophysics Data System (ADS)
Gontcharov, G. A.
2017-08-01
Based on the Gaia DR1 TGAS parallaxes and photometry from the Tycho-2, Gaia, 2MASS, and WISE catalogues, we have produced a sample of 100 000 clump red giants within 800 pc of the Sun. The systematic variations of the mode of their absolute magnitude as a function of the distance, magnitude, and other parameters have been analyzed. We show that these variations reach 0.7 mag and cannot be explained by variations in the interstellar extinction or intrinsic properties of stars and by selection. The only explanation seems to be a systematic error of the Gaia DR1 TGAS parallax dependent on the square of the observed distance R in kpc: 0.18R² mas. Allowance for this error reduces significantly the systematic dependences of the absolute magnitude mode on all parameters. This error reaches 0.1 mas within 800 pc of the Sun and allows an upper limit for the accuracy of the TGAS parallaxes to be estimated as 0.2 mas. A careful allowance for such errors is needed to use clump red giants as "standard candles." This eliminates all discrepancies between the theoretical and empirical estimates of the characteristics of these stars and allows us to obtain the first estimates of the modes of their absolute magnitudes from the Gaia parallaxes: mode(M_H) = -1.49 ± 0.04 mag, mode(M_Ks) = -1.63 ± 0.03 mag, mode(M_W1) = -1.67 ± 0.05 mag, mode(M_W2) = -1.67 ± 0.05 mag, mode(M_W3) = -1.66 ± 0.02 mag, mode(M_W4) = -1.73 ± 0.03 mag, as well as the corresponding estimates of their de-reddened colors.
Aquatic habitat mapping with an acoustic doppler current profiler: Considerations for data quality
Gaeuman, David; Jacobson, Robert B.
2005-01-01
When mounted on a boat or other moving platform, acoustic Doppler current profilers (ADCPs) can be used to map a wide range of ecologically significant phenomena, including measures of fluid shear, turbulence, vorticity, and near-bed sediment transport. However, the instrument movement necessary for mapping applications can generate significant errors, many of which have not been adequately described. This report focuses on the mechanisms by which moving-platform errors are generated, and quantifies their magnitudes under typical habitat-mapping conditions. The potential for velocity errors caused by mis-alignment of the instrument's internal compass is widely recognized, but has not previously been quantified for moving instruments. Numerical analyses show that even relatively minor compass mis-alignments can produce significant velocity errors, depending on the ratio of absolute instrument velocity to the target velocity and on the relative directions of instrument and target motion. A maximum absolute instrument velocity of about 1 m/s is recommended for most mapping applications. Lower velocities are appropriate when making bed velocity measurements, an emerging application that uses ADCP bottom-tracking to measure the velocity of sediment particles at the bed. The mechanisms by which heterogeneities in the flow velocity field generate horizontal velocity errors are also quantified, and some basic limitations in the effectiveness of standard error-detection criteria for identifying these errors are described. Bed velocity measurements may be particularly vulnerable to errors caused by spatial variability in the sediment transport field.
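The compass-misalignment mechanism can be illustrated with a toy 2-D calculation: when the measured relative velocity is rotated through a misaligned compass before the platform velocity is added back, a residual error proportional to the relative speed remains. This is a simplified sketch of the geometry, not the paper's numerical analysis:

```python
import math

def misalignment_error(v_rel, delta_deg):
    """Speed error left after rotating a relative velocity by delta_deg.

    The residual |(R(delta) - I) v_rel| equals 2*|v_rel|*sin(delta/2):
    it grows with the platform's speed relative to the water, consistent
    with errors scaling with instrument velocity.
    """
    d = math.radians(delta_deg)
    vx, vy = v_rel
    # relative velocity as seen through the misaligned compass
    rx = vx * math.cos(d) - vy * math.sin(d)
    ry = vx * math.sin(d) + vy * math.cos(d)
    return math.hypot(rx - vx, ry - vy)

# A 5 degree misalignment at 1 m/s relative speed (still water assumed, so the
# relative speed equals the platform speed) leaves roughly 0.087 m/s of error.
err = misalignment_error((1.0, 0.0), 5.0)
```

An error of that size is already significant against the slow target velocities typical of habitat mapping, which motivates the ~1 m/s platform-speed recommendation.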
Estimating Climatological Bias Errors for the Global Precipitation Climatology Project (GPCP)
NASA Technical Reports Server (NTRS)
Adler, Robert; Gu, Guojun; Huffman, George
2012-01-01
A procedure is described to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources, and merged products. The Global Precipitation Climatology Project (GPCP) monthly product is used as a base precipitation estimate, with other input products included when they are within +/- 50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation s of the included products is then taken to be the estimated systematic, or bias, error. The results allow one to examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal-mean errors, and estimated errors over large areas such as ocean and land for both the tropics and the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where one should have more or less confidence in the mean precipitation estimates. In the tropics, relative bias error estimates (s/m, where m is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, as compared with 10%-15% in the western Pacific part of the ITCZ. An examination of latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold-season errors at high latitudes that are due to snow. An empirical technique to area average the gridded errors (s) is described that allows one to make error estimates for arbitrary areas and for the tropics and the globe (land and ocean separately, and combined). 
Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, which is considered to be an upper bound because of the lack of sign-of-the-error canceling when integrating over different areas with a different number of input products. For the globe the calculated relative error estimate from this study is about 9%, which is also probably a slight overestimate. These tropical and global estimated bias errors provide one estimate of the current state of knowledge of the planet's mean precipitation.
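The selection-and-spread procedure described above can be sketched as follows. The numbers are illustrative, and whether the base product itself enters the spread is an assumption of this sketch:

```python
def bias_error_estimate(base, others, tol=0.5):
    """Bias error as the spread of products close to a base estimate.

    base   : base precipitation estimate for a zone (e.g. mm/day)
    others : other products' estimates for the same zone
    tol    : inclusion window as a fraction of the base (+/-50% by default)
    Returns (included products, estimated bias error s, relative error s/m).
    """
    included = [base] + [p for p in others if abs(p - base) / base <= tol]
    m = sum(included) / len(included)                       # mean of included products
    s = (sum((p - m) ** 2 for p in included) / len(included)) ** 0.5  # spread = bias error
    return included, s, s / m

# One product (5.0) falls outside +/-50% of the base and is excluded.
included, s, rel = bias_error_estimate(3.0, [2.7, 3.6, 5.0, 2.9])
```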
Models for estimating daily rainfall erosivity in China
NASA Astrophysics Data System (ADS)
Xie, Yun; Yin, Shui-qing; Liu, Bao-yuan; Nearing, Mark A.; Zhao, Ying
2016-04-01
The rainfall erosivity factor (R) represents the multiplication of rainfall energy and maximum 30 min intensity by event (EI30) and year. This rainfall erosivity index is widely used for empirical soil loss prediction. Its calculation, however, requires high temporal resolution rainfall data that are not readily available in many parts of the world. The purpose of this study was to parameterize models suitable for estimating erosivity from daily rainfall data, which are more widely available. One-minute resolution rainfall data recorded in sixteen stations over the eastern water erosion impacted regions of China were analyzed. The R-factor ranged from 781.9 to 8258.5 MJ mm ha-1 h-1 y-1. A total of 5942 erosive events from one-minute resolution rainfall data of ten stations were used to parameterize three models, and 4949 erosive events from the other six stations were used for validation. A threshold of daily rainfall between days classified as erosive and non-erosive was suggested to be 9.7 mm based on these data. Two of the models (I and II) used power law functions that required only daily rainfall totals. Model I used different model coefficients in the cool season (Oct.-Apr.) and warm season (May-Sept.), and Model II was fitted with a sinusoidal curve of seasonal variation. Both Model I and Model II estimated the erosivity index for average annual, yearly, and half-month temporal scales reasonably well, with the symmetric mean absolute percentage error MAPEsym ranging from 10.8% to 32.1%. Model II predicted slightly better than Model I. However, the prediction efficiency for the daily erosivity index was limited, with the symmetric mean absolute percentage error being 68.0% (Model I) and 65.7% (Model II) and Nash-Sutcliffe model efficiency being 0.55 (Model I) and 0.57 (Model II). 
Model III, which used the combination of daily rainfall amount and daily maximum 60-min rainfall, improved predictions significantly, and produced a Nash-Sutcliffe model efficiency for daily erosivity index prediction of 0.93. Thus daily rainfall data was generally sufficient for estimating annual average, yearly, and half-monthly time scales, while sub-daily data was needed when estimating daily erosivity values.
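A daily-rainfall erosivity model of the Model I type (a power law of daily rainfall with separate warm- and cool-season coefficients, applied above an erosive-rainfall threshold) can be sketched as follows. The coefficient values are hypothetical placeholders, not the fitted Chinese parameters:

```python
def daily_erosivity(rain_mm, month, alpha_warm=0.3, alpha_cool=0.14, beta=1.7,
                    threshold=9.7):
    """Daily erosivity (MJ mm / (ha h)) from daily rainfall via a power law.

    Days below the erosive-rainfall threshold (9.7 mm in the study) contribute
    nothing; alpha and beta here are made-up stand-ins for fitted coefficients.
    """
    if rain_mm < threshold:
        return 0.0
    alpha = alpha_warm if 5 <= month <= 9 else alpha_cool  # warm season: May-Sept.
    return alpha * rain_mm ** beta

# Annual R-factor as the sum of daily contributions (three example days).
annual_R = sum(daily_erosivity(r, m) for r, m in [(25.0, 7), (5.0, 7), (40.0, 1)])
```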
Comparison of the WSA-ENLIL model with three CME cone types
NASA Astrophysics Data System (ADS)
Jang, Soojeong; Moon, Y.; Na, H.
2013-07-01
We have made a comparison of the CME-associated shock propagation based on the WSA-ENLIL model with three cone types using 29 halo CMEs from 2001 to 2002. These halo CMEs have cone model parameters as well as their associated interplanetary (IP) shocks. For this study we consider three different cone types (an asymmetric cone model, an ice-cream cone model and an elliptical cone model) to determine 3-D CME parameters (radial velocity, angular width and source location), which are the input values of the WSA-ENLIL model. The mean absolute error (MAE) of the arrival times for the asymmetric cone model is 10.6 hours, which is about 1 hour smaller than those of the other models. Their ensemble average of MAE is 9.5 hours. However, this value is still larger than that (8.7 hours) of the empirical model of Kim et al. (2007). We will compare their IP shock velocities and densities with those from ACE in-situ measurements and discuss them in terms of the prediction of geomagnetic storms.
Gómez, Pablo; Schützenberger, Anne; Kniesburges, Stefan; Bohr, Christopher; Döllinger, Michael
2018-06-01
This study presents a framework for a direct comparison of experimental vocal fold dynamics data to a numerical two-mass-model (2MM) by solving the corresponding inverse problem of which parameters lead to similar model behavior. The introduced 2MM features improvements such as a variable stiffness and a modified collision force. A set of physiologically sensible degrees of freedom is presented, and three optimization algorithms are compared on synthetic vocal fold trajectories. Finally, a total of 288 high-speed video recordings of six excised porcine larynges were optimized to validate the proposed framework. Particular focus lay on the subglottal pressure, as the experimental subglottal pressure is directly comparable to the model subglottal pressure. Fundamental frequency, amplitude and objective function values were also investigated. The employed 2MM is able to replicate the behavior of the porcine vocal folds very well. The model trajectories' fundamental frequency matches that of the experimental trajectories in [Formula: see text] of the recordings. The relative error of the model trajectory amplitudes is on average [Formula: see text]. The experiments feature a mean subglottal pressure of 10.16 (SD [Formula: see text]) [Formula: see text]; in the model, it was on average 7.61 (SD [Formula: see text]) [Formula: see text]. A tendency of the model to underestimate the subglottal pressure is found, but the model is capable of inferring trends in the subglottal pressure. The average absolute error between the subglottal pressure in the model and the experiment is 2.90 (SD [Formula: see text]) [Formula: see text] or [Formula: see text]. A detailed analysis of the factors affecting the accuracy in matching the subglottal pressure is presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teo, Troy; Alayoubi, Nadia; Bruce, Neil
Purpose: In image-guided adaptive radiotherapy systems, prediction of tumor motion is required to compensate for system latencies. However, due to the non-stationary nature of respiration, it is a challenge to predict the associated tumor motions. In this work, a systematic design of the neural network (NN) using a mixture of online data acquired during the initial period of the tumor trajectory, coupled with a generalized model optimized using a group of patient data (obtained offline), is presented. Methods: The average error surface obtained from seven patients was used to determine the input data size and number of hidden neurons for the generalized NN. To reduce training time, instead of using random weights to initialize learning (method 1), weights inherited from previous training batches (method 2) were used to predict tumor position for each sliding window. Results: The generalized network was established with 35 input data (∼4.66 s) and 20 hidden nodes. For a prediction horizon of 650 ms, mean absolute errors of 0.73 mm and 0.59 mm were obtained for methods 1 and 2, respectively. An average initial learning period of 8.82 s is obtained. Conclusions: A network with a relatively short initial learning time was achieved. Its accuracy is comparable to previous studies. This network could be used as a plug-and-play predictor in which (a) tumor positions can be predicted as soon as treatment begins and (b) the need for pretreatment data and optimization for individual patients can be avoided.
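The sliding-window retraining scheme described above can be sketched with a deliberately simplified stand-in: a linear autoregressive predictor trained by stochastic gradient descent, where passing the previous window's weights mimics method 2's warm start. The paper's actual model is a neural network with 35 inputs and 20 hidden nodes; the window sizes, learning rate, and sinusoidal "respiration" trace below are illustrative assumptions only.

```python
import math

def train(window, horizon, w=None, lr=0.01, epochs=200, order=4):
    """Fit a linear AR predictor y[t+horizon] ~ w . y[t-order+1 .. t] by SGD.
    Passing in the previous window's weights (w) mimics 'method 2' warm-starting;
    w=None corresponds to 'method 1' (fresh initialization, here all zeros)."""
    if w is None:
        w = [0.0] * order
    pairs = [(window[i:i + order], window[i + order + horizon - 1])
             for i in range(len(window) - order - horizon + 1)]
    for _ in range(epochs):
        for x, y in pairs:
            err = sum(wi * xi for wi, xi in zip(w, x)) - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

# toy quasi-periodic trace standing in for a tumor-motion trajectory
trace = [math.sin(0.2 * t) for t in range(200)]

w, errors = None, []
for start in range(0, 150, 10):                          # sliding windows
    w = train(trace[start:start + 40], horizon=3, w=w)   # warm start after 1st window
    x = trace[start + 36:start + 40]                     # latest 4 samples
    pred = sum(wi * xi for wi, xi in zip(w, x))
    errors.append(abs(pred - trace[start + 42]))         # 3-step-ahead target
mae = sum(errors) / len(errors)
```

The warm start matters in practice because each window's optimization begins near the previous solution, so far fewer epochs are needed than with fresh initialization.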
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saenz, D; Gutierrez, A
Purpose: The ScandiDos Discover has obtained FDA clearance and is now clinically released. We studied the essential attenuation and beam hardening components as well as tested the diode array's ability to detect changes in absolute dose and MLC leaf positions. Methods: The ScandiDos Discover was mounted on the heads of an Elekta VersaHD and a Varian 23EX. Beam attenuation measurements were made at 10 cm depth for 6 MV and 18 MV beam energies. The PDD(10) was measured as a metric for the effect on beam quality. Next, a plan consisting of two orthogonal 10 × 10 cm² fields was used to adjust the dose per fraction by scaling monitor units to test the absolute dose detection sensitivity of the Discover. A second plan (conformal arc) was then delivered several times independently on the Elekta VersaHD. Artificially introduced MLC position errors in the four central leaves were then added. The errors were incrementally increased from 1 mm to 4 mm and back across seven control points. Results: The absolute dose measured at 10 cm depth decreased by 1.2% and 0.7% for the 6 MV and 18 MV beams with the Discover, respectively. Attenuation depended slightly on the field size but changed by only 0.1% across 5 × 5 cm² and 20 × 20 cm² fields. The change in PDD(10) for a 10 × 10 cm² field was +0.1% and +0.6% for 6 MV and 18 MV, respectively. Changes in monitor units from −5.0% to 5.0% were faithfully detected. Detected leaf errors were within 1.0 mm of intended errors. Conclusion: A novel in-vivo dosimeter monitoring the radiation beam during treatment was examined through its attenuation and beam hardening characteristics. The device tracked with changes in absolute dose as well as introduced leaf position deviations.
A Model of Self-Monitoring Blood Glucose Measurement Error.
Vettoretti, Martina; Facchinetti, Andrea; Sparacino, Giovanni; Cobelli, Claudio
2017-07-01
A reliable model of the probability density function (PDF) of self-monitoring of blood glucose (SMBG) measurement error would be important for several applications in diabetes, like testing in silico insulin therapies. In the literature, the PDF of SMBG error is usually described by a Gaussian function, whose symmetry and simplicity are unable to properly describe the variability of experimental data. Here, we propose a new methodology to derive more realistic models of SMBG error PDF. The blood glucose range is divided into zones where error (absolute or relative) presents a constant standard deviation (SD). In each zone, a suitable PDF model is fitted by maximum-likelihood to experimental data. Model validation is performed by goodness-of-fit tests. The method is tested on two databases collected by the One Touch Ultra 2 (OTU2; Lifescan Inc, Milpitas, CA) and the Bayer Contour Next USB (BCN; Bayer HealthCare LLC, Diabetes Care, Whippany, NJ). In both cases, skew-normal and exponential models are used to describe the distribution of errors and outliers, respectively. Two zones were identified: zone 1 with constant SD absolute error; zone 2 with constant SD relative error. Goodness-of-fit tests confirmed that identified PDF models are valid and superior to Gaussian models used so far in the literature. The proposed methodology allows realistic models of the SMBG error PDF to be derived. These models can be used in several investigations of present interest in the scientific community, for example, to perform in silico clinical trials to compare SMBG-based with nonadjunctive CGM-based insulin treatments.
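The two-zone structure described above can be sketched with synthetic data: below a hypothetical boundary the meter error has constant absolute SD, above it constant relative SD. The 75 mg/dL boundary, the SD values, and the uniform reference distribution are all assumptions for illustration; the paper additionally fits skew-normal and exponential PDFs by maximum likelihood, which is omitted here.

```python
import random
import statistics

random.seed(0)
THRESH = 75.0  # hypothetical zone boundary in mg/dL

# synthetic SMBG data with the structure the paper identifies:
# constant-SD absolute error below THRESH, constant-SD relative error above
pairs = []
for _ in range(2000):
    ref = random.uniform(40, 400)                      # reference glucose
    if ref < THRESH:
        meter = ref + random.gauss(0, 5)               # 5 mg/dL absolute SD
    else:
        meter = ref * (1 + random.gauss(0, 0.07))      # 7% relative SD
    pairs.append((ref, meter))

# recover the per-zone constant SDs from the paired readings
zone1 = [m - r for r, m in pairs if r < THRESH]        # absolute errors
zone2 = [(m - r) / r for r, m in pairs if r >= THRESH] # relative errors
sd_abs = statistics.stdev(zone1)
sd_rel = statistics.stdev(zone2)
```

Within each zone the (absolute or relative) error is then a single stationary distribution, which is what makes a per-zone maximum-likelihood fit well posed.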
In Vivo Measurement of Pediatric Vocal Fold Motion Using Structured Light Laser Projection
Patel, Rita R.; Donohue, Kevin D.; Lau, Daniel; Unnikrishnan, Harikrishnan
2013-01-01
Objective: The aim of the study was to present the development of a miniature structured light laser projection endoscope and to quantify vocal fold length and vibratory features related to impact stress of the pediatric glottis using high-speed imaging. Study Design: The custom-developed laser projection system consists of a green laser with a 4-mm diameter optics module at the tip of the endoscope, projecting 20 vertical laser lines on the glottis. Measurements of absolute phonatory vocal fold length, membranous vocal fold length, peak amplitude, amplitude-to-length ratio, average closing velocity, and impact velocity were obtained in five children (6–9 years), two adult male and three adult female participants without voice disorders, and one child (10 years) with bilateral vocal fold nodules during modal phonation. Results: Independent measurements made on the glottal length of a vocal fold phantom demonstrated a 0.13 mm bias error with a standard deviation of 0.23 mm, indicating adequate precision and accuracy for measuring vocal fold structures and displacement. The first in vivo measurements of amplitude-to-length ratio, peak closing velocity, and impact velocity during phonation in the pediatric population and in a child with vocal fold nodules are reported. Conclusion: The proposed laser projection system can be used to obtain in vivo measurements of absolute length and vibratory features in children and adults. Children have a larger amplitude-to-length ratio compared with typically developing adults, whereas nodules result in larger peak amplitude, amplitude-to-length ratio, average closing velocity, and impact velocity compared with typically developing children. PMID:23809569
Lally, Trent; Geist, James R; Yu, Qingzhao; Himel, Van T; Sabey, Kent
2015-07-01
This study compared images displayed on 1 desktop monitor, 1 laptop monitor, and 2 tablets for the detection of contrast and working length interpretation, with a null hypothesis of no differences between the devices. Three aluminum blocks, with milled circles of varying depth, were radiographed at various exposure levels to create 45 images of varying radiographic density. Six observers viewed the images on 4 devices: Lenovo M92z desktop (Lenovo, Beijing, China), Lenovo Z580 laptop (Lenovo), iPad 3 (Apple, Cupertino, CA), and iPad mini (Apple). Observers recorded the number of circles detected for each image, and a perceptibility curve was used to compare the devices. Additionally, 42 extracted teeth were imaged with working length files affixed at various levels (short, flush, and long) relative to the anatomic apex. Observers measured the distance from file tip to tooth apex on each device. The absolute mean measurement error was calculated for each image. Analysis of variance tests compared the devices. Observers repeated their sessions 1 month later to evaluate intraobserver reliability as measured with weighted kappa tests. Interclass correlation coefficients compared interobserver reliability. There was no significant difference in perceptibility detection between the Lenovo M92z desktop, iPad 3, and iPad mini. However, on average, all 3 were significantly better than the Lenovo Z580 laptop (P values ≤.015). No significant difference in the mean absolute error was noted for working length measurements among the 4 viewing devices (P = .3509). Although all 4 viewing devices seemed comparable with regard to working length evaluation, the laptop computer screen had lower overall ability to perceive contrast differences. Copyright © 2015 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
Bridge Structure Deformation Prediction Based on GNSS Data Using Kalman-ARIMA-GARCH Model.
Xin, Jingzhou; Zhou, Jianting; Yang, Simon X; Li, Xiaoqing; Wang, Yu
2018-01-19
Bridges are an essential part of the ground transportation system. Health monitoring is fundamentally important for the safety and service life of bridges. A large amount of structural information is obtained from various sensors using sensing technology, and the data processing has become a challenging issue. To improve the prediction accuracy of bridge structure deformation based on data mining and to accurately evaluate the time-varying characteristics of bridge structure performance evolution, this paper proposes a new method for bridge structure deformation prediction, which integrates the Kalman filter, autoregressive integrated moving average model (ARIMA), and generalized autoregressive conditional heteroskedasticity (GARCH). Firstly, the raw deformation data is directly pre-processed using the Kalman filter to reduce the noise. After that, the linear recursive ARIMA model is established to analyze and predict the structure deformation. Finally, the nonlinear recursive GARCH model is introduced to further improve the accuracy of the prediction. Simulation results based on measured sensor data from the Global Navigation Satellite System (GNSS) deformation monitoring system demonstrated that: (1) the Kalman filter is capable of denoising the bridge deformation monitoring data; (2) the prediction accuracy of the proposed Kalman-ARIMA-GARCH model is satisfactory, where the mean absolute error increases only from 3.402 mm to 5.847 mm with the increment of the prediction step; and (3) in comparison with the Kalman-ARIMA model, the Kalman-ARIMA-GARCH model results in superior prediction accuracy as it includes partial nonlinear characteristics (heteroscedasticity); the mean absolute error of five-step prediction using the proposed model is improved by 10.12%.
This paper provides a new way to predict structural behavior based on data processing, which can lay a foundation for early warning in bridge health monitoring systems based on sensor data using sensing technology.
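The first stage of the Kalman-ARIMA-GARCH pipeline, Kalman denoising of the raw deformation series, can be sketched for a scalar signal with a random-walk state model. The noise settings and the linear-trend "deformation" below are hypothetical; the ARIMA and GARCH stages are omitted.

```python
import random

def kalman_denoise(z, q=1e-3, r=0.25):
    """Scalar Kalman filter: random-walk state x, observations z = x + noise.
    q is the process-noise variance, r the measurement-noise variance."""
    x, p = z[0], 1.0
    out = []
    for zk in z:
        p += q                  # predict: state uncertainty grows
        k = p / (p + r)         # Kalman gain
        x += k * (zk - x)       # update toward the new measurement
        p *= (1 - k)
        out.append(x)
    return out

random.seed(1)
true = [0.01 * t for t in range(300)]              # slow deformation trend (mm)
meas = [v + random.gauss(0, 0.5) for v in true]    # noisy GNSS-like samples
smooth = kalman_denoise(meas)

mae_raw = sum(abs(m - t) for m, t in zip(meas, true)) / len(true)
mae_kf = sum(abs(s - t) for s, t in zip(smooth, true)) / len(true)
```

The filtered series tracks the trend with a fraction of the raw measurement error, which is the precondition for the downstream ARIMA fit to see structure rather than noise.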
Application of psychometric theory to the measurement of voice quality using rating scales.
Shrivastav, Rahul; Sapienza, Christine M; Nandur, Vuday
2005-04-01
Rating scales are commonly used to study voice quality. However, recent research has demonstrated that perceptual measures of voice quality obtained using rating scales suffer from poor interjudge agreement and reliability, especially in the mid-range of the scale. These findings, along with those obtained using multidimensional scaling (MDS), have been interpreted to show that listeners perceive voice quality in an idiosyncratic manner. Based on psychometric theory, the present research explored an alternative explanation for the poor interlistener agreement observed in previous research. This approach suggests that poor agreement between listeners may result, in part, from measurement errors related to a variety of factors rather than true differences in the perception of voice quality. In this study, 10 listeners rated breathiness for 27 vowel stimuli using a 5-point rating scale. Each stimulus was presented to the listeners 10 times in random order. Interlistener agreement and reliability were calculated from these ratings. Agreement and reliability were observed to improve when multiple ratings of each stimulus from each listener were averaged and when standardized scores were used instead of absolute ratings. The probability of exact agreement was found to be approximately 0.9 when using averaged ratings and standardized scores. In contrast, the probability of exact agreement was only 0.4 when a single rating from each listener was used to measure agreement. These findings support the hypothesis that poor agreement reported in past research partly arises from errors in measurement rather than individual differences in the perception of voice quality.
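The effect of averaging repeated ratings can be illustrated with a small Monte Carlo sketch. The latent-score model, the 0.7-point perceptual noise SD, and the trial counts below are assumptions for illustration, not values from the study.

```python
import random

random.seed(42)

def rate(true_score, sd=0.7):
    """One listener rating: latent score plus perceptual noise, rounded
    and clipped to a 1-5 scale."""
    r = round(true_score + random.gauss(0, sd))
    return min(5, max(1, r))

def exact_agreement(n_ratings, trials=4000):
    """P(two listeners report the same value) when each reports the
    rounded mean of n_ratings repeated ratings of the same stimulus."""
    hits = 0
    for _ in range(trials):
        true_score = random.uniform(1, 5)
        a = round(sum(rate(true_score) for _ in range(n_ratings)) / n_ratings)
        b = round(sum(rate(true_score) for _ in range(n_ratings)) / n_ratings)
        hits += (a == b)
    return hits / trials

p_single = exact_agreement(1)   # one rating per listener
p_avg10 = exact_agreement(10)   # average of 10 ratings per listener
```

Averaging shrinks each listener's effective noise SD by roughly the square root of the number of ratings, so the rounded means coincide far more often, which is the mechanism behind the improvement reported above.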
Cheng, Christopher P; Parker, David; Taylor, Charles A
2002-09-01
Arterial wall shear stress is hypothesized to be an important factor in the localization of atherosclerosis. Current methods to compute wall shear stress from magnetic resonance imaging (MRI) data do not account for flow profiles characteristic of pulsatile flow in noncircular vessel lumens. We describe a method to quantify wall shear stress in large blood vessels by differentiating velocity interpolation functions defined using cine phase-contrast MRI data on a band of elements in the neighborhood of the vessel wall. Validation was performed with software phantoms and an in vitro flow phantom. At an image resolution corresponding to in vivo imaging data of the human abdominal aorta, time-averaged, spatially averaged wall shear stress for steady and pulsatile flow were determined to be within 16% and 23% of the analytic solution, respectively. These errors were reduced to 5% and 8% with doubling in image resolution. For the pulsatile software phantom, the oscillation in shear stress was predicted to within 5%. The mean absolute error of circumferentially resolved shear stress for the nonaxisymmetric phantom decreased from 28% to 15% with a doubling in image resolution. The irregularly shaped phantom and in vitro investigation demonstrated convergence of the calculated values with increased image resolution. We quantified the shear stress at the supraceliac and infrarenal regions of a human abdominal aorta to be 3.4 and 2.3 dyn/cm², respectively.
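The core calculation, differentiating a near-wall velocity representation to get wall shear stress, can be sketched for analytic Poiseuille flow in a tube, where halving the grid spacing roughly halves the error, mirroring the resolution dependence reported above. The viscosity and flow values are arbitrary, and the one-sided difference is a crude stand-in for the paper's interpolation-function approach.

```python
mu, R, V = 0.04, 1.0, 10.0  # viscosity (dyn*s/cm^2), radius (cm), mean velocity (cm/s)

def u(r):
    """Analytic Poiseuille velocity profile in a tube of radius R."""
    return 2 * V * (1 - (r / R) ** 2)

def wall_shear(dr):
    """One-sided finite difference of the velocity at the wall, standing in
    for differentiating velocities defined on a band of near-wall elements."""
    return mu * (u(R - dr) - u(R)) / dr

tau_exact = mu * 4 * V / R  # analytic wall shear stress: 1.6 dyn/cm^2
err_coarse = abs(wall_shear(0.05) - tau_exact) / tau_exact   # coarse "image resolution"
err_fine = abs(wall_shear(0.025) - tau_exact) / tau_exact    # doubled resolution
```

Because the velocity profile is curved while the difference is linear, the estimate is biased low at coarse spacing, and refining the spacing reduces the relative error, just as doubling the image resolution did for the phantoms.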
Wang, Junmei; Hou, Tingjun
2011-01-01
In this work, we have evaluated how well the General AMBER force field (GAFF) performs in studying the dynamic properties of liquids. Diffusion coefficients (D) have been predicted for 17 solvents, 5 organic compounds in aqueous solutions, 4 proteins in aqueous solutions, and 9 organic compounds in non-aqueous solutions. An efficient sampling strategy has been proposed and tested in the calculation of the diffusion coefficients of solutes in solutions. There are two major findings of this study. First of all, the diffusion coefficients of organic solutes in aqueous solution can be well predicted: the average unsigned error (AUE) and the root-mean-square error (RMSE) are 0.137 and 0.171 × 10⁻⁵ cm² s⁻¹, respectively. Second, although the absolute values of D cannot be predicted, good correlations have been achieved for 8 organic solvents with experimental data (R² = 0.784), 4 proteins in aqueous solutions (R² = 0.996) and 9 organic compounds in non-aqueous solutions (R² = 0.834). The temperature-dependent behaviors of three solvents, namely TIP3P water, dimethyl sulfoxide (DMSO) and cyclohexane, have been studied. The major MD settings, such as the sizes of simulation boxes and with/without wrapping the coordinates of MD snapshots into the primary simulation boxes, have been explored. We have concluded that our sampling strategy of averaging the mean square displacement (MSD) collected in multiple short MD simulations is efficient in predicting diffusion coefficients of solutes at infinite dilution. PMID:21953689
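The sampling idea, averaging displacement statistics over many short runs, can be sketched with a plain Gaussian random walk, for which the Einstein relation MSD = 6Dt gives a known answer. The walk parameters below are illustrative; a real calculation would use actual MD trajectories rather than synthetic steps.

```python
import random

random.seed(7)
STEP_SD = 1.0            # per-axis displacement SD per step (arbitrary units)
N_RUNS, N_STEPS = 200, 100

def sq_disp(n_steps):
    """Squared displacement of one 3-D Gaussian random walk after n_steps."""
    pos = [0.0, 0.0, 0.0]
    for _ in range(n_steps):
        for ax in range(3):
            pos[ax] += random.gauss(0, STEP_SD)
    return sum(c * c for c in pos)

# average the MSD over many short runs, then apply Einstein: MSD = 6 D t
msd = sum(sq_disp(N_STEPS) for _ in range(N_RUNS)) / N_RUNS
D_est = msd / (6 * N_STEPS)          # time measured in steps here
D_true = STEP_SD ** 2 / 2            # analytic value for this walk
```

A single short run gives a very noisy squared displacement; averaging over 200 runs brings the relative uncertainty of D down to a few percent, which is the point of the multiple-short-simulation strategy.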
Evaluation of Greenland near surface air temperature datasets
Reeves Eyre, J. E. Jack; Zeng, Xubin
2017-07-05
Near-surface air temperature (SAT) over Greenland has important effects on mass balance of the ice sheet, but it is unclear which SAT datasets are reliable in the region. Here extensive in situ SAT measurements (∼1400 station-years) are used to assess monthly mean SAT from seven global reanalysis datasets, five gridded SAT analyses, one satellite retrieval and three dynamically downscaled reanalyses. Strengths and weaknesses of these products are identified, and their biases are found to vary by season and glaciological regime. MERRA2 reanalysis overall performs best with mean absolute error less than 2 °C in all months. Ice sheet-average annual mean SAT from different datasets are highly correlated in recent decades, but their 1901–2000 trends differ even in sign. Compared with the MERRA2 climatology combined with gridded SAT analysis anomalies, thirty-one earth system model historical runs from the CMIP5 archive reach ∼5 °C for the 1901–2000 average bias and have opposite trends for a number of sub-periods.
Michel, D. T.; Davis, A. K.; Armstrong, W.; ...
2015-07-08
Self-emission x-ray shadowgraphy provides a method to measure the ablation-front trajectory and low-mode nonuniformity of a target imploded by directly illuminating a fusion capsule with laser beams. The technique uses time-resolved images of soft x-rays (> 1 keV) emitted from the coronal plasma of the target imaged onto an x-ray framing camera to determine the position of the ablation front. Methods used to accurately measure the ablation-front radius (δR = ±1.15 μm), image-to-image timing (δ(Δt) = ±2.5 ps) and absolute timing (δt = ±10 ps) are presented. Angular averaging of the images provides an average radius measurement of δ(R_av) = ±0.15 μm and an error in velocity of δV/V = ±3%. This technique was applied on the Omega Laser Facility and the National Ignition Facility.
Observations and model predictions of water skin temperatures at MTI core site lakes and reservoirs
NASA Astrophysics Data System (ADS)
Garrett, Alfred J.; Kurzeja, Robert J.; O'Steen, Byron L.; Parker, Matthew J.; Pendergast, Malcolm M.; Villa-Aleman, Eliel; Pagnutti, Mary A.
2001-08-01
The Savannah River Technology Center (SRTC) measured water skin temperatures at four of the Multi-spectral Thermal Imager (MTI) core sites. The depression of the skin temperature relative to the bulk water temperature (ΔT) a few centimeters below the surface is a complex function of the weather conditions, turbulent mixing in the water and the bulk water temperature. Observed skin temperature depressions range from near zero to more than 1.0 °C. Skin temperature depressions tend to be larger when the bulk water temperature is high, but large depressions were also observed in cool bodies of water in calm conditions at night. We compared ΔT predictions from three models (SRTC, Schlussel and Wick) against measured ΔT's from 15 data sets taken at the MTI core sites. The SRTC and Wick models performed somewhat better than the Schlussel model, with RMSE and average absolute errors of about 0.2 °C, relative to 0.4 °C for the Schlussel model. The average observed ΔT for all 15 databases was -0.7 °C.
Parkinson Disease Detection from Speech Articulation Neuromechanics.
Gómez-Vilda, Pedro; Mekyska, Jiri; Ferrández, José M; Palacios-Alonso, Daniel; Gómez-Rodellar, Andrés; Rodellar-Biarge, Victoria; Galaz, Zoltan; Smekal, Zdenek; Eliasova, Ilona; Kostalova, Milena; Rektorova, Irena
2017-01-01
Aim: The research described is intended to give a description of articulation dynamics as a correlate of the kinematic behavior of the jaw-tongue biomechanical system, encoded as a probability distribution of an absolute joint velocity. This distribution may be used in detecting and grading speech from patients affected by neurodegenerative illnesses, such as Parkinson Disease. Hypothesis: The work hypothesis is that the probability density function of the absolute joint velocity includes information on the stability of phonation when applied to sustained vowels, as well as on fluency if applied to connected speech. Methods: A dataset of sustained vowels recorded from Parkinson Disease patients is contrasted with similar recordings from normative subjects. The probability distribution of the absolute kinematic velocity of the jaw-tongue system is extracted from each utterance. A Random Least Squares Feed-Forward Network (RLSFN) has been used as a binary classifier working on the pathological and normative datasets in a leave-one-out strategy. Monte Carlo simulations have been conducted to estimate the influence of the stochastic nature of the classifier. Two datasets, one for each gender, were tested, including 26 normative and 53 pathological subjects in the male set, and 25 normative and 38 pathological in the female set. Results: Male and female data subsets were tested in single runs, yielding equal error rates under 0.6% (accuracy over 99.4%). Due to the stochastic nature of each experiment, Monte Carlo runs were conducted to test the reliability of the methodology. The average detection results after 200 Monte Carlo runs of a 200-hyperplane hidden layer RLSFN are given in terms of Sensitivity (males: 0.9946, females: 0.9942), Specificity (males: 0.9944, females: 0.9941) and Accuracy (males: 0.9945, females: 0.9942). The area under the ROC curve is 0.9947 (males) and 0.9945 (females). The equal error rate is 0.0054 (males) and 0.0057 (females).
Conclusions: The proposed methodology shows that the use of highly normalized descriptors, such as the probability distribution of kinematic variables of vowel articulation stability, which has some interesting properties in terms of information theory, boosts the potential of simple yet powerful classifiers in producing quite acceptable detection results in Parkinson Disease.
Efficient Measurement of Quantum Gate Error by Interleaved Randomized Benchmarking
NASA Astrophysics Data System (ADS)
Magesan, Easwar; Gambetta, Jay M.; Johnson, B. R.; Ryan, Colm A.; Chow, Jerry M.; Merkel, Seth T.; da Silva, Marcus P.; Keefe, George A.; Rothwell, Mary B.; Ohki, Thomas A.; Ketchen, Mark B.; Steffen, M.
2012-08-01
We describe a scalable experimental protocol for estimating the average error of individual quantum computational gates. This protocol consists of interleaving random Clifford gates between the gate of interest and provides an estimate as well as theoretical bounds for the average error of the gate under test, so long as the average noise variation over all Clifford gates is small. This technique takes into account both state preparation and measurement errors and is scalable in the number of qubits. We apply this protocol to a superconducting qubit system and find a bounded average error of 0.003 [0,0.016] for the single-qubit gates Xπ/2 and Yπ/2. These bounded values provide better estimates of the average error than those extracted via quantum process tomography.
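The point estimate extracted from interleaved randomized benchmarking reduces to simple arithmetic on the two fitted decay parameters: the reference sequence decay and the decay with the gate of interest interleaved. The decay values below are hypothetical numbers of the right magnitude, not the paper's actual fit parameters.

```python
d = 2  # Hilbert-space dimension of a single qubit

def gate_error(p_ref, p_gate):
    """Interleaved-RB point estimate of the average gate error:
    r = (d - 1)/d * (1 - p_gate/p_ref), from the two fitted decay rates."""
    return (d - 1) / d * (1 - p_gate / p_ref)

# hypothetical decay parameters from reference and interleaved sequence fits
p_ref, p_int = 0.9850, 0.9791
r = gate_error(p_ref, p_int)   # on the order of the 0.003 reported in the paper
```

In practice this point estimate is accompanied by the theoretical bounds the protocol provides, since the ratio of decays only pins down the gate error exactly when the noise is close to depolarizing.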
Assessing and Ensuring GOES-R Magnetometer Accuracy
NASA Technical Reports Server (NTRS)
Carter, Delano R.; Todirita, Monica; Kronenwetter, Jeffrey; Chu, Donald
2016-01-01
The GOES-R magnetometer subsystem accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. Error comes both from outside the magnetometers, e.g. spacecraft fields and misalignments, as well as inside, e.g. zero offset and scale factor errors. Because zero offset and scale factor drift over time, it will be necessary to perform annual calibration maneuvers. To predict performance before launch, we have used Monte Carlo simulations and covariance analysis. Both behave as expected, and their accuracy predictions agree within 30%. With the proposed calibration regimen, both suggest that the GOES-R magnetometer subsystem will meet its accuracy requirements.
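The quiet-time metric defined above (absolute mean error plus three standard deviations, compared against the 1.7 nT requirement) is straightforward to compute; a sketch with made-up residual field errors in nT follows. Only the 1.7 nT threshold and the mean-plus-3-sigma definition come from the text.

```python
import statistics

def quiet_time_accuracy(errors):
    """GOES-R-style quiet-time accuracy metric: |mean error| + 3 * SD."""
    return abs(statistics.fmean(errors)) + 3 * statistics.stdev(errors)

# hypothetical post-calibration residual errors (nT)
errors = [0.3, -0.1, 0.2, 0.0, -0.2, 0.1, 0.4, -0.3]
acc = quiet_time_accuracy(errors)
meets_requirement = acc <= 1.7   # compare against the 1.7 nT requirement
```

During storms the same computation would use a 2-sigma multiplier, per the definition in the text.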
Mathematical tool from corn stover TGA to determine its composition.
Freda, Cesare; Zimbardi, Francesco; Nanna, Francesco; Viola, Egidio
2012-08-01
Corn stover was treated by steam explosion process at four different temperatures. A fraction of the four exploded matters was extracted by water. The eight samples (four from steam explosion and four from water extraction of exploded matters) were analysed by wet chemical way to quantify the amount of cellulose, hemicellulose and lignin. Thermogravimetric analysis in air atmosphere was executed on the eight samples. A mathematical tool was developed, using TGA data, to determine the composition of corn stover in terms of cellulose, hemicellulose and lignin. It uses the biomass degradation temperature as multiple linear function of the cellulose, hemicellulose and lignin content of the biomass with interactive terms. The mathematical tool predicted cellulose, hemicellulose and lignin contents with average absolute errors of 1.69, 5.59 and 0.74 %, respectively, compared to the wet chemical method.
Flash radiography with 24 GeV/c protons
NASA Astrophysics Data System (ADS)
Morris, C. L.; Ables, E.; Alrick, K. R.; Aufderheide, M. B.; Barnes, P. D.; Buescher, K. L.; Cagliostro, D. J.; Clark, D. A.; Clark, D. J.; Espinoza, C. J.; Ferm, E. N.; Gallegos, R. A.; Gardner, S. D.; Gomez, J. J.; Greene, G. A.; Hanson, A.; Hartouni, E. P.; Hogan, G. E.; King, N. S. P.; Kwiatkowski, K.; Liljestrand, R. P.; Mariam, F. G.; Merrill, F. E.; Morgan, D. V.; Morley, K. B.; Mottershead, C. T.; Murray, M. M.; Pazuchanics, P. D.; Pearson, J. E.; Sarracino, J. S.; Saunders, A.; Scaduto, J.; Schach von Wittenau, A. E.; Soltz, R. A.; Sterbenz, S.; Thompson, R. T.; Vixie, K.; Wilke, M. D.; Wright, D. M.; Zumbro, J. D.
2011-05-01
The accuracy of density measurements and position resolution in flash (40 ns) radiography of thick objects with 24 GeV/c protons is investigated. A global model fit to step wedge data is shown to give a good description spanning the periodic table. The parameters obtained from the step wedge data are used to predict transmission through the French Test Object (FTO), a test object of nested spheres, to a precision better than 1%. Multiple trials have been used to show that the systematic errors are less than 2%. Absolute agreement between the average radiographic measurements of the density and the known density is 1%. Spatial resolution has been measured to be 200 μm at the center of the FTO. These data verify expectations of the benefits provided by high energy hadron radiography for thick objects.
VizieR Online Data Catalog: Catalog of Hγ measures (Petrie+ 1973)
NASA Astrophysics Data System (ADS)
Petrie, R. M.; Crampton, D.; Leir, A.; Younger, F.
2016-02-01
The catalog is a compilation of equivalent widths of H-γ for early-type stars, not only from published material but also from the numerous card files kept by R.M. Petrie. The luminosities of early-type stars through the measurement of the equivalent width of H-γ are relatively precise, although the early work was hampered by systematic errors in the absolute magnitude calibrations. In a number of cases, the values of the equivalent width for a given star differ slightly from publication to publication. There are three possible reasons for this: 1) The later publications may include measurements of additional spectra; 2) in some cases the values were included in the average; 3) some initial measures had not included the extremities of the very extensive wings of H-γ in the spectra of A stars. (2 data files).
NASA Astrophysics Data System (ADS)
Singh, Navneet K.; Singh, Asheesh K.; Tripathy, Manoj
2012-05-01
For power industries, electricity load forecasting plays an important role in real-time control, security, optimal unit commitment, economic scheduling, maintenance, energy management, and plant structure planning.
Mirkhani, Seyyed Alireza; Gharagheizi, Farhad; Sattari, Mehdi
2012-03-01
Evaluation of diffusion coefficients of pure compounds in air is of great interest for many diverse industrial and air quality control applications. In this communication, a QSPR method is applied to predict the molecular diffusivity of chemical compounds in air at 298.15 K and atmospheric pressure. A total of 4579 organic compounds from a broad spectrum of chemical families have been investigated to propose a comprehensive and predictive model. The final model is derived by Genetic Function Approximation (GFA) and contains five descriptors. Using this dedicated model, we obtain satisfactory results quantified by the following statistics: Squared Correlation Coefficient = 0.9723, Standard Deviation Error = 0.003, and Average Absolute Relative Deviation = 0.3% for the predicted properties relative to existing experimental values. Copyright © 2011 Elsevier Ltd. All rights reserved.
40 CFR 1065.1005 - Symbols, abbreviations, acronyms, and units of measure.
Code of Federal Regulations, 2014 CFR
2014-07-01
[Garbled excerpt of the section's table of symbols and units; recoverable entries include m/m (ratio of diameters), mol/mol (atomic oxygen-to-carbon ratio), S (Sutherland constant, in K), SEE (standard estimate of error), and T (absolute temperature).]
Impact of spot charge inaccuracies in IMPT treatments.
Kraan, Aafke C; Depauw, Nicolas; Clasie, Ben; Giunta, Marina; Madden, Tom; Kooy, Hanne M
2017-08-01
Spot charge is one parameter of a pencil-beam scanning dose delivery system whose accuracy is typically high but whose required value has not been investigated. In this work we quantify the impact of spot charge inaccuracies on the dose distribution in patients. Knowing the effect of charge errors is relevant for conventional proton machines, as well as for new-generation proton machines, where ensuring accurate charge may be challenging. Through perturbation of spot charge in treatment plans for seven patients and a phantom, we evaluated the dose impact of absolute (up to 5×10^6 protons) and relative (up to 30%) charge errors. We investigated the dependence on beam width by studying scenarios with small, medium and large beam sizes. Treatment plan statistics included the Γ passing rate, dose-volume histograms and dose differences. The allowable absolute charge error for small spot plans was about 2×10^6 protons. Larger limits would be allowed if larger spots were used. For relative errors, the maximum allowable error size was about 13%, 8% and 6% for small, medium and large spots, respectively. Dose distributions turned out to be surprisingly robust against random spot charge perturbation. Our study suggests that ensuring spot charge errors as small as 1-2%, as is commonly aimed at in conventional proton therapy machines, is clinically not strictly needed. © 2017 American Association of Physicists in Medicine.
Masked and unmasked error-related potentials during continuous control and feedback
NASA Astrophysics Data System (ADS)
Lopes Dias, Catarina; Sburlea, Andreea I.; Müller-Putz, Gernot R.
2018-06-01
The detection of error-related potentials (ErrPs) in tasks with discrete feedback is well established in the brain-computer interface (BCI) field. However, the decoding of ErrPs in tasks with continuous feedback is still in its early stages. Objective. We developed a task in which subjects have continuous control of a cursor's position by means of a joystick. The cursor's position was shown to the participants in two different modalities of continuous feedback: normal and jittered. The jittered feedback was created to mimic the instability that could exist if participants controlled the trajectory directly with brain signals. Approach. This paper studies the electroencephalographic (EEG)-measurable signatures caused by a loss of control over the cursor's trajectory, causing a target miss. Main results. In both feedback modalities, time-locked potentials revealed the typical frontal-central components of error-related potentials. Errors occurring during the jittered feedback (masked errors) were delayed in comparison to errors occurring during normal feedback (unmasked errors). Masked errors displayed lower peak amplitudes than unmasked errors. Time-locked classification analysis allowed a good distinction between correct and error classes (average Cohen's κ, average TPR = 81.8% and average TNR = 96.4%). Time-locked classification analysis between masked error and unmasked error classes revealed results at chance level (average Cohen's κ, average TPR = 60.9% and average TNR = 58.3%). Afterwards, we performed asynchronous detection of ErrPs, combining both masked and unmasked trials. The asynchronous detection of ErrPs in a simulated online scenario resulted in an average TNR of 84.0% and in an average TPR of 64.9%. Significance. The time-locked classification results suggest that the masked and unmasked errors were indistinguishable in terms of classification. The asynchronous classification results suggest that the feedback modality did not hinder the asynchronous detection of ErrPs.
Error analysis on spinal motion measurement using skin mounted sensors.
Yang, Zhengyi; Ma, Heather Ting; Wang, Deming; Lee, Raymond
2008-01-01
Measurement errors of skin-mounted sensors in measuring forward bending movement of the lumbar spine are investigated. In this investigation, radiographic images capturing the entire lumbar spine's position were acquired and used as a 'gold' standard. Seventeen young male volunteers (21 (SD 1) years old) agreed to participate in the study. Light-weight miniature sensors of the electromagnetic tracking system Fastrak were attached to the skin overlying the spinous processes of the lumbar spine. With the sensors attached, the subjects were requested to take lateral radiographs in two postures: neutral upright and full flexion. The ranges of motion of the lumbar spine were calculated from two sets of digitized data, the bony markers of the vertebral bodies and the sensors, and the two results were compared. The differences between the two sets of results were then analyzed. The relative movement between sensor and vertebra was decomposed into sensor sliding and tilting, from which sliding error and tilting error were introduced. The gross motion range of forward bending of the lumbar spine measured from the bony markers of the vertebrae was 67.8 degrees (SD 10.6 degrees) and that from the sensors was 62.8 degrees (SD 12.8 degrees). The error and absolute error for the gross motion range were 5.0 degrees (SD 7.2 degrees) and 7.7 degrees (SD 3.9 degrees). The contributions of the sensors placed on S1 and L1 to the absolute error were 3.9 degrees (SD 2.9 degrees) and 4.4 degrees (SD 2.8 degrees), respectively.
NASA Astrophysics Data System (ADS)
Ripepi, V.; Moretti, M. I.; Clementini, G.; Marconi, M.; Cioni, M. R.; Marquette, J. B.; Tisserand, P.
2012-09-01
The VISTA Magellanic Cloud (VMC, PI M.R. Cioni) survey is collecting K_S-band time-series photometry of the system formed by the two Magellanic Clouds (MC) and the "bridge" that connects them. These data are used to build K_S-band light curves of the MC RR Lyrae stars and Classical Cepheids and determine absolute distances and the 3D geometry of the whole system using the K-band period-luminosity (PL(K_S)), the period-luminosity-color (PLC) and the Wesenheit relations applicable to these types of variables. As an example of the survey potential we present results from the VMC observations of two fields centered respectively on the South Ecliptic Pole and the 30 Doradus star-forming region of the Large Magellanic Cloud. The VMC K_S-band light curves of the RR Lyrae stars in these two regions have very good photometric quality, with typical errors for the individual data points in the range of ~0.02 to 0.05 mag. The Cepheids have excellent light curves (typical errors of ~0.01 mag). The average K_S magnitudes derived for both types of variables were used to derive PL(K_S) relations that are in general good agreement within the errors with the literature data, and show a smaller scatter than previous studies.
Quan, Guo-zheng; Yu, Chun-tang; Liu, Ying-ying; Xia, Yu-feng
2014-01-01
The stress-strain data of 20MnNiMo alloy were collected from a series of hot compressions on a Gleeble-1500 thermal-mechanical simulator in the temperature range of 1173-1473 K and strain rate range of 0.01-10 s^-1. Based on the experimental data, an improved Arrhenius-type constitutive model and an artificial neural network (ANN) model were established to predict the high-temperature flow stress of as-cast 20MnNiMo alloy. The accuracy and reliability of the improved Arrhenius-type model and the trained ANN model were further evaluated in terms of the correlation coefficient (R), the average absolute relative error (AARE), and the relative error (η). For the former, R and AARE were found to be 0.9954 and 5.26%, respectively, while for the latter they were 0.9997 and 1.02%, respectively. The relative errors (η) of the improved Arrhenius-type model and the ANN model were, respectively, in the ranges of -39.99% to 35.05% and -3.77% to 16.74%. For the former, only 16.3% of the test data set has η-values within ±1%, while for the latter more than 79% does. The results indicate that the ANN model presents a higher predictive ability than the improved Arrhenius-type constitutive model.
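The three evaluation metrics used in this record (R, AARE, and the per-point relative error η) are plain arithmetic on paired measured/predicted values; a minimal sketch with hypothetical helper names, not code from the paper:

```python
import math

def correlation_coefficient(measured, predicted):
    """Pearson correlation coefficient R between measured and predicted values."""
    n = len(measured)
    mx = sum(measured) / n
    mp = sum(predicted) / n
    cov = sum((m - mx) * (p - mp) for m, p in zip(measured, predicted))
    sx = math.sqrt(sum((m - mx) ** 2 for m in measured))
    sp = math.sqrt(sum((p - mp) ** 2 for p in predicted))
    return cov / (sx * sp)

def aare(measured, predicted):
    """Average absolute relative error, in percent."""
    return 100.0 * sum(abs((p - m) / m)
                       for m, p in zip(measured, predicted)) / len(measured)

def relative_errors(measured, predicted):
    """Per-point relative error eta, in percent (signed)."""
    return [100.0 * (p - m) / m for m, p in zip(measured, predicted)]
```

With these definitions, the quoted "16.3% of η-values within ±1%" is just the fraction of `relative_errors` entries whose absolute value is below 1.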
NASA Astrophysics Data System (ADS)
Hurwitz, Martina; Williams, Christopher L.; Mishra, Pankaj; Rottmann, Joerg; Dhou, Salam; Wagar, Matthew; Mannarino, Edward G.; Mak, Raymond H.; Lewis, John H.
2015-01-01
Respiratory motion during radiotherapy can cause uncertainties in definition of the target volume and in estimation of the dose delivered to the target and healthy tissue. In this paper, we generate volumetric images of the internal patient anatomy during treatment using only the motion of a surrogate signal. Pre-treatment four-dimensional CT imaging is used to create a patient-specific model correlating internal respiratory motion with the trajectory of an external surrogate placed on the chest. The performance of this model is assessed with digital and physical phantoms reproducing measured irregular patient breathing patterns. Ten patient breathing patterns are incorporated in a digital phantom. For each patient breathing pattern, the model is used to generate images over the course of thirty seconds. The tumor position predicted by the model is compared to ground truth information from the digital phantom. Over the ten patient breathing patterns, the average absolute error in the tumor centroid position predicted by the motion model is 1.4 mm. The corresponding error for one patient breathing pattern implemented in an anthropomorphic physical phantom was 0.6 mm. The global voxel intensity error was used to compare the full image to the ground truth and demonstrates good agreement between predicted and true images. The model also generates accurate predictions for breathing patterns with irregular phases or amplitudes.
NASA Astrophysics Data System (ADS)
Mester, Dávid; Nagy, Péter R.; Kállay, Mihály
2018-03-01
A reduced-cost implementation of the second-order algebraic-diagrammatic construction [ADC(2)] method is presented. We introduce approximations by restricting virtual natural orbitals and natural auxiliary functions, which results, on average, in more than an order of magnitude speedup compared to conventional, density-fitting ADC(2) algorithms. The present scheme is the successor of our previous approach [D. Mester, P. R. Nagy, and M. Kállay, J. Chem. Phys. 146, 194102 (2017)], which has been successfully applied to obtain singlet excitation energies with the linear-response second-order coupled-cluster singles and doubles model. Here we report further methodological improvements and the extension of the method to compute singlet and triplet ADC(2) excitation energies and transition moments. The various approximations are carefully benchmarked, and conservative truncation thresholds are selected which guarantee errors much smaller than the intrinsic error of the ADC(2) method. Using the canonical values as reference, we find that the mean absolute error for both singlet and triplet ADC(2) excitation energies is 0.02 eV, while that for oscillator strengths is 0.001 a.u. The rigorous cutoff parameters together with the significantly reduced operation count and storage requirements allow us to obtain accurate ADC(2) excitation energies and transition properties using triple-ζ basis sets for systems of up to one hundred atoms.
Galloway, Joel M.; Ortiz, Roderick F.; Bales, Jerad D.; Mau, David P.
2008-01-01
Pueblo Reservoir is west of Pueblo, Colorado, and is an important water resource for southeastern Colorado. The reservoir provides irrigation, municipal, and industrial water to various entities throughout the region. In anticipation of increased population growth, the cities of Colorado Springs, Fountain, Security, and Pueblo West have proposed building a pipeline that would be capable of conveying 78 million gallons of raw water per day (240 acre-feet) from Pueblo Reservoir. The U.S. Geological Survey, in cooperation with Colorado Springs Utilities and the Bureau of Reclamation, developed, calibrated, and verified a hydrodynamic and water-quality model of Pueblo Reservoir to describe the hydrologic, chemical, and biological processes in Pueblo Reservoir that can be used to assess environmental effects in the reservoir. Hydrodynamics and water-quality characteristics in Pueblo Reservoir were simulated using a laterally averaged, two-dimensional model that was calibrated using data collected from October 1985 through September 1987. The Pueblo Reservoir model was calibrated based on vertical profiles of water temperature and dissolved-oxygen concentration, and water-quality constituent concentrations collected in the epilimnion and hypolimnion at four sites in the reservoir. The calibrated model was verified with data from October 1999 through September 2002, which included a relatively wet year (water year 2000), an average year (water year 2001), and a dry year (water year 2002). Simulated water temperatures compared well to measured water temperatures in Pueblo Reservoir from October 1985 through September 1987. Spatially, simulated water temperatures compared better to measured water temperatures in the downstream part of the reservoir than in the upstream part of the reservoir. Differences between simulated and measured water temperatures also varied through time. 
Simulated water temperatures were slightly less than measured water temperatures from March to May 1986 and 1987, and slightly greater than measured data in August and September 1987. Relative to the calibration period, simulated water temperatures during the verification period did not compare as well to measured water temperatures. In general, simulated dissolved-oxygen concentrations for the calibration period compared well to measured concentrations in Pueblo Reservoir. Spatially, simulated concentrations deviated more from the measured values at the downstream part of the reservoir than at other locations in the reservoir. Overall, the absolute mean error ranged from 1.05 (site 1B) to 1.42 milligrams per liter (site 7B), and the root mean square error ranged from 1.12 (site 1B) to 1.67 milligrams per liter (site 7B). Simulated dissolved oxygen in the verification period compared better to the measured concentrations than in the calibration period. The absolute mean error ranged from 0.91 (site 5C) to 1.28 milligrams per liter (site 7B), and the root mean square error ranged from 1.03 (site 5C) to 1.46 milligrams per liter (site 7B). Simulated total dissolved solids generally were less than measured total dissolved-solids concentrations in Pueblo Reservoir from October 1985 through September 1987. The largest differences between simulated and measured total dissolved solids were observed at the most downstream sites in Pueblo Reservoir during the second year of the calibration period. Total dissolved-solids data were not available from reservoir sites during the verification period, so in-reservoir specific-conductance data were compared to simulated total dissolved solids. Simulated total dissolved solids followed the same patterns through time as the measured specific conductance data during the verification period. Simulated total nitrogen concentrations compared relatively well to measured concentrations in the Pueblo Reservoir model. 
The absolute mean error ranged from 0.21 (site 1B) to 0.27 milligram per liter as nitrogen (sites 3B and 7
Work-related accidents among the Iranian population: a time series analysis, 2000–2011
Karimlou, Masoud; Imani, Mehdi; Hosseini, Agha-Fatemeh; Dehnad, Afsaneh; Vahabi, Nasim; Bakhtiyari, Mahmood
2015-01-01
Background Work-related accidents result in human suffering and economic losses and are considered a major health problem worldwide, especially in the economically developing world. Objectives To introduce seasonal autoregressive integrated moving average (ARIMA) models for time series analysis of work-related accident data for workers insured by the Iranian Social Security Organization (ISSO) between 2000 and 2011. Methods In this retrospective study, all insured people experiencing at least one work-related accident during a 10-year period were included in the analyses. We used Box-Jenkins modeling to develop a time series model of the total number of accidents. Results There was an average of 1476 accidents per month (1476.05±458.77, mean±SD). The final ARIMA (p,d,q)(P,D,Q)s model fitted to the data was ARIMA(1,1,1)×(0,1,1)_12, consisting of first-order autoregressive, moving average and seasonal moving average parameters, with a mean absolute percentage error (MAPE) of 20.942. Conclusions The final model showed that time series analysis with ARIMA models is useful for forecasting the number of work-related accidents in Iran. In addition, the forecasted number of work-related accidents for 2011 reflected the stability of occurrence of these accidents in recent years, indicating a need for preventive occupational health and safety policies such as safety inspection. PMID:26119774
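The MAPE score quoted for the fitted model, and the non-seasonal/seasonal differencing implied by the d and D orders of an ARIMA(1,1,1)×(0,1,1)12 model, are both simple to state; a minimal sketch (function names are illustrative, not from the paper):

```python
def mape(actual, forecast):
    """Mean absolute percentage error, in percent. Assumes no zero actuals."""
    return 100.0 * sum(abs((a - f) / a)
                       for a, f in zip(actual, forecast)) / len(actual)

def difference(series, lag=1):
    """One round of differencing at the given lag.

    lag=1 corresponds to the d=1 order; lag=12 to the D=1 seasonal
    order of a monthly ARIMA(1,1,1)x(0,1,1)12 model."""
    return [series[i] - series[i - lag] for i in range(lag, len(series))]
```

In a Box-Jenkins workflow the series is differenced until stationary, the AR/MA parameters are estimated on the differenced series, and MAPE is computed on the back-transformed forecasts.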
Yao, Lihong; Zhu, Lihong; Wang, Junjie; Liu, Lu; Zhou, Shun; Jiang, ShuKun; Cao, Qianqian; Qu, Ang; Tian, Suqing
2015-04-26
To improve the delivery of radiotherapy in gynecologic malignancies and to minimize the irradiation of unaffected tissues, daily kilovoltage cone beam computed tomography (kV-CBCT) was used to reduce setup errors. Thirteen patients with gynecologic cancers were treated with postoperative volumetric-modulated arc therapy (VMAT). All patients had a planning CT scan and daily CBCT during treatment. Automatic bone anatomy matching was used to determine the initial inter-fraction positioning error. Positional correction on a six-degrees-of-freedom (6DoF) couch was followed by a second scan to calculate the residual inter-fraction error, and a post-treatment scan assessed intra-fraction motion. The margins of the planning target volume (MPTV) were calculated from these setup variations and the effect of margin size on normal tissue sparing was evaluated. In total, 573 CBCT scans were acquired. Mean absolute pre-/post-correction errors were obtained in all six degrees of freedom. With 6DoF couch correction, the MPTV accounting for intra-fraction errors was reduced by 3.8-5.6 mm. This permitted a reduction in the maximum dose to the small intestine, bladder and femoral head (P=0.001, 0.035 and 0.032, respectively) and the average dose to the rectum, small intestine, bladder and pelvic marrow (P=0.003, 0.000, 0.001 and 0.000, respectively), and markedly reduced irradiated normal tissue volumes. A 6DoF couch in combination with daily kV-CBCT can considerably improve positioning accuracy during VMAT treatment in gynecologic malignancies, reducing the MPTV. The reduced margin size permits improved normal tissue sparing and a smaller total irradiated volume.
Pérula de Torres, Luis Angel; Pulido Ortega, Laura; Pérula de Torres, Carlos; González Lama, Jesús; Olaya Caro, Inmaculada; Ruiz Moral, Roger
2014-10-21
To evaluate the effectiveness of an intervention based on motivational interviewing to reduce medication errors in chronic patients over 65 with polypharmacy. Cluster randomized trial that included doctors and nurses of 16 Primary Care centers and chronic patients over 65 years with polypharmacy. The professionals were assigned to the experimental or the control group using stratified randomization. Interventions consisted of training of professionals and revision of patient treatments, with application of motivational interviewing in the experimental group in addition to the usual approach used in the control group. The primary endpoint (medication error) was analyzed at the individual level, and was estimated with the absolute risk reduction (ARR), relative risk reduction (RRR), number needed to treat (NNT) and by multiple logistic regression analysis. Thirty-two professionals were randomized (19 doctors and 13 nurses); 27 of them recruited 154 patients consecutively (13 professionals in the experimental group recruited 70 patients and 14 professionals in the control group recruited 84 patients) and completed 6 months of follow-up. The mean age of patients was 76 years (68.8% women). A decrease in the average number of medication errors was observed over the period. The reduction was greater in the experimental than in the control group (F=5.109, P=.035): ARR 29% (95% confidence interval [95% CI] 15.0-43.0%), RRR 0.59 (95% CI 0.31-0.76), and NNT 3.5 (95% CI 2.3-6.8). Motivational interviewing is more effective than the usual approach for reducing medication errors in patients over 65 with polypharmacy. Copyright © 2013 Elsevier España, S.L.U. All rights reserved.
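The reported effect measures follow directly from the event rates in the two arms; a minimal sketch, where the arm rates 0.49 and 0.20 are hypothetical values chosen only to reproduce the reported ARR = 29%, RRR ≈ 0.59 and NNT ≈ 3.5 (the abstract does not give the underlying rates):

```python
def risk_measures(rate_control, rate_experimental):
    """Absolute risk reduction, relative risk reduction, and number
    needed to treat, from the event (medication-error) rate in each arm."""
    arr = rate_control - rate_experimental  # absolute risk reduction
    rrr = arr / rate_control                # relative risk reduction
    nnt = 1.0 / arr                         # number needed to treat
    return arr, rrr, nnt

# Illustrative rates only (not from the paper):
arr, rrr, nnt = risk_measures(0.49, 0.20)
```

Note that NNT is simply the reciprocal of ARR, which is why an ARR of 29% yields an NNT of about 3.5.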
Zhang, Jiamei; Wang, Yan; Chen, Xiaoqin
2016-04-01
To evaluate and compare refractive outcomes of moderate- and high-astigmatism correction after wavefront-guided laser in situ keratomileusis (LASIK) and small-incision lenticule extraction (SMILE). This comparative study enrolled a total of 64 eyes that had undergone SMILE (42 eyes) and wavefront-guided LASIK (22 eyes). Preoperative cylindrical diopters were ≤-2.25 D in moderate- and >-2.25 D in high-astigmatism subgroups. The refractive results were analyzed based on the Alpins vector method that included target-induced astigmatism, surgically induced astigmatism, difference vector, correction index, index of success, magnitude of error, angle of error, and flattening index. All subjects completed the 3-month follow-up. No significant differences were found in the target-induced astigmatism, surgically induced astigmatism, and difference vector between SMILE and wavefront-guided LASIK. However, the average angle of error value was -1.00 ± 3.16 after wavefront-guided LASIK and 1.22 ± 3.85 after SMILE with statistical significance (P < 0.05). The absolute angle of error value was statistically correlated with difference vector and index of success after both procedures. In the moderate-astigmatism group, correction index was 1.04 ± 0.15 after wavefront-guided LASIK and 0.88 ± 0.15 after SMILE (P < 0.05). However, in the high-astigmatism group, correction index was 0.87 ± 0.13 after wavefront-guided LASIK and 0.88 ± 0.12 after SMILE (P = 0.889). Both procedures showed preferable outcomes in the correction of moderate and high astigmatism. However, high astigmatism was undercorrected after both procedures. Axial error of astigmatic correction may be one of the potential factors for the undercorrection.
Poster Presentation: Optical Test of NGST Developmental Mirrors
NASA Technical Reports Server (NTRS)
Hadaway, James B.; Geary, Joseph; Reardon, Patrick; Peters, Bruce; Keidel, John; Chavers, Greg
2000-01-01
An Optical Testing System (OTS) has been developed to measure the figure and radius of curvature of NGST developmental mirrors in the vacuum, cryogenic environment of the X-Ray Calibration Facility (XRCF) at Marshall Space Flight Center (MSFC). The OTS consists of a WaveScope Shack-Hartmann sensor from Adaptive Optics Associates as the main instrument, a Point Diffraction Interferometer (PDI), a Point Spread Function (PSF) imager, an alignment system, a Leica Disto Pro distance measurement instrument, and a laser source palette (632.8 nm wavelength) that is fiber-coupled to the sensor instruments. All of the instruments except the laser source palette are located on a single breadboard known as the Wavefront Sensor Pallet (WSP). The WSP sits on top of a 5-DOF motion system located at the center of curvature of the test mirror. Two PCs are used to control the OTS. The error in the figure measurement is dominated by the WaveScope's measurement error. An analysis using the absolute wavefront gradient error of 1/50 wave P-V (at 0.6328 microns) provided by the manufacturer leads to a total surface figure measurement error of approximately 1/100 wave rms. This easily meets the requirement of 1/10 wave P-V. The error in radius of curvature is dominated by the Leica's absolute measurement error of ±1.5 mm and the focus setting error of ±1.4 mm, giving an overall error of ±2 mm. The OTS is currently being used to test the NGST Mirror System Demonstrators (NMSDs) and the Subscale Beryllium Mirror Demonstrator (SBMD).
Is adult gait less susceptible than paediatric gait to hip joint centre regression equation error?
Kiernan, D; Hosking, J; O'Brien, T
2016-03-01
Hip joint centre (HJC) regression equation error during paediatric gait has recently been shown to have clinical significance. For adult gait, it has been inferred that errors in absolute HJC position comparable to those in children may in fact result in less significant kinematic and kinetic error. This study investigated the clinical agreement of three commonly used regression equation sets (Bell et al., Davis et al. and Orthotrak) for adult subjects against the equations of Harrington et al. The relationship between HJC position error and subject size was also investigated for the Davis et al. set. Full 3-dimensional gait analysis was performed on 12 healthy adult subjects, with data for each set compared to Harrington et al. The Gait Profile Score (GPS), Gait Variable Score and GDI-kinetic were used to assess clinical significance, while differences in HJC position between the Davis and Harrington sets were compared to leg length and subject height using regression analysis. A number of statistically significant differences were present in absolute HJC position. However, all sets fell below the clinically significant thresholds (GPS <1.6°, GDI-Kinetic <3.6 points). Linear regression revealed a statistically significant relationship for both increasing leg length and increasing subject height with decreasing error in the anterior/posterior and superior/inferior directions. The results confirm a negligible clinical error for adult subjects, suggesting that any of the examined sets could be used interchangeably. Decreasing error with both increasing leg length and increasing subject height suggests that the Davis set should be used cautiously on smaller subjects. Copyright © 2016 Elsevier B.V. All rights reserved.
Sethuraman, Usha; Kannikeswaran, Nirupama; Murray, Kyle P; Zidan, Marwan A; Chamberlain, James M
2015-06-01
Prescription errors occur frequently in pediatric emergency departments (PEDs). The effect of computerized physician order entry (CPOE) with an electronic medication alert system (EMAS) on these is unknown. The objective was to compare prescription error rates before and after introduction of CPOE with EMAS in a PED. The hypothesis was that CPOE with EMAS would significantly reduce the rate and severity of prescription errors in the PED. A prospective comparison of a sample of outpatient medication prescriptions 5 months before and after CPOE with EMAS implementation (7,268 before and 7,292 after) was performed. Error types and rates, alert types and significance, and physician response were noted. Medication errors were deemed significant if there was a potential to cause life-threatening injury, failure of therapy, or an adverse drug effect. There was a significant reduction in errors per 100 prescriptions (10.4 before vs. 7.3 after; absolute risk reduction = 3.1, 95% confidence interval [CI] = 2.2 to 4.0). Drug dosing error rates decreased from 8 to 5.4 per 100 (absolute risk reduction = 2.6, 95% CI = 1.8 to 3.4). Alerts were generated for 29.6% of prescriptions, with 45% involving drug dose range checking. The sensitivity of CPOE with EMAS in identifying errors in prescriptions was 45.1% (95% CI = 40.8% to 49.6%), and the specificity was 57% (95% CI = 55.6% to 58.5%). Prescribers modified 20% of the dosing alerts, resulting in the error not reaching the patient. Conversely, 11% of true dosing alerts for medication errors were overridden by the prescribers; of the overridden alerts, 88 (11.3%) resulted in medication errors, and 684 (88.6%) were false-positive alerts. CPOE with EMAS was associated with a decrease in overall prescription errors in our PED. Further system refinements are required to reduce the high false-positive alert rates. © 2015 by the Society for Academic Emergency Medicine.
Performance Evaluation of sUAS Equipped with Velodyne HDL-32E LiDAR Sensor
NASA Astrophysics Data System (ADS)
Jozkow, G.; Wieczorek, P.; Karpina, M.; Walicka, A.; Borkowski, A.
2017-08-01
The Velodyne HDL-32E laser scanner is used increasingly often as the main mapping sensor in small commercial UASs. However, there is still little information about the actual accuracy of point clouds collected with such UASs. This work empirically evaluates the accuracy of the point cloud collected with such a UAS. The accuracy assessment was conducted in four aspects: impact of sensors on theoretical point cloud accuracy, trajectory reconstruction quality, and internal and absolute point cloud accuracies. Theoretical point cloud accuracy was evaluated by calculating the 3D position error from the known errors of the sensors used. The quality of trajectory reconstruction was assessed by comparing position and attitude differences between the forward and reverse EKF solutions. Internal and absolute accuracies were evaluated by fitting planes to 8 point cloud samples extracted for planar surfaces. In addition, the absolute accuracy was also determined by calculating 3D point distances between the LiDAR UAS and reference TLS point clouds. The test data consisted of point clouds collected in two separate flights performed over the same area. The experiments showed that in the tested UAS, the trajectory reconstruction, especially the attitude, has a significant impact on point cloud accuracy. The estimated absolute accuracy of the point clouds collected during both test flights was better than 10 cm; thus the investigated UAS fits the mapping-grade category.
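Internal accuracy by plane fitting, as described in this record, reduces to a least-squares plane per sample patch followed by residual statistics. A minimal sketch assuming near-horizontal patches, so the plane can be written z = ax + by + c (the function names are illustrative):

```python
import math

def solve3(m, v):
    """Solve a 3x3 linear system m * x = v by Cramer's rule."""
    def det(a):
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
              - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
              + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))
    d = det(m)
    out = []
    for i in range(3):
        mi = [row[:] for row in m]
        for r in range(3):
            mi[r][i] = v[r]
        out.append(det(mi) / d)
    return out

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through 3-D points
    (valid for patches that are not near-vertical)."""
    # Accumulate the 3x3 normal equations A^T A [a, b, c]^T = A^T z.
    sxx = sxy = syy = sx = sy = sxz = syz = sz = 0.0
    for x, y, z in points:
        sxx += x * x; sxy += x * y; syy += y * y
        sx += x; sy += y
        sxz += x * z; syz += y * z; sz += z
    m = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, float(len(points))]]
    return solve3(m, [sxz, syz, sz])

def rms_residual(points, a, b, c):
    """RMS of vertical residuals: one per-sample internal accuracy measure."""
    n = len(points)
    return math.sqrt(sum((z - (a * x + b * y + c)) ** 2
                         for x, y, z in points) / n)
```

In practice a robust fit (or an orthogonal fit via the patch covariance) would be preferred for noisy LiDAR patches; the sketch above shows only the basic residual-based accuracy idea.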
NASA Technical Reports Server (NTRS)
Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan
2016-01-01
The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as a method to determine the accuracy of climate change predictions. A CLARREO objective is to improve the accuracy of SI-traceable, absolute calibration at infrared and reflected solar wavelengths to reach the on-orbit accuracies required to allow climate change observations to survive data gaps and to observe climate change at the limit of natural variability. Such an effort will also demonstrate National Institute of Standards and Technology (NIST) approaches for use in future spaceborne instruments. The current work describes the results of laboratory and field measurements with the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. SOLARIS allows testing and evaluation of calibration approaches, alternate design and/or implementation approaches, and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. Results of laboratory calibration measurements are provided to demonstrate key assumptions about instrument behavior that are needed to achieve CLARREO's climate measurement requirements. Absolute radiometric response is determined using laser-based calibration sources and applied to direct solar views for comparison with accepted solar irradiance models to demonstrate accuracy values giving confidence in the error budget for the CLARREO reflectance retrieval.
NASA Astrophysics Data System (ADS)
Sadi, Maryam
2018-01-01
In this study, a group method of data handling (GMDH) model has been successfully developed to predict the heat capacity of ionic liquid based nanofluids, considering the reduced temperature, acentric factor and molecular weight of the ionic liquids, and the nanoparticle concentration as input parameters. To accomplish the modeling, 528 experimental data points extracted from the literature were divided into training and testing subsets. The training set was used to estimate the model coefficients and the testing set was applied for model validation. The ability and accuracy of the developed model have been evaluated by comparing model predictions with experimental values using different statistical parameters such as the coefficient of determination, mean square error and mean absolute percentage error. The mean absolute percentage errors of the developed model for the training and testing sets are 1.38% and 1.66%, respectively, which indicates excellent agreement between model predictions and experimental data. The results estimated by the developed GMDH model also exhibit higher accuracy than the available theoretical correlations.
Corsica: A Multi-Mission Absolute Calibration Site
NASA Astrophysics Data System (ADS)
Bonnefond, P.; Exertier, P.; Laurain, O.; Guinle, T.; Femenias, P.
2013-09-01
In collaboration with the CNES and NASA oceanographic projects (TOPEX/Poseidon and Jason), the OCA (Observatoire de la Côte d'Azur) has been developing a verification site in Corsica since 1996, operational since 1998. CALibration/VALidation embraces a wide variety of activities, ranging from the interpretation of information from internal-calibration modes of the sensors to validation of the fully corrected estimates of reflector heights using in situ data. Corsica is now, like the Harvest platform (NASA side) [14], an operating calibration site able to support continuous monitoring with a high level of accuracy: a 'point calibration' which yields instantaneous bias estimates with a 10-day repeatability of 30 mm (standard deviation) and mean errors of 4 mm (standard error). For a 35-day repeatability (ERS, Envisat), due to the shorter time series, the standard error is about double (~7 mm). In this paper, we present updated results of the absolute Sea Surface Height (SSH) biases for TOPEX/Poseidon (T/P), Jason-1, Jason-2, ERS-2 and Envisat.
Artificial neural network modelling of a large-scale wastewater treatment plant operation.
Güçlü, Dünyamin; Dursun, Sükrü
2010-11-01
Artificial Neural Networks (ANNs), a method of artificial intelligence, provide effective predictive models for complex processes. Three independent ANN models trained with the back-propagation algorithm were developed to predict effluent chemical oxygen demand (COD), suspended solids (SS) and aeration tank mixed liquor suspended solids (MLSS) concentrations of the Ankara central wastewater treatment plant. The appropriate architecture of the ANN models was determined through several steps of training and testing of the models. The ANN models yielded satisfactory predictions. The root mean square error, mean absolute error and mean absolute percentage error were 3.23 mg/L, 2.41 mg/L and 5.03% for COD; 1.59 mg/L, 1.21 mg/L and 17.10% for SS; and 52.51 mg/L, 44.91 mg/L and 3.77% for MLSS, respectively, indicating that the developed models could be used efficiently. The results overall also confirm that the ANN modelling approach may have great implementation potential for the simulation, precise performance prediction and process control of wastewater treatment plants.
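The three error measures quoted above (and throughout these abstracts) are easy to pin down precisely; a minimal NumPy sketch with made-up effluent values:

```python
import numpy as np

def mae(obs, pred):
    """Mean absolute error, in the units of the observations."""
    return float(np.mean(np.abs(obs - pred)))

def rmse(obs, pred):
    """Root mean square error, in the units of the observations."""
    return float(np.sqrt(np.mean((obs - pred) ** 2)))

def mape(obs, pred):
    """Mean absolute percentage error, in percent."""
    return float(np.mean(np.abs((obs - pred) / obs)) * 100.0)

# Hypothetical observed vs. predicted COD concentrations (mg/L).
obs = np.array([50.0, 40.0, 60.0, 55.0])
pred = np.array([48.0, 43.0, 58.0, 57.0])
print(mae(obs, pred))   # 2.25
print(rmse(obs, pred))  # ≈ 2.29
print(mape(obs, pred))  # ≈ 4.62
```

Note that MAPE is undefined when any observation is zero, which is why it suits strictly positive quantities such as concentrations.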
Hahn, David K; RaghuVeer, Krishans; Ortiz, J V
2014-05-15
Time-dependent density functional theory (TD-DFT) and electron propagator theory (EPT) are used to calculate the electronic transition energies and ionization energies, respectively, of species containing phosphorus or sulfur. The accuracy of TD-DFT and EPT, in conjunction with various basis sets, is assessed with data from gas-phase spectroscopy. TD-DFT is tested using 11 prominent exchange-correlation functionals on a set of 37 vertical and 19 adiabatic transitions. For vertical transitions, TD-CAM-B3LYP calculations performed with the MG3S basis set are lowest in overall error, having a mean absolute deviation from experiment of 0.22 eV, or 0.23 eV over valence transitions and 0.21 eV over Rydberg transitions. Using a larger basis set, aug-pc3, improves accuracy over the valence transitions via hybrid functionals, but improved accuracy over the Rydberg transitions is only obtained via the BMK functional. For adiabatic transitions, all hybrid functionals paired with the MG3S basis set perform well, and B98 is best, with a mean absolute deviation from experiment of 0.09 eV. The testing of EPT used the Outer Valence Green's Function (OVGF) approximation and the Partial Third Order (P3) approximation on 37 vertical first ionization energies. It is found that OVGF outperforms P3 when basis sets of at least triple-ζ quality in the polarization functions are used. The largest basis set used in this study, aug-pc3, yielded the best mean absolute errors for both methods: 0.08 eV for OVGF and 0.18 eV for P3. The OVGF/6-31+G(2df,p) level of theory is particularly cost-effective, yielding a mean absolute error of 0.11 eV.
NASA Astrophysics Data System (ADS)
Greer, Tyler; Lietz, Christopher B.; Xiang, Feng; Li, Lingjun
2015-01-01
Absolute quantification of protein targets using liquid chromatography-mass spectrometry (LC-MS) is a key component of candidate biomarker validation. One popular method combines multiple reaction monitoring (MRM) on a triple quadrupole instrument with stable isotope-labeled standards (SIS) for absolute quantification (AQUA). LC-MRM AQUA assays are sensitive and specific, but they are also expensive because of the cost of synthesizing stable isotope peptide standards. While the chemical modification approach using mass differential tags for relative and absolute quantification (mTRAQ) represents a more economical approach when quantifying large numbers of peptides, these reagents are costly and still suffer from lower throughput because only two concentration values per peptide can be obtained in a single LC-MS run. Here, we have developed and applied a set of five novel mass difference reagents, isotopic N,N-dimethyl leucine (iDiLeu). These labels contain an amine-reactive triazine ester group, are cost-effective because of their synthetic simplicity, and increase throughput compared with previous LC-MS quantification methods by allowing construction of a four-point standard curve in one run. iDiLeu-labeled peptides show remarkably similar retention time shifts, slightly lower energy thresholds for higher-energy collisional dissociation (HCD) fragmentation, and high quantification accuracy for trypsin-digested protein samples (median errors <15%). By spiking an iDiLeu-labeled neuropeptide, allatostatin, into mouse urine matrix, two quantification methods were validated. The first uses one labeled peptide as an internal standard to normalize labeled peptide peak areas across runs (<19% error), whereas the second enables standard curve creation and analyte quantification in one run (<8% error).
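The one-run four-point standard-curve idea reduces, numerically, to an ordinary linear fit of peak area against known concentration, with the analyte read off the inverse of that fit. All concentrations and peak areas below are invented for illustration; this is not the authors' data-processing pipeline.

```python
import numpy as np

def quantify(known_conc, areas, analyte_area):
    """Fit area = slope * conc + intercept, then invert for the analyte."""
    slope, intercept = np.polyfit(known_conc, areas, 1)
    return (analyte_area - intercept) / slope

conc = np.array([1.0, 5.0, 10.0, 20.0])     # fmol, hypothetical standards
areas = np.array([2.1, 10.2, 19.8, 40.4])   # peak areas, hypothetical
print(round(quantify(conc, areas, 15.0), 2))  # ≈ 7.45 fmol
```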
Unique strain history during ejection in canine left ventricle.
Douglas, A S; Rodriguez, E K; O'Dell, W; Hunter, W C
1991-05-01
Understanding the relationship between structure and function in the heart requires knowledge of the connection between the local behavior of the myocardium (e.g., shortening) and the pumping action of the left ventricle. We asked: how do changes in preload and afterload affect the relationship between local myocardial deformation and ventricular volume? To study this, a set of small radiopaque beads was implanted in approximately 1 cm³ of the left ventricular free wall of the isolated canine heart. Using biplane cineradiography, we tracked the motion of these markers through various cardiac cycles (controlling pre- and afterload), using the relative motion of six markers to quantify the local three-dimensional Lagrangian strain. Two different reference states (used to define the strains) were considered. First, we used the configuration of the heart at end diastole for that particular cardiac cycle to define the individual strains (which gave the local "shortening fraction") and the ejection fraction. Second, we used a single reference state for all cardiac cycles, i.e., the end-diastolic state at maximum volume, to define absolute strains (which gave local fractional length) and the volume fraction. The individual strain versus ejection fraction trajectories were dependent on preload and afterload. For any one heart, however, each component of absolute strain was more tightly correlated with volume fraction. Around each linear regression, the individual measurements of absolute strain scattered with standard errors that averaged less than 7% of their range. Thus the canine hearts examined had a preferred kinematic (shape) history during ejection, different from the kinematics of filling and independent of pre- or afterload and of stroke volume.
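As an illustration of recovering a local Lagrangian (Green) strain tensor from marker coordinates (a generic least-squares sketch, not the authors' six-marker method): fit a deformation gradient F mapping reference to deformed positions, then E = (FᵀF − I)/2. Marker positions below are hypothetical.

```python
import numpy as np

def green_strain(ref, cur):
    """ref, cur: (N, 3) marker coordinates in reference/deformed states."""
    ref0 = ref - ref.mean(axis=0)
    cur0 = cur - cur.mean(axis=0)
    # Least-squares deformation gradient: cur0 ≈ ref0 @ F.T
    ft, *_ = np.linalg.lstsq(ref0, cur0, rcond=None)
    f = ft.T
    return 0.5 * (f.T @ f - np.eye(3))      # Green-Lagrange strain

# Hypothetical markers undergoing a homogeneous stretch:
# 10% shortening along x with bulging along y and z.
ref = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0],
                [0, 0, 1], [1, 1, 0], [1, 0, 1]])
stretch = np.diag([0.9, 1.05, 1.06])
cur = ref @ stretch.T
print(np.round(green_strain(ref, cur), 4))
```

For a pure stretch λ along an axis, the corresponding diagonal entry is (λ² − 1)/2, so the 10% shortening appears as E₁₁ = −0.095.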
Patient-specific cardiac phantom for clinical training and preprocedure surgical planning.
Laing, Justin; Moore, John; Vassallo, Reid; Bainbridge, Daniel; Drangova, Maria; Peters, Terry
2018-04-01
Minimally invasive mitral valve repair procedures including MitraClip® are becoming increasingly common. For cases of complex or diseased anatomy, clinicians may benefit from using a patient-specific cardiac phantom for training, surgical planning, and the validation of devices or techniques. An imaging-compatible cardiac phantom was developed to simulate a MitraClip® procedure. The phantom contained a patient-specific cardiac model manufactured using tissue-mimicking materials. To evaluate accuracy, the patient-specific model was imaged using computed tomography (CT) and segmented, and the resulting point cloud dataset was compared, using absolute distance, to the original patient data. Comparing the molded model point cloud to the original dataset yielded a maximum Euclidean distance error of 7.7 mm, an average error of 0.98 mm, and a standard deviation of 0.91 mm. The phantom was validated using a MitraClip® device to ensure anatomical features and tools are identifiable under image guidance. Patient-specific cardiac phantoms may allow surgical complications to be accounted for during preoperative planning. The information gained by clinicians involved in planning and performing the procedure should lead to shorter procedural times and better outcomes for patients.
NASA Astrophysics Data System (ADS)
Cawiding, Olive R.; Natividad, Gina May R.; Bato, Crisostomo V.; Addawe, Rizavel C.
2017-11-01
The prevalence of typhoid fever in developing countries such as the Philippines calls for accurate forecasting of the disease, which would be of great assistance in strategic disease prevention. This paper develops models that predict the behavior of typhoid fever incidence based on the monthly incidence in the provinces of the Cordillera Administrative Region from 2010 to 2015 using univariate time series analysis. The data were obtained from the Cordillera Office of the Department of Health (DOH-CAR). Seasonal autoregressive integrated moving average (SARIMA) models were used to incorporate the seasonality of the data. A comparison of the obtained models revealed that the SARIMA(1,1,7)(0,0,1)12 model with a fixed coefficient at the seventh lag produces the smallest root mean square error (RMSE), mean absolute error (MAE), Akaike Information Criterion (AIC), and Bayesian Information Criterion (BIC). The model suggested that in 2016 the number of cases would increase from July to September and drop in December. This was then validated using the data collected from January 2016 to December 2016.
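The seasonal structure SARIMA exploits can be sketched in plain NumPy: difference monthly counts at lag 12 to remove seasonality, fit a simple AR(1) to the differenced series by least squares, then invert the differencing to forecast the next month. This is a toy stand-in for the paper's SARIMA(1,1,7)(0,0,1)12 fit, using simulated data.

```python
import numpy as np

def seasonal_ar1_forecast(y, season=12):
    """One-step forecast: seasonal differencing at `season`, then AR(1)."""
    y = np.asarray(y, dtype=float)
    d = y[season:] - y[:-season]            # seasonal differencing
    x, t = d[:-1], d[1:]
    sxx = float(np.dot(x, x))
    phi = float(np.dot(x, t)) / sxx if sxx > 0 else 0.0  # OLS AR(1)
    d_next = phi * d[-1]
    return y[-season] + d_next              # invert the seasonal difference

# Simulated monthly incidence: level 50, annual cycle, noise.
rng = np.random.default_rng(1)
months = np.arange(120)
y = 50 + 20 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 2, 120)
print(round(seasonal_ar1_forecast(y), 1))
```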
NASA Astrophysics Data System (ADS)
Wang, Wen-Chuan; Chau, Kwok-Wing; Cheng, Chun-Tian; Qiu, Lin
2009-08-01
Developing a hydrological forecasting model based on past records is crucial to effective hydropower reservoir management and scheduling. Traditionally, time series analysis and modeling is used to build mathematical models for generating hydrologic records in hydrology and water resources. Artificial intelligence (AI), as a branch of computer science, is capable of analyzing long-series and large-scale hydrological data, and applying AI technology to hydrological forecasting modeling has become a leading research topic in recent years. In this paper, autoregressive moving-average (ARMA) models, artificial neural network (ANN) approaches, adaptive neural-based fuzzy inference system (ANFIS) techniques, genetic programming (GP) models and the support vector machine (SVM) method are examined using long-term observations of monthly river flow discharges. Four quantitative standard statistical performance evaluation measures, the coefficient of correlation (R), the Nash-Sutcliffe efficiency coefficient (E), the root mean squared error (RMSE) and the mean absolute percentage error (MAPE), are employed to evaluate the performances of the various models developed. Two case study river sites are provided to illustrate their respective performances. The results indicate that the best performance is obtained by ANFIS, GP and SVM, in terms of the different evaluation criteria, during both the training and validation phases.
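Of the four measures listed, the Nash-Sutcliffe efficiency is the least standard: E = 1 means a perfect fit and E = 0 means the model is no better than predicting the observed mean. A minimal implementation with toy monthly flows:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """E = 1 - SSE(model) / SSE(mean predictor)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def correlation(obs, sim):
    """Pearson correlation coefficient R."""
    return float(np.corrcoef(obs, sim)[0, 1])

# Toy observed vs. simulated monthly discharges (m^3/s).
obs = [120.0, 95.0, 80.0, 110.0, 140.0]
sim = [118.0, 100.0, 78.0, 105.0, 138.0]
print(round(nash_sutcliffe(obs, sim), 3))  # 0.971
print(round(correlation(obs, sim), 3))
```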
Aging and the Visual Perception of Motion Direction: Solving the Aperture Problem.
Shain, Lindsey M; Norman, J Farley
2018-07-01
An experiment required younger and older adults to estimate coherent visual motion direction from multiple motion signals, where each motion signal was locally ambiguous with respect to the true direction of pattern motion. Thus, accurate performance required the successful integration of motion signals across space (i.e., accurate performance required solution of the aperture problem). The observers viewed arrays of either 64 or 9 moving line segments; because these lines moved behind apertures, their individual local motions were ambiguous with respect to direction (i.e., were subject to the aperture problem). Following 2.4 seconds of pattern motion on each trial (true motion directions ranged over the entire 360° in the fronto-parallel plane), the observers estimated the coherent direction of motion. There was an effect of direction, such that cardinal directions of pattern motion were judged with less error than oblique directions. In addition, a large effect of aging occurred: the average absolute errors of the older observers were 46% and 30.4% higher in magnitude than those exhibited by the younger observers for the 64 and 9 aperture conditions, respectively. Finally, the observers' precision deteriorated markedly as the number of apertures was reduced from 64 to 9.
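Because the true directions span the full 360°, absolute estimation error has to be computed on the circle: an estimate of 358° for a true direction of 2° is a 4° error, not a 356° one. A minimal helper (not the study's analysis code):

```python
def angular_error(true_deg, est_deg):
    """Smallest absolute angle between two directions, in degrees."""
    diff = abs(true_deg - est_deg) % 360.0
    return min(diff, 360.0 - diff)

print(angular_error(2.0, 358.0))   # 4.0
print(angular_error(90.0, 110.0))  # 20.0
```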
Predicting online ratings based on the opinion spreading process
NASA Astrophysics Data System (ADS)
He, Xing-Sheng; Zhou, Ming-Yang; Zhuo, Zhao; Fu, Zhong-Qian; Liu, Jian-Guo
2015-10-01
Predicting users' online ratings is a long-standing challenge that has drawn much attention. In this paper, we present a rating prediction method combining the user opinion spreading process with the collaborative filtering algorithm, where user similarity is defined by measuring the amount of opinion a user transfers to another based on the primitive user-item rating matrix. The proposed method produces a more precise rating prediction for each unrated user-item pair. In addition, we introduce a tunable parameter λ to regulate the preferential diffusion relevant to the degree of both the opinion sender and receiver. Numerical results for the Movielens and Netflix data sets show that this algorithm is more accurate than the standard user-based collaborative filtering algorithm using Cosine and Pearson correlation, without increasing computational complexity. By tuning λ, our method can further boost the prediction accuracy as measured by Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE). In the optimal cases, accuracy is improved over the item-average method by 11.26% (MAE) and 8.84% (RMSE) on Movielens, and by 13.49% (MAE) and 10.52% (RMSE) on Netflix.
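The baseline the authors compare against, standard user-based collaborative filtering with cosine similarity on the rating matrix, can be sketched as follows (the opinion-spreading similarity itself is not reproduced here; the rating matrix is a toy, with 0 meaning "unrated"):

```python
import numpy as np

def predict_rating(ratings, user, item):
    """Similarity-weighted average of other users' ratings for `item`."""
    raters = np.where(ratings[:, item] > 0)[0]
    raters = raters[raters != user]
    norms = np.linalg.norm(ratings, axis=1)
    # Cosine similarity between the target user's row and each rater's row.
    sims = np.array([ratings[user] @ ratings[v] / (norms[user] * norms[v])
                     for v in raters])
    return float(sims @ ratings[raters, item] / sims.sum())

r = np.array([[5, 3, 0, 1],
              [4, 0, 4, 1],
              [1, 1, 5, 5],
              [5, 4, 0, 1]], dtype=float)
print(round(predict_rating(r, 0, 2), 2))
```

Here user 0's unknown rating for item 2 is pulled toward user 1's rating (4) more than user 2's (5), because user 1's rating vector is more similar to user 0's.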
Daboul, Amro; Ivanovska, Tatyana; Bülow, Robin; Biffar, Reiner; Cardini, Andrea
2018-01-01
Using 3D anatomical landmarks from adult human head MRIs, we assessed the magnitude of inter-operator differences in Procrustes-based geometric morphometric analyses. An in depth analysis of both absolute and relative error was performed in a subsample of individuals with replicated digitization by three different operators. The effect of inter-operator differences was also explored in a large sample of more than 900 individuals. Although absolute error was not unusual for MRI measurements, including bone landmarks, shape was particularly affected by differences among operators, with up to more than 30% of sample variation accounted for by this type of error. The magnitude of the bias was such that it dominated the main pattern of bone and total (all landmarks included) shape variation, largely surpassing the effect of sex differences between hundreds of men and women. In contrast, however, we found higher reproducibility in soft-tissue nasal landmarks, despite relatively larger errors in estimates of nasal size. Our study exemplifies the assessment of measurement error using geometric morphometrics on landmarks from MRIs and stresses the importance of relating it to total sample variance within the specific methodological framework being used. In summary, precise landmarks may not necessarily imply negligible errors, especially in shape data; indeed, size and shape may be differentially impacted by measurement error and different types of landmarks may have relatively larger or smaller errors. Importantly, and consistently with other recent studies using geometric morphometrics on digital images (which, however, were not specific to MRI data), this study showed that inter-operator biases can be a major source of error in the analysis of large samples, as those that are becoming increasingly common in the 'era of big data'.
A New Black Carbon Sensor for Dense Air Quality Monitoring Networks
Caubel, Julien J.; Cados, Troy E.; Kirchstetter, Thomas W.
2018-01-01
Low-cost air pollution sensors are emerging and increasingly being deployed in densely distributed wireless networks that provide more spatial resolution than is typical in traditional monitoring of ambient air quality. However, a low-cost option to measure black carbon (BC)—a major component of particulate matter pollution associated with adverse human health risks—is missing. This paper presents a new BC sensor designed to fill this gap, the Aerosol Black Carbon Detector (ABCD), which incorporates a compact weatherproof enclosure, solar-powered rechargeable battery, and cellular communication to enable long-term, remote operation. This paper also demonstrates a data processing methodology that reduces the ABCD's sensitivity to ambient temperature fluctuations, and therefore improves measurement performance in unconditioned operating environments (e.g., outdoors). A fleet of over 100 ABCDs was operated outdoors in collocation with a commercial BC instrument (Magee Scientific, Model AE33) housed inside a regulatory air quality monitoring station. The measurement performance of the 105 ABCDs is comparable to the AE33. The fleet-average precision and accuracy, expressed in terms of mean absolute percentage error, are 9.2 ± 0.8% (relative to the fleet average data) and 24.6 ± 0.9% (relative to the AE33 data), respectively (fleet-average ± 90% confidence interval).
3D prostate MR-TRUS non-rigid registration using dual optimization with volume-preserving constraint
NASA Astrophysics Data System (ADS)
Qiu, Wu; Yuan, Jing; Fenster, Aaron
2016-03-01
We introduce an efficient and novel convex optimization-based approach to the challenging non-rigid registration of 3D prostate magnetic resonance (MR) and transrectal ultrasound (TRUS) images, which incorporates a new volume-preserving constraint to substantially improve the accuracy of targeting suspicious regions during 3D TRUS-guided prostate biopsy. In particular, we propose a fast sequential convex optimization scheme to efficiently minimize the employed highly nonlinear image fidelity function using the robust multi-channel modality independent neighborhood descriptor (MIND) across the two modalities of MR and TRUS. The registration accuracy was evaluated using 10 patient images by calculating the target registration error (TRE) using manually identified corresponding intrinsic fiducials in the whole prostate gland. We also compared the MR and TRUS manually segmented prostate surfaces in the registered images in terms of the Dice similarity coefficient (DSC), mean absolute surface distance (MAD), and maximum absolute surface distance (MAXD). Experimental results showed that the proposed method with the introduced volume-preserving prior significantly improves the registration accuracy compared to the method without the volume-preserving constraint, yielding an overall mean TRE of 2.0 ± 0.7 mm, an average DSC of 86.5 ± 3.5%, a MAD of 1.4 ± 0.6 mm and a MAXD of 6.5 ± 3.5 mm.
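The Dice similarity coefficient used above is simply twice the overlap between two segmentations divided by the sum of their sizes, DSC = 2|A ∩ B| / (|A| + |B|). A toy-mask sketch:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Two overlapping 6x6 squares on a 10x10 grid (36 pixels each).
a = np.zeros((10, 10), dtype=int)
a[2:8, 2:8] = 1
b = np.zeros((10, 10), dtype=int)
b[3:9, 3:9] = 1
print(round(dice(a, b), 3))   # 2*25 / 72 ≈ 0.694
```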
Deng, Nanjie; Cui, Di; Zhang, Bin W; Xia, Junchao; Cruz, Jeffrey; Levy, Ronald
2018-06-13
Accurately predicting absolute binding free energies of protein-ligand complexes is important as a fundamental problem in both computational biophysics and pharmaceutical discovery. Calculating binding free energies for charged ligands is generally considered to be challenging because of the strong electrostatic interactions between the ligand and its environment in aqueous solution. In this work, we compare the performance of the potential of mean force (PMF) method and the double decoupling method (DDM) for computing absolute binding free energies for charged ligands. We first clarify an unresolved issue concerning the explicit use of the binding site volume to define the complexed state in DDM together with the use of harmonic restraints. We also provide an alternative derivation for the formula for absolute binding free energy using the PMF approach. We use these formulas to compute the binding free energy of charged ligands at an allosteric site of HIV-1 integrase, which has emerged in recent years as a promising target for developing antiviral therapy. As compared with the experimental results, the absolute binding free energies obtained by using the PMF approach show unsigned errors of 1.5-3.4 kcal mol⁻¹, which are somewhat better than the results from DDM (unsigned errors of 1.6-4.3 kcal mol⁻¹) using the same amount of CPU time. According to the DDM decomposition of the binding free energy, the ligand binding appears to be dominated by nonpolar interactions despite the presence of very large and favorable intermolecular ligand-receptor electrostatic interactions, which are almost completely cancelled out by the equally large free energy cost of desolvation of the charged moiety of the ligands in solution. We discuss the relative strengths of computing absolute binding free energies using the alchemical and physical pathway methods.
Correcting for Optimistic Prediction in Small Data Sets
Smith, Gordon C. S.; Seaman, Shaun R.; Wood, Angela M.; Royston, Patrick; White, Ian R.
2014-01-01
The C statistic is a commonly reported measure of screening test performance. Optimistic estimation of the C statistic is a frequent problem because of overfitting of statistical models in small data sets, and methods exist to correct for this issue. However, many studies do not use such methods, and those that do correct for optimism use diverse methods, some of which are known to be biased. We used clinical data sets (United Kingdom Down syndrome screening data from Glasgow (1991–2003), Edinburgh (1999–2003), and Cambridge (1990–2006), as well as Scottish national pregnancy discharge data (2004–2007)) to evaluate different approaches to adjustment for optimism. We found that sample splitting, cross-validation without replication, and leave-1-out cross-validation produced optimism-adjusted estimates of the C statistic that were biased and/or associated with greater absolute error than other available methods. Cross-validation with replication, bootstrapping, and a new method (leave-pair-out cross-validation) all generated unbiased optimism-adjusted estimates of the C statistic and had similar absolute errors in the clinical data set. Larger simulation studies confirmed that all 3 methods performed similarly with 10 or more events per variable, or when the C statistic was 0.9 or greater. However, with lower events per variable or lower C statistics, bootstrapping tended to be optimistic but with lower absolute and mean squared errors than both methods of cross-validation.
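Leave-pair-out cross-validation of the C statistic holds out every (event, non-event) pair, refits the model on the remaining data, and scores C as the fraction of pairs where the held-out event ranks higher. The sketch below uses a deliberately simple trainable score (a mean-difference discriminant) purely for illustration; it is not the method's required model.

```python
import numpy as np

def fit_scores(x_train, y_train, x_test):
    """Toy model: project onto the difference of class means."""
    w = x_train[y_train == 1].mean(axis=0) - x_train[y_train == 0].mean(axis=0)
    return x_test @ w

def lpo_cv_c_statistic(x, y):
    """Leave-pair-out CV estimate of the C statistic (ties count 0.5)."""
    events = np.where(y == 1)[0]
    nonevents = np.where(y == 0)[0]
    wins = 0.0
    for i in events:
        for j in nonevents:
            keep = np.ones(len(y), dtype=bool)
            keep[[i, j]] = False               # hold out one pair
            s = fit_scores(x[keep], y[keep], x[[i, j]])
            wins += 1.0 if s[0] > s[1] else (0.5 if s[0] == s[1] else 0.0)
    return wins / (len(events) * len(nonevents))

rng = np.random.default_rng(2)
x = np.vstack([rng.normal(1.0, 1, (20, 2)), rng.normal(0.0, 1, (20, 2))])
y = np.array([1] * 20 + [0] * 20)
print(round(lpo_cv_c_statistic(x, y), 3))
```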
Modelling and Predicting Backstroke Start Performance Using Non-Linear and Linear Models.
de Jesus, Karla; Ayala, Helon V H; de Jesus, Kelly; Coelho, Leandro Dos S; Medeiros, Alexandre I A; Abraldes, José A; Vaz, Mário A P; Fernandes, Ricardo J; Vilas-Boas, João Paulo
2018-03-01
Our aim was to compare non-linear and linear mathematical model responses for backstroke start performance prediction. Ten swimmers randomly completed eight 15 m backstroke starts with feet over the wedge, four with hands on the highest horizontal handgrip and four on the vertical handgrip. Swimmers were videotaped using a dual-media camera set-up, with the starts performed over an instrumented block with four force plates. Artificial neural networks were applied to predict the 5 m start time from kinematic and kinetic variables, with accuracy assessed using the mean absolute percentage error. Artificial neural networks predicted start time more robustly than the linear model with respect to generalizing from the training to the validation dataset for the vertical handgrip (3.95 ± 1.67 vs. 5.92 ± 3.27%). Artificial neural networks obtained a smaller mean absolute percentage error than the linear model for the horizontal (0.43 ± 0.19 vs. 0.98 ± 0.19%) and vertical handgrip (0.45 ± 0.19 vs. 1.38 ± 0.30%) using all input data. The best artificial neural network validation revealed a smaller mean absolute error than the linear model for the horizontal (0.007 vs. 0.04 s) and vertical handgrip (0.01 vs. 0.03 s). Artificial neural networks should be used for backstroke 5 m start time prediction given the quite small differences among elite-level performances.
Forecasting air quality time series using deep learning.
Freeman, Brian S; Taylor, Graham; Gharabaghi, Bahram; Thé, Jesse
2018-04-13
This paper presents one of the first applications of deep learning (DL) techniques to predict air pollution time series. Air quality management relies extensively on time series data captured at air monitoring stations as the basis of identifying population exposure to airborne pollutants and determining compliance with local ambient air standards. In this paper, 8-hr averaged surface ozone (O3) concentrations were predicted using deep learning consisting of a recurrent neural network (RNN) with long short-term memory (LSTM). Hourly air quality and meteorological data were used to train and forecast values up to 72 hours with low error rates. The LSTM was able to forecast the duration of continuous O3 exceedances as well. Prior to training the network, the dataset was reviewed for missing data and outliers. Missing data were imputed using a novel technique that averaged gaps less than eight time steps with incremental steps based on first-order differences of neighboring time periods. Data were then used to train decision trees to evaluate input feature importance over different time prediction horizons. The number of features used to train the LSTM model was reduced from 25 features to 5 features, resulting in improved accuracy as measured by Mean Absolute Error (MAE). Parameter sensitivity analysis showed that look-back nodes associated with the RNN are a significant source of error if not aligned with the prediction horizon. Overall, MAEs of less than 2 were calculated for predictions out to 72 hours. Novel deep learning techniques were used to train an 8-hour averaged ozone forecast model. Missing data and outliers within the captured data set were replaced using a new imputation method that generated calculated values closer to the expected value based on the time and season. Decision trees were used to identify input variables with the greatest importance.
The methods presented in this paper allow air managers to forecast long range air pollution concentration while only monitoring key parameters and without transforming the data set in its entirety, thus allowing real time inputs and continuous prediction.
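The abstract does not fully specify the imputation scheme; the sketch below illustrates only the general idea of filling short gaps (fewer than eight time steps) by stepping linearly between the neighboring observed values, leaving longer gaps untouched. The function name and details are assumptions, not the authors' algorithm:

```python
def fill_short_gaps(series, max_gap=8):
    """Fill interior runs of None shorter than max_gap by linear
    increments between the bounding observed values; longer gaps and
    gaps at the edges of the series are left as-is."""
    out = list(series)
    i = 0
    while i < len(out):
        if out[i] is None:
            j = i
            while j < len(out) and out[j] is None:
                j += 1                     # j is the first observed index after the gap
            if 0 < i and j < len(out) and (j - i) < max_gap:
                step = (out[j] - out[i - 1]) / (j - i + 1)
                for k in range(i, j):
                    out[k] = out[i - 1] + step * (k - i + 1)
            i = j
        else:
            i += 1
    return out
```

For example, `fill_short_gaps([1, None, None, 4])` steps by 1 across the gap, while a nine-sample gap is left unfilled for a downstream method to handle.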
NASA Astrophysics Data System (ADS)
Mercer, Jason J.; Westbrook, Cherie J.
2016-11-01
Microform is important in understanding wetland functions and processes. But collecting imagery of and mapping the physical structure of peatlands is often expensive and requires specialized equipment. We assessed the utility of coupling computer vision-based structure from motion with multiview stereo photogrammetry (SfM-MVS) and ground-based photos to map peatland topography. The SfM-MVS technique was tested on an alpine peatland in Banff National Park, Canada, and guidance was provided on minimizing errors. We found that coupling SfM-MVS with ground-based photos taken with a point-and-shoot camera is a viable and competitive technique for generating ultrahigh-resolution elevations (i.e., <0.01 m, mean absolute error of 0.083 m). In evaluating 100+ viable SfM-MVS data collection and processing scenarios, vegetation was found to considerably influence accuracy. Vegetation class, when accounted for, reduced absolute error by as much as 50%. The logistic flexibility of ground-based SfM-MVS paired with its high resolution, low error, and low cost makes it a research area worth developing as well as a useful addition to the wetland scientists' toolkit.
NASA Astrophysics Data System (ADS)
Romano, M.; Mays, M. L.; Taktakishvili, A.; MacNeice, P. J.; Zheng, Y.; Pulkkinen, A. A.; Kuznetsova, M. M.; Odstrcil, D.
2013-12-01
Modeling coronal mass ejections (CMEs) is of great interest to the space weather research and forecasting communities. We present recent validation work of real-time CME arrival time predictions at different satellites using the WSA-ENLIL+Cone three-dimensional MHD heliospheric model available at the Community Coordinated Modeling Center (CCMC) and performed by the Space Weather Research Center (SWRC). SWRC is an in-house research-based operations team at the CCMC which provides interplanetary space weather forecasting for NASA's robotic missions and performs real-time model validation. The quality of model operation is evaluated by comparing its output to a measurable parameter of interest such as the CME arrival time and geomagnetic storm strength. The Kp index is calculated from the relation given in Newell et al. (2007), using solar wind parameters predicted by the WSA-ENLIL+Cone model at Earth. The CME arrival time error is defined as the difference between the predicted arrival time and the observed in-situ CME shock arrival time at the ACE, STEREO A, or STEREO B spacecraft. This study includes all real-time WSA-ENLIL+Cone model simulations performed from June 2011 to June 2013 (over 400 runs) at the CCMC/SWRC. We report hit, miss, false alarm, and correct rejection statistics for all three spacecraft. For hits we show the average absolute CME arrival time error, and the dependence of this error on CME input parameters such as speed, width, and direction. We also present the predicted geomagnetic storm strength (using the Kp index) error for Earth-directed CMEs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morley, Steven Karl
This report reviews existing literature describing forecast accuracy metrics, concentrating on those based on relative errors and percentage errors. We then review how the most common of these metrics, the mean absolute percentage error (MAPE), has been applied in recent radiation belt modeling literature. Finally, we describe metrics based on the ratios of predicted to observed values (the accuracy ratio) that address the drawbacks inherent in using MAPE. Specifically, we define and recommend the median log accuracy ratio as a measure of bias and the median symmetric accuracy as a measure of accuracy.
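The measures this report recommends follow standard definitions and are simple to compute. The sketch below (plain Python, illustrative function names) contrasts MAPE with the median log accuracy ratio (bias) and the median symmetric accuracy; both ratio-based measures require strictly positive observed and predicted values:

```python
import math

def mape(obs, pred):
    """Mean absolute percentage error (%). Undefined when any
    observation is zero, and asymmetric: halving and doubling the
    truth are penalized differently."""
    return 100.0 * sum(abs((p - o) / o) for o, p in zip(obs, pred)) / len(obs)

def median(v):
    s = sorted(v)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

def median_log_accuracy_ratio(obs, pred):
    """Median of log(pred/obs): a robust bias measure (0 = unbiased,
    positive = overprediction, negative = underprediction)."""
    return median([math.log(p / o) for o, p in zip(obs, pred)])

def median_symmetric_accuracy(obs, pred):
    """100*(exp(median|log(pred/obs)|) - 1): a percentage-style
    accuracy that treats over- and under-prediction symmetrically."""
    q = median([abs(math.log(p / o)) for o, p in zip(obs, pred)])
    return 100.0 * (math.exp(q) - 1.0)
```

Overpredicting everywhere by a factor of 2 and underpredicting by a factor of 2 give the same median symmetric accuracy (100%), whereas MAPE reports 100% and 50% respectively, which is the asymmetry the report warns about.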
Passive quantum error correction of linear optics networks through error averaging
NASA Astrophysics Data System (ADS)
Marshman, Ryan J.; Lund, Austin P.; Rohde, Peter P.; Ralph, Timothy C.
2018-02-01
We propose and investigate a method of error detection and noise correction for bosonic linear networks using a method of unitary averaging. The proposed error averaging does not rely on ancillary photons or control and feedforward correction circuits, remaining entirely passive in its operation. We construct a general mathematical framework for this technique and then give a series of proof of principle examples including numerical analysis. Two methods for the construction of averaging are then compared to determine the most effective manner of implementation and probe the related error thresholds. Finally we discuss some of the potential uses of this scheme.
Use of scan overlap redundancy to enhance multispectral aircraft scanner data
NASA Technical Reports Server (NTRS)
Lindenlaub, J. C.; Keat, J.
1973-01-01
Two criteria were suggested for optimizing the resolution error versus signal-to-noise-ratio tradeoff. The first criterion uses equal weighting coefficients and chooses n, the number of lines averaged, so as to make the average resolution error equal to the noise error. The second criterion adjusts both the number and relative sizes of the weighting coefficients so as to minimize the total error (resolution error plus noise error). The optimum set of coefficients depends upon the geometry of the resolution element, the number of redundant scan lines, the scan line increment, and the original signal-to-noise ratio of the channel. Programs were developed to find the optimum number and relative weights of the averaging coefficients. A working definition of signal-to-noise ratio was given and used to try line averaging on a typical set of data. Line averaging was evaluated only with respect to its effect on classification accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takahashi, Y; National Cancer Center, Kashiwa, Chiba; Tachibana, H
Purpose: Total body irradiation (TBI) and total marrow irradiation (TMI) using Tomotherapy have been reported. A gantry-based linear accelerator uses one isocenter during one rotational irradiation; thus, 3–5 isocenter points must be used for a whole VMAT-TBI plan while smoothing out the junctional dose distribution. IGRT provides accurate and precise patient setup for the multiple junctions; however, some setup errors inevitably occur and affect the accuracy of the dose distribution in these areas. In this study, we evaluated the robustness of VMAT-TBI against patient setup errors. Methods: VMAT-TBI planning was performed on an adult whole-body human phantom using Eclipse. Eight full arcs with four isocenter points using 6MV-X were used to cover the entire body. The dose distribution was optimized using two structures, the patient's body (as the PTV) and the lungs. Each pair of arcs shared one isocenter, and adjacent arc pairs overlapped by 5 cm. Absolute point-dose measurements with an ionization chamber and planar relative dose distribution measurements with film were performed in the junctional regions using a water-equivalent slab phantom. In the measurements, setup errors from −5 mm to +5 mm were introduced. Results: The chamber measurements showed that the deviations were within ±3% when the setup errors were within ±3 mm. In the planar evaluation, the gamma pass ratio (3%/2 mm) exceeded 90% when the errors were within ±3 mm. However, there were hot/cold areas at the edge of the junction even with an acceptable gamma pass ratio. A 5 mm setup error caused larger hot and cold areas, and the dosimetrically acceptable areas in the overlapped regions decreased. Conclusion: VMAT-TBI is clinically acceptable when the patient setup error is within ±3 mm. Averaging effects from random patient setup errors would help blur the hot/cold areas in the junction.
Kim, Yongbok; Modrick, Joseph M.; Pennington, Edward C.
2016-01-01
The objective of this work is to present commissioning procedures to clinically implement a three‐dimensional (3D), image‐based, treatment‐planning system (TPS) for high‐dose‐rate (HDR) brachytherapy (BT) for gynecological (GYN) cancer. The physical dimensions of the GYN applicators and their values in the virtual applicator library agreed within 0.4 mm of their nominal values. Reconstruction uncertainties of the titanium tandem and ovoids (T&O) were less than 0.4 mm in CT phantom studies and on average between 0.8‐1.0 mm on MRI when compared with X‐rays. In‐house software, HDRCalculator, was developed to check HDR plan parameters, such as independently verifying active tandem or cylinder probe length, ovoid or cylinder size, source calibration and treatment date, and differences between average Point A dose and prescription dose. Dose‐volume histograms were validated using another independent TPS. Comprehensive procedures to commission volume optimization algorithms and processes in 3D image‐based planning were presented. For the difference between line and volume optimizations, the average absolute differences as a percentage were 1.4% for total reference air KERMA (TRAK) and 1.1% for Point A dose. Volume optimization consistency tests between versions resulted in average absolute differences of 0.2% for TRAK and 0.9 s (0.2%) for total treatment time. The data revealed that the optimizer should run for at least 1 min in order to avoid dwell time changes of more than 0.6%. For clinical GYN T&O cases, three different volume optimization techniques (graphical optimization, pure inverse planning, and hybrid inverse optimization) were investigated by comparing them against a conventional Point A technique. End‐to‐end testing was performed using a T&O phantom to ensure no errors or inconsistencies occurred from imaging through to planning and delivery.
The proposed commissioning procedures provide a clinically safe implementation technique for 3D image‐based TPS for HDR BT for GYN cancer. PACS number(s): 87.55.D‐ PMID:27074463
Estimates of the absolute error and a scheme for an approximate solution to scheduling problems
NASA Astrophysics Data System (ADS)
Lazarev, A. A.
2009-02-01
An approach is proposed for estimating absolute errors and finding approximate solutions to classical NP-hard scheduling problems: minimizing the maximum lateness on one or many machines, and minimizing the makespan. The concept of a metric (distance) between instances of the problem is introduced. The idea behind the approach is, given a problem instance, to construct another instance, at the minimum distance from the initial one in the introduced metric, for which an optimal or approximate solution can be found. Instead of solving the original problem (instance), a set of approximating polynomially/pseudopolynomially solvable problems (instances) is considered, the instance at the minimum distance from the given one is chosen, and the resulting schedule is then applied to the original instance.
Lebel, Karina; Boissy, Patrick; Hamel, Mathieu; Duval, Christian
2013-01-01
Background Inertial measurement of motion with Attitude and Heading Reference Systems (AHRS) is emerging as an alternative to 3D motion capture systems in biomechanics. The objectives of this study are: 1) to describe the absolute and relative accuracy of multiple units of commercially available AHRS under various types of motion; and 2) to evaluate the effect of motion velocity on the accuracy of these measurements. Methods The criterion validity of accuracy was established under controlled conditions using an instrumented Gimbal table. AHRS modules were carefully attached to the center plate of the Gimbal table and put through experimental static and dynamic conditions. Static and absolute accuracy was assessed by comparing the AHRS orientation measurement to those obtained using an optical gold standard. Relative accuracy was assessed by measuring the variation in relative orientation between modules during trials. Findings Evaluated AHRS systems demonstrated good absolute static accuracy (mean error < 0.5°) and clinically acceptable absolute accuracy under conditions of slow motion (mean error between 0.5° and 3.1°). In slow motions, relative accuracy varied from 2° to 7° depending on the type of AHRS and the type of rotation. Absolute and relative accuracy were significantly affected (p<0.05) by velocity during sustained motions. The extent of that effect varied across AHRS. Interpretation Absolute and relative accuracy of AHRS are affected by environmental magnetic perturbations and conditions of motion. Relative accuracy of AHRS is mostly affected by the ability of all modules to locate the same global reference coordinate system at all times. Conclusions Existing AHRS systems can be considered for use in clinical biomechanics under constrained conditions of use. 
While their individual capacity to track absolute motion is relatively consistent, the use of multiple AHRS modules to compute relative motion between rigid bodies needs to be optimized according to the conditions of operation. PMID:24260324
NASA Astrophysics Data System (ADS)
Gao, Jing; Burt, James E.
2017-12-01
This study investigates the usefulness of a per-pixel bias-variance error decomposition (BVD) for understanding and improving spatially-explicit data-driven models of continuous variables in environmental remote sensing (ERS). BVD is a model evaluation method that originated in machine learning and has not been examined for ERS applications. Demonstrated with a showcase regression tree model mapping land imperviousness (0-100%) using Landsat images, our results showed that BVD can reveal sources of estimation errors, map how these sources vary across space, reveal the effects of various model characteristics on estimation accuracy, and enable in-depth comparison of different error metrics. Specifically, BVD bias maps can help analysts identify and delineate model spatial non-stationarity; BVD variance maps can indicate potential effects of ensemble methods (e.g. bagging), and inform efficient training sample allocation: training samples should capture the full complexity of the modeled process, and more samples should be allocated to regions with more complex underlying processes rather than regions covering larger areas. Through examining the relationships between model characteristics and their effects on estimation accuracy revealed by BVD for both absolute and squared errors (i.e. error is the absolute or the squared value of the difference between observation and estimate), we found that the two error metrics embody different diagnostic emphases, can lead to different conclusions about the same model, and may suggest different solutions for performance improvement. We emphasize BVD's strength in revealing the connection between model characteristics and estimation accuracy, as understanding this relationship empowers analysts to effectively steer performance through model adjustments.
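The per-point decomposition that BVD rests on is the identity: expected squared error equals squared bias plus variance at each pixel, estimated over an ensemble of predictions (for example, models fit to bootstrap resamples of the training data). A minimal sketch, with illustrative names rather than the authors' implementation:

```python
def bias_variance_decomposition(y_true, predictions):
    """Per-point squared-error decomposition over an ensemble of
    model predictions. `predictions` is a list of prediction vectors,
    one per ensemble member, each aligned with y_true. For each point,
    mse == bias2 + variance holds exactly (irreducible noise is not
    separable without repeated observations of the same point)."""
    results = []
    for i, y in enumerate(y_true):
        preds = [p[i] for p in predictions]
        mean_pred = sum(preds) / len(preds)
        bias2 = (mean_pred - y) ** 2
        variance = sum((p - mean_pred) ** 2 for p in preds) / len(preds)
        mse = sum((p - y) ** 2 for p in preds) / len(preds)
        results.append({"bias2": bias2, "variance": variance, "mse": mse})
    return results
```

Mapping `bias2` and `variance` separately per pixel is what lets the study attribute error to non-stationarity (bias) versus instability that bagging could reduce (variance).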
Validation of the ASTER Global Digital Elevation Model Version 2 over the conterminous United States
Gesch, Dean B.; Oimoen, Michael J.; Zhang, Zheng; Meyer, David J.; Danielson, Jeffrey J.
2012-01-01
The ASTER Global Digital Elevation Model Version 2 (GDEM v2) was evaluated over the conterminous United States in a manner similar to the validation conducted for the original GDEM Version 1 (v1) in 2009. The absolute vertical accuracy of GDEM v2 was calculated by comparison with more than 18,000 independent reference geodetic ground control points from the National Geodetic Survey. The root mean square error (RMSE) measured for GDEM v2 is 8.68 meters. This compares with the RMSE of 9.34 meters for GDEM v1. Another important descriptor of vertical accuracy is the mean error, or bias, which indicates if a DEM has an overall vertical offset from true ground level. The GDEM v2 mean error of -0.20 meters is a significant improvement over the GDEM v1 mean error of -3.69 meters. The absolute vertical accuracy assessment results, both mean error and RMSE, were segmented by land cover to examine the effects of cover types on measured errors. The GDEM v2 mean errors by land cover class verify that the presence of aboveground features (tree canopies and built structures) cause a positive elevation bias, as would be expected for an imaging system like ASTER. In open ground classes (little or no vegetation with significant aboveground height), GDEM v2 exhibits a negative bias on the order of 1 meter. GDEM v2 was also evaluated by differencing with the Shuttle Radar Topography Mission (SRTM) dataset. In many forested areas, GDEM v2 has elevations that are higher in the canopy than SRTM.
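The two accuracy descriptors used in this validation, mean error (bias) and RMSE against reference control points, are straightforward to compute. A minimal sketch (the function name is illustrative):

```python
import math

def vertical_accuracy(reference, dem):
    """Mean error (bias; negative means the DEM sits below true
    ground) and RMSE of DEM elevations against reference control-point
    elevations, paired by position."""
    diffs = [d - r for r, d in zip(reference, dem)]
    mean_error = sum(diffs) / len(diffs)
    rmse = math.sqrt(sum(e * e for e in diffs) / len(diffs))
    return mean_error, rmse
```

Reporting both matters here: RMSE captures overall spread, while the mean error exposes the systematic canopy-driven positive offset that RMSE alone would hide.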
Forecast models for suicide: Time-series analysis with data from Italy.
Preti, Antonio; Lentini, Gianluca
2016-01-01
The prediction of suicidal behavior is a complex task. To fine-tune targeted preventative interventions, predictive analytics (i.e. forecasting future risk of suicide) is more important than exploratory data analysis (pattern recognition, e.g. detection of seasonality in suicide time series). This study sets out to investigate the accuracy of forecasting models of suicide for men and women. A total of 101 499 male suicides and 39 681 female suicides that occurred in Italy from 1969 to 2003 were investigated. In order to apply the forecasting model and test its accuracy, the time series were split into a training set (1969 to 1996; 336 months) and a test set (1997 to 2003; 84 months). The main outcome was the accuracy of forecasting models on the monthly number of suicides. These measures of accuracy were used: mean absolute error; root mean squared error; mean absolute percentage error; mean absolute scaled error. In both male and female suicides a change in the trend pattern was observed, with an increase from 1969 onwards to reach a maximum around 1990 and decrease thereafter. The variances attributable to the seasonal and trend components were, respectively, 24% and 64% in male suicides, and 28% and 41% in female ones. Both annual and seasonal historical trends of monthly data contributed to forecast future trends of suicide with a margin of error around 10%. The finding is clearer in male than in female time series of suicide. The main conclusion of the study is that models taking seasonality into account seem to be able to derive information on deviation from the mean when this occurs as a zenith, but they fail to reproduce it when it occurs as a nadir. Preventative efforts should concentrate on the factors that influence the occurrence of increases above the main trend in both seasonal and cyclic patterns of suicides.
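Of the four accuracy measures listed, the mean absolute scaled error (MASE) is the least familiar. A minimal sketch following the standard Hyndman-Koehler definition with a non-seasonal naive benchmark (the function name is illustrative):

```python
def mase(train, obs, pred):
    """Mean absolute scaled error: MAE of the forecast on the test
    period, divided by the in-sample MAE of the one-step naive
    (last-value) forecast on the training period. Values below 1 mean
    the model beats the naive benchmark."""
    naive_mae = sum(abs(a - b) for a, b in zip(train[1:], train[:-1])) \
        / (len(train) - 1)
    mae = sum(abs(p - o) for o, p in zip(obs, pred)) / len(obs)
    return mae / naive_mae
```

Unlike MAPE, MASE is well-defined even when some monthly counts are zero, which makes it a natural companion measure for count series like monthly suicides.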
A prediction model of short-term ionospheric foF2 based on AdaBoost
NASA Astrophysics Data System (ADS)
Zhao, Xiukuan; Ning, Baiqi; Liu, Libo; Song, Gangbing
2014-02-01
In this paper, the AdaBoost-BP algorithm is used to construct a new model to predict the critical frequency of the ionospheric F2-layer (foF2) one hour ahead. Different indices were used to characterize ionospheric diurnal and seasonal variations and their dependence on solar and geomagnetic activity. These indices, together with the current observed foF2 value, were input into the prediction model and the foF2 value one hour ahead was output. We analyzed twenty-two years' foF2 data from nine ionosonde stations in the East-Asian sector in this work. The first eleven years' data were used as a training dataset and the second eleven years' data were used as a testing dataset. The results show that the performance of AdaBoost-BP is better than those of BP Neural Network (BPNN), Support Vector Regression (SVR) and the IRI model. For example, the AdaBoost-BP prediction absolute error of foF2 at Irkutsk station (a middle latitude station) is 0.32 MHz, which is better than 0.34 MHz from BPNN, 0.35 MHz from SVR, and also significantly outperforms the IRI model, whose absolute error is 0.64 MHz. Meanwhile, the AdaBoost-BP prediction absolute error at Taipei station (a low latitude station) is 0.78 MHz, which is better than 0.81 MHz from BPNN, 0.81 MHz from SVR and 1.37 MHz from the IRI model. Finally, the variation of the AdaBoost-BP prediction error with season, solar activity and latitude is also discussed in the paper.
In Vivo measurement of pediatric vocal fold motion using structured light laser projection.
Patel, Rita R; Donohue, Kevin D; Lau, Daniel; Unnikrishnan, Harikrishnan
2013-07-01
The aim of the study was to present the development of a miniature structured light laser projection endoscope and to quantify vocal fold length and vibratory features related to impact stress of the pediatric glottis using high-speed imaging. The custom-developed laser projection system consists of a green laser with a 4-mm diameter optics module at the tip of the endoscope, projecting 20 vertical laser lines on the glottis. Measurements of absolute phonatory vocal fold length, membranous vocal fold length, peak amplitude, amplitude-to-length ratio, average closing velocity, and impact velocity were obtained in five children (6-9 years), two adult male and three adult female participants without voice disorders, and one child (10 years) with bilateral vocal fold nodules during modal phonation. Independent measurements made on the glottal length of a vocal fold phantom demonstrated a 0.13 mm bias error with a standard deviation of 0.23 mm, indicating adequate precision and accuracy for measuring vocal fold structures and displacement. The first in vivo measurements of amplitude-to-length ratio, peak closing velocity, and impact velocity during phonation in a pediatric population and in a child with vocal fold nodules are reported. The proposed laser projection system can be used to obtain in vivo measurements of absolute length and vibratory features in children and adults. Children have a larger amplitude-to-length ratio than typically developing adults, whereas nodules result in a larger peak amplitude, amplitude-to-length ratio, average closing velocity, and impact velocity compared with typically developing children. Copyright © 2013 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
Accurate, robust and reliable calculations of Poisson-Boltzmann binding energies
Nguyen, Duc D.; Wang, Bao
2017-01-01
The Poisson-Boltzmann (PB) model is one of the most popular implicit solvent models in biophysical modeling and computation. The ability to provide accurate and reliable PB estimates of the electrostatic solvation free energy, ΔGel, and binding free energy, ΔΔGel, is important to computational biophysics and biochemistry. In this work, we investigate the grid dependence of our PB solver (MIBPB) with SESs for estimating both electrostatic solvation free energies and electrostatic binding free energies. It is found that the relative absolute error of ΔGel obtained at the grid spacing of 1.0 Å compared to ΔGel at 0.2 Å averaged over 153 molecules is less than 0.2%. Our results indicate that the use of grid spacing 0.6 Å ensures accuracy and reliability in ΔΔGel calculation. In fact, the grid spacing of 1.1 Å appears to deliver adequate accuracy for high throughput screening. PMID:28211071
Gaonkar, Narayan; Vaidya, R G
2016-05-01
A simple method to estimate the density of a biodiesel blend as a simultaneous function of temperature and volume percent of biodiesel is proposed. Employing Kay's mixing rule, we developed a model and theoretically investigated the density of different vegetable oil biodiesel blends as a simultaneous function of temperature and volume percent of biodiesel. A key advantage of the proposed model is that it requires only a single set of density values of the blend components at any two different temperatures. We observe that the density of a blend decreases linearly with increasing temperature and increases with increasing volume percent of biodiesel. The low values of the standard estimate of error (SEE = 0.0003-0.0022) and absolute average deviation (AAD = 0.03-0.15%) obtained using the proposed model indicate its predictive capability. The predicted values are in good agreement with recently available experimental data.
Short-term acoustic forecasting via artificial neural networks for neonatal intensive care units.
Young, Jason; Macke, Christopher J; Tsoukalas, Lefteri H
2012-11-01
Noise levels in hospitals, especially neonatal intensive care units (NICUs), have become of great concern for hospital designers. This paper details an artificial neural network (ANN) approach to forecasting the sound loads in NICUs. The ANN is used to learn the relationship between past, present, and future noise levels. By training the ANN with data specific to the location and device used to measure the sound, the ANN is able to produce reasonable predictions of noise levels in the NICU. Best case results show average absolute errors of 5.06 ± 4.04% when used to predict the noise levels one hour ahead, which correspond to 2.53 dBA ± 2.02 dBA. The ANN has the tendency to overpredict during periods of stability and underpredict during large transients. This forecasting algorithm could be of use in any application where prediction and prevention of harmful noise levels are of the utmost concern.
Fixing the reference frame for PPMXL proper motions using extragalactic sources
Grabowski, Kathleen; Carlin, Jeffrey L.; Newberg, Heidi Jo; ...
2015-05-27
In this study, we quantify and correct systematic errors in PPMXL proper motions using extragalactic sources from the first two LAMOST data releases and the Véron-Cetty & Véron Catalog of Quasars. Although the majority of the sources are from the Véron catalog, LAMOST makes important contributions in regions that are not well-sampled by previous catalogs, particularly at low Galactic latitudes and in the south Galactic cap. We show that quasars in PPMXL have measurable and significant proper motions, which reflect the systematic zero-point offsets present in the catalog. We confirm the global proper motion shifts seen by Wu et al., and additionally find smaller-scale fluctuations of the QSO-derived corrections to an absolute frame. Finally, we average the proper motions of 158 106 extragalactic objects in bins of 3° × 3° and present a table of proper motion corrections.
High Spectral Resolution Lidar for atmospheric temperature profiling.
NASA Astrophysics Data System (ADS)
Razenkov, I.; Eloranta, E. W.
2017-12-01
The High Spectral Resolution Lidar (HSRL) designed at the University of Wisconsin-Madison is equipped with two iodine absorption filters with different line widths (1.8 GHz and 2.85 GHz). The filters are implemented to discriminate between Mie and Rayleigh backscattering and to resolve temperature-sensitive changes in the Rayleigh spectrum for atmospheric temperature profile measurements. This measurement capability makes the instrument intrinsically and absolutely calibrated. The HSRL has a shared transmitter-receiver telescope and operates in eye-safe mode, with the product of laser average power and telescope aperture less than 0.025 W m² at 532 nm. With this low-power prototype instrument we have achieved temperature profile measurements extending above the tropopause with a time resolution of several hours. Further instrument optimizations will reduce systematic measurement errors and improve the signal-to-noise ratio, providing temperature data comparable to a standard radiosonde with higher time resolution.
A dual-adaptive support-based stereo matching algorithm
NASA Astrophysics Data System (ADS)
Zhang, Yin; Zhang, Yun
2017-07-01
Many stereo matching algorithms use fixed color thresholds and a rigid cross skeleton to segment supports (viz., Cross method), which, however, does not work well for different images. To address this issue, this paper proposes a novel dual adaptive support (viz., DAS)-based stereo matching method, which uses both appearance and shape information of a local region to segment supports automatically, and, then, integrates the DAS-based cost aggregation with the absolute difference plus census transform cost, scanline optimization and disparity refinement to develop a stereo matching system. The performance of the DAS method is also evaluated on the Middlebury benchmark and by comparing with the Cross method. The results show that the average error for the DAS method is 25.06% lower than that for the Cross method, indicating that the proposed method is more accurate, has fewer parameters, and is suitable for parallel computing.
NASA Technical Reports Server (NTRS)
Nalepka, R. F. (Principal Investigator); Richardson, W.; Pentland, A. P.
1976-01-01
The author has identified the following significant results. Fourteen different classification algorithms were tested for their ability to estimate the proportion of wheat in an area. For some algorithms, accuracy of classification in field centers was observed. The data base consisted of ground truth and LANDSAT data from 55 sections (1 x 1 mile) from five LACIE intensive test sites in Kansas and Texas. Signatures obtained from training fields selected at random from the ground truth were generally representative of the data distribution patterns. LIMMIX, an algorithm that chooses a pure signature when the data point is close enough to a signature mean and otherwise chooses the best mixture of a pair of signatures, reduced the average absolute error to 6.1% and the bias to 1.0%. QRULE run with a null test achieved a similar reduction.
Hyde, Derek; Lochray, Fiona; Korol, Renee; Davidson, Melanie; Wong, C Shun; Ma, Lijun; Sahgal, Arjun
2012-03-01
To evaluate the residual setup error and intrafraction motion following kilovoltage cone-beam CT (CBCT) image guidance, for immobilized spine stereotactic body radiotherapy (SBRT) patients, with positioning corrected for in all six degrees of freedom. Analysis is based on 42 consecutive patients (48 thoracic and/or lumbar metastases) treated with a total of 106 fractions and 307 image registrations. Following initial setup, a CBCT was acquired for patient alignment and a pretreatment CBCT taken to verify shifts and determine the residual setup error, followed by a midtreatment and posttreatment CBCT image. For 13 single-fraction SBRT patients, two midtreatment CBCT images were obtained. Initially, a 1.5-mm and 1° tolerance was used to reposition the patient following couch shifts, which was subsequently reduced to 1 mm and 1° after the first 10 patients. Small positioning errors after the initial CBCT setup were observed, with 90% occurring within 1 mm and 97% within 1°. In analyzing the impact of the time interval for verification imaging (10 ± 3 min) and subsequent image acquisitions (17 ± 4 min), the residual setup error was not significantly different (p > 0.05). A significant difference (p = 0.04) in the average three-dimensional intrafraction positional deviations favoring a more strict tolerance in translation (1 mm vs. 1.5 mm) was observed. The absolute intrafraction motion averaged over all patients and all directions along the x, y, and z axes (± SD) was 0.7 ± 0.5 mm and 0.5 ± 0.4 mm for the 1.5 mm and 1 mm tolerance, respectively. Based on a 1-mm and 1° correction threshold, the target was localized to within 1.2 mm and 0.9° with 95% confidence. Near-rigid body immobilization, intrafraction CBCT imaging approximately every 15-20 min, and strict repositioning thresholds in six degrees of freedom yield minimal intrafraction motion, allowing for safe spine SBRT delivery. Copyright © 2012 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cunliffe, A; Contee, C; White, B
Purpose: To characterize the effect of deformable registration of serial computed tomography (CT) scans on the radiation dose calculated from a treatment planning scan. Methods: Eighteen patients who received curative doses (≥60Gy, 2Gy/fraction) of photon radiation therapy for lung cancer treatment were retrospectively identified. For each patient, a diagnostic-quality pre-therapy (4–75 days) CT scan and a treatment planning scan with an associated dose map calculated in Pinnacle were collected. To establish baseline correspondence between scan pairs, a researcher manually identified anatomically corresponding landmark point pairs between the two scans. Pre-therapy scans were co-registered with planning scans (and associated dose maps) using the Plastimatch demons and Fraunhofer MEVIS deformable registration algorithms. Landmark points in each pretherapy scan were automatically mapped to the planning scan using the displacement vector field output from both registration algorithms. The absolute difference in planned dose (|ΔD|) between manually and automatically mapped landmark points was calculated. Using regression modeling, |ΔD| was modeled as a function of the distance between manually and automatically matched points (registration error, E), the dose standard deviation (SD-dose) in the eight-pixel neighborhood, and the registration algorithm used. Results: 52–92 landmark point pairs (median: 82) were identified in each patient's scans. Average |ΔD| across patients was 3.66Gy (range: 1.2–7.2Gy). |ΔD| was significantly reduced by 0.53Gy using Plastimatch demons compared with Fraunhofer MEVIS. |ΔD| increased significantly as a function of E (0.39Gy/mm) and SD-dose (2.23Gy/Gy). Conclusion: An average error of <4Gy in radiation dose was introduced when points were mapped between CT scan pairs using deformable registration. Dose differences following registration were significantly increased when the Fraunhofer MEVIS registration algorithm was used, spatial registration errors were larger, and dose gradient was higher (i.e., higher SD-dose). To our knowledge, this is the first study to directly compute dose errors following deformable registration of lung CT scans.
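The regression described in this abstract, modeling |ΔD| as a linear function of registration error and local dose variability, can be sketched with an ordinary least-squares fit. The landmark values below are synthetic stand-ins (not the study's data), generated so the recovered coefficients merely illustrate the reported 0.39 Gy/mm and 2.23 Gy/Gy sensitivities:

```python
import numpy as np

# Hypothetical landmark-level data (not the study's actual values):
# registration error E (mm) and local dose SD (Gy) for each landmark point.
rng = np.random.default_rng(0)
E = rng.uniform(0, 10, 200)            # registration error, mm
sd_dose = rng.uniform(0, 2, 200)       # 8-pixel-neighborhood dose SD, Gy
abs_dd = 0.39 * E + 2.23 * sd_dose + rng.normal(0, 0.5, 200)  # |ΔD|, Gy

# Fit |ΔD| ~ b0 + b1*E + b2*SD_dose by ordinary least squares.
X = np.column_stack([np.ones_like(E), E, sd_dose])
coef, *_ = np.linalg.lstsq(X, abs_dd, rcond=None)
b0, b1, b2 = coef
print(f"intercept={b0:.2f}, {b1:.2f} Gy/mm, {b2:.2f} Gy/Gy")
```

The study additionally included the registration algorithm as a categorical predictor, which would enter here as an extra indicator column in X.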
NASA Astrophysics Data System (ADS)
Le Meur, Emmanuel; Magand, Olivier; Arnaud, Laurent; Fily, Michel; Frezzotti, Massimo; Cavitte, Marie; Mulvaney, Robert; Urbini, Stefano
2018-05-01
Results from ground-penetrating radar (GPR) measurements and shallow ice cores carried out during a scientific traverse between Dome Concordia (DC) and Vostok stations are presented in order to infer both spatial and temporal characteristics of snow accumulation over the East Antarctic Plateau. Spatially continuous accumulation rates along the traverse are computed from the identification of three equally spaced radar reflections spanning about the last 600 years. Accurate dating of these internal reflection horizons (IRHs) is obtained from a depth-age relationship derived from volcanic horizons and bomb testing fallouts on a DC ice core and shows a very good consistency when tested against extra ice cores drilled along the radar profile. Accumulation rates are then inferred by accounting for density profiles down to each IRH. For the latter purpose, a careful error analysis showed that using a single and more accurate density profile along a DC core provided more reliable results than trying to include the potential spatial variability in density from extra (but less accurate) ice cores distributed along the profile. The most striking feature is an accumulation pattern that remains constant through time with persistent gradients such as a marked decrease from 26 mm w.e. yr-1 at DC to 20 mm w.e. yr-1 at the south-west end of the profile over the last 234 years on average (with a similar decrease from 25 to 19 mm w.e. yr-1 over the last 592 years). As for the time dependency, despite an overall consistency with similar measurements carried out along the main East Antarctic divides, interpreting possible trends remains difficult. Indeed, error bars in our measurements are still too large to unambiguously infer an apparent time increase in accumulation rate. For the proposed absolute values, maximum margins of error are in the range 4 mm w.e. yr-1 (last 234 years) to 2 mm w.e. 
yr-1 (last 592 years), a decrease with depth mainly resulting from the time-averaging when computing accumulation rates.
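The accumulation-rate computation described above, converting an IRH depth to water equivalent via the density profile and dividing by the IRH age, can be sketched as follows (depth, density, and age values are illustrative, not the traverse's data):

```python
# Hedged sketch: accumulation rate in mm w.e. yr^-1 from an internal
# reflection horizon (IRH). All numbers below are illustrative.
depth_m = 10.0          # IRH depth from the radar two-way travel time
mean_density = 0.45     # g/cm^3, column-average firn density above the IRH
age_yr = 234            # IRH age from the ice-core depth-age relationship

# Water-equivalent depth (mm w.e.) = depth (mm) * density / water density.
we_mm = depth_m * 1000 * mean_density / 1.0
rate = we_mm / age_yr   # mean accumulation rate over the IRH's age
print(f"{rate:.1f} mm w.e. yr^-1")
```

In the study the density term is an integral over a measured core profile rather than a single mean value, which is where the error analysis on density enters.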
Real-Time Ensemble Forecasting of Coronal Mass Ejections Using the WSA-ENLIL+Cone Model
NASA Astrophysics Data System (ADS)
Mays, M. L.; Taktakishvili, A.; Pulkkinen, A. A.; Odstrcil, D.; MacNeice, P. J.; Rastaetter, L.; LaSota, J. A.
2014-12-01
Ensemble forecasting of coronal mass ejections (CMEs) is valuable in that it provides an estimate of the spread or uncertainty in CME arrival time predictions. Real-time ensemble modeling of CME propagation is performed by forecasters at the Space Weather Research Center (SWRC) using the WSA-ENLIL+cone model available at the Community Coordinated Modeling Center (CCMC). To estimate the effect of uncertainties in determining CME input parameters on arrival time predictions, a distribution of n (routinely n=48) CME input parameter sets is generated using the CCMC Stereo CME Analysis Tool (StereoCAT), which employs geometrical triangulation techniques. These input parameters are used to perform n different simulations yielding an ensemble of solar wind parameters at various locations of interest, including a probability distribution of CME arrival times (for hits) and geomagnetic storm strength (for Earth-directed hits). We present the results of ensemble simulations for a total of 38 CME events in 2013-2014. Of the 28 ensemble runs containing hits, the observed CME arrival was within the range of ensemble arrival time predictions for 14 runs (half). Comparing the average arrival time prediction of each of the 28 hit-predicting ensembles with the actual arrival time yields an average absolute error of 10.0 hours (RMSE=11.4 hours) over all 28 ensembles, which is comparable to current forecasting errors. Considerations for the accuracy of ensemble CME arrival time predictions include the initial distribution of CME input parameters, particularly the mean and spread. Even when the observed arrival is not within the predicted range, prediction errors caused by the tested CME input parameters can still be ruled out. Prediction errors can also arise from ambient model parameters, such as the accuracy of the solar wind background, and other limitations.
Additionally, the ensemble modeling system was used to complete a parametric event case study of the sensitivity of the CME arrival time prediction to the free parameters of the ambient solar wind model and the CME. The parameter sensitivity study suggests future directions for the system, such as running ensembles using various magnetogram inputs to the WSA model.
Longato, Enrico; Garrido, Maria; Saccardo, Desy; Montesinos Guevara, Camila; Mani, Ali R; Bolognesi, Massimo; Amodio, Piero; Facchinetti, Andrea; Sparacino, Giovanni; Montagnese, Sara
2017-01-01
A popular method to estimate proximal/distal temperature (TPROX and TDIST) consists in calculating a weighted average of nine wireless sensors placed on pre-defined skin locations. Specifically, TPROX is derived from five sensors placed on the infra-clavicular and mid-thigh area (left and right) and abdomen, and TDIST from four sensors located on the hands and feet. In clinical practice, the loss/removal of one or more sensors is a common occurrence, but limited information is available on how this affects the accuracy of temperature estimates. The aim of this study was to determine the accuracy of temperature estimates in relation to number/position of sensors removed. Thirteen healthy subjects wore all nine sensors for 24 hours and reference TPROX and TDIST time-courses were calculated using all sensors. Then, all possible combinations of reduced subsets of sensors were simulated and suitable weights for each sensor calculated. The accuracy of TPROX and TDIST estimates resulting from the reduced subsets of sensors, compared to reference values, was assessed by the mean squared error, the mean absolute error (MAE), the cross-validation error and the 25th and 75th percentiles of the reconstruction error. Tables of the accuracy and sensor weights for all possible combinations of sensors are provided. For instance, in relation to TPROX, a subset of three sensors placed in any combination of three non-homologous areas (abdominal, right or left infra-clavicular, right or left mid-thigh) produced an error of 0.13°C MAE, while the loss/removal of the abdominal sensor resulted in an error of 0.25°C MAE, with the greater impact on the quality of the reconstruction. This information may help researchers/clinicians: i) evaluate the expected goodness of their TPROX and TDIST estimates based on the number of available sensors; ii) select the most appropriate subset of sensors, depending on goals and operational constraints.
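The weighted-average estimation described above can be sketched as follows. The sensor readings and the equal weights are invented for illustration; the study derives its subset-specific weights empirically and tabulates them:

```python
import numpy as np

# Illustrative proximal-sensor readings in °C (not the paper's data).
full = {"abdomen": 35.8, "infraclav_L": 35.5, "infraclav_R": 35.6,
        "thigh_L": 34.9, "thigh_R": 35.0}

def weighted_estimate(readings, weights):
    """TPROX as a weighted average over the available sensors."""
    total_w = sum(weights.values())
    return sum(readings[k] * weights[k] for k in weights) / total_w

# Reference estimate: all five proximal sensors, equal weights here.
ref_w = {k: 1.0 for k in full}
t_ref = weighted_estimate(full, ref_w)

# Reduced subset: abdominal sensor lost; reweight the remaining four.
subset = {k: v for k, v in full.items() if k != "abdomen"}
sub_w = {k: 1.0 for k in subset}
t_sub = weighted_estimate(subset, sub_w)

mae = abs(t_sub - t_ref)   # absolute error of the reduced-subset estimate
print(f"reference {t_ref:.2f} °C, subset {t_sub:.2f} °C, error {mae:.2f} °C")
```

In the study, the MAE is the mean of such absolute errors over a 24-hour time course rather than a single reading, and the subset weights are optimized rather than uniform.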
NASA Astrophysics Data System (ADS)
Ali, Mumtaz; Deo, Ravinesh C.; Downs, Nathan J.; Maraseni, Tek
2018-07-01
Forecasting drought by means of the World Meteorological Organization-approved Standardized Precipitation Index (SPI) is considered to be a fundamental task to support socio-economic initiatives and effectively mitigating the climate-risk. This study aims to develop a robust drought modelling strategy to forecast multi-scalar SPI in drought-rich regions of Pakistan where statistically significant lagged combinations of antecedent SPI are used to forecast future SPI. With ensemble-Adaptive Neuro Fuzzy Inference System ('ensemble-ANFIS') executed via a 10-fold cross-validation procedure, a model is constructed by randomly partitioned input-target data. Resulting in 10-member ensemble-ANFIS outputs, judged by mean square error and correlation coefficient in the training period, the optimal forecasts are attained by the averaged simulations, and the model is benchmarked with M5 Model Tree and Minimax Probability Machine Regression (MPMR). The results show the proposed ensemble-ANFIS model's preciseness was notably better (in terms of the root mean square and mean absolute error including the Willmott's, Nash-Sutcliffe and Legates McCabe's index) for the 6- and 12- month compared to the 3-month forecasts as verified by the largest error proportions that registered in smallest error band. Applying 10-member simulations, ensemble-ANFIS model was validated for its ability to forecast severity (S), duration (D) and intensity (I) of drought (including the error bound). This enabled uncertainty between multi-models to be rationalized more efficiently, leading to a reduction in forecast error caused by stochasticity in drought behaviours. Through cross-validations at diverse sites, a geographic signature in modelled uncertainties was also calculated. 
Considering the superiority of ensemble-ANFIS approach and its ability to generate uncertainty-based information, the study advocates the versatility of a multi-model approach for drought-risk forecasting and its prime importance for estimating drought properties over confidence intervals to generate better information for strategic decision-making.
NASA Astrophysics Data System (ADS)
Appleby, Graham; Rodríguez, José; Altamimi, Zuheir
2016-12-01
Satellite laser ranging (SLR) to the geodetic satellites LAGEOS and LAGEOS-2 uniquely determines the origin of the terrestrial reference frame and, jointly with very long baseline interferometry, its scale. Given such a fundamental role in satellite geodesy, it is crucial that any systematic errors in either technique are at an absolute minimum as efforts continue to realise the reference frame at millimetre levels of accuracy to meet the present and future science requirements. Here, we examine the intrinsic accuracy of SLR measurements made by tracking stations of the International Laser Ranging Service using normal point observations of the two LAGEOS satellites in the period 1993 to 2014. The approach we investigate in this paper is to compute weekly reference frame solutions solving for satellite initial state vectors, station coordinates and daily Earth orientation parameters, estimating along with these weekly average range errors for each and every one of the observing stations. Potential issues in any of the large number of SLR stations assumed to have been free of error in previous realisations of the ITRF may have been absorbed in the reference frame, primarily in station height. Likewise, systematic range errors estimated against a fixed frame that may itself suffer from accuracy issues will absorb network-wide problems into station-specific results. Our results suggest that in the past two decades, the scale of the ITRF derived from the SLR technique has been close to 0.7 ppb too small, due to systematic errors either or both in the range measurements and their treatment. We discuss these results in the context of preparations for ITRF2014 and additionally consider the impact of this work on the currently adopted value of the geocentric gravitational constant, GM.
The AFGL (Air Force Geophysics Laboratory) Absolute Gravity System's Error Budget Revisited.
1985-05-08
also be induced by equipment not associated with the system. A systematic bias of 68 μGal was observed by the Istituto di Metrologia "G. Colonnetti"... Laboratory Astrophysics, Univ. of Colo., Boulder, Colo. IMGC: Istituto di Metrologia "G. Colonnetti", Torino, Italy. Table 1. Absolute Gravity Values... Measurements were made with three Model D and three Model G LaCoste-Romberg gravity meters. These instruments were operated by the following agencies
Wang, Junmei; Hou, Tingjun
2011-12-01
In this work, we have evaluated how well the general assisted model building with energy refinement (AMBER) force field performs in studying the dynamic properties of liquids. Diffusion coefficients (D) have been predicted for 17 solvents, five organic compounds in aqueous solutions, four proteins in aqueous solutions, and nine organic compounds in nonaqueous solutions. An efficient sampling strategy has been proposed and tested in the calculation of the diffusion coefficients of solutes in solutions. There are two major findings of this study. First of all, the diffusion coefficients of organic solutes in aqueous solution can be well predicted: the average unsigned errors and the root mean square errors are 0.137 and 0.171 × 10⁻⁵ cm² s⁻¹, respectively. Second, although the absolute values of D cannot be predicted, good correlations have been achieved for eight organic solvents with experimental data (R² = 0.784), four proteins in aqueous solutions (R² = 0.996), and nine organic compounds in nonaqueous solutions (R² = 0.834). The temperature-dependent behaviors of three solvents, namely TIP3P water, dimethyl sulfoxide, and cyclohexane, have been studied. The major molecular dynamics (MD) settings, such as the sizes of simulation boxes and with/without wrapping the coordinates of MD snapshots into the primary simulation boxes, have been explored. We have concluded that our sampling strategy of averaging the mean square displacement collected in multiple short MD simulations is efficient in predicting diffusion coefficients of solutes at infinite dilution. Copyright © 2011 Wiley Periodicals, Inc.
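The diffusion-coefficient estimate underlying this abstract is the Einstein relation, D = MSD(t)/(6t) in three dimensions, with the mean square displacement averaged over multiple short trajectories. A minimal sketch in reduced units, using synthetic Brownian-motion trajectories rather than real MD output:

```python
import numpy as np

# Einstein-relation sketch: fit MSD(t) = 6*D*t through the origin,
# averaging the MSD over many short synthetic trajectories.
rng = np.random.default_rng(7)
d_true, dt, n_steps, n_traj = 0.5, 0.01, 2000, 50

# Brownian motion: each step per dimension has variance 2*D*dt.
steps = rng.normal(0.0, np.sqrt(2 * d_true * dt), (n_traj, n_steps, 3))
paths = np.cumsum(steps, axis=1)                  # positions over time

t = np.arange(1, n_steps + 1) * dt
msd = (paths**2).sum(axis=2).mean(axis=0)         # MSD averaged over trajectories

# Least-squares slope of MSD vs t (through the origin), divided by 6.
d_est = np.sum(msd * t) / (6 * np.sum(t * t))
print(f"estimated D = {d_est:.3f} (true {d_true})")
```

For a real MD trajectory, the same fit is applied to the MSD computed from unwrapped coordinates, which is why the abstract's wrapping setting matters.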
Sando, Roy; Chase, Katherine J.
2017-03-23
A common statistical procedure for estimating streamflow statistics at ungaged locations is to develop a relational model between streamflow and drainage basin characteristics at gaged locations using least squares regression analysis; however, least squares regression methods are parametric and make constraining assumptions about the data distribution. The random forest regression method provides an alternative nonparametric method for estimating streamflow characteristics at ungaged sites and requires that the data meet fewer statistical conditions than least squares regression methods. Random forest regression analysis was used to develop predictive models for 89 streamflow characteristics using Precipitation-Runoff Modeling System simulated streamflow data and drainage basin characteristics at 179 sites in central and eastern Montana. The predictive models were developed from streamflow data simulated for current (baseline, water years 1982–99) conditions and three future periods (water years 2021–38, 2046–63, and 2071–88) under three different climate-change scenarios. These predictive models were then used to predict streamflow characteristics for baseline conditions and three future periods at 1,707 fish sampling sites in central and eastern Montana. The average root mean square error for all predictive models was about 50 percent. When streamflow predictions at 23 fish sampling sites were compared to nearby locations with simulated data, the mean relative percent difference was about 43 percent. When predictions were compared to streamflow data recorded at 21 U.S. Geological Survey streamflow-gaging stations outside of the calibration basins, the average mean absolute percent error was about 73 percent.
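The basic workflow, fitting a random forest to basin characteristics at "gaged" sites and predicting a streamflow statistic at an "ungaged" site, can be sketched with scikit-learn. The feature names, the synthetic flow relationship, and all numbers are invented for illustration, not the study's predictors or models:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic "gaged" sites: hypothetical basin characteristics.
rng = np.random.default_rng(42)
n = 300
drainage_area = rng.uniform(10, 5000, n)     # km^2
mean_precip = rng.uniform(200, 800, n)       # mm/yr
mean_elev = rng.uniform(600, 2500, n)        # m

# Invented nonlinear "mean annual flow" response plus noise.
flow = 0.002 * drainage_area * mean_precip**0.8 + rng.normal(0, 50, n)

# Nonparametric fit: no distributional assumptions on the response.
X = np.column_stack([drainage_area, mean_precip, mean_elev])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, flow)

# Predict the statistic at an "ungaged" site.
pred = model.predict([[1200.0, 500.0, 1500.0]])
print(f"predicted flow statistic: {pred[0]:.1f}")
```

In the study there is one such model per streamflow characteristic (89 in all) and per climate scenario, trained on simulated rather than synthetic flows.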
NASA Technical Reports Server (NTRS)
McCorkel, Joel; Thome, Kurtis; Hair, Jason; McAndrew, Brendan; Jennings, Don; Rabin, Douglas; Daw, Adrian; Lundsford, Allen
2012-01-01
Key goals of the Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission include enabling observation of high-accuracy long-term climate change trends, use of these observations to test and improve climate forecasts, and calibration of operational and research sensors. The spaceborne instrument suites include a reflected solar (RS) spectroradiometer, emitted infrared spectroradiometer, and radio occultation receivers. The requirement for the RS instrument is that derived reflectance must be traceable to SI standards with an absolute uncertainty of <0.3%, and the error budget that achieves this requirement is described in previous work. This work describes the Solar/Lunar Absolute Reflectance Imaging Spectroradiometer (SOLARIS), a calibration demonstration system for the RS instrument, and presents initial calibration and characterization methods and results. SOLARIS is an Offner spectrometer with two separate focal planes, each with its own entrance aperture and grating, covering spectral ranges of 320-640 and 600-2300 nm over a full field-of-view of 10 degrees with 0.27 milliradian sampling. Results from laboratory measurements, including use of integrating spheres, transfer radiometers and spectral standards, combined with field-based solar and lunar acquisitions are presented. These results will be used to assess the accuracy and repeatability of the radiometric and spectral characteristics of SOLARIS, which will be presented against the sensor-level requirements addressed in the CLARREO RS instrument error budget.
NASA Astrophysics Data System (ADS)
Wu, Bing-Fei; Ma, Li-Shan; Perng, Jau-Woei
This study analyzes the absolute stability in P and PD type fuzzy logic control systems with both certain and uncertain linear plants. Stability analysis includes the reference input, actuator gain and interval plant parameters. For certain linear plants, the stability (i.e. the stable equilibriums of error) in P and PD types is analyzed with the Popov or linearization methods under various reference inputs and actuator gains. The steady state errors of fuzzy control systems are also addressed in the parameter plane. The parametric robust Popov criterion for parametric absolute stability based on Lur'e systems is also applied to the stability analysis of P type fuzzy control systems with uncertain plants. The PD type fuzzy logic controller in our approach is a single-input fuzzy logic controller and is transformed into the P type for analysis. In our work, the absolute stability analysis of fuzzy control systems is given with respect to a non-zero reference input and an uncertain linear plant with the parametric robust Popov criterion unlike previous works. Moreover, a fuzzy current controlled RC circuit is designed with PSPICE models. Both numerical and PSPICE simulations are provided to verify the analytical results. Furthermore, the oscillation mechanism in fuzzy control systems is specified with various equilibrium points of view in the simulation example. Finally, the comparisons are also given to show the effectiveness of the analysis method.
Preston, Jonathan L; Hull, Margaret; Edwards, Mary Louise
2013-05-01
To determine if speech error patterns in preschoolers with speech sound disorders (SSDs) predict articulation and phonological awareness (PA) outcomes almost 4 years later. Twenty-five children with histories of preschool SSDs (and normal receptive language) were tested at an average age of 4;6 (years;months) and were followed up at age 8;3. The frequency of occurrence of preschool distortion errors, typical substitution and syllable structure errors, and atypical substitution and syllable structure errors was used to predict later speech sound production, PA, and literacy outcomes. Group averages revealed below-average school-age articulation scores and low-average PA but age-appropriate reading and spelling. Preschool speech error patterns were related to school-age outcomes. Children for whom >10% of their speech sound errors were atypical had lower PA and literacy scores at school age than children who produced <10% atypical errors. Preschoolers who produced more distortion errors were likely to have lower school-age articulation scores than preschoolers who produced fewer distortion errors. Different preschool speech error patterns predict different school-age clinical outcomes. Many atypical speech sound errors in preschoolers may be indicative of weak phonological representations, leading to long-term PA weaknesses. Preschoolers' distortions may be resistant to change over time, leading to persisting speech sound production problems.
Sun, Shanhui; Sonka, Milan; Beichel, Reinhard R.
2013-01-01
Recently, the optimal surface finding (OSF) and layered optimal graph image segmentation of multiple objects and surfaces (LOGISMOS) approaches have been reported with applications to medical image segmentation tasks. While providing high levels of performance, these approaches may locally fail in the presence of pathology or other local challenges. Due to the image data variability, finding a suitable cost function that would be applicable to all image locations may not be feasible. This paper presents a new interactive refinement approach for correcting local segmentation errors in the automated OSF-based segmentation. A hybrid desktop/virtual reality user interface was developed for efficient interaction with the segmentations utilizing state-of-the-art stereoscopic visualization technology and advanced interaction techniques. The user interface allows a natural and interactive manipulation on 3-D surfaces. The approach was evaluated on 30 test cases from 18 CT lung datasets, which showed local segmentation errors after employing an automated OSF-based lung segmentation. The performed experiments exhibited significant increase in performance in terms of mean absolute surface distance errors (2.54 ± 0.75 mm prior to refinement vs. 1.11 ± 0.43 mm post-refinement, p ≪ 0.001). Speed of the interactions is one of the most important aspects leading to the acceptance or rejection of the approach by users expecting real-time interaction experience. The average algorithm computing time per refinement iteration was 150 ms, and the average total user interaction time required for reaching complete operator satisfaction per case was about 2 min. This time was mostly spent on human-controlled manipulation of the object to identify whether additional refinement was necessary and to approve the final segmentation result. 
The reported principle is generally applicable to segmentation problems beyond lung segmentation in CT scans as long as the underlying segmentation utilizes the OSF framework. The two reported segmentation refinement tools were optimized for lung segmentation and might need some adaptation for other application domains. PMID:23415254
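The evaluation metric in the abstract above, mean absolute surface distance, is commonly computed as the average distance from each vertex of one surface to the closest point of the other. A minimal point-based sketch on tiny synthetic surfaces (real evaluations use dense meshes and usually symmetrize over both directions):

```python
import numpy as np

# Reference surface vertices and a "segmentation" offset by 0.5 mm in z.
ref = np.array([[0.0, 0.0, 0.0], [1, 0, 0], [0, 1, 0], [1, 1, 0]])
seg = ref + np.array([0.0, 0.0, 0.5])

# Pairwise distances (n_seg x n_ref); closest reference vertex per point.
d = np.linalg.norm(seg[:, None, :] - ref[None, :, :], axis=2)
masd = d.min(axis=1).mean()   # mean absolute surface distance, mm
print(f"mean absolute surface distance: {masd:.2f} mm")
```

For large meshes, the brute-force pairwise matrix is replaced by a k-d tree nearest-neighbor query, but the averaged minimum-distance definition is the same.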
Application of Intra-Oral Dental Scanners in the Digital Workflow of Implantology
van der Meer, Wicher J.; Andriessen, Frank S.; Wismeijer, Daniel; Ren, Yijin
2012-01-01
Intra-oral scanners will play a central role in digital dentistry in the near future. In this study the accuracy of three intra-oral scanners was compared. Materials and methods: A master model made of stone was fitted with three high-precision manufactured PEEK cylinders and scanned with three intra-oral scanners: the CEREC (Sirona), the iTero (Cadent) and the Lava COS (3M). In software the digital files were imported, and the distance between the centres of the cylinders and the angulation between the cylinders were assessed. These values were compared to the measurements made on a high-accuracy 3D scan of the master model. Results: The distance errors were the smallest and most consistent for the Lava COS. The distance errors for the Cerec were the largest and least consistent. All the angulation errors were small. Conclusions: The Lava COS in combination with a high-accuracy scanning protocol resulted in the smallest and most consistent errors of all three scanners tested when considering mean distance errors in full arch impressions, both in absolute values and in consistency, for both measured distances. For the mean angulation errors, the Lava COS had the smallest errors between cylinders 1–2 and the largest errors between cylinders 1–3, although the absolute difference with the smallest mean value (iTero) was very small (0.0529°). An expected increase in distance and/or angular errors over the length of the arch, due to an accumulation of registration errors of the patched 3D surfaces, could be observed in this study design, but the effects were statistically not significant. Clinical relevance: For making impressions of implant cases for digital workflows, the most accurate scanner with the scanning protocol that will ensure the most accurate digital impression should be used. In our study that was the Lava COS with the high-accuracy scanning protocol. PMID:22937030
[Design and accuracy analysis of upper slicing system of MSCT].
Jiang, Rongjian
2013-05-01
The upper slicing system is one of the main components of the optical system in MSCT. This paper focuses on the design of the upper slicing system and its accuracy analysis to improve the accuracy of imaging. The errors in slice thickness and ray center caused by the bearings, screw, and control system were analyzed and tested. The accumulated error measured is less than 1 μm, and the absolute error measured is less than 10 μm. Improving the accuracy of the upper slicing system contributes to appropriate treatment methods and a higher success rate of treatment.
Inherent Conservatism in Deterministic Quasi-Static Structural Analysis
NASA Technical Reports Server (NTRS)
Verderaime, V.
1997-01-01
The cause of the long-suspected excessive conservatism in the prevailing structural deterministic safety factor has been identified as an inherent violation of the error propagation laws when reducing statistical data to deterministic values and then combining them algebraically through successive structural computational processes. These errors are restricted to the applied stress computations, and because mean and variations of the tolerance limit format are added, the errors are positive, serially cumulative, and excessively conservative. Reliability methods circumvent these errors and provide more efficient and uniform safe structures. The document is a tutorial on the deficiencies and nature of the current safety factor and of its improvement and transition to absolute reliability.
NASA Astrophysics Data System (ADS)
Ahdika, Atina; Lusiyana, Novyan
2017-02-01
The World Health Organization (WHO) has noted Indonesia as the country with the highest number of dengue hemorrhagic fever (DHF) cases in Southeast Asia. There is no vaccine or specific treatment for DHF. One of the efforts that can be undertaken by both the government and residents is prevention. In statistics, there are several methods to predict the number of DHF cases for use as a reference in preventing DHF cases. In this paper, a discrete time series model, specifically the INAR(1)-Poisson model, and a Markov prediction model (MPM) are used to predict the number of DHF patients in West Java, Indonesia. The result shows that the MPM is the best model since it has the smallest values of MAE (mean absolute error) and MAPE (mean absolute percentage error).
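The model-selection criteria used here, MAE and MAPE, compare observed case counts with each model's predictions. A minimal sketch with invented case counts and predictions (not the West Java data or the fitted models):

```python
import numpy as np

# Invented monthly DHF case counts and two models' predictions.
observed = np.array([120, 95, 130, 160, 110], dtype=float)
pred_inar = np.array([110, 100, 140, 150, 125], dtype=float)
pred_mpm = np.array([118, 97, 128, 158, 113], dtype=float)

def mae(y, yhat):
    """Mean absolute error."""
    return np.mean(np.abs(y - yhat))

def mape(y, yhat):
    """Mean absolute percentage error, in percent."""
    return np.mean(np.abs((y - yhat) / y)) * 100

for name, p in [("INAR(1)-Poisson", pred_inar), ("MPM", pred_mpm)]:
    print(f"{name}: MAE={mae(observed, p):.2f}, MAPE={mape(observed, p):.2f}%")
```

With these invented numbers the MPM predictions track the observations more closely, mirroring the paper's conclusion that the model with the smaller MAE and MAPE is preferred.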
NASA Astrophysics Data System (ADS)
Nagarajan, K.; Shashidharan Nair, C. K.
2007-07-01
The channelled spectrum employing polarized light interference is a very convenient method for the study of dispersion of birefringence. However, while using this method, the absolute order of the polarized light interference fringes cannot be determined easily. Approximate methods are therefore used to estimate the order. One of the approximations is that the dispersion of birefringence across neighbouring integer order fringes is negligible. In this paper, we show how this approximation can cause errors. A modification is reported whereby the error in the determination of absolute fringe order can be reduced using fractional orders instead of integer orders. The theoretical background for this method supported with computer simulation is presented. An experimental arrangement implementing these modifications is described. This method uses a Constant Deviation Spectrometer (CDS) and a Soleil Babinet Compensator (SBC).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cunliffe, Alexandra R.; Armato, Samuel G.; White, Bradley
2015-01-15
Purpose: To characterize the effects of deformable image registration of serial computed tomography (CT) scans on the radiation dose calculated from a treatment planning scan. Methods: Eighteen patients who received curative doses (≥60 Gy, 2 Gy/fraction) of photon radiation therapy for lung cancer treatment were retrospectively identified. For each patient, a diagnostic-quality pretherapy (4–75 days) CT scan and a treatment planning scan with an associated dose map were collected. To establish correspondence between scan pairs, a researcher manually identified anatomically corresponding landmark point pairs between the two scans. Pretherapy scans were then coregistered with planning scans (and associated dose maps) using the demons deformable registration algorithm and two variants of the Fraunhofer MEVIS algorithm ("Fast" and "EMPIRE10"). Landmark points in each pretherapy scan were automatically mapped to the planning scan using the displacement vector field output from each of the three algorithms. The Euclidean distance between manually and automatically mapped landmark points (d_E) and the absolute difference in planned dose (|ΔD|) were calculated. Using regression modeling, |ΔD| was modeled as a function of d_E, dose (D), dose standard deviation (SD_dose) in an eight-pixel neighborhood, and the registration algorithm used. Results: Over 1400 landmark point pairs were identified, with 58–93 (median: 84) points identified per patient. Average |ΔD| across patients was 3.5 Gy (range: 0.9–10.6 Gy). Registration accuracy was highest using the Fraunhofer MEVIS EMPIRE10 algorithm, with an average d_E across patients of 5.2 mm (compared with >7 mm for the other two algorithms). Consequently, average |ΔD| was also lowest using the Fraunhofer MEVIS EMPIRE10 algorithm. |ΔD| increased significantly as a function of d_E (0.42 Gy/mm), D (0.05 Gy/Gy), SD_dose (1.4 Gy/Gy), and the algorithm used (≤1 Gy).
Conclusions: An average error of <4 Gy in radiation dose was introduced when points were mapped between CT scan pairs using deformable registration, with the majority of points yielding a dose-mapping error of <2 Gy (approximately 3% of the total prescribed dose). Registration accuracy was highest using the Fraunhofer MEVIS EMPIRE10 algorithm, resulting in the smallest errors in mapped dose. Dose differences following registration increased significantly with increasing spatial registration error, dose, and dose gradient (i.e., SD_dose). This model provides a measurement of the uncertainty in the radiation dose when points are mapped between serial CT scans through deformable registration.
Modelling and Predicting Backstroke Start Performance Using Non-Linear and Linear Models
de Jesus, Karla; Ayala, Helon V. H.; de Jesus, Kelly; Coelho, Leandro dos S.; Medeiros, Alexandre I.A.; Abraldes, José A.; Vaz, Mário A.P.; Fernandes, Ricardo J.; Vilas-Boas, João Paulo
2018-01-01
Our aim was to compare non-linear and linear mathematical model responses for backstroke start performance prediction. Ten swimmers randomly completed eight 15 m backstroke starts with feet over the wedge, four with hands on the highest horizontal handgrip and four on the vertical handgrip. Swimmers were videotaped using a dual media camera set-up, with the starts performed over an instrumented block with four force plates. Artificial neural networks were applied to predict the 5 m start time from kinematic and kinetic variables, with accuracy assessed by the mean absolute percentage error. Artificial neural networks predicted start time more robustly than the linear model with respect to changing the training to the validation dataset for the vertical handgrip (3.95 ± 1.67 vs. 5.92 ± 3.27%). Artificial neural networks obtained a smaller mean absolute percentage error than the linear model for the horizontal (0.43 ± 0.19 vs. 0.98 ± 0.19%) and vertical handgrip (0.45 ± 0.19 vs. 1.38 ± 0.30%) using all input data. The best artificial neural network validation revealed a smaller mean absolute error than the linear model for the horizontal (0.007 vs. 0.04 s) and vertical handgrip (0.01 vs. 0.03 s). Artificial neural networks should be used for backstroke 5 m start time prediction due to the quite small differences among elite-level performances. PMID:29599857
2017-01-01
Purpose/Background: Shoulder proprioception is essential in the activities of daily living as well as in sports. Acute muscle fatigue is believed to cause a deterioration of proprioception, increasing the risk of injury. The purpose of this study was to evaluate if fatigue of the shoulder external rotators during eccentric versus concentric activity affects shoulder joint proprioception as determined by active reproduction of position. Study design: Quasi-experimental trial. Methods: Twenty-two healthy subjects with no recent history of shoulder pathology were randomly allocated to either a concentric or an eccentric exercise group for fatiguing the shoulder external rotators. Proprioception was assessed before and after the fatiguing protocol using an isokinetic dynamometer, by measuring active reproduction of position at 30° of shoulder external rotation, reported as absolute angular error. The fatiguing protocol consisted of sets of fifteen consecutive external rotator muscle contractions in either the concentric or eccentric action. The subjects were exercised until there was a 30% decline from the peak torque of the subjects' maximal voluntary contraction over three consecutive muscle contractions. Results: A one-way analysis of variance test revealed no statistical difference in absolute angular error (p > 0.05) between the concentric and eccentric groups. Moreover, no statistical difference (p > 0.05) was found in absolute angular error between pre- and post-fatigue in either group. Conclusions: Eccentric exercise does not seem to acutely affect shoulder proprioception to a larger extent than concentric exercise. Level of evidence: 2b. PMID:28515976
Jiménez-Carvelo, Ana M; González-Casado, Antonio; Cuadros-Rodríguez, Luis
2017-03-01
A new analytical method for the quantification of olive oil and palm oil in blends with other edible vegetable oils (canola, safflower, corn, peanut, seeds, grapeseed, linseed, sesame, and soybean) using normal-phase liquid chromatography and chemometric tools was developed. The procedure for obtaining the chromatographic fingerprint of the methyl-transesterified fraction of each blend is described. The multivariate quantification methods used were Partial Least Squares Regression (PLS-R) and Support Vector Regression (SVR). The quantification results were evaluated by several parameters, such as the Root Mean Square Error of Validation (RMSEV), Mean Absolute Error of Validation (MAEV), and Median Absolute Error of Validation (MdAEV). Notably, with the proposed method the chromatographic analysis takes only eight minutes, and the results obtained showed the potential of this method, allowing quantification of mixtures of olive oil and palm oil with other vegetable oils. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Langousis, Andreas; Kaleris, Vassilios; Xeygeni, Vagia; Magkou, Foteini
2017-04-01
Assessing the availability of groundwater reserves at a regional level requires accurate and robust hydraulic head estimation at multiple locations of an aquifer. To that end, one needs groundwater observation networks that can provide sufficient information to estimate the hydraulic head at unobserved locations. The density of such networks is largely influenced by the spatial distribution of the hydraulic conductivity in the aquifer, and it is usually determined through trial and error, by solving the groundwater flow for a properly selected set of alternative but physically plausible geologic structures. In this work, we use: 1) dimensional analysis, and 2) a pulse-based stochastic model for simulation of synthetic aquifer structures, to calculate the distribution of the absolute error in hydraulic head estimation as a function of the standardized distance from the nearest measuring locations. The resulting distributions are shown to encompass all possible small-scale structural dependencies, exhibiting characteristics (bounds, multi-modal features, etc.) that can be explained using simple geometric arguments. The obtained results are promising, pointing towards the direction of establishing design criteria based on large-scale geologic maps.
Elevation correction factor for absolute pressure measurements
NASA Technical Reports Server (NTRS)
Panek, Joseph W.; Sorrells, Mark R.
1996-01-01
With the arrival of highly accurate multi-port pressure measurement systems, conditions that previously did not affect overall system accuracy must now be scrutinized closely. Errors caused by elevation differences between pressure sensing elements and model pressure taps can be quantified and corrected. With multi-port pressure measurement systems, the sensing elements are connected to pressure taps that may be many feet away. The measurement system may be at a different elevation than the pressure taps due to laboratory space or test article constraints. This difference produces a hydrostatic pressure gradient within the interface tube: the pressure at the bottom of the tube is higher than the pressure at the top due to the weight of the tube's column of air. Tubes with higher pressures exhibit larger absolute errors due to the higher air density. This effect is well documented but has generally been taken into account only for large elevations. With error analysis techniques, the loss in accuracy from elevation can be easily quantified, and correction factors can be applied to maintain the high accuracies of new pressure measurement systems.
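The elevation correction described above can be sketched directly: the offset equals the weight of the air column in the interface tube, with the air density taken from the ideal-gas law (so higher line pressures give larger absolute errors). The dry-air assumption and the example conditions below are illustrative, not from the report:

```python
# Hedged sketch of the hydrostatic elevation correction for a pressure line.

G = 9.80665      # standard gravity, m/s^2
R_AIR = 287.05   # specific gas constant of dry air, J/(kg*K)

def elevation_correction_pa(line_pressure_pa, temp_k, height_m):
    """Pressure excess at the bottom of an air column of the given height."""
    rho = line_pressure_pa / (R_AIR * temp_k)  # ideal-gas air density, kg/m^3
    return rho * G * height_m

# Example: a 3 m elevation difference at atmospheric pressure and 20 C
# introduces an offset of roughly 35 Pa (about 0.035% of the reading).
offset = elevation_correction_pa(101325.0, 293.15, 3.0)
```

Because the density term scales with the line pressure, the correction grows linearly with the absolute pressure being measured, matching the observation that higher-pressure tubes show larger absolute errors.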
Dimensional Error in Rapid Prototyping with Open Source Software and Low-cost 3D-printer.
Rendón-Medina, Marco A; Andrade-Delgado, Laura; Telich-Tarriba, Jose E; Fuente-Del-Campo, Antonio; Altamirano-Arcos, Carlos A
2018-01-01
Rapid prototyping models (RPMs) have been extensively used in craniofacial and maxillofacial surgery, especially in areas such as orthognathic surgery, posttraumatic or oncological reconstructions, and implantology. Economic limitations are greater in developing countries such as Mexico, where resources dedicated to health care are limited, restricting the use of RPMs to a few selected centers. This article aims to determine the dimensional error of a low-cost fused deposition modeling (FDM) 3D printer (Tronxy P802MA, Shenzhen, Tronxy Technology Co) used with open-source software. An ordinary dry human mandible was scanned with a computed tomography device. The data were processed with open-source software to build a rapid prototype with a fused deposition machine. Linear measurements were performed to find the mean absolute and relative differences, which were 0.65 mm and 1.96%, respectively (P = 0.96). Low-cost FDM machines and open-source software are excellent options for manufacturing RPMs, with the benefit of low cost and a relative error similar to that of more expensive technologies.
Kuhn, Stefan; Egert, Björn; Neumann, Steffen; Steinbeck, Christoph
2008-09-25
Current efforts in metabolomics, such as the Human Metabolome Project, collect structures of biological metabolites as well as data for their characterisation, such as spectra for identification of substances and measurements of their concentration. Still, only a fraction of existing metabolites and their spectral fingerprints are known. Computer-Assisted Structure Elucidation (CASE) of biological metabolites will be an important tool to address this lack of knowledge. Indispensable for CASE are modules to predict spectra for hypothetical structures. This paper evaluates different statistical and machine learning methods for predicting proton NMR spectra based on data from our open database NMRShiftDB. A mean absolute error of 0.18 ppm was achieved for the prediction of proton NMR shifts ranging from 0 to 11 ppm. Random forest, J48 decision tree, and support vector machines achieved similar overall errors. HOSE codes, a notably simple method, achieved a comparatively good result of 0.17 ppm mean absolute error. The NMR prediction methods applied in the course of this work delivered precise predictions which can serve as a building block for Computer-Assisted Structure Elucidation of biological metabolites.
Multivariate Time Series Forecasting of Crude Palm Oil Price Using Machine Learning Techniques
NASA Astrophysics Data System (ADS)
Kanchymalay, Kasturi; Salim, N.; Sukprasert, Anupong; Krishnan, Ramesh; Raba'ah Hashim, Ummi
2017-08-01
The aim of this paper was to study the correlation between the crude palm oil (CPO) price, selected vegetable oil prices (soybean oil, coconut oil, olive oil, rapeseed oil, and sunflower oil), the crude oil price, and the monthly exchange rate. Comparative analysis was then performed on CPO price forecasting results using machine learning techniques. Monthly CPO prices, selected vegetable oil prices, crude oil prices, and monthly exchange rate data from January 1987 to February 2017 were utilized. Preliminary analysis showed a positive and high correlation between the CPO price and the soybean oil price, and also between the CPO price and the crude oil price. Experiments were conducted using multi-layer perceptron, support vector regression, and Holt-Winters exponential smoothing techniques. The results were assessed using the criteria of root mean square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE), and directional accuracy (DA). Among these three techniques, support vector regression (SVR) with the sequential minimal optimization (SMO) algorithm showed relatively better results compared to the multi-layer perceptron and the Holt-Winters exponential smoothing method.
NASA Astrophysics Data System (ADS)
Yang, L.; Wang, G.; Liu, H.
2017-12-01
Rising sea level has important direct impacts on coastal and island regions such as the Caribbean, where the influence of sea-level rise is becoming more apparent. The Caribbean Sea is a semi-enclosed sea adjacent to the landmasses of South and Central America to the south and west; the Greater Antilles and the Lesser Antilles separate it from the Atlantic Ocean to the north and east. This work focuses on studying the relative and absolute sea-level changes by integrating tide gauge, GPS, and satellite altimetry datasets (1955-2016) within the Caribbean Sea. Further, the two main components of absolute sea-level change, ocean mass and steric sea-level changes, are studied using GRACE, temperature, and salinity datasets (1955-2016). According to the analysis conducted, the sea-level change rates have considerable temporal and spatial variations, and estimates may be subject to the techniques used and the observation periods. The average absolute sea-level rise rate is 1.8±0.3 mm/year for the period from 1955 to 2015 according to the integrated tide gauge and GPS observations, and 3.5±0.6 mm/year for the period from 1993 to 2016 according to the satellite altimetry observations. This study shows that the absolute sea-level change budget in the Caribbean Sea is closed for the period from 1955 to 2016, in which ocean mass change dominates the absolute sea-level rise. The budget is also closed for the period from 2004 to 2016, in which steric sea-level rise dominates.
Climatological Modeling of Monthly Air Temperature and Precipitation in Egypt through GIS Techniques
NASA Astrophysics Data System (ADS)
El Kenawy, A.
2009-09-01
This paper describes a method for modeling and mapping four climatic variables (maximum temperature, minimum temperature, mean temperature, and total precipitation) in Egypt using a multiple regression approach implemented in a GIS environment. In this model, a set of variables including latitude, longitude, elevation within a distance of 5, 10, and 15 km, slope, aspect, distance to the Mediterranean Sea, distance to the Red Sea, distance to the Nile, the ratio between land and water masses within a radius of 5, 10, and 15 km, the Normalized Difference Vegetation Index (NDVI), the Normalized Difference Water Index (NDWI), the Normalized Difference Temperature Index (NDTI), and reflectance are included as independent variables. These variables were integrated as raster layers in MiraMon software at a spatial resolution of 1 km. Climatic variables were treated as dependent variables and averaged from 39 quality-controlled and homogenized series distributed across the entire country during the period 1957-2006. For each climatic variable, digital and objective maps were obtained using the multiple regression coefficients at monthly, seasonal, and annual timescales. The accuracy of these maps was assessed through cross-validation between predicted and observed values using a set of statistics including the coefficient of determination (R2), root mean square error (RMSE), mean absolute error (MAE), mean bias error (MBE), and Willmott's D statistic. These maps are valuable in terms of spatial resolution as well as the number of observatories involved in the current analysis.
NASA Astrophysics Data System (ADS)
Hu, Guojie; Wu, Xiaodong; Zhao, Lin; Li, Ren; Wu, Tonghua; Xie, Changwei; Pang, Qiangqiang; Cheng, Guodong
2017-08-01
Soil temperature plays a key role in hydro-thermal processes in the environment and is a critical variable linking surface structure to soil processes. There is a need for more accurate temperature simulation models, particularly on the Qinghai-Xizang (Tibet) Plateau (QXP). In this study, a model was developed for the simulation of hourly soil surface temperatures from air temperatures. The model incorporated the thermal properties of the soil, vegetation cover, solar radiation, and water flux density, and utilized field data collected from the QXP. The model was used to simulate the thermal regime at soil depths of 5 cm, 10 cm, and 20 cm, and the results were compared with those from previous models and with experimental measurements of ground temperature at two different locations. The analysis showed that the newly developed model provided better estimates of observed field temperatures, with mean absolute error (MAE), root mean square error (RMSE), and normalized standard error (NSEE) of 1.17 °C, 1.30 °C, and 13.84% at 5 cm; 0.41 °C, 0.49 °C, and 5.45% at 10 cm; and 0.13 °C, 0.18 °C, and 2.23% at 20 cm depth, respectively. These findings provide a useful reference for simulating soil temperature and may be incorporated into other ecosystem models requiring soil temperature as an input variable for modeling permafrost changes under global warming.
Accurate Heart Rate Monitoring During Physical Exercises Using PPG.
Temko, Andriy
2017-09-01
The challenging task of heart rate (HR) estimation from the photoplethysmographic (PPG) signal during intensive physical exercises is tackled in this paper. The study presents a detailed analysis of a novel algorithm (WFPV) that exploits a Wiener filter to attenuate the motion artifacts, a phase vocoder to refine the HR estimate, and user-adaptive post-processing to track the subject's physiology. Additionally, an offline version of the HR estimation algorithm that uses Viterbi decoding is designed for scenarios that do not require online HR monitoring (WFPV+VD). The performance of the HR estimation systems is rigorously compared with existing algorithms on the publicly available database of 23 PPG recordings. On the whole dataset of 23 PPG recordings, the algorithms result in average absolute errors of 1.97 and 1.37 BPM in the online and offline modes, respectively. On the test dataset of 10 PPG recordings which were most corrupted with motion artifacts, WFPV has an error of 2.95 BPM on its own and 2.32 BPM in an ensemble with two existing algorithms. The error rate is significantly reduced when compared with the state-of-the-art PPG-based HR estimation methods. The proposed system is shown to be accurate in the presence of strong motion artifacts and, in contrast to existing alternatives, has very few free parameters to tune. The algorithm has a low computational cost and can be used for fitness tracking and health monitoring in wearable devices. The MATLAB implementation of the algorithm is provided online.
Zhao, Guo; Wang, Hui; Liu, Gang; Wang, Zhiqiang
2016-09-21
An easy but effective method has been proposed to detect and quantify Pb(II) in the presence of Cd(II), based on a Bi/glassy carbon electrode (Bi/GCE) combined with a back-propagation artificial neural network (BP-ANN) and square wave anodic stripping voltammetry (SWASV), without further electrode modification. The effect of Cd(II) at different concentrations on the stripping responses of Pb(II) was studied. The results indicate that the presence of Cd(II) reduces the prediction precision of a direct calibration model. Therefore, a two-input, one-output BP-ANN was built for the optimization of the stripping voltammetric sensor, which considers the combined effects of Cd(II) and Pb(II) on the SWASV detection of Pb(II) and establishes the nonlinear relationship between the stripping peak currents of Pb(II) and Cd(II) and the concentration of Pb(II). The key parameters of the BP-ANN and the factors affecting the SWASV detection of Pb(II) were optimized. The prediction performance of the direct calibration model and the BP-ANN model was tested with regard to the mean absolute error (MAE), root mean square error (RMSE), average relative error (ARE), and correlation coefficient. The results proved that the BP-ANN model exhibited higher prediction accuracy than the direct calibration model. Finally, an analysis of real samples was performed to determine trace Pb(II) in soil specimens, with satisfactory results.
Yu, Chun-tang; Liu, Ying-ying; Xia, Yu-feng
2014-01-01
The stress-strain data of a 20MnNiMo alloy were collected from a series of hot compressions on a Gleeble-1500 thermal-mechanical simulator in the temperature range of 1173-1473 K and strain rate range of 0.01-10 s−1. Based on the experimental data, an improved Arrhenius-type constitutive model and an artificial neural network (ANN) model were established to predict the high-temperature flow stress of the as-cast 20MnNiMo alloy. The accuracy and reliability of the improved Arrhenius-type model and the trained ANN model were evaluated in terms of the correlation coefficient (R), the average absolute relative error (AARE), and the relative error (η). For the former, R and AARE were found to be 0.9954 and 5.26%, respectively; for the latter, 0.9997 and 1.02%. The relative errors (η) of the improved Arrhenius-type model and the ANN model were in the ranges of −39.99% to 35.05% and −3.77% to 16.74%, respectively. For the former, only 16.3% of the test data set possesses η-values within ±1%, while for the latter more than 79% does. The results indicate that the ANN model has a higher predictive ability than the improved Arrhenius-type constitutive model. PMID:24688358
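For context, the classical sinh-type Arrhenius relation on which such constitutive models are built can be sketched via the Zener-Hollomon parameter. The paper's improved variant makes the material constants strain-dependent; the constants used below are placeholders for illustration, not values fitted to 20MnNiMo:

```python
import math

# Sketch of the sinh-type Arrhenius constitutive relation:
#   Z = eps_dot * exp(Q / (R * T));  sigma = (1/alpha) * asinh((Z/A)^(1/n))
# Q, A, alpha, n below are placeholder material constants.

R_GAS = 8.314  # universal gas constant, J/(mol*K)

def flow_stress_mpa(strain_rate, temp_k, Q, A, alpha, n):
    """Flow stress (MPa) from strain rate (1/s) and temperature (K)."""
    Z = strain_rate * math.exp(Q / (R_GAS * temp_k))  # Zener-Hollomon parameter
    return (1.0 / alpha) * math.asinh((Z / A) ** (1.0 / n))

# Flow stress falls with temperature and rises with strain rate, as expected:
s_hot = flow_stress_mpa(0.1, 1473.0, 360e3, 1e13, 0.012, 5.0)
s_cold = flow_stress_mpa(0.1, 1173.0, 360e3, 1e13, 0.012, 5.0)
```

The ANN model in the paper replaces this closed-form mapping with a learned one, which is why it can track the data more closely than the fitted Arrhenius form.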
A novel diagnosis method for a Hall plates-based rotary encoder with a magnetic concentrator.
Meng, Bumin; Wang, Yaonan; Sun, Wei; Yuan, Xiaofang
2014-07-31
In the last few years, rotary encoders based on two-dimensional complementary metal-oxide-semiconductor (CMOS) Hall plates with a magnetic concentrator have been developed to measure absolute angle contactlessly. Various error factors influence the measuring accuracy, and they are difficult to locate after the encoder is assembled. In this paper, a model-based rapid diagnosis method is presented. Based on an analysis of the error mechanism, an error model is built to compare the minimum residual angle error and to quantify the error factors. Additionally, a modified particle swarm optimization (PSO) algorithm is used to reduce the computational load. The simulation and experimental results show that this diagnosis method is feasible for quantifying the causes of the error and reduces the number of iterations significantly.
Network Adjustment of Orbit Errors in SAR Interferometry
NASA Astrophysics Data System (ADS)
Bahr, Hermann; Hanssen, Ramon
2010-03-01
Orbit errors can induce significant long-wavelength error signals in synthetic aperture radar (SAR) interferograms and thus bias estimates of wide-scale deformation phenomena. The presented approach aims at correcting orbit errors in a preprocessing step to deformation analysis by modifying state vectors. Whereas absolute errors in the orbital trajectory are negligible, the influence of relative errors (baseline errors) is parametrised by their parallel and perpendicular components as linear functions of time. As the sensitivity of the interferometric phase is only significant with respect to the perpendicular baseline and the rate of change of the parallel baseline, the algorithm focuses on estimating updates to these two parameters. This is achieved by a least squares approach, in which the unwrapped residual interferometric phase is observed and atmospheric contributions are considered to be stochastic with constant mean. To enhance reliability, baseline errors are adjusted in an overdetermined network of interferograms, yielding individual orbit corrections per acquisition.
Salt-and-pepper noise removal using modified mean filter and total variation minimization
NASA Astrophysics Data System (ADS)
Aghajarian, Mickael; McInroy, John E.; Wright, Cameron H. G.
2018-01-01
The search for effective noise removal algorithms is still a real challenge in the field of image processing. An efficient image denoising method is proposed for images corrupted by salt-and-pepper noise. Salt-and-pepper noise takes either the minimum or maximum intensity, so the proposed method restores the image by processing only the pixels whose values are either 0 or 255 (assuming an 8-bit/pixel image). For low levels of noise corruption (noise density less than or equal to 50%), the method employs the modified mean filter (MMF), while for heavy noise corruption, noisy pixel values are replaced by a weighted average of the MMF output and the total variation of corrupted pixels, which is minimized using convex optimization. Two fuzzy systems are used to determine the weights for the average. To evaluate the performance of the algorithm, several test images with different noise levels were restored, and the results were quantitatively measured by peak signal-to-noise ratio and mean absolute error. The results show that the proposed scheme gives considerable noise suppression up to a noise density of 90%, while almost completely maintaining the edges and fine details of the original image.
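The low-density branch described above can be illustrated with a simplified mean filter that, like the MMF, averages only neighbors not saturated at 0 or 255. This is a sketch of the idea, not the paper's exact MMF or its fuzzy weighting scheme:

```python
# Simplified salt-and-pepper restoration: replace each saturated (0 or 255)
# pixel with the mean of its non-saturated 8-neighbors.

def denoise_salt_pepper(img):
    """img: 2-D list of ints in [0, 255]; returns a denoised copy."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(h):
        for j in range(w):
            if img[i][j] in (0, 255):  # candidate noise pixel
                clean = [img[a][b]
                         for a in range(max(0, i - 1), min(h, i + 2))
                         for b in range(max(0, j - 1), min(w, j + 2))
                         if (a, b) != (i, j) and img[a][b] not in (0, 255)]
                if clean:  # leave the pixel unchanged if no clean neighbor
                    out[i][j] = sum(clean) / len(clean)
    return out
```

At high noise densities most neighborhoods contain few clean pixels, which is why the paper blends the mean-filter estimate with a total-variation term there.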
Forecasting Daily Volume and Acuity of Patients in the Emergency Department.
Calegari, Rafael; Fogliatto, Flavio S; Lucini, Filipe R; Neyeloff, Jeruza; Kuchenbecker, Ricardo S; Schaan, Beatriz D
2016-01-01
This study aimed at analyzing the performance of four forecasting models in predicting the demand for medical care in terms of daily visits in an emergency department (ED) that handles high complexity cases, testing the influence of climatic and calendrical factors on demand behavior. We tested different mathematical models to forecast ED daily visits at Hospital de Clínicas de Porto Alegre (HCPA), which is a tertiary care teaching hospital located in Southern Brazil. Model accuracy was evaluated using mean absolute percentage error (MAPE), considering forecasting horizons of 1, 7, 14, 21, and 30 days. The demand time series was stratified according to patient classification using the Manchester Triage System's (MTS) criteria. Models tested were the simple seasonal exponential smoothing (SS), seasonal multiplicative Holt-Winters (SMHW), seasonal autoregressive integrated moving average (SARIMA), and multivariate autoregressive integrated moving average (MSARIMA). Performance of models varied according to patient classification, such that SS was the best choice when all types of patients were jointly considered, and SARIMA was the most accurate for modeling demands of very urgent (VU) and urgent (U) patients. The MSARIMA models taking into account climatic factors did not improve the performance of the SARIMA models, independent of patient classification.
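The recursion behind the exponential smoothing family named above can be sketched in its plain, non-seasonal form (the seasonal SS and Holt-Winters variants add seasonal terms); the series and smoothing constant below are illustrative, not the hospital's data:

```python
# Plain simple exponential smoothing: each forecast is a weighted blend of the
# newest observation and the previous smoothed level.

def simple_exponential_smoothing(series, alpha):
    """Return one-step-ahead forecasts; element i forecasts period i+1."""
    level = series[0]
    forecasts = [level]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level  # update toward new observation
        forecasts.append(level)
    return forecasts

daily_visits = [310, 295, 330, 320, 305]  # made-up ED daily visit counts
print(simple_exponential_smoothing(daily_visits, 0.3))
```

A larger alpha reacts faster to demand shifts but passes more noise through, which is one reason the study compares several smoothing-based models against SARIMA.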
Wang, Jinke; Guo, Haoyan
2016-01-01
This paper presents a fully automatic framework for lung segmentation, in which the juxta-pleural nodule problem is brought into strong focus. The proposed scheme consists of three phases: skin boundary detection, rough segmentation of the lung contour, and pulmonary parenchyma refinement. First, the chest skin boundary is extracted through image aligning, morphology operations, and connective region analysis. Second, diagonal-based border tracing is implemented for lung contour segmentation, with a maximum cost path algorithm used for separating the left and right lungs. Finally, by arc-based border smoothing and concave-based border correction, the refined pulmonary parenchyma is obtained. The proposed scheme is evaluated on 45 volumes of chest scans, with volume difference (VD) 11.15 ± 69.63 cm3, volume overlap error (VOE) 3.5057 ± 1.3719%, average surface distance (ASD) 0.7917 ± 0.2741 mm, root mean square distance (RMSD) 1.6957 ± 0.6568 mm, maximum symmetric absolute surface distance (MSD) 21.3430 ± 8.1743 mm, and an average time cost of 2 seconds per image. The preliminary results on accuracy and complexity prove that our scheme is a promising tool for lung segmentation with juxta-pleural nodules.
Feasibility of measuring dissolved carbon dioxide based on head space partial pressures
Watten, B.J.; Boyd, C.E.; Schwartz, M.F.; Summerfelt, S.T.; Brazil, B.L.
2004-01-01
We describe an instrument prototype that measures dissolved carbon dioxide (DC) without need for standard wetted probe membranes or titration. DC is calculated using Henry's Law, water temperature, and the steady-state partial pressure of carbon dioxide that develops within the instrument's vertical gas-liquid contacting chamber. Gas-phase partial pressures were determined with either an infrared detector (ID) or by measuring voltage developed by a pH electrode immersed in an isolated sodium carbonate solution (SC) sparged with recirculated head space gas. Calculated DC concentrations were compared with those obtained by titration over a range of DC (2, 4, 8, 12, 16, 20, 24, and 28 mg/l), total alkalinity (35, 120, and 250 mg/l as CaCO3), total dissolved gas pressure (-178 to 120 mmHg), and dissolved oxygen concentrations (7, 14, and 18 mg/l). Statistically significant (P < 0.001) correlations were established between head space (ID) and titrimetrically determined DC concentrations (R² = 0.987-0.999, N = 96). Millivolt and titrimetric values from the SC solution tests were also correlated (P < 0.001, R² = 0.997, N = 16). The absolute and relative error associated with the use of the ID and SC solution averaged 0.9 mg/l DC and 7.0%, and 0.6 mg/l DC and 9.6%, respectively. The precision of DC estimates established in a second test series was good; coefficients of variation (100(SD/mean)) for the head space (ID) and titration analyses were 0.99% and 1.7%. Precision of the SC solution method was 1.3%. In a third test series, a single ID was coupled with four replicate head space units so as to permit sequential monitoring (15 min intervals) of a common water source. Here, appropriate gas samples were secured using a series of solenoid valves (1.6 mm bore) activated by a time-based controller. This system configuration reduced the capital cost per sample site from US$ 2695 to US$ 876.
Absolute error averaged 2.9, 3.1, 3.7, and 2.7 mg/l for replicates 1-4 (N = 36) during a 21-day test period (DC range, 36-40 mg/l). The ID meter was then modified so as to provide for DO as well as DC measurements across components of an intensive fish production system. © 2003 Elsevier B.V. All rights reserved.
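The head-space calculation described above rests on Henry's Law. A minimal sketch, using generic textbook constants for CO2 (not the instrument's calibration values):

```python
import math

def dissolved_co2_mg_per_l(p_co2_atm, temp_c):
    """Henry's-law estimate of dissolved CO2 from a head-space partial
    pressure. Constants are textbook values, assumed for illustration:
    KH(25 C) ~ 0.034 mol/(L*atm), van 't Hoff factor ~ 2400 K for CO2."""
    t_k = temp_c + 273.15
    # Temperature-corrected Henry's constant, mol/(L*atm)
    kh = 0.034 * math.exp(2400.0 * (1.0 / t_k - 1.0 / 298.15))
    # Convert mol/L to mg/L using CO2 molar mass of 44.01 g/mol
    return kh * p_co2_atm * 44010.0

# e.g. a head-space pCO2 of 0.01 atm in 20 C water
print(round(dissolved_co2_mg_per_l(0.01, 20.0), 1))
```

Colder water holds more CO2 at the same partial pressure, which is why the instrument must measure water temperature alongside the head-space pressure.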
Preston, Jonathan L.; Hull, Margaret; Edwards, Mary Louise
2012-01-01
Purpose To determine if speech error patterns in preschoolers with speech sound disorders (SSDs) predict articulation and phonological awareness (PA) outcomes almost four years later. Method Twenty-five children with histories of preschool SSDs (and normal receptive language) were tested at an average age of 4;6 and followed up at 8;3. The frequency of occurrence of preschool distortion errors, typical substitution and syllable structure errors, and atypical substitution and syllable structure errors were used to predict later speech sound production, PA, and literacy outcomes. Results Group averages revealed below-average school-age articulation scores and low-average PA, but age-appropriate reading and spelling. Preschool speech error patterns were related to school-age outcomes. Children for whom more than 10% of their speech sound errors were atypical had lower PA and literacy scores at school-age than children who produced fewer than 10% atypical errors. Preschoolers who produced more distortion errors were likely to have lower school-age articulation scores. Conclusions Different preschool speech error patterns predict different school-age clinical outcomes. Many atypical speech sound errors in preschool may be indicative of weak phonological representations, leading to long-term PA weaknesses. Preschool distortions may be resistant to change over time, leading to persisting speech sound production problems. PMID:23184137
Measuring the Accuracy of Simple Evolving Connectionist System with Varying Distance Formulas
NASA Astrophysics Data System (ADS)
Al-Khowarizmi; Sitompul, O. S.; Suherman; Nababan, E. B.
2017-12-01
Simple Evolving Connectionist System (SECoS) is a minimal implementation of Evolving Connectionist Systems (ECoS) in artificial neural networks. The three-layer network architecture of the SECoS can be built based on the given input. In this study, the activation value for the SECoS learning process, which is commonly calculated using the normalized Hamming distance, is also calculated using the normalized Manhattan distance and the normalized Euclidean distance in order to compare the smallest error value and best learning rate obtained. The measurement accuracy produced by the three distance formulas is evaluated using the mean absolute percentage error. In the training phase, with parameters such as sensitivity threshold, error threshold, first learning rate, and second learning rate, it was found that the normalized Euclidean distance is more accurate than both the normalized Hamming distance and the normalized Manhattan distance. In the case of beta fibrinogen gene -455 G/A polymorphism patients used as training data, the highest mean absolute percentage error value is obtained with the normalized Manhattan distance, compared to the normalized Euclidean distance and the normalized Hamming distance. However, the differences are so small that it can be concluded that the three distance formulas used in SECoS do not have a significant effect on the accuracy of the training results.
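The three distance formulas can be sketched as follows; the normalizations shown are common ECoS-style conventions and assume input vectors pre-scaled to [0, 1], which may differ in detail from the study's implementation:

```python
import numpy as np

def norm_hamming(a, b):
    """Fraction of components that differ (one common normalization)."""
    return float(np.mean(np.asarray(a) != np.asarray(b)))

def norm_manhattan(a, b):
    """Mean absolute component difference."""
    return float(np.mean(np.abs(np.asarray(a, float) - np.asarray(b, float))))

def norm_euclidean(a, b):
    """Euclidean distance scaled by sqrt(n) so it stays in [0, 1]."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.linalg.norm(a - b) / np.sqrt(a.size))

x = [0.2, 0.8, 0.5, 0.1]  # input vector (hypothetical, scaled to [0, 1])
w = [0.2, 0.6, 0.5, 0.4]  # evolving-node weight vector (hypothetical)
for d in (norm_hamming, norm_manhattan, norm_euclidean):
    print(d.__name__, round(d(x, w), 3))
```

In an ECoS network the chosen distance drives node activation and hence which evolving node wins, so small differences between these formulas can propagate into small accuracy differences, consistent with the study's finding.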
NASA Astrophysics Data System (ADS)
Talamonti, James J.; Kay, Richard B.; Krebs, Danny J.
1996-05-01
A numerical model was developed to emulate the capabilities of systems performing noncontact absolute distance measurements. The model incorporates known methods to minimize signal processing and digital sampling errors and evaluates the accuracy limitations imposed by spectral peak isolation using Hanning, Blackman, and Gaussian windows in the fast Fourier transform technique. We applied this model to the specific case of measuring the relative lengths of a compound Michelson interferometer. By processing computer-simulated data through our model, we project the ultimate precision for ideal data and for data containing AM-FM noise. The precision is shown to be limited by nonlinearities in the laser scan.
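A toy version of the spectral peak isolation step, applying the three window types before the FFT; the signal parameters are arbitrary, not those of the interferometer model:

```python
import numpy as np

fs, n = 1000.0, 4096                 # sample rate (Hz) and record length
t = np.arange(n) / fs
# Dominant tone plus a weak interfering tone (both frequencies arbitrary)
sig = np.sin(2 * np.pi * 123.4 * t) + 0.01 * np.sin(2 * np.pi * 300.0 * t)

# Gaussian window with sigma chosen as 0.4 of the half-width
gauss = np.exp(-0.5 * ((np.arange(n) - n / 2) / (0.4 * n / 2)) ** 2)

for name, win in [("hanning", np.hanning(n)),
                  ("blackman", np.blackman(n)),
                  ("gaussian", gauss)]:
    spec = np.abs(np.fft.rfft(sig * win))
    peak_hz = np.argmax(spec) * fs / n   # quantized to the bin spacing fs/n
    print(name, round(peak_hz, 2))
```

Each window trades main-lobe width against sidelobe leakage; the residual bin-quantization error here (up to fs/2n) is the kind of digital sampling error the model's interpolation methods are designed to minimize.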
Absolute magnitude calibration using trigonometric parallax - Incomplete, spectroscopic samples
NASA Technical Reports Server (NTRS)
Ratnatunga, Kavan U.; Casertano, Stefano
1991-01-01
A new numerical algorithm is used to calibrate the absolute magnitude of spectroscopically selected stars from their observed trigonometric parallax. This procedure, based on maximum-likelihood estimation, can retrieve unbiased estimates of the intrinsic absolute magnitude and its dispersion even from incomplete samples suffering from selection biases in apparent magnitude and color. It can also make full use of low accuracy and negative parallaxes and incorporate censorship on reported parallax values. Accurate error estimates are derived for each of the fitted parameters. The algorithm allows an a posteriori check of whether the fitted model gives a good representation of the observations. The procedure is described in general and applied to both real and simulated data.
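The key idea, fitting in parallax space so that low-accuracy and even negative parallaxes still contribute without bias, can be sketched with a simplified grid-search maximum-likelihood estimate on simulated stars (zero intrinsic dispersion, uniform errors, and no selection censorship, unlike the full algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate stars sharing one true absolute magnitude (dispersion ignored).
M_TRUE, N = 5.0, 400
m = rng.uniform(8.0, 12.0, N)                       # apparent magnitudes
plx_true = 10 ** ((M_TRUE - m - 5.0) / 5.0) * 1e3   # parallax in mas
sigma = 2.0                                         # mas measurement error
plx_obs = plx_true + rng.normal(0.0, sigma, N)      # some come out negative

def neg_log_like(M):
    """Gaussian likelihood written in parallax space: no inversion of
    noisy parallaxes, so negative values are handled naturally."""
    pred = 10 ** ((M - m - 5.0) / 5.0) * 1e3
    return np.sum((plx_obs - pred) ** 2) / (2.0 * sigma ** 2)

grid = np.linspace(3.0, 7.0, 801)
M_hat = grid[np.argmin([neg_log_like(M) for M in grid])]
print(round(M_hat, 2))
```

Inverting each noisy parallax to a distance and averaging magnitudes would bias the result; modeling the observed parallax directly, as above, is what lets the full algorithm retrieve unbiased estimates from incomplete samples.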
Single-breath diffusing capacity for carbon monoxide instrument accuracy across 3 health systems.
Hegewald, Matthew J; Markewitz, Boaz A; Wilson, Emily L; Gallo, Heather M; Jensen, Robert L
2015-03-01
Measuring diffusing capacity of the lung for carbon monoxide (DLCO) is complex and associated with wide intra- and inter-laboratory variability. Increased DLCO variability may have important clinical consequences. The objective of the study was to assess instrument performance across hospital pulmonary function testing laboratories using a DLCO simulator that produces precise and repeatable DLCO values. DLCO instruments were tested with CO gas concentrations representing medium- and high-range DLCO values. The absolute difference between the observed and target DLCO value was used to determine measurement accuracy; accuracy was defined as an average deviation from the target value of < 2.0 mL/min/mm Hg. Accuracy of inspired volume measurement and gas sensors was also determined. Twenty-three instruments were tested across 3 healthcare systems. The mean absolute deviation from the target value was 1.80 mL/min/mm Hg (range 0.24-4.23), with 10 of 23 instruments (43%) being inaccurate. High-volume laboratories performed better than low-volume laboratories, although the difference was not significant. There was no significant difference among the instruments by manufacturer. Inspired volume was not accurate in 48% of devices; the mean absolute deviation from the target value was 3.7%. Instrument gas analyzers performed adequately in all instruments. DLCO instrument accuracy was unacceptable in 43% of devices. Instrument inaccuracy can be primarily attributed to errors in inspired volume measurement and not gas analyzer performance. DLCO instrument performance may be improved by regular testing with a simulator. Caution should be used when comparing DLCO results reported from different laboratories. Copyright © 2015 by Daedalus Enterprises.
Simulated building energy demand biases resulting from the use of representative weather stations
Burleyson, Casey D.; Voisin, Nathalie; Taylor, Z. Todd; ...
2017-11-06
Numerical building models are typically forced with weather data from a limited number of “representative cities” or weather stations representing different climate regions. The use of representative weather stations reduces computational costs, but often fails to capture spatial heterogeneity in weather that may be important for simulations aimed at understanding how building stocks respond to a changing climate. Here, we quantify the potential reduction in temperature and load biases from using an increasing number of weather stations over the western U.S. Our novel approach is based on deriving temperature and load time series using incrementally more weather stations, ranging from 8 to roughly 150, to evaluate the ability to capture weather patterns across different seasons. Using 8 stations across the western U.S., one from each IECC climate zone, results in an average absolute summertime temperature bias of ~4.0 °C with respect to a high-resolution gridded dataset. The mean absolute bias drops to ~1.5 °C using all available weather stations. Temperature biases of this magnitude could translate to absolute summertime mean simulated load biases as high as 13.5%. Increasing the size of the domain over which biases are calculated reduces their magnitude as positive and negative biases may cancel out. Using 8 representative weather stations can lead to a 20–40% bias of peak building loads during both summer and winter, a significant error for capacity expansion planners who may use these types of simulations. Using weather stations close to population centers reduces both mean and peak load biases. Our approach could be used by others designing aggregate building simulations to understand the sensitivity to their choice of weather stations used to drive the models.
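The core effect, that sparse station sampling inflates the absolute bias of a domain-average temperature, can be illustrated with a toy resampling experiment (synthetic temperatures, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy domain: 1000 grid cells with spatially varying summer temperatures.
grid_temps = 25.0 + rng.normal(0.0, 4.0, 1000)   # deg C, synthetic
truth = grid_temps.mean()                         # the "gridded" reference

def mean_abs_bias(n_stations, trials=500):
    """Average absolute bias of an n-station mean vs the full gridded mean."""
    biases = [abs(rng.choice(grid_temps, n_stations, replace=False).mean()
                  - truth)
              for _ in range(trials)]
    return float(np.mean(biases))

for k in (8, 50, 150):
    print(k, "stations:", round(mean_abs_bias(k), 3), "deg C")
```

The bias shrinks roughly as 1/sqrt(k), mirroring the paper's drop from ~4.0 °C with 8 stations to ~1.5 °C with all stations; real station networks add further structure (elevation, coastal effects) that pure random sampling does not capture.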
Wearable Vector Electrical Bioimpedance System to Assess Knee Joint Health
Hersek, Sinan; Töreyin, Hakan; Teague, Caitlin N.; Millard-Stafford, Mindy L.; Jeong, Hyeon-Ki; Bavare, Miheer M.; Wolkoff, Paul; Sawka, Michael N.; Inan, Omer T.
2017-01-01
Objective We designed and validated a portable electrical bioimpedance (EBI) system to quantify knee joint health. Methods Five separate experiments were performed to demonstrate the: (1) ability of the EBI system to assess knee injury and recovery; (2) inter-day variability of knee EBI measurements; (3) sensitivity of the system to small changes in interstitial fluid volume; (4) reducing the error of EBI measurements using acceleration signals; (5) use of the system with dry electrodes integrated to a wearable knee wrap. Results (1) The absolute difference in resistance (R) and reactance (X) from the left to the right knee was able to distinguish injured and healthy knees (p<0.05); the absolute difference in R decreased significantly (p<0.05) in injured subjects following rehabilitation. (2) The average inter-day variability (standard deviation) of the absolute difference in knee R was 2.5Ω, and for X was, 1.2 Ω. (3) Local heating/cooling resulted in a significant decrease/increase in knee R (p<0.01). (4) The proposed subject position detection algorithm achieved 97.4% leave-one subject out cross-validated accuracy and 98.2% precision in detecting when the subject is in the correct position to take measurements. (5) Linear regression between the knee R and X measured using the wet electrodes and the designed wearable knee wrap were highly correlated (r2 = 0.8 and 0.9, respectively). Conclusion This work demonstrates the use of wearable EBI measurements in monitoring knee joint health. Significance The proposed wearable system has the potential for assessing knee joint health outside the clinic/lab and help guide rehabilitation. PMID:28026745
Simulated building energy demand biases resulting from the use of representative weather stations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burleyson, Casey D.; Voisin, Nathalie; Taylor, Z. Todd
Numerical building models are typically forced with weather data from a limited number of “representative cities” or weather stations representing different climate regions. The use of representative weather stations reduces computational costs, but often fails to capture spatial heterogeneity in weather that may be important for simulations aimed at understanding how building stocks respond to a changing climate. We quantify the potential reduction in bias from using an increasing number of weather stations over the western U.S. The approach is based on deriving temperature and load time series using incrementally more weather stations, ranging from 8 to roughly 150, to capture weather across different seasons. Using 8 stations, one from each climate zone, across the western U.S. results in an average absolute summertime temperature bias of 7.2°F with respect to a spatially-resolved gridded dataset. The mean absolute bias drops to 2.8°F using all available weather stations. Temperature biases of this magnitude could translate to absolute summertime mean simulated load biases as high as 13.8%, a significant error for capacity expansion planners who may use these types of simulations. Increasing the size of the domain over which biases are calculated reduces their magnitude as positive and negative biases may cancel out. Using 8 representative weather stations can lead to a 20-40% overestimation of peak building loads during both summer and winter. Using weather stations close to population centers reduces both mean and peak load biases. This approach could be used by others designing aggregate building simulations to understand the sensitivity to their choice of weather stations used to drive the models.
Li, Zhao; Liu, Yong; Wei, Qingquan; Liu, Yuanjie; Liu, Wenwen; Zhang, Xuelian; Yu, Yude
2016-01-01
Absolute, precise quantification methods expand the scope of nucleic acids research and have many practical applications. Digital polymerase chain reaction (dPCR) is a powerful method for nucleic acid detection and absolute quantification. However, it requires thermal cycling and accurate temperature control, which are difficult in resource-limited conditions. Accordingly, isothermal methods, such as recombinase polymerase amplification (RPA), are more attractive. We developed a picoliter well array (PWA) chip with 27,000 consistently sized picoliter reactions (314 pL) for isothermal DNA quantification using digital RPA (dRPA) at 39°C. Sample loading using a scraping liquid blade was simple, fast, and required small reagent volumes (i.e., <20 μL). Passivating the chip surface using a methoxy-PEG-silane agent effectively eliminated cross-contamination during dRPA. Our creative optical design enabled wide-field fluorescence imaging in situ and both end-point and real-time analyses of picoliter wells in a 6-cm² area. It was not necessary to use scan shooting and stitch serial small images together. Using this method, we quantified serial dilutions of a Listeria monocytogenes gDNA stock solution from 9 × 10⁻¹ to 4 × 10⁻³ copies per well with an average error of less than 11% (N = 15). Overall dRPA-on-chip processing required less than 30 min, which was a 4-fold decrease compared to dPCR, requiring approximately 2 h. dRPA on the PWA chip provides a simple and highly sensitive method to quantify nucleic acids without thermal cycling or precise micropump/microvalve control. It has applications in fast field analysis and critical clinical diagnostics under resource-limited settings.
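Digital assays like dRPA typically convert the fraction of positive wells into a concentration with a Poisson correction; a minimal sketch of that standard estimator (the paper's exact analysis pipeline may differ):

```python
import math

def copies_per_well(positive, total):
    """Poisson-corrected mean copies per well from the positive-well
    fraction: lambda = -ln(1 - p). This is the standard digital PCR/RPA
    estimator, assumed here for illustration."""
    p = positive / total
    return -math.log(1.0 - p)

# e.g. a hypothetical run where 16,200 of the 27,000 wells light up
lam = copies_per_well(16200, 27000)
print(round(lam, 3))  # mean copies per 314-pL well
```

The correction accounts for wells that received more than one template molecule; dividing lambda by the 314-pL well volume would give the absolute concentration.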
Mental health care and average happiness: strong effect in developed nations.
Touburg, Giorgio; Veenhoven, Ruut
2015-07-01
Mental disorder is a main cause of unhappiness in modern society and investment in mental health care is therefore likely to add to average happiness. This prediction was checked in a comparison of 143 nations around 2005. Absolute investment in mental health care was measured using the per capita number of psychiatrists and psychologists working in mental health care. Relative investment was measured using the share of mental health care in the total health budget. Average happiness in nations was measured with responses to survey questions about life-satisfaction. Average happiness appeared to be higher in countries that invest more in mental health care, both absolutely and relative to investment in somatic medicine. A data split by level of development shows that this difference exists only among developed nations. Among these nations the link between mental health care and happiness is quite strong, both in an absolute sense and compared to other known societal determinants of happiness. The correlation between happiness and share of mental health care in the total health budget is twice as strong as the correlation between happiness and size of the health budget. A causal effect is likely, but cannot be proved in this cross-sectional analysis.
Plant traits determine forest flammability
NASA Astrophysics Data System (ADS)
Zylstra, Philip; Bradstock, Ross
2016-04-01
Carbon and nutrient cycles in forest ecosystems are influenced by their inherent flammability - a property determined by the traits of the component plant species that form the fuel and influence the microclimate of a fire. In the absence of a model capable of capturing the complexity of such a system, however, flammability is frequently represented by simple metrics such as surface fuel load. The implications of modelling fire-flammability feedbacks using surface fuel load were examined and compared to a biophysical, mechanistic model (Forest Flammability Model) that incorporates the influence of structural plant traits (e.g. crown shape and spacing) and leaf traits (e.g. thickness, dimensions and moisture). Fuels burn with values of combustibility modelled from leaf traits, transferring convective heat along vectors defined by flame angle and with plume temperatures that decrease with distance from the flame. Flames are re-calculated in one-second time-steps, with new leaves within the plant, neighbouring plants or higher strata ignited when the modelled time to ignition is reached, and other leaves extinguishing when their modelled flame duration is exceeded. The relative influence of surface fuels, vegetation structure and plant leaf traits was examined by comparing flame heights modelled using three treatments that successively added these components within the FFM. Validation was performed across a diverse range of eucalypt forests burnt under widely varying conditions during a forest fire in the Brindabella Ranges west of Canberra (ACT) in 2003. Flame heights ranged from 10 cm to more than 20 m, with an average of 4 m. When modelled from surface fuels alone, flame heights were on average 1.5 m smaller than observed values, and were predicted within the error range 28% of the time. The addition of plant structure produced predicted flame heights that were on average 1.5 m larger than observed, but were correct 53% of the time.
The over-prediction in this case was the result of a small number of large errors, where higher strata such as the forest canopy were modelled to ignite but did not. The addition of leaf traits largely addressed this error, so that the mean flame height over-prediction was reduced to 0.3 m and the fully parameterised FFM gave correct predictions 62% of the time. When small (<1 m) flames were excluded, the fully parameterised model correctly predicted flame heights 12 times more often than could be predicted using surface fuels alone, and the mean absolute error was 4 times smaller. The inadequate consideration of plant traits within a mechanistic framework introduces significant error to forest fire behaviour modelling. The FFM provides a solution to this, and an avenue by which plant trait information can be used to better inform Global Vegetation Models and decision-making tools used to mitigate the impacts of fire.
Use of modern contraception by the poor is falling behind.
Gakidou, Emmanuela; Vayena, Effy
2007-02-01
The widespread increase in the use of contraception, due to multiple factors including improved access to modern contraception, is one of the most dramatic social transformations of the past fifty years. This study explores whether the global progress in the use of modern contraceptives has also benefited the poorest. Demographic and Health Surveys from 55 developing countries were analyzed using wealth indices that allow the identification of the absolute poor within each country. This article explores the macro level determinants of the differences in the use of modern contraceptives between the poor and the national averages of several countries. Despite increases in national averages, use of modern contraception by the absolute poor remains low. South and Southeast Asia have relatively high rates of modern contraception in the absolute poor, on average 17% higher than in Latin America. Over time the gaps in use persist and are increasing. Latin America exhibits significantly larger gaps in use between the poor and the averages, while gaps in sub-Saharan Africa are on average smaller by 15.8% and in Southeast Asia by 11.6%. The secular trend of increasing rates of modern contraceptive use has not resulted in a decrease of the gap in use for those living in absolute poverty. Countries with large economic inequalities also exhibit large inequalities in modern contraceptive use. In addition to macro level factors that influence contraceptive use, such as economic development and provision of reproductive health services, there are strong regional variations, with sub-Saharan Africa exhibiting the lowest national rates of use, South and Southeast Asia the highest use among the poor, and Latin America the largest inequalities in use.
An error criterion for determining sampling rates in closed-loop control systems
NASA Technical Reports Server (NTRS)
Brecher, S. M.
1972-01-01
The determination of an error criterion which will give a sampling rate for adequate performance of linear, time-invariant closed-loop, discrete-data control systems was studied. The proper modelling of the closed-loop control system for characterization of the error behavior, and the determination of an absolute error definition for performance of the two commonly used holding devices are discussed. The definition of an adequate relative error criterion as a function of the sampling rate and the parameters characterizing the system is established along with the determination of sampling rates. The validity of the expressions for the sampling interval was confirmed by computer simulations. Their application solves the problem of making a first choice in the selection of sampling rates.
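A toy analogue of relating sampling rate to holding-device error: the maximum reconstruction error of a zero-order hold shrinks roughly linearly with the sampling interval, which is the kind of relation an error criterion inverts to pick a rate. The signal and intervals below are arbitrary:

```python
import numpy as np

f_sig = 1.0                             # Hz, bandwidth-defining component
t_fine = np.linspace(0.0, 2.0, 20001)   # dense "continuous" time grid
ref = np.sin(2 * np.pi * f_sig * t_fine)

def zoh_max_error(dt):
    """Max absolute error of a zero-order-hold reconstruction at interval dt."""
    samples_t = np.arange(0.0, 2.0 + dt, dt)
    samples = np.sin(2 * np.pi * f_sig * samples_t)
    idx = np.minimum((t_fine / dt).astype(int), samples.size - 1)
    return float(np.max(np.abs(ref - samples[idx])))

for dt in (0.1, 0.02, 0.005):
    print(dt, round(zoh_max_error(dt), 4))
```

Given a tolerable relative error, one would solve this error-versus-dt relation for the largest acceptable sampling interval; a first-order hold would trade a smaller error for more reconstruction effort, mirroring the two holding devices analyzed in the study.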
NASA Astrophysics Data System (ADS)
Rieger, G.; Pinnington, E. H.; Ciubotariu, C.
2000-12-01
Absolute photon emission cross sections following electron capture reactions have been measured for C2+, N3+, N4+ and O3+ ions colliding with Li(2s) atoms at keV energies. The results are compared with calculations using the extended classical over-the-barrier model by Niehaus. We explore the limits of our experimental method and present a detailed discussion of experimental errors.
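For orientation, the simplest classical over-the-barrier estimate (not the extended Niehaus model used in the comparison) gives capture cross sections of roughly the right order of magnitude; the capture radius formula and the Li(2s) binding energy below are standard textbook values, assumed here for illustration:

```python
import math

A0_CM = 0.529177e-8  # Bohr radius in cm

def obm_cross_section(q, ip_au):
    """Simplest classical over-the-barrier estimate: capture radius
    Rc = (2*sqrt(q) + 1) / Ip in atomic units, and sigma = pi * Rc^2,
    assuming unit capture probability inside Rc."""
    rc = (2.0 * math.sqrt(q) + 1.0) / ip_au
    return math.pi * rc ** 2 * A0_CM ** 2   # cm^2

# Li(2s) binding energy ~0.198 a.u.; e.g. a q = 3 projectile such as N3+
print(f"{obm_cross_section(3, 0.198):.2e} cm^2")
```

The loosely bound Li(2s) electron makes these cross sections very large (~10^-14 cm^2), which is why Li targets are attractive for such measurements; the extended model refines this picture with multiple crossing radii and reaction windows.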
[Research on Resistant Starch Content of Rice Grain Based on NIR Spectroscopy Model].
Luo, Xi; Wu, Fang-xi; Xie, Hong-guang; Zhu, Yong-sheng; Zhang, Jian-fu; Xie, Hua-an
2016-03-01
A new method based on near-infrared reflectance spectroscopy (NIRS) analysis was explored to determine the resistant starch content of rice, in place of the common chemical method, which is time-consuming and costly. First, we collected 62 spectral data sets spanning large differences in rice resistant starch content, and the spectral data and measured chemical values were imported into chemometrics software. A near-infrared spectroscopy calibration model for rice resistant starch content was then constructed with the partial least squares (PLS) method. The results are as follows. For internal cross validation, the coefficients of determination (R²) for untreated spectra, MSC+1stD pretreatment, and 1stD+SNV pretreatment were 0.9202, 0.9670 and 0.9767, respectively; the root mean square errors of prediction (RMSEP) were 1.5337, 1.0112 and 0.8371, respectively. For external validation, the coefficients of determination (R²) were 0.805, 0.976 and 0.992, respectively, and the average absolute errors were 1.456, 0.818 and 0.515. There was no significant difference between chemical and predicted values (Tukey multiple comparison), so we consider near-infrared spectroscopy analysis more feasible than chemical measurement. Among the pretreatments, the first derivative combined with standard normal variate (1stD+SNV) gave the highest coefficients of determination (R²) and lowest error values in both internal and external validation. In other words, the calibration model pretreated with 1stD+SNV has higher precision and less error.
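The pretreatment combining a first derivative with standard normal variate scaling can be sketched as follows; the spectra here are synthetic, and practice would typically use a Savitzky-Golay derivative rather than the simple difference shown:

```python
import numpy as np

def first_derivative(spectra):
    """First derivative approximated by a simple difference along wavelengths."""
    return np.diff(spectra, axis=1)

def snv(spectra):
    """Standard normal variate: center and scale each spectrum individually,
    removing multiplicative scatter effects."""
    mu = spectra.mean(axis=1, keepdims=True)
    sd = spectra.std(axis=1, keepdims=True)
    return (spectra - mu) / sd

rng = np.random.default_rng(2)
raw = np.cumsum(rng.normal(0.0, 1.0, (5, 100)), axis=1)  # 5 fake NIR spectra
pre = snv(first_derivative(raw))   # derivative first, then SNV
print(pre.shape)
```

The pretreated matrix would then be the input to the PLS regression; the derivative suppresses baseline drift while SNV removes per-sample scatter, which is consistent with the reported accuracy gain.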
Orphanidou, Christina
2017-02-01
A new method for extracting the respiratory rate from ECG and PPG signals obtained via wearable sensors is presented. The proposed technique employs Ensemble Empirical Mode Decomposition in order to identify the respiration "mode" from the noise-corrupted Heart Rate Variability/Pulse Rate Variability and Amplitude Modulation signals extracted from ECG and PPG signals. The technique was validated with respect to a Respiratory Impedance Pneumography (RIP) signal using the mean absolute and the average relative errors for a group of ambulatory hospital patients. We compared approaches using single respiration-induced modulations on the ECG and PPG signals with approaches fusing the different modulations. Additionally, we investigated whether the presence of both the simultaneously recorded ECG and PPG signals provided a benefit in the overall system performance. Our method outperformed state-of-the-art ECG- and PPG-based algorithms and gave the best results over the whole database with a mean error of 1.8 bpm for 1 min estimates when using the fused ECG modulations, which was a relative error of 10.3%. No statistically significant differences were found when comparing the ECG-, PPG- and ECG/PPG-based approaches, indicating that the PPG can be used as a valid alternative to the ECG for applications using wearable sensors. While the presence of both the ECG and PPG signals did not provide an improvement in the estimation error, it increased the proportion of windows for which an estimate was obtained by at least 9%, indicating that the use of two simultaneously recorded signals might be desirable in high-acuity cases where an RR estimate is required more frequently. Copyright © 2016 Elsevier Ltd. All rights reserved.
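The EEMD step needs a dedicated library, but the final step of reading a respiratory rate off an extracted respiration mode reduces to locating its dominant frequency. A toy sketch under that assumption (synthetic signal; this is not the paper's validated pipeline):

```python
import numpy as np

def resp_rate_bpm(mode, fs):
    """Dominant frequency of a respiration mode, in breaths per minute."""
    spectrum = np.abs(np.fft.rfft(mode - mode.mean()))   # drop DC before peak-picking
    freqs = np.fft.rfftfreq(len(mode), d=1.0 / fs)
    return 60.0 * freqs[np.argmax(spectrum)]

# Synthetic 0.25 Hz respiration mode sampled at 4 Hz for 60 s
fs = 4.0
t = np.arange(0, 60, 1 / fs)
mode = np.sin(2 * np.pi * 0.25 * t)
rate = resp_rate_bpm(mode, fs)   # dominant frequency 0.25 Hz -> 15 bpm
```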
Using lean to improve medication administration safety: in search of the "perfect dose".
Ching, Joan M; Long, Christina; Williams, Barbara L; Blackmore, C Craig
2013-05-01
At Virginia Mason Medical Center (Seattle), the Collaborative Alliance for Nursing Outcomes (CALNOC) Medication Administration Accuracy Quality Study was used in combination with Lean quality improvement efforts to address medication administration safety. Lean interventions were targeted at improving the medication room layout, applying visual controls, and implementing nursing standard work. The interventions were designed to prevent medication administration errors through improving six safe practices: (1) comparing medication with medication administration record, (2) labeling medication, (3) checking two forms of patient identification, (4) explaining medication to patient, (5) charting medication immediately, and (6) protecting the process from distractions/interruptions. Trained nurse auditors observed 9,244 doses for 2,139 patients. Following the intervention, the number of safe-practice violations decreased from 83 violations/100 doses at baseline (January 2010-March 2010) to 42 violations/100 doses at final follow-up (July 2011-September 2011), resulting in an absolute risk reduction of 42 violations/100 doses (95% confidence interval [CI]: 35-48; p < .001). The number of medication administration errors decreased from 10.3 errors/100 doses at baseline to 2.8 errors/100 doses at final follow-up (absolute risk reduction: 7 errors/100 doses [95% CI: 5-10, p < .001]). The "perfect dose" score, reflecting compliance with all six safe practices and absence of any of the eight medication administration errors, improved from 37 in compliance/100 doses at baseline to 68 in compliance/100 doses at the final follow-up. Lean process improvements coupled with direct observation can contribute to substantial decreases in errors in nursing medication administration.
Furmanek, Mariusz P.; Słomka, Kajetan J.; Sobiesiak, Andrzej; Rzepko, Marian; Juras, Grzegorz
2018-01-01
Abstract The proprioceptive information received from mechanoreceptors is potentially responsible for controlling the joint position and force differentiation. However, it is unknown whether cryotherapy influences this complex mechanism. Previously reported results are not universally conclusive and sometimes even contradictory. The main objective of this study was to investigate the impact of local cryotherapy on knee joint position sense (JPS) and force production sense (FPS). The study group consisted of 55 healthy participants (age: 21 ± 2 years, body height: 171.2 ± 9 cm, body mass: 63.3 ± 12 kg, BMI: 21.5 ± 2.6). Local cooling was achieved with the use of gel-packs cooled to -2 ± 2.5°C and applied simultaneously over the knee joint and the quadriceps femoris muscle for 20 minutes. JPS and FPS were evaluated using the Biodex System 4 Pro apparatus. Repeated measures analysis of variance (ANOVA) did not show any statistically significant changes of the JPS and FPS under application of cryotherapy for all analyzed variables: the JPS’s absolute error (p = 0.976), its relative error (p = 0.295), and its variable error (p = 0.489); the FPS’s absolute error (p = 0.688), its relative error (p = 0.193), and its variable error (p = 0.123). The results indicate that local cooling does not affect proprioceptive acuity of the healthy knee joint. They also suggest that local limited cooling before physical activity at low velocity did not present health or injury risk in this particular study group. PMID:29599858
Performance Evaluation of Five Turbidity Sensors in Three Primary Standards
Snazelle, Teri T.
2015-10-28
Open-File Report 2015-1172 is temporarily unavailable. Five commercially available turbidity sensors were evaluated by the U.S. Geological Survey, Hydrologic Instrumentation Facility (HIF) for accuracy and precision in three types of turbidity standards: formazin, StablCal, and AMCO Clear (AMCO–AEPA). The U.S. Environmental Protection Agency (EPA) recognizes all three turbidity standards as primary standards, meaning they are acceptable for reporting purposes. The Forest Technology Systems (FTS) DTS-12, the Hach SOLITAX sc, the Xylem EXO turbidity sensor, the Yellow Springs Instrument (YSI) 6136 turbidity sensor, and the Hydrolab Series 5 self-cleaning turbidity sensor were evaluated to determine if turbidity measurements in the three primary standards are comparable to each other, and to ascertain if the primary standards are truly interchangeable. A formazin 4000 nephelometric turbidity unit (NTU) stock was purchased and dilutions of 40, 100, 400, 800, and 1000 NTU were made fresh the day of testing. StablCal and AMCO Clear (for Hach 2100N) standards with corresponding concentrations were also purchased for the evaluation. Sensor performance was not evaluated in turbidity levels less than 40 NTU due to the unavailability of polymer-bead turbidity standards rated for general use. The percent error was calculated as the true (not absolute) difference between the measured turbidity and the standard value, divided by the standard value. The sensors that demonstrated the best overall performance in the evaluation were the Hach SOLITAX and the Hydrolab Series 5 turbidity sensor when the operating range (0.001–4000 NTU for the SOLITAX and 0.1–3000 NTU for the Hydrolab) was considered in addition to sensor accuracy and precision. The average percent error in the three standards was 3.80 percent for the SOLITAX and -4.46 percent for the Hydrolab.
The DTS-12 also demonstrated good accuracy with an average percent error of 2.02 percent and a maximum relative standard deviation of 0.51 percent for the operating range, which was limited to 0.01–1600 NTU at the time of this report. Test results indicated an average percent error of 19.81 percent in the three standards for the EXO turbidity sensor and 9.66 percent for the YSI 6136. The significant variability in sensor performance in the three primary standards suggests that although all three types are accepted as primary calibration standards, they are not interchangeable, and sensor results in the three types of standards are not directly comparable.
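The signed ("true, not absolute") percent-error definition quoted above is simple enough to state directly as code (a trivial sketch):

```python
def percent_error(measured, standard):
    """Signed (true, not absolute) percent error against a standard value."""
    return 100.0 * (measured - standard) / standard

# Example: a sensor reading 41.5 NTU in a 40 NTU standard
print(percent_error(41.5, 40.0))   # 3.75
```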
Knols, Ruud H; Aufdemkampe, Geert; de Bruin, Eling D; Uebelhart, Daniel; Aaronson, Neil K
2009-01-01
Background Hand-held dynamometry is a portable and inexpensive method to quantify muscle strength. To determine if muscle strength has changed, an examiner must know what part of the difference between a patient's pre-treatment and post-treatment measurements is attributable to real change, and what part is due to measurement error. This study aimed to determine the relative and absolute reliability of intra- and inter-observer strength measurements with a hand-held dynamometer (HHD). Methods Two observers performed maximum voluntary peak torque measurements (MVPT) for isometric knee extension in 24 patients with haematological malignancies. For each patient, the measurements were carried out on the same day. The main outcome measures were the intraclass correlation coefficient (ICC ± 95%CI), the standard error of measurement (SEM), the smallest detectable difference (SDD), the relative values as % of the grand mean of the SEM and SDD, and the limits of agreement for the intra- and inter-observer '3 repetition average' and the 'highest value of 3 MVPT' knee extension strength measures. Results The intra-observer ICCs were 0.94 for the average of 3 MVPT (95%CI: 0.86–0.97) and 0.86 for the highest value of 3 MVPT (95%CI: 0.71–0.94). The ICCs for the inter-observer measurements were 0.89 for the average of 3 MVPT (95%CI: 0.75–0.95) and 0.77 for the highest value of 3 MVPT (95%CI: 0.54–0.90). The SEMs for the intra-observer measurements were 6.22 Nm (3.98% of the grand mean (GM)) and 9.83 Nm (5.88% of GM). For the inter-observer measurements, the SEMs were 9.65 Nm (6.65% of GM) and 11.41 Nm (6.73% of GM). The SDDs for the generated parameters varied from 17.23 Nm (11.04% of GM) to 27.26 Nm (17.09% of GM) for intra-observer measurements, and 26.76 Nm (16.77% of GM) to 31.62 Nm (18.66% of GM) for inter-observer measurements, with similar results for the limits of agreement.
Conclusion The results indicate that there is acceptable relative reliability for evaluating knee strength with a HHD, while the measurement error observed was modest. The HHD may be useful in detecting changes in knee extension strength at the individual patient level. PMID:19272149
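The abstract reports SEM and SDD values without giving formulas; the conventional definitions, which we assume were used, are SEM = SD·√(1 − ICC) and SDD = 1.96·√2·SEM:

```python
import math

def sem(sd, icc):
    """Standard error of measurement from the sample SD and reliability (ICC)."""
    return sd * math.sqrt(1.0 - icc)

def sdd(sem_value):
    """Smallest detectable difference at the 95% confidence level."""
    return 1.96 * math.sqrt(2.0) * sem_value

# Illustrative numbers (not the study's raw data): SD = 25 Nm, ICC = 0.94
s = sem(25.0, 0.94)   # about 6.12 Nm
d = sdd(s)            # about 16.97 Nm
```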
A simplified water temperature model for the Colorado River below Glen Canyon Dam
Wright, S.A.; Anderson, C.R.; Voichick, N.
2009-01-01
Glen Canyon Dam, located on the Colorado River in northern Arizona, has affected the physical, biological and cultural resources of the river downstream in Grand Canyon. One of the impacts to the downstream physical environment that has important implications for the aquatic ecosystem is the transformation of the thermal regime from highly variable seasonally to relatively constant year-round, owing to hypolimnetic releases from the upstream reservoir, Lake Powell. Because of the perceived impacts on the downstream aquatic ecosystem and native fish communities, the Glen Canyon Dam Adaptive Management Program has considered modifications to flow releases and release temperatures designed to increase downstream temperatures. Here, we present a new model of monthly average water temperatures below Glen Canyon Dam designed for first-order, relatively simple evaluation of various alternative dam operations. The model is based on a simplified heat-exchange equation, and model parameters are estimated empirically. The model predicts monthly average temperatures at locations up to 421 km downstream from the dam with average absolute errors less than 0.5 °C for the dataset considered. The modelling approach used here may also prove useful for other systems, particularly below large dams where release temperatures are substantially out of equilibrium with meteorological conditions. We also present some examples of how the model can be used to evaluate scenarios for the operation of Glen Canyon Dam.
Accuracy of measurement in electrically evoked compound action potentials.
Hey, Matthias; Müller-Deile, Joachim
2015-01-15
Electrically evoked compound action potentials (ECAP) in cochlear implant (CI) patients are characterized by the amplitude of the N1P1 complex. The measurement of evoked potentials yields a combination of the measured signal with various noise components, but for ECAP procedures performed in the clinical routine, only the averaged curve is accessible. To date, no detailed analysis of the error magnitude has been published. The aim of this study was to determine the error of the N1P1 amplitude and the factors that affect it. Measurements were performed on 32 CI patients with either CI24RE (CA) or CI512 implants using the software Custom Sound EP (Cochlear). N1P1 error approximation from non-averaged raw data consisting of recorded single sweeps was compared to methods of error approximation based on mean curves. The error approximation of the N1P1 amplitude using averaged data showed comparable results to single-point error estimation. The error of the N1P1 amplitude depends on the number of averaging steps and the amplification; in contrast, it does not depend on the stimulus intensity. The single-point error was smaller and showed better agreement with the 1/√(N) function (N is the number of measured sweeps) than the known maximum-minimum criterion. Evaluation of the N1P1 amplitude should be accompanied by an indication of its error. The retrospective approximation of this measurement error from the averaged data available in clinically used software is possible and best done utilizing the D-trace in forward masking artefact reduction mode (no stimulation is applied and the recording contains only the switch-on artefact). Copyright © 2014 Elsevier B.V. All rights reserved.
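The 1/√N behaviour of residual noise under sweep averaging can be checked with a toy simulation (assuming independent, identically distributed noise, which real ECAP recordings only approximate):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, n_trials = 1.0, 2000

def residual_noise(n_sweeps):
    """Empirical std of the mean of n_sweeps i.i.d. noisy sweeps."""
    sweeps = rng.normal(0.0, sigma, size=(n_trials, n_sweeps))
    return sweeps.mean(axis=1).std()

# Quadrupling the number of averaged sweeps should roughly halve the residual noise
r100, r400 = residual_noise(100), residual_noise(400)
```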
Absolute binding free energy calculations of CBClip host–guest systems in the SAMPL5 blind challenge
Tofoleanu, Florentina; Pickard, Frank C.; König, Gerhard; Huang, Jing; Damjanović, Ana; Baek, Minkyung; Seok, Chaok; Brooks, Bernard R.
2016-01-01
Herein, we report the absolute binding free energy calculations of CBClip complexes in the SAMPL5 blind challenge. Initial conformations of CBClip complexes were obtained using docking and molecular dynamics simulations. Free energy calculations were performed using thermodynamic integration (TI) with soft-core potentials and Bennett’s acceptance ratio (BAR) method based on a serial insertion scheme. We compared the results obtained with TI simulations with soft-core potentials and Hamiltonian replica exchange simulations with the serial insertion method combined with the BAR method. The results show that the difference between the two methods can be mainly attributed to the van der Waals free energies, suggesting that the simulations used for TI, the simulations used for BAR, or both are not fully converged, and the two sets of simulations may have sampled different phase-space regions. The penalty scores of the force field parameters of the 10 guest molecules provided by the CHARMM Generalized Force Field can serve as an indicator of the accuracy of binding free energy calculations. Among our submissions, the combination of docking and TI performed best, yielding a root mean square deviation of 2.94 kcal/mol and an average unsigned error of 3.41 kcal/mol for the ten guest molecules. These values were the best overall among all participants. However, our submissions had little correlation with experiments. PMID:27677749
Radiographic absorptiometry method in measurement of localized alveolar bone density changes.
Kuhl, E D; Nummikoski, P V
2000-03-01
The objective of this study was to measure the accuracy and precision of a radiographic absorptiometry method by using an occlusal density reference wedge in quantification of localized alveolar bone density changes. Twenty-two volunteer subjects had baseline and follow-up radiographs taken of mandibular premolar-molar regions with an occlusal density reference wedge in both films and added bone chips in the baseline films. The absolute bone equivalent densities were calculated in the areas that contained bone chips from the baseline and follow-up radiographs. The differences in densities described the masses of the added bone chips that were then compared with the true masses by using regression analysis. The correlation between the estimated and true bone-chip masses ranged from R = 0.82 to 0.94, depending on the background bone density. There was an average 22% overestimation of the mass of the bone chips when they were in low-density background, and up to 69% overestimation when in high-density background. The precision error of the method, which was calculated from duplicate bone density measurements of non-changing areas in both films, was 4.5%. The accuracy of the intraoral radiographic absorptiometry method is low when used for absolute quantification of bone density. However, the precision of the method is good and the correlation is linear, indicating that the method can be used for serial assessment of bone density changes at individual sites.
Absolute frequency measurement of the 88Sr+ clock transition using a GPS link to the SI second
NASA Astrophysics Data System (ADS)
Dubé, Pierre; E Bernard, John; Gertsvolf, Marina
2017-06-01
We report the results of a recent measurement of the absolute frequency of the 5s 2S1/2 - 4d 2D5/2 transition of the 88Sr+ ion. The optical frequency was measured against the international atomic time realization of the SI second on the geoid as obtained by frequency transfer using a global positioning system link and the precise point positioning technique. The measurement campaign yielded more than 100 h of frequency data. It was performed with improvements to the stability and accuracy of the single-ion clock compared to the last measurement, made in 2012. The single-ion clock uncertainty is evaluated at 1.5 × 10^-17 when contributions from acousto-optic modulator frequency chirps and servo errors are taken into account. The stability of the ion clock is 3 × 10^-15 at 1 s averaging, a factor of three better than in the previous measurement. The results from the two measurement campaigns are in good agreement. The uncertainty of the measurement, primarily from the link to the SI second, is 0.75 Hz (1.7 × 10^-15). The frequency measured for the S-D clock transition of 88Sr+ is ν0 = 444 779 044 095 485.27(75) Hz.
Matching methods evaluation framework for stereoscopic breast x-ray images.
Rousson, Johanna; Naudin, Mathieu; Marchessoux, Cédric
2016-01-01
Three-dimensional (3-D) imaging has been intensively studied in the past few decades. Depth information is an important added value of 3-D systems over two-dimensional systems. Special focus was devoted to the development of stereo matching methods for the generation of disparity maps (i.e., depth information within a 3-D scene). Dedicated frameworks were designed to evaluate and rank the performance of different stereo matching methods, but never considering x-ray medical images. Yet, 3-D x-ray acquisition systems and 3-D medical displays have already been introduced into the diagnostic market. To access the depth information within x-ray stereoscopic images, computing accurate disparity maps is essential. We aimed at developing a framework dedicated to x-ray stereoscopic breast images used to evaluate and rank several stereo matching methods. A multiresolution pyramid optimization approach was integrated into the framework to increase the accuracy and the efficiency of the stereo matching techniques. Finally, a metric was designed to score the results of the stereo matching compared with the ground truth. Eight methods were evaluated and four of them [locally scaled sum of absolute differences (LSAD), zero mean sum of absolute differences, zero mean sum of squared differences, and locally scaled mean sum of squared differences] appeared to perform equally well, with an average error score of 0.04 (0 being a perfect match). LSAD was selected for generating the disparity maps.
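For illustration, the simplest of the cost functions named above is the plain sum of absolute differences (SAD); LSAD additionally rescales each patch by its local mean. A bare-bones 1-D block match, not the paper's framework:

```python
import numpy as np

def best_disparity(left, right, x, radius, max_disp):
    """Disparity minimising SAD between a left patch and shifted right patches.

    No bounds checking: x, radius and max_disp must keep all indices in range.
    """
    patch = left[x - radius : x + radius + 1]
    costs = []
    for d in range(max_disp + 1):
        shifted = right[x - d - radius : x - d + radius + 1]
        costs.append(np.abs(patch - shifted).sum())
    return int(np.argmin(costs))

# Right scanline is the left one shifted 3 px toward lower indices
left = np.zeros(16)
left[6:11] = [1, 5, 9, 5, 1]
right = np.roll(left, -3)
d = best_disparity(left, right, x=8, radius=2, max_disp=5)   # disparity 3
```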
Brassey, Charlotte A.; Margetts, Lee; Kitchener, Andrew C.; Withers, Philip J.; Manning, Phillip L.; Sellers, William I.
2013-01-01
Classic beam theory is frequently used in biomechanics to model the stress behaviour of vertebrate long bones, particularly when creating intraspecific scaling models. Although methodologically straightforward, classic beam theory requires complex irregular bones to be approximated as slender beams, and the errors associated with simplifying complex organic structures to such an extent are unknown. Alternative approaches, such as finite element analysis (FEA), while much more time-consuming to perform, require no such assumptions. This study compares the results obtained using classic beam theory with those from FEA to quantify the beam theory errors and to provide recommendations about when a full FEA is essential for reasonable biomechanical predictions. High-resolution computed tomographic scans of eight vertebrate long bones were used to calculate diaphyseal stress owing to various loading regimes. Under compression, FEA values of minimum principal stress (σmin) were on average 142 per cent (±28% s.e.) larger than those predicted by beam theory, with deviation between the two models correlated to shaft curvature (two-tailed p = 0.03, r2 = 0.56). Under bending, FEA values of maximum principal stress (σmax) and beam theory values differed on average by 12 per cent (±4% s.e.), with deviation between the models significantly correlated to cross-sectional asymmetry at midshaft (two-tailed p = 0.02, r2 = 0.62). In torsion, assuming maximum stress values occurred at the location of minimum cortical thickness brought beam theory and FEA values closest in line, and in this case FEA values of τtorsion were on average 14 per cent (±5% s.e.) higher than beam theory. Therefore, FEA is the preferred modelling solution when estimates of absolute diaphyseal stress are required, although values calculated by beam theory for bending may be acceptable in some situations. PMID:23173199
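The beam-theory side of such comparisons rests on closed-form expressions, e.g. the outer-fibre bending stress σ = M·c/I; a sketch for an idealised hollow circular section (illustrative numbers only, not the study's specimens):

```python
import math

def bending_stress(moment, c, second_moment):
    """Outer-fibre bending stress, sigma = M*c/I."""
    return moment * c / second_moment

def hollow_circle_I(r_out, r_in):
    """Second moment of area of a hollow circular cross-section."""
    return math.pi / 4.0 * (r_out**4 - r_in**4)

# Illustrative: a 10 N*m moment on a tube with 10 mm outer, 6 mm inner radius
I = hollow_circle_I(0.010, 0.006)
sigma = bending_stress(10.0, 0.010, I)   # stress in Pa
```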
Li, Mengshan; Zhang, Huaijing; Chen, Bingsheng; Wu, Yan; Guan, Lixin
2018-03-05
The pKa value of drugs is an important parameter in drug design and pharmacology. In this paper, an improved particle swarm optimization (PSO) algorithm based on population entropy diversity was proposed. In the improved algorithm, when the population entropy was higher than the set maximum threshold, the convergence strategy was adopted; when the population entropy was lower than the set minimum threshold, the divergence strategy was adopted; when the population entropy was between the maximum and minimum thresholds, the self-adaptive adjustment strategy was maintained. The improved PSO algorithm was applied to the training of a radial basis function artificial neural network (RBF ANN) model and the selection of molecular descriptors. A quantitative structure-activity relationship model based on an RBF ANN trained by the improved PSO algorithm was proposed to predict the pKa values of 74 kinds of neutral and basic drugs and then validated on another database containing 20 molecules. The validation results showed that the model had a good prediction performance. The absolute average relative error, root mean square error, and squared correlation coefficient were 0.3105, 0.0411, and 0.9685, respectively. The model can be used as a reference for exploring other quantitative structure-activity relationships.
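The threshold logic described above can be made concrete with Shannon entropy over a discretised swarm as the diversity measure (all names, bin counts and thresholds here are illustrative, not taken from the paper):

```python
import math
from collections import Counter

def population_entropy(positions, n_bins=10, lo=0.0, hi=1.0):
    """Shannon entropy of particle positions binned over [lo, hi)."""
    counts = Counter(min(int((p - lo) / (hi - lo) * n_bins), n_bins - 1)
                     for p in positions)
    n = len(positions)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def choose_strategy(entropy, e_min, e_max):
    """Three-way branch mirroring the abstract's threshold logic."""
    if entropy > e_max:
        return "converge"       # diversity too high: pull the swarm together
    if entropy < e_min:
        return "diverge"        # diversity too low: push particles apart
    return "self-adaptive"

spread = [i / 20 for i in range(20)]   # particles spread over [0, 1)
clumped = [0.5] * 20                   # swarm collapsed onto one point
```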
Machine Learning of Parameters for Accurate Semiempirical Quantum Chemical Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dral, Pavlo O.; von Lilienfeld, O. Anatole; Thiel, Walter
2015-05-12
We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers C7H10O2, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules.
Robust tracking of dexterous continuum robots: Fusing FBG shape sensing and stereo vision.
Rumei Zhang; Hao Liu; Jianda Han
2017-07-01
Robust and efficient tracking of continuum robots is important for improving patient safety during space-confined minimally invasive surgery; however, it has been a particularly challenging task for researchers. In this paper, we present a novel tracking scheme that fuses fiber Bragg grating (FBG) shape sensing and stereo vision to estimate the position of continuum robots. Previous visual tracking easily suffers from a lack of robustness and leads to failure, while the FBG shape sensor can only reconstruct the local shape, with integral cumulative error. The proposed fusion is anticipated to compensate for their shortcomings and improve the tracking accuracy. To verify its effectiveness, the robots' centerline is recognized by morphological operations and reconstructed by a stereo matching algorithm. The shape obtained by the FBG sensor is transformed into the distal tip position with respect to the camera coordinate system through previously calibrated registration matrices. An experimental platform was set up and repeated tracking experiments were carried out. The accuracy estimated by averaging the absolute positioning errors between shape sensing and stereo vision is 0.67±0.65 mm, 0.41±0.25 mm, 0.72±0.43 mm for x, y and z, respectively. Results indicate that the proposed fusion is feasible and can be used for closed-loop control of continuum robots.
NASA Astrophysics Data System (ADS)
Natividad, Gina May R.; Cawiding, Olive R.; Addawe, Rizavel C.
2017-11-01
The increase in the merchandise exports of the country offers information about the Philippines' trading role within the global economy. Merchandise exports statistics are used to monitor the country's overall production that is consumed overseas. This paper investigates the comparison between two models obtained by a) clustering the commodity groups into two based on its proportional contribution to the total exports, and b) treating only the total exports. Different seasonal autoregressive integrated moving average (SARIMA) models were then developed for the clustered commodities and for the total exports based on the monthly merchandise exports of the Philippines from 2011 to 2016. The data set used in this study was retrieved from the Philippine Statistics Authority (PSA) which is the central statistical authority in the country responsible for primary data collection. A test for significance of the difference between means at 0.05 level of significance was then performed on the forecasts produced. The result indicates that there is a significant difference between the mean of the forecasts of the two models. Moreover, upon a comparison of the root mean square error (RMSE) and mean absolute error (MAE) of the models, it was found that the models used for the clustered groups outperform the model for the total exports.
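The two comparison metrics used, RMSE and MAE, have the usual definitions; a minimal sketch with made-up numbers:

```python
import math

def rmse(actual, forecast):
    """Root mean square error between observed and forecast values."""
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

def mae(actual, forecast):
    """Mean absolute error between observed and forecast values."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

# Illustrative monthly values (not PSA data)
actual = [120, 132, 129, 140]
forecast = [118, 135, 130, 137]
r, m = rmse(actual, forecast), mae(actual, forecast)
```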
Byun, Yeun-Sub; Jeong, Rag-Gyo; Kang, Seok-Won
2015-11-13
The real-time recognition of absolute (or relative) position and orientation on a network of roads is a core technology for fully automated or driving-assisted vehicles. This paper presents an empirical investigation of the design, implementation, and evaluation of a self-positioning system based on a magnetic marker reference sensing method for an autonomous vehicle. Specifically, the estimation accuracy of the magnetic sensing ruler (MSR) in the up-to-date estimation of the actual position was successfully enhanced by compensating for time delays in signal processing when detecting the vertical magnetic field (VMF) in an array of signals. In this study, the signal processing scheme was developed to minimize the effects of the distortion of measured signals when estimating the relative positional information based on magnetic signals obtained using the MSR. In other words, the center point in a 2D magnetic field contour plot corresponding to the actual position of magnetic markers was estimated by tracking the errors between pre-defined reference models and measured magnetic signals. The algorithm proposed in this study was validated by experimental measurements using a test vehicle on a pilot network of roads. From the results, the positioning error was found to be less than 0.04 m on average in an operational test.
Nimbus 7 solar backscatter ultraviolet (SBUV) ozone products user's guide
NASA Technical Reports Server (NTRS)
Fleig, Albert J.; Mcpeters, R. D.; Bhartia, P. K.; Schlesinger, Barry M.; Cebula, Richard P.; Klenk, K. F.; Taylor, Steven L.; Heath, Donald F.
1990-01-01
Three ozone tape products from the Solar Backscatter Ultraviolet (SBUV) experiment aboard Nimbus 7 were archived at the National Space Science Data Center. The experiment measures the fraction of incoming radiation backscattered by the Earth's atmosphere at 12 wavelengths. In-flight measurements were used to monitor changes in the instrument sensitivity. Total column ozone is derived by comparing the measurements with calculations of what would be measured for different total ozone amounts. The altitude distribution is retrieved using an optimum statistical technique for the inversion. The estimated initial error in the absolute scale for total ozone is 2 percent, with a 3 percent drift over 8 years. The profile error depends on latitude and height, smallest at 3 to 10 mbar; the drift increases with increasing altitude. Three tape products are described. The High Density SBUV (HDSBUV) tape contains the final derived products - the total ozone and the vertical ozone profile - as well as much detailed diagnostic information generated during the retrieval process. The Compressed Ozone (CPOZ) tape contains only that subset of HDSBUV information, including total ozone and ozone profiles, considered most useful for scientific studies. The Zonal Means Tape (ZMT) contains daily, weekly, monthly and quarterly averages of the derived quantities over 10 deg latitude zones.
Machine learning of parameters for accurate semiempirical quantum chemical calculations
Dral, Pavlo O.; von Lilienfeld, O. Anatole; Thiel, Walter
2015-04-14
We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers C7H10O2, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules.
Measuring radio-signal power accurately
NASA Technical Reports Server (NTRS)
Goldstein, R. M.; Newton, J. W.; Winkelstein, R. A.
1979-01-01
The absolute value of signal power in weak radio signals is determined by computer-aided measurements. The equipment operates by averaging the received signal over a several-minute period and comparing the average value with the previously calibrated noise level of the receiver.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Lilie L., E-mail: lin@uphs.upenn.edu; Hertan, Lauren; Rengan, Ramesh
2012-06-01
Purpose: To determine the impact of body mass index (BMI) on daily setup variations and frequency of imaging necessary for patients with endometrial cancer treated with adjuvant intensity-modulated radiotherapy (IMRT) with daily image guidance (IGRT). Methods and Materials: The daily shifts from a total of 782 orthogonal kilovoltage images from 30 patients who received pelvic IMRT between July 2008 and August 2010 were analyzed. The BMI, mean daily shifts, and random and systematic errors in each translational and rotational direction were calculated for each patient. Margin recipes were generated based on BMI. Linear regression and Spearman rank correlation analysis were performed. To simulate a less-than-daily IGRT protocol, the average shift of the first five fractions was applied to subsequent setups without IGRT for assessing the impact on setup error and margin requirements. Results: Median BMI was 32.9 (range, 23-62). Of the 30 patients, 16.7% (n = 5) were normal weight (BMI <25); 23.3% (n = 7) were overweight (BMI ≥25 to <30); 26.7% (n = 8) were mildly obese (BMI ≥30 to <35); and 33.3% (n = 10) were moderately to severely obese (BMI ≥35). On linear regression, mean absolute vertical, longitudinal, and lateral shifts positively correlated with BMI (p = 0.0127, p = 0.0037, and p < 0.0001, respectively). Systematic errors in the longitudinal and vertical direction were found to be positively correlated with BMI category (p < 0.0001 for both). IGRT for the first five fractions, followed by correction of the mean error for all subsequent fractions, led to a substantial reduction in setup error and resultant margin requirement overall compared with no IGRT. Conclusions: Daily shifts, systematic errors, and margin requirements were greatest in obese patients. For women who are normal weight or overweight, a planning target margin of 7 to 10 mm may be sufficient without IGRT, but for patients who are moderately or severely obese, this is insufficient.
Wang, Guochao; Tan, Lilong; Yan, Shuhua
2018-02-07
We report on a frequency-comb-referenced absolute interferometer which instantly measures long distance by integrating multi-wavelength interferometry with direct synthetic wavelength interferometry. The reported interferometer utilizes four different wavelengths, simultaneously calibrated to the frequency comb of a femtosecond laser, to implement subwavelength distance measurement, while direct synthetic wavelength interferometry is elaborately introduced by launching a fifth wavelength to extend a non-ambiguous range for meter-scale measurement. A linearity test performed comparatively with a He-Ne laser interferometer shows a residual error of less than 70.8 nm in peak-to-valley over a 3 m distance, and a 10 h distance comparison is demonstrated to gain fractional deviations of ~3 × 10−8 versus 3 m distance. Test results reveal that the presented absolute interferometer enables precise, stable, and long-term distance measurements and facilitates absolute positioning applications such as large-scale manufacturing and space missions.
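The non-ambiguous-range extension mentioned above rests on the synthetic wavelength Λ = λ₁λ₂/|λ₁ − λ₂| formed by two optical wavelengths: the interferometric phase is unambiguous over Λ/2, which can be orders of magnitude larger than either optical wavelength. A small sketch with hypothetical comb-referenced wavelengths (the paper's actual wavelengths are not given here):

```python
def synthetic_wavelength(lam1, lam2):
    """Synthetic wavelength (m) formed by two optical wavelengths lam1, lam2 (m)."""
    return lam1 * lam2 / abs(lam1 - lam2)

# hypothetical closely spaced wavelengths near 1550 nm
lam1, lam2 = 1550e-9, 1550.4e-9
L = synthetic_wavelength(lam1, lam2)
# the non-ambiguous range of the interferometric phase is Lambda/2
print(f"synthetic wavelength = {L*1e3:.2f} mm, non-ambiguous range = {L/2*1e3:.2f} mm")
```

Two wavelengths only 0.4 nm apart thus yield a millimetre-scale non-ambiguous range; cascading a coarser fifth wavelength, as in the paper, extends this further toward the metre scale.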
Forecasting in foodservice: model development, testing, and evaluation.
Miller, J L; Thompson, P A; Orabella, M M
1991-05-01
This study was designed to develop, test, and evaluate mathematical models appropriate for forecasting menu-item production demand in foodservice. Data were collected from residence and dining hall foodservices at Ohio State University. Objectives of the study were to collect, code, and analyze the data; develop and test models using actual operation data; and compare forecasting results with current methods in use. Customer count was forecast using deseasonalized simple exponential smoothing. Menu-item demand was forecast by multiplying the count forecast by a predicted preference statistic. Forecasting models were evaluated using mean squared error, mean absolute deviation, and mean absolute percentage error techniques. All models were more accurate than current methods. A broad spectrum of forecasting techniques could be used by foodservice managers with access to a personal computer and spread-sheet and database-management software. The findings indicate that mathematical forecasting techniques may be effective in foodservice operations to control costs, increase productivity, and maximize profits.
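The three evaluation measures named above (mean squared error, mean absolute deviation, and mean absolute percentage error) reduce to a few lines of arithmetic; a generic sketch with made-up demand figures, not the study's data:

```python
def mse(actual, forecast):
    """Mean squared error."""
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

def mad(actual, forecast):
    """Mean absolute deviation."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    """Mean absolute percentage error (actual values must be nonzero)."""
    return 100 * sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual)

# hypothetical daily customer counts vs. forecasts
actual = [100, 120, 90]
forecast = [110, 115, 95]
print(mse(actual, forecast), mad(actual, forecast), mape(actual, forecast))
```

MSE penalizes large misses disproportionately, MAD weights all misses equally, and MAPE normalizes by demand level, which is why studies like this one report all three.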
How is the weather? Forecasting inpatient glycemic control
Saulnier, George E; Castro, Janna C; Cook, Curtiss B; Thompson, Bithika M
2017-01-01
Aim: Apply methods of damped trend analysis to forecast inpatient glycemic control. Method: Observed and calculated point-of-care blood glucose data trends were determined over 62 weeks. Mean absolute percent error was used to calculate differences between observed and forecasted values. Comparisons were drawn between model results and linear regression forecasting. Results: The forecasted mean glucose trends observed during the first 24 and 48 weeks of projections compared favorably to the results provided by linear regression forecasting. However, in some scenarios, the damped trend method changed inferences compared with linear regression. In all scenarios, mean absolute percent error values remained below the 10% accepted by demand industries. Conclusion: Results indicate that forecasting methods historically applied within demand industries can project future inpatient glycemic control. Additional study is needed to determine if forecasting is useful in the analyses of other glucometric parameters and, if so, how to apply the techniques to quality improvement. PMID:29134125
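Damped trend analysis of the kind described is usually Holt's linear method with a damping parameter φ < 1, which flattens long-horizon projections instead of extrapolating a straight line the way linear regression does. A minimal sketch (the smoothing parameters and glucose values are illustrative, not the paper's fitted values):

```python
def damped_trend_forecast(y, h, alpha=0.5, beta=0.3, phi=0.9):
    """h-step-ahead forecast with Holt's damped trend exponential smoothing."""
    level, trend = y[0], y[1] - y[0]  # simple initialization
    for obs in y[1:]:
        prev = level
        level = alpha * obs + (1 - alpha) * (prev + phi * trend)
        trend = beta * (level - prev) + (1 - beta) * phi * trend
    damp = sum(phi ** i for i in range(1, h + 1))  # phi + phi^2 + ... + phi^h
    return level + damp * trend

# hypothetical weekly mean glucose values (mg/dL)
history = [182.0, 179.5, 177.2, 176.8, 174.1, 173.0]
print(damped_trend_forecast(history, h=24))
```

Because the damping sum converges to φ/(1 − φ), the h-step forecast approaches a finite asymptote rather than drifting without bound, which is the behavior that can change inferences relative to linear regression.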
Verifying Safeguards Declarations with INDEPTH: A Sensitivity Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grogan, Brandon R; Richards, Scott
2017-01-01
A series of ORIGEN calculations were used to simulate the irradiation and decay of a number of spent fuel assemblies. These simulations focused on variations in the irradiation history that achieved the same terminal burnup through a different set of cycle histories. Simulated NDA measurements were generated for each test case from the ORIGEN data. These simulated measurement types included relative gammas, absolute gammas, absolute gammas plus neutrons, and concentrations of a set of six isotopes commonly measured by NDA. The INDEPTH code was used to reconstruct the initial enrichment, cooling time, and burnup for each irradiation using each simulated measurement type. The results were then compared to the initial ORIGEN inputs to quantify the size of the errors induced by the variations in cycle histories. Errors were compared based on the underlying changes to the cycle history, as well as the data types used for the reconstructions.
NASA Astrophysics Data System (ADS)
Sharma, Prabhat Kumar
2016-11-01
A framework is presented for the analysis of average symbol error rate (SER) for M-ary quadrature amplitude modulation in a free-space optical communication system. The standard probability density function (PDF)-based approach is extended to evaluate the average SER by representing the Q-function through its Meijer's G-function equivalent. Specifically, a converging power series expression for the average SER is derived considering the zero-boresight misalignment errors in the receiver side. The analysis presented here assumes a unified expression for the PDF of channel coefficient which incorporates the M-distributed atmospheric turbulence and Rayleigh-distributed radial displacement for the misalignment errors. The analytical results are compared with the results obtained using Q-function approximation. Further, the presented results are supported by the Monte Carlo simulations.
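The paper's analysis checks closed-form SER expressions against Monte Carlo simulation. As a simpler illustration of that cross-check, the nearest-neighbour Q-function approximation of square M-QAM SER can be compared with simulation; note this sketch assumes a plain AWGN channel, deliberately omitting the M-distributed turbulence and pointing-error fading treated in the paper:

```python
import numpy as np
from math import erfc, sqrt

def qfunc(x):
    """Gaussian Q-function."""
    return 0.5 * erfc(x / sqrt(2))

def ser_mqam_approx(M, snr):
    """Nearest-neighbour approximation of square M-QAM SER over AWGN (snr linear)."""
    return 4 * (1 - 1 / sqrt(M)) * qfunc(sqrt(3 * snr / (M - 1)))

def ser_mqam_mc(M, snr, n=200_000, seed=1):
    """Monte Carlo SER of square M-QAM over AWGN at symbol SNR `snr` (linear)."""
    rng = np.random.default_rng(seed)
    m = int(sqrt(M))
    levels = np.arange(-(m - 1), m, 2).astype(float)  # e.g. [-3, -1, 1, 3]
    es = 2 * np.mean(levels ** 2)                     # average symbol energy
    sigma = sqrt(es / (2 * snr))                      # per-dimension noise std
    xi, xq = rng.choice(levels, n), rng.choice(levels, n)
    ri = xi + rng.normal(0, sigma, n)
    rq = xq + rng.normal(0, sigma, n)

    def detect(r):  # slice the received value to the nearest constellation level
        idx = np.clip(np.round((r + (m - 1)) / 2), 0, m - 1)
        return 2 * idx - (m - 1)

    return np.mean((detect(ri) != xi) | (detect(rq) != xq))

snr = 10.0  # linear SNR of 10 (i.e., 10 dB)
print(ser_mqam_approx(16, snr), ser_mqam_mc(16, snr))
```

The approximation slightly overcounts corner-symbol errors, so the simulated SER lands a little below it, the same qualitative gap the paper closes with its exact power-series analysis.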
Error analysis of multi-needle Langmuir probe measurement technique.
Barjatya, Aroh; Merritt, William
2018-04-01
The multi-needle Langmuir probe is a fairly new instrument technique that has been flown on several recent sounding rockets and is slated to fly on a subset of the QB50 CubeSat constellation. This paper takes a fundamental look into the data analysis procedures used for this instrument to derive absolute electron density. Our calculations suggest that while the technique remains promising, the current data analysis procedures could easily result in errors of 50% or more. We present a simple data analysis adjustment that can reduce errors by at least a factor of five in typical operation.
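In the standard multi-needle Langmuir probe analysis, the squared collected current grows linearly with bias voltage in the orbital-motion-limited (OML) regime, and electron density is proportional to the square root of that slope, so the core numerical step is a least-squares slope estimate. A sketch with synthetic data (calibration constants for probe geometry and electron charge/mass are omitted, so the result is only a density proxy, not the instrument's actual pipeline):

```python
import numpy as np

def density_proxy(bias_v, currents):
    """Least-squares slope of I^2 vs. bias voltage; under OML theory the
    electron density is proportional to sqrt(slope). Calibration constants
    are intentionally omitted here."""
    slope, _intercept = np.polyfit(bias_v, np.asarray(currents) ** 2, 1)
    return np.sqrt(slope)

# synthetic sweep constructed so that I^2 = 1e-12 * V exactly,
# giving a recovered slope of 1e-12 A^2/V
v = np.array([2.0, 3.0, 4.0, 5.0])
i = np.sqrt(1e-12 * v)
print(density_proxy(v, i))  # proportional to electron density
```

Because the density enters through a square root of a fitted slope, small biases in the fit propagate directly into the derived density, which is one reason the analysis procedure matters as much as the paper argues.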
NASA Astrophysics Data System (ADS)
Rasim; Junaeti, E.; Wirantika, R.
2018-01-01
Accurate forecasting for the sale of a product depends on the forecasting method used. The purpose of this research is to build a motorcycle sales forecasting application using the Fuzzy Time Series method combined with interval determination using an automatic clustering algorithm. Forecasting is done using motorcycle sales data from the last ten years. The error rate of forecasting is then measured using the Mean Percentage Error (MPE) and Mean Absolute Percentage Error (MAPE). The forecasting results for the one-year period obtained in this study fall within the range of good accuracy.
Joshi, Shuchi N; Srinivas, Nuggehally R; Parmar, Deven V
2018-03-01
Our aim was to develop and validate the extrapolative performance of a regression model using a limited sampling strategy for accurate estimation of the area under the plasma concentration versus time curve for saroglitazar. Healthy subject pharmacokinetic data from a well-powered food-effect study (fasted vs fed treatments; n = 50) were used in this work. The first 25 subjects' serial plasma concentration data up to 72 hours and corresponding AUC0-t (ie, 72 hours) from the fasting group comprised a training dataset to develop the limited sampling model. The internal datasets for prediction included the remaining 25 subjects from the fasting group and all 50 subjects from the fed condition of the same study. The external datasets included pharmacokinetic data for saroglitazar from previous single-dose clinical studies. Limited sampling models were composed of 1-, 2-, and 3-concentration-time points' correlation with AUC0-t of saroglitazar. Only models with regression coefficients (R²) >0.90 were screened for further evaluation. The best R² model was validated for its utility based on mean prediction error, mean absolute prediction error, and root mean square error. Both correlations between predicted and observed AUC0-t of saroglitazar and verification of precision and bias using a Bland-Altman plot were carried out. None of the evaluated 1- and 2-concentration-time points models achieved R² > 0.90. Among the various 3-concentration-time points models, only 4 equations passed the predefined criterion of R² > 0.90. Limited sampling models with time points 0.5, 2, and 8 hours (R² = 0.9323) and 0.75, 2, and 8 hours (R² = 0.9375) were validated. Mean prediction error, mean absolute prediction error, and root mean square error were <30% (predefined criterion) and correlation (r) was at least 0.7950 for the consolidated internal and external datasets of 102 healthy subjects for the AUC0-t prediction of saroglitazar. The same models, when applied to the AUC0-t prediction of saroglitazar sulfoxide, showed mean prediction error, mean absolute prediction error, and root mean square error <30%, and correlation (r) was at least 0.9339 in the same pool of healthy subjects. A 3-concentration-time points limited sampling model predicts the exposure of saroglitazar (ie, AUC0-t) within the predefined acceptable bias and imprecision limits. The same model was also used to predict AUC0-∞. The same limited sampling model was found to predict the exposure of saroglitazar sulfoxide within the predefined criteria. This model can find utility during late-phase clinical development of saroglitazar in the patient population. Copyright © 2018 Elsevier HS Journals, Inc. All rights reserved.
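A limited sampling model of the kind described is, at its core, a multiple linear regression from a few concentration-time points to AUC, validated by mean prediction error, mean absolute prediction error, and root mean square error. A minimal sketch with synthetic data (the coefficients, concentrations, and split are invented for illustration, not saroglitazar values):

```python
import numpy as np

def fit_lsm(C, auc):
    """Fit AUC ~ b0 + b1*C1 + b2*C2 + b3*C3 by ordinary least squares.

    C: (n, 3) concentrations at the 3 sampling times; auc: (n,) observed AUC0-t.
    """
    X = np.column_stack([np.ones(len(C)), C])
    coef, *_ = np.linalg.lstsq(X, auc, rcond=None)
    return coef

def validate(coef, C, auc):
    """MPE%, MAPE%, and RMSE of model predictions against observed AUC0-t."""
    pred = np.column_stack([np.ones(len(C)), C]) @ coef
    pe = (pred - auc) / auc * 100
    return {"MPE%": pe.mean(), "MAPE%": np.abs(pe).mean(),
            "RMSE": np.sqrt(np.mean((pred - auc) ** 2))}

rng = np.random.default_rng(7)
C = rng.uniform(0.5, 10, size=(25, 3))                           # hypothetical concentrations
auc = 2 + C @ np.array([3.0, 5.0, 8.0]) + rng.normal(0, 1, 25)   # synthetic AUC0-t values
coef = fit_lsm(C[:20], auc[:20])        # train on the first 20 subjects
print(validate(coef, C[20:], auc[20:])) # validate on the held-out 5
```

The train/validate split mirrors the study's design: the model is fitted on one group and its bias (MPE) and imprecision (MAPE, RMSE) are judged on subjects it never saw.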