Sample records for maximum absolute error

  1. Water quality management using statistical analysis and time-series prediction model

    NASA Astrophysics Data System (ADS)

    Parmar, Kulwinder Singh; Bhardwaj, Rashmi

    2014-12-01

This paper deals with water quality management using statistical analysis and a time-series prediction model. The monthly variation of water quality standards has been used to compare the statistical mean, median, mode, standard deviation, kurtosis, skewness, and coefficient of variation at the Yamuna River. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, the normalized Bayesian information criterion, Ljung-Box analysis, predicted values and confidence limits. Using an autoregressive integrated moving average model, future values of the water quality parameters were estimated. It is observed that the predictive model is useful at 95% confidence limits and that the curve is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen and water temperature (WT), and leptokurtic for chemical oxygen demand and biochemical oxygen demand. It is also observed that the predicted series is close to the original series, indicating a very good fit. All parameters except pH and WT cross the limits prescribed by the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agricultural or industrial use.
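The battery of point-forecast statistics used to validate the model is straightforward to state concretely. A minimal sketch, with made-up observed and predicted values rather than data from the paper:

```python
import numpy as np

def validation_metrics(observed, predicted):
    """Point-forecast error metrics of the kind used to validate the model."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    err = observed - predicted
    return {
        "RMSE": float(np.sqrt(np.mean(err ** 2))),          # root mean square error
        "MAE": float(np.mean(np.abs(err))),                 # mean absolute error
        "MaxAE": float(np.max(np.abs(err))),                # maximum absolute error
        "MAPE": float(np.mean(np.abs(err / observed)) * 100.0),
        "MaxAPE": float(np.max(np.abs(err / observed)) * 100.0),
    }

# Hypothetical monthly dissolved-oxygen values (mg/L), not data from the paper
obs = [7.2, 6.8, 6.5, 7.0]
pred = [7.0, 6.9, 6.7, 6.9]
m = validation_metrics(obs, pred)
```

The maximum absolute error that threads through these records is simply the worst-case entry of the residual vector, while MAPE and MaxAPE normalize each residual by the observed value.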

  2. Proprioceptive deficit in individuals with unilateral tearing of the anterior cruciate ligament after active evaluation of the sense of joint position.

    PubMed

    Cossich, Victor; Mallrich, Frédéric; Titonelli, Victor; de Sousa, Eduardo Branco; Velasques, Bruna; Salles, José Inácio

    2014-01-01

To ascertain whether the proprioceptive deficit in the sense of joint position continues to be present when patients with a limb presenting a deficient anterior cruciate ligament (ACL) are assessed by testing their active reproduction of joint position, in comparison with the contralateral limb. Twenty patients with unilateral ACL tearing participated in the study. Their active reproduction of joint position in the limb with the deficient ACL and in the healthy contralateral limb was tested. Target positions at 20% and 50% of the maximum joint range of motion were used. Proprioceptive performance was determined through the values of the absolute error, variable error and constant error. Significant differences in absolute error were found at both of the positions evaluated, and in constant error at 50% of the maximum joint range of motion. When evaluated in terms of absolute error, the proprioceptive deficit continues to be present even when an active evaluation of the sense of joint position is made. Consequently, this sense involves activity of both intramuscular and tendon receptors.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Heng, E-mail: hengli@mdanderson.org; Zhu, X. Ronald; Zhang, Xiaodong

Purpose: To develop and validate a novel delivery strategy for reducing the respiratory motion–induced dose uncertainty of spot-scanning proton therapy. Methods and Materials: The spot delivery sequence was optimized to reduce dose uncertainty. The effectiveness of the delivery sequence optimization was evaluated using measurements and patient simulation. One hundred ninety-one 2-dimensional measurements using different delivery sequences of a single-layer uniform pattern were obtained with a detector array on a 1-dimensional moving platform. Intensity modulated proton therapy plans were generated for 10 lung cancer patients, and dose uncertainties for different delivery sequences were evaluated by simulation. Results: Without delivery sequence optimization, the maximum absolute dose error can be up to 97.2% in a single measurement, whereas the optimized delivery sequence results in a maximum absolute dose error of ≤11.8%. In patient simulation, the optimized delivery sequence reduces the mean of fractional maximum absolute dose error compared with the regular delivery sequence by 3.3% to 10.6% (32.5-68.0% relative reduction) for different patients. Conclusions: Optimizing the delivery sequence can reduce dose uncertainty due to respiratory motion in spot-scanning proton therapy, assuming the 4-dimensional CT is a true representation of the patients' breathing patterns.

  4. Novel Downhole Electromagnetic Flowmeter for Oil-Water Two-Phase Flow in High-Water-Cut Oil-Producing Wells.

    PubMed

    Wang, Yanjun; Li, Haoyu; Liu, Xingbin; Zhang, Yuhui; Xie, Ronghua; Huang, Chunhui; Hu, Jinhai; Deng, Gang

    2016-10-14

First, the measuring principle, the weight function, and the magnetic field of the novel downhole inserted electromagnetic flowmeter (EMF) are described. Second, the basic design of the EMF is described. Third, dynamic experiments with two EMFs in oil-water two-phase flow are carried out, and the experimental errors are analyzed in detail. The experimental results show that the maximum absolute full-scale error is within 5% when the total flowrate is 5-60 m³/d and the water-cut is higher than 60%, and within 7% when the total flowrate is 2-60 m³/d and the water-cut is higher than 70%. Finally, onsite experiments in high-water-cut oil-producing wells are conducted, and the possible reasons for the errors observed onsite are analyzed. It is found that the EMF provides an effective technology for measuring downhole oil-water two-phase flow.

  5. Novel Downhole Electromagnetic Flowmeter for Oil-Water Two-Phase Flow in High-Water-Cut Oil-Producing Wells

    PubMed Central

    Wang, Yanjun; Li, Haoyu; Liu, Xingbin; Zhang, Yuhui; Xie, Ronghua; Huang, Chunhui; Hu, Jinhai; Deng, Gang

    2016-01-01

First, the measuring principle, the weight function, and the magnetic field of the novel downhole inserted electromagnetic flowmeter (EMF) are described. Second, the basic design of the EMF is described. Third, dynamic experiments with two EMFs in oil-water two-phase flow are carried out, and the experimental errors are analyzed in detail. The experimental results show that the maximum absolute full-scale error is within 5% when the total flowrate is 5–60 m3/d and the water-cut is higher than 60%, and within 7% when the total flowrate is 2–60 m3/d and the water-cut is higher than 70%. Finally, onsite experiments in high-water-cut oil-producing wells are conducted, and the possible reasons for the errors observed onsite are analyzed. It is found that the EMF provides an effective technology for measuring downhole oil-water two-phase flow. PMID:27754412

  6. Does the Length of Elbow Flexors and Visual Feedback Have Effect on Accuracy of Isometric Muscle Contraction in Men after Stroke?

    PubMed Central

    Juodzbaliene, Vilma; Darbutas, Tomas; Skurvydas, Albertas

    2016-01-01

The aim of the study was to determine the effect of different muscle lengths and visual feedback information (VFI) on the accuracy of isometric contraction of the elbow flexors in men after an ischemic stroke (IS). Materials and Methods. Maximum voluntary muscle contraction force (MVMCF) and an accurately determinate muscle force (20% of MVMCF) developed during an isometric contraction of the elbow flexors at 90° and 60° of elbow flexion were measured with an isokinetic dynamometer in healthy subjects (MH, n = 20) and subjects after an IS during their postrehabilitation period (MS, n = 20). Results. To evaluate the accuracy of the isometric contraction of the elbow flexors, absolute errors were calculated; these provide information about the difference between the determinate and achieved muscle force. Conclusions. There is a tendency for both MH and MS subjects to make greater absolute errors in generating the determinate force at a greater elbow flexor length, despite the presence of VFI. Absolute errors also increase in both groups at a greater elbow flexor length without VFI. MS subjects make greater absolute errors in generating the determinate force without VFI, in comparison with MH subjects, at a shorter elbow flexor length. PMID:27042670

  7. Proprioceptive deficit in patients with complete tearing of the anterior cruciate ligament.

    PubMed

    Godinho, Pedro; Nicoliche, Eduardo; Cossich, Victor; de Sousa, Eduardo Branco; Velasques, Bruna; Salles, José Inácio

    2014-01-01

To investigate the existence of proprioceptive deficits between the injured limb and the uninjured (i.e. contralateral normal) limb in individuals who suffered complete tearing of the anterior cruciate ligament (ACL), using a strength reproduction test. Sixteen patients with complete tearing of the ACL participated in the study. A voluntary maximum isometric strength test was performed, with reproduction of the muscle strength in the limb with complete tearing of the ACL and in the healthy contralateral limb, with the knee flexed at 60°. A target intensity of 20% of the voluntary maximum isometric strength was used for the procedure. Proprioceptive performance was determined by means of absolute error, variable error and constant error values. Significant differences were found between the control group and the ACL group for the variables of absolute error (p = 0.05) and constant error (p = 0.01). No difference was found in relation to variable error (p = 0.83). Our data corroborate the hypothesis that there is a proprioceptive deficit in subjects with complete tearing of the ACL in the injured limb, in comparison with the uninjured limb, during evaluation of the sense of strength. This deficit can be explained in terms of partial or total loss of the mechanoreceptors of the ACL.

  8. Optimal quantum error correcting codes from absolutely maximally entangled states

    NASA Astrophysics Data System (ADS)

    Raissi, Zahra; Gogolin, Christian; Riera, Arnau; Acín, Antonio

    2018-02-01

Absolutely maximally entangled (AME) states are pure multipartite generalizations of bipartite maximally entangled states, with the property that all reduced states of at most half the system size are maximally mixed. AME states are of interest for multipartite teleportation and quantum secret sharing and have recently found new applications in the context of high-energy physics, in toy models realizing the AdS/CFT correspondence. We work out in detail the connection between AME states of minimal support and classical maximum distance separable (MDS) error correcting codes and, in particular, provide explicit closed-form expressions for AME states of n parties with local dimension …

  9. Noise-Enhanced Eversion Force Sense in Ankles With or Without Functional Instability.

    PubMed

    Ross, Scott E; Linens, Shelley W; Wright, Cynthia J; Arnold, Brent L

    2015-08-01

Force sense impairments are associated with functional ankle instability. Stochastic resonance stimulation (SRS) may have implications for correcting these force sense deficits. To determine if SRS improved force sense. Case-control study. Research laboratory. Twelve people with functional ankle instability (age = 23 ± 3 years, height = 174 ± 8 cm, mass = 69 ± 10 kg) and 12 people with stable ankles (age = 22 ± 2 years, height = 170 ± 7 cm, mass = 64 ± 10 kg). The eversion force sense protocol required participants to reproduce a targeted muscle tension (10% of maximum voluntary isometric contraction). This protocol was assessed under SRSon and SRSoff (control) conditions. During SRSon, random subsensory mechanical noise was applied to the lower leg at a customized optimal intensity for each participant. Constant error, absolute error, and variable error measures quantified accuracy, overall performance, and consistency of force reproduction, respectively. With SRS, we observed main effects for force sense absolute error (SRSoff = 1.01 ± 0.67 N, SRSon = 0.69 ± 0.42 N) and variable error (SRSoff = 1.11 ± 0.64 N, SRSon = 0.78 ± 0.56 N) (P < .05). No other main effects or treatment-by-group interactions were found (P > .05). Although SRS reduced the overall magnitude (absolute error) and variability (variable error) of force sense errors, it had no effect on the directionality (constant error). Clinically, SRS may enhance the ability to sense muscle tension, which could have treatment implications for ankle stability.
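The three error measures used here (and in the ACL studies above) have standard definitions: constant error is the signed bias, absolute error the mean magnitude, and variable error the spread across trials. A small sketch with hypothetical force-reproduction trials, not data from the study:

```python
import statistics

def force_sense_errors(target, trials):
    """Constant, absolute and variable error for repeated force reproductions."""
    diffs = [t - target for t in trials]
    ce = statistics.mean(diffs)                     # constant error: signed bias
    ae = statistics.mean(abs(d) for d in diffs)     # absolute error: overall magnitude
    ve = statistics.pstdev(diffs)                   # variable error: consistency
    return ce, ae, ve

# Hypothetical eversion-force trials (N) against a 10 N target
ce, ae, ve = force_sense_errors(10.0, [10.5, 9.5, 10.5, 9.5])
```

Note how a subject can be unbiased (constant error zero) yet still inaccurate and inconsistent, which is exactly the distinction the study draws.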

  10. Clinical implementation and error sensitivity of a 3D quality assurance protocol for prostate and thoracic IMRT

    PubMed Central

    Cotter, Christopher; Turcotte, Julie Catherine; Crawford, Bruce; Sharp, Gregory; Mah'D, Mufeed

    2015-01-01

This work aims at three goals: first, to define a set of statistical parameters and plan structures for a 3D pretreatment thoracic and prostate intensity‐modulated radiation therapy (IMRT) quality assurance (QA) protocol; second, to test whether the 3D QA protocol can detect certain clinical errors; and third, to compare the 3D QA method with QA performed with a single ion chamber and a 2D gamma test in detecting those errors. The 3D QA protocol measurements were performed on 13 prostate and 25 thoracic IMRT patients using IBA's COMPASS system. For each treatment planning structure included in the protocol, the following statistical parameters were evaluated: average absolute dose difference (AADD), percent structure volume with absolute dose difference greater than 6% (ADD6), and a 3D gamma test. To test the 3D QA protocol's error sensitivity, two prostate and two thoracic step‐and‐shoot IMRT patients were investigated. Errors introduced to each of the treatment plans included energy switched from 6 MV to 10 MV, multileaf collimator (MLC) leaf errors, linac jaw errors, monitor unit (MU) errors, MLC and gantry angle errors, and detector shift errors. QA was performed on each plan using a single ion chamber and a 2D array of ion chambers for 2D and 3D QA. Based on the measurements performed, we established a uniform set of tolerance levels to determine whether QA passes for each IMRT treatment plan structure: the maximum allowed AADD is 6%; at most 4% of any structure volume may have an absolute dose difference greater than 6% (ADD6); and at most 4% of any structure volume may fail the 3D gamma test with test parameters 3%/3 mm DTA. Of the three QA methods tested, the single ion chamber performed worst, detecting 4 of 18 introduced errors; 2D QA detected 11 of 18 errors; and 3D QA detected 14 of 18 errors. PACS number: 87.56.Fc PMID:26699299
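The gamma test referred to here combines a dose-difference criterion with a distance-to-agreement (DTA) criterion: a reference point passes when some evaluated point is close in both dose and position. A simplified 1D global-gamma sketch with toy profiles, not the COMPASS implementation:

```python
import numpy as np

def gamma_pass_rate(ref, eval_dose, spacing_mm, dose_tol=0.03, dta_mm=3.0):
    """Global 1D gamma test (3%/3 mm by default). Returns percent of
    reference points with gamma <= 1."""
    x = np.arange(len(ref)) * spacing_mm
    dmax = ref.max()                                # global normalization
    gammas = []
    for xi, di in zip(x, ref):
        dd = (eval_dose - di) / (dose_tol * dmax)   # dose-difference term
        dx = (x - xi) / dta_mm                      # distance-to-agreement term
        gammas.append(np.sqrt(dd ** 2 + dx ** 2).min())
    return float((np.asarray(gammas) <= 1.0).mean() * 100.0)

# Toy profiles on a 1 mm grid; evaluated dose is 1% high everywhere,
# well inside the 3% dose tolerance
ref = np.array([10.0, 50.0, 100.0, 50.0, 10.0])
ev = ref * 1.01
rate = gamma_pass_rate(ref, ev, spacing_mm=1.0)
```

A uniform 1% dose offset passes everywhere under a 3%/3 mm criterion, which is why pass-rate thresholds (here, at most 4% of a structure volume failing) are needed to catch subtler, localized errors.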

  11. Evaluation of the geometric stability and the accuracy potential of digital cameras — Comparing mechanical stabilisation versus parameterisation

    NASA Astrophysics Data System (ADS)

    Rieke-Zapp, D.; Tecklenburg, W.; Peipe, J.; Hastedt, H.; Haig, Claudia

Recent tests on the geometric stability of several digital cameras that were not designed for photogrammetric applications have shown that the accuracies accomplished in object space are either limited or that the accuracy potential is not exploited to the fullest extent. A total of 72 calibrations were calculated with four different software products for eleven digital camera models with different hardware setups, some with mechanical fixation of one or more parts. The calibration procedure was chosen in accordance with a German guideline for the evaluation of optical 3D measuring systems [VDI/VDE, VDI/VDE 2634 Part 1, 2002. Optical 3D Measuring Systems-Imaging Systems with Point-by-point Probing. Beuth Verlag, Berlin]. All images were taken with ringflashes, which was considered a standard method for close-range photogrammetry. In cases where the flash was mounted to the lens, the force exerted on the lens tube and the camera mount greatly reduced the accomplished accuracy. Mounting the ringflash to the camera body instead resulted in a large improvement in accuracy in object space. For standard calibration, the best accuracies in object space were accomplished with a Canon EOS 5D and a 35 mm Canon lens whose focusing tube was fixed with epoxy (47 μm maximum absolute length measurement error in object space). The fixation of the Canon lens was fairly easy and inexpensive, resulting in a sevenfold increase in accuracy compared with the same lens type without modification. A similar accuracy was accomplished with a Nikon D3 when mounting the ringflash to the camera instead of the lens (52 μm maximum absolute length measurement error in object space). Parameterisation of geometric instabilities, by introducing an image-variant interior orientation into the calibration process, improved results for most cameras. In this case, a modified Alpa 12 WA yielded the best results (29 μm maximum absolute length measurement error in object space). Extending the parameter model with FiBun software to model not only an image-variant interior orientation but also deformations in the sensor domain of the cameras showed significant improvements only for a small group of cameras. The Nikon D3 yielded the best overall accuracy (25 μm maximum absolute length measurement error in object space) with this calibration procedure, indicating at the same time the presence of image-invariant errors in the sensor domain. Overall, the calibration results showed that digital cameras can be applied for an accurate photogrammetric survey and that little effort was sufficient to greatly improve the accuracy potential of digital cameras.

  12. Estimates of the absolute error and a scheme for an approximate solution to scheduling problems

    NASA Astrophysics Data System (ADS)

    Lazarev, A. A.

    2009-02-01

An approach is proposed for estimating absolute errors and finding approximate solutions to classical NP-hard scheduling problems: minimizing the maximum lateness on one or several machines, and minimizing the makespan. The concept of a metric (distance) between instances of a problem is introduced. The idea behind the approach is, given a problem instance, to construct another instance, at the minimum distance from the initial one in the introduced metric, for which an optimal or approximate solution can be found. Instead of solving the original problem (instance), a set of approximating polynomially or pseudopolynomially solvable problems (instances) is considered, the instance at the minimum distance from the given one is chosen, and the resulting schedule is then applied to the original instance.
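The scheme can be illustrated on the single-machine case, where the earliest-due-date (EDD) order is known to minimize maximum lateness: solve an approximating instance exactly, then apply its schedule to the original, with the error controlled by the distance between the instances. The instance data and the due-date metric below are illustrative, not taken from the paper:

```python
def lmax_edd(jobs):
    """Schedule single-machine jobs in earliest-due-date order (optimal for
    1 || Lmax) and return the maximum lateness of that schedule.
    jobs: list of (processing_time, due_date)."""
    t, lmax = 0, float("-inf")
    for p, d in sorted(jobs, key=lambda j: j[1]):
        t += p                      # completion time of this job
        lmax = max(lmax, t - d)     # lateness = completion - due date
    return lmax

def due_date_distance(a, b):
    """One natural metric between two instances differing only in due dates:
    the largest per-job due-date shift."""
    return max(abs(da - db) for (_, da), (_, db) in zip(a, b))

# Hypothetical instance `a` and a nearby "approximating" instance `b`
a = [(3, 4), (2, 6), (4, 9)]
b = [(3, 5), (2, 6), (4, 8)]
dist = due_date_distance(a, b)
```

Applying the optimal order of the nearby instance to the original one changes the maximum lateness by an amount bounded in terms of this distance, which is the flavor of guarantee the paper develops.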

  13. Absolute magnitude calibration using trigonometric parallax - Incomplete, spectroscopic samples

    NASA Technical Reports Server (NTRS)

    Ratnatunga, Kavan U.; Casertano, Stefano

    1991-01-01

    A new numerical algorithm is used to calibrate the absolute magnitude of spectroscopically selected stars from their observed trigonometric parallax. This procedure, based on maximum-likelihood estimation, can retrieve unbiased estimates of the intrinsic absolute magnitude and its dispersion even from incomplete samples suffering from selection biases in apparent magnitude and color. It can also make full use of low accuracy and negative parallaxes and incorporate censorship on reported parallax values. Accurate error estimates are derived for each of the fitted parameters. The algorithm allows an a posteriori check of whether the fitted model gives a good representation of the observations. The procedure is described in general and applied to both real and simulated data.

  14. High-resolution spatial databases of monthly climate variables (1961-2010) over a complex terrain region in southwestern China

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Xu, An-Ding; Liu, Hong-Bin

    2015-01-01

Climate data in gridded format are critical for understanding climate change and its impact on the eco-environment. The aim of the current study is to develop spatial databases for three climate variables (maximum temperature, minimum temperature, and relative humidity) over a large region with complex topography in southwestern China. Five widely used approaches, including inverse distance weighting, ordinary kriging, universal kriging, co-kriging, and thin-plate smoothing spline, were tested. Root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE) showed that thin-plate smoothing spline with latitude, longitude, and elevation outperformed the other models. Average RMSE, MAE, and MAPE of the best models were 1.16 °C, 0.74 °C, and 7.38 % for maximum temperature; 0.826 °C, 0.58 °C, and 6.41 % for minimum temperature; and 3.44 %, 2.28 %, and 3.21 % for relative humidity, respectively. Spatial datasets of annual and monthly climate variables with 1-km resolution covering the period 1961-2010 were then obtained using the best-performing methods. A comparative study showed that the current outcomes were in good agreement with public datasets. Based on the gridded datasets, changes in the temperature variables were investigated across the study area. Future study might be needed to capture the uncertainty induced by environmental conditions through remote sensing and knowledge-based methods.
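Of the five interpolators compared, inverse distance weighting is the simplest to sketch: each grid point is a weighted mean of station values, with weights decaying as a power of distance. A minimal version with hypothetical station data, not the study's 1-km grids:

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0):
    """Inverse distance weighting: estimate values at query points from
    scattered station observations."""
    xy_known = np.asarray(xy_known, float)
    values = np.asarray(values, float)
    out = []
    for q in np.asarray(xy_query, float):
        d = np.linalg.norm(xy_known - q, axis=1)
        if np.any(d == 0):                  # query coincides with a station
            out.append(values[np.argmin(d)])
            continue
        w = 1.0 / d ** power                # closer stations weigh more
        out.append(float(np.sum(w * values) / np.sum(w)))
    return np.array(out)

# Hypothetical station coordinates (km) and maximum temperatures (°C)
stations = [(0, 0), (10, 0), (0, 10)]
tmax = [30.0, 28.0, 26.0]
est = idw(stations, tmax, [(0, 0), (5, 5)])
```

IDW reproduces station values exactly at station locations, which is one reason cross-validation metrics such as RMSE and MAE must be computed on held-out stations.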

  15. Comparison of artificial intelligence methods and empirical equations to estimate daily solar radiation

    NASA Astrophysics Data System (ADS)

    Mehdizadeh, Saeid; Behmanesh, Javad; Khalili, Keivan

    2016-08-01

In the present research, three artificial intelligence methods, namely Gene Expression Programming (GEP), Artificial Neural Networks (ANN) and the Adaptive Neuro-Fuzzy Inference System (ANFIS), as well as 48 empirical equations (10 temperature-based, 12 sunshine-based and 26 based on other meteorological parameters) were used to estimate daily solar radiation in Kerman, Iran over the period 1992-2009. To develop the GEP, ANN and ANFIS models, depending on the empirical equations used, various combinations of minimum air temperature, maximum air temperature, mean air temperature, extraterrestrial radiation, actual sunshine duration, maximum possible sunshine duration, sunshine duration ratio, relative humidity and precipitation were considered as inputs to the intelligent methods. To compare the accuracy of the empirical equations and the intelligent models, the root mean square error (RMSE), mean absolute error (MAE), mean absolute relative error (MARE) and determination coefficient (R2) indices were used. The results showed that, in general, the sunshine-based and meteorological-parameters-based scenarios in the ANN and ANFIS models were more accurate than the empirical equations. Moreover, the most accurate method in the studied region was the ANN11 scenario with five inputs. The values of the RMSE, MAE, MARE and R2 indices for this model were 1.850 MJ m-2 day-1, 1.184 MJ m-2 day-1, 9.58% and 0.935, respectively.
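As an example of the temperature-based family of empirical equations, the classic Hargreaves-Samani form estimates daily solar radiation from the temperature range and extraterrestrial radiation alone (whether it is among the paper's 10 temperature-based equations is an assumption here). A sketch, with MARE as one of the comparison indices:

```python
import math

def hargreaves_samani(tmax, tmin, ra, krs=0.16):
    """Hargreaves-Samani equation: Rs = krs * sqrt(Tmax - Tmin) * Ra.
    krs is an empirical coefficient, ~0.16 for interior sites and
    ~0.19 for coastal ones."""
    return krs * math.sqrt(tmax - tmin) * ra

def mare(observed, predicted):
    """Mean absolute relative error, one of the comparison indices used."""
    return sum(abs((o - p) / o) for o, p in zip(observed, predicted)) / len(observed)

# Hypothetical day in a semi-arid climate; Ra in MJ m-2 day-1
rs = hargreaves_samani(tmax=34.0, tmin=18.0, ra=40.0)
```

The appeal of such equations is that they need only routinely measured temperatures, which is also why the paper benchmarks them against data-hungrier ANN and ANFIS models.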

  16. Estimation of Power Consumption in the Circular Sawing of Stone Based on Tangential Force Distribution

    NASA Astrophysics Data System (ADS)

    Huang, Guoqin; Zhang, Meiqin; Huang, Hui; Guo, Hua; Xu, Xipeng

    2018-04-01

Circular sawing is an important method for processing natural stone. The ability to predict sawing power is important for the optimisation, monitoring and control of the sawing process. In this paper, a predictive model of sawing power (PFD), based on the tangential force distribution at the sawing contact zone, was proposed, experimentally validated and modified. By accounting for the influence of sawing speed on the tangential force distribution, the modified PFD (MPFD) achieved high predictive accuracy across a wide range of sawing parameters, including sawing speed. The mean maximum absolute error rate was within 6.78%, and the maximum absolute error rate was within 11.7%. The practicability of predicting sawing power with the MPFD from few initial experimental samples was proved in case studies. Provided the sample measurement accuracy is high, only two samples are required for a fixed sawing speed. The feasibility of applying the MPFD to optimise sawing parameters while lowering the energy consumption of the sawing system was also validated. The case study shows that energy use was reduced by 28% by optimising the sawing parameters. The MPFD model can be used to predict sawing power, optimise sawing parameters and control energy consumption.

  17. A soft-computing methodology for noninvasive time-spatial temperature estimation.

    PubMed

    Teixeira, César A; Ruano, Maria Graça; Ruano, António E; Pereira, Wagner C A

    2008-02-01

The safe and effective application of thermal therapies is restricted by the lack of reliable noninvasive temperature estimators. In this paper, the temporal echo-shifts of backscattered ultrasound signals, collected from a gel-based phantom, were tracked and, together with past temperature values, used as input information for radial basis function neural networks. The phantom was heated using a piston-like therapeutic ultrasound transducer. The neural models were used to estimate the temperature at different intensities and at points arranged along the therapeutic transducer's radial line (60 mm from the transducer face). The model inputs, as well as the number of neurons, were selected using a multiobjective genetic algorithm (MOGA). The best attained models present, on average, a maximum absolute error of less than 0.5 degrees C, which is regarded as the borderline between a reliable and an unreliable estimator in hyperthermia/diathermia. To test the spatial generalization capacity, the best models were evaluated at spatial points not previously assessed, and some of them presented a maximum absolute error below 0.5 degrees C, being "elected" as the best models. It should also be stressed that these best models have low implementation complexity, as desired for real-time applications.
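The model family used here, radial basis function networks, can be sketched in a few lines: Gaussian features around fixed centers plus a linear least-squares fit. This is a generic sketch with synthetic data, not the MOGA-selected networks of the paper:

```python
import numpy as np

def rbf_fit_predict(x_train, y_train, x_test, centers, width=1.0):
    """Minimal radial basis function regression: Gaussian features around
    fixed centers, output weights fitted by linear least squares."""
    def features(x):
        return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))
    w, *_ = np.linalg.lstsq(features(x_train), y_train, rcond=None)
    return features(x_test) @ w

# Hypothetical temperature-vs-input training data (a linear toy target in °C)
x = np.linspace(0, 4, 41)
y = 37.0 + 0.5 * x
centers = np.linspace(0, 4, 9)
pred = rbf_fit_predict(x, y, x, centers)
max_abs_err = float(np.max(np.abs(pred - y)))
```

For this smooth toy target the fit stays comfortably below the 0.5 degrees C reliability borderline cited in the abstract; the real difficulty lies in generalizing to unseen spatial points and intensities.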

  18. A Simple Model Predicting Individual Weight Change in Humans

    PubMed Central

    Thomas, Diana M.; Martin, Corby K.; Heymsfield, Steven; Redman, Leanne M.; Schoeller, Dale A.; Levine, James A.

    2010-01-01

Excessive weight in adults is a national concern, with over 2/3 of the US population deemed overweight. Because being overweight has been correlated with numerous diseases such as heart disease and type 2 diabetes, there is a need to understand mechanisms and predict outcomes of weight change and weight maintenance. A simple mathematical model that accurately predicts individual weight change offers opportunities to understand how individuals lose and gain weight and can be used to foster patient adherence to diets in clinical settings. For this purpose, we developed a one-dimensional differential equation model of weight change based on the energy balance equation, paired with an algebraic relationship between fat free mass and fat mass derived from a large nationally representative sample of recently released data collected by the Centers for Disease Control. We validate the model's ability to predict individual participants' weight change by comparing model estimates of final weight with data from two recent underfeeding studies and one overfeeding study. The mean absolute error and standard deviation between model predictions and observed measurements of final weights are less than 1.8 ± 1.3 kg for the underfeeding studies and 2.5 ± 1.6 kg for the overfeeding study. Comparison of the model predictions to other one-dimensional models of weight change shows improvement in mean absolute error, standard deviation of mean absolute error, and group mean predictions. The maximum absolute individual error decreased by approximately 60%, substantiating reliability in individual weight change predictions. The model provides a viable method for estimating individual weight change as a result of changes in intake and for determining individual dietary adherence during weight change studies. PMID:24707319
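A generic one-dimensional energy-balance model of this kind can be sketched with forward-Euler integration. The coefficients below (resting metabolic rate proportional to body weight, roughly 7700 kcal per kg of tissue) are common textbook values, not the authors' fitted parameters or their fat/fat-free-mass partition:

```python
def simulate_weight(w0, intake_kcal, days, pal=1.5, rmr_per_kg=22.0,
                    kcal_per_kg=7700.0):
    """Simplified one-dimensional energy-balance sketch (not the authors'
    exact formulation): dW/dt = (intake - expenditure) / rho, with
    expenditure = PAL * RMR and RMR proportional to body weight."""
    w = w0
    for _ in range(days):
        expenditure = pal * rmr_per_kg * w          # kcal/day
        w += (intake_kcal - expenditure) / kcal_per_kg
    return w

# Hypothetical 80 kg adult eating 2000 kcal/day for 100 days
w_final = simulate_weight(80.0, 2000.0, 100)
```

Because expenditure falls as weight falls, the trajectory decays toward a new steady state rather than dropping linearly, which is the qualitative behavior such differential-equation models capture and static "3500 kcal per pound" rules miss.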

  19. Aquatic habitat mapping with an acoustic doppler current profiler: Considerations for data quality

    USGS Publications Warehouse

    Gaeuman, David; Jacobson, Robert B.

    2005-01-01

When mounted on a boat or other moving platform, acoustic Doppler current profilers (ADCPs) can be used to map a wide range of ecologically significant phenomena, including measures of fluid shear, turbulence, vorticity, and near-bed sediment transport. However, the instrument movement necessary for mapping applications can generate significant errors, many of which have not been adequately described. This report focuses on the mechanisms by which moving-platform errors are generated, and quantifies their magnitudes under typical habitat-mapping conditions. The potential for velocity errors caused by misalignment of the instrument's internal compass is widely recognized, but has not previously been quantified for moving instruments. Numerical analyses show that even relatively minor compass misalignments can produce significant velocity errors, depending on the ratio of absolute instrument velocity to the target velocity and on the relative directions of instrument and target motion. A maximum absolute instrument velocity of about 1 m/s is recommended for most mapping applications. Lower velocities are appropriate when making bed velocity measurements, an emerging application that uses ADCP bottom-tracking to measure the velocity of sediment particles at the bed. The mechanisms by which heterogeneities in the flow velocity field generate horizontal velocity errors are also quantified, and some basic limitations in the effectiveness of standard error-detection criteria for identifying these errors are described. Bed velocity measurements may be particularly vulnerable to errors caused by spatial variability in the sediment transport field.
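The compass-misalignment error has a simple geometric core: rotating the instrument-motion correction through a heading error theta perturbs the recovered water velocity by 2 sin(theta/2) times the instrument speed, so the error scales with platform speed rather than with the (often much smaller) target velocity. A sketch of this simplified relation, not the report's full analysis:

```python
import math

def misalignment_error(instrument_speed, heading_error_deg):
    """Magnitude of the water-velocity error from applying the platform-motion
    correction with a compass heading error:
    |(R(theta) - I) v| = 2 * sin(theta/2) * |v|."""
    theta = math.radians(heading_error_deg)
    return 2.0 * math.sin(theta / 2.0) * instrument_speed

# A 5-degree compass error on a boat moving at 1 m/s
err = misalignment_error(1.0, 5.0)
```

Even this modest heading error yields roughly 9 cm/s of spurious velocity, comparable to real near-bed velocities, which is why the report recommends keeping instrument speed to about 1 m/s or less.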

  20. Type Ia supernovae as standard candles

    NASA Technical Reports Server (NTRS)

    Branch, David; Miller, Douglas L.

    1993-01-01

The distribution of absolute blue magnitudes among Type Ia supernovae (SNe Ia) is studied. Supernovae were used with well-determined apparent magnitudes at maximum light and parent galaxies with relative distances determined by the Tully-Fisher or Dn - sigma techniques. The mean absolute blue magnitude is given, and the observational dispersion is only sigma(MB) = 0.36, comparable to the expected combined errors in distance, apparent magnitude, and extinction. The mean (B-V) color at maximum light is 0.03 +/- 0.04, with a dispersion sigma(B-V) = 0.20. The Cepheid-based distance to IC 4182, the parent galaxy of the normal and unextinguished Type Ia SN 1937C, leads to a Hubble constant of H(0) = 51 +/- 12 km/s/Mpc. The existence of a few SNe Ia that appear to have been reddened and dimmed by dust in their parent galaxies does not seriously compromise the use of SNe Ia as distance indicators.

  1. Validation of the Kp Geomagnetic Index Forecast at CCMC

    NASA Astrophysics Data System (ADS)

    Frechette, B. P.; Mays, M. L.

    2017-12-01

The Community Coordinated Modeling Center (CCMC) Space Weather Research Center (SWRC) sub-team provides space weather services to NASA robotic mission operators and science campaigns and prototypes new models, forecasting techniques, and procedures. The Kp index is a measure of geomagnetic disturbances in the magnetosphere, such as geomagnetic storms and substorms. In this study, we validated the Newell et al. (2007) Kp prediction equation from December 2010 to July 2017. The purpose of this research is to understand Kp forecast performance, because it is critical for NASA missions to have confidence in the space weather forecast. This was done by computing the Kp error for each forecast (average, minimum, maximum) and each synoptic period. To quantify forecast performance, we then computed the mean error, mean absolute error, root mean square error, multiplicative bias, and correlation coefficient. A contingency table was made for each forecast, and skill scores were computed; the results were compared to the perfect score and to the reference-forecast skill score. In conclusion, the skill score and error results show that the minimum of the predicted Kp over each synoptic period from the Newell et al. (2007) Kp prediction equation performed better than the maximum or average of the prediction. However, persistence (the reference forecast) outperformed all of the Kp forecasts (minimum, maximum, and average). Overall, the Newell Kp prediction still predicts Kp to within about one unit, even though persistence beats it.
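Skill scores from a 2x2 contingency table are standard in forecast verification: they compare hits, false alarms, misses, and correct negatives against a chance baseline. A sketch using the Heidke skill score with hypothetical event counts (the study's actual tables and event thresholds are not reproduced here):

```python
def heidke_skill_score(hits, false_alarms, misses, correct_negatives):
    """Heidke skill score from a 2x2 forecast contingency table:
    1 is a perfect forecast, 0 is no skill beyond random chance."""
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    num = 2.0 * (a * d - b * c)
    den = (a + c) * (c + d) + (a + b) * (b + d)
    return num / den

# Hypothetical storm/no-storm counts for a Kp-threshold event forecast
hss = heidke_skill_score(hits=20, false_alarms=10, misses=5, correct_negatives=65)
```

Comparing such a score for the model against the same score for persistence is exactly the kind of reference-forecast comparison the study performs.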

  2. Impact of spot charge inaccuracies in IMPT treatments.

    PubMed

    Kraan, Aafke C; Depauw, Nicolas; Clasie, Ben; Giunta, Marina; Madden, Tom; Kooy, Hanne M

    2017-08-01

    Spot charge is one parameter of a pencil-beam scanning dose delivery system whose accuracy is typically high but whose required accuracy has not been investigated. In this work we quantify the impact of spot charge inaccuracies on the dose distribution in patients. Knowing the effect of charge errors is relevant for conventional proton machines, as well as for new-generation proton machines, where ensuring accurate charge may be challenging. Through perturbation of spot charge in treatment plans for seven patients and a phantom, we evaluated the dose impact of absolute (up to 5 × 10^6 protons) and relative (up to 30%) charge errors. We investigated the dependence on beam width by studying scenarios with small, medium and large beam sizes. Treatment plan statistics included the Γ passing rate, dose-volume histograms and dose differences. The allowable absolute charge error for small-spot plans was about 2 × 10^6 protons; larger limits would be allowed if larger spots were used. For relative errors, the maximum allowable error was about 13%, 8% and 6% for small, medium and large spots, respectively. Dose distributions turned out to be surprisingly robust against random spot charge perturbation. Our study suggests that ensuring spot charge errors as small as 1-2%, as is commonly aimed at in conventional proton therapy machines, is clinically not strictly needed. © 2017 American Association of Physicists in Medicine.

  3. Streamflow simulation studies of the Hillsborough, Alafia, and Anclote Rivers, west-central Florida

    USGS Publications Warehouse

    Turner, J.F.

    1979-01-01

    A modified version of the Georgia Tech Watershed Model was applied for the purpose of flow simulation in three large river basins of west-central Florida. Calibrations were evaluated by comparing the following synthesized and observed data: annual hydrographs for the 1959, 1960, 1973 and 1974 water years, flood hydrographs (maximum daily discharge and flood volume), and long-term annual flood-peak discharges (1950-72). Annual hydrographs, excluding the 1973 water year, were compared using average absolute error in annual runoff and daily flows and correlation coefficients of monthly and daily flows. Correlation coefficients for simulated and observed maximum daily discharges and flood volumes used for calibrating range from 0.91 to 0.98 and average standard errors of estimate range from 18 to 45 percent. Correlation coefficients for simulated and observed annual flood-peak discharges range from 0.60 to 0.74 and average standard errors of estimate range from 33 to 44 percent. (Woodard-USGS)

  4. [Effect of stock abundance and environmental factors on the recruitment success of small yellow croaker in the East China Sea].

    PubMed

    Liu, Zun-lei; Yuan, Xing-wei; Yang, Lin-lin; Yan, Li-ping; Zhang, Hui; Cheng, Jia-hua

    2015-02-01

    Multiple hypotheses are available to explain recruitment rate. Model selection methods can be used to identify the best model that supports a particular hypothesis. However, using a single model for estimating recruitment success is often inadequate for an overexploited population because of high model uncertainty. In this study, stock-recruitment data of small yellow croaker in the East China Sea collected from fishery-dependent and independent surveys between 1992 and 2012 were used to examine density-dependent effects on recruitment success. Model selection methods based on frequentist (AIC, maximum adjusted R2 and P-values) and Bayesian (Bayesian model averaging, BMA) approaches were applied to identify the relationship between recruitment and environmental conditions. Interannual variability of the East China Sea environment was indicated by sea surface temperature (SST), meridional wind stress (MWS), zonal wind stress (ZWS), sea surface pressure (SPP) and runoff of Changjiang River (RCR). Mean absolute error, mean squared predictive error and continuous ranked probability score were calculated to evaluate the predictive performance for recruitment success. The results showed that model structures were not consistent across the three model selection methods: the predictive variables selected were spawning abundance and MWS by AIC; spawning abundance alone by P-values; and spawning abundance, MWS and RCR by maximum adjusted R2. Recruitment success decreased linearly with stock abundance (P < 0.01), suggesting that an overcompensation effect in recruitment success might be due to cannibalism or food competition. Meridional wind intensity showed a marginally significant, positive effect on recruitment success (P = 0.06), while runoff of Changjiang River showed a marginally negative effect (P = 0.07). 
Based on mean absolute error and continuous ranked probability score, the predictive error associated with models obtained from BMA was the smallest among the different approaches, while that from models selected on the P-values of the independent variables was the highest. However, the mean squared predictive error from models selected on the maximum adjusted R2 was the highest. We found that the BMA method could improve the prediction of recruitment success, derive more accurate prediction intervals and quantitatively evaluate model uncertainty.

  5. A Model of Self-Monitoring Blood Glucose Measurement Error.

    PubMed

    Vettoretti, Martina; Facchinetti, Andrea; Sparacino, Giovanni; Cobelli, Claudio

    2017-07-01

    A reliable model of the probability density function (PDF) of self-monitoring of blood glucose (SMBG) measurement error would be important for several applications in diabetes, like testing in silico insulin therapies. In the literature, the PDF of SMBG error is usually described by a Gaussian function, whose symmetry and simplicity are unable to properly describe the variability of experimental data. Here, we propose a new methodology to derive more realistic models of the SMBG error PDF. The blood glucose range is divided into zones where the error (absolute or relative) presents a constant standard deviation (SD). In each zone, a suitable PDF model is fitted by maximum likelihood to experimental data. Model validation is performed by goodness-of-fit tests. The method is tested on two databases collected by the One Touch Ultra 2 (OTU2; Lifescan Inc, Milpitas, CA) and the Bayer Contour Next USB (BCN; Bayer HealthCare LLC, Diabetes Care, Whippany, NJ). In both cases, skew-normal and exponential models are used to describe the distribution of errors and outliers, respectively. Two zones were identified: zone 1 with constant-SD absolute error; zone 2 with constant-SD relative error. Goodness-of-fit tests confirmed that the identified PDF models are valid and superior to the Gaussian models used so far in the literature. The proposed methodology allows realistic models of the SMBG error PDF to be derived. These models can be used in several investigations of present interest in the scientific community, for example, to perform in silico clinical trials to compare SMBG-based with nonadjunctive CGM-based insulin treatments.
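The per-zone procedure (maximum-likelihood fit of a skew-normal PDF, then a goodness-of-fit check) can be sketched with SciPy. The data here are synthetic errors drawn from an assumed skew-normal, not the OTU2 or BCN databases, and the parameter values are illustrative only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic SMBG errors for one glucose zone (assumed distribution, not meter data)
errors = stats.skewnorm.rvs(a=4, loc=-2, scale=8, size=2000, random_state=rng)

# Maximum-likelihood fit of a skew-normal PDF, as in the per-zone approach
a_hat, loc_hat, scale_hat = stats.skewnorm.fit(errors)

# Goodness of fit via a Kolmogorov-Smirnov test against the fitted model
ks_stat, p_value = stats.kstest(errors, "skewnorm", args=(a_hat, loc_hat, scale_hat))
```

A Gaussian fit to the same data would be rejected far more readily, which is the paper's motivation for the skew-normal choice.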

  6. Detecting and quantifying stellar magnetic fields. Sparse Stokes profile approximation using orthogonal matching pursuit

    NASA Astrophysics Data System (ADS)

    Carroll, T. A.; Strassmeier, K. G.

    2014-03-01

    Context. In recent years, we have seen a rapidly growing number of stellar magnetic field detections for various types of stars. Many of these magnetic fields are estimated from spectropolarimetric observations (Stokes V) by using the so-called center-of-gravity (COG) method. Unfortunately, the accuracy of this method rapidly deteriorates with increasing noise and thus calls for a more robust procedure that combines signal detection and field estimation. Aims: We introduce an estimation method that provides not only the effective or mean longitudinal magnetic field from an observed Stokes V profile but also uses the net absolute polarization of the profile to obtain an estimate of the apparent (i.e., velocity resolved) absolute longitudinal magnetic field. Methods: By combining the COG method with an orthogonal-matching-pursuit (OMP) approach, we were able to decompose observed Stokes profiles with an overcomplete dictionary of wavelet-basis functions to reliably reconstruct the observed Stokes profiles in the presence of noise. The elementary wave functions of the sparse reconstruction process were utilized to estimate the effective longitudinal magnetic field and the apparent absolute longitudinal magnetic field. A multiresolution analysis complements the OMP algorithm to provide a robust detection and estimation method. Results: An extensive Monte-Carlo simulation confirms the reliability and accuracy of the magnetic OMP approach where a mean error of under 2% is found. Its full potential is obtained for heavily noise-corrupted Stokes profiles with signal-to-noise variance ratios down to unity. In this case a conventional COG method yields a mean error for the effective longitudinal magnetic field of up to 50%, whereas the OMP method gives a maximum error of 18%. 
It is, moreover, shown that even in the case of very small residual noise on a level between 10^-3 and 10^-5, a regime reached by current multiline reconstruction techniques, the conventional COG method incorrectly interprets a large portion of the residual noise as a magnetic field, with values of up to 100 G. The magnetic OMP method, on the other hand, remains largely unaffected by the noise; regardless of the noise level, the maximum error is no greater than 0.7 G.
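The greedy core of orthogonal matching pursuit is compact: repeatedly pick the dictionary atom most correlated with the current residual, then re-fit all selected atoms by least squares. The following is a minimal NumPy sketch on a random dictionary, not the wavelet dictionary or Stokes-profile pipeline of the paper.

```python
import numpy as np

def omp(D, y, n_atoms):
    """Orthogonal matching pursuit: greedily select atoms of dictionary D
    (unit-norm columns) and re-fit the support by least squares each step."""
    residual = y.copy()
    support = []
    for _ in range(n_atoms):
        j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x, residual

rng = np.random.default_rng(1)
D = rng.standard_normal((100, 40))
D /= np.linalg.norm(D, axis=0)                       # unit-norm atoms
x_true = np.zeros(40)
x_true[[3, 17]] = [2.0, -1.5]                        # 2-sparse signal
y = D @ x_true + 0.01 * rng.standard_normal(100)     # noisy observation
x_hat, res = omp(D, y, n_atoms=2)
```

In the paper's setting the recovered sparse coefficients feed the COG-style field estimates; here they simply reconstruct the synthetic signal.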

  7. Smartphone-based photoplethysmographic imaging for heart rate monitoring.

    PubMed

    Alafeef, Maha

    2017-07-01

    The purpose of this study is to make use of visible-light reflected-mode photoplethysmographic (PPG) imaging for heart rate (HR) monitoring via smartphones. The system uses the built-in camera in mobile phones to capture video from the subject's index fingertip. The video is processed, and the PPG signal resulting from the video stream processing is used to calculate the subject's heart rate. Records from 19 subjects were used to evaluate the system's performance. The HR values obtained by the proposed method were compared with the actual HR. The obtained results show an accuracy of 99.7% and a maximum absolute error of 0.4 beats/min, with most of the absolute errors in the range of 0.04-0.3 beats/min. Given the encouraging results, this type of HR measurement can be adopted with great benefit, especially for personal use or home-based care. The proposed method represents an efficient portable solution for accurate HR detection and recording.
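The abstract does not detail the video-to-HR processing, but the final step, estimating HR from a PPG waveform by detecting pulse peaks, can be illustrated on a synthetic signal. The frame rate, waveform shape and peak-spacing constraint below are assumptions, not values from the study.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 30.0                                 # assumed camera frame rate, Hz
t = np.arange(0, 20, 1 / fs)
bpm_true = 72.0
# Synthetic fingertip PPG: periodic pulse wave plus slow baseline drift
ppg = np.sin(2 * np.pi * (bpm_true / 60) * t) + 0.3 * np.sin(2 * np.pi * 0.1 * t)

# Enforce a minimum peak spacing corresponding to a physiological 180 bpm ceiling
peaks, _ = find_peaks(ppg, distance=fs * 60 / 180)
intervals = np.diff(peaks) / fs           # inter-beat intervals in seconds
bpm_est = 60.0 / intervals.mean()
```

With real camera data the signal would first need detrending and band-pass filtering; this sketch shows only the peak-interval arithmetic.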

  8. Estimation of the Total Atmospheric Water Vapor Content and Land Surface Temperature Based on AATSR Thermal Data

    PubMed Central

    Zhang, Tangtang; Wen, Jun; van der Velde, Rogier; Meng, Xianhong; Li, Zhenchao; Liu, Yuanyong; Liu, Rong

    2008-01-01

    The total atmospheric water vapor content (TAWV) and land surface temperature (LST) play important roles in meteorology, hydrology, ecology and other disciplines. In this paper, ENVISAT/AATSR (Advanced Along-Track Scanning Radiometer) thermal data are used to estimate the TAWV and LST over the Loess Plateau in China by using a practical split-window algorithm. The distribution of the TAWV accords with that of the MODIS TAWV products, which indicates that the estimation of the total atmospheric water vapor content is reliable. Validation of the LST against ground measurements indicates that the maximum absolute deviation, the maximum relative error and the average relative error are 4.0 K, 11.8% and 5.0%, respectively, which shows that the retrievals are credible; this algorithm can provide a new way to estimate the LST from AATSR data. PMID:27879795

  9. Reliable absolute analog code retrieval approach for 3D measurement

    NASA Astrophysics Data System (ADS)

    Yu, Shuang; Zhang, Jing; Yu, Xiaoyang; Sun, Xiaoming; Wu, Haibin; Chen, Deyun

    2017-11-01

    The wrapped phase of phase-shifting approach can be unwrapped by using Gray code, but both the wrapped phase error and Gray code decoding error can result in period jump error, which will lead to gross measurement error. Therefore, this paper presents a reliable absolute analog code retrieval approach. The combination of unequal-period Gray code and phase shifting patterns at high frequencies are used to obtain high-frequency absolute analog code, and at low frequencies, the same unequal-period combination patterns are used to obtain the low-frequency absolute analog code. Next, the difference between the two absolute analog codes was employed to eliminate period jump errors, and a reliable unwrapped result can be obtained. Error analysis was used to determine the applicable conditions, and this approach was verified through theoretical analysis. The proposed approach was further verified experimentally. Theoretical analysis and experimental results demonstrate that the proposed approach can perform reliable analog code unwrapping.

  10. Forecasting Error Calculation with Mean Absolute Deviation and Mean Absolute Percentage Error

    NASA Astrophysics Data System (ADS)

    Khair, Ummul; Fahmi, Hasanul; Hakim, Sarudin Al; Rahim, Robbi

    2017-12-01

    Prediction using a forecasting method is one of the most important activities for an organization. Selecting an appropriate forecasting method is important, but quantifying a method's percentage error is more important if decision makers are to act on the forecasts. Using the Mean Absolute Deviation and the Mean Absolute Percentage Error to calculate the error of the least-squares method resulted in a percentage error of 9.77%, and it was decided that the least-squares method would be used for time-series and trend data.
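The two error measures named in the title have one-line definitions. The sketch below uses hypothetical demand and forecast numbers, not the data behind the paper's 9.77% figure.

```python
import numpy as np

def mad(actual, forecast):
    """Mean Absolute Deviation of the forecast errors."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs(actual - forecast))

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent of the actual values."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# Hypothetical monthly observations vs. a least-squares trend forecast
actual = [112, 118, 132, 129, 121, 135]
forecast = [110, 120, 128, 131, 125, 134]
```

MAD is in the units of the series, while MAPE is scale-free, which is why the paper reports the latter as a percentage.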

  11. Forecast models for suicide: Time-series analysis with data from Italy.

    PubMed

    Preti, Antonio; Lentini, Gianluca

    2016-01-01

    The prediction of suicidal behavior is a complex task. To fine-tune targeted preventative interventions, predictive analytics (i.e. forecasting future risk of suicide) is more important than exploratory data analysis (pattern recognition, e.g. detection of seasonality in suicide time series). This study sets out to investigate the accuracy of forecasting models of suicide for men and women. A total of 101 499 male suicides and 39 681 female suicides, which occurred in Italy from 1969 to 2003, were investigated. In order to apply the forecasting model and test its accuracy, the time series were split into a training set (1969 to 1996; 336 months) and a test set (1997 to 2003; 84 months). The main outcome was the accuracy of forecasting models on the monthly number of suicides. These measures of accuracy were used: mean absolute error; root mean squared error; mean absolute percentage error; mean absolute scaled error. In both male and female suicides a change in the trend pattern was observed, with an increase from 1969 onwards, a maximum around 1990, and a decrease thereafter. The variances attributable to the seasonal and trend components were, respectively, 24% and 64% in male suicides, and 28% and 41% in female ones. Both annual and seasonal historical trends of monthly data contributed to forecasting future trends of suicide with a margin of error around 10%. The finding is clearer in the male than in the female time series of suicide. The main conclusion of the study is that models taking seasonality into account seem able to derive information on deviation from the mean when it occurs as a zenith, but they fail to reproduce it when it occurs as a nadir. Preventative efforts should concentrate on the factors that influence the occurrence of increases above the main trend in both seasonal and cyclic patterns of suicides.
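Among the accuracy measures listed, the mean absolute scaled error (MASE) is the least familiar: it scales the test-set MAE by the in-sample MAE of a naive forecast, so values below 1 beat the naive benchmark. A minimal sketch with toy numbers (not the Italian suicide series) follows; the seasonal period m is an assumption of the caller.

```python
import numpy as np

def mase(train, actual, forecast, m=12):
    """Mean Absolute Scaled Error: test-set MAE divided by the in-sample
    MAE of a (seasonal) naive forecast with period m (m=12 for monthly data)."""
    train, actual, forecast = (np.asarray(s, float) for s in (train, actual, forecast))
    scale = np.mean(np.abs(train[m:] - train[:-m]))   # naive in-sample errors
    return np.mean(np.abs(actual - forecast)) / scale

# Toy series: naive one-step errors are all 10, forecast errors average 1.5
train = [10, 20, 10, 20, 10, 20]
actual = [15, 15]
forecast = [14, 17]
score = mase(train, actual, forecast, m=1)
```

Because the scaling uses only training data, MASE remains defined even when the test set contains zeros, unlike MAPE.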

  12. Astigmatism error modification for absolute shape reconstruction using Fourier transform method

    NASA Astrophysics Data System (ADS)

    He, Yuhang; Li, Qiang; Gao, Bo; Liu, Ang; Xu, Kaiyuan; Wei, Xiaohong; Chai, Liqun

    2014-12-01

    A method is proposed to modify astigmatism errors in absolute shape reconstruction of an optical plane using the Fourier transform method. If a transmission flat and a reflection flat are used in an absolute test, two translation measurements make it possible to obtain the absolute shapes by exploiting the characteristic relationship between the differential and original shapes in the spatial-frequency domain. However, because the translation device cannot keep the test and reference flats rigidly parallel to each other after the translations, a tilt error exists in the obtained differential data, which causes power and astigmatism errors in the reconstructed shapes. In order to modify the astigmatism errors, a rotation measurement is added. Based on the rotation invariance of the form of the Zernike polynomial in a circular domain, the astigmatism terms are calculated by solving polynomial-coefficient equations related to the rotation differential data, and the astigmatism terms containing the error are subsequently corrected. Computer simulation proves the validity of the proposed method.

  13. Development of Bio-impedance Analyzer (BIA) for Body Fat Calculation

    NASA Astrophysics Data System (ADS)

    Riyadi, Munawar A.; Nugraha, A.; Santoso, M. B.; Septaditya, D.; Prakoso, T.

    2017-04-01

    Common weight scales cannot assess body composition or determine the fat mass and fat-free mass that make up body weight. This research proposes a bio-impedance analysis (BIA) tool capable of body composition assessment. The tool uses four electrodes: two drive a 50 kHz sine-wave current through the body, and the other two measure the voltage produced by the body for impedance analysis. Parameters such as height, weight, age, and gender are provided individually. These parameters, together with the impedance measurements, are then processed to produce a body fat percentage. The experimental results show impressive repeatability for successive measurements (stdev ≤ 0.25% fat mass). Moreover, results for the hand-to-hand node scheme reveal an average absolute difference between the two analyzer tools of 0.48% (fat mass) across all subjects, with a maximum absolute discrepancy of 1.22% (fat mass). On the other hand, the relative error normalized to Omron’s HBF-306 as the comparison tool is less than 2%. As a result, the system offers a good evaluation tool for fat mass in the body.

  14. Phytoremediation of palm oil mill secondary effluent (POMSE) by Chrysopogon zizanioides (L.) using artificial neural networks.

    PubMed

    Darajeh, Negisa; Idris, Azni; Fard Masoumi, Hamid Reza; Nourani, Abolfazl; Truong, Paul; Rezania, Shahabaldin

    2017-05-04

    Artificial neural networks (ANNs) have been widely used to solve such problems because of their reliable, robust, and salient ability to capture the nonlinear relationships between variables in complex systems. In this study, an ANN was applied to model Chemical Oxygen Demand (COD) and biodegradable organic matter (BOD) removal from palm oil mill secondary effluent (POMSE) by a vetiver system. The independent variables, POMSE concentration, vetiver slip density, and removal time, were considered as input parameters to optimize the network, while the removal percentages of COD and BOD were selected as outputs. To determine the number of hidden-layer nodes, the root mean squared error of the testing set was minimized, and the topologies of the algorithms were compared by coefficient of determination and absolute average deviation. The comparison indicated that the quick propagation (QP) algorithm had the minimum root mean squared error and absolute average deviation, and the maximum coefficient of determination. The importance values of the variables were 42.41% for vetiver slip density, 29.8% for removal time, and 27.79% for POMSE concentration, which shows that none of them is negligible. The results show that the ANN has great potential for predicting COD and BOD removal from POMSE, with a residual standard error (RSE) of less than 0.45%.

  15. Spatial and temporal variability of the overall error of National Atmospheric Deposition Program measurements determined by the USGS collocated-sampler program, water years 1989-2001

    USGS Publications Warehouse

    Wetherbee, G.A.; Latysh, N.E.; Gordon, J.D.

    2005-01-01

    Data from the U.S. Geological Survey (USGS) collocated-sampler program for the National Atmospheric Deposition Program/National Trends Network (NADP/NTN) are used to estimate the overall error of NADP/NTN measurements. Absolute errors are estimated by comparison of paired measurements from collocated instruments. Spatial and temporal differences in absolute error were identified and are consistent with longitudinal distributions of NADP/NTN measurements and spatial differences in precipitation characteristics. The magnitude of error for calcium, magnesium, ammonium, nitrate, and sulfate concentrations, specific conductance, and sample volume is of minor environmental significance to data users. Data collected after a 1994 sample-handling protocol change are prone to less absolute error than data collected prior to 1994. Absolute errors are smaller during non-winter months than during winter months for selected constituents at sites where frozen precipitation is common. Minimum resolvable differences are estimated for different regions of the USA to aid spatial and temporal watershed analyses.

  16. The Infrared Hubble Diagram of Type Ia Supernovae

    NASA Astrophysics Data System (ADS)

    Krisciunas, Kevin

    Photometry of Type Ia supernovae reveals that these objects are standardizable candles in optical passbands - the peak luminosities are related to the rate of decline after maximum light. In the near-infrared bands, there is essentially a characteristic brightness at maximum light for each photometric band. Thus, in the near-infrared they are better than standardizable candles; they are essentially standard candles. Their absolute magnitudes are known to ±0.15 magnitude or better. The infrared observations have the extra advantage that interstellar extinction by dust along the line of sight is a factor of 3-10 smaller than in the optical B and V bands. The size of any systematic errors in the infrared extinction corrections typically becomes smaller than the photometric errors of the observations. Thus, we can obtain distances to the hosts of Type Ia supernovae to ±8% or better. This is particularly useful for extragalactic astronomy and precise measurements of the dark energy component of the universe.

  17. An affordable cuff-less blood pressure estimation solution.

    PubMed

    Jain, Monika; Kumar, Niranjan; Deb, Sujay

    2016-08-01

    This paper presents a cuff-less hypertension pre-screening device that non-invasively and continuously monitors Blood Pressure (BP) and Heart Rate (HR). The proposed device simultaneously records two clinically significant and highly correlated biomedical signals, viz., the Electrocardiogram (ECG) and Photoplethysmogram (PPG). The device provides a common data-acquisition platform that can interface with a PC/laptop, smartphone/tablet, Raspberry Pi, etc. The hardware stores and processes the recorded ECG and PPG in order to extract real-time BP and HR using a kernel regression approach. The BP and HR estimation error is measured in terms of normalized mean square error, Error Standard Deviation (ESD) and Mean Absolute Error (MAE), with respect to a clinically proven digital BP monitor (OMRON HBP1300). The computed error falls within the maximum allowable error specified by the Association for the Advancement of Medical Instrumentation: MAE < 5 mmHg and ESD < 8 mmHg. The results are also validated using a two-tailed dependent-sample t-test. The proposed device is a portable, low-cost, home- and clinic-based solution for continuous health monitoring.

  18. A Gridded Daily Min/Max Temperature Dataset With 0.1° Resolution for the Yangtze River Valley and its Error Estimation

    NASA Astrophysics Data System (ADS)

    Xiong, Qiufen; Hu, Jianglin

    2013-05-01

    The minimum/maximum (Min/Max) temperature in the Yangtze River valley is decomposed into a climatic mean and an anomaly component. A spatial interpolation is developed which combines the 3D thin-plate spline scheme for the climatological mean and the 2D Barnes scheme for the anomaly component to create a daily Min/Max temperature dataset. The climatic mean field is obtained by the 3D thin-plate spline scheme because the decrease of Min/Max temperature with elevation is robust and reliable on long time-scales. The characteristics of the anomaly field tend to be only weakly related to elevation variation, so the anomaly component is adequately analyzed by the 2D Barnes procedure, which is computationally efficient and readily tunable. With this hybridized interpolation method, a daily Min/Max temperature dataset that covers the domain from 99°E to 123°E and from 24°N to 36°N with 0.1° longitudinal and latitudinal resolution is obtained by utilizing daily Min/Max temperature data from three kinds of station observations: national reference climatological stations, basic meteorological observing stations and ordinary meteorological observing stations in 15 provinces and municipalities in the Yangtze River valley from 1971 to 2005. The error of the gridded dataset is assessed by examining cross-validation statistics. The results show that the daily Min/Max temperature interpolation not only has a high correlation coefficient (0.99) and interpolation efficiency (0.98), but also a mean bias error of 0.00 °C. For the maximum temperature, the root mean square error is 1.1 °C and the mean absolute error is 0.85 °C. For the minimum temperature, the root mean square error is 0.89 °C and the mean absolute error is 0.67 °C. 
Thus, the new dataset provides the distribution of Min/Max temperature over the Yangtze River valley with realistic, successive gridded data with 0.1° × 0.1° spatial resolution and daily temporal scale. The primary factors influencing the dataset precision are elevation and terrain complexity. In general, the gridded dataset has a relatively high precision in plains and flatlands and a relatively low precision in mountainous areas.

  19. Spectral contaminant identifier for off-axis integrated cavity output spectroscopy measurements of liquid water isotopes

    NASA Astrophysics Data System (ADS)

    Brian Leen, J.; Berman, Elena S. F.; Liebson, Lindsay; Gupta, Manish

    2012-04-01

    Developments in cavity-enhanced absorption spectrometry have made it possible to measure water isotopes using faster, more cost-effective, field-deployable instrumentation. Several groups have attempted to extend this technology to measure water extracted from plants and found that other extracted organics absorb light at frequencies similar to those absorbed by the water isotopomers, leading to δ2H and δ18O measurement errors (Δδ2H and Δδ18O). In this note, the off-axis integrated cavity output spectroscopy (ICOS) spectra of stable isotopes in liquid water are analyzed to determine the presence of interfering absorbers that lead to erroneous isotope measurements. The baseline offset of the spectra is used to calculate a broadband spectral metric, mBB, and the mean subtracted fit residuals in two regions of interest are used to determine a narrowband metric, mNB. These metrics are used to correct for Δδ2H and Δδ18O. The method was tested on 14 instruments, and Δδ18O was found to scale linearly with contaminant concentration for both narrowband (e.g., methanol) and broadband (e.g., ethanol) absorbers, while Δδ2H scaled linearly with narrowband and as a polynomial with broadband absorbers. Additionally, the isotope errors scaled logarithmically with mNB. Using the isotope error versus mNB and mBB curves, Δδ2H and Δδ18O resulting from methanol contamination were corrected to a maximum mean absolute error of 0.93 ‰ and 0.25 ‰, respectively, while Δδ2H and Δδ18O from ethanol contamination were corrected to a maximum mean absolute error of 1.22 ‰ and 0.22 ‰. Large variation between instruments indicates that the sensitivities must be calibrated for each individual isotope analyzer. These results suggest that properly calibrated interference metrics can be used to correct for polluted samples and extend off-axis ICOS measurements of liquid water to include plant waters, soil extracts, wastewater, and alcoholic beverages. 
The general technique may also be extended to other laser-based analyzers including methane and carbon dioxide isotope sensors.

  20. Robust estimation of thermodynamic parameters (ΔH, ΔS and ΔCp) for prediction of retention time in gas chromatography - Part II (Application).

    PubMed

    Claumann, Carlos Alberto; Wüst Zibetti, André; Bolzan, Ariovaldo; Machado, Ricardo A F; Pinto, Leonel Teixeira

    2015-12-18

    For this work, an analysis of parameter estimation for the retention factor in a GC model was performed, considering two different criteria: the sum of squared errors and the maximum error in absolute value; relevant statistics are described for each case. The main contribution of this work is the implementation of a specialized initialization scheme for the estimated parameters, which features fast convergence (low computational time) and is based on knowledge of the surface of the error criterion. In an application to a series of alkanes, the specialized initialization significantly reduced the number of evaluations of the objective function (reducing computational time) in the parameter estimation. The reduction was between one and two orders of magnitude compared with simple random initialization. Copyright © 2015 Elsevier B.V. All rights reserved.
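The two estimation criteria compared above behave differently: least squares averages residuals, while the minimax (maximum absolute error) criterion is dominated by the worst point. The contrast can be shown on toy data with one outlier; the linear model, the grid search, and all the data values below are illustrative assumptions, not the paper's retention-factor model or its specialized initialization.

```python
import numpy as np

# Toy data with one outlier (hypothetical, not GC retention measurements)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = 2.0 * x + 1.0
y[3] += 3.0                               # inject an outlier

def sse(params):
    a, b = params
    return np.sum((y - (a * x + b)) ** 2)          # sum-of-squares criterion

def max_abs(params):
    a, b = params
    return np.max(np.abs(y - (a * x + b)))         # minimax criterion

# Least-squares estimate in closed form
a_ls, b_ls = np.polyfit(x, y, 1)

# Crude grid search for the minimax estimate of (a, b)
grid = np.linspace(0, 4, 201)
best = min(((a, b) for a in grid for b in grid), key=max_abs)
```

The minimax fit splits the outlier's deviation symmetrically, so its worst residual is smaller than that of the least-squares line, at the cost of shifting every other residual.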

  1. Analytical quality goals derived from the total deviation from patients' homeostatic set points, with a margin for analytical errors.

    PubMed

    Bolann, B J; Asberg, A

    2004-01-01

    The deviation of test results from patients' homeostatic set points in steady-state conditions may complicate interpretation of the results and the comparison of results with clinical decision limits. In this study the total deviation from the homeostatic set point is defined as the maximum absolute deviation for 95% of measurements, and we present analytical quality requirements that prevent analytical error from increasing this deviation to more than about 12% above the value caused by biology alone. These quality requirements are: 1) the stable systematic error should be approximately 0, and 2) a systematic error that will be detected by the control program with 90% probability should not be larger than half the combined analytical and intra-individual standard deviation. As a result, when the most common control rules are used, the analytical standard deviation may be up to 0.15 times the intra-individual standard deviation. Analytical improvements beyond these requirements have little impact on the interpretability of measurement results.

  2. Adverse Climatic Conditions and Impact on Construction Scheduling and Cost

    DTIC Science & Technology

    1988-01-01

    ABBREVIATIONS: ABS MAX MAX TEMP = absolute maximum maximum temperature; ABS MIN MIN TEMP = absolute minimum minimum temperature; MEAN MAX TEMP = mean maximum temperature; MEAN MIN TEMP = mean minimum temperature. With these temperatures available, a determination had to be made as to whether forecasts were based on absolute, mean, or statistically derived temperatures.

  3. Absolute angular encoder based on optical diffraction

    NASA Astrophysics Data System (ADS)

    Wu, Jian; Zhou, Tingting; Yuan, Bo; Wang, Liqiang

    2015-08-01

    A new encoding method for an absolute angular encoder based on optical diffraction was proposed in the present study. In this method, an encoder disc is specially designed such that a series of elements are uniformly spaced in one circle and each element consists of four diffraction gratings, tilted in the directions of 30°, 60°, -60° and -30°, respectively. The disc is illuminated by coherent light and the diffracted signals are received. The positions of the diffracted spots are used for absolute encoding and their intensities for subdivision, which differs from the traditional optical encoder based on the transparent/opaque binary principle. Since the track width in the disc is not limited by the diffraction pattern, this approach provides a new way to resolve the contradiction between size and resolution, which favors miniaturization of the encoder. According to the proposed principle, a diffraction-pattern disc with a diameter of 40 mm was made by lithography on a glass substrate. A prototype absolute angular encoder with a resolution of 20" was built. Its maximum error was measured as 78" by comparison with a small-angle measuring system based on laser beam deflection.

  4. Mapping health outcome measures from a stroke registry to EQ-5D weights.

    PubMed

    Ghatnekar, Ola; Eriksson, Marie; Glader, Eva-Lotta

    2013-03-07

    To map health outcome related variables from a national register, not part of any validated instrument, with EQ-5D weights among stroke patients. We used two cross-sectional data sets including patient characteristics, outcome variables and EQ-5D weights from the national Swedish stroke register. Three regression techniques were used on the estimation set (n=272): ordinary least squares (OLS), Tobit, and censored least absolute deviation (CLAD). The regression coefficients for "dressing", "toileting", "mobility", "mood", "general health" and "proxy-responders" were applied to the validation set (n=272), and the performance was analysed with mean absolute error (MAE) and mean square error (MSE). The number of statistically significant coefficients varied by model, but all models generated consistent coefficients in terms of sign. Mean utility was underestimated in all models (least in OLS) and with lower variation (least in OLS) compared to the observed. The maximum attainable EQ-5D weight ranged from 0.90 (OLS) to 1.00 (Tobit and CLAD). Health states with utility weights <0.5 had greater errors than those with weights ≥ 0.5 (P<0.01). This study indicates that it is possible to map non-validated health outcome measures from a stroke register into preference-based utilities to study the development of stroke care over time, and to compare with other conditions in terms of utility.
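
The performance measures used on the validation set (MAE and MSE) can be sketched as follows; the utility values below are made up for illustration, not taken from the register:

```python
# Hypothetical observed vs. mapped EQ-5D utility weights for four patients.
observed = [0.85, 0.62, 0.30, 0.91]
predicted = [0.80, 0.65, 0.42, 0.88]

n = len(observed)
# Mean absolute error: average magnitude of the mapping error.
mae = sum(abs(o - p) for o, p in zip(observed, predicted)) / n
# Mean square error: penalizes large errors more heavily.
mse = sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n
```

Because MSE squares the residuals, the low-utility patient with the 0.12 error dominates it, mirroring the finding that states with weights below 0.5 had the greater errors.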

  5. Predicting Near Edge X-ray Absorption Spectra with the Spin-Free Exact-Two-Component Hamiltonian and Orthogonality Constrained Density Functional Theory.

    PubMed

    Verma, Prakash; Derricotte, Wallace D; Evangelista, Francesco A

    2016-01-12

    Orthogonality constrained density functional theory (OCDFT) provides near-edge X-ray absorption (NEXAS) spectra of first-row elements within one electronvolt from experimental values. However, with increasing atomic number, scalar relativistic effects become the dominant source of error in a nonrelativistic OCDFT treatment of core-valence excitations. In this work we report a novel implementation of the spin-free exact-two-component (X2C) one-electron treatment of scalar relativistic effects and its combination with a recently developed OCDFT approach to compute a manifold of core-valence excited states. The inclusion of scalar relativistic effects in OCDFT reduces the mean absolute error of second-row elements' core-valence excitations from 10.3 to 2.3 eV. For all the excitations considered, the results from X2C calculations are also found to be in excellent agreement with those from low-order spin-free Douglas-Kroll-Hess relativistic Hamiltonians. The X2C-OCDFT NEXAS spectra of three organotitanium complexes (TiCl4, TiCpCl3, TiCp2Cl2) are in very good agreement with unshifted experimental results and show a maximum absolute error of 5-6 eV. In addition, a decomposition of the total transition dipole moment into partial atomic contributions is proposed and applied to analyze the nature of the Ti pre-edge transitions in the three organotitanium complexes.

  6. Determination and error analysis of emittance and spectral emittance measurements by remote sensing

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Kumar, R.

    1977-01-01

    The author has identified the following significant results. From the theory of remote sensing of surface temperatures, an equation of the upper bound of absolute error of emittance was determined. It showed that the absolute error decreased with an increase in contact temperature, whereas, it increased with an increase in environmental integrated radiant flux density. Change in emittance had little effect on the absolute error. A plot of the difference between temperature and band radiance temperature vs. emittance was provided for the wavelength intervals: 4.5 to 5.5 microns, 8 to 13.5 microns, and 10.2 to 12.5 microns.

  7. Motion-induced error reduction by combining Fourier transform profilometry with phase-shifting profilometry.

    PubMed

    Li, Beiwen; Liu, Ziping; Zhang, Song

    2016-10-03

    We propose a hybrid computational framework to reduce motion-induced measurement error by combining Fourier transform profilometry (FTP) and phase-shifting profilometry (PSP). The proposed method is composed of three major steps: Step 1 is to extract continuous relative phase maps for each isolated object with the single-shot FTP method and spatial phase unwrapping; Step 2 is to obtain an absolute phase map of the entire scene using the PSP method, although motion-induced errors exist on the extracted absolute phase map; and Step 3 is to shift the continuous relative phase maps from Step 1 to generate final absolute phase maps for each isolated object by referring to the absolute phase map with error from Step 2. Experiments demonstrate the success of the proposed computational framework for measuring multiple isolated rapidly moving objects.
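
One plausible realization of the shift in Step 3 (hedged: the paper's exact shifting rule is not reproduced here) is to move each relative phase value by an integer multiple of 2π so that it lands nearest the reference absolute phase from Step 2:

```python
import math

def shift_to_absolute(phi_rel, phi_ref):
    """Shift a relative phase by an integer number of 2*pi fringes
    so that it is closest to a (possibly noisy) reference absolute phase.
    This is an illustrative helper, not the paper's published algorithm."""
    k = round((phi_ref - phi_rel) / (2.0 * math.pi))  # integer fringe order
    return phi_rel + 2.0 * math.pi * k

# Relative phase 1.0 rad, reference absolute phase ~14 rad -> shifted by 2 fringes.
phi_abs = shift_to_absolute(1.0, 14.0)
```

Because only the integer fringe order is taken from the erroneous reference map, small motion-induced errors in Step 2 do not corrupt the final phase as long as they stay below π.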

  8. Error analysis of 3D-PTV through unsteady interfaces

    NASA Astrophysics Data System (ADS)

    Akutina, Yulia; Mydlarski, Laurent; Gaskin, Susan; Eiff, Olivier

    2018-03-01

    The feasibility of stereoscopic flow measurements through an unsteady optical interface is investigated. Position errors produced by a wavy optical surface are determined analytically, as are the optimal viewing angles of the cameras to minimize such errors. Two methods of measuring the resulting velocity errors are proposed. These methods are applied to 3D particle tracking velocimetry (3D-PTV) data obtained through the free surface of a water flow within a cavity adjacent to a shallow channel. The experiments were performed using two sets of conditions, one having no strong surface perturbations, and the other exhibiting surface gravity waves. In the latter case, the amplitude of the gravity waves was 6% of the water depth, resulting in water surface inclinations of about 0.2°. (The water depth is used herein as a relevant length scale, because the measurements are performed in the entire water column. In a more general case, the relevant scale is the maximum distance from the interface to the measurement plane, H, which here is the same as the water depth.) It was found that the contribution of the waves to the overall measurement error is low. The absolute position errors of the system were moderate (1.2% of H). However, given that the velocity is calculated from the relative displacement of a particle between two frames, the errors in the measured water velocities were reasonably small, because the error in the velocity is the relative position error over the average displacement distance. The relative position error was measured to be 0.04% of H, resulting in small velocity errors of 0.3% of the free-stream velocity (equivalent to 1.1% of the average velocity in the domain). It is concluded that even though the absolute positions to which the velocity vectors are assigned are distorted by the unsteady interface, the magnitude of the velocity vectors themselves remains accurate as long as the waves are slowly varying (have low curvature). 
The stronger the disturbances on the interface are (high amplitude, short wave length), the smaller is the distance from the interface at which the measurements can be performed.

  9. Students' Mathematical Work on Absolute Value: Focusing on Conceptions, Errors and Obstacles

    ERIC Educational Resources Information Center

    Elia, Iliada; Özel, Serkan; Gagatsis, Athanasios; Panaoura, Areti; Özel, Zeynep Ebrar Yetkiner

    2016-01-01

    This study investigates students' conceptions of absolute value (AV), their performance in various items on AV, their errors in these items and the relationships between students' conceptions and their performance and errors. The Mathematical Working Space (MWS) is used as a framework for studying students' mathematical work on AV and the…

  10. A Location Method Using Sensor Arrays for Continuous Gas Leakage in Integrally Stiffened Plates Based on the Acoustic Characteristics of the Stiffener

    PubMed Central

    Bian, Xu; Li, Yibo; Feng, Hao; Wang, Jiaqiang; Qi, Lei; Jin, Shijiu

    2015-01-01

    This paper proposes a continuous leakage location method based on an ultrasonic sensor array, specific to continuous gas leakage in a pressure container with an integral stiffener. This method collects the ultrasonic signals generated at the leakage hole through the piezoelectric ultrasonic sensor array and analyzes the space-time correlation of every collected signal in the array. It is combined with a method of frequency compensation and superposition in the time domain (SITD), based on the acoustic characteristics of the stiffener, to obtain a high-accuracy location result on the stiffener wall. According to the experimental results, the method successfully solves the orientation problem for continuous ultrasonic signals generated by leakage sources, and acquires high-accuracy location information on the leakage source using a combination of multiple sets of orientation results. The mean absolute location error is 13.51 mm on a one-square-meter plate with an integral stiffener (4 mm width; 20 mm height; 197 mm spacing), and the maximum absolute location error is generally within a ±25 mm interval. PMID:26404316

  11. Multiple regression technique for Pth degree polynomials with and without linear cross products

    NASA Technical Reports Server (NTRS)

    Davis, J. W.

    1973-01-01

    A multiple regression technique was developed by which the nonlinear behavior of specified independent variables can be related to a given dependent variable. The polynomial expression can be of Pth degree and can incorporate N independent variables. Two cases are treated such that mathematical models can be studied both with and without linear cross products. The resulting surface fits can be used to summarize trends for a given phenomenon and provide a mathematical relationship for subsequent analysis. To implement this technique, separate computer programs were developed for the case without linear cross products and for the case incorporating such cross products, which evaluate the various constants in the model regression equation. In addition, the significance of the estimated regression equation is considered, and the standard deviation, the F statistic, the maximum absolute percent error, and the average of the absolute values of the percent error are evaluated. The computer programs and their manner of utilization are described. Sample problems are included to illustrate the use and capability of the technique; these show the output formats and typical plots comparing computer results to each set of input data.
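
Two of the fit statistics named above can be sketched for an assumed fitted polynomial (the data and the coefficients y ≈ 2x² + 1 are made up for illustration; the programs described in the abstract would estimate such coefficients by regression):

```python
# Illustrative data roughly following y = 2x^2 + 1.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [3.1, 8.9, 19.2, 33.1, 50.8]

def predict(v):
    # Assumed fitted 2nd-degree polynomial (hypothetical coefficients).
    return 2.0 * v ** 2 + 1.0

pct_errors = [100.0 * (yi - predict(xi)) / yi for xi, yi in zip(x, y)]

# Two of the statistics evaluated by the programs:
max_abs_pct_error = max(abs(e) for e in pct_errors)
avg_abs_pct_error = sum(abs(e) for e in pct_errors) / len(pct_errors)
```

The maximum statistic flags the single worst point of the surface fit, while the average summarizes overall fit quality; reporting both, as the abstract does, guards against a fit that is good on average but poor at one corner of the domain.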

  12. A Temperature Compensation Method for Piezo-Resistive Pressure Sensor Utilizing Chaotic Ions Motion Algorithm Optimized Hybrid Kernel LSSVM.

    PubMed

    Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam Sm, Jahangir

    2016-10-14

    A piezo-resistive pressure sensor is made of silicon, whose behavior is considerably influenced by ambient temperature. The effect of temperature should be eliminated during operation if a linear output is expected. To deal with this issue, an approach consisting of a hybrid kernel Least Squares Support Vector Machine (LSSVM) optimized by a chaotic ions motion algorithm is presented. To achieve excellent learning and generalization performance, a hybrid kernel function, constructed from a local kernel (a Radial Basis Function (RBF) kernel) and a global kernel (a polynomial kernel), is incorporated into the Least Squares Support Vector Machine. The chaotic ions motion algorithm is introduced to find the best hyper-parameters of the Least Squares Support Vector Machine. Temperature data from a calibration experiment are used to validate the proposed method. With attention to algorithm robustness and engineering applications, the compensation results show that the proposed scheme outperforms the other compared methods on several performance measures, such as maximum absolute relative error, minimum absolute relative error, and the mean and variance of the averaged value over fifty runs. Furthermore, the proposed temperature compensation approach lays a foundation for more extensive research.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Zongtang; Both, Johan; Li, Shenggang

    The heats of formation and the normalized clustering energies (NCEs) for the group 4 and group 6 transition metal oxide (TMO) trimers and tetramers have been calculated by the Feller-Peterson-Dixon (FPD) method. The heats of formation predicted by the FPD method do not differ much from those previously derived from the NCEs at the CCSD(T)/aT level except for the CrO3 nanoclusters. New and improved heats of formation for Cr3O9 and Cr4O12 were obtained using PW91 orbitals instead of Hartree-Fock (HF) orbitals. Diffuse functions are necessary to predict accurate heats of formation. The fluoride affinities (FAs) are calculated with the CCSD(T) method. The relative energies (REs) of different isomers, NCEs, electron affinities (EAs), and FAs of (MO2)n (M = Ti, Zr, Hf; n = 1-4) and (MO3)n (M = Cr, Mo, W; n = 1-3) clusters have been benchmarked with 55 exchange-correlation DFT functionals including both pure and hybrid types. The absolute errors of the DFT results are mostly less than ±10 kcal/mol for the NCEs and the EAs, and less than ±15 kcal/mol for the FAs. Hybrid functionals usually perform better than the pure functionals for the REs and NCEs. The performance of the two types of functionals in predicting EAs and FAs is comparable. The B1B95 and PBE1PBE functionals provide reliable energetic properties for most isomers. Long-range corrected pure functionals usually give poor FAs. The standard deviation of the absolute error is always close to the mean error, and the probability distributions of the DFT errors are often not Gaussian (normal). The breadth of the distribution of errors and the maximum probability are dependent on the energy property and the isomer.

  14. Accuracy assessment of the global TanDEM-X Digital Elevation Model with GPS data

    NASA Astrophysics Data System (ADS)

    Wessel, Birgit; Huber, Martin; Wohlfart, Christian; Marschalk, Ursula; Kosmann, Detlev; Roth, Achim

    2018-05-01

    The primary goal of the German TanDEM-X mission is the generation of a highly accurate and global Digital Elevation Model (DEM) with global accuracies of at least 10 m absolute height error (linear 90% error). The global TanDEM-X DEM acquired with single-pass SAR interferometry was finished in September 2016. This paper provides a unique accuracy assessment of the final TanDEM-X global DEM using two different GPS point reference data sets, which are distributed across all continents, to fully characterize the absolute height error. Firstly, the absolute vertical accuracy is examined by about three million globally distributed kinematic GPS (KGPS) points derived from 19 KGPS tracks covering a total length of about 66,000 km. Secondly, a comparison is performed with more than 23,000 "GPS on Bench Marks" (GPS-on-BM) points provided by the US National Geodetic Survey (NGS) scattered across 14 different land cover types of the US National Land Cover Database (NLCD). Both GPS comparisons prove an absolute vertical mean error of the TanDEM-X DEM smaller than ±0.20 m, a Root Mean Square Error (RMSE) smaller than 1.4 m and an excellent absolute 90% linear height error below 2 m. The RMSE values are sensitive to land cover types. For low vegetation the RMSE is ±1.1 m, whereas it is slightly higher for developed areas (±1.4 m) and for forests (±1.8 m). This validation confirms an outstanding absolute height error at 90% confidence level of the global TanDEM-X DEM, outperforming the requirement by a factor of five. Due to its extensive and globally distributed reference data sets, this study is of considerable interest for scientific and commercial applications.

  15. Times of Maximum Light: The Passband Dependence

    NASA Astrophysics Data System (ADS)

    Joner, M. D.; Laney, C. D.

    2004-05-01

    We present UBVRIJHK light curves for the dwarf Cepheid variable star AD Canis Minoris. These data are part of a larger project to determine absolute magnitudes for this class of stars. Our figures clearly show changes in the times of maximum light, the amplitude, and the light curve morphology that are dependent on the passband used in the observation. Note that when data from a variety of passbands are used in studies that require a period analysis or that search for small changes in the pulsational period, it is easy to introduce significant systematic errors into the results. We thank the Brigham Young University Department of Physics and Astronomy for continued support of our research. We also acknowledge the South African Astronomical Observatory for time granted to this project.

  16. Long-Term Prediction of Emergency Department Revenue and Visitor Volume Using Autoregressive Integrated Moving Average Model

    PubMed Central

    Chen, Chieh-Fan; Ho, Wen-Hsien; Chou, Huei-Yin; Yang, Shu-Mei; Chen, I-Te; Shi, Hon-Yi

    2011-01-01

    This study analyzed meteorological, clinical and economic factors in terms of their effects on monthly ED revenue and visitor volume. Monthly data from January 1, 2005 to September 30, 2009 were analyzed. Spearman correlation and cross-correlation analyses were performed to identify the correlation between each independent variable, ED revenue, and visitor volume. Autoregressive integrated moving average (ARIMA) model was used to quantify the relationship between each independent variable, ED revenue, and visitor volume. The accuracies were evaluated by comparing model forecasts to actual values with mean absolute percentage of error. Sensitivity of prediction errors to model training time was also evaluated. The ARIMA models indicated that mean maximum temperature, relative humidity, rainfall, non-trauma, and trauma visits may correlate positively with ED revenue, but mean minimum temperature may correlate negatively with ED revenue. Moreover, mean minimum temperature and stock market index fluctuation may correlate positively with trauma visitor volume. Mean maximum temperature, relative humidity and stock market index fluctuation may correlate positively with non-trauma visitor volume. Mean maximum temperature and relative humidity may correlate positively with pediatric visitor volume, but mean minimum temperature may correlate negatively with pediatric visitor volume. The model also performed well in forecasting revenue and visitor volume. PMID:22203886

  17. Long-term prediction of emergency department revenue and visitor volume using autoregressive integrated moving average model.

    PubMed

    Chen, Chieh-Fan; Ho, Wen-Hsien; Chou, Huei-Yin; Yang, Shu-Mei; Chen, I-Te; Shi, Hon-Yi

    2011-01-01

    This study analyzed meteorological, clinical and economic factors in terms of their effects on monthly ED revenue and visitor volume. Monthly data from January 1, 2005 to September 30, 2009 were analyzed. Spearman correlation and cross-correlation analyses were performed to identify the correlation between each independent variable, ED revenue, and visitor volume. Autoregressive integrated moving average (ARIMA) model was used to quantify the relationship between each independent variable, ED revenue, and visitor volume. The accuracies were evaluated by comparing model forecasts to actual values with mean absolute percentage of error. Sensitivity of prediction errors to model training time was also evaluated. The ARIMA models indicated that mean maximum temperature, relative humidity, rainfall, non-trauma, and trauma visits may correlate positively with ED revenue, but mean minimum temperature may correlate negatively with ED revenue. Moreover, mean minimum temperature and stock market index fluctuation may correlate positively with trauma visitor volume. Mean maximum temperature, relative humidity and stock market index fluctuation may correlate positively with non-trauma visitor volume. Mean maximum temperature and relative humidity may correlate positively with pediatric visitor volume, but mean minimum temperature may correlate negatively with pediatric visitor volume. The model also performed well in forecasting revenue and visitor volume.
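
Forecast accuracy in this study is judged by the mean absolute percentage error. A minimal sketch of that measure, with made-up monthly revenue figures rather than the study's data:

```python
# Hypothetical monthly ED revenue (arbitrary units): actual vs. ARIMA forecast.
actual = [120.0, 135.0, 128.0, 150.0]
forecast = [115.0, 140.0, 130.0, 145.0]

# Mean absolute percentage error: average of |error| relative to the actual value.
mape = 100.0 / len(actual) * sum(
    abs((a - f) / a) for a, f in zip(actual, forecast)
)
```

MAPE is scale-free, which is what makes it suitable for comparing forecasts of quantities with different units, such as revenue and visitor volume; it does, however, require all actual values to be nonzero.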

  18. Fast and accurate read-out of interferometric optical fiber sensors

    NASA Astrophysics Data System (ADS)

    Bartholsen, Ingebrigt; Hjelme, Dag R.

    2016-03-01

    We present results from an evaluation of phase and frequency estimation algorithms for read-out instrumentation of interferometric sensors. Tests on interrogating a micro Fabry-Perot sensor, made of a semi-spherical stimuli-responsive hydrogel immobilized on a single-mode fiber end face, show that an iterative quadrature demodulation technique (IQDT) implemented on a 32-bit microcontroller unit can achieve an absolute length accuracy of ±50 nm and a length change accuracy of ±3 nm using an 80 nm SLED source and a grating spectrometer for interrogation. The mean absolute error for the frequency estimator is a factor of 3 larger than the theoretical lower bound for a maximum likelihood estimator. The corresponding factor for the phase estimator is 1.3. The computation time for the IQDT algorithm is reduced by a factor of 1000 compared to the full QDT for the same accuracy requirement.

  19. Spectral contaminant identifier for off-axis integrated cavity output spectroscopy measurements of liquid water isotopes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brian Leen, J.; Berman, Elena S. F.; Gupta, Manish

    Developments in cavity-enhanced absorption spectrometry have made it possible to measure water isotopes using faster, more cost-effective field-deployable instrumentation. Several groups have attempted to extend this technology to measure water extracted from plants and found that other extracted organics absorb light at frequencies similar to those absorbed by the water isotopomers, leading to δ²H and δ¹⁸O measurement errors (Δδ²H and Δδ¹⁸O). In this note, the off-axis integrated cavity output spectroscopy (ICOS) spectra of stable isotopes in liquid water are analyzed to determine the presence of interfering absorbers that lead to erroneous isotope measurements. The baseline offset of the spectra is used to calculate a broadband spectral metric, m_BB, and the mean subtracted fit residuals in two regions of interest are used to determine a narrowband metric, m_NB. These metrics are used to correct for Δδ²H and Δδ¹⁸O. The method was tested on 14 instruments, and Δδ¹⁸O was found to scale linearly with contaminant concentration for both narrowband (e.g., methanol) and broadband (e.g., ethanol) absorbers, while Δδ²H scaled linearly with narrowband and as a polynomial with broadband absorbers. Additionally, the isotope errors scaled logarithmically with m_NB. Using the isotope error versus m_NB and m_BB curves, Δδ²H and Δδ¹⁸O resulting from methanol contamination were corrected to a maximum mean absolute error of 0.93 per mille and 0.25 per mille, respectively, while Δδ²H and Δδ¹⁸O from ethanol contamination were corrected to a maximum mean absolute error of 1.22 per mille and 0.22 per mille. Large variation between instruments indicates that the sensitivities must be calibrated for each individual isotope analyzer. 
These results suggest that properly calibrated interference metrics can be used to correct for polluted samples and extend off-axis ICOS measurements of liquid water to include plant waters, soil extracts, wastewater, and alcoholic beverages. The general technique may also be extended to other laser-based analyzers, including methane and carbon dioxide isotope sensors.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verma, Prakash; Bartlett, Rodney J., E-mail: bartlett@qtp.ufl.edu

    Core excitation energies are computed with time-dependent density functional theory (TD-DFT) using the ionization energy corrected exchange and correlation potential QTP(0,0). QTP(0,0) provides C, N, and O K-edge spectra to within about an electronvolt of experiment. A mean absolute error (MAE) of 0.77 eV and a maximum error of 2.6 eV is observed for QTP(0,0) for many small molecules. TD-DFT based on QTP(0,0) is then used to describe the core-excitation spectra of the 22 amino acids. TD-DFT with conventional functionals greatly underestimates core excitation energies, largely due to the significant error in the Kohn-Sham occupied eigenvalues. In contrast, the ionization energy corrected potential, QTP(0,0), provides excellent approximations (MAE of 0.53 eV) for core ionization energies as eigenvalues of the Kohn-Sham equations. As a consequence, core excitation energies are accurately described with QTP(0,0), as are the core ionization energies important in X-ray photoionization spectra or electron spectroscopy for chemical analysis.

  1. Regional application of multi-layer artificial neural networks in 3-D ionosphere tomography

    NASA Astrophysics Data System (ADS)

    Ghaffari Razin, Mir Reza; Voosoghi, Behzad

    2016-08-01

    Tomography is a very cost-effective method for studying the physical properties of the ionosphere. In this paper, a residual minimization training neural network (RMTNN) is used in voxel-based tomography to reconstruct the 3-D ionospheric electron density with high spatial resolution. For the numerical experiments, observations collected at 37 GPS stations of the Iranian permanent GPS network (IPGN) are used. A smoothed TEC approach was used for absolute STEC recovery. To improve the vertical resolution, empirical orthogonal functions (EOFs) obtained from the international reference ionosphere 2012 (IRI-2012) are used as the object function in training the neural network. Ionosonde observations are used to validate the reliability of the proposed method. The minimum relative error for RMTNN is 1.64% and the maximum relative error is 15.61%. A root mean square error (RMSE) of 0.17 × 10¹¹ electrons/m³ is computed for RMTNN, which is less than the RMSE of IRI-2012. The results show that RMTNN has higher accuracy and computational speed than other ionosphere reconstruction methods.

  2. Applicability of AgMERRA Forcing Dataset to Fill Gaps in Historical in-situ Meteorological Data

    NASA Astrophysics Data System (ADS)

    Bannayan, M.; Lashkari, A.; Zare, H.; Asadi, S.; Salehnia, N.

    2015-12-01

    Integrated assessment studies of food production systems use crop models to simulate the effects of climate and socio-economic changes on food security. Climate forcing data are one of the key inputs of crop models. This study evaluated the performance of the AgMERRA climate forcing dataset in filling gaps in historical in-situ meteorological data for different climatic regions of Iran. The AgMERRA dataset was intercompared with the in-situ observational dataset for daily maximum and minimum temperature and precipitation during the 1980-2010 period via Root Mean Square Error (RMSE), Mean Absolute Error (MAE) and Mean Bias Error (MBE) for 17 stations in four climatic regions: humid and moderate; cold; dry and arid; and hot and humid. Moreover, the probability distribution function and cumulative distribution function were compared between model and observed data. The measures of agreement between AgMERRA data and observed data demonstrated small errors in the model data for all stations. Except for stations located in cold regions, the model data at the other stations showed under-prediction of daily maximum temperature and precipitation; however, this was not significant. In addition, the probability distribution function and cumulative distribution function showed the same trend for all stations between model and observed data. Therefore, the AgMERRA dataset is highly reliable for filling gaps in historical observations in different climatic regions of Iran, and it could be applied as a basis for future climate scenarios.

  3. Green-Ampt approximations: A comprehensive analysis

    NASA Astrophysics Data System (ADS)

    Ali, Shakir; Islam, Adlul; Mishra, P. K.; Sikka, Alok K.

    2016-04-01

    The Green-Ampt (GA) model and its modifications are widely used for simulating the infiltration process. Several explicit approximate solutions to the implicit GA model have been developed, with varying degrees of accuracy. In this study, the performance of nine explicit approximations to the GA model is compared with the implicit GA model using published data for a broad range of soil classes and infiltration times. The explicit GA models considered are Li et al. (1976) (LI), Stone et al. (1994) (ST), Salvucci and Entekhabi (1994) (SE), Parlange et al. (2002) (PA), Barry et al. (2005) (BA), Swamee et al. (2012) (SW), Ali et al. (2013) (AL), Almedeij and Esen (2014) (AE), and Vatankhah (2015) (VA). Six statistical indicators (percent relative error, maximum absolute percent relative error, average absolute percent relative error, percent bias, index of agreement, and Nash-Sutcliffe efficiency) and relative computation time are used for assessing model performance. Models are ranked based on an overall performance index (OPI). The BA model is found to be the most accurate, followed by the PA and VA models, for a variety of soil classes and infiltration periods. The AE, SW, SE, and LI models also performed comparatively well. Based on the overall performance index, the explicit models are ranked as BA > PA > VA > LI > AE > SE > SW > ST > AL. The results of this study will be helpful in selecting accurate and simple explicit approximate GA models for solving a variety of hydrological problems.
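
The implicit GA model against which the explicit formulas are judged can be solved by simple fixed-point iteration. A sketch using the standard Green-Ampt cumulative-infiltration relation K·t = F − ψΔθ·ln(1 + F/ψΔθ); the parameter values are illustrative, not taken from the study:

```python
import math

K = 1.0          # saturated hydraulic conductivity (cm/h), illustrative
psi_dtheta = 5.0 # wetting-front suction head times moisture deficit (cm), illustrative
t = 2.0          # elapsed time (h)

# Solve K*t = F - psi_dtheta*ln(1 + F/psi_dtheta) for cumulative infiltration F.
# The rearranged map F <- K*t + psi_dtheta*ln(1 + F/psi_dtheta) is contractive
# for F > 0, so fixed-point iteration converges.
F = K * t  # initial guess: gravity-only infiltration
for _ in range(200):
    F_next = K * t + psi_dtheta * math.log(1.0 + F / psi_dtheta)
    if abs(F_next - F) < 1e-12:
        F = F_next
        break
    F = F_next
```

An explicit approximation, such as those compared in the study, could then be scored by its percent relative error against this iterated F, which is how indicators like the maximum absolute percent relative error are formed.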

  4. Statistical variability comparison in MODIS and AERONET derived aerosol optical depth over Indo-Gangetic Plains using time series modeling.

    PubMed

    Soni, Kirti; Parmar, Kulwinder Singh; Kapoor, Sangeeta; Kumar, Nishant

    2016-05-15

    Many studies in the literature on Aerosol Optical Depth (AOD) have used Moderate Resolution Imaging Spectroradiometer (MODIS) derived data, but the accuracy of satellite data in comparison to ground data from the AErosol RObotic NETwork (AERONET) has always been questionable. To address this, a comparative study of comprehensive ground-based and satellite data for the period 2001-2012 is modeled. A time series model is used for accurate prediction of AOD, and the statistical variability is compared to assess the performance of the model in both cases. Root mean square error (RMSE), mean absolute percentage error (MAPE), stationary R-squared, R-squared, maximum absolute percentage error, normalized Bayesian information criterion (NBIC) and Ljung-Box methods are used to check the applicability and validity of the developed ARIMA models, revealing significant precision in the model performance. It was found that it is possible to predict AOD by statistical modeling using time series of past MODIS and AERONET data as input. Moreover, the results show that MODIS data can be formed from AERONET data by adding 0.251627 ± 0.133589 and vice-versa by subtracting. From the forecast of AOD for the next four years (2013-2017) using the developed ARIMA model, it is concluded that the forecasted ground AOD has an increasing trend. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Mapping health outcome measures from a stroke registry to EQ-5D weights

    PubMed Central

    2013-01-01

    Purpose To map health outcome related variables from a national register, not part of any validated instrument, with EQ-5D weights among stroke patients. Methods We used two cross-sectional data sets including patient characteristics, outcome variables and EQ-5D weights from the national Swedish stroke register. Three regression techniques were used on the estimation set (n = 272): ordinary least squares (OLS), Tobit, and censored least absolute deviation (CLAD). The regression coefficients for “dressing”, “toileting”, “mobility”, “mood”, “general health” and “proxy-responders” were applied to the validation set (n = 272), and the performance was analysed with mean absolute error (MAE) and mean square error (MSE). Results The number of statistically significant coefficients varied by model, but all models generated consistent coefficients in terms of sign. Mean utility was underestimated in all models (least in OLS) and with lower variation (least in OLS) compared to the observed. The maximum attainable EQ-5D weight ranged from 0.90 (OLS) to 1.00 (Tobit and CLAD). Health states with utility weights <0.5 had greater errors than those with weights ≥0.5 (P < 0.01). Conclusion This study indicates that it is possible to map non-validated health outcome measures from a stroke register into preference-based utilities to study the development of stroke care over time, and to compare with other conditions in terms of utility. PMID:23496957

  6. Prediction of stream volatilization coefficients

    USGS Publications Warehouse

    Rathbun, Ronald E.

    1990-01-01

    Equations are developed for predicting the liquid-film and gas-film reference-substance parameters for quantifying volatilization of organic solutes from streams. Molecular weight and molecular-diffusion coefficients of the solute are used as correlating parameters. Equations for predicting molecular-diffusion coefficients of organic solutes in water and air are developed, with molecular weight and molal volume as parameters. Mean absolute errors of prediction for diffusion coefficients in water are 9.97% for the molecular-weight equation and 6.45% for the molal-volume equation. The mean absolute error for the diffusion coefficient in air is 5.79% for the molal-volume equation. Molecular weight is not a satisfactory correlating parameter for diffusion in air because two equations are necessary to describe the values in the data set. The best predictive equation for the liquid-film reference-substance parameter has a mean absolute error of 5.74%, with molal volume as the correlating parameter. The best equation for the gas-film parameter has a mean absolute error of 7.80%, with molecular weight as the correlating parameter.

  7. Fringe order correction for the absolute phase recovered by two selected spatial frequency fringe projections in fringe projection profilometry.

    PubMed

    Ding, Yi; Peng, Kai; Yu, Miao; Lu, Lei; Zhao, Kun

    2017-08-01

    The performance of two selected spatial frequency phase unwrapping methods is limited by a phase error bound beyond which fringe order errors occur, leading to significant errors in the recovered absolute phase map. In this paper, we propose a method to detect and correct wrong fringe orders. Two constraints are introduced during the fringe order determination of two selected spatial frequency phase unwrapping methods, and a strategy to detect and correct the wrong fringe orders is described. Compared with existing methods, we do not need to estimate a threshold on absolute phase values to determine the fringe order error, making the method more reliable and avoiding a search procedure when detecting and correcting successive fringe order errors. The effectiveness of the proposed method is validated by experimental results.

  8. Generalized approach for using unbiased symmetric metrics with negative values: normalized mean bias factor and normalized mean absolute error factor

    EPA Science Inventory

    Unbiased symmetric metrics provide a useful measure to quickly compare two datasets, with similar interpretations for both under and overestimations. Two examples include the normalized mean bias factor and normalized mean absolute error factor. However, the original formulations...

  9. Web tools concerning performance analysis and planning support for solar energy plants starting from remotely sensed optical images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morelli, Marco, E-mail: marco.morelli1@unimi.it; Masini, Andrea, E-mail: andrea.masini@flyby.it; Ruffini, Fabrizio, E-mail: fabrizio.ruffini@i-em.eu

    We present innovative web tools, developed also within the framework of the FP7 ENDORSE (ENergy DOwnstReam SErvices) project, for the performance analysis and planning support of solar energy plants (PV, CSP, CPV). These services are based on the combination of a detailed physical model of each part of the plant with near real-time satellite remote sensing of the incident solar irradiance. Starting from the solar Global Horizontal Irradiance (GHI) data provided by the Monitoring Atmospheric Composition and Climate (GMES-MACC) Core Service, based on the elaboration of Meteosat Second Generation (MSG) satellite optical imagery, the Global Tilted Irradiance (GTI) or the Beam Normal Irradiance (BNI) incident on the plant's solar PV panels (or solar receivers for CSP or CPV) is calculated. Combining these parameters with the model of the solar power plant, and also using air temperature values, we can assess in near real-time the daily evolution of the alternating current (AC) power produced by the plant. We are therefore able to compare this satellite-based AC power yield with the actually measured one and, consequently, to readily detect any possible malfunctions and to evaluate the performance of the plant (the so-called “Controller” service). Besides, the same method can be applied to satellite-based averaged environmental data (solar irradiance and air temperature) in order to provide a return-on-investment analysis in support of the planning of new solar energy plants (the so-called “Planner” service). This method has been successfully applied to three test solar plants (in North, Central and South Italy, respectively) and has been validated by comparing satellite-based and in-situ measured hourly AC power data for several months in 2013 and 2014.
The results show good accuracy: the overall Normalized Bias (NB) is −0.41%, the overall Normalized Mean Absolute Error (NMAE) is 4.90%, the Normalized Root Mean Square Error (NRMSE) is 7.66% and the overall Correlation Coefficient (CC) is 0.9538. The maximum value of the Normalized Absolute Error (NAE) is about 30% and occurs for time periods with highly variable meteorological conditions.
    - Highlights:
    • We developed an online service (Controller) dedicated to real-time monitoring of solar energy plants
    • We developed an online service (Planner) that supports the planning of new solar energy plants
    • The services are based on the elaboration of satellite optical imagery in near real-time
    • Validation against in-situ measured hourly AC power data for three test solar plants shows good accuracy
    • The maximum value of the Normalized Absolute Error is about 30% and occurs for highly variable meteorological conditions.
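    The normalized validation metrics quoted in this abstract can be sketched as follows (normalizing by the mean measured power is our assumption; the plant's rated power is another common choice):

```python
import numpy as np

def validation_metrics(p_obs, p_sat):
    """Normalized metrics comparing satellite-based AC power (p_sat)
    against in-situ measured AC power (p_obs)."""
    norm = np.mean(p_obs)                                  # assumed normalization
    nb = 100.0 * np.mean(p_sat - p_obs) / norm             # Normalized Bias
    nmae = 100.0 * np.mean(np.abs(p_sat - p_obs)) / norm   # Normalized MAE
    nrmse = 100.0 * np.sqrt(np.mean((p_sat - p_obs) ** 2)) / norm
    cc = np.corrcoef(p_obs, p_sat)[0, 1]                   # Pearson correlation
    return nb, nmae, nrmse, cc
```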

  10. Demonstration of a conceptual model for using LiDAR to improve the estimation of floodwater mitigation potential of Prairie Pothole Region wetlands

    USGS Publications Warehouse

    Huang, S.; Young, Caitlin; Feng, M.; Heidemann, Hans Karl; Cushing, Matthew; Mushet, D.M.; Liu, S.

    2011-01-01

    Recent flood events in the Prairie Pothole Region of North America have stimulated interest in modeling water storage capacities of wetlands and their surrounding catchments to facilitate flood mitigation efforts. Accurate estimates of basin storage capacities have been hampered by a lack of high-resolution elevation data. In this paper, we developed a 0.5 m bare-earth model from Light Detection And Ranging (LiDAR) data and, in combination with National Wetlands Inventory data, delineated wetland catchments and their spilling points within a 196 km2 study area. We then calculated the maximum water storage capacity of individual basins and modeled the connectivity among these basins. When compared to field survey results, catchment and spilling point delineations from the LiDAR bare-earth model captured subtle landscape features very well. Of the 11 modeled spilling points, 10 matched field survey spilling points. The comparison between observed and modeled maximum water storage had an R2 of 0.87 with mean absolute error of 5564 m3. Since maximum water storage capacity of basins does not translate into floodwater regulation capability, we further developed a Basin Floodwater Regulation Index. Based upon this index, the absolute and relative water that could be held by wetlands over a landscape could be modeled. This conceptual model of floodwater downstream contribution was demonstrated with water level data from 17 May 2008.

  11. Load estimator (LOADEST): a FORTRAN program for estimating constituent loads in streams and rivers

    USGS Publications Warehouse

    Runkel, Robert L.; Crawford, Charles G.; Cohn, Timothy A.

    2004-01-01

    LOAD ESTimator (LOADEST) is a FORTRAN program for estimating constituent loads in streams and rivers. Given a time series of streamflow, additional data variables, and constituent concentration, LOADEST assists the user in developing a regression model for the estimation of constituent load (calibration). Explanatory variables within the regression model include various functions of streamflow, decimal time, and additional user-specified data variables. The formulated regression model then is used to estimate loads over a user-specified time interval (estimation). Mean load estimates, standard errors, and 95 percent confidence intervals are developed on a monthly and/or seasonal basis. The calibration and estimation procedures within LOADEST are based on three statistical estimation methods. The first two methods, Adjusted Maximum Likelihood Estimation (AMLE) and Maximum Likelihood Estimation (MLE), are appropriate when the calibration model errors (residuals) are normally distributed. Of the two, AMLE is the method of choice when the calibration data set (time series of streamflow, additional data variables, and concentration) contains censored data. The third method, Least Absolute Deviation (LAD), is an alternative to maximum likelihood estimation when the residuals are not normally distributed. LOADEST output includes diagnostic tests and warnings to assist the user in determining the appropriate estimation method and in interpreting the estimated loads. This report describes the development and application of LOADEST. Sections of the report describe estimation theory, input/output specifications, sample applications, and installation instructions.
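    A hedged sketch of the kind of rating-curve regression LOADEST calibrates (the common 7-parameter model form with streamflow and decimal-time terms, fitted here by ordinary least squares; LOADEST's AMLE/MLE/LAD estimators, censoring handling, and retransformation bias corrections are not reproduced):

```python
import numpy as np

def fit_load_model(q, t_dec, load):
    """Fit ln(L) = a0 + a1*lnQ + a2*lnQ^2 + a3*sin(2*pi*T) + a4*cos(2*pi*T)
    + a5*T + a6*T^2 by OLS, where Q is streamflow and T is decimal time.
    lnQ and T are centered, as LOADEST does, to reduce collinearity."""
    lnq = np.log(q) - np.mean(np.log(q))
    t = t_dec - np.mean(t_dec)
    X = np.column_stack([np.ones_like(lnq), lnq, lnq ** 2,
                         np.sin(2 * np.pi * t_dec), np.cos(2 * np.pi * t_dec),
                         t, t ** 2])
    coef, *_ = np.linalg.lstsq(X, np.log(load), rcond=None)
    return X, coef

def predict_load(X, coef):
    """Naive back-transform to load units (no retransformation bias correction)."""
    return np.exp(X @ coef)
```

    In practice the back-transform from log space requires a bias correction (LOADEST uses AMLE for this), which the naive `np.exp` above omits.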

  12. Identification and compensation of the temperature influences in a miniature three-axial accelerometer based on the least squares method

    NASA Astrophysics Data System (ADS)

    Grigorie, Teodor Lucian; Corcau, Ileana Jenica; Tudosie, Alexandru Nicolae

    2017-06-01

    The paper presents a way to obtain an intelligent miniaturized three-axial accelerometric sensor, based on on-line estimation and compensation of the sensor errors generated by environmental temperature variation. Because this error is a strongly nonlinear function of the environmental temperature and of the acceleration exciting the sensor, its correction cannot be done off-line and requires an additional temperature sensor. The proposed identification methodology for the error model is based on the least squares method, which processes off-line the numerical values obtained from experimental testing of the accelerometer for different accelerations applied to its axes of sensitivity and different operating temperatures. A final analysis of the error level after compensation identifies the best variant of the matrix in the error model. The paper presents the results of the experimental testing of the accelerometer on all three sensitivity axes, the identification of the error models on each axis using the least squares method, and the validation of the obtained models against experimental values. For all three detection channels, a reduction by almost two orders of magnitude of the maximum absolute acceleration error due to environmental temperature variation was obtained.
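    The least-squares identification step can be sketched as follows (the polynomial form of the error surface, the degrees, and all names are our assumptions; the paper compares several variants of the coefficient matrix and keeps the one with the lowest post-compensation residual):

```python
import numpy as np

def fit_error_model(temp, acc, err, deg_t=2, deg_a=1):
    """Fit a polynomial error surface e(T, a) = sum_ij c_ij * T^i * a^j
    to measured errors by ordinary least squares."""
    cols, powers = [], []
    for i in range(deg_t + 1):
        for j in range(deg_a + 1):
            cols.append(temp ** i * acc ** j)
            powers.append((i, j))
    X = np.column_stack(cols)
    c, *_ = np.linalg.lstsq(X, err, rcond=None)
    return c, powers

def compensate(a_meas, temp, c, powers):
    """On-line correction: subtract the modeled error, evaluating the model
    at the measured acceleration (a standard first-order approximation)."""
    e = sum(ci * temp ** i * a_meas ** j for ci, (i, j) in zip(c, powers))
    return a_meas - e
```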

  13. Demodulation Algorithms for the Ofdm Signals in the Time- and Frequency-Scattering Channels

    NASA Astrophysics Data System (ADS)

    Bochkov, G. N.; Gorokhov, K. V.; Kolobkov, A. V.

    2016-06-01

    We consider a method based on the generalized maximum-likelihood rule for solving the problem of reception of signals with orthogonal frequency division multiplexing of their harmonic components (OFDM signals) in time- and frequency-scattering channels. Coherent and incoherent demodulators that effectively use the time scattering due to fast fading of the signal are developed. Using computer simulation, we performed a comparative analysis of the proposed algorithms and well-known signal-reception algorithms with equalizers. The proposed symbol-by-symbol detector with decision feedback and restriction of the number of searched variants is shown to have the best bit-error-rate performance. It is shown that under conditions of limited accuracy in estimating the communication-channel parameters, incoherent OFDM-signal detectors with differential phase-shift keying can ensure a better bit-error-rate performance than coherent OFDM-signal detectors with absolute phase-shift keying.

  14. Estimation of reach in peripersonal and extrapersonal space: a developmental view.

    PubMed

    Gabbard, Carl; Cordova, Alberto; Ammar, Diala

    2007-01-01

    This study explored the developmental nature of action processing via estimation of reach in peripersonal and extrapersonal space. Children 5 to 11 years of age and adults were tested for estimates of reach to targets presented randomly at seven midline locations. Target distances were scaled to the individual based on absolute maximum reach. While there was no difference between age groups for total error, a significant distinction emerged in reference to space. With children, significantly more error was exhibited in extrapersonal space; no difference was found with adults. The groups did not differ in peripersonal space; however, adults were substantially more accurate with extrapersonal targets. Furthermore, children displayed a greater tendency to overestimate. In essence, these data reveal a body-scaling problem in children in estimating reach in extrapersonal space. Future work should focus on possible developmental differences in use of visual information and state of confidence.

  15. Methods for estimation of radiation risk in epidemiological studies accounting for classical and Berkson errors in doses.

    PubMed

    Kukush, Alexander; Shklyar, Sergiy; Masiuk, Sergii; Likhtarov, Illya; Kovgan, Lina; Carroll, Raymond J; Bouville, Andre

    2011-02-16

    With a binary response Y, the dose-response model under consideration is logistic in flavor, with pr(Y=1 | D) = R/(1+R), R = λ(0) + EAR·D, where λ(0) is the baseline incidence rate and EAR is the excess absolute risk per gray. The calculated thyroid dose of a person i is expressed as D(i)(mes) = f(i)·Q(i)(mes)/M(i)(mes). Here, Q(i)(mes) is the measured content of radioiodine in the thyroid gland of person i at time t(mes), M(i)(mes) is the estimate of the thyroid mass, and f(i) is the normalizing multiplier. The Q(i) and M(i) are measured with multiplicative errors V(i)(Q) and V(i)(M), so that Q(i)(mes) = Q(i)(tr)·V(i)(Q) (a classical measurement error model) and M(i)(tr) = M(i)(mes)·V(i)(M) (a Berkson measurement error model). Here, Q(i)(tr) is the true content of radioactivity in the thyroid gland, and M(i)(tr) is the true value of the thyroid mass. The error in f(i) is much smaller than the errors in (Q(i)(mes), M(i)(mes)) and is ignored in the analysis. By means of Parametric Full Maximum Likelihood and Regression Calibration (under the assumption that the data set of true doses has a lognormal distribution), Nonparametric Full Maximum Likelihood, Nonparametric Regression Calibration, and a properly tuned SIMEX method, we study the influence of measurement errors in thyroid dose on the estimates of λ(0) and EAR. A simulation study is presented based on a real sample from the epidemiological studies. The doses were reconstructed in the framework of the Ukrainian-American project on the investigation of post-Chernobyl thyroid cancers in Ukraine, and the underlying subpopulation was artificially enlarged in order to increase the statistical power. The true risk parameters were given the values from earlier epidemiological studies, and the binary response was then simulated according to the dose-response model.
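    A minimal sketch of the simulation set-up implied by this model (all parameter values and distributions below are illustrative, not those of the Ukrainian-American study):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n=10000, lam0=2e-4, ear=5e-4, sd_q=0.3, sd_m=0.3):
    """Simulate binary responses under R = lam0 + EAR*D, pr(Y=1|D) = R/(1+R),
    with classical error in Q and Berkson error in M. Since
    D_mes = f*Q_mes/M_mes, Q_mes = Q_tr*V_Q, and M_tr = M_mes*V_M,
    the measured dose is D_mes = D_tr * V_Q * V_M."""
    d_true = rng.lognormal(mean=0.0, sigma=1.0, size=n)  # true dose, Gy (assumed)
    v_q = rng.lognormal(0.0, sd_q, n)  # classical multiplicative error (Q)
    v_m = rng.lognormal(0.0, sd_m, n)  # Berkson multiplicative error (M)
    d_mes = d_true * v_q * v_m
    r = lam0 + ear * d_true            # excess absolute risk model
    p = r / (1.0 + r)                  # pr(Y=1 | D)
    y = rng.binomial(1, p)
    return d_mes, y
```

    Estimators such as regression calibration or SIMEX would then be applied to (d_mes, y) to study how the dose errors bias the estimates of λ(0) and EAR.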

  16. Short-Term Forecasting of Loads and Wind Power for Latvian Power System: Accuracy and Capacity of the Developed Tools

    NASA Astrophysics Data System (ADS)

    Radziukynas, V.; Klementavičius, A.

    2016-04-01

    The paper analyses the performance results of the recently developed short-term forecasting suite for the Latvian power system. The system load and wind power are forecasted using ANN and ARIMA models, respectively, and the forecasting accuracy is evaluated in terms of errors, mean absolute errors and mean absolute percentage errors. The influence of additional input variables on load forecasting errors is investigated. The interplay of hourly load and wind power forecasting errors is also evaluated for the Latvian power system with historical loads (the year 2011) and planned wind power capacities (the year 2023).

  17. IMPROVEMENT OF SMVGEAR II ON VECTOR AND SCALAR MACHINES THROUGH ABSOLUTE ERROR TOLERANCE CONTROL (R823186)

    EPA Science Inventory

    The computer speed of SMVGEAR II was improved markedly on scalar and vector machines with relatively little loss in accuracy. The improvement was due to a method of frequently recalculating the absolute error tolerance instead of keeping it constant for a given set of chemistry. ...

  18. 3D measurement using combined Gray code and dual-frequency phase-shifting approach

    NASA Astrophysics Data System (ADS)

    Yu, Shuang; Zhang, Jing; Yu, Xiaoyang; Sun, Xiaoming; Wu, Haibin; Liu, Xin

    2018-04-01

    The combined Gray code and phase-shifting approach is a commonly used 3D measurement technique. In this technique, an error that equals integer multiples of the phase-shifted fringe period, i.e. period jump error, often exists in the absolute analog code, which can lead to gross measurement errors. To overcome this problem, the present paper proposes 3D measurement using a combined Gray code and dual-frequency phase-shifting approach. Based on 3D measurement using the combined Gray code and phase-shifting approach, one set of low-frequency phase-shifted fringe patterns with an odd-numbered multiple of the original phase-shifted fringe period is added. Thus, the absolute analog code measured value can be obtained by the combined Gray code and phase-shifting approach, and the low-frequency absolute analog code measured value can also be obtained by adding low-frequency phase-shifted fringe patterns. Then, the corrected absolute analog code measured value can be obtained by correcting the former by the latter, and the period jump errors can be eliminated, resulting in reliable analog code unwrapping. For the proposed approach, we established its measurement model, analyzed its measurement principle, expounded the mechanism of eliminating period jump errors by error analysis, and determined its applicable conditions. Theoretical analysis and experimental results show that the proposed approach can effectively eliminate period jump errors, reliably perform analog code unwrapping, and improve the measurement accuracy.
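    The correction of period jump errors by a longer-period absolute phase can be sketched with the standard dual-frequency relation (our formulation, not necessarily the authors' exact one; `ratio` is the odd-numbered period multiple of the added low-frequency patterns):

```python
import numpy as np

def correct_fringe_order(phi_high, phase_low_abs, ratio):
    """Correct period-jump errors in a high-frequency phase map using a
    low-frequency absolute phase whose fringe period is `ratio` times longer.
    The fringe order is k = round((ratio*Phi_low - phi_high) / (2*pi)),
    and the corrected absolute phase is phi_high + 2*pi*k."""
    k = np.round((ratio * phase_low_abs - phi_high) / (2.0 * np.pi))
    return phi_high + 2.0 * np.pi * k
```

    Because `k` is obtained by rounding, a period jump of an integer number of fringe periods in `phi_high` is absorbed as long as the low-frequency phase error stays below half a high-frequency period, which is the applicability condition the paper analyzes.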

  19. Error Budget for a Calibration Demonstration System for the Reflected Solar Instrument for the Climate Absolute Radiance and Refractivity Observatory

    NASA Technical Reports Server (NTRS)

    Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan

    2013-01-01

    A goal of the Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission is to observe high-accuracy, long-term climate change trends over decadal time scales. The key to such a goal is improving the accuracy of SI-traceable absolute calibration across infrared and reflected solar wavelengths, allowing climate change to be separated from the limit of natural variability. The advances required to reach the on-orbit absolute accuracy that allows climate change observations to survive data gaps exist at NIST in the laboratory, but it still needs to be demonstrated that these advances can move successfully from the laboratory to NASA and/or instrument vendor capabilities for spaceborne instruments. The current work describes the radiometric calibration error budget for the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. The goal of the CDS is to allow the testing and evaluation of calibration approaches, alternate design and/or implementation approaches, and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. The resulting SI-traceable error budget for reflectance retrieval using solar irradiance as a reference, and methods for laboratory-based absolute calibration suitable for climate-quality data collections, are given. Key components in the error budget are geometry differences between the solar and earth views, knowledge of attenuator behavior when viewing the sun, and sensor behavior such as detector linearity and noise. Methods for demonstrating this error budget are also presented.

  20. Multiplicative noise removal via a learned dictionary.

    PubMed

    Huang, Yu-Mei; Moisan, Lionel; Ng, Michael K; Zeng, Tieyong

    2012-11-01

    Multiplicative noise removal is a challenging image processing problem, and most existing methods are based on the maximum a posteriori formulation and the logarithmic transformation of multiplicative denoising problems into additive denoising problems. Sparse representations of images have been shown to be efficient approaches for image recovery. Following this idea, in this paper, we propose to learn a dictionary from the logarithmically transformed image, and then to use it in a variational model built for noise removal. Extensive experimental results suggest that in terms of visual quality, peak signal-to-noise ratio, and mean absolute deviation error, the proposed algorithm outperforms state-of-the-art methods.
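    The logarithmic transformation underlying this family of methods can be sketched as follows (the `denoiser` argument stands in for the learned-dictionary variational step, which is not reproduced here):

```python
import numpy as np

def log_domain_denoise(image, denoiser):
    """Multiplicative noise f = u * n becomes additive in the log domain:
    log f = log u + log n. Apply any additive-noise denoiser there,
    then map back with exp."""
    log_f = np.log(np.maximum(image, 1e-6))  # clip to avoid log(0)
    log_u = denoiser(log_f)                  # additive-domain denoising step
    return np.exp(log_u)
```

    Any additive denoiser can be plugged in; the paper's contribution is learning the dictionary used in that step from the log-transformed image itself.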

  1. Improved simulation of aerosol, cloud, and density measurements by shuttle lidar

    NASA Technical Reports Server (NTRS)

    Russell, P. B.; Morley, B. M.; Livingston, J. M.; Grams, G. W.; Patterson, E. W.

    1981-01-01

    Data retrievals are simulated for a Nd:YAG lidar suitable for early flight on the space shuttle. Maximum assumed vertical and horizontal resolutions are 0.1 and 100 km, respectively, in the boundary layer, increasing to 2 and 2000 km in the mesosphere. Aerosol and cloud retrievals are simulated using the 1.06 and 0.53 micron wavelengths independently. Error sources include signal measurement, conventional density information, atmospheric transmission, and lidar calibration. By day, tenuous clouds and Saharan and boundary layer aerosols are retrieved at both wavelengths. By night, these constituents are retrieved, plus upper tropospheric, stratospheric, and mesospheric aerosols and noctilucent clouds. Density, temperature, and improved aerosol and cloud retrievals are simulated by combining signals at 0.35, 1.06, and 0.53 microns. Particulate contamination limits the technique to the cloud-free upper troposphere and above. Error bars automatically show the effect of this contamination, as well as errors in absolute density normalization, reference temperature or pressure, and the sources listed above. For nonvolcanic conditions, relative density profiles have rms errors of 0.54 to 2% in the upper troposphere and stratosphere. Temperature profiles have rms errors of 1.2 to 2.5 K and can define the tropopause to 0.5 km and higher wave structures to 1 or 2 km.

  2. Accuracy of patient-specific organ dose estimates obtained using an automated image segmentation algorithm.

    PubMed

    Schmidt, Taly Gilat; Wang, Adam S; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-10-01

    The overall goal of this work is to develop a rapid, accurate, and automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using simulations to generate dose maps combined with automated segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. We hypothesized that the autosegmentation algorithm is sufficiently accurate to provide organ dose estimates, since small errors delineating organ boundaries will have minimal effect when computing mean organ dose. A leave-one-out validation study of the automated algorithm was performed with 20 head-neck CT scans expertly segmented into nine regions. Mean organ doses of the automatically and expertly segmented regions were computed from Monte Carlo-generated dose maps and compared. The automated segmentation algorithm estimated the mean organ dose to be within 10% of the expert segmentation for regions other than the spinal canal, with the median error for each organ region below 2%. In the spinal canal region, the median error was −7%, with a maximum absolute error of 28% for the single-atlas approach and 11% for the multiatlas approach. The results demonstrate that the automated segmentation algorithm can provide accurate organ dose estimates despite some segmentation errors.

  3. Accuracy of patient-specific organ dose estimates obtained using an automated image segmentation algorithm

    PubMed Central

    Schmidt, Taly Gilat; Wang, Adam S.; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-01-01

    Abstract. The overall goal of this work is to develop a rapid, accurate, and automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using simulations to generate dose maps combined with automated segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. We hypothesized that the autosegmentation algorithm is sufficiently accurate to provide organ dose estimates, since small errors delineating organ boundaries will have minimal effect when computing mean organ dose. A leave-one-out validation study of the automated algorithm was performed with 20 head-neck CT scans expertly segmented into nine regions. Mean organ doses of the automatically and expertly segmented regions were computed from Monte Carlo-generated dose maps and compared. The automated segmentation algorithm estimated the mean organ dose to be within 10% of the expert segmentation for regions other than the spinal canal, with the median error for each organ region below 2%. In the spinal canal region, the median error was −7%, with a maximum absolute error of 28% for the single-atlas approach and 11% for the multiatlas approach. The results demonstrate that the automated segmentation algorithm can provide accurate organ dose estimates despite some segmentation errors. PMID:27921070

  4. Evaluation of measurement errors of temperature and relative humidity from HOBO data logger under different conditions of exposure to solar radiation.

    PubMed

    da Cunha, Antonio Ribeiro

    2015-05-01

    This study aimed to assess measurements of temperature and relative humidity obtained with a HOBO data logger under various conditions of exposure to solar radiation, comparing them with those obtained with a temperature/relative humidity probe and a copper-constantan thermocouple psychrometer, which are considered the standards for such measurements. Data were collected over a 6-day period (from 25 March to 1 April, 2010), during which the equipment was monitored continuously and simultaneously. We employed the following combinations of equipment and conditions: a HOBO data logger in full sunlight; a HOBO data logger shielded within a white plastic cup with windows for air circulation; a HOBO data logger shielded within a gill-type shelter (a prototype multi-plate plastic shelter); a copper-constantan thermocouple psychrometer exposed to natural ventilation and protected from sunlight; and a temperature/relative humidity probe under a commercial multi-plate radiation shield. Comparisons between the measurements obtained with the various devices were made on the basis of statistical indicators: linear regression with coefficient of determination, index of agreement, maximum absolute error, and mean absolute error. The prototype multi-plate (gill-type) shelter used to protect the HOBO data logger was found to provide the best protection against the effects of solar radiation on measurements of temperature and relative humidity. The precision and accuracy of a device that measures temperature and relative humidity depend on an efficient shelter that minimizes the interference caused by solar radiation, thereby avoiding erroneous analysis of the data obtained.

  5. Absolute calibration of optical flats

    DOEpatents

    Sommargren, Gary E.

    2005-04-05

    The invention uses the phase shifting diffraction interferometer (PSDI) to provide a true point-by-point measurement of absolute flatness over the surface of optical flats. Beams exiting the fiber optics in a PSDI have perfect spherical wavefronts. The measurement beam is reflected from the optical flat and passed through an auxiliary optic to then be combined with the reference beam on a CCD. The combined beams include phase errors due to both the optic under test and the auxiliary optic. Standard phase extraction algorithms are used to calculate this combined phase error. The optical flat is then removed from the system and the measurement fiber is moved to recombine the two beams. The newly combined beams include only the phase errors due to the auxiliary optic. When the second phase measurement is subtracted from the first phase measurement, the absolute phase error of the optical flat is obtained.

  6. Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation

    ERIC Educational Resources Information Center

    Prentice, J. S. C.

    2012-01-01

    An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm,…

  7. Assessing Suturing Skills in a Self-Guided Learning Setting: Absolute Symmetry Error

    ERIC Educational Resources Information Center

    Brydges, Ryan; Carnahan, Heather; Dubrowski, Adam

    2009-01-01

    Directed self-guidance, whereby trainees independently practice a skill-set in a structured setting, may be an effective technique for novice training. Currently, however, most evaluation methods require an expert to be present during practice. The study aim was to determine if absolute symmetry error, a clinically important measure that can be…

  8. Increasing the applicability of density functional theory. V. X-ray absorption spectra with ionization potential corrected exchange and correlation potentials.

    PubMed

    Verma, Prakash; Bartlett, Rodney J

    2016-07-21

    Core excitation energies are computed with time-dependent density functional theory (TD-DFT) using the ionization energy corrected exchange and correlation potential QTP(0,0). QTP(0,0) provides C, N, and O K-edge spectra accurate to about an electron volt. A mean absolute error (MAE) of 0.77 eV and a maximum error of 2.6 eV are observed for QTP(0,0) for many small molecules. TD-DFT based on QTP(0,0) is then used to describe the core-excitation spectra of the 22 amino acids. TD-DFT with conventional functionals greatly underestimates core excitation energies, largely due to the significant error in the Kohn-Sham occupied eigenvalues. In contrast, the ionization energy corrected potential, QTP(0,0), provides excellent approximations (MAE of 0.53 eV) for core ionization energies as eigenvalues of the Kohn-Sham equations. As a consequence, core excitation energies are accurately described with QTP(0,0), as are the core ionization energies important in X-ray photoionization spectra or electron spectroscopy for chemical analysis.

  9. Evaluation and Applications of the Prediction of Intensity Model Error (PRIME) Model

    NASA Astrophysics Data System (ADS)

    Bhatia, K. T.; Nolan, D. S.; Demaria, M.; Schumacher, A.

    2015-12-01

    Forecasters and end users of tropical cyclone (TC) intensity forecasts would greatly benefit from a reliable expectation of model error to counteract the lack of consistency in TC intensity forecast performance. As a first step towards producing error predictions to accompany each TC intensity forecast, Bhatia and Nolan (2013) studied the relationship between synoptic parameters, TC attributes, and forecast errors. In this study, we build on previous results of Bhatia and Nolan (2013) by testing the ability of the Prediction of Intensity Model Error (PRIME) model to forecast the absolute error and bias of four leading intensity models available for guidance in the Atlantic basin. PRIME forecasts are independently evaluated at each 12-hour interval from 12 to 120 hours during the 2007-2014 Atlantic hurricane seasons. The absolute error and bias predictions of PRIME are compared to their respective climatologies to determine their skill. In addition to these results, we will present the performance of the operational version of PRIME run during the 2015 hurricane season. PRIME verification results show that it can reliably anticipate situations where particular models excel, and therefore could lead to a more informed protocol for hurricane evacuations and storm preparations. These positive conclusions suggest that PRIME forecasts also have the potential to lower the error in the original intensity forecasts of each model. As a result, two techniques are proposed to develop a post-processing procedure for a multimodel ensemble based on PRIME. The first approach is to inverse-weight models using PRIME absolute error predictions (higher predicted absolute error corresponds to lower weights). The second multimodel ensemble applies PRIME bias predictions to each model's intensity forecast and the mean of the corrected models is evaluated. The forecasts of both of these experimental ensembles are compared to those of the equal-weight ICON ensemble, which currently provides the most reliable forecasts in the Atlantic basin.
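
    The two post-processing schemes described above can be sketched in a few lines. This is an illustrative sketch only, not the operational PRIME code; the function names and the plain-array interface are assumptions.

```python
import numpy as np

def inverse_error_weights(predicted_abs_errors):
    """Higher predicted absolute error -> lower weight; weights sum to 1."""
    inv = 1.0 / np.asarray(predicted_abs_errors, dtype=float)
    return inv / inv.sum()

def weighted_ensemble(forecasts, predicted_abs_errors):
    """Scheme 1: inverse-weight the model intensity forecasts."""
    return float(np.dot(inverse_error_weights(predicted_abs_errors), forecasts))

def bias_corrected_ensemble(forecasts, predicted_biases):
    """Scheme 2: remove each model's predicted bias, then take the mean."""
    corrected = np.asarray(forecasts, dtype=float) - np.asarray(predicted_biases, dtype=float)
    return float(corrected.mean())
```

    For example, two models forecasting 100 kt and 110 kt with equal predicted absolute errors combine to 105 kt under the first scheme.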

  10. The precision and reliability evaluation of 3-dimensional printed damaged bone and prosthesis models by stereo lithography appearance

    PubMed Central

    Zou, Yun; Han, Qing; Weng, Xisheng; Zou, Yongwei; Yang, Yingying; Zhang, Kesong; Yang, Kerong; Xu, Xiaolin; Wang, Chenyu; Qin, Yanguo; Wang, Jincheng

    2018-01-01

    Abstract Recently, the clinical application of 3D printed models has been increasing. However, there has been no systematic study confirming the precision and reliability of 3D printed models, and some senior clinical doctors mistrust their reliability in clinical application. The purpose of this study was to evaluate the precision and reliability of stereolithography appearance (SLA) 3D printed models. Related parameters were selected to assess the reliability of the SLA 3D printed model. The computed tomography (CT) data of bone/prosthesis and model were collected and 3D reconstructed. Anatomical parameters were measured and statistical analysis was performed; the intraclass correlation coefficient (ICC) was used to evaluate the similarity between the model and the real bone/prosthesis, and the absolute difference (mm) and relative difference (%) were calculated. For the prosthesis model, the 3-dimensional error was measured. There was no significant difference in the anatomical parameters except the max height (MH) of the long bone. All the ICCs were greater than 0.990. The maximum absolute and relative differences were 0.45 mm and 1.10%. The 3-dimensional error analysis showed that the positive/minus distances were 0.273 mm/0.237 mm. The application of SLA 3D printed models in the diagnosis and treatment of complex orthopedic disease is reliable and precise. PMID:29419675

  11. The precision and reliability evaluation of 3-dimensional printed damaged bone and prosthesis models by stereo lithography appearance.

    PubMed

    Zou, Yun; Han, Qing; Weng, Xisheng; Zou, Yongwei; Yang, Yingying; Zhang, Kesong; Yang, Kerong; Xu, Xiaolin; Wang, Chenyu; Qin, Yanguo; Wang, Jincheng

    2018-02-01

    Recently, the clinical application of 3D printed models has been increasing. However, there has been no systematic study confirming the precision and reliability of 3D printed models, and some senior clinical doctors mistrust their reliability in clinical application. The purpose of this study was to evaluate the precision and reliability of stereolithography appearance (SLA) 3D printed models. Related parameters were selected to assess the reliability of the SLA 3D printed model. The computed tomography (CT) data of bone/prosthesis and model were collected and 3D reconstructed. Anatomical parameters were measured and statistical analysis was performed; the intraclass correlation coefficient (ICC) was used to evaluate the similarity between the model and the real bone/prosthesis, and the absolute difference (mm) and relative difference (%) were calculated. For the prosthesis model, the 3-dimensional error was measured. There was no significant difference in the anatomical parameters except the max height (MH) of the long bone. All the ICCs were greater than 0.990. The maximum absolute and relative differences were 0.45 mm and 1.10%. The 3-dimensional error analysis showed that the positive/minus distances were 0.273 mm/0.237 mm. The application of SLA 3D printed models in the diagnosis and treatment of complex orthopedic disease is reliable and precise.

  12. Absolute position calculation for a desktop mobile rehabilitation robot based on three optical mouse sensors.

    PubMed

    Zabaleta, Haritz; Valencia, David; Perry, Joel; Veneman, Jan; Keller, Thierry

    2011-01-01

    ArmAssist is a wireless robot for post-stroke upper limb rehabilitation. Knowing the position of the arm is essential for any rehabilitation device. In this paper, we describe a method based on an artificial landmark navigation system. The navigation system uses three optical mouse sensors, which enables the building of a cheap but reliable position sensor. Two of the sensors are the data source for odometry calculations, and the third optical mouse sensor takes very low resolution pictures of a custom designed mat. These pictures are processed by an optical symbol recognition (OSR) algorithm which estimates the orientation of the robot and recognizes the landmarks placed on the mat. A data fusion strategy is described to detect misclassifications of the landmarks so that only reliable information is fused. The orientation given by the OSR algorithm is used to significantly improve the odometry, and the recognition of the landmarks is used to reference the odometry to an absolute coordinate system. The system was tested using a 3D motion capture system. With the actual mat configuration, in a field of motion of 710 × 450 mm, the maximum error in position estimation was 49.61 mm with an average error of 36.70 ± 22.50 mm. The average test duration was 36.5 seconds and the average path length was 4173 mm.

  13. Ultrasound virtual endoscopy: Polyp detection and reliability of measurement in an in vitro study with pig intestine specimens

    PubMed Central

    Liu, Jin-Ya; Chen, Li-Da; Cai, Hua-Song; Liang, Jin-Yu; Xu, Ming; Huang, Yang; Li, Wei; Feng, Shi-Ting; Xie, Xiao-Yan; Lu, Ming-De; Wang, Wei

    2016-01-01

    AIM: To present our initial experience regarding the feasibility of ultrasound virtual endoscopy (USVE) and its measurement reliability for polyp detection in an in vitro study using pig intestine specimens. METHODS: Six porcine intestine specimens containing 30 synthetic polyps underwent USVE, computed tomography colonography (CTC) and optical colonoscopy (OC) for polyp detection. The polyp measurement, defined as the maximum polyp diameter on two-dimensional (2D) multiplanar reformatted (MPR) planes, was obtained by USVE, and the absolute measurement error was analyzed using the direct measurement as the reference standard. RESULTS: USVE detected 29 (96.7%) of the 30 polyps, with one 7-mm polyp missed. There was one false-positive finding. Twenty-six (89.7%) of the 29 reconstructed images were clearly depicted, while 29 (96.7%) of 30 polyps were displayed on CTC with one false-negative finding. In OC, all the polyps were detected. The intraclass correlation coefficient was 0.876 (95%CI: 0.745-0.940) for measurements obtained with USVE. The pooled absolute measurement errors ± the standard deviations of the depicted polyps with actual sizes ≤ 5 mm, 6-9 mm, and ≥ 10 mm were 1.9 ± 0.8 mm, 0.9 ± 1.2 mm, and 1.0 ± 1.4 mm, respectively. CONCLUSION: USVE is reliable for polyp detection and measurement in this in vitro study. PMID:27022217

  14. Assessment of errors in static electrical impedance tomography with adjacent and trigonometric current patterns.

    PubMed

    Kolehmainen, V; Vauhkonen, M; Karjalainen, P A; Kaipio, J P

    1997-11-01

    In electrical impedance tomography (EIT), difference imaging is often preferred over static imaging. This is because of the many unknowns in the forward modelling which make it difficult to obtain reliable absolute resistivity estimates. However, static imaging and absolute resistivity values are needed in some potential applications of EIT. In this paper we demonstrate by simulation the effects of different error components that are included in the reconstruction of static EIT images. All simulations are carried out in two dimensions with the so-called complete electrode model. Errors that are considered are the modelling error in the boundary shape of an object, errors in the electrode sizes and localizations and errors in the contact impedances under the electrodes. Results using both adjacent and trigonometric current patterns are given.

  15. Pyrometer with tracking balancing

    NASA Astrophysics Data System (ADS)

    Ponomarev, D. B.; Zakharenko, V. A.; Shkaev, A. G.

    2018-04-01

    Currently, one of the main metrological challenges in noncontact temperature measurement is emissivity uncertainty. This paper describes a pyrometer that diminishes the effect of emissivity through a measuring scheme with tracking balancing, in which the radiation receiver acts as a null-indicator. The results of an absolute-error study of the prototype pyrometer in measuring the surface temperature of aluminum and nickel samples are presented. The absolute errors calculated from tabulated emissivity values are compared with the errors obtained from experimental measurements by the proposed method. The practical implementation of the proposed technical solution halved the error due to emissivity uncertainty.

  16. Uncertainty analysis technique for OMEGA Dante measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    May, M. J.; Widmann, K.; Sorce, C.

    2010-10-15

    The Dante is an 18 channel x-ray filtered diode array which records the spectrally and temporally resolved radiation flux from various targets (e.g., hohlraums, etc.) at x-ray energies between 50 eV and 10 keV. It is a main diagnostic installed on the OMEGA laser facility at the Laboratory for Laser Energetics, University of Rochester. The absolute flux is determined from the photometric calibration of the x-ray diodes, filters and mirrors, and an unfold algorithm. Understanding the errors on this absolute measurement is critical for understanding hohlraum energetic physics. We present a new method for quantifying the uncertainties on the determined flux using a Monte Carlo parameter variation technique. This technique combines the uncertainties in both the unfold algorithm and the error from the absolute calibration of each channel into a one sigma Gaussian error function. One thousand test voltage sets are created using these error functions and processed by the unfold algorithm to produce individual spectra and fluxes. Statistical methods are applied to the resultant set of fluxes to estimate error bars on the measurements.
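
    The Monte Carlo parameter-variation technique described above can be sketched as follows. The `unfold` here is a trivial stand-in for the real Dante unfold algorithm, and the function names, channel count, and error fractions are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

def unfold(voltages):
    # Placeholder for the real Dante unfold algorithm: a simple weighted
    # sum standing in for the flux reconstruction from channel voltages.
    return float(np.sum(voltages) * 1.7)

def monte_carlo_flux_error(voltages, sigma_frac, n_trials=1000):
    """Propagate per-channel one-sigma Gaussian errors through the unfold.

    Each trial perturbs every channel voltage by a Gaussian fractional
    error, runs the unfold, and the spread of the resulting fluxes gives
    the error bar on the measurement.
    """
    v = np.asarray(voltages, dtype=float)
    fluxes = np.empty(n_trials)
    for i in range(n_trials):
        trial = v * (1.0 + sigma_frac * rng.standard_normal(v.size))
        fluxes[i] = unfold(trial)
    return fluxes.mean(), fluxes.std(ddof=1)
```

    Running this with 18 channels and 5% per-channel errors yields a mean flux together with a one-sigma spread that serves as the error bar.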

  17. Uncertainty Analysis Technique for OMEGA Dante Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    May, M J; Widmann, K; Sorce, C

    2010-05-07

    The Dante is an 18 channel X-ray filtered diode array which records the spectrally and temporally resolved radiation flux from various targets (e.g. hohlraums, etc.) at X-ray energies between 50 eV and 10 keV. It is a main diagnostic installed on the OMEGA laser facility at the Laboratory for Laser Energetics, University of Rochester. The absolute flux is determined from the photometric calibration of the X-ray diodes, filters and mirrors and an unfold algorithm. Understanding the errors on this absolute measurement is critical for understanding hohlraum energetic physics. We present a new method for quantifying the uncertainties on the determined flux using a Monte-Carlo parameter variation technique. This technique combines the uncertainties in both the unfold algorithm and the error from the absolute calibration of each channel into a one sigma Gaussian error function. One thousand test voltage sets are created using these error functions and processed by the unfold algorithm to produce individual spectra and fluxes. Statistical methods are applied to the resultant set of fluxes to estimate error bars on the measurements.

  18. Mapping the absolute magnetic field and evaluating the quadratic Zeeman-effect-induced systematic error in an atom interferometer gravimeter

    NASA Astrophysics Data System (ADS)

    Hu, Qing-Qing; Freier, Christian; Leykauf, Bastian; Schkolnik, Vladimir; Yang, Jun; Krutzik, Markus; Peters, Achim

    2017-09-01

    Precisely evaluating the systematic error induced by the quadratic Zeeman effect is important for developing atom interferometer gravimeters aiming at an accuracy in the μGal regime (1 μGal = 10⁻⁸ m/s² ≈ 10⁻⁹ g). This paper reports on the experimental investigation of Raman spectroscopy-based magnetic field measurements and the evaluation of the systematic error in the gravimetric atom interferometer (GAIN) due to the quadratic Zeeman effect. We discuss the Raman duration and frequency step-size-dependent magnetic field measurement uncertainty, present the vector light shift and tensor light shift induced magnetic field measurement offset, and map the absolute magnetic field inside the interferometer chamber of GAIN with an uncertainty of 0.72 nT and a spatial resolution of 12.8 mm. We evaluate the quadratic Zeeman-effect-induced gravity measurement error in GAIN as 2.04 μGal. The methods shown in this paper are important for precisely mapping the absolute magnetic field in vacuum and reducing the quadratic Zeeman-effect-induced systematic error in Raman transition-based precision measurements, such as atom interferometer gravimeters.

  19. A new accuracy measure based on bounded relative error for time series forecasting

    PubMed Central

    Chen, Chao; Twycross, Jamie; Garibaldi, Jonathan M.

    2017-01-01

    Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues such as poor resistance to outliers and scale dependence. In this paper, while summarising commonly used accuracy measures, a special review is made on the symmetric mean absolute percentage error. Moreover, a new accuracy measure called the Unscaled Mean Bounded Relative Absolute Error (UMBRAE), which combines the best features of various alternative measures, is proposed to address the common issues of existing measures. A comparative evaluation on the proposed and related measures has been made with both synthetic and real-world data. The results indicate that the proposed measure, with user selectable benchmark, performs as well as or better than other measures on selected criteria. Though it has been commonly accepted that there is no single best accuracy measure, we suggest that UMBRAE could be a good choice to evaluate forecasting methods, especially for cases where measures based on geometric mean of relative errors, such as the geometric mean relative absolute error, are preferred. PMID:28339480

  20. A new accuracy measure based on bounded relative error for time series forecasting.

    PubMed

    Chen, Chao; Twycross, Jamie; Garibaldi, Jonathan M

    2017-01-01

    Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues such as poor resistance to outliers and scale dependence. In this paper, while summarising commonly used accuracy measures, a special review is made on the symmetric mean absolute percentage error. Moreover, a new accuracy measure called the Unscaled Mean Bounded Relative Absolute Error (UMBRAE), which combines the best features of various alternative measures, is proposed to address the common issues of existing measures. A comparative evaluation on the proposed and related measures has been made with both synthetic and real-world data. The results indicate that the proposed measure, with user selectable benchmark, performs as well as or better than other measures on selected criteria. Though it has been commonly accepted that there is no single best accuracy measure, we suggest that UMBRAE could be a good choice to evaluate forecasting methods, especially for cases where measures based on geometric mean of relative errors, such as the geometric mean relative absolute error, are preferred.
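
    The UMBRAE measure can be written compactly. A minimal sketch following the definitions in the paper: the bounded relative absolute error BRAE_t = |e_t| / (|e_t| + |e*_t|) compares the forecast error e_t against the error e*_t of a user-selected benchmark, and UMBRAE = MBRAE / (1 - MBRAE) unscales the mean BRAE. Handling of the degenerate case e_t = e*_t = 0 is omitted here.

```python
import numpy as np

def umbrae(actual, predicted, benchmark):
    """Unscaled Mean Bounded Relative Absolute Error.

    UMBRAE = 1 means performance equal to the benchmark; values below 1
    indicate the forecast is better than the benchmark, above 1 worse.
    """
    actual = np.asarray(actual, dtype=float)
    e = np.abs(actual - np.asarray(predicted, dtype=float))       # forecast errors
    e_star = np.abs(actual - np.asarray(benchmark, dtype=float))  # benchmark errors
    brae = e / (e + e_star)   # bounded in [0, 1]
    mbrae = brae.mean()
    return mbrae / (1.0 - mbrae)
```

    For instance, a forecast whose errors are everywhere half the benchmark's errors scores UMBRAE = 0.5.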

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morley, Steven

    The PyForecastTools package provides Python routines for calculating metrics for model validation, forecast verification and model comparison. For continuous predictands the package provides functions for calculating bias (mean error, mean percentage error, median log accuracy, symmetric signed bias), and for calculating accuracy (mean squared error, mean absolute error, mean absolute scaled error, normalized RMSE, median symmetric accuracy). Convenience routines to calculate the component parts (e.g., forecast error, scaled error) of each metric are also provided. To compare models the package provides a generic skill score and percent better. Robust measures of scale, including the median absolute deviation, robust standard deviation, robust coefficient of variation and the Sn estimator, are all provided by the package. Finally, the package implements Python classes for NxN contingency tables. In the case of a multi-class prediction, accuracy and skill metrics such as proportion correct and the Heidke and Peirce skill scores are provided as object methods. The special case of a 2x2 contingency table inherits from the NxN class and provides many additional metrics for binary classification: probability of detection, probability of false detection, false alarm ratio, threat score, equitable threat score, and bias. Confidence intervals for many of these quantities can be calculated using either the Wald method or Agresti-Coull intervals.
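
    A few of the metrics listed above can be illustrated directly. Note these are generic definitions (the robust measures are based on the log accuracy ratio ln(pred/obs)), not the PyForecastTools API itself, whose function names and signatures may differ.

```python
import numpy as np

def mean_abs_error(obs, pred):
    """Mean absolute error between observations and predictions."""
    return float(np.mean(np.abs(np.asarray(pred, float) - np.asarray(obs, float))))

def median_symmetric_accuracy(obs, pred):
    """100*(exp(median(|ln(pred/obs)|)) - 1): a robust, percentage-style
    accuracy measure; requires strictly positive obs and pred."""
    q = np.abs(np.log(np.asarray(pred, float) / np.asarray(obs, float)))
    return float(100.0 * (np.exp(np.median(q)) - 1.0))

def symmetric_signed_bias(obs, pred):
    """100*sign(M)*(exp(|M|) - 1) with M = median(ln(pred/obs)):
    positive for systematic over-prediction, negative for under-prediction."""
    m = np.median(np.log(np.asarray(pred, float) / np.asarray(obs, float)))
    return float(100.0 * np.sign(m) * (np.exp(np.abs(m)) - 1.0))
```

    A prediction that is everywhere exactly double the observation gives a median symmetric accuracy and symmetric signed bias of 100%.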

  2. Sub-nanometer periodic nonlinearity error in absolute distance interferometers

    NASA Astrophysics Data System (ADS)

    Yang, Hongxing; Huang, Kaiqi; Hu, Pengcheng; Zhu, Pengfei; Tan, Jiubin; Fan, Zhigang

    2015-05-01

    Periodic nonlinearity, which can result in errors at the nanometer scale, has become a main problem limiting absolute distance measurement accuracy. In order to eliminate this error, a new integrated interferometer with a non-polarizing beam splitter is developed. This eliminates frequency and/or polarization mixing, and the strict requirement on the laser source polarization is greatly relaxed. By combining a retro-reflector and an angle prism, the reference and measuring beams can be spatially separated so that their optical paths do not overlap. The main cause of the periodic nonlinearity error, i.e., frequency and/or polarization mixing and beam leakage, is thereby eliminated. Experimental results indicate that the periodic phase error is kept within 0.0018°.

  3. Demand forecasting of electricity in Indonesia with limited historical data

    NASA Astrophysics Data System (ADS)

    Dwi Kartikasari, Mujiati; Rohmad Prayogi, Arif

    2018-03-01

    Demand forecasting of electricity is an important activity for electricity providers, giving them a picture of electricity demand in the future. Prediction of electricity demand can be done using time-series models. In this paper, the double moving average model, Holt’s exponential smoothing model, and the grey model GM(1,1) are used to predict electricity demand in Indonesia under the condition of limited historical data. The results show that the grey model GM(1,1) has the smallest values of MAE (mean absolute error), MSE (mean squared error), and MAPE (mean absolute percentage error).
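
    A minimal sketch of the grey model GM(1,1) used above, in its textbook formulation (not the authors' code): accumulate the series (AGO), fit the whitening-equation parameters a and b by least squares on the background values, then difference the fitted accumulated series back to the original scale.

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Grey model GM(1,1): return fitted values for the series x0 plus
    `steps` forecast values. Suitable for short, positive series."""
    x0 = np.asarray(x0, dtype=float)
    n = x0.size
    x1 = np.cumsum(x0)                    # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])         # background values
    # Least-squares fit of x0(k) = -a*z1(k) + b for k = 2..n
    B = np.column_stack([-z1, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    # Time response of the whitening equation, then inverse AGO
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x0[0]], np.diff(x1_hat)])
```

    On a short geometric series the model reproduces the growth closely; GM(1,1) is designed for exactly this kind of limited-data, near-exponential demand series.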

  4. Demand Controlled Economizer Cycles: A Direct Digital Control Scheme for Heating, Ventilating, and Air Conditioning Systems,

    DTIC Science & Technology

    1984-05-01

    The control ignored any error of 1/10th degree or less. This was done by setting the error term E and the integral sum PREINT to zero if the absolute value of...signs of two errors jeq tdiff if equal, jump clr @preint else zero integral sum tdiff mov @diff,r1 fetch absolute value of OAT-RAT ci r1,25 is...includes a heating coil and thermostatic control to maintain the air in this path at an elevated temperature, typically around 80 degrees Fahrenheit (80 F

  5. Probabilistic performance estimators for computational chemistry methods: The empirical cumulative distribution function of absolute errors

    NASA Astrophysics Data System (ADS)

    Pernot, Pascal; Savin, Andreas

    2018-06-01

    Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, the distributions of model errors being neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
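
    The two advocated statistics reduce to simple operations on the empirical distribution of unsigned errors. A minimal sketch (the function names are assumptions, not from the paper):

```python
import numpy as np

def prob_abs_error_below(errors, threshold):
    """Empirical probability that a new calculation's absolute error
    falls below the chosen threshold (statistic 1 above)."""
    u = np.abs(np.asarray(errors, dtype=float))
    return float(np.mean(u < threshold))

def error_amplitude_at_confidence(errors, confidence=0.95):
    """Maximal error amplitude expected at the chosen confidence level,
    i.e. the empirical quantile of the unsigned errors (statistic 2 above)."""
    u = np.abs(np.asarray(errors, dtype=float))
    return float(np.quantile(u, confidence))
```

    Unlike the mean signed or unsigned error, both quantities read directly as prediction-error probabilities for an end-user.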

  6. A review on Black-Scholes model in pricing warrants in Bursa Malaysia

    NASA Astrophysics Data System (ADS)

    Gunawan, Nur Izzaty Ilmiah Indra; Ibrahim, Siti Nur Iqmal; Rahim, Norhuda Abdul

    2017-01-01

    This paper studies the accuracy of the Black-Scholes (BS) model and the dilution-adjusted Black-Scholes (DABS) model in pricing selected warrants traded in the Malaysian market. Mean Absolute Error (MAE) and Mean Absolute Percentage Error (MAPE) are used to compare the two models. Results show that the DABS model is more accurate than the BS model for the selected data.
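
    As an illustrative sketch (not the paper's code): the BS call price, one simple dilution adjustment in the spirit of the DABS model (scaling by N/(N+M), which simplifies the full dilution treatment), and the MAE/MAPE metrics used for the comparison.

```python
import math

def bs_call(S, K, T, r, sigma):
    """Black-Scholes European call price."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def diluted_warrant(S, K, T, r, sigma, n_shares, m_warrants):
    """Simplified dilution adjustment: scale the call value by N/(N+M)."""
    return n_shares / (n_shares + m_warrants) * bs_call(S, K, T, r, sigma)

def mae(actual, model):
    return sum(abs(a - m) for a, m in zip(actual, model)) / len(actual)

def mape(actual, model):
    return 100.0 * sum(abs((a - m) / a) for a, m in zip(actual, model)) / len(actual)
```

    The model with the smaller MAE and MAPE against observed market prices is judged the more accurate, which is the comparison the paper performs.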

  7. Prediction of pKa Values for Neutral and Basic Drugs based on Hybrid Artificial Intelligence Methods.

    PubMed

    Li, Mengshan; Zhang, Huaijing; Chen, Bingsheng; Wu, Yan; Guan, Lixin

    2018-03-05

    The pKa value of drugs is an important parameter in drug design and pharmacology. In this paper, an improved particle swarm optimization (PSO) algorithm is proposed based on population entropy diversity. In the improved algorithm, when the population entropy is higher than the set maximum threshold, the convergence strategy is adopted; when it is lower than the set minimum threshold, the divergence strategy is adopted; and when it lies between the two thresholds, the self-adaptive adjustment strategy is maintained. The improved PSO algorithm was applied to the training of a radial basis function artificial neural network (RBF ANN) model and the selection of molecular descriptors. A quantitative structure-activity relationship model based on an RBF ANN trained by the improved PSO algorithm is proposed to predict the pKa values of 74 neutral and basic drugs, and was then validated on another database containing 20 molecules. The validation results showed that the model had good prediction performance: the average absolute relative error, root mean square error, and squared correlation coefficient were 0.3105, 0.0411, and 0.9685, respectively. The model can be used as a reference for exploring other quantitative structure-activity relationships.

  8. Validity of the top-down approach of inverse dynamics analysis in fast and large rotational trunk movements.

    PubMed

    Iino, Yoichi; Kojima, Takeji

    2012-08-01

    This study investigated the validity of the top-down approach of inverse dynamics analysis in fast and large rotational movements of the trunk about three orthogonal axes of the pelvis for nine male collegiate students. The maximum angles of the upper trunk relative to the pelvis were approximately 47°, 49°, 32°, and 55° for lateral bending, flexion, extension, and axial rotation, respectively, with maximum angular velocities of 209°/s, 201°/s, 145°/s, and 288°/s, respectively. The pelvic moments about the axes during the movements were determined using the top-down and bottom-up approaches of inverse dynamics and compared between the two approaches. Three body segment inertial parameter sets were estimated using anthropometric data sets (Ae et al., Biomechanism 11, 1992; De Leva, J Biomech, 1996; Dumas et al., J Biomech, 2007). The root-mean-square errors of the moments and the absolute errors of the peaks of the moments were generally smaller than 10 N·m. The results suggest that the pelvic moment in motions involving fast and large trunk movements can be determined with a certain level of validity using the top-down approach in which the trunk is modeled as two or three rigid-link segments.

  9. Fast vision-based catheter 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Moradi Dalvand, Mohsen; Nahavandi, Saeid; Howe, Robert D.

    2016-07-01

    Continuum robots offer better maneuverability and inherent compliance and are well-suited for surgical applications as catheters, where gentle interaction with the environment is desired. However, sensing their shape and tip position is a challenge, as traditional sensors cannot be employed in the way they are in rigid robotic manipulators. In this paper, a high-speed vision-based shape sensing algorithm for real-time 3D reconstruction of continuum robots based on the views of two arbitrarily positioned cameras is presented. The algorithm is based on the closed-form analytical solution of the reconstruction of quadratic curves in 3D space from two arbitrary perspective projections. High-speed image processing algorithms are developed for segmentation and feature extraction from the images. The proposed algorithms are experimentally validated for accuracy by measuring the tip position, length, and bending and orientation angles for known circular and elliptical catheter-shaped tubes. A sensitivity analysis is also carried out to evaluate the robustness of the algorithm. Experimental results demonstrate good accuracy (maximum errors of ±0.6 mm and ±0.5 deg), performance (200 Hz), and robustness (maximum absolute errors of 1.74 mm and 3.64 deg under the added noise) of the proposed high-speed algorithms.

  10. Optical Coherence Tomography–Based Corneal Power Measurement and Intraocular Lens Power Calculation Following Laser Vision Correction (An American Ophthalmological Society Thesis)

    PubMed Central

    Huang, David; Tang, Maolong; Wang, Li; Zhang, Xinbo; Armour, Rebecca L.; Gattey, Devin M.; Lombardi, Lorinna H.; Koch, Douglas D.

    2013-01-01

    Purpose: To use optical coherence tomography (OCT) to measure corneal power and improve the selection of intraocular lens (IOL) power in cataract surgeries after laser vision correction. Methods: Patients with previous myopic laser vision corrections were enrolled in this prospective study from two eye centers. Corneal thickness and power were measured by Fourier-domain OCT. Axial length, anterior chamber depth, and automated keratometry were measured by a partial coherence interferometer. An OCT-based IOL formula was developed. The mean absolute error of the OCT-based formula in predicting postoperative refraction was compared to two regression-based IOL formulae for eyes with previous laser vision correction. Results: Forty-six eyes of 46 patients all had uncomplicated cataract surgery with monofocal IOL implantation. The mean arithmetic prediction error of postoperative refraction was 0.05 ± 0.65 diopter (D) for the OCT formula, 0.14 ± 0.83 D for the Haigis-L formula, and 0.24 ± 0.82 D for the no-history Shammas-PL formula. The mean absolute error was 0.50 D for OCT compared to a mean absolute error of 0.67 D for Haigis-L and 0.67 D for Shammas-PL. The adjusted mean absolute error (average prediction error removed) was 0.49 D for OCT, 0.65 D for Haigis-L (P=.031), and 0.62 D for Shammas-PL (P=.044). For OCT, 61% of the eyes were within 0.5 D of prediction error, whereas 46% were within 0.5 D for both Haigis-L and Shammas-PL (P=.034). Conclusions: The predictive accuracy of OCT-based IOL power calculation was better than Haigis-L and Shammas-PL formulas in eyes after laser vision correction. PMID:24167323

  11. Problems and methods of calculating the Legendre functions of arbitrary degree and order

    NASA Astrophysics Data System (ADS)

    Novikova, Elena; Dmitrenko, Alexander

    2016-12-01

    The known standard recursion methods of computing the fully normalized associated Legendre functions do not give the necessary precision under the IEEE 754-2008 standard, which creates problems of underflow and overflow. Analysis of the calculation of the Legendre functions shows that underflow is not dangerous by itself. The main problem that generates gross errors is the effect of "absolute zero": once it appears in a forward column recursion, "absolute zero" converts to zero all values that are multiplied by it, regardless of whether a zero result of the multiplication is correct or not. Three methods of calculating the Legendre functions that remove the effect of "absolute zero" from the calculations are discussed here. These methods are also of interest because they have almost no limit on the maximum degree of the Legendre functions. It is shown that the numerical accuracy of these three methods is the same, but the CPU time of the Fukushima method is minimal; therefore, the Fukushima method is the best. Its main advantage is computational speed, which is an important factor in calculating as large a number of Legendre functions as the 2,401,336 required for EGM2008.
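
    The "absolute zero" effect is easy to demonstrate: once a forward recursion underflows to exactly 0.0, later multiplications cannot recover the value even when the true result is of order one. A minimal illustration:

```python
import sys

def forward_product(factors, p0=1.0):
    """Forward recursion p_k = r_k * p_(k-1), as in a column recursion."""
    values = [p0]
    for r in factors:
        values.append(values[-1] * r)
    return values

tiny = sys.float_info.min  # smallest positive normal double, ~2.2e-308
vals = forward_product([tiny, tiny, 1e308, 1e308])
# The true product is (tiny * tiny) * 1e308 * 1e308, approximately 4.95,
# but tiny * tiny underflows to exactly 0.0, and once the running value is
# 0.0 every subsequent multiplication stays 0.0.
```

    This is why the methods discussed above must either rescale on the fly or carry an extended exponent alongside the significand.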

  12. Relevant reduction effect with a modified thermoplastic mask of rotational error for glottic cancer in IMRT

    NASA Astrophysics Data System (ADS)

    Jung, Jae Hong; Jung, Joo-Young; Cho, Kwang Hwan; Ryu, Mi Ryeong; Bae, Sun Hyun; Moon, Seong Kwon; Kim, Yong Ho; Choe, Bo-Young; Suh, Tae Suk

    2017-02-01

    The purpose of this study was to analyze the glottis rotational error (GRE) by using a thermoplastic mask for patients with glottic cancer undergoing intensity-modulated radiation therapy (IMRT). We selected 20 patients with glottic cancer who had received IMRT using tomotherapy. Both kilovoltage CT (planning kVCT) and daily megavoltage CT (MVCT) images were used for evaluating the error. Six anatomical landmarks in the image were defined to evaluate the correlation between the absolute GRE (°) and the length of contact between the mask and the patient's underlying skin (mask, mm). We statistically analyzed the results using Pearson's correlation coefficient and a linear regression analysis (P < 0.05). The mask contact length and the absolute GRE were verified to have a statistical correlation (P < 0.01). We found statistical significance for each parameter in the linear regression analysis (mask versus absolute roll: P = 0.004 [P < 0.05]; mask versus 3D-error: P = 0.000 [P < 0.05]). The range of the 3D-errors with contact by the mask was from 1.2% to 39.7% between the maximum-contact and no-contact cases in this study. A thermoplastic mask with a tight, increased contact area may contribute to the uncertainty of reproducibility as a variation of the absolute GRE. Thus, we suggest that a modified mask, such as one that covers only the glottis area, can significantly reduce patients' setup errors during treatment.

  13. Online absolute pose compensation and steering control of industrial robot based on six degrees of freedom laser measurement

    NASA Astrophysics Data System (ADS)

    Yang, Juqing; Wang, Dayong; Fan, Baixing; Dong, Dengfeng; Zhou, Weihu

    2017-03-01

    In-situ intelligent manufacturing for large-volume equipment requires industrial robots with absolute high-accuracy positioning and orientation steering control. Conventional robots mainly employ an offline calibration technology to identify and compensate key robotic parameters. However, the dynamic and static parameters of a robot change nonlinearly. It is not possible to acquire a robot's actual parameters and control the absolute pose of the robot with a high accuracy within a large workspace by offline calibration in real-time. This study proposes a real-time online absolute pose steering control method for an industrial robot based on six degrees of freedom laser tracking measurement, which adopts comprehensive compensation and correction of differential movement variables. First, the pose steering control system and robot kinematics error model are constructed, and then the pose error compensation mechanism and algorithm are introduced in detail. By accurately achieving the position and orientation of the robot end-tool, mapping the computed Jacobian matrix of the joint variable and correcting the joint variable, the real-time online absolute pose compensation for an industrial robot is accurately implemented in simulations and experimental tests. The average positioning error is 0.048 mm and orientation accuracy is better than 0.01 deg. The results demonstrate that the proposed method is feasible, and the online absolute accuracy of a robot is sufficiently enhanced.
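The correction loop described above — measure the end-tool pose, form the pose error, map it through the Jacobian, and update the joint variables — can be sketched with a hypothetical two-joint planar arm standing in for a six-DOF industrial robot (all geometry below is assumed for illustration, not the paper's robot model):

```python
import numpy as np

def fk(q, l1=1.0, l2=1.0):
    """Forward kinematics of a toy 2-link planar arm (stands in for the robot)."""
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q, l1=1.0, l2=1.0):
    """Analytic Jacobian of the toy arm's end-point position w.r.t. joint angles."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def correct_pose(q, target, iters=50, tol=1e-9):
    """Iteratively correct joint variables until the measured pose matches the target."""
    q = np.array(q, dtype=float)
    for _ in range(iters):
        err = target - fk(q)                       # measured pose error (laser tracker role)
        if np.linalg.norm(err) < tol:
            break
        q += np.linalg.pinv(jacobian(q)) @ err     # differential joint correction
    return q

q_final = correct_pose([-0.2, 1.2], np.array([1.0, 1.0]))
```

In the paper's setting the pose error comes from an online six-degrees-of-freedom laser measurement rather than from a simulated forward model.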

  14. Spinal intra-operative three-dimensional navigation with infra-red tool tracking: correlation between clinical and absolute engineering accuracy

    NASA Astrophysics Data System (ADS)

    Guha, Daipayan; Jakubovic, Raphael; Gupta, Shaurya; Yang, Victor X. D.

    2017-02-01

    Computer-assisted navigation (CAN) may guide spinal surgeries, reliably reducing screw breach rates. Definitions of screw breach, if reported, vary widely across studies. Absolute quantitative error is theoretically a more precise and generalizable metric of navigation accuracy, but has been computed variably and reported in fewer than 25% of clinical studies of CAN-guided pedicle screw accuracy. We reviewed a prospectively-collected series of 209 pedicle screws placed with CAN guidance to characterize the correlation between clinical pedicle screw accuracy, based on postoperative imaging, and absolute quantitative navigation accuracy. We found that acceptable screw accuracy was achieved for significantly fewer screws based on the 2mm grade vs. the Heary grade, particularly in the lumbar spine. Inter-rater agreement was good for the Heary classification and moderate for the 2mm grade, and was significantly greater among radiologist than surgeon raters. Mean absolute translational/angular accuracies were 1.75mm/3.13° and 1.20mm/3.64° in the axial and sagittal planes, respectively. There was no correlation between clinical and absolute navigation accuracy, in part because surgeons appear to compensate for perceived translational navigation error by adjusting the screw medialization angle. Future studies of navigation accuracy should therefore report absolute translational and angular errors. Clinical screw grades based on post-operative imaging, if reported, may be more reliable if assigned by multiple radiologist raters.

  15. Predicting Air Permeability of Handloom Fabrics: A Comparative Analysis of Regression and Artificial Neural Network Models

    NASA Astrophysics Data System (ADS)

    Mitra, Ashis; Majumdar, Prabal Kumar; Bannerjee, Debamalya

    2013-03-01

    This paper presents a comparative analysis of two modeling methodologies for the prediction of air permeability of plain woven handloom cotton fabrics. Four basic fabric constructional parameters namely ends per inch, picks per inch, warp count and weft count have been used as inputs for artificial neural network (ANN) and regression models. Out of the four regression models tried, interaction model showed very good prediction performance with a meager mean absolute error of 2.017 %. However, ANN models demonstrated superiority over the regression models both in terms of correlation coefficient and mean absolute error. The ANN model with 10 nodes in the single hidden layer showed very good correlation coefficient of 0.982 and 0.929 and mean absolute error of only 0.923 and 2.043 % for training and testing data respectively.

  16. The PMA Catalogue: 420 million positions and absolute proper motions

    NASA Astrophysics Data System (ADS)

    Akhmetov, V. S.; Fedorov, P. N.; Velichko, A. B.; Shulga, V. M.

    2017-07-01

    We present a catalogue that contains about 420 million absolute proper motions of stars. It was derived from the combination of positions from Gaia DR1 and 2MASS, with a mean difference of epochs of about 15 yr. Most of the systematic zonal errors inherent in the 2MASS Catalogue were eliminated before deriving the absolute proper motions. The absolute calibration procedure (zero-pointing of the proper motions) was carried out using about 1.6 million positions of extragalactic sources. The mean formal error of the absolute calibration is less than 0.35 mas yr-1. The derived proper motions cover the whole celestial sphere without gaps for a range of stellar magnitudes from 8 to 21 mag. In the sky areas where the extragalactic sources are invisible (the avoidance zone), a dedicated procedure was used that transforms the relative proper motions into absolute ones. The rms error of proper motions depends on stellar magnitude and ranges from 2-5 mas yr-1 for stars with 10 mag < G < 17 mag to 5-10 mas yr-1 for faint ones. The present catalogue contains the Gaia DR1 positions of stars for the J2015 epoch. The system of the PMA proper motions does not depend on the systematic errors of the 2MASS positions, and in the range from 14 to 21 mag represents an independent realization of a quasi-inertial reference frame in the optical and near-infrared wavelength range. The Catalogue also contains stellar magnitudes taken from the Gaia DR1 and 2MASS catalogues. A comparison of the PMA proper motions of stars with similar data from certain recent catalogues has been undertaken.

  17. Very Fast Estimation of Epicentral Distance and Magnitude from a Single Three Component Seismic Station Using Machine Learning Techniques

    NASA Astrophysics Data System (ADS)

    Ochoa Gutierrez, L. H.; Niño Vasquez, L. F.; Vargas-Jimenez, C. A.

    2012-12-01

    To minimize the adverse effects of high-magnitude earthquakes, early warning has become a powerful tool: it anticipates the arrival of a seismic wave at a specific location and gives people and government agencies timely information to initiate a fast response. This requires a very fast and accurate characterization of the event, but the process is usually based on seismograms recorded at four or more stations, where the processing time is often greater than the wave travel time to the area of interest, mainly in coarse networks. A faster process is possible if only one three-component seismic station is used, namely the closest unsaturated station with respect to the epicenter. Here we present a Support Vector Regression (SVR) algorithm which calculates magnitude and epicentral distance using only 5 seconds of signal after the P-wave onset. The algorithm was trained with 36 records of historical earthquakes; the inputs were the regression parameters of an exponential function fitted by least squares to the waveform envelope, plus the maximum value of the observed waveform, for each component of the single station. Ten-fold cross-validation with a normalized polynomial kernel was applied, and the mean absolute error was computed for different exponents and complexity parameters. Magnitude could be estimated with a mean absolute error of 0.16, and distance with an error of 7.5 km for distances from 60 to 120 km. This kind of algorithm is easy to implement in hardware and can be used directly at the field station, making it possible to broadcast these estimates and support fast decisions at seismological control centers, increasing the possibility of an effective reaction.
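The feature-extraction step (least-squares regression parameters of an exponential envelope function) can be sketched as follows. The specific envelope form y(t) = a·t·exp(−b·t) is an assumption for illustration, since the record does not fully specify it; taking log(y/t) makes the fit linear:

```python
import numpy as np

# Hypothetical envelope model y(t) = a * t * exp(-b * t). Taking
# log(y / t) = log(a) - b * t turns the fit into ordinary linear least
# squares; (a, b) would then be two of the SVR input features.
def fit_envelope(t, y):
    A = np.column_stack([np.ones_like(t), -t])        # design matrix for [log a, b]
    coef, *_ = np.linalg.lstsq(A, np.log(y / t), rcond=None)
    return np.exp(coef[0]), coef[1]                   # a, b

t = np.linspace(0.01, 5.0, 200)                       # 5 s window after P onset
y = 2.0 * t * np.exp(-1.5 * t)                        # synthetic noiseless envelope
a, b = fit_envelope(t, y)
```

With real waveforms the envelope would first be extracted (e.g. by rectification and smoothing) before the least-squares fit.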

  18. A hybrid SVM-FFA method for prediction of monthly mean global solar radiation

    NASA Astrophysics Data System (ADS)

    Shamshirband, Shahaboddin; Mohammadi, Kasra; Tong, Chong Wen; Zamani, Mazdak; Motamedi, Shervin; Ch, Sudheer

    2016-07-01

    In this study, a hybrid support vector machine-firefly optimization algorithm (SVM-FFA) model is proposed to estimate monthly mean horizontal global solar radiation (HGSR). The merit of SVM-FFA is assessed statistically by comparing its performance with three previously used approaches. Using each approach and long-term measured HGSR, three models are calibrated by considering different sets of meteorological parameters measured for Bandar Abbass, situated in Iran. It is found that model (3), utilizing the combination of relative sunshine duration, difference between maximum and minimum temperatures, relative humidity, water vapor pressure, average temperature, and extraterrestrial solar radiation, shows superior performance under all approaches. Moreover, the extraterrestrial radiation is introduced as a significant parameter for accurately estimating the global solar radiation. The survey results reveal that the developed SVM-FFA approach is highly capable of providing favorable predictions with significantly higher precision than the other examined techniques. For SVM-FFA (3), the statistical indicators of mean absolute percentage error (MAPE), root mean square error (RMSE), relative root mean square error (RRMSE), and coefficient of determination (R²) are 3.3252 %, 0.1859 kWh/m², 3.7350 %, and 0.9737, respectively, which according to the RRMSE criterion indicates excellent performance. As a further evaluation of SVM-FFA (3), the ratio of estimated to measured values is computed; 47 of the 48 months considered as testing data fall between 0.90 and 1.10. A further verification also shows that SVM-FFA (3) offers absolute superiority over the empirical models using relatively similar input parameters. In a nutshell, the hybrid SVM-FFA approach is highly efficient for estimating the HGSR.
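As a hedged sketch of the statistical indicators used above (standard textbook definitions assumed, not the authors' code), with illustrative data:

```python
import numpy as np

def solar_metrics(measured, estimated):
    """MAPE, RMSE, RRMSE and R² between measured and estimated series."""
    m, e = np.asarray(measured, float), np.asarray(estimated, float)
    mape = 100.0 * np.mean(np.abs((m - e) / m))   # mean absolute percentage error (%)
    rmse = np.sqrt(np.mean((m - e) ** 2))         # same units as the data
    rrmse = 100.0 * rmse / m.mean()               # RMSE relative to the mean (%)
    ss_res = np.sum((m - e) ** 2)
    ss_tot = np.sum((m - m.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                    # coefficient of determination
    return mape, rmse, rrmse, r2

# Illustrative values (kWh/m²), not the study data:
mape, rmse, rrmse, r2 = solar_metrics([5.0, 6.0, 7.0], [5.1, 5.9, 7.2])
```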

  19. Error Analysis of Wind Measurements for the University of Illinois Sodium Doppler Temperature System

    NASA Technical Reports Server (NTRS)

    Pfenninger, W. Matthew; Papen, George C.

    1992-01-01

    Four-frequency lidar measurements of temperature and wind velocity require accurate frequency tuning to an absolute reference and long-term frequency stability. We quantify frequency tuning errors for the Illinois sodium system, which uses a sodium vapor cell to measure absolute frequencies and a reference interferometer to measure relative frequencies. To determine laser tuning errors, we monitor the vapor cell and interferometer during lidar data acquisition and analyze the two signals for variations as functions of time. Both the sodium cell and the interferometer are the same as those used to frequency tune the laser. By quantifying the frequency variations of the laser during data acquisition, an error analysis of the temperature and wind measurements can be performed. These error bounds determine the confidence in the calculated temperatures and wind velocities.

  20. A distance-independent calibration of the luminosity of type Ia supernovae and the Hubble constant

    NASA Technical Reports Server (NTRS)

    Leibundgut, Bruno; Pinto, Philip A.

    1992-01-01

    The absolute magnitude of SNe Ia at maximum is calibrated here using radioactive decay models for the light curve and a minimum of assumptions. The absolute magnitude parameter space is studied using explosion models and a range of rise times, and absolute B magnitudes at maximum are used to derive a range of H0 and the distance to the Virgo Cluster from SNe Ia. Rigorous limits on H0 of 45 and 105 km/s/Mpc are derived.

  1. A simplified model of a mechanical cooling tower with both a fill pack and a coil

    NASA Astrophysics Data System (ADS)

    Van Riet, Freek; Steenackers, Gunther; Verhaert, Ivan

    2017-11-01

    Cooling accounts for a large share of the global primary energy consumption in buildings and industrial processes. A substantial part of this cooling demand is produced by mechanical cooling towers. Simulations benefit the sizing and integration of cooling towers in overall cooling networks; however, these simulations require fast-to-calculate and easy-to-parametrize models. In this paper, a new model is developed for a mechanical draught cooling tower with both a cooling coil and a fill pack. The model needs manufacturers' performance data at only three operational states (at varying air and water flow rates) to be parametrized. The model predicts the cooled, outgoing water temperature. These predictions were compared with experimental data for a wide range of operational states. The model was able to predict the temperature with a maximum absolute error of 0.59°C. The relative error of the cooling capacity was mostly within ±5%.

  2. Near-lossless multichannel EEG compression based on matrix and tensor decompositions.

    PubMed

    Dauwels, Justin; Srinivasan, K; Reddy, M Ramasubba; Cichocki, Andrzej

    2013-05-01

    A novel near-lossless compression algorithm for multichannel electroencephalogram (MC-EEG) is proposed based on matrix/tensor decomposition models. MC-EEG is represented in suitable multiway (multidimensional) forms to efficiently exploit temporal and spatial correlations simultaneously. Several matrix/tensor decomposition models are analyzed in view of efficient decorrelation of the multiway forms of MC-EEG. A compression algorithm is built based on the principle of “lossy plus residual coding,” consisting of a matrix/tensor decomposition-based coder in the lossy layer followed by arithmetic coding in the residual layer. This approach guarantees a specifiable maximum absolute error between original and reconstructed signals. The compression algorithm is applied to three different scalp EEG datasets and an intracranial EEG dataset, each with different sampling rate and resolution. The proposed algorithm achieves attractive compression ratios compared to compressing individual channels separately. For similar compression ratios, the proposed algorithm achieves nearly fivefold lower average error compared to a similar wavelet-based volumetric MC-EEG compression algorithm.
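The "lossy plus residual coding" guarantee can be sketched in a few lines: whatever the lossy layer produces, uniformly quantizing the residual with step 2ε bounds the reconstruction error by ε everywhere. In this illustrative sketch a per-channel mean stands in for the matrix/tensor decomposition coder of the paper:

```python
import numpy as np

def near_lossless(x, eps):
    """Lossy layer + uniformly quantized residual: |x - x_rec| <= eps everywhere."""
    lossy = x.mean(axis=1, keepdims=True) * np.ones_like(x)  # stand-in for the SVD/tensor coder
    residual = x - lossy
    q = np.round(residual / (2.0 * eps))   # integer symbols sent to the arithmetic coder
    x_rec = lossy + q * 2.0 * eps          # reconstruction at the decoder
    return x_rec

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 256))              # toy multichannel signal
x_rec = near_lossless(x, eps=0.01)
```

The better the lossy layer decorrelates the channels, the smaller the residual entropy and the higher the compression ratio, but the error bound holds regardless.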

  3. Predictability of the Arctic sea ice edge

    NASA Astrophysics Data System (ADS)

    Goessling, H. F.; Tietsche, S.; Day, J. J.; Hawkins, E.; Jung, T.

    2016-02-01

    Skillful sea ice forecasts from days to years ahead are becoming increasingly important for the operation and planning of human activities in the Arctic. Here we analyze the potential predictability of the Arctic sea ice edge in six climate models. We introduce the integrated ice-edge error (IIEE), a user-relevant verification metric defined as the area where the forecast and the "truth" disagree on the ice concentration being above or below 15%. The IIEE lends itself to decomposition into an absolute extent error, corresponding to the common sea ice extent error, and a misplacement error. We find that the often-neglected misplacement error makes up more than half of the climatological IIEE. In idealized forecast ensembles initialized on 1 July, the IIEE grows faster than the absolute extent error. This means that the Arctic sea ice edge is less predictable than sea ice extent, particularly in September, with implications for the potential skill of end-user relevant forecasts.
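Under the simplifying assumption of equal-area grid cells, the IIEE and its decomposition into absolute extent error and misplacement error can be sketched as:

```python
import numpy as np

def iiee_decomposition(forecast_conc, truth_conc, cell_area=1.0):
    """Integrated ice-edge error and its split into extent and misplacement parts."""
    f = forecast_conc >= 0.15              # forecast says "ice" (concentration above 15%)
    t = truth_conc >= 0.15                 # truth says "ice"
    over = np.sum(f & ~t) * cell_area      # forecast ice where the truth has none
    under = np.sum(~f & t) * cell_area     # truth ice the forecast misses
    iiee = over + under                    # total area of disagreement
    aee = abs(over - under)                # common (absolute) sea ice extent error
    misplacement = iiee - aee              # equals 2 * min(over, under)
    return iiee, aee, misplacement

# Toy concentration fields: the extent matches, but the ice is misplaced.
f = np.array([[0.9, 0.2, 0.0], [0.5, 0.0, 0.0]])
t = np.array([[0.9, 0.0, 0.3], [0.0, 0.0, 0.4]])
iiee, aee, misplacement = iiee_decomposition(f, t)
```

The toy example illustrates the paper's point: the plain extent error can be zero while the IIEE, driven entirely by misplacement, is large.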

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ellefson, S; Department of Human Oncology, University of Wisconsin, Madison, WI; Culberson, W

    Purpose: Discrepancies in absolute dose values have been detected between the ViewRay treatment planning system and ArcCHECK readings when performing delivery quality assurance on the ViewRay system with the ArcCHECK-MR diode array (SunNuclear Corporation). In this work, we investigate whether these discrepancies are due to errors in the ViewRay planning and/or delivery system or due to errors in the ArcCHECK’s readings. Methods: Gamma analysis was performed on 19 ViewRay patient plans using the ArcCHECK. Frequency analysis on the dose differences was performed. To investigate whether discrepancies were due to measurement or delivery error, 10 diodes in low-gradient dose regions were chosen to compare with ion chamber measurements in a PMMA phantom with the same size and shape as the ArcCHECK, provided by SunNuclear. The diodes chosen all had significant discrepancies in absolute dose values compared to the ViewRay TPS. Absolute doses to PMMA were compared between the ViewRay TPS calculations, ArcCHECK measurements, and measurements in the PMMA phantom. Results: Three of the 19 patient plans had 3%/3mm gamma passing rates less than 95%, and ten of the 19 plans had 2%/2mm passing rates less than 95%. Frequency analysis implied a non-random error process. Out of the 10 diode locations measured, ion chamber measurements were all within 2.2% error relative to the TPS and had a mean error of 1.2%. ArcCHECK measurements ranged from 4.5% to over 15% error relative to the TPS and had a mean error of 8.0%. Conclusion: The ArcCHECK performs well for quality assurance on the ViewRay under most circumstances. However, under certain conditions the absolute dose readings are significantly higher compared to the planned doses. As the ion chamber measurements consistently agree with the TPS, it can be concluded that the discrepancies are due to ArcCHECK measurement error and not TPS or delivery system error.
    This work was funded by the Bhudatt Paliwal Professorship and the University of Wisconsin Medical Radiation Research Center.

  5. Online pretreatment verification of high-dose rate brachytherapy using an imaging panel

    NASA Astrophysics Data System (ADS)

    Fonseca, Gabriel P.; Podesta, Mark; Bellezzo, Murillo; Van den Bosch, Michiel R.; Lutgens, Ludy; Vanneste, Ben G. L.; Voncken, Robert; Van Limbergen, Evert J.; Reniers, Brigitte; Verhaegen, Frank

    2017-07-01

    Brachytherapy is employed to treat a wide variety of cancers. However, an accurate treatment verification method is currently not available. This study describes a pre-treatment verification system that uses an imaging panel (IP) to verify important aspects of the treatment plan. A detailed modelling of the IP was only possible with an extensive calibration performed using a robotic arm. Irradiations were performed with a high dose rate (HDR) 192Ir source within a water phantom. An empirical fit was applied to measure the distance between the source and the detector, so that 3D Cartesian coordinates of the dwell positions can be obtained using a single panel. The IP acquires images at 7.14 fps to verify the dwell times, dwell positions and air kerma strength (Sk). A gynecological applicator was used to create a treatment plan that was registered with a CT image of the water phantom used during the experiments for verification purposes. Errors (shifts, exchanged connections and wrong dwell times) were simulated to verify the proposed verification system. Cartesian source positions (panel measurement plane) have a standard deviation of about 0.02 cm. The measured distance between the source and the panel (z-coordinate) has a standard deviation of up to 0.16 cm and a maximum absolute error of ≈0.6 cm if the signal is close to the sensitivity limit of the panel. The average response of the panel is highly linear with Sk; therefore, Sk measurements can be performed with relatively small errors. The measured dwell times show a maximum error of 0.2 s, which is consistent with the acquisition rate of the panel. All simulated errors were clearly identified by the proposed system. The use of IPs is not common in brachytherapy; however, they provide considerable advantages. It was demonstrated that the IP can accurately measure Sk, dwell times and dwell positions.

  6. Online pretreatment verification of high-dose rate brachytherapy using an imaging panel.

    PubMed

    Fonseca, Gabriel P; Podesta, Mark; Bellezzo, Murillo; Van den Bosch, Michiel R; Lutgens, Ludy; Vanneste, Ben G L; Voncken, Robert; Van Limbergen, Evert J; Reniers, Brigitte; Verhaegen, Frank

    2017-07-07

    Brachytherapy is employed to treat a wide variety of cancers. However, an accurate treatment verification method is currently not available. This study describes a pre-treatment verification system that uses an imaging panel (IP) to verify important aspects of the treatment plan. A detailed modelling of the IP was only possible with an extensive calibration performed using a robotic arm. Irradiations were performed with a high dose rate (HDR) 192Ir source within a water phantom. An empirical fit was applied to measure the distance between the source and the detector, so that 3D Cartesian coordinates of the dwell positions can be obtained using a single panel. The IP acquires images at 7.14 fps to verify the dwell times, dwell positions and air kerma strength (Sk). A gynecological applicator was used to create a treatment plan that was registered with a CT image of the water phantom used during the experiments for verification purposes. Errors (shifts, exchanged connections and wrong dwell times) were simulated to verify the proposed verification system. Cartesian source positions (panel measurement plane) have a standard deviation of about 0.02 cm. The measured distance between the source and the panel (z-coordinate) has a standard deviation of up to 0.16 cm and a maximum absolute error of ≈0.6 cm if the signal is close to the sensitivity limit of the panel. The average response of the panel is highly linear with Sk; therefore, Sk measurements can be performed with relatively small errors. The measured dwell times show a maximum error of 0.2 s, which is consistent with the acquisition rate of the panel. All simulated errors were clearly identified by the proposed system. The use of IPs is not common in brachytherapy; however, they provide considerable advantages. It was demonstrated that the IP can accurately measure Sk, dwell times and dwell positions.

  7. Variable selection for zero-inflated and overdispersed data with application to health care demand in Germany

    PubMed Central

    Wang, Zhu; Shuangge, Ma; Wang, Ching-Yun

    2017-01-01

    In health services and outcome research, count outcomes are frequently encountered and often have a large proportion of zeros. The zero-inflated negative binomial (ZINB) regression model has important applications for this type of data. With many possible candidate risk factors, this paper proposes new variable selection methods for the ZINB model. We consider the maximum likelihood function plus a penalty, including the least absolute shrinkage and selection operator (LASSO), the smoothly clipped absolute deviation (SCAD) and the minimax concave penalty (MCP). An EM (expectation-maximization) algorithm is proposed for estimating the model parameters and conducting variable selection simultaneously. This algorithm consists of estimating penalized weighted negative binomial models and penalized logistic models via the coordinate descent algorithm. Furthermore, statistical properties, including the standard error formulae, are provided. A simulation study shows that the new algorithm not only gives more accurate, or at least comparable, estimation, but is also more robust than traditional stepwise variable selection. The proposed methods are applied to analyze the health care demand in Germany using the open-source R package mpath. PMID:26059498
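For reference, the SCAD penalty mentioned above has a standard closed form (Fan and Li's formulation, with the conventional a = 3.7). This sketch is only a definition of the penalty, not the paper's mpath implementation:

```python
import numpy as np

def scad_penalty(t, lam, a=3.7):
    """SCAD penalty: linear near zero, quadratic taper, then constant."""
    t = np.abs(np.asarray(t, dtype=float))
    return np.where(
        t <= lam,
        lam * t,                                              # LASSO-like region
        np.where(
            t <= a * lam,
            (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1)),  # quadratic taper
            lam**2 * (a + 1) / 2,                             # flat: no bias for large effects
        ),
    )
```

The flat tail is what distinguishes SCAD (and MCP) from LASSO: large coefficients are not shrunk, reducing estimation bias.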

  8. A New Method of Using Sensor Arrays for Gas Leakage Location Based on Correlation of the Time-Space Domain of Continuous Ultrasound

    PubMed Central

    Bian, Xu; Zhang, Yu; Li, Yibo; Gong, Xiaoyue; Jin, Shijiu

    2015-01-01

    This paper proposes a time-space domain correlation-based method for gas leakage detection and location. It acquires the propagated signal on the skin of the plate by using a piezoelectric acoustic emission (AE) sensor array. The signal generated from the gas leakage hole (whose diameter is less than 2 mm) is continuous in time. By collecting and analyzing signals from different sensor positions in the array, the correlation among those signals in the time-space domain can be obtained. Then, the directional relationship between the sensor array and the leakage source can be calculated. The method successfully solves the real-time orientation problem for continuous ultrasonic signals generated by leakage sources (about 15 s per orientation), and acquires high-accuracy location information for leakage sources by combining multiple sets of orientation results. According to the experimental results, the mean value of the location absolute error is 5.83 mm on a one square meter plate, and the maximum location error is generally within a ±10 mm interval. Meanwhile, the error variance is less than 20.17. PMID:25860070
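The core time-domain correlation idea — estimating the arrival-time difference of the continuous leak signal between two sensors — can be sketched with synthetic signals (this is an illustration of the principle, not the authors' implementation):

```python
import numpy as np

def arrival_lag(sig_a, sig_b):
    """Estimate the sample lag of sig_b relative to sig_a via cross-correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")   # full cross-correlation
    return np.argmax(corr) - (len(sig_a) - 1)        # peak offset -> lag in samples

rng = np.random.default_rng(1)
src = rng.normal(size=500)                   # continuous, noise-like leak signal
a = np.concatenate([src, np.zeros(40)])      # sensor A hears the source first
b = np.concatenate([np.zeros(40), src])      # sensor B hears it 40 samples later
lag = arrival_lag(a, b)
```

Lags between sensor pairs, combined with the known array geometry, give the direction to the source; intersecting directions from multiple orientations localizes it.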

  9. A new method of using sensor arrays for gas leakage location based on correlation of the time-space domain of continuous ultrasound.

    PubMed

    Bian, Xu; Zhang, Yu; Li, Yibo; Gong, Xiaoyue; Jin, Shijiu

    2015-04-09

    This paper proposes a time-space domain correlation-based method for gas leakage detection and location. It acquires the propagated signal on the skin of the plate by using a piezoelectric acoustic emission (AE) sensor array. The signal generated from the gas leakage hole (whose diameter is less than 2 mm) is continuous in time. By collecting and analyzing signals from different sensor positions in the array, the correlation among those signals in the time-space domain can be obtained. Then, the directional relationship between the sensor array and the leakage source can be calculated. The method successfully solves the real-time orientation problem for continuous ultrasonic signals generated by leakage sources (about 15 s per orientation), and acquires high-accuracy location information for leakage sources by combining multiple sets of orientation results. According to the experimental results, the mean value of the location absolute error is 5.83 mm on a one square meter plate, and the maximum location error is generally within a ±10 mm interval. Meanwhile, the error variance is less than 20.17.

  10. Stable Numerical Approach for Fractional Delay Differential Equations

    NASA Astrophysics Data System (ADS)

    Singh, Harendra; Pandey, Rajesh K.; Baleanu, D.

    2017-12-01

    In this paper, we present a new stable numerical approach based on the operational matrix of integration of Jacobi polynomials for solving fractional delay differential equations (FDDEs). The operational matrix approach converts the FDDE into a system of linear equations, and hence the numerical solution is obtained by solving the linear system. The error analysis of the proposed method is also established. Further, a comparative study of the approximate solutions is provided for the test examples of the FDDE by varying the values of the parameters in the Jacobi polynomials. As special cases, the Jacobi polynomials reduce to well-known polynomials: (1) the Legendre polynomials, (2) the Chebyshev polynomials of the second kind, (3) the Chebyshev polynomials of the third kind and (4) the Chebyshev polynomials of the fourth kind, respectively. The maximum absolute error and the root mean square error are calculated for the illustrated examples and presented in the form of tables for comparison. The numerical stability of the presented method with respect to all four kinds of polynomials is discussed. Further, the obtained numerical results are compared with some known methods from the literature, and it is observed that the results from the proposed method are better than those of these methods.

  11. Systematic errors of EIT systems determined by easily-scalable resistive phantoms.

    PubMed

    Hahn, G; Just, A; Dittmar, J; Hellige, G

    2008-06-01

    We present a simple method to determine systematic errors that will occur in the measurements by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16 electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurements and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal to noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise since the Goe-MF II system had been optimized for a sufficient signal to noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT) systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT) where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design.

  12. Smart Multi-Frequency Bioelectrical Impedance Spectrometer for BIA and BIVA Applications.

    PubMed

    Harder, Rene; Diedrich, Andre; Whitfield, Jonathan S; Buchowski, Macie S; Pietsch, John B; Baudenbacher, Franz J

    2016-08-01

    Bioelectrical impedance analysis (BIA) is a noninvasive and commonly used method for the assessment of body composition including body water. We designed a small, portable and wireless multi-frequency impedance spectrometer based on the 12-bit impedance network analyzer AD5933 and a precision wide-band constant current source for tetrapolar whole body impedance measurements. The impedance spectrometer communicates via Bluetooth with mobile devices (smartphone or tablet computer) that provide a user interface for patient management and data visualization. The export of patient measurement results into a clinical research database facilitates the aggregation of bioelectrical impedance analysis and bioelectrical impedance vector analysis (BIVA) data across multiple subjects and/or studies. The performance of the spectrometer was evaluated using a passive tissue-equivalent circuit model as well as a comparison of body composition changes assessed with bioelectrical impedance and dual-energy X-ray absorptiometry (DXA) in healthy volunteers. Our results show an absolute error of 1% for resistance and 5% for reactance measurements in the frequency range of 3 kHz to 150 kHz. A linear regression of BIA and DXA fat mass estimations showed a strong correlation (r² = 0.985) between measures with a maximum absolute error of 6.5%. The simplicity of BIA measurements, a cost-effective design and the simple visual representation of impedance data enable patients to compare and determine body composition during the time course of a specific treatment plan in a clinical or home environment.

  13. Predicting clicks of PubMed articles.

    PubMed

    Mao, Yuqing; Lu, Zhiyong

    2013-01-01

    Predicting the popularity or access usage of an article has the potential to improve the quality of PubMed searches. We can model the click trend of each article as its access changes over time by mining the PubMed query logs, which contain the previous access history for all articles. In this article, we examine the access patterns produced by PubMed users in two years (July 2009 to July 2011). We explore the time series of accesses for each article in the query logs, model the trends with regression approaches, and subsequently use the models for prediction. We show that the click trends of PubMed articles are best fitted with a log-normal regression model. This model allows the number of accesses an article receives and the time since it first becomes available in PubMed to be related via quadratic and logistic functions, with the model parameters to be estimated via maximum likelihood. Our experiments predicting the number of accesses for an article based on its past usage demonstrate that the mean absolute error and mean absolute percentage error of our model are 4.0% and 8.1% lower than the power-law regression model, respectively. The log-normal distribution is also shown to perform significantly better than a previous prediction method based on a human memory theory in cognitive science. This work warrants further investigation on the utility of such a log-normal regression approach towards improving information access in PubMed.
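
    A log-normal-shaped trend can be fitted by regressing log-accesses on a quadratic in log-time; a sketch on synthetic data (the query-log data and the paper's exact parameterization are not reproduced here):

```python
import numpy as np

# Synthetic illustration: regress log(accesses) on a quadratic in log(time),
# which traces out a log-normal-shaped trend curve (parameters are made up).
rng = np.random.default_rng(0)
t = np.arange(1.0, 101.0)                      # days since the article appeared
true_log = 3.0 - 0.1 * (np.log(t) - 2.0) ** 2  # underlying trend (hypothetical)
clicks = np.exp(true_log) + rng.normal(0.0, 0.1, t.size)

coeffs = np.polyfit(np.log(t), np.log(clicks), deg=2)  # [c2, c1, c0]
fitted = np.exp(np.polyval(coeffs, np.log(t)))

mae = float(np.mean(np.abs(fitted - clicks)))
mape = float(np.mean(np.abs(fitted - clicks) / clicks) * 100.0)
print(f"MAE={mae:.3f}  MAPE={mape:.2f}%")
```

    The fitted quadratic coefficient in log-time is negative, giving the concave, log-normal-like rise and decay of accesses.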

  14. Predicting clicks of PubMed articles

    PubMed Central

    Mao, Yuqing; Lu, Zhiyong

    2013-01-01

    Predicting the popularity or access usage of an article has the potential to improve the quality of PubMed searches. We can model the click trend of each article as its access changes over time by mining the PubMed query logs, which contain the previous access history for all articles. In this article, we examine the access patterns produced by PubMed users in two years (July 2009 to July 2011). We explore the time series of accesses for each article in the query logs, model the trends with regression approaches, and subsequently use the models for prediction. We show that the click trends of PubMed articles are best fitted with a log-normal regression model. This model allows the number of accesses an article receives and the time since it first becomes available in PubMed to be related via quadratic and logistic functions, with the model parameters to be estimated via maximum likelihood. Our experiments predicting the number of accesses for an article based on its past usage demonstrate that the mean absolute error and mean absolute percentage error of our model are 4.0% and 8.1% lower than the power-law regression model, respectively. The log-normal distribution is also shown to perform significantly better than a previous prediction method based on a human memory theory in cognitive science. This work warrants further investigation on the utility of such a log-normal regression approach towards improving information access in PubMed. PMID:24551386

  15. Fault Identification Based on NLPCA in Complex Electrical Engineering

    NASA Astrophysics Data System (ADS)

    Zhang, Yagang; Wang, Zengping; Zhang, Jinfang

    2012-07-01

    Faults are inevitable in any complex systems engineering. The electric power system is an essentially nonlinear system and one of the most complex artificial systems in the world. In our research, based on real-time measurements from phasor measurement units, under the influence of white Gaussian noise (assumed standard deviation 0.01 and zero mean error), we mainly used nonlinear principal component analysis (NLPCA) to resolve the fault identification problem in complex electrical engineering. The simulation results show that a fault in complex electrical engineering usually corresponds to the variable with the maximum absolute value coefficient in the first principal component. These results have significant theoretical value and practical engineering significance.
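
    The identification rule, taking the variable with the largest absolute coefficient (loading) in the first principal component, can be illustrated with linear PCA on simulated measurements (NLPCA itself and the power-system data are not reproduced here):

```python
import numpy as np

# Toy illustration: the variable with the largest absolute loading in the
# first principal component flags the (simulated) faulted measurement channel.
rng = np.random.default_rng(1)
n_samples, n_vars, fault_var = 200, 6, 3
X = rng.normal(0.0, 0.01, size=(n_samples, n_vars))  # white Gaussian noise, sd 0.01
X[:, fault_var] += np.linspace(0.0, 1.0, n_samples)  # injected fault trend

Xc = X - X.mean(axis=0)                  # center before PCA
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
first_pc = Vt[0]                         # loadings of the first principal component
print(int(np.argmax(np.abs(first_pc))))  # → 3 (the faulted variable)
```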

  16. Wideband Arrhythmia-Insensitive-Rapid (AIR) Pulse Sequence for Cardiac T1 mapping without Image Artifacts induced by ICD

    PubMed Central

    Hong, KyungPyo; Jeong, Eun-Kee; Wall, T. Scott; Drakos, Stavros G.; Kim, Daniel

    2015-01-01

    Purpose To develop and evaluate a wideband arrhythmia-insensitive-rapid (AIR) pulse sequence for cardiac T1 mapping without image artifacts induced by implantable-cardioverter-defibrillator (ICD). Methods We developed a wideband AIR pulse sequence by incorporating a saturation pulse with wide frequency bandwidth (8.9 kHz), in order to achieve uniform T1 weighting in the heart with ICD. We tested the performance of original and “wideband” AIR cardiac T1 mapping pulse sequences in phantom and human experiments at 1.5T. Results In 5 phantoms representing native myocardium and blood and post-contrast blood/tissue T1 values, compared with the control T1 values measured with an inversion-recovery pulse sequence without ICD, T1 values measured with original AIR with ICD were considerably lower (absolute percent error >29%), whereas T1 values measured with wideband AIR with ICD were similar (absolute percent error <5%). Similarly, in 11 human subjects, compared with the control T1 values measured with original AIR without ICD, T1 measured with original AIR with ICD was significantly lower (absolute percent error >10.1%), whereas T1 measured with wideband AIR with ICD was similar (absolute percent error <2.0%). Conclusion This study demonstrates the feasibility of a wideband pulse sequence for cardiac T1 mapping without significant image artifacts induced by ICD. PMID:25975192

  17. Absolute color scale for improved diagnostics with wavefront error mapping.

    PubMed

    Smolek, Michael K; Klyce, Stephen D

    2007-11-01

    Wavefront data are expressed in micrometers and referenced to the pupil plane, but current methods to map wavefront error lack standardization. Many use normalized or floating scales that may confuse the user by generating ambiguous, noisy, or varying information. An absolute scale that combines consistent clinical information with statistical relevance is needed for wavefront error mapping. The color contours should correspond better to current corneal topography standards to improve clinical interpretation. Retrospective analysis of wavefront error data. Historic ophthalmic medical records. Topographic modeling system topographical examinations of 120 corneas across 12 categories were used. Corneal wavefront error data in micrometers from each topography map were extracted at 8 Zernike polynomial orders and for 3 pupil diameters expressed in millimeters (3, 5, and 7 mm). Both total aberrations (orders 2 through 8) and higher-order aberrations (orders 3 through 8) were expressed in the form of frequency histograms to determine the working range of the scale across all categories. The standard deviation of the mean error of normal corneas determined the map contour resolution. Map colors were based on corneal topography color standards and on the ability to distinguish adjacent color contours through contrast. Higher-order and total wavefront error contour maps for different corneal conditions. An absolute color scale was produced that encompassed a range of ±6.5 μm and a contour interval of 0.5 μm. All aberrations in the categorical database were plotted with no loss of clinical information necessary for classification. In the few instances where mapped information was beyond the range of the scale, the type and severity of aberration remained legible.
When wavefront data are expressed in micrometers, this absolute scale facilitates the determination of the severity of aberrations present compared with a floating scale, particularly for distinguishing normal from abnormal levels of wavefront error. The new color palette makes it easier to identify disorders. The corneal mapping method can be extended to mapping whole eye wavefront errors. When refraction data are expressed in diopters, the previously published corneal topography scale is suggested.

  18. Reliability study of biometrics "do not contact" in myopia.

    PubMed

    Migliorini, R; Fratipietro, M; Comberiati, A M; Pattavina, L; Arrico, L

    The aim of the study is a comparison between the refractive condition of the eye actually achieved after surgery and the expected refractive condition as calculated with a biometer. The study was conducted in a random group of 38 eyes of patients undergoing surgery by phacoemulsification. The mean absolute error between the values predicted from the optical biometer measurements and those obtained post-operatively was around 0.47%. Our results are not far from those reported in the literature; the mean absolute error, at 0.47 ± 0.11 (SEM), is among the lowest reported values.

  19. Automated estimation of abdominal effective diameter for body size normalization of CT dose.

    PubMed

    Cheng, Phillip M

    2013-06-01

    Most CT dose data aggregation methods do not currently adjust dose values for patient size. This work proposes a simple heuristic for reliably computing an effective diameter of a patient from an abdominal CT image. Evaluation of this method on 106 patients scanned on Philips Brilliance 64 and Brilliance Big Bore scanners demonstrates close correspondence between computed and manually measured patient effective diameters, with a mean absolute error of 1.0 cm (error range +2.2 to -0.4 cm). This level of correspondence was also demonstrated for 60 patients on Siemens, General Electric, and Toshiba scanners. A calculated effective diameter in the middle slice of an abdominal CT study was found to be a close approximation of the mean calculated effective diameter for the study, with a mean absolute error of approximately 1.0 cm (error range +3.5 to -2.2 cm). Furthermore, the mean absolute error for an adjusted mean volume computed tomography dose index (CTDIvol) using a mid-study calculated effective diameter, versus a mean per-slice adjusted CTDIvol based on the calculated effective diameter of each slice, was 0.59 mGy (error range 1.64 to -3.12 mGy). These results are used to calculate approximate normalized dose length product values in an abdominal CT dose database of 12,506 studies.
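
    A common definition of effective diameter (used, e.g., in CT size-specific dose estimates) is the diameter of a circle with the same cross-sectional area as the patient; the paper's exact heuristic may differ, so treat this as an illustrative sketch on a synthetic image:

```python
import numpy as np

def effective_diameter_from_mask(mask, pixel_spacing_cm):
    """Effective diameter (cm): diameter of the circle whose area equals the
    segmented patient cross-section (a common definition; the paper's exact
    heuristic may differ)."""
    area_cm2 = float(mask.sum()) * pixel_spacing_cm ** 2
    return 2.0 * np.sqrt(area_cm2 / np.pi)

# Synthetic circular "patient": radius 50 px at 0.1 cm/px, i.e. ~10 cm diameter
yy, xx = np.mgrid[:512, :512]
mask = (yy - 256) ** 2 + (xx - 256) ** 2 <= 50 ** 2
d = effective_diameter_from_mask(mask, 0.1)
print(round(d, 2))
```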

  20. Ciliates learn to diagnose and correct classical error syndromes in mating strategies

    PubMed Central

    Clark, Kevin B.

    2013-01-01

    Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by “rivals” and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell–cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via “power” or “refrigeration” cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and non-modal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. 
Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in social contexts. PMID:23966987

  1. SU-C-9A-01: Parameter Optimization in Adaptive Region-Growing for Tumor Segmentation in PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, S; Huazhong University of Science and Technology, Wuhan, Hubei; Xue, M

    Purpose: To design a reliable method to determine the optimal parameter in the adaptive region-growing (ARG) algorithm for tumor segmentation in PET. Methods: The ARG uses an adaptive similarity criterion m − fσ ≤ I_PET ≤ m + fσ, so that a neighboring voxel is appended to the region based on its similarity to the current region. As the relaxing factor f (f ≥ 0) increases, the resulting volume increases monotonically, with a sharp jump when the region grows into the background. The optimal f that separates the tumor from the background is defined as the first point with the local maximum curvature on an Error function fitted to the f-volume curve. The ARG was tested on a tumor segmentation benchmark that includes ten lung cancer patients with 3D pathologic tumor volume as ground truth. For comparison, the widely used 42% and 50% SUVmax thresholding, Otsu optimal thresholding, active contours (AC), geodesic active contours (GAC), and graph cuts (GC) methods were tested. The Dice similarity index (DSI), volume error (VE), and maximum axis length error (MALE) were calculated to evaluate segmentation accuracy. Results: The ARG provided the highest accuracy among all tested methods. Specifically, the ARG has an average DSI, VE, and MALE of 0.71, 0.29, and 0.16, respectively, better than the absolute 42% thresholding (DSI=0.67, VE=0.57, and MALE=0.23), the relative 42% thresholding (DSI=0.62, VE=0.41, and MALE=0.23), the absolute 50% thresholding (DSI=0.62, VE=0.48, and MALE=0.21), the relative 50% thresholding (DSI=0.48, VE=0.54, and MALE=0.26), Otsu (DSI=0.44, VE=0.63, and MALE=0.30), AC (DSI=0.46, VE=0.85, and MALE=0.47), GAC (DSI=0.40, VE=0.85, and MALE=0.46) and GC (DSI=0.66, VE=0.54, and MALE=0.21) methods. Conclusions: The results suggest that the proposed method reliably identifies the optimal relaxing factor in ARG for tumor segmentation in PET. This work was supported in part by National Cancer Institute Grant R01 CA172638. The dataset was provided by AAPM TG211.
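
    The Dice similarity index used above to score each segmentation method is simple to compute from two binary masks; a minimal sketch with toy masks:

```python
import numpy as np

def dice(seg, truth):
    """Dice similarity index: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    seg, truth = np.asarray(seg, bool), np.asarray(truth, bool)
    denom = int(seg.sum()) + int(truth.sum())
    return 2.0 * int(np.logical_and(seg, truth).sum()) / denom if denom else 1.0

seg = np.array([[0, 1, 1], [0, 1, 0]])    # toy segmentation mask
truth = np.array([[0, 1, 0], [0, 1, 1]])  # toy ground-truth mask
print(dice(seg, truth))  # → 0.6666666666666666
```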

  2. Soil and air temperatures for different habitats in Mount Rainier National Park.

    Treesearch

    Sarah E. Greene; Mark Klopsch

    1985-01-01

    This paper reports air and soil temperature data from 10 sites in Mount Rainier National Park in Washington State for 2- to 5-year periods. Data provided are monthly summaries for day and night mean air temperatures, mean minimum and maximum air temperatures, absolute minimum and maximum air temperatures, range of air temperatures, mean soil temperature, and absolute...

  3. Comparison of the Pentacam equivalent keratometry reading and IOL Master keratometry measurement in intraocular lens power calculations.

    PubMed

    Karunaratne, Nicholas

    2013-12-01

    To compare the accuracy of the Pentacam Holladay equivalent keratometry readings with the IOL Master 500 keratometry in calculating intraocular lens power. Non-randomized, prospective clinical study conducted in private practice. Forty-five consecutive normal patients undergoing cataract surgery. Forty-five consecutive patients had Pentacam equivalent keratometry readings at the 2-, 3- and 4.5-mm corneal zones and IOL Master keratometry measurements prior to cataract surgery. For each Pentacam equivalent keratometry reading zone and IOL Master measurement, the difference between the observed and expected refractive error was calculated using the Holladay 2 and Sanders, Retzlaff and Kraff theoretic (SRKT) formulas. Mean keratometric value and mean absolute refractive error. There was a statistically significant difference between the mean keratometric values of the IOL Master, Pentacam equivalent keratometry reading 2-, 3- and 4.5-mm measurements (P < 0.0001, analysis of variance). There was no statistically significant difference between the mean absolute refraction error for the IOL Master and equivalent keratometry readings 2 mm, 3 mm and 4.5 mm zones for either the Holladay 2 formula (P = 0.14) or SRKT formula (P = 0.47). The lowest mean absolute refraction error for Holladay 2 equivalent keratometry reading was the 4.5 mm zone (mean 0.25 D ± 0.17 D). The lowest mean absolute refraction error for SRKT equivalent keratometry reading was the 4.5 mm zone (mean 0.25 D ± 0.19 D). Comparing the absolute refraction error of IOL Master and Pentacam equivalent keratometry reading, best agreement was with Holladay 2 and equivalent keratometry reading 4.5 mm, with mean of the difference of 0.02 D and 95% limits of agreement of -0.35 and 0.39 D. The IOL Master keratometry and Pentacam equivalent keratometry reading were not equivalent when used only for corneal power measurements. 
However, the keratometry measurements of the IOL Master and Pentacam equivalent keratometry reading 4.5 mm may be similarly effective when used in intraocular lens power calculation formulas, following constant optimization. © 2013 Royal Australian and New Zealand College of Ophthalmologists.

  4. Estimating distribution parameters of annual maximum streamflows in Johor, Malaysia using TL-moments approach

    NASA Astrophysics Data System (ADS)

    Mat Jan, Nur Amalina; Shabri, Ani

    2017-01-01

    The TL-moments approach has been used to identify the best-fitting distributions to represent the annual series of maximum streamflow data over seven stations in Johor, Malaysia. TL-moments with different trimming values are used to estimate the parameters of the selected distributions, namely the three-parameter lognormal (LN3) and Pearson Type III (P3) distributions. The main objective of this study is to derive the TL-moments (t1, 0), t1 = 1, 2, 3, 4, methods for the LN3 and P3 distributions. The performance of TL-moments (t1, 0), t1 = 1, 2, 3, 4, was compared with L-moments through Monte Carlo simulation and streamflow data over a station in Johor, Malaysia. The absolute error is used to test the influence of the TL-moments methods on the estimated probability distribution functions. From the cases in this study, the results show that TL-moments with the four smallest values trimmed from the conceptual sample (TL-moments (4, 0)) of the LN3 distribution were the most appropriate in most of the stations for the annual maximum streamflow series in Johor, Malaysia.
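
    For orientation, ordinary (untrimmed) L-moments, the special case with zero trimming, can be estimated from probability-weighted moments; a minimal sketch (the trimmed TL-moment estimators used in the paper generalize this by dropping extreme order statistics):

```python
import numpy as np

def sample_l_moments(x):
    """First two sample L-moments via probability-weighted moments
    (Hosking). TL-moments generalize these by trimming order statistics."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    j = np.arange(1, n + 1)                        # ranks of the order statistics
    b0 = x.mean()
    b1 = float(np.sum((j - 1) / (n - 1) * x) / n)  # first probability-weighted moment
    return b0, 2.0 * b1 - b0                       # l1 (location), l2 (scale)

l1, l2 = sample_l_moments([1.0, 2.0, 3.0])
print(l1, l2)
```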

  5. Improved model of the retardance in citric acid coated ferrofluids using stepwise regression

    NASA Astrophysics Data System (ADS)

    Lin, J. F.; Qiu, X. R.

    2017-06-01

    Citric acid (CA) coated Fe3O4 ferrofluids (FFs) have been developed for biomedical applications. The magneto-optical retardance of CA-coated FFs was measured by a Stokes polarimeter. Optimization and multiple regression of retardance in FFs were previously executed by the Taguchi method and Microsoft Excel, and the F value of the regression model was large enough; however, the model built in Excel was not systematic. Instead, we adopted stepwise regression to model the retardance of CA-coated FFs. From the results of stepwise regression in MATLAB, the developed model had high predictive ability, with an F value of 2.55897 × 10⁷ and a correlation coefficient of one. The average absolute error of the predicted retardances relative to the measured retardances was just 0.0044%. Using the genetic algorithm (GA) in MATLAB, the optimized parameter combination was determined as [4.709 0.12 39.998 70.006], corresponding to the pH of the suspension, the molar ratio of CA to Fe3O4, the CA volume, and the coating temperature. The maximum retardance was found to be 31.712°, close to that obtained by the evolutionary solver in Excel, with a relative error of −0.013%. Above all, the stepwise regression method was successfully used to model the retardance of CA-coated FFs, and the maximum global retardance was determined by use of the GA.

  6. The modelling of lead removal from water by deep eutectic solvents functionalized CNTs: artificial neural network (ANN) approach.

    PubMed

    Fiyadh, Seef Saadi; AlSaadi, Mohammed Abdulhakim; AlOmar, Mohamed Khalid; Fayaed, Sabah Saadi; Hama, Ako R; Bee, Sharifah; El-Shafie, Ahmed

    2017-11-01

    The main challenge in simulating lead removal is the nonlinear relationships between the process parameters. Conventional modelling techniques usually treat this problem with linear methods. An alternative is an artificial neural network (ANN), selected here to capture the nonlinearity in the interactions among the variables. Synthesized deep eutectic solvents were used as a functionalizing agent with carbon nanotubes as adsorbents of Pb²⁺. Different parameters were varied in the adsorption study, including pH (2.7 to 7), adsorbent dosage (5 to 20 mg), contact time (3 to 900 min) and initial Pb²⁺ concentration (3 to 60 mg/l). The experimental dataset used to feed and train the system comprised 158 laboratory runs. Two ANN types were designed in this work, feed-forward back-propagation and layer-recurrent; the two methods are compared on their predictive proficiency in terms of the mean square error (MSE), root mean square error, relative root mean square error, mean absolute percentage error and determination coefficient (R²) on the testing dataset. The ANN model of lead removal was subjected to accuracy determination, and the results showed an R² of 0.9956 with an MSE of 1.66 × 10⁻⁴. The maximum relative error was 14.93% for the feed-forward back-propagation neural network model.
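
    The comparison metrics listed above take only a few lines of NumPy; a minimal sketch with hypothetical observed/predicted values (not the paper's data):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MSE, RMSE, MAPE (%) and determination coefficient R^2."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_true - y_pred
    mse = float(np.mean(err ** 2))
    rmse = float(np.sqrt(mse))
    mape = float(np.mean(np.abs(err / y_true)) * 100.0)
    r2 = 1.0 - float(np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2))
    return mse, rmse, mape, r2

# Hypothetical observed vs. predicted removal values
mse, rmse, mape, r2 = regression_metrics([10.0, 20.0, 30.0], [11.0, 19.0, 30.0])
print(f"MSE={mse:.3f} RMSE={rmse:.3f} MAPE={mape:.1f}% R2={r2:.3f}")
```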

  7. Numerical Experiments in Error Control for Sound Propagation Using a Damping Layer Boundary Treatment

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    2017-01-01

    This paper presents results from numerical experiments for controlling the error caused by a damping layer boundary treatment when simulating the propagation of an acoustic signal from a continuous pressure source. The computations are with the 2D Linearized Euler Equations (LEE) for both a uniform mean flow and a steady parallel jet. The numerical experiments are with algorithms that are third, fifth, seventh and ninth order accurate in space and time. The numerical domain is enclosed in a damping layer boundary treatment. The damping is implemented in a time accurate manner, with simple polynomial damping profiles of second, fourth, sixth and eighth power. At the outer boundaries of the damping layer the propagating solution is uniformly set to zero. The complete boundary treatment is remarkably simple and intrinsically independent of the dimension of the spatial domain. The reported results show the relative effect on the error from the boundary treatment by varying the damping layer width, damping profile power, damping amplitude, propagation time, grid resolution and algorithm order. The issue that is being addressed is not the accuracy of the numerical solution when compared to a mathematical solution, but the effect of the complete boundary treatment on the numerical solution, and to what degree the error in the numerical solution from the complete boundary treatment can be controlled. We report maximum relative absolute errors from just the boundary treatment that range from O(10⁻²) to O(10⁻⁷).

  8. Exploiting data representation for fault tolerance

    DOE PAGES

    Hoemmen, Mark Frederick; Elliott, J.; Sandia National Lab.; ...

    2015-01-06

    Incorrect computer hardware behavior may corrupt intermediate computations in numerical algorithms, possibly resulting in incorrect answers. Prior work models misbehaving hardware by randomly flipping bits in memory. We start by accepting this premise, and present an analytic model for the error introduced by a bit flip in an IEEE 754 floating-point number. We then relate this finding to the linear algebra concepts of normalization and matrix equilibration. In particular, we present a case study illustrating that normalizing both vector inputs of a dot product minimizes the probability of a single bit flip causing a large error in the dot product's result. Moreover, the absolute error is either less than one or very large, which allows detection of large errors. We then apply this to the GMRES iterative solver. We count all possible errors that can be introduced through faults in arithmetic in the computationally intensive orthogonalization phase of GMRES, and show that when the matrix is equilibrated, the absolute error is bounded above by one.
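
    The bit-flip fault model described above is easy to reproduce; a minimal sketch that flips one bit of an IEEE 754 binary64 value (illustrative, not the authors' code):

```python
import struct

def flip_bit(x, bit):
    """Flip one bit (0 = lowest mantissa bit, 52-62 = exponent, 63 = sign)
    in the IEEE 754 binary64 representation of x."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    (y,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))
    return y

# A low mantissa bit barely perturbs the value; a high exponent bit is huge.
print(abs(flip_bit(1.0, 0) - 1.0))   # one ulp, ~2.2e-16
print(flip_bit(1.0, 62))             # exponent becomes all ones: inf
```

    This mirrors the "either tiny or enormous" error pattern the abstract describes: mantissa flips perturb the value slightly, while high exponent-bit flips change it by many orders of magnitude.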

  9. A suggestion for computing objective function in model calibration

    USGS Publications Warehouse

    Wu, Yiping; Liu, Shuguang

    2014-01-01

    A parameter-optimization process (model calibration) is usually required for numerical model applications, which involves the use of an objective function to determine the model cost (model-data errors). The sum of square errors (SSR) has been widely adopted as the objective function in various optimization procedures. However, ‘square error’ calculation was found to be more sensitive to extreme or high values. Thus, we proposed that the sum of absolute errors (SAR) may be a better option than SSR for model calibration. To test this hypothesis, we used two case studies—a hydrological model calibration and a biogeochemical model calibration—to investigate the behavior of a group of potential objective functions: SSR, SAR, sum of squared relative deviation (SSRD), and sum of absolute relative deviation (SARD). Mathematical evaluation of model performance demonstrates that ‘absolute error’ (SAR and SARD) are superior to ‘square error’ (SSR and SSRD) in calculating objective function for model calibration, and SAR behaved the best (with the least error and highest efficiency). This study suggests that SSR might be overly used in real applications, and SAR may be a reasonable choice in common optimization implementations without emphasizing either high or low values (e.g., modeling for supporting resources management).
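
    The outlier sensitivity of "square error" versus "absolute error" objectives can be seen directly; a small illustration with made-up observed/simulated values (not the study's calibration data):

```python
import numpy as np

obs = np.array([1.0, 2.0, 3.0, 4.0, 20.0])   # last observation is an extreme value
sim = np.array([1.1, 2.1, 3.1, 4.1, 10.0])   # model misses the extreme badly

res = obs - sim
ssr = float(np.sum(res ** 2))       # sum of squared errors
sar = float(np.sum(np.abs(res)))    # sum of absolute errors

out_share_ssr = res[-1] ** 2 / ssr  # outlier's share of the SSR objective
out_share_sar = abs(res[-1]) / sar  # outlier's share of the SAR objective
print(f"SSR share {out_share_ssr:.4f} vs SAR share {out_share_sar:.4f}")
```

    Squaring makes the single extreme residual dominate the objective more strongly, which is the sensitivity to high values that motivates SAR.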

  10. Estimation of lower flammability limits of C-H compounds in air at atmospheric pressure, evaluation of temperature dependence and diluent effect.

    PubMed

    Mendiburu, Andrés Z; de Carvalho, João A; Coronado, Christian R

    2015-03-21

    The objective of this study was the estimation of the lower flammability limits of C-H compounds at 25 °C and 1 atm, at moderately elevated temperatures, and in the presence of diluents. A set of 120 C-H compounds was divided into a correlation set and a prediction set of 60 compounds each. The absolute average relative error for the total set was 7.89%; for the correlation set, it was 6.09%; and for the prediction set it was 9.68%. However, it was shown that by considering different sources of experimental data the values were reduced to 6.5% for the prediction set and to 6.29% for the total set. The method showed consistency with Le Chatelier's law for binary mixtures of C-H compounds. When tested for a temperature range from 5 °C to 100 °C, the absolute average relative errors were 2.41% for methane, 4.78% for propane, 0.29% for iso-butane and 3.86% for propylene. When nitrogen was added, the absolute average relative errors were 2.48% for methane, 5.13% for propane, 0.11% for iso-butane and 0.15% for propylene. When carbon dioxide was added, the absolute relative errors were 1.80% for methane, 5.38% for propane, 0.86% for iso-butane and 1.06% for propylene. Copyright © 2014 Elsevier B.V. All rights reserved.
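
    Le Chatelier's law, against which the method is checked for binary mixtures, can be sketched as follows; the 50/50 methane-propane blend and the LFL values of roughly 5.0 and 2.1 vol.% are illustrative, not taken from the paper:

```python
def lfl_mixture(mole_fractions, lfls):
    """Le Chatelier's rule: LFL_mix = 1 / sum(y_i / LFL_i), with y_i the
    fuel-basis mole fractions (summing to 1) and LFL_i in vol.%."""
    assert abs(sum(mole_fractions) - 1.0) < 1e-9
    return 1.0 / sum(y / l for y, l in zip(mole_fractions, lfls))

# Illustrative 50/50 methane-propane blend with approximate LFLs in vol.%
print(round(lfl_mixture([0.5, 0.5], [5.0, 2.1]), 2))  # → 2.96
```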

  11. Fundamental principles of absolute radiometry and the philosophy of this NBS program (1968 to 1971)

    NASA Technical Reports Server (NTRS)

    Geist, J.

    1972-01-01

    A description is given of work performed on a program to develop an electrically calibrated detector (also called an absolute radiometer, absolute detector, or electrically calibrated radiometer) that could be used to realize, maintain, and transfer a scale of total irradiance. The program includes a comprehensive investigation of the theoretical basis of absolute detector radiometry, as well as the design and construction of a number of detectors. A theoretical analysis of the sources of error is also included.

  12. The effect of modeled absolute timing variability and relative timing variability on observational learning.

    PubMed

    Grierson, Lawrence E M; Roberts, James W; Welsher, Arthur M

    2017-05-01

    There is much evidence to suggest that skill learning is enhanced by skill observation. Recent research on this phenomenon indicates a benefit of observing variable/erred demonstrations. In this study, we explore whether it is variability within the relative organization or the absolute parameterization of a movement that facilitates skill learning through observation. To do so, participants were randomly allocated into groups that observed a model with no variability, absolute timing variability, relative timing variability, or variability in both absolute and relative timing. All participants performed a four-segment movement pattern with specific absolute and relative timing goals prior to and following the observational intervention, as well as in a 24 h retention test and transfer tests that featured new relative and absolute timing goals. Absolute timing error indicated that all groups initially acquired the absolute timing, maintained their performance at 24 h retention, and exhibited performance deterioration in both transfer tests. Relative timing error revealed that observation of no variability and of relative timing variability produced better performance at the post-test, 24 h retention and relative timing transfer tests, although the no-variability group's performance deteriorated in the absolute timing transfer test. The results suggest that the learning of absolute timing following observation unfolds irrespective of model variability. However, the learning of relative timing benefits from holding the absolute features constant, while the observation of no variability partially fails in transfer. We suggest learning by observing no-variability and variable/erred models unfolds via similar neural mechanisms, although the latter benefits from the additional coding of information pertaining to movements that require a correction. Copyright © 2017 Elsevier B.V. All rights reserved.
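
    One common operationalization of the two error measures (the paper's exact scoring may differ): absolute timing error as the deviation of total movement time from the goal, and relative timing error as the deviation of segment-time proportions. A sketch with hypothetical segment times:

```python
def timing_errors(produced_ms, goal_ms):
    """Hypothetical operationalization: absolute timing error = deviation of
    total movement time; relative timing error = summed deviation of
    segment-time proportions."""
    total_p, total_g = sum(produced_ms), sum(goal_ms)
    abs_err = abs(total_p - total_g)
    rel_err = sum(abs(p / total_p - g / total_g)
                  for p, g in zip(produced_ms, goal_ms))
    return abs_err, rel_err

# Four-segment pattern: relative structure preserved, overall 10% too slow
goal = [200, 400, 300, 100]        # goal segment times, total 1000 ms
produced = [220, 440, 330, 110]    # same proportions, total 1100 ms
errs = timing_errors(produced, goal)
print(errs)  # → (100, 0.0)
```

    The example separates the two constructs: a uniformly slowed movement accrues absolute timing error while its relative timing error stays zero.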

  13. Comprehensive modeling of monthly mean soil temperature using multivariate adaptive regression splines and support vector machine

    NASA Astrophysics Data System (ADS)

    Mehdizadeh, Saeid; Behmanesh, Javad; Khalili, Keivan

    2017-07-01

Soil temperature (Ts) and its thermal regime are among the most important factors in plant growth, biological activities, and water movement in soil. Due to the scarcity of Ts data, estimation of soil temperature is an important issue in different fields of science. The main objective of the present study is to investigate the accuracy of multivariate adaptive regression splines (MARS) and support vector machine (SVM) methods for estimating Ts. To this end, the monthly mean data of Ts (at depths of 5, 10, 50, and 100 cm) and meteorological parameters of 30 synoptic stations in Iran were utilized. To develop the MARS and SVM models, various combinations of minimum, maximum, and mean air temperatures (Tmin, Tmax, T); actual and maximum possible sunshine duration and sunshine duration ratio (n, N, n/N); actual, net, and extraterrestrial solar radiation data (Rs, Rn, Ra); precipitation (P); relative humidity (RH); wind speed at 2 m height (u2); and water vapor pressure (Vp) were used as input variables. Three error statistics, the root-mean-square error (RMSE), mean absolute error (MAE), and determination coefficient (R2), were used to check the performance of the MARS and SVM models. The results indicated that MARS was superior to SVM at different depths. In the test and validation phases, the most accurate estimations for MARS were obtained at the depth of 10 cm for the Tmax, Tmin, T inputs (RMSE = 0.71 °C, MAE = 0.54 °C, and R2 = 0.995) and for the RH, Vp, P, and u2 inputs (RMSE = 0.80 °C, MAE = 0.61 °C, and R2 = 0.996), respectively.
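The three performance statistics used here are standard. A minimal pure-Python sketch (hypothetical function names) of how they are computed from observed and estimated series; note that R2 is taken as 1 - SS_res/SS_tot, whereas some papers report the squared Pearson correlation instead:

```python
import math

def rmse(obs, est):
    """Root-mean-square error."""
    return math.sqrt(sum((o - e) ** 2 for o, e in zip(obs, est)) / len(obs))

def mae(obs, est):
    """Mean absolute error."""
    return sum(abs(o - e) for o, e in zip(obs, est)) / len(obs)

def r2(obs, est):
    """Determination coefficient: 1 - SS_res / SS_tot."""
    mean_o = sum(obs) / len(obs)
    ss_res = sum((o - e) ** 2 for o, e in zip(obs, est))
    ss_tot = sum((o - mean_o) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot
```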

  14. Quantum chemical approach for positron annihilation spectra of atoms and molecules beyond plane-wave approximation

    NASA Astrophysics Data System (ADS)

    Ikabata, Yasuhiro; Aiba, Risa; Iwanade, Toru; Nishizawa, Hiroaki; Wang, Feng; Nakai, Hiromi

    2018-05-01

    We report theoretical calculations of positron-electron annihilation spectra of noble gas atoms and small molecules using the nuclear orbital plus molecular orbital method. Instead of a nuclear wavefunction, the positronic wavefunction is obtained as the solution of the coupled Hartree-Fock or Kohn-Sham equation for a positron and the electrons. The molecular field is included in the positronic Fock operator, which allows an appropriate treatment of the positron-molecule repulsion. The present treatment succeeds in reproducing the Doppler shift, i.e., full width at half maximum (FWHM) of experimentally measured annihilation (γ-ray) spectra for molecules with a mean absolute error less than 10%. The numerical results indicate that the interpretation of the FWHM in terms of a specific molecular orbital is not appropriate.

  15. Modeling studies of landfalling atmospheric rivers and orographic precipitation over northern California

    NASA Astrophysics Data System (ADS)

    Eiserloh, Arthur J.; Chiao, Sen

    2015-02-01

This study investigated a slow-moving long-wave trough that brought four Atmospheric River (AR) "episodes" within a week to the U.S. West Coast from 28 November to 3 December 2012, delivering over 500 mm of rain to some coastal locations. The highest 6- and 12-hourly rainfall rates (131 and 195 mm, respectively) over northern California occurred during Episode 2 along the windward slopes of the coastal Santa Lucia Mountains. Surface observations from NOAA's Hydrometeorological Testbed sites in California and available GPS Radio Occultation (RO) vertical profiles from the Constellation Observing System for Meteorology Ionosphere and Climate (COSMIC) satellite mission were assimilated into WRF-ARW via eight combinations of observation nudging, grid nudging, and 3DVAR to improve the upstream moisture characteristics and quantitative precipitation forecast (QPF) during this event. Results during the 6-hourly rainfall maximum period in Episode 2 revealed that the models underestimated the observed 6-hourly rainfall rate maximum on the windward slopes of the Santa Lucia mountain range. The grid-nudging experiments smoothed out finer mesoscale details in the inner domain that may affect the final QPFs. Overall, the experiments that did not use grid nudging were more accurate, with lower mean absolute error. In the time evolution of the accumulated rainfall forecast, the observation nudging experiment that included RAOB and COSMIC GPS RO data demonstrated the least error for the north central Coastal Range, and the 3DVAR cold-start experiment demonstrated the least error for the windward Sierra Nevada. The experiment that combined 3DVAR cold start, observation nudging, and grid nudging showed the largest error in the rainfall forecasts. Results from this study further suggest that assimilating surface observations at intervals shorter than 3 h for observation nudging and using cycling intervals shorter than 3 h for 3DVAR would be more beneficial for short-to-medium-range mesoscale QPFs during high-impact AR events over northern California.

  16. A novel validation and calibration method for motion capture systems based on micro-triangulation.

    PubMed

    Nagymáté, Gergely; Tuchband, Tamás; Kiss, Rita M

    2018-06-06

Motion capture systems are widely used to measure human kinematics. Nevertheless, users must consider system errors when evaluating their results. Most validation techniques for these systems are based on relative distance and displacement measurements. In contrast, our study aimed to analyse the absolute volume accuracy of optical motion capture systems by means of an engineering surveying reference measurement of the marker coordinates (uncertainty: 0.75 mm). The method is exemplified on an 18-camera OptiTrack Flex13 motion capture system. The absolute accuracy was defined by the root mean square error (RMSE) between the coordinates measured by the camera system and by engineering surveying (micro-triangulation). The original RMSE of 1.82 mm, caused mainly by scaling error, was reduced to 0.77 mm, while the correlation of errors with their distance from the origin fell from 0.855 to 0.209. A simpler but less accurate absolute accuracy compensation method, using a tape measure over large distances, was also tested; it resulted in a scaling compensation similar to that of the surveying method and of direct wand-size compensation by a high-precision 3D scanner. The presented validation methods can be less precise in some respects than previous techniques, but they address an error type that has not been and cannot be studied with the previous validation methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
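The absolute accuracy metric and the scaling compensation described above boil down to a point-to-point RMSE and a least-squares isotropic scale factor. A minimal sketch under the simplifying assumption that the scale is fitted about the origin (the paper's full procedure is more involved; function names are hypothetical):

```python
import math

def coordinate_rmse(ref, meas):
    """RMSE of 3D point-to-point distances between two coordinate sets."""
    d2 = [sum((r - m) ** 2 for r, m in zip(p, q)) for p, q in zip(ref, meas)]
    return math.sqrt(sum(d2) / len(d2))

def fit_scale(ref, meas):
    """Least-squares isotropic scale s minimizing sum ||ref - s * meas||^2,
    assuming both sets share an origin."""
    num = sum(r * m for p, q in zip(ref, meas) for r, m in zip(p, q))
    den = sum(m * m for q in meas for m in q)
    return num / den
```

Applying the fitted scale to measurements that are uniformly 1% too large removes the scaling component of the RMSE entirely.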

  17. Commissioning and validation of fluence-based 3D VMAT dose reconstruction system using new transmission detector.

    PubMed

    Nakaguchi, Yuji; Oono, Takeshi; Maruyama, Masato; Shimohigashi, Yoshinobu; Kai, Yudai; Nakamura, Yuya

    2018-06-01

In this study, we evaluated the basic performance of the three-dimensional dose verification system COMPASS (IBA Dosimetry). This system is capable of reconstructing 3D dose distributions on the patient anatomy based on the fluence measured during treatment using a new transmission detector (Dolphin, IBA Dosimetry). The stability of the absolute dose and geometric calibrations of the COMPASS system with the Dolphin detector were investigated for fundamental validation. Furthermore, multileaf collimator (MLC) test patterns and a complicated volumetric modulated arc therapy (VMAT) plan were used to evaluate the accuracy of the reconstructed dose distributions determined by COMPASS. The results from COMPASS were compared with those of a Monte Carlo simulation (MC), EDR2 film measurement, and a treatment planning system (TPS). The maximum errors for the absolute dose and geometrical position over 3 months were -0.28% and 1.0 mm, respectively. The Dolphin detector, which consists of ionization chamber detectors, was firmly mounted on the linear accelerator and was very stable. For the MLC test patterns, the TPS showed a > 5% difference at small fields, while COMPASS showed good agreement with the MC simulation at small fields. However, COMPASS produced a large error for complex small fields. For a clinical VMAT plan, COMPASS was more accurate than the TPS. COMPASS showed real delivered-dose distributions because it uses the measured fluence, a high-resolution detector, and accurate beam modeling. We confirm here that the accuracy and detectability of the delivered dose of the COMPASS system are sufficient for clinical practice.

  18. Artificial Intelligence Techniques for Predicting and Mapping Daily Pan Evaporation

    NASA Astrophysics Data System (ADS)

    Arunkumar, R.; Jothiprakash, V.; Sharma, Kirty

    2017-09-01

In this study, Artificial Intelligence techniques such as Artificial Neural Network (ANN), Model Tree (MT) and Genetic Programming (GP) are used to develop daily pan evaporation time-series (TS) prediction and cause-effect (CE) mapping models. Ten years of observed daily meteorological data, including maximum temperature, minimum temperature, relative humidity, sunshine hours, dew point temperature and pan evaporation, are used for developing the models. For each technique, several models are developed by changing the number of inputs and other model parameters. The performance of each model is evaluated using standard statistical measures such as Mean Square Error, Mean Absolute Error, Normalized Mean Square Error and correlation coefficient (R). The results showed that the daily TS-GP (4) model predicted better than the other TS models, with a correlation coefficient of 0.959. Among the various CE models, CE-ANN (6-10-1) performed better than the MT and GP models, with a correlation coefficient of 0.881. Because of the complex non-linear inter-relationships among the various meteorological variables, the CE mapping models could not achieve the performance of the TS models. From this study, it was found that GP performs better for recognizing a single pattern (time-series modelling), whereas ANN is better for modelling multiple patterns (cause-effect modelling) in the data.
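The correlation coefficient R used to rank the models above is the Pearson product-moment correlation between observed and predicted series; a minimal sketch (hypothetical function name):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```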

  19. Variable selection for zero-inflated and overdispersed data with application to health care demand in Germany.

    PubMed

    Wang, Zhu; Ma, Shuangge; Wang, Ching-Yun

    2015-09-01

In health services and outcome research, count outcomes are frequently encountered and often have a large proportion of zeros. The zero-inflated negative binomial (ZINB) regression model has important applications for this type of data. With many possible candidate risk factors, this paper proposes new variable selection methods for the ZINB model. We consider the maximum likelihood function plus a penalty, including the least absolute shrinkage and selection operator (LASSO), smoothly clipped absolute deviation (SCAD), and minimax concave penalty (MCP). An EM (expectation-maximization) algorithm is proposed for estimating the model parameters and conducting variable selection simultaneously. This algorithm consists of estimating penalized weighted negative binomial models and penalized logistic models via the coordinate descent algorithm. Furthermore, statistical properties, including the standard error formulae, are provided. A simulation study shows that the new algorithm not only has more accurate or at least comparable estimation, but is also more robust than traditional stepwise variable selection. The proposed methods are applied to analyze the health care demand in Germany using the open-source R package mpath. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
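The coordinate descent inner loop referred to above is built on the LASSO soft-thresholding operator. A minimal sketch for a plain (unweighted, Gaussian-response) lasso, assuming standardized predictor columns; this is not mpath's actual implementation, and SCAD or MCP would swap in a different thresholding rule:

```python
def soft_threshold(z, gamma):
    """LASSO proximal operator: sign(z) * max(|z| - gamma, 0)."""
    if z > gamma:
        return z - gamma
    if z < -gamma:
        return z + gamma
    return 0.0

def lasso_cd(X, y, lam, n_iter=100):
    """Cyclic coordinate descent for min ||y - Xb||^2 / (2n) + lam * ||b||_1,
    assuming each column of X has mean 0 and mean square 1."""
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residual excluding coordinate j
            r = [y[i] - sum(X[i][k] * b[k] for k in range(p) if k != j)
                 for i in range(n)]
            zj = sum(X[i][j] * r[i] for i in range(n)) / n
            b[j] = soft_threshold(zj, lam)
    return b
```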

  20. Special electronic distance meter calibration for precise engineering surveying industrial applications

    NASA Astrophysics Data System (ADS)

    Braun, Jaroslav; Štroner, Martin; Urban, Rudolf

    2015-05-01

All surveying instruments and their measurements suffer from errors. To refine the measurement results, it is necessary either to use procedures that restrict the influence of instrument errors on the measured values or to implement numerical corrections. In precise engineering surveying for industrial applications, the accuracy of distances, usually realized over relatively short ranges, is a key parameter limiting the resulting accuracy of the determined values (coordinates, etc.). To determine the size of the systematic and random errors of the measured distances, tests were made with the idea of suppressing the random error by averaging repeated measurements and of reducing the influence of systematic errors by identifying their absolute size on an absolute baseline realized in the geodetic laboratory at the Faculty of Civil Engineering, CTU in Prague. Sixteen concrete pillars with forced centerings were set up, and the absolute distances between the points were determined with a standard deviation of 0.02 mm using a Leica Absolute Tracker AT401. For any distance measured by the calibrated instruments (up to the length of the testing baseline, i.e. 38.6 m), the error correction of the distance meter can now be determined in two ways: first, by interpolation on the raw data; second, using a correction function derived beforehand by FFT. The quality of this calibration and correction procedure was tested experimentally on three instruments (Trimble S6 HP, Topcon GPT-7501, Trimble M3) using the Leica Absolute Tracker AT401. The correction procedure reduced the standard deviation of the measured distances significantly, to less than 0.6 mm. 
For the Topcon GPT-7501, the nominal standard deviation is 2 mm; 2.8 mm was achieved without corrections and 0.55 mm after corrections. For the Trimble M3, the nominal standard deviation is 3 mm; 1.1 mm was achieved without corrections and 0.58 mm after corrections. Finally, for the Trimble S6, the nominal standard deviation is 1 mm; 1.2 mm was achieved without corrections and 0.51 mm after corrections. The proposed calibration and correction procedure is, in our opinion, very suitable for increasing the accuracy of electronic distance measurement and allows common surveying instruments to achieve uncommonly high precision.
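The first of the two correction options, interpolation on the raw calibration data, can be sketched as follows. The baseline distances, error values, sign convention (error = measured minus true), and function names are all illustrative assumptions, not values from the paper:

```python
import bisect

def correction(baseline_d, errors, d):
    """Linearly interpolate the distance-meter error at distance d from
    baseline calibration pairs (baseline_d sorted ascending)."""
    if d <= baseline_d[0]:
        return errors[0]
    if d >= baseline_d[-1]:
        return errors[-1]
    i = bisect.bisect_right(baseline_d, d)
    t = (d - baseline_d[i - 1]) / (baseline_d[i] - baseline_d[i - 1])
    return errors[i - 1] + t * (errors[i] - errors[i - 1])

def corrected_distance(baseline_d, errors, measured):
    # error = measured - true, so subtract the interpolated error
    return measured - correction(baseline_d, errors, measured)
```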

  1. Demand Forecasting: An Evaluation of DODs Accuracy Metric and Navys Procedures

    DTIC Science & Technology

    2016-06-01

inventory management improvement plan, mean of absolute scaled error, lead time adjusted squared error, forecast accuracy, benchmarking, naïve method... Manager; JASA, Journal of the American Statistical Association; LASE, Lead-time Adjusted Squared Error; LCI, Life Cycle Indicator; MA, Moving Average; MAE... Mean Squared Error; NAVSUP, Naval Supply Systems Command; NDAA, National Defense Authorization Act; NIIN, National Individual Identification Number

  2. Twice cutting method reduces tibial cutting error in unicompartmental knee arthroplasty.

    PubMed

    Inui, Hiroshi; Taketomi, Shuji; Yamagami, Ryota; Sanada, Takaki; Tanaka, Sakae

    2016-01-01

Bone cutting error can be one of the causes of malalignment in unicompartmental knee arthroplasty (UKA). The amount of cutting error in total knee arthroplasty has been reported; however, none have investigated cutting error in UKA. The purpose of this study was to reveal the amount of cutting error in UKA when an open cutting guide was used, and to clarify whether cutting the tibia horizontally twice using the same cutting guide reduced the cutting errors in UKA. We measured the alignment of the tibial cutting guides, the first-cut cutting surfaces, and the second-cut cutting surfaces using the navigation system in 50 UKAs. Cutting error was defined as the angular difference between the cutting guide and the cutting surface. The mean absolute first-cut cutting error was 1.9° (1.1° varus) in the coronal plane and 1.1° (0.6° anterior slope) in the sagittal plane, whereas the mean absolute second-cut cutting error was 1.1° (0.6° varus) in the coronal plane and 1.1° (0.4° anterior slope) in the sagittal plane. Cutting the tibia horizontally twice reduced the cutting errors in the coronal plane significantly (P < 0.05). Our study demonstrated that in UKA, cutting the tibia horizontally twice using the same cutting guide reduced cutting error in the coronal plane. Copyright © 2014 Elsevier B.V. All rights reserved.
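The distinction between the mean absolute error and the mean signed deviation (e.g. 1.9° absolute vs 1.1° varus above) can be made explicit. A minimal sketch with a hypothetical sign convention (positive = varus); the function name and data are illustrative:

```python
def cutting_errors(guide_deg, surface_deg):
    """Angular cutting errors per knee: surface minus guide alignment.
    Returns (mean signed error, mean absolute error) in degrees."""
    signed = [s - g for g, s in zip(guide_deg, surface_deg)]
    return (sum(signed) / len(signed),
            sum(abs(e) for e in signed) / len(signed))
```

Opposite-signed deviations partially cancel in the signed mean but not in the absolute mean, which is why both are reported.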

  3. Validation of SenseWear Armband in children, adolescents, and adults.

    PubMed

    Lopez, G A; Brønd, J C; Andersen, L B; Dencker, M; Arvidsson, D

    2018-02-01

    SenseWear Armband (SW) is a multisensor monitor to assess physical activity and energy expenditure. Its prediction algorithms have been updated periodically. The aim was to validate SW in children, adolescents, and adults. The most recent SW algorithm 5.2 (SW5.2) and the previous version 2.2 (SW2.2) were evaluated for estimation of energy expenditure during semi-structured activities in 35 children, 31 adolescents, and 36 adults with indirect calorimetry as reference. Energy expenditure estimated from waist-worn ActiGraph GT3X+ data (AG) was used for comparison. Improvements in measurement errors were demonstrated with SW5.2 compared to SW2.2, especially in children and for biking. The overall mean absolute percent error with SW5.2 was 24% in children, 23% in adolescents, and 20% in adults. The error was larger for sitting and standing (23%-32%) and for basketball and biking (19%-35%), compared to walking and running (8%-20%). The overall mean absolute error with AG was 28% in children, 22% in adolescents, and 28% in adults. The absolute percent error for biking was 32%-74% with AG. In general, SW and AG underestimated energy expenditure. However, both methods demonstrated a proportional bias, with increasing underestimation for increasing energy expenditure level, in addition to the large individual error. SW provides measures of energy expenditure level with similar accuracy in children, adolescents, and adults with the improvements in the updated algorithms. Although SW captures biking better than AG, these methods share remaining measurements errors requiring further improvements for accurate measures of physical activity and energy expenditure in clinical and epidemiological research. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
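The headline metric here, mean absolute percent error against the indirect-calorimetry reference, is simply:

```python
def mape(ref, est):
    """Mean absolute percent error of estimates against a reference."""
    return 100.0 * sum(abs(e - r) / abs(r) for r, e in zip(ref, est)) / len(ref)
```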

  4. Models for estimating daily rainfall erosivity in China

    NASA Astrophysics Data System (ADS)

    Xie, Yun; Yin, Shui-qing; Liu, Bao-yuan; Nearing, Mark A.; Zhao, Ying

    2016-04-01

The rainfall erosivity factor (R) is the product of rainfall energy and maximum 30-min intensity, computed by event (EI30) and accumulated by year. This rainfall erosivity index is widely used for empirical soil loss prediction. Its calculation, however, requires high-temporal-resolution rainfall data that are not readily available in many parts of the world. The purpose of this study was to parameterize models suitable for estimating erosivity from daily rainfall data, which are more widely available. One-minute-resolution rainfall data recorded at sixteen stations over the eastern, water-erosion-impacted regions of China were analyzed. The R-factor ranged from 781.9 to 8258.5 MJ mm ha-1 h-1 y-1. A total of 5942 erosive events from the one-minute-resolution rainfall data of ten stations were used to parameterize three models, and 4949 erosive events from the other six stations were used for validation. A threshold of daily rainfall between days classified as erosive and non-erosive was suggested to be 9.7 mm based on these data. Two of the models (I and II) used power-law functions that required only daily rainfall totals. Model I used different model coefficients in the cool season (Oct.-Apr.) and warm season (May-Sept.), and Model II was fitted with a sinusoidal curve of seasonal variation. Both Model I and Model II estimated the erosivity index at average annual, yearly, and half-month temporal scales reasonably well, with the symmetric mean absolute percentage error MAPEsym ranging from 10.8% to 32.1%. Model II predicted slightly better than Model I. However, the prediction efficiency for the daily erosivity index was limited, with the symmetric mean absolute percentage error being 68.0% (Model I) and 65.7% (Model II) and the Nash-Sutcliffe model efficiency being 0.55 (Model I) and 0.57 (Model II). 
Model III, which used the combination of daily rainfall amount and daily maximum 60-min rainfall, improved predictions significantly, and produced a Nash-Sutcliffe model efficiency for daily erosivity index prediction of 0.93. Thus daily rainfall data was generally sufficient for estimating annual average, yearly, and half-monthly time scales, while sub-daily data was needed when estimating daily erosivity values.
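The two skill scores used above can be sketched in a few lines. Definitions of the symmetric MAPE vary across the literature; the version below uses the half-sum of absolute values in the denominator, which may differ from the paper's MAPEsym:

```python
def smape(obs, est):
    """Symmetric mean absolute percentage error (%), half-sum denominator."""
    return 100.0 * sum(abs(e - o) / ((abs(o) + abs(e)) / 2.0)
                       for o, e in zip(obs, est)) / len(obs)

def nse(obs, est):
    """Nash-Sutcliffe model efficiency: 1 = perfect, 0 = no better than
    predicting the observed mean."""
    mo = sum(obs) / len(obs)
    return 1.0 - (sum((o - e) ** 2 for o, e in zip(obs, est))
                  / sum((o - mo) ** 2 for o in obs))
```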

  5. Addressing global uncertainty and sensitivity in first-principles based microkinetic models by an adaptive sparse grid approach

    NASA Astrophysics Data System (ADS)

    Döpking, Sandra; Plaisance, Craig P.; Strobusch, Daniel; Reuter, Karsten; Scheurer, Christoph; Matera, Sebastian

    2018-01-01

    In the last decade, first-principles-based microkinetic modeling has been developed into an important tool for a mechanistic understanding of heterogeneous catalysis. A commonly known, but hitherto barely analyzed issue in this kind of modeling is the presence of sizable errors from the use of approximate Density Functional Theory (DFT). We here address the propagation of these errors to the catalytic turnover frequency (TOF) by global sensitivity and uncertainty analysis. Both analyses require the numerical quadrature of high-dimensional integrals. To achieve this efficiently, we utilize and extend an adaptive sparse grid approach and exploit the confinement of the strongly non-linear behavior of the TOF to local regions of the parameter space. We demonstrate the methodology on a model of the oxygen evolution reaction at the Co3O4 (110)-A surface, using a maximum entropy error model that imposes nothing but reasonable bounds on the errors. For this setting, the DFT errors lead to an absolute uncertainty of several orders of magnitude in the TOF. We nevertheless find that it is still possible to draw conclusions from such uncertain models about the atomistic aspects controlling the reactivity. A comparison with derivative-based local sensitivity analysis instead reveals that this more established approach provides incomplete information. Since the adaptive sparse grids allow for the evaluation of the integrals with only a modest number of function evaluations, this approach opens the way for a global sensitivity analysis of more complex models, for instance, models based on kinetic Monte Carlo simulations.
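The paper's adaptive sparse grids replace brute-force sampling, but the underlying propagation task can be illustrated with plain Monte Carlo on a toy one-barrier TOF model. All names, the uniform error distribution, and the numbers below are illustrative assumptions, not values from the paper:

```python
import math
import random

def propagate(tof_model, nominal, bound, n_samples=20000, seed=1):
    """Sample energy errors uniformly within +/- bound (eV) around the
    nominal barriers and collect the induced TOF distribution."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        perturbed = [e + rng.uniform(-bound, bound) for e in nominal]
        samples.append(tof_model(perturbed))
    return samples

# Toy model: TOF limited by the largest barrier, Arrhenius-like at kT = 0.05 eV.
def toy_tof(barriers, kT=0.05):
    return math.exp(-max(barriers) / kT)
```

With +/-0.2 eV bounds on two barriers, the sampled TOF already spans several orders of magnitude, mirroring the abstract's observation that modest DFT errors induce huge TOF uncertainty.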

  6. Altitude Registration of Limb-Scattered Radiation

    NASA Technical Reports Server (NTRS)

    Moy, Leslie; Bhartia, Pawan K.; Jaross, Glen; Loughman, Robert; Kramarova, Natalya; Chen, Zhong; Taha, Ghassan; Chen, Grace; Xu, Philippe

    2017-01-01

One of the largest constraints on the retrieval of accurate ozone profiles from UV backscatter limb sounding sensors is altitude registration. Two methods, Rayleigh scattering attitude sensing (RSAS) and the absolute radiance residual method (ARRM), are able to determine altitude registration to the accuracy necessary for long-term ozone monitoring. The methods compare model calculations of radiances to measured radiances and are independent of onboard tracking devices. RSAS determines absolute altitude errors, but, because the method is susceptible to aerosol interference, it is limited to latitudes and time periods with minimal aerosol contamination. ARRM, a new technique introduced in this paper, can be applied across all seasons and altitudes; however, it is only appropriate for relative altitude error estimates. The application of RSAS to Limb Profiler (LP) measurements from the Ozone Mapping and Profiler Suite (OMPS) on board the Suomi NPP (SNPP) satellite indicates tangent height (TH) errors greater than 1 km, with an absolute accuracy of +/-200 m. Results using ARRM indicate an approx. 300 to 400 m intra-orbital TH change varying seasonally by +/-100 m, likely due to errors either in the spacecraft pointing or in the geopotential height (GPH) data that we use in our analysis. ARRM shows a change of approx. 200 m over 5 years, with a relative accuracy (a long-term accuracy) of 100 m outside the polar regions.

  7. Absolute plate motions relative to deep mantle plumes

    NASA Astrophysics Data System (ADS)

    Wang, Shimin; Yu, Hongzheng; Zhang, Qiong; Zhao, Yonghong

    2018-05-01

    Advances in whole waveform seismic tomography have revealed the presence of broad mantle plumes rooted at the base of the Earth's mantle beneath major hotspots. Hotspot tracks associated with these deep mantle plumes provide ideal constraints for inverting absolute plate motions as well as testing the fixed hotspot hypothesis. In this paper, 27 observed hotspot trends associated with 24 deep mantle plumes are used together with the MORVEL model for relative plate motions to determine an absolute plate motion model, in terms of a maximum likelihood optimization for angular data fitting, combined with an outlier data detection procedure based on statistical tests. The obtained T25M model fits 25 observed trends of globally distributed hotspot tracks to the statistically required level, while the other two hotspot trend data (Comores on Somalia and Iceland on Eurasia) are identified as outliers, which are significantly incompatible with other data. For most hotspots with rate data available, T25M predicts plate velocities significantly lower than the observed rates of hotspot volcanic migration, which cannot be fully explained by biased errors in observed rate data. Instead, the apparent hotspot motions derived by subtracting the observed hotspot migration velocities from the T25M plate velocities exhibit a combined pattern of being opposite to plate velocities and moving towards mid-ocean ridges. The newly estimated net rotation of the lithosphere is statistically compatible with three recent estimates, but differs significantly from 30 of 33 prior estimates.
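Comparing predicted plate velocities with observed hotspot migration rates requires evaluating the surface speed implied by an Euler rotation. A minimal sketch on a spherical Earth (hypothetical helper; the paper's inversion machinery is far more involved):

```python
import math

def plate_speed_mm_per_yr(pole_lat, pole_lon, omega_deg_per_myr, lat, lon):
    """Surface speed at (lat, lon) for a rotation omega about the given
    Euler pole: |v| = omega * R * sin(angular distance to the pole)."""
    p1, p2 = math.radians(pole_lat), math.radians(lat)
    dlon = math.radians(lon - pole_lon)
    # spherical law of cosines for the angular distance to the pole
    cos_delta = (math.sin(p1) * math.sin(p2)
                 + math.cos(p1) * math.cos(p2) * math.cos(dlon))
    sin_delta = math.sqrt(max(0.0, 1.0 - cos_delta ** 2))
    omega_rad = math.radians(omega_deg_per_myr)  # rad/Myr
    r_earth_km = 6371.0
    return omega_rad * r_earth_km * sin_delta  # km/Myr, numerically == mm/yr
```

A rotation of 1°/Myr about the north pole moves an equatorial site at about 111 mm/yr, the familiar "one degree per million years" scale.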

  8. Determining the effect of grain size and maximum induction upon coercive field of electrical steels

    NASA Astrophysics Data System (ADS)

    Landgraf, Fernando José Gomes; da Silveira, João Ricardo Filipini; Rodrigues-Jr., Daniel

    2011-10-01

Although theoretical models have already been proposed, experimental data are still lacking to quantify the influence of grain size upon the coercivity of electrical steels. Some authors consider a linear inverse proportionality, while others suggest a square-root inverse proportionality. Results also differ with regard to the slope of the coercive field versus reciprocal grain size relation for a given material. This paper discusses two aspects of the problem: the maximum induction used for determining the coercive force, and the possible effect of lurking variables such as the grain size distribution breadth and crystallographic texture. Electrical steel sheets containing 0.7% Si, 0.3% Al and 24 ppm C were cold-rolled and annealed in order to produce different grain sizes (ranging from 20 to 150 μm). The coercive field was measured along the rolling direction and found to depend linearly on the reciprocal of grain size, with a slope of approximately 0.9 (A/m)mm at 1.0 T induction. A general relation for the coercive field as a function of grain size and maximum induction was established, yielding an average absolute error below 4%. Through measurement of B50 and image analysis of micrographs, the effects of crystallographic texture and grain size distribution breadth were qualitatively discussed.
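The linear dependence on reciprocal grain size, Hc = a + b/d, can be fitted by ordinary least squares on the transformed variable 1/d. A minimal sketch (hypothetical function name and synthetic data; the intercept value below is illustrative, while the 0.9 (A/m)mm slope echoes the abstract):

```python
def fit_inverse_grain_size(grain_sizes_mm, hc):
    """Least-squares fit of Hc = a + b / d, with b in (A/m)*mm."""
    x = [1.0 / d for d in grain_sizes_mm]
    n = len(x)
    mx, my = sum(x) / n, sum(hc) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, hc))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b
```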

  9. The absolute radiometric calibration of the advanced very high resolution radiometer

    NASA Technical Reports Server (NTRS)

    Slater, P. N.; Teillet, P. M.; Ding, Y.

    1988-01-01

    The need for independent, redundant absolute radiometric calibration methods is discussed with reference to the Thematic Mapper. Uncertainty requirements for absolute calibration of between 0.5 and 4 percent are defined based on the accuracy of reflectance retrievals at an agricultural site. It is shown that even very approximate atmospheric corrections can reduce the error in reflectance retrieval to 0.02 over the reflectance range 0 to 0.4.

  10. Low-Cost Ultrasonic Distance Sensor Arrays with Networked Error Correction

    PubMed Central

    Dai, Hongjun; Zhao, Shulin; Jia, Zhiping; Chen, Tianzhou

    2013-01-01

Distance has been one of the basic factors in manufacturing and control fields, and ultrasonic distance sensors have been widely used as a low-cost measuring tool. However, the propagation of ultrasonic waves is greatly affected by environmental factors such as temperature, humidity and atmospheric pressure. In order to solve the problem of inaccurate measurement, which is significant within industry, this paper presents a novel ultrasonic distance sensor model using networked error correction (NEC) trained on experimental data. This is more accurate than other existing approaches because it uses information from indirect association with neighboring sensors, which has not been considered before. The NEC technique, focusing on optimization of the relationship of the topological structure of sensor arrays, is implemented to compensate for erroneous measurements caused by the environment. We apply the maximum likelihood method to determine the optimal fusion data set and use a neighbor discovery algorithm to identify neighbor nodes as quickly as possible. Furthermore, we adopt the NEC optimization algorithm, which takes full advantage of the correlation coefficients for neighbor sensors. The experimental results demonstrate that the ranging errors of the NEC system are within 2.20%; furthermore, the mean absolute percentage error is reduced to 0.01% after three iterations of this method, which means that the proposed method performs extremely well. The optimized method of distance measurement we propose, with the capability of NEC, would bring a significant advantage for intelligent industrial automation. PMID:24013491
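For independent Gaussian sensor readings, the maximum-likelihood fusion step reduces to an inverse-variance weighted mean. A minimal sketch of that step only (the paper's full NEC pipeline adds neighbor discovery and iterative correlation-based correction; the function name is hypothetical):

```python
def fuse(measurements, variances):
    """Maximum-likelihood fusion of independent Gaussian distance readings:
    the inverse-variance weighted mean."""
    w = [1.0 / v for v in variances]
    return sum(wi * m for wi, m in zip(w, measurements)) / sum(w)
```

Equal variances give the plain average; a noisier neighbor is down-weighted in proportion to its variance.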

  11. Accuracy of Robotic Radiosurgical Liver Treatment Throughout the Respiratory Cycle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winter, Jeff D.; Wong, Raimond; Swaminath, Anand

Purpose: To quantify random uncertainties in robotic radiosurgical treatment of liver lesions with real-time respiratory motion management. Methods and Materials: We conducted a retrospective analysis of 27 liver cancer patients treated with robotic radiosurgery over 118 fractions. The robotic radiosurgical system uses orthogonal x-ray images to determine internal target position and correlates this position with an external surrogate to provide robotic corrections of linear accelerator positioning. Verification and update of this internal–external correlation model was achieved using periodic x-ray images collected throughout treatment. To quantify random uncertainties in targeting, we analyzed logged tracking information and isolated x-ray images collected immediately before beam delivery. For translational correlation errors, we quantified the difference between correlation model–estimated target position and actual position determined by periodic x-ray imaging. To quantify prediction errors, we computed the mean absolute difference between the predicted coordinates and actual modeled position calculated 115 milliseconds later. We estimated overall random uncertainty by quadratically summing correlation, prediction, and end-to-end targeting errors. We also investigated relationships between tracking errors and motion amplitude using linear regression. Results: The 95th percentile absolute correlation errors in each direction were 2.1 mm left–right, 1.8 mm anterior–posterior, 3.3 mm cranio–caudal, and 3.9 mm 3-dimensional radial, whereas 95th percentile absolute radial prediction errors were 0.5 mm. Overall 95th percentile random uncertainty was 4 mm in the radial direction. Prediction errors were strongly correlated with modeled target amplitude (r=0.53-0.66, P<.001), whereas only weak correlations existed for correlation errors. 
Conclusions: Study results demonstrate that model correlation errors are the primary random source of uncertainty in CyberKnife liver treatment and, unlike prediction errors, are not strongly correlated with target motion amplitude. Aggregate 3-dimensional radial position errors presented here suggest the target will be within 4 mm of the target volume for 95% of the beam delivery.

  12. Error Analysis of non-TLD HDR Brachytherapy Dosimetric Techniques

    NASA Astrophysics Data System (ADS)

    Amoush, Ahmad

The American Association of Physicists in Medicine Task Group Report 43 (AAPM-TG43) and its updated version TG-43U1 rely on the LiF TLD detector to determine the experimental absolute dose rate for brachytherapy. The recommended uncertainty estimates associated with TLD experimental dosimetry include 5% for statistical errors (Type A) and 7% for systematic errors (Type B). The TG-43U1 protocol does not include recommendations for other experimental dosimetric techniques to calculate the absolute dose for brachytherapy. This research used two independent experimental methods and Monte Carlo simulations to investigate and analyze uncertainties and errors associated with absolute dosimetry of HDR brachytherapy for a Tandem applicator. An A16 MicroChamber* and OneDose MOSFET detectors† were selected to meet the TG-43U1 recommendations for experimental dosimetry. Statistical and systematic uncertainties associated with each experimental technique were analyzed quantitatively using MCNPX 2.6‡ to evaluate source positional error, Tandem positional error, the source spectrum, phantom size effect, reproducibility, temperature and pressure effects, volume averaging, stem and wall effects, and Tandem effect. Absolute dose calculations for clinical use are based on the Treatment Planning System (TPS) with no corrections for the above uncertainties. Absolute dose and uncertainties along the transverse plane were predicted for the A16 microchamber. The generated overall uncertainties are 22%, 17%, 15%, 15%, 16%, 17%, and 19% at 1cm, 2cm, 3cm, 4cm, and 5cm, respectively. Predicting the dose beyond 5cm is complicated due to the low signal-to-noise ratio, cable effect, and stem effect for the A16 microchamber. Since dose beyond 5cm adds no clinical information, it has been ignored in this study. The absolute dose was predicted for the MOSFET detector from 1cm to 7cm along the transverse plane. 
The generated overall uncertainties are 23%, 11%, 8%, 7%, 7%, 9%, and 8% at 1cm, 2cm, 3cm, 4cm, 5cm, 6cm, and 7cm, respectively. The Nucletron Freiburg flap applicator is used with the Nucletron remote afterloader HDR machine to deliver dose to surface cancers. Dosimetric data for the Nucletron 192Ir source were generated using Monte Carlo simulation and compared with the published data. Two-dimensional dosimetric data were calculated at two source positions: at the center of the sphere of the applicator and between two adjacent spheres. Unlike the TPS dose algorithm, the Monte Carlo code developed for this research accounts for the applicator material, secondary electrons and delta particles, and the air gap between the skin and the applicator. *Standard Imaging, Inc., Middleton, Wisconsin USA † OneDose MOSFET, Sicel Technologies, Morrisville NC ‡ Los Alamos National Laboratory, NM USA

  13. Accounting for hardware imperfections in EIT image reconstruction algorithms.

    PubMed

    Hartinger, Alzbeta E; Gagnon, Hervé; Guardo, Robert

    2007-07-01

    Electrical impedance tomography (EIT) is a non-invasive technique for imaging the conductivity distribution of a body section. Different types of EIT images can be reconstructed: absolute, time difference and frequency difference. Reconstruction algorithms are sensitive to many errors which translate into image artefacts. These errors generally result from incorrect modelling or inaccurate measurements. Every reconstruction algorithm incorporates a model of the physical set-up which must be as accurate as possible since any discrepancy with the actual set-up will cause image artefacts. Several methods have been proposed in the literature to improve the model realism, such as creating anatomical-shaped meshes, adding a complete electrode model and tracking changes in electrode contact impedances and positions. Absolute and frequency difference reconstruction algorithms are particularly sensitive to measurement errors and generally assume that measurements are made with an ideal EIT system. Real EIT systems have hardware imperfections that cause measurement errors. These errors translate into image artefacts since the reconstruction algorithm cannot properly discriminate genuine measurement variations produced by the medium under study from those caused by hardware imperfections. We therefore propose a method for eliminating these artefacts by integrating a model of the system hardware imperfections into the reconstruction algorithms. The effectiveness of the method has been evaluated by reconstructing absolute, time difference and frequency difference images with and without the hardware model from data acquired on a resistor mesh phantom. Results have shown that artefacts are smaller for images reconstructed with the model, especially for frequency difference imaging.

  14. Modeling and forecasting of KLCI weekly return using WT-ANN integrated model

    NASA Astrophysics Data System (ADS)

    Liew, Wei-Thong; Liong, Choong-Yeun; Hussain, Saiful Izzuan; Isa, Zaidi

    2013-04-01

The forecasting of weekly returns is one of the most challenging tasks in investment since the time series are volatile and non-stationary. In this study, an integrated model of wavelet transform and artificial neural network, WT-ANN, is studied for modeling and forecasting of the KLCI weekly return. First, the WT is applied to decompose the weekly return time series in order to eliminate noise. Then, a mathematical model of the time series is constructed using the ANN. The performance of the suggested model is evaluated by root mean squared error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE). The results show that the WT-ANN model can be considered a feasible and powerful model for time series modeling and prediction.
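The three evaluation metrics named above have standard definitions; a minimal sketch (the numbers here are made up for illustration, not the KLCI data):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def mae(y_true, y_pred):
    """Mean absolute error."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent (undefined where y_true == 0)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0)

y = [1.0, 2.0, 4.0]
yhat = [1.1, 1.9, 4.4]
print(rmse(y, yhat), mae(y, yhat), mape(y, yhat))
```

MAPE is scale-free, which is why forecasting studies such as this one report it alongside the scale-dependent RMSE and MAE.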

  15. [Application of wavelet neural networks model to forecast incidence of syphilis].

    PubMed

    Zhou, Xian-Feng; Feng, Zi-Jian; Yang, Wei-Zhong; Li, Xiao-Song

    2011-07-01

To apply a Wavelet Neural Network (WNN) model to forecast the incidence of syphilis. A Back Propagation Neural Network (BPNN) and a WNN were developed based on the monthly incidence of syphilis in Sichuan province from 2004 to 2008. The accuracy of the forecasts was compared between the two models. In the training approximation, the mean absolute error (MAE), root mean square error (RMSE) and mean absolute percentage error (MAPE) were 0.0719, 0.0862 and 11.52%, respectively, for the WNN, and 0.0892, 0.1183 and 14.87%, respectively, for the BPNN. The three indexes for generalization of the models were 0.0497, 0.0513 and 4.60% for the WNN, and 0.0816, 0.1119 and 7.25% for the BPNN. The WNN is a better model for short-term forecasting of syphilis incidence.

  16. Feasibility study on the least square method for fitting non-Gaussian noise data

    NASA Astrophysics Data System (ADS)

    Xu, Wei; Chen, Wen; Liang, Yingjie

    2018-02-01

This study investigates the feasibility of the least squares method in fitting non-Gaussian noise data. We add different levels of two typical non-Gaussian noises, Lévy and stretched Gaussian noise, to the exact values of selected functions, including linear, polynomial and exponential equations, and the maximum absolute and mean square errors are calculated for the different cases. Lévy and stretched Gaussian distributions have many applications in fractional and fractal calculus. It is observed that the non-Gaussian noises are less accurately fitted than Gaussian noise, but the stretched Gaussian cases appear to perform better than the Lévy noise cases. It is stressed that the least squares method is inapplicable to the non-Gaussian noise cases when the noise level is larger than 5%.
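As an illustration of this kind of comparison (not the paper's exact setup), the sketch below fits a line by ordinary least squares under Gaussian noise and under Cauchy noise (the α = 1 Lévy-stable case), then reports the maximum absolute and mean square errors of the fitted line against the exact values:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y_exact = 2.0 * x + 1.0          # exact linear relation

# Gaussian vs. heavy-tailed (Cauchy, i.e. alpha = 1 Levy-stable) noise,
# both scaled to a 5% level relative to the signal range.
level = 0.05 * (y_exact.max() - y_exact.min())
y_gauss = y_exact + level * rng.standard_normal(x.size)
y_levy = y_exact + level * rng.standard_cauchy(x.size)

def fit_errors(y_noisy):
    # Ordinary least-squares line fit, then errors of the fit vs. exact values.
    a, b = np.polyfit(x, y_noisy, 1)
    resid = (a * x + b) - y_exact
    return np.max(np.abs(resid)), np.mean(resid ** 2)

print("Gaussian :", fit_errors(y_gauss))
print("Cauchy   :", fit_errors(y_levy))
```

Because the Cauchy distribution has no finite variance, a single extreme draw can dominate the squared-error objective, which is the mechanism behind the degraded fits the study reports.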

  17. A minimax technique for time-domain design of preset digital equalizers using linear programming

    NASA Technical Reports Server (NTRS)

    Vaughn, G. L.; Houts, R. C.

    1975-01-01

    A linear programming technique is presented for the design of a preset finite-impulse response (FIR) digital filter to equalize the intersymbol interference (ISI) present in a baseband channel with known impulse response. A minimax technique is used which minimizes the maximum absolute error between the actual received waveform and a specified raised-cosine waveform. Transversal and frequency-sampling FIR digital filters are compared as to the accuracy of the approximation, the resultant ISI and the transmitted energy required. The transversal designs typically have slightly better waveform accuracy for a given distortion; however, the frequency-sampling equalizer uses fewer multipliers and requires less transmitted energy. A restricted transversal design is shown to use the least number of multipliers at the cost of a significant increase in energy and loss of waveform accuracy at the receiver.
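Minimizing the maximum absolute error is a Chebyshev (minimax) fit, which becomes a linear program once a bound variable t is introduced: minimize t subject to |Ax - b| ≤ t row-wise. A generic sketch using scipy (an illustrative formulation, not the paper's equalizer design):

```python
import numpy as np
from scipy.optimize import linprog

def minimax_fit(A, b):
    """Solve min_x max_i |(A x - b)_i| as a linear program (Chebyshev fit)."""
    m, n = A.shape
    c = np.r_[np.zeros(n), 1.0]                  # objective: minimize the bound t
    A_ub = np.block([[A, -np.ones((m, 1))],
                     [-A, -np.ones((m, 1))]])    # encodes |Ax - b| <= t row-wise
    b_ub = np.r_[b, -b]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * n + [(0, None)])
    return res.x[:n], res.x[n]

# Toy example: the best constant approximating [0, 1] in the max-abs sense is 0.5.
A = np.ones((2, 1))
b = np.array([0.0, 1.0])
x, t = minimax_fit(A, b)
print(x, t)
```

In the equalizer setting, the columns of A would hold shifted copies of the channel impulse response and b the samples of the target raised-cosine waveform, so x gives the FIR tap weights.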

  18. Precise analytic approximations for the Bessel function J1 (x)

    NASA Astrophysics Data System (ADS)

    Maass, Fernando; Martin, Pablo

    2018-03-01

Precise and straightforward analytic approximations for the Bessel function J1 (x) have been found. Power series and asymptotic expansions have been used to determine the parameters of the approximation, which acts as a bridge between both expansions and is a combination of rational and trigonometric functions multiplied by fractional powers of x. Here, several improvements with respect to the so-called Multipoint Quasirational Approximation technique have been performed. Two procedures have been used to determine the parameters of the approximations. The maximum absolute errors are in both cases smaller than 0.01. The zeros of the approximation are also very precise, with an error of less than 0.04 per cent for the first one. A second approximation has also been determined using two more parameters, and in this way the accuracy has been increased to less than 0.001.
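As an illustration of how such a maximum absolute error is measured, the sketch below evaluates the standard large-x asymptotic form of J1 (not the approximants from this work) against scipy's reference implementation:

```python
import numpy as np
from scipy.special import j1

def j1_asymptotic(x):
    """Leading term of the standard large-x asymptotic expansion of J1."""
    return np.sqrt(2.0 / (np.pi * x)) * np.cos(x - 3.0 * np.pi / 4.0)

x = np.linspace(5.0, 50.0, 2000)
err = np.abs(j1_asymptotic(x) - j1(x))
print("maximum absolute error on [5, 50]:", err.max())
```

The leading asymptotic term alone degrades at small x, which is exactly the gap that bridging approximations of the kind described above are designed to close.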

  19. SU-E-T-365: Dosimetric Impact of Dental Amalgam CT Image Artifacts On IMRT and VMAT Head and Neck Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, N; Young, L; Parvathaneni, U

Purpose: The presence of high density dental amalgam in patient CT image data sets causes dose calculation errors for head and neck (HN) treatment planning. This study assesses and compares dosimetric variations in IMRT and VMAT treatment plans due to dental artifacts. Methods: Sixteen HN patients with similar treatment sites (oropharynx), tumor volume and extensive dental artifacts were divided into two groups: IMRT (n=8, 6 to 9 beams) and VMAT (n=8, 2 arcs with 352° rotation). All cases were planned with the Pinnacle 9.2 treatment planning software using the collapsed cone convolution superposition algorithm and a range of prescription doses from 60 to 72Gy. Two different treatment plans were produced, each based on one of two image sets: (a) uncorrected; (b) dental artifacts density overridden (set to 1.0 g/cm³). Differences between the two treatment plans for each of the IMRT and VMAT techniques were quantified by the following dosimetric parameters: maximum point dose, maximum spinal cord and brainstem dose, mean left and right parotid dose, and PTV coverage (V95%Rx). Average differences generated for these dosimetric parameters were compared between IMRT and VMAT plans. Results: The average absolute dose differences (plan a minus plan b) for the VMAT and IMRT techniques, respectively, caused by dental artifacts were: 2.2±3.3cGy vs. 37.6±57.5cGy (maximum point dose, P=0.15); 1.2±0.9cGy vs. 7.9±6.7cGy (maximum spinal cord dose, P=0.026); 2.2±2.4cGy vs. 12.1±13.0cGy (maximum brainstem dose, P=0.077); 0.9±1.1cGy vs. 4.1±3.5cGy (mean left parotid dose, P=0.038); 0.9±0.8cGy vs. 7.8±11.9cGy (mean right parotid dose, P=0.136); 0.021%±0.014% vs. 0.803%±1.44% (PTV coverage, P=0.17). Conclusion: For the HN plans studied, dental artifacts demonstrated a greater dose calculation error for IMRT plans compared to VMAT plans. Rotational arcs appear on average to compensate for dose calculation errors induced by dental artifacts. 
Thus, compared to VMAT, density overrides for dental artifacts are more important when planning IMRT of HN.

  20. Estimating error statistics for Chambon-la-Forêt observatory definitive data

    NASA Astrophysics Data System (ADS)

    Lesur, Vincent; Heumez, Benoît; Telali, Abdelkader; Lalanne, Xavier; Soloviev, Anatoly

    2017-08-01

We propose a new algorithm for calibrating definitive observatory data with the goal of providing users with estimates of the data error standard deviations (SDs). The algorithm has been implemented and tested using Chambon-la-Forêt observatory (CLF) data. The calibration process uses all available data. It is set as a large, weakly non-linear, inverse problem that ultimately provides estimates of baseline values in three orthogonal directions, together with their expected standard deviations. For this inverse problem, absolute data error statistics are estimated from two series of absolute measurements made within a day. Similarly, variometer data error statistics are derived by comparing variometer data time series between different pairs of instruments over a few years. The comparisons of these time series led us to use an autoregressive process of order 1 (AR1 process) as a prior for the baselines. Therefore the obtained baselines do not vary smoothly in time. They have relatively small SDs, well below 300 pT when absolute data are recorded twice a week, i.e. within the daily to weekly measures recommended by INTERMAGNET. The algorithm was tested against the process traditionally used to derive baselines at CLF observatory, suggesting that statistics are less favourable when this latter process is used. Finally, two sets of definitive data were calibrated using the new algorithm. Their comparison shows that the definitive data SDs are less than 400 pT and may be slightly overestimated by our process: an indication that more work is required to obtain proper estimates of absolute data error statistics. For magnetic field modelling, the results show that even at isolated sites like CLF observatory, there are very localised signals over a large span of temporal frequencies that can be as large as 1 nT. The SDs reported here encompass signals of a few hundred metres and less than a day wavelengths.

  1. Scenario-Based Validation of Moderate Resolution DEMs Freely Available for Complex Himalayan Terrain

    NASA Astrophysics Data System (ADS)

    Singh, Mritunjay Kumar; Gupta, R. D.; Snehmani; Bhardwaj, Anshuman; Ganju, Ashwagosha

    2016-02-01

    Accuracy of the Digital Elevation Model (DEM) affects the accuracy of various geoscience and environmental modelling results. This study evaluates accuracies of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Global DEM Version-2 (GDEM V2), the Shuttle Radar Topography Mission (SRTM) X-band DEM and the NRSC Cartosat-1 DEM V1 (CartoDEM). A high resolution (1 m) photogrammetric DEM (ADS80 DEM), having a high absolute accuracy [1.60 m linear error at 90 % confidence (LE90)], resampled at 30 m cell size was used as reference. The overall root mean square error (RMSE) in vertical accuracy was 23, 73, and 166 m and the LE90 was 36, 75, and 256 m for ASTER GDEM V2, SRTM X-band DEM and CartoDEM, respectively. A detailed error analysis was performed for individual as well as combinations of different classes of aspect, slope, land-cover and elevation zones for the study area. For the ASTER GDEM V2, forest areas with North facing slopes (0°-5°) in the 4th elevation zone (3773-4369 m) showed minimum LE90 of 0.99 m, and barren with East facing slopes (>60°) falling under the 2nd elevation zone (2581-3177 m) showed maximum LE90 of 166 m. For the SRTM DEM, pixels with South-East facing slopes of 0°-5° in the 4th elevation zone covered with forest showed least LE90 of 0.33 m and maximum LE90 of 521 m was observed in the barren area with North-East facing slope (>60°) in the 4th elevation zone. In case of the CartoDEM, the snow pixels in the 2nd elevation zone with South-East facing slopes of 5°-15° showed least LE90 of 0.71 m and maximum LE90 of 1266 m was observed for the snow pixels in the 3rd elevation zone (3177-3773 m) within the South facing slope of 45°-60°. These results can be highly useful for the researchers using DEM products in various modelling exercises.

  2. First Impressions of CARTOSAT-1

    NASA Technical Reports Server (NTRS)

    Lutes, James

    2007-01-01

    CARTOSAT-1 RPCs need special handling. Absolute accuracy of uncontrolled scenes is poor (biases > 300 m). Noticeable cross-track scale error (+/- 3-4 m across stereo pair). Most errors are either biases or linear in line/sample (These are easier to correct with ground control).

  3. A digital, constant-frequency pulsed phase-locked-loop instrument for real-time, absolute ultrasonic phase measurements

    NASA Astrophysics Data System (ADS)

    Haldren, H. A.; Perey, D. F.; Yost, W. T.; Cramer, K. E.; Gupta, M. C.

    2018-05-01

A digitally controlled instrument for conducting single-frequency and swept-frequency ultrasonic phase measurements has been developed based on a constant-frequency pulsed phase-locked-loop (CFPPLL) design. This instrument uses a pair of direct digital synthesizers to generate an ultrasonically transceived tone-burst and an internal reference wave for phase comparison. Real-time, constant-frequency phase tracking in an interrogated specimen is possible with a resolution of 0.000 38 rad (0.022°), and swept-frequency phase measurements can be obtained. Using phase measurements, an absolute thickness in borosilicate glass is presented to show the instrument's efficacy, and these results are compared to conventional ultrasonic pulse-echo time-of-flight (ToF) measurements. The newly developed instrument predicted the thickness with a mean error of -0.04 μm and a standard deviation of error of 1.35 μm. Additionally, the CFPPLL instrument shows a lower measured phase error in the absence of changing temperature and couplant thickness than high-resolution cross-correlation ToF measurements at a similar signal-to-noise ratio. By showing higher accuracy and precision than conventional pulse-echo ToF measurements and lower phase errors than cross-correlation ToF measurements, the new digitally controlled CFPPLL instrument provides high-resolution absolute ultrasonic velocity or path-length measurements in solids or liquids, as well as tracking of material property changes with high sensitivity. The ability to obtain absolute phase measurements allows for many new applications beyond those possible with previous ultrasonic pulsed phase-locked-loop instruments. In addition to improved resolution, swept-frequency phase measurements add useful capability in measuring properties of layered structures, such as bonded joints, or materials which exhibit non-linear frequency-dependent behavior, such as dispersive media.

  4. A nonlinear model of gold production in Malaysia

    NASA Astrophysics Data System (ADS)

    Ramli, Norashikin; Muda, Nora; Umor, Mohd Rozi

    2014-06-01

Malaysia is a country which is rich in natural resources, and one of them is gold. Gold has become an important national commodity. This study is conducted to determine a model that fits well with the gold production in Malaysia over the years 1995-2010. Five nonlinear models are presented in this study: the Logistic, Gompertz, Richards, Weibull and Chapman-Richards models. These models are used to fit the cumulative gold production in Malaysia. The best model is then selected based on model performance. The performance of the fitted models is measured by sum of squared errors, root mean squared error, coefficient of determination, mean relative error, mean absolute error and mean absolute percentage error. This study has found that the Weibull model significantly outperforms the other models. To confirm that Weibull is the best model, the latest data are fitted to the model. Once again, the Weibull model gives the lowest readings on all types of measurement error. We can conclude that future gold production in Malaysia can be predicted by the Weibull model, and this could be an important finding for Malaysia in planning its economic activities.
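A minimal sketch of fitting a Weibull-type growth curve to cumulative production data; the parameterization and all numbers below are illustrative assumptions, not the paper's data:

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_growth(t, a, b, c):
    """One common Weibull-type growth curve: a * (1 - exp(-(t / b) ** c))."""
    return a * (1.0 - np.exp(-(t / b) ** c))

# Hypothetical cumulative-production series (years 1995-2010 indexed 1..16).
t = np.arange(1, 17, dtype=float)
y = weibull_growth(t, 100.0, 8.0, 2.0)     # synthetic "true" curve
y_obs = y + np.array([0.5, -0.3, 0.2, -0.4, 0.1, 0.3, -0.2, 0.4,
                      -0.1, 0.2, -0.5, 0.3, 0.1, -0.2, 0.4, -0.3])

popt, _ = curve_fit(weibull_growth, t, y_obs, p0=[90.0, 5.0, 1.5],
                    bounds=([1.0, 0.1, 0.1], [1e3, 1e3, 10.0]))
fit = weibull_growth(t, *popt)
print("MAE :", np.mean(np.abs(fit - y_obs)))
print("RMSE:", np.sqrt(np.mean((fit - y_obs) ** 2)))
```

Model selection as described in the abstract then amounts to repeating this fit for each candidate curve and comparing the resulting error measures.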

  5. VizieR Online Data Catalog: Visible colors of Centaurs and KBOs (Peixinho+, 2015)

    NASA Astrophysics Data System (ADS)

    Peixinho, N.; Delsanti, A.; Doressoundiram, A.

    2015-02-01

Table 2: significant Spearman-rho correlations detected between all colors and all orbital parameters of Centaurs, scattered disk objects, scattered or detached objects, Plutinos, other resonants, classical KBOs, binary or multiple KBOs, KBOs (without Haumea family and retrograde orbits), all objects (also without Haumea family and retrograde orbits), and KBOs except classical KBOs (also without Haumea family and retrograde orbits). The first and second columns indicate the variables, the third column the number of objects with both variables measured, the fourth column the correlation value and its 68.2% error interval, the fifth column the p-value of the correlation, the sixth column the equivalent confidence level of the p-value in Gaussian sigmas, columns seven to nine the detail of the False Discovery Correction for confidence levels of 2.5σ and 3σ (see Sect. 3.4), the tenth column the maximum detectable rho at a 2.5σ confidence level with a 10% risk of missing it, and the eleventh column the maximum detectable rho at a 3σ confidence level with a 10% risk of missing it (see Sect. 3.2). Table 5: Compilation of the R-band absolute magnitude, not corrected for the phase angle, of the spectral gradient, B-V, V-R, R-I, V-I, B-I, B-R, and corresponding orbital and orbital-related parameters of 366 Centaurs and KBOs. For each object/observation, we computed the reflectance spectrum using equation (3) from Delsanti et al. (2001A&A...380..347D), when 2 or more filters were available. The resulting spectra were manually checked, and obviously deviant data from a given filter were removed from the dataset. Color indexes are computed within one given epoch, leading to colors obtained from "simultaneous" photometry (the different bands were observed over a maximum timespan of 2 hours). 
Then the average color indexes and their one σ errors from different papers and epochs are computed for each object using equations (1) and (2) from Hainaut and Delsanti (2002A&A...389..641H), providing more accurate estimates when multiple measurements are available. The absolute magnitude in the R band (HR) is computed for each object/epoch whenever an R-band magnitude is available, using: HR=R-5log(rΔ), where R is the R-band magnitude, and r and Δ are the helio- and geocentric distances at the time of observations, respectively. Different values for a given object were also averaged using the aforementioned equations (1) and (2). We did not correct for any phase effect. (5 data files).
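The HR formula above is straightforward to apply; for example (the magnitude and distances below are made up for illustration):

```python
import math

def absolute_magnitude_R(R, r, delta):
    """H_R = R - 5 log10(r * delta), with r and delta in au (no phase correction)."""
    return R - 5.0 * math.log10(r * delta)

# A hypothetical object observed at R = 20.0 mag, 30 au from the Sun
# and 29.5 au from the Earth:
print(absolute_magnitude_R(20.0, 30.0, 29.5))
```

At r = Δ = 1 au the logarithm vanishes and HR equals the observed magnitude, which is the normalization the catalog uses.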

  6. Absolute Parameters for the F-type Eclipsing Binary BW Aquarii

    NASA Astrophysics Data System (ADS)

    Maxted, P. F. L.

    2018-05-01

    BW Aqr is a bright eclipsing binary star containing a pair of F7V stars. The absolute parameters of this binary (masses, radii, etc.) are known to good precision so they are often used to test stellar models, particularly in studies of convective overshooting. ... Maxted & Hutcheon (2018) analysed the Kepler K2 data for BW Aqr and noted that it shows variability between the eclipses that may be caused by tidally induced pulsations. ... Table 1 shows the absolute parameters for BW Aqr derived from an improved analysis of the Kepler K2 light curve plus the RV measurements from both Imbert (1979) and Lester & Gies (2018). ... The values in Table 1 with their robust error estimates from the standard deviation of the mean are consistent with the values and errors from Maxted & Hutcheon (2018) based on the PPD calculated using emcee for a fit to the entire K2 light curve.

  7. Absolute measurement of the extreme UV solar flux

    NASA Technical Reports Server (NTRS)

    Carlson, R. W.; Ogawa, H. S.; Judge, D. L.; Phillips, E.

    1984-01-01

A windowless rare-gas ionization chamber has been developed to measure the absolute value of the solar extreme UV flux in the 50-575-A region. Successful results were obtained on a solar-pointing sounding rocket. The ionization chamber, operated in total absorption, is an inherently stable absolute detector of ionizing UV radiation and was designed to be independent of effects from secondary ionization and gas effusion. The net error of the measurement is ±7.3 percent, which is primarily due to residual outgassing in the instrument; other errors, such as multiple ionization, photoelectron collection, and extrapolation to zero atmospheric optical depth, are small in comparison. For the day of the flight, Aug. 10, 1982, the solar irradiance (50-575 A), normalized to unit solar distance, was found to be (5.71 ± 0.42) × 10^10 photons per sq cm per sec.

  8. Altitude registration of limb-scattered radiation

    NASA Astrophysics Data System (ADS)

    Moy, Leslie; Bhartia, Pawan K.; Jaross, Glen; Loughman, Robert; Kramarova, Natalya; Chen, Zhong; Taha, Ghassan; Chen, Grace; Xu, Philippe

    2017-01-01

    One of the largest constraints to the retrieval of accurate ozone profiles from UV backscatter limb sounding sensors is altitude registration. Two methods, the Rayleigh scattering attitude sensing (RSAS) and absolute radiance residual method (ARRM), are able to determine altitude registration to the accuracy necessary for long-term ozone monitoring. The methods compare model calculations of radiances to measured radiances and are independent of onboard tracking devices. RSAS determines absolute altitude errors, but, because the method is susceptible to aerosol interference, it is limited to latitudes and time periods with minimal aerosol contamination. ARRM, a new technique introduced in this paper, can be applied across all seasons and altitudes. However, it is only appropriate for relative altitude error estimates. The application of RSAS to Limb Profiler (LP) measurements from the Ozone Mapping and Profiler Suite (OMPS) on board the Suomi NPP (SNPP) satellite indicates tangent height (TH) errors greater than 1 km with an absolute accuracy of ±200 m. Results using ARRM indicate a ˜ 300 to 400 m intra-orbital TH change varying seasonally ±100 m, likely due to either errors in the spacecraft pointing or in the geopotential height (GPH) data that we use in our analysis. ARRM shows a change of ˜ 200 m over ˜ 5 years with a relative accuracy (a long-term accuracy) of ±100 m outside the polar regions.

  9. 44 CFR 67.6 - Basis of appeal.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... absolute (except where mathematical or measurement error or changed physical conditions can be demonstrated... a mathematical or measurement error or changed physical conditions, then the specific source of the... registered professional engineer or licensed land surveyor, of the new data necessary for FEMA to conduct a...

  10. [Comparison study on sampling methods of Oncomelania hupensis snail survey in marshland schistosomiasis epidemic areas in China].

    PubMed

    An, Zhao; Wen-Xin, Zhang; Zhong, Yao; Yu-Kuan, Ma; Qing, Liu; Hou-Lang, Duan; Yi-di, Shang

    2016-06-29

To optimize and simplify the survey method for Oncomelania hupensis snails in marshland regions endemic for schistosomiasis, and to increase the precision, efficiency and economy of the snail survey. A 50 m × 50 m quadrat experimental field was selected in the Chayegang marshland near Henghu farm in the Poyang Lake region, and a whole-coverage method was adopted to survey the snails. Simple random sampling, systematic sampling and stratified random sampling methods were applied to calculate the minimum sample size, relative sampling error and absolute sampling error. The minimum sample sizes for the simple random sampling, systematic sampling and stratified random sampling methods were 300, 300 and 225, respectively. The relative sampling errors of the three methods were all less than 15%. The absolute sampling errors were 0.221 7, 0.302 4 and 0.047 8, respectively. Spatial stratified sampling with altitude as the stratum variable is an efficient approach, with lower cost and higher precision, for the snail survey.

  11. Discrete distributed strain sensing of intelligent structures

    NASA Technical Reports Server (NTRS)

    Anderson, Mark S.; Crawley, Edward F.

    1992-01-01

    Techniques are developed for the design of discrete highly distributed sensor systems for use in intelligent structures. First the functional requirements for such a system are presented. Discrete spatially averaging strain sensors are then identified as satisfying the functional requirements. A variety of spatial weightings for spatially averaging sensors are examined, and their wave number characteristics are determined. Preferable spatial weightings are identified. Several numerical integration rules used to integrate such sensors in order to determine the global deflection of the structure are discussed. A numerical simulation is conducted using point and rectangular sensors mounted on a cantilevered beam under static loading. Gage factor and sensor position uncertainties are incorporated to assess the absolute error and standard deviation of the error in the estimated tip displacement found by numerically integrating the sensor outputs. An experiment is carried out using a statically loaded cantilevered beam with five point sensors. It is found that in most cases the actual experimental error is within one standard deviation of the absolute error as found in the numerical simulation.

  12. Spacecraft methods and structures with enhanced attitude control that facilitates gyroscope substitutions

    NASA Technical Reports Server (NTRS)

    Li, Rongsheng (Inventor); Kurland, Jeffrey A. (Inventor); Dawson, Alec M. (Inventor); Wu, Yeong-Wei A. (Inventor); Uetrecht, David S. (Inventor)

    2004-01-01

Methods and structures are provided that enhance attitude control during gyroscope substitutions by ensuring that a spacecraft's attitude control system does not drive its absolute-attitude sensors out of their capture ranges. In a method embodiment, an operational process-noise covariance Q of a Kalman filter is temporarily replaced with a substantially greater interim process-noise covariance Q. This replacement increases the weight given to the most recent attitude measurements and hastens the reduction of attitude errors and gyroscope bias errors. The error effect of the substituted gyroscopes is reduced and the absolute-attitude sensors are not driven out of their capture range. In another method embodiment, this replacement is preceded by the temporary replacement of an operational measurement-noise variance R with a substantially larger interim measurement-noise variance R to reduce transients during the gyroscope substitutions.
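The idea of temporarily inflating Q can be sketched with a scalar Kalman filter (an illustrative toy, not the patent's implementation): a step change in the measurements stands in for the bias introduced by a gyroscope swap, and a briefly inflated process noise lets the filter re-converge quickly.

```python
import numpy as np

def kalman_1d(zs, Q, R, x0=0.0, P0=1.0):
    """Scalar random-walk Kalman filter; Q may be a scalar or a per-step array."""
    x, P = x0, P0
    Q = np.broadcast_to(Q, len(zs))
    out = []
    for z, q in zip(zs, Q):
        P = P + q                  # predict (state modeled as a random walk)
        K = P / (P + R)            # Kalman gain
        x = x + K * (z - x)        # update with measurement z
        P = (1.0 - K) * P
        out.append(x)
    return np.array(out)

# Measurements jump to a new level mid-run (e.g. a sensor swap introduces a bias).
zs = np.r_[np.zeros(50), np.ones(50)]
R = 0.1
q_small = np.full(100, 1e-5)               # operational process noise
q_boost = q_small.copy()
q_boost[50:60] = 1.0                       # temporarily inflated Q after the swap

x_small = kalman_1d(zs, q_small, R)
x_boost = kalman_1d(zs, q_boost, R)
print("error after 10 post-swap steps, small Q  :", abs(x_small[59] - 1.0))
print("error after 10 post-swap steps, boosted Q:", abs(x_boost[59] - 1.0))
```

With the small operational Q the gain has shrunk by step 50 and the estimate lags the jump for many steps; inflating Q raises the gain so recent measurements dominate, which is the effect the abstract describes.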

  13. [Prediction of schistosomiasis infection rates of population based on ARIMA-NARNN model].

    PubMed

    Ke-Wei, Wang; Yu, Wu; Jin-Ping, Li; Yu-Yu, Jiang

    2016-07-12

To explore the effect of the autoregressive integrated moving average-nonlinear autoregressive neural network (ARIMA-NARNN) model on predicting schistosomiasis infection rates in a population. The ARIMA model, NARNN model and ARIMA-NARNN model were established based on monthly schistosomiasis infection rates from January 2005 to February 2015 in Jiangsu Province, China. The fitting and prediction performances of the three models were compared. Compared to the ARIMA model and NARNN model, the mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) of the ARIMA-NARNN model were the lowest, with values of 0.011 1, 0.090 0 and 0.282 4, respectively. The ARIMA-NARNN model could effectively fit and predict schistosomiasis infection rates in a population, which may have great application value for the prevention and control of schistosomiasis.

  14. Radiometric properties of the NS001 Thematic Mapper Simulator aircraft multispectral scanner

    NASA Technical Reports Server (NTRS)

    Markham, Brian L.; Ahmad, Suraiya P.

    1990-01-01

    Laboratory tests of the NS001 TM are described, emphasizing the absolute calibration performed to determine the radiometry of the simulator's reflective channels. In-flight calibration of the data is accomplished with the NS001 internal integrating-sphere source because instabilities in the source can limit the absolute calibration. The data from 1987-89 indicate uncertainties of up to 25 percent, with an apparent average uncertainty of about 15 percent. Also identified are dark current drift and sensitivity changes along the scan line, random noise, and nonlinearity, which contribute errors of 1-2 percent. Uncertainties similar to hysteresis are also noted, especially in the 2.08-2.35-micron range, which can reduce sensitivity and cause errors. The NS001 TM Simulator demonstrates a polarization sensitivity that can generate errors of up to about 10 percent, depending on the wavelength.

  15. Analysis of force profile during a maximum voluntary isometric contraction task.

    PubMed

    Househam, Elizabeth; McAuley, John; Charles, Thompson; Lightfoot, Timothy; Swash, Michael

    2004-03-01

    This study analyses maximum voluntary isometric contraction (MVIC) and its measurement by recording the force profile during maximal-effort, 7-s hand-grip contractions. Six healthy subjects each performed three trials repeated at short intervals to study variation from fatigue. These three trials were performed during three separate sessions at daily intervals to look at random variation. A pattern of force development during a trial was identified. An initiation phase, with or without an initiation peak, was followed by a maintenance phase, sometimes with secondary pulses and an underlying decline in force. Of these three MVIC parameters, maximum force during the maintenance phase showed less random variability compared to intertrial fatigue variability than did maximum force during the initiation phase or absolute maximum force. Analysis of MVIC as a task, rather than as a single maximal value, reveals deeper levels of motor control in its generation. Thus, force parameters other than the absolute maximum force may be better suited to quantification of muscle performance in health and disease.

  16. Quantitative endoscopy: initial accuracy measurements.

    PubMed

    Truitt, T O; Adelman, R A; Kelly, D H; Willging, J P

    2000-02-01

    The geometric optics of an endoscope can be used to determine the absolute size of an object in an endoscopic field without knowing the actual distance from the object. This study explores the accuracy of a technique that estimates absolute object size from endoscopic images. Quantitative endoscopy involves calibrating a rigid endoscope to produce size estimates from 2 images taken with a known traveled distance between the images. The heights of 12 samples, ranging in size from 0.78 to 11.80 mm, were estimated with this calibrated endoscope. Backup distances of 5 mm and 10 mm were used for comparison. The mean percent error for all estimated measurements when compared with the actual object sizes was 1.12%. The mean errors for 5-mm and 10-mm backup distances were 0.76% and 1.65%, respectively. The mean errors for objects <2 mm and > or =2 mm were 0.94% and 1.18%, respectively. Quantitative endoscopy estimates endoscopic image size to within 5% of the actual object size. This method remains promising for quantitatively evaluating object size from endoscopic images. It does not require knowledge of the absolute distance of the endoscope from the object; rather, it requires only the distance traveled by the endoscope between images.
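Under a simple pinhole-camera assumption (a sketch of the idea, not the paper's actual calibration procedure), an object of height h at distance D images at size y = k·h/D for some calibration constant k; two images separated by a known backup distance d then determine h without knowing D:

```python
def object_height(y1, y2, d, k):
    """y1: image size at the near position; y2: after backing up by d.
    From y = k*h/D at two distances: h = d / (k * (1/y2 - 1/y1))."""
    return d / (k * (1.0 / y2 - 1.0 / y1))

# Synthetic check: a 5 mm object, calibration constant k = 2,
# imaged at distances of 20 mm and 30 mm (backup distance d = 10 mm).
h, k, D1, d = 5.0, 2.0, 20.0, 10.0
y1 = k * h / D1
y2 = k * h / (D1 + d)
print(round(object_height(y1, y2, d, k), 6))   # 5.0: h recovered without D
```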

  17. Estimation of Surface Air Temperature Over Central and Eastern Eurasia from MODIS Land Surface Temperature

    NASA Technical Reports Server (NTRS)

    Shen, Suhung; Leptoukh, Gregory G.

    2011-01-01

    Surface air temperature (T(sub a)) is a critical variable in the energy and water cycle of the Earth-atmosphere system and is a key input element for hydrology and land surface models. This is a preliminary study to evaluate estimation of T(sub a) from satellite remotely sensed land surface temperature (T(sub s)) by using MODIS-Terra data over two Eurasia regions: northern China and fUSSR. High correlations are observed in both regions between station-measured T(sub a) and MODIS T(sub s). The relationships between the maximum T(sub a) and daytime T(sub s) depend significantly on land cover types, but the minimum T(sub a) and nighttime T(sub s) have little dependence on the land cover types. The largest difference between maximum T(sub a) and daytime T(sub s) appears over the barren and sparsely vegetated area during the summer time. Using a linear regression method, the daily maximum T(sub a) was estimated from 1 km resolution MODIS T(sub s) under clear-sky conditions with coefficients calculated based on land cover types, while the minimum T(sub a) was estimated without considering land cover types. The uncertainty, mean absolute error (MAE), of the estimated maximum T(sub a) varies from 2.4 C over closed shrublands to 3.2 C over grasslands, and the MAE of the estimated minimum T(sub a) is about 3.0 C.
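The regression step can be sketched with ordinary least squares; the station and MODIS values below are synthetic, and the per-land-cover fitting described above would simply repeat this fit for each cover type:

```python
def fit_line(xs, ys):
    """Ordinary least squares fit; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return my - slope * mx, slope

ts = [290.0, 295.0, 300.0, 305.0]   # MODIS daytime Ts (K), synthetic
ta = [288.0, 292.0, 296.0, 300.0]   # station daily max Ta (K), synthetic
a, b = fit_line(ts, ta)
ta_hat = [a + b * x for x in ts]
mae = sum(abs(p - o) for p, o in zip(ta_hat, ta)) / len(ta)
print(round(b, 6), round(mae, 6))   # 0.8 0.0 for this exactly linear toy data
```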

  18. Influence of non-level walking on pedometer accuracy.

    PubMed

    Leicht, Anthony S; Crowther, Robert G

    2009-05-01

    The YAMAX Digiwalker pedometer has been previously confirmed as a valid and reliable monitor during level walking; however, little is known about its accuracy during non-level walking activities or between genders. Subsequently, this study examined the influence of non-level walking and gender on pedometer accuracy. Forty-six healthy adults completed 3-min bouts of treadmill walking at their normal walking pace during 11 inclines (0-10%) while another 123 healthy adults completed walking up and down 47 stairs. During walking, participants wore a YAMAX Digiwalker SW-700 pedometer with the number of steps taken and registered by the pedometer recorded. Pedometer difference (steps registered - steps taken), net error (% of steps taken), absolute error (absolute % of steps taken) and gender were examined by repeated measures two-way ANOVA and Tukey's post hoc tests. During incline walking, pedometer accuracy indices were similar between inclines and genders, except for a significantly greater step difference (-7+/-5 vs. 1+/-4 steps) and net error (-2.4+/-1.8% vs. 0.4+/-1.2%) at the 9% incline compared with the 2% incline. Step difference and net error were significantly greater during stair descent compared to stair ascent while absolute error was significantly greater during stair ascent compared to stair descent. The current study demonstrated that the YAMAX Digiwalker SW-700 pedometer exhibited good accuracy during incline walking up to 10% while it overestimated steps taken during stair ascent/descent with greater overestimation during stair descent. Stair walking activity should be documented in field studies as the YAMAX Digiwalker SW-700 pedometer overestimates this activity type.
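The three accuracy indices defined in the abstract are straightforward to compute; a sketch with invented step counts:

```python
def pedometer_indices(steps_taken, steps_registered):
    """Step difference, net error (signed, % of steps taken) and
    absolute error (unsigned, % of steps taken)."""
    diff = steps_registered - steps_taken
    net_error = 100.0 * diff / steps_taken
    abs_error = 100.0 * abs(diff) / steps_taken
    return diff, net_error, abs_error

# 300 steps taken, 293 registered (an undercount):
diff, net, absolute = pedometer_indices(300, 293)
print(diff, round(net, 2), round(absolute, 2))   # -7 -2.33 2.33
```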

  19. Satellite SAR geocoding with refined RPC model

    NASA Astrophysics Data System (ADS)

    Zhang, Lu; Balz, Timo; Liao, Mingsheng

    2012-04-01

    Recent studies have proved that the Rational Polynomial Camera (RPC) model is able to act as a reliable replacement of the rigorous Range-Doppler (RD) model for the geometric processing of satellite SAR datasets. But its capability in absolute geolocation of SAR images has not been evaluated quantitatively. Therefore, in this article the problems of error analysis and refinement of the SAR RPC model are primarily investigated to improve the absolute accuracy of SAR geolocation. Range propagation delay and azimuth timing error are identified as two major error sources for SAR geolocation. An approach based on SAR image simulation and real-to-simulated image matching is developed to estimate and correct these two errors. Afterwards a refined RPC model can be built from the error-corrected RD model and then used in satellite SAR geocoding. Three experiments with different settings are designed and conducted to comprehensively evaluate the accuracies of SAR geolocation with both ordinary and refined RPC models. All the experimental results demonstrate that with RPC model refinement the absolute location accuracies of geocoded SAR images can be improved significantly, particularly in the Easting direction. In another experiment the computation efficiencies of SAR geocoding with both RD and RPC models are compared quantitatively. The results show that by using the RPC model such efficiency can be improved by a factor of at least 16. In addition the problem of DEM data selection for SAR image simulation in RPC model refinement is studied by a comparative experiment. The results reveal that the best choice should be using the proper DEM datasets of spatial resolution comparable to that of the SAR images.

  20. Patient-specific cardiac phantom for clinical training and preprocedure surgical planning.

    PubMed

    Laing, Justin; Moore, John; Vassallo, Reid; Bainbridge, Daniel; Drangova, Maria; Peters, Terry

    2018-04-01

    Minimally invasive mitral valve repair procedures including MitraClip ® are becoming increasingly common. For cases of complex or diseased anatomy, clinicians may benefit from using a patient-specific cardiac phantom for training, surgical planning, and the validation of devices or techniques. An imaging compatible cardiac phantom was developed to simulate a MitraClip ® procedure. The phantom contained a patient-specific cardiac model manufactured using tissue mimicking materials. To evaluate accuracy, the patient-specific model was imaged using computed tomography (CT), segmented, and the resulting point cloud dataset was compared using absolute distance to the original patient data. Comparing the molded model point cloud to the original dataset yielded a maximum Euclidean distance error of 7.7 mm, an average error of 0.98 mm, and a standard deviation of 0.91 mm. The phantom was validated using a MitraClip ® device to ensure anatomical features and tools are identifiable under image guidance. Patient-specific cardiac phantoms may allow for surgical complications to be accounted for during preoperative planning. The information gained by clinicians involved in planning and performing the procedure should lead to shorter procedural times and better outcomes for patients.
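The accuracy comparison described (absolute distances between a segmented model point cloud and the original patient data) reduces to nearest-neighbour distances and their summary statistics. A brute-force sketch on toy points; the real evaluation would of course use the CT-derived clouds:

```python
import math

def nn_distances(cloud_a, cloud_b):
    """Absolute (Euclidean) distance from each point of cloud_a to its
    nearest neighbour in cloud_b, brute force."""
    return [min(math.dist(p, q) for q in cloud_b) for p in cloud_a]

model    = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 2.0, 0.0)]   # toy points
original = [(0.0, 0.0, 0.1), (1.0, 0.0, 0.0), (0.0, 2.5, 0.0)]
d = nn_distances(model, original)
mean = sum(d) / len(d)
std = math.sqrt(sum((x - mean) ** 2 for x in d) / len(d))
print(round(max(d), 3), round(mean, 3), round(std, 3))   # 0.5 0.2 0.216
```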

  1. Lyman alpha SMM/UVSP absolute calibration and geocoronal correction

    NASA Technical Reports Server (NTRS)

    Fontenla, Juan M.; Reichmann, Edwin J.

    1987-01-01

    Lyman alpha observations from the Ultraviolet Spectrometer Polarimeter (UVSP) instrument of the Solar Maximum Mission (SMM) spacecraft were analyzed and provide instrumental calibration details. Specific values of the instrument quantum efficiency, Lyman alpha absolute intensity, and correction for geocoronal absorption are presented.

  2. Positioning accuracy during VMAT of gynecologic malignancies and the resulting dosimetric impact by a 6-degree-of-freedom couch in combination with daily kilovoltage cone beam computed tomography.

    PubMed

    Yao, Lihong; Zhu, Lihong; Wang, Junjie; Liu, Lu; Zhou, Shun; Jiang, ShuKun; Cao, Qianqian; Qu, Ang; Tian, Suqing

    2015-04-26

    To improve the delivery of radiotherapy in gynecologic malignancies and to minimize the irradiation of unaffected tissues by using daily kilovoltage cone beam computed tomography (kV-CBCT) to reduce setup errors. Thirteen patients with gynecologic cancers were treated with postoperative volumetric-modulated arc therapy (VMAT). All patients had a planning CT scan and daily CBCT during treatment. Automatic bone anatomy matching was used to determine initial inter-fraction positioning error. Positional correction on a six-degrees-of-freedom (6DoF) couch was followed by a second scan to calculate the residual inter-fraction error, and a post-treatment scan assessed intra-fraction motion. The margins of the planning target volume (MPTV) were calculated from these setup variations and the effect of margin size on normal tissue sparing was evaluated. In total, 573 CBCT scans were acquired. Mean absolute pre-/post-correction errors were obtained in all six planes. With 6DoF couch correction, the MPTV accounting for intra-fraction errors was reduced by 3.8-5.6 mm. This permitted a reduction in the maximum dose to the small intestine, bladder and femoral head (P=0.001, 0.035 and 0.032, respectively), the average dose to the rectum, small intestine, bladder and pelvic marrow (P=0.003, 0.000, 0.001 and 0.000, respectively) and markedly reduced irradiated normal tissue volumes. A 6DoF couch in combination with daily kV-CBCT can considerably improve positioning accuracy during VMAT treatment in gynecologic malignancies, reducing the MPTV. The reduced margin size permits improved normal tissue sparing and a smaller total irradiated volume.

  3. Detection of Unexpected High Correlations between Balance Calibration Loads and Load Residuals

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; Volden, T.

    2014-01-01

    An algorithm was developed for the assessment of strain-gage balance calibration data that makes it possible to systematically investigate potential sources of unexpected high correlations between calibration load residuals and applied calibration loads. The algorithm investigates correlations on a load series by load series basis. The linear correlation coefficient is used to quantify the correlations. It is computed for all possible pairs of calibration load residuals and applied calibration loads that can be constructed for the given balance calibration data set. An unexpected high correlation between a load residual and a load is detected if three conditions are met: (i) the absolute value of the correlation coefficient of a residual/load pair exceeds 0.95; (ii) the maximum of the absolute values of the residuals of a load series exceeds 0.25 % of the load capacity; (iii) the load component of the load series is intentionally applied. Data from a baseline calibration of a six-component force balance is used to illustrate the application of the detection algorithm to a real-world data set. This analysis also showed that the detection algorithm can identify load alignment errors as long as repeat load series are contained in the balance calibration data set that do not suffer from load alignment problems.
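The three detection conditions map directly to code. The 0.95 and 0.25 % thresholds come straight from the abstract; the Pearson correlation, synthetic data, and function names are illustrative:

```python
import math

def pearson(xs, ys):
    """Ordinary linear (Pearson) correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def unexpected_correlation(residuals, loads, capacity, intentional):
    # (i) |correlation| of the residual/load pair exceeds 0.95;
    # (ii) max |residual| of the load series exceeds 0.25 % of capacity;
    # (iii) the load component is intentionally applied.
    c1 = abs(pearson(residuals, loads)) > 0.95
    c2 = max(abs(r) for r in residuals) > 0.0025 * capacity
    return c1 and c2 and intentional

loads = [100.0, 200.0, 300.0, 400.0]
residuals = [0.5 + 0.01 * l for l in loads]   # strongly load-dependent
print(unexpected_correlation(residuals, loads, capacity=1000.0,
                             intentional=True))    # True
```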

  4. Using A New Model for Main Sequence Turnoff Absolute Magnitudes to Measure Stellar Streams in the Milky Way Halo

    NASA Astrophysics Data System (ADS)

    Weiss, Jake; Newberg, Heidi Jo; Arsenault, Matthew; Bechtel, Torrin; Desell, Travis; Newby, Matthew; Thompson, Jeffery M.

    2016-01-01

    Statistical photometric parallax is a method for using the distribution of absolute magnitudes of stellar tracers to statistically recover the underlying density distribution of these tracers. In previous work, statistical photometric parallax was used to trace the Sagittarius Dwarf tidal stream, the so-called bifurcated piece of the Sagittaritus stream, and the Virgo Overdensity through the Milky Way. We use an improved knowledge of this distribution in a new algorithm that accounts for the changes in the stellar population of color-selected stars near the photometric limit of the Sloan Digital Sky Survey (SDSS). Although we select bluer main sequence turnoff stars (MSTO) as tracers, large color errors near the survey limit cause many stars to be scattered out of our selection box and many fainter, redder stars to be scattered into our selection box. We show that we are able to recover parameters for analogues of these streams in simulated data using a maximum likelihood optimization on MilkyWay@home. We also present the preliminary results of fitting the density distribution of major Milky Way tidal streams in SDSS data. This research is supported by generous gifts from the Marvin Clan, Babette Josephs, Manit Limlamai, and the MilkyWay@home volunteers.

  5. 3D prostate MR-TRUS non-rigid registration using dual optimization with volume-preserving constraint

    NASA Astrophysics Data System (ADS)

    Qiu, Wu; Yuan, Jing; Fenster, Aaron

    2016-03-01

    We introduce an efficient and novel convex optimization-based approach to the challenging non-rigid registration of 3D prostate magnetic resonance (MR) and transrectal ultrasound (TRUS) images, which incorporates a new volume preserving constraint to essentially improve the accuracy of targeting suspicious regions during the 3D TRUS guided prostate biopsy. Especially, we propose a fast sequential convex optimization scheme to efficiently minimize the employed highly nonlinear image fidelity function using the robust multi-channel modality independent neighborhood descriptor (MIND) across the two modalities of MR and TRUS. The registration accuracy was evaluated using 10 patient images by calculating the target registration error (TRE) using manually identified corresponding intrinsic fiducials in the whole prostate gland. We also compared the MR and TRUS manually segmented prostate surfaces in the registered images in terms of the Dice similarity coefficient (DSC), mean absolute surface distance (MAD), and maximum absolute surface distance (MAXD). Experimental results showed that the proposed method with the introduced volume-preserving prior significantly improves the registration accuracy compared to the method without the volume-preserving constraint, yielding an overall mean TRE of 2.0+/-0.7 mm, an average DSC of 86.5+/-3.5%, a MAD of 1.4+/-0.6 mm and a MAXD of 6.5+/-3.5 mm.
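Of the surface metrics reported, the Dice similarity coefficient is the simplest to sketch; here it is computed on toy voxel label sets rather than the paper's segmented surfaces:

```python
def dice(a, b):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    a, b = set(a), set(b)
    return 2.0 * len(a & b) / (len(a) + len(b))

mr_voxels   = {(0, 0), (0, 1), (1, 0), (1, 1)}   # toy MR segmentation
trus_voxels = {(0, 1), (1, 0), (1, 1), (2, 1)}   # toy TRUS segmentation
print(dice(mr_voxels, trus_voxels))   # 2*3/(4+4) = 0.75
```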

  6. Tropospheric profiles of wet refractivity and humidity from the combination of remote sensing data sets and measurements on the ground

    NASA Astrophysics Data System (ADS)

    Hurter, F.; Maier, O.

    2013-11-01

    We reconstruct atmospheric wet refractivity profiles for the western part of Switzerland with a least-squares collocation approach from data sets of (a) zenith path delays that are a byproduct of the GPS (global positioning system) processing, (b) ground meteorological measurements, (c) wet refractivity profiles from radio occultations whose tangent points lie within the study area, and (d) radiosonde measurements. Wet refractivity is a parameter partly describing the propagation of electromagnetic waves and depends on the atmospheric parameters temperature and water vapour pressure. In addition, we have measurements of a lower V-band microwave radiometer at Payerne. It delivers temperature profiles at high temporal resolution, especially in the range from ground to 3000 m a.g.l., though vertical information content decreases with height. The temperature profiles together with the collocated wet refractivity profiles provide near-continuous dew point temperature or relative humidity profiles at Payerne for the study period from 2009 to 2011. In the validation of the humidity profiles, we adopt a two-step procedure. We first investigate the reconstruction quality of the wet refractivity profiles at the location of Payerne by comparing them to wet refractivity profiles computed from radiosonde profiles available for that location. We also assess the individual contributions of the data sets to the reconstruction quality and demonstrate a clear benefit from the data combination. Secondly, the accuracy of the conversion from wet refractivity to dew point temperature and relative humidity profiles with the radiometer temperature profiles is examined, comparing them also to radiosonde profiles. 
For the least-squares collocation solution combining GPS and ground meteorological measurements, we achieve the following error figures with respect to the radiosonde reference: the maximum median offset of the relative refractivity error is -16%, and the quartiles are 5% to 40% for the lower troposphere. We further added 189 radio occultations that met our requirements. They mostly improved the accuracy in the upper troposphere: maximum median offsets decreased from 120% relative error to 44% at 8 km height. Dew point temperature profiles after the conversion with radiometer temperatures compare to radiosonde profiles as follows: absolute dew point temperature errors in the lower troposphere have a maximum median offset of -2 K and maximum quartiles of 4.5 K. For relative humidity, we get a maximum mean offset of 7.3%, with standard deviations of 12-20%. The methodology presented allows us to reconstruct humidity profiles at any location where temperature profiles, but no atmospheric humidity measurements other than GPS, are available. Additional data sets of wet refractivity are shown to be easily integrated into the framework and strongly aid the reconstruction. Since the data sets used are all operational and available in near-real time, we envisage the methodology of this paper as a tool for nowcasting of clouds and rain and for understanding processes in the boundary layer and at its top.
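The conversion step from wet refractivity and temperature to humidity can be sketched with the standard Smith-Weintraub wet term and a Magnus-type saturation formula; the constants below are common textbook values and may differ from those actually used in the paper:

```python
import math

def vapour_pressure_from_nwet(n_wet, temp_k):
    """Smith-Weintraub wet term: N_wet ~ 3.73e5 * e / T^2, e in hPa."""
    return n_wet * temp_k ** 2 / 3.73e5

def dew_point_c(e_hpa):
    """Invert the Magnus formula e = 6.112 * exp(17.62*Td/(243.12+Td))."""
    g = math.log(e_hpa / 6.112)
    return 243.12 * g / (17.62 - g)

# Round trip with a plausible boundary-layer value: 15 deg C, e = 12 hPa.
t_k, e = 288.15, 12.0
n_wet = 3.73e5 * e / t_k ** 2
print(round(vapour_pressure_from_nwet(n_wet, t_k), 6))   # 12.0
print(round(dew_point_c(e), 2))                          # dew point, deg C
```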

  7. Influence of Distributed Dead Loads on Vehicle Position for Maximum Moment in Simply Supported Bridges

    NASA Astrophysics Data System (ADS)

    Gupta, Tanmay; Kumar, Manoj

    2017-06-01

    Usually, the design moments in simply supported bridges are obtained as the sum of the moments due to dead loads and live load, where the live load moments are calculated using the rolling load concept, neglecting the effect of dead loads. For simply supported bridges, a uniformly distributed dead load produces the maximum moment at mid-span, while the absolute maximum bending moment due to multi-axle vehicles occurs under a wheel which usually does not lie at mid-span. Since the location of the absolute maximum bending moment due to a multi-axle vehicle does not coincide with the location of the maximum moment due to dead loads occurring at mid-span, the design moment may not be obtained by simply superimposing the effects of dead load and live load. Moreover, in the case of Class-A and Class-70R wheeled vehicular live loads, which consist of several axles, the number of axles to be considered over a bridge of given span and their location is tedious to find and needs several trials. The aim of the present study is to find the number of wheels for Class-A and Class-70R wheeled vehicles and their precise location to produce the absolute maximum moment in the bridge, considering the effect of dead loads and the impact factor. Finally, to assist designers, the design moments due to Class-70R wheeled and Class-A loading have been presented in tabular form for spans from 10 to 50 m.
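The rolling-load observation (the absolute maximum live-load moment occurs under a wheel, generally away from mid-span) can be checked numerically by sweeping a vehicle across the span. The two-axle load below is illustrative, not an IRC Class-A or Class-70R configuration:

```python
def left_reaction(span, positions, loads):
    """Left support reaction from moments about the right support."""
    return sum(p * (span - a) for p, a in zip(loads, positions)) / span

def moment_at(span, positions, loads, x):
    """Bending moment at section x for point loads on a simple span."""
    m = left_reaction(span, positions, loads) * x
    for p, a in zip(loads, positions):
        if a < x:
            m -= p * (x - a)
    return m

span, axle, spacing = 20.0, 100.0, 4.0    # m, kN, m (illustrative values)
best = 0.0
for i in range(2001):                     # sweep the front axle across
    front = span * i / 2000.0
    on_span = [a for a in (front, front - spacing) if 0.0 <= a <= span]
    loads = [axle] * len(on_span)
    for x in on_span:                     # absolute max occurs under a wheel
        best = max(best, moment_at(span, on_span, loads, x))
print(best)   # 810.0 kN*m, versus 800.0 kN*m with the axles centred
```

For this symmetric two-axle case the classical rule places the critical wheel 1 m from mid-span, giving 810 kN·m against 800 kN·m at mid-span, which is exactly the mismatch with the dead-load maximum that the abstract discusses.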

  8. Unique strain history during ejection in canine left ventricle.

    PubMed

    Douglas, A S; Rodriguez, E K; O'Dell, W; Hunter, W C

    1991-05-01

    Understanding the relationship between structure and function in the heart requires a knowledge of the connection between the local behavior of the myocardium (e.g., shortening) and the pumping action of the left ventricle. We asked the question, how do changes in preload and afterload affect the relationship between local myocardial deformation and ventricular volume? To study this, a set of small radiopaque beads was implanted in approximately 1 cm3 of the isolated canine heart left ventricular free wall. Using biplane cineradiography, we tracked the motion of these markers through various cardiac cycles (controlling pre- and afterload), using the relative motion of six markers to quantify the local three-dimensional Lagrangian strain. Two different reference states (used to define the strains) were considered. First, we used the configuration of the heart at end diastole for that particular cardiac cycle to define the individual strains (which gave the local "shortening fraction") and the ejection fraction. Second, we used a single reference state for all cardiac cycles, i.e., the end-diastolic state at maximum volume, to define absolute strains (which gave local fractional length) and the volume fraction. The individual strain versus ejection fraction trajectories were dependent on preload and afterload. For any one heart, however, each component of absolute strain was more tightly correlated to volume fraction. Around each linear regression, the individual measurements of absolute strain scattered with standard errors that averaged less than 7% of their range. Thus the canine hearts examined had a preferred kinematic (shape) history during ejection, different from the kinematics of filling and independent of pre- or afterload and of stroke volume.

  9. Prognostic value of metabolic metrics extracted from baseline PET images in NSCLC

    PubMed Central

    Carvalho, Sara; Leijenaar, Ralph T.H.; Velazquez, Emmanuel Rios; Oberije, Cary; Parmar, Chintan; van Elmpt, Wouter; Reymen, Bart; Troost, Esther G.C.; Oellers, Michel; Dekker, Andre; Gillies, Robert; Aerts, Hugo J.W.L.; Lambin, Philippe

    2015-01-01

    Background Maximum, mean and peak SUV of the primary tumor at baseline FDG-PET scans have often been found predictive for overall survival in non-small cell lung cancer (NSCLC) patients. In this study we further investigated the prognostic power of advanced metabolic metrics derived from Intensity-Volume Histograms (IVH) extracted from PET imaging. Methods A cohort of 220 NSCLC patients (mean age, 66.6 years; 149 men, 71 women), stages I-IIIB, treated with radiotherapy with curative intent were included (NCT00522639). Each patient underwent standardized pre-treatment CT-PET imaging. Primary GTV was delineated by an experienced radiation oncologist on CT-PET images. Common PET descriptors such as maximum, mean and peak SUV, and metabolic tumor volume (MTV) were quantified. Advanced descriptors of metabolic activity were quantified by IVH. These comprised 5 groups of features: Absolute and Relative Volume above Relative Intensity threshold (AVRI and RVRI), Absolute and Relative Volume above Absolute Intensity threshold (AVAI and RVAI), and Absolute Intensity above Relative Volume threshold (AIRV). MTV was derived from the IVH curves for volumes with SUV above 2.5, 3 and 4, and of 40% and 50% maximum SUV. Univariable analysis using Cox Proportional Hazard Regression was performed for overall survival assessment. Results Relative volume above higher SUV (80%) was an independent predictor of OS (p = 0.05). None of the possible surrogates for MTV based on volumes above SUV of 3, 40% and 50% of maximum SUV showed significant associations with OS (p (AVAI3) = 0.10, p (AVAI4) = 0.22, p (AVRI40%) = 0.15, p (AVRI50%) = 0.17). Maximum and peak SUV (r = 0.99) revealed no prognostic value for OS (p (maximum SUV) = 0.20, p (peak SUV) = 0.22). Conclusions New methods using more advanced imaging features extracted from PET were analyzed. Best prognostic value for OS of NSCLC patients was found for relative portions of the tumor above higher uptakes (80% SUV). PMID:24047338

  10. Performance of a New Model for Predicting End of Flowering Date (bbch 69) of Grapevine (Vitis Vinifera L.)

    NASA Astrophysics Data System (ADS)

    Gentilucci, Matteo

    2017-04-01

    The end of flowering date (BBCH 69) is an important phenological stage for grapevine (Vitis vinifera L.); in fact, up to this date growth is focused on the plant and gradually passes to the berries through fruit set. The aim of this study is to develop a model to predict the date of the end of flowering (BBCH 69) for some grapevine varieties. This research was carried out using three cultivars of grapevine (Maceratino, Montepulciano, Sangiovese) in three different locations (Macerata, Morrovalle and Potenza Picena), the sites of an equal number of wine farms, for the time interval between 2006 and 2013. In order to have reliable temperatures for each location, the data of 6 weather stations near these farms were interpolated using cokriging methods with elevation as the independent variable. The procedure to predict the end of flowering date starts with an investigation of the cardinal temperatures typical of each grapevine cultivar. The analysis is characterized by four temperature thresholds (cardinals): minimum activity temperature (TCmin: below this temperature there is no growth for the plant), lower optimal temperature (TLopt: above this temperature there is maximum growth), upper optimal temperature (TUopt: below this temperature there is maximum growth) and maximum activity temperature (TCmax: above this temperature there is no growth). Thus the model takes into consideration the maximum, mean and minimum daily temperatures of each location, relating them to the four above-mentioned cultivar temperature thresholds. In this way, 32 possible cases were obtained, corresponding to as many equations, depending on the position of the temperatures relative to the thresholds, in order to calculate the amount of growing degree units (GDU) for each day.
Several iterative tests (about 1000 for each cultivar) were performed, changing the values of the temperature thresholds and GDU in order to find the best possible combination that minimizes the error between observed and predicted days from budburst to end of flowering. The minimization of error for the predicted dates compared with the observed ones was assessed by calculating statistical indexes such as the root mean square error, mean absolute error and coefficient of variation. The procedure led to the identification of the four cardinal temperatures and the amount of GDU for each cultivar between BBCH 01 (budburst) and BBCH 69 (end of flowering). In conclusion, this research has achieved some goals such as characterizing the plant response to temperature (same cardinal temperatures for Maceratino and Sangiovese but higher ones for Montepulciano), the interval range of growing degree units (from 35 to 38) and the differences between observed and predicted days (ranging from 2 to 3.5), for each grape variety.
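A growth response bounded by four cardinal temperatures can be sketched as a piecewise-linear rate function. This captures the spirit of the model, not the paper's 32-case equations, and the threshold values are illustrative:

```python
def gdu_rate(t, tc_min, tl_opt, tu_opt, tc_max):
    """Fraction of maximum daily growth at mean temperature t."""
    if t <= tc_min or t >= tc_max:
        return 0.0                                  # outside activity range
    if tl_opt <= t <= tu_opt:
        return 1.0                                  # optimal plateau
    if t < tl_opt:
        return (t - tc_min) / (tl_opt - tc_min)     # ramp up
    return (tc_max - t) / (tc_max - tu_opt)         # ramp down

# Illustrative cardinals in deg C: TCmin=10, TLopt=20, TUopt=30, TCmax=40.
for t in (5.0, 15.0, 25.0, 35.0, 45.0):
    print(t, gdu_rate(t, 10.0, 20.0, 30.0, 40.0))
```

Summing such daily rates (scaled by a per-cultivar constant) until the accumulated GDU target is reached would then yield a predicted end-of-flowering date.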

  11. 40 CFR Table 4 to Subpart Ooo of... - Operating Parameter Levels

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... temperature Maximum temperature Carbon absorber Total regeneration steam or nitrogen flow, or pressure (gauge or absolute) a during carbon bed regeneration cycle; and temperature of the carbon bed after regeneration (and within 15 minutes of completing any cooling cycle(s)) Maximum flow or pressure; and maximum...

  12. On the application of the Principal Component Analysis for an efficient climate downscaling of surface wind fields

    NASA Astrophysics Data System (ADS)

    Chavez, Roberto; Lozano, Sergio; Correia, Pedro; Sanz-Rodrigo, Javier; Probst, Oliver

    2013-04-01

    With the purpose of efficiently and reliably generating long-term wind resource maps for the wind energy industry, the application and verification of a statistical methodology for the climate downscaling of wind fields at surface level is presented in this work. This procedure is based on the combination of the Monte Carlo and the Principal Component Analysis (PCA) statistical methods. Firstly the Monte Carlo method is used to create a huge number of daily-based annual time series, so-called climate representative years, by the stratified sampling of a 33-year-long time series corresponding to the available period of the NCAR/NCEP global reanalysis data set (R-2). Secondly the representative years are evaluated such that the best set is chosen according to its capability to recreate the Sea Level Pressure (SLP) temporal and spatial fields from the R-2 data set. The measure of this correspondence is based on the Euclidean distance between the Empirical Orthogonal Function (EOF) spaces generated by the PCA decomposition of the SLP fields from both the long-term and the representative-year data sets. The methodology was verified by comparing the selected 365-day period against a 9-year period of wind fields generated by dynamically downscaling the Global Forecast System data with the mesoscale model SKIRON for the Iberian Peninsula. These results showed that, compared to the traditional method of dynamically downscaling any random 365-day period, the error in the average wind velocity from the PCA representative year was reduced by almost 30%. Moreover the Mean Absolute Errors (MAE) in the monthly and daily wind profiles were also reduced by almost 25% across all SKIRON grid points. These results also showed that the methodology presented maximum error values in the mean wind speed of 0.8 m/s and maximum MAE in the monthly curves of 0.7 m/s.
Besides the bulk numbers, this work shows the spatial distribution of the errors across the Iberian domain and additional wind statistics such as the velocity and directional frequency. Additional repetitions were performed to prove the reliability and robustness of this kind-of statistical-dynamical downscaling method.
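    The core selection step described above, choosing the candidate year whose leading EOFs lie closest (in the Euclidean sense) to those of the long-term record, can be sketched roughly as follows. The random fields stand in for reanalysis SLP data, and the number of retained EOFs (k = 5), the candidate-sampling scheme, and the sign-alignment step are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a long-term daily SLP record: 33*365 days x 50 grid
# points (the paper uses NCEP/NCAR R-2 reanalysis fields instead).
longterm = rng.standard_normal((33 * 365, 50))

# EOF decomposition of the long-term anomalies via SVD (PCA).
anom = longterm - longterm.mean(axis=0)
_, _, eofs = np.linalg.svd(anom, full_matrices=False)
k = 5                      # number of leading EOFs retained (assumption)
ref = eofs[:k]             # reference EOF space of the full record

def eof_distance(year):
    """Euclidean distance between a candidate year's leading EOFs and the
    long-term EOFs (sign-aligned, since EOF signs are arbitrary)."""
    a = year - year.mean(axis=0)
    _, _, e = np.linalg.svd(a, full_matrices=False)
    e = e[:k]
    signs = np.sign(np.sum(e * ref, axis=1, keepdims=True))
    return np.linalg.norm(e * signs - ref)

# Crude stand-in for the stratified Monte Carlo sampling: score random
# 365-day candidate years and keep the one whose EOF space best matches
# the long-term climate.
candidates = [longterm[rng.choice(len(longterm), 365, replace=False)]
              for _ in range(20)]
best = min(candidates, key=eof_distance)
print(round(eof_distance(best), 3))
```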

  13. A Legendre tau-spectral method for solving time-fractional heat equation with nonlocal conditions.

    PubMed

    Bhrawy, A H; Alghamdi, M A

    2014-01-01

    We develop the tau-spectral method to solve the time-fractional heat equation (T-FHE) with a nonlocal condition. In order to achieve a highly accurate solution of this problem, the operational matrix of fractional integration (described in the Riemann-Liouville sense) for shifted Legendre polynomials is investigated in conjunction with the tau-spectral scheme, and the shifted Legendre polynomials are used as the basis functions. The main advantage of the presented scheme is that it converts the T-FHE with a nonlocal condition into a system of algebraic equations, which simplifies the problem. To demonstrate the validity and applicability of the developed spectral scheme, two numerical examples are presented. Logarithmic graphs of the maximum absolute errors are presented to demonstrate the exponential convergence of the proposed method. A comparison between our spectral method and other methods confirms that our method is more accurate than those previously applied to similar problems.
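    The convergence check mentioned above, plotting the maximum absolute error on a logarithmic scale against the polynomial degree, can be illustrated with a generic spectral approximation. This sketch uses Chebyshev interpolation of a smooth function, not the authors' Legendre tau scheme, purely to show how exponential convergence appears in that metric:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

f = np.exp
x = np.linspace(-1.0, 1.0, 2001)          # dense evaluation grid

errs = []
for n in (2, 4, 8, 16):
    # Interpolate f at the n+1 Chebyshev nodes of the first kind.
    nodes = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * (n + 1)))
    c = C.chebfit(nodes, f(nodes), n)
    # Maximum absolute error on the dense grid.
    errs.append(np.max(np.abs(f(x) - C.chebval(x, c))))
    print(f"degree {n:2d}: max abs error = {errs[-1]:.2e}")
```

On a logarithmic error axis these values fall roughly on a straight line in n, which is the signature of exponential (spectral) convergence.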

  14. A Legendre tau-Spectral Method for Solving Time-Fractional Heat Equation with Nonlocal Conditions

    PubMed Central

    Bhrawy, A. H.; Alghamdi, M. A.

    2014-01-01

    We develop the tau-spectral method to solve the time-fractional heat equation (T-FHE) with a nonlocal condition. In order to achieve a highly accurate solution of this problem, the operational matrix of fractional integration (described in the Riemann-Liouville sense) for shifted Legendre polynomials is investigated in conjunction with the tau-spectral scheme, and the shifted Legendre polynomials are used as the basis functions. The main advantage of the presented scheme is that it converts the T-FHE with a nonlocal condition into a system of algebraic equations, which simplifies the problem. To demonstrate the validity and applicability of the developed spectral scheme, two numerical examples are presented. Logarithmic graphs of the maximum absolute errors are presented to demonstrate the exponential convergence of the proposed method. A comparison between our spectral method and other methods confirms that our method is more accurate than those previously applied to similar problems. PMID:25057507

  15. Tuning rules for robust FOPID controllers based on multi-objective optimization with FOPDT models.

    PubMed

    Sánchez, Helem Sabina; Padula, Fabrizio; Visioli, Antonio; Vilanova, Ramon

    2017-01-01

    In this paper a set of optimally balanced tuning rules for fractional-order proportional-integral-derivative controllers is proposed. The control problem of simultaneously minimizing the integrated absolute error for both the set-point and the load-disturbance responses is addressed. The control problem is stated as a multi-objective optimization problem in which a first-order-plus-dead-time process model, subject to a robustness constraint based on the maximum sensitivity, has been considered. A set of Pareto-optimal solutions is obtained for different normalized dead times, and the optimal balance between the competing objectives is then obtained by choosing the Nash solution among the Pareto-optimal ones. A curve-fitting procedure has then been applied in order to generate suitable tuning rules. Several simulation results show the effectiveness of the proposed approach. Copyright © 2016. Published by Elsevier Ltd.
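    The "Nash solution among the Pareto-optimal ones" step can be sketched on toy numbers. Taking the disagreement point as the per-objective worst value on the front is an illustrative assumption, not necessarily the paper's exact formulation, and the (servo IAE, regulatory IAE) pairs below are made up:

```python
# Hypothetical Pareto front of (set-point IAE, load-disturbance IAE) pairs,
# both objectives to be minimized.
pareto = [(1.00, 3.00), (1.20, 2.20), (1.50, 1.70), (2.00, 1.40), (2.60, 1.30)]

# Disagreement point: the worst value of each objective on the front
# (an assumption for this sketch).
d1 = max(p[0] for p in pareto)
d2 = max(p[1] for p in pareto)

def nash_product(p):
    # Product of the gains relative to the disagreement point; the Nash
    # solution maximizes this product over the Pareto set.
    return (d1 - p[0]) * (d2 - p[1])

nash = max(pareto, key=nash_product)
print(nash)  # the balanced compromise between the two objectives
```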

  16. Fourth-order numerical solutions of diffusion equation by using SOR method with Crank-Nicolson approach

    NASA Astrophysics Data System (ADS)

    Muhiddin, F. A.; Sulaiman, J.

    2017-09-01

    The aim of this paper is to investigate the effectiveness of the Successive Over-Relaxation (SOR) iterative method, using the fourth-order Crank-Nicolson (CN) discretization scheme to derive a five-point Crank-Nicolson approximation equation for solving the diffusion equation. From this approximation equation, the corresponding system of five-point approximation equations can be generated and then solved iteratively. In order to assess the performance of the proposed iterative method with the fourth-order CN scheme, another point-iterative method, Gauss-Seidel (GS), is also presented as a reference. From the numerical results obtained with the fourth-order CN discretization scheme, it can be concluded that the SOR iterative method is superior in terms of number of iterations, execution time, and maximum absolute error.
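    A minimal SOR iteration of the kind compared above can be sketched as follows. The small tridiagonal system and the relaxation factor ω = 1.25 are illustrative stand-ins, not the paper's five-point CN matrices (with ω = 1 the same loop reduces to the Gauss-Seidel reference method):

```python
def sor(A, b, omega=1.25, tol=1e-10, max_iter=10_000):
    """Successive Over-Relaxation for Ax = b, starting from x = 0."""
    n = len(b)
    x = [0.0] * n
    for it in range(1, max_iter + 1):
        max_change = 0.0
        for i in range(n):
            sigma = sum(A[i][j] * x[j] for j in range(n) if j != i)
            new = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i][i]
            max_change = max(max_change, abs(new - x[i]))
            x[i] = new
        if max_change < tol:        # converged
            return x, it
    return x, max_iter

# Tridiagonal system resembling an implicit diffusion step.
A = [[ 4.0, -1.0,  0.0,  0.0],
     [-1.0,  4.0, -1.0,  0.0],
     [ 0.0, -1.0,  4.0, -1.0],
     [ 0.0,  0.0, -1.0,  4.0]]
b = [3.0, 2.0, 2.0, 3.0]

x, iters = sor(A, b)
residual = max(abs(sum(A[i][j] * x[j] for j in range(4)) - b[i])
               for i in range(4))
print(iters, residual)
```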

  17. Automatic Approach for Lung Segmentation with Juxta-Pleural Nodules from Thoracic CT Based on Contour Tracing and Correction.

    PubMed

    Wang, Jinke; Guo, Haoyan

    2016-01-01

    This paper presents a fully automatic framework for lung segmentation, in which the juxta-pleural nodule problem is brought into strong focus. The proposed scheme consists of three phases: skin boundary detection, rough segmentation of the lung contour, and pulmonary parenchyma refinement. Firstly, the chest skin boundary is extracted through image aligning, morphology operations, and connective region analysis. Secondly, diagonal-based border tracing is implemented for lung contour segmentation, with a maximum cost path algorithm used for separating the left and right lungs. Finally, by arc-based border smoothing and concave-based border correction, the refined pulmonary parenchyma is obtained. The proposed scheme is evaluated on 45 volumes of chest scans, with volume difference (VD) 11.15 ± 69.63 cm³, volume overlap error (VOE) 3.5057 ± 1.3719%, average surface distance (ASD) 0.7917 ± 0.2741 mm, root mean square distance (RMSD) 1.6957 ± 0.6568 mm, maximum symmetric absolute surface distance (MSD) 21.3430 ± 8.1743 mm, and an average time cost of 2 seconds per image. The preliminary results on accuracy and complexity show that our scheme is a promising tool for lung segmentation with juxta-pleural nodules.

  18. Spline-based high-accuracy piecewise-polynomial phase-to-sinusoid amplitude converters.

    PubMed

    Petrinović, Davor; Brezović, Marko

    2011-04-01

    We propose a method for direct digital frequency synthesis (DDS) using a cubic spline piecewise-polynomial model for a phase-to-sinusoid amplitude converter (PSAC). This method offers maximum smoothness of the output signal. Closed-form expressions for the cubic polynomial coefficients are derived in the spectral domain and the performance analysis of the model is given in the time and frequency domains. We derive the closed-form performance bounds of such DDS using conventional metrics: rms and maximum absolute errors (MAE) and maximum spurious free dynamic range (SFDR) measured in the discrete time domain. The main advantages of the proposed PSAC are its simplicity, analytical tractability, and inherent numerical stability for high table resolutions. Detailed guidelines for a fixed-point implementation are given, based on the algebraic analysis of all quantization effects. The results are verified on 81 PSAC configurations with the output resolutions from 5 to 41 bits by using a bit-exact simulation. The VHDL implementation of a high-accuracy DDS based on the proposed PSAC with 28-bit input phase word and 32-bit output value achieves SFDR of its digital output signal between 180 and 207 dB, with a signal-to-noise ratio of 192 dB. Its implementation requires only one 18 kB block RAM and three 18-bit embedded multipliers in a typical field-programmable gate array (FPGA) device. © 2011 IEEE
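    A rough feel for the "maximum absolute error of a piecewise-cubic sine table" metric used above can be had with a generic cubic Hermite table. The paper derives its spline coefficients in closed form in the spectral domain; this sketch simply tabulates sin/cos values over N uniform segments, which is an assumption for illustration only:

```python
import math

N = 64                                   # table segments over one period
h = 2 * math.pi / N
knots = [i * h for i in range(N + 1)]

def psac(phase):
    """Cubic Hermite phase-to-sine conversion from tabulated sin/cos values."""
    i = min(int(phase / h), N - 1)       # segment index
    t = (phase - knots[i]) / h           # local coordinate in [0, 1]
    y0, y1 = math.sin(knots[i]), math.sin(knots[i + 1])
    m0, m1 = h * math.cos(knots[i]), h * math.cos(knots[i + 1])
    # Cubic Hermite basis polynomials.
    h00 = (1 + 2 * t) * (1 - t) ** 2
    h10 = t * (1 - t) ** 2
    h01 = t * t * (3 - 2 * t)
    h11 = t * t * (t - 1)
    return h00 * y0 + h10 * m0 + h01 * y1 + h11 * m1

# Maximum absolute error over a dense sweep of the phase word.
mae = max(abs(psac(2 * math.pi * k / 10000) - math.sin(2 * math.pi * k / 10000))
          for k in range(10000))
print(f"max abs error with {N} segments: {mae:.2e}")
```

Doubling N cuts this error by roughly a factor of 16 (fourth-order behavior), which is why a modest table already supports high output resolutions.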

  19. Low-Budget Instrumentation of a Conventional Leg Press to Measure Reliable Isometric-Strength Capacity.

    PubMed

    Baur, Heiner; Groppa, Alessia Severina; Limacher, Regula; Radlinger, Lorenz

    2016-02-02

    Maximum strength and rate of force development (RFD) are 2 important strength characteristics for everyday tasks and athletic performance. Measurements of both parameters must be reliable. Expensive isokinetic devices with isometric modes are often used. The possibility of cost-effective measurements in a practical setting would facilitate quality control. The purpose of this study was to assess the reliability of measurements of maximum isometric strength (Fmax) and RFD on a conventional leg press. Sixteen subjects (23 ± 2 y, 1.68 ± 0.05 m, 59 ± 5 kg) were tested twice within 1 session. After warm-up, subjects performed 2 sets of 5 trials of maximum voluntary isometric contractions on an instrumented leg press (1- and 2-legged, randomized). Fmax (N) and RFD (N/s) were extracted from force-time curves. Reliability was determined for Fmax and RFD by calculating the intraclass correlation coefficient (ICC), the test-retest variability (TRV), and the bias and limits of agreement. Reliability measures revealed good to excellent ICCs of .80-.93. TRV showed mean differences between measurement sessions of 0.4-6.9%. The systematic error was low compared with the absolute mean values (Fmax 5-6%, RFD 1-4%). The implementation of a force transducer into a conventional leg press provides a viable procedure to assess Fmax and RFD. Both performance parameters can be assessed with good to excellent reliability, allowing quality control of interventions.

  20. WE-G-BRA-04: Common Errors and Deficiencies in Radiation Oncology Practice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kry, S; Dromgoole, L; Alvarez, P

    Purpose: Dosimetric errors in radiotherapy dose delivery lead to suboptimal treatments and outcomes. This work reviews the frequency and severity of dosimetric and programmatic errors identified by on-site audits performed by the IROC Houston QA center. Methods: IROC Houston on-site audits evaluate absolute beam calibration, relative dosimetry data compared to the treatment planning system data, and processes such as machine QA. Audits conducted from 2000 to the present were abstracted for recommendations, including type of recommendation and magnitude of error when applicable. Dosimetric recommendations corresponded to absolute dose errors >3% and relative dosimetry errors >2%. On-site audits of 1020 accelerators at 409 institutions were reviewed. Results: A total of 1280 recommendations were made (average 3.1/institution). The most common recommendation was for inadequate QA procedures per TG-40 and/or TG-142 (82% of institutions), with the most commonly noted deficiency being x-ray and electron off-axis constancy versus gantry angle. Dosimetrically, the most common errors in relative dosimetry were in small-field output factors (59% of institutions), wedge factors (33% of institutions), off-axis factors (21% of institutions), and photon PDD (18% of institutions). Errors in calibration were also problematic: 20% of institutions had an error in electron beam calibration, 8% had an error in photon beam calibration, and 7% had an error in brachytherapy source calibration. Almost all types of data reviewed included errors up to 7%, although 20 institutions had errors in excess of 10%, and 5 had errors in excess of 20%. The frequency of electron calibration errors decreased significantly with time, but all other errors show non-significant changes. Conclusion: There are many common and often serious errors made during the establishment and maintenance of a radiotherapy program that can be identified through independent peer review. Physicists should be cautious, particularly in areas highlighted herein that show a tendency for errors.

  1. Uncertainty analysis of thermocouple measurements used in normal and abnormal thermal environment experiments at Sandia's Radiant Heat Facility and Lurance Canyon Burn Site.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakos, James Thomas

    2004-04-01

    It would not be possible to confidently qualify weapon systems performance or validate computer codes without knowing the uncertainty of the experimental data used. This report provides uncertainty estimates associated with thermocouple data for temperature measurements from two of Sandia's large-scale thermal facilities. These two facilities (the Radiant Heat Facility (RHF) and the Lurance Canyon Burn Site (LCBS)) routinely gather data from normal and abnormal thermal environment experiments. They are managed by Fire Science & Technology Department 09132. Uncertainty analyses were performed for several thermocouple (TC) data acquisition systems (DASs) used at the RHF and LCBS. These analyses apply to Type K, chromel-alumel thermocouples of various constructions (fiberglass-sheathed TC wire and mineral-insulated, metal-sheathed (MIMS) TC assemblies) and are easily extended to other TC materials (e.g., copper-constantan). Several DASs were analyzed: (1) a Hewlett-Packard (HP) 3852A system, and (2) several National Instruments (NI) systems. The uncertainty analyses were performed on the entire system, from the TC to the DAS output file. Uncertainty sources include TC mounting errors, ANSI standard calibration uncertainty for Type K TC wire, potential errors due to temperature gradients inside connectors, extension wire uncertainty, and DAS hardware uncertainties including noise, common mode rejection ratio, digital voltmeter accuracy, mV-to-temperature conversion, analog-to-digital conversion, and other possible sources. Typical results for 'normal' environments (e.g., maximum of 300-400 K) showed the total uncertainty to be about ±1% of the reading in absolute temperature. In high-temperature or high-heat-flux ('abnormal') thermal environments, total uncertainties range up to ±2-3% of the reading (maximum of 1300 K). The higher uncertainties in abnormal thermal environments are caused by increased errors due to the effects of imperfect TC attachment to the test item. 'Best practices' are provided in Section 9 to help the user to obtain the best measurements possible.
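    When independent error sources like those listed above are rolled into a single figure, the usual approach is a root-sum-square combination of the component uncertainties. The component values below are made up for illustration and are not taken from the report:

```python
import math

# Hypothetical independent uncertainty components, each in % of reading:
# mounting, wire calibration, connector gradients, DAS hardware, conversion.
sources_pct = [0.4, 0.75, 0.3, 0.2, 0.1]

# Root-sum-square combination (valid for independent sources).
total = math.sqrt(sum(u * u for u in sources_pct))
print(f"combined uncertainty: +/-{total:.2f}% of reading")
```

With these illustrative components the total lands near ±1% of reading, the same order as the 'normal'-environment result quoted above; in abnormal environments the dominant attachment-error term grows and pulls the total toward ±2-3%.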

  2. Comparison between artificial neural network and multilinear regression models in an evaluation of cognitive workload in a flight simulator.

    PubMed

    Hannula, Manne; Huttunen, Kerttu; Koskelo, Jukka; Laitinen, Tomi; Leino, Tuomo

    2008-01-01

    In this study, the performances of artificial neural network (ANN) analysis and multilinear regression (MLR) model-based estimation of heart rate were compared in an evaluation of individual cognitive workload. The data comprised electrocardiography (ECG) measurements and an evaluation of cognitive load that induces psychophysiological stress (PPS), collected from 14 interceptor fighter pilots during complex simulated F/A-18 Hornet air battles. In our data, the mean absolute error of the ANN estimate was 11.4 on a visual analog scale, which is 13-23% better than the mean absolute error of the MLR model in the estimation of cognitive workload.
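    The MLR side of such a comparison reduces to an ordinary least-squares fit scored by mean absolute error. The sketch below uses made-up heart-rate-derived features and workload scores, not the study's ECG data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic features (e.g. mean heart rate and a variability measure) and a
# synthetic workload score on a visual analog scale -- illustrative only.
X = rng.uniform(60, 180, size=(40, 2))
y = 0.4 * X[:, 0] - 0.2 * X[:, 1] + rng.normal(0, 2, 40)

# Multilinear regression via least squares, with an intercept column.
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Mean absolute error of the fitted model, the metric used in the study.
pred = A @ coef
mae = np.mean(np.abs(y - pred))
print(f"MLR mean absolute error: {mae:.2f}")
```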

  3. Data and performances of selected aircraft and rotorcraft

    NASA Astrophysics Data System (ADS)

    Filippone, Antonio

    2000-11-01

    The purpose of this article is to provide a synthetic and comparative view of selected aircraft and rotorcraft (nearly 300 of them) from past and present. We report geometric characteristics of wings (wing span, areas, aspect ratios, sweep angles, dihedral/anhedral angles, thickness ratios at root and tip, taper ratios) and rotor blades (type of rotor, diameter, number of blades, solidity, rpm, tip Mach numbers); aerodynamic data (drag coefficients at zero lift, cruise and maximum absolute glide ratio); and performances (wing and disk loadings, maximum absolute Mach number, cruise Mach number, service ceiling, rate of climb, centrifugal acceleration limits, maximum take-off weight, maximum payload, thrust-to-weight ratios). There are additional data on wing types, high-lift devices, and noise levels at take-off and landing. The data are presented in tables for each aircraft class. A graphic analysis offers a comparative look at all types of data. Accuracy levels are provided wherever available.

  4. Systematic error of the Gaia DR1 TGAS parallaxes from data for the red giant clump

    NASA Astrophysics Data System (ADS)

    Gontcharov, G. A.

    2017-08-01

    Based on the Gaia DR1 TGAS parallaxes and photometry from the Tycho-2, Gaia, 2MASS, and WISE catalogues, we have produced a sample of 100 000 clump red giants within 800 pc of the Sun. The systematic variations of the mode of their absolute magnitude as a function of the distance, magnitude, and other parameters have been analyzed. We show that these variations reach 0.7 mag and cannot be explained by variations in the interstellar extinction or intrinsic properties of stars and by selection. The only explanation seems to be a systematic error of the Gaia DR1 TGAS parallax dependent on the square of the observed distance in kpc: 0.18R² mas. Allowance for this error reduces significantly the systematic dependences of the absolute magnitude mode on all parameters. This error reaches 0.1 mas within 800 pc of the Sun and allows an upper limit for the accuracy of the TGAS parallaxes to be estimated as 0.2 mas. A careful allowance for such errors is needed to use clump red giants as "standard candles." This eliminates all discrepancies between the theoretical and empirical estimates of the characteristics of these stars and allows us to obtain the first estimates of the modes of their absolute magnitudes from the Gaia parallaxes: mode(M_H) = -1.49 ± 0.04 mag, mode(M_Ks) = -1.63 ± 0.03 mag, mode(M_W1) = -1.67 ± 0.05 mag, mode(M_W2) = -1.67 ± 0.05 mag, mode(M_W3) = -1.66 ± 0.02 mag, mode(M_W4) = -1.73 ± 0.03 mag, as well as the corresponding estimates of their de-reddened colors.

  5. Flat Field Anomalies in an X-ray CCD Camera Measured Using a Manson X-ray Source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    M. J. Haugh and M. B. Schneider

    2008-10-31

    The Static X-ray Imager (SXI) is a diagnostic used at the National Ignition Facility (NIF) to measure the position of the X-rays produced by lasers hitting a gold foil target. The intensity distribution taken by the SXI camera during a NIF shot is used to determine how accurately NIF can aim laser beams. This is critical to proper NIF operation. Imagers are located at the top and the bottom of the NIF target chamber. The CCD chip is an X-ray sensitive silicon sensor, with a large format array (2k x 2k), 24 μm square pixels, and 15 μm thick. A multi-anode Manson X-ray source, operating up to 10 kV and 10 W, was used to characterize and calibrate the imagers. The output beam is heavily filtered to narrow the spectral beam width, giving a typical resolution E/ΔE ≈ 10. The X-ray beam intensity was measured using an absolute photodiode that has accuracy better than 1% up to the Si K edge and better than 5% at higher energies. The X-ray beam provides full CCD illumination and is flat, within ±1% maximum to minimum. The spectral efficiency was measured at 10 energy bands ranging from 930 eV to 8470 eV. We observed an energy-dependent pixel sensitivity variation that showed continuous change over a large portion of the CCD. The maximum sensitivity variation occurred at 8470 eV. The geometric pattern did not change at lower energies, but the maximum contrast decreased and was not observable below 4 keV. We were also able to observe debris, damage, and surface defects on the CCD chip. The Manson source is a powerful tool for characterizing the imaging errors of an X-ray CCD imager. These errors are quite different from those found in a visible CCD imager.

  6. Electron-impact ionization of silicon tetrachloride (SiCl4).

    PubMed

    Basner, R; Gutkin, M; Mahoney, J; Tarnovsky, V; Deutsch, H; Becker, K

    2005-08-01

    We measured absolute partial cross sections for the formation of various singly charged and doubly charged positive ions produced by electron impact on silicon tetrachloride (SiCl4) using two different experimental techniques, a time-of-flight mass spectrometer (TOF-MS) and a fast-neutral-beam apparatus. The energy range covered was from threshold to 900 eV in the TOF-MS and to 200 eV in the fast-neutral-beam apparatus. The results obtained by the two different experimental techniques were found to agree very well (better than their combined margins of error). The SiCl3(+) fragment ion has the largest partial ionization cross section, with a maximum value of slightly above 6 x 10^-20 m^2 at about 100 eV. The cross sections for the formation of SiCl4(+), SiCl+, and Cl+ have maximum values around 4 x 10^-20 m^2. Some of the cross-section curves exhibit an unusual energy dependence, with a pronounced low-energy maximum at an energy around 30 eV followed by a broad second maximum at around 100 eV. This is similar to what we observed earlier for another Cl-containing molecule, TiCl4 [R. Basner, M. Schmidt, V. Tarnovsky, H. Deutsch, and K. Becker, Thin Solid Films 374, 291 (2000)]. The maximum cross-section values for the formation of the doubly charged ions, with the exception of SiCl3(++), are 0.05 x 10^-20 m^2 or less. The experimentally determined total single ionization cross section of SiCl4 is compared with the results of semiempirical calculations.

  7. SU-F-T-330: Characterization of the Clinically Released ScandiDos Discover Diode Array for In-Vivo Dose Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saenz, D; Gutierrez, A

    Purpose: The ScandiDos Discover has obtained FDA clearance and is now clinically released. We studied the essential attenuation and beam hardening components as well as tested the diode array's ability to detect changes in absolute dose and MLC leaf positions. Methods: The ScandiDos Discover was mounted on the heads of an Elekta VersaHD and a Varian 23EX. Beam attenuation measurements were made at 10 cm depth for 6 MV and 18 MV beam energies. The PDD(10) was measured as a metric for the effect on beam quality. Next, a plan consisting of two orthogonal 10 × 10 cm² fields was used to adjust the dose per fraction by scaling monitor units to test the absolute dose detection sensitivity of the Discover. A second plan (conformal arc) was then delivered several times independently on the Elekta VersaHD. Artificially introduced MLC position errors in the four central leaves were then added. The errors were incrementally increased from 1 mm to 4 mm and back across seven control points. Results: The absolute dose measured at 10 cm depth decreased by 1.2% and 0.7% for the 6 MV and 18 MV beams with the Discover, respectively. Attenuation depended slightly on the field size but only changed by 0.1% between 5 × 5 cm² and 20 × 20 cm² fields. The change in PDD(10) for a 10 × 10 cm² field was +0.1% and +0.6% for 6 MV and 18 MV, respectively. Changes in monitor units from −5.0% to 5.0% were faithfully detected. Detected leaf errors were within 1.0 mm of intended errors. Conclusion: A novel in-vivo dosimeter monitoring the radiation beam during treatment was examined through its attenuation and beam hardening characteristics. The device tracked with changes in absolute dose as well as introduced leaf position deviations.

  8. Performance Study of Earth Networks Total Lightning Network using Rocket-Triggered Lightning Data in 2014

    NASA Astrophysics Data System (ADS)

    Heckman, S.

    2015-12-01

    Modern lightning locating systems (LLS) provide real-time monitoring and early warning of lightning activities. In addition, LLS provide valuable data for statistical analysis in lightning research. It is important to know the performance of such LLS. In the present study, the performance of the Earth Networks Total Lightning Network (ENTLN) is studied using rocket-triggered lightning data acquired at the International Center for Lightning Research and Testing (ICLRT), Camp Blanding, Florida. In the present study, 18 flashes triggered at ICLRT in 2014 were analyzed; they comprise 78 negative cloud-to-ground return strokes. The geometric mean, median, minimum, and maximum for the peak currents of the 78 return strokes are 13.4 kA, 13.6 kA, 3.7 kA, and 38.4 kA, respectively. The peak currents represent typical subsequent return strokes in natural cloud-to-ground lightning. Earth Networks has developed a new data processor to improve the performance of their network. In this study, results are presented for the ENTLN data using the old processor (originally reported in 2014) and the ENTLN data simulated using the new processor. The flash detection efficiency, stroke detection efficiency, percentage of misclassification, median location error, median peak current estimation error, and median absolute peak current estimation error for the originally reported data from the old processor are 100%, 94%, 49%, 271 m, 5%, and 13%, respectively, and those for the simulated data using the new processor are 100%, 99%, 9%, 280 m, 11%, and 15%, respectively. The use of the new processor resulted in higher stroke detection efficiency and lower percentage of misclassification. It is worth noting that the slight differences in median location error, median peak current estimation error, and median absolute peak current estimation error for the two processors are due to the fact that the new processor detected a greater number of return strokes than the old processor.

  9. Seasonality and Trend Forecasting of Tuberculosis Prevalence Data in Eastern Cape, South Africa, Using a Hybrid Model.

    PubMed

    Azeez, Adeboye; Obaromi, Davies; Odeyemi, Akinwumi; Ndege, James; Muntabayi, Ruffin

    2016-07-26

    Tuberculosis (TB) is a deadly infectious disease caused by Mycobacterium tuberculosis. TB, a chronic and highly infectious disease, is prevalent in almost every part of the globe. More than 95% of TB mortality occurs in low/middle-income countries. In 2014, approximately 10 million people were diagnosed with active TB and two million died from the disease. In this study, our aim is to compare the predictive powers of the seasonal autoregressive integrated moving average (SARIMA) model and a hybrid SARIMA-neural network autoregression (SARIMA-NNAR) model for TB incidence, and to analyse its seasonality in South Africa. TB incidence case data from January 2010 to December 2015 were extracted from the Eastern Cape Health facility report of the electronic Tuberculosis Register (ERT.Net). A SARIMA model and a combined SARIMA and neural network autoregression (SARIMA-NNAR) model were used in analysing and predicting the TB data from 2010 to 2015. The simulation performance measures mean square error (MSE), root mean square error (RMSE), mean absolute error (MAE), mean percent error (MPE), mean absolute scaled error (MASE) and mean absolute percentage error (MAPE) were applied to compare the predictive performance of the models. Though both models could practically predict TB incidence, the combined model displayed better performance. For the combined model, the Akaike information criterion (AIC), second-order AIC (AICc) and Bayesian information criterion (BIC) are 288.56, 308.31 and 299.09, respectively, which are lower than the corresponding values for the SARIMA model of 329.02, 327.20 and 341.99. The SARIMA-NNAR model forecast a slightly higher seasonal TB incidence trend than the single model. The combined model thus provided better TB incidence forecasting, with a lower AICc. The model also indicates the need for resolute intervention to reduce infectious disease transmission with co-infection with HIV and other concomitant diseases, and also at festival peak periods.
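    The forecast-accuracy measures listed above are all simple functions of the forecast errors; the sketch below computes them on toy numbers (MASE scales MAE by the in-sample one-step naive-forecast MAE):

```python
# Toy actual and predicted monthly incidence values -- illustrative only.
actual    = [12.0, 15.0, 14.0, 18.0, 20.0]
predicted = [13.0, 14.0, 15.0, 17.0, 21.0]

n = len(actual)
errors = [a - p for a, p in zip(actual, predicted)]

mae  = sum(abs(e) for e in errors) / n                      # mean absolute error
mse  = sum(e * e for e in errors) / n                       # mean square error
rmse = mse ** 0.5                                           # root mean square error
mape = 100 * sum(abs(e) / abs(a)                            # mean absolute % error
                 for e, a in zip(errors, actual)) / n

# MASE: MAE scaled by the MAE of the naive "last value" forecast.
naive_mae = sum(abs(actual[i] - actual[i - 1]) for i in range(1, n)) / (n - 1)
mase = mae / naive_mae

print(mae, rmse, round(mape, 2), round(mase, 3))
```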

  10. Engineering study comparing injection force and dose accuracy between two prefilled insulin injection pens.

    PubMed

    Ignaut, Debra A; Opincar, Michael R; Clark, Paula E; Palaisa, Melanie K; Lenox, Sheila M

    2009-12-01

    This study compared injection force (measured by glide force [GF] and glide force variability [GFV]) and dosing accuracy of the Humalog KwikPen* (prefilled insulin lispro [Humalog†] pen, Eli Lilly and Company, Indianapolis, IN) and the Next Generation FlexPen‡ (prefilled insulin aspart [NovoRapid§] pen, Novo Nordisk A/S, Bagsvaerd, Denmark). *Humalog KwikPen is a registered trademark of Eli Lilly and Company, Indianapolis, IN, USA. †Humalog is a registered trademark of Eli Lilly and Company, Indianapolis, IN, USA. ‡FlexPen is a registered trademark of Novo Nordisk A/S, Bagsvaerd, Denmark. §NovoRapid is a registered trademark of Novo Nordisk A/S, Bagsvaerd, Denmark. A total of 100 prefilled insulin pens (50 insulin lispro pens, 50 insulin aspart pens) were tested using two dose sizes (30 U and 60 U). In all, 50 devices (25 of each type) were tested at a 10 U/s dosing speed and 50 were tested at 6.6 U/s. Devices were used per manufacturer instructions. Dose accuracy (represented as absolute dose error %), maximum and average GF, and GFV data were automatically collected by the test system for all datasets (dose size/dosing speed/device type). The test system controlled for potential dosing errors. The insulin lispro pen demonstrated a significantly lower median maximum GF at both dosing speeds: (2.83 vs. 3.92 lbs [30 U] and 3.00 vs. 4.14 lbs [60 U]) at 10 U/s; (1.85 vs. 2.93 lbs [30 U] and 2.14 vs. 3.02 lbs [60 U]) at 6.6 U/s, all p < 0.0001. For all datasets, the median GFV was significantly lower for the insulin lispro pen, p < 0.0001. Median dose error was comparable between device types when tested at the 10 U/s dosing speed; however, at 6.6 U/s, the median dose error was significantly lower for the insulin lispro pen compared to the insulin aspart pen (0.47 vs. 0.67% [30 U] and 0.50 vs. 0.78% [60 U], both p < 0.05).
The insulin lispro pen had significantly lower median GF and GFV compared with insulin aspart pen when tested at two dose sizes and two dosing speeds. Median dose error was similar between the device types at the 10 U/s dosing speed, but median dose error was significantly lower for the insulin lispro pen at the 6.6 U/s dosing speed. A limitation of this study was that it was executed as an open label study.

  11. Comparison of artificial intelligence techniques for prediction of soil temperatures in Turkey

    NASA Astrophysics Data System (ADS)

    Citakoglu, Hatice

    2017-10-01

    Soil temperature is a meteorological variable directly affecting the formation and development of plants of all kinds. Soil temperatures are usually estimated with various models including artificial neural networks (ANNs), the adaptive neuro-fuzzy inference system (ANFIS), and multiple linear regression (MLR) models. Soil temperatures along with other climate data are recorded by the Turkish State Meteorological Service (MGM) at specific locations all over Turkey. Soil temperatures are commonly measured at 5-, 10-, 20-, 50-, and 100-cm depths below the soil surface. In this study, monthly soil temperature data measured at 261 stations in Turkey having records of at least 20 years were used to develop the models. Different input combinations were tested in the ANN and ANFIS models to estimate soil temperatures, and the best combination of significant explanatory variables turned out to be monthly minimum and maximum air temperatures, calendar month number, depth of soil, and monthly precipitation. Next, three standard error measures (mean absolute error (MAE, °C), root mean squared error (RMSE, °C), and the determination coefficient (R²)) were employed to check the reliability of the test-data results obtained through the ANN, ANFIS, and MLR models. ANFIS (RMSE 1.99; MAE 1.09; R² 0.98) is found to outperform both ANN and MLR (RMSE 5.80, 8.89; MAE 1.89, 2.36; R² 0.93, 0.91) in estimating soil temperature in Turkey.

  12. Climatological Modeling of Monthly Air Temperature and Precipitation in Egypt through GIS Techniques

    NASA Astrophysics Data System (ADS)

    El Kenawy, A.

    2009-09-01

This paper describes a method for modeling and mapping four climatic variables (maximum temperature, minimum temperature, mean temperature, and total precipitation) in Egypt using a multiple regression approach implemented in a GIS environment. In this model, a set of variables including latitude, longitude, elevation within distances of 5, 10, and 15 km, slope, aspect, distance to the Mediterranean Sea, distance to the Red Sea, distance to the Nile, the ratio between land and water masses within radii of 5, 10, and 15 km, the Normalized Difference Vegetation Index (NDVI), the Normalized Difference Water Index (NDWI), the Normalized Difference Temperature Index (NDTI), and reflectance are included as independent variables. These variables were integrated as raster layers in MiraMon software at a spatial resolution of 1 km. The climatic variables were treated as dependent variables and averaged from 39 quality-controlled and homogenized series distributed across the entire country over the period 1957-2006. For each climatic variable, digital and objective maps were obtained from the multiple regression coefficients at monthly, seasonal, and annual timescales. The accuracy of these maps was assessed through cross-validation between predicted and observed values using a set of statistics including the coefficient of determination (R2), root mean square error (RMSE), mean absolute error (MAE), mean bias error (MBE), and Willmott's D statistic. These maps are valuable both for their spatial resolution and for the number of observatories involved in the analysis.
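Of the cross-validation statistics listed, Willmott's D is the least commonly implemented; a hedged sketch of it and of the mean bias error (names and example values are illustrative):

```python
def willmott_d(obs, pred):
    """Willmott's index of agreement: 1 is perfect agreement, 0 is none."""
    mean_o = sum(obs) / len(obs)
    sq_err = sum((p - o) ** 2 for o, p in zip(obs, pred))
    potential = sum((abs(p - mean_o) + abs(o - mean_o)) ** 2 for o, p in zip(obs, pred))
    return 1.0 - sq_err / potential

def mean_bias_error(obs, pred):
    """Positive MBE means the model over-predicts on average."""
    return sum(p - o for o, p in zip(obs, pred)) / len(obs)

d = willmott_d([1.0, 2.0, 3.0], [1.5, 2.0, 2.5])
# d = 1 - 0.5/4.5 ≈ 0.889
```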

  13. Quality Aware Compression of Electrocardiogram Using Principal Component Analysis.

    PubMed

    Gupta, Rajarshi

    2016-05-01

Electrocardiogram (ECG) compression finds wide application in patient monitoring. Quality control in ECG compression ensures reconstruction quality and clinical acceptance for diagnostic decision making. In this paper, a quality-aware compression method for single-lead ECG is described using principal component analysis (PCA). After preprocessing, beat extraction, and PCA decomposition, one of two independent quality criteria, namely a bit rate control (BRC) or an error control (EC) criterion, was set to select the optimal principal components, eigenvectors, and their quantization levels to achieve the desired bit rate or error measure. The selected principal components and eigenvectors were finally compressed using a modified delta and Huffman encoder. The algorithms were validated with 32 records from the MIT-BIH Arrhythmia database (mitdb) and with 60 normal and 30 diagnostic ECG records from the PTB Diagnostic ECG database (ptbdb), all at 1 kHz sampling. For BRC with a CR threshold of 40, an average compression ratio (CR), normalized percentage root mean squared difference (PRDN), and maximum absolute error (MAE) of 50.74, 16.22%, and 0.243 mV, respectively, were obtained. For EC with an upper limit of 5% PRDN and 0.1 mV MAE, an average CR, PRDN, and MAE of 9.48, 4.13%, and 0.049 mV, respectively, were obtained. For mitdb record 117, the reconstruction quality could be preserved up to a CR of 68.96 by extending the BRC threshold. The proposed method yields better results than recently published works on quality-controlled ECG compression.
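The two distortion measures used as quality gates, PRDN and maximum absolute error, can be sketched as follows (a simplified illustration; the paper's beat extraction and encoding stages are not reproduced):

```python
import math

def prdn_and_max_abs_error(x, x_rec):
    """PRDN (%) and maximum absolute error between original and reconstructed ECG."""
    mean_x = sum(x) / len(x)
    num = sum((a - b) ** 2 for a, b in zip(x, x_rec))
    den = sum((a - mean_x) ** 2 for a in x)  # mean-removed normalization
    prdn = 100.0 * math.sqrt(num / den)
    mae = max(abs(a - b) for a, b in zip(x, x_rec))
    return prdn, mae

# Toy signal in mV: a 10% amplitude loss on the peaks gives PRDN = 10%.
prdn, mae = prdn_and_max_abs_error([0.0, 1.0, 0.0, -1.0], [0.0, 0.9, 0.0, -0.9])
# prdn = 10.0, mae = 0.1
```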

  14. Neural network versus classical time series forecasting models

    NASA Astrophysics Data System (ADS)

    Nor, Maria Elena; Safuan, Hamizah Mohd; Shab, Noorzehan Fazahiyah Md; Asrul, Mohd; Abdullah, Affendi; Mohamad, Nurul Asmaa Izzati; Lee, Muhammad Hisyam

    2017-05-01

Artificial neural networks (ANNs) have an advantage in time series forecasting because of their potential to solve complex forecasting problems: an ANN is a data-driven approach that can be trained to map past values of a time series. In this study, the forecast performance of a neural network and of a classical time series forecasting method, namely the seasonal autoregressive integrated moving average (SARIMA) model, was compared using gold price data. Moreover, the effect of different data preprocessing methods on the forecast performance of the neural network was examined. Forecast accuracy was evaluated using mean absolute deviation, root mean square error, and mean absolute percentage error. It was found that the ANN produced the most accurate forecast when a Box-Cox transformation was used for data preprocessing.
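The Box-Cox preprocessing and the MAPE accuracy measure mentioned above can be sketched as follows (the series and lambda value are illustrative; the study's actual settings are not given in the abstract):

```python
import math

def box_cox(series, lam):
    """Box-Cox power transform of a strictly positive series."""
    if lam == 0:
        return [math.log(v) for v in series]
    return [(v ** lam - 1.0) / lam for v in series]

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

# Illustrative series: transform, forecast in transformed space, then score.
prices = [100.0, 200.0]
logged = box_cox(prices, 0.0)       # natural-log transform (lambda = 0)
err = mape(prices, [110.0, 190.0])  # ≈ 7.5 (%)
```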

  15. Assessing and Ensuring GOES-R Magnetometer Accuracy

    NASA Technical Reports Server (NTRS)

    Carter, Delano R.; Todirita, Monica; Kronenwetter, Jeffrey; Chu, Donald

    2016-01-01

The GOES-R magnetometer subsystem accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as the absolute mean plus 3 sigma; during storms (300 nT), as the absolute mean plus 2 sigma. Error comes both from outside the magnetometers (e.g., spacecraft fields and misalignments) and from inside (e.g., zero-offset and scale-factor errors). Because zero offset and scale factor drift over time, it will be necessary to perform annual calibration maneuvers. To predict performance before launch, we have used Monte Carlo simulations and covariance analysis. Both behave as expected, and their accuracy predictions agree within 30%. With the proposed calibration regimen, both suggest that the GOES-R magnetometer subsystem will meet its accuracy requirements.
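The quiet-time accuracy definition (absolute mean plus 3 sigma) can be illustrated with a tiny Monte Carlo sketch; the bias and noise magnitudes below are invented for illustration, not GOES-R values:

```python
import random
import statistics

random.seed(42)
# Hypothetical residual field error: a 0.3 nT bias plus 0.4 nT Gaussian noise.
samples = [0.3 + random.gauss(0.0, 0.4) for _ in range(100_000)]

# Quiet-time accuracy metric: absolute mean plus 3 sigma.
accuracy = abs(statistics.mean(samples)) + 3 * statistics.stdev(samples)
# With these invented numbers, accuracy ≈ 0.3 + 3 * 0.4 = 1.5 nT,
# which would sit under a 1.7 nT requirement.
```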

  16. 40 CFR 1065.1005 - Symbols, abbreviations, acronyms, and units of measure.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... of diameters meter per meter m/m 1 b atomic oxygen-to-carbon ratio mole per mole mol/mol 1 C # number... error between a quantity and its reference e brake-specific emission or fuel consumption gram per... standard deviation S Sutherland constant kelvin K K SEE standard estimate of error T absolute temperature...

  17. Metric Identification and Protocol Development for Characterizing DNAPL Source Zone Architecture and Associated Plume Response

    DTIC Science & Technology

    2013-09-01

Figure M.4.1: Two-dimensional domains cropped out of three-dimensional numerically generated realizations; (a) 3D PCE-NAPL realizations generated by UTCHEM. Figure R.3.2: Absolute error vs. relative error scatter plots of pM and gM from SGS data set 4 using multi-task manifold regression. Absolute error vs. relative error scatter plots of pM and gM from the TP/MC data set using multi-task manifold regression.

  18. Dependence of Aerosol Light Absorption and Single-Scattering Albedo On Ambient Relative Humidity for Sulfate Aerosols with Black Carbon Cores

    NASA Technical Reports Server (NTRS)

    Redemann, Jens; Russell, Philip B.; Hamill, Patrick

    2001-01-01

Atmospheric aerosols frequently contain hygroscopic sulfate species and black carbon (soot) inclusions. In this paper we report results of a modeling study to determine the change in aerosol absorption due to increases in ambient relative humidity (RH), for three common sulfate species, assuming that the soot mass fraction is present as a single concentric core within each particle. Because of the lack of detailed knowledge about various input parameters to models describing internally mixed aerosol particle optics, we focus on results that were aimed at determining the maximum effect that particle humidification may have on aerosol light absorption. In the wavelength range from 450 to 750 nm, maximum absorption humidification factors (ratio of wet to 'dry' (30% RH) absorption) for single aerosol particles are found to be as large as 1.75 when the RH changes from 30 to 99.5%. Upon lesser humidification from 30 to 80% RH, absorption humidification for single particles is only as much as 1.2, even for the most favorable combination of initial ('dry') soot mass fraction and particle size. Integrated over monomodal lognormal particle size distributions, maximum absorption humidification factors range between 1.07 and 1.15 for humidification from 30 to 80% RH and between 1.1 and 1.35 for humidification from 30 to 95% RH for all species considered. The largest humidification factors at a wavelength of 450 nm are obtained for 'dry' particle size distributions that peak at a radius of 0.05 microns, while the absorption humidification factors at 700 nm are largest for 'dry' size distributions that are dominated by particles in the radius range of 0.06 to 0.08 microns. Single-scattering albedo estimates at ambient conditions are often based on absorption measurements at low RH (approx. 30%) and the assumption that aerosol absorption does not change upon humidification (i.e., absorption humidification equal to unity).
Our modeling study suggests that this assumption alone can introduce absolute errors in estimates of the midvisible single-scattering albedo of up to 0.05 for realistic dry particle size distributions. Our study also indicates that this error increases with increasing wavelength. The potential errors in aerosol single-scattering albedo derived here are comparable in magnitude and in addition to uncertainties in single-scattering albedo estimates that are based on measurements of aerosol light absorption and scattering.

  19. Error analysis on spinal motion measurement using skin mounted sensors.

    PubMed

    Yang, Zhengyi; Ma, Heather Ting; Wang, Deming; Lee, Raymond

    2008-01-01

Measurement errors of skin-mounted sensors in measuring forward bending of the lumbar spine are investigated. Radiographic images capturing the positions of the entire lumbar spine were acquired and used as a 'gold' standard. Seventeen young male volunteers (21 (SD 1) years old) agreed to participate in the study. Lightweight miniature sensors of an electromagnetic tracking system (Fastrak) were attached to the skin overlying the spinous processes of the lumbar spine. With the sensors attached, the subjects were requested to take lateral radiographs in two postures: neutral upright and full flexion. The ranges of motion of the lumbar spine were calculated from two sets of digitized data, the bony markers of the vertebral bodies and the sensors, and compared. The differences between the two sets of results were then analyzed. The relative movement between sensor and vertebra was decomposed into sensor sliding and tilting, from which a sliding error and a tilting error were defined. The gross motion range of forward bending of the lumbar spine measured from bony markers of the vertebrae was 67.8 degrees (SD 10.6 degrees) and that from the sensors was 62.8 degrees (SD 12.8 degrees). The error and absolute error for the gross motion range were 5.0 degrees (SD 7.2 degrees) and 7.7 degrees (SD 3.9 degrees). The contributions of the sensors placed on S1 and L1 to the absolute error were 3.9 degrees (SD 2.9 degrees) and 4.4 degrees (SD 2.8 degrees), respectively.

  20. Anatomical frame identification and reconstruction for repeatable lower limb joint kinematics estimates.

    PubMed

    Donati, Marco; Camomilla, Valentina; Vannozzi, Giuseppe; Cappozzo, Aurelio

    2008-07-19

    The quantitative description of joint mechanics during movement requires the reconstruction of the position and orientation of selected anatomical axes with respect to a laboratory reference frame. These anatomical axes are identified through an ad hoc anatomical calibration procedure and their position and orientation are reconstructed relative to bone-embedded frames normally derived from photogrammetric marker positions and used to describe movement. The repeatability of anatomical calibration, both within and between subjects, is crucial for kinematic and kinetic end results. This paper illustrates an anatomical calibration approach, which does not require anatomical landmark manual palpation, described in the literature to be prone to great indeterminacy. This approach allows for the estimate of subject-specific bone morphology and automatic anatomical frame identification. The experimental procedure consists of digitization through photogrammetry of superficial points selected over the areas of the bone covered with a thin layer of soft tissue. Information concerning the location of internal anatomical landmarks, such as a joint center obtained using a functional approach, may also be added. The data thus acquired are matched with the digital model of a deformable template bone. Consequently, the repeatability of pelvis, knee and hip joint angles is determined. Five volunteers, each of whom performed five walking trials, and six operators, with no specific knowledge of anatomy, participated in the study. Descriptive statistics analysis was performed during upright posture, showing a limited dispersion of all angles (less than 3 deg) except for hip and knee internal-external rotation (6 deg and 9 deg, respectively). During level walking, the ratio of inter-operator and inter-trial error and an absolute subject-specific repeatability were assessed. 
For pelvic and hip angles, and for knee flexion-extension, the inter-operator error was equal to the inter-trial error, with the absolute error ranging from 0.1 deg to 0.9 deg. Knee internal-external rotation and ab-adduction showed, on average, inter-operator errors that were 8% and 28% greater than the relevant inter-trial errors, respectively. The absolute error was in the range 0.9-2.9 deg.

  1. Reliability and Measurement Error of Tensiomyography to Assess Mechanical Muscle Function: A Systematic Review.

    PubMed

    Martín-Rodríguez, Saúl; Loturco, Irineu; Hunter, Angus M; Rodríguez-Ruiz, David; Munguia-Izquierdo, Diego

    2017-12-01

Martín-Rodríguez, S, Loturco, I, Hunter, AM, Rodríguez-Ruiz, D, and Munguia-Izquierdo, D. Reliability and measurement error of tensiomyography to assess mechanical muscle function: A systematic review. J Strength Cond Res 31(12): 3524-3536, 2017. Interest in studying mechanical skeletal muscle function through tensiomyography (TMG) has increased in recent years. This systematic review aimed to (a) report the reliability and measurement error of all TMG parameters (i.e., maximum radial displacement of the muscle belly [Dm], contraction time [Tc], delay time [Td], half-relaxation time [½ Tr], and sustained contraction time [Ts]) and (b) provide critical reflection on how to perform accurate and appropriate measurements for informing clinicians, exercise professionals, and researchers. A comprehensive literature search was performed of the Pubmed, Scopus, Science Direct, and Cochrane databases up to July 2017. Nine studies were included in this systematic review. Meta-analysis could not be performed because of the low quality of the evidence of some of the studies evaluated. Overall, the review of the 9 studies involving 158 participants revealed high relative reliability (intraclass correlation coefficient [ICC]) for Dm (0.91-0.99); moderate-to-high ICC for Ts (0.80-0.96), Tc (0.70-0.98), and ½ Tr (0.77-0.93); and low-to-high ICC for Td (0.60-0.98), independently of the evaluated muscles. In addition, absolute reliability (coefficient of variation [CV]) was low for all TMG parameters except ½ Tr (CV > 20%), whereas measurement error indexes were high for this parameter. In conclusion, this study indicates that three of the TMG parameters (Dm, Td, and Tc) are highly reliable, whereas ½ Tr demonstrates insufficient reliability and thus should not be used in future studies.

  2. Automatic pedicles detection using convolutional neural network in a 3D spine reconstruction from biplanar radiographs

    NASA Astrophysics Data System (ADS)

    Bakhous, Christine; Aubert, Benjamin; Vazquez, Carlos; Cresson, Thierry; Parent, Stefan; De Guise, Jacques

    2018-02-01

The 3D analysis of spinal deformities (scoliosis) has high potential in clinical diagnosis and treatment. In a biplanar radiograph context, 3D analysis requires a 3D reconstruction from a pair of 2D X-rays. Whether fully automatic, semi-automatic, or manual, this task is complex because of noise, structure superimposition, and partial information due to a limited number of projections. Being involved in the axial vertebral rotation (AVR), a fundamental clinical parameter for scoliosis diagnosis, pedicles are important landmarks for 3D spine modeling and pre-operative planning. In this paper, we focus on the extension of a fully automatic 3D spine reconstruction method in which the vertebral body centers (VBCs) are automatically detected using a convolutional neural network (CNN) and then regularized using a statistical shape model (SSM) framework. In this global process, pedicles are inferred statistically during the SSM regularization. Our contribution is to add a CNN-based regression model for pedicle detection, allowing better pedicle localization and improving the estimation of clinical parameters (e.g., AVR, Cobb angle). Of 476 datasets, including healthy patients and Adolescent Idiopathic Scoliosis (AIS) cases with different scoliosis grades (Cobb angles up to 116°), we used 380 for training, 48 for testing, and 48 for validation. Adding the local CNN-based pedicle detection decreases the mean absolute error of the AVR by 10%. The 3D mean Euclidean distance error between detected pedicles and ground truth decreases by 17% and the maximum error by 19%. Moreover, a general improvement is observed in the 3D spine reconstruction, reflected in lower errors on the Cobb angle estimation.

  3. Poster Presentation: Optical Test of NGST Developmental Mirrors

    NASA Technical Reports Server (NTRS)

    Hadaway, James B.; Geary, Joseph; Reardon, Patrick; Peters, Bruce; Keidel, John; Chavers, Greg

    2000-01-01

An Optical Testing System (OTS) has been developed to measure the figure and radius of curvature of NGST developmental mirrors in the vacuum, cryogenic environment of the X-Ray Calibration Facility (XRCF) at Marshall Space Flight Center (MSFC). The OTS consists of a WaveScope Shack-Hartmann sensor from Adaptive Optics Associates as the main instrument, a Point Diffraction Interferometer (PDI), a Point Spread Function (PSF) imager, an alignment system, a Leica Disto Pro distance measurement instrument, and a laser source palette (632.8 nm wavelength) that is fiber-coupled to the sensor instruments. All of the instruments except the laser source palette are located on a single breadboard known as the Wavefront Sensor Pallet (WSP). The WSP sits on top of a 5-DOF motion system located at the center of curvature of the test mirror. Two PCs are used to control the OTS. The error in the figure measurement is dominated by the WaveScope's measurement error. An analysis using the absolute wavefront gradient error of 1/50 wave P-V (at 0.6328 microns) provided by the manufacturer leads to a total surface figure measurement error of approximately 1/100 wave rms, which easily meets the requirement of 1/10 wave P-V. The error in radius of curvature is dominated by the Leica's absolute measurement error of ±1.5 mm and the focus setting error of ±1.4 mm, giving an overall error of ±2 mm. The OTS is currently being used to test the NGST Mirror System Demonstrators (NMSDs) and the Subscale Beryllium Mirror Demonstrator (SBMD).
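The overall radius-of-curvature error quoted above follows from combining the two independent contributions in quadrature; a minimal sketch, taking the two contributions as ±1.5 mm and ±1.4 mm:

```python
import math

def rss(*errors):
    """Combine independent error contributions in quadrature (root sum of squares)."""
    return math.sqrt(sum(e * e for e in errors))

# Leica distance error and focus-setting error:
total = rss(1.5, 1.4)
# ≈ 2.05 mm, consistent with the quoted overall error of about ±2 mm
```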

  4. Is adult gait less susceptible than paediatric gait to hip joint centre regression equation error?

    PubMed

    Kiernan, D; Hosking, J; O'Brien, T

    2016-03-01

Hip joint centre (HJC) regression equation error during paediatric gait has recently been shown to have clinical significance. For adult gait, it has been inferred that errors in absolute HJC position comparable to those in children may in fact result in less significant kinematic and kinetic error. This study investigated the clinical agreement of three commonly used regression equation sets (Bell et al., Davis et al., and Orthotrak) for adult subjects against the equations of Harrington et al. The relationship between HJC position error and subject size was also investigated for the Davis et al. set. Full 3-dimensional gait analysis was performed on 12 healthy adult subjects, with data for each set compared to Harrington et al. The Gait Profile Score, Gait Variable Score, and GDI-kinetic were used to assess clinical significance, while differences in HJC position between the Davis and Harrington sets were compared to leg length and subject height using regression analysis. A number of statistically significant differences were present in absolute HJC position. However, all sets fell below the clinically significant thresholds (GPS <1.6°, GDI-Kinetic <3.6 points). Linear regression revealed a statistically significant relationship of both increasing leg length and increasing subject height with decreasing error in the anterior/posterior and superior/inferior directions. The results confirm a negligible clinical error for adult subjects, suggesting that any of the examined sets could be used interchangeably. The decreasing error with increasing leg length and subject height suggests that the Davis set should be used cautiously on smaller subjects. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Prescription errors before and after introduction of electronic medication alert system in a pediatric emergency department.

    PubMed

    Sethuraman, Usha; Kannikeswaran, Nirupama; Murray, Kyle P; Zidan, Marwan A; Chamberlain, James M

    2015-06-01

Prescription errors occur frequently in pediatric emergency departments (PEDs). The effect of computerized physician order entry (CPOE) with an electronic medication alert system (EMAS) on these errors is unknown. The objective was to compare prescription error rates before and after the introduction of CPOE with EMAS in a PED. The hypothesis was that CPOE with EMAS would significantly reduce the rate and severity of prescription errors in the PED. A prospective comparison of a sample of outpatient medication prescriptions 5 months before and after CPOE with EMAS implementation (7,268 before and 7,292 after) was performed. Error types and rates, alert types and significance, and physician responses were noted. Medication errors were deemed significant if there was a potential to cause life-threatening injury, failure of therapy, or an adverse drug effect. There was a significant reduction in errors per 100 prescriptions (10.4 before vs. 7.3 after; absolute risk reduction = 3.1, 95% confidence interval [CI] = 2.2 to 4.0). Drug dosing error rates decreased from 8 to 5.4 per 100 (absolute risk reduction = 2.6, 95% CI = 1.8 to 3.4). Alerts were generated for 29.6% of prescriptions, with 45% involving drug dose range checking. The sensitivity of CPOE with EMAS in identifying errors in prescriptions was 45.1% (95% CI = 40.8% to 49.6%), and the specificity was 57% (95% CI = 55.6% to 58.5%). Prescribers modified 20% of the dosing alerts, preventing those errors from reaching the patient. Conversely, 11% of true dosing alerts for medication errors were overridden by the prescribers: 88 (11.3%) resulted in medication errors, and 684 (88.6%) were false-positive alerts. CPOE with EMAS was associated with a decrease in overall prescription errors in our PED. Further system refinements are required to reduce the high false-positive alert rates. © 2015 by the Society for Academic Emergency Medicine.
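The reported absolute risk reductions and confidence intervals are consistent with the standard normal approximation for a difference of two proportions; a sketch using the rates from the abstract (the function name is illustrative):

```python
import math

def arr_per_100(p1, n1, p2, n2, z=1.96):
    """Absolute risk reduction per 100, with a normal-approximation 95% CI."""
    arr = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return 100 * arr, 100 * (arr - z * se), 100 * (arr + z * se)

# Overall error rates: 10.4 per 100 before (n=7,268) vs. 7.3 per 100 after (n=7,292).
arr, lo, hi = arr_per_100(0.104, 7268, 0.073, 7292)
# arr ≈ 3.1, 95% CI ≈ (2.2, 4.0), matching the reported values
```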

  6. Novae as distance indicators

    NASA Technical Reports Server (NTRS)

    Ford, Holland C.; Ciardullo, Robin

    1988-01-01

Nova shells are characteristically prolate with equatorial bands and polar caps. Failure to account for this geometry can lead to large errors in expansion parallaxes for individual novae. When simple prescriptions are used for deriving expansion parallaxes from an ensemble of randomly oriented prolate spheroids, the average distance will be too small by factors of 10 to 15 percent. The absolute magnitudes of the novae will be underestimated and the resulting distance scale will be too small by the same factors. If observations of partially resolved nova shells select for large inclinations, the systematic error in the resulting distance scale could easily be 20 to 30 percent. Extinction by dust in the bulge of M31 may broaden and shift the intrinsic distribution of maximum nova magnitudes versus decay rates. We investigated this possibility by projecting Arp's and Rosino's novae onto a composite B − 6200 Å color map of M31's bulge. Thirty-two of the 86 novae projected onto a smooth background with no underlying structure due to the presence of a dust cloud along the line of sight. The distribution of maximum magnitudes versus fade rates for these unreddened novae is indistinguishable from the distribution for the entire set of novae. It is concluded that novae suffer very little extinction from the filamentary and patchy distribution of dust seen in the bulge of M31. Time-averaged B and H-alpha nova luminosity functions are potentially powerful new ways to use novae as standard candles. Modern CCD observations and the photographic light curves of M31 novae found during the last 60 years were analyzed to show that these functions are power laws. Consequently, unless the eruption times for novae are known, the data cannot be used to obtain distances.

  7. Performance Evaluation of sUAS Equipped with Velodyne HDL-32E LiDAR Sensor

    NASA Astrophysics Data System (ADS)

    Jozkow, G.; Wieczorek, P.; Karpina, M.; Walicka, A.; Borkowski, A.

    2017-08-01

The Velodyne HDL-32E laser scanner is increasingly used as the main mapping sensor in small commercial UASs, but there is still little information about the actual accuracy of point clouds collected with such systems. This work empirically evaluates the accuracy of the point cloud collected with such a UAS. The accuracy assessment covered four aspects: the impact of the sensors on theoretical point cloud accuracy, trajectory reconstruction quality, and internal and absolute point cloud accuracies. Theoretical point cloud accuracy was evaluated by calculating the 3D position error from the known errors of the sensors used. The quality of trajectory reconstruction was assessed by comparing position and attitude differences from the forward and reverse EKF solutions. Internal and absolute accuracies were evaluated by fitting planes to 8 point cloud samples extracted for planar surfaces. In addition, absolute accuracy was also determined by calculating 3D point distances between the LiDAR UAS and reference TLS point clouds. The test data consisted of point clouds collected in two separate flights performed over the same area. The experiments showed that in the tested UAS the trajectory reconstruction, especially the attitude, has a significant impact on point cloud accuracy. The estimated absolute accuracy of the point clouds collected during both test flights was better than 10 cm; thus, the investigated UAS fits the mapping-grade category.
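Evaluating internal accuracy by fitting planes to point cloud samples can be sketched with a total-least-squares (SVD) fit; this illustrates the general technique, not necessarily the authors' exact procedure:

```python
import numpy as np

def plane_fit_rms(points):
    """Fit a plane to an (N, 3) array by total least squares; return the RMS
    of the point-to-plane residuals as an internal accuracy measure."""
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value is the normal
    # of the best-fit plane through the centroid.
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]
    residuals = (points - centroid) @ normal
    return float(np.sqrt(np.mean(residuals ** 2)))

# A perfectly planar patch has (numerically) zero residual RMS:
flat = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 1.0, 0.0]])
```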

  8. Quantitative proton magnetic resonance spectroscopy without water suppression

    NASA Astrophysics Data System (ADS)

    Özdemir, M. S.; DeDeene, Y.; Fieremans, E.; Lemahieu, I.

    2009-06-01

The suppression of the abundant water signal has traditionally been employed to decrease the dynamic range of the NMR signal in proton MRS (1H MRS) in vivo. With this approach, if the intent is to utilize the water signal as an internal reference for the absolute quantification of metabolites, additional measurements are required to acquire the water signal. This can be prohibitively time-consuming and is not desired clinically. Additionally, traditional water suppression can lead to metabolite alterations. These problems can be overcome by performing quantitative 1H MRS without water suppression. However, non-water-suppressed spectra suffer from gradient-induced frequency modulations, resulting in sidebands in the spectrum. Sidebands may overlap with the metabolites, which renders spectral analysis and quantification problematic. In this paper, we performed absolute quantification of metabolites without water suppression. Sidebands were removed by utilizing the phase of an external reference signal of a single resonance to observe the time-varying static field fluctuations induced by gradient vibrations and deconvolving this phase contamination from the desired NMR signal. The metabolites were quantified after sideband correction by calibrating the metabolite signal intensities against the recorded water signal. The method was evaluated by phantom and in vivo measurements in the human brain. The maximum systematic error for the quantified metabolite concentrations was found to be 10.8%, showing the feasibility of quantification after sideband correction.

  9. Demonstrating the Error Budget for the Climate Absolute Radiance and Refractivity Observatory Through Solar Irradiance Measurements

    NASA Technical Reports Server (NTRS)

    Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan

    2016-01-01

The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as a method to determine the accuracy of climate change projections. A CLARREO objective is to improve the accuracy of SI-traceable absolute calibration at infrared and reflected solar wavelengths to reach the on-orbit accuracies required to allow climate change observations to survive data gaps and to observe climate change at the limit of natural variability. Such an effort will also demonstrate National Institute of Standards and Technology (NIST) approaches for use in future spaceborne instruments. The current work describes the results of laboratory and field measurements with the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. SOLARIS allows testing and evaluation of calibration approaches, alternate design and/or implementation approaches, and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. Results of laboratory calibration measurements are provided to demonstrate key assumptions about instrument behavior that are needed to achieve CLARREO's climate measurement requirements. Absolute radiometric response is determined using laser-based calibration sources and applied to direct solar views for comparison with accepted solar irradiance models, demonstrating accuracy values that give confidence in the error budget for the CLARREO reflectance retrieval.

  10. Determination of heat capacity of ionic liquid based nanofluids using group method of data handling technique

    NASA Astrophysics Data System (ADS)

    Sadi, Maryam

    2018-01-01

In this study, a group method of data handling (GMDH) model has been successfully developed to predict the heat capacity of ionic liquid based nanofluids, considering the reduced temperature, acentric factor, and molecular weight of the ionic liquids, and the nanoparticle concentration as input parameters. To accomplish the modeling, 528 experimental data points extracted from the literature were divided into training and testing subsets. The training set was used to fit the model coefficients and the testing set was applied for model validation. The ability and accuracy of the developed model were evaluated by comparing model predictions with experimental values using different statistical parameters such as the coefficient of determination, mean square error, and mean absolute percentage error. The mean absolute percentage errors of the developed model for the training and testing sets are 1.38% and 1.66%, respectively, which indicate excellent agreement between model predictions and experimental data. Also, the results estimated by the developed GMDH model exhibit higher accuracy than the available theoretical correlations.

  11. Applications and Comparisons of Four Time Series Models in Epidemiological Surveillance Data

    PubMed Central

    Young, Alistair A.; Li, Xiaosong

    2014-01-01

    Public health surveillance systems provide valuable data for reliable prediction of future epidemic events. This paper describes a study that used nine types of infectious disease data collected through a national public health surveillance system in mainland China to evaluate and compare the performances of four time series methods, namely, two decomposition methods (regression and exponential smoothing), autoregressive integrated moving average (ARIMA) and support vector machine (SVM). The data obtained from 2005 to 2011 and in 2012 were used as modeling and forecasting samples, respectively. The performances were evaluated based on three metrics: mean absolute error (MAE), mean absolute percentage error (MAPE), and mean square error (MSE). The accuracy of the statistical models in forecasting future epidemic events proved their effectiveness in epidemiological surveillance. Although the comparisons found that no single method is completely superior to the others, the present study indeed highlighted that the SVM outperforms the ARIMA model and decomposition methods in most cases. PMID:24505382
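
    The three evaluation metrics used above are straightforward to compute; a minimal numpy sketch (the observed/predicted values here are made up for illustration):

```python
import numpy as np

def forecast_metrics(obs, pred):
    """MAE, MAPE (%), and MSE between observed and predicted series."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    err = pred - obs
    mae = np.mean(np.abs(err))
    mape = 100.0 * np.mean(np.abs(err) / np.abs(obs))  # obs must be nonzero
    mse = np.mean(err ** 2)
    return mae, mape, mse

mae, mape, mse = forecast_metrics([100, 200, 400], [110, 180, 400])
```

Note that MAPE is undefined when an observed value is zero and penalizes over-prediction more than under-prediction, which is one reason alternative accuracy measures are sometimes preferred.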

  12. Corsica: A Multi-Mission Absolute Calibration Site

    NASA Astrophysics Data System (ADS)

    Bonnefond, P.; Exertier, P.; Laurain, O.; Guinle, T.; Femenias, P.

    2013-09-01

    In collaboration with the CNES and NASA oceanographic projects (TOPEX/Poseidon and Jason), the OCA (Observatoire de la Côte d'Azur) has developed a verification site in Corsica since 1996, operational since 1998. CALibration/VALidation embraces a wide variety of activities, ranging from the interpretation of information from internal-calibration modes of the sensors to validation of the fully corrected estimates of the reflector heights using in situ data. Now, Corsica is, like the Harvest platform (NASA side) [14], an operating calibration site able to support continuous monitoring with a high level of accuracy: a 'point calibration' which yields instantaneous bias estimates with a 10-day repeatability of 30 mm (standard deviation) and mean errors of 4 mm (standard error). For a 35-day repeatability (ERS, Envisat), due to a smaller time series, the standard error is about double (~7 mm). In this paper, we present updated results of the absolute Sea Surface Height (SSH) biases for TOPEX/Poseidon (T/P), Jason-1, Jason-2, ERS-2 and Envisat.

  13. Artificial neural network modelling of a large-scale wastewater treatment plant operation.

    PubMed

    Güçlü, Dünyamin; Dursun, Sükrü

    2010-11-01

    Artificial Neural Networks (ANNs), a method of artificial intelligence, provide effective predictive models for complex processes. Three independent ANN models trained with the back-propagation algorithm were developed to predict the effluent chemical oxygen demand (COD), suspended solids (SS) and aeration tank mixed liquor suspended solids (MLSS) concentrations of the Ankara central wastewater treatment plant. The appropriate architecture of the ANN models was determined through several steps of training and testing of the models. The ANN models yielded satisfactory predictions. Results of the root mean square error, mean absolute error and mean absolute percentage error were 3.23, 2.41 mg/L and 5.03% for COD; 1.59, 1.21 mg/L and 17.10% for SS; 52.51, 44.91 mg/L and 3.77% for MLSS, respectively, indicating that the developed models could be used efficiently. The results also confirm that the ANN modelling approach may have great potential for the simulation, precise performance prediction and process control of wastewater treatment plants.

  14. Computational tests of quantum chemical models for excited and ionized states of molecules with phosphorus and sulfur atoms.

    PubMed

    Hahn, David K; RaghuVeer, Krishans; Ortiz, J V

    2014-05-15

    Time-dependent density functional theory (TD-DFT) and electron propagator theory (EPT) are used to calculate the electronic transition energies and ionization energies, respectively, of species containing phosphorus or sulfur. The accuracy of TD-DFT and EPT, in conjunction with various basis sets, is assessed with data from gas-phase spectroscopy. TD-DFT is tested using 11 prominent exchange-correlation functionals on a set of 37 vertical and 19 adiabatic transitions. For vertical transitions, TD-CAM-B3LYP calculations performed with the MG3S basis set are lowest in overall error, having a mean absolute deviation from experiment of 0.22 eV, or 0.23 eV over valence transitions and 0.21 eV over Rydberg transitions. Using a larger basis set, aug-pc3, improves accuracy over the valence transitions for hybrid functionals, but improved accuracy over the Rydberg transitions is only obtained with the BMK functional. For adiabatic transitions, all hybrid functionals paired with the MG3S basis set perform well, and B98 is best, with a mean absolute deviation from experiment of 0.09 eV. The testing of EPT used the Outer Valence Green's Function (OVGF) approximation and the Partial Third Order (P3) approximation on 37 vertical first ionization energies. It is found that OVGF outperforms P3 when basis sets of at least triple-ζ quality in the polarization functions are used. The largest basis set used in this study, aug-pc3, yielded the best mean absolute errors for both methods: 0.08 eV for OVGF and 0.18 eV for P3. The OVGF/6-31+G(2df,p) level of theory is particularly cost-effective, yielding a mean absolute error of 0.11 eV.

  15. Novel isotopic N,N-Dimethyl Leucine (iDiLeu) Reagents Enable Absolute Quantification of Peptides and Proteins Using a Standard Curve Approach

    NASA Astrophysics Data System (ADS)

    Greer, Tyler; Lietz, Christopher B.; Xiang, Feng; Li, Lingjun

    2015-01-01

    Absolute quantification of protein targets using liquid chromatography-mass spectrometry (LC-MS) is a key component of candidate biomarker validation. One popular method combines multiple reaction monitoring (MRM) using a triple quadrupole instrument with stable isotope-labeled standards (SIS) for absolute quantification (AQUA). LC-MRM AQUA assays are sensitive and specific, but they are also expensive because of the cost of synthesizing stable isotope peptide standards. While the chemical modification approach using mass differential tags for relative and absolute quantification (mTRAQ) represents a more economical approach when quantifying large numbers of peptides, these reagents are costly and still suffer from lower throughput because only two concentration values per peptide can be obtained in a single LC-MS run. Here, we have developed and applied a set of five novel mass difference reagents, isotopic N, N-dimethyl leucine (iDiLeu). These labels contain an amine reactive group, triazine ester, are cost effective because of their synthetic simplicity, and have increased throughput compared with previous LC-MS quantification methods by allowing construction of a four-point standard curve in one run. iDiLeu-labeled peptides show remarkably similar retention time shifts, slightly lower energy thresholds for higher-energy collisional dissociation (HCD) fragmentation, and high quantification accuracy for trypsin-digested protein samples (median errors <15%). By spiking in an iDiLeu-labeled neuropeptide, allatostatin, into mouse urine matrix, two quantification methods are validated. The first uses one labeled peptide as an internal standard to normalize labeled peptide peak areas across runs (<19% error), whereas the second enables standard curve creation and analyte quantification in one run (<8% error).

  16. Procrustes-based geometric morphometrics on MRI images: An example of inter-operator bias in 3D landmarks and its impact on big datasets.

    PubMed

    Daboul, Amro; Ivanovska, Tatyana; Bülow, Robin; Biffar, Reiner; Cardini, Andrea

    2018-01-01

    Using 3D anatomical landmarks from adult human head MRIs, we assessed the magnitude of inter-operator differences in Procrustes-based geometric morphometric analyses. An in-depth analysis of both absolute and relative error was performed in a subsample of individuals with replicated digitization by three different operators. The effect of inter-operator differences was also explored in a large sample of more than 900 individuals. Although absolute error was not unusual for MRI measurements, including bone landmarks, shape was particularly affected by differences among operators, with up to more than 30% of sample variation accounted for by this type of error. The magnitude of the bias was such that it dominated the main pattern of bone and total (all landmarks included) shape variation, largely surpassing the effect of sex differences between hundreds of men and women. In contrast, however, we found higher reproducibility in soft-tissue nasal landmarks, despite relatively larger errors in estimates of nasal size. Our study exemplifies the assessment of measurement error using geometric morphometrics on landmarks from MRIs and stresses the importance of relating it to total sample variance within the specific methodological framework being used. In summary, precise landmarks may not necessarily imply negligible errors, especially in shape data; indeed, size and shape may be differentially impacted by measurement error, and different types of landmarks may have relatively larger or smaller errors. Importantly, and consistently with other recent studies using geometric morphometrics on digital images (which, however, were not specific to MRI data), this study showed that inter-operator biases can be a major source of error in the analysis of large samples, such as those that are becoming increasingly common in the 'era of big data'.
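
    Procrustes-based geometric morphometrics removes location, scale, and rotation before comparing landmark configurations. A minimal numpy sketch of ordinary (two-configuration) Procrustes alignment, with made-up 3D landmark data, illustrates the idea:

```python
import numpy as np

def procrustes_align(ref, mob):
    """Align configuration `mob` to `ref`: translate to centroids, scale to
    unit centroid size, then find the optimal rotation by SVD."""
    A = ref - ref.mean(axis=0)
    B = mob - mob.mean(axis=0)
    A = A / np.linalg.norm(A)       # centroid size = Frobenius norm
    B = B / np.linalg.norm(B)
    U, _, Vt = np.linalg.svd(B.T @ A)
    R = U @ Vt                      # optimal rotation (may include reflection)
    B_rot = B @ R
    d = np.linalg.norm(A - B_rot)   # Procrustes distance
    return B_rot, d

# A 3D landmark configuration and a rotated, scaled, translated copy of it.
rng = np.random.default_rng(1)
shape = rng.normal(size=(10, 3))
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
copy = 2.5 * shape @ Rz + np.array([4.0, -1.0, 2.0])
_, dist = procrustes_align(shape, copy)
```

Because the copy differs only by similarity transformations, its Procrustes distance from the original is essentially zero; inter-operator error shows up as nonzero distances between replicated digitizations of the same head.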

  17. Procrustes-based geometric morphometrics on MRI images: An example of inter-operator bias in 3D landmarks and its impact on big datasets

    PubMed Central

    Ivanovska, Tatyana; Bülow, Robin; Biffar, Reiner; Cardini, Andrea

    2018-01-01

    Using 3D anatomical landmarks from adult human head MRIs, we assessed the magnitude of inter-operator differences in Procrustes-based geometric morphometric analyses. An in-depth analysis of both absolute and relative error was performed in a subsample of individuals with replicated digitization by three different operators. The effect of inter-operator differences was also explored in a large sample of more than 900 individuals. Although absolute error was not unusual for MRI measurements, including bone landmarks, shape was particularly affected by differences among operators, with up to more than 30% of sample variation accounted for by this type of error. The magnitude of the bias was such that it dominated the main pattern of bone and total (all landmarks included) shape variation, largely surpassing the effect of sex differences between hundreds of men and women. In contrast, however, we found higher reproducibility in soft-tissue nasal landmarks, despite relatively larger errors in estimates of nasal size. Our study exemplifies the assessment of measurement error using geometric morphometrics on landmarks from MRIs and stresses the importance of relating it to total sample variance within the specific methodological framework being used. In summary, precise landmarks may not necessarily imply negligible errors, especially in shape data; indeed, size and shape may be differentially impacted by measurement error, and different types of landmarks may have relatively larger or smaller errors. Importantly, and consistently with other recent studies using geometric morphometrics on digital images (which, however, were not specific to MRI data), this study showed that inter-operator biases can be a major source of error in the analysis of large samples, such as those that are becoming increasingly common in the 'era of big data'. PMID:29787586

  18. Comparing alchemical and physical pathway methods for computing the absolute binding free energy of charged ligands.

    PubMed

    Deng, Nanjie; Cui, Di; Zhang, Bin W; Xia, Junchao; Cruz, Jeffrey; Levy, Ronald

    2018-06-13

    Accurately predicting absolute binding free energies of protein-ligand complexes is important as a fundamental problem in both computational biophysics and pharmaceutical discovery. Calculating binding free energies for charged ligands is generally considered to be challenging because of the strong electrostatic interactions between the ligand and its environment in aqueous solution. In this work, we compare the performance of the potential of mean force (PMF) method and the double decoupling method (DDM) for computing absolute binding free energies for charged ligands. We first clarify an unresolved issue concerning the explicit use of the binding site volume to define the complexed state in DDM together with the use of harmonic restraints. We also provide an alternative derivation for the formula for absolute binding free energy using the PMF approach. We use these formulas to compute the binding free energy of charged ligands at an allosteric site of HIV-1 integrase, which has emerged in recent years as a promising target for developing antiviral therapy. As compared with the experimental results, the absolute binding free energies obtained by using the PMF approach show unsigned errors of 1.5-3.4 kcal mol⁻¹, which are somewhat better than the results from DDM (unsigned errors of 1.6-4.3 kcal mol⁻¹) using the same amount of CPU time. According to the DDM decomposition of the binding free energy, the ligand binding appears to be dominated by nonpolar interactions despite the presence of very large and favorable intermolecular ligand-receptor electrostatic interactions, which are almost completely cancelled out by the equally large free energy cost of desolvation of the charged moiety of the ligands in solution. We discuss the relative strengths of computing absolute binding free energies using the alchemical and physical pathway methods.

  19. Correcting for Optimistic Prediction in Small Data Sets

    PubMed Central

    Smith, Gordon C. S.; Seaman, Shaun R.; Wood, Angela M.; Royston, Patrick; White, Ian R.

    2014-01-01

    The C statistic is a commonly reported measure of screening test performance. Optimistic estimation of the C statistic is a frequent problem because of overfitting of statistical models in small data sets, and methods exist to correct for this issue. However, many studies do not use such methods, and those that do correct for optimism use diverse methods, some of which are known to be biased. We used clinical data sets (United Kingdom Down syndrome screening data from Glasgow (1991–2003), Edinburgh (1999–2003), and Cambridge (1990–2006), as well as Scottish national pregnancy discharge data (2004–2007)) to evaluate different approaches to adjustment for optimism. We found that sample splitting, cross-validation without replication, and leave-1-out cross-validation produced optimism-adjusted estimates of the C statistic that were biased and/or associated with greater absolute error than other available methods. Cross-validation with replication, bootstrapping, and a new method (leave-pair-out cross-validation) all generated unbiased optimism-adjusted estimates of the C statistic and had similar absolute errors in the clinical data set. Larger simulation studies confirmed that all 3 methods performed similarly with 10 or more events per variable, or when the C statistic was 0.9 or greater. However, with lower events per variable or lower C statistics, bootstrapping tended to be optimistic but with lower absolute and mean squared errors than both methods of cross-validation. PMID:24966219
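
    Harrell-style bootstrap optimism correction refits the model on each resample and subtracts the average train-minus-test gap in the C statistic. A numpy-only sketch, using a linear score fit by least squares as a hypothetical stand-in for the study's actual screening models:

```python
import numpy as np

def c_statistic(score, y):
    """Probability a random event outranks a random non-event (ties count 1/2)."""
    pos, neg = score[y == 1], score[y == 0]
    diff = pos[:, None] - neg[None, :]
    return (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / (len(pos) * len(neg))

def optimism_corrected_c(X, y, n_boot=200, seed=0):
    """Apparent C minus the mean bootstrap optimism (Harrell's procedure)."""
    rng = np.random.default_rng(seed)
    design = lambda X: np.column_stack([np.ones(len(X)), X])
    fit = lambda X, y: np.linalg.lstsq(design(X), y, rcond=None)[0]
    apparent = c_statistic(design(X) @ fit(X, y), y)
    optimism = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        if len(np.unique(y[idx])) < 2:
            continue                      # resample must contain both classes
        b = fit(X[idx], y[idx])
        optimism.append(c_statistic(design(X[idx]) @ b, y[idx])
                        - c_statistic(design(X) @ b, y))
    return apparent - np.mean(optimism)

rng = np.random.default_rng(2)
X = rng.normal(size=(80, 3))
y = (X[:, 0] + 0.5 * rng.normal(size=80) > 0).astype(int)
c_corr = optimism_corrected_c(X, y)
c_app = c_statistic(np.column_stack([np.ones(80), X])
                    @ np.linalg.lstsq(np.column_stack([np.ones(80), X]),
                                      y, rcond=None)[0], y)
```

The corrected estimate is typically slightly below the apparent C when the model is overfit to a small sample, which is exactly the bias the paper evaluates.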

  20. Modelling and Predicting Backstroke Start Performance Using Non-Linear and Linear Models.

    PubMed

    de Jesus, Karla; Ayala, Helon V H; de Jesus, Kelly; Coelho, Leandro Dos S; Medeiros, Alexandre I A; Abraldes, José A; Vaz, Mário A P; Fernandes, Ricardo J; Vilas-Boas, João Paulo

    2018-03-01

    Our aim was to compare non-linear and linear mathematical model responses for backstroke start performance prediction. Ten swimmers randomly completed eight 15 m backstroke starts with feet over the wedge: four with hands on the highest horizontal handgrip and four on the vertical handgrip. Swimmers were videotaped using a dual media camera set-up, with the starts being performed over an instrumented block with four force plates. Artificial neural networks were applied to predict the 5 m start time using kinematic and kinetic variables, with accuracy assessed through the mean absolute percentage error. Artificial neural networks predicted start time more robustly than the linear model when changing from the training to the validation dataset for the vertical handgrip (3.95 ± 1.67 vs. 5.92 ± 3.27%). Artificial neural networks obtained a smaller mean absolute percentage error than the linear model in the horizontal (0.43 ± 0.19 vs. 0.98 ± 0.19%) and vertical handgrip (0.45 ± 0.19 vs. 1.38 ± 0.30%) conditions using all input data. The best artificial neural network validation revealed a smaller mean absolute error than the linear model for the horizontal (0.007 vs. 0.04 s) and vertical handgrip (0.01 vs. 0.03 s). Artificial neural networks should be used for backstroke 5 m start time prediction due to the quite small differences among elite level performances.

  1. Determining the linkage of disease-resistance genes to molecular markers: the LOD-SCORE method revisited with regard to necessary sample sizes.

    PubMed

    Hühn, M

    1995-05-01

    Some approaches to molecular marker-assisted linkage detection for a dominant disease-resistance trait based on a segregating F2 population are discussed. Analysis of two-point linkage is carried out by the traditional measure of maximum lod score. It depends on (1) the maximum-likelihood estimate of the recombination fraction between the marker and the disease-resistance gene locus, (2) the observed absolute frequencies, and (3) the unknown number of tested individuals. If one replaces the absolute frequencies by expressions depending on the unknown sample size and the maximum-likelihood estimate of recombination value, the conventional rule for significant linkage (maximum lod score exceeds a given linkage threshold) can be resolved for the sample size. For each sub-population used for linkage analysis [susceptible (= recessive) individuals, resistant (= dominant) individuals, complete F2] this approach gives a lower bound for the necessary number of individuals required for the detection of significant two-point linkage by the lod-score method.
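
    The sample-size logic can be made concrete in a few lines. A hedged sketch for the simpler backcross case, where each individual is scored as recombinant or not (the paper's F2 dominant-trait formulas differ in detail): assuming the observed recombinant fraction equals the estimate, the lod score grows linearly with n, so solving "expected lod ≥ threshold" for n gives a lower bound on the number of individuals needed to declare significant linkage.

```python
import math

def min_sample_size(theta_hat, lod_threshold=3.0):
    """Smallest n whose expected lod score at recombination fraction
    theta_hat reaches the linkage threshold (backcross simplification)."""
    # Expected per-individual lod contribution relative to theta = 0.5:
    elod = (theta_hat * math.log10(2 * theta_hat)
            + (1 - theta_hat) * math.log10(2 * (1 - theta_hat)))
    return math.ceil(lod_threshold / elod)

n_tight = min_sample_size(0.10)   # tight linkage: few individuals suffice
n_loose = min_sample_size(0.20)   # looser linkage: more individuals needed
```

As in the record above, the looser the linkage (larger recombination fraction), the larger the required sample.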

  2. Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown

    ERIC Educational Resources Information Center

    Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi

    2014-01-01

    When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…

  3. Ultrahigh-resolution mapping of peatland microform using ground-based structure from motion with multiview stereo

    NASA Astrophysics Data System (ADS)

    Mercer, Jason J.; Westbrook, Cherie J.

    2016-11-01

    Microform is important in understanding wetland functions and processes. But collecting imagery of and mapping the physical structure of peatlands is often expensive and requires specialized equipment. We assessed the utility of coupling computer vision-based structure from motion with multiview stereo photogrammetry (SfM-MVS) and ground-based photos to map peatland topography. The SfM-MVS technique was tested on an alpine peatland in Banff National Park, Canada, and guidance was provided on minimizing errors. We found that coupling SfM-MVS with ground-based photos taken with a point-and-shoot camera is a viable and competitive technique for generating ultrahigh-resolution elevations (i.e., <0.01 m resolution, mean absolute error of 0.083 m). In evaluating 100+ viable SfM-MVS data collection and processing scenarios, vegetation was found to considerably influence accuracy. Vegetation class, when accounted for, reduced absolute error by as much as 50%. The logistic flexibility of ground-based SfM-MVS, paired with its high resolution, low error, and low cost, makes it a research area worth developing as well as a useful addition to the wetland scientist's toolkit.

  4. Using a Hybrid Model to Forecast the Prevalence of Schistosomiasis in Humans.

    PubMed

    Zhou, Lingling; Xia, Jing; Yu, Lijing; Wang, Ying; Shi, Yun; Cai, Shunxiang; Nie, Shaofa

    2016-03-23

    We previously proposed a hybrid model combining both the autoregressive integrated moving average (ARIMA) and the nonlinear autoregressive neural network (NARNN) models in forecasting schistosomiasis. Our purpose in the current study was to forecast the annual prevalence of human schistosomiasis in Yangxin County using our ARIMA-NARNN model, thereby further certifying the reliability of our hybrid model. We used the ARIMA, NARNN and ARIMA-NARNN models to fit and forecast the annual prevalence of schistosomiasis. The modeling period covered the annual prevalence from 1956 to 2008, while the testing period covered 2009 to 2012. The mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) were used to measure model performance. We then reconstructed the hybrid model to forecast the annual prevalence from 2013 to 2016. The modeling and testing errors generated by the ARIMA-NARNN model were lower than those obtained from either the single ARIMA or NARNN models. The predicted annual prevalence from 2013 to 2016 demonstrated an initial decreasing trend, followed by an increase. The ARIMA-NARNN model can be well applied to analyze surveillance data for early warning systems for the control and elimination of schistosomiasis.
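
    The hybrid idea is to let a linear model capture the trend and a nonlinear model mop up its residuals. A compact numpy sketch with a least-squares AR(2) linear stage and a cubic polynomial in the lagged residual standing in for the NARNN (the lag order, polynomial stand-in, and toy series are all illustrative assumptions):

```python
import numpy as np

def fit_ar(y, p=2):
    """Least-squares AR(p) with intercept; returns coefficients and residuals."""
    X = np.column_stack([np.ones(len(y) - p)]
                        + [y[p - k: len(y) - k] for k in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    resid = y[p:] - X @ coef
    return coef, resid

# Toy series with trend, seasonality, and noise.
rng = np.random.default_rng(3)
t = np.arange(120, dtype=float)
y = 10 + 0.05 * t + np.sin(t / 3) + 0.1 * rng.normal(size=120)

coef, resid = fit_ar(y, p=2)                     # stage 1: linear (ARIMA-like)

# Stage 2: a cubic polynomial in the lag-1 residual stands in for the NARNN.
r_prev, r_now = resid[:-1], resid[1:]
poly = np.polynomial.polynomial.polyfit(r_prev, r_now, deg=3)
r_hat = np.polynomial.polynomial.polyval(r_prev, poly)

mse_linear = np.mean(r_now ** 2)                 # linear stage alone
mse_hybrid = np.mean((r_now - r_hat) ** 2)       # after nonlinear correction
```

The final hybrid forecast is the linear prediction plus the nonlinear stage's predicted residual; on the training data the correction can only reduce (or match) the linear stage's error.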

  5. Alternatives to accuracy and bias metrics based on percentage errors for radiation belt modeling applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morley, Steven Karl

    This report reviews existing literature describing forecast accuracy metrics, concentrating on those based on relative errors and percentage errors. We then review how the most common of these metrics, the mean absolute percentage error (MAPE), has been applied in recent radiation belt modeling literature. Finally, we describe metrics based on the ratios of predicted to observed values (the accuracy ratio) that address the drawbacks inherent in using MAPE. Specifically, we define and recommend the median log accuracy ratio as a measure of bias and the median symmetric accuracy as a measure of accuracy.
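
    The two recommended metrics are one-liners over the accuracy ratio Q = predicted/observed; a sketch (natural log is used here, and any base works as long as the exponential matches):

```python
import numpy as np

def median_log_accuracy_ratio(obs, pred):
    """Bias measure: median of log(pred/obs); 0 means unbiased."""
    q = np.asarray(pred, float) / np.asarray(obs, float)
    return np.median(np.log(q))

def median_symmetric_accuracy(obs, pred):
    """Accuracy as a percentage that treats over- and under-prediction
    symmetrically: 100 * (exp(median |log Q|) - 1)."""
    q = np.asarray(pred, float) / np.asarray(obs, float)
    return 100.0 * (np.exp(np.median(np.abs(np.log(q)))) - 1.0)

# Predictions off by a factor of 2 in either direction score identically.
bias = median_log_accuracy_ratio([1.0, 2.0, 4.0], [2.0, 4.0, 2.0])
msa = median_symmetric_accuracy([1.0, 2.0, 4.0], [2.0, 4.0, 2.0])
```

Unlike MAPE, these ratio-based metrics require strictly positive data but penalize a factor-of-2 over-prediction and a factor-of-2 under-prediction equally, which is the drawback of MAPE the report highlights.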

  6. Development and Applications of Orthogonality Constrained Density Functional Theory for the Accurate Simulation of X-Ray Absorption Spectroscopy

    NASA Astrophysics Data System (ADS)

    Derricotte, Wallace D.

    The aim of this dissertation is to address the theoretical challenges of calculating core-excited states within the framework of orthogonality constrained density functional theory (OCDFT). OCDFT is a well-established variational, time independent formulation of DFT for the computation of electronic excited states. In this work, the theory is first extended to compute core-excited states and generalized to calculate multiple excited state solutions. An initial benchmark is performed on a set of 40 unique core-excitations, highlighting that OCDFT excitation energies have a mean absolute error of 1.0 eV. Next, a novel implementation of the spin-free exact-two-component (X2C) one-electron treatment of scalar relativistic effects is presented and combined with OCDFT in an effort to calculate core excited states of transition metal complexes. The X2C-OCDFT spectra of three organotitanium complexes (TiCl4, TiCpCl3, and TiCp2Cl2) are shown to be in good agreement with experimental results and show a maximum absolute error of 5-6 eV. Next, the issue of assigning core excited states is addressed by introducing an automated approach to analyzing the excited state MO by quantifying its local contributions using a unique orbital basis known as localized intrinsic valence virtual orbitals (LIVVOs). The utility of this approach is highlighted by studying sulfur core-excitations in ethanethiol and benzenethiol, as well as the hydrogen bonding in the water dimer. Finally, an approach to selectively target specific core-excited states in OCDFT based on atomic orbital subspace projection is presented in an effort to target core excited states of chemisorbed organic molecules. The core excitation spectrum of pyrazine chemisorbed on Si(100) is calculated using OCDFT and further characterized using the LIVVO approach.

  7. Reconstructing the spatial pattern of historical forest land in China in the past 300 years

    NASA Astrophysics Data System (ADS)

    Yang, Xuhong; Jin, Xiaobin; Xiang, Xiaomin; Fan, Yeting; Shan, Wei; Zhou, Yinkang

    2018-06-01

    The reconstruction of the historical forest spatial distribution is of great significance to understanding land surface cover in historical periods as well as its climate and ecological effects. Based on the maximum scope of historical forest land before human intervention and the characteristics of human behaviors in farmland reclamation and deforestation for heating and timber, we create a spatial evolution model to reconstruct the spatial pattern of historical forest land. The model integrates the land suitability for reclamation, the difficulty of deforestation, the attractiveness of timber trading markets and the abundance of forest resources to calibrate the potential scope of historical forest land, with the rationale that the higher the probability of deforestation for reclamation and wood, the greater the likelihood that the forest land will be deforested. Compared to the satellite-based forest land distribution in 2000, about 78.5% of our reconstructed historical forest grids have an absolute error between -25% and 25%, while as many as 95.85% of those grids have an absolute error between -50% and 50%, which indirectly validates the feasibility of our reconstruction model. Then, we simulate the spatial distribution of forest land in China in 1661, 1724, 1820, 1887, 1933 and 1952 with a grid resolution of 1 km × 1 km. Our results show that (1) the reconstructed historical forest land in China in the past 300 years is concentrated in DaXingAnLing, XiaoXingAnLing, ChangBaiShan, HengDuanShan, DaBaShan, WuYiShan, DaBieShan, XueFengShan, etc.; (2) in terms of the spatial evolution, historical forest land shrank gradually in the LiaoHe plains, SongHuaJiang-NenJiang plains and SanJiang plains of northeast China, the Sichuan basins and the YunNan-GuiZhou Plateau; and (3) these observations are consistent with the progression of agricultural reclamation in China in the past 300 years towards Northeast China and Southwest China.

  8. Inertial Measures of Motion for Clinical Biomechanics: Comparative Assessment of Accuracy under Controlled Conditions - Effect of Velocity

    PubMed Central

    Lebel, Karina; Boissy, Patrick; Hamel, Mathieu; Duval, Christian

    2013-01-01

    Background: Inertial measurement of motion with Attitude and Heading Reference Systems (AHRS) is emerging as an alternative to 3D motion capture systems in biomechanics. The objectives of this study are: 1) to describe the absolute and relative accuracy of multiple units of commercially available AHRS under various types of motion; and 2) to evaluate the effect of motion velocity on the accuracy of these measurements. Methods: The criterion validity of accuracy was established under controlled conditions using an instrumented Gimbal table. AHRS modules were carefully attached to the center plate of the Gimbal table and put through experimental static and dynamic conditions. Static and absolute accuracy was assessed by comparing the AHRS orientation measurements to those obtained using an optical gold standard. Relative accuracy was assessed by measuring the variation in relative orientation between modules during trials. Findings: The evaluated AHRS systems demonstrated good absolute static accuracy (mean error < 0.5°) and clinically acceptable absolute accuracy under conditions of slow motion (mean error between 0.5° and 3.1°). In slow motions, relative accuracy varied from 2° to 7° depending on the type of AHRS and the type of rotation. Absolute and relative accuracy were significantly affected (p < 0.05) by velocity during sustained motions. The extent of that effect varied across AHRS. Interpretation: Absolute and relative accuracy of AHRS are affected by environmental magnetic perturbations and conditions of motion. Relative accuracy of AHRS is mostly affected by the ability of all modules to locate the same global reference coordinate system at all times. Conclusions: Existing AHRS systems can be considered for use in clinical biomechanics under constrained conditions of use. While their individual capacity to track absolute motion is relatively consistent, the use of multiple AHRS modules to compute relative motion between rigid bodies needs to be optimized according to the conditions of operation. PMID:24260324
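
    The relative accuracy measured above reduces to the angle of the rotation separating two modules' orientation estimates. A numpy sketch using rotation matrices (AHRS units typically report quaternions; a pure heading offset is used here for illustration):

```python
import numpy as np

def rot_z(a):
    """Rotation about the z (heading) axis by angle a (radians)."""
    return np.array([[np.cos(a), -np.sin(a), 0],
                     [np.sin(a),  np.cos(a), 0],
                     [0, 0, 1]])

def relative_angle_deg(R1, R2):
    """Angle of the rotation taking R1 to R2 (geodesic distance on SO(3))."""
    R = R1.T @ R2
    c = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(c))

# Two rigidly mounted modules that should agree report headings 1.5 deg apart.
err = relative_angle_deg(rot_z(np.radians(30.0)), rot_z(np.radians(31.5)))
```

When both modules track the same global reference frame, this angle stays near zero; magnetic perturbations that shift one module's reference frame show up directly as relative orientation error.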

  9. Analysis of the Accuracy of Ballistic Descent from a Circular Circumterrestrial Orbit

    NASA Astrophysics Data System (ADS)

    Sikharulidze, Yu. G.; Korchagin, A. N.

    2002-01-01

    The problem of transporting the results of experiments and observations to Earth appears every so often in space research. Its simplest and lowest-cost solution is the employment of a small ballistic reentry spacecraft. Such a spacecraft has no system for controlling the descent trajectory in the atmosphere. This can result in a large spread of landing points, which makes it difficult to locate the spacecraft and often jeopardizes a safe landing. In this work, the choice of a compromise flight scheme is considered, which includes the optimum braking maneuver, adequate conditions of entry into the atmosphere with limited heating and overload, and also the possibility of landing within the limits of a circle with a radius of 12.5 km. The following disturbing factors were taken into account in the analysis of the landing accuracy: the errors of the braking impulse execution, the variations of the atmosphere density and the wind, the error in the specification of the ballistic coefficient of the reentry spacecraft, and a displacement of its center of mass from the symmetry axis. It is demonstrated that the optimum maneuver assures the maximum absolute value of the reentry angle and the insensitivity of the descent trajectory with respect to small errors of orientation of the braking engine in the plane of the orbit. It is also demonstrated that the possible error of the landing point due to the error in the specification of the ballistic coefficient does not depend (in the linear approximation) upon its value, but only upon the reentry angle and the accuracy of specification of this coefficient. A guided parachute with an aerodynamic efficiency of about two should be used on the last leg of the reentry trajectory. This will allow one to land within a prescribed range and to produce adequate conditions for the interception of the reentry spacecraft by a helicopter in order to prevent a rough landing.

  10. Cross section of resonant Raman scattering of light by polyenes

    NASA Astrophysics Data System (ADS)

    Verdyugin, V. V.; Burshteyn, K. Ya.; Shorygin, P. P.

    1987-03-01

    An experimental study is presented of the resonant Raman spectra of beta-carotene. Absolute differential cross sections are obtained for the most intense Raman spectral lines with excitation at the absorption maximum. A theoretical analysis is presented of the variation in the absolute differential cross section as a function of the number of conjugated double bonds in the polyene.

  11. An Upper Bound on Orbital Debris Collision Probability When Only One Object has Position Uncertainty Information

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2015-01-01

    Upper bounds on high-speed satellite collision probability, P (sub c), have been investigated. Previous methods assume that an individual position error covariance matrix is available for each object, the two matrices being combined into a single, relative position error covariance matrix. Components of the combined error covariance are then varied to obtain a maximum P (sub c). If error covariance information is available for only one of the two objects, either some default shape has been used or nothing could be done. An alternative is presented that uses the known covariance information along with a critical value of the missing covariance to obtain an approximate but useful P (sub c) upper bound. There are various avenues along which an upper bound on the high-speed satellite collision probability has been pursued. Typically, for the collision-plane representation of the high-speed collision probability problem, the predicted miss position in the collision plane is assumed fixed. Then the shape (aspect ratio of the ellipse), the size (scaling of the standard deviations) or the orientation (rotation of the ellipse principal axes) of the combined position error ellipse is varied to obtain a maximum P (sub c). Regardless of the exact details of the approach, previously presented methods all assume that an individual position error covariance matrix is available for each object and that the two are combined into a single, relative position error covariance matrix. This combined position error covariance matrix is then modified according to the chosen scheme to arrive at a maximum P (sub c). But what if error covariance information for one of the two objects is not available? When error covariance information for one of the objects is not available, the analyst has commonly defaulted to the situation in which only the relative miss position and velocity are known, without any corresponding state error covariance information.
The various usual methods of finding a maximum P (sub c) do no good because the analyst defaults to no knowledge of the combined, relative position error covariance matrix. It is reasonable to think that, even with no covariance information, an analyst might still attempt to determine the error covariance matrix that results in an upper bound on P (sub c). Without some guidance on limits to the shape, size and orientation of the unknown covariance matrix, the limiting case is a degenerate ellipse lying along the relative miss vector in the collision plane. Unless the miss position is exceptionally large or the at-risk object is exceptionally small, this method results in a maximum P (sub c) too large to be of practical use. For example, assume the miss distance equals the current ISS along-track alert volume distance of (+ or -) 25 kilometers and the at-risk area has a 70-meter radius. The maximum (degenerate ellipse) P (sub c) is then about 0.00136. At 40 kilometers, the maximum P (sub c) would be 0.00085, which is still almost an order of magnitude larger than the ISS maneuver threshold of 0.0001. In fact, a miss distance of almost 340 kilometers is necessary to reduce the maximum P (sub c) associated with this degenerate ellipse to the ISS maneuver threshold value. Such a result is frequently of no practical value to the analyst. Some improvement may be made by realizing that while the position error covariance matrix of one of the objects (usually the debris object) may not be known, the position error covariance matrix of the other object (usually the asset) is almost always available. Making use of the position error covariance information for the one object provides an improvement in finding a maximum P (sub c) which, in some cases, may offer real utility. The equations to be used are presented and their use discussed.
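    The degenerate-ellipse figures quoted in this abstract follow from maximizing a one-dimensional Gaussian's probability mass over its standard deviation, which gives P (sub c) = 2R / (d * sqrt(2*pi*e)) for miss distance d and at-risk radius R. A minimal sketch (the function name is ours, not from the paper):

```python
import math

def max_pc_degenerate(miss_distance_m, hard_body_radius_m):
    """Upper-bound collision probability for a degenerate (line) covariance
    aligned with the miss vector. Maximizing the small-R approximation
    P = 2R/(sqrt(2*pi)*sigma) * exp(-d**2/(2*sigma**2)) over sigma gives
    sigma = d and P_max = 2R / (d * sqrt(2*pi*e))."""
    d, r = miss_distance_m, hard_body_radius_m
    return 2.0 * r / (d * math.sqrt(2.0 * math.pi * math.e))

# Values quoted in the abstract for a 70 m at-risk radius:
print(max_pc_degenerate(25_000, 70))   # roughly 0.00136
print(max_pc_degenerate(40_000, 70))   # roughly 0.00085
print(max_pc_degenerate(340_000, 70))  # roughly the 0.0001 ISS threshold
```

    Reproducing all three quoted numbers from this one closed form supports the abstract's point that the unconstrained degenerate-ellipse bound only drops to operationally useful levels at very large miss distances.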

  12. Per-pixel bias-variance decomposition of continuous errors in data-driven geospatial modeling: A case study in environmental remote sensing

    NASA Astrophysics Data System (ADS)

    Gao, Jing; Burt, James E.

    2017-12-01

    This study investigates the usefulness of a per-pixel bias-variance error decomposition (BVD) for understanding and improving spatially-explicit data-driven models of continuous variables in environmental remote sensing (ERS). BVD is a model evaluation method that originated in machine learning and has not previously been examined for ERS applications. Demonstrated with a showcase regression-tree model mapping land imperviousness (0-100%) using Landsat images, our results show that BVD can reveal sources of estimation error, map how these sources vary across space, reveal the effects of various model characteristics on estimation accuracy, and enable in-depth comparison of different error metrics. Specifically, BVD bias maps can help analysts identify and delineate model spatial non-stationarity; BVD variance maps can indicate the potential effects of ensemble methods (e.g. bagging) and inform efficient training-sample allocation - training samples should capture the full complexity of the modeled process, and more samples should be allocated to regions with more complex underlying processes rather than to regions covering larger areas. By examining the relationships between model characteristics and their effects on estimation accuracy, revealed by BVD for both absolute and squared errors (i.e. the error is the absolute or the squared value of the difference between observation and estimate), we found that the two error metrics embody different diagnostic emphases, can lead to different conclusions about the same model, and may suggest different solutions for performance improvement. We emphasize BVD's strength in revealing the connection between model characteristics and estimation accuracy, as understanding this relationship empowers analysts to steer performance effectively through model adjustments.
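    For squared error, a per-pixel decomposition of the kind this abstract describes reduces to the classic identity (expected error) = bias^2 + variance across an ensemble of model runs. A numpy sketch under assumed array shapes (the data here are synthetic, not the paper's Landsat imperviousness estimates):

```python
import numpy as np

def per_pixel_bvd(predictions, truth):
    """Per-pixel bias and variance over an ensemble of model runs.
    predictions: (n_models, n_pixels); truth: (n_pixels,).
    For squared error, expected error per pixel = bias**2 + variance."""
    mean_pred = predictions.mean(axis=0)
    bias = mean_pred - truth             # maps systematic error / non-stationarity
    variance = predictions.var(axis=0)   # the part ensembling (bagging) can reduce
    return bias, variance

# Synthetic imperviousness (0-100%) estimates from 50 bootstrap models
rng = np.random.default_rng(0)
truth = rng.uniform(0, 100, size=6)
preds = truth + 2.0 + rng.normal(0, 5.0, size=(50, 6))  # biased + noisy models
bias, var = per_pixel_bvd(preds, truth)
mse = ((preds - truth) ** 2).mean(axis=0)               # equals bias**2 + var
```

    The identity holds exactly with numpy's default population variance (ddof=0), which is one reason the squared-error metric decomposes so cleanly while the absolute-error metric does not.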

  13. Validation of the ASTER Global Digital Elevation Model Version 2 over the conterminous United States

    USGS Publications Warehouse

    Gesch, Dean B.; Oimoen, Michael J.; Zhang, Zheng; Meyer, David J.; Danielson, Jeffrey J.

    2012-01-01

    The ASTER Global Digital Elevation Model Version 2 (GDEM v2) was evaluated over the conterminous United States in a manner similar to the validation conducted for the original GDEM Version 1 (v1) in 2009. The absolute vertical accuracy of GDEM v2 was calculated by comparison with more than 18,000 independent reference geodetic ground control points from the National Geodetic Survey. The root mean square error (RMSE) measured for GDEM v2 is 8.68 meters. This compares with the RMSE of 9.34 meters for GDEM v1. Another important descriptor of vertical accuracy is the mean error, or bias, which indicates if a DEM has an overall vertical offset from true ground level. The GDEM v2 mean error of -0.20 meters is a significant improvement over the GDEM v1 mean error of -3.69 meters. The absolute vertical accuracy assessment results, both mean error and RMSE, were segmented by land cover to examine the effects of cover types on measured errors. The GDEM v2 mean errors by land cover class verify that the presence of aboveground features (tree canopies and built structures) causes a positive elevation bias, as would be expected for an imaging system like ASTER. In open ground classes (little or no vegetation or structures with significant aboveground height), GDEM v2 exhibits a negative bias on the order of 1 meter. GDEM v2 was also evaluated by differencing with the Shuttle Radar Topography Mission (SRTM) dataset. In many forested areas, GDEM v2 has elevations that are higher in the canopy than SRTM.
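    The two accuracy descriptors used in this validation, mean error (bias) and RMSE, are straightforward to compute from checkpoint differences; a small sketch with illustrative numbers (not the actual NGS checkpoints):

```python
import numpy as np

def vertical_accuracy(dem_elev, ref_elev):
    """Mean error (bias) and RMSE of DEM elevations against reference
    geodetic control points. A negative bias means the DEM sits below
    true ground level on average; aboveground features push it positive."""
    err = np.asarray(dem_elev) - np.asarray(ref_elev)
    return err.mean(), np.sqrt((err ** 2).mean())

# Illustrative checkpoint elevations in meters (synthetic, for the sketch)
bias, rmse = vertical_accuracy([101.0, 99.0, 102.0, 98.0], [100.0] * 4)
```

    Note how the two metrics answer different questions: this synthetic example has zero bias but a nonzero RMSE, whereas GDEM v2's -0.20 m bias against an 8.68 m RMSE shows its errors are large but nearly centered.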

  14. Design and validation of a portable, inexpensive and multi-beam timing light system using the Nintendo Wii hand controllers.

    PubMed

    Clark, Ross A; Paterson, Kade; Ritchie, Callan; Blundell, Simon; Bryant, Adam L

    2011-03-01

    Commercial timing light systems (CTLS) provide precise measurement of athletes' running velocity; however, they are often expensive and difficult to transport. In this study an inexpensive, wireless and portable timing light system was created using the infrared camera in Nintendo Wii hand controllers (NWHC). System creation with gold-standard validation. A Windows-based software program using NWHC to replicate a dual-beam timing gate was created. Firstly, data collected during 2 m walking and running trials were validated against a 3D kinematic system. Secondly, data recorded during 5 m running trials at various intensities, from standing or flying starts, were compared to a single-beam CTLS and to the independent and average scores of three handheld stopwatch (HS) operators. Intraclass correlation coefficients and Bland-Altman plots were used to assess validity. Absolute error quartiles and the percentage of trials within absolute error threshold ranges were used to determine accuracy. The NWHC system was valid when compared against the 3D kinematic system (ICC=0.99, median absolute error (MAR)=2.95%). For the flying 5 m trials the NWHC system possessed excellent validity and precision (ICC=0.97, MAR<3%) when compared with the CTLS. In contrast, the NWHC system and the HS values during standing-start trials possessed only modest validity (ICC<0.75) and accuracy (MAR>8%). A NWHC timing light system is inexpensive, portable and valid for assessing running velocity. Errors in the 5 m standing-start trials may have been due to erroneous event detection by either the commercial or NWHC-based timing light systems. Copyright © 2010 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  15. A prediction model of short-term ionospheric foF2 based on AdaBoost

    NASA Astrophysics Data System (ADS)

    Zhao, Xiukuan; Ning, Baiqi; Liu, Libo; Song, Gangbing

    2014-02-01

    In this paper, the AdaBoost-BP algorithm is used to construct a new model to predict the critical frequency of the ionospheric F2-layer (foF2) one hour ahead. Different indices were used to characterize ionospheric diurnal and seasonal variations and their dependence on solar and geomagnetic activity. These indices, together with the currently observed foF2 value, were input to the prediction model, which output the foF2 value one hour ahead. We analyzed twenty-two years of foF2 data from nine ionosonde stations in the East-Asian sector. The first eleven years of data were used as a training dataset and the second eleven years as a testing dataset. The results show that the performance of AdaBoost-BP is better than those of the BP Neural Network (BPNN), Support Vector Regression (SVR) and the IRI model. For example, the AdaBoost-BP prediction absolute error of foF2 at Irkutsk station (a middle-latitude station) is 0.32 MHz, which is better than 0.34 MHz from BPNN and 0.35 MHz from SVR, and significantly outperforms the IRI model, whose absolute error is 0.64 MHz. Meanwhile, the AdaBoost-BP prediction absolute error at the low-latitude Taipei station is 0.78 MHz, which is better than 0.81 MHz from both BPNN and SVR and 1.37 MHz from the IRI model. Finally, the variation of the AdaBoost-BP prediction error with season, solar activity and latitude is also discussed in the paper.

  16. 40 CFR Table 4 to Subpart Ooo of... - Operating Parameter Levels

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... specific gravity Condenser Exit temperature Maximum temperature Carbon absorber Total regeneration steam or nitrogen flow, or pressure (gauge or absolute) a during carbon bed regeneration cycle; and temperature of the carbon bed after regeneration (and within 15 minutes of completing any cooling cycle(s)) Maximum...

  17. 40 CFR Table 4 to Subpart Ooo of... - Operating Parameter Levels

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... specific gravity Condenser Exit temperature Maximum temperature Carbon absorber Total regeneration steam or nitrogen flow, or pressure (gauge or absolute) a during carbon bed regeneration cycle; and temperature of the carbon bed after regeneration (and within 15 minutes of completing any cooling cycle(s)) Maximum...

  18. 40 CFR Table 4 to Subpart Ooo of... - Operating Parameter Levels

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... specific gravity Condenser Exit temperature Maximum temperature Carbon absorber Total regeneration steam or nitrogen flow, or pressure (gauge or absolute) a during carbon bed regeneration cycle; and temperature of the carbon bed after regeneration (and within 15 minutes of completing any cooling cycle(s)) Maximum...

  19. Validation of a hand-held point of care device for lactate in adult and pediatric patients using traditional and locally-smoothed median and maximum absolute difference curves.

    PubMed

    Colon-Franco, Jessica Marie; Lo, Stanley F; Tarima, Sergey S; Gourlay, David; Drendel, Amy L; Brook Lerner, E

    2017-05-01

    Lactate is commonly used in septic patients and is a viable biomarker for trauma patients. Its pre-hospital use could assist in triaging and managing patients with these conditions. We evaluated the analytical performance of the point-of-care (POC) StatStrip Xpress Lactate Meter (Nova Biomedical) and compared it to the ABL800 (Radiometer). We measured lactate in 250 adult and 250 pediatric whole blood samples in 2 laboratories. The performance of the POC meter was assessed by traditional linear regression and Bland-Altman plots, and by locally-smoothed (LS) median absolute difference and maximum absolute difference (MAD and MaxAD) curves. The StatStrip was linear, with acceptable reproducibility at clinically relevant concentrations. Correlation with the ABL800 showed a negative bias for both populations, with slope, bias ±SD (% bias) of 0.78, -0.4±0.7 (-14.5%) in children and 0.80, -0.3±0.6 (-13.3%) in adults. The proportional bias appeared more significant at concentrations >4 mmol/l (36.0 mg/dl). The StatStrip misclassified 7.6 and 8.8% of pediatric and adult samples, respectively, into lower risk categories defined using guideline-driven cut-offs. The LS MAD curves identified one breakout (the concentration at which the LS MAD exceeds the total allowable error limit of 0.3 mmol/l (2.7 mg/dl)), at lactate concentrations of 3.8 and 3.2 mmol/l (34.2 and 28.8 mg/dl) in the pediatric and adult curves, respectively. Breakthroughs, points at which the LS MaxAD curve exceeds the 95th percentile of MaxADs, occur at concentrations above 7.5 mmol/l (67.6 mg/dl) for both populations, where the performance of the POC meter became erratic. We concluded that if serial lactate measurements are performed, the same method should be used for baseline and follow-up measurements. The LS MAD and LS MaxAD curves allowed visual and quantitative mapping of the performance of the lactate POC meter over the range of concentrations measured. This approach seems useful for identifying points at which the performance of a POC meter differs significantly from a comparison method, and thresholds of poor analytical performance. Copyright © 2017 Elsevier B.V. All rights reserved.
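    A locally-smoothed MAD curve of the kind described here can be approximated by sorting paired results by concentration and taking a rolling median of the absolute differences. The window size and simulated data below are our illustrative choices, not the paper's:

```python
import numpy as np

def ls_mad_curve(ref, poc, window=21):
    """Locally-smoothed median absolute difference (LS MAD) curve:
    sort paired results by the reference concentration, then take a
    rolling median of |poc - ref| so method agreement can be inspected
    as a function of concentration. window=21 is an illustrative choice."""
    order = np.argsort(ref)
    conc = ref[order]
    absdiff = np.abs(poc - ref)[order]
    half = window // 2
    mad = np.array([np.median(absdiff[max(0, i - half):i + half + 1])
                    for i in range(len(absdiff))])
    return conc, mad

# Hypothetical paired lactate results (mmol/l) with bias growing at high values
rng = np.random.default_rng(1)
ref = np.sort(rng.uniform(0.5, 10.0, 250))
poc = ref - 0.05 * ref ** 1.5 + rng.normal(0, 0.1, 250)
conc, mad = ls_mad_curve(ref, poc)
breakouts = conc[mad > 0.3]  # total allowable error limit from the abstract
```

    The first element of `breakouts` plays the role of the "breakout" concentration reported in the abstract: below it the meter stays within the allowable error, above it the smoothed disagreement exceeds 0.3 mmol/l.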

  20. Cross-correlation of point series using a new method

    NASA Technical Reports Server (NTRS)

    Strothers, Richard B.

    1994-01-01

    Traditional methods of cross-correlation of two time series do not apply to point time series. Here, a new method, devised specifically for point series, utilizes a correlation measure based on the rms difference (or, alternatively, the median absolute difference) between nearest neighbors in overlapped segments of the two series. Error estimates for the observed locations of the points, as well as a systematic shift of one series with respect to the other to accommodate a constant but unknown lead or lag, are easily incorporated into the analysis using Monte Carlo techniques. A methodological restriction adopted here is that one series be treated as a template series against which the other, called the target series, is cross-correlated. To estimate a significance level for the correlation measure, the adopted alternative (null) hypothesis is that the target series arises from a homogeneous Poisson process. The new method is applied to cross-correlating the times of the greatest geomagnetic storms with the times of maximum in the undecennial solar activity cycle.
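    The core correlation measure described here, the rms difference between nearest neighbors under a trial lag, is easy to prototype; the function names and toy event times below are illustrative, not the paper's implementation:

```python
import numpy as np

def nn_rms(template, target, shift=0.0):
    """RMS nearest-neighbor difference between two point (event-time)
    series, with the target shifted by a trial lag. Smaller values mean
    better alignment; 0 means every template event has an exact match."""
    shifted = target + shift
    diffs = np.array([np.min(np.abs(shifted - t)) for t in template])
    return np.sqrt((diffs ** 2).mean())

# Toy series: the target repeats the template events with a constant lag of 0.5
template = np.array([1.0, 4.0, 9.0])
target = template + 0.5
# Scan candidate lags and keep the one minimizing the measure
best = min(np.linspace(-1, 1, 201), key=lambda s: nn_rms(template, target, s))
```

    Here the scan recovers the constant lag (best is -0.5, where the measure drops to zero); in the paper's setting, the significance of such a minimum would then be judged against Monte Carlo realizations of a homogeneous Poisson target series.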

  1. Multiple sound source localization using gammatone auditory filtering and direct sound component detection

    NASA Astrophysics Data System (ADS)

    Chen, Huaiyu; Cao, Li

    2017-06-01

    To investigate multiple sound source localization under room reverberation and background noise, we analyze the shortcomings of the traditional broadband MUSIC method and of ordinary auditory-filtering-based broadband MUSIC, and then propose a new broadband MUSIC algorithm with gammatone auditory filtering, frequency-component selection control, and detection of the ascending segment of the direct sound component. The proposed algorithm restricts processing to frequency components within the frequency band of interest at the multichannel bandpass-filtering stage. Detecting the direct sound component of the source to suppress room reverberation interference is also proposed; its merits are fast calculation and avoiding the use of more complex de-reverberation processing algorithms. In addition, the pseudo-spectra of the different frequency channels are weighted by their maximum amplitudes for every speech frame. The proposed method performs well in both simulations and experiments in a real reverberant room. Dynamic multiple-sound-source localization results indicate that the average absolute error of the azimuth estimated by the proposed algorithm is smaller and the histogram result has higher angular resolution.

  2. Numerical simulation of mushrooms during freezing using the FEM and an enthalpy-Kirchhoff formulation

    NASA Astrophysics Data System (ADS)

    Santos, M. V.; Lespinard, A. R.

    2011-12-01

    The shelf life of mushrooms is very limited since they are susceptible to physical and microbial attack; therefore they are usually blanched and immediately frozen for commercial purposes. The aim of this work was to develop a numerical model using the finite element technique to predict freezing times of mushrooms, considering the actual shape of the product. The original heat transfer equation was reformulated using a combined enthalpy-Kirchhoff formulation, and a custom computational program was developed in Matlab 6.5 (MathWorks, Natick, Massachusetts), given the difficulties encountered when simulating this non-linear problem in commercial software. Digital images were used to generate the irregular contour and the domain discretization. The numerical predictions agreed with the experimental time-temperature curves during freezing of mushrooms (maximum absolute error <3.2°C), giving accurate results with minimal computer processing time. The codes were then applied to determine the required processing times for different operating conditions (external fluid temperatures and surface heat transfer coefficients).

  3. Predicting tropical cyclone intensity using satellite measured equivalent blackbody temperatures of cloud tops. [regression analysis

    NASA Technical Reports Server (NTRS)

    Gentry, R. C.; Rodgers, E.; Steranka, J.; Shenk, W. E.

    1978-01-01

    A regression technique was developed to forecast 24-hour changes in the maximum winds of weak (maximum winds less than or equal to 65 Kt) and strong (maximum winds greater than 65 Kt) tropical cyclones by utilizing satellite-measured equivalent blackbody temperatures around the storm, alone and together with the changes in maximum winds during the preceding 24 hours and the current maximum winds. Independent testing of these regression equations shows that the mean errors made by the equations are lower than the errors in forecasts made by persistence techniques.

  4. Absolute measurements of large mirrors

    NASA Astrophysics Data System (ADS)

    Su, Peng

    The ability to produce mirrors for large astronomical telescopes is limited by the accuracy of the systems used to test the surfaces of such mirrors. Typically the mirror surfaces are measured by comparing their actual shapes to a precision master, which may be created using combinations of mirrors, lenses, and holograms. The work presented here develops several optical testing techniques that do not rely on a large or expensive precision master reference surface. In a sense, these techniques provide absolute optical testing. The Giant Magellan Telescope (GMT) has been designed with a 350 m² collecting area provided by a 25 m diameter primary mirror made from seven independent circular mirror segments. These segments create an equivalent f/0.7 paraboloidal primary mirror consisting of a central segment and six outer segments. Each of the outer segments is 8.4 m in diameter and has an off-axis aspheric shape departing 14.5 mm from the best-fitting sphere. Much of the work in this dissertation is motivated by the need to measure the surfaces of such large mirrors accurately, without relying on a large or expensive precision reference surface. One method for absolute testing described in this dissertation uses multiple measurements relative to a reference surface that is placed in different positions with respect to the test surface of interest. The measurements are combined using an algorithm based on the maximum likelihood (ML) method. Methodologies for measuring large flat surfaces in the 2 m diameter range and for measuring the GMT primary mirror segments were specifically developed. For example, the optical figure of a 1.6 m flat mirror was determined to 2 nm rms accuracy using multiple 1-meter sub-aperture measurements. The optical figure of the reference surface used in the 1-meter sub-aperture measurements was also determined to the 2 nm level.
The optical test methodology for a 1.7 m off-axis parabola was evaluated by moving the mirror under test several times in relation to the test system. The result was a separation of the errors of the optical test system from those of the mirror under test. This method proved to be accurate to 12 nm rms. Another absolute measurement technique discussed in this dissertation utilizes the property of a paraboloidal surface that rays parallel to its optical axis are reflected to its focal point. We have developed a scanning pentaprism technique that exploits this geometry to measure off-axis paraboloidal mirrors such as the GMT segments. This technique was demonstrated on a 1.7 m diameter prototype and proved to have a precision of about 50 nm rms.

  5. Adjusted adaptive Lasso for covariate model-building in nonlinear mixed-effect pharmacokinetic models.

    PubMed

    Haem, Elham; Harling, Kajsa; Ayatollahi, Seyyed Mohammad Taghi; Zare, Najaf; Karlsson, Mats O

    2017-02-01

    One important aim in population pharmacokinetics (PK) and pharmacodynamics is the identification and quantification of relationships between parameters and covariates. Lasso has been suggested as a technique for simultaneous estimation and covariate selection. In linear regression, it has been shown that Lasso lacks the oracle property, i.e. it does not asymptotically perform as though the true underlying model were given in advance. Adaptive Lasso (ALasso) with appropriate initial weights is claimed to possess the oracle property; however, it can lead to poor predictive performance when there is multicollinearity between covariates. This simulation study implemented a new version of ALasso, called adjusted ALasso (AALasso), which uses the ratio of the standard error of the maximum likelihood (ML) estimator to the ML coefficient as the initial weight in ALasso, to deal with multicollinearity in non-linear mixed-effect models. The performance of AALasso was compared with that of ALasso and Lasso. PK data were simulated in four set-ups from a one-compartment bolus input model. Covariates were created by sampling from a multivariate standard normal distribution with no, low (0.2), moderate (0.5) or high (0.7) correlation. The true covariates influenced only clearance, at different magnitudes. AALasso, ALasso and Lasso were compared in terms of mean absolute prediction error and the error of the estimated covariate coefficients. The results show that AALasso performed better in small data sets, even those in which a high correlation existed between covariates. This makes AALasso a promising method for covariate selection in nonlinear mixed-effect models.
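    The schemes compared in this abstract differ only in the initial weights fed to an adaptive lasso. A numpy-only sketch of the linear-regression analogue follows; the paper's actual setting is nonlinear mixed-effect models, and the function names, toy data, and tuning value here are all illustrative:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=300):
    """Plain lasso by cyclic coordinate descent (numpy only)."""
    n, p = X.shape
    beta = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]          # partial residual
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_ss[j]
    return beta

def adaptive_lasso(X, y, lam, weights):
    """Adaptive lasso via column rescaling: solve a plain lasso on
    X_j / w_j, then map the coefficients back (beta_j = beta'_j / w_j)."""
    return lasso_cd(X / weights, y, lam) / weights

# Toy data: only the first of five covariates truly influences y
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))
y = 2.0 * X[:, 0] + rng.normal(0, 0.5, size=200)
beta_ml = np.linalg.lstsq(X, y, rcond=None)[0]  # stand-in for the ML fit
se = 0.5 / np.sqrt((X ** 2).sum(axis=0))        # rough coefficient SEs
w_alasso = 1.0 / np.abs(beta_ml)                # classic ALasso weights
w_aalasso = se / np.abs(beta_ml)                # AALasso-style SE/coefficient ratio
beta = adaptive_lasso(X, y, lam=20.0, weights=w_alasso)
```

    With the classic weights the toy fit keeps the true covariate near its coefficient of 2 and zeroes out the noise covariates; the AALasso variant simply swaps `w_alasso` for `w_aalasso`, scaling each penalty by the coefficient's estimated precision.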

  6. Customization, control, and characterization of a commercial haptic device for high-fidelity rendering of weak forces.

    PubMed

    Gurari, Netta; Baud-Bovy, Gabriel

    2014-09-30

    The emergence of commercial haptic devices offers new research opportunities to enhance our understanding of the human sensory-motor system. Yet commercial device capabilities have limitations which need to be addressed. This paper describes the customization of a commercial force feedback device for displaying forces with a precision that exceeds the human force perception threshold. The device was outfitted with a multi-axis force sensor and closed-loop controlled to improve its transparency. Additionally, two force-sensing resistors were attached to the device to measure grip force. Force errors were modeled in the frequency and time domains to identify contributions from the mass, viscous friction, and Coulomb friction during open- and closed-loop control. The effect of user interaction on system stability was assessed in the context of a user study which aimed to measure force perceptual thresholds. Findings based on 15 participants demonstrate that the system maintains stability when rendering forces ranging from 0 to 0.20 N, with an average maximum absolute force error of 0.041 ± 0.013 N. Modeling the force errors revealed that Coulomb friction and inertia were the main contributors to force distortions during slow and fast motions, respectively. Existing commercial force feedback devices cannot render forces with the required precision for certain testing scenarios. Building on existing robotics work, this paper shows how a device can be customized to make it reliable for studying the perception of weak forces. The customized and closed-loop controlled device is suitable for measuring force perceptual thresholds. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. THE FAST DECLINING TYPE Ia SUPERNOVA 2003gs, AND EVIDENCE FOR A SIGNIFICANT DISPERSION IN NEAR-INFRARED ABSOLUTE MAGNITUDES OF FAST DECLINERS AT MAXIMUM LIGHT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krisciunas, Kevin; Marion, G. H.; Suntzeff, Nicholas B.

    2009-12-15

    We obtained optical photometry of SN 2003gs on 49 nights, from 2 to 494 days after T(Bmax). We also obtained near-IR photometry on 21 nights. SN 2003gs was the first fast-declining Type Ia SN that has been well observed since SN 1999by. While it was subluminous in optical bands compared to more slowly declining Type Ia SNe, it was not subluminous at maximum light in the near-IR bands. There appears to be a bimodal distribution in the near-IR absolute magnitudes of Type Ia SNe at maximum light. Those that peak in the near-IR after T(Bmax) are subluminous in all bands. Those that peak in the near-IR prior to T(Bmax), such as SN 2003gs, have effectively the same near-IR absolute magnitudes at maximum light regardless of the decline rate Δm15(B). Near-IR spectral evidence suggests that opacities in the outer layers of SN 2003gs are reduced much earlier than for normal Type Ia SNe. That may allow the gamma rays that power the luminosity to escape more rapidly and accelerate the decline rate. This conclusion is consistent with the photometric behavior of SN 2003gs in the IR, which indicates a faster than normal decline from approximately normal peak brightness.

  8. The AFGL (Air Force Geophysics Laboratory) Absolute Gravity System’s Error Budget Revisited.

    DTIC Science & Technology

    1985-05-08

    also be induced by equipment not associated with the system. A systematic bias of 68 μgal was observed by the Istituto di Metrologia "G. Colonnetti...Laboratory Astrophysics, Univ. of Colo., Boulder, Colo. IMGC: Istituto di Metrologia "G. Colonnetti", Torino, Italy Table 1. Absolute Gravity Values...measurements were made with three Model D and three Model G La Coste-Romberg gravity meters. These instruments were operated by the following agencies

  9. Instrumentation and First Results of the Reflected Solar Demonstration System for the Climate Absolute Radiance and Refractivity Observatory

    NASA Technical Reports Server (NTRS)

    McCorkel, Joel; Thome, Kurtis; Hair, Jason; McAndrew, Brendan; Jennings, Don; Rabin, Douglas; Daw, Adrian; Lundsford, Allen

    2012-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission's key goals include enabling observation of high-accuracy long-term climate change trends, use of these observations to test and improve climate forecasts, and calibration of operational and research sensors. The spaceborne instrument suites include a reflected solar (RS) spectroradiometer, an emitted infrared spectroradiometer, and radio occultation receivers. The requirement for the RS instrument is that derived reflectance must be traceable to SI standards with an absolute uncertainty of <0.3%, and the error budget that achieves this requirement is described in previous work. This work describes the Solar/Lunar Absolute Reflectance Imaging Spectroradiometer (SOLARIS), a calibration demonstration system for the RS instrument, and presents initial calibration and characterization methods and results. SOLARIS is an Offner spectrometer with two separate focal planes, each with its own entrance aperture and grating, covering spectral ranges of 320-640 and 600-2300 nm over a full field-of-view of 10 degrees with 0.27 milliradian sampling. Results from laboratory measurements, including the use of integrating spheres, transfer radiometers and spectral standards, combined with field-based solar and lunar acquisitions are presented. These results will be used to assess the accuracy and repeatability of the radiometric and spectral characteristics of SOLARIS, which will be presented against the sensor-level requirements addressed in the CLARREO RS instrument error budget.

  10. The Absolute Stability Analysis in Fuzzy Control Systems with Parametric Uncertainties and Reference Inputs

    NASA Astrophysics Data System (ADS)

    Wu, Bing-Fei; Ma, Li-Shan; Perng, Jau-Woei

    This study analyzes the absolute stability of P and PD type fuzzy logic control systems with both certain and uncertain linear plants. The stability analysis includes the reference input, actuator gain and interval plant parameters. For certain linear plants, the stability (i.e. the stable equilibria of the error) in P and PD types is analyzed with the Popov or linearization methods under various reference inputs and actuator gains. The steady-state errors of fuzzy control systems are also addressed in the parameter plane. The parametric robust Popov criterion for parametric absolute stability, based on Lur'e systems, is also applied to the stability analysis of P type fuzzy control systems with uncertain plants. The PD type fuzzy logic controller in our approach is a single-input fuzzy logic controller and is transformed into the P type for analysis. Unlike previous works, the absolute stability analysis of fuzzy control systems is given here with respect to a non-zero reference input and an uncertain linear plant via the parametric robust Popov criterion. Moreover, a fuzzy current-controlled RC circuit is designed with PSPICE models. Both numerical and PSPICE simulations are provided to verify the analytical results. Furthermore, the oscillation mechanism in fuzzy control systems is examined from the viewpoint of the various equilibrium points in the simulation example. Finally, comparisons are given to show the effectiveness of the analysis method.

  11. Application of Intra-Oral Dental Scanners in the Digital Workflow of Implantology

    PubMed Central

    van der Meer, Wicher J.; Andriessen, Frank S.; Wismeijer, Daniel; Ren, Yijin

    2012-01-01

    Intra-oral scanners will play a central role in digital dentistry in the near future. In this study the accuracy of three intra-oral scanners was compared. Materials and methods: A master model made of stone was fitted with three high-precision manufactured PEEK cylinders and scanned with three intra-oral scanners: the CEREC (Sirona), the iTero (Cadent) and the Lava COS (3M). The digital files were imported into software, and the distance between the centres of the cylinders and the angulation between the cylinders were assessed. These values were compared to measurements made on a high-accuracy 3D scan of the master model. Results: The distance errors were the smallest and most consistent for the Lava COS. The distance errors for the CEREC were the largest and least consistent. All the angulation errors were small. Conclusions: The Lava COS in combination with a high-accuracy scanning protocol resulted in the smallest and most consistent errors of all three scanners tested when considering mean distance errors in full-arch impressions, both in absolute values and in consistency for both measured distances. For the mean angulation errors, the Lava COS had the smallest errors between cylinders 1–2 and the largest errors between cylinders 1–3, although the absolute difference from the smallest mean value (iTero) was very small (0.0529°). An expected increase in distance and/or angular errors over the length of the arch, due to an accumulation of registration errors of the patched 3D surfaces, could be observed in this study design, but the effects were not statistically significant. Clinical relevance: For making impressions of implant cases for digital workflows, the most accurate scanner with the scanning protocol that will ensure the most accurate digital impression should be used. In our study model, that was the Lava COS with the high-accuracy scanning protocol. PMID:22937030
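As an illustration of the kind of comparison reported above (with hypothetical axis vectors, not the study's data), an angulation error between a scanned cylinder axis and the corresponding axis in a reference scan can be computed from the dot product:

```python
import math

def angle_deg(u, v):
    # angle between two 3D direction vectors, in degrees
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    cosang = max(-1.0, min(1.0, dot / (nu * nv)))  # clamp against rounding
    return math.degrees(math.acos(cosang))

def angulation_error(measured_axis, reference_axis):
    # absolute angulation error of a measured cylinder axis vs. the reference
    return abs(angle_deg(measured_axis, reference_axis))
```

A distance error would be computed analogously from the Euclidean distance between cylinder centres in the two scans.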

  12. [Design and accuracy analysis of upper slicing system of MSCT].

    PubMed

    Jiang, Rongjian

    2013-05-01

    The upper slicing system comprises the main components of the optical system in MSCT. This paper focuses on the design of the upper slicing system and its accuracy analysis, aiming to improve the accuracy of imaging. The errors in slice thickness and ray center introduced by the bearings, screw and control system were analyzed and tested. The measured accumulated error is less than 1 μm, and the measured absolute error is less than 10 μm. Improving the accuracy of the upper slicing system contributes to appropriate treatment methods and a higher treatment success rate.

  13. Inherent Conservatism in Deterministic Quasi-Static Structural Analysis

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1997-01-01

    The cause of the long-suspected excessive conservatism in the prevailing structural deterministic safety factor has been identified as an inherent violation of the error propagation laws when reducing statistical data to deterministic values and then combining them algebraically through successive structural computational processes. These errors are restricted to the applied stress computations, and because mean and variations of the tolerance limit format are added, the errors are positive, serially cumulative, and excessively conservative. Reliability methods circumvent these errors and provide more efficient and uniform safe structures. The document is a tutorial on the deficiencies and nature of the current safety factor and of its improvement and transition to absolute reliability.

  14. Mainstem Clearwater River Study: Assessment for Salmonid Spawning, Incubation, and Rearing.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conner, William P.

    1989-01-01

    Chinook salmon reproduced naturally in the Clearwater River until damming of the lower mainstem in 1927 impeded upstream spawning migrations and decimated the populations. Removal of the Washington Water Power Dam in 1973 reopened upriver passage. This study was initiated to determine the feasibility of re-introducing chinook salmon into the lower mainstem Clearwater River based on the temperature and flow regimes, water quality, substrate, and invertebrate production since the completion of Dworshak Dam in 1972. Temperature data obtained from the United States Geological Survey gaging stations at Peck and Spalding, Idaho, were used to calculate average minimum and maximum water temperature on a daily, monthly and yearly basis. The coldest and warmest (absolute minimum and maximum) temperatures that have occurred in the past 15 years were also identified. Our analysis indicates that average lower mainstem Clearwater River water temperatures are suitable for all life stages of chinook salmon, and also for steelhead trout rearing. In some years absolute maximum water temperatures in late summer may postpone adult staging and spawning. Absolute minimum temperatures have been recorded that could decrease overwinter survival of summer chinook juveniles and fall chinook eggs, depending on the quality of winter hiding cover and the prevalence of intra-gravel freezing in the lower mainstem Clearwater River.

  15. Hybrid electroluminescent devices

    DOEpatents

    Shiang, Joseph John; Duggal, Anil Raj; Michael, Joseph Darryl

    2010-08-03

    A hybrid electroluminescent (EL) device comprises at least one inorganic diode element and at least one organic EL element that are electrically connected in series. The absolute value of the breakdown voltage of the inorganic diode element is greater than the absolute value of the maximum reverse bias voltage across the series. The inorganic diode element can be a power diode, a Schottky barrier diode, or a light-emitting diode.

  16. The use of ZFP lossy floating point data compression in tornado-resolving thunderstorm simulations

    NASA Astrophysics Data System (ADS)

    Orf, L.

    2017-12-01

    In the field of atmospheric science, numerical models are used to produce forecasts of weather and climate and serve as virtual laboratories for scientists studying atmospheric phenomena. In both operational and research arenas, atmospheric simulations exploiting modern supercomputing hardware can produce a tremendous amount of data. During model execution, the transfer of floating point data from memory to the file system is often a significant bottleneck where I/O can dominate wallclock time. One way to reduce the I/O footprint is to compress the floating point data, which reduces the amount of data saved to the file system. In this presentation we introduce LOFS, a file system developed specifically for use in three-dimensional numerical weather models that are run on massively parallel supercomputers. LOFS utilizes the core (in-memory buffered) HDF5 driver and offers compression options, including ZFP, a lossy floating point data compression algorithm. ZFP offers several mechanisms for specifying the amount of lossy compression to be applied to floating point data, including the ability to specify the maximum absolute error allowed in each compressed 3D array. We explore different maximum error tolerances in a tornado-resolving supercell thunderstorm simulation for model variables including cloud and precipitation, temperature, wind velocity and vorticity magnitude. We find that average compression ratios exceeding 20:1 in scientifically interesting regions of the simulation domain produce visually identical results to uncompressed data in visualizations and plots. Since LOFS splits the model domain across many files, compression ratios for a given error tolerance can be compared across different locations within the model domain.
We find that regions of high spatial variability (which tend to be where scientifically interesting things are occurring) show the lowest compression ratios, whereas regions of the domain with little spatial variability compress extremely well. We observe that the overhead for compressing data with ZFP is low, and that compressing data in memory reduces the amount of memory overhead needed to store the virtual files before they are flushed to disk.
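ZFP's actual algorithm is block-transform based; the contract of its fixed-accuracy mode, a user-specified bound on the maximum absolute error of each reconstructed value, can be illustrated with a far simpler uniform quantizer (a sketch of the error-bound idea only, not ZFP itself):

```python
def quantize(values, tol):
    # Rounding to the nearest multiple of 2*tol bounds the
    # reconstruction error of every value by tol (half the step),
    # mimicking the guarantee of a fixed-accuracy compression mode.
    step = 2.0 * tol
    return [round(v / step) for v in values]

def dequantize(codes, tol):
    step = 2.0 * tol
    return [c * step for c in codes]
```

The integer codes can then be entropy-coded; smoother (low-variability) regions yield more repetitive codes and hence higher compression ratios, consistent with the behavior described in the abstract.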

  17. Comparison of INAR(1)-Poisson model and Markov prediction model in forecasting the number of DHF patients in west java Indonesia

    NASA Astrophysics Data System (ADS)

    Ahdika, Atina; Lusiyana, Novyan

    2017-02-01

    The World Health Organization (WHO) has noted Indonesia as the country with the highest dengue haemorrhagic fever (DHF) caseload in Southeast Asia. There is no vaccine or specific treatment for DHF, so one of the efforts that can be made by both the government and residents is preventive action. In statistics, there are several methods for predicting the number of DHF cases that can serve as a reference for prevention. In this paper, a discrete time series model, specifically the INAR(1)-Poisson model, and a Markov prediction model (MPM) are used to predict the number of DHF patients in West Java, Indonesia. The result shows that the MPM is the best model since it has the smallest values of MAE (mean absolute error) and MAPE (mean absolute percentage error).
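The selection criteria used here (and in several of the forecasting records below) are straightforward to compute; as a minimal sketch, not the authors' code:

```python
def mean_absolute_error(actual, predicted):
    # MAE: average magnitude of the forecast errors
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mean_absolute_percentage_error(actual, predicted):
    # MAPE: average error magnitude relative to the actual value, in percent
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)
```

Note that MAPE is undefined when an actual value is zero, which matters for low-incidence disease counts.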

  18. The use of fractional orders in the determination of birefringence of highly dispersive materials by the channelled spectrum method

    NASA Astrophysics Data System (ADS)

    Nagarajan, K.; Shashidharan Nair, C. K.

    2007-07-01

    The channelled spectrum employing polarized light interference is a very convenient method for the study of the dispersion of birefringence. However, when using this method, the absolute order of the polarized light interference fringes cannot be determined easily, so approximate methods are used to estimate the order. One of the approximations is that the dispersion of birefringence across neighbouring integer-order fringes is negligible. In this paper, we show how this approximation can cause errors. A modification is reported whereby the error in the determination of absolute fringe order can be reduced by using fractional orders instead of integer orders. The theoretical background for this method, supported by computer simulation, is presented. An experimental arrangement implementing these modifications is described. This method uses a Constant Deviation Spectrometer (CDS) and a Soleil-Babinet Compensator (SBC).

  19. Modelling and Predicting Backstroke Start Performance Using Non-Linear and Linear Models

    PubMed Central

    de Jesus, Karla; Ayala, Helon V. H.; de Jesus, Kelly; Coelho, Leandro dos S.; Medeiros, Alexandre I.A.; Abraldes, José A.; Vaz, Mário A.P.; Fernandes, Ricardo J.; Vilas-Boas, João Paulo

    2018-01-01

    Abstract Our aim was to compare non-linear and linear mathematical model responses for backstroke start performance prediction. Ten swimmers randomly completed eight 15 m backstroke starts with feet over the wedge, four with hands on the highest horizontal handgrip and four on the vertical handgrip. Swimmers were videotaped using a dual media camera set-up, with the starts being performed over an instrumented block with four force plates. Artificial neural networks were applied to predict 5 m start time using kinematic and kinetic variables, with accuracy assessed via the mean absolute percentage error. Artificial neural networks predicted start time more robustly than the linear model when moving from the training to the validation dataset for the vertical handgrip (3.95 ± 1.67 vs. 5.92 ± 3.27%). Artificial neural networks obtained a smaller mean absolute percentage error than the linear model in the horizontal (0.43 ± 0.19 vs. 0.98 ± 0.19%) and vertical handgrip (0.45 ± 0.19 vs. 1.38 ± 0.30%) using all input data. The best artificial neural network validation revealed a smaller mean absolute error than the linear model for the horizontal (0.007 vs. 0.04 s) and vertical handgrip (0.01 vs. 0.03 s). Artificial neural networks should be used for backstroke 5 m start time prediction due to the quite small differences among elite-level performances. PMID:29599857

  20. THE ACUTE EFFECTS OF CONCENTRIC VERSUS ECCENTRIC MUSCLE FATIGUE ON SHOULDER ACTIVE REPOSITIONING SENSE

    PubMed Central

    2017-01-01

    Purpose/Background Shoulder proprioception is essential in the activities of daily living as well as in sports. Acute muscle fatigue is believed to cause a deterioration of proprioception, increasing the risk of injury. The purpose of this study was to evaluate whether fatigue of the shoulder external rotators during eccentric versus concentric activity affects shoulder joint proprioception as determined by active reproduction of position. Study design Quasi-experimental trial. Methods Twenty-two healthy subjects with no recent history of shoulder pathology were randomly allocated to either a concentric or an eccentric exercise group for fatiguing the shoulder external rotators. Proprioception was assessed before and after the fatiguing protocol using an isokinetic dynamometer, by measuring active reproduction of position at 30° of shoulder external rotation, reported as absolute angular error. The fatiguing protocol consisted of sets of fifteen consecutive external rotator muscle contractions in either the concentric or eccentric action. The subjects exercised until there was a 30% decline from the peak torque of the subjects’ maximal voluntary contraction over three consecutive muscle contractions. Results A one-way analysis of variance test revealed no statistical difference in absolute angular error (p > 0.05) between the concentric and eccentric groups. Moreover, no statistical difference (p > 0.05) was found in absolute angular error between pre- and post-fatigue in either group. Conclusions Eccentric exercise does not seem to acutely affect shoulder proprioception to a larger extent than concentric exercise. Level of evidence 2b PMID:28515976

  1. A new analytical method for quantification of olive and palm oil in blends with other vegetable edible oils based on the chromatographic fingerprints from the methyl-transesterified fraction.

    PubMed

    Jiménez-Carvelo, Ana M; González-Casado, Antonio; Cuadros-Rodríguez, Luis

    2017-03-01

    A new analytical method for the quantification of olive oil and palm oil in blends with other vegetable edible oils (canola, safflower, corn, peanut, seeds, grapeseed, linseed, sesame and soybean) using normal-phase liquid chromatography and chemometric tools was developed. The procedure for obtaining the chromatographic fingerprint of the methyl-transesterified fraction of each blend is described. The multivariate quantification methods used were Partial Least Squares Regression (PLS-R) and Support Vector Regression (SVR). The quantification results were evaluated by several parameters, such as the Root Mean Square Error of Validation (RMSEV), Mean Absolute Error of Validation (MAEV) and Median Absolute Error of Validation (MdAEV). It has to be highlighted that, with the newly proposed analytical method, the chromatographic analysis takes only eight minutes; the results obtained showed the potential of this method and allowed quantification of mixtures of olive oil and palm oil with other vegetable oils. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Dimensional Error in Rapid Prototyping with Open Source Software and Low-cost 3D-printer

    PubMed Central

    Andrade-Delgado, Laura; Telich-Tarriba, Jose E.; Fuente-del-Campo, Antonio; Altamirano-Arcos, Carlos A.

    2018-01-01

    Summary: Rapid prototyping models (RPMs) have been extensively used in craniofacial and maxillofacial surgery, especially in areas such as orthognathic surgery, posttraumatic or oncological reconstructions, and implantology. Economic limitations are greater in developing countries such as Mexico, where resources dedicated to health care are limited, restricting the use of RPMs to a few selected centers. This article aims to determine the dimensional error of a low-cost fused deposition modeling (FDM) 3D printer (Tronxy P802MA, Shenzhen, Tronxy Technology Co) with open source software. An ordinary dry human mandible was scanned with a computed tomography device. The data were processed with open software to build a rapid prototype with a fused deposition machine. Linear measurements were performed to find the mean absolute and relative differences. The mean absolute and relative differences were 0.65 mm and 1.96%, respectively (P = 0.96). Low-cost FDM machines and open source software are excellent options for manufacturing RPMs, with the benefit of low cost and a relative error similar to that of other, more expensive technologies. PMID:29464171

  3. Hydraulic head estimation at unobserved locations: Approximating the distribution of the absolute error based on geologic interpretations

    NASA Astrophysics Data System (ADS)

    Langousis, Andreas; Kaleris, Vassilios; Xeygeni, Vagia; Magkou, Foteini

    2017-04-01

    Assessing the availability of groundwater reserves at a regional level requires accurate and robust hydraulic head estimation at multiple locations of an aquifer. To that end, one needs groundwater observation networks that can provide sufficient information to estimate the hydraulic head at unobserved locations. The density of such networks is largely influenced by the spatial distribution of the hydraulic conductivity in the aquifer, and it is usually determined through trial-and-error, by solving the groundwater flow based on a properly selected set of alternative but physically plausible geologic structures. In this work, we use: 1) dimensional analysis, and 2) a pulse-based stochastic model for simulation of synthetic aquifer structures, to calculate the distribution of the absolute error in hydraulic head estimation as a function of the standardized distance from the nearest measuring locations. The resulting distributions are shown to encompass all possible small-scale structural dependencies, exhibiting characteristics (bounds, multi-modal features, etc.) that can be explained using simple geometric arguments. The obtained results are promising, pointing towards the direction of establishing design criteria based on large-scale geologic maps.

  4. Comparative study of four time series methods in forecasting typhoid fever incidence in China.

    PubMed

    Zhang, Xingyu; Liu, Yuanyuan; Yang, Min; Zhang, Tao; Young, Alistair A; Li, Xiaosong

    2013-01-01

    Accurate incidence forecasting of infectious disease is critical for early prevention and for better government strategic planning. In this paper, we present a comprehensive study of different forecasting methods based on the monthly incidence of typhoid fever. The seasonal autoregressive integrated moving average (SARIMA) model and three different models inspired by neural networks, namely back propagation neural networks (BPNN), radial basis function neural networks (RBFNN), and Elman recurrent neural networks (ERNN), were compared. The differences, as well as the advantages and disadvantages, among the SARIMA model and the neural networks were summarized and discussed. The data obtained for 2005 to 2009 and for 2010 from the Chinese Center for Disease Control and Prevention were used as modeling and forecasting samples, respectively. The performances were evaluated based on three metrics: mean absolute error (MAE), mean absolute percentage error (MAPE), and mean square error (MSE). The results showed that RBFNN obtained the smallest MAE, MAPE and MSE in both the modeling and forecasting processes. The performances of the four models, ranked in descending order, were: RBFNN, ERNN, BPNN and the SARIMA model.

  5. Comparative Study of Four Time Series Methods in Forecasting Typhoid Fever Incidence in China

    PubMed Central

    Zhang, Xingyu; Liu, Yuanyuan; Yang, Min; Zhang, Tao; Young, Alistair A.; Li, Xiaosong

    2013-01-01

    Accurate incidence forecasting of infectious disease is critical for early prevention and for better government strategic planning. In this paper, we present a comprehensive study of different forecasting methods based on the monthly incidence of typhoid fever. The seasonal autoregressive integrated moving average (SARIMA) model and three different models inspired by neural networks, namely back propagation neural networks (BPNN), radial basis function neural networks (RBFNN), and Elman recurrent neural networks (ERNN), were compared. The differences, as well as the advantages and disadvantages, among the SARIMA model and the neural networks were summarized and discussed. The data obtained for 2005 to 2009 and for 2010 from the Chinese Center for Disease Control and Prevention were used as modeling and forecasting samples, respectively. The performances were evaluated based on three metrics: mean absolute error (MAE), mean absolute percentage error (MAPE), and mean square error (MSE). The results showed that RBFNN obtained the smallest MAE, MAPE and MSE in both the modeling and forecasting processes. The performances of the four models, ranked in descending order, were: RBFNN, ERNN, BPNN and the SARIMA model. PMID:23650546

  6. Elevation correction factor for absolute pressure measurements

    NASA Technical Reports Server (NTRS)

    Panek, Joseph W.; Sorrells, Mark R.

    1996-01-01

    With the arrival of highly accurate multi-port pressure measurement systems, conditions that previously did not affect overall system accuracy must now be scrutinized closely. Errors caused by elevation differences between pressure sensing elements and model pressure taps can be quantified and corrected. With multi-port pressure measurement systems, the sensing elements are connected to pressure taps that may be many feet away. The measurement system may be at a different elevation than the pressure taps due to laboratory space or test article constraints. This difference produces a pressure offset proportional to the height of the interface tube: the pressure at the bottom of the tube will be higher than the pressure at the top due to the weight of the tube's column of air. Tubes at higher pressures will exhibit larger absolute errors due to the higher air density. The above effect is well documented but has generally been taken into account only for large elevations. With error analysis techniques, the loss in accuracy from elevation can be easily quantified, and correction factors can be applied to maintain the high accuracies of new pressure measurement systems.
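The correction described can be sketched from the hydrostatic relation Δp = ρgΔh, with the air density taken from the ideal gas law. This is a simplified illustration under assumed constants and an isothermal column, not the paper's actual correction procedure:

```python
def elevation_correction_pa(pressure_pa, delta_h_m, temp_k=293.15):
    """Pressure offset across a vertical air column of height delta_h_m."""
    M = 0.0289644   # molar mass of dry air, kg/mol
    R = 8.31446     # universal gas constant, J/(mol*K)
    g = 9.80665     # standard gravity, m/s^2
    rho = pressure_pa * M / (R * temp_k)  # ideal-gas air density, kg/m^3
    return rho * g * delta_h_m            # hydrostatic pressure difference, Pa
```

Because the density is proportional to the absolute pressure, the correction grows with line pressure, matching the abstract's observation that higher-pressure tubes exhibit larger absolute errors.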

  7. Dimensional Error in Rapid Prototyping with Open Source Software and Low-cost 3D-printer.

    PubMed

    Rendón-Medina, Marco A; Andrade-Delgado, Laura; Telich-Tarriba, Jose E; Fuente-Del-Campo, Antonio; Altamirano-Arcos, Carlos A

    2018-01-01

    Rapid prototyping models (RPMs) have been extensively used in craniofacial and maxillofacial surgery, especially in areas such as orthognathic surgery, posttraumatic or oncological reconstructions, and implantology. Economic limitations are greater in developing countries such as Mexico, where resources dedicated to health care are limited, restricting the use of RPMs to a few selected centers. This article aims to determine the dimensional error of a low-cost fused deposition modeling (FDM) 3D printer (Tronxy P802MA, Shenzhen, Tronxy Technology Co) with open source software. An ordinary dry human mandible was scanned with a computed tomography device. The data were processed with open software to build a rapid prototype with a fused deposition machine. Linear measurements were performed to find the mean absolute and relative differences. The mean absolute and relative differences were 0.65 mm and 1.96%, respectively (P = 0.96). Low-cost FDM machines and open source software are excellent options for manufacturing RPMs, with the benefit of low cost and a relative error similar to that of other, more expensive technologies.

  8. Building blocks for automated elucidation of metabolites: machine learning methods for NMR prediction.

    PubMed

    Kuhn, Stefan; Egert, Björn; Neumann, Steffen; Steinbeck, Christoph

    2008-09-25

    Current efforts in metabolomics, such as the Human Metabolome Project, collect structures of biological metabolites as well as data for their characterisation, such as spectra for identification of substances and measurements of their concentration. Still, only a fraction of existing metabolites and their spectral fingerprints are known. Computer-Assisted Structure Elucidation (CASE) of biological metabolites will be an important tool to address this lack of knowledge. Indispensable for CASE are modules to predict spectra for hypothetical structures. This paper evaluates different statistical and machine learning methods for predicting proton NMR spectra based on data from our open database NMRShiftDB. A mean absolute error of 0.18 ppm was achieved for the prediction of proton NMR shifts ranging from 0 to 11 ppm. Random forest, J48 decision tree and support vector machines achieved similar overall errors. HOSE codes, a notably simple method, achieved a comparatively good result of 0.17 ppm mean absolute error. The NMR prediction methods applied in the course of this work delivered precise predictions which can serve as a building block for Computer-Assisted Structure Elucidation of biological metabolites.

  9. Multivariate Time Series Forecasting of Crude Palm Oil Price Using Machine Learning Techniques

    NASA Astrophysics Data System (ADS)

    Kanchymalay, Kasturi; Salim, N.; Sukprasert, Anupong; Krishnan, Ramesh; Raba'ah Hashim, Ummi

    2017-08-01

    The aim of this paper was to study the correlation between the crude palm oil (CPO) price, selected vegetable oil prices (soybean, coconut, olive, rapeseed and sunflower oil), the crude oil price and the monthly exchange rate. Comparative analysis was then performed on CPO price forecasting results using machine learning techniques. Monthly CPO prices, selected vegetable oil prices, crude oil prices and monthly exchange rate data from January 1987 to February 2017 were utilized. Preliminary analysis showed a positive and high correlation between the CPO price and the soybean oil price, and also between the CPO price and the crude oil price. Experiments were conducted using multi-layer perceptron, support vector regression and Holt-Winters exponential smoothing techniques. The results were assessed using the criteria of root mean square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE) and directional accuracy (DA). Among these three techniques, support vector regression (SVR) with the sequential minimal optimization (SMO) algorithm showed relatively better results compared to the multi-layer perceptron and Holt-Winters exponential smoothing methods.

  10. A novel diagnosis method for a Hall plates-based rotary encoder with a magnetic concentrator.

    PubMed

    Meng, Bumin; Wang, Yaonan; Sun, Wei; Yuan, Xiaofang

    2014-07-31

    In the last few years, rotary encoders based on two-dimensional complementary metal oxide semiconductor (CMOS) Hall plates with a magnetic concentrator have been developed to measure absolute angle contactlessly. Various error factors influence the measuring accuracy, and they are difficult to locate after the assembly of the encoder. In this paper, a model-based rapid diagnosis method is presented. Based on an analysis of the error mechanism, an error model is built to compare the minimum residual angle error and to quantify the error factors. Additionally, a modified particle swarm optimization (PSO) algorithm is used to reduce the computational cost. The simulation and experimental results show that this diagnosis method is feasible for quantifying the causes of the error and significantly reduces the number of iterations.

  11. Radarclinometry: Bootstrapping the radar reflectance function from the image pixel-signal frequency distribution and an altimetry profile

    USGS Publications Warehouse

    Wildey, R.L.

    1988-01-01

    A method is derived for determining the dependence of radar backscatter on incidence angle that is applicable to the region corresponding to a particular radar image. The method is based on enforcing mathematical consistency between the frequency distribution of the image's pixel signals (histogram of DN values with suitable normalizations) and a one-dimensional frequency distribution of slope component, as might be obtained from a radar or laser altimetry profile in or near the area imaged. In order to achieve a unique solution, the auxiliary assumption is made that the two-dimensional frequency distribution of slope is isotropic. The backscatter is not derived in absolute units. The method is developed in such a way as to separate the reflectance function from the pixel-signal transfer characteristic. However, these two sources of variation are distinguishable only on the basis of a weak dependence on the azimuthal component of slope; therefore such an approach can be expected to be ill-conditioned unless the revision of the transfer characteristic is limited to the determination of an additive instrumental background level. The altimetry profile does not have to be registered in the image, and the statistical nature of the approach minimizes pixel noise effects and the effects of a disparity between the resolutions of the image and the altimetry profile, except in the wings of the distribution where low-number statistics preclude accuracy anyway. The problem of dealing with unknown slope components perpendicular to the profiling traverse, which besets the one-to-one comparison between individual slope components and pixel-signal values, disappears in the present approach. In order to test the resulting algorithm, an artificial radar image was generated from the digitized topographic map of the Lake Champlain West quadrangle in the Adirondack Mountains, U.S.A., using an arbitrarily selected reflectance function. 
From the same map, a one-dimensional frequency distribution of slope component was extracted. The algorithm recaptured the original reflectance function to the degree that, for the central 90% of the data, the discrepancy translates to a RMS slope error of 0.1°. For the central 99% of the data, the maximum error translates to 1°; at the absolute extremes of the data the error grows to 6°. © 1988 Kluwer Academic Publishers.

  12. Maximum inflation of the type 1 error rate when sample size and allocation rate are adapted in a pre-planned interim look.

    PubMed

    Graf, Alexandra C; Bauer, Peter

    2011-06-30

    We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) that can result when the sample size and allocation rate to the treatment arms can be modified in an interim analysis. It is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario, it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than that derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing the sample size to decrease, or allowing the sample size to increase only in the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.
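The inflation mechanism can be sketched numerically: under the null hypothesis, an adversary who sees the interim z-value and then picks the second-stage sample size to maximize the conditional rejection probability of the naive pooled one-sided z-test pushes the overall type 1 error above the nominal 5%. This is a simplified illustration with an arbitrary n2 grid, not the authors' derivation:

```python
import math

def std_normal_sf(x):
    # survival function (upper tail) of the standard normal
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def max_type1_error(n1=50, c=1.6449, n2_max=2000, step=10, dz=0.01):
    """Worst-case size of the naive pooled one-sided z-test (nominal 5%)
    when the second-stage sample size n2 is chosen after seeing z1."""
    sqrt_n1 = math.sqrt(n1)
    n2_grid = range(1, n2_max + 1, step)
    total = 0.0
    for i in range(int(16.0 / dz) + 1):
        z1 = -8.0 + i * dz
        # reject iff z2 > (c*sqrt(n1+n2) - sqrt(n1)*z1) / sqrt(n2);
        # the adversary picks n2 to maximize this conditional probability
        worst = max(
            std_normal_sf((c * math.sqrt(n1 + n2) - sqrt_n1 * z1) / math.sqrt(n2))
            for n2 in n2_grid
        )
        # integrate against the standard normal density of z1
        total += worst * math.exp(-0.5 * z1 * z1) / math.sqrt(2.0 * math.pi) * dz
    return total
```

With any fixed n2 the same integral evaluates to the nominal 0.05, so the excess over 0.05 is purely the inflation from data-dependent sample size reassessment.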

  13. Network Adjustment of Orbit Errors in SAR Interferometry

    NASA Astrophysics Data System (ADS)

    Bahr, Hermann; Hanssen, Ramon

    2010-03-01

Orbit errors can induce significant long-wavelength error signals in synthetic aperture radar (SAR) interferograms and thus bias estimates of wide-scale deformation phenomena. The presented approach aims at correcting orbit errors in a preprocessing step to deformation analysis by modifying state vectors. Whereas absolute errors in the orbital trajectory are negligible, the influence of relative errors (baseline errors) is parametrised by their parallel and perpendicular components as a linear function of time. As the sensitivity of the interferometric phase is only significant with respect to the perpendicular baseline and the rate of change of the parallel baseline, the algorithm focuses on estimating updates to these two parameters. This is achieved by a least squares approach, where the unwrapped residual interferometric phase is observed and atmospheric contributions are considered to be stochastic with constant mean. To enhance reliability, baseline errors are adjusted in an overdetermined network of interferograms, yielding individual orbit corrections per acquisition.
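The core estimation step is an ordinary least squares fit of a low-order phase model plus a constant atmospheric mean. The sketch below is a strong simplification of the authors' method: the regressors (a quadratic baseline-error term, a linear ramp, and a constant) are my illustrative stand-ins, not the actual interferometric sensitivity functions.

```python
import numpy as np

# Simplified least-squares sketch (illustrative regressors, not the paper's
# sensitivity model): fit residual unwrapped phase with a baseline-error
# term, a linear drift, and a constant atmospheric mean.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)                 # normalized azimuth time
true = np.array([0.8, -0.3, 0.1])              # hypothetical coefficients

A = np.column_stack([t**2, t, np.ones_like(t)])  # design matrix
phase = A @ true + 0.01 * rng.standard_normal(t.size)  # noisy observations

est, *_ = np.linalg.lstsq(A, phase, rcond=None)
print(est)   # close to the true coefficients
```

In the paper this fit is additionally constrained across an overdetermined network of interferograms rather than solved per scene.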

  14. Using a Hybrid Model to Forecast the Prevalence of Schistosomiasis in Humans

    PubMed Central

    Zhou, Lingling; Xia, Jing; Yu, Lijing; Wang, Ying; Shi, Yun; Cai, Shunxiang; Nie, Shaofa

    2016-01-01

Background: We previously proposed a hybrid model combining both the autoregressive integrated moving average (ARIMA) and the nonlinear autoregressive neural network (NARNN) models in forecasting schistosomiasis. Our purpose in the current study was to forecast the annual prevalence of human schistosomiasis in Yangxin County, using our ARIMA-NARNN model, thereby further certifying the reliability of our hybrid model. Methods: We used the ARIMA, NARNN and ARIMA-NARNN models to fit and forecast the annual prevalence of schistosomiasis. The models were fitted to the annual prevalence data from 1956 to 2008 and tested on the data from 2009 to 2012. The mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) were used to measure the model performance. We reconstructed the hybrid model to forecast the annual prevalence from 2013 to 2016. Results: The modeling and testing errors generated by the ARIMA-NARNN model were lower than those obtained from either the single ARIMA or NARNN models. The predicted annual prevalence from 2013 to 2016 demonstrated an initial decreasing trend, followed by an increase. Conclusions: The ARIMA-NARNN model can be well applied to analyze surveillance data for early warning systems for the control and elimination of schistosomiasis. PMID:27023573
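The three performance measures used to compare the models have standard definitions, sketched below (function and variable names are mine; the prevalence figures are hypothetical).

```python
import numpy as np

# Standard definitions of the three error measures named in the abstract.
def mse(y, yhat):
    return float(np.mean((np.asarray(y, float) - np.asarray(yhat, float)) ** 2))

def mae(y, yhat):
    return float(np.mean(np.abs(np.asarray(y, float) - np.asarray(yhat, float))))

def mape(y, yhat):
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(np.mean(np.abs((y - yhat) / y)) * 100.0)

obs  = [4.2, 3.9, 3.1, 2.5]   # hypothetical annual prevalence (%)
pred = [4.0, 4.1, 2.9, 2.6]   # hypothetical model output
print(mse(obs, pred), mae(obs, pred), mape(obs, pred))
```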

  15. Measuring the Accuracy of Simple Evolving Connectionist System with Varying Distance Formulas

    NASA Astrophysics Data System (ADS)

    Al-Khowarizmi; Sitompul, O. S.; Suherman; Nababan, E. B.

    2017-12-01

Simple Evolving Connectionist System (SECoS) is a minimal implementation of Evolving Connectionist Systems (ECoS) in artificial neural networks. The three-layer network architecture of the SECoS could be built based on the given input. In this study, the activation value for the SECoS learning process, which is commonly calculated using normalized Hamming distance, is also calculated using normalized Manhattan distance and normalized Euclidean distance in order to compare the smallest error value and best learning rate obtained. The measurement accuracy resulting from the three distance formulas is assessed using the mean absolute percentage error. In the training phase with several parameters, such as sensitivity threshold, error threshold, first learning rate, and second learning rate, it was found that normalized Euclidean distance is more accurate than both normalized Hamming distance and normalized Manhattan distance. In the case of beta fibrinogen gene -455 G/A polymorphism patients used as training data, the highest mean absolute percentage error value is obtained with normalized Manhattan distance compared to normalized Euclidean distance and normalized Hamming distance. However, the differences are so small that it can be concluded that the three distance formulas used in SECoS do not have a significant effect on the accuracy of the training results.
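One common formulation of the three normalized distances is sketched below. The exact normalization used in the paper is not given in the abstract, so these definitions (and the assumption of [0, 1]-scaled inputs) should be read as one plausible variant, not the authors' formulas.

```python
import numpy as np

# One common formulation of the three normalized distances (the paper may
# normalize differently; inputs assumed scaled to [0, 1]).
def norm_hamming(x, w):
    # Often written as summed absolute difference over summed magnitudes.
    x, w = np.asarray(x, float), np.asarray(w, float)
    return float(np.sum(np.abs(x - w)) / np.sum(x + w))

def norm_manhattan(x, w):
    x, w = np.asarray(x, float), np.asarray(w, float)
    return float(np.sum(np.abs(x - w)) / x.size)

def norm_euclidean(x, w):
    x, w = np.asarray(x, float), np.asarray(w, float)
    return float(np.sqrt(np.sum((x - w) ** 2) / x.size))

x, w = [0.2, 0.8, 0.5], [0.3, 0.6, 0.5]   # input vector vs. node weights
print(norm_hamming(x, w), norm_manhattan(x, w), norm_euclidean(x, w))
```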

  16. Numerical model estimating the capabilities and limitations of the fast Fourier transform technique in absolute interferometry

    NASA Astrophysics Data System (ADS)

    Talamonti, James J.; Kay, Richard B.; Krebs, Danny J.

    1996-05-01

A numerical model was developed to emulate the capabilities of systems performing noncontact absolute distance measurements. The model incorporates known methods to minimize signal processing and digital sampling errors and evaluates the accuracy limitations imposed by spectral peak isolation by using Hanning, Blackman, and Gaussian windows in the fast Fourier transform technique. We applied this model to the specific case of measuring the relative lengths of a compound Michelson interferometer. By processing computer-simulated data through our model, we project the ultimate precision for ideal data, and data containing AM-FM noise. The precision is shown to be limited by nonlinearities in the laser scan. Keywords: absolute distance, interferometer.
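The windowing step can be sketched in a few lines: the sampled signal is multiplied by a window before the FFT so that spectral leakage does not mask the peak being isolated. The signal parameters below are illustrative, and only the two windows available in plain NumPy (Hanning and Blackman) are shown.

```python
import numpy as np

# Sketch of windowed spectral peak isolation (illustrative parameters).
fs, n, f0 = 1000.0, 1024, 100.0          # sample rate (Hz), length, tone (Hz)
t = np.arange(n) / fs
sig = np.cos(2 * np.pi * f0 * t)         # stand-in for the interference signal

peaks = {}
for name, win in (("hanning", np.hanning(n)), ("blackman", np.blackman(n))):
    spec = np.abs(np.fft.rfft(sig * win))
    peaks[name] = np.argmax(spec) * fs / n   # coarse peak location in Hz
print(peaks)                                 # both near 100 Hz
```

In practice the coarse peak bin would be refined by interpolation; the paper's point is that the window choice sets how well nearby peaks can be separated.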

  17. TU-F-17A-05: Calculating Tumor Trajectory and Dose-Of-The-Day for Highly Mobile Tumors Using Cone-Beam CT Projections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, B; Miften, M

    2014-06-15

Purpose: Cone-beam CT (CBCT) projection images provide anatomical data in real-time over several respiratory cycles, forming a comprehensive picture of tumor movement. We developed a method using these projections to determine the trajectory and dose of highly mobile tumors during each fraction of treatment. Methods: CBCT images of a respiration phantom were acquired, where the trajectory mimicked a lung tumor with high amplitude (2.4 cm) and hysteresis. A template-matching algorithm was used to identify the location of a steel BB in each projection. A Gaussian probability density function for tumor position was calculated which best fit the observed trajectory of the BB in the imager geometry. Two methods to improve the accuracy of tumor track reconstruction were investigated: first, using respiratory phase information to refine the trajectory estimation, and second, using the Monte Carlo method to sample the estimated Gaussian tumor position distribution. 15 clinically-drawn abdominal/lung CTV volumes were used to evaluate the accuracy of the proposed methods by comparing the known and calculated BB trajectories. Results: With all methods, the mean position of the BB was determined with accuracy better than 0.1 mm, and root-mean-square (RMS) trajectory errors were lower than 5% of marker amplitude. Use of respiratory phase information decreased RMS errors by 30%, and decreased the fraction of large errors (>3 mm) by half. Mean dose to the clinical volumes was calculated with an average error of 0.1% and average absolute error of 0.3%. Dosimetric parameters D90/D95 were determined within 0.5% of maximum dose. Monte Carlo sampling increased RMS trajectory and dosimetric errors slightly, but prevented over-estimation of dose in trajectories with high noise. Conclusions: Tumor trajectory and dose-of-the-day were accurately calculated using CBCT projections. This technique provides a widely-available method to evaluate highly-mobile tumors, and could facilitate better strategies to mitigate or compensate for motion during SBRT.
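The Gaussian-position-plus-Monte-Carlo idea can be sketched without the imaging geometry: fit a Gaussian density to observed 2-D marker positions, then draw samples from the fitted density. Everything below (means, covariances, sample counts) is synthetic and only illustrates the sampling step, not the authors' reconstruction.

```python
import numpy as np

# Illustrative sketch: fit a Gaussian PDF to observed 2-D marker positions
# and draw Monte Carlo samples from it (synthetic data, not phantom data).
rng = np.random.default_rng(2)
true_mean = np.array([1.0, -0.5])                  # cm, hypothetical
true_cov = np.array([[0.30, 0.10], [0.10, 0.20]])  # cm^2, hypothetical
obs = rng.multivariate_normal(true_mean, true_cov, size=2000)

est_mean = obs.mean(axis=0)                # Gaussian fit: sample mean ...
est_cov = np.cov(obs, rowvar=False)        # ... and sample covariance
samples = rng.multivariate_normal(est_mean, est_cov, size=5000)
print(est_mean, samples.mean(axis=0))      # both near the true mean
```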

  18. An error criterion for determining sampling rates in closed-loop control systems

    NASA Technical Reports Server (NTRS)

    Brecher, S. M.

    1972-01-01

    The determination of an error criterion which will give a sampling rate for adequate performance of linear, time-invariant closed-loop, discrete-data control systems was studied. The proper modelling of the closed-loop control system for characterization of the error behavior, and the determination of an absolute error definition for performance of the two commonly used holding devices are discussed. The definition of an adequate relative error criterion as a function of the sampling rate and the parameters characterizing the system is established along with the determination of sampling rates. The validity of the expressions for the sampling interval was confirmed by computer simulations. Their application solves the problem of making a first choice in the selection of sampling rates.

  19. Absolute emission cross sections for electron capture reactions of C2+, N3+, N4+ and O3+ ions in collisions with Li(2s) atoms

    NASA Astrophysics Data System (ADS)

    Rieger, G.; Pinnington, E. H.; Ciubotariu, C.

    2000-12-01

    Absolute photon emission cross sections following electron capture reactions have been measured for C2+, N3+, N4+ and O3+ ions colliding with Li(2s) atoms at keV energies. The results are compared with calculations using the extended classical over-the-barrier model by Niehaus. We explore the limits of our experimental method and present a detailed discussion of experimental errors.

  20. Multipath induced errors in meteorological Doppler/interferometer location systems

    NASA Technical Reports Server (NTRS)

    Wallace, R. G.

    1984-01-01

One application of an RF interferometer aboard a low-orbiting spacecraft to determine the location of ground-based transmitters is in tracking high-altitude balloons for meteorological studies. A source of error in this application is reflection of the signal from the sea surface. Through propagation and signal analysis, the magnitude of the reflection-induced error in both Doppler frequency measurements and interferometer phase measurements was estimated. The theory of diffuse scattering from random surfaces was applied to obtain the power spectral density of the reflected signal. The processing of the combined direct and reflected signals was then analyzed to find the statistics of the measurement error. It was found that the error varies greatly during the satellite overpass and attains its maximum value at closest approach. The maximum values of interferometer phase error and Doppler frequency error found for the system configuration considered were comparable to thermal noise-induced error.

  1. Using lean to improve medication administration safety: in search of the "perfect dose".

    PubMed

    Ching, Joan M; Long, Christina; Williams, Barbara L; Blackmore, C Craig

    2013-05-01

At Virginia Mason Medical Center (Seattle), the Collaborative Alliance for Nursing Outcomes (CALNOC) Medication Administration Accuracy Quality Study was used in combination with Lean quality improvement efforts to address medication administration safety. Lean interventions were targeted at improving the medication room layout, applying visual controls, and implementing nursing standard work. The interventions were designed to prevent medication administration errors through improving six safe practices: (1) comparing medication with medication administration record, (2) labeling medication, (3) checking two forms of patient identification, (4) explaining medication to patient, (5) charting medication immediately, and (6) protecting the process from distractions/interruptions. Trained nurse auditors observed 9,244 doses for 2,139 patients. Following the intervention, the number of safe-practice violations decreased from 83 violations/100 doses at baseline (January 2010-March 2010) to 42 violations/100 doses at final follow-up (July 2011-September 2011), resulting in an absolute risk reduction of 42 violations/100 doses (95% confidence interval [CI]: 35-48; p < .001). The number of medication administration errors decreased from 10.3 errors/100 doses at baseline to 2.8 errors/100 doses at final follow-up (absolute risk reduction: 7 errors/100 doses [95% CI: 5-10]; p < .001). The "perfect dose" score, reflecting compliance with all six safe practices and absence of any of the eight medication administration errors, improved from 37 in compliance/100 doses at baseline to 68 in compliance/100 doses at the final follow-up. Lean process improvements coupled with direct observation can contribute to substantial decreases in errors in nursing medication administration.
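The absolute-risk-reduction arithmetic behind figures like "7 errors/100 doses [95% CI: 5-10]" is a difference in proportions with a normal-approximation interval. The sketch below uses hypothetical audit denominators (the abstract does not give per-period dose counts), so the interval it prints is illustrative, not the study's.

```python
import math

# Risk difference p1 - p2 with a Wald normal-approximation CI
# (hypothetical counts; the study's per-period denominators are not given).
def arr_with_ci(x1, n1, x2, n2, z=1.96):
    p1, p2 = x1 / n1, x2 / n2
    d = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return d, (d - z * se, d + z * se)

# e.g. 10.3 errors/100 doses vs 2.8/100, assuming 2000 audited doses/period:
d, (lo, hi) = arr_with_ci(206, 2000, 56, 2000)
print(round(100 * d, 1), round(100 * lo, 1), round(100 * hi, 1))
```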

  2. The Effects of Cryotherapy on Knee Joint Position Sense and Force Production Sense in Healthy Individuals

    PubMed Central

    Furmanek, Mariusz P.; Słomka, Kajetan J.; Sobiesiak, Andrzej; Rzepko, Marian; Juras, Grzegorz

    2018-01-01

    Abstract The proprioceptive information received from mechanoreceptors is potentially responsible for controlling the joint position and force differentiation. However, it is unknown whether cryotherapy influences this complex mechanism. Previously reported results are not universally conclusive and sometimes even contradictory. The main objective of this study was to investigate the impact of local cryotherapy on knee joint position sense (JPS) and force production sense (FPS). The study group consisted of 55 healthy participants (age: 21 ± 2 years, body height: 171.2 ± 9 cm, body mass: 63.3 ± 12 kg, BMI: 21.5 ± 2.6). Local cooling was achieved with the use of gel-packs cooled to -2 ± 2.5°C and applied simultaneously over the knee joint and the quadriceps femoris muscle for 20 minutes. JPS and FPS were evaluated using the Biodex System 4 Pro apparatus. Repeated measures analysis of variance (ANOVA) did not show any statistically significant changes of the JPS and FPS under application of cryotherapy for all analyzed variables: the JPS’s absolute error (p = 0.976), its relative error (p = 0.295), and its variable error (p = 0.489); the FPS’s absolute error (p = 0.688), its relative error (p = 0.193), and its variable error (p = 0.123). The results indicate that local cooling does not affect proprioceptive acuity of the healthy knee joint. They also suggest that local limited cooling before physical activity at low velocity did not present health or injury risk in this particular study group. PMID:29599858

  3. Correlation- and covariance-supported normalization method for estimating orthodontic trainer treatment for clenching activity.

    PubMed

    Akdenur, B; Okkesum, S; Kara, S; Günes, S

    2009-11-01

In this study, electromyography signals sampled from children undergoing orthodontic treatment were used to estimate the effect of an orthodontic trainer on the anterior temporal muscle. A novel data normalization method, called the correlation- and covariance-supported normalization method (CCSNM), based on correlation and covariance between features in a data set, is proposed to provide predictive guidance to the orthodontic technique. The method was tested in two stages: first, data normalization using the CCSNM; second, prediction of normalized values of anterior temporal muscles using an artificial neural network (ANN) with a Levenberg-Marquardt learning algorithm. The data set consists of electromyography signals from right anterior temporal muscles, recorded from 20 children aged 8-13 years with class II malocclusion. The signals were recorded at the start and end of a 6-month treatment. In order to train and test the ANN, two-fold cross-validation was used. The CCSNM was compared with four normalization methods: minimum-maximum normalization, z score, decimal scaling, and line base normalization. To demonstrate the performance of the proposed method, prevalent performance measures were examined: the mean square error and mean absolute error as mathematical methods, the statistical relation factor R2, and the average deviation. The results show that the CCSNM was the best of the normalization methods for estimating the effect of the trainer.
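Two of the baseline normalizations the CCSNM is compared against have standard textbook definitions, sketched here (the EMG feature values are hypothetical).

```python
import numpy as np

# Standard definitions of two baseline normalization methods.
def min_max(x):
    x = np.asarray(x, float)
    return (x - x.min()) / (x.max() - x.min())   # maps into [0, 1]

def z_score(x):
    x = np.asarray(x, float)
    return (x - x.mean()) / x.std()              # zero mean, unit variance

emg = [12.0, 15.0, 9.0, 20.0]   # hypothetical EMG feature values
print(min_max(emg))
print(z_score(emg))
```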

  4. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints

    PubMed Central

    Wang, Shiyao; Deng, Zhidong; Yin, Gang

    2016-01-01

A high-performance differential global positioning system (GPS) receiver with real time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to multipath effect but also unable to effectively fulfill precise error correction in a wide range of driving areas. This paper proposes an accurate GPS-inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations that have different structural parameters to build maximum likelihood models of raw navigation. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are used to remove outliers. Navigation data that satisfy stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of multimodal data fusion can be pre-specified by grid size. Finally, we performed extensive field tests on a diversity of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art methods on the same dataset and the new data fusion method is practically applied in our driverless car. PMID:26927108

  5. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints.

    PubMed

    Wang, Shiyao; Deng, Zhidong; Yin, Gang

    2016-02-24

A high-performance differential global positioning system (GPS) receiver with real time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to multipath effect but also unable to effectively fulfill precise error correction in a wide range of driving areas. This paper proposes an accurate GPS-inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations that have different structural parameters to build maximum likelihood models of raw navigation. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are used to remove outliers. Navigation data that satisfy stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of multimodal data fusion can be pre-specified by grid size. Finally, we performed extensive field tests on a diversity of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art methods on the same dataset and the new data fusion method is practically applied in our driverless car.

  6. Modelling of PM10 concentration for industrialized area in Malaysia: A case study in Shah Alam

    NASA Astrophysics Data System (ADS)

    N, Norazian Mohamed; Abdullah, M. M. A.; Tan, Cheng-yau; Ramli, N. A.; Yahaya, A. S.; Fitri, N. F. M. Y.

In Malaysia, the predominant air pollutants are suspended particulate matter (SPM) and nitrogen dioxide (NO2). This research focuses on PM10, as it may harm human health as well as the environment. Six distributions, namely Weibull, log-normal, gamma, Rayleigh, Gumbel and Frechet, were chosen to model the PM10 observations at the chosen industrial area, i.e. Shah Alam. Hourly average data for the one-year periods 2006 and 2007 were used for this research. For parameter estimation, the method of maximum likelihood estimation (MLE) was selected. Four performance indicators, namely mean absolute error (MAE), root mean squared error (RMSE), coefficient of determination (R2) and prediction accuracy (PA), were applied to determine the goodness-of-fit of the distributions. The distribution that best fits the PM10 observations in Shah Alam was found to be the log-normal distribution. The probabilities of exceedance concentrations were calculated, and the return period for the coming year was predicted from the cumulative density function (cdf) obtained from the best-fit distributions. For the 2006 data, Shah Alam was predicted to exceed 150 μg/m3 for 5.9 days in 2007, with a return period of one occurrence per 62 days. For 2007, the studied area was not predicted to exceed the MAAQG of 150 μg/m3.
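For the log-normal case, the MLE and the exceedance/return-period calculation have closed forms: the parameters are the mean and standard deviation of the log-data, and P(X > threshold) follows from the normal cdf. The hourly data below are synthetic stand-ins, not the Shah Alam observations.

```python
import math
import numpy as np

# Sketch of exceedance probability and return period under a log-normal fit
# (closed-form MLE: mean and std of the log-data; synthetic PM10 values).
def lognormal_exceedance(data, threshold):
    logs = np.log(np.asarray(data, float))
    mu, sigma = logs.mean(), logs.std()          # log-normal MLE
    z = (math.log(threshold) - mu) / (sigma * math.sqrt(2))
    return 0.5 * math.erfc(z)                    # P(X > threshold)

rng = np.random.default_rng(3)
pm10 = rng.lognormal(mean=3.8, sigma=0.5, size=8760)  # hypothetical hourly data
p = lognormal_exceedance(pm10, 150.0)
print(f"P(PM10 > 150) = {p:.4f}, return period = {1 / p:.0f} hours")
```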

  7. Suicide Mortality Trends in Four Provinces of Iran with the Highest Mortality, from 2006-2016.

    PubMed

    Nazari Kangavari, Hajar; Shojaei, Ahmad; Hashemi Nazari, Seyed Saeed

    2017-06-14

Suicide is a major cause of unnatural deaths in the world. Its incidence is higher in western provinces of Iran. So far, there has not been any time-series analysis of suicide in western provinces. The purpose of this study was to analyze suicide mortality data from 2006 to 2016 as well as to forecast the number of suicides for 2017 in four provinces of Iran (Ilam, Kermanshah, Lorestan, and Kohgiluyeh and Boyer-Ahmad). Descriptive-analytic study. Data were analyzed by time-series analysis using R software. Three automatic methods (Auto.arima, ETS (error, trend, seasonality) and the time series linear model (TSLM)) were fitted on the data. The best model after cross-validation according to the mean absolute error measure was selected for forecasting. In total, 7004 suicidal deaths occurred, of which 4259 were male and 2745 were female. The mean age of the study population was 32.05 ± 15.48 yr. Hanging and self-immolation were the most frequent types of suicide in men and women, respectively. The maximum number of suicides occurred in July and August, and the minimum in January. It is suggested that intervention measures should be designed in order to decrease the suicide rate, particularly in the age group of 15-29 yr, and implemented as a pilot study, especially in these four provinces of Iran, which have a relatively high suicide rate.

  8. Photometric correction for an optical CCD-based system based on the sparsity of an eight-neighborhood gray gradient.

    PubMed

    Zhang, Yuzhong; Zhang, Yan

    2016-07-01

In an optical measurement and analysis system based on a CCD, due to the existence of optical vignetting and natural vignetting, photometric distortion, in which the intensity falls off away from the image center, severely affects the subsequent processing and measuring precision. To deal with this problem, an easy and straightforward method for photometric distortion correction is presented in this paper. This method introduces a simple polynomial fitting model of the photometric distortion function and employs a particle swarm optimization algorithm to obtain the model parameters by minimizing the eight-neighborhood gray gradient. Compared with conventional calibration methods, this method can obtain the profile information of photometric distortion from only a single common image captured by the optical CCD-based system, with no need for a uniform luminance area source used as a standard reference source or relevant optical and geometric parameters in advance. To illustrate the applicability of this method, numerical simulations and photometric distortions with different lens parameters are evaluated using this method in this paper. Moreover, the application example of temperature field correction for casting billets also demonstrates the effectiveness of this method. The experimental results show that the proposed method is able to achieve a maximum absolute error for vignetting estimation of 0.0765 and a relative error for vignetting estimation from different background images of 3.86%.
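A polynomial vignetting model of the kind the paper describes can be sketched with ordinary least squares in place of the particle swarm optimizer (a deliberate simplification: the paper fits via PSO on an image gradient criterion, not on direct falloff samples). The radial model g(r) = 1 + a·r² + b·r⁴ and all coefficients below are illustrative.

```python
import numpy as np

# Simplified sketch: fit an even radial polynomial vignetting model
# g(r) = 1 + a*r^2 + b*r^4 by least squares (stand-in for the paper's PSO).
rng = np.random.default_rng(4)
r = np.linspace(0.0, 1.0, 400)                   # normalized image radius
a_true, b_true = -0.35, -0.10                    # hypothetical falloff terms
g = 1.0 + a_true * r**2 + b_true * r**4 + 0.005 * rng.standard_normal(r.size)

A = np.column_stack([r**2, r**4])
coef, *_ = np.linalg.lstsq(A, g - 1.0, rcond=None)
print(coef)    # close to [-0.35, -0.10]
```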

  9. Temperature-based estimation of global solar radiation using soft computing methodologies

    NASA Astrophysics Data System (ADS)

    Mohammadi, Kasra; Shamshirband, Shahaboddin; Danesh, Amir Seyed; Abdullah, Mohd Shahidan; Zamani, Mazdak

    2016-07-01

Precise knowledge of solar radiation is indeed essential in different technological and scientific applications of solar energy. Temperature-based estimation of global solar radiation would be appealing owing to broad availability of measured air temperatures. In this study, the potentials of soft computing techniques are evaluated to estimate daily horizontal global solar radiation (DHGSR) from measured maximum, minimum, and average air temperatures (Tmax, Tmin, and Tavg) in an Iranian city. For this purpose, a comparative evaluation between three methodologies of adaptive neuro-fuzzy inference system (ANFIS), radial basis function support vector regression (SVR-rbf), and polynomial basis function support vector regression (SVR-poly) is performed. Five combinations of Tmax, Tmin, and Tavg are served as inputs to develop ANFIS, SVR-rbf, and SVR-poly models. The attained results show that all ANFIS, SVR-rbf, and SVR-poly models provide favorable accuracy. Based upon all techniques, the higher accuracies are achieved by models (5) using Tmax - Tmin and Tmax as inputs. According to the statistical results, SVR-rbf outperforms SVR-poly and ANFIS. For SVR-rbf (5), the mean absolute bias error, root mean square error, and correlation coefficient are 1.1931 MJ/m2, 2.0716 MJ/m2, and 0.9380, respectively. The survey results confirm that SVR-rbf can be used efficiently to estimate DHGSR from air temperatures.
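The temperature-to-radiation mapping can be illustrated with a kernel method in plain NumPy. Note the substitution: the sketch uses kernel ridge regression with an RBF kernel as a simpler stand-in for SVR-rbf (same kernel, different loss), and the two-feature inputs and target function are entirely synthetic.

```python
import numpy as np

# Illustrative stand-in for SVR-rbf: kernel ridge regression with an RBF
# kernel, fit on synthetic [Tmax - Tmin, Tmax] inputs.
rng = np.random.default_rng(5)

def rbf_kernel(a, b, gamma=0.5):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

x = rng.uniform(0.0, 20.0, size=(120, 2))      # synthetic temperature features
y = 0.8 * np.sqrt(x[:, 0]) + 0.1 * x[:, 1]     # hypothetical radiation proxy

K = rbf_kernel(x, x)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(x)), y)   # ridge-regularized fit
yhat = K @ alpha
rmse = float(np.sqrt(np.mean((y - yhat) ** 2)))
print(f"training RMSE: {rmse:.4f}")
```

A real evaluation would, as in the paper, report error on held-out data rather than the training fit shown here.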

  10. Floating-point system quantization errors in digital control systems

    NASA Technical Reports Server (NTRS)

    Phillips, C. L.

    1973-01-01

The results are reported of research into the effects on system operation of signal quantization in a digital control system. The investigation considered digital controllers (filters) operating in floating-point arithmetic in either open-loop or closed-loop systems. An error analysis technique is developed, and is implemented by a digital computer program that is based on a digital simulation of the system. As an output the program gives the programming form required for minimum system quantization errors (either maximum or rms errors), and the maximum and rms errors that appear in the system output for a given bit configuration. The program can be integrated into existing digital simulations of a system.
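Floating-point quantization error for a given bit configuration can be measured empirically, as sketched below using float16 as the reduced-precision format (an illustrative choice; the report's filters used other word lengths). Rounding to the nearest float16 bounds the relative error by the float16 unit roundoff, 2⁻¹¹.

```python
import numpy as np

# Sketch of measuring roundoff from a reduced-precision floating-point
# format: rounding float64 values to float16 keeps the relative error
# within the float16 unit roundoff (2**-11 for round-to-nearest).
rng = np.random.default_rng(6)
x = rng.uniform(1.0, 2.0, size=10_000)          # well inside float16 range
xq = x.astype(np.float16).astype(np.float64)    # quantize, then widen back
rel = np.abs(xq - x) / x
print(f"max relative error: {rel.max():.2e}  (bound: {2.0**-11:.2e})")
```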

  11. Calculated Condenser Performance for a Mercury-Turbine Power Plant for Aircraft

    NASA Technical Reports Server (NTRS)

    Doyle, Ronald B.

    1948-01-01

As part of an investigation of the application of nuclear energy to various types of power plants for aircraft, calculations have been made to determine the effect of several operating conditions on the performance of condensers for mercury-turbine power plants. The analysis covered a range of turbine-outlet pressures from 1 to 200 pounds per square inch absolute, turbine-inlet pressures from 300 to 700 pounds per square inch absolute, and a range of condenser cooling-air pressure drops, airplane flight speeds, and altitudes. The maximum load-carrying capacity (available for the nuclear reactor, working fluid, and cargo) of a mercury-turbine powered aircraft would be about half the gross weight of the airplane at a flight speed of 509 miles per hour and an altitude of 30,000 feet. This maximum is obtained with specific condenser frontal areas of 0.0063 square foot per net thrust horsepower with the condenser in a nacelle and 0.0060 square foot per net thrust horsepower with the condenser submerged in the wings (no external condenser drag) for a turbine-inlet pressure of 500 pounds per square inch absolute, a turbine-outlet pressure of 10 pounds per square inch absolute, and a turbine-inlet temperature of 1600 F.

  12. Optimum data analysis procedures for Titan 4 and Space Shuttle payload acoustic measurements during lift-off

    NASA Technical Reports Server (NTRS)

    Piersol, Allan G.

    1991-01-01

Analytical expressions have been derived to describe the mean square error in the estimation of the maximum rms value computed from a step-wise (or running) time average of a nonstationary random signal. These analytical expressions have been applied to the problem of selecting the optimum averaging times that will minimize the total mean square errors in estimates of the maximum sound pressure levels measured inside the Titan IV payload fairing (PLF) and the Space Shuttle payload bay (PLB) during lift-off. Based on evaluations of typical Titan IV and Space Shuttle launch data, it has been determined that the optimum averaging times for computing the maximum levels are (1) T_o = 1.14 sec for the maximum overall level and T_oi = 4.88 f_i^(-0.2) sec for the maximum 1/3 octave band levels inside the Titan IV PLF, and (2) T_o = 1.65 sec for the maximum overall level and T_oi = 7.10 f_i^(-0.2) sec for the maximum 1/3 octave band levels inside the Space Shuttle PLB, where f_i is the 1/3 octave band center frequency. However, the results for both vehicles indicate that the total rms error in the maximum level estimates will be within 25 percent of the minimum error for all averaging times within plus or minus 50 percent of the optimum averaging time, so a precise selection of the exact optimum averaging time is not critical. Based on these results, linear averaging times (T) are recommended for computing the maximum sound pressure level during lift-off.
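The Titan IV band-averaging formula T_oi = 4.88 f_i^(-0.2) sec is straightforward to evaluate at standard 1/3 octave band centers:

```python
# The abstract's optimum 1/3-octave averaging time for the Titan IV PLF:
# T_oi = 4.88 * f_i**(-0.2) seconds (f_i = band center frequency in Hz).
def titan_averaging_time(f_hz):
    return 4.88 * f_hz ** -0.2

for f in (31.5, 125.0, 1000.0, 8000.0):
    print(f"{f:7.1f} Hz -> {titan_averaging_time(f):.2f} s")
```

As the abstract notes, averaging times within 50 percent of these values give nearly the same total error.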

  13. Correcting for sequencing error in maximum likelihood phylogeny inference.

    PubMed

    Kuhner, Mary K; McGill, James

    2014-11-04

    Accurate phylogenies are critical to taxonomy as well as studies of speciation processes and other evolutionary patterns. Accurate branch lengths in phylogenies are critical for dating and rate measurements. Such accuracy may be jeopardized by unacknowledged sequencing error. We use simulated data to test a correction for DNA sequencing error in maximum likelihood phylogeny inference. Over a wide range of data polymorphism and true error rate, we found that correcting for sequencing error improves recovery of the branch lengths, even if the assumed error rate is up to twice the true error rate. Low error rates have little effect on recovery of the topology. When error is high, correction improves topological inference; however, when error is extremely high, using an assumed error rate greater than the true error rate leads to poor recovery of both topology and branch lengths. The error correction approach tested here was proposed in 2004 but has not been widely used, perhaps because researchers do not want to commit to an estimate of the error rate. This study shows that correction with an approximate error rate is generally preferable to ignoring the issue. Copyright © 2014 Kuhner and McGill.
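The standard way sequencing error enters a likelihood is through a per-site observation model mixing over the true base. The uniform-miscall matrix below is a common simplification (each wrong base equally likely), not necessarily the exact model the tested correction uses.

```python
import numpy as np

# Sketch of a per-site sequencing-error observation model: with error rate e,
# P(observe b | true t) = 1 - e if b == t, else e / 3 (uniform miscall model;
# a simplification of what phylogeny software may actually implement).
def observation_probs(error_rate):
    e = error_rate
    p = np.full((4, 4), e / 3.0)    # rows: true base, cols: observed base
    np.fill_diagonal(p, 1.0 - e)
    return p

p = observation_probs(0.01)
print(p[0])    # each row sums to 1
```

In a likelihood calculation, the tip partial for an observed base is this row rather than a 0/1 indicator, which is how assuming an error rate changes branch-length estimates.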

  14. 40 CFR Table 7 to Subpart U of... - Operating Parameters for Which Monitoring Levels Are Required To Be Established for Continuous...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... Condenser Exit temperature Maximum temperature. Carbon adsorber Total regeneration steam flow or nitrogen flow, or pressure (gauge or absolute) a during carbon bed regeneration cycle; and temperature of the carbon bed after regeneration (and within 15 minutes of completing any cooling cycle(s)) Maximum flow or...

  15. 40 CFR Table 7 to Subpart U of... - Operating Parameters for Which Monitoring Levels Are Required To Be Established for Continuous...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    .... Condenser Exit temperature Maximum temperature. Carbon adsorber Total regeneration steam flow or nitrogen flow, or pressure (gauge or absolute) a during carbon bed regeneration cycle; and temperature of the carbon bed after regeneration (and within 15 minutes of completing any cooling cycle(s)) Maximum flow or...

  16. 40 CFR Table 7 to Subpart U of... - Operating Parameters for Which Monitoring Levels Are Required To Be Established for Continuous...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .... Condenser Exit temperature Maximum temperature. Carbon adsorber Total regeneration steam flow or nitrogen flow, or pressure (gauge or absolute) a during carbon bed regeneration cycle; and temperature of the carbon bed after regeneration (and within 15 minutes of completing any cooling cycle(s)) Maximum flow or...

  17. 40 CFR Table 7 to Subpart U of... - Operating Parameters for Which Monitoring Levels Are Required To Be Established for Continuous...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    .... Condenser Exit temperature Maximum temperature. Carbon adsorber Total regeneration steam flow or nitrogen flow, or pressure (gauge or absolute) a during carbon bed regeneration cycle; and temperature of the carbon bed after regeneration (and within 15 minutes of completing any cooling cycle(s)) Maximum flow or...

  18. 40 CFR Table 7 to Subpart U of... - Operating Parameters for Which Monitoring Levels Are Required To Be Established for Continuous...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    .... Condenser Exit temperature Maximum temperature. Carbon adsorber Total regeneration steam flow or nitrogen flow, or pressure (gauge or absolute) a during carbon bed regeneration cycle; and temperature of the carbon bed after regeneration (and within 15 minutes of completing any cooling cycle(s)) Maximum flow or...

  19. Real-Time and Meter-Scale Absolute Distance Measurement by Frequency-Comb-Referenced Multi-Wavelength Interferometry.

    PubMed

    Wang, Guochao; Tan, Lilong; Yan, Shuhua

    2018-02-07

    We report on a frequency-comb-referenced absolute interferometer which instantly measures long distance by integrating multi-wavelength interferometry with direct synthetic wavelength interferometry. The reported interferometer utilizes four different wavelengths, simultaneously calibrated to the frequency comb of a femtosecond laser, to implement subwavelength distance measurement, while direct synthetic wavelength interferometry is elaborately introduced by launching a fifth wavelength to extend a non-ambiguous range for meter-scale measurement. A linearity test performed comparatively with a He-Ne laser interferometer shows a residual error of less than 70.8 nm in peak-to-valley over a 3 m distance, and a 10 h distance comparison is demonstrated to gain fractional deviations of ~3 × 10 -8 versus 3 m distance. Test results reveal that the presented absolute interferometer enables precise, stable, and long-term distance measurements and facilitates absolute positioning applications such as large-scale manufacturing and space missions.

  20. Real-Time and Meter-Scale Absolute Distance Measurement by Frequency-Comb-Referenced Multi-Wavelength Interferometry

    PubMed Central

    Tan, Lilong; Yan, Shuhua

    2018-01-01

    We report on a frequency-comb-referenced absolute interferometer which instantly measures long distance by integrating multi-wavelength interferometry with direct synthetic wavelength interferometry. The reported interferometer utilizes four different wavelengths, simultaneously calibrated to the frequency comb of a femtosecond laser, to implement subwavelength distance measurement, while direct synthetic wavelength interferometry is elaborately introduced by launching a fifth wavelength to extend a non-ambiguous range for meter-scale measurement. A linearity test performed comparatively with a He–Ne laser interferometer shows a residual error of less than 70.8 nm in peak-to-valley over a 3 m distance, and a 10 h distance comparison is demonstrated to gain fractional deviations of ~3 × 10−8 versus 3 m distance. Test results reveal that the presented absolute interferometer enables precise, stable, and long-term distance measurements and facilitates absolute positioning applications such as large-scale manufacturing and space missions. PMID:29414897

  1. Free kick instead of cross-validation in maximum-likelihood refinement of macromolecular crystal structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pražnikar, Jure; University of Primorska; Turk, Dušan, E-mail: dusan.turk@ijs.si

    2014-12-01

    The maximum-likelihood free-kick target, which calculates model error estimates from the work set and a randomly displaced model, proved superior in the accuracy and consistency of refinement of crystal structures compared with the maximum-likelihood cross-validation target, which calculates error estimates from the test set and the unperturbed model. The refinement of a molecular model is a computational procedure by which the atomic model is fitted to the diffraction data. The commonly used target in the refinement of macromolecular structures is the maximum-likelihood (ML) function, which relies on the assessment of model errors. The current ML functions rely on cross-validation. They utilize phase-error estimates that are calculated from a small fraction of diffraction data, called the test set, that are not used to fit the model. An approach has been developed that uses the work set to calculate the phase-error estimates in the ML refinement from simulating the model errors via the random displacement of atomic coordinates. It is called ML free-kick refinement as it uses the ML formulation of the target function and is based on the idea of freeing the model from the model bias imposed by the chemical energy restraints used in refinement. This approach for the calculation of error estimates is superior to the cross-validation approach: it reduces the phase error and increases the accuracy of molecular models, is more robust, provides clearer maps and may use a smaller portion of data for the test set for the calculation of R_free or may leave it out completely.
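A minimal sketch of the free-kick idea, simulating model error by randomly displacing atomic coordinates; the independent Gaussian per-axis displacements, the function name, and the amplitude value are assumptions for illustration, not the paper's implementation.

```python
import random

def free_kick_displace(coords, amplitude, rng=None):
    """Simulate model error by giving every atomic coordinate a random
    'kick' drawn from a zero-mean Gaussian of the given amplitude."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    return [[x + rng.gauss(0.0, amplitude) for x in atom] for atom in coords]

# Displaced copies of the model are then used to estimate phase errors
# from the work set instead of a held-out test set.
model = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]  # toy coordinates
kicked = free_kick_displace(model, amplitude=0.3)
```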

  2. Forecasting in foodservice: model development, testing, and evaluation.

    PubMed

    Miller, J L; Thompson, P A; Orabella, M M

    1991-05-01

    This study was designed to develop, test, and evaluate mathematical models appropriate for forecasting menu-item production demand in foodservice. Data were collected from residence and dining hall foodservices at Ohio State University. Objectives of the study were to collect, code, and analyze the data; develop and test models using actual operation data; and compare forecasting results with current methods in use. Customer count was forecast using deseasonalized simple exponential smoothing. Menu-item demand was forecast by multiplying the count forecast by a predicted preference statistic. Forecasting models were evaluated using mean squared error, mean absolute deviation, and mean absolute percentage error techniques. All models were more accurate than current methods. A broad spectrum of forecasting techniques could be used by foodservice managers with access to a personal computer and spread-sheet and database-management software. The findings indicate that mathematical forecasting techniques may be effective in foodservice operations to control costs, increase productivity, and maximize profits.
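The pipeline described above (a smoothed customer-count forecast multiplied by a preference statistic, then scored with the three error measures) can be sketched as follows; the smoothing constant, counts, and preference fraction are illustrative, and the deseasonalization step is omitted for brevity.

```python
def ses_forecast(series, alpha=0.3):
    """One-step-ahead simple exponential smoothing forecast."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

def error_metrics(actual, predicted):
    """Mean squared error, mean absolute deviation, and MAPE (%)."""
    n = len(actual)
    errors = [a - p for a, p in zip(actual, predicted)]
    mse = sum(e * e for e in errors) / n
    mad = sum(abs(e) for e in errors) / n
    mape = 100 * sum(abs(e) / abs(a) for e, a in zip(errors, actual)) / n
    return mse, mad, mape

counts = [520, 498, 540, 515, 530, 525]   # illustrative daily customer counts
preference = 0.18                          # illustrative menu-item preference
item_forecast = ses_forecast(counts) * preference
```

Menu-item demand is then the count forecast scaled by the predicted preference, exactly as the abstract describes.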

  3. Improved modification for the density-functional theory calculation of thermodynamic properties for C-H-O composite compounds.

    PubMed

    Liu, Min Hsien; Chen, Cheng; Hong, Yaw Shun

    2005-02-08

    A three-parametric modification equation and the least-squares approach are adopted to calibrate hybrid density-functional theory energies of C(1)-C(10) straight-chain aldehydes, alcohols, and alkoxides to accurate enthalpies of formation ΔH_f and Gibbs free energies of formation ΔG_f, respectively. All calculated energies of the C-H-O composite compounds were obtained from B3LYP/6-311++G(3df,2pd) single-point energies and the related thermal corrections of B3LYP/6-31G(d,p) optimized geometries. This investigation revealed that all compounds had a 0.05% average absolute relative error (ARE) for the atomization energies, with a mean absolute error (MAE) of just 2.1 kJ/mol (0.5 kcal/mol) for ΔH_f and 2.4 kJ/mol (0.6 kcal/mol) for ΔG_f.

  4. How is the weather? Forecasting inpatient glycemic control

    PubMed Central

    Saulnier, George E; Castro, Janna C; Cook, Curtiss B; Thompson, Bithika M

    2017-01-01

    Aim: Apply methods of damped trend analysis to forecast inpatient glycemic control. Method: Observed and calculated point-of-care blood glucose data trends were determined over 62 weeks. Mean absolute percent error was used to calculate differences between observed and forecasted values. Comparisons were drawn between model results and linear regression forecasting. Results: The forecasted mean glucose trends observed during the first 24 and 48 weeks of projections compared favorably to the results provided by linear regression forecasting. However, in some scenarios, the damped trend method changed inferences compared with linear regression. In all scenarios, mean absolute percent error values remained below the 10% accepted by demand industries. Conclusion: Results indicate that forecasting methods historically applied within demand industries can project future inpatient glycemic control. Additional study is needed to determine if forecasting is useful in the analyses of other glucometric parameters and, if so, how to apply the techniques to quality improvement. PMID:29134125
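A minimal sketch of a damped (Holt-type) trend forecast of the kind applied above; the parameters alpha, beta, and phi are illustrative assumptions, not values from the paper.

```python
def damped_trend_forecast(series, h, alpha=0.5, beta=0.3, phi=0.9):
    """h-step-ahead forecast with an additive damped (Holt-type) trend.

    phi < 1 flattens the trend as the horizon grows, which is what keeps
    long-range projections from running away like plain linear regression.
    """
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (prev_level + phi * trend)
        trend = beta * (level - prev_level) + (1 - beta) * phi * trend
    damp = sum(phi ** k for k in range(1, h + 1))  # phi + phi^2 + ... + phi^h
    return level + damp * trend
```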

  5. Verifying Safeguards Declarations with INDEPTH: A Sensitivity Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grogan, Brandon R; Richards, Scott

    2017-01-01

    A series of ORIGEN calculations were used to simulate the irradiation and decay of a number of spent fuel assemblies. These simulations focused on variations in the irradiation history that achieved the same terminal burnup through a different set of cycle histories. Simulated NDA measurements were generated for each test case from the ORIGEN data. These simulated measurement types included relative gammas, absolute gammas, absolute gammas plus neutrons, and concentrations of a set of six isotopes commonly measured by NDA. The INDEPTH code was used to reconstruct the initial enrichment, cooling time, and burnup for each irradiation using each simulated measurement type. The results were then compared to the initial ORIGEN inputs to quantify the size of the errors induced by the variations in cycle histories. Errors were compared based on the underlying changes to the cycle history, as well as the data types used for the reconstructions.

  6. Error analysis of multi-needle Langmuir probe measurement technique.

    PubMed

    Barjatya, Aroh; Merritt, William

    2018-04-01

    The multi-needle Langmuir probe is a fairly new instrument technique that has been flown on several recent sounding rockets and is slated to fly on a subset of the QB50 CubeSat constellation. This paper takes a fundamental look into the data analysis procedures used for this instrument to derive absolute electron density. Our calculations suggest that while the technique remains promising, the current data analysis procedures could easily result in errors of 50% or more. We present a simple data analysis adjustment that can reduce errors by at least a factor of five in typical operation.

  7. Implementation of Automatic Clustering Algorithm and Fuzzy Time Series in Motorcycle Sales Forecasting

    NASA Astrophysics Data System (ADS)

    Rasim; Junaeti, E.; Wirantika, R.

    2018-01-01

    Accurate forecasting for the sale of a product depends on the forecasting method used. The purpose of this research is to build a motorcycle sales forecasting application using the Fuzzy Time Series method combined with interval determination using an automatic clustering algorithm. Forecasting is done using motorcycle sales data from the last ten years. The error rate of forecasting is then measured using Mean Percentage Error (MPE) and Mean Absolute Percentage Error (MAPE). The forecasts obtained for the one-year period in this study show good accuracy.
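The two measures differ only in whether signed errors may cancel, which a tiny example with made-up numbers makes concrete:

```python
def mpe(actual, forecast):
    """Mean percentage error: signed, so over- and under-forecasts cancel."""
    return 100 * sum((a - f) / a for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    """Mean absolute percentage error: magnitudes only, no cancellation."""
    return 100 * sum(abs(a - f) / abs(a)
                     for a, f in zip(actual, forecast)) / len(actual)

sales = [100.0, 100.0]
forecasts = [90.0, 110.0]   # one under-, one over-forecast of equal size
```

On these numbers MPE is zero (the errors cancel) while MAPE reports the true 10% average miss, which is why studies usually quote both.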

  8. Error analysis of multi-needle Langmuir probe measurement technique

    NASA Astrophysics Data System (ADS)

    Barjatya, Aroh; Merritt, William

    2018-04-01

    The multi-needle Langmuir probe is a fairly new instrument technique that has been flown on several recent sounding rockets and is slated to fly on a subset of the QB50 CubeSat constellation. This paper takes a fundamental look into the data analysis procedures used for this instrument to derive absolute electron density. Our calculations suggest that while the technique remains promising, the current data analysis procedures could easily result in errors of 50% or more. We present a simple data analysis adjustment that can reduce errors by at least a factor of five in typical operation.

  9. Limited Sampling Strategy for Accurate Prediction of Pharmacokinetics of Saroglitazar: A 3-point Linear Regression Model Development and Successful Prediction of Human Exposure.

    PubMed

    Joshi, Shuchi N; Srinivas, Nuggehally R; Parmar, Deven V

    2018-03-01

    Our aim was to develop and validate the extrapolative performance of a regression model using a limited sampling strategy for accurate estimation of the area under the plasma concentration versus time curve for saroglitazar. Healthy subject pharmacokinetic data from a well-powered food-effect study (fasted vs fed treatments; n = 50) was used in this work. The first 25 subjects' serial plasma concentration data up to 72 hours and corresponding AUC(0-t) (ie, 72 hours) from the fasting group comprised a training dataset to develop the limited sampling model. The internal datasets for prediction included the remaining 25 subjects from the fasting group and all 50 subjects from the fed condition of the same study. The external datasets included pharmacokinetic data for saroglitazar from previous single-dose clinical studies. Limited sampling models were composed of 1-, 2-, and 3-concentration-time points' correlation with AUC(0-t) of saroglitazar. Only models with regression coefficients (R²) > 0.90 were screened for further evaluation. The best R² model was validated for its utility based on mean prediction error, mean absolute prediction error, and root mean square error. Both correlation between predicted and observed AUC(0-t) of saroglitazar and verification of precision and bias using a Bland-Altman plot were carried out. None of the evaluated 1- and 2-concentration-time points models achieved R² > 0.90. Among the various 3-concentration-time points models, only 4 equations passed the predefined criterion of R² > 0.90. Limited sampling models with time points 0.5, 2, and 8 hours (R² = 0.9323) and 0.75, 2, and 8 hours (R² = 0.9375) were validated. Mean prediction error, mean absolute prediction error, and root mean square error were <30% (predefined criterion) and correlation (r) was at least 0.7950 for the consolidated internal and external datasets of 102 healthy subjects for the AUC(0-t) prediction of saroglitazar. The same models, when applied to the AUC(0-t) prediction of saroglitazar sulfoxide, showed mean prediction error, mean absolute prediction error, and root mean square error <30% and correlation (r) of at least 0.9339 in the same pool of healthy subjects. A 3-concentration-time points limited sampling model predicts the exposure of saroglitazar (ie, AUC(0-t)) within the predefined acceptable bias and imprecision limits. The same model was also used to predict AUC(0-∞). The same limited sampling model was found to predict the exposure of saroglitazar sulfoxide within the predefined criteria. This model can find utility during late-phase clinical development of saroglitazar in the patient population. Copyright © 2018 Elsevier HS Journals, Inc. All rights reserved.
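A hypothetical sketch of the validation step described above: compare model-predicted and observed AUC(0-t), then check bias and imprecision against the predefined 30% limit. Function names and data are illustrative, not from the study.

```python
import math

def auc_prediction_metrics(observed, predicted):
    """%MPE (bias), %MAPE, and RMSE of model-predicted vs observed AUC(0-t)."""
    n = len(observed)
    pct = [100 * (p - o) / o for o, p in zip(observed, predicted)]
    mpe = sum(pct) / n
    mape = sum(abs(x) for x in pct) / n
    rmse = math.sqrt(sum((p - o) ** 2 for o, p in zip(observed, predicted)) / n)
    return mpe, mape, rmse

def passes_criterion(observed, predicted, limit=30.0):
    """Accept the limited-sampling model only if bias and imprecision
    stay within the predefined percentage limit (30% in the study)."""
    mpe, mape, _ = auc_prediction_metrics(observed, predicted)
    return abs(mpe) <= limit and mape <= limit
```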

  10. Method For Detecting The Presence Of A Ferromagnetic Object

    DOEpatents

    Roybal, Lyle G.

    2000-11-21

    A method for detecting a presence or an absence of a ferromagnetic object within a sensing area may comprise the steps of sensing, during a sample time, a magnetic field adjacent the sensing area; producing surveillance data representative of the sensed magnetic field; determining an absolute value difference between a maximum datum and a minimum datum comprising the surveillance data; and determining whether the absolute value difference has a positive or negative sign. The absolute value difference and the corresponding positive or negative sign thereof forms a representative surveillance datum that is indicative of the presence or absence in the sensing area of the ferromagnetic material.
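The detection rule can be sketched as follows. The sign convention used here (signed by whichever extreme deviates farther from a baseline field) is one plausible reading of the abstract, and the threshold is a made-up parameter.

```python
def surveillance_datum(samples, baseline=0.0):
    """Max-minus-min spread of the magnetometer samples, signed by
    whichever extreme deviates farther from the baseline field."""
    hi, lo = max(samples), min(samples)
    sign = 1 if abs(hi - baseline) >= abs(lo - baseline) else -1
    return sign * (hi - lo)

def ferromagnetic_present(samples, threshold, baseline=0.0):
    """Declare a detection when the spread exceeds a set threshold."""
    return abs(surveillance_datum(samples, baseline)) > threshold
```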

  11. Pixel-based absolute surface metrology by three flat test with shifted and rotated maps

    NASA Astrophysics Data System (ADS)

    Zhai, Dede; Chen, Shanyong; Xue, Shuai; Yin, Ziqiang

    2018-03-01

    The traditional three flat test only provides the absolute profile along one surface diameter. In this paper, an absolute testing algorithm based on shift-rotation with the three flat test is proposed to reconstruct the two-dimensional surface exactly. Pitch and yaw errors during the shift procedure are analyzed and compensated in our method. Compared with the multi-rotation method proposed before, it only needs a 90° rotation and a shift, which is easy to carry out, especially for large surfaces. It achieves pixel-level spatial resolution without interpolation or assumptions about the test surface. In addition, numerical simulations and optical tests are implemented and show the high-accuracy recovery capability of the proposed method.

  12. Piezocomposite Actuator Arrays for Correcting and Controlling Wavefront Error in Reflectors

    NASA Technical Reports Server (NTRS)

    Bradford, Samuel Case; Peterson, Lee D.; Ohara, Catherine M.; Shi, Fang; Agnes, Greg S.; Hoffman, Samuel M.; Wilkie, William Keats

    2012-01-01

    Three reflectors have been developed and tested to assess the performance of a distributed network of piezocomposite actuators for correcting thermal deformations and total wave-front error. The primary testbed article is an active composite reflector, composed of a spherically curved panel with a graphite face sheet and aluminum honeycomb core composite, and then augmented with a network of 90 distributed piezoelectric composite actuators. The piezoelectric actuator system may be used for correcting as-built residual shape errors, and for controlling low-order, thermally-induced quasi-static distortions of the panel. In this study, thermally-induced surface deformations of 1 to 5 microns were deliberately introduced onto the reflector, then measured using a speckle holography interferometer system. The reflector surface figure was subsequently corrected to a tolerance of 50 nm using the actuators embedded in the reflector's back face sheet. Two additional test articles were constructed: a borosilicate flat window, 150 mm in diameter, with 18 actuators bonded to the back surface; and a direct metal laser sintered reflector with spherical curvature, 230 mm diameter, and 12 actuators bonded to the back surface. In the case of the glass reflector, absolute measurements were performed with an interferometer and the absolute surface was corrected. These test articles were evaluated to determine their absolute surface control capabilities, as well as to assess a multiphysics modeling effort developed under this program for the prediction of active reflector response. This paper will describe the design, construction, and testing of active reflector systems under thermal loads, and subsequent correction of surface shape via distributed piezoelectric actuation.

  13. 36 CFR 1192.4 - Miscellaneous instructions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... General § 1192.4 Miscellaneous instructions. (a) Dimensional conventions. Dimensions that are not noted as minimum or maximum are absolute. (b) Dimensional tolerances. All dimensions are subject to conventional...

  14. 36 CFR 1192.4 - Miscellaneous instructions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... General § 1192.4 Miscellaneous instructions. (a) Dimensional conventions. Dimensions that are not noted as minimum or maximum are absolute. (b) Dimensional tolerances. All dimensions are subject to conventional...

  15. 36 CFR 1192.4 - Miscellaneous instructions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... General § 1192.4 Miscellaneous instructions. (a) Dimensional conventions. Dimensions that are not noted as minimum or maximum are absolute. (b) Dimensional tolerances. All dimensions are subject to conventional...

  16. Skin Cooling and Force Replication at the Ankle in Healthy Individuals: A Crossover Randomized Controlled Trial

    PubMed Central

    Haupenthal, Daniela Pacheco dos Santos; de Noronha, Marcos; Haupenthal, Alessandro; Ruschel, Caroline; Nunes, Guilherme S.

    2015-01-01

    Context Proprioception of the ankle is determined by the ability to perceive the sense of position of the ankle structures, as well as the speed and direction of movement. Few researchers have investigated proprioception by force-replication ability and particularly after skin cooling. Objective To analyze the ability of the ankle-dorsiflexor muscles to replicate isometric force after a period of skin cooling. Design Randomized controlled clinical trial. Setting Laboratory. Patients or Other Participants Twenty healthy individuals (10 men, 10 women; age = 26.8 ± 5.2 years, height = 171 ± 7 cm, mass = 66.8 ± 10.5 kg). Intervention(s) Skin cooling was carried out using 2 ice applications: (1) after maximal voluntary isometric contraction (MVIC) performance and before data collection for the first target force, maintained for 20 minutes; and (2) before data collection for the second target force, maintained for 10 minutes. We measured skin temperature before and after ice applications to ensure skin cooling. Main Outcome Measure(s) A load cell was placed under an inclined board for data collection, and 10 attempts of force replication were carried out for 2 values of MVIC (20%, 50%) in each condition (ice, no ice). We assessed force sense with absolute and root mean square errors (the difference between the force developed by the dorsiflexors and the target force measured with the raw data and after root mean square analysis, respectively) and variable error (the variance around the mean absolute error score). A repeated-measures multivariate analysis of variance was used for statistical analysis. Results The absolute error was greater for the ice than for the no-ice condition (F(1,19) = 9.05, P = .007) and for the target force at 50% of MVIC than at 20% of MVIC (F(1,19) = 26.01, P < .001). Conclusions The error was greater in the ice condition and at 50% of MVIC. 
Skin cooling reduced the proprioceptive ability of the ankle-dorsiflexor muscles to replicate isometric force. PMID:25761136
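The three error scores used above (absolute error, root mean square error, and variable error as the variance around the mean absolute error) can be computed as in this illustrative sketch; the trial data are made up.

```python
import math

def force_sense_errors(trials, target):
    """Absolute error (mean |f - target|), RMS error, and variable error
    (variance of the per-trial absolute errors around their mean)."""
    n = len(trials)
    abs_errors = [abs(f - target) for f in trials]
    absolute_error = sum(abs_errors) / n
    rms_error = math.sqrt(sum((f - target) ** 2 for f in trials) / n)
    variable_error = sum((e - absolute_error) ** 2 for e in abs_errors) / n
    return absolute_error, rms_error, variable_error
```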

  17. DFT benchmark study for the oxidative addition of CH 4 to Pd. Performance of various density functionals

    NASA Astrophysics Data System (ADS)

    de Jong, G. Theodoor; Geerke, Daan P.; Diefenbach, Axel; Matthias Bickelhaupt, F.

    2005-06-01

    We have evaluated the performance of 24 popular density functionals for describing the potential energy surface (PES) of the archetypal oxidative addition reaction of the methane C-H bond to the palladium atom by comparing the results with our recent ab initio [CCSD(T)] benchmark study of this reaction. The density functionals examined cover the local density approximation (LDA), the generalized gradient approximation (GGA), meta-GGAs as well as hybrid density functional theory. Relativistic effects are accounted for through the zeroth-order regular approximation (ZORA). The basis-set dependence of the density-functional-theory (DFT) results is assessed for the Becke-Lee-Yang-Parr (BLYP) functional using a hierarchical series of Slater-type orbital (STO) basis sets ranging from unpolarized double-ζ (DZ) to quadruply polarized quadruple-ζ quality (QZ4P). Stationary points on the reaction surface have been optimized using various GGA functionals, all of which yield geometries that differ only marginally. Counterpoise-corrected relative energies of stationary points are converged to within a few tenths of a kcal/mol if one uses the doubly polarized triple-ζ (TZ2P) basis set and the basis-set superposition error (BSSE) drops to 0.0 kcal/mol for our largest basis set (QZ4P). Best overall agreement with the ab initio benchmark PES is achieved by functionals of the GGA, meta-GGA, and hybrid-DFT type, with mean absolute errors of 1.3-1.4 kcal/mol and errors in activation energies ranging from +0.8 to -1.4 kcal/mol. Interestingly, the well-known BLYP functional compares very reasonably with an only slightly larger mean absolute error of 2.5 kcal/mol and an underestimation by -1.9 kcal/mol of the overall barrier (i.e., the difference in energy between the TS and the separate reactants). For comparison, with B3LYP we arrive at a mean absolute error of 3.8 kcal/mol and an overestimation of the overall barrier by 4.5 kcal/mol.

  18. Test Plan for a Calibration Demonstration System for the Reflected Solar Instrument for the Climate Absolute Radiance and Refractivity Observatory

    NASA Technical Reports Server (NTRS)

    Thome, Kurtis; McCorkel, Joel; Hair, Jason; McAndrew, Brendan; Daw, Adrian; Jennings, Donald; Rabin, Douglas

    2012-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as the most critical method to determine the accuracy of climate change. One of the major objectives of CLARREO is to advance the accuracy of SI traceable absolute calibration at infrared and reflected solar wavelengths. This advance is required to reach the on-orbit absolute accuracy required to allow climate change observations to survive data gaps while remaining sufficiently accurate to observe climate change to within the uncertainty of the limit of natural variability. While these capabilities exist at NIST in the laboratory, there is a need to demonstrate that it can move successfully from NIST to NASA and/or instrument vendor capabilities for future spaceborne instruments. The current work describes the test plan for the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS) which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. The goal of the CDS is to allow the testing and evaluation of calibration approaches, alternate design and/or implementation approaches and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. The end result of efforts with the SOLARIS CDS will be an SI-traceable error budget for reflectance retrieval using solar irradiance as a reference and methods for laboratory-based, absolute calibration suitable for climate-quality data collections.

  19. Spatial patterns of March and September streamflow trends in Pacific Northwest Streams, 1958-2008

    USGS Publications Warehouse

    Chang, Heejun; Jung, Il-Won; Steele, Madeline; Gannett, Marshall

    2012-01-01

    Summer streamflow is a vital water resource for municipal and domestic water supplies, irrigation, salmonid habitat, recreation, and water-related ecosystem services in the Pacific Northwest (PNW) in the United States. This study detects significant negative trends in September absolute streamflow in a majority of 68 stream-gauging stations located on unregulated streams in the PNW from 1958 to 2008. The proportion of March streamflow to annual streamflow increases in most stations over 1,000 m elevation, with a baseflow index of less than 50, while absolute March streamflow does not increase in most stations. The declining trends of September absolute streamflow are strongly associated with seven-day low flow, January–March maximum temperature trends, and the size of the basin (19–7,260 km²), while the increasing trends of the fraction of March streamflow are associated with elevation, April 1 snow water equivalent, March precipitation, center timing of streamflow, and October–December minimum temperature trends. Compared with ordinary least squares (OLS) estimated regression models, spatial error regression and geographically weighted regression (GWR) models effectively remove spatial autocorrelation in residuals. The GWR model results show spatial gradients of local R² values with consistently higher local R² values in the northern Cascades. This finding illustrates that different hydrologic landscape factors, such as geology and seasonal distribution of precipitation, also influence streamflow trends in the PNW. In addition, our spatial analysis model results show that considering various geographic factors help clarify the dynamics of streamflow trends over a large geographical area, supporting a spatial analysis approach over aspatial OLS-estimated regression models for predicting streamflow trends. Results indicate that transitional rain–snow surface water-dominated basins are likely to have reduced summer streamflow under warming scenarios. 
Consequently, a better understanding of the relationships among summer streamflow, precipitation, snowmelt, elevation, and geology can help water managers predict the response of regional summer streamflow to global warming.

  20. Multi-sensor calibration of low-cost magnetic, angular rate and gravity systems.

    PubMed

    Lüken, Markus; Misgeld, Berno J E; Rüschen, Daniel; Leonhardt, Steffen

    2015-10-13

    We present a new calibration procedure for low-cost nine degrees-of-freedom (9DOF) magnetic, angular rate and gravity (MARG) sensor systems, which relies on a calibration cube, a reference table and a body sensor network (BSN). The 9DOF MARG sensor is part of our recently-developed "Integrated Posture and Activity Network by Medit Aachen" (IPANEMA) BSN. The advantage of this new approach is the use of the calibration cube, which allows for easy integration of two sensor nodes of the IPANEMA BSN. One 9DOF MARG sensor node is thereby used for calibration; the second 9DOF MARG sensor node is used for reference measurements. A novel algorithm uses these measurements to further improve the performance of the calibration procedure by processing arbitrarily-executed motions. In addition, the calibration routine can be used in an alignment procedure to minimize errors in the orientation between the 9DOF MARG sensor system and a motion capture inertial reference system. A two-stage experimental study is conducted to underline the performance of our calibration procedure. In both stages of the proposed calibration procedure, the BSN data, as well as reference tracking data are recorded. In the first stage, the mean values of all sensor outputs are determined as the absolute measurement offset to minimize integration errors in the derived movement model of the corresponding body segment. The second stage deals with the dynamic characteristics of the measurement system where the dynamic deviation of the sensor output compared to a reference system is corrected. In practical validation experiments, this procedure showed promising results with a maximum RMS error of 3.89°.
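Stage one of the calibration, taking the mean resting output of each sensor axis as its absolute offset, can be sketched as follows; the two-axis data layout is an assumption for illustration (a real MARG node has nine channels).

```python
def static_offset(resting_samples):
    """Stage one: per-axis mean of the resting output = absolute offset."""
    return [sum(axis) / len(axis) for axis in zip(*resting_samples)]

def apply_offset(sample, offset):
    """Subtract the stored offset from a live sample before integration."""
    return [s - o for s, o in zip(sample, offset)]

# Illustrative 2-axis resting data recorded while the node lies still.
offset = static_offset([[1.0, 2.0], [3.0, 4.0]])
```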

  2. Measuring human remains in the field: Grid technique, total station, or MicroScribe?

    PubMed

    Sládek, Vladimír; Galeta, Patrik; Sosna, Daniel

    2012-09-10

Although three-dimensional (3D) coordinates for human intra-skeletal landmarks are among the most important data that anthropologists have to record in the field, little is known about the reliability of various measuring techniques. We compared the reliability of three techniques used for 3D measurement of human remains in the field: grid technique (GT), total station (TS), and MicroScribe (MS). We measured 365 field osteometric points on 12 skeletal sequences excavated at the Late Medieval/Early Modern churchyard in Všeruby, Czech Republic. We compared intra-observer, inter-observer, and inter-technique variation using mean difference (MD), mean absolute difference (MAD), standard deviation of difference (SDD), and limits of agreement (LA). All three measuring techniques can be used when the accepted error range is measured in centimeters. When a range of accepted error measurable in millimeters is needed, MS offers the best solution. TS can achieve the same reliability as MS, but only when the laser beam is accurately pointed into the center of the prism. When the prism is not accurately oriented, TS produces unreliable data. TS is more sensitive to initialization than is MS. GT measures the human skeleton with acceptable reliability for general purposes but insufficiently when highly accurate skeletal data are needed. We observed high inter-technique variation, indicating that just one technique should be used when spatial data from one individual are recorded. Subadults are measured with slightly lower error than are adults. The effect of maximum excavated skeletal length has little practical significance in field recording. When MS is not available, we offer practical suggestions that can help to increase reliability when measuring the human skeleton in the field. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  3. The reliability of three devices used for measuring vertical jump height.

    PubMed

    Nuzzo, James L; Anning, Jonathan H; Scharfenberg, Jessica M

    2011-09-01

    The purpose of this investigation was to assess the intrasession and intersession reliability of the Vertec, Just Jump System, and Myotest for measuring countermovement vertical jump (CMJ) height. Forty male and 39 female university students completed 3 maximal-effort CMJs during 2 testing sessions, which were separated by 24-48 hours. The height of the CMJ was measured from all 3 devices simultaneously. Systematic error, relative reliability, absolute reliability, and heteroscedasticity were assessed for each device. Systematic error across the 3 CMJ trials was observed within both sessions for males and females, and this was most frequently observed when the CMJ height was measured by the Vertec. No systematic error was discovered across the 2 testing sessions when the maximum CMJ heights from the 2 sessions were compared. In males, the Myotest demonstrated the best intrasession reliability (intraclass correlation coefficient [ICC] = 0.95; SEM = 1.5 cm; coefficient of variation [CV] = 3.3%) and intersession reliability (ICC = 0.88; SEM = 2.4 cm; CV = 5.3%; limits of agreement = -0.08 ± 4.06 cm). Similarly, in females, the Myotest demonstrated the best intrasession reliability (ICC = 0.91; SEM = 1.4 cm; CV = 4.5%) and intersession reliability (ICC = 0.92; SEM = 1.3 cm; CV = 4.1%; limits of agreement = 0.33 ± 3.53 cm). Additional analysis revealed that heteroscedasticity was present in the CMJ when measured from all 3 devices, indicating that better jumpers demonstrate greater fluctuations in CMJ scores across testing sessions. To attain reliable CMJ height measurements, practitioners are encouraged to familiarize athletes with the CMJ technique and then allow the athletes to complete numerous repetitions until performance plateaus, particularly if the Vertec is being used.
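The reliability statistics reported above have standard definitions, though the abstract does not give the authors' exact computations. As a hedged sketch, SEM is commonly derived as SD·√(1 − ICC) and CV as 100·SD/mean; the function names and jump heights below are hypothetical:

```python
import math
from statistics import mean, stdev

def sem_from_icc(scores, icc):
    # Standard error of measurement: SEM = SD * sqrt(1 - ICC)
    return stdev(scores) * math.sqrt(1.0 - icc)

def coefficient_of_variation(scores):
    # CV (%) = 100 * SD / mean
    return 100.0 * stdev(scores) / mean(scores)

# Hypothetical CMJ heights (cm) across repeated trials
jumps_cm = [44.0, 46.0, 45.0, 47.0, 43.0]
icc = 0.95
```

With these values, a high ICC shrinks the SEM well below the raw between-trial SD, which is the sense in which the Myotest figures above indicate good reliability.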

  4. An Investigation of the Standard Errors of Expected A Posteriori Ability Estimates.

    ERIC Educational Resources Information Center

    De Ayala, R. J.; And Others

Expected a posteriori (EAP) estimation has a number of advantages over maximum likelihood estimation or maximum a posteriori (MAP) estimation methods. These include ability estimates (thetas) for all response patterns, less regression towards the mean than MAP ability estimates, and a lower average squared error. R. D. Bock and R. J. Mislevy (1982) state that the…

  5. Navigator alignment using radar scan

    DOEpatents

    Doerry, Armin W.; Marquette, Brandeis

    2016-04-05

The various technologies presented herein relate to the determination and correction of the heading error of a platform. Knowledge of at least one of a maximum Doppler frequency or a minimum Doppler bandwidth pertaining to a plurality of radar echoes can be utilized to facilitate correction of the heading error. Heading error can occur as a result of component drift. In an ideal situation, the boresight direction of an antenna or the front of an aircraft will have associated therewith at least one of a maximum Doppler frequency or a minimum Doppler bandwidth. As the boresight direction of the antenna strays from the direction of travel, at least one of the maximum Doppler frequency or the minimum Doppler bandwidth will shift away, either left or right, from the ideal situation.
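The correction idea admits a simple sketch: for an ideally aligned antenna the Doppler frequency peaks at boresight, so the scan angle at which the measured Doppler is maximal estimates the heading offset. The function below is a hypothetical illustration of that principle, not the patented method:

```python
def heading_error_from_doppler(scan_angles_deg, doppler_hz):
    # Ideally the maximum Doppler frequency occurs at boresight
    # (0 degrees); the scan angle of the observed maximum is
    # therefore an estimate of the heading error to be corrected.
    i_max = max(range(len(doppler_hz)), key=doppler_hz.__getitem__)
    return scan_angles_deg[i_max]
```

With a Doppler profile whose peak has drifted to +3 degrees, the estimate returned is 3 degrees, which would then be fed back to correct the navigator heading.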

  6. The Absolute Magnitude of the Sun in Several Filters

    NASA Astrophysics Data System (ADS)

    Willmer, Christopher N. A.

    2018-06-01

    This paper presents a table with estimates of the absolute magnitude of the Sun and the conversions from vegamag to the AB and ST systems for several wide-band filters used in ground-based and space-based observatories. These estimates use the dustless spectral energy distribution (SED) of Vega, calibrated absolutely using the SED of Sirius, to set the vegamag zero-points and a composite spectrum of the Sun that coadds space-based observations from the ultraviolet to the near-infrared with models of the Solar atmosphere. The uncertainty of the absolute magnitudes is estimated by comparing the synthetic colors with photometric measurements of solar analogs and is found to be ∼0.02 mag. Combined with the uncertainty of ∼2% in the calibration of the Vega SED, the errors of these absolute magnitudes are ∼3%–4%. Using these SEDs, for three of the most utilized filters in extragalactic work the estimated absolute magnitudes of the Sun are M B = 5.44, M V = 4.81, and M K = 3.27 mag in the vegamag system and M B = 5.31, M V = 4.80, and M K = 5.08 mag in AB.

  7. [Absolute and relative strength-endurance of the knee flexor and extensor muscles: a reliability study using the IsoMed 2000-dynamometer].

    PubMed

    Dirnberger, J; Wiesinger, H P; Stöggl, T; Kösters, A; Müller, E

    2012-09-01

Isokinetic devices are highly rated in strength-related performance diagnosis. A few years ago, the broad variety of existing products was extended by the IsoMed 2000-dynamometer. In order for an isokinetic device to be clinically useful, the reliability of specific applications must be established. Although there have already been single studies on this topic for the IsoMed 2000 concerning maximum strength measurements, there has been no study regarding the assessment of strength-endurance so far. The aim of the present study was to establish the reliability of various methods of quantifying strength-endurance using the IsoMed 2000. A sample of 33 healthy young subjects (age: 23.8 ± 2.6 years) participated in one familiarisation and two testing sessions, 3-4 days apart. Testing consisted of a series of 30 full-effort concentric extension-flexion cycles of the right knee muscles at an angular velocity of 180 °/s. Based on the parameters Peak Torque and Work for each repetition, indices of absolute (KADabs) and relative (KADrel) strength-endurance were derived. KADabs was calculated as the mean value of all testing repetitions; KADrel was determined in two ways: on the one hand, as the percentage decrease between the first and the last 5 repetitions (KADrelA) and, on the other, as the negative slope derived from the linear regression equation over all repetitions (KADrelB). Detection of systematic errors was performed using paired-sample t-tests; relative and absolute reliability were examined using the intraclass correlation coefficient (ICC 2.1) and the standard error of measurement (SEM%), respectively. In general, for extension measurements concerning KADabs and - in a weakened form - KADrel, high ICC values of 0.76-0.89 combined with clinically acceptable SEM% values of 1.2-5.9 % could be found. For flexion measurements this only applies to KADabs, whereas results for KADrel turned out to be clearly weaker, with ICC and SEM% values of 0.42-0.62 and 9.6-17.7 %, leaving considerable doubt about their clinical usefulness. However, if there should after all be a need to measure KADrel for flexion, it is - in view of the stronger reliability results - recommended (i) to concentrate on the calculation of KADrelB, (ii) to use the parameter Work and - in view of considerable familiarisation and learning effects of ≈10 % - (iii) to include a familiarisation period that exceeds the familiarisation session conducted in the present study. Georg Thieme Verlag KG Stuttgart · New York.
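Assuming a list of per-repetition Work (or Peak Torque) values, the three strength-endurance indices can be sketched as follows; the function names are hypothetical:

```python
from statistics import mean

def kad_abs(work):
    # KADabs: mean value of all testing repetitions
    return mean(work)

def kad_rel_a(work):
    # KADrelA: percentage decrease between the mean of the
    # first 5 and the mean of the last 5 repetitions
    first, last = mean(work[:5]), mean(work[-5:])
    return 100.0 * (first - last) / first

def kad_rel_b(work):
    # KADrelB: negative slope of a linear regression fitted
    # over all repetitions (repetition index as predictor)
    n = len(work)
    xs = range(1, n + 1)
    x_bar, y_bar = mean(xs), mean(work)
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, work)) / \
            sum((x - x_bar) ** 2 for x in xs)
    return -slope
```

For a perfectly linear fatigue curve the two relative indices agree in sign but differ in units (percent vs. slope), which is one reason the two definitions can show different reliability.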

  8. Toward Biopredictive Dissolution for Enteric Coated Dosage Forms.

    PubMed

    Al-Gousous, J; Amidon, G L; Langguth, P

    2016-06-06

The aim of this work was to develop a phosphate-buffer-based dissolution method for enteric-coated formulations with improved biopredictivity for fasted conditions. Two commercially available enteric-coated aspirin products were used as model formulations (Aspirin Protect 300 mg, and Walgreens Aspirin 325 mg). The disintegration performance of these products in a physiological 8 mM pH 6.5 bicarbonate buffer (representing the conditions in the proximal small intestine) was used as a standard to optimize the employed phosphate buffer molarity. To account for the fact that a pH and buffer molarity gradient exists along the small intestine, the introduction of such a gradient was proposed for products with prolonged lag times (when release is lower than 75% in the first hour after the acid stage) in the proposed buffer. This also allows the method to predict the performance of later-disintegrating products. Dissolution performance using the accordingly developed method was compared to that observed when using two well-established dissolution methods: the United States Pharmacopeia (USP) method and blank fasted state simulated intestinal fluid (FaSSIF). The resulting dissolution profiles were convoluted using GastroPlus software to obtain predicted pharmacokinetic profiles. A pharmacokinetic study on healthy human volunteers was performed to evaluate the predictions made by the different dissolution setups. The novel method provided the best prediction, by a relatively wide margin, for the difference between the lag times of the two tested formulations, indicating that it can predict the post-gastric-emptying onset of drug release with reasonable accuracy. Both the new and the blank FaSSIF methods showed potential for establishing an in vitro-in vivo correlation (IVIVC) concerning the prediction of Cmax and AUC0-24 (prediction errors not more than 20%). However, these predictions are strongly affected by the highly variable first-pass metabolism, necessitating the evaluation of an absorption-rate metric that is more independent of the first-pass effect. The Cmax/AUC0-24 ratio was selected for this purpose. Regarding this metric's predictions, the new method provided very good prediction of the two products' performances relative to each other (only 1.05% prediction error in this regard), while its predictions for the individual products' values in absolute terms were borderline, narrowly missing the regulatory 20% prediction error limits (21.51% for Aspirin Protect and 22.58% for Walgreens Aspirin). The blank FaSSIF-based method provided good Cmax/AUC0-24 ratio prediction, in absolute terms, for Aspirin Protect (9.05% prediction error), but its prediction for Walgreens Aspirin (33.97% prediction error) was poor. Thus it gave practically the same average but much higher maximum prediction errors compared to the new method, and it was strongly overdiscriminating in predicting the products' performances relative to one another. The USP method, despite not being overdiscriminating, provided poor predictions of the individual products' Cmax/AUC0-24 ratios. This indicates that, overall, the new method offers improved biopredictivity compared to established methods.
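The percent prediction errors quoted above follow the usual form, 100·(predicted − observed)/observed, applied here to the Cmax/AUC0-24 ratio. A one-line sketch (hypothetical name; the exact convention used by the authors is an assumption):

```python
def prediction_error_pct(predicted, observed):
    # Regulatory-style percent prediction error (%PE); values
    # within +/-20% are conventionally considered acceptable.
    return 100.0 * (predicted - observed) / observed
```

Under this convention, a predicted ratio 20% above the observed one yields %PE = 20.0, exactly at the regulatory limit discussed above.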

  9. Evaluating the accuracy and large inaccuracy of two continuous glucose monitoring systems.

    PubMed

    Leelarathna, Lalantha; Nodale, Marianna; Allen, Janet M; Elleri, Daniela; Kumareswaran, Kavita; Haidar, Ahmad; Caldwell, Karen; Wilinska, Malgorzata E; Acerini, Carlo L; Evans, Mark L; Murphy, Helen R; Dunger, David B; Hovorka, Roman

    2013-02-01

This study evaluated the accuracy and large inaccuracy of the Freestyle Navigator (FSN) (Abbott Diabetes Care, Alameda, CA) and Dexcom SEVEN PLUS (DSP) (Dexcom, Inc., San Diego, CA) continuous glucose monitoring (CGM) systems during closed-loop studies. Paired CGM and plasma glucose values (7,182 data pairs) were collected, every 15-60 min, from 32 adults (36.2±9.3 years) and 20 adolescents (15.3±1.5 years) with type 1 diabetes who participated in closed-loop studies. Levels 1, 2, and 3 of large sensor error with increasing severity were defined according to absolute relative deviation greater than or equal to ±40%, ±50%, and ±60% at a reference glucose level of ≥6 mmol/L or absolute deviation greater than or equal to ±2.4 mmol/L, ±3.0 mmol/L, and ±3.6 mmol/L at a reference glucose level of <6 mmol/L. Median absolute relative deviation was 9.9% for FSN and 12.6% for DSP. Proportions of data points in Zones A and B of Clarke error grid analysis were similar (96.4% for FSN vs. 97.8% for DSP). Large sensor over-reading, which increases risk of insulin over-delivery and hypoglycemia, occurred two- to threefold more frequently with DSP than FSN (once every 2.5, 4.6, and 10.7 days of FSN use vs. 1.2, 2.0, and 3.7 days of DSP use for Level 1-3 errors, respectively). At levels 2 and 3, large sensor errors lasting 1 h or longer were absent with FSN but persisted with DSP. FSN and DSP differ substantially in the frequency and duration of large inaccuracy despite only modest differences in conventional measures of numerical and clinical accuracy. Further evaluations are required to confirm that FSN is more suitable for integration into closed-loop delivery systems.
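The three severity levels can be expressed as a small classifier. The sketch below uses the thresholds quoted above with hypothetical function and variable names; it returns 0 when the deviation does not qualify as a large error:

```python
def large_error_level(cgm, ref):
    # Classify a CGM reading against the reference glucose (mmol/L)
    # into large-error levels 1-3; 0 means not a large error.
    if ref >= 6.0:
        # Relative criterion above 6 mmol/L
        dev = abs(cgm - ref) / ref * 100.0
        thresholds = (40.0, 50.0, 60.0)
    else:
        # Absolute criterion below 6 mmol/L
        dev = abs(cgm - ref)
        thresholds = (2.4, 3.0, 3.6)
    level = 0
    for lvl, t in enumerate(thresholds, start=1):
        if dev >= t:
            level = lvl
    return level
```

Because the levels nest (every Level 3 error is also a Level 1 and 2 error), the function reports the highest level reached, matching the "increasing severity" framing above.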

  10. Effect of proprioception training on knee joint position sense in female team handball players.

    PubMed

    Pánics, G; Tállay, A; Pavlik, A; Berkes, I

    2008-06-01

A number of studies have shown that proprioception training can reduce the risk of injuries in pivoting sports, but the mechanism is not clearly understood. To determine the contributing effects of proprioception training on knee joint position sense among team handball players. Prospective cohort study. Two professional female handball teams were followed prospectively for the 2005-6 season. 20 players in the intervention team followed a prescribed proprioceptive training programme while 19 players in the control team did not have a specific proprioceptive training programme. The coaches recorded all exposures of the individual players. The location and nature of injuries were recorded. Joint position sense (JPS) was measured by a goniometer on both knees in three angle intervals, testing each angle five times. Assessments were performed before and after the season by the same examiner for both teams. In the intervention team a third assessment was also performed during the season. Complete data were obtained for 15 subjects in the intervention team and 16 in the control team. Absolute error score, error of variation score and SEM were calculated and the results of the intervention and control teams were compared. The proprioception sensory function of the players in the intervention team was significantly improved between the assessments made at the start and the end of the season (mean (SD) absolute error 9.78-8.21 degrees (7.19-6.08 degrees) vs 3.61-4.04 degrees (3.71-3.20 degrees), p<0.05). No improvement was seen in the sensory function in the control team between the start and the end of the season (mean (SD) absolute error 6.31-6.22 degrees (6.12-3.59 degrees) vs 6.13-6.69 degrees (7.46-6.49 degrees), p>0.05). This is the first study to show that proprioception training improves joint position sense in elite female handball players. This may explain the effect of neuromuscular training in reducing the injury rate.

  11. Calculating tumor trajectory and dose-of-the-day using cone-beam CT projections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Bernard L., E-mail: bernard.jones@ucdenver.edu; Westerly, David; Miften, Moyed

    2015-02-15

Purpose: Cone-beam CT (CBCT) projection images provide anatomical data in real-time over several respiratory cycles, forming a comprehensive picture of tumor movement. The authors developed and validated a method which uses these projections to determine the trajectory of and dose to highly mobile tumors during each fraction of treatment. Methods: CBCT images of a respiration phantom were acquired, the trajectory of which mimicked a lung tumor with high amplitude (up to 2.5 cm) and hysteresis. A template-matching algorithm was used to identify the location of a steel BB in each CBCT projection, and a Gaussian probability density function for the absolute BB position was calculated which best fit the observed trajectory of the BB in the imager geometry. Two modifications of the trajectory reconstruction were investigated: first, using respiratory phase information to refine the trajectory estimation (Phase), and second, using the Monte Carlo (MC) method to sample the estimated Gaussian tumor position distribution. The accuracies of the proposed methods were evaluated by comparing the known and calculated BB trajectories in phantom-simulated clinical scenarios using abdominal tumor volumes. Results: With all methods, the mean position of the BB was determined with accuracy better than 0.1 mm, and root-mean-square trajectory errors averaged 3.8% ± 1.1% of the marker amplitude. Dosimetric calculations using Phase methods were more accurate, with mean absolute error less than 0.5%, and with error less than 1% in the highest-noise trajectory. MC-based trajectories prevent the overestimation of dose, but when viewed in an absolute sense, add a small amount of dosimetric error (<0.1%). Conclusions: Marker trajectory and target dose-of-the-day were accurately calculated using CBCT projections. This technique provides a method to evaluate highly mobile tumors using ordinary CBCT data, and could facilitate better strategies to mitigate or compensate for motion during stereotactic body radiotherapy.

  12. Investigation of advanced phase-shifting projected fringe profilometry techniques

    NASA Astrophysics Data System (ADS)

    Liu, Hongyu

    1999-11-01

The phase-shifting projected fringe profilometry (PSPFP) technique is a powerful tool in the profile measurement of rough engineering surfaces. Compared with other competing techniques, this technique is notable for its full-field measurement capacity, system simplicity, high measurement speed, and low environmental vulnerability. The main purpose of this dissertation is to tackle, with some new approaches, three important problems which severely limit the capability and the accuracy of the PSPFP technique. Chapter 1 briefly introduces background information on the PSPFP technique, including the measurement principles, basic features, and related techniques. The objectives and organization of the thesis are also outlined. Chapter 2 gives a theoretical treatment of absolute PSPFP measurement. The mathematical formulations and basic requirements of the absolute PSPFP measurement and its supporting techniques are discussed in detail. Chapter 3 introduces the experimental verification of the proposed absolute PSPFP technique. Some design details of a prototype system are discussed as supplements to the previous theoretical analysis. Various fundamental experiments performed for concept verification and accuracy evaluation are introduced together with some brief comments. Chapter 4 presents the theoretical study of speckle-induced phase measurement errors. In this analysis, the expression for speckle-induced phase errors is first derived based on the multiplicative noise model of image-plane speckles. The statistics and the system dependence of speckle-induced phase errors are then thoroughly studied through numerical simulations and analytical derivations. Based on the analysis, some suggestions on the system design are given to improve measurement accuracy. Chapter 5 discusses a new technique for combating surface reflectivity variations. The formula used for error compensation is first derived based on a simplified model of the detection process. The techniques coping with two major effects of surface reflectivity variations are then introduced. Some fundamental problems in the proposed technique are studied through simulations. Chapter 6 briefly summarizes the major contributions of the current work and provides some suggestions for future research.

  13. Cross sections for H(-) and Cl(-) production from HCl by dissociative electron attachment

    NASA Technical Reports Server (NTRS)

    Orient, O. J.; Srivastava, S. K.

    1985-01-01

    A crossed target beam-electron beam collision geometry and a quadrupole mass spectrometer have been used to conduct dissociative electron attachment cross section measurements for the case of H(-) and Cl(-) production from HCl. The relative flow technique is used to determine the absolute values of cross sections. A tabulation is given of the attachment energies corresponding to various cross section maxima. Error sources contributing to total errors are also estimated.

  14. Bolus-dependent dosimetric effect of positioning errors for tangential scalp radiotherapy with helical tomotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lobb, Eric, E-mail: eclobb2@gmail.com

    2014-04-01

The dosimetric effect of errors in patient position is studied on-phantom as a function of simulated bolus thickness to assess the need for bolus utilization in scalp radiotherapy with tomotherapy. A treatment plan is generated on a cylindrical phantom, mimicking a radiotherapy technique for the scalp utilizing primarily tangential beamlets. A planning target volume with embedded scalplike clinical target volumes (CTVs) is planned to a uniform dose of 200 cGy. Translational errors in phantom position are introduced in 1-mm increments and dose is recomputed from the original sinogram. For each error the maximum dose, minimum dose, clinical target dose homogeneity index (HI), and dose-volume histogram (DVH) are presented for simulated bolus thicknesses from 0 to 10 mm. Baseline HI values for all bolus thicknesses were in the 5.5 to 7.0 range, increasing to a maximum of 18.0 to 30.5 for the largest positioning errors when 0 to 2 mm of bolus is used. Utilizing 5 mm of bolus resulted in a maximum HI value of 9.5 for the largest positioning errors. Using 0 to 2 mm of bolus resulted in minimum and maximum dose values of 85% to 94% and 118% to 125% of the prescription dose, respectively. When using 5 mm of bolus these values were 98.5% and 109.5%. DVHs showed minimal changes in CTV dose coverage when using 5 mm of bolus, even for the largest positioning errors. CTV dose homogeneity becomes increasingly sensitive to errors in patient position as bolus thickness decreases when treating the scalp with primarily tangential beamlets. Performing a radial expansion of the scalp CTV into 5 mm of bolus material minimizes dosimetric sensitivity to errors in patient position as large as 5 mm and is therefore recommended.

  15. Modified Bat Algorithm for Feature Selection with the Wisconsin Diagnosis Breast Cancer (WDBC) Dataset

    PubMed

    Jeyasingh, Suganthi; Veluchamy, Malathi

    2017-05-01

Early diagnosis of breast cancer is essential to save the lives of patients. Usually, medical datasets include a large variety of data that can lead to confusion during diagnosis. The Knowledge Discovery in Databases (KDD) process helps to improve efficiency. It requires elimination of inappropriate and repeated data from the dataset before final diagnosis. This can be done using any of the feature selection algorithms available in data mining. Feature selection is considered a vital step to increase classification accuracy. This paper proposes a Modified Bat Algorithm (MBA) for feature selection to eliminate irrelevant features from an original dataset. The Bat algorithm was modified using simple random sampling to select random instances from the dataset. Ranking against the global best features was used to recognize the predominant features in the dataset. The selected features are used to train a Random Forest (RF) classification algorithm. The MBA feature selection algorithm enhanced the classification accuracy of RF in identifying the occurrence of breast cancer. The Wisconsin Diagnosis Breast Cancer (WDBC) dataset was used for the performance analysis of the proposed MBA feature selection algorithm. The proposed algorithm achieved better performance in terms of Kappa statistic, Matthews Correlation Coefficient, Precision, F-measure, Recall, Mean Absolute Error (MAE), Root Mean Square Error (RMSE), Relative Absolute Error (RAE) and Root Relative Squared Error (RRSE).
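The error metrics listed above have standard definitions; RAE and RRSE normalize the absolute and squared errors by those of a predictor that always outputs the mean of the true values. A minimal plain-Python sketch (function names hypothetical) follows:

```python
import math

def mae(y_true, y_pred):
    # Mean Absolute Error
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    # Root Mean Square Error
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def rae(y_true, y_pred):
    # Relative Absolute Error: total |error| over the total
    # absolute deviation from the mean of the true values
    y_bar = sum(y_true) / len(y_true)
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / \
           sum(abs(t - y_bar) for t in y_true)

def rrse(y_true, y_pred):
    # Root Relative Squared Error, relative to the mean predictor
    y_bar = sum(y_true) / len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) /
                     sum((t - y_bar) ** 2 for t in y_true))
```

Values of RAE and RRSE below 1 indicate the model beats the trivial mean predictor, which is why they complement the absolute metrics MAE and RMSE.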

  16. Comparison of total energy expenditure assessed by two devices in controlled and free-living conditions.

    PubMed

    Rousset, Sylvie; Fardet, Anthony; Lacomme, Philippe; Normand, Sylvie; Montaurier, Christophe; Boirie, Yves; Morio, Béatrice

    2015-01-01

    The objective of this study was to evaluate the validity of total energy expenditure (TEE) provided by Actiheart and Armband. Normal-weight adult volunteers wore both devices either for 17 hours in a calorimetric chamber (CC, n = 49) or for 10 days in free-living conditions (FLC) outside the laboratory (n = 41). The two devices and indirect calorimetry or doubly labelled water, respectively, were used to estimate TEE in the CC group and FLC group. In the CC, the relative value of TEE error was not significant (p > 0.05) for Actiheart but significantly different from zero for Armband, showing TEE underestimation (-4.9%, p < 0.0001). However, the mean absolute values of errors were significantly different between Actiheart and Armband: 8.6% and 6.7%, respectively (p = 0.05). Armband was more accurate for estimating TEE during sleeping, rest, recovery periods and sitting-standing. Actiheart provided better estimation during step and walking. In FLC, no significant error in relative value was detected. Nevertheless, Armband produced smaller errors in absolute value than Actiheart (8.6% vs. 12.8%). The distributions of differences were more scattered around the means, suggesting a higher inter-individual variability in TEE estimated by Actiheart than by Armband. Our results show that both monitors are appropriate for estimating TEE. Armband is more effective than Actiheart at the individual level for daily light-intensity activities.

  17. Experimental verification of stopping-power prediction from single- and dual-energy computed tomography in biological tissues

    NASA Astrophysics Data System (ADS)

    Möhler, Christian; Russ, Tom; Wohlfahrt, Patrick; Elter, Alina; Runz, Armin; Richter, Christian; Greilich, Steffen

    2018-01-01

    An experimental setup for consecutive measurement of ion and x-ray absorption in tissue or other materials is introduced. With this setup using a 3D-printed sample container, the reference stopping-power ratio (SPR) of materials can be measured with an uncertainty of below 0.1%. A total of 65 porcine and bovine tissue samples were prepared for measurement, comprising five samples each of 13 tissue types representing about 80% of the total body mass (three different muscle and fatty tissues, liver, kidney, brain, heart, blood, lung and bone). Using a standard stoichiometric calibration for single-energy CT (SECT) as well as a state-of-the-art dual-energy CT (DECT) approach, SPR was predicted for all tissues and then compared to the measured reference. With the SECT approach, the SPRs of all tissues were predicted with a mean error of (-0.84  ±  0.12)% and a mean absolute error of (1.27  ±  0.12)%. In contrast, the DECT-based SPR predictions were overall consistent with the measured reference with a mean error of (-0.02  ±  0.15)% and a mean absolute error of (0.10  ±  0.15)%. Thus, in this study, the potential of DECT to decrease range uncertainty could be confirmed in biological tissue.

  18. [A Quality Assurance (QA) System with a Web Camera for High-dose-rate Brachytherapy].

    PubMed

    Hirose, Asako; Ueda, Yoshihiro; Oohira, Shingo; Isono, Masaru; Tsujii, Katsutomo; Inui, Shouki; Masaoka, Akira; Taniguchi, Makoto; Miyazaki, Masayoshi; Teshima, Teruki

    2016-03-01

A quality assurance (QA) system that simultaneously quantifies the position and duration of an (192)Ir source (dwell position and time) was developed, and its performance was evaluated in high-dose-rate brachytherapy. This QA system uses a web camera to verify and quantify dwell position and time. The web camera records 30 images per second over a range from 1,425 mm to 1,505 mm. A user verifies the source position from the web camera in real time. The source position and duration were quantified from the recorded video using in-house software based on a template-matching technique. This QA system allowed verification of the absolute position in real time and simultaneous quantification of dwell position and time. Verification of the system showed that the mean step-size error was 0.31±0.1 mm and the mean dwell-time error 0.1±0.0 s. Absolute position errors can be determined with an accuracy of 1.0 mm at all dwell points in three step sizes, and dwell-time errors with an accuracy of 0.1% for planned times longer than 10.0 s. This system provides quick, highly accurate verification and quantification of the dwell position and time at various dwell positions, independent of the step size.

  19. Long-term neuromuscular training and ankle joint position sense.

    PubMed

    Kynsburg, A; Pánics, G; Halasi, T

    2010-06-01

    The preventive effect of proprioceptive training is proven by decreased injury incidence, but its proprioceptive mechanism is not. Major hypothesis: the training has a positive long-term effect on ankle joint position sense in athletes of a high-risk sport (handball). Ten elite-level female handball players formed the intervention group (training group), and 10 healthy athletes of other sports formed the control group. Proprioceptive training was incorporated into the regular training regimen of the training group. Ankle joint position sense was measured with the "slope-box" test, first described by Robbins et al. Testing was performed one day before the intervention and 20 months later. Mean absolute estimate errors were processed for statistical analysis. Proprioceptive sensory function improved in all four directions with high significance (p<0.0001; average mean estimate error improvement: 1.77 degrees). The improvement was also highly significant (p ≤ 0.0002) in each single direction, with average mean estimate error improvements between 1.59 degrees (posterior) and 2.03 degrees (anterior). Mean absolute estimate errors at follow-up (2.24 ± 0.88 degrees) were significantly lower than in uninjured controls (3.29 ± 1.15 degrees) (p<0.0001). Long-term neuromuscular training improved ankle joint position sense in the investigated athletes. This improvement in joint position sense may be one explanation for the injury-rate-reducing effect of neuromuscular training.

  20. The statistical properties and possible causes of polar motion prediction errors

    NASA Astrophysics Data System (ADS)

    Kosek, Wieslaw; Kalarus, Maciej; Wnek, Agnieszka; Zbylut-Gorska, Maria

    2015-08-01

    The pole coordinate predictions from different contributors to the Earth Orientation Parameters Combination of Prediction Pilot Project (EOPCPPP) were studied to determine the statistical properties of polar motion forecasts, by examining the time series of differences between the predictions and the future IERS pole coordinate data. The mean absolute errors, standard deviations, skewness and kurtosis of these differences were computed, together with their error bars, as a function of prediction length. The ensemble predictions show slightly smaller mean absolute errors and standard deviations; however, their skewness and kurtosis values are similar to those of the predictions from individual contributors. The skewness and kurtosis make it possible to check whether these prediction differences follow a normal distribution. The kurtosis values diminish with prediction length, which means that the probability distribution of the prediction differences becomes more platykurtic than leptokurtic. Non-zero skewness values result from the oscillating character of these differences at particular prediction lengths, which may be due to the irregular change of the annual oscillation phase in the joint fluid (atmospheric + ocean + land hydrology) excitation functions. The variations of the annual oscillation phase, computed by a combination of the Fourier transform band-pass filter and the Hilbert transform from pole coordinate data as well as from pole coordinate model data obtained from fluid excitations, are in good agreement.
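
    The skewness and excess-kurtosis statistics used here to test prediction differences for normality follow directly from their moment definitions; a minimal sketch with synthetic data:

```python
import numpy as np

def skewness(x):
    """Third standardized moment (0 for a symmetric distribution)."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    return float((d**3).mean() / x.std()**3)

def excess_kurtosis(x):
    """Fourth standardized moment minus 3 (0 for a normal distribution;
    negative = platykurtic, positive = leptokurtic)."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    return float((d**4).mean() / x.std()**4 - 3.0)

rng = np.random.default_rng(0)
normal = rng.normal(size=100_000)
heavy = rng.standard_t(df=5, size=100_000)   # heavy-tailed: leptokurtic

print(abs(skewness(normal)) < 0.05)          # True: near zero for Gaussian draws
print(abs(excess_kurtosis(normal)) < 0.1)    # True
print(excess_kurtosis(heavy) > 1.0)          # True: clearly leptokurtic
```

    Kurtosis values drifting toward negative excess with prediction length, as reported above, would indicate a flattening (platykurtic) error distribution.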

  1. An Improved Computational Method for the Calculation of Mixture Liquid-Vapor Critical Points

    NASA Astrophysics Data System (ADS)

    Dimitrakopoulos, Panagiotis; Jia, Wenlong; Li, Changjun

    2014-05-01

    Knowledge of critical points is important to determine the phase behavior of a mixture. This work proposes a reliable and accurate method in order to locate the liquid-vapor critical point of a given mixture. The theoretical model is developed from the rigorous definition of critical points, based on the SRK equation of state (SRK EoS) or alternatively, on the PR EoS. In order to solve the resulting system of nonlinear equations, an improved method is introduced into an existing Newton-Raphson algorithm, which can calculate all the variables simultaneously in each iteration step. The improvements mainly focus on the derivatives of the Jacobian matrix, on the convergence criteria, and on the damping coefficient. As a result, all equations and related conditions required for the computation of the scheme are illustrated in this paper. Finally, experimental data for the critical points of 44 mixtures are adopted in order to validate the method. For the SRK EoS, average absolute errors of the predicted critical-pressure and critical-temperature values are 123.82 kPa and 3.11 K, respectively, whereas the commercial software package Calsep PVTSIM's prediction errors are 131.02 kPa and 3.24 K. For the PR EoS, the two above mentioned average absolute errors are 129.32 kPa and 2.45 K, while the PVTSIM's errors are 137.24 kPa and 2.55 K, respectively.
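
    The paper's specific scheme is not reproduced here, but the general pattern of a damped Newton-Raphson iteration for a nonlinear system, where the full step is scaled back by a damping coefficient until the residual shrinks, can be sketched on a toy system standing in for the EoS criticality conditions:

```python
import numpy as np

def damped_newton(F, J, x0, tol=1e-10, max_iter=50):
    """Damped Newton-Raphson for a system F(x) = 0. At each iteration
    the Newton step is halved until the residual norm decreases,
    a common safeguard similar in spirit to the damping coefficient
    discussed in the abstract."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        step = np.linalg.solve(J(x), -f)   # solve all variables simultaneously
        lam = 1.0
        while lam > 1e-4 and np.linalg.norm(F(x + lam * step)) >= np.linalg.norm(f):
            lam *= 0.5                     # damp until the residual shrinks
        x = x + lam * step
    return x

# Toy system: x^2 + y^2 = 4 and x*y = 1 (hypothetical, for illustration)
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [v[1], v[0]]])
sol = damped_newton(F, J, [2.0, 0.3])
print(np.allclose(F(sol), 0.0, atol=1e-8))  # True
```

    In the real application the Jacobian entries are derivatives of the critical-point conditions under the SRK or PR EoS, which is where most of the implementation effort lies.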

  2. Absolute method of measuring magnetic susceptibility

    USGS Publications Warehouse

    Thorpe, A.; Senftle, F.E.

    1959-01-01

    An absolute method of standardization and measurement of the magnetic susceptibility of small samples is presented which can be applied to most techniques based on the Faraday method. The fact that the susceptibility is a function of the area under the curve of sample displacement versus distance of the magnet from the sample offers a simple method of measuring the susceptibility without recourse to a standard sample. Typical results on a few substances are compared with reported values, and an error of less than 2% can be achieved. © 1959 The American Institute of Physics.
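
    Since the susceptibility is a function of the area under the displacement-versus-distance curve, the data reduction amounts to numerical integration of sampled readings; a minimal sketch (readings and units are hypothetical, and the apparatus-specific proportionality constant is omitted):

```python
def area_under_curve(xs, ys):
    """Trapezoidal estimate of the area under a sampled curve,
    as needed when susceptibility is proportional to the area under
    the displacement-versus-distance curve."""
    return sum((x1 - x0) * (y0 + y1) / 2.0
               for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])))

# Hypothetical displacement readings at increasing magnet distances
distance_mm = [0.0, 1.0, 2.0, 3.0, 4.0]
displacement_um = [0.0, 4.0, 6.0, 4.0, 0.0]
print(area_under_curve(distance_mm, displacement_um))  # 14.0
```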

  3. Using lean "automation with a human touch" to improve medication safety: a step closer to the "perfect dose".

    PubMed

    Ching, Joan M; Williams, Barbara L; Idemoto, Lori M; Blackmore, C Craig

    2014-08-01

    Virginia Mason Medical Center (Seattle) employed the Lean concept of Jidoka (automation with a human touch) to plan for and deploy bar code medication administration (BCMA) to hospitalized patients. Integrating BCMA technology into the nursing work flow with minimal disruption was accomplished using three steps of Jidoka: (1) assigning work to humans and machines on the basis of their differing abilities, (2) adapting machines to the human work flow, and (3) monitoring the human-machine interaction. Effectiveness of BCMA to both reinforce safe administration practices and reduce medication errors was measured using the Collaborative Alliance for Nursing Outcomes (CALNOC) Medication Administration Accuracy Quality Study methodology. Trained nurses observed a total of 16,149 medication doses for 3,617 patients in a three-year period. Following BCMA implementation, the number of safe practice violations decreased from 54.8 violations/100 doses (January 2010-September 2011) to 29.0 violations/100 doses (October 2011-December 2012), resulting in an absolute risk reduction of 25.8 violations/100 doses (95% confidence interval [CI]: 23.7, 27.9, p < .001). The number of medication errors decreased from 5.9 errors/100 doses at baseline to 3.0 errors/100 doses after BCMA implementation (absolute risk reduction: 2.9 errors/100 doses [95% CI: 2.2, 3.6, p < .001]). The number of unsafe administration practices (estimate, -5.481; standard error 1.133; p < .001; 95% CI: -7.702, -3.260) also decreased. As more hospitals respond to health information technology meaningful use incentives, thoughtful, methodical, and well-managed approaches to technology deployment are crucial. This work illustrates how Jidoka offers opportunities for a smooth transition to new technology.

  4. Ultrasonographic Fetal Weight Estimation: Should Macrosomia-Specific Formulas Be Utilized?

    PubMed

    Porter, Blake; Neely, Cherry; Szychowski, Jeff; Owen, John

    2015-08-01

    This study aims to derive an estimated fetal weight (EFW) formula in macrosomic fetuses, compare its accuracy to the 1986 Hadlock IV formula, and assess whether including maternal diabetes (MDM) improves estimation. Retrospective review of nonanomalous live-born singletons with birth weight (BWT) ≥ 4 kg and biometry within 14 days of birth. Formula accuracy measures included: (1) mean error (ME = EFW - BWT), (2) absolute mean error (AME = absolute value of [1]), and (3) mean percent error (MPE = [1]/BWT × 100%). Using the natural log of BWT as the dependent variable, multivariable linear regression produced a macrosomia-specific formula in a "training" dataset, which was verified with "validation" data. Formulas specific for MDM were also developed. Among the 403 pregnancies, birth gestational age was 39.5 ± 1.4 weeks and median BWT was 4,240 g. The macrosomic formula from the training data (n = 201) had associated ME = 54 ± 284 g, AME = 234 ± 167 g, and MPE = 1.6 ± 6.2%; evaluation in the validation dataset (n = 202) showed similar errors. The Hadlock formula had associated ME = -369 ± 422 g, AME = 451 ± 332 g, and MPE = -8.3 ± 9.3% (all p < 0.0001). Diabetes-specific formula errors were similar to the macrosomic formula errors (all p = NS). With BWT ≥ 4 kg, the macrosomic formula was significantly more accurate than Hadlock IV, which systematically underestimates fetal weight/BWT. Diabetes-specific formulas did not improve accuracy. A macrosomia-specific formula should be considered when macrosomia is suspected.
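
    The formula-derivation step, multivariable linear regression with the natural log of birth weight as the dependent variable followed by back-transformation, can be sketched with synthetic data; the predictors and coefficients below are purely illustrative and are not the paper's macrosomic formula:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Hypothetical biometric predictors (arbitrary units), not real measurements
ac = rng.uniform(30, 40, n)   # abdominal circumference
fl = rng.uniform(6, 8, n)     # femur length
logw = 6.0 + 0.04 * ac + 0.12 * fl + rng.normal(0, 0.02, n)

# Multivariable linear regression with log(weight) as the dependent variable
X = np.column_stack([np.ones(n), ac, fl])
coef, *_ = np.linalg.lstsq(X, logw, rcond=None)

pred = np.exp(X @ coef)       # back-transform predictions to grams
bwt = np.exp(logw)
me = np.mean(pred - bwt)                    # mean error (signed)
ame = np.mean(np.abs(pred - bwt))           # absolute mean error
mpe = np.mean((pred - bwt) / bwt) * 100.0   # mean percent error

print(abs(coef[1] - 0.04) < 0.01)  # True: recovered slope close to truth
print(ame >= abs(me))              # True: AME always bounds |ME|
```

    The ME/AME/MPE trio computed at the end mirrors the accuracy measures defined in the abstract.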

  5. Evaluation of a new electronic preoperative reference marker for toric intraocular lens implantation by two different methods of analysis: Adobe Photoshop versus iTrace.

    PubMed

    Farooqui, Javed Hussain; Sharma, Mansi; Koul, Archana; Dutta, Ranjan; Shroff, Noshir Minoo

    2017-01-01

    The aim of this study was to compare two different methods of analysis of preoperative reference marking for toric intraocular lens (IOL) implantation after marking with an electronic marker. Cataract and IOL Implantation Service, Shroff Eye Centre, New Delhi, India. Fifty-two eyes of thirty patients planned for toric IOL implantation were included in the study. All patients had preoperative marking performed with an electronic two-step toric IOL reference marker (ASICO AE-2929). Reference marks were placed at the 3- and 9-o'clock positions. The marks were analyzed with two systems: first, slit-lamp photographs were taken and analyzed using Adobe Photoshop (version 7.0); second, the Tracey iTrace Visual Function Analyzer (version 5.1.1) was used to capture corneal topography examinations and note the position of the marks. The amount of alignment error was calculated. Mean absolute rotation error was 2.38 ± 1.78° by Photoshop and 2.87 ± 2.03° by iTrace, a difference that was not statistically significant (P = 0.215). Nearly 72.7% of eyes by Photoshop and 61.4% by iTrace had rotation error ≤3° (P = 0.359), and 90.9% of eyes by Photoshop and 81.8% by iTrace had rotation error ≤5° (P = 0.344). There was no significant difference in the absolute amount of rotation between eyes when analyzed by either method. The difference in reference mark positions when analyzed by the two systems suggests the presence of varying cyclotorsion at different points in time. Both analysis methods showed approximately 3° of alignment error, which could contribute to a 10% loss of the astigmatic correction of a toric IOL. This can be further compounded by intraoperative marking errors and the final placement of the IOL in the bag.
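
    The quoted figure of roughly 10% lost correction for ~3° of misalignment follows from the cross-cylinder result that an off-axis toric correction leaves a residual astigmatism of 2·C·sin(θ), where C is the cylinder power; a quick check:

```python
import math

def residual_astigmatism_fraction(rotation_deg):
    """Fraction of the toric IOL's cylinder power left uncorrected when
    the lens is rotated off-axis by rotation_deg degrees
    (cross-cylinder result: residual = 2*C*sin(theta))."""
    return 2.0 * math.sin(math.radians(rotation_deg))

# ~3 degrees of misalignment leaves about 10% of the astigmatism uncorrected
print(round(residual_astigmatism_fraction(3.0) * 100))   # 10
# 30 degrees of rotation wipes out the correction entirely
print(round(residual_astigmatism_fraction(30.0) * 100))  # 100
```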

  6. Effect of limbal marking prior to laser ablation on the magnitude of cyclotorsional error.

    PubMed

    Chen, Xiangjun; Stojanovic, Aleksandar; Stojanovic, Filip; Eidet, Jon Roger; Raeder, Sten; Øritsland, Haakon; Utheim, Tor Paaske

    2012-05-01

    To evaluate the residual registration error after limbal-marking-based manual adjustment in cyclotorsional tracker-controlled laser refractive surgery. Two hundred eyes undergoing custom surface ablation with the iVIS Suite (iVIS Technologies) were divided into limbal marked (marked) and non-limbal marked (unmarked) groups. Iris registration information was acquired preoperatively from all eyes. Preoperatively, the horizontal axis was recorded in the marked group for use in manual cyclotorsional alignment prior to surgical iris registration. During iris registration, the preoperative iris information was compared to the eye-tracker captured image. The magnitudes of the registration error angle and cyclotorsional movement during the subsequent laser ablation were recorded and analyzed. Mean magnitude of registration error angle (absolute value) was 1.82°±1.31° (range: 0.00° to 5.50°) and 2.90°±2.40° (range: 0.00° to 13.50°) for the marked and unmarked groups, respectively (P<.001). Mean magnitude of cyclotorsional movement during the laser ablation (absolute value) was 1.15°±1.34° (range: 0.00° to 7.00°) and 0.68°±0.97° (range: 0.00° to 6.00°) for the marked and unmarked groups, respectively (P=.005). Forty-six percent and 60% of eyes had registration error >2°, whereas 22% and 20% of eyes had cyclotorsional movement during ablation >2° in the marked and unmarked groups, respectively. Limbal-marking-based manual alignment prior to laser ablation significantly reduced cyclotorsional registration error. However, residual registration misalignment and cyclotorsional movements remained during ablation. Copyright 2012, SLACK Incorporated.

  7. A NEW METHOD TO QUANTIFY AND REDUCE THE NET PROJECTION ERROR IN WHOLE-SOLAR-ACTIVE-REGION PARAMETERS MEASURED FROM VECTOR MAGNETOGRAMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Falconer, David A.; Tiwari, Sanjiv K.; Moore, Ronald L.

    Projection errors limit the use of vector magnetograms of active regions (ARs) far from disk center. In this Letter, for ARs observed up to 60° from disk center, we demonstrate a method for measuring and reducing the projection error in the magnitude of any whole-AR parameter that is derived from a vector magnetogram that has been deprojected to disk center. The method assumes that the center-to-limb curve of the average of the parameter's absolute values, measured from the disk passage of a large number of ARs and normalized to each AR's absolute value of the parameter at central meridian, gives the average fractional projection error at each radial distance from disk center. To demonstrate the method, we use a large set of large-flux ARs and apply the method to a whole-AR parameter that is among the simplest to measure: whole-AR magnetic flux. We measure 30,845 SDO/Helioseismic and Magnetic Imager vector magnetograms covering the disk passage of 272 large-flux ARs, each having whole-AR flux >10²² Mx. We obtain the center-to-limb radial-distance run of the average projection error in measured whole-AR flux from a Chebyshev fit to the radial-distance plot of the 30,845 normalized measured values. The average projection error in the measured whole-AR flux of an AR at a given radial distance is removed by multiplying the measured flux by the correction factor given by the fit. The correction is important both for the study of the evolution of ARs and for improving the accuracy of forecasts of an AR's major flare/coronal mass ejection productivity.
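
    The correction step, fitting the center-to-limb falloff of the normalized measurements with a Chebyshev polynomial and then dividing it out, can be sketched with synthetic data; the falloff curve and noise level below are invented for illustration:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(3)
# Synthetic center-to-limb behaviour: measured flux, normalized to its
# central-meridian value, drops off with radial distance r (in units
# of the solar radius); values are illustrative only.
r = rng.uniform(0.0, 0.85, 500)
true_falloff = 1.0 - 0.4 * r**2
measured = true_falloff * (1.0 + rng.normal(0, 0.03, 500))

# Chebyshev fit of the average projection falloff over all samples
coeffs = C.chebfit(r, measured, deg=3)

def corrected_flux(measured_flux, radial_distance):
    """Remove the average projection error by dividing by the fitted
    center-to-limb curve (i.e. multiplying by a correction factor)."""
    return measured_flux / C.chebval(radial_distance, coeffs)

# A noiseless measurement at r = 0.8 is restored toward its disk-center value
raw = 1.0 - 0.4 * 0.8**2
print(abs(corrected_flux(raw, 0.8) - 1.0) < 0.05)  # True
```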

  8. MO-FG-303-04: A Smartphone Application for Automated Mechanical Quality Assurance of Medical Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, H; Lee, H; Choi, K

    Purpose: The mechanical quality assurance (QA) of medical accelerators consists of a time-consuming series of procedures. Since most of the procedures are done manually (e.g., checking the gantry rotation angle with the naked eye using a level attached to the gantry), the process has high potential for human error. To remove the possibility of human error and reduce the procedure duration, we developed a smartphone application for automated mechanical QA. Methods: The preparation for the automated process was done by attaching a smartphone to the gantry facing upward. For the assessment of gantry and collimator angle indications, motion sensors (gyroscope, accelerometer, and magnetic field sensor) embedded in the smartphone were used. For the assessment of the jaw position indicator, cross-hair centering, and optical distance indicator (ODI), an optical image-processing module using a picture taken by the high-resolution camera embedded in the smartphone was implemented. The application was developed with the Android software development kit (SDK) and the OpenCV library. Results: The system accuracies in terms of angle detection error and length detection error were < 0.1° and < 1 mm, respectively. The mean absolute errors for gantry and collimator rotation angles were 0.03° and 0.041°, respectively. The mean absolute error for the measured light field size was 0.067 cm. Conclusion: The automated system we developed can be used for the mechanical QA of medical accelerators with proven accuracy. For more convenient use of this application, a wireless communication module is under development. This system has strong potential for the automation of other QA procedures such as light/radiation field coincidence and couch translations/rotations.

  9. Effect of trabeculectomy on the accuracy of intraocular lens calculations in patients with open-angle glaucoma.

    PubMed

    Bae, Hyoung Won; Lee, Yun Ha; Kim, Do Wook; Lee, Taekjune; Hong, Samin; Seong, Gong Je; Kim, Chan Yun

    2016-08-01

    The objective of the study was to examine the effect of trabeculectomy on intraocular lens power calculations in patients with open-angle glaucoma (OAG) undergoing cataract surgery. The design was a retrospective data analysis of 55 eyes of 55 patients with OAG who had cataract surgery alone or in combination with trabeculectomy. We classified OAG subjects into the following groups based on surgical history: cataract surgery only (OC group), cataract surgery after prior trabeculectomy (CAT group), and cataract surgery performed in combination with trabeculectomy (CCT group). The main outcome measure was the difference between actual and predicted postoperative refractive error. The mean error (ME, the difference between postoperative and predicted spherical equivalent) in the CCT group was significantly lower (towards myopia) than that of the OC group (P = 0.008). Additionally, the mean absolute error (MAE, the absolute value of the ME) in the CAT group was significantly greater than in the OC group (P = 0.006). Using linear mixed models, the ME calculated with the SRK II formula was more accurate than the ME predicted by the SRK/T formula in the CAT (P = 0.032) and CCT (P = 0.035) groups. Intraocular lens power prediction accuracy was lower in the CAT and CCT groups than in the OC group. The prediction error was greater in the CAT group than in the OC group, and the direction of the prediction error tended to be towards myopia in the CCT group. The SRK II formula may be more accurate in predicting residual refractive error in the CAT and CCT groups. © 2016 Royal Australian and New Zealand College of Ophthalmologists.

  10. Application of the hybrid ANFIS models for long term wind power density prediction with extrapolation capability.

    PubMed

    Hossain, Monowar; Mekhilef, Saad; Afifi, Firdaus; Halabi, Laith M; Olatomiwa, Lanre; Seyedmahmoudian, Mehdi; Horan, Ben; Stojcevski, Alex

    2018-01-01

    In this paper, the suitability and performance of ANFIS (adaptive neuro-fuzzy inference system), ANFIS-PSO (particle swarm optimization), ANFIS-GA (genetic algorithm) and ANFIS-DE (differential evolution) have been investigated for the prediction of monthly and weekly wind power density (WPD) at four different locations in Malaysia: Mersing, Kuala Terengganu, Pulau Langkawi and Bayan Lepas. To this end, standalone ANFIS, ANFIS-PSO, ANFIS-GA and ANFIS-DE prediction algorithms were developed on the MATLAB platform. The performance of the proposed hybrid ANFIS models is determined by computing different statistical parameters such as mean absolute bias error (MABE), mean absolute percentage error (MAPE), root mean square error (RMSE) and coefficient of determination (R2). The results obtained from ANFIS-PSO and ANFIS-GA show higher performance and accuracy than the other models, and they can be suggested for practical application to predict monthly and weekly mean wind power density. In addition, the capability of the proposed hybrid ANFIS models to predict wind data for locations where measured wind data are not available is examined, and the results are compared with measured wind data from nearby stations.

  11. Time assignment system and its performance aboard the Hitomi satellite

    NASA Astrophysics Data System (ADS)

    Terada, Yukikatsu; Yamaguchi, Sunao; Sugimoto, Shigenobu; Inoue, Taku; Nakaya, Souhei; Murakami, Maika; Yabe, Seiya; Oshimizu, Kenya; Ogawa, Mina; Dotani, Tadayasu; Ishisaki, Yoshitaka; Mizushima, Kazuyo; Kominato, Takashi; Mine, Hiroaki; Hihara, Hiroki; Iwase, Kaori; Kouzu, Tomomi; Tashiro, Makoto S.; Natsukari, Chikara; Ozaki, Masanobu; Kokubun, Motohide; Takahashi, Tadayuki; Kawakami, Satoko; Kasahara, Masaru; Kumagai, Susumu; Angelini, Lorella; Witthoeft, Michael

    2018-01-01

    Fast timing capability in x-ray observations of astrophysical objects is one of the key properties of the ASTRO-H (Hitomi) mission. Absolute timing accuracies of 350 or 35 μs are required to achieve the nominal scientific goals or to study fast variabilities of specific sources, respectively. The satellite carries a GPS receiver to obtain accurate time information, which is distributed from the central onboard computer through a large and complex SpaceWire network. The details of the hardware and software design of the timing system are described. In the distribution of the time information, propagation delays and jitters affect the timing accuracy. Six other items identified within the timing system also contribute to the absolute time error. These error items were measured and checked on the ground to ensure that the time error budgets meet the mission requirements. The overall timing performance, in combination with the hardware performance, software algorithms, and orbital determination accuracies under nominal conditions, satisfies the mission requirement of 35 μs. This work demonstrates key points for space-use instruments in hardware and software design and in calibration measurements for fine timing accuracy on the order of microseconds for midsized satellites using the SpaceWire (IEEE 1355) network.

  12. Hybrid methodology for tuberculosis incidence time-series forecasting based on ARIMA and a NAR neural network.

    PubMed

    Wang, K W; Deng, C; Li, J P; Zhang, Y Y; Li, X Y; Wu, M C

    2017-04-01

    Tuberculosis (TB) affects people globally and is being reconsidered as a serious public health problem in China. Reliable forecasting is useful for the prevention and control of TB. This study proposes a hybrid model combining an autoregressive integrated moving average (ARIMA) model with a nonlinear autoregressive (NAR) neural network for forecasting the incidence of TB from January 2007 to March 2016. Prediction performance was compared between the hybrid model and the ARIMA model. The best-fit hybrid model combined an ARIMA (3,1,0) × (0,1,1)₁₂ model and a NAR neural network with four delays and 12 neurons in the hidden layer. The ARIMA-NAR hybrid model, which exhibited a lower mean square error, mean absolute error, and mean absolute percentage error of 0.2209, 0.1373, and 0.0406, respectively, in modelling performance, produced more accurate forecasts of TB incidence than the ARIMA model. This study shows that developing and applying the ARIMA-NAR hybrid model is an effective method to fit the linear and nonlinear patterns of time-series data, and this model could be helpful in the prevention and control of TB.
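
    The hybrid idea, letting a linear model capture the autoregressive core of the series while a nonlinear model mops up its residuals, can be sketched as follows; a least-squares AR fit and a cubic residual regression stand in for the paper's ARIMA model and NAR network:

```python
import numpy as np

def fit_ar(y, p):
    """Least-squares AR(p) fit; returns coefficients (constant first)."""
    rows = [y[t - p:t][::-1] for t in range(p, len(y))]
    X = np.column_stack([np.ones(len(rows)), np.array(rows)])
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return coef

def ar_predict(y, coef, p):
    rows = [y[t - p:t][::-1] for t in range(p, len(y))]
    X = np.column_stack([np.ones(len(rows)), np.array(rows)])
    return X @ coef

# Synthetic series: a linear AR core plus a nonlinear wrinkle
rng = np.random.default_rng(2)
n = 400
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.6 * y[t-1] - 0.2 * y[t-2] + 0.3 * np.sin(y[t-1]) + rng.normal(0, 0.1)

p = 2
coef = fit_ar(y, p)
linear_pred = ar_predict(y, coef, p)
resid = y[p:] - linear_pred          # stage 1: linear model's leftovers

# Stage 2: model the residuals nonlinearly (a cubic in the lagged value,
# standing in for the NAR neural network of the paper)
lag = y[p-1:-1]
B = np.column_stack([np.ones(len(lag)), lag, lag**2, lag**3])
rcoef, *_ = np.linalg.lstsq(B, resid, rcond=None)
hybrid_pred = linear_pred + B @ rcoef

mse_linear = np.mean(resid**2)
mse_hybrid = np.mean((y[p:] - hybrid_pred)**2)
print(mse_hybrid <= mse_linear)  # True: the fitted residual stage cannot raise in-sample MSE
```

    Out-of-sample gains, the comparison that actually matters in the study, depend on the residual model generalizing rather than overfitting.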

  13. Evaluating the performance of the Lee-Carter method and its variants in modelling and forecasting Malaysian mortality

    NASA Astrophysics Data System (ADS)

    Zakiyatussariroh, W. H. Wan; Said, Z. Mohammad; Norazan, M. R.

    2014-12-01

    This study investigated the performance of the Lee-Carter (LC) method and its variants in modelling and forecasting Malaysian mortality. These include the original LC method, the Lee-Miller (LM) variant and the Booth-Maindonald-Smith (BMS) variant. The methods were evaluated using Malaysian mortality data, measured as age-specific death rates (ASDR) for 1971 to 2009 for the overall population, while data for 1980-2009 were used in separate models for the male and female populations. The performance of the variants was examined in terms of the goodness of fit of the models and forecasting accuracy. Comparison was made based on several criteria, namely mean square error (MSE), root mean square error (RMSE), mean absolute deviation (MAD) and mean absolute percentage error (MAPE). The results indicate that the BMS method performed best in in-sample fitting, both for the overall population and when the models were fitted separately for the male and female populations. However, in the case of out-of-sample forecast accuracy, the BMS method was best only when the data were fitted to the overall population. When the data were fitted separately for males and females, LCnone performed better for the male population and the LM method for the female population.

  14. Improved tests of extra-dimensional physics and thermal quantum field theory from new Casimir force measurements

    NASA Astrophysics Data System (ADS)

    Decca, R. S.; Fischbach, E.; Klimchitskaya, G. L.; Krause, D. E.; López, D.; Mostepanenko, V. M.

    2003-12-01

    We report new constraints on extra-dimensional models and other physics beyond the standard model based on measurements of the Casimir force between two dissimilar metals for separations in the range 0.2 to 1.2 μm. The Casimir force between a Au-coated sphere and a Cu-coated plate of a microelectromechanical torsional oscillator was measured statically with an absolute error of 0.3 pN. In addition, the Casimir pressure between two parallel plates was determined dynamically with an absolute error of ≈0.6 mPa. Within the limits of experimental and theoretical errors, the results are in agreement with a theory that takes into account the finite conductivity and roughness of the two metals. The level of agreement between experiment and theory was then used to set limits on the predictions of extra-dimensional physics and thermal quantum field theory. It is shown that two theoretical approaches to the thermal Casimir force which predict effects linear in temperature are ruled out by these experiments. Finally, constraints on Yukawa corrections to Newton’s law of gravity are strengthened by more than an order of magnitude in the range 56 to 330 nm.

  15. Model assessment using a multi-metric ranking technique

    NASA Astrophysics Data System (ADS)

    Fitzpatrick, P. J.; Lau, Y.; Alaka, G.; Marks, F.

    2017-12-01

    Validation comparisons of multiple models present challenges when skill levels are similar, especially in regimes dominated by the climatological mean. Assessing skill separation requires advanced validation metrics that identify adeptness in extreme events, yet the assessment must remain simple enough for management decisions; flexibility for operations is also an asset. This work postulates a weighted tally and consolidation technique that ranks results by multiple types of metrics. Variables include absolute error, bias, acceptable absolute error percentages, outlier metrics, model efficiency, Pearson correlation, Kendall's tau, reliability index, multiplicative gross error, and root mean squared differences. Other metrics, such as root mean square difference and rank correlation, were also explored but removed when their information was found to be generally duplicative of the other metrics. While equal weights are applied here, the weights could be altered to favor preferred metrics. Two examples are shown comparing ocean models' currents and tropical cyclone products, including experimental products. The importance of using magnitude and direction for tropical cyclone track forecasts, instead of distance, along-track, and cross-track errors, is discussed. Tropical cyclone intensity and structure prediction are also assessed. Vector correlations are not included in the ranking process but were found useful in an independent context and will be briefly reported.
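
    A minimal sketch of the weighted rank-tally consolidation described above; the metric names, scores, and equal weights are illustrative, not the study's configuration:

```python
import numpy as np

def rank_models(scores, higher_is_better, weights=None):
    """Consolidate several metrics into one ranking by tallying
    per-metric ranks (1 = best), optionally weighted.
    scores: dict metric -> list of values, one per model."""
    metrics = list(scores)
    n_models = len(next(iter(scores.values())))
    weights = weights or {m: 1.0 for m in metrics}
    tally = np.zeros(n_models)
    for m in metrics:
        vals = np.asarray(scores[m], dtype=float)
        order = np.argsort(-vals if higher_is_better[m] else vals)
        ranks = np.empty(n_models)
        ranks[order] = np.arange(1, n_models + 1)  # 1 = best on this metric
        tally += weights[m] * ranks
    return np.argsort(tally)  # model indices, best overall first

# Hypothetical validation scores for three models
scores = {
    "mae":  [0.21, 0.35, 0.28],   # lower is better
    "bias": [0.05, 0.02, 0.20],   # lower is better
    "corr": [0.91, 0.88, 0.70],   # higher is better
}
hib = {"mae": False, "bias": False, "corr": True}
print(rank_models(scores, hib))  # [0 1 2]: model 0 ranks best overall
```

    Dropping a duplicative metric, as the study did with rank correlation, simply removes one entry from the tally without changing the mechanics.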

  16. Absolute pitch among students at the Shanghai Conservatory of Music: a large-scale direct-test study.

    PubMed

    Deutsch, Diana; Li, Xiaonuo; Shen, Jing

    2013-11-01

    This paper reports a large-scale direct-test study of absolute pitch (AP) in students at the Shanghai Conservatory of Music. Overall note-naming scores were very high, with high scores correlating positively with early onset of musical training. Students who had begun training at age ≤5 yr scored 83% correct not allowing for semitone errors and 90% correct allowing for semitone errors. Performance levels were higher for white key pitches than for black key pitches. This effect was greater for orchestral performers than for pianists, indicating that it cannot be attributed to early training on the piano. Rather, accuracy in identifying notes of different names (C, C#, D, etc.) correlated with their frequency of occurrence in a large sample of music taken from the Western tonal repertoire. There was also an effect of pitch range, so that performance on tones in the two-octave range beginning on Middle C was higher than on tones in the octave below Middle C. In addition, semitone errors tended to be on the sharp side. The evidence also ran counter to the hypothesis, previously advanced by others, that the note A plays a special role in pitch identification judgments.

  18. Absolute vs. Weight-Related Maximum Oxygen Uptake in Firefighters: Fitness Evaluation with and without Protective Clothing and Self-Contained Breathing Apparatus among Age Group

    PubMed Central

    Perroni, Fabrizio; Guidetti, Laura; Cignitti, Lamberto; Baldari, Carlo

    2015-01-01

    During fire emergencies, firefighters wear personal protective clothing (PC) and a self-contained breathing apparatus (SCBA) to be protected from injuries. The purpose of this study was to investigate differences in the aerobic capacity of 197 firefighters (age: 34±7 yr; BMI: 24.4±2.3 kg·m⁻²), evaluated by the Queen's College Step Test (QCST) performed in the field with and without fire protective garments, and to analyze the differences among age groups (<25 yr, 26-30 yr, 31-35 yr, 36-40 yr and >40 yr). Analysis of variance was applied to assess differences (p < 0.05) between tests and age groups in absolute and weight-related values, while the correlation between the QCST with and without PC+SCBA was examined. The results showed that 13% of firefighters failed to complete the test with PC+SCBA, and that there were significant differences between the QCST performed with and without PC+SCBA in absolute (F(1,169) = 42.6, p < 0.0001) and weight-related (F(1,169) = 339.9, p < 0.0001) terms. A better correlation was found in L·min⁻¹ (r=0.67) than in ml·kg⁻¹·min⁻¹ (r=0.54). Moreover, significant differences were found among age groups in both absolute and weight-related values. The assessment of the maximum oxygen uptake of firefighters in absolute terms can be a useful tool for evaluating firefighters' cardiovascular strain. PMID:25764201
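
    The relation between weight-related and absolute maximum oxygen uptake can be sketched with the widely cited Queen's College Step Test equation for men (McArdle et al.); the study's exact estimation procedure may differ:

```python
def qcst_vo2max_rel(recovery_hr_bpm):
    """Relative VO2max (ml/kg/min) from the Queen's College Step Test
    recovery heart rate, using the widely cited equation for men
    (McArdle et al.): VO2max = 111.33 - 0.42 * HR."""
    return 111.33 - 0.42 * recovery_hr_bpm

def absolute_vo2max(rel_ml_kg_min, body_mass_kg):
    """Convert weight-related VO2max to absolute terms (L/min)."""
    return rel_ml_kg_min * body_mass_kg / 1000.0

rel = qcst_vo2max_rel(120)                   # recovery HR of 120 bpm
print(round(rel, 2))                         # 60.93 ml/kg/min
print(round(absolute_vo2max(rel, 80.0), 2))  # 4.87 L/min for an 80 kg subject
```

    Because the absolute value scales with body mass, the two expressions can rank the same firefighters differently, which is the comparison the study draws.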

  19. 36 CFR § 1192.4 - Miscellaneous instructions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... General § 1192.4 Miscellaneous instructions. (a) Dimensional conventions. Dimensions that are not noted as minimum or maximum are absolute. (b) Dimensional tolerances. All dimensions are subject to conventional...

  20. The importance of intra-hospital pharmacovigilance in the detection of medication errors

    PubMed

    Villegas, Francisco; Figueroa-Montero, David; Barbero-Becerra, Varenka; Juárez-Hernández, Eva; Uribe, Misael; Chávez-Tapia, Norberto; González-Chon, Octavio

    2018-01-01

    Hospitalized patients are susceptible to medication errors, which represent between the fourth and sixth leading causes of death. The department of intra-hospital pharmacovigilance intervenes in the entire medication process in order to prevent, repair and assess damage. The aim was to analyze medication errors reported by the pharmacovigilance system of the Mexican Fundación Clínica Médica Sur and their impact on patients. This prospective study was carried out from 2012 to 2015, during which medication prescriptions given to patients were recorded. Owing to heterogeneity, data were described as absolute numbers on a logarithmic scale. 292,932 prescriptions for 56,368 patients were analyzed, and medication errors were identified in 8.9% of them. The treating physician was responsible for 83.32% of medication errors, residents for 6.71% and interns for 0.09%. No error caused permanent damage or death. This is the pharmacovigilance study with the largest sample size reported. Copyright: © 2018 Secretaría de Salud.

  1. Finite element modelling versus classic beam theory: comparing methods for stress estimation in a morphologically diverse sample of vertebrate long bones

    PubMed Central

    Brassey, Charlotte A.; Margetts, Lee; Kitchener, Andrew C.; Withers, Philip J.; Manning, Phillip L.; Sellers, William I.

    2013-01-01

    Classic beam theory is frequently used in biomechanics to model the stress behaviour of vertebrate long bones, particularly when creating intraspecific scaling models. Although methodologically straightforward, classic beam theory requires complex irregular bones to be approximated as slender beams, and the errors associated with simplifying complex organic structures to such an extent are unknown. Alternative approaches, such as finite element analysis (FEA), while much more time-consuming to perform, require no such assumptions. This study compares the results obtained using classic beam theory with those from FEA to quantify the beam theory errors and to provide recommendations about when a full FEA is essential for reasonable biomechanical predictions. High-resolution computed tomographic scans of eight vertebrate long bones were used to calculate diaphyseal stress owing to various loading regimes. Under compression, FEA values of minimum principal stress (σmin) were on average 142 per cent (±28% s.e.) larger than those predicted by beam theory, with deviation between the two models correlated to shaft curvature (two-tailed p = 0.03, r2 = 0.56). Under bending, FEA values of maximum principal stress (σmax) and beam theory values differed on average by 12 per cent (±4% s.e.), with deviation between the models significantly correlated to cross-sectional asymmetry at midshaft (two-tailed p = 0.02, r2 = 0.62). In torsion, assuming maximum stress values occurred at the location of minimum cortical thickness brought beam theory and FEA values closest in line, and in this case FEA values of τtorsion were on average 14 per cent (±5% s.e.) higher than beam theory. Therefore, FEA is the preferred modelling solution when estimates of absolute diaphyseal stress are required, although values calculated by beam theory for bending may be acceptable in some situations. PMID:23173199

  2. Optimal design of the absolute positioning sensor for a high-speed maglev train and research on its fault diagnosis.

    PubMed

    Zhang, Dapeng; Long, Zhiqiang; Xue, Song; Zhang, Junge

    2012-01-01

    This paper studies an absolute positioning sensor for a high-speed maglev train and its fault diagnosis method. The absolute positioning sensor is an important sensor for the high-speed maglev train to accomplish synchronous traction: it is used to calibrate the error of the relative positioning sensor, which provides the magnetic phase signal. On the basis of an analysis of the principle of the absolute positioning sensor, the paper describes the design of the sending and receiving coils and realizes the hardware and software for the sensor. To enhance the reliability of the sensor, a support vector machine is used to recognize fault characteristics, and the signal flow method is used to locate the faulty parts. The diagnosis information can not only be sent to an upper central control computer to evaluate the reliability of the sensors, but also supports on-line diagnosis for debugging and quick detection when the maglev train is off-line. The absolute positioning sensor studied here has been used in an actual project.

  3. A system to use electromagnetic tracking for the quality assurance of brachytherapy catheter digitization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Damato, Antonio L., E-mail: adamato@lroc.harvard.edu; Viswanathan, Akila N.; Don, Sarah M.

    2014-10-15

Purpose: To investigate the use of a system using electromagnetic tracking (EMT), post-processing and an error-detection algorithm for detecting errors and resolving uncertainties in high-dose-rate brachytherapy catheter digitization for treatment planning. Methods: EMT was used to localize 15 catheters inserted into a phantom using a stepwise acquisition technique. Five distinct acquisition experiments were performed. Noise associated with the acquisition was calculated. The dwell location configuration was extracted from the EMT data. A CT scan of the phantom was performed, and five distinct catheter digitization sessions were performed. No a priori registration of the CT scan coordinate system with the EMT coordinate system was performed. CT-based digitization was automatically extracted from the brachytherapy plan DICOM files (CT), and rigid registration was performed between EMT and CT dwell positions. EMT registration error was characterized in terms of the mean and maximum distance between corresponding EMT and CT dwell positions per catheter. An algorithm for error detection and identification was presented. Three types of errors were systematically simulated: swap of two catheter numbers, partial swap of catheter number identification for parts of the catheters (mix), and catheter-tip shift. Error-detection sensitivity (number of simulated scenarios correctly identified as containing an error/number of simulated scenarios containing an error) and specificity (number of scenarios correctly identified as not containing errors/number of correct scenarios) were calculated. Catheter identification sensitivity (number of catheters correctly identified as erroneous across all scenarios/number of erroneous catheters across all scenarios) and specificity (number of catheters correctly identified as correct across all scenarios/number of correct catheters across all scenarios) were calculated. The mean detected and identified shift was calculated.
Results: The maximum noise ±1 standard deviation associated with the EMT acquisitions was 1.0 ± 0.1 mm, and the mean noise was 0.6 ± 0.1 mm. Registration of all the EMT and CT dwell positions was associated with a mean catheter error of 0.6 ± 0.2 mm, a maximum catheter error of 0.9 ± 0.4 mm, a mean dwell error of 1.0 ± 0.3 mm, and a maximum dwell error of 1.3 ± 0.7 mm. Error detection and catheter identification sensitivity and specificity of 100% were observed for swap, mix and shift (≥2.6 mm for error detection; ≥2.7 mm for catheter identification) errors. A mean detected shift of 1.8 ± 0.4 mm and a mean identified shift of 1.9 ± 0.4 mm were observed. Conclusions: Registration of the EMT dwell positions to the CT dwell positions was possible with a residual mean error per catheter of 0.6 ± 0.2 mm and a maximum error for any dwell of 1.3 ± 0.7 mm. These low residual registration errors show that quality assurance of the general characteristics of the catheters and of possible errors affecting one specific dwell position is possible. The sensitivity and specificity of the catheter digitization verification algorithm were 100% for swap and mix errors and for shifts ≥2.6 mm. On average, shifts ≥1.8 mm were detected, and shifts ≥1.9 mm were detected and identified.
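The sensitivity and specificity definitions given in parentheses in the Methods reduce to simple counting; a minimal sketch (function and argument names are illustrative):

```python
def sensitivity_specificity(has_error, flagged):
    """Sensitivity and specificity from paired booleans: has_error[i] says
    whether scenario i truly contains an error, flagged[i] whether the
    detection algorithm reported one."""
    tp = sum(1 for t, d in zip(has_error, flagged) if t and d)          # true positives
    tn = sum(1 for t, d in zip(has_error, flagged) if not t and not d)  # true negatives
    n_err = sum(1 for t in has_error if t)
    n_ok = len(has_error) - n_err
    return tp / n_err, tn / n_ok
```

The same counting applies at the per-catheter level for the catheter identification figures.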

  4. Quantifying uncertainty in coral Sr/Ca-based SST estimates from Orbicella faveolata: A basis for multi-colony SST reconstructions

    NASA Astrophysics Data System (ADS)

    Richey, J. N.; Flannery, J. A.; Toth, L. T.; Kuffner, I. B.; Poore, R. Z.

    2017-12-01

    The Sr/Ca in massive corals can be used as a proxy for sea surface temperature (SST) in shallow tropical to sub-tropical regions; however, the relationship between Sr/Ca and SST varies throughout the ocean, between different species of coral, and often between different colonies of the same species. We aimed to quantify the uncertainty associated with the Sr/Ca-SST proxy due to sample handling (e.g., micro-drilling or analytical error), vital effects (e.g., among-colony differences in coral growth), and local-scale variability in microhabitat. We examine the intra- and inter-colony reproducibility of Sr/Ca records extracted from five modern Orbicella faveolata colonies growing in the Dry Tortugas, Florida, USA. The average intra-colony absolute difference (AD) in Sr/Ca of the five colonies during an overlapping interval (1997-2008) was 0.055 ± 0.044 mmol mol-1 (0.96 ºC) and the average inter-colony Sr/Ca AD was 0.039 ± 0.01 mmol mol-1 (0.51 ºC). All available Sr/Ca-SST data pairs from 1997-2008 were combined and regressed against the HadISST1 gridded SST data set (24 ºN and 82 ºW) to produce a calibration equation that could be applied to O. faveolata specimens from throughout the Gulf of Mexico/Caribbean/Atlantic region after accounting for the potential uncertainties in Sr/Ca-derived SSTs. We quantified a combined error term for O. faveolata using the root-sum-square (RMS) of the analytical, intra-, and inter-colony uncertainties and suggest that an overall uncertainty of 0.046 mmol mol-1 (0.81 ºC, 1σ), should be used to interpret Sr/Ca records from O. faveolata specimens of unknown age or origin to reconstruct SST. We also explored how uncertainty is affected by the number of corals used in a reconstruction by iteratively calculating the RMS error for composite coral time-series using two, three, four, and five overlapping coral colonies. 
Our results indicate that maximum RMS error at the 95% confidence interval on mean annual SST estimates is 1.4 ºC when a composite record is made from only two overlapping coral Sr/Ca records. The uncertainty decreases as additional coral Sr/Ca data are added, with a maximum RMS error of 0.5 ºC on mean annual SST for a five-colony composite. To reduce uncertainty to under 1 ºC, it is best to use Sr/Ca from three or more coral colonies from the same geographic location and time period.
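The root-sum-square combination of uncertainty terms described above, and the reduction gained by averaging n overlapping colonies, can be sketched as follows. The 1/sqrt(n) scaling assumes fully independent errors and is an idealization of the iterative resampling the authors actually performed; function names are illustrative:

```python
import math

def combined_sigma(analytical, intra_colony, inter_colony):
    """Root-sum-square combination of independent 1-sigma uncertainty terms."""
    return math.sqrt(analytical ** 2 + intra_colony ** 2 + inter_colony ** 2)

def composite_sigma(single_colony_sigma, n_colonies):
    """Idealized uncertainty of an n-colony composite, assuming independence."""
    return single_colony_sigma / math.sqrt(n_colonies)
```

The idealized scaling reproduces the qualitative finding that uncertainty shrinks as more overlapping colonies are composited.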

  5. 19 CFR 351.224 - Disclosure of calculations and procedures for the correction of ministerial errors.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... least five absolute percentage points in, but not less than 25 percent of, the weighted-average dumping... margin or countervailable subsidy rate (whichever is applicable) of zero (or de minimis) and a weighted...

  6. 19 CFR 351.224 - Disclosure of calculations and procedures for the correction of ministerial errors.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... least five absolute percentage points in, but not less than 25 percent of, the weighted-average dumping... margin or countervailable subsidy rate (whichever is applicable) of zero (or de minimis) and a weighted...

  7. 19 CFR 351.224 - Disclosure of calculations and procedures for the correction of ministerial errors.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... least five absolute percentage points in, but not less than 25 percent of, the weighted-average dumping... margin or countervailable subsidy rate (whichever is applicable) of zero (or de minimis) and a weighted...

  8. 19 CFR 351.224 - Disclosure of calculations and procedures for the correction of ministerial errors.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... least five absolute percentage points in, but not less than 25 percent of, the weighted-average dumping... margin or countervailable subsidy rate (whichever is applicable) of zero (or de minimis) and a weighted...

  9. 19 CFR 351.224 - Disclosure of calculations and procedures for the correction of ministerial errors.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... least five absolute percentage points in, but not less than 25 percent of, the weighted-average dumping... margin or countervailable subsidy rate (whichever is applicable) of zero (or de minimis) and a weighted...

  10. Method for the fabrication error calibration of the CGH used in the cylindrical interferometry system

    NASA Astrophysics Data System (ADS)

    Wang, Qingquan; Yu, Yingjie; Mou, Kebing

    2016-10-01

    This paper presents a method for absolutely calibrating the fabrication error of the CGH in a cylindrical interferometry system for the measurement of cylindricity error. First, a simulated experimental system is set up in ZEMAX. On one hand, the simulated experimental system demonstrates the feasibility of the proposed method. On the other hand, by varying the position of the mirror in the simulated experimental system, a misalignment aberration map, consisting of the interferograms at the different positions, is acquired; it can serve as a reference for experimental adjustment in the real system. Second, the mathematical polynomial, which describes the relationship between the misalignment aberrations and the possible misalignment errors, is discussed.

  11. Analysis of a range estimator which uses MLS angle measurements

    NASA Technical Reports Server (NTRS)

    Downing, David R.; Linse, Dennis

    1987-01-01

    A concept that uses the azimuth signal from a microwave landing system (MLS) combined with onboard airspeed and heading data to estimate the horizontal range to the runway threshold is investigated. The absolute range error is evaluated for trajectories typical of General Aviation (GA) and commercial airline operations (CAO). These include constant intercept angles for GA and CAO, and complex curved trajectories for CAO. It is found that, for GA operations, range errors of 4000 to 6000 feet at entry into MLS coverage, reducing to 1000 feet at runway centerline intercept, are possible. For CAO, errors of 2000 feet at entry into MLS coverage, reducing to 300 feet at runway centerline interception, are possible.

  12. An online detection system for aggregate sizes and shapes based on digital image processing

    NASA Astrophysics Data System (ADS)

    Yang, Jianhong; Chen, Sijia

    2017-02-01

    Traditional aggregate size measuring methods are time-consuming, laborious, and do not deliver online measurements. A new online detection system for determining aggregate size and shape, based on a digital camera with a charge-coupled device and subsequent digital image processing, has been developed to overcome these problems. The system captures images of aggregates while falling and flat lying. Using these data, the particle size and shape distribution can be obtained in real time. Here, we calibrate this method using standard globules. Our experiments show that the maximum particle size distribution error was only 3 wt%, while the maximum particle shape distribution error was only 2 wt% for data derived from falling aggregates, which were well dispersed. In contrast, the data for flat-lying aggregates had a maximum particle size distribution error of 12 wt% and a maximum particle shape distribution error of 10 wt%; their accuracy was clearly lower than for falling aggregates. However, flat-lying measurements performed well for single-graded aggregates and did not require a dispersion device. Our system is low-cost and easy to install. It successfully achieves online detection of aggregate size and shape with good reliability, and it has great potential for aggregate quality assurance.

  13. High variability in strain estimation errors when using a commercial ultrasound speckle tracking algorithm on tendon tissue.

    PubMed

    Fröberg, Åsa; Mårtensson, Mattias; Larsson, Matilda; Janerot-Sjöberg, Birgitta; D'Hooge, Jan; Arndt, Anton

    2016-10-01

    Ultrasound speckle tracking offers a non-invasive way of studying strain in the free Achilles tendon where no anatomical landmarks are available for tracking. This provides new possibilities for studying injury mechanisms during sport activity and the effects of shoes, orthotic devices, and rehabilitation protocols on tendon biomechanics. To investigate the feasibility of using a commercial ultrasound speckle tracking algorithm for assessing strain in tendon tissue. A polyvinyl alcohol (PVA) phantom, three porcine tendons, and a human Achilles tendon were mounted in a materials testing machine and loaded to 4% peak strain. Ultrasound long-axis cine-loops of the samples were recorded. Speckle tracking analysis of axial strain was performed using a commercial speckle tracking software. Estimated strain was then compared to reference strain known from the materials testing machine. Two frame rates and two region of interest (ROI) sizes were evaluated. Best agreement between estimated strain and reference strain was found in the PVA phantom (absolute error in peak strain: 0.21 ± 0.08%). The absolute error in peak strain varied between 0.72 ± 0.65% and 10.64 ± 3.40% in the different tendon samples. Strain determined with a frame rate of 39.4 Hz had lower errors than 78.6 Hz as was the case with a 22 mm compared to an 11 mm ROI. Errors in peak strain estimation showed high variability between tendon samples and were large in relation to strain levels previously described in the Achilles tendon. © The Foundation Acta Radiologica 2016.

  14. Effective connectivity associated with auditory error detection in musicians with absolute pitch

    PubMed Central

    Parkinson, Amy L.; Behroozmand, Roozbeh; Ibrahim, Nadine; Korzyukov, Oleg; Larson, Charles R.; Robin, Donald A.

    2014-01-01

    It is advantageous to study a wide range of vocal abilities in order to fully understand how vocal control measures vary across the full spectrum. Individuals with absolute pitch (AP) are able to assign a verbal label to musical notes and have enhanced abilities in pitch identification without reliance on an external referent. In this study we used dynamic causal modeling (DCM) to model effective connectivity of ERP responses to pitch perturbation in voice auditory feedback in musicians with relative pitch (RP), AP, and non-musician controls. We identified a network comprising left and right hemisphere superior temporal gyrus (STG), primary motor cortex (M1), and premotor cortex (PM). We specified nine models and compared two main factors examining various combinations of STG involvement in the feedback pitch error detection/correction process. Our results suggest that modulation of left-to-right STG connections is important in the identification of self-voice error and sensorimotor integration in AP musicians. We also identify reduced connectivity of left hemisphere PM-to-STG connections in the AP and RP groups during the error detection and correction process relative to non-musicians. We suggest that this suppression may allow for enhanced connectivity relating to pitch identification in the right hemisphere in those with more precise pitch-matching abilities. Musicians with enhanced pitch identification abilities likely have an improved auditory error detection and correction system involving connectivity of STG regions. Our findings also suggest that individuals with AP are more adept at using pitch-related feedback from the right hemisphere. PMID:24634644

  15. A novel geometry-dosimetry label fusion method in multi-atlas segmentation for radiotherapy: a proof-of-concept study

    NASA Astrophysics Data System (ADS)

    Chang, Jina; Tian, Zhen; Lu, Weiguo; Gu, Xuejun; Chen, Mingli; Jiang, Steve B.

    2017-05-01

    Multi-atlas segmentation (MAS) has been widely used to automate the delineation of organs at risk (OARs) for radiotherapy. Label fusion is a crucial step in MAS to cope with the segmentation variability among multiple atlases. However, most existing label fusion methods do not consider the potential dosimetric impact of the segmentation result. In this proof-of-concept study, we propose a novel geometry-dosimetry label fusion method for MAS-based OAR auto-contouring, which evaluates segmentation performance in terms of both geometric accuracy and the dosimetric impact of segmentation accuracy on the resulting treatment plan. Unlike the original selective and iterative method for performance level estimation (SIMPLE), we evaluated and rejected the atlases based on both the Dice similarity coefficient and the predicted error of the dosimetric endpoints. The dosimetric error was predicted using our previously developed geometry-dosimetry model. We tested our method in MAS-based rectum auto-contouring on 20 prostate cancer patients. The accuracy in the rectum sub-volume close to the planning target volume (PTV), which was found to be a dosimetrically sensitive region of the rectum, was greatly improved. The mean absolute distance between the obtained contour and the physician-drawn contour in the rectum sub-volume 2 mm away from the PTV was reduced from 3.96 mm to 3.36 mm on average for the 20 patients, with the maximum decrease being from 9.22 mm to 3.75 mm. We also compared the dosimetric endpoints predicted for the obtained contours with those predicted for the physician-drawn contours. Our method led to smaller dosimetric endpoint errors than the SIMPLE method in 15 patients, comparable errors in 2 patients, and slightly larger errors in 3 patients. These results indicate the efficacy of our method in considering both geometric accuracy and dosimetric impact during label fusion.
Our algorithm can be applied to different tumor sites and radiation treatments, given a specifically trained geometry-dosimetry model.
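The Dice similarity coefficient used above for atlas evaluation has a compact definition; a minimal sketch over voxel-index collections (illustrative, not the authors' implementation):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two segmentations given as
    collections of voxel indices: 2*|A & B| / (|A| + |B|)."""
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0  # two empty masks agree perfectly by convention
    return 2.0 * len(a & b) / (len(a) + len(b))
```

A coefficient of 1.0 indicates identical masks and 0.0 indicates no overlap, which is why it pairs naturally with a separate dosimetric-error criterion.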

  16. A novel geometry-dosimetry label fusion method in multi-atlas segmentation for radiotherapy: a proof-of-concept study.

    PubMed

    Chang, Jina; Tian, Zhen; Lu, Weiguo; Gu, Xuejun; Chen, Mingli; Jiang, Steve B

    2017-05-07

    Multi-atlas segmentation (MAS) has been widely used to automate the delineation of organs at risk (OARs) for radiotherapy. Label fusion is a crucial step in MAS to cope with the segmentation variability among multiple atlases. However, most existing label fusion methods do not consider the potential dosimetric impact of the segmentation result. In this proof-of-concept study, we propose a novel geometry-dosimetry label fusion method for MAS-based OAR auto-contouring, which evaluates segmentation performance in terms of both geometric accuracy and the dosimetric impact of segmentation accuracy on the resulting treatment plan. Unlike the original selective and iterative method for performance level estimation (SIMPLE), we evaluated and rejected the atlases based on both the Dice similarity coefficient and the predicted error of the dosimetric endpoints. The dosimetric error was predicted using our previously developed geometry-dosimetry model. We tested our method in MAS-based rectum auto-contouring on 20 prostate cancer patients. The accuracy in the rectum sub-volume close to the planning target volume (PTV), which was found to be a dosimetrically sensitive region of the rectum, was greatly improved. The mean absolute distance between the obtained contour and the physician-drawn contour in the rectum sub-volume 2 mm away from the PTV was reduced from 3.96 mm to 3.36 mm on average for the 20 patients, with the maximum decrease being from 9.22 mm to 3.75 mm. We also compared the dosimetric endpoints predicted for the obtained contours with those predicted for the physician-drawn contours. Our method led to smaller dosimetric endpoint errors than the SIMPLE method in 15 patients, comparable errors in 2 patients, and slightly larger errors in 3 patients. These results indicate the efficacy of our method in considering both geometric accuracy and dosimetric impact during label fusion.
Our algorithm can be applied to different tumor sites and radiation treatments, given a specifically trained geometry-dosimetry model.

  17. The trading time risks of stock investment in stock price drop

    NASA Astrophysics Data System (ADS)

    Li, Jiang-Cheng; Tang, Nian-Sheng; Mei, Dong-Cheng; Li, Yun-Xian; Zhang, Wan

    2016-11-01

    This article investigates the trading time risk (TTR) of stock investment during stock price drops, using Dow Jones Industrial Average (^DJI) and Hushen300 (CSI300) data, respectively. The escape time of the stock price from the maximum to the minimum within a data window length (DWL) is employed to measure the absolute TTR, and the ratio of the escape time to the data window length is defined as the relative TTR. Empirical probability density functions (PDFs) of the absolute and relative TTRs for the ^DJI and CSI300 data show that (i) as the DWL increases, the absolute TTR increases while the relative TTR decreases; (ii) there is monotonicity (or non-monotonicity) in the stability of the absolute (or relative) TTR; (iii) the PDF of the ratio has a single peak for shorter trading periods and two peaks for longer ones; and (iv) the trading days play opposite roles in the absolute (or relative) TTR and its stability between the ^DJI and CSI300 data.
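One plausible reading of the escape-time definition above (the number of time steps from the window's maximum price to the subsequent minimum) can be sketched as follows; this is an illustration of the definition, not the authors' estimation code:

```python
def trading_time_risk(prices, window):
    """Absolute TTR: steps from the window's maximum price to the subsequent
    minimum; relative TTR: that escape time divided by the window length.
    One plausible reading of the abstract's definition, for illustration."""
    w = list(prices[-window:])            # most recent `window` observations
    i_max = w.index(max(w))               # position of the window maximum
    tail = w[i_max:]                      # prices from the maximum onward
    i_min = i_max + tail.index(min(tail)) # subsequent minimum
    escape = i_min - i_max
    return escape, escape / window
```

With this reading, widening the window can only lengthen the maximum-to-minimum path (absolute TTR) while its share of the window (relative TTR) tends to shrink, matching finding (i).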

  18. The Use of Neural Networks in Identifying Error Sources in Satellite-Derived Tropical SST Estimates

    PubMed Central

    Lee, Yung-Hsiang; Ho, Chung-Ru; Su, Feng-Chun; Kuo, Nan-Jung; Cheng, Yu-Hsin

    2011-01-01

    A neural network data-mining model is used to identify error sources in satellite-derived tropical sea surface temperature (SST) estimates from thermal infrared sensors onboard the Geostationary Operational Environmental Satellite (GOES). Using the Back Propagation Network (BPN) algorithm, it is found that air temperature, relative humidity, and wind speed variation are the major factors causing the errors in GOES SST products in the tropical Pacific. The accuracy of the SST estimates is also improved by the model. The root mean square error (RMSE) for the daily SST estimate is reduced from 0.58 K to 0.38 K and the mean absolute percentage error (MAPE) is 1.03%. For the hourly mean SST estimate, the RMSE is likewise reduced from 0.66 K to 0.44 K and the MAPE is 1.3%. PMID:22164030

  19. Fast-match on particle swarm optimization with variant system mechanism

    NASA Astrophysics Data System (ADS)

    Wang, Yuehuang; Fang, Xin; Chen, Jie

    2018-03-01

    Fast-Match is a fast and effective algorithm for approximate template matching under 2D affine transformations, which can match the target with maximum similarity without prior knowledge of the target's pose. It relies on the minimum Sum-of-Absolute-Differences (SAD) error to obtain the best affine transformation. The algorithm is widely used in image matching because of its speed and robustness. In this paper, our approach is to search for an approximate affine transformation with a Particle Swarm Optimization (PSO) algorithm. We treat each potential transformation as a particle that possesses a memory function. Each particle is given a random velocity and flows through the 2D affine transformation space. To accelerate the algorithm and improve its ability to find the global optimum, we introduce a variant system mechanism on this basis. The benefit is that we avoid matching against a huge number of potential transformations and avoid falling into local optima, so that a few transformations suffice to approximate the optimal solution. The experimental results show that our method is faster and more accurate while searching a smaller affine transformation space.
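The SAD criterion that Fast-Match (and each PSO particle here) minimizes can be illustrated with a brute-force translation-only search; the full method searches the far larger affine space (rotation, scale, shear), and the function names below are illustrative:

```python
def sad(patch, template):
    """Sum of absolute differences between two equal-sized grayscale patches."""
    return sum(abs(p - t)
               for row_p, row_t in zip(patch, template)
               for p, t in zip(row_p, row_t))

def best_translation(image, template):
    """Exhaustively place the template at every translation and return the
    (score, (row, col)) pair with minimum SAD error."""
    th, tw = len(template), len(template[0])
    best = None
    for y in range(len(image) - th + 1):
        for x in range(len(image[0]) - tw + 1):
            patch = [row[x:x + tw] for row in image[y:y + th]]
            score = sad(patch, template)
            if best is None or score < best[0]:
                best = (score, (y, x))
    return best
```

PSO replaces the exhaustive loop with a swarm of candidate transformations that each remember their best SAD score so far, which is what makes evaluating only a few transformations viable.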

  20. Implementation of the incremental scheme for one-electron first-order properties in coupled-cluster theory.

    PubMed

    Friedrich, Joachim; Coriani, Sonia; Helgaker, Trygve; Dolg, Michael

    2009-10-21

    A fully automated parallelized implementation of the incremental scheme for coupled-cluster singles-and-doubles (CCSD) energies has been extended to treat molecular (unrelaxed) first-order one-electron properties such as the electric dipole and quadrupole moments. The convergence and accuracy of the incremental approach for the dipole and quadrupole moments have been studied for a variety of chemically interesting systems. It is found that the electric dipole moment can be obtained to within 5% and 0.5% accuracy with respect to the exact CCSD value at the third and fourth orders of the expansion, respectively. Furthermore, we find that the incremental expansion of the quadrupole moment converges to the exact result with increasing order of the expansion: the convergence of nonaromatic compounds is fast, with errors less than 16 mau and less than 1 mau at third and fourth orders, respectively (1 mau = 10^-3 e·a0^2); the aromatic compounds converge slowly, with maximum absolute deviations of 174 and 72 mau at third and fourth orders, respectively.

  1. TerraClimate, a high-resolution global dataset of monthly climate and climatic water balance from 1958-2015.

    PubMed

    Abatzoglou, John T; Dobrowski, Solomon Z; Parks, Sean A; Hegewisch, Katherine C

    2018-01-09

    We present TerraClimate, a dataset of high-spatial resolution (1/24°, ~4-km) monthly climate and climatic water balance for global terrestrial surfaces from 1958-2015. TerraClimate uses climatically aided interpolation, combining high-spatial resolution climatological normals from the WorldClim dataset, with coarser resolution time varying (i.e., monthly) data from other sources to produce a monthly dataset of precipitation, maximum and minimum temperature, wind speed, vapor pressure, and solar radiation. TerraClimate additionally produces monthly surface water balance datasets using a water balance model that incorporates reference evapotranspiration, precipitation, temperature, and interpolated plant extractable soil water capacity. These data provide important inputs for ecological and hydrological studies at global scales that require high spatial resolution and time varying climate and climatic water balance data. We validated spatiotemporal aspects of TerraClimate using annual temperature, precipitation, and calculated reference evapotranspiration from station data, as well as annual runoff from streamflow gauges. TerraClimate datasets showed noted improvement in overall mean absolute error and increased spatial realism relative to coarser resolution gridded datasets.

  2. TerraClimate, a high-resolution global dataset of monthly climate and climatic water balance from 1958-2015

    NASA Astrophysics Data System (ADS)

    Abatzoglou, John T.; Dobrowski, Solomon Z.; Parks, Sean A.; Hegewisch, Katherine C.

    2018-01-01

    We present TerraClimate, a dataset of high-spatial resolution (1/24°, ~4-km) monthly climate and climatic water balance for global terrestrial surfaces from 1958-2015. TerraClimate uses climatically aided interpolation, combining high-spatial resolution climatological normals from the WorldClim dataset, with coarser resolution time varying (i.e., monthly) data from other sources to produce a monthly dataset of precipitation, maximum and minimum temperature, wind speed, vapor pressure, and solar radiation. TerraClimate additionally produces monthly surface water balance datasets using a water balance model that incorporates reference evapotranspiration, precipitation, temperature, and interpolated plant extractable soil water capacity. These data provide important inputs for ecological and hydrological studies at global scales that require high spatial resolution and time varying climate and climatic water balance data. We validated spatiotemporal aspects of TerraClimate using annual temperature, precipitation, and calculated reference evapotranspiration from station data, as well as annual runoff from streamflow gauges. TerraClimate datasets showed noted improvement in overall mean absolute error and increased spatial realism relative to coarser resolution gridded datasets.

  3. Monitoring activities of daily living based on wearable wireless body sensor network.

    PubMed

    Kańtoch, E; Augustyniak, P; Markiewicz, M; Prusak, D

    2014-01-01

    With recent advances in microprocessor chip technology, wireless communication, and biomedical engineering, it is possible to develop miniaturized ubiquitous health monitoring devices that are capable of recording physiological and movement signals during daily life activities. The aim of this research is to implement and test a prototype health monitoring system. The system consists of a body central unit with a Bluetooth module and wearable sensors: a custom-designed ECG sensor, a temperature sensor, a skin humidity sensor and accelerometers placed on the human body or integrated with clothes, plus a network gateway to forward data to a remote medical server. The system includes a custom-designed transmission protocol and a remote web-based graphical user interface for real-time data analysis. Experimental results for a group of subjects who performed various activities (e.g., working, running) showed a maximum 5% absolute error compared to certified medical devices. The results are promising and indicate that the developed wireless wearable monitoring system meets the challenges of multi-sensor health monitoring during daily activities and opens new opportunities for developing novel healthcare services.

  4. Determination of the absolute binding free energies of HIV-1 protease inhibitors using non-equilibrium molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Ngo, Son Tung; Nguyen, Minh Tung; Nguyen, Minh Tho

    2017-05-01

    The absolute binding free energy of an inhibitor to HIV-1 protease (PR) was determined through evaluation of the non-bonded interaction energy difference between the bound and unbound states of the inhibitor and surrounding molecules, via the fast pulling of ligand (FPL) process using non-equilibrium molecular dynamics (NEMD) simulations. The calculated free energy difference terms help clarify the nature of the binding. Theoretical binding affinities correlate well with experimental data, with R = 0.89. The paradigm used is able to rank two inhibitors having a maximum difference of ∼1.5 kcal/mol in absolute binding free energies.

  5. Numerical evaluation of magnetic absolute measurements with arbitrarily distributed DI-fluxgate theodolite orientations

    NASA Astrophysics Data System (ADS)

    Brunke, Heinz-Peter; Matzka, Jürgen

    2018-01-01

    At geomagnetic observatories, absolute measurements are needed to determine the calibration parameters of the continuously recording vector magnetometer (variometer). Absolute measurements are indispensable for determining the vector of the geomagnetic field over long periods of time. A standard DI (declination, inclination) measurement scheme for absolute measurements is established routine at magnetic observatories; the traditional scheme uses a fixed number of eight telescope orientations (Jankowski et al., 1996).

    We present a numerical method allowing for the evaluation of an arbitrary number of telescope orientations (a minimum of five, as there are five independent parameters). Our method provides D, I and Z base values along with calculated error bars for each.

    A general approach has significant advantages. Additional measurements may be seamlessly incorporated for higher accuracy. Individual erroneous readings are identified and can be discarded without invalidating the entire data set. A priori information can be incorporated. We expect the general method to also ease requirements for automated DI-flux measurements. The method can reveal certain properties of the DI theodolite which are not captured by the conventional method.

    Based on the alternative evaluation method, a new, faster and less error-prone measuring scheme is presented. It avoids the need to calculate the magnetic meridian prior to the inclination measurements.

    Measurements in the vicinity of the magnetic equator are possible with theodolites and without a zenith ocular.

    The implementation of the method in MATLAB is available as source code at the GFZ Data Center (Brunke, 2017).
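
    The general evaluation described above amounts to an overdetermined least-squares fit. The sketch below is a generic stand-in, not the GFZ MATLAB code: a toy linear system with five unknowns and twelve readings, error bars derived from the residual variance, and one suspect reading discarded without invalidating the rest of the data set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy overdetermined linear model y = A @ p + noise, standing in for the
# linearized DI-flux observation equations: 5 unknown parameters, observed
# here with 12 telescope orientations instead of the traditional fixed 8.
n_obs, n_par = 12, 5
A = rng.normal(size=(n_obs, n_par))
p_true = np.array([1.0, -2.0, 0.5, 3.0, -1.5])
y = A @ p_true + rng.normal(scale=0.01, size=n_obs)

# Least-squares estimate and 1-sigma error bars from the residual variance
p_hat = np.linalg.lstsq(A, y, rcond=None)[0]
dof = n_obs - n_par
sigma2 = np.sum((y - A @ p_hat) ** 2) / dof
cov = sigma2 * np.linalg.inv(A.T @ A)
err_bars = np.sqrt(np.diag(cov))

# An individual erroneous reading can simply be dropped and the fit redone
# on the remaining rows.
mask = np.ones(n_obs, dtype=bool)
mask[3] = False  # discard a suspect reading
p_hat2 = np.linalg.lstsq(A[mask], y[mask], rcond=None)[0]
```

    Adding further measurements just appends rows to A, which is what makes the approach seamlessly extensible.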

  6. Comparison of the WSA-ENLIL model with three CME cone types

    NASA Astrophysics Data System (ADS)

    Jang, Soojeong; Moon, Y.; Na, H.

    2013-07-01

    We have made a comparison of the CME-associated shock propagation based on the WSA-ENLIL model with three cone types using 29 halo CMEs from 2001 to 2002. These halo CMEs have cone model parameters as well as their associated interplanetary (IP) shocks. For this study we consider three different cone types (an asymmetric cone model, an ice-cream cone model and an elliptical cone model) to determine 3-D CME parameters (radial velocity, angular width and source location), which are the input values of the WSA-ENLIL model. The mean absolute error (MAE) of the arrival times for the asymmetric cone model is 10.6 hours, which is about 1 hour smaller than those of the other models. Their ensemble average of MAE is 9.5 hours. However, this value is still larger than that (8.7 hours) of the empirical model of Kim et al. (2007). We will compare their IP shock velocities and densities with those from ACE in-situ measurements and discuss them in terms of the prediction of geomagnetic storms.

  7. 75 FR 20813 - Certain Magnesia Carbon Bricks from the People's Republic of China: Amended Preliminary...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-21

    ... other errors, would result in (1) a change of at least five absolute percentage points in, but not less...) preliminary determination, or (2) a difference between a weighted-average dumping margin of zero or de minimis...

  8. Factor structure and psychometric properties of the english version of the trier inventory for chronic stress (TICS-E).

    PubMed

    Petrowski, Katja; Kliem, Sören; Sadler, Michael; Meuret, Alicia E; Ritz, Thomas; Brähler, Elmar

    2018-02-06

    Demands placed on individuals in occupational and social settings, as well as imbalances in personal traits and resources, can lead to chronic stress. The Trier Inventory for Chronic Stress (TICS) measures chronic stress while incorporating domain-specific aspects, and has been found to be a highly reliable and valid research tool. The aims of the present study were to confirm the factorial structure of the German TICS in an English translation of the instrument (TICS-E) and to report its psychometric properties. A random route sample of healthy participants (N = 483) aged 18-30 years completed the TICS-E. Robust maximum likelihood estimation with a mean-adjusted chi-square test statistic was applied due to the sample's significant deviation from the multivariate normal distribution. Goodness of fit, absolute model fit, and relative model fit were assessed by means of the root mean square error of approximation (RMSEA), the standardized root mean square residual (SRMR), the Comparative Fit Index (CFI), and the Tucker-Lewis Index (TLI). Reliability estimates (Cronbach's α and adjusted split-half reliability) ranged from .84 to .92. Item-scale correlations ranged from .50 to .85. Measures of fit showed values of .052 for RMSEA (CI = .050-.054) and .067 for SRMR for absolute model fit, and values of .846 (TLI) and .855 (CFI) for relative model fit. Factor loadings ranged from .55 to .91. The psychometric properties and factor structure of the TICS-E are comparable to those of the German version of the TICS. The instrument therefore meets quality standards for an adequate measurement of chronic stress.
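
    For reference, the RMSEA reported above is conventionally derived from the model chi-square statistic, its degrees of freedom, and the sample size (this standard definition is an addition here, not part of the abstract):

```latex
\mathrm{RMSEA} \;=\; \sqrt{\frac{\max\!\left(\chi^2_{M}-df_{M},\,0\right)}{df_{M}\,(N-1)}}
```

    Values near .05, as found here, are conventionally read as indicating close model fit.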

  9. Support vector machine for day ahead electricity price forecasting

    NASA Astrophysics Data System (ADS)

    Razak, Intan Azmira binti Wan Abdul; Abidin, Izham bin Zainal; Siah, Yap Keem; Rahman, Titik Khawa binti Abdul; Lada, M. Y.; Ramani, Anis Niza binti; Nasir, M. N. M.; Ahmad, Arfah binti

    2015-05-01

    Electricity price forecasting has become an important part of power system operation and planning. In a pool-based electric energy market, producers submit selling bids consisting of energy blocks and their corresponding minimum selling prices to the market operator, while consumers submit buying bids consisting of energy blocks and their corresponding maximum buying prices. Hence, both producers and consumers use day-ahead price forecasts to derive their respective bidding strategies and thereby reduce their cost of electricity. However, forecasting electricity prices is a complex task because the price series is non-stationary and highly volatile. Many factors cause price spikes, such as volatility in load and fuel price, as well as power imported to and exported from outside the market through long-term contracts. This paper introduces a machine learning approach to day-ahead electricity price forecasting with the Least Squares Support Vector Machine (LS-SVM). Previous-day data of the Hourly Ontario Energy Price (HOEP), generation price and demand from the Ontario power market are used as inputs for training. The simulation is carried out using LSSVMlab in Matlab with training and testing data from 2004. SVM, widely used for classification and regression, has great generalization ability owing to the structural risk minimization principle rather than empirical risk minimization. Moreover, the same parameter settings in a trained SVM give the same results, which greatly reduces the simulation effort compared to other techniques such as neural networks and time series models. The mean absolute percentage error (MAPE) of the proposed model shows that the SVM performs well compared to a neural network.
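
    Part of the appeal of LS-SVM is that training reduces to solving a single linear system rather than a quadratic program. A minimal sketch of the standard dual formulation follows, with a hypothetical smooth sine "price" curve standing in for the HOEP data; the kernel and regularization values are illustrative, not the paper's.

```python
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    # Gaussian (RBF) kernel matrix between two sets of points
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=100.0, sigma=1.0):
    """LS-SVM regression: training is one linear system (Suykens form):
        [ 0      1^T        ] [b]       [0]
        [ 1   K + I/gamma   ] [alpha] = [y]
    """
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    M = np.zeros((n + 1, n + 1))
    M[0, 1:] = 1.0
    M[1:, 0] = 1.0
    M[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(M, rhs)
    return sol[0], sol[1:]  # bias b, dual coefficients alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

# Hypothetical smooth "price" curve standing in for the HOEP series
t = np.linspace(0.0, 6.0, 60)[:, None]
y = 30.0 + 5.0 * np.sin(t[:, 0])
b, alpha = lssvm_fit(t, y)
pred = lssvm_predict(t, b, alpha, t)
mape = np.mean(np.abs((y - pred) / y)) * 100.0
```

    Because the system is linear, refitting with the same parameters is deterministic, which matches the reproducibility point made in the abstract.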

  10. Scanning linear estimation: improvements over region of interest (ROI) methods

    NASA Astrophysics Data System (ADS)

    Kupinski, Meredith K.; Clarkson, Eric W.; Barrett, Harrison H.

    2013-03-01

    In tomographic medical imaging, signal activity is typically estimated by summing voxels from a reconstructed image. We introduce an alternative estimation scheme that operates on the raw projection data and offers a substantial improvement, as measured by the ensemble mean-square error (EMSE), when compared to using voxel values from a maximum-likelihood expectation-maximization (MLEM) reconstruction. The scanning-linear (SL) estimator operates on the raw projection data and is derived as a special case of maximum-likelihood estimation with a series of approximations to make the calculation tractable. The approximated likelihood accounts for background randomness, measurement noise and variability in the parameters to be estimated. When signal size and location are known, the SL estimate of signal activity is unbiased, i.e. the average estimate equals the true value. By contrast, unpredictable bias arising from the null functions of the imaging system affects standard algorithms that operate on reconstructed data. The SL method is demonstrated for two different tasks: (1) simultaneously estimating a signal's size, location and activity; (2) for a fixed signal size and location, estimating activity. Noisy projection data are realistically simulated using measured calibration data from the multi-module multi-resolution small-animal SPECT imaging system. For both tasks, the same set of images is reconstructed using the MLEM algorithm (80 iterations), and the average and maximum values within the region of interest (ROI) are calculated for comparison. This comparison shows dramatic improvements in EMSE for the SL estimates. To show that the bias in ROI estimates affects not only absolute values but also relative differences, such as those used to monitor the response to therapy, the activity estimation task is repeated for three different signal sizes.

  11. Sensitivity analysis of linear CROW gyroscopes and comparison to a single-resonator gyroscope

    NASA Astrophysics Data System (ADS)

    Zamani-Aghaie, Kiarash; Digonnet, Michel J. F.

    2013-03-01

    This study presents numerical simulations of the maximum sensitivity to absolute rotation of a number of coupled resonator optical waveguide (CROW) gyroscopes consisting of a linear array of coupled ring resonators. It examines in particular the impact on the maximum sensitivity of the number of rings, of the relative spatial orientation of the rings (folded and unfolded), of various sequences of coupling ratios between the rings and various sequences of ring dimensions, and of the number of input/output waveguides (one or two) used to inject and collect the light. In all configurations the sensitivity is maximized by proper selection of the coupling ratio(s) and phase bias, and compared to the maximum sensitivity of a resonant waveguide optical gyroscope (RWOG) utilizing a single ring-resonator waveguide with the same radius and loss as each ring in the CROW. Simulations show that although some configurations are more sensitive than others, in spite of numerous claims to the contrary made in the literature, in all configurations the maximum sensitivity is independent of the number of rings, and does not exceed the maximum sensitivity of an RWOG. There are no sensitivity benefits to utilizing any of these linear CROWs for absolute rotation sensing. For equal total footprint, an RWOG is √N times more sensitive, and it is easier to fabricate and stabilize.

  12. Maximum-entropy probability distributions under Lp-norm constraints

    NASA Technical Reports Server (NTRS)

    Dolinar, S.

    1991-01-01

    Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given L sub p norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the L sub p norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight line relationship between the maximum differential entropy and the logarithm of the L sub p norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
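
    The straight-line relationship can be made concrete for the familiar case p = 2, where the maximizing density is Gaussian; by the scaling property of differential entropy, the slope with respect to the logarithm of the norm is one for every p, and only the intercept depends on p:

```latex
h_{\max} \;=\; \tfrac{1}{2}\ln\!\left(2\pi e\,\|X\|_2^{2}\right) \;=\; \ln\|X\|_2 \;+\; \tfrac{1}{2}\ln\!\left(2\pi e\right)
```

    Scaling X by a constant c adds ln c to the differential entropy and multiplies every Lp norm by c, which is why the slope is universal.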

  13. SU-E-QI-21: Iodinated Contrast Agent Time Course In Human Brain Metastasis: A Study For Stereotactic Synchrotron Radiotherapy Clinical Trials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Obeid, L; Esteve, F; Adam, J

    2014-06-15

    Purpose: Synchrotron stereotactic radiotherapy (SSRT) is an innovative treatment combining the selective accumulation of heavy elements in tumors with stereotactic irradiations using monochromatic medium-energy x-rays from a synchrotron source. Phase I/II clinical trials on brain metastasis are underway using venous infusion of iodinated contrast agents. The radiation dose enhancement depends on the amount of iodine in the tumor and its time course. In the present study, the reproducibility of iodine concentrations between the CT planning scan day (Day 0) and the treatment day (Day 10) was assessed in order to predict dose errors. Methods: On each of days 0 and 10, three patients received a biphasic intravenous injection of iodinated contrast agent (40 ml at 4 ml/s, followed by 160 ml at 0.5 ml/s) in order to ensure stable intra-tumoral amounts of iodine during the treatment. Two volumetric CT scans (before and after iodine injection) and a multi-slice dynamic CT of the brain were performed using conventional radiotherapy CT (Day 0) or quantitative synchrotron radiation CT (Day 10). A 3D rigid registration was processed between images. The absolute and relative differences of absolute iodine concentrations and their corresponding dose errors were evaluated in the GTV and PTV used for treatment planning. Results: The differences in iodine concentrations remained within the standard deviation limits. The 3D absolute differences followed a normal distribution centered at zero mg/ml with a variance (∼1 mg/ml) related to the image noise. Conclusion: The results suggest that dose errors depend only on the image noise. This study shows that stable amounts of iodine are achievable in brain metastasis for SSRT treatment over a 10-day interval.

  14. The effects of knee direction, physical activity and age on knee joint position sense.

    PubMed

    Relph, Nicola; Herrington, Lee

    2016-06-01

    Previous research has suggested a decline in knee proprioception with age. Furthermore, regular participation in physical activity may improve proprioceptive ability. However, there is no large-scale data on uninjured populations to confirm these theories. The aim of this study was to provide normative knee joint position sense (JPS) data from healthy participants aged 18-82 years to evaluate the effects of age, physical activity and knee direction. A sample of 116 participants across five age groups was used. The main outcome measures were knee JPS absolute error scores into flexion and extension, Tegner activity levels and General Practitioner Physical Activity Questionnaire results. Absolute error scores into knee flexion were 3.6°, 3.9°, 3.5°, 3.7° and 3.1°, and into knee extension were 2.7°, 2.5°, 2.9°, 3.4° and 3.9°, for ages 15-29, 30-44, 45-59, 60-74 and 75 years old respectively. Knee extension and flexion absolute error scores were significantly different when age group data were pooled. There was a significant effect of age and activity level on joint position sense into knee extension. Age and lower Tegner scores were also negatively correlated with joint position sense into knee extension. The results provide some evidence for a decline in knee joint position sense with age. Further, active populations may have heightened static proprioception compared to inactive groups. Normative knee joint position sense data are provided and may be used by practitioners to identify patients with reduced proprioceptive ability. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Accuracy of free energies of hydration using CM1 and CM3 atomic charges.

    PubMed

    Udier-Blagović, Marina; Morales De Tirado, Patricia; Pearlman, Shoshannah A; Jorgensen, William L

    2004-08-01

    Absolute free energies of hydration (ΔGhyd) have been computed for 25 diverse organic molecules using partial atomic charges derived from AM1 and PM3 wave functions via the CM1 and CM3 procedures of Cramer, Truhlar, and coworkers. Comparisons are made with results using charges fit to the electrostatic potential surface (EPS) from ab initio 6-31G* wave functions and from the OPLS-AA force field. OPLS Lennard-Jones parameters for the organic molecules were used together with the TIP4P water model in Monte Carlo simulations with free energy perturbation theory. Absolute free energies of hydration were computed for OPLS united-atom and all-atom methane by annihilating the solutes in water and in the gas phase, and absolute ΔGhyd values for all other molecules were computed via transformation to one of these references. Optimal charge scaling factors were determined by minimizing the unsigned average error between experimental and calculated hydration free energies. The PM3-based charge models do not lead to lower average errors than obtained with the EPS charges for the subset of 13 molecules in the original study. However, improvement is obtained by scaling the CM1A partial charges by 1.14 and the CM3A charges by 1.15, which leads to average errors of 1.0 and 1.1 kcal/mol for the full set of 25 molecules. The scaled CM1A charges also yield the best results for the hydration of amides including the E/Z free-energy difference for N-methylacetamide in water. Copyright 2004 Wiley Periodicals, Inc.

  16. Absolute GPS Positioning Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Ramillien, G.

    A new inverse approach for restoring the absolute coordinates of a ground-based station from three or four observed GPS pseudo-ranges is proposed. This stochastic method is based on simulations of natural evolution named genetic algorithms (GA). These iterative procedures provide fairly good and robust estimates of the absolute positions in the Earth's geocentric reference system. For comparison/validation, GA results are compared to the ones obtained using the classical linearized least-squares scheme for the determination of the XYZ location proposed by Bancroft (1985), which is strongly limited by the number of available observations (i.e., here, the number of input pseudo-ranges must be four). The r.m.s. accuracy of the non-linear cost function reached by this latter method is typically ~10^-4 m^2, corresponding to ~300-500 m accuracies for each geocentric coordinate. However, GA can provide more acceptable solutions (r.m.s. errors < 10^-5 m^2), even when only three instantaneous pseudo-ranges are used, such as after a loss of lock during a GPS survey. Tuned GA parameters used in different simulations are N=1000 starting individuals, as well as Pc=60-70% and Pm=30-40% for the crossover probability and mutation rate, respectively. Statistical tests on the ability of GA to recover acceptable coordinates in the presence of important levels of noise are made by simulating nearly 3000 random samples of erroneous pseudo-ranges. Here, two main sources of measurement error are considered in the inversion: (1) typical satellite-clock errors and/or 300-metre variance atmospheric delays, and (2) Geometrical Dilution of Precision (GDOP) due to the particular GPS satellite configuration at the time of acquisition. Extracting valuable information even from low-quality starting range observations, GA offer an interesting alternative for high-precision GPS positioning.
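
    The GA machinery described above can be sketched in a few lines. The following is a toy illustration, not the author's implementation: it minimizes the squared pseudo-range residual cost for a made-up four-satellite geometry (receiver clock bias omitted for brevity; all positions, population size and rates are illustrative), using tournament selection, blend crossover, Gaussian mutation, and elitism.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy geometry in hypothetical units: four "satellites" and a true receiver
sats = np.array([[10.0, 0, 0], [0, 10.0, 0], [0, 0, 10.0], [10.0, 10.0, 10.0]])
p_true = np.array([1.0, 2.0, 3.0])
ranges = np.linalg.norm(sats - p_true, axis=1)  # noise-free pseudo-ranges

def cost(pop):
    # Sum of squared range residuals for each candidate position
    d = np.linalg.norm(pop[:, None, :] - sats[None, :, :], axis=2)
    return ((d - ranges) ** 2).sum(axis=1)

n_pop, n_gen = 200, 150
pop = rng.uniform(-5, 15, size=(n_pop, 3))  # initial population in a box
for gen in range(n_gen):
    f = cost(pop)
    elite = pop[np.argmin(f)].copy()
    # Tournament selection (size 3)
    idx = rng.integers(0, n_pop, size=(n_pop, 3))
    winners = idx[np.arange(n_pop), np.argmin(f[idx], axis=1)]
    parents = pop[winners]
    # Blend crossover between randomly paired parents
    mates = parents[rng.permutation(n_pop)]
    w = rng.uniform(size=(n_pop, 1))
    children = w * parents + (1 - w) * mates
    # Gaussian mutation with a decaying step size
    children += rng.normal(scale=2.0 * 0.97 ** gen, size=children.shape)
    children[0] = elite  # elitism: keep the best individual
    pop = children

best = pop[np.argmin(cost(pop))]
```

    Unlike the linearized least-squares scheme, nothing here requires exactly four ranges: the cost function simply sums over however many are available.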

  17. Hepatic Blood Perfusion Estimated by Dynamic Contrast-Enhanced Computed Tomography in Pigs: Limitations of the Slope Method

    PubMed Central

    Winterdahl, Michael; Sørensen, Michael; Keiding, Susanne; Mortensen, Frank V.; Alstrup, Aage K. O.; Hansen, Søren B.; Munk, Ole L.

    2012-01-01

    Objective: To determine whether dynamic contrast-enhanced computed tomography (DCE-CT) and the slope method can provide absolute measures of hepatic blood perfusion from the hepatic artery (HA) and portal vein (PV) at experimentally varied blood flow rates. Materials and Methods: Ten anesthetized 40-kg pigs underwent DCE-CT during periods of normocapnia (normal flow), hypocapnia (decreased flow), and hypercapnia (increased flow), induced by adjusting the ventilation. Reference blood flows in the HA and PV were measured continuously by surgically placed ultrasound transit-time flowmeters. For each capnic condition, the DCE-CT-estimated absolute hepatic blood perfusions from the HA and PV were calculated using the slope method and compared with flowmeter-based absolute measurements of hepatic perfusion, and relative errors were analyzed. Results: The relative errors (mean±SEM) of the DCE-CT-based perfusion estimates were −21±23% for HA and 81±31% for PV (normocapnia), 9±23% for HA and 92±42% for PV (hypocapnia), and 64±28% for HA and −2±20% for PV (hypercapnia). The mean relative errors for HA were not significantly different from zero during hypo- and normocapnia, and the DCE-CT slope method could detect relative changes in HA perfusion between scans. Infusion of contrast agent led to significantly increased hepatic blood perfusion, which biased the PV perfusion estimates. Conclusions: Using the DCE-CT slope method, HA perfusion estimates were accurate at low and normal flow rates, whereas PV perfusion estimates were inaccurate and imprecise. At high flow rate, both HA perfusion estimates were significantly biased. PMID:22836307

  18. Systematic changes in position sense accompany normal aging across adulthood.

    PubMed

    Herter, Troy M; Scott, Stephen H; Dukelow, Sean P

    2014-03-25

    Development of clinical neurological assessments aimed at separating normal from abnormal capabilities requires a comprehensive understanding of how basic neurological functions change (or do not change) with increasing age across adulthood. In the case of proprioception, the research literature has failed to conclusively determine whether or not position sense in the upper limb deteriorates in elderly individuals. The present study was conducted a) to quantify whether upper limb position sense deteriorates with increasing age, and b) to generate a set of normative data that can be used for future comparisons with clinical populations. We examined position sense in 209 healthy males and females between the ages of 18 and 90 using a robotic arm position-matching task that is both objective and reliable. In this task, the robot moved an arm to one of nine positions and subjects attempted to mirror-match that position with the opposite limb. Measures of position sense were recorded by the robotic apparatus in hand- and joint-based coordinates, and linear regressions were used to quantify age-related changes and percentile boundaries of normal behaviour. For clinical comparisons, we also examined influences of sex (male versus female) and test-hand (dominant versus non-dominant) on all measures of position sense. Analyses of hand-based parameters identified several measures of position sense (Variability, Shift, Spatial Contraction, Absolute Error) with significant effects of age, sex, and test-hand. Joint-based parameters at the shoulder (Absolute Error) and elbow (Variability, Shift, Absolute Error) also exhibited significant effects of age and test-hand. The present study provides strong evidence that several measures of upper extremity position sense exhibit declines with age. Furthermore, this data provides a basis for quantifying when changes in position sense are related to normal aging or alternatively, pathology.

  19. A novel capacitive absolute positioning sensor based on time grating with nanometer resolution

    NASA Astrophysics Data System (ADS)

    Pu, Hongji; Liu, Hongzhong; Liu, Xiaokang; Peng, Kai; Yu, Zhicheng

    2018-05-01

    The present work proposes a novel capacitive absolute positioning sensor based on time grating. The sensor includes a fine incremental-displacement measurement component combined with a coarse absolute-position measurement component to obtain high-resolution absolute positioning measurements. A single row type sensor was proposed to achieve fine displacement measurement, which combines the two electrode rows of a previously proposed double-row type capacitive displacement sensor based on time grating into a single row. To achieve absolute positioning measurement, the coarse measurement component is designed as a single-row type displacement sensor employing a single spatial period over the entire measurement range. In addition, this component employs a rectangular induction electrode and four groups of orthogonal discrete excitation electrodes with half-sinusoidal envelope shapes, which were formed by alternately extending the rectangular electrodes of the fine measurement component. The fine and coarse measurement components are tightly integrated to form a compact absolute positioning sensor. A prototype sensor was manufactured using printed circuit board technology for testing and optimization of the design in conjunction with simulations. Experimental results show that the prototype sensor achieves a ±300 nm measurement accuracy with a 1 nm resolution over a displacement range of 200 mm when employing error compensation. The proposed sensor is an excellent alternative to presently available long-range absolute nanometrology sensors owing to its low cost, simple structure, and ease of manufacturing.
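
    A generic way to fuse such coarse and fine channels (a simplified stand-in for the sensor's actual decoding, with hypothetical numbers) is to let the coarse absolute reading select which period of the fine channel the target sits in:

```python
def combine_coarse_fine(coarse_mm, fine_mm, pitch_mm):
    """Fuse a coarse absolute reading with a fine periodic reading.

    The fine channel repeats every `pitch_mm`, so on its own it is only
    known modulo the pitch; the coarse channel is absolute but noisy.
    The coarse reading selects the period index k, and the fine reading
    supplies the high-resolution fraction within that period.
    """
    k = round((coarse_mm - fine_mm) / pitch_mm)
    return k * pitch_mm + fine_mm

# Hypothetical readings: 1 mm pitch, coarse channel good to ~0.2 mm
pos = combine_coarse_fine(coarse_mm=123.38, fine_mm=0.4617, pitch_mm=1.0)
```

    This works whenever the coarse channel's error stays below half a pitch; beyond that the wrong period is selected, which is why the two components must be designed together.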

  20. The correction of vibration in frequency scanning interferometry based absolute distance measurement system for dynamic measurements

    NASA Astrophysics Data System (ADS)

    Lu, Cheng; Liu, Guodong; Liu, Bingguo; Chen, Fengdong; Zhuang, Zhitao; Xu, Xinke; Gan, Yu

    2015-10-01

    Absolute distance measurement systems are of significant interest in the field of metrology; they could improve the manufacturing efficiency and accuracy of large assemblies in fields such as aircraft construction, automotive engineering, and the production of modern windmill blades. Frequency scanning interferometry demonstrates noticeable advantages as an absolute distance measurement technique: it has high precision and does not depend on a cooperative target. In this paper, the influence of inevitable vibration in a frequency scanning interferometry based absolute distance measurement system is analyzed. The distance spectrum is broadened by the Doppler effect caused by vibration, which introduces a measurement error more than 10^3 times larger than the change in optical path difference. In order to decrease the influence of vibration, the changes of the optical path difference are monitored by a frequency-stabilized laser running parallel to the frequency scanning interferometry. The experiment verified the effectiveness of this method.

  1. Perception of musical and lexical tones by Taiwanese-speaking musicians.

    PubMed

    Lee, Chao-Yang; Lee, Yuh-Fang; Shr, Chia-Lin

    2011-07-01

    This study explored the relationship between music and speech by examining absolute pitch and lexical tone perception. Taiwanese-speaking musicians were asked to identify musical tones without a reference pitch and multispeaker Taiwanese level tones without acoustic cues typically present for speaker normalization. The results showed that a high percentage of the participants (65% with an exact match required and 81% with one-semitone errors allowed) possessed absolute pitch, as measured by the musical tone identification task. A negative correlation was found between occurrence of absolute pitch and age of onset of musical training, suggesting that the acquisition of absolute pitch resembles the acquisition of speech. The participants were able to identify multispeaker Taiwanese level tones with above-chance accuracy, even though the acoustic cues typically present for speaker normalization were not available in the stimuli. No correlations were found between the performance in musical tone identification and the performance in Taiwanese tone identification. Potential reasons for the lack of association between the two tasks are discussed. © 2011 Acoustical Society of America

  2. Application of Four-Point Newton-EGSOR iteration for the numerical solution of 2D Porous Medium Equations

    NASA Astrophysics Data System (ADS)

    Chew, J. V. L.; Sulaiman, J.

    2017-09-01

    Partial differential equations describing nonlinear heat and mass transfer phenomena are difficult to solve. When an exact solution is hard to obtain, a numerical procedure such as the finite difference method must be used. A numerical method can be considered efficient if it gives an approximate solution within the specified error with the least computational complexity. In this paper, the two-dimensional Porous Medium Equation (2D PME) is discretized using the implicit finite difference scheme to construct the corresponding approximation equation, which yields a large, sparse nonlinear system. After linearizing this system with the Newton method, the paper applies the Four-Point Newton-EGSOR (4NEGSOR) iterative method to solve the 2D PMEs. The efficiency of the 4NEGSOR iterative method is studied on three example problems and compared against the Newton-Gauss-Seidel (NGS) and Newton-SOR (NSOR) iterative methods. The numerical findings show that the 4NEGSOR method is superior to the NGS and NSOR methods in terms of the number of iterations to convergence, the computation time, and the maximum absolute errors produced.
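    The solver pattern described (an outer Newton linearization with an inner relaxation sweep on the linearized system) can be illustrated on a toy problem. The sketch below is not the paper's 2D PME discretization or its four-point EGSOR variant; it is a minimal Newton-Gauss-Seidel loop for the 1D model problem -u'' + u^2 = 1 with zero boundary values:

```python
import numpy as np

# Toy stand-in for the solver pattern: Newton outer iteration with a
# Gauss-Seidel inner linear solve (NOT the paper's 2D PME or 4NEGSOR).
n = 10                 # interior grid points
h = 1.0 / (n + 1)
u = np.zeros(n)        # initial guess

def residual(u):
    """F_i(u) = (-u_{i-1} + 2u_i - u_{i+1})/h^2 + u_i^2 - 1."""
    F = np.empty(n)
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        F[i] = (-left + 2 * u[i] - right) / h**2 + u[i]**2 - 1.0
    return F

for _ in range(20):                       # Newton outer loop
    F = residual(u)
    if np.max(np.abs(F)) < 1e-10:
        break
    # Gauss-Seidel sweeps on J*delta = -F; J is tridiagonal with
    # diagonal 2/h^2 + 2*u_i and off-diagonals -1/h^2.
    delta = np.zeros(n)
    for _ in range(200):
        for i in range(n):
            left = delta[i - 1] if i > 0 else 0.0
            right = delta[i + 1] if i < n - 1 else 0.0
            delta[i] = (-F[i] + (left + right) / h**2) / (2 / h**2 + 2 * u[i])
    u += delta                            # Newton update

print("max absolute residual:", np.max(np.abs(residual(u))))
```

    Replacing the inner Gauss-Seidel sweeps with SOR (a relaxation factor on each update) or a block four-point variant gives the flavor of the NSOR and 4NEGSOR methods compared in the paper.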

  3. A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.

    PubMed

    Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing

    2018-01-15

    Dynamic measurement error correction is an effective way to improve sensor precision, and dynamic measurement error prediction is an important part of error correction. Support vector machines (SVM) are often used to predict the dynamic measurement errors of sensors, but the SVM parameters have traditionally been set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to improve its ability to avoid local optima. To verify the performance of NAPSO-SVM, three algorithms are used to optimize the SVM's parameters: particle swarm optimization (PSO), the improved PSO (NAPSO), and glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors serve as the test data, and the root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performance. The experimental results show that, among the three tested algorithms, the NAPSO-SVM method achieves better prediction precision and smaller prediction errors, and is an effective method for predicting the dynamic measurement errors of sensors.
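    The two evaluation metrics named in the abstract are standard and easy to state exactly; a minimal sketch with made-up numbers:

```python
import numpy as np

# Root mean squared error and mean absolute percentage error,
# the two metrics used to score the prediction models.
def rmse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mape(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

# Hypothetical measured vs. predicted sensor errors, for illustration only.
y_true = [100.0, 200.0, 300.0]
y_pred = [110.0, 190.0, 330.0]
print("RMSE:", rmse(y_true, y_pred))
print("MAPE (%):", mape(y_true, y_pred))
```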

  4. Improvements in absolute seismometer sensitivity calibration using local earth gravity measurements

    USGS Publications Warehouse

    Anthony, Robert E.; Ringler, Adam; Wilson, David

    2018-01-01

    The ability to determine both absolute and relative seismic amplitudes is fundamentally limited by the accuracy and precision with which scientists are able to calibrate seismometer sensitivities and characterize their response. Currently, across the Global Seismic Network (GSN), errors in midband sensitivity exceed 3% at the 95% confidence interval and are the least‐constrained response parameter in seismic recording systems. We explore a new methodology utilizing precise absolute Earth gravity measurements to determine the midband sensitivity of seismic instruments. We first determine the absolute sensitivity of Kinemetrics EpiSensor accelerometers to 0.06% at the 99% confidence interval by inverting them in a known gravity field at the Albuquerque Seismological Laboratory (ASL). After the accelerometer is calibrated, we install it in its normal configuration next to broadband seismometers and subject the sensors to identical ground motions to perform relative calibrations of the broadband sensors. Using this technique, we are able to determine the absolute midband sensitivity of the vertical components of Nanometrics Trillium Compact seismometers to within 0.11% and Streckeisen STS‐2 seismometers to within 0.14% at the 99% confidence interval. The technique enables absolute calibrations from first principles that are traceable to National Institute of Standards and Technology (NIST) measurements while providing nearly an order of magnitude more precision than step‐table calibrations.

  5. Integrated Navigation System Design for Micro Planetary Rovers: Comparison of Absolute Heading Estimation Algorithms and Nonlinear Filtering

    PubMed Central

    Ilyas, Muhammad; Hong, Beomjin; Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok

    2016-01-01

    This paper provides algorithms to fuse relative and absolute microelectromechanical systems (MEMS) navigation sensors, suitable for micro planetary rovers, to provide a more accurate estimation of navigation information, specifically, attitude and position. Planetary rovers have extremely slow speed (~1 cm/s) and lack conventional navigation sensors/systems, hence the general methods of terrestrial navigation may not be applicable to these applications. While relative attitude and position can be tracked in a way similar to those for ground robots, absolute navigation information is hard to achieve on a remote celestial body, like Moon or Mars, in contrast to terrestrial applications. In this study, two absolute attitude estimation algorithms were developed and compared for accuracy and robustness. The estimated absolute attitude was fused with the relative attitude sensors in a framework of nonlinear filters. The nonlinear Extended Kalman filter (EKF) and Unscented Kalman filter (UKF) were compared in pursuit of better accuracy and reliability in this nonlinear estimation problem, using only on-board low cost MEMS sensors. Experimental results confirmed the viability of the proposed algorithms and the sensor suite, for low cost and low weight micro planetary rovers. It is demonstrated that integrating the relative and absolute navigation MEMS sensors reduces the navigation errors to the desired level. PMID:27223293

  7. Finding Useful Questions: On Bayesian Diagnosticity, Probability, Impact, and Information Gain

    ERIC Educational Resources Information Center

    Nelson, Jonathan D.

    2005-01-01

    Several norms for how people should assess a question's usefulness have been proposed, notably Bayesian diagnosticity, information gain (mutual information), Kullback-Leibler distance, probability gain (error minimization), and impact (absolute change). Several probabilistic models of previous experiments on categorization, covariation assessment,…

  8. 3D foot shape generation from 2D information.

    PubMed

    Luximon, Ameersing; Goonetilleke, Ravindra S; Zhang, Ming

    2005-05-15

    Two methods to generate an individual 3D foot shape from 2D information are proposed. A standard foot shape was first generated and then scaled based on known 2D information. In the first method, the foot outline and the foot height were used; in the second, the foot outline and the foot profile were used. The models were developed using 40 participants and validated using a different set of 40 participants. Results show that an individual foot shape can be predicted within a mean absolute error of 1.36 mm for the left foot and 1.37 mm for the right foot using the first method, and within a mean absolute error of 1.02 mm for both feet using the second method. The second method shows somewhat improved accuracy even though it requires two images. Both methods are considerably cheaper than using a scanner to determine the 3D foot shape for custom footwear design.

  9. LAI inversion from optical reflectance using a neural network trained with a multiple scattering model

    NASA Technical Reports Server (NTRS)

    Smith, James A.

    1992-01-01

    The inversion of the leaf area index (LAI) canopy parameter from optical spectral reflectance measurements is obtained using a backpropagation artificial neural network trained using input-output pairs generated by a multiple scattering reflectance model. The problem of LAI estimation over sparse canopies (LAI < 1.0) with varying soil reflectance backgrounds is particularly difficult. Standard multiple regression methods applied to canopies within a single homogeneous soil type yield good results but perform unacceptably when applied across soil boundaries, resulting in absolute percentage errors of >1000 percent for low LAI. Minimization methods applied to merit functions constructed from differences between measured reflectances and predicted reflectances using multiple-scattering models are unacceptably sensitive to a good initial guess for the desired parameter. In contrast, the neural network reported generally yields absolute percentage errors of <30 percent when weighting coefficients trained on one soil type were applied to predicted canopy reflectance at a different soil background.

  10. Knowing what to expect, forecasting monthly emergency department visits: A time-series analysis.

    PubMed

    Bergs, Jochen; Heerinckx, Philipe; Verelst, Sandra

    2014-04-01

    To evaluate an automatic forecasting algorithm in order to predict the number of monthly emergency department (ED) visits one year ahead. We collected retrospective data of the number of monthly visiting patients for a 6-year period (2005-2011) from 4 Belgian Hospitals. We used an automated exponential smoothing approach to predict monthly visits during the year 2011 based on the first 5 years of the dataset. Several in- and post-sample forecasting accuracy measures were calculated. The automatic forecasting algorithm was able to predict monthly visits with a mean absolute percentage error ranging from 2.64% to 4.8%, indicating an accurate prediction. The mean absolute scaled error ranged from 0.53 to 0.68 indicating that, on average, the forecast was better compared with in-sample one-step forecast from the naïve method. The applied automated exponential smoothing approach provided useful predictions of the number of monthly visits a year in advance.
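    The two accuracy measures reported here are defined as follows: MAPE averages the relative errors, while the mean absolute scaled error (MASE) divides the out-of-sample errors by the in-sample mean absolute error of the one-step naïve (last-value) forecast, so values below 1 beat the naïve benchmark. A sketch with hypothetical visit counts (not the study's data):

```python
import numpy as np

# MAPE and MASE as used to assess the forecasts.
def mape(actual, forecast):
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.mean(np.abs((actual - forecast) / actual)) * 100)

def mase(train, actual, forecast):
    # Scale by the in-sample one-step naive forecast error |y_t - y_{t-1}|.
    train = np.asarray(train, float)
    naive_mae = np.mean(np.abs(np.diff(train)))
    out_errors = np.abs(np.asarray(actual, float) - np.asarray(forecast, float))
    return float(np.mean(out_errors) / naive_mae)

train = [120, 130, 125, 140, 150, 145]   # hypothetical monthly ED visits
actual = [155, 160]
forecast = [150, 158]
print("MAPE (%):", mape(actual, forecast))
print("MASE:", mase(train, actual, forecast))
```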

  11. A rack-mounted precision waveguide-below-cutoff attenuator with an absolute electronic readout

    NASA Technical Reports Server (NTRS)

    Cook, C. C.

    1974-01-01

    A coaxial precision waveguide-below-cutoff attenuator is described which uses an absolute (unambiguous) electronic digital readout of displacement in inches in addition to the usual gear driven mechanical counter-dial readout in decibels. The attenuator is rack-mountable and has the input and output RF connectors in a fixed position. The attenuation rate for 55, 50, and 30 MHz operation is given along with a discussion of sources of errors. In addition, information is included to aid the user in making adjustments on the attenuator should it be damaged or disassembled for any reason.

  12. Landsat-7 ETM+ radiometric stability and absolute calibration

    USGS Publications Warehouse

    Markham, B.L.; Barker, J.L.; Barsi, J.A.; Kaita, E.; Thome, K.J.; Helder, D.L.; Palluconi, Frank Don; Schott, J.R.; Scaramuzza, Pat; ,

    2002-01-01

    Launched in April 1999, the Landsat-7 ETM+ instrument is in its fourth year of operation. The quality of the acquired calibrated imagery continues to be high, especially with respect to its three most important radiometric performance parameters: reflective band instrument stability to better than ±1%, reflective band absolute calibration to better than ±5%, and thermal band absolute calibration to better than ±0.6 K. The ETM+ instrument has been the most stable of any of the Landsat instruments, in both the reflective and thermal channels. To date, the best on-board calibration source for the reflective bands has been the Full Aperture Solar Calibrator, which has indicated changes of at most -1.8% to -2.0% (95% C.I.) per year in the ETM+ gain (band 4). However, this change is believed to be caused by changes in the solar diffuser panel rather than in the instrument's gain. This belief is based partially on ground observations, which bound the changes in gain in band 4 at -0.7% to +1.5%. ETM+ stability is also indicated by the monitoring of desert targets. These image-based results for four Saharan and Arabian sites, from a collection of 35 scenes over the three years since launch, bound the gain change at -0.7% to +0.5% in band 4. Thermal calibration from ground observations revealed an offset error of +0.31 W/(m^2·sr·µm) soon after launch. This offset was corrected within the U.S. ground processing system at the EROS Data Center on 21-Dec-00, and since then the band 6 on-board calibration has indicated changes of at most +0.02% to +0.04% (95% C.I.) per year. The latest ground observations have detected no remaining offset error, with an RMS error of ±0.6 K. The stability and absolute calibration of the Landsat-7 ETM+ sensor make it an ideal candidate as a reference source for radiometric cross-calibration of other land remote sensing satellite systems.

  13. Interfraction Liver Shape Variability and Impact on GTV Position During Liver Stereotactic Radiotherapy Using Abdominal Compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eccles, Cynthia L., E-mail: cynthia.eccles@rob.ox.ac.uk; Dawson, Laura A.; Moseley, Joanne L.

    2011-07-01

    Purpose: For patients receiving liver stereotactic body radiotherapy (SBRT), abdominal compression can reduce organ motion, and daily image guidance can reduce setup error. The reproducibility of liver shape under compression may impact treatment delivery accuracy. The purpose of this study was to measure the interfractional variability in liver shape under compression after best-fit rigid liver-to-liver registration from kilovoltage (kV) cone beam computed tomography (CBCT) scans to planning computed tomography (CT) scans, and its impact on gross tumor volume (GTV) position. Methods and Materials: Evaluable patients were treated in a Research Ethics Board-approved SBRT six-fraction study with abdominal compression. Kilovoltage CBCT scans were acquired before treatment and reconstructed as respiratory-sorted CBCT scans offline. Manual rigid liver-to-liver registrations were performed from exhale-phase CBCT scans to exhale planning CT scans. Each CBCT liver was contoured, exported, and compared with the planning CT scan for spatial differences using the in-house-developed finite-element model-based deformable registration (MORFEUS). Results: We evaluated 83 CBCT scans from 16 patients with 30 GTVs. The mean volume of liver that deformed by greater than 3 mm was 21.7%. Excluding one outlier, the maximum volume that deformed by greater than 3 mm was 36.3%, in a single patient. Over all patients, the absolute maximum deformations in the left-right (LR), anterior-posterior (AP), and superior-inferior (SI) directions were 10.5 mm (SD, 2.2), 12.9 mm (SD, 3.6), and 5.6 mm (SD, 2.7), respectively. The absolute mean predicted impact of liver volume displacements on GTV position, by center-of-mass displacement, was 0.09 mm (SD, 0.13), 0.13 mm (SD, 0.18), and 0.08 mm (SD, 0.07) in the LR, AP, and SI directions, respectively. Conclusions: Interfraction liver deformations in patients undergoing SBRT under abdominal compression after rigid liver-to-liver registrations on respiratory-sorted CBCT scans were small in most patients (<5 mm).

  14. [Fire behavior of Mongolian oak leaves fuel bed under no-wind and zero-slope conditions. II. Analysis of the factors affecting flame length and residence time and related prediction models].

    PubMed

    Zhang, Ji-Li; Liu, Bo-Fei; Di, Xue-Ying; Chu, Teng-Fei; Jin, Sen

    2012-11-01

    Taking fuel moisture content, fuel loading, and fuel bed depth as controlling factors, field fuel beds of Mongolian oak leaves from the Maoershan region of Northeast China were simulated, and a total of one hundred experimental burns were conducted in the laboratory under no-wind and zero-slope conditions. The effects of fuel moisture content, fuel loading, and fuel bed depth on flame length and flame residence time were analyzed, and multivariate linear prediction models were constructed. The results indicated that fuel moisture content had a significant negative linear correlation with flame length but little correlation with flame residence time. Both fuel loading and fuel bed depth were significantly positively correlated with flame length and its residence time. The interactions of fuel bed depth with fuel moisture content and fuel loading had significant effects on flame length, while the interactions of fuel moisture content with fuel loading and fuel bed depth significantly affected flame residence time. The prediction model for flame length performed well, explaining 83.3% of the variance with a mean absolute error of 7.8 cm and a mean relative error of 16.2%; the prediction model for flame residence time was weaker, explaining only 54% of the variance with a mean absolute error of 9.2 s and a mean relative error of 18.6%.

  15. Modified multiple time scale method for solving strongly nonlinear damped forced vibration systems

    NASA Astrophysics Data System (ADS)

    Razzak, M. A.; Alam, M. Z.; Sharif, M. N.

    2018-03-01

    In this paper, a modified multiple time scale (MTS) method is employed to solve strongly nonlinear forced vibration systems. Only the first-order approximation is considered, in order to avoid complexity. The formulation and solution procedure are easy and straightforward. The classical multiple time scale (MS) method and the multiple scales Lindstedt-Poincare (MSLP) method do not give the desired results for strongly nonlinear forced vibration systems with strong damping; the main aim of this paper is to remove these limitations. Two examples are considered to illustrate the effectiveness and convenience of the present procedure. The approximate external frequencies and the corresponding approximate solutions are determined by the present method. The results agree well with the corresponding numerical solutions (considered exact) and improve on other existing results. For weak nonlinearities with weak damping, the absolute relative error of the first-order approximate external frequency in this paper is only 0.07% at amplitude A = 1.5, whereas the relative error of the MSLP method is a surprising 28.81%. Furthermore, for strong nonlinearities with strong damping, the absolute relative error found in this article is only 0.02%, whereas that of the MSLP method is 24.18%. Therefore, the present method is not only valid for weakly nonlinear damped forced systems, but also gives better results for strongly nonlinear systems with both small and strong damping.

  16. Potential errors in optical density measurements due to scanning side in EBT and EBT2 Gafchromic film dosimetry.

    PubMed

    Desroches, Joannie; Bouchard, Hugo; Lacroix, Frédéric

    2010-04-01

    The purpose of this study is to determine the effect on measured optical density of scanning on either side of Gafchromic EBT and EBT2 film using an Epson (Epson Canada Ltd., Toronto, Ontario) 10000XL flatbed scanner. Calibration curves were constructed using EBT2 film scanned in landscape orientation in both reflection and transmission mode on an Epson 10000XL scanner; calibration curves were also constructed using EBT film. Potential errors due to an optical density difference from scanning the film on either side ("face up" or "face down") were simulated. Scanning the film face up or face down on the scanner bed, while keeping the film angular orientation constant, affects the measured optical density when scanning in reflection mode; in contrast, no statistically significant effect was seen when scanning in transmission mode. This effect can significantly affect relative and absolute dose measurements. As an application example, the authors demonstrate that inverting the film scanning side can produce errors of 17.8% in the gamma index (3%/3 mm criteria) for a head-and-neck intensity-modulated radiotherapy plan, and errors in absolute dose measurements ranging from 10% to 35% between 2 and 5 Gy. Process consistency is the key to obtaining accurate and precise results in Gafchromic film dosimetry: when scanning in reflection mode, care must be taken to place the film consistently on the same side on the scanner bed.

  17. Relative and absolute test-retest reliabilities of pressure pain threshold in patients with knee osteoarthritis.

    PubMed

    Srimurugan Pratheep, Neeraja; Madeleine, Pascal; Arendt-Nielsen, Lars

    2018-04-25

    Pressure pain threshold (PPT) and PPT maps are commonly used to quantify and visualize mechanical pain sensitivity. Although PPTs have frequently been reported for patients with knee osteoarthritis (KOA), the absolute and relative reliability of PPT assessments remain to be determined. The purpose of this study was therefore to evaluate the test-retest relative and absolute reliability of PPT in KOA by measuring the intraclass correlation coefficient (ICC), the standard error of measurement (SEM), and the minimal detectable change (MDC) over eight anatomical locations covering the most painful knee of KOA patients. Twenty KOA patients participated in two sessions separated by 2 weeks ± 3 days. PPTs were assessed over eight anatomical locations covering the knee and two remote locations over the tibialis anterior and brachioradialis. The patients rated their maximum pain intensity during the past 24 h and prior to the recordings on a visual analog scale (VAS), and completed the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) and PainDetect surveys. The ICC, SEM, and MDC between the sessions were assessed, individual variability was expressed with the coefficient of variation (CV), and Bland-Altman plots were used to assess potential bias in the dataset. The ICC ranged from 0.85 to 0.96 for all anatomical locations, which is considered "almost perfect". CV was lowest in session 1, ranging from 44.2 to 57.6%. SEM ranged between 34 and 71 kPa, and MDC ranged between 93 and 197 kPa, with mean PPTs ranging from 273.5 to 367.7 kPa in session 1 and from 268.1 to 331.3 kPa in session 2. The Bland-Altman analysis showed no systematic bias. PPT maps showed that the patients had lower thresholds in session 2, but no significant difference was observed between the sessions for PPT or VAS. No correlations were seen between PainDetect and PPT or between PainDetect and WOMAC. Almost perfect relative and absolute reliability was found for the assessment of PPTs in KOA patients, indicating that PPT is reliable for assessing pain sensitivity and sensitization in KOA patients.
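    The reported SEM and MDC values are linked to the ICC and the between-subject standard deviation by the standard formulas SEM = SD·sqrt(1 − ICC) and MDC95 = 1.96·sqrt(2)·SEM. A sketch with hypothetical numbers (not taken from the study):

```python
import math

# SEM and MDC95 from reliability statistics; inputs here are illustrative.
def sem(sd, icc):
    """Standard error of measurement: SD * sqrt(1 - ICC)."""
    return sd * math.sqrt(1.0 - icc)

def mdc95(sem_value):
    """Minimal detectable change at 95% confidence: 1.96 * sqrt(2) * SEM."""
    return 1.96 * math.sqrt(2.0) * sem_value

s = sem(sd=160.0, icc=0.90)      # hypothetical PPT SD (kPa) and ICC
print("SEM (kPa):", round(s, 1))
print("MDC95 (kPa):", round(mdc95(s), 1))
```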

  18. Improving the Glucose Meter Error Grid With the Taguchi Loss Function.

    PubMed

    Krouwer, Jan S

    2016-07-01

    Glucose meters often have similar performance when compared by error grid analysis. This is one reason that other statistics, such as mean absolute relative deviation (MARD), are used to further differentiate performance. The problem with MARD is that too much information is lost. But additional information is available within the A zone of an error grid by using the Taguchi loss function, which assigns each glucose meter difference from reference a value ranging from 0 (no error) to 1 (error reaches the A-zone limit). Values are averaged over all data, which provides an indication of the risk of an incorrect medical decision. This allows one to differentiate glucose meter performance in the common case where meters have a high percentage of values in the A zone and no values beyond the B zone. Examples are provided using simulated data.
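    The idea can be sketched as a quadratic (Taguchi-style) loss normalized to the A-zone boundary. The 15%/15 mg/dL zone limits below are a common error-grid convention used here only as an assumption; the paper's exact zone definitions and loss shape may differ.

```python
# Hedged sketch: quadratic loss scaled so 0 = no error and 1 = error at
# the (assumed) A-zone limit, then averaged over all readings.
def taguchi_loss(reference, measured, limit_pct=0.15, limit_abs=15.0):
    # Assumed A-zone limit: absolute (mg/dL) at low glucose, relative above.
    limit = limit_abs if reference < 100 else limit_pct * reference
    ratio = abs(measured - reference) / limit
    return min(ratio ** 2, 1.0)   # clip at the A-zone boundary

# Hypothetical (reference, measured) pairs in mg/dL.
readings = [(100, 105), (200, 215), (80, 80)]
scores = [taguchi_loss(r, m) for r, m in readings]
print("mean loss:", sum(scores) / len(scores))
```

    Averaging the per-reading losses, rather than only counting A-zone membership, is what lets two meters with identical error-grid percentages be ranked differently.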

  19. Extreme values and the level-crossing problem: An application to the Feller process

    NASA Astrophysics Data System (ADS)

    Masoliver, Jaume

    2014-04-01

    We review the question of the extreme values attained by a random process. We relate it to level crossings to one boundary (first-passage problems) as well as to two boundaries (escape problems). The extremes studied are the maximum, the minimum, the maximum absolute value, and the range or span. We specialize in diffusion processes and present detailed results for the Wiener and Feller processes.

  20. Error correcting coding-theory for structured light illumination systems

    NASA Astrophysics Data System (ADS)

    Porras-Aguilar, Rosario; Falaggis, Konstantinos; Ramos-Garcia, Ruben

    2017-06-01

    Intensity-discrete structured light illumination systems project a series of patterns to estimate the absolute fringe order using only the temporal grey-level sequence at each pixel. This work proposes the use of error-correcting codes for pixel-wise correction of measurement errors. An error-correcting code is advantageous in several ways: it reduces the effect of random intensity noise, it corrects outliers near the border of a fringe that commonly appear with intensity-discrete patterns, and it provides robustness against severe measurement errors (even burst errors in which whole frames are lost). The latter aspect is particularly interesting in environments with varying ambient light as well as in safety-critical applications, e.g., monitoring deformations of components in nuclear power plants, where high reliability must be ensured even during short measurement disruptions. A special form of burst error is so-called salt-and-pepper noise, which can largely be removed with error-correcting codes using only the information of a given pixel. The performance of this technique is evaluated using both simulations and experiments.
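    The pixel-wise correction idea can be illustrated with a classic Hamming(7,4) code, which corrects any single flipped bit in a pixel's 7-frame binary sequence. This is only a toy stand-in; the paper's actual code construction and pattern design may differ.

```python
import numpy as np

# Hamming(7,4): G = [I4 | P], H = [P^T | I3]; all arithmetic mod 2.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(bits4):
    return np.dot(bits4, G) % 2

def correct(word7):
    syndrome = np.dot(H, word7) % 2
    if syndrome.any():
        # The syndrome equals the column of H at the error position.
        pos = int(np.argmax((H.T == syndrome).all(axis=1)))
        word7 = word7.copy()
        word7[pos] ^= 1
    return word7

msg = np.array([1, 0, 1, 1])      # 4 data bits for one pixel
sent = encode(msg)                # 7 projected binary frames
received = sent.copy()
received[2] ^= 1                  # simulate one corrupted frame
print("recovered:", (correct(received) == sent).all())
```

    With one code word per pixel, each pixel is corrected independently, which is exactly what makes salt-and-pepper-type errors removable without neighborhood filtering.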

  1. Time Series Forecasting of Daily Reference Evapotranspiration by Neural Network Ensemble Learning for Irrigation System

    NASA Astrophysics Data System (ADS)

    Manikumari, N.; Murugappan, A.; Vinodhini, G.

    2017-07-01

    Time series forecasting has gained remarkable interest from researchers in the last few decades, and neural-network-based time series forecasting has been employed in various application areas. Reference evapotranspiration (ETO) is one of the most important components of the hydrologic cycle, and its precise assessment is vital for water balance and crop yield estimation and for water resources system design and management. This work aimed at accurate time series forecasting of ETO using a combination of neural network approaches, with data collected in the command area of the VEERANAM Tank, India, during 2004-2014. Neural network (NN) models were combined by ensemble learning in order to improve the accuracy of daily ETO forecasts for the year 2015, employing Bagged Neural Network (Bagged-NN) and Boosted Neural Network (Boosted-NN) ensemble learning. The results show that the Bagged-NN and Boosted-NN ensemble models are more accurate than individual NN models, and that among the ensemble models Boosted-NN reduces forecasting errors compared with Bagged-NN. The regression coefficient, mean absolute deviation, mean absolute percentage error, and root mean square error also confirm that Boosted-NN leads to improved ETO forecasting performance.
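    The bagging half of the ensemble idea is simple to sketch: train learners on bootstrap resamples of the data and average their forecasts (boosting instead fits learners sequentially on reweighted errors). The sketch below uses a trivial linear learner on synthetic data rather than the paper's neural networks:

```python
import numpy as np

# Bagging sketch (NOT the paper's networks): average forecasts from weak
# learners fitted on bootstrap resamples of a synthetic trend series.
rng = np.random.default_rng(0)
x = np.arange(30, dtype=float)
y = 3.0 + 0.5 * x + rng.normal(0.0, 1.0, size=30)   # synthetic "ETO" trend

def bagged_forecast(x, y, x_new, n_models=25):
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, len(x), size=len(x))   # bootstrap resample
        coeffs = np.polyfit(x[idx], y[idx], deg=1)   # weak linear learner
        preds.append(np.polyval(coeffs, x_new))
    return float(np.mean(preds))                     # ensemble average

print("bagged next-step forecast:", bagged_forecast(x, y, 31.0))
```

    Averaging over resamples mainly reduces the variance of the individual learners, which is why the abstract finds the ensembles beating single NNs.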

  2. Drift Reduction in Pedestrian Navigation System by Exploiting Motion Constraints and Magnetic Field.

    PubMed

    Ilyas, Muhammad; Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok

    2016-09-09

    Pedestrian navigation systems (PNS) using foot-mounted MEMS inertial sensors use zero-velocity updates (ZUPTs) to reduce drift in navigation solutions and to estimate inertial sensor errors. However, it is well known that ZUPTs cannot reduce all errors; in particular, heading error is not observable, so position estimates tend to drift even when cyclic ZUPTs are applied in the update steps of the Extended Kalman Filter (EKF). This motivates the use of other motion constraints on pedestrian gait and of any other valuable heading information that is available. In this paper, we exploit two further motion-constraint scenarios of pedestrian gait: (1) walking along straight paths; (2) standing still for a long time. It is observed that these motion constraints (called "virtual sensors"), though they considerably reduce drift in a PNS, still need an absolute heading reference. One common absolute heading sensor is the magnetometer, which senses the Earth's magnetic field so that the true heading angle can be calculated. However, magnetometers are susceptible to magnetic distortions, especially in indoor environments. In this work, an algorithm called magnetic anomaly detection (MAD) and compensation is designed to incorporate only healthy magnetometer data in the EKF update step, reducing drift in the zero-velocity-updated INS. Experiments are conducted in GPS-denied and magnetically distorted environments to validate the proposed algorithms.

  3. Arima model and exponential smoothing method: A comparison

    NASA Astrophysics Data System (ADS)

    Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri

    2013-04-01

    This study compares the Autoregressive Integrated Moving Average (ARIMA) model and the Exponential Smoothing Method in making predictions. The comparison focuses on the ability of both methods to forecast with different numbers of data sources and different lengths of forecasting period. For this purpose, three different time series are used: the price of crude palm oil (RM/tonne), the exchange rate of the Ringgit Malaysia (RM) against the Great Britain Pound (GBP), and the price of SMR 20 rubber (cents/kg). Forecasting accuracy of each model is then measured by examining the prediction errors using Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and Mean Absolute Deviation (MAD). The study shows that the ARIMA model produces better predictions for long-term forecasting with limited data sources, but not for time series with a narrow range from one point to the next, as in the exchange rate series. Conversely, the Exponential Smoothing Method produces better forecasts for the exchange rate series, which has a narrow range from one point to the next, but not for longer forecasting periods.
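    The basic building block of the smoothing family compared here is single (simple) exponential smoothing, where the forecast is a recursively updated level. A minimal sketch with an arbitrary smoothing constant and hypothetical prices (not the study's data):

```python
# Simple exponential smoothing: level_t = alpha*y_t + (1-alpha)*level_{t-1};
# the final level serves as a flat h-step-ahead forecast. alpha is chosen
# arbitrarily here; in practice it is fitted to the series.
def ses_forecast(series, alpha=0.3):
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

prices = [2500.0, 2550.0, 2480.0, 2600.0, 2650.0]  # hypothetical RM/tonne
print("next-period forecast:", ses_forecast(prices))
```

    Because the forecast reacts only gradually to each new observation, this method tracks series with small point-to-point changes (like the exchange rate series) well, matching the study's finding.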

  4. Cervical joint position sense in rugby players versus non-rugby players.

    PubMed

    Pinsault, Nicolas; Anxionnaz, Marion; Vuillerme, Nicolas

    2010-05-01

    To determine whether cervical joint position sense is modified by intensive rugby practice. A group-comparison study. University Medical Bioengineering Laboratory. Twenty young elite rugby players (10 forwards and 10 backs) and 10 young elite non-rugby sports players. Participants were asked to perform the cervicocephalic relocation test (CRT) to the neutral head position (NHP), that is, to reposition their head on their trunk as accurately as possible after full active left and right cervical rotation. Rugby players were asked to perform the CRT to the NHP before and after a training session. Absolute and variable errors were used to assess the accuracy and consistency of the repositioning for the three groups of Forwards, Backs and Non-rugby players, respectively. The two groups of Forwards and Backs exhibited higher absolute and variable errors than the group of Non-rugby players. No difference was found between the two groups of Forwards and Backs, and no difference was found between before and after the training session. The cervical joint position sense of young elite rugby players is altered compared with that of non-rugby players. Furthermore, Forwards and Backs demonstrated comparable repositioning errors before and after a specific training session, suggesting that cervical proprioceptive alteration is mainly due to tackling rather than the scrum.

  5. Simulation and analysis of spectroscopic filter of rotational Raman lidar for absolute measurement of atmospheric temperature

    NASA Astrophysics Data System (ADS)

    Li, Qimeng; Li, Shichun; Hu, Xianglong; Zhao, Jing; Xin, Wenhui; Song, Yuehui; Hua, Dengxin

    2018-01-01

    The absolute measurement technique for atmospheric temperature can avoid the calibration process and improve measurement accuracy. To achieve a rotational Raman temperature lidar with absolute measurement, a two-stage parallel multi-channel spectroscopic filter combining a first-order blazed grating with a fiber Bragg grating is designed and its performance is tested. The parameters and the optical path structure of the core cascaded device (a micron-level fiber array) are optimized, and the optical path of the primary spectroscope is simulated; the maximum centrifugal distortion of the rotational Raman spectrum is approximately 0.0031 nm, with a centrifugal distortion ratio of 0.69%. The experimental results show that the channel coefficients of the primary spectroscope are 0.67, 0.91, 0.67, 0.75, 0.82, 0.63, 0.87, 0.97, 0.89, 0.87 and 1, using the twelfth channel as a reference, and the average FWHM is about 0.44 nm. The maximum deviation between the experimental wavelength and the theoretical value is approximately 0.0398 nm, a deviation of 8.86%. The effective suppression of the elastic scattering signal is 30.6, 35.2, 37.1, 38.4, 36.8, 38.2, 41.0, 44.3, 44.0 and 46.7 dB for the respective channels. Combined with the second spectroscope, the total suppression is thus at least 65 dB. Therefore, single rotational Raman lines can be finely extracted to realize the absolute measurement technique.

  6. Three-class ROC analysis--the equal error utility assumption and the optimality of three-class ROC surface using the ideal observer.

    PubMed

    He, Xin; Frey, Eric C

    2006-08-01

    Previously, we developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (the equal error utility assumption). This assumption reduces the dimensionality of "general" three-class ROC analysis and provides a practical figure of merit for evaluating three-class task performance. However, it also limits the generality of the resulting model, because the equal error utility assumption will not apply to all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum likelihood (ML), and Neyman-Pearson (N-P) criteria. We found that, under suitable assumptions for both the MEU and N-P criteria, all of these decision criteria lead to the previously proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and its ROC surface contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense, owing to the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.

  7. Computational fluid dynamics analysis and experimental study of a low measurement error temperature sensor used in climate observation.

    PubMed

    Yang, Jie; Liu, Qingquan; Dai, Wei

    2017-02-01

    To improve air temperature observation accuracy, a low-measurement-error temperature sensor is proposed. A computational fluid dynamics (CFD) method is used to obtain temperature errors under various environmental conditions. A temperature error correction equation is then obtained by fitting the CFD results using a genetic algorithm. The low-measurement-error temperature sensor, a naturally ventilated radiation shield, a thermometer screen, and an aspirated temperature measurement platform are characterized in the same environment for intercomparison, with the aspirated platform serving as the air temperature reference. The mean temperature errors of the naturally ventilated radiation shield and the thermometer screen are 0.74 °C and 0.37 °C, respectively. In contrast, the mean temperature error of the low-measurement-error temperature sensor is 0.11 °C. The mean absolute error and the root mean square error between the corrected and measured results are 0.008 °C and 0.01 °C, respectively. The correction equation reduces the temperature error of the low-measurement-error temperature sensor by approximately 93.8%. The sensor proposed in this research may help provide relatively accurate air temperature measurements.

  8. Erratum: Analogous intruder behavior near Ni, Sn, and Pb isotopes [Phys. Rev. C 92, 024319 (2015)]

    DOE PAGES

    Liddick, S. N.; Walters, W. B.; Chiara, C. J.; ...

    2016-12-30

    In the interpretation of neutron knockout reactions leading to 69Ni, a transcription error was discovered in the third column of Table I of the original paper, which reports the absolute γ-ray intensities of transitions in 69Ni.

  9. 75 FR 29972 - Certain Seamless Carbon and Alloy Steel Standard, Line, and Pressure Pipe from the People's...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-28

    ... errors, (1) would result in a change of at least five absolute percentage points in, but not less than 25... determination; or (2) would result in a difference between a weighted-average dumping margin of zero or de...

  10. Optimal Design of the Absolute Positioning Sensor for a High-Speed Maglev Train and Research on Its Fault Diagnosis

    PubMed Central

    Zhang, Dapeng; Long, Zhiqiang; Xue, Song; Zhang, Junge

    2012-01-01

    This paper studies an absolute positioning sensor for a high-speed maglev train and its fault diagnosis method. The absolute positioning sensor is essential for the high-speed maglev train to accomplish synchronous traction: it calibrates the error of the relative positioning sensor, which provides the magnetic phase signal. Based on an analysis of the sensor's operating principle, the paper describes the design of the sending and receiving coils and the realization of the sensor's hardware and software. To enhance the reliability of the sensor, a support vector machine is used to recognize fault signatures, and the signal flow method is used to locate faulty parts. The diagnostic information not only can be sent to an upper central control computer to evaluate the reliability of the sensors, but also enables on-line diagnosis for debugging and quick detection when the maglev train is off-line. The absolute positioning sensor studied here has been used in an actual project. PMID:23112619

  11. Forecasting Daily Patient Outflow From a Ward Having No Real-Time Clinical Data

    PubMed Central

    Tran, Truyen; Luo, Wei; Phung, Dinh; Venkatesh, Svetha

    2016-01-01

    Background: Modeling patient flow is crucial in understanding resource demand and prioritization. We study patient outflow from an open ward in an Australian hospital, where bed allocation is currently carried out by a manager relying on past experience and observed demand. Automatic methods that provide a reasonable estimate of total next-day discharges can aid efficient bed management. The challenges in building such methods lie in dealing with large amounts of discharge noise introduced by the nonlinear nature of hospital procedures, and in the nonavailability of real-time clinical information in wards. Objective: Our study investigates different models to forecast the total number of next-day discharges from an open ward having no real-time clinical data. Methods: We compared 5 popular regression algorithms for modeling total next-day discharges: (1) autoregressive integrated moving average (ARIMA), (2) autoregressive moving average with exogenous variables (ARMAX), (3) k-nearest neighbor regression, (4) random forest regression, and (5) support vector regression. While the ARIMA model relied on the past 3 months of discharges, nearest neighbor forecasting used the median of similar past discharges to estimate next-day discharge. In addition, the ARMAX model used the day of the week and the number of patients currently in the ward as exogenous variables. For the random forest and support vector regression models, we designed a predictor set of 20 patient features and 88 ward-level features. Results: Our data consisted of 12,141 patient visits over 1826 days. Forecasting quality was measured using mean forecast error, mean absolute error, symmetric mean absolute percentage error, and root mean square error. When compared with a moving average prediction model, all 5 models demonstrated superior performance, with random forests achieving a 22.7% improvement in mean absolute error over all days in the year 2014. Conclusions: In the absence of clinical information, our study recommends using patient-level and ward-level data in predicting next-day discharges. Random forest and support vector regression models are able to use all available features from such data, resulting in superior performance over traditional autoregressive methods. An intelligent estimate of available beds in wards plays a crucial role in relieving access block in emergency departments. PMID:27444059

  12. Effect of single vision soft contact lenses on peripheral refraction.

    PubMed

    Kang, Pauline; Fan, Yvonne; Oh, Kelly; Trac, Kevin; Zhang, Frank; Swarbrick, Helen

    2012-07-01

    To investigate changes in peripheral refraction with under-, full, and over-correction of central refraction with commercially available single vision soft contact lenses (SCLs) in young myopic adults. Thirty-four myopic adult subjects were fitted with Proclear Sphere SCLs to under-correct (+0.75 DS), fully correct, and over-correct (-0.75 DS) their manifest central refractive error. Central and peripheral refraction were measured with no lens wear and subsequently with the different levels of SCL central refractive error correction. The uncorrected refractive error was myopic at all locations along the horizontal meridian. Peripheral refraction was relatively hyperopic compared to center at 30 and 35° in the temporal visual field (VF) in low myopes, and at 30 and 35° in the temporal VF and 10, 30, and 35° in the nasal VF in moderate myopes. All levels of SCL correction caused a hyperopic shift in refraction at all locations in the horizontal VF. The smallest hyperopic shift occurred with under-correction, followed by full correction and then over-correction of central refractive error. An increase in relative peripheral hyperopia was measured with full-correction SCLs compared with no correction in both low and moderate myopes. However, no difference in relative peripheral refraction profiles was found between under-, full, and over-correction. Under-, full, and over-correction of central refractive error with single vision SCLs caused a hyperopic shift in both central and peripheral refraction at all positions in the horizontal meridian. All levels of SCL correction caused the peripheral retina, which initially experienced absolute myopic defocus at baseline with no correction, to experience absolute hyperopic defocus. This peripheral hyperopia may be a possible cause of the myopia progression reported with different types and levels of myopia correction.

  13. Delivery of tidal volume from four anaesthesia ventilators during volume-controlled ventilation: a bench study.

    PubMed

    Wallon, G; Bonnet, A; Guérin, C

    2013-06-01

    Tidal volume (V(T)) must be accurately delivered by anaesthesia ventilators in the volume-controlled ventilation mode in order for lung protective ventilation to be effective. However, the impact of fresh gas flow (FGF) and lung mechanics on delivery of V(T) by the newest anaesthesia ventilators has not been reported. We measured delivered V(T) (V(TI)) from four anaesthesia ventilators (Aisys™, Flow-i™, Primus™, and Zeus™) on a pneumatic test lung set with three combinations of lung compliance (C, ml cm H2O(-1)) and resistance (R, cm H2O litre(-1) s(-2)): C60R5, C30R5, C60R20. For each CR, three FGF rates (0.5, 3, 10 litre min(-1)) were investigated at three set V(T)s (300, 500, 800 ml) and two values of PEEP (0 and 10 cm H2O). The volume error = [(V(TI) - V(Tset))/V(Tset)] ×100 was computed in body temperature and pressure-saturated conditions and compared using analysis of variance. For each CR and each set V(T), the absolute value of the volume error significantly declined from Aisys™ to Flow-i™, Zeus™, and Primus™. For C60R5, these values were 12.5% for Aisys™, 5% for Flow-i™ and Zeus™, and 0% for Primus™. With an increase in FGF, absolute values of the volume error increased only for Aisys™ and Zeus™. However, in C30R5, the volume error was minimal at mid-FGF for Aisys™. The results were similar at PEEP 10 cm H2O. Under experimental conditions, the volume error differed significantly between the four new anaesthesia ventilators tested and was influenced by FGF, although this effect may not be clinically relevant.
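The volume error defined in the abstract is a simple relative deviation; as a sketch:

```python
def volume_error_percent(delivered_ml, set_ml):
    """Volume error = (V_TI - V_Tset) / V_Tset * 100, as defined in the study."""
    return (delivered_ml - set_ml) / set_ml * 100.0
```

For example, a set tidal volume of 500 ml delivered as 437.5 ml gives a volume error of -12.5%.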

  14. A Hybrid Model for Predicting the Prevalence of Schistosomiasis in Humans of Qianjiang City, China

    PubMed Central

    Wang, Ying; Lu, Zhouqin; Tian, Lihong; Tan, Li; Shi, Yun; Nie, Shaofa; Liu, Li

    2014-01-01

    Backgrounds/Objective: Schistosomiasis is still a major public health problem in China, despite the fact that the government has implemented a series of strategies to prevent and control the spread of the parasitic disease. Advanced warning and reliable forecasting can help policymakers adjust and implement strategies more effectively, leading to the control and elimination of schistosomiasis. Our aim is to explore the application of a hybrid forecasting model to track trends in the prevalence of schistosomiasis in humans, providing a methodological basis for predicting and detecting schistosomiasis infection in endemic areas. Methods: A hybrid approach combining the autoregressive integrated moving average (ARIMA) model and the nonlinear autoregressive neural network (NARNN) model was constructed to forecast the prevalence of schistosomiasis over the next four years. Forecasting performance was compared between the hybrid ARIMA-NARNN model and the single ARIMA or single NARNN model. Results: The modelling mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) of the ARIMA-NARNN model were 0.1869×10−4, 0.0029 and 0.0419, with corresponding testing errors of 0.9375×10−4, 0.0081 and 0.9064, respectively. These error values generated with the hybrid model were all lower than those obtained from the single ARIMA or NARNN model. The forecast values were 0.75%, 0.80%, 0.76% and 0.77% for the next four years, showing no downward trend. Conclusion: The hybrid model has high prediction accuracy for the prevalence of schistosomiasis, providing a methodological basis for future schistosomiasis monitoring and control strategies in the study area. It is worth attempting to apply the hybrid scheme in other schistosomiasis-endemic areas and to other infectious diseases. PMID:25119882

  15. MRI of bone marrow in the distal radius: in vivo precision of effective transverse relaxation times

    NASA Technical Reports Server (NTRS)

    Grampp, S.; Majumdar, S.; Jergas, M.; Lang, P.; Gies, A.; Genant, H. K.

    1995-01-01

    The effective transverse relaxation time T2* is influenced by the presence of trabecular bone, and can potentially provide a measure of bone density as well as bone structure. We determined the in vivo precision of T2* in repeated bone marrow measurements. The T2* measurements of the bone marrow of the distal radius were performed twice within 2 weeks in six healthy young volunteers using a modified water-presaturated 3D Gradient-Recalled Acquisition at Steady State (GRASS) sequence with TE 7, 10, 12, 20, and 30; TR 67; flip angle (FA) 90 degrees. An axial volume covering a length of 5.6 cm in the distal radius was measured. Regions of interest (ROIs) were determined manually and consisted of the entire trabecular bone cross-section extending proximally from the radial subchondral endplate. Reproducibility of T2* and area measurements was expressed as the absolute precision error (standard deviation [SD] in ms or mm2) or as the relative precision error (SD/mean x 100, or coefficient of variation [CV] in %) between the two-point measurements. Short-term precision of T2* and area measurements varied depending on section thickness and location of the ROI in the distal radius. Absolute precision errors for T2* times were between 1.3 and 2.9 ms (relative precision errors 3.8-9.5 %) and for area measurements between 20 and 55 mm2 (relative precision errors 5.1-16.4%). This MR technique for quantitative assessment of trabecular bone density showed reasonable reproducibility in vivo and is a promising future tool for the assessment of osteoporosis.

  16. Achievable accuracy of hip screw holding power estimation by insertion torque measurement.

    PubMed

    Erani, Paolo; Baleani, Massimiliano

    2018-02-01

    To ensure stability of proximal femoral fractures, the hip screw must firmly engage the femoral head. Some studies have suggested that the screw's holding power in trabecular bone could be evaluated intraoperatively through measurement of screw insertion torque. However, those studies used synthetic bone rather than trabecular bone as host material, or did not evaluate the accuracy of the predictions. We determined prediction accuracy, also assessing the impact of screw design and host material. Under highly repeatable experimental conditions, disregarding the complexities of the clinical procedure, we measured insertion torque and pullout strength for four screw designs in 120 synthetic and 80 trabecular bone specimens of variable density. For both host materials, we calculated the root-mean-square error and the mean-absolute-percentage error of predictions based on the best-fitting model of the torque-pullout data, for both single-screw and merged datasets. Predictions based on screw-specific regression models were the most accurate. Host material affects prediction accuracy: replacing synthetic with trabecular bone decreased both the root-mean-square errors, from 0.54-0.76 kN to 0.21-0.40 kN, and the mean-absolute-percentage errors, from 14-21% to 10-12%. However, holding power predicted from low insertion torques remained inaccurate, with errors up to 40% for torques below 1 Nm. In poor-quality trabecular bone, tissue inhomogeneities likely affect pullout strength and insertion torque to different extents, limiting the predictive power of the latter. This bias decreases when the screw engages good-quality bone. Under this condition, predictions become more accurate, although this result must be confirmed by close in-vitro simulation of the clinical procedure. Copyright © 2018 Elsevier Ltd. All rights reserved.
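The torque-to-pullout prediction described above amounts to regressing pullout strength on insertion torque and scoring the fit with RMSE and MAPE. A sketch assuming a first-order (linear) model on synthetic data; the study's actual best-fitting model form is not given in the abstract:

```python
import numpy as np

def fit_and_score(torque, pullout):
    """Fit an (assumed) linear torque->pullout model; report RMSE and MAPE."""
    torque = np.asarray(torque, dtype=float)
    pullout = np.asarray(pullout, dtype=float)
    coeffs = np.polyfit(torque, pullout, 1)   # degree-1 fit: slope, intercept
    pred = np.polyval(coeffs, torque)
    rmse = float(np.sqrt(np.mean((pullout - pred) ** 2)))
    mape = float(np.mean(np.abs((pullout - pred) / pullout)) * 100.0)
    return coeffs, rmse, mape
```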

  17. Simulation on measurement of five-DOF motion errors of high precision spindle with cylindrical capacitive sensor

    NASA Astrophysics Data System (ADS)

    Zhang, Min; Wang, Wen; Xiang, Kui; Lu, Keqing; Fan, Zongwei

    2015-02-01

    This paper describes a novel cylindrical capacitive sensor (CCS) for measuring the five degree-of-freedom (DOF) motion errors of a spindle. The operating principle and mathematical models of the CCS are presented. Using Ansoft Maxwell software to calculate the capacitances in different configurations, the structural parameters of the end-face electrode are investigated. Radial, axial and tilt motions are also simulated by comparing the given displacements with the simulated values. The proposed CCS shows high accuracy for measuring radial motion error when the average eccentricity is about 15 μm. In addition, the maximum relative error of the axial displacement is 1.3% when the axial motion is within [0.7, 1.3] mm, and the maximum relative error of the tilt displacement is 1.6% as the rotor tilts about a single axis within [-0.6, 0.6]°. Finally, the feasibility of the CCS for measuring five-DOF motion errors is verified through simulation and analysis.

  18. Resolution-enhancement and sampling error correction based on molecular absorption line in frequency scanning interferometry

    NASA Astrophysics Data System (ADS)

    Pan, Hao; Qu, Xinghua; Shi, Chunzhao; Zhang, Fumin; Li, Yating

    2018-06-01

    The non-uniform-interval resampling method has been widely used in frequency-modulated continuous-wave (FMCW) laser ranging. In large-bandwidth, long-distance measurements, the range peak deteriorates because of fiber dispersion mismatch. In this study, we analyze the frequency-sampling error caused by the mismatch and measure it using the spectroscopy of a molecular frequency reference line. By using adjacent-point replacement and a spline interpolation technique, the sampling errors can be eliminated. The results demonstrate that the proposed method is suitable for resolution enhancement and high-precision measurement. Moreover, using the proposed method, we achieved an absolute distance precision better than 45 μm over 8 m.

  19. Design implementation in model-reference adaptive systems. [application and implementation on space shuttle

    NASA Technical Reports Server (NTRS)

    Boland, J. S., III

    1973-01-01

    The derivation of an approximate error characteristic equation describing the transient system error response is given, along with a procedure for selecting adaptive gain parameters so as to relate to the transient error response. A detailed example of the application and implementation of these methods for a space shuttle type vehicle is included. An extension of the characteristic equation technique is used to provide an estimate of the magnitude of the maximum system error and an estimate of the time of occurrence of this maximum after a plant parameter disturbance. Techniques for relaxing certain stability requirements and the conditions under which this can be done and still guarantee asymptotic stability of the system error are discussed. Such conditions are possible because the Lyapunov methods used in the stability derivation allow for overconstraining a problem in the process of insuring stability.

  20. Reliability and Concurrent Validity of the Narrow Path Walking Test in Persons With Multiple Sclerosis.

    PubMed

    Rosenblum, Uri; Melzer, Itshak

    2017-01-01

    About 90% of people with multiple sclerosis (PwMS) have gait instability and 50% fall. Reliable and clinically feasible methods of gait instability assessment are needed. This study investigated the reliability and validity of the Narrow Path Walking Test (NPWT) under single-task (ST) and dual-task (DT) conditions for PwMS. Thirty PwMS performed the NPWT on 2 occasions, a week apart. Number of Steps, Trial Time, Trial Velocity, Step Length, Number of Step Errors, Number of Cognitive Task Errors, and Number of Balance Losses were measured. Intraclass correlation coefficients (ICC2,1) were calculated from the average values of the NPWT parameters. Absolute reliability was quantified by the standard error of measurement (SEM) and the smallest real difference (SRD). Concurrent validity of the NPWT with the Functional Reach Test, Four Square Step Test (FSST), 12-item Multiple Sclerosis Walking Scale (MSWS-12), and 2 Minute Walking Test (2MWT) was determined using partial correlations. Intraclass correlation coefficients (ICCs) for most NPWT parameters during ST and DT ranged from 0.46 to 0.94 and from 0.55 to 0.95, respectively. The highest relative reliability was found for Number of Step Errors (ICC = 0.94 and 0.93 for ST and DT, respectively) and Trial Velocity (ICC = 0.83 and 0.86 for ST and DT, respectively). Absolute reliability was high for Number of Step Errors in ST (SEM% = 19.53%) and DT (SEM% = 18.14%) and low for Trial Velocity in ST (SEM% = 6.88%) and DT (SEM% = 7.29%). Significant correlations of Number of Step Errors and Trial Velocity were found with the FSST, MSWS-12, and 2MWT. In PwMS performing the NPWT, Number of Step Errors and Trial Velocity were highly reliable parameters.
Based on correlations with other measures of gait instability, Number of Step Errors was the most valid parameter of dynamic balance under the conditions of our test. Video Abstract available for more insights from the authors (see Supplemental Digital Content 1, available at: http://links.lww.com/JNPT/A159).
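The SEM and SRD reported above are conventionally derived from the ICC and the between-subject SD. A sketch assuming the standard formulas (the study's exact variants are not stated in the abstract):

```python
import math

def sem_srd(sd, icc, mean):
    """Absolute-reliability indices derived from the ICC (standard forms):
    SEM = SD * sqrt(1 - ICC); SRD = 1.96 * sqrt(2) * SEM; SEM% = SEM/mean*100.
    """
    sem = sd * math.sqrt(1.0 - icc)
    srd = 1.96 * math.sqrt(2.0) * sem
    return sem, srd, sem / mean * 100.0
```

A lower SEM% indicates that a smaller fraction of the observed score is measurement noise, i.e., better absolute reliability.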

  1. A simultaneously calibration approach for installation and attitude errors of an INS/GPS/LDS target tracker.

    PubMed

    Cheng, Jianhua; Chen, Daidai; Sun, Xiangyu; Wang, Tongda

    2015-02-04

    Obtaining the absolute position of a target is one of the basic problems in non-cooperative target tracking. In this paper, we present a simultaneous calibration method for a target positioning approach based on an Inertial Navigation System (INS)/Global Positioning System (GPS)/Laser Distance Scanner (LDS) integrated system. The INS/GPS integrated system provides the attitude and position of the observer, and the LDS provides the distance between the observer and the target. The two most significant errors are jointly considered and analyzed: (1) the attitude measurement error of the INS/GPS; (2) the installation error between the INS/GPS and LDS subsystems. Accordingly, an INS/GPS/LDS-based target positioning approach accounting for these two errors is proposed. To improve the performance of this approach, a novel calibration method is designed to simultaneously estimate and compensate for these two main errors. Finally, simulations are conducted to assess the performance of the proposed target positioning approach and the designed calibration method.
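The basic geometry behind this kind of positioning is to project the measured range along the scanner's boresight and rotate it into the navigation frame. A generic sketch, not the paper's formulation; the boresight direction and lever arm here are assumed placeholders, and it is exactly these quantities (plus attitude) that the installation/attitude errors perturb:

```python
import numpy as np

def target_position(p_obs, R_nb, rng,
                    boresight_b=np.array([1.0, 0.0, 0.0]),
                    lever_arm_b=np.zeros(3)):
    """Absolute target position from observer state and an LDS range.

    p_obs: observer position in the navigation frame (from INS/GPS).
    R_nb: body-to-navigation rotation matrix (from INS/GPS attitude).
    rng: LDS range to the target.
    boresight_b, lever_arm_b: assumed LDS pointing direction and
    INS-to-LDS offset in the body frame (illustrative defaults).
    """
    return p_obs + R_nb @ (lever_arm_b + rng * boresight_b)
```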

  2. Measurement of cytoplasmic Ca2+ concentration in Saccharomyces cerevisiae induced by air cold plasma

    NASA Astrophysics Data System (ADS)

    Xiaoyu, DONG

    2018-03-01

    In this study, a novel approach to measuring the absolute cytoplasmic Ca2+ concentration ([Ca2+]cyt) using the Ca2+ indicator fluo-3 AM was established. The parameters associated with the fluo-3 AM probe were optimized to accurately determine the fluorescence intensity of the Ca2+-bound probe. Using three optimized parameters (a final probe concentration of 6 mM, an incubation time of 135 min, and loading the probe before plasma treatment), the maximum fluorescence intensity (F max = 527.8 a.u.) and the minimum fluorescence intensity (F min = 63.8 a.u.) were obtained in a saturated Ca2+ solution and a solution lacking Ca2+, respectively. Correspondingly, the maximum [Ca2+]cyt induced by cold plasma was 1232.5 nM. The Ca2+ indicator fluo-3 AM was thus successfully applied to measure the absolute [Ca2+]cyt in Saccharomyces cerevisiae stimulated by cold plasma at atmospheric air pressure.
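With Fmin and Fmax known, single-wavelength indicators such as fluo-3 are typically calibrated with the Grynkiewicz relation [Ca2+] = Kd·(F − Fmin)/(Fmax − F). A sketch using the reported calibration intensities; the dissociation constant Kd = 390 nM is an assumed literature value for fluo-3, not taken from the abstract:

```python
def cytoplasmic_ca(F, F_min=63.8, F_max=527.8, Kd=390.0):
    """Single-wavelength calibration: [Ca2+] = Kd * (F - Fmin) / (Fmax - F).

    F, F_min, F_max in arbitrary fluorescence units; Kd (assumed 390 nM
    for fluo-3) sets the concentration scale, so the result is in nM.
    """
    return Kd * (F - F_min) / (F_max - F)
```

The calibration is monotonic in F, so the maximum [Ca2+]cyt follows from the peak plasma-induced fluorescence.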

  3. In vivo dosimetry for external photon treatments of head and neck cancers by diodes and TLDS.

    PubMed

    Tung, C J; Wang, H C; Lo, S H; Wu, J M; Wang, C J

    2004-01-01

    In vivo dosimetry was implemented for treatments of head and neck cancers in large fields. Diode and thermoluminescence dosemeter (TLD) measurements were carried out on linear accelerators with 6 MV photon beams. ESTRO in vivo dosimetry protocols were followed in determining midline doses from measurements of entrance and exit doses. Of the fields monitored by diodes, the maximum absolute deviation of measured midline doses from planned target doses was 8%, with a mean value and standard deviation of -1.0% and 2.7%. If planned target doses were calculated using radiological water-equivalent thicknesses rather than patient geometric thicknesses, the maximum absolute deviation dropped to 4%, with a mean and standard deviation of 0.7% and 1.8%. For in vivo dosimetry monitored by TLDs, the shift in mean dose remained small but the statistical precision became poor.

  4. Analytical correlation of centrifugal compressor design geometry for maximum efficiency with specific speed

    NASA Technical Reports Server (NTRS)

    Galvas, M. R.

    1972-01-01

    Centrifugal compressor performance was examined analytically to determine optimum geometry for various applications as characterized by specific speed. Seven specific losses were calculated for various combinations of inlet tip-exit diameter ratio, inlet hub-tip diameter ratio, blade exit backsweep, and inlet-tip absolute tangential velocity for solid body prewhirl. The losses considered were inlet guide vane loss, blade loading loss, skin friction loss, recirculation loss, disk friction loss, vaneless diffuser loss, and vaned diffuser loss. Maximum total efficiencies ranged from 0.497 to 0.868 for a specific speed range of 0.257 to 1.346. Curves of rotor exit absolute flow angle, inlet tip-exit diameter ratio, inlet hub-tip diameter ratio, head coefficient and blade exit backsweep are presented over a range of specific speeds for various inducer tip speeds to permit rapid selection of optimum compressor size and shape for a variety of applications.

  5. Enhanced control of a flexure-jointed micromanipulation system using a vision-based servoing approach

    NASA Astrophysics Data System (ADS)

    Chuthai, T.; Cole, M. O. T.; Wongratanaphisan, T.; Puangmali, P.

    2018-01-01

    This paper describes a high-precision motion control implementation for a flexure-jointed micromanipulator. A desktop experimental motion platform has been created based on a 3RUU parallel kinematic mechanism, driven by rotary voice coil actuators. The three arms supporting the platform have rigid links with compact flexure joints as integrated parts and are made by single-process 3D printing. The mechanism's overall size is approximately 250 × 250 × 100 mm. The workspace is relatively large for a flexure-jointed mechanism, approximately 20 × 20 × 6 mm. A servo-control implementation based on pseudo-rigid-body models (PRBM) of kinematic behavior combined with nonlinear PID control has been developed. This is shown to achieve fast response with good noise rejection and platform stability. However, large errors in absolute positioning occur due to deficiencies in the PRBM kinematics, which cannot accurately capture flexure compliance behavior. To overcome this problem, visual servoing is employed, where a digital microscopy system is used to directly measure the platform position by image processing. By adopting nonlinear PID feedback of measured angles for the actuated joints as inner control loops, combined with auxiliary feedback of vision-based measurements, the absolute positioning error can be eliminated. With controller gain tuning, fast dynamic response and low residual vibration of the end platform can be achieved, with absolute positioning accuracy within ±1 micron.

  6. Globular Clusters: Absolute Proper Motions and Galactic Orbits

    NASA Astrophysics Data System (ADS)

    Chemel, A. A.; Glushkova, E. V.; Dambis, A. K.; Rastorguev, A. S.; Yalyalieva, L. N.; Klinichev, A. D.

    2018-04-01

We cross-match objects from several different astronomical catalogs to determine the absolute proper motions of stars within the 30-arcmin radius fields of 115 Milky-Way globular clusters with an accuracy of 1-2 mas yr-1. The proper motions are based on positional data recovered from the USNO-B1, 2MASS, URAT1, ALLWISE, UCAC5, and Gaia DR1 surveys with up to ten positions spanning an epoch difference of up to about 65 years, and reduced to the Gaia DR1 TGAS frame using UCAC5 as the reference catalog. Cluster members are photometrically identified by selecting horizontal- and red-giant-branch stars on color-magnitude diagrams, and the mean absolute proper motions of the clusters, with a typical formal error of about 0.4 mas yr-1, are computed by averaging the proper motions of selected members. The inferred absolute proper motions of clusters are combined with available radial-velocity data and heliocentric distance estimates to compute the cluster orbits in terms of Galactic potential models based on a Miyamoto and Nagai disk, a Hernquist spheroid, and a modified isothermal dark-matter halo (axisymmetric model without a bar), and the same model plus a rotating Ferrers bar (non-axisymmetric). Five distant clusters have higher-than-escape velocities, most likely due to large errors of the computed transversal velocities, whereas the computed orbits of all other clusters remain bound to the Galaxy. Unlike previously published results, we find that the bar substantially affects the orbits of most of the clusters, even those at large Galactocentric distances, bringing appreciable chaotization, especially in the portions of the orbits close to the Galactic center, and stretching out the orbits of some of the thick-disk clusters.
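The averaging step above, computing a cluster's mean proper motion and its formal error from selected members, can be sketched as an inverse-variance weighted mean. The member values and errors below are invented for illustration; the paper's actual weighting scheme is not specified in the abstract.

```python
import math

def weighted_mean(values, errors):
    """Inverse-variance weighted mean and its formal (standard) error."""
    weights = [1.0 / e ** 2 for e in errors]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    formal_error = math.sqrt(1.0 / wsum)
    return mean, formal_error

# Hypothetical member proper motions (mas/yr) with individual errors
# of order 1-2 mas/yr, as in the abstract.
pm = [-3.1, -2.8, -3.4, -3.0, -2.9]
err = [1.5, 1.2, 2.0, 1.4, 1.6]
mean, sigma = weighted_mean(pm, err)
```

Averaging a few members with 1-2 mas/yr individual errors naturally yields a formal error of a few tenths of a mas/yr, consistent with the 0.4 mas/yr typical value quoted above.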

  7. Monitoring surface currents and transport variability in Drake Passage using altimetry and hydrography

    NASA Astrophysics Data System (ADS)

    Pavic, M.; Cunningham, S. A.; Challenor, P.; Duncan, L.

    2003-04-01

Between 1993 and 2001 the UK has completed seven occupations of WOCE section SR1b from Burdwood Bank to Elephant Island across Drake Passage. The section consists of a minimum of 31 full-depth CTD stations, shipboard ADCP measurements of currents in the upper 300 m, and, in three of the years, full-depth lowered ADCP measurements at each station. The section lies under the satellite track of ERS-2. The satellite altimeter can determine the along-track slope of the sea surface relative to a reference satellite pass once every 35 days. From this we can calculate the relative SSH slope, or geostrophic surface current anomalies. If we measure in situ simultaneously with any satellite pass, we can estimate the absolute surface geostrophic current for any subsequent pass. That is, by combining in situ absolute velocity measurements (the reference velocities) with altimetry at one time, the absolute geostrophic current can be estimated on any subsequent (or previous) altimeter pass. This is the method of Challenor et al. (1996), though they did not have the data to test this relationship. We have seven estimates of the surface reference velocity: one for each of the seven occupations of the WOCE line. The difference in any pair of reference velocities is predicted by the difference of the corresponding altimeter measurements. Errors in combining the satellite and hydrographic data are estimated by comparing pairs of these differences: errors arise from the in situ observations and from the altimetric measurements. Finally, we produce our best estimates of eight years of absolute surface geostrophic currents and transport variability along WOCE section SR1b in Drake Passage.
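The referencing step described above amounts to one addition: the absolute current at any altimeter pass equals the in situ reference velocity plus the change in the altimetric geostrophic anomaly between the two passes. A minimal sketch, with invented numbers rather than Drake Passage data:

```python
def absolute_current(v_ref, anom_ref, anom_other):
    """Absolute surface geostrophic current at another altimeter pass:
    the in situ reference velocity plus the change in the
    altimeter-derived geostrophic velocity anomaly."""
    return v_ref + (anom_other - anom_ref)

v_insitu = 0.42        # m/s, absolute surface current at the reference pass
anomaly_ref = 0.05     # m/s, altimetric anomaly at the reference pass
anomaly_later = -0.03  # m/s, anomaly on a later 35-day repeat pass
v_abs = absolute_current(v_insitu, anomaly_ref, anomaly_later)
```

Comparing such estimates between pairs of occupations is what isolates the in situ and altimetric error contributions mentioned in the abstract.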

  8. Accuracy of the Heidelberg Spectralis in the alignment between near-infrared image and tomographic scan in a model eye: a multicenter study.

    PubMed

    Barteselli, Giulio; Bartsch, Dirk-Uwe; Viola, Francesco; Mojana, Francesca; Pellegrini, Marco; Hartmann, Kathrin I; Benatti, Eleonora; Leicht, Simon; Ratiglia, Roberto; Staurenghi, Giovanni; Weinreb, Robert N; Freeman, William R

    2013-09-01

    To evaluate temporal changes and predictors of accuracy in the alignment between simultaneous near-infrared image and optical coherence tomography (OCT) scan on the Heidelberg Spectralis using a model eye. Laboratory investigation. After calibrating the device, 6 sites performed weekly testing of the alignment for 12 weeks using a model eye. The maximum error was compared with multiple variables to evaluate predictors of inaccurate alignment. Variables included the number of weekly scanned patients, total number of OCT scans and B-scans performed, room temperature and its variation, and working time of the scanning laser. A 4-week extension study was subsequently performed to analyze short-term changes in the alignment. The average maximum error in the alignment was 15 ± 6 μm; the greatest error was 35 μm. The error increased significantly at week 1 (P = .01), specifically after the second imaging study (P < .05); reached a maximum after the eighth patient (P < .001); and then varied randomly over time. Predictors for inaccurate alignment were temperature variation and scans per patient (P < .001). For each 1 unit of increase in temperature variation, the estimated increase in maximum error was 1.26 μm. For the average number of scans per patient, each increase of 1 unit increased the error by 0.34 μm. Overall, the accuracy of the Heidelberg Spectralis was excellent. The greatest error happened in the first week after calibration, and specifically after the second imaging study. To improve the accuracy, room temperature should be kept stable and unnecessary scans should be avoided. The alignment of the device does not need to be checked on a regular basis in the clinical setting, but it should be checked after every other patient for more precise research purposes. Published by Elsevier Inc.
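The two regression coefficients reported above (1.26 μm of additional maximum error per unit of room-temperature variation, 0.34 μm per extra scan per patient) support a back-of-envelope error budget. The linear combination below is an illustrative reading of those coefficients, not a model published with the study.

```python
def extra_alignment_error_um(temp_variation, scans_per_patient):
    """Estimated increase in maximum alignment error (micrometres),
    using the abstract's per-unit regression coefficients."""
    return 1.26 * temp_variation + 0.34 * scans_per_patient

# e.g. a 2-degree room-temperature swing and 10 scans per patient
delta = extra_alignment_error_um(2.0, 10.0)
```

Even this rough budget shows why the abstract recommends stable room temperature and avoiding unnecessary scans: a few degrees of drift plus a long scan session adds several micrometres to a baseline error of only ~15 μm.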

  9. Technique for calibrating angular measurement devices when calibration standards are unavailable

    NASA Technical Reports Server (NTRS)

    Finley, Tom D.

    1991-01-01

A calibration technique is proposed that allows the calibration of certain angular measurement devices without requiring the use of an absolute standard. The technique assumes that the device to be calibrated has deterministic bias errors; a comparison device meeting the same requirement must be available. The two devices are compared; one device is then rotated with respect to the other, and a second comparison is performed. If the data are reduced using the described technique, the individual errors of the two devices can be determined.
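One way the comparison-and-rotation idea can work is sketched below for n discrete positions. Comparing the two devices gives only differences of their bias errors; rotating one device by one position and comparing again yields the first differences of its error curve, which can be summed up to an unknown constant. Fixing that constant by assuming each device's bias errors average to zero over a full revolution is an assumption made for this sketch, not a detail from the abstract.

```python
def separate_errors(d1, d2):
    """Recover individual bias errors from two comparisons.

    d1[i] = eA[i] - eB[i]            (devices aligned)
    d2[i] = eA[i] - eB[(i+1) % n]    (device B rotated by one position)
    Returns (eA, eB) under the zero-mean convention for eB."""
    n = len(d1)
    eB = [0.0] * n
    for i in range(n - 1):
        # d1[i] - d2[i] = eB[(i+1) % n] - eB[i], a first difference of eB.
        eB[i + 1] = eB[i] + (d1[i] - d2[i])
    offset = sum(eB) / n          # fix the free constant (assumption)
    eB = [b - offset for b in eB]
    eA = [d1[i] + eB[i] for i in range(n)]
    return eA, eB
```

Given error-free comparison data, the routine returns each device's deterministic bias curve exactly, up to the chosen zero-mean convention.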

  10. Alaska national hydrography dataset positional accuracy assessment study

    USGS Publications Warehouse

    Arundel, Samantha; Yamamoto, Kristina H.; Constance, Eric; Mantey, Kim; Vinyard-Houx, Jeremy

    2013-01-01

Initial visual assessments showed a wide range in the quality of fit between features in the NHD and these new image sources, but no statistical analysis has been performed to actually quantify accuracy. Determining absolute accuracy is cost prohibitive, since independent, well-defined test points must be collected; a quantitative analysis of relative positional error, however, is feasible.

  11. 40 CFR 98.123 - Calculating GHG emissions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... mass balance approach to estimate your fluorinated GHG emissions from a process, you must ensure that... relative errors associated with using the mass balance approach on that process using Equations L-1 through... mass-balance approach to estimate emissions from the process if this calculation results in an absolute...

  12. 40 CFR 98.123 - Calculating GHG emissions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... mass balance approach to estimate your fluorinated GHG emissions from a process, you must ensure that... relative errors associated with using the mass balance approach on that process using Equations L-1 through... mass-balance approach to estimate emissions from the process if this calculation results in an absolute...

  13. 40 CFR 98.123 - Calculating GHG emissions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... mass balance approach to estimate your fluorinated GHG emissions from a process, you must ensure that... relative errors associated with using the mass balance approach on that process using Equations L-1 through... mass-balance approach to estimate emissions from the process if this calculation results in an absolute...

  14. 40 CFR 98.123 - Calculating GHG emissions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... mass balance approach to estimate your fluorinated GHG emissions from a process, you must ensure that... relative errors associated with using the mass balance approach on that process using Equations L-1 through... mass-balance approach to estimate emissions from the process if this calculation results in an absolute...

  15. SU-E-T-68: A Quality Assurance System with a Web Camera for High Dose Rate Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ueda, Y; Hirose, A; Oohira, S

Purpose: The purpose of this work was to develop a quality assurance (QA) system for high dose rate (HDR) brachytherapy to verify the absolute position of an 192Ir source in real time and to measure the dwell time and position of the source simultaneously from a movie recorded by a web camera. Methods: A web camera was fixed 15 cm above a source position check ruler to monitor and record 30 samples of the source position per second over an 8.0 cm range, from 1425 mm to 1505 mm. Each frame of the movie had a matrix size of 480×640. The source position was automatically quantified from the movie using in-house software (built with LabVIEW) that applied a template-matching technique. The source edge detected by the software on each frame was corrected to reduce position errors induced by incident light from an oblique direction. The dwell time was calculated by differential processing of the source displacement. The performance of this QA system was illustrated by recording simple plans and comparing the measured dwell positions and times with the planned parameters. Results: This QA system allowed verification of the absolute position of the source in real time. The mean difference between automatic and manual detection of the source edge was 0.04 ± 0.04 mm. The absolute position error can be determined within an accuracy of 1.0 mm at dwell points of 1430, 1440, 1450, 1460, 1470, 1480, 1490, and 1500 mm for three step sizes, and dwell time errors can be determined with an accuracy of 0.1% for planned times longer than 10.0 sec. The mean step size error was 0.1 ± 0.1 mm for a step size of 10.0 mm. Conclusion: This QA system provides quick verification of the dwell position and time, with high accuracy, for HDR brachytherapy. This work was supported by the Japan Society for the Promotion of Science Core-to-Core program (No. 23003).
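The "differential processing" step described above can be sketched as follows: with positions sampled at 30 frames/s, consecutive frames whose position change stays below a threshold are grouped into a dwell, and each dwell's mean position and duration are reported. The threshold, the minimum dwell length, and the trace below are all invented for illustration.

```python
FPS = 30.0  # camera frame rate from the abstract (30 samples/s)

def detect_dwells(positions_mm, move_thresh_mm=0.5):
    """Group low-motion frames into dwells.

    Returns a list of (mean_position_mm, dwell_time_s) tuples."""
    dwells = []
    start = 0
    for i in range(1, len(positions_mm) + 1):
        moved = (i == len(positions_mm) or
                 abs(positions_mm[i] - positions_mm[i - 1]) > move_thresh_mm)
        if moved:
            if i - start >= 2:  # at least two resting frames count as a dwell
                seg = positions_mm[start:i]
                dwells.append((sum(seg) / len(seg), len(seg) / FPS))
            start = i
    return dwells

# Hypothetical trace: dwell at 1430 mm for 1 s, a 10 mm step, then
# another 1 s dwell at 1440 mm.
trace = [1430.0] * 30 + [1435.0] + [1440.0] * 30
dwells = detect_dwells(trace)
```

Comparing the detected dwell positions and durations against the planned parameters is exactly the verification the QA system performs.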

  16. A prediction model of short-term ionospheric foF2 Based on AdaBoost

    NASA Astrophysics Data System (ADS)

    Zhao, Xiukuan; Liu, Libo; Ning, Baiqi

Accurate specifications of spatial and temporal variations of the ionosphere during geomagnetically quiet and disturbed conditions are critical for applications such as HF communications, satellite positioning and navigation, power grids, pipelines, etc. Therefore, developing empirical models to forecast ionospheric perturbations is a high priority in real applications. The critical frequency of the F2 layer, foF2, is an important ionospheric parameter, especially for radio wave propagation applications. In this paper, the AdaBoost-BP algorithm is used to construct a new model to predict the critical frequency of the ionospheric F2 layer one hour ahead. Different indices were used to characterize ionospheric diurnal and seasonal variations and their dependence on solar and geomagnetic activity. These indices, together with the currently observed foF2 value, were input into the prediction model, and the foF2 value one hour ahead was output. We analyzed twenty-two years of foF2 data from nine ionosonde stations in the East-Asian sector in this work. The first eleven years of data were used as the training dataset and the second eleven years as the testing dataset. The results show that the performance of AdaBoost-BP is better than those of the BP Neural Network (BPNN), Support Vector Regression (SVR) and the IRI model. For example, the AdaBoost-BP prediction absolute error of foF2 at Irkutsk station (a middle-latitude station) is 0.32 MHz, which is better than 0.34 MHz from BPNN and 0.35 MHz from SVR, and significantly outperforms the IRI model, whose absolute error is 0.64 MHz. Meanwhile, the AdaBoost-BP prediction absolute error at the low-latitude Taipei station is 0.78 MHz, which is better than 0.81 MHz from BPNN, 0.81 MHz from SVR and 1.37 MHz from the IRI model. Finally, the variation of the AdaBoost-BP prediction error with season, solar activity and latitude is also discussed in the paper.
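The model comparisons above all rest on mean absolute error. A minimal sketch of that metric, applied to two toy one-hour-ahead foF2 predictors (a persistence forecast and a constant climatology), is shown below; the hourly values are invented, and neither toy predictor is the paper's AdaBoost-BP model.

```python
def mae(predicted, observed):
    """Mean absolute error in the same units as the data (here MHz)."""
    return sum(abs(p - o) for p, o in zip(predicted, observed)) / len(observed)

fof2 = [5.2, 5.6, 6.1, 6.4, 6.2, 5.8, 5.3]   # MHz, hourly "observations"
persistence = fof2[:-1]                       # forecast = current value
observed = fof2[1:]                           # value one hour ahead
climatology = [5.8] * len(observed)           # forecast = a long-term mean

mae_persistence = mae(persistence, observed)
mae_climatology = mae(climatology, observed)
```

Reporting such MAE values per station and per model is how the abstract ranks AdaBoost-BP against BPNN, SVR and IRI.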

  17. E2 and SN2 Reactions of X(-) + CH3CH2X (X = F, Cl); an ab Initio and DFT Benchmark Study.

    PubMed

    Bento, A Patrícia; Solà, Miquel; Bickelhaupt, F Matthias

    2008-06-01

    We have computed consistent benchmark potential energy surfaces (PESs) for the anti-E2, syn-E2, and SN2 pathways of X(-) + CH3CH2X with X = F and Cl. This benchmark has been used to evaluate the performance of 31 popular density functionals, covering local-density approximation, generalized gradient approximation (GGA), meta-GGA, and hybrid density-functional theory (DFT). The ab initio benchmark has been obtained by exploring the PESs using a hierarchical series of ab initio methods [up to CCSD(T)] in combination with a hierarchical series of Gaussian-type basis sets (up to aug-cc-pVQZ). Our best CCSD(T) estimates show that the overall barriers for the various pathways increase in the order anti-E2 (X = F) < SN2 (X = F) < SN2 (X = Cl) ∼ syn-E2 (X = F) < anti-E2 (X = Cl) < syn-E2 (X = Cl). Thus, anti-E2 dominates for F(-) + CH3CH2F, and SN2 dominates for Cl(-) + CH3CH2Cl, while syn-E2 is in all cases the least favorable pathway. Best overall agreement with our ab initio benchmark is obtained by representatives from each of the three categories of functionals, GGA, meta-GGA, and hybrid DFT, with mean absolute errors in, for example, central barriers of 4.3 (OPBE), 2.2 (M06-L), and 2.0 kcal/mol (M06), respectively. Importantly, the hybrid functional BHandH and the meta-GGA M06-L yield incorrect trends and qualitative features of the PESs (in particular, an erroneous preference for SN2 over the anti-E2 in the case of F(-) + CH3CH2F) even though they are among the best functionals as measured by their small mean absolute errors of 3.3 and 2.2 kcal/mol in reaction barriers. OLYP and B3LYP have somewhat higher mean absolute errors in central barriers (5.6 and 4.8 kcal/mol, respectively), but the error distribution is somewhat more uniform, and as a consequence, the correct trends are reproduced.
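The functional evaluation above reduces to computing each functional's mean absolute error against the ab initio reference barriers and ranking the results. The sketch below shows that bookkeeping with invented placeholder barriers, not the paper's data; only the relative ordering is arranged to echo the abstract's MAE ranking (M06 < M06-L < OPBE).

```python
# Hypothetical reference barriers (kcal/mol) for three pathways.
reference = {"anti-E2 F": 5.0, "SN2 F": 8.0, "SN2 Cl": 12.0}

# Hypothetical DFT barriers for three of the functionals named above.
functional_barriers = {
    "OPBE":  {"anti-E2 F": 9.0, "SN2 F": 12.5, "SN2 Cl": 16.5},
    "M06-L": {"anti-E2 F": 7.0, "SN2 F": 10.5, "SN2 Cl": 13.8},
    "M06":   {"anti-E2 F": 6.8, "SN2 F": 10.2, "SN2 Cl": 13.9},
}

def mean_abs_error(barriers, ref):
    return sum(abs(barriers[k] - ref[k]) for k in ref) / len(ref)

# Rank functionals by MAE against the benchmark, best first.
ranking = sorted(functional_barriers,
                 key=lambda f: mean_abs_error(functional_barriers[f], reference))
```

As the abstract cautions, a small MAE alone does not guarantee correct qualitative trends, so a ranking like this should be read alongside checks of the predicted pathway ordering.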

  18. The existence of negative absolute temperatures in Axelrod’s social influence model

    NASA Astrophysics Data System (ADS)

    Villegas-Febres, J. C.; Olivares-Rivas, W.

    2008-06-01

We introduce the concept of temperature as an order parameter in the standard Axelrod social influence model. It is defined as the relation between suitably defined entropy and energy functions, T = (∂E/∂S). We show that at the critical point, where the order/disorder transition occurs, this absolute temperature changes sign. At this point, which corresponds to the homogeneous/heterogeneous culture transition, the entropy of the system shows a maximum. We discuss the relationship between the temperature and other properties of the model in terms of cultural traits.
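The sign change at an entropy maximum can be seen numerically. Assuming the temperature is the derivative of energy with respect to entropy, T = dE/dS (a reading of the abstract's definition, and the standard thermodynamic relation), a finite-difference estimate on a toy entropy curve with an interior maximum gives T > 0 on one side of the maximum and T < 0 on the other. The quadratic entropy below is invented, not Axelrod-model data.

```python
def temperature(entropy, energies, i):
    """Finite-difference T = dE/dS at interior grid index i."""
    dS = entropy[i + 1] - entropy[i - 1]
    dE = energies[i + 1] - energies[i - 1]
    return dE / dS

E = [x * 0.1 for x in range(21)]            # energy grid 0.0 .. 2.0
S = [-(e - 1.0) ** 2 + 1.0 for e in E]      # toy entropy, maximal at E = 1.0

t_low = temperature(S, E, 5)    # below the entropy maximum: T > 0
t_high = temperature(S, E, 15)  # above the entropy maximum: T < 0
```

Near the maximum itself dS vanishes, so T diverges and flips sign, which is the behavior the abstract associates with the critical point.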

  19. Reliable estimation of orbit errors in spaceborne SAR interferometry. The network approach

    NASA Astrophysics Data System (ADS)

    Bähr, Hermann; Hanssen, Ramon F.

    2012-12-01

An approach to improve orbital state vectors by orbit error estimates derived from residual phase patterns in synthetic aperture radar interferograms is presented. For individual interferograms, an error representation by two parameters is motivated: the baseline error in cross-range and the rate of change of the baseline error in range. For their estimation, two alternatives are proposed: a least-squares approach that requires prior unwrapping, and a less reliable grid-search method handling the wrapped phase. In both cases, reliability is enhanced by mutual control of error estimates in an overdetermined network of linearly dependent interferometric combinations of images. Thus, systematic biases, e.g., due to unwrapping errors, can be detected and iteratively eliminated. Regularising the solution by a minimum-norm condition results in quasi-absolute orbit errors that refer to particular images. For the 31 images of a sample ENVISAT dataset, orbit corrections with a mutual consistency on the millimetre level have been inferred from 163 interferograms. The method is distinguished by its reliability and rigorous geometric modelling of the orbital error signal, but it does not account for interfering large-scale deformation effects. However, a separation may be feasible in a combined processing with persistent scatterer approaches or by temporal filtering of the estimates.
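The least-squares alternative above can be sketched as fitting a low-order model to the unwrapped residual phase. In this simplified stand-in, the two orbit-error parameters are represented by the coefficients of a planar phase ramp in azimuth and range (plus a constant offset), which is a common first-order approximation and an assumption of this sketch, not the paper's rigorous geometric model.

```python
def fit_phase_ramp(samples):
    """samples: iterable of (azimuth, range, phase) triples.
    Solves phase ~ a*azimuth + b*range + c by normal equations."""
    ata = [[0.0] * 3 for _ in range(3)]
    aty = [0.0] * 3
    for az, rg, ph in samples:
        row = (az, rg, 1.0)
        for i in range(3):
            for j in range(3):
                ata[i][j] += row[i] * row[j]
            aty[i] += row[i] * ph
    # Gaussian elimination with partial pivoting (fine for a 3x3 system).
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, 3):
            f = ata[r][col] / ata[col][col]
            for c in range(col, 3):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (aty[r] - sum(ata[r][c] * x[c] for c in range(r + 1, 3))) / ata[r][r]
    return x  # [a, b, c]

# Synthetic unwrapped residual phase over a small azimuth/range grid.
samples = [(az, rg, 0.3 * az - 0.1 * rg + 2.0)
           for az in range(5) for rg in range(5)]
a, b, c = fit_phase_ramp(samples)
```

In the network approach, such per-interferogram estimates would then be adjusted jointly so that linearly dependent combinations of images remain mutually consistent.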

  20. Automated body weight prediction of dairy cows using 3-dimensional vision.

    PubMed

    Song, X; Bokkers, E A M; van der Tol, P P J; Groot Koerkamp, P W G; van Mourik, S

    2018-05-01

    The objectives of this study were to quantify the error of body weight prediction using automatically measured morphological traits in a 3-dimensional (3-D) vision system and to assess the influence of various sources of uncertainty on body weight prediction. In this case study, an image acquisition setup was created in a cow selection box equipped with a top-view 3-D camera. Morphological traits of hip height, hip width, and rump length were automatically extracted from the raw 3-D images taken of the rump area of dairy cows (n = 30). These traits combined with days in milk, age, and parity were used in multiple linear regression models to predict body weight. To find the best prediction model, an exhaustive feature selection algorithm was used to build intermediate models (n = 63). Each model was validated by leave-one-out cross-validation, giving the root mean square error and mean absolute percentage error. The model consisting of hip width (measurement variability of 0.006 m), days in milk, and parity was the best model, with the lowest errors of 41.2 kg of root mean square error and 5.2% mean absolute percentage error. Our integrated system, including the image acquisition setup, image analysis, and the best prediction model, predicted the body weights with a performance similar to that achieved using semi-automated or manual methods. Moreover, the variability of our simplified morphological trait measurement showed a negligible contribution to the uncertainty of body weight prediction. We suggest that dairy cow body weight prediction can be improved by incorporating more predictive morphological traits and by improving the prediction model structure. The Authors. Published by FASS Inc. and Elsevier Inc. on behalf of the American Dairy Science Association®. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).
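The validation scheme above, leave-one-out cross-validation scored by RMSE and mean absolute percentage error, can be sketched for a toy body-weight model. A single predictor (hip width) stands in for the paper's three-term model, the simple-regression fit is an assumption of this sketch, and the data below are invented.

```python
import math

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for one predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
             sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def loocv(xs, ys):
    """Leave-one-out cross-validation; returns (RMSE, MAPE %)."""
    pairs = []
    for i in range(len(xs)):
        a, b = fit_line(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        pairs.append((a * xs[i] + b, ys[i]))   # (predicted, observed)
    rmse = math.sqrt(sum((p - o) ** 2 for p, o in pairs) / len(pairs))
    mape = 100.0 * sum(abs(p - o) / o for p, o in pairs) / len(pairs)
    return rmse, mape

# Hypothetical hip widths (m) and body weights (kg).
hip_width = [0.52, 0.55, 0.58, 0.60, 0.63, 0.66]
weight = [560.0, 590.0, 630.0, 650.0, 690.0, 720.0]
rmse, mape = loocv(hip_width, weight)
```

Held-out prediction errors like these are what the study's 41.2 kg RMSE and 5.2% MAPE figures summarize for the full three-predictor model.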
