Sample records for residual set-up errors

  1. Estimation of daily interfractional larynx residual setup error after isocentric alignment for head and neck radiotherapy: Quality-assurance implications for target volume and organ-at-risk margination using daily CT-on-rails imaging

    PubMed Central

    Baron, Charles A.; Awan, Musaddiq J.; Mohamed, Abdallah S. R.; Akel, Imad; Rosenthal, David I.; Gunn, G. Brandon; Garden, Adam S.; Dyer, Brandon A.; Court, Laurence; Sevak, Parag R; Kocak-Uzel, Esengul; Fuller, Clifton D.

    2016-01-01

    The larynx may serve either as a target or as an organ-at-risk (OAR) in head and neck cancer (HNC) image-guided radiotherapy (IGRT). The objective of this study was to estimate the IGRT parameters required to account for larynx positional error independent of isocentric alignment and to suggest population-based compensatory margins. Ten HNC patients receiving radiotherapy (RT) with daily CT-on-rails imaging were assessed. Seven landmark points were placed on each daily scan. Taking the most superior anterior point of the C5 vertebra as a reference isocenter for each scan, residual displacement vectors to the other 6 points were calculated after isocentric alignment. Subsequently, using the first scan as a reference, the magnitudes of the vector differences for all 6 points across all scans over the course of treatment were calculated. Residual systematic and random error, and the necessary compensatory CTV-to-PTV and OAR-to-PRV margins, were calculated using both the observational cohort data and a bootstrap-resampled population estimator. The grand mean displacement for all anatomical points was 5.07 mm, with a mean systematic error of 1.1 mm and a mean random setup error of 2.63 mm, while the bootstrapped grand mean displacement of the points of interest (POIs) was 5.09 mm, with a mean systematic error of 1.23 mm and a mean random setup error of 2.61 mm. The required CTV-to-PTV expansion was 4.6 mm for all cohort points, while the bootstrap estimate of the equivalent margin was 4.9 mm. The calculated OAR-to-PRV expansion for the observed residual set-up error was 2.7 mm, with a bootstrap-estimated expansion of 2.9 mm. We conclude that interfractional larynx setup error is a significant source of RT set-up/delivery error in HNC, whether the larynx is considered a CTV or an OAR. We estimate that a uniform expansion of 5 mm is needed to compensate for set-up error when the larynx is a target, or 3 mm when the larynx is an OAR, using a non-laryngeal bony isocenter. PMID:25679151
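
    The margin recipe described above can be reproduced from per-patient displacement data. The sketch below is a minimal illustration, assuming the standard van Herk CTV-to-PTV recipe (2.5Σ + 0.7σ) and a McKenzie-style OAR-to-PRV recipe (1.3Σ + 0.5σ) that such studies typically use; the array layout and function names are hypothetical, not taken from the paper.

    ```python
    import numpy as np

    def setup_error_margins(displacements):
        """Residual systematic/random set-up error and compensatory margins.

        displacements: array of shape (n_patients, n_fractions) holding residual
        displacement magnitudes (mm) after isocentric alignment.
        """
        patient_means = displacements.mean(axis=1)
        Sigma = patient_means.std(ddof=1)                 # systematic error: SD of per-patient means
        sigma = np.sqrt((displacements.std(axis=1, ddof=1) ** 2).mean())  # random error: RMS of per-patient SDs
        ctv_ptv = 2.5 * Sigma + 0.7 * sigma               # van Herk CTV-to-PTV margin
        oar_prv = 1.3 * Sigma + 0.5 * sigma               # McKenzie-style OAR-to-PRV margin
        return Sigma, sigma, ctv_ptv, oar_prv

    def bootstrap_margins(displacements, n_boot=1000, seed=0):
        """Bootstrap-resample patients to obtain population estimates of the margins."""
        rng = np.random.default_rng(seed)
        n = displacements.shape[0]
        draws = [setup_error_margins(displacements[rng.integers(0, n, n)]) for _ in range(n_boot)]
        return np.mean(draws, axis=0)   # (Sigma, sigma, CTV-PTV, OAR-PRV) averaged over resamples
    ```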

  2. NOTE: Optimization of megavoltage CT scan registration settings for thoracic cases on helical tomotherapy

    NASA Astrophysics Data System (ADS)

    Woodford, Curtis; Yartsev, Slav; Van Dyk, Jake

    2007-08-01

    This study aims to investigate the settings that provide optimum registration accuracy when registering megavoltage CT (MVCT) studies acquired on tomotherapy with planning kilovoltage CT (kVCT) studies of patients with lung cancer. For each experiment, the systematic difference between the actual and planned positions of the thorax phantom was determined by setting the phantom up at the planning isocenter, generating and registering an MVCT study. The phantom was translated by 5 or 10 mm, MVCT scanned, and registration was performed again. A root-mean-square equation that calculated the residual error of the registration based on the known shift and systematic difference was used to assess the accuracy of the registration process. The phantom study results for 18 combinations of different MVCT/kVCT registration options are presented and compared to clinical registration data from 17 lung cancer patients. MVCT studies acquired with coarse (6 mm), normal (4 mm) and fine (2 mm) slice spacings could all be registered with similar residual errors. No specific combination of resolution and fusion selection technique resulted in a lower residual error. A scan length of 6 cm with any slice spacing registered with the full image fusion selection technique and fine resolution will result in a low residual error most of the time. On average, large corrections made manually by clinicians to the automatic registration values are infrequent. Small manual corrections within the residual error averages of the registration process occur, but their impact on the average patient position is small. Registrations using the full image fusion selection technique and fine resolution of 6 cm MVCT scans with coarse slices have a low residual error, and this strategy can be clinically used for lung cancer patients treated on tomotherapy. Automatic registration values are accurate on average, and a quick verification on a sagittal MVCT slice should be enough to detect registration outliers.
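
    As a rough sketch of the residual-error bookkeeping described above (known couch shift plus a pre-measured systematic offset, compared with the shift reported by registration), the following is a minimal, hypothetical implementation; it is not the authors' code.

    ```python
    import numpy as np

    def registration_residual_rms(applied_shifts, reported_shifts, systematic_offset):
        """Root-mean-square residual error of MVCT/kVCT registration.

        applied_shifts, reported_shifts: arrays of shape (n_trials, 3), the known
        phantom translations and the shifts returned by the registration (mm).
        systematic_offset: shape (3,), the actual-minus-planned phantom position
        measured at the planning isocenter before any translation was applied.
        """
        residuals = reported_shifts - (applied_shifts + systematic_offset)
        return np.sqrt(np.mean(np.sum(residuals ** 2, axis=1)))
    ```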

  3. Speeding up Coarse Point Cloud Registration by Threshold-Independent Baysac Match Selection

    NASA Astrophysics Data System (ADS)

    Kang, Z.; Lindenbergh, R.; Pu, S.

    2016-06-01

    This paper presents an algorithm for the automatic registration of terrestrial point clouds by match selection using an efficient conditional sampling method, threshold-independent BaySAC (BAYes SAmpling Consensus), and employs the error metric of the average point-to-surface residual to reduce the random measurement error and thereby approach the real registration error. BaySAC and other basic sampling algorithms usually require a manually chosen threshold by which inlier points are identified, which leads to a threshold-dependent verification process. We therefore applied the LMedS method to construct the cost function used to determine the optimum model, reducing the influence of human factors and improving the robustness of the model estimate. Point-to-point and point-to-surface error metrics are the most commonly used. However, the point-to-point error in general consists of at least two components: random measurement error and systematic error resulting from a remaining error in the estimated rigid body transformation. We therefore employ the average point-to-surface residual to evaluate the registration accuracy. The proposed approaches, together with a traditional RANSAC approach, are tested on four data sets acquired by three different scanners in terms of their computational efficiency and the quality of the final registration. The registration results show that the standard deviation of the average point-to-surface residuals is reduced from 1.4 cm (plain RANSAC) to 0.5 cm (threshold-independent BaySAC). The results also show that, compared to RANSAC, our BaySAC strategies require fewer iterations and lower computational cost when the hypothesis set is contaminated with more outliers.
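
    Two of the quantities discussed above, the threshold-free LMedS cost used to rank candidate rigid transforms and the average point-to-surface residual used to report final accuracy, can be sketched as follows; the formulation is generic and assumes pre-established correspondences and local surface normals, not the authors' implementation.

    ```python
    import numpy as np

    def lmeds_cost(R, t, src, dst):
        """Least-median-of-squares cost of a candidate rigid transform (R, t):
        the median squared point-to-point residual, so no inlier threshold is needed."""
        residuals = dst - (src @ R.T + t)
        return np.median(np.sum(residuals ** 2, axis=1))

    def mean_point_to_surface_residual(R, t, src, dst_points, dst_normals):
        """Average point-to-surface residual: distance from each transformed source
        point to the locally fitted plane (corresponding target point + unit normal)."""
        moved = src @ R.T + t
        return np.mean(np.abs(np.sum((moved - dst_points) * dst_normals, axis=1)))
    ```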

  4. Automatic image registration performance for two different CBCT systems; variation with imaging dose

    NASA Astrophysics Data System (ADS)

    Barber, J.; Sykes, J. R.; Holloway, L.; Thwaites, D. I.

    2014-03-01

    The performance of an automatic image registration algorithm was compared on image sets collected with two commercial CBCT systems, and the relationship with imaging dose was explored. CBCT images of a CIRS Virtually Human Male Pelvis phantom (VHMP) were collected on Varian TrueBeam/OBI and Elekta Synergy/XVI linear accelerators across a range of mAs settings. Each CBCT image was registered 100 times, with random initial offsets introduced. Image registration was performed using the grey value correlation ratio algorithm in the Elekta XVI software, to a mask of the prostate volume with a 5 mm expansion. Residual registration errors were calculated after correcting for the initial introduced phantom set-up error. Registration performance with the OBI images was similar to that with XVI. There was a clear dependence on imaging dose for the XVI images, with residual errors increasing below 4 mGy. It was not possible to acquire images with doses lower than approximately 5 mGy with the OBI system, and no evidence of reduced performance was observed at this dose. Registration failures (maximum target registration error > 3.6 mm on the surface of a 30 mm sphere) occurred in 5% to 9% of registrations, except for the lowest dose XVI scan (31%). The uncertainty in automatic image registration with both OBI and XVI images was found to be adequate for clinical use within a normal range of acquisition settings.
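
    The registration-failure criterion quoted above (maximum target registration error above 3.6 mm on the surface of a 30 mm sphere) can be evaluated directly from a residual rotation and translation. The sketch below is illustrative only and assumes the 30 mm figure is a radius; names are hypothetical.

    ```python
    import numpy as np

    def max_tre_on_sphere(R, t, radius=30.0, n_samples=2000, seed=1):
        """Maximum target registration error (mm) over the surface of a sphere,
        given a residual rigid transform (rotation matrix R, translation t in mm)."""
        rng = np.random.default_rng(seed)
        v = rng.normal(size=(n_samples, 3))
        pts = radius * v / np.linalg.norm(v, axis=1, keepdims=True)   # points on the sphere surface
        tre = np.linalg.norm(pts @ R.T + t - pts, axis=1)
        return tre.max()

    # A registration would be counted as a failure if max_tre_on_sphere(R, t) > 3.6.
    ```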

  5. Teaching Cancer Patients the Value of Correct Positioning During Radiotherapy Using Visual Aids and Practical Exercises.

    PubMed

    Hansen, Helle; Nielsen, Berit Kjærside; Boejen, Annette; Vestergaard, Anne

    2018-06-01

    The aim of this study was to investigate if teaching patients about positioning before radiotherapy treatment would (a) reduce the residual rotational set-up errors, (b) reduce the number of repositionings and (c) improve patients' sense of control by increasing self-efficacy and reducing distress. Patients were randomized to either standard care (control group) or standard care and a teaching session combining visual aids and practical exercises (intervention group). Daily images from the treatment sessions were evaluated off-line. Both groups filled in a questionnaire before and at the end of the treatment course on various aspects of cooperation with the staff regarding positioning. Comparisons of residual rotational set-up errors showed an improvement in the intervention group compared to the control group. No significant differences were found in number of repositionings, self-efficacy or distress. Results show that it is possible to teach patients about positioning and thereby improve precision in positioning. Teaching patients about positioning did not seem to affect self-efficacy or distress scores at baseline and at the end of the treatment course.

  6. SU-E-J-29: Automatic Image Registration Performance of Three IGRT Systems for Prostate Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barber, J; University of Sydney, Sydney, NSW; Sykes, J

    Purpose: To compare the performance of an automatic image registration algorithm on image sets collected on three commercial image guidance systems, and explore its relationship with imaging parameters such as dose and sharpness. Methods: Images of a CIRS Virtually Human Male Pelvis phantom (VHMP) were collected on the CBCT systems of Varian TrueBeam/OBI and Elekta Synergy/XVI linear accelerators, across a range of mAs settings, and MVCT on a Tomotherapy Hi-ART accelerator with a range of pitch. Using the 6D correlation ratio algorithm of XVI, each image was registered to a mask of the prostate volume with a 5 mm expansion. Registrations were repeated 100 times, with random initial offsets introduced to simulate daily matching. Residual registration errors were calculated by correcting for the initial phantom set-up error. Automatic registration was also repeated after reconstructing images with different sharpness filters. Results: All three systems showed good registration performance, with residual translations <0.5 mm (1σ) for typical clinical dose and reconstruction settings. Residual rotational error had a larger range, with 0.8°, 1.2° and 1.9° for 1σ in XVI, OBI and Tomotherapy respectively. The registration accuracy of XVI images showed a strong dependence on imaging dose, particularly below 4 mGy. No evidence of reduced performance was observed at the lowest dose settings for OBI and Tomotherapy, but these were above 4 mGy. Registration failures (maximum target registration error > 3.6 mm on the surface of a 30 mm sphere) occurred in 5% to 10% of registrations. Changing the sharpness of image reconstruction had no significant effect on registration performance. Conclusions: Using the present automatic image registration algorithm, all IGRT systems tested provided satisfactory registrations for clinical use within a normal range of acquisition settings.

  7. Residue frequencies and pairing preferences at protein-protein interfaces.

    PubMed

    Glaser, F; Steinberg, D M; Vakser, I A; Ben-Tal, N

    2001-05-01

    We used a nonredundant set of 621 protein-protein interfaces of known high-resolution structure to derive residue composition and residue-residue contact preferences. The residue composition at the interfaces, in entire proteins and in whole genomes correlates well, indicating the statistical strength of the data set. Differences between amino acid distributions were observed for interfaces with buried surface area of less than 1,000 Å² versus interfaces with area of more than 5,000 Å². Hydrophobic residues were abundant in large interfaces while polar residues were more abundant in small interfaces. The largest residue-residue preferences at the interface were recorded for interactions between pairs of large hydrophobic residues, such as Trp and Leu, and the smallest preferences for pairs of small residues, such as Gly and Ala. On average, contacts between pairs of hydrophobic and polar residues were unfavorable, and the charged residues tended to pair subject to charge complementarity, in agreement with previous reports. A bootstrap procedure, lacking from previous studies, was used for error estimation. It showed that the statistical errors in the set of pairing preferences are generally small; the average standard error is approximately 0.2, i.e., about 8% of the average value of the pairwise index (2.9). However, for a few pairs (e.g., Ser-Ser and Glu-Asp) the standard error is larger in magnitude than the pairing index, which makes it impossible to tell whether contact formation is favorable or unfavorable. The results are interpreted using physicochemical factors and their implications for the energetics of complex formation and for protein docking are discussed. Proteins 2001;43:89-102. Copyright 2001 Wiley-Liss, Inc.
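
    The bootstrap error estimate described above can be sketched generically: resample interfaces with replacement, recompute the pairing-preference index on each resample, and take the spread across resamples as the standard error. The index itself is passed in as a placeholder, since the paper's exact definition is not reproduced here.

    ```python
    import numpy as np

    def bootstrap_standard_error(interfaces, pairing_index, n_boot=1000, seed=0):
        """Standard error of a residue-residue pairing-preference index by
        bootstrap over the set of protein-protein interfaces.

        interfaces:     list of per-interface data (e.g. residue contact lists).
        pairing_index:  function mapping a list of interfaces to a scalar index
                        (e.g. the Trp-Leu preference); placeholder for the real one.
        """
        rng = np.random.default_rng(seed)
        n = len(interfaces)
        replicates = [pairing_index([interfaces[i] for i in rng.integers(0, n, n)])
                      for _ in range(n_boot)]
        return np.std(replicates, ddof=1)
    ```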

  8. Influence of erroneous patient records on population pharmacokinetic modeling and individual bayesian estimation.

    PubMed

    van der Meer, Aize Franciscus; Touw, Daniël J; Marcus, Marco A E; Neef, Cornelis; Proost, Johannes H

    2012-10-01

    Observational data sets can be used for population pharmacokinetic (PK) modeling. However, these data sets are generally less precisely recorded than experimental data sets. This article aims to investigate the influence of erroneous records on population PK modeling and individual maximum a posteriori Bayesian (MAPB) estimation. A total of 1123 patient records of neonates who were administered vancomycin were used for population PK modeling by iterative 2-stage Bayesian (ITSB) analysis. Cut-off values for weighted residuals were tested for exclusion of records from the analysis. A simulation study was performed to assess the influence of erroneous records on population modeling and individual MAPB estimation. The cut-off values for weighted residuals were also tested in the simulation study. Registration errors had limited influence on the outcomes of population PK modeling but can have detrimental effects on individual MAPB estimation. A population PK model created from a data set with many registration errors has little influence on subsequent MAPB estimates for precisely recorded data. A weighted residual value of 2 for concentration measurements has good discriminative power for identifying erroneous records. ITSB analysis and its individual estimates are hardly affected by most registration errors. Large registration errors can be detected by the weighted residuals of the concentrations.
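
    The cut-off rule described above amounts to flagging a concentration record when its weighted residual exceeds 2. A minimal, generic sketch follows; the weighting used by the actual ITSB software is not reproduced, and the residual is simply scaled by an assumed standard deviation.

    ```python
    import numpy as np

    def flag_suspect_records(observed, predicted, residual_sd, cutoff=2.0):
        """Flag concentration records whose absolute weighted residual exceeds the cut-off.

        observed, predicted: measured and model-predicted concentrations.
        residual_sd: assumed residual standard deviation (scalar or per observation);
        the weighted residual is (observed - predicted) / residual_sd.
        """
        weighted_residuals = (np.asarray(observed) - np.asarray(predicted)) / residual_sd
        return np.abs(weighted_residuals) > cutoff
    ```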

  9. Analysis of GRACE Range-rate Residuals with Emphasis on Reprocessed Star-Camera Datasets

    NASA Astrophysics Data System (ADS)

    Goswami, S.; Flury, J.; Naeimi, M.; Bandikova, T.; Guerr, T. M.; Klinger, B.

    2015-12-01

    Since March 2002 the two GRACE satellites have orbited the Earth at relatively low altitude. Determination of the gravity field of the Earth, including its temporal variations, from the satellites' orbits and the inter-satellite measurements is the goal of the mission. Yet the time-variable gravity signal has not been fully exploited. This can be seen in the computed post-fit range-rate residuals. The errors reflected in the range-rate residuals are due to different sources such as systematic errors, mismodelling errors and tone errors. Here, we analyse the effect of three different star-camera data sets on the post-fit range-rate residuals. On the one hand, we consider the available attitude data; on the other hand, we take two different data sets which have been reprocessed at the Institute of Geodesy, Hannover, and the Institute of Theoretical Geodesy and Satellite Geodesy, TU Graz, Austria, respectively. The differences in the range-rate residuals computed from the different attitude data sets are then analyzed in this study. Details will be given and results will be discussed.

  10. Capsule Performance Optimization in the National Ignition Campaign

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Landen, O L; MacGowan, B J; Haan, S W

    2009-10-13

    A capsule performance optimization campaign will be conducted at the National Ignition Facility to substantially increase the probability of ignition. The campaign will experimentally correct for residual uncertainties in the implosion and hohlraum physics used in our radiation-hydrodynamic computational models before proceeding to cryogenic-layered implosions and ignition attempts. The required tuning techniques using a variety of ignition capsule surrogates have been demonstrated at the Omega facility under scaled hohlraum and capsule conditions relevant to the ignition design and shown to meet the required sensitivity and accuracy. In addition, a roll-up of all expected random and systematic uncertainties in setting the key ignition laser and target parameters due to residual measurement, calibration, cross-coupling, surrogacy, and scale-up errors has been derived that meets the required budget.

  11. Capsule performance optimization in the national ignition campaign

    NASA Astrophysics Data System (ADS)

    Landen, O. L.; MacGowan, B. J.; Haan, S. W.; Edwards, J.

    2010-08-01

    A capsule performance optimization campaign will be conducted at the National Ignition Facility [1] to substantially increase the probability of ignition. The campaign will experimentally correct for residual uncertainties in the implosion and hohlraum physics used in our radiation-hydrodynamic computational models before proceeding to cryogenic-layered implosions and ignition attempts. The required tuning techniques using a variety of ignition capsule surrogates have been demonstrated at the Omega facility under scaled hohlraum and capsule conditions relevant to the ignition design and shown to meet the required sensitivity and accuracy. In addition, a roll-up of all expected random and systematic uncertainties in setting the key ignition laser and target parameters due to residual measurement, calibration, cross-coupling, surrogacy, and scale-up errors has been derived that meets the required budget.

  12. Measuring the residual stress of transparent conductive oxide films on PET by the double-beam shadow Moiré interferometer

    NASA Astrophysics Data System (ADS)

    Chen, Hsi-Chao; Huang, Kuo-Ting; Lo, Yen-Ming; Chiu, Hsuan-Yi; Chen, Guan-Jhen

    2011-09-01

    The purpose of this research was to construct a measurement system that can quickly and accurately analyze the residual stress of flexible electronics. Transparent conductive oxide (TCO) films of tin-doped indium oxide (ITO) were deposited on PET substrates by radio frequency (RF) magnetron sputtering using corresponding oxide targets. Because shadow Moiré interferometry is well suited to measuring large deformations, we set up a double-beam shadow Moiré interferometer to measure and analyze the residual stress of TCO films on PET. A mathematical model was developed and combined with image processing software. Using the LabVIEW graphical software, we measured the distance between the left and right fringes of the pattern to obtain the curvature of the deformed surface. The residual stress could then be calculated with the Stoney correction formula for flexible electronics. By combining the phase shifting method with shadow Moiré, the measurement resolution and accuracy were greatly improved. An error analysis of the system gave a relative error of about 2%. Shadow Moiré interferometry is therefore a non-destructive, fast, and simple method for measuring the residual stress of TCO/PET films.
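
    The stress calculation itself rests on the curvature recovered from the fringe pattern. The sketch below uses the classical Stoney relation; the paper's corrected formula for a compliant PET substrate is not reproduced, so treat this as the baseline form only.

    ```python
    def stoney_stress(E_s, nu_s, t_s, t_f, R):
        """Residual film stress (Pa) from the classical Stoney relation.

        E_s, nu_s : substrate Young's modulus (Pa) and Poisson's ratio
        t_s, t_f  : substrate and film thicknesses (m)
        R         : radius of curvature of the deformed surface (m), e.g. derived
                    from the shadow-Moire fringe spacing
        """
        return E_s * t_s ** 2 / (6.0 * (1.0 - nu_s) * t_f * R)
    ```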

  13. Cone beam CT-based set-up strategies with and without rotational correction for stereotactic body radiation therapy in the liver.

    PubMed

    Bertholet, Jenny; Worm, Esben; Høyer, Morten; Poulsen, Per

    2017-06-01

    Accurate patient positioning is crucial in stereotactic body radiation therapy (SBRT) due to a high dose regimen. Cone-beam computed tomography (CBCT) is often used for patient positioning based on radio-opaque markers. We compared six CBCT-based set-up strategies with or without rotational correction. Twenty-nine patients with three implanted markers received 3-6 fraction liver SBRT. The markers were delineated on the mid-ventilation phase of a 4D-planning-CT. One pretreatment CBCT was acquired per fraction. Set-up strategy 1 used only translational correction based on manual marker match between the CBCT and planning CT. Set-up strategy 2 used automatic 6 degrees-of-freedom registration of the vertebrae closest to the target. The 3D marker trajectories were also extracted from the projections and the mean position of each marker was calculated and used for set-up strategies 3-6. Translational correction only was used for strategy 3. Translational and rotational corrections were used for strategies 4-6 with the rotation being either vertebrae based (strategy 4), or marker based and constrained to ±3° (strategy 5) or unconstrained (strategy 6). The resulting set-up error was calculated as the 3D root-mean-square set-up error of the three markers. The set-up error of the spinal cord was calculated for all strategies. The bony anatomy set-up (2) had the largest set-up error (5.8 mm). The marker-based set-up with unconstrained rotations (6) had the smallest set-up error (0.8 mm) but the largest spinal cord set-up error (12.1 mm). The marker-based set-up with translational correction only (3) or with bony anatomy rotational correction (4) had equivalent set-up error (1.3 mm) but rotational correction reduced the spinal cord set-up error from 4.1 mm to 3.5 mm. Marker-based set-up was substantially better than bony-anatomy set-up. Rotational correction may improve the set-up, but further investigations are required to determine the optimal correction strategy.
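
    The accuracy metric used to compare the six strategies, the 3D root-mean-square set-up error over the three implanted markers, can be written compactly; the sketch below is a generic illustration with hypothetical array layouts.

    ```python
    import numpy as np

    def marker_rms_setup_error(planned_positions, corrected_positions):
        """3D root-mean-square set-up error over the implanted markers.

        planned_positions, corrected_positions: arrays of shape (3, 3) holding the
        (x, y, z) coordinates (mm) of the three markers in the plan and after the
        couch correction of a given set-up strategy.
        """
        d2 = np.sum((corrected_positions - planned_positions) ** 2, axis=1)
        return np.sqrt(np.mean(d2))
    ```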

  14. [Statistical Process Control (SPC) can help prevent treatment errors without increasing costs in radiotherapy].

    PubMed

    Govindarajan, R; Llueguera, E; Melero, A; Molero, J; Soler, N; Rueda, C; Paradinas, C

    2010-01-01

    Statistical Process Control (SPC) was applied to monitor patient set-up in radiotherapy and, when the measured set-up error values indicated a loss of process stability, its root cause was identified and eliminated to prevent set-up errors. Set-up errors were measured for the medial-lateral (ml), cranial-caudal (cc) and anterior-posterior (ap) dimensions and then the upper control limits were calculated. Once the control limits were known and the range variability was acceptable, treatment set-up errors were monitored using sub-groups of 3 patients, three times each shift. These values were plotted on a control chart in real time. Control limit values showed that the existing variation was acceptable. Set-up errors, measured and plotted on an X chart, helped monitor the set-up process stability and, if and when stability was lost, treatment was interrupted, the particular cause responsible for the non-random pattern was identified and corrective action was taken before proceeding with the treatment. The SPC protocol focuses on controlling the variability due to assignable causes instead of focusing on patient-to-patient variability, which normally does not exist. Compared to weekly sampling of the set-up error in each and every patient, which may only ensure that just those sampled sessions were set up correctly, the SPC method enables set-up error prevention in all treatment sessions for all patients and, at the same time, reduces the control costs. Copyright © 2009 SECA. Published by Elsevier Espana. All rights reserved.
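
    A minimal sketch of the X-bar charting described above, assuming the usual range-based control limits (centre line ± A2 × mean subgroup range, with A2 = 1.023 for subgroups of 3); the baseline data layout is hypothetical.

    ```python
    import numpy as np

    A2_N3 = 1.023   # standard X-bar chart constant for subgroups of size 3

    def xbar_control_limits(baseline_subgroups):
        """Control limits for an X-bar chart of set-up errors.

        baseline_subgroups: array of shape (n_subgroups, 3) of set-up errors (mm),
        collected while the process is known to be stable.
        Returns (lower_limit, centre_line, upper_limit).
        """
        centre = baseline_subgroups.mean()
        rbar = np.ptp(baseline_subgroups, axis=1).mean()     # mean subgroup range
        return centre - A2_N3 * rbar, centre, centre + A2_N3 * rbar

    def assignable_cause_signalled(new_subgroup, limits):
        """A subgroup mean outside the control limits signals a non-random cause."""
        lcl, _, ucl = limits
        mean = float(np.mean(new_subgroup))
        return mean < lcl or mean > ucl
    ```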

  15. Human errors and measurement uncertainty

    NASA Astrophysics Data System (ADS)

    Kuselman, Ilya; Pennecchi, Francesca

    2015-04-01

    Evaluating the residual risk of human errors in a measurement and testing laboratory, remaining after the error reduction by the laboratory quality system, and quantifying the consequences of this risk for the quality of the measurement/test results are discussed based on expert judgments and Monte Carlo simulations. A procedure for evaluation of the contribution of the residual risk to the measurement uncertainty budget is proposed. Examples are provided using earlier published sets of expert judgments on human errors in pH measurement of groundwater, elemental analysis of geological samples by inductively coupled plasma mass spectrometry, and multi-residue analysis of pesticides in fruits and vegetables. The human error contribution to the measurement uncertainty budget in the examples was not negligible, yet also not dominant. This was assessed as a good risk management result.

  16. Supervised local error estimation for nonlinear image registration using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Eppenhof, Koen A. J.; Pluim, Josien P. W.

    2017-02-01

    Error estimation in medical image registration is valuable when validating, comparing, or combining registration methods. To validate a nonlinear image registration method, ideally the registration error should be known for the entire image domain. We propose a supervised method for the estimation of a registration error map for nonlinear image registration. The method is based on a convolutional neural network that estimates the norm of the residual deformation from patches around each pixel in two registered images. This norm is interpreted as the registration error, and is defined for every pixel in the image domain. The network is trained using a set of artificially deformed images. Each training example is a pair of images: the original image, and a random deformation of that image. No manually labeled ground truth error is required. At test time, only the two registered images are required as input. We train and validate the network on registrations in a set of 2D digital subtraction angiography sequences, such that errors up to eight pixels can be estimated. We show that for this range of errors the convolutional network is able to learn the registration error in pairs of 2D registered images at subpixel precision. Finally, we present a proof of principle for the extension to 3D registration problems in chest CTs, showing that the method has the potential to estimate errors in 3D registration problems.
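
    A compact sketch of the kind of network described, a patch-based CNN regressing the norm of the residual deformation from two registered images stacked as channels; the architecture and sizes here are illustrative, not the one reported in the paper.

    ```python
    import torch
    import torch.nn as nn

    class RegistrationErrorCNN(nn.Module):
        """Regresses the local registration error (in pixels) from a pair of
        registered image patches stacked as two input channels."""

        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.regressor = nn.Linear(32, 1)

        def forward(self, patch_pairs):                  # (batch, 2, H, W)
            x = self.features(patch_pairs).flatten(1)    # (batch, 32)
            return self.regressor(x).squeeze(1)          # predicted error norm per patch

    # Training pairs come from artificially deformed images, so the ground-truth
    # deformation norm at each patch centre is known without manual labelling.
    ```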

  17. A General Method for Predicting Amino Acid Residues Experiencing Hydrogen Exchange

    PubMed Central

    Wang, Boshen; Perez-Rathke, Alan; Li, Renhao; Liang, Jie

    2018-01-01

    Information on protein hydrogen exchange can help delineate key regions involved in protein-protein interactions and provides important insight towards determining functional roles of genetic variants and their possible mechanisms in disease processes. Previous studies have shown that the degree of hydrogen exchange is affected by hydrogen bond formations, solvent accessibility, proximity to other residues, and experimental conditions. However, a general predictive method for identifying residues capable of hydrogen exchange transferable to a broad set of proteins is lacking. We have developed a machine learning method based on random forest that can predict whether a residue experiences hydrogen exchange. Using data from the Start2Fold database, which contains information on 13,306 residues (3,790 of which experience hydrogen exchange and 9,516 which do not exchange), our method achieves good performance. Specifically, we achieve an overall out-of-bag (OOB) error, an unbiased estimate of the test set error, of 20.3 percent. Using a randomly selected test data set consisting of 500 residues experiencing hydrogen exchange and 500 which do not, our method achieves an accuracy of 0.79, a recall of 0.74, a precision of 0.82, and an F1 score of 0.78.
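
    A scikit-learn sketch of the workflow described (random forest with out-of-bag error, then accuracy/precision/recall/F1 on a balanced hold-out set); the per-residue features and Start2Fold-derived labels are placeholders.

    ```python
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

    def fit_and_evaluate(X_train, y_train, X_test, y_test):
        """X_*: per-residue feature matrices (e.g. solvent accessibility, H-bond
        counts); y_*: binary labels, 1 = residue experiences hydrogen exchange."""
        clf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
        clf.fit(X_train, y_train)
        y_pred = clf.predict(X_test)
        return {
            "oob_error": 1.0 - clf.oob_score_,          # unbiased estimate of test error
            "accuracy": accuracy_score(y_test, y_pred),
            "precision": precision_score(y_test, y_pred),
            "recall": recall_score(y_test, y_pred),
            "f1": f1_score(y_test, y_pred),
        }
    ```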

  18. An active co-phasing imaging testbed with segmented mirrors

    NASA Astrophysics Data System (ADS)

    Zhao, Weirui; Cao, Genrui

    2011-06-01

    An active co-phasing imaging testbed with highly accurate optical adjustment and control at the nanometer scale was set up to validate the algorithms for piston and tip-tilt error sensing and real-time adjustment. A modular design was adopted. The primary mirror was spherical and divided into three sub-mirrors. One of them was fixed and served as the reference segment; the others were each adjustable relative to the fixed segment in three degrees of freedom (piston, tip and tilt) using sensitive micro-displacement actuators with a range of 15 mm and a resolution of 3 nm. The method of two-dimensional dispersed fringe analysis was used to sense the piston error between adjacent segments over a range of 200 μm with a repeatability of 2 nm, and the tip-tilt error was obtained with the method of centroid sensing. Co-phased imaging could be realized by correcting the errors measured above with the sensitive micro-displacement actuators driven by a computer. The process of co-phasing error sensing and correction could be monitored in real time by a scrutiny module in the testbed. A FISBA interferometer was introduced to evaluate the co-phasing performance, and finally a total residual surface error of about 50 nm RMS was achieved.

  19. Capsule performance optimization in the National Ignition Campaign

    NASA Astrophysics Data System (ADS)

    Landen, O. L.; Boehly, T. R.; Bradley, D. K.; Braun, D. G.; Callahan, D. A.; Celliers, P. M.; Collins, G. W.; Dewald, E. L.; Divol, L.; Glenzer, S. H.; Hamza, A.; Hicks, D. G.; Hoffman, N.; Izumi, N.; Jones, O. S.; Kirkwood, R. K.; Kyrala, G. A.; Michel, P.; Milovich, J.; Munro, D. H.; Nikroo, A.; Olson, R. E.; Robey, H. F.; Spears, B. K.; Thomas, C. A.; Weber, S. V.; Wilson, D. C.; Marinak, M. M.; Suter, L. J.; Hammel, B. A.; Meyerhofer, D. D.; Atherton, J.; Edwards, J.; Haan, S. W.; Lindl, J. D.; MacGowan, B. J.; Moses, E. I.

    2010-05-01

    A capsule performance optimization campaign will be conducted at the National Ignition Facility [G. H. Miller et al., Nucl. Fusion 44, 228 (2004)] to substantially increase the probability of ignition by laser-driven hohlraums [J. D. Lindl et al., Phys. Plasmas 11, 339 (2004)]. The campaign will experimentally correct for residual uncertainties in the implosion and hohlraum physics used in our radiation-hydrodynamic computational models before proceeding to cryogenic-layered implosions and ignition attempts. The required tuning techniques using a variety of ignition capsule surrogates have been demonstrated at the OMEGA facility under scaled hohlraum and capsule conditions relevant to the ignition design and shown to meet the required sensitivity and accuracy. In addition, a roll-up of all expected random and systematic uncertainties in setting the key ignition laser and target parameters due to residual measurement, calibration, cross-coupling, surrogacy, and scale-up errors has been derived that meets the required budget.
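
    The "roll-up" of uncertainties mentioned above is, in the simplest reading, a quadrature combination of independent error terms compared against the allowed budget; the sketch below reflects that common assumption only, not the campaign's actual error model.

    ```python
    import numpy as np

    def rolled_up_uncertainty(random_terms, systematic_terms):
        """Combine independent random and systematic uncertainty contributions in
        quadrature into a single total for comparison against an error budget."""
        random_total = np.sqrt(np.sum(np.square(random_terms)))
        systematic_total = np.sqrt(np.sum(np.square(systematic_terms)))
        return random_total, systematic_total, float(np.hypot(random_total, systematic_total))

    # Example: rolled_up_uncertainty([0.5, 0.3], [0.2, 0.4]) returns the random,
    # systematic, and combined totals in the same units as the inputs.
    ```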

  20. Capsule performance optimization in the National Ignition Campaign

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Landen, O. L.; Bradley, D. K.; Braun, D. G.

    2010-05-15

    A capsule performance optimization campaign will be conducted at the National Ignition Facility [G. H. Miller et al., Nucl. Fusion 44, 228 (2004)] to substantially increase the probability of ignition by laser-driven hohlraums [J. D. Lindl et al., Phys. Plasmas 11, 339 (2004)]. The campaign will experimentally correct for residual uncertainties in the implosion and hohlraum physics used in our radiation-hydrodynamic computational models before proceeding to cryogenic-layered implosions and ignition attempts. The required tuning techniques using a variety of ignition capsule surrogates have been demonstrated at the OMEGA facility under scaled hohlraum and capsule conditions relevant to the ignition design and shown to meet the required sensitivity and accuracy. In addition, a roll-up of all expected random and systematic uncertainties in setting the key ignition laser and target parameters due to residual measurement, calibration, cross-coupling, surrogacy, and scale-up errors has been derived that meets the required budget.

  1. New Methods for Assessing and Reducing Uncertainty in Microgravity Studies

    NASA Astrophysics Data System (ADS)

    Giniaux, J. M.; Hooper, A. J.; Bagnardi, M.

    2017-12-01

    Microgravity surveying, also known as dynamic or 4D gravimetry, is a time-dependent geophysical method used to detect mass fluctuations within the shallow crust by analysing temporal changes in relative gravity measurements. We present here a detailed uncertainty analysis of temporal gravity measurements, considering for the first time all possible error sources, including tilt, error in drift estimations and timing errors. We find that some error sources that are commonly ignored can have a significant impact on the total error budget, and it is therefore likely that some gravity signals have been misinterpreted in previous studies. Our analysis leads to new methods for reducing some of the uncertainties associated with residual gravity estimation. In particular, we propose different approaches for drift estimation and the free air correction depending on the survey set-up. We also provide formulae to recalculate uncertainties for past studies and lay out a framework for best practice in future studies. We demonstrate our new approach on volcanic case studies, which include Kilauea in Hawaii and Askja in Iceland.

  2. The impact of seasonal signals on spatio-temporal filtering

    NASA Astrophysics Data System (ADS)

    Gruszczynski, Maciej; Klos, Anna; Bogusz, Janusz

    2016-04-01

    The existence of Common Mode Errors (CMEs) in permanent GNSS networks contributes to spatial and temporal correlation in residual time series. Time series from permanently observing GNSS stations less than 2 000 km apart are similarly influenced by such CME sources as mismodelling (Earth Orientation Parameters - EOP, satellite orbits or antenna phase center variations) during the reference frame realization, large-scale atmospheric and hydrospheric effects, as well as small-scale crust deformations. Residuals obtained by detrending and deseasonalising topocentric GNSS time series, arranged epoch-by-epoch, form an observation matrix independently for each component (North, East, Up). The CME is treated as the internal structure of the data. Assuming a uniform temporal function across the network, it is possible to filter the CME out using the PCA (Principal Component Analysis) approach. Some of the above-described CME sources may be reflected over a wide range of frequencies in the GPS residual time series. To determine the impact of seasonal signal modelling on the spatial correlation in the network, and consequently on the results of CME filtering, we chose two modelling approaches. In the first, commonly presented by previous authors, only annual and semi-annual oscillations were modelled with Least-Squares Estimation (LSE). In the second, the set of residuals resulted from modelling a deterministic part that included fortnightly periods plus up to the 9th harmonics of the Chandlerian, tropical and draconitic oscillations. Correlation coefficients for the residuals, together with the KMO (Kaiser-Meyer-Olkin) statistic and Bartlett's test of sphericity, were determined. For this research we used time series expressed in ITRF2008 provided by JPL (Jet Propulsion Laboratory). GPS processing was performed using the GIPSY-OASIS software in PPP (Precise Point Positioning) mode. To form a GPS station network that meets the demand of a uniform spatial response to the CME, we chose 18 stations located in Central Europe. The resulting network extends up to 1500 kilometers. The KMO statistic indicates whether a component analysis may be useful for a chosen data set. We obtained KMO statistic values of 0.87 and 0.62 for the residuals of the Up component after the first and second approaches were applied, which means that both sets of residuals share common errors. Bartlett's test of sphericity confirmed that in both cases the residuals are correlated. Other important results are the eigenvalues expressed as a percentage of the total variance explained by the first few components in PCA. For the North, East and Up components we obtained respectively 68%, 75%, 65% and 47%, 54%, 52% after the first and second approaches were applied. The results of CME filtering using the PCA approach performed on both residual time series directly influence the uncertainty of the velocity of permanent stations. In our case, spatial filtering reduces the velocity uncertainty by 0.5 to 0.8 mm for the horizontal components and by 0.6 to 0.9 mm on average for the Up component when annual and semi-annual signals were assumed. Nevertheless, when the second approach to modelling the deterministic part was used, a deterioration of the velocity uncertainty was noticed only for the Up component, probably due to much higher autocorrelation in the time series compared to the horizontal components.
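
    The stacking-and-PCA step at the core of this approach can be sketched as follows: centre the detrended, deseasonalised residuals arranged epoch-by-station, extract the leading principal component(s) via SVD, and subtract their reconstruction as the common mode error. The KMO/Bartlett diagnostics and the full deterministic model are not reproduced here.

    ```python
    import numpy as np

    def filter_common_mode(residuals, n_components=1):
        """Remove the common mode error (CME) from one component (N, E or U) of a
        GNSS network.

        residuals: array of shape (n_epochs, n_stations), already detrended and
        deseasonalised. Returns the spatially filtered residuals.
        """
        X = residuals - residuals.mean(axis=0)                 # centre each station series
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        cme = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components]
        return residuals - cme                                 # subtract the rank-k common mode
    ```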

  3. SU-F-T-638: Is There A Need For Immobilization in SRS?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Masterova, K; Sethi, A; Anderson, D

    2016-06-15

    Purpose: Frameless stereotactic radiosurgery (SRS) is increasingly used in the clinic. Cone-Beam CT (CBCT) to simulation-CT matching has replaced the 3-dimensional coordinate-based set-up using a stereotactic localizing frame. The SRS frame, however, served as both a localizing and an immobilizing device. We seek to measure the quality of frameless (mask based) and frame based immobilization and evaluate its impact on target dose. Methods: Each SRS patient was set up by kV on-board imaging (OBI) and then fine-tuned with CBCT. A second CBCT was done at treatment-end to ascertain intrafraction motion. We compared pre- vs post-treatment CBCT shifts for both frameless and frame based SRS patients. CBCT to sim-CT fusion was repeated for each patient off-line to assess systematic residual image registration error. Each patient was re-planned with measured shifts to assess effects on target dose. Results: We analyzed 11 patients (12 lesions) treated with frameless SRS and 6 patients (11 lesions) with a fixed frame system. Average intra-fraction iso-center positioning errors for frameless and frame-based treatments were 1.24 ± 0.57 mm and 0.28 ± 0.08 mm (mean ± s.d.) respectively. Residual error in CBCT registration was 0.24 mm. The frameless positioning uncertainties led to target dose errors in Dmin and D95 of 15.5 ± 18.4% and 6.6 ± 9.1% respectively. The corresponding errors in fixed frame SRS were much lower, with Dmin and D95 reduced by 4.2 ± 6.5% and 2.5 ± 3.8% respectively. Conclusion: The frameless mask provides good immobilization with average patient motion of 1.2 mm during treatment. This exceeds the MRI voxel dimensions (∼0.43 mm) used for target delineation. Frame-based SRS provides superior patient immobilization with measurable movement no greater than the background noise of the CBCT registration. Small lesions requiring sub-mm precision are better served with frame based SRS.

  4. UCAC3: ASTROMETRIC REDUCTIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finch, Charlie T.; Zacharias, Norbert; Wycoff, Gary L., E-mail: finch@usno.navy.mi

    2010-06-15

    Presented here are the details of the astrometric reductions from the x, y data to mean right ascension (R.A.), declination (decl.) coordinates of the third U.S. Naval Observatory CCD Astrograph Catalog (UCAC3). For these new reductions we used over 216,000 CCD exposures. The Two-Micron All-Sky Survey (2MASS) data are used extensively to probe for coordinate and coma-like systematic errors in UCAC data mainly caused by the poor charge transfer efficiency of the 4K CCD. Errors up to about 200 mas have been corrected using complex look-up tables handling multiple dependences derived from the residuals. Similarly, field distortions and sub-pixel phase errors have also been evaluated using the residuals with respect to 2MASS. The overall magnitude equation is derived from UCAC calibration field observations alone, independent of external catalogs. Systematic errors of positions at the UCAC observing epoch as presented in UCAC3 are better corrected than in the previous catalogs for most stars. The Tycho-2 catalog is used to obtain final positions on the International Celestial Reference Frame. Residuals of the Tycho-2 reference stars show a small magnitude equation (depending on declination zone) that might be inherent in the Tycho-2 catalog.

  5. Improving probabilistic prediction of daily streamflow by identifying Pareto optimal approaches for modelling heteroscedastic residual errors

    NASA Astrophysics Data System (ADS)

    David, McInerney; Mark, Thyer; Dmitri, Kavetski; George, Kuczera

    2017-04-01

    This study provides guidance to hydrological researchers which enables them to provide probabilistic predictions of daily streamflow with the best reliability and precision for different catchment types (e.g. high/low degree of ephemerality). Reliable and precise probabilistic prediction of daily catchment-scale streamflow requires statistical characterization of residual errors of hydrological models. It is commonly known that hydrological model residual errors are heteroscedastic, i.e. there is a pattern of larger errors in higher streamflow predictions. Although multiple approaches exist for representing this heteroscedasticity, few studies have undertaken a comprehensive evaluation and comparison of these approaches. This study fills this research gap by evaluating 8 common residual error schemes, including standard and weighted least squares, the Box-Cox transformation (with fixed and calibrated power parameter, lambda) and the log-sinh transformation. Case studies include 17 perennial and 6 ephemeral catchments in Australia and USA, and two lumped hydrological models. We find the choice of heteroscedastic error modelling approach significantly impacts on predictive performance, though no single scheme simultaneously optimizes all performance metrics. The set of Pareto optimal schemes, reflecting performance trade-offs, comprises Box-Cox schemes with lambda of 0.2 and 0.5, and the log scheme (lambda=0, perennial catchments only). These schemes significantly outperform even the average-performing remaining schemes (e.g., across ephemeral catchments, median precision tightens from 105% to 40% of observed streamflow, and median biases decrease from 25% to 4%). Theoretical interpretations of empirical results highlight the importance of capturing the skew/kurtosis of raw residuals and reproducing zero flows. Recommendations for researchers and practitioners seeking robust residual error schemes for practical work are provided.

  6. A discontinuous Poisson-Boltzmann equation with interfacial jump: homogenisation and residual error estimate.

    PubMed

    Fellner, Klemens; Kovtunenko, Victor A

    2016-01-01

    A nonlinear Poisson-Boltzmann equation with inhomogeneous Robin type boundary conditions at the interface between two materials is investigated. The model describes the electrostatic potential generated by a vector of ion concentrations in a periodic multiphase medium with dilute solid particles. The key issue stems from interfacial jumps, which necessitate discontinuous solutions to the problem. Based on variational techniques, we derive the homogenisation of the discontinuous problem and establish a rigorous residual error estimate up to the first-order correction.

  7. Preliminary Studies for a CBCT Imaging Protocol for Offline Organ Motion Analysis: Registration Software Validation and CTDI Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Falco, Maria Daniela, E-mail: mdanielafalco@hotmail.co; Fontanarosa, Davide; Miceli, Roberto

    2011-04-01

    Cone-beam X-ray volumetric imaging in the treatment room allows online correction of set-up errors and offline assessment of residual set-up errors and organ motion. In this study the registration algorithm of the X-ray volume imaging software (XVI, Elekta, Crawley, United Kingdom), which manages a commercial cone-beam computed tomography (CBCT)-based positioning system, has been tested using a homemade and an anthropomorphic phantom to: (1) assess its performance in detecting known translational and rotational set-up errors and (2) transfer the transformation matrix of its registrations into a commercial treatment planning system (TPS) for offline organ motion analysis. Furthermore, the CBCT dose index has been measured for a particular site (prostate: 120 kV, 1028.8 mAs, approximately 640 frames) using a standard Perspex cylindrical body phantom (diameter 32 cm, length 15 cm) and a 10-cm-long pencil ionization chamber. We have found that known displacements were correctly calculated by the registration software to within 1.3 mm and 0.4°. For the anthropomorphic phantom, only translational displacements have been considered. Both studies have shown errors within the intrinsic uncertainty of our system for translational displacements (estimated as 0.87 mm) and rotational displacements (estimated as 0.22°). The resulting table translations proposed by the system to correct the displacements were also checked with portal images and found to place the isocenter of the plan on the linac isocenter within an error of 1 mm, which is the dimension of the spherical lead marker inserted at the center of the homemade phantom. The registration matrix translated into the TPS image fusion module correctly reproduced the alignment between planning CT scans and CBCT scans. Finally, measurements of the CBCT dose index indicate that CBCT acquisition delivers less dose than conventional CT scans and electronic portal imaging device portals. The registration software was found to be accurate, its registration matrix can be easily translated into the TPS, and a low dose is delivered to the patient during image acquisition. These results can help in designing imaging protocols for offline evaluations.

  8. THE IMPACT OF POINT-SOURCE SUBTRACTION RESIDUALS ON 21 cm EPOCH OF REIONIZATION ESTIMATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trott, Cathryn M.; Wayth, Randall B.; Tingay, Steven J., E-mail: cathryn.trott@curtin.edu.au

    Precise subtraction of foreground sources is crucial for detecting and estimating 21 cm H I signals from the Epoch of Reionization (EoR). We quantify how imperfect point-source subtraction due to limitations of the measurement data set yields structured residual signal in the data set. We use the Cramer-Rao lower bound, as a metric for quantifying the precision with which a parameter may be measured, to estimate the residual signal in a visibility data set due to imperfect point-source subtraction. We then propagate these residuals into two metrics of interest for 21 cm EoR experiments, the angular power spectrum and the two-dimensional power spectrum, using a combination of full analytic covariant derivation, analytic variant derivation, and covariant Monte Carlo simulations. This methodology differs from previous work in two ways: (1) it uses information theory to set the point-source position error, rather than assuming a global rms error, and (2) it describes a method for propagating the errors analytically, thereby obtaining the full correlation structure of the power spectra. The methods are applied to two upcoming low-frequency instruments that are proposing to perform statistical EoR experiments: the Murchison Widefield Array and the Precision Array for Probing the Epoch of Reionization. In addition to the actual antenna configurations, we apply the methods to minimally redundant and maximally redundant configurations. We find that for peeling sources above 1 Jy, the amplitude of the residual signal, and its variance, will be smaller than the contribution from thermal noise for the observing parameters proposed for upcoming EoR experiments, and that optimal subtraction of bright point sources will not be a limiting factor for EoR parameter estimation. We then use the formalism to provide an ab initio analytic derivation motivating the 'wedge' feature in the two-dimensional power spectrum, complementing previous discussion in the literature.
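
    The precision metric referred to above is the standard Cramer-Rao lower bound; for an unbiased estimator of a parameter vector from data with likelihood L, it reads as below (the paper's specific evaluation for point-source positions in visibility data is not reproduced):

    ```latex
    \operatorname{cov}(\hat{\boldsymbol{\theta}}) \succeq \mathbf{I}(\boldsymbol{\theta})^{-1},
    \qquad
    [\mathbf{I}(\boldsymbol{\theta})]_{jk}
      = -\,\mathbb{E}\!\left[\frac{\partial^{2}\ln L(\mathbf{d};\boldsymbol{\theta})}
                                  {\partial\theta_{j}\,\partial\theta_{k}}\right].
    ```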

  9. Residual position errors of lymph node surrogates in breast cancer adjuvant radiotherapy: Comparison of two arm fixation devices and the effect of arm position correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kapanen, Mika; Department of Medical Physics, Tampere University Hospital; Laaksomaa, Marko, E-mail: Marko.Laaksomaa@pshp.fi

    2016-04-01

    Residual position errors of the lymph node (LN) surrogates and humeral head (HH) were determined for 2 different arm fixation devices in radiotherapy (RT) of breast cancer: a standard wrist-hold (WH) and a house-made rod-hold (RH). The effect of arm position correction (APC) based on setup images was also investigated. A total of 113 consecutive patients with early-stage breast cancer with LN irradiation were retrospectively analyzed (53 and 60 using the WH and RH, respectively). Residual position errors of the LN surrogates (Th1-2 and clavicle) and the HH were investigated to compare the 2 fixation devices. The position errors and setup margins were determined before and after the APC to investigate the efficacy of the APC in the treatment situation. A threshold of 5 mm was used for the residual errors of the clavicle and Th1-2 to perform the APC, and a threshold of 7 mm was used for the HH. The setup margins were calculated with the van Herk formula. Irradiated volumes of the HH were determined from RT treatment plans. With the WH and the RH, setup margins up to 8.1 and 6.7 mm should be used for the LN surrogates, and margins up to 4.6 and 3.6 mm should be used to spare the HH, respectively, without the APC. After the APC, the margins of the LN surrogates were equal to or less than 7.5/6.0 mm with the WH/RH, but margins up to 4.2/2.9 mm were required for the HH. The APC was needed at least once with both the devices for approximately 60% of the patients. With the RH, irradiated volume of the HH was approximately 2 times more than with the WH, without any dose constraints. Use of the RH together with the APC resulted in minimal residual position errors and setup margins for all the investigated bony landmarks. Based on the obtained results, we prefer the house-made RH. However, more attention should be given to minimize the irradiation of the HH with the RH than with the WH.

  10. Improving probabilistic prediction of daily streamflow by identifying Pareto optimal approaches for modeling heteroscedastic residual errors

    NASA Astrophysics Data System (ADS)

    McInerney, David; Thyer, Mark; Kavetski, Dmitri; Lerat, Julien; Kuczera, George

    2017-03-01

    Reliable and precise probabilistic prediction of daily catchment-scale streamflow requires statistical characterization of residual errors of hydrological models. This study focuses on approaches for representing error heteroscedasticity with respect to simulated streamflow, i.e., the pattern of larger errors in higher streamflow predictions. We evaluate eight common residual error schemes, including standard and weighted least squares, the Box-Cox transformation (with fixed and calibrated power parameter λ) and the log-sinh transformation. Case studies include 17 perennial and 6 ephemeral catchments in Australia and the United States, and two lumped hydrological models. Performance is quantified using predictive reliability, precision, and volumetric bias metrics. We find the choice of heteroscedastic error modeling approach significantly impacts on predictive performance, though no single scheme simultaneously optimizes all performance metrics. The set of Pareto optimal schemes, reflecting performance trade-offs, comprises Box-Cox schemes with λ of 0.2 and 0.5, and the log scheme (λ = 0, perennial catchments only). These schemes significantly outperform even the average-performing remaining schemes (e.g., across ephemeral catchments, median precision tightens from 105% to 40% of observed streamflow, and median biases decrease from 25% to 4%). Theoretical interpretations of empirical results highlight the importance of capturing the skew/kurtosis of raw residuals and reproducing zero flows. Paradoxically, calibration of λ is often counterproductive: in perennial catchments, it tends to overfit low flows at the expense of abysmal precision in high flows. The log-sinh transformation is dominated by the simpler Pareto optimal schemes listed above. Recommendations for researchers and practitioners seeking robust residual error schemes for practical work are provided.
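
    One of the Pareto optimal schemes named above, the Box-Cox transformation with a fixed λ, can be sketched as follows: transform both observed and simulated streamflow and treat the residuals in transformed space as approximately homoscedastic. The small offset for zero flows is an assumption for illustration, not the papers' exact treatment.

    ```python
    import numpy as np

    def box_cox(q, lam=0.2, offset=0.0):
        """Box-Cox transform of streamflow; lam = 0 gives the log scheme. A small
        positive offset can be added when zero flows occur (ephemeral catchments)."""
        q = np.asarray(q, dtype=float) + offset
        return np.log(q) if lam == 0 else (q ** lam - 1.0) / lam

    def transformed_residuals(q_obs, q_sim, lam=0.2, offset=0.0):
        """Residual errors in Box-Cox space, where these schemes treat them as
        (approximately) homoscedastic and Gaussian."""
        return box_cox(q_obs, lam, offset) - box_cox(q_sim, lam, offset)
    ```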

  11. Comparative assessment of LANDSAT-D MSS and TM data quality for mapping applications in the Southeast

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Rectifications of multispectral scanner and thematic mapper data sets for full and subscene areas, analyses of planimetric errors, assessments of the number and distribution of ground control points required to minimize errors, and factors contributing to residual error are examined. Other investigations include the generation of three-dimensional terrain models and the effects of spatial resolution on digital classification accuracies.

  12. Achieving algorithmic resilience for temporal integration through spectral deferred corrections

    DOE PAGES

    Grout, Ray; Kolla, Hemanth; Minion, Michael; ...

    2017-05-08

    Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual of the first correction iteration and changes slowly between successive iterations. Here, we demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.
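
    The resilience strategy described above can be written as a stopping rule around the SDC sweep: keep iterating until the residual is small relative to the first sweep's residual and changes little between sweeps. The sweep and residual operators below are placeholders for the application's own.

    ```python
    def resilient_sdc(sweep, residual_norm, u0, rel_tol=1e-8, stall_tol=1e-2, max_sweeps=50):
        """Run SDC correction sweeps until the residual is both small relative to the
        first-sweep residual and changing slowly, so that a soft (transient) data
        fault is absorbed by extra sweeps rather than corrupting the step."""
        u = sweep(u0)
        r_first = residual_norm(u)
        r_prev = r_first
        for _ in range(max_sweeps - 1):
            u = sweep(u)
            r = residual_norm(u)
            if r <= rel_tol * r_first and abs(r - r_prev) <= stall_tol * max(r_prev, 1e-300):
                break
            r_prev = r
        return u
    ```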

  13. Achieving algorithmic resilience for temporal integration through spectral deferred corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grout, Ray; Kolla, Hemanth; Minion, Michael

    2017-05-08

    Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual on the first correction iteration and changes slowly between successive iterations. We demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.

  15. Incorporating a prediction of postgrazing herbage mass into a whole-farm model for pasture-based dairy systems.

    PubMed

    Gregorini, P; Galli, J; Romera, A J; Levy, G; Macdonald, K A; Fernandez, H H; Beukes, P C

    2014-07-01

    The DairyNZ whole-farm model (WFM; DairyNZ, Hamilton, New Zealand) consists of a framework that links component models for animals, pastures, crops, and soils. The model was developed to assist with analysis and design of pasture-based farm systems. New (this work) and revised (e.g., cow, pasture, crop) component models can be added to the WFM, keeping the model flexible and up to date. Nevertheless, the WFM does not account for the plant-animal relationships determining herbage-depletion dynamics. The user has to preset the maximum allowable level of herbage depletion [i.e., postgrazing herbage mass (residuals)] throughout the year. Because residuals have a direct effect on herbage regrowth, the WFM in its current form does not dynamically simulate the effect of grazing pressure on herbage depletion and the consequent effect on herbage regrowth. The management of grazing pressure is a key component of pasture-based dairy systems. Thus, the main objective of the present work was to develop a new version of the WFM able to predict residuals, and thereby simulate the related effects of grazing pressure dynamically at the farm scale. This objective was accomplished by incorporating a new component model into the WFM. This model represents plant-animal relationships, for example, sward structure and herbage intake rate, and the resulting level of herbage depletion. The sensitivity of the new version of the WFM was evaluated, and the new WFM was then tested against an experimental data set previously used to evaluate the WFM, to illustrate the adequacy and improvement of the model development. Key output variables of the new version pertinent to this work (milk production, herbage dry matter intake, intake rate, harvesting efficiency, and residuals) responded acceptably to a range of input variables. The relative prediction errors for monthly and mean annual residual predictions were 20 and 5%, respectively. Monthly predictions of residuals had a line bias (1.5%), with a proportion of the root mean square prediction error (RMSPE) due to random error of 97.5%. Predicted monthly herbage growth rates had a line bias of 2%, a proportion of RMSPE due to random error of 96%, and a concordance correlation coefficient of 0.87. Annual herbage production was predicted with an RMSPE of 531 kg of herbage dry matter/ha per year, a line bias of 11%, a proportion of RMSPE due to random error of 80%, and a relative prediction error of 2%. Annual herbage dry matter intake per cow and per hectare, both per year, were predicted with RMSPE, relative prediction error, and concordance correlation coefficient of 169 and 692 kg of dry matter, 3 and 4%, and 0.91 and 0.87, respectively. These results indicate that predictions of the new WFM are relatively accurate and precise, and that incorporating a plant-animal relationship model into the WFM allows for dynamic predictions of residuals and more realistic simulations of the effect of grazing pressure on herbage production and intake at the farm level without intervention from the user. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
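
    For readers who want to recompute the evaluation statistics quoted above, the snippet below sketches generic formulas for RMSPE, one common decomposition of the mean square prediction error into mean-bias, line-(slope-)bias and random components, the relative prediction error, and Lin's concordance correlation coefficient. It is an illustration with made-up numbers, not code or data from the WFM study.

    ```python
    import numpy as np

    def evaluation_stats(obs, pred):
        """Generic model-evaluation statistics (population moments throughout)."""
        obs, pred = np.asarray(obs, float), np.asarray(pred, float)
        mspe = np.mean((pred - obs) ** 2)
        rmspe = np.sqrt(mspe)
        r = np.corrcoef(obs, pred)[0, 1]
        s_o, s_p = np.std(obs), np.std(pred)
        mean_bias = (pred.mean() - obs.mean()) ** 2          # bias of the means
        line_bias = (s_p - r * s_o) ** 2                     # slope (line) bias
        random_part = (1.0 - r ** 2) * s_o ** 2              # remaining random error
        ccc = 2.0 * np.cov(obs, pred, bias=True)[0, 1] / (   # Lin's concordance correlation
            s_o ** 2 + s_p ** 2 + (obs.mean() - pred.mean()) ** 2)
        return {
            "RMSPE": rmspe,
            "relative_prediction_error_%": 100.0 * rmspe / obs.mean(),
            "mean_bias_%_of_MSPE": 100.0 * mean_bias / mspe,
            "line_bias_%_of_MSPE": 100.0 * line_bias / mspe,
            "random_%_of_MSPE": 100.0 * random_part / mspe,
            "CCC": ccc,
        }

    # Illustrative residual (postgrazing herbage mass) values in kg DM/ha; not study data.
    obs = [1500, 1600, 1550, 1700, 1650, 1580]
    pred = [1450, 1680, 1500, 1760, 1600, 1620]
    print(evaluation_stats(obs, pred))
    ```
    The three bias/random components sum to the MSPE exactly when population (n-denominator) moments are used, which is why they are reported as percentages of the MSPE.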

  16. Capsule Performance Optimization for the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Landen, Otto

    2009-11-01

    The overall goal of the capsule performance optimization campaign is to maximize the probability of ignition by experimentally correcting for likely residual uncertainties in the implosion and hohlraum physics used in our radiation-hydrodynamic computational models before proceeding to cryogenic-layered implosions and ignition attempts. This will be accomplished using a variety of targets that set key laser, hohlraum and capsule parameters to maximize ignition capsule implosion velocity, while minimizing fuel adiabat, core shape asymmetry and ablator-fuel mix. The targets include high-Z re-emission spheres setting foot symmetry through foot cone power balance [1], liquid deuterium-filled "keyhole" targets setting shock speed and timing through the laser power profile [2], symmetry capsules setting peak cone power balance and hohlraum length [3], and streaked x-ray backlit imploding capsules setting ablator thickness [4]. We will show how results from successful tuning-technique demonstration shots performed at the Omega facility under scaled hohlraum and capsule conditions relevant to the ignition design meet the required sensitivity and accuracy. We will also present estimates of all expected random and systematic uncertainties in setting the key ignition laser and target parameters due to residual measurement, calibration, cross-coupling, surrogacy, and scale-up errors, and show that these are reduced after a number of shots and iterations to an acceptable level of residual uncertainty. Finally, we will present results from upcoming tuning-technique validation shots performed at NIF at near full scale. Prepared by LLNL under Contract DE-AC52-07NA27344. [1] E. Dewald, et al., Rev. Sci. Instrum. 79 (2008) 10E903. [2] T.R. Boehly, et al., Phys. Plasmas 16 (2009) 056302. [3] G. Kyrala, et al., BAPS 53 (2008) 247. [4] D. Hicks, et al., BAPS 53 (2008) 2.

  17. Capsule implosion optimization during the indirect-drive National Ignition Campaign

    NASA Astrophysics Data System (ADS)

    Landen, O. L.; Edwards, J.; Haan, S. W.; Robey, H. F.; Milovich, J.; Spears, B. K.; Weber, S. V.; Clark, D. S.; Lindl, J. D.; MacGowan, B. J.; Moses, E. I.; Atherton, J.; Amendt, P. A.; Boehly, T. R.; Bradley, D. K.; Braun, D. G.; Callahan, D. A.; Celliers, P. M.; Collins, G. W.; Dewald, E. L.; Divol, L.; Frenje, J. A.; Glenzer, S. H.; Hamza, A.; Hammel, B. A.; Hicks, D. G.; Hoffman, N.; Izumi, N.; Jones, O. S.; Kilkenny, J. D.; Kirkwood, R. K.; Kline, J. L.; Kyrala, G. A.; Marinak, M. M.; Meezan, N.; Meyerhofer, D. D.; Michel, P.; Munro, D. H.; Olson, R. E.; Nikroo, A.; Regan, S. P.; Suter, L. J.; Thomas, C. A.; Wilson, D. C.

    2011-05-01

    Capsule performance optimization campaigns will be conducted at the National Ignition Facility [G. H. Miller, E. I. Moses, and C. R. Wuest, Nucl. Fusion 44, 228 (2004)] to substantially increase the probability of ignition. The campaigns will experimentally correct for residual uncertainties in the implosion and hohlraum physics used in our radiation-hydrodynamic computational models using a variety of ignition capsule surrogates before proceeding to cryogenic-layered implosions and ignition experiments. The quantitative goals and technique options and down selections for the tuning campaigns are first explained. The computationally derived sensitivities to key laser and target parameters are compared to simple analytic models to gain further insight into the physics of the tuning techniques. The results of the validation of the tuning techniques at the OMEGA facility [J. M. Soures et al., Phys. Plasmas 3, 2108 (1996)] under scaled hohlraum and capsule conditions relevant to the ignition design are shown to meet the required sensitivity and accuracy. A roll-up of all expected random and systematic uncertainties in setting the key ignition laser and target parameters due to residual measurement, calibration, cross-coupling, surrogacy, and scale-up errors has been derived that meets the required budget. Finally, we show how the tuning precision will be improved after a number of shots and iterations to meet an acceptable level of residual uncertainty.

  18. Prediction of protein tertiary structure from sequences using a very large back-propagation neural network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, X.; Wilcox, G.L.

    1993-12-31

    We have implemented large-scale back-propagation neural networks on a 544-node Connection Machine, CM-5, using the C language in MIMD mode. The program running on 512 processors performs backpropagation learning at 0.53 Gflops, which provides 76 million connection updates per second. We have applied the network to the prediction of protein tertiary structure from sequence information alone. A neural network with one hidden layer and 40 million connections is trained to learn the relationship between sequence and tertiary structure. The trained network yields predicted structures of some proteins on which it has not been trained, given only their sequences. Presentation of the Fourier transform of the sequences accentuates periodicity in the sequence and yields good generalization with greatly increased training efficiency. Training simulations with a large, heterologous set of protein structures (111 proteins) converge, using CM-5 time, to solutions with under 2% RMS residual error within the training set (random responses give an RMS error of about 20%). Presentation of 15 sequences of related proteins in a testing set of 24 proteins yields predicted structures with less than 8% RMS residual error, indicating good apparent generalization.

  19. Assessment of EGM2008 in Europe using accurate astrogeodetic vertical deflections and omission error estimates from SRTM/DTM2006.0 residual terrain model data

    NASA Astrophysics Data System (ADS)

    Hirt, C.; Marti, U.; Bürki, B.; Featherstone, W. E.

    2010-10-01

    We assess the new EGM2008 Earth gravitational model using a set of 1056 astrogeodetic vertical deflections over parts of continental Europe. Our astrogeodetic vertical deflection data set originates from zenith camera observations performed during 1983-2008. This set, which is completely independent from EGM2008, covers, e.g., Switzerland, Germany, Portugal and Greece, and samples a variety of topography - level terrain, medium elevated and rugged Alpine areas. We describe how EGM2008 is used to compute vertical deflections according to Helmert's (surface) definition. Particular attention is paid to estimating the EGM2008 signal omission error from residual terrain model (RTM) data. The RTM data is obtained from the Shuttle Radar Topography Mission (SRTM) elevation model and the DTM2006.0 high degree spherical harmonic reference surface. The comparisons between the astrogeodetic and EGM2008 vertical deflections show an agreement of about 3 arc seconds (root mean square, RMS). Adding omission error estimates from RTM to EGM2008 significantly reduces the discrepancies from the complete European set of astrogeodetic deflections to 1 arc second (RMS). Depending on the region, the RMS errors vary between 0.4 and 1.5 arc seconds. These values not only reflect EGM2008 commission errors, but also short-scale mass-density anomalies not modelled from the RTM data. Given (1) formally stated EGM2008 commission error estimates of about 0.6-0.8 arc seconds for vertical deflections, and (2) that short-scale mass-density anomalies may affect vertical deflections by about 1 arc second, the agreement between EGM2008 and our astrogeodetic deflection data set is very good. Further focus is placed on the investigation of the high-degree spectral bands of EGM2008. As a general conclusion, EGM2008 - enhanced by RTM data - is capable of predicting Helmert vertical deflections at the 1 arc second accuracy level over Europe.

  20. In search of periodic signatures in IGS REPRO1 solution

    NASA Astrophysics Data System (ADS)

    Mtamakaya, J. D.; Santos, M. C.; Craymer, M. R.

    2010-12-01

    We have been looking for periodic signatures in the REPRO1 solution recently released by the IGS. At this stage, a selected sub-set of IGS station time series in the position and residual domains is under harmonic analysis. We can learn different things from this analysis. From the position domain, we can learn more about actual station motions. From the residual domain, we can learn more about mis-modelled or un-modelled errors. As far as error sources are concerned, we have investigated effects that may be due to tides, atmospheric loading, the definition of the position of the figure axis, and GPS constellation geometry. This poster discusses our findings and presents insights on errors that need to be modelled or have their models improved.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sattarivand, Mike; Summers, Clare; Robar, James

    Purpose: To evaluate the validity of using the spine as a surrogate for tumor positioning with ExacTrac stereoscopic imaging in lung stereotactic body radiation therapy (SBRT). Methods: Using the Novalis ExacTrac x-ray system, 39 lung SBRT patients (182 treatments) were aligned before treatment with a 6-degree-of-freedom (6D) couch (3 translations, 3 rotations) based on spine matching on stereoscopic images. The couch was shifted to the treatment isocenter and pre-treatment CBCT was performed based on a soft tissue match around the tumor volume. The CBCT data were used to measure residual errors following ExacTrac alignment. The thresholds for re-aligning the patients based on CBCT were a 3 mm shift or 3° rotation (in any of the 6 degrees of freedom). In order to evaluate the effect of tumor location on residual errors, correlations between tumor distance from the spine and individual residual errors were calculated. Results: Residual errors were up to 0.5±2.4 mm. Using the 3 mm/3° thresholds, 80/182 (44%) of the treatments required re-alignment based on CBCT soft tissue matching following ExacTrac spine alignment. Most mismatches were in the sup-inf, ant-post, and roll directions, which had larger standard deviations. No correlation was found between tumor distance from the spine and individual residual errors. Conclusion: ExacTrac stereoscopic imaging offers quick pre-treatment patient alignment. However, bone matching based on the spine is not reliable for aligning lung SBRT patients, who require soft tissue image registration from CBCT. The spine can be a poor surrogate for lung SBRT patient alignment even for proximal tumor volumes.

  2. SU-F-P-18: Development of the Technical Training System for Patient Set-Up Considering Rotational Correction in the Virtual Environment Using Three-Dimensional Computer Graphic Engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Imura, K; Fujibuchi, T; Hirata, H

    Purpose: Patient set-up skills in the radiotherapy treatment room have a great influence on treatment outcome in image-guided radiotherapy. In this study, we developed a training system for improving practical set-up skills, including rotational correction, in a virtual environment away from the pressure of the actual treatment room, using a three-dimensional computer graphics (3DCG) engine. Methods: The treatment room for external beam radiotherapy was reproduced in the virtual environment using the 3DCG engine Unity. The viewpoints for performing patient set-up in the virtual treatment room were arranged on both sides of the virtual, operable treatment couch to mimic actual performance by two clinical staff members. The positional errors relative to the mechanical isocenter, based on alignment between the skin marker and laser on the virtual patient model, were displayed as numerical values in SI units together with directional arrow marks. The rotational errors, calculated with a point on the virtual body axis as the center of each rotation axis in the virtual environment, were corrected by adjusting the rotational position of a body phantom wearing a belt with a gyroscope placed on a table in real space. These rotational errors were evaluated by implementing vector outer (cross) product operations and trigonometric functions in the script for the patient set-up technique. Results: The viewpoints in the virtual environment allowed individual users to visually recognize the positional discrepancy from the mechanical isocenter until positional errors of several millimeters were eliminated. The rotational errors between the two points, calculated about the center point, could be corrected efficiently, with the script mathematically indicating the minimal correction technique. Conclusion: By utilizing the script to correct rotational errors as well as to provide accurate positional recognition for the patient set-up technique, the training system developed for improving patient set-up skills enables individual users to identify efficient positional correction methods easily.
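
    The use of vector outer (cross) products and trigonometric functions to recover a rotational correction can be shown with a few lines of generic geometry code. The sketch below is not taken from the described training system or its Unity script; it simply assumes two marker directions and a chosen rotation axis, and returns the signed angle between them.

    ```python
    import numpy as np

    def signed_rotation_angle(u, v, axis):
        """Signed angle (degrees) that rotates direction u onto v about the given axis.

        Both directions are projected into the plane normal to the axis; the sign
        comes from the cross product and the magnitude from atan2."""
        axis = np.asarray(axis, float) / np.linalg.norm(axis)
        u = np.asarray(u, float) - np.dot(u, axis) * axis   # remove component along the axis
        v = np.asarray(v, float) - np.dot(v, axis) * axis
        cross = np.cross(u, v)
        return np.degrees(np.arctan2(np.dot(cross, axis), np.dot(u, v)))

    # Example: a marker direction rolled by +3 degrees about the body (z) axis.
    planned = np.array([1.0, 0.0, 0.0])
    observed = np.array([np.cos(np.radians(3)), np.sin(np.radians(3)), 0.0])
    print(signed_rotation_angle(planned, observed, axis=[0, 0, 1]))  # approximately +3.0
    ```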

  3. Corrective Techniques and Future Directions for Treatment of Residual Refractive Error Following Cataract Surgery

    PubMed Central

    Moshirfar, Majid; McCaughey, Michael V; Santiago-Caban, Luis

    2015-01-01

    Postoperative residual refractive error following cataract surgery is not an uncommon occurrence for a large proportion of modern-day patients. Residual refractive errors can be broadly classified into 3 main categories: myopic, hyperopic, and astigmatic. The degree to which a residual refractive error adversely affects a patient is dependent on the magnitude of the error, as well as the specific type of intraocular lens the patient possesses. There are a variety of strategies for resolving residual refractive errors that must be individualized for each specific patient scenario. In this review, the authors discuss contemporary methods for rectification of residual refractive error, along with their respective indications/contraindications, and efficacies. PMID:25663845

  4. Corrective Techniques and Future Directions for Treatment of Residual Refractive Error Following Cataract Surgery.

    PubMed

    Moshirfar, Majid; McCaughey, Michael V; Santiago-Caban, Luis

    2014-12-01

    Postoperative residual refractive error following cataract surgery is not an uncommon occurrence for a large proportion of modern-day patients. Residual refractive errors can be broadly classified into 3 main categories: myopic, hyperopic, and astigmatic. The degree to which a residual refractive error adversely affects a patient is dependent on the magnitude of the error, as well as the specific type of intraocular lens the patient possesses. There are a variety of strategies for resolving residual refractive errors that must be individualized for each specific patient scenario. In this review, the authors discuss contemporary methods for rectification of residual refractive error, along with their respective indications/contraindications, and efficacies.

  5. MO-F-CAMPUS-T-05: Correct Or Not to Correct for Rotational Patient Set-Up Errors in Stereotactic Radiosurgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Briscoe, M; Ploquin, N; Voroney, JP

    2015-06-15

    Purpose: To quantify the effect of patient rotation in stereotactic radiation therapy and establish a threshold at which rotational patient set-up errors have a significant impact on target coverage. Methods: To simulate rotational patient set-up errors, a Matlab code was created to rotate the patient dose distribution around the treatment isocentre, located centrally in the lesion, while keeping the structure contours in their original locations on the CT and MRI. Rotations of 1°, 3°, and 5° for each of the pitch, roll, and yaw, as well as simultaneous rotations of 1°, 3°, and 5° around all three axes, were applied to two types of brain lesions: brain metastasis and acoustic neuroma. In order to analyze multiple tumour shapes, these plans included small spherical (metastasis), elliptical (acoustic neuroma), and large irregular (metastasis) tumour structures. Dose-volume histograms and planning target volumes were compared between the planned patient positions and those with simulated rotational set-up errors. The RTOG conformity index for patient rotation was also investigated. Results: Examining the tumour volumes that received 80% of the prescription dose in the planned and rotated patient positions showed decreases in prescription dose coverage of up to 2.3%. Conformity indices for treatments with simulated rotational errors showed decreases of up to 3% compared to the original plan. For irregular lesions, degradation of 1% of the target coverage can be seen for rotations as low as 3°. Conclusions: These data show that for elliptical or spherical targets, rotational patient set-up errors of less than 3° around any or all axes do not have a significant impact on the dose delivered to the target volume or on the conformity index of the plan. However, the same rotational errors would have an impact on plans for irregular tumours.
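
    Rotating a dose grid about the treatment isocentre, as the Matlab code above does, can be sketched in a few lines of Python. The example below uses SciPy's ndimage.rotate and assumes the isocentre sits at the centre of the array; it is an illustrative simplification, not the authors' implementation, and the axis conventions for pitch/roll/yaw are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import rotate

    def rotate_dose(dose, pitch_deg=0.0, roll_deg=0.0, yaw_deg=0.0):
        """Apply pitch/roll/yaw rotations to a 3D dose grid with (z, y, x) ordering.

        scipy.ndimage.rotate rotates about the array centre, so the isocentre is
        assumed to lie there; reshape=False keeps the grid dimensions unchanged."""
        out = rotate(dose, yaw_deg, axes=(1, 2), reshape=False, order=1)    # about z
        out = rotate(out, pitch_deg, axes=(0, 1), reshape=False, order=1)   # about x
        out = rotate(out, roll_deg, axes=(0, 2), reshape=False, order=1)    # about y
        return out

    # Toy dose grid with a spherical high-dose region, rotated by 3 degrees around each axis.
    z, y, x = np.mgrid[-20:21, -20:21, -20:21]
    dose = np.where(z**2 + y**2 + x**2 <= 8**2, 20.0, 2.0)
    print(rotate_dose(dose, pitch_deg=3, roll_deg=3, yaw_deg=3).shape)
    ```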

  6. Spline curve matching with sparse knot sets

    Treesearch

    Sang-Mook Lee; A. Lynn Abbott; Neil A. Clark; Philip A. Araman

    2004-01-01

    This paper presents a new curve matching method for deformable shapes using two-dimensional splines. In contrast to the residual error criterion, which is based on relative locations of corresponding knot points and is therefore reliable primarily for dense point sets, we use the deformation energy of thin-plate-spline mapping between sparse knot points and normalized local...

  7. Capsule implosion optimization during the indirect-drive National Ignition Campaign

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Landen, O. L.; Edwards, J.; Haan, S. W.

    2011-05-15

    Capsule performance optimization campaigns will be conducted at the National Ignition Facility [G. H. Miller, E. I. Moses, and C. R. Wuest, Nucl. Fusion 44, 228 (2004)] to substantially increase the probability of ignition. The campaigns will experimentally correct for residual uncertainties in the implosion and hohlraum physics used in our radiation-hydrodynamic computational models using a variety of ignition capsule surrogates before proceeding to cryogenic-layered implosions and ignition experiments. The quantitative goals and technique options and down selections for the tuning campaigns are first explained. The computationally derived sensitivities to key laser and target parameters are compared to simple analytic models to gain further insight into the physics of the tuning techniques. The results of the validation of the tuning techniques at the OMEGA facility [J. M. Soures et al., Phys. Plasmas 3, 2108 (1996)] under scaled hohlraum and capsule conditions relevant to the ignition design are shown to meet the required sensitivity and accuracy. A roll-up of all expected random and systematic uncertainties in setting the key ignition laser and target parameters due to residual measurement, calibration, cross-coupling, surrogacy, and scale-up errors has been derived that meets the required budget. Finally, we show how the tuning precision will be improved after a number of shots and iterations to meet an acceptable level of residual uncertainty.

  8. SU-F-J-42: Comparison of Varian TrueBeam Cone-Beam CT and BrainLab ExacTrac X-Ray for Cranial Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, J; Shi, W; Andrews, D

    2016-06-15

    Purpose: To compare online image registrations of TrueBeam cone-beam CT (CBCT) and BrainLab ExacTrac x-ray imaging systems for cranial radiotherapy. Method: Phantom and patient studies were performed on a Varian TrueBeam STx linear accelerator (Version 2.5), which is integrated with a BrainLab ExacTrac imaging system (Version 6.1.1). The phantom study was based on a Rando head phantom, which was designed to evaluate isocenter-location dependence of the image registrations. Ten isocenters were selected at various locations in the phantom, which represented clinical treatment sites. CBCT and ExacTrac x-ray images were taken when the phantom was located at each isocenter. The patient study included thirteen patients. CBCT and ExacTrac x-ray images were taken at each patient’s treatment position. Six-dimensional image registrations were performed on CBCT and ExacTrac, and residual errors calculated from CBCT and ExacTrac were compared. Results: In the phantom study, the average residual-error differences between CBCT and ExacTrac image registrations were: 0.16±0.10 mm, 0.35±0.20 mm, and 0.21±0.15 mm, in the vertical, longitudinal, and lateral directions, respectively. The average residual-error differences in the rotation, roll, and pitch were: 0.36±0.11 degree, 0.14±0.10 degree, and 0.12±0.10 degree, respectively. In the patient study, the average residual-error differences in the vertical, longitudinal, and lateral directions were: 0.13±0.13 mm, 0.37±0.21 mm, 0.22±0.17 mm, respectively. The average residual-error differences in the rotation, roll, and pitch were: 0.30±0.10 degree, 0.18±0.11 degree, and 0.22±0.13 degree, respectively. Larger residual-error differences (up to 0.79 mm) were observed in the longitudinal direction in the phantom and patient studies where isocenters were located in or close to frontal lobes, i.e., located superficially. Conclusion: Overall, the average residual-error differences were within 0.4 mm in the translational directions and were within 0.4 degree in the rotational directions.

  9. NMR structure calculation for all small molecule ligands and non-standard residues from the PDB Chemical Component Dictionary.

    PubMed

    Yilmaz, Emel Maden; Güntert, Peter

    2015-09-01

    An algorithm, CYLIB, is presented for converting molecular topology descriptions from the PDB Chemical Component Dictionary into CYANA residue library entries. The CYANA structure calculation algorithm uses torsion angle molecular dynamics for the efficient computation of three-dimensional structures from NMR-derived restraints. For this, the molecules have to be represented in torsion angle space with rotations around covalent single bonds as the only degrees of freedom. The molecule must be given a tree structure of torsion angles connecting rigid units composed of one or several atoms with fixed relative positions. Setting up CYANA residue library entries therefore involves, besides straightforward format conversion, the non-trivial steps of defining a suitable tree structure of torsion angles and of re-ordering the atoms in a way that is compatible with this tree structure. This can be done manually for small numbers of ligands, but the process is time-consuming and error-prone. An automated method is necessary in order to handle the large number of different potential ligand molecules to be studied in drug design projects. Here, we present an algorithm for this purpose, and show that CYANA structure calculations can be performed with almost all small molecule ligands and non-standard amino acid residues in the PDB Chemical Component Dictionary.

  10. Detecting Signatures of GRACE Sensor Errors in Range-Rate Residuals

    NASA Astrophysics Data System (ADS)

    Goswami, S.; Flury, J.

    2016-12-01

    In order to reach the accuracy of the GRACE baseline predicted earlier from the design simulations, efforts have been ongoing for a decade. The GRACE error budget is dominated by noise from sensors, dealiasing models, and modeling errors. GRACE range-rate residuals contain these errors; thus, their analysis provides insight into the individual contributions to the error budget. Hence, we analyze the range-rate residuals with a focus on the contribution of sensor errors due to mis-pointing and poor ranging performance in GRACE solutions. For the analysis of pointing errors, we consider two different reprocessed attitude datasets with differences in pointing performance. Range-rate residuals are then computed from these two datasets and analysed. We further compare the system noise of the four K- and Ka-band frequencies of the two spacecraft with the range-rate residuals. Strong signatures of mis-pointing errors can be seen in the range-rate residuals. A correlation between range frequency noise and range-rate residuals is also seen.

  11. Analysis of Solar Spectral Irradiance Measurements from the SBUV/2-Series and the SSBUV Instruments

    NASA Technical Reports Server (NTRS)

    Cebula, Richard P.; DeLand, Matthew T.; Hilsenrath, Ernest

    1997-01-01

    During this period of performance, 1 March 1997 - 31 August 1997, the NOAA-11 SBUV/2 solar spectral irradiance data set was validated using both internal and external assessments. Initial quality checking revealed minor problems with the data (e.g., residual goniometric errors that were manifest as differences between the two scans acquired each day). The sources of these errors were determined and the errors were corrected. Time series were constructed for selected wavelengths, and the solar irradiance changes measured by the instrument were compared to a Mg II proxy-based model of short- and long-term solar irradiance variations. This analysis suggested that errors due to residual, uncorrected long-term instrument drift have been reduced to less than 1-2% over the entire 5.5-year NOAA-11 data record. A detailed statistical analysis was performed. This analysis, which will be documented in a manuscript now in preparation, conclusively demonstrates the evolution of solar rotation periodicity and strength during solar cycle 22.

  12. Set-up uncertainties: online correction with X-ray volume imaging.

    PubMed

    Kataria, Tejinder; Abhishek, Ashu; Chadha, Pranav; Nandigam, Janardhan

    2011-01-01

    To determine interfractional three-dimensional set-up errors using X-ray volumetric imaging (XVI). Between December 2007 and August 2009, 125 patients underwent image-guided radiotherapy using online XVI. After matching of reference and acquired volume-view images, set-up errors in the three translational directions were recorded and corrected online before treatment each day. Mean displacements, population systematic (Σ) errors, and random (σ) errors were calculated and analyzed using SPSS (v16) software. The optimum clinical target volume (CTV) to planning target volume (PTV) margin was calculated using Van Herk's (2.5Σ + 0.7σ) and Stroom's (2Σ + 0.7σ) formulae. Patients were grouped into 4 cohorts, namely brain, head and neck, thorax, and abdomen-pelvis. The mean vector displacements recorded were 0.18 cm, 0.15 cm, 0.36 cm, and 0.35 cm for brain, head and neck, thorax, and abdomen-pelvis, respectively. Analysis of individual mean set-up errors revealed good agreement with the proposed 0.3 cm isotropic margins for brain and 0.5 cm isotropic margins for head and neck. Similarly, the proposed 0.5 cm circumferential and 1 cm craniocaudal margins were in agreement with the thorax and abdomen-pelvic cases. The calculated mean displacements were well within the CTV-PTV margin estimates of Van Herk (90% population coverage to a minimum of 95% of the prescribed dose) and Stroom (99% target volume coverage by 95% of the prescribed dose). Employing these individualized margins in a particular cohort ensures target coverage comparable to that described in the literature, which is further improved if XVI-aided set-up error detection and correction is used before treatment.
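
    The two margin recipes quoted above are simple enough to encode directly. The helper below is an illustration rather than software from the study, and the per-axis Σ and σ values in the example are made-up placeholders.

    ```python
    def ctv_to_ptv_margin(sigma_systematic, sigma_random, recipe="van_herk"):
        """CTV-to-PTV margin (same units as the inputs) from population set-up errors.

        van_herk: 2.5*Sigma + 0.7*sigma
        stroom  : 2.0*Sigma + 0.7*sigma
        """
        coefficient = {"van_herk": 2.5, "stroom": 2.0}[recipe]
        return coefficient * sigma_systematic + 0.7 * sigma_random

    # Illustrative per-axis values in cm (Sigma = systematic, sigma = random).
    for axis, (big_sigma, small_sigma) in {"AP": (0.15, 0.20), "SI": (0.20, 0.25)}.items():
        print(axis,
              round(ctv_to_ptv_margin(big_sigma, small_sigma, "van_herk"), 2),
              round(ctv_to_ptv_margin(big_sigma, small_sigma, "stroom"), 2))
    ```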

  13. Kendall-Theil Robust Line (KTRLine--version 1.0)-A Visual Basic Program for Calculating and Graphing Robust Nonparametric Estimates of Linear-Regression Coefficients Between Two Continuous Variables

    USGS Publications Warehouse

    Granato, Gregory E.

    2006-01-01

    The Kendall-Theil Robust Line software (KTRLine-version 1.0) is a Visual Basic program that may be used with the Microsoft Windows operating system to calculate parameters for robust, nonparametric estimates of linear-regression coefficients between two continuous variables. The KTRLine software was developed by the U.S. Geological Survey, in cooperation with the Federal Highway Administration, for use in stochastic data modeling with local, regional, and national hydrologic data sets to develop planning-level estimates of potential effects of highway runoff on the quality of receiving waters. The Kendall-Theil robust line was selected because this robust nonparametric method is resistant to the effects of outliers and nonnormality in residuals that commonly characterize hydrologic data sets. The slope of the line is calculated as the median of all possible pairwise slopes between points. The intercept is calculated so that the line will run through the median of input data. A single-line model or a multisegment model may be specified. The program was developed to provide regression equations with an error component for stochastic data generation because nonparametric multisegment regression tools are not available with the software that is commonly used to develop regression models. The Kendall-Theil robust line is a median line and, therefore, may underestimate total mass, volume, or loads unless the error component or a bias correction factor is incorporated into the estimate. Regression statistics such as the median error, the median absolute deviation, the prediction error sum of squares, the root mean square error, the confidence interval for the slope, and the bias correction factor for median estimates are calculated by use of nonparametric methods. These statistics, however, may be used to formulate estimates of mass, volume, or total loads. The program is used to read a two- or three-column tab-delimited input file with variable names in the first row and data in subsequent rows. The user may choose the columns that contain the independent (X) and dependent (Y) variable. A third column, if present, may contain metadata such as the sample-collection location and date. The program screens the input files and plots the data. The KTRLine software is a graphical tool that facilitates development of regression models by use of graphs of the regression line with data, the regression residuals (with X or Y), and percentile plots of the cumulative frequency of the X variable, Y variable, and the regression residuals. The user may individually transform the independent and dependent variables to reduce heteroscedasticity and to linearize data. The program plots the data and the regression line. The program also prints model specifications and regression statistics to the screen. The user may save and print the regression results. The program can accept data sets that contain up to about 15,000 XY data points, but because the program must sort the array of all pairwise slopes, the program may be perceptibly slow with data sets that contain more than about 1,000 points.
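
    The core of the Kendall-Theil estimator described above fits in a few lines. The sketch below is not the KTRLine program itself; it shows the single-line case only, with the slope taken as the median of all pairwise slopes and the intercept chosen so the line passes through the medians of the data.

    ```python
    import numpy as np

    def kendall_theil_line(x, y):
        """Kendall-Theil (Theil-Sen) robust line: median pairwise slope, and an
        intercept that forces the line through (median(x), median(y))."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        slopes = [
            (y[j] - y[i]) / (x[j] - x[i])
            for i in range(len(x)) for j in range(i + 1, len(x))
            if x[j] != x[i]                     # skip ties in x (zero denominator)
        ]
        slope = float(np.median(slopes))
        intercept = float(np.median(y) - slope * np.median(x))
        return slope, intercept

    # Small example with one outlier; the robust slope stays near the true value of 2.
    x = np.array([1, 2, 3, 4, 5, 6], float)
    y = 2.0 * x + 1.0
    y[3] += 25.0                                # outlier
    print(kendall_theil_line(x, y))
    ```
    Because every pairwise slope must be formed and sorted, the cost grows roughly with the square of the number of points, which is consistent with the slowdown noted above for data sets larger than about 1,000 points.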

  14. Laser in-situ keratomileusis for refractive error following radial keratotomy

    PubMed Central

    Sinha, Rajesh; Sharma, Namrata; Ahuja, Rakesh; Kumar, Chandrashekhar; Vajpayee, Rasik B

    2011-01-01

    Aim: To evaluate the safety and efficacy of laser in-situ keratomileusis (LASIK) in eyes with residual/induced refractive error following radial keratotomy (RK). Design: Retrospective study. Materials and Methods: A retrospective analysis of data of 18 eyes of 10 patients, who had undergone LASIK for refractive error following RK, was performed. All the patients had undergone RK in both eyes at least one year before LASIK. Parameters like uncorrected visual acuity (UCVA), best-corrected visual acuity (BCVA), contrast sensitivity, glare acuity and corneal parameters were evaluated both preoperatively and postoperatively. Statistical Software: STATA-9.0. Results: The mean UCVA before LASIK was 0.16 ± 0.16, which improved to 0.64 ± 0.22 (P < 0.001) at one year after LASIK. Fourteen of the 18 eyes had UCVA of ≥ 20/30 on Snellen's acuity chart at one year after LASIK. The mean BCVA before LASIK was 0.75 ± 0.18. This improved to 0.87 ± 0.16 at one year after LASIK. The mean spherical refractive error at the time of LASIK and at one year after the procedure was –5.37 ± 4.83 diopters (D) and –0.22 ± 1.45 D, respectively. Only three eyes had a residual spherical refractive error of ≥ 1.0 D at one year of follow-up. In two eyes, we noted opening up of the RK incisions. No eye developed epithelial in-growth up to 1 year after LASIK. Conclusion: LASIK is effective in treating refractive error following RK. However, it carries the risk of flap-related complications like opening up of the previously placed RK incisions and splitting of the corneal flap. PMID:21666312

  15. Assessment and quantification of patient set-up errors in nasopharyngeal cancer patients and their biological and dosimetric impact in terms of generalized equivalent uniform dose (gEUD), tumour control probability (TCP) and normal tissue complication probability (NTCP).

    PubMed

    Boughalia, A; Marcie, S; Fellah, M; Chami, S; Mekki, F

    2015-06-01

    The aim of this study is to assess and quantify patients' set-up errors using an electronic portal imaging device and to evaluate their dosimetric and biological impact in terms of generalized equivalent uniform dose (gEUD) on predictive models such as the tumour control probability (TCP) and the normal tissue complication probability (NTCP). 20 patients treated for nasopharyngeal cancer were enrolled in the radiotherapy-oncology department of HCA. Systematic and random errors were quantified. The dosimetric and biological impact of these set-up errors on the target volume and organ-at-risk (OAR) coverage was assessed using dose-volume histogram, gEUD, TCP and NTCP calculations. For this purpose, an in-house software tool was developed and used. The standard deviations (1 SD) of the systematic and random set-up errors were calculated for the lateral and subclavicular fields and gave the following results: Σ = 0.63 ± 0.42 mm and σ = 3.75 ± 0.79 mm, respectively. Thus a planning organ at risk volume (PRV) margin of 3 mm was defined around the OARs, and a 5-mm margin was used around the clinical target volume. The gEUD, TCP and NTCP calculations obtained with and without set-up errors showed increased values for the tumour, where ΔgEUD (tumour) = 1.94% Gy (p = 0.00721) and ΔTCP = 2.03%. The toxicity of the OARs was quantified using gEUD and NTCP. The values of ΔgEUD (OARs) vary from 0.78% to 5.95% for the brainstem and the optic chiasm, respectively. The corresponding ΔNTCP varies from 0.15% to 0.53%, respectively. The quantification of set-up errors has a dosimetric and biological impact on the tumour and on the OARs. The in-house software developed using the concepts of the gEUD, TCP and NTCP biological models was successfully used in this study. It can also be used to optimize the treatment plans established for our patients. The gEUD, TCP and NTCP may be more suitable tools to assess treatment plans before treating patients.
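
    The gEUD underlying these comparisons reduces to a one-line formula once the dose-volume histogram is known. The snippet below is a generic implementation of the generalized EUD, not the in-house software mentioned in the abstract, and the dose bins, volumes, and volume-effect parameter a are purely illustrative.

    ```python
    import numpy as np

    def generalized_eud(doses_gy, volumes, a):
        """Generalized equivalent uniform dose: gEUD = (sum_i v_i * D_i**a)**(1/a),
        where v_i are fractional volumes (normalised to sum to 1) receiving dose D_i."""
        d = np.asarray(doses_gy, float)
        v = np.asarray(volumes, float)
        v = v / v.sum()                          # normalise to fractional volumes
        return float(np.sum(v * d ** a) ** (1.0 / a))

    # Illustrative differential DVH. A large positive a (serial-like OAR) is driven by
    # hot spots; a negative a (target) is penalised by cold spots.
    doses = [20.0, 40.0, 60.0, 66.0]
    vols = [0.10, 0.20, 0.50, 0.20]
    print(generalized_eud(doses, vols, a=8.0))    # OAR-like behaviour
    print(generalized_eud(doses, vols, a=-10.0))  # target-like behaviour
    ```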

  16. Global Warming Estimation From Microwave Sounding Unit

    NASA Technical Reports Server (NTRS)

    Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.; Dalu, G.

    1998-01-01

    Microwave Sounding Unit (MSU) Ch 2 data sets, collected from sequential, polar-orbiting, Sun-synchronous National Oceanic and Atmospheric Administration operational satellites, contain systematic calibration errors that are coupled to the diurnal temperature cycle over the globe. Since these coupled errors in MSU data differ between successive satellites, it is necessary to make compensatory adjustments to these multisatellite data sets in order to determine long-term global temperature change. With the aid of the observations during overlapping periods of successive satellites, we can determine such adjustments and use them to account for the coupled errors in the long-term time series of MSU Ch 2 global temperature. In turn, these adjusted MSU Ch 2 data sets can be used to yield the global temperature trend. In a pioneering study, Spencer and Christy (SC) (1990) developed a procedure to derive the global temperature trend from MSU Ch 2 data. In the SC procedure, the magnitude of the coupled errors is not determined explicitly; instead, based on some assumptions, these coupled errors are eliminated in three separate steps. Such a procedure can leave unaccounted residual errors in the time series of the temperature anomalies deduced by SC, which could lead to a spurious long-term temperature trend derived from their analysis. In the present study, we have developed a method that avoids the shortcomings of the SC procedure. Based on our analysis, we find there is a global warming of 0.23+/-0.12 K between 1980 and 1991. Also, in this study, the time series of global temperature anomalies constructed by removing the global mean annual temperature cycle compares favorably with a similar time series obtained from conventional observations of temperature.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, JY; Hong, DL

    Purpose: The purpose of this study is to investigate the patient set-up error and interfraction target coverage in cervical cancer using image-guided adaptive radiotherapy (IGART) with cone-beam computed tomography (CBCT). Methods: Twenty cervical cancer patients undergoing intensity modulated radiotherapy (IMRT) were randomly selected. All patients were matched to the isocenter using lasers with the skin markers. Three-dimensional CBCT projections were acquired by the Varian TrueBeam treatment system. Set-up errors were evaluated by radiation oncologists after CBCT correction. The clinical target volume (CTV) was delineated on each CBCT, and the planning target volume (PTV) coverage of each CBCT-CTV was analyzed. Results: A total of 152 CBCT scans were acquired from the twenty cervical cancer patients; the mean set-up errors in the longitudinal, vertical, and lateral directions were 3.57, 2.74, and 2.5 mm, respectively, without CBCT corrections. After corrections, these decreased to 1.83, 1.44, and 0.97 mm. For the target coverage, CBCT-CTV coverage without CBCT correction was 94% (143/152), and 98% (149/152) with correction. Conclusion: Use of CBCT verification to measure patient set-up errors can improve treatment accuracy. In addition, the set-up error corrections significantly improve the CTV coverage for cervical cancer patients.
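
    The population statistics reported above are typically derived from the table of daily couch corrections. The sketch below uses one common convention (group systematic error M as the mean of the patient means, Σ as the standard deviation of the patient means, σ as the root mean square of the per-patient standard deviations); the convention and the numbers are assumptions for illustration, not the study's data.

    ```python
    import numpy as np

    def population_setup_errors(shifts_by_patient):
        """shifts_by_patient: one 1D array per patient of daily shifts (mm) along a single axis."""
        patient_means = np.array([np.mean(s) for s in shifts_by_patient])
        patient_sds = np.array([np.std(s, ddof=1) for s in shifts_by_patient])
        M = patient_means.mean()                     # group systematic error
        Sigma = patient_means.std(ddof=1)            # population systematic error
        sigma = np.sqrt(np.mean(patient_sds ** 2))   # population random error (RMS of the SDs)
        return M, Sigma, sigma

    # Illustrative longitudinal shifts (mm) for three patients over four fractions each.
    shifts = [np.array([3.1, 4.0, 2.5, 3.6]),
              np.array([1.2, 0.8, 2.0, 1.5]),
              np.array([4.5, 3.9, 5.1, 4.2])]
    print(population_setup_errors(shifts))
    ```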

  18. Optimizing X-ray mirror thermal performance using matched profile cooling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Lin; Cocco, Daniele; Kelez, Nicholas

    2015-08-07

    To cover a large photon energy range, the length of an X-ray mirror is often longer than the beam footprint length for much of the applicable energy range. To limit thermal deformation of such a water-cooled X-ray mirror, a technique using side cooling with a cooled length shorter than the beam footprint length is proposed. This cooling length can be optimized by using finite-element analysis. For the Kirkpatrick–Baez (KB) mirrors at LCLS-II, the thermal deformation can be reduced by a factor of up to 30, compared with full-length cooling. Furthermore, a second, alternative technique, based on a similar principle, is presented: using a long, single-length cooling block on each side of the mirror and adding electric heaters between the cooling blocks and the mirror substrate. The electric heaters consist of a number of cells, located along the mirror length. The total effective length of the electric heater can then be adjusted by choosing which cells to energize, using electric power supplies. The residual height error can be minimized to 0.02 nm RMS by using optimal heater parameters (length and power density). Compared with a case without heaters, this residual height error is reduced by a factor of up to 45. The residual height error in the LCLS-II KB mirrors, due to free-electron laser beam heat load, can be reduced by a factor of ~11, below the requirement. The proposed techniques are also effective in reducing thermal slope errors and are, therefore, applicable to white beam mirrors in synchrotron radiation beamlines.

  19. Can the BMS Algorithm Decode Up to ⌊(d_G-g-1)/2⌋ Errors? Yes, but with Some Additional Remarks

    NASA Astrophysics Data System (ADS)

    Sakata, Shojiro; Fujisawa, Masaya

    It is a well-known fact [7], [9] that the BMS algorithm with majority voting can decode up to half the Feng-Rao designed distance d_FR. Since d_FR is not smaller than the Goppa designed distance d_G, that algorithm can correct up to ⌊(d_G-1)/2⌋ errors. On the other hand, it has been considered evident that the original BMS algorithm (without voting) [1], [2] can correct up to ⌊(d_G-g-1)/2⌋ errors, similarly to the basic algorithm of Skorobogatov-Vladut. But is it true? In this short paper, we show that it is true, although we need a few remarks and some additional procedures for determining the Groebner basis of the error locator ideal exactly. In fact, as the basic algorithm gives a set of polynomials whose zero set contains the error locators as a subset, it cannot always give the exact error locators, unless the syndrome equation is solved to find the error values in addition.

  20. Impact of patient-specific factors, irradiated left ventricular volume, and treatment set-up errors on the development of myocardial perfusion defects after radiation therapy for left-sided breast cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Elizabeth S.; Prosnitz, Robert G.; Yu Xiaoli

    2006-11-15

    Purpose: The aim of this study was to assess the impact of patient-specific factors, left ventricle (LV) volume, and treatment set-up errors on the rate of perfusion defects 6 to 60 months post-radiation therapy (RT) in patients receiving tangential RT for left-sided breast cancer. Methods and Materials: Between 1998 and 2005, a total of 153 patients were enrolled onto an institutional review board-approved prospective study and had pre- and serial post-RT (6-60 months) cardiac perfusion scans to assess for perfusion defects. Of the patients, 108 had normal pre-RT perfusion scans and available follow-up data. The impact of patient-specific factors on the rate of perfusion defects was assessed at various time points using univariate and multivariate analysis. The impact of set-up errors on the rate of perfusion defects was also analyzed using a one-tailed Fisher's Exact test. Results: Consistent with our prior results, the volume of LV in the RT field was the most significant predictor of perfusion defects on both univariate (p = 0.0005 to 0.0058) and multivariate analysis (p = 0.0026 to 0.0029). Body mass index (BMI) was the only significant patient-specific factor on both univariate (p = 0.0005 to 0.022) and multivariate analysis (p = 0.0091 to 0.05). In patients with very small volumes of LV in the planned RT fields, the rate of perfusion defects was significantly higher when the fields were set up 'too deep' (83% vs. 30%, p = 0.059). The frequency of deep set-up errors was significantly higher among patients with BMI ≥25 kg/m² compared with patients of normal weight (47% vs. 28%, p = 0.068). Conclusions: BMI ≥25 kg/m² may be a significant risk factor for cardiac toxicity after RT for left-sided breast cancer, possibly because of more frequent deep set-up errors resulting in the inclusion of additional heart in the RT fields. Further study is necessary to better understand the impact of patient-specific factors and set-up errors on the development of RT-induced perfusion defects.

  1. Improving automatic earthquake locations in subduction zones: a case study for GEOFON catalog of Tonga-Fiji region

    NASA Astrophysics Data System (ADS)

    Nooshiri, Nima; Heimann, Sebastian; Saul, Joachim; Tilmann, Frederik; Dahm, Torsten

    2015-04-01

    Automatic earthquake locations are sometimes associated with very large residuals, up to 10 s even for clear arrivals, especially for regional stations in subduction zones because of their strongly heterogeneous velocity structure. Although these residuals are most likely not related to measurement errors but to unmodelled velocity heterogeneity, these stations are usually removed from, or down-weighted in, the location procedure. While this is possible for large events, it may not be useful if the earthquake is weak. In this case, implementation of travel-time station corrections may significantly improve the automatic locations. Here, the shrinking-box source-specific station term (SSST) method [Lin and Shearer, 2005] has been applied to improve the relative location accuracy of 1678 events that occurred in the Tonga subduction zone between 2010 and mid-2014. Picks were obtained from the GEOFON earthquake bulletin for all available station networks. We calculated a set of timing corrections for each station which vary as a function of source position. A separate time correction was computed for each source-receiver path at the given station by smoothing the residual field over nearby events. We begin with a very large smoothing radius essentially encompassing the whole event set and iterate by progressively shrinking the smoothing radius. In this way, we attempt to correct for the systematic errors that are introduced into the locations by inaccuracies in the assumed velocity structure, without solving for a new velocity model itself. One of the advantages of the SSST technique is that the event location part of the calculation is separate from the station term calculation and can be performed using any single-event location method. In this study, we applied a non-linear, probabilistic, global-search earthquake location method using the software package NonLinLoc [Lomax et al., 2000]. The non-linear location algorithm implemented in NonLinLoc is less sensitive to the problem of local misfit minima in the model space. Moreover, the spatial errors estimated by NonLinLoc are much more reliable than those derived by linearized algorithms. According to the obtained results, the root-mean-square (RMS) residual decreased from 1.37 s for the original GEOFON catalog (using a global 1-D velocity model without station-specific corrections) to 0.90 s for our SSST catalog. Our results show a 45-70% reduction of the median absolute deviation (MAD) of the travel-time residuals at regional stations. Additionally, our locations exhibit less scatter in depth and a sharper image of the seismicity associated with the subducting slab compared to the initial locations.
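
    The shrinking-box station-term idea can be illustrated independently of any particular locator. The toy sketch below is not the authors' code and does not call NonLinLoc; it smooths each station's residuals over events within a radius of the target event to build source-specific corrections, then shrinks the radius. In a real workflow the events would be relocated with corrected travel times after every iteration, which is omitted here.

    ```python
    import numpy as np

    def shrinking_box_station_terms(event_xyz, residuals, radii):
        """Source-specific station terms by iterative residual smoothing.

        event_xyz : (n_events, 3) hypocentre coordinates (km)
        residuals : (n_events, n_stations) travel-time residuals (s)
        radii     : decreasing smoothing radii (km), one per iteration"""
        dists = np.linalg.norm(event_xyz[:, None, :] - event_xyz[None, :, :], axis=-1)
        corrections = np.zeros_like(residuals)
        remaining = residuals.copy()
        for radius in radii:
            near = dists <= radius                                 # events inside the shrinking box
            for i in range(len(event_xyz)):
                corrections[i] += remaining[near[i]].mean(axis=0)  # smooth per station over neighbours
            remaining = residuals - corrections                    # residual left after correction
        return corrections

    # Tiny synthetic example: 50 events, 4 stations, a position-dependent bias plus noise.
    rng = np.random.default_rng(0)
    xyz = rng.uniform(0.0, 100.0, size=(50, 3))
    bias = 0.05 * xyz[:, :1] @ np.ones((1, 4))                     # bias grows with x at every station
    res = bias + rng.normal(0.0, 0.2, size=(50, 4))
    corr = shrinking_box_station_terms(xyz, res, radii=[100.0, 50.0, 25.0])
    print(np.std(res), np.std(res - corr))                         # corrected scatter is smaller
    ```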

  2. Origins of coevolution between residues distant in protein 3D structures

    PubMed Central

    Ovchinnikov, Sergey; Kamisetty, Hetunandan; Baker, David

    2017-01-01

    Residue pairs that directly coevolve in protein families are generally close in protein 3D structures. Here we study the exceptions to this general trend—directly coevolving residue pairs that are distant in protein structures—to determine the origins of evolutionary pressure on spatially distant residues and to understand the sources of error in contact-based structure prediction. Over a set of 4,000 protein families, we find that 25% of directly coevolving residue pairs are separated by more than 5 Å in protein structures and 3% by more than 15 Å. The majority (91%) of directly coevolving residue pairs in the 5–15 Å range are found to be in contact in at least one homologous structure—these exceptions arise from structural variation in the family in the region containing the residues. Thirty-five percent of the exceptions greater than 15 Å are at homo-oligomeric interfaces, 19% arise from family structural variation, and 27% are in repeat proteins likely reflecting alignment errors. Of the remaining long-range exceptions (<1% of the total number of coupled pairs), many can be attributed to close interactions in an oligomeric state. Overall, the results suggest that directly coevolving residue pairs not in repeat proteins are spatially proximal in at least one biologically relevant protein conformation within the family; we find little evidence for direct coupling between residues at spatially separated allosteric and functional sites or for increased direct coupling between residue pairs on putative allosteric pathways connecting them. PMID:28784799

  3. Fungal solid state fermentation on agro-industrial wastes for acid wastewater decolorization in a continuous flow packed-bed bioreactor.

    PubMed

    Iandolo, Donata; Amore, Antonella; Birolo, Leila; Leo, Gabriella; Olivieri, Giuseppe; Faraco, Vincenza

    2011-08-01

    This study was aimed at developing a process of solid state fermentation (SSF) with the fungi Pleurotus ostreatus and Trametes versicolor on apple processing residues for wastewater decolorization. Both fungi were able to colonize apple residues without any addition of nutrients, material support or water. P. ostreatus produced the highest levels of laccases (up to 9 U g⁻¹ of dry matter) and xylanases (up to 80 U g⁻¹ of dry matter). A repeated batch decolorization experiment was set up with apple residues colonized by P. ostreatus, achieving 50% decolorization and 100% detoxification after 24 h, and, adding fresh wastewater every 24 h, a constant decolorization of 50% was measured for at least 1 month. A continuous decolorization experiment was set up in a packed-bed reactor based on colonized apple residues, achieving a performance of 100 mg dye L⁻¹ day⁻¹ at a retention time of 50 h. Copyright © 2011 Elsevier Ltd. All rights reserved.

  4. Bayesian inversions of a dynamic vegetation model in four European grassland sites

    NASA Astrophysics Data System (ADS)

    Minet, J.; Laloy, E.; Tychon, B.; François, L.

    2015-01-01

    Eddy covariance data from four European grassland sites are used to probabilistically invert the CARAIB dynamic vegetation model (DVM) with ten unknown parameters, using the DREAM(ZS) Markov chain Monte Carlo (MCMC) sampler. We compare model inversions considering both homoscedastic and heteroscedastic eddy covariance residual errors, with variances either fixed a priori or jointly inferred with the model parameters. Agreement between measured and simulated data during calibration is comparable with previous studies, with root-mean-square errors (RMSE) of simulated daily gross primary productivity (GPP), ecosystem respiration (RECO) and evapotranspiration (ET) ranging from 1.73 to 2.19 g C m⁻² day⁻¹, 1.04 to 1.56 g C m⁻² day⁻¹, and 0.50 to 1.28 mm day⁻¹, respectively. In validation, mismatches between measured and simulated data are larger, but still with Nash-Sutcliffe efficiency scores above 0.5 for three out of the four sites. Although measurement errors associated with eddy covariance data are known to be heteroscedastic, we show that assuming a classical linear heteroscedastic model of the residual errors in the inversion does not fully remove heteroscedasticity. Since the employed heteroscedastic error model allows for larger deviations between simulated and measured data as the magnitude of the measured data increases, this error model expectedly leads to poorer data fitting compared to inversions considering a constant variance of the residual errors. Furthermore, sampling the residual error variances along with the model parameters results in overall similar model parameter posterior distributions to those obtained by fixing these variances beforehand, while slightly improving model performance. Although the calibrated model is generally capable of fitting the data within measurement errors, systematic bias in the model simulations is observed. This is likely due to model inadequacies such as shortcomings in the photosynthesis modelling. Besides model behaviour, differences between model parameter posterior distributions among the four grassland sites are also investigated. It is shown that the marginal distributions of the specific leaf area and characteristic mortality time parameters can be explained by site-specific ecophysiological characteristics. Lastly, the possibility of finding a common set of parameters among the four experimental sites is discussed.
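
    The homoscedastic and heteroscedastic residual-error models being compared can both be written as a Gaussian log-likelihood whose standard deviation either stays constant or grows linearly with the magnitude of the measurement. The sketch below is a generic formulation of that idea, not the CARAIB or DREAM(ZS) code, and all coefficients are placeholders; in a joint inversion the error-model parameters would be sampled alongside the model parameters.

    ```python
    import numpy as np

    def gaussian_log_likelihood(obs, sim, sigma0, slope=0.0):
        """Gaussian log-likelihood with residual standard deviation sigma0 + slope*|obs|.

        slope = 0 recovers the homoscedastic case; slope > 0 lets the error grow with
        the magnitude of the measurement (a simple linear heteroscedastic model)."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        sigma = sigma0 + slope * np.abs(obs)
        resid = obs - sim
        return float(np.sum(-0.5 * np.log(2.0 * np.pi * sigma ** 2)
                            - 0.5 * (resid / sigma) ** 2))

    # Placeholder GPP-like values (g C m-2 day-1); larger fluxes get larger error bars
    # under the heteroscedastic model.
    obs = [1.0, 3.0, 6.0, 9.0]
    sim = [1.2, 2.5, 6.8, 8.1]
    print(gaussian_log_likelihood(obs, sim, sigma0=0.5))              # homoscedastic
    print(gaussian_log_likelihood(obs, sim, sigma0=0.2, slope=0.1))   # heteroscedastic
    ```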

  5. Tropospheric Correction for InSAR Using Interpolated ECMWF Data and GPS Zenith Total Delay

    NASA Technical Reports Server (NTRS)

    Webb, Frank H.; Fishbein, Evan F.; Moore, Angelyn W.; Owen, Susan E.; Fielding, Eric J.; Granger, Stephanie L.; Bjorndahl, Fredrik; Lofgren, Johan

    2011-01-01

    To mitigate atmospheric errors caused by the troposphere, which is a limiting error source for spaceborne interferometric synthetic aperture radar (InSAR) imaging, a tropospheric correction method has been developed using data from the European Centre for Medium-Range Weather Forecasts (ECMWF) and the Global Positioning System (GPS). The ECMWF data were interpolated using a Stretched Boundary Layer Model (SBLM), and ground-based GPS estimates of the tropospheric delay from the Southern California Integrated GPS Network were interpolated using modified Gaussian and inverse distance weighted interpolations. The resulting Zenith Total Delay (ZTD) correction maps have been evaluated, both separately and using a combination of the two data sets, for three short-interval InSAR pairs from Envisat during 2006 over an area stretching northeast from the Los Angeles basin towards Death Valley. Results show that the root mean square (rms) in the InSAR images was greatly reduced, corresponding to a significant reduction in atmospheric noise of up to 32 percent. However, for some of the images, the rms increased and large errors remained after applying the tropospheric correction. The residuals showed a constant gradient over the area, suggesting that a remaining orbit error from Envisat was present. The orbit reprocessing in ROI_pac and the plane fitting both require that the only remaining error in the InSAR image be the orbit error. If this is not fulfilled, the correction can still be made, but it will then absorb all remaining errors by treating them as orbit errors. By correcting for tropospheric noise, the biggest error source is removed, and the orbit error becomes apparent and can be corrected for.
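
    As a rough illustration of how scattered GPS zenith delay estimates can be turned into a correction map, the sketch below implements plain inverse distance weighted interpolation; the station coordinates and delay values are made up, and the 'modified Gaussian' variant mentioned above is not reproduced.

```python
import numpy as np

def idw_interpolate(xy_stations, ztd_values, xy_grid, power=2.0, eps=1e-12):
    """Inverse distance weighted interpolation of zenith total delay (ZTD).

    xy_stations: (n, 2) station coordinates (e.g., km in a local projection)
    ztd_values:  (n,)   ZTD estimates at the stations (e.g., mm)
    xy_grid:     (m, 2) grid points where the correction map is evaluated
    """
    d = np.linalg.norm(xy_grid[:, None, :] - xy_stations[None, :, :], axis=2)
    w = 1.0 / (d**power + eps)            # closer stations get larger weights
    return (w @ ztd_values) / w.sum(axis=1)

# Hypothetical example: three stations and four map pixels
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
ztd = np.array([2400.0, 2385.0, 2410.0])          # mm
grid = np.array([[2.0, 2.0], [8.0, 2.0], [2.0, 8.0], [8.0, 8.0]])
print(idw_interpolate(stations, ztd, grid))
```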

  6. Wavefront error budget and optical manufacturing tolerance analysis for 1.8m telescope system

    NASA Astrophysics Data System (ADS)

    Wei, Kai; Zhang, Xuejun; Xian, Hao; Rao, Changhui; Zhang, Yudong

    2010-05-01

    We present the wavefront error budget and optical manufacturing tolerance analysis for a 1.8 m telescope. The error budget accounts for aberrations induced by optical design residuals, manufacturing errors, mounting effects, and misalignments. The initial error budget has been generated from the top down. There will also be an ongoing effort to track the errors from the bottom up, which will aid in identifying critical areas of concern. The resolution of conflicts will involve a continual process of review and comparison of the top-down and bottom-up approaches, modifying both as needed to meet the top-level requirements in the end. The adaptive optics system will correct for some of the telescope system imperfections, but it cannot be assumed that all errors will be corrected. Therefore, two kinds of error budgets are presented: a non-AO top-down error budget and a with-AO system error budget. The main advantage of the method is that it simultaneously describes the final performance of the telescope and gives the optical manufacturer maximum freedom to define and possibly modify its own manufacturing error budget.
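
    Top-down error budgets of the kind described here are commonly combined by root-sum-squaring statistically independent contributors. The sketch below shows only that bookkeeping, with made-up allocation values rather than the actual numbers for the 1.8 m telescope.

```python
import numpy as np

# Hypothetical wavefront error allocations in nm RMS for independent terms
budget = {
    "design residual": 30.0,
    "manufacturing": 45.0,
    "mounting": 25.0,
    "misalignment": 20.0,
}

# Root-sum-square combination of statistically independent error contributors
total_rms = np.sqrt(sum(v**2 for v in budget.values()))
print(f"combined wavefront error: {total_rms:.1f} nm RMS")

# Check against a hypothetical top-level requirement of 70 nm RMS
requirement = 70.0
print("within budget" if total_rms <= requirement else "over budget")
```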

  7. Detection of Unexpected High Correlations between Balance Calibration Loads and Load Residuals

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; Volden, T.

    2014-01-01

    An algorithm was developed for the assessment of strain-gage balance calibration data that makes it possible to systematically investigate potential sources of unexpected high correlations between calibration load residuals and applied calibration loads. The algorithm investigates correlations on a load series by load series basis. The linear correlation coefficient is used to quantify the correlations. It is computed for all possible pairs of calibration load residuals and applied calibration loads that can be constructed for the given balance calibration data set. An unexpected high correlation between a load residual and a load is detected if three conditions are met: (i) the absolute value of the correlation coefficient of a residual/load pair exceeds 0.95; (ii) the maximum of the absolute values of the residuals of a load series exceeds 0.25 % of the load capacity; (iii) the load component of the load series is intentionally applied. Data from a baseline calibration of a six-component force balance is used to illustrate the application of the detection algorithm to a real-world data set. This analysis also showed that the detection algorithm can identify load alignment errors as long as repeat load series are contained in the balance calibration data set that do not suffer from load alignment problems.
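
    The three detection conditions above translate almost directly into code. The sketch below is a hypothetical implementation (the array layout, names, and per-series bookkeeping are assumptions), with the thresholds taken from the abstract: |r| > 0.95, maximum residual above 0.25% of capacity, and an intentionally applied load component.

```python
import numpy as np

def flag_suspicious_pairs(residuals, loads, capacity, intentionally_applied,
                          r_threshold=0.95, res_threshold=0.0025):
    """Flag residual/load pairs of one load series with unexpectedly high correlation.

    residuals: dict component -> residual array for the load series
    loads:     dict component -> applied load array for the load series
    capacity:  dict component -> load capacity of that component
    intentionally_applied: set of components intentionally loaded in this series
    """
    flagged = []
    for res_comp, res in residuals.items():
        if np.max(np.abs(res)) <= res_threshold * capacity[res_comp]:
            continue  # condition (ii): residuals too small to matter
        for load_comp, load in loads.items():
            if load_comp not in intentionally_applied:
                continue  # condition (iii): only intentionally applied loads
            r = np.corrcoef(res, load)[0, 1]
            if abs(r) > r_threshold:  # condition (i): high linear correlation
                flagged.append((res_comp, load_comp, r))
    return flagged

# Hypothetical single load series: residuals track the applied normal force
rng = np.random.default_rng(0)
load = np.linspace(0.0, 1000.0, 20)
res = {"N1": 0.004 * load + rng.normal(0.0, 0.1, 20)}
print(flag_suspicious_pairs(res, {"N1": load}, {"N1": 1000.0}, {"N1"}))
```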

  8. Spline curve matching with sparse knot sets: applications to deformable shape detection and recognition

    Treesearch

    Sang-Mook Lee; A. Lynn Abbott; Neil A. Clark; Philip A. Araman

    2003-01-01

    Splines can be used to approximate noisy data with a few control points. This paper presents a new curve matching method for deformable shapes using two-dimensional splines. In contrast to the residual error criterion, which is based on the relative locations of corresponding knot points and is therefore reliable primarily for dense point sets, we use the deformation energy of...

  9. Assessment and quantification of patient set-up errors in nasopharyngeal cancer patients and their biological and dosimetric impact in terms of generalized equivalent uniform dose (gEUD), tumour control probability (TCP) and normal tissue complication probability (NTCP)

    PubMed Central

    Marcie, S; Fellah, M; Chami, S; Mekki, F

    2015-01-01

    Objective: The aim of this study is to assess and quantify patients' set-up errors using an electronic portal imaging device and to evaluate their dosimetric and biological impact in terms of generalized equivalent uniform dose (gEUD) on predictive models, such as the tumour control probability (TCP) and the normal tissue complication probability (NTCP). Methods: 20 patients treated for nasopharyngeal cancer were enrolled in the radiotherapy–oncology department of HCA. Systematic and random errors were quantified. The dosimetric and biological impact of these set-up errors on target volume and organ-at-risk (OAR) coverage was assessed using dose–volume histogram, gEUD, TCP and NTCP calculations. For this purpose, in-house software was developed and used. Results: The standard deviations (1 SDs) of the systematic and random set-up errors were calculated for the lateral and subclavicular fields and gave the following results: ∑ = 0.63 ± (0.42) mm and σ = 3.75 ± (0.79) mm, respectively. Thus a planning organ-at-risk volume (PRV) margin of 3 mm was defined around the OARs, and a 5-mm margin was used around the clinical target volume. The gEUD, TCP and NTCP calculations obtained with and without set-up errors showed increased values for the tumour, where ΔgEUD (tumour) = 1.94% Gy (p = 0.00721) and ΔTCP = 2.03%. The toxicity of OARs was quantified using gEUD and NTCP. The values of ΔgEUD (OARs) vary from 0.78% to 5.95% for the brainstem and the optic chiasm, respectively. The corresponding ΔNTCP varies from 0.15% to 0.53%, respectively. Conclusion: The quantification of set-up errors has a dosimetric and biological impact on the tumour and on the OARs. The in-house software developed around the gEUD, TCP and NTCP biological models has been successfully used in this study. It can also be used to optimize the treatment plans established for our patients. Advances in knowledge: The gEUD, TCP and NTCP may be more suitable tools to assess treatment plans before treating the patients. PMID:25882689
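
    For context, the generalized equivalent uniform dose referred to throughout this record is conventionally computed from a differential dose-volume histogram as gEUD = (sum_i v_i * D_i^a)^(1/a). The sketch below implements that textbook definition; the DVH values and the volume-effect parameter a are illustrative and not taken from the paper.

```python
import numpy as np

def geud(doses, volumes, a):
    """Generalized equivalent uniform dose from a differential DVH.

    doses:   dose bin centres (Gy)
    volumes: fractional volume in each bin (normalized to sum to 1)
    a:       volume-effect parameter (large positive for serial OARs,
             negative for tumours)
    """
    v = np.asarray(volumes, dtype=float)
    v = v / v.sum()
    return (np.sum(v * np.asarray(doses, dtype=float) ** a)) ** (1.0 / a)

# Hypothetical OAR DVH: most of the volume at low dose plus a small hot spot
doses = [10.0, 30.0, 50.0, 66.0]
volumes = [0.5, 0.3, 0.15, 0.05]
print(geud(doses, volumes, a=8.0))   # serial-like organ, dominated by the hot spot
```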

  10. New error calibration tests for gravity models using subset solutions and independent data - Applied to GEM-T3

    NASA Technical Reports Server (NTRS)

    Lerch, F. J.; Nerem, R. S.; Chinn, D. S.; Chan, J. C.; Patel, G. B.; Klosko, S. M.

    1993-01-01

    A new method has been developed to provide a direct test of the error calibrations of gravity models based on actual satellite observations. The basic approach projects the error estimates of the gravity model parameters onto satellite observations, and the results of these projections are then compared with data residuals computed from the orbital fits. To allow specific testing of the gravity error calibrations, subset solutions are computed based on the data set and data weighting of the gravity model. The approach is demonstrated using GEM-T3 to show that the gravity error estimates are well calibrated and that reliable predictions of orbit accuracies can be achieved for independent orbits.

  11. New approach for the identification of implausible values and outliers in longitudinal childhood anthropometric data.

    PubMed

    Shi, Joy; Korsiak, Jill; Roth, Daniel E

    2018-03-01

    We aimed to demonstrate the use of jackknife residuals to take advantage of the longitudinal nature of available growth data in assessing potential biologically implausible values and outliers. Artificial errors were induced in 5% of length, weight, and head circumference measurements, measured on 1211 participants from the Maternal Vitamin D for Infant Growth (MDIG) trial from birth to 24 months of age. Each child's sex- and age-standardized z-score or raw measurements were regressed as a function of age in child-specific models. Each error responsible for a biologically implausible decrease between a consecutive pair of measurements was identified based on the higher of the two absolute values of jackknife residuals in each pair. In further analyses, outliers were identified as those values beyond fixed cutoffs of the jackknife residuals (e.g., greater than +5 or less than -5 in primary analyses). Kappa, sensitivity, and specificity were calculated over 1000 simulations to assess the ability of the jackknife residual method to detect induced errors and to compare these methods with the use of conditional growth percentiles and conventional cross-sectional methods. Among the induced errors that resulted in a biologically implausible decrease in measurement between two consecutive values, the jackknife residual method identified the correct value in 84.3%-91.5% of these instances when applied to the sex- and age-standardized z-scores, with kappa values ranging from 0.685 to 0.795. Sensitivity and specificity of the jackknife method were higher than those of the conditional growth percentile method, but specificity was lower than for conventional cross-sectional methods. Using jackknife residuals provides a simple method to identify biologically implausible values and outliers in longitudinal child growth data sets in which each child contributes at least 4 serial measurements. Crown Copyright © 2018. Published by Elsevier Inc. All rights reserved.
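
    A minimal sketch of the screening idea described above, assuming a simple linear-in-age child-specific model: each child's standardized measurements are regressed on age, leave-one-out (jackknife) residuals are computed, and values beyond the ±5 cutoff are flagged. The leverage term of the full externally studentized residual is omitted for brevity.

```python
import numpy as np

def jackknife_residuals(age, z):
    """Leave-one-out (jackknife) residuals of a child-specific linear
    regression of standardized measurements on age (leverage term omitted)."""
    age, z = np.asarray(age, float), np.asarray(z, float)
    n = len(z)
    resid = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i                  # refit the line without point i
        coef = np.polyfit(age[keep], z[keep], 1)
        pred = np.polyval(coef, age[i])
        s = np.std(z[keep] - np.polyval(coef, age[keep]), ddof=2)
        resid[i] = (z[i] - pred) / s
    return resid

# Hypothetical child with 6 visits; the 4th z-score contains an induced error
age = [0, 3, 6, 9, 12, 24]
z = [-0.2, -0.1, 0.0, -3.5, 0.2, 0.3]
r = jackknife_residuals(age, z)
print(np.where(np.abs(r) > 5)[0])   # indices of flagged measurements
```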

  12. Defining robustness protocols: a method to include and evaluate robustness in clinical plans

    NASA Astrophysics Data System (ADS)

    McGowan, S. E.; Albertini, F.; Thomas, S. J.; Lomax, A. J.

    2015-04-01

    We aim to define a site-specific robustness protocol to be used during the clinical plan evaluation process. The plan robustness of 16 skull base IMPT plans to systematic range and random set-up errors has been retrospectively and systematically analysed. This was done by calculating the error-bar dose distribution (ebDD) for all the plans and by defining metrics used to establish protocols aiding the plan assessment. Additionally, an example of how to clinically use the defined robustness database is given, whereby a plan with sub-optimal brainstem robustness was identified. The advantage of using different beam arrangements to improve the plan robustness was analysed. Using the ebDD it was found that range errors had a smaller effect on the dose distribution than the corresponding set-up error in a single fraction, and that organs at risk were most robust to the range errors, whereas the target was more robust to set-up errors. A database was created to aid planners in terms of plan robustness aims in these volumes. This resulted in the definition of site-specific robustness protocols. The use of robustness constraints allowed for the identification of a specific patient who may have benefited from a more individualized treatment. A new beam arrangement was shown to be preferable when balancing conformality and robustness for this case. The ebDD and error-bar volume histogram proved effective in analysing plan robustness. The process of retrospective analysis could be used to establish site-specific robustness planning protocols in proton therapy. These protocols allow the planner to identify plans that, although delivering a dosimetrically adequate dose distribution, have sub-optimal robustness to these uncertainties. For these cases the use of different beam start conditions may improve the plan robustness to set-up and range uncertainties.

  13. [Outlier sample discriminating methods for building calibration model in melons quality detecting using NIR spectra].

    PubMed

    Tian, Hai-Qing; Wang, Chun-Guang; Zhang, Hai-Jun; Yu, Zhi-Hong; Li, Jian-Kang

    2012-11-01

    Outlier samples strongly influence the precision of the calibration model in soluble solids content measurement of melons using NIR spectra. According to the possible sources of outlier samples, three methods (predicted concentration residual test; Chauvenet test; leverage and studentized residual test) were used to discriminate these outliers. Nine suspicious outliers were detected in the calibration set, which included 85 fruit samples. Considering that the 9 suspicious outlier samples might contain some non-outlier samples, they were returned to the model one by one to see whether or not they influenced the model and its prediction precision. In this way, 5 samples that were helpful to the model rejoined the calibration set, and a new model was developed with a correlation coefficient (r) of 0.889 and a root mean square error of calibration (RMSEC) of 0.601 degrees Brix. For 35 unknown samples, the root mean square error of prediction (RMSEP) was 0.854 degrees Brix. This model performed better than the model developed without eliminating any outliers from the calibration set (r = 0.797, RMSEC = 0.849 degrees Brix, RMSEP = 1.19 degrees Brix), and it was more representative and stable than the model developed with all 9 samples eliminated from the calibration set (r = 0.892, RMSEC = 0.605 degrees Brix, RMSEP = 0.862 degrees Brix).

  14. A comparison of two- and three-dimensional tracer transport within a stratospheric circulation model

    NASA Technical Reports Server (NTRS)

    Schneider, H.-R.; Geller, M. A.

    1985-01-01

    Use of the residual circulation for stratospheric tracer transport has been compared to a fully three-dimensional calculation. The wind fields used in this study were obtained from a global, semispectral, primitive equation model, extending from 10 to 100 km in altitude. Comparisons were done with a passive tracer and an ozone-like substance over a two-month period corresponding to a Northern Hemisphere winter. It was found that the use of the residual circulation can lead to errors in the tracer concentrations of about a factor of 2. The error is made up of two components. One is fluctuating with a period of approximately one month and reflects directly the wave transience that occurs on that time-scale. The second part is increasing steadily over the integration period and results from an overestimate of the vertical transport by the residual circulation. Furthermore, the equatorward and upward mixing that occurs with transport by the three-dimensional circulation at low latitudes is not well reproduced when the residual circulation is used.

  15. Evaluation of the geomorphometric results and residual values of a robust plane fitting method applied to different DTMs of various scales and accuracy

    NASA Astrophysics Data System (ADS)

    Koma, Zsófia; Székely, Balázs; Dorninger, Peter; Kovács, Gábor

    2013-04-01

    Due to the need for quantitative analysis of various geomorphological landforms, the importance of fast and effective automatic processing of different kinds of digital terrain models (DTMs) is increasing. The robust plane fitting (segmentation) method, developed at the Institute of Photogrammetry and Remote Sensing at Vienna University of Technology, allows the processing of large 3D point clouds (containing millions of points), performs automatic detection of the planar elements of the surface via parameter estimation, and provides a considerable data reduction for the modeled area. Its geoscientific application allows the modeling of different landforms with the fitted planes as planar facets. In our study we analyze the resulting set of fitted planes in terms of accuracy, model reliability and dependence on the input parameters. To this end we used DTMs of different scales and accuracy: (1) an artificially generated 3D point cloud model with different magnitudes of error; (2) LiDAR data with 0.1 m error; (3) the SRTM (Shuttle Radar Topography Mission) DTM database with 5 m accuracy; (4) DTM data from the HRSC (High Resolution Stereo Camera) of the planet Mars with 10 m error. The analysis of the simulated 3D point cloud with normally distributed errors comprised different kinds of statistical tests (for example Chi-square and Kolmogorov-Smirnov tests) applied to the residual values and an evaluation of the dependence of the residual values on the input parameters. These tests were repeated on the real data, supplemented with a categorization of the segmentation result depending on the input parameters, model reliability and the geomorphological meaning of the fitted planes. The simulation results show that for the artificially generated data with normally distributed errors the null hypothesis can be accepted, the residual value distribution being also normal, but in the tests on the real data the residual value distribution is often mixed or unknown. The residual values are found to depend mainly on two input parameters (standard deviation and maximum point-plane distance, both defining distance thresholds for assigning points to a segment), and the curvature of the surface affected the distributions most. The results of the analysis helped to decide which parameter set is best for further modelling and provides the highest accuracy. With these results in mind, quasi-automatic modelling of planar (for example plateau-like) features became more successful and often more accurate. These studies were carried out partly in the framework of the TMIS.ascrea project (Nr. 2001978) financed by the Austrian Research Promotion Agency (FFG); the contribution of ZsK was partly funded by Campus Hungary Internship TÁMOP-424B1.
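
    A toy version of the residual analysis described above, under the assumption of a single plane and synthetic data: fit a plane by least squares and apply a Kolmogorov-Smirnov test to the standardized residuals. The robust, multi-segment workflow of the TU Wien software is not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic DTM patch: a tilted plane z = 0.3x - 0.1y + 2 with normal noise
x, y = rng.uniform(0, 100, 2000), rng.uniform(0, 100, 2000)
z = 0.3 * x - 0.1 * y + 2.0 + rng.normal(0.0, 0.1, x.size)

# Least-squares plane fit z ~ a*x + b*y + c
A = np.column_stack([x, y, np.ones_like(x)])
coef, *_ = np.linalg.lstsq(A, z, rcond=None)
residuals = z - A @ coef

# Kolmogorov-Smirnov test of the standardized residuals against N(0, 1)
standardized = (residuals - residuals.mean()) / residuals.std(ddof=3)
ks_stat, p_value = stats.kstest(standardized, "norm")
print(f"plane coefficients: {coef}, KS statistic: {ks_stat:.3f}, p = {p_value:.3f}")
```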

  16. Imaging phased telescope array study

    NASA Technical Reports Server (NTRS)

    Harvey, James E.

    1989-01-01

    The problems encountered in obtaining a wide field-of-view with large, space-based direct imaging phased telescope arrays were considered. After defining some of the critical systems issues, previous relevant work in the literature was reviewed and summarized. An extensive list was made of potential error sources, and the error sources were categorized in the form of an error budget tree including optical design errors, optical fabrication errors, assembly and alignment errors, and environmental errors. After choosing a top-level image quality requirement as a goal, a preliminary top-down error budget allocation was performed; then, based upon engineering experience, detailed analysis, or data from the literature, a bottom-up error budget reallocation was performed in an attempt to achieve an equitable distribution of difficulty in satisfying the various allocations. This exercise provided a realistic allocation for residual off-axis optical design errors in the presence of state-of-the-art optical fabrication and alignment errors. Three different computational techniques were developed for computing the image degradation of phased telescope arrays due to aberrations of the individual telescopes. Parametric studies and sensitivity analyses were then performed for a variety of subaperture configurations and telescope design parameters in an attempt to determine how the off-axis performance of a phased telescope array varies as the telescopes are scaled up in size. The Air Force Weapons Laboratory (AFWL) multipurpose telescope testbed (MMTT) configuration was analyzed in detail with regard to image degradation due to field curvature and distortion of the individual telescopes as they are scaled up in size.

  17. Ultrasound visual feedback treatment and practice variability for residual speech sound errors

    PubMed Central

    Preston, Jonathan L.; McCabe, Patricia; Rivera-Campos, Ahmed; Whittle, Jessica L.; Landry, Erik; Maas, Edwin

    2014-01-01

    Purpose The goals were to (1) test the efficacy of a motor-learning based treatment that includes ultrasound visual feedback for individuals with residual speech sound errors, and (2) explore whether the addition of prosodic cueing facilitates speech sound learning. Method A multiple baseline single subject design was used, replicated across 8 participants. For each participant, one sound context was treated with ultrasound plus prosodic cueing for 7 sessions, and another sound context was treated with ultrasound but without prosodic cueing for 7 sessions. Sessions included ultrasound visual feedback as well as non-ultrasound treatment. Word-level probes assessing untreated words were used to evaluate retention and generalization. Results For most participants, increases in accuracy of target sound contexts at the word level were observed with the treatment program regardless of whether prosodic cueing was included. Generalization between onset singletons and clusters was observed, as well as generalization to sentence-level accuracy. There was evidence of retention during post-treatment probes, including at a two-month follow-up. Conclusions A motor-based treatment program that includes ultrasound visual feedback can facilitate learning of speech sounds in individuals with residual speech sound errors. PMID:25087938

  18. Sources of medical error in refractive surgery.

    PubMed

    Moshirfar, Majid; Simpson, Rachel G; Dave, Sonal B; Christiansen, Steven M; Edmonds, Jason N; Culbertson, William W; Pascucci, Stephen E; Sher, Neal A; Cano, David B; Trattler, William B

    2013-05-01

    To evaluate the causes of laser programming errors in refractive surgery and outcomes in these cases. In this multicenter, retrospective chart review, 22 eyes of 18 patients who had incorrect data entered into the refractive laser computer system at the time of treatment were evaluated. Cases were analyzed to uncover the etiology of these errors, patient follow-up treatments, and final outcomes. The results were used to identify potential methods to avoid similar errors in the future. Every patient experienced compromised uncorrected visual acuity requiring additional intervention, and 7 of 22 eyes (32%) lost corrected distance visual acuity (CDVA) of at least one line. Sixteen patients were suitable candidates for additional surgical correction to address these residual visual symptoms and six were not. Thirteen of 22 eyes (59%) received surgical follow-up treatment; nine eyes were treated with contact lenses. After follow-up treatment, six patients (27%) still had a loss of one line or more of CDVA. Three significant sources of error were identified: errors of cylinder conversion, data entry, and patient identification error. Twenty-seven percent of eyes with laser programming errors ultimately lost one or more lines of CDVA. Patients who underwent surgical revision had better outcomes than those who did not. Many of the mistakes identified were likely avoidable had preventive measures been taken, such as strict adherence to patient verification protocol or rigorous rechecking of treatment parameters. Copyright 2013, SLACK Incorporated.

  19. Sub-Camera Calibration of a Penta-Camera

    NASA Astrophysics Data System (ADS)

    Jacobsen, K.; Gerke, M.

    2016-03-01

    Penta cameras consisting of a nadir and four inclined cameras are becoming more and more popular, having the advantage of also imaging facades in built-up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras. Based on data sets of the ISPRS/EuroSDR benchmark for multi-platform photogrammetry, the inner orientation of the IGI Penta DigiCAM used has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern have been determined by Pix4Dmapper and have been independently adjusted and analyzed by the program system BLUH. With 4.1 million image points in 314 images and 3.9 million image points in 248 images, respectively, dense matching was provided by Pix4Dmapper. With up to 19 and 29 images per object point, respectively, the images are well connected; nevertheless, the high numbers of images per object point are concentrated in the block centres, while the inclined images outside the block centre are satisfactorily but not very strongly connected. This leads to very high values for the Student test (T-test) of the finally used additional parameters, or in other words, the additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration of IGI, but there are still radial symmetric distortions also for the inclined cameras with a size exceeding 5 μm, even though these are mentioned as negligible in the laboratory calibration. Radial and tangential effects at the image corners are limited but still present. Remarkable angular affine systematic image errors can be seen, especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but they can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With the exception of the angular affinity, the systematic image errors for corresponding cameras of both blocks have the same trend, but as usual for block adjustments with self-calibration, they still show significant differences. Based on the very high number of image points, the remaining image residuals can be safely determined by overlaying and averaging the image residuals according to their image coordinates. The size of the systematic image errors not covered by the used additional parameters is in the range of a square mean of 0.1 pixels, corresponding to 0.6 μm. They are not the same for both blocks, but show some similarities for corresponding cameras. In general, bundle block adjustment with a satisfying set of additional parameters, checked by remaining systematic errors, is required to use the whole geometric potential of the penta camera. Especially for object points on facades, often seen in only two images taken with a limited base length, the correct handling of systematic image errors is important. At least in the analyzed data sets, the self-calibration of sub-cameras by bundle block adjustment suffers from the correlation of the inner to the exterior orientation due to missing crossing flight directions. As usual, the systematic image errors differ from block to block even without the influence of the correlation to the exterior orientation.

  20. Combined proportional and additive residual error models in population pharmacokinetic modelling.

    PubMed

    Proost, Johannes H

    2017-11-15

    In pharmacokinetic modelling, a combined proportional and additive residual error model is often preferred over a proportional or additive residual error model. Different approaches have been proposed, but a comparison between approaches is still lacking. The theoretical background of the methods is described. Method VAR assumes that the variance of the residual error is the sum of the statistically independent proportional and additive components; this method can be coded in three ways. Method SD assumes that the standard deviation of the residual error is the sum of the proportional and additive components. Using datasets from the literature and simulations based on these datasets, the methods are compared using NONMEM. The three ways of coding method VAR yield identical results. Using method SD, the values of the parameters describing the residual error are lower than for method VAR, but the values of the structural parameters and their inter-individual variability are hardly affected by the choice of method. Both methods are valid approaches in combined proportional and additive residual error modelling, and selection may be based on OFV. When the result of an analysis is used for simulation purposes, it is essential that the simulation tool uses the same method as used during analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
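
    The two formulations compared here can be written compactly: with additive component a and proportional component b, method VAR assumes Var(ε) = a² + b²·f², whereas method SD assumes SD(ε) = a + b·f, where f is the model prediction. A minimal sketch of both, outside any specific NONMEM coding:

```python
import numpy as np

def residual_sd(f, a, b, method="VAR"):
    """Residual error standard deviation for a combined error model.

    f: model-predicted concentrations
    a: additive component, b: proportional component
    VAR: variance is the sum of independent components -> sd = sqrt(a^2 + (b*f)^2)
    SD:  standard deviation is the sum of components    -> sd = a + b*f
    """
    f = np.asarray(f, dtype=float)
    if method == "VAR":
        return np.sqrt(a**2 + (b * f) ** 2)
    if method == "SD":
        return a + b * f
    raise ValueError("method must be 'VAR' or 'SD'")

preds = np.array([0.1, 1.0, 10.0, 100.0])   # hypothetical concentrations
print(residual_sd(preds, a=0.05, b=0.15, method="VAR"))
print(residual_sd(preds, a=0.05, b=0.15, method="SD"))
```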

  1. Positioning accuracy for lung stereotactic body radiotherapy patients determined by on-treatment cone-beam CT imaging

    PubMed Central

    Richmond, N D; Pilling, K E; Peedell, C; Shakespeare, D; Walker, C P

    2012-01-01

    Stereotactic body radiotherapy for early stage non-small cell lung cancer is an emerging treatment option in the UK. Since relatively few high-dose ablative fractions are delivered to a small target volume, the consequences of a geometric miss are potentially severe. This paper presents the results of treatment delivery set-up data collected using Elekta Synergy (Elekta, Crawley, UK) cone-beam CT imaging for 17 patients immobilised using the Bodyfix system (Medical Intelligence, Schwabmuenchen, Germany). Images were acquired on the linear accelerator at initial patient treatment set-up, following any position correction adjustments, and post-treatment. These were matched to the localisation CT scan using the Elekta XVI software. In total, 71 fractions were analysed for patient set-up errors. The mean vector error at initial set-up was calculated as 5.3±2.7 mm, which was significantly reduced to 1.4±0.7 mm following image guided correction. Post-treatment the corresponding value was 2.1±1.2 mm. The use of the Bodyfix abdominal compression plate on 5 patients to reduce the range of tumour excursion during respiration produced mean longitudinal set-up corrections of −4.4±4.5 mm compared with −0.7±2.6 mm without compression for the remaining 12 patients. The use of abdominal compression led to a greater variation in set-up errors and a shift in the mean value. PMID:22665927

  2. Genomic Prediction Accounting for Residual Heteroskedasticity

    PubMed Central

    Ou, Zhining; Tempelman, Robert J.; Steibel, Juan P.; Ernst, Catherine W.; Bates, Ronald O.; Bello, Nora M.

    2015-01-01

    Whole-genome prediction (WGP) models that use single-nucleotide polymorphism marker information to predict genetic merit of animals and plants typically assume homogeneous residual variance. However, variability is often heterogeneous across agricultural production systems and may subsequently bias WGP-based inferences. This study extends classical WGP models based on normality, heavy-tailed specifications and variable selection to explicitly account for environmentally-driven residual heteroskedasticity under a hierarchical Bayesian mixed-models framework. WGP models assuming homogeneous or heterogeneous residual variances were fitted to training data generated under simulation scenarios reflecting a gradient of increasing heteroskedasticity. Model fit was based on pseudo-Bayes factors and also on prediction accuracy of genomic breeding values computed on a validation data subset one generation removed from the simulated training dataset. Homogeneous vs. heterogeneous residual variance WGP models were also fitted to two quantitative traits, namely 45-min postmortem carcass temperature and loin muscle pH, recorded in a swine resource population dataset prescreened for high and mild residual heteroskedasticity, respectively. Fit of competing WGP models was compared using pseudo-Bayes factors. Predictive ability, defined as the correlation between predicted and observed phenotypes in validation sets of a five-fold cross-validation was also computed. Heteroskedastic error WGP models showed improved model fit and enhanced prediction accuracy compared to homoskedastic error WGP models although the magnitude of the improvement was small (less than two percentage points net gain in prediction accuracy). Nevertheless, accounting for residual heteroskedasticity did improve accuracy of selection, especially on individuals of extreme genetic merit. PMID:26564950

  3. Improving ROLO lunar albedo model using PLEIADES-HR satellites extra-terrestrial observations

    NASA Astrophysics Data System (ADS)

    Meygret, Aimé; Blanchet, Gwendoline; Colzy, Stéphane; Gross-Colzy, Lydwine

    2017-09-01

    The accurate on-orbit radiometric calibration of optical sensors has become a challenge for space agencies, which have developed different techniques involving on-board calibration systems, ground targets or extra-terrestrial targets. The combination of different approaches and targets is recommended whenever possible and necessary to reach or demonstrate a high accuracy. Among these calibration targets, the moon is widely used through the well-known ROLO (RObotic Lunar Observatory) model developed by USGS. Extensive and worldwide recognized work was done to characterize the moon albedo, which is very stable. However, increasingly demanding calibration accuracy requirements have reached the limitations of the model. This paper deals with two main limitations: the residual error when modelling the phase angle dependency, and the absolute accuracy of the model, which is no longer acceptable for the on-orbit calibration of radiometers. Thanks to the agility of the PLEIADES high resolution satellites, a significant database of moon and star images was acquired, making it possible to show the limitations of the ROLO model and to characterize the errors. The phase angle residual dependency is modelled using PLEIADES 1B images acquired for different quasi-complete moon cycles with a phase angle varying by less than 1°. The absolute albedo residual error is modelled using PLEIADES 1A images taken over stars and the moon. The accurate knowledge of the stars' spectral irradiance is transferred to the moon spectral albedo using the satellite as a transfer radiometer. This paper describes the data set used, the ROLO model residual errors and their modelling, and the quality of the proposed correction, and shows some calibration results using this improved model.

  4. Rib biomechanical properties exhibit diagnostic potential for accurate ageing in forensic investigations

    PubMed Central

    Bonicelli, Andrea; Xhemali, Bledar; Kranioti, Elena F.

    2017-01-01

    Age estimation remains one of the most challenging tasks in forensic practice when establishing a biological profile of unknown skeletonised remains. Morphological methods based on developmental markers of bones can provide accurate age estimates at a young age, but become highly unreliable for ages over 35, when all developmental markers disappear. This study explores the changes in the biomechanical properties of bone tissue and matrix, which continue with age even after skeletal maturity, and their potential value for age estimation. As a proof of concept we investigated the relationship of 28 variables at the macroscopic and microscopic level in rib autopsy samples from 24 individuals. Stepwise regression analysis produced a number of equations, one of which, with seven variables, showed an R² = 0.949, a mean residual error of 2.13 yrs ± 0.4 (SD) and a maximum residual error of 2.88 yrs. For forensic purposes, using only bench-top machines in tests that can be carried out within 36 hrs, a set of just 3 variables produced an equation with an R² = 0.902, a mean residual error of 3.38 yrs ± 2.6 (SD) and a maximum observed residual error of 9.26 yrs. This method outstrips all existing age-at-death methods based on ribs, thus providing a novel lab-based accurate tool for the forensic investigation of human remains. The present application is optimised for fresh (uncompromised by taphonomic conditions) remains, but the potential of the principle and method is vast once the trends of the biomechanical variables are established for other environmental conditions and circumstances. PMID:28520764

  5. A Truncated Nuclear Norm Regularization Method Based on Weighted Residual Error for Matrix Completion.

    PubMed

    Qing Liu; Zhihui Lai; Zongwei Zhou; Fangjun Kuang; Zhong Jin

    2016-01-01

    Low-rank matrix completion aims to recover a matrix from a small subset of its entries and has received much attention in the field of computer vision. Most existing methods formulate the task as a low-rank matrix approximation problem. A truncated nuclear norm has recently been proposed as a better approximation to the rank of matrix than a nuclear norm. The corresponding optimization method, truncated nuclear norm regularization (TNNR), converges better than the nuclear norm minimization-based methods. However, it is not robust to the number of subtracted singular values and requires a large number of iterations to converge. In this paper, a TNNR method based on weighted residual error (TNNR-WRE) for matrix completion and its extension model (ETNNR-WRE) are proposed. TNNR-WRE assigns different weights to the rows of the residual error matrix in an augmented Lagrange function to accelerate the convergence of the TNNR method. The ETNNR-WRE is much more robust to the number of subtracted singular values than the TNNR-WRE, TNNR alternating direction method of multipliers, and TNNR accelerated proximal gradient with Line search methods. Experimental results using both synthetic and real visual data sets show that the proposed TNNR-WRE and ETNNR-WRE methods perform better than TNNR and Iteratively Reweighted Nuclear Norm (IRNN) methods.

  6. Masked Visual Analysis: Minimizing Type I Error in Visually Guided Single-Case Design for Communication Disorders

    PubMed Central

    Hitchcock, Elaine R.; Ferron, John

    2017-01-01

    Purpose Single-case experimental designs are widely used to study interventions for communication disorders. Traditionally, single-case experiments follow a response-guided approach, where design decisions during the study are based on participants' observed patterns of behavior. However, this approach has been criticized for its high rate of Type I error. In masked visual analysis (MVA), response-guided decisions are made by a researcher who is blinded to participants' identities and treatment assignments. MVA also makes it possible to conduct a hypothesis test assessing the significance of treatment effects. Method This tutorial describes the principles of MVA, including both how experiments can be set up and how results can be used for hypothesis testing. We then report a case study showing how MVA was deployed in a multiple-baseline across-subjects study investigating treatment for residual errors affecting rhotics. Strengths and weaknesses of MVA are discussed. Conclusions Given their important role in the evidence base that informs clinical decision making, it is critical for single-case experimental studies to be conducted in a way that allows researchers to draw valid inferences. As a method that can increase the rigor of single-case studies while preserving the benefits of a response-guided approach, MVA warrants expanded attention from researchers in communication disorders. PMID:28595354

  7. Masked Visual Analysis: Minimizing Type I Error in Visually Guided Single-Case Design for Communication Disorders.

    PubMed

    Byun, Tara McAllister; Hitchcock, Elaine R; Ferron, John

    2017-06-10

    Single-case experimental designs are widely used to study interventions for communication disorders. Traditionally, single-case experiments follow a response-guided approach, where design decisions during the study are based on participants' observed patterns of behavior. However, this approach has been criticized for its high rate of Type I error. In masked visual analysis (MVA), response-guided decisions are made by a researcher who is blinded to participants' identities and treatment assignments. MVA also makes it possible to conduct a hypothesis test assessing the significance of treatment effects. This tutorial describes the principles of MVA, including both how experiments can be set up and how results can be used for hypothesis testing. We then report a case study showing how MVA was deployed in a multiple-baseline across-subjects study investigating treatment for residual errors affecting rhotics. Strengths and weaknesses of MVA are discussed. Given their important role in the evidence base that informs clinical decision making, it is critical for single-case experimental studies to be conducted in a way that allows researchers to draw valid inferences. As a method that can increase the rigor of single-case studies while preserving the benefits of a response-guided approach, MVA warrants expanded attention from researchers in communication disorders.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    van Dam, M A; Mignant, D L; Macintosh, B A

    In this paper, the adaptive optics (AO) system at the W.M. Keck Observatory is characterized. The authors calculate the error budget of the Keck AO system operating in natural guide star mode with a near infrared imaging camera. By modeling the control loops and recording residual centroids, the measurement noise and bandwidth errors are obtained. The error budget is consistent with the images obtained. Results of sky performance tests are presented: the AO system is shown to deliver images with average Strehl ratios of up to 0.37 at 1.58 μm using a bright guide star and 0.19 for a magnitude 12 star.

  9. Precise Aperture-Dependent Motion Compensation with Frequency Domain Fast Back-Projection Algorithm.

    PubMed

    Zhang, Man; Wang, Guanyong; Zhang, Lei

    2017-10-26

    Precise azimuth-variant motion compensation (MOCO) is an essential and difficult task for high-resolution synthetic aperture radar (SAR) imagery. In conventional post-filtering approaches, residual azimuth-variant motion errors are generally compensated through a set of spatial post-filters, where the coarse-focused image is segmented into overlapped blocks according to the azimuth-dependent residual errors. However, image-domain post-filtering approaches, such as the precise topography- and aperture-dependent motion compensation algorithm (PTA), suffer from declining robustness when strong motion errors are present in the coarse-focused image. In this case, in order to capture the complete motion blurring function within each image block, both the block size and the overlapped part need to be extended, inevitably degrading efficiency and robustness. Herein, a frequency domain fast back-projection algorithm (FDFBPA) is introduced to deal with strong azimuth-variant motion errors. FDFBPA compensates the azimuth-variant motion errors based on a precise azimuth spectrum expression in the azimuth wavenumber domain. First, a wavenumber-domain sub-aperture processing strategy is introduced to accelerate computation. After that, the azimuth wavenumber spectrum is partitioned into a set of wavenumber blocks, and each block is formed into a sub-aperture coarse-resolution image via the back-projection integral. Then, the sub-aperture images are fused together in the azimuth wavenumber domain to obtain a full-resolution image. Moreover, the chirp-Z transform (CZT) is introduced to implement the sub-aperture back-projection integral, increasing the efficiency of the algorithm. By avoiding the image-domain post-filtering strategy, the robustness of the proposed algorithm is improved. Both simulation and real-measured data experiments demonstrate the effectiveness and superiority of the proposed method.

  10. Portal imaging based definition of the planning target volume during pelvic irradiation for gynecological malignancies.

    PubMed

    Mock, U; Dieckmann, K; Wolff, U; Knocke, T H; Pötter, R

    1999-08-01

    Geometrical accuracy in patient positioning can vary substantially during external radiotherapy. This study estimated the set-up accuracy during pelvic irradiation for gynecological malignancies for determination of safety margins (planning target volume, PTV). Based on electronic portal imaging devices (EPID), 25 patients undergoing 4-field pelvic irradiation for gynecological malignancies were analyzed with regard to set-up accuracy during the treatment course. Regularly acquired EPID images were used to systematically assess the systematic and random components of set-up displacements. Anatomical matching of verification and simulation images was followed by measuring corresponding distances between the central axis and anatomical features. Data analysis of set-up errors referred to the x-, y-, and z-axes. Additionally, cumulative frequencies were evaluated. A total of 50 simulation films and 313 verification images were analyzed. For the anterior-posterior (AP) beam direction, mean deviations along the x- and z-axes were 1.5 mm and -1.9 mm, respectively. Moreover, random errors of 4.8 mm (x-axis) and 3.0 mm (z-axis) were determined. Concerning the latero-lateral treatment fields, the systematic errors along the two axes were calculated to be 2.9 mm (y-axis) and -2.0 mm (z-axis), and random errors of 3.8 mm and 3.5 mm were found, respectively. The cumulative frequency of misalignments ≤5 mm showed values of 75% (AP fields) and 72% (latero-lateral fields). With regard to cumulative frequencies ≤10 mm, quantification revealed values of 97% for both beam directions. During external pelvic irradiation for gynecological malignancies, regularly acquired EPID images revealed acceptable set-up inaccuracies. Safety margins (PTV) of 1 cm appear to be sufficient, accounting for more than 95% of all deviations.
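
    The population systematic and random components reported in records like this one are conventionally derived from per-patient displacement series: Σ is the standard deviation of the per-patient mean displacements and σ is the root mean square of the per-patient standard deviations. The sketch below does that bookkeeping for one axis on made-up data.

```python
import numpy as np

def setup_error_components(displacements_per_patient):
    """Population systematic (Sigma) and random (sigma) set-up error for one axis.

    displacements_per_patient: list of 1-D arrays, one array of measured
    displacements (mm) per patient over the treatment course.
    """
    means = np.array([np.mean(d) for d in displacements_per_patient])
    sds = np.array([np.std(d, ddof=1) for d in displacements_per_patient])
    overall_mean = means.mean()                 # group systematic error (M)
    systematic = means.std(ddof=1)              # Sigma: SD of per-patient means
    random_err = np.sqrt(np.mean(sds**2))       # sigma: RMS of per-patient SDs
    return overall_mean, systematic, random_err

# Hypothetical x-axis displacements (mm) for three patients
patients = [np.array([1.0, 2.5, 0.5, 3.0]),
            np.array([-1.0, 0.0, 1.5, -0.5]),
            np.array([4.0, 2.0, 3.5, 5.0])]
print(setup_error_components(patients))
```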

  11. An Improved Empirical Harmonic Model of the Celestial Intermediate Pole Offsets from a Global VLBI Solution

    NASA Astrophysics Data System (ADS)

    Belda, Santiago; Heinkelmann, Robert; Ferrándiz, José M.; Karbon, Maria; Nilsson, Tobias; Schuh, Harald

    2017-10-01

    Very Long Baseline Interferometry (VLBI) is the only space geodetic technique capable of measuring all the Earth orientation parameters (EOP) accurately and simultaneously. Modeling the Earth's rotational motion in space within the stringent consistency goals of the Global Geodetic Observing System (GGOS) makes VLBI observations essential for constraining the rotation theories. However, the inaccuracy of early VLBI data and the outdated products could cause non-compliance with these goals. In this paper, we perform a global VLBI analysis of sessions with different processing settings to determine a new set of empirical corrections to the precession offsets and rates, and to the amplitudes of a wide set of terms included in the IAU 2006/2000A precession-nutation theory. We discuss the results in terms of consistency, systematic errors, and physics of the Earth. We find that the largest improvements w.r.t. the values from IAU 2006/2000A precession-nutation theory are associated with the longest periods (e.g., 18.6-yr nutation). A statistical analysis of the residuals shows that the provided corrections attain an error reduction at the level of 15 μas. Additionally, including a Free Core Nutation (FCN) model into a priori Celestial Pole Offsets (CPOs) provides the lowest Weighted Root Mean Square (WRMS) of residuals. We show that the CPO estimates are quite insensitive to TRF choice, but slightly sensitive to the a priori EOP and the inclusion of different VLBI sessions. Finally, the remaining residuals reveal two apparent retrograde signals with periods of nearly 2069 and 1034 days.

  12. Profile models for estimating log end diameters in the Rocky Mountain Region

    Treesearch

    Raymond L. Czaplewski; Amy S. Brown; Raymond C. Walker

    1989-01-01

    The segmented polynomial stem profile model of Max and Burkhart was applied to seven tree species in the Rocky Mountain Region of the Forest Service. Errors were reduced over the entire data set by use of second-stage models that adjust for transformation bias and explained weak patterns in the residual diameter predictions.

  13. Speed and Accuracy of Rapid Speech Output by Adolescents with Residual Speech Sound Errors Including Rhotics

    ERIC Educational Resources Information Center

    Preston, Jonathan L.; Edwards, Mary Louise

    2009-01-01

    Children with residual speech sound errors are often underserved clinically, yet there has been a lack of recent research elucidating the specific deficits in this population. Adolescents aged 10-14 with residual speech sound errors (RE) that included rhotics were compared to normally speaking peers on tasks assessing speed and accuracy of speech…

  14. Method for computing self-consistent solution in a gun code

    DOEpatents

    Nelson, Eric M

    2014-09-23

    Complex gun code computations can be made to converge more quickly based on a selection of one or more relaxation parameters. An eigenvalue analysis is applied to error residuals to identify two error eigenvalues that are associated with respective error residuals. Relaxation values can be selected based on these eigenvalues so that error residuals associated with each can be alternately reduced in successive iterations. In some examples, relaxation values that would be unstable if used alone can be used.

  15. Approaches to stream solute load estimation for solutes with varying dynamics from five diverse small watershed

    USGS Publications Warehouse

    Aulenbach, Brent T.; Burns, Douglas A.; Shanley, James B.; Yanai, Ruth D.; Bae, Kikang; Wild, Adam; Yang, Yang; Yi, Dong

    2016-01-01

    Estimating streamwater solute loads is a central objective of many water-quality monitoring and research studies, as loads are used to compare with atmospheric inputs, to infer biogeochemical processes, and to assess whether water quality is improving or degrading. In this study, we evaluate loads and associated errors to determine the best load estimation technique among three methods (a period-weighted approach, the regression-model method, and the composite method) based on a solute's concentration dynamics and sampling frequency. We evaluated a broad range of concentration dynamics varying with stream flow and season using four dissolved solutes (sulfate, silica, nitrate, and dissolved organic carbon) at five diverse small watersheds (Sleepers River Research Watershed, VT; Hubbard Brook Experimental Forest, NH; Biscuit Brook Watershed, NY; Panola Mountain Research Watershed, GA; and Río Mameyes Watershed, PR) with fairly high-frequency sampling during a 10- to 11-yr period. Data sets with three different sampling frequencies were derived from the full data set at each site (weekly plus storm/snowmelt events, weekly, and monthly) and errors in loads were assessed for the study period, annually, and monthly. For solutes that had a moderate to strong concentration–discharge relation, the composite method performed best, unless the autocorrelation of the model residuals was <0.2, in which case the regression-model method was most appropriate. For solutes that had a nonexistent or weak concentration–discharge relation (model R² < about 0.3), the period-weighted approach was most appropriate. The lowest errors in loads were achieved for solutes with the strongest concentration–discharge relations. Sample and regression model diagnostics could be used to approximate overall accuracies and annual precisions. For the period-weighted approach, errors were lower when the variance in concentrations was lower, the degree of autocorrelation in the concentrations was higher, and sampling frequency was higher. The period-weighted approach was most sensitive to sampling frequency. For the regression-model and composite methods, errors were lower when the variance in model residuals was lower. For the composite method, errors were lower when the autocorrelation in the residuals was higher. Guidelines to determine the best load estimation method based on solute concentration–discharge dynamics and diagnostics are presented, and should be applicable to other studies.
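
    The selection guidance in the closing sentences can be condensed into a small decision rule; the thresholds (model R² of about 0.3 and residual lag-1 autocorrelation of 0.2) come from the abstract, while the function names and example numbers are illustrative.

```python
import numpy as np

def lag1_autocorrelation(residuals):
    """Lag-1 autocorrelation of the concentration-model residuals."""
    r = np.asarray(residuals, dtype=float)
    r = r - r.mean()
    return np.dot(r[:-1], r[1:]) / np.dot(r, r)

def choose_load_method(model_r2, residuals):
    """Pick a load estimation technique from model diagnostics (per the abstract)."""
    if model_r2 < 0.3:                      # weak concentration-discharge relation
        return "period-weighted approach"
    if lag1_autocorrelation(residuals) < 0.2:
        return "regression-model method"    # residuals nearly uncorrelated
    return "composite method"               # strong relation, autocorrelated residuals

# Hypothetical diagnostics for a sulfate record
resid = np.array([0.4, 0.3, 0.1, -0.1, -0.2, -0.1, 0.0, 0.2])
print(choose_load_method(model_r2=0.55, residuals=resid))
```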

  16. The use of propagation path corrections to improve regional seismic event location in western China

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steck, L.K.; Cogbill, A.H.; Velasco, A.A.

    1999-03-01

    In an effort to improve the ability to locate seismic events in western China using only regional data, the authors have developed empirical propagation path corrections (PPCs) and applied such corrections using both traditional location routines as well as a nonlinear grid search method. Thus far, the authors have concentrated on corrections to observed P arrival times for shallow events using travel-time observations available from the USGS EDRs, the ISC catalogs, their own travel-time picks from regional data, and data from other catalogs. They relocate events with the algorithm of Bratt and Bache (1988) from a region encompassing China. For individual stations having sufficient data, they produce a map of the regional travel-time residuals from all well-located teleseismic events. From these maps, interpolated PPC surfaces have been constructed using both surface fitting under tension and modified Bayesian kriging. The latter method offers the advantage of providing well-behaved interpolants, but requires adequate error estimates associated with the travel-time residuals. To improve error estimates for kriging and event location, they separate measurement error from modeling error. The modeling error is defined as the travel-time variance of a particular model as a function of distance, while the measurement error is defined as the picking error associated with each phase. They estimate measurement errors for arrivals from the EDRs based on roundoff or truncation, and use signal-to-noise ratios for the travel-time picks from the waveform data set.

  17. Research on wind field algorithm of wind lidar based on BP neural network and grey prediction

    NASA Astrophysics Data System (ADS)

    Chen, Yong; Chen, Chun-Li; Luo, Xiong; Zhang, Yan; Yang, Ze-hou; Zhou, Jie; Shi, Xiao-ding; Wang, Lei

    2018-01-01

    This paper uses the BP neural network and the grey algorithm to forecast and study the radar wind field. In order to reduce the residual error in the wind field prediction that uses the BP neural network and the grey algorithm, the minimum of the residual error function is calculated, the residual sequence of the grey algorithm is used to train the BP neural network, the trained network model is used to forecast the residual sequence, and the predicted residual sequence is used to correct the forecast sequence of the grey algorithm. The test data show that the grey algorithm modified by the BP neural network can effectively reduce the residual value and improve the prediction precision.
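
    A rough sketch of the hybrid scheme described above, assuming a GM(1,1) grey model for the baseline forecast and scikit-learn's MLPRegressor standing in for the BP network trained on the grey model's residual sequence; the wind series and network settings are made up.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def gm11_fit_predict(x0, horizon):
    """GM(1,1) grey model: fit to series x0 and return fitted plus forecast values."""
    x1 = np.cumsum(x0)
    z1 = 0.5 * (x1[1:] + x1[:-1])                 # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x0[0]], np.diff(x1_hat)])

# Hypothetical wind speed series (m/s)
wind = np.array([5.1, 5.6, 6.0, 6.8, 7.1, 7.9, 8.4, 9.0])
fit = gm11_fit_predict(wind, horizon=0)
residuals = wind - fit

# Train a small back-propagation network on (time index -> residual)
t = np.arange(len(wind)).reshape(-1, 1)
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(t, residuals)

# Correct the grey forecast with the predicted residual sequence
horizon = 3
future_t = np.arange(len(wind), len(wind) + horizon).reshape(-1, 1)
grey_forecast = gm11_fit_predict(wind, horizon=horizon)[len(wind):]
print(grey_forecast + net.predict(future_t))
```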

  18. Dwell time method based on Richardson-Lucy algorithm

    NASA Astrophysics Data System (ADS)

    Jiang, Bo; Ma, Zhen

    2017-10-01

    When the noise in the surface error data given by the interferometer has no effect on the iterative convergence of the RL algorithm, the RL algorithm for deconvolution in image restoration can be applied to the CCOS model to solve for the dwell time. Extending the initial error function at the edges and denoising the surface error data given by the interferometer makes the result more usable. The simulation results show a final residual error of 10.7912 nm in PV and 0.4305 nm in RMS, when the initial surface error is 107.2414 nm in PV and 15.1331 nm in RMS. The convergence rates of the PV and RMS values can reach up to 89.9% and 96.0%, respectively. The algorithm satisfies the requirements of fabrication very well.
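
    A one-dimensional sketch of the Richardson-Lucy iteration applied to the CCOS dwell-time problem, with the tool influence function treated as the point spread function and the measured surface error as the "image". The edge extension and denoising steps from the paper are omitted, and all profiles are synthetic.

```python
import numpy as np

def richardson_lucy_dwell(surface_error, removal_func, iterations=200, eps=1e-12):
    """1-D Richardson-Lucy deconvolution for CCOS dwell time.

    surface_error: measured error profile to remove (nm), non-negative
    removal_func:  tool influence function (removal per unit dwell time)
    Returns a non-negative dwell-time profile whose convolution with the
    removal function approximates the surface error.
    """
    psf = removal_func / removal_func.sum()            # normalized kernel
    psf_mirror = psf[::-1]
    dwell = np.full_like(surface_error, surface_error.mean())
    for _ in range(iterations):
        est = np.convolve(dwell, psf, mode="same")      # predicted removal
        ratio = surface_error / (est + eps)
        dwell *= np.convolve(ratio, psf_mirror, mode="same")
    return dwell / removal_func.sum()   # rescale so dwell * removal_func ~ error

# Synthetic example: Gaussian tool influence function and two error bumps
x = np.linspace(-1, 1, 201)
tif = np.exp(-x**2 / 0.02)
error = 50 * np.exp(-(x - 0.3)**2 / 0.01) + 80 * np.exp(-(x + 0.4)**2 / 0.02)

dwell = richardson_lucy_dwell(error, tif)
residual = error - np.convolve(dwell, tif, mode="same")
print(f"residual PV = {residual.max() - residual.min():.2f}, RMS = {residual.std():.2f}")
```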

  19. Genomic Prediction Accounting for Residual Heteroskedasticity.

    PubMed

    Ou, Zhining; Tempelman, Robert J; Steibel, Juan P; Ernst, Catherine W; Bates, Ronald O; Bello, Nora M

    2015-11-12

    Whole-genome prediction (WGP) models that use single-nucleotide polymorphism marker information to predict genetic merit of animals and plants typically assume homogeneous residual variance. However, variability is often heterogeneous across agricultural production systems and may subsequently bias WGP-based inferences. This study extends classical WGP models based on normality, heavy-tailed specifications and variable selection to explicitly account for environmentally-driven residual heteroskedasticity under a hierarchical Bayesian mixed-models framework. WGP models assuming homogeneous or heterogeneous residual variances were fitted to training data generated under simulation scenarios reflecting a gradient of increasing heteroskedasticity. Model fit was based on pseudo-Bayes factors and also on prediction accuracy of genomic breeding values computed on a validation data subset one generation removed from the simulated training dataset. Homogeneous vs. heterogeneous residual variance WGP models were also fitted to two quantitative traits, namely 45-min postmortem carcass temperature and loin muscle pH, recorded in a swine resource population dataset prescreened for high and mild residual heteroskedasticity, respectively. Fit of competing WGP models was compared using pseudo-Bayes factors. Predictive ability, defined as the correlation between predicted and observed phenotypes in validation sets of a five-fold cross-validation was also computed. Heteroskedastic error WGP models showed improved model fit and enhanced prediction accuracy compared to homoskedastic error WGP models although the magnitude of the improvement was small (less than two percentage points net gain in prediction accuracy). Nevertheless, accounting for residual heteroskedasticity did improve accuracy of selection, especially on individuals of extreme genetic merit. Copyright © 2016 Ou et al.

  20. Density-based cluster algorithms for the identification of core sets

    NASA Astrophysics Data System (ADS)

    Lemke, Oliver; Keller, Bettina G.

    2016-10-01

    The core-set approach is a discretization method for Markov state models of complex molecular dynamics. Core sets are disjoint metastable regions in the conformational space, which need to be known prior to the construction of the core-set model. We propose to use density-based cluster algorithms to identify the cores. We compare three different density-based cluster algorithms: the CNN, the DBSCAN, and the Jarvis-Patrick algorithm. While the core-set models based on the CNN and DBSCAN clustering are well-converged, constructing core-set models based on the Jarvis-Patrick clustering cannot be recommended. In a well-converged core-set model, the number of core sets is up to an order of magnitude smaller than the number of states in a conventional Markov state model with comparable approximation error. Moreover, using the density-based clustering one can extend the core-set method to systems which are not strongly metastable. This is important for the practical application of the core-set method because most biologically interesting systems are only marginally metastable. The key point is to perform a hierarchical density-based clustering while monitoring the structure of the metric matrix which appears in the core-set method. We test this approach on a molecular-dynamics simulation of a highly flexible 14-residue peptide. The resulting core-set models have a high spatial resolution and can distinguish between conformationally similar yet chemically different structures, such as register-shifted hairpin structures.
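
Of the three algorithms compared, DBSCAN is the most readily available off the shelf; a minimal sketch of core identification with scikit-learn follows (eps and min_samples are illustrative and would need tuning, e.g. hierarchically while monitoring the metric matrix, as the paper suggests).

```python
import numpy as np
from sklearn.cluster import DBSCAN

def identify_core_sets(features, eps=0.3, min_samples=25):
    """Density-based identification of core sets from MD trajectory features
    (n_frames x n_collective_variables). Frames labelled -1 by DBSCAN are low-density
    and remain unassigned, i.e. they lie outside every core."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(features)
    cores = {lab: np.flatnonzero(labels == lab) for lab in np.unique(labels) if lab != -1}
    return cores, labels
```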

  1. The seasonal cycle of diabatic heat storage in the Pacific Ocean

    USGS Publications Warehouse

    White, Warren B.; Cayan, D.R.; Niiler, P.P.; Moisan, J.; Lagerloef, G.; Bonjean, F.; Legler, D.

    2005-01-01

This study quantifies uncertainties in closing the seasonal cycle of diabatic heat storage (DHS) over the Pacific Ocean from 20°S to 60°N through the synthesis of World Ocean Circulation Experiment (WOCE) reanalysis products from 1993 to 1999. These products are DHS from Scripps Institution of Oceanography (SIO); near-surface geostrophic and Ekman currents from Earth and Space Research (ESR); and air-sea heat fluxes from the Comprehensive Ocean-Atmosphere Data Set (COADS), the National Centers for Environmental Prediction (NCEP), and the European Centre for Medium-Range Weather Forecasts (ECMWF). With these products, we compute residual heat budget components by differencing long-term monthly means from the long-term annual mean. This allows the seasonal cycle of the DHS tendency to be modeled. Everywhere latent heat flux residuals dominate sensible heat flux residuals, shortwave heat flux residuals dominate longwave heat flux residuals, and residual Ekman heat advection dominates residual geostrophic heat advection, with residual dissipation significant only in the Kuroshio-Oyashio current extension. The root-mean-square (RMS) of the differences between observed and model residual DHS tendencies (averaged over 10° latitude by 20° longitude boxes) is <20 W m-2 in the interior ocean and <100 W m-2 in the Kuroshio-Oyashio current extension. This reveals that the residual DHS tendency is driven everywhere by some mix of residual latent heat flux, shortwave heat flux, and Ekman heat advection. Suppressing bias errors in residual air-sea turbulent heat fluxes and Ekman heat advection through minimization of the RMS differences reduces the latter to <10 W m-2 over the interior ocean and <25 W m-2 in the Kuroshio-Oyashio current extension. This reveals air-sea temperature and specific humidity differences from in situ surface marine weather observations to be a principal source of bias error, overestimated over most of the ocean but underestimated near the Intertropical Convergence Zone. © 2005 Elsevier Ltd. All rights reserved.

  2. Parametric Studies Of Lightweight Reflectors Supported On Linear Actuator Arrays

    NASA Astrophysics Data System (ADS)

    Seibert, George E.

    1987-10-01

This paper presents the results of numerous design studies carried out at Perkin-Elmer in support of the design of large diameter controllable mirrors for use in laser beam control, surveillance, and astronomy programs. The results include relationships between actuator location and spacing and the associated degree of correctability attainable for a variety of faceplate configurations subjected to typical disturbance environments. Normalizations and design curves obtained from closed-form equations based on thin shallow shell theory and computer-based finite-element analyses are presented for use in preliminary design estimates of actuator count, faceplate structural properties, system performance prediction and weight assessments. The results of the analyses were obtained from a very wide range of mirror configurations, including both continuous and segmented mirror geometries. Typically, the designs consisted of a thin facesheet controlled by point force actuators which in turn were mounted on a structurally efficient base panel, or "reaction structure". The faceplate materials considered were fused silica, ULE fused silica, Zerodur, aluminum and beryllium. Thin solid faceplates as well as rib-reinforced cross-sections were treated, with a wide variation in thickness and/or rib patterns. The magnitude and spatial frequency distribution of the residual or uncorrected errors were related to the input error functions for mirrors of many different diameters and focal ratios. The error functions include simple sphere-to-sphere corrections, "parabolization" of spheres, and higher spatial frequency input error maps ranging from 0.5 to 7.5 cycles per diameter. The parameter which dominates all of the results obtained to date is a structural descriptor of thin shell behavior called the characteristic length. This parameter is a function of the shell's radius of curvature, thickness, and the Poisson's ratio of the material used. The value of this constant, in itself, describes the extent to which the deflection under a point force is localized by the shell's curvature. The deflection shape is typically a near-gaussian "bump" with a zero-crossing at a local radius of approximately 3.5 characteristic lengths. The amplitude is a function of the shell's elastic modulus, radius, and thickness, and is linearly proportional to the applied force. This basic shell behavior is well treated in an excellent set of papers by Eric Reissner entitled "Stresses and Small Displacements of Shallow Spherical Shells" [1,2]. Building on the insight offered by these papers, we developed our design tools around two derived parameters, the ratio of the mirror's diameter to its characteristic length (D/l), and the ratio of the actuator spacing to the characteristic length (b/l). The D/l ratio determines the "finiteness" of the shell, or its dependence on edge boundary conditions. For D/l values greater than 10, the influence of the edges on interior behavior is almost totally absent. The b/l ratio, the basis of all our normalizations, is the most universal term in the description of correctability, or the ratio of residual to input errors. The data presented in the paper show that the rms residual error divided by the peak amplitude of the input error function is related to the actuator-spacing-to-characteristic-length ratio by the following expression:

RMS residual error / initial error amplitude = k (b/l)^3.5    (1)

The value of k ranges from approximately 0.001 for low spatial frequency initial errors up to 0.05 for higher error frequencies (e.g. 5 cycles/diameter). The studies also yielded insight into the forces required to produce typical corrections at both the center and edges of the mirror panels. Additionally, the data lends itself to rapid evaluation of the effects of trading faceplate weight for increased actuator count.
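
A small sketch of equation (1). The abstract does not give the characteristic-length formula, so a standard shallow-spherical-shell expression, l = sqrt(R t) / (12 (1 - nu^2))^(1/4), is assumed here; the k values are the limits quoted in the text and the geometry is purely illustrative.

```python
import numpy as np

def characteristic_length(R, t, nu):
    """Assumed shallow-shell characteristic length l = sqrt(R*t) / (12*(1 - nu**2))**0.25
    (the abstract only states that l depends on radius of curvature, thickness and Poisson's ratio)."""
    return np.sqrt(R * t) / (12.0 * (1.0 - nu ** 2)) ** 0.25

def correctability(b, l, k):
    """Equation (1): RMS residual error / initial error amplitude = k * (b/l)**3.5."""
    return k * (b / l) ** 3.5

# illustrative numbers: 4 m radius of curvature, 2 mm ULE facesheet, 10 cm actuator spacing
l = characteristic_length(R=4.0, t=0.002, nu=0.17)
print(correctability(b=0.10, l=l, k=0.001))   # low-spatial-frequency input error
print(correctability(b=0.10, l=l, k=0.05))    # ~5 cycles/diameter input error
```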

  3. Methods for accurate estimation of net discharge in a tidal channel

    USGS Publications Warehouse

    Simpson, M.R.; Bland, R.

    2000-01-01

Accurate estimates of net residual discharge in tidally affected rivers and estuaries are possible because of recently developed ultrasonic discharge measurement techniques. Previous discharge estimates using conventional mechanical current meters and methods based on stage/discharge relations or water slope measurements often yielded errors that were as great as or greater than the computed residual discharge. Ultrasonic measurement methods consist of: 1) the use of ultrasonic instruments for the measurement of a representative 'index' velocity used for in situ estimation of mean water velocity and 2) the use of the acoustic Doppler current discharge measurement system to calibrate the index velocity measurement data. Methods used to calibrate (rate) the index velocity to the channel velocity measured using the Acoustic Doppler Current Profiler are the most critical factors affecting the accuracy of net discharge estimation. The index velocity first must be related to mean channel velocity and then used to calculate instantaneous channel discharge. Finally, discharge is low-pass filtered to remove the effects of the tides. An ultrasonic velocity meter discharge-measurement site in a tidally affected region of the Sacramento-San Joaquin Rivers was used to study the accuracy of the index velocity calibration procedure. Calibration data consisting of ultrasonic velocity meter index velocity and concurrent acoustic Doppler discharge measurement data were collected during three time periods. Two sets of data were collected during a spring tide (monthly maximum tidal current) and one set during a neap tide (monthly minimum tidal current). The relative magnitudes of instrumental errors, acoustic Doppler discharge measurement errors, and calibration errors were evaluated. Calibration error was found to be the most significant source of error in estimating net discharge. Using a comprehensive calibration method, net discharge estimates developed from the three sets of calibration data differed by less than an average of 4 cubic meters per second, or less than 0.5% of a typical peak tidal discharge rate of 750 cubic meters per second.
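
A minimal sketch of the last two steps described above, i.e. converting a rated index velocity to instantaneous discharge and then low-pass filtering out the tides. The linear rating form, the ~30 h cutoff and the 15-minute sampling rate are illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def net_discharge(index_velocity, cross_section_area, c0, c1, samples_per_day=96):
    """Assumed linear rating V_mean = c0 + c1 * V_index gives instantaneous discharge,
    which is then low-pass filtered to suppress semidiurnal/diurnal tidal variability."""
    q_inst = (c0 + c1 * np.asarray(index_velocity)) * np.asarray(cross_section_area)
    # 4th-order Butterworth low-pass, cutoff 24/30 cycles per day (~30 h period)
    b, a = butter(4, 24.0 / 30.0, btype='low', fs=samples_per_day)
    return filtfilt(b, a, q_inst)
```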

  4. ACS/WFC Sky Flats from Frontier Fields Imaging

    NASA Astrophysics Data System (ADS)

    Mack, J.; Lucas, R. A.; Grogin, N. A.; Bohlin, R. C.; Koekemoer, A. M.

    2018-04-01

Parallel imaging data from the HST Frontier Fields campaign (Lotz et al. 2017) have been used to compute sky flats for the ACS/WFC detector in order to verify the accuracy of the current set of flat field reference files. By masking sources and then co-adding many deep frames, the F606W and F814W filters have enough combined background signal that the Poisson uncertainties are <1% per pixel. In these two filters, the sky flats show spatial residuals of 1% or less. These residuals are similar in shape to the WFC flat field 'donut' pattern, in which the detector quantum efficiency tracks the thickness of the two WFC chips. Observations of blue and red calibration standards measured at various positions on the detector (Bohlin et al. 2017) confirm the fidelity of the F814W flat, with aperture photometry consistent to 1% across the FOV, regardless of spectral type. At bluer wavelengths, the total sky background is substantially lower, and the F435W sky flat shows a combination of both flat errors and detector artifacts. Aperture photometry of the red standard star shows a maximum deviation of 1.4% across the array in this filter. Larger residuals up to 2.5% are found for the blue standard, suggesting that the spatial sensitivity in F435W depends on spectral type.

  5. Parametric models to compute tryptophan fluorescence wavelengths from classical protein simulations.

    PubMed

    Lopez, Alvaro J; Martínez, Leandro

    2018-02-26

Fluorescence spectroscopy is an important method to study protein conformational dynamics and solvation structures. Tryptophan (Trp) residues are the most important and practical intrinsic probes for protein fluorescence due to the variability of their fluorescence wavelengths: Trp residues emit at wavelengths ranging from 308 to 360 nm depending on the local molecular environment. Fluorescence involves electronic transitions; thus its computational modeling is a challenging task. We show that it is possible to predict the wavelength of emission of a Trp residue from classical molecular dynamics simulations by computing the solvent-accessible surface area or the electrostatic interaction between the indole group and the rest of the system. Linear parametric models are obtained to predict the maximum emission wavelengths with standard errors of the order of 5 nm. In a set of 19 proteins with emission wavelengths ranging from 308 to 352 nm, the best model predicts the maximum wavelength of emission with a standard error of 4.89 nm and a quadratic Pearson correlation coefficient of 0.81. These models can be used for the interpretation of fluorescence spectra of proteins with multiple Trp residues, or for which local Trp environmental variability exists and can be probed by classical molecular dynamics simulations. © 2018 Wiley Periodicals, Inc.

  6. Improved ambiguity resolution for URTK with dynamic atmosphere constraints

    NASA Astrophysics Data System (ADS)

    Tang, Weiming; Liu, Wenjian; Zou, Xuan; Li, Zongnan; Chen, Liang; Deng, Chenlong; Shi, Chuang

    2016-12-01

Raw observation processing with prior knowledge of the ionospheric delay can strengthen ambiguity resolution (AR), but it does not make full use of the relatively longer wavelength of the wide-lane (WL) observation. Furthermore, the accuracy of the atmospheric delays calculated from the regional augmentation information varies considerably in quality, while the atmospheric constraint used in current methods is usually set to an empirical value. A proper constraint, which matches the accuracy of the calculated atmospheric delays, can most effectively compensate for the residual systematic biases caused by large inter-station distances. Therefore, the standard deviation of the residual atmospheric parameters should be fine-tuned. This paper presents an atmosphere-constrained AR method for the undifferenced network RTK (URTK) rover, whose ambiguities are sequentially fixed according to their wavelengths. Furthermore, this research systematically analyzes the residual atmospheric error and finds that it mainly varies with the positional relationship between the rover and the chosen reference stations. More importantly, its ionospheric part at a given location also varies cyclically from day to day. Therefore, the standard deviation of the residual ionospheric error can be modeled by a daily repeated cosine (or similar) function using data from the previous day, and applied by rovers as a pseudo-observation. With data collected at 29 stations of a continuously operating reference station network in Guangdong Province (GDCORS) in China, the efficiency of the proposed approach is confirmed: the success and error rates of AR are improved by 10-20% compared with the WL-L1-IF approach, and the positioning accuracy is also much better.
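
A sketch of the daily-repeating model for the residual ionospheric error standard deviation, fitted to the previous day's data; the single-cosine form and parameter names are assumptions consistent with the description above, not necessarily the authors' exact function.

```python
import numpy as np
from scipy.optimize import curve_fit

def daily_cosine(t_hours, a0, a1, phase):
    """sigma(t) = a0 + a1 * cos(2*pi*(t - phase)/24): a daily-repeating model of the
    standard deviation of the residual ionospheric error at a given location."""
    return a0 + a1 * np.cos(2.0 * np.pi * (t_hours - phase) / 24.0)

def fit_sigma_model(t_prev, sigma_prev):
    """t_prev, sigma_prev: epoch times (hours of day) and empirical standard deviations of the
    residual ionospheric delay estimated from the previous day's data (assumed inputs)."""
    popt, _ = curve_fit(daily_cosine, t_prev, sigma_prev,
                        p0=[np.mean(sigma_prev), 0.01, 12.0])
    return lambda t: daily_cosine(t, *popt)   # usable by the rover to weight the pseudo-observation
```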

  7. Partitioning the Uncertainty in Estimates of Mean Basal Area Obtained from 10-year Diameter Growth Model Predictions

    Treesearch

    Ronald E. McRoberts

    2005-01-01

    Uncertainty in model-based predictions of individual tree diameter growth is attributed to three sources: measurement error for predictor variables, residual variability around model predictions, and uncertainty in model parameter estimates. Monte Carlo simulations are used to propagate the uncertainty from the three sources through a set of diameter growth models to...

  8. Convergence in parameters and predictions using computational experimental design.

    PubMed

    Hagen, David R; White, Jacob K; Tidor, Bruce

    2013-08-06

    Typically, biological models fitted to experimental data suffer from significant parameter uncertainty, which can lead to inaccurate or uncertain predictions. One school of thought holds that accurate estimation of the true parameters of a biological system is inherently problematic. Recent work, however, suggests that optimal experimental design techniques can select sets of experiments whose members probe complementary aspects of a biochemical network that together can account for its full behaviour. Here, we implemented an experimental design approach for selecting sets of experiments that constrain parameter uncertainty. We demonstrated with a model of the epidermal growth factor-nerve growth factor pathway that, after synthetically performing a handful of optimal experiments, the uncertainty in all 48 parameters converged below 10 per cent. Furthermore, the fitted parameters converged to their true values with a small error consistent with the residual uncertainty. When untested experimental conditions were simulated with the fitted models, the predicted species concentrations converged to their true values with errors that were consistent with the residual uncertainty. This paper suggests that accurate parameter estimation is achievable with complementary experiments specifically designed for the task, and that the resulting parametrized models are capable of accurate predictions.

  9. Practical Session: Simple Linear Regression

    NASA Astrophysics Data System (ADS)

    Clausel, M.; Grégoire, G.

    2014-12-01

Two exercises are proposed to illustrate simple linear regression. The first one is based on Galton's famous data set on heredity. We use the lm R command and obtain coefficient estimates, the residual standard error, R2, residuals, and so on. In the second example, devoted to data on the vapor tension of mercury, we fit a simple linear regression, predict values, and anticipate multiple linear regression. This practical session is an excerpt from practical exercises proposed by A. Dalalyan at ENPC (see Exercises 1 and 2 of http://certis.enpc.fr/~dalalyan/Download/TP_ENPC_4.pdf).
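
The session itself uses R's lm; for readers working in Python, a rough equivalent of the first exercise's workflow with statsmodels (the numbers below are toy placeholders, not Galton's data).

```python
import numpy as np
import statsmodels.api as sm

# toy predictor/response values for illustration only
x = np.array([64.0, 65.5, 67.0, 68.5, 70.0, 71.5])
y = np.array([65.8, 66.7, 67.2, 68.2, 69.5, 70.1])

model = sm.OLS(y, sm.add_constant(x)).fit()
print(model.params)      # intercept and slope estimates
print(model.bse)         # standard errors of the coefficients
print(model.rsquared)    # R^2
print(model.resid)       # residuals
print(model.predict(sm.add_constant(np.array([66.0, 69.0]))))   # predictions at new x values
```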

  10. High-resolution regional gravity field modelling in a mountainous area from terrestrial gravity data

    NASA Astrophysics Data System (ADS)

    Bucha, Blažej; Janák, Juraj; Papčo, Juraj; Bezděk, Aleš

    2016-11-01

    We develop a high-resolution regional gravity field model by a combination of spherical harmonics, band-limited spherical radial basis functions (SRBFs) and the residual terrain model (RTM) technique. As the main input data set, we employ a dense terrestrial gravity database (3-6 stations km-2), which enables gravity field modelling up to very short spatial scales. The approach is based on the remove-compute-restore methodology in which all the parts of the signal that can be modelled are removed prior to the least-squares adjustment in order to smooth the input gravity data. To this end, we utilize degree-2159 spherical harmonic models and the RTM technique using topographic models at 2 arcsec resolution. The residual short-scale gravity signal is modelled via the band-limited Shannon SRBF expanded up to degree 21 600, which corresponds to a spatial resolution of 30 arcsec. The combined model is validated against GNSS/levelling-based height anomalies, independent surface gravity data, deflections of the vertical and terrestrial vertical gravity gradients achieving an accuracy of 2.7 cm, 0.53 mGal, 0.39 arcsec and 279 E in terms of the RMS error, respectively. A key aspect of the combined approach, especially in mountainous areas, is the quality of the RTM. We therefore compare the performance of two RTM techniques within the innermost zone, the tesseroids and the polyhedron. It is shown that the polyhedron-based approach should be preferred in rugged terrain if a high-quality RTM is required. In addition, we deal with the RTM computations at points located below the reference surface of the residual terrain which is known to be a rather delicate issue.

  11. Continuous correction of differential path length factor in near-infrared spectroscopy

    PubMed Central

    Moore, Jason H.; Diamond, Solomon G.

    2013-01-01

In continuous-wave near-infrared spectroscopy (CW-NIRS), changes in the concentration of oxyhemoglobin and deoxyhemoglobin can be calculated by solving a set of linear equations from the modified Beer-Lambert Law. Cross-talk error in the calculated hemodynamics can arise from inaccurate knowledge of the wavelength-dependent differential path length factor (DPF). We apply the extended Kalman filter (EKF) with a dynamical systems model to calculate relative concentration changes in oxy- and deoxyhemoglobin while simultaneously estimating relative changes in DPF. Results from simulated and experimental CW-NIRS data are compared with results from a weighted least squares (WLSQ) method. The EKF method was found to effectively correct for artificially introduced errors in DPF and to reduce the cross-talk error in simulation. With experimental CW-NIRS data, the hemodynamic estimates from EKF differ significantly from the WLSQ (p<0.001). The cross-correlations among residuals at different wavelengths were found to be significantly reduced by the EKF method compared to WLSQ in three physiologically relevant spectral bands 0.04 to 0.15 Hz, 0.15 to 0.4 Hz and 0.4 to 2.0 Hz (p<0.001). This observed reduction in residual cross-correlation is consistent with reduced cross-talk error in the hemodynamic estimates from the proposed EKF method. PMID:23640027

  12. Continuous correction of differential path length factor in near-infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Talukdar, Tanveer; Moore, Jason H.; Diamond, Solomon G.

    2013-05-01

    In continuous-wave near-infrared spectroscopy (CW-NIRS), changes in the concentration of oxyhemoglobin and deoxyhemoglobin can be calculated by solving a set of linear equations from the modified Beer-Lambert Law. Cross-talk error in the calculated hemodynamics can arise from inaccurate knowledge of the wavelength-dependent differential path length factor (DPF). We apply the extended Kalman filter (EKF) with a dynamical systems model to calculate relative concentration changes in oxy- and deoxyhemoglobin while simultaneously estimating relative changes in DPF. Results from simulated and experimental CW-NIRS data are compared with results from a weighted least squares (WLSQ) method. The EKF method was found to effectively correct for artificially introduced errors in DPF and to reduce the cross-talk error in simulation. With experimental CW-NIRS data, the hemodynamic estimates from EKF differ significantly from the WLSQ (p<0.001). The cross-correlations among residuals at different wavelengths were found to be significantly reduced by the EKF method compared to WLSQ in three physiologically relevant spectral bands 0.04 to 0.15 Hz, 0.15 to 0.4 Hz and 0.4 to 2.0 Hz (p<0.001). This observed reduction in residual cross-correlation is consistent with reduced cross-talk error in the hemodynamic estimates from the proposed EKF method.
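
Both records start from the modified Beer-Lambert law, a small linear system per time sample. A minimal sketch of that underlying inversion is given below (unweighted least squares; the [HbO2, HbR] ordering and argument names are assumptions, and real extinction coefficients, DPF values and source-detector separation must be supplied). The EKF or WLSQ refinement described above operates on top of this basic model.

```python
import numpy as np

def mbll_concentration_changes(delta_od, ext_coeffs, dpf, separation_cm):
    """Solve the modified Beer-Lambert law for hemoglobin concentration changes.
    delta_od     : (n_wavelengths,) optical density changes
    ext_coeffs   : (n_wavelengths, 2) extinction coefficients, columns assumed [HbO2, HbR]
    dpf          : (n_wavelengths,) differential path length factors
    separation_cm: source-detector distance."""
    A = ext_coeffs * (dpf * separation_cm)[:, None]      # effective path length per wavelength
    dC, *_ = np.linalg.lstsq(A, delta_od, rcond=None)    # least squares if more than 2 wavelengths
    return dC                                            # [delta HbO2, delta HbR]
```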

  13. A Comparison of Two Balance Calibration Model Building Methods

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard; Ulbrich, Norbert

    2007-01-01

    Simulated strain-gage balance calibration data is used to compare the accuracy of two balance calibration model building methods for different noise environments and calibration experiment designs. The first building method obtains a math model for the analysis of balance calibration data after applying a candidate math model search algorithm to the calibration data set. The second building method uses stepwise regression analysis in order to construct a model for the analysis. Four balance calibration data sets were simulated in order to compare the accuracy of the two math model building methods. The simulated data sets were prepared using the traditional One Factor At a Time (OFAT) technique and the Modern Design of Experiments (MDOE) approach. Random and systematic errors were introduced in the simulated calibration data sets in order to study their influence on the math model building methods. Residuals of the fitted calibration responses and other statistical metrics were compared in order to evaluate the calibration models developed with different combinations of noise environment, experiment design, and model building method. Overall, predicted math models and residuals of both math model building methods show very good agreement. Significant differences in model quality were attributable to noise environment, experiment design, and their interaction. Generally, the addition of systematic error significantly degraded the quality of calibration models developed from OFAT data by either method, but MDOE experiment designs were more robust with respect to the introduction of a systematic component of the unexplained variance.

  14. A Filtering of Incomplete GNSS Position Time Series with Probabilistic Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Gruszczynski, Maciej; Klos, Anna; Bogusz, Janusz

    2018-04-01

For the first time, we introduced probabilistic principal component analysis (pPCA) for the spatio-temporal filtering of Global Navigation Satellite System (GNSS) position time series, to estimate and remove the Common Mode Error (CME) without interpolating missing values. We used data from the International GNSS Service (IGS) stations which contributed to the latest International Terrestrial Reference Frame (ITRF2014). The efficiency of the proposed algorithm was tested on simulated incomplete time series, and CME was then estimated for a set of 25 stations located in Central Europe. The newly applied pPCA was compared with previously used algorithms, which showed that this method is capable of resolving the problem of proper spatio-temporal filtering of GNSS time series characterized by different observation time spans. We showed that filtering can be carried out with the pPCA method even when two time series in the dataset have fewer than 100 common epochs of observations. The 1st Principal Component (PC) explained more than 36% of the total variance represented by the time series residuals (series with the deterministic model removed), which, compared with the variances of the other PCs (less than 8%), means that common signals are significant in GNSS residuals. A clear improvement in the spectral indices of the power-law noise was noticed for the Up component, reflected by an average shift towards white noise from -0.98 to -0.67 (30%). We observed a significant average reduction in the uncertainty of station velocities estimated from the filtered residuals, by 35, 28 and 69% for the North, East, and Up components, respectively. The CME series were also analysed in the context of environmental mass loading influences on the filtering results. Subtracting the environmental loading models from the GNSS residuals reduces the estimated CME variance by 20 and 65% for the horizontal and vertical components, respectively.
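
For intuition, a simplified complete-data version of the spatial filtering step: the CME is taken as the rank-1 (first principal component) reconstruction of the residual matrix and subtracted. The paper's pPCA additionally handles missing epochs through a probabilistic model, which this sketch does not.

```python
import numpy as np

def remove_cme_pca(residuals):
    """Estimate and remove the common mode error (CME) as the first principal component
    of a complete residual matrix (n_epochs x n_stations)."""
    X = residuals - residuals.mean(axis=0)       # remove per-station means
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    cme = np.outer(U[:, 0] * s[0], Vt[0])        # rank-1 reconstruction = spatially correlated signal
    explained = s[0] ** 2 / np.sum(s ** 2)       # fraction of variance in the 1st PC
    return residuals - cme, cme, explained
```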

  15. An Empirical State Error Covariance Matrix for Batch State Estimation

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2011-01-01

State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques inspire limited confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it directly follows how to determine the standard empirical state error covariance matrix. This matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty. Also, in its most straightforward form, the technique only requires supplemental calculations to be added to existing batch algorithms. The generation of this direct, empirical form of the state error covariance matrix is independent of the dimensionality of the observations. Mixed degrees of freedom for an observation set are allowed. As is the case with any simple empirical sample variance problem, the presented approach offers an opportunity (at least in the case of weighted least squares) to investigate confidence interval estimates for the error covariance matrix elements. The diagonal or variance terms of the error covariance matrix have a particularly simple form to associate with either a multiple degree of freedom chi-square distribution (more approximate) or with a gamma distribution (less approximate). The off-diagonal or covariance terms of the matrix are less clear in their statistical behavior. However, the off-diagonal covariance matrix elements still lend themselves to standard confidence interval error analysis. The distributional forms associated with the off-diagonal terms are more varied and, perhaps, more approximate than those associated with the diagonal terms. Using a simple weighted least squares sample problem, results obtained through use of the proposed technique are presented. The example consists of a simple two-observer triangulation problem with range-only measurements.
Variations of this problem reflect an ideal case (perfect knowledge of the range errors) and a mismodeled case (incorrect knowledge of the range errors).
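
A loose sketch of one plausible reading of this construction for a linear batch WLS problem: the formal covariance is rescaled by the average weighted residual variance, so that the actual residuals, rather than the assumed observation noise alone, set the uncertainty scale. The paper's literal reinterpretation of the normal equations may differ in detail.

```python
import numpy as np

def wls_with_empirical_covariance(A, y, W):
    """Batch weighted least squares with both the traditional and an empirical covariance.
    A : (m, n) design matrix, y : (m,) observations, W : (m, m) weight matrix."""
    N = A.T @ W @ A
    x_hat = np.linalg.solve(N, A.T @ W @ y)
    r = y - A @ x_hat                           # measurement residuals (all error sources)
    m, n = A.shape
    avg_wrv = (r @ W @ r) / m                   # average weighted residual variance
                                                # (some formulations divide by m - n instead)
    P_formal = np.linalg.inv(N)                 # traditional covariance: assumed noise only
    P_empirical = avg_wrv * P_formal            # empirical covariance scaled by actual residuals
    return x_hat, P_formal, P_empirical
```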

  16. A Complementary Note to 'A Lag-1 Smoother Approach to System-Error Estimation': The Intrinsic Limitations of Residual Diagnostics

    NASA Technical Reports Server (NTRS)

    Todling, Ricardo

    2015-01-01

    Recently, this author studied an approach to the estimation of system error based on combining observation residuals derived from a sequential filter and fixed lag-1 smoother. While extending the methodology to a variational formulation, experimenting with simple models and making sure consistency was found between the sequential and variational formulations, the limitations of the residual-based approach came clearly to the surface. This note uses the sequential assimilation application to simple nonlinear dynamics to highlight the issue. Only when some of the underlying error statistics are assumed known is it possible to estimate the unknown component. In general, when considerable uncertainties exist in the underlying statistics as a whole, attempts to obtain separate estimates of the various error covariances are bound to lead to misrepresentation of errors. The conclusions are particularly relevant to present-day attempts to estimate observation-error correlations from observation residual statistics. A brief illustration of the issue is also provided by comparing estimates of error correlations derived from a quasi-operational assimilation system and a corresponding Observing System Simulation Experiments framework.

  17. Improvements in GRACE Gravity Fields Using Regularization

    NASA Astrophysics Data System (ADS)

    Save, H.; Bettadpur, S.; Tapley, B. D.

    2008-12-01

The unconstrained global gravity field models derived from GRACE are susceptible to systematic errors that show up as broad "stripes" aligned in a North-South direction on the global maps of mass flux. These errors are believed to be a consequence of both systematic and random errors in the data that are amplified by the nature of the gravity field inverse problem. These errors impede scientific exploitation of the GRACE data products, and limit the realizable spatial resolution of the GRACE global gravity fields in certain regions. We use regularization techniques to reduce these "stripe" errors in the gravity field products. The regularization criteria are designed such that there is no attenuation of the signal and that the solutions fit the observations as well as an unconstrained solution. We have used a computationally inexpensive method, normally referred to as "L-ribbon", to find the regularization parameter. This paper discusses the characteristics and statistics of a 5-year time-series of regularized gravity field solutions. The solutions show markedly reduced stripes, are of uniformly good quality over time, and leave little or no systematic observation residuals (systematic residuals being a frequent consequence of signal suppression by regularization). Up to degree 14, the signal in the regularized solutions shows correlation greater than 0.8 with the un-regularized CSR Release-04 solutions. Signals from large-amplitude, small-spatial-extent events - such as the Great Sumatra-Andaman Earthquake of 2004 - are visible in the global solutions without the special post-facto error reduction techniques employed previously in the literature. Hydrological signals as small as 5 cm water-layer equivalent in small river basins, such as the Indus and the Nile, are clearly evident, in contrast to the noisy estimates from RL04. The residual variability over the oceans relative to a seasonal fit is small except at higher latitudes, and is evident without the need for de-striping or spatial smoothing.

  18. Evaluating and improving the representation of heteroscedastic errors in hydrological models

    NASA Astrophysics Data System (ADS)

    McInerney, D. J.; Thyer, M. A.; Kavetski, D.; Kuczera, G. A.

    2013-12-01

Appropriate representation of residual errors in hydrological modelling is essential for accurate and reliable probabilistic predictions. In particular, residual errors of hydrological models are often heteroscedastic, with large errors associated with high rainfall and runoff events. Recent studies have shown that using a weighted least squares (WLS) approach - where the magnitude of the residuals is assumed to be linearly proportional to the magnitude of the flow - captures some of this heteroscedasticity. In this study we explore a range of Bayesian approaches for improving the representation of heteroscedasticity in residual errors. We compare several improved formulations of the WLS approach, the well-known Box-Cox transformation and the more recent log-sinh transformation. Our results confirm that these approaches are able to stabilize the residual error variance, and that it is possible to improve the representation of heteroscedasticity compared with the linear WLS approach. We also find generally good performance of the Box-Cox and log-sinh transformations, although as indicated in earlier publications, the Box-Cox transform sometimes produces unrealistically large prediction limits. Our work explores the trade-offs between these different uncertainty characterization approaches, investigates how their performance varies across diverse catchments and models, and recommends practical approaches suitable for large-scale applications.
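
Minimal sketches of the three residual-error treatments compared above; the parameter names are generic, and in practice these parameters are inferred jointly with the hydrological model within the Bayesian framework.

```python
import numpy as np

def standardized_wls_residuals(q_obs, q_sim, sigma0, sigma1):
    """Linear WLS error model: the residual standard deviation grows linearly with
    simulated flow, sigma_t = sigma0 + sigma1 * q_sim_t."""
    return (np.asarray(q_obs) - np.asarray(q_sim)) / (sigma0 + sigma1 * np.asarray(q_sim))

def box_cox(q, lam, offset=0.0):
    """Box-Cox transform z = ((q + offset)**lam - 1)/lam (log for lam = 0); residuals are
    then computed in the transformed space, an alternative way to handle heteroscedasticity."""
    q = np.asarray(q, dtype=float) + offset
    return np.log(q) if lam == 0 else (q ** lam - 1.0) / lam

def log_sinh(q, a, b):
    """log-sinh transform z = (1/b) * log(sinh(a + b*q)), the more recent scheme
    (may overflow for very large a + b*q; an asymptotic form would be used in practice)."""
    return np.log(np.sinh(a + b * np.asarray(q, dtype=float))) / b
```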

  19. On the accuracy and precision of numerical waveforms: effect of waveform extraction methodology

    NASA Astrophysics Data System (ADS)

    Chu, Tony; Fong, Heather; Kumar, Prayush; Pfeiffer, Harald P.; Boyle, Michael; Hemberger, Daniel A.; Kidder, Lawrence E.; Scheel, Mark A.; Szilagyi, Bela

    2016-08-01

We present a new set of 95 numerical relativity simulations of non-precessing binary black holes (BBHs). The simulations sample comprehensively both black-hole spins up to spin magnitude of 0.9, and cover mass ratios 1-3. The simulations cover on average 24 inspiral orbits, plus merger and ringdown, with low initial orbital eccentricities e < 10^-4. A subset of the simulations extends the coverage of non-spinning BBHs up to mass ratio q = 10. Gravitational waveforms at asymptotic infinity are computed with two independent techniques: extrapolation and Cauchy characteristic extraction. An error analysis based on noise-weighted inner products is performed. We find that numerical truncation error, error due to gravitational wave extraction, and errors due to the Fourier transformation of the finite-length numerical waveform signals are of similar magnitude, with gravitational wave extraction errors dominating at noise-weighted mismatches of ~3×10^-4. This set of waveforms will serve to validate and improve aligned-spin waveform models for gravitational wave science.

  20. Dissipation and residue of myclobutanil in lychee.

    PubMed

    Liu, Yanping; Sun, Haibin; Liu, Fengmao; Wang, Siwei

    2012-06-01

The dissipation and residue of myclobutanil in lychee under field conditions were studied. To determine myclobutanil residue in samples, an analytical method with florisil column clean-up and detection by gas chromatography with electron capture detection (GC-ECD) was developed. Recoveries were in the range of 83.24%-89.00% with relative standard deviations of 2.67%-9.88%. This method was successfully applied to analyze the dissipation and residue of myclobutanil in lychee in Guangdong and Guangxi, China. The half-lives in lychee were 2.2 to 3.4 days. The residues of myclobutanil in lychee flesh were all below the limit of quantification (LOQ) value (0.01 mg/kg), and most of the residues were concentrated in the peel. The terminal residues of myclobutanil were all below the maximum residue limit (MRL) value set by the European Union (EU) (0.02 mg/kg). Hence the use of this pesticide is safe, and the results could also serve as a reference for setting an MRL for myclobutanil in lychee in China.

  1. Design of a robust baseband LPC coder for speech transmission over 9.6 kbit/s noisy channels

    NASA Astrophysics Data System (ADS)

    Viswanathan, V. R.; Russell, W. H.; Higgins, A. L.

    1982-04-01

    This paper describes the design of a baseband Linear Predictive Coder (LPC) which transmits speech over 9.6 kbit/sec synchronous channels with random bit errors of up to 1%. Presented are the results of our investigation of a number of aspects of the baseband LPC coder with the goal of maximizing the quality of the transmitted speech. Important among these aspects are: bandwidth of the baseband, coding of the baseband residual, high-frequency regeneration, and error protection of important transmission parameters. The paper discusses these and other issues, presents the results of speech-quality tests conducted during the various stages of optimization, and describes the details of the optimized speech coder. This optimized speech coding algorithm has been implemented as a real-time full-duplex system on an array processor. Informal listening tests of the real-time coder have shown that the coder produces good speech quality in the absence of channel bit errors and introduces only a slight degradation in quality for channel bit error rates of up to 1%.

  2. Second Chance: If at First You Do Not Succeed, Set up a Plan and Try, Try Again

    ERIC Educational Resources Information Center

    Poulsen, John

    2012-01-01

    Student teachers make errors in their practicum. Then, they learn and fix those errors. This is the standard arc within a successful practicum. Some students make errors that they do not fix and then make more errors that again remain unfixed. This downward spiral increases in pace until the classroom becomes chaos. These students at the…

  3. Overlay improvement by exposure map based mask registration optimization

    NASA Astrophysics Data System (ADS)

    Shi, Irene; Guo, Eric; Chen, Ming; Lu, Max; Li, Gordon; Li, Rivan; Tian, Eric

    2015-03-01

Along with the increased miniaturization of semiconductor electronic devices, the design rules of advanced semiconductor devices shrink dramatically. [1] One of the main challenges of the lithography step is layer-to-layer overlay control. Furthermore, DPT (Double Patterning Technology) has been adopted for advanced technology nodes such as 28nm and 14nm, and the corresponding overlay budget becomes even tighter. [2][3] After in-die mask registration (pattern placement) measurement was introduced, model analysis with a KLA SOV (sources of variation) tool showed that the registration difference between masks is a significant error source for wafer layer-to-layer overlay in the 28nm process. [4][5] Optimizing mask registration would therefore substantially improve wafer overlay performance. It has been reported that a laser-based registration control (RegC) process can be applied after pattern generation or after pellicle mounting to allow fine tuning of the mask registration. [6] In this paper we propose a novel method of mask registration correction that can be applied before mask writing, based on the mask exposure map and considering the mask chip layout, writing sequence, and pattern density distribution. Our experimental data show that if the pattern density on the mask is kept low, the in-die mask registration residual error (3 sigma) stays below 5nm regardless of the blank type and the writer POSCOR (position correction) file applied; this indicates that the random error induced by material or equipment occupies a relatively fixed portion of the mask registration error budget. In production, comparing the mask registration difference across critical layers shows that the registration residual error of line/space layers with higher pattern density is always much larger than that of contact-hole layers with lower pattern density. Additionally, the mask registration difference between layers with similar pattern density can also be kept below 5nm. We assume that the mask registration error, excluding the random component, is mostly induced by charge accumulation during mask writing, which may be calculated from the surrounding exposed pattern density. A multi-loading test of mask registration shows that, with an x-direction writing sequence, the registration behavior in the x direction is mainly related to the sequence direction, while the registration in the y direction is strongly affected by the pattern density distribution map. This proves that part of the mask registration error is due to charging from the nearby environment. If the exposure sequence is chip by chip, as in a normal multi-chip layout, the mask registration in both the x and y directions is affected analogously, which has also been confirmed by real data. Therefore, we set up a simple model to predict the mask registration error from the mask exposure map, and correct it with the given POSCOR (position correction) file for advanced mask writing if needed.

  4. Neutrino masses and cosmological parameters from a Euclid-like survey: Markov Chain Monte Carlo forecasts including theoretical errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Audren, Benjamin; Lesgourgues, Julien; Bird, Simeon

    2013-01-01

We present forecasts for the accuracy of determining the parameters of a minimal cosmological model and the total neutrino mass based on combined mock data for a future Euclid-like galaxy survey and Planck. We consider two different galaxy surveys: a spectroscopic redshift survey and a cosmic shear survey. We make use of the Monte Carlo Markov Chains (MCMC) technique and assume two sets of theoretical errors. The first error is meant to account for uncertainties in the modelling of the effect of neutrinos on the non-linear galaxy power spectrum and we assume this error to be fully correlated in Fourier space. The second error is meant to parametrize the overall residual uncertainties in modelling the non-linear galaxy power spectrum at small scales, and is conservatively assumed to be uncorrelated and to increase with the ratio of a given scale to the scale of non-linearity. It hence increases with wavenumber and decreases with redshift. With these two assumptions for the errors and assuming further conservatively that the uncorrelated error rises above 2% at k = 0.4 h/Mpc and z = 0.5, we find that a future Euclid-like cosmic shear/galaxy survey achieves a 1-σ error on M_ν close to 32 meV/25 meV, sufficient for detecting the total neutrino mass with good significance. If the residual uncorrelated error indeed rises rapidly towards smaller scales in the non-linear regime, as we have assumed here, then the data on non-linear scales do not increase the sensitivity to the total neutrino mass. Assuming instead a ten times smaller theoretical error with the same scale dependence, the error on the total neutrino mass decreases moderately from σ(M_ν) = 18 meV to 14 meV when mildly non-linear scales with 0.1 h/Mpc < k < 0.6 h/Mpc are included in the analysis of the galaxy survey data.

  5. Comparing Planck and WMAP: Maps, Spectra, and Parameters

    NASA Astrophysics Data System (ADS)

    Larson, D.; Weiland, J. L.; Hinshaw, G.; Bennett, C. L.

    2015-03-01

We examine the consistency of the 9 yr WMAP data and the first-release Planck data. We specifically compare sky maps, power spectra, and the inferred Λ cold dark matter (ΛCDM) cosmological parameters. Residual dipoles are seen in the WMAP and Planck sky map differences, but their amplitudes are consistent within the quoted uncertainties, and they are not large enough to explain the widely noted differences in angular power spectra at higher l. We remove the residual dipoles and use templates to remove residual Galactic foregrounds; after doing so, the residual difference maps exhibit a quadrupole and other large-scale systematic structure. We identify this structure as possibly originating from Planck's beam sidelobe pick-up, but note that it appears to have insignificant cosmological impact. We develop an extension of the internal linear combination technique to find the minimum-variance difference between the WMAP and Planck sky maps; again we find features that plausibly originate in the Planck data. Lacking access to the Planck time-ordered data we cannot further assess these features. We examine ΛCDM model fits to the angular power spectra and conclude that the ~2.5% difference in the spectra at multipoles greater than l ~ 100 is significant at the 3-5σ level, depending on how beam uncertainties are handled in the data. We revisit the analysis of WMAP's beam data to address the power spectrum differences and conclude that previously derived uncertainties are robust and cannot explain the power spectrum differences. In fact, any remaining WMAP errors are most likely to exacerbate the difference. Finally, we examine the consistency of the ΛCDM parameters inferred from each data set taking into account the fact that both experiments observe the same sky, but cover different multipole ranges, apply different sky masks, and have different noise. We find that, while individual parameter values agree within the uncertainties, the six parameters taken together are discrepant at the ~6σ level, with χ² = 56 for 6 degrees of freedom (probability to exceed, PTE = 3×10^-10). The nature of this discrepancy is explored: of the six parameters, χ² is best improved by marginalizing over Ω_c h², giving χ² = 5.2 for 5 degrees of freedom. As an exercise, we find that perturbing the WMAP window function by its dominant beam error profile has little effect on Ω_c h², while perturbing the Planck window function by its corresponding error profile has a much greater effect on Ω_c h².

  6. Ionospheric Slant Total Electron Content Analysis Using Global Positioning System Based Estimation

    NASA Technical Reports Server (NTRS)

    Komjathy, Attila (Inventor); Mannucci, Anthony J. (Inventor); Sparks, Lawrence C. (Inventor)

    2017-01-01

    A method, system, apparatus, and computer program product provide the ability to analyze ionospheric slant total electron content (TEC) using global navigation satellite systems (GNSS)-based estimation. Slant TEC is estimated for a given set of raypath geometries by fitting historical GNSS data to a specified delay model. The accuracy of the specified delay model is estimated by computing delay estimate residuals and plotting a behavior of the delay estimate residuals. An ionospheric threat model is computed based on the specified delay model. Ionospheric grid delays (IGDs) and grid ionospheric vertical errors (GIVEs) are computed based on the ionospheric threat model.

  7. Statistics of the residual refraction errors in laser ranging data

    NASA Technical Reports Server (NTRS)

    Gardner, C. S.

    1977-01-01

    A theoretical model for the range error covariance was derived by assuming that the residual refraction errors are due entirely to errors in the meteorological data which are used to calculate the atmospheric correction. The properties of the covariance function are illustrated by evaluating the theoretical model for the special case of a dense network of weather stations uniformly distributed within a circle.

  8. Bayesian inversions of a dynamic vegetation model at four European grassland sites

    NASA Astrophysics Data System (ADS)

    Minet, J.; Laloy, E.; Tychon, B.; Francois, L.

    2015-05-01

Eddy covariance data from four European grassland sites are used to probabilistically invert the CARAIB (CARbon Assimilation In the Biosphere) dynamic vegetation model (DVM) with 10 unknown parameters, using the DREAM(ZS) (DiffeRential Evolution Adaptive Metropolis) Markov chain Monte Carlo (MCMC) sampler. We focus on comparing model inversions, considering both homoscedastic and heteroscedastic eddy covariance residual errors, with variances either fixed a priori or jointly inferred together with the model parameters. Agreements between measured and simulated data during calibration are comparable with previous studies, with root mean square errors (RMSEs) of simulated daily gross primary productivity (GPP), ecosystem respiration (RECO) and evapotranspiration (ET) ranging from 1.73 to 2.19 g C m-2 day-1, 1.04 to 1.56 g C m-2 day-1 and 0.50 to 1.28 mm day-1, respectively. For the calibration period, using a homoscedastic eddy covariance residual error model resulted in a better agreement between measured and modelled data than using a heteroscedastic residual error model. However, a model validation experiment showed that CARAIB models calibrated considering heteroscedastic residual errors perform better. Posterior parameter distributions derived using a heteroscedastic model of the residuals thus appear to be more robust. This is the case even though the classical linear heteroscedastic error model assumed herein did not fully remove the heteroscedasticity of the GPP residuals. Despite the fact that the calibrated model is generally capable of fitting the data within measurement errors, systematic biases in the model simulations are observed. These are likely due to model inadequacies such as shortcomings in the photosynthesis modelling. Besides the residual error treatment, differences between model parameter posterior distributions among the four grassland sites are also investigated. It is shown that the marginal distributions of the specific leaf area and characteristic mortality time parameters can be explained by site-specific ecophysiological characteristics.

  9. A heteroskedastic error covariance matrix estimator using a first-order conditional autoregressive Markov simulation for deriving asympotical efficient estimates from ecological sampled Anopheles arabiensis aquatic habitat covariates

    PubMed Central

    Jacob, Benjamin G; Griffith, Daniel A; Muturi, Ephantus J; Caamano, Erick X; Githure, John I; Novak, Robert J

    2009-01-01

Background: Autoregressive regression coefficients for Anopheles arabiensis aquatic habitat models are usually assessed using global error techniques and are reported as error covariance matrices. A global statistic, however, will summarize error estimates from multiple habitat locations. This makes it difficult to identify where there are clusters of An. arabiensis aquatic habitats of acceptable prediction. It is therefore useful to conduct some form of spatial error analysis to detect clusters of An. arabiensis aquatic habitats based on uncertainty residuals from individual sampled habitats. In this research, a method of error estimation for spatial simulation models was demonstrated using autocorrelation indices and eigenfunction spatial filters to distinguish among the effects of parameter uncertainty on a stochastic simulation of ecologically sampled Anopheles aquatic habitat covariates. A test for diagnostic checking of error residuals in an An. arabiensis aquatic habitat model may enable intervention efforts targeting productive habitat clusters, based on larval/pupal productivity, by using the asymptotic distribution of parameter estimates from a residual autocovariance matrix. The models considered in this research extend a normal regression analysis previously considered in the literature. Methods: Field and remote-sampled data were collected during July 2006 to December 2007 in the Karima rice-village complex in Mwea, Kenya. SAS 9.1.4® was used to explore univariate statistics, correlations, distributions, and to generate global autocorrelation statistics from the ecologically sampled datasets. A local autocorrelation index was also generated using spatial covariance parameters (i.e., Moran's Indices) in a SAS/GIS® database. The Moran's statistic was decomposed into orthogonal and uncorrelated synthetic map pattern components using a Poisson model with a gamma-distributed mean (i.e. negative binomial regression). The eigenfunction values from the spatial configuration matrices were then used to define expectations for prior distributions using a Markov chain Monte Carlo (MCMC) algorithm. A set of posterior means was defined in WinBUGS 1.4.3®. After the model had converged, samples from the conditional distributions were used to summarize the posterior distribution of the parameters. Thereafter, a spatial residual trend analysis was used to evaluate variance uncertainty propagation in the model using an autocovariance error matrix. Results: By specifying coefficient estimates in a Bayesian framework, the covariate number of tillers was found to be a significant predictor, positively associated with An. arabiensis aquatic habitats. The spatial filter models accounted for approximately 19% redundant locational information in the ecologically sampled An. arabiensis aquatic habitat data. In the residual error estimation model there was significant positive autocorrelation (i.e., clustering of habitats in geographic space) based on log-transformed larval/pupal data and the sampled covariate depth of habitat. Conclusion: An autocorrelation error covariance matrix and a spatial filter analysis can prioritize mosquito control strategies by providing a computationally attractive and feasible description of variance uncertainty estimates for correctly identifying clusters of prolific An. arabiensis aquatic habitats based on larval/pupal productivity. PMID:19772590

  10. Goal-oriented explicit residual-type error estimates in XFEM

    NASA Astrophysics Data System (ADS)

    Rüter, Marcus; Gerasimov, Tymofiy; Stein, Erwin

    2013-08-01

    A goal-oriented a posteriori error estimator is derived to control the error obtained while approximately evaluating a quantity of engineering interest, represented in terms of a given linear or nonlinear functional, using extended finite elements of Q1 type. The same approximation method is used to solve the dual problem as required for the a posteriori error analysis. It is shown that for both problems to be solved numerically the same singular enrichment functions can be used. The goal-oriented error estimator presented can be classified as explicit residual type, i.e. the residuals of the approximations are used directly to compute upper bounds on the error of the quantity of interest. This approach therefore extends the explicit residual-type error estimator for classical energy norm error control as recently presented in Gerasimov et al. (Int J Numer Meth Eng 90:1118-1155, 2012a). Without loss of generality, the a posteriori error estimator is applied to the model problem of linear elastic fracture mechanics. Thus, emphasis is placed on the fracture criterion, here the J-integral, as the chosen quantity of interest. Finally, various illustrative numerical examples are presented where, on the one hand, the error estimator is compared to its finite element counterpart and, on the other hand, improved enrichment functions, as introduced in Gerasimov et al. (2012b), are discussed.
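
    As a schematic reminder of the general structure behind such estimators, the following block sketches the standard dual-weighted, explicit residual-type bound in generic notation; the symbols (interior residual r_K, jump residual j, dual solution z, weights ω) are illustrative and do not reproduce the paper's exact estimator or enrichment terms.

    ```latex
    % Generic dual-weighted, explicit residual-type bound (illustrative notation only):
    \begin{align}
      J(u) - J(u_h) &\approx R(u_h)(z - z_h)
        = \sum_{K \in \mathcal{T}_h} \left( \int_K r_K\,(z - z_h)\,\mathrm{d}x
        + \int_{\partial K} j_{\partial K}\,(z - z_h)\,\mathrm{d}s \right), \\
      |J(u) - J(u_h)| &\lesssim \sum_{K \in \mathcal{T}_h}
        \left( \lVert r_K \rVert_{0,K}\,\omega_K
        + \lVert j_{\partial K} \rVert_{0,\partial K}\,\omega_{\partial K} \right).
    \end{align}
    ```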

  11. The U.S. Air Force Photorefractive Keratectomy (PRK) Study: Evaluation of Residual Refractive Error and High- and Low-Contrast Visual Acuity

    DTIC Science & Technology

    2006-07-01

    Report AFRL-SA-BR-TR-2010-0011, July 2006: The U.S. Air Force Photorefractive Keratectomy (PRK) Study, Evaluation of Residual Refractive Error and High- and Low-Contrast Visual Acuity. The available record snippet indicates that outcomes were expressed as Snellen-equivalent visual acuity values and as lines gained vs. lost after PRK.

  12. Residual volume on land and when immersed in water: effect on percent body fat.

    PubMed

    Demura, Shinichi; Yamaji, Shunsuke; Kitabayashi, Tamotsu

    2006-08-01

    There is a large residual volume (RV) error when assessing percent body fat by means of hydrostatic weighing. It has generally been measured before hydrostatic weighing. However, an individual's maximal exhalations on land and in the water may not be identical. The aims of this study were to compare residual volumes and vital capacities on land and when immersed to the neck in water, and to examine the influence of the measurement error on percent body fat. The participants were 20 healthy Japanese males and 20 healthy Japanese females. To assess the influence of the RV error on percent body fat in both conditions and to evaluate the cross-validity of the prediction equation, another 20 males and 20 females were measured using hydrostatic weighing. Residual volume was measured on land and in the water using a nitrogen wash-out technique based on an open-circuit approach. In water, residual volume was measured with the participant sitting on a chair while the whole body, except the head, was submerged. The trial-to-trial reliabilities of residual volume in both conditions were very good (intraclass correlation coefficient > 0.98). Although residual volumes measured under the two conditions did not agree completely, they showed a high correlation (males: 0.880; females: 0.853; P < 0.05). The limits of agreement for residual volumes in both conditions using Bland-Altman plots were -0.430 to 0.508 litres. This range was larger than the trial-to-trial error of residual volume on land (-0.260 to 0.304 litres). Moreover, the relationship between percent body fat computed using residual volume measured in both conditions was very good for both sexes (males: r = 0.902; females: r = 0.869, P < 0.0001), and the errors were approximately -6 to 4% (limits of agreement for percent body fat: -3.4 to 2.2% for males; -6.3 to 4.4% for females). We conclude that if these errors are of no importance, residual volume measured on land can be used when assessing body composition.
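
    As a brief aside on the agreement statistic used above, the following is a minimal sketch of computing Bland-Altman 95% limits of agreement between two measurement conditions; the residual-volume numbers are hypothetical and not taken from the study.

    ```python
    import numpy as np

    def bland_altman_limits(x, y):
        """Bland-Altman 95% limits of agreement between two measurement conditions."""
        diff = np.asarray(x) - np.asarray(y)
        mean_diff = diff.mean()                 # systematic bias between the conditions
        sd_diff = diff.std(ddof=1)              # spread of the differences
        return mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff

    # Hypothetical residual volumes (litres) measured on land and in water
    rv_land = np.array([1.21, 1.05, 1.34, 0.98, 1.42])
    rv_water = np.array([1.18, 1.11, 1.29, 1.05, 1.36])
    print(bland_altman_limits(rv_land, rv_water))
    ```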

  13. Fusing metabolomics data sets with heterogeneous measurement errors

    PubMed Central

    Waaijenborg, Sandra; Korobko, Oksana; Willems van Dijk, Ko; Lips, Mirjam; Hankemeier, Thomas; Wilderjans, Tom F.; Smilde, Age K.

    2018-01-01

    Combining different metabolomics platforms can contribute significantly to the discovery of complementary processes expressed under different conditions. However, analysing the fused data might be hampered by the difference in their quality. In metabolomics data, one often observes that measurement errors increase with increasing measurement level and that different platforms have different measurement error variance. In this paper we compare three different approaches to correct for the measurement error heterogeneity, by transformation of the raw data, by weighted filtering before modelling and by a modelling approach using a weighted sum of residuals. For an illustration of these different approaches we analyse data from healthy obese and diabetic obese individuals, obtained from two metabolomics platforms. Concluding, the filtering and modelling approaches that both estimate a model of the measurement error did not outperform the data transformation approaches for this application. This is probably due to the limited difference in measurement error and the fact that estimation of measurement error models is unstable due to the small number of repeats available. A transformation of the data improves the classification of the two groups. PMID:29698490

  14. X-ray dual energy spectral parameter optimization for bone Calcium/Phosphorus mass ratio estimation

    NASA Astrophysics Data System (ADS)

    Sotiropoulou, P. I.; Fountos, G. P.; Martini, N. D.; Koukou, V. N.; Michail, C. M.; Valais, I. G.; Kandarakis, I. S.; Nikiforidis, G. C.

    2015-09-01

    Calcium (Ca) and Phosphorus (P) bone mass ratio has been identified as an important, yet underutilized, risk factor in osteoporosis diagnosis. The purpose of this simulation study is to investigate the use of effective or mean mass attenuation coefficients in Ca/P mass ratio estimation using a dual-energy method. The investigation was based on optimizing the accuracy of the Ca/P ratio, assessed in terms of the coefficient of variation of the ratio. Different set-ups were examined, based on the K-edge filtering technique and a single X-ray exposure. The modified X-ray output was attenuated by various Ca/P mass ratios resulting in nine calibration points, while keeping the total bone thickness constant. The simulated data were obtained assuming a photon-counting, energy-discriminating detector. The standard deviation of the residuals was used to compare and evaluate the accuracy between the different dual-energy set-ups. The optimum mass attenuation coefficient for the Ca/P mass ratio estimation was the effective coefficient in all the examined set-ups. The variation of the residuals between the different set-ups was not significant.

  15. Estimation of uncertainty for contour method residual stress measurements

    DOE PAGES

    Olson, Mitchell D.; DeWald, Adrian T.; Prime, Michael B.; ...

    2014-12-03

    This paper describes a methodology for the estimation of measurement uncertainty for the contour method, where the contour method is an experimental technique for measuring a two-dimensional map of residual stress over a plane. Random error sources including the error arising from noise in displacement measurements and the smoothing of the displacement surfaces are accounted for in the uncertainty analysis. The output is a two-dimensional, spatially varying uncertainty estimate such that every point on the cross-section where residual stress is determined has a corresponding uncertainty value. Both numerical and physical experiments are reported, which are used to support the usefulness of the proposed uncertainty estimator. The uncertainty estimator shows the contour method to have larger uncertainty near the perimeter of the measurement plane. For the experiments, which were performed on a quenched aluminum bar with a cross section of 51 × 76 mm, the estimated uncertainty was approximately 5 MPa (σ/E = 7 · 10⁻⁵) over the majority of the cross-section, with localized areas of higher uncertainty, up to 10 MPa (σ/E = 14 · 10⁻⁵).

  16. Stack Number Influence on the Accuracy of Aster Gdem (V2)

    NASA Astrophysics Data System (ADS)

    Mirzadeh, S. M. J.; Alizadeh Naeini, A.; Fatemi, S. B.

    2017-09-01

    In this research, the influence of stack number (STKN) on the accuracy of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Global DEM (GDEM) has been investigated. For this purpose, two data sets of ASTER and reference DEMs from two study areas with different topography (Bomehen and Tazehabad) were used. The results show that in both study areas a STKN of 19 yields the minimum error, although this minimum differs only slightly from the errors at other STKN values. The analysis of slope, STKN, and error values shows that there is no strong correlation between these parameters in either study area. For example, the mean absolute error increases with rougher topography and with increasing slope and elevation, whereas changes in STKN have no important effect on the error values. Furthermore, at high STKN values the effect of slope on elevation accuracy practically decreases. There is also no strong correlation between the residuals and STKN in the ASTER GDEM.

  17. Application of ultra-high pressure liquid chromatography linear ion-trap orbitrap to qualitative and quantitative assessment of pesticide residues.

    PubMed

    Farré, M; Picó, Y; Barceló, D

    2014-02-07

    The analysis of pesticide residues using a latest-generation, high-resolution and high-mass-accuracy hybrid linear ion trap-Orbitrap mass spectrometer (LTQ-Orbitrap-MS) was explored. Pesticides were extracted from fruits, fish, bees and sediments by QuEChERS and from water by solid-phase extraction with Oasis HLB cartridges. An ultra-high pressure liquid chromatography (UHPLC)-LTQ-Orbitrap mass spectrometer acquired full-scan MS data for quantification, and data-dependent (dd) MS² and MS³ product ion spectra for identification and/or confirmation. The regression coefficients (r²) for the calibration curves (spanning two orders of magnitude up from the lowest calibration level) in the study were ≥0.99. The LODs for 54 validated compounds were ≤2 ng mL⁻¹ (analytical standards). The relative standard deviation (RSD), which was used to estimate precision, was always lower than 22%. Extraction recoveries and matrix effects ranged from 58 to 120% and from -92 to 52%, respectively. Mass accuracy was always ≤4 ppm, corresponding to a maximum mass error of 1.6 millimass units (mmu). This procedure was then successfully applied to pesticide residues in a set of the above-mentioned food and environmental samples. In addition to target analytes, this method enables the simultaneous detection/identification of non-target pesticides, pharmaceuticals, drugs of abuse, mycotoxins, and their metabolites. Copyright © 2013 Elsevier B.V. All rights reserved.

  18. Motor-Based Treatment with and without Ultrasound Feedback for Residual Speech-Sound Errors

    ERIC Educational Resources Information Center

    Preston, Jonathan L.; Leece, Megan C.; Maas, Edwin

    2017-01-01

    Background: There is a need to develop effective interventions and to compare the efficacy of different interventions for children with residual speech-sound errors (RSSEs). Rhotics (the r-family of sounds) are frequently in error in American English-speaking children with RSSEs and are commonly targeted in treatment. One treatment approach involves…

  19. Evaluating flow cytometer performance with weighted quadratic least squares analysis of LED and multi-level bead data

    PubMed Central

    Parks, David R.; Khettabi, Faysal El; Chase, Eric; Hoffman, Robert A.; Perfetto, Stephen P.; Spidlen, Josef; Wood, James C.S.; Moore, Wayne A.; Brinkman, Ryan R.

    2017-01-01

    We developed a fully automated procedure for analyzing data from LED pulses and multi-level bead sets to evaluate backgrounds and photoelectron scales of cytometer fluorescence channels. The method improves on previous formulations by fitting a full quadratic model with appropriate weighting and by providing standard errors and peak residuals as well as the fitted parameters themselves. Here we describe the details of the methods and procedures involved and present a set of illustrations and test cases that demonstrate the consistency and reliability of the results. The automated analysis and fitting procedure is generally quite successful in providing good estimates of the Spe (statistical photoelectron) scales and backgrounds for all of the fluorescence channels on instruments with good linearity. The precision of the results obtained from LED data is almost always better than for multi-level bead data, but the bead procedure is easy to carry out and provides results good enough for most purposes. Including standard errors on the fitted parameters is important for understanding the uncertainty in the values of interest. The weighted residuals give information about how well the data fits the model, and particularly high residuals indicate bad data points. Known photoelectron scales and measurement channel backgrounds make it possible to estimate the precision of measurements at different signal levels and the effects of compensated spectral overlap on measurement quality. Combining this information with measurements of standard samples carrying dyes of biological interest, we can make accurate comparisons of dye sensitivity among different instruments. Our method is freely available through the R/Bioconductor package flowQB. PMID:28160404
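
    To make the fitting step concrete, the following is a minimal sketch of a weighted quadratic fit of per-peak variance against mean signal; the numbers, the crude 1/sigma weighting, and the interpretation comments are illustrative assumptions and do not reproduce the flowQB implementation.

    ```python
    import numpy as np

    # Hypothetical per-peak statistics from an LED or multi-level bead series
    mean_signal = np.array([50., 200., 800., 3200., 12800., 51200.])
    var_signal = np.array([9.0e2, 1.6e3, 4.1e3, 1.4e4, 5.5e4, 2.6e5])

    # Weighted quadratic model: var = c2*mean^2 + c1*mean + c0.
    # Roughly, c0 reflects the background variance and c1 relates channel units to
    # the statistical photoelectron (Spe) scale (mean/c1 approximates the signal in Spe units).
    weights = 1.0 / np.sqrt(var_signal)     # approximate 1/sigma weighting (assumption)
    coeffs, cov = np.polyfit(mean_signal, var_signal, 2, w=weights, cov=True)
    c2, c1, c0 = coeffs
    std_err = np.sqrt(np.diag(cov))         # standard errors of the fitted parameters
    print(c0, c1, c2, std_err)
    ```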

  20. Factors affecting the use of 13Cα chemical shifts to determine, refine, and validate protein structures

    PubMed Central

    Vila, Jorge A.; Scheraga, Harold A.

    2008-01-01

    Interest centers here on the analysis of two different, but related, phenomena that affect side-chain conformations and consequently 13Cα chemical shifts and their applications to determine, refine, and validate protein structures. The first is whether 13Cα chemical shifts, computed at the DFT level of approximation with charged residues is a better approximation of observed 13Cα chemical shifts than those computed with neutral residues for proteins in solution. Accurate computation of 13Cα chemical shifts requires a proper representation of the charges, which might not take on integral values. For this analysis, the charges for 139 conformations of the protein ubiquitin were determined by explicit consideration of protein binding equilibria, at a given pH, that is, by exploring the 2ξ possible ionization states of the whole molecule, with ξ being the number of ionizable groups. The results of this analysis, as revealed by the shielding/deshield-ing of the 13Cα nucleus, indicated that: (i) there is a significant difference in the computed 13Cα chemical shifts, between basic and acidic groups, as a function of the degree of charge of the side chain; (ii) this difference is attributed to the distance between the ionizable groups and the 13Cα nucleus, which is shorter for the acidic Asp and Glu groups as compared with that for the basic Lys and Arg groups; and (iii) the use of neutral, rather than charged, basic and acidic groups is a better approximation of the observed 13Cα chemical shifts of a protein in solution. The second is how side-chain flexibility influences computed 13Cα chemical shifts in an additional set of ubiquitin conformations, in which the side chains are generated from an NMR-derived structure with the backbone conformation assumed to be fixed. The 13Cα chemical shift of a given amino acid residue in a protein is determined, mainly, by its own backbone and side-chain torsional angles, independent of the neighboring residues; the conformation of a given residue itself, however, depends on the environment of this residue and, hence, on the whole protein structure. As a consequence, this analysis reveals the role and impact of an accurate side-chain computation in the determination and refinement of protein conformation. The results of this analysis are: (i) a lower error between computed and observed 13Cα chemical shifts (by up to 3.7 ppm), was found for ~68% and ~63% of all ionizable residues and all non-Ala/Pro/Gly residues, respectively, in the additional set of conformations, compared with results for the model from which the set was derived; and (ii) all the additional conformations exhibit a lower root-mean-square-deviation (1.97 ppm ≤ rmsd ≤ 2.13 ppm), between computed and observed 13Cα chemical shifts, than the rmsd (2.32 ppm) computed for the starting conformation from which this additional set was derived. As a validation test, an analysis of the additional set of ubiquitin conformations, comparing computed and observed values of both 13Cα chemical shifts and χ1 torsional angles (given by the vicinal coupling constants, 3JN–Cγ and 3JC′–Cγ, is discussed. PMID:17975838

  1. Management of moderate and severe corneal astigmatism with AcrySof® toric intraocular lens implantation – Our experience

    PubMed Central

    Farooqui, Javed Hussain; Koul, Archana; Dutta, Ranjan; Shroff, Noshir Minoo

    2015-01-01

    Purpose To evaluate visual performance following toric intraocular lens implantation for cataract with moderate and severe corneal astigmatism. Setting Cataract services, Shroff Eye Centre, New Delhi, India. Design Case series. Method This prospective study included 64 eyes of 40 patients with more than 1.50 dioptre (D) of pre-existing corneal astigmatism undergoing phacoemulsification with implantation of the AcrySof® toric IntraOcular Lens (IOL). The unaided visual acuity (UCVA), best corrected visual acuity (BCVA), residual refractive sphere and refractive cylinders were evaluated. Toric IOL axis and alignment error were measured by the slit lamp method and the Adobe Photoshop (version 7) method. Patient satisfaction was evaluated using a satisfaction questionnaire at 3 months. Results The mean residual refractive astigmatism was 0.57 D at the final follow-up of 3 months. Mean alignment error was 3.44 degrees (SD = 2.60) by the slit lamp method and 3.88 degrees (SD = 2.86) by the Photoshop method. Forty-six (71.9%) eyes showed misalignment of 5 degrees or less, and 60 (93.8%) eyes showed misalignment of 10 degrees or less. The mean logMAR UCVA on the 1st post-op day was 0.172 (SD = 0.02), on the 7th post-op day 0.138 (SD = 0.11), and on the 30th post-op day 0.081 (SD = 0.11). The mean logMAR BCVA at three months was −0.04 (SD = 0.76). Conclusion We believe that implantation of the AcrySof® toric IOL is an effective, safe and predictable method to correct high amounts of corneal astigmatism during cataract surgery. PMID:26586976

  2. Optical digital to analog conversion performance analysis for indoor set-up conditions

    NASA Astrophysics Data System (ADS)

    Dobesch, Aleš; Alves, Luis Nero; Wilfert, Otakar; Ribeiro, Carlos Gaspar

    2017-10-01

    In visible light communication (VLC) the optical digital to analog conversion (ODAC) approach was proposed as a suitable driving technique able to overcome the light-emitting diode's (LED) non-linear characteristic. This concept is analogous to an electrical digital-to-analog converter (EDAC): digital bits are binary weighted to represent an analog signal. The method supports elementary on-off based modulations that work around the LED's non-linear characteristic, allowing simultaneous lighting and communication. In the ODAC concept the reconstruction error does not depend simply on the converter bit depth, as in the case of an EDAC; rather, it also depends on the communication system set-up and the geometrical relation between emitter and receiver. The paper describes simulation results presenting the ODAC's error performance taking into account the optical channel, the LED's half-power angle (HPA) and the receiver field of view (FOV). The set-up under consideration examines indoor conditions for a square room with 4 m length and 3 m height, operating with one dominant wavelength (blue) and having walls with a reflection coefficient of 0.8. The achieved results reveal that the reconstruction error increases at higher data rates as a result of interference due to multipath propagation.
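
    The following is a minimal sketch of the binary-weighting idea and of how a channel-dependent gain mismatch, rather than bit depth alone, degrades the reconstruction; the bit streams, gain perturbation, and function name odac_reconstruct are hypothetical and not drawn from the paper's simulation set-up.

    ```python
    import numpy as np

    def odac_reconstruct(bit_streams, gains):
        """Sum binary-weighted on/off LED sub-streams into an analog-like signal.

        bit_streams: array of shape (n_bits, n_samples) with 0/1 values per LED group.
        gains: per-bit effective channel gains at the receiver (ideally 2**k weighted).
        """
        return gains @ bit_streams

    n_bits, n_samples = 4, 8
    rng = np.random.default_rng(0)
    bits = rng.integers(0, 2, size=(n_bits, n_samples)).astype(float)

    ideal_gains = 2.0 ** np.arange(n_bits)                           # ideal binary weighting
    channel_gains = ideal_gains * rng.normal(1.0, 0.05, n_bits)      # geometry/multipath mismatch (assumed)

    ideal = odac_reconstruct(bits, ideal_gains)
    received = odac_reconstruct(bits, channel_gains)
    print(np.sqrt(np.mean((received - ideal) ** 2)))                 # error grows with gain mismatch
    ```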

  3. Automated detection and quantification of residual brain tumor using an interactive computer-aided detection scheme

    NASA Astrophysics Data System (ADS)

    Gaffney, Kevin P.; Aghaei, Faranak; Battiste, James; Zheng, Bin

    2017-03-01

    Detection of residual brain tumor is important to evaluate the efficacy of brain cancer surgery, determine the optimal strategy for further radiation therapy if needed, and assess the ultimate prognosis of patients. Brain MR is a commonly used imaging modality for this task. In order to distinguish between residual tumor and surgery-induced scar tissue, two sets of MRI scans are acquired pre- and post-gadolinium contrast injection. Residual tumors are enhanced only in the post-contrast injection images. However, subjectively reading and quantifying this type of brain MR image makes it difficult to detect real residual tumor regions and to measure the total volume of the residual tumor. In order to help solve this clinical difficulty, we developed and tested a new interactive computer-aided detection scheme, which consists of three consecutive image processing steps, namely: 1) segmentation of the intracranial region, 2) image registration and subtraction, and 3) tumor segmentation and refinement. The scheme also includes a specially designed and implemented graphical user interface (GUI) platform. When using this scheme, two sets of pre- and post-contrast injection images are first automatically processed to detect and quantify residual tumor volume. Then, a user can visually examine segmentation results and conveniently guide the scheme to correct any detection or segmentation errors if needed. The scheme has been repeatedly tested using five cases. Given the high performance and robustness observed in the testing results, the scheme is currently ready for conducting clinical studies and helping clinicians investigate the association between this quantitative image marker and the outcome of patients.

  4. A new method for weakening the combined effect of residual errors on multibeam bathymetric data

    NASA Astrophysics Data System (ADS)

    Zhao, Jianhu; Yan, Jun; Zhang, Hongmei; Zhang, Yuqing; Wang, Aixue

    2014-12-01

    Multibeam bathymetric systems (MBS) have been widely applied in marine surveying to provide high-resolution seabed topography. However, several factors degrade the precision of bathymetry, including sound velocity, vessel attitude, and the misalignment angle of the transducer. Although these factors are corrected strictly in bathymetric data processing, the final bathymetric result is still affected by their residual errors. In deep water, the result often cannot meet the requirements of high-precision seabed topography. The combined effect of these residual errors is systematic, and it is difficult to separate and weaken this effect using traditional single-error correction methods. Therefore, this paper puts forward a new method for weakening the effect of residual errors based on the frequency-spectrum characteristics of seabed topography and multibeam bathymetric data. The method involves four steps: separation of the low-frequency and high-frequency parts of the bathymetric data, reconstruction of the trend of the actual seabed topography, merging of the actual trend with the extracted microtopography, and accuracy evaluation. Experimental results show that the proposed method can weaken the combined effect of residual errors on multibeam bathymetric data and efficiently improve the accuracy of the final post-processing results. We suggest that the method be widely applied to MBS data processing in deep water.
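
    The following is a minimal sketch of the first step (frequency separation) on a synthetic along-track profile; the profile, window size, and frequencies are assumptions for illustration, and the trend reconstruction and merging steps of the method are not reproduced.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter1d

    # Hypothetical along-track profile: gentle trend + micro-topography + slowly varying residual error
    x = np.linspace(0.0, 10.0, 500)
    trend = 1000.0 + 2.0 * x                              # large-scale seabed trend
    micro = 0.5 * np.sin(2 * np.pi * x / 0.5)             # micro-topography (high frequency)
    residual_error = 2.0 * np.sin(2 * np.pi * x / 10.0)   # combined residual systematic error (low frequency)
    measured = trend + micro + residual_error

    # Step 1: split the data into low- and high-frequency parts with a wide moving average
    low = uniform_filter1d(measured, size=151, mode='nearest')
    high = measured - low                                 # extracted micro-topography

    # The residual error is confined almost entirely to the low-frequency part, which the
    # method then replaces by a reconstructed trend of the actual seabed before merging.
    print(np.std(residual_error), np.std(high - micro))
    ```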

  5. Identifying and reducing error in cluster-expansion approximations of protein energies.

    PubMed

    Hahn, Seungsoo; Ashenberg, Orr; Grigoryan, Gevorg; Keating, Amy E

    2010-12-01

    Protein design involves searching a vast space for sequences that are compatible with a defined structure. This can pose significant computational challenges. Cluster expansion is a technique that can accelerate the evaluation of protein energies by generating a simple functional relationship between sequence and energy. The method consists of several steps. First, for a given protein structure, a training set of sequences with known energies is generated. Next, this training set is used to expand energy as a function of clusters consisting of single residues, residue pairs, and higher order terms, if required. The accuracy of the sequence-based expansion is monitored and improved using cross-validation testing and iterative inclusion of additional clusters. As a trade-off for evaluation speed, the cluster-expansion approximation causes prediction errors, which can be reduced by including more training sequences, including higher order terms in the expansion, and/or reducing the sequence space described by the cluster expansion. This article analyzes the sources of error and introduces a method whereby accuracy can be improved by judiciously reducing the described sequence space. The method is applied to describe the sequence-stability relationship for several protein structures: coiled-coil dimers and trimers, a PDZ domain, and T4 lysozyme as examples with computationally derived energies, and SH3 domains in amphiphysin-1 and endophilin-1 as examples where the expanded pseudo-energies are obtained from experiments. Our open-source software package Cluster Expansion Version 1.0 allows users to expand their own energy function of interest and thereby apply cluster expansion to custom problems in protein design. © 2010 Wiley Periodicals, Inc.
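
    The following is a minimal sketch of fitting a cluster expansion with only single-residue (point) clusters by least squares; the training energies are synthesized from a hidden linear model, so the data, alphabet size, and feature construction are illustrative assumptions rather than the article's procedure or software.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical training set: 60 sequences over a 2-letter alphabet at 5 design positions
    n_train, n_pos = 60, 5
    train_seqs = rng.integers(0, 2, size=(n_train, n_pos))

    def features(seqs):
        """Constant (empty-cluster) term plus point-cluster indicators; pair clusters could
        be appended as products of two columns if a higher-order expansion were needed."""
        return np.hstack([np.ones((len(seqs), 1)), seqs.astype(float)])

    # Training energies synthesized from a hidden linear model plus noise (assumption)
    hidden_w = rng.normal(0.0, 1.0, n_pos + 1)
    train_E = features(train_seqs) @ hidden_w + rng.normal(0.0, 0.1, n_train)

    # Fit the cluster-expansion coefficients by least squares
    coeffs, *_ = np.linalg.lstsq(features(train_seqs), train_E, rcond=None)

    # Cross-validation-style check on held-out sequences
    test_seqs = rng.integers(0, 2, size=(20, n_pos))
    test_E = features(test_seqs) @ hidden_w + rng.normal(0.0, 0.1, 20)
    rmse = np.sqrt(np.mean((features(test_seqs) @ coeffs - test_E) ** 2))
    print(rmse)
    ```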

  6. On the role of covariance information for GRACE K-band observations in the Celestial Mechanics Approach

    NASA Astrophysics Data System (ADS)

    Bentel, Katrin; Meyer, Ulrich; Arnold, Daniel; Jean, Yoomin; Jäggi, Adrian

    2017-04-01

    The Astronomical Institute at the University of Bern (AIUB) derives static and time-variable gravity fields by means of the Celestial Mechanics Approach (CMA) from GRACE (level 1B) data. This approach makes use of the close link between orbit and gravity field determination. GPS-derived kinematic GRACE orbit positions, inter-satellite K-band observations, which are the core observations of GRACE, and accelerometer data are combined to rigorously estimate orbit and spherical harmonic gravity field coefficients in one adjustment step. Pseudo-stochastic orbit parameters are set up to absorb unmodeled noise. The K-band range measurements in along-track direction lead to a much higher correlation of the observations in this direction compared to the other directions and thus, to north-south stripes in the unconstrained gravity field solutions, so-called correlated errors. By using a full covariance matrix for the K-band observations the correlation can be taken into account. One possibility is to derive correlation information from post-processing K-band residuals. This is then used in a second iteration step to derive an improved gravity field solution. We study the effects of pre-defined covariance matrices and residual-derived covariance matrices on the final gravity field product with the CMA.

  7. Highly correlated configuration interaction calculations on water with large orbital bases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Almora-Díaz, César X., E-mail: xalmora@fisica.unam.mx

    2014-05-14

    A priori selected configuration interaction (SCI) with truncation energy error [C. F. Bunge, J. Chem. Phys. 125, 014107 (2006)] and CI by parts [C. F. Bunge and R. Carbó-Dorca, J. Chem. Phys. 125, 014108 (2006)] are used to approximate the total nonrelativistic electronic ground state energy of water at fixed experimental geometry with CI up to sextuple excitations. Correlation-consistent polarized core-valence basis sets (cc-pCVnZ) up to sextuple zeta and augmented correlation-consistent polarized core-valence basis sets (aug-cc-pCVnZ) up to quintuple zeta quality are employed. Truncation energy errors range between less than 1 μhartree and 100 μhartree for the largest orbital set. Coupled cluster CCSD and CCSD(T) calculations are also obtained for comparison. Our best upper bound, −76.4343 hartree, obtained by SCI with up to sextuple excitations with a cc-pCV6Z basis recovers more than 98.8% of the correlation energy of the system, and it is only about 3 kcal/mol above the “experimental” value. Although the present energy upper bounds are far below all previous ones, comparatively large dispersion errors in the extrapolation of the energies to the complete basis set limit do not allow a reliable estimate of the full CI energy to be determined with an accuracy better than 0.6 mhartree (0.4 kcal/mol).

  8. Tolerance analysis of optical telescopes using coherent addition of wavefront errors

    NASA Technical Reports Server (NTRS)

    Davenport, J. W.

    1982-01-01

    A near diffraction-limited telescope requires that tolerance analysis be done on the basis of system wavefront error. One method of analyzing the wavefront error is to represent the wavefront error function in terms of its Zernike polynomial expansion. A Ramsey-Korsch ray trace package, a computer program that simulates the tracing of rays through an optical telescope system, was expanded to include the Zernike polynomial expansion up through the fifth-order spherical term. An option to produce a three-dimensional plot of the wavefront error function was also included in the Ramsey-Korsch package. Several simulation runs were analyzed to determine the particular set of coefficients in the Zernike expansion that are affected by various errors such as tilt, decenter and despace. A three-dimensional plot of each error up through the fifth-order spherical term was also included in the study. Tolerance analysis data are presented.
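
    For illustration, the following is a minimal sketch of summing a few standard low-order Zernike terms into a wavefront error map and evaluating its RMS over the pupil; the coefficient values and the chosen subset of terms are assumptions and are unrelated to the Ramsey-Korsch package.

    ```python
    import numpy as np

    def wavefront_error(rho, theta, coeffs):
        """Sum a small subset of standard (unnormalized) Zernike terms."""
        terms = {
            'tilt_x':    rho * np.cos(theta),
            'tilt_y':    rho * np.sin(theta),
            'defocus':   2 * rho**2 - 1,
            'coma_x':    (3 * rho**3 - 2 * rho) * np.cos(theta),
            'spherical': 6 * rho**4 - 6 * rho**2 + 1,
        }
        return sum(coeffs.get(name, 0.0) * term for name, term in terms.items())

    # Evaluate on a unit-pupil grid (hypothetical coefficients, in waves)
    y, x = np.mgrid[-1:1:101j, -1:1:101j]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    w = wavefront_error(rho, theta, {'defocus': 0.1, 'coma_x': 0.05, 'spherical': 0.02})
    w[rho > 1] = np.nan                      # restrict to the pupil
    print(np.nanstd(w))                      # RMS wavefront error over the pupil
    ```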

  9. Recent advances in electronic structure theory and their influence on the accuracy of ab initio potential energy surfaces

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    Recent advances in electronic structure theory and the availability of high speed vector processors have substantially increased the accuracy of ab initio potential energy surfaces. The recently developed atomic natural orbital approach for basis set contraction has reduced both the basis set incompleteness and superposition errors in molecular calculations. Furthermore, full CI calculations can often be used to calibrate a CASSCF/MRCI approach that quantitatively accounts for the valence correlation energy. These computational advances also provide a vehicle for systematically improving the calculations and for estimating the residual error in the calculations. Calculations on selected diatomic and triatomic systems will be used to illustrate the accuracy that currently can be achieved for molecular systems. In particular, the F + H2 yields HF + H potential energy hypersurface is used to illustrate the impact of these computational advances on the calculation of potential energy surfaces.

  10. Recent advances in electronic structure theory and their influence on the accuracy of ab initio potential energy surfaces

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1988-01-01

    Recent advances in electronic structure theory and the availability of high speed vector processors have substantially increased the accuracy of ab initio potential energy surfaces. The recently developed atomic natural orbital approach for basis set contraction has reduced both the basis set incompleteness and superposition errors in molecular calculations. Furthermore, full CI calculations can often be used to calibrate a CASSCF/MRCI approach that quantitatively accounts for the valence correlation energy. These computational advances also provide a vehicle for systematically improving the calculations and for estimating the residual error in the calculations. Calculations on selected diatomic and triatomic systems will be used to illustrate the accuracy that currently can be achieved for molecular systems. In particular, the F+H2 yields HF+H potential energy hypersurface is used to illustrate the impact of these computational advances on the calculation of potential energy surfaces.

  11. Electrostatics of cysteine residues in proteins: Parameterization and validation of a simple model

    PubMed Central

    Salsbury, Freddie R.; Poole, Leslie B.; Fetrow, Jacquelyn S.

    2013-01-01

    One of the most popular and simple models for the calculation of pKas from a protein structure is the semi-macroscopic electrostatic model MEAD. This model requires empirical parameters for each residue to calculate pKas. Analysis of current, widely used empirical parameters for cysteine residues showed that they did not reproduce expected cysteine pKas; thus, we set out to identify parameters consistent with the CHARMM27 force field that capture both the behavior of typical cysteines in proteins and the behavior of cysteines which have perturbed pKas. The new parameters were validated in three ways: (1) calculation across a large set of typical cysteines in proteins (where the calculations are expected to reproduce expected ensemble behavior); (2) calculation across a set of perturbed cysteines in proteins (where the calculations are expected to reproduce the shifted ensemble behavior); and (3) comparison to experimentally determined pKa values (where the calculation should reproduce the pKa within experimental error). Both the general behavior of cysteines in proteins and the perturbed pKa in some proteins can be predicted reasonably well using the newly determined empirical parameters within the MEAD model for protein electrostatics. This study provides the first general analysis of the electrostatics of cysteines in proteins, with specific attention paid to capturing both the behavior of typical cysteines in a protein and the behavior of cysteines whose pKa should be shifted, and validation of force field parameters for cysteine residues. PMID:22777874

  12. Orbit/attitude estimation with LANDSAT Landmark data

    NASA Technical Reports Server (NTRS)

    Hall, D. L.; Waligora, S.

    1979-01-01

    The use of LANDSAT landmark data for orbit/attitude and camera bias estimation was studied. The preliminary results of these investigations are presented. The Goddard Trajectory Determination System (GTDS) error analysis capability was used to perform error analysis studies. A number of questions were addressed, including parameter observability and sensitivity, and the effects on the solve-for parameter errors of data span, density, and distribution, and a priori covariance weighting. The use of the GTDS differential correction capability with actual landmark data was examined. The rms line and element observation residuals were studied as a function of the solve-for parameter set, a priori covariance weighting, force model, attitude model and data characteristics. Sample results are presented. Finally, verification and preliminary system evaluation of the LANDSAT NAVPAK system for sequential (extended Kalman filter) estimation of orbit and camera bias parameters is given.

  13. Revision of earthquake hypocentre locations in global bulletin data sets using source-specific station terms

    NASA Astrophysics Data System (ADS)

    Nooshiri, Nima; Saul, Joachim; Heimann, Sebastian; Tilmann, Frederik; Dahm, Torsten

    2017-02-01

    Global earthquake locations are often associated with very large systematic travel-time residuals even for clear arrivals, especially for regional and near-regional stations in subduction zones because of their strongly heterogeneous velocity structure. Travel-time corrections can drastically reduce travel-time residuals at regional stations and, in consequence, improve the relative location accuracy. We have extended the shrinking-box source-specific station terms technique to regional and teleseismic distances and adopted the algorithm for probabilistic, nonlinear, global-search location. We evaluated the potential of the method to compute precise relative hypocentre locations on a global scale. The method has been applied to two specific test regions using existing P- and pP-phase picks. The first data set consists of 3103 events along the Chilean margin and the second one comprises 1680 earthquakes in the Tonga-Fiji subduction zone. Pick data were obtained from the GEOFON earthquake bulletin, produced using data from all available, global station networks. A set of timing corrections varying as a function of source position was calculated for each seismic station. In this way, we could correct the systematic errors introduced into the locations by the inaccuracies in the assumed velocity structure without explicitly solving for a velocity model. Residual statistics show that the median absolute deviation of the travel-time residuals is reduced by 40-60 per cent at regional distances, where the velocity anomalies are strong. Moreover, the spread of the travel-time residuals decreased by ˜20 per cent at teleseismic distances (>28°). Furthermore, strong variations in initial residuals as a function of recording distance are smoothed out in the final residuals. The relocated catalogues exhibit less scattered locations in depth and sharper images of the seismicity associated with the subducting slabs. Comparison with a high-resolution local catalogue reveals that our relocation process significantly improves the hypocentre locations compared to standard locations.
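
    The following is a minimal sketch of the core idea of source-specific station terms, computing a median residual per (station, source-region) pair and subtracting it; the picks are synthetic, and the actual shrinking-box algorithm additionally iterates between term estimation, distance weighting, and relocation, which is not reproduced here.

    ```python
    import numpy as np

    def station_terms(residuals, station_ids, source_groups):
        """Median travel-time residual for each (station, source-region) pair."""
        terms = {}
        for key in set(zip(station_ids, source_groups)):
            mask = (station_ids == key[0]) & (source_groups == key[1])
            terms[key] = np.median(residuals[mask])
        return terms

    # Hypothetical picks: 1000 residuals from 20 stations and 5 source regions
    rng = np.random.default_rng(2)
    sta = rng.integers(0, 20, 1000)
    grp = rng.integers(0, 5, 1000)
    res = rng.normal(0.0, 1.5, 1000) + 0.3 * sta * (grp == 1)   # one region with station-dependent bias

    terms = station_terms(res, sta, grp)
    corrected = res - np.array([terms[(s, g)] for s, g in zip(sta, grp)])
    print(np.median(np.abs(res)), np.median(np.abs(corrected)))  # MAD before/after correction
    ```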

  14. Towards Automated Structure-Based NMR Resonance Assignment

    NASA Astrophysics Data System (ADS)

    Jang, Richard; Gao, Xin; Li, Ming

    We propose a general framework for solving the structure-based NMR backbone resonance assignment problem. The core is a novel 0-1 integer programming model that can start from a complete or partial assignment, generate multiple assignments, and model not only the assignment of spins to residues, but also pairwise dependencies, i.e., the assignment of pairs of spins to pairs of residues. It is still a challenge for automated resonance assignment systems to perform the assignment directly from spectra without any manual intervention. To test the feasibility of this for structure-based assignment, we integrated our system with our automated peak picking and sequence-based resonance assignment system to obtain an assignment for the protein TM1112 with 91% recall and 99% precision without manual intervention. Since using a known structure has the potential to allow one to use only N-labeled NMR data and avoid the added expense of using C-labeled data, we work towards the goal of automated structure-based assignment using only such labeled data. Our system reduced the assignment error of Xiong-Pandurangan-Bailey-Kellogg's contact replacement (CR) method, which to our knowledge is the most error-tolerant method for this problem, fivefold on average. By using an iterative algorithm, our system has the added capability of using the NOESY data to correct assignment errors due to errors in predicting the amino acid and secondary structure type of each spin system. On a publicly available data set for Ubiquitin, where the type prediction accuracy is 83%, we achieved 91% assignment accuracy, compared to the 59% accuracy that was obtained without correcting for typing errors.

  15. Design Considerations of Polishing Lap for Computer-Controlled Cylindrical Polishing Process

    NASA Technical Reports Server (NTRS)

    Khan, Gufran S.; Gubarev, Mikhail; Arnold, William; Ramsey, Brian D.

    2009-01-01

    This paper establishes a relationship between the polishing process parameters and the generation of mid spatial-frequency error. Considerations for the polishing lap design and for optimization of the process parameters (speeds, stroke, etc.) to keep the residual mid spatial-frequency error to a minimum are also presented.

  16. Multilaboratory trial for determination of ceftiofur residues in bovine and swine kidney and muscle, and bovine milk.

    PubMed

    Hornish, Rex E; Hamlow, Philip J; Brown, Scott A

    2003-01-01

    A multilaboratory trial for determining ceftiofur-related residues in bovine and swine kidney and muscle, and bovine milk was conducted following regulatory guidelines of the U.S. Food and Drug Administration, Center for Veterinary Medicine. The methods convert all desfuroylceftiofur-related residues containing the intact beta-lactam ring to desfuroylceftiofur acetamide to establish ceftiofur residues in tissues. Four laboratories analyzed 5 sets of samples for each tissue. Each sample set consisted of a control/blank sample and 3 control samples fortified with ceftiofur at 0.5 Rm, Rm, and 2 Rm, respectively, where Rm is the U.S. tolerance assigned for ceftiofur residue in each tissue/matrix: 0.100 microg/mL for milk, 8.0 microg/g for kidney (both species), 1.0 microg/g for bovine muscle, and 2.0 microg/g for swine muscle. Each sample set also contained 2 samples of incurred-residue tissues (one > Rm and one < Rm) from animals treated with ceftiofur hydrochloride. All laboratories completed the method trial after a familiarization phase and test of system suitability in which they demonstrated > 80% recovery in pretrial fortified test samples. Results showed that the methods met all acceptable performance criteria for recovery, accuracy, and precision. Although sample preparation was easy, solid-phase extraction cartridge performance must be carefully evaluated before samples are processed. The liquid chromatography detection system was easily set up; however, the elution profile may require slight modifications. The procedures could clearly differentiate between violative (> Rm) and nonviolative (< Rm) ceftiofur residues. Participating laboratories found the procedures suitable for ceftiofur residue determination.

  17. Error decomposition and estimation of inherent optical properties.

    PubMed

    Salama, Mhd Suhyb; Stein, Alfred

    2009-09-10

    We describe a methodology to quantify and separate the errors of inherent optical properties (IOPs) derived from ocean-color model inversion. Their total error is decomposed into three different sources, namely, model approximations and inversion, sensor noise, and atmospheric correction. Prior information on plausible ranges of observation, sensor noise, and inversion goodness-of-fit are employed to derive the posterior probability distribution of the IOPs. The relative contribution of each error component to the total error budget of the IOPs, all being of stochastic nature, is then quantified. The method is validated with the International Ocean Colour Coordinating Group (IOCCG) data set and the NASA bio-Optical Marine Algorithm Data set (NOMAD). The derived errors are close to the known values with correlation coefficients of 60-90% and 67-90% for IOCCG and NOMAD data sets, respectively. Model-induced errors inherent to the derived IOPs are between 10% and 57% of the total error, whereas atmospheric-induced errors are in general above 43% and up to 90% for both data sets. The proposed method is applied to synthesized and in situ measured populations of IOPs. The mean relative errors of the derived values are between 2% and 20%. A specific error table to the Medium Resolution Imaging Spectrometer (MERIS) sensor is constructed. It serves as a benchmark to evaluate the performance of the atmospheric correction method and to compute atmospheric-induced errors. Our method has a better performance and is more appropriate to estimate actual errors of ocean-color derived products than the previously suggested methods. Moreover, it is generic and can be applied to quantify the error of any derived biogeophysical parameter regardless of the used derivation.

  18. On the deterministic and stochastic use of hydrologic models

    USGS Publications Warehouse

    Farmer, William H.; Vogel, Richard M.

    2016-01-01

    Environmental simulation models, such as precipitation-runoff watershed models, are increasingly used in a deterministic manner for environmental and water resources design, planning, and management. In operational hydrology, simulated responses are now routinely used to plan, design, and manage a very wide class of water resource systems. However, all such models are calibrated to existing data sets and retain some residual error. This residual, typically unknown in practice, is often ignored, implicitly trusting simulated responses as if they are deterministic quantities. In general, ignoring the residuals will result in simulated responses with distributional properties that do not mimic those of the observed responses. This discrepancy has major implications for the operational use of environmental simulation models as is shown here. Both a simple linear model and a distributed-parameter precipitation-runoff model are used to document the expected bias in the distributional properties of simulated responses when the residuals are ignored. The systematic reintroduction of residuals into simulated responses in a manner that produces stochastic output is shown to improve the distributional properties of the simulated responses. Every effort should be made to understand the distributional behavior of simulation residuals and to use environmental simulation models in a stochastic manner.
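
    The following is a minimal sketch of reintroducing calibration residuals into a deterministic simulation by resampling, producing an ensemble of stochastic outputs; the data are hypothetical, and simple i.i.d. resampling ignores residual autocorrelation and heteroscedasticity, which a practical implementation would need to address.

    ```python
    import numpy as np

    def stochastic_simulations(simulated, residuals, n_realizations=100, seed=0):
        """Add resampled calibration-period residuals to a deterministic simulation."""
        rng = np.random.default_rng(seed)
        draws = rng.choice(residuals, size=(n_realizations, len(simulated)), replace=True)
        return simulated[None, :] + draws

    # Hypothetical observed and simulated responses (e.g. daily streamflow)
    obs = np.array([10.2, 12.5, 9.8, 15.1, 11.3, 13.0])
    sim = np.array([11.0, 11.8, 10.5, 14.0, 12.1, 12.4])

    ensemble = stochastic_simulations(sim, obs - sim)
    print(ensemble.mean(axis=0), ensemble.std(axis=0))   # ensemble spread reflects residual error
    ```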

  19. Error floor behavior study of LDPC codes for concatenated codes design

    NASA Astrophysics Data System (ADS)

    Chen, Weigang; Yin, Liuguo; Lu, Jianhua

    2007-11-01

    Error floor behavior of low-density parity-check (LDPC) codes using quantized decoding algorithms is statistically studied with experimental results on a hardware evaluation platform. The results present the distribution of the residual errors after decoding failure and reveal that the number of residual error bits in a codeword is usually very small using quantized sum-product (SP) algorithm. Therefore, LDPC code may serve as the inner code in a concatenated coding system with a high code rate outer code and thus an ultra low error floor can be achieved. This conclusion is also verified by the experimental results.

  20. Coding for reliable satellite communications

    NASA Technical Reports Server (NTRS)

    Gaarder, N. T.; Lin, S.

    1986-01-01

    This research project was set up to study various kinds of coding techniques for error control in satellite and space communications for NASA Goddard Space Flight Center. During the project period, researchers investigated the following areas: (1) decoding of Reed-Solomon codes in terms of dual basis; (2) concatenated and cascaded error control coding schemes for satellite and space communications; (3) use of hybrid coding schemes (error correction and detection incorporated with retransmission) to improve system reliability and throughput in satellite communications; (4) good codes for simultaneous error correction and error detection, and (5) error control techniques for ring and star networks.

  1. Two States Mapping Based Time Series Neural Network Model for Compensation Prediction Residual Error

    NASA Astrophysics Data System (ADS)

    Jung, Insung; Koo, Lockjo; Wang, Gi-Nam

    2008-11-01

    The objective of this paper was to design a human bio-signal data prediction system that decreases prediction error using a two-states-mapping-based time series neural network BP (back-propagation) model. Neural network models trained in a supervised manner with the error back-propagation algorithm have been widely applied in industry for time series prediction. However, such models still leave a residual error between the real value and the prediction result. Therefore, we designed a two-state neural network model that compensates for this residual error, which may be used in the prevention of sudden death and metabolic syndrome diseases such as hypertension and obesity. Most of the simulation cases were handled satisfactorily by the two-states-mapping-based time series prediction model; in particular, for small-sample time series it was more accurate than the standard MLP model.
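
    The following is a minimal sketch of the general residual-compensation idea, training a second network on the residuals of a primary predictor; the synthetic signal, window length, and network sizes are assumptions, and this generic two-stage scheme is not the paper's exact two-states mapping model.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(3)

    # Hypothetical bio-signal time series turned into lagged samples (window of 5 past values)
    t = np.arange(600)
    signal = np.sin(0.05 * t) + 0.1 * rng.normal(size=t.size)
    window = 5
    X = np.array([signal[i:i + window] for i in range(len(signal) - window)])
    y = signal[window:]
    X_train, X_test, y_train, y_test = X[:400], X[400:], y[:400], y[400:]

    # Stage 1: primary back-propagation model predicts the next value
    primary = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    primary.fit(X_train, y_train)

    # Stage 2: a second network is trained on the residuals of the first to compensate them
    resid_model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    resid_model.fit(X_train, y_train - primary.predict(X_train))

    compensated = primary.predict(X_test) + resid_model.predict(X_test)
    print(np.mean((y_test - primary.predict(X_test)) ** 2),   # error without compensation
          np.mean((y_test - compensated) ** 2))               # error with compensation
    ```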

  2. Critical methodological factors in diagnosing minimal residual disease in hematological malignancies using quantitative PCR.

    PubMed

    Nyvold, Charlotte Guldborg

    2015-05-01

    Hematological malignancies are a heterogeneous group of cancers with respect to both presentation and prognosis, and many subtypes are nowadays associated with aberrations that make up excellent molecular targets for the quantification of minimal residual disease. The quantitative PCR methodology is outstanding in terms of sensitivity, specificity and reproducibility and thus an excellent choice for minimal residual disease assessment. However, the methodology still has pitfalls that should be carefully considered when the technique is integrated in a clinical setting.

  3. Weighted triangulation adjustment

    USGS Publications Warehouse

    Anderson, Walter L.

    1969-01-01

    The variation of coordinates method is employed to perform a weighted least squares adjustment of horizontal survey networks. Geodetic coordinates are required for each fixed and adjustable station. A preliminary inverse geodetic position computation is made for each observed line. Weights associated with each observation equation for direction, azimuth, and distance are applied in the formation of the normal equations in the least-squares adjustment. The number of normal equations that may be solved is twice the number of new stations and less than 150. When the normal equations are solved, shifts are produced at adjustable stations. Previously computed correction factors are applied to the shifts and a most probable geodetic position is found for each adjustable station. Final azimuths and distances are computed. These may be written onto magnetic tape for subsequent computation of state plane or grid coordinates. Input consists of punch cards containing project identification, program options, and position and observation information. Results listed include preliminary and final positions, residuals, observation equations, solution of the normal equations showing magnitudes of shifts, and a plot of each adjusted and fixed station. During processing, data sets containing irrecoverable errors are rejected and the type of error is listed. The computer resumes processing of additional data sets. Other conditions cause warning-errors to be issued, and processing continues with the current data set.
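
    The following is a minimal sketch of solving the weighted normal equations of such an adjustment; the design matrix, misclosures, and weights are toy numbers, and real survey adjustments involve linearized direction/azimuth/distance observation equations and iteration, which are not shown.

    ```python
    import numpy as np

    def weighted_adjustment(A, misclosures, weights):
        """Solve the weighted normal equations N x = u of a least-squares adjustment.

        A           : design matrix of the observation equations
        misclosures : observed-minus-computed values
        weights     : a priori weight assigned to each observation
        """
        W = np.diag(weights)
        N = A.T @ W @ A                        # normal-equation matrix
        u = A.T @ W @ misclosures
        shifts = np.linalg.solve(N, u)         # coordinate shifts at adjustable stations
        residuals = misclosures - A @ shifts   # post-adjustment residuals
        return shifts, residuals

    # Toy network: 4 observations constraining 2 coordinate shifts (hypothetical numbers)
    A = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7], [1.0, -1.0]])
    l = np.array([0.012, -0.008, 0.004, 0.021])
    p = np.array([4.0, 4.0, 1.0, 2.0])
    print(weighted_adjustment(A, l, p))
    ```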

  4. Protein backbone and sidechain torsion angles predicted from NMR chemical shifts using artificial neural networks

    PubMed Central

    Shen, Yang; Bax, Ad

    2013-01-01

    A new program, TALOS-N, is introduced for predicting protein backbone torsion angles from NMR chemical shifts. The program relies far more extensively on the use of trained artificial neural networks than its predecessor, TALOS+. Validation on an independent set of proteins indicates that backbone torsion angles can be predicted for a larger, ≥ 90% fraction of the residues, with an error rate smaller than ca 3.5%, using an acceptance criterion that is nearly two-fold tighter than that used previously, and a root mean square difference between predicted and crystallographically observed (φ,ψ) torsion angles of ca 12°. TALOS-N also reports sidechain χ1 rotameric states for about 50% of the residues, and a consistency with reference structures of 89%. The program includes a neural network trained to identify secondary structure from residue sequence and chemical shifts. PMID:23728592

  5. Liquid chromatography-tandem mass spectrometry multiresidue method for the analysis of quaternary ammonium compounds in cheese and milk products: Development and validation using the total error approach.

    PubMed

    Slimani, Kahina; Féret, Aurélie; Pirotais, Yvette; Maris, Pierre; Abjean, Jean-Pierre; Hurtaud-Pessel, Dominique

    2017-09-29

    Quaternary ammonium compounds (QACs) are both cationic surfactants and biocidal substances widely used as disinfectants in the food industry. A sensitive and reliable method for the analysis of benzalkonium chlorides (BACs) and dialkyldimethylammonium chlorides (DDACs) has been developed that enables the simultaneous quantitative determination of ten quaternary ammonium residues in dairy products below the provisional maximum residue level (MRL), set at 0.1 mg kg⁻¹. To the best of our knowledge, this could be the first method applicable to milk and to the three major processed milk products selected, namely processed or hard-pressed cheeses and whole milk powder. The method comprises solvent extraction using a mixture of acetonitrile and ethyl acetate, without any further clean-up. Analyses were performed by liquid chromatography coupled with electrospray tandem mass spectrometry detection (LC-ESI-MS/MS) operating in positive mode. A C18 analytical column was used for chromatographic separation, with a mobile phase composed of acetonitrile and water, both containing 0.3% formic acid, and methanol in gradient mode. Five deuterated internal standards were added to obtain the most accurate quantification. Extraction recoveries were satisfactory and no matrix effects were observed. The method was validated using the total error approach in accordance with the NF V03-110 standard in order to characterize the trueness, repeatability, intermediate precision and analytical limits within the range of 5-150 μg kg⁻¹ for all matrices. These performance criteria, calculated by e.noval® 3.0 software, were satisfactory and in full accordance with the proposed provisional MRL and with the recommendations in the European Union SANTE/11945/2015 regulatory guidelines. The limit of detection (LOD) was low (<1.9 μg kg⁻¹) and the limit of quantification (LOQ) ranged from 5 μg kg⁻¹ to 35 μg kg⁻¹ for all matrices depending on the analytes. The validation results proved that the method is suitable for quantifying quaternary ammoniums in foodstuffs from dairy industries at residue levels, and could be used for biocide residue monitoring plans and to measure consumer exposure to biocide products. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Dosimetric Implications of Residual Tracking Errors During Robotic SBRT of Liver Metastases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chan, Mark; Tuen Mun Hospital, Hong Kong; Grehn, Melanie

    Purpose: Although the metric precision of robotic stereotactic body radiation therapy in the presence of breathing motion is widely known, we investigated the dosimetric implications of breathing phase–related residual tracking errors. Methods and Materials: In 24 patients (28 liver metastases) treated with the CyberKnife, we recorded the residual correlation, prediction, and rotational tracking errors from 90 fractions and binned them into 10 breathing phases. The average breathing phase errors were used to shift and rotate the clinical tumor volume (CTV) and planning target volume (PTV) for each phase to calculate a pseudo 4-dimensional error dose distribution for comparison with the original planned dose distribution. Results: The median systematic directional correlation, prediction, and absolute aggregate rotation errors were 0.3 mm (range, 0.1-1.3 mm), 0.01 mm (range, 0.00-0.05 mm), and 1.5° (range, 0.4°-2.7°), respectively. Dosimetrically, 44%, 81%, and 92% of all voxels differed by less than 1%, 3%, and 5% of the planned local dose, respectively. The median coverage reduction for the PTV was 1.1% (range in coverage difference, −7.8% to +0.8%), significantly depending on correlation (P=.026) and rotational (P=.005) error. With a 3-mm PTV margin, the median coverage change for the CTV was 0.0% (range, −1.0% to +5.4%), not significantly depending on any investigated parameter. In 42% of patients, the 3-mm margin did not fully compensate for the residual tracking errors, resulting in a CTV coverage reduction of 0.1% to 1.0%. Conclusions: For liver tumors treated with robotic stereotactic body radiation therapy, a safety margin of 3 mm is not always sufficient to cover all residual tracking errors. Dosimetrically, this translates into only small CTV coverage reductions.

  7. COMPARISON OF LAPAROSCOPIC SKILLS PERFORMANCE USING SINGLE-SITE ACCESS (SSA) DEVICES VS. AN INDEPENDENT-PORT SSA APPROACH

    PubMed Central

    Schill, Matthew R.; Varela, J. Esteban; Frisella, Margaret M.; Brunt, L. Michael

    2015-01-01

    Background We compared performance of validated laparoscopic tasks on four commercially available single-site access (SSA) devices (AD) versus an independent port (IP) SSA set-up. Methods A prospective, randomized comparison of laparoscopic skills performance on four AD (GelPOINT™, SILS™ Port, SSL Access System™, TriPort™) and one IP SSA set-up was conducted. Eighteen medical students (2nd–4th year), four surgical residents, and five attending surgeons were trained to proficiency in multi-port laparoscopy using four laparoscopic drills (peg transfer, bean drop, pattern cutting, extracorporeal suturing) in a laparoscopic trainer box. Drills were then performed in random order on each IP-SSA and AD-SSA set-up using straight laparoscopic instruments. Repetitions were timed and errors recorded. Data are mean ± SD, and statistical analysis was by two-way ANOVA with Tukey HSD post-hoc tests. Results Attending surgeons had significantly faster total task times than residents or students (p < 0.001), but the difference between residents and students was NS. Pair-wise comparisons revealed significantly faster total task times for the IP-SSA set-up compared to all four AD-SSAs within the student group only (p < 0.05). Total task times for residents and attending surgeons showed a similar profile, but the differences were NS. When data for the three groups were combined, the total task time was less for the IP-SSA set-up than for each of the four AD-SSA set-ups (p < 0.001). Similarly, the IP-SSA set-up was significantly faster than 3 of 4 AD-SSA set-ups for peg transfer, 3 of 4 for pattern cutting, and 2 of 4 for suturing. No significant differences in error rates between IP-SSA and AD-SSA set-ups were detected. Conclusions When compared to an IP-SSA laparoscopic set-up, single-site access devices are associated with longer task performance times in a trainer box model, independent of level of training. Task performance was similar across different SSA devices. PMID:21993938

  8. Performance optimization of a bendable parabolic cylinder collimating X-ray mirror for the ALS micro-XAS beamline 10.3.2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yashchuk, Valeriy V.; Morrison, Gregory Y.; Marcus, Matthew A.

    The Advanced Light Source (ALS) beamline (BL) 10.3.2 is an apparatus for X-ray microprobe spectroscopy and diffraction experiments, operating in the energy range 2.4–17 keV. The performance of the beamline, namely the spatial and energy resolutions of the measurements, depends significantly on the collimation quality of light incident on the monochromator. In the BL 10.3.2 end-station, the synchrotron source is imaged 1:1 onto a set of roll slits which form a virtual source. The light from this source is collimated in the vertical direction by a bendable parabolic cylinder mirror. Details are presented of the mirror design, which allows for precision assembly, alignment and shaping of the mirror, as well as for extending the mirror operating lifetime by a factor of ~10. Assembly, mirror optimal shaping and preliminary alignment were performed ex situ in the ALS X-ray Optics Laboratory (XROL). Using an original method for optimal ex situ characterization and setting of bendable X-ray optics developed at the XROL, a root-mean-square (RMS) residual surface slope error of 0.31 µrad with respect to the desired parabola, and an RMS residual height error of less than 3 nm were achieved. Once in place at the beamline, deviations from the designed optical geometry (e.g. due to the tolerances for setting the distance to the virtual source, the grazing incidence angle, the transverse position) and/or mirror shape (e.g. due to a heat load deformation) may appear. Due to the errors, on installation the energy spread from the monochromator is typically a few electron-volts. Here, a new technique developed and successfully implemented for at-wavelength (in situ) fine optimal tuning of the mirror, enabling us to reduce the collimation-induced energy spread to ~0.05 eV, is described.
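    The figures of merit quoted above, RMS residual slope and height errors with respect to the desired parabola, can be computed in a few lines. The 'measured' profile in this sketch is synthetic (a parabola plus a small ripple), not XROL metrology data.

```python
# Hedged sketch: residual slope/height errors of a synthetic mirror profile
# relative to its best-fit parabola.
import numpy as np

x = np.linspace(-0.1, 0.1, 2001)                             # tangential coordinate (m)
height = 2.5 * x**2 + 3e-9 * np.sin(2 * np.pi * x / 0.05)    # toy measured height profile (m)

coeff = np.polyfit(x, height, 2)                             # best-fit parabola
residual_height = height - np.polyval(coeff, x)
residual_slope = np.gradient(residual_height, x)             # rad

print(f"RMS residual height error: {residual_height.std() * 1e9:.1f} nm")
print(f"RMS residual slope error:  {residual_slope.std() * 1e6:.2f} urad")
```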

  9. Performance optimization of a bendable parabolic cylinder collimating X-ray mirror for the ALS micro-XAS beamline 10.3.2

    PubMed Central

    Yashchuk, Valeriy V.; Morrison, Gregory Y.; Marcus, Matthew A.; Domning, Edward E.; Merthe, Daniel J.; Salmassi, Farhad; Smith, Brian V.

    2015-01-01

    The Advanced Light Source (ALS) beamline (BL) 10.3.2 is an apparatus for X-ray microprobe spectroscopy and diffraction experiments, operating in the energy range 2.4–17 keV. The performance of the beamline, namely the spatial and energy resolutions of the measurements, depends significantly on the collimation quality of light incident on the monochromator. In the BL 10.3.2 end-station, the synchrotron source is imaged 1:1 onto a set of roll slits which form a virtual source. The light from this source is collimated in the vertical direction by a bendable parabolic cylinder mirror. Details are presented of the mirror design, which allows for precision assembly, alignment and shaping of the mirror, as well as for extending the mirror operating lifetime by a factor of ∼10. Assembly, mirror optimal shaping and preliminary alignment were performed ex situ in the ALS X-ray Optics Laboratory (XROL). Using an original method for optimal ex situ characterization and setting of bendable X-ray optics developed at the XROL, a root-mean-square (RMS) residual surface slope error of 0.31 µrad with respect to the desired parabola, and an RMS residual height error of less than 3 nm were achieved. Once in place at the beamline, deviations from the designed optical geometry (e.g. due to the tolerances for setting the distance to the virtual source, the grazing incidence angle, the transverse position) and/or mirror shape (e.g. due to a heat load deformation) may appear. Due to the errors, on installation the energy spread from the monochromator is typically a few electron-volts. Here, a new technique developed and successfully implemented for at-wavelength (in situ) fine optimal tuning of the mirror, enabling us to reduce the collimation-induced energy spread to ∼0.05 eV, is described. PMID:25931083

  10. Performance optimization of a bendable parabolic cylinder collimating X-ray mirror for the ALS micro-XAS beamline 10.3.2

    DOE PAGES

    Yashchuk, Valeriy V.; Morrison, Gregory Y.; Marcus, Matthew A.; ...

    2015-04-08

    The Advanced Light Source (ALS) beamline (BL) 10.3.2 is an apparatus for X-ray microprobe spectroscopy and diffraction experiments, operating in the energy range 2.4–17 keV. The performance of the beamline, namely the spatial and energy resolutions of the measurements, depends significantly on the collimation quality of light incident on the monochromator. In the BL 10.3.2 end-station, the synchrotron source is imaged 1:1 onto a set of roll slits which form a virtual source. The light from this source is collimated in the vertical direction by a bendable parabolic cylinder mirror. Details are presented of the mirror design, which allows for precision assembly, alignment and shaping of the mirror, as well as for extending the mirror operating lifetime by a factor of ~10. Assembly, mirror optimal shaping and preliminary alignment were performed ex situ in the ALS X-ray Optics Laboratory (XROL). Using an original method for optimal ex situ characterization and setting of bendable X-ray optics developed at the XROL, a root-mean-square (RMS) residual surface slope error of 0.31 µrad with respect to the desired parabola, and an RMS residual height error of less than 3 nm were achieved. Once in place at the beamline, deviations from the designed optical geometry (e.g. due to the tolerances for setting the distance to the virtual source, the grazing incidence angle, the transverse position) and/or mirror shape (e.g. due to a heat load deformation) may appear. Due to the errors, on installation the energy spread from the monochromator is typically a few electron-volts. Here, a new technique developed and successfully implemented for at-wavelength (in situ) fine optimal tuning of the mirror, enabling us to reduce the collimation-induced energy spread to ~0.05 eV, is described.

  11. Optimum data weighting and error calibration for estimation of gravitational parameters

    NASA Technical Reports Server (NTRS)

    Lerch, Francis J.

    1989-01-01

    A new technique was developed for weighting data from satellite tracking systems in order to obtain an optimum least-squares solution and an error calibration for the solution parameters. Data sets from optical, electronic, and laser systems on 17 satellites in the Goddard Earth Model-T1 (GEM-T1) were employed to apply this technique to gravity field parameters. GEM-T2 (31 satellites) was also recently computed as a direct application of the method and is summarized. The method adjusts the data weights so that subset solutions of the data agree with the complete solution within their error estimates. With the adjusted weights, the process provides an automatic calibration of the error estimates for the solution parameters. The derived data weights are generally much smaller than corresponding weights obtained from nominal values of observation accuracy or residuals. Independent tests show significant improvement for solutions with optimal weighting. The technique is general and may be applied to orbit parameters, station coordinates, or parameters other than the gravity model.
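    A much-simplified illustration of the weight-calibration idea, not Lerch's actual subset-solution algorithm: per-tracking-system weights are rescaled until each system's weighted residuals have unit variance, so that the formal error estimates become realistic. The systems, sigmas and residuals below are invented.

```python
# Hedged sketch: iterate per-system data weights until chi-square per degree of
# freedom is ~1 for each system (a variance-component style calibration).
import numpy as np

rng = np.random.default_rng(2)
true_sigma = {"optical": 2.0, "electronic": 0.5, "laser": 0.05}   # hypothetical accuracies
residuals = {k: rng.normal(0.0, s, 500) for k, s in true_sigma.items()}

weights = {k: 1.0 for k in residuals}          # start from nominal unit weights
for _ in range(10):
    for k, r in residuals.items():
        chi2_per_dof = weights[k] * np.mean(r**2)
        weights[k] /= chi2_per_dof             # rescale until chi2/dof ~ 1

for k, w in weights.items():
    print(f"{k:10s} calibrated sigma = {1 / np.sqrt(w):.3f}  (true {true_sigma[k]})")
```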

  12. Ozone Profile Retrievals from the OMPS on Suomi NPP

    NASA Astrophysics Data System (ADS)

    Bak, J.; Liu, X.; Kim, J. H.; Haffner, D. P.; Chance, K.; Yang, K.; Sun, K.; Gonzalez Abad, G.

    2017-12-01

    We verify and correct the Ozone Mapping and Profiler Suite (OMPS) Nadir Mapper (NM) L1B v2.0 data with the aim of producing accurate ozone profile retrievals using an optimal estimation based inversion method in the 302.5-340 nm fitting window. The evaluation of available slit functions demonstrates that preflight-measured slit functions represent OMPS measurements better than derived Gaussian slit functions. Our OMPS fitting residuals contain significant wavelength- and cross-track-dependent biases, and consequently serious cross-track striping errors are found in preliminary retrievals, especially in the troposphere. To eliminate the systematic component of the fitting residuals, we apply a "soft calibration" to OMPS radiances. With the soft calibration the amplitude of fitting residuals decreases from 1% to 0.2% over low/mid latitudes, and thereby the consistency of tropospheric ozone retrievals between OMPS and the Ozone Monitoring Instrument (OMI) is substantially improved. A common mode correction is implemented for additional radiometric calibration, which improves retrievals especially at high latitudes, where the amplitude of fitting residuals decreases by a factor of 2. We estimate the floor noise error of OMPS measurements from standard deviations of the fitting residuals. The derived error in the Huggins band (about 0.1%) is 2 times smaller than the OMI floor noise error and 2 times larger than the OMPS L1B measurement error. The OMPS floor noise errors better constrain our retrievals, maximizing measurement information and stabilizing our fitting residuals. The final precision of the fitting residuals is less than 0.1% in the low/mid latitudes, with about 1 degree of freedom for signal for tropospheric ozone, so that we meet the general requirements for successful tropospheric ozone retrievals. To assess whether the quality of OMPS ozone retrievals is acceptable for scientific use, we will characterize OMPS ozone profile retrievals, present error analysis, and validate retrievals using a reference dataset. The useful information on the vertical distribution of ozone from OMPS NM measurements alone is limited to below 40 km due to the absence of the Hartley ozone wavelengths. This shortcoming will be improved with the joint ozone profile retrieval using Nadir Profiler (NP) measurements covering the 250 to 310 nm range.
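    A hedged sketch of what such a "soft calibration" amounts to: the mean (systematic) component of the spectral fitting residuals, per cross-track position and wavelength, is removed from the measured radiances. The arrays below are synthetic stand-ins, not OMPS L1B data or the actual correction tables.

```python
# Hedged sketch: derive and apply a soft-calibration table from mean fitting residuals.
import numpy as np

rng = np.random.default_rng(3)
n_orbit, n_xtrack, n_wave = 200, 35, 120
residual = 0.002 * rng.standard_normal((n_orbit, n_xtrack, n_wave))     # random part
residual += 0.01 * np.sin(np.linspace(0, 6, n_wave))                    # systematic wavelength-dependent bias

soft_cal = residual.mean(axis=0)                  # mean residual per (cross-track, wavelength)
radiance = np.ones((n_orbit, n_xtrack, n_wave))   # placeholder normalized radiances
radiance_corrected = radiance * (1.0 - soft_cal)  # apply as a relative correction

print(f"mean |residual| before: {np.abs(residual).mean():.4f}  "
      f"after: {np.abs(residual - soft_cal).mean():.4f}")
```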

  13. Evaluating flow cytometer performance with weighted quadratic least squares analysis of LED and multi-level bead data.

    PubMed

    Parks, David R; El Khettabi, Faysal; Chase, Eric; Hoffman, Robert A; Perfetto, Stephen P; Spidlen, Josef; Wood, James C S; Moore, Wayne A; Brinkman, Ryan R

    2017-03-01

    We developed a fully automated procedure for analyzing data from LED pulses and multilevel bead sets to evaluate backgrounds and photoelectron scales of cytometer fluorescence channels. The method improves on previous formulations by fitting a full quadratic model with appropriate weighting and by providing standard errors and peak residuals as well as the fitted parameters themselves. Here we describe the details of the methods and procedures involved and present a set of illustrations and test cases that demonstrate the consistency and reliability of the results. The automated analysis and fitting procedure is generally quite successful in providing good estimates of the Spe (statistical photoelectron) scales and backgrounds for all the fluorescence channels on instruments with good linearity. The precision of the results obtained from LED data is almost always better than that from multilevel bead data, but the bead procedure is easy to carry out and provides results good enough for most purposes. Including standard errors on the fitted parameters is important for understanding the uncertainty in the values of interest. The weighted residuals give information about how well the data fit the model, and particularly high residuals indicate bad data points. Known photoelectron scales and measurement channel backgrounds make it possible to estimate the precision of measurements at different signal levels and the effects of compensated spectral overlap on measurement quality. Combining this information with measurements of standard samples carrying dyes of biological interest, we can make accurate comparisons of dye sensitivity among different instruments. Our method is freely available through the R/Bioconductor package flowQB. © 2017 International Society for Advancement of Cytometry.
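    Under a simple photon-statistics model, the fitted quadratic relates peak variance to peak mean: the intercept estimates the background variance, the linear coefficient the channel units per statistical photoelectron (the Spe scale), and the quadratic term a CV-like contribution. The sketch below illustrates such a weighted quadratic fit on invented bead-peak statistics; it is not the flowQB implementation.

```python
# Hedged sketch: weighted quadratic fit of variance vs. mean for a multi-level bead set.
import numpy as np

mean = np.array([50.0, 200.0, 800.0, 3000.0, 12000.0, 48000.0])   # bead peak means (channel units)
var = 400.0 + 50.0 * mean + (0.01 * mean) ** 2                     # synthetic peak variances

c2, c1, c0 = np.polyfit(mean, var, 2, w=1.0 / np.sqrt(var))        # ~inverse-variance weighting
print(f"background variance ~ {c0:.0f} channel-units^2")
print(f"Spe scale ~ {c1:.1f} channel units per statistical photoelectron")
print(f"quadratic (CV-like) coefficient ~ {np.sqrt(c2) * 100:.1f} %")
```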

  14. Selective Weighted Least Squares Method for Fourier Transform Infrared Quantitative Analysis.

    PubMed

    Wang, Xin; Li, Yan; Wei, Haoyun; Chen, Xia

    2017-06-01

    Classical least squares (CLS) regression is a popular multivariate statistical method used frequently for quantitative analysis using Fourier transform infrared (FT-IR) spectrometry. Classical least squares provides the best unbiased estimator for uncorrelated residual errors with zero mean and equal variance. However, the noise in FT-IR spectra, which accounts for a large portion of the residual errors, is heteroscedastic. Thus, if this noise with zero mean dominates in the residual errors, the weighted least squares (WLS) regression method described in this paper is a better estimator than CLS. However, if bias errors, such as the residual baseline error, are significant, WLS may perform worse than CLS. In this paper, we compare the effect of noise and bias error in using CLS and WLS in quantitative analysis. Results indicated that for wavenumbers with low absorbance, the bias error significantly affected the error, such that the performance of CLS is better than that of WLS. However, for wavenumbers with high absorbance, the noise significantly affected the error, and WLS proves to be better than CLS. Thus, we propose a selective weighted least squares (SWLS) regression that processes data with different wavenumbers using either CLS or WLS based on a selection criterion, i.e., lower or higher than an absorbance threshold. The effects of various factors on the optimal threshold value (OTV) for SWLS have been studied through numerical simulations. These studies reported that: (1) the concentration and the analyte type had minimal effect on OTV; and (2) the major factor that influences OTV is the ratio between the bias error and the standard deviation of the noise. The last part of this paper is dedicated to quantitative analysis of methane gas spectra and methane/toluene gas mixture spectra measured using FT-IR spectrometry with CLS, WLS, and SWLS. The standard error of prediction (SEP), bias of prediction (bias), and the residual sum of squares of the errors (RSS) from the three quantitative analyses were compared. In methane gas analysis, SWLS yielded the lowest SEP and RSS among the three methods. In methane/toluene mixture gas analysis, a modification of SWLS is presented to tackle the bias error from other components. The unmodified SWLS presents the lowest SEP in all cases, but not the lowest bias or RSS. The modified SWLS reduced the bias and showed a lower RSS than CLS, especially for small components.
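    The selective idea can be sketched in a few lines: per wavenumber, inverse-variance (WLS-style) weights are used where the measured absorbance exceeds a threshold, and flat (CLS-style) weights elsewhere. The single-component spectrum, noise model, threshold and the exact weighting scheme below are all invented and are not the authors' algorithm.

```python
# Hedged, single-component sketch of CLS vs. WLS vs. a selective weighting scheme.
import numpy as np

rng = np.random.default_rng(4)
wn = np.linspace(2800, 3200, 400)                        # wavenumber axis (cm^-1)
k = np.exp(-0.5 * ((wn - 3017) / 15) ** 2)               # toy pure-component absorptivity
c_true = 0.8
noise_sd = 0.001 + 0.02 * k * c_true                     # heteroscedastic noise level
a = k * c_true + noise_sd * rng.standard_normal(wn.size) + 0.002   # spectrum + baseline bias

def weighted_fit(w):
    """Single-component weighted least squares: c = sum(w k a) / sum(w k^2)."""
    return np.sum(w * k * a) / np.sum(w * k * k)

threshold = 0.1                                          # absorbance threshold (OTV analogue)
w_cls = np.ones(wn.size)
w_wls = 1.0 / noise_sd**2
w_sel = np.where(a > threshold, w_wls, np.median(w_wls[a > threshold]))

for name, w in [("CLS", w_cls), ("WLS", w_wls), ("selective", w_sel)]:
    print(f"{name:9s} estimate: {weighted_fit(w):.4f}  (true {c_true})")
```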

  15. Investigation of Primary Mirror Segment's Residual Errors for the Thirty Meter Telescope

    NASA Technical Reports Server (NTRS)

    Seo, Byoung-Joon; Nissly, Carl; Angeli, George; MacMynowski, Doug; Sigrist, Norbert; Troy, Mitchell; Williams, Eric

    2009-01-01

    The primary mirror segment aberrations after shape corrections with warping harness have been identified as the single largest error term in the Thirty Meter Telescope (TMT) image quality error budget. In order to better understand the likely errors and how they will impact the telescope performance we have performed detailed simulations. We first generated unwarped primary mirror segment surface shapes that met TMT specifications. Then we used the predicted warping harness influence functions and a Shack-Hartmann wavefront sensor model to determine estimates for the 492 corrected segment surfaces that make up the TMT primary mirror. Surface and control parameters, as well as the number of subapertures were varied to explore the parameter space. The corrected segment shapes were then passed to an optical TMT model built using the Jet Propulsion Laboratory (JPL) developed Modeling and Analysis for Controlled Optical Systems (MACOS) ray-trace simulator. The generated exit pupil wavefront error maps provided RMS wavefront error and image-plane characteristics like the Normalized Point Source Sensitivity (PSSN). The results have been used to optimize the segment shape correction and wavefront sensor designs as well as provide input to the TMT systems engineering error budgets.

  16. 25+ Years of the Hubble Space Telescope and a Simple Error That Cost Millions

    ERIC Educational Resources Information Center

    Shakerin, Said

    2016-01-01

    A simple mistake in properly setting up a measuring device caused millions of dollars to be spent in correcting the initial optical failure of the Hubble Space Telescope (HST). This short article is intended as a lesson for a physics laboratory and discussion of errors in measurement.

  17. Practical guidance on representing the heteroscedasticity of residual errors of hydrological predictions

    NASA Astrophysics Data System (ADS)

    McInerney, David; Thyer, Mark; Kavetski, Dmitri; Kuczera, George

    2016-04-01

    Appropriate representation of residual errors in hydrological modelling is essential for accurate and reliable probabilistic streamflow predictions. In particular, residual errors of hydrological predictions are often heteroscedastic, with large errors associated with high runoff events. Although multiple approaches exist for representing this heteroscedasticity, few if any studies have undertaken a comprehensive evaluation and comparison of these approaches. This study fills this research gap by evaluating a range of approaches for representing heteroscedasticity in residual errors. These approaches include the 'direct' weighted least squares approach and 'transformational' approaches, such as logarithmic, Box-Cox (with and without fitting the transformation parameter), logsinh and the inverse transformation. The study reports (1) theoretical comparison of heteroscedasticity approaches, (2) empirical evaluation of heteroscedasticity approaches using a range of multiple catchments / hydrological models / performance metrics and (3) interpretation of empirical results using theory to provide practical guidance on the selection of heteroscedasticity approaches. Importantly, for hydrological practitioners, the results will simplify the choice of approaches to represent heteroscedasticity. This will enhance their ability to provide hydrological probabilistic predictions with the best reliability and precision for different catchment types (e.g. high/low degree of ephemerality).
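    As an illustration of the transformational approaches listed above, the sketch below contrasts raw residuals with log- and Box-Cox-transformed residuals on synthetic flows carrying a multiplicative error; the transformation parameter and the data are arbitrary, and the direct weighted-least-squares approach is not shown.

```python
# Hedged sketch: how transformations reduce the dependence of residual spread on flow.
import numpy as np

rng = np.random.default_rng(5)
q_sim = np.exp(rng.normal(1.0, 1.0, 1000))              # synthetic simulated flows
q_obs = q_sim * np.exp(rng.normal(0.0, 0.3, 1000))      # observations with multiplicative error

def boxcox(q, lam=0.2):
    return (q**lam - 1.0) / lam if lam != 0 else np.log(q)

residuals = {
    "raw":     q_obs - q_sim,
    "log":     np.log(q_obs) - np.log(q_sim),
    "Box-Cox": boxcox(q_obs) - boxcox(q_sim),
}
for name, r in residuals.items():
    corr = np.corrcoef(np.abs(r), q_sim)[0, 1]
    print(f"corr(|{name} residual|, flow) = {corr:+.2f}")
```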

  18. The 'Soil Cover App' - a new tool for fast determination of dead and living biomass on soil

    NASA Astrophysics Data System (ADS)

    Bauer, Thomas; Strauss, Peter; Riegler-Nurscher, Peter; Prankl, Johann; Prankl, Heinrich

    2017-04-01

    Worldwide, many agricultural practices rely on soil protection strategies that use living or dead biomass as soil cover. Especially when management practices focus on soil erosion mitigation, the effectiveness of these practices is directly driven by the amount of soil cover left on the soil surface. Hence there is a need for quick and reliable methods of soil cover estimation, not only for living biomass but particularly for dead biomass (mulch). Available methods for soil cover measurement are either subjective, depending on an educated guess, or time consuming, e.g., if the image is analysed manually at grid points. We therefore developed a mobile application using an algorithm based on entangled forest classification. The final output of the algorithm gives classified labels for each pixel of the input image as well as the percentage of each class, the classes being living biomass, dead biomass, stones and soil. Our training dataset consisted of more than 250 different images and their annotated class information. Images were taken under a range of environmental conditions, covering different lighting, soil cover from 0% to 100%, and different materials such as living plants, residues, straw and stones. We compared the results provided by our mobile application with a data set of 180 images that had been manually annotated. A comparison between both methods revealed a regression slope of 0.964 with a coefficient of determination R² = 0.92, corresponding to an average error of about 4%. While the average error of living plant classification was about 3%, dead residue classification resulted in an 8% error. Thus the new mobile application tool offers a fast and easy way to obtain information on the protective potential of a particular agricultural management site.

  19. Updated Magmatic Flux Rate Estimates for the Hawaii Plume

    NASA Astrophysics Data System (ADS)

    Wessel, P.

    2013-12-01

    Several studies have estimated the magmatic flux rate along the Hawaiian-Emperor Chain using a variety of methods and arriving at different results. These flux rate estimates have weaknesses because of incomplete data sets and different modeling assumptions, especially for the youngest portion of the chain (<3 Ma). While they generally agree on the 1st order features, there is less agreement on the magnitude and relative size of secondary flux variations. Some of these differences arise from the use of different methodologies, but the significance of this variability is difficult to assess due to a lack of confidence bounds on the estimates obtained with these disparate methods. All methods introduce some error, but to date there has been little or no quantification of error estimates for the inferred melt flux, making an assessment problematic. Here we re-evaluate the melt flux for the Hawaii plume with the latest gridded data sets (SRTM30+ and FAA 21.1) using several methods, including the optimal robust separator (ORS) and directional median filtering techniques (DiM). We also compute realistic confidence limits on the results. In particular, the DiM technique was specifically developed to aid in the estimation of surface loads that are superimposed on wider bathymetric swells and it provides error estimates on the optimal residuals. Confidence bounds are assigned separately for the estimated surface load (obtained from the ORS regional/residual separation techniques) and the inferred subsurface volume (from gravity-constrained isostasy and plate flexure optimizations). These new and robust estimates will allow us to assess which secondary features in the resulting melt flux curve are significant and should be incorporated when correlating melt flux variations with other geophysical and geochemical observations.

  20. Adaptive Error Estimation in Linearized Ocean General Circulation Models

    NASA Technical Reports Server (NTRS)

    Chechelnitsky, Michael Y.

    1999-01-01

    Data assimilation methods are routinely used in oceanography. The statistics of the model and measurement errors need to be specified a priori. This study addresses the problem of estimating model and measurement error statistics from observations. We start by applying innovation-based methods of adaptive error estimation with low-dimensional models in the North Pacific (5-60 deg N, 132-252 deg E) to TOPEX/POSEIDON (T/P) sea level anomaly data, acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced state linear model that describes large scale internal (baroclinic) error dynamics is used. The methods are shown to be sensitive to the initial guess for the error statistics and the type of observations. A new off-line approach is developed, the covariance matching approach (CMA), where covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses observations directly instead of the innovations sequence and is shown to be related to the MT method and the method of Fu et al. (1993). Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. The CMA is then applied to T/P sea level anomaly data and a linearization of a global GFDL GCM which uses two vertical modes. We show that the CMA method can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-T/P residual variance explained by the model error is larger than that derived in Fukumori et al. (1999) with the method of Fu et al. (1993). Most of the model error is explained by the barotropic mode. However, we find that the impact of the change in the error statistics on the data assimilation estimates is very small. This is explained by the large representation error, i.e. the dominance of the mesoscale eddies in the T/P signal, which are not part of the 2° by 1° GCM. Therefore, the impact of the observations on the assimilation is very small even after the adjustment of the error statistics. This work demonstrates that simultaneous estimation of the model and measurement error statistics for data assimilation with global ocean data sets and linearized GCMs is possible. However, the error covariance estimation problem is in general highly underdetermined, much more so than the state estimation problem. In other words, there exists a very large number of statistical models that can be made consistent with the available data. Therefore, methods for obtaining quantitative error estimates, powerful though they may be, cannot replace physical insight. Used in the right context, as a tool for guiding the choice of a small number of model error parameters, covariance matching can be a useful addition to the repertory of tools available to oceanographers.

  1. Precise orbit determination using the batch filter based on particle filtering with genetic resampling approach

    NASA Astrophysics Data System (ADS)

    Kim, Young-Rok; Park, Eunseo; Choi, Eun-Jung; Park, Sang-Young; Park, Chandeok; Lim, Hyung-Chul

    2014-09-01

    In this study, a genetic resampling (GRS) approach is utilized for precise orbit determination (POD) using the batch filter based on particle filtering (PF). Two genetic operations, which are arithmetic crossover and residual mutation, are used for GRS of the batch filter based on PF (PF batch filter). For POD, the Laser-ranging Precise Orbit Determination System (LPODS) and satellite laser ranging (SLR) observations of the CHAMP satellite are used. Monte Carlo trials for POD are performed one hundred times. The characteristics of the POD results by the PF batch filter with GRS are compared with those of a PF batch filter with minimum residual resampling (MRRS). The post-fit residual, 3D error by external orbit comparison, and POD repeatability are analyzed for orbit quality assessments. The POD results are externally checked against NASA JPL’s orbits computed with totally different software, measurements, and techniques. For post-fit residuals and 3D errors, both MRRS and GRS give accurate estimation results whose mean root mean square (RMS) values are at a level of 5 cm and 10-13 cm, respectively. The mean radial orbit errors of both methods are at a level of 5 cm. For POD repeatability, represented as the standard deviations of post-fit residuals and 3D errors over repetitive PODs, however, GRS yields 25% and 13% more robust estimation results than MRRS for post-fit residual and 3D error, respectively. This study shows that the PF batch filter with the GRS approach using genetic operations is superior to the PF batch filter with MRRS in terms of robustness in POD with SLR observations.
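    A toy sketch of the two genetic operations named above, applied after a standard multinomial resampling step: arithmetic crossover blends pairs of resampled particles, and a mutation step adds a perturbation scaled by the particle spread. The state dimension, particle count and scale factor are arbitrary, and this is not the authors' exact residual-mutation scheme.

```python
# Hedged sketch: genetic-style resampling (crossover + mutation) for a toy particle set.
import numpy as np

rng = np.random.default_rng(6)
particles = rng.normal(0.0, 1.0, size=(100, 6))          # e.g. position + velocity states
weights = rng.random(100)
weights /= weights.sum()

idx = rng.choice(100, size=100, p=weights)                # multinomial resampling
resampled = particles[idx]

alpha = rng.random((50, 1))                               # arithmetic crossover coefficients
pairs = resampled.reshape(50, 2, 6)
children = np.concatenate([alpha * pairs[:, 0] + (1 - alpha) * pairs[:, 1],
                           (1 - alpha) * pairs[:, 0] + alpha * pairs[:, 1]])

mutated = children + 0.05 * resampled.std(axis=0) * rng.standard_normal(children.shape)
print("particle spread before/after:",
      round(float(particles.std(axis=0).mean()), 3),
      round(float(mutated.std(axis=0).mean()), 3))
```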

  2. Efficacy of Visual-Acoustic Biofeedback Intervention for Residual Rhotic Errors: A Single-Subject Randomization Study

    ERIC Educational Resources Information Center

    Byun, Tara McAllister

    2017-01-01

    Purpose: This study documented the efficacy of visual-acoustic biofeedback intervention for residual rhotic errors, relative to a comparison condition involving traditional articulatory treatment. All participants received both treatments in a single-subject experimental design featuring alternating treatments with blocked randomization of…

  3. Modeling astronomical adaptive optics performance with temporally filtered Wiener reconstruction of slope data

    NASA Astrophysics Data System (ADS)

    Correia, Carlos M.; Bond, Charlotte Z.; Sauvage, Jean-François; Fusco, Thierry; Conan, Rodolphe; Wizinowich, Peter L.

    2017-10-01

    We build on a long-standing tradition in astronomical adaptive optics (AO) of specifying performance metrics and error budgets using linear systems modeling in the spatial-frequency domain. Our goal is to provide a comprehensive tool for the calculation of error budgets in terms of residual temporally filtered phase power spectral densities and variances. In addition, the fast simulation of AO-corrected point spread functions (PSFs) provided by this method can be used as inputs for simulations of science observations with next-generation instruments and telescopes, in particular to predict post-coronagraphic contrast improvements for planet finder systems. We extend the previous results and propose the synthesis of a distributed Kalman filter to mitigate both aniso-servo-lag and aliasing errors whilst minimizing the overall residual variance. We discuss applications to (i) analytic AO-corrected PSF modeling in the spatial-frequency domain, (ii) post-coronagraphic contrast enhancement, (iii) filter optimization for real-time wavefront reconstruction, and (iv) PSF reconstruction from system telemetry. Under perfect knowledge of wind velocities, we show that ~60 nm rms error reduction can be achieved with the distributed Kalman filter embodying anti-aliasing reconstructors on 10 m class high-order AO systems, leading to contrast improvement factors of up to three orders of magnitude at a few λ/D separations (~1-5 λ/D) for a 0 magnitude star and reaching close to one order of magnitude for a 12 magnitude star.

  4. 47 CFR 87.145 - Acceptability of transmitters for licensing.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...

  5. 47 CFR 87.145 - Acceptability of transmitters for licensing.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...

  6. 47 CFR 87.145 - Acceptability of transmitters for licensing.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...

  7. 47 CFR 87.145 - Acceptability of transmitters for licensing.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...

  8. Electrostatics of cysteine residues in proteins: parameterization and validation of a simple model.

    PubMed

    Salsbury, Freddie R; Poole, Leslie B; Fetrow, Jacquelyn S

    2012-11-01

    One of the most popular and simple models for the calculation of pKa values from a protein structure is the semi-macroscopic electrostatic model MEAD. This model requires empirical parameters for each residue to calculate pKa values. Analysis of current, widely used empirical parameters for cysteine residues showed that they did not reproduce expected cysteine pKa values; thus, we set out to identify parameters consistent with the CHARMM27 force field that capture both the behavior of typical cysteines in proteins and the behavior of cysteines which have perturbed pKa values. The new parameters were validated in three ways: (1) calculation across a large set of typical cysteines in proteins (where the calculations are expected to reproduce expected ensemble behavior); (2) calculation across a set of perturbed cysteines in proteins (where the calculations are expected to reproduce the shifted ensemble behavior); and (3) comparison to experimentally determined pKa values (where the calculation should reproduce the pKa within experimental error). Both the general behavior of cysteines in proteins and the perturbed pKa in some proteins can be predicted reasonably well using the newly determined empirical parameters within the MEAD model for protein electrostatics. This study provides the first general analysis of the electrostatics of cysteines in proteins, with specific attention paid to capturing both the behavior of typical cysteines in a protein and the behavior of cysteines whose pKa should be shifted, and validation of force field parameters for cysteine residues. Copyright © 2012 Wiley Periodicals, Inc.

  9. Virtual occlusal definition for orthognathic surgery.

    PubMed

    Liu, X J; Li, Q Q; Zhang, Z; Li, T T; Xie, Z; Zhang, Y

    2016-03-01

    Computer-assisted surgical simulation is being used increasingly in orthognathic surgery. However, occlusal definition is still undertaken using model surgery with subsequent digitization via surface scanning or cone beam computed tomography. A software tool has been developed and a workflow set up in order to achieve a virtual occlusal definition. The results of a validation study carried out on 60 models of normal occlusion are presented. Inter- and intra-user correlation tests were used to investigate the reproducibility of the manual setting point procedure. The errors between the virtually set positions (test) and the digitized manually set positions (gold standard) were compared. The consistency in virtual set positions performed by three individual users was investigated by one way analysis of variance test. Inter- and intra-observer correlation coefficients for manual setting points were all greater than 0.95. Overall, the median error between the test and the gold standard positions was 1.06mm. Errors did not differ among teeth (F=0.371, P>0.05). The errors were not significantly different from 1mm (P>0.05). There were no significant differences in the errors made by the three independent users (P>0.05). In conclusion, this workflow for virtual occlusal definition was found to be reliable and accurate. Copyright © 2015 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  10. A procedure for the significance testing of unmodeled errors in GNSS observations

    NASA Astrophysics Data System (ADS)

    Li, Bofeng; Zhang, Zhetao; Shen, Yunzhong; Yang, Ling

    2018-01-01

    It is a crucial task to establish a precise mathematical model for global navigation satellite system (GNSS) observations in precise positioning. Due to the spatiotemporal complexity of, and limited knowledge on, systematic errors in GNSS observations, some residual systematic errors would inevitably remain even after correction with empirical models and parameterization. These residual systematic errors are referred to as unmodeled errors. However, most of the existing studies mainly focus on handling the systematic errors that can be properly modeled and then simply ignore the unmodeled errors that may actually exist. To further improve the accuracy and reliability of GNSS applications, such unmodeled errors must be handled especially when they are significant. Therefore, the first question is how to statistically validate the significance of unmodeled errors. In this research, we propose a procedure to examine the significance of these unmodeled errors by the combined use of hypothesis tests. With this testing procedure, three components of unmodeled errors, i.e., the nonstationary signal, stationary signal and white noise, are identified. The procedure is tested by using simulated data and real BeiDou datasets with varying error sources. The results show that the unmodeled errors can be discriminated by our procedure with approximately 90% confidence. The efficiency of the proposed procedure is further confirmed by applying the time-domain Allan variance analysis and the frequency-domain fast Fourier transform. In summary, spatiotemporally correlated unmodeled errors commonly exist in GNSS observations and are mainly governed by the residual atmospheric biases and multipath. Their patterns may also be impacted by the receiver.

  11. A cryogenic 'set-and-forget' deformable mirror

    NASA Astrophysics Data System (ADS)

    Trines, Robin; Janssen, Huub; Paalvast, Sander; Teuwen, Maurice; Brandl, Bernhard; Rodenhuis, Michiel

    2016-07-01

    This paper discusses the development, realization and initial characterization of a demonstrator for a cryogenic 'set and forget' deformable mirror. Many optical and cryogenic infrared instruments on modern very and extremely large telescopes aim at diffraction-limited performance and require total wave front errors in the order of 50 nanometers or less. At the same time, their complex optical functionality requires either a large number of spherical mirrors or several complex free-form mirrors. Due to manufacturing and alignment tolerances, each mirror contributes static aberrations to the wave front. Many of these aberrations are not known in the design phase and can only be measured once the system has been assembled. A 'set-and-forget' deformable mirror can be used to compensate for these aberrations, making it especially interesting for systems with complex free-form mirrors or cryogenic systems where access to iterative realignment is very difficult or time consuming. The mirror with an optical diameter of 200 mm is designed to correct wave front aberrations of up to 2 μm root-mean square (rms). The shape of the wave front is approximated by the first 15 Zernike modes. Finite element analysis of the mirror shows a theoretically possible reduction of the wave front error from 2 μm to 53 nm rms. To produce the desired shapes, the mirror surface is controlled by 19 identical actuator modules at the back of the mirror. The actuator modules use commercially available Piezo-Knob actuators with a high technology readiness level (TRL). These provide nanometer resolution at cryogenic temperatures combined with high positional stability, and allow for the system to be powered off once the desired shape is obtained. The stiff design provides a high resonance frequency (>200 Hz) to suppress external disturbances. A full-size demonstrator of the deformable mirror containing 6 actuators and 13 dummy actuators is realized and characterized. Measurement results show that the actuators can provide sufficient stroke to correct the 2 μm rms WFE. The resolution of the actuator influence functions is found to be 0.24 nm rms or better depending on the position of the actuator within the grid. Superposition of the actuator influence functions shows that a 2 μm rms WFE can be accurately corrected with a 38 nm fitting error. Due to the manufacturing method of the demonstrator an artificially large print-through error of 182 nm is observed. The main cause of this print-through error has been identified and will be reduced in future design iterations. After these design changes the system is expected to have a total residual error of less than 70 nm and offer diffraction-limited performance (λ/14) for wavelengths of 1 μm and above.
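    The superposition-of-influence-functions step reported above is essentially a least-squares fit. The sketch below fits synthetic Gaussian influence functions on a 19-actuator grid to a toy target wavefront and reports the residual fitting error; the grid layout, influence-function shapes and target are assumptions, not the demonstrator's measured data.

```python
# Hedged sketch: least-squares fit of actuator influence functions to a target wavefront.
import numpy as np

n = 64
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
pupil = (x**2 + y**2) <= 1.0

# 19 Gaussian influence functions on an assumed ring layout (1 centre, 6 inner, 12 outer)
centres = [(0.0, 0.0)]
centres += [(0.55 * np.cos(t), 0.55 * np.sin(t)) for t in np.linspace(0, 2 * np.pi, 6, endpoint=False)]
centres += [(0.95 * np.cos(t), 0.95 * np.sin(t)) for t in np.linspace(0, 2 * np.pi, 12, endpoint=False)]
IF = np.stack([np.exp(-((x - cx)**2 + (y - cy)**2) / 0.12) for cx, cy in centres])

target = 2000e-9 * np.exp(-((x - 0.2)**2 + (y + 0.1)**2))   # toy wavefront error (m)
A = IF[:, pupil].T                                          # (pixels, actuators)
cmd, *_ = np.linalg.lstsq(A, target[pupil], rcond=None)     # actuator strokes
residual = target[pupil] - A @ cmd

print(f"target rms {target[pupil].std() * 1e9:.0f} nm -> fitting error {residual.std() * 1e9:.0f} nm")
```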

  12. A new nondestructive instrument for bulk residual stress measurement using tungsten kα1 X-ray.

    PubMed

    Ma, Ce; Dou, Zuo-Yong; Chen, Li; Li, Yun; Tan, Xiao; Dong, Ping; Zhang, Jin; Zheng, Lin; Zhang, Peng-Cheng

    2016-11-01

    We describe an experimental instrument for nondestructive measurement of residual stress using short-wavelength tungsten Kα1 X-rays. By introducing a photon energy screening technology, monochromatic X-ray diffraction of tungsten Kα1 was realized using a CdTe detector. A high precision Huber goniometer is utilized in order to reduce the error in residual stress measurement. This paper summarizes the main performance of this instrument, namely measurement depth and stress error, in comparison with neutron diffraction measurements of residual stress. Here, we demonstrate an application on the determination of residual stress in an aluminum alloy welded by friction stir welding.

  13. Planck 2013 results. VII. HFI time response and beams

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bock, J. J.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Bowyer, J. W.; Bridges, M.; Bucher, M.; Burigana, C.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chary, R.-R.; Chiang, H. C.; Chiang, L.-Y.; Christensen, P. R.; Church, S.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.-M.; Désert, F.-X.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Dunkley, J.; Dupac, X.; Efstathiou, G.; Enßlin, T. A.; Eriksen, H. K.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Galeotta, S.; Ganga, K.; Giard, M.; Giraud-Héraud, Y.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Haissinski, J.; Hansen, F. K.; Hanson, D.; Harrison, D.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hou, Z.; Hovest, W.; Huffenberger, K. M.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lamarre, J.-M.; Lasenby, A.; Laureijs, R. J.; Lawrence, C. R.; Leonardi, R.; Leroy, C.; Lesgourgues, J.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; MacTavish, C. J.; Maffei, B.; Mandolesi, N.; Maris, M.; Marshall, D. J.; Martin, P. G.; Martínez-González, E.; Masi, S.; Massardi, M.; Matarrese, S.; Matsumura, T.; Matthai, F.; Mazzotta, P.; McGehee, P.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Osborne, S.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Pasian, F.; Patanchon, G.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polegre, A. M.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Roudier, G.; Rowan-Robinson, M.; Rusholme, B.; Sandri, M.; Santos, D.; Sauvé, A.; Savini, G.; Scott, D.; Shellard, E. P. S.; Spencer, L. D.; Starck, J.-L.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Tavagnacco, D.; Terenzi, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L. A.; Wandelt, B. D.; Yvon, D.; Zacchei, A.; Zonca, A.

    2014-11-01

    This paper characterizes the effective beams, the effective beam window functions and the associated errors for the Planck High Frequency Instrument (HFI) detectors. The effective beam is the angular response including the effect of the optics, detectors, data processing and the scan strategy. The window function is the representation of this beam in the harmonic domain which is required to recover an unbiased measurement of the cosmic microwave background angular power spectrum. The HFI is a scanning instrument and its effective beams are the convolution of: a) the optical response of the telescope and feeds; b) the processing of the time-ordered data and deconvolution of the bolometric and electronic transfer function; and c) the merging of several surveys to produce maps. The time response transfer functions are measured using observations of Jupiter and Saturn and by minimizing survey difference residuals. The scanning beam is the post-deconvolution angular response of the instrument, and is characterized with observations of Mars. The main beam solid angles are determined to better than 0.5% at each HFI frequency band. Observations of Jupiter and Saturn limit near sidelobes (within 5°) to about 0.1% of the total solid angle. Time response residuals remain as long tails in the scanning beams, but contribute less than 0.1% of the total solid angle. The bias and uncertainty in the beam products are estimated using ensembles of simulated planet observations that include the impact of instrumental noise and known systematic effects. The correlation structure of these ensembles is well-described by five error eigenmodes that are sub-dominant to sample variance and instrumental noise in the harmonic domain. A suite of consistency tests provide confidence that the error model represents a sufficient description of the data. The total error in the effective beam window functions is below 1% at 100 GHz up to multipole ℓ ~ 1500, and below 0.5% at 143 and 217 GHz up to ℓ ~ 2000.

  14. Optimal slice thickness for cone-beam CT with on-board imager

    PubMed Central

    Seet, KYT; Barghi, A; Yartsev, S; Van Dyk, J

    2010-01-01

    Purpose: To find the optimal slice thickness (Δτ) setting for patient registration with kilovoltage cone-beam CT (kVCBCT) on the Varian On Board Imager (OBI) system by investigating the relationship of slice thickness to automatic registration accuracy and contrast-to-noise ratio. Materials and method: Automatic registration was performed on kVCBCT studies of the head and pelvis of a RANDO anthropomorphic phantom. Images were reconstructed with 1.0 ≤ Δτ (mm) ≤ 5.0 at 1.0 mm increments. The phantoms were offset by a known amount, and the suggested shifts were compared to the known shifts by calculating the residual error. A uniform cylindrical phantom with cylindrical inserts of various known CT numbers was scanned with kVCBCT at 1.0 ≤ Δτ (mm) ≤ 5.0 at increments of 0.5 mm. The contrast-to-noise ratios for the inserts were measured at each Δτ. Results: For the planning CT slice thickness used in this study, there was no significant difference in residual error below a threshold equal to the planning CT slice thickness. For Δτ > 3.0 mm, residual error increased for both the head and pelvis phantom studies. The contrast-to-noise ratio is proportional to slice thickness until Δτ = 2.5 mm. Beyond this point, the contrast-to-noise ratio was not affected by Δτ. Conclusion: Automatic registration accuracy is greatest when 1.0 ≤ Δτ (mm) ≤ 3.0 is used. Contrast-to-noise ratio is optimal for the 2.5 ≤ Δτ (mm) ≤ 5.0 range. Therefore 2.5 ≤ Δτ (mm) ≤ 3.0 is recommended for kVCBCT patient registration where the planning CT is 3.0 mm. PMID:21611047
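    For reference, the residual error reported in such phantom studies is essentially the RMS difference between the registration-suggested shifts and the known applied shifts, after accounting for any systematic set-up offset; the numbers in the sketch below are made up.

```python
# Hedged sketch of the residual-error metric (made-up shifts, in mm).
import numpy as np

known_shift = np.array([[5.0, 0.0, 0.0], [0.0, 5.0, 0.0], [0.0, 0.0, 10.0]])
suggested   = np.array([[4.6, 0.3, -0.2], [0.1, 5.4, 0.2], [0.2, -0.3, 9.5]])
systematic  = np.array([0.1, -0.2, 0.1])      # offset measured at the planning isocenter

residual = suggested - systematic - known_shift
rms = np.sqrt(np.mean(np.sum(residual**2, axis=1)))
print(f"residual registration error: {rms:.2f} mm")
```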

  15. Resampling-based Methods in Single and Multiple Testing for Equality of Covariance/Correlation Matrices

    PubMed Central

    Yang, Yang; DeGruttola, Victor

    2016-01-01

    Traditional resampling-based tests for homogeneity in covariance matrices across multiple groups resample residuals, that is, data centered by group means. These residuals do not share the same second moments when the null hypothesis is false, which makes them difficult to use in the setting of multiple testing. An alternative approach is to resample standardized residuals, data centered by group sample means and standardized by group sample covariance matrices. This approach, however, has been observed to inflate type I error when sample size is small or data are generated from heavy-tailed distributions. We propose to improve this approach by using robust estimation for the first and second moments. We discuss two statistics: the Bartlett statistic and a statistic based on eigen-decomposition of sample covariance matrices. Both statistics can be expressed in terms of standardized errors under the null hypothesis. These methods are extended to test homogeneity in correlation matrices. Using simulation studies, we demonstrate that the robust resampling approach provides comparable or superior performance, relative to traditional approaches, for single testing and reasonable performance for multiple testing. The proposed methods are applied to data collected in an HIV vaccine trial to investigate possible determinants, including vaccine status, vaccine-induced immune response level and viral genotype, of unusual correlation pattern between HIV viral load and CD4 count in newly infected patients. PMID:22740584
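    As a simplified illustration (not the authors' robust procedure), the sketch below computes a Bartlett-type statistic for homogeneity of covariance matrices and builds a reference distribution by permuting standardized residuals across groups.

```python
# Hedged sketch: Bartlett-type test via resampling of standardized residuals.
import numpy as np

rng = np.random.default_rng(8)
groups = [rng.normal(size=(40, 3)), rng.normal(size=(55, 3)), rng.normal(size=(30, 3))]

def bartlett_stat(gs):
    k = len(gs)
    ns = np.array([g.shape[0] for g in gs])
    covs = [np.cov(g, rowvar=False) for g in gs]
    pooled = sum((n - 1) * c for n, c in zip(ns, covs)) / (ns.sum() - k)
    return ((ns.sum() - k) * np.log(np.linalg.det(pooled))
            - sum((n - 1) * np.log(np.linalg.det(c)) for n, c in zip(ns, covs)))

def standardize(g):
    # centre by the group mean and whiten by the group covariance
    L = np.linalg.cholesky(np.linalg.inv(np.cov(g, rowvar=False)))
    return (g - g.mean(axis=0)) @ L

observed = bartlett_stat(groups)
std_resid = np.vstack([standardize(g) for g in groups])
split_points = np.cumsum([g.shape[0] for g in groups])[:-1]

null = [bartlett_stat(np.split(rng.permutation(std_resid), split_points)) for _ in range(500)]
p_value = (np.sum(np.array(null) >= observed) + 1) / (len(null) + 1)
print(f"resampling p-value: {p_value:.3f}")
```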

  16. Resampling-based methods in single and multiple testing for equality of covariance/correlation matrices.

    PubMed

    Yang, Yang; DeGruttola, Victor

    2012-06-22

    Traditional resampling-based tests for homogeneity in covariance matrices across multiple groups resample residuals, that is, data centered by group means. These residuals do not share the same second moments when the null hypothesis is false, which makes them difficult to use in the setting of multiple testing. An alternative approach is to resample standardized residuals, data centered by group sample means and standardized by group sample covariance matrices. This approach, however, has been observed to inflate type I error when sample size is small or data are generated from heavy-tailed distributions. We propose to improve this approach by using robust estimation for the first and second moments. We discuss two statistics: the Bartlett statistic and a statistic based on eigen-decomposition of sample covariance matrices. Both statistics can be expressed in terms of standardized errors under the null hypothesis. These methods are extended to test homogeneity in correlation matrices. Using simulation studies, we demonstrate that the robust resampling approach provides comparable or superior performance, relative to traditional approaches, for single testing and reasonable performance for multiple testing. The proposed methods are applied to data collected in an HIV vaccine trial to investigate possible determinants, including vaccine status, vaccine-induced immune response level and viral genotype, of unusual correlation pattern between HIV viral load and CD4 count in newly infected patients.

  17. Validity of Three-Dimensional Photonic Scanning Technique for Estimating Percent Body Fat.

    PubMed

    Shitara, K; Kanehisa, H; Fukunaga, T; Yanai, T; Kawakami, Y

    2013-01-01

    Three-dimensional photonic scanning (3DPS) was recently developed to measure dimensions of a human body surface. The purpose of this study was to explore the validity of body volume measured by 3DPS for estimating the percent body fat (%fat). Design, setting, participants, and measurement: The body volumes were determined by 3DPS in 52 women. The body volume was corrected for residual lung volume. The %fat was estimated from body density and compared with the corresponding reference value determined by the dual-energy x-ray absorptiometry (DXA). No significant difference was found for the mean values of %fat obtained by 3DPS (22.2 ± 7.6%) and DXA (23.5 ± 4.9%). The root mean square error of %fat between 3DPS and reference technique was 6.0%. For each body segment, there was a significant positive correlation between 3DPS- and DXA-values, although the corresponding value for the head was slightly larger in 3DPS than in DXA. Residual lung volume was negatively correlated with the estimated error in %fat. The body volume determined with 3DPS is potentially useful for estimating %fat. A possible strategy for enhancing the measurement accuracy of %fat might be to refine the protocol for preparing the subject's hair prior to scanning and to improve the accuracy in the measurement of residual lung volume.
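    The densitometric step itself is a short calculation: body density from mass and scan volume corrected for residual lung volume, converted to %fat with the Siri equation. The input values in this sketch are invented; the study's subject data are not reproduced.

```python
# Hedged sketch: %fat from a 3DPS-style body volume (invented inputs).
mass_kg = 58.0
scan_volume_l = 56.5            # body volume from the 3D photonic scan
residual_lung_l = 1.2           # separately measured residual lung volume

body_density = mass_kg / (scan_volume_l - residual_lung_l)   # kg/L, i.e. g/mL
percent_fat = 495.0 / body_density - 450.0                   # Siri (1961) equation
print(f"body density {body_density:.3f} g/mL -> %fat {percent_fat:.1f}")
```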

  18. Residual uncertainty estimation using instance-based learning with applications to hydrologic forecasting

    NASA Astrophysics Data System (ADS)

    Wani, Omar; Beckers, Joost V. L.; Weerts, Albrecht H.; Solomatine, Dimitri P.

    2017-08-01

    A non-parametric method is applied to quantify residual uncertainty in hydrologic streamflow forecasting. This method acts as a post-processor on deterministic model forecasts and generates a residual uncertainty distribution. Based on instance-based learning, it uses a k nearest-neighbour search for similar historical hydrometeorological conditions to determine uncertainty intervals from a set of historical errors, i.e. discrepancies between past forecast and observation. The performance of this method is assessed using test cases of hydrologic forecasting in two UK rivers: the Severn and Brue. Forecasts in retrospect were made and their uncertainties were estimated using kNN resampling and two alternative uncertainty estimators: quantile regression (QR) and uncertainty estimation based on local errors and clustering (UNEEC). Results show that kNN uncertainty estimation produces accurate and narrow uncertainty intervals with good probability coverage. Analysis also shows that the performance of this technique depends on the choice of search space. Nevertheless, the accuracy and reliability of uncertainty intervals generated using kNN resampling are at least comparable to those produced by QR and UNEEC. It is concluded that kNN uncertainty estimation is an interesting alternative to other post-processors, like QR and UNEEC, for estimating forecast uncertainty. Apart from its concept being simple and well understood, an advantage of this method is that it is relatively easy to implement.
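    A minimal sketch of the kNN post-processor described above: for a new forecast, the k most similar historical hydrometeorological conditions are found and quantiles of their past errors form the uncertainty band. The predictors, the Euclidean search space, k and the data are all assumptions for illustration.

```python
# Hedged sketch: kNN resampling of historical forecast errors for an uncertainty interval.
import numpy as np

rng = np.random.default_rng(9)
hist_predictors = rng.normal(size=(2000, 3))    # e.g. forecast flow, rainfall, season index
hist_errors = rng.normal(0.0, 1.0 + np.abs(hist_predictors[:, 0]))   # observation - forecast

def knn_interval(x_new, k=200, q=(0.05, 0.95)):
    d = np.linalg.norm(hist_predictors - x_new, axis=1)    # Euclidean distance in predictor space
    nearest_errors = hist_errors[np.argsort(d)[:k]]
    return np.quantile(nearest_errors, q)

forecast = 12.0
lo, hi = knn_interval(np.array([1.5, 0.2, -0.3]))
print(f"90% predictive interval: [{forecast + lo:.1f}, {forecast + hi:.1f}]")
```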

  19. Simulation and mitigation of higher-order ionospheric errors in PPP

    NASA Astrophysics Data System (ADS)

    Zus, Florian; Deng, Zhiguo; Wickert, Jens

    2017-04-01

    We developed a rapid and precise algorithm to compute ionospheric phase advances in a realistic electron density field. The electron density field is derived from a plasmaspheric extension of the International Reference Ionosphere (Gulyaeva and Bilitza, 2012), and the magnetic field stems from the International Geomagnetic Reference Field. For specific station locations, elevation angles, and azimuth angles, the ionospheric phase advances are stored in a look-up table. The higher-order ionospheric residuals are computed by forming the standard linear combination of the ionospheric phase advances. In a simulation study we examine how the higher-order ionospheric residuals leak into estimated station coordinates, clocks, zenith delays, and tropospheric gradients in precise point positioning. The simulation study includes a few hundred globally distributed stations and covers the time period 1990-2015. We take a close look at the estimated zenith delays and tropospheric gradients, as they are considered a data source for meteorological and climate-related research. We also show how the by-product of this simulation study, the look-up tables, can be used to mitigate higher-order ionospheric errors in practice. Gulyaeva, T.L., and Bilitza, D. Towards ISO Standard Earth Ionosphere and Plasmasphere Model. In: New Developments in the Standard Model, edited by R.J. Larsen, pp. 1-39, NOVA, Hauppauge, New York, 2012, available at https://www.novapublishers.com/catalog/product_info.php?products_id=35812
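
    For context, the "standard linear combination" referred to above is the dual-frequency ionosphere-free carrier-phase combination. In a generic notation of our own (not the paper's), writing the dispersive part of each carrier-phase observable as inverse powers of frequency,

        \Phi_i = \rho + \frac{q}{f_i^2} + \frac{s}{f_i^3} + \frac{r}{f_i^4} + \cdots,
        \qquad
        \Phi_{\mathrm{IF}} = \frac{f_1^2\,\Phi_1 - f_2^2\,\Phi_2}{f_1^2 - f_2^2},

    the combination cancels the first-order (1/f^2) term exactly but leaves residuals of order 1/f^3 and 1/f^4; these are the higher-order terms whose leakage into station coordinates, clocks, zenith delays and gradients is quantified in the simulation study.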

  20. Specificity of reliable change models and review of the within-subjects standard deviation as an error term.

    PubMed

    Hinton-Bayre, Anton D

    2011-02-01

    There is an ongoing debate over the preferred method(s) for determining reliable change (RC) in individual scores over time. In the present paper, specificity comparisons of several classic and contemporary RC models were made using a real data set. This included a more detailed review of a new RC model, recently proposed in this journal, that used the within-subjects standard deviation (WSD) as the error term. It was suggested that the RC(WSD) was more sensitive to change and theoretically superior. The current paper demonstrated that, even in the presence of mean practice effects, false-positive rates were comparable across models when reliability was good and initial and retest variances were equivalent. However, when variances differed, discrepancies in classification across models became evident. Notably, the RC using the WSD provided unacceptably high false-positive rates in this setting. It was considered that the WSD was never intended for measuring change in this manner: the WSD actually combines systematic and error variance, the systematic variance coming from measurable between-treatment differences, commonly referred to as the practice effect. It was further demonstrated that removal of the systematic variance and appropriate modification of the residual error term for the purpose of testing individual change yielded an error term already published and criticized in the literature. A consensus on the RC approach is needed. To that end, further comparison of models under varied conditions is encouraged.
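
    For readers unfamiliar with the competing error terms, the sketch below shows the classic practice-adjusted reliable change index (Jacobson-Truax style with a mean practice-effect correction); the WSD-based model criticized above replaces the denominator with a within-subjects standard deviation. Variable names and example numbers are illustrative only.

        import math

        def rci_practice_adjusted(x1, x2, sd_baseline, test_retest_r, practice_effect=0.0):
            """Classic practice-adjusted reliable change index (illustrative sketch)."""
            sem = sd_baseline * math.sqrt(1.0 - test_retest_r)  # standard error of measurement
            sdiff = math.sqrt(2.0) * sem                        # SE of the difference score
            return (x2 - x1 - practice_effect) / sdiff          # |RCI| > 1.96 => reliable change

        # Example: retest 108 vs baseline 100, control-group practice effect of 5 points
        print(round(rci_practice_adjusted(100, 108, sd_baseline=15,
                                          test_retest_r=0.9, practice_effect=5), 2))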

  1. Precision assessment of model-based RSA for a total knee prosthesis in a biplanar set-up.

    PubMed

    Trozzi, C; Kaptein, B L; Garling, E H; Shelyakova, T; Russo, A; Bragonzoni, L; Martelli, S

    2008-10-01

    Model-based Roentgen stereophotogrammetric analysis (RSA) was recently developed for the measurement of prosthesis micromotion. Its main advantage is that markers do not need to be attached to the implants, as traditional marker-based RSA requires. Model-based RSA has only been tested in uniplanar radiographic set-ups. A biplanar set-up would theoretically facilitate the pose estimation algorithm, since the radiographic projections would show more distinct shape features of the implants than uniplanar images. We tested the precision of model-based RSA and compared it with that of the traditional marker-based method in a biplanar set-up. Micromotions of both tibial and femoral components were measured with both techniques from double examinations of patients participating in a clinical study. The results showed that in the biplanar set-up model-based RSA presents a homogeneous distribution of precision across the translation directions but an inhomogeneous error for rotations; in particular, internal-external rotation presented higher errors than rotations about the transverse and sagittal axes. Model-based RSA was less precise than the marker-based method, although the differences were not significant for the translations and rotations of the tibial component, with the exception of internal-external rotation. For both prosthesis components the precision of model-based RSA was below 0.2 mm for all translations and below 0.3 degrees for rotations about the transverse and sagittal axes. These values are still acceptable for clinical studies aimed at evaluating total knee prosthesis micromotion. In a biplanar set-up, model-based RSA is a valid alternative to traditional marker-based RSA when marking of the prosthesis is a major disadvantage.

  2. Accounting for model error in Bayesian solutions to hydrogeophysical inverse problems using a local basis approach

    NASA Astrophysics Data System (ADS)

    Irving, J.; Koepke, C.; Elsheikh, A. H.

    2017-12-01

    Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward process model linking subsurface parameters to measured data, which is typically assumed to be known perfectly in the inversion procedure. However, in order to make the stochastic solution of the inverse problem computationally tractable using, for example, Markov-chain Monte Carlo (MCMC) methods, fast approximations of the forward model are commonly employed. This introduces model error into the problem, which has the potential to significantly bias posterior statistics and hamper data integration efforts if not properly accounted for. Here, we present a new methodology for addressing the issue of model error in Bayesian solutions to hydrogeophysical inverse problems that is geared towards the common case where these errors cannot be effectively characterized globally through some parametric statistical distribution or locally based on interpolation between a small number of computed realizations. Rather than focusing on the construction of a global or local error model, we instead work towards identification of the model-error component of the residual through a projection-based approach. In this regard, pairs of approximate and detailed model runs are stored in a dictionary that grows at a specified rate during the MCMC inversion procedure. At each iteration, a local model-error basis is constructed for the current test set of model parameters using the K-nearest-neighbour entries in the dictionary, which is then used to separate the model error from the other error sources before computing the likelihood of the proposed set of model parameters. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar traveltime data for three different subsurface parameterizations of varying complexity. The synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed in the inversion procedure. In each case, the developed model-error approach removes posterior bias and yields a more realistic characterization of uncertainty.
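
    A schematic numpy sketch of the projection idea, with invented names and an orthogonal-projection step of our own choosing (the authors' actual separation of error sources may differ), is given below.

        import numpy as np

        def model_error_corrected_residual(d_obs, d_approx, theta, dictionary, k=20, p=5):
            """Remove the locally estimated model-error component from the residual.

            dictionary : list of (theta_i, d_detailed_i, d_approx_i) stored run pairs
            k          : number of nearest dictionary entries (in parameter space)
            p          : number of basis vectors kept from the local error samples
            """
            thetas = np.array([t for t, _, _ in dictionary])
            dist = np.linalg.norm(thetas - theta, axis=1)
            idx = np.argsort(dist)[:k]
            # Local samples of model error: detailed minus approximate forward responses
            E = np.array([dictionary[i][1] - dictionary[i][2] for i in idx]).T   # (n_data, k)
            U, _, _ = np.linalg.svd(E - E.mean(axis=1, keepdims=True), full_matrices=False)
            B = U[:, :p]                              # local model-error basis
            r = d_obs - d_approx                      # raw residual
            return r - B @ (B.T @ r)                  # residual with model-error part projected out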

  3. Medication errors in home care: a qualitative focus group study.

    PubMed

    Berland, Astrid; Bentsen, Signe Berit

    2017-11-01

    To explore registered nurses' experiences of medication errors and patient safety in home care. The focus of care for older patients has shifted from institutional care towards a model of home care. Medication errors are common in this setting and can result in patient morbidity and mortality. An exploratory qualitative design with focus group interviews was used. Four focus group interviews were conducted with 20 registered nurses in home care, and the data were analysed using content analysis. Five categories were identified: lack of information, lack of competence, reporting medication errors, trade-name versus generic-name products, and improving routines. Medication errors occur frequently in home care and can threaten the safety of patients. Insufficient exchange of information and poor communication between the specialist and home-care health services, and between general practitioners and healthcare workers, can lead to medication errors, as can a lack of competence among healthcare workers. To prevent medication errors, up-to-date information and good communication between healthcare workers are important when patients are transferred from specialist to home care. It is also important to ensure adequate competence with regard to medication, to encourage openness and accurate reporting when medication errors occur, and to set routines for the preparation, alteration and administration of medicines. © 2017 John Wiley & Sons Ltd.

  4. Impacts of uncertainties in European gridded precipitation observations on regional climate analysis

    PubMed Central

    Gobiet, Andreas

    2016-01-01

    Gridded precipitation data sets are frequently used to evaluate climate models or to remove model output biases. Although precipitation data are error prone due to the high spatio-temporal variability of precipitation and due to considerable measurement errors, relatively few attempts have been made to account for observational uncertainty in model evaluation or in bias correction studies. In this study, we compare three types of European daily data sets featuring two Pan-European data sets and a set that combines eight very high-resolution station-based regional data sets. Furthermore, we investigate seven widely used, larger scale global data sets. Our results demonstrate that the differences between these data sets have the same magnitude as precipitation errors found in regional climate models. Therefore, including observational uncertainties is essential for climate studies, climate model evaluation, and statistical post-processing. Following our results, we suggest the following guidelines for regional precipitation assessments. (1) Include multiple observational data sets from different sources (e.g. station, satellite, reanalysis based) to estimate observational uncertainties. (2) Use data sets with high station densities to minimize the effect of precipitation undersampling (may induce about 60% error in data sparse regions). The information content of a gridded data set is mainly related to its underlying station density and not to its grid spacing. (3) Consider undercatch errors of up to 80% in high latitudes and mountainous regions. (4) Analyses of small-scale features and extremes are especially uncertain in gridded data sets. For higher confidence, use climate-mean and larger scale statistics. In conclusion, neglecting observational uncertainties potentially misguides climate model development and can severely affect the results of climate change impact assessments. PMID:28111497

  5. Impacts of uncertainties in European gridded precipitation observations on regional climate analysis.

    PubMed

    Prein, Andreas F; Gobiet, Andreas

    2017-01-01

    Gridded precipitation data sets are frequently used to evaluate climate models or to remove model output biases. Although precipitation data are error prone due to the high spatio-temporal variability of precipitation and due to considerable measurement errors, relatively few attempts have been made to account for observational uncertainty in model evaluation or in bias correction studies. In this study, we compare three types of European daily data sets featuring two Pan-European data sets and a set that combines eight very high-resolution station-based regional data sets. Furthermore, we investigate seven widely used, larger scale global data sets. Our results demonstrate that the differences between these data sets have the same magnitude as precipitation errors found in regional climate models. Therefore, including observational uncertainties is essential for climate studies, climate model evaluation, and statistical post-processing. Following our results, we suggest the following guidelines for regional precipitation assessments. (1) Include multiple observational data sets from different sources (e.g. station, satellite, reanalysis based) to estimate observational uncertainties. (2) Use data sets with high station densities to minimize the effect of precipitation undersampling (may induce about 60% error in data sparse regions). The information content of a gridded data set is mainly related to its underlying station density and not to its grid spacing. (3) Consider undercatch errors of up to 80% in high latitudes and mountainous regions. (4) Analyses of small-scale features and extremes are especially uncertain in gridded data sets. For higher confidence, use climate-mean and larger scale statistics. In conclusion, neglecting observational uncertainties potentially misguides climate model development and can severely affect the results of climate change impact assessments.

  6. Analytical quality goals derived from the total deviation from patients' homeostatic set points, with a margin for analytical errors.

    PubMed

    Bolann, B J; Asberg, A

    2004-01-01

    The deviation of test results from patients' homeostatic set points in steady-state conditions may complicate interpretation of the results and the comparison of results with clinical decision limits. In this study the total deviation from the homeostatic set point is defined as the maximum absolute deviation for 95% of measurements, and we present analytical quality requirements that prevent analytical error from increasing this deviation to more than about 12% above the value caused by biology alone. These quality requirements are: 1) the stable systematic error should be approximately zero, and 2) a systematic error that will be detected by the control program with 90% probability should not be larger than half the value of the combined analytical and intra-individual standard deviation. As a result, when the most common control rules are used, the analytical standard deviation may be up to 0.15 times the intra-individual standard deviation. Analytical improvements beyond these requirements have little impact on the interpretability of measurement results.

  7. Universal quantum gate set approaching fault-tolerant thresholds with superconducting qubits.

    PubMed

    Chow, Jerry M; Gambetta, Jay M; Córcoles, A D; Merkel, Seth T; Smolin, John A; Rigetti, Chad; Poletto, S; Keefe, George A; Rothwell, Mary B; Rozen, J R; Ketchen, Mark B; Steffen, M

    2012-08-10

    We use quantum process tomography to characterize a full universal set of all-microwave gates on two superconducting single-frequency single-junction transmon qubits. All extracted gate fidelities, including those for Clifford group generators, single-qubit π/4 and π/8 rotations, and a two-qubit controlled-not, exceed 95% (98%), without (with) subtracting state preparation and measurement errors. Furthermore, we introduce a process map representation in the Pauli basis which is visually efficient and informative. This high-fidelity gate set serves as a critical building block towards scalable architectures of superconducting qubits for error correction schemes and pushes up on the known limits of quantum gate characterization.

  8. Universal Quantum Gate Set Approaching Fault-Tolerant Thresholds with Superconducting Qubits

    NASA Astrophysics Data System (ADS)

    Chow, Jerry M.; Gambetta, Jay M.; Córcoles, A. D.; Merkel, Seth T.; Smolin, John A.; Rigetti, Chad; Poletto, S.; Keefe, George A.; Rothwell, Mary B.; Rozen, J. R.; Ketchen, Mark B.; Steffen, M.

    2012-08-01

    We use quantum process tomography to characterize a full universal set of all-microwave gates on two superconducting single-frequency single-junction transmon qubits. All extracted gate fidelities, including those for Clifford group generators, single-qubit π/4 and π/8 rotations, and a two-qubit controlled-not, exceed 95% (98%), without (with) subtracting state preparation and measurement errors. Furthermore, we introduce a process map representation in the Pauli basis which is visually efficient and informative. This high-fidelity gate set serves as a critical building block towards scalable architectures of superconducting qubits for error correction schemes and pushes up on the known limits of quantum gate characterization.

  9. Glyphosate fate in soils when arriving in plant residues.

    PubMed

    Mamy, Laure; Barriuso, Enrique; Gabrielle, Benoît

    2016-07-01

    A significant fraction of the pesticides sprayed on crops may be returned to soils via plant residues, but the fate of this fraction has been little documented. The objective of this work was to study the fate of glyphosate associated with plant residues. Oilseed rape was used as a model plant, with two lines: a glyphosate-tolerant (GT) line and a non-GT line considered as a crucifer weed. The effects of different degrees of fragmentation and different placements of the plant residues in soil were tested. A control was set up by spraying glyphosate directly on the soil. The mineralization of glyphosate in soil was slower when it was incorporated into plant residues, and the amounts of extractable and non-extractable glyphosate residues increased. Glyphosate availability for mineralization increased when the size of the plant residues decreased and when the distribution of plant residues in soil was more homogeneous. After 80 days of soil incubation, the extractable (14)C-residues mostly consisted of one metabolite of glyphosate (AMPA), but up to 2.6% of the initial (14)C was still extracted from undecayed leaves as glyphosate. Thus, the trapping of herbicides in plant materials provides protection against degradation, and crop residue returns may increase the persistence of glyphosate in soils. This pattern appeared more pronounced for GT crops, which accumulated more non-degraded glyphosate in their tissues. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Image guidance in prostate cancer - can offline corrections be an effective substitute for daily online imaging?

    PubMed

    Prasad, Devleena; Das, Pinaki; Saha, Niladri S; Chatterjee, Sanjoy; Achari, Rimpa; Mallick, Indranil

    2014-01-01

    The aim of this study was to determine whether a less resource-intensive, established offline correction protocol, the No Action Level (NAL) protocol, was as effective as daily online correction of setup deviations in curative high-dose radiotherapy of prostate cancer. A total of 683 daily megavoltage CT (MVCT) or kilovoltage cone-beam CT (kV-CBCT) images of 30 patients with localized prostate cancer treated with intensity-modulated radiotherapy were evaluated. Daily image guidance was performed and setup errors along the three translational axes were recorded. The NAL protocol was simulated by using the mean shift calculated from the first five fractions and applying it to all subsequent treatments. Using the imaging data from the remaining fractions, the daily residual error (RE) was determined. The proportion of fractions in which the RE was greater than 3, 5 and 7 mm was calculated, as was the PTV margin that would be required if the offline protocol were followed. Using the NAL protocol reduced the systematic but not the random errors. Corrections made using the NAL protocol resulted in small and acceptable RE in the mediolateral (ML) and superoinferior (SI) directions, with 46/533 (8.1%) and 48/533 (5%) residual shifts above 5 mm, respectively. However, residual errors greater than 5 mm in the anteroposterior (AP) direction remained in 181/533 (34%) of fractions. The PTV margins calculated from the residual errors were 5 mm, 5 mm and 13 mm in the ML, SI and AP directions, respectively. Offline correction using the NAL protocol resulted in unacceptably high residual errors in the AP direction, owing to random uncertainties of rectal and bladder filling. Daily online imaging and correction remain the standard image-guidance policy for highly conformal radiotherapy of prostate cancer.
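
    The NAL simulation and margin calculation described above can be sketched as follows. The margin recipe shown is the widely used van Herk formula (2.5 Sigma + 0.7 sigma); the abstract does not state which recipe the authors applied, so treat it, like the synthetic shifts, as an illustrative assumption.

        import numpy as np

        def simulate_nal(shifts_per_patient, n_setup=5):
            """shifts_per_patient: list of (n_fractions,) arrays of daily setup errors
            along one axis (mm). Applies the mean of the first n_setup fractions as a
            fixed correction (NAL) and returns the per-patient residual errors."""
            residuals = []
            for s in shifts_per_patient:
                correction = s[:n_setup].mean()
                residuals.append(s[n_setup:] - correction)
            return residuals

        def van_herk_margin(residuals):
            """Population margin M = 2.5*Sigma + 0.7*sigma, where Sigma is the SD of
            per-patient mean residuals and sigma is the RMS of per-patient SDs."""
            means = np.array([r.mean() for r in residuals])
            sds = np.array([r.std(ddof=1) for r in residuals])
            Sigma = means.std(ddof=1)
            sigma = np.sqrt(np.mean(sds ** 2))
            return 2.5 * Sigma + 0.7 * sigma

        # Example with synthetic shifts for 5 patients, 20 fractions each (mm)
        rng = np.random.default_rng(1)
        shifts = [rng.normal(loc=rng.normal(0, 2), scale=2, size=20) for _ in range(5)]
        print(round(van_herk_margin(simulate_nal(shifts)), 1), "mm")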

  11. Quantification of residual dose estimation error on log file-based patient dose calculation.

    PubMed

    Katsuta, Yoshiyuki; Kadoya, Noriyuki; Fujita, Yukio; Shimizu, Eiji; Matsunaga, Kenichi; Matsushita, Haruo; Majima, Kazuhiro; Jingu, Keiichi

    2016-05-01

    Log file-based patient dose estimation includes a residual dose estimation error caused by leaf miscalibration, which cannot be reflected in the estimated dose. The purpose of this study was to determine this residual dose estimation error. Modified log files for seven head-and-neck and prostate volumetric modulated arc therapy (VMAT) plans simulating leaf miscalibration were generated by shifting both leaf banks (systematic leaf gap errors of ±2.0, ±1.0, and ±0.5 mm in opposite directions and systematic leaf shifts of ±1.0 mm in the same direction) using MATLAB-based (MathWorks, Natick, MA) in-house software. The generated modified and non-modified log files were imported back into the treatment planning system and recalculated. Subsequently, the generalized equivalent uniform dose (gEUD) was quantified for the planning target volume (PTV) and the organs at risk. For MLC leaves calibrated within ±0.5 mm, the residual dose estimation errors, obtained from the slope of the linear regression of gEUD changes between non-modified and modified log-file doses per unit leaf-gap error, were 1.32±0.27% and 0.82±0.17 Gy for the PTV and spinal cord, respectively, in head-and-neck plans, and 1.22±0.36%, 0.95±0.14 Gy, and 0.45±0.08 Gy for the PTV, rectum, and bladder, respectively, in prostate plans. In this work, we determined the residual dose estimation errors for VMAT delivery using log file-based patient dose calculation as a function of MLC calibration accuracy. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
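
    The slope-based definition of the residual dose estimation error can be reproduced with a one-line regression. The arrays below are invented placeholder values, not the study data; only the procedure (gEUD change regressed on induced leaf-gap error) is being illustrated.

        import numpy as np

        # Hypothetical per-plan data: induced leaf-gap errors (mm) and the resulting
        # change in PTV gEUD (%) between modified and non-modified log-file doses.
        leaf_gap_error_mm = np.array([-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0])
        delta_geud_pct    = np.array([-2.7, -1.3, -0.7, 0.0, 0.6, 1.4, 2.6])

        slope, intercept = np.polyfit(leaf_gap_error_mm, delta_geud_pct, deg=1)
        print(f"residual dose estimation error: {slope:.2f} % per mm of leaf-gap error")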

  12. A probabilistic approach to remote compositional analysis of planetary surfaces

    USGS Publications Warehouse

    Lapotre, Mathieu G.A.; Ehlmann, Bethany L.; Minson, Sarah E.

    2017-01-01

    Reflected light from planetary surfaces provides information, including mineral/ice compositions and grain sizes, by study of albedo and absorption features as a function of wavelength. However, deconvolving the compositional signal in spectra is complicated by the nonuniqueness of the inverse problem. Trade-offs between mineral abundances and grain sizes in setting reflectance, instrument noise, and systematic errors in the forward model are potential sources of uncertainty, which are often unquantified. Here we adopt a Bayesian implementation of the Hapke model to determine sets of acceptable-fit mineral assemblages, as opposed to single best fit solutions. We quantify errors and uncertainties in mineral abundances and grain sizes that arise from instrument noise, compositional end members, optical constants, and systematic forward model errors for two suites of ternary mixtures (olivine-enstatite-anorthite and olivine-nontronite-basaltic glass) in a series of six experiments in the visible-shortwave infrared (VSWIR) wavelength range. We show that grain sizes are generally poorly constrained from VSWIR spectroscopy. Abundance and grain size trade-offs lead to typical abundance errors of ≤1 wt % (occasionally up to ~5 wt %), while ~3% noise in the data increases errors by up to ~2 wt %. Systematic errors further increase inaccuracies by a factor of 4. Finally, phases with low spectral contrast or inaccurate optical constants can further increase errors. Overall, typical errors in abundance are <10%, but sometimes significantly increase for specific mixtures, prone to abundance/grain-size trade-offs that lead to high unmixing uncertainties. These results highlight the need for probabilistic approaches to remote determination of planetary surface composition.

  13. Exploring cosmic origins with CORE: Mitigation of systematic effects

    NASA Astrophysics Data System (ADS)

    Natoli, P.; Ashdown, M.; Banerji, R.; Borrill, J.; Buzzelli, A.; de Gasperis, G.; Delabrouille, J.; Hivon, E.; Molinari, D.; Patanchon, G.; Polastri, L.; Tomasi, M.; Bouchet, F. R.; Henrot-Versillé, S.; Hoang, D. T.; Keskitalo, R.; Kiiveri, K.; Kisner, T.; Lindholm, V.; McCarthy, D.; Piacentini, F.; Perdereau, O.; Polenta, G.; Tristram, M.; Achucarro, A.; Ade, P.; Allison, R.; Baccigalupi, C.; Ballardini, M.; Banday, A. J.; Bartlett, J.; Bartolo, N.; Basak, S.; Baumann, D.; Bersanelli, M.; Bonaldi, A.; Bonato, M.; Boulanger, F.; Brinckmann, T.; Bucher, M.; Burigana, C.; Cai, Z.-Y.; Calvo, M.; Carvalho, C.-S.; Castellano, M. G.; Challinor, A.; Chluba, J.; Clesse, S.; Colantoni, I.; Coppolecchia, A.; Crook, M.; D'Alessandro, G.; de Bernardis, P.; De Zotti, G.; Di Valentino, E.; Diego, J.-M.; Errard, J.; Feeney, S.; Fernandez-Cobos, R.; Finelli, F.; Forastieri, F.; Galli, S.; Genova-Santos, R.; Gerbino, M.; González-Nuevo, J.; Grandis, S.; Greenslade, J.; Gruppuso, A.; Hagstotz, S.; Hanany, S.; Handley, W.; Hernandez-Monteagudo, C.; Hervías-Caimapo, C.; Hills, M.; Keihänen, E.; Kitching, T.; Kunz, M.; Kurki-Suonio, H.; Lamagna, L.; Lasenby, A.; Lattanzi, M.; Lesgourgues, J.; Lewis, A.; Liguori, M.; López-Caniego, M.; Luzzi, G.; Maffei, B.; Mandolesi, N.; Martinez-González, E.; Martins, C. J. A. P.; Masi, S.; Matarrese, S.; Melchiorri, A.; Melin, J.-B.; Migliaccio, M.; Monfardini, A.; Negrello, M.; Notari, A.; Pagano, L.; Paiella, A.; Paoletti, D.; Piat, M.; Pisano, G.; Pollo, A.; Poulin, V.; Quartin, M.; Remazeilles, M.; Roman, M.; Rossi, G.; Rubino-Martin, J.-A.; Salvati, L.; Signorelli, G.; Tartari, A.; Tramonte, D.; Trappe, N.; Trombetti, T.; Tucker, C.; Valiviita, J.; Van de Weijgaert, R.; van Tent, B.; Vennin, V.; Vielva, P.; Vittorio, N.; Wallis, C.; Young, K.; Zannoni, M.

    2018-04-01

    We present an analysis of the main systematic effects that could impact the measurement of CMB polarization with the proposed CORE space mission. We employ timeline-to-map simulations to verify that the CORE instrumental set-up and scanning strategy allow us to measure sky polarization to a level of accuracy adequate to the mission science goals. We also show how the CORE observations can be processed to mitigate the level of contamination by potentially worrying systematics, including intensity-to-polarization leakage due to bandpass mismatch, asymmetric main beams, pointing errors and correlated noise. We use analysis techniques that are well validated on data from current missions such as Planck to demonstrate how the residual contamination of the measurements by these effects can be brought to a level low enough not to hamper the scientific capability of the mission, nor significantly increase the overall error budget. We also present a prototype of the CORE photometric calibration pipeline, based on that used for Planck, and discuss its robustness to systematics, showing how CORE can achieve its calibration requirements. While a fine-grained assessment of the impact of systematics requires a level of knowledge of the system that can only be achieved in a future study phase, the analysis presented here strongly suggests that the main areas of concern for the CORE mission can be addressed using existing knowledge, techniques and algorithms.

  14. Unrealized potential and residual consequences of electronic prescribing on pharmacy workflow in the outpatient pharmacy.

    PubMed

    Nanji, Karen C; Rothschild, Jeffrey M; Boehne, Jennifer J; Keohane, Carol A; Ash, Joan S; Poon, Eric G

    2014-01-01

    Electronic prescribing systems have often been promoted as a tool for reducing medication errors and adverse drug events. Recent evidence has revealed that adoption of electronic prescribing systems can lead to unintended consequences such as the introduction of new errors. The purpose of this study is to identify and characterize the unrealized potential and residual consequences of electronic prescribing on pharmacy workflow in an outpatient pharmacy. A multidisciplinary team conducted direct observations of workflow in an independent pharmacy and semi-structured interviews with pharmacy staff members about their perceptions of the unrealized potential and residual consequences of electronic prescribing systems. We used qualitative methods to iteratively analyze text data using a grounded theory approach, and derive a list of major themes and subthemes related to the unrealized potential and residual consequences of electronic prescribing. We identified the following five themes: Communication, workflow disruption, cost, technology, and opportunity for new errors. These contained 26 unique subthemes representing different facets of our observations and the pharmacy staff's perceptions of the unrealized potential and residual consequences of electronic prescribing. We offer targeted solutions to improve electronic prescribing systems by addressing the unrealized potential and residual consequences that we identified. These recommendations may be applied not only to improve staff perceptions of electronic prescribing systems but also to improve the design and/or selection of these systems in order to optimize communication and workflow within pharmacies while minimizing both cost and the potential for the introduction of new errors.

  15. Maintaining data integrity in a rural clinical trial.

    PubMed

    Van den Broeck, Jan; Mackay, Melanie; Mpontshane, Nontobeko; Kany Kany Luabeya, Angelique; Chhagan, Meera; Bennish, Michael L

    2007-01-01

    Clinical trials conducted in rural resource-poor settings face special challenges in ensuring the quality of data collection and handling. The variable nature of these challenges, the ways to overcome them, and the resulting data quality are rarely reported in the literature. Our aims were to provide a detailed example of establishing local data-handling capacity for a clinical trial conducted in a rural area, to highlight challenges and solutions in establishing such capacity, and to report the data quality obtained by the trial. We provide a descriptive case study of a data system for biological samples and questionnaire data, and of the problems encountered during its implementation. To determine the quality of the data, we analyzed test-retest studies using kappa statistics of inter- and intra-observer agreement on categorical data, calculated technical errors of measurement for anthropometric measurements, performed audit-trail analysis to assess error correction rates, and calculated residual error rates by database-to-source-document comparison. Initial difficulties included the unavailability of experienced research nurses, programmers and data managers in this rural area and the difficulty of designing new software tools and a complex database while keeping them error-free. National and international collaboration and external monitoring helped ensure good data handling and implementation of good clinical practice. Data collection, fieldwork supervision and query handling depended on streamlined transport over large distances. The involvement of a community advisory board was helpful in addressing cultural issues and establishing community acceptability of data collection methods. Data accessibility for safety monitoring required special attention. Kappa values and technical errors of measurement were acceptable, and residual error rates in key variables were low. The article describes the experience of a single-site trial and does not address challenges particular to multi-site trials. Obtaining and maintaining data integrity in rural clinical trials is feasible, can result in acceptable data quality, and can be used to develop capacity in developing-country sites. It does, however, involve special challenges and requirements.
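
    The technical error of measurement (TEM) mentioned above is, for duplicate measurements, the square root of the sum of squared differences divided by twice the number of pairs. A small sketch with invented example values:

        import math

        def technical_error_of_measurement(first, second):
            """TEM for duplicate measurements: sqrt(sum(d^2) / (2n)). The relative
            TEM (%) is often reported as TEM / grand mean * 100."""
            d2 = [(a - b) ** 2 for a, b in zip(first, second)]
            tem = math.sqrt(sum(d2) / (2 * len(d2)))
            grand_mean = sum(first + second) / (2 * len(first))
            return tem, 100.0 * tem / grand_mean

        # Example: duplicate height measurements (cm) on five children
        print(technical_error_of_measurement([92.1, 97.4, 88.0, 101.2, 95.5],
                                             [92.4, 97.1, 88.5, 101.0, 95.9]))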

  16. Patient motion effects on the quantification of regional myocardial blood flow with dynamic PET imaging.

    PubMed

    Hunter, Chad R R N; Klein, Ran; Beanlands, Rob S; deKemp, Robert A

    2016-04-01

    Patient motion is a common problem during dynamic positron emission tomography (PET) scans for quantification of myocardial blood flow (MBF). The purpose of this study was to quantify the prevalence of body motion in a clinical setting and evaluate with realistic phantoms the effects of motion on blood flow quantification, including CT attenuation correction (CTAC) artifacts that result from PET-CT misalignment. A cohort of 236 sequential patients was analyzed for patient motion under resting and peak stress conditions by two independent observers. The presence of motion, affected time-frames, and direction of motion was recorded; discrepancy between observers was resolved by consensus review. Based on these results, patient body motion effects on MBF quantification were characterized using the digital NURBS-based cardiac-torso phantom, with characteristic time activity curves (TACs) assigned to the heart wall (myocardium) and blood regions. Simulated projection data were corrected for attenuation and reconstructed using filtered back-projection. All simulations were performed without noise added, and a single CT image was used for attenuation correction and aligned to the early- or late-frame PET images. In the patient cohort, mild motion of 0.5 ± 0.1 cm occurred in 24% and moderate motion of 1.0 ± 0.3 cm occurred in 38% of patients. Motion in the superior/inferior direction accounted for 45% of all detected motion, with 30% in the superior direction. Anterior/posterior motion was predominant (29%) in the posterior direction. Left/right motion occurred in 24% of cases, with similar proportions in the left and right directions. Computer simulation studies indicated that errors in MBF can approach 500% for scans with severe patient motion (up to 2 cm). The largest errors occurred when the heart wall was shifted left toward the adjacent lung region, resulting in a severe undercorrection for attenuation of the heart wall. Simulations also indicated that the magnitude of MBF errors resulting from motion in the superior/inferior and anterior/posterior directions was similar (up to 250%). Body motion effects were more detrimental for higher resolution PET imaging (2 vs 10 mm full-width at half-maximum), and for motion occurring during the mid-to-late time-frames. Motion correction of the reconstructed dynamic image series resulted in significant reduction in MBF errors, but did not account for the residual PET-CTAC misalignment artifacts. MBF bias was reduced further using global partial-volume correction, and using dynamic alignment of the PET projection data to the CT scan for accurate attenuation correction during image reconstruction. Patient body motion can produce MBF estimation errors up to 500%. To reduce these errors, new motion correction algorithms must be effective in identifying motion in the left/right direction, and in the mid-to-late time-frames, since these conditions produce the largest errors in MBF, particularly for high resolution PET imaging. Ideally, motion correction should be done before or during image reconstruction to eliminate PET-CTAC misalignment artifacts.

  17. Predicting Real-Valued Protein Residue Fluctuation Using FlexPred.

    PubMed

    Peterson, Lenna; Jamroz, Michal; Kolinski, Andrzej; Kihara, Daisuke

    2017-01-01

    The conventional view of a protein structure as static provides only a limited picture. There is increasing evidence that protein dynamics are often vital to protein function including interaction with partners such as other proteins, nucleic acids, and small molecules. Considering flexibility is also important in applications such as computational protein docking and protein design. While residue flexibility is partially indicated by experimental measures such as the B-factor from X-ray crystallography and ensemble fluctuation from nuclear magnetic resonance (NMR) spectroscopy as well as computational molecular dynamics (MD) simulation, these techniques are resource-intensive. In this chapter, we describe the web server and stand-alone version of FlexPred, which rapidly predicts absolute per-residue fluctuation from a three-dimensional protein structure. On a set of 592 nonredundant structures, comparing the fluctuations predicted by FlexPred to the observed fluctuations in MD simulations showed an average correlation coefficient of 0.669 and an average root mean square error of 1.07 Å. FlexPred is available at http://kiharalab.org/flexPred/ .

  18. Extending Moore's Law via Computationally Error Tolerant Computing.

    DOE PAGES

    Deng, Bobin; Srikanth, Sriseshan; Hein, Eric R.; ...

    2018-03-01

    Dennard scaling has ended. Lowering the voltage supply (Vdd) to sub-volt levels causes intermittent losses in signal integrity, rendering further scaling (down) no longer acceptable as a means to lower the power required by a processor core. However, it is possible to correct the occasional errors caused by lower Vdd in an efficient manner and effectively lower power. By deploying the right amount and kind of redundancy, we can strike a balance between the overhead incurred in achieving reliability and the energy savings realized by permitting lower Vdd. One promising approach is the Redundant Residue Number System (RRNS) representation. Unlike other error-correcting codes, RRNS has the important property of being closed under addition, subtraction and multiplication, thus enabling computational error correction at a fraction of the overhead of conventional approaches. We use the RRNS scheme to design a Computationally-Redundant, Energy-Efficient core, including the microarchitecture, Instruction Set Architecture (ISA) and RRNS-centered algorithms. Simulation results show that this RRNS system can reduce the energy-delay product by about 3× for multiplication-intensive workloads and by about 2× in general, when compared to a non-error-correcting binary core.
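
    As a toy illustration of the closure property exploited above, the sketch below carries an addition through independent residue channels (with two redundant moduli) and flags a corrupted channel when the Chinese-remainder reconstruction falls outside the legal range. The moduli, range and function names are our own and are unrelated to the core design described in the paper.

        from math import prod

        MODULI      = [5, 7, 8, 9]     # non-redundant, pairwise coprime -> legal range 2520
        REDUNDANT   = [11, 13]         # redundant residues used only for checking
        ALL         = MODULI + REDUNDANT
        LEGAL_RANGE = prod(MODULI)     # 2520

        def encode(x):
            return [x % m for m in ALL]

        def crt(residues, moduli):
            """Chinese-remainder reconstruction (Python 3.8+ for pow(x, -1, m))."""
            M = prod(moduli)
            x = 0
            for r, m in zip(residues, moduli):
                Mi = M // m
                x += r * Mi * pow(Mi, -1, m)
            return x % M

        def add(a, b):
            """Residue-wise addition: each channel works independently, no carries."""
            return [(ra + rb) % m for ra, rb, m in zip(a, b, ALL)]

        def check(residues):
            """A corrupted residue channel pushes the reconstruction outside the
            legal range, so the fault is detected."""
            value = crt(residues, ALL)
            return value if value < LEGAL_RANGE else None   # None signals an error

        x, y = encode(1234), encode(987)
        s = add(x, y)
        assert check(s) == 1234 + 987
        s[2] ^= 1                      # flip a bit in one residue channel
        assert check(s) is None        # corruption is detected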

  19. Extending Moore's Law via Computationally Error Tolerant Computing.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deng, Bobin; Srikanth, Sriseshan; Hein, Eric R.

    Dennard scaling has ended. Lowering the voltage supply (Vdd) to sub-volt levels causes intermittent losses in signal integrity, rendering further scaling (down) no longer acceptable as a means to lower the power required by a processor core. However, it is possible to correct the occasional errors caused by lower Vdd in an efficient manner and effectively lower power. By deploying the right amount and kind of redundancy, we can strike a balance between the overhead incurred in achieving reliability and the energy savings realized by permitting lower Vdd. One promising approach is the Redundant Residue Number System (RRNS) representation. Unlike other error-correcting codes, RRNS has the important property of being closed under addition, subtraction and multiplication, thus enabling computational error correction at a fraction of the overhead of conventional approaches. We use the RRNS scheme to design a Computationally-Redundant, Energy-Efficient core, including the microarchitecture, Instruction Set Architecture (ISA) and RRNS-centered algorithms. Simulation results show that this RRNS system can reduce the energy-delay product by about 3× for multiplication-intensive workloads and by about 2× in general, when compared to a non-error-correcting binary core.

  20. Research notes : monitoring water quality along highways.

    DOT National Transportation Integrated Search

    2006-12-01

    Runoff from highways typically picks up a variety of pollutants from the roadway. These pollutants include sediment, trash, residue from petroleum products, and heavy metals. Depending on the highway and its geographic setting, highway runoff can eva...

  1. Coupled molecular dynamics and continuum electrostatic method to compute the ionization pKa's of proteins as a function of pH. Test on a large set of proteins.

    PubMed

    Vorobjev, Yury N; Scheraga, Harold A; Vila, Jorge A

    2018-02-01

    A computational method to predict the pKa values of the ionizable residues Asp, Glu, His, Tyr, and Lys of proteins is presented here. Calculation of the electrostatic free energy of the proteins is based on an efficient version of a continuum dielectric electrostatic model. The conformational flexibility of the protein is taken into account by carrying out molecular dynamics simulations of 10 ns in implicit water. The accuracy of the proposed method of calculating pKa values is estimated from a test set of experimental pKa data for 297 ionizable residues from 34 proteins. The pKa-prediction test shows that, on average, 57, 86, and 95% of all predictions have an error lower than 0.5, 1.0, and 1.5 pKa units, respectively. This work contributes to our general understanding of the importance of protein flexibility for an accurate computation of pKa, provides critical insight into the significance of the multiple neutral states of acid and histidine residues for pKa prediction, and may spur significant progress in the effort to develop a fast and accurate electrostatics-based method for pKa prediction of proteins as a function of pH.

  2. Fast quantifying collision strength index of ethylene-vinyl acetate copolymer coverings on the fields based on near infrared hyperspectral imaging techniques

    PubMed Central

    Chen, Y. M.; Lin, P.; He, Y.; He, J. Q.; Zhang, J.; Li, X. L.

    2016-01-01

    A novel strategy based on near-infrared hyperspectral imaging and chemometrics was explored for rapidly quantifying the collision strength index of ethylene-vinyl acetate copolymer (EVAC) coverings on the fields. Reflectance spectra of the EVAC coverings were obtained with a near-infrared hyperspectral instrument, and collision analysis equipment was employed to measure the collision intensity of the EVAC materials. Preprocessing algorithms were applied before calibration. The random frog and successive projections (SP) algorithms were used to extract the fingerprint wavebands. A correlation model between the informative spectral bands, which reflect the cross-linking of the constituent organic molecules, and the degree of collision strength was built using support vector machine regression (SVMR). The SP-SVMR model attained a residual predictive deviation of 3.074, squared correlation coefficients of 93.48% and 93.05%, and root mean square errors of 1.963 and 2.091 for the calibration and validation sets, respectively, exhibiting the best predictive performance. The results indicated that integrating near-infrared hyperspectral imaging with chemometrics can be used to rapidly determine the degree of collision strength of EVAC. PMID:26875544
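
    A minimal sketch of an SVMR-style calibration with the reported figures of merit (RPD, R^2, RMSE), using synthetic stand-in data rather than the SP-selected NIR wavebands of the study, might look like this; all names, hyperparameters and data are illustrative.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVR
        from sklearn.metrics import mean_squared_error, r2_score

        # Synthetic stand-in for selected fingerprint wavebands (X) and the measured
        # collision strength index (y)
        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 8))
        y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=120)

        X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
        model = SVR(kernel="rbf", C=10.0, gamma="scale").fit(X_cal, y_cal)
        y_pred = model.predict(X_val)

        rmse = np.sqrt(mean_squared_error(y_val, y_pred))
        rpd = np.std(y_val, ddof=1) / rmse          # residual predictive deviation
        print(f"R2={r2_score(y_val, y_pred):.3f}  RMSEP={rmse:.3f}  RPD={rpd:.2f}")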

  3. Free Energy Perturbation Calculation of Relative Binding Free Energy between Broadly Neutralizing Antibodies and the gp120 Glycoprotein of HIV-1.

    PubMed

    Clark, Anthony J; Gindin, Tatyana; Zhang, Baoshan; Wang, Lingle; Abel, Robert; Murret, Colleen S; Xu, Fang; Bao, Amy; Lu, Nina J; Zhou, Tongqing; Kwong, Peter D; Shapiro, Lawrence; Honig, Barry; Friesner, Richard A

    2017-04-07

    Direct calculation of relative binding affinities between antibodies and antigens is a long-sought goal. However, despite substantial efforts, no generally applicable computational method has been described. Here, we describe a systematic free energy perturbation (FEP) protocol and calculate the binding affinities between the gp120 envelope glycoprotein of HIV-1 and three broadly neutralizing antibodies (bNAbs) of the VRC01 class. The protocol has been adapted from successful studies of small molecules to address the challenges associated with modeling protein-protein interactions. Specifically, we built homology models of the three antibody-gp120 complexes, extended the sampling times for large bulky residues, incorporated the modeling of glycans on the surface of gp120, and utilized continuum solvent-based loop prediction protocols to improve sampling. We present three experimental surface plasmon resonance data sets, in which antibody residues in the antibody/gp120 interface were systematically mutated to alanine. The RMS error in the large set (55 total cases) of FEP tests as compared to these experiments, 0.68 kcal/mol, is near experimental accuracy, and it compares favorably with the results obtained from a simpler, empirical methodology. The correlation coefficient for the combined data set including residues with glycan contacts, R^2 = 0.49, should be sufficient to guide the choice of residues for antibody optimization projects, assuming that this level of accuracy can be realized in prospective prediction. More generally, these results are encouraging with regard to the possibility of using an FEP approach to calculate the magnitude of protein-protein binding affinities. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  4. Concurrent error detecting codes for arithmetic processors

    NASA Technical Reports Server (NTRS)

    Lim, R. S.

    1979-01-01

    A method of concurrent error detection for arithmetic processors is described. Low-cost residue codes with check length l and check base m = 2^l - 1 are described for checking the arithmetic operations of addition, subtraction, multiplication, division, complement, shift, and rotate. Of the three number representations, the signed-magnitude representation is preferred for residue checking. Two methods of residue generation are described: the standard method of using modulo-m adders and the method of using a self-testing residue tree. A simple single-bit parity-check code is described for checking the logical operations of XOR, OR, and AND, and also the arithmetic operations of complement, shift, and rotate. For checking complement, shift, and rotate, the single-bit parity-check code is simpler to implement than the residue codes.
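
    The single-checkbase scheme (m = 2^l - 1) amounts to verifying a congruence that an error-free arithmetic unit must satisfy. Below is a toy sketch of the checking principle only, not of the hardware described in the report; names and the checked operation are illustrative.

        L = 4
        M = (1 << L) - 1               # checkbase m = 2**l - 1 (here 15)

        def residue(x):
            return x % M

        def checked_add(a, b):
            """Concurrent error detection: the residue of the result must equal the
            (modular) sum of the operand residues, computed independently by a
            small residue checker."""
            result = a + b                                  # main arithmetic unit
            predicted = (residue(a) + residue(b)) % M       # residue checker
            if residue(result) != predicted:
                raise RuntimeError("arithmetic error detected")
            return result

        print(checked_add(1234, 5678))   # passes the check when the adder is fault-free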

  5. Topography-modified refraction (TMR): adjustment of treated cylinder amount and axis to the topography versus standard clinical refraction in myopic topography-guided LASIK

    PubMed Central

    Kanellopoulos, Anastasios John

    2016-01-01

    Purpose: To evaluate the safety and efficacy of topography-guided myopic LASIK with two different refraction treatment strategies, compared between fellow eyes. Setting: Private clinical ophthalmology practice. Patients and methods: A total of 100 eyes (50 patients) of consecutive myopic topography-guided LASIK procedures performed on the same refractive platform (FS200 femtosecond and EX500 excimer lasers) were randomized for treatment as follows: one eye with the standard clinical refraction (group A) and the contralateral eye with the topographic astigmatic power and axis (topography-modified refraction; group B). All cases were evaluated pre- and post-operatively for refractive error, best corrected distance visual acuity (CDVA), uncorrected distance visual acuity (UDVA), topography (Placido-disk based) and tomography (Scheimpflug-image based), wavefront analysis, pupillometry, and contrast sensitivity. Follow-up visits were conducted for at least 12 months. Results: Mean refractive error was −5.5 D of myopia and −1.75 D of astigmatism. In group A versus group B, respectively, average UDVA improved from 20/200 to 20/20 versus 20/16; post-operative CDVA was 20/20 versus 20/13.5; one line of vision was gained in 27.8% versus 55.6% of eyes; and two lines of vision were gained in 5.6% versus 11.1%. In group A, 27.8% of eyes had more than −0.50 diopters of residual refractive astigmatism, compared with 11.7% in group B (P<0.01). Conclusion: Topography-modified refraction (TMR), that is, topographic adjustment of the amount and axis of astigmatism treated when these differ from the clinical refraction, may offer superior outcomes in topography-guided myopic LASIK. These findings may change the current clinical paradigm of the optimal subjective refraction utilized in laser vision correction. PMID:27843292

  6. Mobile terrestrial light detection and ranging (T-LiDAR) survey of areas on Dauphin Island, Alabama, in the aftermath of Hurricane Isaac, 2012

    USGS Publications Warehouse

    Kimbrow, Dustin R.

    2014-01-01

    Topographic survey data of areas on Dauphin Island on the Alabama coast were collected using a truck-mounted mobile terrestrial light detection and ranging system. This system is composed of a high frequency laser scanner in conjunction with an inertial measurement unit and a position and orientation computer to produce highly accurate topographic datasets. A global positioning system base station was set up on a nearby benchmark and logged vertical and horizontal position information during the survey for post-processing. Survey control points were also collected throughout the study area to determine residual errors. Data were collected 5 days after Hurricane Isaac made landfall in early September 2012 to document sediment deposits prior to clean-up efforts. Three data files in ASCII text format with the extension .xyz are included in this report, and each file is named according to both the acquisition date and the relative geographic location on Dauphin Island (for example, 20120903_Central.xyz). Metadata are also included for each of the files in both Extensible Markup Language with the extension .xml and ASCII text formats. These topographic data can be used to analyze the effects of storm surge on barrier island environments and also serve as a baseline dataset for future change detection analyses.

  7. Mechanical Ventilation-Related Safety Incidents in General Care Wards and ICU Settings.

    PubMed

    Kamio, Tadashi; Masamune, Ken

    2018-05-29

    Although the ICU is the most appropriate place to care for mechanically ventilated patients, a considerable number are ventilated in general medical care wards all over the world. However, adverse events involving mechanically ventilated patients in general care wards have not been well explored. Data from the Japan Council for Quality Health Care database were analyzed. Patient safety incidents related to mechanical ventilation from January 2010 to November 2017 were collected, and comparisons of patient safety incidents between ICUs/high care units (HCUs) and general care wards were made. We identified 261 adverse events (at least 20 of which resulted in death) and 702 near-miss events related to mechanical ventilation in Japan between 2010 and 2017. Furthermore, among all adverse events, 19% (49 of 261 events) caused serious harm (residual disability or death). Human-factor issues were the most frequent category in both the ICU/HCU and general care settings (55% and 53%, respectively), while knowledge-based error rates were higher in the general care setting. Our results suggest that proper education and training are needed to minimize patient safety incidents in facilities without respiratory therapists. Copyright © 2018 by Daedalus Enterprises.

  8. Iterative Overlap FDE for Multicode DS-CDMA

    NASA Astrophysics Data System (ADS)

    Takeda, Kazuaki; Tomeba, Hiromichi; Adachi, Fumiyuki

    Recently, a new frequency-domain equalization (FDE) technique, called overlap FDE, which requires no guard interval (GI) insertion, was proposed. However, the residual inter/intra-block interference (IBI) cannot be completely removed. In addition, for multicode direct sequence code division multiple access (DS-CDMA), the presence of residual inter-chip interference (ICI) after FDE distorts the orthogonality among the spreading codes. In this paper, we propose an iterative overlap FDE for multicode DS-CDMA to suppress both the residual IBI and the residual ICI. In the iterative overlap FDE, joint minimum mean square error (MMSE) FDE and ICI cancellation is repeated a sufficient number of times. The bit error rate (BER) performance with the iterative overlap FDE is evaluated by computer simulation.
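
    The MMSE-FDE step referred to above can be illustrated with a generic one-tap frequency-domain equalizer; the paper's contribution is to repeat this jointly with ICI cancellation inside an overlap (no-GI) block structure, which is not reproduced here. All parameters and names below are illustrative.

        import numpy as np

        def mmse_fde(received_block, channel_freq_response, es_over_n0):
            """Generic one-tap MMSE frequency-domain equalizer (illustrative sketch)."""
            R = np.fft.fft(received_block)
            H = channel_freq_response                              # per-frequency channel gains
            W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / es_over_n0)   # MMSE weights
            return np.fft.ifft(W * R)

        # Example: 64-point block through a 4-tap random channel at Es/N0 = 100 (20 dB)
        rng = np.random.default_rng(0)
        x = rng.choice([-1.0, 1.0], size=64)                       # BPSK chips
        h = (rng.normal(size=4) + 1j * rng.normal(size=4)) / np.sqrt(8)
        H = np.fft.fft(h, 64)
        r = np.fft.ifft(H * np.fft.fft(x))                         # circular-convolution channel, noiseless
        x_hat = mmse_fde(r, H, es_over_n0=100.0)
        print(np.mean(np.sign(x_hat.real) == x))                   # fraction of chips recovered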

  9. Estimation of daily interfractional larynx residual setup error after isocentric alignment for head and neck radiotherapy: quality assurance implications for target volume and organs‐at‐risk margination using daily CT on‐rails imaging

    PubMed Central

    Baron, Charles A.; Awan, Musaddiq J.; Mohamed, Abdallah S.R.; Akel, Imad; Rosenthal, David I.; Gunn, G. Brandon; Garden, Adam S.; Dyer, Brandon A.; Court, Laurence; Sevak, Parag R.; Kocak‐Uzel, Esengul

    2014-01-01

    Larynx may alternatively serve as a target or organs at risk (OAR) in head and neck cancer (HNC) image‐guided radiotherapy (IGRT). The objective of this study was to estimate IGRT parameters required for larynx positional error independent of isocentric alignment and suggest population‐based compensatory margins. Ten HNC patients receiving radiotherapy (RT) with daily CT on‐rails imaging were assessed. Seven landmark points were placed on each daily scan. Taking the most superior‐anterior point of the C5 vertebra as a reference isocenter for each scan, residual displacement vectors to the other six points were calculated postisocentric alignment. Subsequently, using the first scan as a reference, the magnitude of vector differences for all six points for all scans over the course of treatment was calculated. Residual systematic and random error and the necessary compensatory CTV‐to‐PTV and OAR‐to‐PRV margins were calculated, using both observational cohort data and a bootstrap‐resampled population estimator. The grand mean displacements for all anatomical points was 5.07 mm, with mean systematic error of 1.1 mm and mean random setup error of 2.63 mm, while bootstrapped POIs grand mean displacement was 5.09 mm, with mean systematic error of 1.23 mm and mean random setup error of 2.61 mm. Required margin for CTV‐PTV expansion was 4.6 mm for all cohort points, while the bootstrap estimator of the equivalent margin was 4.9 mm. The calculated OAR‐to‐PRV expansion for the observed residual setup error was 2.7 mm and bootstrap estimated expansion of 2.9 mm. We conclude that the interfractional larynx setup error is a significant source of RT setup/delivery error in HNC, both when the larynx is considered as a CTV or OAR. We estimate the need for a uniform expansion of 5 mm to compensate for setup error if the larynx is a target, or 3 mm if the larynx is an OAR, when using a nonlaryngeal bony isocenter. PACS numbers: 87.55.D‐, 87.55.Qr

  10. MABAL: a Novel Deep-Learning Architecture for Machine-Assisted Bone Age Labeling.

    PubMed

    Mutasa, Simukayi; Chang, Peter D; Ruzal-Shapiro, Carrie; Ayyala, Rama

    2018-02-05

    Bone age assessment (BAA) is a commonly performed diagnostic study in pediatric radiology to assess skeletal maturity. The most commonly utilized method of BAA is the Greulich and Pyle atlas (Pediatr Radiol 46.9:1269-1274, 2016; Arch Dis Child 81.2:172-173, 1999). The evaluation of BAA can be a tedious and time-consuming process for the radiologist, and several computer-assisted detection/diagnosis (CAD) methods have therefore been proposed for automation of BAA. Classical CAD tools have traditionally relied on hard-coded algorithmic features for BAA, which suffer from a variety of drawbacks. Recently, convolutional neural networks (CNNs) have shown promise in a variety of medical imaging applications, and there have been at least two published applications of deep learning to the evaluation of bone age (Med Image Anal 36:41-51, 2017; JDI 1-5, 2017). However, current implementations are limited by a combination of architecture design and relatively small datasets. The purpose of this study is to demonstrate the benefits of a customized neural network algorithm carefully calibrated to the evaluation of bone age utilizing a relatively large institutional dataset. In doing so, this study aims to show that advanced architectures can be successfully trained from scratch in the medical imaging domain and can generate results that outperform existing proposed algorithms. The training data consisted of 10,289 images of different skeletal age examinations, 8909 from the hospital Picture Archiving and Communication System at our institution and 1383 from the public Digital Hand Atlas Database. The data were separated into four cohorts, one each for male and female children above the age of 8, and one each for male and female children below the age of 10. The testing set consisted of 20 radiographs for each 1-year age cohort from 0-1 years to 14-15+ years, half male and half female. The testing set included left-hand radiographs done for bone age assessment, trauma evaluation without significant findings, and skeletal surveys. A customized 14-hidden-layer neural network was designed for this study. The network included several state-of-the-art techniques, including residual-style connections, inception layers, and spatial transformer layers. Data augmentation was applied to the network inputs to prevent overfitting. A linear regression output was utilized, mean square error was used as the network loss function, and mean absolute error (MAE) was utilized as the primary performance metric. MAE values on the validation and test sets were 0.654 and 0.561 for young females, 0.662 and 0.497 for older females, 0.649 and 0.585 for young males, and 0.581 and 0.501 for older males, respectively. The female cohorts were trained for 900 epochs each and the male cohorts for 600 epochs. An eightfold cross-validation set was employed for hyperparameter tuning, and test error was obtained after training on the full data set with the selected hyperparameters. Using our proposed customized neural network architecture on our large available data, we achieved aggregate validation and test set mean absolute errors of 0.637 and 0.536, respectively. To date, this is the best published performance for deep learning-based bone age assessment.
Our results support our initial hypothesis that customized, purpose-built neural networks provide improved performance over networks derived from pre-trained imaging data sets. We build on that initial work by showing that the addition of state-of-the-art techniques such as residual connections and inception architecture further improves prediction accuracy. This is important because the current assumption for use of residual and/or inception architectures is that a large pre-trained network is required for successful implementation given the relatively small datasets in medical imaging. Instead we show that a small, customized architecture incorporating advanced CNN strategies can indeed be trained from scratch, yielding significant improvements in algorithm accuracy. It should be noted that for all four cohorts, testing error outperformed validation error. One reason for this is that our ground truth for our test set was obtained by averaging two pediatric radiologist reads compared to our training data for which only a single read was used. This suggests that despite relatively noisy training data, the algorithm could successfully model the variation between observers and generate estimates that are close to the expected ground truth.
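    As an illustration of the architectural ingredients named above (residual-style connections, inception-style parallel convolutions, and a linear regression head trained with an MSE loss and evaluated with MAE), the following minimal sketch combines them in a small network trained from scratch. It is not the authors' 14-layer model; the layer widths, input size, and block arrangement are illustrative assumptions.

        # Minimal sketch of a small regression CNN with residual and inception-style
        # blocks; layer widths and the grayscale input are illustrative assumptions.
        import torch
        import torch.nn as nn

        class InceptionBlock(nn.Module):
            """Parallel 1x1 / 3x3 / 5x5 convolutions concatenated along channels."""
            def __init__(self, c_in, c_branch):
                super().__init__()
                self.b1 = nn.Conv2d(c_in, c_branch, 1, padding=0)
                self.b3 = nn.Conv2d(c_in, c_branch, 3, padding=1)
                self.b5 = nn.Conv2d(c_in, c_branch, 5, padding=2)

            def forward(self, x):
                return torch.relu(torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1))

        class ResidualBlock(nn.Module):
            """Two 3x3 convolutions with an identity skip connection."""
            def __init__(self, c):
                super().__init__()
                self.conv1 = nn.Conv2d(c, c, 3, padding=1)
                self.conv2 = nn.Conv2d(c, c, 3, padding=1)

            def forward(self, x):
                y = torch.relu(self.conv1(x))
                return torch.relu(x + self.conv2(y))

        class BoneAgeNet(nn.Module):
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                    ResidualBlock(16),
                    InceptionBlock(16, 16),            # -> 48 channels
                    nn.Conv2d(48, 32, 3, stride=2, padding=1), nn.ReLU(),
                    ResidualBlock(32),
                    nn.AdaptiveAvgPool2d(1),
                )
                self.head = nn.Linear(32, 1)           # linear regression output (bone age)

            def forward(self, x):
                return self.head(self.features(x).flatten(1))

        model = BoneAgeNet()
        loss_fn = nn.MSELoss()                          # training loss
        mae = lambda pred, y: (pred - y).abs().mean()   # reporting metric

        pred = model(torch.randn(2, 1, 128, 128))       # two dummy radiographs
        loss = loss_fn(pred, torch.tensor([[10.5], [7.0]]))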

  11. Paediatric Refractive Errors in an Eye Clinic in Osogbo, Nigeria.

    PubMed

    Michaeline, Isawumi; Sheriff, Agboola; Bimbo, Ayegoro

    2016-03-01

    Paediatric ophthalmology is an emerging subspecialty in Nigeria and as such there is a paucity of data on refractive errors in the country. This study set out to determine the pattern of refractive errors in children attending an eye clinic in South West Nigeria. A descriptive study of 180 consecutive subjects seen over a 2-year period. Presenting complaints, presenting visual acuity (PVA), age and sex were recorded. Clinical examination of the anterior and posterior segments of the eyes, extraocular muscle assessment and refraction were done. The types of refractive errors and their grades were determined. Corrected VA was obtained. Data were analysed using descriptive statistics (proportions) and chi-square tests, with p values <0.05 considered significant. The age range of subjects was between 3 and 16 years, with mean age = 11.7 (SD = 0.51), and males made up 33.9%. The commonest presenting complaint was blurring of distant vision (40%) and the commonest presenting visual acuity was 6/9 (33.9%); normal vision constituted >75.0%, visual impairment 20% and low vision 23.3%. Low grade spherical and cylindrical errors occurred most frequently (35.6% and 59.9% respectively). Regular astigmatism was significantly more common, P <0.001. The commonest diagnosis was simple myopic astigmatism (41.1%). Four cases of strabismus were seen. Simple spherical and cylindrical errors were the commonest types of refractive errors seen. Visual impairment and low vision occurred and could be a cause of absenteeism from school. A low-cost spectacle production or dispensing unit and health education are advocated for the prevention of visual impairment in a hospital set-up.

  12. Epinephrine Auto-Injector Versus Drawn Up Epinephrine for Anaphylaxis Management: A Scoping Review.

    PubMed

    Chime, Nnenna O; Riese, Victoria G; Scherzer, Daniel J; Perretta, Julianne S; McNamara, LeAnn; Rosen, Michael A; Hunt, Elizabeth A

    2017-08-01

    Anaphylaxis is a life-threatening event. Most clinical symptoms of anaphylaxis can be reversed by prompt intramuscular administration of epinephrine using an auto-injector or epinephrine drawn up in a syringe, and delays and errors may be fatal. The aim of this scoping review is to identify and compare errors associated with use of epinephrine drawn up in a syringe versus epinephrine auto-injectors in order to assist hospitals as they choose which approach minimizes risk of adverse events for their patients. PubMed, Embase, CINAHL, Web of Science, and the Cochrane Library were searched using terms agreed to a priori. We reviewed human and simulation studies reporting errors associated with the use of epinephrine in anaphylaxis. There were multiple screening stages with evolving feedback. Each study was independently assessed by two reviewers for eligibility. Data were extracted using an instrument modeled on that of Zaza et al and grouped into themes. Three main themes were noted: 1) ergonomics, 2) dosing errors, and 3) errors due to route of administration. Significant knowledge gaps in the operation of epinephrine auto-injectors among healthcare providers, patients, and caregivers were identified. For epinephrine in a syringe, there were more frequent reports of incorrect dosing and erroneous IV administration with associated adverse cardiac events. For the epinephrine auto-injector, unintentional administration to the digit was an error reported on multiple occasions. This scoping review highlights knowledge gaps and a diverse set of errors regardless of the approach to epinephrine preparation during management of anaphylaxis. More potentially life-threatening errors were reported for epinephrine drawn up in a syringe than for the auto-injectors. The impact of these knowledge gaps and potentially fatal errors on patient outcomes, cost, and quality of care is worthy of further investigation.

  13. Marker-based method to measure movement between the residual limb and a transtibial prosthetic socket.

    PubMed

    Childers, Walter Lee; Siebert, Steven

    2016-12-01

    Limb movement between the residuum and socket continues to be an underlying factor in limb health, prosthetic comfort, and gait performance yet techniques to measure this have been underdeveloped. Develop a method to measure motion between the residual limb and a transtibial prosthetic socket. Single subject, repeated measures with mathematical modeling. The gait of a participant with transtibial amputation was recorded using a motion capture system using a marker set that included arrays on the anterior distal tibia and the lateral epicondyle of the femur. The proximal or distal translation, anterior or posterior translation, and angular movements were quantified. A random Monte Carlo simulation based on the precision of the motion capture system and a model of the bone moving under the skin explored the technique's accuracy. Residual limb tissue stiffness was modeled as a linear spring based on data from Papaioannou et al. Residuum movement relative to the socket went through ~30 mm, 18 mm, and 15° range of motion. Root mean squared errors were 5.47 mm, 1.86 mm, and 0.75° when considering the modeled bone-skin movement in the proximal or distal, anterior or posterior, and angular directions, respectively. The measured movement was greater than the root mean squared error, indicating that this method can measure motion between the residuum and socket. The ability to quantify movement between the residual limb and the prosthetic socket will improve prosthetic treatment through the evaluation of different prosthetic suspensions, socket designs, and motor control of the prosthetic interface. © The International Society for Prosthetics and Orthotics 2015.
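    The Monte Carlo accuracy check described above can be illustrated with a short sketch: perturb a known displacement with motion-capture noise and a modeled bone-under-skin offset, then compare the root mean squared error of the recovered value against the magnitude of the motion to be measured. The noise magnitudes and displacement below are illustrative assumptions, not the study's values.

        # Illustrative Monte Carlo check of a marker-based displacement measurement:
        # perturb a known translation with capture noise and a skin-motion offset,
        # then report the RMS error of the recovered value. Magnitudes are assumptions.
        import numpy as np

        rng = np.random.default_rng(0)
        n_trials = 10_000
        true_disp_mm = 12.0          # assumed proximal-distal translation to recover
        capture_sd_mm = 0.5          # assumed motion-capture precision (1 sigma)
        skin_offset_sd_mm = 1.5      # assumed bone-under-skin artefact (1 sigma)

        measured = (true_disp_mm
                    + rng.normal(0.0, capture_sd_mm, n_trials)      # system noise
                    + rng.normal(0.0, skin_offset_sd_mm, n_trials)) # soft-tissue artefact

        rmse = np.sqrt(np.mean((measured - true_disp_mm) ** 2))
        print(f"RMS error: {rmse:.2f} mm")   # measurable motion should exceed this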

  14. Tropospheric nitrogen dioxide column retrieval from ground-based zenith-sky DOAS observations

    NASA Astrophysics Data System (ADS)

    Tack, F.; Hendrick, F.; Goutail, F.; Fayt, C.; Merlaud, A.; Pinardi, G.; Hermans, C.; Pommereau, J.-P.; Van Roozendael, M.

    2015-06-01

    We present an algorithm for retrieving tropospheric nitrogen dioxide (NO2) vertical column densities (VCDs) from ground-based zenith-sky (ZS) measurements of scattered sunlight. The method is based on a four-step approach consisting of (1) the differential optical absorption spectroscopy (DOAS) analysis of ZS radiance spectra using a fixed reference spectrum corresponding to low NO2 absorption, (2) the determination of the residual amount in the reference spectrum using a Langley-plot-type method, (3) the removal of the stratospheric content from the daytime total measured slant column based on stratospheric VCDs measured at sunrise and sunset, and simulation of the rapid NO2 diurnal variation, (4) the retrieval of tropospheric VCDs by dividing the resulting tropospheric slant columns by appropriate air mass factors (AMFs). These steps are fully characterized and recommendations are given for each of them. The retrieval algorithm is applied to a ZS data set acquired with a multi-axis (MAX-) DOAS instrument during the Cabauw (51.97° N, 4.93° E, sea level) Intercomparison campaign for Nitrogen Dioxide measuring Instruments (CINDI) held from 10 June to 21 July 2009 in the Netherlands. A median value of 7.9 × 10¹⁵ molec cm⁻² is found for the retrieved tropospheric NO2 VCDs, with maxima up to 6.0 × 10¹⁶ molec cm⁻². The error budget assessment indicates that the overall error σTVCD on the column values is less than 28%. In the case of low tropospheric contribution, σTVCD is estimated to be around 39% and is dominated by uncertainties in the determination of the residual amount in the reference spectrum. For strong tropospheric pollution events, σTVCD drops to approximately 22% with the largest uncertainties on the determination of the stratospheric NO2 abundance and tropospheric AMFs. The tropospheric VCD amounts derived from ZS observations are compared to VCDs retrieved from off-axis and direct-sun measurements of the same MAX-DOAS instrument as well as to data from a co-located Système d'Analyse par Observations Zénithales (SAOZ) spectrometer. The retrieved tropospheric VCDs are in good agreement with the different data sets with correlation coefficients and slopes close to or larger than 0.9. The potential of the presented ZS retrieval algorithm is further demonstrated by its successful application to a 2-year data set, acquired at the NDACC (Network for the Detection of Atmospheric Composition Change) station Observatoire de Haute Provence (OHP; Southern France).
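    Steps (3) and (4) of the retrieval reduce to a short calculation: remove the stratospheric slant column (interpolated stratospheric VCD times a stratospheric AMF) from the total slant column, then divide by the tropospheric AMF. The sketch below illustrates that arithmetic; all numerical values and variable names are illustrative assumptions, not values from the paper.

        # Sketch of steps (3)-(4): remove the stratospheric slant column and convert
        # the tropospheric slant column to a vertical column with an air mass factor.
        # All numerical values are illustrative assumptions.
        scd_meas = 3.5e16      # measured differential slant column (molec cm^-2)
        scd_ref = 4.0e15       # residual amount in the reference spectrum (Langley plot)
        vcd_strat = 3.0e15     # interpolated stratospheric vertical column at this time
        amf_strat = 4.0        # stratospheric air mass factor (zenith-sky geometry)
        amf_trop = 1.5         # tropospheric air mass factor

        scd_total = scd_meas + scd_ref                  # total slant column
        scd_trop = scd_total - vcd_strat * amf_strat    # remove stratospheric content
        vcd_trop = scd_trop / amf_trop                  # tropospheric vertical column
        print(f"tropospheric NO2 VCD ~ {vcd_trop:.2e} molec cm^-2")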

  15. Comparison of community and hospital pharmacists' attitudes and behaviors on medication error disclosure to the patient: A pilot study.

    PubMed

    Kim, ChungYun; Mazan, Jennifer L; Quiñones-Boex, Ana C

    To determine pharmacists' attitudes and behaviors on medication errors and their disclosure and to compare community and hospital pharmacists on such views. An online questionnaire was developed from previous studies on physicians' disclosure of errors. Questionnaire items included demographics, environment, personal experiences, and attitudes on medication errors and the disclosure process. An invitation to participate along with the link to the questionnaire was electronically distributed to members of two Illinois pharmacy associations. A follow-up reminder was sent 4 weeks after the original message. Data were collected for 3 months, and statistical analyses were performed with the use of IBM SPSS version 22.0. The overall response rate was 23.3% (n = 422). The average employed respondent was a 51-year-old white woman with a BS Pharmacy degree working in a hospital pharmacy as a clinical staff member. Regardless of practice settings, pharmacist respondents agreed that medication errors were inevitable and that a disclosure process is necessary. Respondents from community and hospital settings were further analyzed to assess any differences. Community pharmacist respondents were more likely to agree that medication errors were inevitable and that pharmacists should address the patient's emotions when disclosing an error. Community pharmacist respondents were also more likely to agree that the health care professional most closely involved with the error should disclose the error to the patient and thought that it was the pharmacists' responsibility to disclose the error. Hospital pharmacist respondents were more likely to agree that it was important to include all details in a disclosure process and more likely to disagree on putting a "positive spin" on the event. Regardless of practice setting, responding pharmacists generally agreed that errors should be disclosed to patients. There were, however, significant differences in their attitudes and behaviors depending on their particular practice setting. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  16. The interaction of the flux errors and transport errors in modeled atmospheric carbon dioxide concentrations

    NASA Astrophysics Data System (ADS)

    Feng, S.; Lauvaux, T.; Butler, M. P.; Keller, K.; Davis, K. J.; Jacobson, A. R.; Schuh, A. E.; Basu, S.; Liu, J.; Baker, D.; Crowell, S.; Zhou, Y.; Williams, C. A.

    2017-12-01

    Regional estimates of biogenic carbon fluxes over North America from top-down atmospheric inversions and terrestrial biogeochemical (or bottom-up) models remain inconsistent at annual and sub-annual time scales. While top-down estimates are impacted by limited atmospheric data, uncertain prior flux estimates and errors in the atmospheric transport models, bottom-up fluxes are affected by uncertain driver data, uncertain model parameters and missing mechanisms across ecosystems. This study quantifies both flux errors and transport errors, and their interaction in the CO2 atmospheric simulation. These errors are assessed by an ensemble approach. The WRF-Chem model is set up with 17 biospheric fluxes from the Multiscale Synthesis and Terrestrial Model Intercomparison Project, CarbonTracker-Near Real Time, and the Simple Biosphere model. The spread of the flux ensemble members represents the flux uncertainty in the modeled CO2 concentrations. For the transport errors, WRF-Chem is run using three physical model configurations with three stochastic perturbations to sample the errors from both the physical parameterizations of the model and the initial conditions. Additionally, the uncertainties from boundary conditions are assessed using four CO2 global inversion models which have assimilated tower and satellite CO2 observations. The error structures are assessed in time and space. The flux ensemble members overall overestimate CO2 concentrations. They also show larger temporal variability than the observations. These results suggest that the flux ensemble is overdispersive. In contrast, the transport ensemble is underdispersive. The averaged spatial distribution of modeled CO2 shows strong positive biogenic signal in the southern US and strong negative signals along the eastern coast of Canada. We hypothesize that the former is caused by the 3-hourly downscaling algorithm from which the nighttime respiration dominates the daytime modeled CO2 signals and that the latter is mainly caused by the large-scale transport associated with the jet stream that carries the negative biogenic CO2 signals to the northeastern coast. We apply comprehensive statistics to eliminate outliers. We generate a set of flux perturbations based on pre-calibrated flux ensemble members and apply them to the simulations.
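    The over/underdispersion diagnosis mentioned above amounts to comparing the ensemble spread with the error of the ensemble mean against observations. The short sketch below illustrates that comparison on synthetic numbers; the values, including the 17-member ensemble size, are purely for illustration.

        # Simple spread-vs-error dispersion check for an ensemble of modeled CO2
        # concentrations against observations (synthetic numbers, for illustration).
        import numpy as np

        rng = np.random.default_rng(1)
        n_time, n_members = 500, 17
        obs = 400.0 + rng.normal(0.0, 2.0, n_time)                  # "observed" CO2 (ppm)
        ens = obs + rng.normal(0.0, 4.0, (n_members, n_time)) + 1.0 # biased, wide ensemble

        ens_mean = ens.mean(axis=0)
        spread = ens.std(axis=0, ddof=1).mean()                 # mean ensemble spread
        rmse = np.sqrt(np.mean((ens_mean - obs) ** 2))          # ensemble-mean error
        bias = np.mean(ens_mean - obs)

        print(f"bias {bias:+.2f} ppm, RMSE {rmse:.2f} ppm, spread {spread:.2f} ppm")
        # spread >> RMSE suggests an overdispersive ensemble; spread << RMSE, underdispersive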

  17. Effects of errors and gaps in spatial data sets on assessment of conservation progress.

    PubMed

    Visconti, P; Di Marco, M; Álvarez-Romero, J G; Januchowski-Hartley, S R; Pressey, R L; Weeks, R; Rondinini, C

    2013-10-01

    Data on the location and extent of protected areas, ecosystems, and species' distributions are essential for determining gaps in biodiversity protection and identifying future conservation priorities. However, these data sets always come with errors in the maps and associated metadata. Errors are often overlooked in conservation studies, despite their potential negative effects on the reported extent of protection of species and ecosystems. We used 3 case studies to illustrate the implications of 3 sources of errors in reporting progress toward conservation objectives: protected areas with unknown boundaries that are replaced by buffered centroids, propagation of multiple errors in spatial data, and incomplete protected-area data sets. As of 2010, the frequency of protected areas with unknown boundaries in the World Database on Protected Areas (WDPA) caused the estimated extent of protection of 37.1% of the terrestrial Neotropical mammals to be overestimated by an average 402.8% and of 62.6% of species to be underestimated by an average 10.9%. Estimated level of protection of the world's coral reefs was 25% higher when using recent finer-resolution data on coral reefs as opposed to globally available coarse-resolution data. Accounting for additional data sets not yet incorporated into WDPA contributed up to 6.7% of additional protection to marine ecosystems in the Philippines. We suggest ways for data providers to reduce the errors in spatial and ancillary data and ways for data users to mitigate the effects of these errors on biodiversity assessments. © 2013 Society for Conservation Biology.

  18. Correcting the Standard Errors of 2-Stage Residual Inclusion Estimators for Mendelian Randomization Studies

    PubMed Central

    Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A

    2017-01-01

    Abstract Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. PMID:29106476
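    A minimal sketch of a linear two-stage residual inclusion estimator with bootstrap standard errors is given below. The data are simulated and the setup (one genotype instrument, one exposure, one outcome) is an assumption made for illustration; it is not the paper's analysis or its Newey/Terza corrections.

        # Linear two-stage residual inclusion (TSRI) with bootstrap standard errors
        # on simulated data (one genotype instrument, one exposure, one outcome).
        import numpy as np

        rng = np.random.default_rng(2)
        n = 5_000
        g = rng.binomial(2, 0.3, n).astype(float)   # instrument (genotype)
        u = rng.normal(size=n)                      # unmeasured confounder
        x = 0.5 * g + u + rng.normal(size=n)        # exposure
        y = 0.3 * x + u + rng.normal(size=n)        # outcome; true causal effect 0.3

        def tsri(g, x, y):
            # Stage 1: regress exposure on instrument, keep residuals.
            X1 = np.column_stack([np.ones_like(g), g])
            b1 = np.linalg.lstsq(X1, x, rcond=None)[0]
            res = x - X1 @ b1
            # Stage 2: regress outcome on exposure plus first-stage residuals.
            X2 = np.column_stack([np.ones_like(x), x, res])
            b2 = np.linalg.lstsq(X2, y, rcond=None)[0]
            return b2[1]                            # causal effect estimate

        est = tsri(g, x, y)
        boot = []
        for _ in range(500):
            idx = rng.integers(0, n, n)             # resample rows with replacement
            boot.append(tsri(g[idx], x[idx], y[idx]))
        print(f"TSRI estimate {est:.3f}, bootstrap SE {np.std(boot, ddof=1):.3f}")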

  19. Improved prediction of hardwood tree biomass derived from wood density estimates and form factors for whole trees

    Treesearch

    David W. MacFarlane; Neil R. Ver Planck

    2012-01-01

    Data from hardwood trees in Michigan were analyzed to investigate how differences in whole-tree form and wood density between trees of different stem diameter relate to residual error in standard-type biomass equations. The results suggested that whole-tree wood density, measured at breast height, explained a significant proportion of residual error in standard-type...

  20. Three-dimensional analysis of the surface registration accuracy of electromagnetic navigation systems in live endoscopic sinus surgery.

    PubMed

    Chang, C M; Fang, K M; Huang, T W; Wang, C T; Cheng, P W

    2013-12-01

    Studies on the performance of surface registration with electromagnetic tracking systems are lacking in both live surgery and the laboratory setting. This study presents the efficiency in time of the system preparation as well as the navigational accuracy of surface registration using electromagnetic tracking systems. Forty patients with bilateral chronic paranasal pansinusitis underwent endoscopic sinus surgery after undergoing sinus computed tomography scans. The surgeries were performed under electromagnetic navigation guidance after the surface registration had been carried out on all of the patients. The intraoperative measurements indicate the time taken for equipment set-up, surface registration and surgical procedure, as well as the degree of navigation error along 3 axes. The time taken for equipment set-up, surface registration and the surgical procedure was 179 ± 23 seconds, 39 ± 4.8 seconds and 114 ± 36 minutes, respectively. A comparison of the navigation error along the 3 axes showed that the deviation in the medial-lateral direction was significantly less than that in the anterior-posterior and cranial-caudal directions. The procedures of equipment set-up and surface registration in electromagnetic navigation tracking are efficient, convenient and easy to manipulate. The system accuracy is within the acceptable ranges, especially on the medial-lateral axis.

  1. Revision of earthquake hypocenter locations in GEOFON bulletin data using global source-specific station terms technique

    NASA Astrophysics Data System (ADS)

    Nooshiri, N.; Saul, J.; Heimann, S.; Tilmann, F. J.; Dahm, T.

    2015-12-01

    The use of a 1D velocity model for seismic event location is often associated with significant travel-time residuals. Particularly for regional stations in subduction zones, where the velocity structure strongly deviates from the assumed 1D model, residuals of up to ±10 seconds are observed even for clear arrivals, which leads to strongly biased locations. In fact, due to mostly regional travel-time anomalies, arrival times at regional stations do not match the location obtained with teleseismic picks, and vice versa. If the earthquake is weak and only recorded regionally, or if fast locations based on regional stations are needed, the location may be far off the corresponding teleseismic location. In this case, implementation of travel-time corrections may lead to a reduction of the travel-time residuals at regional stations and, in consequence, significantly improve the relative location accuracy. Here, we have extended the source-specific station terms (SSST) technique to regional and teleseismic distances and adopted the algorithm for probabilistic, non-linear, global-search earthquake location. The method has been applied to specific test regions using P and pP phases from the GEOFON bulletin data for all available station networks. By using this method, a set of timing corrections has been calculated for each station varying as a function of source position. In this way, an attempt is made to correct for the systematic errors, introduced by limitations and inaccuracies in the assumed velocity structure, without solving for a new earth model itself. In this presentation, we draw on examples of the application of this global SSST technique to relocate earthquakes from the Tonga-Fiji subduction zone and from the Chilean margin. Our results show a considerable decrease of the root-mean-square (RMS) residual in the final earthquake location catalogs, a major reduction of the median absolute deviation (MAD) of the travel-time residuals at regional stations and sharper images of the seismicity compared to the initial locations.
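    The core of a source-specific station term is a robust average of travel-time residuals from neighbouring events recorded at the same station, applied as a correction that varies with source position. The schematic sketch below illustrates that idea with synthetic residuals at a single station; the data, the neighbourhood radius and the use of a simple median are illustrative assumptions, not the authors' implementation.

        # Schematic source-specific station term (SSST): the correction for a given
        # event/station pair is the median residual of neighbouring events recorded
        # at that station. Data and the neighbourhood radius are illustrative.
        import numpy as np

        rng = np.random.default_rng(3)
        n_events = 200
        ev_xy = rng.uniform(0, 100, (n_events, 2))            # event epicentres (km)
        # travel-time residuals at one station (s): smooth regional anomaly + noise
        resid = 2.0 * np.sin(ev_xy[:, 0] / 20.0) + rng.normal(0, 0.3, n_events)

        def station_term(k, ev_xy, resid, radius_km=15.0):
            """Median residual of events within radius_km of event k (excluding k)."""
            d = np.linalg.norm(ev_xy - ev_xy[k], axis=1)
            mask = (d < radius_km) & (d > 0)
            return np.median(resid[mask]) if mask.any() else 0.0

        corrections = np.array([station_term(k, ev_xy, resid) for k in range(n_events)])
        corrected = resid - corrections

        rms = lambda v: np.sqrt(np.mean(v ** 2))
        print(f"RMS residual before {rms(resid):.2f} s, after {rms(corrected):.2f} s")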

  2. [Detection and classification of medication errors at Joan XXIII University Hospital].

    PubMed

    Jornet Montaña, S; Canadell Vilarrasa, L; Calabuig Muñoz, M; Riera Sendra, G; Vuelta Arce, M; Bardají Ruiz, A; Gallart Mora, M J

    2004-01-01

    Medication errors are multifactorial and multidisciplinary, and may originate in processes such as drug prescription, transcription, dispensation, preparation and administration. The goal of this work was to measure the incidence of detectable medication errors that arise within a unit dose drug distribution and control system, from drug prescription to drug administration, by means of an observational method confined to the Pharmacy Department, as well as a voluntary, anonymous report system. The acceptance of this voluntary report system's implementation was also assessed. A prospective descriptive study was conducted. Data collection was performed at the Pharmacy Department from a review of prescribed medical orders, a review of pharmaceutical transcriptions, a review of dispensed medication and a review of medication returned in unit dose medication carts. A voluntary, anonymous report system centralized in the Pharmacy Department was also set up to detect medication errors. Prescription errors were the most frequent (1.12%), closely followed by dispensation errors (1.04%). Transcription errors (0.42%) and administration errors (0.69%) had the lowest overall incidence. Voluntary report involved only 4.25% of all detected errors, whereas unit dose medication cart review contributed the most to error detection. Recognizing the incidence and types of medication errors that occur in a health-care setting allows us to analyze their causes and effect changes in different stages of the process in order to ensure maximal patient safety.

  3. [Spatial interpolation of soil organic matter using regression Kriging and geographically weighted regression Kriging].

    PubMed

    Yang, Shun-hua; Zhang, Hai-tao; Guo, Long; Ren, Yan

    2015-06-01

    Relative elevation and stream power index were selected as auxiliary variables based on correlation analysis for mapping soil organic matter. Geographically weighted regression Kriging (GWRK) and regression Kriging (RK) were used for spatial interpolation of soil organic matter and compared with ordinary Kriging (OK), which acts as a control. The results indicated that soil organic matter was significantly positively correlated with relative elevation whilst it had a significantly negative correlation with stream power index. Semivariance analysis showed that both soil organic matter content and its residuals (including the ordinary least squares regression residual and the GWR residual) had strong spatial autocorrelation. Interpolation accuracies by different methods were estimated based on a data set of 98 validation samples. Results showed that the mean error (ME), mean absolute error (MAE) and root mean square error (RMSE) of RK were respectively 39.2%, 17.7% and 20.6% lower than the corresponding values of OK, with a relative improvement (RI) of 20.63. GWRK showed a similar tendency, with its ME, MAE and RMSE being respectively 60.6%, 23.7% and 27.6% lower than those of OK, with a RI of 59.79. Therefore, both RK and GWRK significantly improved the accuracy of OK interpolation of soil organic matter due to their incorporation of auxiliary variables. In addition, GWRK performed obviously better than RK did in this study, and its improved performance should be attributed to the consideration of sample spatial locations.
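    The regression kriging workflow amounts to a trend estimated from the auxiliary variables plus a spatial interpolation of the regression residuals, added back together at the prediction location. The miniature sketch below follows that structure on synthetic data; an inverse-distance weighting step stands in for the kriging of residuals, and all values and variable names are assumptions made for illustration.

        # Regression kriging workflow in miniature: trend from auxiliary variables
        # plus spatial interpolation of the residuals. Inverse-distance weighting is
        # used here as a simple stand-in for kriging; data are synthetic.
        import numpy as np

        rng = np.random.default_rng(4)
        n = 150
        xy = rng.uniform(0, 1, (n, 2))                       # sample locations
        rel_elev = rng.normal(size=n)                        # auxiliary variable 1
        spi = rng.normal(size=n)                             # auxiliary variable 2 (stream power index)
        som = 3.0 + 1.2 * rel_elev - 0.8 * spi + np.sin(4 * xy[:, 0]) + rng.normal(0, 0.2, n)

        # 1) Trend: ordinary least squares on the auxiliary variables.
        A = np.column_stack([np.ones(n), rel_elev, spi])
        beta = np.linalg.lstsq(A, som, rcond=None)[0]
        residual = som - A @ beta

        # 2) Spatial interpolation of residuals at a prediction point (IDW stand-in).
        def idw(pt, xy, values, power=2.0):
            d = np.linalg.norm(xy - pt, axis=1) + 1e-9
            w = 1.0 / d ** power
            return np.sum(w * values) / np.sum(w)

        pt = np.array([0.5, 0.5])
        pt_elev, pt_spi = 0.3, -0.1                          # auxiliary values at pt (assumed)
        prediction = (np.array([1.0, pt_elev, pt_spi]) @ beta) + idw(pt, xy, residual)
        print(f"predicted SOM at {pt}: {prediction:.2f}")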

  4. Investigations of interpolation errors of angle encoders for high precision angle metrology

    NASA Astrophysics Data System (ADS)

    Yandayan, Tanfer; Geckeler, Ralf D.; Just, Andreas; Krause, Michael; Asli Akgoz, S.; Aksulu, Murat; Grubert, Bernd; Watanabe, Tsukasa

    2018-06-01

    Interpolation errors at small angular scales are caused by the subdivision of the angular interval between adjacent grating lines into smaller intervals when radial gratings are used in angle encoders. They are often a major error source in precision angle metrology and better approaches for determining them at low levels of uncertainty are needed. Extensive investigations of interpolation errors of different angle encoders with various interpolators and interpolation schemes were carried out by adapting the shearing method to the calibration of autocollimators with angle encoders. Results from the laboratories with advanced angle metrology capabilities are presented, acquired using four different high-precision angle encoder/interpolator/rotary table combinations. State-of-the-art uncertainties down to 1 milliarcsec (5 nrad) were achieved for the determination of the interpolation errors using the shearing method, which provides simultaneous access to the angle deviations of the autocollimator and of the angle encoder. Compared to the calibration and measurement capabilities (CMC) of the participants for autocollimators, the use of the shearing technique represents a substantial improvement in the uncertainty by a factor of up to 5, in addition to the precise determination of interpolation errors or their residuals (when compensated). A discussion of the results is carried out in conjunction with the equipment used.

  5. Residual-based Methods for Controlling Discretization Error in CFD

    DTIC Science & Technology

    2015-08-24

    discrete equations u_h into Equation (3), then subtracting the original (continuous) governing equation L(ũ) = 0, gives a relation between L(u_h) and L(ũ). Inserting the discretization error from Equation (1) results in the truncation-error relation of Equation (4), which for Burgers' equation becomes an expansion in derivative terms of the form (Δx²/6)·u·d³u/dx³ and (Δx²/12)·d⁴u/dx⁴ with remainder O(Δx⁴). Substituting the discrete solution into the GTEE given in Equation (3) gives the continuous residual L(u_h), Equation (8), which is analogous to the finite element residual (Ainsworth and
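    As a concrete illustration of the continuous residual referred to above, the sketch below inserts a coarse-grid representation of an exact steady viscous Burgers solution into the continuous operator L(u) = u·du/dx − ν·d²u/dx², approximating the derivatives by finite differences. The nonzero result is the residual that drives the discretization-error estimate, and it shrinks under grid refinement. The solution, viscosity and grids are illustrative choices, not taken from the report.

        # Continuous residual of the steady viscous Burgers equation, L(u) = u u_x - nu u_xx,
        # evaluated by inserting a coarse-grid representation of an exact solution into L.
        # The residual is nonzero (discretization error) and shrinks with grid refinement.
        import numpy as np

        nu, c = 0.1, 1.0
        exact = lambda x: -c * np.tanh(c * x / (2 * nu))     # exact steady solution

        def continuous_residual(n_points):
            x = np.linspace(-1.0, 1.0, n_points)
            u = exact(x)                                     # coarse-grid "discrete" solution
            ux = np.gradient(u, x)                           # finite-difference derivatives
            uxx = np.gradient(ux, x)
            r = u * ux - nu * uxx                            # L(u_h): the continuous residual
            return np.max(np.abs(r[2:-2]))                   # ignore one-sided boundary stencils

        for n in (17, 33, 65, 129):
            print(f"n = {n:4d}   max |L(u_h)| = {continuous_residual(n):.3e}")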

  6. On the use of inexact, pruned hardware in atmospheric modelling

    PubMed Central

    Düben, Peter D.; Joven, Jaume; Lingamneni, Avinash; McNamara, Hugh; De Micheli, Giovanni; Palem, Krishna V.; Palmer, T. N.

    2014-01-01

    Inexact hardware design, which advocates trading the accuracy of computations in exchange for significant savings in area, power and/or performance of computing hardware, has received increasing prominence in several error-tolerant application domains, particularly those involving perceptual or statistical end-users. In this paper, we evaluate inexact hardware for its applicability in weather and climate modelling. We expand previous studies on inexact techniques, in particular probabilistic pruning, to floating point arithmetic units and derive several simulated set-ups of pruned hardware with reasonable levels of error for applications in atmospheric modelling. The set-up is tested on the Lorenz ‘96 model, a toy model for atmospheric dynamics, using software emulation for the proposed hardware. The results show that large parts of the computation tolerate the use of pruned hardware blocks without major changes in the quality of short- and long-time diagnostics, such as forecast errors and probability density functions. This could open the door to significant savings in computational cost and to higher resolution simulations with weather and climate models. PMID:24842031
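    A minimal software emulation in the spirit described above can be sketched by integrating the Lorenz '96 model in double precision and again with the model state rounded to a lower-precision type after every time step. The forcing, step size, and the use of float16 rounding as a stand-in for pruned arithmetic units are illustrative assumptions.

        # Lorenz '96 toy model integrated in float64 and again with the state rounded
        # to float16 after every step, as a crude software emulation of lower-precision
        # hardware; forcing, step size and rounding scheme are illustrative choices.
        import numpy as np

        def l96_rhs(x, forcing=8.0):
            """Lorenz '96 tendencies dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F."""
            return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

        def integrate(x0, n_steps=2000, dt=0.01, precision=np.float64):
            x = x0.astype(np.float64)
            for _ in range(n_steps):
                # classical 4th-order Runge-Kutta step
                k1 = l96_rhs(x)
                k2 = l96_rhs(x + 0.5 * dt * k1)
                k3 = l96_rhs(x + 0.5 * dt * k2)
                k4 = l96_rhs(x + dt * k3)
                x = x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
                x = x.astype(precision).astype(np.float64)   # emulate reduced precision
            return x

        x0 = 8.0 + 0.01 * np.random.default_rng(5).standard_normal(40)
        ref = integrate(x0)                        # double-precision reference
        low = integrate(x0, precision=np.float16)  # reduced-precision emulation
        # individual trajectories diverge (chaos); compare simple final-state statistics
        print("mean (ref, low):", ref.mean(), low.mean())
        print("std  (ref, low):", ref.std(), low.std())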

  7. A modified adjoint-based grid adaptation and error correction method for unstructured grid

    NASA Astrophysics Data System (ADS)

    Cui, Pengcheng; Li, Bin; Tang, Jing; Chen, Jiangtao; Deng, Youqi

    2018-05-01

    Grid adaptation is an important strategy to improve the accuracy of output functions (e.g. drag, lift, etc.) in computational fluid dynamics (CFD) analysis and design applications. This paper presents a modified robust grid adaptation and error correction method for reducing simulation errors in integral outputs. The procedure is based on discrete adjoint optimization theory, in which the estimated global error of output functions can be directly related to the local residual error. According to this relationship, the local residual error contribution can be used as an indicator in a grid adaptation strategy designed to generate refined grids for accurately estimating the output functions. This grid adaptation and error correction method is applied to subsonic and supersonic simulations around three-dimensional configurations. Numerical results demonstrate that the grid regions to which the output functions are sensitive are detected and refined after grid adaptation, and the accuracy of the output functions is obviously improved after error correction. The proposed grid adaptation and error correction method is shown to compare very favorably in terms of output accuracy and computational efficiency relative to traditional feature-based grid adaptation.
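    The adjoint relation behind the indicator can be seen in a minimal linear-algebra form: for a linear problem A u = f with output J = gᵀu, the output error of an approximate solution equals ψᵀr, where r = f − A·u_approx is the local residual and ψ solves the adjoint system Aᵀψ = g; the element-wise products |ψ_i·r_i| are the local contributions that adaptation flags. The sketch below is a generic illustration, not the paper's CFD implementation; the matrix and vectors are arbitrary test data.

        # Adjoint-weighted residual: for A u = f and output J = g^T u, the output error
        # of an approximate solution equals psi^T r with r = f - A u_approx and
        # A^T psi = g. Element-wise |psi_i * r_i| serves as a local adaptation indicator.
        import numpy as np

        rng = np.random.default_rng(6)
        n = 50
        A = np.eye(n) * 4.0 + rng.normal(0, 0.1, (n, n))    # well-conditioned test matrix
        f = rng.normal(size=n)
        g = rng.normal(size=n)                              # defines the output J = g.u

        u_exact = np.linalg.solve(A, f)
        u_approx = u_exact + rng.normal(0, 1e-3, n)         # stand-in for a coarse solution

        r = f - A @ u_approx                                # local residual
        psi = np.linalg.solve(A.T, g)                       # discrete adjoint solution

        true_err = g @ u_exact - g @ u_approx
        est_err = psi @ r                                   # adjoint-weighted residual
        indicator = np.abs(psi * r)                         # local refinement indicator
        print(f"true output error {true_err:.3e}, adjoint estimate {est_err:.3e}")
        print(f"largest local error contribution {indicator.max():.3e}")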

  8. Survival analysis with error-prone time-varying covariates: a risk set calibration approach

    PubMed Central

    Liao, Xiaomei; Zucker, David M.; Li, Yi; Spiegelman, Donna

    2010-01-01

    Summary Occupational, environmental, and nutritional epidemiologists are often interested in estimating the prospective effect of time-varying exposure variables such as cumulative exposure or cumulative updated average exposure, in relation to chronic disease endpoints such as cancer incidence and mortality. From exposure validation studies, it is apparent that many of the variables of interest are measured with moderate to substantial error. Although the ordinary regression calibration approach is approximately valid and efficient for measurement error correction of relative risk estimates from the Cox model with time-independent point exposures when the disease is rare, it is not adaptable for use with time-varying exposures. By re-calibrating the measurement error model within each risk set, a risk set regression calibration (RRC) method is proposed for this setting. An algorithm for a bias-corrected point estimate of the relative risk using the RRC approach is presented, followed by the derivation of an estimate of its variance, resulting in a sandwich estimator. Emphasis is on methods applicable to the main study/external validation study design, which arises in important applications. Simulation studies under several assumptions about the error model were carried out, which demonstrated the validity and efficiency of the method in finite samples. The method was applied to a study of diet and cancer from Harvard’s Health Professionals Follow-up Study (HPFS). PMID:20486928

  9. Sub-basin-scale sea level budgets from satellite altimetry, Argo floats and satellite gravimetry: a case study in the North Atlantic Ocean

    NASA Astrophysics Data System (ADS)

    Kleinherenbrink, Marcel; Riva, Riccardo; Sun, Yu

    2016-11-01

    In this study, for the first time, an attempt is made to close the sea level budget on a sub-basin scale in terms of trend and amplitude of the annual cycle. We also compare the residual time series after removing the trend, the semiannual and the annual signals. To obtain errors for altimetry and Argo, full variance-covariance matrices are computed using correlation functions and their errors are fully propagated. For altimetry, we apply a geographically dependent intermission bias [Ablain et al. (2015)], which leads to differences in trends up to 0.8 mm yr⁻¹. Since Argo float measurements are non-homogeneously spaced, steric sea levels are first objectively interpolated onto a grid before averaging. For the Gravity Recovery And Climate Experiment (GRACE) gravity fields, full variance-covariance matrices are used to propagate errors and statistically filter the gravity fields. We use four different filtered gravity field solutions and determine which post-processing strategy is best for budget closure. As a reference, the standard 96 degree Dense Decorrelation Kernel-5 (DDK5)-filtered Center for Space Research (CSR) solution is used to compute the mass component (MC). A comparison is made with two anisotropic Wiener-filtered CSR solutions up to degree and order 60 and 96 and a Wiener-filtered 90 degree ITSG solution. Budgets are computed for 10 polygons in the North Atlantic Ocean, defined in a way that the error on the trend of the MC plus steric sea level remains within 1 mm yr⁻¹. Using the anisotropic Wiener filter on CSR gravity fields expanded up to spherical harmonic degree 96, it is possible to close the sea level budget in 9 of 10 sub-basins in terms of trend. Wiener-filtered Institute of Theoretical Geodesy and Satellite Geodesy (ITSG) and the standard DDK5-filtered CSR solutions also close the trend budget if a glacial isostatic adjustment (GIA) correction error of 10-20 % is applied; however, the performance of the DDK5-filtered solution strongly depends on the orientation of the polygon due to residual striping. In 7 of 10 sub-basins, the budget of the annual cycle is closed, using the DDK5-filtered CSR or the Wiener-filtered ITSG solutions. The Wiener-filtered 60 and 96 degree CSR solutions, in combination with Argo, lack amplitude and suffer from what appears to be hydrological leakage in the Amazon and Sahel regions. After reducing the trend, the semiannual and the annual signals, 24-53 % of the residual variance in altimetry-derived sea level time series is explained by the combination of Argo steric sea levels and the Wiener-filtered ITSG MC. Based on this, we believe that the best overall solution for the MC of the sub-basin-scale budgets is the Wiener-filtered ITSG gravity fields. The interannual variability is primarily a steric signal in the North Atlantic Ocean, so for this the choice of filter and gravity field solution is not really significant.
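    The budget-closure test described above reduces to fitting a trend plus an annual cycle to each sub-basin time series (altimetry sea level, Argo steric, GRACE mass) and checking whether the altimetry trend matches the sum of the other two. The sketch below illustrates that fit on synthetic monthly series; the amplitudes, noise levels and time span are assumptions made for illustration.

        # Sub-basin sea level budget check: fit trend + annual cycle to altimetry,
        # steric (Argo) and mass (GRACE) series and test whether the altimetry trend
        # matches steric + mass. All series are synthetic and purely illustrative.
        import numpy as np

        rng = np.random.default_rng(7)
        t = np.arange(0, 10, 1 / 12)                        # 10 years, monthly (years)
        omega = 2 * np.pi                                    # annual frequency

        def synth(trend, amp, noise):
            return trend * t + amp * np.sin(omega * t) + rng.normal(0, noise, t.size)

        steric = synth(1.5, 20.0, 3.0)      # mm
        mass = synth(1.0, 10.0, 4.0)        # mm
        altim = steric + mass + rng.normal(0, 3.0, t.size)  # mm (closed budget by design)

        def fit(series):
            """Least-squares trend (mm/yr) and annual amplitude (mm)."""
            A = np.column_stack([np.ones_like(t), t, np.sin(omega * t), np.cos(omega * t)])
            c = np.linalg.lstsq(A, series, rcond=None)[0]
            return c[1], np.hypot(c[2], c[3])

        for name, s in [("altimetry", altim), ("steric", steric), ("mass", mass)]:
            tr, amp = fit(s)
            print(f"{name:9s}: trend {tr:5.2f} mm/yr, annual amplitude {amp:5.1f} mm")
        # the budget closes if trend(altimetry) ~ trend(steric) + trend(mass)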

  10. CORRELATED AND ZONAL ERRORS OF GLOBAL ASTROMETRIC MISSIONS: A SPHERICAL HARMONIC SOLUTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makarov, V. V.; Dorland, B. N.; Gaume, R. A.

    We propose a computer-efficient and accurate method of estimating spatially correlated errors in astrometric positions, parallaxes, and proper motions obtained by space- and ground-based astrometry missions. In our method, the simulated observational equations are set up and solved for the coefficients of scalar and vector spherical harmonics representing the output errors rather than for individual objects in the output catalog. Both accidental and systematic correlated errors of astrometric parameters can be accurately estimated. The method is demonstrated on the example of the JMAPS mission, but can be used for other projects in space astrometry, such as SIM or JASMINE.

  11. Correlated and Zonal Errors of Global Astrometric Missions: A Spherical Harmonic Solution

    NASA Astrophysics Data System (ADS)

    Makarov, V. V.; Dorland, B. N.; Gaume, R. A.; Hennessy, G. S.; Berghea, C. T.; Dudik, R. P.; Schmitt, H. R.

    2012-07-01

    We propose a computer-efficient and accurate method of estimating spatially correlated errors in astrometric positions, parallaxes, and proper motions obtained by space- and ground-based astrometry missions. In our method, the simulated observational equations are set up and solved for the coefficients of scalar and vector spherical harmonics representing the output errors rather than for individual objects in the output catalog. Both accidental and systematic correlated errors of astrometric parameters can be accurately estimated. The method is demonstrated on the example of the JMAPS mission, but can be used for other projects in space astrometry, such as SIM or JASMINE.

  12. Impact of uncertainties in free stream conditions on the aerodynamics of a rectangular cylinder

    NASA Astrophysics Data System (ADS)

    Mariotti, Alessandro; Shoeibi Omrani, Pejman; Witteveen, Jeroen; Salvetti, Maria Vittoria

    2015-11-01

    The BARC benchmark deals with the flow around a rectangular cylinder with chord-to-depth ratio equal to 5. This flow configuration is of practical interest for civil and industrial structures and it is characterized by massively separated flow and unsteadiness. In a recent review of BARC results, significant dispersion was observed both in experimental and numerical predictions of some flow quantities that are extremely sensitive to various uncertainties which may be present in experiments and simulations. Besides modeling and numerical errors, in simulations it is difficult to exactly reproduce the experimental conditions due to uncertainties in the set-up parameters, which sometimes cannot be exactly controlled or characterized. Probabilistic methods and URANS simulations are used to investigate the impact of the uncertainties in the following set-up parameters: the angle of incidence, the free stream longitudinal turbulence intensity and length scale. Stochastic collocation is employed to perform the probabilistic propagation of the uncertainty. The discretization and modeling errors are estimated by repeating the same analysis for different grids and turbulence models. The results obtained for different assumed PDFs of the set-up parameters are also compared.
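    For a single Gaussian uncertain input such as the angle of incidence, stochastic collocation can be sketched as evaluating the deterministic solver at Gauss-Hermite quadrature points and recombining the results with the quadrature weights to obtain output statistics. In the sketch below a hypothetical cheap surrogate function stands in for a URANS run, and the input mean and standard deviation are illustrative assumptions.

        # Stochastic collocation with Gauss-Hermite quadrature for a single Gaussian
        # uncertain input. The "solver" is a hypothetical cheap surrogate standing in
        # for a URANS run; the input mean/std values are illustrative assumptions.
        import numpy as np

        def solver(alpha_deg):
            """Hypothetical response (e.g. a force coefficient) vs angle of incidence."""
            return 1.0 + 0.08 * alpha_deg - 0.01 * alpha_deg ** 2

        mean_alpha, std_alpha = 0.0, 0.5        # assumed uncertainty in incidence (deg)
        n_pts = 7

        # Gauss-Hermite points/weights are for weight exp(-x^2); rescale for a normal PDF.
        x, w = np.polynomial.hermite.hermgauss(n_pts)
        alphas = mean_alpha + np.sqrt(2.0) * std_alpha * x
        weights = w / np.sqrt(np.pi)

        q = np.array([solver(a) for a in alphas])   # collocation-point model evaluations
        q_mean = np.sum(weights * q)
        q_var = np.sum(weights * (q - q_mean) ** 2)
        print(f"output mean {q_mean:.4f}, std {np.sqrt(q_var):.4f}")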

  13. Calibration of misalignment errors in the non-null interferometry based on reverse iteration optimization algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Xinmu; Hao, Qun; Hu, Yao; Wang, Shaopu; Ning, Yan; Li, Tengfei; Chen, Shufen

    2017-10-01

    With no necessity of compensating the whole aberration introduced by the aspheric surfaces, the non-null test has an advantage over the null test in applicability. However, retrace error, which is brought about by the path difference between the rays reflected from the surface under test (SUT) and the incident rays, is introduced into the measurement and makes up part of the residual wavefront aberrations (RWAs), along with surface figure error (SFE), misalignment error and other influences. Being difficult to separate from the RWAs, the misalignment error may remain after measurement, and it is hard to identify whether it has been removed or not. It is therefore a primary task to study the removal of misalignment error. A brief demonstration of the digital Moiré interferometric technique is presented and a calibration method for misalignment error based on a reverse iteration optimization (RIO) algorithm in the non-null test method is addressed. The proposed method operates mostly in the virtual system and requires no accurate adjustment of the real interferometer, which is of significant advantage in reducing the errors brought by repeated complicated manual adjustment, thereby further improving the accuracy of the aspheric surface test. Simulation verification is presented. The calibration accuracy of the position and attitude reaches at least the order of 10⁻⁵ mm and 0.0056×10⁻⁶ rad, respectively. The simulation demonstrates that the influence of misalignment error can be precisely calculated and removed after calibration.

  14. PSF reconstruction validated using on-sky CANARY data in MOAO mode

    NASA Astrophysics Data System (ADS)

    Martin, O. A.; Correia, C. M.; Gendron, E.; Rousset, G.; Gratadour, D.; Vidal, F.; Morris, T. J.; Basden, A. G.; Myers, R. M.; Neichel, B.; Fusco, T.

    2016-07-01

    CANARY is an open-loop tomographic adaptive optics (AO) demonstrator that was designed for use at the 4.2m William Herschel Telescope (WHT) in La Palma. Gearing up to extensive statistical studies of high-redshift galaxies surveyed with Multi-Object Spectrographs (MOS), the demonstrator CANARY has been designed to tackle technical challenges related to open-loop Adaptive-Optics (AO) control with mixed Natural Guide Star (NGS) and Laser Guide Star (LGS) tomography. We have developed a Point Spread Function (PSF)-Reconstruction algorithm dedicated to MOAO systems using system telemetry to estimate the PSF potentially anywhere in the observed field, a prerequisite to deconvolve AO-corrected science observations in Integral Field Spectroscopy (IFS). Additionally, the ability to accurately reconstruct the PSF is the materialization of a broad and fine-detailed understanding of the residual error contributors, both atmospheric and opto-mechanical. In this paper we compare the classical PSF-reconstruction approach from Véran (1), which we take as the on-axis reference using the truth-sensor telemetry, to one tailored to atmospheric tomography that handles the off-axis data only. We have post-processed over 450 on-sky CANARY data sets, for which we observe 92% and 88% correlation between the reconstructed and sky values of the Strehl Ratio (SR) and Full Width at Half Maximum (FWHM), respectively. The reference method achieves 95% and 92.5% by directly exploiting the measurements of the residual phase from the CANARY Truth Sensor (TS).

  15. Clinical Outcomes of an Optimized Prolate Ablation Procedure for Correcting Residual Refractive Errors Following Laser Surgery.

    PubMed

    Chung, Byunghoon; Lee, Hun; Choi, Bong Joon; Seo, Kyung Ryul; Kim, Eung Kwon; Kim, Dae Yune; Kim, Tae-Im

    2017-02-01

    The purpose of this study was to investigate the clinical efficacy of an optimized prolate ablation procedure for correcting residual refractive errors following laser surgery. We analyzed 24 eyes of 15 patients who underwent an optimized prolate ablation procedure for the correction of residual refractive errors following laser in situ keratomileusis, laser-assisted subepithelial keratectomy, or photorefractive keratectomy surgeries. Preoperative ophthalmic examinations were performed, and uncorrected distance visual acuity, corrected distance visual acuity, manifest refraction values (sphere, cylinder, and spherical equivalent), point spread function, modulation transfer function, corneal asphericity (Q value), ocular aberrations, and corneal haze measurements were obtained postoperatively at 1, 3, and 6 months. Uncorrected distance visual acuity improved and refractive errors decreased significantly at 1, 3, and 6 months postoperatively. Total coma aberration increased at 3 and 6 months postoperatively, while changes in all other aberrations were not statistically significant. Similarly, no significant changes in point spread function were detected, but modulation transfer function increased significantly at the postoperative time points measured. The optimized prolate ablation procedure was effective in terms of improving visual acuity and objective visual performance for the correction of persistent refractive errors following laser surgery.

  16. Integrity modelling of tropospheric delay models

    NASA Astrophysics Data System (ADS)

    Rózsa, Szabolcs; Bastiaan Ober, Pieter; Mile, Máté; Ambrus, Bence; Juni, Ildikó

    2017-04-01

    The effect of the neutral atmosphere on signal propagation is routinely estimated by various tropospheric delay models in satellite navigation. Although numerous studies can be found in the literature investigating the accuracy of these models, for safety-of-life applications it is crucial to study and model the worst case performance of these models using very low recurrence frequencies. The main objective of the INTegrity of TROpospheric models (INTRO) project funded by the ESA PECS programme is to establish a model (or models) of the residual error of existing tropospheric delay models for safety-of-life applications. Such models are required to overbound rare tropospheric delays and should thus include the tails of the error distributions. Their use should lead to safe error bounds on the user position and should allow computation of protection levels for the horizontal and vertical position errors. The current tropospheric model from the RTCA SBAS Minimum Operational Performance Standards has an associated residual error that equals 0.12 meters in the vertical direction. This value is derived by simply extrapolating the observed distribution of the residuals into the tail (where no data is present) and then taking the point where the cumulative distribution would have an exceedance level of 10⁻⁷. While the resulting standard deviation is much higher than the standard deviation that best fits the data (0.05 meters), it surely is conservative for most applications. In the context of the INTRO project, some widely used and newly developed tropospheric delay models (e.g. RTCA MOPS, ESA GALTROPO and GPT2W) were tested using 16 years of daily ERA-INTERIM Reanalysis numerical weather model data and the raytracing technique. The results showed that the performance of some of the widely applied models has a clear seasonal dependency and is also affected by geographical position. In order to provide a more realistic, but still conservative, estimation of the residual error of tropospheric delays, the mathematical formulation of the overbounding models is currently under development. This study introduces the main findings of the residual error analysis of the studied tropospheric delay models, and discusses the preliminary analysis of the integrity model development for safety-of-life applications.

  17. Comparison of bootstrap approaches for estimation of uncertainties of DTI parameters.

    PubMed

    Chung, SungWon; Lu, Ying; Henry, Roland G

    2006-11-01

    Bootstrap is an empirical non-parametric statistical technique based on data resampling that has been used to quantify uncertainties of diffusion tensor MRI (DTI) parameters, useful in tractography and in assessing DTI methods. The current bootstrap method (repetition bootstrap) used for DTI analysis performs resampling within the data sharing common diffusion gradients, requiring multiple acquisitions for each diffusion gradient. Recently, wild bootstrap was proposed that can be applied without multiple acquisitions. In this paper, two new approaches are introduced called residual bootstrap and repetition bootknife. We show that repetition bootknife corrects for the large bias present in the repetition bootstrap method and, therefore, better estimates the standard errors. Like wild bootstrap, residual bootstrap is applicable to single acquisition scheme, and both are based on regression residuals (called model-based resampling). Residual bootstrap is based on the assumption that non-constant variance of measured diffusion-attenuated signals can be modeled, which is actually the assumption behind the widely used weighted least squares solution of diffusion tensor. The performances of these bootstrap approaches were compared in terms of bias, variance, and overall error of bootstrap-estimated standard error by Monte Carlo simulation. We demonstrate that residual bootstrap has smaller biases and overall errors, which enables estimation of uncertainties with higher accuracy. Understanding the properties of these bootstrap procedures will help us to choose the optimal approach for estimating uncertainties that can benefit hypothesis testing based on DTI parameters, probabilistic fiber tracking, and optimizing DTI methods.
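    The residual bootstrap idea described above can be sketched in a simpler ordinary least squares setting: fit once, resample the fitted residuals with replacement, add them back to the fitted values, refit, and take the spread of the refitted coefficients as the bootstrap standard error. The linear model and noise level below are illustrative assumptions, not the weighted tensor fit used for DTI.

        # Residual bootstrap for regression coefficient standard errors: resample the
        # fitted residuals, rebuild synthetic responses, refit, and use the spread of
        # the refitted slopes as the standard error. Data are synthetic.
        import numpy as np

        rng = np.random.default_rng(8)
        n = 100
        x = np.linspace(0, 1, n)
        y = 2.0 + 3.0 * x + rng.normal(0, 0.5, n)

        X = np.column_stack([np.ones(n), x])
        beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
        fitted = X @ beta_hat
        resid = y - fitted

        slopes = []
        for _ in range(2000):
            y_star = fitted + rng.choice(resid, size=n, replace=True)  # resample residuals
            slopes.append(np.linalg.lstsq(X, y_star, rcond=None)[0][1])

        print(f"slope {beta_hat[1]:.3f}, residual-bootstrap SE {np.std(slopes, ddof=1):.3f}")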

  18. Self-paced preparation for a task switch eliminates attentional inertia but not the performance switch cost.

    PubMed

    Longman, Cai S; Lavric, Aureliu; Monsell, Stephen

    2017-06-01

    The performance overhead associated with changing tasks (the "switch cost") usually diminishes when the task is specified in advance but is rarely eliminated by preparation. A popular account of the "residual" (asymptotic) switch cost is that it reflects "task-set inertia": carry-over of task-set parameters from the preceding trial(s). New evidence for a component of "task-set inertia" comes from eye-tracking, where the location associated with the previously (but no longer) relevant task is fixated preferentially over other irrelevant locations, even when preparation intervals are generous. Might such limits in overcoming task-set inertia in general, and "attentional inertia" in particular, result from suboptimal scheduling of preparation when the time available is outside one's control? In the present study, the stimulus comprised 3 digits located at the points of an invisible triangle, preceded by a central verbal cue specifying which of 3 classification tasks to perform, each consistently applied to just 1 digit location. The digits were presented only when fixation moved away from the cue, thus giving the participant control over preparation time. In contrast to our previous research with experimenter-determined preparation intervals, we found no sign of attentional inertia for the long preparation intervals. Self-paced preparation reduced but did not eliminate the performance switch cost-leaving a clear residual component in both reaction time and error rates. That the scheduling of preparation accounts for some, but not all, components of the residual switch cost, challenges existing accounts of the switch cost, even those which distinguish between preparatory and poststimulus reconfiguration processes. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  19. Approximate Bayesian Computation Using Markov Chain Monte Carlo Simulation: Theory, Concepts, and Applications

    NASA Astrophysics Data System (ADS)

    Sadegh, M.; Vrugt, J. A.

    2013-12-01

    The ever-increasing pace of computational power, along with continued advances in measurement technologies and improvements in process understanding, has stimulated the development of increasingly complex hydrologic models that simulate soil moisture flow, groundwater recharge, surface runoff, root water uptake, and river discharge at increasingly finer spatial and temporal scales. Reconciling these system models with field and remote sensing data is a difficult task, particularly because average measures of model/data similarity inherently lack the power to provide a meaningful comparative evaluation of the consistency in model form and function. The very construction of the likelihood function - as a summary variable of the (usually averaged) properties of the error residuals - dilutes and mixes the available information into an index having little remaining correspondence to specific behaviors of the system (Gupta et al., 2008). The quest for a more powerful method for model evaluation has inspired Vrugt and Sadegh [2013] to introduce "likelihood-free" inference as a vehicle for diagnostic model evaluation. This class of methods is also referred to as Approximate Bayesian Computation (ABC) and relaxes the need for an explicit likelihood function in favor of one or multiple different summary statistics rooted in hydrologic theory that together have a much stronger and more compelling diagnostic power than some aggregated measure of the size of the error residuals. Here, we will introduce an efficient ABC sampling method that is orders of magnitude faster in exploring the posterior parameter distribution than commonly used rejection and Population Monte Carlo (PMC) samplers. Our methodology uses Markov Chain Monte Carlo simulation with DREAM, and takes advantage of a simple computational trick to resolve discontinuity problems with the application of set-theoretic summary statistics. We will also demonstrate a set of summary statistics that are rather insensitive to errors in the forcing data. This enhances prospects of detecting model structural deficiencies.
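    The basic ABC idea can be shown with the simple rejection variant (much less efficient than the DREAM-based sampler described above): accept parameter draws whose simulated data reproduce chosen summary statistics of the observations within a tolerance. The toy model, summaries and tolerance below are illustrative assumptions.

        # Rejection ABC in miniature: accept prior draws whose simulated summary
        # statistics (here mean and standard deviation) fall within a tolerance of the
        # observed ones. The toy model and tolerance are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(9)
        obs = rng.normal(3.0, 1.5, 200)                      # "observed" data
        s_obs = np.array([obs.mean(), obs.std(ddof=1)])      # summary statistics

        def simulate(mu, sigma):
            d = rng.normal(mu, sigma, obs.size)
            return np.array([d.mean(), d.std(ddof=1)])

        accepted = []
        for _ in range(50_000):
            mu, sigma = rng.uniform(0, 6), rng.uniform(0.1, 4)   # prior draws
            if np.all(np.abs(simulate(mu, sigma) - s_obs) < 0.15):
                accepted.append((mu, sigma))

        post = np.array(accepted)
        print(f"{len(post)} accepted; posterior mean mu ~ {post[:, 0].mean():.2f}, "
              f"sigma ~ {post[:, 1].mean():.2f}")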

  20. On High-Frequency Topography-Implied Gravity Signals for a Height System Unification Using GOCE-Based Global Geopotential Models

    NASA Astrophysics Data System (ADS)

    Grombein, Thomas; Seitz, Kurt; Heck, Bernhard

    2017-03-01

    National height reference systems have conventionally been linked to the local mean sea level, observed at individual tide gauges. Due to variations in the sea surface topography, the reference levels of these systems are inconsistent, causing height datum offsets of up to ±1-2 m. For the unification of height systems, a satellite-based method is presented that utilizes global geopotential models (GGMs) derived from ESA's satellite mission Gravity field and steady-state Ocean Circulation Explorer (GOCE). In this context, height datum offsets are estimated within a least squares adjustment by comparing the GGM information with measured GNSS/leveling data. While the GNSS/leveling data comprises the full spectral information, GOCE GGMs are restricted to long wavelengths according to the maximum degree of their spherical harmonic representation. To provide accurate height datum offsets, it is indispensable to account for the remaining signal above this maximum degree, known as the omission error of the GGM. Therefore, a combination of the GOCE information with the high-resolution Earth Gravitational Model 2008 (EGM2008) is performed. The main contribution of this paper is to analyze the benefit, when high-frequency topography-implied gravity signals are additionally used to reduce the remaining omission error of EGM2008. In terms of a spectral extension, a new method is proposed that does not rely on an assumed spectral consistency of topographic heights and implied gravity as is the case for the residual terrain modeling (RTM) technique. In the first step of this new approach, gravity forward modeling based on tesseroid mass bodies is performed according to the Rock-Water-Ice (RWI) approach. In a second step, the resulting full spectral RWI-based topographic potential values are reduced by the effect of the topographic gravity field model RWI_TOPO_2015, thus, removing the long to medium wavelengths. By using the latest GOCE GGMs, the impact of topography-implied gravity signals on the estimation of height datum offsets is analyzed in detail for representative GNSS/leveling data sets in Germany, Austria, and Brazil. Besides considerable changes in the estimated offset of up to 3 cm, the conducted analyses show that significant improvements of 30-40% can be achieved in terms of a reduced standard deviation and range of the least squares adjusted residuals.

  1. Algorithm for computing descriptive statistics for very large data sets and the exa-scale era

    NASA Astrophysics Data System (ADS)

    Beekman, Izaak

    2017-11-01

    An algorithm for Single-point, Parallel, Online, Converging Statistics (SPOCS) is presented. It is suited for in situ analysis that traditionally would be relegated to post-processing, and can be used to monitor statistical convergence and estimate the error/residual in the quantity of interest, which is also useful for uncertainty quantification. Today, data may be generated at an overwhelming rate by numerical simulations and proliferating sensing apparatuses in experiments and engineering applications. Monitoring descriptive statistics in real time lets costly computations and experiments be gracefully aborted if an error has occurred, and monitoring the level of statistical convergence allows them to be run for the shortest amount of time required to obtain good results. This algorithm extends work by Pébay (Sandia Report SAND2008-6212). Pébay's algorithms are recast into a converging delta formulation, with provably favorable properties. The mean, variance, covariances and arbitrary higher order statistical moments are computed in one pass. The algorithm is tested using Sillero, Jiménez, & Moser's (2013, 2014) publicly available UPM high Reynolds number turbulent boundary layer data set, demonstrating numerical robustness, efficiency and other favorable properties.
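    The flavor of such single-pass updating is captured by the classic Welford-style recurrence sketched below, which maintains a running mean and variance as samples stream in. This is only a minimal illustration of the idea; the SPOCS algorithm recasts Pébay's formulas into a converging delta form and also covers covariances and arbitrary higher-order moments.

    ```python
    class OnlineStats:
        """Single-pass running mean and variance (Welford-style update)."""

        def __init__(self):
            self.n = 0
            self.mean = 0.0
            self.m2 = 0.0          # sum of squared deviations from the running mean

        def update(self, x):
            self.n += 1
            delta = x - self.mean
            self.mean += delta / self.n
            self.m2 += delta * (x - self.mean)

        @property
        def variance(self):
            return self.m2 / (self.n - 1) if self.n > 1 else float("nan")

    stats = OnlineStats()
    for value in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
        stats.update(value)
    print(stats.mean, stats.variance)   # running mean and sample variance after one pass
    ```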

  2. Optimization of a solid-state electron spin qubit using Gate Set Tomography

    DOE PAGES

    Dehollain, Juan P.; Muhonen, Juha T.; Blume-Kohout, Robin J.; ...

    2016-10-13

    Here, state-of-the-art qubit systems are reaching the gate fidelities required for scalable quantum computation architectures. Further improvements in the fidelity of quantum gates demand characterization and benchmarking protocols that are efficient, reliable and extremely accurate. Ideally, a benchmarking protocol should also provide information on how to rectify residual errors. Gate Set Tomography (GST) is one such protocol designed to give detailed characterization of as-built qubits. We implemented GST on a high-fidelity electron-spin qubit confined by a single 31P atom in 28Si. The results reveal systematic errors that a randomized benchmarking analysis could measure but not identify, whereas GST indicated the need for improved calibration of the length of the control pulses. After introducing this modification, we measured a new benchmark average gate fidelity of 99.942(8)%, an improvement on the previous value of 99.90(2)%. Furthermore, GST revealed high levels of non-Markovian noise in the system, which will need to be understood and addressed when the qubit is used within a fault-tolerant quantum computation scheme.

  3. PDB file parser and structure class implemented in Python.

    PubMed

    Hamelryck, Thomas; Manderick, Bernard

    2003-11-22

    The biopython project provides a set of bioinformatics tools implemented in Python. Recently, biopython was extended with a set of modules that deal with macromolecular structure. Biopython now contains a parser for PDB files that makes the atomic information available in an easy-to-use but powerful data structure. The parser and data structure deal with features that are often left out or handled inadequately by other packages, e.g. atom and residue disorder (if point mutants are present in the crystal), anisotropic B factors, multiple models and insertion codes. In addition, the parser performs some sanity checking to detect obvious errors. The Biopython distribution (including source code and documentation) is freely available (under the Biopython license) from http://www.biopython.org
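    A short usage sketch of the parser and of the Structure/Model/Chain/Residue/Atom hierarchy it builds is given below, written against the current Bio.PDB interface; the QUIET flag and the file name are assumptions for illustration.

    ```python
    from Bio.PDB import PDBParser

    parser = PDBParser(QUIET=True)                       # suppress warnings about minor format issues
    structure = parser.get_structure("example", "example.pdb")   # "example.pdb" is a placeholder path

    # Walk the SMCRA hierarchy: Structure -> Model -> Chain -> Residue -> Atom
    for model in structure:
        for chain in model:
            for residue in chain:
                for atom in residue:
                    if atom.is_disordered():             # disordered atoms (altlocs) are flagged explicitly
                        print(chain.id, residue.id, atom.get_name(), "is disordered")
    ```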

  4. Finite Element A Posteriori Error Estimation for Heat Conduction. Degree awarded by George Washington Univ.

    NASA Technical Reports Server (NTRS)

    Lang, Christapher G.; Bey, Kim S. (Technical Monitor)

    2002-01-01

    This research investigates residual-based a posteriori error estimates for finite element approximations of heat conduction in single-layer and multi-layered materials. The finite element approximation, based upon hierarchical modelling combined with p-version finite elements, is described with specific application to a two-dimensional, steady state, heat-conduction problem. Element error indicators are determined by solving an element equation for the error with the element residual as a source, and a global error estimate in the energy norm is computed by collecting the element contributions. Numerical results of the performance of the error estimate are presented by comparisons to the actual error. Two methods are discussed and compared for approximating the element boundary flux. The equilibrated flux method provides more accurate results for estimating the error than the average flux method. The error estimation is applied to multi-layered materials with a modification to the equilibrated flux method to approximate the discontinuous flux along a boundary at the material interfaces. A directional error indicator is developed which distinguishes between the hierarchical modeling error and the finite element error. Numerical results are presented for single-layered materials which show that the directional indicators accurately determine which contribution to the total error dominates.

  5. SU-E-T-195: Gantry Angle Dependency of MLC Leaf Position Error

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ju, S; Hong, C; Kim, M

    Purpose: The aim of this study was to investigate the gantry angle dependency of the multileaf collimator (MLC) leaf position error. Methods: An automatic MLC quality assurance system (AutoMLCQA) was developed to evaluate the gantry angle dependency of the MLC leaf position error using an electronic portal imaging device (EPID). To eliminate the EPID position error due to gantry rotation, we designed a reference marker (RM) that could be inserted into the wedge mount. After setting up the EPID, a reference image was taken of the RM using an open field. Next, an EPID-based picket-fence test (PFT) was performed without the RM. These procedures were repeated at 45° intervals of the gantry angle. A total of eight reference images and PFT image sets were analyzed using in-house software. The average MLC leaf position error was calculated at five pickets (-10, -5, 0, 5, and 10 cm) in accordance with general PFT guidelines. This test was carried out for four linear accelerators. Results: The average MLC leaf position errors were within the set criterion of <1 mm (actual errors ranged from -0.7 to 0.8 mm) for all gantry angles, but significant gantry angle dependency was observed in all machines. The error was smaller at a gantry angle of 0° but increased toward the positive direction with gantry angle increments in the clockwise direction. The error reached a maximum value at a gantry angle of 90° and then gradually decreased until 180°. In the counter-clockwise rotation of the gantry, the same pattern of error was observed but the error increased in the negative direction. Conclusion: The AutoMLCQA system was useful to evaluate the MLC leaf position error for various gantry angles without the EPID position error. The gantry angle dependency should be considered during MLC leaf position error analysis.

  6. Optimising UAV topographic surveys processed with structure-from-motion: Ground control quality, quantity and bundle adjustment

    NASA Astrophysics Data System (ADS)

    James, M. R.; Robson, S.; d'Oleire-Oltmanns, S.; Niethammer, U.

    2017-03-01

    Structure-from-motion (SfM) algorithms greatly facilitate the production of detailed topographic models from photographs collected using unmanned aerial vehicles (UAVs). However, the survey quality achieved in published geomorphological studies is highly variable, and sufficient processing details are never provided to understand fully the causes of variability. To address this, we show how survey quality and consistency can be improved through a deeper consideration of the underlying photogrammetric methods. We demonstrate the sensitivity of digital elevation models (DEMs) to processing settings that have not been discussed in the geomorphological literature, yet are a critical part of survey georeferencing, and are responsible for balancing the contributions of tie and control points. We provide a Monte Carlo approach to enable geomorphologists to (1) carefully consider sources of survey error and hence increase the accuracy of SfM-based DEMs and (2) minimise the associated field effort by robust determination of suitable lower-density deployments of ground control. By identifying appropriate processing settings and highlighting photogrammetric issues such as over-parameterisation during camera self-calibration, processing artefacts are reduced and the spatial variability of error minimised. We demonstrate such DEM improvements with a commonly-used SfM-based software (PhotoScan), which we augment with semi-automated and automated identification of ground control points (GCPs) in images, and apply to two contrasting case studies - an erosion gully survey (Taroudant, Morocco) and an active landslide survey (Super-Sauze, France). In the gully survey, refined processing settings eliminated step-like artefacts of up to 50 mm in amplitude, and overall DEM variability with GCP selection improved from 37 to 16 mm. In the much more challenging landslide case study, our processing halved planimetric error to 0.1 m, effectively doubling the frequency at which changes in landslide velocity could be detected. In both case studies, the Monte Carlo approach provided a robust demonstration that field effort could be substantially reduced by only deploying approximately half the number of GCPs, with minimal effect on the survey quality. To reduce processing artefacts and promote confidence in SfM-based geomorphological surveys, published results should report processing details, including the image residuals for both tie points and GCPs, and ensure that these are considered appropriately within the workflow.

  7. Effect of MLC leaf position, collimator rotation angle, and gantry rotation angle errors on intensity-modulated radiotherapy plans for nasopharyngeal carcinoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bai, Sen; Li, Guangjun; Wang, Maojie

    The purpose of this study was to investigate the effect of multileaf collimator (MLC) leaf position, collimator rotation angle, and accelerator gantry rotation angle errors on intensity-modulated radiotherapy plans for nasopharyngeal carcinoma. To compare dosimetric differences between the simulated plans and the clinical plans with evaluation parameters, 6 patients with nasopharyngeal carcinoma were selected for simulation of systematic and random MLC leaf position errors, collimator rotation angle errors, and accelerator gantry rotation angle errors. There was a high sensitivity to dose distribution for systematic MLC leaf position errors in response to field size. When the systematic MLC position errors were 0.5, 1, and 2 mm, respectively, the maximum values of the mean dose deviation, observed in parotid glands, were 4.63%, 8.69%, and 18.32%, respectively. The dosimetric effect was comparatively small for systematic MLC shift errors. For random MLC errors up to 2 mm and collimator and gantry rotation angle errors up to 0.5°, the dosimetric effect was negligible. We suggest that quality control be regularly conducted for MLC leaves, so as to ensure that systematic MLC leaf position errors are within 0.5 mm. Because the dosimetric effect of 0.5° collimator and gantry rotation angle errors is negligible, it can be concluded that setting a proper threshold for allowed errors of collimator and gantry rotation angle may increase treatment efficacy and reduce treatment time.

  8. Detecting and quantifying stellar magnetic fields. Sparse Stokes profile approximation using orthogonal matching pursuit

    NASA Astrophysics Data System (ADS)

    Carroll, T. A.; Strassmeier, K. G.

    2014-03-01

    Context. In recent years, we have seen a rapidly growing number of stellar magnetic field detections for various types of stars. Many of these magnetic fields are estimated from spectropolarimetric observations (Stokes V) by using the so-called center-of-gravity (COG) method. Unfortunately, the accuracy of this method rapidly deteriorates with increasing noise and thus calls for a more robust procedure that combines signal detection and field estimation. Aims: We introduce an estimation method that provides not only the effective or mean longitudinal magnetic field from an observed Stokes V profile but also uses the net absolute polarization of the profile to obtain an estimate of the apparent (i.e., velocity resolved) absolute longitudinal magnetic field. Methods: By combining the COG method with an orthogonal-matching-pursuit (OMP) approach, we were able to decompose observed Stokes profiles with an overcomplete dictionary of wavelet-basis functions to reliably reconstruct the observed Stokes profiles in the presence of noise. The elementary wave functions of the sparse reconstruction process were utilized to estimate the effective longitudinal magnetic field and the apparent absolute longitudinal magnetic field. A multiresolution analysis complements the OMP algorithm to provide a robust detection and estimation method. Results: An extensive Monte-Carlo simulation confirms the reliability and accuracy of the magnetic OMP approach, for which a mean error of under 2% is found. Its full potential is obtained for heavily noise-corrupted Stokes profiles with signal-to-noise variance ratios down to unity. In this case a conventional COG method yields a mean error for the effective longitudinal magnetic field of up to 50%, whereas the OMP method gives a maximum error of 18%. It is, moreover, shown that even in the case of very small residual noise at a level between 10^-3 and 10^-5, a regime reached by current multiline reconstruction techniques, the conventional COG method incorrectly interprets a large portion of the residual noise as a magnetic field, with values of up to 100 G. The magnetic OMP method, on the other hand, remains largely unaffected by the noise; regardless of the noise level, the maximum error is no greater than 0.7 G.

  9. What is the best method to fit time-resolved data? A comparison of the residual minimization and the maximum likelihood techniques as applied to experimental time-correlated, single-photon counting data

    DOE PAGES

    Santra, Kalyan; Zhan, Jinchun; Song, Xueyu; ...

    2016-02-10

    The need for measuring fluorescence lifetimes of species in subdiffraction-limited volumes in, for example, stimulated emission depletion (STED) microscopy, entails the dual challenge of probing a small number of fluorophores and fitting the concomitant sparse data set to the appropriate excited-state decay function. This need has stimulated a further investigation into the relative merits of two fitting techniques commonly referred to as “residual minimization” (RM) and “maximum likelihood” (ML). Fluorescence decays of the well-characterized standard, rose bengal in methanol at room temperature (530 ± 10 ps), were acquired in a set of five experiments in which the total number of “photon counts” was approximately 20, 200, 1000, 3000, and 6000 and there were about 2–200 counts at the maxima of the respective decays. Each set of experiments was repeated 50 times to generate the appropriate statistics. Each of the 250 data sets was analyzed by ML and two different RM methods (differing in the weighting of residuals) using in-house routines and compared with a frequently used commercial RM routine. Convolution with a real instrument response function was always included in the fitting. While RM using Pearson’s weighting of residuals can recover the correct mean result with a total number of counts of 1000 or more, ML distinguishes itself by yielding, in all cases, the same mean lifetime within 2% of the accepted value. For 200 total counts and greater, ML always provides a standard deviation of <10% of the mean lifetime, and even at 20 total counts there is only 20% error in the mean lifetime. Here, the robustness of ML advocates its use for sparse data sets such as those acquired in some subdiffraction-limited microscopies, such as STED, and, more importantly, provides greater motivation for exploiting the time-resolved capacities of this technique to acquire and analyze fluorescence lifetime data.
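    The contrast between the two fitting philosophies can be sketched for a simplified mono-exponential toy problem, fitting the same synthetic decay by Pearson-weighted residual minimization and by Poisson maximum likelihood. The count level, binning, and omission of the instrument-response convolution are simplifying assumptions relative to the experiments described above.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 5.0, 256)                         # time bins (ns)
    true_tau = 0.53                                        # ~530 ps lifetime, as for rose bengal
    counts = rng.poisson(40.0 * np.exp(-t / true_tau))     # sparse synthetic decay, no IRF

    def model(params):
        log_amp, log_tau = params                          # log-parameters keep amplitude and tau positive
        return np.exp(log_amp) * np.exp(-t / np.exp(log_tau))

    def neg_log_likelihood(params):                        # Poisson maximum-likelihood objective
        mu = np.clip(model(params), 1e-12, None)
        return np.sum(mu - counts * np.log(mu))

    def pearson_chi2(params):                              # residual minimization with Pearson weights
        mu = np.clip(model(params), 1e-12, None)
        return np.sum((counts - mu) ** 2 / mu)

    x0 = [np.log(counts.max() + 1.0), 0.0]
    ml = minimize(neg_log_likelihood, x0, method="Nelder-Mead")
    rm = minimize(pearson_chi2, x0, method="Nelder-Mead")
    print("ML lifetime estimate (ns):", np.exp(ml.x[1]))
    print("RM lifetime estimate (ns):", np.exp(rm.x[1]))
    ```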

  10. Assessing neglect dyslexia with compound words.

    PubMed

    Reinhart, Stefan; Schunck, Alexander; Schaadt, Anna Katharina; Adams, Michaela; Simon, Alexandra; Kerkhoff, Georg

    2016-10-01

    The neglect syndrome is frequently associated with neglect dyslexia (ND), which is characterized by omissions or misread initial letters of single words. ND is usually assessed with standardized reading texts in clinical settings. However, particularly in the chronic phase of ND, patients often report reading deficits in everyday situations but show (nearly) normal performances in test situations that are commonly well-structured. To date, sensitive and standardized tests to assess the severity and characteristics of ND are lacking, although reading is of high relevance for daily life and vocational settings. Several studies found modulating effects of different word features on ND. We combined those features in a novel test to enhance test sensitivity in the assessment of ND. Low-frequency words of different length that contain residual pronounceable words when the initial letter strings are neglected were selected. We compared these words in a group of 12 ND-patients suffering from right-hemispheric first-ever stroke with word stimuli containing no existing residual words. Finally, we tested whether the serially presented words are more sensitive for the diagnosis of ND than text reading. The severity of ND was modulated strongly by the ND-test words and error frequencies in single word reading of ND words were on average more than 10 times higher than in a standardized text reading test (19.8% vs. 1.8%). The novel ND-test maximizes the frequency of specific ND-errors and is therefore more sensitive for the assessment of ND than conventional text reading tasks. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  11. Specificity of Atmosphere Correction of Satellite Ocean Color Data in Far-Eastern Region

    NASA Astrophysics Data System (ADS)

    Trusenkova, O.; Kachur, V.; Aleksanin, A. I.

    2016-02-01

    An error analysis of satellite reflectance coefficients (Rrs) from MODIS/AQUA ocean colour data was carried out for two atmospheric correction algorithms (NIR and MUMM) in the Far-Eastern region. Several unique sets of matched in situ and satellite measurements were analysed; each set contains ASD spectroradiometer measurements for every satellite pass, with measurement locations selected so that the chlorophyll-a concentration varies strongly. Analysis of an arbitrary set showed that the dominant error component is a systematic error with a simple dependence on the Rrs values, and the reasons for this error behaviour are considered. The most probable explanation of the large errors in ocean colour parameters in the Far-Eastern region is the possible presence of high concentrations of continental aerosol. A comparison of satellite and in situ measurements at AERONET stations in the USA and South Korea was also made. It showed that, for NIR correction of the atmospheric influence, the error values in these two regions differ by up to a factor of 10 for almost the same water turbidity and comparably good accuracy in the computed aerosol optical thickness. The study was supported by Russian Scientific Foundation grant No. 14-50-00034, Russian Foundation of Basic Research grant No. 15-35-21032-mol-a-ved, and the Program of Basic Research "Far East" of the Far Eastern Branch of the Russian Academy of Sciences.

  12. Scale invariant feature transform in adaptive radiation therapy: a tool for deformable image registration assessment and re-planning indication

    NASA Astrophysics Data System (ADS)

    Paganelli, Chiara; Peroni, Marta; Riboldi, Marco; Sharp, Gregory C.; Ciardo, Delia; Alterio, Daniela; Orecchia, Roberto; Baroni, Guido

    2013-01-01

    Adaptive radiation therapy (ART) aims at compensating for anatomic and pathological changes to improve delivery along a treatment fraction sequence. Current ART protocols require time-consuming manual updating of all volumes of interest on the images acquired during treatment. Deformable image registration (DIR) and contour propagation stand as a state-of-the-art method to automate the process, but the lack of DIR quality control methods hinders its introduction into clinical practice. We investigated the scale invariant feature transform (SIFT) method as a quantitative automated tool (1) for DIR evaluation and (2) for re-planning decision-making in the framework of ART treatments. As a preliminary test, SIFT invariance properties at shape-preserving and deformable transformations were studied on a computational phantom, granting residual matching errors below the voxel dimension. Then a clinical dataset composed of 19 head and neck ART patients was used to quantify the performance in ART treatments. For goal (1), results demonstrated SIFT's potential as an operator-independent DIR quality assessment metric. We measured DIR group systematic residual errors up to 0.66 mm against 1.35 mm provided by rigid registration. The group systematic errors of both bony and all other structures were also analyzed, attesting to the presence of anatomical deformations. The correct automated identification, using SIFT, of 18 patients who might benefit from ART out of the total 22 cases demonstrated its capability toward achieving goal (2).
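    A minimal sketch of how matched SIFT features can quantify residual registration error between two 2D slices is given below, using OpenCV rather than the authors' pipeline; the file names are placeholders, and a real DIR assessment would operate on 3D volumes and convert pixel displacements to millimetres using the voxel spacing.

    ```python
    import numpy as np
    import cv2

    # Load a planning slice and a registered daily slice (file names are placeholders).
    fixed = cv2.imread("planning_ct_slice.png", cv2.IMREAD_GRAYSCALE)
    moving = cv2.imread("registered_daily_slice.png", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(fixed, None)
    kp2, des2 = sift.detectAndCompute(moving, None)

    # Match descriptors and keep unambiguous matches (Lowe ratio test).
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [pair[0] for pair in matches
            if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance]

    # Residual displacement of each matched feature after registration, summarized in pixels.
    residuals = np.array([np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt) for m in good])
    print("mean residual displacement (px):", np.linalg.norm(residuals, axis=1).mean())
    ```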

  13. Animation and radiobiological analysis of 3D motion in conformal radiotherapy.

    PubMed

    MacKay, R I; Graham, P A; Moore, C J; Logue, J P; Sharrock, P J

    1999-07-01

    To allow treatment plans to be evaluated against the range of organ motion and set-up error anticipated during treatment, planning tools have been developed that allow concurrent animation and radiobiological analysis of three-dimensional (3D) target and organ motion in conformal radiotherapy. Surfaces fitted to structures outlined on CT studies are projected onto pre-treatment images or onto megavoltage images collected during the patient treatment. Visual simulation of tumour and normal tissue movement is then performed by applying three-dimensional affine transformations to the selected surface. Concurrent registration of the surface motion with the 3D dose distribution allows calculation of the change in dose to the volume. Realistic patterns of motion can be applied to the structure to simulate inter-fraction motion and set-up error. The biologically effective dose for the structure is calculated for each fraction as the surface moves over the course of the treatment and is used to calculate the normal tissue complication probability (NTCP) or tumour control probability (TCP) for the moving structure. The tool has been used to evaluate conformal therapy plans against set-up measurements recorded during patient treatments. NTCP and TCP were calculated for a patient whose set-up had been corrected after systematic deviations from plan geometry were measured during treatment; the effect of not making the correction was also assessed. TCP for the moving tumour was reduced if inadequate margins were set for the treatment. Modelling suggests that smaller margins could have been set for the set-up corrected during the course of the treatment. The NTCP for the rectum was also higher for the uncorrected set-up due to more rectal tissue falling in the high-dose region. This approach provides a simple way for clinical users to utilise information incrementally collected throughout the whole of a patient's treatment. In particular, it is possible to test the robustness of a patient plan against a range of possible motion patterns. The methods described represent a move from the inspection of static pre-treatment plans to a review of the dynamic treatment.

  14. Multiple Intravenous Infusions Phase 2b: Laboratory Study

    PubMed Central

    Pinkney, Sonia; Fan, Mark; Chan, Katherine; Koczmara, Christine; Colvin, Christopher; Sasangohar, Farzan; Masino, Caterina; Easty, Anthony; Trbovich, Patricia

    2014-01-01

    Background: Administering multiple intravenous (IV) infusions to a single patient via infusion pump occurs routinely in health care, but there has been little empirical research examining the risks associated with this practice or ways to mitigate those risks. Objectives: To identify the risks associated with multiple IV infusions and assess the impact of interventions on nurses’ ability to safely administer them. Data Sources and Review Methods: Forty nurses completed infusion-related tasks in a simulated adult intensive care unit, with and without interventions (i.e., repeated-measures design). Results: Errors were observed in completing common tasks associated with the administration of multiple IV infusions, including the following (all values from baseline, which was current practice): setting up and programming multiple primary continuous IV infusions (e.g., 11.7% programming errors); identifying IV infusions (e.g., 7.7% line-tracing errors); managing dead volume (e.g., 96.0% flush rate errors following IV syringe dose administration); setting up a secondary intermittent IV infusion (e.g., 11.3% secondary clamp errors); and administering an IV pump bolus (e.g., 11.5% programming errors). Of 10 interventions tested, 6 (1 practice, 3 technology, and 2 educational) significantly decreased or even eliminated errors compared to baseline. Limitations: The simulation of an adult intensive care unit at 1 hospital limited the ability to generalize results. The study results were representative of nurses who received training in the interventions but had little experience using them. The longitudinal effects of the interventions were not studied. Conclusions: Administering and managing multiple IV infusions is a complex and risk-prone activity. However, when a patient requires multiple IV infusions, targeted interventions can reduce identified risks. A combination of standardized practice, technology improvements, and targeted education is required. PMID:26316919

  15. An Empirical State Error Covariance Matrix Orbit Determination Example

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2015-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. Then it follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully include all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty and whether the source is anticipated or not. It is expected that the empirical error covariance matrix will give a better statistical representation of the state error in poorly modeled systems or when sensor performance is suspect. In its most straightforward form, the technique only requires supplemental calculations to be added to existing batch estimation algorithms. In the current problem being studied, a truth model making use of gravity with spherical, J2 and J4 terms plus a standard exponential type atmosphere with simple diurnal and random walk components is used. The ability of the empirical state error covariance matrix to account for errors is investigated under four scenarios during orbit estimation. These scenarios are: exact modeling under known measurement errors, exact modeling under corrupted measurement errors, inexact modeling under known measurement errors, and inexact modeling under corrupted measurement errors. For this problem a simple analog of a distributed space surveillance network is used. The sensors in this network make only range measurements, with simple normally distributed measurement errors. The sensors are assumed to have full horizon to horizon viewing at any azimuth. For definiteness, an orbit at the approximate altitude and inclination of the International Space Station is used for the study. The comparison analyses of the data involve only total vectors. No investigation of specific orbital elements is undertaken. The total vector analyses will look at the chi-square values of the error in the difference between the estimated state and the true modeled state, using both the empirical and theoretical error covariance matrices for each scenario.
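    A toy linear least-squares sketch of the underlying idea is given below: the residuals of the fit are folded back into the covariance through the average weighted residual variance, so that unmodeled error sources inflate the state uncertainty. The linear problem and the simple variance-factor scaling are assumed stand-ins for the batch orbit-determination formulation described above.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Toy linear batch estimation problem: y = H x + noise
    n_obs, n_state = 200, 3
    H = rng.normal(size=(n_obs, n_state))
    x_true = np.array([1.0, -2.0, 0.5])
    sigma_assumed = 0.1                       # assumed measurement noise
    sigma_actual = 0.3                        # actual (mismodeled) noise
    y = H @ x_true + rng.normal(scale=sigma_actual, size=n_obs)

    W = np.eye(n_obs) / sigma_assumed**2      # weights built from the *assumed* noise
    P_theoretical = np.linalg.inv(H.T @ W @ H)            # traditional covariance
    x_hat = P_theoretical @ H.T @ W @ y

    residuals = y - H @ x_hat
    # Scale the theoretical covariance by the average weighted residual variance,
    # so unmodeled error sources show up in the state covariance.
    scale = (residuals.T @ W @ residuals) / (n_obs - n_state)
    P_empirical = scale * P_theoretical

    print("theoretical 1-sigma state errors:", np.sqrt(np.diag(P_theoretical)))
    print("empirical 1-sigma state errors  :", np.sqrt(np.diag(P_empirical)))
    ```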

  16. Analysis of the effects of Eye-Tracker performance on the pulse positioning errors during refractive surgery

    PubMed Central

    Arba-Mosquera, Samuel; Aslanides, Ioannis M.

    2012-01-01

    Purpose To analyze the effects of Eye-Tracker performance on the pulse positioning errors during refractive surgery. Methods A comprehensive model, which directly considers eye movements, including saccades, vestibular, optokinetic, vergence, and miniature, as well as, eye-tracker acquisition rate, eye-tracker latency time, scanner positioning time, laser firing rate, and laser trigger delay have been developed. Results Eye-tracker acquisition rates below 100 Hz correspond to pulse positioning errors above 1.5 mm. Eye-tracker latency times to about 15 ms correspond to pulse positioning errors of up to 3.5 mm. Scanner positioning times to about 9 ms correspond to pulse positioning errors of up to 2 mm. Laser firing rates faster than eye-tracker acquisition rates basically duplicate pulse-positioning errors. Laser trigger delays to about 300 μs have minor to no impact on pulse-positioning errors. Conclusions The proposed model can be used for comparison of laser systems used for ablation processes. Due to the pseudo-random nature of eye movements, positioning errors of single pulses are much larger than observed decentrations in the clinical settings. There is no single parameter that ‘alone’ minimizes the positioning error. It is the optimal combination of the several parameters that minimizes the error. The results of this analysis are important to understand the limitations of correcting very irregular ablation patterns.

  17. Simultaneous determination of emamectin and ivermectin residues in Atlantic salmon muscle by liquid chromatography with fluorescence detection.

    PubMed

    van de Riet, J M; Brothers, N N; Pearce, J N; Burns, B G

    2001-01-01

    A liquid chromatographic (LC) method for determining residues of the antiparasitic drugs emamectin (EMA) and ivermectin (IVR) in fish tissues has been developed. EMA and IVR residues are extracted with acetonitrile and cleaned up on a C18 solid-phase extraction column. Extracts are derivatized with 1-methylimidazole and trifluoroacetic anhydride and the components are determined by LC on a C18 reversed-phase column with fluorescence detection (excitation: 365 nm, emission: 470 nm). The mobile phase is 94% acetonitrile-water run isocratically. Calibration curves were linear between 1 and 32 ng injected for both EMA and IVR. The limit of detection for both analytes was 0.5 ng/g, with a limit of quantitation of 1.5 ng/g. Recoveries of EMA and IVR added to salmon muscle averaged 96 +/- 9% and 86 +/- 6%, respectively, at levels between 5 and 80 ng/g. The percent relative standard deviation for the described method was less than 7% over the range of concentrations studied. The operational errors, interferences, and recoveries for fortified samples compare favorably with an established IVR method. The recommended method is simple, rapid, and specific for monitoring residues of EMA and IVR in Atlantic salmon muscle.

  18. Correcting the Standard Errors of 2-Stage Residual Inclusion Estimators for Mendelian Randomization Studies.

    PubMed

    Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A

    2017-11-01

    Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.
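    For readers unfamiliar with the estimator, the sketch below computes a two-stage residual inclusion estimate on simulated data with statsmodels and attaches a nonparametric bootstrap standard error, one of the alternatives compared in the paper; the simulated genotype, confounder, and effect size are illustrative assumptions, and the Newey and Terza analytic corrections are not reproduced here.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n = 5000
    g = rng.binomial(2, 0.3, size=n).astype(float)     # genotype used as instrument
    u = rng.normal(size=n)                             # unobserved confounder
    x = 0.5 * g + u + rng.normal(size=n)               # exposure
    y = 0.7 * x + u + rng.normal(size=n)               # outcome, true causal effect 0.7

    def tsri(x, y, g):
        stage1 = sm.OLS(x, sm.add_constant(g)).fit()
        resid = x - stage1.fittedvalues                # first-stage residual
        stage2 = sm.OLS(y, sm.add_constant(np.column_stack([x, resid]))).fit()
        return stage2.params[1]                        # coefficient on the exposure

    point = tsri(x, y, g)
    boot = []
    for _ in range(500):                               # nonparametric bootstrap over individuals
        idx = rng.integers(0, n, size=n)
        boot.append(tsri(x[idx], y[idx], g[idx]))
    print(f"TSRI estimate = {point:.3f}, bootstrap SE = {np.std(boot, ddof=1):.3f}")
    ```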

  19. Notice of Violation of IEEE Publication Principles: Joint Redundant Residue Number Systems and Module Isolation for Mitigating Single Event Multiple Bit Upsets in Datapath

    NASA Astrophysics Data System (ADS)

    Li, Lei; Hu, Jianhao

    2010-12-01

    Notice of Violation of IEEE Publication Principles"Joint Redundant Residue Number Systems and Module Isolation for Mitigating Single Event Multiple Bit Upsets in Datapath"by Lei Li and Jianhao Hu,in the IEEE Transactions on Nuclear Science, vol.57, no.6, Dec. 2010, pp. 3779-3786After careful and considered review of the content and authorship of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.This paper contains substantial duplication of original text from the paper cited below. The original text was copied without attribution (including appropriate references to the original author(s) and/or paper title) and without permission.Due to the nature of this violation, reasonable effort should be made to remove all past references to this paper, and future references should be made to the following articles:"Multiple Error Detection and Correction Based on Redundant Residue Number Systems"by Vik Tor Goh and M.U. Siddiqi,in the IEEE Transactions on Communications, vol.56, no.3, March 2008, pp.325-330"A Coding Theory Approach to Error Control in Redundant Residue Number Systems. I: Theory and Single Error Correction"by H. Krishna, K-Y. Lin, and J-D. Sun, in the IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, vol.39, no.1, Jan 1992, pp.8-17In this paper, we propose a joint scheme which combines redundant residue number systems (RRNS) with module isolation (MI) for mitigating single event multiple bit upsets (SEMBUs) in datapath. The proposed hardening scheme employs redundant residues to improve the fault tolerance for datapath and module spacings to guarantee that SEMBUs caused by charge sharing do not propagate among the operation channels of different moduli. The features of RRNS, such as independence, parallel and error correction, are exploited to establish the radiation hardening architecture for the datapath in radiation environments. In the proposed scheme, all of the residues can be processed independently, and most of the soft errors in datapath can be corrected with the redundant relationship of the residues at correction module, which is allocated at the end of the datapath. In the back-end implementation, module isolation technique is used to improve the soft error rate performance for RRNS by physically separating the operation channels of different moduli. The case studies show at least an order of magnitude decrease on the soft error rate (SER) as compared to the NonRHBD designs, and demonstrate that RRNS+MI can reduce the SER from 10-12 to 10-17 when the processing steps of datapath are 106. The proposed scheme can even achieve less area and latency overheads than that without radiation hardening, since RRNS can reduce the operational complexity in datapath.
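    The error-correction property the scheme relies on can be illustrated with a small redundant residue number system: an integer is encoded as residues modulo several moduli, one residue channel is corrupted, and a leave-one-out CRT reconstruction recovers the value and locates the faulty channel. The particular moduli, legitimate range, and simple projection-style decoder below are assumptions for illustration, not the hardware architecture of the paper.

    ```python
    from math import prod          # Python 3.8+

    moduli = [7, 11, 13, 17, 19]   # 3 information moduli plus 2 redundant moduli (assumed example set)
    legit_range = 7 * 11 * 13      # legitimate dynamic range spanned by the information moduli

    def crt(residues, mods):
        """Chinese Remainder Theorem reconstruction of an integer from its residues."""
        M = prod(mods)
        x = 0
        for r, m in zip(residues, mods):
            Mi = M // m
            x += r * Mi * pow(Mi, -1, m)   # modular inverse (Python 3.8+)
        return x % M

    def correct_single_error(residues):
        """Leave-one-out projection decoding: drop one residue channel at a time and accept
        the reconstruction that lands in the legitimate range and is consistent with at
        most one corrupted channel."""
        for i in range(len(moduli)):
            candidate = crt(residues[:i] + residues[i + 1:], moduli[:i] + moduli[i + 1:])
            if candidate < legit_range:
                mismatches = [j for j, m in enumerate(moduli) if candidate % m != residues[j]]
                if len(mismatches) <= 1:
                    return candidate, (mismatches[0] if mismatches else None)
        return None, None

    x = 618
    residues = [x % m for m in moduli]
    residues[2] = (residues[2] + 5) % moduli[2]        # simulate a single-event upset in one channel
    value, faulty_channel = correct_single_error(residues)
    print(f"recovered value {value}, corrupted channel index {faulty_channel}")
    ```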

  20. A gamma-ray testing technique for spacecraft [considering cosmic radiation effects]

    NASA Technical Reports Server (NTRS)

    Gribov, B. S.; Repin, N. N.; Sakovich, V. A.; Sakharov, V. M.

    1977-01-01

    The simulated cosmic radiation effect on a spacecraft structure is evaluated by gamma ray testing in relation to structural thickness. A drawing of the test set-up is provided and measurement errors are discussed.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, S; Fan, Q; Lei, Y

    Purpose: In-Water-Output-Ratio (IWOR) plays a significant role in linac-based radiotherapy treatment planning, linking MUs to delivered radiation dose. For an open rectangular field, IWOR depends on both its width and length, and changes rapidly when one of them becomes small. In this study, a universal functional form is proposed to fit the open field IWOR tables in Varian TrueBeam representative datasets for all photon energies. Methods: A novel Generalized Mean formula is first used to estimate the Equivalent Square (ES) for a rectangular field. The formula’s weighting factor and power index are determined by collapsing all data points as much as possible onto a single curve in the IWOR vs. ES plot. The result is then fitted with a novel universal function IWOR=1+b*Log(ES/10cm)/(ES/10cm)^c via a least-square procedure to determine the optimal values for parameters b and c. The maximum relative residual error in IWOR over the entire two-dimensional measurement table with field sizes between 3cm and 40cm is used to evaluate the quality of fit for the function. Results: The two-step fitting strategy works very well in determining the optimal parameter values for open field IWOR of each photon energy in the Varian data-set. Relative residual error ≤0.71% is achieved for all photon energies (including Flattening-Filter-Free modes) with field sizes between 3cm and 40cm. The optimal parameter values change smoothly with regular photon beam quality. Conclusion: The universal functional form fits the Varian TrueBeam open field IWOR measurement tables accurately with small relative residual errors for all photon energies. Therefore, it can be an excellent choice to represent IWOR in absolute dose and MU calculations. The functional form can also be used as a QA/commissioning tool to verify the measured data quality and consistency by checking the IWOR data behavior against the function for new photon energies with arbitrary beam quality.
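    A sketch of the second fitting step is given below: the universal form IWOR = 1 + b*Log(ES/10cm)/(ES/10cm)^c quoted above is fitted by least squares to a small synthetic output-ratio table. The generalized-mean weighting, the synthetic data, and the starting values are assumptions for illustration and do not reproduce the Varian TrueBeam tables.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def equivalent_square(x, y, w=0.5, p=-1.0):
        """Generalized (weighted power) mean of field width X and length Y.
        With w = 0.5 and p = -1 this reduces to the classic 2XY/(X+Y) rule;
        the paper tunes w and p to collapse the measured table onto one curve."""
        return (w * x**p + (1.0 - w) * y**p) ** (1.0 / p)

    def iwor_model(es, b, c):
        """Universal form quoted in the abstract: IWOR = 1 + b*log(ES/10cm)/(ES/10cm)^c."""
        s = es / 10.0
        return 1.0 + b * np.log(s) / s**c

    # Synthetic "measured" rectangular-field output ratios (illustrative numbers only).
    widths = np.array([3.0, 3.0, 5.0, 10.0, 10.0, 20.0, 40.0, 40.0])
    lengths = np.array([3.0, 10.0, 5.0, 10.0, 40.0, 20.0, 10.0, 40.0])
    es = equivalent_square(widths, lengths)
    rng = np.random.default_rng(4)
    measured = iwor_model(es, b=0.08, c=0.6) + rng.normal(0.0, 5e-4, es.size)

    (b_fit, c_fit), _ = curve_fit(iwor_model, es, measured, p0=[0.1, 0.5])
    rel_resid = (measured - iwor_model(es, b_fit, c_fit)) / measured
    print(f"b = {b_fit:.4f}, c = {c_fit:.4f}, max relative residual = {100 * np.abs(rel_resid).max():.3f}%")
    ```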

  2. Tuning and Robustness Analysis for the Orion Absolute Navigation System

    NASA Technical Reports Server (NTRS)

    Holt, Greg N.; Zanetti, Renato; D'Souza, Christopher

    2013-01-01

    The Orion Multi-Purpose Crew Vehicle (MPCV) is currently under development as NASA's next-generation spacecraft for exploration missions beyond Low Earth Orbit. The MPCV is set to perform an orbital test flight, termed Exploration Flight Test 1 (EFT-1), some time in late 2014. The navigation system for the Orion spacecraft is being designed in a Multi-Organizational Design Environment (MODE) team including contractor and NASA personnel. The system uses an Extended Kalman Filter to process measurements and determine the state. The design of the navigation system has undergone several iterations and modifications since its inception, and continues as a work-in-progress. This paper seeks to show the efforts made to date in tuning the filter for the EFT-1 mission and instilling appropriate robustness into the system to meet the requirements of manned spaceflight. Filter performance is affected by many factors: data rates, sensor measurement errors, tuning, and others. This paper focuses mainly on the error characterization and tuning portion. Traditional efforts at tuning a navigation filter have centered around the observation/measurement noise and Gaussian process noise of the Extended Kalman Filter. While the Orion MODE team must certainly address those factors, the team is also looking at residual edit thresholds and measurement underweighting as tuning tools. Tuning analysis is presented with open loop Monte-Carlo simulation results showing statistical errors bounded by the 3-sigma filter uncertainty covariance. The Orion filter design uses 24 Exponentially Correlated Random Variable (ECRV) parameters to estimate the accel/gyro misalignment and nonorthogonality. By design, the time constant and noise terms of these ECRV parameters were set to manufacturer specifications and not used as tuning parameters. They are included in the filter as a more analytically correct method of modeling uncertainties than ad-hoc tuning of the process noise. Tuning is explored for the powered-flight ascent phase, where measurements are scarce and unmodelled vehicle accelerations dominate. On orbit, there are important trade-off cases between process and measurement noise. On entry, there are considerations about trading performance accuracy for robustness. Process noise is divided into powered flight and coasting flight and can be adjusted for each phase and mode of the Orion EFT-1 mission. Measurement noise is used for the integrated velocity measurements during pad alignment. It is also used for Global Positioning System (GPS) pseudorange and delta-range measurements during the rest of the flight. The robustness effort has been focused on maintaining filter convergence and performance in the presence of unmodeled error sources. These include unmodeled forces on the vehicle and uncorrected errors on the sensor measurements. Orion uses a single-frequency, non-keyed GPS receiver, so the effects due to signal distortion in Earth's ionosphere and troposphere are present in the raw measurements. Results are presented showing the efforts to compensate for these errors as well as characterize the residual effect for measurement noise tuning. Another robustness tool in use is tuning the residual edit thresholds. The trade-off between noise tuning and edit thresholds is explored in the context of robustness to errors in dynamics models and sensor measurements.
Measurement underweighting is also presented as a method of additional robustness when processing highly accurate measurements in the presence of large filter uncertainties.

  3. Sampling design for groundwater solute transport: Tests of methods and analysis of Cape Cod tracer test data

    USGS Publications Warehouse

    Knopman, Debra S.; Voss, Clifford I.; Garabedian, Stephen P.

    1991-01-01

    Tests of a one-dimensional sampling design methodology on measurements of bromide concentration collected during the natural gradient tracer test conducted by the U.S. Geological Survey on Cape Cod, Massachusetts, demonstrate its efficacy for field studies of solute transport in groundwater and the utility of one-dimensional analysis. The methodology was applied to design of sparse two-dimensional networks of fully screened wells typical of those often used in engineering practice. In one-dimensional analysis, designs consist of the downstream distances to rows of wells oriented perpendicular to the groundwater flow direction and the timing of sampling to be carried out on each row. The power of a sampling design is measured by its effectiveness in simultaneously meeting objectives of model discrimination, parameter estimation, and cost minimization. One-dimensional models of solute transport, differing in processes affecting the solute and assumptions about the structure of the flow field, were considered for description of tracer cloud migration. When fitting each model using nonlinear regression, additive and multiplicative error forms were allowed for the residuals which consist of both random and model errors. The one-dimensional single-layer model of a nonreactive solute with multiplicative error was judged to be the best of those tested. Results show the efficacy of the methodology in designing sparse but powerful sampling networks. Designs that sample five rows of wells at five or fewer times in any given row performed as well for model discrimination as the full set of samples taken up to eight times in a given row from as many as 89 rows. Also, designs for parameter estimation judged to be good by the methodology were as effective in reducing the variance of parameter estimates as arbitrary designs with many more samples. Results further showed that estimates of velocity and longitudinal dispersivity in one-dimensional models based on data from only five rows of fully screened wells each sampled five or fewer times were practically equivalent to values determined from moments analysis of the complete three-dimensional set of 29,285 samples taken during 16 sampling times.

  4. Tool Wear Monitoring Using Time Series Analysis

    NASA Astrophysics Data System (ADS)

    Song, Dong Yeul; Ohara, Yasuhiro; Tamaki, Haruo; Suga, Masanobu

    A tool wear monitoring approach that considers the nonlinear behavior of the cutting mechanism caused by tool wear and/or localized chipping is proposed, and its effectiveness is verified through cutting experiments and actual turning machining. The variation in the surface roughness of the machined workpiece is also discussed using this approach. In this approach, the residual error between the actually measured vibration signal and the signal estimated from a time series model corresponding to the dynamic model of cutting is introduced as the diagnostic feature. It is found that the early tool wear state (i.e., flank wear under 40 µm) can be monitored, and that the optimal tool exchange time and the tool wear state in actual turning machining can be judged from the change in this residual error. Moreover, the variation of surface roughness Pz in the range of 3 to 8 µm can be estimated by monitoring the residual error.
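    The diagnostic idea can be sketched as follows: an autoregressive time-series model is identified on vibration from a sharp tool, and the one-step prediction residual of that fixed model is then tracked on later signals, growing as wear alters the cutting dynamics. The synthetic vibration signal, AR order, and wear disturbance below are illustrative assumptions, not the authors' measured data or model structure.

    ```python
    import numpy as np
    from statsmodels.tsa.ar_model import AutoReg

    rng = np.random.default_rng(5)

    def vibration(n, wear=0.0):
        """Synthetic cutting-vibration signal; 'wear' adds a chatter component at a new
        frequency to mimic the change in cutting dynamics (illustrative only)."""
        t = np.arange(n)
        return np.sin(0.3 * t) + 0.2 * rng.normal(size=n) + wear * np.sin(0.9 * t)

    order = 8
    reference = vibration(2000, wear=0.0)                      # vibration recorded with a sharp tool
    ar_params = AutoReg(reference, lags=order).fit().params    # [const, lag1, ..., lag8]

    def residual_rms(signal):
        """One-step-ahead prediction error of the fixed reference AR model on a new signal."""
        preds = [ar_params[0] + ar_params[1:] @ signal[i - order:i][::-1]
                 for i in range(order, len(signal))]
        return np.sqrt(np.mean((signal[order:] - np.array(preds)) ** 2))

    for wear in (0.0, 0.3, 0.6):
        print(f"wear level {wear:.1f}: residual RMS = {residual_rms(vibration(2000, wear)):.3f}")
    ```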

  5. Managing residual refractive error after cataract surgery.

    PubMed

    Sáles, Christopher S; Manche, Edward E

    2015-06-01

    We present a review of keratorefractive and intraocular approaches to managing residual astigmatic and spherical refractive error after cataract surgery, including laser in situ keratomileusis (LASIK), photorefractive keratectomy (PRK), arcuate keratotomy, intraocular lens (IOL) exchange, piggyback IOLs, and light-adjustable IOLs. Currently available literature suggests that laser vision correction, whether LASIK or PRK, yields more effective and predictable outcomes than intraocular surgery. Piggyback IOLs with a rounded-edge profile implanted in the sulcus may be superior to IOL exchange, but both options present potential risks that likely outweigh the refractive benefits except in cases with large residual spherical errors. The light-adjustable IOL may provide an ideal treatment to pseudophakic ametropia by obviating the need for secondary invasive procedures after cataract surgery, but it is not widely available nor has it been sufficiently studied. Copyright © 2015 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  6. Patient motion effects on the quantification of regional myocardial blood flow with dynamic PET imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunter, Chad R. R. N.; Kemp, Robert A. de, E-mail: RAdeKemp@ottawaheart.ca; Klein, Ran

    Purpose: Patient motion is a common problem during dynamic positron emission tomography (PET) scans for quantification of myocardial blood flow (MBF). The purpose of this study was to quantify the prevalence of body motion in a clinical setting and evaluate with realistic phantoms the effects of motion on blood flow quantification, including CT attenuation correction (CTAC) artifacts that result from PET–CT misalignment. Methods: A cohort of 236 sequential patients was analyzed for patient motion under resting and peak stress conditions by two independent observers. The presence of motion, affected time-frames, and direction of motion was recorded; discrepancy between observers was resolved by consensus review. Based on these results, patient body motion effects on MBF quantification were characterized using the digital NURBS-based cardiac-torso phantom, with characteristic time activity curves (TACs) assigned to the heart wall (myocardium) and blood regions. Simulated projection data were corrected for attenuation and reconstructed using filtered back-projection. All simulations were performed without noise added, and a single CT image was used for attenuation correction and aligned to the early- or late-frame PET images. Results: In the patient cohort, mild motion of 0.5 ± 0.1 cm occurred in 24% and moderate motion of 1.0 ± 0.3 cm occurred in 38% of patients. Motion in the superior/inferior direction accounted for 45% of all detected motion, with 30% in the superior direction. Anterior/posterior motion was predominant (29%) in the posterior direction. Left/right motion occurred in 24% of cases, with similar proportions in the left and right directions. Computer simulation studies indicated that errors in MBF can approach 500% for scans with severe patient motion (up to 2 cm). The largest errors occurred when the heart wall was shifted left toward the adjacent lung region, resulting in a severe undercorrection for attenuation of the heart wall. Simulations also indicated that the magnitude of MBF errors resulting from motion in the superior/inferior and anterior/posterior directions was similar (up to 250%). Body motion effects were more detrimental for higher resolution PET imaging (2 vs 10 mm full-width at half-maximum), and for motion occurring during the mid-to-late time-frames. Motion correction of the reconstructed dynamic image series resulted in significant reduction in MBF errors, but did not account for the residual PET–CTAC misalignment artifacts. MBF bias was reduced further using global partial-volume correction, and using dynamic alignment of the PET projection data to the CT scan for accurate attenuation correction during image reconstruction. Conclusions: Patient body motion can produce MBF estimation errors up to 500%. To reduce these errors, new motion correction algorithms must be effective in identifying motion in the left/right direction, and in the mid-to-late time-frames, since these conditions produce the largest errors in MBF, particularly for high resolution PET imaging. Ideally, motion correction should be done before or during image reconstruction to eliminate PET-CTAC misalignment artifacts.

  7. Optimum data weighting and error calibration for estimation of gravitational parameters

    NASA Technical Reports Server (NTRS)

    Lerch, F. J.

    1989-01-01

    A new technique was developed for the weighting of data from satellite tracking systems in order to obtain an optimum least squares solution and an error calibration for the solution parameters. Data sets from optical, electronic, and laser systems on 17 satellites in GEM-T1 (Goddard Earth Model, 36x36 spherical harmonic field) were employed toward application of this technique for gravity field parameters. Also, GEM-T2 (31 satellites) was recently computed as a direct application of the method and is summarized here. The method employs subset solutions of the data associated with the complete solution and uses an algorithm to adjust the data weights by requiring the differences of parameters between solutions to agree with their error estimates. With the adjusted weights the process provides for an automatic calibration of the error estimates for the solution parameters. The data weights derived are generally much smaller than corresponding weights obtained from nominal values of observation accuracy or residuals. Independent tests show significant improvement for solutions with optimal weighting as compared to the nominal weighting. The technique is general and may be applied to orbit parameters, station coordinates, or other parameters than the gravity model.

  8. Feasibility of Equivalent Dipole Models for Electroencephalogram-Based Brain Computer Interfaces.

    PubMed

    Schimpf, Paul H

    2017-09-15

    This article examines the localization errors of equivalent dipolar sources inverted from the surface electroencephalogram in order to determine the feasibility of using their location as classification parameters for non-invasive brain computer interfaces. Inverse localization errors are examined for two head models: a model represented by four concentric spheres and a realistic model based on medical imagery. It is shown that the spherical model results in localization ambiguity such that a number of dipolar sources, with different azimuths and varying orientations, provide a near match to the electroencephalogram of the best equivalent source. No such ambiguity exists for the elevation of inverted sources, indicating that for spherical head models, only the elevation of inverted sources (and not the azimuth) can be expected to provide meaningful classification parameters for brain-computer interfaces. In a realistic head model, all three parameters of the inverted source location are found to be reliable, providing a more robust set of parameters. In both cases, the residual error hypersurfaces demonstrate local minima, indicating that a search for the best-matching sources should be global. Source localization error vs. signal-to-noise ratio is also demonstrated for both head models.

  9. A description of medication errors reported by pharmacists in a neonatal intensive care unit.

    PubMed

    Pawluk, Shane; Jaam, Myriam; Hazi, Fatima; Al Hail, Moza Sulaiman; El Kassem, Wessam; Khalifa, Hanan; Thomas, Binny; Abdul Rouf, Pallivalappila

    2017-02-01

    Background Patients in the Neonatal Intensive Care Unit (NICU) are at an increased risk for medication errors. Objective The objective of this study is to describe the nature and setting of medication errors occurring in patients admitted to an NICU in Qatar based on a standard electronic system reported by pharmacists. Setting Neonatal intensive care unit, Doha, Qatar. Method This was a retrospective cross-sectional study on medication errors reported electronically by pharmacists in the NICU between January 1, 2014 and April 30, 2015. Main outcome measure Data collected included patient information, and incident details including error category, medications involved, and follow-up completed. Results A total of 201 NICU pharmacists-reported medication errors were submitted during the study period. All reported errors did not reach the patient and did not cause harm. Of the errors reported, 98.5% occurred in the prescribing phase of the medication process with 58.7% being due to calculation errors. Overall, 53 different medications were documented in error reports with the anti-infective agents being the most frequently cited. The majority of incidents indicated that the primary prescriber was contacted and the error was resolved before reaching the next phase of the medication process. Conclusion Medication errors reported by pharmacists occur most frequently in the prescribing phase of the medication process. Our data suggest that error reporting systems need to be specific to the population involved. Special attention should be paid to frequently used medications in the NICU as these were responsible for the greatest numbers of medication errors.

  10. Model and algorithm based on accurate realization of dwell time in magnetorheological finishing.

    PubMed

    Song, Ci; Dai, Yifan; Peng, Xiaoqiang

    2010-07-01

    Classically, a dwell-time map is created with a method such as deconvolution or numerical optimization, with the input being a surface error map and influence function. This dwell-time map is the numerical optimum for minimizing residual form error, but it takes no account of machine dynamics limitations. The map is then reinterpreted as machine speeds and accelerations or decelerations in a separate operation. In this paper we consider combining the two steps in a single optimization through a constrained nonlinear optimization model, which takes both the two-norm of the surface residual error and the dwell-time gradient as the objective function. This enables machine dynamic limitations to be properly considered within the scope of the optimization, reducing both the residual surface error and the polishing time. Simulations are presented to demonstrate the feasibility of the model, and the velocity map is reinterpreted from the dwell time, meeting the velocity requirements and the acceleration and deceleration limits. The model and algorithm can also be applied to other computer-controlled subaperture methods.
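
    A hedged one-dimensional sketch of the combined formulation: the dwell time is found by minimizing the two-norm of the surface residual together with a penalty on the dwell-time gradient (a stand-in for the machine's acceleration limits), subject to non-negative dwell. The influence function, problem sizes and the parameter mu are illustrative, not the authors' model.

```python
import numpy as np
from scipy.optimize import lsq_linear

# One-dimensional toy dwell-time problem: surface error e sampled at m points,
# a Gaussian influence (removal) function, and dwell time d at n tool positions.
m, n = 120, 120
x = np.linspace(-1.0, 1.0, m)
e = 0.4 + 0.3 * np.cos(3.0 * x) + 0.1 * x**2            # surface error (arbitrary units)

pos = np.linspace(-1.0, 1.0, n)
A = np.exp(-((x[:, None] - pos[None, :]) ** 2) / (2.0 * 0.08**2))   # influence matrix

# First-difference operator: penalising ||D d|| limits the dwell-time gradient,
# i.e. the accelerations/decelerations the machine would have to execute.
D = np.diff(np.eye(n), axis=0)

def solve(mu):
    """Minimise ||A d - e||^2 + mu^2 ||D d||^2 subject to d >= 0."""
    A_aug = np.vstack([A, mu * D])
    b_aug = np.concatenate([e, np.zeros(n - 1)])
    return lsq_linear(A_aug, b_aug, bounds=(0.0, np.inf)).x

for mu in (0.0, 0.5, 2.0):
    d = solve(mu)
    resid = np.linalg.norm(A @ d - e)
    step = np.abs(np.diff(d)).max()
    print(f"mu = {mu:3.1f}   residual = {resid:.4f}   max dwell-time step = {step:.4f}")
```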

  11. Residual Sleep Disturbances in Patients Remitted From Major Depressive Disorder: A 4-Year Naturalistic Follow-up Study

    PubMed Central

    Li, Shirley X.; Lam, Siu P.; Chan, Joey W. Y.; Yu, Mandy W. M.; Wing, Yun-Kwok

    2012-01-01

    Study Objectives: To investigate the prevalence and clinical, psychosocial, and functional correlates of residual sleep disturbances in remitted depressed outpatients. Design: A 4-yr prospective observational study in a cohort of psychiatric outpatients with major depressive disorder was conducted with a standardized diagnostic psychiatric interview and a packet of questionnaires, including a sleep questionnaire, Hospital Anxiety and Depression Scale, NEO personality inventory, and Short Form-12 Health Survey. Settings: A university-affiliated psychiatric outpatient clinic. Interventions: N/A Measurements and Results: Four hundred twenty-one depressed outpatients were recruited at baseline, and 371 patients (mean age 44.6 ± 10.4 yr, female 81.8%; response rate 88.1%) completed the reassessments, of whom 41% were classified as remitted cases. One-year prevalence of frequent insomnia at baseline and follow-up in remitted patients was 38.0% and 19.3%, respectively. One-year prevalence of frequent nightmares at baseline and follow-up was 24.0% and 9.3%, respectively. Remitted patients with residual insomnia were more likely to be divorced (P < 0.05) and scored higher on the anxiety subscale (P < 0.05). Remitted patients with residual nightmares were younger (P < 0.05) and scored higher on neuroticism (P < 0.05) and anxiety subscales (P < 0.01). Residual insomnia and nightmares were associated with various aspects of impaired quality of life. Residual nightmares were associated with suicidal ideation (odds ratio = 8.40; 95% confidence interval 1.79-39.33). Conclusions: Residual sleep disturbances, including insomnia and nightmares, were commonly reported in remitted depressed patients with impaired quality of life and suicidal ideation. A constellation of psychosocial and personality factors, baseline sleep disturbances, and comorbid anxiety symptoms may account for the residual sleep disturbances. Routine assessment and management of sleep symptoms are indicated in the integrated management of depression. Citation: Li SX; Lam SP; Chan JWY; Yu MWM; Wing YK. Residual sleep disturbances in patients remitted from major depressive disorder: a 4-year naturalistic follow-up study. SLEEP 2012;35(8):1153-1161. PMID:22851811

  12. New Parameters for Higher Accuracy in the Computation of Binding Free Energy Differences upon Alanine Scanning Mutagenesis on Protein-Protein Interfaces.

    PubMed

    Simões, Inês C M; Costa, Inês P D; Coimbra, João T S; Ramos, Maria J; Fernandes, Pedro A

    2017-01-23

    Knowing how proteins make stable complexes enables the development of inhibitors to preclude protein-protein (P:P) binding. The identification of the specific interfacial residues that contribute most to protein binding, denoted hot spots, is thus critical. Here, we refine an in silico alanine scanning mutagenesis protocol, based on a residue-dependent dielectric constant version of the Molecular Mechanics/Poisson-Boltzmann Surface Area method. We have used a large data set of structurally diverse P:P complexes to redefine the residue-dependent dielectric constants used in the determination of binding free energies. The accuracy of the method was validated through comparison with experimental data, considering the per-residue P:P binding free energy (ΔΔGbinding) differences upon alanine mutation. Different protocols were tested, i.e., a geometry optimization protocol and three molecular dynamics (MD) protocols: (1) one using explicit water molecules, (2) another with an implicit solvation model, and (3) a third where we have carried out an accelerated MD with explicit water molecules. Using a set of protein dielectric constants (within the range from 1 to 20), we showed that dielectric constants of 7 for nonpolar and polar residues and 11 for charged residues (and histidine) provide optimal ΔΔGbinding predictions. An overall mean unsigned error (MUE) of 1.4 kcal mol-1 relative to the experiment was achieved in 210 mutations with geometry optimization only, which was further reduced with MD simulations (MUE of 1.1 kcal mol-1 for the MD employing explicit solvent). This recalibrated method allows for a better computational identification of hot spots, avoiding expensive and time-consuming experiments or thermodynamic integration/free energy perturbation/uBAR calculations, and will hopefully help new drug discovery campaigns in their quest for spots of interest for binding small drug-like molecules at P:P interfaces.

  13. Ketamine Effects on Memory Reconsolidation Favor a Learning Model of Delusions

    PubMed Central

    Gardner, Jennifer M.; Piggot, Jennifer S.; Turner, Danielle C.; Everitt, Jessica C.; Arana, Fernando Sergio; Morgan, Hannah L.; Milton, Amy L.; Lee, Jonathan L.; Aitken, Michael R. F.; Dickinson, Anthony; Everitt, Barry J.; Absalom, Anthony R.; Adapa, Ram; Subramanian, Naresh; Taylor, Jane R.; Krystal, John H.; Fletcher, Paul C.

    2013-01-01

    Delusions are the persistent and often bizarre beliefs that characterise psychosis. Previous studies have suggested that their emergence may be explained by disturbances in prediction error-dependent learning. Here we set up complementary studies in order to examine whether such a disturbance also modulates memory reconsolidation and hence explains their remarkable persistence. First, we quantified individual brain responses to prediction error in a causal learning task in 18 human subjects (8 female). Next, a placebo-controlled within-subjects study of the impact of ketamine was set up on the same individuals. We determined the influence of this NMDA receptor antagonist (previously shown to induce aberrant prediction error signal and lead to transient alterations in perception and belief) on the evolution of a fear memory over a 72 hour period: they initially underwent Pavlovian fear conditioning; 24 hours later, during ketamine or placebo administration, the conditioned stimulus (CS) was presented once, without reinforcement; memory strength was then tested again 24 hours later. Re-presentation of the CS under ketamine led to a stronger subsequent memory than under placebo. Moreover, the degree of strengthening correlated with individual vulnerability to ketamine's psychotogenic effects and with prediction error brain signal. This finding was partially replicated in an independent sample with an appetitive learning procedure (in 8 human subjects, 4 female). These results suggest a link between altered prediction error, memory strength and psychosis. They point to a core disruption that may explain not only the emergence of delusional beliefs but also their persistence. PMID:23776445

  14. Accounting for hardware imperfections in EIT image reconstruction algorithms.

    PubMed

    Hartinger, Alzbeta E; Gagnon, Hervé; Guardo, Robert

    2007-07-01

    Electrical impedance tomography (EIT) is a non-invasive technique for imaging the conductivity distribution of a body section. Different types of EIT images can be reconstructed: absolute, time difference and frequency difference. Reconstruction algorithms are sensitive to many errors which translate into image artefacts. These errors generally result from incorrect modelling or inaccurate measurements. Every reconstruction algorithm incorporates a model of the physical set-up which must be as accurate as possible since any discrepancy with the actual set-up will cause image artefacts. Several methods have been proposed in the literature to improve the model realism, such as creating anatomical-shaped meshes, adding a complete electrode model and tracking changes in electrode contact impedances and positions. Absolute and frequency difference reconstruction algorithms are particularly sensitive to measurement errors and generally assume that measurements are made with an ideal EIT system. Real EIT systems have hardware imperfections that cause measurement errors. These errors translate into image artefacts since the reconstruction algorithm cannot properly discriminate genuine measurement variations produced by the medium under study from those caused by hardware imperfections. We therefore propose a method for eliminating these artefacts by integrating a model of the system hardware imperfections into the reconstruction algorithms. The effectiveness of the method has been evaluated by reconstructing absolute, time difference and frequency difference images with and without the hardware model from data acquired on a resistor mesh phantom. Results have shown that artefacts are smaller for images reconstructed with the model, especially for frequency difference imaging.

  15. Why a simulation system doesn't match the plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sowell, R.

    1998-03-01

    Process simulations, or mathematical models, are widely used by plant engineers and planners to obtain a better understanding of a particular process. These simulations are used to answer questions such as how can feed rate be increased, how can yields be improved, how can energy consumption be decreased, or how should the available independent variables be set to maximize profit? Although current process simulations are greatly improved over those of the '70s and '80s, there are many reasons why a process simulation doesn't match the plant. Understanding these reasons can assist in using simulations to maximum advantage. The reasons simulations do not match the plant may be placed in three main categories: simulation effects or inherent error, sampling and analysis effects or measurement error, and misapplication effects or set-up error.

  16. Predicting Transmembrane Helix Packing Arrangements using Residue Contacts and a Force-Directed Algorithm

    PubMed Central

    Nugent, Timothy; Jones, David T.

    2010-01-01

    Alpha-helical transmembrane proteins constitute roughly 30% of a typical genome and are involved in a wide variety of important biological processes including cell signalling, transport of membrane-impermeable molecules and cell recognition. Despite significant efforts to predict transmembrane protein topology, comparatively little attention has been directed toward developing a method to pack the helices together. Here, we present a novel approach to predict lipid exposure, residue contacts, helix-helix interactions and finally the optimal helical packing arrangement of transmembrane proteins. Using molecular dynamics data, we have trained and cross-validated a support vector machine (SVM) classifier to predict per residue lipid exposure with 69% accuracy. This information is combined with additional features to train a second SVM to predict residue contacts which are then used to determine helix-helix interaction with up to 65% accuracy under stringent cross-validation on a non-redundant test set. Our method is also able to discriminate native from decoy helical packing arrangements with up to 70% accuracy. Finally, we employ a force-directed algorithm to construct the optimal helical packing arrangement which demonstrates success for proteins containing up to 13 transmembrane helices. This software is freely available as source code from http://bioinf.cs.ucl.ac.uk/memsat/mempack/. PMID:20333233
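
    The final packing step can be pictured with a small force-directed sketch: helices are treated as nodes, predicted interacting pairs are pulled toward a contact distance, and non-interacting pairs are pushed apart when too close. The interaction list, distances and update schedule below are invented for illustration and are not the published implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

n_helix = 7
# Hypothetical predicted helix-helix interactions (e.g. from the contact/SVM
# stage); pairs not listed are treated as non-interacting.
interacting = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 0), (1, 4)}

CONTACT = 1.0      # target centre-centre distance for interacting helices
SPACING = 1.8      # preferred minimum distance otherwise (arbitrary units)

pos = rng.normal(scale=2.0, size=(n_helix, 2))     # random initial 2-D layout

for step in range(2000):
    force = np.zeros_like(pos)
    for i in range(n_helix):
        for j in range(i + 1, n_helix):
            delta = pos[j] - pos[i]
            dist = np.linalg.norm(delta) + 1e-9
            unit = delta / dist
            if (i, j) in interacting or (j, i) in interacting:
                f = (dist - CONTACT) * unit            # spring toward contact distance
            else:
                f = min(0.0, dist - SPACING) * unit    # repel only when too close
            force[i] += f
            force[j] -= f
    pos += 0.05 * force                                # damped update

for i, p in enumerate(pos):
    print(f"helix {i}: ({p[0]:6.2f}, {p[1]:6.2f})")
```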

  17. Numerical simulation of time delay Interferometry for LISA with one arm dysfunctional

    NASA Astrophysics Data System (ADS)

    Ni, Wei-Tou; Dhurandhar, Sanjeev V.; Nayak, K. Rajesh; Wang, Gang

    In order to attain the requisite sensitivity for LISA, laser frequency noise must be suppressed below the secondary noises such as the optical path noise, acceleration noise etc. In a previous paper(a), we have found an infinite family of second generation analytic solutions of time delay interferometry and estimated the laser noise due to residual time delay semi-analytically from orbit perturbations due to the Earth. Since other planets and solar-system bodies also perturb the orbits of LISA spacecraft and affect the time delay interferometry, we simulate the time delay numerically in this paper. To conform to the actual LISA planning, we have worked out a set of 10-year optimized mission orbits of the LISA spacecraft using the CGC3 ephemeris framework(b). Here we use this numerical solution to calculate the residual errors in the second generation solutions up to n = 3 of our previous paper, and compare with the semi-analytic error estimate. The accuracy of this calculation is better than 1 m (or 30 ns). (a) S. V. Dhurandhar, K. Rajesh Nayak and J.-Y. Vinet, Time delay interferometry for LISA with one arm dysfunctional (b) W.-T. Ni and G. Wang, Orbit optimization for 10-year LISA mission orbit starting at 21 June, 2021 using CGC3 ephemeris framework

  18. Repeatable source, site, and path effects on the standard deviation for empirical ground-motion prediction models

    USGS Publications Warehouse

    Lin, P.-S.; Chiou, B.; Abrahamson, N.; Walling, M.; Lee, C.-T.; Cheng, C.-T.

    2011-01-01

    In this study, we quantify the reduction in the standard deviation for empirical ground-motion prediction models achieved by removing the ergodic assumption. We partition the modeling error (residual) into five components, three of which represent the repeatable source-location-specific, site-specific, and path-specific deviations from the population mean. A variance estimation procedure for these error components is developed for use with a set of recordings from earthquakes not heavily clustered in space. With most source locations and propagation paths sampled only once, we opt to exploit the spatial correlation of residuals to estimate the variances associated with the path-specific and the source-location-specific deviations. The estimation procedure is applied to ground-motion amplitudes from 64 shallow earthquakes in Taiwan recorded at 285 sites with at least 10 recordings per site. The estimated variance components are used to quantify the reduction in aleatory variability that can be used in hazard analysis for a single site and for a single path. For peak ground acceleration and spectral accelerations at periods of 0.1, 0.3, 0.5, 1.0, and 3.0 s, we find that the single-site standard deviations are 9%-14% smaller than the total standard deviation, whereas the single-path standard deviations are 39%-47% smaller.
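
    The headline numbers follow directly from the variance partition: removing a repeatable component subtracts its variance from the total. The sketch below shows the arithmetic with made-up component values whose labels only loosely follow the paper's five-way partition.

```python
import numpy as np

# Illustrative variance components (natural-log units); the labels follow the
# paper's partition loosely and the numbers are invented for this example.
tau     = 0.30   # between-event
phi_S2S = 0.35   # repeatable site-to-site
phi_P2P = 0.40   # repeatable path-to-path
phi_0   = 0.35   # remaining single-site, single-path residual

sigma_total = np.sqrt(tau**2 + phi_S2S**2 + phi_P2P**2 + phi_0**2)
# Removing the repeatable site term gives the single-site sigma; removing the
# repeatable path term as well gives the single-path sigma.
sigma_single_site = np.sqrt(sigma_total**2 - phi_S2S**2)
sigma_single_path = np.sqrt(sigma_single_site**2 - phi_P2P**2)

for name, s in [("total", sigma_total),
                ("single-site", sigma_single_site),
                ("single-path", sigma_single_path)]:
    print(f"{name:12s} sigma = {s:.3f}  ({100*(1 - s/sigma_total):4.1f}% below total)")
```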

  19. A Multipixel Time Series Analysis Method Accounting for Ground Motion, Atmospheric Noise, and Orbital Errors

    NASA Astrophysics Data System (ADS)

    Jolivet, R.; Simons, M.

    2018-02-01

    Interferometric synthetic aperture radar time series methods aim to reconstruct time-dependent ground displacements over large areas from sets of interferograms in order to detect transient, periodic, or small-amplitude deformation. Because of computational limitations, most existing methods consider each pixel independently, ignoring important spatial covariances between observations. We describe a framework to reconstruct time series of ground deformation while considering all pixels simultaneously, allowing us to account for spatial covariances, imprecise orbits, and residual atmospheric perturbations. We describe spatial covariances by an exponential decay function dependent on pixel-to-pixel distance. We approximate the impact of imprecise orbit information and residual long-wavelength atmosphere as a low-order polynomial function. Tests on synthetic data illustrate the importance of incorporating full covariances between pixels in order to avoid biased parameter reconstruction. An example of application to the northern Chilean subduction zone highlights the potential of this method.
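
    The sketch below illustrates the covariance-aware estimation on a toy single-interferogram problem: correlated atmospheric noise with an exponential covariance, a known deformation pattern plus a planar ramp standing in for orbital errors, and a comparison of generalized least squares (full covariance) against a pixel-independent fit. Grid size, covariance parameters and the Monte Carlo set-up are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy inversion: estimate the amplitude of a known deformation pattern plus a
# planar "orbital" ramp from one noisy interferogram, with and without the
# full spatial covariance of the atmospheric noise.
nx = ny = 15
x, y = np.meshgrid(np.linspace(0.0, 60.0, nx), np.linspace(0.0, 60.0, ny))   # km
coords = np.column_stack([x.ravel(), y.ravel()])
n = len(coords)

# Exponential spatial covariance: C_ij = sigma^2 * exp(-d_ij / lam).
sigma, lam = 1.0, 15.0
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
C = sigma**2 * np.exp(-d / lam)

pattern = np.exp(-((x - 30.0)**2 + (y - 30.0)**2) / 200.0).ravel()   # known source shape
G = np.column_stack([pattern, coords[:, 0], coords[:, 1], np.ones(n)])
m_true = np.array([5.0, 0.04, -0.02, 1.0])        # amplitude + ramp coefficients

L = np.linalg.cholesky(C)
Cinv = np.linalg.inv(C)
amp_gls, amp_ols = [], []
for _ in range(200):
    obs = G @ m_true + L @ rng.normal(size=n)     # deformation + ramp + correlated noise
    m_gls = np.linalg.solve(G.T @ Cinv @ G, G.T @ Cinv @ obs)
    m_ols = np.linalg.lstsq(G, obs, rcond=None)[0]
    amp_gls.append(m_gls[0])
    amp_ols.append(m_ols[0])

print("deformation amplitude, truth = 5.0")
print(f"  GLS, full covariance   : mean {np.mean(amp_gls):.3f}, scatter {np.std(amp_gls):.3f}")
print(f"  OLS, pixels independent: mean {np.mean(amp_ols):.3f}, scatter {np.std(amp_ols):.3f}")
```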

  20. Comparison of Online 6 Degree-of-Freedom Image Registration of Varian TrueBeam Cone-Beam CT and BrainLab ExacTrac X-Ray for Intracranial Radiosurgery.

    PubMed

    Li, Jun; Shi, Wenyin; Andrews, David; Werner-Wasik, Maria; Lu, Bo; Yu, Yan; Dicker, Adam; Liu, Haisong

    2017-06-01

    This study aimed to compare online 6 degree-of-freedom image registrations of TrueBeam cone-beam computed tomography and BrainLab ExacTrac X-ray imaging systems for intracranial radiosurgery. Phantom and patient studies were performed on a Varian TrueBeam STx linear accelerator (version 2.5), which is integrated with a BrainLab ExacTrac imaging system (version 6.1.1). The phantom study was based on a Rando head phantom and was designed to evaluate isocenter location dependence of the image registrations. Ten isocenters at various locations representing clinical treatment sites were selected in the phantom. Cone-beam computed tomography and ExacTrac X-ray images were taken when the phantom was located at each isocenter. The patient study included 34 patients. Cone-beam computed tomography and ExacTrac X-ray images were taken at each patient's treatment position. The 6 degree-of-freedom image registrations were performed on cone-beam computed tomography and ExacTrac, and residual errors calculated from cone-beam computed tomography and ExacTrac were compared. In the phantom study, the average residual error differences (absolute values) between cone-beam computed tomography and ExacTrac image registrations were 0.17 ± 0.11 mm, 0.36 ± 0.20 mm, and 0.25 ± 0.11 mm in the vertical, longitudinal, and lateral directions, respectively. The average residual error differences in the rotation, roll, and pitch were 0.34° ± 0.08°, 0.13° ± 0.09°, and 0.12° ± 0.10°, respectively. In the patient study, the average residual error differences in the vertical, longitudinal, and lateral directions were 0.20 ± 0.16 mm, 0.30 ± 0.18 mm, and 0.21 ± 0.18 mm, respectively. The average residual error differences in the rotation, roll, and pitch were 0.40° ± 0.16°, 0.17° ± 0.13°, and 0.20° ± 0.14°, respectively. Overall, the average residual error differences were <0.4 mm in the translational directions and <0.5° in the rotational directions. ExacTrac X-ray image registration is comparable to TrueBeam cone-beam computed tomography image registration in intracranial treatments.

  1. Error model for the SAO 1969 standard earth.

    NASA Technical Reports Server (NTRS)

    Martin, C. F.; Roy, N. A.

    1972-01-01

    A method is developed for estimating an error model for geopotential coefficients using satellite tracking data. A single station's apparent timing error for each pass is attributed to geopotential errors. The root sum of the residuals for each station also depends on the geopotential errors, and these are used to select an error model. The model chosen is 1/4 of the difference between the SAO M1 and the APL 3.5 geopotential.

  2. A concept for a visual computer interface to make error taxonomies useful at the point of primary care.

    PubMed

    Singh, Ranjit; Pace, Wilson; Singh, Sonjoy; Singh, Ashok; Singh, Gurdev

    2007-01-01

    Evidence suggests that the quality of care delivered by the healthcare industry currently falls far short of its capabilities. Whilst most patient safety and quality improvement work to date has focused on inpatient settings, some estimates suggest that outpatient settings are equally important, with up to 200,000 avoidable deaths annually in the United States of America (USA) alone. There is currently a need for improved error reporting and taxonomy systems that are useful at the point of care. This provides an opportunity to harness the benefits of computer visualisation to help structure and illustrate the 'stories' behind errors. In this paper we present a concept for a visual taxonomy of errors, based on visual models of the healthcare system at both macrosystem and microsystem levels (previously published in this journal), and describe how this could be used to create a visual database of errors. In an alpha test in a US context, we were able to code a sample of 20 errors from an existing error database using the visual taxonomy. The approach is designed to capture and disseminate patient safety information in an unambiguous format that is useful to all members of the healthcare team (including the patient) at the point of care as well as at the policy-making level.

  3. Flood-frequency prediction methods for unregulated streams of Tennessee, 2000

    USGS Publications Warehouse

    Law, George S.; Tasker, Gary D.

    2003-01-01

    Up-to-date flood-frequency prediction methods for unregulated, ungaged rivers and streams of Tennessee have been developed. Prediction methods include the regional-regression method and the newer region-of-influence method. The prediction methods were developed using stream-gage records from unregulated streams draining basins having from 1 percent to about 30 percent total impervious area. These methods, however, should not be used in heavily developed or storm-sewered basins with impervious areas greater than 10 percent. The methods can be used to estimate 2-, 5-, 10-, 25-, 50-, 100-, and 500-year recurrence-interval floods of most unregulated rural streams in Tennessee. A computer application was developed that automates the calculation of flood frequency for unregulated, ungaged rivers and streams of Tennessee. Regional-regression equations were derived by using both single-variable and multivariable regional-regression analysis. Contributing drainage area is the explanatory variable used in the single-variable equations. Contributing drainage area, main-channel slope, and a climate factor are the explanatory variables used in the multivariable equations. Deleted-residual standard error for the single-variable equations ranged from 32 to 65 percent. Deleted-residual standard error for the multivariable equations ranged from 31 to 63 percent. These equations are included in the computer application to allow easy comparison of results produced by the different methods. The region-of-influence method calculates multivariable regression equations for each ungaged site and recurrence interval using basin characteristics from 60 similar sites selected from the study area. Explanatory variables that may be used in regression equations computed by the region-of-influence method include contributing drainage area, main-channel slope, a climate factor, and a physiographic-region factor. Deleted-residual standard error for the region-of-influence method tended to be only slightly smaller than those for the regional-regression method and ranged from 27 to 62 percent.
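
    The deleted-residual standard error quoted above is the leave-one-out (PRESS-type) residual scatter of the regression; with the hat matrix it can be computed without refitting, as in the sketch below. The synthetic basin characteristics and the lognormal-based conversion of a log10 standard error to percent are assumptions used for illustration, not the report's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy regional regression: log10(peak flow) against log10(drainage area),
# log10(main-channel slope) and a climate factor.  Values are synthetic.
n = 60
DA = 10 ** rng.uniform(0.5, 3.0, n)          # drainage area
SL = 10 ** rng.uniform(-2.0, -0.5, n)        # main-channel slope
CF = rng.uniform(0.8, 1.2, n)                # climate factor
logQ = 2.0 + 0.75 * np.log10(DA) + 0.20 * np.log10(SL) + 0.5 * CF \
       + rng.normal(scale=0.18, size=n)

X = np.column_stack([np.ones(n), np.log10(DA), np.log10(SL), CF])
beta, *_ = np.linalg.lstsq(X, logQ, rcond=None)
e = logQ - X @ beta                                   # ordinary residuals

# Deleted (leave-one-out) residuals via the hat matrix: e_(i) = e_i / (1 - h_ii)
H = X @ np.linalg.inv(X.T @ X) @ X.T
e_del = e / (1.0 - np.diag(H))
s_del = np.sqrt(np.sum(e_del**2) / (n - X.shape[1]))  # deleted-residual standard error (log10 units)

# One common lognormal-based conversion of a log10 standard error to percent.
s_ln = s_del * np.log(10.0)
se_percent = 100.0 * np.sqrt(np.exp(s_ln**2) - 1.0)
print(f"deleted-residual standard error: {s_del:.3f} log10 units  (~{se_percent:.0f}%)")
```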

  4. Decomposition and correction overlapping peaks of LIBS using an error compensation method combined with curve fitting.

    PubMed

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-09-01

    The laser induced breakdown spectroscopy (LIBS) technique is an effective method to detect material composition by obtaining the plasma emission spectrum. The overlapping peaks in the spectrum are a fundamental problem in the qualitative and quantitative analysis of LIBS. Based on a curve fitting method, this paper studies an error compensation method to achieve the decomposition and correction of overlapping peaks. The vital step is that the fitting residual is fed back to the overlapping peaks and multiple curve fitting passes are performed to obtain a lower residual result. For the quantitative experiments on Cu, the Cu-Fe overlapping peaks in the range of 321-327 nm, obtained from the LIBS spectra of five different concentrations of CuSO4·5H2O solution, were decomposed and corrected using the curve fitting and error compensation methods. Compared with the curve fitting method, the error compensation reduced the fitting residual by about 18.12-32.64% and improved the correlation by about 0.86-1.82%. Then, the calibration curve between the intensity and concentration of Cu was established. It can be seen that the error compensation method exhibits a higher linear correlation between the intensity and concentration of Cu, which can be applied to the decomposition and correction of overlapping peaks in the LIBS spectrum.
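
    The sketch below shows one plausible reading of the residual-feedback idea on a synthetic pair of overlapping lines: after an initial peak-by-peak decomposition, the overall fitting residual is added back to each peak's component and the peak is refit, which redistributes the overlap and lowers the residual pass by pass. The peak shapes, wavelengths and iteration schedule are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)

def lorentz(x, a, c, w):
    """Single Lorentzian line: amplitude a, centre c, half-width w."""
    return a / (1.0 + ((x - c) / w) ** 2)

# Synthetic overlapping doublet in the 321-327 nm window (invented values).
x = np.linspace(321.0, 327.0, 400)
truth = [(1.00, 324.1, 0.35), (0.60, 324.9, 0.40)]
y = sum(lorentz(x, *t) for t in truth) + rng.normal(scale=0.02, size=x.size)

# Initial decomposition: one curve fit per peak against the raw spectrum.
params = [np.array([0.8, 323.9, 0.5]), np.array([0.4, 325.1, 0.5])]
for k in range(2):
    params[k], _ = curve_fit(lorentz, x, y - lorentz(x, *params[1 - k]), p0=params[k])

# Error compensation: feed the overall fitting residual back into each peak's
# component and refit, repeating while the residual keeps shrinking.
for it in range(5):
    resid = y - sum(lorentz(x, *p) for p in params)
    for k in range(2):
        target = lorentz(x, *params[k]) + resid       # this peak + shared residual
        params[k], _ = curve_fit(lorentz, x, target, p0=params[k])
        resid = y - sum(lorentz(x, *p) for p in params)
    print(f"pass {it}: residual norm = {np.linalg.norm(resid):.4f}")
```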

  5. Surgical Options for the Refractive Correction of Keratoconus: Myth or Reality

    PubMed Central

    Zaldivar, R.; Aiello, F.; Madrid-Costa, D.

    2017-01-01

    Keratoconus causes a decrease in quality of life for the patients who suffer from it. The treatment used, as well as the method chosen to correct the refractive error of these patients, may influence the impact of the disease on their quality of life. The purpose of this review is to describe the evidence about conservative surgical treatment for keratoconus aimed at both therapeutic and refractive effect. The visual rehabilitation of keratoconic corneas requires addressing three concerns: halting the ectatic process, improving corneal shape, and minimizing the residual refractive error. Cross-linking can halt the disease progression, intrastromal corneal ring segments can improve the corneal shape and hence the visual quality and reduce the refractive error, PRK can correct mild-moderate refractive error, and intraocular lenses can correct from low to high refractive error associated with keratoconus. Any of these surgical options can be performed alone or combined with the other techniques depending on what the case requires. Although the surgical option for the refracto-therapeutic treatment of keratoconus could be considered a reality, controlled, randomized studies with larger cohorts and longer follow-up periods are needed to determine which refractive procedure and/or sequence are most suitable for each case. PMID:29403662

  6. Precision improving of double beam shadow moiré interferometer by phase shifting interferometry for the stress of flexible substrate

    NASA Astrophysics Data System (ADS)

    Huang, Kuo-Ting; Chen, Hsi-Chao; Lin, Ssu-Fan; Lin, Ke-Ming; Syue, Hong-Ye

    2012-09-01

    While tin-doped indium oxide (ITO) has been extensively applied in flexible electronics, the problem of residual stress still presents many obstacles to overcome. This study investigated the residual stress of flexible electronics with a double beam shadow moiré interferometer and focused on improving precision with phase shifting interferometry (PSI). According to the out-of-plane displacement equation, the theoretical error depends on the grating pitch and the angle between the incident light and the CCD. The angle error could be reduced to 0.03% for an angle shift of 10° because the double beam interferometer is a symmetrical system. However, the experimental error of the double beam moiré interferometer still reached 2.2% owing to vibration noise and interferogram noise. In order to improve the measurement precision, PSI was introduced into the double beam shadow moiré interferometer. The wavefront phase was reconstructed from five interferograms with the Hariharan algorithm. Measurement results for a standard cylinder indicated that the error could be reduced from 2.2% to less than 1% with PSI. The deformation of the flexible electronics could be reconstructed rapidly and the residual stress calculated with the Stoney correction formula. This shadow moiré interferometer with PSI could improve the precision of residual stress measurement for flexible electronics.
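
    The five-frame Hariharan phase-shifting step referred to above reduces to a closed-form arctangent of the recorded intensities; the sketch below applies it to synthetic interferograms and unwraps the result. The fringe-to-height conversion specific to the double beam shadow moiré geometry is omitted, and the test phase and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hariharan five-frame algorithm: with frames I1..I5 recorded at phase steps
# of -pi, -pi/2, 0, +pi/2, +pi, the wrapped phase is
#     phi = atan2( 2*(I2 - I4), 2*I3 - I1 - I5 ).

def hariharan_phase(frames):
    I1, I2, I3, I4, I5 = frames
    return np.arctan2(2.0 * (I2 - I4), 2.0 * I3 - I1 - I5)

# Synthetic interferograms of a smooth test phase (a bump-like deformation).
x, y = np.meshgrid(np.linspace(-1, 1, 256), np.linspace(-1, 1, 256))
phi_true = 6.0 * np.exp(-2.0 * (x**2 + 0.5 * y**2))
steps = np.array([-np.pi, -np.pi / 2.0, 0.0, np.pi / 2.0, np.pi])
frames = [1.0 + 0.7 * np.cos(phi_true + s) + rng.normal(0.0, 0.01, x.shape)
          for s in steps]

phi_wrapped = hariharan_phase(frames)
# Row-then-column unwrapping is enough for this smooth, low-noise synthetic case.
phi_unwrapped = np.unwrap(np.unwrap(phi_wrapped, axis=1), axis=0)

print("max |recovered - true| phase (rad):", np.abs(phi_unwrapped - phi_true).max())
```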

  7. Microplastic deformation of polycrystalline iron and molybdenum subjected to high-current electron-beam irradiation

    NASA Astrophysics Data System (ADS)

    Dudarev, E. F.; Pochivalova, G. P.; Proskurovskii, D. I.; Rotshtein, V. P.; Markov, A. B.

    1996-03-01

    A technique for determination of residual stresses at various distances from the irradiated surface is proposed. It is established for iron and molybdenum that compressive stresses are set up under irradiation by low-energy high-current electron beams and that their values decrease sharply with increasing distance from the surface. The residual stresses are much smaller in absolute magnitude than those operating during irradiation. It is shown that the change in resistance to microplastic deformation on irradiation with low-energy high-current electron beams is governed not only by formation of a gradient dislocation substructure in the surface layer, but also by the residual stresses and the appearance of the Bauschinger effect.

  8. Detection of digital FSK using a phase-locked loop

    NASA Technical Reports Server (NTRS)

    Lindsey, W. C.; Simon, M. K.

    1975-01-01

    A theory is presented for the design of a digital FSK receiver which employs a phase-locked loop to set up the desired matched filter as the arriving signal frequency switches. The developed mathematical model makes it possible to establish the error probability performance of systems which employ a class of digital FM modulations. The noise mechanism which accounts for decision errors is modeled on the basis of the Meyr distribution and renewal Markov process theory.

  9. On the assimilation set-up of ASCAT soil moisture data for improving streamflow catchment simulation

    NASA Astrophysics Data System (ADS)

    Loizu, Javier; Massari, Christian; Álvarez-Mozos, Jesús; Tarpanelli, Angelica; Brocca, Luca; Casalí, Javier

    2018-01-01

    Assimilation of remotely sensed surface soil moisture (SSM) data into hydrological catchment models has been identified as a means to improve streamflow simulations, but reported results vary markedly depending on the particular model, catchment and assimilation procedure used. In this study, the influence of key aspects, such as the type of model, re-scaling technique and SSM observation error considered, was evaluated. For this aim, Advanced SCATterometer ASCAT-SSM observations were assimilated through the ensemble Kalman filter into two hydrological models of different complexity (namely MISDc and TOPLATS) run on two Mediterranean catchments of similar size (750 km²). Three different re-scaling techniques were evaluated (linear re-scaling, variance matching and cumulative distribution function matching), and SSM observation error values ranging from 0.01% to 20% were considered. Four different efficiency measures were used for evaluating the results. Increases in Nash-Sutcliffe efficiency (0.03-0.15) and efficiency indices (10-45%) were obtained, especially when linear re-scaling and observation errors within 4-6% were considered. This study found that there is potential to improve streamflow prediction through data assimilation of remotely sensed SSM in catchments of different characteristics and with hydrological models based on different conceptualization schemes, but for that, a careful evaluation of the observation error and of the re-scaling technique set-up is required.
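
    The three re-scaling techniques compared in the study map the satellite product into the model's soil moisture climatology before assimilation; a minimal sketch of each mapping on synthetic series is given below. The synthetic data and the RMSD check are assumptions used only to show the mechanics.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic satellite SSM (biased, compressed dynamic range) and model SSM.
model_ssm = np.clip(rng.normal(0.25, 0.06, 1000), 0.02, 0.45)
sat_ssm = np.clip(0.6 * model_ssm + 0.15 + rng.normal(0, 0.03, 1000), 0.0, 0.6)

def linear_rescaling(obs, ref):
    """Regress obs onto ref and map obs into the model climatology."""
    slope, intercept = np.polyfit(obs, ref, 1)
    return slope * obs + intercept

def variance_matching(obs, ref):
    """Match the mean and standard deviation only."""
    return (obs - obs.mean()) / obs.std() * ref.std() + ref.mean()

def cdf_matching(obs, ref):
    """Map each observation quantile onto the corresponding model quantile."""
    ranks = np.searchsorted(np.sort(obs), obs, side="right") / len(obs)
    return np.quantile(ref, np.clip(ranks, 0.0, 1.0))

for name, fn in [("linear re-scaling", linear_rescaling),
                 ("variance matching", variance_matching),
                 ("CDF matching", cdf_matching)]:
    rescaled = fn(sat_ssm, model_ssm)
    rmsd = np.sqrt(np.mean((rescaled - model_ssm) ** 2))
    print(f"{name:18s} RMSD vs model climatology: {rmsd:.4f}")
```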

  10. The Effect of Hole Quality on the Fatigue Life of 2024-T3 Aluminum Alloy Sheet

    NASA Technical Reports Server (NTRS)

    Everett, Richard A., Jr.

    2004-01-01

    This paper presents the results of a study whose main objective was to determine which type of fabrication process would least affect the fatigue life of an open-hole structural detail. Since the open-hole detail is often the fundamental building block for determining the stress concentration of built-up structural parts, it is important to understand any factor that can affect the fatigue life of an open hole. A test program of constant-amplitude fatigue tests was conducted on five different sets of test specimens each made using a different hole fabrication process. Three of the sets used different mechanical drilling procedures while a fourth and fifth set were mechanically drilled and then chemically polished. Two sets of specimens were also tested under spectrum loading to aid in understanding the effects of residual compressive stresses on fatigue life. Three conclusions were made from this study. One, the residual compressive stresses caused by the hole-drilling process increased the fatigue life by two to three times over specimens that were chemically polished after the holes were drilled. Second, the chemical polishing process does not appear to adversely affect the fatigue life. Third, the chemical polishing process will produce a stress-state adjacent to the hole that has insignificant machining residual stresses.

  11. Comprehensive profiling and marker identification in non-volatile citrus oil residues by mass spectrometry and nuclear magnetic resonance.

    PubMed

    Marti, Guillaume; Boccard, Julien; Mehl, Florence; Debrus, Benjamin; Marcourt, Laurence; Merle, Philippe; Delort, Estelle; Baroux, Lucie; Sommer, Horst; Rudaz, Serge; Wolfender, Jean-Luc

    2014-05-01

    The detailed characterization of cold-pressed lemon oils (CPLOs) is of great importance for the flavor and fragrance (F&F) industry. Since authenticity controls based on standard analytical techniques can be bypassed with elaborately adulterated oils that feign a higher quality, a combination of advanced orthogonal methods has been developed. The present study describes a combined metabolomic approach based on UHPLC-TOF-MS profiling and (1)H NMR fingerprinting to highlight metabolite differences in a set of representative samples used in the F&F industry. A new protocol was set up and adapted to the use of CPLO residues. Multivariate analysis based on both fingerprinting methods showed significant chemical variations between Argentinian and Italian samples. Discriminating markers identified in mixtures belong to the furocoumarins, flavonoids, terpenoids and fatty acids. Quantitative NMR revealed low citropten and high bergamottin content in Italian samples. The developed metabolomic approach applied to CPLO residues opens new perspectives for authenticity assessment. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. NHEXAS PHASE I ARIZONA STUDY--QA ANALYTICAL RESULTS FOR METALS IN SPIKE SAMPLES

    EPA Science Inventory

    The Metals in Spike Samples data set contains the analytical results of measurements of up to 11 metals in 38 control samples (spikes) from 18 households. Measurements were made in spiked samples of dust, food, beverages, blood, urine, and dermal wipe residue. Spiked samples we...

  13. Using a whole farm model to determine the impacts of mating management on the profitability of pasture-based dairy farms.

    PubMed

    Beukes, P C; Burke, C R; Levy, G; Tiddy, R M

    2010-08-01

    An approach to assessing likely impacts of altering reproductive performance on productivity and profitability in pasture-based dairy farms is described. The basis is the development of a whole farm model (WFM) that simulates the entire farm system and holistically links multiple physical performance factors to profitability. The WFM consists of a framework that links a mechanistic cow model, a pasture model, a crop model, management policies and climate. It simulates individual cows and paddocks, and runs on a day time-step. The WFM was upgraded to include reproductive modeling capability using reference tables and empirical equations describing published relationships between cow factors, physiology and mating management. It predicts reproductive status at any time point for individual cows within a modeled herd. The performance of six commercial pasture-based dairy farms was simulated for the period of 12 months beginning 1 June 2005 (05/06 year) to evaluate the accuracy of the model by comparison with actual outcomes. The model predicted most key performance indicators within an acceptable range of error (residual<10% of observed). The evaluated WFM was then used for the six farms to estimate the profitability of changes in farm "set-up" (farm conditions at the start of the farming year on 1 June) and mating management from 05/06 to 06/07 year. Among the six farms simulated, the 4-week calving rate emerged as an important set-up factor influencing profitability, while reproductive performance during natural bull mating was identified as an area with the greatest opportunity for improvement. The WFM presents utility to explore alternative management strategies to predict likely outcomes to proposed changes to a pasture-based farm system. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  14. Test-to-Test Repeatability of Results From a Subsonic Wing-Body Configuration in the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Mineck, Raymond E.; Pendergraft, Odis C., Jr.

    2000-01-01

    Results from three wind tunnel tests in the National Transonic Facility of a model of an advanced-technology, subsonic-transport wing-body configuration have been analyzed to assess the test-to-test repeatability of several aerodynamic parameters. The scatter, as measured by the prediction interval, in the longitudinal force and moment coefficients increases as the Mach number increases. Residual errors with and without the ESP tubes installed suggest a bias leading to lower drag with the tubes installed. Residual errors as well as average values of the longitudinal force and moment coefficients show that there are small bias errors between the different tests.

  15. Evaluation of Aster Images for Characterization and Mapping of Amethyst Mining Residues

    NASA Astrophysics Data System (ADS)

    Markoski, P. R.; Rolim, S. B. A.

    2012-07-01

    The objective of this work was to evaluate the potential of Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) images, from the VNIR (Visible and Near Infrared) and SWIR (Short Wave Infrared) subsystems, for discrimination and mapping of amethyst mining residues (basalt) in the Ametista do Sul Region, Rio Grande do Sul State, Brazil. This region provides most of the world's amethyst production. The basalt is extracted during the mining process and deposited outside the mine. As a result, mounds of residue (basalt) build up. These mounds are many times smaller than the ASTER pixel size (VNIR - 15 meters and SWIR - 30 meters). Thus, the pixel composition becomes a mixture of various materials, hampering identification and mapping. To address this problem, the multispectral Maximum Likelihood algorithm (MaxVer) and the hyperspectral technique SAM (Spectral Angle Mapper) were used in this work. Images from the ASTER VNIR and SWIR subsystems were used to perform the classifications. The SAM technique produced better results than the MaxVer algorithm. The main error made by both techniques was confusion between the "shadow" and "mining residues/basalt" classes. With the SAM technique the confusion decreased because it employed the basalt spectral curve as a reference, while the multispectral technique employed pixel groups that could be spectrally mixed with other targets. The results showed that in tropical terrains such as the study area, ASTER data can be effective for the characterization of mining residues.
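
    The SAM step scores each pixel by the angle between its spectrum and a reference spectrum, which makes it insensitive to the brightness differences that drive the shadow/basalt confusion; a small sketch with made-up nine-band spectra is shown below. The reference values and the angle threshold are illustrative assumptions, not the spectra used in the study.

```python
import numpy as np

def spectral_angle(pixel, reference):
    """SAM: angle (radians) between a pixel spectrum and a reference spectrum."""
    cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Illustrative 9-band (VNIR+SWIR) reflectance spectra; the values are made up.
references = {
    "basalt residue": np.array([0.06, 0.07, 0.08, 0.09, 0.10, 0.11, 0.11, 0.12, 0.12]),
    "vegetation":     np.array([0.04, 0.06, 0.05, 0.40, 0.42, 0.30, 0.25, 0.20, 0.15]),
    "shadow":         np.array([0.02, 0.02, 0.02, 0.03, 0.03, 0.03, 0.03, 0.03, 0.03]),
}

def classify(pixel, max_angle=0.10):
    """Assign the class with the smallest spectral angle (or 'unclassified')."""
    angles = {name: spectral_angle(pixel, ref) for name, ref in references.items()}
    best = min(angles, key=angles.get)
    return (best if angles[best] <= max_angle else "unclassified"), angles

# A mixed pixel: mostly basalt with a shadow contribution.
pixel = 0.7 * references["basalt residue"] + 0.3 * references["shadow"]
label, angles = classify(pixel)
print("label:", label)
for name, a in angles.items():
    print(f"  angle to {name:15s}: {np.degrees(a):5.2f} deg")
```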

  16. Estimating the variance and integral scale of the transmissivity field using head residual increments

    USGS Publications Warehouse

    Zheng, Li; Silliman, Stephen E.

    2000-01-01

    A modification of previously published solutions regarding the spatial variation of hydraulic heads is discussed whereby the semivariogram of increments of head residuals (termed head residual increments, HRIs) is related to the variance and integral scale of the transmissivity field. A first-order solution is developed for the case of a transmissivity field which is isotropic and whose second-order behavior can be characterized by an exponential covariance structure. The estimates of the variance σY2 and the integral scale λ of the log transmissivity field are then obtained by fitting a theoretical semivariogram for the HRI to its sample semivariogram. This approach is applied to head data sampled from a series of two-dimensional, simulated aquifers with isotropic, exponential covariance structures and varying degrees of heterogeneity (σY2 = 0.25, 0.5, 1.0, 2.0, and 5.0). The results show that this method provided reliable estimates for both λ and σY2 in aquifers with σY2 up to 2.0, but the errors in those estimates were higher for σY2 equal to 5.0. It is also demonstrated through numerical experiments and theoretical arguments that the head residual increments provide a sample semivariogram with a lower variance than does the use of head residuals without calculation of increments.
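
    The bookkeeping behind the approach, forming head residual increments over a fixed separation and computing their sample semivariogram, is sketched below on a synthetic transect; the paper's first-order theoretical HRI semivariogram, which would then be fit to such a curve to estimate σY2 and λ, is not reproduced here. The well spacing, correlation length and residual field are invented.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical head residuals r(x) sampled along a transect of equally spaced
# wells; the values are synthetic and only illustrate the bookkeeping.
dx = 10.0                                     # well spacing (m)
x = np.arange(0.0, 2000.0, dx)
d = np.abs(x[:, None] - x[None, :])
# correlated synthetic residuals (exponential covariance, correlation length 150 m)
L = np.linalg.cholesky(np.exp(-d / 150.0) + 1e-10 * np.eye(x.size))
r = L @ rng.normal(size=x.size)

# Head residual increments over a fixed separation s: HRI(x) = r(x + s) - r(x).
s_lag = 5                                     # separation of 5 grid steps (50 m)
hri = r[s_lag:] - r[:-s_lag]

def sample_semivariogram(values, max_lag):
    """gamma(h) = 0.5 * mean[(v(x+h) - v(x))^2] for h = 1..max_lag grid steps."""
    lags = np.arange(1, max_lag + 1)
    gamma = np.array([0.5 * np.mean((values[h:] - values[:-h]) ** 2) for h in lags])
    return lags * dx, gamma

lags_m, gamma = sample_semivariogram(hri, max_lag=40)
for h, g in list(zip(lags_m, gamma))[::8]:
    print(f"lag {h:6.0f} m   gamma_HRI = {g:.3f}")
```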

  17. Developing Performance Estimates for High Precision Astrometry with TMT

    NASA Astrophysics Data System (ADS)

    Schoeck, Matthias; Do, Tuan; Ellerbroek, Brent; Herriot, Glen; Meyer, Leo; Suzuki, Ryuji; Wang, Lianqi; Yelda, Sylvana

    2013-12-01

    Adaptive optics on Extremely Large Telescopes will open up many new science cases or expand existing science into regimes unattainable with the current generation of telescopes. One example of this is high-precision astrometry, which has requirements in the range from 10 to 50 micro-arc-seconds for some instruments and science cases. Achieving these requirements imposes stringent constraints on the design of the entire observatory, but also on the calibration procedures, observing sequences and the data analysis techniques. This paper summarizes our efforts to develop a top down astrometry error budget for TMT. It is predominantly developed for the first-light AO system, NFIRAOS, and the IRIS instrument, but many terms are applicable to other configurations as well. Astrometry error sources are divided into 5 categories: Reference source and catalog errors, atmospheric refraction correction errors, other residual atmospheric effects, opto-mechanical errors and focal plane measurement errors. Results are developed in parametric form whenever possible. However, almost every error term in the error budget depends on the details of the astrometry observations, such as whether absolute or differential astrometry is the goal, whether one observes a sparse or crowded field, what the time scales of interest are, etc. Thus, it is not possible to develop a single error budget that applies to all science cases and separate budgets are developed and detailed for key astrometric observations. Our error budget is consistent with the requirements for differential astrometry of tens of micro-arc-seconds for certain science cases. While no show stoppers have been found, the work has resulted in several modifications to the NFIRAOS optical surface specifications and reference source design that will help improve the achievable astrometry precision even further.

  18. Metrics for Business Process Models

    NASA Astrophysics Data System (ADS)

    Mendling, Jan

    Up until now, there has been little research on why people introduce errors in real-world business process models. In a more general context, Simon [404] points to the limitations of cognitive capabilities and concludes that humans act rationally only to a certain extent. Concerning modeling errors, this argument would imply that human modelers lose track of the interrelations of large and complex models due to their limited cognitive capabilities and introduce errors that they would not insert in a small model. A recent study by Mendling et al. [275] explores to what extent certain complexity metrics of business process models have the potential to serve as error determinants. The authors conclude that complexity indeed appears to have an impact on error probability. Before we can test such a hypothesis in a more general setting, we have to establish an understanding of how we can define determinants that drive error probability and how we can measure them.

  19. Method for the fabrication error calibration of the CGH used in the cylindrical interferometry system

    NASA Astrophysics Data System (ADS)

    Wang, Qingquan; Yu, Yingjie; Mou, Kebing

    2016-10-01

    This paper presents a method for absolutely calibrating the fabrication error of the CGH in a cylindrical interferometry system for the measurement of cylindricity error. First, a simulated experimental system is set up in ZEMAX. On one hand, the simulated experimental system demonstrates the feasibility of the proposed method. On the other hand, by changing the positions of the mirror in the simulated experimental system, a misalignment aberration map, consisting of the interferograms obtained at the different positions, is acquired; it can serve as a reference for experimental adjustment of the real system. Second, the mathematical polynomial that describes the relationship between the misalignment aberrations and the possible misalignment errors is discussed.

  20. First clinical experience in carbon ion scanning beam therapy: retrospective analysis of patient positional accuracy.

    PubMed

    Mori, Shinichiro; Shibayama, Kouichi; Tanimoto, Katsuyuki; Kumagai, Motoki; Matsuzaki, Yuka; Furukawa, Takuji; Inaniwa, Taku; Shirai, Toshiyuki; Noda, Koji; Tsuji, Hiroshi; Kamada, Tadashi

    2012-09-01

    Our institute has constructed a new treatment facility for carbon ion scanning beam therapy. The first clinical trials were successfully completed at the end of November 2011. To evaluate patient setup accuracy, positional errors between the reference Computed Tomography (CT) scan and final patient setup images were calculated using 2D-3D registration software. Eleven patients with tumors of the head and neck, prostate and pelvis receiving carbon ion scanning beam treatment participated. The patient setup process takes orthogonal X-ray flat panel detector (FPD) images and the therapists adjust the patient table position in six degrees of freedom to register the reference position by manual or auto- (or both) registration functions. We calculated residual positional errors with the 2D-3D auto-registration function using the final patient setup orthogonal FPD images and treatment planning CT data. Residual error averaged over all patients in each fraction decreased from the initial to the last treatment fraction [1.09 mm/0.76° (averaged in the 1st and 2nd fractions) to 0.77 mm/0.61° (averaged in the 15th and 16th fractions)]. 2D-3D registration calculation time was 8.0 s on average throughout the treatment course. Residual errors in translation and rotation averaged over all patients as a function of date decreased with the passage of time (1.6 mm/1.2° in May 2011 to 0.4 mm/0.2° in December 2011). This retrospective residual positional error analysis shows that the accuracy of patient setup during the first clinical trials of carbon ion beam scanning therapy was good and improved with increasing therapist experience.

  1. Using polarizable POSSIM force field and fuzzy-border continuum solvent model to calculate pK(a) shifts of protein residues.

    PubMed

    Sharma, Ity; Kaminski, George A

    2017-01-15

    Our Fuzzy-Border (FB) continuum solvent model has been extended and modified to produce hydration parameters for small molecules using the POlarizable Simulations Second-order Interaction Model (POSSIM) framework with an average error of 0.136 kcal/mol. It was then used to compute pKa shifts for carboxylic and basic residues of the turkey ovomucoid third domain (OMTKY3) protein. The average unsigned errors in the acid and base pKa values were 0.37 and 0.4 pH units, respectively, versus 0.58 and 0.7 pH units as calculated with a previous version of the polarizable protein force field and Poisson Boltzmann continuum solvent. This POSSIM/FB result is produced with explicit refitting of the hydration parameters to the pKa values of the carboxylic and basic residues of the OMTKY3 protein; thus, the values of the acidity constants can be viewed as additional fitting target data. In addition to calculating pKa shifts for the OMTKY3 residues, we have studied aspartic acid residues of RNase Sa. This was done without any further refitting of the parameters, and agreement with the experimental pKa values is within an average unsigned error of 0.65 pH units. This result included the Asp79 residue, which is buried and thus has a high experimental pKa value of 7.37 units. Thus, the presented model is capable of reproducing pKa results for residues in an environment that is significantly different from the solvated protein surface used in the fitting. Therefore, the POSSIM force field and the FB continuum solvent parameters have been demonstrated to be sufficiently robust and transferable. © 2016 Wiley Periodicals, Inc.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fuangrod, T; Simpson, J; Greer, P

    Purpose: A real-time patient treatment delivery verification system using EPID (Watchdog) has been developed as an advanced patient safety tool. In a pilot study, data were acquired for 119 prostate and head and neck (HN) IMRT patient deliveries to generate body-site specific action limits using statistical process control. The purpose of this study is to determine the sensitivity of Watchdog to detect clinically significant errors during treatment delivery. Methods: Watchdog utilizes a physics-based model to generate a series of predicted transit cine EPID images as a reference data set, and compares these in real-time to measured transit cine EPID images acquired during treatment using chi comparison (4%, 4mm criteria) after the initial 2s of treatment to allow for dose ramp-up. Four study cases were used: dosimetric (monitor unit) errors in prostate (7 fields) and HN (9 fields) IMRT treatments of (5%, 7%, 10%) and positioning (systematic displacement) errors in the same treatments of (5mm, 7mm, 10mm). These errors were introduced by modifying the patient CT scan and re-calculating the predicted EPID data set. The error-embedded predicted EPID data sets were compared to the measured EPID data acquired during patient treatment. The treatment delivery percentage (measured from 2s) at which Watchdog detected the error was determined. Results: Watchdog detected all simulated errors for all fields during delivery. The dosimetric errors were detected at average treatment delivery percentages of (4%, 0%, 0%) and (7%, 0%, 0%) for prostate and HN, respectively. For patient positional errors, the average treatment delivery percentages were (52%, 43%, 25%) and (39%, 16%, 6%). Conclusion: These results suggest that Watchdog can detect significant dosimetric and positioning errors in prostate and HN IMRT treatments in real-time, allowing for treatment interruption. Displacements of the patient take longer to detect; however, an incorrect body site or a very large geographic miss will be detected rapidly.
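
    The frame-by-frame comparison can be illustrated with a generic chi map in the style of Bakai et al., where the dose difference is normalized by a combined dose and distance tolerance and pixels with |chi| <= 1 pass; the sketch below applies a 4%/4 mm style criterion to synthetic transit frames carrying a 7% output error. This is a generic sketch under those assumptions, not the Watchdog implementation.

```python
import numpy as np

def chi_map(measured, predicted, pixel_mm, dose_tol=0.04, dist_tol_mm=4.0):
    """
    Signed chi comparison (Bakai-style surrogate for the gamma index):
        chi = (D_meas - D_pred) / sqrt((dose_tol*Dmax)^2 + dist_tol^2 * |grad D_pred|^2)
    Pixels with |chi| <= 1 pass the 4%/4 mm style criterion.
    """
    gy, gx = np.gradient(predicted, pixel_mm)
    grad_sq = gx**2 + gy**2
    denom = np.sqrt((dose_tol * predicted.max())**2 + dist_tol_mm**2 * grad_sq)
    return (measured - predicted) / denom

# Synthetic transit EPID frames: a smooth field edge, with the "measured"
# frame carrying a 7% output (monitor unit) error plus detector noise.
rng = np.random.default_rng(9)
x, y = np.meshgrid(np.linspace(-60, 60, 200), np.linspace(-60, 60, 200))   # mm
predicted = 1.0 / (1.0 + np.exp((np.abs(x) - 40.0) / 3.0))                 # field profile
measured = 1.07 * predicted + rng.normal(0.0, 0.005, x.shape)

chi = chi_map(measured, predicted, pixel_mm=120.0 / 199.0)
pass_rate = np.mean(np.abs(chi) <= 1.0) * 100.0
print(f"chi pass rate: {pass_rate:.1f}%  (an alert would fire when this drops below the action limit)")
```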

  3. Iterative refinement of structure-based sequence alignments by Seed Extension

    PubMed Central

    Kim, Changhoon; Tai, Chin-Hsien; Lee, Byungkook

    2009-01-01

    Background Accurate sequence alignment is required in many bioinformatics applications but, when sequence similarity is low, it is difficult to obtain accurate alignments based on sequence similarity alone. The accuracy improves when the structures are available, but current structure-based sequence alignment procedures still mis-align substantial numbers of residues. In order to correct such errors, we previously explored the possibility of replacing the residue-based dynamic programming algorithm in structure alignment procedures with the Seed Extension algorithm, which does not use a gap penalty. Here, we describe a new procedure called RSE (Refinement with Seed Extension) that iteratively refines a structure-based sequence alignment. Results RSE uses SE (Seed Extension) in its core, which is an algorithm that we reported recently for obtaining a sequence alignment from two superimposed structures. The RSE procedure was evaluated by comparing the correctly aligned fractions of residues before and after the refinement of the structure-based sequence alignments produced by popular programs. CE, DaliLite, FAST, LOCK2, MATRAS, MATT, TM-align, SHEBA and VAST were included in this analysis and the NCBI's CDD root node set was used as the reference alignments. RSE improved the average accuracy of sequence alignments for all programs tested when no shift error was allowed. The amount of improvement varied depending on the program. The average improvements were small for DaliLite and MATRAS but about 5% for CE and VAST. More substantial improvements have been seen in many individual cases. The additional computation times required for the refinements were negligible compared to the times taken by the structure alignment programs. Conclusion RSE is a computationally inexpensive way of improving the accuracy of a structure-based sequence alignment. It can be used as a standalone procedure following a regular structure-based sequence alignment or to replace the traditional iterative refinement procedures based on residue-level dynamic programming algorithm in many structure alignment programs. PMID:19589133

  4. The Coast Artillery Journal. Volume 68, Number 6, June 1928

    DTIC Science & Technology

    1928-06-01

    text book on the Turkish Army, and two guide books. The maps available were not up to date and contained few details. Writing in his diary under date...The moon shone faintly through the clouds and at 1:00 A.M. the ships stopped, waiting for the moon to set. While lying here, all six men-of-war...against the firing table value of 63 yards. The probable error of the probable error was then determined in the same manner that we ordinarily compute the

  5. Tracking fin whales in the northeast Pacific Ocean with a seafloor seismic network.

    PubMed

    Wilcock, William S D

    2012-10-01

    Ocean bottom seismometer (OBS) networks represent a tool of opportunity to study fin and blue whales. A small OBS network on the Juan de Fuca Ridge in the northeast Pacific Ocean in ~2.3 km of water recorded an extensive data set of 20-Hz fin whale calls. An automated method has been developed to identify arrival times based on instantaneous frequency and amplitude and to locate calls using a grid search even in the presence of a few bad arrival times. When only one whale is calling near the network, tracks can generally be obtained up to distances of ~15 km from the network. When the calls from multiple whales overlap, user supervision is required to identify tracks. The absolute and relative amplitudes of arrivals and their three-component particle motions provide additional constraints on call location but are not useful for extending the distance to which calls can be located. The double-difference method inverts for changes in relative call locations using differences in residuals for pairs of nearby calls recorded on a common station. The method significantly reduces the unsystematic component of the location error, especially when inconsistencies in arrival time observations are minimized by cross-correlation.
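
    The grid-search localization step can be sketched as follows: for each candidate location, travel times to the stations give a best-fitting origin time, the worst arrival is dropped so that a single bad pick does not spoil the fit, and the node with the smallest RMS residual wins. The station geometry, sound speed and noise levels are made-up assumptions, and the detailed weighting of the published method is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(10)

C = 1.48        # assumed effective sound speed, km/s
# Small-aperture OBS network (x, y in km).
stations = np.array([[0.0, 0.0], [3.0, 0.5], [1.5, 2.8], [-1.0, 2.0], [0.5, -2.5]])

def travel_times(src):
    return np.linalg.norm(stations - src, axis=1) / C

# Synthetic 20-Hz call: true source ~9 km from the network, 0.02 s picking
# noise, plus one badly mis-picked arrival to exercise the robustness step.
src_true, t0_true = np.array([7.0, 6.0]), 10.0
obs = t0_true + travel_times(src_true) + rng.normal(0.0, 0.02, len(stations))
obs[2] += 1.5                                   # a "bad" arrival time

# Grid search over candidate locations: at each node a robust origin time is
# the median of (observed - predicted travel time), the single worst arrival
# is dropped, and the node with the smallest RMS residual wins.
xs = np.arange(-15.0, 15.01, 0.25)
ys = np.arange(-15.0, 15.01, 0.25)
best = None
for gx in xs:
    for gy in ys:
        tt = travel_times(np.array([gx, gy]))
        t0 = np.median(obs - tt)
        res = np.sort(np.abs(obs - (t0 + tt)))[:-1]   # drop the worst arrival
        rms = np.sqrt(np.mean(res**2))
        if best is None or rms < best[0]:
            best = (rms, gx, gy)

print("true location (km):", src_true)
print(f"best grid node    : ({best[1]:.2f}, {best[2]:.2f})   rms residual: {best[0]:.3f} s")
```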

  6. TU-H-207A-02: Relative Importance of the Various Factors Influencing the Accuracy of Monte Carlo Simulated CT Dose Index

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marous, L; Muryn, J; Liptak, C

    2016-06-15

    Purpose: Monte Carlo simulation is a frequently used technique for assessing patient dose in CT. The accuracy of a Monte Carlo program is often validated using the standard CT dose index (CTDI) phantoms by comparing simulated and measured CTDI100. To achieve good agreement, many input parameters in the simulation (e.g., energy spectrum and effective beam width) need to be determined. However, not all the parameters have equal importance. Our aim was to assess the relative importance of the various factors that influence the accuracy of simulated CTDI100. Methods: A Monte Carlo program previously validated for a clinical CT system was used to simulate CTDI100. For the standard CTDI phantoms (32 and 16 cm in diameter), CTDI100 values from central and four peripheral locations at 70 and 120 kVp were first simulated using a set of reference input parameter values (treated as the truth). To emulate the situation in which the input parameter values used by the researcher may deviate from the truth, additional simulations were performed in which intentional errors were introduced into the input parameters, the effects of which on simulated CTDI100 were analyzed. Results: At 38.4-mm collimation, errors in effective beam width up to 5.0 mm showed negligible effects on simulated CTDI100 (<1.0%). Likewise, errors in acrylic density of up to 0.01 g/cm3 resulted in small CTDI100 errors (<2.5%). In contrast, errors in spectral HVL produced more significant effects: slight deviations (±0.2 mm Al) produced errors up to 4.4%, whereas more extreme deviations (±1.4 mm Al) produced errors as high as 25.9%. Lastly, ignoring the CT table introduced errors up to 13.9%. Conclusion: Monte Carlo simulated CTDI100 is insensitive to errors in effective beam width and acrylic density. However, it is sensitive to errors in spectral HVL. To obtain accurate results, the CT table should not be ignored. This work was supported by a Faculty Research and Development Award from Cleveland State University.

  7. In-Situ Cameras for Radiometric Correction of Remotely Sensed Data

    NASA Astrophysics Data System (ADS)

    Kautz, Jess S.

    The atmosphere distorts the spectrum of remotely sensed data, negatively affecting all forms of investigating Earth's surface. To gather reliable data, it is vital that atmospheric corrections are accurate. The current state of the field of atmospheric correction does not account well for the benefits and costs of different correction algorithms. Ground spectral data are required to evaluate these algorithms better. This dissertation explores using cameras as radiometers as a means of gathering ground spectral data. I introduce techniques to implement a camera system for atmospheric correction using off-the-shelf parts. To aid the design of future camera systems for radiometric correction, methods for estimating the system error prior to construction, calibration and testing of the resulting camera system are explored. Simulations are used to investigate the relationship between the reflectance accuracy of the camera system and the quality of atmospheric correction. In the design phase, read noise and filter choice are found to be the strongest sources of system error. I explain the calibration methods for the camera system, showing the problems of pixel-to-angle calibration, and adapting the web camera for scientific work. The camera system is tested in the field to estimate its ability to recover directional reflectance from BRF data. I estimate the error in the system due to the experimental set-up, then explore how the system error changes with different cameras, environmental set-ups and inversions. With these experiments, I learn about the importance of the dynamic range of the camera, and the input ranges used for the PROSAIL inversion. Evidence that the camera can perform within the specification set for ELM correction in this dissertation is evaluated. The analysis is concluded by simulating an ELM correction of a scene using various numbers of calibration targets, and levels of system error, to find the number of cameras needed for a full-scale implementation.
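
    Since the dissertation evaluates the camera system against an empirical line method (ELM) correction, a minimal single-band ELM fit is sketched below; the radiance and reflectance values are made-up placeholders, and a real correction would be fitted per band.

```python
import numpy as np

# Sketch of an ELM correction: fit gain/offset so that known-target reflectance
# is a linear function of at-sensor radiance, then apply it to scene pixels.

def elm_fit(target_radiance, target_reflectance):
    A = np.vstack([target_radiance, np.ones_like(target_radiance)]).T
    gain, offset = np.linalg.lstsq(A, target_reflectance, rcond=None)[0]
    return gain, offset

radiance = np.array([12.0, 55.0, 98.0])      # calibration targets, one band
reflectance = np.array([0.05, 0.30, 0.55])   # ground-measured reflectance
gain, offset = elm_fit(radiance, reflectance)

scene_band = np.array([20.0, 40.0, 80.0])    # uncorrected scene pixels
surface_reflectance = gain * scene_band + offset
print(gain, offset, surface_reflectance)
```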

  8. Robust Adaptive Beamforming with Sensor Position Errors Using Weighted Subspace Fitting-Based Covariance Matrix Reconstruction.

    PubMed

    Chen, Peng; Yang, Yixin; Wang, Yong; Ma, Yuanliang

    2018-05-08

    When sensor position errors exist, the performance of recently proposed interference-plus-noise covariance matrix (INCM)-based adaptive beamformers may be severely degraded. In this paper, we propose a weighted subspace fitting-based INCM reconstruction algorithm to overcome sensor displacement for linear arrays. By estimating the rough signal directions, we construct a novel possible mismatched steering vector (SV) set. We analyze the proximity between the signal subspace of the sample covariance matrix (SCM) and the space spanned by the possible mismatched SV set. After solving an iterative optimization problem, we reconstruct the INCM using the estimated sensor position errors. Then we estimate the SV of the desired signal by solving an optimization problem with the reconstructed INCM. The main advantage of the proposed algorithm is its robustness against SV mismatches dominated by unknown sensor position errors. Numerical examples show that even if the position errors are up to half of the assumed sensor spacing, the output signal-to-interference-plus-noise ratio is only reduced by 4 dB. Beam patterns plotted using experimental data show that the proposed beamformer outperforms the other tested beamformers in interference suppression.
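
    The weighted subspace fitting and INCM reconstruction steps of the paper are not reproduced here; the sketch below only shows the final, standard MVDR weight computation that INCM-based beamformers share, with a placeholder covariance matrix and an assumed steering vector for an 8-element, half-wavelength linear array.

```python
import numpy as np

# Sketch: once an interference-plus-noise covariance R_in and a desired-signal
# steering vector a are available (faked here), the weights follow the MVDR form.

def mvdr_weights(R_in, a):
    Ri = np.linalg.inv(R_in)
    return Ri @ a / (a.conj().T @ Ri @ a)

n = 8                                    # sensors in a uniform linear array
d_over_lambda = 0.5
theta = np.deg2rad(10.0)                 # assumed signal direction
a = np.exp(2j * np.pi * d_over_lambda * np.arange(n) * np.sin(theta))

M = np.eye(n) + 0.1 * np.random.randn(n, n)
R_in = M @ M.conj().T                    # placeholder Hermitian PSD matrix
w = mvdr_weights(R_in, a)
print(np.abs(w))
```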

  9. Analysis of target wavefront error for secondary mirror of a spaceborne telescope

    NASA Astrophysics Data System (ADS)

    Chang, Shenq-Tsong; Lin, Wei-Cheng; Kuo, Ching-Hsiang; Chan, Chia-Yen; Lin, Yu-Chuan; Huang, Ting-Ming

    2014-09-01

    During the fabrication of an aspherical mirror, inspection of the residual wavefront error is critical. In this spaceborne telescope development program, the primary mirror is made of ZERODUR with a clear aperture of 450 mm and a mass of 10 kg after lightweighting. Because deformation of the mirror due to gravity is expected, uniform support monitored by load cells was applied to reduce the gravity effect. Inspection was performed to determine the residual wavefront error with the mirror face pointing upwards, and correction polishing was carried out according to this measurement. However, comparison with data measured in a bench test with the mirror face horizontal revealed deviations between the two measurements. Based on the wavefront error measured in the bench test, the optical system is predicted to be unable to meet its requirement. A target wavefront error for the secondary mirror is therefore analyzed to correct that of the primary mirror, and the resulting optical performance is presented.

  10. Optimal analytic method for the nonlinear Hasegawa-Mima equation

    NASA Astrophysics Data System (ADS)

    Baxter, Mathew; Van Gorder, Robert A.; Vajravelu, Kuppalapalle

    2014-05-01

    The Hasegawa-Mima equation is a nonlinear partial differential equation that describes the electric potential due to a drift wave in a plasma. In the present paper, we apply the method of homotopy analysis to a slightly more general Hasegawa-Mima equation, which accounts for hyper-viscous damping or viscous dissipation. First, we outline the method for the general initial/boundary value problem over a compact rectangular spatial domain. We use a two-stage method, where both the convergence control parameter and the auxiliary linear operator are optimally selected to minimize the residual error due to the approximation. To do the latter, we consider a family of operators parameterized by a constant which gives the decay rate of the solutions. After outlining the general method, we consider a number of concrete examples in order to demonstrate the utility of this approach. The results enable us to study properties of the initial/boundary value problem for the generalized Hasegawa-Mima equation. In several cases considered, we are able to obtain solutions with extremely small residual errors after relatively few iterations are computed (residual errors on the order of 10-15 are found in multiple cases after only three iterations). The results demonstrate that selecting a parameterized auxiliary linear operator can be extremely useful for minimizing residual errors when used concurrently with the optimal homotopy analysis method, suggesting that this approach can prove useful for a number of nonlinear partial differential equations arising in physics and nonlinear mechanics.
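
    In generic notation (not the authors' exact formulation), the two-parameter optimization described above can be written as minimizing the squared residual of the governing operator over both the convergence-control parameter and the decay-rate parameter of the auxiliary linear operator:

```latex
% u_m is the m-th order homotopy approximation, N the nonlinear operator,
% c_0 the convergence-control parameter, and \alpha the decay-rate parameter
% of the auxiliary linear operator L_\alpha.
E_m(c_0,\alpha) = \int_{\Omega} \big( N[u_m(x,t;c_0,\alpha)] \big)^2 \, d\Omega,
\qquad
(c_0^{*},\alpha^{*}) = \arg\min_{c_0,\alpha} E_m(c_0,\alpha),
\quad \text{i.e. } \frac{\partial E_m}{\partial c_0}=0,\;
\frac{\partial E_m}{\partial \alpha}=0 .
```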

  11. Evaluation of RSA set-up from a clinical biplane fluoroscopy system for 3D joint kinematic analysis.

    PubMed

    Bonanzinga, Tommaso; Signorelli, Cecilia; Bontempi, Marco; Russo, Alessandro; Zaffagnini, Stefano; Marcacci, Maurilio; Bragonzoni, Laura

    2016-01-01

    Dynamic roentgen stereophotogrammetric analysis (RSA), a technique currently based only on customized radiographic equipment, has been shown to be a very accurate method for detecting three-dimensional (3D) joint motion. The aim of the present work was to evaluate the applicability of an innovative RSA set-up for in vivo knee kinematic analysis, using a biplane fluoroscopic image system. To this end, the authors describe the set-up as well as a possible protocol for clinical knee joint evaluation, and the accuracy of the kinematic measurements is assessed. The authors evaluated the accuracy of 3D kinematic analysis of the knee in a new RSA set-up, based on a commercial biplane fluoroscopy system integrated into the clinical environment. The study was organized in three main phases: an in vitro test under static conditions, an in vitro test under dynamic conditions reproducing a flexion-extension range of motion (ROM), and an in vivo analysis of the flexion-extension ROM. For each test, the following were calculated as an indication of the tracking accuracy: mean, minimum and maximum values and standard deviation of the error of rigid body fitting. In terms of rigid body fitting, in vivo test errors were found to be 0.10±0.05 mm. Phantom tests in static and kinematic conditions showed precision levels, for translations and rotations, below 0.1 mm/0.2° and below 0.5 mm/0.3°, respectively, for all directions. The results of this study suggest that kinematic RSA can be successfully performed using a standard clinical biplane fluoroscopy system for the acquisition of slow movements of the lower limb. A kinematic RSA set-up using a clinical biplane fluoroscopy system is potentially applicable and provides a useful method for obtaining better characterization of joint biomechanics.

  12. The effects of time-varying observation errors on semi-empirical sea-level projections

    DOE PAGES

    Ruckert, Kelsey L.; Guan, Yawen; Bakker, Alexander M. R.; ...

    2016-11-30

    Sea-level rise is a key driver of projected flooding risks. The design of strategies to manage these risks often hinges on projections that inform decision-makers about the surrounding uncertainties. Producing semi-empirical sea-level projections is difficult, for example, due to the complexity of the error structure of the observations, such as time-varying (heteroskedastic) observation errors and autocorrelation of the data-model residuals. This raises the question of how neglecting the error structure impacts hindcasts and projections. Here, we quantify this effect on sea-level projections and parameter distributions by using a simple semi-empirical sea-level model. Specifically, we compare three model-fitting methods: a frequentist bootstrap as well as a Bayesian inversion with and without considering heteroskedastic residuals. All methods produce comparable hindcasts, but the parametric distributions and projections differ considerably based on methodological choices. In conclusion, our results show that the differences based on the methodological choices are enhanced in the upper tail projections. For example, the Bayesian inversion accounting for heteroskedasticity increases the sea-level anomaly with a 1% probability of being equaled or exceeded in the year 2050 by about 34% and about 40% in the year 2100 compared to a frequentist bootstrap. These results indicate that neglecting known properties of the observation errors and the data-model residuals can lead to low-biased sea-level projections.
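
    A minimal sketch of the error structure being compared, assuming Gaussian residuals: an AR(1) term for autocorrelation plus a known, time-varying observation error added in variance. The residuals, observation errors, and AR parameters below are illustrative placeholders, and the conditional variance is a simplification of a full state-space treatment.

```python
import numpy as np

def log_likelihood(residuals, sigma_obs, rho, sigma_ar):
    """Gaussian log-likelihood: AR(1) residuals plus known, time-varying obs. error."""
    var = sigma_ar**2 + sigma_obs**2                 # total variance per time step
    ll = -0.5 * (np.log(2 * np.pi * var[0]) + residuals[0]**2 / var[0])
    for t in range(1, len(residuals)):
        mean_t = rho * residuals[t - 1]              # AR(1) expectation
        ll += -0.5 * (np.log(2 * np.pi * var[t])
                      + (residuals[t] - mean_t)**2 / var[t])
    return ll

resid = np.array([1.0, 0.4, -0.2, 0.8, 0.1])         # data-model residuals, mm (made up)
sigma_obs = np.array([2.0, 1.5, 1.2, 0.8, 0.5])      # shrinking observation errors, mm
print(log_likelihood(resid, sigma_obs, rho=0.5, sigma_ar=1.0))
```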

  14. Residual aneurysm after metal coils treatment detected by spectral CT

    PubMed Central

    Wang, Yang; Gao, Xiaolei; Lu, Aixun; Zhou, Zhengyang; Li, Baoxin

    2012-01-01

    Digital subtraction angiography (DSA) is currently the gold standard for diagnosing residual or recurrent aneurysm after treatment, especially in the presence of metal coils. However, DSA is an invasive procedure which may cause additional trauma and economic burden to patients. Spectral CT imaging, as a newly introduced CT imaging mode, produces monochromatic image sets that are able to reduce beam-hardening and other metal-related artifacts, and has found use in several clinical applications, including brain imaging to reduce beam-hardening artifacts. In this study, we describe a case in which spectral CT imaging was used in the follow-up of metal coil treatment and detected a small leaf of residual aneurysm after metal coil treatment. PMID:23256074

  15. Composability-Centered Convolutional Neural Network Pruning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Xipeng; Guan, Hui; Lim, Seung-Hwan

    This work studies the composability of the building blocks of structural CNN models (e.g., GoogLeNet and Residual Networks) in the context of network pruning. We empirically validate that a network composed of pre-trained building blocks (e.g., residual blocks and Inception modules) not only gives a better initial setting for training, but also allows the training process to converge at a significantly higher accuracy in much less time. Based on that insight, we propose a composability-centered design for CNN network pruning. Experiments show that this new scheme shortens the configuration process in CNN network pruning by up to 186.8X for ResNet-50 and up to 30.2X for Inception-V3, and meanwhile, the models it finds that meet the accuracy requirement are significantly more compact than those found by default schemes.

  16. SU-C-BRD-02: A Team Focused Clinical Implementation and Failure Mode and Effects Analysis of HDR Skin Brachytherapy Using Valencia and Leipzig Surface Applicators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sayler, E; Harrison, A; Eldredge-Hindy, H

    Purpose: Valencia and Leipzig applicators (VLAs) are single-channel brachytherapy surface applicators used to treat skin lesions up to 2 cm in diameter. Source dwell times can be calculated and entered manually after clinical set-up or ultrasound. This procedure differs dramatically from CT-based planning; the novelty and unfamiliarity could lead to severe errors. To build layers of safety and ensure quality, a multidisciplinary team created a protocol and applied Failure Modes and Effects Analysis (FMEA) to the clinical procedure for HDR VLA skin treatments. Methods: A team including physicists, physicians, nurses, therapists, residents, and administration developed a clinical procedure for VLA treatment. The procedure was evaluated using FMEA. Failure modes were identified and scored by severity, occurrence, and detection. The clinical procedure was revised to address high-scoring process nodes. Results: Several key components were added to the clinical procedure to minimize risk priority numbers (RPN): - Treatments are reviewed at weekly QA rounds, where physicians discuss diagnosis, prescription, applicator selection, and set-up. Peer review reduces the likelihood of an inappropriate treatment regime. - A template for HDR skin treatments was established in the clinical EMR system to standardize treatment instructions. This reduces the chances of miscommunication between the physician and planning physicist, and increases the detectability of an error during the physics second check. - A screen check was implemented during the second check to increase detectability of an error. - To reduce error probability, the treatment plan worksheet was designed to display plan parameters in a format visually similar to the treatment console display. This facilitates data entry and verification. - VLAs are color-coded and labeled to match the EMR prescriptions, which simplifies in-room selection and verification. Conclusion: Multidisciplinary planning and FMEA increased detectability and reduced error probability during VLA HDR brachytherapy. This clinical model may be useful to institutions implementing similar procedures.
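
    A minimal sketch of the FMEA scoring step: each failure mode receives severity (S), occurrence (O), and detectability (D) scores, and the product RPN = S x O x D ranks the process nodes to address first. The failure modes and scores below are purely illustrative.

```python
# Illustrative FMEA ranking; steps and scores are placeholders, not the
# institution's actual failure modes.
failure_modes = [
    {"step": "manual dwell-time entry",     "S": 8, "O": 4, "D": 6},
    {"step": "applicator selection",        "S": 7, "O": 3, "D": 4},
    {"step": "prescription transcription",  "S": 9, "O": 2, "D": 5},
]

for fm in failure_modes:
    fm["RPN"] = fm["S"] * fm["O"] * fm["D"]          # risk priority number

for fm in sorted(failure_modes, key=lambda f: f["RPN"], reverse=True):
    print(f"{fm['step']:28s} RPN = {fm['RPN']}")
```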

  17. Dosimetric consequences of translational and rotational errors in frame-less image-guided radiosurgery

    PubMed Central

    2012-01-01

    Background To investigate geometric and dosimetric accuracy of frame-less image-guided radiosurgery (IG-RS) for brain metastases. Methods and materials Single fraction IG-RS was practiced in 72 patients with 98 brain metastases. Patient positioning and immobilization used either double- (n = 71) or single-layer (n = 27) thermoplastic masks. Pre-treatment set-up errors (n = 98) were evaluated with cone-beam CT (CBCT) based image-guidance (IG) and were corrected in six degrees of freedom without an action level. CBCT imaging after treatment measured intra-fractional errors (n = 64). Pre- and post-treatment errors were simulated in the treatment planning system and target coverage and dose conformity were evaluated. Three scenarios of 0 mm, 1 mm and 2 mm GTV-to-PTV (gross tumor volume, planning target volume) safety margins (SM) were simulated. Results Errors prior to IG were 3.9 mm ± 1.7 mm (3D vector) and the maximum rotational error was 1.7° ± 0.8° on average. The post-treatment 3D error was 0.9 mm ± 0.6 mm. No differences between double- and single-layer masks were observed. Intra-fractional errors were significantly correlated with the total treatment time with 0.7mm±0.5mm and 1.2mm±0.7mm for treatment times ≤23 minutes and >23 minutes (p<0.01), respectively. Simulation of RS without image-guidance reduced target coverage and conformity to 75% ± 19% and 60% ± 25% of planned values. Each 3D set-up error of 1 mm decreased target coverage and dose conformity by 6% and 10% on average, respectively, with a large inter-patient variability. Pre-treatment correction of translations only but not rotations did not affect target coverage and conformity. Post-treatment errors reduced target coverage by >5% in 14% of the patients. A 1 mm safety margin fully compensated intra-fractional patient motion. Conclusions IG-RS with online correction of translational errors achieves high geometric and dosimetric accuracy. Intra-fractional errors decrease target coverage and conformity unless compensated with appropriate safety margins. PMID:22531060

  18. Protein 3D Structure Computed from Evolutionary Sequence Variation

    PubMed Central

    Sheridan, Robert; Hopf, Thomas A.; Pagnani, Andrea; Zecchina, Riccardo; Sander, Chris

    2011-01-01

    The evolutionary trajectory of a protein through sequence space is constrained by its function. Collections of sequence homologs record the outcomes of millions of evolutionary experiments in which the protein evolves according to these constraints. Deciphering the evolutionary record held in these sequences and exploiting it for predictive and engineering purposes presents a formidable challenge. The potential benefit of solving this challenge is amplified by the advent of inexpensive high-throughput genomic sequencing. In this paper we ask whether we can infer evolutionary constraints from a set of sequence homologs of a protein. The challenge is to distinguish true co-evolution couplings from the noisy set of observed correlations. We address this challenge using a maximum entropy model of the protein sequence, constrained by the statistics of the multiple sequence alignment, to infer residue pair couplings. Surprisingly, we find that the strength of these inferred couplings is an excellent predictor of residue-residue proximity in folded structures. Indeed, the top-scoring residue couplings are sufficiently accurate and well-distributed to define the 3D protein fold with remarkable accuracy. We quantify this observation by computing, from sequence alone, all-atom 3D structures of fifteen test proteins from different fold classes, ranging in size from 50 to 260 residues, including a G-protein coupled receptor. These blinded inferences are de novo, i.e., they do not use homology modeling or sequence-similar fragments from known structures. The co-evolution signals provide sufficient information to determine accurate 3D protein structure to 2.7–4.8 Å Cα-RMSD error relative to the observed structure, over at least two-thirds of the protein (method called EVfold, details at http://EVfold.org). This discovery provides insight into essential interactions constraining protein evolution and will facilitate a comprehensive survey of the universe of protein structures, new strategies in protein and drug design, and the identification of functional genetic variants in normal and disease genomes. PMID:22163331
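
    The sketch below is not the maximum entropy model used by EVfold; it scores co-variation between two alignment columns with mutual information, which (unlike the max-entropy couplings) mixes direct and indirect correlations, but it illustrates the kind of pairwise score computed from a multiple sequence alignment. The toy alignment is a placeholder.

```python
import numpy as np
from collections import Counter

def mutual_information(col_i, col_j):
    """Mutual information between two alignment columns (lists of residues)."""
    n = len(col_i)
    pi, pj = Counter(col_i), Counter(col_j)
    pij = Counter(zip(col_i, col_j))
    mi = 0.0
    for (a, b), c in pij.items():
        p_ab = c / n
        mi += p_ab * np.log(p_ab / ((pi[a] / n) * (pj[b] / n)))
    return mi

msa = ["ACDKL", "ACEKL", "GCDRL", "GCERL"]    # toy alignment, 4 sequences
cols = list(zip(*msa))
print(mutual_information(cols[0], cols[3]))   # columns 1 and 4 co-vary (A..K / G..R)
```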

  19. Metabolic biotransformation half-lives in fish: QSAR modeling and consensus analysis.

    PubMed

    Papa, Ester; van der Wal, Leon; Arnot, Jon A; Gramatica, Paola

    2014-02-01

    Bioaccumulation in fish is a function of competing rates of chemical uptake and elimination. For hydrophobic organic chemicals bioconcentration, bioaccumulation and biomagnification potential are high and the biotransformation rate constant is a key parameter. Few measured biotransformation rate constant data are available compared to the number of chemicals that are being evaluated for bioaccumulation hazard and for exposure and risk assessment. Three new Quantitative Structure-Activity Relationships (QSARs) for predicting whole body biotransformation half-lives (HLN) in fish were developed and validated using theoretical molecular descriptors that seek to capture structural characteristics of the whole molecule and three data set splitting schemes. The new QSARs were developed using a minimal number of theoretical descriptors (n=9) and compared to existing QSARs developed using fragment contribution methods that include up to 59 descriptors. The predictive statistics of the models are similar thus further corroborating the predictive performance of the different QSARs; Q(2)ext ranges from 0.75 to 0.77, CCCext ranges from 0.86 to 0.87, RMSE in prediction ranges from 0.56 to 0.58. The new QSARs provide additional mechanistic insights into the biotransformation capacity of organic chemicals in fish by including whole molecule descriptors and they also include information on the domain of applicability for the chemical of interest. Advantages of consensus modeling for improving overall prediction and minimizing false negative errors in chemical screening assessments, for identifying potential sources of residual error in the empirical HLN database, and for identifying structural features that are not well represented in the HLN dataset to prioritize future testing needs are illustrated. © 2013.

  20. Analysis of Monoclonal Antibodies in Human Serum as a Model for Clinical Monoclonal Gammopathy by Use of 21 Tesla FT-ICR Top-Down and Middle-Down MS/MS

    NASA Astrophysics Data System (ADS)

    He, Lidong; Anderson, Lissa C.; Barnidge, David R.; Murray, David L.; Hendrickson, Christopher L.; Marshall, Alan G.

    2017-05-01

    With the rapid growth of therapeutic monoclonal antibodies (mAbs), stringent quality control is needed to ensure clinical safety and efficacy. Monoclonal antibody primary sequence and post-translational modifications (PTM) are conventionally analyzed with labor-intensive, bottom-up tandem mass spectrometry (MS/MS), which is limited by incomplete peptide sequence coverage and introduction of artifacts during the lengthy analysis procedure. Here, we describe top-down and middle-down approaches with the advantages of fast sample preparation with minimal artifacts, ultrahigh mass accuracy, and extensive residue cleavages by use of 21 tesla FT-ICR MS/MS. The ultrahigh mass accuracy yields an RMS error of 0.2-0.4 ppm for antibody light chain, heavy chain, heavy chain Fc/2, and Fd subunits. The corresponding sequence coverages are 81%, 38%, 72%, and 65% with MS/MS RMS error 4 ppm. Extension to a monoclonal antibody in human serum as a monoclonal gammopathy model yielded 53% sequence coverage from two nano-LC MS/MS runs. A blind analysis of five therapeutic monoclonal antibodies at clinically relevant concentrations in human serum resulted in correct identification of all five antibodies. Nano-LC 21 T FT-ICR MS/MS provides nonpareil mass resolution, mass accuracy, and sequence coverage for mAbs, and sets a benchmark for MS/MS analysis of multiple mAbs in serum. This is the first time that extensive cleavages for both variable and constant regions have been achieved for mAbs in a human serum background.

  1. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--QA ANALYTICAL RESULTS FOR METALS IN SPIKE SAMPLES

    EPA Science Inventory

    The Metals in Spike Samples data set contains the analytical results of measurements of up to 11 metals in 15 control samples (spikes) from 11 households. Measurements were made in spiked samples of dust, food, and dermal wipe residue. Spiked samples were used to assess recover...

  2. Water Utility Lime Sludge Reuse – An Environmental Sorbent for Power Utilities

    EPA Science Inventory

    Lime sludge can be used as an environmental sorbent to remove sulfur dioxide (SO2) and acid gases, by the ultra-fine CaCO3 particles, and to sequester mercury and other heavy metals, by the Natural Organic Matter and residual activated carbon. The laboratory experimental set up ...

  3. Optimising in situ gamma measurements to identify the presence of radioactive particles in land areas.

    PubMed

    Rostron, Peter D; Heathcote, John A; Ramsey, Michael H

    2014-12-01

    High-coverage in situ surveys with gamma detectors are the best means of identifying small hotspots of activity, such as radioactive particles, in land areas. Scanning surveys can produce rapid results, but the probabilities of obtaining false positive or false negative errors are often unknown, and they may not satisfy other criteria such as estimation of mass activity concentrations. An alternative is to use portable gamma-detectors that are set up at a series of locations in a systematic sampling pattern, where any positive measurements are subsequently followed up in order to determine the exact location, extent and nature of the target source. The preliminary survey is typically designed using settings of detector height, measurement spacing and counting time that are based on convenience, rather than using settings that have been calculated to meet requirements. This paper introduces the basis of a repeatable method of setting these parameters at the outset of a survey, for pre-defined probabilities of false positive and false negative errors in locating spatially small radioactive particles in land areas. It is shown that an un-collimated detector is more effective than a collimated detector that might typically be used in the field. Copyright © 2014 The Authors. Published by Elsevier Ltd.. All rights reserved.
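
    A minimal sketch of the false-positive/false-negative trade-off for one static measurement, assuming simple Poisson counting statistics: the alarm threshold is set from the allowed false-positive rate on background alone, and the false-negative probability follows for an assumed extra count rate from a particle. The rates and counting time are illustrative placeholders, not values from the paper.

```python
from scipy.stats import poisson

def detection_probabilities(bkg_cps, src_cps, t_s, alpha=0.05):
    """Threshold from the background-only false-positive rate, then the miss rate."""
    mu_b = bkg_cps * t_s                           # expected background counts
    mu_s = (bkg_cps + src_cps) * t_s               # expected counts with a particle
    threshold = poisson.ppf(1 - alpha, mu_b)       # counts above this trigger follow-up
    p_false_pos = 1 - poisson.cdf(threshold, mu_b)
    p_false_neg = poisson.cdf(threshold, mu_s)
    return threshold, p_false_pos, p_false_neg

print(detection_probabilities(bkg_cps=5.0, src_cps=3.0, t_s=30.0))
```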

  4. Assessment of Systematic Chromatic Errors that Impact Sub-1% Photometric Precision in Large-Area Sky Surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, T. S.

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is stable in time and uniform over the sky to 1% precision or better. Past surveys have achieved photometric precision of 1-2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors using photometry from the Dark Energy Survey (DES) as an example. We define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes, when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the systematic chromatic errors caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane, can be up to 2% in some bandpasses. We compare the calculated systematic chromatic errors with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput. The residual after correction is less than 0.3%. We also find that the errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.

  5. Multivariate Statistics Applied to Seismic Phase Picking

    NASA Astrophysics Data System (ADS)

    Velasco, A. A.; Zeiler, C. P.; Anderson, D.; Pingitore, N. E.

    2008-12-01

    The initial effort of the Seismogram Picking Error from Analyst Review (SPEAR) project has been to establish a common set of seismograms to be picked by the seismological community. Currently we have 13 analysts from 4 institutions that have provided picks on the set of 26 seismograms. In comparing the picks thus far, we have identified consistent biases between picks from different institutions; effects of the experience of analysts; and the impact of signal-to-noise on picks. The institutional bias in picks brings up the important concern that picks will not be the same between different catalogs. This difference means less precision and accuracy when combining picks from multiple institutions. We also note that, depending on the experience level of the analyst making picks for a catalog, the error could fluctuate dramatically. However, the experience level is based on the number of years spent picking seismograms, and this may not be an appropriate criterion for determining an analyst's precision. The common data set of seismograms provides a means to test an analyst's level of precision and biases. The analyst is also limited by the quality of the signal, and we show that the signal-to-noise ratio and pick error are correlated with the location, size and distance of the event. This makes the standard estimate of picking error based on SNR more complex, because additional constraints are needed to accurately constrain the measurement error. We propose to extend the current measurement of error by adding the additional constraints of institutional bias and event characteristics to the standard SNR measurement. We use multivariate statistics to model the data and provide constraints to accurately assess earthquake location and measurement errors.

  6. Soil moisture assimilation using a modified ensemble transform Kalman filter with water balance constraint

    NASA Astrophysics Data System (ADS)

    Wu, Guocan; Zheng, Xiaogu; Dan, Bo

    2016-04-01

    Shallow soil moisture observations are assimilated into the Common Land Model (CoLM) to estimate the soil moisture in different layers. The forecast error is inflated to improve the accuracy of the analysis state, and a water balance constraint is adopted to reduce the water budget residual in the assimilation procedure. The experimental results illustrate that adaptive forecast error inflation can reduce the analysis error, while the proper inflation layer can be selected based on the -2 log-likelihood function of the innovation statistic. The water balance constraint substantially reduces the water budget residual, at a small cost in assimilation accuracy. The assimilation scheme can potentially be applied to assimilate remote sensing data.

  7. Ensemble Eclipse: A Process for Prefab Development Environment for the Ensemble Project

    NASA Technical Reports Server (NTRS)

    Wallick, Michael N.; Mittman, David S.; Shams, Khawaja, S.; Bachmann, Andrew G.; Ludowise, Melissa

    2013-01-01

    This software simplifies the process of having to set up an Eclipse IDE programming environment for the members of the cross-NASA center project, Ensemble. It achieves this by assembling all the necessary add-ons and custom tools/preferences. This software is unique in that it allows developers in the Ensemble Project (approximately 20 to 40 at any time) across multiple NASA centers to set up a development environment almost instantly and work on Ensemble software. The software automatically has the source code repositories and other vital information and settings included. The Eclipse IDE is an open-source development framework. The NASA (Ensemble-specific) version of the software includes Ensemble-specific plug-ins as well as settings for the Ensemble project. This software saves developers the time and hassle of setting up a programming environment, making sure that everything is set up in the correct manner for Ensemble development. Existing software (i.e., standard Eclipse) requires an intensive setup process that is both time-consuming and error prone. This software is built once by a single user and tested, allowing other developers to simply download and use the software

  8. Evaluation of genomic high-throughput sequencing data generated on Illumina HiSeq and Genome Analyzer systems

    PubMed Central

    2011-01-01

    Background The generation and analysis of high-throughput sequencing data are becoming a major component of many studies in molecular biology and medical research. Illumina's Genome Analyzer (GA) and HiSeq instruments are currently the most widely used sequencing devices. Here, we comprehensively evaluate properties of genomic HiSeq and GAIIx data derived from two plant genomes and one virus, with read lengths of 95 to 150 bases. Results We provide quantifications and evidence for GC bias, error rates, error sequence context, effects of quality filtering, and the reliability of quality values. By combining different filtering criteria we reduced error rates 7-fold at the expense of discarding 12.5% of alignable bases. While overall error rates are low in HiSeq data we observed regions of accumulated wrong base calls. Only 3% of all error positions accounted for 24.7% of all substitution errors. Analyzing the forward and reverse strands separately revealed error rates of up to 18.7%. Insertions and deletions occurred at very low rates on average but increased to up to 2% in homopolymers. A positive correlation between read coverage and GC content was found depending on the GC content range. Conclusions The errors and biases we report have implications for the use and the interpretation of Illumina sequencing data. GAIIx and HiSeq data sets show slightly different error profiles. Quality filtering is essential to minimize downstream analysis artifacts. Supporting previous recommendations, the strand-specificity provides a criterion to distinguish sequencing errors from low abundance polymorphisms. PMID:22067484

  9. Load estimator (LOADEST): a FORTRAN program for estimating constituent loads in streams and rivers

    USGS Publications Warehouse

    Runkel, Robert L.; Crawford, Charles G.; Cohn, Timothy A.

    2004-01-01

    LOAD ESTimator (LOADEST) is a FORTRAN program for estimating constituent loads in streams and rivers. Given a time series of streamflow, additional data variables, and constituent concentration, LOADEST assists the user in developing a regression model for the estimation of constituent load (calibration). Explanatory variables within the regression model include various functions of streamflow, decimal time, and additional user-specified data variables. The formulated regression model then is used to estimate loads over a user-specified time interval (estimation). Mean load estimates, standard errors, and 95 percent confidence intervals are developed on a monthly and(or) seasonal basis. The calibration and estimation procedures within LOADEST are based on three statistical estimation methods. The first two methods, Adjusted Maximum Likelihood Estimation (AMLE) and Maximum Likelihood Estimation (MLE), are appropriate when the calibration model errors (residuals) are normally distributed. Of the two, AMLE is the method of choice when the calibration data set (time series of streamflow, additional data variables, and concentration) contains censored data. The third method, Least Absolute Deviation (LAD), is an alternative to maximum likelihood estimation when the residuals are not normally distributed. LOADEST output includes diagnostic tests and warnings to assist the user in determining the appropriate estimation method and in interpreting the estimated loads. This report describes the development and application of LOADEST. Sections of the report describe estimation theory, input/output specifications, sample applications, and installation instructions.
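
    A minimal sketch of the kind of regression LOADEST calibrates, using ordinary least squares as a stand-in for AMLE/MLE and ignoring censoring and the retransformation bias correction; the predictor set (centered ln streamflow, its square, seasonal sine/cosine terms, and centered decimal time) mirrors the general form of the built-in models, and all data below are synthetic.

```python
import numpy as np

def design_matrix(lnQ, dtime):
    """Explanatory variables: functions of streamflow and decimal time."""
    return np.column_stack([
        np.ones_like(lnQ), lnQ, lnQ**2,
        np.sin(2 * np.pi * dtime), np.cos(2 * np.pi * dtime),
        dtime, dtime**2,
    ])

rng = np.random.default_rng(0)
lnQ = rng.normal(size=60)                      # centered ln(streamflow)
dtime = np.linspace(-1.0, 1.0, 60)             # centered decimal time, years
ln_load = 2.0 + 0.9 * lnQ + 0.3 * np.sin(2 * np.pi * dtime) + rng.normal(0.0, 0.2, 60)

X = design_matrix(lnQ, dtime)
coef, *_ = np.linalg.lstsq(X, ln_load, rcond=None)
resid = ln_load - X @ coef
print(coef)
print(np.std(resid))                           # residual standard error in log space
```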

  10. Quality assessment of MEG-to-MRI coregistrations

    NASA Astrophysics Data System (ADS)

    Sonntag, Hermann; Haueisen, Jens; Maess, Burkhard

    2018-04-01

    For high precision in source reconstruction of magnetoencephalography (MEG) or electroencephalography data, high accuracy of the coregistration of sources and sensors is mandatory. Usually, the source space is derived from magnetic resonance imaging (MRI). In most cases, however, no quality assessment is reported for sensor-to-MRI coregistrations. If any, typically root mean squares (RMS) of point residuals are provided. It has been shown, however, that the RMS of residuals does not correlate with coregistration errors. We suggest using the target registration error (TRE) as the criterion for the quality of sensor-to-MRI coregistrations. TRE measures the effect of uncertainty in coregistrations at all points of interest. In total, 5544 data sets with sensor-to-head and 128 head-to-MRI coregistrations, from a single MEG laboratory, were analyzed. An adaptive Metropolis algorithm was used to estimate the optimal coregistration and to sample the coregistration parameters (rotation and translation). We found an average TRE between 1.3 and 2.3 mm at the head surface. Further, we observed a mean absolute difference in coregistration parameters between the Metropolis and iterative closest point algorithm of (1.9 ± 15)° and (1.1 ± 9) mm. A paired sample t-test indicated a significant improvement in goal function minimization by using the Metropolis algorithm. The sampled parameters allowed computation of the TRE on the entire grid of the MRI volume. Hence, we recommend the Metropolis algorithm for head-to-MRI coregistrations.
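
    A minimal sketch of a TRE-style assessment: each sampled coregistration (rotation plus translation) is applied to a few points of interest, and their mean displacement from the optimal coregistration is reported per point. The points and parameter samples below are placeholders, not the study's posterior samples.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def tre(points, rot_samples_deg, trans_samples_mm, rot_opt_deg, trans_opt_mm):
    """Mean displacement of each point of interest across sampled coregistrations."""
    p_opt = R.from_euler("xyz", rot_opt_deg, degrees=True).apply(points) + trans_opt_mm
    errs = []
    for rot, tr in zip(rot_samples_deg, trans_samples_mm):
        p = R.from_euler("xyz", rot, degrees=True).apply(points) + tr
        errs.append(np.linalg.norm(p - p_opt, axis=1))
    return np.mean(errs, axis=0)

points = np.array([[80.0, 0.0, 40.0], [-80.0, 0.0, 40.0], [0.0, 90.0, 30.0]])  # mm
rot_samples = np.random.normal(0.0, 0.5, size=(200, 3))     # degrees, placeholder spread
trans_samples = np.random.normal(0.0, 1.0, size=(200, 3))   # mm, placeholder spread
print(tre(points, rot_samples, trans_samples, [0, 0, 0], [0, 0, 0]))
```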

  11. Input Shaping to Reduce Solar Array Structural Vibrations

    NASA Technical Reports Server (NTRS)

    Doherty, Michael J.; Tolson, Robert J.

    1998-01-01

    Structural vibrations induced by actuators can be minimized using input shaping. Input shaping is a feedforward method in which actuator commands are convolved with shaping functions to yield a shaped set of commands. These commands are designed to perform the maneuver while minimizing the residual structural vibration. In this report, input shaping is extended to stepper motor actuators. As a demonstration, an input-shaping technique based on pole-zero cancellation was used to modify the Solar Array Drive Assembly (SADA) actuator commands for the Lewis satellite. A series of impulses were calculated as the ideal SADA output for vibration control. These impulses were then discretized for use by the SADA stepper motor actuator and simulated actuator outputs were used to calculate the structural response. The effectiveness of input shaping is limited by the accuracy of the knowledge of the modal frequencies. Assuming perfect knowledge resulted in significant vibration reduction. Errors of 10% in the modal frequencies caused notably higher levels of vibration. Controller robustness was improved by incorporating additional zeros in the shaping function. The additional zeros did not require increased performance from the actuator. Despite the identification errors, the resulting feedforward controller reduced residual vibrations to the level of the exactly modeled input shaper and well below the baseline cases. These results could be easily applied to many other vibration-sensitive applications involving stepper motor actuators.
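
    A minimal sketch of input shaping by convolution, using the textbook two-impulse zero-vibration (ZV) shaper rather than the pole-zero cancellation design used for the SADA commands; the modal frequency and damping ratio are assumed values.

```python
import numpy as np

def zv_shaper(freq_hz, zeta, dt):
    """Two-impulse zero-vibration shaper for an assumed mode (frequency, damping)."""
    wd = 2 * np.pi * freq_hz * np.sqrt(1 - zeta**2)      # damped natural frequency
    K = np.exp(-zeta * np.pi / np.sqrt(1 - zeta**2))
    amps = np.array([1.0, K]) / (1.0 + K)                # impulse amplitudes (sum to 1)
    times = np.array([0.0, np.pi / wd])                  # impulse times, seconds
    shaper = np.zeros(int(round(times[-1] / dt)) + 1)
    for a, t in zip(amps, times):
        shaper[int(round(t / dt))] += a
    return shaper

dt = 0.01
command = np.ones(500)                                   # unshaped step command
shaped = np.convolve(command, zv_shaper(freq_hz=1.2, zeta=0.02, dt=dt))
print(shaped[0], shaped[100])                            # ramps in two steps, then holds at 1
```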

  12. Kalman filter estimation of human pilot-model parameters

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Roland, V. R.

    1975-01-01

    The parameters of a human pilot-model transfer function are estimated by applying the extended Kalman filter to the corresponding retarded differential-difference equations in the time domain. Use of computer-generated data indicates that most of the parameters, including the implicit time delay, may be reasonably estimated in this way. When applied to two sets of experimental data obtained from a closed-loop tracking task performed by a human, the Kalman filter generated diverging residuals for one of the measurement types, apparently because of model assumption errors. Application of a modified adaptive technique was found to overcome the divergence and to produce reasonable estimates of most of the parameters.

  13. Comparison of different tree sap flow up-scaling procedures using Monte-Carlo simulations

    NASA Astrophysics Data System (ADS)

    Tatarinov, Fyodor; Preisler, Yakir; Roahtyn, Shani; Yakir, Dan

    2015-04-01

    An important task in determining forest ecosystem water balance is the estimation of stand transpiration, which allows evapotranspiration to be separated into transpiration and soil evaporation. This can be based on up-scaling measurements of sap flow in representative trees (SF), which can be done by different mathematical algorithms. The aim of the present study was to evaluate the error associated with different up-scaling algorithms under different conditions. Other types of errors (such as measurement error, within-tree SF variability, choice of sample plot, etc.) were not considered here. A set of simulation experiments using the Monte-Carlo technique was carried out and three up-scaling procedures were tested: (1) multiplying the mean stand sap flux density per unit sapwood cross-section area (SFD) by the total sapwood area (Klein et al., 2014); (2) deriving a linear dependence of tree sap flow on tree DBH and calculating SFstand from SF predicted by DBH class and the stand DBH distribution (Cermak et al., 2004); (3) the same as method 2 but using a non-linear dependency. Simulations were performed under different SFD(DBH) slopes (bs: positive, negative, zero), different DBH and SFD standard deviations (Δd and Δs, respectively) and different DBH class sizes. It was assumed that all trees in a unit area are measured, and the total SF of all trees in the experimental plot was taken as the reference SFstand value. Under negative bs all models tend to overestimate SFstand and the error increases exponentially with decreasing bs. Under bs > 0 all models tend to underestimate SFstand, but the error is much smaller than for bs < 0.

  14. Total sulfur determination in residues of crude oil distillation using FT-IR/ATR and variable selection methods

    NASA Astrophysics Data System (ADS)

    Müller, Aline Lima Hermes; Picoloto, Rochele Sogari; Mello, Paola de Azevedo; Ferrão, Marco Flores; dos Santos, Maria de Fátima Pereira; Guimarães, Regina Célia Lourenço; Müller, Edson Irineu; Flores, Erico Marlon Moraes

    2012-04-01

    Total sulfur concentration was determined in atmospheric residue (AR) and vacuum residue (VR) samples obtained from the petroleum distillation process by Fourier transform infrared spectroscopy with attenuated total reflectance (FT-IR/ATR) in association with chemometric methods. The calibration and prediction sets consisted of 40 and 20 samples, respectively. Calibration models were developed using two variable selection methods: interval partial least squares (iPLS) and synergy interval partial least squares (siPLS). Different treatments and pre-processing steps were also evaluated for model development. Pre-treatment based on multiplicative scatter correction (MSC) with mean-centered data was selected for model construction. The use of siPLS as the variable selection method provided a model with root mean square error of prediction (RMSEP) values significantly better than those obtained by the PLS model using all variables. The best model was obtained using the siPLS algorithm with the spectra divided into 20 intervals and a combination of 3 intervals (911-824, 823-736 and 737-650 cm-1). This model produced an RMSECV of 400 mg kg-1 S and an RMSEP of 420 mg kg-1 S, with a correlation coefficient of 0.990.

  15. Soil Moisture Active Passive Mission L4_SM Data Product Assessment (Version 2 Validated Release)

    NASA Technical Reports Server (NTRS)

    Reichle, Rolf Helmut; De Lannoy, Gabrielle J. M.; Liu, Qing; Ardizzone, Joseph V.; Chen, Fan; Colliander, Andreas; Conaty, Austin; Crow, Wade; Jackson, Thomas; Kimball, John; hide

    2016-01-01

    During the post-launch SMAP calibration and validation (Cal/Val) phase there are two objectives for each science data product team: 1) calibrate, verify, and improve the performance of the science algorithm, and 2) validate the accuracy of the science data product as specified in the science requirements and according to the Cal/Val schedule. This report provides an assessment of the SMAP Level 4 Surface and Root Zone Soil Moisture Passive (L4_SM) product specifically for the product's public Version 2 validated release scheduled for 29 April 2016. The assessment of the Version 2 L4_SM data product includes comparisons of SMAP L4_SM soil moisture estimates with in situ soil moisture observations from core validation sites and sparse networks. The assessment further includes a global evaluation of the internal diagnostics from the ensemble-based data assimilation system that is used to generate the L4_SM product. This evaluation focuses on the statistics of the observation-minus-forecast (O-F) residuals and the analysis increments. Together, the core validation site comparisons and the statistics of the assimilation diagnostics are considered primary validation methodologies for the L4_SM product. Comparisons against in situ measurements from regional-scale sparse networks are considered a secondary validation methodology because such in situ measurements are subject to up-scaling errors from the point-scale to the grid cell scale of the data product. Based on the limited set of core validation sites, the wide geographic range of the sparse network sites, and the global assessment of the assimilation diagnostics, the assessment presented here meets the criteria established by the Committee on Earth Observing Satellites for Stage 2 validation and supports the validated release of the data. An analysis of the time average surface and root zone soil moisture shows that the global pattern of arid and humid regions is captured by the L4_SM estimates. Results from the core validation site comparisons indicate that "Version 2" of the L4_SM data product meets the self-imposed L4_SM accuracy requirement, which is formulated in terms of the ubRMSE: the RMSE (Root Mean Square Error) after removal of the long-term mean difference. The overall ubRMSE of the 3-hourly L4_SM surface soil moisture at the 9 km scale is 0.035 cubic meters per cubic meter. The corresponding ubRMSE for L4_SM root zone soil moisture is 0.024 cubic meters per cubic meter. Both of these metrics are comfortably below the 0.04 cubic meters per cubic meter requirement. The L4_SM estimates are an improvement over estimates from a model-only SMAP Nature Run version 4 (NRv4), which demonstrates the beneficial impact of the SMAP brightness temperature data. L4_SM surface soil moisture estimates are consistently more skillful than NRv4 estimates, although not by a statistically significant margin. The lack of statistical significance is not surprising given the limited data record available to date. Root zone soil moisture estimates from L4_SM and NRv4 have similar skill. Results from comparisons of the L4_SM product to in situ measurements from nearly 400 sparse network sites corroborate the core validation site results. The instantaneous soil moisture and soil temperature analysis increments are within a reasonable range and result in spatially smooth soil moisture analyses.
The O-F residuals exhibit only small biases on the order of 1-3 degrees Kelvin between the (re-scaled) SMAP brightness temperature observations and the L4_SM model forecast, which indicates that the assimilation system is largely unbiased. The spatially averaged time series standard deviation of the O-F residuals is 5.9 degrees Kelvin, which reduces to 4.0 degrees Kelvin for the observation-minus-analysis (O-A) residuals, reflecting the impact of the SMAP observations on the L4_SM system. Averaged globally, the time series standard deviation of the normalized O-F residuals is close to unity, which would suggest that the magnitude of the modeled errors approximately reflects that of the actual errors. The assessment report also notes several limitations of the "Version 2" L4_SM data product and science algorithm calibration that will be addressed in future releases. Regionally, the time series standard deviation of the normalized O-F residuals deviates considerably from unity, which indicates that the L4_SM assimilation algorithm either over- or under-estimates the actual errors that are present in the system. Planned improvements include revised land model parameters, revised error parameters for the land model and the assimilated SMAP observations, and revised surface meteorological forcing data for the operational period and underlying climatological data. Moreover, a refined analysis of the impact of SMAP observations will be facilitated by the construction of additional variants of the model-only reference data. Nevertheless, the “Version 2” validated release of the L4_SM product is sufficiently mature and of adequate quality for distribution to and use by the larger science and application communities.
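
    A minimal sketch of the ubRMSE metric quoted above, i.e. the RMSE between product and in situ soil moisture after removing the long-term mean difference; the arrays are illustrative placeholders.

```python
import numpy as np

def ubrmse(product, in_situ):
    """Unbiased RMSE: remove the mean difference (bias), then take the RMSE."""
    bias = np.mean(product - in_situ)
    return np.sqrt(np.mean((product - in_situ - bias) ** 2))

product = np.array([0.26, 0.24, 0.30, 0.28, 0.22])   # m3/m3, placeholder
in_situ = np.array([0.23, 0.22, 0.27, 0.26, 0.20])   # m3/m3, placeholder
print(ubrmse(product, in_situ))
```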

  16. Superconducting quantum circuits at the surface code threshold for fault tolerance.

    PubMed

    Barends, R; Kelly, J; Megrant, A; Veitia, A; Sank, D; Jeffrey, E; White, T C; Mutus, J; Fowler, A G; Campbell, B; Chen, Y; Chen, Z; Chiaro, B; Dunsworth, A; Neill, C; O'Malley, P; Roushan, P; Vainsencher, A; Wenner, J; Korotkov, A N; Cleland, A N; Martinis, John M

    2014-04-24

    A quantum computer can solve hard problems, such as prime factoring, database searching and quantum simulation, at the cost of needing to protect fragile quantum states from error. Quantum error correction provides this protection by distributing a logical state among many physical quantum bits (qubits) by means of quantum entanglement. Superconductivity is a useful phenomenon in this regard, because it allows the construction of large quantum circuits and is compatible with microfabrication. For superconducting qubits, the surface code approach to quantum computing is a natural choice for error correction, because it uses only nearest-neighbour coupling and rapidly cycled entangling gates. The gate fidelity requirements are modest: the per-step fidelity threshold is only about 99 per cent. Here we demonstrate a universal set of logic gates in a superconducting multi-qubit processor, achieving an average single-qubit gate fidelity of 99.92 per cent and a two-qubit gate fidelity of up to 99.4 per cent. This places Josephson quantum computing at the fault-tolerance threshold for surface code error correction. Our quantum processor is a first step towards the surface code, using five qubits arranged in a linear array with nearest-neighbour coupling. As a further demonstration, we construct a five-qubit Greenberger-Horne-Zeilinger state using the complete circuit and full set of gates. The results demonstrate that Josephson quantum computing is a high-fidelity technology, with a clear path to scaling up to large-scale, fault-tolerant quantum circuits.

  17. On the statistical assessment of classifiers using DNA microarray data

    PubMed Central

    Ancona, N; Maglietta, R; Piepoli, A; D'Addabbo, A; Cotugno, R; Savino, M; Liuni, S; Carella, M; Pesole, G; Perri, F

    2006-01-01

    Background In this paper we present a method for the statistical assessment of cancer predictors which make use of gene expression profiles. The methodology is applied to a new data set of microarray gene expression data collected in Casa Sollievo della Sofferenza Hospital, Foggia – Italy. The data set is made up of normal (22) and tumor (25) specimens extracted from 25 patients affected by colon cancer. We propose to give answers to some questions which are relevant for the automatic diagnosis of cancer such as: Is the size of the available data set sufficient to build accurate classifiers? What is the statistical significance of the associated error rates? In what ways can accuracy be considered dependent on the adopted classification scheme? How many genes are correlated with the pathology and how many are sufficient for an accurate colon cancer classification? The method we propose answers these questions whilst avoiding the potential pitfalls hidden in the analysis and interpretation of microarray data. Results We estimate the generalization error, evaluated through the Leave-K-Out Cross Validation error, for three different classification schemes by varying the number of training examples and the number of the genes used. The statistical significance of the error rate is measured by using a permutation test. We provide a statistical analysis in terms of the frequencies of the genes involved in the classification. Using the whole set of genes, we found that the Weighted Voting Algorithm (WVA) classifier learns the distinction between normal and tumor specimens with 25 training examples, providing e = 21% (p = 0.045) as an error rate. This remains constant even when the number of examples increases. Moreover, Regularized Least Squares (RLS) and Support Vector Machines (SVM) classifiers can learn with only 15 training examples, with an error rate of e = 19% (p = 0.035) and e = 18% (p = 0.037) respectively. Moreover, the error rate decreases as the training set size increases, reaching its best performances with 35 training examples. In this case, RLS and SVM have error rates of e = 14% (p = 0.027) and e = 11% (p = 0.019). Concerning the number of genes, we found about 6000 genes (p < 0.05) correlated with the pathology, resulting from the signal-to-noise statistic. Moreover, the performances of the RLS and SVM classifiers do not change when 74% of the genes are used, and they degrade only to e = 16% (p < 0.05) when only 2 genes are employed. The biological relevance of a set of genes determined by our statistical analysis and the major roles they play in colorectal tumorigenesis is discussed. Conclusions The method proposed provides statistically significant answers to precise questions relevant for the diagnosis and prognosis of cancer. We found that, with as few as 15 examples, it is possible to train statistically significant classifiers for colon cancer diagnosis. As for the definition of the number of genes sufficient for a reliable classification of colon cancer, our results suggest that it depends on the accuracy required. PMID:16919171
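
    A minimal sketch of the permutation test used to attach a p-value to an error rate: labels are shuffled many times, the classifier is re-evaluated each time, and the p-value is the fraction of permutations that do at least as well as the observed error. The evaluator below is a trivial stand-in for the leave-K-out procedure, and all data are synthetic.

```python
import numpy as np

def permutation_p_value(evaluate_error, X, y, observed_error, n_perm=1000, seed=0):
    """P-value: fraction of label permutations with error <= the observed error."""
    rng = np.random.default_rng(seed)
    count = 0
    for _ in range(n_perm):
        y_perm = rng.permutation(y)
        if evaluate_error(X, y_perm) <= observed_error:
            count += 1
    return (count + 1) / (n_perm + 1)

def toy_eval(X, y):
    preds = (X[:, 0] > 0).astype(int)        # threshold on a single "gene"
    return np.mean(preds != y)

rng = np.random.default_rng(1)
X = rng.normal(size=(47, 2000))              # 47 specimens, 2000 genes (synthetic)
y = np.array([0] * 22 + [1] * 25)            # normal vs tumor labels
print(permutation_p_value(toy_eval, X, y, observed_error=toy_eval(X, y), n_perm=500))
```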

  18. Strain Gage Load Calibration of the Wing Interface Fittings for the Adaptive Compliant Trailing Edge Flap Flight Test

    NASA Technical Reports Server (NTRS)

    Miller, Eric J.; Holguin, Andrew C.; Cruz, Josue; Lokos, William A.

    2014-01-01

    The safety-of-flight parameters for the Adaptive Compliant Trailing Edge (ACTE) flap experiment require that flap-to-wing interface loads be sensed and monitored in real time to ensure that the structural load limits of the wing are not exceeded. This paper discusses the strain gage load calibration testing and load equation derivation methodology for the ACTE interface fittings. Both the left and right wing flap interfaces were monitored; each contained four uniquely designed and instrumented flap interface fittings. The interface hardware design and instrumentation layout are discussed. Twenty-one applied test load cases were developed using the predicted in-flight loads. Pre-test predictions of strain gage responses were produced using finite element method models of the interface fittings. Predicted and measured test strains are presented. A load testing rig and three hydraulic jacks were used to apply combinations of shear, bending, and axial loads to the interface fittings. Hardware deflections under load were measured using photogrammetry and transducers. Due to deflections in the interface fitting hardware and test rig, finite element model techniques were used to calculate the reaction loads throughout the applied load range, taking into account the elastically-deformed geometry. The primary load equations were selected based on multiple calibration metrics. An independent set of validation cases was used to validate each derived equation. The 2-sigma residual errors for the shear loads were less than eight percent of the full-scale calibration load; the 2-sigma residual errors for the bending moment loads were less than three percent of the full-scale calibration load. The derived load equations for shear, bending, and axial loads are presented, with the calculated errors for both the calibration cases and the independent validation load cases.
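
    A minimal sketch of deriving a load equation by linear least squares and reporting the 2-sigma residual error as a percentage of full-scale load; the gage responses, coefficients, and load cases below are synthetic placeholders, not the ACTE calibration data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cases, n_gages = 21, 4
gages = rng.normal(size=(n_cases, n_gages))                # gage readings, microstrain
true_coef = np.array([120.0, -45.0, 80.0, 10.0])           # made-up sensitivities
shear = gages @ true_coef + rng.normal(0.0, 25.0, n_cases) # applied shear, lbf

X = np.column_stack([gages, np.ones(n_cases)])             # gage terms + intercept
coef, *_ = np.linalg.lstsq(X, shear, rcond=None)           # load equation coefficients
residuals = shear - X @ coef
full_scale = np.max(np.abs(shear))
print("2-sigma residual error: %.1f%% of full scale"
      % (2 * np.std(residuals) / full_scale * 100))
```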

  19. High resolution mapping of the binding site on human IgG1 for Fc gamma RI, Fc gamma RII, Fc gamma RIII, and FcRn and design of IgG1 variants with improved binding to the Fc gamma R.

    PubMed

    Shields, R L; Namenuk, A K; Hong, K; Meng, Y G; Rae, J; Briggs, J; Xie, D; Lai, J; Stadlen, A; Li, B; Fox, J A; Presta, L G

    2001-03-02

    Immunoglobulin G (IgG) Fc receptors play a critical role in linking IgG antibody-mediated immune responses with cellular effector functions. A high resolution map of the binding site on human IgG1 for human Fc gamma RI, Fc gamma RIIA, Fc gamma RIIB, Fc gamma RIIIA, and FcRn receptors has been determined. A common set of IgG1 residues is involved in binding to all Fc gamma R; Fc gamma RII and Fc gamma RIII also utilize residues outside this common set. In addition to residues which, when altered, abrogated binding to one or more of the receptors, several residues were found that improved binding only to specific receptors or simultaneously improved binding to one type of receptor and reduced binding to another type. Select IgG1 variants with improved binding to Fc gamma RIIIA exhibited up to 100% enhancement in antibody-dependent cell cytotoxicity using human effector cells; these variants included changes at residues not found at the binding interface in the IgG/Fc gamma RIIIA co-crystal structure (Sondermann, P., Huber, R., Oosthuizen, V., and Jacob, U. (2000) Nature 406, 267-273). These engineered antibodies may have important implications for improving antibody therapeutic efficacy.

  20. Time Orientation and 10 Years Risk of Dementia in Elderly Adults: The Three-City Study.

    PubMed

    Dumurgier, Julien; Dartigues, Jean-François; Gabelle, Audrey; Paquet, Claire; Prevot, Magali; Hugon, Jacques; Tzourio, Christophe

    2016-07-01

    Time disorientation is commonly observed in dementia; however, very little is known about the pathological significance of minor time errors in community-dwelling populations. Our objective was to investigate the relationship between time orientation and risk of dementia in a population of older adults. The analyses rely on 8611 dementia-free subjects from the Three-City Study, France. Participants were followed up for 10 years for incident dementia. Time orientation was assessed by asking for the date, the day of the week, the month, the season and the year. At baseline, 905 subjects made at least one error in time orientation. During 57,073 person-years of follow-up, 827 participants developed dementia. After controlling for age, gender and education level, subjects with one error in time had a greater risk of dementia (hazard ratio [HR] 1.44 [1.18-1.77]), while those with at least 2 errors had a more than three-fold increased risk (HR 3.10 [1.98-4.83]). This association was particularly marked for the diagnosis of probable Alzheimer's disease. Time disorientation was associated with an increased risk of dementia in a large population of cognitively normal older people followed for up to 10 years, and should not be underestimated in clinical settings.

  1. Renormalization group invariance and optimal QCD renormalization scale-setting: a key issues review.

    PubMed

    Wu, Xing-Gang; Ma, Yang; Wang, Sheng-Quan; Fu, Hai-Bing; Ma, Hong-Hao; Brodsky, Stanley J; Mojaza, Matin

    2015-12-01

    A valid prediction for a physical observable from quantum field theory should be independent of the choice of renormalization scheme--this is the primary requirement of renormalization group invariance (RGI). Satisfying scheme invariance is a challenging problem for perturbative QCD (pQCD), since a truncated perturbation series does not automatically satisfy the requirements of the renormalization group. In a previous review, we provided a general introduction to the various scale setting approaches suggested in the literature. As a step forward, in the present review, we present a discussion in depth of two well-established scale-setting methods based on RGI. One is the 'principle of maximum conformality' (PMC) in which the terms associated with the β-function are absorbed into the scale of the running coupling at each perturbative order; its predictions are scheme and scale independent at every finite order. The other approach is the 'principle of minimum sensitivity' (PMS), which is based on local RGI; the PMS approach determines the optimal renormalization scale by requiring the slope of the approximant of an observable to vanish. In this paper, we present a detailed comparison of the PMC and PMS procedures by analyzing two physical observables R(e+e-) and Γ(H→bb̄) up to four-loop order in pQCD. At the four-loop level, the PMC and PMS predictions for both observables agree within small errors with those of conventional scale setting assuming a physically-motivated scale, and each prediction shows small scale dependences. However, the convergence of the pQCD series at high orders behaves quite differently: the PMC displays the best pQCD convergence since it eliminates divergent renormalon terms; in contrast, the convergence of the PMS prediction is questionable, often even worse than the conventional prediction based on an arbitrary guess for the renormalization scale. PMC predictions also have the property that any residual dependence on the choice of initial scale is highly suppressed even for low-order predictions. Thus the PMC, based on the standard RGI, has a rigorous foundation; it eliminates an unnecessary systematic error for high precision pQCD predictions and can be widely applied to virtually all high-energy hadronic processes, including multi-scale problems.

  2. Renormalization group invariance and optimal QCD renormalization scale-setting: a key issues review

    NASA Astrophysics Data System (ADS)

    Wu, Xing-Gang; Ma, Yang; Wang, Sheng-Quan; Fu, Hai-Bing; Ma, Hong-Hao; Brodsky, Stanley J.; Mojaza, Matin

    2015-12-01

    A valid prediction for a physical observable from quantum field theory should be independent of the choice of renormalization scheme—this is the primary requirement of renormalization group invariance (RGI). Satisfying scheme invariance is a challenging problem for perturbative QCD (pQCD), since a truncated perturbation series does not automatically satisfy the requirements of the renormalization group. In a previous review, we provided a general introduction to the various scale setting approaches suggested in the literature. As a step forward, in the present review, we present a discussion in depth of two well-established scale-setting methods based on RGI. One is the ‘principle of maximum conformality’ (PMC) in which the terms associated with the β-function are absorbed into the scale of the running coupling at each perturbative order; its predictions are scheme and scale independent at every finite order. The other approach is the ‘principle of minimum sensitivity’ (PMS), which is based on local RGI; the PMS approach determines the optimal renormalization scale by requiring the slope of the approximant of an observable to vanish. In this paper, we present a detailed comparison of the PMC and PMS procedures by analyzing two physical observables R(e+e-) and Γ(H→bb̄) up to four-loop order in pQCD. At the four-loop level, the PMC and PMS predictions for both observables agree within small errors with those of conventional scale setting assuming a physically-motivated scale, and each prediction shows small scale dependences. However, the convergence of the pQCD series at high orders behaves quite differently: the PMC displays the best pQCD convergence since it eliminates divergent renormalon terms; in contrast, the convergence of the PMS prediction is questionable, often even worse than the conventional prediction based on an arbitrary guess for the renormalization scale. PMC predictions also have the property that any residual dependence on the choice of initial scale is highly suppressed even for low-order predictions. Thus the PMC, based on the standard RGI, has a rigorous foundation; it eliminates an unnecessary systematic error for high precision pQCD predictions and can be widely applied to virtually all high-energy hadronic processes, including multi-scale problems.

  3. Verification of ARMA identification for modelling temporal correlation of GPS observations using the toolbox ARMASA

    NASA Astrophysics Data System (ADS)

    Luo, Xiaoguang; Mayer, Michael; Heck, Bernhard

    2010-05-01

    One essential deficiency of the stochastic model used in many GNSS (Global Navigation Satellite Systems) software products consists in neglecting temporal correlation of GNSS observations. Analysing appropriately detrended time series of observation residuals resulting from GPS (Global Positioning System) data processing, the temporal correlation behaviour of GPS observations can be sufficiently described by means of so-called autoregressive moving average (ARMA) processes. Using the toolbox ARMASA, which is available free of charge in MATLAB® Central (open exchange platform for the MATLAB® and SIMULINK® user community), a well-fitting time series model can be identified automatically in three steps. Firstly, AR, MA, and ARMA models are computed up to some user-specified maximum order. Subsequently, for each model type, the best-fitting model is selected using the combined information criterion (for AR processes) or the generalised information criterion (for MA and ARMA processes). The final model identification among the best-fitting AR, MA, and ARMA models is performed based on the minimum prediction error characterising the discrepancies between the given data and the fitted model. The ARMA coefficients are computed using Burg's maximum entropy algorithm (for AR processes) and Durbin's first (for MA processes) and second (for ARMA processes) methods, respectively. This paper verifies the performance of the automated ARMA identification using the toolbox ARMASA. For this purpose, a representative data base is generated by means of ARMA simulation with respect to sample size, correlation level, and model complexity. The model error, defined as a transform of the prediction error, is used as a measure of the deviation between the true and the estimated model. The results of the study show that the recognition rates of the underlying true processes increase with increasing sample size and decrease with rising model complexity. Considering large sample sizes, the true underlying processes can be correctly recognised for nearly 80% of the analysed data sets. Additionally, the model errors of first-order AR and MA processes converge clearly more rapidly to the corresponding asymptotic values than those of high-order ARMA processes.
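
    The order-selection loop at the heart of this procedure can be approximated in a few lines. The sketch below is a loose Python analogue using statsmodels rather than the MATLAB ARMASA toolbox that the paper evaluates; the maximum orders and the use of AIC as the selection criterion are assumptions made for illustration.

        # Loose analogue of automated ARMA order selection on a detrended residual series.
        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        def select_arma(series, max_ar=3, max_ma=3):
            # Fit candidate ARMA(p, q) models and keep the one with the lowest AIC.
            best = None
            for p in range(max_ar + 1):
                for q in range(max_ma + 1):
                    if p == 0 and q == 0:
                        continue
                    try:
                        fit = ARIMA(series, order=(p, 0, q)).fit()
                    except Exception:
                        continue  # some candidate orders may fail to converge
                    if best is None or fit.aic < best[0]:
                        best = (fit.aic, (p, q), fit)
            return best  # (aic, (p, q), fitted model)

        # Example on a simulated AR(1) residual series.
        rng = np.random.default_rng(1)
        x = np.zeros(1000)
        for t in range(1, 1000):
            x[t] = 0.6 * x[t - 1] + rng.standard_normal()
        aic, order, model = select_arma(x)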

  4. Pitfalls of insulin pump clocks: technical glitches that may potentially affect medical care in patients with diabetes.

    PubMed

    Aldasouqi, Saleh A; Reed, Amy J

    2014-11-01

    The objective was to raise awareness about the importance of ensuring that insulin pumps' internal clocks are set correctly at all times. This is a very important safety issue because no commercially available insulin pumps are GPS-enabled (though this is controversial), nor are they equipped with automatically adjusting internal clocks. Special attention is paid to how basal and bolus dose errors can be introduced by daylight saving time changes, travel across time zones, and am-pm clock errors. Correct setting of the insulin pump's internal clock is crucial for appropriate insulin delivery. A comprehensive literature review is provided, as are illustrative cases. Incorrect settings can potentially result in incorrect insulin delivery, with potentially harmful consequences if too much or too little insulin is delivered. Daylight saving time changes may not significantly affect basal insulin delivery, given the triviality of the time difference. However, bolus insulin doses can be dramatically affected. Such problems may occur when pump wearers have large variations in their insulin-to-carbohydrate ratio, especially if they forget to change their pump clock in the spring. More worrisome than the daylight saving time change is the am-pm clock setting. If this setting is incorrect, both basal rates and bolus doses will be affected. Appropriate insulin delivery through insulin pumps requires correct correlation between dose settings and internal clock time settings. Because insulin pumps are not GPS-enabled or automatically time-adjusting, extra caution should be practiced by patients to ensure correct time settings at all times. Clinicians and diabetes educators should verify the date/time of insulin pumps during patients' visits, and should remind their patients to always verify these settings. © 2014 Diabetes Technology Society.

  5. Pitfalls of Insulin Pump Clocks

    PubMed Central

    Reed, Amy J.

    2014-01-01

    The objective was to raise awareness about the importance of ensuring that insulin pumps’ internal clocks are set correctly at all times. This is a very important safety issue because no commercially available insulin pumps are GPS-enabled (though this is controversial), nor are they equipped with automatically adjusting internal clocks. Special attention is paid to how basal and bolus dose errors can be introduced by daylight saving time changes, travel across time zones, and am-pm clock errors. Correct setting of the insulin pump’s internal clock is crucial for appropriate insulin delivery. A comprehensive literature review is provided, as are illustrative cases. Incorrect settings can potentially result in incorrect insulin delivery, with potentially harmful consequences if too much or too little insulin is delivered. Daylight saving time changes may not significantly affect basal insulin delivery, given the triviality of the time difference. However, bolus insulin doses can be dramatically affected. Such problems may occur when pump wearers have large variations in their insulin-to-carbohydrate ratio, especially if they forget to change their pump clock in the spring. More worrisome than the daylight saving time change is the am-pm clock setting. If this setting is incorrect, both basal rates and bolus doses will be affected. Appropriate insulin delivery through insulin pumps requires correct correlation between dose settings and internal clock time settings. Because insulin pumps are not GPS-enabled or automatically time-adjusting, extra caution should be practiced by patients to ensure correct time settings at all times. Clinicians and diabetes educators should verify the date/time of insulin pumps during patients’ visits, and should remind their patients to always verify these settings. PMID:25355713

  6. Exploring Situational Awareness in Diagnostic Errors in Primary Care

    PubMed Central

    Singh, Hardeep; Giardina, Traber Davis; Petersen, Laura A.; Smith, Michael; Wilson, Lindsey; Dismukes, Key; Bhagwath, Gayathri; Thomas, Eric J.

    2013-01-01

    Objective Diagnostic errors in primary care are harmful but poorly studied. To facilitate understanding of diagnostic errors in real-world primary care settings using electronic health records (EHRs), this study explored the use of the Situational Awareness (SA) framework from aviation human factors research. Methods A mixed-methods study was conducted involving reviews of EHR data followed by semi-structured interviews of selected providers from two institutions in the US. The study population included 380 consecutive patients with colorectal and lung cancers diagnosed between February 2008 and January 2009. Using a pre-tested data collection instrument, trained physicians identified diagnostic errors, defined as lack of timely action on one or more established indications for diagnostic work-up for lung and colorectal cancers. Twenty-six providers involved in cases with and without errors were interviewed. Interviews probed for providers' lack of SA and how this may have influenced the diagnostic process. Results Of 254 cases meeting inclusion criteria, errors were found in 30 (32.6%) of 92 lung cancer cases and 56 (33.5%) of 167 colorectal cancer cases. Analysis of interviews related to error cases revealed evidence of lack of one of four levels of SA applicable to primary care practice: information perception, information comprehension, forecasting future events, and choosing appropriate action based on the first three levels. In cases without error, the application of the SA framework provided insight into processes involved in attention management. Conclusions A framework of SA can help analyze and understand diagnostic errors in primary care settings that use EHRs. PMID:21890757

  7. National Aeronautics and Space Administration "threat and error" model applied to pediatric cardiac surgery: error cycles precede ∼85% of patient deaths.

    PubMed

    Hickey, Edward J; Nosikova, Yaroslavna; Pham-Hung, Eric; Gritti, Michael; Schwartz, Steven; Caldarone, Christopher A; Redington, Andrew; Van Arsdell, Glen S

    2015-02-01

    We hypothesized that the National Aeronautics and Space Administration "threat and error" model (which is derived from analyzing >30,000 commercial flights, and explains >90% of crashes) is directly applicable to pediatric cardiac surgery. We implemented a unit-wide performance initiative, whereby every surgical admission constitutes a "flight" and is tracked in real time, with the aim of identifying errors. The first 500 consecutive patients (524 flights) were analyzed, with an emphasis on the relationship between error cycles and permanent harmful outcomes. Among 524 patient flights (risk adjustment for congenital heart surgery category: 1-6; median: 2) 68 (13%) involved residual hemodynamic lesions, 13 (2.5%) permanent end-organ injuries, and 7 deaths (1.3%). Preoperatively, 763 threats were identified in 379 (72%) flights. Only 51% of patient flights (267) were error free. In the remaining 257 flights, 430 errors occurred, most commonly related to proficiency (280; 65%) or judgment (69, 16%). In most flights with errors (173 of 257; 67%), an unintended clinical state resulted, ie, the error was consequential. In 60% of consequential errors (n = 110; 21% of total), subsequent cycles of additional error/unintended states occurred. Cycles, particularly those containing multiple errors, were very significantly associated with permanent harmful end-states, including residual hemodynamic lesions (P < .0001), end-organ injury (P < .0001), and death (P < .0001). Deaths were almost always preceded by cycles (6 of 7; P < .0001). Human error, if not mitigated, often leads to cycles of error and unintended patient states, which are dangerous and precede the majority of harmful outcomes. Efforts to manage threats and error cycles (through crew resource management techniques) are likely to yield large increases in patient safety. Copyright © 2015. Published by Elsevier Inc.

  8. Discrete-Time Zhang Neural Network for Online Time-Varying Nonlinear Optimization With Application to Manipulator Motion Generation.

    PubMed

    Jin, Long; Zhang, Yunong

    2015-07-01

    In this brief, a discrete-time Zhang neural network (DTZNN) model is first proposed, developed, and investigated for online time-varying nonlinear optimization (OTVNO). Then, Newton iteration is shown to be derived from the proposed DTZNN model. In addition, to eliminate the explicit matrix-inversion operation, the quasi-Newton Broyden-Fletcher-Goldfarb-Shanno (BFGS) method is introduced, which can effectively approximate the inverse of the Hessian matrix. A DTZNN-BFGS model, the combination of the DTZNN model and the quasi-Newton BFGS method, is thus proposed and investigated for OTVNO. Theoretical analyses show that, with step-size h=1 and/or with zero initial error, the maximal residual error of the DTZNN model has an O(τ²) pattern, whereas the maximal residual error of the Newton iteration has an O(τ) pattern, with τ denoting the sampling gap. Moreover, when h ≠ 1 and h ∈ (0,2), the maximal steady-state residual error of the DTZNN model has an O(τ²) pattern. Finally, an illustrative numerical experiment and an application example to manipulator motion generation are provided and analyzed to substantiate the efficacy of the proposed DTZNN and DTZNN-BFGS models for OTVNO.
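
    As a rough illustration of the kind of sampled, Newton-type update being compared here, the following Python sketch tracks the minimizer of a toy time-varying quadratic cost at a fixed sampling gap τ. It is a generic discretization, not the authors' DTZNN or DTZNN-BFGS formulas; the cost function, step size and sampling gap are assumptions.

        # Generic discrete-time Newton-type tracking of a time-varying minimizer.
        import numpy as np

        def grad(x, t):
            # Gradient of a hypothetical time-varying cost f(x, t) = 0.5 * ||x - c(t)||^2.
            return x - np.array([np.sin(t), np.cos(t)])

        def hessian(x, t):
            # Hessian of the quadratic example above (identity matrix).
            return np.eye(2)

        def track_minimizer(x0, tau=0.01, h=1.0, steps=1000):
            # Iterate x_{k+1} = x_k - h * H^{-1} g at sampling gap tau.
            x = np.array(x0, dtype=float)
            residual = []
            for k in range(steps):
                t = k * tau
                x = x - h * np.linalg.solve(hessian(x, t), grad(x, t))
                residual.append(np.linalg.norm(grad(x, (k + 1) * tau)))  # error at next sample
            return x, residual

        x_final, res = track_minimizer([1.0, -1.0])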

  9. Estimating Root Mean Square Errors in Remotely Sensed Soil Moisture over Continental Scale Domains

    NASA Technical Reports Server (NTRS)

    Draper, Clara S.; Reichle, Rolf; de Jeu, Richard; Naeimi, Vahid; Parinussa, Robert; Wagner, Wolfgang

    2013-01-01

    Root Mean Square Errors (RMSE) in the soil moisture anomaly time series obtained from the Advanced Scatterometer (ASCAT) and the Advanced Microwave Scanning Radiometer (AMSR-E; using the Land Parameter Retrieval Model) are estimated over a continental scale domain centered on North America, using two methods: triple colocation (RMSE_TC) and error propagation through the soil moisture retrieval models (RMSE_EP). In the absence of an established consensus for the climatology of soil moisture over large domains, presenting a RMSE in soil moisture units requires that it be specified relative to a selected reference data set. To avoid the complications that arise from the use of a reference, the RMSE is presented as a fraction of the time series standard deviation (fRMSE). For both sensors, the fRMSE_TC and fRMSE_EP show similar spatial patterns of relatively high/low errors, and the mean fRMSE for each land cover class is consistent with expectations. Triple colocation is also shown to be surprisingly robust to representativity differences between the soil moisture data sets used, and it is believed to accurately estimate the fRMSE in the remotely sensed soil moisture anomaly time series. Comparing the ASCAT and AMSR-E fRMSE_TC shows that both data sets have very similar accuracy across a range of land cover classes, although the AMSR-E accuracy is more directly related to vegetation cover. In general, both data sets have good skill up to moderate vegetation conditions.
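
    In covariance notation, triple colocation estimates the error variance of each product from the covariances of the three collocated series; dividing by each series' own standard deviation then gives a fractional RMSE. The Python sketch below is the textbook covariance-notation estimator under the usual zero error cross-correlation assumptions, not necessarily the exact formulation used in the study.

        # Covariance-notation triple colocation for three collocated anomaly series.
        import numpy as np

        def triple_collocation_frmse(x, y, z):
            # Returns fRMSE estimates for x, y and z, assuming independent errors.
            c = np.cov(np.vstack([x, y, z]))
            err_var = np.array([
                c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2],
                c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2],
                c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1],
            ])
            err_var = np.clip(err_var, 0.0, None)  # guard against sampling noise
            return np.sqrt(err_var) / np.sqrt(np.diag(c))

        # Synthetic check: three products sharing a common truth plus independent noise.
        rng = np.random.default_rng(0)
        truth = rng.normal(size=5000)
        x = truth + 0.3 * rng.normal(size=5000)
        y = truth + 0.4 * rng.normal(size=5000)
        z = truth + 0.5 * rng.normal(size=5000)
        frmse = triple_collocation_frmse(x, y, z)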

  10. LANDSAT 4 band 6 data evaluation

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Comparison of underflight data with satellite estimates of temperature revealed significant gain calibration errors. The source of the LANDSAT 5 band 6 error and its reproducibility are not yet adequately defined. The error can be accounted for using underflight or ground truth data. When underflight data are used to correct the satellite data, the residual error for the scene studied was 1.3 K when the predicted temperatures were compared to measured surface temperatures.

  11. Accelerating molecular property calculations with nonorthonormal Krylov space methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Furche, Filipp; Krull, Brandon T.; Nguyen, Brian D.

    Here, we formulate Krylov space methods for large eigenvalue problems and linear equation systems that take advantage of decreasing residual norms to reduce the cost of matrix-vector multiplication. The residuals are used as subspace basis without prior orthonormalization, which leads to generalized eigenvalue problems or linear equation systems on the Krylov space. These nonorthonormal Krylov space (nKs) algorithms are favorable for large matrices with irregular sparsity patterns whose elements are computed on the fly, because fewer operations are necessary as the residual norm decreases as compared to the conventional method, while errors in the desired eigenpairs and solution vectors remain small. We consider real symmetric and symplectic eigenvalue problems as well as linear equation systems and Sylvester equations as they appear in configuration interaction and response theory. The nKs method can be implemented in existing electronic structure codes with minor modifications and yields speed-ups of 1.2-1.8 in typical time-dependent Hartree-Fock and density functional applications without accuracy loss. The algorithm can compute entire linear subspaces simultaneously which benefits electronic spectra and force constant calculations requiring many eigenpairs or solution vectors. The nKs approach is related to difference density methods in electronic ground state calculations, and particularly efficient for integral direct computations of exchange-type contractions. By combination with resolution-of-the-identity methods for Coulomb contractions, three- to fivefold speed-ups of hybrid time-dependent density functional excited state and response calculations are achieved.

  12. Accelerating molecular property calculations with nonorthonormal Krylov space methods

    DOE PAGES

    Furche, Filipp; Krull, Brandon T.; Nguyen, Brian D.; ...

    2016-05-03

    Here, we formulate Krylov space methods for large eigenvalue problems and linear equation systems that take advantage of decreasing residual norms to reduce the cost of matrix-vector multiplication. The residuals are used as subspace basis without prior orthonormalization, which leads to generalized eigenvalue problems or linear equation systems on the Krylov space. These nonorthonormal Krylov space (nKs) algorithms are favorable for large matrices with irregular sparsity patterns whose elements are computed on the fly, because fewer operations are necessary as the residual norm decreases as compared to the conventional method, while errors in the desired eigenpairs and solution vectors remain small. We consider real symmetric and symplectic eigenvalue problems as well as linear equation systems and Sylvester equations as they appear in configuration interaction and response theory. The nKs method can be implemented in existing electronic structure codes with minor modifications and yields speed-ups of 1.2-1.8 in typical time-dependent Hartree-Fock and density functional applications without accuracy loss. The algorithm can compute entire linear subspaces simultaneously which benefits electronic spectra and force constant calculations requiring many eigenpairs or solution vectors. The nKs approach is related to difference density methods in electronic ground state calculations, and particularly efficient for integral direct computations of exchange-type contractions. By combination with resolution-of-the-identity methods for Coulomb contractions, three- to fivefold speed-ups of hybrid time-dependent density functional excited state and response calculations are achieved.

  13. Quasi-Likelihood Techniques in a Logistic Regression Equation for Identifying Simulium damnosum s.l. Larval Habitats Intra-cluster Covariates in Togo.

    PubMed

    Jacob, Benjamin G; Novak, Robert J; Toe, Laurent; Sanfo, Moussa S; Afriyie, Abena N; Ibrahim, Mohammed A; Griffith, Daniel A; Unnasch, Thomas R

    2012-01-01

    The standard methods for regression analyses of clustered riverine larval habitat data of Simulium damnosum s.l., a major black-fly vector of onchocerciasis, postulate models relating observational ecological-sampled parameter estimators to prolific habitats without accounting for residual intra-cluster error correlation effects. Generally, this correlation comes from two sources: (1) the design of the random effects and their assumed covariance from the multiple levels within the regression model; and (2) the correlation structure of the residuals. Unfortunately, inconspicuous errors in residual intra-cluster correlation estimates can overstate precision in forecasted S. damnosum s.l. riverine larval habitat explanatory attributes regardless of how they are treated (e.g., independent, autoregressive, Toeplitz, etc.). In this research, the geographical locations of multiple riverine-based S. damnosum s.l. larval ecosystem habitats sampled from 2 pre-established epidemiological sites in Togo were identified and recorded from July 2009 to June 2010. Initially, the data were aggregated in PROC GENMOD. An agglomerative hierarchical residual cluster-based analysis was then performed. The sampled clustered study site data were then analyzed for statistical correlations using Monthly Biting Rates (MBR). Euclidean distance measurements and terrain-related geomorphological statistics were then generated in ArcGIS. A digital overlay was then performed, also in ArcGIS, using the georeferenced ground coordinates of high and low density clusters stratified by Annual Biting Rates (ABR). These data were overlain onto multitemporal sub-meter pixel resolution satellite data (i.e., QuickBird 0.61 m wavebands). Orthogonal spatial filter eigenvectors were then generated in SAS/GIS. Univariate and non-linear regression-based models (i.e., Logistic, Poisson and Negative Binomial) were also employed to determine probability distributions and to identify statistically significant parameter estimators from the sampled data. Thereafter, Durbin-Watson test statistics were used to test the null hypothesis that the regression residuals were not autocorrelated against the alternative that the residuals followed an autoregressive process in AUTOREG. Bayesian uncertainty matrices were also constructed employing normal priors for each of the sampled estimators in PROC MCMC. The residuals revealed both spatially structured and unstructured error effects in the high and low ABR-stratified clusters. The analyses also revealed that the estimators, levels of turbidity and presence of rocks, were statistically significant for the high-ABR-stratified clusters, while the estimators, distance between habitats and floating vegetation, were important for the low-ABR-stratified cluster. Varying and constant coefficient regression models, ABR-stratified GIS-generated clusters, sub-meter resolution satellite imagery, a robust residual intra-cluster diagnostic test, MBR-based histograms, eigendecomposition spatial filter algorithms and Bayesian matrices can enable accurate autoregressive estimation of latent uncertainty effects and other residual error probabilities (i.e., heteroskedasticity) for testing correlations between georeferenced S. damnosum s.l. riverine larval habitat estimators.
The asymptotic distribution of the resulting residual adjusted intra-cluster predictor error autocovariate coefficients can thereafter be established while estimates of the asymptotic variance can lead to the construction of approximate confidence intervals for accurately targeting productive S. damnosum s.l habitats based on spatiotemporal field-sampled count data.
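
    As a small illustration of the residual-autocorrelation check mentioned above, the Python sketch below fits an ordinary least-squares model and computes the Durbin-Watson statistic on its residuals (values near 2 indicate no first-order autocorrelation). The covariates and response are purely hypothetical stand-ins; the study itself works in SAS (AUTOREG, PROC GENMOD, PROC MCMC).

        # Durbin-Watson check of regression residuals for first-order autocorrelation.
        import numpy as np
        import statsmodels.api as sm
        from statsmodels.stats.stattools import durbin_watson

        rng = np.random.default_rng(0)
        X = sm.add_constant(rng.normal(size=(200, 2)))  # hypothetical habitat covariates
        beta = np.array([1.0, 0.5, -0.3])
        y = X @ beta + rng.normal(size=200)             # hypothetical response variable
        residuals = sm.OLS(y, X).fit().resid
        dw = durbin_watson(residuals)                   # close to 2 here, since errors are iid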

  14. Migration velocity analysis using residual diffraction moveout: a real-data example

    NASA Astrophysics Data System (ADS)

    Gonzalez, Jaime A. C.; de Figueiredo, José J. S.; Coimbra, Tiago A.; Schleicher, Jörg; Novais, Amélia

    2016-08-01

    Unfocused seismic diffraction events carry direct information about errors in the migration-velocity model. The residual-diffraction-moveout (RDM) migration-velocity-analysis (MVA) method is a recent technique that extracts this information by means of adjusting ellipses or hyperbolas to uncollapsed migrated diffractions. In this paper, we apply this method, which has been tested so far only on synthetic data, to a real data set from the Viking Graben. After application of a plane-wave-destruction (PWD) filter to attenuate the reflected energy, the diffractions in the real data become interpretable and can be used for the RDM method. Our analysis demonstrates that the reflections need not be completely removed for this purpose. Beyond the need to identify and select diffraction events in post-stack migrated sections in the depth domain, the method has a very low computational cost and processing time. To reach an acceptable velocity model of quality comparable to one obtained with common-midpoint (CMP) processing, only two iterations were necessary.

  15. Measurements of the cosmic background radiation

    NASA Technical Reports Server (NTRS)

    Lubin, P.; Villela, T.

    1987-01-01

    Maps of the large scale structure (theta is greater than 6 deg) of the cosmic background radiation covering 90 percent of the sky are now available. The data show a very strong 50-100 sigma (statistical error) dipole component, interpreted as being due to our motion, with a direction of alpha = 11.5 ± 0.15 hours, sigma = -5.6 ± 2.0 deg. The inferred direction of the velocity of our galaxy relative to the cosmic background radiation is alpha = 10.6 ± 0.3 hours, sigma = -2.3 ± 5 deg. This is 44 deg from the center of the Virgo cluster. After removing the dipole component, the data show a galactic signature but no apparent residual structure. An autocorrelation of the residual data, after subtraction of the galactic component from the combined Berkeley (3 mm) and Princeton (12 mm) data sets, shows no apparent structure from 10 to 180 deg, with an rms of 0.01 mK². At the 90 percent confidence level, a limit of 0.00007 is placed on a quadrupole component.

  16. Best quadrature formula on Sobolev class with Chebyshev weight

    NASA Astrophysics Data System (ADS)

    Xie, Congcong

    2008-05-01

    Using the best interpolation function based on given function information, we present a best quadrature rule for functions on the Sobolev class KW^r[-1,1] with Chebyshev weight. The given function information means that the values of a function f ∈ KW^r[-1,1] and its derivatives up to order r-1 at a set of nodes x are given. Error bounds are obtained, and the method is illustrated by some examples.

  17. Residual activity evaluation: a benchmark between ANITA, FISPACT, FLUKA and PHITS codes

    NASA Astrophysics Data System (ADS)

    Firpo, Gabriele; Viberti, Carlo Maria; Ferrari, Anna; Frisoni, Manuela

    2017-09-01

    The activity of residual nuclides dictates the radiation fields during periodic inspections/repairs (maintenance periods) and dismantling operations (decommissioning phase) of accelerator facilities (i.e., medical, industrial, research) and nuclear reactors. Therefore, the correct prediction of the material activation allows for a more accurate planning of the activities, in line with the ALARA (As Low As Reasonably Achievable) principles. The scope of the present work is to show the results of a comparison of the residual total specific activity at a set of cooling times (from zero up to 10 years after irradiation) as obtained by two analytical (FISPACT and ANITA) and two Monte Carlo (FLUKA and PHITS) codes, making use of their default nuclear data libraries. A set of 40 irradiation scenarios is considered, i.e., neutrons and protons of different energies, ranging from zero to many hundreds of MeV, impinging on pure elements or on materials of standard composition typically used in industrial applications (namely, AISI SS316 and Portland concrete). In some cases, experimental results were also available for a more thorough benchmark.

  18. Measurement properties and usability of non-contact scanners for measuring transtibial residual limb volume.

    PubMed

    Kofman, Rianne; Beekman, Anna M; Emmelot, Cornelis H; Geertzen, Jan H B; Dijkstra, Pieter U

    2018-06-01

    Non-contact scanners may have potential for measurement of residual limb volume. Different non-contact scanners have been introduced during the last decades. Reliability and usability (practicality and user friendliness) should be assessed before introducing these systems in clinical practice. The aim of this study was to analyze the measurement properties and usability of four non-contact scanners (TT Design, Omega Scanner, BioSculptor Bioscanner, and Rodin4D Scanner). Quasi-experimental design. Nine (geometric and residual limb) models were measured on two occasions, each consisting of two sessions, giving four sessions in total. In each session, four observers used the four systems for volume measurement. The mean for each model, repeatability coefficients for each system, variance components, and the two-way interactions of the measurement conditions were calculated. User satisfaction was evaluated with the Post-Study System Usability Questionnaire. Systematic differences between the systems were found in volume measurements. Most of the variance was explained by the model (97%), while error variance was 3%. Measurement system and the interaction between system and model explained 44% of the error variance. The repeatability coefficients of the systems ranged from 0.101 L (Omega Scanner) to 0.131 L (Rodin4D). Differences in Post-Study System Usability Questionnaire scores between the systems were small and not significant. The systems were reliable in determining residual limb volume. Measurement systems and the interaction between system and residual limb model explained most of the error variance. The differences in repeatability coefficient and usability between the four CAD/CAM systems were small. Clinical relevance If accurate measurements of residual limb volume are required (as in research settings), modern non-contact scanners should be taken into consideration.
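
    For reference, a repeatability coefficient of this kind is commonly computed as 1.96·√2 times the pooled within-model standard deviation (about 2.77·Sw). The Python sketch below illustrates that calculation on hypothetical repeated volume measurements; the paper's exact variance-component model may differ.

        # Repeatability coefficient from repeated measurements of the same models.
        import numpy as np

        def repeatability_coefficient(measurements):
            # measurements: array of shape (n_models, n_repeats), volumes in litres.
            m = np.asarray(measurements, dtype=float)
            within_var = m.var(axis=1, ddof=1).mean()  # pooled within-model variance
            return 1.96 * np.sqrt(2.0 * within_var)

        # Hypothetical example: 9 models, 4 repeated scans each.
        rng = np.random.default_rng(3)
        true_volumes = rng.uniform(1.5, 4.0, size=9)
        repeats = true_volumes[:, None] + rng.normal(scale=0.04, size=(9, 4))
        rc = repeatability_coefficient(repeats)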

  19. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--QA ANALYTICAL RESULTS FOR METALS IN BLANK SAMPLES

    EPA Science Inventory

    The Metals in Blank Samples data set contains the analytical results of measurements of up to 27 metals in 52 blank samples. Measurements were made in blank samples of dust, indoor air, food, water, and dermal wipe residue. Blank samples were used to assess the potential for sa...

  20. Cognitive Factors and Residual Speech Errors: Basic Science, Translational Research, and Some Clinical Frameworks.

    PubMed

    Eaton, Catherine Torrington

    2015-11-01

    This article explores the theoretical and empirical relationships between cognitive factors and residual speech errors (RSEs). Definitions of relevant cognitive domains are provided, as well as examples of formal and informal tasks that may be appropriate in assessment. Although studies to date have been limited in number and scope, basic research suggests that cognitive flexibility, short- and long-term memory, and self-monitoring may be areas of weakness in this population. Preliminary evidence has not supported a relationship between inhibitory control, attention, and RSEs; however, further studies that control variables such as language ability and temperament are warranted. Previous translational research has examined the effects of self-monitoring training on residual speech errors. Although results have been mixed, some findings suggest that children with RSEs may benefit from the inclusion of this training. The article closes with a discussion of clinical frameworks that target cognitive skills, including self-monitoring and attention, as a means of facilitating speech sound change.

  1. A hybrid solution using computational prediction and measured data to accurately determine process corrections with reduced overlay sampling

    NASA Astrophysics Data System (ADS)

    Noyes, Ben F.; Mokaberi, Babak; Mandoy, Ram; Pate, Alex; Huijgen, Ralph; McBurney, Mike; Chen, Owen

    2017-03-01

    Reducing overlay error via an accurate APC feedback system is one of the main challenges in high volume production of the current and future nodes in the semiconductor industry. The overlay feedback system directly affects the number of dies meeting overlay specification and the number of layers requiring dedicated exposure tools through the fabrication flow. Increasing the former number and reducing the latter number is beneficial for the overall efficiency and yield of the fabrication process. An overlay feedback system requires accurate determination of the overlay error, or fingerprint, on exposed wafers in order to determine corrections to be automatically and dynamically applied to the exposure of future wafers. Since current and future nodes require correction per exposure (CPE), the resolution of the overlay fingerprint must be high enough to accommodate CPE in the overlay feedback system, or overlay control module (OCM). Determining a high resolution fingerprint from measured data requires extremely dense overlay sampling that takes a significant amount of measurement time. For static corrections this is acceptable, but in an automated dynamic correction system this method creates extreme bottlenecks for the throughput of said system as new lots have to wait until the previous lot is measured. One solution is using a less dense overlay sampling scheme and employing computationally up-sampled data to a dense fingerprint. That method uses a global fingerprint model over the entire wafer; measured localized overlay errors are therefore not always represented in its up-sampled output. This paper will discuss a hybrid system shown in Fig. 1 that combines a computationally up-sampled fingerprint with the measured data to more accurately capture the actual fingerprint, including local overlay errors. Such a hybrid system is shown to result in reduced modelled residuals while determining the fingerprint, and better on-product overlay performance.
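
    A conceptual sketch of such a hybrid fingerprint is given below: a smooth global model is fitted to the sparse overlay measurements and evaluated on a dense grid, and the measured local residuals are then re-inserted near their measurement sites. The polynomial model form and the nearest-point blending rule are assumptions for illustration, not the production overlay control module described in the paper.

        # Conceptual blend of a computationally up-sampled fingerprint with measured data.
        import numpy as np

        def fit_global_model(xy, overlay, degree=3):
            # Least-squares 2-D polynomial fit returning a predictor over the wafer.
            def design(pts):
                x, y = pts[:, 0], pts[:, 1]
                cols = [x**i * y**j for i in range(degree + 1)
                                    for j in range(degree + 1 - i)]
                return np.column_stack(cols)
            coef, *_ = np.linalg.lstsq(design(xy), overlay, rcond=None)
            return lambda pts: design(pts) @ coef

        def hybrid_fingerprint(sparse_xy, sparse_overlay, dense_xy):
            model = fit_global_model(sparse_xy, sparse_overlay)
            dense = model(dense_xy)                       # up-sampled global fingerprint
            residual = sparse_overlay - model(sparse_xy)  # measured local deviations
            # Re-insert each measured residual at its nearest dense grid point.
            for p, r in zip(sparse_xy, residual):
                i = np.argmin(np.linalg.norm(dense_xy - p, axis=1))
                dense[i] += r
            return dense

        rng = np.random.default_rng(0)
        sparse_xy = rng.uniform(-1, 1, size=(60, 2))
        sparse_overlay = (0.5 * sparse_xy[:, 0] ** 2 - 0.2 * sparse_xy[:, 1]
                          + rng.normal(scale=0.05, size=60))
        dense_xy = np.stack(np.meshgrid(np.linspace(-1, 1, 40),
                                        np.linspace(-1, 1, 40)), axis=-1).reshape(-1, 2)
        dense_fp = hybrid_fingerprint(sparse_xy, sparse_overlay, dense_xy)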

  2. Stochastic Least-Squares Petrov-Galerkin Method for Parameterized Linear Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Kookjin; Carlberg, Kevin; Elman, Howard C.

    Here, we consider the numerical solution of parameterized linear systems where the system matrix, the solution, and the right-hand side are parameterized by a set of uncertain input parameters. We explore spectral methods in which the solutions are approximated in a chosen finite-dimensional subspace. It has been shown that the stochastic Galerkin projection technique fails to minimize any measure of the solution error. As a remedy for this, we propose a novel stochastic least-squares Petrov-Galerkin (LSPG) method. The proposed method is optimal in the sense that it produces the solution that minimizes a weighted ℓ²-norm of the residual over all solutions in a given finite-dimensional subspace. Moreover, the method can be adapted to minimize the solution error in different weighted ℓ²-norms by simply applying a weighting function within the least-squares formulation. In addition, a goal-oriented seminorm induced by an output quantity of interest can be minimized by defining a weighting function as a linear functional of the solution. We establish optimality and error bounds for the proposed method, and extensive numerical experiments show that the weighted LSPG method outperforms other spectral methods in minimizing corresponding target weighted norms.
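
    Stripped of the stochastic expansion, the core of a weighted least-squares Petrov-Galerkin step is a weighted residual minimization over a fixed subspace. The Python sketch below shows only that deterministic core; the subspace, the weights and the test system are assumptions, not the paper's parameterized setting.

        # Minimize || W^(1/2) (b - A V y) ||_2 over y; the approximation is u = V y.
        import numpy as np

        def weighted_lspg_solve(A, b, V, w):
            # A: (n, n) system matrix, b: (n,) right-hand side,
            # V: (n, k) subspace basis, w: (n,) positive weights.
            sqrt_w = np.sqrt(w)[:, None]
            y, *_ = np.linalg.lstsq(sqrt_w * (A @ V), np.sqrt(w) * b, rcond=None)
            return V @ y  # approximate solution constrained to range(V)

        # Hypothetical test: random SPD system and a random 5-dimensional subspace.
        rng = np.random.default_rng(0)
        n, k = 50, 5
        M = rng.normal(size=(n, n))
        A = M @ M.T + n * np.eye(n)
        b = rng.normal(size=n)
        V, _ = np.linalg.qr(rng.normal(size=(n, k)))
        u_approx = weighted_lspg_solve(A, b, V, w=np.ones(n))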

  3. Quantitative estimation of localization errors of 3d transition metal pseudopotentials in diffusion Monte Carlo

    DOE PAGES

    Dzubak, Allison L.; Krogel, Jaron T.; Reboredo, Fernando A.

    2017-07-10

    The necessarily approximate evaluation of non-local pseudopotentials in diffusion Monte Carlo (DMC) introduces localization errors. In this paper, we estimate these errors for two families of non-local pseudopotentials for the first-row transition metal atoms Sc–Zn using an extrapolation scheme and multideterminant wavefunctions. Sensitivities of the error in the DMC energies to the Jastrow factor are used to estimate the quality of two sets of pseudopotentials with respect to locality error reduction. The locality approximation and T-moves scheme are also compared for accuracy of total energies. After estimating the removal of the locality and T-moves errors, we present the range of fixed-node energies between a single determinant description and a full valence multideterminant complete active space expansion. The results for these pseudopotentials agree with previous findings that the locality approximation is less sensitive to changes in the Jastrow than T-moves yielding more accurate total energies, however not necessarily more accurate energy differences. For both the locality approximation and T-moves, we find decreasing Jastrow sensitivity moving left to right across the series Sc–Zn. The recently generated pseudopotentials of Krogel et al. reduce the magnitude of the locality error compared with the pseudopotentials of Burkatzki et al. by an average estimated 40% using the locality approximation. The estimated locality error is equivalent for both sets of pseudopotentials when T-moves is used. Finally, for the Sc–Zn atomic series with these pseudopotentials, and using up to three-body Jastrow factors, our results suggest that the fixed-node error is dominant over the locality error when a single determinant is used.

  4. The effect of toe marker placement error on joint kinematics and muscle forces using OpenSim gait simulation.

    PubMed

    Xu, Hang; Merryweather, Andrew; Bloswick, Donald; Mao, Qi; Wang, Tong

    2015-01-01

    Marker placement can be a significant source of error in biomechanical studies of human movement. The toe marker placement error is amplified by footwear since the toe marker placement on the shoe only relies on an approximation of underlying anatomical landmarks. Three total knee replacement subjects were recruited and three self-speed gait trials per subject were collected. The height variation between toe and heel markers of four types of footwear was evaluated from the results of joint kinematics and muscle forces using OpenSim. The reference condition was considered as the same vertical height of toe and heel markers. The results showed that the residual variances for joint kinematics had an approximately linear relationship with toe marker placement error for lower limb joints. Ankle dorsiflexion/plantarflexion is most sensitive to toe marker placement error. The influence of toe marker placement error is generally larger for hip flexion/extension and rotation than hip abduction/adduction and knee flexion/extension. The muscle forces responded to the residual variance of joint kinematics to various degrees based on the muscle function for specific joint kinematics. This study demonstrates the importance of evaluating marker error for joint kinematics and muscle forces when explaining relative clinical gait analysis and treatment intervention.

  5. The Impact of Truth Surrogate Variance on Quality Assessment/Assurance in Wind Tunnel Testing

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard

    2016-01-01

    Minimum data volume requirements for wind tunnel testing are reviewed and shown to depend on error tolerance, response model complexity, random error variance in the measurement environment, and maximum acceptable levels of inference error risk. Distinctions are made between such related concepts as quality assurance and quality assessment in response surface modeling, as well as between precision and accuracy. Earlier research on the scaling of wind tunnel tests is extended to account for variance in the truth surrogates used at confirmation sites in the design space to validate proposed response models. A model adequacy metric is presented that represents the fraction of the design space within which model predictions can be expected to satisfy prescribed quality specifications. The impact of inference error on the assessment of response model residuals is reviewed. The number of sites where reasonably well-fitted response models actually predict inadequately is shown to be considerably less than the number of sites where residuals are out of tolerance. The significance of such inference error effects on common response model assessment strategies is examined.

  6. Nonlinear Errors Resulting from Ghost Reflection and Its Coupling with Optical Mixing in Heterodyne Laser Interferometers

    PubMed Central

    Fu, Haijin; Wang, Yue; Tan, Jiubin; Fan, Zhigang

    2018-01-01

    Even after the Heydemann correction, residual nonlinear errors, ranging from hundreds of picometers to several nanometers, are still found in heterodyne laser interferometers. This is a crucial factor impeding the realization of picometer level metrology, but its source and mechanism have barely been investigated. To study this problem, a novel nonlinear model based on optical mixing and coupling with ghost reflection is proposed and then verified by experiments. After intense investigation of this new model’s influence, results indicate that new additional high-order and negative-order nonlinear harmonics, arising from ghost reflection and its coupling with optical mixing, have only a negligible contribution to the overall nonlinear error. In real applications, any effect on the Lissajous trajectory might be invisible due to the small ghost reflectance. However, even a tiny ghost reflection can significantly worsen the effectiveness of the Heydemann correction, or even make this correction completely ineffective, i.e., compensation makes the error larger rather than smaller. Moreover, the residual nonlinear error after correction is dominated only by ghost reflectance. PMID:29498685

  7. A hybrid method for synthetic aperture ladar phase-error compensation

    NASA Astrophysics Data System (ADS)

    Hua, Zhili; Li, Hongping; Gu, Yongjian

    2009-07-01

    As a high resolution imaging sensor, synthetic aperture ladar (SAL) produces data containing phase errors whose sources include uncompensated platform motion and atmospheric turbulence distortion. Two previously devised methods, the rank-one phase-error estimation (ROPE) algorithm and iterative blind deconvolution (IBD), are reexamined, and from them a hybrid method is built that can recover both the images and the PSFs without any a priori information on the PSF, speeding up the convergence rate through the choice of initialization. When integrated into the spotlight-mode SAL imaging model, all three methods can effectively reduce the phase-error distortion. For each approach, the signal-to-noise ratio, root mean square error and CPU time are computed; these show that the convergence rate of the hybrid method can be improved by a more efficient initialization of the blind deconvolution. Moreover, a further examination of the hybrid method shows that the weight distribution of ROPE and IBD is an important factor affecting the final result of the whole compensation process.

  8. The Effect of Amplifier Bias Drift on Differential Magnitude Estimation in Multiple-Star Systems

    NASA Astrophysics Data System (ADS)

    Tyler, David W.; Muralimanohar, Hariharan; Borelli, Kathy J.

    2007-02-01

    We show how the temporal drift of CCD amplifier bias can cause significant relative magnitude estimation error in speckle interferometric observations of multiple-star systems. When amplifier bias varies over time, the estimation error arises if the time between acquisition of dark-frame calibration data and science data is long relative to the timescale over which the bias changes. Using analysis, we show that while detector-temperature drift over time causes a variation in accumulated dark current and a residual bias in calibrated imagery, only amplifier bias variations cause a residual bias in the estimated energy spectrum. We then use telescope data taken specifically to investigate this phenomenon to show that for the detector used, temporal bias drift can cause residual energy spectrum bias as large or larger than the mean value of the noise energy spectrum. Finally, we use a computer simulation to demonstrate the effect of residual bias on differential magnitude estimation. A supplemental calibration technique is described in the appendices.

  9. Efficient Residue to Binary Conversion Based on a Modified Flexible Moduli Set

    NASA Astrophysics Data System (ADS)

    Molahosseini, Amir Sabbagh

    2011-09-01

    The Residue Number System (RNS) is a non-weighted number system which can perform addition (subtraction) and multiplication on residues without carry propagation, resulting in high-speed hardware implementations of computation systems. The problem of converting residue numbers to the equivalent binary weighted form has attracted a lot of research for many years. Recently, some researchers proposed using flexible moduli sets instead of the traditional moduli sets to enhance the performance of residue-to-binary converters. This paper introduces the modified flexible moduli set {2^(2p+k), 2^(2p)+1, 2^p+1, 2^p-1}, which is obtained from the flexible set {2^(p+k), 2^(2p)+1, 2^p+1, 2^p-1} by enhancing the modulus 2^(p+k). Next, the New Chinese Remainder Theorem I is used to design a simple and efficient residue-to-binary converter for this modified set, with better performance than the converter for the moduli set {2^(p+k), 2^(2p)+1, 2^p+1, 2^p-1}.
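
    For orientation, the underlying reconstruction problem can be expressed with the ordinary Chinese Remainder Theorem, as in the Python sketch below. The paper itself derives a specialised hardware converter based on the New CRT-I for this particular moduli set; the values of p and k in the example are arbitrary.

        # Plain-CRT recovery of the weighted value from its residues (software sketch).
        from math import prod

        def residues_to_binary(residues, moduli):
            # Requires pairwise-coprime moduli; returns the unique value mod prod(moduli).
            M = prod(moduli)
            x = 0
            for r, m in zip(residues, moduli):
                Mi = M // m
                x += r * Mi * pow(Mi, -1, m)  # pow(Mi, -1, m) is the inverse of Mi mod m
            return x % M

        p, k = 4, 2
        moduli = [2 ** (2 * p + k), 2 ** (2 * p) + 1, 2 ** p + 1, 2 ** p - 1]
        value = 123456
        residues = [value % m for m in moduli]
        assert residues_to_binary(residues, moduli) == value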

  10. Content-based multiple bitstream image transmission over noisy channels.

    PubMed

    Cao, Lei; Chen, Chang Wen

    2002-01-01

    In this paper, we propose a novel combined source and channel coding scheme for image transmission over noisy channels. The main feature of the proposed scheme is a systematic decomposition of image sources so that unequal error protection can be applied according to not only bit error sensitivity but also visual content importance. The wavelet transform is adopted to hierarchically decompose the image. The association between the wavelet coefficients and what they represent spatially in the original image is fully exploited so that wavelet blocks are classified based on their corresponding image content. The classification produces wavelet blocks in each class with similar content and statistics, therefore enables high performance source compression using the set partitioning in hierarchical trees (SPIHT) algorithm. To combat the channel noise, an unequal error protection strategy with rate-compatible punctured convolutional/cyclic redundancy check (RCPC/CRC) codes is implemented based on the bit contribution to both peak signal-to-noise ratio (PSNR) and visual quality. At the receiving end, a postprocessing method making use of the SPIHT decoding structure and the classification map is developed to restore the degradation due to the residual error after channel decoding. Experimental results show that the proposed scheme is indeed able to provide protection both for the bits that are more sensitive to errors and for the more important visual content under a noisy transmission environment. In particular, the reconstructed images illustrate consistently better visual quality than using the single-bitstream-based schemes.

  11. A Posteriori Error Estimation for Discontinuous Galerkin Approximations of Hyperbolic Systems

    NASA Technical Reports Server (NTRS)

    Larson, Mats G.; Barth, Timothy J.

    1999-01-01

    This article considers a posteriori error estimation of specified functionals for first-order systems of conservation laws discretized using the discontinuous Galerkin (DG) finite element method. Using duality techniques, we derive exact error representation formulas for both linear and nonlinear functionals given an associated bilinear or nonlinear variational form. Weighted residual approximations of the exact error representation formula are then proposed and numerically evaluated for Ringleb flow, an exact solution of the 2-D Euler equations.

  12. Analysis of uncertainties and convergence of the statistical quantities in turbulent wall-bounded flows by means of a physically based criterion

    NASA Astrophysics Data System (ADS)

    Andrade, João Rodrigo; Martins, Ramon Silva; Thompson, Roney Leon; Mompean, Gilmar; da Silveira Neto, Aristeu

    2018-04-01

    The present paper provides an analysis of the statistical uncertainties associated with direct numerical simulation (DNS) results and experimental data for turbulent channel and pipe flows, showing a new physically based quantification of these errors, to improve the determination of the statistical deviations between DNSs and experiments. The analysis is carried out using a recently proposed criterion by Thompson et al. ["A methodology to evaluate statistical errors in DNS data of plane channel flows," Comput. Fluids 130, 1-7 (2016)] for fully turbulent plane channel flows, where the mean velocity error is estimated by considering the Reynolds stress tensor, and using the balance of the mean force equation. It also presents how the residual error evolves in time for a DNS of a plane channel flow, and the influence of the Reynolds number on its convergence rate. The root mean square of the residual error is shown in order to capture a single quantitative value of the error associated with the dimensionless averaging time. The evolution in time of the error norm is compared with the final error provided by DNS data of similar Reynolds numbers available in the literature. A direct consequence of this approach is that it was possible to compare different numerical results and experimental data, providing an improved understanding of the convergence of the statistical quantities in turbulent wall-bounded flows.

  13. Amelioration of bauxite residue sand by intermittent additions of nitrogen fertiliser and leaching fractions: The effect on growth of kikuyu grass and fate of applied nutrients.

    PubMed

    Kaur, Navjot; Phillips, Ian; Fey, Martin V

    2016-04-15

    Bauxite residue, a waste product of aluminium processing operations, is characterised by high pH, salinity and exchangeable sodium, which hinder sustainable plant growth. The aim of this study was to investigate the uptake form, optimum application rate and timing of nitrogen fertiliser to improve bauxite residue characteristics for plant growth. Kikuyu grass was grown in plastic columns filled with a residue sand/carbonated residue mud mixture (20:1) previously amended with gypsum, phosphoric acid and basal nutrients. The experiment was set up as a 4×4 factorial design comprising four levels of applied nitrogen (N) fertiliser (0, 3, 6 and 12 mg N kg(-1) residue) and four frequencies of leaching (16-, 8- and 4-day intervals). We hypothesised that the use of ammonium sulfate fertiliser would increase retention of N within the rhizosphere, thereby encouraging more efficient fertiliser use. We found that N uptake by kikuyu grass was enhanced by leaching of excess salts and alkalinity from the residue profile. It was also concluded that biomass production and associated N uptake by kikuyu grass grown in residue depend on the type of fertiliser used. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Robust Linear Models for Cis-eQTL Analysis.

    PubMed

    Rantalainen, Mattias; Lindgren, Cecilia M; Holmes, Christopher C

    2015-01-01

    Expression Quantitative Trait Loci (eQTL) analysis enables characterisation of functional genetic variation influencing expression levels of individual genes. In outbred populations, including humans, eQTLs are commonly analysed using the conventional linear model, adjusting for relevant covariates, assuming an allelic dosage model and a Gaussian error term. However, gene expression data generally have noise that induces heavy-tailed errors relative to the Gaussian distribution and often include atypical observations, or outliers. Such departures from modelling assumptions can lead to an increased rate of type II errors (false negatives), and to some extent also type I errors (false positives). Careful model checking can reduce the risk of type I errors but often not type II errors, since it is generally too time-consuming to carefully check all models with a non-significant effect in large-scale and genome-wide studies. Here we propose the application of a robust linear model for eQTL analysis to reduce adverse effects of deviations from the assumption of Gaussian residuals. We present results from a simulation study as well as results from the analysis of real eQTL data sets. Our findings suggest that in many situations robust models have the potential to provide more reliable eQTL results compared to conventional linear models, particularly with respect to reducing type II errors due to non-Gaussian noise. Post-genomic data, such as that generated in genome-wide eQTL studies, are often noisy and frequently contain atypical observations. Robust statistical models have the potential to provide more reliable results and increased statistical power under non-Gaussian conditions. The results presented here suggest that robust models should be considered routinely alongside other commonly used methodologies for eQTL analysis.
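    A minimal sketch of the comparison described above, assuming Python with statsmodels: an ordinary least-squares fit next to a robust (Huber) fit for a single hypothetical gene/SNP pair with heavy-tailed noise. The variable names and simulated data are placeholders, not the authors' pipeline.

```python
# Sketch (not the authors' code): conventional OLS vs. a robust (Huber) linear model
# for one gene/SNP pair under an allelic dosage model with heavy-tailed noise.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
genotype = rng.integers(0, 3, size=n).astype(float)          # allelic dosage 0/1/2
expression = 0.4 * genotype + rng.standard_t(df=3, size=n)   # heavy-tailed errors

X = sm.add_constant(genotype)                                 # intercept + dosage
ols_fit = sm.OLS(expression, X).fit()                         # conventional linear model
rlm_fit = sm.RLM(expression, X, M=sm.robust.norms.HuberT()).fit()  # robust alternative

# Compare the dosage-effect estimates and their standard errors.
print("OLS beta, SE:", ols_fit.params[1], ols_fit.bse[1])
print("RLM beta, SE:", rlm_fit.params[1], rlm_fit.bse[1])
```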

  15. Errors in Computing the Normalized Protein Catabolic Rate due to Use of Single-pool Urea Kinetic Modeling or to Omission of the Residual Kidney Urea Clearance.

    PubMed

    Daugirdas, John T

    2017-07-01

    The protein catabolic rate normalized to body size (PCRn) is often computed in dialysis units to obtain information about protein ingestion. However, errors can manifest when inappropriate modeling methods are used. We used a variable volume 2-pool urea kinetic model to examine the percent errors in PCRn due to use of a 1-pool urea kinetic model or after omission of residual urea clearance (Kru). When a single-pool model was used, 2 sources of error were identified. The first, dependent on the ratio of dialyzer urea clearance to urea distribution volume (K/V), resulted in a 7% inflation of the PCRn when K/V was in the range of 6 mL/min per L. A second, larger error appeared when Kt/V values were below 1.0 and was related to underestimation of urea distribution volume (due to overestimation of effective clearance) by the single-pool model. A previously reported prediction equation for PCRn was valid, but data suggest that it should be modified using 2-pool eKt/V and V coefficients instead of single-pool values. A third source of error, this one unrelated to use of a single-pool model, namely omission of Kru, was shown to result in an underestimation of PCRn, such that each mL/min of Kru per 35 L of V caused a 5.6% underestimate in PCRn. Marked errors in PCRn can therefore result from inappropriate use of a single-pool urea kinetic model, particularly when Kt/V <1.0 (as in short daily dialysis), or from omission of residual native kidney clearance. Copyright © 2017 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.
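    The reported sensitivity to omitting Kru lends itself to a back-of-the-envelope check. The sketch below (Python, hypothetical helper name, not a urea kinetic model) simply applies the stated rule of thumb that each mL/min of Kru per 35 L of V corresponds to roughly a 5.6% underestimate of PCRn.

```python
# Back-of-the-envelope sketch of the reported rule of thumb (not a kinetic model):
# omitting residual kidney urea clearance (Kru) underestimates PCRn by about
# 5.6% for each mL/min of Kru per 35 L of urea distribution volume V.
def pcrn_underestimate_percent(kru_ml_min: float, v_liters: float) -> float:
    """Approximate percent underestimate of PCRn when Kru is omitted."""
    return 5.6 * kru_ml_min * (35.0 / v_liters)

# Example: Kru = 2 mL/min and V = 35 L -> roughly an 11% underestimate.
print(round(pcrn_underestimate_percent(2.0, 35.0), 1))
```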

  16. Relative Proportion Of Different Types Of Refractive Errors In Subjects Seeking Laser Vision Correction.

    PubMed

    Althomali, Talal A

    2018-01-01

    Refractive errors are a form of optical defect affecting more than 2.3 billion people worldwide. As refractive errors are a major contributor to mild to moderate vision impairment, assessment of their relative proportion would be helpful in the strategic planning of health programs. To determine the pattern of the relative proportion of types of refractive errors among adult candidates seeking laser-assisted refractive correction in a private clinic setting in Saudi Arabia. The clinical charts of 687 patients (1374 eyes) with mean age 27.6 ± 7.5 years who desired laser vision correction and underwent a pre-LASIK work-up were reviewed retrospectively. Refractive errors were classified as myopia, hyperopia and astigmatism. Manifest refraction spherical equivalent (MRSE) was applied to define refractive errors. Distribution percentage of different types of refractive errors: myopia, hyperopia and astigmatism. The mean spherical equivalent for 1374 eyes was -3.11 ± 2.88 D. Of the total 1374 eyes, 91.8% (n = 1262) had myopia, 4.7% (n = 65) had hyperopia and 3.4% (n = 47) had emmetropia with astigmatism. The distribution percentage of astigmatism (cylinder error of ≥ 0.50 D) was 78.5% (1078/1374 eyes), of which 69.1% (994/1374) had low to moderate astigmatism and 9.4% (129/1374) had high astigmatism. Of the adult candidates seeking laser refractive correction in a private setting in Saudi Arabia, myopia represented the greatest burden, with more than 90% of eyes being myopic, compared to hyperopia in nearly 5% of eyes. Astigmatism was present in more than 78% of eyes.

  17. Development of multiclass methods for drug residues in eggs: hydrophilic solid-phase extraction cleanup and liquid chromatography/tandem mass spectrometry analysis of tetracycline, fluoroquinolone, sulfonamide, and beta-lactam residues.

    PubMed

    Heller, David N; Nochetto, Cristina B; Rummel, Nathan G; Thomas, Michael H

    2006-07-26

    A method was developed for detection of a variety of polar drug residues in eggs via liquid chromatography/tandem mass spectrometry (LC/MS/MS) with electrospray ionization (ESI). A total of twenty-nine target analytes from four drug classes (sulfonamides, tetracyclines, fluoroquinolones, and beta-lactams) were extracted from eggs using a hydrophilic-lipophilic balance polymer solid-phase extraction (SPE) cartridge. The extraction technique was developed for use at a target concentration of 100 ng/mL (ppb), and it was applied to eggs containing incurred residues from dosed laying hens. The ESI source was tuned using a single, generic set of tuning parameters, and analytes were separated with a phenyl-bonded silica cartridge column using an LC gradient. In a related study, residues of beta-lactam drugs were not found by LC/MS/MS in eggs from hens dosed orally with beta-lactam drugs. LC/MS/MS performance was evaluated on two generations of ion trap mass spectrometers, and key operational parameters were identified for each instrument. The ion trap acquisition methods could be set up for screening (a single product ion) or confirmation (multiple product ions). The lower limit of detection for screening purposes was 10-50 ppb (sulfonamides), 10-20 ppb (fluoroquinolones), and 10-50 ppb (tetracyclines), depending on the drug, instrument, and acquisition method. Development of this method demonstrates the feasibility of generic SPE, LC, and MS conditions for multiclass LC/MS residue screening.

  18. High-speed receiver based on waveguide germanium photodetector wire-bonded to 90nm SOI CMOS amplifier.

    PubMed

    Pan, Huapu; Assefa, Solomon; Green, William M J; Kuchta, Daniel M; Schow, Clint L; Rylyakov, Alexander V; Lee, Benjamin G; Baks, Christian W; Shank, Steven M; Vlasov, Yurii A

    2012-07-30

    The performance of a receiver based on a CMOS amplifier circuit designed with 90nm ground rules wire-bonded to a waveguide germanium photodetector is characterized at data rates up to 40Gbps. Both chips were fabricated through the IBM Silicon CMOS Integrated Nanophotonics process on specialty photonics-enabled SOI wafers. At the data rate of 28Gbps, which is relevant to the new generation of optical interconnects, a sensitivity of -7.3dBm average optical power is demonstrated with 3.4pJ/bit power efficiency and 0.6UI horizontal eye opening at a bit-error-rate of 10^-12. The receiver operates error-free (bit-error-rate < 10^-12) up to 40Gbps with optimized power supply settings, demonstrating an energy efficiency of 1.4pJ/bit and 4pJ/bit at data rates of 32Gbps and 40Gbps, respectively, with an average optical power of -0.8dBm.

  19. Identification of residue pairing in interacting β-strands from a predicted residue contact map.

    PubMed

    Mao, Wenzhi; Wang, Tong; Zhang, Wenxuan; Gong, Haipeng

    2018-04-19

    Despite the rapid progress of protein residue contact prediction, predicted residue contact maps frequently contain many errors. However, information on residue pairing in β strands can be extracted from a noisy contact map, due to the presence of characteristic contact patterns in β-β interactions. This information may benefit the tertiary structure prediction of mainly β proteins. In this work, we propose a novel ridge-detection-based β-β contact predictor to identify residue pairing in β strands from any predicted residue contact map. Our algorithm RDb2C adopts ridge detection, a well-developed technique in computer image processing, to capture consecutive residue contacts, and then utilizes a novel multi-stage random forest framework to integrate the ridge information and additional features for prediction. Starting from the predicted contact map of CCMpred, RDb2C remarkably outperforms all state-of-the-art methods on two conventional test sets of β proteins (BetaSheet916 and BetaSheet1452), achieving F1-scores of ~62% and ~76% at the residue level and strand level, respectively. Taking the prediction of the more advanced RaptorX-Contact as input, RDb2C achieves even higher performance, with F1-scores reaching ~76% and ~86% at the residue level and strand level, respectively. In a test of structural modeling using the top 1L predicted contacts as constraints, for 61 mainly β proteins, the average TM-score reaches 0.442 when using the raw RaptorX-Contact prediction, but increases to 0.506 when using the improved prediction by RDb2C. Our method can significantly improve the prediction of β-β contacts from any predicted residue contact map. Prediction results of our algorithm could be directly applied to facilitate the practical structure prediction of mainly β proteins. All source data and codes are available at http://166.111.152.91/Downloads.html or at https://github.com/wzmao/RDb2C.

  20. An innovative method for coordinate measuring machine one-dimensional self-calibration with simplified experimental process.

    PubMed

    Fang, Cheng; Butler, David Lee

    2013-05-01

    In this paper, an innovative method for CMM (Coordinate Measuring Machine) self-calibration is proposed. In contrast to conventional CMM calibration, which relies heavily on a high-precision reference standard such as a laser interferometer, the proposed calibration method is based on a low-cost artefact fabricated with commercially available precision ball bearings. By optimizing the mathematical model and rearranging the data sampling positions, the experimental process and data analysis can be simplified. In mathematical terms, the number of samples can be minimized by eliminating the redundant equations among those configured by the experimental data array. The section lengths of the artefact are measured at arranged positions, from which an equation set can be configured to determine the measurement errors at the corresponding positions. With the proposed method, the equation set is short of one equation, which can be supplemented either by measuring the total length of the artefact with a higher-precision CMM or by calibrating the single-point error at the extreme position with a laser interferometer. In this paper, the latter is selected. With spline interpolation, the error compensation curve can be determined. To verify the proposed method, a simple calibration system was set up on a commercial CMM. Experimental results showed that, with the error compensation curve, the uncertainty of the measurement can be reduced to 50%.
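    The final step, turning point-wise error estimates into a continuous compensation curve via spline interpolation, can be illustrated briefly. The positions, error values, and helper name below are hypothetical stand-ins for the workflow described; this is not the authors' code.

```python
# Minimal sketch of the error-compensation step (assumed workflow, not the authors' code):
# given measurement errors determined at a few arranged positions along one axis,
# build a spline error curve and correct an arbitrary reading.
import numpy as np
from scipy.interpolate import CubicSpline

positions_mm = np.array([0.0, 50.0, 100.0, 150.0, 200.0])   # arranged calibration positions
errors_um = np.array([0.0, 0.8, 1.1, 0.6, -0.4])            # hypothetical solved scale errors

error_curve = CubicSpline(positions_mm, errors_um)          # error compensation curve

def compensate(reading_mm: float) -> float:
    """Subtract the interpolated scale error (converted to mm) from a raw reading."""
    return reading_mm - error_curve(reading_mm) * 1e-3

print(compensate(125.0))
```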

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aristophanous, M; Court, L

    Purpose: Despite daily image guidance, setup uncertainties can be high when treating large areas of the body. The aim of this study was to measure local uncertainties inside the PTV for patients receiving IMRT to the mediastinum region. Methods: Eleven lymphoma patients who received radiotherapy (breath-hold) to the mediastinum were included in this study. The treated region could range all the way from the neck to the diaphragm. Each patient had a CT scan with a CT-on-rails system prior to every treatment. The entire PTV region was matched to the planning CT using automatic rigid registration. The PTV was then split into 5 regions: neck, supraclavicular, superior mediastinum, upper heart, lower heart. Additional auto-registrations for each of the 5 local PTV regions were performed. The residual local setup errors were calculated as the difference between the final global PTV position and the individual final local PTV positions for the AP, SI and RL directions. For each patient 4 CT scans were analyzed (1 per week of treatment). Results: The residual mean group error (M) and standard deviation of the inter-patient (or systematic) error (Σ) were lowest in the RL direction of the superior mediastinum (0.0mm and 0.5mm) and highest in the RL direction of the lower heart (3.5mm and 2.9mm). The standard deviation of the inter-fraction (or random) error (σ) was lowest in the RL direction of the superior mediastinum (0.5mm) and highest in the SI direction of the lower heart (3.9mm). The directionality of local uncertainties is important; a superior residual error in the lower heart, for example, keeps it within the global PTV. Conclusion: There is a complex relationship between breath-holding and positioning uncertainties that needs further investigation. Residual setup uncertainties can be significant even under daily CT image guidance when treating large regions of the body.
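    A minimal sketch, assuming one common convention for the population statistics quoted above (group mean M, inter-patient systematic SD Σ, and inter-fraction random SD σ) applied to residual local shifts along one axis; the array values are placeholders, not the study data.

```python
# Sketch of one common convention for population setup-error statistics:
# group mean M, systematic SD Sigma (SD of per-patient means), and random SD sigma
# (RMS of per-patient SDs). 'shifts' is a hypothetical (patients x fractions) array.
import numpy as np

shifts = np.array([            # residual shifts in mm for one axis, 4 scans per patient
    [0.5, 1.0, -0.2, 0.8],
    [2.1, 1.5, 2.8, 1.9],
    [-0.4, 0.2, 0.1, -0.6],
])

patient_means = shifts.mean(axis=1)
M = patient_means.mean()                                   # group mean error
Sigma = patient_means.std(ddof=1)                          # inter-patient (systematic) SD
sigma = np.sqrt((shifts.std(axis=1, ddof=1) ** 2).mean())  # inter-fraction (random) SD

print(f"M={M:.2f} mm, Sigma={Sigma:.2f} mm, sigma={sigma:.2f} mm")
```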

  2. OMPS Limb Profiler Instrument Performance Assessment

    NASA Technical Reports Server (NTRS)

    Jaross, Glen R.; Bhartia, Pawan K.; Chen, Grace; Kowitt, Mark; Haken, Michael; Chen, Zhong; Xu, Philippe; Warner, Jeremy; Kelly, Thomas

    2014-01-01

    Following the successful launch of the Ozone Mapping and Profiler Suite (OMPS) aboard the Suomi National Polar-orbiting Partnership (SNPP) spacecraft, the NASA OMPS Limb team began an evaluation of instrument and data product performance. The focus of this paper is the instrument performance in relation to the original design criteria. Performance that is closer to expectations increases the likelihood that limb scatter measurements by SNPP OMPS and successor instruments can form the basis for accurate long-term monitoring of ozone vertical profiles. The team finds that the Limb instrument operates mostly as designed and basic performance meets or exceeds the original design criteria. Internally scattered stray light and sensor pointing knowledge are two design challenges with the potential to seriously degrade performance. A thorough prelaunch characterization of stray light supports software corrections that are accurate to within 1% in radiances up to 60 km for the wavelengths used in deriving ozone. Residual stray light errors at 1000nm, which is useful in retrievals of stratospheric aerosols, currently exceed 10%. Height registration errors in the range of 1 km to 2 km have been observed that cannot be fully explained by known error sources. An unexpected thermal sensitivity of the sensor also causes wavelengths and pointing to shift each orbit in the northern hemisphere. Spectral shifts of as much as 0.5nm in the ultraviolet and 5 nm in the visible, and up to 0.3 km shifts in registered height, must be corrected in ground processing.

  3. The dissociation energy of N2

    NASA Technical Reports Server (NTRS)

    Almloef, Jan; Deleeuw, Bradley J.; Taylor, Peter R.; Bauschlicher, Charles W., Jr.; Siegbahn, Per

    1989-01-01

    The requirements for very accurate ab initio quantum chemical prediction of dissociation energies are examined using a detailed investigation of the nitrogen molecule. Although agreement with experiment to within 1 kcal/mol is not achieved even with the most elaborate multireference CI (configuration interaction) wave functions and largest basis sets currently feasible, it is possible to obtain agreement to within about 2 kcal/mol, or 1 percent of the dissociation energy. At this level it is necessary to account for core-valence correlation effects and to include up to h-type functions in the basis. The effect of i-type functions, the use of different reference configuration spaces, and basis set superposition error were also investigated. After discussing these results, the remaining sources of error in our best calculations are examined.

  4. Photonic Doppler velocimetry probe designed with stereo imaging

    NASA Astrophysics Data System (ADS)

    Malone, Robert M.; Cata, Brian M.; Daykin, Edward P.; Esquibel, David L.; Frogget, Brent C.; Holtkamp, David B.; Kaufman, Morris I.; McGillivray, Kevin D.; Palagi, Martin J.; Pazuchanics, Peter; Romero, Vincent T.; Sorenson, Danny S.

    2014-09-01

    During the fabrication of an aspherical mirror, inspection of the residual wavefront error is critical. In a spaceborne telescope development program, the primary mirror is made of ZERODUR with a clear aperture of 450 mm. The mass is 10 kg after lightweighting. Deformation of the mirror due to gravity is expected; hence uniform support, monitored by load cells, has been applied to reduce the gravity effect. Inspection was performed to determine the residual wavefront error in the configuration with the mirror face upwards. Correction polishing was performed according to the measurement. However, after comparison with data measured by a bench test with the mirror face horizontal, deviations were found between the two measurements. According to the wavefront error measured in the bench test, the optical system is predicted to be unable to meet the requirement. A target wavefront error for the secondary mirror is therefore analyzed to correct that of the primary mirror. The resulting optical performance is presented.

  5. LANDSAT/coastal processes

    NASA Technical Reports Server (NTRS)

    James, W. P. (Principal Investigator); Hill, J. M.; Bright, J. B.

    1977-01-01

    The author has identified the following significant results. Correlations between the satellite radiance values and water color, Secchi disk visibility, turbidity, and attenuation coefficients were generally good. The residual was due to several factors, including systematic errors in the remotely sensed data, small time and space variations in the water quality measurements, and errors caused by the experimental design. Satellite radiance values were closely correlated with the optical properties of the water.

  6. PCA determination of the radiometric noise of high spectral resolution infrared observations from spectral residuals: Application to IASI

    NASA Astrophysics Data System (ADS)

    Serio, C.; Masiello, G.; Camy-Peyret, C.; Jacquette, E.; Vandermarcq, O.; Bermudo, F.; Coppens, D.; Tobin, D.

    2018-02-01

    The problem of characterizing and estimating the instrumental or radiometric noise of satellite high spectral resolution infrared spectrometers directly from Earth observations is addressed in this paper. An approach has been developed which relies on Principal Component Analysis (PCA) with a suitable criterion to select the optimal number of PC scores. Different selection criteria, based on the estimation theory of Least Squares and/or the Maximum Likelihood Principle, have been set up and analysed. The approach is independent of any forward model and/or radiative transfer calculations. The PCA is used to define an orthogonal basis, which, in turn, is used to derive an optimal linear reconstruction of the observations. The residual vector, that is, the observation vector minus the calculated or reconstructed one, is then used to estimate the instrumental noise. It will be shown that the use of the spectral residuals to assess the radiometric instrumental noise leads to efficient estimators, which are largely independent of possible departures of the true noise from that assumed a priori to model the observational covariance matrix. Application to the Infrared Atmospheric Sounding Interferometer (IASI) has been considered. A series of case studies has been set up, which make use of IASI observations. As a major result, the analysis confirms the high stability and radiometric performance of IASI. The approach also proved to be efficient in characterizing noise features due to mechanical micro-vibrations of the beam splitter of the IASI instrument.
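    The reconstruct-and-difference step lends itself to a compact sketch. The snippet below (plain numpy, hypothetical data, with the number of retained PC scores assumed to be chosen already) estimates per-channel noise from the residual after an optimal linear reconstruction; it is an illustration of the idea, not the IASI processing code.

```python
# Sketch of a residual-based noise estimate: reconstruct each spectrum from the leading
# principal components and take the channel-wise spread of the reconstruction residual.
import numpy as np

def residual_noise_std(spectra: np.ndarray, n_scores: int) -> np.ndarray:
    """spectra: (observations x channels) array; returns a per-channel noise estimate."""
    mean = spectra.mean(axis=0)
    centered = spectra - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_scores]                        # leading principal directions
    reconstructed = centered @ basis.T @ basis   # optimal linear reconstruction
    residual = centered - reconstructed          # observation minus reconstruction
    return residual.std(axis=0, ddof=n_scores)   # channel-wise noise estimate

# Hypothetical example: rank-40 "signal" plus white noise of std 0.2 in 120 channels.
rng = np.random.default_rng(1)
obs = rng.normal(size=(500, 40)) @ rng.normal(size=(40, 120)) + rng.normal(0, 0.2, (500, 120))
print(residual_noise_std(obs, n_scores=40)[:5])   # values close to 0.2
```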

  7. UNAERO: A package of FORTRAN subroutines for approximating unsteady aerodynamics in the time domain

    NASA Technical Reports Server (NTRS)

    Dunn, H. J.

    1985-01-01

    This report serves as an instruction and maintenance manual for a collection of CDC CYBER FORTRAN IV subroutines for approximating the unsteady aerodynamic forces in the time domain. The result is a set of constant-coefficient first-order differential equations that approximate the dynamics of the vehicle. Provisions are included for adjusting the number of modes used for calculating the approximations so that an accurate approximation is generated. The number of data points at different values of reduced frequency can also be varied to adjust the accuracy of the approximation over the reduced-frequency range. The denominator coefficients of the approximation may be calculated by means of a gradient method or a least-squares approximation technique. Both the approximation methods use weights on the residual error. A new set of system equations, at a different dynamic pressure, can be generated without the approximations being recalculated.

  8. Potential-field sounding using Euler's homogeneity equation and Zidarov bubbling

    USGS Publications Warehouse

    Cordell, Lindrith

    1994-01-01

    Potential-field (gravity) data are transformed into a physical-property (density) distribution in a lower half-space, constrained solely by assumed upper bounds on physical-property contrast and data error. A two-step process is involved. The data are first transformed to an equivalent set of line (2-D case) or point (3-D case) sources, using Euler's homogeneity equation evaluated iteratively on the largest residual data value. Then, mass is converted to a volume-density product, constrained to an upper density bound, by 'bubbling,' which exploits circular or radial expansion to redistribute density without changing the associated gravity field. The method can be developed for gravity or magnetic data in two or three dimensions. The results can provide a beginning for interpretation of potential-field data where few independent constraints exist, or more likely, can be used to develop models and confirm or extend interpretation of other geophysical data sets.

  9. Measurement of electromagnetic tracking error in a navigated breast surgery setup

    NASA Astrophysics Data System (ADS)

    Harish, Vinyas; Baksh, Aidan; Ungi, Tamas; Lasso, Andras; Baum, Zachary; Gauvin, Gabrielle; Engel, Jay; Rudan, John; Fichtinger, Gabor

    2016-03-01

    PURPOSE: The measurement of tracking error is crucial to ensure the safety and feasibility of electromagnetically tracked, image-guided procedures. Measurement should occur in a clinical environment because electromagnetic field distortion depends on positioning relative to the field generator and metal objects. However, we could not find an accessible and open-source system for calibration, error measurement, and visualization. We developed such a system and tested it in a navigated breast surgery setup. METHODS: A pointer tool was designed for concurrent electromagnetic and optical tracking. Software modules were developed for automatic calibration of the measurement system, real-time error visualization, and analysis. The system was taken to an operating room to test for field distortion in a navigated breast surgery setup. Positional and rotational electromagnetic tracking errors were then calculated using optical tracking as a ground truth. RESULTS: Our system is quick to set up and can be rapidly deployed. The process from calibration to visualization also takes only a few minutes. Field distortion was measured in the presence of various surgical equipment. Positional and rotational error in a clean field was approximately 0.90 mm and 0.31°. The presence of a surgical table, an electrosurgical cautery, and an anesthesia machine increased the error by up to a few tenths of a millimeter and a tenth of a degree. CONCLUSION: In a navigated breast surgery setup, measurement and visualization of tracking error define a safe working area in the presence of surgical equipment. Our system is available as an extension for the open-source 3D Slicer platform.
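    How the positional and rotational errors might be obtained from paired measurements can be sketched briefly. The definitions below (Euclidean distance for position, angle of the relative rotation for orientation, optical tracking as ground truth), the function name, and the sample quaternions are assumptions; this is not the authors' 3D Slicer module.

```python
# Sketch of assumed error metrics: positional error as the distance between paired
# positions, rotational error as the angle of the relative rotation, with optical
# tracking treated as ground truth. Inputs are assumed to be in a common frame.
import numpy as np
from scipy.spatial.transform import Rotation as R

def tracking_errors(p_em, p_opt, q_em, q_opt):
    """p_*: (N, 3) positions in mm; q_*: (N, 4) quaternions in (x, y, z, w) order."""
    positional = np.linalg.norm(p_em - p_opt, axis=1)        # mm
    relative = R.from_quat(q_em) * R.from_quat(q_opt).inv()  # EM rotation relative to optical
    rotational = np.degrees(relative.magnitude())            # degrees
    return positional, rotational

# Hypothetical example with a single paired sample.
pos_err, rot_err = tracking_errors(
    np.array([[10.4, 0.0, 5.1]]), np.array([[10.0, 0.2, 5.0]]),
    np.array([[0.0, 0.0, 0.01, 1.0]]), np.array([[0.0, 0.0, 0.0, 1.0]]),
)
print(pos_err[0], rot_err[0])
```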

  10. Optimization of Error-Bounded Lossy Compression for Hard-to-Compress HPC Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di, Sheng; Cappello, Franck

    Since today’s scientific applications are producing vast amounts of data, compressing them before storage/transmission is critical. Results of existing compressors show two types of HPC data sets: highly compressible and hard to compress. In this work, we carefully design and optimize error-bounded lossy compression for hard-to-compress scientific data. We propose an optimized algorithm that can adaptively partition the HPC data into best-fit consecutive segments, each having mutually close data values, such that the compression condition can be optimized. Another significant contribution is the optimization of the shifting offset such that the XOR-leading-zero length between two consecutive unpredictable data points can be maximized. We finally devise an adaptive method to select the best-fit compressor at runtime to maximize the compression factor. We evaluate our solution using 13 benchmarks based on real-world scientific problems, and we compare it with 9 other state-of-the-art compressors. Experiments show that our compressor can always guarantee compression errors within the user-specified error bounds. Most importantly, our optimization can improve the compression factor effectively, by up to 49% for hard-to-compress data sets with similar compression/decompression time cost.

  11. Polarization errors associated with birefringent waveplates

    NASA Technical Reports Server (NTRS)

    West, Edward A.; Smith, Matthew H.

    1995-01-01

    Although zero-order quartz waveplates are widely used in instrumentation that needs good temperature and field-of-view characteristics, the residual errors associated with these devices can be very important in high-resolution polarimetry measurements. How the field-of-view characteristics are affected by retardation errors and the misalignment of optic axes in a double-crystal waveplate is discussed. The retardation measurements made on zero-order quartz and single-order 'achromatic' waveplates and how the misalignment errors affect those measurements are discussed.

  12. The Effect of Health Information Technology on Hospital Quality of Care

    ERIC Educational Resources Information Center

    Sun, Ruirui

    2016-01-01

    Health Information Technology (Health IT) is designed to store patients' records safely and clearly, to reduce input errors and missing records, and to make communications more efficient. Concerned about the relatively low adoption rate among US hospitals compared to most developed countries, the Bush Administration set up the Office of…

  13. Development of a Computer-Controlled Polishing Process for X-Ray Optics

    NASA Technical Reports Server (NTRS)

    Khan, Gufran S.; Gubarev, Mikhail; Arnold, William; Ramsey, Brian

    2009-01-01

    Future X-ray observatory missions require grazing-incidence x-ray optics with an angular resolution of < 5 arcsec half-power diameter. The achievable resolution depends ultimately on the quality of the polished mandrels from which the shells are replicated. With the aim of fabricating better shells and reducing the cost and time of mandrel production, a computer-controlled polishing machine has been developed for deterministic and localized polishing of mandrels. Cylindrical polishing software has also been developed that predicts the surface residual errors under a given set of operating parameters and lap configuration. Design considerations of the polishing lap are discussed, and the effects of nonconformance of the lap and the mandrel are presented.

  14. Robust double gain unscented Kalman filter for small satellite attitude estimation

    NASA Astrophysics Data System (ADS)

    Cao, Lu; Yang, Weiwei; Li, Hengnian; Zhang, Zhidong; Shi, Jianjun

    2017-08-01

    Limited by the low precision of small satellite sensors, high-performance estimation theory remains a popular research topic for attitude estimation. The Kalman filter (KF) and its extensions have been widely applied to satellite attitude estimation and have achieved considerable success. However, most existing methods make use only of the current time-step's a priori measurement residuals to complete the measurement update and state estimation, ignoring the extraction and utilization of the previous time-step's a posteriori measurement residuals. In addition, uncertain model errors always exist in the attitude dynamic system, which places higher performance requirements on the classical KF for the attitude estimation problem. Therefore, a novel robust double gain unscented Kalman filter (RDG-UKF) is presented in this paper to satisfy these requirements for small satellite attitude estimation with low-precision sensors. It is assumed that the system state estimation errors are exhibited in the measurement residual; therefore, the new method derives a second Kalman gain Kk2 to make full use of the previous time-step's measurement residual and improve the utilization efficiency of the measurement data. Moreover, the sequence orthogonal principle and the unscented transform (UT) strategy are introduced to enhance the robustness and performance of the filter and to reduce the influence of the existing uncertain model errors. Numerical simulations show that the proposed RDG-UKF is more effective and robust in dealing with model errors and low-precision sensors for small satellite attitude estimation than the classical unscented Kalman filter (UKF).

  15. Piggyback intraocular lens implantation to correct pseudophakic refractive error after segmental multifocal intraocular lens implantation.

    PubMed

    Venter, Jan A; Oberholster, Andre; Schallhorn, Steven C; Pelouskova, Martina

    2014-04-01

    To evaluate refractive and visual outcomes of secondary piggyback intraocular lens implantation in patients diagnosed as having residual ametropia following segmental multifocal lens implantation. Data of 80 pseudophakic eyes with ametropia that underwent Sulcoflex aspheric 653L intraocular lens implantation (Rayner Intraocular Lenses Ltd., East Sussex, United Kingdom) to correct residual refractive error were analyzed. All eyes previously had in-the-bag zonal refractive multifocal intraocular lens implantation (Lentis Mplus MF30, models LS-312 and LS-313; Oculentis GmbH, Berlin, Germany) and required residual refractive error correction. Outcome measurements included uncorrected distance visual acuity, corrected distance visual acuity, uncorrected near visual acuity, distance-corrected near visual acuity, manifest refraction, and complications. One-year data are presented in this study. The mean spherical equivalent ranged from -1.75 to +3.25 diopters (D) preoperatively (mean: +0.58 ± 1.15 D) and reduced to -1.25 to +0.50 D (mean: -0.14 ± 0.28 D; P < .01). Postoperatively, 93.8% of eyes were within ±0.50 D and 98.8% were within ±1.00 D of emmetropia. The mean uncorrected distance visual acuity improved significantly from 0.28 ± 0.16 to 0.01 ± 0.10 logMAR and 78.8% of eyes achieved 6/6 (Snellen 20/20) or better postoperatively. The mean uncorrected near visual acuity changed from 0.43 ± 0.28 to 0.19 ± 0.15 logMAR. There was no significant change in corrected distance visual acuity or distance-corrected near visual acuity. No serious intraoperative or postoperative complications requiring secondary intraocular lens removal occurred. Sulcoflex lenses proved to be a predictable and safe option for correcting residual refractive error in patients diagnosed as having pseudophakia. Copyright 2014, SLACK Incorporated.

  16. Analysis of uncertainties in Monte Carlo simulated organ dose for chest CT

    NASA Astrophysics Data System (ADS)

    Muryn, John S.; Morgan, Ashraf G.; Segars, W. P.; Liptak, Chris L.; Dong, Frank F.; Primak, Andrew N.; Li, Xiang

    2015-03-01

    In Monte Carlo simulation of organ dose for a chest CT scan, many input parameters are required (e.g., half-value layer of the x-ray energy spectrum, effective beam width, and anatomical coverage of the scan). The input parameter values are provided by the manufacturer, measured experimentally, or determined based on typical clinical practices. The goal of this study was to assess the uncertainties in Monte Carlo simulated organ dose as a result of using input parameter values that deviate from the truth (clinical reality). Organ dose from a chest CT scan was simulated for a standard-size female phantom using a set of reference input parameter values (treated as the truth). To emulate the situation in which the input parameter values used by the researcher may deviate from the truth, additional simulations were performed in which errors were purposefully introduced into the input parameter values, the effects of which on organ dose per CTDIvol were analyzed. Our study showed that when errors in half value layer were within ± 0.5 mm Al, the errors in organ dose per CTDIvol were less than 6%. Errors in effective beam width of up to 3 mm had negligible effect (< 2.5%) on organ dose. In contrast, when the assumed anatomical center of the patient deviated from the true anatomical center by 5 cm, organ dose errors of up to 20% were introduced. Lastly, when the assumed extra scan length was longer by 4 cm than the true value, dose errors of up to 160% were found. The results answer the important question: to what level of accuracy each input parameter needs to be determined in order to obtain accurate organ dose results.

  17. Measurement and Prediction Errors in Body Composition Assessment and the Search for the Perfect Prediction Equation.

    ERIC Educational Resources Information Center

    Katch, Frank I.; Katch, Victor L.

    1980-01-01

    Sources of error in body composition assessment by laboratory and field methods can be found in hydrostatic weighing, residual air volume, skinfolds, and circumferences. Statistical analysis can and should be used in the measurement of body composition. (CJ)

  18. Structured Set Intra Prediction With Discriminative Learning in a Max-Margin Markov Network for High Efficiency Video Coding

    PubMed Central

    Dai, Wenrui; Xiong, Hongkai; Jiang, Xiaoqian; Chen, Chang Wen

    2014-01-01

    This paper proposes a novel model on intra coding for High Efficiency Video Coding (HEVC), which simultaneously predicts blocks of pixels with optimal rate distortion. It utilizes the spatial statistical correlation for the optimal prediction based on 2-D contexts, in addition to formulating the data-driven structural interdependences to make the prediction error coherent with the probability distribution, which is desirable for successful transform and coding. The structured set prediction model incorporates a max-margin Markov network (M3N) to regulate and optimize multiple block predictions. The model parameters are learned by discriminating the actual pixel value from other possible estimates to maximize the margin (i.e., decision boundary bandwidth). Compared to existing methods that focus on minimizing prediction error, the M3N-based model adaptively maintains the coherence for a set of predictions. Specifically, the proposed model concurrently optimizes a set of predictions by associating the loss for individual blocks to the joint distribution of succeeding discrete cosine transform coefficients. When the sample size grows, the prediction error is asymptotically upper bounded by the training error under the decomposable loss function. As an internal step, we optimize the underlying Markov network structure to find states that achieve the maximal energy using expectation propagation. For validation, we integrate the proposed model into HEVC for optimal mode selection on rate-distortion optimization. The proposed prediction model obtains up to 2.85% bit rate reduction and achieves better visual quality in comparison to the HEVC intra coding. PMID:25505829

  19. Resampling-Based Empirical Bayes Multiple Testing Procedures for Controlling Generalized Tail Probability and Expected Value Error Rates: Focus on the False Discovery Rate and Simulation Study

    PubMed Central

    Dudoit, Sandrine; Gilbert, Houston N.; van der Laan, Mark J.

    2014-01-01

    Summary This article proposes resampling-based empirical Bayes multiple testing procedures for controlling a broad class of Type I error rates, defined as generalized tail probability (gTP) error rates, gTP(q, g) = Pr(g(Vn, Sn) > q), and generalized expected value (gEV) error rates, gEV(g) = E[g(Vn, Sn)], for arbitrary functions g(Vn, Sn) of the numbers of false positives Vn and true positives Sn. Of particular interest are error rates based on the proportion g(Vn, Sn) = Vn/(Vn + Sn) of Type I errors among the rejected hypotheses, such as the false discovery rate (FDR), FDR = E[Vn/(Vn + Sn)]. The proposed procedures offer several advantages over existing methods. They provide Type I error control for general data generating distributions, with arbitrary dependence structures among variables. Gains in power are achieved by deriving rejection regions based on guessed sets of true null hypotheses and null test statistics randomly sampled from joint distributions that account for the dependence structure of the data. The Type I error and power properties of an FDR-controlling version of the resampling-based empirical Bayes approach are investigated and compared to those of widely-used FDR-controlling linear step-up procedures in a simulation study. The Type I error and power trade-off achieved by the empirical Bayes procedures under a variety of testing scenarios allows this approach to be competitive with or outperform the Storey and Tibshirani (2003) linear step-up procedure, as an alternative to the classical Benjamini and Hochberg (1995) procedure. PMID:18932138
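    For context, the classical Benjamini and Hochberg (1995) linear step-up procedure that the empirical Bayes approach is benchmarked against can be written in a few lines. The sketch below is that standard comparator only, not the resampling-based procedure proposed in the article.

```python
# Sketch of the classical Benjamini-Hochberg linear step-up procedure (the comparator
# mentioned in the abstract), NOT the resampling-based empirical Bayes procedure.
import numpy as np

def benjamini_hochberg(p_values: np.ndarray, q: float = 0.05) -> np.ndarray:
    """Return a boolean mask of rejected hypotheses at FDR level q."""
    p = np.asarray(p_values)
    m = p.size
    order = np.argsort(p)
    thresholds = q * (np.arange(1, m + 1) / m)        # step-up critical values i*q/m
    below = p[order] <= thresholds
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True                        # reject the k smallest p-values
    return rejected

print(benjamini_hochberg(np.array([0.001, 0.008, 0.039, 0.041, 0.2, 0.7])))
```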

  20. A comparative study of set up variations and bowel volumes in supine versus prone positions of patients treated with external beam radiation for carcinoma rectum.

    PubMed

    Rajeev, K R; Menon, Smrithy S; Beena, K; Holla, Raghavendra; Kumar, R Rajaneesh; Dinesh, M

    2014-01-01

    A prospective study was undertaken to evaluate the influence of patient positioning on the set up variations, to determine the planning target volume (PTV) margins, and to evaluate the clinically relevant volume of the small bowel (SB) within the irradiated volume. During the period from December 2011 to April 2012, a computed tomography (CT) scan was done either in the supine position or in the prone position using a belly board (BB) for 20 consecutive patients. All the patients had histologically proven rectal cancer and received either post- or pre-operative pelvic irradiation. Using a three-dimensional planning system, the dose-volume histogram for SB was defined in each axial CT slice. The total dose was 46-50 Gy (2 Gy/fraction), delivered using the 4-field box technique. The set up variation of the study group was assessed from the data received from the electronic portal imaging device in the linear accelerator. The shifts along the X, Y, and Z directions were noted. Both systematic and random errors were calculated, and using both these values the PTV margin was calculated. The systematic errors for patients treated in the supine position were 0.87 mm (X), 0.66 mm (Y) and 1.6 mm (Z), and in the prone position 1.3 mm (X), 0.59 mm (Y) and 1.17 mm (Z). The random errors for patients treated in the supine position were 1.81 mm (X), 1.73 mm (Y) and 1.83 mm (Z), and in the prone position 2.02 mm (X), 1.21 mm (Y) and 3.05 mm (Z). The calculated PTV margins in the supine position were 3.45 mm (X), 2.87 mm (Y) and 5.31 mm (Z), and in the prone position 4.91 mm (X), 2.32 mm (Y) and 5.08 mm (Z). The mean volume of the peritoneal cavity was 648.65 cm³ in the prone position and 1197.37 cm³ in the supine position. The prone position using a BB device was more effective in reducing the irradiated SB volume in rectal cancer patients. There were no significant variations in the daily set up for patients treated in either the supine or prone position.
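    The abstract does not state which margin recipe was used, but the widely used van Herk formula, margin = 2.5Σ + 0.7σ, closely reproduces most of the reported margins (e.g. the supine X direction), so a sketch under that assumption:

```python
# Assumed recipe (van Herk-style population margin); the abstract does not name it,
# but it closely reproduces most of the reported values.
def ptv_margin_mm(systematic_sd: float, random_sd: float) -> float:
    return 2.5 * systematic_sd + 0.7 * random_sd

# Supine X direction from the abstract: Sigma = 0.87 mm, sigma = 1.81 mm.
print(round(ptv_margin_mm(0.87, 1.81), 2))   # ~3.44 mm, vs the reported 3.45 mm
```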

  1. An example of complex modelling in dentistry using Markov chain Monte Carlo (MCMC) simulation.

    PubMed

    Helfenstein, Ulrich; Menghini, Giorgio; Steiner, Marcel; Murati, Francesca

    2002-09-01

    In the usual regression setting, one regression line is computed for a whole data set. In a more complex situation, each person may be observed, for example, at several points in time, and thus a regression line might be calculated for each person. Additional complexities, such as various forms of errors in covariables, may make a straightforward statistical evaluation difficult or even impossible. During recent years, methods have been developed allowing convenient analysis of problems where the data and the corresponding models show these and many other forms of complexity. The methodology makes use of a Bayesian approach and Markov chain Monte Carlo (MCMC) simulations. The methods allow the construction of increasingly elaborate models by building them up from local sub-models. The essential structure of the models can be represented visually by directed acyclic graphs (DAG). This attractive property allows communication and discussion of the essential structure and the substantial meaning of a complex model without needing algebra. After presentation of the statistical methods, an example from dentistry is presented in order to demonstrate their application and use. The dataset of the example had a complex structure; each of a set of children was followed up over several years. The number of new fillings in permanent teeth had been recorded at several ages. The dependent variables were markedly different from the normal distribution and could not be transformed to normality. In addition, explanatory variables were assumed to be measured with different forms of error. An illustration is presented of how the corresponding models can be estimated conveniently via MCMC simulation, in particular 'Gibbs sampling', using the freely available software BUGS. In addition, how the measurement error may influence the estimates of the corresponding coefficients is explored. It is demonstrated that the effect of the independent variable on the dependent variable may be markedly underestimated if the measurement error is not taken into account ('regression dilution bias'). Markov chain Monte Carlo methods may be of great value to dentists in allowing analysis of data sets which exhibit a wide range of different forms of complexity.
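    The 'regression dilution bias' mentioned at the end can be illustrated numerically without any Bayesian machinery. The short simulation below (plain Python, not the BUGS model from the paper) shows the fitted slope shrinking toward zero by roughly the classical attenuation factor var(x_true)/(var(x_true)+var(error)) when the covariate is measured with error.

```python
# Numerical illustration of regression dilution bias (not the paper's MCMC model):
# regressing on a covariate measured with error attenuates the estimated slope by
# roughly var(x_true) / (var(x_true) + var(measurement error)).
import numpy as np

rng = np.random.default_rng(42)
n = 5000
x_true = rng.normal(0.0, 1.0, n)
y = 1.0 + 2.0 * x_true + rng.normal(0.0, 0.5, n)     # true slope = 2.0
x_obs = x_true + rng.normal(0.0, 1.0, n)             # covariate observed with error

slope_true_x = np.polyfit(x_true, y, 1)[0]           # close to 2.0
slope_obs_x = np.polyfit(x_obs, y, 1)[0]             # close to 2.0 * 1/(1+1) = 1.0
print(slope_true_x, slope_obs_x)
```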

  2. Spatiotemporal distribution of the seismicity along the Mid-Atlantic Ridge north of the Azores from hydroacoustic data: Insights into seismogenic processes in a ridge-hot spot context

    NASA Astrophysics Data System (ADS)

    Goslin, J.; Perrot, J.; Royer, J.-Y.; Martin, C.; Lourenço, N.; Luis, J.; Dziak, R. P.; Matsumoto, H.; Haxel, J.; Fowler, M. J.; Fox, C. G.; Lau, A. T.-K.; Bazin, S.

    2012-02-01

    The seismicity of the North Atlantic was monitored from May 2002 to September 2003 by the 'SIRENA array' of autonomous hydrophones. The hydroacoustic signals provide a unique data set documenting numerous low-magnitude earthquakes along the section of the Mid-Atlantic Ridge (MAR) located in a ridge-hot spot interaction context. During the experiment, 1696 events were detected along the MAR axis between 40°N and 51°N, with a magnitude of completeness level of mb ≈ 2.4. Inside the array, location errors are on the order of 2 km, and errors in the origin time are less than 1 s. From this catalog, 15 clusters were detected. The distribution of source level (SL) versus time within each cluster is used to discriminate clusters occurring in a tectonic context from those attributed to non-tectonic (i.e. volcanic or hydrothermal) processes. The location of tectonic and non-tectonic sequences correlates well with regions of positive and negative Mantle Bouguer Anomalies (MBAs), indicating the presence of thinner/colder and thicker/warmer crust, respectively. At the scale of the entire array, both the complete and declustered catalogs derived from the hydroacoustic signals show an increase of the seismicity rate from the Azores up to 43°30'N, suggesting a diminishing influence of the Azores hot spot on the ridge-axis temperature, well correlated with a similar increase in the along-axis MBAs. The comparison of the MAR seismicity with the Residual MBA (RMBA) at different scales suggests that the low-magnitude seismicity rates are directly related to along-axis variations in lithosphere rheology and temperature.

  3. Impact of spurious shear on cosmological parameter estimates from weak lensing observables

    DOE PAGES

    Petri, Andrea; May, Morgan; Haiman, Zoltán; ...

    2014-12-30

    Residual errors in shear measurements, after corrections for instrument systematics and atmospheric effects, can impact cosmological parameters derived from weak lensing observations. Here we combine convergence maps from our suite of ray-tracing simulations with random realizations of spurious shear. This allows us to quantify the errors and biases of the triplet (Ω_m, w, σ_8) derived from the power spectrum (PS), as well as from three different sets of non-Gaussian statistics of the lensing convergence field: Minkowski functionals (MFs), low-order moments (LMs), and peak counts (PKs). Our main results are as follows: (i) We find an order of magnitude smaller biases from the PS than in previous work. (ii) The PS and LM yield biases much smaller than the morphological statistics (MF, PK). (iii) For strictly Gaussian spurious shear with integrated amplitude as low as its current estimate of σ_sys^2 ≈ 10^-7, biases from the PS and LM would be unimportant even for a survey with the statistical power of the Large Synoptic Survey Telescope. However, we find that for surveys larger than ≈ 100 deg^2, non-Gaussianity in the noise (not included in our analysis) will likely be important and must be quantified to assess the biases. (iv) The morphological statistics (MF, PK) introduce important biases even for Gaussian noise, which must be corrected in large surveys. The biases are in different directions in (Ω_m, w, σ_8) parameter space, allowing self-calibration by combining multiple statistics. Our results warrant follow-up studies with more extensive lensing simulations and more accurate spurious shear estimates.

  4. Robust prediction of three-dimensional spinal curve from back surface for non-invasive follow-up of scoliosis

    NASA Astrophysics Data System (ADS)

    Bergeron, Charles; Labelle, Hubert; Ronsky, Janet; Zernicke, Ronald

    2005-04-01

    Spinal curvature progression in scoliosis patients is monitored from X-rays, and this serial exposure to harmful radiation increases the incidence of developing cancer. With the aim of reducing the invasiveness of follow-up, this study seeks to relate the three-dimensional external surface to the internal geometry, assuming that the physiological links between these are sufficiently regular across patients. A database of 194 quasi-simultaneous acquisitions of two X-rays and a 3D laser scan of the entire trunk was used. The data were processed into sets of datapoints representing the trunk surface and spinal curve. Functional data analyses were performed using generalized Fourier series with a Haar basis and functional minimum noise fractions. The resulting coefficients became inputs and outputs, respectively, to an array of support vector regression (SVR) machines. SVR parameters were set based on theoretical results, and cross-validation increased confidence in the system's performance. Predicted lateral and frontal views of the spinal curve from the back surface demonstrated average L2-errors of 6.13 and 4.38 millimetres, respectively, across the test set; these compared favourably with the measurement error in the data. This constitutes a first robust prediction of the 3D spinal curve from external data using learning techniques.
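    The mapping step described above (an array of SVR machines from surface coefficients to spinal-curve coefficients) can be sketched with scikit-learn. The array shapes, kernel, and parameters below are assumptions for illustration, and the data are random placeholders rather than the trunk/spine database.

```python
# Rough sketch of the prediction step: one support vector regressor per output
# coefficient, mapping trunk-surface coefficients to spinal-curve coefficients.
# Data are random placeholders; kernel and parameters are assumptions.
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(7)
X_surface = rng.normal(size=(194, 60))     # surface-shape coefficients per acquisition
Y_spine = rng.normal(size=(194, 12))       # spinal-curve coefficients per acquisition

model = MultiOutputRegressor(SVR(kernel="rbf", C=10.0, epsilon=0.1))  # one SVR per output
model.fit(X_surface[:150], Y_spine[:150])                             # train split
predicted = model.predict(X_surface[150:])                            # held-out predictions
print(predicted.shape)
```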

  5. Errors in radiation oncology: A study in pathways and dosimetric impact

    PubMed Central

    Drzymala, Robert E.; Purdy, James A.; Michalski, Jeff

    2005-01-01

    As complexity for treating patients increases, so does the risk of error. Some publications have suggested that record and verify (R&V) systems may contribute in propagating errors. Direct data transfer has the potential to eliminate most, but not all, errors. And although the dosimetric consequences may be obvious in some cases, a detailed study does not exist. In this effort, we examined potential errors in terms of scenarios, pathways of occurrence, and dosimetry. Our goal was to prioritize error prevention according to likelihood of event and dosimetric impact. For conventional photon treatments, we investigated errors of incorrect source‐to‐surface distance (SSD), energy, omitted wedge (physical, dynamic, or universal) or compensating filter, incorrect wedge or compensating filter orientation, improper rotational rate for arc therapy, and geometrical misses due to incorrect gantry, collimator or table angle, reversed field settings, and setup errors. For electron beam therapy, errors investigated included incorrect energy, incorrect SSD, along with geometric misses. For special procedures we examined errors for total body irradiation (TBI, incorrect field size, dose rate, treatment distance) and LINAC radiosurgery (incorrect collimation setting, incorrect rotational parameters). Likelihood of error was determined and subsequently rated according to our history of detecting such errors. Dosimetric evaluation was conducted by using dosimetric data, treatment plans, or measurements. We found geometric misses to have the highest error probability. They most often occurred due to improper setup via coordinate shift errors or incorrect field shaping. The dosimetric impact is unique for each case and depends on the proportion of fields in error and volume mistreated. These errors were short‐lived due to rapid detection via port films. The most significant dosimetric error was related to a reversed wedge direction. This may occur due to incorrect collimator angle or wedge orientation. For parallel‐opposed 60° wedge fields, this error could be as high as 80% to a point off‐axis. Other examples of dosimetric impact included the following: SSD, ~2%/cm for photons or electrons; photon energy (6 MV vs. 18 MV), on average 16% depending on depth, electron energy, ~0.5cm of depth coverage per MeV (mega‐electron volt). Of these examples, incorrect distances were most likely but rapidly detected by in vivo dosimetry. Errors were categorized by occurrence rate, methods and timing of detection, longevity, and dosimetric impact. Solutions were devised according to these criteria. To date, no one has studied the dosimetric impact of global errors in radiation oncology. Although there is heightened awareness that with increased use of ancillary devices and automation, there must be a parallel increase in quality check systems and processes, errors do and will continue to occur. This study has helped us identify and prioritize potential errors in our clinic according to frequency and dosimetric impact. For example, to reduce the use of an incorrect wedge direction, our clinic employs off‐axis in vivo dosimetry. To avoid a treatment distance setup error, we use both vertical table settings and optical distance indicator (ODI) values to properly set up fields. As R&V systems become more automated, more accurate and efficient data transfer will occur. This will require further analysis. 
Finally, we have begun examining potential intensity‐modulated radiation therapy (IMRT) errors according to the same criteria. PACS numbers: 87.53.Xd, 87.53.St PMID:16143793
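    The quoted sensitivity of roughly 2%/cm for an SSD setup error can be sanity-checked with the inverse-square relation alone. The sketch below is an assumption-laden approximation (it ignores depth-dose and scatter changes), not the clinic's dosimetric analysis.

```python
# Order-of-magnitude check of the ~2%/cm SSD sensitivity using only the inverse-square
# relation for a nominal 100 cm setup (an approximation; depth dose and scatter ignored).
def inverse_square_dose_error_percent(ssd_planned_cm: float, ssd_actual_cm: float) -> float:
    return (1.0 - (ssd_planned_cm / ssd_actual_cm) ** 2) * 100.0

print(round(inverse_square_dose_error_percent(100.0, 101.0), 2))   # ~1.97% for a 1 cm error
```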

  6. Optimal marker placement in hadrontherapy: intelligent optimization strategies with augmented Lagrangian pattern search.

    PubMed

    Altomare, Cristina; Guglielmann, Raffaella; Riboldi, Marco; Bellazzi, Riccardo; Baroni, Guido

    2015-02-01

    In high precision photon radiotherapy and in hadrontherapy, it is crucial to minimize the occurrence of geometrical deviations with respect to the treatment plan in each treatment session. To this end, point-based infrared (IR) optical tracking for patient set-up quality assessment is performed. Such tracking depends on external fiducial point placement. The main purpose of our work is to propose a new algorithm based on simulated annealing and augmented Lagrangian pattern search (SAPS), which is able to take into account prior knowledge, such as spatial constraints, during the optimization process. The SAPS algorithm was tested on data from head and neck and pelvic cancer patients who were fitted with external surface markers for IR optical tracking applied for preliminary patient set-up correction. The integrated algorithm was tested considering optimality measures obtained with Computed Tomography (CT) images (i.e. the ratio between the so-called target registration error and fiducial registration error, TRE/FRE) and assessing the marker spatial distribution. Comparison was performed with randomly selected marker configurations and with the GETS algorithm (Genetic Evolutionary Taboo Search), also taking into account the presence of organs at risk. The results obtained with SAPS highlight improvements with respect to the other approaches: (i) the TRE/FRE ratio decreases; (ii) the marker distribution satisfies both marker visibility and spatial constraints. We also investigated how the TRE/FRE ratio is influenced by the number of markers, obtaining significant TRE/FRE reduction with respect to the random configurations when a high number of markers is used. The SAPS algorithm is a valuable strategy for fiducial configuration optimization in IR optical tracking applied for patient set-up error detection and correction in radiation therapy, showing that taking prior knowledge into account is valuable in this optimization process. Further work will be focused on the computational optimization of the SAPS algorithm toward fast point-of-care applications. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. Tablet potency of Tianeptine in coated tablets by near infrared spectroscopy: model optimisation, calibration transfer and confidence intervals.

    PubMed

    Boiret, Mathieu; Meunier, Loïc; Ginot, Yves-Michel

    2011-02-20

    A near infrared (NIR) method was developed for determination of tablet potency of active pharmaceutical ingredient (API) in a complex coated tablet matrix. The calibration set contained samples from laboratory and production scale batches. The reference values were obtained by high performance liquid chromatography (HPLC) and partial least squares (PLS) regression was used to establish a model. The model was challenged by calculating tablet potency of two external test sets. Root mean square errors of prediction were respectively equal to 2.0% and 2.7%. To use this model with a second spectrometer from the production field, a calibration transfer method called piecewise direct standardisation (PDS) was used. After the transfer, the root mean square error of prediction of the first test set was 2.4% compared to 4.0% without transferring the spectra. A statistical technique using bootstrap of PLS residuals was used to estimate confidence intervals of tablet potency calculations. This method requires an optimised PLS model, selection of the bootstrap number and determination of the risk. In the case of a chemical analysis, the tablet potency value will be included within the confidence interval calculated by the bootstrap method. An easy to use graphical interface was developed to easily determine if the predictions, surrounded by minimum and maximum values, are within the specifications defined by the regulatory organisation. Copyright © 2010 Elsevier B.V. All rights reserved.
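
    As a sketch of how a residual bootstrap can attach confidence intervals to PLS predictions of tablet potency, the snippet below refits the model on residual-resampled responses and reads off percentile bounds. It is an assumed, generic workflow (using scikit-learn's PLSRegression), not the paper's exact procedure or spectral pre-processing.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def pls_bootstrap_ci(X, y, x_new, n_comp=5, n_boot=500, alpha=0.05, seed=0):
    """Point prediction and (1 - alpha) bootstrap interval for one new NIR spectrum."""
    rng = np.random.default_rng(seed)
    base = PLSRegression(n_components=n_comp).fit(X, y)
    fitted = base.predict(X).ravel()
    resid = y - fitted
    preds = []
    for _ in range(n_boot):
        y_star = fitted + rng.choice(resid, size=len(y), replace=True)  # resample residuals
        m = PLSRegression(n_components=n_comp).fit(X, y_star)
        preds.append(m.predict(x_new.reshape(1, -1)).item())
    lo, hi = np.percentile(preds, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return base.predict(x_new.reshape(1, -1)).item(), (lo, hi)
```

    A prediction is then reported together with its interval, and the comparison against registered specification limits becomes a simple range check.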

  8. Error reduction and representation in stages (ERRIS) in hydrological modelling for ensemble streamflow forecasting

    NASA Astrophysics Data System (ADS)

    Li, Ming; Wang, Q. J.; Bennett, James C.; Robertson, David E.

    2016-09-01

    This study develops a new error modelling method for ensemble short-term and real-time streamflow forecasting, called error reduction and representation in stages (ERRIS). The novelty of ERRIS is that it does not rely on a single complex error model but runs a sequence of simple error models through four stages. At each stage, an error model attempts to incrementally improve over the previous stage. Stage 1 establishes parameters of a hydrological model and parameters of a transformation function for data normalization, Stage 2 applies a bias correction, Stage 3 applies autoregressive (AR) updating, and Stage 4 applies a Gaussian mixture distribution to represent model residuals. In a case study, we apply ERRIS for one-step-ahead forecasting at a range of catchments. The forecasts at the end of Stage 4 are shown to be much more accurate than at Stage 1 and to be highly reliable in representing forecast uncertainty. Specifically, the forecasts become more accurate by applying the AR updating at Stage 3, and more reliable in uncertainty spread by using a mixture of two Gaussian distributions to represent the residuals at Stage 4. ERRIS can be applied to any existing calibrated hydrological models, including those calibrated to deterministic (e.g. least-squares) objectives.
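
    To make the Stage 3 idea concrete, here is a minimal sketch (not the ERRIS code) of AR(1) error updating: each raw one-step-ahead forecast is corrected using the previous time step's known forecast error scaled by an autoregressive coefficient rho.

```python
import numpy as np

def ar1_update(raw_forecasts, observations, rho):
    """One-step-ahead updating: observations[t-1] is already known when forecasting t."""
    updated = np.asarray(raw_forecasts, dtype=float).copy()
    for t in range(1, len(updated)):
        prev_error = observations[t - 1] - raw_forecasts[t - 1]
        updated[t] = raw_forecasts[t] + rho * prev_error   # AR(1) correction
    return updated
```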

  9. Total sulfur determination in residues of crude oil distillation using FT-IR/ATR and variable selection methods.

    PubMed

    Müller, Aline Lima Hermes; Picoloto, Rochele Sogari; de Azevedo Mello, Paola; Ferrão, Marco Flores; de Fátima Pereira dos Santos, Maria; Guimarães, Regina Célia Lourenço; Müller, Edson Irineu; Flores, Erico Marlon Moraes

    2012-04-01

    Total sulfur concentration was determined in atmospheric residue (AR) and vacuum residue (VR) samples obtained from the petroleum distillation process by Fourier transform infrared spectroscopy with attenuated total reflectance (FT-IR/ATR) in association with chemometric methods. The calibration and prediction sets consisted of 40 and 20 samples, respectively. Calibration models were developed using two variable selection methods: interval partial least squares (iPLS) and synergy interval partial least squares (siPLS). Different treatments and pre-processing steps were also evaluated for the development of the models. The pre-treatment based on multiplicative scatter correction (MSC) and mean-centered data was selected for model construction. The use of siPLS as a variable selection method provided a model with root mean square error of prediction (RMSEP) values significantly better than those obtained by the PLS model using all variables. The best model was obtained using the siPLS algorithm with the spectra divided into 20 intervals and combinations of 3 intervals (911-824, 823-736 and 737-650 cm(-1)). This model produced an RMSECV of 400 mg kg(-1) S and an RMSEP of 420 mg kg(-1) S, showing a correlation coefficient of 0.990. Copyright © 2011 Elsevier B.V. All rights reserved.
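
    The interval-selection idea can be sketched as follows (a simplified, assumed iPLS-style search, not the authors' siPLS code): split the spectrum into intervals, fit a PLS model on each interval alone, and rank the intervals by cross-validated RMSECV; siPLS then additionally evaluates combinations of the best-ranked intervals.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def rank_intervals(X, y, n_intervals=20, n_comp=3, cv=5):
    """Return interval indices ordered from lowest to highest RMSECV."""
    intervals = np.array_split(np.arange(X.shape[1]), n_intervals)
    rmsecv = []
    for cols in intervals:
        model = PLSRegression(n_components=min(n_comp, len(cols)))
        pred = cross_val_predict(model, X[:, cols], y, cv=cv).ravel()
        rmsecv.append(np.sqrt(np.mean((y - pred) ** 2)))
    order = sorted(range(n_intervals), key=lambda i: rmsecv[i])
    return order, rmsecv
```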

  10. Neural Network Burst Pressure Prediction in Graphite/Epoxy Pressure Vessels from Acoustic Emission Amplitude Data

    NASA Technical Reports Server (NTRS)

    Hill, Eric v. K.; Walker, James L., II; Rowell, Ginger H.

    1995-01-01

    Acoustic emission (AE) data were taken during hydroproof for three sets of ASTM standard 5.75 inch diameter filament wound graphite/epoxy bottles. All three sets of bottles had the same design and were wound from the same graphite fiber; the only difference was in the epoxies used. Two of the epoxies had similar mechanical properties, and because the acoustic properties of materials are a function of their stiffnesses, it was thought that the AE data from the two sets might also be similar; however, this was not the case. Therefore, the three resin types were categorized using dummy variables, which allowed the prediction of burst pressures for all three sets of bottles using a single neural network. Three bottles from each set were used to train the network. The resin category, the AE amplitude distribution data taken up to 25% of the expected burst pressure, and the actual burst pressures were used as inputs. Architecturally, the network consisted of a forty-three neuron input layer (a single categorical variable defining the resin type plus forty-two continuous variables for the AE amplitude frequencies), a fifteen neuron hidden layer for mapping, and a single output neuron for burst pressure prediction. The network trained on all three bottle sets was able to predict burst pressures in the remaining bottles with a worst-case error of +6.59%, slightly greater than the desired goal of +5%. This larger than desired error was due to poor resolution in the amplitude data for the third bottle set. When the third set of bottles was eliminated from consideration, only four hidden layer neurons were necessary to generate a worst-case prediction error of -3.43%, well within the desired goal.
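
    A comparable network shape (43 inputs, one 15-neuron hidden layer, a single burst-pressure output) can be sketched with scikit-learn as below; the arrays here are synthetic placeholders, since the original work trained on real AE amplitude distributions and evaluated held-out bottles.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_train = rng.random((9, 43))              # 1 resin-category column + 42 AE amplitude-frequency columns
y_train = 1500.0 + 300.0 * rng.random(9)   # placeholder burst pressures

X_test = rng.random((6, 43))               # held-out bottles (placeholders)
y_test = 1500.0 + 300.0 * rng.random(6)

net = MLPRegressor(hidden_layer_sizes=(15,), max_iter=5000, random_state=0).fit(X_train, y_train)
worst_case_error_pct = 100.0 * np.max(np.abs(net.predict(X_test) - y_test) / y_test)
```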

  11. Error Modeling of Multi-baseline Optical Truss. Part II; Application to SIM Metrology Truss Field Dependent Error

    NASA Technical Reports Server (NTRS)

    Zhang, Liwei Dennis; Milman, Mark; Korechoff, Robert

    2004-01-01

    The current design of the Space Interferometry Mission (SIM) employs a 19 laser-metrology-beam system (also called the L19 external metrology truss) to monitor changes of distances between the fiducials of the flight system's multiple baselines. The function of the external metrology truss is to aid in the determination of the time-variations of the interferometer baseline. The largest contributor to truss error occurs in SIM wide-angle observations when the articulation of the siderostat mirrors (in order to gather starlight from different sky coordinates) brings to light systematic errors due to offsets at the level of instrument components (which include corner cube retro-reflectors, etc.). This error is labeled external metrology wide-angle field-dependent error. A physics-based model of field-dependent error at the single metrology gauge level is developed and linearly propagated to errors in interferometer delay. In this manner, delay error sensitivity to various error parameters or their combination can be studied using eigenvalue/eigenvector analysis. Validation of the physics-based field-dependent model on the SIM testbed also lends support to the present approach. As a first example, a dihedral error model is developed for the corner cubes (CC) attached to the siderostat mirrors. The delay errors due to this effect can then be characterized using the eigenvectors of the composite CC dihedral error. The essence of the linear error model is contained in an error-mapping matrix. A corresponding Zernike component matrix approach is developed in parallel, first for convenience of describing the RMS of errors across the field-of-regard (FOR), and second for convenience of combining with additional models. Average and worst-case residual errors are computed when various orders of field-dependent terms are removed from the delay error. Results of the residual errors are important in arriving at external metrology system component requirements. Double CCs with ideally coincident vertices reside with the siderostat. The non-common vertex error (NCVE) is treated as a second example. Finally, combinations of models and various other errors are discussed.

  12. Iterative inversion of deformation vector fields with feedback control.

    PubMed

    Dubey, Abhishek; Iliopoulos, Alexandros-Stavros; Sun, Xiaobai; Yin, Fang-Fang; Ren, Lei

    2018-05-14

    Often, the inverse deformation vector field (DVF) is needed together with the corresponding forward DVF in four-dimensional (4D) reconstruction and dose calculation, adaptive radiation therapy, and simultaneous deformable registration. This study aims at improving both accuracy and efficiency of iterative algorithms for DVF inversion, and advancing our understanding of divergence and latency conditions. We introduce a framework of fixed-point iteration algorithms with active feedback control for DVF inversion. Based on rigorous convergence analysis, we design control mechanisms for modulating the inverse consistency (IC) residual of the current iterate, to be used as feedback into the next iterate. The control is designed adaptively to the input DVF with the objective to enlarge the convergence area and expedite convergence. Three particular settings of feedback control are introduced: constant value over the domain throughout the iteration; alternating values between iteration steps; and spatially variant values. We also introduce three spectral measures of the displacement Jacobian for characterizing a DVF. These measures reveal the critical role of what we term the nontranslational displacement component (NTDC) of the DVF. We carry out inversion experiments with an analytical DVF pair, and with DVFs associated with thoracic CT images of six patients at end of expiration and end of inspiration. The NTDC-adaptive iterations are shown to attain a larger convergence region at a faster pace compared to previous nonadaptive DVF inversion iteration algorithms. By our numerical experiments, alternating control yields smaller IC residuals and inversion errors than constant control. Spatially variant control renders smaller residuals and errors by at least an order of magnitude, compared to other schemes, in no more than 10 steps. Inversion results also show remarkable quantitative agreement with analysis-based predictions. Our analysis captures properties of DVF data associated with clinical CT images, and provides new understanding of iterative DVF inversion algorithms with a simple residual feedback control. Adaptive control is necessary and highly effective in the presence of nonsmall NTDCs. The adaptive iterations or the spectral measures, or both, may potentially be incorporated into deformable image registration methods. © 2018 American Association of Physicists in Medicine.
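
    A deliberately simplified 1-D illustration of the core idea (not the paper's implementation, which is 3-D and adaptively controlled): the inverse-consistency residual r(x) = v(x) + u(x + v(x)) of the current inverse estimate v is fed back into the next iterate through a feedback factor mu.

```python
import numpy as np

def invert_dvf_1d(x, u, mu=0.7, n_iter=30):
    """Fixed-point inversion of a 1-D displacement field u with residual feedback."""
    v = np.zeros_like(u)
    for _ in range(n_iter):
        u_at_deformed = np.interp(x + v, x, u)   # forward displacement sampled at the deformed points
        ic_residual = v + u_at_deformed          # inverse-consistency residual
        v = v - mu * ic_residual                 # feedback-controlled update (mu = 1 is the classic scheme)
    return v

x = np.linspace(0.0, 1.0, 201)
u = 0.05 * np.sin(2 * np.pi * x)                 # smooth synthetic forward DVF
v = invert_dvf_1d(x, u)
max_ic_residual = np.abs(v + np.interp(x + v, x, u)).max()   # should be small after convergence
```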

  13. The Houdini Transformation: True, but Illusory.

    PubMed

    Bentler, Peter M; Molenaar, Peter C M

    2012-01-01

    Molenaar (2003, 2011) showed that a common factor model could be transformed into an equivalent model without factors, involving only observed variables and residual errors. He called this invertible transformation the Houdini transformation. His derivation involved concepts from time series and state space theory. This paper verifies the Houdini transformation on a general latent variable model using algebraic methods. The results show that the Houdini transformation is illusory, in the sense that the Houdini transformed model remains a latent variable model. Contrary to common knowledge, a model that is a path model with only observed variables and residual errors may, in fact, be a latent variable model.

  14. The Houdini Transformation: True, but Illusory

    PubMed Central

    Bentler, Peter M.; Molenaar, Peter C. M.

    2012-01-01

    Molenaar (2003, 2011) showed that a common factor model could be transformed into an equivalent model without factors, involving only observed variables and residual errors. He called this invertible transformation the Houdini transformation. His derivation involved concepts from time series and state space theory. This paper verifies the Houdini transformation on a general latent variable model using algebraic methods. The results show that the Houdini transformation is illusory, in the sense that the Houdini transformed model remains a latent variable model. Contrary to common knowledge, a model that is a path model with only observed variables and residual errors may, in fact, be a latent variable model. PMID:23180888

  15. Effect of heteroscedasticity treatment in residual error models on model calibration and prediction uncertainty estimation

    NASA Astrophysics Data System (ADS)

    Sun, Ruochen; Yuan, Huiling; Liu, Xiaoli

    2017-11-01

    The heteroscedasticity treatment in residual error models directly impacts the model calibration and prediction uncertainty estimation. This study compares three methods to deal with the heteroscedasticity, including the explicit linear modeling (LM) method and nonlinear modeling (NL) method using hyperbolic tangent function, as well as the implicit Box-Cox transformation (BC). Then a combined approach (CA) combining the advantages of both LM and BC methods has been proposed. In conjunction with the first order autoregressive model and the skew exponential power (SEP) distribution, four residual error models are generated, namely LM-SEP, NL-SEP, BC-SEP and CA-SEP, and their corresponding likelihood functions are applied to the Variable Infiltration Capacity (VIC) hydrologic model over the Huaihe River basin, China. Results show that the LM-SEP yields the poorest streamflow predictions with the widest uncertainty band and unrealistic negative flows. The NL and BC methods can better deal with the heteroscedasticity and hence their corresponding predictive performances are improved, yet the negative flows cannot be avoided. The CA-SEP produces the most accurate predictions with the highest reliability and effectively avoids the negative flows, because the CA approach is capable of addressing the complicated heteroscedasticity over the study basin.
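
    As a small illustration of the two styles of heteroscedasticity treatment contrasted above (my own sketch, not the study's likelihood code): an implicit Box-Cox transformation that compresses large flows before residuals are formed, and an explicit linear model in which the residual standard deviation grows with the simulated flow.

```python
import numpy as np

def box_cox(q, lam=0.2):
    """Box-Cox transform (flows assumed positive); lam = 0 reduces to log."""
    return np.log(q) if lam == 0 else (np.power(q, lam) - 1.0) / lam

def bc_residuals(obs, sim, lam=0.2):
    """Residuals formed in transformed space, where the variance is closer to constant."""
    return box_cox(obs, lam) - box_cox(sim, lam)

def linear_sigma(sim, a=0.1, b=0.5):
    """Explicit linear model: residual standard deviation as a function of simulated flow."""
    return a + b * sim
```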

  16. To image analysis in computed tomography

    NASA Astrophysics Data System (ADS)

    Chukalina, Marina; Nikolaev, Dmitry; Ingacheva, Anastasia; Buzmakov, Alexey; Yakimchuk, Ivan; Asadchikov, Victor

    2017-03-01

    The presence of errors in tomographic image may lead to misdiagnosis when computed tomography (CT) is used in medicine, or the wrong decision about parameters of technological processes when CT is used in the industrial applications. Two main reasons produce these errors. First, the errors occur on the step corresponding to the measurement, e.g. incorrect calibration and estimation of geometric parameters of the set-up. The second reason is the nature of the tomography reconstruction step. At the stage a mathematical model to calculate the projection data is created. Applied optimization and regularization methods along with their numerical implementations of the method chosen have their own specific errors. Nowadays, a lot of research teams try to analyze these errors and construct the relations between error sources. In this paper, we do not analyze the nature of the final error, but present a new approach for the calculation of its distribution in the reconstructed volume. We hope that the visualization of the error distribution will allow experts to clarify the medical report impression or expert summary given by them after analyzing of CT results. To illustrate the efficiency of the proposed approach we present both the simulation and real data processing results.

  17. Non-parametric data-based approach for the quantification and communication of uncertainties in river flood forecasts

    NASA Astrophysics Data System (ADS)

    Van Steenbergen, N.; Willems, P.

    2012-04-01

    Reliable flood forecasts are the most important non-structural measures to reduce the impact of floods. However, flood forecasting systems are subject to uncertainty originating from the input data, model structure and model parameters of the different hydraulic and hydrological submodels. To quantify this uncertainty, a non-parametric data-based approach has been developed. This approach analyses the historical forecast residuals (differences between the predictions and the observations at river gauging stations) without using a predefined statistical error distribution. Because the residuals are correlated with the value of the forecasted water level and the lead time, the residuals are split up into discrete classes of simulated water levels and lead times. For each class, percentile values of the model residuals are calculated and stored in a 'three dimensional error' matrix. By 3D interpolation in this error matrix, the uncertainty in new forecasted water levels can be quantified. In addition to the quantification of the uncertainty, the communication of this uncertainty is equally important. The communication has to be done in a consistent way, reducing the chance of misinterpretation. Also, the communication needs to be adapted to the audience; the majority of the general public is not interested in in-depth information on the uncertainty of the predicted water levels, but only in the likelihood of exceedance of certain alarm levels. Water managers need more information, e.g. time-dependent uncertainty information, because they rely on this information to undertake the appropriate flood mitigation action. There are various ways of presenting uncertainty information (numerical, linguistic, graphical, time (in)dependent, etc.), each with its advantages and disadvantages for a specific audience. A useful method to communicate the uncertainty of flood forecasts is probabilistic flood mapping. These maps give a representation of the probability of flooding of a certain area, based on the uncertainty assessment of the flood forecasts. By using this type of map, water managers can focus their attention on the areas with the highest flood probability. The general public can also consult these maps for information on the probability of flooding at their specific location, such that they can take pro-active measures to reduce personal damage. The method of quantifying the uncertainty was implemented in the operational flood forecasting system for the navigable rivers in the Flanders region of Belgium. The method has shown clear benefits during the floods of the last two years.
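
    The construction of the error matrix can be sketched as follows (a simplification of the approach described above: a nearest-bin lookup is used here instead of 3D interpolation, and all bin edges and percentile levels are placeholders; inputs are 1-D numpy arrays).

```python
import numpy as np

def build_error_matrix(levels, lead_times, residuals, level_edges, lead_edges,
                       pcts=(5, 25, 50, 75, 95)):
    """Percentiles of historical forecast residuals per (water-level, lead-time) class."""
    mat = np.full((len(level_edges) - 1, len(lead_edges) - 1, len(pcts)), np.nan)
    li = np.digitize(levels, level_edges) - 1
    ti = np.digitize(lead_times, lead_edges) - 1
    for i in range(mat.shape[0]):
        for j in range(mat.shape[1]):
            r = residuals[(li == i) & (ti == j)]
            if r.size:
                mat[i, j] = np.percentile(r, pcts)
    return mat

def uncertainty_band(mat, level, lead, level_edges, lead_edges):
    """Percentile band attached to a new forecast (nearest class, no interpolation)."""
    i = int(np.clip(np.digitize(level, level_edges) - 1, 0, mat.shape[0] - 1))
    j = int(np.clip(np.digitize(lead, lead_edges) - 1, 0, mat.shape[1] - 1))
    return mat[i, j]
```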

  18. Influence of natural organic matter on the screening of pharmaceuticals in water by using liquid chromatography with full scan mass spectrometry.

    PubMed

    Rivera, Zahira Herrera; Oosterink, Efraim; Rietveld, Luuk; Schoutsen, Frans; Stolker, Linda

    2011-08-26

    The influence of natural organic matter on the screening of pharmaceuticals in water was determined by using high resolution liquid chromatography (HRLC) combined with full scan mass spectrometry (MS) techniques like time of flight (ToF) or Orbitrap MS. Water samples containing different amounts of natural organic matter (NOM) and residues of a set of 11 pharmaceuticals were analyzed by using Exactive Orbitrap™ LC-MS. The samples were screened for residues of pharmaceuticals belonging to different classes like benzimidazoles, macrolides, penicillins, quinolones, sulfonamides, tetracyclines, tranquillizers, non-steroidal anti-inflammatory drugs (NSAIDs), anti-epileptics and lipid regulators. The method characteristics were established over a concentration range of 0.1-500 μg L(-1). The 11 pharmaceuticals were added to two effluent and two influent water samples. The NOM concentration within the samples ranged from 2 to 8 mg L(-1) of dissolved organic carbon. The HRLC-Exactive Orbitrap™ LC-MS system was set at a resolution of 50,000 (FWHM), and this selection was found sufficient for the detection of the list of pharmaceuticals. With this resolution setting, accurate mass measurements with errors below 2 ppm were found, regardless of the NOM concentration of the different types of water samples. The linearities were acceptable, with correlation coefficients greater than 0.95 for 30 of the 51 measured linearities. The limit of detection varied between 0.1 μg L(-1) and 100 μg L(-1). It was demonstrated that sensitivity could be affected by matrix constituents in both directions, signal reduction or enhancement. Finally, it was concluded that with the direct shoot method used (no sample pretreatment) all compounds were detected, but LODs depend on the matrix-analyte-concentration combination. No direct relation was observed between NOM concentration and method characteristics. For accurate quantification, the use of internal standards and/or sample clean-up is necessary. The direct shoot method is only applicable for qualitative screening purposes. The use of full scan MS makes it possible to search for unknown contaminants. With the use of adequate software and a database containing more than 50,000 entries, a tool is available to search for unknowns. Copyright © 2011 Elsevier B.V. All rights reserved.

  19. Low-cost FM oscillator for capacitance type of blade tip clearance measurement system

    NASA Technical Reports Server (NTRS)

    Barranger, John P.

    1987-01-01

    The frequency-modulated (FM) oscillator described is part of a blade tip clearance measurement system that meets the needs of a wide class of fans, compressors, and turbines. As a result of advancements in the technology of ultra-high-frequency operational amplifiers, the FM oscillator requires only a single low-cost integrated circuit. Its carrier frequency is 42.8 MHz when it is used with an integrated probe and connecting cable assembly consisting of a 0.81 cm diameter engine-mounted capacitance probe and a 61 cm long hermetically sealed coaxial cable. A complete circuit analysis is given, including amplifier negative resistance characteristics. An error analysis of environmentally induced effects is also derived, and an error-correcting technique is proposed. The oscillator can be calibrated in the static mode and has a negative peak frequency deviation of 400 kHz for a rotor blade thickness of 1.2 mm. High-temperature performance tests of the probe and 13 cm of the adjacent cable show good accuracy up to 600 C, the maximum permissible seal temperature. The major source of error is the residual FM oscillator noise, which produces a clearance error of + or - 10 microns at a clearance of 0.5 mm. The oscillator electronics accommodates the high rotor speeds associated with small engines, the signals from which may have frequency components as high as 1 MHz.

  20. The Gnomon Experiment

    NASA Astrophysics Data System (ADS)

    Krisciunas, Kevin

    2007-12-01

    A gnomon, or vertical pointed stick, can be used to determine the north-south direction at a site, as well as one's latitude. If one has accurate time and knows one's time zone, it is also possible to determine one's longitude. From observations on the first day of winter and the first day of summer, one can determine the obliquity of the ecliptic. Since we can obtain accurate geographical coordinates from Google Earth or a GPS device, analysis of a set of shadow length measurements can be used by students to learn about astronomical coordinate systems, time systems, systematic errors, and random errors. Systematic latitude errors of student datasets are typically 30 nautical miles (0.5 degree) or more, but with care one can achieve systematic and random errors of less than 8 nautical miles. One of the advantages of this experiment is that it can be carried out during the day. Also, it is possible to determine if a student has made up his data.
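
    A hedged example of the kind of reduction students can perform: recovering latitude from the length of a gnomon's shadow at local solar noon, given the Sun's declination for the date. The sign convention below assumes a northern-hemisphere observer with the noon Sun to the south; the function name and the sample numbers are mine.

```python
import math

def latitude_from_noon_shadow(gnomon_height_m, shadow_length_m, solar_declination_deg):
    """Observer latitude (deg) from the noon shadow of a vertical gnomon."""
    solar_altitude = math.degrees(math.atan2(gnomon_height_m, shadow_length_m))
    return 90.0 - solar_altitude + solar_declination_deg

# e.g. a 1.00 m stick casting a 0.85 m noon shadow on the December solstice (dec ~ -23.44 deg)
lat = latitude_from_noon_shadow(1.00, 0.85, -23.44)   # ~16.9 deg N
```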

  1. Chromatic dispersive confocal technology for intra-oral scanning: first in-vitro results

    NASA Astrophysics Data System (ADS)

    Ertl, T.; Zint, M.; Konz, A.; Brauer, E.; Hörhold, H.; Hibst, R.

    2015-02-01

    Various test objects (plaster models, partially equipped with extracted teeth, and pig jaws) representing various clinical situations of tooth preparation were used for in-vitro scanning tests with an experimental intra-oral scanning system based on chromatic-dispersive confocal technology. Scanning results were compared against data sets of the same objects captured by an industrial μCT measuring system. Compared to the μCT data, an average error of 18-30 μm was achieved for a single-tooth scan area, and an error of less than 40 to 60 μm was measured over the restoration plus the neighboring teeth and pontic areas, up to 7 units. The mean error for a full jaw is within 100-140 μm. The length error for a 3-4 unit bridge situation, from contact point to contact point, is below 100 μm, and excellent interproximal surface coverage and prep margin clarity were achieved.

  2. Seasonal to interannual Arctic sea ice predictability in current global climate models

    NASA Astrophysics Data System (ADS)

    Tietsche, S.; Day, J. J.; Guemas, V.; Hurlin, W. J.; Keeley, S. P. E.; Matei, D.; Msadek, R.; Collins, M.; Hawkins, E.

    2014-02-01

    We establish the first intermodel comparison of seasonal to interannual predictability of present-day Arctic climate by performing coordinated sets of idealized ensemble predictions with four state-of-the-art global climate models. For Arctic sea ice extent and volume, there is potential predictive skill for lead times of up to 3 years, and potential prediction errors have similar growth rates and magnitudes across the models. Spatial patterns of potential prediction errors differ substantially between the models, but some features are robust. Sea ice concentration errors are largest in the marginal ice zone, and in winter they are almost zero away from the ice edge. Sea ice thickness errors are amplified along the coasts of the Arctic Ocean, an effect that is dominated by sea ice advection. These results give an upper bound on the ability of current global climate models to predict important aspects of Arctic climate.

  3. Discovery of error-tolerant biclusters from noisy gene expression data.

    PubMed

    Gupta, Rohit; Rao, Navneet; Kumar, Vipin

    2011-11-24

    An important analysis performed on microarray gene-expression data is to discover biclusters, which denote groups of genes that are coherently expressed for a subset of conditions. Various biclustering algorithms have been proposed to find different types of biclusters from these real-valued gene-expression data sets. However, these algorithms suffer from several limitations, such as the inability to explicitly handle errors/noise in the data; difficulty in discovering small biclusters due to their top-down approach; and the inability of some of the approaches to find overlapping biclusters, which is crucial as many genes participate in multiple biological processes. Association pattern mining also produces biclusters as its result and can naturally address some of these limitations. However, traditional association mining only finds exact biclusters, which limits its applicability in real-life data sets where the biclusters may be fragmented due to random noise/errors. Moreover, as such methods only work with binary or Boolean attributes, their application to gene-expression data requires transforming real-valued attributes to binary attributes, which often results in loss of information. Many past approaches have tried to address the issue of noise and the handling of real-valued attributes independently, but there is no systematic approach that addresses both of these issues together. In this paper, we first propose a novel error-tolerant biclustering model, 'ET-bicluster', and then propose a bottom-up heuristic-based mining algorithm to sequentially discover error-tolerant biclusters directly from real-valued gene-expression data. The efficacy of our proposed approach is illustrated by comparing it with a recent approach, RAP, in the context of two biological problems: discovery of functional modules and discovery of biomarkers. For the first problem, two real-valued S. cerevisiae microarray gene-expression data sets are used to demonstrate that the biclusters obtained from the ET-bicluster approach not only recover a larger set of genes compared to those obtained from the RAP approach but also have higher functional coherence as evaluated using GO-based functional enrichment analysis. The statistical significance of the discovered error-tolerant biclusters, as estimated by using two randomization tests, reveals that they are indeed biologically meaningful and statistically significant. For the second problem, biomarker discovery, we used four real-valued breast cancer microarray gene-expression data sets and evaluated the biomarkers obtained using MSigDB gene sets. The results obtained for both problems, functional module discovery and biomarker discovery, clearly signify the usefulness of the proposed ET-bicluster approach and illustrate the importance of explicitly incorporating noise/errors in discovering coherent groups of genes from gene-expression data.
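
    A tiny illustration of the error-tolerance idea (not the ET-bicluster mining algorithm itself): a candidate gene-by-condition submatrix is accepted if at least a fraction (1 - eps) of its entries are coherently expressed, here simply taken to mean above a threshold.

```python
import numpy as np

def is_error_tolerant_bicluster(expr, gene_idx, cond_idx, threshold=1.0, eps=0.2):
    """Accept the submatrix if the fraction of 'violating' cells is at most eps."""
    sub = expr[np.ix_(gene_idx, cond_idx)]
    return np.mean(sub >= threshold) >= 1.0 - eps
```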

  4. Relative Proportion Of Different Types Of Refractive Errors In Subjects Seeking Laser Vision Correction

    PubMed Central

    Althomali, Talal A.

    2018-01-01

    Background: Refractive errors are a form of optical defect affecting more than 2.3 billion people worldwide. As refractive errors are a major contributor to mild to moderate vision impairment, assessment of their relative proportion would be helpful in the strategic planning of health programs. Purpose: To determine the pattern of the relative proportion of types of refractive errors among the adult candidates seeking laser assisted refractive correction in a private clinic setting in Saudi Arabia. Methods: The clinical charts of 687 patients (1374 eyes) with mean age 27.6 ± 7.5 years who desired laser vision correction and underwent a pre-LASIK work-up were reviewed retrospectively. Refractive errors were classified as myopia, hyperopia and astigmatism. Manifest refraction spherical equivalent (MRSE) was applied to define refractive errors. Outcome Measures: Distribution percentage of different types of refractive errors: myopia, hyperopia and astigmatism. Results: The mean spherical equivalent for 1374 eyes was -3.11 ± 2.88 D. Of the total 1374 eyes, 91.8% (n = 1262) had myopia, 4.7% (n = 65) had hyperopia and 3.4% (n = 47) had emmetropia with astigmatism. The distribution percentage of astigmatism (cylinder error of ≥ 0.50 D) was 78.5% (1078/1374 eyes), of which 69.1% (994/1374) had low to moderate astigmatism and 9.4% (129/1374) had high astigmatism. Conclusion and Relevance: Of the adult candidates seeking laser refractive correction in a private setting in Saudi Arabia, myopia represented the greatest burden, with more than 90% of eyes myopic, compared to hyperopia in nearly 5% of eyes. Astigmatism was present in more than 78% of eyes. PMID:29872484

  5. Development of a sub-miniature rubidium oscillator for SEEKTALK application

    NASA Technical Reports Server (NTRS)

    Fruehauf, H.; Weidemann, W.; Jechart, E.

    1981-01-01

    Warm-up and size challenges to oscillator construction are presented, as well as the problems involved in these tasks. The performance of the M-100 military rubidium oscillator is compared to that of a subminiature rubidium oscillator (M-1000). Methods of achieving a 1.5 minute warm-up are discussed, as well as improvements in performance under adverse environmental conditions, including temperature, vibration, and magnetics. An attempt is made to construct an oscillator error budget under a set of arbitrary mission conditions.

  6. Local Setup Reproducibility of the Spinal Column When Using Intensity-Modulated Radiation Therapy for Craniospinal Irradiation With Patient in Supine Position

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoiber, Eva Maria, E-mail: eva.stoiber@med.uni-heidelberg.de; Department of Medical Physics, German Cancer Research Center, Heidelberg; Giske, Kristina

    Purpose: To evaluate local positioning errors of the lumbar spine during fractionated intensity-modulated radiotherapy of patients treated with craniospinal irradiation and to assess the impact of rotational error correction on these uncertainties for one patient setup correction strategy. Methods and Materials: 8 patients (6 adults, 2 children) treated with helical tomotherapy for craniospinal irradiation were retrospectively chosen for this analysis. Patients were immobilized with a deep-drawn Aquaplast head mask. In addition to daily megavoltage control computed tomography scans of the skull, once-a-week positioning of the lumbar spine was assessed. For this purpose, patient setup was corrected by a target point correction derived from a registration of the patient's skull. The residual positioning variations of the lumbar spine were evaluated applying a rigid-registration algorithm. The impact of different rotational error corrections was simulated. Results: After target point correction, residual local positioning errors of the lumbar spine varied considerably. Craniocaudal axis rotational error correction did not improve or deteriorate these translational errors, whereas simulation of a rotational error correction of the right-left and anterior-posterior axes increased these errors by a factor of 2 to 3. Conclusion: The patient fixation used allows for deformations between the patient's skull and spine. Therefore, for the setup correction strategy evaluated in this study, generous margins for the lumbar spinal target volume are needed to prevent a local geographic miss. With any applied correction strategy, it needs to be evaluated whether or not a rotational error correction is beneficial.

  7. Multi-rate cubature Kalman filter based data fusion method with residual compensation to adapt to sampling rate discrepancy in attitude measurement system.

    PubMed

    Guo, Xiaoting; Sun, Changku; Wang, Peng

    2017-08-01

    This paper investigates the multi-rate inertial and vision data fusion problem in nonlinear attitude measurement systems, where the sampling rate of the inertial sensor is much faster than that of the vision sensor. To fully exploit the high frequency inertial data and obtain favorable fusion results, a multi-rate CKF (Cubature Kalman Filter) algorithm with estimated residual compensation is proposed in order to adapt to the problem of sampling rate discrepancy. During inter-sampling of slow observation data, observation noise can be regarded as infinite. The Kalman gain is unknown and approaches zero. The residual is also unknown. Therefore, the filter estimated state cannot be compensated. To obtain compensation at these moments, state error and residual formulas are modified when compared with the observation data available moments. Self-propagation equation of the state error is established to propagate the quantity from the moments with observation to the moments without observation. Besides, a multiplicative adjustment factor is introduced as Kalman gain, which acts on the residual. Then the filter estimated state can be compensated even when there are no visual observation data. The proposed method is tested and verified in a practical setup. Compared with multi-rate CKF without residual compensation and single-rate CKF, a significant improvement is obtained on attitude measurement by using the proposed multi-rate CKF with inter-sampling residual compensation. The experiment results with superior precision and reliability show the effectiveness of the proposed method.

  8. An improved algorithm for the determination of the system parameters of a visual binary by least squares

    NASA Astrophysics Data System (ADS)

    Xu, Yu-Lin

    The problem of computing the orbit of a visual binary from a set of observed positions is reconsidered. It is a least squares adjustment problem, if the observational errors follow a bias-free multivariate Gaussian distribution and the covariance matrix of the observations is assumed to be known. The condition equations are constructed to satisfy both the conic section equation and the area theorem, which are nonlinear in both the observations and the adjustment parameters. The traditional least squares algorithm, which employs condition equations that are solved with respect to the uncorrelated observations and either linear in the adjustment parameters or linearized by developing them in Taylor series by first-order approximation, is inadequate in our orbit problem. D.C. Brown proposed an algorithm solving a more general least squares adjustment problem in which the scalar residual function, however, is still constructed by first-order approximation. Not long ago, a completely general solution was published by W.H Jefferys, who proposed a rigorous adjustment algorithm for models in which the observations appear nonlinearly in the condition equations and may be correlated, and in which construction of the normal equations and the residual function involves no approximation. This method was successfully applied in our problem. The normal equations were first solved by Newton's scheme. Practical examples show that this converges fast if the observational errors are sufficiently small and the initial approximate solution is sufficiently accurate, and that it fails otherwise. Newton's method was modified to yield a definitive solution in the case the normal approach fails, by combination with the method of steepest descent and other sophisticated algorithms. Practical examples show that the modified Newton scheme can always lead to a final solution. The weighting of observations, the orthogonal parameters and the efficiency of a set of adjustment parameters are also considered. The definition of efficiency is revised.

  9. Deformation Time-Series of the Lost-Hills Oil Field using a Multi-Baseline Interferometric SAR Inversion Algorithm with Finite Difference Smoothing Constraints

    NASA Astrophysics Data System (ADS)

    Werner, C. L.; Wegmüller, U.; Strozzi, T.

    2012-12-01

    The Lost-Hills oil field located in Kern County, California ranks sixth in total remaining reserves in California. Hundreds of densely packed wells characterize the field, with one well every 5000 to 20,000 square meters. Subsidence due to oil extraction can be greater than 10 cm/year and is highly variable both in space and time. The RADARSAT-1 SAR satellite collected data over this area with a 24-day repeat during a 2-year period spanning 2002-2004. Relatively high interferometric correlation makes this an excellent region for development and testing of deformation time-series inversion algorithms. Errors in deformation time series derived from a stack of differential interferograms are primarily due to errors in the digital terrain model, interferometric baselines, variability in tropospheric delay, thermal noise and phase unwrapping errors. Particularly challenging is separation of non-linear deformation from variations in troposphere delay and phase unwrapping errors. In our algorithm, a subset of interferometric pairs is selected from a set of N radar acquisitions based on criteria of connectivity, time interval, and perpendicular baseline. When possible, the subset consists of temporally connected interferograms; otherwise, the different groups of interferograms are selected to overlap in time. The maximum time interval is constrained to be less than a threshold value to minimize phase gradients due to deformation as well as to minimize temporal decorrelation. Large baselines are also avoided to minimize the consequence of DEM errors on the interferometric phase. Based on an extension of the SVD-based inversion described by Lee et al. (USGS Professional Paper 1769), Schmidt and Burgmann (JGR, 2003), and the earlier work of Berardino (TGRS, 2002), our algorithm combines estimation of the DEM height error with a set of finite difference smoothing constraints. A set of linear equations is formulated for each spatial point as functions of the deformation velocities during the time intervals spanned by the interferograms and a DEM height correction. The sensitivity of the phase to the height correction depends on the length of the perpendicular baseline of each interferogram. This design matrix is augmented with a set of additional weighted constraints on the acceleration that penalize rapid velocity variations. The weighting factor γ can be varied from 0 (no smoothing) to large values (> 10) that yield an essentially linear time-series solution. The factor can be tuned to take into account a priori knowledge of the deformation non-linearity. The difference between the time-series solution and the unconstrained time-series can be interpreted as due to a combination of tropospheric path delay and baseline error. Spatial smoothing of the residual phase leads to an improved atmospheric model that can be fed back into the model and iterated. Our analysis shows non-linear deformation related to changes in oil extraction as well as local height corrections improving on the low resolution 3 arc-sec SRTM DEM.
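
    A schematic single-pixel version of the constrained inversion described above is sketched below; it is my own simplification (sign conventions, the DEM-error phase term, and the orbit geometry numbers are assumptions), with the velocities in each time interval and one DEM height correction as unknowns, and finite-difference "acceleration" rows weighted by gamma damping rapid velocity changes.

```python
import numpy as np

def invert_pixel(phase, pairs, times, bperp, wavelength=0.056, gamma=1.0,
                 slant_range=850e3, inc_angle=0.4):
    """Least-squares inversion of unwrapped phases for interval velocities + DEM correction."""
    n = len(times) - 1                                   # number of velocity intervals
    A = np.zeros((len(pairs), n + 1))
    for k, (i, j) in enumerate(pairs):                   # interferogram between acquisitions i < j
        for m in range(i, j):
            A[k, m] = times[m + 1] - times[m]            # phase accumulates velocity * dt
        A[k, n] = (4 * np.pi / wavelength) * bperp[k] / (slant_range * np.sin(inc_angle))
    A[:, :n] *= 4 * np.pi / wavelength                   # displacement -> phase
    S = np.zeros((max(n - 1, 0), n + 1))                 # finite-difference smoothing rows
    for m in range(n - 1):
        S[m, m], S[m, m + 1] = -gamma, gamma             # penalize velocity jumps
    x, *_ = np.linalg.lstsq(np.vstack([A, S]),
                            np.concatenate([phase, np.zeros(S.shape[0])]), rcond=None)
    return x[:n], x[n]                                   # interval velocities, DEM height correction
```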

  10. Continuous Glucose Monitoring in Newborn Infants

    PubMed Central

    Thomas, Felicity; Signal, Mathew; Harris, Deborah L.; Weston, Philip J.; Harding, Jane E.; Shaw, Geoffrey M.

    2014-01-01

    Neonatal hypoglycemia is common and can cause serious brain injury. Continuous glucose monitoring (CGM) could improve hypoglycemia detection, while reducing blood glucose (BG) measurements. Calibration algorithms use BG measurements to convert sensor signals into CGM data. Thus, inaccuracies in calibration BG measurements directly affect CGM values and any metrics calculated from them. The aim was to quantify the effect of timing delays and calibration BG measurement errors on hypoglycemia metrics in newborn infants. Data from 155 babies were used. Two timing and 3 BG meter error models (Abbott Optium Xceed, Roche Accu-Chek Inform II, Nova Statstrip) were created using empirical data. Monte-Carlo methods were employed, and each simulation was run 1000 times. Each set of patient data in each simulation had randomly selected timing and/or measurement error added to BG measurements before CGM data were calibrated. The number of hypoglycemic events, duration of hypoglycemia, and hypoglycemic index were then calculated using the CGM data and compared to baseline values. Timing error alone had little effect on hypoglycemia metrics, but measurement error caused substantial variation. Abbott results underreported the number of hypoglycemic events by up to 8 and Roche overreported by up to 4 where the original number reported was 2. Nova results were closest to baseline. Similar trends were observed in the other hypoglycemia metrics. Errors in blood glucose concentration measurements used for calibration of CGM devices can have a clinically important impact on detection of hypoglycemia. If CGM devices are going to be used for assessing hypoglycemia, it is important to understand the impact of these errors on CGM data. PMID:24876618

  11. Implementation of an improved adaptive-implicit method in a thermal compositional simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, T.B.

    1988-11-01

    A multicomponent thermal simulator with an adaptive-implicit-method (AIM) formulation/inexact-adaptive-Newton (IAN) method is presented. The final coefficient matrix retains the original banded structure so that conventional iterative methods can be used. Various methods for selection of the eliminated unknowns are tested. The AIM/IAN method has a lower work count per Newtonian iteration than fully implicit methods, but a wrong choice of unknowns will result in excessive Newtonian iterations. For the problems tested, the residual-error method described in the paper for selecting implicit unknowns, together with the IAN method, had an improvement of up to 28% of the CPU time over the fully implicit method.

  12. Obstetric Neuraxial Drug Administration Errors: A Quantitative and Qualitative Analytical Review.

    PubMed

    Patel, Santosh; Loveridge, Robert

    2015-12-01

    Drug administration errors in obstetric neuraxial anesthesia can have devastating consequences. Although fully recognizing that they represent "only the tip of the iceberg," published case reports/series of these errors were reviewed in detail with the aim of estimating the frequency and the nature of these errors. We identified case reports and case series from MEDLINE and performed a quantitative analysis of the involved drugs, error setting, source of error, the observed complications, and any therapeutic interventions. We subsequently performed a qualitative analysis of the human factors involved and proposed modifications to practice. Twenty-nine cases were identified. Various drugs were given in error, but no direct effects on the course of labor, mode of delivery, or neonatal outcome were reported. Four maternal deaths from the accidental intrathecal administration of tranexamic acid were reported, all occurring after delivery of the fetus. A range of hemodynamic and neurologic signs and symptoms were noted, but the most commonly reported complication was the failure of the intended neuraxial anesthetic technique. Several human factors were present; most common factors were drug storage issues and similar drug appearance. Four practice recommendations were identified as being likely to have prevented the errors. The reported errors exposed latent conditions within health care systems. We suggest that the implementation of the following processes may decrease the risk of these types of drug errors: (1) Careful reading of the label on any drug ampule or syringe before the drug is drawn up or injected; (2) labeling all syringes; (3) checking labels with a second person or a device (such as a barcode reader linked to a computer) before the drug is drawn up or administered; and (4) use of non-Luer lock connectors on all epidural/spinal/combined spinal-epidural devices. Further study is required to determine whether routine use of these processes will reduce drug error.

  13. Contribution to the Prediction of the Fold Code: Application to Immunoglobulin and Flavodoxin Cases

    PubMed Central

    Banach, Mateusz; Prudhomme, Nicolas; Carpentier, Mathilde; Duprat, Elodie; Papandreou, Nikolaos; Kalinowska, Barbara; Chomilier, Jacques; Roterman, Irena

    2015-01-01

    Background: Formation of the folding nucleus of globular proteins starts with the mutual interaction of a group of hydrophobic amino acids whose close contacts allow subsequent formation and stability of the 3D structure. These early steps can be predicted by simulation of the folding process through a Monte Carlo (MC) coarse grain model in a discrete space. We previously defined MIRs (Most Interacting Residues) as the set of residues presenting a large number of non-covalent neighbour interactions during such simulation. MIRs are good candidates to define the minimal number of residues giving rise to a given fold instead of another one, although their proportion is rather high, typically [15-20]% of the sequences. Having in mind experiments with two sequences of very high levels of sequence identity (up to 90%) but different folds, we combined the MIR method, which takes the sequence as its single input, with the "fuzzy oil drop" (FOD) model that requires a 3D structure, in order to estimate the residues coding for the fold. FOD assumes that a globular protein follows an idealised 3D Gaussian distribution of hydrophobicity density, with the maximum in the centre and minima at the surface of the "drop". If the actual local density of hydrophobicity around a given amino acid is as high as the ideal one, then this amino acid is assigned to the core of the globular protein, and it is assumed to follow the FOD model. Therefore, one obtains a distribution of the amino acids of a protein according to their agreement with, or rejection by, the FOD model. Results: We compared and combined the MIR and FOD methods to define the minimal nucleus, or keystone, of two populated folds: immunoglobulin-like (Ig) and flavodoxins (Flav). The combination of these two approaches defines some positions both predicted as MIRs and assigned as accordant with the FOD model. It is shown here that for these two folds, the intersection of the predicted sets of residues significantly differs from random selection. It reduces the number of residues selected by each individual method and allows a reasonable agreement with experimentally determined key residues coding for the particular fold. In addition, the intersection of the two methods significantly increases the specificity of the prediction, providing a robust set of residues that constitute the folding nucleus. PMID:25915049

  14. Evaluation of ship-based sediment flux measurements by ADCPs in tidal flows

    NASA Astrophysics Data System (ADS)

    Becker, Marius; Maushake, Christian; Grünler, Steffen; Winter, Christian

    2017-04-01

    In the past decades, acoustic backscatter calibration has developed into a frequently applied technique to measure fluxes of suspended sediments in rivers and estuaries. Data are mainly acquired using single-frequency profiling devices, such as ADCPs. In this case, variations of acoustic particle properties may have a significant impact on the calibration with respect to suspended sediment concentration, but associated effects are rarely considered. Further challenges regarding flux determination arise from incomplete vertical and lateral coverage of the cross-section, and the small ratio of the residual transport to the tidal transport, depending on the tidal prism. We analyzed four sets of 13h cross-sectional ADCP data, collected at different locations in the range of the turbidity zone of the Weser estuary, North Sea, Germany. Vertical LISST, OBS and CTD measurements were taken every hour. During the calibration, sediment absorption was taken into account. First, acoustic properties were estimated using LISST particle size distributions. Due to the tidal excursion and displacement of the turbidity zone, acoustic properties of particles changed during the tidal cycle, at all locations. Applying empirical functions, the lowest backscattering cross-section and highest sediment absorption coefficient were found in the center of the turbidity zone. Outside the tidally averaged location of the turbidity zone, changes of acoustic parameters were caused mainly by advection. In the turbidity zone, these properties were also affected by settling and entrainment, inducing vertical differences and systematic errors in concentration. In general, due to the iterative correction of sediment absorption along the acoustic path, local errors in concentration propagate and amplify exponentially. Based on reference concentrations obtained from water samples and OBS data, we quantified these errors and their effect on cross-sectionally averaged concentration and sediment flux. We found that errors are effectively decreased by applying calibration parameters interpolated in time, and by an optimization of the sediment absorption coefficient. We further discuss practical aspects of residual flux determination in tidal environments and of measuring strategies in relation to site-specific tidal dynamics.

  15. Investigation of Diesel’s Residual Noise on Predictive Vehicles Noise Cancelling using LMS Adaptive Algorithm

    NASA Astrophysics Data System (ADS)

    Arttini Dwi Prasetyowati, Sri; Susanto, Adhi; Widihastuti, Ida

    2017-04-01

    Every noise problem requires a different solution. In this research, the noise that must be cancelled comes from the roadway. Least Mean Square (LMS) adaptive filtering is one of the algorithms that can be used to cancel that noise. Residual noise always appears and cannot be erased completely. This research aims to characterize the residual noise of the vehicle noise prediction and to analyze it so that it no longer appears as a problem. The LMS algorithm was used to predict the vehicle’s noise and minimize the error. The distribution of the residual noise was examined to determine its characteristics. The statistics of the residual noise are close to a normal distribution with mean ≈ 0.0435 and standard deviation ≈ 1.13, and the autocorrelation of the residual noise approximates an impulse. In conclusion, the residual noise is insignificant.
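
    A generic LMS one-step-ahead predictor is sketched below for context (filter order, step size and the synthetic input are my choices, not the study's settings): the weights adapt so that the prediction error, i.e. the residual noise, is driven toward a small, roughly white signal whose statistics can then be inspected as in the abstract.

```python
import numpy as np

def lms_predict(x, order=8, mu=0.01):
    """Adaptive one-step-ahead LMS predictor; returns the residual (prediction error)."""
    w = np.zeros(order)
    residual = np.zeros(len(x))
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]     # most recent samples first
        y = w @ u                    # predicted noise sample
        e = x[n] - y                 # residual noise
        w += 2.0 * mu * e * u        # LMS weight update
        residual[n] = e
    return residual

rng = np.random.default_rng(0)
road_noise = np.convolve(rng.standard_normal(4000), np.ones(5) / 5, mode="same")
res = lms_predict(road_noise)
print(res.mean(), res.std())         # residual should be near zero-mean with a small spread
```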

  16. An Astronomical Test of CCD Photometric Precision

    NASA Technical Reports Server (NTRS)

    Koch, David; Dunham, Edward; Borucki, William; Jenkins, Jon; DeVingenzi, D. (Technical Monitor)

    1998-01-01

    This article considers a posteriori error estimation of specified functionals for first-order systems of conservation laws discretized using the discontinuous Galerkin (DG) finite element method. Using duality techniques, we derive exact error representation formulas for both linear and nonlinear functionals given an associated bilinear or nonlinear variational form. Weighted residual approximations of the exact error representation formula are then proposed and numerically evaluated for Ringleb flow, an exact solution of the 2-D Euler equations.

  17. Continuous glucose monitoring in newborn infants: how do errors in calibration measurements affect detected hypoglycemia?

    PubMed

    Thomas, Felicity; Signal, Mathew; Harris, Deborah L; Weston, Philip J; Harding, Jane E; Shaw, Geoffrey M; Chase, J Geoffrey

    2014-05-01

    Neonatal hypoglycemia is common and can cause serious brain injury. Continuous glucose monitoring (CGM) could improve hypoglycemia detection, while reducing blood glucose (BG) measurements. Calibration algorithms use BG measurements to convert sensor signals into CGM data. Thus, inaccuracies in calibration BG measurements directly affect CGM values and any metrics calculated from them. The aim was to quantify the effect of timing delays and calibration BG measurement errors on hypoglycemia metrics in newborn infants. Data from 155 babies were used. Two timing and 3 BG meter error models (Abbott Optium Xceed, Roche Accu-Chek Inform II, Nova Statstrip) were created using empirical data. Monte-Carlo methods were employed, and each simulation was run 1000 times. Each set of patient data in each simulation had randomly selected timing and/or measurement error added to BG measurements before CGM data were calibrated. The number of hypoglycemic events, duration of hypoglycemia, and hypoglycemic index were then calculated using the CGM data and compared to baseline values. Timing error alone had little effect on hypoglycemia metrics, but measurement error caused substantial variation. Abbott results underreported the number of hypoglycemic events by up to 8 and Roche overreported by up to 4 where the original number reported was 2. Nova results were closest to baseline. Similar trends were observed in the other hypoglycemia metrics. Errors in blood glucose concentration measurements used for calibration of CGM devices can have a clinically important impact on detection of hypoglycemia. If CGM devices are going to be used for assessing hypoglycemia, it is important to understand the impact of these errors on CGM data. © 2014 Diabetes Technology Society.

  18. Multistrip western blotting to increase quantitative data output.

    PubMed

    Kiyatkin, Anatoly; Aksamitiene, Edita

    2009-01-01

    The qualitative and quantitative measurements of protein abundance and modification states are essential in understanding their functions in diverse cellular processes. Typical western blotting, though sensitive, is prone to produce substantial errors and is not readily adapted to high-throughput technologies. Multistrip western blotting is a modified immunoblotting procedure based on simultaneous electrophoretic transfer of proteins from multiple strips of polyacrylamide gels to a single membrane sheet. In comparison with the conventional technique, Multistrip western blotting increases the data output per single blotting cycle up to tenfold, allows concurrent monitoring of up to nine different proteins from the same loading of the sample, and substantially improves the data accuracy by reducing immunoblotting-derived signal errors. This approach enables statistically reliable comparison of different or repeated sets of data, and therefore is beneficial to apply in biomedical diagnostics, systems biology, and cell signaling research.

  19. VizieR Online Data Catalog: R136 JKs photometry from VLT/SPHERE EAO (Khorrami+, 2017)

    NASA Astrophysics Data System (ADS)

    Khorrami, Z.; Vakili, F.; Lanz, T.; Langlois, M.; Lagadec, E.; Meyer, M. R.; Robbe-Dubois, S.; Abe, L.; Avenhaus, H.; Beuzit, J. L.; Gratton, R.; Mouillet, D.; Origne, A.; Petit, C.; Ramos, J.

    2017-03-01

    The SPHERE/IRDIS catalog of the common sources between J and Ks-band data on R136. The ID, Xpix and Ypix are the identification and pixel position in the IRDIS K and J image. σK and σJ are the total error (combination of PSF-fitting error, residual errors and the calibration error) in Ks and J images. CK and CJ are the correlation coefficients between the input PSF and the star, in Ks and J data. (1 data file).

  20. On the timing problem in optical PPM communications.

    NASA Technical Reports Server (NTRS)

    Gagliardi, R. M.

    1971-01-01

    Investigation of the effects of imperfect timing in a direct-detection (noncoherent) optical system using pulse-position-modulation bits. Special emphasis is placed on specification of timing accuracy, and an examination of system degradation when this accuracy is not attained. Bit error probabilities are shown as a function of timing errors, from which average error probabilities can be computed for specific synchronization methods. Of particular importance is the presence of a residual, or irreducible, error probability due entirely to the timing system, which cannot be overcome by the data channel.

  1. Emulating RRTMG Radiation with Deep Neural Networks for the Accelerated Model for Climate and Energy

    NASA Astrophysics Data System (ADS)

    Pal, A.; Norman, M. R.

    2017-12-01

    The RRTMG radiation scheme in the Accelerated Model for Climate and Energy Multi-scale Model Framework (ACME-MMF) is a bottleneck and consumes approximately 50% of the computational time. Simulating a case with the RRTMG radiation scheme in ACME-MMF at high throughput and high resolution therefore requires speeding up this calculation while retaining physical fidelity. In this study, RRTMG radiation is emulated with Deep Neural Networks (DNNs). The first step towards this goal is to run a case with ACME-MMF and generate input data sets for the DNNs. A principal component analysis of these input data sets is carried out. Artificial data sets are then created from the original data sets to cover a wider input space. These artificial data sets are run through a standalone RRTMG radiation scheme to generate outputs in a cost-effective manner. The resulting input-output pairs are used to train DNNs of multiple architectures (DNN 1). Another DNN (DNN 2) is trained on the same inputs to predict the emulation error. A reverse emulation is also trained to map the outputs back to the inputs. An error-controlled code built around the two DNNs (1 and 2) determines when/if the original parameterization needs to be used.
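
    The error-controlled emulation idea (emulator DNN 1 plus error-predicting DNN 2, falling back to the original parameterization when the predicted error is too large) can be sketched as follows. The scikit-learn MLPs, the rrtmg_reference() stand-in, the tolerance, and the synthetic two-variable inputs are all illustrative assumptions, not the actual RRTMG setup.

```python
# Illustrative sketch of the error-controlled emulation scheme described above,
# using scikit-learn MLPs on synthetic data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def rrtmg_reference(x):
    """Stand-in for the expensive radiation call (not the real RRTMG)."""
    return np.sin(x[:, 0]) + 0.5 * x[:, 1] ** 2

X_train = rng.uniform(-2, 2, size=(5000, 2))
y_train = rrtmg_reference(X_train)

# DNN(1): emulator of the parameterization.
emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
emulator.fit(X_train, y_train)

# DNN(2): predicts the emulator's absolute error from the same inputs.
abs_err = np.abs(emulator.predict(X_train) - y_train)
error_net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
error_net.fit(X_train, abs_err)

def radiation(x, tol=0.05):
    """Error-controlled call: use the emulator when its predicted error is
    below tol, otherwise fall back to the original parameterization."""
    predicted_error = error_net.predict(x)
    out = emulator.predict(x)
    fallback = predicted_error > tol
    if np.any(fallback):
        out[fallback] = rrtmg_reference(x[fallback])
    return out, fallback.mean()

X_test = rng.uniform(-2, 2, size=(1000, 2))
y_hat, fallback_fraction = radiation(X_test)
print(f"fallback fraction: {fallback_fraction:.2%}")
```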

  2. Influence of secondary neutrons induced by proton radiotherapy for cancer patients with implantable cardioverter defibrillators

    PubMed Central

    2012-01-01

    Background Although proton radiotherapy is a promising new approach for cancer patients, functional interference is a concern for patients with implantable cardioverter defibrillators (ICDs). The purpose of this study was to clarify the influence of secondary neutrons induced by proton radiotherapy on ICDs. Methods The experimental set-up simulated proton radiotherapy for a patient with an ICD. Four new ICDs were placed 0.3 cm laterally and 3 cm distally outside the radiation field in order to evaluate the influence of secondary neutrons. The cumulative in-field radiation dose was 107 Gy over 10 sessions of irradiation with a dose rate of 2 Gy/min and a field size of 10 × 10 cm2. After each radiation fraction, interference with the ICD by the therapy was analyzed by an ICD programmer. The dose distributions of secondary neutrons were estimated by Monte-Carlo simulation. Results The frequency of the power-on reset, the most serious soft error where the programmed pacing mode changes temporarily to a safety back-up mode, was 1 per approximately 50 Gy. The total number of soft errors logged in all devices was 29, which was a rate of 1 soft error per approximately 15 Gy. No permanent device malfunctions were detected. The calculated dose of secondary neutrons per 1 Gy proton dose in the phantom was approximately 1.3-8.9 mSv/Gy. Conclusions With the present experimental settings, the probability was approximately 1 power-on reset per 50 Gy, which was below the dose level (60-80 Gy) generally used in proton radiotherapy. Further quantitative analysis in various settings is needed to establish guidelines regarding proton radiotherapy for cancer patients with ICDs. PMID:22284700

  3. DockRank: Ranking docked conformations using partner-specific sequence homology-based protein interface prediction

    PubMed Central

    Xue, Li C.; Jordan, Rafael A.; EL-Manzalawy, Yasser; Dobbs, Drena; Honavar, Vasant

    2015-01-01

    Selecting near-native conformations from the immense number of conformations generated by docking programs remains a major challenge in molecular docking. We introduce DockRank, a novel approach to scoring docked conformations based on the degree to which the interface residues of the docked conformation match a set of predicted interface residues. DockRank uses interface residues predicted by partner-specific sequence homology-based protein–protein interface predictor (PS-HomPPI), which predicts the interface residues of a query protein with a specific interaction partner. We compared the performance of DockRank with several state-of-the-art docking scoring functions using Success Rate (the percentage of cases that have at least one near-native conformation among the top m conformations) and Hit Rate (the percentage of near-native conformations that are included among the top m conformations). In cases where it is possible to obtain partner-specific (PS) interface predictions from PS-HomPPI, DockRank consistently outperforms both (i) ZRank and IRAD, two state-of-the-art energy-based scoring functions (improving Success Rate by up to 4-fold); and (ii) Variants of DockRank that use predicted interface residues obtained from several protein interface predictors that do not take into account the binding partner in making interface predictions (improving success rate by up to 39-fold). The latter result underscores the importance of using partner-specific interface residues in scoring docked conformations. We show that DockRank, when used to re-rank the conformations returned by ClusPro, improves upon the original ClusPro rankings in terms of both Success Rate and Hit Rate. DockRank is available as a server at http://einstein.cs.iastate.edu/DockRank/. PMID:23873600

  4. DockRank: ranking docked conformations using partner-specific sequence homology-based protein interface prediction.

    PubMed

    Xue, Li C; Jordan, Rafael A; El-Manzalawy, Yasser; Dobbs, Drena; Honavar, Vasant

    2014-02-01

    Selecting near-native conformations from the immense number of conformations generated by docking programs remains a major challenge in molecular docking. We introduce DockRank, a novel approach to scoring docked conformations based on the degree to which the interface residues of the docked conformation match a set of predicted interface residues. DockRank uses interface residues predicted by partner-specific sequence homology-based protein-protein interface predictor (PS-HomPPI), which predicts the interface residues of a query protein with a specific interaction partner. We compared the performance of DockRank with several state-of-the-art docking scoring functions using Success Rate (the percentage of cases that have at least one near-native conformation among the top m conformations) and Hit Rate (the percentage of near-native conformations that are included among the top m conformations). In cases where it is possible to obtain partner-specific (PS) interface predictions from PS-HomPPI, DockRank consistently outperforms both (i) ZRank and IRAD, two state-of-the-art energy-based scoring functions (improving Success Rate by up to 4-fold); and (ii) Variants of DockRank that use predicted interface residues obtained from several protein interface predictors that do not take into account the binding partner in making interface predictions (improving success rate by up to 39-fold). The latter result underscores the importance of using partner-specific interface residues in scoring docked conformations. We show that DockRank, when used to re-rank the conformations returned by ClusPro, improves upon the original ClusPro rankings in terms of both Success Rate and Hit Rate. DockRank is available as a server at http://einstein.cs.iastate.edu/DockRank/. Copyright © 2013 Wiley Periodicals, Inc.
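
    The ranking idea shared by the two records above, scoring each docked conformation by how well its interface residues match the predicted interface and then evaluating rankings with Success Rate and Hit Rate, can be sketched as follows. The simple overlap score is an illustrative stand-in, not the exact DockRank scoring function.

```python
# Minimal sketch of interface-overlap scoring and the Success/Hit Rate metrics
# defined above. The overlap score is illustrative, not DockRank's exact score.
from typing import List, Set

def overlap_score(conformation_interface: Set[str], predicted_interface: Set[str]) -> float:
    """Fraction of the docked conformation's interface residues that were predicted."""
    if not conformation_interface:
        return 0.0
    return len(conformation_interface & predicted_interface) / len(conformation_interface)

def success_rate(ranked_near_native: List[List[bool]], m: int) -> float:
    """Percentage of cases with at least one near-native conformation in the top m."""
    hits = sum(any(flags[:m]) for flags in ranked_near_native)
    return 100.0 * hits / len(ranked_near_native)

def hit_rate(ranked_near_native: List[List[bool]], m: int) -> float:
    """Percentage of all near-native conformations that appear in the top m."""
    found = sum(sum(flags[:m]) for flags in ranked_near_native)
    total = sum(sum(flags) for flags in ranked_near_native)
    return 100.0 * found / total if total else 0.0

# Toy example: two docking cases with conformations already ranked by score;
# True marks a near-native conformation.
cases = [[False, True, False, True], [True, False, False, False]]
print(success_rate(cases, m=2), hit_rate(cases, m=2))
```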

  5. Coal gasification system with a modulated on/off control system

    DOEpatents

    Fasching, George E.

    1984-01-01

    A modulated control system is provided for improving regulation of the bed level in a fixed-bed coal gasifier into which coal is fed from a rotary coal feeder. A nuclear bed level gauge using a cobalt source and an ion chamber detector is used to detect the coal bed level in the gasifier. The detector signal is compared to a bed level set point signal in a primary controller which operates in proportional/integral modes to produce an error signal. The error signal is modulated by the injection of a triangular wave signal of a frequency of about 0.0004 Hz and an amplitude of about 80% of the primary deadband. The modulated error signal is fed to a triple-deadband secondary controller which jogs the coal feeder speed up or down by on/off control of a feeder speed change driver such that the gasifier bed level is driven toward the set point while preventing excessive cycling (oscillation) common in on/off mode automatic controllers of this type. Regulation of the bed level is achieved without excessive feeder speed control jogging.

  6. Estimating the intrinsic limit of the Feller-Peterson-Dixon composite approach when applied to adiabatic ionization potentials in atoms and small molecules

    NASA Astrophysics Data System (ADS)

    Feller, David

    2017-07-01

    Benchmark adiabatic ionization potentials were obtained with the Feller-Peterson-Dixon (FPD) theoretical method for a collection of 48 atoms and small molecules. In previous studies, the FPD method demonstrated an ability to predict atomization energies (heats of formation) and electron affinities well within a 95% confidence level of ±1 kcal/mol. Large 1-particle expansions involving correlation consistent basis sets (up to aug-cc-pV8Z in many cases and aug-cc-pV9Z for some atoms) were chosen for the valence CCSD(T) starting point calculations. Despite their cost, these large basis sets were chosen in order to help minimize the residual basis set truncation error and reduce dependence on approximate basis set limit extrapolation formulas. The complementary n-particle expansion included higher order CCSDT, CCSDTQ, or CCSDTQ5 (coupled cluster theory with iterative triple, quadruple, and quintuple excitations) corrections. For all of the chemical systems examined here, it was also possible to either perform explicit full configuration interaction (CI) calculations or to otherwise estimate the full CI limit. Additionally, corrections associated with core/valence correlation, scalar relativity, anharmonic zero point vibrational energies, non-adiabatic effects, and other minor factors were considered. The root mean square deviation with respect to experiment for the ionization potentials was 0.21 kcal/mol (0.009 eV). The corresponding level of agreement for molecular enthalpies of formation was 0.37 kcal/mol and for electron affinities 0.20 kcal/mol. Similar good agreement with experiment was found in the case of molecular structures and harmonic frequencies. Overall, the combination of energetic, structural, and vibrational data (655 comparisons) reflects the consistent ability of the FPD method to achieve close agreement with experiment for small molecules using the level of theory applied in this study.

  7. Iterative random vs. Kennard-Stone sampling for IR spectrum-based classification task using PLS2-DA

    NASA Astrophysics Data System (ADS)

    Lee, Loong Chuen; Liong, Choong-Yeun; Jemain, Abdul Aziz

    2018-04-01

    External testing (ET) is preferred over auto-prediction (AP) or k-fold cross-validation for estimating the more realistic predictive ability of a statistical model. With IR spectra, the Kennard-Stone (KS) sampling algorithm is often used to split the data into training and test sets, i.e. respectively for model construction and for model testing. On the other hand, iterative random sampling (IRS) has not been the favored choice, though it is theoretically more likely to produce reliable estimation. The aim of this preliminary work is to compare the performance of KS and IRS in sampling a representative training set from an attenuated total reflectance - Fourier transform infrared spectral dataset (of four varieties of blue gel pen inks) for PLS2-DA modeling. The 'best' performance achievable from the dataset is estimated with AP on the full dataset (APF,error). Both IRS (n = 200) and KS were used to split the dataset in the ratio of 7:3. The classic decision rule (i.e. maximum value-based) is employed for new sample prediction via partial least squares - discriminant analysis (PLS2-DA). The error rate of each model was estimated repeatedly via: (a) AP on the full data (APF,error); (b) AP on the training set (APS,error); and (c) ET on the respective test set (ETS,error). A good PLS2-DA model is expected to produce APS,error and ETS,error values similar to APF,error. Bearing that in mind, the similarities between (a) APS,error vs. APF,error; (b) ETS,error vs. APF,error; and (c) APS,error vs. ETS,error were evaluated using correlation tests (i.e. Pearson and Spearman's rank tests) on series of PLS2-DA models computed from the KS-set and IRS-set, respectively. Overall, models constructed from the IRS-set exhibit more similarity between the internal and external error rates than those from the respective KS-set, i.e. less risk of overfitting. In conclusion, IRS is more reliable than KS in sampling a representative training set.
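
    For reference, the classical Kennard-Stone selection used above to build a training set can be sketched as below. This is the textbook maximin algorithm on Euclidean distances, not the authors' exact implementation or preprocessing.

```python
# Sketch of the classical Kennard-Stone selection of a training set,
# based on maximin Euclidean distances between spectra.
import numpy as np

def kennard_stone(X: np.ndarray, n_train: int) -> np.ndarray:
    """Return indices of n_train samples chosen by the Kennard-Stone algorithm."""
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # Start with the two most distant samples.
    i, j = np.unravel_index(np.argmax(dist), dist.shape)
    selected = [int(i), int(j)]
    remaining = [k for k in range(len(X)) if k not in selected]
    while len(selected) < n_train:
        # For each candidate, distance to its nearest already-selected sample...
        d_to_selected = dist[np.ix_(remaining, selected)].min(axis=1)
        # ...and pick the candidate that maximizes that distance.
        next_idx = remaining[int(np.argmax(d_to_selected))]
        selected.append(next_idx)
        remaining.remove(next_idx)
    return np.array(selected)

# Toy usage: 7:3 split of 100 synthetic "spectra" with 50 variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))
train_idx = kennard_stone(X, n_train=70)
test_idx = np.setdiff1d(np.arange(len(X)), train_idx)
print(len(train_idx), len(test_idx))
```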

  8. Phytoremediation of palm oil mill secondary effluent (POMSE) by Chrysopogon zizanioides (L.) using artificial neural networks.

    PubMed

    Darajeh, Negisa; Idris, Azni; Fard Masoumi, Hamid Reza; Nourani, Abolfazl; Truong, Paul; Rezania, Shahabaldin

    2017-05-04

    Artificial neural networks (ANNs) have been widely used to solve such problems because of their reliable, robust, and salient characteristics in capturing the nonlinear relationships between variables in complex systems. In this study, an ANN was applied for modeling of Chemical Oxygen Demand (COD) and biodegradable organic matter (BOD) removal from palm oil mill secondary effluent (POMSE) by a vetiver system. The independent variables, namely POMSE concentration, vetiver slips density, and removal time, were considered as input parameters to optimize the network, while the removal percentages of COD and BOD were selected as outputs. To determine the number of hidden layer nodes, the root mean squared error of the testing set was minimized, and the topologies of the algorithms were compared by coefficient of determination and absolute average deviation. The comparison indicated that the quick propagation (QP) algorithm had the minimum root mean squared error and absolute average deviation, and the maximum coefficient of determination. The importance values of the variables were 42.41% for vetiver slips density, 29.8% for removal time, and 27.79% for POMSE concentration, showing that none of them is negligible. The results show that the ANN has great potential for predicting COD and BOD removal from POMSE, with a residual standard error (RSE) of less than 0.45%.

  9. Asteroid orbit fitting with radar and angular observations

    NASA Astrophysics Data System (ADS)

    Baturin, A. P.

    2013-12-01

    The asteroid orbit fitting problem using their radar and angular observations has been considered. The problem was solved in a standard way by means of minimization of the weighted sum of squares of residuals. In the orbit fitting both kinds of radar observations have been used: observations of time delays and of Doppler frequency shifts. The weight for angular observations has been set the same for all of them and has been determined as the inverse mean-square residual obtained in the orbit fitting using just angular observations. The weights of radar observations have been set as the inverse squared errors of these observations, published together with them in the Minor Planet Center electronic circulars (MPECs). For the orbit fitting, five asteroids have been taken from these circulars. The asteroids have been chosen fulfilling the requirement that more than six radar observations of them be available. The asteroids are 1950 DA, 1999 RQ36, 2002 NY40, 2004 DC and 2005 EU2. Several orbit fittings for these asteroids have been done: with just angular observations; with just radar observations; and with both angular and radar observations. The obtained results are quite acceptable because in the last case the mean-square angular residuals are approximately equal to those obtained in the fitting with just angular observations. As to the radar observation mean-square residuals, the time delay residuals for three asteroids do not exceed 1 μs, for the two others ˜ 10 μs, and the Doppler shift residuals for three asteroids do not exceed 1 Hz, for the two others ˜ 10 Hz. The motion equations included perturbations from 9 planets and the Moon using their ephemerides DE422. The numerical integration has been performed with the Everhart 27-order method with variable step. All calculations have been executed to a 34-digit decimal precision (i.e. using 128-bit floating-point numbers). Further, the sizes of confidence ellipsoids of the improved orbit parameters have been compared. It has been accepted that an indicator of ellipsoid size is the geometric mean of its six semi-axes. A comparison of sizes has shown that confidence ellipsoids obtained in orbit fitting with both angular and radar observations are several times smaller than ellipsoids obtained with just angular observations.
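
    The weighting scheme described above (a single weight for all angular residuals, taken as the inverse mean-square residual of an angular-only fit, and per-observation weights for radar residuals from their published errors) can be written generically as a weighted least-squares problem. The sketch below uses a toy linear residual_model() placeholder instead of a real orbit propagator.

```python
# Generic sketch of the weighted least-squares objective described above.
# residual_model() is a placeholder; a real implementation would propagate the
# orbit defined by params and return observed-minus-computed values.
import numpy as np
from scipy.optimize import least_squares

def residual_model(params, observations):
    """Toy linear stand-in for observed-minus-computed residuals."""
    return observations["obs"] - observations["design"] @ params

def weighted_residuals(params, ang, radar, sigma_ang, radar_sigmas):
    r_ang = residual_model(params, ang) / sigma_ang          # common angular weight
    r_radar = residual_model(params, radar) / radar_sigmas   # per-observation radar weights
    return np.concatenate([r_ang, r_radar])

rng = np.random.default_rng(0)
true_params = np.array([1.0, -0.5, 0.3])
ang = {"design": rng.normal(size=(40, 3))}
radar = {"design": rng.normal(size=(8, 3))}
radar_sigmas = rng.uniform(0.01, 0.1, 8)                     # "published" radar errors
ang["obs"] = ang["design"] @ true_params + rng.normal(0, 0.05, 40)
radar["obs"] = radar["design"] @ true_params + rng.normal(0, radar_sigmas)

# Step 1: angular-only fit to estimate the common angular weight.
fit_ang = least_squares(lambda p: residual_model(p, ang), x0=np.zeros(3))
sigma_ang = np.sqrt(np.mean(fit_ang.fun ** 2))

# Step 2: combined fit with both observation types.
fit = least_squares(weighted_residuals, x0=fit_ang.x,
                    args=(ang, radar, sigma_ang, radar_sigmas))
print(fit.x)
```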

  10. Error Model and Compensation of Bell-Shaped Vibratory Gyro

    PubMed Central

    Su, Zhong; Liu, Ning; Li, Qing

    2015-01-01

    A bell-shaped vibratory angular velocity gyro (BVG), inspired by the Chinese traditional bell, is a type of axisymmetric shell resonator gyroscope. This paper focuses on development of an error model and compensation of the BVG. A dynamic equation is firstly established, based on a study of the BVG working mechanism. This equation is then used to evaluate the relationship between the angular rate output signal and bell-shaped resonator character, analyze the influence of the main error sources and set up an error model for the BVG. The error sources are classified from the error propagation characteristics, and the compensation method is presented based on the error model. Finally, using the error model and compensation method, the BVG is calibrated experimentally including rough compensation, temperature and bias compensation, scale factor compensation and noise filter. The experimentally obtained bias instability is from 20.5°/h to 4.7°/h, the random walk is from 2.8°/h^(1/2) to 0.7°/h^(1/2) and the nonlinearity is from 0.2% to 0.03%. Based on the error compensation, it is shown that there is a good linear relationship between the sensing signal and the angular velocity, suggesting that the BVG is a good candidate for the field of low and medium rotational speed measurement. PMID:26393593

  11. Removal of batch effects using distribution-matching residual networks.

    PubMed

    Shaham, Uri; Stanton, Kelly P; Zhao, Jun; Li, Huamin; Raddassi, Khadir; Montgomery, Ruth; Kluger, Yuval

    2017-08-15

    Sources of variability in experimentally derived data include measurement error in addition to the physical phenomena of interest. This measurement error is a combination of systematic components originating from the measuring instrument and random measurement errors. Several novel biological technologies, such as mass cytometry and single-cell RNA-seq (scRNA-seq), are plagued with systematic errors that may severely affect statistical analysis if the data are not properly calibrated. We propose a novel deep learning approach for removing systematic batch effects. Our method is based on a residual neural network, trained to minimize the Maximum Mean Discrepancy between the multivariate distributions of two replicates, measured in different batches. We apply our method to mass cytometry and scRNA-seq datasets, and demonstrate that it effectively attenuates batch effects. Our code and data are publicly available at https://github.com/ushaham/BatchEffectRemoval.git. Contact: yuval.kluger@yale.edu. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
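
    The training objective mentioned above, the Maximum Mean Discrepancy between the distributions of two batches, can be computed as in the following sketch. The Gaussian kernel, the fixed bandwidth, and the NumPy-only formulation are assumptions for illustration; in the paper this quantity is minimized as the loss of a residual network rather than computed once.

```python
# Sketch of the squared Maximum Mean Discrepancy (MMD) with a Gaussian kernel,
# the quantity the residual network is trained to minimize between batches.
import numpy as np

def gaussian_kernel(a: np.ndarray, b: np.ndarray, sigma: float) -> np.ndarray:
    sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def mmd2(x: np.ndarray, y: np.ndarray, sigma: float = 1.0) -> float:
    """Biased estimator of the squared MMD between samples x and y."""
    return (gaussian_kernel(x, x, sigma).mean()
            - 2.0 * gaussian_kernel(x, y, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean())

# Toy example: batch 2 carries a systematic shift relative to batch 1.
rng = np.random.default_rng(0)
batch1 = rng.normal(0.0, 1.0, size=(500, 10))
batch2 = rng.normal(0.4, 1.0, size=(500, 10))
print(f"MMD^2 before correction: {mmd2(batch1, batch2):.4f}")
print(f"MMD^2 after removing mean shift: "
      f"{mmd2(batch1, batch2 - batch2.mean(0) + batch1.mean(0)):.4f}")
```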

  12. Landing Technique and Performance in Youth Athletes After a Single Injury-Prevention Program Session

    PubMed Central

    Root, Hayley; Trojian, Thomas; Martinez, Jessica; Kraemer, William; DiStefano, Lindsay J.

    2015-01-01

    Context Injury-prevention programs (IPPs) performed as season-long warm-ups improve injury rates, performance outcomes, and jump-landing technique. However, concerns regarding program adoption exist. Identifying the acute benefits of using an IPP compared with other warm-ups may encourage IPP adoption. Objective To examine the immediate effects of 3 warm-up protocols (IPP, static warm-up [SWU], or dynamic warm-up [DWU]) on jump-landing technique and performance measures in youth athletes. Design Randomized controlled clinical trial. Setting Gymnasiums. Patients or Other Participants Sixty male and 29 female athletes (age = 13 ± 2 years, height = 162.8 ± 12.6 cm, mass = 37.1 ± 13.5 kg) volunteered to participate in a single session. Intervention(s) Participants were stratified by age, sex, and sport and then were randomized into 1 protocol: IPP, SWU, or DWU. The IPP consisted of dynamic flexibility, strengthening, plyometric, and balance exercises and emphasized proper technique. The SWU consisted of jogging and lower extremity static stretching. The DWU consisted of dynamic lower extremity flexibility exercises. Participants were assessed for landing technique and performance measures immediately before (PRE) and after (POST) completing their warm-ups. Main Outcome Measure(s) One rater graded each jump-landing trial using the Landing Error Scoring System. Participants performed a vertical jump, long jump, shuttle run, and jump-landing task in randomized order. The averages of all jump-landing trials and performance variables were used to calculate 1 composite score for each variable at PRE and POST. Change scores were calculated (POST − PRE) for all measures. Separate 1-way (group) analyses of variance were conducted for each dependent variable (α < .05). Results No differences were observed among groups for any performance measures (P > .05). The Landing Error Scoring System scores improved after the IPP (change = −0.40 ± 1.24 errors) compared with the DWU (0.27 ± 1.09 errors) and SWU (0.43 ± 1.35 errors; P = .04). Conclusions An IPP did not impair sport performance and may have reduced injury risk, which supports the use of these programs before sport activity. PMID:26523663

  13. SU-E-J-172: Bio-Physical Effects of Patients Set-Up Errors According to Whole Breast Irradiation Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, S; Suh, T; Park, S

    2015-06-15

    Purpose: The dose-related effects of patient setup errors on biophysical indices were evaluated for conventional wedge (CW) and field-in-field (FIF) whole breast irradiation techniques. Methods: The treatment plans for 10 patients receiving whole left breast irradiation were retrospectively selected. Radiobiological and physical effects caused by dose variations were evaluated by shifting the isocenters and gantry angles of the treatment plans. Dose-volume histograms of the planning target volume (PTV), heart, and lungs were generated, and conformity index (CI), homogeneity index (HI), tumor control probability (TCP), and normal tissue complication probability (NTCP) were determined. Results: For the “isocenter shift plan” with posterior direction, the D95 of the PTV decreased by approximately 15% and the TCP of the PTV decreased by approximately 50% for the FIF technique and by 40% for the CW; however, the NTCPs of the lungs and heart increased by about 13% and 1%, respectively, for both techniques. Increasing the gantry angle decreased the TCPs of the PTV by 24.4% (CW) and by 34% (FIF). The NTCPs for the two techniques differed by only 3%. In the case of CW, the CIs and HIs were much higher than those of the FIF in all cases, a significant difference between the two techniques (p<0.01). According to our results, however, the FIF technique responded more sensitively to setup errors than CW in biophysical terms. Conclusions: Radiobiologically based analysis can detect significant dosimetric errors and can thus provide a practical patient quality assurance method that accounts for both radiobiological and physical effects.

  14. Modulating RNA Alignment Using Directional Dynamic Kinks: Application in Determining an Atomic-Resolution Ensemble for a Hairpin using NMR Residual Dipolar Couplings.

    PubMed

    Salmon, Loïc; Giambaşu, George M; Nikolova, Evgenia N; Petzold, Katja; Bhattacharya, Akash; Case, David A; Al-Hashimi, Hashim M

    2015-10-14

    Approaches that combine experimental data and computational molecular dynamics (MD) to determine atomic resolution ensembles of biomolecules require the measurement of abundant experimental data. NMR residual dipolar couplings (RDCs) carry rich dynamics information; however, difficulties in modulating overall alignment of nucleic acids have limited the ability to fully extract this information. We present a strategy for modulating RNA alignment that is based on introducing variable dynamic kinks in terminal helices. With this strategy, we measured seven sets of RDCs in a cUUCGg apical loop and used this rich data set to test the accuracy of a 0.8 μs MD simulation computed using the Amber ff10 force field as well as to determine an atomic resolution ensemble. The MD-generated ensemble quantitatively reproduces the measured RDCs, but selection of a sub-ensemble was required to satisfy the RDCs within error. The largest discrepancies between the RDC-selected and MD-generated ensembles are observed for the most flexible loop residues and backbone angles connecting the loop to the helix, with the RDC-selected ensemble resulting in more uniform dynamics. Comparison of the RDC-selected ensemble with NMR spin relaxation data suggests that the dynamics occurs on the ps-ns time scales as verified by measurements of R(1ρ) relaxation-dispersion data. The RDC-satisfying ensemble samples many conformations adopted by the hairpin in crystal structures indicating that intrinsic plasticity may play important roles in conformational adaptation. The approach presented here can be applied to test nucleic acid force fields and to characterize dynamics in diverse RNA motifs at atomic resolution.

  15. Effects of residual hearing on cochlear implant outcomes in children: A systematic-review.

    PubMed

    Chiossi, Julia Santos Costa; Hyppolito, Miguel Angelo

    2017-09-01

    The aim was to investigate whether preoperative residual hearing in prelingually deafened children can affect cochlear implant indication and outcomes. A systematic review was conducted in five international databases up to November 2016 to locate articles that evaluated cochlear implantation in children with some degree of preoperative residual hearing. Outcomes were auditory, language and cognition performance after cochlear implantation. The quality of the studies was assessed and classified according to the Oxford Levels of Evidence table - 2011. Risks of bias were also described. From the 30 articles reviewed, two types of questions were identified: (a) what are the benefits of cochlear implantation in children with residual hearing? (b) is preoperative residual hearing a predictor of cochlear implant outcome? Studies ranged from 4 to 188 subjects, evaluating populations between 1.8 and 10.3 years old. The definition of residual hearing varied between studies. The majority of articles (n = 22) evaluated speech perception as the outcome, and 14 also assessed language and speech production. There is evidence that cochlear implantation is beneficial to children with residual hearing. Preoperative residual hearing seems to be valuable for predicting speech perception outcomes after cochlear implantation, even though the mechanism is not clear. More extensive research must be conducted in order to make recommendations and to set a prognosis for cochlear implants based on children's preoperative residual hearing. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Residual gas analysis for long-pulse, advanced tokamak operation.

    PubMed

    Klepper, C C; Hillis, D L; Bucalossi, J; Douai, D; Oddon, P; Vartanian, S; Colas, L; Manenc, L; Pégourié, B

    2010-10-01

    A shielded residual gas analyzer (RGA) system on Tore Supra can function during plasma operation and is set up to monitor the composition of the neutral gas in one of the pumping ducts of the toroidal pumped limiter. This "diagnostic RGA" has been used in long-pulse (up to 6 min) discharges for continuous monitoring of up to 15 masses simultaneously. Comparison of the RGA-measured evolution of the H(2)/D(2) isotopic ratio in the exhaust gas to that measured by an energetic neutral particle analyzer in the plasma core provides a way to monitor the evolution of particle balance. RGA monitoring of corrective H(2) injection to maintain proper minority heating is providing a database for improved ion cyclotron resonance heating, potentially with RGA-based feedback control. In very long pulses (>4 min), the absence of significant changes in the RGA-monitored hydrocarbon particle pressures is an indication of proper operation of the actively cooled, carbon-based plasma facing components. Also, H(2) could increase due to thermodesorption of overheated plasma facing components.

  17. X-ray diffraction analysis of residual stresses in textured ZnO thin films

    NASA Astrophysics Data System (ADS)

    Dobročka, E.; Novák, P.; Búc, D.; Harmatha, L.; Murín, J.

    2017-02-01

    Residual stresses are commonly generated in thin films during the deposition process and can influence the film properties. Among a number of techniques developed for stress analysis, X-ray diffraction methods, especially the grazing incidence set-up, are of special importance due to their capability to analyze the stresses in very thin layers as well as to investigate the depth variation of the stresses. In this contribution a method combining multiple {hkl} and multiple χ modes of X-ray diffraction stress analysis in a grazing incidence set-up is used for the measurement of residual stress in strongly textured ZnO thin films. The method improves the precision of the stress evaluation in textured samples. Because the measurements are performed at very low incidence angles, the effect of refraction of X-rays on the measured stress is analyzed in detail for the general case of non-coplanar geometry. It is shown that this effect cannot be neglected if the angle of incidence approaches the critical angle. The X-ray stress factors are calculated for hexagonal fiber-textured ZnO for the Reuss model of grain interaction, and the effect of texture on the stress factors is analyzed. The texture in the layer is modelled by a Gaussian distribution function. Numerical results indicate that in the process of stress evaluation the Reuss model can be replaced by the much simpler crystallite group method if the standard deviation of the Gaussian describing the texture is less than 6°. The results can be adapted for fiber-textured films of various hexagonal materials.

  18. Estimation of maximum tolerated dose for long-term bioassays from acute lethal dose and structure by QSAR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gombar, V.K.; Enslein, K.; Hart, J.B.

    1991-09-01

    A quantitative structure-activity relationship (QSAR) model has been developed to estimate maximum tolerated doses (MTD) from structural features of chemicals and the corresponding oral acute lethal doses (LD50) as determined in male rats. The model is based on a set of 269 diverse chemicals which have been tested under the National Cancer Institute/National Toxicology Program (NCI/NTP) protocols. The rat oral LD50 value was the strongest predictor. Additionally, 22 structural descriptors comprising nine substructural MOLSTAC(c) keys, three molecular connectivity indices, and sigma charges on 10 molecular fragments were identified as endpoint predictors. The model explains 76% of the variance and is significant (F = 35.7) at p less than 0.0001 with a standard error of the estimate of 0.40 in the log (1/mol) units used in Hansch-type equations. Cross-validation showed that the difference between the average deleted residual square (0.179) and the model residual square (0.160) was not significant (t = 0.98).

  19. Improved model for correcting the ionospheric impact on bending angle in radio occultation measurements

    NASA Astrophysics Data System (ADS)

    Angling, Matthew J.; Elvidge, Sean; Healy, Sean B.

    2018-04-01

    The standard approach to remove the effects of the ionosphere from neutral atmosphere GPS radio occultation measurements is to estimate a corrected bending angle from a combination of the L1 and L2 bending angles. This approach is known to result in systematic errors and an extension has been proposed to the standard ionospheric correction that is dependent on the squared L1 / L2 bending angle difference and a scaling term (κ). The variation of κ with height, time, season, location and solar activity (i.e. the F10.7 flux) has been investigated by applying a 1-D bending angle operator to electron density profiles provided by a monthly median ionospheric climatology model. As expected, the residual bending angle is well correlated (negatively) with the vertical total electron content (TEC). κ is more strongly dependent on the solar zenith angle, indicating that the TEC-dependent component of the residual error is effectively modelled by the squared L1 / L2 bending angle difference term in the correction. The residual error from the ionospheric correction is likely to be a major contributor to the overall error budget of neutral atmosphere retrievals between 40 and 80 km. Over this height range κ is approximately linear with height. A simple κ model has also been developed. It is independent of ionospheric measurements, but incorporates geophysical dependencies (i.e. solar zenith angle, solar flux, altitude). The global mean error (i.e. bias) and the standard deviation of the residual errors are reduced from -1.3×10⁻⁸ rad and 2.2×10⁻⁸ rad for the uncorrected case to -2.2×10⁻¹⁰ rad and 2.0×10⁻⁹ rad, respectively, for the corrections using the κ model. Although a fixed scalar κ also reduces bias for the global average, the selected value of κ (14 rad⁻¹) is only appropriate for a small band of locations around the solar terminator. In the daytime, the scalar κ is consistently too high and this results in an overcorrection of the bending angles and a positive bending angle bias. Similarly, in the nighttime, the scalar κ is too low. However, in this case, the bending angles are already small and the impact of the choice of κ is less pronounced.
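
    The correction discussed above is commonly written as the standard dual-frequency linear combination minus a κ-scaled squared L1/L2 bending-angle difference; the sketch below shows that form with illustrative numbers. The GPS frequencies are standard values, but the example bending angles and κ are assumptions, not values from the paper.

```python
# Sketch of the extended ionospheric bending-angle correction discussed above.
F_L1 = 1.57542e9  # GPS L1 frequency, Hz
F_L2 = 1.22760e9  # GPS L2 frequency, Hz

def corrected_bending_angle(alpha_l1: float, alpha_l2: float, kappa: float) -> float:
    """Ionosphere-corrected bending angle (radians).

    alpha_lc is the classic dual-frequency linear combination; the kappa term
    models the residual (higher-order) ionospheric error using the squared
    L1/L2 bending-angle difference.
    """
    f1sq, f2sq = F_L1 ** 2, F_L2 ** 2
    alpha_lc = (f1sq * alpha_l1 - f2sq * alpha_l2) / (f1sq - f2sq)
    return alpha_lc - kappa * (alpha_l1 - alpha_l2) ** 2

# Example with illustrative numbers (not from the paper):
print(corrected_bending_angle(alpha_l1=1.00e-3, alpha_l2=1.02e-3, kappa=14.0))
```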

  20. Estimates of Single Sensor Error Statistics for the MODIS Matchup Database Using Machine Learning

    NASA Astrophysics Data System (ADS)

    Kumar, C.; Podesta, G. P.; Minnett, P. J.; Kilpatrick, K. A.

    2017-12-01

    Sea surface temperature (SST) is a fundamental quantity for understanding weather and climate dynamics. Although sensors aboard satellites provide global and repeated SST coverage, a characterization of SST precision and bias is necessary for determining the suitability of SST retrievals in various applications. Guidance on how to derive meaningful error estimates is still being developed. Previous methods estimated retrieval uncertainty based on geophysical factors, e.g. season or "wet" and "dry" atmospheres, but the discrete nature of these bins led to spatial discontinuities in SST maps. Recently, a new approach clustered retrievals based on the terms (excluding offset) in the statistical algorithm used to estimate SST. This approach resulted in over 600 clusters - too many to understand the geophysical conditions that influence retrieval error. Using MODIS and buoy SST matchups (2002 - 2016), we use machine learning algorithms (recursive and conditional trees, random forests) to gain insight into geophysical conditions leading to the different signs and magnitudes of MODIS SST residuals (satellite SSTs minus buoy SSTs). MODIS retrievals were first split into three categories: < -0.4 C, -0.4 C ≤ residual ≤ 0.4 C, and > 0.4 C. These categories are heavily unbalanced, with residuals > 0.4 C being much less frequent. Performance of classification algorithms is affected by imbalance; thus, we tested various rebalancing algorithms (oversampling, undersampling, combinations of the two). We consider multiple features for the decision tree algorithms: regressors from the MODIS SST algorithm, proxies for temperature deficit, and spatial homogeneity of brightness temperatures (BTs), e.g., the range of 11 μm BTs inside a 25 km² area centered on the buoy location. These features and a rebalancing of classes led to an 81.9% accuracy when classifying SST retrievals into the < -0.4 C and -0.4 C ≤ residual ≤ 0.4 C categories. Spatial homogeneity in BTs consistently appears as a very important variable for classification, suggesting that unidentified cloud contamination still is one of the causes leading to negative SST residuals. Precision and accuracy of error estimates from our decision tree classifier are enhanced using this knowledge.
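
    The classify-with-rebalancing step described above can be sketched with a random forest and simple random oversampling of the minority class. The synthetic features (a brightness-temperature range as a spatial-homogeneity proxy and a water-vapor-like variable) are stand-ins for the real matchup variables, not the authors' feature set.

```python
# Illustrative sketch of classification with class rebalancing on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 5000
bt_range = rng.gamma(2.0, 0.3, n)            # spatial-homogeneity proxy (K)
water_vapor = rng.uniform(0, 6, n)           # temperature-deficit proxy
X = np.column_stack([bt_range, water_vapor])
# Toy label: larger BT range -> more likely a cold (< -0.4 C) residual.
y = (bt_range + rng.normal(0, 0.4, n) > 1.2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)

# Simple random oversampling of the minority class in the training set.
minority_label = 1 if (y_tr == 1).sum() < (y_tr == 0).sum() else 0
minority_idx = np.flatnonzero(y_tr == minority_label)
n_needed = int((y_tr != minority_label).sum() - len(minority_idx))
extra = rng.choice(minority_idx, size=n_needed, replace=True)
X_bal = np.vstack([X_tr, X_tr[extra]])
y_bal = np.concatenate([y_tr, y_tr[extra]])

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_bal, y_bal)
print(f"accuracy: {accuracy_score(y_te, clf.predict(X_te)):.3f}")
```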

  1. Feedback Augmented Sub-Ranging (FASR) Quantizer

    NASA Technical Reports Server (NTRS)

    Guilligan, Gerard

    2012-01-01

    This innovation is intended to reduce the size, power, and complexity of pipeline analog-to-digital converters (ADCs) that require high resolution and speed along with low power. Digitizers are important components in any application where analog signals (such as light, sound, temperature, etc.) need to be digitally processed. The innovation implements amplification of a sampled residual voltage in a switched capacitor amplifier stage that does not depend on charge redistribution. The result is less sensitive to capacitor mismatches that cause gain errors, which are the main limitation of such amplifiers in pipeline ADCs. The residual errors due to mismatch are reduced by at least a factor of 16, which is equivalent to at least 4 bits of improvement. The settling time is also faster because of a higher feedback factor. In traditional switched capacitor residue amplifiers, closed-loop amplification of a sampled and held residue signal is achieved by redistributing sampled charge onto a feedback capacitor around a high-gain transconductance amplifier. The residual charge that was sampled during the acquisition or sampling phase is stored on two or more capacitors, often equal in value or integral multiples of each other. During the hold or amplification phase, all of the charge is redistributed onto one capacitor in the feedback loop of the amplifier to produce an amplified voltage. The key error source is the non-ideal ratios of feedback and input capacitors caused by manufacturing tolerances, called mismatches. The mismatches cause non-ideal closed-loop gain, leading to higher differential non-linearity. Traditional solutions to the mismatch errors are to use larger capacitor values (than dictated by thermal noise requirements) and/or complex calibration schemes, both of which increase the die size and power dissipation. The key features of this innovation are (1) the elimination of the need for charge redistribution to achieve an accurate closed-loop gain of two, (2) a higher feedback factor in the amplifier stage giving a higher closed-loop bandwidth compared to the prior art, and (3) reduced requirement for calibration. The accuracy of the new amplifier is mainly limited by the sampling network's parasitic capacitances, which should be minimized in relation to the sampling capacitors.

  2. Modelling subject-specific childhood growth using linear mixed-effect models with cubic regression splines.

    PubMed

    Grajeda, Laura M; Ivanescu, Andrada; Saito, Mayuko; Crainiceanu, Ciprian; Jaganath, Devan; Gilman, Robert H; Crabtree, Jean E; Kelleher, Dermott; Cabrera, Lilia; Cama, Vitaliano; Checkley, William

    2016-01-01

    Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. We provide a stepwise approach that builds from simple to complex models, and account for the intrinsic complexity of the data. We start with standard cubic splines regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compared cubic regression splines with linear piecewise splines, and with varying numbers and positions of knots. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 (p < 0.001) when using a linear mixed-effect model with random slopes and a first order continuous autoregressive error term. There was substantial heterogeneity in both the intercept (p < 0.001) and slopes (p < 0.001) of the individual growth trajectories. We also identified important serial correlation within the structure of the data (ρ = 0.66; 95 % CI 0.64 to 0.68; p < 0.001), which we modeled with a first order continuous autoregressive error term as evidenced by the variogram of the residuals and by a lack of association among residuals. The final model provides a parametric linear regression equation for both estimation and prediction of population- and individual-level growth in height. We show that cubic regression splines are superior to linear regression splines for the case of a small number of knots in both estimation and prediction with the full linear mixed effect model (AIC 19,352 vs. 19,598, respectively). While the regression parameters are more complex to interpret in the former, we argue that inference for any problem depends more on the estimated curve or differences in curves rather than the coefficients. Moreover, use of cubic regression splines provides biologically meaningful growth velocity and acceleration curves despite increased complexity in coefficient interpretation. Through this stepwise approach, we provide a set of tools to model longitudinal childhood data for non-statisticians using linear mixed-effect models.
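
    A minimal version of the model-building step described above, a linear mixed-effect model with a cubic B-spline basis in age and child-specific random intercepts and slopes, can be sketched with statsmodels. Column names, knot count, and the synthetic data are assumptions, and the continuous first-order autoregressive residual term used in the paper is not included here (MixedLM does not fit it directly).

```python
# Sketch of a linear mixed-effect growth model with a cubic B-spline basis for age.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
children, visits = 50, 20
age = np.tile(np.linspace(0.05, 4.0, visits), children)          # years
child_id = np.repeat(np.arange(children), visits)
child_int = rng.normal(0, 2.0, children)[child_id]                # per-child intercept shift
child_slope = rng.normal(0, 0.8, children)[child_id]              # per-child slope shift
height = (50 + 18 * np.log1p(2 * age) + child_int
          + child_slope * age + rng.normal(0, 0.7, age.size))
df = pd.DataFrame({"height": height, "age": age, "child_id": child_id})

# Fixed effects: cubic B-splines in age; random effects: per-child intercept and slope.
model = smf.mixedlm("height ~ bs(age, df=5, degree=3)", data=df,
                    groups=df["child_id"], re_formula="~age")
result = model.fit()
print(result.summary())
```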

  3. Linking Existing Instruments to Develop an Activity of Daily Living Item Bank.

    PubMed

    Li, Chih-Ying; Romero, Sergio; Bonilha, Heather S; Simpson, Kit N; Simpson, Annie N; Hong, Ickpyo; Velozo, Craig A

    2018-03-01

    This study examined dimensionality and item-level psychometric properties of an item bank measuring activities of daily living (ADL) across inpatient rehabilitation facilities and community living centers. A common-person equating method was used with the retrospective veterans data set. This study examined dimensionality, model fit, local independence, and monotonicity using factor analyses and fit statistics, principal component analysis (PCA), and differential item functioning (DIF) using Rasch analysis. Following the elimination of invalid data, 371 veterans who completed both the Functional Independence Measure (FIM) and minimum data set (MDS) within 6 days were retained. The FIM-MDS item bank demonstrated good internal consistency (Cronbach's α = .98) and met three rating scale diagnostic criteria and three of the four model fit statistics (comparative fit index/Tucker-Lewis index = 0.98, root mean square error of approximation = 0.14, and standardized root mean residual = 0.07). PCA of Rasch residuals showed the item bank explained 94.2% of the variance. The item bank covered the range of θ from -1.50 to 1.26 (item) and -3.57 to 4.21 (person), with person strata of 6.3. The findings indicated that the ADL physical function item bank constructed from the FIM and MDS measured a single latent trait with overall acceptable item-level psychometric properties, suggesting that it is an appropriate source for developing efficient test forms such as short forms and computerized adaptive tests.

  4. Safeguarding the process of drug administration with an emphasis on electronic support tools

    PubMed Central

    Seidling, Hanna M; Lampert, Anette; Lohmann, Kristina; Schiele, Julia T; Send, Alexander J F; Witticke, Diana; Haefeli, Walter E

    2013-01-01

    Aims The aim of this work is to understand the process of drug administration and identify points in the workflow that resulted in interventions by clinical information systems in order to improve patient safety. Methods To identify a generic way to structure the drug administration process we performed peer-group discussions and supplemented these discussions with a literature search for studies reporting errors in drug administration and strategies for their prevention. Results We concluded that the drug administration process might consist of up to 11 sub-steps, which can be grouped into the four sub-processes of preparation, personalization, application and follow-up. Errors in drug handling and administration are diverse and frequent and in many cases not caused by the patient him/herself, but by family members or nurses. Accordingly, different prevention strategies have been set in place with relatively few approaches involving e-health technology. Conclusions A generic structuring of the administration process and particular error-prone sub-steps may facilitate the allocation of prevention strategies and help to identify research gaps. PMID:24007450

  5. Accurate and balanced anisotropic Gaussian type orbital basis sets for atoms in strong magnetic fields.

    PubMed

    Zhu, Wuming; Trickey, S B

    2017-12-28

    In high magnetic field calculations, anisotropic Gaussian type orbital (AGTO) basis functions are capable of reconciling the competing demands of the spherically symmetric Coulombic interaction and cylindrical magnetic (B field) confinement. However, the best available a priori procedure for composing highly accurate AGTO sets for atoms in a strong B field [W. Zhu et al., Phys. Rev. A 90, 022504 (2014)] yields very large basis sets. Their size is problematical for use in any calculation with unfavorable computational cost scaling. Here we provide an alternative constructive procedure. It is based upon analysis of the underlying physics of atoms in B fields that allow identification of several principles for the construction of AGTO basis sets. Aided by numerical optimization and parameter fitting, followed by fine tuning of fitting parameters, we devise formulae for generating accurate AGTO basis sets in an arbitrary B field. For the hydrogen iso-electronic sequence, a set depends on B field strength, nuclear charge, and orbital quantum numbers. For multi-electron systems, the basis set formulae also include adjustment to account for orbital occupations. Tests of the new basis sets for atoms H through C (1 ≤ Z ≤ 6) and ions Li + , Be + , and B + , in a wide B field range (0 ≤ B ≤ 2000 a.u.), show an accuracy better than a few μhartree for single-electron systems and a few hundredths to a few mHs for multi-electron atoms. The relative errors are similar for different atoms and ions in a large B field range, from a few to a couple of tens of millionths, thereby confirming rather uniform accuracy across the nuclear charge Z and B field strength values. Residual basis set errors are two to three orders of magnitude smaller than the electronic correlation energies in multi-electron atoms, a signal of the usefulness of the new AGTO basis sets in correlated wavefunction or density functional calculations for atomic and molecular systems in an external strong B field.

  6. Accurate and balanced anisotropic Gaussian type orbital basis sets for atoms in strong magnetic fields

    NASA Astrophysics Data System (ADS)

    Zhu, Wuming; Trickey, S. B.

    2017-12-01

    In high magnetic field calculations, anisotropic Gaussian type orbital (AGTO) basis functions are capable of reconciling the competing demands of the spherically symmetric Coulombic interaction and cylindrical magnetic (B field) confinement. However, the best available a priori procedure for composing highly accurate AGTO sets for atoms in a strong B field [W. Zhu et al., Phys. Rev. A 90, 022504 (2014)] yields very large basis sets. Their size is problematical for use in any calculation with unfavorable computational cost scaling. Here we provide an alternative constructive procedure. It is based upon analysis of the underlying physics of atoms in B fields that allow identification of several principles for the construction of AGTO basis sets. Aided by numerical optimization and parameter fitting, followed by fine tuning of fitting parameters, we devise formulae for generating accurate AGTO basis sets in an arbitrary B field. For the hydrogen iso-electronic sequence, a set depends on B field strength, nuclear charge, and orbital quantum numbers. For multi-electron systems, the basis set formulae also include adjustment to account for orbital occupations. Tests of the new basis sets for atoms H through C (1 ≤ Z ≤ 6) and ions Li+, Be+, and B+, in a wide B field range (0 ≤ B ≤ 2000 a.u.), show an accuracy better than a few μhartree for single-electron systems and a few hundredths to a few mHs for multi-electron atoms. The relative errors are similar for different atoms and ions in a large B field range, from a few to a couple of tens of millionths, thereby confirming rather uniform accuracy across the nuclear charge Z and B field strength values. Residual basis set errors are two to three orders of magnitude smaller than the electronic correlation energies in multi-electron atoms, a signal of the usefulness of the new AGTO basis sets in correlated wavefunction or density functional calculations for atomic and molecular systems in an external strong B field.

  7. A two-factor error model for quantitative steganalysis

    NASA Astrophysics Data System (ADS)

    Böhme, Rainer; Ker, Andrew D.

    2006-02-01

    Quantitative steganalysis refers to the exercise not only of detecting the presence of hidden stego messages in carrier objects, but also of estimating the secret message length. This problem is well studied, with many detectors proposed but only a sparse analysis of errors in the estimators. A deep understanding of the error model, however, is a fundamental requirement for the assessment and comparison of different detection methods. This paper presents a rationale for a two-factor model for sources of error in quantitative steganalysis, and shows evidence from a dedicated large-scale nested experimental set-up with a total of more than 200 million attacks. Apart from general findings about the distribution functions found in both classes of errors, their respective weight is determined, and implications for statistical hypothesis tests in benchmarking scenarios or regression analyses are demonstrated. The results are based on a rigorous comparison of five different detection methods under many different external conditions, such as size of the carrier, previous JPEG compression, and colour channel selection. We include analyses demonstrating the effects of local variance and cover saturation on the different sources of error, as well as presenting the case for a relative bias model for between-image error.

  8. Review article: practical current issues in perioperative patient safety.

    PubMed

    Eichhorn, John H

    2013-02-01

    This brief review provides an overview and, importantly, a context perspective of relevant current practical issues in perioperative patient safety. The dramatic improvement in anesthesia patient safety over the last 30 years was not initiated by electronic monitors but, rather, largely by a set of behaviours known as "safety monitoring" that were then made decidedly more effective by extending the human senses through electronic monitoring, for example, capnography and pulse oximetry. In the highly developed world, this current success is threatened by complacency and production pressure. In some areas of the developing/underdeveloped world, the challenge is implementing the components of anesthesia practice that will bring safety improvements to parallel the overall current success, for instance, applying the World Federation of Societies of Anaesthesiologists (WFSA) "International Standards for A Safe Practice of Anaesthesia". Generally, expanding the current success in safety involves many practical issues. System issues involve research, effective reporting mechanisms and analysis/broadcasting of results, perioperative communication (including "speaking up to power"), and checklists. Monitoring issues involve enforcing existing published monitoring standards and also recognizing the risk of danger to the patient from hypoventilation during procedural sedation and from postoperative intravenous pain medications. Issues of clinical care include medication errors in the operating room, cerebral hypoperfusion (especially in the head-up position), dangers of airway management, postoperative residual weakness from muscle relaxants, operating room fires, and risks specific in obstetric anesthesia. Recognition of the issues outlined here and empowerment of all anesthesia professionals, from the most senior professors and administrators to the newest practitioners, should help maintain, solidify, and expand the improvements in anesthesia and perioperative patient safety.

  9. Automation, decision support, and expert systems in nephrology.

    PubMed

    Soman, Sandeep; Zasuwa, Gerard; Yee, Jerry

    2008-01-01

    Increasing data suggest that errors in medicine occur frequently and result in substantial harm to the patient. The Institute of Medicine report described the magnitude of the problem, and public interest in this issue, which was already large, has grown. The traditional approach in medicine has been to identify the persons making the errors and recommend corrective strategies. However, it has become increasingly clear that it is more productive to focus on the systems and processes through which care is provided. If these systems are set up in ways that would both make errors less likely and identify those that do occur and, at the same time, improve efficiency, then safety and productivity would be substantially improved. Clinical decision support systems (CDSSs) are active knowledge systems that use 2 or more items of patient data to generate case specific recommendations. CDSSs are typically designed to integrate a medical knowledge base, patient data, and an inference engine to generate case specific advice. This article describes how automation, templating, and CDSS improve efficiency, patient care, and safety by reducing the frequency and consequences of medical errors in nephrology. We discuss practical applications of these in 3 settings: a computerized anemia-management program (CAMP, Henry Ford Health System, Detroit, MI), vascular access surveillance systems, and monthly capitation notes in the hemodialysis unit.

  10. A-posteriori error estimation for the finite point method with applications to compressible flow

    NASA Astrophysics Data System (ADS)

    Ortega, Enrique; Flores, Roberto; Oñate, Eugenio; Idelsohn, Sergio

    2017-08-01

    An a-posteriori error estimate with application to inviscid compressible flow problems is presented. The estimate is a surrogate measure of the discretization error, obtained from an approximation to the truncation terms of the governing equations. This approximation is calculated from the discrete nodal differential residuals using a reconstructed solution field on a modified stencil of points. Both the error estimation methodology and the flow solution scheme are implemented using the Finite Point Method, a meshless technique enabling higher-order approximations and reconstruction procedures on general unstructured discretizations. The performance of the proposed error indicator is studied and applications to adaptive grid refinement are presented.
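
    The abstract gives no formulas, but the underlying idea of a residual-based surrogate for discretization error can be shown in a much simpler setting than the Finite Point Method: solve a 1D model problem with a low-order scheme, reconstruct the discrete solution with a higher-order interpolant, and evaluate the strong-form residual of that reconstruction. The sketch below is only an analogy under those assumptions, using a cubic spline as the reconstruction; it is not the paper's method.

      # Toy residual-based error surrogate for u'' = f on [0, 1] with u(0) = u(1) = 0.
      # Exact solution sin(pi x); second-order finite differences; spline reconstruction.
      import numpy as np
      from scipy.interpolate import CubicSpline

      n = 21
      x = np.linspace(0.0, 1.0, n)
      h = x[1] - x[0]
      f = -np.pi**2 * np.sin(np.pi * x)          # forcing chosen so that u_exact = sin(pi x)

      # Standard second-order finite-difference solve on the interior nodes.
      A = (np.diag(-2.0 * np.ones(n - 2)) +
           np.diag(np.ones(n - 3), 1) +
           np.diag(np.ones(n - 3), -1)) / h**2
      u = np.zeros(n)
      u[1:-1] = np.linalg.solve(A, f[1:-1])

      # Higher-order reconstruction of the discrete solution, then evaluate the
      # strong-form residual r = u_rec'' - f as a surrogate measure of the error.
      u_rec = CubicSpline(x, u)
      residual = u_rec(x, 2) - f
      print("max |residual| at interior nodes:", np.abs(residual[1:-1]).max())
      print("max |true error|:", np.abs(u - np.sin(np.pi * x)).max())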

  11. New Model for Estimating Glomerular Filtration Rate in Patients With Cancer

    PubMed Central

    Janowitz, Tobias; Williams, Edward H.; Marshall, Andrea; Ainsworth, Nicola; Thomas, Peter B.; Sammut, Stephen J.; Shepherd, Scott; White, Jeff; Mark, Patrick B.; Lynch, Andy G.; Jodrell, Duncan I.; Tavaré, Simon; Earl, Helena

    2017-01-01

    Purpose The glomerular filtration rate (GFR) is essential for carboplatin chemotherapy dosing; however, the best method to estimate GFR in patients with cancer is unknown. We identify the most accurate and least biased method. Methods We obtained data on age, sex, height, weight, serum creatinine concentrations, and results for GFR from chromium-51 (51Cr) EDTA excretion measurements (51Cr-EDTA GFR) from white patients ≥ 18 years of age with histologically confirmed cancer diagnoses at the Cambridge University Hospital NHS Trust, United Kingdom. We developed a new multivariable linear model for GFR using statistical regression analysis. 51Cr-EDTA GFR was compared with the estimated GFR (eGFR) from seven published models and our new model, using the root-mean-squared error (RMSE) and median residual statistics on internal and external validation data sets. We performed a comparison of carboplatin dosing accuracy on the basis of an absolute percentage error > 20%. Results Between August 2006 and January 2013, data from 2,471 patients were obtained. The new model improved the eGFR accuracy (RMSE, 15.00 mL/min; 95% CI, 14.12 to 16.00 mL/min) compared with all published models. Body surface area (BSA)–adjusted chronic kidney disease epidemiology (CKD-EPI) was the most accurate published model for eGFR (RMSE, 16.30 mL/min; 95% CI, 15.34 to 17.38 mL/min) for the internal validation set. Importantly, the new model reduced the fraction of patients with a carboplatin dose absolute percentage error > 20% to 14.17% in contrast to 18.62% for the BSA-adjusted CKD-EPI and 25.51% for the Cockcroft-Gault formula. The results were externally validated. Conclusion In a large data set from patients with cancer, BSA-adjusted CKD-EPI is the most accurate published model to predict GFR. The new model improves this estimation and may present a new standard of care. PMID:28686534
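
    As a worked illustration of the reported evaluation metrics only (the numbers below are synthetic, not the study data), the sketch computes the RMSE and median residual of an eGFR model against measured 51Cr-EDTA GFR and then flags carboplatin doses whose absolute percentage error exceeds 20% when the dose is derived from eGFR rather than measured GFR via the standard Calvert formula, dose = target AUC x (GFR + 25).

      # Evaluation metrics from the abstract, applied to synthetic values only.
      import numpy as np

      measured_gfr = np.array([85.0, 60.0, 110.0, 45.0, 95.0])   # 51Cr-EDTA GFR, mL/min (synthetic)
      estimated_gfr = np.array([78.0, 70.0, 102.0, 55.0, 93.0])  # eGFR from some model (synthetic)

      residuals = estimated_gfr - measured_gfr
      rmse = np.sqrt(np.mean(residuals**2))
      median_residual = np.median(residuals)

      # Calvert formula: carboplatin dose (mg) = target AUC * (GFR + 25).
      auc = 5.0
      dose_measured = auc * (measured_gfr + 25.0)
      dose_estimated = auc * (estimated_gfr + 25.0)
      abs_pct_error = np.abs(dose_estimated - dose_measured) / dose_measured * 100.0
      fraction_over_20 = np.mean(abs_pct_error > 20.0)

      print(f"RMSE: {rmse:.2f} mL/min, median residual: {median_residual:.2f} mL/min")
      print(f"Fraction of doses with absolute percentage error > 20%: {fraction_over_20:.1%}")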

  12. New Model for Estimating Glomerular Filtration Rate in Patients With Cancer.

    PubMed

    Janowitz, Tobias; Williams, Edward H; Marshall, Andrea; Ainsworth, Nicola; Thomas, Peter B; Sammut, Stephen J; Shepherd, Scott; White, Jeff; Mark, Patrick B; Lynch, Andy G; Jodrell, Duncan I; Tavaré, Simon; Earl, Helena

    2017-08-20

    Purpose The glomerular filtration rate (GFR) is essential for carboplatin chemotherapy dosing; however, the best method to estimate GFR in patients with cancer is unknown. We identify the most accurate and least biased method. Methods We obtained data on age, sex, height, weight, serum creatinine concentrations, and results for GFR from chromium-51 (51Cr) EDTA excretion measurements (51Cr-EDTA GFR) from white patients ≥ 18 years of age with histologically confirmed cancer diagnoses at the Cambridge University Hospital NHS Trust, United Kingdom. We developed a new multivariable linear model for GFR using statistical regression analysis. 51Cr-EDTA GFR was compared with the estimated GFR (eGFR) from seven published models and our new model, using the root-mean-squared error (RMSE) and median residual statistics on internal and external validation data sets. We performed a comparison of carboplatin dosing accuracy on the basis of an absolute percentage error > 20%. Results Between August 2006 and January 2013, data from 2,471 patients were obtained. The new model improved the eGFR accuracy (RMSE, 15.00 mL/min; 95% CI, 14.12 to 16.00 mL/min) compared with all published models. Body surface area (BSA)-adjusted chronic kidney disease epidemiology (CKD-EPI) was the most accurate published model for eGFR (RMSE, 16.30 mL/min; 95% CI, 15.34 to 17.38 mL/min) for the internal validation set. Importantly, the new model reduced the fraction of patients with a carboplatin dose absolute percentage error > 20% to 14.17% in contrast to 18.62% for the BSA-adjusted CKD-EPI and 25.51% for the Cockcroft-Gault formula. The results were externally validated. Conclusion In a large data set from patients with cancer, BSA-adjusted CKD-EPI is the most accurate published model to predict GFR. The new model improves this estimation and may present a new standard of care.

  13. Arctic Ocean Tides from GRACE Satellite Accelerations

    NASA Astrophysics Data System (ADS)

    Killett, B.; Wahr, J. M.; Desai, S. D.; Yuan, D.; Watkins, M. M.

    2010-12-01

    Because missions such as TOPEX/POSEIDON do not extend to high latitudes, Arctic Ocean tidal solutions are not constrained by altimetry data. The resulting errors in tidal models alias into monthly GRACE gravity field solutions at all latitudes. Fortunately, GRACE inter-satellite ranging data can be used to solve for these tides directly. Seven years of GRACE inter-satellite acceleration data are inverted using a mascon approach to solve for residual amplitudes and phases of major solar and lunar tides in the Arctic Ocean relative to FES 2004. Simulations are performed to test the inversion algorithm's performance, and uncertainty estimates are derived from the tidal signal over land. Truncation error magnitudes and patterns are compared to the residual tidal signals.
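
    The abstract describes the inversion only at a high level; as a deliberately simplified sketch of the final quantity being estimated, the code below fits the in-phase and quadrature components of a single tidal constituent (M2, period about 12.42 hours) to a synthetic residual time series by linear least squares and converts them to a residual amplitude and phase. The data, noise level, and signal values are invented; this is not the mascon inversion of GRACE accelerations.

      # Toy estimation of the residual amplitude and phase of one tidal constituent
      # from a synthetic time series. Not the GRACE mascon inversion of the paper.
      import numpy as np

      omega = 2.0 * np.pi / 12.42               # M2 angular frequency, rad/hour
      t = np.arange(0.0, 24.0 * 365, 1.0)       # one year of hourly samples

      rng = np.random.default_rng(1)
      true_amp, true_phase = 2.0, 0.7            # synthetic residual signal (cm, rad)
      signal = true_amp * np.cos(omega * t - true_phase) + rng.normal(0.0, 1.0, t.size)

      # Least-squares fit: signal ~ a*cos(omega t) + b*sin(omega t)
      G = np.column_stack([np.cos(omega * t), np.sin(omega * t)])
      (a, b), *_ = np.linalg.lstsq(G, signal, rcond=None)

      amp = np.hypot(a, b)
      phase = np.arctan2(b, a)
      print(f"estimated residual amplitude: {amp:.2f} cm, phase: {phase:.2f} rad")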

  14. Efficient Robust Regression via Two-Stage Generalized Empirical Likelihood

    PubMed Central

    Bondell, Howard D.; Stefanski, Leonard A.

    2013-01-01

    Large- and finite-sample efficiency and resistance to outliers are the key goals of robust statistics. Although often not simultaneously attainable, we develop and study a linear regression estimator that comes close. Efficiency obtains from the estimator’s close connection to generalized empirical likelihood, and its favorable robustness properties are obtained by constraining the associated sum of (weighted) squared residuals. We prove that the estimator attains the maximum finite-sample replacement breakdown point and full asymptotic efficiency for normal errors. Simulation evidence shows that compared to existing robust regression estimators, the new estimator has relatively high efficiency for small sample sizes, and comparable outlier resistance. The estimator is further illustrated and compared to existing methods via application to a real data set with purported outliers. PMID:23976805
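
    The two-stage generalized empirical likelihood estimator itself is not reproduced here. As a generic illustration of the efficiency-versus-outlier-resistance trade-off the abstract discusses, the sketch below compares ordinary least squares with a standard Huber M-estimator fitted by iteratively reweighted least squares on synthetic data containing a few gross outliers; the tuning constant 1.345 is the usual choice for about 95% efficiency at the normal model.

      # Generic robustness illustration on contaminated synthetic data: OLS vs. a
      # Huber M-estimator (IRLS). This is NOT the estimator proposed in the paper.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 100
      x = rng.uniform(0.0, 10.0, n)
      y = 2.0 + 0.5 * x + rng.normal(0.0, 1.0, n)
      y[:5] += 25.0                                   # inject a few gross outliers
      X = np.column_stack([np.ones(n), x])

      def huber_irls(X, y, c=1.345, n_iter=50):
          """Huber M-estimator fit by iteratively reweighted least squares."""
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)
          for _ in range(n_iter):
              r = y - X @ beta
              scale = np.median(np.abs(r - np.median(r))) / 0.6745   # MAD scale estimate
              u = np.abs(r) / max(scale, 1e-12)
              w = np.minimum(1.0, c / np.maximum(u, 1e-12))          # Huber weights
              sw = np.sqrt(w)
              beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
          return beta

      beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
      print("OLS coefficients:  ", beta_ols)
      print("Huber coefficients:", huber_irls(X, y))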

  15. How safe are federal regulations on occupational alcohol use?

    PubMed

    Howland, Jonathan; Almeida, Alissa; Rohsenow, Damaris; Minsky, Sara; Greece, Jacey

    2006-01-01

    Current US federal regulations on occupational alcohol use for safety-sensitive jobs do not account for impairment from low doses of alcohol and next day effects of heavy drinking. Research on the effects of low doses of alcohol on neurocognitive and simulated occupational tasks suggests that the current per se level of these regulations is set too high. Research on the effects of heavy drinking on next-day neurocognitive and simulated occupational performance is mixed and suggests that further research is needed to determine the safety of current "bottle-to-throttle" times. Although low-dose and residual drinking effects may pose low relative risk for occupational error, the aggregate contribution of these exposures to workplace problems may be substantial, given the number of people exposed.

  16. Boundary control for a constrained two-link rigid-flexible manipulator with prescribed performance

    NASA Astrophysics Data System (ADS)

    Cao, Fangfei; Liu, Jinkun

    2018-05-01

    In this paper, we consider a boundary control problem for a constrained two-link rigid-flexible manipulator. The nonlinear system is described by a hybrid ordinary differential equation-partial differential equation (ODE-PDE) dynamic model. Based on the coupled ODE-PDE model, boundary control is proposed to regulate the joint positions and eliminate the elastic vibration simultaneously. With the help of prescribed performance functions, the tracking error can converge to an arbitrarily small residual set and the convergence rate is no less than a certain pre-specified value. Asymptotic stability of the closed-loop system is rigorously proved by LaSalle's Invariance Principle extended to infinite-dimensional systems. Numerical simulations are provided to demonstrate the effectiveness of the proposed controller.
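
    For context, a commonly used textbook form of a prescribed performance constraint (the paper's exact functions may differ) bounds the tracking error e(t) by an exponentially decaying performance function, which forces the error into a residual set of prescribed size at a rate no slower than a prescribed exponential:

      % Standard prescribed-performance bound (illustrative form, not necessarily the paper's)
      \[
        -\delta\,\rho(t) \;<\; e(t) \;<\; \rho(t),
        \qquad
        \rho(t) = (\rho_0 - \rho_\infty)\,e^{-\ell t} + \rho_\infty,
        \qquad
        \rho_0 > \rho_\infty > 0,\quad \ell > 0,\quad 0 \le \delta \le 1 .
      \]

    Here \(\rho_\infty\) sets the size of the residual set and \(\ell\) the guaranteed convergence rate, matching the abstract's statement that both can be pre-specified.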

  17. Optimization of control gain by operator adjustment

    NASA Technical Reports Server (NTRS)

    Kruse, W.; Rothbauer, G.

    1973-01-01

    An optimal gain was established by measuring errors at 5 discrete control gain settings in an experimental set-up consisting of a 2-dimensional, first-order pursuit tracking task performed by subjects (S's). No significant experience effect on optimum gain setting was found in the first experiment. During the second experiment, in which control gain was continuously adjustable, highly experienced S's tended to reach the previously determined optimum gain quite accurately and quickly. Less experienced S's tended to select a near-optimal gain either below or above the experimentally determined optimum, depending on the initial control gain setting, although mean settings of both groups were equal. This quick and simple method is recommended for selecting control gains for different control systems and forcing functions.

  18. Assessing and minimizing contamination in time of flight based validation data

    NASA Astrophysics Data System (ADS)

    Lennox, Kristin P.; Rosenfield, Paul; Blair, Brenton; Kaplan, Alan; Ruz, Jaime; Glenn, Andrew; Wurtz, Ronald

    2017-10-01

    Time of flight experiments are the gold standard method for generating labeled training and testing data for the neutron/gamma pulse shape discrimination problem. As the popularity of supervised classification methods increases in this field, there will also be increasing reliance on time of flight data for algorithm development and evaluation. However, time of flight experiments are subject to various sources of contamination that lead to neutron and gamma pulses being mislabeled. Such labeling errors have a detrimental effect on classification algorithm training and testing, and should therefore be minimized. This paper presents a method for identifying minimally contaminated data sets from time of flight experiments and estimating the residual contamination rate. This method leverages statistical models describing neutron and gamma travel time distributions and is easily implemented using existing statistical software. The method produces a set of optimal intervals that balance the trade-off between interval size and nuisance particle contamination, and its use is demonstrated on a time of flight data set for Cf-252. The particular properties of the optimal intervals for the demonstration data are explored in detail.
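
    The paper's statistical models are not spelled out in the abstract; as a hedged sketch of the general idea, the code below models gamma and neutron travel-time distributions as Gaussians with invented parameters and mixture proportions, sweeps candidate gamma windows, and reports how much of the gamma population each window keeps against the estimated neutron contamination inside it.

      # Toy sketch of trading off time-of-flight window size against contamination by
      # the nuisance particle type. Parameters and proportions are hypothetical.
      import numpy as np
      from scipy.stats import norm

      gamma_tof = norm(loc=3.0, scale=0.5)      # ns, hypothetical gamma travel-time model
      neutron_tof = norm(loc=30.0, scale=8.0)   # ns, hypothetical neutron travel-time model
      p_gamma, p_neutron = 0.6, 0.4             # hypothetical mixture proportions

      def contamination_in_window(lo, hi):
          """Fraction of events inside [lo, hi] expected to be neutrons (the nuisance class)."""
          g = p_gamma * (gamma_tof.cdf(hi) - gamma_tof.cdf(lo))
          n = p_neutron * (neutron_tof.cdf(hi) - neutron_tof.cdf(lo))
          return n / (g + n)

      # Sweep symmetric windows around the gamma peak and report the trade-off.
      for half_width in (0.5, 1.0, 2.0, 4.0):
          lo, hi = 3.0 - half_width, 3.0 + half_width
          frac_gammas_kept = gamma_tof.cdf(hi) - gamma_tof.cdf(lo)
          print(f"window +/-{half_width:.1f} ns: keeps {frac_gammas_kept:.1%} of gammas, "
                f"contamination ~ {contamination_in_window(lo, hi):.2%}")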

  19. Numerical simulation of electromagnetic surface treatment

    NASA Astrophysics Data System (ADS)

    Sonde, Emmanuel; Chaise, Thibaut; Nelias, Daniel; Robin, Vincent

    2018-01-01

    Surface treatment methods, such as shot peening or laser shock peening, are generally used to introduce superficial compressive residual stresses in mechanical parts. These processes are carried out during manufacturing or for repair purposes. The compressive residual stresses prevent the initiation and growth of cracks and thus improve the fatigue life of mechanical components. Electromagnetic pulse peening (EMP) is an innovative process that could be used to introduce compressive residual stresses in conductive materials. It acts by generating a high transient electromagnetic field near the working surface. In this paper, the EMP process is presented and a sequentially coupled electromagnetic and mechanical model is developed for its simulation. This 2D axisymmetric model is set up with the commercial finite element software SYSWELD. After description and validation, the numerical model is used to simulate the introduction of compressive residual stresses into a thick nickel-based Alloy 690 sample by means of electromagnetic pulses. The results are presented in terms of effective plastic strain and residual mean stress. The influence of the process parameters, such as current intensity and frequency, on the results is analyzed. Finally, the predictability of the process is shown by several correlation studies.

  20. The impacts of a pharmacist-managed outpatient clinic and chemotherapy-directed electronic order sets for monitoring oral chemotherapy.

    PubMed

    Battis, Brandon; Clifford, Linda; Huq, Mostaqul; Pejoro, Edrick; Mambourg, Scott

    2017-12-01

    Objectives Patients treated with oral chemotherapy appear to have less contact with the treating providers. As a result, safety, adherence, medication therapy monitoring, and timely follow-up may be compromised. The trend of treating cancer with oral chemotherapy agents is on the rise. However, standard clinical guidance is still lacking for prescribing, monitoring, patient education, and follow-up of patients on oral chemotherapy across healthcare settings. The purpose of this project was to establish an oral chemotherapy monitoring clinic, to create drug- and lab-specific provider order sets for prescribing and lab monitoring, and ultimately to ensure safe and effective treatment of the veterans we serve. Methods A collaborative agreement was reached among oncology pharmacists, a pharmacy resident, two oncologists, and a physician assistant to establish a pharmacist-managed oral chemotherapy monitoring clinic at the VA Sierra Nevada Healthcare System. Drug-specific electronic order sets for prescribing and lab monitoring were created for initiating new drug therapy and prescription renewal. The order sets were created to be provider-centric, minimizing the clicks needed to order necessary medications and lab monitoring. A standard progress note template was developed for documenting interventions made by the clinic. Patients new to an oral chemotherapy regimen were first counseled by an oncology pharmacist. The patients were then enrolled into the oral chemotherapy monitoring clinic for subsequent follow-up and pharmacist interventions. Further, patients lacking monitoring or missing provider appointments were captured through a Clinical Dashboard developed by the US Department of Veterans Affairs (VA) Regional Office (VISN21) using SQL Server Reporting Services. Between September 2014 and April 2015, a total of 68 patients on different oral chemotherapy agents were enrolled into the clinic. Results Of the 68 patients enrolled into the oral chemotherapy monitoring clinic, 31 patients (45%) were identified as having a therapy-related problem with their oral chemotherapy regimen, based on a gross measure of the safety and appropriateness of medication management over the eight-month follow-up period between September 2014 and April 2015. In addition, the clinic helped to reestablish care for three patients (4.4%) who were lost to follow-up. The clinic identified 12 patients (17.6%) as non-adherent to their prescribed regimen to some degree; these patients were suspected of missing doses because prescriptions were refilled at least three days later than the expected date, although they denied non-adherence. Among them, six patients (8.8%) were truly non-adherent: they stated that they had missed at least one day of therapy or were not taking the medication as prescribed. Medication regimen errors were discovered for five patients, accounting for a 7.3% medication-related error rate. Finally, seven patients (10.3%) were found to have an adverse reaction attributed to their oral chemotherapy. Two of them (2.9%) developed severe adverse reactions (Grade 3 and 4), which required hospitalization or immediate dose de-escalation. Conclusions The pilot clinic was able to identify current deficiencies and gaps in our practice settings for managing oral chemotherapy in a Veterans population. The oral chemotherapy monitoring clinic played a proactive role in identifying preventable medication errors, monitoring medication therapy, improving adherence, managing adverse drug reactions, and re-establishing care for patients who were lost to follow-up. The results suggest that close monitoring and follow-up of patients on oral chemotherapy are crucial to achieving therapeutic goals, improving patient safety and adherence, and reducing adverse drug events and health care costs.
