Sample records for average position error

  1. Masked and unmasked error-related potentials during continuous control and feedback

    NASA Astrophysics Data System (ADS)

    Lopes Dias, Catarina; Sburlea, Andreea I.; Müller-Putz, Gernot R.

    2018-06-01

    The detection of error-related potentials (ErrPs) in tasks with discrete feedback is well established in the brain–computer interface (BCI) field. However, the decoding of ErrPs in tasks with continuous feedback is still in its early stages. Objective. We developed a task in which subjects have continuous control of a cursor's position by means of a joystick. The cursor's position was shown to the participants in two different modalities of continuous feedback: normal and jittered. The jittered feedback was created to mimic the instability that could exist if participants controlled the trajectory directly with brain signals. Approach. This paper studies the electroencephalographic (EEG) measurable signatures caused by a loss of control over the cursor's trajectory, causing a target miss. Main results. In both feedback modalities, time-locked potentials revealed the typical frontal-central components of error-related potentials. Errors occurring during the jittered feedback (masked errors) were delayed in comparison to errors occurring during normal feedback (unmasked errors). Masked errors displayed lower peak amplitudes than unmasked errors. Time-locked classification analysis allowed a good distinction between correct and error classes (average Cohen's κ, average TPR = 81.8% and average TNR = 96.4%). Time-locked classification analysis between masked error and unmasked error classes revealed results at chance level (average Cohen's κ, average TPR = 60.9% and average TNR = 58.3%). Afterwards, we performed asynchronous detection of ErrPs, combining both masked and unmasked trials. The asynchronous detection of ErrPs in a simulated online scenario resulted in an average TNR of 84.0% and an average TPR of 64.9%. Significance. The time-locked classification results suggest that the masked and unmasked errors were indistinguishable in terms of classification. The asynchronous classification results suggest that the feedback modality did not hinder the asynchronous detection of ErrPs.
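The classifier performance figures quoted above (TPR, TNR and Cohen's κ) all derive from a binary confusion matrix; a minimal sketch with illustrative counts (not the study's data):

```python
def binary_rates(tp, fn, tn, fp):
    """Return TPR, TNR and Cohen's kappa for a binary confusion matrix."""
    tpr = tp / (tp + fn)          # sensitivity: error trials correctly detected
    tnr = tn / (tn + fp)          # specificity: correct trials correctly passed
    n = tp + fn + tn + fp
    po = (tp + tn) / n            # observed agreement
    # expected agreement under chance, from the marginal frequencies
    pe = ((tp + fn) * (tp + fp) + (tn + fp) * (tn + fn)) / n**2
    kappa = (po - pe) / (1 - pe)
    return tpr, tnr, kappa

# Illustrative counts only; the abstract's kappa values are not recoverable here.
tpr, tnr, kappa = binary_rates(tp=45, fn=10, tn=90, fp=5)
print(f"TPR={tpr:.3f}  TNR={tnr:.3f}  kappa={kappa:.3f}")
```

A κ near 0 corresponds to the "chance level" result reported for masked-versus-unmasked classification, regardless of the raw TPR/TNR.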

  2. First clinical experience in carbon ion scanning beam therapy: retrospective analysis of patient positional accuracy.

    PubMed

    Mori, Shinichiro; Shibayama, Kouichi; Tanimoto, Katsuyuki; Kumagai, Motoki; Matsuzaki, Yuka; Furukawa, Takuji; Inaniwa, Taku; Shirai, Toshiyuki; Noda, Koji; Tsuji, Hiroshi; Kamada, Tadashi

    2012-09-01

    Our institute has constructed a new treatment facility for carbon ion scanning beam therapy. The first clinical trials were successfully completed at the end of November 2011. To evaluate patient setup accuracy, positional errors between the reference computed tomography (CT) scan and the final patient setup images were calculated using 2D-3D registration software. Eleven patients with tumors of the head and neck, prostate and pelvis receiving carbon ion scanning beam treatment participated. In the patient setup process, orthogonal X-ray flat panel detector (FPD) images are taken and the therapists adjust the patient table position in six degrees of freedom to match the reference position using manual or automatic (or both) registration functions. We calculated residual positional errors with the 2D-3D auto-registration function using the final patient setup orthogonal FPD images and the treatment planning CT data. Residual error averaged over all patients in each fraction decreased from the initial to the last treatment fraction [1.09 mm/0.76° (averaged over the 1st and 2nd fractions) to 0.77 mm/0.61° (averaged over the 15th and 16th fractions)]. The 2D-3D registration calculation time was 8.0 s on average throughout the treatment course. Residual errors in translation and rotation averaged over all patients as a function of date decreased with the passage of time (1.6 mm/1.2° in May 2011 to 0.4 mm/0.2° in December 2011). This retrospective residual positional error analysis shows that the accuracy of patient setup during the first clinical trials of carbon ion scanning beam therapy was good and improved with increasing therapist experience.

  3. Toward attenuating the impact of arm positions on electromyography pattern-recognition based motion classification in transradial amputees

    PubMed Central

    2012-01-01

    Background Electromyography (EMG) pattern-recognition based control strategies for multifunctional myoelectric prosthesis systems have commonly been studied in a controlled laboratory setting. Before these myoelectric prosthesis systems are clinically viable, it will be necessary to assess the effect of some disparities between the ideal laboratory setting and practical use on the control performance. One important obstacle is the impact of arm position variation, which changes the EMG pattern when identical motions are performed in different arm positions. This study aimed to investigate the impacts of arm position variation on EMG pattern-recognition based motion classification in upper-limb amputees and the solutions for reducing these impacts. Methods With five unilateral transradial (TR) amputees, the EMG signals and tri-axial accelerometer mechanomyography (ACC-MMG) signals were simultaneously collected from both amputated and intact arms while six classes of arm and hand movements were performed in each of the five arm positions considered in the study. The effect of the arm position changes was estimated in terms of motion classification error and compared between amputated and intact arms. Then the performance of three proposed methods in attenuating the impact of arm positions was evaluated. Results With EMG signals, the average intra-position and inter-position classification errors across all five arm positions and five subjects were around 7.3% and 29.9% for amputated arms, respectively, about 1.0% and 10% lower than those for intact arms. While ACC-MMG signals could yield a similar intra-position classification error (9.9%) to EMG, they had a much higher inter-position classification error, with an average value of 81.1% over the arm positions and the subjects. When the EMG data from all five arm positions were included in the training set, the average classification error reached around 10.8% for amputated arms.
    Using a two-stage cascade classifier, the average classification error was around 9.0% over all five arm positions. Reducing the number of ACC-MMG channels from 8 to 2 only increased the average position classification error across all five arm positions from 0.7% to 1.0% in amputated arms. Conclusions The performance of EMG pattern-recognition based methods in classifying movements strongly depends on arm position. This dependency is slightly stronger in the intact arm than in the amputated arm, which suggests that investigations associated with the practical use of a myoelectric prosthesis should use limb amputees as subjects rather than able-bodied subjects. The two-stage cascade classifier mode, with ACC-MMG for limb position identification and EMG for limb motion classification, may be a promising way to reduce the effect of limb position variation on classification performance. PMID:23036049
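The intra- versus inter-position comparison above can be reproduced in outline with a toy experiment: a classifier trained on features from one arm position and tested in another. The synthetic features and the nearest-centroid classifier below are hypothetical stand-ins, not the study's EMG features or classifier:

```python
import numpy as np

rng = np.random.default_rng(0)
n_positions, n_classes, n_trials, n_feat = 5, 6, 40, 8

# Synthetic "EMG features": per-class means plus an arm-position offset
# (the offset models how position variation shifts the EMG pattern).
class_means = rng.normal(0.0, 1.0, (n_classes, n_feat))
pos_offsets = rng.normal(0.0, 0.8, (n_positions, n_feat))
noise = rng.normal(0.0, 0.3, (n_positions, n_classes, n_trials, n_feat))
X = class_means[None, :, None, :] + pos_offsets[:, None, None, :] + noise

def classification_error(train, test):
    """Nearest-centroid motion classification error.
    train, test: (n_classes, n_trials, n_feat) arrays."""
    centroids = train.mean(axis=1)                                   # (C, F)
    d = np.linalg.norm(test[:, :, None, :] - centroids[None, None], axis=-1)
    pred = d.argmin(axis=-1)                                         # (C, T)
    truth = np.arange(test.shape[0])[:, None]
    return float(np.mean(pred != truth))

# Intra-position: train and test in the same position (optimistic here,
# since no train/test split is made in this sketch).
intra = np.mean([classification_error(X[p], X[p]) for p in range(n_positions)])
# Inter-position: train in one position, test in a different one.
inter = np.mean([classification_error(X[p], X[q])
                 for p in range(n_positions) for q in range(n_positions) if p != q])
print(f"intra-position error {intra:.1%}, inter-position error {inter:.1%}")
```

The gap between the two numbers mirrors the 7.3% versus 29.9% pattern the study reports: the position offset moves test data away from the training centroids.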

  4. SU-E-T-195: Gantry Angle Dependency of MLC Leaf Position Error

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ju, S; Hong, C; Kim, M

    Purpose: The aim of this study was to investigate the gantry angle dependency of the multileaf collimator (MLC) leaf position error. Methods: An automatic MLC quality assurance system (AutoMLCQA) was developed to evaluate the gantry angle dependency of the MLC leaf position error using an electronic portal imaging device (EPID). To eliminate the EPID position error due to gantry rotation, we designed a reference marker (RM) that could be inserted into the wedge mount. After setting up the EPID, a reference image of the RM was taken using an open field. Next, an EPID-based picket-fence test (PFT) was performed without the RM. These procedures were repeated at 45° intervals of the gantry angle. A total of eight reference images and PFT image sets were analyzed using in-house software. The average MLC leaf position error was calculated at five pickets (-10, -5, 0, 5, and 10 cm) in accordance with general PFT guidelines. This test was carried out for four linear accelerators. Results: The average MLC leaf position errors were within the set criterion of <1 mm (actual errors ranged from -0.7 to 0.8 mm) for all gantry angles, but significant gantry angle dependency was observed in all machines. The error was smallest at a gantry angle of 0° but increased in the positive direction with gantry angle increments in the clockwise direction. The error reached a maximum value at a gantry angle of 90° and then gradually decreased until 180°. In the counter-clockwise rotation of the gantry, the same pattern of error was observed but the error increased in the negative direction. Conclusion: The AutoMLCQA system was useful for evaluating the MLC leaf position error at various gantry angles without the EPID position error. The gantry angle dependency should be considered during MLC leaf position error analysis.
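The PFT analysis step described above reduces to comparing detected leaf-end positions against the nominal picket positions; a hedged sketch with simulated detections (the leaf count and noise level are assumptions, not the study's values):

```python
import numpy as np

# Nominal picket positions (cm), as used in the study's five-picket test.
pickets = np.array([-10.0, -5.0, 0.0, 5.0, 10.0])

rng = np.random.default_rng(1)
n_leaves = 40
# Hypothetical detected leaf-end positions: nominal picket plus a small
# random offset (sigma = 0.02 cm, i.e. 0.2 mm), standing in for the
# positions extracted from the EPID images.
detected = pickets[None, :] + rng.normal(0.0, 0.02, (n_leaves, pickets.size))

errors_mm = (detected - pickets[None, :]) * 10.0     # per-leaf, per-picket error (mm)
mean_per_picket = errors_mm.mean(axis=0)             # average error at each picket
print("mean error per picket (mm):", np.round(mean_per_picket, 3))
print("max |error| (mm):", round(float(np.abs(errors_mm).max()), 3))
```

Repeating this calculation at each gantry angle (as the AutoMLCQA workflow does every 45°) yields the per-angle error curve from which the gantry dependency is read off.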

  5. Quality assurance of dynamic parameters in volumetric modulated arc therapy.

    PubMed

    Manikandan, A; Sarkar, B; Holla, R; Vivek, T R; Sujatha, N

    2012-07-01

    The purpose of this study was to demonstrate quality assurance checks for accuracy of gantry speed and position, dose rate and multileaf collimator (MLC) speed and position for a volumetric modulated arc treatment (VMAT) modality (Synergy S; Elekta, Stockholm, Sweden), and to check that all the necessary variables and parameters were synchronous. Three tests (for gantry position-dose delivery synchronisation, gantry speed-dose delivery synchronisation and MLC leaf speed and positions) were performed. The average error in gantry position was 0.5° and the average difference was 3 MU for a linear and a parabolic relationship between gantry position and delivered dose. In the third part of this test (sawtooth variation), the maximum difference was 9.3 MU, with a gantry position difference of 1.2°. In the sweeping field method test, a linear relationship was observed between recorded doses and distance from the central axis, as expected. In the open field method, errors were encountered at the beginning and at the end of the delivery arc, termed the "beginning" and "end" errors. For MLC position verification, the maximum error was -2.46 mm and the mean error was 0.0153 ±0.4668 mm, and 3.4% of leaves analysed showed errors of >±1 mm. This experiment demonstrates that the variables and parameters of the Synergy S are synchronous and that the system is suitable for delivering VMAT using a dynamic MLC.
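As a sanity check, the reported leaf-error statistics (mean 0.0153 mm, SD 0.4668 mm, 3.4% of leaves beyond ±1 mm) are mutually consistent under a Gaussian error model; a sketch that uses the two published moments as generative parameters for hypothetical leaves:

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical leaf position errors drawn from the reported mean/SD (mm).
leaf_err = rng.normal(0.0153, 0.4668, 2000)

frac_over_1mm = np.mean(np.abs(leaf_err) > 1.0)
print(f"mean {leaf_err.mean():.4f} mm, SD {leaf_err.std():.4f} mm, "
      f"> ±1 mm: {frac_over_1mm:.1%}")
```

Under a normal distribution with these moments, roughly 3% of leaves fall outside ±1 mm, close to the 3.4% observed, so the tail fraction carries little information beyond the mean and SD here.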

  6. Quality assurance of dynamic parameters in volumetric modulated arc therapy

    PubMed Central

    Manikandan, A; Sarkar, B; Holla, R; Vivek, T R; Sujatha, N

    2012-01-01

    Objectives The purpose of this study was to demonstrate quality assurance checks for accuracy of gantry speed and position, dose rate and multileaf collimator (MLC) speed and position for a volumetric modulated arc treatment (VMAT) modality (Synergy® S; Elekta, Stockholm, Sweden), and to check that all the necessary variables and parameters were synchronous. Methods Three tests (for gantry position–dose delivery synchronisation, gantry speed–dose delivery synchronisation and MLC leaf speed and positions) were performed. Results The average error in gantry position was 0.5° and the average difference was 3 MU for a linear and a parabolic relationship between gantry position and delivered dose. In the third part of this test (sawtooth variation), the maximum difference was 9.3 MU, with a gantry position difference of 1.2°. In the sweeping field method test, a linear relationship was observed between recorded doses and distance from the central axis, as expected. In the open field method, errors were encountered at the beginning and at the end of the delivery arc, termed the “beginning” and “end” errors. For MLC position verification, the maximum error was −2.46 mm and the mean error was 0.0153 ±0.4668 mm, and 3.4% of leaves analysed showed errors of >±1 mm. Conclusion This experiment demonstrates that the variables and parameters of the Synergy® S are synchronous and that the system is suitable for delivering VMAT using a dynamic MLC. PMID:22745206

  7. Error modeling for differential GPS. M.S. Thesis - MIT, 12 May 1995

    NASA Technical Reports Server (NTRS)

    Blerman, Gregory S.

    1995-01-01

    Differential Global Positioning System (DGPS) positioning is used to accurately locate a GPS receiver based upon the well-known position of a reference site. In utilizing this technique, several error sources contribute to position inaccuracy. This thesis investigates the error in DGPS operation and attempts to develop a statistical model for the behavior of this error. The model for DGPS error is developed using GPS data collected by Draper Laboratory. The Marquardt method for nonlinear curve-fitting is used to find the parameters of a first order Markov process that models the average errors from the collected data. The results show that a first order Markov process can be used to model the DGPS error as a function of baseline distance and time delay. The model's time correlation constant is 3847.1 seconds (1.07 hours) for the mean square error. The distance correlation constant is 122.8 kilometers. The total process variance for the DGPS model is 3.73 sq meters.
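A first-order Markov (Gauss-Markov, i.e. AR(1)) error process of the kind fitted in the thesis can be simulated directly; the sketch below uses the reported time constant and process variance, while the 1 s sample interval is an assumption of this sketch:

```python
import numpy as np

tau = 3847.1      # time correlation constant from the thesis (s)
var = 3.73        # total process variance from the thesis (m^2)
dt = 1.0          # sample interval (s); an assumption, not from the thesis
n = 200_000

phi = np.exp(-dt / tau)          # AR(1) coefficient of the Gauss-Markov process
q = var * (1 - phi**2)           # driving-noise variance preserving steady-state var

rng = np.random.default_rng(42)
x = np.empty(n)
x[0] = rng.normal(0.0, np.sqrt(var))
w = rng.normal(0.0, np.sqrt(q), n)
for k in range(1, n):
    x[k] = phi * x[k - 1] + w[k]   # x(t+dt) = e^(-dt/tau) x(t) + noise

print(f"sample variance {x.var():.2f} m^2 (target {var})")
lag = int(tau)                     # autocorrelation at lag tau should be ~ 1/e
r = np.corrcoef(x[:-lag], x[lag:])[0, 1]
print(f"autocorrelation at lag tau: {r:.2f}")
```

Because the correlation time is over an hour, a few hours of 1 Hz data contain only a handful of independent samples, so the sample variance and autocorrelation fluctuate noticeably between runs.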

  8. Lane Level Localization; Using Images and HD Maps to Mitigate the Lateral Error

    NASA Astrophysics Data System (ADS)

    Hosseinyalamdary, S.; Peter, M.

    2017-05-01

    In urban canyons, where GNSS signals are blocked by buildings, the accuracy of the measured position deteriorates significantly. GIS databases have frequently been used to improve the accuracy of the measured position through map-matching approaches, in which the measured position is projected onto the road links (centerlines), reducing its lateral error. Advances in data acquisition have produced high-definition maps that contain extra information, such as road lanes. These road lanes can be used to mitigate the positional error and improve the accuracy of the position. In this paper, the image content of a camera mounted on the platform is used to detect the road boundaries in the image. We apply color masks to detect the road marks, apply the Hough transform to fit lines to the left and right road boundaries, find the corresponding road segment in the GIS database, estimate the homography transformation between the global and image coordinates of the road boundaries, and estimate the camera pose with respect to the global coordinate system. The proposed approach is evaluated on a benchmark: the position is measured by a smartphone's GPS receiver, images are taken with the smartphone's camera, and the ground truth is provided by the Real-Time Kinematic (RTK) technique. Results show the proposed approach significantly improves the accuracy of the measured GPS position: the error in the measured GPS position, with average and standard deviation of 11.323 and 11.418 meters, is reduced to an estimated-position error with average and standard deviation of 6.725 and 5.899 meters.
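The homography estimation step between image and global road-boundary coordinates can be sketched with the standard direct linear transform (DLT); the point correspondences below are hypothetical, and this is not the authors' implementation:

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform: find H such that dst ~ H @ src (homogeneous).
    src, dst: (N, 2) arrays of corresponding points, N >= 4, no 3 collinear."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A (last right-singular vector).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    p = np.c_[pts, np.ones(len(pts))] @ H.T
    return p[:, :2] / p[:, 2:]

# Hypothetical image corners of a road segment and their map coordinates
# (a 3.5 m wide, 30 m long lane patch).
img = np.array([[100, 400], [540, 400], [420, 250], [220, 250]], dtype=float)
world = np.array([[0, 0], [3.5, 0], [3.5, 30], [0, 30]], dtype=float)
H = fit_homography(img, world)
print(apply_homography(H, img).round(3))
```

With exactly four correspondences the homography is exact, so mapping the image points back should recover the world points; with more (noisy) boundary points the same SVD gives a least-squares fit.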

  9. Jitter compensation circuit

    DOEpatents

    Sullivan, James S.; Ball, Don G.

    1997-01-01

    The instantaneous V_co signal on a charging capacitor is sampled and the charge voltage on capacitor C_o is captured just prior to its discharge into the first stage of the magnetic modulator. The captured signal is applied to an averaging circuit with a long time constant and to the positive input terminal of a differential amplifier. The averaged V_co signal is split between a gain stage (G = 0.975) and a feedback stage that determines the slope of the voltage ramp applied to the high speed comparator. The 97.5% portion of the averaged V_co signal is applied to the negative input of a differential amplifier gain stage (G = 10). The differential amplifier produces an error signal by subtracting 97.5% of the averaged V_co signal from the instantaneous value of the sampled V_co signal and multiplying the difference by ten. The resulting error signal is applied to the positive input of a high speed comparator. The error signal is then compared to a voltage ramp that is proportional to the averaged V_co value squared divided by the total volt-second product of the magnetic compression circuit.

  10. Jitter compensation circuit

    DOEpatents

    Sullivan, J.S.; Ball, D.G.

    1997-09-09

    The instantaneous V_co signal on a charging capacitor is sampled and the charge voltage on capacitor C_o is captured just prior to its discharge into the first stage of the magnetic modulator. The captured signal is applied to an averaging circuit with a long time constant and to the positive input terminal of a differential amplifier. The averaged V_co signal is split between a gain stage (G = 0.975) and a feedback stage that determines the slope of the voltage ramp applied to the high speed comparator. The 97.5% portion of the averaged V_co signal is applied to the negative input of a differential amplifier gain stage (G = 10). The differential amplifier produces an error signal by subtracting 97.5% of the averaged V_co signal from the instantaneous value of the sampled V_co signal and multiplying the difference by ten. The resulting error signal is applied to the positive input of a high speed comparator. The error signal is then compared to a voltage ramp that is proportional to the averaged V_co value squared divided by the total volt-second product of the magnetic compression circuit. 11 figs.

  11. Accuracy assessment of high-rate GPS measurements for seismology

    NASA Astrophysics Data System (ADS)

    Elosegui, P.; Davis, J. L.; Ekström, G.

    2007-12-01

    Analysis of GPS measurements with a controlled laboratory system, built to simulate the ground motions caused by tectonic earthquakes and other transient geophysical signals such as glacial earthquakes, enables us to assess the technique of high-rate GPS. The root-mean-square (rms) position error of this system when undergoing realistic simulated seismic motions is 0.05 mm, with maximum position errors of 0.1 mm, thus providing "ground truth" GPS displacements. We have acquired an extensive set of high-rate GPS measurements while inducing seismic motions on a GPS antenna mounted on this system with a temporal spectrum similar to real seismic events. We found that, for a particular 15-min-long test event, the rms error of the 1-Hz GPS position estimates was 2.5 mm, with maximum position errors of 10 mm, and the error spectrum of the GPS estimates was approximately flicker noise. These results may however represent a best-case scenario, since they were obtained over a short (~10 m) baseline, thereby greatly mitigating baseline-dependent errors, and when the number and distribution of satellites in the sky was good. For example, we have determined that the rms error can increase by a factor of 2-3 as the GPS constellation changes throughout the day, with an average value of 3.5 mm for eight identical, hourly-spaced, consecutive test events. The rms error also increases with increasing baseline length, as one would expect, with an average rms error of 9 mm for a ~1400 km baseline. We will present an assessment of the accuracy of high-rate GPS based on these measurements, discuss the implications of this study for seismology, and describe new applications in glaciology.

  12. A SEASAT SASS simulation experiment to quantify the errors related to a ±3 hour intermittent assimilation technique

    NASA Technical Reports Server (NTRS)

    Sylvester, W. B.

    1984-01-01

    A series of SEASAT repeat orbits over a sequence of best low center positions is simulated using the Seatrak satellite calculator. These low centers are, upon appropriate interpolation to hourly positions, located at various times during the ±3 hour assimilation cycle. Error analysis for a sample of best cyclone center positions taken from the Atlantic and Pacific oceans reveals a minimum average error of 1.1 deg of longitude and a standard deviation of 0.9 deg of longitude. The magnitude of the average error suggests that, by utilizing the ±3 hour window in the assimilation cycle, the quality of the SASS data is degraded to the level of the background. A further consequence of this assimilation scheme is the effect of blending two or more juxtaposed vector winds that generally possess different properties (vector quantity and time). The outcome is to reduce gradients in the wind field and to deform isobaric and frontal patterns of the initial field.

  13. Study on the three-station typical network deployments of workspace Measurement and Positioning System

    NASA Astrophysics Data System (ADS)

    Xiong, Zhi; Zhu, J. G.; Xue, B.; Ye, Sh. H.; Xiong, Y.

    2013-10-01

    As a novel network coordinate measurement system based on multi-directional positioning, the workspace Measurement and Positioning System (wMPS) has the outstanding advantages of good parallelism, wide measurement range and high measurement accuracy, which have made it a research hotspot and an important development direction in the field of large-scale measurement. Since station deployment has a significant impact on the measurement range and accuracy, and also constrains the cost of use, this paper investigates the optimization of station deployment. First, a positioning error model was established. Then, focusing on the small network consisting of three stations, the typical deployments and their error distribution characteristics were studied. Finally, by measuring a simulated fuselage using the typical deployments at an industrial site and comparing the results with a Laser Tracker, some conclusions were obtained. The comparison shows that, under existing prototype conditions, the I_3 typical deployment, in which the three stations are distributed in a straight line, has an average error of 0.30 mm and a maximum error of 0.50 mm over a range of 12 m. Meanwhile, the C_3 typical deployment, in which the three stations are uniformly distributed over a semicircle, has an average error of 0.17 mm and a maximum error of 0.28 mm. Clearly, the C_3 typical deployment controls precision better than the I_3 type. This research provides effective theoretical support for global measurement network optimization in future work.

  14. Characteristics of the BDS Carrier Phase Multipath and Its Mitigation Methods in Relative Positioning

    PubMed Central

    Dai, Wujiao; Shi, Qiang; Cai, Changsheng

    2017-01-01

    The carrier phase multipath effect is one of the most significant error sources in the precise positioning of the BeiDou Navigation Satellite System (BDS). We analyzed the characteristics of BDS multipath, and found that the multipath errors of geostationary earth orbit (GEO) satellite signals are systematic, whereas those of inclined geosynchronous orbit (IGSO) or medium earth orbit (MEO) satellites are both systematic and random. Modified multipath mitigation methods, including a sidereal filtering algorithm and a multipath hemispherical map (MHM) model, were used to improve BDS dynamic deformation monitoring. The results indicate that the sidereal filtering methods can reduce the root mean square (RMS) of positioning errors in the east, north and vertical coordinate directions by 15%, 37%, 25% and 18%, 51%, 27% in the coordinate and observation domains, respectively. By contrast, the MHM method can reduce the RMS by 22%, 52% and 27% on average. In addition, the BDS multipath errors in static baseline solutions are a few centimeters in multipath-rich environments, which is different from GPS (Global Positioning System) multipath. Therefore, we add a parameter representing the GEO multipath error to the observation equation of the adjustment model to improve the precision of BDS static baseline solutions. The results show that the modified model can achieve an average precision improvement of 82%, 54% and 68% in the east, north and up coordinate directions, respectively. PMID:28387744
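Coordinate-domain sidereal filtering of the kind evaluated above amounts to subtracting the previous day's multipath residuals, advanced by the orbit repeat lag, from today's series; a toy sketch with synthetic residuals (the 240 s lag, sample rate and signal shapes are assumptions of this sketch, not BDS values):

```python
import numpy as np

def sidereal_filter(day1, day2, lag_s, rate_hz=1.0):
    """Coordinate-domain sidereal filtering sketch: subtract the previous
    day's residual series, advanced by the orbit repeat-period shortfall,
    from today's series. day1, day2: residuals (m); lag_s: shortfall (s)."""
    shift = int(round(lag_s * rate_hz))
    n = min(len(day1) - shift, len(day2))
    return day2[:n] - day1[shift:shift + n]

rng = np.random.default_rng(7)
t = np.arange(6000)                                  # 1 Hz samples
multipath = 0.01 * np.sin(2 * np.pi * t / 300.0)     # repeating multipath (m)
lag = 240                                            # assumed repeat shortfall (s)
day1 = multipath + 0.002 * rng.normal(size=t.size)
# Next day the same geometry-driven pattern arrives `lag` seconds earlier.
day2 = np.roll(multipath, -lag) + 0.002 * rng.normal(size=t.size)

filtered = sidereal_filter(day1, day2, lag)
before = np.sqrt(np.mean(day2**2))
after = np.sqrt(np.mean(filtered**2))
print(f"RMS before {before:.4f} m, after {after:.4f} m")
```

The filter removes the repeating (systematic) part and leaves only uncorrelated noise, which is why it works well for GEO/IGSO signals with strongly systematic multipath but cannot touch the random component.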

  15. Characteristics of the BDS Carrier Phase Multipath and Its Mitigation Methods in Relative Positioning.

    PubMed

    Dai, Wujiao; Shi, Qiang; Cai, Changsheng

    2017-04-07

    The carrier phase multipath effect is one of the most significant error sources in the precise positioning of the BeiDou Navigation Satellite System (BDS). We analyzed the characteristics of BDS multipath, and found that the multipath errors of geostationary earth orbit (GEO) satellite signals are systematic, whereas those of inclined geosynchronous orbit (IGSO) or medium earth orbit (MEO) satellites are both systematic and random. Modified multipath mitigation methods, including a sidereal filtering algorithm and a multipath hemispherical map (MHM) model, were used to improve BDS dynamic deformation monitoring. The results indicate that the sidereal filtering methods can reduce the root mean square (RMS) of positioning errors in the east, north and vertical coordinate directions by 15%, 37%, 25% and 18%, 51%, 27% in the coordinate and observation domains, respectively. By contrast, the MHM method can reduce the RMS by 22%, 52% and 27% on average. In addition, the BDS multipath errors in static baseline solutions are a few centimeters in multipath-rich environments, which is different from GPS (Global Positioning System) multipath. Therefore, we add a parameter representing the GEO multipath error to the observation equation of the adjustment model to improve the precision of BDS static baseline solutions. The results show that the modified model can achieve an average precision improvement of 82%, 54% and 68% in the east, north and up coordinate directions, respectively.

  16. Error analysis of 3D-PTV through unsteady interfaces

    NASA Astrophysics Data System (ADS)

    Akutina, Yulia; Mydlarski, Laurent; Gaskin, Susan; Eiff, Olivier

    2018-03-01

    The feasibility of stereoscopic flow measurements through an unsteady optical interface is investigated. Position errors produced by a wavy optical surface are determined analytically, as are the optimal viewing angles of the cameras to minimize such errors. Two methods of measuring the resulting velocity errors are proposed. These methods are applied to 3D particle tracking velocimetry (3D-PTV) data obtained through the free surface of a water flow within a cavity adjacent to a shallow channel. The experiments were performed under two sets of conditions, one having no strong surface perturbations and the other exhibiting surface gravity waves. In the latter case, the amplitude of the gravity waves was 6% of the water depth, resulting in water surface inclinations of about 0.2°. (The water depth is used herein as the relevant length scale because the measurements are performed over the entire water column. In a more general case, the relevant scale is the maximum distance from the interface to the measurement plane, H, which here is the same as the water depth.) It was found that the contribution of the waves to the overall measurement error is low. The absolute position errors of the system were moderate (1.2% of H). However, given that the velocity is calculated from the relative displacement of a particle between two frames, the errors in the measured water velocities were reasonably small, because the error in the velocity is the relative position error over the average displacement distance. The relative position error was measured to be 0.04% of H, resulting in small velocity errors of 0.3% of the free-stream velocity (equivalent to 1.1% of the average velocity in the domain). It is concluded that even though the absolute positions to which the velocity vectors are assigned are distorted by the unsteady interface, the magnitudes of the velocity vectors themselves remain accurate as long as the waves are slowly varying (have low curvature). The stronger the disturbances on the interface (higher amplitude, shorter wavelength), the smaller the distance from the interface at which measurements can be performed.

  17. Experimental investigation of false positive errors in auditory species occurrence surveys

    USGS Publications Warehouse

    Miller, David A.W.; Weir, Linda A.; McClintock, Brett T.; Grant, Evan H. Campbell; Bailey, Larissa L.; Simons, Theodore R.

    2012-01-01

    False positive errors are a significant component of many ecological data sets and, in combination with false negative errors, can lead to severe biases in conclusions about ecological systems. We present results of a field experiment in which observers recorded observations for known combinations of electronically broadcast calling anurans under conditions mimicking field surveys to determine species occurrence. Our objectives were to characterize false positive error probabilities for auditory methods based on a large number of observers, to determine whether targeted instruction could reduce false positive error rates, and to establish useful predictors of among-observer and among-species differences in error rates. We recruited 31 observers, ranging in ability from novice to expert, who recorded detections for 12 species during 180 calling trials (66,960 total observations). All observers made multiple false positive errors, and on average 8.1% of recorded detections in the experiment were false positive errors. Additional instruction had only minor effects on error rates. After instruction, false positive error probabilities decreased by 16% for treatment individuals compared with controls, although the confidence interval broadly overlapped 0 (95% CI: -46 to 30%). This coincided with an increase in false negative errors due to the treatment (26%; -3 to 61%). Differences among observers in false positive and false negative error rates were best predicted by scores from an online test and a self-assessment of observer ability completed prior to the field experiment. In contrast, years of experience conducting call surveys was a weak predictor of error rates. False positive errors were also more common for species that were played more frequently, but were not related to the dominant spectral frequency of the call. Our results corroborate other work demonstrating that false positives are a significant component of species occurrence data collected by auditory methods. Instructing observers to report only detections they are completely certain are correct is not sufficient to eliminate errors. As a result, analytical methods that account for false positive errors will be needed, and independent testing of observer ability is a useful predictor of among-observer variation in observation error rates.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xing, Y; Macq, B; Bondar, L

    Purpose: To quantify the accuracy of predicting the Bragg peak position using simulated in-room measurements of prompt gamma (PG) emissions for realistic treatment error scenarios that combine several sources of error. Methods: Prompt gamma measurements by a knife-edge slit camera were simulated using an experimentally validated analytical simulation tool. Simulations were performed, for 143 treatment error scenarios, on an anthropomorphic phantom and a pencil beam scanning plan for the nasal cavity. Three types of errors were considered: translation along each axis, rotation around each axis, and CT-calibration errors, with magnitudes ranging, respectively, between -3 and 3 mm, between -5 and 5 degrees, and between -5 and +5%. We investigated the correlation between the Bragg peak (BP) shift and the horizontal shift of the PG profiles. The shifts were calculated between the planned (reference) position and the position under the error scenario. The prediction error for one spot was calculated as the absolute difference between the PG profile shift and the BP shift. Results: The PG shift was significantly and strongly correlated with the BP shift for 92% of the cases (p<0.0001, Pearson correlation coefficient R>0.8). Moderate but significant correlations were obtained for all cases that considered only CT-calibration errors and for one case that combined translation and CT errors (p<0.0001, R between 0.61 and 0.8). The average prediction errors for the simulated scenarios ranged between 0.08±0.07 and 1.67±1.3 mm (grand mean 0.66±0.76 mm). The prediction error was moderately correlated with the value of the BP shift (p=0, R=0.64). For the simulated scenarios, the average BP shift ranged between -8±6.5 mm and 3±1.1 mm. Scenarios that combined the largest treatment errors were associated with large BP shifts.
Conclusion: Simulations of in-room measurements demonstrate that prompt gamma profiles provide reliable estimation of the Bragg peak position for complex error scenarios. Yafei Xing and Luiza Bondar are funded by BEWARE grants from the Walloon Region. The work presents simulation results for a prompt gamma camera prototype developed by IBA.
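    The horizontal shift between a measured prompt-gamma profile and its planned reference can be estimated by cross-correlation. A minimal sketch of that step (the `profile_shift` helper and the uniform grid spacing are illustrative assumptions, not the authors' validated simulation tool):

```python
import numpy as np

def profile_shift(reference, measured, dx=1.0):
    """Horizontal shift (mm) between two prompt-gamma depth profiles.

    Profiles are assumed sampled on the same grid with spacing dx (mm).
    A positive value means `measured` is displaced toward higher depth.
    """
    r = np.asarray(reference, float) - np.mean(reference)
    m = np.asarray(measured, float) - np.mean(measured)
    # Full cross-correlation; the peak location gives the lag in samples
    corr = np.correlate(m, r, mode="full")
    lag = int(corr.argmax()) - (len(r) - 1)
    return lag * dx
```

On this model, the per-spot prediction error quoted in the abstract would be the absolute difference between this PG profile shift and the Bragg peak shift.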

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chengqiang, L; Yin, Y; Chen, L

    Purpose: To investigate the impact of MLC position errors on simultaneous integrated boost intensity-modulated radiotherapy (SIB-IMRT) for patients with nasopharyngeal carcinoma (NPC). Methods: To compare the dosimetric differences between simulated and clinical plans, ten patients with locally advanced NPC treated with SIB-IMRT were enrolled in this study. All plans were calculated with an inverse planning system (Pinnacle3, Philips Medical Systems). Random errors (−2 mm to 2 mm), shift errors (2 mm, 1 mm and 0.5 mm) and systematic extension/contraction errors (±2 mm, ±1 mm and ±0.5 mm) of the MLC leaf positions were introduced into the original plans to create the simulated plans. Dosimetric factors were compared between the original and simulated plans. Results: The dosimetric impact of random and systematic shift errors of MLC position was insignificant within 2 mm: the maximum changes in D95% of PGTV, PTV1 and PTV2 were −0.92±0.51%, 1.00±0.24% and 0.62±0.17%; the maximum changes in D0.1cc of the spinal cord and brainstem were 1.90±2.80% and −1.78±1.42%; and the maximum changes in the Dmean of the parotids were 1.36±1.23% and −2.25±2.04%. However, the impact of MLC extension or contraction errors was significant. For 2 mm leaf extension errors, the average changes in D95% of PGTV, PTV1 and PTV2 were 4.31±0.67%, 4.29±0.65% and 4.79±0.82%; the average D0.1cc to the spinal cord and brainstem increased by 7.39±5.25% and 6.32±2.28%; and the mean doses to the left and right parotids increased by 12.75±2.02% and 13.39±2.17%, respectively. Conclusion: The dosimetric effect was insignificant for random MLC leaf position errors up to 2 mm, but dose distributions were highly sensitive to MLC extension or contraction errors. Attention should be paid to anatomic changes in targets and normal structures during the treatment course, and individualized adaptive radiotherapy is recommended.

  20. An Adaptive 6-DOF Tracking Method by Hybrid Sensing for Ultrasonic Endoscopes

    PubMed Central

    Du, Chengyang; Chen, Xiaodong; Wang, Yi; Li, Junwei; Yu, Daoyin

    2014-01-01

    In this paper, a novel hybrid sensing method for tracking an ultrasonic endoscope within the gastrointestinal (GI) tract is presented, and a prototype of the tracking system is developed. We implement 6-DOF localization by sensing integration and information fusion. On the hardware level, a tri-axis gyroscope and accelerometer, and a magnetic angular rate and gravity (MARG) sensor array are attached at the end of the endoscope, and three symmetric cylindrical coils are placed around the patient's abdomen. On the algorithm level, an adaptive fast quaternion convergence (AFQC) algorithm is introduced to determine orientation by fusing inertial/magnetic measurements, in which the effects of magnetic disturbance and acceleration are estimated to obtain an adaptive convergence output. A simplified electro-magnetic tracking (SEMT) algorithm for three-dimensional position is also implemented, which can easily integrate the AFQC's results and magnetic measurements. With reasonable settings, the average position error is under 0.3 cm and the average orientation error is 1° without noise. If magnetic disturbance or acceleration exists, the average orientation error can be kept below 3.5°. PMID:24915179
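    The AFQC algorithm itself is not spelled out in the abstract. As an illustration of the general idea of inertial/magnetic fusion with an adaptive correction gain, here is a minimal complementary-filter sketch; the function names, the gain heuristic, and the gravity-only correction are assumptions, not the authors' method:

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2])

def fuse_step(q, gyro, accel, dt, base_gain=0.1):
    """One fusion step: gyro propagation plus accelerometer tilt correction.

    The correction gain shrinks when |accel| deviates from 1 g, i.e. the
    accelerometer is trusted less during dynamic motion (the same spirit as
    the adaptive convergence described in the abstract).
    """
    # Gyro propagation: dq/dt = 0.5 * q ⊗ (0, ω)
    q = q + 0.5 * dt * quat_mul(q, np.array([0.0, *gyro]))
    q /= np.linalg.norm(q)
    a_norm = np.linalg.norm(accel)
    gain = base_gain * max(0.0, 1.0 - abs(a_norm - 9.81) / 9.81)
    if a_norm > 1e-9 and gain > 0.0:
        # Gravity direction predicted by q, expressed in the body frame
        w, x, y, z = q
        g_pred = np.array([2*(x*z - w*y), 2*(w*x + y*z), w*w - x*x - y*y + z*z])
        a_meas = accel / a_norm
        # Small corrective rotation from predicted toward measured gravity
        err = np.cross(g_pred, a_meas)
        q = q + 0.5 * gain * quat_mul(q, np.array([0.0, *err]))
        q /= np.linalg.norm(q)
    return q
```

A magnetometer term would be handled analogously, with its gain reduced when the field magnitude departs from the local reference, which is how magnetic disturbance can be down-weighted.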

  1. Reliability and Validity Assessment of a Linear Position Transducer

    PubMed Central

    Garnacho-Castaño, Manuel V.; López-Lastra, Silvia; Maté-Muñoz, José L.

    2015-01-01

    The objectives of the study were to determine the validity and reliability of peak velocity (PV), average velocity (AV), peak power (PP) and average power (AP) measurements made using a linear position transducer. Validity was assessed by comparing measurements simultaneously obtained using the Tendo Weightlifting Analyzer System and the T-Force Dynamic Measurement System (Ergotech, Murcia, Spain) during two resistance exercises, bench press (BP) and full back squat (BS), performed by 71 trained male subjects. For the reliability study, a further 32 men completed both lifts using the Tendo Weightlifting Analyzer System in two identical testing sessions one week apart (session 1 vs. session 2). Intraclass correlation coefficients (ICCs) indicating the validity of the Tendo Weightlifting Analyzer System were high, with values ranging from 0.853 to 0.989. Systematic biases and random errors were low to moderate for almost all variables, being higher for PP (bias ±157.56 W; error ±131.84 W). Proportional biases were identified for almost all variables. Test-retest reliability was strong, with ICCs ranging from 0.922 to 0.988. Reliability results also showed minimal systematic biases and random errors, which were significant only for PP (bias −19.19 W; error ±67.57 W). Only PV recorded in the BS showed no significant proportional bias. The Tendo Weightlifting Analyzer System emerged as a reliable system for measuring movement velocity and estimating power in resistance exercises. The low biases and random errors observed here (mainly for AV and AP) make this device a useful tool for monitoring resistance training. Key points: This study determined the validity and reliability of peak velocity, average velocity, peak power and average power measurements made using a linear position transducer. The Tendo Weightlifting Analyzer System emerged as a reliable system for measuring movement velocity and power. PMID:25729300
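    The systematic bias and random error figures quoted above are typically computed Bland-Altman style from paired device measurements. A minimal sketch, assuming the random error is reported as 1.96 × SD of the paired differences (the exact convention used in the study is not stated in the abstract):

```python
import numpy as np

def bland_altman(device_a, device_b):
    """Systematic bias and random error between two paired measurement sets.

    Returns (bias, random_error): bias is the mean of the differences, and
    random_error is 1.96 * SD of the differences (limits-of-agreement style).
    """
    a = np.asarray(device_a, dtype=float)
    b = np.asarray(device_b, dtype=float)
    diff = a - b
    bias = float(diff.mean())
    random_error = float(1.96 * diff.std(ddof=1))
    return bias, random_error
```

A proportional bias, also mentioned in the abstract, would additionally be tested by regressing the differences on the means of the pairs.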

  2. Mapping stream habitats with a global positioning system: Accuracy, precision, and comparison with traditional methods

    USGS Publications Warehouse

    Dauwalter, D.C.; Fisher, W.L.; Belt, K.C.

    2006-01-01

    We tested the precision and accuracy of the Trimble GeoXT global positioning system (GPS) handheld receiver on point and area features and compared estimates of stream habitat dimensions (e.g., lengths and areas of riffles and pools) that were made in three different Oklahoma streams using the GPS receiver and a tape measure. The precision of differentially corrected GPS (DGPS) points was not affected by the number of GPS position fixes (i.e., geographic location estimates) averaged per DGPS point. Horizontal error of points ranged from 0.03 to 2.77 m and did not differ with the number of position fixes per point. The error of area measurements ranged from 0.1% to 110.1% but decreased as the area increased. Again, error was independent of the number of position fixes averaged per polygon corner. The estimates of habitat lengths, widths, and areas did not differ when measured using two methods of data collection (GPS and a tape measure), nor did the differences among methods change at three stream sites with contrasting morphologies. Measuring features with a GPS receiver was up to 3.3 times faster on average than using a tape measure, although signal interference from high streambanks or overhanging vegetation occasionally limited satellite signal availability and prolonged measurements with a GPS receiver. There were also no differences in precision of habitat dimensions when mapped using a continuous versus a position fix average GPS data collection method. Despite there being some disadvantages to using the GPS in stream habitat studies, measuring stream habitats with a GPS resulted in spatially referenced data that allowed the assessment of relative habitat position and changes in habitats over time, and was often faster than using a tape measure. For most spatial scales of interest, the precision and accuracy of DGPS data are adequate and have logistical advantages when compared to traditional methods of measurement. © 2006 Springer Science+Business Media, Inc.
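    The horizontal error of a DGPS point obtained by averaging several position fixes is simply the distance between the averaged fix and the true location. A minimal sketch, assuming coordinates already projected to metres (easting/northing):

```python
import numpy as np

def horizontal_error(fixes_xy, true_xy):
    """Horizontal error of a point estimated by averaging n position fixes.

    fixes_xy: (n, 2) array of easting/northing fixes for one point feature.
    Returns the Euclidean distance (m) between the averaged fix and truth.
    """
    est = np.asarray(fixes_xy, dtype=float).mean(axis=0)
    return float(np.hypot(*(est - np.asarray(true_xy, dtype=float))))
```

Comparing this error across different numbers of fixes per point is the kind of test the study reports (finding no dependence on fix count).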

  3. Why GPS makes distances bigger than they are

    PubMed Central

    Ranacher, Peter; Brunauer, Richard; Trutschnig, Wolfgang; Van der Spek, Stefan; Reich, Siegfried

    2016-01-01

    Global navigation satellite systems such as the Global Positioning System (GPS) are among the most important sensors for movement analysis. GPS is widely used to record the trajectories of vehicles, animals and human beings. However, all GPS movement data are affected by both measurement and interpolation errors. In this article we show that measurement error causes a systematic bias in distances recorded with a GPS; the distance between two points recorded with a GPS is, on average, bigger than the true distance between these points. This systematic 'overestimation of distance' becomes relevant if the influence of interpolation error can be neglected, which in practice is the case for movement sampled at high frequencies. We provide a mathematical explanation of this phenomenon and illustrate that it functionally depends on the autocorrelation of GPS measurement error (C). We argue that C can be interpreted as a quality measure for movement data recorded with a GPS. If there is a strong autocorrelation between any two consecutive position estimates, they have very similar errors, which cancel out when average speed, distance or direction is calculated along the trajectory. Based on our theoretical findings we introduce a novel approach to determine C in real-world GPS movement data sampled at high frequencies. We apply our approach to pedestrian and car trajectories and find that the measurement error in the data was strongly spatially and temporally autocorrelated; we give a quality estimate of the data. Most importantly, our findings are not limited to GPS alone. The systematic bias and its implications are bound to occur in any movement data collected with absolute positioning if interpolation error can be neglected. PMID:27019610
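    The overestimation effect, and its dependence on error autocorrelation, can be reproduced with a small Monte Carlo experiment. A hedged sketch: the AR(1) noise model, step size and noise magnitude below are illustrative assumptions, not the paper's data or estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

def recorded_length(n=1000, step=1.0, sigma=3.0, rho=0.0):
    """Recorded path length for a straight true path with AR(1) GPS noise.

    rho is the lag-1 autocorrelation of the per-coordinate measurement
    error; rho = 0 gives independent errors at every fix.
    """
    true_x = np.arange(n) * step

    def ar1(size):
        # e_t = rho * e_{t-1} + w_t, with stationary variance sigma^2
        w = rng.normal(0.0, sigma, size) * np.sqrt(1.0 - rho**2)
        e = np.empty(size)
        e[0] = rng.normal(0.0, sigma)
        for t in range(1, size):
            e[t] = rho * e[t - 1] + w[t]
        return e

    x = true_x + ar1(n)
    y = ar1(n)
    # Sum of step lengths along the noisy recorded trajectory
    return float(np.hypot(np.diff(x), np.diff(y)).sum())

true_len = 999.0                       # 1000 fixes, 1 m apart
iid_len = recorded_length(rho=0.0)     # uncorrelated error: large overestimate
corr_len = recorded_length(rho=0.95)   # strongly autocorrelated: much closer
```

With independent errors the recorded length is several times the true length; strong autocorrelation makes consecutive errors nearly cancel in each step, shrinking the bias, which is exactly why C acts as a quality measure.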

  4. Average capacity of the ground to train communication link of a curved track in the turbulence of gamma-gamma distribution

    NASA Astrophysics Data System (ADS)

    Yang, Yanqiu; Yu, Lin; Zhang, Yixin

    2017-04-01

    A model of the average capacity of an optical wireless communication link with pointing errors for the ground-to-train channel of a curved track is established based on non-Kolmogorov turbulence. By adopting the gamma-gamma distribution model, we derive the average capacity expression for this channel. The numerical analysis reveals that heavier fog reduces the average capacity of the link. For a larger average link capacity, the strength of atmospheric turbulence, the variance of pointing errors and the covered track length need to be reduced, while the normalized beamwidth and the average signal-to-noise ratio (SNR) of the turbulence-free link need to be increased. We can increase the transmit aperture to expand the beamwidth and enhance the signal intensity, thereby decreasing the impact of beam wander accordingly. When the system adopts automatic beam tracking at the receiver positioned on the roof of the train, eliminating the pointing errors caused by beam wander and train vibration, the equivalent average capacity of the channel achieves its maximum value. The impact of variations of the non-Kolmogorov spectral index on the average capacity of the link can be ignored.

  5. Direct evidence for a position input to the smooth pursuit system.

    PubMed

    Blohm, Gunnar; Missal, Marcus; Lefèvre, Philippe

    2005-07-01

    When objects move in our environment, the orientation of the visual axis in space requires the coordination of two types of eye movements: saccades and smooth pursuit. The principal input to the saccadic system is position error, whereas it is velocity error for the smooth pursuit system. Recently, it has been shown that catch-up saccades to moving targets are triggered and programmed by using velocity error in addition to position error. Here, we show that, when a visual target is flashed during ongoing smooth pursuit, it evokes a smooth eye movement toward the flash. The velocity of this evoked smooth movement is proportional to the position error of the flash; it is neither influenced by the velocity of the ongoing smooth pursuit eye movement nor by the occurrence of a saccade, but the effect is absent if the flash is ignored by the subject. Furthermore, the response started around 85 ms after the flash presentation and decayed with an average time constant of 276 ms. Thus this is the first direct evidence of a position input to the smooth pursuit system. This study shows further evidence for a coupling between saccadic and smooth pursuit systems. It also suggests that there is an interaction between position and velocity error signals in the control of more complex movements.
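    The reported decay of the flash-evoked smooth response (time constant of 276 ms) is the kind of quantity obtained by fitting an exponential to the velocity trace. A minimal log-linear fitting sketch on noise-free synthetic data (illustrative only; the study's actual fitting procedure is not described in the abstract):

```python
import numpy as np

def fit_time_constant(t, v):
    """Least-squares fit of v(t) = A * exp(-t / tau); returns (A, tau).

    Assumes all v > 0 so the fit can be done log-linearly:
    log v = log A - t / tau is a straight line in t.
    """
    t = np.asarray(t, dtype=float)
    v = np.asarray(v, dtype=float)
    slope, intercept = np.polyfit(t, np.log(v), 1)
    return float(np.exp(intercept)), float(-1.0 / slope)
```

For noisy data a nonlinear fit (e.g., `scipy.optimize.curve_fit`) would be preferable, since log-transforming reweights the residuals.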

  6. Comparing the TYCHO Catalogue with CCD Astrograph Observations

    NASA Astrophysics Data System (ADS)

    Zacharias, N.; Hoeg, E.; Urban, S. E.; Corbin, T. E.

    1997-08-01

    Selected fields around radio-optical reference frame sources have been observed with the U.S. Naval Observatory CCD astrograph (UCA). This telescope is equipped with a red-corrected 206mm 5-element lens and a 4k by 4k CCD camera which provides a 1 square degree field of view. Positions with internal precisions of 20 mas for stars in the 7 to 12 magnitude range have been obtained with 30 second exposures. A comparison is made with the Tycho Catalogue, which is accurate to about 5 to 50 mas at mean epoch of J1991.25, depending on the magnitude of the star. Preliminary proper motions are obtained using the Astrographic Catalogue (AC) to update the Tycho positions to the epoch of the UCA observations, which adds an error contribution of about 15 to 20 mas. Individual CCD frames have been reduced with an average of 30 Tycho reference stars per frame. A linear plate model gives an average adjustment standard error of 46 mas, consistent with the internal errors. The UCA is capable of significantly improving the positions of Tycho stars fainter than about visual magnitude 9.5.
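    The "linear plate model" is a six-constant affine transformation from measured CCD coordinates to catalogue standard coordinates; the quoted 46 mas adjustment standard error corresponds to the residual RMS of such a fit. A minimal least-squares sketch (function and variable names are illustrative):

```python
import numpy as np

def fit_plate_model(meas_xy, ref_xy):
    """Six-constant linear plate solution: ref ≈ A @ [x, y, 1].

    meas_xy: (n, 2) measured CCD coordinates of reference stars.
    ref_xy:  (n, 2) catalogue standard coordinates of the same stars.
    Returns the 2x3 transform and the RMS residual of the adjustment.
    """
    m = np.asarray(meas_xy, dtype=float)
    r = np.asarray(ref_xy, dtype=float)
    design = np.column_stack([m, np.ones(len(m))])      # (n, 3)
    coef, *_ = np.linalg.lstsq(design, r, rcond=None)   # (3, 2)
    resid = design @ coef - r
    rms = float(np.sqrt((resid**2).mean()))
    return coef.T, rms
```

With ~30 Tycho reference stars per frame, the fit is strongly overdetermined, so the residual RMS is a meaningful estimate of the combined reference and measurement error.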

  7. Asymmetric affective forecasting errors and their correlation with subjective well-being

    PubMed Central

    2018-01-01

    Aims Social scientists have postulated that the discrepancy between achievements and expectations affects individuals' subjective well-being. Still, little has been done to qualify and quantify such a psychological effect. Our empirical analysis assesses the consequences of positive and negative affective forecasting errors—the difference between realized and expected subjective well-being—on the subsequent level of subjective well-being. Data We use longitudinal data on a representative sample of 13,431 individuals from the German Socio-Economic Panel. In our sample, 52% of individuals are females, average age is 43 years, average years of education is 11.4 and 27% of our sample lives in East Germany. Subjective well-being (measured by self-reported life satisfaction) is assessed on a 0–10 discrete scale and its sample average is equal to 6.75 points. Methods We develop a simple theoretical framework to assess the consequences of positive and negative affective forecasting errors—the difference between realized and expected subjective well-being—on the subsequent level of subjective well-being, properly accounting for the endogenous adjustment of expectations to positive and negative affective forecasting errors, and use it to derive testable predictions. Given the theoretical framework, we estimate two panel-data equations, the first depicting the association between positive and negative affective forecasting errors and the successive level of subjective well-being and the second describing the correlation between subjective well-being expectations for the future and hedonic failures and successes. Our models control for individual fixed effects and a large battery of time-varying demographic characteristics, health and socio-economic status. Results and conclusions While surpassing expectations is uncorrelated with subjective well-being, failing to match expectations is negatively associated with subsequent realizations of subjective well-being. 
Expectations are positively (negatively) correlated to positive (negative) forecasting errors. We speculate that in the first case the positive adjustment in expectations is strong enough to cancel out the potential positive effects on subjective well-being of beaten expectations, while in the second case it is not, and individuals persistently bear the negative emotional consequences of not achieving expectations. PMID:29513685
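    The panel equations with individual fixed effects described above are typically estimated with the within (demeaning) transformation. A minimal sketch on synthetic data; this is the generic estimator, not the paper's full SOEP specification with time-varying controls:

```python
import numpy as np

def within_estimator(y, X, ids):
    """Fixed-effects (within) estimator: demean y and X per individual, then OLS.

    y: (n,) outcomes; X: (n, k) regressors; ids: (n,) individual identifiers.
    Demeaning removes time-invariant individual heterogeneity (the fixed
    effects) before the slope coefficients are estimated.
    """
    y = np.asarray(y, dtype=float)
    X = np.asarray(X, dtype=float)
    ids = np.asarray(ids)
    yd, Xd = y.copy(), X.copy()
    for i in np.unique(ids):
        m = ids == i
        yd[m] -= y[m].mean()
        Xd[m] -= X[m].mean(axis=0)
    beta, *_ = np.linalg.lstsq(Xd, yd, rcond=None)
    return beta
```

The point of the transformation is that a person's stable disposition (e.g., baseline life satisfaction) cannot confound the estimated association between forecasting errors and subsequent well-being.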

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fuangrod, T; Simpson, J; Greer, P

    Purpose: A real-time patient treatment delivery verification system using EPID (Watchdog) has been developed as an advanced patient safety tool. In a pilot study, data were acquired for 119 prostate and head and neck (HN) IMRT patient deliveries to generate body-site-specific action limits using statistical process control. The purpose of this study is to determine the sensitivity of Watchdog in detecting clinically significant errors during treatment delivery. Methods: Watchdog utilizes a physics-based model to generate a series of predicted transit cine EPID images as a reference data set, and compares these in real-time to measured transit cine EPID images acquired during treatment using chi comparison (4%, 4 mm criteria) after the initial 2 s of treatment to allow for dose ramp-up. Four study cases were used: dosimetric (monitor unit) errors of 5%, 7% and 10% in prostate (7 fields) and HN (9 fields) IMRT treatments, and positioning (systematic displacement) errors of 5 mm, 7 mm and 10 mm in the same treatments. These errors were introduced by modifying the patient CT scan and re-calculating the predicted EPID data set. The error-embedded predicted EPID data sets were compared to the measured EPID data acquired during patient treatment. The treatment delivery percentage (measured from 2 s) at which Watchdog detected the error was determined. Results: Watchdog detected all simulated errors for all fields during delivery. The dosimetric errors were detected at average treatment delivery percentages of (4%, 0%, 0%) and (7%, 0%, 0%) for prostate and HN respectively. For patient positional errors, the average treatment delivery percentages were (52%, 43%, 25%) and (39%, 16%, 6%). Conclusion: These results suggest that Watchdog can detect significant dosimetric and positioning errors in prostate and HN IMRT treatments in real-time, allowing for treatment interruption. Displacements of the patient take longer to detect; however, an incorrect body site or very large geographic miss will be detected rapidly.

  9. Automatic learning rate adjustment for self-supervising autonomous robot control

    NASA Technical Reports Server (NTRS)

    Arras, Michael K.; Protzel, Peter W.; Palumbo, Daniel L.

    1992-01-01

    Described is an application in which an Artificial Neural Network (ANN) controls the positioning of a robot arm with five degrees of freedom by using visual feedback provided by two cameras. This application and the specific ANN model, local linear maps, are based on the work of Ritter, Martinetz, and Schulten. We extended their approach by generating a filtered, average positioning error from the continuous camera feedback and by coupling the learning rate to this error. When the network learns to position the arm, the positioning error decreases and so does the learning rate, until the system stabilizes at a minimum error and learning rate. This eliminates the need for a predetermined cooling schedule. The automatic cooling procedure results in a closed-loop control with no distinction between a learning phase and a production phase. If the positioning error suddenly starts to increase due to an internal failure, such as a broken joint, or an environmental change, such as a camera moving, the learning rate increases accordingly. Thus, learning is automatically activated and the network adapts to the new condition, after which the error decreases again and learning is 'shut off'. The automatic cooling is therefore a prerequisite for the autonomy and fault tolerance of the system.
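    The error-coupled learning rate described above can be sketched in a few lines. A minimal illustration, assuming an exponential low-pass filter for the average error and a proportional coupling (the gains and bounds below are illustrative, not from the paper):

```python
def update_learning_rate(error, filtered_error, lr,
                         smoothing=0.9, gain=0.05, lr_min=1e-4, lr_max=0.5):
    """Couple the learning rate to a low-pass-filtered positioning error.

    The filtered error tracks the recent average error; the learning rate is
    proportional to it, so learning 'cools' as the arm converges and
    re-ignites if the error grows (e.g., a camera is moved or a joint breaks).
    """
    filtered_error = smoothing * filtered_error + (1.0 - smoothing) * error
    lr = min(lr_max, max(lr_min, gain * filtered_error))
    return filtered_error, lr
```

Because the rate is recomputed continuously from observed error, there is no separate learning phase: the same loop serves both training and production, as the abstract describes.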

  10. Development of a patient positioning error compensation tool for Korea Heavy-Ion Medical Accelerator Treatment Center

    NASA Astrophysics Data System (ADS)

    Kim, Min-Joo; Suh, Tae-Suk; Cho, Woong; Jung, Won-Gyun

    2015-07-01

    In this study, a potential validation tool for compensating for patient positioning errors was developed by using 2D/3D and 3D/3D image registration. For 2D/3D registration, digitally reconstructed radiography (DRR) and three-dimensional computed tomography (3D-CT) images were used. The ray-casting algorithm is the most straightforward method for generating a DRR, so we adopted the traditional ray-casting method, which finds the intersections of each ray with the voxels of the 3D-CT volume. The similarity between the generated DRR and the orthogonal image was measured by using a normalized mutual information method. Two orthogonal images were acquired from a CyberKnife system in the anterior-posterior (AP) and right lateral (RL) views. The 3D-CT and the two orthogonal images of an anthropomorphic phantom and of the head and neck of a cancer patient were used in this study. For 3D/3D registration, planning CT and in-room CT images were used. After registration, the translation and rotation factors were calculated to position a couch movable in six degrees of freedom. Registration accuracies with average errors of 2.12 mm ± 0.50 mm for translations and 1.23° ± 0.40° for rotations were obtained by using 2D/3D registration with the anthropomorphic Alderson-Rando phantom. In addition, average errors of 0.90 mm ± 0.30 mm for translations and 1.00° ± 0.20° for rotations were obtained by using the CT image sets. We demonstrated that this validation tool can compensate for patient positioning errors. This research could be a fundamental step toward compensating for patient positioning errors at the Korea Heavy-ion Medical Accelerator Treatment Center.
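    The normalized mutual information similarity used for DRR-to-radiograph matching can be sketched with a joint intensity histogram. The bin count and the NMI = (H(A)+H(B))/H(A,B) definition below are common choices, assumed here rather than taken from the study:

```python
import numpy as np

def normalized_mutual_information(img_a, img_b, bins=32):
    """NMI = (H(A) + H(B)) / H(A, B), from a joint intensity histogram.

    A standard similarity measure for 2D/3D registration: it peaks when the
    two images are best aligned, and for this definition lies in (1, 2].
    """
    joint, _, _ = np.histogram2d(np.ravel(img_a), np.ravel(img_b), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())
```

In a registration loop, the DRR is regenerated for each candidate pose and the pose maximizing NMI against the acquired orthogonal image is kept.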

  11. NOTE: Optimization of megavoltage CT scan registration settings for thoracic cases on helical tomotherapy

    NASA Astrophysics Data System (ADS)

    Woodford, Curtis; Yartsev, Slav; Van Dyk, Jake

    2007-08-01

    This study aims to investigate the settings that provide optimum registration accuracy when registering megavoltage CT (MVCT) studies acquired on tomotherapy with planning kilovoltage CT (kVCT) studies of patients with lung cancer. For each experiment, the systematic difference between the actual and planned positions of the thorax phantom was determined by setting the phantom up at the planning isocenter, generating and registering an MVCT study. The phantom was translated by 5 or 10 mm, MVCT scanned, and registration was performed again. A root-mean-square equation that calculated the residual error of the registration based on the known shift and systematic difference was used to assess the accuracy of the registration process. The phantom study results for 18 combinations of different MVCT/kVCT registration options are presented and compared to clinical registration data from 17 lung cancer patients. MVCT studies acquired with coarse (6 mm), normal (4 mm) and fine (2 mm) slice spacings could all be registered with similar residual errors. No specific combination of resolution and fusion selection technique resulted in a lower residual error. A scan length of 6 cm with any slice spacing registered with the full image fusion selection technique and fine resolution will result in a low residual error most of the time. On average, large corrections made manually by clinicians to the automatic registration values are infrequent. Small manual corrections within the residual error averages of the registration process occur, but their impact on the average patient position is small. Registrations using the full image fusion selection technique and fine resolution of 6 cm MVCT scans with coarse slices have a low residual error, and this strategy can be clinically used for lung cancer patients treated on tomotherapy. 
Automatic registration values are accurate on average, and a quick verification on a sagittal MVCT slice should be enough to detect registration outliers.
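    The residual error of a known-shift registration test can be written directly from the quantities described above: a perfect registration recovers the applied shift plus the baseline systematic difference. A minimal sketch (argument names are illustrative):

```python
import numpy as np

def residual_error(known_shift, registration_shift, systematic):
    """Root-mean-square residual of an automatic registration test.

    known_shift: the applied phantom translation (mm, 3-vector).
    registration_shift: the shift reported by the MVCT/kVCT registration.
    systematic: the baseline offset measured with the phantom at isocenter.
    """
    k = np.asarray(known_shift, dtype=float)
    r = np.asarray(registration_shift, dtype=float)
    s = np.asarray(systematic, dtype=float)
    resid = r - (k + s)  # what the registration failed to account for
    return float(np.sqrt((resid**2).mean()))
```

Repeating this over slice spacings, scan lengths and fusion techniques gives the per-setting residual errors the study compares.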

  12. Quantification of evaporation induced error in atom probe tomography using molecular dynamics simulation.

    PubMed

    Chen, Shu Jian; Yao, Xupei; Zheng, Changxi; Duan, Wen Hui

    2017-11-01

    Non-equilibrium molecular dynamics was used to simulate the dynamics of atoms at the atom probe surface, and five objective functions were used to quantify errors. The results suggested that, before ionization, thermal vibration and collision caused the atoms to displace up to 1 Å and 25 Å, respectively. The average atom displacements were found to vary between 0.2 and 0.5 Å. About 9 to 17% of the atoms were affected by collision. Due to the effects of collision and ion-ion repulsion, the back-calculated positions were on average 0.3-0.5 Å different from the pre-ionized positions of the atoms when the number of ions generated per pulse was minimal. This difference could increase up to 8-10 Å when 1.5 ions/nm² were evaporated per pulse. On the basis of the results, surface ion density was considered an important factor that needs to be controlled to minimize error in the evaporation process. Copyright © 2017. Published by Elsevier B.V.

  13. Design of a Pneumatic Tool for Manual Drilling Operations in Confined Spaces

    NASA Astrophysics Data System (ADS)

    Janicki, Benjamin

    This master's thesis describes the design process and testing results for a pneumatically actuated, manually-operated tool for confined space drilling operations. The purpose of this device is to back-drill pilot holes inside a commercial airplane wing. It is lightweight, and a "locator pin" enables the operator to align the drill over a pilot hole. A suction pad stabilizes the system, and an air motor and flexible drive shaft power the drill. Two testing procedures were performed to determine the practicality of this prototype. The first was the "offset drill test", which quantified the exit hole position error due to an initial position error relative to the original pilot hole. The results displayed a linear relationship, and it was determined that position errors of less than .060" would prevent the need for rework, with errors of up to .030" considered acceptable. For the second test, a series of holes were drilled with the pneumatic tool and analyzed for position error, diameter range, and cycle time. The position errors and hole diameter range were within the allowed tolerances. The average cycle time was 45 seconds, 73 percent of which was for drilling the hole, and 27 percent of which was for positioning the device. Recommended improvements are discussed in the conclusion, and include a more durable flexible drive shaft, a damper for drill feed control, and a more stable locator pin.

  14. On the timing problem in optical PPM communications.

    NASA Technical Reports Server (NTRS)

    Gagliardi, R. M.

    1971-01-01

    Investigation of the effects of imperfect timing in a direct-detection (noncoherent) optical system using pulse-position-modulation bits. Special emphasis is placed on specification of timing accuracy, and an examination of system degradation when this accuracy is not attained. Bit error probabilities are shown as a function of timing errors, from which average error probabilities can be computed for specific synchronization methods. Of significant importance is shown to be the presence of a residual, or irreducible error probability, due entirely to the timing system, that cannot be overcome by the data channel.
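    The irreducible error floor can be illustrated with a toy Monte Carlo in which Gaussian slot-timing jitter splits pulse energy between adjacent slots. All parameters (slot count, photon counts, jitter magnitude, dark counts) are illustrative assumptions, not the paper's analytical model:

```python
import numpy as np

rng = np.random.default_rng(1)

def ppm_symbol_error(n_sym=20000, slots=4, mean_photons=50.0,
                     jitter_sigma=0.15, dark=0.1):
    """Symbol error rate of direct-detection M-ary PPM with timing jitter.

    A timing offset eps (in slot widths) moves a fraction of the pulse
    energy into the neighbouring slot; the receiver picks the slot with
    the largest photon count. Offsets beyond half a slot flip the decision
    even without noise, producing an error floor that increased signal
    power cannot remove.
    """
    errors = 0
    for _ in range(n_sym):
        true_slot = int(rng.integers(slots))
        counts = rng.poisson(dark, slots)          # dark counts in all slots
        eps = rng.normal(0.0, jitter_sigma)        # timing error, slot widths
        frac = min(abs(eps), 1.0)                  # energy spilled next door
        neighbour = true_slot + (1 if eps > 0 else -1)
        counts[true_slot] += rng.poisson(mean_photons * (1.0 - frac))
        if 0 <= neighbour < slots:
            counts[neighbour] += rng.poisson(mean_photons * frac)
        if int(np.argmax(counts)) != true_slot:
            errors += 1
    return errors / n_sym

weak = ppm_symbol_error(mean_photons=5.0)      # noise-limited regime
strong = ppm_symbol_error(mean_photons=500.0)  # timing-limited regime
```

Raising the photon count drives down noise-induced errors but leaves the timing-induced floor, which is the qualitative behaviour the paper attributes entirely to the timing system.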

  15. Astrometric observations of visual binaries using 26-inch refractor during 2007-2014 at Pulkovo

    NASA Astrophysics Data System (ADS)

    Izmailov, I. S.; Roshchina, E. A.

    2016-04-01

    We present the results of 15184 astrometric observations of 322 visual binaries carried out in 2007-2014 at Pulkovo observatory. In 2007, the 26-inch refractor (F = 10413 mm, D = 65 cm) was equipped with the CCD camera FLI ProLine 09000 (FOV 12' × 12', 3056 × 3056 pixels, 0.238 arcsec pixel⁻¹). Telescope automation and the installation of a weather monitoring system allowed us to increase the number of observations significantly. Visual binary and multiple systems with angular separations in the interval 1.1″-78.6″ (7.3″ on average) were included in the observing program. The results were studied in detail for systematic errors using calibration star pairs. No dependence of the errors on temperature, pressure, or hour angle was detected. The dependence of the 26-inch refractor's scale on temperature was taken into account in the calculations. The accuracy of measurement of a single CCD image is in the range of 0.0005″ to 0.289″, 0.021″ on average along both coordinates. Mean errors in annual average values of angular separation and position angle are 0.005″ and 0.04°, respectively. The results are available at http://izmccd.puldb.ru/vds.htm and in the Strasbourg Astronomical Data Center (CDS). In the catalog, the separations and position angles per night of observation and as annual averages are presented, together with errors for all values and standard deviations of a single observation. We also present a comparison of 50 pairs of stars with known orbital solutions against their ephemerides.
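    The separation and position angle reported per observation follow from the measured relative coordinates of the pair. A minimal tangent-plane sketch; the input convention (degrees in, position angle measured from north through east) is an assumption for illustration:

```python
import numpy as np

def separation_position_angle(ra1, dec1, ra2, dec2):
    """Angular separation (arcsec) and position angle (deg, N through E)
    of a companion (ra2, dec2) relative to a primary (ra1, dec1).

    Inputs in degrees; uses the small-angle tangent-plane approximation,
    appropriate for visual binaries a few arcseconds apart.
    """
    dra = np.radians(ra2 - ra1) * np.cos(np.radians(dec1))  # true east offset
    ddec = np.radians(dec2 - dec1)                          # north offset
    rho = np.degrees(np.hypot(dra, ddec)) * 3600.0
    theta = np.degrees(np.arctan2(dra, ddec)) % 360.0
    return float(rho), float(theta)
```

For wide pairs or work at high declination, a rigorous spherical-trigonometry formula would replace the small-angle approximation.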

  16. Real-Time Identification of Wheel Terrain Interaction Models for Enhanced Autonomous Vehicle Mobility

    DTIC Science & Technology

    2014-04-24

    [Figure residue: bar charts of slip estimation error and position estimation error (cm, roughly 0-12 cm scales) comparing a color-statistics method with the approach of Angelova et al.; one panel is titled "Position Estimation Error: Global Pose" and one ratio is labeled (Color_Statistics_Error) / Average_Slip_Error. The report notes that data (pending clearance for releasing pose and odometry data) were collected at the following sites: Taylor, Gascola, Somerset, and Fort Bliss.]

  17. In vivo dose verification method in catheter based high dose rate brachytherapy.

    PubMed

    Jaselskė, Evelina; Adlienė, Diana; Rudžianskas, Viktoras; Urbonavičius, Benas Gabrielis; Inčiūra, Arturas

    2017-12-01

    In vivo dosimetry is a powerful tool for dose verification in radiotherapy. Its application in high dose rate (HDR) brachytherapy is usually limited to the estimation of gross errors, due to the inability of the dosimetry system/method to record non-uniform dose distributions in the steep dose gradient fields close to the radioactive source. In vivo dose verification in interstitial catheter-based HDR brachytherapy is crucial, since the treatment is performed by inserting the radioactive source at certain positions within catheters that are pre-implanted into the tumour. We propose an in vivo dose verification method for this type of brachytherapy treatment which is based on the comparison between experimentally measured and theoretical dose values calculated at well-defined locations corresponding to dosemeter positions in the catheter. Dose measurements were performed using TLD 100-H rods (6 mm long, 1 mm diameter) inserted in certain sequences into an additionally pre-implanted dosimetry catheter. The adjustment of dosemeter positioning in the catheter was performed using reconstructed CT scans of the patient with pre-implanted catheters. Doses to three Head & Neck and one Breast cancer patient were measured during several randomly selected treatment fractions. It was found that the average experimental dose error varied from 4.02% to 12.93% during independent in vivo dosimetry control measurements for the selected Head & Neck cancer patients, and from 7.17% to 8.63% for the Breast cancer patient. The average experimental dose error was below the AAPM recommended margin of 20% and did not exceed the measurement uncertainty of 17.87% estimated for this type of dosemeter. A tendency of slightly increasing average dose error was observed in every subsequent treatment fraction of the same patient. It was linked to changes in the theoretically estimated dosemeter positions due to possible patient organ movement between treatment fractions, since catheter reconstruction was performed for the first treatment fraction only. These findings indicate potential for further average dose error reduction in catheter-based brachytherapy by at least 2-3% if catheter locations are adjusted before each subsequent treatment fraction, although this requires more detailed investigation. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  18. Sensitivity in error detection of patient specific QA tools for IMRT plans

    NASA Astrophysics Data System (ADS)

    Lat, S. Z.; Suriyapee, S.; Sanghangthum, T.

    2016-03-01

The high complexity of dose calculation in treatment planning and the accurate delivery of IMRT plans require high-precision verification methods. The purpose of this study is to investigate the error detection capability of patient specific QA tools for IMRT plans. Two H&N and two prostate IMRT plans were studied with the MapCHECK2 and portal dosimetry QA tools. Measurements were undertaken for the original plans and for modified plans with intentionally introduced errors. The intentional errors comprised prescribed dose changes (±2 to ±6%) and position shifts in the X-axis and Y-axis (±1 to ±5mm). After measurement, gamma pass rates of the original and modified plans were compared. The average gamma pass rates for the original H&N and prostate plans were 98.3% and 100% for MapCHECK2, and 95.9% and 99.8% for portal dosimetry, respectively. In the H&N plan, MapCHECK2 could detect position shift errors starting from 3mm, while portal dosimetry could detect errors starting from 2mm. Both devices showed similar sensitivity in the detection of position shift errors in the prostate plan. For the H&N plan, MapCHECK2 could detect dose errors starting at ±4%, whereas portal dosimetry could detect them from ±2%. For the prostate plan, both devices could identify dose errors starting from ±4%. The sensitivity of error detection depends on the type of error and on plan complexity.
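The gamma pass rates compared above fold a dose-difference tolerance and a distance-to-agreement (DTA) criterion into a single index. A minimal 1D sketch of a global gamma calculation, assuming a 3%/3 mm criterion and a brute-force search (function and variable names are ours, not the QA vendors' implementation):

```python
import numpy as np

def gamma_1d(ref_pos, ref_dose, meas_pos, meas_dose, dta=3.0, dose_frac=0.03):
    """Global 1D gamma: for each measured point, take the minimum over all
    reference points of sqrt((dist/DTA)^2 + (dose_diff/tolerance)^2)."""
    tol = dose_frac * ref_dose.max()          # global dose-difference criterion
    gamma = np.empty(len(meas_pos))
    for i, (x, d) in enumerate(zip(meas_pos, meas_dose)):
        dist = (ref_pos - x) / dta
        diff = (ref_dose - d) / tol
        gamma[i] = np.sqrt(dist ** 2 + diff ** 2).min()
    return gamma

x = np.linspace(0.0, 100.0, 101)                  # positions in mm
dose = np.exp(-((x - 50.0) / 20.0) ** 2)          # synthetic dose profile
g = gamma_1d(x, dose, x, dose)                    # identical distributions
pass_rate = 100.0 * np.mean(g <= 1.0)
```

For identical distributions the index is zero at every point, so the pass rate (fraction of points with gamma ≤ 1) is 100%; introducing a dose scaling or a spatial shift lowers it.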

  19. Early math and reading achievement are associated with the error positivity.

    PubMed

    Kim, Matthew H; Grammer, Jennie K; Marulis, Loren M; Carrasco, Melisa; Morrison, Frederick J; Gehring, William J

    2016-12-01

Executive functioning (EF) and motivation are associated with academic achievement and error-related ERPs. The present study explores whether early academic skills predict variability in the error-related negativity (ERN) and error positivity (Pe). Data from 113 three- to seven-year-old children in a Go/No-Go task revealed that stronger early reading and math skills predicted a larger Pe. Closer examination revealed that this relation was quadratic and significant for children performing at or near grade level, but not significant for above-average achievers. Early academics did not predict the ERN. These findings suggest that the Pe - which reflects individual differences in motivational processes as well as attention - may be associated with early academic achievement. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  20. Error Estimation for the Linearized Auto-Localization Algorithm

    PubMed Central

    Guevara, Jorge; Jiménez, Antonio R.; Prieto, Jose Carlos; Seco, Fernando

    2012-01-01

    The Linearized Auto-Localization (LAL) algorithm estimates the position of beacon nodes in Local Positioning Systems (LPSs), using only the distance measurements to a mobile node whose position is also unknown. The LAL algorithm calculates the inter-beacon distances, used for the estimation of the beacons’ positions, from the linearized trilateration equations. In this paper we propose a method to estimate the propagation of the errors of the inter-beacon distances obtained with the LAL algorithm, based on a first order Taylor approximation of the equations. Since the method depends on such approximation, a confidence parameter τ is defined to measure the reliability of the estimated error. Field evaluations showed that by applying this information to an improved weighted-based auto-localization algorithm (WLAL), the standard deviation of the inter-beacon distances can be improved by more than 30% on average with respect to the original LAL method. PMID:22736965
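The first-order Taylor idea above, propagating uncertainty through the distance equations via the gradient, can be sketched for a single inter-beacon distance (a simplified stand-alone illustration, not the LAL equations themselves; variable names and the isotropic covariances are invented):

```python
import numpy as np

def propagated_distance_std(p1, p2, cov1, cov2):
    """First-order (Taylor) propagation of beacon position uncertainty into
    the inter-beacon distance d = ||p1 - p2||.  The gradient of d w.r.t. p1
    is the unit vector along p1 - p2 (and its negative w.r.t. p2), so
    var(d) ~= J cov1 J^T + J cov2 J^T."""
    diff = p1 - p2
    d = np.linalg.norm(diff)
    J = diff / d                                  # gradient of d w.r.t. p1
    var = J @ cov1 @ J + J @ cov2 @ J
    return d, np.sqrt(var)

p1, p2 = np.array([0.0, 0.0]), np.array([3.0, 4.0])
cov = 0.01 * np.eye(2)                            # 10 cm std per coordinate
d, sd = propagated_distance_std(p1, p2, cov, cov)
```

With both beacons at 10 cm isotropic uncertainty, the 3-4-5 geometry gives d = 5 m and a propagated standard deviation of sqrt(0.02) ~ 14 cm.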

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, M; Suh, T; Cho, W

    Purpose: A potential validation tool for compensating patient positioning error was developed using 2D/3D and 3D/3D image registration. Methods: For 2D/3D registration, digitally reconstructed radiography (DRR) and three-dimensional computed tomography (3D-CT) images were applied. The ray-casting algorithm is the most straightforward method for generating DRR. We adopted the traditional ray-casting method, which finds the intersections of a ray with all objects, voxels of the 3D-CT volume in the scene. The similarity between the extracted DRR and orthogonal image was measured by using a normalized mutual information method. Two orthogonal images were acquired from a Cyber-Knife system from the anterior-posterior (AP)more » and right lateral (RL) views. The 3D-CT and two orthogonal images of an anthropomorphic phantom and head and neck cancer patient were used in this study. For 3D/3D registration, planning CT and in-room CT image were applied. After registration, the translation and rotation factors were calculated to position a couch to be movable in six dimensions. Results: Registration accuracies and average errors of 2.12 mm ± 0.50 mm for transformations and 1.23° ± 0.40° for rotations were acquired by 2D/3D registration using an anthropomorphic Alderson-Rando phantom. In addition, registration accuracies and average errors of 0.90 mm ± 0.30 mm for transformations and 1.00° ± 0.2° for rotations were acquired using CT image sets. Conclusion: We demonstrated that this validation tool could compensate for patient positioning error. In addition, this research could be the fundamental step for compensating patient positioning error at the first Korea heavy-ion medical accelerator treatment center.« less

  2. Medium-Range Forecast Skill for Extraordinary Arctic Cyclones in Summer of 2008-2016

    NASA Astrophysics Data System (ADS)

    Yamagami, Akio; Matsueda, Mio; Tanaka, Hiroshi L.

    2018-05-01

    Arctic cyclones (ACs) are a severe atmospheric phenomenon that affects the Arctic environment. This study assesses the forecast skill of five leading operational medium-range ensemble forecasts for 10 extraordinary ACs that occurred in summer during 2008-2016. Average existence probability of the predicted ACs was >0.9 at lead times of ≤3.5 days. Average central position error of the predicted ACs was less than half of the mean radius of the 10 ACs (469.1 km) at lead times of 2.5-4.5 days. Average central pressure error of the predicted ACs was 5.5-10.7 hPa at such lead times. Therefore, the operational ensemble prediction systems generally predict the position of ACs within 469.1 km 2.5-4.5 days before they mature. The forecast skill for the extraordinary ACs is lower than that for midlatitude cyclones in the Northern Hemisphere but similar to that in the Southern Hemisphere.

  3. High-precision approach to localization scheme of visible light communication based on artificial neural networks and modified genetic algorithms

    NASA Astrophysics Data System (ADS)

    Guan, Weipeng; Wu, Yuxiang; Xie, Canyu; Chen, Hao; Cai, Ye; Chen, Yingcong

    2017-10-01

An indoor positioning algorithm based on visible light communication (VLC) is presented. This algorithm is used to calculate a three-dimensional (3-D) coordinate in an indoor optical wireless environment, which includes sufficient orders of multipath reflections from the reflecting surfaces of the room. Leveraging the global optimization ability of the genetic algorithm (GA), an innovative framework for 3-D position estimation based on a modified genetic algorithm is proposed. Unlike other techniques using VLC for positioning, the proposed system can achieve indoor 3-D localization without making assumptions about the height or acquiring the orientation angle of the mobile terminal. Simulation results show that an average localization error of less than 1.02 cm can be achieved. In addition, in most VLC-positioning systems the effect of reflection is neglected, yet performance is limited by reflection; this makes the results less accurate in real scenarios, and the positioning errors at the corners are relatively larger than elsewhere. We therefore take the first-order reflection into consideration and use an artificial neural network to model the nonlinear channel. The studies show that, under the nonlinear matching of direct and reflected channels, the average positioning error at the four corners decreases from 11.94 to 0.95 cm. The employed algorithm emerges as an effective and practical method for indoor localization and outperforms other existing indoor wireless localization approaches.
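As an illustration of the genetic-algorithm idea (not the authors' modified GA; the anchor layout, noiseless range measurements and every hyperparameter are invented for the sketch), a receiver position can be evolved to minimize range residuals:

```python
import numpy as np

rng = np.random.default_rng(1)
anchors = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])  # LED positions
true_pos = np.array([1.5, 3.2])
meas = np.linalg.norm(anchors - true_pos, axis=1)        # noiseless ranges

def fitness(pop):
    """Negative sum of squared range residuals (higher is better)."""
    d = np.linalg.norm(anchors[None, :, :] - pop[:, None, :], axis=2)
    return -((d - meas) ** 2).sum(axis=1)

pop = rng.uniform(0.0, 5.0, size=(60, 2))                # random initial population
for _ in range(80):
    f = fitness(pop)
    parents = pop[np.argsort(f)[-30:]]                   # truncation selection
    kids = parents[rng.integers(0, 30, 60)] + rng.normal(0, 0.1, (60, 2))  # mutation
    kids[:10] = parents[-10:]                            # elitism: keep the best 10
    pop = np.clip(kids, 0.0, 5.0)

best = pop[np.argmax(fitness(pop))]
err = np.linalg.norm(best - true_pos)
```

Elitism guarantees the best candidate never regresses, so with a smooth residual surface the population converges to within a few centimeters of the true position in a few dozen generations.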

  4. SU-E-T-318: The Effect of Patient Positioning Errors On Target Coverage and Cochlear Dose in Stereotactic Radiosurgery Treatment of Acoustic Neuromas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dellamonica, D.; Luo, G.; Ding, G.

Purpose: Setup errors on the order of millimeters may cause under-dosing of targets and significant changes in dose to critical structures, especially when planning with tight margins in stereotactic radiosurgery. This study evaluates the effects of these types of patient positioning uncertainties on planning target volume (PTV) coverage and cochlear dose for stereotactic treatments of acoustic neuromas. Methods: Twelve acoustic neuroma patient treatment plans were retrospectively evaluated in Brainlab iPlan RT Dose 4.1.3. All treatment beams were shaped by HDMLC from a Varian TX machine. Seven patients had planning margins of 2mm, five had 1–1.5mm. Six treatment plans were created for each patient simulating a 1mm setup error in six possible directions: anterior-posterior, lateral, and superior-inferior. The arcs and HDMLC shapes were kept the same for each plan. The change in PTV coverage and mean dose to the cochlea was evaluated for each plan. Results: The average change in PTV coverage for the 72 simulated plans was −1.7% (range: −5 to +1.1%). The largest average change in coverage was observed for shifts in the patient's superior direction (−2.9%). The change in mean cochlear dose was highly dependent upon the direction of the shift. Shifts in the anterior and superior directions resulted in an average increase in dose of 13.5% and 3.8%, respectively, while shifts in the posterior and inferior directions resulted in an average decrease in dose of 17.9% and 10.2%. The average change in dose to the cochlea was 13.9% (range: 1.4 to 48.6%). No difference was observed based on the size of the planning margin. Conclusion: This study indicates that if the positioning uncertainty is kept within 1mm, the setup errors may not result in significant under-dosing of the acoustic neuroma target volumes. However, the change in mean cochlear dose is highly dependent upon the direction of the shift.

  5. WE-H-BRC-05: Catastrophic Error Metrics for Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murphy, S; Molloy, J

Purpose: Intuitive evaluation of complex radiotherapy treatments is impractical, while data transfer anomalies create the potential for catastrophic treatment delivery errors. Contrary to prevailing wisdom, logical scrutiny can be applied to patient-specific machine settings. Such tests can be automated, applied at the point of treatment delivery, and dissociated from prior states of the treatment plan, potentially revealing errors introduced early in the process. Methods: Analytical metrics were formulated for conventional and intensity modulated RT (IMRT) treatments. These were designed to assess consistency between monitor unit settings, wedge values, prescription dose and leaf positioning (IMRT). Institutional metric averages for 218 clinical plans were stratified over multiple anatomical sites. Treatment delivery errors were simulated using a commercial treatment planning system and metric behavior assessed via receiver-operator-characteristic (ROC) analysis. A positive result was returned if the erred plan metric value exceeded a given number of standard deviations, e.g. 2. The finding was declared true positive if the dosimetric impact exceeded 25%. ROC curves were generated over a range of metric standard deviations. Results: Data for the conventional treatment metric indicated standard deviations of 3%, 12%, 11%, 8%, and 5% for brain, pelvis, abdomen, lung and breast sites, respectively. Optimum error declaration thresholds yielded true positive rates (TPR) between 0.7 and 1, and false positive rates (FPR) between 0 and 0.2. Two proposed IMRT metrics possessed standard deviations of 23% and 37%. The superior metric returned TPR and FPR of 0.7 and 0.2, respectively, when both leaf position and MU errors were modelled. Isolation to only leaf position errors yielded TPR and FPR values of 0.9 and 0.1. Conclusion: Logical tests can reveal treatment delivery errors and prevent large, catastrophic errors.
Analytical metrics are able to identify errors in monitor units, wedging and leaf positions with favorable sensitivity and specificity. Supported in part by Varian.
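The error declaration described, flagging a plan whose metric deviates from the site-stratified institutional average by more than a chosen number of standard deviations, can be sketched as a z-score check (the site statistics below are placeholders, not the study's data):

```python
# Illustrative site-stratified metric statistics as (mean, standard deviation);
# real institutional values would be estimated from historical clinical plans.
SITE_STATS = {"brain": (1.00, 0.03), "pelvis": (1.00, 0.12), "breast": (1.00, 0.05)}

def flag_plan(metric_value, site, n_sigma=2.0):
    """Return True (suspected delivery error) when the plan's consistency
    metric lies more than n_sigma standard deviations from the site average."""
    mean, sd = SITE_STATS[site]
    return abs(metric_value - mean) > n_sigma * sd

ok = flag_plan(1.04, "brain")        # about 1.3 sigma: not flagged
bad = flag_plan(1.20, "brain")       # well beyond 2 sigma: flagged
```

Sweeping `n_sigma` and tallying true/false positives against simulated erred plans is what traces out the ROC curves mentioned in the abstract.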

  6. VizieR Online Data Catalog: V and R CCD photometry of visual binaries (Abad+, 2004)

    NASA Astrophysics Data System (ADS)

    Abad, C.; Docobo, J. A.; Lanchares, V.; Lahulla, J. F.; Abelleira, P.; Blanco, J.; Alvarez, C.

    2003-11-01

Table 1 gives relevant data for the visual binaries observed. Observations were carried out over a short period of time; therefore we assign the mean epoch (1998.58) to the totality of the data. Data for individual stars are presented, per parameter, as averages with errors when multiple observations were available, together with the number of observations involved. Errors corresponding to astrometric relative positions between components are always present. For single observations, parameter fitting errors, especially for the dx and dy parameters, have been calculated by analysing the chi2 statistic around the minimum. Following the rules of error propagation, the theta and rho errors can then be estimated. Table 1 therefore shows single-observation errors with an additional significant digit. When a star does not have known references, we include it in Table 2, where the J2000 position and magnitudes are from the USNO-A2.0 catalogue (Monet et al., 1998, Cat. ). (2 data files).
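The propagation from fitted (dx, dy) errors to (theta, rho) errors follows the standard first-order rules for rho = sqrt(dx^2 + dy^2) and theta = atan2(dy, dx); a sketch assuming independent errors (function name ours):

```python
import math

def polar_errors(dx, dy, s_dx, s_dy):
    """First-order propagation of independent errors in the Cartesian
    separation (dx, dy) to the polar separation (rho, theta):
      s_rho   = sqrt((dx*s_dx)^2 + (dy*s_dy)^2) / rho
      s_theta = sqrt((dy*s_dx)^2 + (dx*s_dy)^2) / rho^2   (radians)"""
    rho = math.hypot(dx, dy)
    s_rho = math.sqrt((dx * s_dx) ** 2 + (dy * s_dy) ** 2) / rho
    s_theta = math.sqrt((dy * s_dx) ** 2 + (dx * s_dy) ** 2) / rho ** 2
    return rho, s_rho, s_theta

# 3-4-5 separation in arcsec with 0.01 arcsec errors on each axis
rho, s_rho, s_theta = polar_errors(3.0, 4.0, 0.01, 0.01)
```

With equal axis errors the separation error equals the per-axis error (0.01 arcsec here) and the position-angle error is that value divided by rho, i.e. 0.002 rad.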

  7. WE-A-17A-03: Catheter Digitization in High-Dose-Rate Brachytherapy with the Assistance of An Electromagnetic (EM) Tracking System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Damato, AL; Bhagwat, MS; Buzurovic, I

Purpose: To investigate the use of a system using EM tracking, postprocessing and error-detection algorithms for measuring brachytherapy catheter locations and for detecting errors and resolving uncertainties in treatment-planning catheter digitization. Methods: An EM tracker was used to localize 13 catheters in a clinical surface applicator (A) and 15 catheters inserted into a phantom (B). Two pairs of catheters in (B) crossed paths at a distance <2 mm, producing an undistinguishable catheter artifact in that location. EM data was post-processed for noise reduction and reformatted to provide the dwell location configuration. CT-based digitization was automatically extracted from the brachytherapy plan DICOM files (CT). EM dwell digitization error was characterized in terms of the average and maximum distance between corresponding EM and CT dwells per catheter. The error detection rate (detected errors / all errors) was calculated for 3 types of errors: swap of two catheter numbers; incorrect catheter number identification superior to the closest position between two catheters (mix); and catheter-tip shift. Results: The averages ± 1 standard deviation of the average and maximum registration error per catheter were 1.9±0.7 mm and 3.0±1.1 mm for (A) and 1.6±0.6 mm and 2.7±0.8 mm for (B). The error detection rate was 100% (A and B) for swap errors, mix errors, and shifts >4.5 mm (A) and >5.5 mm (B); errors were detected for shifts on average >2.0 mm (A) and >2.4 mm (B). Both mix errors associated with undistinguishable catheter artifacts were detected and at least one of the involved catheters was identified. Conclusion: We demonstrated the use of an EM tracking system for localization of brachytherapy catheters, detection of digitization errors and resolution of undistinguishable catheter artifacts. Automatic digitization may be possible with a registration between the imaging and the EM frames of reference. Research funded by the Kaye Family Award 2012.

  8. Comparison of Low Cost Photogrammetric Survey with TLS and Leica Pegasus Backpack 3D Models

    NASA Astrophysics Data System (ADS)

    Masiero, A.; Fissore, F.; Guarnieri, A.; Piragnolo, M.; Vettore, A.

    2017-11-01

This paper considers Leica backpack and photogrammetric surveys of a mediaeval bastion in Padua, Italy. Furthermore, a terrestrial laser scanning (TLS) survey is considered in order to provide a state-of-the-art reconstruction of the bastion. Although control points are typically used to avoid deformations in photogrammetric surveys and to ensure correct scaling of the reconstruction, in this paper a different approach is considered: this work is part of a project aiming at the development of a system exploiting ultra-wide band (UWB) devices to provide correct scaling of the reconstruction. In particular, low cost Pozyx UWB devices are used to estimate camera positions during image acquisition. Then, in order to obtain a metric reconstruction, the scale factor of the photogrammetric survey is estimated by comparing camera positions obtained from UWB measurements with those obtained from the photogrammetric reconstruction. Compared with the TLS survey, the considered photogrammetric model of the bastion results in an RMSE of 21.9cm, average error 13.4cm, and standard deviation 13.5cm. Excluding the final part of the bastion's left wing, where the presence of several poles makes reconstruction more difficult, the RMSE fitting error is 17.3cm, average error 11.5cm, and standard deviation 9.5cm. In contrast, comparison of the Leica backpack and TLS surveys leads to an average error of 4.7cm and standard deviation of 0.6cm (4.2cm and 0.3cm, respectively, when excluding the final part of the left wing).

  9. SU-F-E-09: Respiratory Signal Prediction Based On Multi-Layer Perceptron Neural Network Using Adjustable Training Samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, W; Jiang, M; Yin, F

Purpose: Dynamic tracking of moving organs, such as lung and liver tumors, under radiation therapy requires prediction of organ motion prior to delivery. The shift of a moving organ may vary considerably because respiration changes substantially between different periods. This study aims to reduce the influence of those changes using adjustable training signals and a multi-layer perceptron neural network (ASMLP). Methods: Respiratory signals obtained using a Real-time Position Management (RPM) device were used for this study. The ASMLP uses two multi-layer perceptron neural networks (MLPs) to infer respiration position alternately, and the training sample is updated over time. Firstly, a Savitzky-Golay finite impulse response smoothing filter was established to smooth the respiratory signal. Secondly, two identical MLPs were developed to estimate respiratory position from its previous positions separately. Weights and thresholds were updated to minimize network errors according to the Levenberg-Marquardt optimization algorithm through the backward propagation method. Finally, MLP 1 was used to predict the 120∼150s respiration positions using the 0∼120s training signals. At the same time, MLP 2 was trained using the 30∼150s training signals. MLP 2 was then used to predict the 150∼180s respiration positions from the 30∼150s training signals. The respiration position was predicted in this way until the end of the signal. Results: In this experiment, the two methods were used to predict 2.5 minutes of respiratory signals. For predicting 1s ahead of response time, the correlation coefficient was improved from 0.8250 (MLP method) to 0.8856 (ASMLP method). Besides, a 30% improvement of mean absolute error between MLP (0.1798 on average) and ASMLP (0.1267 on average) was achieved. For predicting 2s ahead of response time, the correlation coefficient was improved from 0.61415 to 0.7098. The mean absolute error of the MLP method (0.3111 on average) was reduced by 35% using the ASMLP method (0.2020 on average).
Conclusion: The preliminary results demonstrate that the ASMLP respiratory prediction method is more accurate than the MLP method and can improve the respiration forecast accuracy.

  10. Patient motion tracking in the presence of measurement errors.

    PubMed

    Haidegger, Tamás; Benyó, Zoltán; Kazanzides, Peter

    2009-01-01

The primary aim of computer-integrated surgical systems is to provide physicians with superior surgical tools for better patient outcomes. Robotic technology is capable of both minimally invasive surgery and microsurgery, offering remarkable advantages for the surgeon and the patient. Current systems allow for sub-millimeter intraoperative spatial positioning; however, certain limitations still remain. Measurement noise and unintended changes in the operating room environment can result in major errors. Positioning errors are a significant danger to patients in procedures involving robots and other automated devices. We have developed a new robotic system at the Johns Hopkins University to support cranial drilling in neurosurgery procedures. The robot provides advanced visualization and safety features. The generic algorithm described in this paper allows for automated compensation of patient motion through optical tracking and Kalman filtering. When applied to the neurosurgery setup, preliminary results show that it is possible to identify patient motion within 700 ms and apply the appropriate compensation with an average positioning error of 1.24 mm after 2 s of setup time.
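The Kalman filtering component can be sketched for one coordinate with a constant-position model (a generic textbook filter, not the paper's tuned implementation; the noise parameters are illustrative):

```python
import numpy as np

def kalman_1d(measurements, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    """Constant-position Kalman filter: smooths noisy tracker readings so
    genuine patient motion can be separated from measurement noise.
    q: process noise variance, r: measurement noise variance."""
    x, p = x0, p0
    out = []
    for z in measurements:
        p += q                       # predict (position assumed constant)
        k = p / (p + r)              # Kalman gain
        x += k * (z - x)             # update with the measurement residual
        p *= (1.0 - k)
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(2)
true_pos = 1.24
z = true_pos + rng.normal(0.0, 0.5, 200)   # noisy optical-tracker samples
est = kalman_1d(z)
final_err = abs(est[-1] - true_pos)
```

A sustained residual between prediction and measurement, rather than a single noisy sample, is what would be treated as actual patient motion requiring compensation.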

  11. Optimal sensor fusion for land vehicle navigation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morrow, J.D.

    1990-10-01

Position location is a fundamental requirement in autonomous mobile robots which record and subsequently follow x,y paths. The Dept. of Energy, Office of Safeguards and Security, Robotic Security Vehicle (RSV) program involves the development of an autonomous mobile robot for patrolling a structured exterior environment. A straightforward method for autonomous path-following has been adopted, which requires "digitizing" the desired road network by storing x,y coordinates every 2m along the roads. The position location system used to define the locations consists of a radio beacon system, which triangulates position off two known transponders, and dead reckoning with compass and odometer. This paper addresses the problem of combining these two measurements to arrive at a best estimate of position. Two algorithms are proposed: the "optimal" algorithm treats the measurements as random variables and minimizes the estimate variance, while the "average error" algorithm considers the bias in dead reckoning and attempts to guarantee an average error. Data collected on the algorithms indicate that both work well in practice. 2 refs., 7 figs.
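For a single coordinate, the "optimal" algorithm's idea of minimizing estimate variance reduces to inverse-variance weighting of the beacon fix and the dead-reckoning estimate; a sketch with invented numbers:

```python
def fuse(x_beacon, var_beacon, x_dr, var_dr):
    """Minimum-variance fusion of two unbiased estimates of the same
    position: weight each inversely to its variance.  The fused variance
    is always smaller than either input variance."""
    w = var_dr / (var_beacon + var_dr)          # weight on the beacon fix
    x = w * x_beacon + (1.0 - w) * x_dr
    var = (var_beacon * var_dr) / (var_beacon + var_dr)
    return x, var

# beacon fix: 10.0 m with variance 4.0; dead reckoning: 12.0 m with variance 1.0
x, v = fuse(10.0, 4.0, 12.0, 1.0)
```

The noisier beacon fix gets weight 0.2, pulling the estimate to 11.6 m with fused variance 0.8, below both inputs; this is exactly the scalar case of the variance-minimizing combination the abstract describes.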

  12. SU-E-T-646: Quality Assurance of Truebeam Multi-Leaf Collimator Using a MLC QA Phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, J; Lu, J; Hong, D

    2015-06-15

Purpose: To perform a routine quality assurance procedure for the Truebeam multi-leaf collimator (MLC) using the MLC QA phantom, and to verify the stability and reliability of the MLC during treatment. Methods: The MLC QA phantom is a specialized phantom for MLC quality assurance (QA) and contains five radio-opaque spheres that are embedded in an "L" shape. The phantom was placed isocentrically on the Truebeam treatment couch for the tests. A quality assurance plan was set up in Eclipse v10.0, defining the fields that need to be delivered in order to acquire the necessary images; the MLC shapes can then be obtained from the images. The images were acquired by the electronic portal imaging device (EPID) and imported into the PIPSpro software for analysis. The tests were delivered over twelve weeks (once a week) to verify the consistency of the delivery, and the images were acquired in the same manner each time. Results: For the leaf position test, the average position error was 0.23mm±0.02mm (range: 0.18mm∼0.25mm). The leaf width was measured at the isocenter; the average error was 0.06mm±0.02mm (range: 0.02mm∼0.08mm) for the leaf width test. The Multi-Port test showed the dynamic leaf shift error; the average error was 0.28mm±0.03mm (range: 0.2mm∼0.35mm). For the leaf transmission test, the average inter-leaf leakage value was 1.0%±0.17% (range: 0.8%∼1.3%) and the average inter-bank leakage value was 32.6%±2.1% (range: 30.2%∼36.1%). Conclusion: Over the 12 weeks of testing, the MLC system of the Truebeam ran in good condition, and the MLC can be operated stably and reliably during treatment. The MLC QA phantom is a useful test tool for MLC QA.

  13. Pedestrian dead reckoning employing simultaneous activity recognition cues

    NASA Astrophysics Data System (ADS)

    Altun, Kerem; Barshan, Billur

    2012-02-01

We consider the human localization problem using body-worn inertial/magnetic sensor units. Inertial sensors are characterized by a drift error caused by the integration of their rate output to obtain position information. Because of this drift, the position and orientation data obtained from inertial sensors are reliable over only short periods of time. Therefore, position updates from externally referenced sensors are essential. However, if the map of the environment is known, the activity context of the user can provide information about his position. In particular, switches in the activity context correspond to discrete locations on the map. By performing localization simultaneously with activity recognition, we detect the activity context switches and use the corresponding position information as position updates in a localization filter. The localization filter also involves a smoother that combines the two estimates obtained by running the zero-velocity update algorithm both forward and backward in time. We performed experiments with eight subjects in indoor and outdoor environments involving walking, turning and standing activities. Using a spatial error criterion, we show that the position errors can be decreased by about 85% on average. We also present the results of two 3D experiments performed in realistic indoor environments and demonstrate that it is possible to achieve over 90% error reduction in position by performing localization simultaneously with activity recognition.

  14. Sensitivity analysis of Jacobian determinant used in treatment planning for lung cancer

    NASA Astrophysics Data System (ADS)

    Shao, Wei; Gerard, Sarah E.; Pan, Yue; Patton, Taylor J.; Reinhardt, Joseph M.; Durumeric, Oguz C.; Bayouth, John E.; Christensen, Gary E.

    2018-03-01

Four-dimensional computed tomography (4DCT) is regularly used to visualize tumor motion in radiation therapy for lung cancer. These 4DCT images can be analyzed to estimate local ventilation by finding a dense correspondence map between the end-inhalation and end-exhalation CT image volumes using deformable image registration. Lung regions with ventilation values above a threshold are labeled as regions of high pulmonary function and are avoided when possible in the radiation plan. This paper investigates the sensitivity of the relative Jacobian error to small registration errors. We present a linear approximation of the relative Jacobian error. Next, we give a formula for the sensitivity of the relative Jacobian error with respect to the Jacobian of the perturbation displacement field. Preliminary sensitivity analysis results are presented using 4DCT scans from 10 individuals. For each subject, we generated 6400 random smooth biologically plausible perturbation vector fields using a cubic B-spline model. We showed that the correlation between the Jacobian determinant and the Frobenius norm of the sensitivity matrix is close to -1, which implies that the relative Jacobian error in high-functional regions is less sensitive to noise. We also showed that small displacement errors averaging 0.53 mm may lead to a 10% relative change in the Jacobian determinant. We finally showed that the average relative Jacobian error and the sensitivity of the system are positively correlated across all subjects (close to +1), i.e. regions with high sensitivity have larger Jacobian determinant errors on average.
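The Jacobian determinant underlying the ventilation estimate can be computed from a displacement field with finite differences; a 2D sketch (the paper works with 3D fields; the grid, spacing and synthetic stretch are illustrative):

```python
import numpy as np

def jacobian_determinant_2d(u, v, spacing=1.0):
    """Jacobian determinant of the transform x -> x + (u, v) on a 2D grid:
        det J = (1 + du/dx)(1 + dv/dy) - (du/dy)(dv/dx).
    Values > 1 indicate local expansion (inhalation), < 1 contraction."""
    du_dy, du_dx = np.gradient(u, spacing)   # axis 0 is y, axis 1 is x
    dv_dy, dv_dx = np.gradient(v, spacing)
    return (1.0 + du_dx) * (1.0 + dv_dy) - du_dy * dv_dx

ny, nx = 32, 32
y, x = np.mgrid[0:ny, 0:nx].astype(float)
u = 0.05 * x                  # uniform 5% stretch along x
v = np.zeros_like(u)
detj = jacobian_determinant_2d(u, v)
```

A uniform 5% stretch along one axis yields det J = 1.05 at every voxel; the paper's sensitivity analysis then asks how much this map changes under small perturbations of (u, v).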

  15. Detector Position Estimation for PET Scanners.

    PubMed

    Pierce, Larry; Miyaoka, Robert; Lewellen, Tom; Alessio, Adam; Kinahan, Paul

    2012-06-11

    Physical positioning of scintillation crystal detector blocks in Positron Emission Tomography (PET) scanners is not always exact. We test a proof of concept methodology for the determination of the six degrees of freedom for detector block positioning errors by utilizing a rotating point source over stepped axial intervals. To test our method, we created computer simulations of seven Micro Crystal Element Scanner (MiCES) PET systems with randomized positioning errors. The computer simulations show that our positioning algorithm can estimate the positions of the block detectors to an average of one-seventh of the crystal pitch tangentially, and one-third of the crystal pitch axially. Virtual acquisitions of a point source grid and a distributed phantom show that our algorithm improves both the quantitative and qualitative accuracy of the reconstructed objects. We believe this estimation algorithm is a practical and accurate method for determining the spatial positions of scintillation detector blocks.

  16. Effect of forest canopy on GPS-based movement data

    Treesearch

    Nicholas J. DeCesare; John R. Squires; Jay A. Kolbe

    2005-01-01

    The advancing role of Global Positioning System (GPS) technology in ecology has made studies of animal movement possible for larger and more vagile species. A simple field test revealed that lengths of GPS-based movement data were strongly biased (P<0.001) by effects of forest canopy. Global Positioning System error added an average of 27.5% additional...

  17. Effect of endorectal balloon positioning errors on target deformation and dosimetric quality during prostate SBRT

    NASA Astrophysics Data System (ADS)

    Jones, Bernard L.; Gan, Gregory; Kavanagh, Brian; Miften, Moyed

    2013-11-01

    An inflatable endorectal balloon (ERB) is often used during stereotactic body radiation therapy (SBRT) for treatment of prostate cancer in order to reduce both intrafraction motion of the target and risk of rectal toxicity. However, the ERB can exert significant force on the prostate, and this work assessed the impact of ERB position errors on deformation of the prostate and treatment dose metrics. Seventy-one cone-beam computed tomography (CBCT) image datasets of nine patients with clinical stage T1cN0M0 prostate cancer were studied. An ERB (Flexi-Cuff, EZ-EM, Westbury, NY) inflated with 60 cm3 of air was used during simulation and treatment, and daily kilovoltage (kV) CBCT imaging was performed to localize the prostate. The shape of the ERB in each CBCT was analyzed to determine errors in position, size, and shape. A deformable registration algorithm was used to track the dose received by (and deformation of) the prostate, and dosimetric values such as D95, PTV coverage, and Dice coefficient for the prostate were calculated. The average balloon position error was 0.5 cm in the inferior direction, with errors ranging from 2 cm inferiorly to 1 cm superiorly. The prostate was deformed primarily in the AP direction, and tilted primarily in the anterior-posterior/superior-inferior plane. A significant correlation was seen between errors in depth of ERB insertion (DOI) and mean voxel-wise deformation, prostate tilt, Dice coefficient, and planning-to-treatment prostate inter-surface distance (p < 0.001). Dosimetrically, DOI is negatively correlated with prostate D95 and PTV coverage (p < 0.001). For the model of ERB studied, error in ERB position can cause deformations in the prostate that negatively affect treatment, and this additional aspect of setup error should be considered when ERBs are used for prostate SBRT. Before treatment, the ERB position should be verified, and the ERB should be adjusted if the error is observed to exceed tolerable values.
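The Dice coefficient used above to compare planning and treatment prostate contours is simple to compute from binary masks; a sketch with a synthetic two-voxel shift (the mask geometry is invented for illustration):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks: 2|A n B| / (|A| + |B|).
    1.0 means perfect overlap, 0.0 means disjoint masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

plan = np.zeros((20, 20), bool)
plan[5:15, 5:15] = True          # contour at planning
day = np.zeros((20, 20), bool)
day[7:17, 5:15] = True           # same contour shifted 2 voxels at treatment
overlap = dice(plan, day)
```

Here a 2-voxel shift of a 10x10 mask leaves an 8x10 intersection, giving a Dice coefficient of 2*80/200 = 0.8; correlating such values with ERB insertion-depth errors is the kind of analysis the abstract reports.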

  18. Feasibility of predicting tumor motion using online data acquired during treatment and a generalized neural network optimized with offline patient tumor trajectories.

    PubMed

    Teo, Troy P; Ahmed, Syed Bilal; Kawalec, Philip; Alayoubi, Nadia; Bruce, Neil; Lyn, Ethan; Pistorius, Stephen

    2018-02-01

    The accurate prediction of intrafraction lung tumor motion is required to compensate for system latency in image-guided adaptive radiotherapy systems. The goal of this study was to identify an optimal prediction model that has a short learning period so that prediction and adaptation can commence soon after treatment begins, and requires minimal reoptimization for individual patients. Specifically, the feasibility of predicting tumor position using a combination of a generalized (i.e., averaged) neural network, optimized using historical patient data (i.e., tumor trajectories) obtained offline, coupled with the use of real-time online tumor positions (obtained during treatment delivery) was examined. A 3-layer perceptron neural network was implemented to predict tumor motion for a prediction horizon of 650 ms. A backpropagation algorithm and batch gradient descent approach were used to train the model. Twenty-seven 1-min lung tumor motion samples (selected from a CyberKnife patient dataset) were sampled at a rate of 7.5 Hz (0.133 s) to emulate the frame rate of an electronic portal imaging device (EPID). A sliding temporal window was used to sample the data for learning. The sliding window length was set to be equivalent to the first breathing cycle detected from each trajectory. Performing a parametric sweep, an averaged error surface of mean square errors (MSE) was obtained from the prediction responses of seven trajectories used for the training of the model (Group 1). An optimal input data size and number of hidden neurons were selected to represent the generalized model. To evaluate the prediction performance of the generalized model on unseen data, twenty tumor traces (Group 2) that were not involved in the training of the model were used for the leave-one-out cross-validation purposes. An input data size of 35 samples (4.6 s) and 20 hidden neurons were selected for the generalized neural network. 
An average sliding-window length of 28 data samples was used. The average initial learning period prior to the availability of the first predicted tumor position was 8.53 ± 1.03 s. Average mean absolute errors (MAE) of 0.59 ± 0.13 mm and 0.56 ± 0.18 mm were obtained for Groups 1 and 2, respectively, giving an overall MAE of 0.57 ± 0.17 mm. The average root-mean-square error (RMSE) of 0.67 ± 0.36 mm across all traces (0.76 ± 0.34 mm for Group 1 and 0.63 ± 0.36 mm for Group 2) is comparable to previously published results. Prediction errors are mainly due to irregular periodicities between breathing cycles. Because the errors from Groups 1 and 2 lie within the same range, the model can generalize and predict on unseen data. This is a first attempt to use an averaged MSE error surface (obtained from the prediction of different patients' tumor trajectories) to determine the parameters of a generalized neural network. This network could be deployed as a plug-and-play predictor for tumor trajectory during treatment delivery, eliminating the need to optimize individual networks with pretreatment patient data. © 2017 American Association of Physicists in Medicine.
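The sliding-window prediction setup described above can be sketched as follows, using a synthetic sinusoidal "breathing" trace and scikit-learn's MLPRegressor as a stand-in for the custom backpropagation network. The 7.5 Hz sampling, 35-sample input window, 20 hidden neurons, and ~650 ms (5-sample) horizon follow the abstract; the trace and train/test split are invented:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

FS = 7.5                             # EPID-like frame rate (Hz)
HORIZON = int(round(0.650 * FS))     # 650 ms ahead ~ 5 samples
WINDOW = 35                          # input size from the abstract (4.6 s)

# Synthetic "breathing" trace standing in for a patient tumor trajectory (mm)
t = np.arange(0, 60, 1.0 / FS)
rng = np.random.default_rng(0)
trace = 5.0 * np.sin(2 * np.pi * t / 4.0) + 0.3 * rng.normal(size=t.size)

# Sliding-window supervised pairs: 35 past samples -> position 5 samples ahead
n = len(trace) - WINDOW - HORIZON + 1
X = np.array([trace[i:i + WINDOW] for i in range(n)])
y = trace[WINDOW + HORIZON - 1:]

# 3-layer perceptron: 35 inputs, 20 hidden neurons, 1 output
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
model.fit(X[:300], y[:300])                              # "offline" training
mae = np.mean(np.abs(model.predict(X[300:]) - y[300:]))  # "online" evaluation
print(f"MAE on held-out segment: {mae:.2f} mm")
```

This is only a sketch of the data flow; the published model was additionally trained on an averaged MSE error surface across several patients' trajectories.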

  19. Absolute position calculation for a desktop mobile rehabilitation robot based on three optical mouse sensors.

    PubMed

    Zabaleta, Haritz; Valencia, David; Perry, Joel; Veneman, Jan; Keller, Thierry

    2011-01-01

ArmAssist is a wireless robot for post-stroke upper limb rehabilitation. Knowing the position of the arm is essential for any rehabilitation device. In this paper, we describe a method based on an artificial-landmark navigation system. The navigation system uses three optical mouse sensors, enabling a cheap but reliable position sensor. Two of the sensors are the data source for odometry calculations, and the third optical mouse sensor takes very low-resolution pictures of a custom-designed mat. These pictures are processed by an optical symbol recognition (OSR) algorithm that estimates the orientation of the robot and recognizes the landmarks placed on the mat. A data fusion strategy is described that detects misclassifications of the landmarks so that only reliable information is fused. The orientation given by the OSR algorithm significantly improves the odometry, and the recognized landmarks reference the odometry to an absolute coordinate system. The system was tested using a 3D motion capture system. With the current mat configuration, in a field of motion of 710 × 450 mm, the maximum error in position estimation was 49.61 mm, with an average error of 36.70 ± 22.50 mm. The average test duration was 36.5 seconds and the average path length was 4173 mm.
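The core dead-reckoning step, rotating robot-frame mouse displacements into the mat frame using the OSR-estimated heading, can be sketched like this (the step sizes and heading values are invented; the real system also fuses landmark fixes to reset drift):

```python
import math

def integrate_odometry(steps, headings):
    """Dead-reckon global (x, y) by rotating each robot-frame displacement
    (dx, dy) into the mat frame using the OSR-estimated heading theta."""
    x = y = 0.0
    for (dx, dy), theta in zip(steps, headings):
        x += dx * math.cos(theta) - dy * math.sin(theta)
        y += dx * math.sin(theta) + dy * math.cos(theta)
    return x, y

# Toy run: robot drives 100 mm "forward" in ten 10-mm steps while heading
# is held at 90 degrees, so motion maps onto the mat's +y axis
steps = [(10.0, 0.0)] * 10
headings = [math.pi / 2] * 10
print(integrate_odometry(steps, headings))
```

Without the absolute landmark references described in the abstract, this integration drifts; each recognized landmark re-anchors (x, y) to the mat's coordinate system.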

  20. Discrete distributed strain sensing of intelligent structures

    NASA Technical Reports Server (NTRS)

    Anderson, Mark S.; Crawley, Edward F.

    1992-01-01

    Techniques are developed for the design of discrete highly distributed sensor systems for use in intelligent structures. First the functional requirements for such a system are presented. Discrete spatially averaging strain sensors are then identified as satisfying the functional requirements. A variety of spatial weightings for spatially averaging sensors are examined, and their wave number characteristics are determined. Preferable spatial weightings are identified. Several numerical integration rules used to integrate such sensors in order to determine the global deflection of the structure are discussed. A numerical simulation is conducted using point and rectangular sensors mounted on a cantilevered beam under static loading. Gage factor and sensor position uncertainties are incorporated to assess the absolute error and standard deviation of the error in the estimated tip displacement found by numerically integrating the sensor outputs. An experiment is carried out using a statically loaded cantilevered beam with five point sensors. It is found that in most cases the actual experimental error is within one standard deviation of the absolute error as found in the numerical simulation.
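The numerical-integration step, recovering a global deflection from discrete strain readings, can be sketched for a cantilevered beam. The beam length, half-thickness, loading constant, and five sensor stations below are assumed values, and the trapezoid rule stands in for the integration rules examined in the paper:

```python
import numpy as np

L, c = 1.0, 0.005                   # assumed beam length (m) and half-thickness (m)
x = np.linspace(0.05, 0.95, 5)      # five point-sensor stations along the beam

# For a cantilever with a tip load, curvature is kappa(x) = k*(L - x); point
# sensors read strain eps = kappa * c. k = 0.1 1/m^2 is a made-up load constant.
kappa_true = 0.1 * (L - x)
eps = kappa_true * c                # simulated gage readings

# Tip displacement from measured strain: w(L) = integral of (L - x)*eps(x)/c dx
f = (L - x) * (eps / c)
w_tip = float(np.sum((x[1:] - x[:-1]) * (f[1:] + f[:-1]) / 2.0))  # trapezoid rule
print(f"estimated tip displacement: {w_tip * 1e3:.2f} mm")
```

Perturbing `eps` with gage-factor noise and `x` with placement uncertainty, as in the paper's simulation, would propagate directly into the error of `w_tip`.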

  1. SU-G-TeP4-12: Individual Beam QA for a Robotic Radiosurgery System Using a Scintillator Cone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGuinness, C; Descovich, M; Sudhyadhom, A

    2016-06-15

Purpose: The targeting accuracy of the CyberKnife system is measured by end-to-end tests delivering multiple isocentric beams to a point in space. While the targeting accuracy of two representative beams can be determined by a Winston-Lutz-type test, no test is available today to determine the targeting accuracy of each clinical beam. We used a scintillator cone to measure the accuracy of each individual beam. Methods: The XRV-124 from Logos Systems Int’l is a scintillator cone with an imaging system that is able to measure individual beam vectors and the resulting error between planned and measured beam coordinates. We measured the targeting accuracy of isocentric and non-isocentric beams for a number of test cases using the Iris and the fixed collimator. The average difference between planned and measured beam position was 0.8–1.2 mm across the collimator sizes and plans considered here. The maximum error for a single beam was 2.5 mm for the isocentric plans and 1.67 mm for the non-isocentric plans. The standard deviation of the differences was 0.5 mm or less. Conclusion: The CyberKnife system is specified to have an overall targeting accuracy for static targets of less than 0.95 mm. In end-to-end tests using the XRV-124 system, we measured average beam accuracy between 0.8 and 1.23 mm, with a maximum of 2.5 mm. We plan to investigate correlations between beam position error and robot position, and to quantify the effect of beam position errors on patient-specific plans. Martina Descovich has received research support and speaker honoraria from Accuray.

  2. New hybrid reverse differential pulse position width modulation scheme for wireless optical communication

    NASA Astrophysics Data System (ADS)

    Liao, Renbo; Liu, Hongzhan; Qiao, Yaojun

    2014-05-01

    In order to improve the power efficiency and reduce the packet error rate of reverse differential pulse position modulation (RDPPM) for wireless optical communication (WOC), a hybrid reverse differential pulse position width modulation (RDPPWM) scheme is proposed, based on RDPPM and reverse pulse width modulation. Subsequently, the symbol structure of RDPPWM is briefly analyzed, and its performance is compared with that of other modulation schemes in terms of average transmitted power, bandwidth requirement, and packet error rate over ideal additive white Gaussian noise (AWGN) channels. Based on the given model, the simulation results show that the proposed modulation scheme has the advantages of improving the power efficiency and reducing the bandwidth requirement. Moreover, in terms of error probability performance, RDPPWM can achieve a much lower packet error rate than that of RDPPM. For example, at the same received signal power of -28 dBm, the packet error rate of RDPPWM can decrease to 2.6×10-12, while that of RDPPM is 2.2×10. Furthermore, RDPPWM does not need symbol synchronization at the receiving end. These considerations make RDPPWM a favorable candidate to select as the modulation scheme in the WOC systems.

  3. A state-based probabilistic model for tumor respiratory motion prediction

    NASA Astrophysics Data System (ADS)

    Kalet, Alan; Sandison, George; Wu, Huanmei; Schmitz, Ruth

    2010-12-01

    This work proposes a new probabilistic mathematical model for predicting tumor motion and position based on a finite state representation using the natural breathing states of exhale, inhale and end of exhale. Tumor motion was broken down into linear breathing states and sequences of states. Breathing state sequences and the observables representing those sequences were analyzed using a hidden Markov model (HMM) to predict the future sequences and new observables. Velocities and other parameters were clustered using a k-means clustering algorithm to associate each state with a set of observables such that a prediction of state also enables a prediction of tumor velocity. A time average model with predictions based on average past state lengths was also computed. State sequences which are known a priori to fit the data were fed into the HMM algorithm to set a theoretical limit of the predictive power of the model. The effectiveness of the presented probabilistic model has been evaluated for gated radiation therapy based on previously tracked tumor motion in four lung cancer patients. Positional prediction accuracy is compared with actual position in terms of the overall RMS errors. Various system delays, ranging from 33 to 1000 ms, were tested. Previous studies have shown duty cycles for latencies of 33 and 200 ms at around 90% and 80%, respectively, for linear, no prediction, Kalman filter and ANN methods as averaged over multiple patients. At 1000 ms, the previously reported duty cycles range from approximately 62% (ANN) down to 34% (no prediction). Average duty cycle for the HMM method was found to be 100% and 91 ± 3% for 33 and 200 ms latency and around 40% for 1000 ms latency in three out of four breathing motion traces. RMS errors were found to be lower than linear and no prediction methods at latencies of 1000 ms. 
The results show that for system latencies longer than 400 ms, the time average HMM prediction outperforms linear, no prediction, and the more general HMM-type predictive models. RMS errors for the time average model approach the theoretical limit of the HMM, and predicted state sequences are well correlated with sequences known to fit the data.
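The state-based prediction idea can be sketched with a plain Markov chain over the three natural breathing states (the full model in the paper adds hidden states and k-means-clustered observables; the repeating toy sequence below is invented):

```python
import numpy as np

STATES = ["inhale", "exhale", "end_of_exhale"]   # natural breathing states
IDX = {s: i for i, s in enumerate(STATES)}

def fit_transition_matrix(seq):
    """Count-based maximum-likelihood estimate of P(next state | current)."""
    T = np.zeros((3, 3))
    for a, b in zip(seq[:-1], seq[1:]):
        T[IDX[a], IDX[b]] += 1
    return T / T.sum(axis=1, keepdims=True)

# Toy breathing cycle repeated: inhale -> exhale -> end_of_exhale -> inhale ...
seq = ["inhale", "exhale", "end_of_exhale"] * 20
T = fit_transition_matrix(seq)

# Predict the most likely state following end-of-exhale
next_state = STATES[int(np.argmax(T[IDX["end_of_exhale"]]))]
print(next_state)
```

In the published model, each predicted state also carries a clustered velocity observable, so a state prediction doubles as a tumor-velocity (and hence position) prediction over the system latency.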

  4. Accuracy of an acoustic location system for monitoring the position of duetting songbirds in tropical forest

    PubMed Central

    Mennill, Daniel J.; Burt, John M.; Fristrup, Kurt M.; Vehrencamp, Sandra L.

    2008-01-01

    A field test was conducted on the accuracy of an eight-microphone acoustic location system designed to triangulate the position of duetting rufous-and-white wrens (Thryothorus rufalbus) in Costa Rica’s humid evergreen forest. Eight microphones were set up in the breeding territories of twenty pairs of wrens, with an average inter-microphone distance of 75.2±2.6 m. The array of microphones was used to record antiphonal duets broadcast through stereo loudspeakers. The positions of the loudspeakers were then estimated by evaluating the delay with which the eight microphones recorded the broadcast sounds. Position estimates were compared to coordinates surveyed with a global-positioning system (GPS). The acoustic location system estimated the position of loudspeakers with an error of 2.82±0.26 m and calculated the distance between the “male” and “female” loudspeakers with an error of 2.12±0.42 m. Given the large range of distances between duetting birds, this relatively low level of error demonstrates that the acoustic location system is a useful tool for studying avian duets. Location error was influenced partly by the difficulties inherent in collecting high accuracy GPS coordinates of microphone positions underneath a lush tropical canopy, and partly by the complicating influence of irregular topography and thick vegetation on sound transmission. PMID:16708941
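The triangulation step behind such an acoustic location system can be sketched as a nonlinear least-squares fit of a source position to arrival times across the microphone array. The four microphone positions, true loudspeaker position, and noiseless arrival times below are invented, and scipy stands in for the system's own solver:

```python
import numpy as np
from scipy.optimize import least_squares

C = 343.0   # speed of sound (m/s)

# Hypothetical 4-microphone array (m) and a "true" loudspeaker position
mics = np.array([[0, 0], [75, 0], [0, 75], [75, 75]], dtype=float)
src_true = np.array([30.0, 40.0])
toa = np.linalg.norm(mics - src_true, axis=1) / C      # simulated arrival times

def residuals(p):
    """Mismatch between predicted and observed arrival times at each mic."""
    return np.linalg.norm(mics - p, axis=1) / C - toa

est = least_squares(residuals, x0=np.array([37.5, 37.5])).x
print(np.round(est, 2))
```

Adding noise to `toa` (or to the surveyed `mics` coordinates, as with GPS error under a tropical canopy) reproduces the few-meter position errors reported in the field test.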

  5. SU-F-P-42: “To Navigate, Or Not to Navigate: HDR BT in Recurrent Spine Lesions”

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Voros, L; Cohen, G; Zaider, M

Purpose: We compare the accuracy of HDR catheter placement for paraspinal lesions using O-arm CBCT imaging combined with StealthStation navigation versus traditional fluoroscopically guided catheter placement. Methods: CT and MRI scans were acquired pre-treatment to outline the lesions and design treatment plans (pre-plans) meeting dosimetric constraints. The pre-planned catheter trajectories were transferred into the StealthStation navigation system prior to surgery. The StealthStation is an infrared (IR) optical navigation system used for guidance of surgical instruments. An intraoperative CBCT scan (O-arm) was acquired with reference IR optical fiducials anchored onto the patient and registered with the pre-plan image study to guide surgical instruments in relation to the patient's anatomy and to place the brachytherapy catheters along the pre-planned trajectories. The final treatment plan was generated based on a second intraoperative CBCT scan reflecting the achieved implant geometry. The second CBCT was later registered with the initial CT scan to compare the pre-planned dwell positions with the actual dwell positions (catheter placements). A similar workflow was used for the placement of 8 catheters (1 patient) without navigation, but under fluoroscopy guidance in an interventional radiology suite. Results: A total of 18 catheters (3 patients) were placed using navigation-assisted surgery. An average displacement of 0.66 cm (STD = 0.37 cm) was observed between the pre-planned source positions and the actual source positions in three-dimensional space. This translates into an average 0.38 cm positioning error in one direction, including registration errors, digitization errors, and the surgeon's ability to follow the planned trajectory. In comparison, the average displacement of non-navigated catheters was 0.50 cm (STD = 0.22 cm). Conclusion: Spinal-lesion HDR brachytherapy planning is a difficult task. Catheter placement has a direct impact on target coverage and dose to critical structures.
While limited to a handful of patients, our experience shows that navigation-assisted and fluoroscopy-guided placement yield similar results.
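The displacement statistics reported above reduce to per-catheter 3D distances between registered planned and achieved dwell positions. A minimal sketch with invented coordinates (the final line checks the abstract's 3D-to-per-axis relation, d per axis ≈ d3D / √3 when error is spread evenly):

```python
import numpy as np

# Hypothetical planned vs. achieved dwell positions (cm) for three catheters,
# standing in for the registered pre-plan and intraoperative CBCT coordinates
planned = np.array([[1.0, 2.0, 5.0], [1.5, 2.2, 5.5], [2.0, 2.5, 6.0]])
actual = np.array([[1.3, 2.4, 5.2], [1.9, 2.7, 5.9], [2.2, 3.1, 6.5]])

disp_3d = np.linalg.norm(actual - planned, axis=1)   # per-catheter 3D shift
print(round(float(disp_3d.mean()), 2), round(float(disp_3d.std()), 2))

# A 3D displacement spread evenly over three axes is ~d/sqrt(3) per axis,
# matching the abstract's 0.66 cm -> ~0.38 cm relation
print(round(0.66 / np.sqrt(3), 2))
```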

  6. An RFID Indoor Positioning Algorithm Based on Bayesian Probability and K-Nearest Neighbor.

    PubMed

    Xu, He; Ding, Ye; Li, Peng; Wang, Ruchuan; Li, Yizhu

    2017-08-05

The Global Positioning System (GPS) is widely used for outdoor positioning. However, GPS cannot support indoor positioning because no positioning signal is available in an indoor environment. Nowadays, many situations require indoor positioning, such as searching for a book in a library, looking for luggage in an airport, emergency navigation during fire alarms, robot localization, etc. Many technologies, such as ultrasonic sensors, Bluetooth, WiFi, magnetic fields, Radio Frequency Identification (RFID), etc., are used for indoor positioning. Compared with other technologies, RFID-based indoor positioning is more cost- and energy-efficient. The traditional RFID indoor positioning algorithm LANDMARC utilizes a Received Signal Strength (RSS) indicator to track objects. However, the RSS value is easily affected by environmental noise and other interference. In this paper, our purpose is to reduce the location fluctuation and error caused by multipath and environmental interference in LANDMARC. We propose a novel indoor positioning algorithm based on Bayesian probability and K-Nearest Neighbor (BKNN). The experimental results show that a Gaussian filter can remove some abnormal RSS values. The proposed BKNN algorithm has the smallest location error compared with the Gaussian-based algorithm, LANDMARC, and an improved KNN algorithm. The average error in location estimation is about 15 cm using our method.
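The LANDMARC-style KNN baseline that BKNN improves upon can be sketched as follows: the target is placed at the weighted average of the k reference tags whose RSS vectors are closest to its own. The tag grid, reader count, and dBm values below are all invented:

```python
import numpy as np

def knn_locate(rss_target, rss_refs, ref_positions, k=3):
    """LANDMARC-style estimate: inverse-square-distance weighted average of
    the k reference tags whose RSS vectors best match the target's."""
    d = np.linalg.norm(rss_refs - rss_target, axis=1)   # RSS-space distances
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] ** 2 + 1e-9)                  # closer tags weigh more
    return (w[:, None] * ref_positions[nearest]).sum(axis=0) / w.sum()

# Toy setup: 4 reference tags on a 2 m grid, RSS from 3 readers (made-up dBm)
ref_pos = np.array([[0, 0], [2, 0], [0, 2], [2, 2]], dtype=float)
rss_refs = np.array([[-40, -55, -60], [-55, -40, -60],
                     [-55, -60, -40], [-60, -55, -50]], dtype=float)
target_rss = np.array([-47.0, -47.0, -58.0])   # target between the lower tags
print(np.round(knn_locate(target_rss, rss_refs, ref_pos), 2))
```

The paper's BKNN additionally Gaussian-filters the raw RSS stream and weights neighbors by Bayesian posterior probability rather than raw RSS distance.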

  7. Sensitivity of mesoscale-model forecast skill to some initial-data characteristics, data density, data position, analysis procedure and measurement error

    NASA Technical Reports Server (NTRS)

    Warner, Thomas T.; Key, Lawrence E.; Lario, Annette M.

    1989-01-01

    The effects of horizontal and vertical data resolution, data density, data location, different objective analysis algorithms, and measurement error on mesoscale-forecast accuracy are studied with observing-system simulation experiments. Domain-averaged errors are shown to generally decrease with time. It is found that the vertical distribution of error growth depends on the initial vertical distribution of the error itself. Larger gravity-inertia wave noise is produced in forecasts with coarser vertical data resolution. The use of a low vertical resolution observing system with three data levels leads to more forecast errors than moderate and high vertical resolution observing systems with 8 and 14 data levels. Also, with poor vertical resolution in soundings, the initial and forecast errors are not affected by the horizontal data resolution.

  8. Field evaluation of the error arising from inadequate time averaging in the standard use of depth-integrating suspended-sediment samplers

    USGS Publications Warehouse

    Topping, David J.; Rubin, David M.; Wright, Scott A.; Melis, Theodore S.

    2011-01-01

    Several common methods for measuring suspended-sediment concentration in rivers in the United States use depth-integrating samplers to collect a velocity-weighted suspended-sediment sample in a subsample of a river cross section. Because depth-integrating samplers are always moving through the water column as they collect a sample, and can collect only a limited volume of water and suspended sediment, they collect only minimally time-averaged data. Four sources of error exist in the field use of these samplers: (1) bed contamination, (2) pressure-driven inrush, (3) inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration, and (4) inadequate time averaging. The first two of these errors arise from misuse of suspended-sediment samplers, and the third has been the subject of previous study using data collected in the sand-bedded Middle Loup River in Nebraska. Of these four sources of error, the least understood source of error arises from the fact that depth-integrating samplers collect only minimally time-averaged data. To evaluate this fourth source of error, we collected suspended-sediment data between 1995 and 2007 at four sites on the Colorado River in Utah and Arizona, using a P-61 suspended-sediment sampler deployed in both point- and one-way depth-integrating modes, and D-96-A1 and D-77 bag-type depth-integrating suspended-sediment samplers. These data indicate that the minimal duration of time averaging during standard field operation of depth-integrating samplers leads to an error that is comparable in magnitude to that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. This random error arising from inadequate time averaging is positively correlated with grain size and does not largely depend on flow conditions or, for a given size class of suspended sediment, on elevation above the bed. 
Averaging over time scales >1 minute is the likely minimum duration required to result in substantial decreases in this error. During standard two-way depth integration, a depth-integrating suspended-sediment sampler collects a sample of the water-sediment mixture during two transits at each vertical in a cross section: one transit while moving from the water surface to the bed, and another transit while moving from the bed to the water surface. As the number of transits is doubled at an individual vertical, this error is reduced by ~30 percent in each size class of suspended sediment. For a given size class of suspended sediment, the error arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration depends only on the number of verticals collected, whereas the error arising from inadequate time averaging depends on both the number of verticals collected and the number of transits collected at each vertical. Summing these two errors in quadrature yields a total uncertainty in an equal-discharge-increment (EDI) or equal-width-increment (EWI) measurement of the time-averaged velocity-weighted suspended-sediment concentration in a river cross section (exclusive of any laboratory-processing errors). By virtue of how the number of verticals and transits influences the two individual errors within this total uncertainty, the error arising from inadequate time averaging slightly dominates that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. Adding verticals to an EDI or EWI measurement is slightly more effective in reducing the total uncertainty than adding transits only at each vertical, because a new vertical contributes both temporal and spatial information. 
However, because collection of depth-integrated samples over more transits at each vertical is generally easier and faster than over more verticals, adding a combination of verticals and transits is likely a more practical approach to reducing the total uncertainty in most field situations.

  9. The effect of timing errors in optical digital systems.

    NASA Technical Reports Server (NTRS)

    Gagliardi, R. M.

    1972-01-01

    The use of digital transmission with narrow light pulses appears attractive for data communications, but carries with it a stringent requirement on system bit timing. The effects of imperfect timing in direct-detection (noncoherent) optical binary systems are investigated using both pulse-position modulation and on-off keying for bit transmission. Particular emphasis is placed on specification of timing accuracy and an examination of system degradation when this accuracy is not attained. Bit error probabilities are shown as a function of timing errors from which average error probabilities can be computed for specific synchronization methods. Of significance is the presence of a residual or irreducible error probability in both systems, due entirely to the timing system, which cannot be overcome by the data channel.

  10. Estimating Relative Positions of Outer-Space Structures

    NASA Technical Reports Server (NTRS)

    Balian, Harry; Breckenridge, William; Brugarolas, Paul

    2009-01-01

    A computer program estimates the relative position and orientation of two structures from measurements, made by use of electronic cameras and laser range finders on one structure, of distances and angular positions of fiducial objects on the other structure. The program was written specifically for use in determining errors in the alignment of large structures deployed in outer space from a space shuttle. The program is based partly on equations for transformations among the various coordinate systems involved in the measurements and on equations that account for errors in the transformation operators. It computes a least-squares estimate of the relative position and orientation. Sequential least-squares estimates, acquired at a measurement rate of 4 Hz, are averaged by passing them through a fourth-order Butterworth filter. The program is executed in a computer aboard the space shuttle, and its position and orientation estimates are displayed to astronauts on a graphical user interface.
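The smoothing stage can be sketched with scipy: a fourth-order low-pass Butterworth filter applied to a 4 Hz stream of noisy estimates. The cutoff frequency and the synthetic estimate stream are assumptions (the abstract specifies only the filter order and the 4 Hz rate):

```python
import numpy as np
from scipy.signal import butter, lfilter

FS = 4.0        # estimate rate (Hz), as in the abstract
CUTOFF = 0.5    # assumed cutoff (Hz); the abstract does not give one

# Fourth-order low-pass Butterworth filter for the pose-estimate stream
b, a = butter(N=4, Wn=CUTOFF / (FS / 2), btype="low")

t = np.arange(0, 30, 1.0 / FS)
rng = np.random.default_rng(1)
raw = 2.0 + 0.5 * rng.normal(size=t.size)   # noisy relative-position estimates
smoothed = lfilter(b, a, raw)

# After the filter settles, the noise level should drop noticeably
print(float(raw[40:].std()), float(smoothed[40:].std()))
```

In a real-time display such as the shuttle GUI, `lfilter` (a causal filter) is the appropriate choice, since a zero-phase filter like `filtfilt` would need future samples.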

  11. Image stretching on a curved surface to improve satellite gridding

    NASA Technical Reports Server (NTRS)

    Ormsby, J. P.

    1975-01-01

    A method for substantially reducing gridding errors due to satellite roll, pitch and yaw is given. A gimbal-mounted curved screen, scaled to 1:7,500,000, is used to stretch the satellite image whereby visible landmarks coincide with a projected map outline. The resulting rms position errors averaged 10.7 km as compared with 25.6 and 34.9 km for two samples of satellite imagery upon which image stretching was not performed.

  12. SU-E-J-112: The Impact of Cine EPID Image Acquisition Frame Rate On Markerless Soft-Tissue Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yip, S; Rottmann, J; Berbeco, R

    2014-06-01

Purpose: Although reduction of the cine EPID acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion, leading to poor auto-tracking results. The impact of motion blurring and image noise on tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87 Hz on an AS1000 portal imager. Low-frame-rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for auto-tracking. The difference between the programmed and auto-tracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors in 1747 frames acquired at eleven field angles from four radiotherapy patients were manually and automatically tracked with varying frame averaging; here δ was defined as the position difference between the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using the Pearson correlation coefficient (R). Results: For both phantom and patient studies, the auto-tracking errors increased at frame rates lower than 4.29 Hz. Above 4.29 Hz, changes in errors were negligible, with δ < 1.60 mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R = 0.94) and patient (R = 0.72) studies. Moderate to poor correlation was found between image noise and tracking error, with R = -0.58 and -0.19 for the two studies, respectively. Conclusion: An image acquisition frame rate of at least 4.29 Hz is recommended for cine EPID tracking.
Motion blurring in images with frame rates below 4.29 Hz can substantially reduce the accuracy of auto-tracking. This work is supported in part by Varian Medical Systems, Inc.

  13. The impact of cine EPID image acquisition frame rate on markerless soft-tissue tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yip, Stephen, E-mail: syip@lroc.harvard.edu; Rottmann, Joerg; Berbeco, Ross

    2014-06-15

Purpose: Although reduction of the cine electronic portal imaging device (EPID) acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion, leading to poor autotracking results. The impact of motion blurring and image noise on tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87 Hz with an amorphous silicon portal imager (AS1000, Varian Medical Systems, Palo Alto, CA); the maximum frame rate of 12.87 Hz is imposed by the EPID. Low-frame-rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for autotracking. The difference between the programmed and autotracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors in 1747 frames acquired at 11 field angles from four radiotherapy patients were manually and automatically tracked with varying frame averaging; here δ was defined as the position difference between the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using the Pearson correlation coefficient (R). Results: For both phantom and patient studies, the autotracking errors increased at frame rates lower than 4.29 Hz. Above 4.29 Hz, changes in errors were negligible, with δ < 1.60 mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R = 0.94) and patient (R = 0.72) studies. Moderate to poor correlation was found between image noise and tracking error, with R = −0.58 and −0.19 for the two studies, respectively.
Conclusions: Cine EPID image acquisition at a frame rate of at least 4.29 Hz is recommended. Motion blurring in images with frame rates below 4.29 Hz can significantly reduce the accuracy of autotracking.
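The frame-averaging trade-off at the heart of this study can be sketched on a toy cine sequence: averaging n consecutive frames emulates a frame rate of 12.87/n Hz, suppressing noise by ~1/√n while smearing a moving object over n pixels. The sequence, noise level, and "marker" below are invented:

```python
import numpy as np

def average_frames(frames, n):
    """Average every n consecutive frames, emulating a lower EPID frame
    rate (12.87 Hz / n)."""
    usable = (len(frames) // n) * n
    return frames[:usable].reshape(-1, n, *frames.shape[1:]).mean(axis=1)

# Toy cine sequence: a bright "marker" moving one pixel per frame, plus noise
rng = np.random.default_rng(0)
frames = rng.normal(0.0, 0.2, size=(12, 1, 32))
for i in range(12):
    frames[i, 0, 10 + i] += 5.0            # moving object

avg3 = average_frames(frames, 3)           # 12.87 Hz -> ~4.29 Hz
print(frames.shape, avg3.shape)            # noise drops; motion smears over 3 px
```

With larger n the background noise keeps falling but the marker smears over more pixels, which is exactly the blurring-versus-noise competition the correlation analysis above quantifies.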

  14. Type I error rates of rare single nucleotide variants are inflated in tests of association with non-normally distributed traits using simple linear regression methods.

    PubMed

    Schwantes-An, Tae-Hwi; Sung, Heejong; Sabourin, Jeremy A; Justice, Cristina M; Sorant, Alexa J M; Wilson, Alexander F

    2016-01-01

In this study, the effects of (a) the minor allele frequency of the single nucleotide variant (SNV), (b) the degree of departure from normality of the trait, and (c) the position of the SNVs on type I error rates were investigated in the Genetic Analysis Workshop (GAW) 19 whole exome sequence data. To test the distribution of the type I error rate, 5 simulated traits were considered: standard normal and gamma distributed traits; 2 transformed versions of the gamma trait (log10 and rank-based inverse normal transformations); and trait Q1 provided by GAW 19. Each trait was tested with 313,340 SNVs. Tests of association were performed with simple linear regression and average type I error rates were determined for minor allele frequency classes. Rare SNVs (minor allele frequency < 0.05) showed inflated type I error rates for non-normally distributed traits that increased as the minor allele frequency decreased. The inflation of average type I error rates increased as the significance threshold decreased. Normally distributed traits did not show inflated type I error rates with respect to the minor allele frequency for rare SNVs. There was no consistent effect of transformation on the uniformity of the distribution of the location of SNVs with a type I error.

  15. Global Application of TaiWan Ionospheric Model to Single-Frequency GPS Positioning

    NASA Astrophysics Data System (ADS)

    Macalalad, E.; Tsai, L. C.; Wu, J.

    2012-04-01

    Ionospheric delay is one of the major sources of error in GPS positioning and navigation. This error in both pseudorange and phase ranges varies depending on the location of observation, local time, season, solar cycle and geomagnetic activity. For single-frequency receivers, this delay is usually removed using ionospheric models. Two of them are the Klobuchar, or broadcast, model and the global ionosphere map (GIM) provided by the International GNSS Service (IGS). In this paper, a three-dimensional ionospheric electron density (ne) model derived from FormoSat3/COSMIC GPS Radio Occultation measurements, called the TaiWan Ionosphere Model (TWIM), is used. It was used to calculate the slant total electron content (STEC) between the receiver and GPS satellites to correct single-frequency pseudorange observations. The corrected pseudorange for every epoch was used to determine a more accurate position of the receiver. Observations were made on July 2, 2011 (Kp index = 0-2) at five randomly selected sites across the globe, four of which are IGS stations (station IDs: cnmr, coso, irkj and morp), while the other is a low-cost single-frequency receiver located in Chungli City, Taiwan (ID: isls). It was illustrated that TEC maps generated using TWIM exhibited a detailed structure of the ionosphere, whereas Klobuchar and GIM only provided the basic diurnal and geographic features of the ionosphere. Also, it was shown that for single-frequency static point positioning TWIM provides more accurate and more precise positioning than the Klobuchar and GIM models for all stations. The average %errors of the corrections made by Klobuchar, GIM and TWIM in DRMS are 3.88%, 0.78% and 17.45%, respectively, while the average %errors in VRMS for Klobuchar, GIM and TWIM are 53.55%, 62.09% and 66.02%, respectively. This shows the capability of TWIM to provide a good global 3-dimensional ionospheric model.
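The pseudorange correction step has a standard closed form: the first-order ionospheric group delay is 40.3·STEC/f² meters, with STEC in electrons/m² and f in Hz. A minimal sketch for the GPS L1 frequency (TWIM itself is not modeled here; the STEC value is simply an input):

```python
# First-order ionospheric group delay: delay [m] = 40.3 * STEC / f^2.
# 1 TECU = 1e16 electrons/m^2; at L1 this is roughly 0.162 m of delay.
F_L1 = 1575.42e6  # GPS L1 carrier frequency, Hz

def iono_delay_m(stec_tecu):
    """Modeled ionospheric group delay in meters for a slant TEC in TECU."""
    return 40.3 * stec_tecu * 1e16 / F_L1**2

def correct_pseudorange(pr_m, stec_tecu):
    """Remove the modeled ionospheric delay from a single-frequency pseudorange."""
    return pr_m - iono_delay_m(stec_tecu)
```

Applying this per epoch and per satellite, with STEC integrated through a 3-D electron density model along each line of sight, is the essence of the correction scheme described above.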

  16. Three-dimensional high-precision indoor positioning strategy using Tabu search based on visible light communication

    NASA Astrophysics Data System (ADS)

    Peng, Qi; Guan, Weipeng; Wu, Yuxiang; Cai, Ye; Xie, Canyu; Wang, Pengfei

    2018-01-01

    This paper proposes a three-dimensional (3-D) high-precision indoor positioning strategy using Tabu search based on visible light communication. Tabu search is a powerful global optimization algorithm, and 3-D indoor positioning can be transformed into an optimal solution problem. Therefore, in 3-D indoor positioning, the optimal receiver coordinate can be obtained by the Tabu search algorithm. To the best of our knowledge, this is the first time the Tabu search algorithm has been applied to visible light positioning. Each light-emitting diode (LED) in the system broadcasts a unique identity (ID) and transmits the ID information. When the receiver detects optical signals with ID information from different LEDs, using the global optimization of the Tabu search algorithm, 3-D high-precision indoor positioning can be realized when the fitness value meets certain conditions. Simulation results show that the average positioning error is 0.79 cm, and the maximum error is 5.88 cm. The extended experiment of trajectory tracking also shows that 95.05% of positioning errors are below 1.428 cm. It can be concluded from the data that 3-D indoor positioning based on the Tabu search algorithm achieves the requirements of centimeter-level indoor positioning. The algorithm is very effective and practical and is superior to other existing methods for visible light indoor positioning.
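A minimal Tabu-search sketch of the 3-D positioning idea: the fitness below (sum of squared range residuals to known LED anchors) and the grid-step neighborhood are illustrative simplifications, not the paper's exact received-signal model or parameters:

```python
import itertools
import math

# Known LED anchor positions (m) and a hypothetical true receiver position.
LEDS = [(0.0, 0.0, 3.0), (5.0, 0.0, 3.0), (0.0, 5.0, 2.5), (5.0, 5.0, 3.0)]
TRUE = (2.0, 3.0, 1.0)
meas = [math.dist(TRUE, led) for led in LEDS]  # noise-free "measured" ranges

def fitness(p):
    """Sum of squared differences between candidate and measured ranges."""
    return sum((math.dist(p, led) - d) ** 2 for led, d in zip(LEDS, meas))

def tabu_search(start, step=0.25, iters=400, tenure=20):
    """Tabu search over an axis-aligned grid neighborhood in 3-D."""
    moves = [m for m in itertools.product((-step, 0.0, step), repeat=3)
             if m != (0.0, 0.0, 0.0)]
    cur = best = start
    tabu = []  # short-term memory of recently visited points
    for _ in range(iters):
        neigh = [tuple(round(c + d, 6) for c, d in zip(cur, m)) for m in moves]
        # Aspiration: a tabu point is allowed if it beats the best so far.
        allowed = [p for p in neigh if p not in tabu or fitness(p) < fitness(best)]
        if not allowed:
            break
        cur = min(allowed, key=fitness)
        tabu.append(cur)
        if len(tabu) > tenure:
            tabu.pop(0)
        if fitness(cur) < fitness(best):
            best = cur
    return best

est = tabu_search((0.0, 0.0, 0.0))
```

The tabu list lets the search keep moving after local minima while the best-so-far point is retained, which is the property that makes Tabu search attractive for this kind of fitness landscape.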

  17. Sensitivity of an Elekta iView GT a-Si EPID model to delivery errors for pre-treatment verification of IMRT fields.

    PubMed

    Herwiningsih, Sri; Hanlon, Peta; Fielding, Andrew

    2014-12-01

    A Monte Carlo model of an Elekta iViewGT amorphous silicon electronic portal imaging device (a-Si EPID) has been validated for pre-treatment verification of clinical IMRT treatment plans. The simulations used the BEAMnrc and DOSXYZnrc Monte Carlo codes to predict the response of the iViewGT a-Si EPID model. The predicted EPID images were compared to measured images obtained by delivering a photon beam from an Elekta Synergy linac to the Elekta iViewGT a-Si EPID. The a-Si EPID was used with no additional build-up material. Frame-averaged EPID images were acquired and processed using in-house software. The agreement between the predicted and measured images was analyzed using the gamma analysis technique with acceptance criteria of 3%/3 mm. The results show that the predicted EPID images for four clinical IMRT treatment plans are in good agreement with the measured EPID signal. Three prostate IMRT plans had average gamma pass rates of more than 95.0%, and a spinal IMRT plan had an average gamma pass rate of 94.3%. During the period of performing this work a routine MLC calibration was performed, and one of the IMRT treatments was re-measured with the EPID. A change in the gamma pass rate for one field was observed. This motivated a series of experiments to investigate the sensitivity of the method by introducing delivery errors, in MLC position and dosimetric overshoot, into the simulated EPID images. The method was found to be sensitive to 1 mm leaf position errors and 10% overshoot errors.

  18. Analyzing false positives of four questions in the Force Concept Inventory

    NASA Astrophysics Data System (ADS)

    Yasuda, Jun-ichiro; Mae, Naohiro; Hull, Michael M.; Taniguchi, Masa-aki

    2018-06-01

    In this study, we analyze the systematic error from false positives of the Force Concept Inventory (FCI). We compare the systematic errors of question 6 (Q.6), Q.7, and Q.16, for which clearly erroneous reasoning has been found, with Q.5, for which clearly erroneous reasoning has not been found. We determine whether or not a correct response to a given FCI question is a false positive using subquestions. In addition to the 30 original questions, subquestions were introduced for Q.5, Q.6, Q.7, and Q.16. This modified version of the FCI was administered to 1145 university students in Japan from 2015 to 2017. In this paper, we discuss our finding that the systematic errors of Q.6, Q.7, and Q.16 are much larger than that of Q.5 for students with mid-level FCI scores. Furthermore, we find that, averaged over the data sample, the sum of the false positives from Q.5, Q.6, Q.7, and Q.16 is about 10% of the FCI score of a mid-level student.

  19. Using a motion capture system for spatial localization of EEG electrodes

    PubMed Central

    Reis, Pedro M. R.; Lochmann, Matthias

    2015-01-01

    Electroencephalography (EEG) is often used in source analysis studies, in which the locations of cortex regions responsible for a signal are determined. For this to be possible, accurate positions of the electrodes at the scalp surface must be determined, otherwise errors in the source estimation will occur. Today, several methods for acquiring these positions exist, but they are often not satisfactorily accurate or take a long time to perform. Therefore, in this paper we describe a method capable of determining the positions accurately and quickly. This method uses an infrared light motion capture system (IR-MOCAP) with 8 cameras arranged around a human participant. It acquires 3D coordinates of each electrode and automatically labels them. Each electrode has a small reflector on top of it, thus allowing its detection by the cameras. We tested the accuracy of the presented method by acquiring the electrode positions on a rigid sphere model and comparing these with measurements from computed tomography (CT). The average Euclidean distance between the sphere model CT measurements and the presented method was 1.23 mm with an average standard deviation of 0.51 mm. We also tested the method with a human participant. The measurement was quickly performed and all positions were captured. These results indicate that, with this method, it is possible to acquire electrode positions with minimal error and little time expenditure for the study participants and investigators. PMID:25941468

  20. Deformation of angle profiles in forward kinematics for nullifying end-point offset while preserving movement properties.

    PubMed

    Zhang, Xudong

    2002-10-01

    This work describes a new approach that allows an angle-domain human movement model to generate, via forward kinematics, Cartesian-space human movement representation with otherwise inevitable end-point offset nullified but much of the kinematic authenticity retained. The approach incorporates a rectification procedure that determines the minimum postural angle change at the final frame to correct the end-point offset, and a deformation procedure that deforms the angle profile accordingly to preserve maximum original kinematic authenticity. Two alternative deformation schemes, named amplitude-proportional (AP) and time-proportional (TP) schemes, are proposed and formulated. As an illustration and empirical evaluation, the proposed approach, along with two deformation schemes, was applied to a set of target-directed right-hand reaching movements that had been previously measured and modeled. The evaluation showed that both deformation schemes nullified the final frame end-point offset and significantly reduced time-averaged position errors for the end-point as well as the most distal intermediate joint while causing essentially no change in the remaining joints. A comparison between the two schemes based on time-averaged joint and end-point position errors indicated that overall the TP scheme outperformed the AP scheme. In addition, no statistically significant difference in time-averaged angle error was identified between the raw prediction and either of the deformation schemes, nor between the two schemes themselves, suggesting minimal angle-domain distortion incurred by the deformation.

  1. Wisdom in Medicine: What Helps Physicians After a Medical Error?

    PubMed

    Plews-Ogan, Margaret; May, Natalie; Owens, Justine; Ardelt, Monika; Shapiro, Jo; Bell, Sigall K

    2016-02-01

    Confronting medical error openly is critical to organizational learning, but less is known about what helps individual clinicians learn and adapt positively after making a harmful mistake. Understanding what factors help doctors gain wisdom can inform educational and peer support programs, and may facilitate the development of specific tools to assist doctors after harmful errors occur. Using "posttraumatic growth" as a model, the authors conducted semistructured interviews (2009-2011) with 61 physicians who had made a serious medical error. Interviews were recorded, professionally transcribed, and coded by two study team members (kappa 0.8) using principles of grounded theory and NVivo software. Coders also scored interviewees as wisdom exemplars or nonexemplars based on Ardelt's three-dimensional wisdom model. Of the 61 physicians interviewed, 33 (54%) were male, and on average, eight years had elapsed since the error. Wisdom exemplars were more likely to report disclosing the error to the patient/family (69%) than nonexemplars (38%); P < .03. Fewer than 10% of all participants reported receiving disclosure training. Investigators identified eight themes reflecting what helped physician wisdom exemplars cope positively: talking about it, disclosure and apology, forgiveness, a moral context, dealing with imperfection, learning/becoming an expert, preventing recurrences/improving teamwork, and helping others/teaching. The path forged by doctors who coped well with medical error highlights specific ways to help clinicians move through this difficult experience so that they avoid devastating professional outcomes and have the best chance of not just recovery but positive growth.

  2. The influence of non-rigid anatomy and patient positioning on endoscopy-CT image registration in the head and neck.

    PubMed

    Ingram, W Scott; Yang, Jinzhong; Wendt, Richard; Beadle, Beth M; Rao, Arvind; Wang, Xin A; Court, Laurence E

    2017-08-01

    To assess the influence of non-rigid anatomy and differences in patient positioning between CT acquisition and endoscopic examination on endoscopy-CT image registration in the head and neck. Radiotherapy planning CTs and 31-35 daily treatment-room CTs were acquired for nineteen patients. Diagnostic CTs were acquired for thirteen of the patients. The surfaces of the airways were segmented on all scans and triangular meshes were created to render virtual endoscopic images with a calibrated pinhole model of an endoscope. The virtual images were used to take projective measurements throughout the meshes, with reference measurements defined as those taken on the planning CTs and test measurements defined as those taken on the daily or diagnostic CTs. The influence of non-rigid anatomy was quantified by 3D distance errors between reference and test measurements on the daily CTs, and the influence of patient positioning was quantified by 3D distance errors between reference and test measurements on the diagnostic CTs. The daily CT measurements were also used to investigate the influences of camera-to-surface distance, surface angle, and the interval of time between scans. Average errors in the daily CTs were 0.36 ± 0.61 cm in the nasal cavity, 0.58 ± 0.83 cm in the naso- and oropharynx, and 0.47 ± 0.73 cm in the hypopharynx and larynx. Average errors in the diagnostic CTs in those regions were 0.52 ± 0.69 cm, 0.65 ± 0.84 cm, and 0.69 ± 0.90 cm, respectively. All CTs had errors heavily skewed towards 0, albeit with large outliers. Large camera-to-surface distances were found to increase the errors, but the angle at which the camera viewed the surface had no effect. The errors in the Day 1 and Day 15 CTs were found to be significantly smaller than those in the Day 30 CTs (P < 0.05). Inconsistencies of patient positioning have a larger influence than non-rigid anatomy on projective measurement errors. 
In general, these errors are largest when the camera is in the superior pharynx, where it sees large distances and a lot of muscle motion. The errors are larger when the interval of time between CT acquisitions is longer, which suggests that the interval of time between the CT acquisition and the endoscopic examination should be kept short. The median errors found in this study are comparable to acceptable levels of uncertainty in deformable CT registration. Large errors are possible even when image alignment is very good, indicating that projective measurements must be made carefully to avoid these outliers. © 2017 American Association of Physicists in Medicine.

  3. Development of multiple-eye PIV using mirror array

    NASA Astrophysics Data System (ADS)

    Maekawa, Akiyoshi; Sakakibara, Jun

    2018-06-01

    To reduce particle image velocimetry (PIV) measurement error, we manufactured an ellipsoidal polyhedral mirror and placed it between a camera and the flow target to capture n images of identical particles from n (=80 maximum) different directions. The 3D particle positions were determined from the ensemble average of the n(n−1)/2 intersecting points of pairs of back-projected lines of sight from a particle found in any combination of two of the n images. The method was then applied to a rigid-body rotating flow and a turbulent pipe flow. In the former measurement, bias error and random error fell in a range of ±0.02 pixels and 0.02–0.05 pixels, respectively; additionally, random error decreased in proportion to . In the latter measurement, in which the measured value was compared to direct numerical simulation, bias error was reduced and random error also decreased in proportion to .
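The pairwise reconstruction described above can be sketched as follows: for each pair of views, take the midpoint of the common perpendicular of the two back-projected lines, then average over all n(n−1)/2 pairs. The four-view camera geometry below is a made-up example, not the mirror-array layout of the study:

```python
import numpy as np

def pair_intersection(a1, d1, a2, d2):
    """Midpoint of the common perpendicular of two 3-D lines a_i + t*d_i."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    r = a1 - a2
    b = d1 @ d2
    t1 = (b * (d2 @ r) - d1 @ r) / (1.0 - b * b)
    t2 = ((d2 @ r) - b * (d1 @ r)) / (1.0 - b * b)
    return 0.5 * ((a1 + t1 * d1) + (a2 + t2 * d2))

def particle_position(origins, dirs):
    """Ensemble average of all pairwise line-of-sight intersection points."""
    pts = [pair_intersection(origins[i], dirs[i], origins[j], dirs[j])
           for i in range(len(origins)) for j in range(i + 1, len(origins))]
    return np.mean(pts, axis=0)

# Four synthetic views of a particle at P (exact, noise-free lines of sight).
P = np.array([1.0, 2.0, 3.0])
origins = [np.array(o, float) for o in [(0, 0, 0), (10, 0, 0), (0, 10, 0), (0, 0, 10)]]
dirs = [P - o for o in origins]
est = particle_position(origins, dirs)
```

With noisy lines of sight the pairwise midpoints scatter around the true position, and averaging them is what drives the reported decrease in random error with the number of views.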

  4. Long-Term Prediction of Emergency Department Revenue and Visitor Volume Using Autoregressive Integrated Moving Average Model

    PubMed Central

    Chen, Chieh-Fan; Ho, Wen-Hsien; Chou, Huei-Yin; Yang, Shu-Mei; Chen, I-Te; Shi, Hon-Yi

    2011-01-01

    This study analyzed meteorological, clinical and economic factors in terms of their effects on monthly ED revenue and visitor volume. Monthly data from January 1, 2005 to September 30, 2009 were analyzed. Spearman correlation and cross-correlation analyses were performed to identify the correlation between each independent variable, ED revenue, and visitor volume. Autoregressive integrated moving average (ARIMA) model was used to quantify the relationship between each independent variable, ED revenue, and visitor volume. The accuracies were evaluated by comparing model forecasts to actual values with mean absolute percentage of error. Sensitivity of prediction errors to model training time was also evaluated. The ARIMA models indicated that mean maximum temperature, relative humidity, rainfall, non-trauma, and trauma visits may correlate positively with ED revenue, but mean minimum temperature may correlate negatively with ED revenue. Moreover, mean minimum temperature and stock market index fluctuation may correlate positively with trauma visitor volume. Mean maximum temperature, relative humidity and stock market index fluctuation may correlate positively with non-trauma visitor volume. Mean maximum temperature and relative humidity may correlate positively with pediatric visitor volume, but mean minimum temperature may correlate negatively with pediatric visitor volume. The model also performed well in forecasting revenue and visitor volume. PMID:22203886
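Forecast accuracy in the study is scored by mean absolute percentage error (MAPE); a minimal implementation with made-up revenue numbers (the ARIMA fitting itself is not reproduced here):

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error between actual values and forecasts."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return float(np.mean(np.abs((actual - forecast) / actual)) * 100.0)

# e.g. four months of ED revenue (arbitrary units) vs an ARIMA-style forecast
err = mape([100, 120, 90, 110], [95, 126, 93, 104])
```

MAPE is scale-free, which is why it is a common choice for comparing forecast quality across series as different as revenue and visitor counts; note it is undefined when an actual value is zero.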

  5. Long-term prediction of emergency department revenue and visitor volume using autoregressive integrated moving average model.

    PubMed

    Chen, Chieh-Fan; Ho, Wen-Hsien; Chou, Huei-Yin; Yang, Shu-Mei; Chen, I-Te; Shi, Hon-Yi

    2011-01-01

    This study analyzed meteorological, clinical and economic factors in terms of their effects on monthly ED revenue and visitor volume. Monthly data from January 1, 2005 to September 30, 2009 were analyzed. Spearman correlation and cross-correlation analyses were performed to identify the correlation between each independent variable, ED revenue, and visitor volume. Autoregressive integrated moving average (ARIMA) model was used to quantify the relationship between each independent variable, ED revenue, and visitor volume. The accuracies were evaluated by comparing model forecasts to actual values with mean absolute percentage of error. Sensitivity of prediction errors to model training time was also evaluated. The ARIMA models indicated that mean maximum temperature, relative humidity, rainfall, non-trauma, and trauma visits may correlate positively with ED revenue, but mean minimum temperature may correlate negatively with ED revenue. Moreover, mean minimum temperature and stock market index fluctuation may correlate positively with trauma visitor volume. Mean maximum temperature, relative humidity and stock market index fluctuation may correlate positively with non-trauma visitor volume. Mean maximum temperature and relative humidity may correlate positively with pediatric visitor volume, but mean minimum temperature may correlate negatively with pediatric visitor volume. The model also performed well in forecasting revenue and visitor volume.

  6. Poster - 49: Assessment of Synchrony respiratory compensation error for CyberKnife liver treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Ming; Cygler,

    The goal of this work is to quantify respiratory motion compensation errors for liver tumor patients treated by the CyberKnife system with Synchrony tracking, to identify patients with the smallest tracking errors and to eventually help coach patients' breathing patterns to minimize dose delivery errors. The accuracy of CyberKnife Synchrony respiratory motion compensation was assessed for 37 patients treated for liver lesions by analyzing data from system logfiles. A predictive model is used to modulate the direction of individual beams during dose delivery based on the positions of internally implanted fiducials, determined using an orthogonal x-ray imaging system, and the current location of LED external markers. For each x-ray pair acquired, system logfiles report the prediction error, the difference between the measured and predicted fiducial positions, and the delivery error, which is an estimate of the statistical error in the model overcoming the latency between x-ray acquisition and robotic repositioning. The total error was calculated at the time of each x-ray pair, across all treatment fractions and patients, giving the average respiratory motion compensation error in three dimensions. The 99th percentile of the total radial error is 3.85 mm, with the highest contribution of 2.79 mm in the superior/inferior (S/I) direction. The absolute mean compensation error is 1.78 mm radially, with a 1.27 mm contribution in the S/I direction. Regions of high total error may provide insight into features predicting groups of patients with larger or smaller total errors.
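The logfile summary statistics (mean radial error, 99th percentile) reduce to simple vector norms over the per-image 3-D error samples. The Gaussian errors below are synthetic stand-ins for logfile data, not the patients' values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-image compensation errors along the three room axes (mm).
errs = rng.normal(0.0, 1.0, size=(5000, 3))     # stand-in for logfile entries
radial = np.linalg.norm(errs, axis=1)           # total radial error per image
mean_radial = float(radial.mean())
p99 = float(np.percentile(radial, 99))          # 99th-percentile radial error
```

For isotropic unit-variance noise the mean radial error is about 1.6 mm and the 99th percentile about 3.4 mm, illustrating how a modest per-axis spread maps into the radial figures reported above.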

  7. Using the Coronal Evolution to Successfully Forward Model CMEs' In Situ Magnetic Profiles

    NASA Astrophysics Data System (ADS)

    Kay, C.; Gopalswamy, N.

    2017-12-01

    Predicting the effects of a coronal mass ejection (CME) impact requires knowing if impact will occur, which part of the CME impacts, and its magnetic properties. We explore the relation between CME deflections and rotations, which change the position and orientation of a CME, and the resulting magnetic profiles at 1 AU. For 45 STEREO-era, Earth-impacting CMEs, we determine the solar source of each CME, reconstruct its coronal position and orientation, and perform a ForeCAT (Forecasting a CME's Altered Trajectory) simulation of the coronal deflection and rotation. From the reconstructed and modeled CME deflections and rotations, we determine the solar cycle variation and correlations with CME properties. We assume no evolution between the outer corona and 1 AU and use the ForeCAT results to drive the ForeCAT In situ Data Observer (FIDO) in situ magnetic field model, allowing for comparisons with ACE and Wind observations. We do not attempt to reproduce the arrival time. On average FIDO reproduces the in situ magnetic field for each vector component with an error equivalent to 35% of the average total magnetic field strength when the total modeled magnetic field is scaled to match the average observed value. Random walk best fits distinguish between ForeCAT's ability to determine FIDO's input parameters and the limitations of the simple flux rope model. These best fits reduce the average error to 30%. The FIDO results are sensitive to changes of order a degree in the CME latitude, longitude, and tilt, suggesting that accurate space weather predictions require accurate measurements of a CME's position and orientation.

  8. Is ExacTrac x-ray system an alternative to CBCT for positioning patients with head and neck cancers?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clemente, Stefania; Chiumento, Costanza; Fiorentino, Alba

    Purpose: To evaluate the usefulness of a six-degrees-of-freedom (6D) correction using the ExacTrac robotic system in patients with head-and-neck (HN) cancer receiving radiation therapy. Methods: Local setup accuracy was analyzed for 12 patients undergoing intensity-modulated radiation therapy (IMRT). Patient position was imaged daily under two different protocols: cone-beam computed tomography (CBCT) and ExacTrac (ET) image correction. Setup data from either approach were compared in terms of both residual errors after correction and punctual displacement of selected regions of interest (mandible, C2, and C6 vertebral bodies). Results: On average, both protocols achieved reasonably low residual errors after initial correction. The observed differences in shift vectors between the two protocols showed that CBCT tends to weight C2 and C6 more at the expense of the mandible, while ET tends to average differences more evenly among the different ROIs. Conclusions: CBCT, even without 6D correction capabilities, seems preferable to ET for more consistent alignment and the capability to see soft tissues. Therefore, in our experience, CBCT represents a benchmark for positioning head and neck cancer patients.

  9. SU-F-J-206: Systematic Evaluation of the Minimum Detectable Shift Using a Range- Finding Camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Platt, M; Platt, M; Lamba, M

    2016-06-15

    Purpose: The robotic table used for patient alignment in proton therapy is calibrated only at commissioning under well-defined conditions, and table shifts may vary over time and with differing conditions. The purpose of this study is to systematically investigate minimum detectable shifts using a time-of-flight (TOF) range-finding camera for table position feedback. Methods: A TOF camera was used to acquire one hundred 424 × 512 range images from a flat surface before and after known shifts. Range was assigned by averaging central regions of the image across multiple images. Depth resolution was determined by evaluating the difference between the actual shift of the surface and the measured shift. Depth resolution was evaluated for the number of images averaged, the area of the sensor over which depth was averaged, the distance from camera to surface, central versus peripheral image regions, and the angle of the surface relative to the camera. Results: For one to one thousand images with a shift of one millimeter, the range in error was 0.852 ± 0.27 mm to 0.004 ± 0.01 mm (95% C.I.). For varying regions of the camera sensor the range in error was 0.02 ± 0.05 mm to 0.47 ± 0.04 mm. The following results are for 10-image averages. For areas ranging from one pixel to 9 × 9 pixels the range in error was 0.15 ± 0.09 to 0.29 ± 0.15 mm (1σ). For distances ranging from two to four meters the range in error was 0.15 ± 0.09 to 0.28 ± 0.15 mm. For an angle of incidence between thirty degrees and ninety degrees the average range in error was 0.11 ± 0.08 to 0.17 ± 0.09 mm. Conclusion: It is feasible to use a TOF camera for measuring shifts in flat surfaces under clinically relevant conditions with submillimeter precision.
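The dominant effect reported, error shrinking as more images are averaged, follows the 1/√n law for independent noise. A quick simulation under that assumption (the noise level and counts are arbitrary, not the camera's specifications):

```python
import numpy as np

rng = np.random.default_rng(1)

def depth_std_after_averaging(n_images, sigma=1.0, reps=4000):
    """Std of a depth estimate formed by averaging n noisy range samples."""
    estimates = rng.normal(0.0, sigma, size=(reps, n_images)).mean(axis=1)
    return float(estimates.std())

s1 = depth_std_after_averaging(1)
s100 = depth_std_after_averaging(100)  # expect roughly s1 / 10
```

Averaging over sensor area buys precision the same way, which is why both the image count and the pixel-region size appear as variables in the study's error ranges.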

  10. SU-E-T-261: Plan Quality Assurance of VMAT Using Fluence Images Reconstituted From Log-Files

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katsuta, Y; Shimizu, E; Matsunaga, K

    2014-06-01

    Purpose: A successful VMAT plan delivery includes precise modulation of dose rate, gantry rotation and multi-leaf collimator (MLC) shapes. One of the main problems in plan quality assurance is that dosimetric errors associated with leaf-positional errors are difficult to analyze, because they vary with the MU delivered and with leaf number. In this study, we calculated an integrated fluence error image (IFEI) from log-files and evaluated plan quality in the areas scanned by all MLC leaves and by individual leaves. Methods: The log-file reported the expected and actual positions for the inner 20 MLC leaves and the dose fraction every 0.25 seconds during prostate VMAT on an Elekta Synergy. These data were imported into in-house software developed to calculate expected and actual fluence images from the difference of opposing leaf trajectories and the dose fraction at each time. The IFEI was obtained by summing the absolute differences between corresponding expected and actual fluence images. Results: In the area scanned by all MLC leaves in the IFEI, the average and root mean square (rms) errors were 2.5 and 3.6 MU, the areas with errors below 10, 5 and 3 MU were 98.5, 86.7 and 68.1%, and 95% of the area had an error of less than 7.1 MU. In the areas scanned by individual MLC leaves in the IFEI, the average and rms errors were 2.1–3.0 and 3.1–4.0 MU, the areas with errors below 10, 5 and 3 MU were 97.6–99.5, 81.7–89.5 and 51.2–72.8%, and 95% of the area had an error of less than 6.6–8.2 MU. Conclusion: Analysis of the IFEI reconstituted from log-files provided detailed information about the delivery in the areas scanned by all MLC leaves and by individual leaves.
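The IFEI construction itself is just a running sum of absolute fluence differences across control samples; a tiny sketch on 2×2 "fluence images" (the values are arbitrary, and real images would come from leaf trajectories and dose fractions parsed out of the log-file):

```python
import numpy as np

def integrated_fluence_error(expected_frames, actual_frames):
    """Integrated fluence error image: sum over control samples of the
    absolute difference between expected and actual fluence images."""
    ifei = np.zeros_like(expected_frames[0], dtype=float)
    for exp_f, act_f in zip(expected_frames, actual_frames):
        ifei += np.abs(exp_f - act_f)
    return ifei

expected = [np.array([[1.0, 2.0], [0.0, 1.0]]), np.array([[2.0, 1.0], [1.0, 0.0]])]
actual = [np.array([[1.5, 2.0], [0.0, 0.5]]), np.array([[2.0, 0.5], [1.5, 0.0]])]
ifei = integrated_fluence_error(expected, actual)
```

Summary statistics such as the mean, rms, and the fraction of the image below an error threshold then follow directly from the resulting array.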

  11. Three-dimensional planning and use of patient-specific guides improve glenoid component position: an in vitro study.

    PubMed

    Walch, Gilles; Vezeridis, Peter S; Boileau, Pascal; Deransart, Pierric; Chaoui, Jean

    2015-02-01

    Glenoid component positioning is a key factor for success in total shoulder arthroplasty. Three-dimensional (3D) measurements of glenoid retroversion, inclination, and humeral head subluxation are helpful tools for preoperative planning. The purpose of this study was to assess the reliability and precision of a novel surgical method for placing the glenoid component with use of patient-specific templates created by preoperative surgical planning and 3D modeling. A preoperative computed tomography examination of cadaveric scapulae (N = 18) was performed. The glenoid implants were virtually placed, and patient-specific guides were created to direct the guide pin into the desired orientation and position in the glenoid. The 3D orientation and position of the guide pin were evaluated by performing a postoperative computed tomography scan for each scapula. The differences between the preoperative planning and the achieved result were analyzed. The mean error in 3D orientation of the guide pin was 2.39°, the mean entry point position error was 1.05 mm, and the mean inclination angle error was 1.42°. The average error in the version angle was 1.64°. There were no technical difficulties or complications related to use of patient-specific guides for guide pin placement. Quantitative analysis of guide pin positioning demonstrated a good correlation between preoperative planning and the achieved position of the guide pin. This study demonstrates the reliability and precision of preoperative planning software and patient-specific guides for glenoid component placement in total shoulder arthroplasty. Copyright © 2015. Published by Elsevier Inc.

  12. A procedure for removing the effect of response bias errors from waterfowl hunter questionnaire responses

    USGS Publications Warehouse

    Atwood, E.L.

    1958-01-01

    Response bias errors are studied by comparing questionnaire responses from waterfowl hunters using four large public hunting areas with actual hunting data from these areas during two hunting seasons. To the extent that the data permit, the sources of the error in the responses were studied and the contribution of each type to the total error was measured. Response bias errors, including both prestige and memory bias, were found to be very large as compared to non-response and sampling errors. Good fits were obtained with the seasonal kill distribution of the actual hunting data and the negative binomial distribution and a good fit was obtained with the distribution of total season hunting activity and the semi-logarithmic curve. A comparison of the actual seasonal distributions with the questionnaire response distributions revealed that the prestige and memory bias errors are both positive. The comparisons also revealed the tendency for memory bias errors to occur at digit frequencies divisible by five and for prestige bias errors to occur at frequencies which are multiples of the legal daily bag limit. A graphical adjustment of the response distributions was carried out by developing a smooth curve from those frequency classes not included in the predictable biased frequency classes referred to above. Group averages were used in constructing the curve, as suggested by Ezekiel [1950]. The efficiency of the technique described for reducing response bias errors in hunter questionnaire responses on seasonal waterfowl kill is high in large samples. The graphical method is not as efficient in removing response bias errors in hunter questionnaire responses on seasonal hunting activity where an average of 60 percent was removed.

  13. An Indoor Continuous Positioning Algorithm on the Move by Fusing Sensors and Wi-Fi on Smartphones.

    PubMed

    Li, Huaiyu; Chen, Xiuwan; Jing, Guifei; Wang, Yuan; Cao, Yanfeng; Li, Fei; Zhang, Xinlong; Xiao, Han

    2015-12-11

Wi-Fi indoor positioning algorithms suffer large positioning errors and low stability when continuously positioning terminals that are on the move. This paper proposes a novel indoor continuous positioning algorithm for terminals on the move that fuses sensors and Wi-Fi on smartphones. The main innovations are an improved Wi-Fi positioning algorithm and a novel positioning fusion algorithm named the Trust Chain Positioning Fusion (TCPF) algorithm. The improved Wi-Fi positioning algorithm was designed based on the properties of Wi-Fi signals on the move, which were found in a novel "quasi-dynamic" Wi-Fi signal experiment. The TCPF algorithm is proposed to realize the "process-level" fusion of Wi-Fi and Pedestrian Dead Reckoning (PDR) positioning, comprising three parts: trusted point determination, trust state, and the positioning fusion algorithm. An experiment was carried out for verification in a typical indoor environment; the average positioning error on the move is 1.36 m, a decrease of 28.8% compared to an existing algorithm. The results show that the proposed algorithm can effectively reduce the influence of unstable Wi-Fi signals and improve the accuracy and stability of indoor continuous positioning on the move.
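The general sensor/Wi-Fi fusion idea can be illustrated with a minimal complementary-blend sketch. This is not the paper's TCPF algorithm; the step length, heading, weight, and Wi-Fi fix below are all hypothetical:

```python
import math

def pdr_step(pos, heading_rad, step_len):
    """Dead-reckon one pedestrian step from heading and step length."""
    x, y = pos
    return (x + step_len * math.cos(heading_rad),
            y + step_len * math.sin(heading_rad))

def fuse(pdr_pos, wifi_pos, wifi_weight):
    """Blend a PDR estimate with a Wi-Fi fix; the weight encodes trust in Wi-Fi."""
    return tuple((1 - wifi_weight) * p + wifi_weight * w
                 for p, w in zip(pdr_pos, wifi_pos))

pos = (0.0, 0.0)
pos = pdr_step(pos, 0.0, 0.7)      # one 0.7 m step heading along +x
pos = fuse(pos, (1.0, 0.2), 0.3)   # pull the estimate toward a Wi-Fi fix
```

A real system would vary the weight with an estimate of Wi-Fi trustworthiness, which is roughly the role the trust chain plays in TCPF.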

  14. TU-F-17A-05: Calculating Tumor Trajectory and Dose-Of-The-Day for Highly Mobile Tumors Using Cone-Beam CT Projections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, B; Miften, M

    2014-06-15

Purpose: Cone-beam CT (CBCT) projection images provide anatomical data in real-time over several respiratory cycles, forming a comprehensive picture of tumor movement. We developed a method using these projections to determine the trajectory and dose of highly mobile tumors during each fraction of treatment. Methods: CBCT images of a respiration phantom were acquired, where the trajectory mimicked a lung tumor with high amplitude (2.4 cm) and hysteresis. A template-matching algorithm was used to identify the location of a steel BB in each projection. A Gaussian probability density function for tumor position was calculated which best fit the observed trajectory of the BB in the imager geometry. Two methods to improve the accuracy of tumor track reconstruction were investigated: first, using respiratory phase information to refine the trajectory estimation, and second, using the Monte Carlo method to sample the estimated Gaussian tumor position distribution. 15 clinically-drawn abdominal/lung CTV volumes were used to evaluate the accuracy of the proposed methods by comparing the known and calculated BB trajectories. Results: With all methods, the mean position of the BB was determined with accuracy better than 0.1 mm, and root-mean-square (RMS) trajectory errors were lower than 5% of marker amplitude. Use of respiratory phase information decreased RMS errors by 30%, and decreased the fraction of large errors (>3 mm) by half. Mean dose to the clinical volumes was calculated with an average error of 0.1% and average absolute error of 0.3%. Dosimetric parameters D90/D95 were determined within 0.5% of maximum dose. Monte-Carlo sampling increased RMS trajectory and dosimetric errors slightly, but prevented over-estimation of dose in trajectories with high noise. Conclusions: Tumor trajectory and dose-of-the-day were accurately calculated using CBCT projections. This technique provides a widely-available method to evaluate highly-mobile tumors, and could facilitate better strategies to mitigate or compensate for motion during SBRT.
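Two quantities used in this abstract, Monte Carlo sampling of a Gaussian position estimate and RMS trajectory error expressed as a percentage of motion amplitude, can be sketched as follows. The trajectory and noise level are hypothetical, not the phantom data:

```python
import math
import random

random.seed(0)  # deterministic illustration

AMPLITUDE = 24.0  # mm, comparable to the phantom's high-amplitude motion
true_traj = [AMPLITUDE * math.sin(2 * math.pi * t / 40) for t in range(40)]

def rms_error(estimate, truth):
    """Root-mean-square trajectory error between an estimate and the truth."""
    return math.sqrt(sum((e - t) ** 2 for e, t in zip(estimate, truth)) / len(truth))

# Monte Carlo step: draw each position from a Gaussian centred on the true
# position, mimicking sampling an estimated Gaussian position distribution.
sampled = [random.gauss(mu, 1.0) for mu in true_traj]
err_pct = 100 * rms_error(sampled, true_traj) / AMPLITUDE
```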

  15. A Noninvasive Body Setup Method for Radiotherapy by Using a Multimodal Image Fusion Technique

    PubMed Central

    Zhang, Jie; Chen, Yunxia; Wang, Chenchen; Chu, Kaiyue; Jin, Jianhua; Huang, Xiaolin; Guan, Yue; Li, Weifeng

    2017-01-01

Purpose: To minimize the mismatch error between patient surface and immobilization system for tumor location by a noninvasive patient setup method. Materials and Methods: The method, based on point set registration, proposes a shift for patient positioning by integrating information from computed tomography scans and from optical surface landmarks. Evaluation of the method covered 3 areas: (1) validation on a phantom by estimating 100 known mismatch errors between patient surface and immobilization system; (2) five patients with pelvic tumors, for whom the tumor location errors of the method were measured as the difference between the shift proposed by cone-beam computed tomography and that proposed by our method; and (3) comparison of the setup data collected from the patient evaluation with the published performance data of 2 other similar systems. Results: The phantom verification results showed that the method was capable of estimating the mismatch error between patient surface and immobilization system with a precision of <0.22 mm. For the pelvic tumors, the method had an average tumor location error of 1.303, 2.602, and 1.684 mm in the left–right, anterior–posterior, and superior–inferior directions, respectively. The performance comparison with the 2 other similar systems suggested that the method had better positioning accuracy for pelvic tumor location. Conclusion: By effectively decreasing an interfraction uncertainty source (mismatch error between patient surface and immobilization system) in radiotherapy, the method can improve patient positioning precision for pelvic tumors. PMID:29333959
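The point-set registration at the heart of such a method can be illustrated with a 2-D rigid (rotation plus translation) least-squares fit. The actual method is 3-D; this sketch, with made-up points, only shows the principle:

```python
import math

def rigid_register_2d(src, dst):
    """Least-squares rotation + translation aligning src points to dst
    (2-D Procrustes: optimal angle from summed dot and cross products)."""
    n = len(src)
    cxs = sum(p[0] for p in src) / n; cys = sum(p[1] for p in src) / n
    cxd = sum(p[0] for p in dst) / n; cyd = sum(p[1] for p in dst) / n
    sdot = scross = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        ax, ay = xs - cxs, ys - cys
        bx, by = xd - cxd, yd - cyd
        sdot += ax * bx + ay * by     # cos component
        scross += ax * by - ay * bx   # sin component
    theta = math.atan2(scross, sdot)
    tx = cxd - (cxs * math.cos(theta) - cys * math.sin(theta))
    ty = cyd - (cxs * math.sin(theta) + cys * math.cos(theta))
    return theta, (tx, ty)

# Demo: rotate known points by 0.5 rad, translate by (3, -1), then recover.
theta0, t0 = 0.5, (3.0, -1.0)
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 2.0)]
dst = [(x * math.cos(theta0) - y * math.sin(theta0) + t0[0],
        x * math.sin(theta0) + y * math.cos(theta0) + t0[1]) for x, y in src]
theta, t = rigid_register_2d(src, dst)
```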

  16. Evaluation of statistical models for forecast errors from the HBV model

    NASA Astrophysics Data System (ADS)

    Engeland, Kolbjørn; Renard, Benjamin; Steinsland, Ingelin; Kolberg, Sjur

    2010-04-01

Three statistical models for the forecast errors for inflow into the Langvatn reservoir in Northern Norway were constructed and tested according to the agreement between (i) the forecast distribution and the observations and (ii) median values of the forecast distribution and the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first-order auto-regressive model was constructed for the forecast errors; the parameters were conditioned on weather classes. In the second model, the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first-order auto-regressive model was constructed for the forecast errors. For the third model, positive and negative errors were modeled separately; the errors were first NQT-transformed before conditioning the mean error values on climate, forecasted inflow, and yesterday's error. To test the three models we applied three criteria: we wanted (a) the forecast distribution to be reliable; (b) the forecast intervals to be narrow; (c) the median values of the forecast distribution to be close to the observed values. Models 1 and 2 gave almost identical results. The median values improved the forecast, with the Nash-Sutcliffe efficiency (Reff) increasing from 0.77 for the original forecast to 0.87 for the corrected forecasts. Models 1 and 2 over-estimated the forecast intervals but gave the narrowest intervals. Their main drawback was that their distributions are less reliable than Model 3's. For Model 3 the median values did not fit well since the auto-correlation was not accounted for. Since Model 3 did not benefit from the potential variance reduction that lies in bias estimation and removal, it gave on average wider forecast intervals than the two other models. At the same time, Model 3 on average slightly under-estimated the forecast intervals, probably explained by the use of average measures to evaluate the fit.
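Model 1's two ingredients, a Box-Cox transform followed by a first-order auto-regressive (AR(1)) model of the errors, can be sketched as follows. The error series and the simple moment-based fit are illustrative, not the paper's estimation procedure:

```python
import math

def box_cox(x, lam):
    """Box-Cox transform; reduces to log for lambda = 0."""
    return math.log(x) if lam == 0 else (x ** lam - 1) / lam

def ar1_fit(errors):
    """Moment estimate of the lag-1 autocorrelation of an error series."""
    m = sum(errors) / len(errors)
    num = sum((errors[i] - m) * (errors[i - 1] - m) for i in range(1, len(errors)))
    den = sum((e - m) ** 2 for e in errors)
    return num / den

# Hypothetical transformed forecast errors (today's error resembles yesterday's).
errs = [0.5, 0.4, 0.35, 0.1, 0.05, -0.1, -0.2, -0.15, 0.0, 0.1]
phi = ar1_fit(errs)
next_err = phi * errs[-1]  # AR(1) one-step-ahead error forecast
```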

  17. Unicompartmental knee arthroplasty: is robotic technology more accurate than conventional technique?

    PubMed

    Citak, Mustafa; Suero, Eduardo M; Citak, Musa; Dunbar, Nicholas J; Branch, Sharon H; Conditt, Michael A; Banks, Scott A; Pearle, Andrew D

    2013-08-01

Robotic-assisted unicompartmental knee arthroplasty (UKA) with rigid bone fixation can significantly improve implant placement and leg alignment. The aim of this cadaveric study was to determine whether the use of robotic systems with dynamic bone tracking would provide more accurate UKA implant positioning compared to the conventional manual technique. Three-dimensional CT-based preoperative plans were created to determine the desired position and orientation for the tibial and femoral components. For each pair of cadaver knees, UKA was performed using traditional instrumentation on the left side and using a haptic robotic system on the right side. Postoperative CT scans were obtained and 3D-to-3D iterative closest point registration was performed. Implant position and orientation were compared to the preoperative plan. Surgical RMS errors for femoral component placement were within 1.9 mm and 3.7° in all directions of the planned implant position for the robotic group, while RMS errors for the manual group were within 5.4 mm and 10.2°. Average RMS errors for tibial component placement were within 1.4 mm and 5.0° in all directions for the robotic group, while for the manual group RMS errors were within 5.7 mm and 19.2°. UKA was more precise using a semiactive robotic system with dynamic bone tracking technology compared to the manual technique. Copyright © 2012 Elsevier B.V. All rights reserved.
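The RMS error used as the accuracy metric here is simply the square root of the mean squared deviation from the plan. A minimal sketch with hypothetical per-specimen placement errors:

```python
import math

def rms(errors):
    """Root-mean-square of per-specimen placement errors (mm or degrees)."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Hypothetical anterior-posterior femoral placement errors (mm),
# robotic vs manual technique (not the study's data).
robotic = [0.8, -1.2, 1.5, -0.6, 1.0]
manual = [3.1, -4.0, 2.5, -5.2, 4.4]
```

Because errors are squared, RMS penalizes occasional large deviations more heavily than a plain average of absolute errors would.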

  18. SU-F-J-42: Comparison of Varian TrueBeam Cone-Beam CT and BrainLab ExacTrac X-Ray for Cranial Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, J; Shi, W; Andrews, D

    2016-06-15

Purpose: To compare online image registrations of TrueBeam cone-beam CT (CBCT) and BrainLab ExacTrac x-ray imaging systems for cranial radiotherapy. Method: Phantom and patient studies were performed on a Varian TrueBeam STx linear accelerator (Version 2.5), which is integrated with a BrainLab ExacTrac imaging system (Version 6.1.1). The phantom study was based on a Rando head phantom and was designed to evaluate isocenter-location dependence of the image registrations. Ten isocenters were selected at various locations in the phantom, representing clinical treatment sites. CBCT and ExacTrac x-ray images were taken with the phantom located at each isocenter. The patient study included thirteen patients. CBCT and ExacTrac x-ray images were taken at each patient's treatment position. Six-dimensional image registrations were performed on CBCT and ExacTrac, and residual errors calculated from CBCT and ExacTrac were compared. Results: In the phantom study, the average residual-error differences between CBCT and ExacTrac image registrations were 0.16±0.10 mm, 0.35±0.20 mm, and 0.21±0.15 mm in the vertical, longitudinal, and lateral directions, respectively. The average residual-error differences in rotation, roll, and pitch were 0.36±0.11 degrees, 0.14±0.10 degrees, and 0.12±0.10 degrees, respectively. In the patient study, the average residual-error differences in the vertical, longitudinal, and lateral directions were 0.13±0.13 mm, 0.37±0.21 mm, and 0.22±0.17 mm, respectively. The average residual-error differences in rotation, roll, and pitch were 0.30±0.10 degrees, 0.18±0.11 degrees, and 0.22±0.13 degrees, respectively. Larger residual-error differences (up to 0.79 mm) were observed in the longitudinal direction in the phantom and patient studies where isocenters were located in or close to the frontal lobes, i.e., located superficially. Conclusion: Overall, the average residual-error differences were within 0.4 mm in the translational directions and within 0.4 degrees in the rotational directions.

  19. Measuring Scale Errors in a Laser Tracker’s Horizontal Angle Encoder Through Simple Length Measurement and Two-Face System Tests

    PubMed Central

    Muralikrishnan, B.; Blackburn, C.; Sawyer, D.; Phillips, S.; Bridges, R.

    2010-01-01

We describe a method to estimate the scale errors in the horizontal angle encoder of a laser tracker. The method does not require expensive instrumentation such as a rotary stage, or even a calibrated artifact. An uncalibrated but stable length is realized between two targets mounted on stands that are at tracker height. The tracker measures the distance between these two targets from different azimuthal positions (say, in intervals of 20° over 360°). Each target is measured in both front face and back face. Low-order harmonic scale errors can be estimated from these data and may then be used to correct the encoder's error map, improving the tracker's angle measurement accuracy. We demonstrate this for the second-order harmonic. It is important to compensate for even-order harmonics, as their influence cannot be removed by averaging front face and back face measurements, whereas odd orders can be removed by averaging. We tested six trackers from three different manufacturers; two of those trackers were newer models introduced at the time of writing. For older trackers from two manufacturers, the length errors in a 7.75 m horizontal length placed 7 m away from the tracker were of the order of ±65 μm before correcting the error map. They reduced to less than ±25 μm after correcting the error map for second-order scale errors. Newer trackers from the same manufacturers did not show this error, and an older tracker from a third manufacturer also did not show this error. PMID:27134789
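Estimating a low-order harmonic from errors sampled at equally spaced azimuths is a discrete Fourier fit. A sketch with a synthetic pure second-order harmonic (the 65 μm amplitude and 0.3 rad phase are made up, merely echoing the scale of the reported errors):

```python
import math

# Synthetic length errors (um) at azimuths 0..340 deg in 20-deg steps,
# generated by a pure second-order encoder harmonic: 65*cos(2*theta + 0.3).
az = [math.radians(a) for a in range(0, 360, 20)]
errors = [65 * math.cos(2 * t + 0.3) for t in az]

# Discrete Fourier estimate of the order-2 cosine/sine coefficients.
n = len(az)
a2 = 2 / n * sum(e * math.cos(2 * t) for e, t in zip(errors, az))
b2 = 2 / n * sum(e * math.sin(2 * t) for e, t in zip(errors, az))
amp = math.hypot(a2, b2)      # recovered amplitude of the 2nd harmonic
phase = math.atan2(-b2, a2)   # recovered phase

# Two-face note: back-face sightings differ by 180 deg in azimuth, and
# cos(2*(t + pi)) == cos(2*t), so averaging the two faces leaves even-order
# harmonics untouched (while odd orders cancel).
```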

  20. Reliability and measurement error of active knee extension range of motion in a modified slump test position: a pilot study.

    PubMed

    Tucker, Neil; Reid, Duncan; McNair, Peter

    2007-01-01

The slump test is a tool to assess the mechanosensitivity of the neuromeningeal structures within the vertebral canal. While some studies have investigated the reliability of aspects of this test within the same day, few have assessed reliability across days. Therefore, the purpose of this pilot study was to investigate reliability when measuring active knee extension range of motion (AROM) in a modified slump test position within trials on a single day and across days. Ten male and ten female asymptomatic subjects, ages 20-49 (mean age 30.1, SD 6.4), participated in the study. Knee extension AROM in a modified slump position, with the cervical spine first in a flexed position and then in an extended position, was measured via three trials on two separate days. Across three trials, knee extension AROM increased significantly within days, with a mean magnitude of 2 degrees for both cervical spine positions (P<0.05). The findings showed no statistically significant difference in knee extension AROM measurements across days (P>0.05). The intraclass correlation coefficients for the mean of the three trials across days were 0.96 (lower limit 95% CI: 0.90) with the cervical spine flexed and 0.93 (lower limit 95% CI: 0.83) with cervical extension. Measurement error was calculated by way of the typical error and 95% limits of agreement, and visually represented in Bland and Altman plots. The typical error for the cervical flexed and extended positions, averaged across trials, was 2.6 degrees and 3.3 degrees, respectively. The limits of agreement were narrow, and the Bland and Altman plots also showed minimal bias in the joint angles across days, with a random distribution of errors across the range of measured angles. This study demonstrated that knee extension AROM can be reliably measured across days in subjects without pathology and that the measurement error is acceptable. Implications of variability over multiple trials are discussed. The modified set-up for the test, using the Kincom dynamometer and an elevated thigh position, may be useful to clinical researchers in determining the mechanosensitivity of the nervous system.
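Typical error, as used here, is commonly computed as the standard deviation of the day-to-day differences divided by √2. A sketch with hypothetical AROM values (not the study's data):

```python
import math
from statistics import stdev

def typical_error(day1, day2):
    """Within-subject typical error: SD of test-retest differences / sqrt(2)."""
    diffs = [b - a for a, b in zip(day1, day2)]
    return stdev(diffs) / math.sqrt(2)

# Hypothetical knee-extension AROM (degrees) for five subjects on two days.
day1 = [150.0, 142.0, 156.0, 148.0, 160.0]
day2 = [153.0, 141.0, 158.0, 150.0, 159.0]
te = typical_error(day1, day2)
```

The division by √2 reflects that each difference contains measurement error from both days.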

  2. Twenty Golden Opportunities To Enhance Student Learning: Use Them or Lose Them.

    ERIC Educational Resources Information Center

    Sponder, Barry

    In an average classroom period, a teacher has twenty or more opportunities to interact with students and thereby influence learning outcomes. As such, teachers should use these opportunities to reinforce instruction or give positive corrective feedback. Typical methods used in schools emphasize error correction at the expense of calling attention…

  3. A study of respiration-correlated cone-beam CT scans to correct target positioning errors in radiotherapy of thoracic cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santoro, J. P.; McNamara, J.; Yorke, E.

    2012-10-15

Purpose: There is increasingly widespread usage of cone-beam CT (CBCT) for guiding radiation treatment in advanced-stage lung tumors, but difficulties associated with daily CBCT in conventionally fractionated treatments include imaging dose to the patient, increased workload and longer treatment times. Respiration-correlated cone-beam CT (RC-CBCT) can improve localization accuracy in mobile lung tumors, but further increases the time and workload for conventionally fractionated treatments. This study investigates whether RC-CBCT-guided correction of systematic tumor deviations in standard fractionated lung tumor radiation treatments is more effective than 2D image-based correction of skeletal deviations alone. A second study goal compares respiration-correlated vs respiration-averaged images for determining tumor deviations. Methods: Eleven stage II-IV nonsmall cell lung cancer patients are enrolled in an IRB-approved prospective off-line protocol using RC-CBCT guidance to correct for systematic errors in GTV position. Patients receive a respiration-correlated planning CT (RCCT) at simulation, daily kilovoltage RC-CBCT scans during the first week of treatment and weekly scans thereafter. Four types of correction methods are compared: (1) systematic error in gross tumor volume (GTV) position, (2) systematic error in skeletal anatomy, (3) daily skeletal corrections, and (4) weekly skeletal corrections. The comparison is in terms of weighted average of the residual GTV deviations measured from the RC-CBCT scans and representing the estimated residual deviation over the treatment course. In the second study goal, GTV deviations computed from matching RCCT and RC-CBCT are compared to deviations computed from matching respiration-averaged images consisting of a CBCT reconstructed using all projections and an average-intensity-projection CT computed from the RCCT.
Results: Of the eleven patients in the GTV-based systematic correction protocol, two required no correction, seven required a single correction, one required two corrections, and one required three corrections. Mean residual GTV deviation (3D distance) following GTV-based systematic correction (mean ± 1 standard deviation 4.8 ± 1.5 mm) is significantly lower than for systematic skeletal-based (6.5 ± 2.9 mm, p = 0.015) and weekly skeletal-based correction (7.2 ± 3.0 mm, p = 0.001), but is not significantly lower than daily skeletal-based correction (5.4 ± 2.6 mm, p = 0.34). In two cases, first-day CBCT images reveal tumor changes (one showing tumor growth, the other showing large tumor displacement) that are not readily observed in radiographs. Differences in computed GTV deviations between respiration-correlated and respiration-averaged images are 0.2 ± 1.8 mm in the superior-inferior direction and are of similar magnitude in the other directions. Conclusions: An off-line protocol to correct GTV-based systematic error in locally advanced lung tumor cases can be effective at reducing tumor deviations, although the findings need confirmation with larger patient statistics. In some cases, a single cone-beam CT can be useful for assessing tumor changes early in treatment, if more than a few days elapse between simulation and the start of treatment. Tumor deviations measured with respiration-averaged CT and CBCT images are consistent with those measured with respiration-correlated images; the respiration-averaged method is more easily implemented in the clinic.
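An off-line systematic-correction rule of this general kind can be sketched as follows. The 3 mm threshold, the daily shifts, and the simple mean-shift estimate are hypothetical, not the protocol's actual decision rule:

```python
# Estimate the systematic GTV position error as the mean of the first-week
# daily shifts (mm, in three axes) and propose a couch correction only when
# the mean 3-D shift exceeds an action threshold.
def systematic_correction(daily_shifts_mm, threshold_mm=3.0):
    n = len(daily_shifts_mm)
    mean = tuple(sum(s[i] for s in daily_shifts_mm) / n for i in range(3))
    size = sum(c * c for c in mean) ** 0.5
    return mean if size > threshold_mm else None

# Hypothetical first-week daily GTV shifts for one patient.
shifts = [(4.0, 1.0, -2.0), (5.0, 0.0, -1.0), (3.0, 2.0, -3.0)]
corr = systematic_correction(shifts)
```

Averaging across days is what separates the systematic (reproducible) component from random day-to-day variation.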

  4. SU-E-J-94: Positioning Errors Resulting From Using Bony Anatomy Alignment for Treating SBRT Lung Tumor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frame, C; Ding, G

Purpose: To quantify patient setup errors based on bony anatomy registration rather than 3D tumor alignment for SBRT lung treatments. Method: A retrospective study was performed for patients treated with lung SBRT and imaged with kV cone beam computed tomography (kV-CBCT) image guidance. Daily CBCT images were registered to treatment planning CTs based on bony anatomy alignment, and inter-fraction tumor movement was then evaluated by comparing the shift in the tumor center in the medial-lateral, anterior-posterior, and superior-inferior directions. The PTV V100% was evaluated for each patient based on the average daily tumor displacement to assess the impact of the positioning error on the target coverage when the registrations were based on bony anatomy. Of the 35 patients studied, 15 were free-breathing treatments, 10 used abdominal compression with a stereotactic body frame, and the remaining 10 were performed with BodyFIX vacuum bags. Results: For free-breathing treatments, the range of tumor displacement error is 1–6 mm in the medial-lateral, 1–13 mm in the anterior-posterior, and 1–7 mm in the superior-inferior directions. These positioning errors lead to 6–22% underdose coverage for PTV V100%. Patients treated with abdominal compression immobilization showed positional errors of 0–4 mm medial-laterally, 0–3 mm anterior-posteriorly, and 0–2 mm inferior-superiorly, with PTV V100% underdose ranging between 6–17%. For patients immobilized with vacuum bags, the positional errors were 0–1 mm medial-laterally, 0–1 mm anterior-posteriorly, and 0–2 mm inferior-superiorly, with PTV V100% underdose ranging between 5–6% only. Conclusion: It is necessary to align the tumor target by using 3D image guidance to ensure adequate tumor coverage before performing SBRT lung treatments. The BodyFIX vacuum bag immobilization method had the fewest positioning errors among the three methods studied when bony anatomy was used for registration.
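The per-fraction evaluation, comparing the 3-D tumor displacement against a coverage margin, can be sketched as follows. The 5 mm margin and the shift values are hypothetical:

```python
import math

# Hypothetical daily tumor-centre shifts (mm) after bony-anatomy registration:
# (medial-lateral, anterior-posterior, superior-inferior).
shifts = [(2.0, 6.0, 3.0), (1.0, 4.0, 2.0), (3.0, 9.0, 5.0)]

def exceeds_margin(shift, margin_mm=5.0):
    """Flag fractions whose 3-D tumor displacement exceeds the PTV margin."""
    return math.sqrt(sum(c * c for c in shift)) > margin_mm

flagged = [s for s in shifts if exceeds_margin(s)]
```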

  5. Illusory conjunctions reflect the time course of the attentional blink.

    PubMed

    Botella, Juan; Privado, Jesús; de Liaño, Beatriz Gil-Gómez; Suero, Manuel

    2011-07-01

    Illusory conjunctions in the time domain are binding errors for features from stimuli presented sequentially but in the same spatial position. A similar experimental paradigm is employed for the attentional blink (AB), an impairment of performance for the second of two targets when it is presented 200-500 msec after the first target. The analysis of errors along the time course of the AB allows the testing of models of illusory conjunctions. In an experiment, observers identified one (control condition) or two (experimental condition) letters in a specified color, so that illusory conjunctions in each response could be linked to specific positions in the series. Two items in the target colors (red and white, embedded in distractors of different colors) were employed in four conditions defined according to whether both targets were in the same or different colors. Besides the U-shaped function for hits, the errors were analyzed by calculating several response parameters reflecting characteristics such as the average position of the responses or the attentional suppression during the blink. The several error parameters cluster in two time courses, as would be expected from prevailing models of the AB. Furthermore, the results match the predictions from Botella, Barriopedro, and Suero's (Journal of Experimental Psychology: Human Perception and Performance, 27, 1452-1467, 2001) model for illusory conjunctions.

  6. Optical Coherence Tomography Based Estimates of Crystalline Lens Volume, Equatorial Diameter, and Plane Position.

    PubMed

    Martinez-Enriquez, Eduardo; Sun, Mengchan; Velasco-Ocana, Miriam; Birkenfeld, Judith; Pérez-Merino, Pablo; Marcos, Susana

    2016-07-01

Measurement of crystalline lens geometry in vivo is critical to optimize performance of state-of-the-art cataract surgery. We used custom-developed quantitative anterior segment optical coherence tomography (OCT) and developed dedicated algorithms to estimate lens volume (VOL), equatorial diameter (DIA), and equatorial plane position (EPP). The method was validated ex vivo in 27 human donor (19-71 years of age) lenses, which were imaged in three-dimensions by OCT. In vivo conditions were simulated assuming that only the information within a given pupil size (PS) was available. A parametric model was used to estimate the whole lens shape from PS-limited data. The accuracy of the estimated lens VOL, DIA, and EPP was evaluated by comparing estimates from the whole lens data and PS-limited data ex vivo. The method was demonstrated in vivo using 2 young eyes during accommodation and 2 cataract eyes. Crystalline lens VOL was estimated within 96% accuracy (average estimation error across lenses ± standard deviation: 9.30 ± 7.49 mm³). Average estimation errors in EPP were below 40 ± 32 μm, and below 0.26 ± 0.22 mm in DIA. Changes in lens VOL with accommodation were not statistically significant (2-way ANOVA, P = 0.35). In young eyes, DIA decreased and EPP increased statistically significantly with accommodation (P < 0.001) by 0.14 mm and 0.13 mm, respectively, on average across subjects. In cataract eyes, VOL = 205.5 mm³, DIA = 9.57 mm, and EPP = 2.15 mm on average. Quantitative OCT with dedicated image processing algorithms allows estimation of human crystalline lens volume, diameter, and equatorial lens position, as validated from ex vivo measurements, where entire lens images are available.

  7. Efficient Measurement of Quantum Gate Error by Interleaved Randomized Benchmarking

    NASA Astrophysics Data System (ADS)

    Magesan, Easwar; Gambetta, Jay M.; Johnson, B. R.; Ryan, Colm A.; Chow, Jerry M.; Merkel, Seth T.; da Silva, Marcus P.; Keefe, George A.; Rothwell, Mary B.; Ohki, Thomas A.; Ketchen, Mark B.; Steffen, M.

    2012-08-01

    We describe a scalable experimental protocol for estimating the average error of individual quantum computational gates. This protocol consists of interleaving random Clifford gates between the gate of interest and provides an estimate as well as theoretical bounds for the average error of the gate under test, so long as the average noise variation over all Clifford gates is small. This technique takes into account both state preparation and measurement errors and is scalable in the number of qubits. We apply this protocol to a superconducting qubit system and find a bounded average error of 0.003 [0,0.016] for the single-qubit gates Xπ/2 and Yπ/2. These bounded values provide better estimates of the average error than those extracted via quantum process tomography.
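The gate error in interleaved randomized benchmarking is obtained from the depolarizing parameters fitted to the reference and interleaved decay curves. A sketch using the standard single-qubit (d = 2) estimate; the fitted parameters below are hypothetical, chosen only to land near the 0.003 scale reported in the abstract:

```python
# Interleaved randomized benchmarking: estimate the error of the interleaved
# gate from the depolarizing parameters of the interleaved (p_gate) and
# reference (p_ref) sequence-fidelity decays, with d = 2^(number of qubits).
def irb_gate_error(p_gate, p_ref, d=2):
    return (d - 1) * (1 - p_gate / p_ref) / d

# Hypothetical decay parameters fitted from the two decay curves.
r = irb_gate_error(p_gate=0.9881, p_ref=0.9941)
```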

  8. Evaluation of genomic high-throughput sequencing data generated on Illumina HiSeq and Genome Analyzer systems

    PubMed Central

    2011-01-01

    Background The generation and analysis of high-throughput sequencing data are becoming a major component of many studies in molecular biology and medical research. Illumina's Genome Analyzer (GA) and HiSeq instruments are currently the most widely used sequencing devices. Here, we comprehensively evaluate properties of genomic HiSeq and GAIIx data derived from two plant genomes and one virus, with read lengths of 95 to 150 bases. Results We provide quantifications and evidence for GC bias, error rates, error sequence context, effects of quality filtering, and the reliability of quality values. By combining different filtering criteria we reduced error rates 7-fold at the expense of discarding 12.5% of alignable bases. While overall error rates are low in HiSeq data we observed regions of accumulated wrong base calls. Only 3% of all error positions accounted for 24.7% of all substitution errors. Analyzing the forward and reverse strands separately revealed error rates of up to 18.7%. Insertions and deletions occurred at very low rates on average but increased to up to 2% in homopolymers. A positive correlation between read coverage and GC content was found depending on the GC content range. Conclusions The errors and biases we report have implications for the use and the interpretation of Illumina sequencing data. GAIIx and HiSeq data sets show slightly different error profiles. Quality filtering is essential to minimize downstream analysis artifacts. Supporting previous recommendations, the strand-specificity provides a criterion to distinguish sequencing errors from low abundance polymorphisms. PMID:22067484

  9. A Leapfrog Navigation System

    NASA Astrophysics Data System (ADS)

    Opshaug, Guttorm Ringstad

There are times and places where conventional navigation systems, such as the Global Positioning System (GPS), are unavailable due to anything from temporary signal occultations to lack of navigation system infrastructure altogether. The goal of the Leapfrog Navigation System (LNS) is to provide localized positioning services for such cases. The concept behind leapfrog navigation is to advance a group of navigation units teamwise into an area of interest. In a practical 2-D case, leapfrogging assumes known initial positions of at least two currently stationary navigation units. Two or more mobile units can then start to advance into the area of interest. The positions of the mobiles are constantly being calculated based on cross-range distance measurements to the stationary units, as well as cross-ranges among the mobiles themselves. At some point the mobile units stop, and the stationary units are released to move. This second team of units (now mobile) can then overtake the first team (now stationary) and travel even further towards the common goal of the group. Since there always is one stationary team, the position of any unit can be referenced back to the initial positions. Thus, LNS provides absolute positioning. I developed the navigation algorithms needed to solve leapfrog positions based on cross-range measurements. I used statistical tools to predict how position errors would grow as a function of navigation unit geometry, cross-range measurement accuracy, and previous position errors. Using this knowledge I predicted that a 4-unit Leapfrog Navigation System using 100 m baselines and 200 m leap distances could travel almost 15 km before accumulating absolute position errors of 10 m (1σ). Finally, I built a prototype leapfrog navigation system using 4 GPS transceiver ranging units. I placed the 4 units at the vertices of a 10 m × 10 m square, and leapfrogged the group 20 meters forward, and then back again (40 m total travel).
Average horizontal RMS position errors never exceeded 16 cm during these field tests.
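The core positioning step the abstract describes, solving a unit's 2-D position from cross-range measurements to units at known positions, can be sketched as an iterative least-squares fit. This is a generic Gauss-Newton trilateration sketch; the anchor coordinates and solver details are illustrative, not taken from the thesis:

```python
import numpy as np

def solve_position(anchors, ranges, x0, iters=10):
    """Gauss-Newton least squares: find the 2-D point whose distances
    to the known anchors best match the measured cross-ranges."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        diffs = x - anchors                    # (k, 2) vectors from anchors
        dists = np.linalg.norm(diffs, axis=1)  # predicted ranges
        J = diffs / dists[:, None]             # Jacobian of range w.r.t. x
        r = ranges - dists                     # range residuals
        dx, *_ = np.linalg.lstsq(J, r, rcond=None)
        x = x + dx
    return x

# Three stationary units at known positions (illustrative coordinates);
# the initial guess resolves the mirror ambiguity of a two-anchor fix.
anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
true_pos = np.array([60.0, 80.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1)
est = solve_position(anchors, ranges, x0=[50.0, 50.0])
```

With noisy ranges, the same solver returns the least-squares position, and the sensitivity of the solution to the anchor geometry is what drives the error growth analysis described in the abstract.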

  10. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models

    PubMed Central

    Dhou, Salam; Hurwitz, Martina; Mishra, Pankaj; Cai, Weixing; Rottmann, Joerg; Li, Ruijiang; Williams, Christopher; Wagar, Matthew; Berbeco, Ross; Ionascu, Dan; Lewis, John H.

    2015-01-01

3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built from 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we develop and perform an initial evaluation of techniques to build patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and use these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of the 3D fluoroscopic images by comparing them to ground-truth digital and physical phantom images. The performance of 4DCBCT- and 4DCT-based motion models is compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability of 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in an average tumor localization error and 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms, and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery. PMID:25905722

  11. SU-E-T-192: FMEA Severity Scores - Do We Really Know?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tonigan, J; Johnson, J; Kry, S

    2014-06-01

Purpose: Failure modes and effects analysis (FMEA) is a subjective risk mitigation technique that has not been applied to physics-specific quality management practices. There is a need for quantitative FMEA data, as called for in the literature. This work focuses specifically on quantifying FMEA severity scores for physics components of IMRT delivery and comparing them to subjective scores. Methods: Eleven physical failure modes (FMs) for head and neck IMRT dose calculation and delivery are examined near commonly accepted tolerance criteria levels. Phantom treatment planning studies and dosimetry measurements (requiring decommissioning in several cases) are performed to determine the magnitude of dose delivery errors for the FMs (i.e., the severity of the FM). The resultant quantitative severity scores are compared to FMEA scores obtained through an international survey and focus group studies. Results: Physical measurements for six FMs have resulted in significant PTV dose errors of up to 4.3%, as well as close to 1 mm of significant distance-to-agreement error between PTV and OAR. Of the 129 survey responses, the vast majority of responders used Varian machines with Pinnacle and Eclipse planning systems. The average experience was 17 years, yet familiarity with FMEA was less than expected. The survey shows that the perceived magnitude of dose delivery errors varies widely, with in some cases a 50% difference in the expected dose delivery error among respondents. Substantial variance is also seen for all FMs in the occurrence, detectability, and severity scores assigned, with average variance values of 5.5, 4.6, and 2.2, respectively. For the MLC positional FM (2 mm), the survey shows an average expected dose error of 7.6% (range 0–50%), compared to the 2% error seen in measurement. Analysis of the survey rankings, treatment planning studies, and a quantitative value comparison will be presented. 
Conclusion: The resultant quantitative severity scores will expand the utility of FMEA for radiotherapy and verify the accuracy of FMEA results compared to highly variable subjective scores.
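For context, conventional FMEA combines occurrence (O), severity (S) and detectability (D) scores into a risk priority number, RPN = O × S × D. The sketch below ranks a few entirely hypothetical failure modes this way; the names and scores are invented for illustration and are not the survey's data:

```python
# Hypothetical failure modes scored on conventional 1-10 FMEA scales:
# occurrence (O), severity (S), detectability (D).
failure_modes = {
    "MLC position off by 2 mm": (4, 6, 5),
    "Output calibration drift":  (3, 7, 4),
    "Wrong CT density table":    (2, 9, 6),
}

def risk_priority_number(o, s, d):
    """Conventional FMEA risk priority number: RPN = O * S * D."""
    return o * s * d

# Rank failure modes by RPN, highest risk first.
ranked = sorted(failure_modes.items(),
                key=lambda kv: risk_priority_number(*kv[1]),
                reverse=True)
```

The point of the abstract is precisely that the S input to this product is usually a guess; replacing it with a measured dose error grounds the ranking.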

  12. An improved ELF/VLF method for globally geolocating sprite-producing lightning

    NASA Astrophysics Data System (ADS)

    Price, Colin; Asfur, Mustafa; Lyons, Walter; Nelson, Thomas

    2002-02-01

    The majority of sprites, the most common of transient luminous events (TLEs) in the upper atmosphere, are associated with a sub-class of positive cloud-to-ground lightning flashes (+CGs) whose characteristics are slowly being revealed. These +CGs produce extremely low frequency (ELF) and very low frequency (VLF) radiation detectable at great distances from the parent thunderstorm. During the STEPS field program in the United States, ELF/VLF transients associated with sprites were detected in the Negev Desert, Israel, some 11,000 km away. Within a two-hour period on 4 July 2000, all of the sprites detected optically in the United States produced detectable ELF/VLF transients in Israel. All of these transients were of positive polarity (representing positive lightning). Using the VLF data to obtain the azimuth of the transients, and the ELF data to calculate the distance between the source and receiver, we remotely determined the position of the sprite-forming lightning with an average locational error of 184 km (error of 1.6%).
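The geolocation step described, an azimuth from the VLF data plus a source-receiver distance from the ELF data, amounts to the standard great-circle "destination point" computation. A minimal sketch on a spherical Earth; the function and constants are illustrative and not the authors' propagation model:

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius (spherical approximation)

def geolocate(lat0_deg, lon0_deg, azimuth_deg, distance_km):
    """Great-circle destination point: start at the receiver, head along
    the measured azimuth for the ELF-derived distance."""
    lat0 = math.radians(lat0_deg)
    lon0 = math.radians(lon0_deg)
    brg = math.radians(azimuth_deg)
    d = distance_km / EARTH_RADIUS_KM   # angular distance
    lat = math.asin(math.sin(lat0) * math.cos(d) +
                    math.cos(lat0) * math.sin(d) * math.cos(brg))
    lon = lon0 + math.atan2(math.sin(brg) * math.sin(d) * math.cos(lat0),
                            math.cos(d) - math.sin(lat0) * math.sin(lat))
    return math.degrees(lat), (math.degrees(lon) + 540.0) % 360.0 - 180.0
```

At the quoted 11,000 km range, the 184 km average location error corresponds to the stated relative error of roughly 1.6% (184/11,000 ≈ 0.017).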

  13. Pennation angle dependency in skeletal muscle tissue doppler strain in dynamic contractions.

    PubMed

    Lindberg, Frida; Öhberg, Fredrik; Granåsen, Gabriel; Brodin, Lars-Åke; Grönlund, Christer

    2011-07-01

Tissue velocity imaging (TVI) is a Doppler-based ultrasound technique that can be used to study regional deformation in skeletal muscle tissue. The aim of this study was to develop a biomechanical model describing the dependency of TVI strain on the pennation angle, and to demonstrate its impact as a strain measurement error using dynamic elbow contractions of the medial and lateral parts of the biceps brachii at two different loadings: 5% and 25% of maximum voluntary contraction (MVC). The estimated pennation angles averaged about 4° in the extended position and increased to a maximum of 13° in the flexed elbow position. The corresponding relative angular error ranged from about 7% up to about 40%. To apply TVI accurately to skeletal muscles, the error due to angle changes should be compensated for; as a suggestion, this could be done according to the presented model. Copyright © 2011 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  14. Unsupervised Indoor Localization Based on Smartphone Sensors, iBeacon and Wi-Fi.

    PubMed

    Chen, Jing; Zhang, Yi; Xue, Wei

    2018-04-28

In this paper, we propose UILoc, an unsupervised indoor localization scheme that uses a combination of smartphone sensors, iBeacons and Wi-Fi fingerprints for reliable and accurate indoor localization with zero labor cost. Firstly, in contrast to conventional fingerprint-based methods, the UILoc system builds its fingerprint database automatically, without any site survey, and the database is applied in the fingerprint localization algorithm. Secondly, since the initial position is vital to the system, UILoc provides a basic location estimate through the pedestrian dead reckoning (PDR) method. To provide an accurate initial localization, this paper proposes an initial localization module: a weighted fusion algorithm that combines a k-nearest neighbors (KNN) algorithm with a least squares algorithm. In UILoc, we have also designed a reliable model to reduce the landmark correction error. Experimental results show that UILoc can provide accurate positioning: the average localization error is about 1.1 m in the steady state, and the maximum error is 2.77 m.
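The KNN part of the initial localization module can be sketched as a distance-weighted fingerprint match. This is a generic sketch; UILoc's actual weighting and its fusion with the least squares step are not specified here, and the example database is invented:

```python
import numpy as np

def knn_locate(fingerprints, positions, rss, k=3):
    """Distance-weighted KNN over an RSS fingerprint database.

    fingerprints: (n, m) stored RSS vectors (one row per reference point)
    positions:    (n, 2) coordinates where each fingerprint was recorded
    rss:          (m,)   RSS vector observed at the unknown location
    """
    d = np.linalg.norm(fingerprints - rss, axis=1)   # signal-space distance
    idx = np.argsort(d)[:k]                          # k closest fingerprints
    w = 1.0 / (d[idx] + 1e-9)                        # nearer -> heavier weight
    return (w[:, None] * positions[idx]).sum(axis=0) / w.sum()

# Tiny illustrative database: two access points, three reference points.
fp = np.array([[-40.0, -70.0], [-70.0, -40.0], [-55.0, -55.0]])
pos = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 0.0]])
est = knn_locate(fp, pos, np.array([-42.0, -68.0]), k=2)
```

The estimate lands between the two nearest reference points, pulled toward the one whose stored RSS vector best matches the observation.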

  15. The Weighted-Average Lagged Ensemble.

    PubMed

    DelSole, T; Trenary, L; Tippett, M K

    2017-11-01

A lagged ensemble is an ensemble of forecasts from the same model initialized at different times but verifying at the same time. The skill of a lagged ensemble mean can be improved by assigning weights to different forecasts in such a way as to maximize skill. If the forecasts are bias corrected, then an unbiased weighted lagged ensemble requires the weights to sum to one. Such a scheme is called a weighted-average lagged ensemble. In the limit of uncorrelated errors, the optimal weights are positive and decay monotonically with lead time, so that the least skillful forecasts have the least weight. In more realistic applications, the optimal weights do not always behave this way. This paper presents a series of analytic examples designed to illuminate conditions under which the weights of an optimal weighted-average lagged ensemble become negative or depend nonmonotonically on lead time. It is shown that negative weights are most likely to occur when the errors grow rapidly and are highly correlated across lead time. The weights are most likely to behave nonmonotonically when the mean square error is approximately constant over the range of forecasts included in the lagged ensemble. An extreme example of the latter behavior is presented in which the optimal weights vanish everywhere except at the shortest and longest lead times.
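Formally, minimizing the mean square error wᵀCw of the weighted ensemble subject to the unbiasedness constraint Σᵢ wᵢ = 1, where C is the forecast-error covariance across lead times, gives the standard constrained least-squares solution w = C⁻¹1 / (1ᵀC⁻¹1). A sketch with illustrative covariance matrices reproduces both behaviors described above:

```python
import numpy as np

def lagged_ensemble_weights(C):
    """Minimum-MSE weights summing to one: w = C^{-1}1 / (1^T C^{-1} 1),
    where C[i, j] is the forecast-error covariance between lead times."""
    ones = np.ones(C.shape[0])
    u = np.linalg.solve(C, ones)
    return u / u.sum()

# Uncorrelated errors growing with lead time: positive weights that
# decay monotonically, as in the idealized limit described above.
w_uncorr = lagged_ensemble_weights(np.diag([1.0, 2.0, 4.0]))

# Rapidly growing, highly correlated errors: the long-lead weight
# turns negative.  (Covariance entries are illustrative.)
w_corr = lagged_ensemble_weights(np.array([[1.0, 1.9],
                                           [1.9, 4.0]]))
```

In the uncorrelated case the weights are simply proportional to the inverse error variances; the correlated case shows how a strongly correlated, fast-growing error can make it optimal to subtract the longer-lead forecast.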

  16. Effectiveness of Specimen Collection Technology in the Reduction of Collection Turnaround Time and Mislabeled Specimens in Emergency, Medical-Surgical, Critical Care, and Maternal Child Health Departments.

    PubMed

    Saathoff, April M; MacDonald, Ryan; Krenzischek, Erundina

    2018-03-01

    The objective of this study was to evaluate the impact of specimen collection technology implementation featuring computerized provider order entry, positive patient identification, bedside specimen label printing, and barcode scanning on the reduction of mislabeled specimens and collection turnaround times in the emergency, medical-surgical, critical care, and maternal child health departments at a community teaching hospital. A quantitative analysis of a nonrandomized, pre-post intervention study design evaluated the statistical significance of reduction of mislabeled specimen percentages and collection turnaround times affected by the implementation of specimen collection technology. Mislabeled specimen percentages in all areas decreased from an average of 0.020% preimplementation to an average of 0.003% postimplementation, with a P < .001. Collection turnaround times longer than 60 minutes decreased after the implementation of specimen collection technology by an average of 27%, with a P < .001. Specimen collection and identification errors are a significant problem in healthcare, contributing to incorrect diagnoses, delayed care, lack of essential treatments, and patient injury or death. Collection errors can also contribute to an increased length of stay, increased healthcare costs, and decreased patient satisfaction. Specimen collection technology has structures in place to prevent collection errors and improve the overall efficiency of the specimen collection process.

  17. An Indoor Continuous Positioning Algorithm on the Move by Fusing Sensors and Wi-Fi on Smartphones

    PubMed Central

    Li, Huaiyu; Chen, Xiuwan; Jing, Guifei; Wang, Yuan; Cao, Yanfeng; Li, Fei; Zhang, Xinlong; Xiao, Han

    2015-01-01

Wi-Fi indoor positioning algorithms experience large positioning errors and low stability when continuously positioning terminals that are on the move. This paper proposes a novel indoor continuous positioning algorithm for terminals on the move that fuses sensors and Wi-Fi on smartphones. The main innovative points include an improved Wi-Fi positioning algorithm and a novel positioning fusion algorithm named the Trust Chain Positioning Fusion (TCPF) algorithm. The improved Wi-Fi positioning algorithm was designed based on the properties of Wi-Fi signals on the move, which were found in a novel “quasi-dynamic” Wi-Fi signal experiment. The TCPF algorithm is proposed to realize the “process-level” fusion of Wi-Fi and Pedestrian Dead Reckoning (PDR) positioning, comprising three parts: trusted-point determination, a trust state, and the positioning fusion algorithm. An experiment was carried out for verification in a typical indoor environment; the average positioning error on the move is 1.36 m, a decrease of 28.8% compared to an existing algorithm. The results show that the proposed algorithm can effectively reduce the influence of unstable Wi-Fi signals and improve the accuracy and stability of indoor continuous positioning on the move. PMID:26690447

  18. Passive quantum error correction of linear optics networks through error averaging

    NASA Astrophysics Data System (ADS)

    Marshman, Ryan J.; Lund, Austin P.; Rohde, Peter P.; Ralph, Timothy C.

    2018-02-01

We propose and investigate a method of error detection and noise correction for bosonic linear networks using unitary averaging. The proposed error averaging does not rely on ancillary photons or control and feedforward correction circuits, remaining entirely passive in its operation. We construct a general mathematical framework for this technique and then give a series of proof-of-principle examples, including numerical analysis. Two methods for the construction of averaging are then compared to determine the most effective manner of implementation and to probe the related error thresholds. Finally, we discuss some of the potential uses of this scheme.

  19. Observations of the star Cor Caroli at the Apple Valley Workshop 2016

    NASA Astrophysics Data System (ADS)

    Estrada, Reed; Boyd, Sidney; Estrada, Chris; Evans, Cody; Rhoades, Hannah; Rhoades, Mark; Rhoades, Trevor

    2017-06-01

Using a 22-inch Newtonian Alt/Az telescope and a Celestron Micro Guide eyepiece, students participating in a workshop observed the binary star Cor Caroli (STF 1692) and found a position angle of 231.0 degrees as well as an average separation of 18.7". This observation compared favorably with the published 2015 Washington Double Star position. This project was part of Mark Brewer's Apple Valley Double Star Workshop. The results were analyzed using bias and circular error probability calculations.

  20. Use of scan overlap redundancy to enhance multispectral aircraft scanner data

    NASA Technical Reports Server (NTRS)

    Lindenlaub, J. C.; Keat, J.

    1973-01-01

    Two criteria were suggested for optimizing the resolution error versus signal-to-noise-ratio tradeoff. The first criterion uses equal weighting coefficients and chooses n, the number of lines averaged, so as to make the average resolution error equal to the noise error. The second criterion adjusts both the number and relative sizes of the weighting coefficients so as to minimize the total error (resolution error plus noise error). The optimum set of coefficients depends upon the geometry of the resolution element, the number of redundant scan lines, the scan line increment, and the original signal-to-noise ratio of the channel. Programs were developed to find the optimum number and relative weights of the averaging coefficients. A working definition of signal-to-noise ratio was given and used to try line averaging on a typical set of data. Line averaging was evaluated only with respect to its effect on classification accuracy.
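The first criterion can be illustrated with a toy model in which the noise error falls as σ/√n (averaging n independent scan lines) while the resolution error grows roughly linearly with the number of averaged lines. Both the linear-growth model and the parameter values are assumptions for illustration, not the report's actual error expressions:

```python
import math

def choose_lines_to_average(sigma_noise, res_err_per_line, n_max=20):
    """First criterion from the abstract under a toy error model:
    noise error      = sigma_noise / sqrt(n)
    resolution error = res_err_per_line * (n - 1)   (assumed linear)
    Return the n that brings the two error terms closest to equality."""
    def gap(n):
        noise = sigma_noise / math.sqrt(n)
        resolution = res_err_per_line * (n - 1)
        return abs(noise - resolution)
    return min(range(1, n_max + 1), key=gap)

n_opt = choose_lines_to_average(sigma_noise=1.0, res_err_per_line=0.1)
```

The second criterion would instead minimize the sum of the two error terms while also freeing the relative weights, which is the optimization the report's programs perform.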

  1. A system to use electromagnetic tracking for the quality assurance of brachytherapy catheter digitization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Damato, Antonio L., E-mail: adamato@lroc.harvard.edu; Viswanathan, Akila N.; Don, Sarah M.

    2014-10-15

Purpose: To investigate the use of a system using electromagnetic tracking (EMT), post-processing and an error-detection algorithm for detecting errors and resolving uncertainties in high-dose-rate brachytherapy catheter digitization for treatment planning. Methods: EMT was used to localize 15 catheters inserted into a phantom using a stepwise acquisition technique. Five distinct acquisition experiments were performed. Noise associated with the acquisition was calculated. The dwell location configuration was extracted from the EMT data. A CT scan of the phantom was performed, and five distinct catheter digitization sessions were performed. No a priori registration of the CT scan coordinate system with the EMT coordinate system was performed. CT-based digitization was automatically extracted from the brachytherapy plan DICOM files (CT), and rigid registration was performed between EMT and CT dwell positions. EMT registration error was characterized in terms of the mean and maximum distance between corresponding EMT and CT dwell positions per catheter. An algorithm for error detection and identification was presented. Three types of errors were systematically simulated: swap of two catheter numbers, partial swap of catheter number identification for parts of the catheters (mix), and catheter-tip shift. Error-detection sensitivity (number of simulated scenarios correctly identified as containing an error/number of simulated scenarios containing an error) and specificity (number of scenarios correctly identified as not containing errors/number of correct scenarios) were calculated. Catheter identification sensitivity (number of catheters correctly identified as erroneous across all scenarios/number of erroneous catheters across all scenarios) and specificity (number of catheters correctly identified as correct across all scenarios/number of correct catheters across all scenarios) were calculated. The mean detected and identified shift was calculated. 
Results: The maximum noise ±1 standard deviation associated with the EMT acquisitions was 1.0 ± 0.1 mm, and the mean noise was 0.6 ± 0.1 mm. Registration of all the EMT and CT dwell positions was associated with a mean catheter error of 0.6 ± 0.2 mm, a maximum catheter error of 0.9 ± 0.4 mm, a mean dwell error of 1.0 ± 0.3 mm, and a maximum dwell error of 1.3 ± 0.7 mm. Error detection and catheter identification sensitivity and specificity of 100% were observed for swap, mix and shift (≥2.6 mm for error detection; ≥2.7 mm for catheter identification) errors. A mean detected shift of 1.8 ± 0.4 mm and a mean identified shift of 1.9 ± 0.4 mm were observed. Conclusions: Registration of the EMT dwell positions to the CT dwell positions was possible with a residual mean error per catheter of 0.6 ± 0.2 mm and a maximum error for any dwell of 1.3 ± 0.7 mm. These low residual registration errors show that quality assurance of the general characteristics of the catheters and of possible errors affecting one specific dwell position is possible. The sensitivity and specificity of the catheter digitization verification algorithm were 100% for swap and mix errors and for shifts ≥2.6 mm. On average, shifts ≥1.8 mm were detected, and shifts ≥1.9 mm were detected and identified.
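The rigid registration of EMT dwell positions to CT dwell positions can be sketched with the standard least-squares (Kabsch) algorithm, assuming known point correspondences. This is a generic sketch, not the authors' implementation:

```python
import numpy as np

def rigid_register(P, Q):
    """Least-squares rigid transform (Kabsch): find R, t minimizing
    sum ||R @ P[i] + t - Q[i]||^2.  Here P would hold EMT dwell
    positions and Q the CT-digitized dwell positions (rows = points)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection
    R = Vt.T @ np.diag([1.0] * (P.shape[1] - 1) + [d]) @ U.T
    t = cQ - R @ cP
    return R, t
```

The residual distances ||R pᵢ + t − qᵢ||, grouped per catheter, would then give mean and maximum registration errors of the kind reported above.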

  2. Uncertainty in biological monitoring: a framework for data collection and analysis to account for multiple sources of sampling bias

    USGS Publications Warehouse

    Ruiz-Gutierrez, Viviana; Hooten, Melvin B.; Campbell Grant, Evan H.

    2016-01-01

Biological monitoring programmes are increasingly relying upon large volumes of citizen-science data to improve the scope and spatial coverage of information, challenging the scientific community to develop design and model-based approaches to improve inference. Recent statistical models in ecology have been developed to accommodate false-negative errors, although current work points to false-positive errors as equally important sources of bias. This is of particular concern for the success of any monitoring programme given that rates as small as 3% could lead to the overestimation of the occurrence of rare events by as much as 50%, and even small false-positive rates can severely bias estimates of occurrence dynamics. We present an integrated, computationally efficient Bayesian hierarchical model to correct for false-positive and false-negative errors in detection/non-detection data. Our model combines independent, auxiliary data sources with field observations to improve the estimation of false-positive rates, when a subset of field observations cannot be validated a posteriori or assumed as perfect. We evaluated the performance of the model across a range of occurrence rates, false-positive and false-negative errors, and quantity of auxiliary data. The model performed well under all simulated scenarios, and we were able to identify critical auxiliary data characteristics which resulted in improved inference. We applied our false-positive model to a large-scale, citizen-science monitoring programme for anurans in the north-eastern United States, using auxiliary data from an experiment designed to estimate false-positive error rates. 
Not correcting for false-positive rates resulted in biased estimates of occupancy in 4 of the 10 anuran species we analysed, leading to an overestimation of the average number of occupied survey routes by as much as 70%. The framework we present for data collection and analysis is able to efficiently provide reliable inference for occurrence patterns using data from a citizen-science monitoring programme. However, our approach is applicable to data generated by any type of research and monitoring programme, independent of skill level or scale, when effort is placed on obtaining auxiliary information on false-positive rates.
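The cited sensitivity of occupancy estimates to small false-positive rates is easy to reproduce arithmetically: for a rare species, a naive single-survey estimate confounds true presences with false positives. The parameter values below are illustrative, chosen to echo the abstract's "3% false-positive rate, ~50% overestimate" figure:

```python
def apparent_occupancy(psi, p_det, p_fp):
    """Probability that a site yields at least one detection in a single
    survey when true occupancy is psi, detection probability is p_det,
    and the per-survey false-positive probability is p_fp."""
    return psi * (p_det + (1.0 - p_det) * p_fp) + (1.0 - psi) * p_fp

# Rare species (6% true occupancy), perfect detection, 3% false positives.
naive = apparent_occupancy(psi=0.06, p_det=1.0, p_fp=0.03)
overestimate = naive / 0.06 - 1.0   # relative bias of the naive estimate
```

Even with perfect detection, the 3% false-positive rate inflates a 6% true occupancy to an apparent 8.8%, a roughly 47% overestimate.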

  3. Comparison of Online 6 Degree-of-Freedom Image Registration of Varian TrueBeam Cone-Beam CT and BrainLab ExacTrac X-Ray for Intracranial Radiosurgery.

    PubMed

    Li, Jun; Shi, Wenyin; Andrews, David; Werner-Wasik, Maria; Lu, Bo; Yu, Yan; Dicker, Adam; Liu, Haisong

    2017-06-01

This study aimed to compare the online 6 degree-of-freedom image registrations of the TrueBeam cone-beam computed tomography and BrainLab ExacTrac X-ray imaging systems for intracranial radiosurgery. Phantom and patient studies were performed on a Varian TrueBeam STx linear accelerator (version 2.5), which is integrated with a BrainLab ExacTrac imaging system (version 6.1.1). The phantom study was based on a Rando head phantom and was designed to evaluate the isocenter location dependence of the image registrations. Ten isocenters at various locations representing clinical treatment sites were selected in the phantom. Cone-beam computed tomography and ExacTrac X-ray images were taken with the phantom located at each isocenter. The patient study included 34 patients. Cone-beam computed tomography and ExacTrac X-ray images were taken at each patient's treatment position. The 6 degree-of-freedom image registrations were performed on cone-beam computed tomography and ExacTrac, and the residual errors calculated from cone-beam computed tomography and ExacTrac were compared. In the phantom study, the average residual error differences (absolute values) between cone-beam computed tomography and ExacTrac image registrations were 0.17 ± 0.11 mm, 0.36 ± 0.20 mm, and 0.25 ± 0.11 mm in the vertical, longitudinal, and lateral directions, respectively. The average residual error differences in the rotation, roll, and pitch were 0.34° ± 0.08°, 0.13° ± 0.09°, and 0.12° ± 0.10°, respectively. In the patient study, the average residual error differences in the vertical, longitudinal, and lateral directions were 0.20 ± 0.16 mm, 0.30 ± 0.18 mm, and 0.21 ± 0.18 mm, respectively. The average residual error differences in the rotation, roll, and pitch were 0.40° ± 0.16°, 0.17° ± 0.13°, and 0.20° ± 0.14°, respectively. Overall, the average residual error differences were <0.4 mm in the translational directions and <0.5° in the rotational directions. 
ExacTrac X-ray image registration is comparable to TrueBeam cone-beam computed tomography image registration in intracranial treatments.

  4. A Foot-Mounted Inertial Measurement Unit (IMU) Positioning Algorithm Based on Magnetic Constraint

    PubMed Central

    Zou, Jiaheng

    2018-01-01

With the development of related applications, indoor positioning techniques have become more and more widely used. Indoor positioning techniques based on Wi-Fi, Bluetooth low energy (BLE) and geomagnetism often rely on the physical locations of fingerprint information. The focus and difficulty of establishing the fingerprint database lie in obtaining a relatively accurate physical location with as little given information as possible. This paper presents a foot-mounted inertial measurement unit (IMU) positioning algorithm under a loop closure constraint based on magnetic information. It can provide relatively reliable position information without maps or geomagnetic information, and provides relatively accurate coordinates for the collection of a fingerprint database. In the experiment, the features extracted by the multi-level Fourier transform method proposed in this paper are validated, and the validity of loop closure matching is tested with a RANSAC-based method. Moreover, the loop closure detection results show that the cumulative error of the trajectory processed by the graph optimization algorithm is significantly suppressed, showing good accuracy. The average error of the trajectory under the loop closure constraint is kept below 2.15 m. PMID:29494542

  5. A Foot-Mounted Inertial Measurement Unit (IMU) Positioning Algorithm Based on Magnetic Constraint.

    PubMed

    Wang, Yan; Li, Xin; Zou, Jiaheng

    2018-03-01

With the development of related applications, indoor positioning techniques have become more and more widely used. Indoor positioning techniques based on Wi-Fi, Bluetooth low energy (BLE) and geomagnetism often rely on the physical locations of fingerprint information. The focus and difficulty of establishing the fingerprint database lie in obtaining a relatively accurate physical location with as little given information as possible. This paper presents a foot-mounted inertial measurement unit (IMU) positioning algorithm under a loop closure constraint based on magnetic information. It can provide relatively reliable position information without maps or geomagnetic information, and provides relatively accurate coordinates for the collection of a fingerprint database. In the experiment, the features extracted by the multi-level Fourier transform method proposed in this paper are validated, and the validity of loop closure matching is tested with a RANSAC-based method. Moreover, the loop closure detection results show that the cumulative error of the trajectory processed by the graph optimization algorithm is significantly suppressed, showing good accuracy. The average error of the trajectory under the loop closure constraint is kept below 2.15 m.

  6. A bronchoscopic navigation system using bronchoscope center calibration for accurate registration of electromagnetic tracker and CT volume without markers.

    PubMed

    Luo, Xiongbiao

    2014-06-01

Various bronchoscopic navigation systems are developed for the diagnosis, staging, and treatment of lung and bronchus cancers. To construct electromagnetically navigated bronchoscopy systems, registration of preoperative images and an electromagnetic tracker must be performed. This paper proposes a new marker-free registration method, which uses the centerlines of the bronchial tree and the center of a bronchoscope tip where an electromagnetic sensor is attached, to align preoperative images and electromagnetic tracker systems. The chest computed tomography (CT) volume (preoperative images) was segmented to extract the bronchial centerlines. An electromagnetic sensor was fixed at the bronchoscope tip surface. A model was designed and printed using a 3D printer to calibrate the relationship between the fixed sensor and the bronchoscope tip center. For each sensor measurement, which includes sensor position and orientation information, the corresponding bronchoscope tip center position was calculated. By minimizing the distance between each bronchoscope tip center position and the bronchial centerlines, the spatial alignment of the electromagnetic tracker system and the CT volume was determined. After obtaining the spatial alignment, an electromagnetic navigation bronchoscopy system was established to track or locate a bronchoscope in real time inside the bronchial tree during bronchoscopic examinations. The electromagnetic navigation bronchoscopy system was validated on a dynamic bronchial phantom that can simulate respiratory motion with a breath rate range of 0-10 min⁻¹. The fiducial and target registration errors of this navigation system were evaluated. The average fiducial registration error was reduced from 8.7 to 6.6 mm. The average target registration error, which indicates the overall tracked or navigated bronchoscope position accuracy, was reduced substantially, from 6.8 to 4.5 mm, compared to previous registration methods. 
An electromagnetically navigated bronchoscopy system was constructed with accurate registration of an electromagnetic tracker and the CT volume on the basis of an improved marker-free registration approach that uses the bronchial centerlines and bronchoscope tip center information. The fiducial and target registration errors of our electromagnetic navigation system were about 6.6 and 4.5 mm in dynamic bronchial phantom validation.

  7. A bronchoscopic navigation system using bronchoscope center calibration for accurate registration of electromagnetic tracker and CT volume without markers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Xiongbiao, E-mail: xiongbiao.luo@gmail.com

    2014-06-15

Purpose: Various bronchoscopic navigation systems are developed for the diagnosis, staging, and treatment of lung and bronchus cancers. To construct electromagnetically navigated bronchoscopy systems, registration of preoperative images and an electromagnetic tracker must be performed. This paper proposes a new marker-free registration method, which uses the centerlines of the bronchial tree and the center of a bronchoscope tip where an electromagnetic sensor is attached, to align preoperative images and electromagnetic tracker systems. Methods: The chest computed tomography (CT) volume (preoperative images) was segmented to extract the bronchial centerlines. An electromagnetic sensor was fixed at the bronchoscope tip surface. A model was designed and printed using a 3D printer to calibrate the relationship between the fixed sensor and the bronchoscope tip center. For each sensor measurement, which includes sensor position and orientation information, the corresponding bronchoscope tip center position was calculated. By minimizing the distance between each bronchoscope tip center position and the bronchial centerlines, the spatial alignment of the electromagnetic tracker system and the CT volume was determined. After obtaining the spatial alignment, an electromagnetic navigation bronchoscopy system was established to track or locate a bronchoscope in real time inside the bronchial tree during bronchoscopic examinations. Results: The electromagnetic navigation bronchoscopy system was validated on a dynamic bronchial phantom that can simulate respiratory motion with a breath rate range of 0–10 min⁻¹. The fiducial and target registration errors of this navigation system were evaluated. The average fiducial registration error was reduced from 8.7 to 6.6 mm. The average target registration error, which indicates the overall tracked or navigated bronchoscope position accuracy, was reduced substantially, from 6.8 to 4.5 mm, compared to previous registration methods. 
Conclusions: An electromagnetically navigated bronchoscopy system was constructed with accurate registration of an electromagnetic tracker and the CT volume on the basis of an improved marker-free registration approach that uses the bronchial centerlines and bronchoscope tip center information. The fiducial and target registration errors of our electromagnetic navigation system were about 6.6 and 4.5 mm in dynamic bronchial phantom validation.« less
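The registration step described above (minimizing the distance between measured tip-center positions and the bronchial centerlines) can be sketched in simplified form. The code below is a hypothetical 2D, translation-only illustration, not the authors' implementation, which would estimate a full 3D rigid transform with an ICP-style solver:

```python
import math

def nearest(point, centerline):
    # closest sampled centerline point to a measured tip-center position
    return min(centerline, key=lambda c: math.dist(point, c))

def register_translation(tip_centers, centerline, iterations=20):
    # Iteratively shift the measured tip centers so that they settle onto
    # the centerline: each step moves by the average offset to the nearest
    # centerline point (a translation-only ICP sketch).
    tx, ty = 0.0, 0.0
    for _ in range(iterations):
        moved = [(x + tx, y + ty) for x, y in tip_centers]
        dx = sum(nearest(p, centerline)[0] - p[0] for p in moved) / len(moved)
        dy = sum(nearest(p, centerline)[1] - p[1] for p in moved) / len(moved)
        tx, ty = tx + dx, ty + dy
    return tx, ty
```

A real system would additionally estimate rotation and work on the full 3D bronchial tree.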

  8. Ground target geolocation based on digital elevation model for airborne wide-area reconnaissance system

    NASA Astrophysics Data System (ADS)

    Qiao, Chuan; Ding, Yalin; Xu, Yongsen; Xiu, Jihong

    2018-01-01

    To obtain the geographical position of a ground target accurately, a geolocation algorithm based on the digital elevation model (DEM) is developed for an airborne wide-area reconnaissance system. According to the platform position and attitude information measured by the airborne position and orientation system and the gimbal angle information from the encoder, the line-of-sight pointing vector in the Earth-centered Earth-fixed coordinate frame is solved by homogeneous coordinate transformation. The target longitude and latitude can be solved with the elliptical Earth model and the global DEM. The influences of systematic error and measurement error on ground target geolocation accuracy are analyzed by the Monte Carlo method. The simulation results show that this algorithm markedly improves the geolocation accuracy of ground targets in rough terrain. The geolocation accuracy of a moving ground target can be further improved by moving average filtering (MAF). The validity of the geolocation algorithm is verified by a flight test in which the plane flies at a geodetic height of 15,000 m with an outer gimbal angle of <47°. The geolocation root mean square error of the target trajectory is <45 m, and <7 m after MAF.
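The moving average filtering (MAF) used above to smooth the moving-target geolocations can be sketched as follows; the trailing window length and the (lat, lon) tuple format are illustrative assumptions, not details from the record:

```python
def moving_average(points, window=5):
    # Smooth a sequence of (lat, lon) geolocation estimates with a trailing
    # moving-average window; early outputs average over the shorter prefix.
    smoothed = []
    for i in range(len(points)):
        chunk = points[max(0, i - window + 1):i + 1]
        smoothed.append(tuple(sum(c) / len(chunk) for c in zip(*chunk)))
    return smoothed
```

Averaging zero-mean measurement noise over the window is what reduces the RMS error of the filtered trajectory.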

  9. Positioning accuracy during VMAT of gynecologic malignancies and the resulting dosimetric impact by a 6-degree-of-freedom couch in combination with daily kilovoltage cone beam computed tomography.

    PubMed

    Yao, Lihong; Zhu, Lihong; Wang, Junjie; Liu, Lu; Zhou, Shun; Jiang, ShuKun; Cao, Qianqian; Qu, Ang; Tian, Suqing

    2015-04-26

    To improve the delivery of radiotherapy in gynecologic malignancies and to minimize the irradiation of unaffected tissues by using daily kilovoltage cone beam computed tomography (kV-CBCT) to reduce setup errors. Thirteen patients with gynecologic cancers were treated with postoperative volumetric-modulated arc therapy (VMAT). All patients had a planning CT scan and daily CBCT during treatment. Automatic bone anatomy matching was used to determine initial inter-fraction positioning error. Positional correction on a six-degrees-of-freedom (6DoF) couch was followed by a second scan to calculate the residual inter-fraction error, and a post-treatment scan assessed intra-fraction motion. The margins of the planning target volume (MPTV) were calculated from these setup variations and the effect of margin size on normal tissue sparing was evaluated. In total, 573 CBCT scans were acquired. Mean absolute pre-/post-correction errors were obtained in all six planes. With 6DoF couch correction, the MPTV accounting for intra-fraction errors was reduced by 3.8-5.6 mm. This permitted a reduction in the maximum dose to the small intestine, bladder and femoral head (P=0.001, 0.035 and 0.032, respectively), the average dose to the rectum, small intestine, bladder and pelvic marrow (P=0.003, 0.000, 0.001 and 0.000, respectively) and markedly reduced irradiated normal tissue volumes. A 6DoF couch in combination with daily kV-CBCT can considerably improve positioning accuracy during VMAT treatment in gynecologic malignancies, reducing the MPTV. The reduced margin size permits improved normal tissue sparing and a smaller total irradiated volume.

  10. Drug Distribution. Part 1. Models to Predict Membrane Partitioning.

    PubMed

    Nagar, Swati; Korzekwa, Ken

    2017-03-01

    Tissue partitioning is an important component of drug distribution and half-life. Protein binding and lipid partitioning together determine drug distribution. Two structure-based models to predict partitioning into microsomal membranes are presented. An orientation-based model was developed using a membrane template and atom-based relative free energy functions to select drug conformations and orientations for neutral and basic drugs. The resulting model predicts the correct membrane positions for nine compounds tested, and predicts the membrane partitioning for n = 67 drugs with an average fold-error of 2.4. Next, a more facile descriptor-based model was developed for acids, neutrals and bases. This model considers the partitioning of neutral and ionized species at equilibrium, and can predict membrane partitioning with an average fold-error of 2.0 (n = 92 drugs). Together these models suggest that drug orientation is important for membrane partitioning and that membrane partitioning can be well predicted from physicochemical properties.
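The record reports prediction quality as an "average fold-error". A common definition (assumed here; the record does not spell it out) is the geometric mean of the fold differences between predicted and observed partition coefficients:

```python
import math

def average_fold_error(predicted, observed):
    # Geometric-mean fold error: exp of the mean absolute log ratio.
    # A value of 2.0 means predictions are, on average, within 2-fold.
    logs = [abs(math.log(p / o)) for p, o in zip(predicted, observed)]
    return math.exp(sum(logs) / len(logs))
```

Using the absolute log ratio treats 2-fold over- and under-prediction symmetrically.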

  11. Effect of eye position on saccades and neuronal responses to acoustic stimuli in the superior colliculus of the behaving cat.

    PubMed

    Populin, Luis C; Tollin, Daniel J; Yin, Tom C T

    2004-10-01

    We examined the motor error hypothesis of visual and auditory interaction in the superior colliculus (SC), first tested by Jay and Sparks in the monkey. We trained cats to direct their eyes to the location of acoustic sources and studied the effects of eye position on both the ability of cats to localize sounds and the auditory responses of SC neurons with the head restrained. Sound localization accuracy was generally not affected by initial eye position, i.e., accuracy was not proportionally affected by the deviation of the eyes from the primary position at the time of stimulus presentation, showing that eye position is taken into account when orienting to acoustic targets. The responses of most single SC neurons to acoustic stimuli in the intact cat were modulated by eye position in the direction consistent with the predictions of the "motor error" hypothesis, but the shift accounted for only two-thirds of the initial deviation of the eyes. However, when the average horizontal sound localization error, which was approximately 35% of the target amplitude, was taken into account, the magnitude of the horizontal shifts in the SC auditory receptive fields matched the observed behavior. The modulation by eye position was not due to concomitant movements of the external ears, as confirmed by recordings carried out after immobilizing the pinnae of one cat. However, the pattern of modulation after pinnae immobilization was inconsistent with the observations in the intact cat, suggesting that, in the intact animal, information about the position of the pinnae may be taken into account.

  12. Impaired limb position sense after stroke: a quantitative test for clinical use.

    PubMed

    Carey, L M; Oke, L E; Matyas, T A

    1996-12-01

    A quantitative measure of wrist position sense was developed to advance clinical measurement of proprioceptive limb sensibility after stroke. Test-retest reliability, normative standards, and ability to discriminate impaired and unimpaired performance were investigated. Retest reliability was assessed over three sessions, and a matched-pairs study compared stroke and unimpaired subjects. Both wrists were tested, in counterbalanced order. Patients were tested in hospital-based rehabilitation units. Reliability was investigated on a consecutive sample of 35 adult stroke patients with a range of proprioceptive discrimination abilities and no evidence of neglect. A consecutive sample of 50 stroke patients and convenience sample of 50 healthy volunteers, matched for age, sex, and hand dominance, were tested in the normative-discriminative study. Age and sex were representative of the adult stroke population. The test required matching of imposed wrist positions using a pointer aligned with the axis of movement and a protractor scale. The test was reliable (r = .88 and .92) and observed changes of 8 degrees can be interpreted, with 95% confidence, as genuine. Scores of healthy volunteers ranged from 3.1 degrees to 10.9 degrees average error. The criterion of impairment was conservatively defined as 11 degrees (+/-4.8 degrees) average error. Impaired and unimpaired performance were well differentiated. Clinicians can confidently and quantitatively sample one aspect of proprioceptive sensibility in stroke patients using the wrist position sense test. Development of tests on other joints using the present approach is supported by our findings.

  13. Kinematic parameter estimation using close range photogrammetry for sport applications

    NASA Astrophysics Data System (ADS)

    Magre Colorado, Luz Alejandra; Martínez Santos, Juan Carlos

    2015-12-01

    In this article, we show the development of a low-cost hardware/software system based on close range photogrammetry to track the movement of a person performing weightlifting. The goal is to reduce costs for trainers and athletes dedicated to this sport when analyzing the sportsman's performance and avoiding injuries or accidents. We used a webcam as the data-acquisition hardware and developed the software stack in Processing using the OpenCV library. Our algorithm extracts size, position, velocity, and acceleration measurements of the bar along the course of the exercise. We present detailed characteristics of the system with results in a controlled setting. The current work improves the detection and tracking capabilities of a previous version of this system by using the HSV color model instead of RGB. Preliminary results show that the system is able to profile the movement of the bar as well as determine the size, position, velocity, and acceleration values of a marker/target in the scene. The average error in finding the size of an object at four meters of distance is less than 4%, and the average error in the acceleration value is 1.01%.
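The record attributes the improved tracking to switching from the RGB to the HSV color model. The toy example below (stdlib `colorsys`, not the authors' code) illustrates why: the hue channel of a colored marker is unchanged when the illumination level halves, while the raw RGB values change substantially, so an HSV hue threshold is more robust to lighting:

```python
import colorsys

def hue_of(r, g, b):
    # colorsys expects channels scaled to [0, 1]; returns hue in [0, 1)
    h, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return h

bright_red = (200, 40, 40)   # marker under strong light
dim_red = (100, 20, 20)      # same marker, half the illumination
```

A tracker thresholding on hue would accept both pixels; one thresholding raw RGB values would need a much looser (and noisier) band.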

  14. Preschool speech error patterns predict articulation and phonological awareness outcomes in children with histories of speech sound disorders.

    PubMed

    Preston, Jonathan L; Hull, Margaret; Edwards, Mary Louise

    2013-05-01

    To determine if speech error patterns in preschoolers with speech sound disorders (SSDs) predict articulation and phonological awareness (PA) outcomes almost 4 years later. Twenty-five children with histories of preschool SSDs (and normal receptive language) were tested at an average age of 4;6 (years;months) and were followed up at age 8;3. The frequency of occurrence of preschool distortion errors, typical substitution and syllable structure errors, and atypical substitution and syllable structure errors was used to predict later speech sound production, PA, and literacy outcomes. Group averages revealed below-average school-age articulation scores and low-average PA but age-appropriate reading and spelling. Preschool speech error patterns were related to school-age outcomes. Children for whom >10% of their speech sound errors were atypical had lower PA and literacy scores at school age than children who produced <10% atypical errors. Preschoolers who produced more distortion errors were likely to have lower school-age articulation scores than preschoolers who produced fewer distortion errors. Different preschool speech error patterns predict different school-age clinical outcomes. Many atypical speech sound errors in preschoolers may be indicative of weak phonological representations, leading to long-term PA weaknesses. Preschoolers' distortions may be resistant to change over time, leading to persisting speech sound production problems.

  15. SU-F-J-55: Feasibility of Supraclavicular Field Treatment by Investigating Variation of Junction Position Between Breast Tangential and Supraclavicular Fields for Deep Inspiration Breath Hold (DIBH) Left Breast Radiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, H; Sarkar, V; Paxton, A

    Purpose: To explore the feasibility of supraclavicular field treatment by investigating the variation of the junction position between tangential and supraclavicular fields during left breast radiation using the DIBH technique. Methods: Six patients with left breast cancer treated using the DIBH technique were included in this study. The AlignRT system was used to track each patient's breast surface. During daily treatment, when the patient's DIBH reached the preset AlignRT tolerance of ±3 mm in all principal directions (vertical, longitudinal, and lateral), the remaining longitudinal offset was recorded. The average, standard deviation, and range of the daily longitudinal offset over the entire treatment course were calculated for all six patients (93 fractions in total). The ranges of average ±1σ and ±2σ were calculated; they represent the longitudinal field-edge error at confidence levels of 68% and 95%. Based on these longitudinal errors, the dose at the junction between breast tangential and supraclavicular fields with variable gap/overlap sizes was calculated as a percentage of the prescription (on a representative patient treatment plan). Results: The average longitudinal offset for all patients is 0.16±1.32 mm, and the range of the longitudinal offset is −2.6 to 2.6 mm. The range of the longitudinal field-edge error at the 68% confidence level is −1.48 to 1.16 mm, and at the 95% confidence level is −2.80 to 2.48 mm. With a 5 mm and 1 mm gap, the junction dose could be as low as 37.5% and 84.9% of the prescription dose; with a 5 mm and 1 mm overlap, the junction dose could be as high as 169.3% and 117.6%. Conclusion: We observed a longitudinal field-edge error at the 95% confidence level of about ±2.5 mm, and the junction dose could deviate by up to roughly 70% (hot or cold) between different DIBHs. However, over the entire course of treatment, the average junction variation for all patients is within 0.2 mm. The results from our study show that it is potentially feasible to treat the supraclavicular field with breast tangents.
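The 68%/95% field-edge error ranges reported above are the mean ±1σ and ±2σ of the recorded daily longitudinal offsets. A minimal sketch of that computation, assuming the offsets are approximately normally distributed:

```python
import statistics

def confidence_ranges(offsets_mm):
    # mean ± 1 sigma (~68%) and mean ± 2 sigma (~95%) of the daily
    # longitudinal offsets, under an approximate-normality assumption
    m = statistics.mean(offsets_mm)
    s = statistics.stdev(offsets_mm)   # sample standard deviation
    return (m - s, m + s), (m - 2 * s, m + 2 * s)
```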

  16. Validation of a method for real time foot position and orientation tracking with Microsoft Kinect technology for use in virtual reality and treadmill based gait training programs.

    PubMed

    Paolini, Gabriele; Peruzzi, Agnese; Mirelman, Anat; Cereatti, Andrea; Gaukrodger, Stephen; Hausdorff, Jeffrey M; Della Croce, Ugo

    2014-09-01

    The use of virtual reality for the provision of motor-cognitive gait training has been shown to be effective for a variety of patient populations. The interaction between the user and the virtual environment is achieved by tracking the motion of the body parts and replicating it in the virtual environment in real time. In this paper, we present the validation of a novel method for tracking foot position and orientation in real time, based on the Microsoft Kinect technology, to be used for gait training combined with virtual reality. The validation of the motion tracking method was performed by comparing the tracking performance of the new system against a stereo-photogrammetric system used as gold standard. Foot position errors were in the order of a few millimeters (average RMSD from 4.9 to 12.1 mm in the medio-lateral and vertical directions, from 19.4 to 26.5 mm in the anterior-posterior direction); the foot orientation errors were also small (average %RMSD from 5.6% to 8.8% in the medio-lateral and vertical directions, from 15.5% to 18.6% in the anterior-posterior direction). The results suggest that the proposed method can be effectively used to track feet motion in virtual reality and treadmill-based gait training programs.
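The foot position errors above are reported as RMSD against a stereo-photogrammetric gold standard. The RMSD between two equal-length coordinate traces is computed as follows (a generic sketch, not the authors' pipeline):

```python
import math

def rmsd(series_a, series_b):
    # root-mean-square deviation between two equal-length position series,
    # e.g. one coordinate of the Kinect track vs. the gold-standard track
    squared = [(a - b) ** 2 for a, b in zip(series_a, series_b)]
    return math.sqrt(sum(squared) / len(squared))
```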

  17. Interactions between moist heating and dynamics in atmospheric predictability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Straus, D.M.; Huntley, M.A.

    1994-02-01

    The predictability properties of a fixed-heating version of a GCM, in which the moist heating is specified beforehand, are studied in a series of identical twin experiments. Comparison is made to an identical set of experiments using the control GCM, a five-level R30 version of the COLA GCM. The experiments each contain six ensembles, with a single ensemble consisting of six 30-day integrations starting from slightly perturbed Northern Hemisphere wintertime initial conditions. The moist heating from each integration within a single control ensemble was averaged over the ensemble. This averaged heating (a function of three spatial dimensions and time) was used as the prespecified heating in each member of the corresponding fixed-heating ensemble. The errors grow less rapidly in the fixed-heating case. The most rapidly growing scales at small times (global wavenumber 6) have doubling times of 3.2 days, compared to 2.4 days for the control experiments. The predictability times for the most energetic scales (global wavenumbers 9-12) are about two weeks for the fixed-heating experiments, compared to 9 days for the control. The ratio of error energy in the fixed-heating case to the control case falls below 0.5 by day 8, and then gradually increases as the error growth slows in the control case. The growth of errors is described in terms of budgets of error kinetic energy (EKE) and error available potential energy (EAPE), developed in terms of global wavenumber n. The diabatic generation of EAPE (G_APE) is positive in the control case and is dominated by midlatitude heating errors after day 2. The fixed-heating G_APE is negative at all times due to longwave radiative cooling. 36 refs., 9 figs., 1 tab.
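The doubling times quoted above (3.2 vs. 2.4 days) summarize early error growth under an exponential model. Assuming e(t) = e0·exp(rt), the doubling time follows from any two error measurements:

```python
import math

def doubling_time(error_start, error_end, days):
    # Under exponential error growth e(t) = e0 * exp(r t),
    # the doubling time is ln(2) / r.
    growth_rate = math.log(error_end / error_start) / days
    return math.log(2.0) / growth_rate
```

For example, an error that quadruples in two days has a doubling time of exactly one day.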

  18. Local correction of quadrupole errors at LHC interaction regions using action and phase jump analysis on turn-by-turn beam position data

    NASA Astrophysics Data System (ADS)

    Cardona, Javier Fernando; García Bonilla, Alba Carolina; Tomás García, Rogelio

    2017-11-01

    This article shows that the effect of all quadrupole errors present in an interaction region (IR) with low β* can be modeled by an equivalent magnetic kick, which can be estimated from action and phase jumps found in beam position data. This equivalent kick is used to find the strengths that certain normal and skew quadrupoles located in the IR must have to make an effective correction in that region. Additionally, averaging techniques to reduce noise in beam position data, which allow precise estimates of equivalent kicks, are presented and mathematically justified. The complete procedure is tested with simulated data obtained from madx and with 2015 LHC experimental data. The analyses performed on the experimental data indicate that the strengths of the IR skew quadrupole correctors and normal quadrupole correctors can be estimated within a 10% uncertainty. Finally, the effect of IR corrections on the β* is studied, and a correction scheme that returns this parameter to its design value is proposed.

  19. Automatic Tracking Algorithm in Coaxial Near-Infrared Laser Ablation Endoscope for Fetus Surgery

    NASA Astrophysics Data System (ADS)

    Hu, Yan; Yamanaka, Noriaki; Masamune, Ken

    2014-07-01

    This article reports a stable vessel-object tracking method for the treatment of twin-to-twin transfusion syndrome, based on our previous 2-DOF endoscope. During laser coagulation treatment, it is necessary to focus on the exact position of the target object; however, the target moves with the mother's respiratory motion, and obtaining and tracking its position precisely remains a challenge. In this article, an algorithm that uses features from accelerated segment test (FAST) for feature extraction and optical flow for object tracking is proposed to deal with this problem. Further, we experimentally simulate the movement due to the mother's respiration, and the resulting position errors and similarity scores verify the effectiveness of the proposed tracking algorithm for laser ablation endoscopy in vitro and under water, considering two influential factors. On average, errors of about 10 pixels and similarity scores above 0.92 were obtained in the experiments.

  20. Preschool speech error patterns predict articulation and phonological awareness outcomes in children with histories of speech sound disorders

    PubMed Central

    Preston, Jonathan L.; Hull, Margaret; Edwards, Mary Louise

    2012-01-01

    Purpose To determine if speech error patterns in preschoolers with speech sound disorders (SSDs) predict articulation and phonological awareness (PA) outcomes almost four years later. Method Twenty-five children with histories of preschool SSDs (and normal receptive language) were tested at an average age of 4;6 and followed up at 8;3. The frequency of occurrence of preschool distortion errors, typical substitution and syllable structure errors, and atypical substitution and syllable structure errors were used to predict later speech sound production, PA, and literacy outcomes. Results Group averages revealed below-average school-age articulation scores and low-average PA, but age-appropriate reading and spelling. Preschool speech error patterns were related to school-age outcomes. Children for whom more than 10% of their speech sound errors were atypical had lower PA and literacy scores at school-age than children who produced fewer than 10% atypical errors. Preschoolers who produced more distortion errors were likely to have lower school-age articulation scores. Conclusions Different preschool speech error patterns predict different school-age clinical outcomes. Many atypical speech sound errors in preschool may be indicative of weak phonological representations, leading to long-term PA weaknesses. Preschool distortions may be resistant to change over time, leading to persisting speech sound production problems. PMID:23184137

  1. Radio structure effects on the optical and radio representations of the ICRF

    NASA Astrophysics Data System (ADS)

    Andrei, A. H.; da Silva Neto, D. N.; Assafin, M.; Vieira Martins, R.

    Silva Neto et al. (2002) show that comparing the ICRF Ext.1 sources' standard radio positions (Ma et al. 1998) against their optical counterpart positions (Zacharias et al. 1999, Monet et al. 1998), a systematic pattern appears, which depends on the radio structure index (Fey and Charlot, 2000). The optical-to-radio offsets produce a distribution suggesting that the coincidence of the optical and radio centroids is worse for the radio-extended than for the radio-compact sources. On average, the coincidence between the optical and radio centroids is found to be 7.9±1.1 mas smaller for the compact than for the extended sources. Such an effect is reasonably large, and certainly much too large to be due to errors in the VLBI radio positions. On the other hand, it is too small to be attributed to errors in the optical positions, which moreover should be independent of the radio structure. Thus, other than a true pattern of centroid non-coincidence, the remaining explanation is a chance result. This paper summarizes the several statistical tests used to discard the chance explanation.
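The record mentions statistical tests used to rule out a chance explanation for the compact-versus-extended offset difference. One generic test of that kind (an assumption here; the paper's exact tests are not listed in the record) is a permutation test on the difference of group means:

```python
import random

def _mean(values):
    return sum(values) / len(values)

def permutation_test(sample_a, sample_b, n_perm=2000, seed=1):
    # Two-sided permutation test: how often does a random relabeling of the
    # pooled data produce a mean difference at least as large as observed?
    random.seed(seed)
    observed = abs(_mean(sample_a) - _mean(sample_b))
    pooled = list(sample_a) + list(sample_b)
    hits = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        relabeled_a = pooled[:len(sample_a)]
        relabeled_b = pooled[len(sample_a):]
        if abs(_mean(relabeled_a) - _mean(relabeled_b)) >= observed:
            hits += 1
    return hits / n_perm   # small p-value => chance explanation unlikely
```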

  2. Observations of the Star Cor Caroli at the Apple Valley Workshop 2016 (Abstract)

    NASA Astrophysics Data System (ADS)

    Estrada, R.; Boyd, S.; Estrada, C.; Evans, C.; Rhoades, H.; Rhoades, M.; Rhoades, T.

    2017-12-01

    (Abstract only) Using a 22-inch Newtonian Alt/Az telescope and a Celestron Micro Guide eyepiece, students participating in a workshop observed the binary star Cor Caroli (STF 1692; alpha CVn) and found a position angle of 231.0 degrees as well as an average separation of 18.7". This observation compared favorably with the 2015 Washington Double Star published position. This project was part of Mark Brewer's Apple Valley Double Star Workshop. The results were analyzed using bias and circular error probability calculations.

  3. Long-term orbit prediction for China's Tiangong-1 spacecraft based on mean atmosphere model

    NASA Astrophysics Data System (ADS)

    Tang, Jingshi; Liu, Lin; Miao, Manqian

    Tiangong-1 is China's test module for a future space station. It went through three successful rendezvous and dockings with Shenzhou spacecraft from 2011 to 2013. For long-term management and maintenance, the orbit sometimes needs to be predicted for a long period of time. As Tiangong-1 works in a low-Earth orbit with an altitude of about 300-400 km, the error in the a priori atmosphere model contributes significantly to the rapid growth of the predicted orbit error. When the orbit is predicted for 10-20 days, the error in the a priori atmosphere model, if not properly corrected, can induce a semi-major axis error and an overall position error of up to a few kilometers and several thousand kilometers, respectively. In this work, we use a mean atmosphere model averaged from NRLMSIS00. The a priori reference mean density can be corrected during precise orbit determination (POD). For applications in long-term orbit prediction, the observations are first accumulated. With a sufficiently long period of observations, we are able to obtain a series of diurnal mean densities. This series bears the recent variation of the atmosphere density and can be analyzed for various periods. After being properly fitted, the mean density can be predicted and then applied in the orbit prediction. We show that densities predicted with this approach serve to increase the accuracy of the predicted orbit. In several 20-day prediction tests, most predicted orbits show semi-major axis errors better than 700 m and overall position errors better than 600 km.

  4. Collaborative Localization Algorithms for Wireless Sensor Networks with Reduced Localization Error

    PubMed Central

    Sahoo, Prasan Kumar; Hwang, I-Shyan

    2011-01-01

    Localization is an important research issue in Wireless Sensor Networks (WSNs). Though the Global Positioning System (GPS) can be used to locate the positions of the sensors, it is unfortunately limited to outdoor applications and is costly and power consuming. In order to find the locations of sensor nodes without the help of GPS, collaboration among nodes is essential so that localization can be accomplished efficiently. In this paper, novel localization algorithms are proposed to find the possible location information of normal nodes in a collaborative manner for an outdoor environment, with the help of a few beacon and anchor nodes. In our localization scheme, at most three beacon nodes need to collaborate to find the accurate location information of any normal node. Besides, analytical methods are designed to calculate and reduce the localization error using a probability distribution function. Performance evaluation of our algorithm shows that there is a tradeoff between the number of deployed beacon nodes and the localization error, and that the average localization time of the network increases with the number of normal nodes deployed over a region. PMID:22163738
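With three collaborating beacon nodes at known positions and estimated distances to a normal node, the node's 2D position can be recovered by trilateration, i.e., by intersecting the three distance circles. A minimal sketch (a standard construction, not necessarily the paper's exact algorithm): subtracting the circle equations pairwise yields two linear equations in (x, y).

```python
def trilaterate(b1, r1, b2, r2, b3, r3):
    # 2D trilateration from three beacons b_i = (x_i, y_i) and ranges r_i.
    # Subtracting circle equations pairwise linearizes the problem:
    #   A x + B y = C  and  D x + E y = F
    (x1, y1), (x2, y2), (x3, y3) = b1, b2, b3
    A = 2 * (x2 - x1); B = 2 * (y2 - y1)
    C = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    D = 2 * (x3 - x2); E = 2 * (y3 - y2)
    F = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = A * E - B * D          # zero iff the beacons are collinear
    return (C * E - B * F) / det, (A * F - C * D) / det
```

With noisy ranges the circles do not meet in a point, which is where the paper's probabilistic error-reduction methods come in.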

  5. Experimental measurement of structural power flow on an aircraft fuselage

    NASA Technical Reports Server (NTRS)

    Cuschieri, J. M.

    1989-01-01

    An experimental technique is used to measure the structural power flow through an aircraft fuselage with the excitation near the wing attachment location. Because of the large number of measurements required to analyze the whole of an aircraft fuselage, a balance must be achieved between the number of measurement transducers, the mounting of these transducers, and the accuracy of the measurements. Using four transducers mounted on a bakelite platform, the structural intensity vectors at locations distributed throughout the fuselage are measured. To minimize the errors associated with the four-transducer technique, the measurement positions are selected away from bulkheads and stiffeners. Because four separate transducers are used, each with its own drive and conditioning amplifiers, phase errors can be introduced into the measurements that are much greater than the phase differences being measured. To minimize these phase errors, two sets of measurements are taken for each position with the orientation of the transducers rotated by 180 deg, and an average is taken between the two sets of measurements. Results are presented and discussed.
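The 180-degree rotation-and-average trick works because rotating the probe flips the sign of the true phase difference while the instrumentation phase bias stays put, so combining the two readings cancels the bias. A simplified numeric model of that cancellation (illustrative only; the paper's intensity expressions are more involved):

```python
import math

def intensity(phase_diff, bias):
    # toy model: measured intensity ~ sin(true phase difference + channel bias)
    return math.sin(phase_diff + bias)

def corrected(phase_diff, bias):
    # Rotating the probe 180 deg flips the sign of the true phase difference
    # but not the channel bias; half the difference of the two readings gives
    # sin(phase_diff) * cos(bias), removing the bias to first order.
    i_normal = intensity(phase_diff, bias)
    i_rotated = intensity(-phase_diff, bias)
    return (i_normal - i_rotated) / 2
```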

  6. Maintaining tumor targeting accuracy in real-time motion compensation systems for respiration-induced tumor motion.

    PubMed

    Malinowski, Kathleen; McAvoy, Thomas J; George, Rohini; Dieterich, Sonja; D'Souza, Warren D

    2013-07-01

    To determine how best to time respiratory surrogate-based tumor motion model updates by comparing a novel technique based on external measurements alone to three direct measurement methods. Concurrently measured tumor and respiratory surrogate positions from 166 treatment fractions for lung or pancreas lesions were analyzed. Partial-least-squares regression models of tumor position from marker motion were created from the first six measurements in each dataset. Successive tumor localizations were obtained at a rate of once per minute on average. Model updates were timed according to four methods: never, respiratory surrogate-based (when metrics based on respiratory surrogate measurements exceeded confidence limits), error-based (when localization error ≥ 3 mm), and always (approximately once per minute). Radial tumor displacement prediction errors (mean ± standard deviation) for the four schemes described above were 2.4 ± 1.2, 1.9 ± 0.9, 1.9 ± 0.8, and 1.7 ± 0.8 mm, respectively. The never-update error was significantly larger than the errors of the other methods. Mean update counts over 20 min were 0, 4, 9, and 24, respectively. The same improvement in tumor localization accuracy could be achieved through any of the three update methods, but significantly fewer updates were required with the respiratory surrogate method. This study establishes the feasibility of timing image acquisitions for updating respiratory surrogate models without direct tumor localization.
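The error-based update policy (refit when localization error reaches 3 mm) can be sketched with a hypothetical, deliberately simplified drift model in which the surrogate model's prediction error grows steadily between refits; the drift rate and reset-to-zero refit are illustrative assumptions, not measured behavior:

```python
def error_based_updates(n_steps, drift=1.0, threshold=3.0):
    # Hypothetical sketch: the model's prediction error drifts by `drift` mm
    # per localization and is reset to zero whenever a direct tumor
    # localization shows the error has reached `threshold` mm.
    bias, updates, errors = 0.0, 0, []
    for _ in range(n_steps):
        bias += drift
        errors.append(bias)
        if bias >= threshold:
            bias = 0.0      # refit: assume the update removes the error
            updates += 1
    return updates, errors
```

The tradeoff the paper measures is exactly this: a tighter threshold caps the error but triggers more updates.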

  7. Queuing Time Prediction Using WiFi Positioning Data in an Indoor Scenario.

    PubMed

    Shu, Hua; Song, Ci; Pei, Tao; Xu, Lianming; Ou, Yang; Zhang, Libin; Li, Tao

    2016-11-22

    Queuing is common in urban public places. Automatically monitoring and predicting queuing time can not only help individuals reduce their wait time and alleviate anxiety but also help managers allocate resources more efficiently and enhance their ability to address emergencies. This paper proposes a novel method to estimate and predict queuing time in indoor environments based on WiFi positioning data. First, we use a series of parameters to identify the trajectories that can be used as representatives of queuing time. Next, we divide the day into equal time slices and estimate individuals' average queuing time during specific time slices. Finally, we build a nonstandard autoregressive (NAR) model, trained using the previous day's WiFi estimation results and actual queuing times, to predict the queuing time in the upcoming time slice. A case study comparing the NAR model with two other time series analysis models shows that the NAR model has better precision. Random topological errors caused by the drift phenomenon of WiFi positioning technology (locations determined by a WiFi positioning system may drift accidentally) and systematic topological errors caused by the positioning system are the main factors that affect the estimation precision. Therefore, we optimize the deployment strategy during the positioning system deployment phase and propose a drift ratio parameter for the trajectory screening phase to alleviate the impact of topological errors and improve the estimates. The WiFi positioning data from an eight-day case study conducted at the T3-C entrance of Beijing Capital International Airport show that the mean absolute estimation error is 147 s, which is approximately 26.92% of the actual queuing time. For predictions using the NAR model, the proportion is approximately 27.49%. The theoretical predictions and the empirical case study indicate that the NAR model is an effective method to estimate and predict queuing time in indoor public areas.
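The reported 147 s mean absolute error, quoted as roughly 26.92% of the actual queuing time, corresponds to the following simple computation (the numbers in the test are illustrative, not the study's data):

```python
def mean_absolute_error(actual, estimated):
    # mean of |actual - estimated| over matched time slices
    return sum(abs(a - e) for a, e in zip(actual, estimated)) / len(actual)

def relative_error(actual, estimated):
    # MAE expressed as a share of the average actual queuing time
    return mean_absolute_error(actual, estimated) / (sum(actual) / len(actual))
```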

  8. Queuing Time Prediction Using WiFi Positioning Data in an Indoor Scenario

    PubMed Central

    Shu, Hua; Song, Ci; Pei, Tao; Xu, Lianming; Ou, Yang; Zhang, Libin; Li, Tao

    2016-01-01

Queuing is common in urban public places. Automatically monitoring and predicting queuing time can not only help individuals to reduce their wait time and alleviate anxiety but also help managers to allocate resources more efficiently and enhance their ability to address emergencies. This paper proposes a novel method to estimate and predict queuing time in indoor environments based on WiFi positioning data. First, we use a series of parameters to identify the trajectories that can be used as representatives of queuing time. Next, we divide the day into equal time slices and estimate individuals’ average queuing time during specific time slices. Finally, we build a nonstandard autoregressive (NAR) model trained using the previous day’s WiFi estimation results and actual queuing time to predict the queuing time in the upcoming time slice. A case study comparing two other time series analysis models shows that the NAR model has better precision. Random topological errors caused by the drift phenomenon of WiFi positioning technology (locations determined by a WiFi positioning system may drift accidentally) and systematic topological errors caused by the positioning system are the main factors that affect the estimation precision. Therefore, we optimize the deployment strategy during the positioning system deployment phase and propose a drift ratio parameter pertaining to the trajectory screening phase to alleviate the impact of topological errors and improve estimates. The WiFi positioning data from an eight-day case study conducted at the T3-C entrance of Beijing Capital International Airport show that the mean absolute estimation error is 147 s, which is approximately 26.92% of the actual queuing time. For predictions using the NAR model, the proportion is approximately 27.49%. The theoretical predictions and the empirical case study indicate that the NAR model is an effective method to estimate and predict queuing time in indoor public areas. PMID:27879663
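The slice-and-predict pipeline described in the abstract can be sketched as follows. This is an illustrative simplification: the function names, slice length, and the fixed autoregressive weights are assumptions for demonstration, not the authors' actual NAR model, which is trained on the previous day's estimates.

```python
# Sketch of time-slice queuing-time estimation plus a simple weighted
# autoregressive prediction of the next slice (illustrative assumptions).

def slice_averages(samples, slice_len):
    """Average queuing time (s) per equal time slice.

    samples: list of (timestamp_s, queuing_time_s) pairs from screened trajectories.
    """
    slices = {}
    for t, q in samples:
        slices.setdefault(int(t // slice_len), []).append(q)
    return {k: sum(v) / len(v) for k, v in slices.items()}

def predict_next(history, weights=(0.6, 0.3, 0.1)):
    """Predict the upcoming slice from the most recent len(weights) slices."""
    recent = history[-len(weights):]
    return sum(w * q for w, q in zip(weights, reversed(recent)))

# Two samples fall in slice 0 (0-600 s), one in slice 1:
avg = slice_averages([(0, 300), (100, 360), (700, 420)], slice_len=600)
pred = predict_next([400, 440, 480])
```

A real implementation would fit the weights from the previous day's WiFi estimates against measured queuing times, as the abstract describes.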

  9. Subaperture test of wavefront error of large telescopes: error sources and stitching performance simulations

    NASA Astrophysics Data System (ADS)

    Chen, Shanyong; Li, Shengyi; Wang, Guilin

    2014-11-01

The wavefront error of large telescopes needs to be measured to check system quality and to estimate the misalignment of the telescope optics, including the primary, the secondary and so on. The measurement is usually realized with a focal-plane interferometer and an autocollimator flat (ACF) of the same aperture as the telescope. However, this is challenging for meter-class telescopes because of the high cost and technological difficulty of producing a large ACF. A subaperture test with a smaller ACF is hence proposed, in combination with advanced stitching algorithms. Major error sources include the surface error of the ACF, misalignment of the ACF, and measurement noise. Different error sources have different impacts on the wavefront error. The surface error of the ACF behaves like a systematic error, and its astigmatism accumulates and is enlarged if the azimuth of the subapertures remains fixed. It is difficult to calibrate the ACF accurately because it suffers considerable deformation induced by gravity or mechanical clamping force. Therefore, a self-calibrated stitching algorithm is employed to separate the ACF surface error from the subaperture wavefront error. We suggest that the ACF be rotated around the optical axis of the telescope for the subaperture test. The algorithm is also able to correct the subaperture tip-tilt based on the overlapping consistency. Since all subaperture measurements are obtained in the same imaging plane, the lateral shift of the subapertures is always known and the real overlapping points can be recognized in this plane. Therefore, the lateral positioning error of the subapertures has no impact on the stitched wavefront. In contrast, the angular positioning error changes the azimuth of the ACF and hence the systematic error. We propose an angularly uneven layout of subapertures to minimize the stitching error, which runs counter to intuition.
Finally, measurement noise cannot be corrected, but it can be suppressed by averaging and environmental control. We simulate the performance of the stitching algorithm in dealing with the surface error and misalignment of the ACF and with noise suppression, which provides guidelines for the optomechanical design of the stitching test system.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Proctor, Timothy; Rudinger, Kenneth; Young, Kevin

Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r. For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. These theories allow explicit computation of the error rate that RB measures (r), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
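The error metric r in the abstract comes from fitting the survival probabilities to an exponential decay. A minimal sketch of that extraction for the single-qubit case (d = 2) is shown below; assuming a known asymptote B = 1/2 is an illustrative simplification, since real RB analyses fit A, B, and p jointly from noisy data.

```python
import math

# Extract the RB error rate r from survival probabilities modeled as
# P(m) = A * p**m + B, via a log-linear fit of log(P - B) versus m.
# Assumes B is known (sketch only); r = (d - 1) * (1 - p) / d.

def rb_error_rate(lengths, probs, B=0.5, d=2):
    ys = [math.log(P - B) for P in probs]
    n = len(lengths)
    mx = sum(lengths) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(lengths, ys)) / \
            sum((x - mx) ** 2 for x in lengths)
    p = math.exp(slope)          # decay parameter per circuit step
    return (d - 1) / d * (1 - p)

# Synthetic noiseless decay with p = 0.99 and A = 0.5:
lengths = [1, 10, 50, 100]
probs = [0.5 * 0.99 ** m + 0.5 for m in lengths]
r = rb_error_rate(lengths, probs)
```

The paper's point is that this r, while well defined operationally, need not equal the representation-dependent average gate infidelity.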

  11. Impacts of GNSS position offsets on global frame stability

    NASA Astrophysics Data System (ADS)

    Griffiths, Jake; Ray, Jim

    2015-04-01

    Positional offsets appear in Global Navigation Satellite System (GNSS) time series for a variety of reasons. Antenna or radome changes are the most common cause for these discontinuities. Many others are from earthquakes, receiver changes, and different anthropogenic modifications at or near the stations. Some jumps appear for unknown or undocumented reasons. Accurate determination of station velocities, and therefore geophysical parameters and terrestrial reference frames, requires that positional offsets be correctly found and compensated. Williams (2003) found that undetected offsets introduce a random walk error component in individual station time series. The topic of detecting positional offsets has received considerable attention in recent years (e.g., Detection of Offsets in GPS Experiment; DOGEx), and most research groups using GNSS have adopted a mix of manual and automated methods for finding them. The removal of a positional offset from a time series is usually handled by estimating the average station position on both sides of the discontinuity. Except for large earthquake events, the velocity is usually assumed constant and continuous across the positional jump. This approach is sufficient in the absence of time-correlated errors. However, GNSS time series contain periodic and power-law (flicker) errors. In this paper, we evaluate the impact to individual station results and the overall stability of the global reference frame from adding increasing numbers of positional discontinuities. We use the International GNSS Service (IGS) weekly SINEX files, and iteratively insert positional offset parameters. Each iteration includes a restacking of the modified SINEX files using the CATREF software from Institut National de l'Information Géographique et Forestière (IGN). 
Comparisons of successive stacked solutions are used to assess the impacts on the time series of x-pole and y-pole offsets, along with changes in regularized position and secular velocity for stations with more than 2.5 years of data. Our preliminary results indicate that the change in polar motion scatter is logarithmic with increasing numbers of discontinuities. The best-fit natural logarithm to the changes in scatter for x-pole has R2 = 0.58; the fit for the y-pole series has R2 = 0.99. From these empirical functions, we find that polar motion scatter increases from zero when the total rate of discontinuities exceeds 0.2 (x-pole) and 1.3 (y-pole) per station, on average (the IGS has 0.65 per station). Thus, the presence of position offsets in GNSS station time series is likely already a contributor to IGS polar motion inaccuracy and global frame instability. Impacts to station position and velocity estimates depend on noise features found in that station's positional time series. For instance, larger changes in velocity occur for stations with shorter and noisier data spans. This is because an added discontinuity parameter for an individual station time series can induce changes in average position on both sides of the break. We will expand on these results, and consider remaining questions about the role of velocity discontinuities and the effects caused by non-core reference frame stations.
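The standard offset-removal step the abstract describes, estimating the average station position on each side of a known discontinuity, can be sketched as follows. The sketch assumes negligible velocity over the window and no time-correlated noise, which is exactly the regime where the abstract notes the approach is sufficient; the data are made up.

```python
# Remove a position offset at a known break epoch by differencing the
# mean position on each side of the discontinuity (velocity assumed ~0).

def remove_offset(times, positions, break_time):
    before = [p for t, p in zip(times, positions) if t < break_time]
    after = [p for t, p in zip(times, positions) if t >= break_time]
    jump = sum(after) / len(after) - sum(before) / len(before)
    return [p - jump if t >= break_time else p
            for t, p in zip(times, positions)]

ts = [0, 1, 2, 3, 4, 5]
xs = [10.0, 10.1, 9.9, 15.0, 15.1, 14.9]   # ~5-unit jump at t = 3
fixed = remove_offset(ts, xs, break_time=3)
```

With flicker or random-walk noise, as the abstract discusses, each added jump parameter perturbs the side means and hence the velocity estimate, which is the mechanism behind the reported frame instability.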

  12. Fixed-point image orthorectification algorithms for reduced computational cost

    NASA Astrophysics Data System (ADS)

    French, Joseph Clinton

Imaging systems have been applied to many new applications in recent years. With the advent of low-cost, low-power focal planes and more powerful, lower cost computers, remote sensing applications have become more widespread. Many of these applications require some form of geolocation, especially when relative distances are desired. However, when greater global positional accuracy is needed, orthorectification becomes necessary. Orthorectification is the process of projecting an image onto a Digital Elevation Map (DEM), which removes terrain distortions and corrects the perspective distortion by changing the viewing angle to be perpendicular to the projection plane. Orthorectification is used in disaster tracking, landscape management, wildlife monitoring and many other applications. However, orthorectification is a computationally expensive process due to floating-point operations and divisions in the algorithm. To reduce the computational cost of on-board processing, two novel algorithm modifications are proposed. The first is projection using fixed-point arithmetic, which removes the floating-point operations and reduces processing time by operating only on integers. The second is replacement of the division inherent in projection with a multiplication by the inverse. Computing the inverse exactly would itself require iteration, so the inverse is instead replaced with a linear approximation. As a result of these modifications, the processing time of projection is reduced by a factor of 1.3x with an average pixel position error of 0.2% of a pixel size for 128-bit integer processing, and by over 4x with an average pixel position error of less than 13% of a pixel size for 64-bit integer processing. A secondary inverse-function approximation is also developed that replaces the linear approximation with a quadratic.
The quadratic approximation produces a more accurate approximation of the inverse, allowing for an integer multiplication calculation to be used in place of the traditional floating point division. This method increases the throughput of the orthorectification operation by 38% when compared to floating point processing. Additionally, this method improves the accuracy of the existing integer-based orthorectification algorithms in terms of average pixel distance, increasing the accuracy of the algorithm by more than 5x. The quadratic function reduces the pixel position error to 2% and is still 2.8x faster than the 128-bit floating point algorithm.
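The core trick here, replacing a division x / d with a fixed-point multiplication by an approximated inverse, can be sketched in a few lines. This is a minimal Q16 example with an assumed linear chord approximation of 1/d on [1, 2); the dissertation's actual constants, bit widths, and quadratic variant are not reproduced.

```python
# Division via fixed-point multiplication with a linearly approximated
# inverse (sketch; divisor d assumed pre-normalized into [1, 2)).

FRAC_BITS = 16
SCALE = 1 << FRAC_BITS

def approx_inverse_q16(d):
    """Chord through (1, 1) and (2, 1/2): 1/d ~= 1.5 - 0.5*d, in Q16."""
    return int((1.5 - 0.5 * d) * SCALE)

def fixed_div(x, d):
    """x / d as an integer multiply by the approximated inverse."""
    return (int(x * SCALE) * approx_inverse_q16(d)) >> (2 * FRAC_BITS)

exact = 100 / 1.5
approx = fixed_div(100, 1.5)   # integer-only arithmetic after scaling
```

The chord's worst-case relative error is on the order of 10%, which is in line with the ~13% pixel-position error the abstract reports for the linear variant; a quadratic approximation shrinks that error at the cost of one more multiply.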

  13. The problem of isotopic baseline: Reconstructing the diet and trophic position of fossil animals

    NASA Astrophysics Data System (ADS)

    Casey, Michelle M.; Post, David M.

    2011-05-01

    Stable isotope methods are powerful, frequently used tools which allow diet and trophic position reconstruction of organisms and the tracking of energy sources through ecosystems. The majority of ecosystems have multiple food sources which have distinct carbon and nitrogen isotopic signatures despite occupying a single trophic level. This difference in the starting isotopic composition of primary producers sets up an isotopic baseline that needs to be accounted for when calculating diet or trophic position using stable isotopic methods. This is particularly important when comparing animals from different regions or different times. Failure to do so can cause erroneous estimations of diet or trophic level, especially for organisms with mixed diets. The isotopic baseline is known to vary seasonally and in concert with a host of physical and chemical variables such as mean annual rainfall, soil maturity, and soil pH in terrestrial settings and lake size, depth, and distance from shore in aquatic settings. In the fossil record, the presence of shallowing upward suites of rock, or parasequences, will have a considerable impact on the isotopic baseline as basin size, depth and distance from shore change simultaneously with stratigraphic depth. For this reason, each stratigraphic level is likely to need an independent estimation of baseline even within a single outcrop. Very little is known about the scope of millennial or decadal variation in isotopic baseline. Without multi-year data on the nature of isotopic baseline variation, the impacts of time averaging on our ability to resolve trophic relationships in the fossil record will remain unclear. The use of a time averaged baseline will increase the amount of error surrounding diet and trophic position reconstructions. 
Where signal-to-noise ratios are low, due to low end-member disparity (e.g., aquatic systems), or where the observed isotopic shift is small (≤ 1‰), the error introduced by time averaging may severely inhibit the scope of one's interpretations and limit the types of questions one can reliably answer. In situations with strong signal strength, resulting from high end-member disparity (e.g., terrestrial settings), this additional error may be surmountable. Baseline variation that is adequately characterized can be dealt with by applying multiple end-member mixing models.
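The baseline correction the abstract argues for enters directly into the standard trophic-position calculation. The sketch below uses the widely cited two-parameter form with a 3.4‰ per-trophic-level nitrogen enrichment; the enrichment value, baseline trophic level, and example δ¹⁵N values are illustrative assumptions.

```python
# Baseline-corrected trophic position from nitrogen isotopes (sketch).
# lambda_base: trophic level of the baseline organism (2 = primary consumer).
# delta_n: assumed per-trophic-level d15N enrichment (per mil).

def trophic_position(d15n_consumer, d15n_base, lambda_base=2.0, delta_n=3.4):
    return lambda_base + (d15n_consumer - d15n_base) / delta_n

# The same consumer value against two different baselines yields
# different inferred trophic levels:
tp_a = trophic_position(12.0, 5.0)   # baseline A
tp_b = trophic_position(12.0, 8.4)   # baseline B (e.g., a time-averaged value)
```

A 3.4‰ shift in the assumed baseline moves the inferred trophic position by a full level, which is why an inappropriate time-averaged baseline can badly distort diet reconstructions.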

  14. Detection of IMRT delivery errors based on a simple constancy check of transit dose by using an EPID

    NASA Astrophysics Data System (ADS)

    Baek, Tae Seong; Chung, Eun Ji; Son, Jaeman; Yoon, Myonggeun

    2015-11-01

Beam delivery errors during intensity modulated radiotherapy (IMRT) were detected based on a simple constancy check of the transit dose by using an electronic portal imaging device (EPID). Twenty-one IMRT plans were selected from various treatment sites, and the transit doses during treatment were measured by using an EPID. Transit doses were measured 11 times for each course of treatment, and the constancy check was based on gamma index (3%/3 mm) comparisons between a reference dose map (the first measured transit dose) and test dose maps (the following ten measured dose maps). In a simulation using an anthropomorphic phantom, the average passing rate of the tested transit dose was 100% for three representative treatment sites (head & neck, chest, and pelvis), indicating that IMRT was highly constant for normal beam delivery. The average passing rate of the transit dose for 1224 IMRT fields from 21 actual patients was 97.6% ± 2.5%, with the lower rate possibly being due to inaccuracies of patient positioning or anatomic changes. An EPID-based simple constancy check may provide information about IMRT beam delivery errors during treatment.
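The flavor of the 3%/3 mm comparison can be conveyed with a deliberately simplified 1-D stand-in: a test pixel passes if some reference pixel within 3 mm has a dose within 3% of it. The real gamma index combines the dose and distance criteria into a single normalized metric over 2-D maps; this sketch, including its example dose profiles, is only illustrative.

```python
# Simplified 1-D pass/fail check inspired by the gamma (3%/3 mm) criterion.
# A test pixel passes if any reference pixel within dist_tol_mm agrees
# in dose to within dose_tol (relative). Not a true gamma computation.

def passing_rate(ref, test, pixel_mm=1.0, dose_tol=0.03, dist_tol_mm=3.0):
    window = int(dist_tol_mm / pixel_mm)
    passed = 0
    for i, d in enumerate(test):
        lo, hi = max(0, i - window), min(len(ref), i + window + 1)
        if any(abs(d - r) <= dose_tol * max(r, 1e-12) for r in ref[lo:hi]):
            passed += 1
    return 100.0 * passed / len(test)

ref = [1.00, 1.02, 1.05, 1.03, 1.00]
rate_same = passing_rate(ref, ref)                      # identical delivery
rate_off = passing_rate(ref, [d * 1.10 for d in ref])   # 10% delivery error
```

A constancy check of this kind flags sessions whose passing rate drops relative to the first-fraction reference map.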

  15. Accuracy of measurement in electrically evoked compound action potentials.

    PubMed

    Hey, Matthias; Müller-Deile, Joachim

    2015-01-15

    Electrically evoked compound action potentials (ECAP) in cochlear implant (CI) patients are characterized by the amplitude of the N1P1 complex. The measurement of evoked potentials yields a combination of the measured signal with various noise components but for ECAP procedures performed in the clinical routine, only the averaged curve is accessible. To date no detailed analysis of error dimension has been published. The aim of this study was to determine the error of the N1P1 amplitude and to determine the factors that impact the outcome. Measurements were performed on 32 CI patients with either CI24RE (CA) or CI512 implants using the Software Custom Sound EP (Cochlear). N1P1 error approximation of non-averaged raw data consisting of recorded single-sweeps was compared to methods of error approximation based on mean curves. The error approximation of the N1P1 amplitude using averaged data showed comparable results to single-point error estimation. The error of the N1P1 amplitude depends on the number of averaging steps and amplification; in contrast, the error of the N1P1 amplitude is not dependent on the stimulus intensity. Single-point error showed smaller N1P1 error and better coincidence with 1/√(N) function (N is the number of measured sweeps) compared to the known maximum-minimum criterion. Evaluation of N1P1 amplitude should be accompanied by indication of its error. The retrospective approximation of this measurement error from the averaged data available in clinically used software is possible and best done utilizing the D-trace in forward masking artefact reduction mode (no stimulation applied and recording contains only the switch-on-artefact). Copyright © 2014 Elsevier B.V. All rights reserved.
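The 1/√N behavior the study verifies is the standard-error scaling of an averaged measurement. The toy sketch below illustrates it with synthetic Gaussian sweeps; no ECAP waveform or the Custom Sound EP software is modeled.

```python
import random
import statistics

# Standard error of the sweep average: quadrupling the number of averaged
# sweeps should roughly halve the error of the mean (synthetic data).

random.seed(1)

def sem_of_average(n_sweeps, noise_sd=10.0, signal=5.0):
    sweeps = [signal + random.gauss(0, noise_sd) for _ in range(n_sweeps)]
    return statistics.stdev(sweeps) / n_sweeps ** 0.5

sem_100 = sem_of_average(100)
sem_400 = sem_of_average(400)   # 4x sweeps -> roughly half the error
```

This is the functional form (error proportional to 1/√N) against which the single-point error estimate in the study was found to agree better than the maximum-minimum criterion.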

  16. Effects of learning climate and registered nurse staffing on medication errors.

    PubMed

    Chang, Yunkyung; Mark, Barbara

    2011-01-01

    Despite increasing recognition of the significance of learning from errors, little is known about how learning climate contributes to error reduction. The purpose of this study was to investigate whether learning climate moderates the relationship between error-producing conditions and medication errors. A cross-sectional descriptive study was done using data from 279 nursing units in 146 randomly selected hospitals in the United States. Error-producing conditions included work environment factors (work dynamics and nurse mix), team factors (communication with physicians and nurses' expertise), personal factors (nurses' education and experience), patient factors (age, health status, and previous hospitalization), and medication-related support services. Poisson models with random effects were used with the nursing unit as the unit of analysis. A significant negative relationship was found between learning climate and medication errors. It also moderated the relationship between nurse mix and medication errors: When learning climate was negative, having more registered nurses was associated with fewer medication errors. However, no relationship was found between nurse mix and medication errors at either positive or average levels of learning climate. Learning climate did not moderate the relationship between work dynamics and medication errors. The way nurse mix affects medication errors depends on the level of learning climate. Nursing units with fewer registered nurses and frequent medication errors should examine their learning climate. Future research should be focused on the role of learning climate as related to the relationships between nurse mix and medication errors.

  17. Vehicle Position Estimation Based on Magnetic Markers: Enhanced Accuracy by Compensation of Time Delays.

    PubMed

    Byun, Yeun-Sub; Jeong, Rag-Gyo; Kang, Seok-Won

    2015-11-13

    The real-time recognition of absolute (or relative) position and orientation on a network of roads is a core technology for fully automated or driving-assisted vehicles. This paper presents an empirical investigation of the design, implementation, and evaluation of a self-positioning system based on a magnetic marker reference sensing method for an autonomous vehicle. Specifically, the estimation accuracy of the magnetic sensing ruler (MSR) in the up-to-date estimation of the actual position was successfully enhanced by compensating for time delays in signal processing when detecting the vertical magnetic field (VMF) in an array of signals. In this study, the signal processing scheme was developed to minimize the effects of the distortion of measured signals when estimating the relative positional information based on magnetic signals obtained using the MSR. In other words, the center point in a 2D magnetic field contour plot corresponding to the actual position of magnetic markers was estimated by tracking the errors between pre-defined reference models and measured magnetic signals. The algorithm proposed in this study was validated by experimental measurements using a test vehicle on a pilot network of roads. From the results, the positioning error was found to be less than 0.04 m on average in an operational test.

  18. Vehicle Position Estimation Based on Magnetic Markers: Enhanced Accuracy by Compensation of Time Delays

    PubMed Central

    Byun, Yeun-Sub; Jeong, Rag-Gyo; Kang, Seok-Won

    2015-01-01

    The real-time recognition of absolute (or relative) position and orientation on a network of roads is a core technology for fully automated or driving-assisted vehicles. This paper presents an empirical investigation of the design, implementation, and evaluation of a self-positioning system based on a magnetic marker reference sensing method for an autonomous vehicle. Specifically, the estimation accuracy of the magnetic sensing ruler (MSR) in the up-to-date estimation of the actual position was successfully enhanced by compensating for time delays in signal processing when detecting the vertical magnetic field (VMF) in an array of signals. In this study, the signal processing scheme was developed to minimize the effects of the distortion of measured signals when estimating the relative positional information based on magnetic signals obtained using the MSR. In other words, the center point in a 2D magnetic field contour plot corresponding to the actual position of magnetic markers was estimated by tracking the errors between pre-defined reference models and measured magnetic signals. The algorithm proposed in this study was validated by experimental measurements using a test vehicle on a pilot network of roads. From the results, the positioning error was found to be less than 0.04 m on average in an operational test. PMID:26580622
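The center-finding idea described in these two records, tracking the error between a pre-defined reference model and the measured magnetic signals, is essentially template matching. The sketch below slides an idealized field profile over a measured 1-D signal and picks the lag with the smallest squared error; the Gaussian-like profile and the data are illustrative assumptions, and the real system works on a 2-D field contour with time-delay compensation.

```python
# Locate a magnetic-marker center by minimizing squared error between a
# reference field model and the measured signal (1-D sketch).

def locate_center(measured, reference):
    best_lag, best_err = 0, float("inf")
    for lag in range(len(measured) - len(reference) + 1):
        err = sum((m - r) ** 2
                  for m, r in zip(measured[lag:lag + len(reference)], reference))
        if err < best_err:
            best_lag, best_err = lag, err
    return best_lag + len(reference) // 2   # index of the model's peak

reference = [0.1, 0.5, 1.0, 0.5, 0.1]              # idealized VMF profile
measured = [0.0, 0.0, 0.1, 0.5, 1.0, 0.5, 0.1, 0.0]
center = locate_center(measured, reference)
```

Matching against a model rather than simply taking the maximum sample makes the estimate robust to the signal distortion the abstract mentions.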

  19. Evidence of Non-Coincidence between Radio and Optical Positions of ICRF Sources.

    NASA Astrophysics Data System (ADS)

    Andrei, A. H.; da Silva, D. N.; Assafin, M.; Vieira Martins, R.

    2003-11-01

Silva Neto et al. (SNAAVM: 2002) show that when the standard ICRF Ext1 radio positions of sources (Ma et al., 1998) are compared against their optical counterpart positions (ZZHJVW: Zacharias et al., 1999; USNO A2.0: Monet et al., 1998), a systematic pattern appears that depends on the radio structure index (Fey and Charlot, 2000). The optical-to-radio offsets produce a distribution suggesting that the coincidence of the optical and radio centroids is worse for the radio-extended than for the radio-compact sources. On average, the offset between the optical and radio centroids is found to be 7.9 +/- 1.1 mas smaller for the compact than for the extended sources. Such an effect is reasonably large, and certainly much too large to be due to errors in the VLBI radio positions. On the other hand, it is too small to be attributed to the errors in the optical positions, which moreover should be independent of the radio structure. Thus, other than a true pattern of centroid non-coincidence, the remaining explanation is a chance result. This paper summarizes the several statistical tests used to discard the chance explanation.

  20. Comparing diagnostic tests on benefit-risk.

    PubMed

    Pennello, Gene; Pantoja-Galicia, Norberto; Evans, Scott

    2016-01-01

Comparing diagnostic tests on accuracy alone can be inconclusive. For example, a test may have better sensitivity than another test yet worse specificity. Comparing tests on benefit-risk may be more conclusive because clinical consequences of diagnostic error are considered. For benefit-risk evaluation, we propose diagnostic yield, the expected distribution of subjects with true positive, false positive, true negative, and false negative test results in a hypothetical population. We construct a table of diagnostic yield that includes the number of false positive subjects experiencing adverse consequences from unnecessary work-up. We then develop a decision theory for evaluating tests. The theory provides additional interpretation to quantities in the diagnostic yield table. It also indicates that the expected utility of a test relative to a perfect test is a weighted accuracy measure, the average of sensitivity and specificity weighted for prevalence and relative importance of false positive and false negative testing errors, also interpretable as the cost-benefit ratio of treating non-diseased and diseased subjects. We propose plots of diagnostic yield, weighted accuracy, and relative net benefit of tests as functions of prevalence or cost-benefit ratio. Concepts are illustrated with hypothetical screening tests for colorectal cancer with test-positive subjects being referred to colonoscopy.
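The diagnostic-yield table and a prevalence/cost-weighted accuracy can be computed directly from sensitivity, specificity, and prevalence. The weighting form below is one common choice consistent with the abstract's description, and all numbers are hypothetical, not from the paper.

```python
# Diagnostic yield (expected TP/FP/TN/FN counts) and a weighted accuracy
# in which sensitivity and specificity are weighted by prevalence and the
# relative cost of false positives vs. false negatives (sketch).

def diagnostic_yield(se, sp, prevalence, n=100_000):
    return {
        "TP": n * prevalence * se,
        "FN": n * prevalence * (1 - se),
        "TN": n * (1 - prevalence) * sp,
        "FP": n * (1 - prevalence) * (1 - sp),
    }

def weighted_accuracy(se, sp, prevalence, fp_fn_cost_ratio=1.0):
    # Weight on sensitivity grows with prevalence and with the cost of a miss.
    w = prevalence / (prevalence + (1 - prevalence) * fp_fn_cost_ratio)
    return w * se + (1 - w) * sp

y = diagnostic_yield(se=0.9, sp=0.8, prevalence=0.01)
wa = weighted_accuracy(se=0.9, sp=0.8, prevalence=0.01)
```

At 1% prevalence the false positives (here ~19,800 per 100,000 screened) dwarf the true positives, which is why the work-up burden of FP subjects features so prominently in the benefit-risk table.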

  1. SU-F-T-288: Impact of Trajectory Log Files for Clarkson-Based Independent Dose Verification of IMRT and VMAT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takahashi, R; Kamima, T; Tachibana, H

    2016-06-15

Purpose: To investigate the effect of linear-accelerator trajectory log files on Clarkson-based independent dose verification of IMRT and VMAT plans. Methods: A CT-based independent dose verification software package (Simple MU Analysis: SMU, Triangle Products, Japan) with a Clarkson-based algorithm was modified to calculate dose using the trajectory log files. Eclipse, with the three techniques of step-and-shoot (SS), sliding window (SW) and RapidArc (RA), was used as the treatment planning system (TPS). In this study, clinically approved IMRT and VMAT plans for prostate and head and neck (HN) at two institutions were retrospectively analyzed to assess the dose deviation between the DICOM-RT plan (PL) and the trajectory log file (TJ). An additional analysis was performed to evaluate the MLC error detection capability of SMU when the trajectory log files were modified by adding systematic errors (0.2, 0.5, 1.0 mm) and random errors (5, 10, 30 mm) to the actual MLC positions. Results: The dose deviations for prostate and HN at the two sites were 0.0% and 0.0% for SS, 0.1±0.0% and 0.1±0.1% for SW, and 0.6±0.5% and 0.7±0.9% for RA, respectively. The MLC error detection analysis showed that the HN IMRT plans were the most sensitive: a systematic error of 0.2 mm produced a 0.7% dose deviation on average. The MLC random errors did not affect the dose deviation. Conclusion: The use of trajectory log files, which include the actual MLC positions, gantry angles, etc., should be more effective for independent verification. The tolerance level for a secondary check using the trajectory file may be similar to that of verification using the DICOM-RT plan file. In terms of the resolution of MLC positional error detection, the secondary check could detect MLC position errors corresponding to the treatment sites and techniques. This research is partially supported by the Japan Agency for Medical Research and Development (AMED).
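A first-pass screen of trajectory-log MLC positions against the plan can separate the two error types the study injects: a systematic offset shows up as a nonzero mean signed difference, while random error shows up as spread. This sketch is illustrative only; the study's actual detection metric is the Clarkson-computed dose deviation, and the data below are made up.

```python
import statistics

# Compare planned vs. logged MLC leaf positions: the mean signed
# difference flags a systematic offset, the spread reflects random error.

def mlc_error_summary(planned_mm, logged_mm):
    diffs = [l - p for p, l in zip(planned_mm, logged_mm)]
    return statistics.mean(diffs), statistics.pstdev(diffs)

planned = [10.0, 12.0, 15.0, 11.0]
logged = [10.2, 12.2, 15.2, 11.2]      # 0.2 mm systematic shift, no spread
bias, spread = mlc_error_summary(planned, logged)
```

In the study, a 0.2 mm systematic shift of this kind was enough to produce a detectable ~0.7% dose deviation for the most sensitive (HN IMRT) plans.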

  2. Numerical calculation of listener-specific head-related transfer functions and sound localization: Microphone model and mesh discretization

    PubMed Central

    Ziegelwanger, Harald; Majdak, Piotr; Kreuzer, Wolfgang

    2015-01-01

    Head-related transfer functions (HRTFs) can be numerically calculated by applying the boundary element method on the geometry of a listener’s head and pinnae. The calculation results are defined by geometrical, numerical, and acoustical parameters like the microphone used in acoustic measurements. The scope of this study was to estimate requirements on the size and position of the microphone model and on the discretization of the boundary geometry as triangular polygon mesh for accurate sound localization. The evaluation involved the analysis of localization errors predicted by a sagittal-plane localization model, the comparison of equivalent head radii estimated by a time-of-arrival model, and the analysis of actual localization errors obtained in a sound-localization experiment. While the average edge length (AEL) of the mesh had a negligible effect on localization performance in the lateral dimension, the localization performance in sagittal planes, however, degraded for larger AELs with the geometrical error as dominant factor. A microphone position at an arbitrary position at the entrance of the ear canal, a microphone size of 1 mm radius, and a mesh with 1 mm AEL yielded a localization performance similar to or better than observed with acoustically measured HRTFs. PMID:26233020
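The average edge length (AEL) that this study varies is a simple statistic of the triangle mesh: the mean length of its unique edges. A minimal sketch on a toy two-triangle mesh is shown below; the vertex data are illustrative, and real head meshes have thousands of triangles.

```python
import itertools
import math

# Average edge length (AEL) of a triangular polygon mesh: collect the
# unique edges across all triangles and average their Euclidean lengths.

def average_edge_length(vertices, triangles):
    edges = set()
    for tri in triangles:
        for a, b in itertools.combinations(sorted(tri), 2):
            edges.add((a, b))
    return sum(math.dist(vertices[a], vertices[b]) for a, b in edges) / len(edges)

# Unit square split into two triangles (5 unique edges, one diagonal):
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
tris = [(0, 1, 2), (0, 2, 3)]
ael = average_edge_length(verts, tris)
```

The study's finding is that pushing this statistic above ~1 mm degrades sagittal-plane localization predicted from the computed HRTFs, with geometrical error as the dominant factor.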

  3. What triggers catch-up saccades during visual tracking?

    PubMed

    de Brouwer, Sophie; Yuksel, Demet; Blohm, Gunnar; Missal, Marcus; Lefèvre, Philippe

    2002-03-01

    When tracking moving visual stimuli, primates orient their visual axis by combining two kinds of eye movements, smooth pursuit and saccades, that have very different dynamics. Yet, the mechanisms that govern the decision to switch from one type of eye movement to the other are still poorly understood, even though they could bring a significant contribution to the understanding of how the CNS combines different kinds of control strategies to achieve a common motor and sensory goal. In this study, we investigated the oculomotor responses to a large range of different combinations of position error and velocity error during visual tracking of moving stimuli in humans. We found that the oculomotor system uses a prediction of the time at which the eye trajectory will cross the target, defined as the "eye crossing time" (T(XE)). The eye crossing time, which depends on both position error and velocity error, is the criterion used to switch between smooth and saccadic pursuit, i.e., to trigger catch-up saccades. On average, for T(XE) between 40 and 180 ms, no saccade is triggered and target tracking remains purely smooth. Conversely, when T(XE) becomes smaller than 40 ms or larger than 180 ms, a saccade is triggered after a short latency (around 125 ms).
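The reported decision rule can be written down compactly: compute the eye crossing time from position error and retinal slip, and trigger a catch-up saccade when it falls outside the smooth zone. The 40-180 ms thresholds come from the abstract; the sign conventions and the zero-slip handling below are assumptions of this sketch.

```python
# Eye crossing time T_XE = position error / velocity error, and the
# saccade-trigger rule reported in the abstract (sketch).

def eye_crossing_time_ms(position_error_deg, velocity_error_deg_s):
    if velocity_error_deg_s == 0:
        return float("inf")     # trajectories never cross
    return 1000.0 * position_error_deg / velocity_error_deg_s

def triggers_saccade(position_error_deg, velocity_error_deg_s,
                     lo_ms=40.0, hi_ms=180.0):
    txe = eye_crossing_time_ms(position_error_deg, velocity_error_deg_s)
    return not (lo_ms <= txe <= hi_ms)

smooth = triggers_saccade(1.0, 10.0)    # T_XE = 100 ms -> stay smooth
saccade = triggers_saccade(2.0, 5.0)    # T_XE = 400 ms -> catch-up saccade
```

Note that the criterion is predictive: it combines position and velocity errors into a single time-to-crossing quantity rather than thresholding either error alone.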

  4. Early Steps in Automated Behavior Mapping via Indoor Sensors.

    PubMed

    Arsan, Taner; Kepez, Orcun

    2017-12-16

    Behavior mapping (BM) is a spatial data collection technique in which the locational and behavioral information of a user is noted on a plan layout of the studied environment. Among many indoor positioning technologies, we chose Wi-Fi, BLE beacon and ultra-wide band (UWB) sensor technologies for their popularity and investigated their applicability in BM. We tested three technologies for error ranges and found an average error of 1.39 m for Wi-Fi in a 36 m² test area (6 m × 6 m), 0.86 m for the BLE beacon in a 37.44 m² test area (9.6 m × 3.9 m) and 0.24 m for ultra-wide band sensors in a 36 m² test area (6 m × 6 m). We simulated the applicability of these error ranges for real-time locations by using a behavioral dataset collected from an active learning classroom. We used two UWB tags simultaneously by incorporating a custom-designed ceiling system in a new 39.76 m² test area (7.35 m × 5.41 m). We considered 26 observation points and collected data for 180 s for each point (total 4680) with an average error of 0.2072 m for 23 points inside the test area. Finally, we demonstrated the use of ultra-wide band sensor technology for BM.

  5. Early Steps in Automated Behavior Mapping via Indoor Sensors

    PubMed Central

    Arsan, Taner

    2017-01-01

    Behavior mapping (BM) is a spatial data collection technique in which the locational and behavioral information of a user is noted on a plan layout of the studied environment. Among many indoor positioning technologies, we chose Wi-Fi, BLE beacon and ultra-wide band (UWB) sensor technologies for their popularity and investigated their applicability in BM. We tested three technologies for error ranges and found an average error of 1.39 m for Wi-Fi in a 36 m² test area (6 m × 6 m), 0.86 m for the BLE beacon in a 37.44 m² test area (9.6 m × 3.9 m) and 0.24 m for ultra-wide band sensors in a 36 m² test area (6 m × 6 m). We simulated the applicability of these error ranges for real-time locations by using a behavioral dataset collected from an active learning classroom. We used two UWB tags simultaneously by incorporating a custom-designed ceiling system in a new 39.76 m² test area (7.35 m × 5.41 m). We considered 26 observation points and collected data for 180 s for each point (4680 s in total) with an average error of 0.2072 m for 23 points inside the test area. Finally, we demonstrated the use of ultra-wide band sensor technology for BM. PMID:29258178

  6. Average symbol error rate for M-ary quadrature amplitude modulation in generalized atmospheric turbulence and misalignment errors

    NASA Astrophysics Data System (ADS)

    Sharma, Prabhat Kumar

    2016-11-01

    A framework is presented for the analysis of average symbol error rate (SER) for M-ary quadrature amplitude modulation in a free-space optical communication system. The standard probability density function (PDF)-based approach is extended to evaluate the average SER by representing the Q-function through its Meijer's G-function equivalent. Specifically, a converging power series expression for the average SER is derived considering the zero-boresight misalignment errors in the receiver side. The analysis presented here assumes a unified expression for the PDF of channel coefficient which incorporates the M-distributed atmospheric turbulence and Rayleigh-distributed radial displacement for the misalignment errors. The analytical results are compared with the results obtained using Q-function approximation. Further, the presented results are supported by the Monte Carlo simulations.
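
    For context, a sketch of the standard approximate SER of square M-QAM in plain AWGN, i.e. the conditional error rate that such analyses average over the combined turbulence/pointing-error PDF. The paper's Meijer G-function and power-series machinery is not reproduced here; this is only the textbook approximation.

```python
import math


def qfunc(x):
    """Gaussian Q-function, Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))


def ser_mqam_awgn(M, snr_db):
    """Standard approximation for the symbol error rate of square M-QAM
    in AWGN: SER ~ 4 * (1 - 1/sqrt(M)) * Q(sqrt(3*gamma/(M-1))),
    where gamma is the average SNR per symbol."""
    gamma = 10.0 ** (snr_db / 10.0)
    return 4.0 * (1.0 - 1.0 / math.sqrt(M)) * qfunc(math.sqrt(3.0 * gamma / (M - 1)))
```

    In the free-space optical setting of the paper, the average SER follows by integrating this conditional SER against the M-distributed turbulence and Rayleigh-displacement misalignment PDF.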

  7. SU-F-BRA-01: A Procedure for the Fast Semi-Automatic Localization of Catheters Using An Electromagnetic Tracker (EMT) for Image-Guided Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Damato, A; Viswanathan, A; Cormack, R

    2015-06-15

    Purpose: To evaluate the feasibility of brachytherapy catheter localization through use of an EMT and 3D image set. Methods: A 15-catheter phantom mimicking an interstitial implantation was built and CT-scanned. Baseline catheter reconstruction was performed manually. An EMT was used to acquire the catheter coordinates in the EMT frame of reference. N user-identified catheter tips, without catheter number associations, were used to establish registration with the CT frame of reference. Two algorithms were investigated: brute-force registration (BFR), in which all possible permutations of N identified tips with the EMT tips were evaluated; and signature-based registration (SBR), in which a distance matrix was used to generate a list of matching signatures describing possible N-point matches with the registration points. Digitization error (average of the distance between corresponding EMT and baseline dwell positions; average, standard deviation, and worst-case scenario over all possible registration-point selections) and algorithm inefficiency (maximum number of rigid registrations required to find the matching fusion for all possible selections of registration points) were calculated. Results: Digitization errors on average <2 mm were observed for N ≥5, with standard deviation <2 mm for N ≥6, and worst-case scenario error <2 mm for N ≥11. Algorithm inefficiencies were: N = 5, 32,760 (BFR) and 9900 (SBR); N = 6, 360,360 (BFR) and 21,660 (SBR); N = 11, 5.45 × 10^10 (BFR) and 12 (SBR). Conclusion: A procedure was proposed for catheter reconstruction using EMT and only requiring user identification of catheter tips without catheter localization. Digitization errors <2 mm were observed on average with 5 or more registration points, and in any scenario with 11 or more points. Inefficiency for N = 11 was 9 orders of magnitude lower for SBR than for BFR. Funding: Kaye Family Award.
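
    The idea behind signature-based matching can be illustrated with a toy sketch: pairwise distances are invariant under rigid transformations, so a per-point "signature" of sorted distances identifies correspondences without testing all permutations. Function names, the rounding tolerance, and the exact-match (noise-free) assumption are illustrative, not the paper's algorithm.

```python
import math


def distance_signature(points, idx, ndigits=6):
    """Sorted distances from points[idx] to every other point. Rigid
    transformations preserve pairwise distances, so this signature is
    the same for corresponding points in two rigidly related frames."""
    p = points[idx]
    return tuple(sorted(round(math.dist(p, q), ndigits)
                        for j, q in enumerate(points) if j != idx))


def match_by_signature(points_a, points_b):
    """Pair each point in frame A with the point in frame B that has an
    identical distance signature (assumes noise-free, unique signatures)."""
    sig_b = {distance_signature(points_b, j): j for j in range(len(points_b))}
    return {i: sig_b[distance_signature(points_a, i)]
            for i in range(len(points_a))}
```

    With noisy measurements, a real implementation would match signatures within a tolerance rather than exactly, which is where the paper's list of candidate N-point matches comes in.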

  8. Quantification of errors induced by temporal resolution on Lagrangian particles in an eddy-resolving model

    NASA Astrophysics Data System (ADS)

    Qin, Xuerong; van Sebille, Erik; Sen Gupta, Alexander

    2014-04-01

    Lagrangian particle tracking within ocean models is an important tool for the examination of ocean circulation, ventilation timescales and connectivity and is increasingly being used to understand ocean biogeochemistry. Lagrangian trajectories are obtained by advecting particles within velocity fields derived from hydrodynamic ocean models. For studies of ocean flows on scales ranging from mesoscale up to basin scales, the temporal resolution of the velocity fields should ideally not be more than a few days to capture the high frequency variability that is inherent in mesoscale features. However, in reality, the model output is often archived at much lower temporal resolutions. Here, we quantify the differences in the Lagrangian particle trajectories embedded in velocity fields of varying temporal resolution. Particles are advected in 3-day to 30-day averaged fields in a high-resolution global ocean circulation model. We also investigate whether adding lateral diffusion to the particle movement can compensate for the reduced temporal resolution. Trajectory errors reveal the expected degradation of accuracy in the trajectory positions when decreasing the temporal resolution of the velocity field. Divergence timescales associated with averaging velocity fields up to 30 days are faster than the intrinsic dispersion of the velocity fields but slower than the dispersion caused by the interannual variability of the velocity fields. In experiments focusing on the connectivity along major currents, including western boundary currents, the volume transport carried between two strategically placed sections tends to increase with increased temporal averaging. Simultaneously, the average travel times tend to decrease. Based on these two bulk measured diagnostics, Lagrangian experiments that use temporal averaging of up to nine days show no significant degradation in the flow characteristics for a set of six currents investigated in more detail. The addition of random-walk-style diffusion does not mitigate the errors introduced by temporal averaging for large-scale open ocean Lagrangian simulations.
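
    A one-dimensional toy version of the experiment, assuming a made-up velocity field: advect a particle through a field with fast temporal variability and through its time-average, then measure the trajectory separation. Forward-Euler stepping and all numbers are illustrative.

```python
import math


def advect(x0, velocity, t_end, dt=0.1):
    """Forward-Euler advection of a particle through a 1-D,
    time-dependent velocity field v(x, t) (a toy stand-in for the
    model's archived 3-D velocity output)."""
    x = x0
    n = round(t_end / dt)
    for k in range(n):
        x += velocity(x, k * dt) * dt
    return x


# Field with "sub-monthly" variability vs. its long-window time-mean.
v_full = lambda x, t: 1.0 + 0.5 * math.sin(math.pi * t / 2.0)
v_mean = lambda x, t: 1.0

# Trajectory error after 10 time units of advection.
trajectory_error = abs(advect(0.0, v_full, 10.0) - advect(0.0, v_mean, 10.0))
```

    The separation grows with the averaging window, which is the degradation the paper quantifies; the abstract's finding is that adding random-walk diffusion to the averaged-field trajectories does not recover the full-field behaviour.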

  9. A Voluntary Breath-Hold Treatment Technique for the Left Breast With Unfavorable Cardiac Anatomy Using Surface Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gierga, David P., E-mail: dgierga@partners.org; Harvard Medical School, Boston, Massachusetts; Turcotte, Julie C.

    2012-12-01

    Purpose: Breath-hold (BH) treatments can be used to reduce cardiac dose for patients with left-sided breast cancer and unfavorable cardiac anatomy. A surface imaging technique was developed for accurate patient setup and reproducible real-time BH positioning. Methods and Materials: Three-dimensional surface images were obtained for 20 patients. Surface imaging was used to correct the daily setup for each patient. Initial setup data were recorded for 443 fractions and were analyzed to assess random and systematic errors. Real-time monitoring was used to verify surface placement during BH. The radiation beam was not turned on if the BH position difference was greater than 5 mm. Real-time surface data were analyzed for 2398 BHs and 363 treatment fractions. The mean and maximum differences were calculated. The percentage of BHs greater than tolerance was calculated. Results: The mean shifts for initial patient setup were 2.0 mm, 1.2 mm, and 0.3 mm in the vertical, longitudinal, and lateral directions, respectively. The mean 3-dimensional vector shift was 7.8 mm. Random and systematic errors were less than 4 mm. Real-time surface monitoring data indicated that 22% of the BHs were outside the 5-mm tolerance (range, 7%-41%), and there was a correlation with breast volume. The mean difference between the treated and reference BH positions was 2 mm in each direction. For out-of-tolerance BHs, the average difference in the BH position was 6.3 mm, and the average maximum difference was 8.8 mm. Conclusions: Daily real-time surface imaging ensures accurate and reproducible positioning for BH treatment of left-sided breast cancer patients with unfavorable cardiac anatomy.

  10. Automatic segmentation of stereoelectroencephalography (SEEG) electrodes post-implantation considering bending.

    PubMed

    Granados, Alejandro; Vakharia, Vejay; Rodionov, Roman; Schweiger, Martin; Vos, Sjoerd B; O'Keeffe, Aidan G; Li, Kuo; Wu, Chengyuan; Miserocchi, Anna; McEvoy, Andrew W; Clarkson, Matthew J; Duncan, John S; Sparks, Rachel; Ourselin, Sébastien

    2018-06-01

    The accurate and automatic localisation of SEEG electrodes is crucial for determining the location of epileptic seizure onset. We propose an algorithm for the automatic segmentation of electrode bolts and contacts that accounts for electrode bending in relation to regional brain anatomy. Co-registered post-implantation CT, pre-implantation MRI, and brain parcellation images are used to create regions of interest to automatically segment bolts and contacts. Contact search strategy is based on the direction of the bolt with distance and angle constraints, in addition to post-processing steps that assign remaining contacts and predict contact position. We measured the accuracy of contact position, bolt angle, and anatomical region at the tip of the electrode in 23 post-SEEG cases comprising two different surgical approaches when placing a guiding stylet close to and far from target point. Local and global bending are computed when modelling electrodes as elastic rods. Our approach executed on average in 36.17 s with a sensitivity of 98.81% and a positive predictive value (PPV) of 95.01%. Compared to manual segmentation, the position of contacts had a mean absolute error of 0.38 mm and the mean bolt angle difference of [Formula: see text] resulted in a mean displacement error of 0.68 mm at the tip of the electrode. Anatomical regions at the tip of the electrode were in strong concordance with those selected manually by neurosurgeons, [Formula: see text], with average distance between regions of 0.82 mm when in disagreement. Our approach performed equally in two surgical approaches regardless of the amount of electrode bending. We present a method robust to electrode bending that can accurately segment contact positions and bolt orientation. The techniques presented in this paper will allow further characterisation of bending within different brain regions.

  11. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-01-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem of the best model receiving nearly 100% of the model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek obtained from the iterative two-stage method also improved predictive performance of the individual models and model averaging in both synthetic and experimental studies.
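
    The winner-take-all behaviour of information-criterion weights is easy to reproduce. A sketch of the standard Akaike-type weight formula (the criterion values below are made up for illustration):

```python
import math


def model_averaging_weights(ic_values):
    """Akaike-type model averaging weights from information-criterion
    values (AIC/AICc/BIC/KIC): w_k is proportional to exp(-delta_k / 2),
    where delta_k = IC_k - min(IC). Because the weights depend
    exponentially on criterion differences, modest differences already
    produce a near-100% winner."""
    ic_min = min(ic_values)
    raw = [math.exp(-(ic - ic_min) / 2.0) for ic in ic_values]
    total = sum(raw)
    return [r / total for r in raw]
```

    With criterion values of 100 and 120, the first model already receives more than 99.99% of the weight, illustrating why a mis-specified likelihood (e.g. one ignoring model-error correlation) inflates the best model's weight.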

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ono, Tomohiro; Miyabe, Yuki, E-mail: miyabe@kuhp.kyoto-u.ac.jp; Yamada, Masahiro

    Purpose: The Vero4DRT system has the capability for dynamic tumor-tracking (DTT) stereotactic irradiation using a unique gimbaled x-ray head. The purposes of this study were to develop DTT conformal arc irradiation and to estimate its geometric and dosimetric accuracy. Methods: The gimbaled x-ray head, supported on an O-ring gantry, was moved in the pan and tilt directions during O-ring gantry rotation. To evaluate the mechanical accuracy, the gimbaled x-ray head was moved during gantry rotation according to input command signals without target tracking, and a machine log analysis was performed. The difference between a command and a measured position was calculated as the mechanical error. To evaluate beam-positioning accuracy, a moving phantom, which had a steel ball fixed at the center, was driven based on a sinusoidal wave (amplitude [A]: 20 mm, time period [T]: 4 s), a patient breathing motion with a regular pattern (A: 16 mm, average T: 4.5 s), and an irregular pattern (A: 7.2–23.0 mm, T: 2.3–10.0 s), and irradiated with DTT during gantry rotation. The beam-positioning error was evaluated as the difference between the centroid position of the irradiated field and the steel ball on images from an electronic portal imaging device. For dosimetric accuracy, dose distributions in static and moving targets were evaluated with DTT conformal arc irradiation. Results: The root mean squares (RMSs) of the mechanical error were up to 0.11 mm for pan motion and up to 0.14 mm for tilt motion. The RMSs of the beam-positioning error were within 0.23 mm for each pattern. The dose distribution in a moving phantom with tracking arc irradiation was in good agreement with that in static conditions. Conclusions: The gimbal positional accuracy was not degraded by gantry motion. As in the case of a fixed port, the Vero4DRT system showed adequate accuracy of DTT conformal arc irradiation.

  13. Spine stereotactic body radiotherapy utilizing cone-beam CT image-guidance with a robotic couch: intrafraction motion analysis accounting for all six degrees of freedom.

    PubMed

    Hyde, Derek; Lochray, Fiona; Korol, Renee; Davidson, Melanie; Wong, C Shun; Ma, Lijun; Sahgal, Arjun

    2012-03-01

    To evaluate the residual setup error and intrafraction motion following kilovoltage cone-beam CT (CBCT) image guidance, for immobilized spine stereotactic body radiotherapy (SBRT) patients, with positioning corrected for in all six degrees of freedom. Analysis is based on 42 consecutive patients (48 thoracic and/or lumbar metastases) treated with a total of 106 fractions and 307 image registrations. Following initial setup, a CBCT was acquired for patient alignment and a pretreatment CBCT taken to verify shifts and determine the residual setup error, followed by a midtreatment and posttreatment CBCT image. For 13 single-fraction SBRT patients, two midtreatment CBCT images were obtained. Initially, a 1.5-mm and 1° tolerance was used to reposition the patient following couch shifts, which was subsequently reduced to 1 mm and 1° after the first 10 patients. Small positioning errors after the initial CBCT setup were observed, with 90% occurring within 1 mm and 97% within 1°. In analyzing the impact of the time interval for verification imaging (10 ± 3 min) and subsequent image acquisitions (17 ± 4 min), the residual setup error was not significantly different (p > 0.05). A significant difference (p = 0.04) in the average three-dimensional intrafraction positional deviations favoring a more strict tolerance in translation (1 mm vs. 1.5 mm) was observed. The absolute intrafraction motion averaged over all patients and all directions along the x, y, and z axes (±SD) was 0.7 ± 0.5 mm and 0.5 ± 0.4 mm for the 1.5 mm and 1 mm tolerance, respectively. Based on a 1-mm and 1° correction threshold, the target was localized to within 1.2 mm and 0.9° with 95% confidence. Near-rigid body immobilization, intrafraction CBCT imaging approximately every 15-20 min, and strict repositioning thresholds in six degrees of freedom yield minimal intrafraction motion, allowing for safe spine SBRT delivery. Copyright © 2012 Elsevier Inc. All rights reserved.

  14. Bathymetric surveying with GPS and heave, pitch, and roll compensation

    USGS Publications Warehouse

    Work, P.A.; Hansen, M.; Rogers, W.E.

    1998-01-01

    Field and laboratory tests of a shipborne hydrographic survey system were conducted. The system consists of two 12-channel GPS receivers (one on-board, one fixed on shore), a digital acoustic fathometer, and a digital heave-pitch-roll (HPR) recorder. Laboratory tests of the HPR recorder and fathometer are documented. Results of field tests of the isolated GPS system and then of the entire suite of instruments are presented. A method for data reduction is developed to account for vertical errors introduced by roll and pitch of the survey vessel, which can be substantial (decimeters). The GPS vertical position data are found to be reliable to 2-3 cm and the fathometer to 5 cm in the laboratory. The field test of the complete system in shallow water (<2 m) indicates absolute vertical accuracy of 10-20 cm. Much of this error is attributed to the fathometer. Careful surveying and equipment setup can minimize systematic error and yield much smaller average errors.
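
    A minimal sketch of the kind of tilt correction involved, assuming a hull-mounted vertical-beam fathometer: when the vessel rolls or pitches, the tilted beam measures a slant range that overestimates true depth, and heave is removed additively. This is illustrative geometry only, not the paper's full data-reduction method.

```python
import math


def corrected_depth(slant_depth, roll_deg, pitch_deg, heave=0.0):
    """First-order heave/pitch/roll reduction of a fathometer sounding:
    project the slant range back to the vertical via cos(roll)*cos(pitch)
    and subtract the instantaneous heave (positive up)."""
    roll = math.radians(roll_deg)
    pitch = math.radians(pitch_deg)
    return slant_depth * math.cos(roll) * math.cos(pitch) - heave
```

    At a 10° roll, a 10 m sounding shrinks by about 15 cm, i.e. the decimeter-scale vertical errors the abstract attributes to vessel motion.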

  15. Automated contour detection in X-ray left ventricular angiograms using multiview active appearance models and dynamic programming.

    PubMed

    Oost, Elco; Koning, Gerhard; Sonka, Milan; Oemrawsingh, Pranobe V; Reiber, Johan H C; Lelieveldt, Boudewijn P F

    2006-09-01

    This paper describes a new approach to the automated segmentation of X-ray left ventricular (LV) angiograms, based on active appearance models (AAMs) and dynamic programming. A coupling of shape and texture information between the end-diastolic (ED) and end-systolic (ES) frame was achieved by constructing a multiview AAM. Over-constraining of the model was compensated for by employing dynamic programming, integrating both intensity and motion features in the cost function. Two applications are compared: a semi-automatic method with manual model initialization, and a fully automatic algorithm. The first proved to be highly robust and accurate, demonstrating high clinical relevance. Based on experiments involving 70 patient data sets, the algorithm's success rate was 100% for ED and 99% for ES, with average unsigned border positioning errors of 0.68 mm for ED and 1.45 mm for ES. Calculated volumes were accurate and unbiased. The fully automatic algorithm, with intrinsically less user interaction was less robust, but showed a high potential, mostly due to a controlled gradient descent in updating the model parameters. The success rate of the fully automatic method was 91% for ED and 83% for ES, with average unsigned border positioning errors of 0.79 mm for ED and 1.55 mm for ES.

  16. Performance Analysis of Classification Methods for Indoor Localization in Vlc Networks

    NASA Astrophysics Data System (ADS)

    Sánchez-Rodríguez, D.; Alonso-González, I.; Sánchez-Medina, J.; Ley-Bosch, C.; Díaz-Vilariño, L.

    2017-09-01

    Indoor localization has gained considerable attention over the past decade because of the emergence of numerous location-aware services. Research works have been proposed on solving this problem by using wireless networks. Nevertheless, there is still much room for improvement in the quality of the proposed classification models. In recent years, the emergence of Visible Light Communication (VLC) has brought a brand new approach to high-quality indoor positioning. Among its advantages, this new technology is immune to electromagnetic interference and has a smaller variance of received signal power compared to RF-based technologies. In this paper, a performance analysis of seventeen machine learning classifiers for indoor localization in VLC networks is carried out. The analysis is accomplished in terms of accuracy, average distance error, computational cost, training size, precision and recall measurements. Results show that most of the classifiers achieve an accuracy above 90%. The best tested classifier yielded 99.0% accuracy, with an average distance error of 0.3 centimetres.
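
    As a deliberately minimal stand-in for the classifiers compared in the paper, a 1-nearest-neighbour fingerprinting localizer: the reference location whose stored received-signal-strength vector is closest to the observed one wins. The fingerprint map, RSS units, and function name are invented for illustration.

```python
import math


def nn_localize(fingerprints, observed_rss):
    """1-NN fingerprinting: fingerprints maps location -> reference RSS
    vector; return the location whose vector is closest (Euclidean)
    to the observed RSS vector."""
    return min(fingerprints,
               key=lambda loc: math.dist(fingerprints[loc], observed_rss))
```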

  17. The Charles F. Prentice Award Lecture 2005: optics of the human eye: progress and problems.

    PubMed

    Charman, W Neil

    2006-06-01

    The history of measurements of ocular aberration is briefly reviewed and recent work using much-improved aberrometers and large samples of eyes is summarized. When on-axis, higher-order, monochromatic aberrations are averaged, undercorrected, positive, fourth-order spherical aberration dominates; other Zernike wavefront aberration coefficients have average values near zero. Individually, however, many eyes show substantial amounts of third-order and other fourth-order aberrations; the value of these varies idiosyncratically about zero. Most normal eyes show only small amounts of axial monochromatic aberration for photopic pupils up to around 3 mm; the limits to retinal image quality are then usually set by diffraction, uncorrected or imperfectly corrected spherocylindrical refractive error, accommodation error, and chromatic aberration. Longitudinal chromatic aberration varies very little across the population. With larger mesopic and scotopic pupils, monochromatic aberration plays a more important optical role, but overall visual performance is increasingly dominated by neural factors. Some remaining problems in measuring and modeling the eye's optical performance are discussed.
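
    One quantitative thread in the lecture is summarizing ocular aberrations by Zernike coefficients. Assuming orthonormal (not merely orthogonal) normalization of the polynomials over the pupil, the total RMS wavefront error follows directly from the coefficients; a sketch:

```python
import math


def rms_wavefront_error(zernike_coeffs):
    """Total RMS wavefront error from orthonormal Zernike coefficients:
    with an orthonormal basis over the pupil, the wavefront variance is
    simply the sum of squared coefficients (piston excluded)."""
    return math.sqrt(sum(c * c for c in zernike_coeffs))
```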

  18. SU-E-T-132: Dosimetric Impact of Positioning Errors in Hypo-Fractionated Cranial Radiation Therapy Using Frameless Stereotactic BrainLAB System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keeling, V; Jin, H; Ali, I

    2014-06-01

    Purpose: To determine dosimetric impact of positioning errors in the stereotactic hypo-fractionated treatment of intracranial lesions using 3Dtransaltional and 3D-rotational corrections (6D) frameless BrainLAB ExacTrac X-Ray system. Methods: 20 cranial lesions, treated in 3 or 5 fractions, were selected. An infrared (IR) optical positioning system was employed for initial patient setup followed by stereoscopic kV X-ray radiographs for position verification. 6D-translational and rotational shifts were determined to correct patient position. If these shifts were above tolerance (0.7 mm translational and 1° rotational), corrections were applied and another set of X-rays was taken to verify patient position. Dosimetric impact (D95, Dmin,more » Dmax, and Dmean of planning target volume (PTV) compared to original plans) of positioning errors for initial IR setup (XC: Xray Correction) and post-correction (XV: X-ray Verification) was determined in a treatment planning system using a method proposed by Yue et al. (Med. Phys. 33, 21-31 (2006)) with 3D-translational errors only and 6D-translational and rotational errors. Results: Absolute mean translational errors (±standard deviation) for total 92 fractions (XC/XV) were 0.79±0.88/0.19±0.15 mm (lateral), 1.66±1.71/0.18 ±0.16 mm (longitudinal), 1.95±1.18/0.15±0.14 mm (vertical) and rotational errors were 0.61±0.47/0.17±0.15° (pitch), 0.55±0.49/0.16±0.24° (roll), and 0.68±0.73/0.16±0.15° (yaw). The average changes (loss of coverage) in D95, Dmin, Dmax, and Dmean were 4.5±7.3/0.1±0.2%, 17.8±22.5/1.1±2.5%, 0.4±1.4/0.1±0.3%, and 0.9±1.7/0.0±0.1% using 6Dshifts and 3.1±5.5/0.0±0.1%, 14.2±20.3/0.8±1.7%, 0.0±1.2/0.1±0.3%, and 0.7±1.4/0.0±0.1% using 3D-translational shifts only. The setup corrections (XC-XV) improved the PTV coverage by 4.4±7.3% (D95) and 16.7±23.5% (Dmin) using 6D adjustment. Strong correlations were observed between translation errors and deviations in dose coverage for XC. 
Conclusion: The initial BrainLAB IR system based on rigidity of the mask-frame setup is not sufficient for accurate stereotactic positioning; however, with X-ray imageguidance sub-millimeter accuracy is achieved with negligible deviations in dose coverage. The angular corrections (mean angle summation=1.84°) are important and cause considerable deviations in dose coverage.« less

  19. Error analysis of filtering operations in pixel-duplicated images of diabetic retinopathy

    NASA Astrophysics Data System (ADS)

    Mehrubeoglu, Mehrube; McLauchlan, Lifford

    2010-08-01

    In this paper, diabetic retinopathy is chosen as a sample target image to demonstrate the effectiveness of image enlargement through pixel duplication in identifying regions of interest. Pixel duplication is presented as a simpler alternative to data interpolation techniques for detecting small structures in the images. A comparative analysis is performed on different image processing schemes applied to both original and pixel-duplicated images. Structures of interest are detected and classification parameters optimized for minimum false-positive detection in the original and enlarged retinal pictures. The error analysis demonstrates the advantages as well as the shortcomings of pixel duplication in image enhancement when spatial averaging operations (smoothing filters) are also applied.
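
    Pixel duplication itself is straightforward; a sketch of integer-factor enlargement on a 2-D array (the list-of-lists image representation is an assumption). Unlike interpolation, no new intensity values are invented, which is why smoothing filters interact with it differently.

```python
def pixel_duplicate(image, factor=2):
    """Enlarge a 2-D image by integer pixel duplication (nearest
    neighbour): each pixel becomes a factor-by-factor block."""
    return [[row[j // factor] for j in range(len(row) * factor)]
            for row in image for _ in range(factor)]
```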

  20. Cost-effectiveness of the stream-gaging program in Missouri

    USGS Publications Warehouse

    Waite, L.A.

    1987-01-01

    This report documents the results of an evaluation of the cost effectiveness of the 1986 stream-gaging program in Missouri. Alternative methods of developing streamflow information and cost-effective resource allocation were used to evaluate the Missouri program. Alternative methods were considered statewide, but the cost-effective resource allocation study was restricted to the area covered by the Rolla field headquarters. The average standard error of estimate for records of instantaneous discharge was 17 percent; assuming the 1986 budget and operating schedule, it was shown that this overall degree of accuracy could be improved to 16 percent by altering the 1986 schedule of station visitations. A minimum budget of $203,870, with a corresponding average standard error of estimate of 17 percent, is required to operate the 1986 program for the Rolla field headquarters; a budget of less than this would not permit proper service and maintenance of the stations or adequate definition of stage-discharge relations. The maximum budget analyzed was $418,870, which resulted in an average standard error of estimate of 14 percent. Improved instrumentation can have a positive effect on streamflow uncertainties by decreasing lost records. An earlier study of data uses found that data uses were sufficient to justify continued operation of all stations. One of the stations investigated, Current River at Doniphan (07068000), was suitable for the application of alternative methods for simulating discharge records. However, the station was continued because of data use requirements. (Author's abstract)

  1. Online pretreatment verification of high-dose rate brachytherapy using an imaging panel

    NASA Astrophysics Data System (ADS)

    Fonseca, Gabriel P.; Podesta, Mark; Bellezzo, Murillo; Van den Bosch, Michiel R.; Lutgens, Ludy; Vanneste, Ben G. L.; Voncken, Robert; Van Limbergen, Evert J.; Reniers, Brigitte; Verhaegen, Frank

    2017-07-01

    Brachytherapy is employed to treat a wide variety of cancers. However, an accurate treatment verification method is currently not available. This study describes a pre-treatment verification system that uses an imaging panel (IP) to verify important aspects of the treatment plan. A detailed modelling of the IP was only possible with an extensive calibration performed using a robotic arm. Irradiations were performed with a high dose rate (HDR) 192Ir source within a water phantom. An empirical fit was applied to measure the distance between the source and the detector so 3D Cartesian coordinates of the dwell positions can be obtained using a single panel. The IP acquires images at 7.14 fps to verify the dwell times, dwell positions and air kerma strength (Sk). A gynecological applicator was used to create a treatment plan that was registered with a CT image of the water phantom used during the experiments for verification purposes. Errors (shifts, exchanged connections and wrong dwell times) were simulated to verify the proposed verification system. Cartesian source positions (panel measurement plane) have a standard deviation of about 0.02 cm. The measured distance between the source and the panel (z-coordinate) has a standard deviation of up to 0.16 cm and a maximum absolute error of ≈0.6 cm if the signal is close to the sensitive limit of the panel. The average response of the panel is very linear with Sk. Therefore, Sk measurements can be performed with relatively small errors. The measured dwell times show a maximum error of 0.2 s, which is consistent with the acquisition rate of the panel. All simulated errors were clearly identified by the proposed system. The use of IPs is not common in brachytherapy; however, it provides considerable advantages. It was demonstrated that the IP can accurately measure Sk, dwell times and dwell positions.

  2. SU-E-J-243: Possibility of Exposure Dose Reduction of Cone-Beam Computed Tomography in An Image Guided Patient Positioning System by Using Various Noise Suppression Filters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamezawa, H; Fujimoto General Hospital, Miyakonojo, Miyazaki; Arimura, H

    Purpose: To investigate the possibility of exposure dose reduction of the cone-beam computed tomography (CBCT) in an image guided patient positioning system by using 6 noise suppression filters. Methods: First, reference-dose (RD) and low-dose (LD) CBCT (X-ray volume imaging system, Elekta Co.) images were acquired with a reference dose of 86.2 mGy (weighted CT dose index: CTDIw) and various low doses of 1.4 to 43.1 mGy, respectively. Second, an automated rigid registration for three axes was performed for estimating setup errors between a planning CT image and the LD-CBCT images, which were processed by 6 noise suppression filters, i.e., averaging filter (AF), median filter (MF), Gaussian filter (GF), bilateral filter (BF), edge preserving smoothing filter (EPF) and adaptive partial median filter (AMF). Third, residual errors representing the patient positioning accuracy were calculated as the Euclidean distance between the setup error vectors estimated using the LD-CBCT image and the RD-CBCT image. Finally, the relationships between the residual error and CTDIw were obtained for the 6 noise suppression filters, and then the CTDIw for LD-CBCT images processed by the noise suppression filters were measured at the same residual error as obtained with the RD-CBCT. This approach was applied to an anthropomorphic pelvic phantom and two cancer patients. Results: For the phantom, the exposure dose could be reduced from 61% (GF) to 78% (AMF) by applying the noise suppression filters to the CBCT images. The exposure dose in a prostate cancer case could be reduced from 8% (AF) to 61% (AMF), and the exposure dose in a lung cancer case could be reduced from 9% (AF) to 37% (AMF). Conclusion: Using noise suppression filters, particularly an adaptive partial median filter, could make it feasible to decrease the additional exposure dose to patients in image guided patient positioning systems.
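
    Of the six filters compared, the median-type filters are the simplest to sketch; below is a generic k×k median filter on a 2-D list image, with truncated windows at the borders. This is an illustrative baseline, not the paper's adaptive partial median filter.

```python
import statistics


def median_filter(image, k=3):
    """k x k median filter: replace each pixel with the median of its
    neighbourhood; windows are truncated at the image borders. Median
    filtering suppresses impulse noise while preserving edges better
    than plain averaging."""
    h, w = len(image), len(image[0])
    r = k // 2
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            window = [image[a][b]
                      for a in range(max(0, i - r), min(h, i + r + 1))
                      for b in range(max(0, j - r), min(w, j + r + 1))]
            out[i][j] = statistics.median(window)
    return out
```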

  3. Online pretreatment verification of high-dose rate brachytherapy using an imaging panel.

    PubMed

    Fonseca, Gabriel P; Podesta, Mark; Bellezzo, Murillo; Van den Bosch, Michiel R; Lutgens, Ludy; Vanneste, Ben G L; Voncken, Robert; Van Limbergen, Evert J; Reniers, Brigitte; Verhaegen, Frank

    2017-07-07

    Brachytherapy is employed to treat a wide variety of cancers. However, an accurate treatment verification method is currently not available. This study describes a pre-treatment verification system that uses an imaging panel (IP) to verify important aspects of the treatment plan. A detailed modelling of the IP was only possible with an extensive calibration performed using a robotic arm. Irradiations were performed with a high dose rate (HDR) 192 Ir source within a water phantom. An empirical fit was applied to measure the distance between the source and the detector so that 3D Cartesian coordinates of the dwell positions could be obtained using a single panel. The IP acquires images at 7.14 fps to verify the dwell times, dwell positions and air kerma strength (Sk). A gynecological applicator was used to create a treatment plan that was registered with a CT image of the water phantom used during the experiments for verification purposes. Errors (shifts, exchanged connections and wrong dwell times) were simulated to verify the proposed verification system. Cartesian source positions (panel measurement plane) have a standard deviation of about 0.02 cm. The measured distance between the source and the panel (z-coordinate) has a standard deviation of up to 0.16 cm and a maximum absolute error of  ≈0.6 cm if the signal is close to the sensitivity limit of the panel. The average response of the panel is very linear in Sk. Therefore, Sk measurements can be performed with relatively small errors. The measured dwell times show a maximum error of 0.2 s, which is consistent with the acquisition rate of the panel. All simulated errors were clearly identified by the proposed system. The use of IPs is not common in brachytherapy; however, it provides considerable advantages. It was demonstrated that the IP can accurately measure Sk, dwell times and dwell positions.
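    Two of the checks described can be sketched minimally as follows; the 0.2 cm position tolerance is illustrative, not a value from the paper, and the surrogate boolean check stands in for the full verification logic:

```python
import math

FPS = 7.14  # panel acquisition rate reported in the abstract

def dwell_time_from_frames(n_frames, fps=FPS):
    # dwell time estimated by counting frames in which the source occupies
    # one position; quantization error is about 1/fps, i.e. ~0.14 s
    return n_frames / fps

def dwell_position_ok(measured_cm, planned_cm, tol_cm=0.2):
    # flag shift / exchanged-connection errors when the measured dwell
    # position deviates from the plan by more than a tolerance (illustrative)
    return math.dist(measured_cm, planned_cm) <= tol_cm
```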

  4. SU-E-T-377: Inaccurate Positioning Might Introduce Significant MapCheck Calibration Error in Flatten Filter Free Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, S; Chao, C; Columbia University, NY, NY

    2014-06-01

    Purpose: This study investigates the calibration error of detector sensitivity for MapCheck due to inaccurate positioning of the device, which is not taken into account by the current commercial iterative calibration algorithm. We hypothesize that the calibration is more vulnerable to positioning error for flattening filter free (FFF) beams than for conventional flattened beams. Methods: MapCheck2 was calibrated with 10 MV conventional and FFF beams, with careful alignment and with 1 cm positioning error during calibration, respectively. Open fields of 37 cm x 37 cm were delivered to gauge the impact of the resultant calibration errors. The local calibration error was modeled as a detector-independent multiplication factor, with which the propagated error was estimated for positioning errors from 1 mm to 1 cm. The calibrated sensitivities, without positioning error, were compared between the conventional and FFF beams to evaluate the dependence on the beam type. Results: The 1 cm positioning error leads to 0.39% and 5.24% local calibration error in the conventional and FFF beams respectively. After propagating to the edges of MapCheck, the calibration errors become 6.5% and 57.7%, respectively. The propagation error increases almost linearly with respect to the positioning error. The difference of sensitivities between the conventional and FFF beams was small (0.11 ± 0.49%). Conclusion: The results demonstrate that the positioning error is not handled by the current commercial calibration algorithm of MapCheck. Particularly, the calibration errors for the FFF beams are ~9 times greater than those for the conventional beams with identical positioning error, and a small 1 mm positioning error might lead to up to 8% calibration error.
Since the sensitivities are only slightly dependent on the beam type and the conventional beam is less affected by the positioning error, it is advisable to cross-check the sensitivities between the conventional and FFF beams to detect potential calibration errors due to inaccurate positioning. This work was partially supported by DOD Grant No. W81XWH1010862.

  5. Performance measurement results for a 220 Mbps QPPM optical communication receiver with an EG/G Slik APD

    NASA Technical Reports Server (NTRS)

    Davidson, Frederic M.; Sun, Xiaoli

    1992-01-01

    The performance of a 220 Mbps quaternary pulse position modulation (QPPM) optical communication receiver with a 'Slik' silicon avalanche photodiode (APD) and a wideband transimpedance preamplifier in a small hybrid circuit module was measured. The receiver performance had previously been poor due to the lack of a wideband and low noise transimpedance preamplifier. With the new APD preamplifier module, the receiver achieved a bit error rate (BER) of 10^-6 at an average received input optical signal power of 4.2 nW, which corresponds to an average of 80 received (incident) signal photons per information bit.
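    The quoted figure of roughly 80 photons per information bit follows from the received power and bit rate once an operating wavelength is assumed; the 850 nm used below is a typical value for silicon APD receivers and is an assumption, not a value stated in the abstract:

```python
H = 6.626e-34  # Planck constant, J*s
C = 2.998e8    # speed of light, m/s

def photons_per_bit(power_w, bit_rate, wavelength_m=850e-9):
    # average received photons per information bit = P / (Rb * h*nu);
    # the wavelength is an assumed value, not given in the abstract
    photon_energy = H * C / wavelength_m
    return power_w / (bit_rate * photon_energy)

n = photons_per_bit(4.2e-9, 220e6)
print(round(n))  # close to the ~80 photons/bit reported, at the assumed wavelength
```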

  6. The random coding bound is tight for the average code.

    NASA Technical Reports Server (NTRS)

    Gallager, R. G.

    1973-01-01

    The random coding bound of information theory provides a well-known upper bound to the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upperbounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upperbounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.
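    The bound discussed here has the standard Gallager form; in the usual notation (block length N, rate R, ensemble-average error probability):

```latex
\bar{P}_e \;\le\; \exp\{-N\,E_r(R)\}, \qquad
E_r(R) \;=\; \max_{0 \le \rho \le 1}\,\bigl[E_0(\rho) - \rho R\bigr]
```

where E_0(rho) is the Gallager function of the channel and input distribution. The exponent E_r(R) is known to be correct only above the critical rate (where the maximizing rho is interior); the abstract's result locates the weakness below the second critical rate in the ensemble itself rather than in the union-bounding step.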

  7. Maintaining tumor targeting accuracy in real-time motion compensation systems for respiration-induced tumor motion

    PubMed Central

    Malinowski, Kathleen; McAvoy, Thomas J.; George, Rohini; Dieterich, Sonja; D’Souza, Warren D.

    2013-01-01

    Purpose: To determine how best to time respiratory surrogate-based tumor motion model updates by comparing a novel technique based on external measurements alone to three direct measurement methods. Methods: Concurrently measured tumor and respiratory surrogate positions from 166 treatment fractions for lung or pancreas lesions were analyzed. Partial-least-squares regression models of tumor position from marker motion were created from the first six measurements in each dataset. Successive tumor localizations were obtained at a rate of once per minute on average. Model updates were timed according to four methods: never, respiratory surrogate-based (when metrics based on respiratory surrogate measurements exceeded confidence limits), error-based (when localization error ≥3 mm), and always (approximately once per minute). Results: Radial tumor displacement prediction errors (mean ± standard deviation) for the four schemes described above were 2.4 ± 1.2, 1.9 ± 0.9, 1.9 ± 0.8, and 1.7 ± 0.8 mm, respectively. The never-update error was significantly larger than the errors of the other methods. Mean update counts over 20 min were 0, 4, 9, and 24, respectively. Conclusions: The same improvement in tumor localization accuracy could be achieved through any of the three update methods, but significantly fewer updates were required when the respiratory surrogate method was utilized. This study establishes the feasibility of timing image acquisitions for updating respiratory surrogate models without direct tumor localization. PMID:23822413
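    The four update-timing schemes compared can be sketched as a simple policy loop; the 3 mm threshold matches the abstract, while the surrogate-based trigger is reduced here to a hypothetical boolean flag per localization:

```python
def count_updates(localization_errors_mm, surrogate_flags, policy, threshold_mm=3.0):
    # number of model rebuilds triggered over a stream of once-per-minute
    # localizations, under each of the four timing policies compared
    updates = 0
    for err, flag in zip(localization_errors_mm, surrogate_flags):
        if policy == "always":
            updates += 1
        elif policy == "error-based" and err >= threshold_mm:
            updates += 1
        elif policy == "surrogate-based" and flag:
            updates += 1
        # "never": no updates at all
    return updates
```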

  8. Simultaneous Control of Error Rates in fMRI Data Analysis

    PubMed Central

    Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David

    2015-01-01

    The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulated (global) Type I error rate is also small. This solution is achieved by employing the Likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations that show the likelihood approach is viable, leading to ‘cleaner’ looking brain maps and operational superiority (lower average error rate). Finally, we include a case study on cognitive control related activation in the prefrontal cortex of the human brain. PMID:26272730
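    A minimal per-voxel likelihood ratio under a normal model with known variance illustrates the paradigm; this is a sketch of the idea, not the paper's exact statistic, and the means and variance below are illustrative:

```python
import math

def likelihood_ratio(data, mu_alt=1.0, mu_null=0.0, sigma=1.0):
    # ratio of the data's likelihood under the alternative (activation)
    # to its likelihood under the null; values > 1 favour activation
    def log_lik(mu):
        return sum(-0.5 * ((x - mu) / sigma) ** 2 for x in data)
    return math.exp(log_lik(mu_alt) - log_lik(mu_null))
```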

  9. Estimation of open water evaporation using land-based meteorological data

    NASA Astrophysics Data System (ADS)

    Li, Fawen; Zhao, Yong

    2017-10-01

    Water surface evaporation is an important process in the hydrologic and energy cycles. Accurate simulation of water evaporation is important for the evaluation of water resources. In this paper, using meteorological data from the Aixinzhuang reservoir, the main factors affecting water surface evaporation were determined by the principal component analysis method. To illustrate the influence of these factors on water surface evaporation, the paper first adopted the Dalton model to simulate water surface evaporation. The results showed that the simulation precision was poor for the peak value zone. To improve the simulation precision, a modified Dalton model incorporating relative humidity was proposed. The results show that the 10-day average relative error is 17.2%, assessed as qualified; the monthly average relative error is 12.5%, assessed as qualified; and the yearly average relative error is 3.4%, assessed as excellent. To validate its applicability, the meteorological data of Kuancheng station in the Luan River basin were selected to test the modified model. The results show that the 10-day average relative error is 15.4%, assessed as qualified; the monthly average relative error is 13.3%, assessed as qualified; and the yearly average relative error is 6.0%, assessed as good. These results showed that the modified model had good applicability and versatility. The research results can provide technical support for the calculation of water surface evaporation in northern China or similar regions.
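    A classical Dalton-type bulk formula, E = (a + b·u)(e_s − e_a), shows how relative humidity enters through the actual vapor pressure e_a; the coefficients a and b below are illustrative textbook-style values, not the calibrated coefficients from the paper:

```python
import math

def saturation_vapor_pressure_hpa(temp_c):
    # Magnus/Tetens approximation for saturation vapor pressure over water
    return 6.108 * math.exp(17.27 * temp_c / (temp_c + 237.3))

def dalton_evaporation(temp_c, rel_humidity, wind_ms, a=0.13, b=0.094):
    # Dalton-type formula: wind function times vapor-pressure deficit;
    # a, b are illustrative and units are nominal, not the paper's calibration
    e_s = saturation_vapor_pressure_hpa(temp_c)
    e_a = rel_humidity * e_s
    return (a + b * wind_ms) * (e_s - e_a)
```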

  10. Positioning of head and neck patients for proton therapy using proton range probes: a proof of concept study

    NASA Astrophysics Data System (ADS)

    Hammi, A.; Placidi, L.; Weber, D. C.; Lomax, A. J.

    2018-01-01

    To exploit the full potential of proton therapy, accurate and on-line methods to verify the patient positioning and the proton range during the treatment are desirable. Here we propose and validate an innovative technique for determining patient misalignment uncertainties through the use of a small number of low dose, carefully selected proton pencil beams (‘range probes’) (RP) with sufficient energy that their residual Bragg peak (BP) position and shape can be measured on exit. Since any change of the patient orientation in relation to these beams will result in changes of the density heterogeneities through which they pass, our hypothesis is that patient misalignments can be deduced from measured changes in Bragg curve (BC) shape and range. As such, a simple and robust methodology has been developed that estimates average proton range and range dilution of the detected residual BC, in order to locate range probe positions with optimal prediction power for detecting misalignments. The validation of this RP based approach has been split into two phases. First, we retrospectively investigate its potential to detect translational patient misalignments under real clinical conditions. Second, we test it for determining rotational errors of an anthropomorphic phantom that was systematically rotated using an in-house developed high precision motion stage. Simulations of RPs in these two scenarios show that this approach could potentially predict translational errors to better than 1.5 mm and rotational errors to better than 1° using only three or five RP positions, respectively.

  11. Design of a Two-Step Calibration Method of Kinematic Parameters for Serial Robots

    NASA Astrophysics Data System (ADS)

    WANG, Wei; WANG, Lei; YUN, Chao

    2017-03-01

    Serial robots are used to handle workpieces with large dimensions, and calibrating kinematic parameters is one of the most efficient ways to upgrade their accuracy. Many models are set up to investigate how many kinematic parameters can be identified to meet the minimal principle, but the base frame and the kinematic parameters are usually calibrated together in a single step. A two-step method of calibrating kinematic parameters is proposed to improve the accuracy of the robot's base frame and kinematic parameters. The forward kinematics described with respect to the measuring coordinate frame are established based on the product-of-exponential (POE) formula. In the first step the robot's base coordinate frame is calibrated by the unit quaternion form. The errors of both the robot's reference configuration and the base coordinate frame's pose are equivalently transformed to the zero-position errors of the robot's joints. The simplified model of the robot's positioning error is established in second-power explicit expressions. Then the identification model is finished by the least square method, requiring measured position coordinates only. The complete subtasks of calibrating the robot's 39 kinematic parameters are finished in the second step. A group of calibration experiments shows that the proposed two-step method improves the robot's average absolute positioning accuracy to 0.23 mm. The results indicate that the robot's base frame should be calibrated before its kinematic parameters in order to upgrade its absolute positioning accuracy.

  12. Assessment of accuracy, fix success rate, and use of estimated horizontal position error (EHPE) to filter inaccurate data collected by a common commercially available GPS logger.

    PubMed

    Morris, Gail; Conner, L Mike

    2017-01-01

    Global positioning system (GPS) technologies have improved the ability of researchers to monitor wildlife; however, use of these technologies is often limited by monetary costs. Some researchers have begun to use commercially available GPS loggers as a less expensive means of tracking wildlife, but data regarding the performance of these devices are limited. We tested a commercially available GPS logger (i-gotU GT-120) by placing loggers at ground control points with locations known to < 30 cm. In a preliminary investigation, we collected locations every 15 minutes for several days to estimate location error (LE) and circular error probable (CEP). Using similar methods, we then investigated the influence of cover on LE, CEP, and fix success rate (FSR) by constructing cover over ground control points. We found mean LE was < 10 m and mean 50% CEP was < 7 m. FSR was not significantly influenced by cover and in all treatments remained near 100%. Cover had a minor but significant effect on LE. Denser cover was associated with higher mean LE, but the difference in LE between the no-cover and highest-cover treatments was only 2.2 m. Finally, the most commonly used commercially available devices provide a measure of estimated horizontal position error (EHPE) which potentially may be used to filter inaccurate locations. Using data combined from the preliminary and cover investigations, we modeled LE as a function of EHPE and number of satellites. We found support for use of both EHPE and number of satellites in predicting LE; however, use of EHPE to filter inaccurate locations resulted in the loss of many locations with low error in return for only modest improvements in LE. Even without filtering, the accuracy of the logger was likely sufficient for studies which can accept average location errors of approximately 10 m.
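    The LE, CEP, and EHPE-filtering quantities described above can be sketched as follows; coordinates are assumed to be in a local planar frame (meters), and the EHPE cutoff is illustrative:

```python
import math
import statistics

def location_errors(fixes, truth):
    # radial distance (LE) of each GPS fix from the surveyed ground-control point
    return [math.hypot(x - truth[0], y - truth[1]) for x, y in fixes]

def cep50(errors):
    # 50% circular error probable: the median radial error
    return statistics.median(errors)

def filter_by_ehpe(fixes, ehpe_values, cutoff):
    # drop fixes whose reported EHPE exceeds the cutoff (illustrative filter)
    return [f for f, e in zip(fixes, ehpe_values) if e <= cutoff]
```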

  13. Positioning performance of the NTCM model driven by GPS Klobuchar model parameters

    NASA Astrophysics Data System (ADS)

    Hoque, Mohammed Mainul; Jakowski, Norbert; Berdermann, Jens

    2018-03-01

    Users of the Global Positioning System (GPS) utilize the Ionospheric Correction Algorithm (ICA), also known as the Klobuchar model, for correcting ionospheric signal delay or range error. Recently, we developed an ionosphere correction algorithm called the NTCM-Klobpar model for single frequency GNSS applications. The model is driven by a parameter computed from the GPS Klobuchar model and consequently can be used instead of the GPS Klobuchar model for ionospheric corrections. In the presented work we compare the positioning solutions obtained using NTCM-Klobpar with those using the Klobuchar model. Our investigation using worldwide ground GPS data from a quiet and a perturbed ionospheric and geomagnetic activity period of 17 days each shows that the 24-hour prediction performance of the NTCM-Klobpar is better than that of the GPS Klobuchar model on global average. The root mean squared deviation of the 3D position errors is found to be about 0.24 and 0.45 m less for the NTCM-Klobpar compared to the GPS Klobuchar model during quiet and perturbed conditions, respectively. The presented algorithm has the potential to continuously improve the accuracy of GPS single frequency mass market devices with only minor software modifications.

  14. What Randomized Benchmarking Actually Measures

    DOE PAGES

    Proctor, Timothy; Rudinger, Kenneth; Young, Kevin; ...

    2017-09-28

    Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r. For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. These theories allow explicit computation of the error rate that RB measures (r), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
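    The exponential decay model and the conversion from decay parameter to RB error rate can be sketched as follows; the estimator assumes noiseless survival data of the form A·p^m + B at consecutive circuit lengths with a known asymptote B, purely for illustration:

```python
def rb_error_rate(survival, b=0.5, d=2):
    # survival[m] assumed to equal a * p**m + b for consecutive lengths m;
    # recover p from successive ratios, then r = (d - 1) * (1 - p) / d
    ratios = [(survival[m + 1] - b) / (survival[m] - b)
              for m in range(len(survival) - 1)]
    p = sum(ratios) / len(ratios)
    return (d - 1) * (1 - p) / d

# ideal single-qubit (d = 2) decay with p = 0.98 recovers r = 0.01
probs = [0.5 + 0.5 * 0.98 ** m for m in range(6)]
print(rb_error_rate(probs))
```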

  15. Limitations of Surface Mapping Technology in Accurately Identifying Critical Errors in Dental Students' Crown Preparations.

    PubMed

    Furness, Alan R; Callan, Richard S; Mackert, J Rodway; Mollica, Anthony G

    2018-01-01

    The aim of this study was to evaluate the effectiveness of the Planmeca Compare software in identifying and quantifying a common critical error in dental students' crown preparations. In 2014-17, a study was conducted at one U.S. dental school that compared an ideal crown preparation, made by a faculty member on a dentoform, to modified preparations. Two types of preparation errors were created by the addition of flowable composite to the occlusal surface of identical dies of the preparations to represent underreduction of the distolingual cusp. The error was divided into two classes: the minor class allowed for 1 mm of occlusal clearance, and the major class allowed for no occlusal clearance. The preparations were then digitally evaluated against the ideal preparation using Planmeca Compare. Percent comparison values were obtained from each trial and averaged together. False positives and false negatives were also identified and used to determine the accuracy of the evaluation. Critical errors that did not involve a substantial change in the surface area of the preparation were inconsistently identified. Within the limitations of this study, the authors concluded that the Compare software was unable to consistently identify common critical errors within an acceptable degree of error.

  16. Double ErrP Detection for Automatic Error Correction in an ERP-Based BCI Speller.

    PubMed

    Cruz, Aniana; Pires, Gabriel; Nunes, Urbano J

    2018-01-01

    Brain-computer interface (BCI) is a useful device for people with severe motor disabilities. However, due to its low speed and low reliability, BCI still has a very limited application in daily real-world tasks. This paper proposes a P300-based BCI speller combined with a double error-related potential (ErrP) detection to automatically correct erroneous decisions. This novel approach introduces a second error detection to infer whether a wrong automatic correction also elicits a second ErrP. Thus, two single-trial responses, instead of one, contribute to the final selection, improving the reliability of error detection. Moreover, to increase error detection, the evoked potential detected as target by the P300 classifier is combined with the evoked error potential at the feature level. Discriminable error and positive potentials (responses to correct feedback) were clearly identified. The proposed approach was tested on nine healthy participants and one tetraplegic participant. The online average accuracies for the first and second ErrPs were 88.4% and 84.8%, respectively. With automatic correction, we achieved an improvement of around 5%, reaching 89.9% spelling accuracy at an effective rate of 2.92 symbols/min. The proposed approach revealed that double ErrP detection can improve the reliability and speed of BCI systems.

  17. Disruption of State Estimation in the Human Lateral Cerebellum

    PubMed Central

    Miall, R. Chris; Christensen, Lars O. D; Cain, Owen; Stanley, James

    2007-01-01

    The cerebellum has been proposed to be a crucial component in the state estimation process that combines information from motor efferent and sensory afferent signals to produce a representation of the current state of the motor system. Such a state estimate of the moving human arm would be expected to be used when the arm is rapidly and skillfully reaching to a target. We now report the effects of transcranial magnetic stimulation (TMS) over the ipsilateral cerebellum as healthy humans were made to interrupt a slow voluntary movement to rapidly reach towards a visually defined target. Errors in the initial direction and in the final finger position of this reach-to-target movement were significantly higher for cerebellar stimulation than they were in control conditions. The average directional errors in the cerebellar TMS condition were consistent with the reaching movements being planned and initiated from an estimated hand position that was 138 ms out of date. We suggest that these results demonstrate that the cerebellum is responsible for estimating the hand position over this time interval and that TMS disrupts this state estimate. PMID:18044990

  18. Note: high precision angle generator using multiple ultrasonic motors and a self-calibratable encoder.

    PubMed

    Kim, Jong-Ahn; Kim, Jae Wan; Kang, Chu-Shik; Jin, Jonghan; Eom, Tae Bong

    2011-11-01

    We present an angle generator with high resolution and accuracy, which uses multiple ultrasonic motors and a self-calibratable encoder. A cylindrical air bearing guides the rotational motion, and the ultrasonic motors achieve high resolution over the full circle range with a simple configuration. The self-calibratable encoder can compensate for the scale error of a divided circle (signal period: 20") effectively by applying the equal-division-averaged method. The angle generator configures a position feedback control loop using the readout of the encoder. By combining the ac and dc operation modes, the angle generator produced stepwise angular motion with 0.005" resolution. We also evaluated the performance of the angle generator using a precision angle encoder and an autocollimator. The expanded uncertainty (k = 2) in the angle generation was estimated to be less than 0.03", which included the calibrated scale error and the nonlinearity error. © 2011 American Institute of Physics

  19. Combining forecast weights: Why and how?

    NASA Astrophysics Data System (ADS)

    Yin, Yip Chee; Kok-Haur, Ng; Hock-Eam, Lim

    2012-09-01

    This paper proposes a procedure called forecast weight averaging, a specific combination of the forecast weights obtained from different methods of constructing forecast weights, for the purpose of improving the accuracy of pseudo out-of-sample forecasting. It is found that under certain specified conditions, forecast weight averaging can lower the mean squared forecast error obtained from model averaging. In addition, we show that in a linear and homoskedastic environment, this superior predictive ability of forecast weight averaging holds true irrespective of whether the coefficients are tested by the t statistic or the z statistic, provided the significance level is within the 10% range. By theoretical proofs and simulation study, we have shown that model averaging methods such as variance model averaging, simple model averaging, and standard error model averaging each produce a mean squared forecast error larger than that of forecast weight averaging. Finally, this result also holds, marginally, when applied to empirical business and economic data sets: the Gross Domestic Product (GDP) growth rate, Consumer Price Index (CPI), and Average Lending Rate (ALR) of Malaysia.
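    The basic mechanics can be sketched in a few lines: each weighting scheme produces a weight vector over the candidate models, and the schemes' vectors are then combined; the simple average below is an illustrative combination, as the paper's specific combination may weight the schemes differently:

```python
def forecast_weight_average(weight_sets):
    # combine the weight vectors produced by different weighting schemes
    # (illustrative equal-weight combination of the schemes)
    k = len(weight_sets[0])
    n = len(weight_sets)
    return [sum(w[i] for w in weight_sets) / n for i in range(k)]

def combined_forecast(model_forecasts, weights):
    # final pseudo out-of-sample forecast: weighted sum of model forecasts
    return sum(f * w for f, w in zip(model_forecasts, weights))
```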

  20. An Improved Method of Heterogeneity Compensation for the Convolution / Superposition Algorithm

    NASA Astrophysics Data System (ADS)

    Jacques, Robert; McNutt, Todd

    2014-03-01

    Purpose: To improve the accuracy of convolution/superposition (C/S) in heterogeneous material by developing a new algorithm: heterogeneity compensated superposition (HCS). Methods: C/S has proven to be a good estimator of the dose deposited in a homogeneous volume. However, near heterogeneities electron disequilibrium occurs, leading to faster fall-off and re-buildup of dose. We propose to filter the actual patient density in a position- and direction-sensitive manner, allowing the dose deposited near interfaces to be increased or decreased relative to C/S. We implemented the effective density function as a multivariate first-order recursive filter and incorporated it into a GPU-accelerated, multi-energetic C/S implementation. We compared HCS against C/S using the ICCR 2000 Monte-Carlo accuracy benchmark, 23 similar accuracy benchmarks and 5 patient cases. Results: Multi-energetic HCS increased the dosimetric accuracy for the vast majority of voxels; in many cases near Monte-Carlo results were achieved. We defined the per-voxel error, %|mm, as the minimum of the distance to agreement in mm and the dosimetric percentage error relative to the maximum MC dose. HCS improved the average mean error for the patient volumes by 0.79 %|mm, reducing it from 1.93 %|mm to 1.14 %|mm. Very low densities (i.e., < 0.1 g/cm3) remained problematic, but may be solvable with a better filter function. Conclusions: HCS improved upon C/S's density-scaled heterogeneity correction with a position- and direction-sensitive density filter. This method significantly improved the accuracy of the GPU based algorithm, reaching the accuracy levels of Monte Carlo based methods with performance in a few tenths of a second per beam. Acknowledgement: Funding for this research was provided by the NSF Cooperative Agreement EEC9731748, Elekta / IMPAC Medical Systems, Inc. and the Johns Hopkins University. James Satterthwaite provided the Monte Carlo benchmark simulations.
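    The core idea of a first-order recursive density filter can be sketched in one dimension; the smoothing constant alpha is illustrative, and the paper's filter is multivariate and direction-sensitive rather than this 1-D sketch:

```python
def effective_density(densities, alpha=0.6):
    # first-order recursive filter along the beam direction:
    # rho_eff[i] = alpha * rho[i] + (1 - alpha) * rho_eff[i - 1];
    # the filtered density lags behind abrupt interfaces, mimicking the
    # delayed fall-off / re-buildup of dose near heterogeneities
    out = [densities[0]]
    for rho in densities[1:]:
        out.append(alpha * rho + (1 - alpha) * out[-1])
    return out
```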

  1. On the challenges of drawing conclusions from p-values just below 0.05

    PubMed Central

    2015-01-01

    In recent years, researchers have attempted to provide an indication of the prevalence of inflated Type 1 error rates by analyzing the distribution of p-values in the published literature. De Winter & Dodou (2015) analyzed the distribution (and its change over time) of a large number of p-values automatically extracted from abstracts in the scientific literature. They concluded that there is a ‘surge of p-values between 0.041–0.049 in recent decades’ which ‘suggests (but does not prove) questionable research practices have increased over the past 25 years.’ I show that the changes in the ratio of fractions of p-values between 0.041–0.049 over the years are better explained by assuming the average power has decreased over time. Furthermore, I propose that their observation that p-values just below 0.05 increase more strongly than p-values above 0.05 can be explained by an increase in publication bias (or the file drawer effect) over the years (cf. Fanelli, 2012; Pautasso, 2010), which has led to a relative decrease of ‘marginally significant’ p-values in abstracts in the literature (instead of an increase in p-values just below 0.05). I explain why researchers analyzing large numbers of p-values need to relate their assumptions to a model of p-value distributions that takes into account the average power of the performed studies, the ratio of true positives to false positives in the literature, the effects of publication bias, and the Type 1 error rate (and possible mechanisms through which it has inflated). Finally, I discuss why publication bias and underpowered studies might be a bigger problem for science than inflated Type 1 error rates, and explain the challenges when attempting to draw conclusions about inflated Type 1 error rates from a large heterogeneous set of p-values. PMID:26246976
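    The role of average power in shaping the p-value distribution can be illustrated with a small simulation of one-sample z-tests (normal data, known variance; effect size, sample size, and repetition count are all illustrative):

```python
import math
import random

def p_value_z(sample):
    # two-sided one-sample z-test against mu = 0 with known sigma = 1
    n = len(sample)
    z = (sum(sample) / n) * math.sqrt(n)
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

def fraction_below_alpha(effect, n=20, reps=2000, alpha=0.05, seed=1):
    # fraction of simulated studies reaching p < alpha; with effect = 0 this
    # estimates the Type 1 error rate, otherwise the power of the design
    rng = random.Random(seed)
    hits = sum(1 for _ in range(reps)
               if p_value_z([rng.gauss(effect, 1.0) for _ in range(n)]) < alpha)
    return hits / reps
```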

  2. SU-F-T-638: Is There A Need For Immobilization in SRS?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Masterova, K; Sethi, A; Anderson, D

    2016-06-15

    Purpose: Frameless stereotactic radiosurgery (SRS) is increasingly used in the clinic. Cone-beam CT (CBCT) to simulation-CT matching has replaced the 3-dimensional coordinate based setup using a stereotactic localizing frame. The SRS frame, however, served as both a localizing and immobilizing device. We seek to measure the quality of frameless (mask based) and frame based immobilization and evaluate its impact on target dose. Methods: Each SRS patient was set up by kV on-board imaging (OBI) and then fine-tuned with CBCT. A second CBCT was done at treatment-end to ascertain intrafraction motion. We compared pre- vs post-treatment CBCT shifts for both frameless and frame based SRS patients. CBCT to sim-CT fusion was repeated for each patient off-line to assess systematic residual image registration error. Each patient was re-planned with the measured shifts to assess effects on target dose. Results: We analyzed 11 patients (12 lesions) treated with frameless SRS and 6 patients (11 lesions) with a fixed frame system. Average intra-fraction iso-center positioning errors for frameless and frame-based treatments were 1.24 ± 0.57 mm and 0.28 ± 0.08 mm (mean ± s.d.) respectively. Residual error in CBCT registration was 0.24 mm. The frameless positioning uncertainties led to target dose errors in Dmin and D95 of 15.5 ± 18.4% and 6.6 ± 9.1% respectively. The corresponding errors in fixed frame SRS were much lower, with Dmin and D95 reduced by 4.2 ± 6.5% and 2.5 ± 3.8% respectively. Conclusion: The frameless mask provides good immobilization with average patient motion of 1.2 mm during treatment. This exceeds the MRI voxel dimensions (∼0.43 mm) used for target delineation. Frame-based SRS provides superior patient immobilization with measurable movement no greater than the background noise of the CBCT registration. Small lesions requiring sub-mm precision are better served with frame based SRS.

  3. The importance of temporal inequality in quantifying vegetated filter strip removal efficiencies

    NASA Astrophysics Data System (ADS)

    Gall, H. E.; Schultz, D.; Mejia, A.; Harman, C. J.; Raj, C.; Goslee, S.; Veith, T.; Patterson, P. H.

    2017-12-01

    Vegetated filter strips (VFSs) are best management practices (BMPs) commonly implemented adjacent to row-cropped fields to trap overland transport of sediment and other constituents often present in agricultural runoff. VFSs are generally reported to have high sediment removal efficiencies (i.e., 70 - 95%); however, these values are typically calculated as an average of removal efficiencies observed or simulated for individual events. We argue that due to: (i) positively correlated sediment concentration-discharge relationships; (ii) strong temporal inequality exhibited by sediment transport; and (iii) decreasing VFS performance with increasing flow rates, VFS removal efficiencies over annual time scales may be significantly lower than the per-event values or averages typically reported in the literature and used in decision-making models. By applying a stochastic approach to a two-component VFS model, we investigated the extent of the disparity between two calculation methods: averaging efficiencies from each event over the course of one year, versus reporting the total annual load reduction. We examined the effects of soil texture, concentration-discharge relationship, and VFS slope to reveal the potential errors that may be incurred by ignoring the effects of temporal inequality in quantifying VFS performance. Simulation results suggest that errors can be as low as < 2% and as high as > 20%, with the differences between the two methods of removal efficiency calculations greatest for: (i) soils with high percentage of fine particulates; (ii) VFSs with higher slopes; and (iii) strongly positive concentration-discharge relationships. These results can aid in annual-scale decision making for achieving downstream water quality goals.
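The disparity between the two calculation methods argued for above comes from a few large events dominating the annual load. A minimal sketch with hypothetical event data (illustrative numbers, not the study's simulations):

```python
import numpy as np

# Hypothetical per-event sediment loads (kg) entering a VFS over one year,
# with efficiency dropping for the large high-flow event.
load_in = np.array([5.0, 20.0, 400.0, 12.0, 90.0])   # inflow load per event
eff     = np.array([0.95, 0.90, 0.55, 0.92, 0.70])   # per-event removal efficiency
load_out = load_in * (1.0 - eff)

# Method 1: average of per-event efficiencies (commonly reported)
eff_event_avg = eff.mean()

# Method 2: annual load-based efficiency (total mass trapped / total mass in)
eff_annual = 1.0 - load_out.sum() / load_in.sum()

print(f"event-averaged: {eff_event_avg:.2f}, annual load-based: {eff_annual:.2f}")
```

Because the single large event carries most of the annual load at the lowest efficiency, the load-based annual value falls well below the event-averaged value, which is the effect of temporal inequality the abstract describes.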

  4. Research on the novel FBG detection system for temperature and strain field distribution

    NASA Astrophysics Data System (ADS)

    Liu, Zhi-chao; Yang, Jin-hua

    2017-10-01

To collect temperature and strain field distribution information, a novel FBG detection system was designed. The system uses a linearly chirped FBG structure to obtain a large bandwidth. The cover of the novel FBG was designed with a linearly varying thickness so that the grating responds differently at different locations, allowing temperature and strain field distribution information to be obtained simultaneously from the reflection spectrum. The cover structure was designed, its theoretical response function was calculated, and its solution was derived for the strain field distribution. Simulation analysis of the trends in temperature and strain field distribution under different strain strengths and action positions showed that the strain field distribution can be resolved. In experiments, FOB100-series equipment was used to measure temperature and JSM-A10-series equipment was used to measure the strain field distribution. The average experimental error was better than 1.1% for temperature and better than 1.3% for strain, with occasional outliers in the test data when the strain was small. Feasibility was confirmed by theoretical analysis, simulation calculation and experiment, making the system well suited to practical application.

  5. The effect of income and occupation on body mass index among women in the Cebu Longitudinal Health and Nutrition Surveys (1983-2002).

    PubMed

    Colchero, M Arantxa; Caballero, Benjamin; Bishai, David

    2008-05-01

    We assessed the effects of changes in income and occupational activities on changes in body weight among 2952 non-pregnant women enrolled in the Cebu Longitudinal Health and Nutrition Surveys between 1983 and 2002. On average, body mass index (BMI) among women occupied in low activities was 0.29 kg/m(2) (standard error 0.11) larger compared to women occupied in heavy activities. BMI among women involved in medium activities was on average 0.12 kg/m(2) (standard error 0.05) larger compared to women occupied in heavy activities. A one-unit increase in log household income in the previous survey was associated with a small and positive change in BMI of 0.006 kg/m(2) (standard error 0.02) but the effect was not significant. The trend of increasing body mass was higher in the late 1980s than during the 1990s. These period effects were stronger for the women who were younger at baseline and for women with low or medium activity levels. Our analysis suggests a trend in the environment over the last 20 years that has increased the susceptibility of Filipino women to larger body mass.

  6. Precise positioning method for multi-process connecting based on binocular vision

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Ding, Lichao; Zhao, Kai; Li, Xiao; Wang, Ling; Jia, Zhenyuan

    2016-01-01

With the rapid development of aviation and aerospace, the demand for metal-coated parts such as antenna reflectors, eddy-current sensors and signal transmitters is increasingly urgent. Such parts, with varied feature dimensions, complex three-dimensional structures, and high geometric accuracy, are generally fabricated by a combination of different manufacturing technologies. However, it is difficult to ensure the machining precision because of the connection error between different processing methods. Therefore, a precise positioning method based on binocular micro stereo vision is proposed in this paper. Firstly, a novel and efficient camera calibration method for the stereoscopic microscope is presented to address the problems of a narrow field of view, small depth of focus and numerous nonlinear distortions. Secondly, the extraction algorithms for law curves and free curves are given, and the spatial position relationship between the micro vision system and the machining system is determined accurately. Thirdly, a precise positioning system based on micro stereo vision is set up and then embedded in a CNC machining experiment platform. Finally, a verification experiment of the positioning accuracy is conducted; the experimental results indicate that the average errors of the proposed method in the X and Y directions are 2.250 μm and 1.777 μm, respectively.

  7. WE-D-BRA-04: Online 3D EPID-Based Dose Verification for Optimum Patient Safety

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spreeuw, H; Rozendaal, R; Olaciregui-Ruiz, I

    2015-06-15

Purpose: To develop an online 3D dose verification tool based on EPID transit dosimetry to ensure optimum patient safety in radiotherapy treatments. Methods: A new software package was developed which processes EPID portal images online using a back-projection algorithm for the 3D dose reconstruction. The package processes portal images faster than the acquisition rate of the portal imager (∼2.5 fps). After a portal image is acquired, the software searches for "hot spots" in the reconstructed 3D dose distribution. A hot spot is defined in this study as a 4 cm³ cube where the average cumulative reconstructed dose exceeds the average total planned dose by at least 20% and 50 cGy. If a hot spot is detected, an alert is generated resulting in a linac halt. The software has been tested by irradiating an Alderson phantom after introducing various types of serious delivery errors. Results: In our first experiment the Alderson phantom was irradiated with two arcs from a 6 MV VMAT H&N treatment having a large leaf position error or a large monitor unit error. For both arcs and both errors the linac was halted before dose delivery was completed. When no error was introduced, the linac was not halted. The complete processing of a single portal frame, including hot spot detection, takes about 220 ms on a dual hexacore Intel Xeon X5650 CPU at 2.66 GHz. Conclusion: A prototype online 3D dose verification tool using portal imaging has been developed and successfully tested for various kinds of gross delivery errors. The detection of hot spots was proven to be effective for the timely detection of these errors. Current work is focused on hot spot detection criteria for various treatment sites and the introduction of a clinical pilot program with online verification of hypo-fractionated (lung) treatments.
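The hot-spot criterion above (mean cumulative reconstructed dose in a 4 cm³ cube exceeding the mean planned dose by at least 20% and 50 cGy) can be sketched on a voxelized dose grid. This is an illustrative re-implementation under an assumed voxel spacing, not the authors' software:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def find_hot_spots(recon, planned, voxel_mm=5.0, cube_cm3=4.0,
                   rel_excess=0.20, abs_excess_cgy=50.0):
    """Flag every cube position where the mean cumulative reconstructed dose
    exceeds the mean planned dose by at least `rel_excess` (fractional)
    AND `abs_excess_cgy` (cGy). Doses are 3D arrays on the same voxel grid."""
    side_mm = (cube_cm3 ** (1.0 / 3.0)) * 10.0      # 4 cm^3 -> ~15.9 mm side
    k = max(1, int(round(side_mm / voxel_mm)))      # cube side in voxels
    mean_recon = sliding_window_view(recon, (k, k, k)).mean(axis=(-3, -2, -1))
    mean_plan = sliding_window_view(planned, (k, k, k)).mean(axis=(-3, -2, -1))
    excess = mean_recon - mean_plan
    return (excess >= rel_excess * mean_plan) & (excess >= abs_excess_cgy)
```

In an online setting this check would run once per reconstructed frame; any `True` entry would trigger the linac halt described in the abstract.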

  8. Online absolute pose compensation and steering control of industrial robot based on six degrees of freedom laser measurement

    NASA Astrophysics Data System (ADS)

    Yang, Juqing; Wang, Dayong; Fan, Baixing; Dong, Dengfeng; Zhou, Weihu

    2017-03-01

    In-situ intelligent manufacturing for large-volume equipment requires industrial robots with absolute high-accuracy positioning and orientation steering control. Conventional robots mainly employ an offline calibration technology to identify and compensate key robotic parameters. However, the dynamic and static parameters of a robot change nonlinearly. It is not possible to acquire a robot's actual parameters and control the absolute pose of the robot with a high accuracy within a large workspace by offline calibration in real-time. This study proposes a real-time online absolute pose steering control method for an industrial robot based on six degrees of freedom laser tracking measurement, which adopts comprehensive compensation and correction of differential movement variables. First, the pose steering control system and robot kinematics error model are constructed, and then the pose error compensation mechanism and algorithm are introduced in detail. By accurately achieving the position and orientation of the robot end-tool, mapping the computed Jacobian matrix of the joint variable and correcting the joint variable, the real-time online absolute pose compensation for an industrial robot is accurately implemented in simulations and experimental tests. The average positioning error is 0.048 mm and orientation accuracy is better than 0.01 deg. The results demonstrate that the proposed method is feasible, and the online absolute accuracy of a robot is sufficiently enhanced.

  9. Dosimetric consequences of translational and rotational errors in frame-less image-guided radiosurgery

    PubMed Central

    2012-01-01

Background To investigate geometric and dosimetric accuracy of frame-less image-guided radiosurgery (IG-RS) for brain metastases. Methods and materials Single fraction IG-RS was practiced in 72 patients with 98 brain metastases. Patient positioning and immobilization used either double- (n = 71) or single-layer (n = 27) thermoplastic masks. Pre-treatment set-up errors (n = 98) were evaluated with cone-beam CT (CBCT) based image-guidance (IG) and were corrected in six degrees of freedom without an action level. CBCT imaging after treatment measured intra-fractional errors (n = 64). Pre- and post-treatment errors were simulated in the treatment planning system and target coverage and dose conformity were evaluated. Three scenarios of 0 mm, 1 mm and 2 mm GTV-to-PTV (gross tumor volume, planning target volume) safety margins (SM) were simulated. Results Errors prior to IG were 3.9 mm ± 1.7 mm (3D vector) and the maximum rotational error was 1.7° ± 0.8° on average. The post-treatment 3D error was 0.9 mm ± 0.6 mm. No differences between double- and single-layer masks were observed. Intra-fractional errors were significantly correlated with the total treatment time with 0.7 mm ± 0.5 mm and 1.2 mm ± 0.7 mm for treatment times ≤23 minutes and >23 minutes (p < 0.01), respectively. Simulation of RS without image-guidance reduced target coverage and conformity to 75% ± 19% and 60% ± 25% of planned values. Each 3D set-up error of 1 mm decreased target coverage and dose conformity by 6% and 10% on average, respectively, with a large inter-patient variability. Pre-treatment correction of translations only but not rotations did not affect target coverage and conformity. Post-treatment errors reduced target coverage by >5% in 14% of the patients. A 1 mm safety margin fully compensated intra-fractional patient motion. Conclusions IG-RS with online correction of translational errors achieves high geometric and dosimetric accuracy.
Intra-fractional errors decrease target coverage and conformity unless compensated with appropriate safety margins. PMID:22531060

  10. Self-calibration method without joint iteration for distributed small satellite SAR systems

    NASA Astrophysics Data System (ADS)

    Xu, Qing; Liao, Guisheng; Liu, Aifei; Zhang, Juan

    2013-12-01

The performance of distributed small satellite synthetic aperture radar systems degrades significantly due to the unavoidable array errors, including gain, phase, and position errors, in real operating scenarios. In the conventional method proposed in (IEEE T Aero. Elec. Sys. 42:436-451, 2006), the spectrum components within one Doppler bin are considered as calibration sources. However, it is found in this article that the gain error estimation and the position error estimation in the conventional method can interact with each other. The conventional method may converge to suboptimal solutions in the presence of large position errors, since it requires joint iteration between gain-phase error estimation and position error estimation. In addition, it is also found that phase errors can be estimated well regardless of position errors when the zero Doppler bin is chosen. In this article, we propose a method obtained by modifying the conventional one, based on these two observations. In this modified method, gain errors are firstly estimated and compensated, which eliminates the interaction between gain error estimation and position error estimation. Then, by using the zero Doppler bin data, the phase error estimation can be performed well independently of position errors. Finally, position errors are estimated based on the Taylor-series expansion. Meanwhile, the joint iteration between gain-phase error estimation and position error estimation is not required. Therefore, the problem of suboptimal convergence, which occurs in the conventional method, can be avoided at low computational cost. The modified method has the merits of faster convergence and lower estimation error compared to the conventional one. Theoretical analysis and computer simulation results verified the effectiveness of the modified method.

  11. Supporting diagnosis of attention-deficit hyperactive disorder with novelty detection.

    PubMed

    Lee, Hyoung-Joo; Cho, Sungzoon; Shin, Min-Sup

    2008-03-01

Computerized continuous performance test (CPT) is a widely used diagnostic tool for attention-deficit hyperactivity disorder (ADHD). It measures the number of correctly detected stimuli as well as response times. Typically, when calculating a cut-off score for discriminating between normal and abnormal, only the normal children's data are collected. Then the average and standard deviation of each measure or variable are computed. If any of the variables is larger than 2 sigma above the average, that child is diagnosed as abnormal. We refer to this approach as the "T-score 70" classifier. However, its performance leaves much to be desired due to a high false-negative error rate. In order to improve the classification accuracy, we propose to use novelty detection approaches for supporting ADHD diagnosis. Novelty detection is a model-building framework in which a classifier is constructed using only one class of training data and a new input pattern is classified according to its similarity to the training data. A total of eight novelty detectors are introduced and applied to our ADHD datasets collected from two modes of tests, visual and auditory. They are evaluated and compared with the T-score model on validation datasets in terms of false-positive and false-negative error rates, and area under the receiver operating characteristic curve (AuROC). Experimental results show that the cut-off score of 70 is suboptimal, leading to a low false-positive error but a very high false-negative error. A few novelty detectors such as Parzen density estimators yield much more balanced classification performances. Moreover, most novelty detectors outperform the T-score method for most age groups statistically with a significance level of 1% in terms of AuROC.
In particular, we recommend the Parzen and Gaussian density estimators, kernel principal component analysis, one-class support vector machine, and K-means clustering novelty detector which can improve upon the T-score method on average by at least 30% for the visual test and 40% for the auditory test. In addition, their performances are relatively stable over various parameter values as long as they are within reasonable ranges. The proposed novelty detection approaches can replace the T-score method which has been considered the "gold standard" for supporting ADHD diagnosis. Furthermore, they can be applied to other psychological tests where only normal data are available.
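The "T-score 70" baseline described in this record (flag a child as abnormal if any CPT measure lies more than 2 standard deviations above the normal-group mean, i.e. a T-score above 70 since T = 50 + 10z) can be sketched as follows; the measure values are hypothetical:

```python
import numpy as np

def fit_tscore70(normals):
    """Fit on normal-group CPT measures only (rows = children, cols = measures)."""
    return normals.mean(axis=0), normals.std(axis=0, ddof=1)

def predict_tscore70(x, mean, std):
    """Abnormal if ANY measure lies more than 2 SD above the normal mean."""
    z = (x - mean) / std
    return bool(np.any(z > 2.0))
```

This one-class structure (fit on normals only, score new patterns by deviation) is exactly what the novelty detectors in the study generalize, replacing the per-variable 2-sigma rule with density estimates, one-class SVMs, and clustering-based scores.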

  12. Anatomical frame identification and reconstruction for repeatable lower limb joint kinematics estimates.

    PubMed

    Donati, Marco; Camomilla, Valentina; Vannozzi, Giuseppe; Cappozzo, Aurelio

    2008-07-19

    The quantitative description of joint mechanics during movement requires the reconstruction of the position and orientation of selected anatomical axes with respect to a laboratory reference frame. These anatomical axes are identified through an ad hoc anatomical calibration procedure and their position and orientation are reconstructed relative to bone-embedded frames normally derived from photogrammetric marker positions and used to describe movement. The repeatability of anatomical calibration, both within and between subjects, is crucial for kinematic and kinetic end results. This paper illustrates an anatomical calibration approach, which does not require anatomical landmark manual palpation, described in the literature to be prone to great indeterminacy. This approach allows for the estimate of subject-specific bone morphology and automatic anatomical frame identification. The experimental procedure consists of digitization through photogrammetry of superficial points selected over the areas of the bone covered with a thin layer of soft tissue. Information concerning the location of internal anatomical landmarks, such as a joint center obtained using a functional approach, may also be added. The data thus acquired are matched with the digital model of a deformable template bone. Consequently, the repeatability of pelvis, knee and hip joint angles is determined. Five volunteers, each of whom performed five walking trials, and six operators, with no specific knowledge of anatomy, participated in the study. Descriptive statistics analysis was performed during upright posture, showing a limited dispersion of all angles (less than 3 deg) except for hip and knee internal-external rotation (6 deg and 9 deg, respectively). During level walking, the ratio of inter-operator and inter-trial error and an absolute subject-specific repeatability were assessed. 
For pelvic and hip angles, and knee flexion-extension, the inter-operator error was equal to the inter-trial error, with the absolute error ranging from 0.1 deg to 0.9 deg. Knee internal-external rotation and ab-adduction showed, on average, inter-operator errors that were 8% and 28% greater than the relevant inter-trial errors, respectively. The absolute error was in the range 0.9-2.9 deg.

  13. IRIS Mariner 9 Data Revisited. 1; An Instrumental Effect

    NASA Technical Reports Server (NTRS)

    Formisano, V.; Grassi, D.; Piccioni, G.; Pearl, John; Bjoraker, G.; Conrath, B.; Hanel, R.

    1999-01-01

    Small spurious features are present in data from the Mariner 9 Infrared Interferometer Spectrometer (IRIS). These represent a low amplitude replication of the spectrum with a doubled wavenumber scale. This replication arises principally from an internal reflection of the interferogram at the input window. An algorithm is provided to correct for the effect, which is at the 2% level. We believe that the small error in the uncorrected spectra does not materially affect previous results; however, it may be significant for some future studies at short wavelengths. The IRIS spectra are also affected by a coding error in the original calibration that results in only positive radiances. This reduces the effectiveness of averaging spectra to improve the signal to noise ratio at small signal levels.
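The correction described above amounts to subtracting a ~2% replica of the spectrum appearing on a doubled wavenumber scale (a feature at wavenumber v produces a ghost near 2v). A minimal sketch of this idea; the published algorithm may differ in detail:

```python
import numpy as np

def remove_replica(wn, spec, amp=0.02):
    """Subtract a low-amplitude copy of the spectrum on a doubled wavenumber
    scale: the ghost seen at wavenumber v is amp * spec(v / 2).
    `wn` must be monotonically increasing."""
    ghost = np.interp(wn / 2.0, wn, spec, left=0.0, right=0.0)
    return spec - amp * ghost
```

Applied to a spectrum containing a unit feature at 30 cm⁻¹ and its 2% ghost at 60 cm⁻¹, the subtraction removes the ghost while leaving the real feature untouched.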

  14. Spatial averaging errors in creating hemispherical reflectance (albedo) maps from directional reflectance data

    NASA Technical Reports Server (NTRS)

    Kimes, D. S.; Kerber, A. G.; Sellers, P. J.

    1993-01-01

Spatial averaging errors that may occur when creating hemispherical reflectance maps for different cover types using a direct nadir technique to estimate the hemispherical reflectance are assessed by comparing the results with those obtained with a knowledge-based system called VEG (Kimes et al., 1991, 1992). It was found that hemispherical reflectance errors obtained using VEG are much smaller than those using the direct nadir technique, depending on conditions. Suggestions are made concerning sampling and averaging strategies for creating hemispherical reflectance maps for photosynthetic, carbon cycle, and climate change studies.

  15. Estimation of time averages from irregularly spaced observations - With application to coastal zone color scanner estimates of chlorophyll concentration

    NASA Technical Reports Server (NTRS)

    Chelton, Dudley B.; Schlax, Michael G.

    1991-01-01

The sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location is quantified through a formalism. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average, formed from the simple average of all observations within the averaging period, and the optimal estimate, formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. The resulting suboptimal estimates are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates using the correct signal and measurement error variances and correlation functions for realistic ranges of these parameters, which makes them a viable practical alternative to the composite average method generally employed at present.
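The two linear estimates compared above can be sketched as follows; the autocorrelation function and variances below are placeholders, not values from the paper:

```python
import numpy as np

def composite_average(t_obs, y_obs, t0, t1):
    """Composite average: simple mean of all observations inside [t0, t1)."""
    inside = (t_obs >= t0) & (t_obs < t1)
    return y_obs[inside].mean()

def optimal_average(t_obs, y_obs, t0, t1, corr, signal_var, noise_var):
    """Gauss-Markov (optimal) estimate of the [t0, t1) time average, given an
    assumed signal autocorrelation corr(dt) and error variances."""
    # observation-observation covariance: signal plus uncorrelated noise
    C = signal_var * corr(t_obs[:, None] - t_obs[None, :]) \
        + noise_var * np.eye(t_obs.size)
    # observation-target covariance: correlation of each observation with the
    # window average, approximated on a fine grid spanning the window
    grid = np.linspace(t0, t1, 50)
    c = signal_var * corr(t_obs[:, None] - grid[None, :]).mean(axis=1)
    return np.linalg.solve(C, c) @ y_obs
```

Unlike the composite average, the optimal estimate uses all observations (including those outside the window) weighted by their assumed correlation with the window mean, which is where its accuracy advantage comes from.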

  16. Man-portable Vector Time Domain EMI Sensor and Discrimination Processing

    DTIC Science & Technology

    2012-04-16

points of each winding are coincident. Each receiver coil is wound helically on a set of 10 grooves etched on the surface of the cube; 36-gauge wire … subset of the data, and inject various levels of noise into the position of the MPV in order to gauge the robustness of the discrimination results … as possible. The quantity φ also provides a metric to gauge goodness of fit, being essentially an average percent error.

  17. Computational biomechanics to simulate the femoropopliteal intersection during knee flexion: a preliminary study.

    PubMed

    Diehm, Nicolas; Sin, Sangmun; Hoppe, Hanno; Baumgartner, Iris; Büchler, Philippe

    2011-06-01

To assess whether finite element (FE) models can be used to predict deformation of the femoropopliteal segment during knee flexion. Magnetic resonance angiography (MRA) images were acquired of the lower limbs of 8 healthy volunteers (5 men; mean age 28 ± 4 years). Images were taken in 2 natural positions, with the lower limb fully extended and with the knee bent at ~40°. Patient-specific FE models were developed and used to simulate the experimental situation. The displacements of the artery during knee bending predicted by the numerical model were compared to the corresponding positions measured on the MRA images. The numerical predictions showed good overall agreement between the calculated displacements and the motion measured on the MRA images. The average position error comparing calculated vs. actual displacements of the femoropopliteal intersection measured on the MRA was 8 ± 4 mm. Two of the 8 subjects showed large prediction errors (average 13 ± 5 mm); these 2 volunteers were the tallest subjects involved in the study and had a low body mass index (20.5 kg/m²). The present computational model is able to capture the gross mechanical environment of the femoropopliteal intersection during knee bending and provide a better understanding of its complex biomechanical behavior. However, the results suggest that patient-specific mechanical properties and detailed muscle modeling are required to provide accurate patient-specific numerical predictions of arterial displacement. Further adaptation of this model is expected to provide an improved ability to predict the multiaxial deformation of this arterial segment during leg movements and to optimize future stent designs.

  18. Skull registration for prone patient position using tracked ultrasound

    NASA Astrophysics Data System (ADS)

    Underwood, Grace; Ungi, Tamas; Baum, Zachary; Lasso, Andras; Kronreif, Gernot; Fichtinger, Gabor

    2017-03-01

PURPOSE: Tracked navigation has become prevalent in neurosurgery. Problems with registering a patient to a preoperative image arise when the patient is in a prone position. Surfaces accessible to optical tracking on the back of the head are unreliable for registration. We investigated the accuracy of surface-based registration using points accessible through tracked ultrasound. Using ultrasound allows access to bone surfaces that are not available through optical tracking. Tracked ultrasound could eliminate the need to (i) work under the table for registration and (ii) adjust the tracker between surgery and registration. In addition, tracked ultrasound could provide a non-invasive method in comparison to an alternative registration method involving screw implantation. METHODS: A phantom study was performed to test the feasibility of tracked ultrasound for registration. An initial registration was performed to partially align the pre-operative computed tomography data and the skull phantom. The initial registration was performed by an anatomical landmark registration. Surface points accessible by tracked ultrasound were collected and used to perform an iterative closest point (ICP) registration. RESULTS: When the surface registration was compared to a ground-truth landmark registration, the average TRE was found to be 1.6 ± 0.1 mm and the average distance of points off the skull surface was 0.6 ± 0.1 mm. CONCLUSION: The use of tracked ultrasound is feasible for registration of patients in the prone position and eliminates the need to perform registration under the table. The translational component of error found was minimal. Therefore, the amount of TRE in registration is due to a rotational component of error.
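The registration pipeline described above (landmark-based initial alignment refined by an iterative closest point fit of collected surface points to the CT surface) can be sketched in its core step. This is a generic brute-force ICP, not the study's implementation:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (Kabsch/SVD solution for paired point sets)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def icp(src, surface, iters=20):
    """Refine an initial alignment of collected points `src` to a dense
    `surface` point cloud by alternating nearest-neighbor matching and
    rigid-transform fitting."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest surface point for each collected point
        d2 = ((cur[:, None, :] - surface[None, :, :]) ** 2).sum(-1)
        R, t = best_rigid_transform(cur, surface[d2.argmin(axis=1)])
        cur = cur @ R.T + t
    return cur
```

ICP only converges from a reasonable starting pose, which is why the study performs an anatomical landmark registration first.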

  19. Analysis of the effects of Eye-Tracker performance on the pulse positioning errors during refractive surgery☆

    PubMed Central

    Arba-Mosquera, Samuel; Aslanides, Ioannis M.

    2012-01-01

Purpose To analyze the effects of Eye-Tracker performance on the pulse positioning errors during refractive surgery. Methods A comprehensive model has been developed which directly considers eye movements, including saccades, vestibular, optokinetic, vergence, and miniature movements, as well as eye-tracker acquisition rate, eye-tracker latency time, scanner positioning time, laser firing rate, and laser trigger delay. Results Eye-tracker acquisition rates below 100 Hz correspond to pulse positioning errors above 1.5 mm. Eye-tracker latency times of up to about 15 ms correspond to pulse positioning errors of up to 3.5 mm. Scanner positioning times of up to about 9 ms correspond to pulse positioning errors of up to 2 mm. Laser firing rates faster than eye-tracker acquisition rates essentially double pulse-positioning errors. Laser trigger delays of up to about 300 μs have minor to no impact on pulse-positioning errors. Conclusions The proposed model can be used for comparison of laser systems used for ablation processes. Due to the pseudo-random nature of eye movements, positioning errors of single pulses are much larger than observed decentrations in the clinical settings. There is no single parameter that 'alone' minimizes the positioning error. It is the optimal combination of the several parameters that minimizes the error. The results of this analysis are important to understand the limitations of correcting very irregular ablation patterns.
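A first-order version of the reasoning above: a pulse lands where the eye was last measured, so the decentration scales with eye speed times the total dead time between measurement and delivery. This simplified model only illustrates the parameter dependencies, not the authors' comprehensive simulation:

```python
def pulse_positioning_error_mm(eye_speed_mm_s, acq_rate_hz, latency_ms,
                               scanner_ms, trigger_delay_us):
    """Worst-case pulse decentration: eye displacement accumulated over the
    dead time between eye-position measurement and pulse delivery."""
    dead_time_s = (1.0 / acq_rate_hz          # up to one sampling period
                   + latency_ms * 1e-3        # tracker processing latency
                   + scanner_ms * 1e-3        # scanner settling time
                   + trigger_delay_us * 1e-6) # laser trigger delay
    return eye_speed_mm_s * dead_time_s
```

Consistent with the abstract's findings, the sampling period and latency terms dominate in this formulation, while a trigger delay of a few hundred microseconds is negligible.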

  20. Spatial interpolation of solar global radiation

    NASA Astrophysics Data System (ADS)

    Lussana, C.; Uboldi, F.; Antoniazzi, C.

    2010-09-01

Solar global radiation is defined as the radiant flux incident onto an area element of the terrestrial surface. Its direct knowledge plays a crucial role in many applications, from agrometeorology to environmental meteorology. The ARPA Lombardia meteorological network includes about one hundred pyranometers, mostly distributed in the southern part of the Alps and in the centre of the Po Plain. A statistical interpolation method based on an implementation of Optimal Interpolation is applied to the hourly averages of the solar global radiation observations measured by the ARPA Lombardia network. The background field is obtained using SMARTS (The Simple Model of the Atmospheric Radiative Transfer of Sunshine, Gueymard, 2001). The model is initialised by assuming clear-sky conditions and takes into account the solar position and orography-related effects (shade and reflection). The interpolation of pyranometric observations introduces information about cloud presence and influence into the analysis fields. A particular effort is devoted to preventing observations affected by large errors of different kinds (representativity errors, systematic errors, gross errors) from entering the analysis procedure. The inclusion of direct cloud information from satellite observations is also planned.
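The analysis step combining the clear-sky background with the pyranometer observations follows the standard Optimal Interpolation form; a textbook sketch (the matrices below are illustrative, not the operational configuration):

```python
import numpy as np

def optimal_interpolation(xb, y, H, B, R):
    """One Optimal Interpolation analysis step: correct the background xb at
    grid points with observations y.
        xa = xb + B H^T (H B H^T + R)^(-1) (y - H xb)
    where H maps grid values to observation locations, B is the background
    error covariance and R the observation error covariance."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)  # gain matrix
    return xb + K @ (y - H @ xb)
```

With a clear-sky background of 500 W/m² at two grid points and a single pyranometer reading 400 W/m² at the first point, the negative innovation is spread to the neighbouring point in proportion to the assumed background error covariance.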

  1. Patient safety culture assessment in Oman.

    PubMed

    Al-Mandhari, Ahmed; Al-Zakwani, Ibrahim; Al-Kindi, Moosa; Tawilah, Jihane; Dorvlo, Atsu S S; Al-Adawi, Samir

    2014-07-01

    To illustrate the patient safety culture in Oman as gleaned via 12 indices of patient safety culture derived from the Hospital Survey on Patient Safety Culture (HSPSC) and to compare the average positive response rates in patient safety culture between Oman and the USA, Taiwan, and Lebanon. This was a cross-sectional research study employed to gauge the performance of HSPSC safety indices among health workers representing five secondary and tertiary care hospitals in the northern region of Oman. The participants (n=398) represented different professional designations of hospital staff. Analyses were performed using univariate statistics. The overall average positive response rate for the 12 patient safety culture dimensions of the HSPSC survey in Oman was 58%. The indices from HSPSC that were endorsed the highest included 'organizational learning and continuous improvement' while conversely, 'non-punitive response to errors' was ranked the least. There were no significant differences in average positive response rates between Oman and the United States (58% vs. 61%; p=0.666), Taiwan (58% vs. 64%; p=0.386), and Lebanon (58% vs. 61%; p=0.666). This study provides the first empirical study on patient safety culture in Oman which is similar to those rates reported elsewhere. It highlights the specific strengths and weaknesses which may stem from the specific milieu prevailing in Oman.

  2. Rotational wind indicator enhances control of rotated displays

    NASA Technical Reports Server (NTRS)

    Cunningham, H. A.; Pavel, Misha

    1991-01-01

    Rotation by 108 deg of the spatial mapping between a visual display and a manual input device produces large spatial errors in a discrete aiming task. These errors are not easily corrected by voluntary mental effort, but the central nervous system does adapt gradually to the new mapping. Bernotat (1970) showed that adding true hand position to a 90-deg rotated display improved performance of a compensatory tracking task, but tracking error rose again upon removal of the explicit cue. This suggests that the explicit error signal did not induce changes in the neural mapping, but rather allowed the operator to reduce tracking error using a higher mental strategy. In this report, we describe an explicit visual display enhancement applied to a 108-deg rotated discrete aiming task. A 'wind indicator' corresponding to the effect of the mapping rotation is displayed on the operator-controlled cursor. The human operator is instructed to oppose the virtual force represented by the indicator, as one would when flying an airplane in a crosswind. This enhancement reduces spatial aiming error in the first 10 minutes of practice by an average of 70 percent compared to a no-enhancement control condition. Moreover, it produces an adaptation aftereffect, which is evidence of learning by neural adaptation rather than by mental strategy. Finally, aiming error does not rise upon removal of the explicit cue.

  3. Individual differences in error monitoring in healthy adults: psychological symptoms and antisocial personality characteristics.

    PubMed

    Chang, Wen-Pin; Davies, Patricia L; Gavin, William J

    2010-10-01

    Recent studies have investigated the relationship between psychological symptoms and personality traits and error monitoring as measured by the error-related negativity (ERN) and error positivity (Pe) event-related potential (ERP) components, yet there remains a paucity of studies examining the simultaneous collective effects of psychological symptoms and personality traits on error monitoring. The present study therefore examined whether measures of hyperactivity-impulsivity, depression, anxiety, and antisocial personality characteristics could collectively account for significant interindividual variability in both ERN and Pe amplitudes in 29 healthy adults with no known disorders, ages 18-30 years. Bivariate zero-order correlation analyses found that only the anxiety measure was significantly related to both ERN and Pe amplitudes. However, multiple regression analyses that included all four characteristic measures, while controlling for the number of segments in the ERP average, revealed that both depression and antisocial personality characteristics were significant predictors of ERN amplitude, whereas antisocial personality was the only significant predictor of Pe amplitude. These findings suggest that psychological symptoms and personality traits are associated with individual variations in error monitoring in healthy adults, and future studies should consider these variables when comparing group differences in error monitoring between adults with and without disabilities. © 2010 The Authors. European Journal of Neuroscience © 2010 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.

  4. Influence of Forecast Accuracy of Photovoltaic Power Output on Facility Planning and Operation of Microgrid under 30 min Power Balancing Control

    NASA Astrophysics Data System (ADS)

    Kato, Takeyoshi; Sone, Akihito; Shimakage, Toyonari; Suzuoki, Yasuo

    A microgrid (MG) is one measure for enabling high penetration of renewable energy (RE)-based distributed generators (DGs). To construct a MG economically, optimizing the capacity of controllable DGs against RE-based DGs is essential. Using a numerical simulation model developed from demonstrative studies of a MG with a PAFC and a NaS battery as controllable DGs and a photovoltaic power generation system (PVS) as a RE-based DG, this study discusses the influence of the forecast accuracy of PVS output on capacity optimization and daily operation, evaluated in terms of cost. The main results are as follows. The required NaS battery capacity must be increased by 10-40% relative to the ideal situation with no forecast error in PVS power output. The influence of forecast error on the grid electricity received is not significant on an annual basis, because positive and negative forecast errors vary from day to day. The annual total cost of facility and operation increases by 2-7% due to the forecast error applied in this study. The impacts of forecast error on facility optimization and on operation optimization are roughly equal, each at a few percent, implying that forecast accuracy should be improved in terms of both the number of occurrences of large forecast error and the average error.

  5. Neurosurgical robotic arm drilling navigation system.

    PubMed

    Lin, Chung-Chih; Lin, Hsin-Cheng; Lee, Wen-Yo; Lee, Shih-Tseng; Wu, Chieh-Tsai

    2017-09-01

    The aim of this work was to develop a neurosurgical robotic arm drilling navigation system that provides assistance throughout the complete bone drilling process. The system comprised neurosurgical robotic arm navigation, combining robotic and surgical navigation; 3D medical-imaging-based surgical planning that could identify the lesion location and plan the surgical path on 3D images; and automatic bone drilling control that would stop drilling when the bone was about to be drilled through. Three kinds of experiments were designed. The average positioning error of the robotic arm, deduced from 3D images, was 0.502 ± 0.069 mm. The correlation between automatically and manually planned paths was 0.975. The average distance error between automatically planned paths and risky zones was 0.279 ± 0.401 mm. The drilling auto-stopping algorithm had 0.00% unstopped cases (26.32% in control group 1) and 70.53% non-drilled-through cases (8.42% and 4.21% in control groups 1 and 2, respectively). The system may be useful for neurosurgical robotic arm drilling navigation. Copyright © 2016 John Wiley & Sons, Ltd.

  6. Evaluation of the accuracy and clinical practicality of a calculation system for patient positional displacement in carbon ion radiotherapy at five sites.

    PubMed

    Kubota, Yoshiki; Hayashi, Hayato; Abe, Satoshi; Souda, Saki; Okada, Ryosuke; Ishii, Takayoshi; Tashiro, Mutsumi; Torikoshi, Masami; Kanai, Tatsuaki; Ohno, Tatsuya; Nakano, Takashi

    2018-03-01

    We developed a system for calculating patient positional displacement between digital radiography images (DRs) and digitally reconstructed radiography images (DRRs) to reduce patient radiation exposure, minimize individual differences between radiological technologists in patient positioning, and decrease positioning time. The accuracy of this system at five sites was evaluated with clinical data from cancer patients. The dependence of calculation accuracy on the size of the region of interest (ROI) and on the initial position was evaluated for clinical use. As a preliminary verification, treatment planning and positioning data from eight setup patterns using a head and neck phantom were evaluated. Following this, data from 50 patients with prostate, lung, head and neck, liver, or pancreatic cancer (n = 10 each) were evaluated. Root mean square errors (RMSEs) between the results calculated by our system and the reference positions were assessed. The reference positions were determined manually by two radiological technologists as the best-matching positions between orthogonal DRs and DRRs in six axial directions. The ROI size dependence was evaluated by comparing RMSEs for three different ROI sizes. Additionally, dependence on the initial position parameters was evaluated by comparing RMSEs for four position patterns. For the phantom study, the average (± standard deviation) translation error was 0.17 ± 0.05, the rotation error was 0.17 ± 0.07, and ΔD was 0.14 ± 0.05. Using the optimal ROI size for each site, all cases of prostate, lung, and head and neck cancer with initial position parameters of 10 mm or under were within our tolerance. However, only four liver cancer cases and three pancreatic cancer cases were acceptable, because of low-reproducibility regions in the ROIs. Our system is clinically practical for prostate, lung, and head and neck cancer cases. Additionally, our findings suggest ROI size dependence in some cases. © 2018 The Authors. 
Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  7. The effect of tropospheric fluctuations on the accuracy of water vapor radiometry

    NASA Technical Reports Server (NTRS)

    Wilcox, J. Z.

    1992-01-01

    Line-of-sight path delay calibration accuracies of 1 mm are needed to improve both angular and Doppler tracking capabilities. Fluctuations in the refractivity of tropospheric water vapor limit present accuracies to about 1 nrad for angular position and to a delay rate of 3x10(exp -13) sec/sec over a 100-sec time interval for Doppler tracking. This article describes progress in evaluating the limitations of the technique of water vapor radiometry at the 1-mm level. The two effects evaluated here are: (1) errors arising from tip-curve calibration of water vapor radiometers (WVRs) in the presence of tropospheric fluctuations and (2) errors due to the use of nonzero beamwidths for WVR horns. The error caused by tropospheric water vapor fluctuations during instrument calibration from a single tip curve is 0.26 percent in the estimated gain for a tip-curve duration of several minutes or less. This gain error causes a 3-mm bias and a 1-mm scale factor error in the estimated path delay at a 10-deg elevation per 1 g/cm(sup 2) of zenith water vapor column density present in the troposphere during the astrometric observation. The error caused by WVR beam averaging of tropospheric fluctuations is 3 mm at a 10-deg elevation per 1 g/cm(sup 2) of zenith water vapor (and is proportionally higher for higher water vapor content) for current WVR beamwidths (full width at half maximum of approximately 6 deg). This is a stochastic error that cannot be calibrated out, but it can be reduced to about half of its instantaneous value by time-averaging the radio signal over several minutes. The results presented here suggest two improvements to WVR design: first, the gain of the instruments should be stabilized to 4 parts in 10(exp 4) over a calibration period lasting 5 hours, and second, the WVR antenna beamwidth should be reduced to about 0.2 deg. 
This will reduce the error induced by water vapor fluctuations in the estimated path delays to less than 1 mm for the elevation range from zenith to 6 deg for most observation weather conditions.

  8. Method of estimating natural recharge to the Edwards Aquifer in the San Antonio area, Texas

    USGS Publications Warehouse

    Puente, Celso

    1978-01-01

    The principal errors in the estimates of annual recharge are related to errors in estimating runoff in ungaged areas, which represent about 30 percent of the infiltration area. The estimated long-term average annual recharge in each basin, however, is probably representative of the actual recharge because the averaging procedure tends to cancel out the major errors.

  9. Reconstruction of regional mean temperature for East Asia since 1900s and its uncertainties

    NASA Astrophysics Data System (ADS)

    Hua, W.

    2017-12-01

    Regional average surface air temperature (SAT) is one of the key variables used to investigate climate change. Unfortunately, because of the limited observations over East Asia, there are gaps in the observational data sampling for regional mean SAT analysis, which is important for estimating past climate change. In this study, the regional average temperature of East Asia since the 1900s is calculated with an Empirical Orthogonal Function (EOF)-based optimal interpolation method that takes data errors into account. The results show that our estimate is more precise and robust than the results from a simple average, which provides a better approach for past-climate reconstruction. In addition to the reconstructed regional average SAT anomaly time series, we also estimated the uncertainties of the reconstruction. The root mean square error (RMSE) results show that the error decreases with time and is not large enough to alter conclusions about the persistent warming in East Asia during the twenty-first century. Moreover, a test of the influence of data error on the reconstruction clearly shows the sensitivity of the reconstruction to the size of the data error.
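    The RMSE used above to quantify reconstruction uncertainty is a straightforward computation; a minimal sketch (the anomaly values below are invented for illustration, not the paper's data):

```python
import math

def rmse(estimates, references):
    """Root mean square error between reconstructed and reference series."""
    assert len(estimates) == len(references)
    n = len(estimates)
    return math.sqrt(sum((e - r) ** 2 for e, r in zip(estimates, references)) / n)

# Illustrative regional-mean SAT anomalies (degrees C); values are made up.
recon = [0.12, -0.05, 0.30, 0.41]
truth = [0.10, -0.02, 0.33, 0.38]
err = rmse(recon, truth)
```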

  10. Classification of radiological errors in chest radiographs, using support vector machine on the spatial frequency features of false- negative and false-positive regions

    NASA Astrophysics Data System (ADS)

    Pietrzyk, Mariusz W.; Donovan, Tim; Brennan, Patrick C.; Dix, Alan; Manning, David J.

    2011-03-01

    Aim: To optimize automated classification of radiological errors during lung nodule detection from chest radiographs (CxR) using a support vector machine (SVM) run on spatial frequency features extracted from the local background of selected regions. Background: The majority of unreported pulmonary nodules are visually detected but not recognized, as shown by prolonged dwell times at false-negative regions. Similarly, overestimated nodule locations capture substantial amounts of foveal attention. Spatial frequency properties of selected local backgrounds are correlated with human observer responses, both in the accuracy of indicating abnormality position and in the precision of visually sampling the medical images. Methods: Seven radiologists participated in eye-tracking experiments conducted under conditions of pulmonary nodule detection from a set of 20 postero-anterior CxR. The most-dwelled locations were identified and subjected to spatial frequency (SF) analysis. Image-based features of the selected ROIs were extracted with the undecimated Wavelet Packet Transform. An analysis of variance was run to select SF features, and an SVM scheme was implemented to classify false negatives and false positives among all ROIs. Results: Relatively high overall accuracy was obtained for each individually developed Wavelet-SVM algorithm, with over 90% average correct classification of errors among all prolonged dwell locations. Conclusion: The preliminary results show that combined eye-tracking and image-based features can be used for automated detection of radiological errors with an SVM. The work is still in progress and not all analytical procedures have been completed, which might affect the specificity of the algorithm.

  11. An electrophysiological signal that precisely tracks the emergence of error awareness

    PubMed Central

    Murphy, Peter R.; Robertson, Ian H.; Allen, Darren; Hester, Robert; O'Connell, Redmond G.

    2012-01-01

    Recent electrophysiological research has sought to elucidate the neural mechanisms necessary for the conscious awareness of action errors. Much of this work has focused on the error positivity (Pe), a neural signal that is specifically elicited by errors that have been consciously perceived. While awareness appears to be an essential prerequisite for eliciting the Pe, the precise functional role of this component has not been identified. Twenty-nine participants performed a novel variant of the Go/No-go Error Awareness Task (EAT) in which awareness of commission errors was indicated via a separate speeded manual response. Independent component analysis (ICA) was used to isolate the Pe from other stimulus- and response-evoked signals. Single-trial analysis revealed that Pe peak latency was highly correlated with the latency at which awareness was indicated. Furthermore, the Pe was more closely related to the timing of awareness than it was to the initial erroneous response. This finding was confirmed in a separate study which derived IC weights from a control condition in which no indication of awareness was required, thus ruling out motor confounds. A receiver-operating-characteristic (ROC) curve analysis showed that the Pe could reliably predict whether an error would be consciously perceived up to 400 ms before the average awareness response. Finally, Pe latency and amplitude were found to be significantly correlated with overall error awareness levels between subjects. Our data show for the first time that the temporal dynamics of the Pe trace the emergence of error awareness. These findings have important implications for interpreting the results of clinical EEG studies of error processing. PMID:22470332

  12. SU-F-J-34: Automatic Target-Based Patient Positioning Framework for Image-Guided Radiotherapy in Prostate Cancer Treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sasahara, M; Arimura, H; Hirose, T

    Purpose: The current image-guided radiotherapy (IGRT) procedure is bone-based patient positioning, followed by subjective manual correction using cone-beam computed tomography (CBCT). This procedure can cause misalignment in patient positioning. Automatic target-based patient positioning systems achieve better reproducibility of patient setup. Our aim in this study was to develop an automatic target-based patient positioning framework for IGRT with CBCT images in prostate cancer treatment. Methods: Seventy-three CBCT images of 10 patients and 24 planning CT images with digital imaging and communications in medicine for radiotherapy (DICOM-RT) structures were used for this study. Our proposed framework starts from the generation of probabilistic atlases of bone and prostate from the 24 planning CT images and the prostate contours made in treatment planning. Next, gray-scale histograms of CBCT values within CTV regions in the planning CT images were obtained as the occurrence probability of the CBCT values. Then, CBCT images were registered to the atlases using a rigid registration with mutual information. Finally, prostate regions were estimated by applying Bayesian inference to CBCT images with the probabilistic atlases and the CBCT-value occurrence probability. The proposed framework was evaluated by calculating the Euclidean distance between the two centroids of the prostate regions determined by our method and the ground truths of manual delineations by a radiation oncologist and a medical physicist on CBCT images for 10 patients. Results: The average Euclidean distance between the centroids of extracted prostate regions determined by our proposed method and the ground truths was 4.4 mm. The average errors in each direction were 1.8 mm anteroposterior, 0.6 mm lateral, and 2.1 mm craniocaudal. 
Conclusion: Our proposed framework based on probabilistic atlases and Bayesian inference might be feasible for automatically determining prostate regions on CBCT images.
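    The evaluation metric described, the Euclidean distance between automatically and manually determined prostate centroids together with its per-axis components, can be sketched as follows (the point sets and coordinates are illustrative toys, not patient data):

```python
import math

def centroid(points):
    """Centroid of a set of 3D points (coordinates in mm)."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def centroid_error(auto_points, manual_points):
    """Euclidean distance between the auto-segmented and manually
    delineated centroids, plus the per-axis (e.g. AP/LR/CC) errors."""
    ca, cm = centroid(auto_points), centroid(manual_points)
    per_axis = tuple(abs(a - m) for a, m in zip(ca, cm))
    return math.dist(ca, cm), per_axis

auto = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)]   # toy segmented region
manual = [(1.0, 1.0, 0.0)]                  # toy ground-truth delineation
err, per_axis = centroid_error(auto, manual)
```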

  13. Sci-Thur AM: YIS – 05: Prediction of lung tumor motion using a generalized neural network optimized from the average prediction outcome of a group of patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teo, Troy; Alayoubi, Nadia; Bruce, Neil

    Purpose: In image-guided adaptive radiotherapy systems, prediction of tumor motion is required to compensate for system latencies. However, due to the non-stationary nature of respiration, predicting the associated tumor motion is a challenge. In this work, we present a systematic design of a neural network (NN) using a mixture of online data acquired during the initial period of the tumor trajectory, coupled with a generalized model optimized using a group of patient data obtained offline. Methods: The average error surface obtained from seven patients was used to determine the input data size and the number of hidden neurons for the generalized NN. To reduce training time, instead of using random weights to initialize learning (method 1), weights inherited from previous training batches (method 2) were used to predict tumor position for each sliding window. Results: The generalized network was established with 35 input data points (∼4.66 s) and 20 hidden nodes. For a prediction horizon of 650 ms, mean absolute errors of 0.73 mm and 0.59 mm were obtained for methods 1 and 2, respectively. An average initial learning period of 8.82 s was obtained. Conclusions: A network with a relatively short initial learning time was achieved. Its accuracy is comparable to previous studies. This network could be used as a plug-and-play predictor in which (a) tumor positions can be predicted as soon as treatment begins and (b) the need for pretreatment data and optimization for individual patients can be avoided.
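    The sliding-window formulation above (35 recent position samples in, one future position out) can be sketched as a dataset builder; the 5-sample horizon stands in for the 650 ms latency, and the trajectory is a placeholder, not patient data:

```python
def sliding_windows(trajectory, n_inputs=35, horizon=5):
    """Build (input, target) pairs for NN-based tumor-motion prediction.

    Each input is the latest n_inputs position samples; the target is the
    position horizon samples ahead, compensating for system latency.
    """
    pairs = []
    for t in range(n_inputs, len(trajectory) - horizon + 1):
        window = trajectory[t - n_inputs:t]   # most recent samples
        target = trajectory[t + horizon - 1]  # future position to predict
        pairs.append((window, target))
    return pairs

traj = list(range(100))          # stand-in for a sampled breathing trace
pairs = sliding_windows(traj)
```

    Warm-starting each window's training from the previous window's weights (method 2 in the abstract) is what shortens the initial learning period relative to random re-initialization.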

  14. Local indicators of geocoding accuracy (LIGA): theory and application

    PubMed Central

    Jacquez, Geoffrey M; Rommel, Robert

    2009-01-01

    Background Although sources of positional error in geographic locations (e.g. geocoding error) used for describing and modeling spatial patterns are widely acknowledged, research on how such error impacts the statistical results has been limited. In this paper we explore techniques for quantifying the perturbability of spatial weights to different specifications of positional error. Results We find that a family of curves describes the relationship between perturbability and positional error, and use these curves to evaluate sensitivity of alternative spatial weight specifications to positional error both globally (when all locations are considered simultaneously) and locally (to identify those locations that would benefit most from increased geocoding accuracy). We evaluate the approach in simulation studies, and demonstrate it using a case-control study of bladder cancer in south-eastern Michigan. Conclusion Three results are significant. First, the shape of the probability distributions of positional error (e.g. circular, elliptical, cross) has little impact on the perturbability of spatial weights, which instead depends on the mean positional error. Second, our methodology allows researchers to evaluate the sensitivity of spatial statistics to positional accuracy for specific geographies. This has substantial practical implications since it makes possible routine sensitivity analysis of spatial statistics to positional error arising in geocoded street addresses, global positioning systems, LIDAR and other geographic data. Third, those locations with high perturbability (most sensitive to positional error) and high leverage (that contribute the most to the spatial weight being considered) will benefit the most from increased positional accuracy. These are rapidly identified using a new visualization tool we call the LIGA scatterplot. 
Herein lies a paradox for spatial analysis: for a given level of positional error, increasing sample density to more accurately follow the underlying population distribution increases perturbability and introduces error into the spatial weights matrix. In some studies positional error may not impact the statistical results; in others it might invalidate them. We therefore must understand the relationships between positional accuracy and the perturbability of the spatial weights in order to have confidence in a study's results. PMID:19863795

  15. New reusable elastomer electrodes for assessing body composition

    NASA Astrophysics Data System (ADS)

    Moreno, M.-V.; Chaset, L.; Bittner, P. A.; Barthod, C.; Passard, M.

    2013-04-01

    The development of telemedicine requires reusable-electrode solutions for use in patients' homes. The objective of this study was to evaluate the suitability of reusable elastomer electrodes for measuring body composition. We measured a population of healthy Caucasians (n = 17). One measurement was made with a reference device, the Xitron®, with AgCl gel electrodes (Gel), and another with the multifrequency impedance meter Z-Metrix® with reusable elastomer electrodes (Elast). We obtained low variability, with an average repeatability error of 0.39% for Re and 0.32% for Rinf. There was a non-significant difference (t-test P > 0.1) of about 200 ml in extracellular water Ve measured with Gel and Elast in the supine and standing positions. For total body water Vt, we note non-significant differences (t-test P > 0.1) of about 100 ml and 2.2 l in the supine and standing positions, respectively. The results show low dispersion, with R² above 0.90 and a 1.5% maximal error between Gel and Elast on Ve in the standing position. With a few precautions, it appears possible to use elastomer electrodes for assessing body composition.

  16. Beck's cognitive theory and the response style theory of depression in adolescents with and without mild to borderline intellectual disability.

    PubMed

    Weeland, Martine M; Nijhof, Karin S; Otten, R; Vermaes, Ignace P R; Buitelaar, Jan K

    2017-10-01

    This study tests the validity of Beck's cognitive theory and Nolen-Hoeksema's response style theory of depression in adolescents with and without MBID. The relationship between negative cognitive errors (Beck), response styles (Nolen-Hoeksema) and depressive symptoms was examined in 135 adolescents using linear regression. The cognitive error 'underestimation of the ability to cope' was more prevalent among adolescents with MBID than among adolescents with average intelligence. This was the only negative cognitive error that predicted depressive symptoms. There were no differences between groups in the prevalence of the three response styles. In line with the theory, ruminating was positively and problem-solving was negatively related to depressive symptoms. Distractive response styles were not related to depressive symptoms. The relationship between response styles, cognitive errors and depressive symptoms were similar for both groups. The main premises of both theories of depression are equally applicable to adolescents with and without MBID. The cognitive error 'Underestimation of the ability to cope' poses a specific risk factor for developing a depression for adolescents with MBID and requires special attention in treatment and prevention of depression. WHAT THIS PAPER ADDS?: Despite the high prevalence of depression among adolescents with MBID, little is known about the etiology and cognitive processes that play a role in the development of depression in this group. The current paper fills this gap in research by examining the core tenets of two important theories on the etiology of depression (Beck's cognitive theory and Nolen-Hoeksema's response style theory) in a clinical sample of adolescents with and without MBID. This paper demonstrated that the theories are equally applicable to adolescents with MBID, as to adolescents with average intellectual ability. 
However, the cognitive bias 'underestimation of the ability to cope' was the only cognitive error related to depressive symptoms, and it was much more prevalent among adolescents with MBID than among adolescents with average intellectual ability. This suggests that underestimating one's coping skills may be a unique risk factor for depression among adolescents with MBID. This knowledge is important for understanding the causes and perpetuating mechanisms of depression in adolescents with MBID, and for the development of prevention and treatment programs for this group. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Performance improvements of wavelength-shifting-fiber neutron detectors using high-resolution positioning algorithms

    DOE PAGES

    Wang, C. L.

    2016-05-17

    On the basis of the FluoroBancroft linear-algebraic method [S.B. Andersson, Opt. Exp. 16, 18714 (2008)], three highly resolved positioning methods are proposed for wavelength-shifting fiber (WLSF) neutron detectors. Using a Gaussian or exponential-decay light-response function (LRF), the non-linear relation of photon-number profiles vs. x-pixels was linearized and neutron positions were determined. The proposed algorithms give an average position error of 0.03-0.08 pixels, much smaller than that (0.29 pixels) from a traditional maximum photon algorithm (MPA). The new algorithms result in better detector uniformity, less position misassignment (ghosting), better spatial resolution, and equivalent or better instrument resolution in powder diffraction than the MPA. Moreover, these characteristics will facilitate broader applications of WLSF detectors at time-of-flight neutron powder diffraction beamlines, including single-crystal diffraction and texture analysis.
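    The linearization idea, that for a Gaussian LRF the log of the photon-number profile is quadratic in pixel position so the peak can be located by linear algebra, can be illustrated with the classic three-point vertex estimator below. This is a simplified stand-in for the FluoroBancroft-based algorithms, run on synthetic noiseless counts:

```python
def gaussian_center_3pt(x, logn):
    """Locate a Gaussian peak from three equally spaced samples.

    For a Gaussian light-response function, ln(counts) is a parabola in
    pixel position, so its vertex follows from a linear formula.
    x    -- three equally spaced pixel positions around the maximum
    logn -- ln of the photon counts at those positions
    """
    d = x[1] - x[0]                       # pixel spacing
    num = logn[0] - logn[2]
    den = logn[0] - 2.0 * logn[1] + logn[2]
    return x[1] + 0.5 * d * num / den     # parabola vertex

# Synthetic noiseless counts from a Gaussian centred at pixel 10.3
x0, sigma = 10.3, 1.2
xs = [9.0, 10.0, 11.0]
logn = [-(xi - x0) ** 2 / (2.0 * sigma ** 2) for xi in xs]
est = gaussian_center_3pt(xs, logn)
```

    With noiseless data the estimator recovers the sub-pixel center exactly; with counting noise, sub-pixel accuracy of this kind is what improves on a simple maximum-photon pixel assignment.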

  18. Accuracy of lesion boundary tracking in navigated breast tumor excision

    NASA Astrophysics Data System (ADS)

    Heffernan, Emily; Ungi, Tamas; Vaughan, Thomas; Pezeshki, Padina; Lasso, Andras; Gauvin, Gabrielle; Rudan, John; Engel, C. Jay; Morin, Evelyn; Fichtinger, Gabor

    2016-03-01

    PURPOSE: An electromagnetic navigation system for tumor excision in breast-conserving surgery has recently been developed. Preoperatively, a hooked needle is positioned in the tumor and the tumor boundaries are defined in the needle coordinate system. The needle is tracked electromagnetically throughout the procedure to localize the tumor. However, the needle may move and the tissue may deform, leading to errors in maintaining a correct excision boundary. It is imperative to quantify these errors so that the surgeon can choose an appropriate resection margin. METHODS: A commercial breast biopsy phantom with several inclusions was used. The location and shape of a lesion before and after mechanical deformation were determined using 3D ultrasound volumes. Tumor location and shape were estimated from the initial contours and tracking data. The difference between the estimated and actual location and shape of the lesion after deformation was quantified using the Hausdorff distance. Data collection and analysis were done using our 3D Slicer software application and the PLUS toolkit. RESULTS: The deformation of the breast resulted in an average boundary displacement of 3.72 mm (STD 0.67 mm) for an isoelastic lesion and 3.88 mm (STD 0.43 mm) for a hyperelastic lesion. The difference between the actual and estimated tracked tumor boundary was 0.88 mm (STD 0.20 mm) for the isoelastic and 1.78 mm (STD 0.18 mm) for the hyperelastic lesion. CONCLUSION: The average lesion boundary tracking error was below 2 mm, which is clinically acceptable. We suspect that the stiffness of the phantom tissue affected the error measurements. Results will be validated in patient studies.
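    The Hausdorff distance used to quantify the boundary difference is the largest of all nearest-point distances between two point sets; a minimal sketch with toy 2D boundary points (not the study's ultrasound data):

```python
import math

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two boundary point sets."""
    def directed(p, q):
        # largest distance from a point of p to its nearest point in q
        return max(min(math.dist(u, v) for v in q) for u in p)
    return max(directed(a, b), directed(b, a))

pre = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]    # boundary before deformation
post = [(0.0, 0.5), (1.0, 0.0), (0.0, 1.5)]   # boundary after deformation
d = hausdorff(pre, post)
```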

  19. Detection of the pairwise kinematic Sunyaev-Zel'dovich effect with BOSS DR11 and the Atacama Cosmology Telescope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernardis, F. De; Aiola, S.; Vavagiakis, E. M.

    Here, we present a new measurement of the kinematic Sunyaev-Zel'dovich effect using data from the Atacama Cosmology Telescope (ACT) and the Baryon Oscillation Spectroscopic Survey (BOSS). Using 600 square degrees of overlapping sky area, we evaluate the mean pairwise baryon momentum associated with the positions of 50,000 bright galaxies in the BOSS DR11 Large Scale Structure catalog. A non-zero signal arises from the large-scale motions of halos containing the sample galaxies. The data fit an analytical signal model well, with the optical depth to microwave photon scattering as a free parameter determining the overall signal amplitude. We estimate the covariance matrix of the mean pairwise momentum as a function of galaxy separation, using microwave sky simulations, jackknife evaluation, and bootstrap estimates. The most conservative simulation-based errors give signal-to-noise estimates between 3.6 and 4.1 for varying galaxy luminosity cuts. We discuss how the other error determinations can lead to higher signal-to-noise values, and consider the impact of several possible systematic errors. Estimates of the optical depth from the average thermal Sunyaev-Zel'dovich signal at the sample galaxy positions are broadly consistent with those obtained from the mean pairwise momentum signal.

  20. Detection of the pairwise kinematic Sunyaev-Zel'dovich effect with BOSS DR11 and the Atacama Cosmology Telescope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernardis, F. De; Vavagiakis, E.M.; Niemack, M.D.

    We present a new measurement of the kinematic Sunyaev-Zel'dovich effect using data from the Atacama Cosmology Telescope (ACT) and the Baryon Oscillation Spectroscopic Survey (BOSS). Using 600 square degrees of overlapping sky area, we evaluate the mean pairwise baryon momentum associated with the positions of 50,000 bright galaxies in the BOSS DR11 Large Scale Structure catalog. A non-zero signal arises from the large-scale motions of halos containing the sample galaxies. The data fits an analytical signal model well, with the optical depth to microwave photon scattering as a free parameter determining the overall signal amplitude. We estimate the covariance matrix of the mean pairwise momentum as a function of galaxy separation, using microwave sky simulations, jackknife evaluation, and bootstrap estimates. The most conservative simulation-based errors give signal-to-noise estimates between 3.6 and 4.1 for varying galaxy luminosity cuts. We discuss how the other error determinations can lead to higher signal-to-noise values, and consider the impact of several possible systematic errors. Estimates of the optical depth from the average thermal Sunyaev-Zel'dovich signal at the sample galaxy positions are broadly consistent with those obtained from the mean pairwise momentum signal.

  1. Detection of the Pairwise Kinematic Sunyaev-Zel'dovich Effect with BOSS DR11 and the Atacama Cosmology Telescope

    NASA Technical Reports Server (NTRS)

    De Bernardis, F.; Aiola, S.; Vavagiakis, E. M.; Battaglia, N.; Niemack, M. D.; Beall, J.; Becker, D. T.; Bond, J. R.; Calabrese, E.; Cho, H.

    2017-01-01

    We present a new measurement of the kinematic Sunyaev-Zel'dovich effect using data from the Atacama Cosmology Telescope (ACT) and the Baryon Oscillation Spectroscopic Survey (BOSS). Using 600 square degrees of overlapping sky area, we evaluate the mean pairwise baryon momentum associated with the positions of 50,000 bright galaxies in the BOSS DR11 Large Scale Structure catalog. A non-zero signal arises from the large-scale motions of halos containing the sample galaxies. The data fits an analytical signal model well, with the optical depth to microwave photon scattering as a free parameter determining the overall signal amplitude. We estimate the covariance matrix of the mean pairwise momentum as a function of galaxy separation, using microwave sky simulations, jackknife evaluation, and bootstrap estimates. The most conservative simulation-based errors give signal-to-noise estimates between 3.6 and 4.1 for varying galaxy luminosity cuts. We discuss how the other error determinations can lead to higher signal-to-noise values, and consider the impact of several possible systematic errors. Estimates of the optical depth from the average thermal Sunyaev-Zel'dovich signal at the sample galaxy positions are broadly consistent with those obtained from the mean pairwise momentum signal.

  2. Detection of the pairwise kinematic Sunyaev-Zel'dovich effect with BOSS DR11 and the Atacama Cosmology Telescope

    NASA Astrophysics Data System (ADS)

    De Bernardis, F.; Aiola, S.; Vavagiakis, E. M.; Battaglia, N.; Niemack, M. D.; Beall, J.; Becker, D. T.; Bond, J. R.; Calabrese, E.; Cho, H.; Coughlin, K.; Datta, R.; Devlin, M.; Dunkley, J.; Dunner, R.; Ferraro, S.; Fox, A.; Gallardo, P. A.; Halpern, M.; Hand, N.; Hasselfield, M.; Henderson, S. W.; Hill, J. C.; Hilton, G. C.; Hilton, M.; Hincks, A. D.; Hlozek, R.; Hubmayr, J.; Huffenberger, K.; Hughes, J. P.; Irwin, K. D.; Koopman, B. J.; Kosowsky, A.; Li, D.; Louis, T.; Lungu, M.; Madhavacheril, M. S.; Maurin, L.; McMahon, J.; Moodley, K.; Naess, S.; Nati, F.; Newburgh, L.; Nibarger, J. P.; Page, L. A.; Partridge, B.; Schaan, E.; Schmitt, B. L.; Sehgal, N.; Sievers, J.; Simon, S. M.; Spergel, D. N.; Staggs, S. T.; Stevens, J. R.; Thornton, R. J.; van Engelen, A.; Van Lanen, J.; Wollack, E. J.

    2017-03-01

    We present a new measurement of the kinematic Sunyaev-Zel'dovich effect using data from the Atacama Cosmology Telescope (ACT) and the Baryon Oscillation Spectroscopic Survey (BOSS). Using 600 square degrees of overlapping sky area, we evaluate the mean pairwise baryon momentum associated with the positions of 50,000 bright galaxies in the BOSS DR11 Large Scale Structure catalog. A non-zero signal arises from the large-scale motions of halos containing the sample galaxies. The data fits an analytical signal model well, with the optical depth to microwave photon scattering as a free parameter determining the overall signal amplitude. We estimate the covariance matrix of the mean pairwise momentum as a function of galaxy separation, using microwave sky simulations, jackknife evaluation, and bootstrap estimates. The most conservative simulation-based errors give signal-to-noise estimates between 3.6 and 4.1 for varying galaxy luminosity cuts. We discuss how the other error determinations can lead to higher signal-to-noise values, and consider the impact of several possible systematic errors. Estimates of the optical depth from the average thermal Sunyaev-Zel'dovich signal at the sample galaxy positions are broadly consistent with those obtained from the mean pairwise momentum signal.

  3. Detection of the pairwise kinematic Sunyaev-Zel'dovich effect with BOSS DR11 and the Atacama Cosmology Telescope

    DOE PAGES

    Bernardis, F. De; Aiola, S.; Vavagiakis, E. M.; ...

    2017-03-07

    Here, we present a new measurement of the kinematic Sunyaev-Zel'dovich effect using data from the Atacama Cosmology Telescope (ACT) and the Baryon Oscillation Spectroscopic Survey (BOSS). Using 600 square degrees of overlapping sky area, we evaluate the mean pairwise baryon momentum associated with the positions of 50,000 bright galaxies in the BOSS DR11 Large Scale Structure catalog. A non-zero signal arises from the large-scale motions of halos containing the sample galaxies. The data fits an analytical signal model well, with the optical depth to microwave photon scattering as a free parameter determining the overall signal amplitude. We estimate the covariance matrix of the mean pairwise momentum as a function of galaxy separation, using microwave sky simulations, jackknife evaluation, and bootstrap estimates. The most conservative simulation-based errors give signal-to-noise estimates between 3.6 and 4.1 for varying galaxy luminosity cuts. We discuss how the other error determinations can lead to higher signal-to-noise values, and consider the impact of several possible systematic errors. Estimates of the optical depth from the average thermal Sunyaev-Zel'dovich signal at the sample galaxy positions are broadly consistent with those obtained from the mean pairwise momentum signal.

  4. Accounting for heterogeneous treatment effects in the FDA approval process.

    PubMed

    Malani, Anup; Bembom, Oliver; van der Laan, Mark

    2012-01-01

    The FDA employs an average-patient standard when reviewing drugs: it approves a drug only if it is safe and effective for the average patient in a clinical trial. It is common, however, for patients to respond differently to a drug. Therefore, the average-patient standard can reject a drug that benefits certain patient subgroups (false negatives) and even approve a drug that harms other patient subgroups (false positives). These errors increase the cost of drug development - and thus health care - by wasting research on unproductive or unapproved drugs. The reason why the FDA sticks with an average-patient standard is concern about opportunism by drug companies. With enough data dredging, a drug company can always find some subgroup of patients that appears to benefit from its drug, even if the subgroup truly does not. In this paper we offer alternatives to the average-patient standard that reduce the risk of false negatives without increasing false positives from drug company opportunism. These proposals combine changes to institutional design - evaluation of trial data by an independent auditor - with statistical tools to reinforce the new institutional design - specifically, to ensure the auditor is truly independent of drug companies. We illustrate our proposals by applying them to the results of a recent clinical trial of a cancer drug (motexafin gadolinium). Our analysis suggests that the FDA may have made a mistake in rejecting that drug.

  5. Research on the error model of airborne celestial/inertial integrated navigation system

    NASA Astrophysics Data System (ADS)

    Zheng, Xiaoqiang; Deng, Xiaoguo; Yang, Xiaoxu; Dong, Qiang

    2015-02-01

    The celestial navigation subsystem of an airborne celestial/inertial integrated navigation system periodically corrects the positioning error and heading drift of the inertial navigation system, allowing the inertial system to maintain high accuracy over long-endurance flights. The accuracy of the celestial navigation subsystem therefore directly determines the accuracy of the integrated system over long missions. By building a mathematical model of the airborne celestial navigation system based on the inertial navigation system and applying linear coordinate transformations, we establish the error transfer equation for the positioning algorithm of the airborne celestial system and, from it, the positioning error model of the celestial navigation. Based on this model, we analyze and simulate in MATLAB the positioning error caused by errors of the star-tracking platform. Finally, the positioning error model is verified using star observations from an optical measurement device on a test range at a known location. The analysis and simulation results show that the level and north accuracies of the tracking platform are the main factors limiting the positioning accuracy of airborne celestial navigation systems, and that the positioning error has an approximately linear relationship with the level and north errors of the tracking platform. The verification errors are within 1000 m, which supports the correctness of the model.
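
The near-linear relationship reported above is what a small-angle argument predicts: a tilt of theta radians in the platform's local vertical shifts the astronomically determined position by roughly R_earth * theta. This is a hypothetical back-of-envelope sketch, not the paper's error model:

```python
import math

R_EARTH = 6_371_000.0  # mean Earth radius, m

def position_error_m(level_error_rad, north_error_rad):
    # Small-angle model: combined platform tilt maps to ground
    # distance via arc length R * theta (assumed, for illustration)
    tilt = math.hypot(level_error_rad, north_error_rad)
    return R_EARTH * tilt
```

Under this assumption, a level error of about 30 arc seconds (~1.45e-4 rad) alone already produces a positioning error near 0.9 km, the same order as the ~1000 m verification errors quoted.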

  6. Real-Time Verification of a High-Dose-Rate Iridium 192 Source Position Using a Modified C-Arm Fluoroscope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nose, Takayuki, E-mail: nose-takayuki@nms.ac.jp; Chatani, Masashi; Otani, Yuki

    Purpose: High-dose-rate (HDR) brachytherapy misdeliveries can occur at any institution, and they can cause disastrous results. Even a patient's death has been reported. Misdeliveries could be avoided with real-time verification methods. In 1996, we developed a modified C-arm fluoroscopic verification of an HDR Iridium 192 source position to prevent these misdeliveries. This method provided excellent image quality sufficient to detect errors, and it has been in clinical use at our institutions for 20 years. The purpose of the current study is to introduce the mechanisms and validity of our straightforward C-arm fluoroscopic verification method. Methods and Materials: Conventional X-ray fluoroscopic images are degraded by spurious signals and quantum noise from Iridium 192 photons, which make source verification impractical. To improve image quality, we quadrupled the C-arm fluoroscopic X-ray dose per pulse. The pulse rate was reduced by a factor of 4 to keep the average exposure compliant with Japanese medical regulations. The images were then displayed with quarter-frame rates. Results: Sufficient quality was obtained to enable observation of the source position relative to both the applicators and the anatomy. With this method, 2 errors were detected among 2031 treatment sessions for 370 patients within a 6-year period. Conclusions: With the use of a modified C-arm fluoroscopic verification method, treatment errors that were otherwise overlooked were detected in real time. This method should be given consideration for widespread use.

  7. Real-Time Verification of a High-Dose-Rate Iridium 192 Source Position Using a Modified C-Arm Fluoroscope.

    PubMed

    Nose, Takayuki; Chatani, Masashi; Otani, Yuki; Teshima, Teruki; Kumita, Shinichirou

    2017-03-15

    High-dose-rate (HDR) brachytherapy misdeliveries can occur at any institution, and they can cause disastrous results. Even a patient's death has been reported. Misdeliveries could be avoided with real-time verification methods. In 1996, we developed a modified C-arm fluoroscopic verification of an HDR Iridium 192 source position to prevent these misdeliveries. This method provided excellent image quality sufficient to detect errors, and it has been in clinical use at our institutions for 20 years. The purpose of the current study is to introduce the mechanisms and validity of our straightforward C-arm fluoroscopic verification method. Conventional X-ray fluoroscopic images are degraded by spurious signals and quantum noise from Iridium 192 photons, which make source verification impractical. To improve image quality, we quadrupled the C-arm fluoroscopic X-ray dose per pulse. The pulse rate was reduced by a factor of 4 to keep the average exposure compliant with Japanese medical regulations. The images were then displayed with quarter-frame rates. Sufficient quality was obtained to enable observation of the source position relative to both the applicators and the anatomy. With this method, 2 errors were detected among 2031 treatment sessions for 370 patients within a 6-year period. With the use of a modified C-arm fluoroscopic verification method, treatment errors that were otherwise overlooked were detected in real time. This method should be given consideration for widespread use. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Accuracy of a Basketball Indoor Tracking System Based on Standard Bluetooth Low Energy Channels (NBN23®).

    PubMed

    Figueira, Bruno; Gonçalves, Bruno; Folgado, Hugo; Masiulis, Nerijus; Calleja-González, Julio; Sampaio, Jaime

    2018-06-14

    The present study aims to identify the accuracy of the NBN23® system, an indoor tracking system based on radio-frequency and standard Bluetooth Low Energy channels. Twelve capture tags were attached to a custom cart with fixed distances of 0.5, 1.0, 1.5, and 1.8 m. The cart was pushed along a predetermined course following the lines of a standard-dimensions basketball court. The course was performed at low speed (<10.0 km/h), medium speed (>10.0 km/h and <20.0 km/h) and high speed (>20.0 km/h). Root mean square error (RMSE) and percentage of variance accounted for (%VAF) were used as accuracy measures. The obtained data showed acceptable accuracy results for both RMSE and %VAF, despite the expected degree of error in position measurement at higher speeds. The RMSE for all the distances and velocities presented an average absolute error of 0.30 ± 0.13 cm with a %VAF of 90.61 ± 8.34, in line with most available systems, and considered acceptable for indoor sports. The processing of data with filter correction seemed to reduce the noise and promote a lower relative error, increasing the %VAF for each measured distance. Research using position-derived variables in basketball is still very scarce; thus, this independent test of the NBN23® tracking system provides accuracy details and opens up opportunities to develop new performance indicators that help to optimize training adaptations and performance.
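
The two accuracy measures above can be computed as follows. The %VAF definition used here (one minus the ratio of residual variance to reference-signal variance, times 100) is a common convention but an assumption on our part, since the record does not spell out the formula:

```python
def rmse(reference, measured):
    # Root mean square error between reference and measured positions
    n = len(reference)
    return (sum((r - m) ** 2 for r, m in zip(reference, measured)) / n) ** 0.5

def pct_vaf(reference, measured):
    # Percent variance accounted for:
    # 100 * (1 - var(residual) / var(reference))  [assumed definition]
    mean_ref = sum(reference) / len(reference)
    var_ref = sum((r - mean_ref) ** 2 for r in reference)
    resid = [r - m for r, m in zip(reference, measured)]
    mean_res = sum(resid) / len(resid)
    var_res = sum((e - mean_res) ** 2 for e in resid)
    return 100.0 * (1.0 - var_res / var_ref)
```

Note that under this definition a constant offset between tracks leaves %VAF at 100 while still showing up in the RMSE, which is one reason the two measures are reported together.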

  9. SU-E-J-58: Comparison of Conformal Tracking Methods Using Initial, Adaptive and Preceding Image Frames for Image Registration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teo, P; Guo, K; Alayoubi, N

    Purpose: Accounting for tumor motion during radiation therapy is important to ensure that the tumor receives the prescribed dose. Increasing the field size to account for this motion exposes the surrounding healthy tissues to unnecessary radiation. In contrast to using motion-encompassing techniques to treat moving tumors, conformal radiation therapy (RT) uses a smaller field to track the tumor and adapts the beam aperture according to the motion detected. This work investigates and compares the performance of three markerless, EPID-based, optical flow methods to track tumor motion with conformal RT. Methods: Three techniques were used to track the motions of a 3D-printed lung tumor programmed to move according to the tumor of seven lung cancer patients. These techniques utilized a multi-resolution optical flow algorithm as the core computation for image registration. The first method (DIR) registers the incoming images with an initial reference frame, while the second method (RFSF) uses an adaptive reference frame and the third method (CU) uses preceding image frames for registration. The patient traces and errors were evaluated for the seven patients. Results: The average position errors for all patient traces were 0.12 ± 0.33 mm, −0.05 ± 0.04 mm and −0.28 ± 0.44 mm for the CU, DIR and RFSF methods, respectively. The 1-standard-deviation spreads of the position errors are 0.74 mm, 0.37 mm and 0.96 mm, respectively. The CU and RFSF algorithms are sensitive to the characteristics of the patient trace and produce a wider distribution of errors amongst patients. Although the mean error for the DIR method is negatively biased (−0.05 mm) for all patients, it has the narrowest distribution of position error, which can be corrected using an offset calibration. Conclusion: Three techniques of image registration and position update were studied. Using direct comparison with an initial frame yields the best performance. 
The authors would like to thank Dr. YeLin Suh for making the Cyberknife dataset available to us. Scholarship funding from the Natural Sciences and Engineering Research Council of Canada (NSERC) and CancerCare Manitoba Foundation is acknowledged.

  10. Methods for estimating flood frequency in Montana based on data through water year 1998

    USGS Publications Warehouse

    Parrett, Charles; Johnson, Dave R.

    2004-01-01

    Annual peak discharges having recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years (T-year floods) were determined for 660 gaged sites in Montana and in adjacent areas of Idaho, Wyoming, and Canada, based on data through water year 1998. The updated flood-frequency information was subsequently used in regression analyses, either ordinary or generalized least squares, to develop equations relating T-year floods to various basin and climatic characteristics, equations relating T-year floods to active-channel width, and equations relating T-year floods to bankfull width. The equations can be used to estimate flood frequency at ungaged sites. Montana was divided into eight regions, within which flood characteristics were considered to be reasonably homogeneous, and the three sets of regression equations were developed for each region. A measure of the overall reliability of the regression equations is the average standard error of prediction. The average standard errors of prediction for the equations based on basin and climatic characteristics ranged from 37.4 percent to 134.1 percent. Average standard errors of prediction for the equations based on active-channel width ranged from 57.2 percent to 141.3 percent. Average standard errors of prediction for the equations based on bankfull width ranged from 63.1 percent to 155.5 percent. In most regions, the equations based on basin and climatic characteristics generally had smaller average standard errors of prediction than equations based on active-channel or bankfull width. An exception was the Southeast Plains Region, where all equations based on active-channel width had smaller average standard errors of prediction than equations based on basin and climatic characteristics or bankfull width. Methods for weighting estimates derived from the basin- and climatic-characteristic equations and the channel-width equations also were developed. 
The weights were based on the cross correlation of residuals from the different methods and the average standard errors of prediction. When all three methods were combined, the average standard errors of prediction ranged from 37.4 percent to 120.2 percent. Weighting of estimates reduced the standard errors of prediction for all T-year flood estimates in four regions, reduced the standard errors of prediction for some T-year flood estimates in two regions, and provided no reduction in average standard error of prediction in two regions. A computer program for solving the regression equations, weighting estimates, and determining reliability of individual estimates was developed and placed on the USGS Montana District World Wide Web page. A new regression method, termed Region of Influence regression, also was tested. Test results indicated that the Region of Influence method was not as reliable as the regional equations based on generalized least squares regression. Two additional methods for estimating flood frequency at ungaged sites located on the same streams as gaged sites also are described. The first method, based on a drainage-area-ratio adjustment, is intended for use on streams where the ungaged site of interest is located near a gaged site. The second method, based on interpolation between gaged sites, is intended for use on streams that have two or more streamflow-gaging stations.
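
The drainage-area-ratio method mentioned above for ungaged sites near a gage is conventionally a power-law transfer of the gaged T-year flood; the functional form below is the standard USGS-style adjustment, but the exponent value is illustrative, not one of this report's regional values:

```python
def area_ratio_flood(q_gaged_cfs, area_gaged_mi2, area_ungaged_mi2, exponent=0.5):
    # Transfer a T-year flood estimate from a gaged site to a nearby
    # ungaged site on the same stream, scaling by drainage-area ratio.
    # The exponent is region-specific; 0.5 is a placeholder here.
    return q_gaged_cfs * (area_ungaged_mi2 / area_gaged_mi2) ** exponent
```

With the placeholder exponent of 0.5, quadrupling the drainage area doubles the estimated flood discharge; actual exponents come from the regional regressions.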

  11. Evaluation of the table Mountain Ronchi telescope for angular tracking

    NASA Technical Reports Server (NTRS)

    Lanyi, G.; Purcell, G.; Treuhaft, R.; Buffington, A.

    1992-01-01

    The performance of the University of California at San Diego (UCSD) Table Mountain telescope was evaluated to determine the potential of such an instrument for optical angular tracking. This telescope uses a Ronchi ruling to measure differential positions of stars at the meridian. The Ronchi technique is summarized and the operational features of the Table Mountain instrument are described. Results from an analytic model, simulations, and actual data are presented that characterize the telescope's current performance. For a star pair of visual magnitude 7, the differential uncertainty of a 5-min observation is about 50 nrad (10 marcsec), and tropospheric fluctuations are the dominant error source. At magnitude 11, the current differential uncertainty is approximately 800 nrad (approximately 170 marcsec). This magnitude is equivalent to that of a 2-W laser with a 0.4-m aperture transmitting to Earth from a spacecraft at Saturn. Photoelectron noise is the dominant error source for stars of visual magnitude 8.5 and fainter. If the photoelectron noise is reduced, ultimately tropospheric fluctuations will be the limiting source of error at an average level of 35 nrad (7 marcsec) for stars approximately 0.25 deg apart. Three near-term strategies are proposed for improving the performance of the telescope to the 10-nrad level: improving the efficiency of the optics, masking background starlight, and averaging tropospheric fluctuations over multiple observations.
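
The third strategy above, averaging tropospheric fluctuations over multiple observations, relies on the usual 1/sqrt(N) scaling for independent samples. A minimal sketch of that scaling (the independence of successive observations is an assumption):

```python
def averaged_noise(sigma_single, n_obs):
    # For N independent observations, the error of the average
    # shrinks as sigma / sqrt(N)
    return sigma_single / n_obs ** 0.5
```

Taking the quoted 35 nrad tropospheric floor per observation, reaching the 10 nrad goal would require on the order of (35/10)^2, i.e. roughly a dozen independent observations, under this assumption.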

  12. PREDICTION OF SOLAR FLARE SIZE AND TIME-TO-FLARE USING SUPPORT VECTOR MACHINE REGRESSION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boucheron, Laura E.; Al-Ghraibah, Amani; McAteer, R. T. James

    We study the prediction of solar flare size and time-to-flare using 38 features describing magnetic complexity of the photospheric magnetic field. This work uses support vector regression to formulate a mapping from the 38-dimensional feature space to a continuous-valued label vector representing flare size or time-to-flare. When we consider flaring regions only, we find an average error in estimating flare size of approximately half a geostationary operational environmental satellite (GOES) class. When we additionally consider non-flaring regions, we find an increased average error of approximately three-fourths of a GOES class. We also consider thresholding the regressed flare size for the experiment containing both flaring and non-flaring regions and find a true positive rate of 0.69 and a true negative rate of 0.86 for flare prediction. The results for both of these size regression experiments are consistent across a wide range of predictive time windows, indicating that the magnetic complexity features may be persistent in appearance long before flare activity. This is supported by our larger error rates of some 40 hr in the time-to-flare regression problem. The 38 magnetic complexity features considered here appear to have discriminative potential for flare size, but their persistence in time makes them less discriminative for the time-to-flare problem.
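
The quoted true positive rate (0.69) and true negative rate (0.86) come from thresholding the continuous regression output into a flare / no-flare decision. A generic sketch of that thresholding step (the scores and labels below are toy values, not the study's data):

```python
def threshold_rates(scores, labels, threshold):
    # Binarize continuous regression outputs at a threshold and
    # report (TPR, TNR) against the true binary labels
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and not y)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and not y)
    tpr = tp / (tp + fn) if (tp + fn) else 0.0
    tnr = tn / (tn + fp) if (tn + fp) else 0.0
    return tpr, tnr
```

Sweeping the threshold trades TPR against TNR, which is how a single regressed flare-size value yields the operating point reported above.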

  13. Indoor localization using pedestrian dead reckoning updated with RFID-based fiducials.

    PubMed

    House, Samuel; Connell, Sean; Milligan, Ian; Austin, Daniel; Hayes, Tamara L; Chiang, Patrick

    2011-01-01

    We describe a low-cost wearable system that tracks the location of individuals indoors using commonly available inertial navigation sensors fused with radio frequency identification (RFID) tags placed around the smart environment. While conventional pedestrian dead reckoning (PDR) calculated with an inertial measurement unit (IMU) is susceptible to sensor drift inaccuracies, the proposed wearable prototype fuses the drift-sensitive IMU with a RFID tag reader. Passive RFID tags placed throughout the smart-building then act as fiducial markers that update the physical locations of each user, thereby correcting positional errors and sensor inaccuracy. Experimental measurements taken for a 55 m × 20 m 2D floor space indicate an over 1200% improvement in average error rate of the proposed RFID-fused system over dead reckoning alone.
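
The fusion described above, dead reckoning whose accumulated drift is reset whenever a known RFID tag is read, can be sketched as follows; the step length, headings, and tag layout are hypothetical, and real PDR derives steps and headings from IMU data rather than taking them as given:

```python
import math

TAGS = {"door": (5.0, 0.0)}  # known tag positions in the building frame

def track(steps, step_length=0.7):
    # steps: list of (heading_rad, tag_id_or_None) per detected stride.
    # Integrate strides; snap to a tag's surveyed position when one is read.
    x, y = 0.0, 0.0
    path = []
    for heading, tag in steps:
        x += step_length * math.cos(heading)
        y += step_length * math.sin(heading)
        if tag in TAGS:  # fiducial update cancels the accumulated drift
            x, y = TAGS[tag]
        path.append((x, y))
    return path
```

Between tag reads the position error grows with every stride; each fiducial read bounds it again, which is the mechanism behind the large improvement over dead reckoning alone.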

  14. Accuracy Study of a Robotic System for MRI-guided Prostate Needle Placement

    PubMed Central

    Seifabadi, Reza; Cho, Nathan BJ.; Song, Sang-Eun; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M.; Fichtinger, Gabor; Iordachita, Iulian

    2013-01-01

    Background Accurate needle placement is the first concern in percutaneous MRI-guided prostate interventions. In this phantom study, different sources contributing to the overall needle placement error of an MRI-guided robot for prostate biopsy have been identified, quantified, and minimized to the possible extent. Methods and Materials The overall needle placement error of the system was evaluated in a prostate phantom. This error was broken into two parts: the error associated with the robotic system (called before-insertion error) and the error associated with needle-tissue interaction (called due-to-insertion error). The before-insertion error was measured directly in a soft phantom and different sources contributing to this part were identified and quantified. A calibration methodology was developed to minimize the 4-DOF manipulator’s error. The due-to-insertion error was indirectly approximated by comparing the overall error and the before-insertion error. The effect of sterilization on the manipulator’s accuracy and repeatability was also studied. Results The average overall system error in phantom study was 2.5 mm (STD=1.1mm). The average robotic system error in super soft phantom was 1.3 mm (STD=0.7 mm). Assuming orthogonal error components, the needle-tissue interaction error was approximated to be 2.13 mm, thus making the larger contribution to the overall error. The average susceptibility artifact shift was 0.2 mm. The manipulator’s targeting accuracy was 0.71 mm (STD=0.21mm) after robot calibration. The robot’s repeatability was 0.13 mm. Sterilization had no noticeable influence on the robot’s accuracy and repeatability. Conclusions The experimental methodology presented in this paper may help researchers to identify, quantify, and minimize different sources contributing to the overall needle placement error of an MRI-guided robotic system for prostate needle placement. 
In the robotic system analyzed here, the overall error of the studied system remained within the acceptable range. PMID:22678990

  15. Accuracy study of a robotic system for MRI-guided prostate needle placement.

    PubMed

    Seifabadi, Reza; Cho, Nathan B J; Song, Sang-Eun; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M; Fichtinger, Gabor; Iordachita, Iulian

    2013-09-01

    Accurate needle placement is the first concern in percutaneous MRI-guided prostate interventions. In this phantom study, different sources contributing to the overall needle placement error of an MRI-guided robot for prostate biopsy have been identified, quantified and minimized to the possible extent. The overall needle placement error of the system was evaluated in a prostate phantom. This error was broken into two parts: the error associated with the robotic system (called 'before-insertion error') and the error associated with needle-tissue interaction (called 'due-to-insertion error'). Before-insertion error was measured directly in a soft phantom and different sources contributing to this part were identified and quantified. A calibration methodology was developed to minimize the 4-DOF manipulator's error. The due-to-insertion error was indirectly approximated by comparing the overall error and the before-insertion error. The effect of sterilization on the manipulator's accuracy and repeatability was also studied. The average overall system error in the phantom study was 2.5 mm (STD = 1.1 mm). The average robotic system error in the Super Soft plastic phantom was 1.3 mm (STD = 0.7 mm). Assuming orthogonal error components, the needle-tissue interaction error was found to be approximately 2.13 mm, thus making a larger contribution to the overall error. The average susceptibility artifact shift was 0.2 mm. The manipulator's targeting accuracy was 0.71 mm (STD = 0.21 mm) after robot calibration. The robot's repeatability was 0.13 mm. Sterilization had no noticeable influence on the robot's accuracy and repeatability. The experimental methodology presented in this paper may help researchers to identify, quantify and minimize different sources contributing to the overall needle placement error of an MRI-guided robotic system for prostate needle placement. 
In the robotic system analysed here, the overall error of the studied system remained within the acceptable range. Copyright © 2012 John Wiley & Sons, Ltd.
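
The due-to-insertion figure above follows from the stated assumption that the two error components are orthogonal and so add in quadrature; it can be recovered directly from the two measured averages:

```python
import math

overall_mm = 2.5  # average overall system error
robot_mm = 1.3    # before-insertion (robotic system) error

# Orthogonal components: overall^2 = before^2 + due_to_insertion^2
due_to_insertion_mm = math.sqrt(overall_mm**2 - robot_mm**2)
```

This gives about 2.14 mm, matching the ~2.13 mm quoted once rounding of the reported averages is taken into account.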

  16. Sampling Errors of SSM/I and TRMM Rainfall Averages: Comparison with Error Estimates from Surface Data and a Sample Model

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating RMS error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.

  17. Evaluation of a pulmonary strain model by registration of dynamic CT scans

    NASA Astrophysics Data System (ADS)

    Pomeroy, Marc; Liang, Zhengrong; Brehm, Anthony

    2017-03-01

    Idiopathic pulmonary fibrosis (IPF) is a chronic fibrotic lung disease that develops in adults without any known cause. It is an interstitial lung disease in which the lung tissue becomes scarred and stiffens, ultimately leading to respiratory failure. This disease currently has no cure and limited treatment options, leading to an average survival time of 3-5 years after diagnosis. In this paper we employ a mathematical model simulating the lung parenchyma as hexagons with elastic forces applied to connecting vertices and opposing vertices. Using an image registration algorithm, we obtain trajectories from 4D-CT scans of a healthy patient and of one suffering from IPF. Converting the image trajectories into a hexagonal lattice, we fit the model parameters to match the respiratory motion seen for both patients across multiple image slices. We found the model could adequately describe the healthy lung slices, with a minimum average error between corresponding vertices of 1.66 mm. For the fibrotic lung slices the model was less accurate, maintaining a higher average error across all slices. Using the optimized parameters, we apply the forces predicted from the model using the image trajectory positions for each phase. Although the error is large, the spring constant values determined for the fibrotic patient were not as high as we expected, and more often than not were lower than those of the corresponding healthy lung slices. However, the net force distribution for some of those slices was still found to be greater than in the healthy lung counterparts. Other modifications to the model, such as additional directional components and changes to which vertices receive forces, may improve its accuracy; however, with the limited sample size available, a clear distinction between the healthy and fibrotic lung cannot yet be made by this model.
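    The elastic forces on the hexagon vertices follow Hooke's law; a minimal 2D sketch (the spring constant and rest length below are hypothetical, not the fitted model parameters):

```python
import math

def spring_force(p1, p2, rest_length, k):
    """Hooke's-law force exerted on vertex p1 by a spring connecting it to
    vertex p2 (2D). A stretched spring (positive extension) pulls p1
    toward p2; a compressed one pushes it away."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0:
        return (0.0, 0.0)  # coincident vertices exert no defined force
    magnitude = k * (dist - rest_length)
    return (magnitude * dx / dist, magnitude * dy / dist)
```

The net force on a vertex would be the sum of such terms over its connecting and opposing partners.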

  18. Horizontal plane localization in single-sided deaf adults fitted with a bone-anchored hearing aid (Baha).

    PubMed

    Grantham, D Wesley; Ashmead, Daniel H; Haynes, David S; Hornsby, Benjamin W Y; Labadie, Robert F; Ricketts, Todd A

    2012-01-01

    One purpose of this investigation was to evaluate the effect of a unilateral bone-anchored hearing aid (Baha) on horizontal plane localization performance in single-sided deaf adults who had either a conductive or sensorineural hearing loss in their impaired ear. The use of a 33-loudspeaker array allowed for a finer response measure than has previously been used to investigate localization in this population. In addition, a detailed analysis of error patterns allowed an evaluation of the contribution of random error and bias error to the total rms error computed in the various conditions studied. A second purpose was to investigate the effect of stimulus duration and head-turning on localization performance. Two groups of single-sided deaf adults were tested in a localization task in which they had to identify the direction of a spoken phrase on each trial. One group had a sensorineural hearing loss (SNHL group; N = 7), and the other group had a conductive hearing loss (CHL group; N = 5). In addition, a control group of four normal-hearing adults was tested. The spoken phrase was either 1250 msec in duration (a male saying "Where am I coming from now?") or 341 msec in duration (the same male saying "Where?"). For the longer-duration phrase, subjects were tested in conditions in which they either were or were not allowed to move their heads before the termination of the phrase. The source came from one of nine positions in the front horizontal plane (from -79° to +79°). The response range included 33 choices (from -90° to +90°, separated by 5.6°). Subjects were tested in all stimulus conditions, both with and without the Baha device. Overall rms error was computed for each condition. Contributions of random error and bias error to the overall error were also computed. There was considerable intersubject variability in all conditions. However, for the CHL group, the average overall error was significantly smaller when the Baha was on than when it was off. 
Further analysis of error patterns indicated that this improvement was primarily based on reduced response bias when the device was on; that is, the average response azimuth was nearer to the source azimuth when the device was on than when it was off. The SNHL group, on the other hand, had significantly greater overall error when the Baha was on than when it was off. Collapsed across listening conditions and groups, localization performance was significantly better with the 1250 msec stimulus than with the 341 msec stimulus. However, for the longer-duration stimulus, there was no significant beneficial effect of head-turning. Error scores in all conditions for both groups were considerably larger than those in the normal-hearing control group. On average, single-sided deaf adults with CHL showed improved localization ability when using the Baha, whereas single-sided deaf adults with SNHL showed a decrement in performance when using the device. These results may have implications for clinical counseling for patients with unilateral hearing impairment.
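    The decomposition of total rms error into bias and random components used in this analysis can be sketched as follows (the paper's exact formulation may differ in detail):

```python
import math

def error_components(responses_deg, source_deg):
    """Split overall rms localization error into bias and random parts for
    a set of responses to a single source azimuth, using the identity
    rms^2 = bias^2 + random^2."""
    errors = [r - source_deg for r in responses_deg]
    n = len(errors)
    bias = sum(errors) / n                              # systematic offset
    rms = math.sqrt(sum(e * e for e in errors) / n)     # overall error
    random_error = math.sqrt(max(rms * rms - bias * bias, 0.0))
    return rms, bias, random_error
```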

  19. Action errors, error management, and learning in organizations.

    PubMed

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  20. Technical aspects of real time positron emission tracking for gated radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamberland, Marc; Xu, Tong, E-mail: txu@physics.carleton.ca; McEwen, Malcolm R.

    2016-02-15

    Purpose: Respiratory motion can lead to treatment errors in the delivery of radiotherapy treatments. Respiratory gating can assist in better conforming the beam delivery to the target volume. We present a study of the technical aspects of a real time positron emission tracking system for potential use in gated radiotherapy. Methods: The tracking system, called PeTrack, uses implanted positron emission markers and position sensitive gamma ray detectors to track breathing motion in real time. PeTrack uses an expectation–maximization algorithm to track the motion of fiducial markers. A normalized least mean squares adaptive filter predicts the location of the markers a short time ahead to account for system response latency. The precision and data collection efficiency of a prototype PeTrack system were measured under conditions simulating gated radiotherapy. The lung insert of a thorax phantom was translated in the inferior–superior direction with regular sinusoidal motion and simulated patient breathing motion (maximum amplitude of motion ±10 mm, period 4 s). The system tracked the motion of a ²²Na fiducial marker (0.34 MBq) embedded in the lung insert every 0.2 s. The position of the marker was predicted 0.2 s ahead. For sinusoidal motion, the equation used to model the motion was fitted to the data. The precision of the tracking was estimated as the standard deviation of the residuals. Software was also developed to communicate with a Linac and toggle beam delivery. In a separate experiment involving a Linac, 500 monitor units of radiation were delivered to the phantom with a 3 × 3 cm photon beam and with 6 and 10 MV accelerating potential. Radiochromic films were inserted in the phantom to measure the spatial dose distribution. In this experiment, the period of motion was set to 60 s to account for beam turn-on latency. The beam was turned off when the marker moved outside of a 5-mm gating window. 
Results: The precision of the tracking in the inferior–superior (IS) direction was 0.53 mm for a sinusoidally moving target, with an average count rate of ∼250 cps. The average prediction error was 1.1 ± 0.6 mm when the marker moved according to irregular patient breathing motion. Across all beam deliveries during the radiochromic film measurements, the average prediction error was 0.8 ± 0.5 mm. The maximum error was 2.5 mm and the 95th percentile error was 1.5 mm. Clear improvement of the dose distribution was observed between gated and nongated deliveries. The full-width at half-maximum of the dose profiles of gated deliveries differed by 3 mm or less from the static reference dose distribution. Monitoring of the beam on/off times showed synchronization with the location of the marker within the latency of the system. Conclusions: PeTrack can track the motion of internal fiducial positron emission markers with submillimeter precision. The system can be used to gate the delivery of a Linac beam based on the position of a moving fiducial marker. This highlights the potential of the system for use in respiratory-gated radiotherapy.
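    The look-ahead step can be illustrated with a normalized least-mean-squares predictor; the filter order and step size below are illustrative choices, not PeTrack's configuration.

```python
class NLMSPredictor:
    """Minimal normalized least-mean-squares predictor of the next marker
    position from the last `order` samples (1D)."""

    def __init__(self, order=4, mu=0.5, eps=1e-6):
        self.w = [0.0] * order
        self.mu, self.eps = mu, eps

    def predict(self, history):
        x = list(reversed(history[-len(self.w):]))  # most recent sample first
        return sum(wi * xi for wi, xi in zip(self.w, x))

    def update(self, history, actual):
        x = list(reversed(history[-len(self.w):]))
        err = actual - sum(wi * xi for wi, xi in zip(self.w, x))
        norm = self.eps + sum(xi * xi for xi in x)  # input-power normalization
        self.w = [wi + self.mu * err * xi / norm for wi, xi in zip(self.w, x)]
        return err
```

In use, the measured positions would be fed in sample by sample, predicting one system-latency step (0.2 s here) ahead; on a slowly varying trajectory the prediction error decays quickly.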

  1. Sampling errors for satellite-derived tropical rainfall - Monte Carlo study using a space-time stochastic model

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Abdullah, A.; Martin, Russell L.; North, Gerald R.

    1990-01-01

    Estimates of monthly average rainfall based on satellite observations from a low earth orbit will differ from the true monthly average because the satellite observes a given area only intermittently. This sampling error inherent in satellite monitoring of rainfall would occur even if the satellite instruments could measure rainfall perfectly. The size of this error is estimated for a satellite system being studied at NASA, the Tropical Rainfall Measuring Mission (TRMM). First, the statistical description of rainfall on scales from 1 to 1000 km is examined in detail, based on rainfall data from the Global Atmospheric Research Project Atlantic Tropical Experiment (GATE). A TRMM-like satellite is flown over a two-dimensional time-evolving simulation of rainfall using a stochastic model with statistics tuned to agree with GATE statistics. The distribution of sampling errors found from many months of simulated observations is found to be nearly normal, even though the distribution of area-averaged rainfall is far from normal. For a range of orbits likely to be employed in TRMM, sampling error is found to be less than 10 percent of the mean for rainfall averaged over a 500 x 500 sq km area.
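    The flyover experiment can be caricatured in a few lines: sample an intermittent, skewed rain series only at regular 'overpass' times and measure the spread of the monthly-mean estimate. This toy model is not the GATE-tuned stochastic model of the paper.

```python
import random
import statistics

def monte_carlo_sampling_error(n_steps=720, visit_every=12, n_months=500, seed=1):
    """Toy Monte Carlo of satellite sampling error: a 'satellite' sees each
    month's rain series only every `visit_every` steps; return the spread
    of the fractional error of its monthly-mean estimate."""
    rng = random.Random(seed)
    rel_errors = []
    for _ in range(n_months):
        # intermittent rain: mostly zero, occasional bursts (skewed distribution)
        series = [rng.expovariate(1.0) if rng.random() < 0.1 else 0.0
                  for _ in range(n_steps)]
        true_mean = statistics.fmean(series)
        phase = rng.randrange(visit_every)
        sampled = series[phase::visit_every]
        if true_mean > 0:
            rel_errors.append((statistics.fmean(sampled) - true_mean) / true_mean)
    return statistics.pstdev(rel_errors)
```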

  2. Analytical Evaluation of a Method of Midcourse Guidance for Rendezvous with Earth Satellites

    NASA Technical Reports Server (NTRS)

    Eggleston, John M.; Dunning, Robert S.

    1961-01-01

    A digital-computer simulation was made of the midcourse or ascent phase of a rendezvous between a ferry vehicle and a space station. The simulation involved a closed-loop guidance system in which both the relative position and relative velocity between ferry and station are measured (by simulated radar) and the relative-velocity corrections required to null the miss distance are computed and applied. The results are used to study the effectiveness of a particular set of guidance equations and to study the effects of errors in the launch conditions and errors in the navigation data. A number of trajectories were investigated over a variety of initial conditions for cases in which the space station was in a circular orbit and also in an elliptic orbit. Trajectories are described in terms of a rotating coordinate system fixed in the station. As a result of this study the following conclusions are drawn. Successful rendezvous can be achieved even with launch conditions which are substantially less accurate than those obtained with present-day techniques. The average total-velocity correction required during the midcourse phase is directly proportional to the radar accuracy but the miss distance is not. Errors in the time of booster burnout or in the position of the ferry at booster burnout are less important than errors in the ferry velocity at booster burnout. The use of dead bands to account for errors in the navigational (radar) equipment appears to depend upon a compromise between the magnitude of the velocity corrections to be made and the allowable miss distance at the termination of the midcourse phase of the rendezvous. When approximate guidance equations are used, there are limits on their accuracy which are dependent on the angular distance about the earth to the expected point of rendezvous.

  3. Toward real-time endoscopically-guided robotic navigation based on a 3D virtual surgical field model

    NASA Astrophysics Data System (ADS)

    Gong, Yuanzheng; Hu, Danying; Hannaford, Blake; Seibel, Eric J.

    2015-03-01

    The challenge is to accurately guide the surgical tool within the three-dimensional (3D) surgical field for robotically assisted operations such as tumor margin removal from a debulked brain tumor cavity. The proposed technique is 3D image-guided surgical navigation based on matching intraoperative video frames to a 3D virtual model of the surgical field. A small laser-scanning endoscopic camera was attached to a mock minimally-invasive surgical tool that was manipulated toward a region of interest (residual tumor) within a phantom of a debulked brain tumor. Video frames from the endoscope provided features that were matched to the 3D virtual model, which was reconstructed earlier by raster scanning over the surgical field. Camera pose (position and orientation) is recovered by implementing a constrained bundle adjustment algorithm. Navigational error during the approach to the fluorescence target (residual tumor) is determined by comparing the calculated camera pose to the measured camera pose using a micro-positioning stage. From these preliminary results, computation efficiency of the algorithm in MATLAB code is near real-time (2.5 sec for each estimation of pose), which can be improved by implementation in C++. Error analysis produced 3 mm of distance error and 2.5 degrees of orientation error on average. The sources of these errors are 1) inaccuracy of the 3D virtual model, generated on a calibrated RAVEN robotic platform with stereo tracking; 2) inaccuracy of endoscope intrinsic parameters, such as focal length; and 3) any endoscopic image distortion from scanning irregularities. This work demonstrates the feasibility of micro-camera 3D guidance of a robotic surgical tool.
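    Comparing a calculated pose against a measured one reduces to a translation distance and a rotation angle; a simplified sketch using unit viewing directions in place of full rotation matrices:

```python
import math

def pose_errors(est_pos, true_pos, est_dir, true_dir):
    """Distance (same units as positions) and orientation angle (degrees)
    between an estimated and a measured camera pose. Orientation error is
    simplified here to the angle between unit viewing-direction vectors."""
    distance = math.dist(est_pos, true_pos)
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(est_dir, true_dir))))
    angle = math.degrees(math.acos(dot))
    return distance, angle
```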

  4. Concept and simulation study of a novel localization method for robotic endoscopic capsules using multiple positron emission markers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Than, Trung Duc, E-mail: dtt581@uowmail.edu.au; Alici, Gursel, E-mail: gursel@uow.edu.au; Zhou, Hao, E-mail: hz467@uowmail.edu.au

    2014-07-15

    Purpose: Over the last decade, the wireless capsule endoscope has been the tool of choice for noninvasive inspection of the gastrointestinal tract, especially the small intestine. However, the latest clinical products have not been equipped with a sufficiently accurate localization system, which makes it difficult to determine the location of intestinal abnormalities and to apply follow-up interventions such as biopsy or drug delivery. In this paper, the authors present a novel localization method based on tracking three positron emission markers embedded inside an endoscopic capsule. Methods: Three spherical ²²Na markers with diameters of less than 1 mm are embedded in the cover of the capsule. Gamma ray detectors are arranged around a patient body to detect coincidence gamma rays emitted from the three markers. The position of each marker can then be estimated from the collected data by the authors’ tracking algorithm, which consists of four consecutive steps: a method to remove corrupted data, an initialization method, a clustering method based on the Fuzzy C-means clustering algorithm, and a failure prediction method. Results: The tracking algorithm has been implemented in MATLAB utilizing simulation data generated from the Geant4 Application for Emission Tomography toolkit. The results show that this localization method can achieve real-time tracking with an average position error of less than 0.4 mm and an average orientation error of less than 2°. Conclusions: The authors conclude that this study has proven the feasibility and potential of the proposed technique in effectively determining the position and orientation of a robotic endoscopic capsule.
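    The clustering step can be sketched with a bare-bones fuzzy C-means in one dimension (the authors' pipeline works on 3D event data and adds corrupted-data removal, initialization, and failure prediction around it):

```python
def fuzzy_c_means(points, c=3, m=2.0, iters=50):
    """Bare-bones 1D fuzzy C-means: assign each point a graded membership
    in each of c clusters, then move the cluster centers to the
    membership-weighted means. Returns the sorted centers."""
    pts = sorted(points)
    n = len(pts)
    # spread the initial centers across the data range
    centers = [pts[round(i * (n - 1) / (c - 1))] for i in range(c)]
    for _ in range(iters):
        u = []
        for x in pts:
            d = [max(abs(x - ck), 1e-12) for ck in centers]
            u.append([1.0 / sum((d[i] / d[k]) ** (2.0 / (m - 1.0))
                                for k in range(c)) for i in range(c)])
        centers = [sum((u[j][i] ** m) * pts[j] for j in range(n)) /
                   sum(u[j][i] ** m for j in range(n)) for i in range(c)]
    return sorted(centers)
```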

  5. Interferometric correction system for a numerically controlled machine

    DOEpatents

    Burleson, Robert R.

    1978-01-01

    An interferometric correction system for a numerically controlled machine is provided to improve the positioning accuracy of a machine tool, for example on a high-precision numerically controlled machine. A laser interferometer feedback system is used to monitor the positioning of the machine tool, which is moved by command pulses applied to a positioning system. The correction system compares the commanded position, as indicated by the command pulse train applied to the positioning system, with the actual position of the tool as monitored by the laser interferometer. If the tool position lags the commanded position by a preselected error, additional pulses are added to the pulse train applied to the positioning system to advance the tool closer to the commanded position, thereby reducing the lag error. If the actual tool position leads the commanded position, pulses are deleted from the pulse train when the advance error exceeds the preselected error magnitude, thereby correcting the position error of the tool relative to the commanded position.
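    The add/delete decision logic described above can be sketched as follows; the threshold value and the one-pulse-per-count convention are illustrative assumptions, not taken from the patent.

```python
def correction_pulses(commanded, actual, threshold):
    """Number of pulses to add to (positive) or delete from (negative) the
    command pulse train. Positions are in pulse counts; the system acts
    only when the position error exceeds the preselected threshold."""
    error = commanded - actual  # positive: the tool lags the command
    if abs(error) <= threshold:
        return 0
    return error  # add pulses when lagging, delete pulses when leading
```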

  6. Evaluation of LiDAR-acquired bathymetric and topographic data accuracy in various hydrogeomorphic settings in the Deadwood and South Fork Boise Rivers, West-Central Idaho, 2007

    USGS Publications Warehouse

    Skinner, Kenneth D.

    2011-01-01

    High-quality elevation data in riverine environments are important for fisheries management applications and the accuracy of such data needs to be determined for its proper application. The Experimental Advanced Airborne Research LiDAR (Light Detection and Ranging)-or EAARL-system was used to obtain topographic and bathymetric data along the Deadwood and South Fork Boise Rivers in west-central Idaho. The EAARL data were post-processed into bare earth and bathymetric raster and point datasets. Concurrently with the EAARL surveys, real-time kinematic global positioning system surveys were made in three areas along each of the rivers to assess the accuracy of the EAARL elevation data in different hydrogeomorphic settings. The accuracies of the EAARL-derived raster elevation values, determined in open, flat terrain, to provide an optimal vertical comparison surface, had root mean square errors ranging from 0.134 to 0.347 m. Accuracies in the elevation values for the stream hydrogeomorphic settings had root mean square errors ranging from 0.251 to 0.782 m. The greater root mean square errors for the latter data are the result of complex hydrogeomorphic environments within the streams, such as submerged aquatic macrophytes and air bubble entrainment; and those along the banks, such as boulders, woody debris, and steep slopes. These complex environments reduce the accuracy of EAARL bathymetric and topographic measurements. Steep banks emphasize the horizontal location discrepancies between the EAARL and ground-survey data and may not be good representations of vertical accuracy. The EAARL point to ground-survey comparisons produced results with slightly higher but similar root mean square errors than those for the EAARL raster to ground-survey comparisons, emphasizing the minimized horizontal offset by using interpolated values from the raster dataset at the exact location of the ground-survey point as opposed to an actual EAARL point within a 1-meter distance. 
The average error for the wetted stream channel surface areas was -0.5 percent, while the average error for the wetted stream channel volume was -8.3 percent. The volume of the wetted river channel was underestimated by an average of 31 percent in half of the survey areas, and overestimated by an average of 14 percent in the remainder of the survey areas. The EAARL system is an efficient way to obtain topographic and bathymetric data in large areas of remote terrain. The elevation accuracy of the EAARL system varies throughout the area depending upon the hydrogeomorphic setting, preventing the use of a single accuracy value to describe the EAARL system. The elevation accuracy variations should be kept in mind when using the data, such as for hydraulic modeling or aquatic habitat assessments.
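    The accuracy figures above are root mean square errors of LiDAR elevations against ground-survey checkpoints, computed in the usual way:

```python
import math

def vertical_rmse(lidar_z, survey_z):
    """Root mean square error of LiDAR-derived elevations against paired
    ground-survey checkpoint elevations (same units, same order)."""
    residuals = [a - b for a, b in zip(lidar_z, survey_z)]
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))
```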

  7. Same Day Identification and Full Panel Antimicrobial Susceptibility Testing of Bacteria from Positive Blood Culture Bottles Made Possible by a Combined Lysis-Filtration Method with MALDI-TOF VITEK Mass Spectrometry and the VITEK2 System

    PubMed Central

    Machen, Alexandra; Drake, Tim; Wang, Yun F. (Wayne)

    2014-01-01

    Rapid identification and antimicrobial susceptibility testing of microorganisms causing bloodstream infections or sepsis have the potential to improve patient care. This proof-of-principle study evaluates the Lysis-Filtration Method for identification as well as antimicrobial susceptibility testing of bacteria directly from positive blood culture bottles in a clinical setting. A total of 100 non-duplicated positive blood cultures were tested and 1012 microorganism-antimicrobial combinations were assessed. An aliquot of non-charcoal blood culture broth was incubated with lysis buffer briefly before being filtered and washed. Microorganisms recovered from the filter membrane were first identified by using Matrix-Assisted Laser Desorption/Ionization Time-of-Flight VITEK® Mass Spectrometry (VITEK MS). After quick identification from VITEK MS, filtered microorganisms were inoculated into the VITEK®2 system for full panel antimicrobial susceptibility testing analysis. Of 100 bottles tested, the VITEK MS resulted in 94.0% correct organism identification to the species level. Compared to the conventional antimicrobial susceptibility testing methods, direct antimicrobial susceptibility testing from VITEK®2 resulted in 93.5% (946/1012) category agreement of antimicrobials tested, with 3.6% (36/1012) minor error, 1.7% (17/1012) major error, and 1.3% (13/1012) very major error of antimicrobials. The average time to identification and antimicrobial susceptibility testing was 11.4 hours by using the Lysis-Filtration method for both VITEK MS and VITEK®2 compared to 56.3 hours by using conventional methods (p<0.00001). Thus, the same-day results of microorganism identification and antimicrobial susceptibility testing directly from positive blood culture can be achieved and can be used for appropriate antibiotic therapy and antibiotic stewardship. PMID:24551067
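    A quick tally reproduces the reported percentages; note that with 17 major errors (1.7%) the four categories sum exactly to the 1012 tested combinations.

```python
# Counts per category, out of the 1012 microorganism-antimicrobial
# combinations assessed.
counts = {"category agreement": 946, "minor error": 36,
          "major error": 17, "very major error": 13}
total = 1012
assert sum(counts.values()) == total  # the categories partition the tests
percentages = {name: round(100.0 * n / total, 1) for name, n in counts.items()}
print(percentages)
```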

  8. Same day identification and full panel antimicrobial susceptibility testing of bacteria from positive blood culture bottles made possible by a combined lysis-filtration method with MALDI-TOF VITEK mass spectrometry and the VITEK2 system.

    PubMed

    Machen, Alexandra; Drake, Tim; Wang, Yun F Wayne

    2014-01-01

    Rapid identification and antimicrobial susceptibility testing of microorganisms causing bloodstream infections or sepsis have the potential to improve patient care. This proof-of-principle study evaluates the Lysis-Filtration Method for identification as well as antimicrobial susceptibility testing of bacteria directly from positive blood culture bottles in a clinical setting. A total of 100 non-duplicated positive blood cultures were tested and 1012 microorganism-antimicrobial combinations were assessed. An aliquot of non-charcoal blood culture broth was incubated with lysis buffer briefly before being filtered and washed. Microorganisms recovered from the filter membrane were first identified by using Matrix-Assisted Laser Desorption/Ionization Time-of-Flight VITEK® Mass Spectrometry (VITEK MS). After quick identification from VITEK MS, filtered microorganisms were inoculated into the VITEK®2 system for full panel antimicrobial susceptibility testing analysis. Of 100 bottles tested, the VITEK MS resulted in 94.0% correct organism identification to the species level. Compared to the conventional antimicrobial susceptibility testing methods, direct antimicrobial susceptibility testing from VITEK®2 resulted in 93.5% (946/1012) category agreement of antimicrobials tested, with 3.6% (36/1012) minor error, 1.7% (17/1012) major error, and 1.3% (13/1012) very major error of antimicrobials. The average time to identification and antimicrobial susceptibility testing was 11.4 hours by using the Lysis-Filtration method for both VITEK MS and VITEK®2 compared to 56.3 hours by using conventional methods (p<0.00001). Thus, the same-day results of microorganism identification and antimicrobial susceptibility testing directly from positive blood culture can be achieved and can be used for appropriate antibiotic therapy and antibiotic stewardship.

  9. Development and testing of a new ray-tracing approach to GNSS carrier-phase multipath modelling

    NASA Astrophysics Data System (ADS)

    Lau, Lawrence; Cross, Paul

    2007-11-01

    Multipath is one of the most important error sources in Global Navigation Satellite System (GNSS) carrier-phase-based precise relative positioning. Its theoretical maximum is a quarter of the carrier wavelength (about 4.8 cm for the Global Positioning System (GPS) L1 carrier) and, although it rarely reaches this size, it must clearly be mitigated if millimetre-accuracy positioning is to be achieved. In most static applications, this may be accomplished by averaging over a sufficiently long period of observation, but in kinematic applications, a modelling approach must be used. This paper is concerned with one such approach: the use of ray-tracing to reconstruct the error and therefore remove it. In order to apply such an approach, it is necessary to have a detailed understanding of the signal transmitted from the satellite, the reflection process, the antenna characteristics and the way that the reflected and direct signals are processed within the receiver. This paper reviews all of these and introduces a formal ray-tracing method for multipath estimation based on precise knowledge of the satellite-reflector-antenna geometry and of the reflector material and antenna characteristics. It is validated experimentally using GPS signals reflected from metal, water and a brick building, and is shown to be able to model most of the main multipath characteristics. The method will have important practical applications for correcting for multipath in well-constrained environments (such as at base stations for local area GPS networks, at International GNSS Service (IGS) reference stations, and on spacecraft), and it can be used to simulate realistic multipath errors for various performance analyses in high-precision positioning.
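    For a single reflected ray, the standard single-reflector model gives the carrier-phase error as a function of the reflection's amplitude ratio and relative phase, with the quarter-wavelength worst case quoted above; a sketch of that relation only, not the paper's full ray-tracing method:

```python
import math

C = 299_792_458.0    # speed of light, m/s
F_L1 = 1_575.42e6    # GPS L1 carrier frequency, Hz

def carrier_phase_multipath(alpha, psi):
    """Carrier-phase multipath error in metres for one reflected ray with
    amplitude ratio alpha (< 1) and relative phase psi (radians), using
    the standard single-reflector phase-error formula. As alpha approaches
    1, the worst case over psi approaches a quarter wavelength (~4.8 cm
    on L1)."""
    wavelength = C / F_L1
    delta_phi = math.atan2(alpha * math.sin(psi), 1.0 + alpha * math.cos(psi))
    return wavelength * delta_phi / (2.0 * math.pi)
```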

  10. Accuracy of sun localization in the second step of sky-polarimetric Viking navigation for north determination: a planetarium experiment.

    PubMed

    Farkas, Alexandra; Száz, Dénes; Egri, Ádám; Blahó, Miklós; Barta, András; Nehéz, Dóra; Bernáth, Balázs; Horváth, Gábor

    2014-07-01

    It is a widely discussed hypothesis that Viking seafarers might have been able to locate the position of the occluded sun by means of dichroic or birefringent crystals, the mysterious sunstones, with which they could analyze skylight polarization. Although the atmospheric optical prerequisites and certain aspects of the efficiency of this sky-polarimetric Viking navigation have been investigated, the accuracy of the main steps of this method has not been quantitatively examined. To fill in this gap, we present here the results of a planetarium experiment in which we measured the azimuth and elevation errors of localization of the invisible sun. In the planetarium, sun localization was performed in two selected celestial points on the basis of the alignments of two small sections of two celestial great circles passing through the sun. In the second step of sky-polarimetric Viking navigation the navigator needed to determine the intersection of two such celestial circles. We found that the position of the sun (solar elevation θ(S), solar azimuth φ(S)) was estimated with an average error of +0.6°≤Δθ≤+8.8° and -3.9°≤Δφ≤+2.0°. We also calculated the compass direction error when the estimated sun position is used for orienting with a Viking sun-compass. The northern direction (ω(North)) was determined with an error of -3.34°≤Δω(North)≤+6.29°. The inaccuracy of the second step of this navigation method was high (Δω(North)=-16.3°) when the solar elevation was 5°≤θ(S)≤25°, and the two selected celestial points were far from the sun (at angular distances 95°≤γ(1), γ(2)≤115°) and each other (125°≤δ≤145°). Considering only this second step, the sky-polarimetric navigation could be more accurate in the mid-summer period (June and July), when in the daytime the sun is high above the horizon for long periods. In the spring (and autumn) equinoctial period, alternative methods (using a twilight board, for example) might be more appropriate. 
Since Viking navigators surely also committed further errors in the first and third steps, the orientation errors presented here underestimate the net error of the whole sky-polarimetric navigation.
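    The geometric core of the second step, intersecting two celestial great circles, reduces to a cross product of the circles' pole vectors; a sketch with hypothetical unit vectors in an arbitrary celestial frame:

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def great_circle_intersections(n1, n2):
    """Each great circle is the set of unit directions perpendicular to its
    pole vector n; two circles therefore meet at the antipodal pair
    +/-(n1 x n2), one of which is the estimated sun direction."""
    cx = n1[1] * n2[2] - n1[2] * n2[1]
    cy = n1[2] * n2[0] - n1[0] * n2[2]
    cz = n1[0] * n2[1] - n1[1] * n2[0]
    p = normalize((cx, cy, cz))
    return p, tuple(-x for x in p)
```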

  11. A Proposal for Automatic Fruit Harvesting by Combining a Low Cost Stereovision Camera and a Robotic Arm

    PubMed Central

    Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Runcan, David; Moreno, Javier; Martínez, Dani; Teixidó, Mercè; Palacín, Jordi

    2014-01-01

    This paper proposes the development of an automatic fruit harvesting system combining a robotic arm and a low cost stereovision camera placed in the gripper tool. The stereovision camera is used to estimate the size, distance and position of the fruits whereas the robotic arm is used to mechanically pick up the fruits. The low cost stereovision system has been tested in laboratory conditions with a reference small object, an apple and a pear at 10 different intermediate distances from the camera. The average distance error was from 4% to 5%, and the average diameter error was up to 30% in the case of the small object and in a range from 2% to 6% in the case of a pear and an apple. The stereovision system has been attached to the gripper tool in order to obtain the relative distance, orientation and size of the fruit. The harvesting stage requires the initial fruit location, the computation of the inverse kinematics of the robotic arm in order to place the gripper tool in front of the fruit, and a final pickup approach by iteratively adjusting the vertical and horizontal position of the gripper tool in a closed visual loop. The complete system has been tested in controlled laboratory conditions with uniform illumination applied to the fruits. As future work, this system will be tested and improved in conventional outdoor farming conditions. PMID:24984059
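    Stereovision distance estimation rests on the standard relation Z = f*B/d between depth, focal length, baseline and disparity; the calibration numbers below are invented for illustration, not the paper's camera parameters.

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth of a matched point from a calibrated stereo pair via
    Z = f * B / d, with focal length f in pixels, baseline B in metres
    and disparity d in pixels."""
    return focal_px * baseline_m / disparity_px
```

A small disparity-measurement error translates into a depth error that grows with distance, which is consistent with the few-percent distance errors reported at close range.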

  12. A proposal for automatic fruit harvesting by combining a low cost stereovision camera and a robotic arm.

    PubMed

    Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Runcan, David; Moreno, Javier; Martínez, Dani; Teixidó, Mercè; Palacín, Jordi

    2014-06-30

    This paper proposes the development of an automatic fruit harvesting system by combining a low cost stereovision camera and a robotic arm, with the camera placed in the gripper tool. The stereovision camera is used to estimate the size, distance and position of the fruits, whereas the robotic arm is used to mechanically pick up the fruits. The low cost stereovision system has been tested in laboratory conditions with a reference small object, an apple and a pear at 10 different intermediate distances from the camera. The average distance error ranged from 4% to 5%, and the average diameter error was up to 30% in the case of the small object and in a range from 2% to 6% in the case of a pear and an apple. The stereovision system has been attached to the gripper tool in order to obtain the relative distance, orientation and size of the fruit. The harvesting stage requires the initial fruit location, the computation of the inverse kinematics of the robotic arm in order to place the gripper tool in front of the fruit, and a final pick-up approach by iteratively adjusting the vertical and horizontal position of the gripper tool in a closed visual loop. The complete system has been tested in controlled laboratory conditions with uniform illumination applied to the fruits. As future work, this system will be tested and improved in conventional outdoor farming conditions.

  13. On Time/Space Aggregation of Fine-Scale Error Estimates (Invited)

    NASA Astrophysics Data System (ADS)

    Huffman, G. J.

    2013-12-01

    Estimating errors inherent in fine time/space-scale satellite precipitation data sets is still an on-going problem and a key area of active research. Complicating features of these data sets include the intrinsic intermittency of the precipitation in space and time and the resulting highly skewed distribution of precipitation rates. Additional issues arise from the subsampling errors that satellites introduce, the errors due to retrieval algorithms, and the correlated error that retrieval and merger algorithms sometimes introduce. Several interesting approaches have been developed recently that appear to make progress on these long-standing issues. At the same time, the monthly averages over 2.5°x2.5° grid boxes in the Global Precipitation Climatology Project (GPCP) Satellite-Gauge (SG) precipitation data set follow a very simple sampling-based error model (Huffman 1997) with coefficients that are set using coincident surface and GPCP SG data. This presentation outlines the unsolved problem of how to aggregate the fine-scale errors (discussed above) to an arbitrary time/space averaging volume for practical use in applications, reducing in the limit to simple Gaussian expressions at the monthly 2.5°x2.5° scale. Scatter diagrams with different time/space averaging show that the relationship between the satellite and validation data improves due to the reduction in random error. One of the key, and highly non-linear, issues is that fine-scale estimates tend to have large numbers of cases with points near the axes on the scatter diagram (one of the values is exactly or nearly zero, while the other value is higher). Averaging 'pulls' the points away from the axes and towards the 1:1 line, which usually happens for higher precipitation rates before lower rates. Given this qualitative observation of how aggregation affects error, we observe that existing aggregation rules, such as the Steiner et al. (2003) power law, only depend on the aggregated precipitation rate. 
Is this sufficient, or is it necessary to aggregate the precipitation error estimates across the time/space data cube used for averaging? At least for small time/space data cubes it would seem that the detailed variables that affect each precipitation error estimate in the aggregation, such as sensor type, land/ocean surface type, convective/stratiform type, and so on, drive variations that must be accounted for explicitly.
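
    The qualitative effect described above, namely that averaging 'pulls' scattered fine-scale estimates toward the 1:1 line by beating down random error, can be reproduced with a toy Monte Carlo of intermittent precipitation. All distributions and noise levels below are illustrative assumptions, not GPCP statistics:

```python
# Toy demonstration that time/space aggregation reduces random retrieval error.
# Fine-scale "truth" is intermittent (70% dry); the "satellite" estimate adds
# retrieval noise, clipped at zero as real rain-rate retrievals are.
import random

random.seed(0)

def correlated_pair(n):
    """Fine-scale truth and noisy satellite estimate, mm/h (arbitrary)."""
    truth, est = [], []
    for _ in range(n):
        raining = random.random() < 0.3                     # intermittency
        t = random.expovariate(1.0) if raining else 0.0     # skewed rates
        noise = random.gauss(0, 0.8 if raining else 0.2)
        truth.append(t)
        est.append(max(0.0, t + noise))                     # clipped retrieval
    return truth, est

def rmse(a, b):
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

def block_average(x, k):
    """Aggregate k consecutive fine-scale values into one coarse value."""
    return [sum(x[i:i + k]) / k for i in range(0, len(x), k)]

t, e = correlated_pair(40000)
fine = rmse(t, e)
coarse = rmse(block_average(t, 16), block_average(e, 16))
print(fine > coarse)   # True: aggregation shrinks the random component
```

The systematic (bias) component survives the averaging, which is why aggregation rules that depend only on the aggregated rate may be insufficient, the open question the abstract poses.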

  14. Received Signal Strength Database Interpolation by Kriging for a Wi-Fi Indoor Positioning System

    PubMed Central

    Jan, Shau-Shiun; Yeh, Shuo-Ju; Liu, Ya-Wen

    2015-01-01

    The main approach for a Wi-Fi indoor positioning system is based on received signal strength (RSS) measurements, and the fingerprinting method is utilized to determine the user position by matching the RSS values with the pre-surveyed RSS database. Building an RSS fingerprint database is essential for an RSS-based indoor positioning system, but doing so requires considerable time and effort, and the labor required increases as the indoor environment becomes larger. To provide better indoor positioning services and at the same time reduce the labor required to establish the positioning system, an indoor positioning system with an appropriate spatial interpolation method is needed. In addition, the advantage of the RSS approach is that the signal strength decays as the transmission distance increases, and this signal propagation characteristic is applied to an interpolated database with the Kriging algorithm in this paper. Using the distribution of reference points (RPs) at measured points, the signal propagation model of the Wi-Fi access point (AP) in the building can be built and expressed as a function. This function, capturing the spatial structure of the environment, can create the RSS database quickly in different indoor environments. Thus, in this paper, a Wi-Fi indoor positioning system based on the Kriging fingerprinting method is developed. As shown in the experimental results, with a 72.2% probability, the error of the extended RSS database with Kriging is less than 3 dBm compared to the surveyed RSS database. Importantly, the positioning error of the developed Wi-Fi indoor positioning system with Kriging is reduced by 17.9% on average compared to that without Kriging. PMID:26343673
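
    A minimal sketch of the interpolation step, assuming an ordinary-kriging formulation with an exponential covariance model. The paper instead fits a signal-propagation model from the surveyed reference points; the sill/range values and RP layout below are invented for illustration:

```python
# Ordinary-kriging sketch for extending an RSS fingerprint database.
# The exponential covariance and its sill/range are assumed parameters.
import math

def cov(d, sill=16.0, rng=10.0):
    """Assumed exponential covariance (dBm^2) between points d metres apart."""
    return sill * math.exp(-d / rng)

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def krige(points, values, q):
    """Ordinary-kriging estimate of RSS at query location q."""
    n = len(points)
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    # Kriging system: RP-RP covariances plus a Lagrange row enforcing
    # that the weights sum to one (unbiasedness).
    A = [[cov(dist(points[i], points[j])) for j in range(n)] + [1.0] for i in range(n)]
    A.append([1.0] * n + [0.0])
    b = [cov(dist(p, q)) for p in points] + [1.0]
    w = solve(A, b)[:n]
    return sum(wi * vi for wi, vi in zip(w, values))

rps = [(0, 0), (10, 0), (0, 10), (10, 10)]     # surveyed reference points (m)
rss = [-40.0, -55.0, -55.0, -62.0]             # surveyed RSS (dBm)
est = krige(rps, rss, (5, 5))                  # interpolated grid point
print(-62.0 < est < -40.0)                     # True: within the surveyed range
```

By symmetry of this toy layout, the query at the centre receives equal weights of 0.25, so the estimate is simply the mean of the four surveyed values.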

  15. Received Signal Strength Database Interpolation by Kriging for a Wi-Fi Indoor Positioning System.

    PubMed

    Jan, Shau-Shiun; Yeh, Shuo-Ju; Liu, Ya-Wen

    2015-08-28

    The main approach for a Wi-Fi indoor positioning system is based on received signal strength (RSS) measurements, and the fingerprinting method is utilized to determine the user position by matching the RSS values with the pre-surveyed RSS database. Building an RSS fingerprint database is essential for an RSS-based indoor positioning system, but doing so requires considerable time and effort, and the labor required increases as the indoor environment becomes larger. To provide better indoor positioning services and at the same time reduce the labor required to establish the positioning system, an indoor positioning system with an appropriate spatial interpolation method is needed. In addition, the advantage of the RSS approach is that the signal strength decays as the transmission distance increases, and this signal propagation characteristic is applied to an interpolated database with the Kriging algorithm in this paper. Using the distribution of reference points (RPs) at measured points, the signal propagation model of the Wi-Fi access point (AP) in the building can be built and expressed as a function. This function, capturing the spatial structure of the environment, can create the RSS database quickly in different indoor environments. Thus, in this paper, a Wi-Fi indoor positioning system based on the Kriging fingerprinting method is developed. As shown in the experimental results, with a 72.2% probability, the error of the extended RSS database with Kriging is less than 3 dBm compared to the surveyed RSS database. Importantly, the positioning error of the developed Wi-Fi indoor positioning system with Kriging is reduced by 17.9% on average compared to that without Kriging.

  16. Adaptive error detection for HDR/PDR brachytherapy: Guidance for decision making during real-time in vivo point dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kertzscher, Gustavo, E-mail: guke@dtu.dk; Andersen, Claus E., E-mail: clan@dtu.dk; Tanderup, Kari, E-mail: karitand@rm.dk

    Purpose: This study presents an adaptive error detection algorithm (AEDA) for real-time in vivo point dosimetry during high dose rate (HDR) or pulsed dose rate (PDR) brachytherapy (BT) where the error identification, in contrast to existing approaches, does not depend on an a priori reconstruction of the dosimeter position. Instead, the treatment is judged based on dose rate comparisons between measurements and calculations of the most viable dosimeter position provided by the AEDA in a data driven approach. As a result, the AEDA compensates for false error cases related to systematic effects of the dosimeter position reconstruction. Given its nearly exclusive dependence on stable dosimeter positioning, the AEDA allows for a substantially simplified and time efficient real-time in vivo BT dosimetry implementation. Methods: In the event of a measured potential treatment error, the AEDA proposes the most viable dosimeter position out of alternatives to the original reconstruction by means of a data driven matching procedure between dose rate distributions. If measured dose rates do not differ significantly from the most viable alternative, the initial error indication may be attributed to a mispositioned or misreconstructed dosimeter (false error). However, if the error declaration persists, no viable dosimeter position can be found to explain the error, hence the discrepancy is more likely to originate from a misplaced or misreconstructed source applicator or from erroneously connected source guide tubes (true error). Results: The AEDA applied on two in vivo dosimetry implementations for pulsed dose rate BT demonstrated that the AEDA correctly described effects responsible for initial error indications. The AEDA was able to correctly identify the major part of all permutations of simulated guide tube swap errors and simulated shifts of individual needles from the original reconstruction.
Unidentified errors corresponded to scenarios where the dosimeter position was sufficiently symmetric with respect to error and no-error source position constellations. The AEDA was able to correctly identify all false errors represented by mispositioned dosimeters, contrary to an error detection algorithm relying on the original reconstruction. Conclusions: The study demonstrates that the AEDA error identification during HDR/PDR BT relies on a stable dosimeter position rather than on an accurate dosimeter reconstruction, and demonstrates the AEDA’s capacity to distinguish between true and false error scenarios. The study further shows that the AEDA can offer guidance in decision making in the event of potential errors detected with real-time in vivo point dosimetry.

  17. Effect of MLC leaf position, collimator rotation angle, and gantry rotation angle errors on intensity-modulated radiotherapy plans for nasopharyngeal carcinoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bai, Sen; Li, Guangjun; Wang, Maojie

    The purpose of this study was to investigate the effect of multileaf collimator (MLC) leaf position, collimator rotation angle, and accelerator gantry rotation angle errors on intensity-modulated radiotherapy plans for nasopharyngeal carcinoma. To compare dosimetric differences between the simulating plans and the clinical plans with evaluation parameters, 6 patients with nasopharyngeal carcinoma were selected for simulation of systematic and random MLC leaf position errors, collimator rotation angle errors, and accelerator gantry rotation angle errors. There was a high sensitivity to dose distribution for systematic MLC leaf position errors in response to field size. When the systematic MLC position errors were 0.5, 1, and 2 mm, respectively, the maximum values of the mean dose deviation, observed in parotid glands, were 4.63%, 8.69%, and 18.32%, respectively. The dosimetric effect was comparatively small for systematic MLC shift errors. For random MLC errors up to 2 mm and collimator and gantry rotation angle errors up to 0.5°, the dosimetric effect was negligible. We suggest that quality control be regularly conducted for MLC leaves, so as to ensure that systematic MLC leaf position errors are within 0.5 mm. Because the dosimetric effect of 0.5° collimator and gantry rotation angle errors is negligible, it can be concluded that setting a proper threshold for allowed errors of collimator and gantry rotation angle may increase treatment efficacy and reduce treatment time.

  18. A method for verification of treatment delivery in HDR prostate brachytherapy using a flat panel detector for both imaging and source tracking.

    PubMed

    Smith, Ryan L; Haworth, Annette; Panettieri, Vanessa; Millar, Jeremy L; Franich, Rick D

    2016-05-01

    Verification of high dose rate (HDR) brachytherapy treatment delivery is an important step, but is generally difficult to achieve. A technique is required to monitor the treatment as it is delivered, allowing comparison with the treatment plan and error detection. In this work, we demonstrate a method for monitoring the treatment as it is delivered and directly comparing the delivered treatment with the treatment plan in the clinical workspace. This treatment verification system is based on a flat panel detector (FPD) used for both pre-treatment imaging and source tracking. A phantom study was conducted to establish the resolution and precision of the system. A pretreatment radiograph of a phantom containing brachytherapy catheters is acquired and registration between the measurement and treatment planning system (TPS) is performed using implanted fiducial markers. The measured catheter paths immediately prior to treatment were then compared with the plan. During treatment delivery, the position of the (192)Ir source is determined at each dwell position by measuring the exit radiation with the FPD and directly compared to the planned source dwell positions. The registration between the two corresponding sets of fiducial markers in the TPS and radiograph yielded a registration error (residual) of 1.0 mm. The measured catheter paths agreed with the planned catheter paths on average to within 0.5 mm. The source positions measured with the FPD matched the planned source positions for all dwells on average within 0.6 mm (s.d. 0.3, min. 0.1, max. 1.4 mm). We have demonstrated a method for directly comparing the treatment plan with the delivered treatment that can be easily implemented in the clinical workspace. Pretreatment imaging was performed, enabling visualization of the implant before treatment delivery and identification of possible catheter displacement. 
Treatment delivery verification was performed by measuring the source position as each dwell was delivered. This approach using a FPD for imaging and source tracking provides a noninvasive method of acquiring extensive information for verification in HDR prostate brachytherapy.
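
    The fiducial-based registration step can be sketched as a 2D rigid (rotation plus translation) fit between planned and imaged marker positions, with the residual RMS playing the role of the 1.0 mm registration error quoted above. This toy version uses made-up marker coordinates and is not the authors' implementation:

```python
# 2D rigid registration of planned (TPS) vs imaged fiducial markers,
# reporting the residual RMS in mm. Marker coordinates are illustrative.
import math

def register(plan, meas):
    """Best-fit rotation+translation mapping plan -> meas; returns residual RMS."""
    n = len(plan)
    cp = [sum(p[i] for p in plan) / n for i in (0, 1)]   # plan centroid
    cm = [sum(m[i] for m in meas) / n for i in (0, 1)]   # measured centroid
    # 2D cross-covariance terms give the optimal rotation angle directly.
    sxx = sum((p[0]-cp[0])*(m[0]-cm[0]) + (p[1]-cp[1])*(m[1]-cm[1])
              for p, m in zip(plan, meas))
    sxy = sum((p[0]-cp[0])*(m[1]-cm[1]) - (p[1]-cp[1])*(m[0]-cm[0])
              for p, m in zip(plan, meas))
    a = math.atan2(sxy, sxx)
    def xform(p):
        x, y = p[0] - cp[0], p[1] - cp[1]
        return (math.cos(a)*x - math.sin(a)*y + cm[0],
                math.sin(a)*x + math.cos(a)*y + cm[1])
    res = [math.dist(xform(p), m) for p, m in zip(plan, meas)]
    return (sum(r * r for r in res) / n) ** 0.5

plan = [(0.0, 0.0), (50.0, 0.0), (0.0, 50.0), (50.0, 50.0)]   # planned markers (mm)
# Imaged markers: plan rotated 2 degrees, shifted, plus sub-mm noise.
th = math.radians(2.0)
noise = [(0.3, -0.2), (-0.4, 0.1), (0.2, 0.3), (-0.1, -0.2)]
meas = [(math.cos(th)*x - math.sin(th)*y + 5.0 + nx,
         math.sin(th)*x + math.cos(th)*y - 3.0 + ny)
        for (x, y), (nx, ny) in zip(plan, noise)]
print(register(plan, meas) < 1.0)   # True: sub-millimetre residual
```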

  19. A method for verification of treatment delivery in HDR prostate brachytherapy using a flat panel detector for both imaging and source tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Ryan L., E-mail: ryan.smith@wbrc.org.au; Millar, Jeremy L.; Franich, Rick D.

    Purpose: Verification of high dose rate (HDR) brachytherapy treatment delivery is an important step, but is generally difficult to achieve. A technique is required to monitor the treatment as it is delivered, allowing comparison with the treatment plan and error detection. In this work, we demonstrate a method for monitoring the treatment as it is delivered and directly comparing the delivered treatment with the treatment plan in the clinical workspace. This treatment verification system is based on a flat panel detector (FPD) used for both pre-treatment imaging and source tracking. Methods: A phantom study was conducted to establish the resolution and precision of the system. A pretreatment radiograph of a phantom containing brachytherapy catheters is acquired and registration between the measurement and treatment planning system (TPS) is performed using implanted fiducial markers. The measured catheter paths immediately prior to treatment were then compared with the plan. During treatment delivery, the position of the ¹⁹²Ir source is determined at each dwell position by measuring the exit radiation with the FPD and directly compared to the planned source dwell positions. Results: The registration between the two corresponding sets of fiducial markers in the TPS and radiograph yielded a registration error (residual) of 1.0 mm. The measured catheter paths agreed with the planned catheter paths on average to within 0.5 mm. The source positions measured with the FPD matched the planned source positions for all dwells on average within 0.6 mm (s.d. 0.3, min. 0.1, max. 1.4 mm). Conclusions: We have demonstrated a method for directly comparing the treatment plan with the delivered treatment that can be easily implemented in the clinical workspace. Pretreatment imaging was performed, enabling visualization of the implant before treatment delivery and identification of possible catheter displacement.
Treatment delivery verification was performed by measuring the source position as each dwell was delivered. This approach using a FPD for imaging and source tracking provides a noninvasive method of acquiring extensive information for verification in HDR prostate brachytherapy.

  20. Forward and correctional OFDM-based visible light positioning

    NASA Astrophysics Data System (ADS)

    Li, Wei; Huang, Zhitong; Zhao, Runmei; He, Peixuan; Ji, Yuefeng

    2017-09-01

    Visible light positioning (VLP) has attracted much attention in both academic and industrial areas due to the extensive deployment of light-emitting diodes (LEDs) as next-generation green lighting. Generally, the coverage of a single LED lamp is limited, so LED arrays are usually utilized to achieve uniform illumination within a large-scale indoor environment. However, in such a dense LED deployment scenario, the superposition of the light signals becomes an important challenge for accurate VLP. To solve this problem, we propose a forward and correctional orthogonal frequency division multiplexing (OFDM)-based VLP (FCO-VLP) scheme with low complexity in the generation and processing of signals. In the first, forward procedure of FCO-VLP, an initial position is obtained by the trilateration method based on OFDM subcarriers. The positioning accuracy is further improved in the second, correctional procedure based on a database of reference points. As demonstrated in our experiments, our approach yields an improved average positioning error of 4.65 cm, enhancing positioning accuracy by 24.2% compared with the trilateration method alone.
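
    The forward trilateration step can be illustrated independently of the optical front end: given ranges to three anchors, subtracting one squared-range equation from the other two leaves a linear system in the unknown position. A minimal sketch with an invented anchor layout:

```python
# Trilateration sketch: position from distances to three known anchors (2D).
# Anchor coordinates and the true position are illustrative.
import math

def trilaterate(anchors, dists):
    """Solve the linearized range equations A [x, y]^T = b."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21          # nonzero for non-collinear anchors
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]   # LED anchor positions (m)
true_pos = (1.0, 1.0)
dists = [math.dist(true_pos, a) for a in anchors]
x, y = trilaterate(anchors, dists)
print(round(x, 6), round(y, 6))   # 1.0 1.0
```

With noisy ranges the same linear system is solved in a least-squares sense, which is where a correctional, database-driven second stage such as the one above can recover accuracy.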

  1. Experimental Evaluation of UWB Indoor Positioning for Sport Postures

    PubMed Central

    Defraye, Jense; Steendam, Heidi; Gerlo, Joeri; De Clercq, Dirk; De Poorter, Eli

    2018-01-01

    Radio frequency (RF)-based indoor positioning systems (IPSs) use wireless technologies (including Wi-Fi, Zigbee, Bluetooth, and ultra-wide band (UWB)) to estimate the location of persons in areas where no Global Positioning System (GPS) reception is available, for example in indoor stadiums or sports halls. Of the above-mentioned forms of radio frequency (RF) technology, UWB is considered one of the most accurate approaches because it can provide positioning estimates with centimeter-level accuracy. However, it is not yet known whether UWB can also offer such accurate position estimates during strenuous dynamic activities in which moves are characterized by fast changes in direction and velocity. To answer this question, this paper investigates the capabilities of UWB indoor localization systems for tracking athletes during their complex (and most of the time unpredictable) movements. To this end, we analyze the impact of on-body tag placement locations and human movement patterns on localization accuracy and communication reliability. Moreover, two localization algorithms (particle filter and Kalman filter) with different optimizations (bias removal, non-line-of-sight (NLoS) detection, and path determination) are implemented. It is shown that although the optimal choice of optimization depends on the type of movement patterns, some of the improvements can reduce the localization error by up to 31%. Overall, depending on the selected optimization and on-body tag placement, our algorithms show good results in terms of positioning accuracy, with average errors in position estimates of 20 cm. This makes UWB a suitable approach for tracking dynamic athletic activities. PMID:29315267
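
    As a flavour of the filtering involved, below is a minimal 1D constant-velocity Kalman filter of the kind used to smooth noisy UWB fixes. The process/measurement noise settings and the simulated walk are assumptions for illustration, not the paper's tuned filter:

```python
# Minimal 1-D constant-velocity Kalman filter over noisy position fixes.
# dt, q (process noise) and r (measurement variance) are assumed values.
import random

def kalman_track(measurements, dt=0.1, q=1.0, r=0.04):
    x, v = measurements[0], 0.0          # state: position (m), velocity (m/s)
    P = [[1.0, 0.0], [0.0, 1.0]]         # state covariance
    out = []
    for z in measurements:
        # Predict with the constant-velocity model, adding process noise Q.
        x, v = x + dt * v, v
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q * dt**4 / 4,
              P[0][1] + dt * P[1][1] + q * dt**3 / 2],
             [P[1][0] + dt * P[1][1] + q * dt**3 / 2,
              P[1][1] + q * dt * dt]]
        # Update with the position measurement z (H = [1, 0]).
        S = P[0][0] + r
        k0, k1 = P[0][0] / S, P[1][0] / S
        innov = z - x
        x, v = x + k0 * innov, v + k1 * innov
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
        out.append(x)
    return out

random.seed(1)
truth = [0.5 * t * 0.1 for t in range(100)]            # walking at 0.5 m/s
noisy = [p + random.gauss(0, 0.2) for p in truth]      # UWB fixes, 20 cm noise
est = kalman_track(noisy)
err_raw = sum(abs(n - t) for n, t in zip(noisy, truth)) / len(truth)
err_kf = sum(abs(e - t) for e, t in zip(est, truth)) / len(truth)
print(err_kf < err_raw)   # True: filtering reduces the average position error
```

Sharp direction changes violate the constant-velocity assumption, which is exactly why the paper's NLoS detection and motion-dependent optimizations matter for athletes.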

  2. Modeling the probability distribution of positional errors incurred by residential address geocoding.

    PubMed

    Zimmerman, Dale L; Fang, Xiangming; Mazumdar, Soumya; Rushton, Gerard

    2007-01-10

    The assignment of a point-level geocode to subjects' residences is an important data assimilation component of many geographic public health studies. Often, these assignments are made by a method known as automated geocoding, which attempts to match each subject's address to an address-ranged street segment georeferenced within a streetline database and then interpolate the position of the address along that segment. Unfortunately, this process results in positional errors. Our study sought to model the probability distribution of positional errors associated with automated geocoding and E911 geocoding. Positional errors were determined for 1423 rural addresses in Carroll County, Iowa as the vector difference between each 100%-matched automated geocode and its true location as determined by orthophoto and parcel information. Errors were also determined for 1449 60%-matched geocodes and 2354 E911 geocodes. Huge (> 15 km) outliers occurred among the 60%-matched geocoding errors; outliers also occurred for the other two types of geocoding errors but were much smaller. E911 geocoding was more accurate (median error length = 44 m) than 100%-matched automated geocoding (median error length = 168 m). The empirical distributions of positional errors associated with 100%-matched automated geocoding and E911 geocoding exhibited a distinctive Greek-cross shape and had many other interesting features that could not be fitted adequately by a single bivariate normal or t distribution. However, mixtures of t distributions with two or three components fit the errors very well. Mixtures of bivariate t distributions with few components appear to be flexible enough to fit many positional error datasets associated with geocoding, yet parsimonious enough to be feasible for nascent applications of measurement-error methodology to spatial epidemiology.

  3. Aliasing errors in measurements of beam position and ellipticity

    NASA Astrophysics Data System (ADS)

    Ekdahl, Carl

    2005-09-01

    Beam position monitors (BPMs) are used in accelerators and ion experiments to measure currents, position, and azimuthal asymmetry. These usually consist of discrete arrays of electromagnetic field detectors, with detectors located at several equally spaced azimuthal positions at the beam tube wall. The discrete nature of these arrays introduces systematic errors into the data, independent of uncertainties resulting from signal noise, lack of recording dynamic range, etc. Computer simulations were used to understand and quantify these aliasing errors. If required, aliasing errors can be significantly reduced by employing more than the usual four detectors in the BPMs. These simulations show that the error in measurements of the centroid position of a large beam is indistinguishable from the error in the position of a filament. The simulations also show that aliasing errors in the measurement of beam ellipticity are very large unless the beam is accurately centered. The simulations were used to quantify the aliasing errors in beam parameter measurements during early experiments on the DARHT-II accelerator, demonstrating that they affected the measurements only slightly, if at all.
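
    The aliasing mechanism can be reproduced with a toy model: a filament at radius r0 induces a wall signal whose azimuthal harmonics fall off as (r0/R)^n, and an array of N discrete detectors folds harmonics n = N-1, N+1, ... into the dipole (position) estimate. The sketch below uses an idealized image-charge geometry, not the DARHT-II simulations, and shows the error shrinking as detectors are added:

```python
# Toy model of aliasing error in a discrete BPM detector array.
# Wall signal of a filament beam, sampled at N equally spaced azimuths.
import math

def wall_signal(phi, r0, th0, R=1.0):
    """Image-charge wall density (arb. units) for a filament at (r0, th0)."""
    return (R * R - r0 * r0) / (R * R + r0 * r0 - 2 * R * r0 * math.cos(phi - th0))

def estimate_x(N, r0, th0, R=1.0):
    """Dipole-moment position estimate from N equally spaced detectors."""
    phis = [2 * math.pi * i / N for i in range(N)]
    s = [wall_signal(p, r0, th0, R) for p in phis]
    mono = sum(s) / N                                        # monopole (current)
    dip = 2 * sum(si * math.cos(p) for si, p in zip(s, phis)) / N
    return R * dip / (2 * mono)                              # x-position estimate

r0, th0 = 0.5, 0.0        # beam halfway to the wall, along +x
err4 = abs(estimate_x(4, r0, th0) - r0)   # usual 4-detector BPM
err8 = abs(estimate_x(8, r0, th0) - r0)   # 8 detectors suppress aliasing
print(err8 < err4)        # True: more detectors reduce the aliasing error
```

For a well-centered beam (small r0/R) the aliased harmonics are negligible, consistent with the abstract's finding that ellipticity measurements are only reliable when the beam is accurately centered.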

  4. Panel positioning error and support mechanism for a 30-m THz radio telescope

    NASA Astrophysics Data System (ADS)

    Yang, De-Hua; Okoh, Daniel; Zhou, Guo-Hua; Li, Ai-Hua; Li, Guo-Ping; Cheng, Jing-Quan

    2011-06-01

    A 30-m TeraHertz (THz) radio telescope is proposed to operate at 200 μm with an active primary surface. This paper presents a sensitivity analysis of active surface panel positioning errors, with optical performance evaluated in terms of the Strehl ratio. Based on Ruze's surface error theory and using a Monte Carlo simulation, the effects of six rigid panel positioning errors, namely piston, tip, tilt, radial, azimuthal and twist displacements, were directly derived. The optical performance of the telescope was then evaluated using the standard Strehl ratio. We graphically illustrated the various panel error effects by presenting simulations of complete ensembles of full reflector surface errors for the six different rigid panel positioning errors. The panel error sensitivity analysis revealed that the piston and tilt/tip errors are dominant, while the other rigid errors are much less important. Furthermore, guided by these results, we conceived of an alternative Master-Slave Concept-based (MSC-based) active surface by implementing a special Series-Parallel Concept-based (SPC-based) hexapod as the active panel support mechanism. A new 30-m active reflector based on the two concepts was demonstrated to achieve correction for all six rigid panel positioning errors in an economically feasible way.
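
    For a sense of scale, Ruze's theory relates a random surface RMS error ε to a gain/Strehl degradation of exp[-(4πε/λ)²], the factor 4π (rather than 2π) arising because a surface error changes the reflected path by twice its height. A sketch at the telescope's 200 μm operating wavelength; the ε values are illustrative, not the telescope's actual error budget:

```python
# Ruze-law sketch: Strehl-like efficiency vs surface RMS error.
import math

def ruze_strehl(eps_um: float, lam_um: float) -> float:
    """Gain/Strehl degradation exp(-(4*pi*eps/lam)^2) for RMS error eps."""
    return math.exp(-(4 * math.pi * eps_um / lam_um) ** 2)

# Illustrative panel-setting error budget at a 200 um observing wavelength.
for eps in (5.0, 10.0, 20.0):
    print(f"eps = {eps:4.1f} um -> Strehl = {ruze_strehl(eps, 200.0):.3f}")
```

The steep exponential is why sub-10 μm panel positioning, and hence an active surface correcting piston and tilt/tip, is needed at 200 μm.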

  5. A methodology for translating positional error into measures of attribute error, and combining the two error sources

    Treesearch

    Yohay Carmel; Curtis Flather; Denis Dean

    2006-01-01

    This paper summarizes our efforts to investigate the nature, behavior, and implications of positional error and attribute error in spatiotemporal datasets. Estimating the combined influence of these errors on map analysis has been hindered by the fact that these two error types are traditionally expressed in different units (distance units, and categorical units,...

  6. Expected accuracy of proximal and distal temperature estimated by wireless sensors, in relation to their number and position on the skin.

    PubMed

    Longato, Enrico; Garrido, Maria; Saccardo, Desy; Montesinos Guevara, Camila; Mani, Ali R; Bolognesi, Massimo; Amodio, Piero; Facchinetti, Andrea; Sparacino, Giovanni; Montagnese, Sara

    2017-01-01

    A popular method to estimate proximal/distal temperature (TPROX and TDIST) consists of calculating a weighted average of nine wireless sensors placed on pre-defined skin locations. Specifically, TPROX is derived from five sensors placed on the infra-clavicular and mid-thigh areas (left and right) and abdomen, and TDIST from four sensors located on the hands and feet. In clinical practice, the loss/removal of one or more sensors is a common occurrence, but limited information is available on how this affects the accuracy of temperature estimates. The aim of this study was to determine the accuracy of temperature estimates in relation to the number/position of sensors removed. Thirteen healthy subjects wore all nine sensors for 24 hours and reference TPROX and TDIST time-courses were calculated using all sensors. Then, all possible combinations of reduced subsets of sensors were simulated and suitable weights for each sensor calculated. The accuracy of TPROX and TDIST estimates resulting from the reduced subsets of sensors, compared to reference values, was assessed by the mean squared error, the mean absolute error (MAE), the cross-validation error and the 25th and 75th percentiles of the reconstruction error. Tables of the accuracy and sensor weights for all possible combinations of sensors are provided. For instance, in relation to TPROX, a subset of three sensors placed in any combination of three non-homologous areas (abdominal, right or left infra-clavicular, right or left mid-thigh) produced an error of 0.13°C MAE, while the loss/removal of the abdominal sensor resulted in an error of 0.25°C MAE, which had the greatest impact on the quality of the reconstruction. This information may help researchers/clinicians: i) evaluate the expected goodness of their TPROX and TDIST estimates based on the number of available sensors; ii) select the most appropriate subset of sensors, depending on goals and operational constraints.
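
    The estimation scheme can be sketched as a weighted average that is renormalised over whichever sensors remain. The equal weights below are a simplifying assumption, not the paper's fitted per-sensor weights, and the readings are invented:

```python
# Sketch of a proximal-temperature estimate as a weighted sensor average,
# renormalised when sensors are lost. Equal weights are an assumption here;
# the study derives suitable weights for every reduced sensor subset.

PROX_WEIGHTS = {
    "abdomen": 0.2, "infraclav_L": 0.2, "infraclav_R": 0.2,
    "thigh_L": 0.2, "thigh_R": 0.2,
}

def tprox(readings, weights=PROX_WEIGHTS):
    """Weighted average over the sensors present, weights renormalised to 1."""
    present = {k: w for k, w in weights.items() if k in readings}
    total = sum(present.values())
    return sum(readings[k] * w / total for k, w in present.items())

full = {"abdomen": 35.8, "infraclav_L": 35.2, "infraclav_R": 35.4,
        "thigh_L": 34.6, "thigh_R": 34.8}
ref = tprox(full)
reduced = {k: v for k, v in full.items() if k != "abdomen"}   # abdominal lost
print(round(ref, 2), round(abs(tprox(reduced) - ref), 2))
```

With these invented readings, dropping the abdominal sensor shifts the estimate by 0.16 °C, qualitatively mirroring the paper's finding that this sensor's loss degrades the reconstruction most.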

  7. Expected accuracy of proximal and distal temperature estimated by wireless sensors, in relation to their number and position on the skin

    PubMed Central

    Longato, Enrico; Garrido, Maria; Saccardo, Desy; Montesinos Guevara, Camila; Mani, Ali R.; Bolognesi, Massimo; Amodio, Piero; Facchinetti, Andrea; Sparacino, Giovanni

    2017-01-01

    A popular method to estimate proximal/distal temperature (TPROX and TDIST) consists of calculating a weighted average of nine wireless sensors placed on pre-defined skin locations. Specifically, TPROX is derived from five sensors placed on the infra-clavicular and mid-thigh areas (left and right) and abdomen, and TDIST from four sensors located on the hands and feet. In clinical practice, the loss/removal of one or more sensors is a common occurrence, but limited information is available on how this affects the accuracy of temperature estimates. The aim of this study was to determine the accuracy of temperature estimates in relation to the number/position of sensors removed. Thirteen healthy subjects wore all nine sensors for 24 hours and reference TPROX and TDIST time-courses were calculated using all sensors. Then, all possible combinations of reduced subsets of sensors were simulated and suitable weights for each sensor calculated. The accuracy of TPROX and TDIST estimates resulting from the reduced subsets of sensors, compared to reference values, was assessed by the mean squared error, the mean absolute error (MAE), the cross-validation error and the 25th and 75th percentiles of the reconstruction error. Tables of the accuracy and sensor weights for all possible combinations of sensors are provided. For instance, in relation to TPROX, a subset of three sensors placed in any combination of three non-homologous areas (abdominal, right or left infra-clavicular, right or left mid-thigh) produced an error of 0.13°C MAE, while the loss/removal of the abdominal sensor resulted in an error of 0.25°C MAE, which had the greatest impact on the quality of the reconstruction. This information may help researchers/clinicians: i) evaluate the expected goodness of their TPROX and TDIST estimates based on the number of available sensors; ii) select the most appropriate subset of sensors, depending on goals and operational constraints. PMID:28666029

  8. Total ozone trend significance from space time variability of daily Dobson data

    NASA Technical Reports Server (NTRS)

    Wilcox, R. W.

    1981-01-01

    Estimates of standard errors of total ozone time and area means, as derived from ozone's natural temporal and spatial variability and autocorrelation in middle latitudes determined from daily Dobson data are presented. Assessing the significance of apparent total ozone trends is equivalent to assessing the standard error of the means. Standard errors of time averages depend on the temporal variability and correlation of the averaged parameter. Trend detectability is discussed, both for the present network and for satellite measurements.
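
    The dependence of a time average's standard error on temporal correlation can be sketched with the usual AR(1) effective-sample-size correction, n_eff = n(1-ρ)/(1+ρ); the σ and ρ values below are illustrative, not Dobson-network statistics:

```python
# Standard error of a time average of autocorrelated observations.
# AR(1) effective-sample-size approximation; parameter values are illustrative.
import math

def se_of_mean(sigma: float, n: int, rho: float) -> float:
    """SE of the mean of n observations with lag-1 autocorrelation rho."""
    n_eff = n * (1 - rho) / (1 + rho)
    return sigma / math.sqrt(n_eff)

# Daily total-ozone anomalies: sigma = 10 DU over a 30-day month.
print(round(se_of_mean(10.0, 30, 0.0), 2))   # 1.83 (if days were independent)
print(round(se_of_mean(10.0, 30, 0.6), 2))   # 3.65 (with day-to-day correlation)
```

Ignoring the autocorrelation roughly halves the apparent standard error here, which is exactly how a spurious "significant" ozone trend could arise.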

  9. The calculation of average error probability in a digital fibre optical communication system

    NASA Astrophysics Data System (ADS)

    Rugemalira, R. A. M.

    1980-03-01

    This paper deals with the problem of determining the average error probability in a digital fibre optical communication system in the presence of message-dependent inhomogeneous non-stationary shot noise, additive Gaussian noise and intersymbol interference. A zero-forcing equalization receiver filter is considered. Three techniques for error rate evaluation are compared: the Chernoff bound and the Gram-Charlier series expansion methods are compared to the characteristic function technique. The latter predicts a higher receiver sensitivity.
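
For the additive-Gaussian-noise part of the problem, the exact tail probability and the Chernoff bound can be compared directly; this sketch omits the shot noise, intersymbol interference, and the characteristic-function technique itself:

```python
import math

def q_function(x):
    """Exact Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def chernoff_bound(x):
    """Chernoff bound on the Gaussian tail: Q(x) <= (1/2) * exp(-x^2 / 2)."""
    return 0.5 * math.exp(-x * x / 2.0)

# Error probability of threshold detection vs. normalized decision distance.
for x in (2.0, 4.0, 6.0):
    print(f"x={x}: exact={q_function(x):.3e}  chernoff={chernoff_bound(x):.3e}")
```

Because the Chernoff bound always overestimates the error probability, a receiver designed against it appears less sensitive than one evaluated with an exact (or characteristic-function) computation, consistent with the abstract's conclusion.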

  10. Characterisation of false-positive observations in botanical surveys

    PubMed Central

    2017-01-01

    Errors in botanical surveying are a common problem. The presence of a species is easily overlooked, leading to false absences, while misidentifications and other mistakes lead to false-positive observations. While it is common knowledge that these errors occur, there are few data that can be used to quantify and describe them. Here we characterise false-positive errors for a controlled set of surveys conducted as part of a field identification test of botanical skill. Surveys were conducted at sites with a verified list of vascular plant species. The candidates were asked to list all the species they could identify in a defined botanically rich area. They were told beforehand that their final score would be the sum of the correct species they listed, but that false-positive errors would count against their overall grade. The number of errors varied considerably between people; some people produced a high proportion of false-positive errors, and these people were scattered across all skill levels. Therefore, a person’s ability to correctly identify a large number of species is not a safeguard against the generation of false-positive errors. There was no phylogenetic pattern to falsely observed species; however, rare species are more likely to be falsely recorded, as are species from species-rich genera. Raising the threshold for the acceptance of an observation reduced false-positive observations dramatically, but at the expense of more false-negative errors. False-positive errors are more frequent in field surveying of plants than many people may appreciate. Greater stringency is required before accepting species as present at a site, particularly for rare species. Combining multiple surveys resolves the problem, but requires a considerable increase in effort to achieve the same sensitivity as a single survey. Therefore, other methods should be used to raise the threshold for the acceptance of a species. 
For example, digital data-input systems that can verify, give feedback to, and inform the user are likely to reduce false-positive errors significantly. PMID:28533972
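
The trade-off from raising the acceptance threshold across combined surveys can be sketched with a simple binomial model; the per-survey detection and false-positive rates below are assumptions for illustration, not estimates from the study:

```python
from math import comb

def p_at_least(k, m, p):
    """P(X >= k) for X ~ Binomial(m, p): chance of k or more detections."""
    return sum(comb(m, j) * p**j * (1 - p) ** (m - j) for j in range(k, m + 1))

# Assumed per-survey rates (illustrative):
p_true = 0.7    # a present species is recorded in one survey
p_false = 0.05  # an absent species is falsely recorded in one survey
m = 3           # number of combined surveys

for k in (1, 2, 3):  # accept a species only if recorded in >= k surveys
    fp = p_at_least(k, m, p_false)       # pooled false-positive rate
    fn = 1.0 - p_at_least(k, m, p_true)  # pooled false-negative rate
    print(f"k={k}: FP={fp:.4f}  FN={fn:.4f}")
```

Raising k drives the false-positive rate down geometrically while the false-negative rate climbs, which is the effort-versus-sensitivity trade-off the abstract describes.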

  11. Evaluation of causes and frequency of medication errors during information technology downtime.

    PubMed

    Hanuscak, Tara L; Szeinbach, Sheryl L; Seoane-Vazquez, Enrique; Reichert, Brendan J; McCluskey, Charles F

    2009-06-15

    The causes and frequency of medication errors occurring during information technology downtime were evaluated. Individuals from a convenience sample of 78 hospitals who were directly responsible for supporting and maintaining clinical information systems (CISs) and automated dispensing systems (ADSs) were surveyed using an online tool between February 2007 and May 2007 to determine if medication errors were reported during periods of system downtime. The errors were classified using the National Coordinating Council for Medication Error Reporting and Prevention severity scoring index. The percentage of respondents reporting downtime was estimated. Of the 78 eligible hospitals, 32 respondents with CIS and ADS responsibilities completed the online survey for a response rate of 41%. For computerized prescriber order entry, patch installations and system upgrades caused an average downtime response of 57% over a 12-month period. Lost interface and interface malfunction were reported for centralized and decentralized ADSs, with an average downtime response of 34% and 29%, respectively. The average downtime response was 31% for software malfunctions linked to clinical decision-support systems. Although patient harm did not result from 30 (54%) medication errors, the potential for harm was present for 9 (16%) of these errors. Medication errors occurred during CIS and ADS downtime despite the availability of backup systems and standard protocols to handle periods of system downtime. Efforts should be directed to reduce the frequency and length of downtime in order to minimize medication errors.

  12. Retrieving air humidity, global solar radiation, and reference evapotranspiration from daily temperatures: development and validation of new methods for Mexico. Part I: humidity

    NASA Astrophysics Data System (ADS)

    Lobit, P.; López Pérez, L.; Lhomme, J. P.; Gómez Tagle, A.

    2017-07-01

    This study evaluates the dew point method (Allen et al. 1998) to estimate atmospheric vapor pressure from minimum temperature, and proposes an improved model to estimate it from maximum and minimum temperature. Both methods were evaluated on 786 weather stations in Mexico. The dew point method induced positive bias in dry areas and negative bias in coastal areas, and its average root mean square error over all evaluated stations was 0.38 kPa. The improved model assumed a bi-linear relation between the estimated vapor pressure deficit (the difference between saturation vapor pressure at minimum and average temperature) and the measured vapor pressure deficit. The parameters of this relation were estimated from historical annual median values of relative humidity. This model removed the bias and achieved a root mean square error of 0.31 kPa. When no historical measurements of relative humidity were available, empirical relations were proposed to estimate it from latitude and altitude, with only a slight degradation in model accuracy (RMSE = 0.33 kPa, bias = -0.07 kPa). The applicability of the method to other environments is discussed.
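
The dew point method uses the FAO-56 saturation vapor pressure curve (Allen et al. 1998) together with the assumption that the dew point tracks the daily minimum temperature. A minimal sketch, also showing the estimated vapor pressure deficit that the study's improved bi-linear model takes as input (its fitted coefficients are not given in the abstract, so they are not reproduced):

```python
import math

def e_sat(t_c):
    """Saturation vapor pressure (kPa) at temperature t_c (deg C), FAO-56."""
    return 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))

def ea_dew_point_method(t_min):
    """Dew point method: assume T_dew ~= T_min, so e_a = e_sat(T_min)."""
    return e_sat(t_min)

t_min, t_max = 12.0, 28.0                       # illustrative daily extremes
ea = ea_dew_point_method(t_min)
# Input to the study's improved model: the estimated vapor pressure deficit
# e_sat(T_avg) - e_sat(T_min); the bi-linear mapping fitted to measured VPD
# is not reproduced here.
vpd_est = e_sat((t_min + t_max) / 2.0) - e_sat(t_min)
print(round(ea, 3), round(vpd_est, 3))
```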

  13. Evaluation of a GPS used in conjunction with aerial telemetry

    USGS Publications Warehouse

    Olexa, E.M.; Gogan, P.J.P.; Podruzny, K.M.; Eiler, John; Alcorn, Doris J.; Neuman, Michael R.

    2001-01-01

    We investigated the use of a non-correctable Global Positioning System (NGPS) in association with aerial telemetry to determine animal locations. Average error was determined for 3 components of the location process: use of a NGPS receiver on the ground, use of a NGPS receiver in an aircraft while flying over a visual marker, and use of the same receiver while flying over a location determined by standard aerial telemetry. Average errors were 45.3, 88.1 and 137.4 m, respectively. A directional bias of <35 m was present for the telemetry component only. Tests indicated that use of NGPS to determine aircraft, and thereby animal, location is an efficient alternative to interpolation from topographic maps. This method was more accurate than previously reported for the Long-Range Navigation system, version C (LORAN-C), and Argos satellite telemetry. It has utility in areas where animal-borne GPS receivers are not practical due to a combination of topography, canopy coverage, weight or cost of animal-borne GPS units. Use of NGPS technology in conjunction with aerial telemetry will provide the location accuracy required for identification of gross movement patterns and coarse-grained habitat use.

  14. Estimation of Full-Body Poses Using Only Five Inertial Sensors: An Eager or Lazy Learning Approach?

    PubMed Central

    Wouda, Frank J.; Giuberti, Matteo; Bellusci, Giovanni; Veltink, Peter H.

    2016-01-01

    Human movement analysis has become easier with the wide availability of motion capture systems. Inertial sensing has made it possible to capture human motion without external infrastructure, therefore allowing measurements in any environment. As high-quality motion capture data is available in large quantities, this creates possibilities to further simplify hardware setups, by use of data-driven methods to decrease the number of body-worn sensors. In this work, we contribute to this field by analyzing the capabilities of using either artificial neural networks (eager learning) or nearest neighbor search (lazy learning) for such a problem. Sparse orientation features, resulting from sensor fusion of only five inertial measurement units with magnetometers, are mapped to full-body poses. Both eager and lazy learning algorithms are shown to be capable of constructing this mapping. The full-body output poses are visually plausible with an average joint position error of approximately 7 cm, and an average joint angle error of 7°. Additionally, the effects of magnetic disturbances typical in orientation tracking on the estimation of full-body poses were also investigated, where nearest neighbor search showed better performance for such disturbances. PMID:27983676
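
The lazy-learning (nearest neighbor search) half of the comparison can be sketched with stand-in random data; the dimensions below (5 IMUs × 4 quaternion components, 23 joints × 3 coordinates) are plausible assumptions, not the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy database: sparse orientation features (5 IMUs x 4 quaternion
# components = 20-D) paired with full-body poses (23 joints x 3 = 69-D).
n_train, n_feat, n_joints = 500, 20, 23
features = rng.normal(size=(n_train, n_feat))
poses = rng.normal(size=(n_train, n_joints * 3))

def nearest_neighbor_pose(query, features, poses):
    """Lazy learning: return the stored full-body pose whose orientation
    features are closest (Euclidean) to the query features."""
    d = np.linalg.norm(features - query, axis=1)
    return poses[int(np.argmin(d))]

def mean_joint_position_error(pred, truth, n_joints=23):
    """Average Euclidean error over joints; poses are flattened (x,y,z)."""
    diff = pred.reshape(n_joints, 3) - truth.reshape(n_joints, 3)
    return float(np.mean(np.linalg.norm(diff, axis=1)))

# Query near a known sample: the lookup should return that sample's pose.
query = features[123] + 0.01 * rng.normal(size=n_feat)
pred = nearest_neighbor_pose(query, features, poses)
print(mean_joint_position_error(pred, poses[123]))
```

The "lazy" label refers to deferring all computation to query time; the eager alternative in the paper instead trains a neural network offline on the same feature-to-pose pairs.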

  15. Evaluation of the cost effectiveness of the 1983 stream-gaging program in Kansas

    USGS Publications Warehouse

    Medina, K.D.; Geiger, C.O.

    1984-01-01

    The results of an evaluation of the cost effectiveness of the 1983 stream-gaging program in Kansas are documented. Data uses and funding sources were identified for the 140 complete-record streamflow-gaging stations operated in Kansas during 1983 with a budget of $793,780. As a result of the evaluation of the needs and uses of data from the stream-gaging program, it was found that the 140 gaging stations were needed to meet these data requirements. The average standard error of estimation of streamflow records was 20.8 percent, assuming the 1983 budget and operating schedule of 6-week interval visitations and based on 85 of the 140 stations. It was shown that this overall level of accuracy could be improved to 18.9 percent by altering the 1983 schedule of station visitations. A minimum budget of $760,000, with a corresponding average error of estimation of 24.9 percent, is required to operate the 1983 program. None of the stations investigated were suitable for the application of alternative methods for simulating discharge records. Improved instrumentation can have a very positive impact on streamflow uncertainties by decreasing lost record. (USGS)

  16. Estimation of Full-Body Poses Using Only Five Inertial Sensors: An Eager or Lazy Learning Approach?

    PubMed

    Wouda, Frank J; Giuberti, Matteo; Bellusci, Giovanni; Veltink, Peter H

    2016-12-15

    Human movement analysis has become easier with the wide availability of motion capture systems. Inertial sensing has made it possible to capture human motion without external infrastructure, therefore allowing measurements in any environment. As high-quality motion capture data is available in large quantities, this creates possibilities to further simplify hardware setups, by use of data-driven methods to decrease the number of body-worn sensors. In this work, we contribute to this field by analyzing the capabilities of using either artificial neural networks (eager learning) or nearest neighbor search (lazy learning) for such a problem. Sparse orientation features, resulting from sensor fusion of only five inertial measurement units with magnetometers, are mapped to full-body poses. Both eager and lazy learning algorithms are shown to be capable of constructing this mapping. The full-body output poses are visually plausible with an average joint position error of approximately 7 cm, and an average joint angle error of 7°. Additionally, the effects of magnetic disturbances typical in orientation tracking on the estimation of full-body poses were also investigated, where nearest neighbor search showed better performance for such disturbances.

  17. Experimental investigation of observation error in anuran call surveys

    USGS Publications Warehouse

    McClintock, B.T.; Bailey, L.L.; Pollock, K.H.; Simons, T.R.

    2010-01-01

    Occupancy models that account for imperfect detection are often used to monitor anuran and songbird species occurrence. However, presence-absence data arising from auditory detections may be more prone to observation error (e.g., false-positive detections) than are sampling approaches utilizing physical captures or sightings of individuals. We conducted realistic, replicated field experiments using a remote broadcasting system to simulate simple anuran call surveys and to investigate potential factors affecting observation error in these studies. Distance, time, ambient noise, and observer abilities were the most important factors explaining false-negative detections. Distance and observer ability were the best overall predictors of false-positive errors, but ambient noise and competing species also affected error rates for some species. False-positive errors made up 5% of all positive detections, with individual observers exhibiting false-positive rates between 0.5% and 14%. Previous research suggests false-positive errors of these magnitudes would induce substantial positive biases in standard estimators of species occurrence, and we recommend practices to mitigate false positives when developing occupancy monitoring protocols that rely on auditory detections. These recommendations include additional observer training, limiting the number of target species, and establishing distance and ambient noise thresholds during surveys. © 2010 The Wildlife Society.
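
Why false positives of this magnitude bias naive occupancy estimates upward can be seen from a simple closed-form model; the occupancy, detection, and false-positive values below are illustrative assumptions:

```python
def apparent_occupancy(psi, p, f, m):
    """Probability a site yields >= 1 positive detection in m surveys, given
    true occupancy psi, per-survey detection p at occupied sites, and
    per-survey false-positive rate f at unoccupied sites."""
    detected = 1.0 - (1.0 - p) ** m    # >= 1 detection | occupied
    false_hit = 1.0 - (1.0 - f) ** m   # >= 1 false detection | unoccupied
    return psi * detected + (1.0 - psi) * false_hit

psi, p, m = 0.3, 0.5, 5                # illustrative values
for f in (0.0, 0.005, 0.05, 0.14):     # spanning the observer rates reported
    print(f"f={f}: apparent occupancy = {apparent_occupancy(psi, p, f, m):.3f}")
```

Even a small per-survey false-positive rate compounds across repeat visits at the many unoccupied sites, inflating apparent occupancy well above the true value.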

  18. Non-Contact Thrust Stand Calibration Method for Repetitively-Pulsed Electric Thrusters

    NASA Technical Reports Server (NTRS)

    Wong, Andrea R.; Toftul, Alexandra; Polzin, Kurt A.; Pearson, J. Boise

    2011-01-01

    A thrust stand calibration technique for use in testing repetitively-pulsed electric thrusters for in-space propulsion has been developed and tested using a modified hanging pendulum thrust stand. In the implementation of this technique, current pulses are applied to a solenoidal coil to produce a pulsed magnetic field that acts against the magnetic field produced by a permanent magnet mounted to the thrust stand pendulum arm. The force on the magnet is applied in this non-contact manner, with the entire pulsed force transferred to the pendulum arm through a piezoelectric force transducer to provide a time-accurate force measurement. Modeling of the pendulum arm dynamics reveals that, after an initial transient in thrust stand motion, the quasi-steady average deflection of the thrust stand arm away from the unforced or zero position can be related to the average applied force through a simple linear Hooke's law relationship. Modeling demonstrates that this technique is universally applicable except when the pulsing period is increased to the point where it approaches the period of natural thrust stand motion. Calibration data were obtained using a modified hanging pendulum thrust stand previously used for steady-state thrust measurements. Data were obtained for varying impulse bit at constant pulse frequency and for varying pulse frequency. The two data sets exhibit excellent quantitative agreement with each other, as the constants relating average deflection and average thrust match within the errors on the linear regression curve fits of the data. Quantitatively, the error on the calibration coefficient is roughly 1% of the coefficient value.
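
The quasi-steady Hooke's-law relationship between average applied force and average deflection can be reproduced with a toy damped-oscillator simulation (arbitrary constants and semi-implicit Euler integration; not the thrust stand's actual parameters):

```python
import numpy as np

# Toy damped pendulum-arm model: m x'' + c x' + k x = F(t), pulsed forcing.
m, c, k = 1.0, 0.5, 10.0                 # arbitrary illustrative constants
dt = 1e-3
n_steps = 60000                          # 60 s of simulated time
pulse_len, period_len, F_peak = 10, 100, 5.0   # 10 ms pulses every 100 ms

# Pulse train much faster than the ~2 s natural period of the arm.
force = np.where(np.arange(n_steps) % period_len < pulse_len, F_peak, 0.0)

x = v = 0.0
xs = np.empty(n_steps)
for i, F in enumerate(force):            # semi-implicit Euler integration
    v += (F - c * v - k * x) / m * dt
    x += v * dt
    xs[i] = x

F_avg = float(force.mean())              # duty cycle 10% -> 0.5 N average
x_avg = float(xs[n_steps // 2:].mean())  # quasi-steady average, transient gone
print(round(x_avg, 4), round(F_avg / k, 4))  # Hooke's law: x_avg ~ F_avg / k
```

As the abstract notes, the linear relation breaks down when the pulse period approaches the stand's natural period; in this sketch that corresponds to lengthening `period_len` toward the ~2 s natural period.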

  19. Data-driven region-of-interest selection without inflating Type I error rate.

    PubMed

    Brooks, Joseph L; Zoumpoulaki, Alexia; Bowman, Howard

    2017-01-01

    In ERP and other large multidimensional neuroscience data sets, researchers often select regions of interest (ROIs) for analysis. The method of ROI selection can critically affect the conclusions of a study by causing the researcher to miss effects in the data or to detect spurious effects. In practice, to avoid inflating Type I error rate (i.e., false positives), ROIs are often based on a priori hypotheses or independent information. However, this can be insensitive to experiment-specific variations in effect location (e.g., latency shifts) reducing power to detect effects. Data-driven ROI selection, in contrast, is nonindependent and uses the data under analysis to determine ROI positions. Therefore, it has potential to select ROIs based on experiment-specific information and increase power for detecting effects. However, data-driven methods have been criticized because they can substantially inflate Type I error rate. Here, we demonstrate, using simulations of simple ERP experiments, that data-driven ROI selection can indeed be more powerful than a priori hypotheses or independent information. Furthermore, we show that data-driven ROI selection using the aggregate grand average from trials (AGAT), despite being based on the data at hand, can be safely used for ROI selection under many circumstances. However, when there is a noise difference between conditions, using the AGAT can inflate Type I error and should be avoided. We identify critical assumptions for use of the AGAT and provide a basis for researchers to use, and reviewers to assess, data-driven methods of ROI localization in ERP and other studies. © 2016 Society for Psychophysiological Research.
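
A minimal sketch of AGAT-based ROI selection on synthetic single-channel ERP data (all values assumed for illustration): the ROI is chosen from the grand average pooled over both conditions, then the condition contrast is tested only within that window.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic single-channel ERP data: trials x timepoints, two conditions
# sharing a component at an experiment-specific latency (sample 120).
n_trials, n_time, peak, width = 40, 200, 120, 15
t = np.arange(n_time)
component = np.exp(-((t - peak) ** 2) / (2 * width**2))
cond_a = 1.0 * component + rng.normal(0, 1, (n_trials, n_time))
cond_b = 2.0 * component + rng.normal(0, 1, (n_trials, n_time))

# AGAT: aggregate grand average across ALL trials of BOTH conditions, so
# the ROI is picked without favoring either condition.
agat = np.vstack([cond_a, cond_b]).mean(axis=0)
center = int(np.argmax(np.abs(agat)))
roi = slice(max(center - 10, 0), center + 10)   # data-driven ROI window

# Test the condition contrast using only the AGAT-selected ROI.
a_amp = cond_a[:, roi].mean(axis=1)             # per-trial ROI amplitude
b_amp = cond_b[:, roi].mean(axis=1)
print(center, round(float(b_amp.mean() - a_amp.mean()), 3))
```

Note the caveat from the abstract: this pooling is only safe when the noise level is comparable across conditions; here both conditions use the same noise standard deviation by construction.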

  20. Modeling and calculation of impact friction caused by corner contact in gear transmission

    NASA Astrophysics Data System (ADS)

    Zhou, Changjiang; Chen, Siyu

    2014-09-01

    Corner contact in a gear pair causes vibration and noise, which has attracted much attention. However, tooth errors and deformation make it difficult to determine the point at which corner contact occurs and to study the mechanism of tooth impact friction. Based on the mechanism of corner contact, the process of corner contact is divided into two stages, impact and scratch, and a calculation model including the gear equivalent error and combined deformation is established along the line of action. According to the distributive law, the gear equivalent error is synthesized from the base pitch error, normal backlash and tooth profile modification on the line of action. The combined tooth compliance of the first point lying in corner contact before the normal path is inverted along the line of action, on the basis of the theory of engagement and the curve of tooth synthetic compliance and load history. Combining the equivalent error with the combined deflection, a criterion for locating the point of corner contact is derived. The impact positions and forces, from the beginning to the end of corner contact before the normal path, are then calculated accurately. From these results, a backlash model during corner contact is constructed, and the impact force and friction coefficient are quantified. A numerical example is performed and the averaged impact friction coefficient based on the presented calculation method is validated. These results provide a reference for understanding the complex mechanism of tooth impact friction, for quantitative calculation of the friction force and coefficient, and for exact gear design for tribology.

  1. Extent of Continental Crust Thickening Derived From Gravity Profile Leading From Aden Towards the Dhala Plateau in the Yemen Trap Series

    NASA Astrophysics Data System (ADS)

    Blecha, V.

    2003-12-01

    The gravity profile trends NNW from Aden and terminates at the Dhala plateau, formed by Tertiary volcanics often referred to as the Yemen Trap Series. The profile is 120 km long and consists of 366 gravity stations with an average spacing of 300 m between stations. The mean square error of the Bouguer anomalies is 0.06 mGal. This final error includes errors of gravity and altitude measurements and errors in terrain corrections. Altitudes along the profile range from 0 m a.s.l. in the south to 1400 m a.s.l. at the northern end of the profile. In the central part of the Gulf of Aden, juvenile oceanic crust occurs. Stretched continental crust is assumed on the coast. The regional gravity field decreases from +38 mGal on the coast in Aden to -126 mGal in the mountains of the Dhala plateau. According to gravity modeling, the decrease of 164 mGal in gravity is caused by 8 km of continental crust thickening over the distance of 120 km. The regional gravity field is accompanied by local anomalies with amplitudes of tens of mGal. Sources of local anomalies are, from S to N: coastal sediments (negative), Tertiary intrusions and volcanics within the Dhala graben (positive), Mesozoic sediments (negative) and Tertiary volcanics of the Dhala plateau (positive). This gravity profile is the most detailed and most precise regional gravity measurement carried out in the southern tip of Arabia and brings new information about the geology of an area with scarce geophysical data.

  2. Error Reduction Methods for Integrated-path Differential-absorption Lidar Measurements

    NASA Technical Reports Server (NTRS)

    Chen, Jeffrey R.; Numata, Kenji; Wu, Stewart T.

    2012-01-01

    We report new modeling and error reduction methods for differential-absorption optical-depth (DAOD) measurements of atmospheric constituents using direct-detection integrated-path differential-absorption lidars. Errors from laser frequency noise are quantified in terms of the line center fluctuation and spectral line shape of the laser pulses, revealing relationships verified experimentally. A significant DAOD bias is removed by introducing a correction factor. Errors from surface height and reflectance variations can be reduced to tolerable levels by incorporating altimetry knowledge and "log after averaging", or by pointing the laser and receiver to a fixed surface spot during each wavelength cycle to shorten the time of "averaging before log".
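
The benefit of "log after averaging" over "averaging before log" follows from Jensen's inequality applied to the logarithm; a toy sketch with multiplicative noise on the on/off-line return ratio (all numbers illustrative, unrelated to the paper's laser-frequency-noise analysis):

```python
import numpy as np

rng = np.random.default_rng(3)

# Noisy on/off-line return-power ratios over many pulse pairs; tau is the
# true differential absorption optical depth (illustrative value).
tau = 0.6
ratios = np.exp(-tau) * (1.0 + 0.1 * rng.normal(size=10_000))

daod_log_after_avg = float(-np.log(ratios.mean()))   # "log after averaging"
daod_avg_before_log = float(-np.log(ratios).mean())  # "averaging before log"
# By Jensen's inequality, the second estimator is biased high for noisy ratios.
print(round(daod_log_after_avg, 4), round(daod_avg_before_log, 4))
```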

  3. Surprised at All the Entropy: Hippocampal, Caudate and Midbrain Contributions to Learning from Prediction Errors

    PubMed Central

    Schiffer, Anne-Marike; Ahlheim, Christiane; Wurm, Moritz F.; Schubotz, Ricarda I.

    2012-01-01

    Influential concepts in neuroscientific research cast the brain as a predictive machine that revises its predictions when they are violated by sensory input. This relates to the predictive coding account of perception, but also to learning. Learning from prediction errors has been suggested to take place in the hippocampal memory system as well as in the basal ganglia. The present fMRI study used an action-observation paradigm to investigate the contributions of the hippocampus, caudate nucleus and midbrain dopaminergic system to different types of learning: learning in the absence of prediction errors, learning from prediction errors, and responding to the accumulation of prediction errors in unpredictable stimulus configurations. We conducted analyses of the regions of interest's BOLD responses to these different types of learning, implementing a bootstrapping procedure to correct for false positives. We found both the caudate nucleus and the hippocampus to be activated by perceptual prediction errors. The hippocampal responses seemed to relate to the associative mismatch between a stored representation and current sensory input. Moreover, its response was significantly influenced by the average information, or Shannon entropy, of the stimulus material. In accordance with earlier results, the habenula was activated by perceptual prediction errors. Lastly, we found that the substantia nigra was activated by the novelty of sensory input. In sum, we established that the midbrain dopaminergic system, the hippocampus, and the caudate nucleus were to different degrees significantly involved in the three different types of learning: acquisition of new information, learning from prediction errors and responding to unpredictable stimulus developments. We relate learning from perceptual prediction errors to the concept of predictive coding and related information theoretic accounts. PMID:22570715

  4. An automatic markerless registration method for neurosurgical robotics based on an optical camera.

    PubMed

    Meng, Fanle; Zhai, Fangwen; Zeng, Bowei; Ding, Hui; Wang, Guangzhi

    2018-02-01

    Current markerless registration methods for neurosurgical robotics use the facial surface to match the robot space with the image space, and acquisition of the facial surface usually requires manual interaction and constrains the patient to a supine position. To overcome these drawbacks, we propose a registration method that is automatic and does not constrain patient position. An optical camera attached to the robot end effector captures images around the patient's head from multiple views. Then, high coverage of the head surface is reconstructed from the images through multi-view stereo vision. Since the acquired head surface point cloud contains color information, a specific mark that is manually drawn on the patient's head prior to the capture procedure can be extracted to automatically accomplish coarse registration rather than using facial anatomic landmarks. Then, fine registration is achieved by registering the high coverage of the head surface without relying solely on the facial region, thus eliminating patient position constraints. The head surface was acquired by the camera with good repeatability. The average target registration error of 8 different patient positions measured with targets inside a head phantom was [Formula: see text], while the mean surface registration error was [Formula: see text]. The method proposed in this paper achieves automatic markerless registration in multiple patient positions and guarantees registration accuracy inside the head. This method provides a new approach for establishing the spatial relationship between the image space and the robot space.

  5. Analysis of using EMG and mechanical sensors to enhance intent recognition in powered lower limb prostheses

    NASA Astrophysics Data System (ADS)

    Young, A. J.; Kuiken, T. A.; Hargrove, L. J.

    2014-10-01

    Objective. The purpose of this study was to determine the contribution of electromyography (EMG) data, in combination with a diverse array of mechanical sensors, to locomotion mode intent recognition in transfemoral amputees using powered prostheses. Additionally, we determined the effect of adding time history information using a dynamic Bayesian network (DBN) for both the mechanical and EMG sensors. Approach. EMG signals from the residual limbs of amputees have been proposed to enhance pattern recognition-based intent recognition systems for powered lower limb prostheses, but mechanical sensors on the prosthesis—such as inertial measurement units, position and velocity sensors, and load cells—may be just as useful. EMG and mechanical sensor data were collected from 8 transfemoral amputees using a powered knee/ankle prosthesis over basic locomotion modes such as walking, slopes and stairs. An offline study was conducted to determine the benefit of different sensor sets for predicting intent. Main results. EMG information was not as accurate alone as mechanical sensor information (p < 0.05) for any classification strategy. However, EMG in combination with the mechanical sensor data did significantly reduce intent recognition errors (p < 0.05) both for transitions between locomotion modes and steady-state locomotion. The sensor time history (DBN) classifier significantly reduced error rates compared to a linear discriminant classifier for steady-state steps, without increasing the transitional error, for both EMG and mechanical sensors. Combining EMG and mechanical sensor data with sensor time history reduced the average transitional error from 18.4% to 12.2% and the average steady-state error from 3.8% to 1.0% when classifying level-ground walking, ramps, and stairs in eight transfemoral amputee subjects. Significance. 
These results suggest that a neural interface in combination with time history methods for locomotion mode classification can enhance intent recognition performance; this strategy should be considered for future real-time experiments.

  6. CT, MR, and ultrasound image artifacts from prostate brachytherapy seed implants: The impact of seed size

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, Andrew K. H.; Basran, Parminder S.; Thomas, Steven D.

    Purpose: To investigate the effects of brachytherapy seed size on the quality of x-ray computed tomography (CT), ultrasound (US), and magnetic resonance (MR) images and seed localization through comparison of the 6711 and 9011 {sup 125}I sources. Methods: For CT images, an acrylic phantom mimicking a clinical implantation plan and embedded with low contrast regions of interest (ROIs) was designed for both the 0.774 mm diameter 6711 (standard) and the 0.508 mm diameter 9011 (thin) seed models (Oncura, Inc., and GE Healthcare, Arlington Heights, IL). Image quality metrics were assessed using the standard deviation of ROIs between the seeds andmore » the contrast to noise ratio (CNR) within the low contrast ROIs. For US images, water phantoms with both single and multiseed arrangements were constructed for both seed sizes. For MR images, both seeds were implanted into a porcine gel and imaged with pelvic imaging protocols. The standard deviation of ROIs and CNR values were used as metrics of artifact quantification. Seed localization within the CT images was assessed using the automated seed finder in a commercial brachytherapy treatment planning system. The number of erroneous seed placements and the average and maximum error in seed placements were recorded as metrics of the localization accuracy. Results: With the thin seeds, CT image noise was reduced from 48.5 {+-} 0.2 to 32.0 {+-} 0.2 HU and CNR improved by a median value of 74% when compared with the standard seeds. Ultrasound image noise was measured at 50.3 {+-} 17.1 dB for the thin seed images and 50.0 {+-} 19.8 dB for the standard seed images, and artifacts directly behind the seeds were smaller and less prominent with the thin seed model. For MR images, CNR of the standard seeds reduced on average 17% when using the thin seeds for all different imaging sequences and seed orientations, but these differences are not appreciable. 
Automated seed localization required an average (±SD) of 7.0 ± 3.5 manual corrections in seed positions for the thin seed scans and 3.0 ± 1.2 manual corrections in seed positions for the standard seed scans. The average error in seed placement was 1.2 mm for both seed types and the maximum error in seed placement was 2.1 mm for the thin seed scans and 1.8 mm for the standard seed scans. Conclusions: The 9011 thin seeds yielded significantly improved image quality for CT and US images but no significant differences in MR image quality.

  7. SU-E-T-628: Predicted Risk of Post-Irradiation Cerebral Necrosis in Pediatric Brain Cancer Patients: A Treatment Planning Comparison of Proton Vs. Photon Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freund, D; Zhang, R; Sanders, M

    Purpose: Post-irradiation cerebral necrosis (PICN) is a severe late effect that can result from treatment of brain cancers using radiation therapy. The purpose of this study was to compare the treatment plans and predicted risk of PICN after volumetric modulated arc therapy (VMAT) to the risk after passively scattered proton therapy (PSPT) and intensity modulated proton therapy (IMPT) in a cohort of pediatric patients. Methods: Thirteen pediatric patients with varying age and sex were selected for this study. A clinical treatment volume (CTV) was constructed for 8 glioma patients and 5 ependymoma patients. The prescribed dose was 54 Gy over 30 fractions to the planning volume. Dosimetric endpoints were compared between VMAT and proton plans. The normal tissue complication probability (NTCP) following VMAT and proton therapy planning was also calculated using PICN as the biological endpoint. Sensitivity tests were performed to determine if the predicted risk of PICN was sensitive to positional errors, proton range errors and selection of risk models. Results: Both PSPT and IMPT plans resulted in a significant increase in the maximum dose and reduction in the total brain volume irradiated to low doses compared with the VMAT plans. The average ratios of NTCP between PSPT and VMAT were 0.56 and 0.38 for glioma and ependymoma patients respectively, and the average ratios of NTCP between IMPT and VMAT were 0.67 and 0.68 for glioma and ependymoma plans respectively. Sensitivity tests revealed that predicted ratios of risk were insensitive to range and positional errors but varied with risk model selection. Conclusion: Both PSPT and IMPT plans resulted in a decrease in the predicted risk of necrosis for the pediatric plans studied in this work. Sensitivity analysis upheld the qualitative findings of the risk models used in this study; however, more accurate models that take into account dose and volume are needed.

  8. Generation of fluoroscopic 3D images with a respiratory motion model based on an external surrogate signal

    NASA Astrophysics Data System (ADS)

    Hurwitz, Martina; Williams, Christopher L.; Mishra, Pankaj; Rottmann, Joerg; Dhou, Salam; Wagar, Matthew; Mannarino, Edward G.; Mak, Raymond H.; Lewis, John H.

    2015-01-01

    Respiratory motion during radiotherapy can cause uncertainties in definition of the target volume and in estimation of the dose delivered to the target and healthy tissue. In this paper, we generate volumetric images of the internal patient anatomy during treatment using only the motion of a surrogate signal. Pre-treatment four-dimensional CT imaging is used to create a patient-specific model correlating internal respiratory motion with the trajectory of an external surrogate placed on the chest. The performance of this model is assessed with digital and physical phantoms reproducing measured irregular patient breathing patterns. Ten patient breathing patterns are incorporated in a digital phantom. For each patient breathing pattern, the model is used to generate images over the course of thirty seconds. The tumor position predicted by the model is compared to ground truth information from the digital phantom. Over the ten patient breathing patterns, the average absolute error in the tumor centroid position predicted by the motion model is 1.4 mm. The corresponding error for one patient breathing pattern implemented in an anthropomorphic physical phantom was 0.6 mm. The global voxel intensity error was used to compare the full image to the ground truth and demonstrates good agreement between predicted and true images. The model also generates accurate predictions for breathing patterns with irregular phases or amplitudes.

  9. A combined time-of-flight and depth-of-interaction detector for total-body positron emission tomography.

    PubMed

    Berg, Eric; Roncali, Emilie; Kapusta, Maciej; Du, Junwei; Cherry, Simon R

    2016-02-01

    In support of a project to build a total-body PET scanner with an axial field-of-view of 2 m, the authors are developing simple, cost-effective block detectors with combined time-of-flight (TOF) and depth-of-interaction (DOI) capabilities. This work focuses on investigating the potential of phosphor-coated crystals with conventional PMT-based block detector readout to provide DOI information while preserving timing resolution. The authors explored a variety of phosphor-coating configurations with single crystals and crystal arrays. Several pulse shape discrimination techniques were investigated, including decay time, delayed charge integration (DCI), and average signal shapes. Pulse shape discrimination based on DCI provided the lowest DOI positioning error: 2 mm DOI positioning error was obtained with single phosphor-coated crystals while 3-3.5 mm DOI error was measured with the block detector module. Minimal timing resolution degradation was observed with single phosphor-coated crystals compared to uncoated crystals, and a timing resolution of 442 ps was obtained with phosphor-coated crystals in the block detector compared to 404 ps without phosphor coating. Flood maps showed a slight degradation in crystal resolvability with phosphor-coated crystals; however, all crystals could be resolved. Energy resolution was degraded by 3%-7% with phosphor-coated crystals compared to uncoated crystals. These results demonstrate the feasibility of obtaining TOF-DOI capabilities with simple block detector readout using phosphor-coated crystals.

  10. Positivity, discontinuity, finite resources, and nonzero error for arbitrarily varying quantum channels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boche, H., E-mail: boche@tum.de; Nötzel, J., E-mail: janis.noetzel@tum.de

    2014-12-15

    This work is motivated by a quite general question: Under which circumstances are the capacities of information transmission systems continuous? The research is explicitly carried out on finite arbitrarily varying quantum channels (AVQCs). We give an explicit example that answers, in the affirmative, the recent question of whether the transmission of messages over AVQCs can benefit from assistance by distribution of randomness between the legitimate sender and receiver. The specific class of channels introduced in that example is then extended to show that the unassisted capacity does have discontinuity points, while it is known that the randomness-assisted capacity is always continuous in the channel. We characterize the discontinuity points and prove that the unassisted capacity is always continuous around its positivity points. After having established shared randomness as an important resource, we quantify the interplay between the distribution of finite amounts of randomness between the legitimate sender and receiver, the (nonzero) probability of a decoding error with respect to the average error criterion, and the number of messages that can be sent over a finite number of channel uses. We relate our results to the entanglement transmission capacities of finite AVQCs, where the role of shared randomness is not yet well understood, and give a new sufficient criterion for the entanglement transmission capacity with randomness assistance to vanish.

  11. Automated Classification of Selected Data Elements from Free-text Diagnostic Reports for Clinical Research.

    PubMed

    Löpprich, Martin; Krauss, Felix; Ganzinger, Matthias; Senghas, Karsten; Riezler, Stefan; Knaup, Petra

    2016-08-05

    In the Multiple Myeloma clinical registry at Heidelberg University Hospital, most data are extracted from discharge letters. Our aim was to analyze whether the manual documentation process can be made more efficient by using natural language processing (NLP) methods for multiclass classification of free-text diagnostic reports, in order to automatically document the diagnosis and state of disease of myeloma patients. The first objective was to create a corpus consisting of free-text diagnosis paragraphs of patients with multiple myeloma from German diagnostic reports, with manual annotation of relevant data elements by documentation specialists. The second objective was to construct and evaluate a framework using different NLP methods to enable automatic multiclass classification of relevant data elements from free-text diagnostic reports. The main diagnoses paragraph was extracted from the clinical report of a randomly selected third of the patients in the multiple myeloma research database of Heidelberg University Hospital (737 patients in total). An EDC system was set up, and two data entry specialists independently performed manual documentation of at least nine specific data elements for multiple myeloma characterization. Both data entries were compared and assessed by a third specialist, and an annotated text corpus was created. A framework was constructed, consisting of a self-developed package to split multiple diagnosis sequences into several subsequences, four different preprocessing steps to normalize the input data, and two classifiers: a maximum entropy classifier (MEC) and a support vector machine (SVM). In total, 15 different pipelines were examined and assessed by ten-fold cross-validation, reiterated 100 times. As quality indicators, the average error rate and the average F1-score were computed. For significance testing, the approximate randomization test was used. 
The created annotated corpus consists of 737 diagnosis paragraphs with a total of 865 coded diagnoses. The dataset is publicly available in the supplementary online files for training and testing of further NLP methods. Both classifiers showed low average error rates (MEC: 1.05; SVM: 0.84) and high F1-scores (MEC: 0.89; SVM: 0.92). However, the results varied widely depending on the classified data element. Preprocessing methods increased this effect and had a significant impact on the classification, both positive and negative. The automatic diagnosis splitter increased the average error rate significantly, even though the F1-score decreased only slightly. The low average error rates and high average F1-scores of each pipeline demonstrate the suitability of the investigated NLP methods. However, it was also shown that there is no best practice for the automatic classification of data elements from free-text diagnostic reports.

  12. Indoor visible light communication localization system utilizing received signal strength indication technique and trilateration method

    NASA Astrophysics Data System (ADS)

    Mousa, Farag I. K.; Almaadeed, Noor; Busawon, Krishna; Bouridane, Ahmed; Binns, Richard; Elliot, Ian

    2018-01-01

    Visible light communication (VLC) based on light-emitting diode (LED) technology not only provides higher data rates for indoor wireless communication while offering room illumination, but also has the potential for indoor localization. VLC-based indoor positioning using the received optical power levels from emitting LEDs is investigated. We consider both scenarios of line-of-sight (LOS) positioning and LOS combined with non-LOS (LOS+NLOS) positioning. The performance of the proposed system is evaluated under both noisy and noiseless channel conditions, as is the impact of different location codes on the positioning error. The analytical model of the system with noise and the corresponding numerical evaluation for a range of signal-to-noise ratios (SNRs) are presented. The results show that an accuracy of <10 cm on average is achievable at an SNR > 12 dB.
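    The trilateration step in an RSS-based system like the one above can be sketched in a few lines. The LED anchor coordinates and distances below are hypothetical; in practice the distances would come from an RSS-to-distance channel model rather than being measured directly.

```python
import numpy as np

def trilaterate(anchors, distances):
    """Least-squares position fix from >= 3 anchor positions and estimated
    distances (e.g. derived from received signal strength).
    Linearizes the range equations against the last anchor."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    ref = anchors[-1]
    # Subtracting the reference anchor's range equation removes the
    # quadratic ||x||^2 terms, leaving a linear system A x = b.
    A = 2.0 * (anchors[:-1] - ref)
    b = (d[-1] ** 2 - d[:-1] ** 2
         + np.sum(anchors[:-1] ** 2, axis=1) - np.sum(ref ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Hypothetical 2D example: three LED anchors, exact distances to (1, 2)
leds = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
target = np.array([1.0, 2.0])
dists = [np.linalg.norm(target - np.array(l)) for l in leds]
print(trilaterate(leds, dists))  # ≈ [1. 2.]
```

    With noisy distance estimates the same least-squares solve still applies; the residual then reflects the SNR-dependent positioning error the abstract quantifies.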

  13. Model-Based Angular Scan Error Correction of an Electrothermally-Actuated MEMS Mirror

    PubMed Central

    Zhang, Hao; Xu, Dacheng; Zhang, Xiaoyang; Chen, Qiao; Xie, Huikai; Li, Suiqiong

    2015-01-01

    In this paper, the actuation behavior of a two-axis electrothermal MEMS (Microelectromechanical Systems) mirror typically used in miniature optical scanning probes and optical switches is investigated. The MEMS mirror consists of four thermal bimorph actuators symmetrically located at the four sides of a central mirror plate. Experiments show that an actuation characteristics difference of as much as 4.0% exists among the four actuators due to process variations, which leads to an average angular scan error of 0.03°. A mathematical model between the actuator input voltage and the mirror-plate position has been developed to predict the actuation behavior of the mirror. It is a four-input, four-output model that takes into account the thermal-mechanical coupling and the differences among the four actuators; the vertical positions of the ends of the four actuators are also monitored. Based on this model, an open-loop control method is established to achieve accurate angular scanning. This model-based open-loop control has been experimentally verified and is useful for the accurate control of the mirror. With this control method, the precise actuation of the mirror depends solely on the model prediction and does not require real-time mirror position monitoring and feedback, greatly simplifying the MEMS control system. PMID:26690432

  14. Robust tracking of dexterous continuum robots: Fusing FBG shape sensing and stereo vision.

    PubMed

    Rumei Zhang; Hao Liu; Jianda Han

    2017-07-01

    Robust and efficient tracking of continuum robots is important for improving patient safety during space-confined minimally invasive surgery; however, it has been a particularly challenging task for researchers. In this paper, we present a novel tracking scheme that fuses fiber Bragg grating (FBG) shape sensing and stereo vision to estimate the position of continuum robots. Previous visual tracking easily suffers from a lack of robustness and leads to failure, while the FBG shape sensor can only reconstruct the local shape, with cumulative integration error. The proposed fusion is anticipated to compensate for their shortcomings and improve the tracking accuracy. To verify its effectiveness, the robot's centerline is recognized by morphological operations and reconstructed by a stereo matching algorithm. The shape obtained by the FBG sensor is transformed into the distal tip position with respect to the camera coordinate system through previously calibrated registration matrices. An experimental platform was set up and repeated tracking experiments were carried out. The accuracy, estimated by averaging the absolute positioning errors between shape sensing and stereo vision, is 0.67±0.65 mm, 0.41±0.25 mm, and 0.72±0.43 mm for x, y and z, respectively. Results indicate that the proposed fusion is feasible and can be used for closed-loop control of continuum robots.

  15. Estimating Gestational Age With Sonography: Regression-Derived Formula Versus the Fetal Biometric Average.

    PubMed

    Cawyer, Chase R; Anderson, Sarah B; Szychowski, Jeff M; Neely, Cherry; Owen, John

    2018-03-01

    To compare the accuracy of a new regression-derived formula developed from the National Fetal Growth Studies data to the common alternative method that uses the average of the gestational ages (GAs) calculated from each fetal biometric measurement (biparietal diameter, head circumference, abdominal circumference, and femur length). This retrospective cross-sectional study identified nonanomalous singleton pregnancies that had a crown-rump length plus at least 1 additional sonographic examination with complete fetal biometric measurements. With the use of the crown-rump length to establish the referent estimated date of delivery, each method's error (National Institute of Child Health and Human Development regression versus Hadlock average [Radiology 1984; 152:497-501]) at every examination was computed. Error, defined as the difference between the crown-rump length-derived GA and each method's predicted GA (weeks), was compared in 3 GA intervals: 1 (14 weeks-20 weeks 6 days), 2 (21 weeks-28 weeks 6 days), and 3 (≥29 weeks). In addition, the proportion of each method's examinations with errors outside prespecified (±) day ranges was computed by using odds ratios. A total of 16,904 sonograms were identified. The overall and prespecified GA range subset mean errors were significantly smaller for the regression than for the average (P < .01), and the regression had significantly lower odds of observing examinations outside the specified range of error in GA intervals 2 (odds ratio, 1.15; 95% confidence interval, 1.01-1.31) and 3 (odds ratio, 1.24; 95% confidence interval, 1.17-1.32) than the average method. In a contemporary unselected population of women dated by a crown-rump length-derived GA, the National Institute of Child Health and Human Development regression formula produced fewer estimates outside a prespecified margin of error than the commonly used Hadlock average; the differences were most pronounced for GA estimates at 29 weeks and later. 
© 2017 by the American Institute of Ultrasound in Medicine.

  16. Fermi LAT detection of enhanced gamma-ray emission from the Crab Nebula region

    NASA Astrophysics Data System (ADS)

    Ojha, Roopesh; Buehler, Rolf; Hays, Elizabeth; Dutka, Michael

    2012-07-01

    The Large Area Telescope (LAT), one of the two instruments on the Fermi Gamma-ray Space Telescope, has observed a significant increase in the gamma-ray activity from a source positionally consistent with the Crab Nebula on July 3, 2012. Preliminary LAT analysis indicates that the daily-averaged gamma-ray emission (E >100 MeV) from the direction of the Crab doubled from (2.4 +/- 0.5) x 10^-6 ph/cm2/sec (statistical errors only) on July 2nd to (5.5 +/- 0.7) x 10^-6 ph/cm2/sec on July 3rd, a factor of 2 greater than the average flux of (2.75 +/- 0.10) x 10^-6 ph/cm2/sec reported in the second Fermi LAT catalog (2FGL, Nolan et al.

  17. Improving laboratory data entry quality using Six Sigma.

    PubMed

    Elbireer, Ali; Le Chasseur, Julie; Jackson, Brooks

    2013-01-01

    Makerere University in Uganda provides clinical laboratory support to over 70 clients in Uganda. With increased volume, manual data entry errors have steadily increased, prompting laboratory managers to employ the Six Sigma method to evaluate and reduce their problems. The purpose of this paper is to describe how laboratory data entry quality was improved by using Six Sigma. The Six Sigma Quality Improvement (QI) project team followed a sequence of steps, starting with defining project goals, measuring data entry errors to assess current performance, analyzing the data, and determining the root causes of data-entry errors. Finally, the team implemented changes and control measures to address the root causes and to maintain the improvements. Establishing the Six Sigma project required considerable resources, and maintaining the gains requires additional personnel time and dedicated resources. After initiating the Six Sigma project, there was a 60.5 percent reduction in data entry errors, from 423 errors a month (i.e. 4.34 Six Sigma) in the first month down to an average of 166 errors/month (i.e. 4.65 Six Sigma) over 12 months. The team estimated the average cost of identifying and fixing a data entry error to be $16.25 per error. Thus, reducing errors by an average of 257 errors per month over one year has saved the laboratory an estimated $50,115 a year. The Six Sigma QI project provides a replicable framework for Ugandan laboratory staff and other resource-limited organizations to promote a quality environment. Laboratory staff can deliver excellent care at a lower cost by applying QI principles. This innovative QI method of reducing data entry errors in medical laboratories may improve clinical workflow processes and yield cost savings across the health care continuum.
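    Error counts map to sigma levels like the 4.34 and 4.65 quoted above through the standard DPMO (defects per million opportunities) conversion with the conventional 1.5-sigma shift. The monthly opportunity count below is hypothetical, since the abstract does not report it, so the printed levels are illustrative rather than a reproduction of the paper's figures.

```python
from statistics import NormalDist

def sigma_level(defects, opportunities):
    """Short-term sigma level from a defect count, via DPMO and the
    conventional 1.5-sigma shift between short- and long-term capability."""
    dpmo = 1e6 * defects / opportunities
    return NormalDist().inv_cdf(1 - dpmo / 1e6) + 1.5

# Hypothetical monthly number of data-entry opportunities (fields entered)
opportunities = 150_000
print(round(sigma_level(423, opportunities), 2))  # before the QI project
print(round(sigma_level(166, opportunities), 2))  # after
```

    The same conversion run in reverse is how a target sigma level translates back into a tolerable monthly error budget.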

  18. MO-FG-202-05: Identifying Treatment Planning System Errors in IROC-H Phantom Irradiations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerns, J; Followill, D; Howell, R

    Purpose: Treatment Planning System (TPS) errors can affect large numbers of cancer patients receiving radiation therapy. Using an independent recalculation system, the Imaging and Radiation Oncology Core-Houston (IROC-H) can identify institutions that have not sufficiently modelled their linear accelerators in their TPS model. Methods: Linear accelerator point measurement data from IROC-H's site visits were aggregated and analyzed from over 30 linear accelerator models. Dosimetrically similar models were combined to create "classes". The class data were used to construct customized beam models in an independent treatment dose verification system (TVS). Approximately 200 head and neck phantom plans from 2012 to 2015 were recalculated using this TVS. Plan accuracy was evaluated by comparing the measured dose to the institution's TPS dose as well as to the TVS dose. In cases where the TVS was more accurate than the institution by an average of >2%, the institution was identified as having a non-negligible TPS error. Results: Of the ∼200 recalculated plans, the average improvement using the TVS was ∼0.1%; i.e., the recalculation, on average, slightly outperformed the institution's TPS. Of all the recalculated phantoms, 20% were identified as having a non-negligible TPS error. Fourteen plans failed current IROC-H criteria; the average TVS improvement of the failing plans was ∼3%, and 57% were found to have non-negligible TPS errors. Conclusion: IROC-H has developed an independent recalculation system to identify institutions that have considerable TPS errors. A large number of institutions were found to have non-negligible TPS errors. Even institutions that passed IROC-H criteria could be identified as having a TPS error. Resolution of such errors would improve dose delivery for a large number of IROC-H phantoms and, ultimately, patients.

  19. Position Error Covariance Matrix Validation and Correction

    NASA Technical Reports Server (NTRS)

    Frisbee, Joe, Jr.

    2016-01-01

    In order to calculate operationally accurate collision probabilities, the position error covariance matrices predicted at times of closest approach must be sufficiently accurate representations of the position uncertainties. This presentation will discuss why the Gaussian distribution is a reasonable expectation for the position uncertainty and how this assumed distribution type is used in the validation and correction of position error covariance matrices.

  20. ADEPT, a dynamic next generation sequencing data error-detection program with trimming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, Shihai; Lo, Chien-Chi; Li, Po-E

    Illumina is the most widely used next generation sequencing technology and produces millions of short reads that contain errors. These sequencing errors constitute a major problem in applications such as de novo genome assembly, metagenomics analysis and single nucleotide polymorphism discovery. In this study, we present ADEPT, a dynamic error detection method based on the quality scores of each nucleotide and its neighboring nucleotides, together with their positions within the read, compared against the position-specific quality score distribution of all bases within the sequencing run. This method greatly improves upon other available methods in terms of the true positive rate of error discovery without affecting the false positive rate, particularly within the middle of reads. We conclude that ADEPT is the only tool to date that dynamically assesses errors within reads by comparing position-specific and neighboring base quality scores with the distribution of quality scores for the dataset being analyzed. The result is a method that is less prone to position-dependent under-prediction, which is one of the most prominent issues in error prediction. The outcome is that ADEPT improves upon prior efforts in identifying true errors, primarily within the middle of reads, while reducing the false positive rate.
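    The core comparison described above can be illustrated with a toy sketch: score each base against the run-wide, position-specific quality distribution, and require its neighborhood to be depressed as well. This is a simplified stand-in for ADEPT's actual model, and all thresholds and quality values below are invented.

```python
import numpy as np

def flag_suspect_bases(read_quals, run_mean, run_std, z_thresh=2.5):
    """Flag read positions whose quality score sits far below the run-wide,
    position-specific quality distribution -- a simplified version of the
    comparison ADEPT describes, with neighbor context included."""
    q = np.asarray(read_quals, float)
    z = (q - run_mean[: len(q)]) / run_std[: len(q)]
    # Require the neighborhood to be low too, so an isolated fluke is not enough
    neighbor = np.convolve(z, np.ones(3) / 3, mode="same")
    return np.where((z < -z_thresh) & (neighbor < -1.0))[0]

# Toy run statistics: mean Q35, std 3 at every position; one read with a dip
run_mean = np.full(10, 35.0)
run_std = np.full(10, 3.0)
read = np.array([35, 35, 35, 35, 35, 20, 35, 35, 35, 35])
print(flag_suspect_bases(read, run_mean, run_std))  # → [5]
```

    Conditioning on both the position-specific distribution and the neighboring bases is what counters the position-dependent under-prediction the abstract highlights.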

  1. ADEPT, a dynamic next generation sequencing data error-detection program with trimming

    DOE PAGES

    Feng, Shihai; Lo, Chien-Chi; Li, Po-E; ...

    2016-02-29

    Illumina is the most widely used next generation sequencing technology and produces millions of short reads that contain errors. These sequencing errors constitute a major problem in applications such as de novo genome assembly, metagenomics analysis and single nucleotide polymorphism discovery. In this study, we present ADEPT, a dynamic error detection method based on the quality scores of each nucleotide and its neighboring nucleotides, together with their positions within the read, compared against the position-specific quality score distribution of all bases within the sequencing run. This method greatly improves upon other available methods in terms of the true positive rate of error discovery without affecting the false positive rate, particularly within the middle of reads. We conclude that ADEPT is the only tool to date that dynamically assesses errors within reads by comparing position-specific and neighboring base quality scores with the distribution of quality scores for the dataset being analyzed. The result is a method that is less prone to position-dependent under-prediction, which is one of the most prominent issues in error prediction. The outcome is that ADEPT improves upon prior efforts in identifying true errors, primarily within the middle of reads, while reducing the false positive rate.

  2. Analyzing the prices of the most expensive sheet iron all over the world: Modeling, prediction and regime change

    NASA Astrophysics Data System (ADS)

    Song, Fu-Tie; Zhou, Wei-Xing

    2010-09-01

    The private car license plates issued in Shanghai are bestowed the title of “the most expensive sheet iron all over the world”, more expensive than gold. A citizen has to bid in a monthly auction to obtain a license plate for his new private car. We perform statistical analysis to investigate the influence of the minimal price Pmin of the bidding winners, the quota of private car license plates, and the number of bidders, as well as two external shocks, the legality debate of the auction in 2004 and the auction regime reform in January 2008, on the average price P of all bidding winners. It is found that the legality debate of the auction had a marginal, transient impact on the average price over a short time period. In contrast, the change of the auction rules has a significant permanent influence on the average price, reducing it by about 3020 yuan Renminbi. This means that the average price exhibits nonlinear behavior with a regime change. The evolution of the average price is independent of the number of bidders in both regimes. In the early regime before January 2008, the average price P was influenced only by the minimal price Pmin in the preceding month, with a positive correlation. In the current regime since January 2008, the average price is positively correlated with the minimal price and the quota in the preceding month and negatively correlated with the quota in the same month. We test the predictive power of the two models using 2-year and 3-year moving windows and find that the latter outperforms the former. It seems that the auction market becomes more efficient after the auction reform, since the prediction error increases.

  3. Flight calibration of compensated and uncompensated pitot-static airspeed probes and application of the probes to supersonic cruise vehicles

    NASA Technical Reports Server (NTRS)

    Webb, L. D.; Washington, H. P.

    1972-01-01

    Static pressure position error calibrations for a compensated and an uncompensated XB-70 nose boom pitot static probe were obtained in flight. The methods (Pacer, acceleration-deceleration, and total temperature) used to obtain the position errors over a Mach number range from 0.5 to 3.0 and an altitude range from 25,000 feet to 70,000 feet are discussed. The error calibrations are compared with the position error determined from wind tunnel tests, theoretical analysis, and a standard NACA pitot static probe. Factors which influence position errors, such as angle of attack, Reynolds number, probe tip geometry, static orifice location, and probe shape, are discussed. Also included are examples showing how the uncertainties caused by position errors can affect the inlet controls and vertical altitude separation of a supersonic transport.

  4. Precise Positioning Method for Logistics Tracking Systems Using Personal Handy-Phone System Based on Mahalanobis Distance

    NASA Astrophysics Data System (ADS)

    Yokoi, Naoaki; Kawahara, Yasuhiro; Hosaka, Hiroshi; Sakata, Kenji

    Focusing on the Personal Handy-phone System (PHS) positioning service used in physical distribution logistics, a positioning error offset method for improving positioning accuracy is developed. A disadvantage of PHS positioning is that measurement errors caused by the fluctuation of radio waves due to buildings around the terminal are large, ranging from several tens to several hundreds of meters. In this study, an error offset method is devised that learns, in advance, patterns of positioning results (latitude and longitude) containing errors together with the highest signal strength at major logistics points, and matches them with new data measured in actual distribution processes according to the Mahalanobis distance. The matching resolution is thereby improved to 1/40 of that of the conventional error offset method.
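    The Mahalanobis matching step can be sketched directly: learn a mean and covariance of (latitude, longitude, strongest signal) per logistics point, then assign each new fix to the nearest class. All coordinates and signal values below are invented for illustration.

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Squared Mahalanobis distance of observation x from a learned class."""
    diff = np.asarray(x, float) - mean
    return float(diff @ np.linalg.inv(cov) @ diff)

def classify(obs, classes):
    """Match a (lat, lon, signal-strength) observation to the nearest
    learned logistics point by Mahalanobis distance."""
    return min(classes, key=lambda name: mahalanobis(obs, *classes[name]))

# Hypothetical training fixes around two depots, with measurement noise
rng = np.random.default_rng(0)
depot_a = rng.normal([35.68, 139.77, -60.0], [0.01, 0.01, 2.0], (200, 3))
depot_b = rng.normal([35.45, 139.63, -70.0], [0.01, 0.01, 2.0], (200, 3))
classes = {name: (s.mean(axis=0), np.cov(s.T))
           for name, s in [("depot_a", depot_a), ("depot_b", depot_b)]}
print(classify([35.681, 139.771, -62.0], classes))  # → depot_a
```

    Because the covariance is learned per point, directions in which PHS errors fluctuate strongly are down-weighted, which is what makes this more robust than plain Euclidean matching.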

  5. Automatic brightness control of laser spot vision inspection system

    NASA Astrophysics Data System (ADS)

    Han, Yang; Zhang, Zhaoxia; Chen, Xiaodong; Yu, Daoyin

    2009-10-01

    The laser spot detection system aims to locate the center of the laser spot after long-distance transmission. The accuracy of positioning the laser spot center depends strongly on the system's ability to control brightness. In this paper, a high-performance automatic brightness control system is designed using an FPGA. The brightness is controlled by a combination of auto aperture (video driver) and an adaptive exposure algorithm, and clear, properly exposed images are obtained under different illumination conditions. The automatic brightness control creates favorable conditions for the subsequent positioning of the laser spot center, and experimental results show that the measurement accuracy of the system is effectively guaranteed. The average error of the spot center is within 0.5 mm.

  6. A simulation of GPS and differential GPS sensors

    NASA Technical Reports Server (NTRS)

    Rankin, James M.

    1993-01-01

    The Global Positioning System (GPS) is a revolutionary advance in navigation. Users can determine latitude, longitude, and altitude by receiving range information from at least four satellites. The statistical accuracy of the user's position is directly proportional to the statistical accuracy of the range measurement. Range errors are caused by clock errors, ephemeris errors, atmospheric delays, multipath errors, and receiver noise. Selective Availability, which the military uses to intentionally degrade accuracy for non-authorized users, is a major error source. The proportionality constant relating position errors to range errors is the Dilution of Precision (DOP) which is a function of the satellite geometry. Receivers separated by relatively short distances have the same satellite and atmospheric errors. Differential GPS (DGPS) removes these errors by transmitting pseudorange corrections from a fixed receiver to a mobile receiver. The corrected pseudorange at the moving receiver is now corrupted only by errors from the receiver clock, multipath, and measurement noise. This paper describes a software package that models position errors for various GPS and DGPS systems. The error model is used in the Real-Time Simulator and Cockpit Technology workstation simulations at NASA-LaRC. The GPS/DGPS sensor can simulate enroute navigation, instrument approaches, or on-airport navigation.
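    The pseudorange-correction idea described above is easy to sketch: the base station knows its true position, so the difference between geometric range and measured pseudorange captures the common errors, which the rover then subtracts out. The satellite and receiver coordinates below are made up, and a single lumped bias stands in for the clock, atmospheric, and Selective Availability errors.

```python
import numpy as np

def pseudorange_corrections(base_pos, sat_positions, measured_pr):
    """Corrections broadcast by a fixed base station: true geometric range
    to each satellite minus the pseudorange the base actually measured."""
    geometric = np.linalg.norm(sat_positions - base_pos, axis=1)
    return geometric - measured_pr

# Hypothetical geometry: four satellites, a fixed base, and a nearby rover
sats = np.array([[20e6, 0.0, 0.0], [0.0, 20e6, 0.0],
                 [0.0, 0.0, 20e6], [12e6, 12e6, 12e6]])
base = np.array([1000.0, 2000.0, 3000.0])
rover = np.array([1500.0, 2100.0, 2900.0])
common_err = 35.0  # identical at both receivers: clock, atmosphere, SA
base_pr = np.linalg.norm(sats - base, axis=1) + common_err
rover_pr = np.linalg.norm(sats - rover, axis=1) + common_err
corrected = rover_pr + pseudorange_corrections(base, sats, base_pr)
residual = np.abs(corrected - np.linalg.norm(sats - rover, axis=1))
print(residual.max())  # ≈ 0: the common-mode error cancels
```

    Errors that are not common to the two receivers, such as multipath and receiver noise, survive this correction, which is exactly the residual error budget the simulation described above models.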

  7. Experimental investigation of a general real-time 3D target localization method using sequential kV imaging combined with respiratory monitoring.

    PubMed

    Cho, Byungchul; Poulsen, Per; Ruan, Dan; Sawant, Amit; Keall, Paul J

    2012-11-21

    The goal of this work was to experimentally quantify the geometric accuracy of a novel real-time 3D target localization method using sequential kV imaging combined with respiratory monitoring for clinically realistic arc and static field treatment delivery and target motion conditions. A general method for real-time target localization using kV imaging and respiratory monitoring was developed. Each dimension of internal target motion T(x, y, z; t) was estimated from the external respiratory signal R(t) through the correlation between R(ti) and the projected marker positions p(xp, yp; ti) on kV images by a state-augmented linear model: T(x, y, z; t) = aR(t) + bR(t - τ) + c. The model parameters, a, b, c, were determined by minimizing the squared fitting error ∑‖p(xp, yp; ti) - P(θi) · (aR(ti) + bR(ti - τ) + c)‖2 with the projection operator P(θi). The model parameters were first initialized based on acquired kV arc images prior to MV beam delivery. This method was implemented on a trilogy linear accelerator consisting of an OBI x-ray imager (operating at 1 Hz) and real-time position monitoring (RPM) system (30 Hz). Arc and static field plans were delivered to a moving phantom programmed with measured lung tumour motion from ten patients. During delivery, the localization method determined the target position and the beam was adjusted in real time via dynamic multileaf collimator (DMLC) adaptation. The beam-target alignment error was quantified by segmenting the beam aperture and a phantom-embedded fiducial marker on MV images and analysing their relative position. With the localization method, the root-mean-squared errors of the ten lung tumour traces ranged from 0.7-1.3 mm and 0.8-1.4 mm during the single arc and five-field static beam delivery, respectively. Without the localization method, these errors ranged from 3.1-7.3 mm. 
In summary, a general method for real-time target localization using kV imaging and respiratory monitoring has been experimentally investigated for arc and static field delivery. The average beam-target error was 1 mm.
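
The per-dimension parameter fit described above reduces, for a fixed lag, to ordinary linear least squares. A minimal sketch in Python (the lag in samples stands in for τ, and the projection operator P(θi) is omitted for clarity; the signal and parameter values are illustrative, not the paper's):

```python
import numpy as np

def fit_state_augmented_model(R, T, lag):
    """Least-squares estimate of (a, b, c) in T(t) = a*R(t) + b*R(t - tau) + c,
    given respiratory signal R, internal motion T, and a fixed lag in samples."""
    R_lag = np.roll(R, lag)         # delayed respiratory signal R(t - tau)
    valid = np.arange(lag, len(R))  # drop samples without a valid lagged value
    A = np.column_stack([R[valid], R_lag[valid], np.ones(valid.size)])
    coeffs, *_ = np.linalg.lstsq(A, T[valid], rcond=None)
    return coeffs  # (a, b, c)

# Synthetic example: a breathing-like surrogate and a target that follows it.
t = np.linspace(0, 30, 1500)              # 30 s sampled at 50 Hz
R = np.sin(2 * np.pi * t / 4.0) ** 4      # quasi-periodic respiratory surrogate
lag = 10                                  # 0.2 s phase delay, in samples
true_a, true_b, true_c = 8.0, 3.0, -2.0   # mm-scale motion parameters
T = true_a * R + true_b * np.roll(R, lag) + true_c
a, b, c = fit_state_augmented_model(R, T, lag)
```

In the noiseless synthetic case the fit recovers the generating parameters exactly; with real kV data the residual of this least-squares problem is what the paper's fitting error quantifies.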

  8. Experimental investigation of a general real-time 3D target localization method using sequential kV imaging combined with respiratory monitoring

    NASA Astrophysics Data System (ADS)

    Cho, Byungchul; Poulsen, Per; Ruan, Dan; Sawant, Amit; Keall, Paul J.

    2012-11-01

The goal of this work was to experimentally quantify the geometric accuracy of a novel real-time 3D target localization method using sequential kV imaging combined with respiratory monitoring for clinically realistic arc and static field treatment delivery and target motion conditions. A general method for real-time target localization using kV imaging and respiratory monitoring was developed. Each dimension of internal target motion T(x, y, z; t) was estimated from the external respiratory signal R(t) through the correlation between R(ti) and the projected marker positions p(xp, yp; ti) on kV images by a state-augmented linear model: T(x, y, z; t) = aR(t) + bR(t - τ) + c. The model parameters, a, b, c, were determined by minimizing the squared fitting error ∑‖p(xp, yp; ti) - P(θi) · (aR(ti) + bR(ti - τ) + c)‖2 with the projection operator P(θi). The model parameters were first initialized based on acquired kV arc images prior to MV beam delivery. This method was implemented on a Trilogy linear accelerator consisting of an OBI x-ray imager (operating at 1 Hz) and real-time position monitoring (RPM) system (30 Hz). Arc and static field plans were delivered to a moving phantom programmed with measured lung tumour motion from ten patients. During delivery, the localization method determined the target position and the beam was adjusted in real time via dynamic multileaf collimator (DMLC) adaptation. The beam-target alignment error was quantified by segmenting the beam aperture and a phantom-embedded fiducial marker on MV images and analysing their relative position. With the localization method, the root-mean-squared errors of the ten lung tumour traces ranged from 0.7-1.3 mm and 0.8-1.4 mm during the single arc and five-field static beam delivery, respectively. Without the localization method, these errors ranged from 3.1-7.3 mm. 
In summary, a general method for real-time target localization using kV imaging and respiratory monitoring has been experimentally investigated for arc and static field delivery. The average beam-target error was 1 mm.

  9. Psychometric properties of the Positive and Negative Affect Schedule (PANAS) in a heterogeneous sample of substance users.

    PubMed

    Serafini, Kelly; Malin-Mayor, Bo; Nich, Charla; Hunkele, Karen; Carroll, Kathleen M

    2016-03-01

    The Positive and Negative Affect Schedule (PANAS) is a widely used measure of affect. A comprehensive psychometric evaluation among substance users, however, has not been published. The objective of this study was to examine the psychometric properties of the PANAS in a sample of outpatient treatment substance users. We used pooled data from four randomized clinical trials (N = 416; 34% female, 48% African American). A confirmatory factor analysis indicated adequate support for a two-factor correlated model comprised of Positive Affect and Negative Affect with correlated item errors (Comparative Fit Index = 0.93, Root Mean Square Error of Approximation = 0.07, χ2 = 478.93, df = 156). Cronbach's α indicated excellent internal consistency for both factors (0.90 and 0.91, respectively). The PANAS factors had good convergence and discriminability (Composite Reliability > 0.7; Maximum Shared Variance < Average Variance Extracted). A comparison from baseline to Week 1 indicated acceptable test-retest reliability (Positive Affect = 0.80, Negative Affect = 0.76). Concurrent and discriminant validity were demonstrated with correlations with the Brief Symptom Inventory and Addiction Severity Index. The PANAS scores were also significantly correlated with treatment outcomes (e.g. Positive Affect was associated with the maximum days of consecutive abstinence from primary substance of abuse, r = 0.16, p = 0.001). Our data suggest that the psychometric properties of the PANAS are retained in substance using populations. Although several studies have focused on the role of Negative Affect, our findings suggest that Positive Affect may also be an important factor in substance use treatment outcomes.
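
The Cronbach's α reported above is computed from the item variances and the variance of the summed scale, α = k/(k−1) · (1 − Σσᵢ² / σ_total²). A minimal sketch on synthetic data (the item count, factor strength, and sample size are illustrative, not the study's):

```python
import numpy as np

def cronbach_alpha(items):
    """Internal-consistency estimate for an (n_subjects, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Synthetic scale: 10 items driven by one common factor plus item noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))                  # common affect factor
items = latent + 0.5 * rng.normal(size=(200, 10))   # correlated items
alpha = cronbach_alpha(items)
```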

  10. Limb position sense, proprioceptive drift and muscle thixotropy at the human elbow joint

    PubMed Central

    Tsay, A; Savage, G; Allen, T J; Proske, U

    2014-01-01

    These experiments on the human forearm are based on the hypothesis that drift in the perceived position of a limb over time can be explained by receptor adaptation. Limb position sense was measured in 39 blindfolded subjects using a forearm-matching task. A property of muscle, its thixotropy, a contraction history-dependent passive stiffness, was exploited to place muscle receptors of elbow muscles in a defined state. After the arm had been held flexed and elbow flexors contracted, we observed time-dependent changes in the perceived position of the reference arm by an average of 2.8° in the direction of elbow flexion over 30 s (Experiment 1). The direction of the drift reversed after the arm had been extended and elbow extensors contracted, with a mean shift of 3.5° over 30 s in the direction of elbow extension (Experiment 2). The time-dependent changes could be abolished by conditioning elbow flexors and extensors in the reference arm at the test angle, although this led to large position errors during matching (±10°), depending on how the indicator arm had been conditioned (Experiments 3 and 4). When slack was introduced in the elbow muscles of both arms, by shortening muscles after the conditioning contraction, matching errors became small and there was no drift in position sense (Experiments 5 and 6). These experiments argue for a receptor-based mechanism for proprioceptive drift and suggest that to align the two forearms, the brain monitors the difference between the afferent signals from the two arms. PMID:24665096

  11. Psychometric properties of the positive and negative affect schedule (PANAS) in a heterogeneous sample of substance users

    PubMed Central

    Serafini, Kelly; Malin-Mayor, Bo; Nich, Charla; Hunkele, Karen; Carroll, Kathleen M.

    2016-01-01

    Background The Positive and Negative Affect Schedule (PANAS) is a widely used measure of affect, and a comprehensive psychometric evaluation has never been conducted among substance users. Objective To examine the psychometric properties of the PANAS in a sample of outpatient treatment substance users. Methods We used pooled data from four randomized clinical trials (N = 416; 34% female, 48% African American). Results A confirmatory factor analysis indicated adequate support for a two-factor correlated model comprised of Positive Affect and Negative Affect with correlated item errors (Comparative Fit Index = .93, Root Mean Square Error of Approximation = .07, χ2 = 478.93, df = 156). Cronbach’s α indicated excellent internal consistency for both factors (.90 and .91, respectively). The PANAS factors had good convergence and discriminability (Composite Reliability >.7; Maximum Shared Variance < Average Variance Extracted). A comparison from baseline to Week 1 indicated acceptable test-retest reliability (Positive Affect = .80, Negative Affect = .76). Concurrent and discriminant validity were demonstrated with correlations with the Brief Symptom Inventory and Addiction Severity Index. The PANAS scores were also significantly correlated with treatment outcomes (e.g., Positive Affect was associated with the maximum days of consecutive abstinence from primary substance of abuse, r = .16, p = .001). Conclusion Our data suggest that the psychometric properties of the PANAS are retained in substance using populations. Although several studies have focused on the role of Negative Affect, our findings suggest that Positive Affect may also be an important factor in substance use treatment outcomes. PMID:26905228

  12. An Indoor Positioning Technique Based on a Feed-Forward Artificial Neural Network Using Levenberg-Marquardt Learning Method

    NASA Astrophysics Data System (ADS)

    Pahlavani, P.; Gholami, A.; Azimi, S.

    2017-09-01

    This paper presents an indoor positioning technique based on a multi-layer feed-forward (MLFF) artificial neural network (ANN). Most indoor received signal strength (RSS)-based WLAN positioning systems use the fingerprinting technique, which can be divided into two phases: the offline (calibration) phase and the online (estimation) phase. In this paper, RSSs were collected at all reference points in four directions and two periods of time (morning and evening). Hence, RSS readings were sampled at a regular time interval and specific orientation at each reference point. The proposed ANN-based model used the Levenberg-Marquardt algorithm for learning and fitting the network to the training data. The RSS readings at all reference points, together with the known positions of these reference points, were used for the training phase of the proposed MLFF neural network. Eventually, the average positioning error for this network, using 30% check and validation data, was approximately 2.20 m.
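
The offline/online fingerprinting structure described above can be illustrated with a deliberately simplified stand-in for the LM-trained MLFF network: a weighted k-nearest-neighbour estimator over a synthetic fingerprint database. The path-loss model, access-point layout, and noise level below are assumptions for illustration, not the paper's setup:

```python
import numpy as np

def knn_position(rss_query, rss_db, positions, k=3):
    """Estimate position from a query RSS vector using the k reference points
    with the most similar fingerprints (inverse-distance weighted average)."""
    d = np.linalg.norm(rss_db - rss_query, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-9)
    return (w[:, None] * positions[idx]).sum(axis=0) / w.sum()

rng = np.random.default_rng(1)
aps = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.]])  # access points (m)
grid = np.array([[x, y] for x in range(11) for y in range(11)], dtype=float)

def rss(p):  # log-distance path-loss model (illustrative)
    return -20.0 * np.log10(np.linalg.norm(aps - p, axis=1) + 1.0)

# Offline phase: fingerprint database over the reference grid.
rss_db = np.array([rss(p) for p in grid])

# Online phase: noisy readings at random test positions.
tests = rng.uniform(0, 10, size=(50, 2))
errors = [np.linalg.norm(knn_position(rss(p) + rng.normal(0, 1, 4), rss_db, grid) - p)
          for p in tests]
avg_error = float(np.mean(errors))  # metres, analogous to the paper's 2.20 m figure
```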

  13. An error analysis perspective for patient alignment systems.

    PubMed

    Figl, Michael; Kaar, Marcus; Hoffman, Rainer; Kratochwil, Alfred; Hummel, Johann

    2013-09-01

    This paper analyses the effects of error sources which can be found in patient alignment systems. As an example, an ultrasound (US) repositioning system and its transformation chain are assessed. The findings of this concept can also be applied to any navigation system. In a first step, all error sources were identified and where applicable, corresponding target registration errors were computed. By applying error propagation calculations on these commonly used registration/calibration and tracking errors, we were able to analyse the components of the overall error. Furthermore, we defined a special situation where the whole registration chain reduces to the error caused by the tracking system. Additionally, we used a phantom to evaluate the errors arising from the image-to-image registration procedure, depending on the image metric used. We have also discussed how this analysis can be applied to other positioning systems such as Cone Beam CT-based systems or Brainlab's ExacTrac. The estimates found by our error propagation analysis are in good agreement with the numbers found in the phantom study but significantly smaller than results from patient evaluations. We probably underestimated human influences such as the US scan head positioning by the operator and tissue deformation. Rotational errors of the tracking system can multiply these errors, depending on the relative position of tracker and probe. We were able to analyse the components of the overall error of a typical patient positioning system. We consider this to be a contribution to the optimization of the positioning accuracy for computer guidance systems.
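
For independent, zero-mean error sources, the kind of error-propagation budget described above combines the contributions in quadrature (root-sum-of-squares). A minimal sketch with illustrative magnitudes, not the paper's numbers:

```python
import numpy as np

def combined_error(sources):
    """Root-sum-of-squares combination of independent 1-sigma errors (mm)."""
    return float(np.sqrt(sum(s ** 2 for s in sources.values())))

# Hypothetical registration/tracking chain for a US repositioning system.
chain = {
    "tracker_jitter_mm": 0.25,        # optical tracking noise
    "probe_calibration_mm": 0.40,     # US probe calibration residual
    "image_registration_mm": 0.50,    # image-to-image registration error
    "patient_registration_mm": 0.60,  # patient-to-image registration error
}
total = combined_error(chain)
```

Note this treats the sources as independent and translational only; as the abstract points out, rotational tracking errors act through a lever arm and can multiply the translational terms.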

  14. Average capacity optimization in free-space optical communication system over atmospheric turbulence channels with pointing errors.

    PubMed

    Liu, Chao; Yao, Yong; Sun, Yun Xu; Xiao, Jun Jun; Zhao, Xin Hui

    2010-10-01

    A model is proposed to study the average capacity optimization in free-space optical (FSO) channels, accounting for effects of atmospheric turbulence and pointing errors. For a given transmitter laser power, it is shown that both transmitter beam divergence angle and beam waist can be tuned to maximize the average capacity. Meanwhile, their optimum values strongly depend on the jitter and operation wavelength. These results can be helpful for designing FSO communication systems.

  15. Error reduction study employing a pseudo-random binary sequence for use in acoustic pyrometry of gases

    NASA Astrophysics Data System (ADS)

    Ewan, B. C. R.; Ireland, S. N.

    2000-12-01

    Acoustic pyrometry uses the temperature dependence of sound speed in materials to measure temperature. This is normally achieved by measuring the transit time for a sound signal over a known path length and applying the material relation between temperature and velocity to extract an "average" temperature. Sources of error associated with the measurement of mean transit time are discussed in implementing the technique in gases, one of the principal causes being background noise in typical industrial environments. A number of transmitted signal and processing strategies which can be used in the area are examined and the expected error in mean transit time associated with each technique is quantified. Transmitted signals included pulses, pure frequencies, chirps, and pseudorandom binary sequences (PRBS), while processing involved edge detection and correlation. Errors arise through the misinterpretation of the positions of edge arrival or correlation peaks due to instantaneous deviations associated with background noise and these become more severe as signal to noise amplitude ratios decrease. Population errors in the mean transit time are estimated for the different measurement strategies and it is concluded that PRBS combined with correlation can provide the lowest errors when operating in high noise environments. The operation of an instrument based on PRBS transmitted signals is described and test results under controlled noise conditions are presented. These confirm the value of the strategy and demonstrate that measurements can be made with signal to noise amplitude ratios down to 0.5.
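
The PRBS-plus-correlation strategy can be sketched as follows: a maximal-length sequence is transmitted, and the transit time is recovered as the lag of the cross-correlation peak, which survives even a signal to noise amplitude ratio of 0.5. The LFSR length, delay, and noise level are illustrative:

```python
import numpy as np

def prbs(n_bits=10):
    """Maximal-length sequence from a Fibonacci LFSR (taps for x^10 + x^7 + 1),
    mapped to a bipolar +/-1 signal. Period is 2**n_bits - 1 = 1023 samples."""
    state = [1] * n_bits
    out = []
    for _ in range(2 ** n_bits - 1):
        fb = state[9] ^ state[6]   # feedback from taps 10 and 7
        out.append(state.pop())    # shift out the last bit
        state.insert(0, fb)
    return np.array(out) * 2.0 - 1.0

rng = np.random.default_rng(2)
tx = prbs()
delay = 137                                # true transit time, in samples
rx = np.zeros(tx.size + 400)
rx[delay:delay + tx.size] += tx            # received, delayed copy
rx += rng.normal(0, 2.0, rx.size)          # amplitude SNR of 0.5

corr = np.correlate(rx, tx, mode="valid")  # slide template over received signal
est_delay = int(np.argmax(corr))
```

The sharp, nearly two-valued autocorrelation of the m-sequence is what makes the peak stand out against the correlated noise floor.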

  16. Realtime mitigation of GPS SA errors using Loran-C

    NASA Technical Reports Server (NTRS)

    Braasch, Soo Y.

    1994-01-01

    The hybrid use of Loran-C with the Global Positioning System (GPS) was shown capable of providing a sole means of en route air radionavigation. By allowing pilots to fly direct to their destinations, use of this system results in significant time savings and therefore fuel savings as well. However, a major error source limiting the accuracy of GPS is the intentional degradation of the GPS signal known as Selective Availability (SA). SA-induced position errors are highly correlated and far exceed all other error sources (horizontal position error: 100 meters, 95 percent). Realtime mitigation of SA errors from the position solution is highly desirable. How that can be achieved is discussed. The stability of Loran-C signals is exploited to reduce SA errors. The theory behind this technique is discussed and results using bench and flight data are given.

  17. Correlation between dry eye and refractive error in Saudi young adults using noninvasive Keratograph 4

    PubMed Central

    Fahmy, Rania M; Aldarwesh, Amal

    2018-01-01

    Purpose: To study the correlation between dry eye and refractive errors in young adults using noninvasive Keratograph. Methods: In this cross-sectional study, a total of 126 participants aged 19–25 years, who were free of ocular surface disease, were recruited from King Saud University Campus. Refraction was defined by the spherical equivalent (SE) as follows: 49 emmetropic eyes (±0.50 SE), 48 myopic eyes (≤−0.75 SE and above), and 31 hyperopic eyes (>+0.75 SE). All participants underwent full ophthalmic examinations assessing their refractive status and dryness level, including noninvasive breakup time (NIBUT) and tear meniscus height, using Keratograph 4. Results: The prevalence of dry eye was 24.6%, 36.5%, and 17.4% in emmetropes, myopes, and hypermetropes, respectively. NIBUT had a negative correlation with hyperopia and a positive correlation with myopia, with a significant reduction in the average NIBUT in myopes and hypermetropes in comparison to emmetropes. Conclusion: The current results demonstrate a correlation between refractive errors and dryness level. PMID:29676308

  18. Optics measurement algorithms and error analysis for the proton energy frontier

    NASA Astrophysics Data System (ADS)

    Langner, A.; Tomás, R.

    2015-03-01

    Optics measurement algorithms have been improved in preparation for the commissioning of the LHC at higher energy, i.e., with an increased damage potential. Due to machine protection considerations the higher energy sets tighter limits on the maximum excitation amplitude and the total beam charge, reducing the signal to noise ratio of optics measurements. Furthermore, the precision in 2012 (4 TeV) was insufficient to understand beam size measurements and determine interaction point (IP) β-functions (β*). A new, more sophisticated algorithm has been developed which takes into account both the statistical and systematic errors involved in this measurement. This makes it possible to combine more beam position monitor measurements for deriving the optical parameters, and is demonstrated to significantly improve the accuracy and precision. Measurements from the 2012 run have been reanalyzed; owing to the improved algorithms, the derived optical parameters have significantly higher precision, with average error bars decreased by a factor of three to four. This allowed the calculation of β* values and proved fundamental to understanding emittance evolution during the energy ramp.
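
The gain from combining more beam position monitor measurements can be illustrated with inverse-variance weighting: for N comparable measurements the combined error bar shrinks roughly as 1/√N. The values below are illustrative, not LHC data:

```python
import numpy as np

def weighted_mean(values, sigmas):
    """Inverse-variance weighted combination of independent measurements.
    Returns the combined estimate and its 1-sigma error bar."""
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    mean = np.sum(w * np.asarray(values, dtype=float)) / np.sum(w)
    err = 1.0 / np.sqrt(np.sum(w))
    return float(mean), float(err)

# Four hypothetical estimates of the same beta-function (m), each +/- 0.8 m.
vals = [100.2, 99.8, 100.5, 99.9]
sigs = [0.8, 0.8, 0.8, 0.8]
mean, err = weighted_mean(vals, sigs)   # combined error bar: 0.8 / sqrt(4) = 0.4
```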

  19. Lunar crescent visibility

    NASA Technical Reports Server (NTRS)

    Doggett, Leroy E.; Schaefer, Bradley E.

    1994-01-01

    We report the results of five Moonwatches, in which more than 2000 observers throughout North America attempted to sight the thin lunar crescent. For each Moonwatch we were able to determine the position of the Lunar Date Line (LDL), the line along which a normal observer has a 50% probability of spotting the Moon. The observational LDLs were then compared with predicted LDLs derived from crescent visibility prediction algorithms. We find that ancient and medieval rules are highly unreliable. More recent empirical criteria, based on the relative altitude and azimuth of the Moon at the time of sunset, have a reasonable accuracy, with the best specific formulation being due to Yallop. The modern theoretical model by Schaefer (based on the physiology of the human eye and the local observing conditions) is found to have the least systematic error, the least average error, and the least maximum error of all models tested. Analysis of the observations also provided information about atmospheric, optical and human factors that affect the observations. We show that observational lunar calendars have a natural bias to begin early.

  20. Five-way smoking status classification using text hot-spot identification and error-correcting output codes.

    PubMed

    Cohen, Aaron M

    2008-01-01

    We participated in the i2b2 smoking status classification challenge task. The purpose of this task was to evaluate the ability of systems to automatically identify patient smoking status from discharge summaries. Our submission included several techniques that we compared and studied, including hot-spot identification, zero-vector filtering, inverse class frequency weighting, error-correcting output codes, and post-processing rules. We evaluated our approaches using the same methods as the i2b2 task organizers, using micro- and macro-averaged F1 as the primary performance metric. Our best performing system achieved a micro-F1 of 0.9000 on the test collection, equivalent to the best performing system submitted to the i2b2 challenge. Hot-spot identification, zero-vector filtering, classifier weighting, and error correcting output coding contributed additively to increased performance, with hot-spot identification having by far the largest positive effect. High performance on automatic identification of patient smoking status from discharge summaries is achievable with the efficient and straightforward machine learning techniques studied here.
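
The error-correcting output codes step can be sketched in isolation: each class receives a binary codeword, and a vector of (possibly erroneous) binary classifier outputs is decoded to the nearest codeword in Hamming distance, so a single misfiring classifier need not change the predicted class. The code matrix and class labels below are illustrative, not those used in the submission:

```python
import numpy as np

# Five smoking-status classes with 7-bit codewords (minimum pairwise
# Hamming distance 4, so any single bit error is corrected).
classes = ["current", "past", "smoker", "non-smoker", "unknown"]
code_matrix = np.array([
    [0, 0, 0, 0, 0, 0, 0],
    [0, 1, 1, 0, 1, 1, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [1, 1, 0, 1, 1, 0, 0],
    [1, 1, 1, 0, 0, 0, 1],
])

def ecoc_decode(bits):
    """Return the class whose codeword is nearest in Hamming distance
    to the vector of per-bit classifier outputs."""
    dists = np.abs(code_matrix - np.asarray(bits)).sum(axis=1)
    return classes[int(np.argmin(dists))]

# One flipped bit in the codeword for "past" is still decoded correctly.
pred = ecoc_decode([0, 1, 1, 0, 1, 0, 0])
```

The per-bit classifiers themselves (one binary learner per code column) are omitted; only the decoding that gives ECOC its error-correcting behaviour is shown.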

  1. Definition of an Enhanced Map-Matching Algorithm for Urban Environments with Poor GNSS Signal Quality.

    PubMed

    Jiménez, Felipe; Monzón, Sergio; Naranjo, Jose Eugenio

    2016-02-04

    Vehicle positioning is a key factor for numerous information and assistance applications that are included in vehicles and for which satellite positioning is mainly used. However, this positioning process can result in errors and lead to measurement uncertainties. These errors come mainly from two sources: errors and simplifications of digital maps and errors in locating the vehicle. From that inaccurate data, the task of assigning the vehicle's location to a link on the digital map at every instant is carried out by map-matching algorithms. These algorithms have been developed to fulfil that need and attempt to amend these errors to offer the user a suitable positioning. In this research, an algorithm is developed that attempts to solve the errors in positioning when the Global Navigation Satellite System (GNSS) signal reception is frequently lost. The algorithm has been tested with satisfactory results in a complex urban environment of narrow streets and tall buildings where errors and signal reception losses of the GPS receiver are frequent.
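
The geometric core of map-matching, assigning a fix to the nearest candidate link, can be sketched as a point-to-segment projection. Enhanced algorithms such as the one above additionally use heading, road connectivity, and trajectory history; the coordinates below are illustrative:

```python
import numpy as np

def snap_to_link(p, links):
    """Project point p onto each candidate link (a, b) and keep the nearest.
    Returns (best_link_index, snapped_point)."""
    p = np.asarray(p, dtype=float)
    best_i, best_q, best_d = -1, None, np.inf
    for i, (a, b) in enumerate(links):
        a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
        ab = b - a
        # Clamp the projection parameter so the result stays on the segment.
        t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        q = a + t * ab
        d = np.linalg.norm(p - q)
        if d < best_d:
            best_i, best_q, best_d = i, q, d
    return best_i, best_q

links = [((0., 0.), (100., 0.)),    # east-west street
         ((50., -5.), (50., 80.))]  # north-south street
idx, q = snap_to_link((40., 3.), links)  # noisy GNSS fix near the first street
```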

  2. Definition of an Enhanced Map-Matching Algorithm for Urban Environments with Poor GNSS Signal Quality

    PubMed Central

    Jiménez, Felipe; Monzón, Sergio; Naranjo, Jose Eugenio

    2016-01-01

    Vehicle positioning is a key factor for numerous information and assistance applications that are included in vehicles and for which satellite positioning is mainly used. However, this positioning process can result in errors and lead to measurement uncertainties. These errors come mainly from two sources: errors and simplifications of digital maps and errors in locating the vehicle. From that inaccurate data, the task of assigning the vehicle’s location to a link on the digital map at every instant is carried out by map-matching algorithms. These algorithms have been developed to fulfil that need and attempt to amend these errors to offer the user a suitable positioning. In this research, an algorithm is developed that attempts to solve the errors in positioning when the Global Navigation Satellite System (GNSS) signal reception is frequently lost. The algorithm has been tested with satisfactory results in a complex urban environment of narrow streets and tall buildings where errors and signal reception losses of the GPS receiver are frequent. PMID:26861320

  3. Calculating tumor trajectory and dose-of-the-day using cone-beam CT projections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Bernard L., E-mail: bernard.jones@ucdenver.edu; Westerly, David; Miften, Moyed

    2015-02-15

    Purpose: Cone-beam CT (CBCT) projection images provide anatomical data in real-time over several respiratory cycles, forming a comprehensive picture of tumor movement. The authors developed and validated a method which uses these projections to determine the trajectory of and dose to highly mobile tumors during each fraction of treatment. Methods: CBCT images of a respiration phantom were acquired, the trajectory of which mimicked a lung tumor with high amplitude (up to 2.5 cm) and hysteresis. A template-matching algorithm was used to identify the location of a steel BB in each CBCT projection, and a Gaussian probability density function for the absolute BB position was calculated which best fit the observed trajectory of the BB in the imager geometry. Two modifications of the trajectory reconstruction were investigated: first, using respiratory phase information to refine the trajectory estimation (Phase), and second, using the Monte Carlo (MC) method to sample the estimated Gaussian tumor position distribution. The accuracies of the proposed methods were evaluated by comparing the known and calculated BB trajectories in phantom-simulated clinical scenarios using abdominal tumor volumes. Results: With all methods, the mean position of the BB was determined with accuracy better than 0.1 mm, and root-mean-square trajectory errors averaged 3.8% ± 1.1% of the marker amplitude. Dosimetric calculations using Phase methods were more accurate, with mean absolute error less than 0.5%, and with error less than 1% in the highest-noise trajectory. MC-based trajectories prevent the overestimation of dose, but when viewed in an absolute sense, add a small amount of dosimetric error (<0.1%). Conclusions: Marker trajectory and target dose-of-the-day were accurately calculated using CBCT projections. This technique provides a method to evaluate highly mobile tumors using ordinary CBCT data, and could facilitate better strategies to mitigate or compensate for motion during stereotactic body radiotherapy.
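
The template-matching step can be sketched with normalized cross-correlation: a small marker template is slid over the projection image and the correlation peak is taken as the marker location. The image size, template shape, and noise level below are illustrative, and the brute-force search is for clarity, not speed:

```python
import numpy as np

def match_template(image, template):
    """Return the top-left (row, col) of the window with the highest
    normalized cross-correlation against the template."""
    th, tw = template.shape
    t = template - template.mean()
    best, best_score = (0, 0), -np.inf
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum() * (t ** 2).sum())
            score = (wz * t).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best = score, (r, c)
    return best

rng = np.random.default_rng(3)
template = np.zeros((5, 5))
template[1:4, 1:4] = 1.0                    # bright BB-like blob
image = rng.normal(0, 0.2, (40, 40))        # noisy projection background
image[20:25, 12:17] += template             # embed the marker at (20, 12)
row, col = match_template(image, template)
```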

  4. Effect of Body Mass Index on Magnitude of Setup Errors in Patients Treated With Adjuvant Radiotherapy for Endometrial Cancer With Daily Image Guidance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Lilie L., E-mail: lin@uphs.upenn.edu; Hertan, Lauren; Rengan, Ramesh

    2012-06-01

    Purpose: To determine the impact of body mass index (BMI) on daily setup variations and frequency of imaging necessary for patients with endometrial cancer treated with adjuvant intensity-modulated radiotherapy (IMRT) with daily image guidance. Methods and Materials: The daily shifts from a total of 782 orthogonal kilovoltage images from 30 patients who received pelvic IMRT between July 2008 and August 2010 were analyzed. The BMI, mean daily shifts, and random and systematic errors in each translational and rotational direction were calculated for each patient. Margin recipes were generated based on BMI. Linear regression and Spearman rank correlation analysis were performed. To simulate a less-than-daily IGRT protocol, the average shift of the first five fractions was applied to subsequent setups without IGRT for assessing the impact on setup error and margin requirements. Results: Median BMI was 32.9 (range, 23-62). Of the 30 patients, 16.7% (n = 5) were normal weight (BMI <25); 23.3% (n = 7) were overweight (BMI ≥25 to <30); 26.7% (n = 8) were mildly obese (BMI ≥30 to <35); and 33.3% (n = 10) were moderately to severely obese (BMI ≥35). On linear regression, mean absolute vertical, longitudinal, and lateral shifts positively correlated with BMI (p = 0.0127, p = 0.0037, and p < 0.0001, respectively). Systematic errors in the longitudinal and vertical direction were found to be positively correlated with BMI category (p < 0.0001 for both). IGRT for the first five fractions, followed by correction of the mean error for all subsequent fractions, led to a substantial reduction in setup error and resultant margin requirement overall compared with no IGRT. Conclusions: Daily shifts, systematic errors, and margin requirements were greatest in obese patients. For women who are normal weight or overweight, a planning target margin of 7 to 10 mm may be sufficient without IGRT, but for patients who are moderately or severely obese, this is insufficient.
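
Margin recipes of the kind generated above are commonly built from the population systematic error Σ and random error σ; a widely used formulation is the van Herk recipe, margin = 2.5Σ + 0.7σ per axis. The recipe choice and the input values below are assumptions for illustration, not the paper's numbers:

```python
def ptv_margin(systematic_mm, random_mm):
    """van Herk population margin recipe, per axis, in mm:
    2.5 * Sigma (systematic) + 0.7 * sigma (random)."""
    return 2.5 * systematic_mm + 0.7 * random_mm

# Hypothetical setup-error magnitudes for two BMI groups.
margin_normal_bmi = ptv_margin(2.0, 3.0)   # smaller setup errors
margin_obese = ptv_margin(4.0, 5.0)        # larger setup errors
```

With these illustrative inputs the lower-BMI group lands at 7.1 mm, consistent with the 7 to 10 mm range quoted in the conclusions, while the obese group requires 13.5 mm.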

  5. Spine detection in CT and MR using iterated marginal space learning.

    PubMed

    Michael Kelm, B; Wels, Michael; Kevin Zhou, S; Seifert, Sascha; Suehling, Michael; Zheng, Yefeng; Comaniciu, Dorin

    2013-12-01

    Examinations of the spinal column with both Magnetic Resonance (MR) imaging and Computed Tomography (CT) often require a precise three-dimensional positioning, angulation and labeling of the spinal disks and the vertebrae. A fully automatic and robust approach is a prerequisite for an automated scan alignment as well as for the segmentation and analysis of spinal disks and vertebral bodies in Computer Aided Diagnosis (CAD) applications. In this article, we present a novel method that combines Marginal Space Learning (MSL), a recently introduced concept for efficient discriminative object detection, with a generative anatomical network that incorporates relative pose information for the detection of multiple objects. It is used to simultaneously detect and label the spinal disks. While a novel iterative version of MSL is used to quickly generate candidate detections comprising position, orientation, and scale of the disks with high sensitivity, the anatomical network selects the most likely candidates using a learned prior on the individual nine-dimensional transformation spaces. Finally, we propose an optional case-adaptive segmentation approach that allows segmentation of the spinal disks and vertebrae in MR and CT, respectively. Since the proposed approaches are learning-based, they can be trained for MR or CT alike. Experimental results based on 42 MR and 30 CT volumes show that our system not only achieves superior accuracy but also is among the fastest systems of its kind in the literature. On the MR data set the spinal disks of a whole spine are detected in 11.5 s on average with 98.6% sensitivity and 0.073 false positive detections per volume. On the CT data a comparable sensitivity of 98.0% with 0.267 false positives is achieved. Detected disks are localized with an average position error of 2.4 mm/3.2 mm and angular error of 3.9°/4.5° in MR/CT, which is close to the employed hypothesis resolution of 2.1 mm and 3.3°.

  6. A simultaneously calibration approach for installation and attitude errors of an INS/GPS/LDS target tracker.

    PubMed

    Cheng, Jianhua; Chen, Daidai; Sun, Xiangyu; Wang, Tongda

    2015-02-04

    Obtaining the absolute position of a target is one of the basic problems in non-cooperative target tracking. In this paper, we present a simultaneous calibration method for an Inertial Navigation System (INS)/Global Positioning System (GPS)/Laser Distance Scanner (LDS) integrated system based target positioning approach. The INS/GPS integrated system provides the attitude and position of the observer, and the LDS offers the distance between the observer and the target. The two most significant errors are jointly considered and analyzed: (1) the attitude measurement error of the INS/GPS; (2) the installation error between the INS/GPS and LDS subsystems. Consequently, an INS/GPS/LDS-based target positioning approach considering these two errors is proposed. In order to improve the performance of this approach, a novel calibration method is designed to simultaneously estimate and compensate for these two main errors. Finally, simulations are conducted to assess the performance of the proposed target positioning approach and the designed simultaneous calibration method.
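
The geometric core of such target positioning, before any calibration, is the observer position plus the LDS range vector rotated from the body frame into the navigation frame by the INS/GPS attitude. A minimal sketch with a yaw-only attitude and the installation offset ignored (exactly the terms the proposed calibration would estimate; all values are illustrative):

```python
import numpy as np

def rot_z(yaw):
    """Rotation about the z-axis (yaw-only attitude, radians)."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.],
                     [s,  c, 0.],
                     [0., 0., 1.]])

def target_position(observer_pos, yaw, lds_range,
                    boresight_body=np.array([1., 0., 0.])):
    """Target = observer position + attitude-rotated LDS range vector.
    boresight_body is the (assumed) LDS pointing direction in the body frame."""
    return observer_pos + rot_z(yaw) @ (lds_range * boresight_body)

# Observer at (100, 200, 10) m, heading 90 deg, target 50 m along the boresight.
obs = np.array([100., 200., 10.])
tgt = target_position(obs, np.deg2rad(90.0), 50.0)
```

Attitude error and the INS/GPS-to-LDS installation misalignment both enter through the rotation applied here, which is why the paper calibrates the two jointly.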

  7. Evaluation of the accuracy of the CyberKnife Synchrony™ Respiratory Tracking System using a plastic scintillator.

    PubMed

    Akino, Yuichi; Sumida, Iori; Shiomi, Hiroya; Higashinaka, Naokazu; Murashima, Yoshiichi; Hayashida, Miori; Mabuchi, Nobuhisa; Ogawa, Kazuhiko

    2018-06-01

    The Synchrony™ Respiratory Tracking System of the CyberKnife® Robotic Radiosurgery System (Accuray, Inc., Sunnyvale, CA) enables real-time tracking of moving targets such as lung and liver tumors during radiotherapy. Although film measurements have been used for quality assurance of the tracking system, they cannot evaluate the temporal tracking accuracy. We have developed a verification system using a plastic scintillator that can evaluate the temporal accuracy of the CyberKnife Synchrony. A phantom consisting of a U-shaped plastic frame with three fiducial markers was used. The phantom was moved on a plastic scintillator plate. To identify the phantom position on the recording video in darkness, four pieces of fluorescent tape representing the corners of a 10 cm × 10 cm square around an 8 cm × 8 cm window were attached to the phantom. For a stable respiration model, the phantom was moved with the fourth power of a sinusoidal wave with breathing cycles of 4, 3, and 2 s and an amplitude of 1 cm. To simulate irregular breathing, the respiratory cycle was varied with Gaussian random numbers. A virtual target was generated at the center of the fluorescent markers using the MultiPlan™ treatment planning system. Photon beams were irradiated using a fiducial tracking technique. In a dark room, the fluorescent light of the markers and the scintillation light of the beam position were recorded using a camera. For each video frame, a homography matrix was calculated from the four fluorescent marker positions, and the beam position derived from the scintillation light was corrected. To correct the displacement of the beam position due to oblique irradiation angles and other systematic measurement errors, offset values were derived from measurements with the phantom held stationary. The average SDs of beam position measured without phantom motion were 0.16 mm and 0.20 mm for lateral and longitudinal directions, respectively. 
For the stable respiration model, the tracking errors (mean ± SD) were 0.40 ± 0.64 mm, -0.07 ± 0.79 mm, and 0.45 ± 1.14 mm for breathing cycles of 4, 3, and 2 s, respectively. The tracking errors showed significant linear correlation with the phantom velocity, with correlation coefficients of 0.897, 0.913, and 0.957 for breathing cycles of 4, 3, and 2 s, respectively. The unstable respiration model also showed a linear correlation between tracking errors and phantom velocity. The probability of tracking error incidents increased with decreasing length of the respiratory cycle. Although tracking error incidents increased with larger variations in the respiratory cycle, the effect on the cumulative probability was insignificant. For a respiratory cycle of 4 s, the maximum tracking error was 1.10 mm and 1.43 mm at probabilities of 10% and 5%, respectively. Large tracking errors were observed when there was a phase shift between the tumor and the LED marker. This technique allows evaluation of the motion tracking accuracy of the Synchrony™ system over time by measurement of the photon beam. The velocity of the target and phase shift have significant effects on accuracy. This article is protected by copyright. All rights reserved.
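
The per-frame correction described above maps the camera image onto phantom coordinates via a homography fixed by the four fluorescent corners. A minimal sketch of that step, assuming the corners map onto the known 10 cm × 10 cm square (all pixel coordinates below are hypothetical):

```python
# Sketch: estimate the frame homography from the four fluorescent corners
# (pixels) to the known 10 cm x 10 cm square, then map the scintillation
# spot into phantom coordinates. Corner/spot values are illustrative only.

def solve_linear(A, b):
    """Gaussian elimination with partial pivoting (pure Python)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography(src, dst):
    """Direct linear transform: H (with h33 = 1) mapping 4 src points to dst."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve_linear(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def apply_h(H, p):
    x, y = p
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Corner pixels observed in one video frame (hypothetical), mapped to the
# 10 cm x 10 cm fluorescent-tape square in phantom coordinates (cm).
corners_px = [(102.0, 98.0), (488.0, 110.0), (475.0, 502.0), (95.0, 490.0)]
square_cm = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 10.0)]
H = homography(corners_px, square_cm)
beam_cm = apply_h(H, (290.0, 300.0))  # scintillation-spot centroid in pixels
```

Repeating this per frame gives the beam trajectory in phantom coordinates, from which the offsets against the programmed motion are the tracking errors.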

  8. Stresses and elastic constants of crystalline sodium, from molecular dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schiferl, S.K.

    1985-02-01

    The stresses and the elastic constants of bcc sodium are calculated by molecular dynamics (MD) for temperatures up to T = 340 K. The total adiabatic potential of a system of sodium atoms is represented by a pseudopotential model. The resulting expression has two terms: a large, strictly volume-dependent potential, plus a sum over ion pairs of a small, volume-dependent two-body potential. The stresses and the elastic constants are given as strain derivatives of the Helmholtz free energy. The resulting expressions involve canonical ensemble averages (and fluctuation averages) of the position and volume derivatives of the potential. An ensemble correction relates the results to MD equilibrium averages. Evaluation of the potential and its derivatives requires the calculation of integrals with infinite upper limits of integration and integrand singularities. Methods for calculating these integrals and estimating the effects of integration errors are developed. A method is given for choosing initial conditions that relax quickly to a desired equilibrium state. Statistical methods developed earlier for MD data are extended to evaluate uncertainties in fluctuation averages, and to test for symmetry. 45 refs., 10 figs., 4 tabs.
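
The "fluctuation averages" mentioned above are variance-type terms of the form ⟨A²⟩ − ⟨A⟩² accumulated over the trajectory. A minimal sketch of such an estimator, with hypothetical per-configuration values of a strain derivative standing in for real MD output:

```python
# Sketch: fluctuation-average estimator over MD samples. The sample values
# below are hypothetical stand-ins for a strain derivative dU/d(epsilon)
# evaluated at successive equilibrium configurations.
def mean(xs):
    return sum(xs) / len(xs)

def fluctuation(xs):
    """<A^2> - <A>^2, the fluctuation term entering an elastic constant."""
    m = mean(xs)
    return mean([(x - m) ** 2 for x in xs])

samples = [0.98, 1.02, 1.01, 0.99, 1.00]  # dU/d(epsilon), arbitrary units
avg = mean(samples)          # canonical-ensemble average term
fluc = fluctuation(samples)  # fluctuation term
```

In the actual calculation both terms are combined with the explicitly volume-dependent contributions and the ensemble correction noted in the abstract.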

  9. Estimation of Rainfall Sampling Uncertainty: A Comparison of Two Diverse Approaches

    NASA Technical Reports Server (NTRS)

    Steiner, Matthias; Zhang, Yu; Baeck, Mary Lynn; Wood, Eric F.; Smith, James A.; Bell, Thomas L.; Lau, William K. M. (Technical Monitor)

    2002-01-01

    The spatial and temporal intermittence of rainfall causes the averages of satellite observations of rain rate to differ from the "true" average rain rate over any given area and time period, even if the satellite observations are perfectly accurate. The difference between averages based on occasional satellite observations and the continuous-time average of rain rate is referred to as sampling error. In this study, rms sampling error estimates are obtained for average rain rates over boxes 100 km, 200 km, and 500 km on a side, for averaging periods of 1 day, 5 days, and 30 days. The study uses a multi-year, merged radar data product provided by Weather Services International Corp. at a resolution of 2 km in space and 15 min in time, over an area of the central U.S. extending from 35N to 45N in latitude and 100W to 80W in longitude. The intervals between satellite observations are assumed to be equal, and similar in size to what present and future satellite systems are able to provide (from 1 h to 12 h). The sampling error estimates are obtained using a resampling method called "resampling by shifts," and are compared to sampling error estimates proposed by Bell based on earlier work by Laughlin. The resampling estimates are found to scale with areal size and time period as the theory predicts. The dependence on average rain rate and time interval between observations is also similar to what the simple theory suggests.
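
The "resampling by shifts" idea can be sketched in a few lines (an assumed form of the method, with a hypothetical rain-rate record): the dense record gives the "true" time average, and sampling every k-th observation with each possible starting shift gives a population of satellite-like averages whose rms deviation from the truth estimates the sampling error.

```python
# Sketch of "resampling by shifts" (assumed form): compare k-decimated
# averages at every starting shift against the full-record average.
def rms_sampling_error(series, k):
    true_avg = sum(series) / len(series)
    errs = []
    for shift in range(k):
        sub = series[shift::k]
        errs.append(sum(sub) / len(sub) - true_avg)
    return (sum(e * e for e in errs) / len(errs)) ** 0.5

# Hypothetical 15-min rain rates (mm/h) over one day; a single 2-h rain
# event. Sampling every 4 h corresponds to k = 16 observations skipped.
rain = [0.0] * 40 + [2.0] * 8 + [0.0] * 48
err = rms_sampling_error(rain, 16)
```

Intermittent rain like this single short event is exactly the regime where the decimated averages scatter widely, so `err` is of the same order as the true mean itself.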

  10. Comparison of community and hospital pharmacists' attitudes and behaviors on medication error disclosure to the patient: A pilot study.

    PubMed

    Kim, ChungYun; Mazan, Jennifer L; Quiñones-Boex, Ana C

    To determine pharmacists' attitudes and behaviors on medication errors and their disclosure and to compare community and hospital pharmacists on such views. An online questionnaire was developed from previous studies on physicians' disclosure of errors. Questionnaire items included demographics, environment, personal experiences, and attitudes on medication errors and the disclosure process. An invitation to participate along with the link to the questionnaire was electronically distributed to members of two Illinois pharmacy associations. A follow-up reminder was sent 4 weeks after the original message. Data were collected for 3 months, and statistical analyses were performed with the use of IBM SPSS version 22.0. The overall response rate was 23.3% (n = 422). The average employed respondent was a 51-year-old white woman with a BS Pharmacy degree working in a hospital pharmacy as a clinical staff member. Regardless of practice settings, pharmacist respondents agreed that medication errors were inevitable and that a disclosure process is necessary. Respondents from community and hospital settings were further analyzed to assess any differences. Community pharmacist respondents were more likely to agree that medication errors were inevitable and that pharmacists should address the patient's emotions when disclosing an error. Community pharmacist respondents were also more likely to agree that the health care professional most closely involved with the error should disclose the error to the patient and thought that it was the pharmacists' responsibility to disclose the error. Hospital pharmacist respondents were more likely to agree that it was important to include all details in a disclosure process and more likely to disagree on putting a "positive spin" on the event. Regardless of practice setting, responding pharmacists generally agreed that errors should be disclosed to patients. 
There were, however, significant differences in their attitudes and behaviors depending on their particular practice setting. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  11. Innovations in Medication Preparation Safety and Wastage Reduction: Use of a Workflow Management System in a Pediatric Hospital.

    PubMed

    Davis, Stephen Jerome; Hurtado, Josephine; Nguyen, Rosemary; Huynh, Tran; Lindon, Ivan; Hudnall, Cedric; Bork, Sara

    2017-01-01

    Background: USP <797> regulatory requirements have mandated that pharmacies improve aseptic techniques and cleanliness of the medication preparation areas. In addition, the Institute for Safe Medication Practices (ISMP) recommends that technology and automation be used as much as possible for preparing and verifying compounded sterile products. Objective: To determine the benefits associated with the implementation of the workflow management system, such as reducing medication preparation and delivery errors, reducing quantity and frequency of medication errors, avoiding costs, and enhancing the organization's decision to move toward positive patient identification (PPID). Methods: At Texas Children's Hospital, data were collected and analyzed from January 2014 through August 2014 in the pharmacy areas in which the workflow management system would be implemented. Data were excluded for September 2014 during the workflow management system oral liquid implementation phase. Data were collected and analyzed from October 2014 through June 2015 to determine whether the implementation of the workflow management system reduced the quantity and frequency of reported medication errors. Data collected and analyzed during the study period included the quantity of doses prepared, number of incorrect medication scans, number of doses discontinued from the workflow management system queue, and the number of doses rejected. Data were collected and analyzed to identify patterns of incorrect medication scans, to determine reasons for rejected medication doses, and to determine the reduction in wasted medications. Results: During the 17-month study period, the pharmacy department dispensed 1,506,220 oral liquid and injectable medication doses. From October 2014 through June 2015, the pharmacy department dispensed 826,220 medication doses that were prepared and checked via the workflow management system. 
Of those 826,220 medication doses, there were 16 reported incorrect volume errors. The error rate after the implementation of the workflow management system averaged 8.4%, which was a 1.6% reduction. After the implementation of the workflow management system, the average number of reported oral liquid medication and injectable medication errors decreased to 0.4 and 0.2 times per week, respectively. Conclusion: The organization was able to achieve its purpose and goal of improving the provision of quality pharmacy care through optimal medication use and safety by reducing medication preparation errors. Error rates decreased and the workflow processes were streamlined, which has led to seamless operations within the pharmacy department. There has been significant cost avoidance and waste reduction and enhanced interdepartmental satisfaction due to the reduction of reported medication errors.

  12. Proprioceptive deficit in individuals with unilateral tearing of the anterior cruciate ligament after active evaluation of the sense of joint position.

    PubMed

    Cossich, Victor; Mallrich, Frédéric; Titonelli, Victor; de Sousa, Eduardo Branco; Velasques, Bruna; Salles, José Inácio

    2014-01-01

    To ascertain whether the proprioceptive deficit in the sense of joint position continues to be present when patients with a limb presenting a deficient anterior cruciate ligament (ACL) are assessed by testing their active reproduction of joint position, in comparison with the contralateral limb. Twenty patients with unilateral ACL tearing participated in the study. Their active reproduction of joint position in the limb with the deficient ACL and in the healthy contralateral limb was tested. Meta-positions of 20% and 50% of the maximum joint range of motion were used. Proprioceptive performance was determined through the values of the absolute error, variable error and constant error. Significant differences in absolute error were found at both of the positions evaluated, and in constant error at 50% of the maximum joint range of motion. When evaluated in terms of absolute error, the proprioceptive deficit continues to be present even when an active evaluation of the sense of joint position is made. Consequently, this sense involves activity of both intramuscular and tendon receptors.
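
The three proprioceptive indices used above have standard definitions, sketched below with hypothetical trial data: for reproduced angles r_i and target t, the constant error is the mean signed deviation, the absolute error the mean unsigned deviation, and the variable error the standard deviation of the signed deviations.

```python
# Sketch: absolute error (AE), variable error (VE) and constant error (CE)
# for an active joint-position reproduction test. Angles are hypothetical.
def joint_position_errors(reproduced, target):
    diffs = [r - target for r in reproduced]
    ce = sum(diffs) / len(diffs)                                # bias
    ae = sum(abs(d) for d in diffs) / len(diffs)                # magnitude
    ve = (sum((d - ce) ** 2 for d in diffs) / len(diffs)) ** 0.5  # consistency
    return ae, ve, ce

# Four reproduction trials (degrees) against a 30-degree target position.
ae, ve, ce = joint_position_errors([32.0, 28.0, 31.0, 29.0], target=30.0)
```

Note how the indices dissociate: these trials have zero constant error (no systematic bias) yet a nonzero absolute and variable error, which is why the study reports all three.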

  13. Synchronisation, electronic circuit implementation, and fractional-order analysis of 5D ordinary differential equations with hidden hyperchaotic attractors

    NASA Astrophysics Data System (ADS)

    Wei, Zhouchao; Rajagopal, Karthikeyan; Zhang, Wei; Kingni, Sifeu Takougang; Akgül, Akif

    2018-04-01

    Hidden hyperchaotic attractors can be generated with three positive Lyapunov exponents in the proposed 5D hyperchaotic Burke-Shaw system with only one stable equilibrium. To the best of our knowledge, this feature has rarely been reported previously in any other higher-dimensional system. A unidirectional linear error feedback coupling scheme is used to achieve hyperchaos synchronisation, which is assessed using two indicators: the normalised average root-mean-squared synchronisation error and the maximum cross-correlation coefficient. The 5D hyperchaotic system has been simulated using a specially designed electronic circuit and viewed on an oscilloscope, thereby confirming the results of the numerical integration. In addition, the fractional-order hidden hyperchaotic system is considered from three aspects: stability, bifurcation analysis and FPGA implementation. Such real-time implementations represent hidden hyperchaotic attractors with important consequences for engineering applications.
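
The two synchronisation indicators can be sketched as follows (assumed definitions with hypothetical drive/response trajectories; the normalisation by the drive's range is one common convention, not necessarily the paper's exact one):

```python
# Sketch: normalised RMS synchronisation error and maximum cross-correlation
# coefficient over time lags, for drive x and response y (toy sequences).
def nrmse(x, y):
    rmse = (sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)) ** 0.5
    return rmse / (max(x) - min(x))   # normalise by drive signal range

def max_cross_correlation(x, y, max_lag=5):
    def corr(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        num = sum((u - ma) * (v - mb) for u, v in zip(a, b))
        da = sum((u - ma) ** 2 for u in a) ** 0.5
        db = sum((v - mb) ** 2 for v in b) ** 0.5
        return num / (da * db)
    best = -1.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[lag:], y[:len(y) - lag]
        else:
            a, b = x[:len(x) + lag], y[-lag:]
        best = max(best, corr(a, b))
    return best

x = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0, 0.0, 1.0]
y = x[1:] + [0.0]   # response lagging the drive by one sample
```

A response that merely lags the drive scores poorly on pointwise NRMSE but perfectly on lagged cross-correlation, which is why both indicators are reported together.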

  14. The Frame Constraint on Experimentally Elicited Speech Errors in Japanese.

    PubMed

    Saito, Akie; Inoue, Tomoyoshi

    2017-06-01

    The so-called syllable position effect in speech errors has been interpreted as reflecting constraints posed by the frame structure of a given language, which operates separately from linguistic content during speech production. The effect refers to the phenomenon that when a speech error occurs, the replaced and replacing sounds tend to be in the same position within a syllable or word. Most of the evidence for the effect comes from analyses of naturally occurring speech errors in Indo-European languages, and there are few studies examining the effect in experimentally elicited speech errors or in other languages. This study examined whether experimentally elicited sound errors in Japanese exhibit the syllable position effect. In Japanese, the sub-syllabic unit known as the "mora" is considered to be a basic sound unit in production. Results showed that the syllable position effect occurred in mora errors, suggesting that the frame constrains the ordering of sounds during speech production.

  15. Statistical approaches to account for false-positive errors in environmental DNA samples.

    PubMed

    Lahoz-Monfort, José J; Guillera-Arroita, Gurutzeta; Tingley, Reid

    2016-05-01

    Environmental DNA (eDNA) sampling is prone to both false-positive and false-negative errors. We review statistical methods to account for such errors in the analysis of eDNA data and use simulations to compare the performance of different modelling approaches. Our simulations illustrate that even low false-positive rates can produce biased estimates of occupancy and detectability. We further show that removing or classifying single PCR detections in an ad hoc manner under the suspicion that such records represent false positives, as sometimes advocated in the eDNA literature, also results in biased estimation of occupancy, detectability and false-positive rates. We advocate alternative approaches to account for false-positive errors that rely on prior information, or the collection of ancillary detection data at a subset of sites using a sampling method that is not prone to false-positive errors. We illustrate the advantages of these approaches over ad hoc classifications of detections and provide practical advice and code for fitting these models in maximum likelihood and Bayesian frameworks. Given the severe bias induced by false-negative and false-positive errors, the methods presented here should be more routinely adopted in eDNA studies. © 2015 John Wiley & Sons Ltd.
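
The bias the authors warn about is easy to reproduce in a small simulation (all rates below are hypothetical, chosen only for illustration): with true occupancy psi, per-survey detection probability p at occupied sites and false-positive probability p_fp at unoccupied sites, the naive estimator "a site is occupied if it yields at least one detection" is biased upward even for small p_fp.

```python
# Sketch: simulate eDNA surveys and show that the naive "any detection ->
# occupied" estimator overstates occupancy when false positives exist.
import random

def naive_occupancy(psi=0.3, p=0.6, p_fp=0.05, n_sites=5000, n_surveys=3, seed=1):
    random.seed(seed)
    occ_est = 0
    for _ in range(n_sites):
        occupied = random.random() < psi
        rate = p if occupied else p_fp          # false positives at empty sites
        detected = any(random.random() < rate for _ in range(n_surveys))
        occ_est += detected
    return occ_est / n_sites

est = naive_occupancy()   # expected ~0.38, well above the true psi = 0.3
```

The expected naive estimate here is psi·(1−(1−p)³) + (1−psi)·(1−(1−p_fp)³) ≈ 0.38 versus a true occupancy of 0.30, which is the kind of bias the occupancy models reviewed in the paper are designed to remove.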

  16. Real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy.

    PubMed

    Li, Ruijiang; Jia, Xun; Lewis, John H; Gu, Xuejun; Folkerts, Michael; Men, Chunhua; Jiang, Steve B

    2010-06-01

    To develop an algorithm for real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy. Given a set of volumetric images of a patient at N breathing phases as the training data, deformable image registration was performed between a reference phase and the other N-1 phases, resulting in N-1 deformation vector fields (DVFs). These DVFs can be represented efficiently by a few eigenvectors and coefficients obtained from principal component analysis (PCA). By varying the PCA coefficients, new DVFs can be generated, which, when applied to the reference image, lead to new volumetric images. A volumetric image can then be reconstructed from a single projection image by optimizing the PCA coefficients such that its computed projection matches the measured one. The 3D location of the tumor can be derived by applying the inverted DVF to its position in the reference image. The algorithm was implemented on graphics processing units (GPUs) to achieve real-time efficiency. The training data were generated using a realistic and dynamic mathematical phantom with ten breathing phases. The testing data were 360 cone beam projections corresponding to one gantry rotation, simulated using the same phantom with a 50% increase in breathing amplitude. The average relative image intensity error of the reconstructed volumetric images is 6.9% ± 2.4%. The average 3D tumor localization error is 0.8 ± 0.5 mm. On an NVIDIA Tesla C1060 GPU card, the average computation time for reconstructing a volumetric image from each projection is 0.24 s (range: 0.17–0.35 s). The authors have shown the feasibility of reconstructing volumetric images and localizing tumor positions in 3D in near real time from a single x-ray image.
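
The PCA parameterisation above models a DVF as mean + Σ_k w_k·e_k and searches for the weights w_k that best explain a measurement. A toy sketch of that fitting step (tiny dimensions, hypothetical eigenvectors; the paper optimises against the actual x-ray projection, whereas for simplicity this fits a "measured" DVF directly by least squares):

```python
# Sketch: least-squares fit of two PCA weights so that mean + w1*E1 + w2*E2
# reproduces a target DVF. Vectors are toy 4-component stand-ins for the
# full 3D deformation fields; the 2x2 normal equations are solved directly.
def fit_two_weights(E1, E2, mean_dvf, target):
    r = [t - m for t, m in zip(target, mean_dvf)]  # residual after the mean
    a11 = sum(e * e for e in E1)
    a12 = sum(e1 * e2 for e1, e2 in zip(E1, E2))
    a22 = sum(e * e for e in E2)
    b1 = sum(e * x for e, x in zip(E1, r))
    b2 = sum(e * x for e, x in zip(E2, r))
    det = a11 * a22 - a12 * a12
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a12 * b1) / det

mean_dvf = [0.0, 0.0, 0.0, 0.0]
E1 = [1.0, 0.0, 1.0, 0.0]        # first PCA eigenvector (toy)
E2 = [0.0, 1.0, 0.0, -1.0]       # second PCA eigenvector (toy)
target = [0.5, 0.2, 0.5, -0.2]   # "measured" DVF to reproduce
w1, w2 = fit_two_weights(E1, E2, mean_dvf, target)
new_dvf = [m + w1 * a + w2 * b for m, a, b in zip(mean_dvf, E1, E2)]
```

In the real algorithm the objective compares computed to measured projections rather than DVFs, so the optimisation is nonlinear and run on the GPU, but the low-dimensional weight space is the same idea.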

  17. Recall bias in the assessment of exposure to mobile phones.

    PubMed

    Vrijheid, Martine; Armstrong, Bruce K; Bédard, Daniel; Brown, Julianne; Deltour, Isabelle; Iavarone, Ivano; Krewski, Daniel; Lagorio, Susanna; Moore, Stephen; Richardson, Lesley; Giles, Graham G; McBride, Mary; Parent, Marie-Elise; Siemiatycki, Jack; Cardis, Elisabeth

    2009-05-01

    Most studies of mobile phone use are case-control studies that rely on participants' reports of past phone use for their exposure assessment. Differential errors in recalled phone use are a major concern in such studies. INTERPHONE, a multinational case-control study of brain tumour risk and mobile phone use, included validation studies to quantify such errors and evaluate the potential for recall bias. Mobile phone records of 212 cases and 296 controls were collected from network operators in three INTERPHONE countries over an average of 2 years, and compared with mobile phone use reported at interview. The ratio of reported to recorded phone use was analysed as a measure of agreement. Mean ratios were virtually the same for cases and controls: both underestimated the number of calls by a factor of 0.81 and overestimated call duration by a factor of 1.4. For cases, but not controls, ratios increased with increasing time before the interview; however, these trends were based on few subjects with long-term data. Ratios increased with level of use. Random recall errors were large. In conclusion, there was little evidence for differential recall errors overall or in recent time periods. However, apparent overestimation by cases in more distant time periods could cause positive bias in estimates of disease risk associated with mobile phone use.

  18. Joint detection of anatomical points on surface meshes and color images for visual registration of 3D dental models

    NASA Astrophysics Data System (ADS)

    Destrez, Raphaël; Albouy-Kissi, Benjamin; Treuillet, Sylvie; Lucas, Yves

    2015-04-01

    Computer-aided planning for orthodontic treatment requires knowing the occlusion of separately scanned dental casts. A visually guided registration is conducted, starting with the extraction of corresponding features in both photographs and 3D scans. To achieve this, the dental neck and the occlusion surface are first extracted by image segmentation and 3D curvature analysis. Then, an iterative registration process is conducted during which feature positions are refined, guided by previously found anatomic edges. The occlusal edge image detection is improved by an original algorithm that follows edges poorly detected by Canny's detector, using a priori knowledge of tooth shapes. Finally, the influence of feature extraction and position optimization is evaluated in terms of the quality of the induced registration. The best combination of feature detection and optimization leads to an average positioning error of 1.10 mm and 2.03°.

  19. Servo control booster system for minimizing following error

    DOEpatents

    Wise, William L.

    1985-01-01

    A closed-loop feedback-controlled servo system is disclosed which reduces command-to-response error to the system's position feedback resolution least increment, ΔS_R, on a continuous real-time basis for all operating speeds. The servo system employs a second position feedback control loop on a by-exception basis, when the command-to-response error ≥ ΔS_R, to produce precise position correction signals. When the command-to-response error is less than ΔS_R, control automatically reverts to conventional control means as the second position feedback control loop is disconnected, becoming transparent to conventional servo control means. By operating the second unique position feedback control loop used herein at the appropriate clocking rate, command-to-response error may be reduced to the position feedback resolution least increment. The present system may be utilized in combination with a tachometer loop for increased stability.
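
The by-exception behaviour can be sketched schematically (gains, threshold and setpoint below are hypothetical, and the real system is analog hardware, not a discrete-time loop): the booster loop engages only while the command-to-response error is at least one resolution increment ΔS_R, and conventional control resumes below it.

```python
# Schematic sketch of the patent's by-exception booster loop: a high-gain
# correction path engages when |error| >= delta_s_r, otherwise the
# conventional (lower-gain) servo path runs. All constants are hypothetical.
def servo_step(command, position, delta_s_r, k_conv=0.5, k_boost=0.8):
    error = command - position
    if abs(error) >= delta_s_r:        # booster loop engaged "by exception"
        correction = k_boost * error   # precise position correction signal
    else:
        correction = k_conv * error    # conventional servo control
    return position + correction

pos = 0.0
for _ in range(20):                    # drive toward a 10-unit command
    pos = servo_step(10.0, pos, delta_s_r=0.01)
```

The switch makes the high-gain path transparent to the conventional controller whenever the error is already below the feedback resolution, mirroring the disconnect-and-revert behaviour the abstract describes.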

  20. Global Surface Temperature Change and Uncertainties Since 1861

    NASA Technical Reports Server (NTRS)

    Shen, Samuel S. P.; Lau, William K. M. (Technical Monitor)

    2002-01-01

    The objective of this talk is to analyze the warming trend of the global and hemispheric surface temperatures and its uncertainties. Using a statistical optimal averaging scheme, the land surface air temperature and sea surface temperature observations are used to compute the spatially averaged annual mean surface air temperature. The optimal averaging method is derived by minimizing the mean square error between the true and estimated averages, and uses empirical orthogonal functions. The method can accurately estimate the errors of the spatial average due to observational gaps and random measurement errors. In addition, three independent uncertainty factors are quantified: urbanization, changes in the in situ observational practices, and sea surface temperature data corrections. Based on these uncertainties, the best linear fit to annual global surface temperature gives an increase of 0.61 ± 0.16 °C between 1861 and 2000. This lecture will also touch on the impact of global change on nature and the environment, as well as the latest assessment methods for the attribution of global change.
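
A minimal special case of the optimal-averaging idea (not the full EOF-based scheme, and the station values below are hypothetical): for unbiased estimates with independent error variances v_i, the weights w_i ∝ 1/v_i minimise the mean square error of the weighted average, and the resulting error variance is 1/Σ(1/v_i).

```python
# Sketch: inverse-variance weighting, the simplest instance of choosing
# averaging weights to minimise mean square error. Values are hypothetical
# temperature-anomaly estimates (degC) with differing error variances.
def optimal_average(values, variances):
    inv = [1.0 / v for v in variances]
    s = sum(inv)
    weights = [i / s for i in inv]                     # w_i proportional to 1/v_i
    avg = sum(w * x for w, x in zip(weights, values))
    return avg, 1.0 / s                                # estimate and its variance

avg, var = optimal_average([0.55, 0.70, 0.62], [0.01, 0.04, 0.02])
```

The full scheme generalises this to correlated, gappy fields by expressing the covariance structure in empirical orthogonal functions, but the principle is the same: down-weight noisy observations so the combined error variance is as small as possible.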

  1. Detection of Multiple Innervation Zones from Multi-Channel Surface EMG Recordings with Low Signal-to-Noise Ratio Using Graph-Cut Segmentation.

    PubMed

    Marateb, Hamid Reza; Farahi, Morteza; Rojas, Monica; Mañanas, Miguel Angel; Farina, Dario

    2016-01-01

    Knowledge of the location of muscle Innervation Zones (IZs) is important in many applications, e.g. for minimizing the quantity of injected botulinum toxin in the treatment of spasticity or for deciding on the type of episiotomy during child delivery. Surface EMG (sEMG) can be noninvasively recorded to assess physiological and morphological characteristics of contracting muscles. However, it is not always possible to record signals of high quality. Moreover, muscles can have multiple IZs, all of which should be identified. We designed a fully automatic algorithm based on enhanced image Graph-Cut segmentation and morphological image processing methods to identify up to five IZs in 60-ms intervals of very-low to moderate quality sEMG signals detected with multi-channel electrodes (20 bipolar channels with an Inter-Electrode Distance (IED) of 5 mm). An anisotropic multilayered cylinder model was used to simulate 750 sEMG signals with signal-to-noise ratios ranging from -5 to 15 dB (using Gaussian noise), and 1 to 5 IZs were included in each 60-ms signal frame. Micro- and macro-averaged performance indices were then reported for the proposed IZ detection algorithm. In the micro-averaging procedure, the numbers of True Positives, False Positives and False Negatives in each frame were summed to generate cumulative measures. In macro-averaging, on the other hand, precision and recall were calculated for each frame and their averages used to determine the F1-score. Overall, the micro (macro)-averaged sensitivity, precision and F1-score of the algorithm for IZ channel identification were 82.7% (87.5%), 92.9% (94.0%) and 87.5% (90.6%), respectively. For the correctly identified IZ locations, the average bias error was 0.02 ± 0.10 IED. The average absolute conduction velocity estimation error was 0.41 ± 0.40 m/s for such frames. A sensitivity analysis, including increasing the IED and reducing the interpolation coefficient for time samples, was performed. 
The effects of adding power-line interference and of using other image interpolation methods on the performance of the proposed algorithm were also investigated. The average running time of the proposed algorithm on each 60-ms sEMG frame was 25.5 ± 8.9 s on an Intel dual-core 1.83 GHz CPU with 2 GB of RAM. The proposed algorithm correctly and precisely identified multiple IZs in each signal epoch over a wide range of signal quality and is thus a promising new offline tool for electrophysiological studies.
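
The micro- versus macro-averaging distinction used above can be sketched directly (frame counts are hypothetical): micro-averaging pools the TP/FP/FN counts over all frames before computing precision and recall, while macro-averaging computes precision and recall per frame and averages those.

```python
# Sketch: micro- vs macro-averaged F1 from per-frame (TP, FP, FN) counts.
def micro_macro_f1(frames):
    TP = sum(f[0] for f in frames)
    FP = sum(f[1] for f in frames)
    FN = sum(f[2] for f in frames)
    micro_p = TP / (TP + FP)                     # pooled precision
    micro_r = TP / (TP + FN)                     # pooled recall (sensitivity)
    micro_f1 = 2 * micro_p * micro_r / (micro_p + micro_r)
    ps = [tp / (tp + fp) for tp, fp, fn in frames]   # per-frame precision
    rs = [tp / (tp + fn) for tp, fp, fn in frames]   # per-frame recall
    macro_p = sum(ps) / len(ps)
    macro_r = sum(rs) / len(rs)
    macro_f1 = 2 * macro_p * macro_r / (macro_p + macro_r)
    return micro_f1, macro_f1

# Three hypothetical 60-ms frames with (TP, FP, FN) IZ-channel counts.
micro, macro = micro_macro_f1([(4, 1, 0), (1, 0, 1), (2, 1, 1)])
```

Micro-averaging weights frames by how many IZs they contain, whereas macro-averaging weights every frame equally, which is why the paper reports both figures side by side.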

  2. Detection of Multiple Innervation Zones from Multi-Channel Surface EMG Recordings with Low Signal-to-Noise Ratio Using Graph-Cut Segmentation

    PubMed Central

    Farahi, Morteza; Rojas, Monica; Mañanas, Miguel Angel; Farina, Dario

    2016-01-01

    Knowledge of the location of muscle Innervation Zones (IZs) is important in many applications, e.g. for minimizing the quantity of injected botulinum toxin in the treatment of spasticity or for deciding on the type of episiotomy during child delivery. Surface EMG (sEMG) can be noninvasively recorded to assess physiological and morphological characteristics of contracting muscles. However, it is not always possible to record signals of high quality. Moreover, muscles can have multiple IZs, all of which should be identified. We designed a fully automatic algorithm based on enhanced image Graph-Cut segmentation and morphological image processing methods to identify up to five IZs in 60-ms intervals of very-low to moderate quality sEMG signals detected with multi-channel electrodes (20 bipolar channels with an Inter-Electrode Distance (IED) of 5 mm). An anisotropic multilayered cylinder model was used to simulate 750 sEMG signals with signal-to-noise ratios ranging from -5 to 15 dB (using Gaussian noise), and 1 to 5 IZs were included in each 60-ms signal frame. Micro- and macro-averaged performance indices were then reported for the proposed IZ detection algorithm. In the micro-averaging procedure, the numbers of True Positives, False Positives and False Negatives in each frame were summed to generate cumulative measures. In macro-averaging, on the other hand, precision and recall were calculated for each frame and their averages used to determine the F1-score. Overall, the micro (macro)-averaged sensitivity, precision and F1-score of the algorithm for IZ channel identification were 82.7% (87.5%), 92.9% (94.0%) and 87.5% (90.6%), respectively. For the correctly identified IZ locations, the average bias error was 0.02 ± 0.10 IED. The average absolute conduction velocity estimation error was 0.41 ± 0.40 m/s for such frames. A sensitivity analysis, including increasing the IED and reducing the interpolation coefficient for time samples, was performed. 
The effects of adding power-line interference and of using other image interpolation methods on the performance of the proposed algorithm were also investigated. The average running time of the proposed algorithm on each 60-ms sEMG frame was 25.5 ± 8.9 s on an Intel dual-core 1.83 GHz CPU with 2 GB of RAM. The proposed algorithm correctly and precisely identified multiple IZs in each signal epoch over a wide range of signal quality and is thus a promising new offline tool for electrophysiological studies. PMID:27978535

  3. Assessing explicit error reporting in the narrative electronic medical record using keyword searching.

    PubMed

    Cao, Hui; Stetson, Peter; Hripcsak, George

    2003-01-01

    Many types of medical errors occur in and outside of hospitals, some of which have very serious consequences and increase cost. Identifying errors is a critical step for managing and preventing them. In this study, we assessed the explicit reporting of medical errors in the electronic record. We used five search terms, "mistake," "error," "incorrect," "inadvertent," and "iatrogenic," to survey several sets of narrative reports, including discharge summaries, sign-out notes, and outpatient notes from 1991 to 2000. We manually reviewed all the positive cases and identified them based on the physicians' own reporting. We identified 222 explicitly reported medical errors. The positive predictive value varied with the keyword. In general, the positive predictive value for each keyword was low, ranging from 3.4 to 24.4%. Therapeutic-related errors were the most commonly reported errors, and these were mainly medication errors. Keyword searches combined with manual review identified some medical errors that were reported in medical records, with low sensitivity and a moderate positive predictive value that varied by search term. Physicians were most likely to record errors in the Hospital Course and History of Present Illness sections of discharge summaries. The reported errors in medical records covered a broad range and were related to several types of care providers as well as non-health care professionals.
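
The screening metric above is straightforward to compute (a sketch with hypothetical note identifiers): the positive predictive value of a search term is the fraction of notes it flags that manual review confirms as genuine error reports.

```python
# Sketch: positive predictive value (PPV) of a keyword screen against a
# manually reviewed gold standard. Note identifiers are hypothetical.
def ppv(flagged_notes, confirmed):
    hits = [n for n in flagged_notes if n in confirmed]
    return len(hits) / len(flagged_notes)

flagged = ["note1", "note2", "note3", "note4", "note5"]  # matched "error"
confirmed = {"note2"}                                    # true error reports
rate = ppv(flagged, confirmed)
```

A PPV of 0.2 for this toy screen sits inside the 3.4–24.4% range the study reports, illustrating why every keyword hit still required manual review.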

  4. Role-modeling and medical error disclosure: a national survey of trainees.

    PubMed

    Martinez, William; Hickson, Gerald B; Miller, Bonnie M; Doukas, David J; Buckley, John D; Song, John; Sehgal, Niraj L; Deitz, Jennifer; Braddock, Clarence H; Lehmann, Lisa Soleymani

    2014-03-01

    To measure trainees' exposure to negative and positive role-modeling for responding to medical errors and to examine the association between that exposure and trainees' attitudes and behaviors regarding error disclosure. Between May 2011 and June 2012, 435 residents at two large academic medical centers and 1,187 medical students from seven U.S. medical schools received anonymous, electronic questionnaires. The questionnaire asked respondents about (1) experiences with errors, (2) training for responding to errors, (3) behaviors related to error disclosure, (4) exposure to role-modeling for responding to errors, and (5) attitudes regarding disclosure. Using multivariate regression, the authors analyzed whether frequency of exposure to negative and positive role-modeling independently predicted two primary outcomes: (1) attitudes regarding disclosure and (2) nontransparent behavior in response to a harmful error. The response rate was 55% (884/1,622). Training on how to respond to errors had the largest independent, positive effect on attitudes (standardized effect estimate, 0.32, P < .001); negative role-modeling had the largest independent, negative effect (standardized effect estimate, -0.26, P < .001). Positive role-modeling had a positive effect on attitudes (standardized effect estimate, 0.26, P < .001). Exposure to negative role-modeling was independently associated with an increased likelihood of trainees' nontransparent behavior in response to an error (OR 1.37, 95% CI 1.15-1.64; P < .001). Exposure to role-modeling predicts trainees' attitudes and behavior regarding the disclosure of harmful errors. Negative role models may be a significant impediment to disclosure among trainees.

  5. SU-G-JeP3-05: Geometry Based Transperineal Ultrasound Probe Positioning for Image Guided Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Camps, S; With, P de; Verhaegen, F

    2016-06-15

    Purpose: The use of ultrasound (US) imaging in radiotherapy is not widespread, primarily due to the need for skilled operators performing the scans. Automation of probe positioning has the potential to remove this need and minimize operator dependence. We introduce an algorithm for obtaining a US probe position that allows good anatomical structure visualization based on clinical requirements. The first application is on 4D transperineal US images of prostate cancer patients. Methods: The algorithm calculates the probe position and orientation using anatomical information provided by a reference CT scan, always available in radiotherapy workflows. As an initial test, we apply the algorithm on a CIRS pelvic US phantom to obtain a set of possible probe positions. Subsequently, five of these positions are randomly chosen and used to acquire actual US volumes of the phantom. Visual inspection of these volumes reveals whether the whole prostate and the adjacent edges of bladder and rectum are fully visualized, as clinically required. In addition, structure positions on the acquired US volumes are compared to predictions of the algorithm. Results: All acquired volumes fulfill the clinical requirements as specified in the previous section. Preliminary quantitative evaluation was performed on thirty consecutive slices of two volumes, on which the structures are easily recognizable. The mean absolute distances (MAD) between actual anatomical structure positions and positions predicted by the algorithm were calculated. This resulted in MAD of 2.4±0.4 mm for prostate, 3.2±0.9 mm for bladder and 3.3±1.3 mm for rectum. Conclusion: Visual inspection and quantitative evaluation show that the algorithm is able to propose probe positions that fulfill all clinical requirements. The obtained MAD is on average 2.9 mm. However, during evaluation we assumed no errors in structure segmentation and probe positioning. 
In future steps, accurate estimation of these errors will allow for better evaluation of the achieved accuracy.

  6. Adaptive use of research aircraft data sets for hurricane forecasts

    NASA Astrophysics Data System (ADS)

    Biswas, M. K.; Krishnamurti, T. N.

    2008-02-01

    This study uses an adaptive observational strategy for hurricane forecasting. It shows the impacts of Lidar Atmospheric Sensing Experiment (LASE) and dropsonde data sets from Convection and Moisture Experiment (CAMEX) field campaigns on hurricane track and intensity forecasts. The following cases are used in this study: Bonnie, Danielle and Georges of 1998 and Erin, Gabrielle and Humberto of 2001. A single model run for each storm is carried out using the Florida State University Global Spectral Model (FSUGSM) with the European Center for Medium Range Weather Forecasts (ECMWF) analysis as initial conditions, in addition to 50 other model runs where the analysis is randomly perturbed for each storm. The centers of maximum variance of the DLM heights are located from the forecast error variance fields at the 84-hr forecast. Back correlations are then performed using the centers of these maximum variances and the fields at the 36-hr forecast. The regions having the highest correlations in the vicinity of the hurricanes are indicative of regions from which the error growth emanates and suggest the need for additional observations. Data sets are next assimilated in those areas that contain high correlations. Forecasts are computed using the new initial conditions for the storm cases, and track and intensity skills are then examined with respect to the control forecast. The adaptive strategy is capable of identifying sensitive areas where additional observations can help in reducing the hurricane track forecast errors. A reduction of position error by approximately 52% for day 3 of the forecast (averaged over 7 storm cases) over the control runs is observed. The intensity forecast shows only a slight positive impact due to the model’s coarse resolution.

  7. Portable global positioning system receivers: static validity and environmental conditions.

    PubMed

    Duncan, Scott; Stewart, Tom I; Oliver, Melody; Mavoa, Suzanne; MacRae, Deborah; Badland, Hannah M; Duncan, Mitch J

    2013-02-01

    GPS receivers are becoming increasingly common as an objective measure of spatiotemporal movement in free-living populations; however, research into the effects of the surrounding physical environment on the accuracy of off-the-shelf GPS receivers is limited. The goal of the current study was to (1) determine the static validity of seven portable GPS receiver models under diverse environmental conditions and (2) compare the battery life and signal acquisition times among the models. Seven GPS models (three units of each) were placed on six geodetic sites subject to a variety of environmental conditions (e.g., open sky, high-rise buildings) on three separate occasions. The observed signal acquisition time and battery life of each unit were compared to advertised specifications. Data were collected and analyzed in June 2012. Substantial variation in positional error was observed among the seven GPS models, ranging from 12.1 ± 19.6 m to 58.8 ± 393.2 m when averaged across the three test periods and six geodetic sites. Further, mean error varied considerably among sites: the lowest error occurred at the site under open sky (7.3 ± 27.7 m), with the highest error at the site situated between high-rise buildings (59.2 ± 99.2 m). While observed signal acquisition times were generally longer than advertised, the differences between observed and advertised battery life were less pronounced. Results indicate that portable GPS receivers are able to accurately monitor static spatial location in unobstructed but not obstructed conditions. It also was observed that signal acquisition times were generally underestimated in advertised specifications. Copyright © 2013 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.
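The positional errors reported above are distances between logged GPS fixes and a surveyed geodetic point, summarized as mean ± SD. A minimal sketch of that computation, assuming WGS-84 coordinates and a spherical-Earth great-circle (haversine) distance; the coordinates in the usage example are hypothetical, not the study's sites:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points (degrees)."""
    R = 6371000.0  # mean Earth radius in metres (spherical approximation)
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def positional_error_stats(fixes, true_lat, true_lon):
    """Mean and population SD (metres) of each fix's distance from the surveyed point."""
    errors = [haversine_m(lat, lon, true_lat, true_lon) for lat, lon in fixes]
    n = len(errors)
    mean = sum(errors) / n
    sd = (sum((e - mean) ** 2 for e in errors) / n) ** 0.5
    return mean, sd
```

For example, a single fix offset by 0.001° of latitude from the truth point evaluates to roughly 111 m of error, which is the scale at which the worst receivers in the study drifted between high-rise buildings.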

  8. Wind adaptive modeling of transmission lines using minimum description length

    NASA Astrophysics Data System (ADS)

    Jaw, Yoonseok; Sohn, Gunho

    2017-03-01

    Transmission lines are moving objects whose positions are dynamically affected by wind-induced conductor motion while they are being acquired by airborne laser scanners. This wind effect results in a noisy distribution of laser points, which often hinders accurate representation of transmission lines and thus leads to various types of modeling errors. This paper presents a new method for complete 3D transmission line model reconstruction in the framework of inner and across span analysis. A highlight of the proposed method is its ability to indirectly estimate, through linear regression analysis, the noise scales that corrupt the quality of laser observations under different wind speeds. In the inner span analysis, individual transmission line models of each span are evaluated based on the Minimum Description Length theory, and erroneous transmission line segments are subsequently replaced by precise transmission line models with the wind-adaptive noise scale estimated. In the subsequent step of across span analysis, detecting the precise start and end positions of the transmission line models, known as the Points of Attachment, is the key issue for correcting partial modeling errors, as well as refining transmission line models. Finally, the geometric and topological completion of transmission line models is achieved over the entire network. A performance evaluation was conducted over 138.5 km of corridor data. Under modest wind conditions, the results demonstrate that the proposed method can improve the completeness of non-wind-adaptive initial models from an average success rate of 48% to between 85% and 99.5%, with root-mean-square positional accuracies of 9.55 cm for the transmission line models and 28 cm for the Points of Attachment.
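The Minimum Description Length criterion used to evaluate per-span line models trades goodness of fit against model complexity. The sketch below is a generic two-part MDL score for least-squares polynomial fits, not the authors' exact formulation: the (n/2)·ln(RSS/n) data-cost term, the (k/2)·ln(n) model-cost term, and the polynomial model family are all illustrative assumptions:

```python
import math
import numpy as np

def mdl_score(x, y, degree):
    """Two-part MDL score (up to an additive constant) for a least-squares
    polynomial fit: data cost (n/2)*ln(RSS/n) plus model cost (k/2)*ln(n)."""
    n, k = len(x), degree + 1
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((y - np.polyval(coeffs, x)) ** 2))
    return 0.5 * n * math.log(rss / n) + 0.5 * k * math.log(n)

def select_degree(x, y, max_degree=6):
    """Pick the polynomial degree with the smallest MDL score."""
    return min(range(1, max_degree + 1), key=lambda d: mdl_score(x, y, d))
```

Fitting noisier (windier) spans inflates RSS, so the data-cost term grows while the model-cost penalty keeps the selected model from chasing the noise, which is the intuition behind wind-adaptive model evaluation.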

  9. Modeling and analysis of caves using voxelization

    NASA Astrophysics Data System (ADS)

    Szeifert, Gábor; Szabó, Tivadar; Székely, Balázs

    2014-05-01

    Although there are many ways to create three dimensional representations of caves using modern information technology methods, modeling of caves has been challenging for researchers for a long time. One of these promising new alternative modeling methods is using voxels. We are using geodetic measurements as an input for our voxelization project. These geodetic underground surveys recorded the azimuth, altitude and distance of corner points of cave systems relative to each other. The diameter of each cave section is estimated from separate databases originating from different surveys. We have developed a simple but efficient method (on average, it covers more than 99.9% of the volume of the input model) to convert these vector-type datasets to voxels. We have also developed software components to make visualization of the voxel and vector models easier. Since each corner-point position is measured relative to other corner points' positions, propagation of uncertainties is an important issue in the case of long caves with many separate sections. We are using Monte Carlo simulations to analyze the effect of the error of each geodetic instrument possibly involved in a survey. Cross-sections of the simulated three dimensional distributions show that even tiny uncertainties of individual measurements can result in high variation of positions, which could be reduced by distributing the closing errors if such data are available. Using the results of our simulations, we can estimate cave volume and the error of the calculated cave volume depending on the complexity of the cave. Acknowledgements: the authors are grateful to Ariadne Karst and Cave Exploring Association and State Department of Environmental and Nature Protection of the Hungarian Ministry of Rural Development, Department of National Parks and Landscape Protection, Section Landscape and Cave Protection and Ecotourism for providing the cave measurement data. 
BS contributed as an Alexander von Humboldt Research Fellow.
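The Monte Carlo propagation of instrument errors along a relative traverse can be sketched as follows. This is a simplified 2-D version (azimuth and distance only, ignoring the altitude component), with Gaussian instrument noise and noise magnitudes chosen purely for illustration:

```python
import math
import random

def simulate_traverse(legs, sigma_az_deg=0.5, sigma_dist=0.05, n_sims=5000, seed=1):
    """Monte Carlo of the final-station position for a 2-D cave traverse.
    Each leg is (azimuth_deg, distance_m), measured relative to the previous
    station; instrument noise is assumed zero-mean Gaussian per reading.
    Returns the mean endpoint and the RMS scatter around it (metres)."""
    rng = random.Random(seed)
    endpoints = []
    for _ in range(n_sims):
        x = y = 0.0
        for az, d in legs:
            az_n = math.radians(az + rng.gauss(0.0, sigma_az_deg))
            d_n = d + rng.gauss(0.0, sigma_dist)
            x += d_n * math.sin(az_n)  # east component
            y += d_n * math.cos(az_n)  # north component
        endpoints.append((x, y))
    mx = sum(p[0] for p in endpoints) / n_sims
    my = sum(p[1] for p in endpoints) / n_sims
    spread = (sum((p[0] - mx) ** 2 + (p[1] - my) ** 2 for p in endpoints) / n_sims) ** 0.5
    return (mx, my), spread
```

Because each station is measured relative to the previous one, the endpoint scatter grows with the number of legs, which is exactly the accumulation effect the abstract describes for long caves with many sections.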

  10. SU-F-T-313: Clinical Results of a New Customer Acceptance Test for Elekta VMAT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rusk, B; Fontenot, J

    Purpose: To report the results of a customer acceptance test (CAT) for VMAT treatments for two matched Elekta linear accelerators. Methods: The CAT tests were performed on two clinically matched Elekta linear accelerators equipped with a 160-leaf MLC. Functional tests included performance checks of the control system during dynamic movements of the diaphragms, MLC, and gantry. Dosimetric tests included MLC picket fence tests at static and variable dose rates and a diaphragm alignment test, all performed using the on-board EPID. Additionally, beam symmetry during arc delivery was measured at the four cardinal angles for high and low dose rate modes using a 2D detector array. Results of the dosimetric tests were analyzed using the VMAT CAT analysis tool. Results: Linear accelerator 1 (LN1) met all stated CAT tolerances. Linear accelerator 2 (LN2) passed the geometric, beam symmetry, and MLC position error tests but failed the relative dose average test for the diaphragm abutment and all three picket fence fields. Though peak doses in the abutment regions were consistent, the average dose was below the stated tolerance, corresponding to a leaf junction that was too narrow. Despite this, no significant differences in patient-specific VMAT quality assurance measurements were observed between the accelerators, and both passed monthly MLC quality assurance performed with the Hancock test. Conclusion: Results from the CAT showed LN2 with relative dose averages in the abutment regions of the diaphragm and MLC tests outside the tolerances resulting from differences in leaf gap distances. Tolerances of the dose average tests from the CAT may be small enough to detect MLC errors which do not significantly affect patient QA or the routine MLC tests.

  11. Electric field theory based approach to search-direction line definition in image segmentation: application to optimal femur-tibia cartilage segmentation in knee-joint 3-D MR

    NASA Astrophysics Data System (ADS)

    Yin, Y.; Sonka, M.

    2010-03-01

    A novel method is presented for definition of search lines in a variety of surface segmentation approaches. The method is inspired by properties of electric field direction lines and is applicable to general-purpose n-D shape-based image segmentation tasks. Its utility is demonstrated in graph construction and optimal segmentation of multiple mutually interacting objects. The properties of the electric field-based graph construction guarantee that inter-object graph connecting lines are non-intersecting and inherently cover the entire object-interaction space. When applied to inter-object cross-surface mapping, our approach generates one-to-one and all-to-all corresponding vertex pairs between the regions of mutual interaction. We demonstrate the benefits of the electric field approach in several examples ranging from relatively simple single-surface segmentation to complex multi-object multi-surface segmentation of femur-tibia cartilage. The performance of our approach is demonstrated in 60 MR images from the Osteoarthritis Initiative (OAI), in which our approach achieved a very good performance as judged by surface positioning errors (average of 0.29 and 0.59 mm for signed and unsigned cartilage positioning errors, respectively).
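The electric-field analogy can be made concrete: treating shape points as like-signed point charges, the superposed field direction at any location defines a non-intersecting search-line direction. A minimal 2-D sketch under that assumption (unit charges, inverse-square law; not the paper's full n-D construction):

```python
import math

def efield_direction(point, charges):
    """Unit vector of the superposed electric field at `point` due to unit
    positive point charges at `charges` (2-D, Coulomb 1/r^2 law). Search
    lines are traced by repeatedly stepping along this direction."""
    ex = ey = 0.0
    for cx, cy in charges:
        dx, dy = point[0] - cx, point[1] - cy
        r3 = (dx * dx + dy * dy) ** 1.5  # |r|^3, so (dx, dy)/r3 has magnitude 1/r^2
        ex += dx / r3
        ey += dy / r3
    norm = math.hypot(ex, ey)
    return ex / norm, ey / norm
```

Because field lines of like charges never cross, graph columns built by following these directions cannot intersect, which is the property the graph construction relies on.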

  12. A periodic pattern of SNPs in the human genome

    PubMed Central

    Madsen, Bo Eskerod; Villesen, Palle; Wiuf, Carsten

    2007-01-01

    By surveying a filtered, high-quality set of SNPs in the human genome, we have found that SNPs positioned 1, 2, 4, 6, or 8 bp apart are more frequent than SNPs positioned 3, 5, 7, or 9 bp apart. The observed pattern is not restricted to genomic regions that are known to cause sequencing or alignment errors, for example, transposable elements (SINE, LINE, and LTR), tandem repeats, and large duplicated regions. However, we found that the pattern is almost entirely confined to what we define as “periodic DNA.” Periodic DNA is a genomic region with a high degree of periodicity in nucleotide usage. It turned out that periodic DNA is mainly small regions (average length 16.9 bp), widely distributed in the genome. Furthermore, periodic DNA has a 1.8 times higher SNP density than the rest of the genome and SNPs inside periodic DNA have a significantly higher genotyping error rate than SNPs outside periodic DNA. Our results suggest that not all SNPs in the human genome are created by independent single nucleotide mutations, and that care should be taken in analysis of SNPs from periodic DNA. The latter may have important consequences for SNP and association studies. PMID:17673700
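The frequency comparison of SNP spacings can be sketched by counting gaps between sorted SNP positions. This simplified version counts consecutive-SNP gaps only (the survey's exact pairing scheme may differ), with hypothetical positions in the example:

```python
from collections import Counter

def snp_spacing_counts(positions, max_gap=9):
    """Count how often consecutive SNPs (after sorting positions) lie
    exactly k bp apart, for k = 1..max_gap."""
    pos = sorted(positions)
    gaps = Counter(b - a for a, b in zip(pos, pos[1:]))
    return {k: gaps.get(k, 0) for k in range(1, max_gap + 1)}
```

On genome-scale data, the paper's observation corresponds to the counts at k = 1, 2, 4, 6, 8 exceeding those at k = 3, 5, 7, 9.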

  13. A soft kinetic data structure for lesion border detection.

    PubMed

    Kockara, Sinan; Mete, Mutlu; Yip, Vincent; Lee, Brendan; Aydin, Kemal

    2010-06-15

    Medical imaging and image processing techniques, ranging from microscopic to macroscopic, have become main components of diagnostic procedures that assist dermatologists in their medical decision-making processes. Computer-aided segmentation and border detection on dermoscopic images is one of the core components of diagnostic procedures and therapeutic interventions for skin cancer. Automated assessment tools for dermoscopic images have become an important research field mainly because of inter- and intra-observer variations in human interpretations. In this study, a novel approach, the graph spanner, for automatic border detection in dermoscopic images is proposed. In this approach, a proximity graph representation of dermoscopic images is presented in order to detect regions and borders in skin lesions. The graph spanner approach is examined on a set of 100 dermoscopic images whose borders, manually drawn by a dermatologist, are used as the ground truth. Error rates, false positives and false negatives along with true positives and true negatives are quantified by digitally comparing results with the manually determined borders. The results show that the highest precision and recall rates obtained for lesion boundaries are 100%. However, accuracy averages out at 97.72% and the mean border error is 2.28% over the whole dataset.
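The precision, recall, and border error figures quoted above follow directly from pixel-level confusion counts. A minimal sketch (the counts in the example are hypothetical, not the study's data):

```python
def border_metrics(tp, fp, tn, fn):
    """Precision, recall, and error rate from confusion-matrix counts:
    precision = TP/(TP+FP), recall = TP/(TP+FN),
    error rate = 1 - accuracy = 1 - (TP+TN)/total."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return precision, recall, 1.0 - accuracy
```

Note that precision and recall can both reach 100% on individual images while the dataset-wide error rate stays nonzero, as in the reported 2.28% mean border error.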

  14. Vector velocity volume flow estimation: Sources of error and corrections applied for arteriovenous fistulas.

    PubMed

    Jensen, Jonas; Olesen, Jacob Bjerring; Stuart, Matthias Bo; Hansen, Peter Møller; Nielsen, Michael Bachmann; Jensen, Jørgen Arendt

    2016-08-01

    A method for vector velocity volume flow estimation is presented, along with an investigation of its sources of error and correction of actual volume flow measurements. Volume flow errors are quantified theoretically by numerical modeling, through flow phantom measurements, and studied in vivo. This paper investigates errors from estimating volumetric flow using a commercial ultrasound scanner and the common assumptions made in the literature. The theoretical model shows, for example, that volume flow is underestimated by 15% when the scan plane is off-axis with the vessel center by 28% of the vessel radius. The error sources were also studied in vivo under realistic clinical conditions, and the theoretical results were applied for correcting the volume flow errors. Twenty dialysis patients with arteriovenous fistulas were scanned to obtain vector flow maps of fistulas. When fitting an ellipse to cross-sectional scans of the fistulas, the major axis was on average 10.2 mm, which is 8.6% larger than the minor axis. The ultrasound beam was on average 1.5 mm from the vessel center, corresponding to 28% of the semi-major axis in an average fistula. Estimating volume flow with an elliptical, rather than circular, vessel area and correcting the ultrasound beam for being off-axis gave a significant (p=0.008) reduction in error from 31.2% to 24.3%. The error is relative to the Ultrasound Dilution Technique, which is considered the gold standard for volume flow estimation for dialysis patients. The study shows the importance of correcting for volume flow errors, which are often made in clinical practice. Copyright © 2016 Elsevier B.V. All rights reserved.
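The elliptical-area correction amounts to replacing the circular cross-section πr² with πab in the volume flow Q = v̄ · A. A minimal sketch under that assumption (the semi-axis values in the example are illustrative, chosen so the major axis is 8.6% larger than the minor, as in the paper):

```python
import math

def volume_flow_ml_min(v_mean_cm_s, a_mm, b_mm):
    """Volume flow (mL/min) from mean velocity and an elliptical lumen with
    semi-axes a, b in mm. Passing a == b reproduces the circular assumption."""
    area_cm2 = math.pi * (a_mm / 10.0) * (b_mm / 10.0)  # mm -> cm, area in cm^2
    return v_mean_cm_s * area_cm2 * 60.0                # cm^3/s -> mL/min
```

With the major semi-axis 8.6% larger than the minor, the circular model built on the minor axis underestimates flow by the same factor a/b, which is one component of the 31.2% → 24.3% error reduction reported above.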

  15. Evaluation and modification of five techniques for estimating stormwater runoff for watersheds in west-central Florida

    USGS Publications Warehouse

    Trommer, J.T.; Loper, J.E.; Hammett, K.M.

    1996-01-01

    Several traditional techniques have been used for estimating stormwater runoff from ungaged watersheds. Applying these techniques to watersheds in west-central Florida requires that some of the empirical relationships be extrapolated beyond tested ranges. As a result, there is uncertainty as to the accuracy of these estimates. Sixty-six storms occurring in 15 west-central Florida watersheds were initially modeled using the Rational Method, the U.S. Geological Survey Regional Regression Equations, the Natural Resources Conservation Service TR-20 model, the U.S. Army Corps of Engineers Hydrologic Engineering Center-1 model, and the Environmental Protection Agency Storm Water Management Model. The techniques were applied according to the guidelines specified in the user manuals or standard engineering textbooks as though no field data were available and the selection of input parameters was not influenced by observed data. Computed estimates were compared with observed runoff to evaluate the accuracy of the techniques. One watershed was eliminated from further evaluation when it was determined that the area contributing runoff to the stream varies with the amount and intensity of rainfall. Therefore, further evaluation and modification of the input parameters were made for only 62 storms in 14 watersheds. Runoff ranged from 1.4 to 99.3 percent of rainfall. The average runoff for all watersheds included in this study was about 36 percent of rainfall. The average runoff for the urban, natural, and mixed land-use watersheds was about 41, 27, and 29 percent, respectively. Initial estimates of peak discharge using the rational method produced average watershed errors that ranged from an underestimation of 50.4 percent to an overestimation of 767 percent. The coefficient of runoff ranged from 0.20 to 0.60. Calibration of the technique produced average errors that ranged from an underestimation of 3.3 percent to an overestimation of 1.5 percent. 
The average calibrated coefficient of runoff for each watershed ranged from 0.02 to 0.72. The average values of the coefficient of runoff necessary to calibrate the urban, natural, and mixed land-use watersheds were 0.39, 0.16, and 0.08, respectively. The U.S. Geological Survey regional regression equations for determining peak discharge produced errors that ranged from an underestimation of 87.3 percent to an overestimation of 1,140 percent. The regression equations for determining runoff volume produced errors that ranged from an underestimation of 95.6 percent to an overestimation of 324 percent. Regression equations developed from data used for this study produced errors that ranged between an underestimation of 82.8 percent and an overestimation of 328 percent for peak discharge, and from an underestimation of 71.2 percent to an overestimation of 241 percent for runoff volume. Use of the equations developed for west-central Florida streams produced average errors for each type of watershed that were lower than errors associated with use of the U.S. Geological Survey equations. Initial estimates of peak discharges and runoff volumes using the Natural Resources Conservation Service TR-20 model, produced average errors of 44.6 and 42.7 percent respectively, for all the watersheds. Curve numbers and times of concentration were adjusted to match estimated and observed peak discharges and runoff volumes. The average change in the curve number for all the watersheds was a decrease of 2.8 percent. The average change in the time of concentration was an increase of 59.2 percent. The shape of the input dimensionless unit hydrograph also had to be adjusted to match the shape and peak time of the estimated and observed flood hydrographs. Peak rate factors for the modified input dimensionless unit hydrographs ranged from 162 to 454. 
The mean errors for peak discharges and runoff volumes were reduced to 18.9 and 19.5 percent, respectively, using the average calibrated input parameters for each watershed.
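The Rational Method evaluated above is the standard peak-discharge formula Q = C·i·A. A minimal sketch in U.S. customary units, as conventionally applied; the rainfall intensity and drainage area in the example are hypothetical, while the runoff coefficient 0.39 is the study's calibrated urban average:

```python
def rational_peak_discharge(c, i_in_per_hr, area_acres):
    """Rational Method peak discharge Q = C * i * A.
    With i in in/hr and A in acres, Q comes out in ft^3/s
    (the exact unit-conversion factor 1.008 is conventionally rounded to 1)."""
    if not 0.0 <= c <= 1.0:
        raise ValueError("runoff coefficient C must lie in [0, 1]")
    return c * i_in_per_hr * area_acres
```

Calibration in the study amounted to adjusting C per watershed (0.02 to 0.72) until estimated and observed peaks matched.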

  16. Geodetic positioning using a global positioning system of satellites

    NASA Technical Reports Server (NTRS)

    Fell, P. J.

    1980-01-01

    Geodetic positioning using range, integrated Doppler, and interferometric observations from a constellation of twenty-four Global Positioning System satellites is analyzed. A summary of the proposals for geodetic positioning and baseline determination is given which includes a description of measurement techniques and comments on rank deficiency and error sources. An analysis of variance comparison of range, Doppler, and interferometric time delay to determine their relative geometric strength for baseline determination is included. An analytic examination of the effect of a priori constraints on positioning using simultaneous observations from two stations is presented. Dynamic point positioning and baseline determination using range and Doppler is examined in detail. Models for the error sources influencing dynamic positioning are developed. Included is a discussion of atomic clock stability, and range and Doppler observation error statistics based on random correlated atomic clock error are derived.

  17. Time-resolved dosimetry using a pinpoint ionization chamber as quality assurance for IMRT and VMAT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Louwe, Robert J. W., E-mail: rob.louwe@ccdbh.org.nz; Satherley, Thomas; Day, Rebecca A.

    Purpose: To develop a method to verify the dose delivery in relation to the individual control points of intensity modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT) using an ionization chamber. In addition to more effective problem solving during patient-specific quality assurance (QA), the aim is to eventually map out the limitations in the treatment chain and enable a targeted improvement of the treatment technique in an efficient way. Methods: Pretreatment verification was carried out for 255 treatment plans that included a broad range of treatment indications in two departments using the equipment of different vendors. In-house developed software was used to enable calculation of the dose delivery for the individual beamlets in the treatment planning system (TPS), for data acquisition, and for analysis of the data. The observed deviations were related to various delivery and measurement parameters such as gantry angle, field size, and the position of the detector with respect to the field edge to distinguish between error sources. Results: The average deviation of the integral fraction dose during pretreatment verification of the planning target volume dose was −2.1% ± 2.2% (1 SD), −1.7% ± 1.7% (1 SD), and 0.0% ± 1.3% (1 SD) for IMRT at the Radboud University Medical Center (RUMC), VMAT (RUMC), and VMAT at the Wellington Blood and Cancer Centre, respectively. Verification of the dose to organs at risk gave very similar results but was generally subject to a larger measurement uncertainty due to the position of the detector at a high dose gradient. The observed deviations could be related to limitations of the TPS beam models, attenuation of the treatment couch, as well as measurement errors. The apparent systematic error of about −2% in the average deviation of the integral fraction dose in the RUMC results could be explained by the limitations of the TPS beam model in the calculation of the beam penumbra. 
Conclusions: This study showed that time-resolved dosimetry using an ionization chamber is feasible and can be largely automated which limits the required additional time compared to integrated dose measurements. It provides a unique QA method which enables identification and quantification of the contribution of various error sources during IMRT and VMAT delivery.

  18. Simple, accurate formula for the average bit error probability of multiple-input multiple-output free-space optical links over negative exponential turbulence channels.

    PubMed

    Peppas, Kostas P; Lazarakis, Fotis; Alexandridis, Antonis; Dangakis, Kostas

    2012-08-01

    In this Letter we investigate the error performance of multiple-input multiple-output free-space optical communication systems employing intensity modulation/direct detection and operating over strong atmospheric turbulence channels. Atmospheric-induced strong turbulence fading is modeled using the negative exponential distribution. For the considered system, an approximate yet accurate analytical expression for the average bit error probability is derived and an efficient method for its numerical evaluation is proposed. Numerically evaluated and computer simulation results are further provided to demonstrate the validity of the proposed mathematical analysis.
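The average bit-error probability over negative exponential turbulence can be cross-checked numerically by averaging the conditional error probability over irradiance samples. The sketch below assumes a common IM/DD OOK model, conditional BEP = Q(√SNR · I) with unit-mean exponential irradiance I; it is a Monte Carlo estimate, not the Letter's closed-form expression:

```python
import math
import random

def q_func(x):
    """Gaussian Q-function via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def avg_bep_neg_exp(snr_db, n_samples=200000, seed=7):
    """Monte Carlo average bit-error probability for IM/DD OOK over a
    negative exponential (unit-mean) irradiance channel: E_I[Q(sqrt(SNR)*I)]."""
    rng = random.Random(seed)
    snr = 10 ** (snr_db / 10.0)
    total = 0.0
    for _ in range(n_samples):
        irradiance = rng.expovariate(1.0)  # negative exponential fading, mean 1
        total += q_func(math.sqrt(snr) * irradiance)
    return total / n_samples
```

Because the exponential density places substantial mass near zero irradiance, the average BEP decays only slowly with SNR, which is why the strong-turbulence regime motivates the multiple-aperture (MIMO) diversity studied in the Letter.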

  19. Neural evidence for enhanced error detection in major depressive disorder.

    PubMed

    Chiu, Pearl H; Deldin, Patricia J

    2007-04-01

    Anomalies in error processing have been implicated in the etiology and maintenance of major depressive disorder. In particular, depressed individuals exhibit heightened sensitivity to error-related information and negative environmental cues, along with reduced responsivity to positive reinforcers. The authors examined the neural activation associated with error processing in individuals diagnosed with and without major depression and the sensitivity of these processes to modulation by monetary task contingencies. The error-related negativity and error-related positivity components of the event-related potential were used to characterize error monitoring in individuals with major depressive disorder and the degree to which these processes are sensitive to modulation by monetary reinforcement. Nondepressed comparison subjects (N=17) and depressed individuals (N=18) performed a flanker task under two external motivation conditions (i.e., monetary reward for correct responses and monetary loss for incorrect responses) and a nonmonetary condition. After each response, accuracy feedback was provided. The error-related negativity component assessed the degree of anomaly in initial error detection, and the error positivity component indexed recognition of errors. Across all conditions, the depressed participants exhibited greater amplitude of the error-related negativity component, relative to the comparison subjects, and equivalent error positivity amplitude. In addition, the two groups showed differential modulation by task incentives in both components. These data implicate exaggerated early error-detection processes in the etiology and maintenance of major depressive disorder. Such processes may then recruit excessive neural and cognitive resources that manifest as symptoms of depression.

  20. Impact of Orbit Position Errors on Future Satellite Gravity Models

    NASA Astrophysics Data System (ADS)

    Encarnacao, J.; Ditmar, P.; Klees, R.

    2015-12-01

    We present the results of a study of the impact of orbit positioning noise (OPN) caused by incomplete knowledge of the Earth's gravity field on gravity models estimated from satellite gravity data. The OPN is simulated as the difference between two sets of orbits integrated on the basis of different static gravity field models. The OPN is propagated into ll-SST data, here computed as averaged inter-satellite accelerations projected onto the Line of Sight (LoS) vector between the two satellites. We consider the cartwheel formation (CF), pendulum formation (PF), and trailing formation (TF) as they produce a different dominant orientation of the LoS vector. Given the polar orbits of the formations, the LoS vector is mainly aligned with the North-South direction in the TF, with the East-West direction in the PF (i.e. no along-track offset), and contains a radial component in the CF. An analytical analysis predicts that the CF suffers from a very high sensitivity to the OPN. This is a fundamental characteristic of this formation, which results from the amplification of this noise by diagonal components of the gravity gradient tensor (defined in the local frame) during the propagation into satellite gravity data. In contrast, the OPN in the data from PF and TF is only scaled by off-diagonal gravity gradient components, which are much smaller than the diagonal tensor components. A numerical analysis shows that the effect of the OPN is similar in the data collected by the TF and the PF. The amplification of the OPN errors for the CF leads to errors in the gravity model that are three orders of magnitude larger than those in case of the PF. This means that any implementation of the CF will most likely produce data with relatively low quality since this error dominates the error budget, especially at low frequencies. This is particularly critical for future gravimetric missions that will be equipped with highly accurate ranging sensors.
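The ll-SST observable described above, relative inter-satellite acceleration projected onto the line-of-sight vector, can be sketched as a simple projection; the 3-D vectors in the example are hypothetical, and the averaging over time is omitted:

```python
def los_projection(r1, r2, a1, a2):
    """Project the inter-satellite relative acceleration (a2 - a1) onto the
    line-of-sight unit vector from satellite 1 to satellite 2.
    r1, r2 are position vectors; a1, a2 are acceleration vectors (3-tuples)."""
    d = [b - a for a, b in zip(r1, r2)]
    norm = sum(c * c for c in d) ** 0.5
    los = [c / norm for c in d]                    # unit LoS vector
    rel_acc = [b - a for a, b in zip(a1, a2)]      # relative acceleration
    return sum(c * e for c, e in zip(rel_acc, los))
```

The formation determines which components of the gravity gradient tensor scale the orbit positioning noise in this observable: a mainly along-track LoS (TF) or cross-track LoS (PF) picks up only off-diagonal terms, while the radial LoS component of the cartwheel formation couples to the much larger diagonal terms.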

  1. Research on correction algorithm of laser positioning system based on four quadrant detector

    NASA Astrophysics Data System (ADS)

    Gao, Qingsong; Meng, Xiangyong; Qian, Weixian; Cai, Guixia

    2018-02-01

    This paper first introduces the basic principle of the four quadrant detector, and a laser positioning experiment system is built based on it. In practical applications, a four quadrant laser positioning system is affected not only by background light interference and detector dark-current noise, but also by random noise, limited system stability, and spot-equivalence error, none of which can be ignored; system calibration and correction are therefore essential. This paper analyzes the various factors contributing to the system positioning error and then proposes an algorithm for correcting it. Simulation and experimental results show that the correction algorithm mitigates the effect of system error on positioning and improves the positioning accuracy.
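The quadrant arithmetic that such a system relies on can be sketched as follows. This is a minimal illustration of the standard four-quadrant position estimate, not the paper's corrected algorithm; the quadrant labels and the calibration factor `k` are assumptions for the example.

```python
def quadrant_position(a, b, c, d, k=1.0):
    """Estimate normalized spot offsets (x, y) from quadrant signals.

    Quadrants are labeled counter-clockwise from the upper-right:
    a = upper-right, b = upper-left, c = lower-left, d = lower-right.
    k is a calibration factor mapping the dimensionless ratio to distance.
    """
    total = a + b + c + d
    if total <= 0:
        raise ValueError("no light on the detector")
    x = k * ((a + d) - (b + c)) / total   # right minus left
    y = k * ((a + b) - (c + d)) / total   # top minus bottom
    return x, y

# A centred spot illuminates all quadrants equally:
print(quadrant_position(1.0, 1.0, 1.0, 1.0))  # (0.0, 0.0)
```

The correction problem the paper addresses arises because this ratio is only linear near the centre and is biased by background light added to each quadrant signal.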

  2. SU-E-T-105: An FMEA Survey of Intensity Modulated Radiation Therapy (IMRT) Step and Shoot Dose Delivery Failure Modes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faught, J Tonigan; Johnson, J; Stingo, F

    2015-06-15

    Purpose: To assess the perception of TG-142 tolerance level dose delivery failures in IMRT and the application of the FMEA process to this specific aspect of IMRT. Methods: An online survey was distributed to medical physicists worldwide that briefly described 11 different failure modes (FMs) covered by basic quality assurance in step-and-shoot IMRT at or near TG-142 tolerance criteria levels. For each FM, respondents estimated the worst-case H&N patient percent dose error and FMEA scores for Occurrence, Detectability, and Severity. Demographic data were also collected. Results: 181 individual and three group responses were submitted; 84% were from North America. Most (76%) individual respondents performed at least 80% clinical work and 92% were nationally certified. Respondent medical physics experience ranged from 2.5–45 years (average 18 years). 52% of individual respondents were at least somewhat familiar with FMEA, while 17% were not familiar. Several IMRT techniques, treatment planning systems, and linear accelerator manufacturers were represented. All FMs received widely varying scores, ranging from 1–10 for occurrence, at least 1–9 for detectability, and at least 1–7 for severity. Ranking FMs by RPN scores also resulted in large variability, with each FM being ranked both most risky (1st) and least risky (11th) by different respondents. On average, MLC modeling had the highest RPN scores. Individual estimated percent dose errors and severity scores positively correlated (p<0.10) for each FM, as expected. No universal correlations were found between the demographic information collected and scoring, percent dose errors, or ranking. Conclusion: The FMs investigated were overall evaluated as low to medium risk, with average RPNs less than 110. The ranking of the 11 FMs was not agreed upon by the community. The large variability in FMEA scoring may be caused by individual interpretation and/or experience, reflecting the subjective nature of the FMEA tool.
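The ranking metric used above is the conventional FMEA risk priority number, RPN = occurrence x severity x detectability, each scored 1-10. A minimal sketch, with illustrative placeholder scores (not survey data):

```python
def rpn(occurrence, severity, detectability):
    """Risk priority number for one failure mode (scores each in 1..10)."""
    for s in (occurrence, severity, detectability):
        if not 1 <= s <= 10:
            raise ValueError("FMEA scores must lie in 1..10")
    return occurrence * severity * detectability

# Hypothetical (O, S, D) scores for two failure modes.
scores = {
    "MLC modeling": (4, 6, 4),
    "output drift": (3, 5, 2),
}
ranked = sorted(scores, key=lambda fm: rpn(*scores[fm]), reverse=True)
print(ranked[0])  # highest-risk failure mode under these toy scores
```

Because RPN multiplies three subjective 1-10 scores, small disagreements between respondents compound, which is consistent with the wide ranking variability the survey reports.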

  3. SU-F-T-465: Two Years of Radiotherapy Treatments Analyzed Through MLC Log Files

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Defoor, D; Kabat, C; Papanikolaou, N

    Purpose: To present treatment statistics of a Varian Novalis Tx using more than 90,000 Varian Dynalog files collected over the past two years. Methods: Varian Dynalog files are recorded for every patient treated on our Varian Novalis Tx. The files are collected and analyzed daily to check interfraction agreement of treatment deliveries, which is accomplished by creating fluence maps from the data contained in the Dynalog files. From the Dynalog files we have also compiled statistics for treatment delivery times, MLC errors, gantry errors, and collimator errors. Results: The mean treatment time for VMAT patients was 153 ± 86 seconds, while the mean treatment time for step & shoot was 256 ± 149 seconds. Patients' treatment times showed a variation of 0.4% over their treatment course for VMAT and 0.5% for step & shoot. The average field sizes were 40 cm2 and 26 cm2 for VMAT and step & shoot, respectively. VMAT beams contained an average overall leaf travel of 34.17 meters, and step & shoot beams averaged less than half of that at 15.93 meters. When comparing planned and delivered fluence maps generated using the Dynalog files, VMAT plans showed an average gamma passing percentage of 99.85 ± 0.47, and step & shoot plans an average of 97.04 ± 0.04. 5.3% of beams contained an MLC error greater than 1 mm and 2.4% had an error greater than 2 mm. The mean gantry speed for VMAT plans was 1.01 degrees/s with a maximum of 6.5 degrees/s. Conclusion: Varian Dynalog files are useful for monitoring machine performance and treatment parameters. The Dynalog files have shown that the performance of the Novalis Tx is consistent over the course of a patient's treatment, with only slight variations in patient treatment times and a low rate of MLC errors.
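The gamma passing percentage quoted above combines a dose-difference and a distance-to-agreement test at each point. A hedged sketch of the idea in one dimension (the actual fluence maps are 2-D; the 3%/3 mm criteria and the normalization of dose to a maximum of 1 are conventional assumptions, not taken from the abstract):

```python
def gamma_1d(planned, delivered, spacing_mm, dose_tol=0.03, dist_mm=3.0):
    """Return the percentage of delivered points with gamma <= 1.

    Doses are assumed normalized so 1.0 is the maximum; dose_tol is then
    a global 3% criterion and dist_mm the distance-to-agreement criterion.
    """
    passing = 0
    for i, d in enumerate(delivered):
        best = float("inf")
        for j, p in enumerate(planned):
            dd = (d - p) / dose_tol                 # dose-difference term
            dr = (i - j) * spacing_mm / dist_mm     # distance term
            best = min(best, dd * dd + dr * dr)     # squared gamma
        if best <= 1.0:
            passing += 1
    return 100.0 * passing / len(delivered)

plan = [0.0, 0.5, 1.0, 0.5, 0.0]
print(gamma_1d(plan, plan, spacing_mm=2.0))  # identical profiles pass 100.0
```

Real implementations interpolate the planned distribution between points; the brute-force nearest-point search above only conveys the structure of the metric.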

  4. Optimum data analysis procedures for Titan 4 and Space Shuttle payload acoustic measurements during lift-off

    NASA Technical Reports Server (NTRS)

    Piersol, Allan G.

    1991-01-01

    Analytical expressions have been derived to describe the mean square error in the estimation of the maximum rms value computed from a step-wise (or running) time average of a nonstationary random signal. These analytical expressions have been applied to the problem of selecting the optimum averaging times that will minimize the total mean square errors in estimates of the maximum sound pressure levels measured inside the Titan IV payload fairing (PLF) and the Space Shuttle payload bay (PLB) during lift-off. Based on evaluations of typical Titan IV and Space Shuttle launch data, it has been determined that the optimum averaging times for computing the maximum levels are (1) T (sub o) = 1.14 sec for the maximum overall level, and T (sub oi) = 4.88 f (sub i) (exp -0.2) sec for the maximum 1/3 octave band levels inside the Titan IV PLF, and (2) T (sub o) = 1.65 sec for the maximum overall level, and T (sub oi) = 7.10 f (sub i) (exp -0.2) sec for the maximum 1/3 octave band levels inside the Space Shuttle PLB, where f (sub i) is the 1/3 octave band center frequency. However, the results for both vehicles indicate that the total rms error in the maximum level estimates will be within 25 percent of the minimum error for all averaging times within plus or minus 50 percent of the optimum averaging time, so a precise selection of the exact optimum averaging time is not critical. Based on these results, linear averaging times (T) are recommended for computing the maximum sound pressure level during lift-off.
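The per-band optimum averaging times above follow a simple power law and are easy to tabulate. A worked example for the Titan IV PLF case, T_oi = 4.88 * f_i^(-0.2) sec; the band centre frequencies below are standard preferred 1/3-octave values, assumed for illustration:

```python
def optimum_averaging_time(f_center_hz, coeff=4.88, exponent=-0.2):
    """Optimum 1/3 octave band averaging time (sec); defaults = Titan IV PLF."""
    return coeff * f_center_hz ** exponent

for f in (31.5, 125.0, 1000.0):
    print(f"{f:7.1f} Hz -> {optimum_averaging_time(f):.2f} s")
```

Passing `coeff=7.10` gives the corresponding Space Shuttle PLB values. Note the weak exponent (-0.2): as the abstract observes, errors are insensitive to moderate departures from the optimum.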

  5. Generalized site occupancy models allowing for false positive and false negative errors

    USGS Publications Warehouse

    Royle, J. Andrew; Link, W.A.

    2006-01-01

    Site occupancy models have been developed that allow for imperfect species detection or "false negative" observations. Such models have become widely adopted in surveys of many taxa. The most fundamental assumption underlying these models is that "false positive" errors are not possible. That is, one cannot detect a species where it does not occur. However, such errors are possible in many sampling situations for a number of reasons, and even low false positive error rates can induce extreme bias in estimates of site occupancy when they are not accounted for. In this paper, we develop a model for site occupancy that allows for both false negative and false positive error rates. This model can be represented as a two-component finite mixture model and can be easily fitted using freely available software. We provide an analysis of avian survey data using the proposed model and present results of a brief simulation study evaluating the performance of the maximum-likelihood estimator and the naive estimator in the presence of false positive errors.
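The two-component mixture structure can be sketched directly as a likelihood. Parameter names are illustrative: psi is the occupancy probability, p11 the detection probability at occupied sites, and p10 the false-positive detection probability at unoccupied sites; the detection counts below are toy data, not the avian survey.

```python
import math

def neg_log_likelihood(psi, p11, p10, detections, n_visits):
    """Negative log-likelihood of per-site detection counts out of n_visits.

    Each site's count is a mixture of two binomials: detections at an
    occupied site (rate p11) or false positives at an unoccupied site
    (rate p10), weighted by the occupancy probability psi.
    """
    nll = 0.0
    for y in detections:
        comb = math.comb(n_visits, y)
        occ = psi * comb * p11 ** y * (1 - p11) ** (n_visits - y)
        unocc = (1 - psi) * comb * p10 ** y * (1 - p10) ** (n_visits - y)
        nll -= math.log(occ + unocc)
    return nll

sites = [0, 3, 1]  # toy detection counts over 3 visits at 3 sites
print(neg_log_likelihood(0.5, 0.8, 0.1, sites, n_visits=3))
```

Minimizing this function over (psi, p11, p10) gives the maximum-likelihood estimates; with p10 fixed at 0 it collapses to the standard false-negative-only occupancy model.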

  6. Investigating the limitations of single breath-hold renal artery blood flow measurements using spiral phase contrast MR with R-R interval averaging.

    PubMed

    Steeden, Jennifer A; Muthurangu, Vivek

    2015-04-01

    1) To validate an R-R interval averaged golden angle spiral phase contrast magnetic resonance (RAGS PCMR) sequence against conventional cine PCMR for assessment of renal blood flow (RBF) in normal volunteers; and 2) To investigate the effects of motion and heart rate on the accuracy of flow measurements using an in silico simulation. In 20 healthy volunteers RAGS (∼6 sec breath-hold) and respiratory-navigated cine (∼5 min) PCMR were performed in both renal arteries to assess RBF. A simulation of RAGS PCMR was used to assess the effect of heart rate (30-105 bpm), vessel expandability (0-150%) and translational motion (x1.0-4.0) on the accuracy of RBF measurements. There was good agreement between RAGS and cine PCMR in the volunteer study (bias: 0.01 L/min, limits of agreement: -0.04 to +0.06 L/min, P = 0.0001). The simulation demonstrated a positive linear relationship between heart rate and error (r = 0.9894, P < 0.0001), a negative linear relationship between vessel expansion and error (r = -0.9484, P < 0.0001), and a nonlinear, heart rate-dependent relationship between vessel translation and error. We have demonstrated that RAGS PCMR accurately measures RBF in vivo. However, the simulation reveals limitations in this technique at extreme heart rates (<40 bpm, >100 bpm), or when there is significant motion (vessel expandability: >80%, vessel translation: >x2.2). © 2014 Wiley Periodicals, Inc.

  7. Sensitivity of CONUS Summer Rainfall to the Selection of Cumulus Parameterization Schemes in NU-WRF Seasonal Simulations

    NASA Technical Reports Server (NTRS)

    Iguchi, Takamichi; Tao, Wei-Kuo; Wu, Di; Peters-Lidard, Christa; Santanello, Joseph A.; Kemp, Eric; Tian, Yudong; Case, Jonathan; Wang, Weile; Ferraro, Robert

    2017-01-01

    This study investigates the sensitivity of daily rainfall rates in regional seasonal simulations over the contiguous United States (CONUS) to different cumulus parameterization schemes. Daily rainfall fields were simulated at 24-km resolution using the NASA-Unified Weather Research and Forecasting (NU-WRF) Model for June-August 2000. Four cumulus parameterization schemes and two options for shallow cumulus components in a specific scheme were tested. The spread in the domain-mean rainfall rates across the parameterization schemes was generally consistent between the entire CONUS and most subregions. The selection of the shallow cumulus component in a specific scheme had more impact than that of the four cumulus parameterization schemes. Regional variability in the performance of each scheme was assessed by calculating optimally weighted ensembles that minimize full root-mean-square errors against reference datasets. The spatial pattern of the seasonally averaged rainfall was insensitive to the selection of cumulus parameterization over mountainous regions because of the topographical pattern constraint, so that the simulation errors were mostly attributed to the overall bias there. In contrast, the spatial patterns over the Great Plains regions as well as the temporal variation over most parts of the CONUS were relatively sensitive to cumulus parameterization selection. Overall, adopting a single simulation result was preferable to generating a better ensemble for the seasonally averaged daily rainfall simulation, as long as their overall biases had the same positive or negative sign. However, an ensemble of multiple simulation results was more effective in reducing errors in the case of also considering temporal variation.
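The "optimally weighted ensemble" assessment amounts to choosing member weights that minimize RMSE against a reference. A hedged two-member sketch using unconstrained least squares via the 2x2 normal equations (the toy series are invented; the study's ensembles and references are more elaborate):

```python
def optimal_weights(a, b, r):
    """Weights (w1, w2) minimizing the RMSE of w1*a + w2*b against r."""
    saa = sum(x * x for x in a)
    sbb = sum(x * x for x in b)
    sab = sum(x * y for x, y in zip(a, b))
    sar = sum(x * y for x, y in zip(a, r))
    sbr = sum(x * y for x, y in zip(b, r))
    det = saa * sbb - sab * sab
    w1 = (sar * sbb - sbr * sab) / det
    w2 = (saa * sbr - sab * sar) / det
    return w1, w2

a = [1.0, 2.0, 3.0, 4.0]  # simulation 1 (e.g. mm/day rainfall)
b = [2.0, 2.0, 2.0, 2.0]  # simulation 2
r = [1.5, 2.0, 2.5, 3.0]  # reference analysis
print(optimal_weights(a, b, r))
```

When members share the sign of their bias, the optimum tends to load on a single member, which matches the study's finding that adopting one simulation can beat the ensemble.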

  8. The Performance of Noncoherent Orthogonal M-FSK in the Presence of Timing and Frequency Errors

    NASA Technical Reports Server (NTRS)

    Hinedi, Sami; Simon, Marvin K.; Raphaeli, Dan

    1993-01-01

    Practical M-FSK systems experience a combination of time and frequency offsets (errors). This paper assesses the deleterious effect of these offsets, first individually and then combined, on the average bit error probability performance of the system.

  9. Plantar pressure cartography reconstruction from 3 sensors.

    PubMed

    Abou Ghaida, Hussein; Mottet, Serge; Goujon, Jean-Marc

    2014-01-01

    Foot problem diagnosis is often made using pressure mapping systems, which are unfortunately confined to laboratories. In the context of e-health and telemedicine for home monitoring of patients with foot problems, our focus is to present a system acceptable for daily use. We developed an ambulatory instrumented insole using 3 pressure sensors to visualize plantar pressure cartographies. We show that a standard insole with fixed sensor positions can be used for different foot sizes. The results show an average error, measured at each pixel, of 0.01 daN, with a standard deviation of 0.005 daN.

  10. The Whole Warps the Sum of Its Parts.

    PubMed

    Corbett, Jennifer E

    2017-01-01

    The efficiency of averaging properties of sets without encoding redundant details is analogous to gestalt proposals that perception is parsimoniously organized as a function of recurrent order in the world. This similarity suggests that grouping and averaging are part of a broader set of strategies allowing the visual system to circumvent capacity limitations. To examine how gestalt grouping affects the manner in which information is averaged and remembered, I compared the error in observers' adjustments of remembered sizes of individual circles in two different mean-size sets defined by similarity, proximity, connectedness, or a common region. Overall, errors were more similar within the same gestalt-defined groups than between different gestalt-defined groups, such that the remembered sizes of individual circles were biased toward the mean size of their respective gestalt-defined groups. These results imply that gestalt grouping facilitates perceptual averaging to minimize the error with which individual items are encoded, thereby optimizing the efficiency of visual short-term memory.

  11. Computation of Standard Errors

    PubMed Central

    Dowd, Bryan E; Greene, William H; Norton, Edward C

    2014-01-01

    Objectives We discuss the problem of computing the standard errors of functions involving estimated parameters and provide the relevant computer code for three different computational approaches using two popular computer packages. Study Design We show how to compute the standard errors of several functions of interest: the predicted value of the dependent variable for a particular subject, and the effect of a change in an explanatory variable on the predicted value of the dependent variable, both for an individual subject and as an average effect for a sample of subjects. Empirical Application Using a publicly available dataset, we explain three different methods of computing standard errors: the delta method, Krinsky–Robb, and bootstrapping. We provide computer code for Stata 12 and LIMDEP 10/NLOGIT 5. Conclusions In most applications, the choice of computational method for standard errors of functions of estimated parameters is a matter of convenience. However, when computing standard errors of the sample average of functions that involve both estimated parameters and nonstochastic explanatory variables, it is important to consider the sources of variation in the function's values. PMID:24800304
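Of the three approaches compared, the delta method is the simplest to show in the scalar case: se(g(theta_hat)) is approximately |g'(theta_hat)| times se(theta_hat). The logistic-prediction example below is hypothetical, not the paper's dataset.

```python
import math

def delta_method_se(g_prime, theta_hat, se_theta):
    """First-order (delta method) standard error of g(theta_hat), scalar case."""
    return abs(g_prime(theta_hat)) * se_theta

# Example: predicted probability p = 1/(1 + exp(-xb)) at an estimated
# linear index xb_hat = 1.0 with standard error 0.2 (invented numbers).
xb_hat, se_xb = 1.0, 0.2

def dp_dxb(xb):
    p = 1.0 / (1.0 + math.exp(-xb))
    return p * (1.0 - p)  # derivative of the logistic function

print(round(delta_method_se(dp_dxb, xb_hat, se_xb), 4))  # 0.0393
```

Krinsky-Robb replaces the derivative with repeated draws from the estimated parameter distribution, and bootstrapping re-estimates the model on resampled data; all three should agree closely when g is smooth and the sample is large.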

  12. A combined time-of-flight and depth-of-interaction detector for total-body positron emission tomography

    PubMed Central

    Berg, Eric; Roncali, Emilie; Kapusta, Maciej; Du, Junwei; Cherry, Simon R.

    2016-01-01

    Purpose: In support of a project to build a total-body PET scanner with an axial field-of-view of 2 m, the authors are developing simple, cost-effective block detectors with combined time-of-flight (TOF) and depth-of-interaction (DOI) capabilities. Methods: This work focuses on investigating the potential of phosphor-coated crystals with conventional PMT-based block detector readout to provide DOI information while preserving timing resolution. The authors explored a variety of phosphor-coating configurations with single crystals and crystal arrays. Several pulse shape discrimination techniques were investigated, including decay time, delayed charge integration (DCI), and average signal shapes. Results: Pulse shape discrimination based on DCI provided the lowest DOI positioning error: 2 mm DOI positioning error was obtained with single phosphor-coated crystals while 3–3.5 mm DOI error was measured with the block detector module. Minimal timing resolution degradation was observed with single phosphor-coated crystals compared to uncoated crystals, and a timing resolution of 442 ps was obtained with phosphor-coated crystals in the block detector compared to 404 ps without phosphor coating. Flood maps showed a slight degradation in crystal resolvability with phosphor-coated crystals; however, all crystals could be resolved. Energy resolution was degraded by 3%–7% with phosphor-coated crystals compared to uncoated crystals. Conclusions: These results demonstrate the feasibility of obtaining TOF–DOI capabilities with simple block detector readout using phosphor-coated crystals. PMID:26843254
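The delayed charge integration (DCI) discriminator above can be sketched with a toy pulse model: if pulses from the two interaction depths are approximated as single exponentials with different effective decay times (the phosphor coating slows one of them), DCI is the fraction of charge arriving after a fixed delay. The decay times and delay below are illustrative assumptions, not the paper's measured values.

```python
import math

def dci_fraction(decay_ns, delay_ns=100.0):
    """For an exponential pulse, charge collected after delay_ns / total charge."""
    return math.exp(-delay_ns / decay_ns)

fast, slow = 40.0, 60.0  # assumed effective decay times (ns) for the two depths
print(dci_fraction(fast), dci_fraction(slow))
# A threshold placed between the two fractions classifies interaction depth.
```

Because the discriminator only integrates late charge, the fast leading edge used for timing is left largely untouched, consistent with the small timing degradation reported.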

  14. Synchronization Design and Error Analysis of Near-Infrared Cameras in Surgical Navigation.

    PubMed

    Cai, Ken; Yang, Rongqian; Chen, Huazhou; Huang, Yizhou; Wen, Xiaoyan; Huang, Wenhua; Ou, Shanxing

    2016-01-01

    The accuracy of optical tracking systems is important to scientists. With the improvements reported in this regard, such systems have been applied to an increasing number of operations. To enhance the accuracy of these systems further and to reduce the effect of synchronization and visual field errors, this study introduces a field-programmable gate array (FPGA)-based synchronization control method, a method for measuring synchronous errors, and an error distribution map over the field of view. Synchronization control maximizes the parallel processing capability of the FPGA, and synchronous error measurement can effectively detect the errors caused by synchronization in an optical tracking system. The distribution of positioning errors across the field of view can be read from the aforementioned error distribution map. Therefore, doctors can perform surgeries in areas with few positioning errors, and the accuracy of optical tracking systems is considerably improved. The system is analyzed and validated in this study through experiments that involve the proposed methods, which can eliminate positioning errors attributed to asynchronous cameras and different fields of view.

  15. Automated extraction of subdural electrode grid from post-implant MRI scans for epilepsy surgery

    NASA Astrophysics Data System (ADS)

    Pozdin, Maksym A.; Skrinjar, Oskar

    2005-04-01

    This paper presents an automated algorithm for extraction of the Subdural Electrode Grid (SEG) from post-implant MRI scans for epilepsy surgery. Post-implant MRI scans are corrupted by image artifacts caused by the implanted electrodes. The artifacts appear as dark spherical voids, and given that the cerebrospinal fluid is also dark in T1-weighted MRI scans, it is a difficult and time-consuming task to manually locate the SEG position relative to brain structures of interest. The proposed algorithm reliably and accurately extracts the SEG from a post-implant MRI scan, i.e. finds its shape and position relative to brain structures of interest. The algorithm was validated against manually determined electrode locations, and the average error was 1.6 mm for the three tested subjects.

  16. Adaptation of an articulated fetal skeleton model to three-dimensional fetal image data

    NASA Astrophysics Data System (ADS)

    Klinder, Tobias; Wendland, Hannes; Wachter-Stehle, Irina; Roundhill, David; Lorenz, Cristian

    2015-03-01

    The automatic interpretation of three-dimensional fetal images poses specific challenges compared to other three-dimensional diagnostic data, especially since the orientation of the fetus in the uterus and the position of the extremities are highly variable. In this paper, we present a comprehensive articulated model of the fetal skeleton and the adaptation of the articulation for pose estimation in three-dimensional fetal images. The model is composed of rigid bodies whose articulations are represented as rigid body transformations. Given a set of target landmarks, the model constellation can be estimated by optimization of the pose parameters. Experiments are carried out on 3D fetal MRI data, yielding an average error per case of 12.03 ± 3.36 mm between target and estimated landmark positions.
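The core step of such pose estimation, fitting a rigid transformation to landmark pairs, has a closed form. The sketch below reduces the problem to 2-D (where the optimal rotation is an atan2 of summed cross and dot products) purely for illustration; the paper works in 3-D with articulated chains of such transformations.

```python
import math

def fit_rigid_2d(src, dst):
    """Return (theta, tx, ty) minimizing sum |R(theta)*s + t - d|^2."""
    n = len(src)
    csx = sum(x for x, _ in src) / n
    csy = sum(y for _, y in src) / n
    cdx = sum(x for x, _ in dst) / n
    cdy = sum(y for _, y in dst) / n
    num = den = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - csx, sy - csy   # source point, centred
        bx, by = dx - cdx, dy - cdy   # target point, centred
        num += ax * by - ay * bx      # summed cross products
        den += ax * bx + ay * by      # summed dot products
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return theta, tx, ty

# Recover a 90-degree rotation plus a (1, 2) shift from three landmarks.
print(fit_rigid_2d([(0, 0), (1, 0), (0, 1)], [(1, 2), (1, 3), (0, 2)]))
```

The 3-D analogue replaces the atan2 step with an SVD of the cross-covariance matrix (the Kabsch algorithm), applied per rigid body subject to the joint constraints.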

  17. Motion prediction in MRI-guided radiotherapy based on interleaved orthogonal cine-MRI

    NASA Astrophysics Data System (ADS)

    Seregni, M.; Paganelli, C.; Lee, D.; Greer, P. B.; Baroni, G.; Keall, P. J.; Riboldi, M.

    2016-01-01

    In-room cine-MRI guidance can provide non-invasive target localization during radiotherapy treatment. However, in order to cope with finite imaging frequency and system latencies between target localization and dose delivery, tumour motion prediction is required. This work proposes a framework for motion prediction dedicated to cine-MRI guidance, aiming at quantifying the geometric uncertainties introduced by this process for both tumour tracking and beam gating. The tumour position, identified through scale invariant features detected in cine-MRI slices, is estimated at high frequency (25 Hz) using three independent predictors, one for each anatomical coordinate. Linear extrapolation, auto-regressive and support vector machine algorithms are compared against systems that use no prediction or surrogate-based motion estimation. Geometric uncertainties are reported as a function of image acquisition period and system latency. Average results show that the tracking error RMS can be reduced to a [0.2; 1.2] mm range, for acquisition periods between 250 and 750 ms and system latencies between 50 and 300 ms. Except for the linear extrapolator, tracking and gating prediction errors were, on average, lower than those measured for surrogate-based motion estimation. This finding suggests that cine-MRI guidance, combined with appropriate prediction algorithms, could substantially reduce geometric uncertainties in motion-compensated treatments.
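The simplest of the compared predictors, linear extrapolation, can be shown in a few lines: estimate velocity from the last two sampled positions and extrapolate ahead by the system latency. The period and latency values below are illustrative, chosen within the ranges studied.

```python
def linear_extrapolate(p_prev, p_last, period_s, latency_s):
    """Predict a 1-D position latency_s ahead from two samples period_s apart."""
    velocity = (p_last - p_prev) / period_s
    return p_last + velocity * latency_s

# Positions (mm) along one anatomical axis sampled every 0.25 s,
# predicted 0.3 s ahead to compensate the delivery latency.
print(linear_extrapolate(10.0, 11.0, period_s=0.25, latency_s=0.3))  # 12.2
```

Its weakness, visible even here, is noise amplification: any localization error in the two samples is scaled by latency/period, which is one reason the auto-regressive and SVM predictors outperformed it on average.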

  18. Spatial and temporal variability of fine particle composition and source types in five cities of Connecticut and Massachusetts

    PubMed Central

    Lee, Hyung Joo; Gent, Janneane F.; Leaderer, Brian P.; Koutrakis, Petros

    2011-01-01

    To protect public health from PM2.5 air pollution, it is critical to identify the source types of PM2.5 mass and chemical components associated with higher risks of adverse health outcomes. Source apportionment modeling using Positive Matrix Factorization (PMF) was used to identify PM2.5 source types and quantify the source contributions to PM2.5 in five cities of Connecticut and Massachusetts. Spatial and temporal variability of PM2.5 mass, components and source contributions were investigated. PMF analysis identified five source types: regional pollution as traced by sulfur, motor vehicle, road dust, oil combustion and sea salt. The sulfur-related regional pollution and traffic source type were major contributors to PM2.5. Due to sparse ground-level PM2.5 monitoring sites, current epidemiological studies are susceptible to exposure measurement errors. The higher correlations in concentrations and source contributions between different locations suggest less spatial variability, and therefore smaller exposure measurement error. When concentrations and/or contributions were compared to regional averages, correlations were generally higher than between-site correlations. This suggests that for assigning exposures in health effects studies, using regional average concentrations or contributions from several PM2.5 monitors is more reliable than using data from the nearest central monitor. PMID:21429560

  19. Spatial autocorrelation among automated geocoding errors and its effects on testing for disease clustering

    PubMed Central

    Li, Jie; Fang, Xiangming

    2010-01-01

    Automated geocoding of patient addresses is an important data assimilation component of many spatial epidemiologic studies. Inevitably, the geocoding process results in positional errors. Positional errors incurred by automated geocoding tend to reduce the power of tests for disease clustering and otherwise affect spatial analytic methods. However, there are reasons to believe that the errors may often be positively spatially correlated and that this may mitigate their deleterious effects on spatial analyses. In this article, we demonstrate explicitly that the positional errors associated with automated geocoding of a dataset of more than 6000 addresses in Carroll County, Iowa are spatially autocorrelated. Furthermore, through two simulation studies of disease processes, including one in which the disease process is overlain upon the Carroll County addresses, we show that spatial autocorrelation among geocoding errors maintains the power of two tests for disease clustering at a level higher than that which would occur if the errors were independent. Implications of these results for cluster detection, privacy protection, and measurement-error modeling of geographic health data are discussed. PMID:20087879

  20. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints

    PubMed Central

    Wang, Shiyao; Deng, Zhidong; Yin, Gang

    2016-01-01

    A high-performance differential global positioning system (GPS) receiver with real time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to the multipath effect but also unable to effectively fulfill precise error correction over a wide range of driving areas. This paper proposes an accurate GPS–inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations with different structural parameters to build maximum likelihood models of raw navigation data. Second, grid constraints and spatial consensus checks on all predictive results and current measurements are applied to remove outliers. Navigation data that satisfy a stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of the multimodal data fusion can be pre-specified by the grid size. Finally, we performed extensive field tests in a variety of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses the existing state of the art on the same dataset, and the new data fusion method is practically applied in our driverless car. PMID:26927108
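The outlier-rejection idea, comparing each new fix against a model prediction and gating on the innovation, can be sketched with a much simpler predictor than the paper's ARMA bank. The AR(2) coefficients below (constant-velocity extrapolation) and the 1 m gate are illustrative assumptions standing in for the fitted models and grid-size threshold.

```python
def ar2_predict(history, a1=2.0, a2=-1.0):
    """AR(2) prediction; defaults reduce to constant-velocity extrapolation."""
    return a1 * history[-1] + a2 * history[-2]

def accept_fix(history, measurement, gate_m=1.0):
    """Keep a GPS fix only if its innovation is within the gate."""
    return abs(measurement - ar2_predict(history)) <= gate_m

track = [0.0, 1.0, 2.0, 3.0]   # smooth 1-D positions (m) at fixed intervals
print(accept_fix(track, 4.2))  # small innovation -> accepted
print(accept_fix(track, 9.0))  # multipath-style jump -> rejected
```

In the paper, several such predictors with different structures vote, and only measurements consistent with the consensus and the occupancy grid are fused, which is what suppresses the multipath jumps.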

  2. Consequences of Secondary Calibrations on Divergence Time Estimates.

    PubMed

    Schenk, John J

    2016-01-01

    Secondary calibrations (calibrations based on the results of previous molecular dating studies) are commonly applied in divergence time analyses in groups that lack fossil data; however, the consequences of applying secondary calibrations in a relaxed-clock approach are not fully understood. I tested whether applying the posterior estimate from a primary study as a prior distribution in a secondary study results in consistent age and uncertainty estimates. I compared age estimates from simulations with 100 randomly replicated secondary trees. On average, the 95% credible intervals of node ages for secondary estimates were significantly younger and narrower than primary estimates. The primary and secondary age estimates were significantly different in 97% of the replicates after Bonferroni corrections. Greater error in magnitude was associated with deeper than shallower nodes, but the opposite was found when standardized by median node age, and a significant positive relationship was determined between the number of tips/age of secondary trees and the total amount of error. When two secondary calibrated nodes were analyzed, estimates remained significantly different, and although the minimum and median estimates were associated with less error, maximum age estimates and credible interval widths had greater error. The shape of the prior also influenced error, in which applying a normal, rather than uniform, prior distribution resulted in greater error. Secondary calibrations, in summary, lead to a false impression of precision and the distribution of age estimates shift away from those that would be inferred by the primary analysis. These results suggest that secondary calibrations should not be applied as the only source of calibration in divergence time analyses that test time-dependent hypotheses until the additional error associated with secondary calibrations is more properly modeled to take into account increased uncertainty in age estimates.

  3. Comparing interval estimates for small sample ordinal CFA models

    PubMed Central

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, three factor correlations, and two factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively biased than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. 
The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research. PMID:26579002
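
Coverage of interval estimates, as examined above, is straightforward to compute empirically once replicate intervals are available. A minimal sketch (hypothetical helper, not the study's code):

```python
def empirical_coverage(intervals, true_value):
    """Fraction of interval estimates (lo, hi) that contain the true parameter.
    For well-calibrated 95% intervals this should be close to 0.95."""
    hits = sum(1 for lo, hi in intervals if lo <= true_value <= hi)
    return hits / len(intervals)

# Three replicate intervals for a true factor correlation of 0.30
print(empirical_coverage([(0.1, 0.5), (0.35, 0.6), (0.2, 0.4)], 0.30))
```

Undercoverage, as reported for the non-Bayesian methods, means this fraction falls below the nominal confidence level.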

  4. Comparing interval estimates for small sample ordinal CFA models.

    PubMed

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, three factor correlations, and two factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively biased than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. 
The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research.

  5. Cost effectiveness of the stream-gaging program in South Carolina

    USGS Publications Warehouse

    Barker, A.C.; Wright, B.C.; Bennett, C.S.

    1985-01-01

    The cost effectiveness of the stream-gaging program in South Carolina was documented for the 1983 water year. Data uses and funding sources were identified for the 76 continuous stream gages currently being operated in South Carolina. The budget of $422,200 for collecting and analyzing streamflow data also includes the cost of operating stage-only and crest-stage stations. The streamflow records for one stream gage can be determined by alternative, less costly methods, and that gage should be discontinued. The remaining 75 stations should be maintained in the program for the foreseeable future. The current policy for the operation of the 75 stations, including the crest-stage and stage-only stations, would require a budget of $417,200/yr. The average standard error of estimation of streamflow records is 16.9% for the present budget with missing record included. However, the standard error of estimation would decrease to 8.5% if complete streamflow records could be obtained. It was shown that the average standard error of estimation of 16.9% could be obtained at the 75 sites with a budget of approximately $395,000 if the gaging resources were redistributed among the gages. A minimum budget of $383,500 is required to operate the program; a budget less than this does not permit proper service and maintenance of the gages and recorders. At the minimum budget, the average standard error is 18.6%. The maximum budget analyzed was $850,000, which resulted in an average standard error of 7.6%. (Author's abstract)

  6. Chlorine isotope effects from isotope ratio mass spectrometry suggest intramolecular C-Cl bond competition in trichloroethene (TCE) reductive dehalogenation.

    PubMed

    Cretnik, Stefan; Bernstein, Anat; Shouakar-Stash, Orfan; Löffler, Frank; Elsner, Martin

    2014-05-20

    Chlorinated ethenes are prevalent groundwater contaminants. To better constrain (bio)chemical reaction mechanisms of reductive dechlorination, the position-specificity of reductive trichloroethene (TCE) dehalogenation was investigated. Selective biotransformation reactions (i) of tetrachloroethene (PCE) to TCE in cultures of Desulfitobacterium sp. strain Viet1; and (ii) of TCE to cis-1,2-dichloroethene (cis-DCE) in cultures of Geobacter lovleyi strain SZ were investigated. Compound-average carbon isotope effects were -19.0‰ ± 0.9‰ (PCE) and -12.2‰ ± 1.0‰ (TCE) (95% confidence intervals). Using instrumental advances in chlorine isotope analysis by continuous flow isotope ratio mass spectrometry, compound-average chlorine isotope effects were measured for PCE (-5.0‰ ± 0.1‰) and TCE (-3.6‰ ± 0.2‰). In addition, position-specific kinetic chlorine isotope effects were determined from fits of reactant and product isotope ratios. In PCE biodegradation, primary chlorine isotope effects were substantially larger (by -16.3‰ ± 1.4‰ (standard error)) than secondary. In TCE biodegradation, in contrast, the product cis-DCE reflected an average isotope effect of -2.4‰ ± 0.3‰ and the product chloride an isotope effect of -6.5‰ ± 2.5‰, in the original positions of TCE from which the products were formed (95% confidence intervals). A greater difference would be expected for a position-specific reaction (chloride would exclusively reflect a primary isotope effect). These results therefore suggest that both vicinal chlorine substituents of TCE were reactive (intramolecular competition). This finding puts new constraints on mechanistic scenarios and favours either nucleophilic addition by Co(I) or single electron transfer as reductive dehalogenation mechanisms.

  7. Multi-GNSS signal-in-space range error assessment - Methodology and results

    NASA Astrophysics Data System (ADS)

    Montenbruck, Oliver; Steigenberger, Peter; Hauschild, André

    2018-06-01

    The positioning accuracy of global and regional navigation satellite systems (GNSS/RNSS) depends on a variety of influence factors. For constellation-specific performance analyses it has become common practice to separate a geometry-related quality factor (the dilution of precision, DOP) from the measurement and modeling errors of the individual ranging measurements (known as user equivalent range error, UERE). The latter is further divided into user equipment errors and contributions related to the space and control segment. The present study reviews the fundamental concepts and underlying assumptions of signal-in-space range error (SISRE) analyses and presents a harmonized framework for multi-GNSS performance monitoring based on the comparison of broadcast and precise ephemerides. The implications of inconsistent geometric reference points, non-common time systems, and signal-specific range biases are analyzed, and strategies for coping with these issues in the definition and computation of SIS range errors are developed. The presented concepts are, furthermore, applied to current navigation satellite systems, and representative results are presented along with a discussion of constellation-specific problems in their determination. Based on data for the January to December 2017 time frame, representative global average root-mean-square (RMS) SISRE values of 0.2 m, 0.6 m, 1 m, and 2 m are obtained for Galileo, GPS, BeiDou-2, and GLONASS, respectively. Roughly two times larger values apply for the corresponding 95th-percentile values. Overall, the study contributes to a better understanding and harmonization of multi-GNSS SISRE analyses and their use as key performance indicators for the various constellations.
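
The per-epoch SISRE statistic described above weights the orbit error components by how strongly they project onto the line of sight. The sketch below is an illustrative simplification, not the study's processing chain: the projection weights (w_r ≈ 0.98 for the radial component, w_ac ≈ 0.14 for along-track/cross-track) are GPS-like assumptions and differ per constellation:

```python
import math

def sisre(dr, da, dc, dt_m, w_r=0.98, w_ac=0.14):
    """Instantaneous signal-in-space range error (m) from broadcast-minus-precise
    orbit differences (radial dr, along-track da, cross-track dc; metres) and the
    clock difference dt_m (already scaled to metres). The projection weights are
    GPS-like assumptions; they depend on orbit altitude and differ per system."""
    return math.sqrt((w_r * dr - dt_m) ** 2 + w_ac ** 2 * (da ** 2 + dc ** 2))

def rms(values):
    """Root mean square over a series of per-epoch SISRE values."""
    return math.sqrt(sum(v * v for v in values) / len(values))

# Illustrative per-epoch broadcast-minus-precise differences for one satellite (m)
epochs = [(0.3, 1.2, -0.8, 0.25), (-0.2, 0.9, 0.5, 0.10), (0.1, -1.5, 0.3, -0.05)]
print(rms([sisre(*e) for e in epochs]))
```

Global-average RMS values such as the 0.2 m to 2 m figures quoted above would come from accumulating this statistic over all satellites and epochs of a constellation.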

  8. Unforced errors and error reduction in tennis

    PubMed Central

    Brody, H

    2006-01-01

    Only at the highest level of tennis is the number of winners comparable to the number of unforced errors. As the average player loses many more points due to unforced errors than due to winners by an opponent, if the rate of unforced errors can be reduced, it should lead to an increase in points won. This article shows how players can improve their game by understanding and applying the laws of physics to reduce the number of unforced errors. PMID:16632568

  9. An IMU-Aided Body-Shadowing Error Compensation Method for Indoor Bluetooth Positioning

    PubMed Central

    Deng, Zhongliang

    2018-01-01

    Research on indoor positioning technologies has recently become a hotspot because of the huge social and economic potential of indoor location-based services (ILBS). Wireless positioning signals have a considerable attenuation in received signal strength (RSS) when transmitting through human bodies, which would cause significant ranging and positioning errors in RSS-based systems. This paper mainly focuses on the body-shadowing impairment of RSS-based ranging and positioning, and derives a mathematical expression of the relation between the body-shadowing effect and the positioning error. In addition, an inertial measurement unit-aided (IMU-aided) body-shadowing detection strategy is designed, and an error compensation model is established to mitigate the effect of body-shadowing. A Bluetooth positioning algorithm with body-shadowing error compensation (BP-BEC) is then proposed to improve both the positioning accuracy and the robustness in indoor body-shadowing environments. Experiments are conducted in two indoor test beds, and the performance of both the BP-BEC algorithm and the algorithms without body-shadowing error compensation (named no-BEC) is evaluated. The results show that the BP-BEC outperforms the no-BEC by about 60.1% and 73.6% in terms of positioning accuracy and robustness, respectively. Moreover, the execution time of the BP-BEC algorithm is also evaluated, and results show that the convergence speed of the proposed algorithm has an insignificant effect on real-time localization. PMID:29361718

  10. An IMU-Aided Body-Shadowing Error Compensation Method for Indoor Bluetooth Positioning.

    PubMed

    Deng, Zhongliang; Fu, Xiao; Wang, Hanhua

    2018-01-20

    Research on indoor positioning technologies has recently become a hotspot because of the huge social and economic potential of indoor location-based services (ILBS). Wireless positioning signals have a considerable attenuation in received signal strength (RSS) when transmitting through human bodies, which would cause significant ranging and positioning errors in RSS-based systems. This paper mainly focuses on the body-shadowing impairment of RSS-based ranging and positioning, and derives a mathematical expression of the relation between the body-shadowing effect and the positioning error. In addition, an inertial measurement unit-aided (IMU-aided) body-shadowing detection strategy is designed, and an error compensation model is established to mitigate the effect of body-shadowing. A Bluetooth positioning algorithm with body-shadowing error compensation (BP-BEC) is then proposed to improve both the positioning accuracy and the robustness in indoor body-shadowing environments. Experiments are conducted in two indoor test beds, and the performance of both the BP-BEC algorithm and the algorithms without body-shadowing error compensation (named no-BEC) is evaluated. The results show that the BP-BEC outperforms the no-BEC by about 60.1% and 73.6% in terms of positioning accuracy and robustness, respectively. Moreover, the execution time of the BP-BEC algorithm is also evaluated, and results show that the convergence speed of the proposed algorithm has an insignificant effect on real-time localization.
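
RSS-based ranging of the kind that body shadowing corrupts is typically built on a log-distance path-loss model. The sketch below is illustrative only: the reference power, path-loss exponent, and body attenuation values are assumptions, and the paper's actual compensation model is derived differently:

```python
import math

def rss_to_distance(rssi_dbm, rssi0_dbm=-45.0, n=2.5, d0=1.0):
    """Invert the log-distance path-loss model
    RSSI(d) = RSSI(d0) - 10 n log10(d / d0) to estimate range in metres.
    Reference power and exponent are placeholder values."""
    return d0 * 10 ** ((rssi0_dbm - rssi_dbm) / (10.0 * n))

def compensated_distance(rssi_dbm, body_shadowed, body_loss_db=8.0, **kw):
    """If an IMU-based heuristic flags body shadowing, add the assumed body
    attenuation back onto the RSS before ranging (8 dB is a placeholder)."""
    if body_shadowed:
        rssi_dbm += body_loss_db
    return rss_to_distance(rssi_dbm, **kw)
```

Without compensation, the extra body attenuation makes the receiver look farther away than it is; adding the assumed loss back shrinks the range estimate accordingly.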

  11. Cost effectiveness of the US Geological Survey stream-gaging program in Alabama

    USGS Publications Warehouse

    Jeffcoat, H.H.

    1987-01-01

    A study of the cost effectiveness of the stream gaging program in Alabama identified data uses and funding sources for 72 surface water stations (including dam stations, slope stations, and continuous-velocity stations) operated by the U.S. Geological Survey in Alabama with a budget of $393,600. Of these, 58 gaging stations were used in all phases of the analysis at a funding level of $328,380. For the current policy of operation of the 58-station program, the average standard error of estimation of instantaneous discharge is 29.3%. This overall level of accuracy can be maintained with a budget of $319,800 by optimizing routes and implementing some policy changes. The maximum budget considered in the analysis was $361,200, which gave an average standard error of estimation of 20.6%. The minimum budget considered was $299,360, with an average standard error of estimation of 36.5%. The study indicates that a major source of error in the stream gaging records is lost or missing data that are the result of streamside equipment failure. If perfect equipment were available, the standard error in estimating instantaneous discharge under the current program and budget could be reduced to 18.6%. This can also be interpreted to mean that the streamflow data records have a standard error of this magnitude during times when the equipment is operating properly. (Author's abstract)

  12. Long-term changes in retinal contrast sensitivity in chicks from frosted occluders and drugs: relations to myopia?

    PubMed

    Diether, S; Schaeffel, F

    1999-07-01

    Experiments in animal models have shown that the retina analyzes the image to identify the position of the plane of focus and fine-tunes the growth of the underlying sclera. It is fundamental to the understanding of the development of refractive errors to know which image features are processed. Since the position of the image plane fluctuates continuously with accommodative status and viewing distance, a meaningful control of refractive development can only occur by an averaging procedure with a long time constant. As a candidate for a retinal signal for enhanced eye growth and myopia we propose the level of contrast adaptation, which varies with the average amount of defocus. Using a behavioural paradigm, we have found in chickens (1) that contrast adaptation (CA, here referred to as an increase in contrast sensitivity) occurs at low spatial frequencies (0.2 cyc/deg) already after 1.5 h of wearing frosted goggles which cause deprivation myopia, (2) that CA also occurs with negative lenses (-7.4D) and positive lenses (+6.9D) after 1.5 h, at least if accommodation is paralyzed, and (3) that CA occurs at a retinal level or has, at least, a retinal component. Furthermore, we have studied the effects of atropine and reserpine, which both suppress myopia development, on CA. Quisqualate, which causes retinal degeneration but leaves emmetropization functional, was also tested. We found that both atropine and reserpine increase contrast sensitivity to a level where no further CA could be induced by frosted goggles. Quisqualate increased only the variability of refractive development and of contrast sensitivity. Taken together, CA occurring during extended periods of defocus is a possible candidate for a retinal error signal for myopia development. 
However, the situation is complicated by the fact that there must be a second image processing mode generating a powerful inhibitory growth signal if the image is in front of the retina, even with poor images (Diether, S., & Schaeffel, F. (1999).

  13. A novel onset detection technique for brain-computer interfaces using sound-production related cognitive tasks in simulated-online system

    NASA Astrophysics Data System (ADS)

    Song, YoungJae; Sepulveda, Francisco

    2017-02-01

    Objective. Self-paced EEG-based BCIs (SP-BCIs) have traditionally been avoided due to two sources of uncertainty: (1) precisely when an intentional command is sent by the brain, i.e., the command onset detection problem, and (2) how different the intentional command is when compared to non-specific (or idle) states. Performance evaluation is also a problem, as there are no suitable standard metrics available. In this paper we attempted to tackle these issues. Approach. Self-paced covert sound-production cognitive tasks (i.e., high pitch and siren-like sounds) were used to distinguish between intentional commands (IC) and idle states. The IC states were chosen for their ease of execution and negligible overlap with common cognitive states. Band power and a digital wavelet transform were used for feature extraction, and the Davies-Bouldin index was used for feature selection. Classification was performed using linear discriminant analysis. Main results. Performance was evaluated under offline and simulated-online conditions. For the latter, a performance score called true-false-positive (TFP) rate, ranging from 0 (poor) to 100 (perfect), was created to take into account both classification performance and onset timing errors. Averaging the results from the best-performing IC task for all seven participants, a 77.7% true-positive (TP) rate was achieved in offline testing. For simulated-online analysis the best IC average TFP score was 76.67% (87.61% TP rate, 4.05% false-positive rate). Significance. Results were promising when compared to previous IC onset detection studies using motor imagery, in which best TP rates were reported as 72.0% and 79.7%, and which, crucially, did not take timing errors into account. Moreover, based on our literature review, there is no previous covert sound-production onset detection system for SP-BCIs. Results showed that the proposed onset detection technique and TFP performance metric have good potential for use in SP-BCIs.
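
The TP and FP rates quoted above are standard confusion-matrix quantities; the TFP score additionally penalizes onset timing errors, but its exact formula is not given in the abstract, so only the basic rates are sketched here (hypothetical helper):

```python
def tpr_tnr(y_true, y_pred):
    """True-positive and true-negative rates from binary event labels
    (1 = intentional-command window, 0 = idle)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

print(tpr_tnr([1, 1, 0, 0], [1, 0, 0, 1]))  # (0.5, 0.5)
```

A timing-aware score would further discount true positives whose detected onset falls outside a tolerance window around the labelled onset.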

  14. Variations of pupil centration and their effects on video eye tracking.

    PubMed

    Wildenmann, Ulrich; Schaeffel, Frank

    2013-11-01

    To evaluate measurement errors that are introduced in video eye tracking when pupil centration changes with pupil size. Software was developed under Visual C++ to track both pupil centre and corneal centre at 87 Hz sampling rate at baseline pupil sizes of 4.75 mm (800 lux room illuminance) and while pupil constrictions were elicited by a flashlight. Corneal centres were determined by a circle fit through the pixels detected at the corneal margin by an edge detection algorithm. Standard deviations for repeated measurements were ± 0.04 mm for horizontal pupil centre position and ± 0.04 mm for horizontal corneal centre positions and ±0.03 mm for vertical pupil centre position and ± 0.05 mm for vertical corneal centre position. Ten subjects were tested (five female, five male, age 25-58 years). At 4 mm pupil sizes, the pupils were nasally decentred relative to the corneal centre by 0.18 ± 0.19 mm in the right eyes and -0.14 ± 0.22 mm in the left eyes. Vertical decentrations were 0.30 ± 0.30 mm and 0.27 ± 0.29 mm, respectively, always in a superior direction. At baseline pupil sizes (the natural pupil sizes at 800 lux) of 4.75 ± 0.52 mm, the decentrations became less (right and left eyes: horizontal 0.17 ± 0.20 mm and -0.12 ± 0.22 mm, and vertical 0.26 ± 0.28 mm and 0.20 ± 0.25 mm). While pupil decentration changed minimally in eight of the subjects, it shifted considerably in two others. Averaged over all subjects, the shift of the pupil centre position per millimetre pupil constriction was not significant (right and left eyes: -0.03 ± 0.07 mm and 0.03 ± 0.04 mm nasally per mm pupil size change, respectively, and -0.04 ± 0.06 mm and -0.05 ± 0.12 mm superiorly). Direction and magnitude of the changes in pupil centration could not be predicted from the initial decentration at baseline pupil sizes. In line with data in the literature, the pupil centre was significantly decentred relative to the corneal centre in the nasal and superior direction. 
Pupil decentration changed significantly with pupil size by 0.05 mm on average for 1 mm of constriction. Assuming a Hirschberg ratio of 12° mm(-1), a shift of 0.05 mm is equivalent to a measurement error in a Purkinje image-based eye tracker of 0.6°. However, the induced measurement error could also exceed 1.5° in some subjects for only a 1 mm change in pupil size. © 2013 The Authors Ophthalmic & Physiological Optics © 2013 The College of Optometrists.
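
The conversion behind the 0.6° figure is a single multiplication by the assumed Hirschberg ratio; as a worked check:

```python
HIRSCHBERG_RATIO_DEG_PER_MM = 12.0  # ratio assumed in the abstract

def tracking_error_deg(pupil_shift_mm, ratio=HIRSCHBERG_RATIO_DEG_PER_MM):
    """Angular eye-tracking error implied by a pupil-centre shift (mm)."""
    return pupil_shift_mm * ratio

print(tracking_error_deg(0.05))  # ~0.6 deg, matching the abstract's figure
```

By the same conversion, the >1.5° errors seen in some subjects correspond to pupil-centre shifts above about 0.125 mm per millimetre of constriction.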

  15. A NEW METHOD TO QUANTIFY AND REDUCE THE NET PROJECTION ERROR IN WHOLE-SOLAR-ACTIVE-REGION PARAMETERS MEASURED FROM VECTOR MAGNETOGRAMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Falconer, David A.; Tiwari, Sanjiv K.; Moore, Ronald L.

    Projection errors limit the use of vector magnetograms of active regions (ARs) far from the disk center. In this Letter, for ARs observed up to 60° from the disk center, we demonstrate a method for measuring and reducing the projection error in the magnitude of any whole-AR parameter that is derived from a vector magnetogram that has been deprojected to the disk center. The method assumes that the center-to-limb curve of the average of the parameter’s absolute values, measured from the disk passage of a large number of ARs and normalized to each AR’s absolute value of the parameter at central meridian, gives the average fractional projection error at each radial distance from the disk center. To demonstrate the method, we use a large set of large-flux ARs and apply the method to a whole-AR parameter that is among the simplest to measure: whole-AR magnetic flux. We measure 30,845 SDO/Helioseismic and Magnetic Imager vector magnetograms covering the disk passage of 272 large-flux ARs, each having whole-AR flux >10^22 Mx. We obtain the center-to-limb radial-distance run of the average projection error in measured whole-AR flux from a Chebyshev fit to the radial-distance plot of the 30,845 normalized measured values. The average projection error in the measured whole-AR flux of an AR at a given radial distance is removed by multiplying the measured flux by the correction factor given by the fit. The correction is important for both the study of the evolution of ARs and for improving the accuracy of forecasts of an AR’s major flare/coronal mass ejection productivity.
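
The correction pipeline above (fit the center-to-limb curve of normalized measurements, then divide by the fitted fraction) can be sketched with NumPy's Chebyshev tools. The data below are synthetic and the quadratic "true" curve is an assumption for illustration; only the fit-and-correct mechanics mirror the Letter:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Synthetic stand-in for the 30,845 normalized whole-AR flux measurements:
# radial distance from disk center vs flux normalized to the central-meridian value.
rng = np.random.default_rng(0)
r = rng.uniform(0.0, 0.87, 3000)          # out to ~60 deg from disk center
true_curve = 1.0 - 0.35 * r**2            # assumed shape of the average projection error
norm_flux = true_curve + rng.normal(0.0, 0.05, r.size)

coef = C.chebfit(r, norm_flux, 4)         # center-to-limb Chebyshev fit

def corrected_flux(measured_flux, radial_distance):
    """Remove the average projection error by dividing the measured value
    by the fitted fractional value at that radial distance."""
    return measured_flux / C.chebval(radial_distance, coef)
```

A measurement lying exactly on the average curve is corrected back to its central-meridian value; scatter about the curve remains as residual error.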

  16. Cost-effectiveness of the stream-gaging program in Kentucky

    USGS Publications Warehouse

    Ruhl, K.J.

    1989-01-01

    This report documents the results of a study of the cost-effectiveness of the stream-gaging program in Kentucky. The total surface-water program includes 97 daily-discharge stations, 12 stage-only stations, and 35 crest-stage stations and is operated on a budget of $950,700. One station used for research lacks an adequate source of funding and should be discontinued when the research ends. Most stations in the network are multiple-use, with 65 stations operated for the purpose of defining hydrologic systems, 48 for project operation, 47 for definition of regional hydrology, and 43 for hydrologic forecasting purposes. Eighteen stations support water quality monitoring activities, one station is used for planning and design, and one station is used for research. The average standard error of estimation of streamflow records was determined only for stations in the Louisville Subdistrict. Under current operating policy, with a budget of $223,500, the average standard error of estimation is 28.5%. Altering the travel routes and measurement frequency to reduce the amount of lost stage record would allow a slight decrease in standard error to 26.9%. The results indicate that the collection of streamflow records in the Louisville Subdistrict is cost effective in its present mode of operation. In the Louisville Subdistrict, a minimum budget of $214,200 is required to operate the current network at an average standard error of 32.7%. A budget less than this does not permit proper service and maintenance of the gages and recorders. The maximum budget analyzed was $268,200, which would result in an average standard error of 16.9%, indicating that if the budget were increased by 20%, the percent standard error would be reduced by 40%. (USGS)

  17. Error framing effects on performance: cognitive, motivational, and affective pathways.

    PubMed

    Steele-Johnson, Debra; Kalinoski, Zachary T

    2014-01-01

    Our purpose was to examine whether positive error framing, that is, making errors salient and cuing individuals to see errors as useful, can benefit learning when task exploration is constrained. Recent research has demonstrated the benefits of a newer approach to training, that is, error management training, that includes the opportunity to actively explore the task and framing errors as beneficial to learning complex tasks (Keith & Frese, 2008). Other research has highlighted the important role of errors in on-the-job learning in complex domains (Hutchins, 1995). Participants (N = 168) from a large undergraduate university performed a class scheduling task. Results provided support for a hypothesized path model in which error framing influenced cognitive, motivational, and affective factors which in turn differentially affected performance quantity and quality. Within this model, error framing had significant direct effects on metacognition and self-efficacy. Our results suggest that positive error framing can have beneficial effects even when tasks cannot be structured to support extensive exploration. Whereas future research can expand our understanding of error framing effects on outcomes, results from the current study suggest that positive error framing can facilitate learning from errors in real-time performance of tasks.

  18. Cost effectiveness of the U.S. Geological Survey's stream-gaging program in Wisconsin

    USGS Publications Warehouse

    Walker, J.F.; Osen, L.L.; Hughes, P.E.

    1987-01-01

    A minimum budget of $510,000 is required to operate the program; a budget less than this does not permit proper service and maintenance of the gaging stations. At this minimum budget, the theoretical average standard error of instantaneous discharge is 14.4%. The maximum budget analyzed was $650,000 and resulted in an average standard error of instantaneous discharge of 7.2%. 

  19. Topographic analysis of individual activation patterns in medial frontal cortex in schizophrenia

    PubMed Central

    Stern, Emily R.; Welsh, Robert C.; Fitzgerald, Kate D.; Taylor, Stephan F.

    2009-01-01

    Individual variability in the location of neural activations poses a unique problem for neuroimaging studies employing group averaging techniques to investigate the neural bases of cognitive and emotional functions. This may be especially challenging for studies examining patient groups, which often have limited sample sizes and increased intersubject variability. In particular, medial frontal cortex (MFC) dysfunction is thought to underlie performance monitoring dysfunction among patients with schizophrenia, yet previous studies using group averaging to compare schizophrenic patients to controls have yielded conflicting results. To examine individual activations in MFC associated with two aspects of performance monitoring, interference and error processing, functional magnetic resonance imaging (fMRI) data were acquired while 17 patients with schizophrenia and 21 healthy controls performed an event-related version of the multi-source interference task. Comparisons of averaged data revealed few differences between the groups. By contrast, topographic analysis of individual activations for errors showed that control subjects exhibited activations spanning both posterior and anterior regions of MFC while patients primarily activated posterior MFC, possibly reflecting an impaired emotional response to errors in schizophrenia. This discrepancy between topographic and group-averaged results may be due to the significant dispersion among individual activations, particularly among healthy controls, highlighting the importance of considering intersubject variability when interpreting the medial frontal response to error commission. PMID:18819107

  20. Using beta binomials to estimate classification uncertainty for ensemble models.

    PubMed

    Clark, Robert D; Liang, Wenkel; Lee, Adam C; Lawless, Michael S; Fraczkiewicz, Robert; Waldman, Marvin

    2014-01-01

    Quantitative structure-activity relationship (QSAR) models have enormous potential for reducing drug discovery and development costs as well as the need for animal testing. Great strides have been made in estimating their overall reliability, but to fully realize that potential, researchers and regulators need to know how confident they can be in individual predictions. Submodels in an ensemble model which have been trained on different subsets of a shared training pool represent multiple samples of the model space, and the degree of agreement among them contains information on the reliability of ensemble predictions. For artificial neural network ensembles (ANNEs) using two different methods for determining ensemble classification - one using vote tallies and the other averaging individual network outputs - we have found that the distribution of predictions across positive vote tallies can be reasonably well-modeled as a beta binomial distribution, as can the distribution of errors. Together, these two distributions can be used to estimate the probability that a given predictive classification will be in error. Large data sets comprised of logP, Ames mutagenicity, and CYP2D6 inhibition data are used to illustrate and validate the method. The distributions of predictions and errors for the training pool accurately predicted the distribution of predictions and errors for large external validation sets, even when the number of positive and negative examples in the training pool were not balanced. Moreover, the likelihood of a given compound being prospectively misclassified as a function of the degree of consensus between networks in the ensemble could in most cases be estimated accurately from the fitted beta binomial distributions for the training pool. 
Confidence in an individual predictive classification by an ensemble model can be accurately assessed by examining the distributions of predictions and errors as a function of the degree of agreement among the constituent submodels. Further, ensemble uncertainty estimation can often be improved by adjusting the voting or classification threshold based on the parameters of the error distribution. Finally, the profiles for models whose predictive uncertainty estimates are not reliable provide clues to that effect without the need for comparison to an external test set.
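
    The vote-tally mechanism described above can be sketched numerically: fit one beta binomial to the distribution of all predictions over vote tallies and another to the distribution of errors, then take their ratio at a given tally. This is a minimal sketch; the fitted parameters and pool counts below are illustrative assumptions, not values from the paper.

```python
from math import comb, lgamma, exp

def log_beta(a, b):
    # log B(a, b) computed via log-gamma for numerical stability
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def betabinom_pmf(k, n, a, b):
    # P(K = k) for a beta-binomial(n, a, b):
    # C(n, k) * B(k + a, n - k + b) / B(a, b)
    return comb(n, k) * exp(log_beta(k + a, n - k + b) - log_beta(a, b))

# Hypothetical values for a 10-network ensemble (illustrative only):
n = 10
pred_params = (2.0, 3.0)    # fit to the distribution of all predictions over tallies
err_params = (1.2, 6.0)     # fit to the distribution of errors over tallies
n_total, n_err = 5000, 400  # training-pool prediction and error counts

def p_error_given_votes(k):
    # estimated probability that a prediction with k positive votes is wrong
    return (n_err * betabinom_pmf(k, n, *err_params)) / (
        n_total * betabinom_pmf(k, n, *pred_params))

probs = [betabinom_pmf(k, n, *pred_params) for k in range(n + 1)]
```

    In this sketch the per-tally error probability is simply the expected number of errors at tally k divided by the expected number of predictions at tally k, both read off the fitted distributions.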

  1. WE-G-BRD-08: End-To-End Targeting Accuracy of the Gamma Knife for Trigeminal Neuralgia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brezovich, I; Wu, X; Duan, J

    2014-06-15

    Purpose: Current QA procedures verify accuracy of individual equipment parameters, but may not include CT and MRI localizers. This study uses an end-to-end approach to measure the overall targeting errors in individual patients previously treated for trigeminal neuralgia. Methods: The trigeminal nerve is simulated by a 3 mm long, 3.175 mm (1/8 inch) diameter MRI contrast-filled cavity embedded within a PMMA plastic capsule. The capsule is positioned within the head frame such that the cavity position matches the Gamma Knife coordinates of 10 previously treated patients. Gafchromic EBT2 film is placed at the center of the cavity in coronal and sagittal orientations. The films are marked with a pin prick to identify the cavity center. Treatments are planned for delivery with 4 mm collimators using MRI and CT scans acquired with the clinical localizer boxes and acquisition protocols. Coordinates of shots are chosen so that the cavity is centered within the 50% isodose volume. Following irradiation, the films are scanned and analyzed. Targeting errors are defined as the distance between the pin prick and the centroid of the 50% isodose line. Results: Averaged over 10 patient simulations, targeting errors along the x, y and z coordinates (patient left-to-right, posterior-anterior, head-to-foot) were, respectively, −0.060 ± 0.363, −0.350 ± 0.253, and 0.364 ± 0.191 mm when MRI was used for treatment planning. Planning according to CT exhibited generally smaller errors, namely 0.109 ± 0.167, −0.191 ± 0.144, and 0.211 ± 0.94 mm. The largest errors in MRI and CT planned treatments were, respectively, y = −0.761 and x = 0.428 mm. Conclusion: Unless patient motion or stronger MRI image distortion in actual treatments caused additional errors, all patients received the prescribed dose, i.e., the targeted section of the trigeminal nerve was contained within the 50% isodose surface in all cases.

  2. Servo control booster system for minimizing following error

    DOEpatents

    Wise, W.L.

    1979-07-26

    A closed-loop feedback-controlled servo system is disclosed which reduces command-to-response error to the system's position feedback resolution least increment, ΔS_R, on a continuous real-time basis, for all operational times of consequence and for all operating speeds. The servo system employs a second position feedback control loop on a by-exception basis, when the command-to-response error is greater than or equal to ΔS_R, to produce precise position correction signals. When the command-to-response error is less than ΔS_R, control automatically reverts to conventional control means as the second position feedback control loop is disconnected, becoming transparent to conventional servo control means. By operating the second unique position feedback control loop used herein at the appropriate clocking rate, command-to-response error may be reduced to the position feedback resolution least increment. The present system may be utilized in combination with a tachometer loop for increased stability.
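
    The by-exception switching idea can be illustrated with a toy discrete-time loop: a boosted correction gain engages only while the error is at or above the resolution increment. The gains, plant model, and resolution value below are all hypothetical, not from the patent.

```python
# Toy sketch of a by-exception second control loop (all values hypothetical).
DELTA_SR = 0.01          # position feedback resolution least increment, ΔS_R
K_PRIMARY, K_BOOST = 0.2, 0.8  # conventional and boosted loop gains

def track(command, steps=200):
    """Drive a first-order plant toward `command`; return the final error."""
    pos = 0.0
    for _ in range(steps):
        err = command - pos
        # second loop engages only "by exception", when |error| >= ΔS_R
        gain = K_BOOST if abs(err) >= DELTA_SR else K_PRIMARY
        pos += gain * err
    return command - pos

final_err = track(1.0)
```

    With convergent gains the error shrinks geometrically, so the residual error ends well below the resolution increment, mirroring the claimed behavior.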

  3. The Limits of Coding with Joint Constraints on Detected and Undetected Error Rates

    NASA Technical Reports Server (NTRS)

    Dolinar, Sam; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush

    2008-01-01

    We develop a remarkably tight upper bound on the performance of a parameterized family of bounded angle maximum-likelihood (BA-ML) incomplete decoders. The new bound for this class of incomplete decoders is calculated from the code's weight enumerator, and is an extension of Poltyrev-type bounds developed for complete ML decoders. This bound can also be applied to bound the average performance of random code ensembles in terms of an ensemble average weight enumerator. We also formulate conditions defining a parameterized family of optimal incomplete decoders, defined to minimize both the total codeword error probability and the undetected error probability for any fixed capability of the decoder to detect errors. We illustrate the gap between optimal and BA-ML incomplete decoding via simulation of a small code.

  4. False Positives in Multiple Regression: Unanticipated Consequences of Measurement Error in the Predictor Variables

    ERIC Educational Resources Information Center

    Shear, Benjamin R.; Zumbo, Bruno D.

    2013-01-01

    Type I error rates in multiple regression, and hence the chance for false positive research findings, can be drastically inflated when multiple regression models are used to analyze data that contain random measurement error. This article shows the potential for inflated Type I error rates in commonly encountered scenarios and provides new…
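
    The inflation mechanism can be demonstrated with a small simulation: Y depends only on a true predictor X*, a second regressor X2 is correlated with X* but has no effect on Y, and X* is observed with error. The naive t-test on X2's coefficient then rejects far more often than the nominal 5%. All distributional settings below are illustrative assumptions, not the article's scenarios.

```python
import numpy as np

# Simulation sketch of Type I error inflation under predictor measurement error.
rng = np.random.default_rng(0)
n, n_sims, rejections = 200, 300, 0
for _ in range(n_sims):
    x_true = rng.normal(size=n)
    x2 = x_true + rng.normal(scale=0.5, size=n)      # correlated with X*, no effect on Y
    y = x_true + rng.normal(scale=0.5, size=n)       # Y depends on X* only
    x1_obs = x_true + rng.normal(scale=1.0, size=n)  # X* measured with error
    X = np.column_stack([np.ones(n), x1_obs, x2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - 3)                 # residual variance
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X).diagonal())
    if abs(beta[2] / se[2]) > 1.96:                  # naive test: X2 coefficient = 0
        rejections += 1
rate = rejections / n_sims  # far above the nominal 0.05
```

    Because the error-laden x1_obs cannot fully absorb X*'s effect, X2 proxies the residual signal and its coefficient tests "significant" almost every time.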

  5. Impact of temporal upscaling and chemical transport model horizontal resolution on reducing ozone exposure misclassification

    NASA Astrophysics Data System (ADS)

    Xu, Yadong; Serre, Marc L.; Reyes, Jeanette M.; Vizuete, William

    2017-10-01

    We have developed a Bayesian Maximum Entropy (BME) framework that integrates observations from a surface monitoring network and predictions from a Chemical Transport Model (CTM) to create improved exposure estimates that can be resolved into any spatial and temporal resolution. The flexibility of the framework allows for input of data in any choice of time scales and CTM predictions of any spatial resolution, with varying associated degrees of estimation error and cost in terms of implementation and computation. This study quantifies the impact on exposure estimation error due to these choices by first comparing estimation errors when BME relied on ozone concentration data either as an hourly average, the daily maximum 8-h average (DM8A), or the daily 24-h average (D24A). Our analysis found that the use of DM8A and D24A data, although less computationally intensive, reduced estimation error more when compared to the use of hourly data. This was primarily due to the poorer CTM model performance in the hourly average predicted ozone. Our second analysis compared spatial variability and estimation errors when BME relied on CTM predictions with a grid cell resolution of 12 × 12 km² versus a coarser resolution of 36 × 36 km². Our analysis found that integrating the finer grid resolution CTM predictions not only reduced estimation error, but also increased the spatial variability in daily ozone estimates by 5 times. This improvement was due to the improved spatial gradients and model performance found in the finer resolved CTM simulation. The integration of observational and model predictions that is permitted in a BME framework continues to be a powerful approach for improving exposure estimates of ambient air pollution. The results of this analysis demonstrate the importance of also understanding model performance variability and its implications on exposure error.
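
    The three temporal metrics compared above can be computed directly from a day of hourly values. A minimal sketch on synthetic data; for simplicity the 8-h windows here start at hours 0-16 within a single day, a simplification of the full regulatory DM8A windowing convention.

```python
# Sketch of the temporal upscaling metrics on one day of hourly ozone values.
hourly = list(range(24))  # toy hourly concentrations, ppb (synthetic)

d24a = sum(hourly) / 24   # daily 24-h average (D24A)

# 8-h running means for windows starting at hours 0..16 (within-day windows)
windows = [sum(hourly[h:h + 8]) / 8 for h in range(17)]
dm8a = max(windows)       # daily maximum 8-h average (DM8A)
```

    With the monotone toy series, each window mean is its start hour plus 3.5, so the DM8A comes from the last window while the D24A sits at the midpoint of the day's values.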

  6. Smooth empirical Bayes estimation of observation error variances in linear systems

    NASA Technical Reports Server (NTRS)

    Martz, H. F., Jr.; Lian, M. W.

    1972-01-01

    A smooth empirical Bayes estimator was developed for estimating the unknown random scale component of each of a set of observation error variances. It is shown that the estimator possesses a smaller average squared error loss than other estimators for a discrete time linear system.

  7. An Upper Bound on Orbital Debris Collision Probability When Only One Object has Position Uncertainty Information

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2015-01-01

    Upper bounds on high speed satellite collision probability, P (sub c), have been investigated. Previous methods assume an individual position error covariance matrix is available for each object; the two matrices are combined into a single, relative position error covariance matrix. Components of the combined error covariance are then varied to obtain a maximum P (sub c). If error covariance information for only one of the two objects was available, either some default shape has been used or nothing could be done. An alternative is presented that uses the known covariance information along with a critical value of the missing covariance to obtain an approximate but useful P (sub c) upper bound. There are various avenues along which an upper bound on the high speed satellite collision probability has been pursued. Typically, for the collision plane representation of the high speed collision probability problem, the predicted miss position in the collision plane is assumed fixed. Then the shape (aspect ratio of ellipse), the size (scaling of standard deviations) or the orientation (rotation of ellipse principal axes) of the combined position error ellipse is varied to obtain a maximum P (sub c). Regardless of the exact details of the approach, previously presented methods all assume that an individual position error covariance matrix is available for each object and the two are combined into a single, relative position error covariance matrix. This combined position error covariance matrix is then modified according to the chosen scheme to arrive at a maximum P (sub c). But what if error covariance information for one of the two objects is not available? When error covariance information for one of the objects is not available the analyst has commonly defaulted to the situation in which only the relative miss position and velocity are known without any corresponding state error covariance information. 
The various usual methods of finding a maximum P (sub c) do no good because the analyst defaults to no knowledge of the combined, relative position error covariance matrix. It is reasonable to think, given an assumption of no covariance information, that an analyst might still attempt to determine the error covariance matrix that results in an upper bound on the P (sub c). Without some guidance on limits to the shape, size and orientation of the unknown covariance matrix, the limiting case is a degenerate ellipse lying along the relative miss vector in the collision plane. Unless the miss position is exceptionally large or the at-risk object is exceptionally small, this method results in a maximum P (sub c) too large to be of practical use. For example, assume that the miss distance is equal to the current ISS alert volume along-track (±) distance of 25 kilometers and that the at-risk area has a 70 meter radius; the maximum (degenerate ellipse) P (sub c) is then about 0.00136. At 40 kilometers, the maximum P (sub c) would be 0.00085, which is still almost an order of magnitude larger than the ISS maneuver threshold of 0.0001. In fact, a miss distance of almost 340 kilometers is necessary to reduce the maximum P (sub c) associated with this degenerate ellipse to the ISS maneuver threshold value. Such a result is frequently of no practical value to the analyst. Some improvement may be made with respect to this problem by realizing that while the position error covariance matrix of one of the objects (usually the debris object) may not be known, the position error covariance matrix of the other object (usually the asset) is almost always available. Making use of the position error covariance information for the one object provides an improvement in finding a maximum P (sub c) which, in some cases, may offer real utility. The equations to be used are presented and their use discussed.
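
    The degenerate-ellipse numbers quoted above follow from maximizing, over the standard deviation, the mass of a 1-D Gaussian over the at-risk chord; the maximum occurs when the standard deviation equals the miss distance. A small sketch reproducing the quoted values:

```python
from math import sqrt, pi, e

def max_pc_degenerate(miss_m, radius_m):
    # For a degenerate (1-D) error ellipse along the miss vector, the
    # probability mass over the at-risk chord is approximately
    # 2*r*phi(d/sigma)/sigma for r << d; maximizing over sigma gives
    # sigma = d and Pc_max = 2*r / (d * sqrt(2*pi*e)).
    return 2.0 * radius_m / (miss_m * sqrt(2.0 * pi * e))

pc_25km = max_pc_degenerate(25_000.0, 70.0)   # ~0.00136, matching the text
pc_40km = max_pc_degenerate(40_000.0, 70.0)   # ~0.00085, matching the text
```

    Setting the result equal to the 0.0001 ISS maneuver threshold and solving for the miss distance also recovers the roughly 340 km figure given above.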

  8. Bolus-dependent dosimetric effect of positioning errors for tangential scalp radiotherapy with helical tomotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lobb, Eric, E-mail: eclobb2@gmail.com

    2014-04-01

    The dosimetric effect of errors in patient position is studied on-phantom as a function of simulated bolus thickness to assess the need for bolus utilization in scalp radiotherapy with tomotherapy. A treatment plan is generated on a cylindrical phantom, mimicking a radiotherapy technique for the scalp utilizing primarily tangential beamlets. A planning target volume with embedded scalplike clinical target volumes (CTVs) is planned to a uniform dose of 200 cGy. Translational errors in phantom position are introduced in 1-mm increments and dose is recomputed from the original sinogram. For each error the maximum dose, minimum dose, clinical target dose homogeneity index (HI), and dose-volume histogram (DVH) are presented for simulated bolus thicknesses from 0 to 10 mm. Baseline HI values for all bolus thicknesses were in the 5.5 to 7.0 range, increasing to a maximum of 18.0 to 30.5 for the largest positioning errors when 0 to 2 mm of bolus is used. Utilizing 5 mm of bolus resulted in a maximum HI value of 9.5 for the largest positioning errors. Using 0 to 2 mm of bolus resulted in minimum and maximum dose values of 85% to 94% and 118% to 125% of the prescription dose, respectively. When using 5 mm of bolus these values were 98.5% and 109.5%. DVHs showed minimal changes in CTV dose coverage when using 5 mm of bolus, even for the largest positioning errors. CTV dose homogeneity becomes increasingly sensitive to errors in patient position as bolus thickness decreases when treating the scalp with primarily tangential beamlets. Performing a radial expansion of the scalp CTV into 5 mm of bolus material minimizes dosimetric sensitivity to errors in patient position as large as 5 mm and is therefore recommended.

  9. Analysis of basic clustering algorithms for numerical estimation of statistical averages in biomolecules.

    PubMed

    Anandakrishnan, Ramu; Onufriev, Alexey

    2008-03-01

    In statistical mechanics, the equilibrium properties of a physical system of particles can be calculated as the statistical average over accessible microstates of the system. In general, these calculations are computationally intractable since they involve summations over an exponentially large number of microstates. Clustering algorithms are one of the methods used to numerically approximate these sums. The most basic clustering algorithms first sub-divide the system into a set of smaller subsets (clusters). Then, interactions between particles within each cluster are treated exactly, while all interactions between different clusters are ignored. These smaller clusters have far fewer microstates, making the summation over these microstates tractable. These algorithms have been previously used for biomolecular computations, but remain relatively unexplored in this context. Presented here is a theoretical analysis of the error and computational complexity for the two most basic clustering algorithms that were previously applied in the context of biomolecular electrostatics. We derive a tight, computationally inexpensive, error bound for the equilibrium state of a particle computed via these clustering algorithms. For some practical applications, it is the root mean square error, which can be significantly lower than the error bound, that may be more important. We show that there is a strong empirical relationship between error bound and root mean square error, suggesting that the error bound could be used as a computationally inexpensive metric for predicting the accuracy of clustering algorithms for practical applications. An example of error analysis for such an application, computation of the average charge of ionizable amino acids in proteins, is given, demonstrating that the clustering algorithm can be accurate enough for practical purposes.

  10. Speeding up Coarse Point Cloud Registration by Threshold-Independent Baysac Match Selection

    NASA Astrophysics Data System (ADS)

    Kang, Z.; Lindenbergh, R.; Pu, S.

    2016-06-01

    This paper presents an algorithm for the automatic registration of terrestrial point clouds by match selection using an efficient conditional sampling method, threshold-independent BaySAC (BAYes SAmpling Consensus), and employs the error metric of average point-to-surface residual to reduce the random measurement error and thereby approach the real registration error. BaySAC and other basic sampling algorithms usually need to artificially determine a threshold by which inlier points are identified, which leads to a threshold-dependent verification process. Therefore, we applied the LMedS method to construct the cost function that is used to determine the optimum model, to reduce the influence of human factors and improve the robustness of the model estimate. Point-to-point and point-to-surface error metrics are most commonly used. However, point-to-point error in general consists of at least two components, random measurement error and systematic error as a result of a remaining error in the found rigid body transformation. Thus we employ the measure of the average point-to-surface residual to evaluate the registration accuracy. The proposed approaches, together with a traditional RANSAC approach, are tested on four data sets acquired by three different scanners in terms of their computational efficiency and quality of the final registration. The registration results show the standard deviation of the average point-to-surface residuals is reduced from 1.4 cm (plain RANSAC) to 0.5 cm (threshold-independent BaySAC). The results also show that, compared to the performance of RANSAC, our BaySAC strategies lead to fewer iterations and cheaper computational cost when the hypothesis set is contaminated with more outliers.
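
    The average point-to-surface residual used as the error metric above can be sketched for the simplest local surface model, a plane: fit the plane to the target points by SVD and average the normal distances of the registered points. A minimal sketch with synthetic coordinates, not the paper's scanner data.

```python
import numpy as np

def avg_point_to_surface(surface_pts, registered_pts):
    """Average unsigned distance of registered points to a plane fit by SVD."""
    c = surface_pts.mean(axis=0)
    # the right singular vector for the smallest singular value of the
    # centered surface points is the plane normal
    _, _, vt = np.linalg.svd(surface_pts - c)
    normal = vt[-1]
    return np.abs((registered_pts - c) @ normal).mean()

surface = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]])  # plane z = 0
points = np.array([[0.2, 0.3, 0.5], [0.8, 0.1, -0.5]])              # 0.5 off-plane
resid = avg_point_to_surface(surface, points)
```

    Unlike a point-to-point distance, this residual ignores sliding along the surface, which is why it better isolates the remaining rigid-body transformation error from random measurement error.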

  11. Dizziness and unsteadiness following whiplash injury: characteristic features and relationship with cervical joint position error.

    PubMed

    Treleaven, Julia; Jull, Gwendolen; Sterling, Michele

    2003-01-01

    Dizziness and/or unsteadiness are common symptoms of chronic whiplash-associated disorders. This study aimed to report the characteristics of these symptoms and determine whether there was any relationship to cervical joint position error. Joint position error, the accuracy of returning to the natural head posture following extension and rotation, was measured in 102 subjects with persistent whiplash-associated disorder and 44 control subjects. Whiplash subjects completed a neck pain index and answered questions about the characteristics of dizziness. The results indicated that subjects with whiplash-associated disorders had significantly greater joint position errors than control subjects. Within the whiplash group, those with dizziness had greater joint position errors than those without dizziness following rotation (rotation (R) 4.5 degrees (0.3) vs 2.9 degrees (0.4); rotation (L) 3.9 degrees (0.3) vs 2.8 degrees (0.4) respectively) and a higher neck pain index (55.3% (1.4) vs 43.1% (1.8)). Characteristics of the dizziness were consistent with those reported for a cervical cause, but no characteristics could predict the magnitude of joint position error. Cervical mechanoreceptor dysfunction is a likely cause of dizziness in whiplash-associated disorder.

  12. Error reduction in EMG signal decomposition

    PubMed Central

    Kline, Joshua C.

    2014-01-01

    Decomposition of the electromyographic (EMG) signal into constituent action potentials and the identification of individual firing instances of each motor unit in the presence of ambient noise are inherently probabilistic processes, whether performed manually or with automated algorithms. Consequently, they are subject to errors. We set out to classify and reduce these errors by analyzing 1,061 motor-unit action-potential trains (MUAPTs), obtained by decomposing surface EMG (sEMG) signals recorded during human voluntary contractions. Decomposition errors were classified into two general categories: location errors representing variability in the temporal localization of each motor-unit firing instance and identification errors consisting of falsely detected or missed firing instances. To mitigate these errors, we developed an error-reduction algorithm that combines multiple decomposition estimates to determine a more probable estimate of motor-unit firing instances with fewer errors. The performance of the algorithm is governed by a trade-off between the yield of MUAPTs obtained above a given accuracy level and the time required to perform the decomposition. When applied to a set of sEMG signals synthesized from real MUAPTs, the identification error was reduced by an average of 1.78%, improving the accuracy to 97.0%, and the location error was reduced by an average of 1.66 ms. The error-reduction algorithm in this study is not limited to any specific decomposition strategy. Rather, we propose it be used for other decomposition methods, especially when analyzing precise motor-unit firing instances, as occurs when measuring synchronization. PMID:25210159
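
    The combination step described above, merging multiple decomposition estimates into a more probable set of firing instances, can be sketched as a consensus vote: firing times from all estimates are pooled, grouped within a tolerance, and kept only when a majority of the estimates contributes to the group. The tolerance, vote threshold, and times below are illustrative, not the study's algorithm parameters.

```python
# Sketch of consensus combination of decomposition estimates (times in ms).
def consensus_firings(estimates, tol_ms=2.0, min_votes=2):
    # pool every (time, estimate-index) pair, sorted by time
    pool = sorted((t, i) for i, est in enumerate(estimates) for t in est)
    groups, current = [], [pool[0]]
    for item in pool[1:]:
        if item[0] - current[-1][0] <= tol_ms:
            current.append(item)       # same firing event, within tolerance
        else:
            groups.append(current)
            current = [item]
    groups.append(current)
    # keep a group only if enough *distinct* estimates voted for it
    return [sum(t for t, _ in g) / len(g)
            for g in groups
            if len({i for _, i in g}) >= min_votes]

firings = consensus_firings([[10.0, 50.0, 90.0],
                             [11.0, 49.0, 130.0],
                             [10.0, 51.0, 90.0, 130.0]])
# four consensus firing instances near 10, 50, 90, and 130 ms
```

    Averaging the grouped times reduces location error, while the majority vote discards falsely detected instances that appear in only one estimate.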

  13. Quantification and characterization of leakage errors

    NASA Astrophysics Data System (ADS)

    Wood, Christopher J.; Gambetta, Jay M.

    2018-03-01

    We present a general framework for the quantification and characterization of leakage errors that result when a quantum system is encoded in the subspace of a larger system. To do this we introduce metrics for quantifying the coherent and incoherent properties of the resulting errors and we illustrate this framework with several examples relevant to superconducting qubits. In particular, we propose two quantities, the leakage and seepage rates, which together with average gate fidelity allow for characterizing the average performance of quantum gates in the presence of leakage and show how the randomized benchmarking protocol can be modified to enable the robust estimation of all three quantities for a Clifford gate set.
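
    The notion of a leakage rate can be illustrated on the smallest relevant system: a qutrit whose computational subspace is span{|0>, |1>}, with a unitary that rotates |1> slightly into the leakage level |2>. This toy average over subspace basis states is only an illustration of the quantity; the paper's robust estimation uses a modified randomized benchmarking protocol.

```python
import numpy as np

# Toy leakage computation for a qutrit (computational subspace = levels 0, 1).
theta = 0.1  # small coupling angle into the leakage level |2> (illustrative)
U = np.array([[1.0, 0.0, 0.0],
              [0.0, np.cos(theta), -np.sin(theta)],
              [0.0, np.sin(theta),  np.cos(theta)]])
P = np.diag([1.0, 1.0, 0.0])  # projector onto the computational subspace

# average population leaving the subspace over its basis states:
# L = 1 - Tr(P U P U^dag) / dim(subspace)
L = 1.0 - np.trace(P @ U @ P @ U.conj().T).real / 2.0
# analytically, L = sin^2(theta) / 2 for this rotation
```

    Seepage would be the reverse quantity, population returning from |2> into the subspace, computed with the complementary projector.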

  14. Improved estimation of anomalous diffusion exponents in single-particle tracking experiments

    NASA Astrophysics Data System (ADS)

    Kepten, Eldad; Bronshtein, Irena; Garini, Yuval

    2013-05-01

    The mean square displacement is a central tool in the analysis of single-particle tracking experiments, shedding light on various biophysical phenomena. Frequently, parameters are extracted by performing time averages on single-particle trajectories followed by ensemble averaging. This procedure, however, suffers from two systematic errors when applied to particles that perform anomalous diffusion. The first is significant at short-time lags and is induced by measurement errors. The second arises from the natural heterogeneity in biophysical systems. We show how to estimate and correct these two errors and improve the estimation of the anomalous parameters for the whole particle distribution. As a consequence, we manage to characterize ensembles of heterogeneous particles even for rather short and noisy measurements where regular time-averaged mean square displacement analysis fails. We apply this method to both simulations and in vivo measurements of telomere diffusion in 3T3 mouse embryonic fibroblast cells. The motion of telomeres is found to be subdiffusive with an average exponent constant in time. Individual telomere exponents are normally distributed around the average exponent. The proposed methodology has the potential to improve experimental accuracy while maintaining lower experimental costs and complexity.
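
    The first systematic error mentioned above, the short-lag bias from measurement noise, has a simple form: static localization error of variance sigma^2 adds a constant 2*sigma^2 offset to the mean square displacement, flattening the apparent log-log slope at short lags. A deterministic sketch with synthetic values (K, alpha, and sigma are illustrative):

```python
from math import log

# MSD(t) = K * t^alpha + 2*sigma^2 : anomalous diffusion plus a noise offset.
K, alpha, sigma = 1.0, 0.5, 1.0
msd = {t: K * t**alpha + 2 * sigma**2 for t in (1.0, 2.0)}

# naive log-log slope between the two shortest lags is biased toward zero
naive_slope = log(msd[2.0] / msd[1.0]) / log(2.0)

# subtracting the known noise offset recovers the true exponent
corrected = {t: v - 2 * sigma**2 for t, v in msd.items()}
true_slope = log(corrected[2.0] / corrected[1.0]) / log(2.0)  # equals alpha
```

    In practice sigma^2 must itself be estimated from the data (for example from the short-lag intercept), which is the essence of the correction proposed in the paper.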

  15. Field Comparison between Sling Psychrometer and Meteorological Measuring Set AN/TMQ-22

    DTIC Science & Technology

    the ML-224 Sling Psychrometer. From a series of independent tests designed to minimize error it was concluded that the AN/TMQ-22 yielded a more accurate dew point reading. The average relative humidity error using the sling psychrometer was +9%, while the AN/TMQ-22 had a plus or minus 2% error. Even with cautious measurement the sling yielded a +4% error.

  16. Demand Forecasting: An Evaluation of DODs Accuracy Metric and Navys Procedures

    DTIC Science & Technology

    2016-06-01

    inventory management improvement plan, mean of absolute scaled error, lead time adjusted squared error, forecast accuracy, benchmarking, naïve method… Abbreviations: JASA, Journal of the American Statistical Association; LASE, Lead-time Adjusted Squared Error; LCI, Life Cycle Indicator; MA, Moving Average; MAE …; Mean Squared Error; NAVSUP, Naval Supply Systems Command; NDAA, National Defense Authorization Act; NIIN, National Individual Identification Number

  17. Attitudes of Mashhad Public Hospital's Nurses and Midwives toward the Causes and Rates of Medical Errors Reporting.

    PubMed

    Mobarakabadi, Sedigheh Sedigh; Ebrahimipour, Hosein; Najar, Ali Vafaie; Janghorban, Roksana; Azarkish, Fatemeh

    2017-03-01

    Patient safety is one of the main objectives in healthcare services; however, medical errors are a prevalent potential occurrence for patients in treatment systems. Medical errors lead to an increase in patient mortality and to challenges such as prolonged inpatient stays and increased costs. Controlling medical errors is very important, because these errors, besides being costly, threaten the patient's safety. To evaluate the attitudes of nurses and midwives toward the causes and rates of medical errors reporting. It was a cross-sectional observational study. The study population was 140 midwives and nurses employed in Mashhad Public Hospitals. The data collection was done through the Goldstone 2001 revised questionnaire. SPSS 11.5 software was used for data analysis. Descriptive statistics (standard deviation and relative frequency distributions) were used to calculate means, with results presented as tables and charts; the chi-square test was used for inferential analysis of the data. Most of the midwives and nurses (39.4%) were in the age range of 25 to 34 years and the lowest percentage (2.2%) were in the age range of 55-59 years. The highest average of medical errors was related to employees with three-four years of work experience, while the lowest average was related to those with one-two years of work experience. The highest average of medical errors occurred during the evening shift, while the lowest occurred during the night shift. Three main causes of medical errors were considered: illegible physician prescription orders, similarity of names of different drugs and nurse fatigue. The most important causes for medical errors from the viewpoints of nurses and midwives are illegible physician's orders, drug name similarity with other drugs, nurse fatigue and damaged label or packaging of the drug, respectively. 
Head nurse feedback, peer feedback, and fear of punishment or job loss were cited as reasons for underreporting of medical errors. This research demonstrates the need for greater attention to be paid to the causes of medical errors.

  18. Awareness of Diagnostic Error among Japanese Residents: a Nationwide Study.

    PubMed

    Nishizaki, Yuji; Shinozaki, Tomohiro; Kinoshita, Kensuke; Shimizu, Taro; Tokuda, Yasuharu

    2018-04-01

    Residents' understanding of diagnostic error may differ between countries. We sought to explore the relationship between diagnostic error knowledge and self-study, clinical knowledge, and experience. Our nationwide study involved postgraduate year 1 and 2 (PGY-1 and -2) Japanese residents. The Diagnostic Error Knowledge Assessment Test (D-KAT) and General Medicine In-Training Examination (GM-ITE) were administered at the end of the 2014 academic year. D-KAT scores were compared with the benchmark scores of US residents. Associations between D-KAT score and gender, PGY, emergency department (ED) rotations per month, mean number of inpatients handled at any given time, and mean daily minutes of self-study were also analyzed, both with and without adjusting for GM-ITE scores. Student's t test was used for comparisons, with linear mixed models and structural equation models (SEM) used to explore associations with D-KAT or GM-ITE scores. The mean D-KAT score among Japanese PGY-2 residents was significantly lower than that of their US PGY-2 counterparts (6.2 vs. 8.3, p < 0.001). GM-ITE scores correlated with ED rotations (≥6 rotations: 2.14; 0.16-4.13; p = 0.03), inpatient caseloads (5-9 patients: 1.79; 0.82-2.76; p < 0.001), and average daily minutes of self-study (≥91 min: 2.05; 0.56-3.53; p = 0.01). SEM revealed that D-KAT scores were directly associated with GM-ITE scores (β = 0.37, 95% CI: 0.34-0.41) and indirectly associated with ED rotations (β = 0.06, 95% CI: 0.02-0.10), inpatient caseload (β = 0.04, 95% CI: 0.003-0.08), and average daily minutes of study (β = 0.13, 95% CI: 0.09-0.17). Knowledge regarding diagnostic error among Japanese residents was poor compared with that among US residents. D-KAT scores correlated strongly with GM-ITE scores, and the latter scores were positively associated with a greater number of ED rotations, larger caseload (though only up to 15 patients), and more time spent studying.

  19. GPS Satellite Orbit Prediction at User End for Real-Time PPP System.

    PubMed

    Yang, Hongzhou; Gao, Yang

    2017-08-30

    This paper proposes a high-precision satellite orbit prediction process at the user end for a real-time precise point positioning (PPP) system. Firstly, the structure of the new real-time PPP system is briefly introduced. Then, the generation of satellite initial parameters (IP) at the server end is discussed, which includes the satellite position, velocity, and the solar radiation pressure (SRP) parameters for each satellite. After that, the method for orbit prediction at the user end, with dynamic models including the Earth's gravitational force, lunar gravitational force, solar gravitational force, and the SRP, is presented. For numerical integration, both the single-step Runge-Kutta and multi-step Adams-Bashforth-Moulton integrator methods are implemented. Then, the comparison between the predicted orbit and the International GNSS Service (IGS) final products is carried out. The results show that the prediction accuracy can be maintained for several hours, and the average prediction errors of the 31 satellites are 0.031, 0.032, and 0.033 m for the radial, along-track and cross-track directions over 12 h, respectively. Finally, PPP in both static and kinematic modes is carried out to verify the accuracy of the predicted satellite orbit. The average root mean square error (RMSE) for the static PPP of the 32 globally distributed IGS stations is 0.012, 0.015, and 0.021 m for the north, east, and vertical directions, respectively, while the RMSE of the kinematic PPP with the predicted orbit is 0.031, 0.069, and 0.167 m in the north, east and vertical directions, respectively.
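
    The single-step integration idea can be sketched with a classical fourth-order Runge-Kutta step on two-body dynamics alone. This is a minimal planar sketch: the paper's dynamic model also includes lunar and solar gravity and SRP, and additionally uses a multi-step Adams-Bashforth-Moulton integrator.

```python
import math

# RK4 propagation of planar two-body motion (state = [x, y, vx, vy], km, km/s).
MU = 398600.4418  # km^3/s^2, Earth's gravitational parameter

def deriv(state):
    x, y, vx, vy = state
    r3 = (x * x + y * y) ** 1.5
    return [vx, vy, -MU * x / r3, -MU * y / r3]

def rk4_step(state, dt):
    k1 = deriv(state)
    k2 = deriv([s + 0.5 * dt * k for s, k in zip(state, k1)])
    k3 = deriv([s + 0.5 * dt * k for s, k in zip(state, k2)])
    k4 = deriv([s + dt * k for s, k in zip(state, k3)])
    return [s + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

r0 = 7000.0  # km, circular-orbit radius (illustrative, not a GPS orbit)
state = [r0, 0.0, 0.0, math.sqrt(MU / r0)]  # circular-orbit velocity
for _ in range(6000):                       # 60000 s at dt = 10 s
    state = rk4_step(state, 10.0)
radius = math.hypot(state[0], state[1])     # stays close to r0 for a good integrator
```

    A practical check on any orbit integrator is exactly this kind of invariant: on an unperturbed circular orbit the radius (and orbital energy) should be conserved to well within the prediction accuracy being claimed.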

  20. Validity of the microsoft kinect system in assessment of compensatory stepping behavior during standing and treadmill walking.

    PubMed

    Shani, Guy; Shapiro, Amir; Oded, Goldstein; Dima, Kagan; Melzer, Itshak

    2017-01-01

    Rapid compensatory stepping plays an important role in preventing falls when balance is lost; however, these responses cannot be accurately quantified in the clinic. The Microsoft Kinect™ system provides real-time anatomical landmark position data in three dimensions (3D), which may bridge this gap. Compensatory stepping reactions were evoked in 8 young adults by a sudden horizontal motion of the platform on which the subject stood or walked on a treadmill. The movements were recorded with both a 3D-APAS motion capture system and the Microsoft Kinect™ system. The outcome measures consisted of compensatory step times (milliseconds) and lengths (centimeters). The average values of two standing and walking trials for the Microsoft Kinect™ and 3D-APAS systems were compared using t-tests, Pearson's correlations, Bland-Altman plots, and the average difference of root mean square error (RMSE) of joint position. The Microsoft Kinect™ had high correlations for the compensatory step times (r = 0.75-0.78, p = 0.04) during standing and moderate correlations for walking (r = 0.53-0.63, p = 0.05). The step length, however, had very high correlations for both standing and walking (r > 0.97, p = 0.01). The RMSE showed acceptable differences during the perturbation trials, with the smallest relative error in the anterior-posterior direction (2-3%) and the highest in the vertical direction (11-13%). No systematic biases were evident in the Bland-Altman plots. The Microsoft Kinect™ system provides data comparable to a video-based 3D motion analysis system when assessing step length, and less accurate but still clinically acceptable data for step times during balance recovery when balance is lost and a fall is initiated.

  1. GPS Satellite Orbit Prediction at User End for Real-Time PPP System

    PubMed Central

    Yang, Hongzhou; Gao, Yang

    2017-01-01

    This paper proposes a high-precision satellite orbit prediction process at the user end for a real-time precise point positioning (PPP) system. Firstly, the structure of the new real-time PPP system is briefly introduced. Then, the generation of satellite initial parameters (IP) at the server end is discussed, which includes the position, velocity, and solar radiation pressure (SRP) parameters for each satellite. After that, the method for orbit prediction at the user end, with dynamic models including the Earth's gravitational force, lunar gravitational force, solar gravitational force, and the SRP, is presented. For numerical integration, both the single-step Runge–Kutta and multi-step Adams–Bashforth–Moulton integrators are implemented. Then, the predicted orbit is compared with the International GNSS (global navigation satellite system) Service (IGS) final products. The results show that the prediction accuracy can be maintained for several hours; the average prediction errors of the 31 satellites over 12 h are 0.031, 0.032, and 0.033 m in the radial, along-track, and cross-track directions, respectively. Finally, PPP in both static and kinematic modes is carried out to verify the accuracy of the predicted satellite orbit. The average root mean square errors (RMSE) for static PPP at the 32 globally distributed IGS stations are 0.012, 0.015, and 0.021 m in the north, east, and vertical directions, respectively, while the RMSE of kinematic PPP with the predicted orbit are 0.031, 0.069, and 0.167 m in the north, east, and vertical directions, respectively. PMID:28867771
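    As a minimal sketch of the single-step integration the abstract mentions, here is a classical fourth-order Runge–Kutta (RK4) step for satellite motion under the Earth's point-mass gravity only (GM = 398600.4418 km³/s², the standard WGS84 value). The paper's full dynamic model also includes lunar and solar gravity and SRP, which this sketch omits.

```python
import math

MU = 398600.4418  # Earth's GM in km^3/s^2 (point-mass two-body term only)

def deriv(state):
    """Time derivative of (x, y, z, vx, vy, vz) under two-body gravity."""
    x, y, z, vx, vy, vz = state
    r3 = (x * x + y * y + z * z) ** 1.5
    return (vx, vy, vz, -MU * x / r3, -MU * y / r3, -MU * z / r3)

def rk4_step(state, dt):
    """One classical RK4 step of size dt seconds."""
    k1 = deriv(state)
    k2 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = deriv(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))
```

    For a circular orbit the radius should be preserved almost exactly over many steps, which makes a convenient sanity check on the integrator.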

  2. A high-accuracy two-position alignment inertial navigation system for lunar rovers aided by a star sensor with a calibration and positioning function

    NASA Astrophysics Data System (ADS)

    Lu, Jiazhen; Lei, Chaohua; Yang, Yanqiang; Liu, Ming

    2016-12-01

    An integrated inertial/celestial navigation system (INS/CNS) has wide applicability in lunar rovers, as it provides accurate and autonomous navigational information. Initialization is particularly vital for an INS. This paper proposes a two-position initialization method based on a standard Kalman filter, in which the difference between the computed star vector and the measured star vector serves as the measurement. With the aid of a star sensor and the two positions, the attitudinal and positional errors can be greatly reduced, and the biases of the three gyros and accelerometers can also be estimated. Semi-physical simulation results show that the attitudinal and positional errors converge to within 0.07″ and 0.1 m, respectively, when the given initial positional error is 1 km and the attitudinal error is 10°. These results show that the proposed method can accomplish alignment, positioning, and calibration functions simultaneously; thus the proposed two-position initialization method has potential for application in lunar rover navigation.
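    The paper's filter operates on star-vector differences with a full INS error-state model. As a minimal illustration of the standard Kalman predict/update cycle it builds on, here is a scalar filter estimating a constant bias from noisy measurements; all numeric values are hypothetical.

```python
def kalman_step(x, P, z, Q, R):
    """One predict/update cycle of a scalar Kalman filter for a
    constant state observed directly (F = H = 1)."""
    # Predict: constant-state model, process noise Q inflates covariance
    x_pred, P_pred = x, P + Q
    # Update: Kalman gain blends the prediction and the measurement z
    K = P_pred / (P_pred + R)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

# Hypothetical: estimate a constant bias of ~1.0 from noisy observations
x, P = 0.0, 1.0  # initial state estimate and covariance
for z in [0.9, 1.1, 0.95, 1.05, 1.0]:
    x, P = kalman_step(x, P, z, Q=1e-6, R=0.01)
print(x, P)
```

    The covariance P shrinks with each update, which is the mechanism by which the two-position scheme drives down the attitude, position, and sensor-bias uncertainties.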

  3. ERROR COMPENSATOR FOR A POSITION TRANSDUCER

    DOEpatents

    Fowler, A.H.

    1962-06-12

    A device is designed for eliminating the effect of leadscrew errors in positioning machines in which linear motion of a slide is effected from rotary motion of a leadscrew. This is accomplished by providing a corrector cam mounted on the slide, a cam follower, and a transducer housing rotatable by the follower to compensate for all the reproducible errors in the transducer signal which can be related to the slide position. The transducer has an inner part which is movable with respect to the transducer housing. The transducer inner part is coupled to the means for rotating the leadscrew such that relative movement between this part and its housing will provide an output signal proportional to the position of the slide. The corrector cam and its follower perform the compensation by changing the angular position of the transducer housing by an amount that is a function of the slide position and the error at that position. (AEC)

  4. Usability of devices for self-injection: results of a formative study on a new disposable pen injector.

    PubMed

    Lange, Jakob; Richard, Philipp; Bradley, Nick

    2014-01-01

    This article presents a late-stage formative usability study of a pen-injector platform device. Such devices are used for the subcutaneous delivery of biopharmaceuticals, primarily for self-administration by the patient. The study was conducted with a broad user population, defined to represent user characteristics across a range of indications. The goals of the study were to confirm that the pen could be used without recurring patterns of use errors leading to hazardous situations, to evaluate the comprehension of the instructions for use (IFU), and to determine whether training is necessary. In the study, a total of 36 participants in six groups (health care providers, caregivers, adolescents, diabetics with retinopathy, diabetics with neuropathy, and patients with arthritis) each read the IFU, prepared the device, and performed two simulated injections into an injection pad. Any use errors, near misses, or deviations from the IFU procedure were recorded. The overall success rate (injection completed by the participant without need for assistance) was 94% for the first and 100% for the second injection. Ninety-two percent of the participants reported that they felt confident using the device, 100% found the IFU helpful, and 75% found the device positively comfortable to use. Overall, a total average of 3.35 deviations and errors per user per injection was recorded (there were no near misses). Subtracting the errors without any potential for negative consequences for the injection or the user (trivial deviations), as well as those related to attaching and removing the pen needle (independent of the design of the pen itself), left an average of 1.31 potentially relevant deviations per user per injection. It was concluded that the pen injector together with the IFU could be safely and efficiently used by all user groups without any training, and thus that the device and IFU in their current form are well suited for use in a range of specific applications.

  5. Design and performance evaluation of a master controller for endovascular catheterization.

    PubMed

    Guo, Jin; Guo, Shuxiang; Tamiya, Takashi; Hirata, Hideyuki; Ishihara, Hidenori

    2016-01-01

    It is difficult to manipulate a flexible catheter to target a position within a patient's complicated and delicate vessels. However, few researchers have focused on controller designs that preserve the natural catheter-manipulation skills acquired through manual catheterization. Also, the existing catheter motion measurement methods can lead to difficulties in designing the force feedback device. Additionally, the commercially available systems are expensive enough to be cost-prohibitive for most hospitals. This paper presents a simple and cost-effective master controller for endovascular catheterization that allows interventionalists to apply the conventional pull, push, and twist of the catheter used in current practice. A catheter-sensing unit (used to measure the motion of the catheter) and a force feedback unit (used to provide a sense of resistance force) are both presented. A camera was used to allow contactless measurement, avoiding additional friction, and force feedback in the axial direction was provided by the magnetic force generated between permanent magnets and a powered coil. The performance of the controller was evaluated by first conducting comparison experiments to quantify the accuracy of the catheter-sensing unit, and then conducting several experiments to evaluate the force feedback unit. From the experimental results, the minimum and maximum errors of translational displacement were 0.003 mm (0.01%) and 0.425 mm (1.06%), respectively, and the average error was 0.113 mm (0.28%). For rotational angles, the minimum and maximum errors were 0.39° (0.33%) and 7.2° (6%), respectively, and the average error was 3.61° (3.01%). The force resolution was approximately 25 mN, and a maximum current of 3 A generated an approximately 1.5 N force.
Based on analysis of requirements and state-of-the-art computer-assisted and robot-assisted training systems for endovascular catheterization, a new master controller with force feedback interface was proposed to maintain the natural endovascular catheterization skills of the interventionalists.

  6. Online adaptation of a c-VEP Brain-Computer Interface (BCI) based on error-related potentials and unsupervised learning.

    PubMed

    Spüler, Martin; Rosenstiel, Wolfgang; Bogdan, Martin

    2012-01-01

    The goal of a Brain-Computer Interface (BCI) is to control a computer by pure brain activity. Recently, BCIs based on code-modulated visual evoked potentials (c-VEPs) have shown great potential to establish high-performance communication. In this paper we present a c-VEP BCI that uses online adaptation of the classifier to reduce calibration time and increase performance. We compare two different approaches for online adaptation of the system: an unsupervised method and a method that uses the detection of error-related potentials. Both approaches were tested in an online study, in which an average accuracy of 96% was achieved with adaptation based on error-related potentials. This accuracy corresponds to an average information transfer rate of 144 bit/min, which is the highest bitrate reported so far for a non-invasive BCI. In a free-spelling mode, the subjects were able to write with an average of 21.3 error-free letters per minute, which shows the feasibility of the BCI system in a normal-use scenario. In addition we show that a calibration of the BCI system solely based on the detection of error-related potentials is possible, without knowing the true class labels.
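    The bitrate figure above is conventionally computed with the Wolpaw information-transfer-rate formula (assuming equiprobable targets and uniformly distributed errors). The abstract does not state the target count or selection rate behind its 144 bit/min, so the values in the example call below are purely illustrative.

```python
import math

def wolpaw_itr(n_targets, accuracy, selections_per_min):
    """Information transfer rate (bits/min) via the standard Wolpaw formula.
    Assumes equiprobable targets and a uniform error distribution."""
    p, n = accuracy, n_targets
    bits = math.log2(n)  # bits per selection at perfect accuracy
    if 0.0 < p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * selections_per_min

# Illustrative only: 32 targets, 96% accuracy, 30 selections per minute
print(f"{wolpaw_itr(32, 0.96, 30):.1f} bit/min")
```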

  7. Disclosure of Medical Errors: What Factors Influence How Patients Respond?

    PubMed Central

    Mazor, Kathleen M; Reed, George W; Yood, Robert A; Fischer, Melissa A; Baril, Joann; Gurwitz, Jerry H

    2006-01-01

    BACKGROUND Disclosure of medical errors is encouraged, but research on how patients respond to specific practices is limited. OBJECTIVE This study sought to determine whether full disclosure, an existing positive physician-patient relationship, an offer to waive associated costs, and the severity of the clinical outcome influenced patients' responses to medical errors. PARTICIPANTS Four hundred and seven health plan members participated in a randomized experiment in which they viewed video depictions of medical error and disclosure. DESIGN Subjects were randomly assigned to experimental condition. Conditions varied in type of medication error, level of disclosure, reference to a prior positive physician-patient relationship, an offer to waive costs, and clinical outcome. MEASURES Self-reported likelihood of changing physicians and of seeking legal advice; satisfaction, trust, and emotional response. RESULTS Nondisclosure increased the likelihood of changing physicians, and reduced satisfaction and trust in both error conditions. Nondisclosure increased the likelihood of seeking legal advice and was associated with a more negative emotional response in the missed allergy error condition, but did not have a statistically significant impact on seeking legal advice or emotional response in the monitoring error condition. Neither the existence of a positive relationship nor an offer to waive costs had a statistically significant impact. CONCLUSIONS This study provides evidence that full disclosure is likely to have a positive effect or no effect on how patients respond to medical errors. The clinical outcome also influences patients' responses. The impact of an existing positive physician-patient relationship, or of waiving costs associated with the error remains uncertain. PMID:16808770

  8. The Impact of Subsampling on MODIS Level-3 Statistics of Cloud Optical Thickness and Effective Radius

    NASA Technical Reports Server (NTRS)

    Oreopoulos, Lazaros

    2004-01-01

    The MODIS Level-3 optical thickness and effective radius cloud product is a gridded 1° × 1° dataset derived from aggregation and 5 km subsampling of 1 km resolution Level-2 orbital swath data (Level-2 granules). This study examines the impact of the 5 km subsampling on the mean, standard deviation, and inhomogeneity parameter statistics of optical thickness and effective radius. The methodology is simple and consists of estimating mean errors for a large collection of Terra and Aqua Level-2 granules by taking the difference of the statistics at the original and subsampled resolutions. It is shown that the Level-3 sampling does not affect the various quantities investigated to the same degree, with second-order moments suffering greater subsampling errors, as expected. Mean errors drop dramatically when averages over a sufficient number of regions (e.g., monthly and/or latitudinal averages) are taken, pointing to a dominance of errors that are random in nature. When histograms built from subsampled data with the same binning rules as in the Level-3 dataset are used to reconstruct the quantities of interest, the mean errors do not deteriorate significantly. The results in this paper provide guidance to users of MODIS Level-3 optical thickness and effective radius cloud products on the range of errors due to subsampling they should expect, and perhaps account for, in scientific work with this dataset. In general, subsampling errors should not be a serious concern when moderate temporal and/or spatial averaging is performed.

  9. Compensation for positioning error of industrial robot for flexible vision measuring system

    NASA Astrophysics Data System (ADS)

    Guo, Lei; Liang, Yajun; Song, Jincheng; Sun, Zengyu; Zhu, Jigui

    2013-01-01

    The positioning error of the robot is a main factor in the accuracy of a flexible coordinate measuring system consisting of a universal industrial robot and a visual sensor. Present compensation methods for positioning error based on the kinematic model of the robot have a significant limitation: they are not effective over the whole measuring space. A new compensation method for the positioning error of the robot, based on vision measuring techniques, is presented. One approach is to set global control points in the measured field and attach an orientation camera to the vision sensor; the global control points are then measured by the orientation camera to calculate the transformation from the current position of the sensor system to the global coordinate system, and the positioning error of the robot is compensated. Another approach is to set control points on the vision sensor and place two large-field cameras behind the sensor; the three-dimensional coordinates of the control points are then measured, and the pose and position of the sensor are calculated in real time. Experimental results show that the RMS of spatial positioning is 3.422 mm with a single camera and 0.031 mm with dual cameras. It is concluded that the algorithm of the single-camera method needs improvement for higher accuracy, while the accuracy of the dual-camera method is suitable for application.

  10. Interpreting the Latitudinal Structure of Differences Between Modeled and Observed Temperature Trends (Invited)

    NASA Astrophysics Data System (ADS)

    Santer, B. D.; Mears, C. A.; Gleckler, P. J.; Solomon, S.; Wigley, T.; Arblaster, J.; Cai, W.; Gillett, N. P.; Ivanova, D. P.; Karl, T. R.; Lanzante, J.; Meehl, G. A.; Stott, P.; Taylor, K. E.; Thorne, P.; Wehner, M. F.; Zou, C.

    2010-12-01

    We perform the most comprehensive comparison to date of simulated and observed temperature trends. Comparisons are made for different latitude bands, timescales, and temperature variables, using information from a multi-model archive and a variety of observational datasets. Our focus is on temperature changes in the lower troposphere (TLT), the mid- to upper troposphere (TMT), and at the sea surface (SST). For SST, TLT, and TMT, trend comparisons over the satellite era (1979 to 2009) always yield closest agreement in mid-latitudes of the Northern Hemisphere. There are pronounced discrepancies in the tropics and in the Southern Hemisphere: in both regions, the multi-model average warming is consistently larger than observed. At high latitudes in the Northern Hemisphere, the observed tropospheric warming exceeds multi-model average trends. The similarity in the latitudinal structure of this discrepancy pattern across different temperature variables and observational data sets suggests that these trend differences are real, and are not due to residual inhomogeneities in the observations. The interpretation of these results is hampered by the fact that the CMIP-3 multi-model archive analyzed here convolves errors in key external forcings with errors in the model response to forcing. Under a "forcing error" interpretation, model-average temperature trends in the Southern Hemisphere extratropics are biased warm because many models neglect (and/or inaccurately specify) changes in stratospheric ozone and the indirect effects of aerosols. An alternative "response error" explanation for the model trend errors is that there are fundamental problems with model clouds and ocean heat uptake over the Southern Ocean. When SST changes are compared over the longer period 1950 to 2009, there is close agreement between simulated and observed trends poleward of 50°S. 
This result is difficult to reconcile with the hypothesis that the trend discrepancies over 1979 to 2009 are primarily attributable to response errors. Our results suggest that biases in multi-model average temperature trends over the satellite era can be plausibly linked to forcing errors. Better partitioning of the forcing and response components of model errors will require a systematic program of numerical experimentation, with a focus on exploring the climate response to uncertainties in key historical forcings.

  11. Center-to-Limb Variation of Deprojection Errors in SDO/HMI Vector Magnetograms

    NASA Astrophysics Data System (ADS)

    Falconer, David; Moore, Ronald; Barghouty, Nasser; Tiwari, Sanjiv K.; Khazanov, Igor

    2015-04-01

    For use in investigating the magnetic causes of coronal heating in active regions, and for use in forecasting an active region's productivity of major CME/flare eruptions, we have evaluated various sunspot-active-region magnetic measures (e.g., total magnetic flux, free-magnetic-energy proxies, magnetic twist measures) from HMI Active Region Patches (HARPs) after each HARP has been deprojected to disk center. From a few tens of thousands of HARP vector magnetograms (of a few hundred sunspot active regions) that have been deprojected to disk center, we have determined that the errors in the whole-HARP magnetic measures from deprojection are negligibly small for HARPs deprojected from distances out to 45 heliocentric degrees. For some purposes the errors from deprojection are tolerable out to 60 degrees. We obtained this result by the following process, for each whole-HARP magnetic measure: 1) for each HARP disk passage, normalize the measured values by the measured value for that HARP at central meridian; 2) then, for each 0.05 Rs annulus, average the values from all the HARPs in the annulus. This yields an average normalized value as a function of radius for each measure. Assuming no deprojection errors, and that among a large set of HARPs the measure is as likely to decrease as to increase with HARP distance from disk center, the average in each annulus is expected to be unity; for a statistically large sample, the deviation of the average from unity estimates the error from deprojection effects.
    The deprojection errors arise from 1) errors in the transverse field being deprojected into the vertical field for HARPs observed at large distances from disk center, 2) increasingly larger foreshortening at larger distances from disk center, and 3) possible errors in transverse-field-direction ambiguity resolution. From the compiled set of measured values of whole-HARP magnetic nonpotentiality parameters measured from deprojected HARPs, we have examined the relation between each nonpotentiality parameter and the speed of CMEs from the measured active regions. For several different nonpotentiality parameters we find there is an upper limit to the CME speed, the limit increasing as the value of the parameter increases.
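    The two-step normalize-then-bin procedure described above can be sketched as follows; the record layout (radius, measured value, central-meridian value) is a hypothetical simplification of the HARP data structure.

```python
from collections import defaultdict

def annulus_averages(records, bin_width=0.05):
    """records: iterable of (radius, value, cm_value) tuples, one per
    HARP observation, where cm_value is that HARP's central-meridian
    measurement. Normalize each value by its HARP's central-meridian
    value, then average the normalized values within each radius
    annulus. Deviation of an annulus average from unity estimates the
    deprojection error at that radius."""
    bins = defaultdict(list)
    for radius, value, cm_value in records:
        bins[int(radius / bin_width)].append(value / cm_value)
    return {b * bin_width: sum(v) / len(v) for b, v in sorted(bins.items())}
```

    With no deprojection error every normalized value scatters symmetrically around 1, so the annulus averages stay near unity, exactly as the abstract argues.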

  12. Reward positivity: Reward prediction error or salience prediction error?

    PubMed

    Heydari, Sepideh; Holroyd, Clay B

    2016-08-01

    The reward positivity is a component of the human ERP elicited by feedback stimuli in trial-and-error learning and guessing tasks. A prominent theory holds that the reward positivity reflects a reward prediction error signal that is sensitive to outcome valence, being larger for unexpected positive events relative to unexpected negative events (Holroyd & Coles, 2002). Although the theory has found substantial empirical support, most of these studies have utilized either monetary or performance feedback to test the hypothesis. However, in apparent contradiction to the theory, a recent study found that unexpected physical punishments also elicit the reward positivity (Talmi, Atkinson, & El-Deredy, 2013). The authors of this report argued that the reward positivity reflects a salience prediction error rather than a reward prediction error. To investigate this finding further, in the present study participants navigated a virtual T maze and received feedback on each trial under two conditions. In a reward condition, the feedback indicated that they would either receive a monetary reward or not and in a punishment condition the feedback indicated that they would receive a small shock or not. We found that the feedback stimuli elicited a typical reward positivity in the reward condition and an apparently delayed reward positivity in the punishment condition. Importantly, this signal was more positive to the stimuli that predicted the omission of a possible punishment relative to stimuli that predicted a forthcoming punishment, which is inconsistent with the salience hypothesis. © 2016 Society for Psychophysiological Research.

  13. Accuracy of Robotic Radiosurgical Liver Treatment Throughout the Respiratory Cycle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winter, Jeff D.; Wong, Raimond; Swaminath, Anand

    Purpose: To quantify random uncertainties in robotic radiosurgical treatment of liver lesions with real-time respiratory motion management. Methods and Materials: We conducted a retrospective analysis of 27 liver cancer patients treated with robotic radiosurgery over 118 fractions. The robotic radiosurgical system uses orthogonal x-ray images to determine internal target position and correlates this position with an external surrogate to provide robotic corrections of linear accelerator positioning. Verification and update of this internal–external correlation model was achieved using periodic x-ray images collected throughout treatment. To quantify random uncertainties in targeting, we analyzed logged tracking information and isolated x-ray images collected immediately before beam delivery. For translational correlation errors, we quantified the difference between correlation model–estimated target position and actual position determined by periodic x-ray imaging. To quantify prediction errors, we computed the mean absolute difference between the predicted coordinates and actual modeled position calculated 115 milliseconds later. We estimated overall random uncertainty by quadratically summing correlation, prediction, and end-to-end targeting errors. We also investigated relationships between tracking errors and motion amplitude using linear regression. Results: The 95th percentile absolute correlation errors in each direction were 2.1 mm left–right, 1.8 mm anterior–posterior, 3.3 mm cranio–caudal, and 3.9 mm 3-dimensional radial, whereas 95th percentile absolute radial prediction errors were 0.5 mm. Overall 95th percentile random uncertainty was 4 mm in the radial direction. Prediction errors were strongly correlated with modeled target amplitude (r=0.53-0.66, P<.001), whereas only weak correlations existed for correlation errors.
    Conclusions: Study results demonstrate that model correlation errors are the primary random source of uncertainty in Cyberknife liver treatment and, unlike prediction errors, are not strongly correlated with target motion amplitude. Aggregate 3-dimensional radial position errors presented here suggest the target will be within 4 mm of the target volume for 95% of the beam delivery.
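    The "quadratically summing" step above is a standard quadrature combination of independent uncertainties. In the sketch below, the correlation (3.9 mm) and prediction (0.5 mm) terms are the reported values, but the 0.7 mm end-to-end targeting term is an assumed, illustrative number (the abstract does not state it); with it, the combination lands near the reported overall 4 mm.

```python
import math

def quadrature_sum(*errors):
    """Combine independent random uncertainties in quadrature."""
    return math.sqrt(sum(e * e for e in errors))

# 95th-percentile radial errors: correlation 3.9 mm, prediction 0.5 mm
# (from the study), plus an assumed 0.7 mm end-to-end targeting term.
print(f"overall ~= {quadrature_sum(3.9, 0.5, 0.7):.1f} mm")
```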

  14. Improved Reweighting of Accelerated Molecular Dynamics Simulations for Free Energy Calculation.

    PubMed

    Miao, Yinglong; Sinko, William; Pierce, Levi; Bucher, Denis; Walker, Ross C; McCammon, J Andrew

    2014-07-08

    Accelerated molecular dynamics (aMD) simulations greatly improve the efficiency of conventional molecular dynamics (cMD) for sampling biomolecular conformations, but they require proper reweighting for free energy calculation. In this work, we systematically compare the accuracy of different reweighting algorithms including the exponential average, Maclaurin series, and cumulant expansion on three model systems: alanine dipeptide, chignolin, and Trp-cage. Exponential average reweighting can recover the original free energy profiles easily only when the distribution of the boost potential is narrow (e.g., the range ≤20 kBT) as found in dihedral-boost aMD simulation of alanine dipeptide. In dual-boost aMD simulations of the studied systems, exponential average generally leads to high energetic fluctuations, largely due to the fact that the Boltzmann reweighting factors are dominated by a very few high boost potential frames. In comparison, reweighting based on Maclaurin series expansion (equivalent to cumulant expansion on the first order) greatly suppresses the energetic noise but often gives incorrect energy minimum positions and significant errors at the energy barriers (∼2-3 kBT). Finally, reweighting using cumulant expansion to the second order is able to recover the most accurate free energy profiles within statistical errors of ∼kBT, particularly when the distribution of the boost potential exhibits low anharmonicity (i.e., near-Gaussian distribution), and should be of wide applicability. A toolkit of Python scripts for aMD reweighting "PyReweighting" is distributed free of charge at http://mccammon.ucsd.edu/computing/amdReweighting/.
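    The two reweighting estimators compared above reduce to short formulas on a sample of boost potentials ΔV. This is a minimal sketch of those formulas, not the PyReweighting implementation: the exponential average computes ln⟨exp(βΔV)⟩ directly, while the second-order cumulant expansion approximates it as β⟨ΔV⟩ + (β²/2)·Var(ΔV).

```python
import math

def exp_average(dV, beta):
    """ln <exp(beta*dV)>: exact but noisy when dV is broadly distributed,
    since a few large-dV frames dominate the Boltzmann factors."""
    n = len(dV)
    return math.log(sum(math.exp(beta * v) for v in dV) / n)

def cumulant2(dV, beta):
    """Second-order cumulant expansion: beta*<dV> + beta^2/2 * Var(dV).
    Accurate when dV is near-Gaussian (low anharmonicity)."""
    n = len(dV)
    mean = sum(dV) / n
    var = sum((v - mean) ** 2 for v in dV) / n
    return beta * mean + 0.5 * beta ** 2 * var
```

    For a narrow (in the limit, constant) boost distribution the two estimators coincide; they diverge as the distribution of ΔV broadens and becomes anharmonic.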

  15. Comparison of ArcGIS and SAS Geostatistical Analyst to Estimate Population-Weighted Monthly Temperature for US Counties.

    PubMed

    Xiaopeng, Qi; Liang, Wei; Barker, Laurie; Lekiachvili, Akaki; Xingyou, Zhang

    Temperature changes are known to have significant impacts on human health. Accurate estimates of population-weighted average monthly air temperature for US counties are needed to evaluate temperature's association with health behaviours and disease, which are sampled or reported at the county level and measured on a monthly or 30-day basis. Most reported temperature estimates were calculated using ArcGIS; relatively few used SAS. We compared the performance of geostatistical models to estimate population-weighted average temperature in each month for counties in 48 states using ArcGIS v9.3 and SAS v9.2 on a CITGO platform. Monthly average temperature for Jan-Dec 2007 and elevation from 5435 weather stations were used to estimate the temperature at county population centroids. County estimates were produced with elevation as a covariate. Performance of the models was assessed by comparing adjusted R², mean squared error, root mean squared error, and processing time. Prediction accuracy for split validation was above 90% for 11 months in ArcGIS and all 12 months in SAS. Cokriging in SAS achieved higher prediction accuracy and lower estimation bias than cokriging in ArcGIS. County-level estimates produced by the two packages were positively correlated (adjusted R² range = 0.95 to 0.99); accuracy and precision improved with elevation as a covariate. Both the ArcGIS and SAS methods are reliable for US county-level temperature estimates; however, ArcGIS's merits in spatial data pre-processing and processing time may be important considerations for software selection, especially for multi-year or multi-state projects.
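    Once temperatures have been kriged to county population centroids, the target quantity itself is just a population-weighted mean. A minimal sketch (the county temperatures and populations below are hypothetical):

```python
def population_weighted_mean(temps, pops):
    """Population-weighted average temperature over counties:
    sum(T_i * P_i) / sum(P_i)."""
    return sum(t * p for t, p in zip(temps, pops)) / sum(pops)

# Hypothetical: two counties at 10 and 20 degrees, populations 50k / 150k
print(population_weighted_mean([10.0, 20.0], [50_000.0, 150_000.0]))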

  16. Improved Reweighting of Accelerated Molecular Dynamics Simulations for Free Energy Calculation

    PubMed Central

    2015-01-01

    Accelerated molecular dynamics (aMD) simulations greatly improve the efficiency of conventional molecular dynamics (cMD) for sampling biomolecular conformations, but they require proper reweighting for free energy calculation. In this work, we systematically compare the accuracy of different reweighting algorithms including the exponential average, Maclaurin series, and cumulant expansion on three model systems: alanine dipeptide, chignolin, and Trp-cage. Exponential average reweighting can recover the original free energy profiles easily only when the distribution of the boost potential is narrow (e.g., the range ≤20kBT) as found in dihedral-boost aMD simulation of alanine dipeptide. In dual-boost aMD simulations of the studied systems, exponential average generally leads to high energetic fluctuations, largely due to the fact that the Boltzmann reweighting factors are dominated by a very few high boost potential frames. In comparison, reweighting based on Maclaurin series expansion (equivalent to cumulant expansion on the first order) greatly suppresses the energetic noise but often gives incorrect energy minimum positions and significant errors at the energy barriers (∼2–3kBT). Finally, reweighting using cumulant expansion to the second order is able to recover the most accurate free energy profiles within statistical errors of ∼kBT, particularly when the distribution of the boost potential exhibits low anharmonicity (i.e., near-Gaussian distribution), and should be of wide applicability. A toolkit of Python scripts for aMD reweighting “PyReweighting” is distributed free of charge at http://mccammon.ucsd.edu/computing/amdReweighting/. PMID:25061441

  17. Performance of some numerical Laplace inversion methods on American put option formula

    NASA Astrophysics Data System (ADS)

    Octaviano, I.; Yuniar, A. R.; Anisa, L.; Surjanto, S. D.; Putri, E. R. M.

    2018-03-01

    Numerical inversion approaches for the Laplace transform are used to obtain semi-analytic solutions. Mathematical inversion methods such as Durbin-Crump, Widder, and Papoulis can be used to price American put options through the optimal exercise price in Laplace space. The methods are first compared on some simple functions to establish their accuracy and the parameters used in the calculation of American put options. The result is a characterization of each method's accuracy and computational speed: the Durbin-Crump method has an average relative error of 2.006e-004 with a computational speed of 0.04871 seconds, the Widder method has an average relative error of 0.0048 with a computational speed of 3.100181 seconds, and the Papoulis method has an average relative error of 9.8558e-004 with a computational speed of 0.020793 seconds.
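    To illustrate how such inversion methods are benchmarked on simple functions with known inverses, here is the classical Gaver-Stehfest algorithm (a different method from the three compared in the abstract, chosen because it is compact). For F(s) = 1/(s+1) the exact inverse is f(t) = e^(-t), so the numerical error is directly measurable.

```python
import math

def stehfest_coeffs(N):
    """Gaver-Stehfest weights V_k for even N (N = 12 is a common choice)."""
    half = N // 2
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            s += (j ** half * math.factorial(2 * j)
                  / (math.factorial(half - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        V.append((-1) ** (k + half) * s)
    return V

def stehfest_invert(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s):
    f(t) ~= (ln 2 / t) * sum_k V_k * F(k * ln 2 / t)."""
    ln2 = math.log(2.0)
    V = stehfest_coeffs(N)
    return ln2 / t * sum(V[k - 1] * F(k * ln2 / t) for k in range(1, N + 1))

# Benchmark on F(s) = 1/(s+1), whose exact inverse is exp(-t)
print(abs(stehfest_invert(lambda s: 1.0 / (s + 1.0), 1.0) - math.exp(-1.0)))
```

    As with the methods in the abstract, the free parameter (here N) trades accuracy against round-off: the alternating weights grow rapidly, so N much beyond ~16 degrades double-precision results.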

  18. Underwater wireless optical MIMO system with spatial modulation and adaptive power allocation

    NASA Astrophysics Data System (ADS)

    Huang, Aiping; Tao, Linwei; Niu, Yilong

    2018-04-01

    In this paper, we investigate the performance of an underwater wireless optical multiple-input multiple-output communication system combining spatial modulation (SM-UOMIMO) with flag dual amplitude pulse position modulation (FDAPPM). Channel impulse responses for coastal and harbor ocean water links are obtained by Monte Carlo (MC) simulation. Moreover, we obtain closed-form and upper-bound average bit error rate (BER) expressions for receiver diversity, including optical combining, equal gain combining, and selection combining. A novel adaptive power allocation algorithm (PAA) is proposed to minimize the average BER of the SM-UOMIMO system. Our numerical results indicate an excellent match between the analytical results and numerical simulations, which confirms the accuracy of the derived expressions. Furthermore, the results show that the adaptive PAA clearly outperforms conventional fixed-factor PAA and equal PAA. A multiple-input single-output system with adaptive PAA even obtains better BER performance than the MIMO one, while effectively reducing receiver complexity.

  19. Single-ping ADCP measurements in the Strait of Gibraltar

    NASA Astrophysics Data System (ADS)

    Sammartino, Simone; García Lafuente, Jesús; Naranjo, Cristina; Sánchez Garrido, José Carlos; Sánchez Leal, Ricardo

    2016-04-01

    Most Acoustic Doppler Current Profiler (ADCP) user manuals recommend ensemble averaging of single-ping measurements in order to obtain reliable observations of the current speed. The random error of a single-ping measurement is typically too high for it to be used directly, while averaging reduces the ensemble error by a factor of approximately √N, where N is the number of averaged pings. A 75 kHz ADCP moored at the western exit of the Strait of Gibraltar, part of the long-term monitoring of the Mediterranean outflow, has recently served as a test setup for a different approach to current measurement. Ensemble averaging was disabled, while the internal coordinate conversion performed by the instrument was retained, and a series of single-ping measurements was collected every 36 seconds over a period of approximately 5 months. The instrument handled the large amount of data smoothly, and no abnormal battery consumption was recorded, yielding a long and unique series of very high frequency current measurements. Results of this novel approach have been exploited in a dual way. From a statistical point of view, the availability of single-ping measurements allows a real (a posteriori) estimate of the ensemble average error of both current and ancillary variables. While the theoretical random error for horizontal velocity is estimated a priori as ˜2 cm s-1 for a 50-ping ensemble, the value obtained by a posteriori averaging is ˜15 cm s-1, with asymptotic behavior starting from an averaging size of 10 pings per ensemble. This result suggests the presence of external sources of random error (e.g. turbulence) of higher magnitude than the internal sources (ADCP intrinsic precision), which cannot be reduced by ensemble averaging. 
On the other hand, although the instrumental configuration is clearly not suitable for a precise estimation of turbulent parameters, some hints of the turbulent structure of the flow can be obtained from the empirical computation of the zonal Reynolds stress (along the predominant direction of the current) and the rates of production and dissipation of turbulent kinetic energy. All of these parameters show a clear correlation with tidal fluctuations of the current, with maximum values coinciding with flood tides during the maxima of the Mediterranean outflow current.
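    The √N averaging law quoted above is easy to verify numerically. In this sketch the single-ping noise level (~14 cm/s, consistent with a ~2 cm/s theoretical error for a 50-ping ensemble) and the ensemble count are illustrative:

```python
import math
import random
import statistics

random.seed(0)

SINGLE_PING_STD = 14.0   # cm/s, illustrative single-ping random error
N_PINGS = 50             # pings per ensemble
N_ENSEMBLES = 2000

# Each ensemble mean is the average of N_PINGS noisy single-ping samples.
ensemble_means = [
    statistics.fmean(random.gauss(0.0, SINGLE_PING_STD) for _ in range(N_PINGS))
    for _ in range(N_ENSEMBLES)
]

observed = statistics.stdev(ensemble_means)
expected = SINGLE_PING_STD / math.sqrt(N_PINGS)  # ~1.98 cm/s by the sqrt(N) law
print(observed, expected)
```

    The paper's point is precisely that the real data do not follow this idealized reduction: external error sources such as turbulence are correlated across pings and so are not averaged away.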

  20. Geometric validation of MV topograms for patient localization on TomoTherapy

    NASA Astrophysics Data System (ADS)

    Blanco Kiely, Janid P.; White, Benjamin M.; Low, Daniel A.; Qi, Sharon X.

    2016-01-01

    Our goal was to geometrically validate the use of mega-voltage orthogonal scout images (MV topograms) as a fast and low-dose alternative to mega-voltage computed tomography (MVCT) for daily patient localization on the TomoTherapy system. To achieve this, anthropomorphic head and pelvis phantoms were imaged on a 16-slice kilo-voltage computed tomography (kVCT) scanner to synthesize kilo-voltage digitally reconstructed topograms (kV-DRT) in the TomoTherapy detector geometry. MV topograms were generated for couch speeds of 1-4 cm s-1 in 1 cm s-1 increments with static gantry angles in the anterior-posterior (AP) and left-lateral (LAT) directions. Phantoms were rigidly translated in the anterior-posterior, superior-inferior (SI), and lateral directions to simulate potential setup errors. Image quality improvement was demonstrated by estimating the noise level in the unenhanced and enhanced MV topograms using a principal component analysis-based noise level estimation algorithm. Average noise levels for the head phantom were reduced by 2.53 HU (AP) and 0.18 HU (LAT). The pelvis phantom exhibited average noise level reductions of 1.98 HU (AP) and 0.48 HU (LAT). Mattes mutual information rigid registration was used to register the enhanced MV topograms with the corresponding kV-DRT. Registration results were compared to the known rigid displacements to assess the sensitivity of MV topogram localization to daily positioning errors. Reduced noise levels in the MV topograms improved the registration results so that registration errors were <1 mm. The unenhanced head MV topograms had discrepancies <2.1 mm and the pelvis topograms had discrepancies <2.7 mm. Results were found to be consistent regardless of couch speed. In total, 64.7% of the head phantom MV topograms and 60.0% of the pelvis phantom MV topograms exactly measured the phantom offsets. 
These consistencies demonstrated the potential for daily patient positioning using MV topogram pairs in the context of bony-anatomy-based procedures such as total marrow irradiation, total body irradiation, and craniospinal irradiation.

  1. Using video recording to identify management errors in pediatric trauma resuscitation.

    PubMed

    Oakley, Ed; Stocker, Sergio; Staubli, Georg; Young, Simon

    2006-03-01

    To determine the ability of video recording to identify management errors in trauma resuscitation and to compare this method with medical record review. The resuscitation of children who presented to the emergency department of the Royal Children's Hospital between February 19, 2001, and August 18, 2002, for whom the trauma team was activated was video recorded. The tapes were analyzed, and management was compared with Advanced Trauma Life Support guidelines. Deviations from these guidelines were recorded as errors. Fifty video recordings were analyzed independently by 2 reviewers. Medical record review was undertaken for a cohort of the most seriously injured patients, and errors were identified. The errors detected with the 2 methods were compared. Ninety resuscitations were video recorded and analyzed. An average of 5.9 errors per resuscitation was identified with this method (range: 1-12 errors). Twenty-five children (28%) had an injury severity score of >11; there was an average of 2.16 errors per patient in this group. Only 10 (20%) of these errors were detected in the medical record review. Medical record review detected an additional 8 errors that were not evident on the video recordings. Concordance between independent reviewers was high, with 93% agreement. Video recording is more effective than medical record review in detecting management errors in pediatric trauma resuscitation. Management errors in pediatric trauma resuscitation are common and often involve basic resuscitation principles. Resuscitation of the most seriously injured children was associated with fewer errors. Video recording is a useful adjunct to trauma resuscitation auditing.

  2. SU-G-JeP4-05: Effects of Irregular Respiratory Motion On the Positioning Accuracy of Moving Target with Free Breathing Cone-Beam Computerized Tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, X; Xiong, W; Gewanter, R

    Purpose: Average or maximum intensity projection (AIP or MIP) images derived from 4DCT images are often used as reference images for target alignment when free-breathing cone-beam CT (FBCBCT) is used to position a moving target at treatment. This method can be highly accurate if the patient has stable respiratory motion; however, a patient’s breathing pattern often varies irregularly. The purpose of this study is to investigate the effect of irregular respiration on the positioning accuracy of a moving target with FBCBCT. Methods: Eight patients’ respiratory motion curves were selected to drive a Quasar phantom with embedded cubic and spherical targets. A 4DCT of the moving phantom was acquired on a CT scanner (Philips Brilliance 16) equipped with a Varian RPM system. The phase-binned 4DCT images and the corresponding MIP and AIP images were transferred into Eclipse for analysis. CBCTs of the phantom driven by the same breathing curves were acquired on a Varian TrueBeam and fused such that the zero positions of the moving targets were the same on both the CBCT and AIP images. The sphere and cube volumes and centroid differences (alignment errors) determined by the MIP, AIP and FBCBCT images were compared. Results: Compared to the volumes determined by FBCBCT, the volumes of the cube and sphere in the MIP images were 22.4%±8.8% and 34.2%±6.2% larger, while the volumes in the AIP images were 7.1%±6.2% and 2.7%±15.3% larger, respectively. The alignment errors for the cube and sphere with center-center matches between MIP and FBCBCT were 3.5±3.1 mm and 3.2±2.3 mm, and the alignment errors between AIP and FBCBCT were 2.1±2.6 mm and 2.1±1.7 mm, respectively. Conclusion: AIP images appear to be better reference images than MIP images. However, irregular respiratory motion can compromise the positioning accuracy of a moving target if a target center-center match is used to align FBCBCT and AIP images.
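    MIP and AIP images are simply voxel-wise maximum and average projections across the 4DCT phase bins. A minimal sketch on a toy phase stack (values and dimensions are illustrative, not clinical data):

```python
# Toy 4DCT: 3 phase images, each 2x3 (values in HU).
# Voxel-wise max across phases -> MIP; voxel-wise mean -> AIP.
phases = [
    [[0, 10, 20], [30, 40, 50]],
    [[5,  8, 22], [33, 38, 47]],
    [[1, 12, 18], [36, 42, 44]],
]

n_rows, n_cols = len(phases[0]), len(phases[0][0])

mip = [[max(p[r][c] for p in phases) for c in range(n_cols)]
       for r in range(n_rows)]
aip = [[sum(p[r][c] for p in phases) / len(phases) for c in range(n_cols)]
       for r in range(n_rows)]

print(mip)
print(aip)
```

    The MIP inflates a moving target to its full motion envelope (hence the larger volumes reported above), while the AIP blurs it around the time-averaged position.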

  3. Error analysis for relay type satellite-aided search and rescue systems

    NASA Technical Reports Server (NTRS)

    Marini, J. W.

    1977-01-01

    An analysis was made of the errors in the determination of the position of an emergency transmitter in a satellite-aided search and rescue system. The satellite was assumed to be at a height of 820 km in a near-circular, near-polar orbit. Short data spans of four minutes or less were used. The error sources considered were measurement noise, transmitter frequency drift, ionospheric effects, and error in the assumed height of the transmitter. The errors were calculated for several different transmitter positions, data rates and data spans. The only transmitter frequency used was 406 MHz, but the results can be scaled to different frequencies. In a typical case, in which four Doppler measurements were taken over a span of two minutes, the position error was about 1.2 km.

  4. Experimental determination of the navigation error of the 4-D navigation, guidance, and control systems on the NASA B-737 airplane

    NASA Technical Reports Server (NTRS)

    Knox, C. E.

    1978-01-01

    Navigation error data from these flights are presented in a format utilizing three independent axes - horizontal, vertical, and time. The navigation position estimate error term and the autopilot flight technical error term are combined to form the total navigation error in each axis. This method of error presentation allows comparisons to be made between other 2-, 3-, or 4-D navigation systems and allows experimental or theoretical determination of the navigation error terms. Position estimate error data are presented with the navigation system position estimate based on dual DME radio updates that are smoothed with inertial velocities, dual DME radio updates that are smoothed with true airspeed and magnetic heading, and inertial velocity updates only. The normal mode of navigation with dual DME updates that are smoothed with inertial velocities resulted in a mean error of 390 m with a standard deviation of 150 m in the horizontal axis; a mean error of 1.5 m low with a standard deviation of less than 11 m in the vertical axis; and a mean error as low as 252 m with a standard deviation of 123 m in the time axis.

  5. Accuracy of acoustic velocity metering systems for measurement of low velocity in open channels

    USGS Publications Warehouse

    Laenen, Antonius; Curtis, R. E.

    1989-01-01

    Acoustic velocity meter (AVM) accuracy depends on equipment limitations, the accuracy of acoustic-path length and angle determination, and the stability of the relation between mean velocity and acoustic-path velocity. Equipment limitations depend on path length and angle, transducer frequency, timing oscillator frequency, and signal-detection scheme; typically, the velocity error from this source is about ±1 to ±10 mm/s. Error in the acoustic-path angle or length results in a proportional measurement bias: typically, an angle error of one degree produces a velocity error of about 2%, and a path-length error of one meter in 100 meters produces an error of 1%. Ray bending (signal refraction) depends on path length and the density gradients present in the stream. Any deviation from a straight acoustic path between transducers changes the unique relation between path velocity and mean velocity, introducing error into the mean velocity computation. Typically, for a 200-meter path length the resultant error is less than 1%, but for a 1,000-meter path length the error can be greater than 10%. Recent laboratory and field tests have substantiated these assumptions about equipment limitations. Tow-tank tests of an AVM system with a 4.69-meter path length yielded an average standard deviation error of 9.3 mm/s, and field tests of an AVM system with a 20.5-meter path length yielded an average standard deviation error of 4 mm/s. (USGS)
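    The quoted ~2% velocity error per degree of path-angle error follows from the 1/cos(θ) geometry of an acoustic path crossing the flow: dv/v ≈ tan(θ)·dθ. A small check, assuming a typical 45° path angle (the angle is an assumption for illustration; the cited figure does not state one):

```python
import math

def angle_error_sensitivity(theta_deg, dtheta_deg):
    """Relative velocity error caused by an error dtheta in the assumed
    path angle theta. Since v ~ 1/cos(theta), dv/v = tan(theta) * dtheta."""
    theta = math.radians(theta_deg)
    dtheta = math.radians(dtheta_deg)
    return math.tan(theta) * dtheta

rel_err = angle_error_sensitivity(45.0, 1.0)
print(f"{rel_err:.3%}")  # ~1.7% per degree at a 45-degree path angle
```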

  6. Comparison of algorithms for automatic border detection of melanoma in dermoscopy images

    NASA Astrophysics Data System (ADS)

    Srinivasa Raghavan, Sowmya; Kaur, Ravneet; LeAnder, Robert

    2016-09-01

    Melanoma is one of the most rapidly accelerating cancers in the world [1]. Early diagnosis is critical to an effective cure. We propose a new algorithm for more accurately detecting melanoma borders in dermoscopy images. Proper border detection requires eliminating occlusions like hair and bubbles by processing the original image. The preprocessing step involves transforming the RGB image to the CIE L*u*v* color space in order to decouple brightness from color information, then increasing contrast using contrast-limited adaptive histogram equalization (CLAHE), followed by artifact removal using a Gaussian filter. After preprocessing, the Chan-Vese technique segments the preprocessed images to create a lesion mask, which undergoes a morphological closing operation. Next, the largest central blob in the lesion is detected, after which the blob is dilated to generate an output image mask. Finally, the automatically generated mask is compared to the manual mask by calculating the XOR error [3]. Our border detection algorithm was developed using training and test sets of 30 and 20 images, respectively. This detection method was compared to the SRM method [4] by calculating the average XOR error for each of the two algorithms. The average error for the test images was 0.10 using the new algorithm and 0.99 using the SRM method, implying that the new algorithm detects the borders of melanomas more accurately than the SRM algorithm.
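    The XOR error compares the automatic and manual masks pixel by pixel. A common form (assumed here; the exact normalization used in [3] may differ) divides the symmetric-difference area by the manual-mask area:

```python
def xor_error(auto_mask, manual_mask):
    """XOR border-detection error: count pixels where the automatic and
    manual binary masks disagree, normalized by the manual-mask area
    (assumed normalization, for illustration)."""
    disagree = sum(
        1 for row_a, row_m in zip(auto_mask, manual_mask)
        for a, m in zip(row_a, row_m) if a != m
    )
    manual_area = sum(sum(row) for row in manual_mask)
    return disagree / manual_area

manual = [[0, 1, 1, 0],
          [0, 1, 1, 0]]
auto   = [[0, 1, 0, 0],
          [0, 1, 1, 1]]
print(xor_error(auto, manual))  # 2 disagreeing pixels / 4 manual pixels = 0.5
```

    An error of 0 means the masks coincide exactly; values near 1 (like the SRM result above) mean the disagreement area is comparable to the lesion itself.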

  7. Effectiveness of compressed sensing and transmission in wireless sensor networks for structural health monitoring

    NASA Astrophysics Data System (ADS)

    Fujiwara, Takahiro; Uchiito, Haruki; Tokairin, Tomoya; Kawai, Hiroyuki

    2017-04-01

    For structural health monitoring (SHM) of seismic acceleration, wireless sensor networks (WSN) are a promising low-cost tool. Compressed sensing and transmission schemes have been drawing attention as a way to achieve effective data collection in WSN; in particular, SHM systems with many WSN nodes require efficient data transmission due to restricted communication capability. The dominant frequency band of seismic acceleration lies within 100 Hz or less. In addition, the response motions on the upper floors of a structure are activated at a natural frequency, resulting in induced shaking in a specified narrow band. Focusing on these vibration characteristics of structures, we introduce data compression techniques for seismic acceleration monitoring in order to reduce the amount of transmitted data. We carry out a compressed sensing and transmission scheme based on band-pass filtering of the seismic acceleration data. The algorithm executes the discrete Fourier transform to obtain the frequency domain representation and applies band-pass filtering for compressed transmission. Assuming that the compressed data are transmitted through computer networks, the data are restored by the inverse Fourier transform at the receiving node. This paper evaluates the compressed sensing of seismic acceleration in terms of average error. The results show that the average error was 0.06 or less for the horizontal acceleration when the acceleration was compressed to 1/32; on the 4th floor in particular, the average error was as small as 0.02. These results indicate that the compressed sensing and transmission technique is effective in reducing the amount of data while maintaining a small average error.
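    The compress/restore cycle described above (DFT, discard out-of-band bins, inverse DFT at the receiver) can be sketched with a plain O(n²) DFT; the signal, block length, and number of retained bins are illustrative:

```python
import cmath
import math

def dft(x):
    """Plain O(n^2) discrete Fourier transform (illustrative; use an FFT in practice)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Inverse DFT, returning the real part of the reconstruction."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

n = 64
signal = [math.sin(2 * math.pi * 3 * t / n) for t in range(n)]  # low-frequency motion

X = dft(signal)
keep = 8  # transmit only the lowest bins (and their symmetric partners)
compressed = [Xk if (k < keep or k >= n - keep) else 0.0 for k, Xk in enumerate(X)]

restored = idft(compressed)
avg_error = sum(abs(a - b) for a, b in zip(signal, restored)) / n
print(avg_error)
```

    Because the test signal sits entirely inside the retained band, the average reconstruction error here is essentially zero; for real seismic records, energy outside the band sets the error floor (the 0.06 and 0.02 figures above).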

  8. Spatial Assessment of Model Errors from Four Regression Techniques

    Treesearch

    Lianjun Zhang; Jeffrey H. Gove; Jeffrey H. Gove

    2005-01-01

    Forest modelers have attempted to account for the spatial autocorrelations among trees in growth and yield models by applying alternative regression techniques such as linear mixed models (LMM), generalized additive models (GAM), and geographically weighted regression (GWR). However, the model errors are commonly assessed using average errors across the entire study...

  9. Flavour and identification threshold detection overview of Slovak adepts for certified testing.

    PubMed

    Vietoris, Vladimir; Barborova, Petra; Jancovicova, Jana; Eliasova, Lucia; Karvaj, Marian

    2016-07-01

    During the certification process for sensory assessors of the Slovak certification body, we obtained results for basic taste thresholds and lifestyle habits. 500 adults with a food industry background were screened during the experiment. For the analysis of basic and non-basic tastes, we used the standardized procedure of ISO 8586-1:1993. In the flavour test experiment, the 26-35 y.o. group produced the lowest error ratio (1.438), while the 56+ y.o. group produced the highest (2.0). By gender, the average error value for women was 1.510, compared with 1.477 for men. People with allergies had an average error ratio of 1.437, compared with 1.511 for people without allergies. Non-smokers produced fewer errors (1.484) than smokers (1.576). Another flavour threshold identification test detected differences among age groups (values increased with age). The highest error rate for men was in metallic taste (24%), about the same as for women (22%). Men made more errors in salty taste (19%) than women (10%). The analysis detected some differences between the allergic/non-allergic and smoker/non-smoker groups.

  10. Towards limb position invariant myoelectric pattern recognition using time-dependent spectral features.

    PubMed

    Khushaba, Rami N; Takruri, Maen; Miro, Jaime Valls; Kodagoda, Sarath

    2014-07-01

    Recent studies in Electromyogram (EMG) pattern recognition reveal a gap between research findings and a viable clinical implementation of myoelectric control strategies. One of the important factors contributing to the limited performance of such controllers in practice is the variation in the limb position associated with normal use as it results in different EMG patterns for the same movements when carried out at different positions. However, the end goal of the myoelectric control scheme is to allow amputees to control their prosthetics in an intuitive and accurate manner regardless of the limb position at which the movement is initiated. In an attempt to reduce the impact of limb position on EMG pattern recognition, this paper proposes a new feature extraction method that extracts a set of power spectrum characteristics directly from the time-domain. The end goal is to form a set of features invariant to limb position. Specifically, the proposed method estimates the spectral moments, spectral sparsity, spectral flux, irregularity factor, and signals power spectrum correlation. This is achieved through using Fourier transform properties to form invariants to amplification, translation and signal scaling, providing an efficient and accurate representation of the underlying EMG activity. Additionally, due to the inherent temporal structure of the EMG signal, the proposed method is applied on the global segments of EMG data as well as the sliced segments using multiple overlapped windows. The performance of the proposed features is tested on EMG data collected from eleven subjects, while implementing eight classes of movements, each at five different limb positions. Practical results indicate that the proposed feature set can achieve significant reduction in classification error rates, in comparison to other methods, with ≈8% error on average across all subjects and limb positions. 
A real-time implementation and demonstration is also provided and made available as a video supplement (see Appendix A). Copyright © 2014 Elsevier Ltd. All rights reserved.
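    The key trick behind time-domain spectral features is Parseval's theorem: even moments of the power spectrum equal the power of successive time-domain differences of the signal. A sketch of that idea (not the authors' full feature set, which also builds invariants from these moments):

```python
import math

def spectral_moments(x):
    """Even power-spectrum moments estimated in the time domain (Parseval):
    m0 = power of x, m2 = power of the first difference,
    m4 = power of the second difference."""
    d1 = [b - a for a, b in zip(x, x[1:])]
    d2 = [b - a for a, b in zip(d1, d1[1:])]
    m0 = sum(v * v for v in x)
    m2 = sum(v * v for v in d1)
    m4 = sum(v * v for v in d2)
    return m0, m2, m4

# Sanity check: for a slow sinusoid, sqrt(m2/m0) approximates its
# angular frequency (in radians per sample).
w = 0.1
x = [math.sin(w * n) for n in range(2000)]
m0, m2, m4 = spectral_moments(x)
print(math.sqrt(m2 / m0), w)
```

    No Fourier transform is computed at run time, which is what makes this family of features cheap enough for real-time myoelectric control.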

  11. Gaze Tracking System for User Wearing Glasses

    PubMed Central

    Gwon, Su Yeong; Cho, Chul Woo; Lee, Hyeon Chang; Lee, Won Oh; Park, Kang Ryoung

    2014-01-01

    Conventional gaze tracking systems are limited in cases where the user is wearing glasses because the glasses usually produce noise due to reflections caused by the gaze tracker's lights. This makes it difficult to locate the pupil and the specular reflections (SRs) from the cornea of the user's eye. These difficulties increase the likelihood of gaze detection errors because the gaze position is estimated based on the location of the pupil center and the positions of the corneal SRs. In order to overcome these problems, we propose a new gaze tracking method that can be used by subjects who are wearing glasses. Our research is novel in the following four ways: first, we construct a new control device for the illuminator, which includes four illuminators that are positioned at the four corners of a monitor. Second, our system automatically determines whether a user is wearing glasses or not in the initial stage by counting the number of white pixels in an image that is captured using the low exposure setting on the camera. Third, if it is determined that the user is wearing glasses, the four illuminators are turned on and off sequentially in order to obtain an image that has a minimal amount of noise due to reflections from the glasses. As a result, it is possible to avoid the reflections and accurately locate the pupil center and the positions of the four corneal SRs. Fourth, by turning off one of the four illuminators, only three corneal SRs exist in the captured image. Since the proposed gaze detection method requires four corneal SRs for calculating the gaze position, the unseen SR position is estimated based on the parallelogram shape that is defined by the three SR positions, and the gaze position is then calculated. Experimental results showed that the average gaze detection error for 20 participants was about 0.70° and the processing time was 63.72 ms per frame. PMID:24473283
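    The fourth (occluded) corneal SR is recovered from the parallelogram assumption: if a, b and c are three consecutive corners, the missing corner is a + c − b. A minimal sketch with illustrative coordinates:

```python
def estimate_missing_sr(a, b, c):
    """Given three consecutive corners a, b, c of a parallelogram
    (b adjacent to both a and c), the missing corner d satisfies
    d = a + c - b, because the diagonals of a parallelogram bisect
    each other."""
    return (a[0] + c[0] - b[0], a[1] + c[1] - b[1])

# Corners of a parallelogram: (0,0), (4,1), (5,4) -> missing corner (1,3)
print(estimate_missing_sr((0, 0), (4, 1), (5, 4)))
```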

  12. Differences among Job Positions Related to Communication Errors at Construction Sites

    NASA Astrophysics Data System (ADS)

    Takahashi, Akiko; Ishida, Toshiro

    In a previous study, we classified the communication errors at construction sites as the faulty intention and message pattern, the inadequate channel pattern, and the faulty comprehension pattern. This study seeks to evaluate the degree of risk of communication errors and to investigate differences among people in various job positions in their perception of communication error risk. Questionnaires based on the previous study were administered to construction workers (n=811; 149 administrators, 208 foremen and 454 workers). Administrators evaluated all patterns of communication error risk equally. However, foremen and workers evaluated communication error risk differently in each pattern. The common contributing factors to all patterns were inadequate arrangements before work and inadequate confirmation. Some factors were common among patterns but other factors were particular to a specific pattern. To help prevent future accidents at construction sites, administrators should understand how people in various job positions perceive communication errors and propose human factors measures to prevent such errors.

  13. On the error probability of general tree and trellis codes with applications to sequential decoding

    NASA Technical Reports Server (NTRS)

    Johannesson, R.

    1973-01-01

    An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.

  14. Towards reporting standards for neuropsychological study results: A proposal to minimize communication errors with standardized qualitative descriptors for normalized test scores.

    PubMed

    Schoenberg, Mike R; Rum, Ruba S

    2017-11-01

    Rapid, clear and efficient communication of neuropsychological results is essential to benefit patient care. Errors in communication are a leading cause of medical errors; nevertheless, there remains a lack of consistency in how neuropsychological scores are communicated. A major limitation in the communication of neuropsychological results is the inconsistent use of qualitative descriptors for standardized test scores and the use of vague terminology. A PubMed search from 1 Jan 2007 to 1 Aug 2016 was conducted to identify guidelines or consensus statements for the description and reporting of qualitative terms used to communicate neuropsychological test scores. The review found the use of confusing and overlapping terms to describe various ranges of percentile standardized test scores. In response, we propose a simplified set of qualitative descriptors for normalized test scores (Q-Simple) as a means to reduce errors in communicating test results. The Q-Simple qualitative terms are: 'very superior', 'superior', 'high average', 'average', 'low average', 'borderline' and 'abnormal/impaired'. A case example illustrates the proposed Q-Simple qualitative classification system for communicating neuropsychological results in neurosurgical planning. The Q-Simple qualitative descriptor system aims to improve and standardize the communication of standardized neuropsychological test scores. Further research is needed to evaluate neuropsychological communication errors. Conveying the clinical implications of neuropsychological results in a manner that minimizes the risk of communication errors is a quintessential component of evidence-based practice. Copyright © 2017 Elsevier B.V. All rights reserved.
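    A descriptor system like Q-Simple is naturally implemented as a percentile-to-label lookup. The cutoffs below are illustrative assumptions only; the paper's published boundaries may differ:

```python
# Illustrative (assumed) percentile cutoffs; the paper's exact
# boundaries for each Q-Simple term may differ.
Q_SIMPLE = [
    (98, "very superior"),
    (91, "superior"),
    (75, "high average"),
    (25, "average"),
    (9,  "low average"),
    (2,  "borderline"),
    (0,  "abnormal/impaired"),
]

def q_simple(percentile):
    """Map a test-score percentile to its Q-Simple qualitative descriptor."""
    for cutoff, label in Q_SIMPLE:
        if percentile >= cutoff:
            return label
    return Q_SIMPLE[-1][1]

print(q_simple(50))  # "average"
print(q_simple(1))   # "abnormal/impaired"
```

    Encoding the mapping once, rather than leaving descriptor choice to each report author, is exactly the kind of standardization the abstract argues for.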

  15. Real-time auto-adaptive margin generation for MLC-tracked radiotherapy

    NASA Astrophysics Data System (ADS)

    Glitzner, M.; Fast, M. F.; de Senneville, B. Denis; Nill, S.; Oelfke, U.; Lagendijk, J. J. W.; Raaymakers, B. W.; Crijns, S. P. M.

    2017-01-01

    In radiotherapy, abdominal and thoracic sites are candidates for motion tracking. With real-time control it is possible to adjust the multileaf collimator (MLC) position to the target position. However, the positions are not perfectly matched, and position errors arise from system delays and the complicated response of the electromechanical MLC system. Although it is possible to compensate for part of these errors by using predictors, residual errors remain and need to be compensated to retain target coverage. This work presents a method to statistically describe tracking errors and to automatically derive a patient-specific, per-segment margin to compensate for the arising underdosage on-line, i.e. during plan delivery. The statistics of the geometric error between the intended and actual machine positions are derived using kernel density estimators. Subsequently, a margin is calculated on-line according to a selected coverage parameter, which determines the amount of accepted underdosage. The margin is then applied to the actual segment to accommodate the positioning errors in the enlarged segment. The proof-of-concept was tested in an on-line tracking experiment and showed the ability to recover underdosages for two test cases, increasing V90% in the underdosed area by about 47% and 41%, respectively. The dose model used was able to predict the loss of dose due to tracking errors and could be used to infer the necessary margins. The implementation had a running time of 23 ms, which is compatible with the real-time requirements of MLC tracking systems. Its auto-adaptivity to machine and patient characteristics makes the technique a generic yet intuitive candidate for avoiding underdosages due to MLC tracking errors.
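    The paper derives margins from a kernel-density model of the tracking-error statistics; the underlying idea (choose the smallest margin covering a selected fraction of observed errors) can be sketched more simply with an empirical quantile. The error distribution and coverage level here are illustrative:

```python
import random

random.seed(1)

def coverage_margin(errors, coverage=0.90):
    """Smallest margin m such that |error| <= m for the requested fraction
    of observed tracking errors (plain empirical quantile, standing in for
    the paper's kernel-density estimate)."""
    mags = sorted(abs(e) for e in errors)
    idx = min(len(mags) - 1, int(coverage * len(mags)))
    return mags[idx]

# Simulated residual tracking errors (mm): Gaussian with 1 mm std.
errors = [random.gauss(0.0, 1.0) for _ in range(20000)]
margin = coverage_margin(errors, 0.90)
print(margin)  # ~1.64 mm: the 90% two-sided coverage point of |N(0,1)|
```

    The coverage parameter plays the same role as in the paper: it trades accepted underdosage against the size of the enlarged segment.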

  16. Contingent negative variation (CNV) associated with sensorimotor timing error correction.

    PubMed

    Jang, Joonyong; Jones, Myles; Milne, Elizabeth; Wilson, Daniel; Lee, Kwang-Hyuk

    2016-02-15

    Detection and subsequent correction of sensorimotor timing errors are fundamental to adaptive behavior. Using scalp-recorded event-related potentials (ERPs), we sought to find ERP components that are predictive of error correction performance during rhythmic movements. Healthy right-handed participants were asked to synchronize their finger taps to a regular tone sequence (every 600 ms) while EEG data were continuously recorded. Data from 15 participants were analyzed. Occasional irregularities were built into the stimulus presentation timing: 90 ms before (advances: negative shift) or after (delays: positive shift) the expected time point. A tapping condition alternated with a listening condition in which an identical stimulus sequence was presented but participants did not tap. Behavioral error correction was observed immediately following a shift, with a degree of over-correction for positive shifts. Our stimulus-locked ERP analysis revealed (1) increased auditory N1 amplitude for the positive shift condition and decreased auditory N1 modulation for the negative shift condition, and (2) a second enhanced negativity (N2) in the tapping positive condition compared with the tapping negative condition. In response-locked epochs, we observed a CNV (contingent negative variation)-like negativity with earlier latency in the tapping negative condition than in the tapping positive condition. This CNV-like negativity peaked around the onset of the subsequent tap; the earlier the peak, the better the error correction performance for negative shifts, while the later the peak, the better the error correction performance for positive shifts. This study showed that the CNV-like negativity was associated with error correction performance during our sensorimotor synchronization task. Auditory N1 and N2 were differentially involved in negative vs. positive error correction. 
However, we did not find evidence for their involvement in behavioral error correction. Overall, our study provides the basis from which further research on the role of the CNV in perceptual and motor timing can be developed. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Eigenvector method for umbrella sampling enables error analysis

    PubMed Central

    Thiede, Erik H.; Van Koten, Brian; Weare, Jonathan; Dinner, Aaron R.

    2016-01-01

    Umbrella sampling efficiently yields equilibrium averages that depend on exploring rare states of a model by biasing simulations to windows of coordinate values and then combining the resulting data with physical weighting. Here, we introduce a mathematical framework that casts the step of combining the data as an eigenproblem. The advantage to this approach is that it facilitates error analysis. We discuss how the error scales with the number of windows. Then, we derive a central limit theorem for averages that are obtained from umbrella sampling. The central limit theorem suggests an estimator of the error contributions from individual windows, and we develop a simple and computationally inexpensive procedure for implementing it. We demonstrate this estimator for simulations of the alanine dipeptide and show that it emphasizes low free energy pathways between stable states in comparison to existing approaches for assessing error contributions. Our work suggests the possibility of using the estimator and, more generally, the eigenvector method for umbrella sampling to guide adaptation of the simulation parameters to accelerate convergence. PMID:27586912
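
    The eigenproblem view can be illustrated schematically: the window weights appear as the left eigenvector (eigenvalue 1) of a row-stochastic overlap matrix, which can be found by power iteration. The 3-window matrix below is invented for illustration and is not data from the paper.

```python
def stationary_weights(F, iters=200):
    """Power iteration for the left eigenvector z = zF of a row-stochastic
    matrix F, normalized to sum to 1 (schematic of the fixed point that
    combines per-window data)."""
    n = len(F)
    z = [1.0 / n] * n
    for _ in range(iters):
        z = [sum(z[i] * F[i][j] for i in range(n)) for j in range(n)]
        s = sum(z)
        z = [v / s for v in z]  # renormalize each sweep
    return z

# Toy 3-window overlap matrix (rows sum to 1); neighboring windows overlap.
F = [[0.8, 0.2, 0.0],
     [0.1, 0.8, 0.1],
     [0.0, 0.3, 0.7]]
print([round(v, 3) for v in stationary_weights(F)])  # -> [0.273, 0.545, 0.182]
```

    The exact stationary vector for this toy matrix is (3/11, 6/11, 2/11), so the iteration can be checked by hand.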

  18. Value stream mapping of the Pap test processing procedure: a lean approach to improve quality and efficiency.

    PubMed

    Michael, Claire W; Naik, Kalyani; McVicker, Michael

    2013-05-01

    We developed a value stream map (VSM) of the Papanicolaou test procedure to identify opportunities to reduce waste and errors, created a new VSM, and implemented a new process emphasizing Lean tools. Preimplementation data revealed the following: (1) processing time (PT) for 1,140 samples averaged 54 hours; (2) 27 accessioning errors were detected on review of 357 random requisitions (7.6%); and (3) 5 of 20,060 tests had labeling errors that went undetected at accessioning; 4 of these were caught later during specimen processing, but 1 reached the reporting stage. Postimplementation data were as follows: (1) PT for 1,355 samples averaged 31 hours; (2) 17 accessioning errors were detected on review of 385 random requisitions (4.4%); and (3) no labeling errors went undetected. Our results demonstrate that implementing Lean methods, such as first-in first-out processing and minimized batch sizes, with staff actively participating in the improvement process, allows for higher quality, greater patient safety, and improved efficiency.

  19. Finkelstein's test: a descriptive error that can produce a false positive.

    PubMed

    Elliott, B G

    1992-08-01

    Over the last three decades an error in performing Finkelstein's test has crept into the English literature, in both textbooks and journals. This error can produce a false positive and, if relied upon, can lead to a wrong diagnosis and inappropriate surgery.

  20. Coherent detection of position errors in inter-satellite laser communications

    NASA Astrophysics Data System (ADS)

    Xu, Nan; Liu, Liren; Liu, De'an; Sun, Jianfeng; Luan, Zhu

    2007-09-01

    Owing to its improved receiver sensitivity and wavelength selectivity, coherent detection has become an attractive alternative to direct detection in inter-satellite laser communications. A novel method for coherent detection of position-error information is proposed. A coherent communication system generally consists of a receive telescope, a local oscillator, an optical hybrid, photoelectric detectors, and an optical phase-locked loop (OPLL). Building on this system composition, the method adds a CCD and a computer as a position-error detector. The CCD captures the interference pattern while the transmitted data from the transmitter laser are being detected. After the pattern is processed and analyzed by the computer, target position information is obtained from characteristic parameters of the interference pattern. The position errors serve as the control signal of the PAT subsystem, driving the receiver telescope to keep tracking the target. A theoretical derivation and analysis are presented. The application extends to a coherent laser range finder, in which object distance and position information can be obtained simultaneously.

  1. The Importance of Semi-Major Axis Knowledge in the Determination of Near-Circular Orbits

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Schiesser, Emil R.

    1998-01-01

    Modern orbit determination has mostly been accomplished using Cartesian coordinates. This usage has carried over in recent years to the use of GPS for satellite orbit determination. The unprecedented positioning accuracy of GPS has tended to focus attention more on the system's capability to determine the spacecraft's location at a particular epoch than on its accuracy in determination of the orbit, per se. As is well known, the latter depends on a coordinated knowledge of position, velocity, and the correlation between their errors. Failure to determine a properly coordinated position/velocity state vector at a given epoch can lead to an epoch state that does not propagate well, and/or may not be usable for the execution of orbit adjustment maneuvers. For the quite common case of near-circular orbits, the degree to which position and velocity estimates are properly coordinated is largely captured by the error in semi-major axis (SMA) they jointly produce. Figure 1 depicts the relationships among radius error, speed error, and their correlation that exist for a typical low-altitude Earth orbit. Two familiar consequences of the relationships shown in Figure 1 are the following: (1) downrange position error grows at the per-orbit rate of 3(pi) times the SMA error; (2) a velocity change imparted to the orbit will have an error of (pi) divided by the orbit period times the SMA error. A less familiar consequence occurs in the problem of initializing the covariance matrix for a sequential orbit determination filter. An initial covariance consistent with orbital dynamics should be used if the covariance is to propagate well. Properly accounting for the SMA error of the initial state in the construction of the initial covariance accomplishes half of this objective, by specifying the partition of the covariance corresponding to down-track position and radial velocity errors. 
The remainder of the in-plane covariance partition may be specified in terms of the flight path angle error of the initial state. Figure 2 illustrates the effect of properly and improperly initializing a covariance. This figure was produced by propagating the covariance shown on the plot, without process noise, in a circular low Earth orbit whose period is 5828.5 seconds. The upper subplot, in which the proper relationships among position, velocity, and their correlation have been used, shows overall error growth, in terms of the standard deviations of the inertial position coordinates, of about half that of the lower subplot, whose initial covariance was based on other considerations.
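
    The two stated consequences are simple enough to encode directly. The sketch below uses the 5828.5 s orbital period quoted for Figure 2; the 10 m SMA error is a hypothetical value chosen for illustration.

```python
import math

def downrange_error_per_orbit(sma_error_m: float) -> float:
    """Downrange position error growth per orbit: 3*pi times the SMA error."""
    return 3.0 * math.pi * sma_error_m

def velocity_change_error(sma_error_m: float, period_s: float) -> float:
    """Error in an imparted velocity change: (pi / period) times the SMA error."""
    return math.pi / period_s * sma_error_m

# Example: a 10 m SMA error in the 5828.5 s circular LEO of Figure 2.
print(downrange_error_per_orbit(10.0))      # ~94.2 m of downrange drift per orbit
print(velocity_change_error(10.0, 5828.5))  # ~0.0054 m/s error in a maneuver
```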

  2. Enhanced Pedestrian Navigation Based on Course Angle Error Estimation Using Cascaded Kalman Filters

    PubMed Central

    Park, Chan Gook

    2018-01-01

    An enhanced pedestrian dead reckoning (PDR) based navigation algorithm, which uses two cascaded Kalman filters (TCKF) for the estimation of course angle and navigation errors, is proposed. The proposed algorithm uses a foot-mounted inertial measurement unit (IMU), waist-mounted magnetic sensors, and a zero velocity update (ZUPT) based inertial navigation technique with TCKF. The first stage filter estimates the course angle error of a human, which is closely related to the heading error of the IMU. In order to obtain the course measurements, the filter uses magnetic sensors and a position-trace based course angle. For preventing magnetic disturbance from contaminating the estimation, the magnetic sensors are attached to the waistband. Because the course angle error is mainly due to the heading error of the IMU, and the characteristic error of the heading angle is highly dependent on that of the course angle, the estimated course angle error is used as a measurement for estimating the heading error in the second stage filter. At the second stage, an inertial navigation system-extended Kalman filter-ZUPT (INS-EKF-ZUPT) method is adopted. As the heading error is estimated directly by using course-angle error measurements, the estimation accuracy for the heading and yaw gyro bias can be enhanced, compared with the ZUPT-only case, which eventually enhances the position accuracy more efficiently. The performance enhancements are verified via experiments, and the way-point position error for the proposed method is compared with those for the ZUPT-only case and with other cases that use ZUPT and various types of magnetic heading measurements. The results show that the position errors are reduced by a maximum of 90% compared with the conventional ZUPT based PDR algorithms. PMID:29690539
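
    As a rough illustration of the first-stage idea only (not the authors' TCKF implementation), a scalar Kalman filter can track a slowly drifting heading-type error from noisy course-angle-error measurements. All noise parameters and data below are invented for the example.

```python
import random

def scalar_kf(measurements, q=1e-4, r=0.25):
    """Minimal scalar Kalman filter: estimate a slowly drifting error state
    (random-walk process noise variance q) from noisy measurements
    (measurement noise variance r)."""
    x, p = 0.0, 1.0          # initial state estimate and variance
    estimates = []
    for z in measurements:
        p += q               # predict: random-walk drift inflates variance
        k = p / (p + r)      # Kalman gain
        x += k * (z - x)     # update with the course-angle-error measurement
        p *= (1.0 - k)
        estimates.append(x)
    return estimates

random.seed(0)
true_err = 2.0               # deg; constant heading error for illustration
zs = [true_err + random.gauss(0.0, 0.5) for _ in range(200)]
est = scalar_kf(zs)
print(round(est[-1], 2))     # converges close to 2.0
```

    In the actual algorithm this estimated course-angle error then feeds the second-stage INS-EKF-ZUPT filter as a heading-error measurement.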

  3. Image guidance during head-and-neck cancer radiation therapy: analysis of alignment trends with in-room cone-beam computed tomography scans.

    PubMed

    Zumsteg, Zachary; DeMarco, John; Lee, Steve P; Steinberg, Michael L; Lin, Chun Shu; McBride, William; Lin, Kevin; Wang, Pin-Chieh; Kupelian, Patrick; Lee, Percy

    2012-06-01

    On-board cone-beam computed tomography (CBCT) is currently available for alignment of patients with head-and-neck cancer before radiotherapy. However, daily CBCT is time intensive and increases the overall radiation dose. We assessed the feasibility of using the average couch shifts from the first several CBCTs to estimate and correct for the presumed systematic setup error. Fifty-six patients with head-and-neck cancer who received daily CBCT before intensity-modulated radiation therapy had recorded shift values in the medial-lateral, superior-inferior, and anterior-posterior dimensions. The average displacements in each direction were calculated for each patient based on the first five or 10 CBCT shifts and were presumed to represent the systematic setup error. The residual error after this correction was determined by subtracting the calculated shifts from the shifts obtained using daily CBCT. The magnitude of the average daily residual three-dimensional (3D) error was 4.8 ± 1.4 mm, 3.9 ± 1.3 mm, and 3.7 ± 1.1 mm for uncorrected, five CBCT corrected, and 10 CBCT corrected protocols, respectively. With no image guidance, 40.8% of fractions would have been >5 mm off target. Using the first five CBCT shifts to correct subsequent fractions, this percentage decreased to 19.0% of all fractions delivered, and the percentage of patients with average daily 3D errors >5 mm decreased from 35.7% to 14.3% vs. no image guidance. Using an average of the first 10 CBCT shifts did not significantly improve this outcome. Using the first five CBCT shift measurements as an estimation of the systematic setup error improves daily setup accuracy for a subset of patients with head-and-neck cancer receiving intensity-modulated radiation therapy and primarily benefited those with large 3D correction vectors (>5 mm). Daily CBCT is still necessary until methods are developed that more accurately determine which patients may benefit from alternative imaging strategies. 
Copyright © 2012 Elsevier Inc. All rights reserved.
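
    The correction protocol reduces to simple vector arithmetic: average the first few shifts, subtract that estimate from later shifts, and measure the residual 3D error. The shift values and variable names below are hypothetical, not data from the study.

```python
import math

def residual_errors(daily_shifts, n_calib=5):
    """Estimate the systematic setup error as the mean of the first n_calib
    CBCT shifts, then return the residual 3-D error magnitude for each
    subsequent fraction (all values in mm)."""
    calib = daily_shifts[:n_calib]
    sys_err = [sum(axis) / n_calib for axis in zip(*calib)]
    residuals = []
    for shift in daily_shifts[n_calib:]:
        d = [s - m for s, m in zip(shift, sys_err)]
        residuals.append(math.sqrt(sum(c * c for c in d)))
    return residuals

# Hypothetical (ML, SI, AP) couch shifts in mm for one patient:
shifts = [(3, 1, -2), (4, 0, -1), (2, 2, -2), (3, 1, -1), (3, 1, -2),
          (5, 1, -2), (3, 3, -1)]
res = residual_errors(shifts)
print([round(r, 2) for r in res])  # -> [2.04, 2.09]
```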

  4. Stable estimate of primary OC/EC ratios in the EC tracer method

    NASA Astrophysics Data System (ADS)

    Chu, Shao-Hang

    In fine particulate matter studies, the primary OC/EC ratio plays an important role in estimating the secondary organic aerosol contribution to PM2.5 concentrations using the EC tracer method. In this study, numerical experiments are carried out to test and compare various statistical techniques for estimating primary OC/EC ratios. The influence of random measurement errors in both primary OC and EC measurements on the estimation of the expected primary OC/EC ratios is examined. It is found that random measurement errors in EC generally create an underestimation of the slope and an overestimation of the intercept of the ordinary least-squares regression line. The Deming regression analysis performs much better than the ordinary regression, but it tends to overcorrect the problem by slightly overestimating the slope and underestimating the intercept. Averaging the ratios directly is usually undesirable because the average is strongly influenced by unrealistically high OC/EC ratios resulting from random measurement errors at low EC concentrations. The errors generally result in a skewed distribution of the OC/EC ratios even if the parent distributions of OC and EC are close to normal. When measured OC contains a significant amount of non-combustion OC, Deming regression is a much better tool and should be used to estimate both the primary OC/EC ratio and the non-combustion OC. However, if the non-combustion OC is negligibly small, the best and most robust estimator of the OC/EC ratio turns out to be the simple ratio of the OC and EC averages. It not only reduces random errors by averaging the individual variables separately but also acts as a weighted average of ratios, minimizing the influence of unrealistically high OC/EC ratios created by measurement errors at low EC concentrations. The median of the OC/EC ratios ranks a close second, and the geometric mean of the ratios ranks third. 
This is because their estimations are insensitive to questionable extreme values. A real world example is given using the ambient data collected from an Atlanta STN site during the winter of 2001-2002.
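
    The estimators compared above can be contrasted on synthetic data. In the sketch below the true ratio, noise levels, and clamping are invented for illustration; the ratio of averages stays near the true value while individual ratios at low EC are inflated.

```python
import math
import random
import statistics

def ratio_estimators(oc, ec):
    """Candidate estimators of the primary OC/EC ratio: ratio of averages,
    mean of ratios, median of ratios, and geometric mean of ratios."""
    ratios = [o / e for o, e in zip(oc, ec)]
    log_mean = statistics.mean(math.log(r) for r in ratios)
    return {
        "ratio_of_averages": sum(oc) / sum(ec),
        "mean_of_ratios": statistics.mean(ratios),
        "median_of_ratios": statistics.median(ratios),
        "geometric_mean_of_ratios": math.exp(log_mean),
    }

# Synthetic data: true primary ratio 2.0, random measurement error on both
# species, with measured values clamped away from zero so ratios stay finite.
random.seed(1)
true_ratio = 2.0
ec_true = [random.uniform(0.2, 3.0) for _ in range(500)]
ec = [max(e + random.gauss(0.0, 0.1), 0.05) for e in ec_true]
oc = [max(true_ratio * e + random.gauss(0.0, 0.1), 0.01) for e in ec_true]
est = ratio_estimators(oc, ec)
print({k: round(v, 3) for k, v in est.items()})
```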

  5. Long-term cliff retreat and erosion hotspots along the central shores of the Monterey Bay National Marine Sanctuary

    USGS Publications Warehouse

    Moore, Laura J.; Griggs, Gary B.

    2002-01-01

    Quantification of cliff retreat rates for the southern half of Santa Cruz County, CA, USA, located within the Monterey Bay National Marine Sanctuary, using the softcopy/geographic information system (GIS) methodology results in average cliff retreat rates of 7–15 cm/yr between 1953 and 1994. The coastal dunes at the southern end of Santa Cruz County migrate seaward and landward through time and display net accretion between 1953 and 1994, which is partially due to development. In addition, three critically eroding segments of coastline with high average erosion rates ranging from 20 to 63 cm/yr are identified as erosion ‘hotspots’. These locations include: Opal Cliffs, Depot Hill and Manresa. Although cliff retreat is episodic, spatially variable at the scale of meters, and the factors affecting cliff retreat vary along the Santa Cruz County coastline, there is a compensation between factors affecting retreat such that over the long-term the coastline maintains a relatively smooth configuration. The softcopy/GIS methodology significantly reduces errors inherent in the calculation of retreat rates in high-relief areas (e.g. erosion rates generated in this study are generally correct to within 10 cm) by removing errors due to relief displacement. Although the resulting root mean squared error for erosion rates is relatively small, simple projections of past erosion rates are inadequate to provide predictions of future cliff position. Improved predictions can be made for individual coastal segments by using a mean erosion rate and the standard deviation as guides to future cliff behavior in combination with an understanding of processes acting along the coastal segments in question. This methodology can be applied on any high-relief coast where retreat rates can be measured.

  6. Comparison of three optical tracking systems in a complex navigation scenario.

    PubMed

    Rudolph, Tobias; Ebert, Lars; Kowal, Jens

    2010-01-01

    Three-dimensional rotational X-ray imaging with the SIREMOBIL Iso-C3D (Siemens AG, Medical Solutions, Erlangen, Germany) has become a well-established intra-operative imaging modality. In combination with a tracking system, the Iso-C3D provides inherently registered image volumes ready for direct navigation. This is achieved by means of a pre-calibration procedure. The aim of this study was to investigate the influence of the tracking system used on the overall navigation accuracy of direct Iso-C3D navigation. Three models of tracking system were used in the study: two Optotrak 3020s, a Polaris P4, and a Polaris Spectra, with both Polaris systems operated in passive mode. The evaluation was carried out at two different sites using two Iso-C3D devices. To measure the navigation accuracy, a number of phantom experiments were conducted using an acrylic phantom equipped with titanium spheres. After scanning, a special pointer was used to pinpoint these markers. The difference between the digitized and navigated positions served as the accuracy measure. Up to 20 phantom scans were performed for each tracking system. The average accuracy measured was 0.86 mm and 0.96 mm for the two Optotrak 3020 systems, 1.15 mm for the Polaris P4, and 1.04 mm for the Polaris Spectra. The Polaris systems showed a higher maximal error, but all three systems yielded similar minimal errors. On average, all tracking systems used in this study delivered similar navigation accuracy. As expected, the passive Polaris systems showed higher maximal errors; depending on the application constraints, however, this might be negligible.

  7. Development of optoelectronic monitoring system for ear arterial pressure waveforms

    NASA Astrophysics Data System (ADS)

    Sasayama, Satoshi; Imachi, Yu; Yagi, Tamotsu; Imachi, Kou; Ono, Toshirou; Man-i, Masando

    1994-02-01

    Invasive intra-arterial blood pressure measurement is the most accurate method but is not practical if the subject is in motion. The apparatus developed by Wesseling et al., based on the volume-clamp method of Penaz (Finapres), is able to monitor continuous finger arterial pressure waveforms noninvasively. The limitation of Finapres is the difficulty of measuring the pressure of a subject during work that involves finger or arm action. Because the Finapres detector is attached to the subject's finger, the measurements are affected by the inertia of blood and hydrostatic effects caused by arm or finger motion. To overcome this problem, the authors made a detector that attaches to the subject's ear and developed an optoelectronic monitoring system for ear arterial pressure waveforms (Earpres). An IR LED, a photodiode, and an air cuff comprise the detector. The detector was attached to a subject's ear, and the space between the air cuff and the rubber plate on which the LED and photodiode were positioned was adjusted. To evaluate the accuracy of Earpres, the following tests were conducted with the participation of 10 healthy male volunteers. The subjects rested for about five minutes, then performed standing and squatting exercises to provide wide ranges of systolic and diastolic arterial pressure. Intra- and inter-individual standard errors were calculated according to the method of van Egmond et al. The averages of the intra-individual standard errors for Earpres were small (3.7 and 2.7 mmHg for systolic and diastolic pressure, respectively). The inter-individual standard errors for Earpres were about the same as for Finapres for both systolic and diastolic pressure. The results showed the ear monitor was reliable in measuring arterial blood pressure waveforms and might be applicable to fields such as sports medicine and ergonomics.

  8. Limitations of the planning organ at risk volume (PRV) concept.

    PubMed

    Stroom, Joep C; Heijmen, Ben J M

    2006-09-01

    Previously, we determined a planning target volume (PTV) margin recipe for geometrical errors in radiotherapy equal to M(T) = 2 Sigma + 0.7 sigma, with Sigma and sigma the standard deviations describing systematic and random errors, respectively. In this paper, we investigated margins for organs at risk (OAR), yielding the so-called planning organ at risk volume (PRV). For critical organs with a maximum dose (D(max)) constraint, we calculated margins such that D(max) in the PRV is equal to the motion-averaged D(max) in the (moving) clinical target volume (CTV). We studied margins for the spinal cord in 10 head-and-neck cases and 10 lung cases, each with two different clinical plans. For critical organs with a dose-volume constraint, we also investigated whether a margin recipe was feasible. For the 20 spinal cords considered, the average margin recipe found was M(R) = 1.6 Sigma + 0.2 sigma, with variations for systematic and random errors of 1.2 Sigma to 1.8 Sigma and -0.2 sigma to 0.6 sigma, respectively. The variations were due to differences in shape and position of the dose distributions with respect to the cords. The recipe also depended significantly on the volume definition of D(max). For critical organs with a dose-volume constraint, the PRV concept appears even less useful, because a margin around, e.g., the rectum changes the volume in such a manner that dose-volume constraints stop making sense. The concept of PRV for planning of radiotherapy is of limited use. Therefore, alternative ways should be developed to include geometric uncertainties of OARs in radiotherapy planning.
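
    Both recipes are linear in the two error components, so they can be encoded directly (values in mm; the example Sigma and sigma values are hypothetical):

```python
def ptv_margin(sigma_sys: float, sigma_rand: float) -> float:
    """PTV margin recipe from the authors' earlier work: M(T) = 2*Sigma + 0.7*sigma."""
    return 2.0 * sigma_sys + 0.7 * sigma_rand

def prv_margin(sigma_sys: float, sigma_rand: float) -> float:
    """Average spinal-cord PRV margin recipe found here: M(R) = 1.6*Sigma + 0.2*sigma."""
    return 1.6 * sigma_sys + 0.2 * sigma_rand

# Example: Sigma = 2 mm systematic, sigma = 3 mm random setup error.
print(ptv_margin(2.0, 3.0))  # -> 6.1 mm target margin
print(prv_margin(2.0, 3.0))  # -> 3.8 mm organ-at-risk margin
```

    Note how the PRV recipe weights random errors far less than the PTV recipe, which is the paper's central contrast.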

  9. Preventable mix-ups of tuberculin and vaccines: reports to the US Vaccine and Drug Safety Reporting Systems.

    PubMed

    Chang, Soju; Pool, Vitali; O'Connell, Kathryn; Polder, Jacquelyn A; Iskander, John; Sweeney, Colleen; Ball, Robert; Braun, M Miles

    2008-01-01

    Errors involving the mix-up of tuberculin purified protein derivative (PPD) and vaccines leading to adverse reactions and unnecessary medical management have been reported previously. To determine the frequency of PPD-vaccine mix-ups reported to the US Vaccine Adverse Event Reporting System (VAERS) and the Adverse Event Reporting System (AERS), characterize adverse events and clusters involving mix-ups and describe reported contributory factors. We reviewed AERS reports from 1969 to 2005 and VAERS reports from 1990 to 2005. We defined a mix-up error event as an incident in which a single patient or a cluster of patients inadvertently received vaccine instead of a PPD product or received a PPD product instead of vaccine. We defined a cluster as inadvertent administration of PPD or vaccine products to more than one patient in the same facility within 1 month. Of 115 mix-up events identified, 101 involved inadvertent administration of vaccines instead of PPD. Product confusion involved PPD and multiple vaccines. The annual number of reported mix-ups increased from an average of one event per year in the early 1990s to an average of ten events per year in the early part of this decade. More than 240 adults and children were affected and the majority reported local injection site reactions. Four individuals were hospitalized (all recovered) after receiving the wrong products. Several patients were inappropriately started on tuberculosis prophylaxis as a result of a vaccine local reaction being interpreted as a positive tuberculin skin test. Reported potential contributory factors involved both system factors (e.g. similar packaging) and human errors (e.g. failure to read label before product administration). To prevent PPD-vaccine mix-ups, proper storage, handling and administration of vaccine and PPD products is necessary.

  10. Atmospheric mold spore counts in relation to meteorological parameters

    NASA Astrophysics Data System (ADS)

    Katial, R. K.; Zhang, Yiming; Jones, Richard H.; Dyer, Philip D.

    Fungal spore counts of Cladosporium, Alternaria, and Epicoccum were studied during 8 years in Denver, Colorado. Fungal spore counts were obtained daily during the pollinating season by a Rotorod sampler. Weather data were obtained from the National Climatic Data Center. Daily averages of temperature, relative humidity, daily precipitation, barometric pressure, and wind speed were studied. A time series analysis was performed on the data to mathematically model the spore counts in relation to weather parameters. Using SAS PROC ARIMA software, a regression analysis was performed, regressing the spore counts on the weather variables assuming an autoregressive moving average (ARMA) error structure. Cladosporium was found to be positively correlated (P<0.02) with average daily temperature and relative humidity, and negatively correlated with precipitation. Alternaria and Epicoccum did not show increased predictability with weather variables. A mathematical model was derived for Cladosporium spore counts using the annual seasonal cycle and significant weather variables. The model for Alternaria and Epicoccum incorporated the annual seasonal cycle. Fungal spore counts can be modeled by time series analysis and related to meteorological parameters while controlling for seasonality; this modeling can provide estimates of exposure to fungal aeroallergens.

  11. SU-E-T-144: Effective Analysis of VMAT QA Generated Trajectory Log Files for Medical Accelerator Predictive Maintenance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Able, CM; Baydush, AH; Nguyen, C

    Purpose: To determine the effectiveness of SPC analysis for a model predictive maintenance process that uses accelerator-generated parameter and performance data contained in trajectory log files. Methods: Each trajectory file is decoded and a total of 131 axis positions are recorded (collimator jaw position, gantry angle, each MLC, etc.). This raw data is processed, and either axis positions are extracted at critical points during the delivery or positional change over time is used to determine axis velocity. The focus of our analysis is the accuracy, reproducibility, and fidelity of each axis. A reference positional trace of the gantry and each MLC is used as a motion baseline for cross-correlation (CC) analysis. A total of 494 parameters (482 MLC related) were analyzed using Individual and Moving Range (I/MR) charts. The chart limits were calculated using a hybrid technique that included the use of the standard 3σ limits and parameter/system specifications. Synthetic errors/changes were introduced to determine the initial effectiveness of I/MR charts in detecting relevant changes in operating parameters. The magnitude of the synthetic errors/changes was based on TG-142 and published analyses of VMAT delivery accuracy. Results: All errors introduced were detected. Synthetic positional errors of 2 mm for the collimator jaw and MLC carriage exceeded the chart limits. Gantry speed and each MLC speed are analyzed at two different points in the delivery. A simulated gantry speed error (0.2 deg/sec) and MLC speed error (0.1 cm/sec) exceeded the speed chart limits. A gantry position error of 0.2 deg was detected by the CC maximum value charts. The MLC position error of 0.1 cm was detected by the CC maximum value location charts for every MLC. Conclusion: SPC I/MR evaluation of trajectory log file parameters may be effective in providing an early warning of performance degradation or component failure for medical accelerator systems.
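
    A minimal version of the I/MR chart logic, using only the standard control-chart constants (2.66 for the individuals chart, 3.267 for the moving-range chart) rather than the authors' hybrid limits; the jaw-position data below are invented:

```python
def imr_limits(data):
    """Individuals/Moving-Range (I/MR) chart limits with standard constants:
    I-chart limits xbar +/- 2.66*MRbar, MR-chart UCL 3.267*MRbar."""
    xbar = sum(data) / len(data)
    mrs = [abs(b - a) for a, b in zip(data, data[1:])]  # moving ranges
    mrbar = sum(mrs) / len(mrs)
    return {
        "i_center": xbar,
        "i_lcl": xbar - 2.66 * mrbar,
        "i_ucl": xbar + 2.66 * mrbar,
        "mr_ucl": 3.267 * mrbar,
    }

def out_of_control(data, limits):
    """Indices of individual values outside the I-chart limits."""
    return [i for i, x in enumerate(data)
            if not (limits["i_lcl"] <= x <= limits["i_ucl"])]

# Hypothetical jaw positions (cm) at one control point; the last delivery
# carries a synthetic 2 mm shift.
jaw = [10.00, 10.01, 9.99, 10.00, 10.02, 9.99, 10.01, 10.00, 10.20]
limits = imr_limits(jaw[:-1])       # baseline from in-control deliveries
print(out_of_control(jaw, limits))  # -> [8]: the shifted delivery is flagged
```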

  12. Computational aspects of geometric correction data generation in the LANDSAT-D imagery processing

    NASA Technical Reports Server (NTRS)

    Levine, I.

    1981-01-01

    A method is presented for systematic and geodetic correction data calculation. It is based on presentation of image distortions as a sum of nominal distortions and linear effects caused by variations of the spacecraft position and attitude variables from their nominals. The method may be used for both MSS and TM image data, and it is incorporated into the processing by means of mostly offline calculations. Modeling shows that the maximal errors of the method are of the order of 5 m at the worst point in a frame; the standard deviations of the average errors are less than 0.8 m.

  13. Local Setup Reproducibility of the Spinal Column When Using Intensity-Modulated Radiation Therapy for Craniospinal Irradiation With Patient in Supine Position

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoiber, Eva Maria, E-mail: eva.stoiber@med.uni-heidelberg.de; Department of Medical Physics, German Cancer Research Center, Heidelberg; Giske, Kristina

    Purpose: To evaluate local positioning errors of the lumbar spine during fractionated intensity-modulated radiotherapy of patients treated with craniospinal irradiation, and to assess the impact of rotational error correction on these uncertainties for one patient setup correction strategy. Methods and Materials: Eight patients (6 adults, 2 children) treated with helical tomotherapy for craniospinal irradiation were retrospectively chosen for this analysis. Patients were immobilized with a deep-drawn Aquaplast head mask. In addition to daily megavoltage control computed tomography scans of the skull, positioning of the lumbar spine was assessed once a week. For this purpose, patient setup was corrected by a target point correction derived from a registration of the patient's skull. The residual positioning variations of the lumbar spine were evaluated applying a rigid-registration algorithm. The impact of different rotational error corrections was simulated. Results: After target point correction, residual local positioning errors of the lumbar spine varied considerably. Rotational error correction about the craniocaudal axis did not improve or deteriorate these translational errors, whereas a simulated rotational error correction about the right-left and anterior-posterior axes increased these errors by a factor of 2 to 3. Conclusion: The patient fixation used allows for deformations between the patient's skull and spine. Therefore, for the setup correction strategy evaluated in this study, generous margins for the lumbar spinal target volume are needed to prevent a local geographic miss. With any applied correction strategy, it needs to be evaluated whether or not a rotational error correction is beneficial.

  14. Leader personality and crew effectiveness - A full-mission simulation experiment

    NASA Technical Reports Server (NTRS)

    Chidester, Thomas R.; Foushee, H. Clayton

    1989-01-01

    A full-mission simulation research study was completed to assess the impact of individual personality on crew performance. Using a selection algorithm described by Chidester (1987), captains were classified as fitting one of three profiles along a battery of personality assessment scales. The performances of 23 crews led by captains fitting each profile were contrasted over a one and one-half day simulated trip. Crews led by captains fitting a positive Instrumental-Expressive profile (high achievement motivation and interpersonal skill) were consistently effective and made fewer errors. Crews led by captains fitting a Negative Expressive profile (below average achievement motivation, negative expressive style, such as complaining) were consistently less effective and made more errors. Crews led by captains fitting a Negative Instrumental profile (high levels of competitiveness, Verbal Aggressiveness, and Impatience and Irritability) were less effective on the first day but equal to the best on the second day. These results underscore the importance of stable personality variables as predictors of team coordination and performance.

  15. Leader personality and crew effectiveness: Factors influencing performance in full-mission air transport simulation

    NASA Technical Reports Server (NTRS)

    Chidester, Thomas R.; Foushee, H. Clayton

    1989-01-01

    A full mission simulation research study was completed to assess the potential for selection along dimensions of personality. Using a selection algorithm described by Chidester (1987), captains were classified as fitting one of three profiles using a battery of personality assessment scales, and the performances of 23 crews led by captains fitting each profile were contrasted over a one and one-half day simulated trip. Crews led by captains fitting a Positive Instrumental Expressive profile (high achievement motivation and interpersonal skill) were consistently effective and made fewer errors. Crews led by captains fitting a Negative Communion profile (below average achievement motivation, negative expressive style, such as complaining) were consistently less effective and made more errors. Crews led by captains fitting a Negative Instrumental profile (high levels of Competitiveness, Verbal Aggressiveness, and Impatience and Irritability) were less effective on the first day but equal to the best on the second day. These results underscore the importance of stable personality variables as predictors of team coordination and performance.

  16. Automatic knee cartilage delineation using inheritable segmentation

    NASA Astrophysics Data System (ADS)

    Dries, Sebastian P. M.; Pekar, Vladimir; Bystrov, Daniel; Heese, Harald S.; Blaffert, Thomas; Bos, Clemens; van Muiswinkel, Arianne M. C.

    2008-03-01

We present a fully automatic method for segmentation of knee joint cartilage from fat-suppressed MRI. The method first applies 3-D model-based segmentation, which reliably segments the femur, patella, and tibia by iterative adaptation of the model to image gradients. Thin-plate-spline interpolation is then used to position deformable cartilage models for each of the three bones with reference to the segmented bone models. After initialization, the cartilage models are fine-tuned by automatic iterative adaptation to the image data based on gray-value gradients. The method was validated on a collection of 8 (3 left, 5 right) fat-suppressed datasets and achieved a sensitivity of 83±6% relative to manual segmentation on a per-voxel basis as the primary endpoint. Gross cartilage volume measurement yielded an average error of 9±7% as the secondary endpoint. Because cartilage is a thin structure, even small distance deviations produce large errors on a per-voxel basis, making the primary endpoint a demanding criterion.
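As a toy illustration of the two endpoints, per-voxel sensitivity and gross volume error can be computed directly from binary voxel masks. The function names and the tiny masks below are invented for this sketch, not taken from the paper:

```python
# Hypothetical sketch: per-voxel sensitivity and gross volume error for
# an automatic segmentation versus a manual reference. Masks are flat
# lists of 0/1 voxel labels; real masks would be 3-D arrays.

def sensitivity(auto_mask, manual_mask):
    """True-positive rate on a per-voxel basis: TP / (TP + FN)."""
    tp = sum(1 for a, m in zip(auto_mask, manual_mask) if a == 1 and m == 1)
    fn = sum(1 for a, m in zip(auto_mask, manual_mask) if a == 0 and m == 1)
    return tp / (tp + fn)

def volume_error(auto_mask, manual_mask):
    """Relative error in gross volume (voxel counts), as a fraction."""
    v_auto = sum(auto_mask)
    v_manual = sum(manual_mask)
    return abs(v_auto - v_manual) / v_manual

manual = [0, 1, 1, 1, 1, 0, 0, 1]
auto   = [0, 1, 1, 0, 1, 1, 0, 1]
print(sensitivity(auto, manual))   # 4 of 5 reference voxels found -> 0.8
print(volume_error(auto, manual))  # 5 vs 5 voxels -> 0.0
```

The example also shows why a thin structure punishes the per-voxel metric: shifting a one-voxel-thick surface by a single voxel turns every boundary voxel into a false negative.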

  17. Virtual wayfinding using simulated prosthetic vision in gaze-locked viewing.

    PubMed

    Wang, Lin; Yang, Liancheng; Dagnelie, Gislin

    2008-11-01

To assess virtual maze navigation performance with simulated prosthetic vision in gaze-locked viewing, under conditions of varying luminance contrast, background noise, and phosphene dropout. Four normally sighted subjects performed virtual maze navigation using simulated prosthetic vision in gaze-locked viewing under five conditions of luminance contrast, background noise, and phosphene dropout. Navigation performance was measured as the time required to traverse a 10-room maze using a game controller and the number of errors made during the trip. Navigation performance time (1) became stable after 6 to 10 trials, (2) remained similar on average at luminance contrasts of 68% and 16% but varied more at 16%, (3) was not significantly affected by background noise, and (4) increased by 40% when 30% of phosphenes were removed. Navigation time and number of errors were significantly and positively correlated. Assuming that the simulated gaze-locked viewing conditions extend to implant wearers, such prosthetic vision can be helpful for wayfinding in simple mobility tasks, though phosphene dropout may interfere with performance.
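The reported positive correlation between traversal time and error count is a standard Pearson correlation. A minimal sketch, with made-up data purely for demonstration:

```python
# Illustrative sketch: Pearson correlation between maze traversal time
# and error count, the two performance measures used above. The data
# below are invented; only the computation reflects the text.
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

times  = [95, 120, 150, 180, 210]   # seconds per 10-room traversal
errors = [1, 2, 2, 4, 5]            # wrong turns per traversal
print(round(pearson_r(times, errors), 3))
```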

  18. SU-F-E-18: Training Monthly QA of Medical Accelerators: Illustrated Instructions for Self-Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Court, L; Wang, H; Aten, D

Purpose: To develop and test clear illustrated instructions for training of monthly mechanical QA of medical linear accelerators. Methods: Illustrated instructions were created for monthly mechanical QA, with tolerances tabulated, and underwent several rounds of review and refinement. Testers with no QA experience were then recruited from our radiotherapy department (1 student, 2 computational scientists, and 8 dosimetrists). The following parameters were progressively de-calibrated on a Varian C-series linac: Group A = gantry angle, ceiling laser position, X1 jaw position, couch longitudinal position, physical graticule position (5 testers); Group B = Group A + wall laser position, couch lateral and vertical position, collimator angle (3 testers); Group C = Group B + couch angle, wall laser angle, and optical distance indicator (3 testers). Testers were taught how to use the linac and then used the instructions to try to identify these errors. A physicist observed each session, giving support on machine operation as necessary. The instructions were further tested with groups of therapists, graduate students, and physics residents at multiple institutions. We also translated the instructions to simulate their use by non-English speakers. Results: Testers were able to follow the instructions. They determined gantry, collimator, and couch angle errors within 0.4, 0.3, and 0.9 degrees of the actual changed values, respectively. Laser positions were determined within 1 mm, and jaw positions within 2 mm. Couch position errors were determined within 2 and 3 mm for lateral/longitudinal and vertical errors, respectively. Accessory positioning errors were determined within 1 mm. ODI errors were determined within 2 mm when comparing with distance sticks and 6 mm when using blocks, indicating that distance sticks should be the preferred approach for inexperienced staff.
Conclusion: Inexperienced users were able to follow these instructions and catch errors within the criteria suggested by AAPM TG-142 for linacs used for IMRT.
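The tolerance comparison at the heart of this exercise can be sketched as a check of measured deviations against tabulated limits. The parameter names and tolerance values below are illustrative placeholders, not quotations from TG-142:

```python
# Minimal sketch of a monthly-QA tolerance check: compare each measured
# machine parameter against its baseline and a tabulated limit.
# All names and limits here are placeholders for illustration only.

TOLERANCES = {
    "gantry_angle": 1.0,      # degrees (placeholder value)
    "collimator_angle": 1.0,  # degrees (placeholder value)
    "laser_position": 1.5,    # mm (placeholder value)
    "jaw_position": 2.0,      # mm (placeholder value)
}

def check_parameter(name, measured, baseline):
    """Return (deviation, within_tolerance) for one QA parameter."""
    deviation = abs(measured - baseline)
    return deviation, deviation <= TOLERANCES[name]

dev, ok = check_parameter("gantry_angle", measured=180.5, baseline=180.0)
print(dev, ok)  # 0.5 True
```

A real monthly-QA sheet would loop such a check over every parameter in the table and flag any out-of-tolerance result for physicist review.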

  19. Position sense at the human elbow joint measured by arm matching or pointing.

    PubMed

    Tsay, Anthony; Allen, Trevor J; Proske, Uwe

    2016-10-01

Position sense at the human elbow joint has traditionally been measured in blindfolded subjects using a forearm matching task. Here we compare position errors in a matching task with errors generated when the subject uses a pointer to indicate the position of a hidden arm. Evidence from muscle vibration during forearm matching supports a role for muscle spindles in position sense. Using vibration, as well as muscle conditioning, which exploits muscle's thixotropic property, we have recently shown that position errors generated in a forearm pointing task were not consistent with a role for muscle spindles. In the present study we used a form of muscle conditioning in which elbow muscles are co-contracted at the test angle to further explore differences in position sense measured by matching and pointing. For fourteen subjects, in a matching task where the reference arm had elbow flexor and extensor muscles contracted at the test angle and the indicator arm had its flexors conditioned at 90°, matching errors lay in the direction of flexion by 6.2°. After the same conditioning of the reference arm and extension conditioning of the indicator at 0°, matching errors lay in the direction of extension (5.7°). These errors were consistent with predictions based on a role for muscle spindles in determining forearm matching outcomes. In the pointing task, subjects moved a pointer to align it with the perceived position of the hidden arm. After conditioning of the reference arm as before, pointing errors all lay in a more extended direction than the actual position of the arm, by 2.9°-7.3°, a distribution not consistent with a role for muscle spindles. We propose that in pointing, muscle spindles do not play the major role in signalling limb position that they do in matching, and that other sources of sensory input should be given consideration, including afferents from skin and joint.

  20. The use of kernel density estimators in breakthrough curve reconstruction and advantages in risk analysis

    NASA Astrophysics Data System (ADS)

    Siirila, E. R.; Fernandez-Garcia, D.; Sanchez-Vila, X.

    2014-12-01

Particle tracking (PT) techniques, often considered preferable to Eulerian techniques because of the artificial smoothing the latter introduce in breakthrough curves (BTCs), are evaluated in a risk-driven framework. Recent work has shown that, given relatively few particles (np), PT methods can yield well-constructed BTCs with kernel density estimators (KDEs). This work compares KDE and non-KDE BTCs simulated as a function of np (10^2-10^8) and averaged as a function of the exposure duration, ED. Results show that regardless of BTC shape complexity, un-averaged PT BTCs show a large bias over several orders of magnitude in concentration (C) when compared to the KDE results, remarkably even when np is as low as 10^2. With the KDE, several orders of magnitude fewer particles are required to obtain the same global error in BTC shape as the PT technique. PT and KDE BTCs are averaged as a function of the ED with standard methods and with a new method incorporating the optimal kernel bandwidth h (ANA). The lowest-error curve is obtained with the ANA method, especially for smaller EDs. The percent error in the peak of averaged BTCs, important in a risk framework, is approximately zero for all scenarios and all methods when np ≥ 10^5, but differs between the ANA and PT methods when np is lower. For fewer particles, the ANA solution provides a lower-error fit except when C oscillations are present over a short time frame. We show that obtaining a representative average exposure concentration relies on an accurate representation of the BTC, especially when data are scarce.
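The KDE idea behind the reconstruction can be sketched in a few lines: rather than binning particle arrival times into a histogram, place a Gaussian kernel of bandwidth h on each arrival and sum. The arrival times below are made up, and h is fixed by hand, whereas the paper's ANA approach selects an optimal h:

```python
# Sketch of KDE-based breakthrough-curve reconstruction from particle
# arrival times. Each arrival contributes a Gaussian bump of width h;
# the normalized sum is a smooth estimate of the BTC.
import math

def kde_btc(arrival_times, t, h):
    """Kernel-density estimate of the breakthrough curve at time t."""
    n = len(arrival_times)
    norm = 1.0 / (n * h * math.sqrt(2.0 * math.pi))
    return norm * sum(math.exp(-0.5 * ((t - ti) / h) ** 2)
                      for ti in arrival_times)

arrivals = [9.2, 9.8, 10.0, 10.1, 10.5, 11.3]   # invented arrival times
curve = [kde_btc(arrivals, t, h=0.5) for t in (8, 9, 10, 11, 12)]
print([round(c, 3) for c in curve])
```

The bandwidth trades bias against variance: too small an h reproduces the spiky histogram behavior of raw PT output, too large an h oversmooths sharp concentration peaks, which is why an optimized h matters for the peak-sensitive risk metrics discussed above.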
