Verhoeven, Karolien; Weltens, Caroline; Van den Heuvel, Frank
2015-01-01
Quantification of the setup errors is vital to define appropriate setup margins preventing geographical misses. The no-action-level (NAL) correction protocol reduces the systematic setup errors and, hence, the setup margins. The manual entry of the setup corrections in the record-and-verify software, however, increases the susceptibility of the NAL protocol to human errors. Moreover, the impact of skin mobility on the anteroposterior patient setup reproducibility in whole-breast radiotherapy (WBRT) is unknown. In this study, we therefore investigated the potential of fixed vertical couch position-based patient setup in WBRT. The possibility of introducing a threshold for correction of the systematic setup errors was also explored. We measured the anteroposterior, mediolateral, and superior-inferior setup errors during fractions 1–12 and weekly thereafter with tangential angled single-modality paired imaging. These setup data were used to simulate the residual setup errors of the NAL protocol, the fixed vertical couch position protocol, and the fixed-action-level protocol with different correction thresholds. Population statistics of the setup errors of 20 breast cancer patients and 20 breast cancer patients with additional regional lymph node (LN) irradiation were calculated to determine the setup margins of each off-line correction protocol. Our data showed the potential of the fixed vertical couch position protocol to restrict the systematic and random anteroposterior residual setup errors to 1.8 mm and 2.2 mm, respectively. Compared to the NAL protocol, a correction threshold of 2.5 mm reduced the frequency of mediolateral and superior-inferior setup corrections by 40% and 63%, respectively. The implementation of the correction threshold did not deteriorate the accuracy of the off-line setup correction compared to the NAL protocol.
The combination of the fixed vertical couch position protocol, for correction of the anteroposterior setup error, and the fixed-action-level protocol with a 2.5 mm correction threshold, for correction of the mediolateral and superior-inferior setup errors, was shown to provide adequate and comparable patient setup accuracy in WBRT and WBRT with additional LN irradiation. PACS numbers: 87.53.Kn, 87.57.-s
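The offline protocols compared in this study share a simple core: estimate the systematic error from the first few fractions, then apply it as a correction for the remaining fractions, either always (NAL) or only when it exceeds an action threshold (fixed action level). A minimal one-axis sketch, with hypothetical error values:

```python
def offline_correction(errors, n_measure=5, threshold_mm=0.0):
    """Offline correction sketch: estimate the systematic setup error as the
    mean of the first n_measure fractions and apply it as a couch correction
    for all later fractions, but only if it exceeds the action threshold.
    threshold_mm = 0.0 reproduces the NAL protocol; a positive value gives
    the fixed-action-level variant."""
    estimate = sum(errors[:n_measure]) / n_measure
    correction = estimate if abs(estimate) > threshold_mm else 0.0
    return [e - correction for e in errors[n_measure:]]

# Hypothetical mediolateral setup errors (mm) over a course of treatment
errors = [2.0, 3.0, 2.5, 3.5, 4.0, 3.2, 2.8, 3.6]
residual_nal = offline_correction(errors)                    # always correct
residual_fal = offline_correction(errors, threshold_mm=2.5)  # 2.5 mm threshold
```

With these numbers the estimated systematic error (3.0 mm) exceeds the threshold, so both protocols apply the same correction; the threshold only suppresses small, clinically insignificant corrections.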
Bertholet, Jenny; Worm, Esben; Høyer, Morten; Poulsen, Per
2017-06-01
Accurate patient positioning is crucial in stereotactic body radiation therapy (SBRT) due to a high dose regimen. Cone-beam computed tomography (CBCT) is often used for patient positioning based on radio-opaque markers. We compared six CBCT-based set-up strategies with or without rotational correction. Twenty-nine patients with three implanted markers received 3-6 fraction liver SBRT. The markers were delineated on the mid-ventilation phase of a 4D-planning-CT. One pretreatment CBCT was acquired per fraction. Set-up strategy 1 used only translational correction based on manual marker match between the CBCT and planning CT. Set-up strategy 2 used automatic 6 degrees-of-freedom registration of the vertebrae closest to the target. The 3D marker trajectories were also extracted from the projections and the mean position of each marker was calculated and used for set-up strategies 3-6. Translational correction only was used for strategy 3. Translational and rotational corrections were used for strategies 4-6 with the rotation being either vertebrae based (strategy 4), or marker based and constrained to ±3° (strategy 5) or unconstrained (strategy 6). The resulting set-up error was calculated as the 3D root-mean-square set-up error of the three markers. The set-up error of the spinal cord was calculated for all strategies. The bony anatomy set-up (2) had the largest set-up error (5.8 mm). The marker-based set-up with unconstrained rotations (6) had the smallest set-up error (0.8 mm) but the largest spinal cord set-up error (12.1 mm). The marker-based set-up with translational correction only (3) or with bony anatomy rotational correction (4) had equivalent set-up error (1.3 mm) but rotational correction reduced the spinal cord set-up error from 4.1 mm to 3.5 mm. Marker-based set-up was substantially better than bony-anatomy set-up. Rotational correction may improve the set-up, but further investigations are required to determine the optimal correction strategy.
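The figure of merit used to compare the six strategies, the 3D root-mean-square set-up error over the three implanted markers, can be sketched as follows (marker residuals are hypothetical):

```python
import math

def rms_3d_setup_error(residuals):
    """residuals: list of (dx, dy, dz) per marker, in mm.
    Returns the root mean square of the markers' 3D error vector lengths."""
    norms_sq = [dx * dx + dy * dy + dz * dz for dx, dy, dz in residuals]
    return math.sqrt(sum(norms_sq) / len(norms_sq))

# Hypothetical residual errors for three implanted markers (mm)
markers = [(0.5, -0.3, 0.2), (0.7, 0.1, -0.4), (-0.2, 0.6, 0.3)]
rms = rms_3d_setup_error(markers)
```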
Wei, Xiaobo; Liu, Mengjiao; Ding, Yun; Li, Qilin; Cheng, Changhai; Zong, Xian; Yin, Wenming; Chen, Jie; Gu, Wendong
2018-05-08
Breast-conserving surgery (BCS) plus postoperative radiotherapy has become the standard treatment for early-stage breast cancer. The aim of this study was to compare the setup accuracy of optical surface imaging by the Sentinel system with cone-beam computerized tomography (CBCT) imaging currently used in our clinic for patients who received BCS. Two optical surface scans were acquired before and immediately after couch movement correction. The correlation between the setup errors as determined by the initial optical surface scan and CBCT was analyzed. The deviation of the second optical surface scan from the reference planning CT was considered an estimate of the residual errors for the new method of patient setup correction. The consequences in terms of necessary planning target volume (PTV) margins for treatment sessions without setup correction were also evaluated. We analyzed 145 scans in 27 patients treated for early-stage breast cancer. The setup errors of skin marker-based patient alignment by optical surface scan and CBCT were correlated, and the residual setup errors as determined by the optical surface scan after couch movement correction were reduced. Optical surface imaging provides a convenient method for improving the setup accuracy for breast cancer patients without unnecessary imaging dose.
A novel method to correct for pitch and yaw patient setup errors in helical tomotherapy.
Boswell, Sarah A; Jeraj, Robert; Ruchala, Kenneth J; Olivera, Gustavo H; Jaradat, Hazim A; James, Joshua A; Gutierrez, Alonso; Pearson, Dave; Frank, Gary; Mackie, T Rock
2005-06-01
An accurate means of determining and correcting for daily patient setup errors is important to the cancer outcome in radiotherapy. While many tools have been developed to detect setup errors, difficulty may arise in accurately adjusting the patient to account for the rotational error components. A novel, automated method to correct for rotational patient setup errors in helical tomotherapy is proposed for a treatment couch that is restricted to motion along translational axes. In tomotherapy, only a narrow superior/inferior section of the target receives a dose at any instant, thus rotations in the sagittal and coronal planes may be approximately corrected for by very slow continuous couch motion in a direction perpendicular to the scanning direction. Results from proof-of-principle tests indicate that the method improves the accuracy of treatment delivery, especially for long and narrow targets. Rotational corrections about an axis perpendicular to the transverse plane continue to be implemented easily in tomotherapy by adjustment of the initial gantry angle.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoiber, Eva Maria, E-mail: eva.stoiber@med.uni-heidelberg.de; Department of Medical Physics, German Cancer Research Center, Heidelberg; Giske, Kristina
Purpose: To evaluate local positioning errors of the lumbar spine during fractionated intensity-modulated radiotherapy of patients treated with craniospinal irradiation and to assess the impact of rotational error correction on these uncertainties for one patient setup correction strategy. Methods and Materials: 8 patients (6 adults, 2 children) treated with helical tomotherapy for craniospinal irradiation were retrospectively chosen for this analysis. Patients were immobilized with a deep-drawn Aquaplast head mask. In addition to daily megavoltage control computed tomography scans of the skull, the positioning of the lumbar spine was assessed once a week. For this purpose, patient setup was corrected by a target point correction derived from a registration of the patient's skull. The residual positioning variations of the lumbar spine were evaluated applying a rigid-registration algorithm. The impact of different rotational error corrections was simulated. Results: After target point correction, residual local positioning errors of the lumbar spine varied considerably. Craniocaudal-axis rotational error correction neither improved nor deteriorated these translational errors, whereas simulation of a rotational error correction about the right-left and anterior-posterior axes increased these errors by a factor of 2 to 3. Conclusion: The patient fixation used allows for deformations between the patient's skull and spine. Therefore, for the setup correction strategy evaluated in this study, generous margins for the lumbar spinal target volume are needed to prevent a local geographic miss. With any applied correction strategy, it needs to be evaluated whether or not a rotational error correction is beneficial.
Zhang, JY; Hong, DL
Purpose: The purpose of this study is to investigate the patient set-up error and interfraction target coverage in cervical cancer using image-guided adaptive radiotherapy (IGART) with cone-beam computed tomography (CBCT). Methods: Twenty cervical cancer patients undergoing intensity-modulated radiotherapy (IMRT) were randomly selected. All patients were matched to the isocenter using lasers with the skin markers. Three-dimensional CBCT projections were acquired by the Varian TrueBeam treatment system. Set-up errors were evaluated by radiation oncologists after CBCT correction. The clinical target volume (CTV) was delineated on each CBCT, and the planning target volume (PTV) coverage of each CBCT-CTV was analyzed. Results: A total of 152 CBCT scans were acquired from the twenty cervical cancer patients; the mean set-up errors in the longitudinal, vertical, and lateral directions were 3.57, 2.74, and 2.5 mm, respectively, without CBCT corrections. After corrections, these decreased to 1.83, 1.44, and 0.97 mm. For the target coverage, CBCT-CTV coverage without CBCT correction was 94% (143/152), and 98% (149/152) with correction. Conclusion: Use of CBCT verification to measure patient setup errors could be applied to improve treatment accuracy. In addition, the set-up error corrections significantly improved the CTV coverage for cervical cancer patients.
Helical tomotherapy setup variations in canine nasal tumor patients immobilized with a bite block.
Kubicek, Lyndsay N; Seo, Songwon; Chappell, Richard J; Jeraj, Robert; Forrest, Lisa J
2012-01-01
The purpose of our study was to compare setup variation in four degrees of freedom (vertical, longitudinal, lateral, and roll) between canine nasal tumor patients immobilized with a mattress and bite block versus a mattress alone. Our secondary aim was to define a clinical target volume (CTV) to planning target volume (PTV) expansion margin based on our mean systematic error values associated with nasal tumor patients immobilized by a mattress and bite block. We evaluated six parameters for setup corrections: systematic error, random error, patient-to-patient variation in systematic errors, the magnitude of patient-specific random errors (root mean square [RMS]), distance error, and the variation of setup corrections from zero shift. The variations in all parameters were statistically smaller in the group immobilized by a mattress and bite block. The mean setup corrections in the mattress and bite block group ranged from 0.91 mm to 1.59 mm for the translational errors, with a mean roll correction of 0.5°. Although most veterinary radiation facilities do not have access to image-guided radiotherapy (IGRT), we identified a need for more rigid fixation, established the value of adding IGRT to veterinary radiation therapy, and defined the CTV-PTV setup error margin for canine nasal tumor patients immobilized in a mattress and bite block. © 2012 Veterinary Radiology & Ultrasound.
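The population systematic and random errors referred to throughout these records are conventionally computed as the standard deviation of the per-patient mean errors (Σ) and the root mean square of the per-patient standard deviations (σ). A minimal sketch with hypothetical per-patient data:

```python
import math
import statistics

def population_setup_stats(per_patient_errors):
    """Population statistics in the usual offline-protocol sense:
    Sigma (systematic) = SD of the per-patient mean errors,
    sigma (random)     = RMS of the per-patient SDs."""
    means = [statistics.mean(e) for e in per_patient_errors]
    sds = [statistics.stdev(e) for e in per_patient_errors]
    sigma_sys = statistics.stdev(means)
    sigma_rand = math.sqrt(sum(s * s for s in sds) / len(sds))
    return sigma_sys, sigma_rand

# Hypothetical vertical setup errors (mm), one list per patient
patients = [[1.0, 1.2, 0.8], [0.2, 0.4, 0.0], [1.5, 1.8, 1.2]]
sigma_sys, sigma_rand = population_setup_stats(patients)
```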
Imura, K; Fujibuchi, T; Hirata, H
Purpose: Patient set-up skills in the radiotherapy treatment room have a great influence on the treatment effect of image-guided radiotherapy. In this study, we developed a training system for improving practical set-up skills, including rotational correction, in a virtual environment away from the pressure of the actual treatment room, using a three-dimensional computer graphics (3DCG) engine. Methods: The treatment room for external beam radiotherapy was reproduced in the virtual environment using a 3DCG engine (Unity). The viewpoints for performing patient set-up in the virtual treatment room were arranged on both sides of the virtual operable treatment couch to mimic actual performance by two clinical staff members. The position errors relative to the mechanical isocenter, based on alignment between the skin marker and the laser on the virtual patient model, were displayed using numerical values expressed in SI units and the directions of arrow marks. The rotational errors, calculated with a point on the virtual body axis as the center of each rotation axis in the virtual environment, were corrected by adjusting the rotational position of a body phantom wearing a belt with a gyroscope placed on a table in real space. These rotational errors were evaluated by describing vector outer product operations and trigonometric functions in the script for the patient set-up technique. Results: The viewpoints in the virtual environment allowed individual users to visually recognize the position discrepancy relative to the mechanical isocenter until positional errors of several millimeters were eliminated. The rotational errors between the two points calculated with the center point could be efficiently corrected, with the minimum correction technique displayed mathematically by the script.
Conclusion: By utilizing the script to correct rotational errors, together with accurate positional recognition for the patient set-up technique, the training system developed for improving patient set-up skills enabled individual users to identify efficient positional correction methods easily.
Govindarajan, R; Llueguera, E; Melero, A; Molero, J; Soler, N; Rueda, C; Paradinas, C
2010-01-01
Statistical Process Control (SPC) was applied to monitor patient set-up in radiotherapy and, when the measured set-up error values indicated a loss of process stability, its root cause was identified and eliminated to prevent set-up errors. Set up errors were measured for medial-lateral (ml), cranial-caudal (cc) and anterior-posterior (ap) dimensions and then the upper control limits were calculated. Once the control limits were known and the range variability was acceptable, treatment set-up errors were monitored using sub-groups of 3 patients, three times each shift. These values were plotted on a control chart in real time. Control limit values showed that the existing variation was acceptable. Set-up errors, measured and plotted on a X chart, helped monitor the set-up process stability and, if and when the stability was lost, treatment was interrupted, the particular cause responsible for the non-random pattern was identified and corrective action was taken before proceeding with the treatment. SPC protocol focuses on controlling the variability due to assignable cause instead of focusing on patient-to-patient variability which normally does not exist. Compared to weekly sampling of set-up error in each and every patient, which may only ensure that just those sampled sessions were set-up correctly, the SPC method enables set-up error prevention in all treatment sessions for all patients and, at the same time, reduces the control costs. Copyright © 2009 SECA. Published by Elsevier Espana. All rights reserved.
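The control-limit calculation described here can be illustrated with a standard Shewhart X̄–R chart for rational subgroups of three measurements; A2 = 1.023 is the usual tabulated constant for subgroup size n = 3, and the set-up error values below are hypothetical:

```python
def xbar_r_limits(subgroups, a2=1.023):
    """X-bar chart control limits from rational subgroups.
    LCL/UCL = grand mean -/+ A2 * mean range (Shewhart constants;
    the default A2 = 1.023 is the tabulated value for n = 3)."""
    means = [sum(g) / len(g) for g in subgroups]
    ranges = [max(g) - min(g) for g in subgroups]
    grand_mean = sum(means) / len(means)
    mean_range = sum(ranges) / len(ranges)
    return grand_mean - a2 * mean_range, grand_mean + a2 * mean_range

# Hypothetical medial-lateral set-up errors (mm), three patients per subgroup
data = [[0.8, 1.2, 1.0], [0.9, 1.1, 1.3], [1.0, 0.7, 1.1]]
lcl, ucl = xbar_r_limits(data)
```

Subgroup means plotted outside (lcl, ucl) would signal a loss of process stability and trigger the root-cause search described above.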
Taylor, C; Parker, J; Stratford, J; Warren, M
2018-05-01
Although all systematic and random positional setup errors can be corrected for in entirety during on-line image-guided radiotherapy, the use of a specified action level, below which no correction occurs, is also an option. The following service evaluation aimed to investigate the use of this 3 mm action level for on-line image assessment and correction (online, systematic set-up error and weekly evaluation) for lower extremity sarcoma, and understand the impact on imaging frequency and patient positioning error within one cancer centre. All patients were immobilised using a thermoplastic shell attached to a plastic base and an individual moulded footrest. A retrospective analysis of 30 patients was performed. Patient setup and correctional data derived from cone beam CT analysis was retrieved. The timing, frequency and magnitude of corrections were evaluated. The population systematic and random error was derived. 20% of patients had no systematic corrections over the duration of treatment, and 47% had one. The maximum number of systematic corrections per course of radiotherapy was 4, which occurred for 2 patients. 34% of episodes occurred within the first 5 fractions. All patients had at least one observed translational error during their treatment greater than 0.3 cm, and 80% of patients had at least one observed translational error during their treatment greater than 0.5 cm. The population systematic error was 0.14 cm, 0.10 cm, 0.14 cm and random error was 0.27 cm, 0.22 cm, 0.23 cm in the lateral, caudocranial and anteroposterior directions. The required Planning Target Volume margin for the study population was 0.55 cm, 0.41 cm and 0.50 cm in the lateral, caudocranial and anteroposterior directions. The 3 mm action level for image assessment and correction prior to delivery reduced the imaging burden and focussed intervention on patients that exhibited greater positional variability.
This strategy could be an efficient deployment of departmental resources if full daily correction of positional setup error is not possible. Copyright © 2017. Published by Elsevier Ltd.
Analysis of Prostate Patient Setup and Tracking Data: Potential Intervention Strategies
Su Zhong, E-mail: zsu@floridaproton.org; Zhang Lisha; Murphy, Martin
Purpose: To evaluate the setup, interfraction, and intrafraction organ motion error distributions and simulate intrafraction intervention strategies for prostate radiotherapy. Methods and Materials: A total of 17 patients underwent treatment setup and were monitored using the Calypso system during radiotherapy. On average, the prostate tracking measurements were performed for 8 min/fraction for 28 fractions for each patient. For both patient couch shift data and intrafraction organ motion data, the systematic and random errors were obtained from the patient population. The planning target volume margins were calculated using the van Herk formula. Two intervention strategies were simulated using the tracking data: the deviation threshold and period. The related planning target volume margins, time costs, and prostate position 'fluctuation' were presented. Results: The required treatment margin for the left-right, superoinferior, and anteroposterior axes was 8.4, 10.8, and 14.7 mm for skin mark-only setup and 1.3, 2.3, and 2.8 mm using the on-line setup correction, respectively. Prostate motion was significantly correlated between the superoinferior and anteroposterior directions. Of the 17 patients, 14 had prostate motion within 5 mm of the initial setup position for ≥91.6% of the total tracking time. The treatment margin decreased to 1.1, 1.8, and 2.3 mm with a 3-mm threshold correction and to 0.5, 1.0, and 1.5 mm with an every-2-min correction in the left-right, superoinferior, and anteroposterior directions, respectively. The periodic corrections significantly increased the treatment time and the number of instances when the setup correction was made during transient excursions. Conclusions: The residual systematic and random error due to intrafraction prostate motion is small after on-line setup correction. Threshold-based and time-based intervention strategies both reduced the planning target volume margins.
The time-based strategies increased the treatment time and the in-fraction position fluctuation.
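The deviation-threshold intervention strategy simulated in this study can be sketched on a one-axis motion trace: whenever the displacement from the current setup position exceeds the threshold, an idealized instantaneous correction re-zeroes the setup. The trace below is hypothetical:

```python
def apply_threshold_correction(trace, threshold_mm):
    """Simulate threshold-based intrafraction intervention on a 1D motion
    trace (mm): whenever the displacement from the current setup position
    exceeds the threshold, re-center the setup on the current position
    (an idealized, instantaneous correction). Returns the residual trace
    and the number of corrections applied."""
    offset = 0.0
    residual = []
    corrections = 0
    for pos in trace:
        if abs(pos - offset) > threshold_mm:
            offset = pos
            corrections += 1
        residual.append(pos - offset)
    return residual, corrections

# Hypothetical prostate drift (mm) sampled over one fraction
trace = [0.0, 0.5, 1.2, 2.1, 3.4, 3.6, 2.8, 3.9, 4.5]
residual, n_corrections = apply_threshold_correction(trace, threshold_mm=3.0)
```

By construction the residual displacement never exceeds the threshold, which is why the threshold strategy shrinks the required margins relative to setup-only correction.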
Wang, He; Wang, Congjun; Tung, Samuel; Dimmitt, Andrew Wilson; Wong, Pei Fong; Edson, Mark A.; Garden, Adam S.; Rosenthal, David I.; Fuller, Clifton D.; Gunn, Gary B.; Takiar, Vinita; Wang, Xin A.; Luo, Dershan; Yang, James N.; Wong, Jennifer
2016-01-01
The purpose of this study was to investigate the setup and positioning uncertainty of a custom cushion/mask/bite-block (CMB) immobilization system and determine the PTV margin for image-guided head and neck stereotactic ablative radiotherapy (HN-SABR). We analyzed 105 treatment sessions among 21 patients treated with HN-SABR for recurrent head and neck cancers using a custom CMB immobilization system. Initial patient setup was performed using the ExacTrac infrared (IR) tracking system, and initial setup errors were based on comparison of the ExacTrac IR tracking system to corrected online ExacTrac X-ray images registered to treatment plans. Residual setup errors were determined using repeat verification X-rays. The online ExacTrac corrections were compared to cone-beam CT (CBCT) before treatment to assess agreement. Intrafractional positioning errors were determined using prebeam X-rays. The systematic and random errors were analyzed. The initial translational setup errors were −0.8±1.3 mm, −0.8±1.6 mm, and 0.3±1.9 mm in the AP, CC, and LR directions, respectively, with a three-dimensional (3D) vector of 2.7±1.4 mm. The initial rotational errors were up to 2.4° if a 6D couch is not available. CBCT agreed with ExacTrac X-ray images to within 2 mm and 2.5°. The intrafractional uncertainties were 0.1±0.6 mm, 0.1±0.6 mm, and 0.2±0.5 mm in the AP, CC, and LR directions, respectively, and 0.0°±0.5°, 0.0°±0.6°, and −0.1°±0.4° in the yaw, roll, and pitch directions, respectively. The translational vector was 0.9±0.6 mm. The calculated PTV margins mPTV(90,95) were within 1.6 mm when using image guidance for online setup correction. The use of image guidance for online setup correction, in combination with our customized CMB device, highly restricted target motion during treatments and provided robust immobilization to ensure a minimum dose of 95% to the target volume with a 2.0 mm PTV margin for HN-SABR. PACS number(s): 87.55.ne PMID:27167275
Feedforward operation of a lens setup for large defocus and astigmatism correction
NASA Astrophysics Data System (ADS)
Verstraete, Hans R. G. W.; Almasian, Mitra; Pozzi, Paolo; Bilderbeek, Rolf; Kalkman, Jeroen; Faber, Dirk J.; Verhaegen, Michel
2016-04-01
In this manuscript, we present a lens setup for large defocus and astigmatism correction. A deformable defocus lens and two rotational cylindrical lenses are used to control the defocus and astigmatism. The setup is calibrated using a simple model that allows the calculation of the lens inputs so that a desired defocus and astigmatism are actuated on the eye. The setup is tested by determining the feedforward prediction error, imaging a resolution target, and removing introduced aberrations.
Set-up uncertainties: online correction with X-ray volume imaging.
Kataria, Tejinder; Abhishek, Ashu; Chadha, Pranav; Nandigam, Janardhan
2011-01-01
To determine interfractional three-dimensional set-up errors using X-ray volumetric imaging (XVI). Between December 2007 and August 2009, 125 patients were taken up for image-guided radiotherapy using online XVI. After matching of reference and acquired volume view images, set-up errors in three translational directions were recorded and corrected online before treatment each day. Mean displacements, population systematic (Σ), and random (σ) errors were calculated and analyzed using SPSS (v16) software. The optimum clinical target volume (CTV) to planning target volume (PTV) margin was calculated using Van Herk's (2.5Σ + 0.7σ) and Stroom's (2Σ + 0.7σ) formulas. Patients were grouped into 4 cohorts, namely brain, head and neck, thorax, and abdomen-pelvis. The mean vector displacements recorded were 0.18 cm, 0.15 cm, 0.36 cm, and 0.35 cm for brain, head and neck, thorax, and abdomen-pelvis, respectively. Analysis of individual mean set-up errors revealed good agreement with the proposed 0.3 cm isotropic margins for brain and 0.5 cm isotropic margins for head-neck. Similarly, the proposed 0.5 cm circumferential and 1 cm craniocaudal margins were in agreement with thorax and abdomen-pelvic cases. The calculated mean displacements were well within CTV-PTV margin estimates of Van Herk (90% population coverage to minimum 95% prescribed dose) and Stroom (99% target volume coverage by 95% prescribed dose). Employing these individualized margins in a particular cohort ensures comparable target coverage as described in the literature, which is further improved if XVI-aided set-up error detection and correction is used before treatment.
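The two margin recipes quoted in this record are direct to compute from the population statistics; the Σ and σ values below are hypothetical:

```python
def van_herk_margin(sigma_sys, sigma_rand):
    """CTV-to-PTV margin, van Herk recipe: 2.5*Sigma + 0.7*sigma (mm).
    Targets 90% population coverage to minimum 95% of prescribed dose."""
    return 2.5 * sigma_sys + 0.7 * sigma_rand

def stroom_margin(sigma_sys, sigma_rand):
    """CTV-to-PTV margin, Stroom recipe: 2*Sigma + 0.7*sigma (mm).
    Targets 99% target volume coverage by 95% of prescribed dose."""
    return 2.0 * sigma_sys + 0.7 * sigma_rand

# Hypothetical population statistics (mm) for one direction
sigma_sys, sigma_rand = 1.5, 2.0
m_van_herk = van_herk_margin(sigma_sys, sigma_rand)  # 2.5*1.5 + 0.7*2.0
m_stroom = stroom_margin(sigma_sys, sigma_rand)      # 2.0*1.5 + 0.7*2.0
```

Both recipes weight the systematic component far more heavily than the random component, which is why offline protocols that shrink Σ (NAL, eNAL) reduce margins so effectively.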
Initial clinical experience with a video-based patient positioning system.
Johnson, L S; Milliken, B D; Hadley, S W; Pelizzari, C A; Haraf, D J; Chen, G T
1999-08-01
To report initial clinical experience with an interactive, video-based patient positioning system that is inexpensive, quick, accurate, and easy to use. System hardware includes two black-and-white CCD cameras, zoom lenses, and a PC equipped with a frame grabber. Custom software is used to acquire and archive video images, as well as to display real-time subtraction images revealing patient misalignment in multiple views. Two studies are described. In the first study, video is used to document the daily setup histories of 5 head and neck patients. Time-lapse cine loops are generated for each patient and used to diagnose and correct common setup errors. In the second study, 6 twice-daily (BID) head and neck patients are positioned according to the following protocol: at AM setups conventional treatment room lasers are used; at PM setups lasers are used initially and then video is used for 1-2 minutes to fine-tune the patient position. Lateral video images and lateral verification films are registered off-line to compare the distribution of setup errors per patient, with and without video assistance. In the first study, video images were used to determine the accuracy of our conventional head and neck setup technique, i.e., alignment of lightcast marks and surface anatomy to treatment room lasers and the light field. For this initial cohort of patients, errors ranged from σ = 5 to 7 mm and were patient-specific. Time-lapse cine loops of the images revealed sources of the error, and as a result, our localization techniques and immobilization device were modified to improve setup accuracy. After the improvements, conventional setup errors were reduced to σ = 3 to 5 mm. In the second study, when a stereo pair of live subtraction images was introduced to perform daily "on-line" setup correction, errors were reduced to σ = 1 to 3 mm. Results depended on patient health and cooperation and the length of time spent fine-tuning the position.
An interactive, video-based patient positioning system was shown to reduce setup errors to within 1 to 3 mm in head and neck patients, without a significant increase in overall treatment time or labor-intensive procedures. Unlike retrospective portal image analysis, use of two live-video images provides the therapists with immediate feedback and allows for true 3-D positioning and correction of out-of-plane rotation before radiation is delivered. With significant improvement in head and neck alignment and the elimination of setup errors greater than 3 to 5 mm, margins associated with treatment volumes potentially can be reduced, thereby decreasing normal tissue irradiation.
Richmond, N D; Pilling, K E; Peedell, C; Shakespeare, D; Walker, C P
2012-01-01
Stereotactic body radiotherapy for early stage non-small cell lung cancer is an emerging treatment option in the UK. Since relatively few high-dose ablative fractions are delivered to a small target volume, the consequences of a geometric miss are potentially severe. This paper presents the results of treatment delivery set-up data collected using Elekta Synergy (Elekta, Crawley, UK) cone-beam CT imaging for 17 patients immobilised using the Bodyfix system (Medical Intelligence, Schwabmuenchen, Germany). Images were acquired on the linear accelerator at initial patient treatment set-up, following any position correction adjustments, and post-treatment. These were matched to the localisation CT scan using the Elekta XVI software. In total, 71 fractions were analysed for patient set-up errors. The mean vector error at initial set-up was calculated as 5.3±2.7 mm, which was significantly reduced to 1.4±0.7 mm following image guided correction. Post-treatment the corresponding value was 2.1±1.2 mm. The use of the Bodyfix abdominal compression plate on 5 patients to reduce the range of tumour excursion during respiration produced mean longitudinal set-up corrections of −4.4±4.5 mm compared with −0.7±2.6 mm without compression for the remaining 12 patients. The use of abdominal compression led to a greater variation in set-up errors and a shift in the mean value. PMID:22665927
Zumsteg, Zachary; DeMarco, John; Lee, Steve P; Steinberg, Michael L; Lin, Chun Shu; McBride, William; Lin, Kevin; Wang, Pin-Chieh; Kupelian, Patrick; Lee, Percy
2012-06-01
On-board cone-beam computed tomography (CBCT) is currently available for alignment of patients with head-and-neck cancer before radiotherapy. However, daily CBCT is time intensive and increases the overall radiation dose. We assessed the feasibility of using the average couch shifts from the first several CBCTs to estimate and correct for the presumed systematic setup error. 56 patients with head-and-neck cancer who received daily CBCT before intensity-modulated radiation therapy had recorded shift values in the medial-lateral, superior-inferior, and anterior-posterior dimensions. The average displacements in each direction were calculated for each patient based on the first five or 10 CBCT shifts and were presumed to represent the systematic setup error. The residual error after this correction was determined by subtracting the calculated shifts from the shifts obtained using daily CBCT. The magnitude of the average daily residual three-dimensional (3D) error was 4.8 ± 1.4 mm, 3.9 ± 1.3 mm, and 3.7 ± 1.1 mm for uncorrected, five CBCT corrected, and 10 CBCT corrected protocols, respectively. With no image guidance, 40.8% of fractions would have been >5 mm off target. Using the first five CBCT shifts to correct subsequent fractions, this percentage decreased to 19.0% of all fractions delivered and decreased the percentage of patients with average daily 3D errors >5 mm from 35.7% to 14.3% vs. no image guidance. Using an average of the first 10 CBCT shifts did not significantly improve this outcome. Using the first five CBCT shift measurements as an estimation of the systematic setup error improves daily setup accuracy for a subset of patients with head-and-neck cancer receiving intensity-modulated radiation therapy and primarily benefited those with large 3D correction vectors (>5 mm). Daily CBCT is still necessary until methods are developed that more accurately determine which patients may benefit from alternative imaging strategies. Copyright © 2012 Elsevier Inc. 
All rights reserved.
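The strategy evaluated in this record, averaging the first five CBCT shifts as a systematic-error estimate and applying that correction to later fractions, can be sketched as follows (the daily shift vectors are hypothetical):

```python
import math

def residual_3d_after_mean_correction(shifts, n_first=5):
    """Average the first n_first daily (x, y, z) couch shifts as an estimate
    of the systematic setup error, subtract it from each later fraction's
    shift, and return the residual 3D error magnitudes (mm)."""
    mx = sum(s[0] for s in shifts[:n_first]) / n_first
    my = sum(s[1] for s in shifts[:n_first]) / n_first
    mz = sum(s[2] for s in shifts[:n_first]) / n_first
    return [math.dist(s, (mx, my, mz)) for s in shifts[n_first:]]

# Hypothetical daily CBCT shifts (mm): medial-lateral, superior-inferior,
# anterior-posterior
shifts = [(3, 2, 1), (4, 1, 2), (2, 3, 1), (3, 2, 2), (3, 2, 4),
          (5, 2, 2), (3, 1, 2), (9, 2, 2)]
residuals = residual_3d_after_mean_correction(shifts)
frac_over_5mm = sum(r > 5.0 for r in residuals) / len(residuals)
```

The fraction of post-correction residuals above 5 mm is the quantity the study reports when comparing the five-CBCT protocol against daily image guidance.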
Gangsaas, Anne, E-mail: a.gangsaas@erasmusmc.nl; Astreinidou, Eleftheria; Quint, Sandra
2013-10-01
Purpose: To investigate interfraction setup variations of the primary tumor, elective nodes, and vertebrae in laryngeal cancer patients and to validate protocols for cone beam computed tomography (CBCT)-guided correction. Methods and Materials: For 30 patients, CBCT-measured displacements in fractionated treatments were used to investigate population setup errors and to simulate residual setup errors for the no action level (NAL) offline protocol, the extended NAL (eNAL) protocol, and daily CBCT acquisition with online analysis and repositioning. Results: Without corrections, 12 of 26 patients treated with radical radiation therapy would have experienced a gradual change (time trend) in primary tumor setup ≥4 mm in the craniocaudal (CC) direction during the fractionated treatment (11/12 in caudal direction, maximum 11 mm). Due to these trends, correction of primary tumor displacements with NAL resulted in large residual CC errors (required margin 6.7 mm). With the weekly correction vector adjustments in eNAL, the trends could be largely compensated (CC margin 3.5 mm). Correlation between movements of the primary and nodal clinical target volumes (CTVs) in the CC direction was poor (r² = 0.15). Therefore, even with online setup corrections of the primary CTV, the required CC margin for the nodal CTV was as large as 6.8 mm. Also for the vertebrae, large time trends were observed for some patients. Because of poor CC correlation (r² = 0.19) between displacements of the primary CTV and the vertebrae, even with daily online repositioning of the vertebrae, the required CC margin around the primary CTV was 6.9 mm. Conclusions: Laryngeal cancer patients showed substantial interfraction setup variations, including large time trends, and poor CC correlation between primary tumor displacements and motion of the nodes and vertebrae (internal tumor motion).
These trends and nonrigid anatomy variations have to be considered in the choice of setup verification protocol and planning target volume margins. eNAL could largely compensate time trends with minor prolongation of fraction time.
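The difference between the NAL and eNAL protocols described above can be sketched in a few lines. This is an illustrative simulation, not the study's code: the one-dimensional displacement series, fraction count, setup-fraction count, and weekly update interval are all hypothetical.

```python
# Sketch of NAL vs. eNAL offline correction on a hypothetical 1-D
# craniocaudal displacement series with a time trend (numbers illustrative).

def nal_residuals(displacements, n_setup=3):
    """NAL: correct all later fractions by the mean of the first n_setup."""
    correction = sum(displacements[:n_setup]) / n_setup
    return [d - correction for d in displacements[n_setup:]]

def enal_residuals(displacements, n_setup=3, week=5):
    """eNAL: re-estimate the correction from a running mean every 'week' fractions."""
    correction = sum(displacements[:n_setup]) / n_setup
    residuals = []
    for i, d in enumerate(displacements[n_setup:], start=n_setup):
        if i % week == 0:  # weekly adjustment of the correction vector
            correction = sum(displacements[:i]) / i
        residuals.append(d - correction)
    return residuals

# A drifting series: 0.5 mm caudal drift per fraction over 20 fractions.
series = [0.5 * f for f in range(20)]
print(max(abs(r) for r in nal_residuals(series)))   # NAL cannot follow the trend
print(max(abs(r) for r in enal_residuals(series)))  # eNAL tracks it more closely
```

On this drifting series the NAL correction is frozen after the setup fractions, so its residual grows with the trend, while the weekly re-estimation in eNAL keeps the residual smaller, mirroring the abstract's finding.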
Lamb, James M; Agazaryan, Nzhde; Low, Daniel A
2013-10-01
To determine whether kilovoltage x-ray projection radiation therapy setup images could be used to perform patient identification and detect gross errors in patient setup using a computer algorithm. Three patient cohorts treated using a commercially available image guided radiation therapy (IGRT) system that uses 2-dimensional to 3-dimensional (2D-3D) image registration were retrospectively analyzed: a group of 100 cranial radiation therapy patients, a group of 100 prostate cancer patients, and a group of 83 patients treated for spinal lesions. The setup images were acquired using fixed in-room kilovoltage imaging systems. In the prostate and cranial patient groups, localizations using image registration were performed between computed tomography (CT) simulation images from radiation therapy planning and setup x-ray images corresponding both to the same patient and to different patients. For the spinal patients, localizations were performed to the correct vertebral body, and to an adjacent vertebral body, using planning CTs and setup x-ray images from the same patient. An image similarity measure used by the IGRT system image registration algorithm was extracted from the IGRT system log files and evaluated as a discriminant for error detection. A threshold value of the similarity measure could be chosen to separate correct and incorrect patient matches and correct and incorrect vertebral body localizations with excellent accuracy for these patient cohorts. A 10-fold cross-validation using linear discriminant analysis yielded misclassification probabilities of 0.000, 0.0045, and 0.014 for the cranial, prostate, and spinal cases, respectively. An automated measure of the image similarity between x-ray setup images and corresponding planning CT images could be used to perform automated patient identification and detection of localization errors in radiation therapy treatments. Copyright © 2013 Elsevier Inc. All rights reserved.
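The thresholding idea described above, separating correct from incorrect matches by a cut on the registration similarity measure, can be sketched as follows. The scores are made up; the IGRT system's actual similarity measure and the linear discriminant analysis of the paper are not reproduced here.

```python
# Illustrative sketch: pick the similarity threshold that best separates
# correct (same-patient) from incorrect (cross-patient) registrations,
# assuming correct matches tend to score higher.

def best_threshold(correct_scores, incorrect_scores):
    """Scan candidate thresholds; classify score >= t as 'correct match'."""
    candidates = sorted(correct_scores + incorrect_scores)
    best_t, best_err = None, float("inf")
    for t in candidates:
        errors = sum(s < t for s in correct_scores) \
               + sum(s >= t for s in incorrect_scores)
        if errors < best_err:
            best_t, best_err = t, errors
    return best_t, best_err

correct = [0.91, 0.88, 0.95, 0.90, 0.93]    # hypothetical same-patient scores
incorrect = [0.42, 0.55, 0.61, 0.38, 0.50]  # hypothetical cross-patient scores
t, err = best_threshold(correct, incorrect)
print(t, err)  # perfectly separable here: 0 misclassifications
```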
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albert, J; Labarbe, R; Sterpin, E
2016-06-15
Purpose: To understand the extent to which prompt gamma camera measurements can be used to predict the residual proton range due to setup errors and errors in the calibration curve. Methods: We generated ten variations on a default calibration curve (CC) and ten corresponding range maps (RM). Starting with the default RM, we chose a square array of N beamlets, which were then rotated by a random angle θ and shifted by a random vector s. We added a 5% distal Gaussian noise to each beamlet in order to introduce discrepancies that exist between the ranges predicted from the prompt gamma measurements and those simulated with Monte Carlo algorithms. For each RM, s and θ, along with an offset u in the CC, were optimized using a simple Euclidean distance between the default ranges and the ranges produced by the given RM. Results: The application of our method led to a maximal overrange of 2.0 mm and underrange of 0.6 mm on average. Compared to the situations where s, θ, and u were ignored, these values were larger: 2.1 mm and 4.3 mm. In order to quantify the need for setup error corrections, we also performed computations in which u was corrected for, but s and θ were not. This yielded: 3.2 mm and 3.2 mm. The average computation time for 170 beamlets was 65 seconds. Conclusion: These results emphasize the necessity to correct for setup errors and for errors in the calibration curve. The simplicity and speed of our method make it a good candidate for implementation as a tool for in-room adaptive therapy. This work also demonstrates that prompt gamma range measurements can indeed be useful in the effort to reduce range errors. Given these results, and barring further refinements, this approach is a promising step towards adaptive proton radiotherapy.
TH-A-9A-03: Dosimetric Effect of Rotational Errors for Lung Stereotactic Body Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, J; Kim, H; Park, J
2014-06-15
Purpose: To evaluate the dosimetric effects on target volume and organs at risk (OARs) due to roll rotational errors in treatment setup of stereotactic body radiation therapy (SBRT) for lung cancer. Methods: A total of 23 volumetric modulated arc therapy (VMAT) plans for lung SBRT were examined in this retrospective study. Each CT image of the VMAT plans was intentionally rotated by ±1°, ±2°, and ±3° to simulate roll rotational setup errors. The axis of rotation was set at the center of the T-spine. The target volume and OARs in the rotated CT images were re-defined by deformable registration of the original contours. The dose distributions on each set of rotated images were re-calculated to cover the planning target volume (PTV) with the prescription dose before and after the couch translational correction. The dose-volumetric changes of PTVs and spinal cords were analyzed. Results: The differences in D95% of PTVs for −3°, −2°, −1°, 1°, 2°, and 3° roll rotations before the couch translational correction were on average −11.3±11.4%, −5.46±7.24%, −1.11±1.38%, −3.34±3.97%, −9.64±10.3%, and −16.3±14.7%, respectively. After the couch translational correction, those values were −0.195±0.544%, −0.159±0.391%, −0.188±0.262%, −0.310±0.270%, −0.407±0.331%, and −0.433±0.401%, respectively. The maximum dose difference of the spinal cord among the 23 plans, even after the couch translational correction, was 25.9% at −3° rotation. Conclusions: Roll rotational setup errors in lung SBRT significantly influenced the coverage of the target volume using the VMAT technique. This could be in part compensated by the translational couch correction. However, in spite of the translational correction, the delivered doses to the spinal cord could be higher than the calculated doses. Therefore, if rotational setup errors exist during lung SBRT using the VMAT technique, rotational correction should be considered to prevent over-irradiation of normal tissues rather than translational correction alone.
Giske, Kristina; Stoiber, Eva M; Schwarz, Michael; Stoll, Armin; Muenter, Marc W; Timke, Carmen; Roeder, Falk; Debus, Juergen; Huber, Peter E; Thieke, Christian; Bendl, Rolf
2011-06-01
To evaluate the local positioning uncertainties during fractionated radiotherapy of head-and-neck cancer patients immobilized using a custom-made fixation device and discuss the effect of possible patient correction strategies for these uncertainties. A total of 45 head-and-neck patients underwent regular control computed tomography scanning using an in-room computed tomography scanner. The local and global positioning variations of all patients were evaluated by applying a rigid registration algorithm. One bounding box around the complete target volume and nine local registration boxes containing relevant anatomic structures were introduced. The resulting uncertainties for a stereotactic setup and the deformations referenced to one anatomic local registration box were determined. Local deformations of the patients immobilized using our custom-made device were compared with previously published results. Several patient positioning correction strategies were simulated, and the residual local uncertainties were calculated. The patient anatomy in the stereotactic setup showed local systematic positioning deviations of 1-4 mm. The deformations referenced to a particular anatomic local registration box were similar to the reported deformations assessed from patients immobilized with commercially available Aquaplast masks. A global correction, including the rotational error compensation, decreased the remaining local translational errors. Depending on the chosen patient positioning strategy, the remaining local uncertainties varied considerably. Local deformations in head-and-neck patients occur even if an elaborate, custom-made patient fixation method is used. A rotational error correction decreased the required margins considerably. None of the considered correction strategies achieved perfect alignment. Therefore, weighting of anatomic subregions to obtain the optimal correction vector should be investigated in the future. Copyright © 2011 Elsevier Inc. 
All rights reserved.
SU-E-J-15: A Patient-Centered Scheme to Mitigate Impacts of Treatment Setup Error
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, L; Southern Medical University, Guangzhou; Tian, Z
2014-06-01
Purpose: Current Intensity Modulated Radiation Therapy (IMRT) is plan-centered. At each treatment fraction, we position the patient to match the setup in the treatment plan. Inaccurate setup can compromise the delivered dose distribution and hence lead to suboptimal treatments. Moreover, the current setup approach via couch shift under image guidance can correct translational errors, while rotational and deformation errors are hard to address. To overcome these problems, we propose in this abstract a patient-centered scheme to mitigate the impacts of treatment setup errors. Methods: In the patient-centered scheme, we first position the patient on the couch approximately matching the planned setup. Our Supercomputing Online Replanning Environment (SCORE) is then employed to design an optimal treatment plan based on the daily patient geometry. It hence mitigates the impacts of treatment setup error and reduces the requirements on setup accuracy. We have conducted simulation studies in 10 head-and-neck (HN) patients to investigate the feasibility of this scheme. Rotational and deformation setup errors were simulated. Specifically, rotations of 1, 3, 5, and 7 degrees were applied in the pitch, roll, and yaw directions; deformation errors were simulated by splitting neck movements into four basic types: rotation, lateral bending, flexion, and extension. Setup variation ranges were based on values observed in previous studies. Dosimetric impacts of our scheme were evaluated on PTVs and OARs in comparison with the original plan dose on the original geometry and the original plan dose recalculated on the new setup geometries. Results: With the conventional plan-centered approach, setup error could lead to significant PTV D99 decrease (−0.25∼+32.42%) and contralateral-parotid Dmean increase (−35.09∼+42.90%). The patient-centered approach is effective in mitigating such impacts to 0∼+0.20% and −0.03∼+5.01%, respectively. Computation time is <128 s.
Conclusion: A patient-centered scheme is proposed to mitigate setup error impacts using replanning. Its superiority in terms of dosimetric impact and feasibility has been shown through simulation studies on HN cases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Briscoe, M; Ploquin, N; Voroney, JP
2015-06-15
Purpose: To quantify the effect of patient rotation in stereotactic radiation therapy and establish a threshold where rotational patient set-up errors have a significant impact on target coverage. Methods: To simulate rotational patient set-up errors, a Matlab code was created to rotate the patient dose distribution around the treatment isocentre, located centrally in the lesion, while keeping the structure contours in the original locations on the CT and MRI. Rotations of 1°, 3°, and 5° for each of the pitch, roll, and yaw, as well as simultaneous rotations of 1°, 3°, and 5° around all three axes, were applied to two types of brain lesions: brain metastasis and acoustic neuroma. In order to analyze multiple tumour shapes, these plans included small spherical (metastasis), elliptical (acoustic neuroma), and large irregular (metastasis) tumour structures. Dose-volume histograms and planning target volumes were compared between the planned patient positions and those with simulated rotational set-up errors. The RTOG conformity index for patient rotation was also investigated. Results: Examining the tumour volumes that received 80% of the prescription dose in the planned and rotated patient positions showed decreases in prescription dose coverage of up to 2.3%. Conformity indices for treatments with simulated rotational errors showed decreases of up to 3% compared to the original plan. For irregular lesions, degradation of 1% of the target coverage can be seen for rotations as low as 3°. Conclusions: These data show that for elliptical or spherical targets, rotational patient set-up errors less than 3° around any or all axes do not have a significant impact on the dose delivered to the target volume or the conformity index of the plan. However, the same rotational errors would have an impact on plans for irregular tumours.
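A minimal sketch of the kind of rotation used to simulate these set-up errors, assuming simple intrinsic pitch/roll/yaw rotations about the isocentre. The study's Matlab code and dose grids are not reproduced; the coordinates and angles below are illustrative.

```python
import math

# Rotate a single point (e.g., a dose-grid voxel or contour vertex) about
# the treatment isocentre by pitch (x), roll (y), and yaw (z) in degrees:
# a simplified stand-in for rotating the full 3-D dose distribution.

def rotate(point, iso, pitch, roll, yaw):
    px, py, pz = (p - i for p, i in zip(point, iso))  # move to isocentre frame
    a, b, c = (math.radians(v) for v in (pitch, roll, yaw))
    # pitch: rotation about the x axis
    py, pz = py*math.cos(a) - pz*math.sin(a), py*math.sin(a) + pz*math.cos(a)
    # roll: rotation about the y axis
    px, pz = px*math.cos(b) + pz*math.sin(b), -px*math.sin(b) + pz*math.cos(b)
    # yaw: rotation about the z axis
    px, py = px*math.cos(c) - py*math.sin(c), px*math.sin(c) + py*math.cos(c)
    return tuple(p + i for p, i in zip((px, py, pz), iso))

# A voxel 50 mm lateral of the isocentre, rotated 3 degrees in yaw,
# is displaced roughly 2.6 mm: small angles matter more far from isocentre.
x, y, z = rotate((50.0, 0.0, 0.0), (0.0, 0.0, 0.0), 0.0, 0.0, 3.0)
print(round(x, 1), round(y, 1))
```

The displacement grows with distance from the isocentre, which is consistent with the abstract's observation that large irregular targets are more sensitive to rotation than small central ones.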
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keeling, V; Jin, H; Ali, I
2014-06-01
Purpose: To determine the dosimetric impact of positioning errors in the stereotactic hypo-fractionated treatment of intracranial lesions using 3D-translational and 3D-rotational corrections (6D) with the frameless BrainLAB ExacTrac X-Ray system. Methods: 20 cranial lesions, treated in 3 or 5 fractions, were selected. An infrared (IR) optical positioning system was employed for initial patient setup, followed by stereoscopic kV X-ray radiographs for position verification. 6D translational and rotational shifts were determined to correct patient position. If these shifts were above tolerance (0.7 mm translational and 1° rotational), corrections were applied and another set of X-rays was taken to verify patient position. The dosimetric impact (D95, Dmin, Dmax, and Dmean of the planning target volume (PTV) compared to original plans) of positioning errors for initial IR setup (XC: X-ray Correction) and post-correction (XV: X-ray Verification) was determined in a treatment planning system using a method proposed by Yue et al. (Med. Phys. 33, 21-31 (2006)), with 3D-translational errors only and with 6D translational and rotational errors. Results: Absolute mean translational errors (±standard deviation) for 92 total fractions (XC/XV) were 0.79±0.88/0.19±0.15 mm (lateral), 1.66±1.71/0.18±0.16 mm (longitudinal), 1.95±1.18/0.15±0.14 mm (vertical), and rotational errors were 0.61±0.47/0.17±0.15° (pitch), 0.55±0.49/0.16±0.24° (roll), and 0.68±0.73/0.16±0.15° (yaw). The average changes (loss of coverage) in D95, Dmin, Dmax, and Dmean were 4.5±7.3/0.1±0.2%, 17.8±22.5/1.1±2.5%, 0.4±1.4/0.1±0.3%, and 0.9±1.7/0.0±0.1% using 6D shifts, and 3.1±5.5/0.0±0.1%, 14.2±20.3/0.8±1.7%, 0.0±1.2/0.1±0.3%, and 0.7±1.4/0.0±0.1% using 3D-translational shifts only. The setup corrections (XC-XV) improved the PTV coverage by 4.4±7.3% (D95) and 16.7±23.5% (Dmin) using 6D adjustment. Strong correlations were observed between translational errors and deviations in dose coverage for XC.
Conclusion: The initial BrainLAB IR system, based on rigidity of the mask-frame setup, is not sufficient for accurate stereotactic positioning; however, with X-ray image guidance, sub-millimeter accuracy is achieved with negligible deviations in dose coverage. The angular corrections (mean angle summation = 1.84°) are important and cause considerable deviations in dose coverage.
Prasad, Devleena; Das, Pinaki; Saha, Niladri S; Chatterjee, Sanjoy; Achari, Rimpa; Mallick, Indranil
2014-01-01
The aim of this study was to determine whether a less resource-intensive and established offline correction protocol, the No Action Level (NAL) protocol, was as effective as daily online correction of setup deviations in curative high-dose radiotherapy of prostate cancer. A total of 683 daily megavoltage CT (MVCT) or kilovoltage cone-beam CT (kV-CBCT) images of 30 patients with localized prostate cancer treated with intensity modulated radiotherapy were evaluated. Daily image guidance was performed and setup errors in three translational axes recorded. The NAL protocol was simulated by using the mean shift calculated from the first five fractions, implemented on all subsequent treatments. Using the imaging data from the remaining fractions, the daily residual error (RE) was determined. The proportion of fractions where the RE was greater than 3, 5, and 7 mm was calculated, as well as the actual PTV margin that would be required if the offline protocol were followed. Using the NAL protocol reduced the systematic but not the random errors. Corrections made using the NAL protocol resulted in small and acceptable RE in the mediolateral (ML) and superoinferior (SI) directions, with 46/533 (8.1%) and 48/533 (5%) residual shifts above 5 mm. However, residual errors greater than 5 mm in the anteroposterior (AP) direction remained in 181/533 (34%) of fractions. The PTV margins calculated from the residual errors were 5 mm, 5 mm, and 13 mm in the ML, SI, and AP directions, respectively. Offline correction using the NAL protocol resulted in unacceptably high residual errors in the AP direction, due to random uncertainties in rectal and bladder filling. Daily online imaging and correction remain the standard image guidance policy for highly conformal radiotherapy of prostate cancer.
High dimensional linear regression models under long memory dependence and measurement error
NASA Astrophysics Data System (ADS)
Kaul, Abhishek
This dissertation consists of three chapters. The first chapter introduces the models under consideration and motivates problems of interest. A brief literature review is also provided in this chapter. The second chapter investigates the properties of Lasso under long range dependent model errors. Lasso is a computationally efficient approach to model selection and estimation, and its properties are well studied when the regression errors are independent and identically distributed. We study the case where the regression errors form a long memory moving average process. We establish a finite sample oracle inequality for the Lasso solution. We then show the asymptotic sign consistency in this setup. These results are established in the high dimensional setup (p > n), where p can increase exponentially with n. Finally, we show the n^(1/2−d)-consistency of Lasso, along with the oracle property of adaptive Lasso, in the case where p is fixed. Here d is the memory parameter of the stationary error sequence. The performance of Lasso is also analysed in the present setup with a simulation study. The third chapter proposes and investigates the properties of a penalized quantile based estimator for measurement error models. Standard formulations of prediction problems in high dimension regression models assume the availability of fully observed covariates and sub-Gaussian and homogeneous model errors. This makes these methods inapplicable to measurement error models, where covariates are unobservable and observations are possibly non sub-Gaussian and heterogeneous. We propose weighted penalized corrected quantile estimators for the regression parameter vector in linear regression models with additive measurement errors, where unobservable covariates are nonrandom. The proposed estimators forgo the need for the above mentioned model assumptions.
We study these estimators in both the fixed-dimension and high-dimensional sparse setups; in the latter, the dimensionality can grow exponentially with the sample size. In the fixed-dimensional setting we provide the oracle properties associated with the proposed estimators. In the high-dimensional setting, we provide bounds for the statistical error associated with the estimation that hold with asymptotic probability 1, thereby providing the ℓ1-consistency of the proposed estimator. We also establish model selection consistency in terms of the correctly estimated zero components of the parameter vector. A simulation study that investigates the finite sample accuracy of the proposed estimator is also included in this chapter.
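The sparsity mechanism behind the Lasso and its penalized variants discussed above can be illustrated with the soft-thresholding operator: under an orthonormal design, the Lasso estimate is simply the soft-thresholded least-squares estimate. A sketch with made-up coefficients, not taken from the dissertation:

```python
# Soft-thresholding: the proximal operator of lam*|.|, the building block
# of Lasso solvers. Small coefficients are set exactly to zero, which is
# how Lasso performs model selection.

def soft_threshold(z, lam):
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

beta_ols = [3.0, 0.2, -1.5, -0.1]   # hypothetical least-squares estimates
lam = 0.5                           # illustrative penalty level
beta_lasso = [soft_threshold(b, lam) for b in beta_ols]
print(beta_lasso)  # coefficients below the penalty vanish exactly
```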
High performance interconnection between high data rate networks
NASA Technical Reports Server (NTRS)
Foudriat, E. C.; Maly, K.; Overstreet, C. M.; Zhang, L.; Sun, W.
1992-01-01
The bridge/gateway system needed to interconnect a wide range of computer networks to support a wide range of user quality-of-service requirements is discussed. The bridge/gateway must handle a wide range of message types, including synchronous and asynchronous traffic, large, bursty messages, short, self-contained messages, time-critical messages, etc. It is shown that messages can be classified into three basic classes: synchronous messages, and large and small asynchronous messages. The first two require call setup so that packet identification, buffer handling, etc. can be supported in the bridge/gateway; identification enables resequencing across differences in packet size. The third class is for messages which do not require call setup. Resequencing hardware designed to handle two types of resequencing problems is presented. The first is for a virtual parallel circuit which can scramble channel bytes. The second system is effective in handling both synchronous and asynchronous traffic between networks with highly differing packet sizes and data rates. The two other major needs for the bridge/gateway are congestion and error control. A dynamic, lossless congestion control scheme which can easily support effective error correction is presented. Results indicate that the congestion control scheme provides close to optimal capacity under congested conditions. Under conditions where errors may develop due to intervening networks which are not lossless, intermediate error recovery and correction takes 1/3 less time than equivalent end-to-end error correction under similar conditions.
Yan, M; Lovelock, D; Hunt, M; Mechalakos, J; Hu, Y; Pham, H; Jackson, A
2013-12-01
To use Cone Beam CT scans obtained just prior to treatments of head and neck cancer patients to measure the setup error and cumulative dose uncertainty of the cochlea. Data from 10 head and neck patients with 10 planning CTs and 52 Cone Beam CTs taken at time of treatment were used in this study. Patients were treated with conventional fractionation using an IMRT dose painting technique, most with 33 fractions. Weekly radiographic imaging was used to correct the patient setup. The authors used rigid registration of the planning CT and Cone Beam CT scans to find the translational and rotational setup errors, and the spatial setup errors of the cochlea. The planning CT was rotated and translated such that the cochlea positions match those seen in the cone beam scans, cochlea doses were recalculated and fractional doses accumulated. Uncertainties in the positions and cumulative doses of the cochlea were calculated with and without setup adjustments from radiographic imaging. The mean setup error of the cochlea was 0.04 ± 0.33 or 0.06 ± 0.43 cm for RL, 0.09 ± 0.27 or 0.07 ± 0.48 cm for AP, and 0.00 ± 0.21 or -0.24 ± 0.45 cm for SI with and without radiographic imaging, respectively. Setup with radiographic imaging reduced the standard deviation of the setup error by roughly 1-2 mm. The uncertainty of the cochlea dose depends on the treatment plan and the relative positions of the cochlea and target volumes. Combining results for the left and right cochlea, the authors found the accumulated uncertainty of the cochlea dose per fraction was 4.82 (0.39-16.8) cGy, or 10.1 (0.8-32.4) cGy, with and without radiographic imaging, respectively; the percentage uncertainties relative to the planned doses were 4.32% (0.28%-9.06%) and 10.2% (0.7%-63.6%), respectively. Patient setup error introduces uncertainty in the position of the cochlea during radiation treatment. 
With the assistance of radiographic imaging during setup, the standard deviation of setup error reduced by 31%, 42%, and 54% in RL, AP, and SI direction, respectively, and consequently, the uncertainty of the mean dose to cochlea reduced more than 50%. The authors estimate that the effects of these uncertainties on the probability of hearing loss for an individual patient could be as large as 10%.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laaksomaa, Marko, E-mail: marko.laaksomaa@pshp.fi; Kapanen, Mika; Department of Medical Physics, Tampere University Hospital
We evaluated adequate setup margins for the radiotherapy (RT) of pelvic tumors based on overall position errors of bony landmarks. We also estimated the difference in setup accuracy between the male and female patients. Finally, we compared patient rotation for 2 immobilization devices. The study cohort included 64 consecutive male and 64 female patients. Altogether, 1794 orthogonal setup images were analyzed. Observer-related deviation in image matching and the effect of patient rotation were explicitly determined. Overall systematic and random errors were calculated in 3 orthogonal directions. Anisotropic setup margins were evaluated based on residual errors after weekly image guidance. The van Herk formula was used to calculate the margins. Overall, 100 patients were immobilized with an in-house device; their rotation was compared against that of 28 patients immobilized with CIVCO's Kneefix and Feetfix. We found that the usually applied isotropic setup margin of 8 mm covered all the uncertainties related to patient setup for most RT treatments of the pelvis. However, margins of up to 10.3 mm were needed for the female patients with very large pelvic target volumes centered either in the symphysis or in the sacrum, containing both of these structures. This was because the effect of rotation (p ≤ 0.02) and the observer variation in image matching (p ≤ 0.04) were significantly larger for the female patients than for the male patients. Even with daily image guidance, the required margins remained larger for the women. Patient rotations were largest about the lateral axes. The difference between the required margins was only 1 mm for the 2 immobilization devices. The largest component of overall systematic position error came from patient rotation. This emphasizes the need for rotation correction. Overall, larger position errors and setup margins were observed for the female patients with pelvic cancer than for the male patients.
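The van Herk formula named above combines the population systematic and random error components into a margin, M = 2.5Σ + 0.7σ, where Σ is the standard deviation of systematic setup errors over the population and σ the standard deviation of random errors. A one-line sketch with illustrative inputs, not the study's values:

```python
# van Herk margin recipe: M = 2.5*Sigma + 0.7*sigma (all in mm).
# Sigma: SD of per-patient systematic errors; sigma: SD of random errors.

def van_herk_margin(Sigma, sigma):
    return 2.5 * Sigma + 0.7 * sigma

# Illustrative example: 2.0 mm systematic, 3.0 mm random -> 7.1 mm margin.
print(round(van_herk_margin(2.0, 3.0), 1), "mm")
```

The 2.5 coefficient reflects the dominance of systematic errors, which shift the whole dose distribution for every fraction, over random errors, which blur it.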
Is ExacTrac x-ray system an alternative to CBCT for positioning patients with head and neck cancers?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clemente, Stefania; Chiumento, Costanza; Fiorentino, Alba
Purpose: To evaluate the usefulness of a six-degrees-of-freedom (6D) correction using the ExacTrac robotic system in patients with head-and-neck (HN) cancer receiving radiation therapy. Methods: Local setup accuracy was analyzed for 12 patients undergoing intensity-modulated radiation therapy (IMRT). Patient position was imaged daily under two different protocols: cone-beam computed tomography (CBCT) and ExacTrac (ET) image correction. Setup data from either approach were compared in terms of both residual errors after correction and punctual displacement of selected regions of interest (mandible, C2, and C6 vertebral bodies). Results: On average, both protocols achieved reasonably low residual errors after initial correction. The observed differences in shift vectors between the two protocols showed that CBCT tends to weight C2 and C6 more at the expense of the mandible, while ET tends to average more differences among the different ROIs. Conclusions: CBCT, even without 6D correction capabilities, seems preferable to ET for more consistent alignment and the capability to see soft tissues. Therefore, in our experience, CBCT represents a benchmark for positioning head and neck cancer patients.
Isospin Breaking Corrections to the HVP with Domain Wall Fermions
NASA Astrophysics Data System (ADS)
Boyle, Peter; Guelpers, Vera; Harrison, James; Juettner, Andreas; Lehner, Christoph; Portelli, Antonin; Sachrajda, Christopher
2018-03-01
We present results for the QED and strong isospin breaking corrections to the hadronic vacuum polarization using Nf = 2 + 1 Domain Wall fermions. QED is included in an electro-quenched setup using two different methods, a stochastic and a perturbative approach. Results and statistical errors from both methods are directly compared with each other.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keeling, V; Jin, H; Hossain, S
2014-06-15
Purpose: To evaluate setup accuracy and quantify individual systematic and random errors for the various hardware and software components of the frameless 6D BrainLAB ExacTrac system. Methods: 35 patients with cranial lesions, some with multiple isocenters (50 total lesions treated in 1, 3, or 5 fractions), were investigated. All patients were simulated with a rigid head-and-neck mask and the BrainLAB localizer. CT images were transferred to the IPLAN treatment planning system, where optimized plans were generated using a stereotactic reference frame based on the localizer. The patients were set up initially with the infrared (IR) positioning ExacTrac system. Stereoscopic X-ray images (XC: X-ray Correction) were registered to their corresponding digitally reconstructed radiographs, based on bony anatomy matching, to calculate 6D translational and rotational (lateral, longitudinal, vertical, pitch, roll, yaw) shifts. XC combines the systematic errors of the mask, localizer, image registration, frame, and IR. If shifts were below tolerance (0.7 mm translational and 1° rotational), treatment was initiated; otherwise corrections were applied and additional X-rays were acquired to verify patient position (XV: X-ray Verification). Statistical analysis was used to extract systematic and random errors of the different components of the 6D ExacTrac system and evaluate the cumulative setup accuracy. Results: Mask systematic errors (translational; rotational) were the largest and varied from one patient to another in the range (−15 to 4 mm; −2.5 to 2.5 degrees), obtained from the mean of XC for each patient. Setup uncertainty in IR positioning (0.97, 2.47, 1.62 mm; 0.65, 0.84, 0.96 degrees) was extracted from the standard deviation of XC. Combined systematic errors of the frame and localizer (0.32, −0.42, −1.21 mm; −0.27, 0.34, 0.26 degrees) were extracted from the mean of means of the XC distributions.
Final patient setup uncertainty was obtained from the standard deviations of XV (0.57, 0.77, 0.67 mm; 0.39, 0.35, 0.30°). Conclusion: Statistical analysis was used to calculate cumulative and individual systematic errors from the different hardware and software components of the 6D ExacTrac system. Patients were treated with cumulative errors (<1 mm, <1°) under XV image guidance.
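The decomposition used above, where per-patient means characterize the systematic component and per-patient spreads the random component, can be sketched as follows. The function name and data layout are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def setup_error_stats(shifts_by_patient):
    """Population setup-error statistics for one axis.

    shifts_by_patient: list of per-patient arrays of fraction shifts (mm).
    Returns (M, Sigma, sigma):
      M     - group mean (overall systematic offset),
      Sigma - SD of per-patient means (population systematic error),
      sigma - RMS of per-patient SDs (population random error).
    """
    means = np.array([np.mean(s) for s in shifts_by_patient])
    sds = np.array([np.std(s, ddof=1) for s in shifts_by_patient])
    M = means.mean()
    Sigma = means.std(ddof=1)
    sigma = np.sqrt(np.mean(sds ** 2))
    return M, Sigma, sigma
```

Applied to XC distributions this yields the per-component systematic and random errors quoted in the abstract; applied to XV it gives the residual setup uncertainty.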
MEMS deformable mirror for wavefront correction of large telescopes
NASA Astrophysics Data System (ADS)
Manhart, Sigmund; Vdovin, Gleb; Collings, Neil; Sodnik, Zoran; Nikolov, Susanne; Hupfer, Werner
2017-11-01
A 50 mm diameter membrane mirror was designed and manufactured at TU Delft. It is made from bulk silicon by micromachining - a technology primarily used for micro-electromechanical systems (MEMS). The mirror unit is equipped with 39 actuator electrodes and can be electrostatically deformed to correct wavefront errors in optical imaging systems. Performance tests on the deformable mirror were carried out at Astrium GmbH using a breadboard setup with a wavefront sensor and a closed-loop control system. It was found that the deformable membrane mirror is well suited for correction of low order wavefront errors as they must be expected in lightweighted space telescopes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Falco, Maria Daniela, E-mail: mdanielafalco@hotmail.co; Fontanarosa, Davide; Miceli, Roberto
2011-04-01
Cone-beam X-ray volumetric imaging in the treatment room allows online correction of set-up errors and offline assessment of residual set-up errors and organ motion. In this study the registration algorithm of the X-ray volume imaging software (XVI, Elekta, Crawley, United Kingdom), which manages a commercial cone-beam computed tomography (CBCT)-based positioning system, has been tested using a homemade and an anthropomorphic phantom to: (1) assess its performance in detecting known translational and rotational set-up errors and (2) transfer the transformation matrix of its registrations into a commercial treatment planning system (TPS) for offline organ motion analysis. Furthermore, the CBCT dose index has been measured for a particular site (prostate: 120 kV, 1028.8 mAs, approximately 640 frames) using a standard Perspex cylindrical body phantom (diameter 32 cm, length 15 cm) and a 10-cm-long pencil ionization chamber. We have found that known displacements were correctly calculated by the registration software to within 1.3 mm and 0.4°. For the anthropomorphic phantom, only translational displacements have been considered. Both studies have shown errors within the intrinsic uncertainty of our system for translational displacements (estimated as 0.87 mm) and rotational displacements (estimated as 0.22°). The resulting table translations proposed by the system to correct the displacements were also checked with portal images and found to place the isocenter of the plan on the linac isocenter within an error of 1 mm, which is the dimension of the spherical lead marker inserted at the center of the homemade phantom. The registration matrix translated into the TPS image fusion module correctly reproduced the alignment between planning CT scans and CBCT scans. Finally, measurements of the CBCT dose index indicate that CBCT acquisition delivers less dose than conventional CT scans and electronic portal imaging device portals.
The registration software was found to be accurate, its registration matrix can easily be translated into the TPS, and a low dose is delivered to the patient during image acquisition. These results can help in designing imaging protocols for offline evaluations.
Couch height–based patient setup for abdominal radiation therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ohira, Shingo; Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, Suita; Ueda, Yoshihiro
2016-04-01
There are 2 methods commonly used for patient positioning in the anterior-posterior (A-P) direction: one is the skin mark patient setup method (SMPS) and the other is the couch height–based patient setup method (CHPS). This study compared the setup accuracy of these 2 methods for abdominal radiation therapy. The enrollment for this study comprised 23 patients with pancreatic cancer. For treatments (539 sessions), patients were set up by using isocenter skin marks, and thereafter the treatment couch was shifted so that the distance between the isocenter and the upper side of the treatment couch was equal to that indicated on the computed tomographic (CT) image. Setup deviation in the A-P direction for CHPS was measured by matching the spine of the digitally reconstructed radiograph (DRR) of a lateral beam at simulation with that of the corresponding time-integrated electronic portal image. For SMPS with no correction (SMPS/NC), setup deviation was calculated based on the couch-level difference between SMPS and CHPS. SMPS/NC was corrected using 2 off-line correction protocols: the no action level (SMPS/NAL) and extended NAL (SMPS/eNAL) protocols. Margins to compensate for deviations were calculated using the Stroom formula. A-P deviation > 5 mm was observed in 17% of SMPS/NC, 4% of SMPS/NAL, and 4% of SMPS/eNAL sessions, but in only one CHPS session. For SMPS/NC, 7 patients (30%) showed deviations at an increasing rate of > 0.1 mm/fraction, but for CHPS, no such trend was observed. The standard deviations (SDs) of systematic error (Σ) were 2.6, 1.4, 0.6, and 0.8 mm and the root mean squares of random error (σ) were 2.1, 2.6, 2.7, and 0.9 mm for SMPS/NC, SMPS/NAL, SMPS/eNAL, and CHPS, respectively. Margins to compensate for the deviations were wide for SMPS/NC (6.7 mm), smaller for SMPS/NAL (4.6 mm) and SMPS/eNAL (3.1 mm), and smallest for CHPS (2.2 mm). Achieving better setup with smaller margins, CHPS appears to be a reproducible method for abdominal patient setup.
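The Stroom recipe cited above combines the systematic and random components as M = 2Σ + 0.7σ. A minimal sketch, using the Σ and σ values reported in the abstract, reproduces the quoted margins:

```python
def stroom_margin(Sigma, sigma):
    """Stroom margin recipe: M = 2*Sigma + 0.7*sigma, all values in mm."""
    return 2.0 * Sigma + 0.7 * sigma

# (Sigma, sigma) pairs reported for each protocol, in mm
protocols = {
    "SMPS/NC": (2.6, 2.1),
    "SMPS/NAL": (1.4, 2.6),
    "SMPS/eNAL": (0.6, 2.7),
    "CHPS": (0.8, 0.9),
}
margins = {name: round(stroom_margin(S, s), 1) for name, (S, s) in protocols.items()}
# → {'SMPS/NC': 6.7, 'SMPS/NAL': 4.6, 'SMPS/eNAL': 3.1, 'CHPS': 2.2}
```

The computed values match the 6.7, 4.6, 3.1, and 2.2 mm margins stated in the abstract.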
DOE Office of Scientific and Technical Information (OSTI.GOV)
Topolnjak, Rajko; Borst, Gerben R.; Nijkamp, Jasper
Purpose: To quantify the geometrical uncertainties for the heart during radiotherapy treatment of left-sided breast cancer patients and to determine and validate planning organ at risk volume (PRV) margins. Methods and Materials: Twenty-two patients treated in supine position in 28 fractions with regularly acquired cone-beam computed tomography (CBCT) scans for offline setup correction were included. Retrospectively, the CBCT scans were reconstructed into 10-phase respiration-correlated four-dimensional scans. The heart was registered in each breathing phase to the planning CT scan to establish the respiratory heart motion during the CBCT scan (σ_resp). The average of the respiratory motion was calculated as the heart displacement error for a fraction. Subsequently, the systematic (Σ), random (σ), and total random (σ_tot = √(σ² + σ_resp²)) errors of the heart position were calculated. Based on these errors, a PRV margin for the heart was calculated to ensure that the maximum heart dose (D_max) is not underestimated in at least 90% of the cases (M_heart = 1.3Σ − 0.5σ_tot). All analyses were performed in left-right (LR), craniocaudal (CC), and anteroposterior (AP) directions with respect to both online and offline bony anatomy setup corrections. The PRV margin was validated by accumulating the dose to the heart based on the heart registrations and comparing the planned PRV D_max to the accumulated heart D_max. Results: For online setup correction, the cardiac geometrical uncertainties and PRV margins were Σ = 2.2/3.2/2.1 mm, σ = 2.1/2.9/1.4 mm, and M_heart = 1.6/2.3/1.3 mm for LR/CC/AP, respectively. For offline setup correction these were Σ = 2.4/3.7/2.2 mm, σ = 2.9/4.1/2.7 mm, and M_heart = 1.6/2.1/1.4 mm. Cardiac motion induced by breathing was σ_resp = 1.4/2.9/1.4 mm for LR/CC/AP.
The PRV D_max underestimated the accumulated heart D_max for 9.1% of patients using online and 13.6% of patients using offline bony anatomy setup correction, validating that the PRV margin size was adequate. Conclusion: Considerable cardiac position variability relative to the bony anatomy was observed in breast cancer patients. A PRV margin can be used during treatment planning to take these uncertainties into account.
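The PRV margin recipe above can be computed directly. A minimal sketch (function name illustrative), checked against the online-correction LR values Σ = 2.2 mm, σ = 2.1 mm, σ_resp = 1.4 mm:

```python
import math

def heart_prv_margin(Sigma, sigma, sigma_resp):
    """PRV margin M_heart = 1.3*Sigma - 0.5*sigma_tot, where
    sigma_tot = sqrt(sigma**2 + sigma_resp**2); all values in mm."""
    sigma_tot = math.hypot(sigma, sigma_resp)  # total random error
    return 1.3 * Sigma - 0.5 * sigma_tot

print(round(heart_prv_margin(2.2, 2.1, 1.4), 1))  # → 1.6, the reported LR M_heart
```

Note the minus sign: unlike a PTV recipe, the random term shrinks the PRV margin, since blurring of the dose tends to pull the maximum dose inward.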
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kapanen, Mika; Department of Medical Physics, Tampere University Hospital; Laaksomaa, Marko, E-mail: Marko.Laaksomaa@pshp.fi
2016-04-01
Residual position errors of the lymph node (LN) surrogates and humeral head (HH) were determined for 2 different arm fixation devices in radiotherapy (RT) of breast cancer: a standard wrist-hold (WH) and a house-made rod-hold (RH). The effect of arm position correction (APC) based on setup images was also investigated. A total of 113 consecutive patients with early-stage breast cancer with LN irradiation were retrospectively analyzed (53 and 60 using the WH and RH, respectively). Residual position errors of the LN surrogates (Th1-2 and clavicle) and the HH were investigated to compare the 2 fixation devices. The position errors and setup margins were determined before and after the APC to investigate the efficacy of the APC in the treatment situation. A threshold of 5 mm was used for the residual errors of the clavicle and Th1-2 to perform the APC, and a threshold of 7 mm was used for the HH. The setup margins were calculated with the van Herk formula. Irradiated volumes of the HH were determined from RT treatment plans. With the WH and the RH, setup margins up to 8.1 and 6.7 mm should be used for the LN surrogates, and margins up to 4.6 and 3.6 mm should be used to spare the HH, respectively, without the APC. After the APC, the margins of the LN surrogates were equal to or less than 7.5/6.0 mm with the WH/RH, but margins up to 4.2/2.9 mm were required for the HH. The APC was needed at least once with both devices for approximately 60% of the patients. With the RH, the irradiated volume of the HH was approximately 2 times larger than with the WH, without any dose constraints. Use of the RH together with the APC resulted in minimal residual position errors and setup margins for all the investigated bony landmarks. Based on the obtained results, we prefer the house-made RH. However, more attention should be given to minimizing the irradiation of the HH with the RH than with the WH.
Hyde, Derek; Lochray, Fiona; Korol, Renee; Davidson, Melanie; Wong, C Shun; Ma, Lijun; Sahgal, Arjun
2012-03-01
To evaluate the residual setup error and intrafraction motion following kilovoltage cone-beam CT (CBCT) image guidance, for immobilized spine stereotactic body radiotherapy (SBRT) patients, with positioning corrected in all six degrees of freedom. Analysis is based on 42 consecutive patients (48 thoracic and/or lumbar metastases) treated with a total of 106 fractions and 307 image registrations. Following initial setup, a CBCT was acquired for patient alignment and a pretreatment CBCT taken to verify shifts and determine the residual setup error, followed by a midtreatment and a posttreatment CBCT image. For 13 single-fraction SBRT patients, two midtreatment CBCT images were obtained. Initially, a 1.5-mm and 1° tolerance was used to reposition the patient following couch shifts, which was subsequently reduced to 1 mm and 1° after the first 10 patients. Small positioning errors after the initial CBCT setup were observed, with 90% occurring within 1 mm and 97% within 1°. In analyzing the impact of the time interval for verification imaging (10 ± 3 min) and subsequent image acquisitions (17 ± 4 min), the residual setup error was not significantly different (p > 0.05). A significant difference (p = 0.04) in the average three-dimensional intrafraction positional deviations favoring a more strict tolerance in translation (1 mm vs. 1.5 mm) was observed. The absolute intrafraction motion averaged over all patients and all directions along the x, y, and z axes (± SD) was 0.7 ± 0.5 mm and 0.5 ± 0.4 mm for the 1.5 mm and 1 mm tolerances, respectively. Based on a 1-mm and 1° correction threshold, the target was localized to within 1.2 mm and 0.9° with 95% confidence. Near-rigid body immobilization, intrafraction CBCT imaging approximately every 15-20 min, and strict repositioning thresholds in six degrees of freedom yield minimal intrafraction motion, allowing for safe spine SBRT delivery.
Yao, Lihong; Zhu, Lihong; Wang, Junjie; Liu, Lu; Zhou, Shun; Jiang, ShuKun; Cao, Qianqian; Qu, Ang; Tian, Suqing
2015-04-26
To improve the delivery of radiotherapy in gynecologic malignancies and to minimize the irradiation of unaffected tissues by using daily kilovoltage cone beam computed tomography (kV-CBCT) to reduce setup errors. Thirteen patients with gynecologic cancers were treated with postoperative volumetric-modulated arc therapy (VMAT). All patients had a planning CT scan and daily CBCT during treatment. Automatic bone anatomy matching was used to determine initial inter-fraction positioning error. Positional correction on a six-degrees-of-freedom (6DoF) couch was followed by a second scan to calculate the residual inter-fraction error, and a post-treatment scan assessed intra-fraction motion. The margins of the planning target volume (MPTV) were calculated from these setup variations and the effect of margin size on normal tissue sparing was evaluated. In total, 573 CBCT scans were acquired. Mean absolute pre-/post-correction errors were obtained in all six planes. With 6DoF couch correction, the MPTV accounting for intra-fraction errors was reduced by 3.8-5.6 mm. This permitted a reduction in the maximum dose to the small intestine, bladder and femoral head (P=0.001, 0.035 and 0.032, respectively), the average dose to the rectum, small intestine, bladder and pelvic marrow (P=0.003, 0.000, 0.001 and 0.000, respectively) and markedly reduced irradiated normal tissue volumes. A 6DoF couch in combination with daily kV-CBCT can considerably improve positioning accuracy during VMAT treatment in gynecologic malignancies, reducing the MPTV. The reduced margin size permits improved normal tissue sparing and a smaller total irradiated volume.
NASA Astrophysics Data System (ADS)
Baek, Jong Geun; Jang, Hyun Soo; Oh, Young Kee; Lee, Hyun Jeong; Kim, Eng Chan
2015-07-01
The purpose of this study was to evaluate the setup uncertainties for single-fraction stereotactic radiosurgery (SF-SRS) based on clinical data with two different mask-creation methods using pretreatment cone-beam computed tomography imaging guidance. Dedicated frameless fixation BrainLAB masks for 23 patients were created using the routine mask (R-mask) making method, as explained in BrainLAB's user manual. Alternative masks (A-masks), which were created by modifying the cover range of the R-masks for the patient's head, were used for 23 patients. The systematic errors, including those for each mask and the stereotactic target localizer, were analyzed, and the errors were calculated as the means ± standard deviations (SD) of the left-right (LR), superior-inferior (SI), anterior-posterior (AP), and yaw setup corrections. In addition, the frequencies of the three-dimensional (3D) vector length were analyzed. The values of the mean setup corrections for the R-mask in all directions were < 0.7 mm and < 0.1°, whereas the magnitudes of the SDs were relatively large compared to the mean values. In contrast, the means and SDs for the A-mask were smaller than those for the R-mask, with the exception of the SD in the AP direction. The means and SDs in the yaw rotational direction for the R-mask and the A-mask systems were comparable. 3D vector shifts of larger magnitude occurred more frequently for the R-mask than for the A-mask. The setup uncertainties for each mask with the stereotactic localizing system had an asymmetric offset towards the positive AP direction. The A-mask-creation method, which covers the top of the patient's head, is superior to the R-mask method, so the use of the A-mask is encouraged for SF-SRS to reduce setup uncertainties. Moreover, careful mask-making is required to prevent possible setup uncertainties.
Inter- and Intrafraction Uncertainty in Prostate Bed Image-Guided Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Kitty; Palma, David A.; Department of Oncology, University of Western Ontario, London
2012-10-01
Purpose: The goals of this study were to measure inter- and intrafraction setup error and prostate bed motion (PBM) in patients undergoing post-prostatectomy image-guided radiotherapy (IGRT) and to propose appropriate population-based three-dimensional clinical target volume to planning target volume (CTV-PTV) margins in both non-IGRT and IGRT scenarios. Methods and Materials: In this prospective study, 14 patients underwent adjuvant or salvage radiotherapy to the prostate bed under image guidance using linac-based kilovoltage cone-beam CT (kV-CBCT). Inter- and intrafraction uncertainty/motion was assessed by offline analysis of three consecutive daily kV-CBCT images of each patient: (1) after initial setup to skin marks, (2) after correction for positional error/immediately before radiation treatment, and (3) immediately after treatment. Results: The magnitude of interfraction PBM was 2.1 mm, and intrafraction PBM was 0.4 mm. The maximum inter- and intrafraction prostate bed motion was primarily in the anterior-posterior direction. Margins of at least 3-5 mm with IGRT and 4-7 mm without IGRT (aligning to skin marks) will ensure 95% of the prescribed dose to the clinical target volume in 90% of patients. Conclusions: PBM is a predominant source of intrafraction error compared with setup error and has implications for appropriate PTV margins. Based on inter- and estimated intrafraction motion of the prostate bed using pre- and post-kV-CBCT images, CBCT IGRT to correct for day-to-day variances can potentially reduce CTV-PTV margins by 1-2 mm. CTV-PTV margins for prostate bed treatment in the IGRT and non-IGRT scenarios are proposed; however, in cases with more uncertainty of target delineation and image guidance accuracy, larger margins are recommended.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Lilie L., E-mail: lin@uphs.upenn.edu; Hertan, Lauren; Rengan, Ramesh
2012-06-01
Purpose: To determine the impact of body mass index (BMI) on daily setup variations and the frequency of imaging necessary for patients with endometrial cancer treated with adjuvant intensity-modulated radiotherapy (IMRT) with daily image guidance. Methods and Materials: The daily shifts from a total of 782 orthogonal kilovoltage images from 30 patients who received pelvic IMRT between July 2008 and August 2010 were analyzed. The BMI, mean daily shifts, and random and systematic errors in each translational and rotational direction were calculated for each patient. Margin recipes were generated based on BMI. Linear regression and Spearman rank correlation analyses were performed. To simulate a less-than-daily IGRT protocol, the average shift of the first five fractions was applied to subsequent setups without IGRT to assess the impact on setup error and margin requirements. Results: Median BMI was 32.9 (range, 23-62). Of the 30 patients, 16.7% (n = 5) were normal weight (BMI <25); 23.3% (n = 7) were overweight (BMI ≥25 to <30); 26.7% (n = 8) were mildly obese (BMI ≥30 to <35); and 33.3% (n = 10) were moderately to severely obese (BMI ≥35). On linear regression, mean absolute vertical, longitudinal, and lateral shifts positively correlated with BMI (p = 0.0127, p = 0.0037, and p < 0.0001, respectively). Systematic errors in the longitudinal and vertical directions were found to be positively correlated with BMI category (p < 0.0001 for both). IGRT for the first five fractions, followed by correction of the mean error for all subsequent fractions, led to a substantial reduction in setup error and resultant margin requirement overall compared with no IGRT. Conclusions: Daily shifts, systematic errors, and margin requirements were greatest in obese patients. For women who are normal weight or overweight, a planning target margin of 7 to 10 mm may be sufficient without IGRT, but for patients who are moderately or severely obese, this is insufficient.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, S; Charpentier, P; Sayler, E
2015-06-15
Purpose: Isocenter shifts and rotations to correct patient setup errors and organ motion cannot remedy some shape changes of large targets. We are investigating new methods for quantification of target deformation for real-time IGRT of breast and chest wall cancer. Methods: Ninety-five patients with breast or chest wall cancer were accrued in an IRB-approved clinical trial of IGRT using 3D surface images acquired at daily setup and beam-on time via an in-room camera. Shifts and rotations relative to the planned reference surface were determined using iterative-closest-point alignment. Local surface displacements and target deformation were measured via ray-surface intersection and principal component analysis (PCA) of the external surface, respectively. Isocenter shift, upper-abdominal displacement, and vectors of the surface projected onto the two principal components, PC1 and PC2, were evaluated for sensitivity and accuracy in detection of target deformation. Setup errors for some deformed targets were estimated by separately registering the target volume, inner surface, or external surface in weekly CBCT, or these outlines on weekly EPI. Results: Setup differences according to the inner surface, external surface, or target volume could be up to 1.5 cm. Video surface-guided setup agreed with EPI results to within 0.5 cm, while CBCT results were sometimes (∼20%) different from those of EPI (>0.5 cm) due to target deformation for some large breasts and some chest walls undergoing deep-breath-hold irradiation. The square roots of PC1 and PC2 are very sensitive to external surface deformation and irregular breathing. Conclusion: PCA of external surfaces is a quick and simple way to detect target deformation in IGRT of breast and chest wall cancer. Setup corrections based on the target volume, inner surface, and external surface could be significantly different.
Thus, checking for target shape changes is essential for accurate image-guided patient setup and motion tracking of large deformable targets. NIH grant with the first author as consultant and the last author as the PI.
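PCA-based deformation scoring of the kind described can be sketched as follows. The data layout (one flattened surface per fraction, sampled at corresponding points) and the function name are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def surface_pca_scores(surfaces):
    """Project daily surfaces onto the two leading principal components.

    surfaces: (n_fractions, n_points * 3) array, each row a flattened
    surface. Returns an (n_fractions, 2) array of PC1/PC2 scores; large
    scores flag deformation relative to the mean shape.
    """
    X = surfaces - surfaces.mean(axis=0)           # remove the mean shape
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:2].T                            # scores on PC1 and PC2
```

A fraction whose PC1/PC2 scores fall far outside the distribution of the planning population would be a candidate for the shape-change check the abstract recommends.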
Sturgeon, Jared D; Cox, John A; Mayo, Lauren L; Gunn, G Brandon; Zhang, Lifei; Balter, Peter A; Dong, Lei; Awan, Musaddiq; Kocak-Uzel, Esengul; Mohamed, Abdallah Sherif Radwan; Rosenthal, David I; Fuller, Clifton David
2015-10-01
Digitally reconstructed radiographs (DRRs) are routinely used as an a priori reference for setup correction in radiotherapy. The spatial resolution of DRRs may be improved to reduce setup error in fractionated radiotherapy treatment protocols. The influence of finer CT slice thickness reconstruction (STR), and the resultant increased-resolution DRRs, on physician setup accuracy was prospectively evaluated. CT-simulation images of four head and neck patients were acquired and used to create DRR cohorts by varying STRs at 0.5, 1, 2, 2.5, and 3 mm. DRRs were displaced relative to a fixed isocenter using 0-5 mm random shifts in the three cardinal axes. Physician observers reviewed DRRs of varying STRs and displacements and then aligned reference and test DRRs, replicating the daily kV imaging workflow. A total of 1,064 images were reviewed by four blinded physicians. Observer errors were analyzed using nonparametric statistics (Friedman's test) to determine whether STR cohorts had detectably different displacement profiles. Post hoc bootstrap resampling was applied to evaluate potential generalizability. The observer-based trial revealed a statistically significant difference between cohort means for observer displacement vector error ([Formula: see text]) and for [Formula: see text]-axis [Formula: see text]. Bootstrap analysis suggests a 15% gain in isocenter translational setup error with reduction of STR from 3 mm to [Formula: see text]2 mm, though interobserver variance was a larger feature than STR-associated measurement variance. Higher-resolution DRRs generated using finer CT scan STRs resulted in improved observer performance at shift detection and could decrease operator-dependent geometric error. Ideally, CT STRs [Formula: see text]2 mm should be utilized for DRR generation in the head and neck.
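The post hoc bootstrap step referenced above can be sketched as a percentile bootstrap on the observer error sample; the function name and default parameters are illustrative:

```python
import numpy as np

def bootstrap_mean_ci(errors, n_boot=10000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the mean observer
    displacement error (mm). Returns (sample_mean, (ci_low, ci_high))."""
    rng = np.random.default_rng(seed)
    errors = np.asarray(errors, dtype=float)
    # resample with replacement and take the mean of each replicate
    boots = rng.choice(errors, size=(n_boot, errors.size), replace=True).mean(axis=1)
    lo, hi = np.quantile(boots, [alpha / 2, 1 - alpha / 2])
    return errors.mean(), (lo, hi)
```

Comparing such intervals between STR cohorts is one way the "15% gain" style of statement could be checked for generalizability.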
2012-01-01
Background: To investigate geometric and dosimetric accuracy of frameless image-guided radiosurgery (IG-RS) for brain metastases. Methods and materials: Single-fraction IG-RS was practiced in 72 patients with 98 brain metastases. Patient positioning and immobilization used either double- (n = 71) or single-layer (n = 27) thermoplastic masks. Pre-treatment set-up errors (n = 98) were evaluated with cone-beam CT (CBCT)-based image guidance (IG) and were corrected in six degrees of freedom without an action level. CBCT imaging after treatment measured intra-fractional errors (n = 64). Pre- and post-treatment errors were simulated in the treatment planning system and target coverage and dose conformity were evaluated. Three scenarios of 0 mm, 1 mm and 2 mm GTV-to-PTV (gross tumor volume, planning target volume) safety margins (SM) were simulated. Results: Errors prior to IG were 3.9 mm ± 1.7 mm (3D vector) and the maximum rotational error was 1.7° ± 0.8° on average. The post-treatment 3D error was 0.9 mm ± 0.6 mm. No differences between double- and single-layer masks were observed. Intra-fractional errors were significantly correlated with the total treatment time, at 0.7 mm ± 0.5 mm and 1.2 mm ± 0.7 mm for treatment times ≤23 minutes and >23 minutes (p < 0.01), respectively. Simulation of RS without image guidance reduced target coverage and conformity to 75% ± 19% and 60% ± 25% of planned values. Each 3D set-up error of 1 mm decreased target coverage and dose conformity by 6% and 10% on average, respectively, with a large inter-patient variability. Pre-treatment correction of translations only, but not rotations, did not affect target coverage and conformity. Post-treatment errors reduced target coverage by >5% in 14% of the patients. A 1 mm safety margin fully compensated for intra-fractional patient motion. Conclusions: IG-RS with online correction of translational errors achieves high geometric and dosimetric accuracy.
Intra-fractional errors decrease target coverage and conformity unless compensated with appropriate safety margins. PMID:22531060
Comparison of 2c- and 3cLIF droplet temperature imaging
NASA Astrophysics Data System (ADS)
Palmer, Johannes; Reddemann, Manuel A.; Kirsch, Valeri; Kneer, Reinhold
2018-06-01
This work presents "pulsed 2D-3cLIF-EET" as a measurement setup for micro-droplet internal temperature imaging. The setup relies on a third color channel that allows correcting spatially changing energy transfer rates between the two applied fluorescent dyes. First measurement results are compared with results of two slightly different versions of the recent "pulsed 2D-2cLIF-EET" method. Results reveal a higher temperature measurement accuracy for the recent 2cLIF setup. Average droplet temperature is determined by the 2cLIF setup with an uncertainty of less than 1 K and a spatial deviation of about 3.7 K. The new 3cLIF approach would become competitive if the existing droplet size dependency is accounted for by an additional calibration and if the processing algorithm includes spatial measurement errors more appropriately.
Reduction of prostate intrafraction motion using gas-release rectal balloons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Su Zhong; Zhao Tianyu; Li Zuofeng
2012-10-15
Purpose: To analyze prostate intrafraction motion using both non-gas-release (NGR) and gas-release (GR) rectal balloons and to evaluate the ability of GR rectal balloons to reduce prostate intrafraction motion. Methods: Twenty-nine patients with NGR rectal balloons and 29 patients with GR balloons were randomly selected from prostate patients treated with proton therapy at the University of Florida Proton Therapy Institute (Jacksonville, FL). Their pretreatment and post-treatment orthogonal radiographs were analyzed, and both pretreatment setup residual error and intrafraction-motion data were obtained. Population histograms of intrafraction motion were plotted for both types of balloons. Population planning target-volume (PTV) margins were calculated with the van Herk formula of 2.5Σ + 0.7σ to account for setup residual errors and intrafraction motion errors. Results: Pretreatment and post-treatment radiographs indicated that the use of gas-release rectal balloons reduced prostate intrafraction motion along the superior-inferior (SI) and anterior-posterior (AP) directions. Similar patient setup residual errors were exhibited for both types of balloons. Gas-release rectal balloons resulted in PTV margin reductions from 3.9 to 2.8 mm in the SI direction and 3.1 to 1.8 mm in the AP direction, and an increase from 1.9 to 2.1 mm in the left-right direction. Conclusions: Prostate intrafraction motion is an important uncertainty source in radiotherapy after image-guided patient setup with online corrections. Compared to non-gas-release rectal balloons, gas-release balloons can reduce prostate intrafraction motion in the SI and AP directions caused by gas buildup.
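The van Herk recipe cited above can be computed directly; a small sketch follows. The example inputs are hypothetical, since the abstract reports only the resulting margins, not the underlying Σ and σ:

```python
def van_herk_margin(Sigma, sigma):
    """van Herk PTV margin recipe: M = 2.5*Sigma + 0.7*sigma (mm).
    Sigma: SD of systematic errors; sigma: SD of random errors."""
    return 2.5 * Sigma + 0.7 * sigma

# Hypothetical example: Sigma = 1.0 mm, sigma = 1.0 mm gives a 3.2 mm margin
```

Note the contrast with the Stroom recipe (2Σ + 0.7σ) used in the couch-height study above: both weight the systematic component far more heavily than the random component.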
SU-E-J-15: Automatically Detect Patient Treatment Position and Orientation in KV Portal Images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qiu, J; Yang, D
2015-06-15
Purpose: In the course of radiation therapy, the complex information-processing workflow can result in errors, such as incorrect or inaccurate patient setups. With automatic image checks and patient identification, such errors could be effectively reduced. For this purpose, we developed a simple and rapid image processing method to automatically detect the patient position and orientation in 2D portal images, to allow automatic checks of positions and orientations for patients' daily RT treatments. Methods: Based on the principle of portal image formation, a set of whole-body DRR images was reconstructed from multiple whole-body CT volume datasets, and fused together to be used as the matching template. To identify the patient setup position and orientation shown in a 2D portal image, the portal image was preprocessed (contrast enhancement, down-sampling, and couch table detection), then matched to the template image to identify the laterality (left or right), position, orientation, and treatment site. Results: Five days' worth of clinically qualified portal images were gathered randomly and then processed by the automatic detection and matching method without any additional information. The detection results were visually checked by physicists. Detection was correct for 182 of 200 kV portal images, a correct rate of 91%. Conclusion: The proposed method can detect patient setup and orientation quickly and automatically. It requires only the image intensity information in kV portal images. This method can be useful in the framework of Electronic Chart Check (ECCK) to reduce potential errors in the radiation therapy workflow and so improve patient safety. In addition, the auto-detection results, such as the treatment site position and patient orientation, could be useful to guide subsequent image processing procedures, e.g., verification of patient daily setup accuracy.
This work was partially supported by a research grant from Varian Medical Systems.
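Matching a preprocessed portal image against a fused template, as described above, can be sketched with a brute-force normalized cross-correlation search. This is a generic illustration of the matching step, not the authors' algorithm:

```python
import numpy as np

def best_match_offset(template, image):
    """Exhaustively slide a small 2D template over a larger 2D image and
    return the (row, col) offset with the highest normalized
    cross-correlation score."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-12)
    best, pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            wn = (w - w.mean()) / (w.std() + 1e-12)
            score = (t * wn).mean()        # NCC score in [-1, 1]
            if score > best:
                best, pos = score, (r, c)
    return pos
```

In a real system the matched offset within the whole-body template would then be mapped to laterality, orientation, and treatment site; an FFT-based correlation would replace the nested loops for speed.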
DOE Office of Scientific and Technical Information (OSTI.GOV)
Algan, Ozer, E-mail: oalgan@ouhsc.edu; Jamgade, Ambarish; Ali, Imad
2012-01-01
The purpose of this study was to evaluate the impact of daily setup error and interfraction organ motion on the dosimetry of the overall radiation treatment plans. Twelve patients undergoing definitive intensity-modulated radiation therapy (IMRT) treatments for prostate cancer were evaluated in this institutional review board-approved study. Each patient had fiducial markers placed into the prostate gland before the treatment planning computed tomography scan. IMRT plans were generated using the Eclipse treatment planning system. Each patient was treated to a dose of 8100 cGy given in 45 fractions. In this study, we retrospectively created a plan for each treatment day that had a shift available. To calculate the dose the patient would have received under this plan, we mathematically 'negated' the shift by moving the isocenter in the exact opposite direction of the shift. The individualized daily plans were combined to generate an overall plan sum. The dose distributions from these plans were compared with the treatment plans that were used to treat the patients. Three-hundred ninety daily shifts were negated and their corresponding plans evaluated. The mean isocenter shift based on the location of the fiducial markers was 3.3 ± 6.5 mm to the right, 1.6 ± 5.1 mm posteriorly, and 1.0 ± 5.0 mm along the caudal direction. The mean D95 doses for the prostate gland when setup error was corrected and uncorrected were 8228 and 7844 cGy (p < 0.002), respectively, and for the planning target volume (PTV8100) were 8089 and 7303 cGy (p < 0.001), respectively. The mean V95 values when patient setup was corrected and uncorrected were 99.9% and 87.3%, respectively, for the PTV8100 volume (p < 0.0001). At an individual patient level, the difference in the D95 value for the prostate volume could be >1200 cGy and for the PTV8100 could approach almost 2000 cGy when comparing corrected against uncorrected plans. 
There was no statistically significant difference in the D35 parameter for the surrounding normal tissue except for the dose received by the penile bulb and the right hip. Our dosimetric evaluation suggests significant underdosing with inaccurate target localization and emphasizes the importance of accurate patient setup and target localization. Further studies are needed to evaluate the impact of intrafraction organ motion, rotation, and deformation on doses delivered to target volumes.
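The shift-negation step described above lends itself to a simple sketch. The following is an illustrative Python snippet, not the study's Eclipse-based implementation: to simulate the dose of an uncorrected setup, the plan isocenter is moved in the exact opposite direction of the recorded couch shift.

```python
def negate_shift(isocenter, daily_shift):
    # Simulate the uncorrected setup: move the plan isocenter in the
    # exact opposite direction of the recorded couch shift (per axis).
    return [i - s for i, s in zip(isocenter, daily_shift)]

# Mean recorded shift (right, posterior, caudal) in mm, taken from the study
iso_uncorrected = negate_shift([0.0, 0.0, 0.0], [3.3, 1.6, 1.0])
print(iso_uncorrected)  # -> [-3.3, -1.6, -1.0]
```

The negated isocenter is then used to recompute the daily dose; summing the per-fraction distributions gives the "uncorrected" plan sum that is compared against the corrected plan.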
SU-F-J-21: Clinical Evaluation of Surface Scanning Systems in Different Treatment Locations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moser, T; Karger, C; Stefanowicz, S
Purpose: To reduce imaging dose in fractionated IGRT, the ability of optical surface imaging systems (OSIS) to detect setup errors was tested. A clinical study was therefore performed to evaluate, for different treatment locations, setup corrections derived by OSIS in comparison with x-ray image guidance in fractionated radiation therapy. Methods: The setup correction accuracy of an OSIS (AlignRT, VisionRT, London, UK) will be analysed for the four tumour locations pelvis, upper abdomen, thorax, and breast, with 20 patients per location, in comparison to a different system (Sentinel, C-RAD, SE). For each patient, the setup corrections of the cone-beam computed tomography (CBCT) of an Elekta Versa HD linear accelerator (Elekta, Crawley, UK) are considered the gold standard and then compared retrospectively with those of the OSIS for the first ten fractions. No clinical decisions were made based on the surrogate system. For the OSIS, the reference surface is highly important as it represents the actual ground truth. It can be obtained either with the system itself, or the surface structure delineated in the planning CT can be imported via a DICOM interface. In this paper, the first results for the treatment region thorax are presented, and the reference image modalities were compared. Results: Table 1 displays the difference between the setup corrections obtained with OSIS and CBCT in the lateral (LAT), longitudinal (LNG), and vertical (VRT) directions for the DICOM reference image. While the median deviations are within a few millimeters, some outliers showed large deviations. Generally, both the mean deviation and the spread were smallest in the lateral and largest in the vertical direction. Conclusion: Although the system allows fast, simple, and non-invasive determination of setup corrections, it should be evaluated in a treatment-region-dependent manner; the study is therefore ongoing. The application of OSIS may help to reduce the imaging dose for the patient. 
We gratefully acknowledge the support of our colleagues from the Radiological University Clinic Heidelberg, where the study was performed. This work was funded by the Federal Ministry of Education and Research (BMBF) Germany, grant number 01IB13001B.
Hoffmans-Holtzer, Nienke A; Hoffmans, Daan; Dahele, Max; Slotman, Ben J; Verbakel, Wilko F A R
2015-03-01
The purpose of this work was to investigate whether adapting gantry and collimator angles can compensate for roll and pitch setup errors during volumetric modulated arc therapy (VMAT) delivery. Previously delivered clinical plans for locally advanced head-and-neck (H&N) cancer (n = 5), localized prostate cancer (n = 2), and whole brain with simultaneous integrated boost to 5 metastases (WB + 5M, n = 1) were used for this study. Known rigid rotations were introduced in the planning CT scans. To compensate for these, in-house software was used to adapt gantry and collimator angles in the plan. Doses to planning target volumes (PTV) and critical organs at risk (OAR) were calculated with and without compensation and compared with the original clinical plan. Measurements in the sagittal plane in a polystyrene phantom using radiochromic film were compared by gamma (γ) evaluation for 2 H&N cancer patients. For H&N plans, the introduction of 2°-roll and 3°-pitch rotations reduced mean PTV coverage from 98.7 to 96.3%. This improved to 98.1% with gantry and collimator compensation. For prostate plans respective figures were 98.4, 97.5, and 98.4%. For WB + 5M, compensation worked less well, especially for smaller volumes and volumes farther from the isocenter. Mean comparative γ evaluation (3%, 1 mm) between original and pitched plans resulted in 86% γ < 1. The corrected plan restored the mean comparison to 96% γ < 1. Preliminary data suggest that adapting gantry and collimator angles is a promising way to correct roll and pitch set-up errors of < 3° during VMAT for H&N and prostate cancer.
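The gamma (γ) evaluation used above to compare film measurements can be sketched in one dimension. This is a minimal illustrative implementation of a global-normalization gamma index with the study's 3% dose-difference and 1 mm distance-to-agreement criteria; the dose profiles are made-up stand-ins, not the film data.

```python
def gamma_1d(ref, evalv, dx, dose_tol=0.03, dist_tol=1.0):
    """Per-point gamma index for 1-D dose profiles sampled every dx mm.
    dose_tol is a fraction of the reference maximum (global normalization)."""
    dmax = max(ref)
    gammas = []
    for i, d_ref in enumerate(ref):
        best = float("inf")
        for j, d_ev in enumerate(evalv):
            dist = (i - j) * dx                       # spatial offset, mm
            ddiff = d_ev - d_ref                      # dose difference
            g2 = (dist / dist_tol) ** 2 + (ddiff / (dose_tol * dmax)) ** 2
            best = min(best, g2)
        gammas.append(best ** 0.5)
    return gammas

# Illustrative profiles (relative dose), 0.5 mm grid
ref = [0.0, 0.5, 1.0, 0.5, 0.0]
ev = [0.0, 0.51, 1.0, 0.49, 0.0]
g = gamma_1d(ref, ev, dx=0.5)
pass_rate = sum(1 for x in g if x <= 1.0) / len(g)
print(pass_rate)  # -> 1.0 (all points pass)
```

In practice the search is restricted to a local neighborhood for speed, and 2-D film comparisons evaluate the same quantity over a planar dose grid.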
Effect of patient setup errors on simultaneously integrated boost head and neck IMRT treatment plans
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siebers, Jeffrey V.; Keall, Paul J.; Wu Qiuwen
2005-10-01
Purpose: The purpose of this study is to determine dose delivery errors that could result from random and systematic setup errors for head-and-neck patients treated using the simultaneous integrated boost (SIB)-intensity-modulated radiation therapy (IMRT) technique. Methods and Materials: Twenty-four patients who participated in an intramural Phase I/II parotid-sparing IMRT dose-escalation protocol using the SIB treatment technique had their dose distributions reevaluated to assess the impact of random and systematic setup errors. The dosimetric effect of random setup error was simulated by convolving the two-dimensional fluence distribution of each beam with the random setup error probability density distribution. Random setup errors of σ = 1, 3, and 5 mm were simulated. Systematic setup errors were simulated by randomly shifting the patient isocenter along each of the three Cartesian axes, with each shift selected from a normal distribution. Systematic setup error distributions with Σ = 1.5 and 3.0 mm along each axis were simulated. Combined systematic and random setup errors were simulated for Σ = σ = 1.5 and 3.0 mm along each axis. For each dose calculation, the gross tumor volume (GTV) D98 (dose received by 98% of the volume), clinical target volume (CTV) D90, nodes D90, cord D2, parotid D50, and parotid mean dose were evaluated with respect to the plan used for treatment, both for the structure dose and for an effective planning target volume (PTV) with a 3-mm margin. Results: Simultaneous integrated boost-IMRT head-and-neck treatment plans were found to be less sensitive to random setup errors than to systematic setup errors. For random-only errors, dose errors exceeded 3% only when the random setup error σ exceeded 3 mm. 
Simulated systematic setup errors with Σ = 1.5 mm resulted in approximately 10% of plans having more than a 3% dose error, whereas Σ = 3.0 mm resulted in half of the plans having more than a 3% dose error and 28% with a 5% dose error. Combined random and systematic setup errors with Σ = σ = 3.0 mm resulted in more than 50% of plans having at least a 3% dose error and 38% of the plans having at least a 5% dose error. Evaluation with respect to a 3-mm expanded PTV reduced the observed dose deviations greater than 5% for the Σ = σ = 3.0 mm simulations to 5.4% of the plans simulated. Conclusions: Head-and-neck SIB-IMRT dosimetric accuracy would benefit from methods to reduce patient systematic setup errors. When GTV, CTV, or nodal volumes are used for dose evaluation, plans simulated including the effects of random and systematic errors deviate substantially from the nominal plan. The use of PTVs for dose evaluation in the nominal plan improves agreement with evaluated GTV, CTV, and nodal dose values under simulated setup errors. PTV concepts should be used for SIB-IMRT head-and-neck squamous cell carcinoma patients, although the size of the margins may be less than those used with three-dimensional conformal radiation therapy.
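The convolution-based simulation of random setup error described in the Methods can be sketched in one dimension: the expected (blurred) fluence is the original fluence convolved with the Gaussian setup-error probability density. A minimal illustrative version, assuming a 1-D fluence profile rather than the study's 2-D beam fluences:

```python
import math

def gaussian_kernel(sigma_mm, dx_mm, half_width=3):
    # Discretized Gaussian PDF out to ±half_width sigma, renormalized to sum 1.
    n = int(math.ceil(half_width * sigma_mm / dx_mm))
    k = [math.exp(-0.5 * ((i * dx_mm) / sigma_mm) ** 2) for i in range(-n, n + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur_fluence(fluence, sigma_mm, dx_mm):
    # Expected fluence under random setup error: convolve with the
    # setup-error probability density (zero padding at the edges).
    k = gaussian_kernel(sigma_mm, dx_mm)
    n = len(k) // 2
    out = []
    for i in range(len(fluence)):
        acc = 0.0
        for j, w in enumerate(k):
            idx = i + j - n
            if 0 <= idx < len(fluence):
                acc += w * fluence[idx]
        out.append(acc)
    return out

flat = [1.0] * 20                      # open 20 mm field, 1 mm grid
blurred = blur_fluence(flat, sigma_mm=3.0, dx_mm=1.0)
print(round(blurred[10], 3))           # -> 1.0 (interior unchanged, edges smeared)
```

The blurring only affects the field edges of a flat profile, which is why random errors mainly soften penumbrae while systematic errors shift the whole dose distribution.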
Accounting for optical errors in microtensiometry.
Hinton, Zachary R; Alvarez, Nicolas J
2018-09-15
Drop shape analysis (DSA) techniques measure interfacial tension subject to error in image analysis and the optical system. While considerable efforts have been made to minimize image analysis errors, very little work has treated optical errors. There are two main sources of error when considering the optical system: the angle of misalignment and the choice of focal plane. Due to the convoluted nature of these sources, small angles of misalignment can lead to large errors in measured curvature. We demonstrate using microtensiometry the contributions of these sources to measured errors in radius, and, more importantly, deconvolute the effects of misalignment and focal plane. Our findings are expected to have broad implications for all optical techniques measuring interfacial curvature. A geometric model is developed to analytically determine the contributions of misalignment angle and choice of focal plane to measurement error for spherical cap interfaces. This work utilizes a microtensiometer to validate the geometric model and to quantify the effect of both sources of error. For the case of a microtensiometer, an empirical calibration is demonstrated that corrects for optical errors and drastically simplifies implementation. The combination of geometric modeling and experimental results reveals a convoluted relationship between the true and measured interfacial radius as a function of the misalignment angle and choice of focal plane. The validated geometric model produces a full operating window that is strongly dependent on the capillary radius and spherical cap height. In all cases, the contribution of optical errors is minimized when the height of the spherical cap is equivalent to the capillary radius, i.e., a hemispherical interface. The understanding of these errors allows for correct measurement of interfacial curvature and interfacial tension regardless of experimental setup. 
For the case of microtensiometry, this greatly decreases the time for experimental setup and increases experimental accuracy. In a broad sense, this work outlines the importance of optical errors in all DSA techniques. More specifically, these results have important implications for all microscale and microfluidic measurements of interface curvature. Copyright © 2018 Elsevier Inc. All rights reserved.
Tabernero, Juan; Vazquez, Daniel; Seidemann, Anne; Uttenweiler, Dietmar; Schaeffel, Frank
2009-08-01
The recent observation that central refractive development might be controlled by the refractive errors in the periphery, also in primates, revived the interest in the peripheral optics of the eye. We optimized an eccentric photorefractor to measure the peripheral refractive error in the vertical pupil meridian over the horizontal visual field (from -45 degrees to 45 degrees), with and without myopic spectacle correction. Furthermore, a newly designed radial refractive gradient lens (RRG lens) that induces increasing myopia in all radial directions from the center was tested. We found that for the geometry of our measurement setup conventional spectacles induced significant relative hyperopia in the periphery, although its magnitude varied greatly among different spectacle designs and subjects. In contrast, the newly designed RRG lens induced relative peripheral myopia. These results are of interest to analyze the effect that different optical corrections might have on the emmetropization process.
Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.
Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in the setup error of radiotherapy. Methods: Balanced data according to a one-factor random-effects model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates the systematic error, especially in hypofractionated settings. The CI for the systematic error becomes much wider than that for the random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications in setup error analysis in radiotherapy.
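For a balanced one-factor random-effects model, the ANOVA-based estimates of the systematic (Σ) and random (σ) components can be sketched as follows. The data are illustrative, and the confidence-interval computation from the note is omitted; note how the conventional estimate (SD of patient means) comes out slightly larger than the ANOVA estimate, consistent with the overestimation described above.

```python
import statistics as st

def variance_components(data):
    """Balanced one-factor random-effects ANOVA for setup errors.
    data: per-patient lists of daily setup errors (equal lengths).
    Returns (Sigma, sigma): systematic and random SD estimates."""
    n = len(data[0])                               # fractions per patient
    means = [st.mean(p) for p in data]
    msw = st.mean([st.variance(p) for p in data])  # within-patient mean square
    msb = n * st.variance(means)                   # between-patient mean square
    sigma2_sys = max(0.0, (msb - msw) / n)         # remove random contribution
    return sigma2_sys ** 0.5, msw ** 0.5

# Illustrative data: 3 patients x 4 fractions (mm)
data = [[1.0, 2.0, 1.5, 1.5], [-0.5, 0.0, 0.5, 0.0], [2.0, 3.0, 2.5, 2.5]]
Sigma, sigma = variance_components(data)
print(round(Sigma, 2), round(sigma, 2))  # -> 1.24 0.41
# Conventional estimate (SD of patient means) is larger: ~1.26
```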
2013-01-01
Background The purpose of this study was to evaluate the impact of Cone Beam CT (CBCT) based setup correction on total dose distributions in fractionated frameless stereotactic radiation therapy of intracranial lesions. Methods Ten patients with intracranial lesions treated with 30 Gy in 6 fractions were included in this study. Treatment planning was performed with Oncentra® for a SynergyS® (Elekta Ltd, Crawley, UK) linear accelerator with XVI® Cone Beam CT, and HexaPOD™ couch top. Patients were immobilized by thermoplastic masks (BrainLab, Reuther). After initial patient setup with respect to lasers, a CBCT study was acquired and registered to the planning CT (PL-CT) study. Patient positioning was corrected according to the correction values (translational, rotational) calculated by the XVI® system. Afterwards a second CBCT study was acquired and registered to the PL-CT to confirm the accuracy of the corrections. An in-house developed software was used for rigid transformation of the PL-CT to the CBCT geometry, and dose calculations for each fraction were performed on the transformed CT. The total dose distribution was achieved by back-transformation and summation of the dose distributions of each fraction. Dose distributions based on PL-CT, CBCT (laser set-up), and final CBCT were compared to assess the influence of setup inaccuracies. Results The mean displacement vector, calculated over all treatments, was reduced from (4.3 ± 1.3) mm for laser based setup to (0.5 ± 0.2) mm if CBCT corrections were applied. The mean rotational errors around the medial-lateral, superior-inferior, anterior-posterior axis were reduced from (−0.1 ± 1.4)°, (0.1 ± 1.2)° and (−0.2 ± 1.0)°, to (0.04 ± 0.4)°, (0.01 ± 0.4)° and (0.02 ± 0.3)°. As a consequence the mean deviation between planned and delivered dose in the planning target volume (PTV) could be reduced from 12.3% to 0.4% for D95 and from 5.9% to 0.1% for Dav. 
Maximum deviation was reduced from 31.8% to 0.8% for D95, and from 20.4% to 0.1% for Dav. Conclusion: Real dose distributions differ substantially from planned dose distributions if setup is performed according to lasers only. Thermoplastic masks combined with a daily CBCT enabled sufficient accuracy in dose distribution. PMID:23800172
Lei, Yu; Wu, Qiuwen
2010-04-21
Offline adaptive radiotherapy (ART) has been used to effectively correct and compensate for prostate motion and reduce the required margin. The efficacy depends on the characteristics of the patient setup error and interfraction motion through the whole treatment; specifically, systematic errors are corrected and random errors are compensated for through the margins. In online image-guided radiation therapy (IGRT) of prostate cancer, the translational setup error and inter-fractional prostate motion are corrected through pre-treatment imaging and couch correction at each fraction. However, the rotation and deformation of the target are not corrected and only accounted for with margins in treatment planning. The purpose of this study was to investigate whether the offline ART strategy is necessary for an online IGRT protocol and to evaluate the benefit of the hybrid strategy. First, to investigate the rationale of the hybrid strategy, 592 cone-beam computed tomography (CBCT) images taken before and after each fraction for an online IGRT protocol from 16 patients were analyzed. Specifically, the characteristics of prostate rotation were analyzed. It was found that there exist systematic inter-fractional prostate rotations, and they are patient-specific. These rotations, if not corrected, are persistent through the treatment fraction, and rotations detected in early fractions are representative of those in later fractions. These findings suggest that the offline adaptive replanning strategy is beneficial to the online IGRT protocol with further margin reductions. Second, to quantitatively evaluate the benefit of the hybrid strategy, 412 repeated helical CT scans from 25 patients during the course of treatment were included in the replanning study. Both low-risk patients (LRP, clinical target volume, CTV = prostate) and intermediate-risk patients (IRP, CTV = prostate + seminal vesicles) were included in the simulation. 
The contours of the prostate and seminal vesicles were delineated on each CT. The benefit of margin reduction to compensate for both rotation and deformation in the hybrid strategy was evaluated geometrically. With the hybrid strategy, the planning margins can be reduced by 1.4 mm for LRP and 2.0 mm for IRP, compared with the standard online IGRT alone, while maintaining the same 99% target volume coverage. The average relative reduction of the PTV based on the internal target volume (ITV) compared with the PTV based on the CTV is 19% for LRP and 27% for IRP.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Penninkhof, Joan, E-mail: j.penninkhof@erasmusmc.nl; Quint, Sandra; Baaijens, Margreet
Purpose: To describe the practical use of the extended No Action Level (eNAL) setup correction protocol for breast cancer patients with surgical clips and evaluate its impact on the setup accuracy of both tumor bed and whole breast during simultaneously integrated boost treatments. Methods and Materials: For 80 patients, two orthogonal planar kilovoltage images and one megavoltage image (for the mediolateral beam) were acquired per fraction throughout the radiotherapy course. For setup correction, the eNAL protocol was applied, based on registration of surgical clips in the lumpectomy cavity. Differences with respect to application of a No Action Level (NAL) protocol or no protocol were quantified for tumor bed and whole breast. The correlation between clip migration during the fractionated treatment and either the method of surgery or the time elapsed since the last surgery was investigated. Results: The distance of the clips to their center of mass (COM), averaged over all clips and patients, was reduced by 0.9 ± 1.2 mm (mean ± 1 SD). Clip migration was similar between the group of patients starting treatment within 100 days after surgery (median, 53 days) and the group starting afterward (median, 163 days) (p = 0.20). Clip migration after conventional breast surgery (closing the breast superficially) and after lumpectomy with partial breast reconstructive techniques (sutured cavity) was not significantly different either (p = 0.22). Application of eNAL on clips resulted in residual systematic errors for the clips' COM of less than 1 mm in each direction, whereas the setup of the breast was accurate to within about 2 mm. Conclusions: Surgical clips can be safely used for high-accuracy position verification and correction. Given compensation for time trends in the clips' COM throughout the treatment course, eNAL resulted in better setup accuracies for both tumor bed and whole breast than NAL.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwa, Stefan L.S., E-mail: s.kwa@erasmusmc.nl; Al-Mamgani, Abrahim; Osman, Sarah O.S.
2015-09-01
Purpose: The purpose of this study was to verify clinical target volume-planning target volume (CTV-PTV) margins in single vocal cord irradiation (SVCI) of T1a larynx tumors and to characterize inter- and intrafraction target motion. Methods and Materials: For 42 patients, a single vocal cord was irradiated using intensity modulated radiation therapy at a total dose of 58.1 Gy (16 fractions × 3.63 Gy). A daily cone beam computed tomography (CBCT) scan was performed to correct the setup of the thyroid cartilage online after patient positioning with in-room lasers (interfraction motion correction). To monitor intrafraction motion, CBCT scans were also acquired just after patient repositioning and after dose delivery. A mixed online-offline setup correction protocol (“O2 protocol”) was designed to compensate for both inter- and intrafraction motion. Results: Observed interfraction systematic (Σ) and random (σ) setup errors in the left-right (LR), craniocaudal (CC), and anteroposterior (AP) directions were 0.9, 2.0, and 1.1 mm and 1.0, 1.6, and 1.0 mm, respectively. After correction of these errors, the intrafraction movements derived from the CBCT acquired after dose delivery were: Σ = 0.4, 1.3, and 0.7 mm, and σ = 0.8, 1.4, and 0.8 mm. More than half of the patients showed a systematic non-zero intrafraction shift in target position (i.e., the mean intrafraction displacement over the treatment fractions was statistically significantly different from zero; P<.05). With the applied CTV-PTV margins (for most patients 3, 5, and 3 mm in the LR, CC, and AP directions, respectively), the minimum CTV dose, estimated from the target displacements observed in the last CBCT, was at least 94% of the prescribed dose for all patients and more than 98% for most patients (37 of 42). The proposed O2 protocol could effectively reduce the systematic intrafraction errors observed after dose delivery to almost zero (Σ = 0.1, 0.2, 0.2 mm). 
Conclusions: With adequate image guidance and CTV-PTV margins in the LR, CC, and AP directions of 3, 5, and 3 mm, respectively, excellent target coverage in SVCI could be ensured.
In-Situ Cameras for Radiometric Correction of Remotely Sensed Data
NASA Astrophysics Data System (ADS)
Kautz, Jess S.
The atmosphere distorts the spectrum of remotely sensed data, negatively affecting all forms of investigating Earth's surface. To gather reliable data, it is vital that atmospheric corrections are accurate. The current state of the field of atmospheric correction does not account well for the benefits and costs of different correction algorithms. Ground spectral data are required to evaluate these algorithms better. This dissertation explores using cameras as radiometers as a means of gathering ground spectral data. I introduce techniques to implement a camera system for atmospheric correction using off-the-shelf parts. To aid the design of future camera systems for radiometric correction, methods for estimating the system error prior to construction, calibration, and testing of the resulting camera system are explored. Simulations are used to investigate the relationship between the reflectance accuracy of the camera system and the quality of atmospheric correction. In the design phase, read noise and filter choice are found to be the strongest sources of system error. I explain the calibration methods for the camera system, showing the problems of pixel-to-angle calibration, and adapting the web camera for scientific work. The camera system is tested in the field to estimate its ability to recover directional reflectance from BRF data. I estimate the error in the system due to the experimental setup, then explore how the system error changes with different cameras, environmental setups, and inversions. With these experiments, I learn about the importance of the dynamic range of the camera and the input ranges used for the PROSAIL inversion. Evidence that the camera can perform within the specification set for ELM correction in this dissertation is evaluated. 
The analysis is concluded by simulating an ELM correction of a scene using various numbers of calibration targets, and levels of system error, to find the number of cameras needed for a full-scale implementation.
Balter, James M; Antonuk, Larry E
2008-01-01
In-room radiography is not a new concept for image-guided radiation therapy. Rapid advances in technology, however, have made this positioning method convenient, and thus radiograph-based positioning has propagated widely. The paradigms for quality assurance of radiograph-based positioning include imager performance, systems integration, infrastructure, procedure documentation and testing, and support for positioning strategy implementation.
Kenney, Laurence P; Heller, Ben W; Barker, Anthony T; Reeves, Mark L; Healey, Jamie; Good, Timothy R; Cooper, Glen; Sha, Ning; Prenton, Sarah; Liu, Anmin; Howard, David
2016-11-01
Functional electrical stimulation has been shown to be a safe and effective means of correcting foot drop of central neurological origin. Current surface-based devices typically consist of a single channel stimulator, a sensor for determining gait phase and a cuff, within which is housed the anode and cathode. The cuff-mounted electrode design reduces the likelihood of large errors in electrode placement, but the user is still fully responsible for selecting the correct stimulation level each time the system is donned. Researchers have investigated different approaches to automating aspects of setup and/or use, including recent promising work based on iterative learning techniques. This paper reports on the design and clinical evaluation of an electrode array-based FES system for the correction of drop foot, ShefStim. The paper reviews the design process from proof of concept lab-based study, through modelling of the array geometry and interface layer to array search algorithm development. Finally, the paper summarises two clinical studies involving patients with drop foot. The results suggest that the ShefStim system with automated setup produces results which are comparable with clinician setup of conventional systems. Further, the final study demonstrated that patients can use the system without clinical supervision. When used unsupervised, setup time was 14 min (9 min for the automated search plus 5 min for donning the equipment), although this figure could be reduced significantly with relatively minor changes to the design. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, H; Wang, W; Hu, W
2014-06-01
Purpose: To quantify setup errors with pretreatment kilovoltage cone-beam computed tomography (kV-CBCT) scans for patients with middle or distal esophageal carcinoma. Methods: Fifty-two consecutive middle or distal esophageal carcinoma patients who underwent IMRT were included in this study. A planning CT scan using a big-bore CT simulator was performed in the treatment position and was used as the reference scan for image registration with CBCT. CBCT scans (On-Board Imaging v1.5 system, Varian Medical Systems) were acquired daily during the first treatment week. A total of 260 CBCT scans (nine CBCTs per patient) was assessed with a registration clip box defined around the PTV-thorax in the reference scan, based on bony anatomy, using Offline Review software v10.0 (Varian Medical Systems). The anterior-posterior (AP), left-right (LR), and superior-inferior (SI) corrections were recorded, and the systematic and random errors were calculated. The CTV-to-PTV margins for each CBCT frequency were based on the van Herk formula (2.5Σ + 0.7σ). Results: The SD of the systematic error (Σ) was 2.0 mm, 2.3 mm, and 3.8 mm in the AP, LR, and SI directions, respectively. The average random error (σ) was 1.6 mm, 2.4 mm, and 4.1 mm in the AP, LR, and SI directions, respectively. The CTV-to-PTV safety margin based on the van Herk formula was 6.1 mm, 7.5 mm, and 12.3 mm in the AP, LR, and SI directions. Conclusion: Our data recommend the use of 6 mm, 8 mm, and 12 mm margins for esophageal carcinoma patient setup in the AP, LR, and SI directions, respectively.
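The margin computation above can be sketched as follows. This is a minimal illustrative implementation of the van Herk recipe (2.5Σ + 0.7σ) for one direction, where Σ is the SD of per-patient mean errors and σ is the root-mean-square of per-patient SDs; the numbers are made up, not the study's data, and conventions for pooling σ vary between centers.

```python
import statistics as st

def van_herk_margin(per_patient_errors):
    """CTV-to-PTV margin 2.5*Sigma + 0.7*sigma (mm) for one direction.
    per_patient_errors: list of per-patient lists of daily setup errors."""
    means = [st.mean(p) for p in per_patient_errors]
    Sigma = st.stdev(means)                                   # systematic SD
    sigma = st.mean([st.variance(p) for p in per_patient_errors]) ** 0.5  # random SD (rms)
    return 2.5 * Sigma + 0.7 * sigma

# Illustrative data: 3 patients x 3 fractions of AP errors (mm)
data = [[1.0, 2.0, 1.5], [-1.0, 0.0, -0.5], [3.0, 2.0, 2.5]]
print(round(van_herk_margin(data), 2))  # -> 4.17
```

The 2.5Σ term dominates because uncorrected systematic errors shift the whole cumulative dose, while the 0.7σ term only accounts for the daily blurring, which is why offline correction protocols that shrink Σ shrink the margin most effectively.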
Model-based sensor-less wavefront aberration correction in optical coherence tomography.
Verstraete, Hans R G W; Wahls, Sander; Kalkman, Jeroen; Verhaegen, Michel
2015-12-15
Several sensor-less wavefront aberration correction methods that correct nonlinear wavefront aberrations by maximizing the optical coherence tomography (OCT) signal are tested on an OCT setup. A conventional coordinate search method is compared to two model-based optimization methods. The first model-based method takes advantage of the well-known optimization algorithm (NEWUOA) and utilizes a quadratic model. The second model-based method (DONE) is new and utilizes a random multidimensional Fourier-basis expansion. The model-based algorithms achieve lower wavefront errors with up to ten times fewer measurements. Furthermore, the newly proposed DONE method outperforms the NEWUOA method significantly. The DONE algorithm is tested on OCT images and shows a significantly improved image quality.
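The conventional coordinate search used as the baseline above can be sketched as follows. This is a generic, illustrative maximizer with a toy quadratic stand-in for the OCT image-quality metric; it is not the paper's actual implementation, nor the NEWUOA or DONE algorithms.

```python
def coordinate_search(metric, x0, step=0.5, shrink=0.5, iters=20):
    """Maximize `metric` by perturbing one coefficient at a time,
    shrinking the step when no axis improves (sensor-less scheme)."""
    x = list(x0)
    best = metric(x)
    for _ in range(iters):
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = list(x)
                trial[i] += delta
                v = metric(trial)
                if v > best:            # keep the perturbation if it helps
                    x, best, improved = trial, v, True
        if not improved:
            step *= shrink              # refine once no axis improves
    return x, best

# Toy stand-in for the OCT sharpness metric: peaks at coefficients (1, -2)
metric = lambda c: -((c[0] - 1.0) ** 2 + (c[1] + 2.0) ** 2)
x, best = coordinate_search(metric, [0.0, 0.0])
print([round(v, 2) for v in x])  # -> [1.0, -2.0]
```

Each call to `metric` corresponds to one measurement on the instrument, which is why the model-based methods in the paper, needing up to ten times fewer evaluations, are attractive for in vivo imaging.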
Feuerstein, Marco; Reichl, Tobias; Vogel, Jakob; Traub, Joerg; Navab, Nassir
2009-06-01
Electromagnetic tracking is currently one of the most promising means of localizing flexible endoscopic instruments such as flexible laparoscopic ultrasound transducers. However, electromagnetic tracking is also susceptible to interference from ferromagnetic material, which distorts the magnetic field and leads to tracking errors. This paper presents new methods for real-time online detection and reduction of dynamic electromagnetic tracking errors when localizing a flexible laparoscopic ultrasound transducer. We use a hybrid tracking setup to combine optical tracking of the transducer shaft and electromagnetic tracking of the flexible transducer tip. A novel approach of modeling the poses of the transducer tip in relation to the transducer shaft allows us to reliably detect and significantly reduce electromagnetic tracking errors. For detecting errors of more than 5 mm, we achieved a sensitivity and specificity of 91% and 93%, respectively. An initial 3-D rms error of 6.91 mm was reduced to 3.15 mm.
Altomare, Cristina; Guglielmann, Raffaella; Riboldi, Marco; Bellazzi, Riccardo; Baroni, Guido
2015-02-01
In high-precision photon radiotherapy and in hadrontherapy, it is crucial to minimize the occurrence of geometrical deviations with respect to the treatment plan in each treatment session. To this end, point-based infrared (IR) optical tracking for patient set-up quality assessment is performed. Such tracking depends on external fiducial point placement. The main purpose of our work is to propose a new algorithm based on simulated annealing and augmented Lagrangian pattern search (SAPS), which is able to take prior knowledge, such as spatial constraints, into account during the optimization process. The SAPS algorithm was tested on data from head and neck and pelvic cancer patients who were fitted with external surface markers for IR optical tracking used for preliminary patient set-up correction. The integrated algorithm was tested considering optimality measures obtained with Computed Tomography (CT) images (i.e., the ratio between the so-called target registration error and fiducial registration error, TRE/FRE) and assessing the marker spatial distribution. Comparison was performed with randomly selected marker configurations and with the GETS algorithm (Genetic Evolutionary Taboo Search), also taking into account the presence of organs at risk. The results obtained with SAPS highlight improvements with respect to the other approaches: (i) the TRE/FRE ratio decreases; (ii) the marker distribution satisfies both marker visibility and spatial constraints. We have also investigated how the TRE/FRE ratio is influenced by the number of markers, obtaining significant TRE/FRE reduction with respect to the random configurations when a high number of markers is used. The SAPS algorithm is a valuable strategy for fiducial configuration optimization in IR optical tracking applied for patient set-up error detection and correction in radiation therapy, showing that taking prior knowledge into account is valuable in this optimization process.
Further work will be focused on the computational optimization of the SAPS algorithm toward fast point-of-care applications. Copyright © 2014 Elsevier Inc. All rights reserved.
Calibrating page sized Gafchromic EBT3 films
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crijns, W.; Maes, F.; Heide, U. A. van der
2013-01-15
Purpose: The purpose is the development of a novel calibration method for dosimetry with Gafchromic EBT3 films. The method should be applicable for pretreatment verification of volumetric modulated arc, and intensity modulated radiotherapy. Because the exposed area on film can be large for such treatments, lateral scan errors must be taken into account. The correction for the lateral scan effect is obtained from the calibration data itself. Methods: In this work, the film measurements were modeled using their relative scan values (Transmittance, T). Inside the transmittance domain a linear combination and a parabolic lateral scan correction described the observed transmittance values. The linear combination model combined a monomer transmittance state (T₀) and a polymer transmittance state (T∞) of the film. The dose domain was associated with the observed effects in the transmittance domain through a rational calibration function. On the calibration film only simple static fields were applied and page sized films were used for calibration and measurements (treatment verification). Four different calibration setups were considered and compared with respect to dose estimation accuracy. The first (I) used a calibration table from 32 regions of interest (ROIs) spread on 4 calibration films, the second (II) used 16 ROIs spread on 2 calibration films, and the third (III) and fourth (IV) used 8 ROIs spread on a single calibration film. The calibration tables of setups I, II, and IV contained eight dose levels delivered to different positions on the films, while for setup III only four dose levels were applied. Validation was performed by irradiating film strips with known doses at two different time points over the course of a week. Accuracy of the dose response and the lateral effect correction was estimated using the dose difference and the root mean squared error (RMSE), respectively.
Results: A calibration based on two films was the optimal balance between cost effectiveness and dosimetric accuracy. The validation resulted in dose errors of 1%-2% for the two different time points, with a maximal absolute dose error around 0.05 Gy. The lateral correction reduced the RMSE values on the sides of the film to the RMSE values at the center of the film. Conclusions: EBT3 Gafchromic films were calibrated for large field dosimetry with a limited number of page sized films and simple static calibration fields. The transmittance was modeled as a linear combination of two transmittance states, and associated with dose using a rational calibration function. Additionally, the lateral scan effect was resolved in the calibration function itself. This allows the use of page sized films. Only two calibration films were required to estimate both the dose and the lateral response. The calibration films were used over the course of a week, with residual dose errors ≤2% or ≤0.05 Gy.
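The two-state transmittance model described in this record can be sketched numerically. The mixing weight below is an assumed rational function of dose, chosen only so the model is invertible; the paper fits its own calibration function from film data, and the T₀, T∞, and D50 values here are illustrative, not measured.

```python
# Assumed illustrative constants: monomer transmittance T0, polymer transmittance
# T_inf, and a half-response dose D50 in Gy (none of these come from the paper).
T0, T_inf, D50 = 0.95, 0.30, 3.0

def transmittance(dose):
    """Linear combination of monomer (T0) and polymer (T_inf) states.
    The mixing weight w(dose) is an assumed rational function of dose."""
    w = 1.0 / (1.0 + dose / D50)
    return w * T0 + (1.0 - w) * T_inf

def dose_from_T(T):
    """Invert the model: a rational calibration function mapping T back to dose."""
    w = (T - T_inf) / (T0 - T_inf)
    return D50 * (1.0 - w) / w

# Round trip: encoding a dose as transmittance and decoding it recovers the dose.
for d in (0.5, 2.0, 5.0):
    assert abs(dose_from_T(transmittance(d)) - d) < 1e-9
```

The inversion works because the linear combination can be solved for the weight w, and the assumed weight function can be solved for dose; the paper's actual fit additionally folds in the parabolic lateral-scan correction, which is omitted here.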
DOE Office of Scientific and Technical Information (OSTI.GOV)
Runxiao, L; Aikun, W; Xiaomei, F
2015-06-15
Purpose: To compare two registration methods in CBCT-guided radiotherapy for cervical carcinoma, analyze the setup errors and registration methods, and determine the margin required for extending the clinical target volume (CTV) to the planning target volume (PTV). Methods: Twenty patients with cervical carcinoma were enrolled. All patients underwent CT simulation in the supine position. The CT images were transferred to the treatment planning system; the CTV, PTV, and organs at risk (OAR) were defined and then transmitted to the XVI workstation. CBCT scans were performed before radiotherapy and registered to the planning CT images using bone and grey-value registration methods. The two methods were compared to obtain the left-right (X), superior-inferior (Y), and anterior-posterior (Z) setup errors, and the margin required for CTV to PTV was calculated. Results: Setup errors were unavoidable in postoperative cervical carcinoma irradiation. The setup errors measured by the bone method (systematic ± random) in the X (left-right), Y (superior-inferior), and Z (anterior-posterior) directions were (0.24 ± 3.62), (0.77 ± 5.05), and (0.13 ± 3.89) mm, respectively; the setup errors measured by the grey-value method (systematic ± random) in the X, Y, and Z directions were (0.31 ± 3.93), (0.85 ± 5.16), and (0.21 ± 4.12) mm, respectively. The spatial distribution of setup error was largest in the Y direction. The margins were 4 mm on the X axis, 6 mm on the Y axis, and 4 mm on the Z axis. The two registration methods gave similar results and both are recommended. Conclusion: Both the bone and grey-value registration methods offer accurate setup error measurement. Based on the setup errors, PTV margins of 4 mm, 6 mm, and 4 mm in the X, Y, and Z directions are suggested for postoperative radiotherapy of cervical carcinoma.
Thermal transmission of camouflage nets revisited
NASA Astrophysics Data System (ADS)
Jersblad, Johan; Jacobs, Pieter
2016-10-01
In this article we derive, from first principles, the correct formula for thermal transmission of a camouflage net, based on the setup described in the US standard for lightweight camouflage nets. Furthermore, we compare the results and implications with the use of an incorrect formula that has been seen in several recent tenders. It is shown that the incorrect formulation not only gives rise to large errors, but its result also depends on the surrounding room temperature, which in the correct derivation cancels out. The theoretical results are compared with laboratory measurements; they agree with the laboratory results for the correct derivation. To summarize, we discuss the consequences for soldiers on the battlefield if incorrect standards and test methods are used in procurement processes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, S; Oh, S; Yea, J
Purpose: This study evaluated the setup uncertainties for brain sites when using BrainLAB's ExacTrac X-ray 6D system for daily pretreatment imaging, to determine the optimal planning target volume (PTV) margin. Methods: Between August 2012 and April 2015, 28 patients with brain tumors were treated by daily image-guided radiotherapy using the BrainLAB ExacTrac 6D image guidance system of the Novalis-Tx linear accelerator. DUON™ (Orfit Industries, Wijnegem, Belgium) masks were used to fix the head. The radiotherapy was fractionated into 27–33 treatments. In total, 844 image verifications were performed for the 28 patients and used for the analysis. The setup corrections, along with the systematic and random errors, were analyzed for six degrees of freedom in the translational (lateral, longitudinal, and vertical) and rotational (pitch, roll, and yaw) dimensions. Results: Optimal PTV margins were calculated based on van Herk et al.'s [margin recipe = 2.5∑ + 0.7σ − 3 mm] and Stroom et al.'s [margin recipe = 2∑ + 0.7σ] formulas. The systematic errors (∑) were 0.72, 1.57, and 0.97 mm in the lateral, longitudinal, and vertical translational dimensions, respectively, and 0.72°, 0.87°, and 0.83° in the pitch, roll, and yaw rotational dimensions, respectively. The random errors (σ) were 0.31, 0.46, and 0.54 mm in the lateral, longitudinal, and vertical translational dimensions, respectively, and 0.28°, 0.24°, and 0.31° in the pitch, roll, and yaw rotational dimensions, respectively. According to van Herk et al.'s and Stroom et al.'s recipes, the recommended lateral PTV margins were 0.97 and 1.66 mm, respectively; the longitudinal margins were 1.26 and 3.47 mm, respectively; and the vertical margins were 0.21 and 2.31 mm, respectively. Conclusion: Daily setup verification using the BrainLAB ExacTrac 6D image guidance system is very useful for evaluating setup uncertainties and determining the setup margin.
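The two margin recipes quoted in this record can be evaluated directly from the reported systematic (∑) and random (σ) translational errors. This is a minimal sketch of the arithmetic only; note that the van Herk variant as quoted here includes a −3 mm term, which yields negative values for the smallest reported errors (presumably clipped or handled differently in the study), so the computed numbers only approximately reproduce the reported margins.

```python
def stroom_margin(sigma_sys, sigma_rand):
    """Stroom et al. recipe as quoted in the abstract: M = 2*Sigma + 0.7*sigma (mm)."""
    return 2.0 * sigma_sys + 0.7 * sigma_rand

def van_herk_margin(sigma_sys, sigma_rand):
    """van Herk-type recipe as quoted in the abstract: M = 2.5*Sigma + 0.7*sigma - 3 mm."""
    return 2.5 * sigma_sys + 0.7 * sigma_rand - 3.0

# Reported translational errors (Sigma, sigma) in mm, per axis.
errors = {"lateral": (0.72, 0.31), "longitudinal": (1.57, 0.46), "vertical": (0.97, 0.54)}
for axis, (S, s) in errors.items():
    print(f"{axis}: Stroom {stroom_margin(S, s):.2f} mm, van Herk {van_herk_margin(S, s):.2f} mm")
```

The Stroom values come out at about 1.66, 3.46, and 2.32 mm, matching the reported 1.66, 3.47, and 2.31 mm up to rounding of the inputs.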
Lovelock, D Michael; Hua, Chiaho; Wang, Ping; Hunt, Margie; Fournier-Bidoz, Nathalie; Yenice, Kamil; Toner, Sean; Lutz, Wendell; Amols, Howard; Bilsky, Mark; Fuks, Zvi; Yamada, Yoshiya
2005-08-01
Because of the proximity of the spinal cord, effective radiotherapy of paraspinal tumors to high doses requires highly conformal dose distributions, accurate patient setup, setup verification, and patient immobilization. An immobilization cradle has been designed to facilitate the rapid setup and radiation treatment of patients with paraspinal disease. For all treatments, patients were set up to within 2.5 mm of the design using an amorphous silicon portal imager. Setup reproducibility of the target using the cradle and associated clinical procedures was assessed by measuring the setup error prior to any correction. From 350 anterior/posterior images and 303 lateral images, the standard deviations, as determined by the imaging procedure, were 1.3 mm, 1.6 mm, and 2.1 mm in the ant/post, right/left, and superior/inferior directions. Immobilization was assessed by measuring patient shifts between localization images taken before and after treatment. From 67 ant/post image pairs and 49 lateral image pairs, the standard deviations were found to be less than 1 mm in all directions. Careful patient positioning and immobilization have enabled us to develop a successful clinical program of high-dose, conformal radiotherapy of paraspinal disease using a conventional linac equipped with dynamic multileaf collimation and an amorphous silicon portal imager.
NASA Astrophysics Data System (ADS)
Schulz-Hildebrandt, H.; Münter, Michael; Ahrens, M.; Spahr, H.; Hillmann, D.; König, P.; Hüttmann, G.
2018-03-01
Optical coherence tomography (OCT) images scattering tissues with 5 to 15 μm resolution. This is usually not sufficient to distinguish cellular and subcellular structures. Achieving cellular and subcellular resolution requires increased axial and lateral resolution and compensation of artifacts caused by dispersion and aberrations, including defocus, which limits the usable depth of field at high lateral resolution. OCT gives access to the phase of the scattered light, and hence dispersion and aberrations can be corrected by numerical algorithms. Here we present a unified dispersion/aberration correction based on a polynomial parameterization of the phase error and an optimization of the image quality using Shannon's entropy. For validation, a supercontinuum light source and a custom-made spectrometer with 400 nm bandwidth were combined with a high-NA microscope objective in a setup for tissue and small-animal imaging. Using this setup and computational corrections, volumetric imaging at 1.5 μm resolution is possible. Cellular and near-cellular resolution is demonstrated in porcine cornea and the Drosophila larva when computational correction of dispersion and aberrations is used. Due to the excellent correction of the microscope objective used, defocus was the main contribution to the aberrations. In addition, higher-order aberrations caused by the sample itself were successfully corrected. Dispersion and aberrations are closely related artifacts in microscopic OCT imaging, and hence they can be corrected in the same way by optimization of the image quality. This way, microscopic resolution is easily achieved in OCT imaging of static biological tissues.
NASA Astrophysics Data System (ADS)
Edvardsson, A.; Ceberg, S.
2013-06-01
The aim of this study was 1) to investigate interfraction set-up uncertainties for patients treated with respiratory gating for left-sided breast cancer, 2) to investigate the effect of the interfraction set-up on the absorbed dose distribution for the target and organs at risk (OARs), and 3) to optimize the set-up correction strategy. By acquiring multiple set-up images, the systematic set-up deviation was evaluated. The effect of the systematic set-up deviation on the absorbed dose distribution was evaluated by 1) simulation in the treatment planning system and 2) measurements with a biplanar diode array. The set-up deviations could be decreased using a no-action-level correction strategy. Omitting the clinically implemented adaptive maximum likelihood factor for the gating patients resulted in a better set-up. When the uncorrected set-up deviations were simulated, the average mean absorbed dose increased from 1.38 to 2.21 Gy for the heart, from 4.17 to 8.86 Gy for the left anterior descending coronary artery, and from 5.80 to 7.64 Gy for the left lung. Respiratory gating can induce systematic set-up deviations which, if left uncorrected, would increase the mean absorbed dose to the OARs; they should therefore be corrected for by an appropriate correction strategy.
NASA Astrophysics Data System (ADS)
Meng, Bowen; Xing, Lei; Han, Bin; Koong, Albert; Chang, Daniel; Cheng, Jason; Li, Ruijiang
2013-11-01
Non-coplanar beams are important for treatment of both cranial and noncranial tumors. Treatment verification of such beams with couch rotation/kicks, however, is challenging, particularly for the application of cone beam CT (CBCT). In this situation, only limited and unconventional imaging angles are feasible to avoid collision between the gantry, couch, patient, and on-board imaging system. The purpose of this work is to develop a CBCT verification strategy for patients undergoing non-coplanar radiation therapy. We propose an image reconstruction scheme that integrates a prior image constrained compressed sensing (PICCS) technique with image registration. Planning CT or CBCT acquired at the neutral position is rotated and translated according to the nominal couch rotation/translation to serve as the initial prior image. Here, the nominal couch movement is chosen to have a rotational error of 5° and translational error of 8 mm from the ground truth in one or more axes or directions. The proposed reconstruction scheme alternates between two major steps. First, an image is reconstructed using the PICCS technique implemented with total-variation minimization and simultaneous algebraic reconstruction. Second, the rotational/translational setup errors are corrected and the prior image is updated by applying rigid image registration between the reconstructed image and the previous prior image. The PICCS algorithm and rigid image registration are alternated iteratively until the registration results fall below a predetermined threshold. The proposed reconstruction algorithm is evaluated with an anthropomorphic digital phantom and physical head phantom. The proposed algorithm provides useful volumetric images for patient setup using projections with an angular range as small as 60°. It reduced the translational setup errors from 8 mm to generally <1 mm and the rotational setup errors from 5° to <1°. 
Compared with the PICCS algorithm alone, the integration of rigid registration significantly improved the reconstructed image quality, with a reduction of typically 2-3 fold (up to 100-fold) in root-mean-square image error. The proposed algorithm provides a remedy for the problem of non-coplanar CBCT reconstruction from a limited angle of projections by combining the PICCS technique and rigid image registration in an iterative framework. In this proof-of-concept study, non-coplanar beams with couch rotations of 45° were effectively verified with the CBCT technique.
Henrion, Sebastian; Spoor, Cees W; Pieters, Remco P M; Müller, Ulrike K; van Leeuwen, Johan L
2015-07-07
Images of underwater objects are distorted by refraction at the water-glass-air interfaces, and these distortions can lead to substantial errors when reconstructing the objects' position and shape. So far, aquatic locomotion studies have minimized refraction in their experimental setups and used the direct linear transformation (DLT) algorithm to reconstruct position information, which does not model refraction explicitly. Here we present a refraction-corrected ray-tracing algorithm (RCRT) that reconstructs position information using Snell's law. We validated this reconstruction by calculating the 3D reconstruction error: the difference between the actual and reconstructed position of a marker. We found that the reconstruction error is small (typically less than 1%). Compared with the DLT algorithm, the RCRT has overall lower reconstruction errors, especially outside the calibration volume, and its errors are essentially insensitive to camera position and orientation and to the number and position of the calibration points. To demonstrate the effectiveness of the RCRT, we tracked an anatomical marker on a seahorse recorded with four cameras to reconstruct the swimming trajectory for six different camera configurations. The RCRT algorithm is accurate and robust; it allows cameras to be oriented at large angles of incidence and facilitates the development of accurate tracking algorithms to quantify aquatic manoeuvres.
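A refraction-corrected reconstruction like the RCRT rests on tracing rays through Snell's law at each interface. The sketch below applies the vector form of Snell's law at a single flat air-water interface; the RCRT itself chains such refractions through the water-glass-air stack and triangulates 3D positions from several cameras, which is not reproduced here.

```python
import math

def refract(d, n, n1, n2):
    """Refract a unit direction d at an interface with unit normal n
    (pointing toward the incident medium), using Snell's law in vector form.
    Returns None on total internal reflection."""
    eta = n1 / n2
    cos_i = -(d[0] * n[0] + d[1] * n[1])
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None
    cos_t = math.sqrt(1.0 - sin2_t)
    return tuple(eta * di + (eta * cos_i - cos_t) * ni for di, ni in zip(d, n))

# A ray in air (n=1.0) hitting a horizontal water surface (n=1.33) at 30 deg incidence.
theta_i = math.radians(30.0)
d = (math.sin(theta_i), -math.cos(theta_i))   # travelling downward into the water
n = (0.0, 1.0)                                # surface normal pointing up into the air
t = refract(d, n, 1.0, 1.33)
theta_t = math.degrees(math.asin(t[0]))       # refracted angle, about 22 degrees
```

A tracking pipeline would intersect the camera ray with each interface, refract it as above, and then find the 3D point minimizing the distance between the refracted rays from all cameras.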
Influence of nuclear interactions in body tissues on tumor dose in carbon-ion radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Inaniwa, T., E-mail: taku@nirs.go.jp; Kanematsu, N.; Tsuji, H.
2015-12-15
Purpose: In carbon-ion radiotherapy treatment planning, the planar integrated dose (PID) measured in water is applied to the patient dose calculation with density scaling using the stopping power ratio. Since body tissues are chemically different from water, this dose calculation can be subject to errors, particularly due to differences in inelastic nuclear interactions. In recent studies, the authors proposed and validated a PID correction method for these errors. In the present study, the authors used this correction method to assess the influence of these nuclear interactions in body tissues on tumor dose in various clinical cases. Methods: Using 10–20 cases each of prostate, head and neck (HN), bone and soft tissue (BS), lung, liver, pancreas, and uterine neoplasms, the authors first used treatment plans for carbon-ion radiotherapy without nuclear interaction correction to derive uncorrected dose distributions. The authors then compared these distributions with recalculated distributions using the nuclear interaction correction (corrected dose distributions). Results: Median (25%/75% quartiles) differences between the target mean uncorrected doses and corrected doses were 0.2% (0.1%/0.2%), 0.0% (0.0%/0.0%), −0.3% (−0.4%/−0.2%), −0.1% (−0.2%/−0.1%), −0.1% (−0.2%/0.0%), −0.4% (−0.5%/−0.1%), and −0.3% (−0.4%/0.0%) for the prostate, HN, BS, lung, liver, pancreas, and uterine cases, respectively. The largest difference of −1.6% in target mean and −2.5% at maximum were observed in a uterine case. Conclusions: For most clinical cases, dose calculation errors due to the water nonequivalence of the tissues in nuclear interactions would be marginal compared to intrinsic uncertainties in treatment planning, patient setup, beam delivery, and clinical response. In some extreme cases, however, these errors can be substantial. Accordingly, this correction method should be routinely applied to treatment planning in clinical practice.
More irregular eye shape in low myopia than in emmetropia.
Tabernero, Juan; Schaeffel, Frank
2009-09-01
To improve the description of peripheral eye shape in myopia and emmetropia by using a new method for continuous measurement of the peripheral refractive state. A scanning photorefractor was designed to record refractive errors in the vertical pupil meridian across the horizontal visual field (up to ±45°). The setup consists of a hot mirror that continuously projects the infrared light from a photoretinoscope into the eye under different angles of eccentricity. The movement of the mirror is controlled by two stepping motors. Refraction in a group of 17 emmetropic subjects and 11 myopic subjects (mean, -4.3 D; SD, 1.7) was measured without spectacle correction. For the analysis of eye shape, the refractive error versus eccentricity angle was fitted with polynomials of different orders (from second to tenth). The new setup presents some important advantages over previous techniques: the subject does not have to change gaze during the measurements, and a continuous profile is obtained rather than discrete points. There was a significant difference in the fitting errors between the subjects with myopia and those with emmetropia: tenth-order polynomials were required in myopic subjects to achieve a quality of fit similar to that achieved in emmetropic subjects with only sixth-order polynomials. Apparently, the peripheral shape of the myopic eye is more "bumpy." A new setup is presented for obtaining continuous peripheral refraction profiles. It was found that the peripheral retinal shape is more irregular even in only moderately myopic eyes, perhaps because the sclera loses some rigidity even at this early stage of myopia.
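The fitting comparison in this record, where "bumpy" myopic profiles need higher polynomial orders than smooth emmetropic ones, can be illustrated with synthetic refraction profiles. The two profiles below are invented for illustration only; they are not the measured data.

```python
import numpy as np

angles = np.linspace(-45.0, 45.0, 91)   # eccentricity in degrees
u = angles / 45.0                       # normalized abscissa for numerically stable fits

# Synthetic profiles (diopters): a smooth emmetrope-like quadratic, and a
# myope-like profile with an added oscillation to mimic a "bumpy" retina.
smooth = 2.0 * u ** 2
bumpy = 2.0 * u ** 2 + 0.3 * np.sin(4.5 * u)

def fit_rms(y, order):
    """RMS residual of a least-squares polynomial fit of the given order."""
    coef = np.polyfit(u, y, order)
    return float(np.sqrt(np.mean((np.polyval(coef, u) - y) ** 2)))

# The smooth profile is captured exactly at order 2; the bumpy profile needs
# a much higher order before its residual becomes comparably small.
rms_smooth_2 = fit_rms(smooth, 2)
rms_bumpy_6 = fit_rms(bumpy, 6)
rms_bumpy_10 = fit_rms(bumpy, 10)
```

On this toy data the order-10 residual for the bumpy profile is far smaller than the order-6 residual, mirroring the paper's observation that tenth-order fits were needed for myopic eyes.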
Rosetta Navigation at its Mars Swing-By
NASA Technical Reports Server (NTRS)
Budnik, Frank; Morley, Trevor
2007-01-01
This paper reports on the navigation activities during Rosetta's Mars swing-by. It covers the Mars approach phase starting after a deterministic deep-space maneuver in September 2006, the swing-by proper on 25 February 2007, and ends with another deterministic deep-space maneuver in April 2007, which was also foreseen to compensate for any navigation error. Emphasis is put on the orbit determination and prediction set-up, the evolution of the targeting estimates in the B-plane, and their adjustments by trajectory correction maneuvers.
Hansen, Helle; Nielsen, Berit Kjærside; Boejen, Annette; Vestergaard, Anne
2018-06-01
The aim of this study was to investigate if teaching patients about positioning before radiotherapy treatment would (a) reduce the residual rotational set-up errors, (b) reduce the number of repositionings and (c) improve patients' sense of control by increasing self-efficacy and reducing distress. Patients were randomized to either standard care (control group) or standard care and a teaching session combining visual aids and practical exercises (intervention group). Daily images from the treatment sessions were evaluated off-line. Both groups filled in a questionnaire before and at the end of the treatment course on various aspects of cooperation with the staff regarding positioning. Comparisons of residual rotational set-up errors showed an improvement in the intervention group compared to the control group. No significant differences were found in number of repositionings, self-efficacy or distress. Results show that it is possible to teach patients about positioning and thereby improve precision in positioning. Teaching patients about positioning did not seem to affect self-efficacy or distress scores at baseline and at the end of the treatment course.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takahashi, Y; National Cancer Center, Kashiwa, Chiba; Tachibana, H
Purpose: Total body irradiation (TBI) and total marrow irradiation (TMI) using TomoTherapy have been reported. A gantry-based linear accelerator uses one isocenter during each rotational irradiation, so 3–5 isocenter points are needed for a whole VMAT-TBI plan while smoothing out the junctional dose distribution. IGRT provides accurate and precise patient setup for the multiple junctions; however, some setup errors inevitably occur and affect the accuracy of the dose distribution in these areas. In this study, we evaluated the robustness of VMAT-TBI against patient setup errors. Methods: VMAT-TBI planning was performed on an adult whole-body anthropomorphic phantom using Eclipse. Eight full arcs with four isocenter points using 6 MV X-rays were used to cover the entire body. The dose distribution was optimized using two structures, the patient's body as PTV and the lungs. Two arcs shared one isocenter, and each pair of arcs overlapped the adjacent pair by 5 cm. Point absolute dose measurements using an ionization chamber and planar relative dose distribution measurements using film in the junctional regions were performed in a water-equivalent slab phantom. In the measurements, setup errors of −5 to +5 mm were introduced. Results: The chamber measurements showed that the deviations were within ±3% when the setup errors were within ±3 mm. In the planar evaluation, the gamma pass ratio (3%/2 mm) was more than 90% when the errors were within ±3 mm. However, there were hot/cold areas at the edge of the junction even with an acceptable gamma pass ratio. A 5 mm setup error caused larger hot and cold areas, and the dosimetrically acceptable areas in the overlap regions decreased. Conclusion: VMAT-TBI can be clinically acceptable when the patient setup error is within ±3 mm. Averaging effects from random patient setup errors would help blur the hot/cold areas at the junction.
Dosimetric effects of patient rotational setup errors on prostate IMRT treatments
NASA Astrophysics Data System (ADS)
Fu, Weihua; Yang, Yong; Li, Xiang; Heron, Dwight E.; Saiful Huq, M.; Yue, Ning J.
2006-10-01
The purpose of this work is to determine dose delivery errors that could result from systematic rotational setup errors (ΔΦ) for prostate cancer patients treated with three-phase sequential boost IMRT. In order to implement this, different rotational setup errors around three Cartesian axes were simulated for five prostate patients and dosimetric indices, such as dose-volume histogram (DVH), tumour control probability (TCP), normal tissue complication probability (NTCP) and equivalent uniform dose (EUD), were employed to evaluate the corresponding dosimetric influences. Rotational setup errors were simulated by adjusting the gantry, collimator and horizontal couch angles of treatment beams and the dosimetric effects were evaluated by recomputing the dose distributions in the treatment planning system. Our results indicated that, for prostate cancer treatment with the three-phase sequential boost IMRT technique, the rotational setup errors do not have significant dosimetric impacts on the cumulative plan. Even in the worst-case scenario with ΔΦ = 3°, the prostate EUD varied within 1.5% and TCP decreased about 1%. For seminal vesicle, slightly larger influences were observed. However, EUD and TCP changes were still within 2%. The influence on sensitive structures, such as rectum and bladder, is also negligible. This study demonstrates that the rotational setup error degrades the dosimetric coverage of target volume in prostate cancer treatment to a certain degree. However, the degradation was not significant for the three-phase sequential boost prostate IMRT technique and for the margin sizes used in our institution.
Baron, Charles A.; Awan, Musaddiq J.; Mohamed, Abdallah S.R.; Akel, Imad; Rosenthal, David I.; Gunn, G. Brandon; Garden, Adam S.; Dyer, Brandon A.; Court, Laurence; Sevak, Parag R.; Kocak‐Uzel, Esengul
2014-01-01
The larynx may serve as either a target or an organ at risk (OAR) in head and neck cancer (HNC) image‐guided radiotherapy (IGRT). The objective of this study was to estimate the IGRT parameters required for larynx positional error independent of isocentric alignment and to suggest population‐based compensatory margins. Ten HNC patients receiving radiotherapy (RT) with daily CT on‐rails imaging were assessed. Seven landmark points were placed on each daily scan. Taking the most superior‐anterior point of the C5 vertebra as a reference isocenter for each scan, residual displacement vectors to the other six points were calculated after isocentric alignment. Subsequently, using the first scan as a reference, the magnitude of the vector differences for all six points over the course of treatment was calculated. Residual systematic and random errors and the necessary compensatory CTV‐to‐PTV and OAR‐to‐PRV margins were calculated, using both observational cohort data and a bootstrap‐resampled population estimator. The grand mean displacement for all anatomical points was 5.07 mm, with a mean systematic error of 1.1 mm and a mean random setup error of 2.63 mm, while the bootstrapped grand mean displacement of the points of interest (POIs) was 5.09 mm, with a mean systematic error of 1.23 mm and a mean random setup error of 2.61 mm. The required margin for CTV‐to‐PTV expansion was 4.6 mm for all cohort points, while the bootstrap estimate of the equivalent margin was 4.9 mm. The calculated OAR‐to‐PRV expansion for the observed residual setup error was 2.7 mm, with a bootstrap‐estimated expansion of 2.9 mm. We conclude that interfractional larynx setup error is a significant source of RT setup/delivery error in HNC, whether the larynx is considered a CTV or an OAR. We estimate the need for a uniform expansion of 5 mm to compensate for setup error if the larynx is a target, or 3 mm if the larynx is an OAR, when using a nonlaryngeal bony isocenter. PACS numbers: 87.55.D‐, 87.55.Qr
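The margins quoted in this record are consistent with standard recipes applied to the reported population statistics. The sketch below assumes a van Herk-type CTV-to-PTV recipe (2.5∑ + 0.7σ) and a McKenzie-type OAR-to-PRV recipe (1.3∑ + 0.5σ), which reproduce the reported 4.6/2.7 mm (cohort) and 4.9/2.9 mm (bootstrap) values; the abstract does not name the recipes, so this attribution is an assumption. The bootstrap-over-patients estimator is likewise a generic sketch using synthetic data.

```python
import random
import statistics

def margins(Sigma, sigma):
    """Assumed recipes: van Herk-type CTV-to-PTV and McKenzie-type OAR-to-PRV (mm)."""
    return 2.5 * Sigma + 0.7 * sigma, 1.3 * Sigma + 0.5 * sigma

# Reported cohort statistics: Sigma = 1.1 mm, sigma = 2.63 mm.
ptv, prv = margins(1.1, 2.63)    # approximately 4.6 mm and 2.7 mm

def setup_stats(per_patient):
    """Population systematic error = SD of patient mean errors;
    random error = root-mean-square of per-patient SDs."""
    means = [statistics.mean(p) for p in per_patient]
    sds = [statistics.stdev(p) for p in per_patient]
    Sigma = statistics.stdev(means)
    sigma = (sum(s * s for s in sds) / len(sds)) ** 0.5
    return Sigma, sigma

def bootstrap_ptv_margin(per_patient, n_boot=500, seed=0):
    """Resample patients with replacement and average the recomputed PTV margin."""
    rng = random.Random(seed)
    est = [margins(*setup_stats([rng.choice(per_patient) for _ in per_patient]))[0]
           for _ in range(n_boot)]
    return statistics.mean(est)
```

Resampling whole patients (rather than individual fractions) preserves the split between systematic and random components, which is why the bootstrap margin tracks the cohort margin so closely in the study.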
A review of setup error in supine breast radiotherapy using cone-beam computed tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batumalai, Vikneswary, E-mail: Vikneswary.batumalai@sswahs.nsw.gov.au; Liverpool and Macarthur Cancer Therapy Centres, New South Wales; Ingham Institute of Applied Medical Research, Sydney, New South Wales
2016-10-01
Setup error in breast radiotherapy (RT) measured with 3-dimensional cone-beam computed tomography (CBCT) is becoming more common. The purpose of this study is to review the literature relating to the magnitude of setup error in breast RT measured with CBCT. The different methods of image registration between CBCT and the planning computed tomography (CT) scan were also explored. A literature search, not limited by date, was conducted using Medline and Google Scholar with the following key words: breast cancer, RT, setup error, and CBCT. This review includes studies that reported on systematic and random errors, and the methods used when registering CBCT scans with the planning CT scan. A total of 11 relevant studies were identified for inclusion in this review. The average magnitude of error is generally less than 5 mm across the studies reviewed. The common registration methods used when registering CBCT scans with the planning CT scan are based on bony anatomy, soft tissue, and surgical clips. No clear relationships between the setup errors detected and the methods of registration were observed in this review. Further studies are needed to assess the benefit of CBCT over electronic portal imaging, as CBCT remains unproven to be of wide benefit in breast RT.
Experimental implementation of the Bacon-Shor code with 10 entangled photons
NASA Astrophysics Data System (ADS)
Gimeno-Segovia, Mercedes; Sanders, Barry C.
The number of qubits that can be effectively controlled in quantum experiments is growing, reaching a regime where small quantum error-correcting codes can be tested. The Bacon-Shor code is a simple quantum code that protects against the effect of an arbitrary single-qubit error. In this work, we propose an experimental implementation of said code in a post-selected linear optical setup, similar to the recently reported 10-photon GHZ generation experiment. In the procedure we propose, an arbitrary state is encoded into the protected Shor code subspace, and after undergoing a controlled single-qubit error, is successfully decoded. BCS appreciates financial support from Alberta Innovates, NSERC, China's 1000 Talent Plan and the Institute for Quantum Information and Matter, which is an NSF Physics Frontiers Center (NSF Grant PHY-1125565) with support of the Moore Foundation (GBMF-2644).
Elongation measurement using 1-dimensional image correlation method
NASA Astrophysics Data System (ADS)
Phongwisit, Phachara; Kamoldilok, Surachart; Buranasiri, Prathan
2016-11-01
The aim of this paper was to design, set up, and calibrate an elongation measurement based on the 1-dimensional image correlation (1-DIC) method. To verify the correctness of our method and setup, we calibrated it against another method. Here, a small spring was used as the sample so that the result could be expressed as a spring constant. Following the fundamentals of the image correlation method, images of the undeformed and deformed sample were compared to characterize the deformation. By comparing the pixel locations of a reference point in both images, the spring's elongation was calculated. The results were then compared with the spring constant obtained from Hooke's law, yielding an error of approximately 5 percent. This DIC method could subsequently be applied to measure the elongation of various kinds of small fiber samples.
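The pipeline described (correlate reference and deformed 1-D intensity profiles, convert the pixel shift to an elongation, then apply Hooke's law) can be sketched as below. This is an illustrative integer-pixel version, not the authors' implementation; the calibration constants are assumed values:

```python
def displacement_1d(ref, deformed):
    """Integer-pixel shift that maximizes the cross-correlation
    between a reference and a deformed 1-D intensity profile."""
    n = len(ref)
    best_lag, best_score = 0, float("-inf")
    for lag in range(-(n - 1), n):
        # Correlate ref with deformed shifted by `lag`, clipping at the edges.
        score = sum(ref[i] * deformed[i + lag]
                    for i in range(n) if 0 <= i + lag < n)
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Toy profiles: a bright feature moves 1 pixel to the right.
shift_px = displacement_1d([0, 0, 1, 0, 0], [0, 0, 0, 1, 0])

# Hypothetical calibration: pixels -> metres, then Hooke's law F = k*x.
pixel_size_m = 1e-4           # assumed camera calibration
force_n = 0.05                # assumed applied force
k = force_n / (shift_px * pixel_size_m)   # spring constant, 500.0 N/m here
```

A real DIC setup would add sub-pixel interpolation of the correlation peak; the integer search above only shows the principle.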
NASA Astrophysics Data System (ADS)
Maloney, Chris; Lormeau, Jean Pierre; Dumas, Paul
2016-07-01
Many astronomical sensing applications operate in low-light conditions; for these applications, every photon counts. Controlling mid-spatial frequencies and surface roughness on astronomical optics is critical for mitigating scattering effects such as flare and energy loss. By improving these two frequency regimes, higher-contrast images can be collected with improved efficiency. Classically, Magnetorheological Finishing (MRF) has offered an optical fabrication technique to correct low-order errors as well as the quilting/print-through errors left in light-weighted optics by conventional polishing techniques. MRF is a deterministic, sub-aperture polishing process that has been used to improve figure on an ever-expanding assortment of optical geometries, such as planos, spheres, on- and off-axis aspheres, primary mirrors, and freeform optics. Precision optics are routinely manufactured by this technology with sizes ranging from 5 to 2,000 mm in diameter. MRF can be used for form corrections, turning a sphere into an asphere or freeform, but more commonly for figure corrections, achieving figure errors as low as 1 nm RMS with careful metrology setups. Recent advancements in MRF technology have improved the polishing performance expected for astronomical optics in the low, mid, and high spatial frequency regimes. Deterministic figure correction with MRF is compatible with most materials, including some recent examples on Silicon Carbide and RSA905 Aluminum. MRF also has the ability to produce 'perfectly bad' compensating surfaces, which may be used to compensate for measured or modeled optical deformation from sources such as gravity or mounting. In addition, recent advances in MRF technology allow for corrections of mid-spatial wavelengths as small as 1 mm simultaneously with form error correction. Efficient mid-spatial frequency corrections make use of optimized process conditions, including raster polishing in combination with a small tool size.
Furthermore, a novel MRF fluid, called C30, has been developed to finish surfaces to ultra-low roughness (ULR) and has been used as the low removal rate fluid required for fine figure correction of mid-spatial frequency errors. This novel MRF fluid is able to achieve <4Å RMS on Nickel-plated Aluminum and even <1.5Å RMS roughness on Silicon, Fused Silica and other materials. C30 fluid is best utilized within a fine figure correction process to target mid-spatial frequency errors as well as smooth surface roughness 'for free' all in one step. In this paper we will discuss recent advancements in MRF technology and the ability to meet requirements for precision optics in low, mid and high spatial frequency regimes and how improved MRF performance addresses the need for achieving tight specifications required for astronomical optics.
Evaluation of wave runup predictions from numerical and parametric models
Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.
2014-01-01
Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
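The parameterized model evaluated here is not reproduced in the abstract. As a rough sketch, the widely cited empirical runup formula of Stockdon et al. (2006) combines a setup term and a swash term from deep-water wave height H0, peak period T, and foreshore slope βf; treat the coefficients below as assumptions of this sketch rather than the exact model used in the study:

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def runup_2pct(H0, T, beta_f):
    """2% exceedance runup following the Stockdon et al. (2006) form.
    H0: deep-water significant wave height (m), T: peak period (s),
    beta_f: foreshore beach slope (dimensionless)."""
    L0 = G * T ** 2 / (2 * math.pi)      # deep-water wavelength (m)
    setup = 0.35 * beta_f * math.sqrt(H0 * L0)
    swash = math.sqrt(H0 * L0 * (0.563 * beta_f ** 2 + 0.004))
    return 1.1 * (setup + swash / 2)

# Runup grows with offshore wave height, all else equal.
r_small = runup_2pct(1.0, 10.0, 0.1)
r_large = runup_2pct(2.0, 10.0, 0.1)
```

The separation into setup and swash components mirrors the decomposition the abstract uses when comparing the parameterized and numerical predictions.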
Ohtakara, Kazuhiro; Hayashi, Shinya; Tanaka, Hidekazu; Hoshi, Hiroaki; Kitahara, Masashi; Matsuyama, Katsuya; Okada, Hitoshi
2012-02-01
To compare the positioning accuracy and stability of two distinct noninvasive immobilization devices, a dedicated (D-) and a conventional (C-) mask, and to evaluate the applicability of a 6-degrees-of-freedom (6D) correction, especially to the C-mask, based on our initial experience with cranial stereotactic radiotherapy (SRT) using ExacTrac (ET)/Robotics integrated into the Novalis Tx platform. The D- and C-masks were the BrainLAB frameless mask system and a general thermoplastic mask used for conventional radiotherapy such as whole-brain irradiation, respectively. A total of 148 fractions in 71 patients and 125 fractions in 20 patients were analyzed for the D- and C-masks, respectively. For the C-mask, 3D correction was applied to the initial 10 patients; thereafter, 6D correction was adopted. The 6D residual errors (REs) at the initial setup, after correction (pre-treatment), and post-treatment were measured and compared. The D-mask provided no significant benefit for initial setup. The post-treatment median 3D vector displacements (interquartile range) were 0.38 mm (0.22, 0.60) and 0.74 mm (0.49, 1.04) for the D- and C-masks, respectively (p<0.001). The post-treatment maximal translational REs were within 1 mm and 2 mm for the D- and C-masks, respectively, and notably within 1.5 mm for the C-mask with 6D correction. The pre-treatment 3D vector displacements were significantly correlated with those post-treatment for both masks. The D-mask confers positional stability acceptable for SRT. For the C-mask, 6D correction is also recommended, and an additional setup margin of 0.5 mm beyond that for the D-mask would be sufficient. The tolerance levels for the pre-treatment REs should similarly be set as small as possible for both systems. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gierga, David P., E-mail: dgierga@partners.org; Harvard Medical School, Boston, Massachusetts; Turcotte, Julie C.
2012-12-01
Purpose: Breath-hold (BH) treatments can be used to reduce cardiac dose for patients with left-sided breast cancer and unfavorable cardiac anatomy. A surface imaging technique was developed for accurate patient setup and reproducible real-time BH positioning. Methods and Materials: Three-dimensional surface images were obtained for 20 patients. Surface imaging was used to correct the daily setup for each patient. Initial setup data were recorded for 443 fractions and were analyzed to assess random and systematic errors. Real-time monitoring was used to verify surface placement during BH. The radiation beam was not turned on if the BH position difference was greater than 5 mm. Real-time surface data were analyzed for 2398 BHs and 363 treatment fractions. The mean and maximum differences were calculated. The percentage of BHs greater than tolerance was calculated. Results: The mean shifts for initial patient setup were 2.0 mm, 1.2 mm, and 0.3 mm in the vertical, longitudinal, and lateral directions, respectively. The mean 3-dimensional vector shift was 7.8 mm. Random and systematic errors were less than 4 mm. Real-time surface monitoring data indicated that 22% of the BHs were outside the 5-mm tolerance (range, 7%-41%), and there was a correlation with breast volume. The mean difference between the treated and reference BH positions was 2 mm in each direction. For out-of-tolerance BHs, the average difference in the BH position was 6.3 mm, and the average maximum difference was 8.8 mm. Conclusions: Daily real-time surface imaging ensures accurate and reproducible positioning for BH treatment of left-sided breast cancer patients with unfavorable cardiac anatomy.
Joint de-blurring and nonuniformity correction method for infrared microscopy imaging
NASA Astrophysics Data System (ADS)
Jara, Anselmo; Torres, Sergio; Machuca, Guillermo; Ramírez, Wagner; Gutiérrez, Pablo A.; Viafora, Laura A.; Godoy, Sebastián E.; Vera, Esteban
2018-05-01
In this work, we present a new technique to simultaneously reduce two major degradation artifacts found in mid-wavelength infrared microscopy imagery, namely the inherent focal-plane array nonuniformity noise and the scene defocus presented due to the point spread function of the infrared microscope. We correct both nuisances using a novel, recursive method that combines the constant range nonuniformity correction algorithm with a frame-by-frame deconvolution approach. The ability of the method to jointly compensate for both nonuniformity noise and blur is demonstrated using two different real mid-wavelength infrared microscopic video sequences, which were captured from two microscopic living organisms using a Janos-Sofradir mid-wavelength infrared microscopy setup. The performance of the proposed method is assessed on real and simulated infrared data by computing the root mean-square error and the roughness-laplacian pattern index, which was specifically developed for the present work.
Batumalai, Vikneswary; Phan, Penny; Choong, Callie; Holloway, Lois; Delaney, Geoff P
2016-12-01
To compare the differences in setup errors measured with electronic portal imaging (EPI) and cone-beam computed tomography (CBCT) in patients undergoing tangential breast radiotherapy (RT). The relationship between setup errors, body mass index (BMI), and breast size was also assessed. Twenty-five patients undergoing postoperative RT to the breast were consented for this study. Weekly CBCT scans were acquired and retrospectively registered to the planning CT in three dimensions, first using bony anatomy for bony registration (CBCT-B) and again using the breast tissue outline for soft tissue registration (CBCT-S). Digitally reconstructed radiographs (DRR) generated from CBCT to simulate EPI were compared to the planning DRR using bony anatomy in the V (parallel to the cranio-caudal axis) and U (perpendicular to V) planes. The systematic (Σ) and random (σ) errors were calculated and correlated with BMI and breast size. The systematic and random errors for EPI (Σ V = 3.7 mm, Σ U = 2.8 mm and σ V = 2.9 mm, σ U = 2.5 mm) and CBCT-B (Σ V = 3.5 mm, Σ U = 3.4 mm and σ V = 2.8 mm, σ U = 2.8 mm) were of similar magnitude in the V and U planes. Similarly, the differences in setup errors for CBCT-B and CBCT-S in three dimensions were less than 1 mm. Only the CBCT-S setup error correlated with BMI and breast size. CBCT and EPI show insignificant variation in their ability to detect setup error. These findings suggest no significant differences that would make one modality superior to the other, and EPI should remain the standard of care for most patients. However, there is a correlation between breast size, BMI, and the setup error detected by CBCT-S, justifying the use of CBCT-S for larger patients. © 2016 The Authors. Journal of Medical Radiation Sciences published by John Wiley & Sons Australia, Ltd on behalf of Australian Society of Medical Imaging and Radiation Therapy and New Zealand Institute of Medical Radiation Technology.
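The population systematic error Σ and random error σ that recur throughout these studies have standard definitions: Σ is the standard deviation of the per-patient mean shifts, and σ is the root mean square of the per-patient standard deviations. A generic sketch of that calculation (not the authors' code):

```python
import math

def setup_error_stats(per_patient_shifts):
    """Compute population systematic error (SD of per-patient mean shifts)
    and random error (RMS of per-patient SDs) for one axis.
    per_patient_shifts: list of lists, one list of shifts per patient."""
    means, sds = [], []
    for shifts in per_patient_shifts:
        n = len(shifts)
        m = sum(shifts) / n
        means.append(m)
        # Sample SD of this patient's shifts about their own mean.
        sds.append(math.sqrt(sum((s - m) ** 2 for s in shifts) / (n - 1)))
    grand = sum(means) / len(means)
    sigma_sys = math.sqrt(
        sum((m - grand) ** 2 for m in means) / (len(means) - 1))
    sigma_rand = math.sqrt(sum(sd ** 2 for sd in sds) / len(sds))
    return sigma_sys, sigma_rand
```

In practice this is evaluated once per axis (e.g. the V and U planes above), and the per-axis Σ and σ feed directly into margin recipes.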
Fabrication of ф 160 mm convex hyperbolic mirror for remote sensing instrument
NASA Astrophysics Data System (ADS)
Kuo, Ching-Hsiang; Yu, Zong-Ru; Ho, Cheng-Fang; Hsu, Wei-Yao; Chen, Fong-Zhi
2012-10-01
In this study, efficient polishing processes with inspection procedures for a large convex hyperbolic mirror of a Cassegrain optical system are presented. The polishing process combines the techniques of conventional lapping and CNC polishing. We apply the conventional spherical lapping process to quickly remove the sub-surface damage (SSD) layer caused by the grinding process and to obtain the accurate radius of the best-fit sphere (BFS) of the aspheric surface with fine surface texture simultaneously. Thus the material removed in the aspherization process can be minimized, and the polishing time for SSD removal can also be reduced substantially. The inspection procedure was carried out using a phase-shift interferometer with a CGH and a stitching technique. To acquire the real surface form error of each sub-aperture, the wavefront errors of the reference flat and the CGH flat due to the gravity effect of the vertical setup are calibrated in advance. Subsequently, we stitch 10 calibrated sub-aperture surface form errors to establish the whole irregularity of the mirror over its 160 mm diameter for correction polishing. The final result for the 160 mm convex hyperbolic mirror is 0.15 μm PV and 17.9 nm RMS.
The effect of monitor raster latency on VEPs, ERPs and Brain-Computer Interface performance.
Nagel, Sebastian; Dreher, Werner; Rosenstiel, Wolfgang; Spüler, Martin
2018-02-01
Visual neuroscience experiments and Brain-Computer Interface (BCI) control often require strict timing on a millisecond scale. As most experiments are performed using a personal computer (PC), the latencies introduced by the setup should be taken into account and corrected. Because a standard computer monitor uses rastering to update each line of the image sequentially, it introduces a monitor raster latency that depends on the position on the monitor and the refresh rate. We technically measured the raster latencies of different monitors and present their effects on visual evoked potentials (VEPs) and event-related potentials (ERPs). Additionally, we present a method for correcting the monitor raster latency and analyzed the performance difference of a code-modulated VEP BCI speller when correcting for the latency. There are currently no other methods validating the effects of monitor raster latency on VEPs and ERPs. The timings of VEPs and ERPs are directly affected by the raster latency. Furthermore, correcting the raster latency resulted in a significant reduction of the target prediction error from 7.98% to 4.61%, and also in a more reliable classification of targets by significantly increasing the distance between the most probable and the second most probable target by 18.23%. The monitor raster latency affects the timings of VEPs and ERPs, and correcting it resulted in a significant error reduction of 42.23%. It is recommended to correct the raster latency for increased BCI performance and methodical correctness. Copyright © 2017 Elsevier B.V. All rights reserved.
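The position dependence follows directly from the raster model: a stimulus near the bottom of the frame is drawn almost a full refresh period later than one at the top. A minimal sketch of that model (the uniform top-to-bottom assumption is an idealization; real monitors add blanking intervals and input lag):

```python
def raster_latency_ms(row, total_rows, refresh_hz):
    """Approximate delay (ms) after the frame starts until `row` is drawn,
    assuming uniform top-to-bottom rastering and no blanking interval."""
    frame_time_ms = 1000.0 / refresh_hz
    return (row / total_rows) * frame_time_ms

# A stimulus at mid-screen on a 60 Hz, 1080-line monitor appears
# roughly 8.3 ms later than one at the very top of the frame.
mid_latency = raster_latency_ms(540, 1080, 60)
```

Correcting for this simply means subtracting the per-position latency from each event timestamp before epoching VEPs/ERPs or feeding features to the BCI classifier.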
NASA Astrophysics Data System (ADS)
Jung, Jae Hong; Jung, Joo-Young; Bae, Sun Hyun; Moon, Seong Kwon; Cho, Kwang Hwan
2016-10-01
The purpose of this study was to compare patient setup deviations between two image-guided protocols (weekly vs. biweekly) used in TomoDirect three-dimensional conformal radiotherapy (TD-3DCRT) for whole-breast radiation therapy (WBRT). A total of 138 megavoltage computed tomography (MVCT) image sets from 46 breast cancer cases were divided into two groups based on the imaging acquisition times: weekly or biweekly. The mean error, three-dimensional setup displacement error (3D-error), systematic error (Σ), and random error (σ) were calculated for each group. The 3D-errors were 4.29 ± 1.11 mm and 5.02 ± 1.85 mm for the weekly and biweekly groups, respectively; the biweekly error was 14.6% higher than the weekly error. The systematic errors in the roll angle and the x, y, and z directions were 0.48°, 1.72 mm, 2.18 mm, and 1.85 mm for the weekly protocol and 0.21°, 1.24 mm, 1.39 mm, and 1.85 mm for the biweekly protocol. Random errors in the roll angle and the x, y, and z directions were 25.7%, 40.6%, 40.0%, and 40.8% higher in the biweekly group than in the weekly group. For the x, y, and z directions, the proportions of treatments with deviations of less than 5 mm were 98.6%, 91.3%, and 94.2% in the weekly group and 94.2%, 89.9%, and 82.6% in the biweekly group. Moreover, the proportions of roll angles within 0°-1° were 79.7% and 89.9% in the weekly and biweekly groups, respectively. Overall, the evaluation of setup deviations for the two protocols revealed no significant differences (p > 0.05). Reducing the frequency of MVCT imaging could have promising effects on imaging dose and machine time during treatment. However, the biweekly protocol was associated with increased random setup deviations during treatment. We have demonstrated a biweekly protocol of TD-3DCRT for WBRT, and we anticipate that our method may provide an alternative approach for considering the uncertainties in the patient setup.
Corneal topometry by fringe projection: limits and possibilities
NASA Astrophysics Data System (ADS)
Windecker, Robert; Tiziani, Hans J.; Thiel, H.; Jean, Benedikt J.
1996-01-01
A fast and accurate measurement of corneal topography is an important task, especially since laser-induced corneal reshaping has been used for the correction of ametropia. The classical measuring system uses Placido rings for the measurement and calculation of the topography or local curvatures. Another approach is the projection of a known fringe map imaged onto the surface under a certain angle of incidence. We present a set-up using telecentric illumination and detection units. With a special grating, we obtain a synthetic wavelength with a nearly sinusoidal profile. In combination with very fast data acquisition, the topography can be evaluated using a special self-normalizing phase-evaluation algorithm. It calculates local Fourier coefficients and corrects errors caused by imperfect illumination or inhomogeneous scattering through fringe normalization. The topography can be determined over 700 × 256 pixels. The set-up is suitable for measuring optically rough silicone replicas of the human cornea as well as the cornea in vivo over a field of 8 mm and more. The resolution is mainly limited by noise and is better than two micrometers. We discuss the principal benefits and drawbacks compared with the standard Placido technique.
Using GPU parallelization to perform realistic simulations of the LPCTrap experiments
NASA Astrophysics Data System (ADS)
Fabian, X.; Mauger, F.; Quéméner, G.; Velten, Ph.; Ban, G.; Couratin, C.; Delahaye, P.; Durand, D.; Fabre, B.; Finlay, P.; Fléchard, X.; Liénard, E.; Méry, A.; Naviliat-Cuncic, O.; Pons, B.; Porobic, T.; Severijns, N.; Thomas, J. C.
2015-11-01
The LPCTrap setup is a sensitive tool to measure the β-ν angular correlation coefficient, a_βν, which can yield the mixing ratio ρ of a β decay transition. The latter enables the extraction of the Cabibbo-Kobayashi-Maskawa (CKM) matrix element V_ud. In such a measurement, the most relevant observable is the energy distribution of the recoiling daughter nuclei following the nuclear β decay, which is obtained using a time-of-flight technique. In order to maximize the precision, one can reduce the systematic errors through a thorough simulation of the whole setup, especially with a correct model of the trapped ion cloud. This paper presents such a simulation package and focuses on the ion cloud features; particular attention is therefore paid to realistic descriptions of the trapping field dynamics, buffer gas cooling, and the N-body space charge effects.
Kafieh, Rahele; Shahamoradi, Mahdi; Hekmatian, Ehsan; Foroohandeh, Mehrdad; Emamidoost, Mostafa
2012-10-01
To carry out an in vivo and in vitro comparative pilot study evaluating the precision of a newly proposed digital dental radiography setup. This setup was based on markers placed on an external frame to eliminate measurement errors due to incorrect geometry in the relative positioning of the cone, teeth, and sensor. Five patients with previous panoramic images were selected to undergo the proposed periapical digital imaging for the in vivo phase. For the in vitro phase, 40 extracted teeth were replanted in dry mandibular sockets and periapical digital images were prepared. The standard reference for the real scale of the teeth was obtained through extracted-tooth measurements for the in vitro application and was calculated through panoramic imaging for the in vivo phase. The proposed image processing technique was applied to the periapical digital images to detect incorrect geometry. The recognized error was inversely applied to the image, and the modified images were compared to the correct values. The measurements after distortion removal were compared to our gold standards (results of panoramic imaging or measurements from extracted teeth) and showed an accuracy of 96.45% in the in vivo examinations and 96.0% in the in vitro tests. The proposed distortion removal method is able to identify possible inaccurate geometry during image acquisition and is capable of applying the inverse transform to the distorted radiograph to obtain a correctly modified image. This can be particularly helpful in applications such as root canal therapy, implant surgical procedures, and digital subtraction radiography, which depend essentially on precise measurements.
Feasibility study of patient motion monitoring by using tactile array sensors
NASA Astrophysics Data System (ADS)
Kim, Tae-Ho; Kang, Seong-Hee; Kim, Dong-Su; Cho, Min-Seok; Kim, Kyeong-Hyeon; Suh, Tae-Suk; Kim, Siyong
2015-07-01
An ideal alignment method based on the external anatomical surface of the patient should consider the entire region of interest. However, optical-camera-based systems cannot monitor occluded areas such as the patient's back, and therefore cannot collect enough information to correct the associated deformation error. The aim of this study is to propose a new patient alignment method using tactile array sensors that can measure the distributed pressure profiles along the contact surface. The TactArray system includes one sensor, a signal-conditioning device (USB drive/interface electronics, power supply, and cables), and a PC. The tactile array sensor was placed between the patient's back and the treatment couch, and the deformations at different locations on the patient's back were evaluated. Three healthy male volunteers were enrolled in this study, and pressure profile distributions (PPDs) were obtained with and without immobilization. After the initial pretreatment setup using the laser alignment system, the PPD of the patient's back was acquired. The results were obtained at four different times and included a reference PPD dataset. The contact area and the center-of-pressure value were also derived from the PPD data for a more elaborate quantitative analysis. To evaluate the clinical feasibility of using the proposed alignment method to reduce the deformation error, we implemented a real-time self-correction procedure. Despite the initial alignment, we confirmed that PPD variations existed in both cases of the volunteer studies (with and without the immobilization tool). Additionally, we confirmed that the contact area and the center of pressure varied in both cases, and those variations were observed in all three volunteers. With the proposed alignment method and the real-time self-correction procedure, the deformation error was significantly reduced.
The proposed alignment method can be used to account for the limitation of the camera-based system and to improve the accuracy of the external surface-based patient setup.
Evaluation of kidney motion and target localization in abdominal SBRT patients
Sonier, Marcus; Chu, William; Lalani, Nafisha; Erler, Darby; Cheung, Patrick
2016-01-01
The purpose of this study was to evaluate bilateral kidney and target translational/rotational intrafraction motion during stereotactic body radiation therapy treatment delivery for primary renal cell carcinoma and oligometastatic adrenal lesions in patients immobilized in the Elekta BodyFIX system. Bilateral kidney motion was assessed at midplane for 30 patients immobilized in a full-body dual-vacuum-cushion system, with two patients immobilized via abdominal compression. Intrafraction motion was assessed for 15 patients using kilovoltage cone-beam computed tomography (kV-CBCT) datasets (n = 151) correlated to the planning CT. Patient positioning was corrected for translational and rotational misalignments using a robotic couch in six degrees of freedom if setup errors exceeded 1 mm and 1°. Absolute bilateral kidney motion between the inhale and exhale 4D CT imaging phases in the left-right (LR), superior-inferior (SI), and anterior-posterior (AP) directions was 1.51 ± 1.00 mm, 8.10 ± 4.33 mm, and 3.08 ± 2.11 mm, respectively. Residual setup error determined across CBCT type (pretreatment, intrafraction, and post-treatment) for the x (LR), y (SI), and z (AP) translations was 0.63 ± 0.74 mm, 1.08 ± 1.38 mm, and 0.70 ± 1.00 mm, while for the x (pitch), y (roll), and z (yaw) rotations it was 0.24 ± 0.39°, 0.19 ± 0.34°, and 0.26 ± 0.43°, respectively. Targets were localized to within 2.1 mm and 0.8° 95% of the time. The frequency of misalignments in the y direction was significant (p < 0.05) compared to the x and z directions, with no significant difference in translations between IMRT and VMAT. This technique is robust, using BodyFIX for patient immobilization and reproducible localization of kidney and adrenal targets, and daily CBCT image guidance for correction of positional errors to maintain treatment accuracy. PACS number(s): 87.55.‐x, 87.56.‐v, 87.56.Da PMID:27929514
The effect of divided attention on novices and experts in laparoscopic task performance.
Ghazanfar, Mudassar Ali; Cook, Malcolm; Tang, Benjie; Tait, Iain; Alijani, Afshin
2015-03-01
Attention is important for the skilful execution of surgery. The surgeon's attention during surgery is divided between the surgery itself and outside distractions. The effect of this divided attention has not been well studied previously. We aimed to compare the effect of dividing the attention of novices and experts on laparoscopic task performance. Following ethical approval, 25 novices and 9 expert surgeons performed a standardised peg transfer task in a laboratory setup under three randomly assigned conditions: silent as the control condition and two standardised auditory distracting tasks requiring a response (easy and difficult) as the study conditions. Human reliability assessment was used for surgical task analysis. Primary outcome measures were correct auditory responses, task time, number of surgical errors, and instrument movements. Secondary outcome measures included error rate, error probability, and hand-specific differences. Non-parametric statistics were used for data analysis. In total, 21109 movements and 9036 errors were analysed. Novices had increased mean task completion time (171 ± 44 s vs. 149 ± 34 s, p < 0.05), number of total movements (227 ± 27 vs. 213 ± 26, p < 0.05), and number of errors (127 ± 51 vs. 96 ± 28, p < 0.05) during the difficult study condition compared to control. Correct responses to auditory stimuli were less frequent in experts (68%) than in novices (80%). There was a positive correlation between error rate and error probability in novices (r² = 0.533, p < 0.05) but not in experts (r² = 0.346, p > 0.05). Divided-attention conditions in the theatre environment require careful consideration during surgical training, as junior surgeons are less able to focus their attention under these conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alderliesten, Tanja; Sonke, Jan-Jakob; Betgen, Anja
2013-02-01
Purpose: To investigate the applicability of 3-dimensional (3D) surface imaging for image guidance in deep-inspiration breath-hold radiation therapy (DIBH-RT) for patients with left-sided breast cancer. For this purpose, setup data based on captured 3D surfaces was compared with setup data based on cone beam computed tomography (CBCT). Methods and Materials: Twenty patients treated with DIBH-RT after breast-conserving surgery (BCS) were included. Before the start of treatment, each patient underwent a breath-hold CT scan for planning purposes. During treatment, dose delivery was preceded by setup verification using CBCT of the left breast. 3D surfaces were captured by a surface imaging system concurrently with the CBCT scan. Retrospectively, surface registrations were performed for CBCT to CT and for a captured 3D surface to CT. The resulting setup errors were compared with linear regression analysis. For the differences between setup errors, group mean, systematic error, random error, and 95% limits of agreement were calculated. Furthermore, receiver operating characteristic (ROC) analysis was performed. Results: Good correlation between setup errors was found: R² = 0.70, 0.90, and 0.82 in the left-right, craniocaudal, and anterior-posterior directions, respectively. Systematic errors were ≤0.17 cm in all directions. Random errors were ≤0.15 cm. The limits of agreement were −0.34 to 0.48, −0.42 to 0.39, and −0.52 to 0.23 cm in the left-right, craniocaudal, and anterior-posterior directions, respectively. ROC analysis showed that a threshold between 0.4 and 0.8 cm corresponds to promising true positive rates (0.78–0.95) and false positive rates (0.12–0.28). Conclusions: The results support the application of 3D surface imaging for image guidance in DIBH-RT after BCS.
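The agreement statistics reported here (group mean of the paired differences and 95% limits of agreement, in the Bland-Altman sense) can be sketched for paired setup measurements; a minimal Python sketch with illustrative numbers, not the study's data:

```python
import math

# Illustrative paired setup errors (cm) per fraction: surface imaging vs. CBCT.
surface = [0.12, -0.05, 0.30, 0.08, -0.11, 0.22, 0.02, -0.18]
cbct = [0.10, -0.02, 0.21, 0.15, -0.05, 0.18, 0.07, -0.10]

# Per-fraction differences between the two modalities
diffs = [s - c for s, c in zip(surface, cbct)]
n = len(diffs)

group_mean = sum(diffs) / n  # mean difference (bias)
sd = math.sqrt(sum((d - group_mean) ** 2 for d in diffs) / (n - 1))

# 95% limits of agreement: bias +/- 1.96 SD of the differences
loa = (group_mean - 1.96 * sd, group_mean + 1.96 * sd)

print(f"bias = {group_mean:.3f} cm, LoA = [{loa[0]:.3f}, {loa[1]:.3f}] cm")
```

The per-patient systematic/random decomposition used in the study would additionally group the differences by patient before pooling.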
Jin, Peng; van der Horst, Astrid; de Jong, Rianne; van Hooft, Jeanin E; Kamphuis, Martijn; van Wieringen, Niek; Machiels, Melanie; Bel, Arjan; Hulshof, Maarten C C M; Alderliesten, Tanja
2015-12-01
The aim of this study was to quantify interfractional esophageal tumor position variation using markers and investigate the use of markers for setup verification. Sixty-five markers placed in the tumor volumes of 24 esophageal cancer patients were identified in computed tomography (CT) and follow-up cone-beam CT. For each patient we calculated pairwise distances between markers over time to evaluate geometric tumor volume variation. We then quantified marker displacements relative to bony anatomy and estimated the variation of systematic (Σ) and random errors (σ). During bony anatomy-based setup verification, we visually inspected whether the markers were inside the planning target volume (PTV) and attempted marker-based registration. Minor time trends with substantial fluctuations in pairwise distances implied tissue deformation. Overall, Σ(σ) in the left-right/cranial-caudal/anterior-posterior direction was 2.9(2.4)/4.1(2.4)/2.2(1.8) mm; for the proximal stomach, it was 5.4(4.3)/4.9(3.2)/1.9(2.4) mm. After bony anatomy-based setup correction, all markers were inside the PTV. However, due to large tissue deformation, marker-based registration was not feasible. Generally, the interfractional position variation of esophageal tumors is more pronounced in the cranial-caudal direction and in the proximal stomach. Currently, marker-based setup verification is not feasible for clinical routine use, but markers can facilitate the setup verification by inspecting whether the PTV covers the tumor volume adequately. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
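The systematic (Σ) and random (σ) error components quoted above are conventionally estimated as the standard deviation of per-patient mean displacements and the root-mean-square of per-patient standard deviations, respectively; a minimal sketch under that assumption, with hypothetical displacements:

```python
import math

# Hypothetical cranial-caudal marker displacements (mm), one list per patient.
patients = [
    [3.1, 5.2, 4.0, 6.3],
    [-1.0, 0.5, -2.2, 0.1],
    [7.8, 6.1, 9.0, 5.5],
]

def mean(xs):
    return sum(xs) / len(xs)

def sd(xs):
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

patient_means = [mean(p) for p in patients]  # per-patient systematic component
patient_sds = [sd(p) for p in patients]      # per-patient day-to-day scatter

Sigma = sd(patient_means)                               # population systematic error
sigma = math.sqrt(mean([s ** 2 for s in patient_sds]))  # population random error (RMS of SDs)

print(f"Sigma = {Sigma:.2f} mm, sigma = {sigma:.2f} mm")
```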
Weakly-tunable transmon qubits in a multi-qubit architecture
NASA Astrophysics Data System (ADS)
Hertzberg, Jared; Bronn, Nicholas; Corcoles, Antonio; Brink, Markus; Keefe, George; Takita, Maika; Hutchings, M.; Plourde, B. L. T.; Gambetta, Jay; Chow, Jerry
Quantum error-correction employing a 2D lattice of qubits requires a strong coupling between adjacent qubits and consistently high gate fidelity among them. In such a system, all-microwave cross-resonance gates offer simplicity of setup and operation. However, the relative frequencies of adjacent qubits must be carefully arranged in order to optimize gate rates and eliminate unwanted couplings. We discuss the incorporation of weakly-flux-tunable transmon qubits into such an architecture. Using DC tuning through filtered flux-bias lines, we adjust qubit frequencies while minimizing the effects of flux noise on decoherence.
Tersi, Luca; Barré, Arnaud; Fantozzi, Silvia; Stagni, Rita
2013-03-01
Model-based mono-planar and bi-planar 3D fluoroscopy methods can quantify intact-joint kinematics with different performance/cost trade-offs. The aim of this study was to compare the performances of mono- and bi-planar setups to a marker-based gold standard during dynamic phantom knee acquisitions. Absolute pose errors for in-plane parameters were lower than 0.6 mm or 0.6° for both mono- and bi-planar setups. Mono-planar setups proved critical in quantifying the out-of-plane translation (error < 6.5 mm), and bi-planar setups in quantifying the rotation about the bone's longitudinal axis (error < 1.3°). These errors propagated to joint angles and translations differently depending on the alignment of the anatomical axes and the fluoroscopic reference frames. Internal-external rotation was the least accurate angle with both mono-planar (error < 4.4°) and bi-planar (error < 1.7°) setups, due to bone longitudinal symmetries. Results highlighted that accuracy for mono-planar in-plane pose parameters is comparable to bi-planar, but with halved computational costs, halved segmentation time and halved ionizing radiation dose. Bi-planar analysis better compensated for the out-of-plane uncertainty, which propagates to relative kinematics differently depending on the setup. To take full advantage of bi-planar analysis, the motion task to be investigated should be designed to keep the joint inside the visible volume, introducing constraints with respect to mono-planar analysis.
Baron, Charles A.; Awan, Musaddiq J.; Mohamed, Abdallah S. R.; Akel, Imad; Rosenthal, David I.; Gunn, G. Brandon; Garden, Adam S.; Dyer, Brandon A.; Court, Laurence; Sevak, Parag R; Kocak-Uzel, Esengul; Fuller, Clifton D.
2016-01-01
The larynx may serve as either a target or an organ-at-risk (OAR) in head and neck cancer (HNC) image-guided radiotherapy (IGRT). The objective of this study was to estimate the IGRT parameters required for larynx positional error independent of isocentric alignment and to suggest population-based compensatory margins. Ten HNC patients receiving radiotherapy (RT) with daily CT-on-rails imaging were assessed. Seven landmark points were placed on each daily scan. Taking the most superior anterior point of the C5 vertebra as a reference isocenter for each scan, residual displacement vectors to the other 6 points were calculated post-isocentric alignment. Subsequently, using the first scan as a reference, the magnitudes of the vector differences for all 6 points over the course of treatment were calculated. Residual systematic and random errors, and the necessary compensatory CTV-to-PTV and OAR-to-PRV margins, were calculated using both observational cohort data and a bootstrap-resampled population estimator. The grand mean displacement for all anatomical points was 5.07 mm, with a mean systematic error of 1.1 mm and a mean random setup error of 2.63 mm, while the bootstrapped grand mean displacement across points of interest was 5.09 mm, with a mean systematic error of 1.23 mm and a mean random setup error of 2.61 mm. The required margin for CTV-to-PTV expansion was 4.6 mm for all cohort points, while the bootstrap estimator of the equivalent margin was 4.9 mm. The calculated OAR-to-PRV expansion for the observed residual setup error was 2.7 mm, with a bootstrap-estimated expansion of 2.9 mm. We conclude that interfractional larynx setup error is a significant source of RT setup/delivery error in HNC, whether the larynx is considered a CTV or an OAR. We estimate the need for a uniform expansion of 5 mm to compensate for setup error if the larynx is a target, or 3 mm if the larynx is an OAR, when using a non-laryngeal bony isocenter. PMID:25679151
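The reported margins are numerically consistent with the widely used population margin recipes (CTV-to-PTV: 2.5Σ + 0.7σ, van Herk; OAR-to-PRV: 1.3Σ + 0.5σ, McKenzie), although the abstract does not name them explicitly; a sketch under that assumption, reproducing the cohort numbers:

```python
def ctv_to_ptv(Sigma, sigma):
    # van Herk population recipe: M = 2.5*Sigma + 0.7*sigma
    return 2.5 * Sigma + 0.7 * sigma

def oar_to_prv(Sigma, sigma):
    # McKenzie OAR recipe: M = 1.3*Sigma + 0.5*sigma
    return 1.3 * Sigma + 0.5 * sigma

# Cohort values from the abstract (mm): Sigma = 1.1, sigma = 2.63
print(ctv_to_ptv(1.1, 2.63))   # ~4.6 mm, matching the reported CTV-to-PTV margin
print(oar_to_prv(1.1, 2.63))   # ~2.7 mm, matching the reported OAR-to-PRV margin
print(ctv_to_ptv(1.23, 2.61))  # ~4.9 mm, matching the bootstrap estimate
```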
Optics measurement and correction during beam acceleration in the Relativistic Heavy Ion Collider
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, C.; Marusic, A.; Minty, M.
2014-09-09
To minimize operational complexities, setup of collisions in high-energy circular colliders typically involves acceleration with near-constant β-functions followed by application of strong focusing quadrupoles at the interaction points (IPs) for the final beta-squeeze. At the Relativistic Heavy Ion Collider (RHIC), beam acceleration and optics squeeze are performed simultaneously. In the past, beam optics correction at RHIC has taken place at injection and at final energy, with some interpolation of corrections into the acceleration cycle. Recent measurements of the beam optics during acceleration and squeeze have evidenced significant beta-beats which, if corrected, could minimize undesirable emittance dilutions and maximize the spin polarization of polarized proton beams by avoidance of higher-order multipole fields sampled by particles within the bunch. In this report the methodology now operational at RHIC for beam optics corrections during acceleration with simultaneous beta-squeeze will be presented together with measurements which conclusively demonstrate the superior beam control. As a valuable by-product, the corrections have minimized the beta-beat at the profile monitors, so reducing the dominant error in, and providing more precise measurements of, the evolution of the beam emittances during acceleration.
Vidovic, Luka; Majaron, Boris
2014-02-01
Diffuse reflectance spectra (DRS) of biological samples are commonly measured using an integrating sphere (IS). To account for the incident light spectrum, measurement begins by placing a highly reflective white standard against the IS sample opening and collecting the reflected light. After replacing the white standard with the test sample of interest, DRS of the latter is determined as the ratio of the two values at each involved wavelength. However, such a substitution may alter the fluence rate inside the IS. This leads to distortion of measured DRS, which is known as single-beam substitution error (SBSE). Barring the use of more complex experimental setups, the literature states that only approximate corrections of the SBSE are possible, e.g., by using look-up tables generated with calibrated low-reflectivity standards. We present a practical method for elimination of SBSE when using IS equipped with an additional reference port. Two additional measurements performed at this port enable a rigorous elimination of SBSE. Our experimental characterization of SBSE is replicated by theoretical derivation. This offers an alternative possibility of computational removal of SBSE based on advance characterization of a specific DRS setup. The influence of SBSE on quantitative analysis of DRS is illustrated in one application example.
NASA Astrophysics Data System (ADS)
Li, Chengshuai; Chen, Shichao; Klemba, Michael; Zhu, Yizheng
2016-09-01
A dual-modality birefringence/phase imaging system is presented. The system features a crystal retarder that provides polarization mixing and generates two interferometric carrier waves in a single signal spectrum. The retardation and orientation of sample birefringence can then be measured simultaneously based on spectral multiplexing interferometry. Further, with the addition of a Nomarski prism, the same setup can be used for quantitative differential interference contrast (DIC) imaging. Sample phase can then be obtained with two-dimensional integration. In addition, birefringence-induced phase error can be corrected using the birefringence data. This dual-modality approach is analyzed theoretically with Jones calculus and validated experimentally with malaria-infected red blood cells. The system generates not only corrected DIC and phase images, but a birefringence map that highlights the distribution of hemozoin crystals.
Correction for Thermal EMFs in Thermocouple Feedthroughs
NASA Technical Reports Server (NTRS)
Ziemke, Robert A.
2006-01-01
A straightforward measurement technique provides for correction of thermal-electromotive-force (thermal-EMF) errors introduced by temperature gradients along the pins of non-thermocouple-alloy hermetic feedthrough connectors for thermocouple extension wires that must pass through bulkheads. This technique is an alternative to the traditional technique in which the thermal-EMF errors are eliminated by use of custom-made multipin hermetic feedthrough connectors that contain pins made of the same alloys as those of the thermocouple extension wires. One disadvantage of the traditional technique is that it is expensive and time-consuming to fabricate multipin custom thermocouple connectors. In addition, the thermocouple-alloy pins in these connectors tend to corrode easily and/or tend to be less rugged compared to the non-thermocouple-alloy pins of ordinary connectors. As the number of thermocouples (and thus pins) is increased in a given setup, the magnitude of these disadvantages increases accordingly. The present technique is implemented by means of a little additional hardware and software, the cost of which is more than offset by the savings incurred through the use of ordinary instead of thermocouple connectors. The figure schematically depicts a typical measurement setup to which the technique is applied. The additional hardware includes an isothermal block (made of copper) instrumented with a reference thermocouple and a compensation thermocouple. The reference thermocouple is connected to an external data-acquisition system (DAS) through a two-pin thermocouple-alloy hermetic feedthrough connector, but this is the only such connector in the apparatus. The compensation thermocouple is connected to the DAS through two pins of the same ordinary multipin connector that connects the measurement thermocouples to the DAS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jani, S; Low, D; Lamb, J
2015-06-15
Purpose: To develop a system that can automatically detect patient identification and positioning errors using 3D computed tomography (CT) setup images and kilovoltage CT (kVCT) planning images. Methods: Planning kVCT images were collected for head-and-neck (H&N), pelvis, and spine treatments with corresponding 3D cone-beam CT (CBCT) and megavoltage CT (MVCT) setup images from TrueBeam and TomoTherapy units, respectively. Patient identification errors were simulated by registering setup and planning images from different patients. Positioning errors were simulated by misaligning the setup image by 1 cm to 5 cm in the six anatomical directions for H&N and pelvis patients. Misalignments for spine treatments were simulated by registering the setup image to adjacent vertebral bodies on the planning kVCT. A body contour of the setup image was used as an initial mask for image comparison. Images were pre-processed by image filtering and air voxel thresholding, and image pairs were assessed using commonly-used image similarity metrics as well as custom-designed metrics. A linear discriminant analysis classifier was trained and tested on the datasets, and misclassification error (MCE), sensitivity, and specificity estimates were generated using 10-fold cross validation. Results: Our workflow produced MCE estimates of 0.7%, 1.7%, and 0% for H&N, pelvis, and spine TomoTherapy images, respectively. Sensitivities and specificities ranged from 98.0% to 100%. MCEs of 3.5%, 2.3%, and 2.1% were obtained for TrueBeam images of the above sites, respectively, with sensitivity and specificity estimates between 96.2% and 98.4%. MCEs for 1 cm H&N/pelvis misalignments were 1.3%/5.1% and 9.1%/8.6% for TomoTherapy and TrueBeam images, respectively. 2 cm MCE estimates were 0.4%/1.6% and 3.1%/3.2%, respectively. Vertebral misalignment MCEs were 4.8% and 4.9% for TomoTherapy and TrueBeam images, respectively.
Conclusion: Patient identification and gross misalignment errors can be robustly and automatically detected using 3D setup images of two imaging modalities across three commonly-treated anatomical sites.
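The misclassification error, sensitivity, and specificity reported above all derive from a confusion matrix; a minimal sketch with hypothetical counts, not the study's data:

```python
def classifier_metrics(tp, tn, fp, fn):
    """Misclassification error, sensitivity, and specificity from confusion counts."""
    total = tp + tn + fp + fn
    mce = (fp + fn) / total        # fraction of all cases classified incorrectly
    sensitivity = tp / (tp + fn)   # fraction of true errors that were flagged
    specificity = tn / (tn + fp)   # fraction of correct setups that were passed
    return mce, sensitivity, specificity

# Hypothetical: 98 true errors flagged, 2 missed, 197 clean setups passed, 3 false alarms
mce, sens, spec = classifier_metrics(tp=98, tn=197, fp=3, fn=2)
print(f"MCE={mce:.1%}, sensitivity={sens:.1%}, specificity={spec:.1%}")
```

In cross-validated form, these metrics would be averaged over the held-out folds.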
Vitikainen, Anne-Mari; Mäkelä, Elina; Lioumis, Pantelis; Jousmäki, Veikko; Mäkelä, Jyrki P
2015-09-30
The use of navigated repetitive transcranial magnetic stimulation (rTMS) in mapping of speech-related brain areas has recently been shown to be useful in the preoperative workflow of epilepsy and tumor patients. However, substantial inter- and intraobserver variability and non-optimal replicability of the rTMS results have been reported, and a need for additional development of the methodology is recognized. In TMS motor cortex mappings the evoked responses can be quantitatively monitored by electromyographic recordings; however, no such easily available setup exists for speech mappings. We present an accelerometer-based setup for detection of vocalization-related larynx vibrations, combined with an automatic routine for voice onset detection, for rTMS speech mapping using a naming task. The results produced by the automatic routine were compared with manually reviewed video-recordings. The new method was applied in routine navigated rTMS speech mapping for 12 consecutive patients during preoperative workup for epilepsy or tumor surgery. The automatic routine correctly detected 96% of the voice onsets, resulting in 96% sensitivity and 71% specificity. The majority (63%) of the misdetections were related to visible throat movements, extra voices before the response, or delayed naming of the previous stimuli. The no-response errors were correctly detected in 88% of events. The proposed setup for automatic detection of voice onsets provides quantitative additional data for analysis of the rTMS-induced speech response modifications. The objectively defined speech response latencies increase the repeatability, reliability and stratification of the rTMS results. Copyright © 2015 Elsevier B.V. All rights reserved.
Guidance Of A Mobile Robot Using An Omnidirectional Vision Navigation System
NASA Astrophysics Data System (ADS)
Oh, Sung J.; Hall, Ernest L.
1987-01-01
Navigation and visual guidance are key topics in the design of a mobile robot. Omnidirectional vision using a very wide angle or fisheye lens provides a hemispherical view at a single instant that permits target location without mechanical scanning. The inherent image distortion with this view and the numerical errors accumulated from vision components can be corrected to provide accurate position determination for navigation and path control. The purpose of this paper is to present the experimental results and analyses of the imaging characteristics of the omnivision system including the design of robot-oriented experiments and the calibration of raw results. Errors less than one picture element on each axis were observed by testing the accuracy and repeatability of the experimental setup and the alignment between the robot and the sensor. Similar results were obtained for four different locations using corrected results of the linearity test between zenith angle and image location. Angular error of less than one degree and radial error of less than one Y picture element were observed at moderate relative speed. The significance of this work is that the experimental information and the test of coordinated operation of the equipment provide a greater understanding of the dynamic omnivision system characteristics, as well as insight into the evaluation and improvement of the prototype sensor for a mobile robot. Also, the calibration of the sensor is important, since the results provide a cornerstone for future developments. This sensor system is currently being developed for a robot lawn mower.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prabhakar, Ramachandran; Department of Nuclear Medicine, All India Institute of Medical Sciences, New Delhi; Department of Radiology, All India Institute of Medical Sciences, New Delhi
Setup error plays a significant role in the final treatment outcome in radiotherapy. The effect of setup error on the planning target volume (PTV) and surrounding critical structures has been studied, and the maximum allowed tolerance in setup error with minimal complications to the surrounding critical structures and acceptable tumor control probability is determined. Twelve patients were selected for this study after breast conservation surgery, 8 with right-sided and 4 with left-sided breast cancer. Tangential fields were placed on the 3-dimensional computed tomography (3D-CT) dataset by an isocentric technique, and the doses to the PTV, ipsilateral lung (IL), contralateral lung (CLL), contralateral breast (CLB), heart, and liver were then computed from dose-volume histograms (DVHs). The planning isocenter was shifted by 3 and 10 mm in all 3 directions (X, Y, Z) to simulate the setup error encountered during treatment. Dosimetric studies were performed for each patient for the PTV according to ICRU 50 guidelines: mean doses to the PTV, IL, CLL, heart, CLB, and liver; the percentage of lung volume that received a dose of 20 Gy or more (V20); the percentage of heart volume that received a dose of 30 Gy or more (V30); and the volume of liver that received a dose of 50 Gy or more (V50) were calculated for all of the above-mentioned isocenter shifts and compared to the results with zero isocenter shift. Simulation of the different isocenter shifts in all 3 directions showed that shifts along the posterior direction had a very significant effect on the dose to the heart, IL, CLL, and CLB, followed by the lateral direction. The setup error in the isocenter should be kept strictly below 3 mm. The study shows that isocenter verification in the case of tangential fields should be performed to reduce future complications to adjacent normal tissues.
Boughalia, A; Marcie, S; Fellah, M; Chami, S; Mekki, F
2015-06-01
The aim of this study is to assess and quantify patients' set-up errors using an electronic portal imaging device and to evaluate their dosimetric and biological impact in terms of generalized equivalent uniform dose (gEUD) on predictive models, such as the tumour control probability (TCP) and the normal tissue complication probability (NTCP). 20 patients treated for nasopharyngeal cancer were enrolled in the radiotherapy-oncology department of HCA. Systematic and random errors were quantified. The dosimetric and biological impact of these set-up errors on the target volume and organ-at-risk (OAR) coverage was assessed using calculation of dose-volume histograms, gEUD, TCP and NTCP. For this purpose, an in-house software was developed and used. The standard deviations (1 SD) of the systematic and random set-up errors were calculated for the lateral and subclavicular fields and gave the following results: ∑ = 0.63 ± (0.42) mm and σ = 3.75 ± (0.79) mm, respectively. Thus a planning organ at risk volume (PRV) margin of 3 mm was defined around the OARs, and a 5-mm margin was used around the clinical target volume. The gEUD, TCP and NTCP calculations obtained with and without set-up errors showed increased values for the tumour, where ΔgEUD (tumour) = 1.94% Gy (p = 0.00721) and ΔTCP = 2.03%. The toxicity of the OARs was quantified using gEUD and NTCP. The values of ΔgEUD (OARs) vary from 0.78% to 5.95% in the case of the brainstem and the optic chiasm, respectively. The corresponding ΔNTCP varies from 0.15% to 0.53%, respectively. The quantification of set-up errors has a dosimetric and biological impact on the tumour and on the OARs. The developed in-house software using the concept of the gEUD, TCP and NTCP biological models has been successfully used in this study. It can also be used to optimize the treatment plans established for our patients. The gEUD, TCP and NTCP may be more suitable tools to assess treatment plans before treating patients.
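The gEUD used here is conventionally defined over the differential DVH as gEUD = (Σᵢ vᵢ Dᵢᵃ)^(1/a), where vᵢ is the fractional volume receiving dose Dᵢ and a is a tissue-specific exponent; a minimal sketch with an illustrative DVH, not patient data:

```python
def gEUD(dvh, a):
    """gEUD = (sum_i v_i * D_i**a) ** (1/a).

    dvh: list of (dose_Gy, volume) bins of a differential DVH;
    a:   tissue-specific exponent (a < 0 for tumours, a > 0 for serial OARs).
    """
    total_v = sum(v for _, v in dvh)
    return sum((v / total_v) * d ** a for d, v in dvh) ** (1.0 / a)

# Illustrative differential DVH: (dose in Gy, fractional volume)
dvh = [(60.0, 0.2), (65.0, 0.5), (70.0, 0.3)]

print(gEUD(dvh, a=-10))  # tumour-like: negative a penalizes cold spots
print(gEUD(dvh, a=1))    # a = 1 reduces to the mean dose
```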
Measurement of electromagnetic tracking error in a navigated breast surgery setup
NASA Astrophysics Data System (ADS)
Harish, Vinyas; Baksh, Aidan; Ungi, Tamas; Lasso, Andras; Baum, Zachary; Gauvin, Gabrielle; Engel, Jay; Rudan, John; Fichtinger, Gabor
2016-03-01
PURPOSE: The measurement of tracking error is crucial to ensure the safety and feasibility of electromagnetically tracked, image-guided procedures. Measurement should occur in a clinical environment because electromagnetic field distortion depends on positioning relative to the field generator and metal objects. However, we could not find an accessible and open-source system for calibration, error measurement, and visualization. We developed such a system and tested it in a navigated breast surgery setup. METHODS: A pointer tool was designed for concurrent electromagnetic and optical tracking. Software modules were developed for automatic calibration of the measurement system, real-time error visualization, and analysis. The system was taken to an operating room to test for field distortion in a navigated breast surgery setup. Positional and rotational electromagnetic tracking errors were then calculated using optical tracking as a ground truth. RESULTS: Our system is quick to set up and can be rapidly deployed. The process from calibration to visualization also takes only a few minutes. Field distortion was measured in the presence of various surgical equipment. Positional and rotational error in a clean field was approximately 0.90 mm and 0.31°. The presence of a surgical table, an electrosurgical cautery, and an anesthesia machine increased the error by up to a few tenths of a millimeter and a tenth of a degree. CONCLUSION: In a navigated breast surgery setup, measurement and visualization of tracking error defines a safe working area in the presence of surgical equipment. Our system is available as an extension for the open-source 3D Slicer platform.
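Against the optical ground truth, positional error reduces to a Euclidean distance between paired positions, and rotational error to the angle of the relative rotation between the two poses; a sketch with hypothetical values, not the study's measurements:

```python
import math

def positional_error(p_em, p_opt):
    # Euclidean distance (mm) between electromagnetic and optical positions
    return math.dist(p_em, p_opt)

def rotational_error(R_rel):
    # Angle (degrees) of a relative 3x3 rotation matrix: theta = acos((trace - 1) / 2)
    tr = R_rel[0][0] + R_rel[1][1] + R_rel[2][2]
    return math.degrees(math.acos(max(-1.0, min(1.0, (tr - 1.0) / 2.0))))

# Hypothetical paired position measurement (mm)
p_em, p_opt = (10.2, -3.1, 55.0), (10.9, -3.5, 54.8)

# Hypothetical relative rotation: 0.31 degrees about the z axis
t = math.radians(0.31)
R_rel = [[math.cos(t), -math.sin(t), 0.0],
         [math.sin(t),  math.cos(t), 0.0],
         [0.0, 0.0, 1.0]]

print(positional_error(p_em, p_opt))
print(rotational_error(R_rel))
```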
SU-E-J-34: Setup Accuracy in Spine SBRT Using CBCT 6D Image Guidance in Comparison with 6D ExacTrac
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, Z; Yip, S; Lewis, J
2015-06-15
Purpose: Volumetric information of the spine captured on CBCT can potentially improve the accuracy of spine SBRT setup, which has commonly been performed through 2D radiographs. This work evaluates the setup accuracy in spine SBRT using 6D CBCT image guidance that recently became available on Varian systems. Methods: ExacTrac radiographs have been commonly used for spine SBRT setup. The setup process involves first positioning patients with lasers, followed by localization imaging, registration, and repositioning. Verification images are then taken, providing the residual errors (ExacTracRE) before beam-on. CBCT verification is also acquired in our institute. The availability of both ExacTrac and CBCT verifications allows a comparison study. 41 verification CBCTs of 16 patients were retrospectively registered with the planning CT enabling 6D corrections, giving CBCT residual errors (CBCTRE), which were compared with ExacTracRE. Results: The RMS discrepancies between CBCTRE and ExacTracRE are 1.70 mm, 1.66 mm, and 1.56 mm in the vertical, longitudinal, and lateral directions and 0.27°, 0.49°, and 0.35° in yaw, roll, and pitch, respectively. The corresponding mean discrepancies (and standard deviations) are 0.62 mm (1.60 mm), 0.00 mm (1.68 mm), −0.80 mm (1.36 mm) and 0.05° (0.58°), 0.11° (0.48°), −0.16° (0.32°). Of the 41 CBCTs, 17 had high-Z surgical implants. No significant difference in ExacTrac-to-CBCT discrepancy was observed between patients with and without the implants. Conclusion: Multiple factors can contribute to the discrepancies between CBCT and ExacTrac: 1) the imaging iso-centers of the two systems, while calibrated to coincide, can be different; 2) the ROI used for registration can be different, especially if ribs were included in ExacTrac images; 3) small patient motion can occur between the two verification image acquisitions; 4) the algorithms differ between CBCT (volumetric) and ExacTrac (radiographic) registrations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, B; Maquilan, G; Anders, M
Purpose: Full face and neck thermoplastic masks provide standard-of-care immobilization for patients receiving H&N IMRT. However, these masks are uncomfortable and increase skin dose. The purpose of this pilot study was to investigate the feasibility and setup accuracy of open face and neck mask immobilization with OIG. Methods: Ten patients were consented and enrolled to this IRB-approved protocol. Patients were immobilized with open masks securing only the forehead and chin. Standard IMRT to 60–70 Gy in 30 fractions was delivered in all cases. Patient simulation information, including isocenter location and CT skin contours, was imported to a commercial OIG system. On the first day of treatment, patients were initially set up to surface markings and then OIG referenced to face and neck skin regions of interest (ROI) localized on simulation CT images, followed by in-room CBCT. CBCTs were acquired at least weekly, while planar OBI was acquired on the days without CBCT. Following 6D robotic couch correction with kV imaging, a new optical real-time surface image was acquired to track intrafraction motion and to serve as a reference surface for setup at the next treatment fraction. Therapists manually recorded total treatment time as well as couch shifts based on kV imaging. Intrafractional ROI motion tracking was automatically recorded. Results: Setup accuracy of OIG was compared with CBCT results. The setup error based on OIG was represented as a 6D shift (vertical/longitudinal/lateral/rotation/pitch/roll). Mean error values were −0.70 ± 3.04 mm, −0.69 ± 2.77 mm, 0.33 ± 2.67 mm, −0.14 ± 0.94°, −0.15 ± 1.10°, and 0.12 ± 0.82°, respectively, for the cohort. Average treatment time was 24.1 ± 9.2 minutes, comparable to standard immobilization. The amplitude of intrafractional ROI motion was 0.69 ± 0.36 mm, driven primarily by respiratory neck motion. Conclusion: OIG can potentially provide accurate setup and treatment tracking for open face and neck immobilization.
Study accrual and patient/provider satisfaction survey collection remain ongoing. This study is supported by VisionRT, Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, G; Qin, A; Zhang, J
Purpose: With the implementation of cone-beam computed tomography (CBCT) in proton treatment, we introduce a quick and effective tool to verify the patient's daily setup and geometry changes based on the Water-Equivalent-Thickness Projection Image (WETPI) from individual beam angles. Methods: A bilateral head and neck cancer (HNC) patient previously treated via VMAT was used in this study. The patient received 35 daily CBCTs during the whole treatment, and there was no significant weight change. The CT numbers of the daily CBCTs were corrected by mapping the CT numbers from the simulation CT via Deformable Image Registration (DIR). An IMPT plan was generated using 4-field IMPT robust optimization (3.5% range and 3 mm setup uncertainties) with beam angles of 60, 135, 300, and 225 degrees. WETPI within the CTV through all beam directions was calculated. The 3%/3mm gamma index (GI) was used to provide a quantitative comparison between the initial sim-CT and the mapped daily CBCT. To simulate an extreme case involving human error, a couch bar was manually inserted in front of the 225-degree beam angle on one CBCT. WETPI was compared in this scenario. Results: The average GI passing rate for this patient across beam angles throughout the treatment course was 91.5% ± 8.6%. In the cases with low passing rates, it was found that differences in shoulder and neck angle, as well as the head rest, often caused major deviations. This indicates that the biggest challenge in treating HNC is the setup around the neck area. In the extreme case where a couch bar is accidentally inserted in the beam line, the GI passing rate drops from 95% to 52%. Conclusion: WETPI and quantitative gamma analysis give clinicians, therapists, and physicists quick feedback on the patient's setup accuracy or geometry changes. The tool could effectively avoid some human errors. Furthermore, this tool could potentially be used as an initial signal to trigger plan adaptation.
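The 3%/3 mm gamma comparison can be illustrated in 1D: each reference point takes the minimum combined dose-difference/distance metric over the evaluated profile, and the pass rate is the fraction of points with γ ≤ 1. A brute-force sketch with illustrative profiles, assuming global dose normalization:

```python
import math

def gamma_1d(ref, ev, spacing_mm, dd=0.03, dta_mm=3.0):
    """Global-gamma pass rate for two equally sampled 1D profiles."""
    d_norm = dd * max(ref)  # global dose-difference criterion (3% of max dose)
    passed = 0
    for i, r in enumerate(ref):
        # gamma^2 at point i: minimum over all evaluated points j
        g2_min = min(
            ((e - r) / d_norm) ** 2 + ((j - i) * spacing_mm / dta_mm) ** 2
            for j, e in enumerate(ev)
        )
        if math.sqrt(g2_min) <= 1.0:
            passed += 1
    return passed / len(ref)

# Illustrative reference and evaluated profiles (arbitrary dose units, 1 mm spacing)
ref = [10.0, 20.0, 50.0, 80.0, 100.0, 80.0, 50.0, 20.0, 10.0]
ev = [10.5, 21.0, 49.0, 82.0, 99.0, 78.0, 51.0, 21.0, 10.0]

print(f"pass rate = {gamma_1d(ref, ev, spacing_mm=1.0):.0%}")
```

A production implementation would interpolate the evaluated profile and work on the 2D WETPI maps, but the per-point minimization is the same.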
SU-E-J-245: Is Off-Line Adaptive Radiotherapy Sufficient for Head and Neck Cancer with IGRT?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Z; Cleveland Clinic, Cleveland, OH; Shang, Q
2014-06-01
Purpose: Radiation doses delivered to patients with head and neck cancer (HN) may deviate from the planned doses because of variations in patient setup and anatomy. This study evaluated whether off-line adaptive radiotherapy (ART) is sufficient. Methods: Ten HN patients, who received IMRT under daily imaging guidance using CT-on-rail/KV-CBCT, were randomly selected for this study. For each patient, the daily treatment setup was corrected in translational directions only. Sixty weekly verification CTs were retrospectively analyzed. On these weekly verification CTs, the tumor volumes and OAR contours were manually delineated by a physician. With the treatment iso-center placed on the verification CTs according to the recorded clinical shifts, the treatment beams from the original IMRT plans were then applied to these CTs to calculate the delivered doses. The electron densities of the planning CTs and weekly CTs were overridden to 1 g/cm³. Results: Among 60 fractions, D99 of the CTVs decreased by more than 5% of the planned doses in 4 fractions. The maximum dose to the spinal cord exceeded the planned values by 10% in 2 fractions. A close examination indicated that the dose discrepancy in these 6 fractions was due to patient rotations, especially shoulder rotations. After registering these 6 CTs with the planning CT allowing six degrees of freedom, the maximum rotations around the 3 axes were > 1.5° for these fractions. With rotation setup errors removed, 4 out of 10 patients still required off-line ART to accommodate anatomical changes. Conclusion: Significant shoulder rotations were observed in 10% of fractions, requiring patient re-setup. Off-line ART alone is not sufficient to correct for random variations of patient position, although ART is effective in adapting to patients' gradual anatomic changes. Re-setup or on-line ART may be considered for patients with large deviations detected early by daily IGRT images.
The study is supported in part by Siemens Medical Solutions.
NASA Astrophysics Data System (ADS)
Mauder, M.; Huq, S.; De Roo, F.; Foken, T.; Manhart, M.; Schmid, H. P. E.
2017-12-01
The Campbell CSAT3 sonic anemometer is one of the most widely used instruments for eddy-covariance measurement. However, conflicting estimates for the probe-induced flow distortion error of this instrument have been reported recently, and those error estimates range between 3% and 14% for the measurement of vertical velocity fluctuations. This large discrepancy between the different studies can probably be attributed to the different experimental approaches applied. In order to overcome the limitations of both field intercomparison experiments and wind tunnel experiments, we propose a new approach that relies on virtual measurements in a large-eddy simulation (LES) environment. In our experimental set-up, we generate horizontal and vertical velocity fluctuations at frequencies that typically dominate the turbulence spectra of the surface layer. The probe-induced flow distortion error of a CSAT3 is then quantified by this numerical wind tunnel approach while the statistics of the prescribed inflow signal are taken as reference or etalon. The resulting relative error is found to range from 3% to 7% and from 1% to 3% for the standard deviation of the vertical and the horizontal velocity component, respectively, depending on the orientation of the CSAT3 in the flow field. We further demonstrate that these errors are independent of the frequency of fluctuations at the inflow of the simulation. The analytical corrections proposed by Kaimal et al. (Proc Dyn Flow Conf, 551-565, 1978) and Horst et al. (Boundary-Layer Meteorol, 155, 371-395, 2015) are compared against our simulated results, and we find that they indeed reduce the error by up to three percentage points. However, these corrections fail to reproduce the azimuth-dependence of the error that we observe. Moreover, we investigate the general Reynolds number dependence of the flow distortion error by more detailed idealized simulations.
High-resolution interferometric microscope for traceable dimensional nanometrology in Brazil
NASA Astrophysics Data System (ADS)
Malinovski, I.; França, R. S.; Lima, M. S.; Bessa, M. S.; Silva, C. R.; Couceiro, I. B.
2016-07-01
The double color interferometric microscope is developed for step height standards nanometrology traceable to meter definition via primary wavelength laser standards. The setup is based on two stabilized lasers to provide traceable measurements of highest possible resolution down to the physical limits of the optical instruments in sub-nanometer to micrometer range of the heights. The wavelength reference is He-Ne 633 nm stabilized laser, the secondary source is Blue-Green 488 nm grating laser diode. Accurate fringe portion is measured by modulated phase-shift technique combined with imaging interferometry and Fourier processing. Self calibrating methods are developed to correct systematic interferometric errors.
Ergonomics in the operating room: protecting the surgeon.
Rosenblatt, Peter L; McKinney, Jessica; Adams, Sonia R
2013-01-01
To review elements of an ergonomic operating room environment and describe common ergonomic errors in surgeon posture during laparoscopic and robotic surgery. Descriptive video based on clinical experience and a review of the literature (Canadian Task Force classification III). Community teaching hospital affiliated with a major teaching hospital. Gynecologic surgeons. Demonstration of surgical ergonomic principles and common errors in surgical ergonomics by a physical therapist and surgeon. The physical nature of surgery necessitates awareness of ergonomic principles. The literature has identified ergonomic awareness to be grossly lacking among practicing surgeons, and video has not been documented as a teaching tool for this population. Taking this into account, we created a video that demonstrates proper positioning of monitors and equipment, and incorrect and correct ergonomic positions during surgery. Also presented are 3 common ergonomic errors in surgeon posture: forward head position, improper shoulder elevation, and pelvic girdle asymmetry. Postural reset and motion strategies are demonstrated to help the surgeon learn techniques to counterbalance the sustained and awkward positions common during surgery that lead to muscle fatigue, pain, and degenerative changes. Correct ergonomics is a learned and practiced behavior. We believe that video is a useful way to facilitate improvement in ergonomic behaviors. We suggest that consideration of operating room setup, proper posture, and practice of postural resets are necessary components for a longer, healthier, and pain-free surgical career. Copyright © 2013 AAGL. Published by Elsevier Inc. All rights reserved.
A Quantum Non-Demolition Parity measurement in a mixed-species trapped-ion quantum processor
NASA Astrophysics Data System (ADS)
Marinelli, Matteo; Negnevitsky, Vlad; Lo, Hsiang-Yu; Flühmann, Christa; Mehta, Karan; Home, Jonathan
2017-04-01
Quantum non-demolition measurements of multi-qubit systems are an important tool in quantum information processing, in particular for syndrome extraction in quantum error correction. We have recently demonstrated a protocol for quantum non-demolition measurement of the parity of two beryllium ions by detection of a co-trapped calcium ion. The measurement requires a sequence of quantum gates between the three ions, using mixed-species gates between beryllium hyperfine qubits and a calcium optical qubit. Our work takes place in a multi-zone segmented trap setup in which we have demonstrated high fidelity control of both species and multi-well ion shuttling. The advantage of using two species of ion is that we can individually manipulate and read out the state of each ion species without disturbing the internal state of the other. The methods demonstrated here can be used for quantum error correcting codes as well as quantum metrology and are key ingredients for realizing a hybrid universal quantum computer based on trapped ions. Mixed-species control may also enable the investigation of new avenues in quantum simulation and quantum state control.
Improved remote gaze estimation using corneal reflection-adaptive geometric transforms
NASA Astrophysics Data System (ADS)
Ma, Chunfei; Baek, Seung-Jin; Choi, Kang-A.; Ko, Sung-Jea
2014-05-01
Recently, the remote gaze estimation (RGE) technique has been widely applied to consumer devices as a more natural interface. In general, the conventional RGE method estimates a user's point of gaze using a geometric transform, which represents the relationship between several infrared (IR) light sources and their corresponding corneal reflections (CRs) in the eye image. Among various methods, the homography normalization (HN) method achieves state-of-the-art performance. However, the geometric transform of the HN method requiring four CRs is infeasible for the case when fewer than four CRs are available. To solve this problem, this paper proposes a new RGE method based on three alternative geometric transforms, which are adaptive to the number of CRs. Unlike the HN method, the proposed method not only can operate with two or three CRs, but can also provide superior accuracy. To further enhance the performance, an effective error correction method is also proposed. By combining the introduced transforms with the error-correction method, the proposed method not only provides high accuracy and robustness for gaze estimation, but also allows for a more flexible system setup with a different number of IR light sources. Experimental results demonstrate the effectiveness of the proposed method.
Irradiation setup at the U-120M cyclotron facility
NASA Astrophysics Data System (ADS)
Křížek, F.; Ferencei, J.; Matlocha, T.; Pospíšil, J.; Príbeli, P.; Raskina, V.; Isakov, A.; Štursa, J.; Vaňát, T.; Vysoká, K.
2018-06-01
This paper describes the parameters of the proton beams provided by the U-120M cyclotron and the related irradiation setup at the open-access irradiation facility of the Nuclear Physics Institute of the Czech Academy of Sciences. The facility is suitable for testing the radiation hardness of various electronic components. The use of the setup is illustrated by a measurement of the rate of errors caused by Single Event Transients in an SRAM-based Xilinx XC3S200 FPGA. This measurement provides an estimate of the possible occurrence of Single Event Transients. The data suggest that the variation of the Single Event Effect error rate across different clock phase shifts is not significant enough to use clock phase alignment with the beam as a fault mitigation technique.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, Elizabeth S.; Prosnitz, Robert G.; Yu Xiaoli
2006-11-15
Purpose: The aim of this study was to assess the impact of patient-specific factors, left ventricle (LV) volume, and treatment set-up errors on the rate of perfusion defects 6 to 60 months post-radiation therapy (RT) in patients receiving tangential RT for left-sided breast cancer. Methods and Materials: Between 1998 and 2005, a total of 153 patients were enrolled onto an institutional review board-approved prospective study and had pre- and serial post-RT (6-60 months) cardiac perfusion scans to assess for perfusion defects. Of the patients, 108 had normal pre-RT perfusion scans and available follow-up data. The impact of patient-specific factors on the rate of perfusion defects was assessed at various time points using univariate and multivariate analysis. The impact of set-up errors on the rate of perfusion defects was also analyzed using a one-tailed Fisher's exact test. Results: Consistent with our prior results, the volume of LV in the RT field was the most significant predictor of perfusion defects on both univariate (p = 0.0005 to 0.0058) and multivariate analysis (p = 0.0026 to 0.0029). Body mass index (BMI) was the only significant patient-specific factor on both univariate (p = 0.0005 to 0.022) and multivariate analysis (p = 0.0091 to 0.05). In patients with very small volumes of LV in the planned RT fields, the rate of perfusion defects was significantly higher when the fields were set up 'too deep' (83% vs. 30%, p = 0.059). The frequency of deep set-up errors was significantly higher among patients with BMI ≥25 kg/m² compared with patients of normal weight (47% vs. 28%, p = 0.068). Conclusions: BMI ≥25 kg/m² may be a significant risk factor for cardiac toxicity after RT for left-sided breast cancer, possibly because of more frequent deep set-up errors resulting in the inclusion of additional heart in the RT fields.
Further study is necessary to better understand the impact of patient-specific factors and set-up errors on the development of RT-induced perfusion defects.
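The one-tailed Fisher's exact test used in the abstract above, for example to compare defect rates between deep and non-deep set-ups, can be sketched as a hypergeometric upper-tail sum. The table values in the test are illustrative, not the study's data.

```python
from math import comb

def fisher_exact_one_tailed(a, b, c, d):
    """One-tailed (upper) Fisher's exact p-value for the 2x2 table
    [[a, b], [c, d]]: P(X >= a) under the hypergeometric null, where X is
    the count in the upper-left cell with the margins held fixed."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)
    p = 0
    for x in range(a, min(row1, col1) + 1):
        p += comb(row1, x) * comb(n - row1, col1 - x)
    return p / denom
```

For a maximally unbalanced 5-vs-5 table the tail contains a single arrangement, so the p-value is 1/C(10,5).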
Marcie, S; Fellah, M; Chami, S; Mekki, F
2015-01-01
Objective: The aim of this study is to assess and quantify patients' set-up errors using an electronic portal imaging device and to evaluate their dosimetric and biological impact in terms of generalized equivalent uniform dose (gEUD) on predictive models, such as the tumour control probability (TCP) and the normal tissue complication probability (NTCP). Methods: 20 patients treated for nasopharyngeal cancer were enrolled in the radiotherapy–oncology department of HCA. Systematic and random errors were quantified. The dosimetric and biological impact of these set-up errors on the target volume and organ-at-risk (OAR) coverage was assessed using calculation of dose–volume histograms, gEUD, TCP and NTCP. For this purpose, in-house software was developed and used. Results: The standard deviations (1 SD) of the systematic and random set-up errors were calculated for the lateral and subclavicular fields, giving Σ = 0.63 ± 0.42 mm and σ = 3.75 ± 0.79 mm, respectively. Thus a planning organ at risk volume (PRV) margin of 3 mm was defined around the OARs, and a 5-mm margin was used around the clinical target volume. The gEUD, TCP and NTCP calculations obtained with and without set-up errors showed increased values for the tumour, where ΔgEUD (tumour) = 1.94% Gy (p = 0.00721) and ΔTCP = 2.03%. The toxicity of the OARs was quantified using gEUD and NTCP. The values of ΔgEUD (OARs) vary from 0.78% to 5.95% in the case of the brainstem and the optic chiasm, respectively. The corresponding ΔNTCP varies from 0.15% to 0.53%, respectively. Conclusion: The quantification of set-up errors has a dosimetric and biological impact on the tumour and on the OARs. The in-house software developed using the gEUD, TCP and NTCP biological models was successfully used in this study. It can also be used to optimize the treatment plans established for our patients.
Advances in knowledge: The gEUD, TCP and NTCP may be more suitable tools to assess the treatment plans before treating the patients. PMID:25882689
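The gEUD concept used in the abstract above is, for a differential DVH, a power mean of the voxel doses. This minimal sketch assumes fractional bin volumes and a tissue-specific parameter `a`; it is not the authors' in-house software, and `a = 1` recovers the mean dose.

```python
def geud(doses_gy, volumes, a):
    """Generalized equivalent uniform dose: (sum_i v_i * D_i**a)**(1/a),
    where v_i are the DVH bin volumes normalized to sum to 1."""
    total = sum(volumes)
    return sum((v / total) * d ** a for d, v in zip(doses_gy, volumes)) ** (1.0 / a)
```

A uniformly irradiated structure returns its own dose for any `a`, which is the defining property of an "equivalent uniform dose".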
Wu, Jian; Murphy, Martin J
2010-06-01
To assess the precision and robustness of patient setup corrections computed from 3D/3D rigid registration methods using image intensity, when no ground truth validation is possible. Fifteen pairs of male pelvic CTs were rigidly registered using four different in-house registration methods. Registration results were compared for different resolutions and image content by varying the image down-sampling ratio and by thresholding out soft tissue to isolate bony landmarks. Intrinsic registration precision was investigated by comparing the different methods and by reversing the source and target roles of the two images being registered. The translational reversibility errors for successful registrations ranged from 0.0 to 1.69 mm. Rotations were less than 1 degree. Mutual information failed in most registrations that used only bony landmarks. The magnitude of the reversibility error was strongly correlated with the success/failure of each algorithm to find the global minimum. Rigid image registrations have an intrinsic uncertainty and robustness that depends on the imaging modality, the registration algorithm, the image resolution, and the image content. In the absence of an absolute ground truth, the variation in the shifts calculated by several different methods provides a useful estimate of that uncertainty. The difference observed by reversing the source and target images can be used as an indication of robust convergence.
Torralba, Marta; Díaz-Pérez, Lucía C.
2017-01-01
This article presents a self-calibration procedure and the experimental results for the geometrical characterisation of a 2D laser system operating along a large working range (50 mm × 50 mm) with submicrometre uncertainty. Its purpose is to correct the geometric errors of the 2D laser system setup generated when positioning the two laser heads and the plane mirrors used as reflectors. The non-calibrated artefact used in this procedure is a commercial grid encoder that is also a measuring instrument. Therefore, the self-calibration procedure also allows the determination of the geometrical errors of the grid encoder, including its squareness error. The precision of the proposed algorithm is tested using virtual data. Actual measurements are subsequently registered, and the algorithm is applied. Once the laser system is characterised, the error of the grid encoder is calculated along the working range, resulting in an expanded submicrometre calibration uncertainty (k = 2) for the X and Y axes. The results of the grid encoder calibration are comparable to the errors provided by the calibration certificate for its main central axes. It is, therefore, possible to confirm the suitability of the self-calibration methodology proposed in this article. PMID:28858239
DOE Office of Scientific and Technical Information (OSTI.GOV)
Velec, Michael; Waldron, John N.; O'Sullivan, Brian
2010-03-01
Purpose: To prospectively compare setup error in standard thermoplastic masks and skin-sparing masks (SSMs) modified with low neck cutouts for head-and-neck intensity-modulated radiation therapy (IMRT) patients. Methods and Materials: Twenty head-and-neck IMRT patients were randomized to be treated in a standard mask (SM) or SSM. Cone-beam computed tomography (CBCT) scans, acquired daily after both initial setup and any repositioning, were used for initial and residual interfraction evaluation, respectively. Weekly, post-IMRT CBCT scans were acquired for intrafraction setup evaluation. The population random (σ) and systematic (Σ) errors were compared for SMs and SSMs. Skin toxicity was recorded weekly using Radiation Therapy Oncology Group criteria. Results: We evaluated 762 CBCT scans in 11 patients randomized to the SM and 9 to the SSM. Initial interfraction σ was 1.6 mm or less and 1.1° or less for the SM, and 2.0 mm or less and 0.8° for the SSM. Initial interfraction Σ was 1.0 mm or less and 1.4° or less for the SM, and 1.1 mm or less and 0.9° or less for the SSM. These errors were reduced before IMRT with CBCT image guidance, with no significant differences in residual interfraction or intrafraction uncertainties between SMs and SSMs. Intrafraction σ and Σ were less than 1 mm and less than 1° for both masks. Less severe skin reactions were observed in the cutout regions of the SSM compared with non-cutout regions. Conclusions: Interfraction and intrafraction setup error is not significantly different for SSMs and conventional masks in head-and-neck radiation therapy. Mask cutouts should be considered for these patients in an effort to reduce skin toxicity.
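Population systematic (Σ) and random (σ) setup errors like those reported in the abstract above are conventionally computed from per-patient error series and then combined into a setup margin. The sketch below uses the widely cited van Herk recipe 2.5Σ + 0.7σ as an example; the study itself may have used a different margin formula, and the data are illustrative.

```python
from statistics import mean, pstdev

def population_setup_stats(errors_by_patient):
    """errors_by_patient: list of per-patient lists of setup errors (mm).
    Returns (Sigma, sigma): the SD of the per-patient mean errors
    (systematic component) and the root-mean-square of the per-patient
    SDs (random component)."""
    means = [mean(e) for e in errors_by_patient]
    sds = [pstdev(e) for e in errors_by_patient]
    Sigma = pstdev(means)
    sigma = mean(s ** 2 for s in sds) ** 0.5
    return Sigma, sigma

def van_herk_margin(Sigma, sigma):
    # CTV-to-PTV margin recipe (van Herk et al.): 2.5*Sigma + 0.7*sigma
    return 2.5 * Sigma + 0.7 * sigma
```

Two patients with constant but different offsets have pure systematic error (σ = 0), so the margin reduces to 2.5Σ.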
Mori, Shinichiro; Shibayama, Kouichi; Tanimoto, Katsuyuki; Kumagai, Motoki; Matsuzaki, Yuka; Furukawa, Takuji; Inaniwa, Taku; Shirai, Toshiyuki; Noda, Koji; Tsuji, Hiroshi; Kamada, Tadashi
2012-09-01
Our institute has constructed a new treatment facility for carbon ion scanning beam therapy. The first clinical trials were successfully completed at the end of November 2011. To evaluate patient setup accuracy, positional errors between the reference Computed Tomography (CT) scan and final patient setup images were calculated using 2D-3D registration software. Eleven patients with tumors of the head and neck, prostate and pelvis receiving carbon ion scanning beam treatment participated. The patient setup process takes orthogonal X-ray flat panel detector (FPD) images and the therapists adjust the patient table position in six degrees of freedom to register the reference position by manual or auto- (or both) registration functions. We calculated residual positional errors with the 2D-3D auto-registration function using the final patient setup orthogonal FPD images and treatment planning CT data. Residual error averaged over all patients in each fraction decreased from the initial to the last treatment fraction [1.09 mm/0.76° (averaged in the 1st and 2nd fractions) to 0.77 mm/0.61° (averaged in the 15th and 16th fractions)]. 2D-3D registration calculation time was 8.0 s on average throughout the treatment course. Residual errors in translation and rotation averaged over all patients as a function of date decreased with the passage of time (1.6 mm/1.2° in May 2011 to 0.4 mm/0.2° in December 2011). This retrospective residual positional error analysis shows that the accuracy of patient setup during the first clinical trials of carbon ion beam scanning therapy was good and improved with increasing therapist experience.
NASA Astrophysics Data System (ADS)
Huo, Ming-Xia; Li, Ying
2017-12-01
Quantum error correction is important to quantum information processing, as it allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on past error correction data. We find that, using these estimated error rates, the probability of error correction failures can be significantly reduced by a factor that increases with the code distance.
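Gaussian-process regression over past error-correction data, as described in the abstract above, can be sketched in miniature. This is a generic GP mean prediction with an RBF kernel, solved by plain Gaussian elimination; the kernel, length scale, noise level, and data are illustrative assumptions, not the protocol's actual settings.

```python
import math

def _solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        piv = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[piv] = M[piv], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def gp_predict(times, rates, t_star, length=1.0, noise=1e-6):
    """GP posterior mean at t_star from past (time, error-rate) samples,
    using an RBF kernel with the given length scale and noise variance."""
    k = lambda a, b: math.exp(-((a - b) ** 2) / (2 * length ** 2))
    K = [[k(ti, tj) + (noise if i == j else 0.0)
          for j, tj in enumerate(times)] for i, ti in enumerate(times)]
    alpha = _solve(K, rates)
    return sum(k(t_star, ti) * a for ti, a in zip(times, alpha))
```

With near-zero noise the posterior mean interpolates the observed rates, so querying at a past round reproduces its measured rate.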
Dynamic response tests of inertial and optical wind-tunnel model attitude measurement devices
NASA Technical Reports Server (NTRS)
Buehrle, R. D.; Young, C. P., Jr.; Burner, A. W.; Tripp, J. S.; Tcheng, P.; Finley, T. D.; Popernack, T. G., Jr.
1995-01-01
Results are presented for an experimental study of the response of inertial and optical wind-tunnel model attitude measurement systems in a wind-off simulated dynamic environment. This study is part of an ongoing activity at the NASA Langley Research Center to develop high accuracy, advanced model attitude measurement systems that can be used in a dynamic wind-tunnel environment. This activity was prompted by the inertial model attitude sensor response observed during high levels of model vibration which results in a model attitude measurement bias error. Significant bias errors in model attitude measurement were found for the measurement using the inertial device during wind-off dynamic testing of a model system. The amount of bias present during wind-tunnel tests will depend on the amplitudes of the model dynamic response and the modal characteristics of the model system. Correction models are presented that predict the vibration-induced bias errors to a high degree of accuracy for the vibration modes characterized in the simulated dynamic environment. The optical system results were uncorrupted by model vibration in the laboratory setup.
Defining robustness protocols: a method to include and evaluate robustness in clinical plans
NASA Astrophysics Data System (ADS)
McGowan, S. E.; Albertini, F.; Thomas, S. J.; Lomax, A. J.
2015-04-01
We aim to define a site-specific robustness protocol to be used during the clinical plan evaluation process. The robustness of 16 skull-base IMPT plans to systematic range and random set-up errors has been retrospectively and systematically analysed. This was done by calculating the error-bar dose distribution (ebDD) for all the plans and by defining metrics to aid plan assessment. Additionally, an example of how to use the robustness database clinically is given, whereby a plan with sub-optimal brainstem robustness was identified. The advantage of using different beam arrangements to improve plan robustness was analysed. Using the ebDD, it was found that range errors had a smaller effect on the dose distribution than the corresponding set-up error in a single fraction, and that organs at risk were most robust to range errors, whereas the target was more robust to set-up errors. A database was created to aid planners in setting plan robustness aims in these volumes. This resulted in the definition of site-specific robustness protocols. The use of robustness constraints allowed for the identification of a specific patient who may have benefited from a more individualized treatment. A new beam arrangement was shown to be preferential when balancing conformality and robustness for this case. The ebDD and error-bar volume histogram proved effective in analysing plan robustness. The process of retrospective analysis could be used to establish site-specific robustness planning protocols in proton therapy. These protocols allow the planner to identify plans that, although delivering a dosimetrically adequate dose distribution, have sub-optimal robustness to these uncertainties. For these cases the use of different beam start conditions may improve the plan robustness to set-up and range uncertainties.
NASA Astrophysics Data System (ADS)
Park, Seyoun; Robinson, Adam; Quon, Harry; Kiess, Ana P.; Shen, Colette; Wong, John; Plishker, William; Shekhar, Raj; Lee, Junghoon
2016-03-01
In this paper, we propose a CT-CBCT registration method to accurately predict the tumor volume change based on daily cone-beam CTs (CBCTs) during radiotherapy. CBCT is commonly used to reduce patient setup error during radiotherapy, but its poor image quality impedes accurate monitoring of anatomical changes. Although physician's contours drawn on the planning CT can be automatically propagated to daily CBCTs by deformable image registration (DIR), artifacts in CBCT often cause undesirable errors. To improve the accuracy of the registration-based segmentation, we developed a DIR method that iteratively corrects CBCT intensities by local histogram matching. Three popular DIR algorithms (B-spline, demons, and optical flow) with the intensity correction were implemented on a graphics processing unit for efficient computation. We evaluated their performances on six head and neck (HN) cancer cases. For each case, four trained scientists manually contoured the nodal gross tumor volume (GTV) on the planning CT and every other fraction CBCTs, to which the propagated GTV contours by DIR were compared. The performance was also compared with commercial image registration software based on conventional mutual information (MI), VelocityAI (Varian Medical Systems Inc.). The volume differences (mean±std in cc) between the average of the manual segmentations and the automatic segmentations are 3.70±2.30 (B-spline), 1.25±1.78 (demons), 0.93±1.14 (optical flow), and 4.39±3.86 (VelocityAI). The proposed method significantly reduced the estimation error by 9% (B-spline), 38% (demons), and 51% (optical flow) over the results using VelocityAI. Although demonstrated only on HN nodal GTVs, the results imply that the proposed method can produce improved segmentation of other critical structures over conventional methods.
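The histogram matching used above to correct CBCT intensities can be sketched as a rank-based remapping: each intensity is replaced by the reference value at the same quantile. Real implementations operate per slice or per local window on 3-D volumes; this toy version matches one flat intensity list, and the function name is an assumption.

```python
def match_histogram(source, reference):
    """Map each source intensity to the reference intensity at the same
    quantile (rank-based histogram matching for equal-weight samples)."""
    src_sorted = sorted(source)
    ref_sorted = sorted(reference)
    n, m = len(source), len(reference)
    out = []
    for v in source:
        # quantile of v within the source distribution (ties -> first rank)
        q = src_sorted.index(v) / (n - 1) if n > 1 else 0.0
        out.append(ref_sorted[round(q * (m - 1))])
    return out
```

After matching, the source takes on the reference's intensity distribution while preserving the spatial ordering of values, which is what makes intensity-based DIR between CT and CBCT better behaved.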
NASA Astrophysics Data System (ADS)
Liu, Xing-fa; Cen, Ming
2007-12-01
The neural-network system error correction method is more precise than the least-squares and spherical-harmonics system error correction methods. The accuracy of the neural-network method depends mainly on the network architecture. Analysis and simulation show that both the BP and RBF neural-network correction methods achieve high correction accuracy; for small training sample sets, the RBF network method is preferable to the BP network method when training rate and network scale are taken into account.
Jani, Shyam S; Low, Daniel A; Lamb, James M
2015-01-01
To develop an automated system that detects patient identification and positioning errors between 3-dimensional computed tomography (CT) and kilovoltage CT planning images. Planning kilovoltage CT images were collected for head and neck (H&N), pelvis, and spine treatments with corresponding 3-dimensional cone beam CT and megavoltage CT setup images from TrueBeam and TomoTherapy units, respectively. Patient identification errors were simulated by registering setup and planning images from different patients. For positioning errors, setup and planning images were misaligned by 1 to 5 cm in the 6 anatomical directions for H&N and pelvis patients. Spinal misalignments were simulated by misaligning to adjacent vertebral bodies. Image pairs were assessed using commonly used image similarity metrics as well as custom-designed metrics. Linear discriminant analysis classification models were trained and tested on the imaging datasets, and misclassification error (MCE), sensitivity, and specificity parameters were estimated using 10-fold cross-validation. For patient identification, our workflow produced MCE estimates of 0.66%, 1.67%, and 0% for H&N, pelvis, and spine TomoTherapy images, respectively. Sensitivity and specificity ranged from 97.5% to 100%. MCEs of 3.5%, 2.3%, and 2.1% were obtained for TrueBeam images of the above sites, respectively, with sensitivity and specificity estimates between 95.4% and 97.7%. MCEs for 1-cm H&N/pelvis misalignments were 1.3%/5.1% and 9.1%/8.6% for TomoTherapy and TrueBeam images, respectively. Two-centimeter MCE estimates were 0.4%/1.6% and 3.1%/3.2%, respectively. MCEs for vertebral body misalignments were 4.8% and 3.6% for TomoTherapy and TrueBeam images, respectively. Patient identification and gross misalignment errors can be robustly and automatically detected using 3-dimensional setup images of different energies across 3 commonly treated anatomical sites. Copyright © 2015 American Society for Radiation Oncology. 
Published by Elsevier Inc. All rights reserved.
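The discriminant classification with cross-validated misclassification error described above can be sketched in one dimension, where two-class LDA with equal priors and a shared variance reduces to a nearest-class-mean rule. The "similarity score" feature and the data below are illustrative assumptions, and leave-one-out stands in for the study's 10-fold scheme.

```python
from statistics import mean

def lda_1d_predict(train0, train1, x):
    """Two-class LDA on a single feature: with equal priors and a shared
    variance the decision rule is the nearer class mean (midpoint threshold)."""
    m0, m1 = mean(train0), mean(train1)
    return 0 if abs(x - m0) <= abs(x - m1) else 1

def loo_misclassification(class0, class1):
    """Leave-one-out estimate of the misclassification error (MCE)."""
    errors, total = 0, len(class0) + len(class1)
    for i, x in enumerate(class0):
        rest = class0[:i] + class0[i + 1:]
        errors += lda_1d_predict(rest, class1, x) != 0
    for i, x in enumerate(class1):
        rest = class1[:i] + class1[i + 1:]
        errors += lda_1d_predict(class0, rest, x) != 1
    return errors / total
```

Well-separated similarity scores, e.g. high values for correctly matched image pairs and low values for wrong-patient pairs, yield a zero cross-validated MCE.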
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, S; Robinson, A; Kiess, A
2015-06-15
Purpose: The purpose of this study is to develop an accurate and effective technique to predict and monitor volume changes of the tumor and organs at risk (OARs) from daily cone-beam CTs (CBCTs). Methods: While CBCT is typically used to minimize the patient setup error, its poor image quality impedes accurate monitoring of daily anatomical changes in radiotherapy. Reconstruction artifacts in CBCT often cause undesirable errors in registration-based contour propagation from the planning CT, a conventional way to estimate anatomical changes. To improve the registration and segmentation accuracy, we developed a new deformable image registration (DIR) method that iteratively corrects CBCT intensities using slice-based histogram matching during the registration process. Three popular DIR algorithms (hierarchical B-spline, demons, optical flow) augmented by the intensity correction were implemented on a graphics processing unit for efficient computation, and their performances were evaluated on six head and neck (HN) cancer cases. Four trained scientists manually contoured the nodal gross tumor volume (GTV) on the planning CT and every other fraction CBCTs for each case, to which the propagated GTV contours by DIR were compared. The performance was also compared with commercial software, VelocityAI (Varian Medical Systems Inc.). Results: Manual contouring showed significant variations, [-76, +141]% from the mean of all four sets of contours. The volume differences (mean±std in cc) between the average manual segmentation and the four automatic segmentations are 3.70±2.30 (B-spline), 1.25±1.78 (demons), 0.93±1.14 (optical flow), and 4.39±3.86 (VelocityAI). In comparison to the average volume of the manual segmentations, the proposed approach significantly reduced the estimation error by 9% (B-spline), 38% (demons), and 51% (optical flow) over the conventional mutual-information-based method (VelocityAI). 
Conclusion: The proposed CT-CBCT registration with local CBCT intensity correction can accurately predict the tumor volume change with reduced errors. Although demonstrated only on HN nodal GTVs, the results imply improved accuracy for other critical structures. This work was supported by NIH/NCI under grant R42CA137886.
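The slice-based histogram matching used for CBCT intensity correction can be sketched as follows. This is an illustrative quantile-mapping sketch, not the authors' implementation; the function names, the axial-slice axis convention, and the quantile count are assumptions.

```python
import numpy as np

def match_slice_histogram(cbct_slice, ct_slice, n_quantiles=256):
    """Map CBCT slice intensities onto the planning-CT intensity
    distribution of the corresponding slice via quantile (CDF) matching."""
    qs = np.linspace(0.0, 1.0, n_quantiles)
    cbct_q = np.quantile(cbct_slice, qs)
    ct_q = np.quantile(ct_slice, qs)
    # np.interp sends each CBCT value to the CT value of equal CDF rank
    return np.interp(cbct_slice, cbct_q, ct_q)

def correct_cbct_volume(cbct, ct):
    """Apply histogram matching slice by slice along the first (axial) axis."""
    return np.stack([match_slice_histogram(s_cb, s_ct)
                     for s_cb, s_ct in zip(cbct, ct)])
```

Because quantile mapping inverts any monotone intensity distortion, a CBCT that differs from the CT by a per-slice monotone transform is restored almost exactly.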
A Robust and Affordable Table Indexing Approach for Multi-isocenter Dosimetrically Matched Fields.
Yu, Amy; Fahimian, Benjamin; Million, Lynn; Hsu, Annie
2017-05-23
Purpose Radiotherapy treatment planning of extended volumes typically necessitates the utilization of multiple field isocenters and abutting dosimetrically matched fields in order to enable coverage beyond the field size limits. A common example includes total lymphoid irradiation (TLI) treatments, which are conventionally planned using dosimetric matching of the mantle, para-aortic/spleen, and pelvic fields. Due to the large irradiated volume and system limitations, such as field size and couch extension, a combination of couch shifts and sliding of patients must be executed correctly for accurate delivery of the plan. However, shifting of patients presents a substantial safety issue and has been shown to be prone to errors ranging from minor deviations to geometrical misses warranting a medical event. To address this complex setup and mitigate the safety issues relating to delivery, a practical technique for couch indexing of TLI treatments has been developed and evaluated through a retrospective analysis of couch position. Methods The indexing technique is based on the modification of the commonly available slide board to enable indexing of the patient position. Modifications include notching to enable coupling with indexing bars, and the addition of a headrest used to fixate the head of the patient relative to the slide board. For the clinical setup, a Varian Exact Couch™ (Varian Medical Systems, Inc, Palo Alto, CA) was utilized. Two groups of patients were treated: 20 patients with table indexing and 10 patients without. The standard deviations (SDs) of the couch positions in longitudinal, lateral, and vertical directions through the entire treatment cycle for each patient were calculated and differences in both groups were analyzed with Student's t-test. Results The longitudinal direction showed the largest improvement. In the non-indexed group, the positioning SD ranged from 2.0 to 7.9 cm.
With the indexing device, the positioning SD was reduced to a range of 0.4 to 1.3 cm (p < 0.05 at the 95% confidence level). The lateral positioning was slightly improved (p < 0.05 at the 95% confidence level), while no improvement was observed in the vertical direction. Conclusions The conventional matched-field TLI treatment is prone to geometrical setup errors. Full indexing of TLI treatments was validated as feasible and shown to result in a significant reduction of positioning and shifting errors.
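The group comparison above rests on a two-sample Student's t-test of the per-patient couch-position SDs. A minimal pooled-variance t statistic can be sketched as below (illustrative only; the abstract reports the test outcome, not the computation, and the example data are hypothetical SD values within the reported ranges).

```python
import numpy as np

def pooled_t_statistic(a, b):
    """Student's two-sample t statistic with pooled variance
    (the equal-variance form of the standard Student's t-test)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    sp2 = (((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
           / (na + nb - 2))  # pooled variance estimate
    return (a.mean() - b.mean()) / np.sqrt(sp2 * (1.0 / na + 1.0 / nb))
```

A strongly negative t for (indexed, non-indexed) indicates the indexed group's positional spread is significantly smaller.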
NASA Astrophysics Data System (ADS)
Lozano, A. I.; Oller, J. C.; Krupa, K.; Ferreira da Silva, F.; Limão-Vieira, P.; Blanco, F.; Muñoz, A.; Colmenares, R.; García, G.
2018-06-01
A novel experimental setup has been implemented to provide accurate electron scattering cross sections from molecules at low and intermediate impact energies (1-300 eV) by measuring the attenuation of a magnetically confined linear electron beam by a molecular target. High energy resolution is achieved through confinement in a magnetic gas trap where electrons are cooled by successive collisions with N2. Additionally, we developed and present a method to correct systematic errors arising from energy and angular resolution limitations. The accuracy of the entire measurement procedure is validated by comparing the N2 total scattering cross section in the considered energy range with benchmark values available in the literature.
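Extracting a total cross section from beam attenuation follows the Beer-Lambert relation. The sketch below shows only that textbook relation, not the paper's resolution-correction procedure; the function name and the test values are assumptions.

```python
import numpy as np

def total_cross_section(I0, I, number_density, path_length):
    """Total scattering cross section (m^2) from Beer-Lambert attenuation:
    I = I0 * exp(-n * sigma * L)  =>  sigma = ln(I0 / I) / (n * L)."""
    return np.log(np.asarray(I0, float) / np.asarray(I, float)) / (
        number_density * path_length)
```

Given the transmitted and incident beam intensities, the target number density n, and the interaction length L, the cross section is recovered directly.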
A tilt and roll device for automated correction of rotational setup errors.
Hornick, D C; Litzenberg, D W; Lam, K L; Balter, J M; Hetrick, J; Ten Haken, R K
1998-09-01
A tilt and roll device has been developed to add two additional degrees of freedom to an existing treatment table. This device allows computer-controlled rotational motion about the inferior-superior and left-right patient axes. The tilt and roll device comprises three supports between the tabletop and base. An automotive-type universal joint welded to the end of a steel pipe supports the center of the table. Two computer-controlled linear electric actuators utilizing high-accuracy stepping motors support the foot of the table and control the tilt and roll of the tabletop. The current system meets or exceeds all pre-design specifications for precision, weight capacity, rigidity, and range of motion.
Teboh, Forbang R; Agee, M; Rowe, L
2014-06-01
Purpose: Immobilization devices combine rigid patient fixation as well as comfort and play a key role providing the stability required for accurate radiation delivery. In the setup step, couch re-positioning needed to align the patient is derived via registration of acquired versus reference image. For subsequent fractions, replicating the initial setup should yield identical alignment errors when compared to the reference. This is not always the case and further couch re-positioning can be needed. An important quality assurance measure is to set couch tolerances beyond which additional investigations are needed. The purpose of this work was to study the inter-fraction couch changes needed to re-align the patient and the intra-fraction stability of the alignment as a guide to establish the couch tolerances. Methods: Data from twelve patients treated on the Accuray CyberKnife (CK) system for fractionated intracranial radiotherapy and immobilized with Aquaplast RT, U-frame, F-Head-Support (Qfix, PA, USA) was used. Each fraction involved image acquisitions and registration with the reference to re-align the patient. The absolute couch position corresponding to the approved setup alignment was recorded per fraction. Intra-fraction set-up corrections were recorded throughout the treatment. Results: The average approved setup alignment was 0.03±0.28 mm, 0.15±0.22 mm, 0.06±0.31 mm in the L/R, A/P, S/I directions respectively and 0.00±0.35 degrees, 0.03±0.32 degrees, 0.08±0.45 degrees for roll, pitch and yaw respectively. The inter-fraction reproducibility of the couch position was 6.65 mm, 10.55 mm, and 4.77 mm in the L/R, A/P and S/I directions respectively and 0.82 degrees, 0.71 degrees for roll and pitch respectively. Intra-fraction monitoring showed small average errors of 0.21±0.21 mm, 0.00±0.08 mm, 0.23±0.22 mm in the L/R, A/P, S/I directions respectively and 0.03±0.12 degrees, 0.04±0.25 degrees, and 0.13±0.15 degrees in the roll, pitch and yaw respectively.
Conclusion: The inter-fraction reproducibility should serve as a guide to couch tolerances, specific to a site and immobilization. More patients need to be included to make general conclusions.
Kentgens, Anne-Christianne; Guidi, Marisa; Korten, Insa; Kohler, Lena; Binggeli, Severin; Singer, Florian; Latzin, Philipp; Anagnostopoulou, Pinelopi
2018-05-01
Multiple breath washout (MBW) is a sensitive test to measure lung volumes and ventilation inhomogeneity from infancy on. The commonly used setup for infant MBW, based on ultrasonic flowmeter, requires extensive signal processing, which may reduce robustness. A new setup may overcome some previous limitations but formal validation is lacking. We assessed the feasibility of infant MBW testing with the new setup and compared functional residual capacity (FRC) values of the old and the new setup in vivo and in vitro. We performed MBW in four healthy infants and four infants with cystic fibrosis, as well as in a Plexiglas lung simulator using realistic lung volumes and breathing patterns, with the new (Exhalyzer D, Spiroware 3.2.0, Ecomedics) and the old setup (Exhalyzer D, WBreath 3.18.0, ndd) in random sequence. The technical feasibility of MBW with the new device-setup was 100%. Intra-subject variability in FRC was low in both setups, but differences in FRC between the setups were considerable (mean relative difference 39.7%, range 18.9; 65.7, P = 0.008). Corrections of software settings decreased FRC differences (14.0%, -6.4; 42.3, P = 0.08). Results were confirmed in vitro. MBW measurements with the new setup were feasible in infants. However, despite attempts to correct software settings, outcomes between setups were not interchangeable. Further work is needed before widespread application of the new setup can be recommended. © 2018 Wiley Periodicals, Inc.
Kalman filter based control for Adaptive Optics
Petit, Cyril; Quiros-Pacheco, Fernando; Conan, Jean-Marc; Kulcsár, Caroline; Raynaud, Henri-François; Fusco, Thierry
2004-12-01
Classical Adaptive Optics suffers from a limitation of the corrected Field Of View. This drawback has led to the development of Multi-Conjugate Adaptive Optics (MCAO). While the first MCAO experimental set-ups are presently under construction, little attention has been paid to the control loop. This is, however, a key element in the optimization process, especially for MCAO systems. Different approaches have been proposed in recent articles for astronomical applications: simple integrator, Optimized Modal Gain Integrator, and Kalman filtering. We study here Kalman filtering, which seems a very promising solution. Following the work of Brice Leroux, we focus on a frequential characterization of Kalman filters, computing a transfer matrix. The result brings much information about their behaviour and allows comparisons with classical controllers. It also appears that straightforward improvements of the system models can lead to static aberrations and vibrations filtering. Simulation results are proposed and analysed thanks to our frequential characterization. Related problems such as model errors, aliasing effect reduction, or experimental implementation and testing of the Kalman filter control loop on a simplified MCAO experimental set-up can then be discussed.
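The predict/update cycle at the heart of Kalman filtering can be illustrated with a minimal scalar (random-walk state) example. This is a textbook sketch, not the MCAO transfer-matrix formulation of the paper; the noise values in the test are hypothetical.

```python
def kalman_1d(measurements, q, r, x0=0.0, p0=1.0):
    """Minimal scalar Kalman filter for a random-walk state:
    predict with process-noise variance q, update with
    measurement-noise variance r."""
    x, p, estimates = x0, p0, []
    for z in measurements:
        p = p + q                # predict: state variance grows
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # update toward the innovation (z - x)
        p = (1.0 - k) * p        # posterior variance shrinks
        estimates.append(x)
    return estimates
```

With a small process noise, the filter behaves like a running average and converges toward a constant measured signal.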
Prevention of gross setup errors in radiotherapy with an efficient automatic patient safety system.
Yan, Guanghua; Mittauer, Kathryn; Huang, Yin; Lu, Bo; Liu, Chihray; Li, Jonathan G
2013-11-04
Treatment of the wrong body part due to incorrect setup is among the leading types of errors in radiotherapy. The purpose of this paper is to report an efficient automatic patient safety system (PSS) to prevent gross setup errors. The system consists of a pair of charge-coupled device (CCD) cameras mounted in the treatment room, a single infrared reflective marker (IRRM) affixed to the patient or immobilization device, and a set of in-house developed software. Patients are CT scanned with a CT BB placed on their surface close to the intended treatment site. Coordinates of the CT BB relative to the treatment isocenter are used as the reference for tracking. The CT BB is replaced with an IRRM before treatment starts. PSS evaluates setup accuracy by comparing the real-time IRRM position with the reference position. To automate system workflow, PSS synchronizes with the record-and-verify (R&V) system in real time and automatically loads the reference data for the patient under treatment. Special IRRMs, which can permanently stick to the patient's face mask or body mold throughout the course of treatment, were designed to minimize the therapist's workload. Accuracy of the system was examined on an anthropomorphic phantom with a designed end-to-end test. Its performance was also evaluated on head-and-neck as well as abdominal-pelvic patients using cone-beam CT (CBCT) as the standard. The PSS system achieved a seamless clinical workflow by synchronizing with the R&V system. By permanently mounting specially designed IRRMs on patient immobilization devices, therapist intervention is eliminated or minimized. Overall results showed that the PSS system has sufficient accuracy to catch gross setup errors greater than 1 cm in real time. An efficient automatic PSS with sufficient accuracy has been developed to prevent gross setup errors in radiotherapy. The system can be applied to all treatment sites for independent positioning verification.
It can be an ideal complement to complex image-guidance systems due to its advantages of continuous tracking ability, no radiation dose, and fully automated clinic workflow.
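The core check of such a safety system, comparing the tracked marker against its reference and flagging deviations beyond a tolerance (1 cm in this report), can be sketched as a few lines. Function and parameter names are illustrative assumptions.

```python
import math

def gross_setup_error(marker_mm, reference_mm, tolerance_mm=10.0):
    """Flag a gross setup error when the tracked marker deviates from its
    reference position by more than the tolerance (10 mm = 1 cm).
    Returns (flagged, displacement_mm)."""
    d = math.dist(marker_mm, reference_mm)  # Euclidean 3D displacement
    return d > tolerance_mm, d
```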
On the convergence and accuracy of the FDTD method for nanoplasmonics.
Lesina, Antonino Calà; Vaccari, Alessandro; Berini, Pierre; Ramunno, Lora
2015-04-20
Use of the Finite-Difference Time-Domain (FDTD) method to model nanoplasmonic structures continues to rise - more than 2700 papers have been published in 2014 on FDTD simulations of surface plasmons. However, a comprehensive study on the convergence and accuracy of the method for nanoplasmonic structures has yet to be reported. Although the method may be well-established in other areas of electromagnetics, the peculiarities of nanoplasmonic problems are such that a targeted study on convergence and accuracy is required. The availability of a high-performance computing system (a massively parallel IBM Blue Gene/Q) allows us to do this for the first time. We consider gold and silver at optical wavelengths along with three "standard" nanoplasmonic structures: a metal sphere, a metal dipole antenna and a metal bowtie antenna - for the first structure comparisons with the analytical extinction, scattering, and absorption coefficients based on Mie theory are possible. We consider different ways to set up the simulation domain, we vary the mesh size to very small dimensions, we compare the simple Drude model with the Drude model augmented with two critical points correction, we compare single-precision to double-precision arithmetic, and we compare two staircase meshing techniques, per-component and uniform. We find that the Drude model with two critical points correction (at least) must be used in general. Double-precision arithmetic is needed to avoid round-off errors if highly converged results are sought. Per-component meshing increases the accuracy when complex geometries are modeled, but the uniform mesh works better for structures completely fillable by the Yee cell (e.g., rectangular structures). Generally, a mesh size of 0.25 nm is required to achieve convergence of results to ∼ 1%.
We determine how to optimally set up the simulation domain, and in so doing we find that performing scattering calculations within the near field does not necessarily produce large errors but does reduce the computational resources required.
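The dispersion models compared above can be written down compactly. Below is a sketch of the Drude permittivity and a critical-point term of the Drude + 2 critical points (Etchegoin-type) model; the specific parameter values used in the test are hypothetical gold-like numbers in eV units, not fitted values from the paper.

```python
import numpy as np

def drude(omega, eps_inf, omega_p, gamma):
    """Drude permittivity: eps_inf - omega_p^2 / (omega^2 + i*gamma*omega)."""
    return eps_inf - omega_p**2 / (omega**2 + 1j * gamma * omega)

def critical_point(omega, A, Omega, phi, Gamma):
    """One critical-point (interband) term of the Etchegoin-type
    Drude + critical points model."""
    return A * Omega * (np.exp(1j * phi) / (Omega - omega - 1j * Gamma)
                        + np.exp(-1j * phi) / (Omega + omega + 1j * Gamma))

def drude_2cp(omega, eps_inf, omega_p, gamma, cp_params):
    """Drude model augmented with a list of critical-point corrections."""
    eps = drude(omega, eps_inf, omega_p, gamma)
    for A, Omega, phi, Gamma in cp_params:
        eps += critical_point(omega, A, Omega, phi, Gamma)
    return eps
```

Below the plasma frequency the Drude real part is negative (metallic response) and the imaginary part positive (loss), which the test checks.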
Siemons, M.; Hulleman, C. N.; Thorsen, R. Ø.; Smith, C. S.; Stallinga, S.
2018-04-01
Point Spread Function (PSF) engineering is used in single emitter localization to measure the emitter position in 3D and possibly other parameters such as the emission color or dipole orientation as well. Advanced PSF models such as spline fits to experimental PSFs or the vectorial PSF model can be used in the corresponding localization algorithms in order to model the intricate spot shape and deformations correctly. The complexity of the optical architecture and fit model makes PSF engineering approaches particularly sensitive to optical aberrations. Here, we present a calibration and alignment protocol for fluorescence microscopes equipped with a spatial light modulator (SLM) with the goal of establishing a wavefront error well below the diffraction limit for optimum application of complex engineered PSFs. We achieve high-precision wavefront control, to a level below 20 mλ wavefront aberration over a 30 minute time window after the calibration procedure, using a separate light path for calibrating the pixel-to-pixel variations of the SLM, and alignment of the SLM with respect to the optical axis and Fourier plane within 3 μm (x/y) and 100 μm (z) error. Aberrations are retrieved from a fit of the vectorial PSF model to a bead z-stack and compensated with a residual wavefront error comparable to the error of the SLM calibration step. This well-calibrated and corrected setup makes it possible to create complex '3D+λ' PSFs that fit very well to the vectorial PSF model. Proof-of-principle bead experiments show precisions below 10 nm in x, y, and λ, and below 20 nm in z over an axial range of 1 μm with 2000 signal photons and 12 background photons.
Secret Sharing of a Quantum State.
Lu, He; Zhang, Zhen; Chen, Luo-Kan; Li, Zheng-Da; Liu, Chang; Li, Li; Liu, Nai-Le; Ma, Xiongfeng; Chen, Yu-Ao; Pan, Jian-Wei
2016-07-15
Secret sharing of a quantum state, or quantum secret sharing, in which a dealer wants to share a certain amount of quantum information with a few players, has wide applications in quantum information. The critical criterion in a threshold secret sharing scheme is confidentiality: with less than the designated number of players, no information can be recovered. Furthermore, in a quantum scenario, one additional critical criterion exists: the capability of sharing entangled and unknown quantum information. Here, by employing a six-photon entangled state, we demonstrate a quantum threshold scheme, where the shared quantum secrecy can be efficiently reconstructed with a state fidelity as high as 93%. By observing that any one or two parties cannot recover the secrecy, we show that our scheme meets the confidentiality criterion. Meanwhile, we also demonstrate that entangled quantum information can be shared and recovered via our setting, which shows that our implemented scheme is fully quantum. Moreover, our experimental setup can be treated as a decoding circuit of the five-qubit quantum error-correcting code with two erasure errors.
Li, Winnie, E-mail: winnie.li@rmp.uhn.on.ca; Department of Radiation Oncology, University of Toronto, Toronto, Ontario; Purdie, Thomas G.
2011-12-01
Purpose: To assess intrafractional geometric accuracy of lung stereotactic body radiation therapy (SBRT) patients treated with volumetric image guidance. Methods and Materials: Treatment setup accuracy was analyzed in 133 SBRT patients treated via research ethics board-approved protocols. For each fraction, a localization cone-beam computed tomography (CBCT) scan was acquired for soft-tissue registration to the internal target volume, followed by a couch adjustment for positional discrepancies greater than 3 mm, verified with a second CBCT scan. CBCT scans were also performed at intrafraction and end fraction. Patient positioning data from 2047 CBCT scans were recorded to determine systematic (Σ) and random (σ) uncertainties, as well as planning target volume margins. Data were further stratified and analyzed by immobilization method (evacuated cushion [n = 75], evacuated cushion plus abdominal compression [n = 33], or chest board [n = 25]) and by patients' Eastern Cooperative Oncology Group performance status (PS): 0 (n = 31), 1 (n = 70), or 2 (n = 32). Results: Using CBCT, the internal target volume was matched within ±3 mm in 16% of all fractions at localization, 89% at verification, 72% during treatment, and 69% after treatment. Planning target volume margins required to encompass residual setup errors after couch corrections (verification CBCT scans) were 4 mm, and they increased to 5 mm with target intrafraction motion (post-treatment CBCT scans). Small differences (<1 mm) in the cranial-caudal direction of target position were observed between the immobilization cohorts in the localization, verification, intrafraction, and post-treatment CBCT scans (p < 0.01). Positional drift varied according to patient PS, with the PS 1 and 2 cohorts drifting out of position by mid treatment more than the PS 0 cohort in the cranial-caudal direction (p = 0.04).
Conclusions: Image guidance ensures high geometric accuracy for lung SBRT irrespective of immobilization method or PS. A 5-mm setup margin suffices to address intrafraction motion. This setup margin may be further reduced by strategies such as frequent image guidance or volumetric arc therapy to correct or limit intrafraction motion.
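The systematic (Σ) and random (σ) population uncertainties used to derive PTV margins can be sketched as below. The Σ/σ definitions follow common practice (SD of per-patient means; RMS of per-patient SDs), and the margin recipe shown is the widely used van Herk formula 2.5Σ + 0.7σ, which the abstract does not state explicitly; both are assumptions of this sketch.

```python
import numpy as np

def setup_error_statistics(errors_by_patient):
    """Population setup-error statistics from per-fraction errors (mm).

    Sigma (systematic): SD over patients of each patient's mean error.
    sigma (random):     RMS over patients of each patient's per-fraction SD.
    """
    means = [np.mean(e) for e in errors_by_patient]
    sds = [np.std(e, ddof=1) for e in errors_by_patient]
    Sigma = np.std(means, ddof=1)
    sigma = np.sqrt(np.mean(np.square(sds)))
    return Sigma, sigma

def van_herk_margin(Sigma, sigma):
    """Common PTV margin recipe (van Herk): 2.5*Sigma + 0.7*sigma."""
    return 2.5 * Sigma + 0.7 * sigma
```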
SU-F-T-642: Sub Millimeter Accurate Setup of More Than Three Vertebrae in Spinal SBRT with 6D Couch
Wang, X; Zhao, Z; Yang, J
Purpose: To assess the initial setup accuracy in treating more than 3 vertebral body levels in spinal SBRT using a 6D couch. Methods: We retrospectively analyzed the last 20 spinal SBRT patients (4 cervical, 9 thoracic, 7 lumbar/sacrum) treated in our clinic. These patients, in customized immobilization devices, were treated in 1 or 3 fractions. Initial setup used ExacTrac and a Brainlab 6D couch to align the target within 1 mm and 1 degree, followed by a cone-beam CT (CBCT) for verification. Our current standard practice allows treating a maximum of three continuous vertebrae. Here we assess the possibility of achieving sub-millimeter setup accuracy for more than three vertebrae by examining the residual error in every slice of the CBCT. The CBCT had a range of 17.5 cm, which covered 5 to 9 continuous vertebrae depending on the patient and target location. In the study, the CBCT from the 1st fraction was rigidly registered with the planning CT in Pinnacle. The residual setup error of a vertebra was determined by expanding the vertebra contour on the planning CT to be large enough to enclose the corresponding vertebra on the CBCT. The margin of the expansion was considered as the setup error. Results: Of the 20 patients analyzed, initial setup accuracy within 1 mm could be achieved for a span of 5 or more vertebrae from the T2 vertebra to inferior vertebral levels. For 2 cervical and 2 upper thoracic patients, sub-millimeter accuracy across multiple levels was difficult to achieve in the cervical spine without a customized immobilization headrest. Conclusion: If the curvature of the spinal column can be reproduced in a customized immobilization device during treatment as at simulation, multiple continuous vertebrae can be set up within 1 mm with the use of a 6D couch.
Understanding the difference in cohesive energies between alpha and beta tin in DFT calculations
Legrain, Fleur; Manzhos, Sergei, E-mail: mpemanzh@nus.edu.sg
2016-04-15
The transition temperature between the low-temperature alpha phase of tin and beta tin is close to room temperature (T_αβ = 13 °C), and the difference in cohesive energy of the two phases at 0 K, about ΔE_coh = 0.02 eV/atom, is at the limit of the accuracy of DFT (density functional theory) with available exchange-correlation functionals. It is however critically important to model the relative phase energies correctly for any reasonable description of phenomena and technologies involving these phases, for example, the performance of tin electrodes in electrochemical batteries. Here, we show that several commonly used and converged DFT setups using the most practical and widely used PBE functional result in ΔE_coh ≈ 0.04 eV/atom, with different types of basis sets and with different models of core electrons (all-electron or pseudopotentials of different types), which leads to a significant overestimation of T_αβ. We show that this is due to errors in the relative positions of s- and p-like bands, which, combined with different populations of these bands in α and β Sn, lead to overstabilization of alpha tin. We show that this error can be effectively corrected by applying a Hubbard +U correction to s-like states, whereby correct cohesive energies of both α and β Sn can be obtained with the same computational scheme. We quantify for the first time the effects of anharmonicity on ΔE_coh and find that they are negligible.
Mock, U; Dieckmann, K; Wolff, U; Knocke, T H; Pötter, R
1999-08-01
Geometrical accuracy in patient positioning can vary substantially during external radiotherapy. This study estimated the set-up accuracy during pelvic irradiation for gynecological malignancies for determination of safety margins (planning target volume, PTV). Based on electronic portal imaging devices (EPID), 25 patients undergoing 4-field pelvic irradiation for gynecological malignancies were analyzed with regard to set-up accuracy during the treatment course. Regularly performed EPID images were used in order to systematically assess the systematic and random components of set-up displacements. Anatomical matching of verification and simulation images was followed by measuring corresponding distances between the central axis and anatomical features. Data analysis of set-up errors referred to the x-, y-, and z-axes. Additionally, cumulative frequencies were evaluated. A total of 50 simulation films and 313 verification images were analyzed. For the anterior-posterior (AP) beam direction, mean deviations along the x- and z-axes were 1.5 mm and -1.9 mm, respectively. Moreover, random errors of 4.8 mm (x-axis) and 3.0 mm (z-axis) were determined. Concerning the latero-lateral treatment fields, the systematic errors along the two axes were calculated as 2.9 mm (y-axis) and -2.0 mm (z-axis), and random errors of 3.8 mm and 3.5 mm were found, respectively. The cumulative frequency of misalignments ≤5 mm showed values of 75% (AP fields) and 72% (latero-lateral fields). With regard to cumulative frequencies ≤10 mm, quantification revealed values of 97% for both beam directions. During external pelvic irradiation for gynecological malignancies, EPID images on a regular basis revealed acceptable set-up inaccuracies. Safety margins (PTV) of 1 cm appear to be sufficient, accounting for more than 95% of all deviations.
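The cumulative-frequency summary used above (fraction of displacements within a threshold) is a one-liner; a minimal sketch with hypothetical displacement data follows.

```python
import numpy as np

def cumulative_frequency(displacements_mm, threshold_mm):
    """Fraction of absolute set-up displacements within a threshold (mm)."""
    d = np.abs(np.asarray(displacements_mm, float))
    return float(np.mean(d <= threshold_mm))
```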
Fredriksson, Albin, E-mail: albin.fredriksson@raysearchlabs.com; Hårdemark, Björn; Forsgren, Anders
2015-07-15
Purpose: This paper introduces a method that maximizes the probability of satisfying the clinical goals in intensity-modulated radiation therapy treatments subject to setup uncertainty. Methods: The authors perform robust optimization in which the clinical goals are constrained to be satisfied whenever the setup error falls within an uncertainty set. The shape of the uncertainty set is included as a variable in the optimization. The goal of the optimization is to modify the shape of the uncertainty set in order to maximize the probability that the setup error will fall within the modified set. Because the constraints enforce the clinical goals to be satisfied under all setup errors within the uncertainty set, this is equivalent to maximizing the probability of satisfying the clinical goals. This type of robust optimization is studied with respect to photon and proton therapy applied to a prostate case and compared to robust optimization using an a priori defined uncertainty set. Results: Slight reductions of the uncertainty sets resulted in plans that satisfied a larger number of clinical goals than optimization with respect to a priori defined uncertainty sets, both within the reduced uncertainty sets and within the a priori, nonreduced, uncertainty sets. For the prostate case, the plans taking reduced uncertainty sets into account satisfied 1.4 (photons) and 1.5 (protons) times as many clinical goals over the scenarios as the method taking a priori uncertainty sets into account. Conclusions: Reducing the uncertainty sets enabled the optimization to find better solutions with respect to the errors within the reduced as well as the nonreduced uncertainty sets and thereby achieve higher probability of satisfying the clinical goals. This shows that asking for a little less in the optimization sometimes leads to better overall plan quality.
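The objective being maximized, the probability that the setup error lands inside the chosen uncertainty set, can be estimated by Monte Carlo for a box-shaped set and Gaussian errors. This is an illustrative sketch of that probability computation only, not the paper's optimization; the Gaussian error model, sample count, and seed are assumptions.

```python
import numpy as np

def box_coverage_probability(half_widths_mm, sd_mm, n_samples=200_000, seed=1):
    """Monte Carlo estimate of P(setup error inside a box uncertainty set)
    for a 3D Gaussian setup error with independent components of SD sd_mm."""
    rng = np.random.default_rng(seed)
    e = rng.normal(0.0, sd_mm, size=(n_samples, 3))
    inside = np.all(np.abs(e) <= np.asarray(half_widths_mm), axis=1)
    return float(inside.mean())
```

Shrinking the box lowers this probability, which is the trade-off the paper's optimization navigates: robustness is only enforced over the (possibly reduced) set.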
OptiCentric lathe centering machine
Buß, C.; Heinisch, J.
2013-09-01
High precision optics depend on precisely aligned lenses. The shift and tilt of individual lenses as well as the air gaps between elements require accuracies in the single-micron regime. These accuracies are hard to meet with traditional assembly methods. Instead, lathe centering can be used to machine the mount with respect to the optical axis. Using a diamond turning process, all relevant errors of single mounted lenses can be corrected in one post-machining step. Building on the OptiCentric® and OptiSurf® measurement systems, Trioptics has developed its first lathe centering machines. The machine and specific design elements of the setup will be shown. For example, the machine can be used to turn optics for i-line steppers with highest precision.
Hansen, Eric K.; Larson, David A.; Aubin, Michele
Purpose: This report describes a new image-guided radiotherapy (IGRT) technique using megavoltage cone-beam computed tomography (MV-CBCT) to treat paraspinous tumors in the presence of orthopedic hardware. Methods and Materials: A patient with a resected paraspinous high-grade sarcoma was treated to 59.4 Gy with an IMRT plan. Daily MV-CBCT imaging was used to ensure accurate positioning. The displacement between MV-CBCT and planning CT images was determined daily and applied remotely to the treatment couch. The dose-volume histograms of the original and a hypothetical IMRT plan (shifted by the average daily setup errors) were compared to estimate the impact on dosimetry. Results: The mean setup corrections in the lateral, longitudinal, and vertical directions were 3.6 mm (95% CI, 2.6-4.6 mm), 4.1 mm (95% CI, 3.2-5.0 mm), and 1.0 mm (95% CI, 0.6-1.3 mm), respectively. Without corrected positioning, the dose to 0.1 cc of the spinal cord increased by 9.4 Gy, and the doses to 95% of clinical target volumes 1 and 2 were reduced by 4 Gy and 4.8 Gy, respectively. Conclusions: Megavoltage CBCT provides a new alternative image-guided radiotherapy approach for treatment of paraspinous tumors in the presence of orthopedic hardware by providing 3D anatomic information in the treatment position, with clear imaging of metallic objects and without compromising soft-tissue information.
Quantitative evaluation of statistical errors in small-angle X-ray scattering measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sedlak, Steffen M.; Bruetzel, Linda K.; Lipfert, Jan
A new model is proposed for the measurement errors incurred in typical small-angle X-ray scattering (SAXS) experiments, which takes into account the setup geometry and physics of the measurement process. The model accurately captures the experimentally determined errors from a large range of synchrotron and in-house anode-based measurements. Its most general formulation gives, for the variance of the buffer-subtracted SAXS intensity, σ²(q) = [I(q) + const.]/(kq), where I(q) is the scattering intensity as a function of the momentum transfer q; k and const. are fitting parameters that are characteristic of the experimental setup. The model gives a concrete procedure for calculating realistic measurement errors for simulated SAXS profiles. In addition, the results provide guidelines for optimizing SAXS measurements, which are in line with established procedures for SAXS experiments, and enable a quantitative evaluation of measurement errors.
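The variance model above is simple enough to apply directly. The following sketch is illustrative only: the toy intensity profile and the values of k and const. are made-up assumptions, not fitted parameters from the paper. It evaluates σ²(q) = [I(q) + const.]/(kq) and uses it to simulate one noisy measurement of a SAXS profile:

```python
import numpy as np

def saxs_intensity_variance(q, intensity, k, const):
    """Variance model for the buffer-subtracted SAXS intensity:
    sigma^2(q) = (I(q) + const) / (k * q)."""
    q = np.asarray(q, dtype=float)
    intensity = np.asarray(intensity, dtype=float)
    return (intensity + const) / (k * q)

# Toy Guinier-like profile with illustrative setup parameters.
q = np.linspace(0.01, 0.5, 50)                       # momentum transfer
intensity = 100.0 * np.exp(-(30.0 * q) ** 2 / 3.0)   # I(q), arbitrary units
sigma = np.sqrt(saxs_intensity_variance(q, intensity, k=5.0e4, const=20.0))

# Simulate one noisy realization of the profile.
rng = np.random.default_rng(0)
noisy = intensity + rng.normal(0.0, sigma)
```

Note that the q-dependence makes the variance grow at small q for a flat profile, which is the qualitative behavior the model was designed to capture.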
Beyond hypercorrection: remembering corrective feedback for low-confidence errors.
Griffiths, Lauren; Higham, Philip A
2018-02-01
Correcting errors based on corrective feedback is essential to successful learning. Previous studies have found that corrections to high-confidence errors are better remembered than low-confidence errors (the hypercorrection effect). The aim of this study was to investigate whether corrections to low-confidence errors can also be successfully retained in some cases. Participants completed an initial multiple-choice test consisting of control, trick and easy general-knowledge questions, rated their confidence after answering each question, and then received immediate corrective feedback. After a short delay, they were given a cued-recall test consisting of the same questions. In two experiments, we found high-confidence errors to control questions were better corrected on the second test compared to low-confidence errors - the typical hypercorrection effect. However, low-confidence errors to trick questions were just as likely to be corrected as high-confidence errors. Most surprisingly, we found that memory for the feedback and original responses, not confidence or surprise, were significant predictors of error correction. We conclude that for some types of material, there is an effortful process of elaboration and problem solving prior to making low-confidence errors that facilitates memory of corrective feedback.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tanabe, S; Utsunomiya, S; Abe, E
Purpose: To assess the accuracy of fiducial marker-based setup using ExacTrac (ExT-based setup) compared with soft tissue-based setup using cone-beam CT (CBCT-based setup) for patients with prostate cancer receiving intensity-modulated radiation therapy (IMRT), in order to investigate whether ExT-based setup can be an alternative to CBCT-based setup. Methods: The setup accuracy was analyzed prospectively for 7 prostate cancer patients with three implanted fiducial markers who received IMRT. All patients were treated after CBCT-based setup was performed, and the corresponding shifts were recorded. ExacTrac images were obtained before and after CBCT-based setup. The fiducial marker-based shifts were calculated from those two images and recorded on the assumption that the setup correction was carried out by fiducial marker-based automatic correction. The mean and standard deviation of the absolute differences, and the correlation between the CBCT and ExT shifts, were estimated. Results: A total of 178 image datasets were analyzed. Of these, 133 (75%) showed differences between the CBCT and ExT shifts smaller than 3 mm in all dimensions. Mean differences in the anterior-posterior (AP), superior-inferior (SI), and left-right (LR) dimensions were 1.8 ± 1.9 mm, 0.7 ± 1.9 mm, and 0.6 ± 0.8 mm, respectively. The percentages of shift agreements within ±3 mm were 76% for AP, 90% for SI, and 100% for LR. The Pearson correlation coefficients for the CBCT and ExT shifts were 0.80 for AP, 0.80 for SI, and 0.65 for LR. Conclusion: This work showed that the accuracy of ExT-based setup was correlated with that of CBCT-based setup, implying that ExT-based setup can potentially serve as an alternative to CBCT-based setup. Further work will specify the conditions under which ExT-based setup can provide accuracy comparable to CBCT-based setup.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balik, S; Weiss, E; Sleeman, W
Purpose: To evaluate the potential impact of several setup error correction strategies on a proposed image-guided adaptive radiotherapy strategy for locally advanced lung cancer. Methods: Daily 4D cone-beam CT (CBCT) and weekly 4D fan-beam CT images were acquired from 9 lung cancer patients undergoing concurrent chemoradiation therapy. The initial planning CT was deformably registered to the daily CBCT images to generate synthetic treatment courses. An adaptive radiation therapy course was simulated using the weekly CT images, with replanning twice and a hypofractionated, simultaneous integrated boost delivering a total dose of 66 Gy to the original PTV and either 66 Gy (no boost) or 82 Gy (boost) to the boost PTV (ITV + 3 mm) in 33 fractions with IMRT or VMAT. Lymph nodes (LN) were not boosted (prescribed 66 Gy in both plans). Synthetic images were rigidly registered to the corresponding planning CT using either bony anatomy (BN) or tumor and carina (TC); dose from the adaptive replans (PLAN) was computed on these images and deformably accumulated back to the original planning CT. Cumulative D98% of the CTV of the primary tumor (PT; ITV for 82 Gy) and LN, and normal tissue dose changes, were analyzed. Results: Two patients were removed from the study due to large registration errors. For the remaining 7 patients, D98% for CTV-PT (ITV-PT for 82 Gy) and CTV-LN was within 1 Gy of PLAN for both the 66 Gy and 82 Gy plans with both setup techniques. Overall, TC-based setup provided better results, especially for LN coverage (p = 0.1 for the 66 Gy plan and p = 0.2 for the 82 Gy plan, comparing BN and TC), though the differences were not significant. Normal tissue dose constraints were violated for some patients when a constraint was only barely achieved in PLAN. Conclusion: The hypofractionated adaptive strategy appears to be deliverable with soft tissue alignment for the evaluated margins and planning parameters. Research was supported by NIH P01CA116602.
InSAR Unwrapping Error Correction Based on Quasi-Accurate Detection of Gross Errors (QUAD)
NASA Astrophysics Data System (ADS)
Kang, Y.; Zhao, C. Y.; Zhang, Q.; Yang, C. S.
2018-04-01
Unwrapping error is a common error in InSAR processing that can seriously degrade the accuracy of monitoring results. Building on quasi-accurate detection (QUAD), a gross error correction method, this paper establishes a method for the automatic correction of unwrapping errors. The method identifies and corrects unwrapping errors by establishing a functional model between the true errors and the interferograms. The basic principle and processing steps are presented. The method is then compared with the L1-norm method on simulated data. Results show that both methods can effectively suppress unwrapping errors when the proportion of unwrapping errors is low, and that the two methods complement each other when the proportion is relatively high. Finally, real SAR data are used to test the phase unwrapping error correction. Results show that the new method can successfully correct phase unwrapping errors in practical applications.
Efficient gradient calibration based on diffusion MRI.
Teh, Irvin; Maguire, Mahon L; Schneider, Jürgen E
2017-01-01
To propose a method for calibrating gradient systems and correcting gradient nonlinearities based on diffusion MRI measurements. The gradient scalings in x, y, and z were first offset by up to 5% from precalibrated values to simulate a poorly calibrated system. Diffusion MRI data were acquired in a phantom filled with cyclooctane, and corrections for gradient scaling errors and nonlinearity were determined. The calibration was assessed with diffusion tensor imaging and independently validated with high-resolution anatomical MRI of a second structured phantom. The errors in apparent diffusion coefficients along orthogonal axes ranged from -9.2% ± 0.4% to +8.8% ± 0.7% before calibration and from -0.5% ± 0.4% to +0.8% ± 0.3% after calibration. Concurrently, fractional anisotropy decreased from 0.14 ± 0.03 to 0.03 ± 0.01. Errors in geometric measurements in x, y, and z ranged from -5.5% to +4.5% precalibration and were likewise reduced to -0.97% to +0.23% postcalibration. Image distortions from gradient nonlinearity were markedly reduced. Periodic gradient calibration is an integral part of quality assurance in MRI. The proposed approach is both accurate and efficient, can be set up with readily available materials, and improves accuracy in both anatomical and diffusion MRI to within ±1%. Magn Reson Med 77:170-179, 2017. © 2016 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of the International Society for Magnetic Resonance in Medicine.
Analysis of quantum error correction with symmetric hypergraph states
NASA Astrophysics Data System (ADS)
Wagner, T.; Kampermann, H.; Bruß, D.
2018-03-01
Graph states have been used to construct quantum error correction codes for independent errors. Hypergraph states generalize graph states, and symmetric hypergraph states have been shown to allow for the correction of correlated errors. In this paper, it is shown that symmetric hypergraph states are not useful for the correction of independent errors, at least for up to 30 qubits. Furthermore, error correction for error models with protected qubits is explored. A class of known graph codes for this scenario is generalized to hypergraph codes.
Paediatric Refractive Errors in an Eye Clinic in Osogbo, Nigeria.
Michaeline, Isawumi; Sheriff, Agboola; Bimbo, Ayegoro
2016-03-01
Paediatric ophthalmology is an emerging subspecialty in Nigeria, and as such there is a paucity of data on refractive errors in the country. This study set out to determine the pattern of refractive errors in children attending an eye clinic in South West Nigeria. A descriptive study of 180 consecutive subjects seen over a 2-year period. Presenting complaints, presenting visual acuity (PVA), age, and sex were recorded. Clinical examination of the anterior and posterior segments of the eyes, extraocular muscle assessment, and refraction were done. The types of refractive errors and their grades were determined, and corrected VA was obtained. Data were analysed using descriptive statistics in proportions and chi-square tests, with p < 0.05 considered significant. The age of subjects ranged from 3 to 16 years (mean = 11.7, SD = 0.51), with males making up 33.9%. The commonest presenting complaint was blurring of distant vision (40%) and the commonest presenting visual acuity was 6/9 (33.9%); normal vision constituted >75.0%, visual impairment 20%, and low vision 23.3%. Low-grade spherical and cylindrical errors occurred most frequently (35.6% and 59.9%, respectively). Regular astigmatism was significantly more common (p < 0.001). The commonest diagnosis was simple myopic astigmatism (41.1%). Four cases of strabismus were seen. Simple spherical and cylindrical errors were the commonest types of refractive errors seen. Visual impairment and low vision occurred and could be a cause of absenteeism from school. A low-cost spectacle production or dispensing unit and health education are advocated for the prevention of visual impairment in a hospital set-up.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Qinghui; Chan, Maria F.; Burman, Chandra
2013-12-15
Purpose: Setting a proper margin is crucial not only for delivering the required radiation dose to a target volume, but also for reducing unnecessary radiation to the adjacent organs at risk. This study investigated the independent one-dimensional symmetric and asymmetric margins between the clinical target volume (CTV) and the planning target volume (PTV) for linac-based single-fraction frameless stereotactic radiosurgery (SRS). Methods: The authors assumed a Dirac delta function for the systematic error of a specific machine and a Gaussian function for the residual setup errors. Margin formulas were then derived in detail to arrive at a suitable CTV-to-PTV margin for single-fraction frameless SRS. Such a margin ensured that the CTV would receive the prescribed dose in 95% of the patients. To validate the margin formalism, the authors retrospectively analyzed nine patients who were previously treated with noncoplanar conformal beams. Cone-beam computed tomography (CBCT) was used in the patient setup, and the isocenter shifts between the CBCT and linac were measured on a Varian Trilogy linear accelerator for three months. For each plan, the authors shifted the isocenter of the plan by ±3 mm in each direction simultaneously to simulate the worst setup scenario. Subsequently, the asymptotic behavior of the CTV V80% for each patient was studied as the setup error approached the CTV-PTV margin. Results: The authors found that the proper margin for single-fraction frameless SRS cases with brain cancer was about 3 mm for the machine investigated in this study. The isocenter shifts between the CBCT and the linac remained almost constant over a period of three months for this specific machine, confirming the assumption that the machine's systematic error distribution can be approximated as a delta function. This definition is especially relevant to a single-fraction treatment.
The prescribed dose coverage for all the patients investigated was 96.1% ± 5.5% with an extreme 3-mm setup error in all three directions simultaneously. The effect of the setup error on dose coverage was found to be tumor-location dependent: it mostly affected tumors located in the posterior part of the brain, resulting in a minimum coverage of approximately 72%, which was entirely due to the unique geometry of the posterior head. Conclusions: Margin expansion formulas were derived for single-fraction frameless SRS such that the CTV would receive the prescribed dose in 95% of the patients treated for brain cancer. The margins defined in this study are machine-specific and account for nonzero mean systematic error. The margin for single-fraction SRS for a group of machines was also derived in this paper.
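The machine-specific margin formulas derived in the paper are not reproduced in the abstract. As a hedged illustration of the general idea of turning population setup statistics into a margin, the sketch below uses the classical population-based recipe M = 2.5Σ + 0.7σ (van Herk), where Σ is the standard deviation of the per-patient mean setup errors (systematic) and σ is the root-mean-square of the per-patient standard deviations (random). The patient data are hypothetical, and this is not the machine-specific formula derived in the paper:

```python
import numpy as np

def population_margin(setup_errors_mm):
    """van Herk-style CTV-to-PTV margin M = 2.5*Sigma + 0.7*sigma.

    setup_errors_mm: list of per-patient 1D arrays of daily setup
    errors (mm) along one axis. Sigma is the SD of the per-patient
    means (systematic component); sigma is the RMS of the per-patient
    SDs (random component).
    """
    means = np.array([np.mean(e) for e in setup_errors_mm])
    sds = np.array([np.std(e, ddof=1) for e in setup_errors_mm])
    big_sigma = np.std(means, ddof=1)          # systematic spread
    small_sigma = np.sqrt(np.mean(sds ** 2))   # pooled random spread
    return 2.5 * big_sigma + 0.7 * small_sigma

# Hypothetical daily setup errors (mm) for three patients.
errors = [np.array([1.0, 2.0, 1.5, 0.5]),
          np.array([-0.5, 0.0, -1.0, -0.5]),
          np.array([0.5, 1.0, 0.0, 0.5])]
margin = population_margin(errors)
```

In practice the recipe is applied per axis, which is why off-line protocols that shrink the systematic component (as in the NAL and fixed-couch protocols discussed above) shrink the margin disproportionately.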
Digital computer technique for setup and checkout of an analog computer
NASA Technical Reports Server (NTRS)
Ambaruch, R.
1968-01-01
A computer program technique, called Analog Computer Check-Out Routine Digitally (ACCORD), generates complete setup and checkout data for an analog computer. In addition, the correctness of the analog program implementation is validated.
Construction of an unmanned aerial vehicle remote sensing system for crop monitoring
NASA Astrophysics Data System (ADS)
Jeong, Seungtaek; Ko, Jonghan; Kim, Mijeong; Kim, Jongkwon
2016-04-01
We constructed a lightweight unmanned aerial vehicle (UAV) remote sensing system and determined the ideal method for equipment setup, image acquisition, and image processing. Fields of rice paddy (Oryza sativa cv. Unkwang) grown under three different nitrogen (N) treatments of 0, 50, or 115 kg/ha were monitored at Chonnam National University, Gwangju, Republic of Korea, in 2013. A multispectral camera was used to acquire UAV images from the study site. Atmospheric correction of these images was completed using the empirical line method, and three-point (black, gray, and white) calibration boards were used as pseudo references. Evaluation of our corrected UAV-based remote sensing data revealed that correction efficiency and root mean square errors ranged from 0.77 to 0.95 and 0.01 to 0.05, respectively. The time series maps of simulated normalized difference vegetation index (NDVI) produced using the UAV images reproduced field variations of NDVI reasonably well, both within and between the different N treatments. We concluded that the UAV-based remote sensing technology utilized in this study is potentially an easy and simple way to quantitatively obtain reliable two-dimensional remote sensing information on crop growth.
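The empirical line method described above fits a linear digital-number-to-reflectance relation per band from the calibration boards. A minimal sketch, with hypothetical board readings and reflectances (not the study's values), followed by an NDVI computation on a corrected pixel:

```python
import numpy as np

def empirical_line_fit(dn, reflectance):
    """Fit gain/offset so that reflectance ~= gain * DN + offset,
    using calibration targets of known reflectance (here, the
    black/gray/white boards used as pseudo references)."""
    gain, offset = np.polyfit(dn, reflectance, 1)
    return gain, offset

def ndvi(nir, red):
    """Normalized difference vegetation index."""
    return (nir - red) / (nir + red)

# Hypothetical board readings (digital numbers) and known reflectances.
red_dn = np.array([5.0, 60.0, 120.0])    # black, gray, white boards
nir_dn = np.array([6.0, 65.0, 130.0])
known_refl = np.array([0.03, 0.30, 0.60])

g_red, o_red = empirical_line_fit(red_dn, known_refl)
g_nir, o_nir = empirical_line_fit(nir_dn, known_refl)

# Correct one vegetation pixel (DN values assumed) and compute NDVI.
pixel_red = g_red * 20.0 + o_red
pixel_nir = g_nir * 100.0 + o_nir
value = ndvi(pixel_nir, pixel_red)
```

Healthy vegetation reflects strongly in the near-infrared and weakly in the red, so the corrected pixel yields a clearly positive NDVI.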
Zhu, Jian; Bai, Tong; Gu, Jiabing; Sun, Ziwen; Wei, Yumei; Li, Baosheng; Yin, Yong
2018-04-27
To evaluate the effect of pretreatment megavoltage computed tomography (MVCT) scan methodology on setup verification and adaptive dose calculation in helical TomoTherapy. Anthropomorphic heterogeneous chest and pelvic phantoms were planned with virtual targets on the TomoTherapy Physicist Station and scanned with the TomoTherapy megavoltage image-guided radiotherapy (IGRT) system using six groups of options: three acquisition pitches (APs) of 'fine', 'normal', and 'coarse', each combined with two corresponding reconstruction intervals (RIs). To mimic patient setup variations, each phantom was manually shifted 5 mm in each of three orthogonal directions. The effect of the MVCT scan options was analyzed in terms of image quality (CT number and noise), adaptive dose calculation deviations, and positional correction variations. MVCT scanning time with the 'fine' pitch was approximately twice that of 'normal' and more than three times that of 'coarse', and was unaffected by the choice of RI. MVCT with different APs delivered almost identical CT numbers and image noise in seven selected regions of various densities. DVH curves from adaptive dose calculations on serial MVCT images acquired with different pitches overlapped, and there were no significant differences in the p values of intercept and slope for the emulated spinal cord (p = 0.761 and 0.277), heart (p = 0.984 and 0.978), lungs (p = 0.992 and 0.980), soft tissue (p = 0.319 and 0.951), and bony structures (p = 0.960 and 0.929) between the finest and coarsest MVCT series. Furthermore, gamma index analysis showed that, compared with the dose distribution calculated on the 'fine' MVCT, only 0.2% or 1.1% of the analyzed points on the 'normal' or 'coarse' MVCT, respectively, failed the defined gamma criterion. On the chest phantom, all registration errors larger than 1 mm appeared along the superior-inferior axis and could not be avoided even with the smallest AP and RI.
On the pelvic phantom, craniocaudal errors were much smaller than on the chest phantom; however, the 'coarse' AP produced larger registration errors, which were reduced from 2.90 mm to 0.22 mm by the 'full image' registration technique. A 'coarse' AP with a 6 mm RI is recommended in adaptive radiotherapy (ART) planning to provide a craniocaudally longer and faster MVCT scan, with the 'full image' registration technique used to avoid large residual errors. Considering the trade-off between IGRT and ART, a 'normal' AP with a 2 mm RI is highly recommended in daily practice.
Correcting for the effects of pupil discontinuities with the ACAD method
NASA Astrophysics Data System (ADS)
Mazoyer, Johan; Pueyo, Laurent; N'Diaye, Mamadou; Mawet, Dimitri; Soummer, Rémi; Norman, Colin
2016-07-01
The current generation of ground-based coronagraphic instruments uses deformable mirrors to correct for phase errors and to improve contrast levels at small angular separations. Improving on these techniques, several space- and ground-based instruments are currently being developed that use two deformable mirrors to correct for both phase and amplitude errors. However, as wavefront control techniques improve, more complex telescope pupil geometries (support structures, segmentation) will soon become a limiting factor for these next-generation coronagraphic instruments. The technique presented in this proceeding, the Active Correction of Aperture Discontinuities method, takes advantage of the fact that most future coronagraphic instruments will include two deformable mirrors, and finds the mirror shapes and actuator movements that correct for the effects introduced by these complex pupil geometries. For any coronagraph previously designed for continuous apertures, this technique allows similar contrast performance to be obtained with a complex aperture (with segmentation and secondary mirror support structures), with high throughput and flexibility to adapt to changing pupil geometry (e.g., in case of segment failure or maintenance of the segments). We here present the results of the parametric analysis performed on the WFIRST pupil, for which we obtained high contrast levels with several deformable mirror setups (size, separation between them), coronagraphs (vortex charge 2, vortex charge 4, APLC), and spectral bandwidths. However, because contrast levels and separation are not the only metrics that maximize the scientific return of an instrument, we also included in this study the influence of these deformable mirror shapes on the throughput of the instrument and the sensitivity to pointing jitter. Finally, we present results obtained on another potential space-based segmented-aperture telescope.
The main result of this proceeding is that we now obtain performance comparable to that of the coronagraphs previously designed for WFIRST. First results from the parametric analysis strongly suggest that the two-deformable-mirror setup (mirror size and distance between the mirrors) has an important impact on the contrast and throughput performance of the final instrument.
A Noninvasive Body Setup Method for Radiotherapy by Using a Multimodal Image Fusion Technique
Zhang, Jie; Chen, Yunxia; Wang, Chenchen; Chu, Kaiyue; Jin, Jianhua; Huang, Xiaolin; Guan, Yue; Li, Weifeng
2017-01-01
Purpose: To minimize the mismatch error between the patient surface and the immobilization system during tumor localization by means of a noninvasive patient setup method. Materials and Methods: The method, based on point set registration, proposes a shift for patient positioning by integrating information from computed tomography scans and optical surface landmarks. The evaluation of the method covered three areas: (1) validation on a phantom by estimating 100 known mismatch errors between the patient surface and the immobilization system; (2) five patients with pelvic tumors, for whom the tumor localization errors of the method were measured as the difference between the shift proposed by cone-beam computed tomography and that proposed by our method; and (3) comparison of the setup data collected in the patient evaluation with the published performance data of two similar systems. Results: The phantom verification showed that the method was capable of estimating the mismatch error between the patient surface and the immobilization system with a precision of <0.22 mm. For the pelvic tumors, the method had average tumor localization errors of 1.303, 2.602, and 1.684 mm in the left-right, anterior-posterior, and superior-inferior directions, respectively. The performance comparison with the two similar systems suggested that the method had better positioning accuracy for pelvic tumor localization. Conclusion: By effectively decreasing an interfraction uncertainty source (the mismatch error between patient surface and immobilization system) in radiotherapy, the method can improve patient positioning precision for pelvic tumors. PMID:29333959
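The abstract does not specify which point set registration algorithm is used. As an illustrative sketch of the general approach, the classical least-squares rigid alignment (Kabsch algorithm, via SVD) recovers the rotation and translation, and hence a candidate couch shift, from matched landmark pairs. The landmark coordinates below are hypothetical:

```python
import numpy as np

def rigid_register(source, target):
    """Least-squares rigid alignment (Kabsch algorithm): find R, t
    minimizing ||R @ p + t - q|| over matched point pairs.
    source, target: (N, 3) arrays of corresponding points."""
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])                  # guard against reflections
    R = Vt.T @ D @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Hypothetical landmarks: planning-CT surface points vs. the same
# points observed optically after a pure 5 mm anterior offset.
ct_points = np.array([[0.0, 0.0, 0.0],
                      [10.0, 0.0, 0.0],
                      [0.0, 10.0, 0.0],
                      [0.0, 0.0, 10.0]])
observed = ct_points + np.array([0.0, 5.0, 0.0])
R, t = rigid_register(ct_points, observed)   # t is the proposed shift
```

For a pure translation the recovered rotation is the identity and the translation equals the introduced offset, which is the quantity a setup protocol would apply to the couch.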
Accuracy Improvement of Multi-Axis Systems Based on Laser Correction of Volumetric Geometric Errors
NASA Astrophysics Data System (ADS)
Teleshevsky, V. I.; Sokolov, V. A.; Pimushkin, Ya I.
2018-04-01
The article describes a method for correcting volumetric geometric errors in CNC-controlled multi-axis systems (machine tools, CMMs, etc.). Kalman's concept of "Control and Observation" is used: a versatile multi-function laser interferometer acts as the observer to measure the machine's error functions. A systematic error map of the machine's workspace is produced from these measurements, and the error map yields an error correction strategy. The article proposes a new method of forming this strategy, based on the error distribution within the machine's workspace and a CNC program postprocessor. The postprocessor provides minimal error values within the maximal workspace zone. The results are confirmed by error correction of precision CNC machine tools.
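A minimal sketch of the error-map idea: interpolate the measured error at a commanded position and subtract it in the postprocessor. The grid, error values, and unit conversion below are illustrative assumptions, not the machine data from the paper, and the map is reduced to one error component over a 2D slice of the workspace for brevity:

```python
import numpy as np

def bilinear(grid, xs, ys, x, y):
    """Bilinearly interpolate a 2D error map at (x, y)."""
    i = np.clip(np.searchsorted(xs, x) - 1, 0, len(xs) - 2)
    j = np.clip(np.searchsorted(ys, y) - 1, 0, len(ys) - 2)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return ((1 - tx) * (1 - ty) * grid[i, j] +
            tx * (1 - ty) * grid[i + 1, j] +
            (1 - tx) * ty * grid[i, j + 1] +
            tx * ty * grid[i + 1, j + 1])

# Hypothetical measured x-axis positioning error (um) on a coarse
# workspace grid, e.g., from laser interferometer runs.
xs = np.array([0.0, 100.0, 200.0])   # mm
ys = np.array([0.0, 100.0, 200.0])   # mm
err_x = np.array([[0.0, 2.0, 4.0],
                  [1.0, 3.0, 5.0],
                  [2.0, 4.0, 6.0]])  # err_x[i, j] at (xs[i], ys[j])

def corrected_x_command(x, y):
    """Postprocessor-style correction: command the axis to the
    target minus the predicted error at that point (um -> mm)."""
    return x - bilinear(err_x, xs, ys, x, y) * 1e-3

cmd = corrected_x_command(50.0, 50.0)
```

A real postprocessor would correct all error components (and angular terms) in three dimensions, but the pre-compensation principle is the same.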
New double-byte error-correcting codes for memory systems
NASA Technical Reports Server (NTRS)
Feng, Gui-Liang; Wu, Xinen; Rao, T. R. N.
1996-01-01
Error-correcting or error-detecting codes have been used in the computer industry to increase reliability, reduce service costs, and maintain data integrity. The single-byte error-correcting and double-byte error-detecting (SbEC-DbED) codes have been successfully used in computer memory subsystems. There are many methods to construct double-byte error-correcting (DBEC) codes. In the present paper we construct a class of double-byte error-correcting codes, which are more efficient than those known to be optimum, and a decoding procedure for our codes is also considered.
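The paper's double-byte construction is not given in the abstract. As a hedged illustration of the syndrome decoding that underlies such memory codes, here is the classical Hamming(7,4) single-error-correcting code: a nonzero syndrome equals the parity-check column of the flipped bit, which identifies and corrects it.

```python
import numpy as np

# Hamming(7,4) in systematic form: G = [I | P], H = [P^T | I] over GF(2).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(msg4):
    """Encode a 4-bit message into a 7-bit codeword."""
    return (np.array(msg4) @ G) % 2

def correct(word7):
    """Compute the syndrome; if nonzero, flip the unique bit whose
    H-column matches it (single-error correction)."""
    word = np.array(word7).copy()
    syndrome = (H @ word) % 2
    if syndrome.any():
        for i in range(7):
            if np.array_equal(H[:, i], syndrome):
                word[i] ^= 1
                break
    return word
```

Byte-oriented codes such as SbEC-DbED generalize this idea from single bits to whole symbols over a larger field, but the encode/syndrome/correct pipeline is the same.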
Up and Down Quark Masses and Corrections to Dashen's Theorem from Lattice QCD and Quenched QED.
Fodor, Z; Hoelbling, C; Krieg, S; Lellouch, L; Lippert, Th; Portelli, A; Sastre, A; Szabo, K K; Varnhorst, L
2016-08-19
In a previous Letter [Borsanyi et al., Phys. Rev. Lett. 111, 252001 (2013)] we determined the isospin mass splittings of the baryon octet from a lattice calculation based on N_{f}=2+1 QCD simulations to which QED effects have been added in a partially quenched setup. Using the same data we determine here the corrections to Dashen's theorem and the individual up and down quark masses. Our ensembles include 5 lattice spacings down to 0.054 fm, lattice sizes up to 6 fm, and average up-down quark masses all the way down to their physical value. For the parameter which quantifies violations to Dashen's theorem, we obtain ϵ=0.73(2)(5)(17), where the first error is statistical, the second is systematic, and the third is an estimate of the QED quenching error. For the light quark masses we obtain, m_{u}=2.27(6)(5)(4) and m_{d}=4.67(6)(5)(4) MeV in the modified minimal subtraction scheme at 2 GeV and the isospin breaking ratios m_{u}/m_{d}=0.485(11)(8)(14), R=38.2(1.1)(0.8)(1.4), and Q=23.4(0.4)(0.3)(0.4). Our results exclude the m_{u}=0 solution to the strong CP problem by more than 24 standard deviations.
Automated general temperature correction method for dielectric soil moisture sensors
NASA Astrophysics Data System (ADS)
Kapilaratne, R. G. C. Jeewantinie; Lu, Minjiao
2017-08-01
An effective temperature correction method for dielectric sensors is important to ensure the accuracy of soil water content (SWC) measurements in local- to regional-scale soil moisture monitoring networks. These networks extensively use highly temperature-sensitive dielectric sensors because of their low cost, ease of use, and low power consumption. Yet there is no general temperature correction method for dielectric sensors; instead, sensor- or site-dependent correction algorithms are employed. Such methods become ineffective for soil moisture monitoring networks with different sensor setups and those that cover diverse climatic conditions and soil types. This study attempted to develop a general temperature correction method for dielectric sensors that can be used regardless of differences in sensor type, climatic conditions, and soil type, without requiring rainfall data. An automated general temperature correction method was developed by adapting previously developed temperature correction algorithms, based on time domain reflectometry (TDR) measurements, to ThetaProbe ML2X, Stevens Hydra Probe II, and Decagon Devices EC-TM sensor measurements. The removal of rainy-day effects from the SWC data was automated by incorporating a statistical inference technique into the temperature correction algorithms. The method was evaluated using 34 stations from the International Soil Moisture Monitoring Network and another nine stations from a local soil moisture monitoring network in Mongolia; together, these networks cover four major climates and six major soil types. Results indicated that the automated temperature correction algorithms developed in this study can successfully eliminate temperature effects from dielectric sensor measurements even without on-site rainfall data.
Furthermore, the actual daily average SWC was found to be altered by the temperature effects of the dielectric sensors, with an error comparable to the manufacturers' stated ±1% accuracy.
New decoding methods of interleaved burst error-correcting codes
NASA Astrophysics Data System (ADS)
Nakano, Y.; Kasahara, M.; Namekawa, T.
1983-04-01
A probabilistic method of single-burst error correction, using the syndrome correlation of the subcodes which constitute the interleaved code, is presented. This method makes it possible to realize a high burst-error-correcting capability with less decoding delay. By generalizing this method, a probabilistic method of multiple (m-fold) burst error correction is obtained. After estimating the burst error positions using the syndrome correlation of the subcodes, which are interleaved m-fold burst-error-detecting codes, this second method corrects erasure errors in each subcode and thereby m-fold burst errors. The performance of the two methods is analyzed via computer simulation, and their effectiveness is demonstrated.
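Why interleaving helps against bursts can be shown in a few lines: a block interleaver spreads a channel burst across the constituent subcodes, so each subcode sees at most one error, which a single-error-correcting subcode can then fix. The sketch below is illustrative of this mechanism only, not of the paper's syndrome-correlation decoder:

```python
def interleave(bits, depth):
    """Block interleaver: write rows of `depth` subcode symbols,
    transmit column by column."""
    rows = [bits[i:i + depth] for i in range(0, len(bits), depth)]
    return [rows[r][c] for c in range(depth) for r in range(len(rows))]

def deinterleave(bits, depth):
    """Invert `interleave` for a full block."""
    n_rows = len(bits) // depth
    cols = [bits[c * n_rows:(c + 1) * n_rows] for c in range(depth)]
    return [cols[c][r] for r in range(n_rows) for c in range(depth)]

# Four subcode words of length 3, interleaved to depth 3.
data = [1, 0, 1,  0, 1, 1,  1, 1, 0,  0, 0, 1]
tx = interleave(data, 3)

# A burst hits 3 consecutive channel symbols...
rx = tx.copy()
for k in range(3):
    rx[4 + k] ^= 1

# ...but after deinterleaving, each subcode word carries at most
# one error, within the reach of a single-error-correcting subcode.
rx_words = deinterleave(rx, 3)
```

The same geometry underlies the m-fold generalization: a depth-n interleaver of single-burst-correcting subcodes handles bursts up to n times longer.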
Quantum error-correction failure distributions: Comparison of coherent and stochastic error models
NASA Astrophysics Data System (ADS)
Barnes, Jeff P.; Trout, Colin J.; Lucarelli, Dennis; Clader, B. D.
2017-06-01
We compare failure distributions of quantum error correction circuits for stochastic errors and coherent errors. We utilize a fully coherent simulation of a fault-tolerant quantum error correcting circuit for distance d = 3 Steane and surface codes. We find that the output distributions are markedly different for the two error models, showing that no simple mapping between the two error models exists. Coherent errors create very broad and heavy-tailed failure distributions. This suggests that they are susceptible to outlier events and that mean statistics, such as pseudothreshold estimates, may not provide the key figure of merit. This provides further statistical insight into why coherent errors can be so harmful for quantum error correction. These output probability distributions may also provide a useful metric that can be utilized when optimizing quantum error correcting codes and decoding procedures for purely coherent errors.
The Relevance of Second Language Acquisition Theory to the Written Error Correction Debate
ERIC Educational Resources Information Center
Polio, Charlene
2012-01-01
The controversies surrounding written error correction can be traced to Truscott (1996) in his polemic against written error correction. He claimed that empirical studies showed that error correction was ineffective and that this was to be expected given "the nature of the correction process" and "the nature of language learning" (p. 328, emphasis…
Automatic image registration performance for two different CBCT systems; variation with imaging dose
NASA Astrophysics Data System (ADS)
Barber, J.; Sykes, J. R.; Holloway, L.; Thwaites, D. I.
2014-03-01
The performance of an automatic image registration algorithm was compared on image sets collected with two commercial CBCT systems, and the relationship with imaging dose was explored. CBCT images of a CIRS Virtually Human Male Pelvis phantom (VHMP) were collected on Varian TrueBeam/OBI and Elekta Synergy/XVI linear accelerators, across a range of mAs settings. Each CBCT image was registered 100 times, with random initial offsets introduced. Image registration was performed using the grey value correlation ratio algorithm in the Elekta XVI software, to a mask of the prostate volume with 5 mm expansion. Residual registration errors were calculated after correcting for the initial introduced phantom set-up error. Registration performance with the OBI images was similar to that of XVI. There was a clear dependence on imaging dose for the XVI images, with residual errors increasing below 4 mGy. It was not possible to acquire images with doses lower than ~5 mGy with the OBI system, and no evidence of reduced performance was observed at this dose. Registration failures (maximum target registration error > 3.6 mm on the surface of a 30 mm sphere) occurred in 5% to 9% of registrations, except for the lowest dose XVI scan (31%). The uncertainty in automatic image registration with both OBI and XVI images was found to be adequate for clinical use within a normal range of acquisition settings.
Image Guidance in Radiation Therapy: Techniques and Applications
Kataria, Tejinder
2014-01-01
In modern day radiotherapy, the emphasis on reducing the volume exposed to high radiotherapy doses, improving treatment precision, and reducing radiation-related normal tissue toxicity has increased, and thus greater importance is given to accurate position verification and correction before delivering radiotherapy. At present, several techniques that accomplish these goals have been developed, though all of them have their limitations. There is no single method available that eliminates treatment-related uncertainties without considerably adding to the cost. However, delivering "high precision radiotherapy" without periodic image guidance would do more harm than treating large volumes to compensate for setup errors. In the present review, we discuss the concept of image guidance in radiotherapy, the current techniques available, and their expected benefits and pitfalls. PMID:25587445
Karlsson, Kristin; Lax, Ingmar; Lindbäck, Elias; Poludniowski, Gavin
2017-09-01
Geometrical uncertainties can result in a delivered dose to the tumor different from that estimated in the static treatment plan. The purpose of this project was to investigate the accuracy of the dose calculated to the clinical target volume (CTV) with the dose-shift approximation, in stereotactic body radiation therapy (SBRT) of lung tumors considering setup errors and breathing motion. The dose-shift method was compared with a beam-shift method with dose recalculation. Included were 10 patients (10 tumors) selected to represent a variety of SBRT-treated lung tumors in terms of tumor location, CTV volume, and tumor density. An in-house developed toolkit within a treatment planning system allowed the shift of either the dose matrix or a shift of the beam isocenter with dose recalculation, to simulate setup errors and breathing motion. Setup shifts of different magnitudes (up to 10 mm) and directions as well as breathing with different peak-to-peak amplitudes (up to 10:5:5 mm) were modeled. The resulting dose-volume histograms (DVHs) were recorded and dose statistics were extracted. Generally, both the dose-shift and beam-shift methods resulted in calculated doses lower than the static planned dose, although the minimum (D 98% ) dose exceeded the prescribed dose in all cases, for setup shifts up to 5 mm. The dose-shift method also generally underestimated the dose compared with the beam-shift method. For clinically realistic systematic displacements of less than 5 mm, the results demonstrated that in the minimum dose region within the CTV, the dose-shift method was accurate to 2% (root-mean-square error). Breathing motion only marginally degraded the dose distributions. Averaged over the patients and shift directions, the dose-shift approximation was determined to be accurate to approximately 2% (RMS) within the CTV, for clinically relevant geometrical uncertainties for SBRT of lung tumors.
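The dose-shift approximation itself is simple to state: the static dose matrix is translated rigidly by the setup displacement and the DVH statistics (such as the near-minimum dose D98%) are re-read from the shifted matrix. A minimal numpy sketch with a toy dose grid (integer-voxel shifts only; the study's in-house toolkit works inside a TPS with arbitrary shifts and full beam recalculation):

```python
import numpy as np

def dose_shift(dose, shift_voxels):
    # Dose-shift approximation: translate the planned dose matrix rigidly;
    # the anatomy and the beam fluence are assumed unchanged.
    return np.roll(dose, shift_voxels, axis=(0, 1, 2))

def d98(dose, ctv_mask):
    # Near-minimum CTV dose: the dose exceeded by 98% of the CTV volume.
    return np.percentile(dose[ctv_mask], 2)

# Toy example: dose peaked at the CTV center, falling off with radius.
z, y, x = np.mgrid[:40, :40, :40]
r = np.sqrt((z - 20) ** 2 + (y - 20) ** 2 + (x - 20) ** 2)
dose = np.clip(60.0 * (1 - r / 30.0), 0, None)
ctv = r < 5
static_d98 = d98(dose, ctv)
shifted_d98 = d98(dose_shift(dose, (3, 0, 0)), ctv)   # 3-voxel setup shift
```

Consistent with the abstract, shifting the dose matrix lowers the dose statistics read in the CTV relative to the static plan.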
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harding, R., E-mail: ruth.harding2@wales.nhs.uk; Trnková, P.; Lomax, A. J.
Purpose: Base of skull meningioma can be treated with both intensity modulated radiation therapy (IMRT) and spot scanned proton therapy (PT). One of the main benefits of PT is better sparing of organs at risk, but due to the physical and dosimetric characteristics of protons, spot scanned PT can be more sensitive to the uncertainties encountered in the treatment process compared with photon treatment. Therefore, robustness analysis should be part of a comprehensive comparison between these two treatment methods in order to quantify and understand the sensitivity of the treatment techniques to uncertainties. The aim of this work was to benchmark a spot scanning treatment planning system for planning of base of skull meningioma and to compare the created plans and analyze their robustness to setup errors against the IMRT technique. Methods: Plans were produced for three base of skull meningioma cases: IMRT planned with a commercial TPS [Monaco (Elekta AB, Sweden)]; single field uniform dose (SFUD) spot scanning PT produced with an in-house TPS (PSI-plan); and SFUD spot scanning PT plan created with a commercial TPS [XiO (Elekta AB, Sweden)]. A tool for evaluating robustness to random setup errors was created and, for each plan, both a dosimetric evaluation and a robustness analysis to setup errors were performed. Results: It was possible to create clinically acceptable treatment plans for spot scanning proton therapy of meningioma with a commercially available TPS. However, since each treatment planning system uses different methods, this comparison showed different dosimetric results as well as different sensitivities to setup uncertainties. The results confirmed the necessity of an analysis tool for assessing plan robustness to provide a fair comparison of photon and proton plans. Conclusions: Robustness analysis is a critical part of plan evaluation when comparing IMRT plans with spot scanned proton therapy plans.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gupta, N; DiCostanzo, D; Fullenkamp, M
2015-06-15
Purpose: To determine appropriate couch tolerance values for modern radiotherapy linac R&V systems with indexed patient setup. Methods: Treatment table tolerance values have been the most difficult to lower, due to many factors including variations in patient positioning and differences in table tops between machines. We recently installed nine linacs with similar tables and started indexing every patient in our clinic. In this study we queried our R&V database and analyzed the deviation of couch position values from the values acquired at verification simulation for all patients treated with indexed positioning. Means and standard deviations of daily setup deviations were computed in the longitudinal, lateral and vertical directions for 343 patient plans. The mean, median and standard error of the standard deviations across the whole patient population, and for some disease sites, were computed to determine tolerance values. Results: The plot of our couch deviation values showed a Gaussian distribution, with some small deviations corresponding to setup uncertainties on non-imaging days and to SRS/SRT/SBRT patients, as well as some large deviations which were spot-checked and found to correspond to indexing errors that were overridden. Setting our tolerance values based on the median + 1 standard error resulted in tolerance values of 1 cm lateral and longitudinal, and 0.5 cm vertical, for all non-SRS/SRT/SBRT cases. Re-analyzing the data, we found that about 92% of the treated fractions would be within these tolerance values (ignoring the mis-indexed patients). We also analyzed data for disease-site-based subpopulations and found no difference in the tolerance values that needed to be used.
Conclusion: With the use of automation, auto-setup and other workflow efficiency tools being introduced into the radiotherapy workflow, it is essential to set table tolerances that allow safe treatments but flag setup errors that need to be reassessed before treatment.
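The tolerance recipe described above (median of the per-plan standard deviations of daily couch deviations, plus one standard error of those SDs across the population) reduces to a few lines; the data values below are made up for illustration:

```python
import statistics

def couch_tolerance(per_plan_sd):
    """Tolerance = median of per-plan SDs of daily couch deviations,
    plus one standard error of those SDs across the patient population."""
    median = statistics.median(per_plan_sd)
    std_err = statistics.stdev(per_plan_sd) / len(per_plan_sd) ** 0.5
    return median + std_err

# Hypothetical per-plan SDs (cm) of daily lateral couch deviations:
lateral_sds = [0.8, 0.9, 1.0, 1.1, 0.7, 0.95, 1.05, 0.85]
tolerance = couch_tolerance(lateral_sds)
```

In practice the result would be rounded to a value the R&V system accepts (the study settled on 1 cm lateral/longitudinal and 0.5 cm vertical).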
Correcting for sequencing error in maximum likelihood phylogeny inference.
Kuhner, Mary K; McGill, James
2014-11-04
Accurate phylogenies are critical to taxonomy as well as studies of speciation processes and other evolutionary patterns. Accurate branch lengths in phylogenies are critical for dating and rate measurements. Such accuracy may be jeopardized by unacknowledged sequencing error. We use simulated data to test a correction for DNA sequencing error in maximum likelihood phylogeny inference. Over a wide range of data polymorphism and true error rate, we found that correcting for sequencing error improves recovery of the branch lengths, even if the assumed error rate is up to twice the true error rate. Low error rates have little effect on recovery of the topology. When error is high, correction improves topological inference; however, when error is extremely high, using an assumed error rate greater than the true error rate leads to poor recovery of both topology and branch lengths. The error correction approach tested here was proposed in 2004 but has not been widely used, perhaps because researchers do not want to commit to an estimate of the error rate. This study shows that correction with an approximate error rate is generally preferable to ignoring the issue. Copyright © 2014 Kuhner and McGill.
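The correction tested here amounts to replacing the usual 0/1 tip likelihoods in the pruning algorithm with a per-base error model. A sketch of that one change (assuming, for illustration, an unbiased error model in which a miscall is equally likely to produce any of the other three bases):

```python
def tip_likelihoods(observed_base, error_rate):
    """Conditional likelihood of the observed base given each possible
    true base: a correct call with probability 1 - e, any specific wrong
    call with probability e/3. With e = 0 this reduces to the standard
    0/1 tip likelihoods of error-free maximum likelihood inference."""
    return {true: (1.0 - error_rate) if true == observed_base else error_rate / 3.0
            for true in "ACGT"}

clean = tip_likelihoods("A", 0.0)     # classic tip: certainty in the call
noisy = tip_likelihoods("A", 0.01)    # assumed 1% sequencing error rate
```

Everything upstream of the tips (transition probabilities, pruning, optimization) is unchanged, which is why committing to an approximate error rate is cheap relative to ignoring the issue.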
Survey of Radar Refraction Error Corrections
2016-11-01
RCC Document 266-16, Survey of Radar Refraction Error Corrections; prepared by the Electronic Trajectory Measurements Group, November 2016. Distribution A: approved for public release.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, J; Park, Y; Sharp, G
Purpose: To establish a method to evaluate the dosimetric impact of anatomic changes in head and neck patients during proton therapy by using scatter-corrected cone-beam CT (CBCT) images. Methods: The water equivalent path length (WEPL) was calculated to the distal edge of PTV contours by using tomographic images available for six head and neck patients who received photon therapy. The proton range variation was measured by calculating the difference between the distal WEPLs calculated with the planning CT and weekly treatment CBCT images. By performing an automatic rigid registration, six degrees-of-freedom (DOF) correction was made to the CBCT images to account for the patient setup uncertainty. For accurate WEPL calculations, an existing CBCT scatter correction algorithm, whose performance was already proven for phantom images, was calibrated for head and neck patient images. Specifically, two different image similarity measures, mutual information (MI) and mean square error (MSE), were tested for the deformable image registration (DIR) in the CBCT scatter correction algorithm. Results: The impact of weight loss was reflected in the distal WEPL differences, with the aid of the automatic rigid registration reducing the influence of patient setup uncertainty on the WEPL calculation results. The WEPL difference averaged over the distal area was 2.9 ± 2.9 (mm) across all fractions of six patients, and its maximum, mostly found at the last available fraction, was 6.2 ± 3.4 (mm). The MSE-based DIR successfully registered each treatment CBCT image to the planning CT image. On the other hand, the MI-based DIR deformed the skin voxels in the planning CT image to the immobilization mask in the treatment CBCT image, most of which was cropped out of the planning CT image. Conclusion: The dosimetric impact of anatomic changes was evaluated by calculating the distal WEPL difference with the existing scatter-correction algorithm appropriately calibrated.
Jihun Kim, Yang-Kyun Park, Gregory Sharp, and Brian Winey have received grant support from the NCI Federal Share of program income earned by Massachusetts General Hospital on C06 CA059267, Proton Therapy Research and Treatment Center.
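The driving quantity here, water-equivalent path length, is a line integral of relative stopping power (RSP) along the beam ray. A minimal sketch of the WEPL difference for a single ray (the RSP samples are hypothetical; the study traces rays through calibrated CT/CBCT volumes to the distal PTV edge):

```python
def wepl(rsp_samples, step_mm):
    """Water-equivalent path length: sum of relative-stopping-power
    samples along the ray, times the sampling step length."""
    return sum(rsp_samples) * step_mm

# Hypothetical 1 mm RSP samples along one ray to the PTV distal edge:
rsp_plan = [1.0] * 50 + [0.3] * 20   # planning CT: 50 mm tissue, 20 mm low-density
rsp_cbct = [1.0] * 46 + [0.3] * 24   # treatment CBCT: weight loss thinned the tissue
range_shift_mm = wepl(rsp_cbct, 1.0) - wepl(rsp_plan, 1.0)
```

A negative WEPL difference, as in this toy case, means less water-equivalent material ahead of the target, so the proton range extends beyond the planned distal edge.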
NASA Astrophysics Data System (ADS)
Cheremkhin, Pavel A.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.
2016-11-01
Applications of optical methods for encryption purposes have attracted the interest of researchers for decades. The most popular are coherent techniques such as double random phase encoding. Its main advantage is high security due to the transformation of the spectrum of the image to be encrypted into a white spectrum via the first random phase mask, which yields encrypted images with white spectra. Downsides are the necessity of a holographic registration scheme and the speckle noise that occurs under coherent illumination. Elimination of these disadvantages is possible via incoherent illumination. In this case phase registration no longer matters, which means there is no need for a holographic setup, and the speckle noise is gone. Recently, encryption of digital information in the form of binary images has become quite popular. Advantages of using a quick response (QR) code as a data container for optical encryption include: 1) any data represented as a QR code has a close-to-white (excluding zero spatial frequency) Fourier spectrum, which overlaps well with the encryption key spectrum; 2) the built-in algorithm for image scale and orientation correction simplifies decoding of decrypted QR codes; 3) the embedded error correction code allows successful decryption of information even in the case of partial corruption of the decrypted image. Optical encryption of digital data in the form of QR codes using spatially incoherent illumination was experimentally implemented. Two liquid crystal spatial light modulators were used in the experimental setup for QR code and encrypting kinoform imaging, respectively. Decryption was conducted digitally. Successful decryption of encrypted QR codes is demonstrated.
Dedicated magnetic resonance imaging in the radiotherapy clinic.
Karlsson, Mikael; Karlsson, Magnus G; Nyholm, Tufve; Amies, Christopher; Zackrisson, Björn
2009-06-01
To introduce a novel technology arrangement in an integrated environment and outline the logistics model needed to incorporate dedicated magnetic resonance (MR) imaging in the radiotherapy workflow. An initial attempt was made to analyze the value and feasibility of MR-only imaging compared to computed tomography (CT) imaging, testing the assumption that MR is a better choice for target and healthy tissue delineation in radiotherapy. A 1.5-T MR unit with a 70-cm bore size was installed close to a linear accelerator, and a special trolley was developed for transporting patients who were fixated in advance between the MR unit and the accelerator. New MR-based workflow procedures were developed and evaluated. MR-only treatment planning has been facilitated, thus avoiding all registration errors between CT and MR scans, but several new aspects of MR imaging must be considered. Electron density information must be obtained by other methods. Generation of digitally reconstructed radiographs (DRR) for x-ray setup verification is not straightforward, and reliable corrections of geometrical distortions must be applied. The feasibility of MR imaging virtual simulation has been demonstrated, but a key challenge to overcome is correct determination of the skeleton, which is often needed for the traditional approach of beam modeling. The trolley solution allows for a highly precise setup for soft tissue tumors without the invasive handling of radiopaque markers. The new logistics model with an integrated MR unit is efficient and will allow for improved tumor definition and geometrical precision without a significant loss of dosimetric accuracy. The most significant development needed is improved bone imaging.
Schedule for CT image guidance in treating prostate cancer with helical tomotherapy
Beldjoudi, G; Yartsev, S; Bauman, G; Battista, J; Van Dyk, J
2010-01-01
The aim of this study was to determine the effect of reducing the number of image guidance sessions and patient-specific target margins on the dose distribution in the treatment of prostate cancer with helical tomotherapy. 20 patients with prostate cancer who were treated with helical tomotherapy using daily megavoltage CT (MVCT) imaging before treatment served as the study population. The average geometric shifts applied for set-up corrections, as a result of co-registration of MVCT and planning kilovoltage CT studies over an increasing number of image guidance sessions, were determined. Simulation of the consequences of various imaging scenarios on the dose distribution was performed for two patients with different patterns of interfraction changes in anatomy. Our analysis of the daily set-up correction shifts for 20 prostate cancer patients suggests that the use of four fractions would result in a population average shift that was within 1 mm of the average obtained from the data accumulated over all daily MVCT sessions. Simulation of a scenario in which imaging sessions are performed at a reduced frequency and the planning target volume margin is adapted provided significantly better sparing of organs at risk, with acceptable reproducibility of dose delivery to the clinical target volume. Our results indicate that four MVCT sessions on helical tomotherapy are sufficient to provide information for the creation of personalised target margins and the establishment of the new reference position that accounts for the systematic error. This simplified approach reduces overall treatment session time and decreases the imaging dose to the patient. PMID:19505966
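The paper's central estimate, the mean of the daily set-up shifts over the first k imaging sessions as a proxy for the systematic error, is simple to sketch (the shift values below are made up; the study found k = 4 already within 1 mm of the all-sessions mean at the population level):

```python
def mean_shift(shifts, k=None):
    """Mean of the first k daily set-up shifts (all sessions if k is None).
    The running mean over the early fractions estimates the systematic
    error and can define a corrected reference position thereafter."""
    use = shifts if k is None else shifts[:k]
    return sum(use) / len(use)

# Hypothetical daily AP shifts (mm) for one patient over a full course:
shifts = [3.1, 2.2, 2.8, 2.5, 3.4, 2.0, 2.6, 3.0, 2.3, 2.9]
estimate_4 = mean_shift(shifts, 4)   # after four imaged fractions
all_course = mean_shift(shifts)      # reference: every session imaged
```

After fraction 4, imaging could then be reduced and the remaining fractions treated at the corrected reference position with a personalised margin for the residual random error.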
Attitude estimation from magnetometer and earth-albedo-corrected coarse sun sensor measurements
NASA Astrophysics Data System (ADS)
Appel, Pontus
2005-01-01
For full three-axis attitude determination the magnetic field vector and the Sun vector can be used. A Coarse Sun Sensor consisting of six solar cells, one placed on each of the six outer surfaces of the satellite, is used for Sun vector determination. This robust and low cost setup is sensitive to surrounding light sources, as it sees the whole sky. To compensate for the largest error source, the Earth, an albedo model is developed. The total albedo light vector has contributions from the part of the Earth's surface that is illuminated by the Sun and visible from the satellite. Depending on the reflectivity of the Earth's surface, the satellite's position, and the Sun's position, the albedo light changes. This cannot be calculated analytically, and hence a numerical model is developed. For on-board computer use, the Earth albedo model consisting of data tables is converted into polynomial functions in order to save memory space. For an absolute worst case the attitude determination error can be held below 2°. In a nominal case it is better than 1°.
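With an ideal cosine response and no albedo, the six-cell arrangement gives the body-frame Sun direction almost directly: each illuminated cell reads a current proportional to the cosine of the incidence angle, shadowed cells read zero, so opposite-face differences recover the vector components. A sketch of that baseline (ignoring the Earth-albedo correction that is the paper's subject; keys and values are illustrative):

```python
import math

def sun_vector(cell_currents):
    """cell_currents: currents of the six face-mounted cells, keyed
    '+x','-x','+y','-y','+z','-z'. Under an ideal cosine response the
    difference of opposite faces is proportional to each component of
    the body-frame Sun vector; normalize to get the unit direction."""
    v = [cell_currents["+x"] - cell_currents["-x"],
         cell_currents["+y"] - cell_currents["-y"],
         cell_currents["+z"] - cell_currents["-z"]]
    norm = math.sqrt(sum(c * c for c in v)) or 1.0
    return [c / norm for c in v]

# Sun in the body x-y plane, 45 degrees from +x:
c = math.cos(math.radians(45))
direction = sun_vector({"+x": c, "-x": 0.0, "+y": c, "-y": 0.0,
                        "+z": 0.0, "-z": 0.0})
```

Albedo light adds a spurious current to Earth-facing cells, biasing this estimate; that bias is exactly what the tabulated albedo model subtracts before the vector is formed.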
NASA Astrophysics Data System (ADS)
Ireland, Peter J.; Collins, Lance R.
2012-11-01
Turbulence-induced collision of inertial particles may contribute to the rapid onset of precipitation in warm cumulus clouds. The particle collision frequency is determined from two parameters: the radial distribution function g(r) and the mean inward radial relative velocity…
A Study of Vicon System Positioning Performance.
Merriaux, Pierre; Dupuis, Yohan; Boutteau, Rémi; Vasseur, Pascal; Savatier, Xavier
2017-07-07
Motion capture setups are used in numerous fields. Studies based on motion capture data can be found in biomechanical, sport and animal science. Clinical science studies include gait analysis as well as balance, posture and motor control. Robotic applications encompass object tracking. Everyday applications include entertainment and augmented reality. Still, few studies investigate the positioning performance of motion capture setups. In this paper, we study the positioning performance of one player in marker-based optoelectronic motion capture: the Vicon system. Our protocol includes evaluations of static and dynamic performance. Mean error as well as positioning variability are studied with calibrated ground-truth setups that are not based on other motion capture modalities. We introduce a new setup that enables direct estimation of the absolute positioning accuracy in dynamic experiments, contrary to state-of-the-art works that rely on inter-marker distances. The system performs well in static experiments, with a mean absolute error of 0.15 mm and a variability lower than 0.025 mm. Our dynamic experiments were carried out at speeds found in real applications. Our work suggests that the system error is less than 2 mm. We also found that marker size and Vicon sampling rate must be chosen carefully with respect to the speeds encountered in the application in order to reach optimal positioning performance, which can go down to 0.3 mm in our dynamic study.
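The static figures quoted (mean absolute error and variability against a calibrated ground truth) are simple error statistics over repeated measurements of a fixed marker, e.g. (the sample values below are made up for illustration):

```python
import statistics

def static_accuracy(measured_mm, truth_mm):
    """Mean absolute error and variability (sample SD of the signed
    errors) of repeated static position measurements against a
    calibrated ground-truth position."""
    errors = [m - truth_mm for m in measured_mm]
    mae = statistics.mean(abs(e) for e in errors)
    variability = statistics.stdev(errors)
    return mae, variability

# Hypothetical repeated measurements of a fixed marker at 100.00 mm:
samples = [100.12, 100.15, 100.17, 100.13, 100.16]
mae, var = static_accuracy(samples, 100.00)
```

Separating the two numbers matters: MAE captures the systematic offset of the capture volume calibration, while the SD captures measurement noise.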
Kodak, Tiffany; Campbell, Vincent; Bergmann, Samantha; LeBlanc, Brittany; Kurtz-Nelson, Eva; Cariveau, Tom; Haq, Shaji; Zemantic, Patricia; Mahon, Jacob
2016-09-01
Prior research shows that learners have idiosyncratic responses to error-correction procedures during instruction. Thus, assessments that identify error-correction strategies to include in instruction can aid practitioners in selecting individualized, efficacious, and efficient interventions. The current investigation conducted an assessment to compare 5 error-correction procedures that have been evaluated in the extant literature and are common in instructional practice for children with autism spectrum disorder (ASD). Results showed that the assessment identified efficacious and efficient error-correction procedures for all participants, and 1 procedure was efficient for 4 of the 5 participants. To examine the social validity of error-correction procedures, participants selected among efficacious and efficient interventions in a concurrent-chains assessment. We discuss the results in relation to prior research on error-correction procedures and current instructional practices for learners with ASD. © 2016 Society for the Experimental Analysis of Behavior.
Comparing the Effectiveness of Error-Correction Strategies in Discrete Trial Training
ERIC Educational Resources Information Center
Turan, Michelle K.; Moroz, Lianne; Croteau, Natalie Paquet
2012-01-01
Error-correction strategies are essential considerations for behavior analysts implementing discrete trial training with children with autism. The research literature, however, is still lacking in the number of studies that compare and evaluate error-correction procedures. The purpose of this study was to compare two error-correction strategies:…
Speed-constrained three-axes attitude control using kinematic steering
NASA Astrophysics Data System (ADS)
Schaub, Hanspeter; Piggott, Scott
2018-06-01
Spacecraft attitude control solutions typically are torque-level algorithms that simultaneously control both the attitude and angular velocity tracking errors. In contrast, robotic control solutions are kinematic steering commands where rates are treated as the control variable, and a servo-tracking control subsystem is present to achieve the desired control rates. In this paper, kinematic attitude steering controls are developed where an outer control loop establishes a desired angular response history to a tracking error, and an inner control loop tracks the commanded body angular rates. The overall stability relies on the separation principle of the inner and outer control loops, which must have sufficiently different response time scales. The benefit is that the outer steering law response can be readily shaped to a desired behavior, such as limiting the approach angular velocity when a large tracking error is corrected. A Modified Rodrigues Parameters implementation is presented that smoothly saturates the speed response. A robust nonlinear body rate servo loop is developed which includes integral feedback. This approach provides a convenient modular framework that makes it simple to interchange outer and inner control loops to readily set up new control implementations. Numerical simulations illustrate the expected performance for an aggressive reorientation maneuver subject to an unknown external torque.
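The speed-limited outer loop can be sketched for a single Modified Rodrigues Parameter error component: an arctangent shaping smoothly saturates the commanded rate below omega_max for large attitude errors while reducing to plain proportional feedback for small ones. The gains below are illustrative, not from the paper:

```python
import math

def steering_rate(sigma, k1=0.5, k3=0.1, omega_max=0.05):
    """Commanded body rate (rad/s) for one MRP attitude-error component.
    The arctangent smoothly bounds |rate| below omega_max; for small
    sigma the law reduces to the linear feedback -k1 * sigma."""
    return -(2.0 * omega_max / math.pi) * math.atan(
        (k1 * sigma + k3 * sigma ** 3) * math.pi / (2.0 * omega_max))

large = steering_rate(1.0)    # big error: rate pinned just below omega_max
small = steering_rate(0.001)  # small error: approximately -k1 * sigma
```

The inner rate-servo loop then tracks this commanded rate; the separation principle requires it to respond much faster than the shaped outer response.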
Wu, Ying Ying; Plakseychuk, Anton; Shimada, Kenji
2014-11-01
Current external fixators for distraction osteogenesis (DO) are unable to correct all types of deformities in the lower limb and are difficult to use because of the lack of a pre-surgical planning system. We propose a DO system that consists of a surgical planner and a new, easy-to-set-up unilateral fixator that not only corrects all lower limb deformities, but also generates the contralateral/predefined bone shape. Conventionally, bulky constructs with six or more joints (six degrees of freedom, 6DOF) are needed to correct a 3D deformity. By applying the axis-angle representation, we can achieve that with a compact construct with only two joints (2DOF). The proposed system makes use of computer-aided design software and computational methods to plan and simulate the planned procedure. Results of our stress analysis suggest that the stiffness of our proposed fixator is comparable to that of the Orthofix unilateral external fixator. We tested the surgical system on a model of an adult deformed tibia, and the resulting bone trajectory deviates from the target bone trajectory by 1.8 mm, which is below our defined threshold error of 2 mm. We also extracted the transformation matrix that defines the deformity from the bone model and simulated the planned procedure. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.
Schindlbeck, Christopher; Pape, Christian; Reithmeier, Eduard
2018-04-16
Alignment of optical components is crucial for the assembly of optical systems to ensure their full functionality. In this paper we present a novel predictor-corrector framework for the sequential assembly of serial optical systems. Therein, we use a hybrid optical simulation model that comprises virtual and identified component positions. The hybrid model is constantly adapted throughout the assembly process with the help of nonlinear identification techniques and wavefront measurements. This enables prediction of the future wavefront at the detector plane and therefore allows for taking corrective measures accordingly during the assembly process if a user-defined tolerance on the wavefront error is violated. We present a novel notation for the so-called hybrid model and outline the work flow of the presented predictor-corrector framework. A beam expander is assembled as demonstrator for experimental verification of the framework. The optical setup consists of a laser, two bi-convex spherical lenses each mounted to a five degree-of-freedom stage to misalign and correct components, and a Shack-Hartmann sensor for wavefront measurements.
Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators
NASA Astrophysics Data System (ADS)
Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.
2018-03-01
We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for χ(2)-interaction based quantum computation in multimode Fock bases: the χ(2) parity-check code, the χ(2) embedded error-correcting code, and the χ(2) binomial code. All of these QEC codes detect photon-loss or photon-gain errors by means of photon-number parity measurements, and then correct them via χ(2) Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes, and it enables convenient extension of a given encoding to higher-dimensional qudit bases. The χ(2) binomial code is of special interest because, with m ≤ N identified from channel monitoring, it can correct m-photon-loss errors, or m-photon-gain errors, or (m-1)th-order dephasing errors using logical qudits that are encoded in O(N) photons. In comparison, other bosonic QEC codes require O(N²) photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the χ(2) parity-check code and the χ(2) embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our χ(2) QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.
Subthreshold muscle twitches dissociate oscillatory neural signatures of conflicts from errors.
Cohen, Michael X; van Gaal, Simon
2014-02-01
We investigated the neural systems underlying conflict detection and error monitoring during rapid online error correction. We combined data from four separate cognitive tasks and 64 subjects in which EEG and EMG (muscle activity from the thumb used to respond) were recorded. In typical neuroscience experiments, behavioral responses are classified as "error" or "correct"; however, closer inspection of our data revealed that correct responses were often accompanied by "partial errors" - a muscle twitch of the incorrect hand ("mixed correct trials," ~13% of the trials). We found that these muscle twitches dissociated conflicts from errors in time-frequency domain analyses of the EEG data. In particular, both mixed-correct trials and full error trials were associated with enhanced theta-band power (4-9 Hz) compared to correct trials. However, full errors were additionally associated with power and frontal-parietal synchrony in the delta band. Single-trial robust multiple regression analyses revealed a significant modulation of theta power as a function of partial error correction time, thus linking trial-to-trial fluctuations in power to conflict. Furthermore, single-trial correlation analyses revealed a qualitative dissociation between conflict and error processing, such that mixed correct trials were associated with positive theta-RT correlations whereas full error trials were associated with negative delta-RT correlations. These findings shed new light on the local and global network mechanisms of conflict monitoring and error detection, and their relationship to online action adjustment. © 2013.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shang, K; Wang, J; Liu, D
2014-06-01
Purpose: Image-guided radiation therapy (IGRT) is one of the major treatments for esophageal cancer. Gray value registration and bone registration are two kinds of image registration; the purpose of this work is to compare which one is more suitable for esophageal cancer patients. Methods: Twenty-three esophageal patients were treated by Elekta Synergy; CBCT images were acquired and automatically registered to planning kilovoltage CT scans by gray value or bone registration. The setup errors were measured along the X, Y, and Z axes, respectively. The two sets of setup errors were compared with a paired t-test. Results: Four hundred and five groups of CBCT images were available, and the systematic and random setup errors (cm) in the X, Y, and Z directions were 0.35, 0.63, 0.29 and 0.31, 0.53, 0.21 with gray value registration, versus 0.37, 0.64, 0.26 and 0.32, 0.55, 0.20 with bone registration, respectively. Between gray value and bone registration, the setup errors in the X and Z axes differed significantly. In the Y axis, the t value was 0.256 (P > 0.05); in the X axis, the t value was 5.287 (P < 0.05); in the Z axis, the t value was -5.138 (P < 0.05). Conclusion: Gray value registration is recommended in image-guided radiotherapy for esophageal cancer and other thoracic tumors. Manual registration could be applied when necessary. Bone registration is more suitable for head and pelvic tumors, where the anatomy consists of rigid, interconnected bone tissue.
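The paired comparison described above can be reproduced in a few lines. The sketch below computes the paired t-statistic for matched per-scan setup errors from the two registration methods; the sample values are illustrative, not the study's data.

```python
from math import sqrt

def paired_t(x, y):
    """Paired t-statistic for matched samples, e.g. per-scan setup errors
    from gray-value vs. bone registration of the same CBCT images."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((di - mean_d) ** 2 for di in d) / (n - 1)  # sample variance
    return mean_d / sqrt(var_d / n)

# Illustrative per-scan X-axis setup errors (cm); not the study's data.
gray = [0.35, 0.40, 0.31, 0.38, 0.36, 0.33]
bone = [0.37, 0.43, 0.33, 0.41, 0.39, 0.35]
t_stat = paired_t(gray, bone)
```

The statistic is then compared against the t-distribution with n-1 degrees of freedom to obtain the P values quoted in the abstract.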
Evaluation of a head-repositioner and Z-plate system for improved accuracy of dose delivery.
Charney, Sarah C; Lutz, Wendell R; Klein, Mary K; Jones, Pamela D
2009-01-01
Radiation therapy requires accurate dose delivery to targets often identifiable only on computed tomography (CT) images. Translation between the isocenter localized on CT and laser setup for radiation treatment, and interfractional head repositioning are frequent sources of positioning error. The objective was to design a simple, accurate apparatus to eliminate these sources of error. System accuracy was confirmed with phantom and in vivo measurements. A head repositioner that fixates the maxilla via dental mold with fiducial marker Z-plates attached was fabricated to facilitate the connection between the isocenter on CT and laser treatment setup. A phantom study targeting steel balls randomly located within the head repositioner was performed. The center of each ball was marked on a transverse CT slice on which six points of the Z-plate were also visible. Based on the relative position of the six Z-plate points and the ball center, the laser setup position on each Z-plate and a top plate was calculated. Based on these setup marks, orthogonal port films, directed toward each target, were evaluated for accuracy without regard to visual setup. A similar procedure was followed to confirm accuracy of in vivo treatment setups in four dogs using implanted gold seeds. Sequential port films of three dogs were made to confirm interfractional accuracy. Phantom and in vivo measurements confirmed accuracy of 2 mm between isocenter on CT and the center of the treatment dose distribution. Port films confirmed similar accuracy for interfractional treatments. The system reliably connects CT target localization to accurate initial and interfractional radiation treatment setup.
Hijazi, Bilal; Cool, Simon; Vangeyte, Jürgen; Mertens, Koen C; Cointault, Frédéric; Paindavoine, Michel; Pieters, Jan G
2014-11-13
A 3D imaging technique using a high speed binocular stereovision system was developed in combination with corresponding image processing algorithms for accurate determination of the parameters of particles leaving the spinning disks of centrifugal fertilizer spreaders. Validation of the stereo-matching algorithm using a virtual 3D stereovision simulator indicated an error of less than 2 pixels for 90% of the particles. The setup was validated using the cylindrical spread pattern of an experimental spreader. A 2D correlation coefficient of 90% and a Relative Error of 27% was found between the experimental results and the (simulated) spread pattern obtained with the developed setup. In combination with a ballistic flight model, the developed image acquisition and processing algorithms can enable fast determination and evaluation of the spread pattern which can be used as a tool for spreader design and precise machine calibration.
Time-dependent phase error correction using digital waveform synthesis
Doerry, Armin W.; Buskirk, Stephen
2017-10-10
The various technologies presented herein relate to correcting a time-dependent phase error generated as part of the formation of a radar waveform. A waveform can be pre-distorted to facilitate correction of an error induced into the waveform by a downstream operation/component in a radar system. For example, amplifier power droop effect can engender a time-dependent phase error in a waveform as part of a radar signal generating operation. The error can be quantified and an according complimentary distortion can be applied to the waveform to facilitate negation of the error during the subsequent processing of the waveform. A time domain correction can be applied by a phase error correction look up table incorporated into a waveform phase generator.
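The pre-distortion idea is simple to sketch: negate the known time-dependent phase error (taken from a lookup table) before the waveform reaches the component that introduces it. The quadratic "droop" phase model below is an illustrative assumption, not the patent's actual error model.

```python
import cmath
import math

def predistort(samples, phase_err):
    """Multiply each sample by exp(-j*phi) from a phase-error lookup table,
    so a downstream component adding exp(+j*phi) cancels out."""
    return [s * cmath.exp(-1j * p) for s, p in zip(samples, phase_err)]

n = 64
t = [i / n for i in range(n)]
ideal = [cmath.exp(2j * math.pi * 5 * ti) for ti in t]  # ideal tone
droop = [0.4 * ti * ti for ti in t]  # assumed quadratic droop phase error

pre = predistort(ideal, droop)
# The downstream stage applies the droop error; pre-distortion cancels it.
received = [s * cmath.exp(1j * p) for s, p in zip(pre, droop)]
residual = max(abs(cmath.phase(r / w)) for r, w in zip(received, ideal))
```

After cancellation the residual phase error is at floating-point level, illustrating why the correction can be folded into the waveform phase generator's lookup table.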
Piermattei, Angelo; Kang, Shengwei; Xiao, Mingyong; Tang, Bin; Liao, Xiongfei; Xin, Xin; Grusio, Mattia
2018-01-01
High conformal techniques such as intensity-modulated radiation therapy and volumetric-modulated arc therapy are widely used in overloaded radiotherapy departments. In vivo dosimetric screening is essential in this environment to avoid important dosimetric errors. This work examines the feasibility of introducing in vivo dosimetry (IVD) checks in a radiotherapy routine. The causes of dosimetric disagreements between delivered and planned treatments were identified and corrected during the course of treatment. The efficiency of the corrections performed and the added workload needed for the entire procedure were evaluated. The IVD procedure was based on an electronic portal imaging device. A total of 3682 IVD tests were performed for 147 patients who underwent head and neck, abdomen, pelvis, breast, and thorax radiotherapy treatments. Two types of indices were evaluated and used to determine if the IVD tests were within tolerance levels: the ratio R between the reconstructed and planned isocentre doses and a transit dosimetry based on the γ-analysis of the electronic portal images. The causes of test outside tolerance level were investigated and corrected and IVD test was repeated during subsequent fraction. The time needed for each step of the IVD procedure was registered. Pelvis, abdomen, and head and neck treatments had 10% of tests out of tolerance whereas breast and thorax treatments accounted for up to 25%. The patient setup was the main cause of 90% of the IVD tests out of tolerance and the remaining 10% was due to patient morphological changes. An average time of 42 min per day was sufficient to monitor a daily workload of 60 patients in treatment. This work shows that IVD performed with an electronic portal imaging device is feasible in an overloaded department and enables the timely realignment of the treatment quality indices in order to achieve a patient’s final treatment compliant with the one prescribed. PMID:29432473
The Measurement and Correction of the Periodic Error of the LX200-16 Telescope Driving System
NASA Astrophysics Data System (ADS)
Jeong, Jang Hae; Lee, Young Sam; Lee, Chung Uk
2000-06-01
We examined and corrected the periodic error of the LX200-16 telescope driving system of Chungbuk National University Campus Observatory. Before correction, the standard deviation of the periodic error in the East-West direction was σ = 7.″2. After correction, we found that the periodic error was reduced to σ = 1.″2.
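A common way to implement such periodic-error correction is to fold the tracking residuals at the worm-gear period, average each phase bin, and subtract that mean profile. The sketch below assumes a synthetic sinusoidal error and an integer-sample period; it is an illustration of the principle, not the observatory's actual procedure.

```python
import math

def fold_correct(residuals, period):
    """Fold tracking residuals at the worm period, average each phase bin,
    and subtract the mean periodic profile from every sample."""
    bins = [[] for _ in range(period)]
    for i, r in enumerate(residuals):
        bins[i % period].append(r)
    profile = [sum(b) / len(b) for b in bins]
    return [r - profile[i % period] for i, r in enumerate(residuals)]

def std(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

period = 8  # assumed worm period in samples, for illustration
raw = [7.2 * math.sin(2 * math.pi * (i % period) / period) for i in range(64)]
corrected = fold_correct(raw, period)
```

A purely periodic error is removed entirely; in practice random seeing and drift leave a residual, which is why the observed reduction is from 7.″2 to 1.″2 rather than to zero.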
Hochman, Eldad Yitzhak; Orr, Joseph M; Gehring, William J
2014-02-01
Cognitive control in the posterior medial frontal cortex (pMFC) is formulated in models that emphasize adaptive behavior driven by a computation evaluating the degree of difference between 2 conflicting responses. These functions are manifested by an event-related brain potential component coined the error-related negativity (ERN). We hypothesized that the ERN represents a regulative rather than evaluative pMFC process, exerted over the error motor representation, expediting the execution of a corrective response. We manipulated the motor representations of the error and the correct response to varying degrees. The ERN was greater when 1) the error response was more potent than when the correct response was more potent, 2) more errors were committed, 3) fewer and slower corrections were observed, and 4) the error response shared fewer motor features with the correct response. In their current forms, several prominent models of the pMFC cannot be reconciled with these findings. We suggest that a prepotent, unintended error is prone to reach the manual motor processor responsible for response execution before a nonpotent, intended correct response. In this case, the correct response is a correction and its execution must wait until the error is aborted. The ERN may reflect pMFC activity that aimed to suppress the error.
Correcting false memories: Errors must be noticed and replaced.
Mullet, Hillary G; Marsh, Elizabeth J
2016-04-01
Memory can be unreliable. For example, after reading The new baby stayed awake all night, people often misremember that the new baby cried all night (Brewer, 1977); similarly, after hearing bed, rest, and tired, people often falsely remember that sleep was on the list (Roediger & McDermott, 1995). In general, such false memories are difficult to correct, persisting despite warnings and additional study opportunities. We argue that errors must first be detected to be corrected; consistent with this argument, two experiments showed that false memories were nearly eliminated when conditions facilitated comparisons between participants' errors and corrective feedback (e.g., immediate trial-by-trial feedback that allowed direct comparisons between their responses and the correct information). However, knowledge that they had made an error was insufficient; unless the feedback message also contained the correct answer, the rate of false memories remained relatively constant. On the one hand, there is nothing special about correcting false memories: simply labeling an error as "wrong" is also insufficient for correcting other memory errors, including misremembered facts or mistranslations. However, unlike these other types of errors--which often benefit from the spacing afforded by delayed feedback--false memories require a special consideration: Learners may fail to notice their errors unless the correction conditions specifically highlight them.
Linear positioning laser calibration setup of CNC machine tools
NASA Astrophysics Data System (ADS)
Sui, Xiulin; Yang, Congjing
2002-10-01
The linear positioning laser calibration setup of CNC machine tools is capable of executing machine tool laser calibration and backlash compensation. Using this setup, hole locations on CNC machine tools will be correct and machine tool geometry will be evaluated and adjusted. Machine tool laser calibration and backlash compensation is a simple and straightforward process. First, the setup 'finds' the stroke limits of the axis, and the laser head is brought into correct alignment. Second, the machine axis is moved to the other extreme, and the laser head alignment is refined using rotation and elevation adjustments. Finally, the machine is moved to the start position and final alignment is verified. The stroke of the machine and the machine compensation interval dictate the amount of data required for each axis. These factors determine the amount of time required for a thorough compensation of the linear positioning accuracy. The Laser Calibrator System monitors the material temperature and the air density; this takes into consideration machine thermal growth and laser beam frequency. This linear positioning laser calibration setup can be used on CNC machine tools, CNC lathes, horizontal centers, and vertical machining centers.
5 CFR 1601.34 - Error correction.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 5 Administrative Personnel 3 2011-01-01 2011-01-01 false Error correction. 1601.34 Section 1601.34 Administrative Personnel FEDERAL RETIREMENT THRIFT INVESTMENT BOARD PARTICIPANTS' CHOICES OF TSP FUNDS... in the wrong investment fund, will be corrected in accordance with the error correction regulations...
5 CFR 1601.34 - Error correction.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 5 Administrative Personnel 3 2010-01-01 2010-01-01 false Error correction. 1601.34 Section 1601.34 Administrative Personnel FEDERAL RETIREMENT THRIFT INVESTMENT BOARD PARTICIPANTS' CHOICES OF TSP FUNDS... in the wrong investment fund, will be corrected in accordance with the error correction regulations...
Estimate of higher order ionospheric errors in GNSS positioning
NASA Astrophysics Data System (ADS)
Hoque, M. Mainul; Jakowski, N.
2008-10-01
Precise navigation and positioning using GPS/GLONASS/Galileo require the ionospheric propagation errors to be accurately determined and corrected for. The current dual-frequency method of ionospheric correction ignores higher order ionospheric errors, such as the second and third order ionospheric terms in the refractive index formula, and errors due to bending of the signal. The total electron content (TEC) is assumed to be the same at the two GPS frequencies. All these assumptions lead to erroneous estimation and correction of the ionospheric errors. In this paper a rigorous treatment of these problems is presented. Different approximation formulas have been proposed to correct errors due to excess path length in addition to the free space path length, the TEC difference at the two GNSS frequencies, and the third-order ionospheric term. The GPS dual-frequency residual range errors can be corrected within millimeter-level accuracy using the proposed correction formulas.
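For reference, the standard first-order dual-frequency correction that the paper extends can be sketched as follows. The ionosphere-free combination cancels the 40.3·TEC/f² term exactly, but leaves untouched the higher-order terms and bending errors that the abstract addresses. The range and TEC values are illustrative.

```python
f1, f2 = 1575.42e6, 1227.60e6   # GPS L1 / L2 carrier frequencies (Hz)

def iono_free(p1, p2):
    """First-order ionosphere-free combination of dual-frequency
    pseudoranges; removes the 40.3*TEC/f**2 term but NOT the
    higher-order terms treated in the paper."""
    return (f1**2 * p1 - f2**2 * p2) / (f1**2 - f2**2)

rho = 22_000_000.0              # geometric range (m), illustrative
tec = 50e16                     # slant TEC (electrons/m^2), i.e. 50 TECU
p1 = rho + 40.3 * tec / f1**2   # pseudorange with first-order delay on L1
p2 = rho + 40.3 * tec / f2**2   # ... and on L2
recovered = iono_free(p1, p2)
```

The first-order delay on L1 here is about 8 m; the combination recovers the geometric range to well below a millimeter, so the residual errors at that level are exactly the higher-order terms the paper quantifies.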
Automated error correction in IBM quantum computer and explicit generalization
NASA Astrophysics Data System (ADS)
Ghosh, Debjit; Agarwal, Pratik; Pandey, Pratyush; Behera, Bikash K.; Panigrahi, Prasanta K.
2018-06-01
Construction of a fault-tolerant quantum computer remains a challenging problem due to unavoidable noise and fragile quantum states. However, this goal can be achieved by introducing quantum error-correcting codes. Here, we experimentally realize an automated error correction code and demonstrate the nondestructive discrimination of GHZ states in IBM 5-qubit quantum computer. After performing quantum state tomography, we obtain the experimental results with a high fidelity. Finally, we generalize the investigated code for maximally entangled n-qudit case, which could both detect and automatically correct any arbitrary phase-change error, or any phase-flip error, or any bit-flip error, or combined error of all types of error.
Error Correcting Optical Mapping Data.
Mukherjee, Kingshuk; Washimkar, Darshan; Muggli, Martin D; Salmela, Leena; Boucher, Christina
2018-05-26
Optical mapping is a unique system that is capable of producing high-resolution, high-throughput genomic map data that gives information about the structure of a genome [21]. Recently it has been used for scaffolding contigs and assembly validation for large-scale sequencing projects, including the maize [32], goat [6], and amborella [4] genomes. However, a major impediment in the use of this data is the variety and quantity of errors in the raw optical mapping data, which are called Rmaps. The challenges associated with using Rmap data are analogous to dealing with insertions and deletions in the alignment of long reads. Moreover, they are arguably harder to tackle since the data is numerical and susceptible to inaccuracy. We develop cOMET to error correct Rmap data, which to the best of our knowledge is the only optical mapping error correction method. Our experimental results demonstrate that cOMET has high precision and corrects 82.49% of insertion errors and 77.38% of deletion errors in Rmap data generated from the E. coli K-12 reference genome. Out of the deletion errors corrected, 98.26% are true errors. Similarly, out of the insertion errors corrected, 82.19% are true errors. It also successfully scales to large genomes, improving the quality of 78% and 99% of the Rmaps in the plum and goat genomes, respectively. Lastly, we show the utility of error correction by demonstrating how it improves the assembly of Rmap data. Error corrected Rmap data results in an assembly that is more contiguous, and covers a larger fraction of the genome.
First on-sky demonstration of the piezoelectric adaptive secondary mirror.
Guo, Youming; Zhang, Ang; Fan, Xinlong; Rao, Changhui; Wei, Ling; Xian, Hao; Wei, Kai; Zhang, Xiaojun; Guan, Chunlin; Li, Min; Zhou, Luchun; Jin, Kai; Zhang, Junbo; Deng, Jijiang; Zhou, Longfeng; Chen, Hao; Zhang, Xuejun; Zhang, Yudong
2016-12-15
We propose using a piezoelectric adaptive secondary mirror (PASM) in medium-sized adaptive telescopes with 2-4 m apertures, exploiting piezoelectric actuators to simplify the structure and control compared with voice-coil adaptive secondary mirrors. A closed-loop experimental setup was built for on-sky demonstration of the 73-element PASM developed by our laboratory. In this Letter, the PASM and the closed-loop adaptive optics system are introduced. High-resolution stellar images were obtained by using the PASM to correct high-order wavefront errors in May 2016. To the best of our knowledge, this is the first successful on-sky demonstration of the PASM. The results show that with the PASM as the deformable mirror, the angular resolution of the 1.8 m telescope can be effectively improved.
Quantum State Transfer via Noisy Photonic and Phononic Waveguides
NASA Astrophysics Data System (ADS)
Vermersch, B.; Guimond, P.-O.; Pichler, H.; Zoller, P.
2017-03-01
We describe a quantum state transfer protocol, where a quantum state of photons stored in a first cavity can be faithfully transferred to a second distant cavity via an infinite 1D waveguide, while being immune to arbitrary noise (e.g., thermal noise) injected into the waveguide. We extend the model and protocol to a cavity QED setup, where atomic ensembles, or single atoms representing quantum memory, are coupled to a cavity mode. We present a detailed study of sensitivity to imperfections, and apply a quantum error correction protocol to account for random losses (or additions) of photons in the waveguide. Our numerical analysis is enabled by matrix product state techniques to simulate the complete quantum circuit, which we generalize to include thermal input fields. Our discussion applies both to photonic and phononic quantum networks.
Evaluation of performance of the MACAO systems at the VLTI
NASA Astrophysics Data System (ADS)
Rengaswamy, Sridharan; Haguenauer, Pierre; Brillant, Stephane; Cortes, Angela; Girard, Julien H.; Guisard, Stephane; Paufique, Jérôme; Pino, Andres
2010-07-01
Multiple Application Curvature Adaptive Optics (MACAO) systems are used at the coudé focus of the unit telescopes (UTs) at the La-Silla Paranal Observatory, Paranal, to correct for the wave-front aberrations induced by the atmosphere. These systems have been in operation since 2005 and are designed to provide beams with 10 mas residual rms tip-tilt error to the VLTI laboratory. We have initiated several technical studies, such as measuring the Strehl ratio of the images recorded at the guiding camera of the VLTI, establishing the optimum setup of the MACAO to deliver a collimated and focused beam down to the VLTI laboratory and to the instruments, and ascertaining the data generated by the real-time computer, all aimed at characterizing and improving the overall performance of these systems. In this paper we report the current status of these studies.
Optical digital to analog conversion performance analysis for indoor set-up conditions
NASA Astrophysics Data System (ADS)
Dobesch, Aleš; Alves, Luis Nero; Wilfert, Otakar; Ribeiro, Carlos Gaspar
2017-10-01
In visible light communication (VLC), the optical digital to analog conversion (ODAC) approach was proposed as a suitable driving technique able to overcome the light-emitting diode's (LED) non-linear characteristic. This concept is analogous to an electrical digital-to-analog converter (EDAC). In other words, digital bits are binary weighted to represent an analog signal. The method supports elementary on-off based modulations able to exploit the essence of the LED's non-linear characteristic, allowing simultaneous lighting and communication. In the ODAC concept, the reconstruction error does not simply depend on the converter bit depth, as in the case of an EDAC; rather, it also depends on the communication system set-up and the geometrical relation between emitter and receiver. The paper describes simulation results presenting the ODAC's error performance taking into account the optical channel, the LED's half power angle (HPA), and the receiver field of view (FOV). The set-up under consideration examines indoor conditions for a square room with 4 m length and 3 m height, operating with one dominant wavelength (blue) and having walls with a reflection coefficient of 0.8. The achieved results reveal that the reconstruction error increases for higher data rates as a result of interference due to multipath propagation.
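The binary-weighting principle behind ODAC can be sketched in a few lines: each of N on-off light sources carries one bit of the quantized sample, and the receiver reconstructs the analog value as the binary-weighted sum of the detected bits. This sketch deliberately ignores the channel, HPA, and FOV effects that the paper actually studies.

```python
def odac_encode(x, bits):
    """Quantize x in [0, 1) and drive `bits` binary-weighted LEDs
    fully on or off (most significant bit first)."""
    level = int(x * (1 << bits))
    return [(level >> b) & 1 for b in reversed(range(bits))]

def odac_decode(code):
    """Receiver-side reconstruction: binary-weighted sum of detected bits."""
    level = 0
    for bit in code:
        level = (level << 1) | bit
    return level / (1 << len(code))

x = 0.62
code = odac_encode(x, 8)   # eight on-off sources, one per bit
y = odac_decode(code)      # reconstruction error bounded by 1/2**8
```

In an ideal channel the error is pure quantization noise, bounded by one least-significant bit; the paper's point is that multipath and geometry add reconstruction error on top of this bound.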
Zhang, Guangzhi; Cai, Shaobin; Xiong, Naixue
2018-01-01
One of the remarkable challenges of Wireless Sensor Networks (WSN) is how to transfer the collected data efficiently, given the energy limitation of sensor nodes. Network coding increases the network throughput of a WSN dramatically due to the broadcast nature of WSN. However, network coding usually propagates a single original error over the whole network. Due to this special property of error propagation in network coding, most error correction methods cannot correct more than C/2 corrupted errors, where C is the max-flow min-cut of the network. To maximize the effectiveness of network coding applied in WSN, a new error-correcting mechanism to confront the propagated error is urgently needed. Based on the social network characteristic inherent in WSN and L1 optimization, we propose a novel scheme which successfully corrects more than C/2 corrupted errors. What is more, even if the error occurs on all the links of the network, our scheme can still correct errors successfully. By introducing a secret channel and a specially designed matrix which can trap some errors, we improve John and Yi's model so that it can correct the propagated errors in network coding, which usually pollute exactly 100% of the received messages. Taking advantage of the social characteristic inherent in WSN, we propose a new distributed approach that establishes reputation-based trust among sensor nodes in order to identify the informative upstream sensor nodes. Drawing on the theory of social networks, the informative relay nodes are selected and marked with high trust values. The two methods, L1 optimization and the use of the social characteristic, coordinate with each other and can correct propagated errors even when the corrupted fraction is exactly 100% in a WSN where network coding is performed. The effectiveness of the error correction scheme is validated through simulation experiments. PMID:29401668
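The role of L1 optimization in such schemes can be illustrated with a deliberately miniature example: a scalar message sent over several redundant links, a minority of which are corrupted, is recovered by minimizing the L1 residual. For a single unknown the minimizer lies at one of the measurement ratios, so an exhaustive check suffices. This is a sketch of the decoding principle only, not the paper's full scheme with its secret channel and trust mechanism.

```python
def l1_decode(b, y):
    """Recover a scalar message x from redundant link measurements
    y_i = b_i * x, a minority of which are corrupted, by minimizing
    the L1 residual sum_i |y_i - b_i * x|. For one unknown the minimum
    is attained at one of the ratios y_i / b_i."""
    candidates = [yi / bi for yi, bi in zip(y, b)]
    return min(candidates,
               key=lambda x: sum(abs(yi - bi * x) for yi, bi in zip(y, b)))

b = [1.0, 2.0, 3.0, 4.0, 5.0]   # coding coefficients on five links
y = [bi * 7.0 for bi in b]      # true message x = 7.0
y[1] += 50.0                    # one corrupted link out of five
x_hat = l1_decode(b, y)
```

Because the L1 norm is robust to a minority of gross outliers, the single corrupted link does not pull the estimate away from the true message.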
Patient motion tracking in the presence of measurement errors.
Haidegger, Tamás; Benyó, Zoltán; Kazanzides, Peter
2009-01-01
The primary aim of computer-integrated surgical systems is to provide physicians with superior surgical tools for better patient outcome. Robotic technology is capable of both minimally invasive surgery and microsurgery, offering remarkable advantages for the surgeon and the patient. Current systems allow for sub-millimeter intraoperative spatial positioning, however certain limitations still remain. Measurement noise and unintended changes in the operating room environment can result in major errors. Positioning errors are a significant danger to patients in procedures involving robots and other automated devices. We have developed a new robotic system at the Johns Hopkins University to support cranial drilling in neurosurgery procedures. The robot provides advanced visualization and safety features. The generic algorithm described in this paper allows for automated compensation of patient motion through optical tracking and Kalman filtering. When applied to the neurosurgery setup, preliminary results show that it is possible to identify patient motion within 700 ms, and apply the appropriate compensation with an average of 1.24 mm positioning error after 2 s of setup time.
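The motion-compensation step combines optical tracking with Kalman filtering. A minimal one-dimensional sketch is given below; the random-walk motion model and the noise variances q and r are assumptions for illustration, not the system's actual tuning.

```python
def kalman_1d(measurements, q=1e-4, r=0.25):
    """One-dimensional Kalman filter with a random-walk motion model:
    q is the assumed process noise variance, r the measurement noise
    variance. Returns the filtered position estimates."""
    x, p = measurements[0], 1.0
    estimates = []
    for z in measurements:
        p += q                  # predict: the target may have drifted
        k = p / (p + r)         # Kalman gain
        x += k * (z - x)        # update with the measurement residual
        p *= (1.0 - k)          # reduced uncertainty after the update
        estimates.append(x)
    return estimates

# Noisy tracker readings of a static target at 5.0 (alternating +-0.5 error).
meas = [5.0 + (0.5 if i % 2 == 0 else -0.5) for i in range(60)]
est = kalman_1d(meas)
```

The filtered estimate converges near the true position despite the measurement noise, which is the behavior that lets the robot identify and compensate genuine patient motion rather than tracker jitter.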
ERIC Educational Resources Information Center
Waugh, Rebecca E.
2010-01-01
Simultaneous prompting is an errorless learning strategy designed to reduce the number of errors students make; however, research has shown a disparity in the number of errors students make during instructional versus probe trials. This study directly examined the effects of error correction versus no error correction during probe trials on the…
ERIC Educational Resources Information Center
Waugh, Rebecca E.; Alberto, Paul A.; Fredrick, Laura D.
2011-01-01
Simultaneous prompting is an errorless learning strategy designed to reduce the number of errors students make; however, research has shown a disparity in the number of errors students make during instructional versus probe trials. This study directly examined the effects of error correction versus no error correction during probe trials on the…
Automatic transperineal ultrasound probe positioning based on CT scan for image guided radiotherapy
NASA Astrophysics Data System (ADS)
Camps, S. M.; Verhaegen, F.; Paiva Fonesca, G.; de With, P. H. N.; Fontanarosa, D.
2017-03-01
Image interpretation is crucial during ultrasound image acquisition. A skilled operator is typically needed to verify that the correct anatomical structures are all visualized and with sufficient quality. The need for this operator is one of the major reasons why ultrasound is presently not widely used in radiotherapy workflows. To solve this issue, we introduce an algorithm that uses anatomical information derived from a CT scan to automatically provide the operator with a patient-specific ultrasound probe setup. The first application we investigated, for its relevance to radiotherapy, is 4D transperineal ultrasound image acquisition for prostate cancer patients. As an initial test, the algorithm was applied on a CIRS multi-modality pelvic phantom. Probe setups were calculated in order to allow visualization of the prostate and adjacent edges of bladder and rectum, as clinically required. Five of the proposed setups were reproduced using a precision robotic arm and ultrasound volumes were acquired. A gel-filled probe cover was used to ensure proper acoustic coupling, while taking into account possible tilted positions of the probe with respect to the flat phantom surface. Visual inspection of the acquired volumes revealed that clinical requirements were fulfilled. Preliminary quantitative evaluation was also performed. The mean absolute distance (MAD) was calculated between actual anatomical structure positions and positions predicted by the CT-based algorithm. This resulted in a MAD of (2.8±0.4) mm for prostate, (2.5±0.6) mm for bladder and (2.8±0.6) mm for rectum. These results show that no significant systematic errors due to e.g. probe misplacement were introduced.
NASA Astrophysics Data System (ADS)
Post, Vincent E. A.; Banks, Eddie; Brunke, Miriam
2018-02-01
The quantification of groundwater flow near the freshwater-saltwater transition zone at the coast is difficult because of variable-density effects and tidal dynamics. Head measurements were collected along a transect perpendicular to the shoreline at a site south of the city of Adelaide, South Australia, to determine the transient flow pattern. This paper presents a detailed overview of the measurement procedure, data post-processing methods and uncertainty analysis in order to assess how measurement errors affect the accuracy of the inferred flow patterns. A particular difficulty encountered was that some of the piezometers were leaky, which necessitated regular measurements of the electrical conductivity and temperature of the water inside the wells to correct for density effects. Other difficulties included failure of pressure transducers, data logger clock drift and operator error. The data obtained were sufficiently accurate to show that there is net seaward horizontal flow of freshwater in the top part of the aquifer, and a net landward flow of saltwater in the lower part. The vertical flow direction alternated with the tide, but due to the large uncertainty of the head gradients and density terms, no net flow could be established with any degree of confidence. While the measurement problems were amplified under the prevailing conditions at the site, similar errors can lead to large uncertainties everywhere. The methodology outlined acknowledges the inherent uncertainty involved in measuring groundwater flow. It can also assist to establish the accuracy requirements of the experimental setup.
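The density correction mentioned above is typically done by converting each point-water head to an equivalent freshwater head, using the density measured inside the well. The sketch below uses the textbook formula with illustrative values, not the site's data.

```python
def freshwater_head(h_p, z, rho, rho_f=1000.0):
    """Equivalent freshwater head h_f = z + (rho / rho_f) * (h_p - z),
    where h_p is the point-water head measured in the well, z the
    elevation of the measurement point, and rho the density (kg/m^3)
    of the water column, obtained here from the in-well electrical
    conductivity and temperature measurements."""
    return z + (rho / rho_f) * (h_p - z)

# Illustrative values: screen at z = -20 m, measured head 1.0 m,
# seawater-like density 1025 kg/m^3.
h_f = freshwater_head(1.0, -20.0, 1025.0)
```

Comparing equivalent freshwater heads (together with the density gradient term for vertical flow) is what allows the seaward freshwater and landward saltwater flow components to be separated.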
Processor register error correction management
Bose, Pradip; Cher, Chen-Yong; Gupta, Meeta S.
2016-12-27
Processor register protection management is disclosed. In embodiments, a method of processor register protection management can include determining a sensitive logical register for executable code generated by a compiler, generating an error-correction table identifying the sensitive logical register, and storing the error-correction table in a memory accessible by a processor. The processor can be configured to generate a duplicate register of the sensitive logical register identified by the error-correction table.
[Individual indirect bonding technique (IIBT) using set-up model].
Kyung, H M
1989-01-01
There has been much progress in the edgewise appliance since E. H. Angle. One of the most important procedures in edgewise treatment is correct bracket positioning. Neither the conventional edgewise appliance nor the straight-wire and lingual appliances can be used effectively unless the bracket position is accurate. Improper bracket positioning may cause many problems during treatment, especially in the finishing stage: it may require either rebonding after removal of the malpositioned bracket, or a greater number of arch wires and more complex wire bending, making effective treatment considerably more difficult. This led me to develop the Individual Indirect Bonding Technique, which uses a multi-purpose set-up model to determine a correct and objective bracket position for each individual patient. The technique positions brackets more accurately than earlier indirect bonding techniques because it determines the bracket position on a set-up model fabricated to reproduce the occlusal relationship the clinician desires. It is especially effective for the straight-wire appliance and the lingual appliance, in which correct bracket positioning is indispensable.
NASA Astrophysics Data System (ADS)
Xiong, B.; Oude Elberink, S.; Vosselman, G.
2014-07-01
In the task of 3D building model reconstruction from point clouds we face the problem of recovering a roof topology graph in the presence of noise, small roof faces and low point densities. Errors in roof topology graphs will seriously affect the final modelling results. The aim of this research is to automatically correct these errors. We define the graph correction as a graph-to-graph problem, similar to the spelling correction problem (also called the string-to-string problem). The graph correction is more complex than string correction, as the graphs are 2D while strings are only 1D. We design a strategy based on a dictionary of graph edit operations to automatically identify and correct the errors in the input graph. For each type of error the graph edit dictionary stores a representative erroneous subgraph as well as the corrected version. As an erroneous roof topology graph may contain several errors, a heuristic search is applied to find the optimum sequence of graph edits to correct the errors one by one. The graph edit dictionary can be expanded to include entries needed to cope with errors that were previously not encountered. Experiments show that the dictionary with only fifteen entries already properly corrects one quarter of erroneous graphs in about 4500 buildings, and even half of the erroneous graphs in one test area, achieving as high as a 95% acceptance rate of the reconstructed models.
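The dictionary-driven correction strategy described in this abstract can be sketched compactly. The Python fragment below is an illustrative assumption — a toy graph representation, a single made-up dictionary entry, and a greedy matching loop — not the authors' implementation:

```python
def normalize(edges):
    """Represent an undirected roof topology graph as a set of edges."""
    return frozenset(frozenset(e) for e in edges)

# Toy "graph edit dictionary": each entry maps a representative erroneous
# subgraph to its corrected version (this entry is hypothetical).
EDIT_DICTIONARY = {
    # two roof faces meeting a third without their own intersection edge
    normalize({("a", "b"), ("b", "c")}):
        normalize({("a", "b"), ("b", "c"), ("a", "c")}),
}

def correct(graph):
    """Greedily apply dictionary edits until no entry changes the graph."""
    graph = normalize(graph)
    changed = True
    while changed:
        changed = False
        for bad, good in EDIT_DICTIONARY.items():
            if bad <= graph:                  # erroneous pattern present
                fixed = (graph - bad) | good  # swap in corrected subgraph
                if fixed != graph:
                    graph, changed = fixed, True
                    break
    return graph
```

A real system would match labelled subgraphs by isomorphism rather than literal node names, and would run the heuristic search over edit sequences that the abstract describes.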
Bijsterbosch, Janine D; Lee, Kwang-Hyuk; Hunter, Michael D; Tsoi, Daniel T; Lankappa, Sudheer; Wilkinson, Iain D; Barker, Anthony T; Woodruff, Peter W R
2011-05-01
Our ability to interact physically with objects in the external world critically depends on temporal coupling between perception and movement (sensorimotor timing) and swift behavioral adjustment to changes in the environment (error correction). In this study, we investigated the neural correlates of the correction of subliminal and supraliminal phase shifts during a sensorimotor synchronization task. In particular, we focused on the role of the cerebellum because this structure has been shown to play a role in both motor timing and error correction. Experiment 1 used fMRI to show that the right cerebellar dentate nucleus and primary motor and sensory cortices were activated during regular timing and during the correction of subliminal errors. The correction of supraliminal phase shifts led to additional activations in the left cerebellum and right inferior parietal and frontal areas. Furthermore, a psychophysiological interaction analysis revealed that supraliminal error correction was associated with enhanced connectivity of the left cerebellum with frontal, auditory, and sensory cortices and with the right cerebellum. Experiment 2 showed that suppression of the left but not the right cerebellum with theta burst TMS significantly affected supraliminal error correction. These findings provide evidence that the left lateral cerebellum is essential for supraliminal error correction during sensorimotor synchronization.
Optimized linear motor and digital PID controller setup used in Mössbauer spectrometer
NASA Astrophysics Data System (ADS)
Kohout, Pavel; Kouřil, Lukáš; Navařík, Jakub; Novák, Petr; Pechoušek, Jiří
2014-10-01
Optimization of a linear motor and digital PID controller setup used in a Mössbauer spectrometer is presented. A velocity driving system with a digital PID feedback subsystem was developed in the LabVIEW graphical environment and deployed on an sbRIO real-time hardware device (National Instruments). The most important data acquisition processes are performed as real-time deterministic tasks on an FPGA chip. The system drives a double-loudspeaker-type velocity transducer through a power amplifier circuit. A series of calibration measurements was performed to find the optimal setup of the P, I, D parameters, together with an analysis of the velocity error signal; the shape and characteristics of the velocity error signal are analyzed in detail. Remote applications for controlling and monitoring the PID system from a computer or a smartphone were also developed. With the best setup and P, I, D parameters, a calibration spectrum of an α-Fe sample was collected with an average nonlinearity of the velocity scale below 0.08%, and a spectral line width below 0.30 mm/s was observed. The result is a powerful and complex velocity driving system.
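The digital PID feedback loop at the heart of such a velocity driving system can be illustrated in a few lines of Python. The gains, sample time, and first-order plant below are arbitrary stand-ins, not the parameters of the actual transducer electronics:

```python
class PID:
    """Textbook discrete PID controller (gains here are illustrative)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a toy first-order plant toward a reference velocity of 1.0:
pid = PID(kp=2.0, ki=5.0, kd=0.01, dt=0.001)
velocity = 0.0
for _ in range(5000):                        # 5 s of simulated time
    drive = pid.update(setpoint=1.0, measured=velocity)
    velocity += (drive - velocity) * 0.001   # plant time constant of 1 s
```

The integral term removes the steady-state velocity error, which is exactly what a velocity-scale calibration in a Mössbauer drive depends on.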
Error Detection/Correction in Collaborative Writing
ERIC Educational Resources Information Center
Pilotti, Maura; Chodorow, Martin
2009-01-01
In the present study, we examined error detection/correction during collaborative writing. Subjects were asked to identify and correct errors in two contexts: a passage written by the subject (familiar text) and a passage written by a person other than the subject (unfamiliar text). A computer program inserted errors in function words prior to the…
Joint Schemes for Physical Layer Security and Error Correction
ERIC Educational Resources Information Center
Adamo, Oluwayomi
2011-01-01
The major challenges facing resource constraint wireless devices are error resilience, security and speed. Three joint schemes are presented in this research which could be broadly divided into error correction based and cipher based. The error correction based ciphers take advantage of the properties of LDPC codes and Nordstrom Robinson code. A…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leinders, Suzanne M.; Delft University of Technology, Delft; Breedveld, Sebastiaan
Purpose: To investigate how dose distributions for liver stereotactic body radiation therapy (SBRT) can be improved by using automated, daily plan reoptimization to account for anatomy deformations, compared with setup corrections only. Methods and Materials: For 12 tumors, 3 strategies for dose delivery were simulated. In the first strategy, computed tomography scans made before each treatment fraction were used only for patient repositioning before dose delivery for correction of detected tumor setup errors. In the adaptive second and third strategies, in addition to the isocenter shift, intensity modulated radiation therapy beam profiles were reoptimized, or both intensity profiles and beam orientations were reoptimized, respectively. All optimizations were performed with a recently published algorithm for automated, multicriteria optimization of both beam profiles and beam angles. Results: In 6 of 12 cases, violations of organ-at-risk (ie, heart, stomach, kidney) constraints of 1 to 6 Gy in single fractions occurred in cases of tumor repositioning only. By using the adaptive strategies, these could be avoided (<1 Gy). For 1 case, this required adaptation by slightly underdosing the planning target volume. For 2 cases with restricted tumor dose in the planning phase to avoid organ-at-risk constraint violations, fraction doses could be increased by 1 and 2 Gy because of more favorable anatomy. Daily reoptimization of both beam profiles and beam angles (third strategy) performed slightly better than reoptimization of profiles only, but the latter required only a few minutes of computation time, whereas full reoptimization took several hours. Conclusions: This simulation study demonstrated that replanning based on daily acquired computed tomography scans can improve liver stereotactic body radiation therapy dose delivery.
Marker Configuration Model-Based Roentgen Fluoroscopic Analysis.
Garling, Eric H; Kaptein, Bart L; Geleijns, Koos; Nelissen, Rob G H H; Valstar, Edward R
2005-04-01
It remains unknown if and how the polyethylene bearing in mobile bearing knees moves during dynamic activities with respect to the tibial base plate. Marker Configuration Model-Based Roentgen Fluoroscopic Analysis (MCM-based RFA) uses a marker configuration model of inserted tantalum markers in order to accurately estimate the pose of an implant or bone using single-plane Roentgen or fluoroscopic images. The goal of this study is to assess the accuracy of MCM-based RFA in a standard fluoroscopic set-up using phantom experiments and to determine the error propagation with computer simulations. The experimental set-up of the phantom study was calibrated using a calibration box equipped with 600 tantalum markers, which corrected for image distortion and determined the focus position. In the computer simulation study the influence of image distortion, MC-model accuracy, focus position, the relative distance between MC-models, and MC-model configuration on the accuracy of MCM-based RFA was assessed. The phantom study established that the in-plane accuracy of MCM-based RFA is 0.1 mm and the out-of-plane accuracy is 0.9 mm. The rotational accuracy is 0.1 degrees. A ninth-order polynomial model was used to correct for image distortion. Marker-based RFA was estimated to have, in a worst-case scenario, an in vivo translational accuracy of 0.14 mm (x-axis), 0.17 mm (y-axis), and 1.9 mm (z-axis), and a rotational accuracy of 0.3 degrees. When using fluoroscopy to study kinematics, image distortion and the accuracy of models are important factors influencing the accuracy of the measurements. MCM-based RFA has the potential to be an accurate, clinically useful tool for studying kinematics after total joint replacement using standard equipment.
Error correcting coding-theory for structured light illumination systems
NASA Astrophysics Data System (ADS)
Porras-Aguilar, Rosario; Falaggis, Konstantinos; Ramos-Garcia, Ruben
2017-06-01
Intensity-discrete structured light illumination systems project a series of projection patterns for the estimation of the absolute fringe order using only the temporal grey-level sequence at each pixel. This work proposes the use of error-correcting codes for pixel-wise correction of measurement errors. The use of an error-correcting code is advantageous in many ways: it reduces the effect of random intensity noise, it corrects outliers near the fringe borders that commonly occur when using intensity-discrete patterns, and it provides robustness in case of severe measurement errors (even burst errors where whole frames are lost). The latter aspect is particularly interesting in environments with varying ambient light as well as in safety-critical applications, e.g., the monitoring of deformations of components in nuclear power plants, where high reliability must be ensured even in case of short measurement disruptions. A special form of burst error is so-called salt-and-pepper noise, which can largely be removed with error-correcting codes using only the information of a given pixel. The performance of this technique is evaluated using both simulations and experiments.
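As a minimal illustration of how a per-pixel code can remove salt-and-pepper noise from a temporal grey-level sequence, consider a repetition code with majority-vote decoding. This is an assumed stand-in for demonstration; the abstract does not specify the actual codes used:

```python
def encode(bits, r=3):
    """Repeat each bit of the temporal sequence r times."""
    return [b for bit in bits for b in [bit] * r]

def decode(frames, r=3):
    """Majority vote over each group of r frames; corrects isolated
    (salt-and-pepper) bit flips within each group."""
    decoded = []
    for i in range(0, len(frames), r):
        group = frames[i:i + r]
        decoded.append(1 if sum(group) > r // 2 else 0)
    return decoded

# A single "pepper" error in one frame is voted away:
frames = encode([1, 0, 1])
frames[1] = 0                 # corrupt one frame of the first bit
recovered = decode(frames)    # -> [1, 0, 1]
```

Real structured-light codes trade more cleverly between pattern count and correction capability, but the per-pixel decoding principle is the same.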
Reed-Solomon error-correction as a software patch mechanism.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pendley, Kevin D.
This report explores how error-correction data generated by a Reed-Solomon code may be used as a mechanism to apply changes to an existing installed codebase. Using the Reed-Solomon code to generate error-correction data for a changed or updated codebase will allow the error-correction data to be applied to an existing codebase to both validate and introduce changes or updates from some upstream source to the existing installed codebase.
Lin, Mu-Han; Veltchev, Iavor; Koren, Sion; Ma, Charlie; Li, Jinsgeng
2015-07-08
Robotic radiosurgery systems have been increasingly employed for extracranial treatments. This work aims to study the feasibility of a cylindrical diode array and a planar ion chamber array for patient-specific QA with this robotic radiosurgery system and to compare their performance. Fiducial markers were implanted in both systems to enable image-based setup. An in-house program was developed to postprocess the movie file of the measurements and apply beam-by-beam angular corrections for both systems. The impact of noncoplanar delivery was then assessed by evaluating the angles created by the incident beams with respect to the two detector arrangements and cross-comparing the planned dose distribution to the measured ones with/without the angular corrections. The sensitivity of detecting translational (1-3 mm) and rotational (1°-3°) delivery errors was also evaluated for both systems. Six extracranial patient plans (PTV 7-137 cm³) were measured with these two systems and compared with the calculated doses. The plan dose distributions were calculated with ray-tracing and the Monte Carlo (MC) method, respectively. With 0.8 × 0.8 mm² diodes, the output factors measured with the cylindrical diode array agree better with the commissioning data. The maximum angular correction for a given beam is 8.2% for the planar ion chamber array and 2.4% for the cylindrical diode array. The two systems demonstrate comparable sensitivity in detecting translational targeting errors, while the cylindrical diode array is more sensitive to rotational targeting errors. The MC method is necessary for dose calculations in the cylindrical diode array phantom because the ray-tracing algorithm fails to handle the high-Z diodes and the acrylic phantom. For all the patient plans, the cylindrical diode array / planar ion chamber array demonstrate 100% / >92% (3%/3 mm) and >96% / ~80% (2%/2 mm) passing rates.
The feasibility of using both systems for robotic radiosurgery system patient-specific QA has been demonstrated. For gamma evaluation, 2%/2 mm criteria for cylindrical diode array and 3%/3 mm criteria for planar ion chamber array are suggested. The customized angular correction is necessary as proven by the improved passing rate, especially with the planar ion chamber array system.
76 FR 44010 - Medicare Program; Hospice Wage Index for Fiscal Year 2012; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-22
.... 93.774, Medicare-- Supplementary Medical Insurance Program) Dated: July 15, 2011. Dawn L. Smalls... corrects technical errors that appeared in the notice of CMS ruling published in the Federal Register on... FR 26731), there were technical errors that are identified and corrected in the Correction of Errors...
Frequency of under-corrected refractive errors in elderly Chinese in Beijing.
Xu, Liang; Li, Jianjun; Cui, Tongtong; Tong, Zhongbiao; Fan, Guizhi; Yang, Hua; Sun, Baochen; Zheng, Yuanyuan; Jonas, Jost B
2006-07-01
The aim of the study was to evaluate the prevalence of under-corrected refractive error among elderly Chinese in the Beijing area. The population-based, cross-sectional, cohort study comprised 4,439 subjects out of 5,324 subjects asked to participate (response rate 83.4%) with an age of 40+ years. It was divided into a rural part [1,973 (44.4%) subjects] and an urban part [2,466 (55.6%) subjects]. Habitual and best-corrected visual acuity was measured. Under-corrected refractive error was defined as an improvement in visual acuity of the better eye of at least two lines with best possible refractive correction. The rate of under-corrected refractive error was 19.4% (95% confidence interval, 18.2, 20.6). In a multiple regression analysis, prevalence and size of under-corrected refractive error in the better eye was significantly associated with lower level of education (P<0.001), female gender (P<0.001), and age (P=0.001). Under-correction of refractive error is relatively common among elderly Chinese in the Beijing area when compared with data from other populations.
Augmented burst-error correction for UNICON laser memory. [digital memory
NASA Technical Reports Server (NTRS)
Lim, R. S.
1974-01-01
A single-burst-error correction system is described for data stored in the UNICON laser memory. In the proposed system, a long fire code with code length n greater than 16,768 bits was used as an outer code to augment an existing inner shorter fire code for burst error corrections. The inner fire code is a (80,64) code shortened from the (630,614) code, and it is used to correct a single-burst-error on a per-word basis with burst length b less than or equal to 6. The outer code, with b less than or equal to 12, would be used to correct a single-burst-error on a per-page basis, where a page consists of 512 32-bit words. In the proposed system, the encoding and error detection processes are implemented by hardware. A minicomputer, currently used as a UNICON memory management processor, is used on a time-demanding basis for error correction. Based upon existing error statistics, this combination of an inner code and an outer code would enable the UNICON system to obtain a very low error rate in spite of flaws affecting the recorded data.
Schill, Matthew R.; Varela, J. Esteban; Frisella, Margaret M.; Brunt, L. Michael
2015-01-01
Background We compared performance of validated laparoscopic tasks on four commercially available single site access (SSA) devices (AD) versus an independent port (IP) SSA set-up. Methods A prospective, randomized comparison of laparoscopic skills performance on four AD (GelPOINT™, SILS™ Port, SSL Access System™, TriPort™) and one IP SSA set-up was conducted. Eighteen medical students (2nd–4th year), four surgical residents, and five attending surgeons were trained to proficiency in multi-port laparoscopy using four laparoscopic drills (peg transfer, bean drop, pattern cutting, extracorporeal suturing) in a laparoscopic trainer box. Drills were then performed in random order on each IP-SSA and AD-SSA set-up using straight laparoscopic instruments. Repetitions were timed and errors recorded. Data are mean ± SD, and statistical analysis was by two-way ANOVA with Tukey HSD post-hoc tests. Results Attending surgeons had significantly faster total task times than residents or students (p<0.001), but the difference between residents and students was NS. Pair-wise comparisons revealed significantly faster total task times for the IP-SSA set-up compared to all four AD-SSAs within the student group only (p<0.05). Total task times for residents and attending surgeons showed a similar profile, but the differences were NS. When data for the three groups were combined, the total task time was less for the IP-SSA set-up than for each of the four AD-SSA set-ups (p<0.001). Similarly, the IP-SSA set-up was significantly faster than 3 of 4 AD-SSA set-ups for peg transfer, 3 of 4 for pattern cutting, and 2 of 4 for suturing. No significant differences in error rates between IP-SSA and AD-SSA set-ups were detected. Conclusions When compared to an IP-SSA laparoscopic set-up, single site access devices are associated with longer task performance times in a trainer box model, independent of level of training. Task performance was similar across different SSA devices.
PMID:21993938
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carson, M; Molineu, A; Taylor, P
Purpose: To analyze the most recent results of IROC Houston’s anthropomorphic H&N phantom to determine the nature of failing irradiations and the feasibility of altering pass/fail credentialing criteria. Methods: IROC Houston’s H&N phantom, used for IMRT credentialing for NCI-sponsored clinical trials, requires that an institution’s treatment plan agree with measurement within 7% (TLD doses) and that ≥85% of pixels pass 7%/4 mm gamma analysis. 156 phantom irradiations (November 2014 – October 2015) were re-evaluated using tighter criteria: 1) 5% TLD and 5%/4 mm, 2) 5% TLD and 5%/3 mm, 3) 4% TLD and 4%/4 mm, and 4) 3% TLD and 3%/3 mm. Failure/poor performance rates were evaluated with respect to individual film and TLD performance by location in the phantom. Overall poor phantom results were characterized qualitatively as systematic (dosimetric) errors, setup errors/positional shifts, global but non-systematic errors, and errors affecting only a local region. Results: The pass rate for these phantoms using current criteria is 90%. Substituting criteria 1-4 reduces the overall pass rate to 77%, 70%, 63%, and 37%, respectively. Statistical analyses indicated the probability of noise-induced TLD failure at the 5% criterion was <0.5%. Using criterion 1, TLD results were most often the cause of failure (86% failed TLD while 61% failed film), with most failures identified in the primary PTV (77% of cases). Other criteria posed similar results. Irradiations that failed from film only were overwhelmingly associated with phantom shifts/setup errors (≥80% of cases). Results failing criterion 1 were primarily diagnosed as systematic: 58% of cases. 11% were setup/positioning errors, 8% were global non-systematic errors, and 22% were local errors. Conclusion: This study demonstrates that 5% TLD and 5%/4 mm gamma criteria may be both practically and theoretically achievable.
Further work is necessary to diagnose and resolve dosimetric inaccuracy in these trials, particularly for systematic dose errors. This work is funded by NCI Grant CA180803.
Spatially coupled low-density parity-check error correction for holographic data storage
NASA Astrophysics Data System (ADS)
Ishii, Norihiko; Katano, Yutaro; Muroi, Tetsuhiko; Kinoshita, Nobuhiro
2017-09-01
A spatially coupled low-density parity-check (SC-LDPC) code was considered for holographic data storage, and its superiority was studied by simulation. The simulations show that the performance of SC-LDPC depends on the lifting number: when the lifting number is over 100, SC-LDPC shows better error correctability than irregular LDPC. SC-LDPC was applied to the 5:9 modulation code, which is one of the differential codes. In simulation, the error-free point is near 2.8 dB and error rates of over 10^-1 can be corrected. From these simulation results, this error correction code can be applied to actual holographic data storage test equipment; there, an error rate of 8 × 10^-2 was corrected, showing that the code works effectively and has good error correctability.
Adaptive control for accelerators
Eaton, Lawrie E.; Jachim, Stephen P.; Natter, Eckard F.
1991-01-01
An adaptive feedforward control loop is provided to stabilize accelerator beam loading of the radio frequency field in an accelerator cavity during successive pulses of the beam into the cavity. A digital signal processor enables an adaptive algorithm to generate a feedforward error correcting signal functionally determined by the feedback error obtained by a beam pulse loading the cavity after the previous correcting signal was applied to the cavity. Each cavity feedforward correcting signal is successively stored in the digital processor and modified by the feedback error resulting from its application to generate the next feedforward error correcting signal. A feedforward error correcting signal is generated by the digital processor in advance of the beam pulse to enable a composite correcting signal and the beam pulse to arrive concurrently at the cavity.
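The pulse-to-pulse update rule described here — each feedforward correcting signal is the stored previous signal modified by the measured feedback error — can be sketched as an iterative learning loop. The plant model and gain below are invented for illustration, not the accelerator hardware:

```python
def run_pulse(feedforward, disturbance):
    """Toy cavity model: the residual error is the repeatable beam-loading
    disturbance minus the applied feedforward correction."""
    return [d - u for d, u in zip(disturbance, feedforward)]

disturbance = [0.5, 1.0, 0.8]        # repeatable transient, sampled 3 times
feedforward = [0.0, 0.0, 0.0]
gain = 0.5                           # learning gain (illustrative)

for pulse in range(20):
    error = run_pulse(feedforward, disturbance)
    # next correcting signal = stored signal modified by the feedback error
    feedforward = [u + gain * e for u, e in zip(feedforward, error)]
```

Because the disturbance repeats from pulse to pulse, the residual error shrinks geometrically by a factor of (1 - gain) per pulse, so the stored correction converges to the disturbance.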
A systematic comparison of error correction enzymes by next-generation sequencing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lubock, Nathan B.; Zhang, Di; Sidore, Angus M.
Gene synthesis, the process of assembling gene-length fragments from shorter groups of oligonucleotides (oligos), is becoming an increasingly important tool in molecular and synthetic biology. The length, quality and cost of gene synthesis are limited by errors produced during oligo synthesis and subsequent assembly. Enzymatic error correction methods are cost-effective means to ameliorate errors in gene synthesis. Previous analyses of these methods relied on cloning and Sanger sequencing to evaluate their efficiencies, limiting quantitative assessment. Here, we develop a method to quantify errors in synthetic DNA by next-generation sequencing. We analyzed errors in model gene assemblies and systematically compared six different error correction enzymes across 11 conditions. We find that ErrASE and T7 Endonuclease I are the most effective at decreasing average error rates (up to 5.8-fold relative to the input), whereas MutS is the best for increasing the number of perfect assemblies (up to 25.2-fold). We are able to quantify differential specificities, such that ErrASE preferentially corrects C/G transversions whereas T7 Endonuclease I preferentially corrects A/T transversions. More generally, this experimental and computational pipeline is a fast, scalable and extensible way to analyze errors in gene assemblies, to profile error correction methods, and to benchmark DNA synthesis methods.
Error detection and correction unit with built-in self-test capability for spacecraft applications
NASA Technical Reports Server (NTRS)
Timoc, Constantin
1990-01-01
The objective of this project was to research and develop a 32-bit single chip Error Detection and Correction unit capable of correcting all single bit errors and detecting all double bit errors in the memory systems of a spacecraft. We designed the 32-bit EDAC (Error Detection and Correction unit) based on a modified Hamming code and according to the design specifications and performance requirements. We constructed a laboratory prototype (breadboard) which was converted into a fault simulator. The correctness of the design was verified on the breadboard using an exhaustive set of test cases. A logic diagram of the EDAC was delivered to JPL Section 514 on 4 Oct. 1988.
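The single-error-correcting, double-error-detecting behaviour of such an EDAC can be demonstrated at a small scale. The sketch below uses a Hamming(7,4) code plus an overall parity bit — the same family of codes, though not the 32-bit modified Hamming code of the actual unit:

```python
def encode(data):
    """data: four bits [d1, d2, d3, d4] -> [p1, p2, d1, p4, d2, d3, d4, p0]."""
    d1, d2, d3, d4 = data
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p4 = d2 ^ d3 ^ d4
    word = [p1, p2, d1, p4, d2, d3, d4]
    p0 = word[0] ^ word[1] ^ word[2] ^ word[3] ^ word[4] ^ word[5] ^ word[6]
    return word + [p0]            # overall parity bit for double detection

def decode(w):
    """Return (data, status); status is 'ok', 'corrected' or 'double'."""
    s1 = w[0] ^ w[2] ^ w[4] ^ w[6]
    s2 = w[1] ^ w[2] ^ w[5] ^ w[6]
    s4 = w[3] ^ w[4] ^ w[5] ^ w[6]
    syndrome = s1 + 2 * s2 + 4 * s4
    overall = 0
    for bit in w:
        overall ^= bit
    if syndrome == 0 and overall == 0:
        return [w[2], w[4], w[5], w[6]], "ok"          # no error
    if overall == 1:                                   # single error: fix it
        w = list(w)
        if syndrome == 0:
            w[7] ^= 1       # the overall parity bit itself was hit
        else:
            w[syndrome - 1] ^= 1
        return [w[2], w[4], w[5], w[6]], "corrected"
    return None, "double"                              # detected, not corrected
```

A 32-bit EDAC works the same way with more check bits (7 plus overall parity), but the syndrome logic scales directly.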
How EFL Students Can Use Google to Correct Their "Untreatable" Written Errors
ERIC Educational Resources Information Center
Geiller, Luc
2014-01-01
This paper presents the findings of an experiment in which a group of 17 French post-secondary EFL learners used Google to self-correct several "untreatable" written errors. Whether or not error correction leads to improved writing has been much debated, some researchers dismissing it as useless and others arguing that error feedback…
Critical Neural Substrates for Correcting Unexpected Trajectory Errors and Learning from Them
ERIC Educational Resources Information Center
Mutha, Pratik K.; Sainburg, Robert L.; Haaland, Kathleen Y.
2011-01-01
Our proficiency at any skill is critically dependent on the ability to monitor our performance, correct errors and adapt subsequent movements so that errors are avoided in the future. In this study, we aimed to dissociate the neural substrates critical for correcting unexpected trajectory errors and learning to adapt future movements based on…
ERIC Educational Resources Information Center
Nicewander, W. Alan
2018-01-01
Spearman's correction for attenuation (measurement error) corrects a correlation coefficient for measurement errors in either or both of two variables, and follows from the assumptions of classical test theory. Spearman's equation removes all measurement error from a correlation coefficient, which translates into "increasing the reliability of…
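Spearman's formula divides the observed correlation by the square root of the product of the two reliabilities. A one-line Python version (the example numbers are made up):

```python
from math import sqrt

def correct_attenuation(r_xy, rel_x, rel_y):
    """Disattenuated correlation: r_xy / sqrt(rel_x * rel_y)."""
    return r_xy / sqrt(rel_x * rel_y)

# Observed correlation .42 between tests with reliabilities .80 and .70:
r_true = correct_attenuation(0.42, 0.80, 0.70)   # ~0.561
```

With perfect reliabilities (both 1.0) the correction leaves the correlation unchanged, which is the classical-test-theory limiting case.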
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tseng, T; Sheu, R; Todorov, B
2014-06-15
Purpose: To evaluate initial setup accuracy for stereotactic radiosurgery (SRS) between the Brainlab frame-based and frameless immobilization systems, and to discern the magnitude of the effect the frameless system has on setup parameters. Methods: The correction shifts from the original setup were compared for a total of 157 SRS cranial treatments (69 frame-based vs. 88 frameless). All treatments were performed on a Novalis linac with the ExacTrac positioning system. A localization box with isocenter overlay was used for initial setup, and the correction shift was determined by ExacTrac 6D auto-fusion to achieve submillimeter accuracy for treatment. For frameless treatments, the mean time interval between simulation and treatment was 5.7 days (range 0–13). Pearson chi-square was used for univariate analysis. Results: The correctional radial shifts (mean±STD, median) for the frame-based and frameless systems measured by ExacTrac were 1.2±1.2 mm, 1.1 mm and 3.1±3.3 mm, 2.0 mm, respectively. Treatments with the frameless system had a radial shift >2 mm more often than those with frames (51.1% vs. 2.9%; p<.0001). To achieve submillimeter accuracy, 85.5% of frame-based treatments did not require a shift, whereas only 23.9% of frameless treatments succeeded with the initial setup. There was no statistically significant systematic offset observed in any direction for either system. For frameless treatments, those treated ≥3 days from simulation had statistically higher rates of radial shifts between 1–2 mm and >2 mm compared with patients treated a shorter time after simulation (34.3% and 56.7% vs. 28.6% and 33.3%, respectively; p=0.006). Conclusion: Although an image-guided positioning system can also achieve submillimeter accuracy for the frameless system, users should be cautious regarding the inherent uncertainty of its immobilization capability. A proper quality assurance procedure for frameless mask manufacturing and a protocol for intra-fraction imaging verification will be crucial for the frameless system.
The time interval between simulation and treatment influenced initial setup accuracy. A shorter time frame for frameless SRS treatment could help minimize uncertainties in localization.
Dinges, Eric; Felderman, Nicole; McGuire, Sarah; Gross, Brandie; Bhatia, Sudershan; Mott, Sarah; Buatti, John; Wang, Dongxu
2015-01-01
Background and Purpose This study evaluates the potential efficacy and robustness of functional bone marrow sparing (BMS) using intensity-modulated proton therapy (IMPT) for cervical cancer, with the goal of reducing hematologic toxicity. Material and Methods IMPT plans with a prescription dose of 45 Gy were generated for ten patients who had received BMS intensity-modulated x-ray therapy (IMRT). Functional bone marrow was identified by 18F-fluorothymidine positron emission tomography. IMPT plans were designed to minimize the volume of functional bone marrow receiving 5–40 Gy while maintaining similar target coverage and healthy organ sparing as IMRT. IMPT robustness was analyzed with ±3% range uncertainty errors and/or ±3 mm translational setup errors in all three principal dimensions. Results In the static scenario, the median dose volume reductions for functional bone marrow by IMPT were: 32% for V5Gy, 47% for V10Gy, 54% for V20Gy, and 57% for V40Gy, all with p<0.01 compared to IMRT. With assumed errors, even the worst-case reductions by IMPT were: 23% for V5Gy, 37% for V10Gy, 41% for V20Gy, and 39% for V40Gy, all with p<0.01. Conclusions The potential sparing of functional bone marrow by IMPT for cervical cancer is significant and robust under realistic systematic range uncertainties and clinically relevant setup errors. PMID:25981130
Conversion of radius of curvature to power (and vice versa)
NASA Astrophysics Data System (ADS)
Wickenhagen, Sven; Endo, Kazumasa; Fuchs, Ulrike; Youngworth, Richard N.; Kiontke, Sven R.
2015-09-01
Manufacturing optical components relies on good measurements and specifications. One of the most precise measurements routinely required is the form accuracy. In practice, form deviation from the ideal surface effectively consists of low-frequency errors, where the form error most often accounts for no more than a few undulations across a surface. These types of errors are measured in a variety of ways, including interferometry and tactile methods like profilometry, with the latter often being employed for aspheres and general surface shapes such as freeforms. This paper provides a basis for a correct description of power and radius of curvature tolerances, including best practices and calculating the power value with respect to the radius deviation (and vice versa) of the surface form. A consistent definition of the sagitta is presented, along with different cases in manufacturing that are of interest to fabricators and designers. The results make clear how the definitions and results should be documented, for all measurement setups. Relationships between power and radius of curvature are shown that allow specifying the preferred metric based on final accuracy and measurement method. Results shown include all necessary equations for conversion to give optical designers and manufacturers a consistent and robust basis for decision-making. The paper also gives guidance on preferred methods for different scenarios for surface types, accuracy required, and metrology methods employed.
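As a concrete illustration of the radius-to-power conversion this abstract describes, here is a minimal sketch using the common sagitta-based convention (one interferometric fringe corresponding to λ/2 of sag change at the HeNe wavelength); the function names and the small-deviation inversion formula are assumptions for this example, not the paper's own notation:

```python
import math

def sag(R, h):
    """Sagitta of a spherical cap of radius R (mm) over semi-aperture h (mm)."""
    return R - math.sqrt(R * R - h * h)

def power_fringes(R, dR, h, wavelength=632.8e-6):
    """Power (in fringes) produced by a radius deviation dR over semi-aperture h.
    One fringe is taken as lambda/2 of sag change (double-pass test)."""
    ds = sag(R + dR, h) - sag(R, h)
    return ds / (wavelength / 2.0)

def radius_error(R, fringes, h, wavelength=632.8e-6):
    """Invert power to radius deviation using the small-deviation
    approximation ds ≈ -dR * h**2 / (2 * R**2)."""
    ds = fringes * (wavelength / 2.0)
    return -2.0 * R * R * ds / (h * h)
```

For R = 100 mm and h = 10 mm, a 0.1 mm radius error corresponds to roughly 1.6 fringes of power, and inverting recovers the radius error to within the small-deviation approximation.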
Study of the retardance of a birefringent waveplate at tilt incidence by Mueller matrix ellipsometer
NASA Astrophysics Data System (ADS)
Gu, Honggang; Chen, Xiuguo; Zhang, Chuanwei; Jiang, Hao; Liu, Shiyuan
2018-01-01
Birefringent waveplates are indispensable optical elements for polarization state modification in various optical systems. The retardance of a birefringent waveplate will change significantly when the incident angle of the light varies. Therefore, it is of great importance to study such field-of-view errors on the polarization properties, especially the retardance of a birefringent waveplate, for the performance improvement of the system. In this paper, we propose a generalized retardance formula at arbitrary incidence and azimuth for a general plane-parallel composite waveplate consisting of multiple aligned single waveplates. An efficient method and corresponding experimental set-up have been developed to characterize the retardance versus the field-of-view angle based on a constructed spectroscopic Mueller matrix ellipsometer. Both simulations and experiments on an MgF2 biplate over an incident angle of 0°-8° and an azimuthal angle of 0°-360° are presented as an example, and the dominant experimental errors are discussed and corrected. The experimental results strongly agree with the simulations with a maximum difference of 0.15° over the entire field of view, which indicates the validity and great potential of the presented method for birefringent waveplate characterization at tilt incidence.
A modified adjoint-based grid adaptation and error correction method for unstructured grid
NASA Astrophysics Data System (ADS)
Cui, Pengcheng; Li, Bin; Tang, Jing; Chen, Jiangtao; Deng, Youqi
2018-05-01
Grid adaptation is an important strategy to improve the accuracy of output functions (e.g. drag, lift, etc.) in computational fluid dynamics (CFD) analysis and design applications. This paper presents a modified robust grid adaptation and error correction method for reducing simulation errors in integral outputs. The procedure is based on discrete adjoint optimization theory in which the estimated global error of output functions can be directly related to the local residual error. According to this relationship, local residual error contribution can be used as an indicator in a grid adaptation strategy designed to generate refined grids for accurately estimating the output functions. This grid adaptation and error correction method is applied to subsonic and supersonic simulations around three-dimensional configurations. Numerical results demonstrate that the sensitive grids to output functions are detected and refined after grid adaptation, and the accuracy of output functions is obviously improved after error correction. The proposed grid adaptation and error correction method is shown to compare very favorably in terms of output accuracy and computational efficiency relative to the traditional featured-based grid adaptation.
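The adjoint-weighted residual relationship described above can be sketched in a few lines; the residual and adjoint arrays below are random stand-ins for quantities a real CFD solver would supply, used only to show how the output-error estimate and the cell-wise adaptation indicator are formed:

```python
import numpy as np

rng = np.random.default_rng(2)
n_cells = 100

R = rng.standard_normal(n_cells) * 1e-3   # local residuals on a coarse grid (stand-in)
psi = rng.standard_normal(n_cells)        # discrete adjoint solution (stand-in)

# Adjoint-weighted residual: estimated error in the output functional
# (e.g. drag), and its cell-wise contributions, which serve as the
# adaptation indicator -- refine the cells with the largest |psi_i * R_i|.
delta_J = psi @ R
indicator = np.abs(psi * R)
refine = np.argsort(indicator)[-10:]      # indices of the 10 cells to refine
```

The estimate `delta_J` can also be added back to the computed output as the error correction, which is the second half of the procedure the abstract describes.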
Kuwabara, Masaru; Mansouri, Farshad A.; Buckley, Mark J.
2014-01-01
Monkeys were trained to select one of three targets by matching in color or matching in shape to a sample. Because the matching rule frequently changed and there were no cues for the currently relevant rule, monkeys had to maintain the relevant rule in working memory to select the correct target. We found that monkeys' error commission was not limited to the period after the rule change and occasionally occurred even after several consecutive correct trials, indicating that the task was cognitively demanding. In trials immediately after such error trials, monkeys' speed of selecting targets was slower. Additionally, in trials following consecutive correct trials, the monkeys' target selections for erroneous responses were slower than those for correct responses. We further found evidence for the involvement of the cortex in the anterior cingulate sulcus (ACCs) in these error-related behavioral modulations. First, ACCs cell activity differed between after-error and after-correct trials. In another group of ACCs cells, the activity differed depending on whether the monkeys were making a correct or erroneous decision in target selection. Second, bilateral ACCs lesions significantly abolished the response slowing both in after-error trials and in error trials. The error likelihood in after-error trials could be inferred by the error feedback in the previous trial, whereas the likelihood of erroneous responses after consecutive correct trials could be monitored only internally. These results suggest that ACCs represent both context-dependent and internally detected error likelihoods and promote modes of response selections in situations that involve these two types of error likelihood. PMID:24872558
Efficient error correction for next-generation sequencing of viral amplicons
2012-01-01
Background Next-generation sequencing allows the analysis of an unprecedented number of viral sequence variants from infected patients, presenting a novel opportunity for understanding virus evolution, drug resistance and immune escape. However, sequencing in bulk is error prone. Thus, the generated data require error identification and correction. Most error-correction methods to date are not optimized for amplicon analysis and assume that the error rate is randomly distributed. Recent quality assessment of amplicon sequences obtained using 454-sequencing showed that the error rate is strongly linked to the presence and size of homopolymers, position in the sequence and length of the amplicon. All these parameters are strongly sequence specific and should be incorporated into the calibration of error-correction algorithms designed for amplicon sequencing. Results In this paper, we present two new efficient error correction algorithms optimized for viral amplicons: (i) k-mer-based error correction (KEC) and (ii) empirical frequency threshold (ET). Both were compared to a previously published clustering algorithm (SHORAH), in order to evaluate their relative performance on 24 experimental datasets obtained by 454-sequencing of amplicons with known sequences. All three algorithms show similar accuracy in finding true haplotypes. However, KEC and ET were significantly more efficient than SHORAH in removing false haplotypes and estimating the frequency of true ones. Conclusions Both algorithms, KEC and ET, are highly suitable for rapid recovery of error-free haplotypes obtained by 454-sequencing of amplicons from heterogeneous viruses. The implementations of the algorithms and data sets used for their testing are available at: http://alan.cs.gsu.edu/NGS/?q=content/pyrosequencing-error-correction-algorithm PMID:22759430
Efficient error correction for next-generation sequencing of viral amplicons.
Skums, Pavel; Dimitrova, Zoya; Campo, David S; Vaughan, Gilberto; Rossi, Livia; Forbi, Joseph C; Yokosawa, Jonny; Zelikovsky, Alex; Khudyakov, Yury
2012-06-25
Next-generation sequencing allows the analysis of an unprecedented number of viral sequence variants from infected patients, presenting a novel opportunity for understanding virus evolution, drug resistance and immune escape. However, sequencing in bulk is error prone. Thus, the generated data require error identification and correction. Most error-correction methods to date are not optimized for amplicon analysis and assume that the error rate is randomly distributed. Recent quality assessment of amplicon sequences obtained using 454-sequencing showed that the error rate is strongly linked to the presence and size of homopolymers, position in the sequence and length of the amplicon. All these parameters are strongly sequence specific and should be incorporated into the calibration of error-correction algorithms designed for amplicon sequencing. In this paper, we present two new efficient error correction algorithms optimized for viral amplicons: (i) k-mer-based error correction (KEC) and (ii) empirical frequency threshold (ET). Both were compared to a previously published clustering algorithm (SHORAH), in order to evaluate their relative performance on 24 experimental datasets obtained by 454-sequencing of amplicons with known sequences. All three algorithms show similar accuracy in finding true haplotypes. However, KEC and ET were significantly more efficient than SHORAH in removing false haplotypes and estimating the frequency of true ones. Both algorithms, KEC and ET, are highly suitable for rapid recovery of error-free haplotypes obtained by 454-sequencing of amplicons from heterogeneous viruses. The implementations of the algorithms and data sets used for their testing are available at: http://alan.cs.gsu.edu/NGS/?q=content/pyrosequencing-error-correction-algorithm.
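A toy sketch of the k-mer heuristic underlying methods of this kind (this is not the published KEC algorithm, only the core idea that read positions covered exclusively by rare k-mers are likely sequencing errors):

```python
from collections import Counter

def kmer_counts(reads, k):
    """Count every k-mer across all reads."""
    counts = Counter()
    for r in reads:
        for i in range(len(r) - k + 1):
            counts[r[i:i + k]] += 1
    return counts

def flag_error_positions(read, counts, k, threshold):
    """Return positions of a read covered only by k-mers rarer than
    `threshold`; those positions are suspected sequencing errors."""
    suspect = [True] * len(read)
    for i in range(len(read) - k + 1):
        if counts[read[i:i + k]] >= threshold:
            for j in range(i, i + k):
                suspect[j] = False
    return [i for i, s in enumerate(suspect) if s]
```

In a real pipeline the threshold would be calibrated per dataset, e.g. against homopolymer content, position, and amplicon length, which is exactly the sequence-specific calibration the abstract argues for.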
Correcting AUC for Measurement Error.
Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang
2015-12-01
Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). The diagnostic biomarkers are usually measured with error. Ignoring measurement error can cause biased estimation of AUC, which results in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct AUC for measurement error, most of which required the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.
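For intuition, the classical closed-form deattenuation under a binormal model with equal variances can be sketched as below; note that this normality-based formula is precisely what the proposed nonparametric method avoids, so it is shown only as a baseline:

```python
from statistics import NormalDist

def corrected_auc(auc_obs, reliability):
    """Deattenuate an observed AUC under an equal-variance binormal model,
    given the biomarker's reliability rho = var(true) / var(observed).
    Measurement error shrinks the standardized case-control separation by
    sqrt(rho), so the correction divides it back out.
    (A classical textbook correction, not the paper's nonparametric method.)"""
    nd = NormalDist()
    return nd.cdf(nd.inv_cdf(auc_obs) / reliability ** 0.5)
```

With perfect reliability the AUC is unchanged; with reliability 0.5, an observed AUC of 0.70 deattenuates to about 0.77, illustrating how ignoring measurement error understates a biomarker's discrimination.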
Inoue, Tatsuya; Widder, Joachim; van Dijk, Lisanne V; Takegawa, Hideki; Koizumi, Masahiko; Takashina, Masaaki; Usui, Keisuke; Kurokawa, Chie; Sugimoto, Satoru; Saito, Anneyuko I; Sasai, Keisuke; Van't Veld, Aart A; Langendijk, Johannes A; Korevaar, Erik W
2016-11-01
To investigate the impact of setup and range uncertainties, breathing motion, and interplay effects using scanning pencil beams in robustly optimized intensity modulated proton therapy (IMPT) for stage III non-small cell lung cancer (NSCLC). Three-field IMPT plans were created using a minimax robust optimization technique for 10 NSCLC patients. The plans accounted for 5- or 7-mm setup errors with ±3% range uncertainties. The robustness of the IMPT nominal plans was evaluated considering (1) isotropic 5-mm setup errors with ±3% range uncertainties; (2) breathing motion; (3) interplay effects; and (4) a combination of items 1 and 2. The plans were calculated using 4-dimensional and average intensity projection computed tomography images. The target coverage (TC, volume receiving 95% of prescribed dose) and homogeneity index (D2 - D98, where D2 and D98 are the least doses received by 2% and 98% of the volume) for the internal clinical target volume, and dose indices for lung, esophagus, heart, and spinal cord, were compared with those of clinical volumetric modulated arc therapy plans. The TC and homogeneity index for all plans were within clinical limits when the breathing motion and interplay effects were considered independently. The setup and range uncertainties had a larger impact when considered in combination. The TC decreased to <98% (clinical threshold) in 3 of 10 patients for robust 5-mm evaluations. However, the TC remained >98% for robust 7-mm evaluations for all patients. The organ-at-risk dose parameters did not significantly vary between the respective robust 5-mm and robust 7-mm evaluations for the 4 error types. Compared with the volumetric modulated arc therapy plans, the IMPT plans showed better target homogeneity, with mean lung and heart dose parameters reduced by about 40% and 60%, respectively. 
In robustly optimized IMPT for stage III NSCLC, the setup and range uncertainties, breathing motion, and interplay effects have limited impact on target coverage, dose homogeneity, and organ-at-risk dose parameters. Copyright © 2016 Elsevier Inc. All rights reserved.
DNA assembly with error correction on a droplet digital microfluidics platform.
Khilko, Yuliya; Weyman, Philip D; Glass, John I; Adams, Mark D; McNeil, Melanie A; Griffin, Peter B
2018-06-01
Custom synthesized DNA is in high demand for synthetic biology applications. However, current technologies to produce these sequences using assembly from DNA oligonucleotides are costly and labor-intensive. The automation and reduced sample volumes afforded by microfluidic technologies could significantly decrease materials and labor costs associated with DNA synthesis. The purpose of this study was to develop a gene assembly protocol utilizing a digital microfluidic device. Toward this goal, we adapted bench-scale oligonucleotide assembly methods followed by enzymatic error correction to the Mondrian™ digital microfluidic platform. We optimized Gibson assembly, polymerase chain reaction (PCR), and enzymatic error correction reactions in a single protocol to assemble 12 oligonucleotides into a 339-bp double-stranded DNA sequence encoding part of the human influenza virus hemagglutinin (HA) gene. The reactions were scaled down to 0.6–1.2 μL. Initial microfluidic assembly methods were successful and had an error frequency of approximately 4 errors/kb, with errors originating from the original oligonucleotide synthesis. Relative to conventional benchtop procedures, PCR optimization required additional amounts of MgCl2, Phusion polymerase, and PEG 8000 to achieve amplification of the assembly and error correction products. After one round of error correction, error frequency was reduced to an average of 1.8 errors/kb. We demonstrated that DNA assembly from oligonucleotides and error correction could be completely automated on a digital microfluidic (DMF) platform. The results demonstrate that enzymatic reactions in droplets show a strong dependence on surface interactions, and successful on-chip implementation required supplementation with surfactants, molecular crowding agents, and an excess of enzyme. 
Enzymatic error correction of assembled fragments improved sequence fidelity by 2-fold, which was a significant improvement but somewhat lower than expected compared to bench-top assays, suggesting an additional capacity for optimization.
NASA Astrophysics Data System (ADS)
Lidar, Daniel A.; Brun, Todd A.
2013-09-01
Prologue; Preface; Part I. Background: 1. Introduction to decoherence and noise in open quantum systems Daniel Lidar and Todd Brun; 2. Introduction to quantum error correction Dave Bacon; 3. Introduction to decoherence-free subspaces and noiseless subsystems Daniel Lidar; 4. Introduction to quantum dynamical decoupling Lorenza Viola; 5. Introduction to quantum fault tolerance Panos Aliferis; Part II. Generalized Approaches to Quantum Error Correction: 6. Operator quantum error correction David Kribs and David Poulin; 7. Entanglement-assisted quantum error-correcting codes Todd Brun and Min-Hsiu Hsieh; 8. Continuous-time quantum error correction Ognyan Oreshkov; Part III. Advanced Quantum Codes: 9. Quantum convolutional codes Mark Wilde; 10. Non-additive quantum codes Markus Grassl and Martin Rötteler; 11. Iterative quantum coding systems David Poulin; 12. Algebraic quantum coding theory Andreas Klappenecker; 13. Optimization-based quantum error correction Andrew Fletcher; Part IV. Advanced Dynamical Decoupling: 14. High order dynamical decoupling Zhen-Yu Wang and Ren-Bao Liu; 15. Combinatorial approaches to dynamical decoupling Martin Rötteler and Pawel Wocjan; Part V. Alternative Quantum Computation Approaches: 16. Holonomic quantum computation Paolo Zanardi; 17. Fault tolerance for holonomic quantum computation Ognyan Oreshkov, Todd Brun and Daniel Lidar; 18. Fault tolerant measurement-based quantum computing Debbie Leung; Part VI. Topological Methods: 19. Topological codes Héctor Bombín; 20. Fault tolerant topological cluster state quantum computing Austin Fowler and Kovid Goyal; Part VII. Applications and Implementations: 21. Experimental quantum error correction Dave Bacon; 22. Experimental dynamical decoupling Lorenza Viola; 23. Architectures Jacob Taylor; 24. Error correction in quantum communication Mark Wilde; Part VIII. Critical Evaluation of Fault Tolerance: 25. 
Hamiltonian methods in QEC and fault tolerance Eduardo Novais, Eduardo Mucciolo and Harold Baranger; 26. Critique of fault-tolerant quantum information processing Robert Alicki; References; Index.
New class of photonic quantum error correction codes
NASA Astrophysics Data System (ADS)
Silveri, Matti; Michael, Marios; Brierley, R. T.; Salmilehto, Juha; Albert, Victor V.; Jiang, Liang; Girvin, S. M.
We present a new class of quantum error correction codes for applications in quantum memories, communication and scalable computation. These codes are constructed from a finite superposition of Fock states and can exactly correct errors that are polynomial up to a specified degree in creation and destruction operators. Equivalently, they can perform approximate quantum error correction to any given order in time step for the continuous-time dissipative evolution under these errors. The codes are related to two-mode photonic codes but offer the advantage of requiring only a single photon mode to correct loss (amplitude damping), as well as the ability to correct other errors, e.g. dephasing. Our codes are also similar in spirit to photonic "cat codes" but have several advantages including smaller mean occupation number and exact rather than approximate orthogonality of the code words. We analyze how the rate of uncorrectable errors scales with the code complexity and discuss the unitary control for the recovery process. These codes are realizable with current superconducting qubit technology and can increase the fidelity of photonic quantum communication and memories.
Fixing Stellarator Magnetic Surfaces
NASA Astrophysics Data System (ADS)
Hanson, James D.
1999-11-01
Magnetic surfaces are a perennial issue for stellarators. The design heuristic of finding a magnetic field with zero perpendicular component on a specified outer surface often yields inner magnetic surfaces with very small resonant islands. However, magnetic fields in the laboratory are not design fields. Island-causing errors can arise from coil placement errors, stray external fields, and design inadequacies such as ignoring coil leads and incomplete characterization of current distributions within the coil pack. The problem addressed is how to eliminate such error-caused islands. I take a perturbation approach, where the zero order field is assumed to have good magnetic surfaces, and comes from a VMEC equilibrium. The perturbation field consists of error and correction pieces. The error correction method is to determine the correction field so that the sum of the error and correction fields gives zero island size at specified rational surfaces. It is particularly important to correctly calculate the island size for a given perturbation field. The method works well with many correction knobs, and a Singular Value Decomposition (SVD) technique is used to determine minimal corrections necessary to eliminate islands.
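The minimal-correction step can be sketched with a pseudoinverse, which is what an SVD-based solve with truncation amounts to; the sensitivity matrix below is a random stand-in for the (linearized) island-size response to the correction knobs, not an actual stellarator model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linearized sensitivity: island size at 3 rational surfaces
# in response to 5 correction-coil currents (arbitrary units).
A = rng.standard_normal((3, 5))
b = rng.standard_normal(3)            # island sizes caused by field errors

# Minimal-norm correction currents that zero the islands: x = -pinv(A) @ b.
# The rcond cutoff truncates small singular values, discarding correction
# "knobs" that have little effect on the islands.
x = -np.linalg.pinv(A, rcond=1e-6) @ b

residual = A @ x + b                  # island sizes after correction
```

With more knobs than target surfaces the system is underdetermined, and the pseudoinverse picks the smallest correction that still drives the resonant island sizes to zero, matching the "minimal corrections" goal stated above.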
On the Limitations of Variational Bias Correction
NASA Technical Reports Server (NTRS)
Moradi, Isaac; Mccarty, Will; Gelaro, Ronald
2018-01-01
Satellite radiances are the largest dataset assimilated into Numerical Weather Prediction (NWP) models; however, the data are subject to errors and uncertainties that need to be accounted for before assimilation into the NWP models. Variational bias correction uses the time series of observation minus background to estimate the observation bias. This technique does not distinguish between background error, forward-operator error, and observation error, so all of these errors are summed together and counted as observation error. We identify some sources of observation error (e.g., antenna emissivity, nonlinearity in the calibration, and antenna pattern) and show the limitations of variational bias correction in estimating these errors.
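The predictor-based bias model at the heart of variational bias correction can be sketched with an ordinary least-squares fit to observation-minus-background departures (a stand-in for the variational update; the predictors and coefficients here are hypothetical). The sketch makes the abstract's point concrete: whatever lands in the noise term, whether background, forward-operator, or true observation error, is absorbed into the fitted bias indiscriminately:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Hypothetical bias predictors: constant offset, scan-angle proxy,
# and a layer-thickness proxy (all made up for this sketch).
P = np.column_stack([np.ones(n),
                     rng.uniform(-1, 1, n),
                     rng.uniform(0, 1, n)])

beta_true = np.array([0.5, -0.3, 0.8])   # assumed bias coefficients
noise = rng.standard_normal(n) * 0.1     # everything the scheme cannot separate
omb = P @ beta_true + noise              # observation-minus-background departures

# Least-squares fit of the bias model to O-B, standing in for the
# variational estimation of the bias coefficients.
beta_hat, *_ = np.linalg.lstsq(P, omb, rcond=None)
```

If part of `noise` were instead a systematic background or forward-operator error correlated with a predictor, it would be fitted into `beta_hat` and silently removed from the observations, which is the limitation the abstract describes.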
ERIC Educational Resources Information Center
Sampson, Andrew
2012-01-01
This paper reports on a small-scale study into the effects of uncoded correction (writing the correct forms above each error) and coded annotations (writing symbols that encourage learners to self-correct) on Colombian university-level EFL learners' written work. The study finds that while both coded annotations and uncoded correction appear to…
Bulk locality and quantum error correction in AdS/CFT
NASA Astrophysics Data System (ADS)
Almheiri, Ahmed; Dong, Xi; Harlow, Daniel
2015-04-01
We point out a connection between the emergence of bulk locality in AdS/CFT and the theory of quantum error correction. Bulk notions such as Bogoliubov transformations, location in the radial direction, and the holographic entropy bound all have natural CFT interpretations in the language of quantum error correction. We also show that the question of whether bulk operator reconstruction works only in the causal wedge or all the way to the extremal surface is related to the question of whether or not the quantum error correcting code realized by AdS/CFT is also a "quantum secret sharing scheme", and suggest a tensor network calculation that may settle the issue. Interestingly, the version of quantum error correction which is best suited to our analysis is the somewhat nonstandard "operator algebra quantum error correction" of Beny, Kempf, and Kribs. Our proposal gives a precise formulation of the idea of "subregion-subregion" duality in AdS/CFT, and clarifies the limits of its validity.
Reliable Channel-Adapted Error Correction: Bacon-Shor Code Recovery from Amplitude Damping
NASA Astrophysics Data System (ADS)
Piedrafita, Álvaro; Renes, Joseph M.
2017-12-01
We construct two simple error correction schemes adapted to amplitude damping noise for Bacon-Shor codes and investigate their prospects for fault-tolerant implementation. Both consist solely of Clifford gates and require far fewer qubits, relative to the standard method, to achieve exact correction to a desired order in the damping rate. The first, employing one-bit teleportation and single-qubit measurements, needs only one-fourth as many physical qubits, while the second, using just stabilizer measurements and Pauli corrections, needs only half. The improvements stem from the fact that damping events need only be detected, not corrected, and that effective phase errors arising due to undamped qubits occur at a lower rate than damping errors. For error correction that is itself subject to damping noise, we show that existing fault-tolerance methods can be employed for the latter scheme, while the former can be made to avoid potential catastrophic errors and can easily cope with damping faults in ancilla qubits.
TH-EF-BRB-11: Volumetric Modulated Arc Therapy for Total Body Irradiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ouyang, L; Folkerts, M; Hrycushko, B
Purpose: To develop a modern, patient-comfortable total body irradiation (TBI) technique suitable for standard-sized linac vaults. Methods: An indexed rotatable immobilization system (IRIS) was developed to make possible total-body CT imaging and radiation delivery on conventional couches. Treatment consists of multi-isocentric volumetric modulated arc therapy (VMAT) to the upper body and parallel-opposed fields to the lower body. Each isocenter is indexed to the couch and includes a 180° IRIS rotation between the upper and lower body fields. VMAT fields are optimized to satisfy lung dose objectives while achieving a uniform therapeutic dose to the torso. End-to-end tests with a Rando phantom were used to verify dosimetric characteristics. Treatment plan robustness regarding setup uncertainty was assessed by simulating global and regional isocenter setup shifts on patient data sets. Dosimetric comparisons were made with conventional extended-distance, standing TBI (cTBI) plans using a Monte Carlo-based calculation. Treatment efficiency was assessed for eight courses of patient treatment. Results: The IRIS system is level and orthogonal to the scanned CT image plane, with lateral shifts <2mm following rotation. End-to-end tests showed surface doses within ±10% of the prescription dose and field junction doses within ±15% of the prescription dose. Plan robustness tests showed <15% changes in dose with global setup errors up to 5mm in each direction. Local 5mm relative setup errors in the chest resulted in <5% dose changes. Local 5mm shift errors in the pelvic and upper leg junction resulted in <10% dose changes, while a 10mm shift error caused dose changes up to 25%. Dosimetric comparison with cTBI showed that VMAT-TBI has advantages in preserving chest wall dose with flexibility in leveraging the PTV-body and PTV-lung dose. Conclusion: VMAT-TBI with the IRIS system was shown to be clinically feasible as a cost-effective approach to TBI for standard-sized linac vaults.
Modeling coherent errors in quantum error correction
NASA Astrophysics Data System (ADS)
Greenbaum, Daniel; Dutton, Zachary
2018-01-01
Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(d^n-1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
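The gap between a coherent rotation and its Pauli approximation can be seen in a few lines: twirling a rotation exp(-iεX/2) over the Pauli group yields a stochastic X channel with error probability sin²(ε/2) ≈ ε²/4, while the coherent error amplitude itself scales as ε. A minimal numerical check of this standard relation (a textbook identity, not the paper's derivation):

```python
import numpy as np

eps = 0.1
X = np.array([[0, 1], [1, 0]], dtype=complex)
I = np.eye(2, dtype=complex)

# Coherent over-rotation about X by angle eps.
U = np.cos(eps / 2) * I - 1j * np.sin(eps / 2) * X

# Pauli twirling turns the channel rho -> U rho U^dag into a stochastic
# Pauli channel. For a pure X rotation, the only surviving error is X,
# with probability equal to the squared X component of U.
p_x = abs(np.trace(X.conj().T @ U) / 2) ** 2

# The twirled model assigns probability sin^2(eps/2) ~ eps^2/4, even though
# the coherent amplitude is O(eps) -- the mismatch the abstract analyzes.
assert np.isclose(p_x, np.sin(eps / 2) ** 2)
```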
Lüdtke, Oliver; Marsh, Herbert W; Robitzsch, Alexander; Trautwein, Ulrich
2011-12-01
In multilevel modeling, group-level variables (L2) for assessing contextual effects are frequently generated by aggregating variables from a lower level (L1). A major problem of contextual analyses in the social sciences is that there is no error-free measurement of constructs. In the present article, 2 types of error occurring in multilevel data when estimating contextual effects are distinguished: unreliability that is due to measurement error and unreliability that is due to sampling error. The fact that studies may or may not correct for these 2 types of error can be translated into a 2 × 2 taxonomy of multilevel latent contextual models comprising 4 approaches: an uncorrected approach, partial correction approaches correcting for either measurement or sampling error (but not both), and a full correction approach that adjusts for both sources of error. It is shown mathematically and with simulated data that the uncorrected and partial correction approaches can result in substantially biased estimates of contextual effects, depending on the number of L1 individuals per group, the number of groups, the intraclass correlation, the number of indicators, and the size of the factor loadings. However, the simulation study also shows that partial correction approaches can outperform full correction approaches when the data provide only limited information in terms of the L2 construct (i.e., small number of groups, low intraclass correlation). A real-data application from educational psychology is used to illustrate the different approaches.
AXAF VETA-I mirror encircled energy measurements and data reduction
NASA Technical Reports Server (NTRS)
Zhao, Ping; Freeman, Mark D.; Hughes, John P.; Kellogg, Edwin M.; Nguyen, Dan T.; Joy, Marshall; Kolodziejczak, Jeffery J.
1992-01-01
The AXAF VETA-I mirror encircled energy was measured with a series of apertures and two flow gas proportional counters at five X-ray energies ranging from 0.28 to 2.3 keV. The proportional counter has a thin plastic window with an opaque wire mesh supporting grid. Depending on the counter position, this mesh can cause the X-ray transmission to vary by as much as ±9 percent, which directly translates into an error in the encircled energy. In order to correct this wire mesh effect, window scan measurements were made, in which the counter was scanned in both the horizontal (Y) and vertical (Z) directions with the aperture fixed. Post-VETA measurements of the VXDS setup were made to determine the exact geometry and position of the mesh grid. Computer models of the window mesh were developed to simulate the X-ray transmission based on this measurement. The window scan data were fitted to such mesh models and corrections were made. After this study, the mesh effect was well understood and the final results of the encircled energy were obtained with an uncertainty of less than 0.8 percent.
NASA Astrophysics Data System (ADS)
Vidi, S.; Rausch, S.; Ebert, H. P.; Löhberg, A.; Petry, D.
2013-05-01
Measurements were done on a carbon fiber reinforced composite (CFC) sample tested for the space probe Bepi Colombo by using the guarded hot-plate (GHP) method. The values of interest were the thermal transmittance through the samples, (56.3 ± 3.6) W · m-2 · K-1, and the effective thermal conductivity (1.06 ± 0.07) W · m-1 · K-1. The samples consist of a light honeycomb core attached to thicker surface plates. Due to this construction, the effective thermal conductivity parallel to the face plates is higher than the effective thermal conductivity through the sample. This leads to lateral heat gains or losses during the GHP measurement, which in turn can lead to erroneous results. Furthermore, due to the high rigidity of the CFC material, there will be high contact resistances between the samples and the GHP apparatus plates. The separation of these thermal contact resistances from the total measured thermal resistance is essential in order to achieve correct results. Good results were achieved using a special measurement setup and a lateral correction method designed to reduce errors due to lateral heat flows.
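The separation of contact resistance from the measured total resistance reduces to simple series-resistance arithmetic. The numeric values below (sample thickness, total and contact resistances) are illustrative assumptions chosen only to be consistent with the quoted results, not data from the paper:

```python
# Separating contact resistance in a guarded hot-plate (GHP) measurement.
# Series model: R_total = R_sample + R_contact; then U = 1/R_sample and
# k_eff = U * d for a slab of thickness d.

d = 0.0188            # assumed sample thickness, m (not stated in the abstract)
R_total = 0.0200      # assumed total measured resistance, m2*K/W
R_contact = 0.002238  # assumed combined contact resistance, m2*K/W

R_sample = R_total - R_contact      # resistance of the sample alone
U = 1.0 / R_sample                  # thermal transmittance, W/(m2*K)
k_eff = U * d                       # effective thermal conductivity, W/(m*K)

print(f"U = {U:.1f} W/(m2*K), k_eff = {k_eff:.2f} W/(m*K)")
```

If the contact term were not subtracted, U (and hence k_eff) would be underestimated by roughly the ratio R_contact/R_total, here about 11 percent.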
Kasuga, Shoko; Kurata, Makiko; Liu, Meigen; Ushiba, Junichi
2015-05-01
Humans' sophisticated motor learning system paradoxically interferes with motor performance when visual information is mirror-reversed (MR), because normal movement error correction further aggravates the error. This error-increasing mechanism makes performing even a simple reaching task difficult, but can be overcome by altering the error correction rule during the trials. To isolate the factors that trigger learners to change the error correction rule, we manipulated the gain of visual angular errors when participants made arm-reaching movements with mirror-reversed visual feedback, and compared the timing of the rule alteration between groups with normal or reduced gain. Trial-by-trial changes in the visual angular error were tracked to explain the timing of the change in the error correction rule. Under both gain conditions, visual angular errors increased under the MR transformation, and suddenly decreased after 3-5 trials of increase. The increase became degressive at different amplitudes between the two groups, nearly proportional to the visual gain. The findings suggest that the alteration of the error-correction rule does not depend on the amplitude of visual angular errors, and is possibly determined by the number of trials over which the errors increased or by a statistical property of the environment. The current results encourage future intensive studies focusing on the exact rule-change mechanism. Copyright © 2014 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Yong; Zhou, Yong-Kang; Chen, Yi-Xing
Objective: A comprehensive clinical evaluation was conducted, assessing the Body Pro-Lok immobilization and positioning system to facilitate hypofractionated radiotherapy of intrahepatic hepatocellular carcinoma (HCC), using helical tomotherapy to improve treatment precision. Methods: Clinical applications of the Body Pro-Lok system were investigated (as above) in terms of interfractional and intrafractional setup errors and compressive abdominal breath control. To assess interfractional setup errors, a total of 42 patients who were given 5 to 20 fractions of helical tomotherapy for intrahepatic HCC were analyzed. Overall, 15 patients were immobilized using simple vacuum cushion (group A), and the Body Pro-Lok system was used in 27 patients (group B), performing megavoltage computed tomography (MVCT) scans 196 times and 435 times, respectively. Pretreatment MVCT scans were registered to the planning kilovoltage computed tomography (KVCT) for error determination, and group comparisons were made. To establish intrafractional setup errors, 17 patients with intrahepatic HCC were selected at random for immobilization by Body Pro-Lok system, undergoing MVCT scans after helical tomotherapy every week. A total of 46 MVCT re-scans were analyzed for this purpose. In researching breath control, 12 patients, randomly selected, were immobilized by Body Pro-Lok system and subjected to 2-phase 4-dimensional CT (4DCT) scans, with compressive abdominal control or in freely breathing states, respectively. Respiratory-induced liver motion was then compared. Results: Mean interfractional setup errors were as follows: (1) group A: X, 2.97 ± 2.47 mm; Y, 4.85 ± 4.04 mm; and Z, 3.77 ± 3.21 mm; pitch, 0.66 ± 0.62°; roll, 1.09 ± 1.06°; and yaw, 0.85 ± 0.82°; and (2) group B: X, 2.23 ± 1.79 mm; Y, 4.10 ± 3.36 mm; and Z, 1.67 ± 1.91 mm; pitch, 0.45 ± 0.38°; roll, 0.77 ± 0.63°; and yaw, 0.52 ± 0.49°. Between-group differences were statistically significant in 6 directions (p < 0.05). 
Mean intrafractional setup errors with use of the Body Pro-Lok system were as follows: X, 0.41 ± 0.46 mm; Y, 0.86 ± 0.80 mm; Z, 0.33 ± 0.44 mm; and roll, 0.12 ± 0.19°. Mean liver-induced respiratory motion determinations were as follows: (1) abdominal compression: X, 2.33 ± 1.22 mm; Y, 5.11 ± 2.05 mm; Z, 2.13 ± 1.05 mm; and 3D vector, 6.22 ± 1.94 mm; and (2) free breathing: X, 3.48 ± 1.14 mm; Y, 9.83 ± 3.00 mm; Z, 3.38 ± 1.59 mm; and 3D vector, 11.07 ± 3.16 mm. Between-group differences were statistically different in 4 directions (p < 0.05). Conclusions: The Body Pro-Lok system is capable of improving interfractional and intrafractional setup accuracy and minimizing tumor movement owing to respirations in patients with intrahepatic HCC during hypofractionated helical tomotherapy.
Asymmetric soft-error resistant memory
NASA Technical Reports Server (NTRS)
Buehler, Martin G. (Inventor); Perlman, Marvin (Inventor)
1991-01-01
A memory system is provided, of the type that includes an error-correcting circuit that detects and corrects errors, that more efficiently utilizes the capacity of a memory formed of groups of binary cells whose states can be inadvertently switched by ionizing radiation. Each memory cell has an asymmetric geometry, so that ionizing radiation causes a significantly greater probability of errors in one state than in the opposite state (e.g., an erroneous switch from '1' to '0' is far more likely than a switch from '0' to '1'). An asymmetric error-correcting coding circuit can be used with the asymmetric memory cells, which requires fewer bits than an efficient symmetric error-correcting code.
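The patent does not spell out its code, but the standard textbook scheme for unidirectional (asymmetric) errors is the Berger code: append the count of zeros in the data word. Any set of 1-to-0 flips in the data raises the zero count, while 1-to-0 flips in the check field can only lower the stored count, so all unidirectional errors are detected. A minimal sketch (detection only, as an illustration of why asymmetry needs fewer check bits):

```python
def berger_encode(data_bits):
    """Append a Berger check: the count of 0s in the data, in binary (LSB first)."""
    n_check = max(1, len(data_bits).bit_length())  # enough bits for counts 0..len
    zeros = data_bits.count(0)
    check = [(zeros >> i) & 1 for i in range(n_check)]
    return data_bits + check, n_check

def berger_check(word, n_check):
    data, check = word[:-n_check], word[-n_check:]
    stored = sum(b << i for i, b in enumerate(check))
    return data.count(0) == stored   # True if no detectable error

data = [1, 0, 1, 1, 0, 1, 1, 1]
word, k = berger_encode(data)
assert berger_check(word, k)

corrupted = list(word)
corrupted[0] = 0          # a radiation-induced 1 -> 0 upset in the data
assert not berger_check(corrupted, k)
```

For an n-bit word this needs only about log2(n) check bits, versus the larger overhead of a symmetric code correcting the same number of errors in both directions.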
Evaluate error correction ability of magnetorheological finishing by smoothing spectral function
NASA Astrophysics Data System (ADS)
Wang, Jia; Fan, Bin; Wan, Yongjian; Shi, Chunyan; Zhuo, Bin
2014-08-01
Power Spectral Density (PSD) has become entrenched in optics design and manufacturing as a characterization of mid-high spatial frequency (MHSF) errors. The Smoothing Spectral Function (SSF) is a newly proposed parameter, based on the PSD, for evaluating the error correction ability of computer controlled optical surfacing (CCOS) technologies. As a typical deterministic, sub-aperture finishing technology based on CCOS, magnetorheological finishing (MRF) inevitably leads to MHSF errors. SSF is employed to study the ability of the MRF process to correct errors at different spatial frequencies. The surface figures and PSD curves of a work-piece machined by MRF are presented. By calculating the SSF curve, the correction ability of MRF for different spatial frequency errors is indicated as a normalized numerical value.
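Reading the SSF as the per-frequency ratio of output PSD to input PSD (one plausible interpretation; the paper's exact definition may differ), the idea can be sketched on a synthetic profile, with a moving average standing in for the MRF smoothing pass:

```python
import numpy as np

# Synthetic surface profile with a low- and a high-spatial-frequency component
n = 1024
x = np.arange(n) / n
profile = np.sin(2 * np.pi * 4 * x) + 0.5 * np.sin(2 * np.pi * 100 * x)

# A moving-average "tool influence" stands in for one MRF smoothing pass
window = 9
smoothed = np.convolve(profile, np.ones(window) / window, mode="same")

def psd(signal):
    """One-sided power spectral density via the FFT."""
    return np.abs(np.fft.rfft(signal)) ** 2 / len(signal)

# Smoothing Spectral Function: ratio of output to input PSD per frequency bin
ssf = psd(smoothed) / psd(profile)

print(ssf[4], ssf[100])   # near 1 at low frequency, much less than 1 at high frequency
```

Values near 1 mean the process leaves that spatial frequency untouched; values near 0 mean it corrects (smooths) it strongly, which is exactly the normalized per-frequency number the abstract describes.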
NASA Astrophysics Data System (ADS)
Roccia, S.; Gaulard, C.; Étilé, A.; Chakma, R.
2017-07-01
In the context of nuclear orientation, we propose a new method to correct the multipole mixing ratios for asymmetries in the geometry of the setup but also in the detection system. This method is also robust against temperature fluctuations, beam intensity fluctuations and uncertainties in the nuclear structure of the nuclei. Additionally, this method provides a natural way to combine data from different detectors and make good use of all available statistics. We could use this method to demonstrate the accuracy that can be reached with the PolarEx setup now installed at the ALTO facility.
Initial experience in treating lung cancer with helical tomotherapy
Yartsev, S; Dar, AR; Woodford, C; Wong, E; Bauman, G; Van Dyk, J
2007-01-01
Helical tomotherapy is a new form of image-guided radiation therapy that combines features of a linear accelerator and a helical computed tomography (CT) scanner. Megavoltage CT (MVCT) data allow the verification and correction of patient setup on the couch by comparison and image registration with the kilovoltage CT multi-slice images used for treatment planning. An 84-year-old male patient with Stage III bulky non-small cell lung cancer was treated on a Hi-ART II tomotherapy unit. Daily MVCT imaging was useful for setup corrections and signaled the need to adapt the delivery plan when the patient’s anatomy changed significantly. PMID:21614260
Dissipative quantum error correction and application to quantum sensing with trapped ions.
Reiter, F; Sørensen, A S; Zoller, P; Muschik, C A
2017-11-28
Quantum-enhanced measurements hold the promise to improve high-precision sensing ranging from the definition of time standards to the determination of fundamental constants of nature. However, quantum sensors lose their sensitivity in the presence of noise. To protect them, the use of quantum error-correcting codes has been proposed. Trapped ions are an excellent technological platform for both quantum sensing and quantum error correction. Here we present a quantum error correction scheme that harnesses dissipation to stabilize a trapped-ion qubit. In our approach, always-on couplings to an engineered environment protect the qubit against spin-flips or phase-flips. Our dissipative error correction scheme operates in a continuous manner without the need to perform measurements or feedback operations. We show that the resulting enhanced coherence time translates into a significantly enhanced precision for quantum measurements. Our work constitutes a stepping stone towards the paradigm of self-correcting quantum information processing.
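The dissipative scheme itself requires an engineered ion-trap environment, but the code it stabilizes against spin-flips is the three-qubit bit-flip (repetition) code, whose logic can be checked classically. A sketch of that underlying code (not the authors' dissipative implementation):

```python
# Three-qubit bit-flip code: logical 0 -> 000, logical 1 -> 111.
# Correction = majority vote, which fixes any single bit flip.

def encode(bit):
    return (bit,) * 3

def correct(word):
    return max(set(word), key=word.count)   # majority vote

# Exhaustive check: every single-flip error is corrected
for logical in (0, 1):
    for i in range(3):
        word = list(encode(logical))
        word[i] ^= 1
        assert correct(tuple(word)) == logical

# Logical error rate under independent flips with probability p:
# the vote fails only when 2 or 3 bits flip.
p = 0.1
p_fail = sum(p**k * (1 - p)**(3 - k) * ways for k, ways in ((2, 3), (3, 1)))
print(p_fail)   # 3p^2(1-p) + p^3, smaller than p for p < 1/2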
ERIC Educational Resources Information Center
Munoz, Carlos A.
2011-01-01
Very often, second language (L2) writers commit the same type of errors repeatedly, despite being corrected directly or indirectly by teachers or peers (Semke, 1984; Truscott, 1996). Apart from discouraging teachers from providing error correction feedback, this also makes them hesitant as to what form of corrective feedback to adopt. Ferris…
Continuous quantum error correction for non-Markovian decoherence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oreshkov, Ognyan; Brun, Todd A.; Communication Sciences Institute, University of Southern California, Los Angeles, California 90089
2007-08-15
We study the effect of continuous quantum error correction in the case where each qubit in a codeword is subject to a general Hamiltonian interaction with an independent bath. We first consider the scheme in the case of a trivial single-qubit code, which provides useful insights into the workings of continuous error correction and the difference between Markovian and non-Markovian decoherence. We then study the model of a bit-flip code with each qubit coupled to an independent bath qubit and subject to continuous correction, and find its solution. We show that for sufficiently large error-correction rates, the encoded state approximately follows an evolution of the type of a single decohering qubit, but with an effectively decreased coupling constant. The factor by which the coupling constant is decreased scales quadratically with the error-correction rate. This is compared to the case of Markovian noise, where the decoherence rate is effectively decreased by a factor which scales only linearly with the rate of error correction. The quadratic enhancement depends on the existence of a Zeno regime in the Hamiltonian evolution which is absent in purely Markovian dynamics. We analyze the range of validity of this result and identify two relevant time scales. Finally, we extend the result to more general codes and argue that the performance of continuous error correction will exhibit the same qualitative characteristics.
Contingent negative variation (CNV) associated with sensorimotor timing error correction.
Jang, Joonyong; Jones, Myles; Milne, Elizabeth; Wilson, Daniel; Lee, Kwang-Hyuk
2016-02-15
Detection and subsequent correction of sensorimotor timing errors are fundamental to adaptive behavior. Using scalp-recorded event-related potentials (ERPs), we sought to find ERP components that are predictive of error correction performance during rhythmic movements. Healthy right-handed participants were asked to synchronize their finger taps to a regular tone sequence (every 600 ms), while EEG data were continuously recorded. Data from 15 participants were analyzed. Occasional irregularities were built into stimulus presentation timing: 90 ms before (advances: negative shift) or after (delays: positive shift) the expected time point. A tapping condition alternated with a listening condition in which an identical stimulus sequence was presented but participants did not tap. Behavioral error correction was observed immediately following a shift, with a degree of over-correction with positive shifts. Our stimulus-locked ERP data analysis revealed, 1) increased auditory N1 amplitude for the positive shift condition and decreased auditory N1 modulation for the negative shift condition; and 2) a second enhanced negativity (N2) in the tapping positive condition, compared with the tapping negative condition. In response-locked epochs, we observed a CNV (contingent negative variation)-like negativity with earlier latency in the tapping negative condition than in the tapping positive condition. This CNV-like negativity peaked at around the onset of the subsequent tap; the earlier the peak, the better the error correction performance with negative shifts, and the later the peak, the better the error correction performance with positive shifts. This study showed that the CNV-like negativity was associated with error correction performance during our sensorimotor synchronization study. Auditory N1 and N2 were differentially involved in negative vs. positive error correction. 
However, we did not find evidence for their involvement in behavioral error correction. Overall, our study provides the basis from which further research on the role of the CNV in perceptual and motor timing can be developed. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Su, Yunquan; Yao, Xuefeng; Wang, Shen; Ma, Yinji
2017-03-01
An effective correction model is proposed to eliminate the refraction error effect caused by an optical window of a furnace in digital image correlation (DIC) deformation measurement under high-temperature environment. First, a theoretical correction model with the corresponding error correction factor is established to eliminate the refraction error induced by double-deck optical glass in DIC deformation measurement. Second, a high-temperature DIC experiment using a chromium-nickel austenite stainless steel specimen is performed to verify the effectiveness of the correction model by the correlation calculation results under two different conditions (with and without the optical glass). Finally, both the full-field and the divisional displacement results with refraction influence are corrected by the theoretical model and then compared to the displacement results extracted from the images without refraction influence. The experimental results demonstrate that the proposed theoretical correction model can effectively improve the measurement accuracy of DIC method by decreasing the refraction errors from measured full-field displacements under high-temperature environment.
Empirical parameterization of setup, swash, and runup
Stockdon, H.F.; Holman, R.A.; Howd, P.A.; Sallenger, A.H.
2006-01-01
Using shoreline water-level time series collected during 10 dynamically diverse field experiments, an empirical parameterization for extreme runup, defined by the 2% exceedence value, has been developed for use on natural beaches over a wide range of conditions. Runup, the height of discrete water-level maxima, depends on two dynamically different processes: time-averaged wave setup and total swash excursion, each of which is parameterized separately. Setup at the shoreline was best parameterized using a dimensional form of the more common Iribarren-based setup expression that includes foreshore beach slope, offshore wave height, and deep-water wavelength. Significant swash can be decomposed into the incident and infragravity frequency bands. Incident swash is also best parameterized using a dimensional form of the Iribarren-based expression. Infragravity swash is best modeled dimensionally using offshore wave height and wavelength and shows no statistically significant linear dependence on either foreshore or surf-zone slope. On infragravity-dominated dissipative beaches, the magnitudes of both setup and swash, modeling both incident and infragravity frequency components together, are dependent only on offshore wave height and wavelength. Statistics of predicted runup averaged over all sites indicate a −17 cm bias and an rms error of 38 cm; the mean observed runup elevation for all experiments was 144 cm. On intermediate and reflective beaches with complex foreshore topography, the use of an alongshore-averaged beach slope in practical applications of the runup parameterization may result in a relative runup error equal to 51% of the fractional variability between the measured and the averaged slope.
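The parameterization above is usually quoted in the following closed form (coefficients as commonly cited for Stockdon et al., 2006; verify against the paper before relying on them):

```python
import math

def runup_2pct(H0, T0, beta_f, g=9.81):
    """R2% runup from the commonly cited form of the Stockdon et al. (2006)
    parameterization. H0: offshore significant wave height (m), T0: peak
    period (s), beta_f: foreshore beach slope. SI units throughout."""
    L0 = g * T0**2 / (2 * math.pi)          # deep-water wavelength
    hl = math.sqrt(H0 * L0)
    setup = 0.35 * beta_f * hl              # time-averaged wave setup term
    # Combined incident + infragravity swash term
    swash = math.sqrt(H0 * L0 * (0.563 * beta_f**2 + 0.004)) / 2
    return 1.1 * (setup + swash)

# Example: 2 m offshore wave height, 10 s period, foreshore slope 0.1
print(round(runup_2pct(2.0, 10.0, 0.1), 2))
```

Note how the infragravity contribution (the 0.004 term) survives even as the slope goes to zero, matching the abstract's finding that infragravity swash has no significant slope dependence.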
Practical considerations for coil-wrapped Distributed Temperature Sensing setups
NASA Astrophysics Data System (ADS)
Solcerova, Anna; van Emmerik, Tim; Hilgersom, Koen; van de Giesen, Nick
2015-04-01
Fiber-optic Distributed Temperature Sensing (DTS) has been applied widely in hydrological and meteorological systems. For example, DTS has been used to measure streamflow, groundwater, soil moisture and temperature, air temperature, and lake energy fluxes. Many of these applications require a spatial monitoring resolution smaller than the minimum resolution of the DTS device. Therefore, measuring with these resolutions requires a custom made setup. To obtain both high temporal and high spatial resolution temperature measurements, fiber-optic cable is often wrapped around, and glued to, a coil, for example a PVC conduit. For these setups, it is often assumed that the construction characteristics (e.g., the coil material, shape, diameter) do not influence the DTS temperature measurements significantly. This study compares DTS datasets obtained during four measurement campaigns. The datasets were acquired using different setups, allowing us to investigate the influence of the construction characteristics on the monitoring results. This comparative study suggests that the construction material, shape, diameter, and way of attachment can have a significant influence on the results. We present a qualitative and quantitative approximation of errors introduced through the selection of the construction, e.g., choice of coil material, influence of solar radiation, coil diameter, and cable attachment method. Our aim is to provide insight into the factors that influence DTS measurements, which designers of future DTS measurement setups can take into account. Moreover, we present a number of solutions to minimize these errors for improved temperature retrieval using DTS.
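The resolution gain from wrapping follows from simple geometry: each meter of coil height holds many meters of fibre. The numbers below are illustrative assumptions, not values from the four campaigns:

```python
import math

# Effective vertical resolution of a coil-wrapped DTS setup: wrapping
# improves resolution by the ratio of fibre length to coil height.

instrument_res = 1.0    # assumed sampling resolution along the fibre, m
coil_diameter = 0.05    # assumed coil (conduit) diameter, m
pitch = 0.003           # assumed vertical advance per wrap, m

cable_per_meter = math.pi * coil_diameter / pitch   # m of fibre per m of coil height
vertical_res = instrument_res / cable_per_meter

print(f"{vertical_res * 100:.1f} cm of coil height per instrument sample")
```

With these assumed dimensions, a 1 m instrument resolution becomes roughly 2 cm of vertical resolution, which is why coil diameter and pitch are among the construction choices the study flags as influential.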
NASA Astrophysics Data System (ADS)
He, Xiaojun; Ma, Haotong; Luo, Chuanxin
2016-10-01
The optical multi-aperture imaging system is an effective way to magnify the aperture and increase the resolution of a telescope optical system, the difficulty of which lies in detecting and correcting the co-phase error. This paper presents a method based on the stochastic parallel gradient descent algorithm (SPGD) to correct the co-phase error. Compared with current methods, the SPGD method avoids having to detect the co-phase error. This paper analyzes the influence of piston error and tilt error on image quality for a double-aperture imaging system, introduces the basic principle of the SPGD algorithm, and discusses the influence of the algorithm's key parameters (the gain coefficient and the disturbance amplitude) on error control performance. The results show that SPGD can efficiently correct the co-phase error. The convergence speed of the SPGD algorithm improves as the gain coefficient and disturbance amplitude increase, but the stability of the algorithm is reduced; an adaptive gain coefficient can solve this problem appropriately. These results can provide a theoretical reference for the co-phase error correction of multi-aperture imaging systems.
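The SPGD iteration is simple enough to sketch on a toy metric. The quality metric and error vector below are stand-ins (the paper optimizes an image-sharpness metric of the double-aperture system); the update rule itself is the standard SPGD form, with the gain coefficient and disturbance amplitude as the two key parameters the abstract discusses:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "co-phase" metric to minimize: summed squared piston errors.
def J(u):
    return float(np.sum(u ** 2))

u = np.array([0.8, -0.5, 0.3])   # initial piston errors (arbitrary units, assumed)
gain, amplitude = 1.0, 0.1       # the two key SPGD parameters

history = [J(u)]
for _ in range(500):
    # Random parallel perturbation of all channels at once
    delta = amplitude * rng.choice([-1.0, 1.0], size=u.shape)
    dJ = J(u + delta) - J(u - delta)     # two-sided metric difference
    u = u - gain * dJ * delta            # SPGD update (descent direction)
    history.append(J(u))

print(history[0], history[-1])   # metric driven to (near) zero
```

Raising `gain` or `amplitude` makes `dJ * delta` larger and convergence faster, but past a point the updates overshoot and the iteration destabilizes, which is exactly the speed/stability trade-off the abstract reports.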
Optical alignment procedure utilizing neural networks combined with Shack-Hartmann wavefront sensor
NASA Astrophysics Data System (ADS)
Adil, Fatime Zehra; Konukseven, Erhan İlhan; Balkan, Tuna; Adil, Ömer Faruk
2017-05-01
In the design of pilot helmets with night vision capability, to not limit or block the sight of the pilot, a transparent visor is used. The reflected image from the coated part of the visor must coincide with the physical human sight image seen through the nonreflecting regions of the visor. This makes the alignment of the visor halves critical. In essence, this is an alignment problem of two optical parts that are assembled together during the manufacturing process. The Shack-Hartmann wavefront sensor is commonly used for the determination of the misalignments through wavefront measurements, which are quantified in terms of the Zernike polynomials. Although the Zernike polynomials provide very useful feedback about the misalignments, the corrective actions are basically ad hoc. This stems from the fact that there exists no easy inverse relation between the misalignment measurements and the physical causes of the misalignments. This study aims to construct this inverse relation by making use of the expressive power of neural networks in capturing such complex relations. For this purpose, a neural network is designed and trained in MATLAB® regarding which types of misalignments result in which wavefront measurements, quantitatively given by Zernike polynomials. This way, manual and iterative alignment processes relying on trial and error will be replaced by the trained guesses of a neural network, so the alignment process is reduced to applying the counteractions based on the misalignment causes. Such a training requires data containing misalignment and measurement sets in fine detail, which is hard to obtain manually on a physical setup. For that reason, the optical setup is completely modeled in Zemax® software, and Zernike polynomials are generated for misalignments applied in small steps. The performance of the neural network was tested on the actual physical setup and found promising.
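The structure of the learned inverse map can be sketched with a linear least-squares stand-in for the neural network. Everything here is assumed for illustration: a made-up linear sensitivity matrix plays the role of the Zemax-simulated forward model, and the fit plays the role of the trained network (which, unlike this sketch, can capture nonlinear relations):

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed forward model: misalignments (decenter x/y, tilt) -> 6 Zernike
# coefficients via a linear sensitivity matrix S (made-up values).
S = rng.normal(size=(6, 3))

# "Simulated" training set: misalignments applied in small steps,
# Zernike responses with a little measurement noise.
misalign = rng.normal(size=(200, 3))
zernike = misalign @ S.T + 0.01 * rng.normal(size=(200, 6))

# Learn the inverse map Zernike -> misalignment by least squares.
W, *_ = np.linalg.lstsq(zernike, misalign, rcond=None)

true = np.array([0.5, -0.2, 0.1])   # an unseen misalignment
measured = S @ true                  # its Shack-Hartmann reading
estimate = measured @ W
print(estimate)                      # close to the true misalignment
```

Once the inverse map is learned, alignment reduces to applying the negated estimate as the counteraction, replacing the trial-and-error loop described above.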
Precision assessment of model-based RSA for a total knee prosthesis in a biplanar set-up.
Trozzi, C; Kaptein, B L; Garling, E H; Shelyakova, T; Russo, A; Bragonzoni, L; Martelli, S
2008-10-01
Model-based Roentgen Stereophotogrammetric Analysis (RSA) was recently developed for the measurement of prosthesis micromotion. Its main advantage is that markers do not need to be attached to the implants as traditional marker-based RSA requires. Model-based RSA has only been tested in uniplanar radiographic set-ups. A biplanar set-up would theoretically facilitate the pose estimation algorithm, since radiographic projections would show more different shape features of the implants than in uniplanar images. We tested the precision of model-based RSA and compared it with that of the traditional marker-based method in a biplanar set-up. Micromotions of both tibial and femoral components were measured with both techniques from double examinations of patients participating in a clinical study. The results showed that in the biplanar set-up, model-based RSA presents a homogeneous distribution of precision for all translation directions, but an inhomogeneous error for rotations; in particular, internal-external rotation presented higher errors than rotations about the transverse and sagittal axes. Model-based RSA was less precise than the marker-based method, although the differences were not significant for the translations and rotations of the tibial component, with the exception of the internal-external rotations. For both prosthesis components the precisions of model-based RSA were below 0.2 mm for all the translations, and below 0.3 degrees for rotations about the transverse and sagittal axes. These values are still acceptable for clinical studies aimed at evaluating total knee prosthesis micromotion. In a biplanar set-up, model-based RSA is a valid alternative to traditional marker-based RSA, in which marking of the prosthesis is an enormous disadvantage.
Supporting Dictation Speech Recognition Error Correction: The Impact of External Information
ERIC Educational Resources Information Center
Shi, Yongmei; Zhou, Lina
2011-01-01
Although speech recognition technology has made remarkable progress, its wide adoption is still restricted by notable effort made and frustration experienced by users while correcting speech recognition errors. One of the promising ways to improve error correction is by providing user support. Although support mechanisms have been proposed for…
A Hybrid Approach for Correcting Grammatical Errors
ERIC Educational Resources Information Center
Lee, Kiyoung; Kwon, Oh-Woog; Kim, Young-Kil; Lee, Yunkeun
2015-01-01
This paper presents a hybrid approach for correcting grammatical errors in the sentences uttered by Korean learners of English. The error correction system plays an important role in GenieTutor, which is a dialogue-based English learning system designed to teach English to Korean students. During the talk with GenieTutor, grammatical error…
A Comparison of Error-Correction Procedures on Skill Acquisition during Discrete-Trial Instruction
ERIC Educational Resources Information Center
Carroll, Regina A.; Joachim, Brad T.; St. Peter, Claire C.; Robinson, Nicole
2015-01-01
Previous research supports the use of a variety of error-correction procedures to facilitate skill acquisition during discrete-trial instruction. We used an adapted alternating treatments design to compare the effects of 4 commonly used error-correction procedures on skill acquisition for 2 children with attention deficit hyperactivity disorder…
The Effect of Error Correction Feedback on the Collocation Competence of Iranian EFL Learners
ERIC Educational Resources Information Center
Jafarpour, Ali Akbar; Sharifi, Abolghasem
2012-01-01
Collocations are one of the most important elements in language proficiency but the effect of error correction feedback of collocations has not been thoroughly examined. Some researchers report the usefulness and importance of error correction (Hyland, 1990; Bartram & Walton, 1991; Ferris, 1999; Chandler, 2003), while others showed that error…
A Support System for Error Correction Questions in Programming Education
ERIC Educational Resources Information Center
Hachisu, Yoshinari; Yoshida, Atsushi
2014-01-01
For supporting the education of debugging skills, we propose a system for generating error correction questions of programs and checking the correctness. The system generates HTML files for answering questions and CGI programs for checking answers. Learners read and answer questions on Web browsers. For management of error injection, we have…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-02
…, Medicare--Hospital Insurance; and Program No. 93.774, Medicare--Supplementary Medical Insurance Program. SUMMARY: This document corrects a typographical error that appeared in the notice published in the Federal Register; the error is identified and corrected in the Correction of Errors section below.
Tropospheric Correction for InSAR Using Interpolated ECMWF Data and GPS Zenith Total Delay
NASA Technical Reports Server (NTRS)
Webb, Frank H.; Fishbein, Evan F.; Moore, Angelyn W.; Owen, Susan E.; Fielding, Eric J.; Granger, Stephanie L.; Bjorndahl, Fredrik; Lofgren Johan
2011-01-01
To mitigate atmospheric errors caused by the troposphere, which is a limiting error source for spaceborne interferometric synthetic aperture radar (InSAR) imaging, a tropospheric correction method has been developed using data from the European Centre for Medium-Range Weather Forecasts (ECMWF) and the Global Positioning System (GPS). The ECMWF data were interpolated using a Stretched Boundary Layer Model (SBLM), and ground-based GPS estimates of the tropospheric delay from the Southern California Integrated GPS Network were interpolated using modified Gaussian and inverse distance weighted interpolations. The resulting Zenith Total Delay (ZTD) correction maps have been evaluated, both separately and using a combination of the two data sets, for three short-interval InSAR pairs from Envisat during 2006 on an area stretching northeast from the Los Angeles basin towards Death Valley. Results show that the root mean square (rms) in the InSAR images was greatly reduced, meaning a significant reduction in the atmospheric noise of up to 32 percent. However, for some of the images, the rms increased and large errors remained after applying the tropospheric correction. The residuals showed a constant gradient over the area, suggesting that a remaining orbit error from Envisat was present. The orbit reprocessing in ROI_pac and the plane fitting both require that the only remaining error in the InSAR image be the orbit error. If this is not fulfilled, the correction can be made anyway, but it will be done using all remaining errors assuming them to be orbit errors. By correcting for tropospheric noise, the biggest error source is removed, and the orbit error becomes apparent and can be corrected for.
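Inverse distance weighted (IDW) interpolation, one of the two schemes used for the GPS delays, can be sketched in a few lines. Station coordinates and ZTD values below are made-up placeholders, and plain IDW is shown rather than the modified Gaussian variant:

```python
import numpy as np

# IDW interpolation of GPS Zenith Total Delay onto an arbitrary point.
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])  # km (assumed)
ztd = np.array([2.40, 2.45, 2.38, 2.50])   # zenith total delays, m (assumed)

def idw(point, coords, values, power=2.0, eps=1e-9):
    d = np.linalg.norm(coords - point, axis=1)
    if d.min() < eps:                      # exactly on a station: return its value
        return float(values[d.argmin()])
    w = 1.0 / d ** power
    return float(np.sum(w * values) / np.sum(w))

print(idw(np.array([5.0, 5.0]), stations, ztd))   # lies between min and max ZTD
```

Evaluating this on the InSAR grid and converting the ZTD difference between acquisition dates to radar line-of-sight delay yields the correction map described above.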
Counteracting structural errors in ensemble forecast of influenza outbreaks.
Pei, Sen; Shaman, Jeffrey
2017-10-13
For influenza forecasts generated using dynamical models, forecast inaccuracy is partly attributable to the nonlinear growth of error. As a consequence, quantification of the nonlinear error structure in current forecast models is needed so that this growth can be corrected and forecast skill improved. Here, we inspect the error growth of a compartmental influenza model and find that a robust error structure arises naturally from the nonlinear model dynamics. By counteracting these structural errors, diagnosed using error breeding, we develop a new forecast approach that combines dynamical error correction and statistical filtering techniques. In retrospective forecasts of historical influenza outbreaks for 95 US cities from 2003 to 2014, overall forecast accuracy for outbreak peak timing, peak intensity, and attack rate is substantially improved for predicted lead times up to 10 weeks. This error growth correction method can be generalized to improve the forecast accuracy of other infectious disease dynamical models.
On-board error correction improves IR earth sensor accuracy
NASA Astrophysics Data System (ADS)
Alex, T. K.; Kasturirangan, K.; Shrivastava, S. K.
1989-10-01
Infra-red earth sensors are used in satellites for attitude sensing. Their accuracy is limited by systematic and random errors. The sources of error in a scanning infra-red earth sensor are analyzed in this paper. The systematic errors, arising from the seasonal variation of infra-red radiation, the oblate shape of the earth, the ambient temperature of the sensor, and changes in scan/spin rates, are analyzed. Simple relations are derived using least-squares curve fitting for on-board correction of these errors. Random errors arising from detector and amplifier noise, alignment instability, and localized radiance anomalies are analyzed and possible correction methods are suggested. Sun and Moon interference with earth sensor performance has seriously affected a number of missions. The on-board processor detects Sun/Moon interference and corrects the errors on board. It is possible to obtain an eight-fold improvement in sensing accuracy, which is comparable with ground-based post-facto attitude refinement.
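The least-squares correction of a systematic error can be sketched on synthetic data. The error model below (a quadratic dependence on sensor temperature) and all numbers are assumptions for illustration, not the paper's fitted relations:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic attitude error vs. sensor temperature: a smooth systematic
# component plus random noise (all values assumed).
temp = np.linspace(-10, 40, 200)                 # sensor temperature, deg C
systematic = 0.02 * temp + 0.001 * temp**2       # systematic error, deg (assumed shape)
noise = rng.normal(0.0, 0.05, temp.size)         # random error, deg
measured_err = systematic + noise

# On-board style correction: least-squares fit of a simple relation,
# then subtract the fitted systematic part.
coeffs = np.polyfit(temp, measured_err, deg=2)
corrected = measured_err - np.polyval(coeffs, temp)

print(np.std(measured_err), np.std(corrected))   # residual shrinks to ~noise level
```

After the fit is removed, only the random component remains, which is the mechanism behind the large accuracy gain the abstract reports (the random errors then set the floor).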
Peeling Away Timing Error in NetFlow Data
NASA Astrophysics Data System (ADS)
Trammell, Brian; Tellenbach, Bernhard; Schatzmann, Dominik; Burkhart, Martin
In this paper, we characterize, quantify, and correct timing errors introduced into network flow data by collection and export via Cisco NetFlow version 9. We find that while some of these sources of error (clock skew, export delay) are generally implementation-dependent and known in the literature, there is an additional cyclic error of up to one second that is inherent to the design of the export protocol. We present a method for correcting this cyclic error in the presence of clock skew and export delay. In an evaluation using traffic with known timing collected from a national-scale network, we show that this method can successfully correct the cyclic error. However, there can also be other implementation-specific errors for which insufficient information remains for correction. On the routers we have deployed in our network, this limits the accuracy to about 70ms, reinforcing the point that implementation matters when conducting research on network measurement data.
Amiri, Shahram; Wilson, David R; Masri, Bassam A; Sharma, Gulshan; Anglin, Carolyn
2011-06-03
Determining the 3D pose of the patella after total knee arthroplasty is challenging. The commonly used single-plane fluoroscopy is prone to large errors in the clinically relevant mediolateral direction. A conventional fixed bi-planar setup is limited in the minimum angular distance between the imaging planes necessary for visualizing the patellar component, and requires a highly flexible setup to adjust for subject-specific geometries. As an alternative solution, this study investigated the use of a novel multi-planar imaging setup, consisting of a C-arm tracked by an external optoelectronic tracking system, to acquire calibrated radiographs from multiple orientations. To determine the accuracies, a knee prosthesis was implanted on artificial bones and imaged in simulated 'Supine' and 'Weightbearing' configurations. The results were compared with measures from a coordinate measuring machine as the ground-truth reference. The weightbearing configuration was the preferred imaging direction, with RMS errors of 0.48 mm and 1.32° for mediolateral shift and tilt of the patella, respectively, the two most clinically relevant measures. The 'imaging accuracies' of the system, defined as the accuracies in 3D reconstruction of a cylindrical ball-bearing phantom (so as to avoid the influence of the shape and orientation of the imaged object), showed an order-of-magnitude (11.5 times) reduction in the out-of-plane RMS errors in comparison to single-plane fluoroscopy. With this new method, the complete 3D pose of the patellofemoral and tibiofemoral joints during quasi-static activities can be determined with up to an 8-fold (3.4 mm) improvement in out-of-plane accuracy compared to a conventional single-plane fluoroscopy setup. Copyright © 2011 Elsevier Ltd. All rights reserved.
Automatic Recognition of Phonemes Using a Syntactic Processor for Error Correction.
1980-12-01
[OCR-garbled thesis front matter] Automatic Recognition of Phonemes Using a Syntactic Processor for Error Correction, AFIT thesis AFIT/GE/EE/80D-45, 2Lt Robert B. Taylor, USAF; approved for public release, distribution unlimited. Recoverable table-of-contents fragments: Testing; Bayes Decision Rule for Minimum Error; Bayes Decision Rule for Minimum Risk; Minimax Test.
Correction of motion measurement errors beyond the range resolution of a synthetic aperture radar
Doerry, Armin W [Albuquerque, NM; Heard, Freddie E [Albuquerque, NM; Cordaro, J Thomas [Albuquerque, NM
2008-06-24
Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
Error correcting circuit design with carbon nanotube field effect transistors
NASA Astrophysics Data System (ADS)
Liu, Xiaoqiang; Cai, Li; Yang, Xiaokuo; Liu, Baojun; Liu, Zhongyong
2018-03-01
In this work, a parallel error-correcting circuit based on the (7, 4) Hamming code is designed and implemented with carbon nanotube field-effect transistors (CNTFETs), and its function is validated by simulation in HSPICE with the Stanford model. A grouping method able to correct multiple bit errors in 16-bit and 32-bit applications is proposed, and its error correction capability is analyzed. The performance of circuits implemented with CNTFETs and with traditional MOSFETs is also compared; the former shows a 34.4% reduction in layout area and a 56.9% reduction in power consumption.
Information retrieval based on single-pixel optical imaging with quick-response code
NASA Astrophysics Data System (ADS)
Xiao, Yin; Chen, Wen
2018-04-01
Quick-response (QR) code technique is combined with ghost imaging (GI) to recover original information with high quality. An image is first transformed into a QR code. Then the QR code is treated as an input image in the input plane of a ghost imaging setup. After measurements, traditional correlation algorithm of ghost imaging is utilized to reconstruct an image (QR code form) with low quality. With this low-quality image as an initial guess, a Gerchberg-Saxton-like algorithm is used to improve its contrast, which is actually a post processing. Taking advantage of high error correction capability of QR code, original information can be recovered with high quality. Compared to the previous method, our method can obtain a high-quality image with comparatively fewer measurements, which means that the time-consuming postprocessing procedure can be avoided to some extent. In addition, for conventional ghost imaging, the larger the image size is, the more measurements are needed. However, for our method, images with different sizes can be converted into QR code with the same small size by using a QR generator. Hence, for the larger-size images, the time required to recover original information with high quality will be dramatically reduced. Our method makes it easy to recover a color image in a ghost imaging setup, because it is not necessary to divide the color image into three channels and respectively recover them.
Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.
Samoli, Evangelia; Butland, Barbara K
2017-12-01
Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parameter bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be later applied, while methodological advances are needed under the multi-pollutants setting.
On codes with multi-level error-correction capabilities
NASA Technical Reports Server (NTRS)
Lin, Shu
1987-01-01
In conventional coding for error control, all the information symbols of a message are regarded equally significant, and hence codes are devised to provide equal protection for each information symbol against channel errors. However, in some occasions, some information symbols in a message are more significant than the other symbols. As a result, it is desired to devise codes with multilevel error-correcting capabilities. Another situation where codes with multi-level error-correcting capabilities are desired is in broadcast communication systems. An m-user broadcast channel has one input and m outputs. The single input and each output form a component channel. The component channels may have different noise levels, and hence the messages transmitted over the component channels require different levels of protection against errors. Block codes with multi-level error-correcting capabilities are also known as unequal error protection (UEP) codes. Structural properties of these codes are derived. Based on these structural properties, two classes of UEP codes are constructed.
Lewis, Matthew S; Maruff, Paul; Silbert, Brendan S; Evered, Lis A; Scott, David A
2007-02-01
The reliable change index (RCI) expresses change relative to its associated error, and is useful in the identification of postoperative cognitive dysfunction (POCD). This paper examines four common RCIs that each account for error in different ways. Three rules incorporate a constant correction for practice effects and are contrasted with the standard RCI that had no correction for practice. These rules are applied to 160 patients undergoing coronary artery bypass graft (CABG) surgery who completed neuropsychological assessments preoperatively and 1 week postoperatively using error and reliability data from a comparable healthy nonsurgical control group. The rules all identify POCD in a similar proportion of patients, but the use of the within-subject standard deviation (WSD), expressing the effects of random error, as an error estimate is a theoretically appropriate denominator when a constant error correction, removing the effects of systematic error, is deducted from the numerator in a RCI.
Combinatorial neural codes from a mathematical coding theory perspective.
Curto, Carina; Itskov, Vladimir; Morrison, Katherine; Roth, Zachary; Walker, Judy L
2013-07-01
Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure serves not only error correction, but must also reflect relationships between stimuli.
Joint PET-MR respiratory motion models for clinical PET motion correction
NASA Astrophysics Data System (ADS)
Manber, Richard; Thielemans, Kris; Hutton, Brian F.; Wan, Simon; McClelland, Jamie; Barnes, Anna; Arridge, Simon; Ourselin, Sébastien; Atkinson, David
2016-09-01
Patient motion due to respiration can lead to artefacts and blurring in positron emission tomography (PET) images, in addition to quantification errors. The integration of PET with magnetic resonance (MR) imaging in PET-MR scanners provides complementary clinical information, and allows the use of high spatial resolution and high contrast MR images to monitor and correct motion-corrupted PET data. In this paper we build on previous work to form a methodology for respiratory motion correction of PET data, and show it can improve PET image quality whilst having minimal impact on clinical PET-MR protocols. We introduce a joint PET-MR motion model, using only 1 min per PET bed position of simultaneously acquired PET and MR data to provide a respiratory motion correspondence model that captures inter-cycle and intra-cycle breathing variations. In the model setup, 2D multi-slice MR provides the dynamic imaging component, and PET data, via low spatial resolution framing and principal component analysis, provides the model surrogate. We evaluate different motion models (1D and 2D linear, and 1D and 2D polynomial) by computing model-fit and model-prediction errors on dynamic MR images on a data set of 45 patients. Finally we apply the motion model methodology to 5 clinical PET-MR oncology patient datasets. Qualitative PET reconstruction improvements and artefact reduction are assessed with visual analysis, and quantitative improvements are calculated using standardised uptake value (SUVpeak and SUVmax) changes in avid lesions. We demonstrate the capability of a joint PET-MR motion model to predict respiratory motion by showing significantly improved image quality of PET data acquired before the motion model data. The method can be used to incorporate motion into the reconstruction of any length of PET acquisition, with only 1 min of extra scan time, and with no external hardware required.
Effectiveness of base-of-skull immobilization system in a compact proton therapy setting.
Shafai-Erfani, Ghazal; Willoughby, Twyla; Ramakrishna, Naren; Meeks, Sanford; Kelly, Patrick; Zeidan, Omar
2018-05-01
The purpose of this study was to investigate daily repositioning accuracy by analyzing inter- and intra-fractional uncertainties associated with patients treated for intracranial or base of skull tumors in a compact proton therapy system with 6 degrees of freedom (DOF) robotic couch and a thermoplastic head mask indexed to a base of skull (BoS) frame. Daily orthogonal kV alignment images at setup position before and after daily treatments were analyzed for 33 patients. The system was composed of a new type of thermoplastic mask, a bite block, and carbon-fiber BoS couch-top insert specifically designed for proton therapy treatments. The correctional shifts in robotic treatment table with 6 DOF were evaluated and recorded based on over 1500 planar kV image pairs. Correctional shifts for patients with and without bite blocks were compared. Systematic and random errors were evaluated for all 6 DOF coordinates available for daily vector corrections. Uncertainties associated with geometrical errors and their sources, in addition to robustness analysis of various combinations of immobilization components were presented. Analysis of 644 fractions including patients with and without a bite block shows that the BoS immobilization system is capable of maintaining intra-fraction localization with submillimeter accuracy (in nearly 83%, 86%, 95% of cases along SI, LAT, and PA, respectively) in translational coordinates and subdegree precision (in 98.85%, 98.85%, and 96.4% of cases for roll, pitch, and yaw respectively) in rotational coordinates. The system overall fares better in intra-fraction localization precision compared to previously reported particle therapy immobilization systems. The use of a mask-attached type bite block has marginal impact on inter- or intra-fraction uncertainties compared to no bite block. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
42 CFR 412.278 - Administrator's review.
Code of Federal Regulations, 2014 CFR
2014-10-01
... or computational errors, or to correct the decision if the evidence that was considered in making the... discretion, may amend the decision to correct mathematical or computational errors, or to correct the...
42 CFR 412.278 - Administrator's review.
Code of Federal Regulations, 2011 CFR
2011-10-01
... or computational errors, or to correct the decision if the evidence that was considered in making the... discretion, may amend the decision to correct mathematical or computational errors, or to correct the...
42 CFR 412.278 - Administrator's review.
Code of Federal Regulations, 2012 CFR
2012-10-01
... or computational errors, or to correct the decision if the evidence that was considered in making the... discretion, may amend the decision to correct mathematical or computational errors, or to correct the...
42 CFR 412.278 - Administrator's review.
Code of Federal Regulations, 2013 CFR
2013-10-01
... or computational errors, or to correct the decision if the evidence that was considered in making the... discretion, may amend the decision to correct mathematical or computational errors, or to correct the...
High-rate dead-time corrections in a general purpose digital pulse processing system
Abbene, Leonardo; Gerardi, Gaetano
2015-01-01
Dead-time losses are well-recognized and studied drawbacks in counting and spectroscopic systems. In this work, the dead-time correction capabilities of a real-time digital pulse processing (DPP) system for high-rate, high-resolution radiation measurements are presented. The DPP system, through a fast and a slow analysis of the output waveform from radiation detectors, is able to perform multi-parameter analysis (arrival time, pulse width, pulse height, pulse shape, etc.) at high input counting rates (ICRs), allowing accurate counting-loss corrections even for variable or transient radiation. The fast analysis is used to obtain both the ICR and energy spectra with high throughput, while the slow analysis is used to obtain high-resolution energy spectra. A complete characterization of the counting capabilities was performed through both theoretical and experimental approaches. The dead-time modeling, the throughput curves, the experimental time-interval distributions (TIDs), and the counting uncertainty of the recorded events of both the fast and the slow channels, measured with a planar CdTe (cadmium telluride) detector, are presented. The throughput formula of a series of two types of dead-times is also derived. The results of dead-time corrections performed through different methods are reported and discussed, pointing out the error in ICR estimation and the simplicity of the procedure. Accurate ICR estimations (nonlinearity < 0.5%) were performed by using the time widths and the TIDs (with a 10 ns time bin width) of the detected pulses up to 2.2 Mcps. The digital system allows, after a simple parameter setting, different and sophisticated procedures for dead-time correction, traditionally implemented in complex, dedicated systems and time-consuming setups. PMID:26289270
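For orientation, the simplest textbook dead-time correction looks like this; it is the classic single non-paralyzable model, not the paper's series-of-two-dead-times formula or its TID-based procedures, and the rates below are made up.

```python
def true_rate(measured_cps, tau):
    """Non-paralyzable dead-time correction.

    The model m = n / (1 + n * tau) relates the true input counting rate n
    to the measured rate m for a fixed dead-time tau (seconds); inverting
    it gives n = m / (1 - m * tau).
    """
    return measured_cps / (1.0 - measured_cps * tau)

# Illustrative numbers: a 1 Mcps input with tau = 1 microsecond is measured
# at only 0.5 Mcps; the correction recovers the true rate.
measured = 1e6 / (1.0 + 1e6 * 1e-6)   # 5.0e5 counts per second
recovered = true_rate(measured, 1e-6)
```

At the ~2 Mcps rates quoted in the abstract, m * tau is no longer small, which is why the correction (and an accurate dead-time model) matters so much there.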
NASA Astrophysics Data System (ADS)
Mundermann, Lars; Mundermann, Annegret; Chaudhari, Ajit M.; Andriacchi, Thomas P.
2005-01-01
Anthropometric parameters are fundamental for a wide variety of applications in biomechanics, anthropology, medicine and sports. Recent technological advancements provide methods for constructing 3D surfaces directly. Of these new technologies, visual hull construction may be the most cost-effective yet sufficiently accurate method. However, the conditions influencing the accuracy of anthropometric measurements based on visual hull reconstruction are unknown. The purpose of this study was to evaluate the conditions that influence the accuracy of 3D shape-from-silhouette reconstruction of body segments dependent on number of cameras, camera resolution and object contours. The results demonstrate that the visual hulls lacked accuracy in concave regions and narrow spaces, but setups with a high number of cameras reconstructed a human form with an average accuracy of 1.0 mm. In general, setups with less than 8 cameras yielded largely inaccurate visual hull constructions, while setups with 16 and more cameras provided good volume estimations. Body segment volumes were obtained with an average error of 10% at a 640x480 resolution using 8 cameras. Changes in resolution did not significantly affect the average error. However, substantial decreases in error were observed with increasing number of cameras (33.3% using 4 cameras; 10.5% using 8 cameras; 4.1% using 16 cameras; 1.2% using 64 cameras).
How to Correct a Task Error: Task-Switch Effects Following Different Types of Error Correction
ERIC Educational Resources Information Center
Steinhauser, Marco
2010-01-01
It has been proposed that switch costs in task switching reflect the strengthening of task-related associations and that strengthening is triggered by response execution. The present study tested the hypothesis that only task-related responses are able to trigger strengthening. Effects of task strengthening caused by error corrections were…
NASA Astrophysics Data System (ADS)
Saga, R. S.; Jauhari, W. A.; Laksono, P. W.
2017-11-01
This paper presents an integrated inventory model consisting of a single vendor and a single buyer. The buyer manages its inventory periodically and orders products from the vendor to satisfy the end customer's demand, where the annual demand and the ordering cost are fuzzy. The buyer uses a service-level constraint instead of a stock-out cost term, so that the stock-out level per cycle is bounded. The vendor produces and delivers products to the buyer, and may commit an investment to reduce the setup cost. However, the vendor's production process is imperfect, so each delivered lot contains some defective products. Moreover, the buyer's inspection process is not error-free, since the inspector can misclassify product quality. The objective is to find the optimal review period, setup cost, and number of deliveries per production cycle that minimize the joint total cost. An algorithm and a numerical example are provided to illustrate the application of the model.
Effect of Local TOF Kernel Miscalibrations on Contrast-Noise in TOF PET
NASA Astrophysics Data System (ADS)
Clementel, Enrico; Mollet, Pieter; Vandenberghe, Stefaan
2013-06-01
TOF PET imaging requires specific calibrations: accurate characterization of the system timing resolution and timing offset is required to achieve the full potential image quality. Current system models used in image reconstruction assume a spatially uniform timing resolution kernel. Furthermore, although the timing offset errors are often pre-corrected, this correction becomes less accurate over time because, especially in older scanners, the timing offsets are often calibrated only during installation, as the procedure is time-consuming. In this study, we investigate and compare the effects of a local mismatch of timing resolution, when a uniform kernel is applied to systems with local variations in timing resolution, and the effects of uncorrected timing offset errors on image quality. A ring-like phantom was acquired on a Philips Gemini TF scanner and timing histograms were obtained from coincidence events to measure timing resolution along all sets of LORs crossing the scanner center. In addition, multiple acquisitions of a cylindrical phantom, 20 cm in diameter with spherical inserts, and a point source were simulated. A location-dependent timing resolution was simulated, with a median value of 500 ps and increasingly large local variations; timing offset errors ranging from 0 to 350 ps were also simulated. Images were reconstructed with TOF MLEM with a uniform kernel corresponding to the effective timing resolution of the data, as well as with purposefully mismatched kernels. CRC versus noise curves were measured over the simulated cylinder realizations, while the simulated point source was processed to generate timing histograms of the data. Results show that timing resolution is not uniform over the FOV of the considered scanner. The simulated phantom data indicate that CRC is moderately reduced in data sets with locally varying timing resolution reconstructed with a uniform kernel, while still performing better than non-TOF reconstruction.
On the other hand, uncorrected offset errors in our setup have a larger potential for decreasing image quality and can lead to a reduction of CRC of up to 15% and an increase in the measured timing resolution kernel up to 40%. However, in realistic conditions in frequently calibrated systems, using a larger effective timing kernel in image reconstruction can compensate uncorrected offset errors.
Analysis on the misalignment errors between Hartmann-Shack sensor and 45-element deformable mirror
NASA Astrophysics Data System (ADS)
Liu, Lihui; Zhang, Yi; Tao, Jianjun; Cao, Fen; Long, Yin; Tian, Pingchuan; Chen, Shangwu
2017-02-01
For a 45-element adaptive optics system, a model of the 45-element deformable mirror is built in COMSOL Multiphysics, and each actuator's influence function is obtained by the finite element method. The process by which this system corrects optical aberrations is simulated, and, taking the Strehl ratio of the corrected diffraction spot as the figure of merit, the system's ability to correct the 3rd-20th Zernike polynomial wave aberrations is analyzed in the presence of different translation and rotation errors between the Hartmann-Shack sensor and the deformable mirror. The computed results show that the system corrects the 3rd-9th Zernike aberrations better than the 10th-20th, and this ordering does not change as the misalignment error changes. As the rotation error between the Hartmann-Shack sensor and the deformable mirror increases, the correction ability for the 3rd-20th Zernike aberrations gradually decreases; as the translation error increases, the correction ability for the 3rd-9th Zernike aberrations gradually decreases, while that for the 10th-20th fluctuates but trends downward.
Local concurrent error detection and correction in data structures using virtual backpointers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, C.C.J.; Chen, P.P.; Fuchs, W.K.
1989-11-01
A new technique, based on virtual backpointers, is presented in this paper for local concurrent error detection and correction in linked data structures. Two new data structures utilizing virtual backpointers, the Virtual Double-Linked List and the B-Tree with Virtual Backpointers, are described. For these structures, double errors within a fixed-size checking window can be detected in constant time, and single errors detected during forward moves can be corrected in constant time.
Local concurrent error detection and correction in data structures using virtual backpointers
NASA Technical Reports Server (NTRS)
Li, C. C.; Chen, P. P.; Fuchs, W. K.
1987-01-01
A new technique, based on virtual backpointers, for local concurrent error detection and correction in linked data structures is presented. Two new data structures, the Virtual Double Linked List, and the B-tree with Virtual Backpointers, are described. For these structures, double errors can be detected in O(1) time and errors detected during forward moves can be corrected in O(1) time. The application of a concurrent auditor process to data structure error detection and correction is analyzed, and an implementation is described, to determine the effect on mean time to failure of a multi-user shared database system. The implementation utilizes a Sequent shared-memory multiprocessor system operating on a shared database of Virtual Double Linked Lists.
Local concurrent error detection and correction in data structures using virtual backpointers
NASA Technical Reports Server (NTRS)
Li, Chung-Chi Jim; Chen, Paul Peichuan; Fuchs, W. Kent
1989-01-01
A new technique, based on virtual backpointers, for local concurrent error detection and correction in linked data structures is presented. Two new data structures, the Virtual Double Linked List, and the B-tree with Virtual Backpointers, are described. For these structures, double errors can be detected in O(1) time and errors detected during forward moves can be corrected in O(1) time. The application of a concurrent auditor process to data structure error detection and correction is analyzed, and an implementation is described, to determine the effect on mean time to failure of a multi-user shared database system. The implementation utilizes a Sequent shared-memory multiprocessor system operating on a shared database of Virtual Double Linked Lists.
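The redundancy these records exploit can be illustrated with explicit backpointers: in a consistent double-linked list every forward pointer is mirrored by a backpointer, so one corrupted link is detectable, and correctable, locally during a forward move. This is only a sketch of the idea; the papers' virtual backpointers encode the same check without storing a dedicated backpointer field.

```python
class Node:
    """Double-linked list node with an explicit backpointer (illustrative)."""
    def __init__(self, val):
        self.val = val
        self.next = None
        self.prev = None

def link(a, b):
    a.next, b.prev = b, a

def forward_move(node):
    """Advance one node; detect and correct a bad backpointer in O(1).

    The local check is next.prev == node: if it fails, exactly one of the
    two mirrored links is wrong, and the forward pointer is trusted here.
    """
    nxt = node.next
    if nxt is not None and nxt.prev is not node:  # local inconsistency found
        nxt.prev = node                           # single-error correction
    return nxt

# Build a <-> b <-> c, inject a single backpointer error, then traverse.
a, b, c = Node('a'), Node('b'), Node('c')
link(a, b)
link(b, c)
c.prev = a          # corruption: c's backpointer no longer mirrors b.next
forward_move(b)     # detects the mismatch and repairs c.prev
```

Because the check only touches the current node and its neighbor, it can run concurrently with normal traversals, which is the point of "local" detection in these papers.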
Asymmetric Memory Circuit Would Resist Soft Errors
NASA Technical Reports Server (NTRS)
Buehler, Martin G.; Perlman, Marvin
1990-01-01
Some nonlinear error-correcting codes are more efficient in the presence of asymmetry. A combination of circuit-design and coding concepts is expected to make integrated-circuit random-access memories more resistant to "soft" errors (temporary bit errors, also called "single-event upsets", due to ionizing radiation). An integrated circuit of the new type is made deliberately more susceptible to one kind of bit error than to the other, and the associated error-correcting code is adapted to exploit this asymmetry in error probabilities.
Detection and correction of prescription errors by an emergency department pharmacy service.
Stasiak, Philip; Afilalo, Marc; Castelino, Tanya; Xue, Xiaoqing; Colacone, Antoinette; Soucy, Nathalie; Dankoff, Jerrald
2014-05-01
Emergency departments (EDs) are recognized as a high-risk setting for prescription errors. Pharmacist involvement may be important in reviewing prescriptions to identify and correct errors. The objectives of this study were to describe the frequency and type of prescription errors detected by pharmacists in EDs, determine the proportion of errors that could be corrected, and identify factors associated with prescription errors. This prospective observational study was conducted in a tertiary care teaching ED on 25 consecutive weekdays. Pharmacists reviewed all documented prescriptions and flagged and corrected errors for patients in the ED. We collected information on patient demographics, details on prescription errors, and the pharmacists' recommendations. A total of 3,136 ED prescriptions were reviewed. The proportion of prescriptions in which a pharmacist identified an error was 3.2% (99 of 3,136; 95% confidence interval [CI] 2.5-3.8). The types of identified errors were wrong dose (28 of 99, 28.3%), incomplete prescription (27 of 99, 27.3%), wrong frequency (15 of 99, 15.2%), wrong drug (11 of 99, 11.1%), wrong route (1 of 99, 1.0%), and other (17 of 99, 17.2%). The pharmacy service intervened and corrected 78 (78 of 99, 78.8%) errors. Factors associated with prescription errors were patient age over 65 (odds ratio [OR] 2.34; 95% CI 1.32-4.13), prescriptions with more than one medication (OR 5.03; 95% CI 2.54-9.96), and those written by emergency medicine residents compared to attending emergency physicians (OR 2.21, 95% CI 1.18-4.14). Pharmacists in a tertiary ED are able to correct the majority of prescriptions in which they find errors. Errors are more likely to be identified in prescriptions written for older patients, those containing multiple medication orders, and those prescribed by emergency residents.
Analyzing the errors of DFT approximations for compressed water systems
NASA Astrophysics Data System (ADS)
Alfè, D.; Bartók, A. P.; Csányi, G.; Gillan, M. J.
2014-07-01
We report an extensive study of the errors of density functional theory (DFT) approximations for compressed water systems. The approximations studied are based on the widely used PBE and BLYP exchange-correlation functionals, and we characterize their errors before and after correction for 1- and 2-body errors, the corrections being performed using the methods of Gaussian approximation potentials. The errors of the uncorrected and corrected approximations are investigated for two related types of water system: first, the compressed liquid at temperature 420 K and density 1.245 g/cm3 where the experimental pressure is 15 kilobars; second, thermal samples of compressed water clusters from the trimer to the 27-mer. For the liquid, we report four first-principles molecular dynamics simulations, two generated with the uncorrected PBE and BLYP approximations and a further two with their 1- and 2-body corrected counterparts. The errors of the simulations are characterized by comparing with experimental data for the pressure, with neutron-diffraction data for the three radial distribution functions, and with quantum Monte Carlo (QMC) benchmarks for the energies of sets of configurations of the liquid in periodic boundary conditions. The DFT errors of the configuration samples of compressed water clusters are computed using QMC benchmarks. We find that the 2-body and beyond-2-body errors in the liquid are closely related to similar errors exhibited by the clusters. For both the liquid and the clusters, beyond-2-body errors of DFT make a substantial contribution to the overall errors, so that correction for 1- and 2-body errors does not suffice to give a satisfactory description. For BLYP, a recent representation of 3-body energies due to Medders, Babin, and Paesani [J. Chem. Theory Comput. 9, 1103 (2013)] gives a reasonably good way of correcting for beyond-2-body errors, after which the remaining errors are typically 0.5 mEh ≃ 15 meV/monomer for the liquid and the clusters.
Analyzing the errors of DFT approximations for compressed water systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alfè, D.; London Centre for Nanotechnology, UCL, London WC1H 0AH; Thomas Young Centre, UCL, London WC1H 0AH
We report an extensive study of the errors of density functional theory (DFT) approximations for compressed water systems. The approximations studied are based on the widely used PBE and BLYP exchange-correlation functionals, and we characterize their errors before and after correction for 1- and 2-body errors, the corrections being performed using the methods of Gaussian approximation potentials. The errors of the uncorrected and corrected approximations are investigated for two related types of water system: first, the compressed liquid at temperature 420 K and density 1.245 g/cm3 where the experimental pressure is 15 kilobars; second, thermal samples of compressed water clusters from the trimer to the 27-mer. For the liquid, we report four first-principles molecular dynamics simulations, two generated with the uncorrected PBE and BLYP approximations and a further two with their 1- and 2-body corrected counterparts. The errors of the simulations are characterized by comparing with experimental data for the pressure, with neutron-diffraction data for the three radial distribution functions, and with quantum Monte Carlo (QMC) benchmarks for the energies of sets of configurations of the liquid in periodic boundary conditions. The DFT errors of the configuration samples of compressed water clusters are computed using QMC benchmarks. We find that the 2-body and beyond-2-body errors in the liquid are closely related to similar errors exhibited by the clusters. For both the liquid and the clusters, beyond-2-body errors of DFT make a substantial contribution to the overall errors, so that correction for 1- and 2-body errors does not suffice to give a satisfactory description. For BLYP, a recent representation of 3-body energies due to Medders, Babin, and Paesani [J. Chem. Theory Comput. 9, 1103 (2013)] gives a reasonably good way of correcting for beyond-2-body errors, after which the remaining errors are typically 0.5 mEh ≃ 15 meV/monomer for the liquid and the clusters.
Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check (LDPC) Codes
NASA Astrophysics Data System (ADS)
Jing, Lin; Brun, Todd; Quantum Research Team
Quasi-cyclic LDPC codes can approach the Shannon capacity and have efficient decoders. Hagiwara et al. (2007) presented a method to calculate parity-check matrices with high girth. Two distinct, orthogonal matrices Hc and Hd are used. Using submatrices obtained from Hc and Hd by deleting rows, we can alter the code rate. The submatrix of Hc is used to correct Pauli X errors, and the submatrix of Hd to correct Pauli Z errors. We simulated this system for depolarizing noise on USC's High Performance Computing Cluster and obtained the block error rate (BER) as a function of the error weight and code rate. From the rates of uncorrectable errors under different error weights we can extrapolate the BER to any small error probability. Our results show that this code family can perform reasonably well even at high code rates, thus considerably reducing the overhead compared to concatenated and surface codes. This makes these codes promising as storage blocks in fault-tolerant quantum computation.
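The syndrome-decoding idea underlying such parity-check codes (where one matrix corrects X errors and the other Z errors) can be illustrated with a classical Hamming(7,4) sketch — illustrative only, not the quantum quasi-cyclic construction itself:

```python
# Hamming(7,4): each column of H is the binary expansion of its 1-based index,
# so the syndrome of a single-bit error directly names the flipped position.
H = [[0, 0, 0, 1, 1, 1, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [1, 0, 1, 0, 1, 0, 1]]

def syndrome(word):
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

def correct(word):
    s = syndrome(word)
    pos = s[0] * 4 + s[1] * 2 + s[2]  # 0 means no detected error
    fixed = list(word)
    if pos:
        fixed[pos - 1] ^= 1           # flip the erroneous bit back
    return fixed

codeword = [1, 0, 1, 0, 1, 0, 1]      # valid: syndrome is [0, 0, 0]
corrupted = list(codeword)
corrupted[2] ^= 1                     # single bit-flip at position 3
assert correct(corrupted) == codeword
```

A CSS-type quantum code applies this same syndrome logic twice, once in the X basis and once in the Z basis, which is why two orthogonal matrices Hc and Hd appear above.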
An Ensemble Method for Spelling Correction in Consumer Health Questions
Kilicoglu, Halil; Fiszman, Marcelo; Roberts, Kirk; Demner-Fushman, Dina
2015-01-01
Orthographic and grammatical errors are a common feature of informal texts written by lay people. Health-related questions asked by consumers are a case in point. Automatic interpretation of consumer health questions is hampered by such errors. In this paper, we propose a method that combines techniques based on edit distance and frequency counts with a contextual similarity-based method for detecting and correcting orthographic errors, including misspellings, word breaks, and punctuation errors. We evaluate our method on a set of spell-corrected questions extracted from the NLM collection of consumer health questions. Our method achieves an F1 score of 0.61, compared to an informed baseline of 0.29, achieved using ESpell, a spelling correction system developed for biomedical queries. Our results show that orthographic similarity is most relevant in spelling error correction in consumer health questions and that frequency and contextual information are complementary to orthographic features. PMID:26958208
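A minimal version of the edit-distance-plus-frequency component described above can be sketched as follows (the contextual-similarity part is omitted, and the tiny lexicon with its frequencies is a hypothetical stand-in):

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def spell_correct(word, lexicon_freq, max_dist=2):
    """Pick the most frequent lexicon entry within max_dist edits."""
    candidates = [(w, f) for w, f in lexicon_freq.items()
                  if edit_distance(word, w) <= max_dist]
    if not candidates:
        return word
    return max(candidates, key=lambda wf: wf[1])[0]

lexicon = {"diabetes": 900, "diazepam": 120}  # hypothetical frequencies
print(spell_correct("diabetis", lexicon))     # diabetes
```

Frequency acts as the tie-breaker among candidates at equal edit distance, which is the complementarity the abstract reports.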
Freeform correction polishing for optics with semi-kinematic mounting
NASA Astrophysics Data System (ADS)
Huang, Chien-Yao; Kuo, Ching-Hsiang; Peng, Wei-Jei; Yu, Zong-Ru; Ho, Cheng-Fang; Hsu, Ming-Ying; Hsu, Wei-Yao
2015-10-01
Several mounting configurations can be applied in opto-mechanical design to achieve a highly precise optical system. Retaining-ring mounting is simple and cost-effective; however, it can deform the optics through unpredictable over-constraint forces. The retaining ring can be modified to three small contact areas, becoming a semi-kinematic mounting. A semi-kinematic mounting fully constrains the lens assembly and avoids unpredictable surface deformation. However, deformation due to self-weight remains in large optics, especially in vertical-setup applications. The self-weight deformation with a semi-kinematic mounting is a stable, repeatable and predictable combination of power and trefoil aberrations. This predictable deformation can be pre-compensated into the design surface and corrected using a CNC polisher; the surface is therefore freeform before mounting into the lens cell. In this study, freeform correction polishing is demonstrated on a Φ150 mm lens with semi-kinematic mounting. The clear aperture of the lens is Φ143 mm. We utilize ANSYS simulation software to analyze the lens deformation due to self-weight with semi-kinematic mounting. The simulation results of the self-weight deformation are compared with measurements of the assembled lens cell using a QED aspheric stitching interferometer (ASI). A freeform surface of a lens with semi-kinematic mounting due to self-weight deformation is thereby verified. This deformation is corrected using a QED Magnetorheological Finishing (MRF®) Q-flex 300 polishing machine. The final surface form error of the assembled lens cell after MRF figuring is 0.042 λ peak-to-valley (PV).
Effect of single vision soft contact lenses on peripheral refraction.
Kang, Pauline; Fan, Yvonne; Oh, Kelly; Trac, Kevin; Zhang, Frank; Swarbrick, Helen
2012-07-01
To investigate changes in peripheral refraction with under-, full, and over-correction of central refraction with commercially available single vision soft contact lenses (SCLs) in young myopic adults. Thirty-four myopic adult subjects were fitted with Proclear Sphere SCLs to under-correct (+0.75 DS), fully correct, and over-correct (-0.75 DS) their manifest central refractive error. Central and peripheral refraction were measured with no lens wear and subsequently with different levels of SCL central refractive error correction. The uncorrected refractive error was myopic at all locations along the horizontal meridian. Peripheral refraction was relatively hyperopic compared to center at 30 and 35° in the temporal visual field (VF) in low myopes and at 30 and 35° in the temporal VF and 10, 30, and 35° in the nasal VF in moderate myopes. All levels of SCL correction caused a hyperopic shift in refraction at all locations in the horizontal VF. The smallest hyperopic shift was demonstrated with under-correction, followed by full correction and then by over-correction of central refractive error. An increase in relative peripheral hyperopia was measured with full correction SCLs compared with no correction in both low and moderate myopes. However, no difference in relative peripheral refraction profiles was found between under-, full, and over-correction. Under-, full, and over-correction of central refractive error with single vision SCLs caused a hyperopic shift in both central and peripheral refraction at all positions in the horizontal meridian. All levels of SCL correction caused the peripheral retina, which initially experienced absolute myopic defocus at baseline with no correction, to experience absolute hyperopic defocus. This peripheral hyperopia may be a possible cause of the myopia progression reported with different types and levels of myopia correction.
Córcoles, A.D.; Magesan, Easwar; Srinivasan, Srikanth J.; Cross, Andrew W.; Steffen, M.; Gambetta, Jay M.; Chow, Jerry M.
2015-01-01
The ability to detect and deal with errors when manipulating quantum systems is a fundamental requirement for fault-tolerant quantum computing. Unlike classical bits that are subject to only digital bit-flip errors, quantum bits are susceptible to a much larger spectrum of errors, for which any complete quantum error-correcting code must account. Whilst classical bit-flip detection can be realized via a linear array of qubits, a general fault-tolerant quantum error-correcting code requires extending into a higher-dimensional lattice. Here we present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits. This result represents a building block towards larger lattices amenable to fault-tolerant quantum error correction architectures such as the surface code. PMID:25923200
Córcoles, A D; Magesan, Easwar; Srinivasan, Srikanth J; Cross, Andrew W; Steffen, M; Gambetta, Jay M; Chow, Jerry M
2015-04-29
The ability to detect and deal with errors when manipulating quantum systems is a fundamental requirement for fault-tolerant quantum computing. Unlike classical bits that are subject to only digital bit-flip errors, quantum bits are susceptible to a much larger spectrum of errors, for which any complete quantum error-correcting code must account. Whilst classical bit-flip detection can be realized via a linear array of qubits, a general fault-tolerant quantum error-correcting code requires extending into a higher-dimensional lattice. Here we present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits. This result represents a building block towards larger lattices amenable to fault-tolerant quantum error correction architectures such as the surface code.
Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods
Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun
2016-01-01
This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses’ quadrature errors are different, and the quadrature correction system should be arranged independently. The process leading to quadrature error is proposed, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, and this value is 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walk (ARW) value from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups’ output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability. PMID:26751455
Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods.
Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun
2016-01-07
This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses' quadrature errors are different, and the quadrature correction system should be arranged independently. The process leading to quadrature error is proposed, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, and this value is 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walk (ARW) value from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups' output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability.
NASA Astrophysics Data System (ADS)
Ammazzalorso, F.; Bednarz, T.; Jelen, U.
2014-03-01
We demonstrate acceleration on graphic processing units (GPU) of automatic identification of robust particle therapy beam setups, minimizing negative dosimetric effects of Bragg peak displacement caused by treatment-time patient positioning errors. Our particle therapy research toolkit, RobuR, was extended with OpenCL support and used to implement calculation on GPU of the Port Homogeneity Index, a metric scoring irradiation port robustness through analysis of tissue density patterns prior to dose optimization and computation. Results were benchmarked against an independent native CPU implementation. Numerical results were in agreement between the GPU implementation and native CPU implementation. For 10 skull base cases, the GPU-accelerated implementation was employed to select beam setups for proton and carbon ion treatment plans, which proved to be dosimetrically robust, when recomputed in presence of various simulated positioning errors. From the point of view of performance, average running time on the GPU decreased by at least one order of magnitude compared to the CPU, rendering the GPU-accelerated analysis a feasible step in a clinical treatment planning interactive session. In conclusion, selection of robust particle therapy beam setups can be effectively accelerated on a GPU and become an unintrusive part of the particle therapy treatment planning workflow. Additionally, the speed gain opens new usage scenarios, like interactive analysis manipulation (e.g. constraining of some setup) and re-execution. Finally, through OpenCL portable parallelism, the new implementation is suitable also for CPU-only use, taking advantage of multiple cores, and can potentially exploit types of accelerators other than GPUs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Inoue, Tatsuya; Widder, Joachim; Dijk, Lisanne V. van
2016-11-01
Purpose: To investigate the impact of setup and range uncertainties, breathing motion, and interplay effects using scanning pencil beams in robustly optimized intensity modulated proton therapy (IMPT) for stage III non-small cell lung cancer (NSCLC). Methods and Materials: Three-field IMPT plans were created using a minimax robust optimization technique for 10 NSCLC patients. The plans accounted for 5- or 7-mm setup errors with ±3% range uncertainties. The robustness of the IMPT nominal plans was evaluated considering (1) isotropic 5-mm setup errors with ±3% range uncertainties; (2) breathing motion; (3) interplay effects; and (4) a combination of items 1 and 2. The plans were calculated using 4-dimensional and average intensity projection computed tomography images. The target coverage (TC, volume receiving 95% of prescribed dose) and homogeneity index (D2 − D98, where D2 and D98 are the least doses received by 2% and 98% of the volume) for the internal clinical target volume, and dose indexes for lung, esophagus, heart and spinal cord were compared with those of clinical volumetric modulated arc therapy plans. Results: The TC and homogeneity index for all plans were within clinical limits when considering the breathing motion and interplay effects independently. The setup and range uncertainties had a larger effect when considering their combined effect. The TC decreased to <98% (clinical threshold) in 3 of 10 patients for robust 5-mm evaluations. However, the TC remained >98% for robust 7-mm evaluations for all patients. The organ-at-risk dose parameters did not significantly vary between the respective robust 5-mm and robust 7-mm evaluations for the 4 error types. Compared with the volumetric modulated arc therapy plans, the IMPT plans showed better target homogeneity, and mean lung and heart dose parameters were reduced by about 40% and 60%, respectively.
Conclusions: In robustly optimized IMPT for stage III NSCLC, the setup and range uncertainties, breathing motion, and interplay effects have limited impact on target coverage, dose homogeneity, and organ-at-risk dose parameters.
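Setup-error statistics like those discussed in these abstracts are conventionally converted into planning margins with the van Herk recipe M = 2.5Σ + 0.7σ, where Σ and σ are the population systematic and random setup-error standard deviations. A minimal sketch, with hypothetical input values in millimeters:

```python
def van_herk_margin(sigma_sys, sigma_rand):
    """CTV-to-PTV margin (mm) from population systematic (Sigma) and
    random (sigma) setup-error standard deviations, using the classic
    van Herk recipe M = 2.5*Sigma + 0.7*sigma."""
    return 2.5 * sigma_sys + 0.7 * sigma_rand

# hypothetical population statistics: Sigma = 1.8 mm, sigma = 2.2 mm
print(round(van_herk_margin(1.8, 2.2), 2))  # 6.04
```

The factor 2.5 on Σ reflects that systematic errors shift the whole dose distribution, while random errors (factor 0.7) merely blur it.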
An Analysis of College Students' Attitudes towards Error Correction in EFL Context
ERIC Educational Resources Information Center
Zhu, Honglin
2010-01-01
This article is based on a survey on the attitudes towards the error correction by their teachers in the process of teaching and learning and it is intended to improve the language teachers' understanding of the nature of error correction. Based on the analysis, the article expounds some principles and techniques that can be applied in the process…
27 CFR 46.119 - Errors disclosed by taxpayers.
Code of Federal Regulations, 2010 CFR
2010-04-01
... that the name and address are correctly stated; if not, the taxpayer must return the stamp to the TTB officer who issued it, with a statement showing the nature of the error and the correct name or address... stamp with that of the Form 5630.5t in TTB files, correct the error if made in the TTB office, and...
ERIC Educational Resources Information Center
Alamri, Bushra; Fawzi, Hala Hassan
2016-01-01
Error correction has been one of the core areas in the field of English language teaching. It is "seen as a form of feedback given to learners on their language use" (Amara, 2015). Many studies investigated the use of different techniques to correct students' oral errors. However, only a few focused on students' preferences and attitude…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wojahn, Christopher K.
2015-10-20
This HDL code (hereafter referred to as "software") implements circuitry in Xilinx Virtex-5QV Field Programmable Gate Array (FPGA) hardware. This software allows the device to self-check the consistency of its own configuration memory for radiation-induced errors. The software then provides the capability to correct any single-bit errors detected in the memory using the device's inherent circuitry, or reload corrupted memory frames when larger errors occur that cannot be corrected with the device's built-in error correction and detection scheme.
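The readback-and-repair behavior described above can be sketched in software terms (a simplified model of configuration-memory scrubbing, not the actual HDL, and the frame representation here is hypothetical):

```python
def scrub(config_mem, golden):
    """Compare each configuration frame against a golden copy and
    rewrite any frame that differs (simplified readback scrubbing).
    Returns the addresses of the repaired frames."""
    repaired = []
    for addr, frame in enumerate(config_mem):
        if frame != golden[addr]:
            config_mem[addr] = golden[addr]  # reload the corrupted frame
            repaired.append(addr)
    return repaired

golden = [0b1010, 0b0110, 0b1111]
mem = [0b1010, 0b0100, 0b1111]   # frame 1 has a single-bit upset
assert scrub(mem, golden) == [1] and mem == golden
```

In the real device, single-bit errors are fixed by the built-in ECC circuitry and only multi-bit errors require reloading a whole frame; the sketch collapses both cases into a frame rewrite.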
Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas
2010-07-20
Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas
2010-08-17
Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
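Comparing range profiles across the slow-time dimension, as described above, amounts to estimating a shift between successive profiles. A toy cross-correlation version of that estimation step (illustrative only, not the patented method):

```python
def estimate_shift(ref, prof, max_lag=4):
    """Estimate the integer sample shift of prof relative to ref by
    maximizing the cross-correlation over candidate lags."""
    def corr(lag):
        return sum(ref[i] * prof[i + lag]
                   for i in range(len(ref))
                   if 0 <= i + lag < len(prof))
    return max(range(-max_lag, max_lag + 1), key=corr)

ref  = [0, 0, 1, 3, 1, 0, 0, 0]
prof = [0, 0, 0, 0, 1, 3, 1, 0]   # ref delayed by two samples
assert estimate_shift(ref, prof) == 2
```

Once the shift (i.e., the motion error) is estimated, the corresponding frequency and phase correction can be applied to the uncompressed data before range and azimuth compression, as the abstract describes.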
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warren, Samantha, E-mail: samantha.warren@oncology.ox.ac.uk; Partridge, Mike; Bolsi, Alessandra
Purpose: Planning studies to compare x-ray and proton techniques and to select the most suitable technique for each patient have been hampered by the nonequivalence of several aspects of treatment planning and delivery. A fair comparison should compare similarly advanced delivery techniques from current clinical practice and also assess the robustness of each technique. The present study therefore compared volumetric modulated arc therapy (VMAT) and single-field optimization (SFO) spot scanning proton therapy plans created using a simultaneous integrated boost (SIB) for dose escalation in midesophageal cancer and analyzed the effect of setup and range uncertainties on these plans. Methods andmore » Materials: For 21 patients, SIB plans with a physical dose prescription of 2 Gy or 2.5 Gy/fraction in 25 fractions to planning target volume (PTV){sub 50Gy} or PTV{sub 62.5Gy} (primary tumor with 0.5 cm margins) were created and evaluated for robustness to random setup errors and proton range errors. Dose–volume metrics were compared for the optimal and uncertainty plans, with P<.05 (Wilcoxon) considered significant. Results: SFO reduced the mean lung dose by 51.4% (range 35.1%-76.1%) and the mean heart dose by 40.9% (range 15.0%-57.4%) compared with VMAT. Proton plan robustness to a 3.5% range error was acceptable. For all patients, the clinical target volume D{sub 98} was 95.0% to 100.4% of the prescribed dose and gross tumor volume (GTV) D{sub 98} was 98.8% to 101%. Setup error robustness was patient anatomy dependent, and the potential minimum dose per fraction was always lower with SFO than with VMAT. The clinical target volume D{sub 98} was lower by 0.6% to 7.8% of the prescribed dose, and the GTV D{sub 98} was lower by 0.3% to 2.2% of the prescribed GTV dose. Conclusions: The SFO plans achieved significant sparing of normal tissue compared with the VMAT plans for midesophageal cancer. 
The target dose coverage in the SIB proton plans was less robust to random setup errors and might be unacceptable for certain patients. Robust optimization to ensure adequate target coverage of SIB proton plans might be beneficial.« less
Warren, Samantha; Partridge, Mike; Bolsi, Alessandra; Lomax, Anthony J.; Hurt, Chris; Crosby, Thomas; Hawkins, Maria A.
2016-01-01
Purpose Planning studies to compare x-ray and proton techniques and to select the most suitable technique for each patient have been hampered by the nonequivalence of several aspects of treatment planning and delivery. A fair comparison should compare similarly advanced delivery techniques from current clinical practice and also assess the robustness of each technique. The present study therefore compared volumetric modulated arc therapy (VMAT) and single-field optimization (SFO) spot scanning proton therapy plans created using a simultaneous integrated boost (SIB) for dose escalation in midesophageal cancer and analyzed the effect of setup and range uncertainties on these plans. Methods and Materials For 21 patients, SIB plans with a physical dose prescription of 2 Gy or 2.5 Gy/fraction in 25 fractions to planning target volume (PTV)50Gy or PTV62.5Gy (primary tumor with 0.5 cm margins) were created and evaluated for robustness to random setup errors and proton range errors. Dose–volume metrics were compared for the optimal and uncertainty plans, with P<.05 (Wilcoxon) considered significant. Results SFO reduced the mean lung dose by 51.4% (range 35.1%-76.1%) and the mean heart dose by 40.9% (range 15.0%-57.4%) compared with VMAT. Proton plan robustness to a 3.5% range error was acceptable. For all patients, the clinical target volume D98 was 95.0% to 100.4% of the prescribed dose and gross tumor volume (GTV) D98 was 98.8% to 101%. Setup error robustness was patient anatomy dependent, and the potential minimum dose per fraction was always lower with SFO than with VMAT. The clinical target volume D98 was lower by 0.6% to 7.8% of the prescribed dose, and the GTV D98 was lower by 0.3% to 2.2% of the prescribed GTV dose. Conclusions The SFO plans achieved significant sparing of normal tissue compared with the VMAT plans for midesophageal cancer. 
The target dose coverage in the SIB proton plans was less robust to random setup errors and might be unacceptable for certain patients. Robust optimization to ensure adequate target coverage of SIB proton plans might be beneficial. PMID:27084641
Warren, Samantha; Partridge, Mike; Bolsi, Alessandra; Lomax, Anthony J; Hurt, Chris; Crosby, Thomas; Hawkins, Maria A
2016-05-01
Planning studies to compare x-ray and proton techniques and to select the most suitable technique for each patient have been hampered by the nonequivalence of several aspects of treatment planning and delivery. A fair comparison should compare similarly advanced delivery techniques from current clinical practice and also assess the robustness of each technique. The present study therefore compared volumetric modulated arc therapy (VMAT) and single-field optimization (SFO) spot scanning proton therapy plans created using a simultaneous integrated boost (SIB) for dose escalation in midesophageal cancer and analyzed the effect of setup and range uncertainties on these plans. For 21 patients, SIB plans with a physical dose prescription of 2 Gy or 2.5 Gy/fraction in 25 fractions to planning target volume (PTV)50Gy or PTV62.5Gy (primary tumor with 0.5 cm margins) were created and evaluated for robustness to random setup errors and proton range errors. Dose-volume metrics were compared for the optimal and uncertainty plans, with P<.05 (Wilcoxon) considered significant. SFO reduced the mean lung dose by 51.4% (range 35.1%-76.1%) and the mean heart dose by 40.9% (range 15.0%-57.4%) compared with VMAT. Proton plan robustness to a 3.5% range error was acceptable. For all patients, the clinical target volume D98 was 95.0% to 100.4% of the prescribed dose and gross tumor volume (GTV) D98 was 98.8% to 101%. Setup error robustness was patient anatomy dependent, and the potential minimum dose per fraction was always lower with SFO than with VMAT. The clinical target volume D98 was lower by 0.6% to 7.8% of the prescribed dose, and the GTV D98 was lower by 0.3% to 2.2% of the prescribed GTV dose. The SFO plans achieved significant sparing of normal tissue compared with the VMAT plans for midesophageal cancer. The target dose coverage in the SIB proton plans was less robust to random setup errors and might be unacceptable for certain patients. 
Robust optimization to ensure adequate target coverage of SIB proton plans might be beneficial. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Beavis, Andrew W.; Ward, James W.
2014-03-01
Purpose: In recent years there has been interest in using computer simulation within medical training. The VERT (Virtual Environment for Radiotherapy Training) system is a flight simulator for radiation oncology professionals, in which fundamental concepts, techniques and problematic scenarios can be safely investigated. Methods: The system provides detailed simulations of several Linacs and the ability to display DICOM treatment plans. Patients can be mis-positioned with 'set-up errors', which can be explored visually, dosimetrically and using IGRT. Similarly, a variety of Linac calibration and configuration parameters can be altered manually or randomly via controlled errors in the simulated 3D Linac and its component parts. The implications of these can be investigated by following through a treatment scenario or using QC devices available within a Physics software module. Results: One resultant exercise is a systematic mis-calibration of 'lateral laser height' by 2 mm. The offset in patient alignment is easily identified using IGRT and then corrected by reference to the 'in-room monitor'. The dosimetric implication is demonstrated to be 0.4% by setting up a dosimetry phantom by the lasers (and ignoring TSD information). Finally, the need for recalibration can be shown by the Laser Alignment Phantom or by reference to the front pointer. Conclusions: The VERT system provides a realistic environment for training and enhancing understanding of radiotherapy concepts and techniques. Linac error conditions can be explored in this context and valuable experience gained in a controlled manner in a compressed period of time.
Vecchiato, G; De Vico Fallani, F; Astolfi, L; Toppi, J; Cincotti, F; Mattia, D; Salinari, S; Babiloni, F
2010-08-30
This paper presents some considerations about the use of adequate statistical techniques in the framework of neuroelectromagnetic brain mapping. With the use of advanced EEG/MEG recording setups involving hundreds of sensors, the issue of protection against the type I errors that can occur during the execution of hundreds of univariate statistical tests has gained interest. In the present experiment, we investigated the EEG signals from a mannequin acting as an experimental subject. Data were collected while performing a neuromarketing experiment and analyzed with state-of-the-art computational tools adopted in the specialized literature. Results showed that electric data from the mannequin's head presented statistically significant differences in power spectra during the visualization of a commercial advertisement when compared to the power spectra gathered during a documentary, when no adjustments were made to the alpha level of the multiple univariate tests performed. The use of the Bonferroni or Bonferroni-Holm adjustments correctly returned no differences between the signals gathered from the mannequin in the two experimental conditions. A partial sample of recently published literature in different neuroscience journals suggested that at least 30% of the papers do not use statistical protection against type I errors. While the occurrence of type I errors can be easily managed with appropriate statistical techniques, the use of such techniques is still not widely adopted in the literature. Copyright (c) 2010 Elsevier B.V. All rights reserved.
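The two adjustments the abstract recommends can be sketched in a few lines. The p-values below are made up for illustration, not taken from the study.

```python
def bonferroni(pvals, alpha=0.05):
    """Single-step Bonferroni: reject H0_i iff p_i <= alpha / m."""
    m = len(pvals)
    return [p <= alpha / m for p in pvals]

def holm(pvals, alpha=0.05):
    """Step-down Bonferroni-Holm: test sorted p-values against
    increasingly lenient thresholds, stopping at the first failure."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for k, i in enumerate(order):
        if pvals[i] <= alpha / (m - k):
            reject[i] = True
        else:
            break  # every larger p-value fails as well
    return reject

pvals = [0.001, 0.012, 0.03, 0.04, 0.20]  # illustrative p-values
print(bonferroni(pvals))  # [True, False, False, False, False]
print(holm(pvals))        # [True, True, False, False, False]
```

Holm controls the familywise error rate at the same level as Bonferroni but is uniformly more powerful; here it rejects one additional hypothesis.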
Cherenkov Video Imaging Allows for the First Visualization of Radiation Therapy in Real Time
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jarvis, Lesley A., E-mail: Lesley.a.jarvis@hitchcock.org; Norris Cotton Cancer Center at the Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire; Zhang, Rongxiao
Purpose: To determine whether Cherenkov light imaging can visualize radiation therapy in real time during breast radiation therapy. Methods and Materials: An intensified charge-coupled device (CCD) camera was synchronized to the 3.25-μs radiation pulses of the clinical linear accelerator with the intensifier set × 100. Cherenkov images were acquired continuously (2.8 frames/s) during fractionated whole breast irradiation with each frame an accumulation of 100 radiation pulses (approximately 5 monitor units). Results: The first patient images ever created are used to illustrate that Cherenkov emission can be visualized as a video during conditions typical for breast radiation therapy, even with complex treatment plans,more » mixed energies, and modulated treatment fields. Images were generated correlating to the superficial dose received by the patient and potentially the location of the resulting skin reactions. Major blood vessels are visible in the image, providing the potential to use these as biological landmarks for improved geometric accuracy. The potential for this system to detect radiation therapy misadministrations, which can result from hardware malfunction or patient positioning setup errors during individual fractions, is shown. Conclusions: Cherenkoscopy is a unique method for visualizing surface dose resulting in real-time quality control. We propose that this system could detect radiation therapy errors in everyday clinical practice at a time when these errors can be corrected to result in improved safety and quality of radiation therapy.« less
NASA Astrophysics Data System (ADS)
Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.
2017-12-01
Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, and (ii) implement an online correction (i.e., within the model) scheme to correct the GFS following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008, GRL]. Analysis increments represent the corrections that new observations make to, in this case, the 6-hr forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by 6 hr, assuming that initial model errors grow linearly and, at first, ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-hr model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring a low-dimensional correction. Analysis increments in 2015 and 2016 are reduced over oceans, which is attributed to improvements in the specification of the SSTs. These results encourage application of online correction, as suggested by Danforth and Kalnay, for mean, seasonal, diurnal, and semidiurnal model biases in the GFS to reduce both systematic and random errors. As the error growth in the short term is still linear, estimated model bias corrections can be added as a forcing term in the model tendency equation to correct online. Preliminary experiments with the GFS, correcting temperature and specific humidity online, show a reduction in model bias in the 6-hr forecast. 
This approach can then be used to guide and optimize the design of sub-grid scale physical parameterizations, more accurate discretization of the model dynamics, boundary conditions, radiative transfer codes, and other potential model improvements which can then replace the empirical correction scheme. The analysis increments also provide guidance in testing new physical parameterizations.
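The estimation step described above, time-averaging analysis increments and applying the result as a tendency forcing, can be sketched as follows. The grid size, the imposed bias, and the noise level are all illustrative assumptions, not GFS values.

```python
import numpy as np

# Estimate a systematic model bias as the time mean of 6-hr analysis
# increments, then apply it as an online forcing term (Danforth-Kalnay
# style). The synthetic increments below impose a known 0.3 K / 6 hr bias.
rng = np.random.default_rng(0)
n_cycles, grid = 400, (10, 20)  # analysis cycles, lat x lon (toy sizes)
true_bias = 0.3                 # K per 6 hr, imposed for the demo
increments = true_bias + rng.normal(0.0, 0.5, size=(n_cycles, *grid))

# Mean increment / 6 hr approximates the bias in the model tendency,
# assuming errors grow linearly over the 6-hr assimilation window.
bias_per_hour = increments.mean(axis=0) / 6.0

def corrected_tendency(model_tendency, dt_hours=1.0):
    """Add the estimated bias correction as a forcing term."""
    return model_tendency + bias_per_hour * dt_hours

estimated = float(bias_per_hour.mean()) * 6.0
print(f"estimated bias: {estimated:.2f} K / 6 hr")  # recovers ~0.3
```

Averaging over many cycles suppresses the random part of the increments, leaving the systematic tendency error that the forcing term then removes.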
ERIC Educational Resources Information Center
Abedi, Razie; Latifi, Mehdi; Moinzadeh, Ahmad
2010-01-01
This study tries to answer some long-standing questions in the field of writing instruction about the most effective ways to give feedback on students' errors in writing, by comparing the effect of error correction and error detection on the improvement of students' writing ability. In order to achieve this goal, 60 pre-intermediate English learners…
A Systematic Error Correction Method for TOVS Radiances
NASA Technical Reports Server (NTRS)
Joiner, Joanna; Rokke, Laurie; Einaudi, Franco (Technical Monitor)
2000-01-01
Treatment of systematic errors is crucial for the successful use of satellite data in a data assimilation system. Systematic errors in TOVS radiance measurements and radiative transfer calculations can be as large or larger than random instrument errors. The usual assumption in data assimilation is that observational errors are unbiased. If biases are not effectively removed prior to assimilation, the impact of satellite data will be lessened and can even be detrimental. Treatment of systematic errors is important for short-term forecast skill as well as the creation of climate data sets. A systematic error correction algorithm has been developed as part of a 1D radiance assimilation. This scheme corrects for spectroscopic errors, errors in the instrument response function, and other biases in the forward radiance calculation for TOVS. Such algorithms are often referred to as tuning of the radiances. The scheme is able to account for the complex, air-mass dependent biases that are seen in the differences between TOVS radiance observations and forward model calculations. We will show results of systematic error correction applied to the NOAA 15 Advanced TOVS as well as its predecessors. We will also discuss the ramifications of inter-instrument bias with a focus on stratospheric measurements.
Quantum steganography and quantum error-correction
NASA Astrophysics Data System (ADS)
Shaw, Bilal A.
Quantum error-correcting codes have been the cornerstone of research in quantum information science (QIS) for more than a decade. Without their conception, quantum computers would be a footnote in the history of science. When researchers embraced the idea that we live in a world where the effects of a noisy environment cannot completely be stripped away from the operations of a quantum computer, the natural way forward was to think about importing classical coding theory into the quantum arena to give birth to quantum error-correcting codes which could help in mitigating the debilitating effects of decoherence on quantum data. We first talk about the six-qubit quantum error-correcting code and show its connections to entanglement-assisted error-correcting coding theory and then to subsystem codes. This code bridges the gap between the five-qubit (perfect) and Steane codes. We discuss two methods to encode one qubit into six physical qubits. Each of the two examples corrects an arbitrary single-qubit error. The first example is a degenerate six-qubit quantum error-correcting code. We explicitly provide the stabilizer generators, encoding circuits, codewords, logical Pauli operators, and logical CNOT operator for this code. We also show how to convert this code into a non-trivial subsystem code that saturates the subsystem Singleton bound. We then prove that a six-qubit code without entanglement assistance cannot simultaneously possess a Calderbank-Shor-Steane (CSS) stabilizer and correct an arbitrary single-qubit error. A corollary of this result is that the Steane seven-qubit code is the smallest single-error correcting CSS code. Our second example is the construction of a non-degenerate six-qubit CSS entanglement-assisted code. This code uses one bit of entanglement (an ebit) shared between the sender (Alice) and the receiver (Bob) and corrects an arbitrary single-qubit error. 
The code we obtain is globally equivalent to the Steane seven-qubit code and thus corrects an arbitrary error on the receiver's half of the ebit as well. We prove that this code is the smallest code with a CSS structure that uses only one ebit and corrects an arbitrary single-qubit error on the sender's side. We discuss the advantages and disadvantages for each of the two codes. In the second half of this thesis we explore the yet uncharted and relatively undiscovered area of quantum steganography. Steganography is the process of hiding secret information by embedding it in an "innocent" message. We present protocols for hiding quantum information in a codeword of a quantum error-correcting code passing through a channel. Using either a shared classical secret key or shared entanglement Alice disguises her information as errors in the channel. Bob can retrieve the hidden information, but an eavesdropper (Eve) with the power to monitor the channel, but without the secret key, cannot distinguish the message from channel noise. We analyze how difficult it is for Eve to detect the presence of secret messages, and estimate rates of steganographic communication and secret key consumption for certain protocols. We also provide an example of how Alice hides quantum information in the perfect code when the underlying channel between Bob and her is the depolarizing channel. Using this scheme Alice can hide up to four stego-qubits.
ERIC Educational Resources Information Center
Taylor, David P.
1995-01-01
Presents an experiment that demonstrates conservation of momentum and energy using a box on the ground moving backwards as it is struck by a projectile. Discusses lab calculations, setup, management, errors, and improvements. (JRH)
Five-wave-packet quantum error correction based on continuous-variable cluster entanglement
Hao, Shuhong; Su, Xiaolong; Tian, Caixing; Xie, Changde; Peng, Kunchi
2015-01-01
Quantum error correction protects the quantum state against noise and decoherence in quantum communication and quantum computation, enabling fault-tolerant quantum information processing. We experimentally demonstrate a quantum error correction scheme with a five-wave-packet code against a single stochastic error, the original theoretical model of which was first proposed by S. L. Braunstein and T. A. Walker. Five submodes of a continuous-variable cluster entangled state of light are used as five encoding channels. Notably, in our encoding scheme the information of the input state is distributed over only three of the five channels, so any error appearing in the remaining two channels never affects the output state, i.e., the output quantum state is immune to errors in those two channels. The stochastic error on a single channel is corrected for both vacuum and squeezed input states, and the achieved fidelities of the output states are beyond the corresponding classical limit. PMID:26498395
Error Correction for the JLEIC Ion Collider Ring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, Guohui; Morozov, Vasiliy; Lin, Fanglei
2016-05-01
The sensitivity to misalignment, magnet strength error, and BPM noise is investigated in order to specify design tolerances for the ion collider ring of the Jefferson Lab Electron Ion Collider (JLEIC) project. These errors, including horizontal, vertical, and longitudinal displacement, roll error in the transverse plane, strength errors of the main magnets (dipole, quadrupole, and sextupole), BPM noise, and strength jitter of correctors, cause closed orbit distortion, tune change, beta-beat, coupling, chromaticity problems, etc. These problems generally reduce the dynamic aperture at the Interaction Point (IP). Following real commissioning experience at other machines, closed orbit correction, tune matching, beta-beat correction, decoupling, and chromaticity correction have been performed in the study. Finally, we find that the dynamic aperture at the IP is restored. This paper describes that work.
New Class of Quantum Error-Correcting Codes for a Bosonic Mode
NASA Astrophysics Data System (ADS)
Michael, Marios H.; Silveri, Matti; Brierley, R. T.; Albert, Victor V.; Salmilehto, Juha; Jiang, Liang; Girvin, S. M.
2016-07-01
We construct a new class of quantum error-correcting codes for a bosonic mode, which are advantageous for applications in quantum memories, communication, and scalable computation. These "binomial quantum codes" are formed from a finite superposition of Fock states weighted with binomial coefficients. The binomial codes can exactly correct errors that are polynomial up to a specific degree in bosonic creation and annihilation operators, including amplitude damping and displacement noise as well as boson addition and dephasing errors. For realistic continuous-time dissipative evolution, the codes can perform approximate quantum error correction to any given order in the time step between error detection measurements. We present an explicit approximate quantum error recovery operation based on projective measurements and unitary operations. The binomial codes are tailored for detecting boson loss and gain errors by means of measurements of the generalized number parity. We discuss optimization of the binomial codes and demonstrate that by relaxing the parity structure, codes with even lower unrecoverable error rates can be achieved. The binomial codes are related to existing two-mode bosonic codes, but offer the advantage of requiring only a single bosonic mode to correct amplitude damping as well as the ability to correct other errors. Our codes are similar in spirit to "cat codes" based on superpositions of the coherent states but offer several advantages such as smaller mean boson number, exact rather than approximate orthonormality of the code words, and an explicit unitary operation for repumping energy into the bosonic mode. The binomial quantum codes are realizable with current superconducting circuit technology, and they should prove useful in other quantum technologies, including bosonic quantum memories, photonic quantum communication, and optical-to-microwave up- and down-conversion.
Target Uncertainty Mediates Sensorimotor Error Correction
Acerbi, Luigi; Vijayakumar, Sethu; Wolpert, Daniel M.
2017-01-01
Human movements are prone to errors that arise from inaccuracies in both our perceptual processing and execution of motor commands. We can reduce such errors by both improving our estimates of the state of the world and through online error correction of the ongoing action. Two prominent frameworks that explain how humans solve these problems are Bayesian estimation and stochastic optimal feedback control. Here we examine the interaction between estimation and control by asking if uncertainty in estimates affects how subjects correct for errors that may arise during the movement. Unbeknownst to participants, we randomly shifted the visual feedback of their finger position as they reached to indicate the center of mass of an object. Even though participants were given ample time to compensate for this perturbation, they only fully corrected for the induced error on trials with low uncertainty about center of mass, with correction only partial in trials involving more uncertainty. The analysis of subjects’ scores revealed that participants corrected for errors just enough to avoid significant decrease in their overall scores, in agreement with the minimal intervention principle of optimal feedback control. We explain this behavior with a term in the loss function that accounts for the additional effort of adjusting one’s response. By suggesting that subjects’ decision uncertainty, as reflected in their posterior distribution, is a major factor in determining how their sensorimotor system responds to error, our findings support theoretical models in which the decision making and control processes are fully integrated. PMID:28129323
Refractive error and presbyopia among adults in Fiji.
Brian, Garry; Pearce, Matthew G; Ramke, Jacqueline
2011-04-01
To characterize refractive error, presbyopia and their correction among adults aged ≥ 40 years in Fiji, and contribute to a regional overview of these conditions. A population-based cross-sectional survey using multistage cluster random sampling. Presenting distance and near vision were measured and dilated slitlamp examination performed. The survey achieved 73.0% participation (n=1381). Presenting binocular distance vision ≥ 6/18 was achieved by 1223 participants. Another 79 had vision impaired by refractive error. Three of these were blind. At threshold 6/18, 204 participants had refractive error. Among these, 125 had spectacle-corrected presenting vision ≥ 6/18 ("met refractive error need"); 79 presented wearing no (n=74) or under-correcting (n=5) distance spectacles ("unmet refractive error need"). Presenting binocular near vision ≥ N8 was achieved by 833 participants. At threshold N8, 811 participants had presbyopia. Among these, 336 attained N8 with presenting near spectacles ("met presbyopia need"); 475 presented with no (n=402) or under-correcting (n=73) near spectacles ("unmet presbyopia need"). Rural residence was predictive of unmet refractive error (p=0.040) and presbyopia (p=0.016) need. Gender and household income source were not. Ethnicity-gender-age-domicile-adjusted to the Fiji population aged ≥ 40 years, "met refractive error need" was 10.3% (95% confidence interval [CI] 8.7-11.9%), "unmet refractive error need" was 4.8% (95%CI 3.6-5.9%), "refractive error correction coverage" was 68.3% (95%CI 54.4-82.2%),"met presbyopia need" was 24.6% (95%CI 22.4-26.9%), "unmet presbyopia need" was 33.8% (95%CI 31.3-36.3%), and "presbyopia correction coverage" was 42.2% (95%CI 37.6-46.8%). Fiji refraction and dispensing services should encourage uptake by rural dwellers and promote presbyopia correction. Lack of comparable data from neighbouring countries prevents a regional overview.
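The reported coverage figures follow directly from the met and unmet rates: coverage = met / (met + unmet). A quick check against the adjusted percentages above (the small differences come from rounding in the published figures):

```python
# Correction coverage = met need / (met need + unmet need), computed from
# the population-adjusted percentages quoted in the abstract.

def coverage_pct(met_pct: float, unmet_pct: float) -> float:
    return 100.0 * met_pct / (met_pct + unmet_pct)

print(f"refractive error coverage: {coverage_pct(10.3, 4.8):.1f}%")   # ~68.2 (reported 68.3)
print(f"presbyopia coverage:       {coverage_pct(24.6, 33.8):.1f}%")  # ~42.1 (reported 42.2)
```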
Coherent errors in quantum error correction
NASA Astrophysics Data System (ADS)
Greenbaum, Daniel; Dutton, Zachary
Analysis of quantum error correcting (QEC) codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. We present analytic results for the logical error as a function of concatenation level and code distance for coherent errors under the repetition code. For data-only coherent errors, we find that the logical error is partially coherent and therefore non-Pauli. However, the coherent part of the error is negligible after two or more concatenation levels or at fewer than ɛ^-(d-1) error correction cycles. Here ɛ << 1 is the rotation angle error per cycle for a single physical qubit and d is the code distance. These results support the validity of modeling coherent errors using a Pauli channel under some minimum requirements for code distance and/or concatenation. We discuss extensions to imperfect syndrome extraction and implications for general QEC.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malyapa, Robert; Lowe, Matthew; Christie Medical Physics and Engineering, The Christie NHS Foundation Trust, Manchester
Purpose: To evaluate the robustness of head and neck plans for treatment with intensity modulated proton therapy to range and setup errors, and to establish robustness parameters for the planning of future head and neck treatments. Methods and Materials: Ten patients previously treated were evaluated in terms of robustness to range and setup errors. Error bar dose distributions were generated for each plan, from which several metrics were extracted and used to define a robustness database of acceptable parameters over all analyzed plans. The patients were treated in sequentially delivered series, and plans were evaluated both for the first series and for the combined error over the whole treatment. To demonstrate the application of such a database in the head and neck, for 1 patient, an alternative treatment plan was generated using a simultaneous integrated boost (SIB) approach and plans of differing numbers of fields. Results: The robustness database for the treatment of head and neck patients is presented. In an example case, comparison of single and multiple field plans against the database shows clear improvements in robustness by using multiple fields. A comparison of sequentially delivered series and an SIB approach for this patient shows both to be of comparable robustness, although the SIB approach shows a slightly greater sensitivity to uncertainties. Conclusions: A robustness database was created for the treatment of head and neck patients with intensity modulated proton therapy based on previous clinical experience. This will allow the identification of future plans that may benefit from alternative planning approaches to improve robustness.
Quantum Error Correction with a Globally-Coupled Array of Neutral Atom Qubits
2013-02-01
… magneto-optical trap (MOT) located at the center of the science cell. Fluorescence… Abbreviations: bottle beam trap; GBA, Gaussian beam array; EMCCD, electron-multiplying charge-coupled device; microsec., microsecond; MOT, magneto-optical trap; QEC, quantum error correction; qubit, quantum bit. …developed and implemented an array of neutral atom qubits in optical traps for studies of quantum error correction. At the end of the three-year …
ERIC Educational Resources Information Center
Teba, Sourou Corneille
2017-01-01
The aim of this paper is, first, to help teachers correct students' errors thoroughly with effective strategies. Second, it attempts to find out whether teachers in Beninese secondary schools take an interest in error correction themselves. Finally, it points out the effective strategies that an EFL teacher can use for errors…
Errors, error detection, error correction and hippocampal-region damage: data and theories.
MacKay, Donald G; Johnson, Laura W
2013-11-01
This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future tests. Copyright © 2013 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirose, T; Arimura, H; Oga, S
2016-06-15
Purpose: The purpose of this study was to investigate the impact of planning target volume (PTV) margins that take into consideration clinical target volume (CTV) shape variations on treatment plans of intensity modulated radiation therapy (IMRT) for prostate cancer. Methods: The systematic errors and the random errors for patient setup errors in the right-left (RL), anterior-posterior (AP), and superior-inferior (SI) directions were obtained from data of 20 patients, and those for CTV shape variations were calculated from 10 patients, who were scanned weekly using cone beam computed tomography (CBCT). The setup error was defined as the difference in prostate centers between planning CT and CBCT images after bone-based registrations. CTV shape variations of the high-, intermediate-, and low-risk CTVs were calculated for each patient from variances of interfractional shape variations on each vertex of three-dimensional CTV point distributions, which were manually obtained from CTV contours on the CBCT images. PTV margins were calculated using the setup errors with and without CTV shape variations for each risk CTV. Six treatment plans were retrospectively made using the PTV margins with and without CTV shape variations for the three risk CTVs of 5 test patients. Furthermore, the treatment plans were applied to CBCT images to investigate the impact of shape variations on PTV margins. Results: The percentages of the population covered by the PTV, i.e., satisfying a CTV D98 of 95%, with and without the shape variations were 89.7% and 74.4% for high risk, 89.7% and 76.9% for intermediate risk, and 84.6% and 76.9% for low risk, respectively. Conclusion: PTV margins taking CTV shape variation into account provide a significant improvement in the applicable percentage of the population (P < 0.05). This study suggests that CTV shape variation should be taken into consideration when determining the PTV margins.
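One common way to fold systematic and random components into a PTV margin is the van Herk recipe, 2.5Σ + 0.7σ. The abstract does not state which recipe the authors used, and the millimetre values below are hypothetical, so this is a sketch of the general approach only.

```python
import math

def combined_systematic(*components_mm: float) -> float:
    """Quadrature sum of independent systematic components, e.g. a
    setup-error term and a CTV-shape-variation term."""
    return math.sqrt(sum(c * c for c in components_mm))

def van_herk_margin(Sigma_mm: float, sigma_mm: float) -> float:
    """PTV margin = 2.5*Sigma + 0.7*sigma (systematic, random)."""
    return 2.5 * Sigma_mm + 0.7 * sigma_mm

# Hypothetical per-axis values in mm, for illustration only.
setup_sys, shape_sys, setup_rand = 1.5, 1.0, 2.0
margin = van_herk_margin(combined_systematic(setup_sys, shape_sys), setup_rand)
print(f"PTV margin: {margin:.1f} mm")  # 5.9 mm with these inputs
```

Because systematic components add in quadrature, including shape variation enlarges the margin less than a linear sum would, but the 2.5× weighting still makes systematic errors the dominant contributor.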
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aristophanous, M; Court, L
Purpose: Despite daily image guidance, setup uncertainties can be high when treating large areas of the body. The aim of this study was to measure local uncertainties inside the PTV for patients receiving IMRT to the mediastinum region. Methods: Eleven lymphoma patients who received radiotherapy (in breath-hold) to the mediastinum were included in this study. The treated region could range all the way from the neck to the diaphragm. Each patient had a CT scan with a CT-on-rails system prior to every treatment. The entire PTV region was matched to the planning CT using automatic rigid registration. The PTV was then split into 5 regions: neck, supraclavicular, superior mediastinum, upper heart, and lower heart. Additional auto-registrations for each of the 5 local PTV regions were performed. The residual local setup errors were calculated as the difference between the final global PTV position and the individual final local PTV positions in the AP, SI, and RL directions. For each patient 4 CT scans were analyzed (1 per week of treatment). Results: The residual mean group error (M) and the standard deviation of the inter-patient (or systematic) error (Σ) were lowest in the RL direction of the superior mediastinum (0.0 mm and 0.5 mm) and highest in the RL direction of the lower heart (3.5 mm and 2.9 mm). The standard deviation of the inter-fraction (or random) error (σ) was lowest in the RL direction of the superior mediastinum (0.5 mm) and highest in the SI direction of the lower heart (3.9 mm). The directionality of local uncertainties is important; a superior residual error in the lower heart, for example, keeps it inside the global PTV. Conclusion: There is a complex relationship between breath-holding and positioning uncertainties that needs further investigation. Residual setup uncertainties can be significant even under daily CT image guidance when treating large regions of the body.
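The statistics quoted above (M, Σ, σ) are conventionally computed from per-patient, per-fraction errors as shown below. The three small error arrays are stand-ins for illustration, not the study's data.

```python
import numpy as np

# Population setup-error statistics:
#   M     = mean of the per-patient mean errors (group systematic error)
#   Sigma = SD of the per-patient mean errors (systematic spread)
#   sigma = RMS of the per-patient SDs (random error)
errors_mm = [
    np.array([1.0, 2.0, 1.5, 2.5]),   # patient 1, weekly errors (toy data)
    np.array([-0.5, 0.5, 0.0, 1.0]),  # patient 2
    np.array([0.0, -1.0, 1.0, 0.0]),  # patient 3
]

patient_means = np.array([e.mean() for e in errors_mm])
patient_sds = np.array([e.std(ddof=1) for e in errors_mm])

M = float(patient_means.mean())
Sigma = float(patient_means.std(ddof=1))
sigma = float(np.sqrt(np.mean(patient_sds ** 2)))

print(f"M = {M:.2f} mm, Sigma = {Sigma:.2f} mm, sigma = {sigma:.2f} mm")
```

Averaging each patient first separates the reproducible (systematic) part of the error from the fraction-to-fraction (random) scatter, which is why Σ and σ enter margin recipes with different weights.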
SU-E-J-117: Verification Method for the Detection Accuracy of Automatic Winston Lutz Test
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, A; Chan, K; Fee, F
2014-06-01
Purpose: The Winston Lutz test (WLT) has been a standard QA procedure performed prior to SRS treatment to verify the mechanical isocenter setup accuracy under different gantry/couch movements. Several detection algorithms exist for analyzing the ball-radiation field alignment automatically. However, the accuracy of these algorithms has not been fully addressed. Here, we reveal the possible errors arising from each step in the WLT, and verify the software detection accuracy with the Rectilinear Phantom Pointer (RLPP), a tool commonly used for aligning the treatment plan coordinate with the mechanical isocenter. Methods: The WLT was performed with the radio-opaque ball mounted on a MIS and irradiated onto EDR2 films. The films were scanned and processed with an in-house Matlab program for automatic isocenter detection. Tests were also performed to identify the errors arising from the setup, film development, and scanning process. The radio-opaque ball was then mounted onto the RLPP and offset laterally and longitudinally to 7 known positions (0, ±0.2, ±0.5, ±0.8 mm) manually for irradiations. The gantry and couch were set to zero degrees for all irradiations. The same scanned images were processed repeatedly to check the repeatability of the software. Results: Minimal discrepancies (mean = 0.05 mm) were detected with 2 films overlapped and irradiated together but developed separately; this reveals the error arising from the film processor and scanner alone. Maximum setup errors were found to be around 0.2 mm, by analyzing data collected from 10 irradiations over 2 months. For the known shifts introduced using the RLPP, the results agree with the manual offsets and fit linearly (R² > 0.99) when plotted relative to the first ball with zero shift. Conclusion: We systematically reveal the possible errors arising from each step in the WLT, and introduce a simple method to verify the detection accuracy of our in-house software using a clinically available tool.
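The linearity check against the known RLPP offsets amounts to a least-squares line fit with a coefficient-of-determination criterion. A sketch of that evaluation, with hypothetical detected positions (not the actual film measurements):

```python
import numpy as np

# Hypothetical: ball offsets applied with the RLPP (mm) and the positions
# detected by an automatic WLT analysis, relative to the zero-shift ball.
applied  = np.array([-0.8, -0.5, -0.2, 0.0, 0.2, 0.5, 0.8])
detected = np.array([-0.78, -0.52, -0.19, 0.01, 0.22, 0.48, 0.81])

# Least-squares line detected = a * applied + b
a, b = np.polyfit(applied, detected, 1)

# Coefficient of determination R^2 for the fit
residuals = detected - (a * applied + b)
ss_res = np.sum(residuals ** 2)
ss_tot = np.sum((detected - detected.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
```

A slope near 1 and R² > 0.99 (as reported in the abstract) together indicate that the detection software tracks known shifts without scale error.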
SU-F-BRD-05: Robustness of Dose Painting by Numbers in Proton Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Montero, A Barragan; Sterpin, E; Lee, J
Purpose: Proton range uncertainties may cause important dose perturbations within the target volume, especially when steep dose gradients are present, as in dose painting. The aim of this study is to assess the robustness against setup and range errors of highly heterogeneous dose prescriptions (i.e., dose painting by numbers) delivered by proton pencil beam scanning. Methods: An automatic workflow, based on MATLAB functions, was implemented through scripting in RayStation (RaySearch Laboratories). It performs a gradient-based segmentation of the dose painting volume from 18FDG-PET images (GTVPET) and calculates the dose prescription as a linear function of the FDG-uptake value on each voxel. The workflow was applied to two patients with head and neck cancer. Robustness against setup and range errors of the conventional PTV margin strategy (prescription dilated by 2.5 mm) versus CTV-based (minimax) robust optimization (2.5 mm setup, 3% range error) was assessed by comparing the prescription with the planned dose for a set of error scenarios. Results: In order to ensure dose coverage above 95% of the prescribed dose in more than 95% of the GTVPET voxels while compensating for the uncertainties, the plans with a PTV generated a high overdose. For the nominal case, up to 35% of the GTVPET received doses 5% beyond prescription. For the worst of the evaluated error scenarios, the volume with 5% overdose increased to 50%. In contrast, for CTV-based plans this 5% overdose was present only in a small fraction of the GTVPET, which ranged from 7% in the nominal case to 15% in the worst of the evaluated scenarios. Conclusion: The use of a PTV leads to non-robust dose distributions with excessive overdose in the painted volume. In contrast, robust optimization yields robust dose distributions with limited overdose. RaySearch Laboratories is sincerely acknowledged for providing us with the RayStation treatment planning system and for the support provided.
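The coverage and overdose percentages quoted above are per-voxel comparisons of planned dose against the painted prescription. A minimal sketch of that evaluation, with invented voxel doses (this is an illustrative check, not the RayStation evaluation itself):

```python
import numpy as np

def coverage_and_overdose(planned, prescribed, tol=0.05):
    """Per-voxel coverage and overdose fractions for a painted target.

    planned, prescribed: 1-D arrays of per-voxel dose (Gy).
    Coverage: planned dose at least 95% of the voxel's prescription.
    Overdose: planned dose more than (1 + tol) times the prescription
    (tol=0.05 matches the 5%-beyond-prescription criterion above).
    """
    covered  = np.mean(planned >= 0.95 * prescribed)
    overdose = np.mean(planned > (1.0 + tol) * prescribed)
    return covered, overdose

# Hypothetical voxel doses for a dose-painted prescription of 66-80 Gy
prescribed = np.array([66.0, 70.0, 74.0, 80.0])
planned    = np.array([67.0, 75.0, 73.0, 79.0])
covered, overdose = coverage_and_overdose(planned, prescribed)
```

Because the prescription varies voxel by voxel in dose painting by numbers, both metrics must be evaluated against the local prescription rather than a single target dose.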
Errors in radiation oncology: A study in pathways and dosimetric impact
Drzymala, Robert E.; Purdy, James A.; Michalski, Jeff
2005-01-01
As complexity for treating patients increases, so does the risk of error. Some publications have suggested that record and verify (R&V) systems may contribute in propagating errors. Direct data transfer has the potential to eliminate most, but not all, errors. And although the dosimetric consequences may be obvious in some cases, a detailed study does not exist. In this effort, we examined potential errors in terms of scenarios, pathways of occurrence, and dosimetry. Our goal was to prioritize error prevention according to likelihood of event and dosimetric impact. For conventional photon treatments, we investigated errors of incorrect source‐to‐surface distance (SSD), energy, omitted wedge (physical, dynamic, or universal) or compensating filter, incorrect wedge or compensating filter orientation, improper rotational rate for arc therapy, and geometrical misses due to incorrect gantry, collimator or table angle, reversed field settings, and setup errors. For electron beam therapy, errors investigated included incorrect energy, incorrect SSD, along with geometric misses. For special procedures we examined errors for total body irradiation (TBI, incorrect field size, dose rate, treatment distance) and LINAC radiosurgery (incorrect collimation setting, incorrect rotational parameters). Likelihood of error was determined and subsequently rated according to our history of detecting such errors. Dosimetric evaluation was conducted by using dosimetric data, treatment plans, or measurements. We found geometric misses to have the highest error probability. They most often occurred due to improper setup via coordinate shift errors or incorrect field shaping. The dosimetric impact is unique for each case and depends on the proportion of fields in error and volume mistreated. These errors were short‐lived due to rapid detection via port films. The most significant dosimetric error was related to a reversed wedge direction. 
This may occur due to incorrect collimator angle or wedge orientation. For parallel‐opposed 60° wedge fields, this error could be as high as 80% to a point off‐axis. Other examples of dosimetric impact included the following: SSD, ~2%/cm for photons or electrons; photon energy (6 MV vs. 18 MV), on average 16% depending on depth, electron energy, ~0.5cm of depth coverage per MeV (mega‐electron volt). Of these examples, incorrect distances were most likely but rapidly detected by in vivo dosimetry. Errors were categorized by occurrence rate, methods and timing of detection, longevity, and dosimetric impact. Solutions were devised according to these criteria. To date, no one has studied the dosimetric impact of global errors in radiation oncology. Although there is heightened awareness that with increased use of ancillary devices and automation, there must be a parallel increase in quality check systems and processes, errors do and will continue to occur. This study has helped us identify and prioritize potential errors in our clinic according to frequency and dosimetric impact. For example, to reduce the use of an incorrect wedge direction, our clinic employs off‐axis in vivo dosimetry. To avoid a treatment distance setup error, we use both vertical table settings and optical distance indicator (ODI) values to properly set up fields. As R&V systems become more automated, more accurate and efficient data transfer will occur. This will require further analysis. Finally, we have begun examining potential intensity‐modulated radiation therapy (IMRT) errors according to the same criteria. PACS numbers: 87.53.Xd, 87.53.St PMID:16143793
Calibration Issues and Operating System Requirements for Electron-Probe Microanalysis
NASA Technical Reports Server (NTRS)
Carpenter, P.
2006-01-01
Instrument purchase requirements and dialogue with manufacturers have established hardware parameters for alignment, stability, and reproducibility, which have helped improve the precision and accuracy of electron microprobe analysis (EPMA). The development of correction algorithms and the accurate solution to quantitative analysis problems requires the minimization of systematic errors and relies on internally consistent data sets. Improved hardware and computer systems have resulted in better automation of vacuum systems, stage and wavelength-dispersive spectrometer (WDS) mechanisms, and x-ray detector systems which have improved instrument stability and precision. Improved software now allows extended automated runs involving diverse setups and better integrates digital imaging and quantitative analysis. However, instrumental performance is not regularly maintained, as WDS are aligned and calibrated during installation but few laboratories appear to check and maintain this calibration. In particular, detector deadtime (DT) data is typically assumed rather than measured, due primarily to the difficulty and inconvenience of the measurement process. This is a source of fundamental systematic error in many microprobe laboratories and is unknown to the analyst, as the magnitude of DT correction is not listed in output by microprobe operating systems. The analyst must remain vigilant to deviations in instrumental alignment and calibration, and microprobe system software must conveniently verify the necessary parameters. Microanalysis of mission critical materials requires an ongoing demonstration of instrumental calibration. Possible approaches to improvements in instrument calibration, quality control, and accuracy will be discussed. Development of a set of core requirements based on discussions with users, researchers, and manufacturers can yield documents that improve and unify the methods by which instruments can be calibrated. 
These results can be used to continue improvements of EPMA.
A Parallel Decoding Algorithm for Short Polar Codes Based on Error Checking and Correcting
Pan, Xiaofei; Pan, Kegang; Ye, Zhan; Gong, Chao
2014-01-01
We propose a parallel decoding algorithm based on error checking and correcting to improve the performance of the short polar codes. In order to enhance the error-correcting capacity of the decoding algorithm, we first derive the error-checking equations generated on the basis of the frozen nodes, and then we introduce the method to check the errors in the input nodes of the decoder by the solutions of these equations. In order to further correct those checked errors, we adopt the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulate a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we use a parallel method based on the decoding tree to calculate probability messages of all the nodes in the decoder. Numerical results show that the proposed decoding algorithm achieves better performance than that of some existing decoding algorithms with the same code length. PMID:25540813
"Ser" and "Estar": Corrective Input to Children's Errors of the Spanish Copula Verbs
ERIC Educational Resources Information Center
Holtheuer, Carolina; Rendle-Short, Johanna
2013-01-01
Evidence for the role of corrective input as a facilitator of language acquisition is inconclusive. Studies show links between corrective input and grammatical use of some, but not other, language structures. The present study examined relationships between corrective parental input and children's errors in the acquisition of the Spanish copula…
Exposed and Embedded Corrections in Aphasia Therapy: Issues of Voice and Identity
ERIC Educational Resources Information Center
Simmons-Mackie, Nina; Damico, Jack S.
2008-01-01
Background: Because communication after the onset of aphasia can be fraught with errors, therapist corrections are pervasive in therapy for aphasia. Although corrections are designed to improve the accuracy of communication, some corrections can have social and emotional consequences during interactions. That is, exposure of errors can potentially…
Error-correcting codes on scale-free networks
NASA Astrophysics Data System (ADS)
Kim, Jung-Hoon; Ko, Young-Jo
2004-06-01
We investigate the potential of scale-free networks as error-correcting codes. We find that irregular low-density parity-check codes with the highest performance known to date have degree distributions well fitted by a power-law function p(k) ~ k^(-γ) with γ close to 2, which suggests that codes built on scale-free networks with appropriate power exponents can be good error-correcting codes, with a performance possibly approaching the Shannon limit. We demonstrate for an erasure channel that codes with a power-law degree distribution of the form p(k) = C(k+α)^(-γ), with k ≥ 2 and suitable selection of the parameters α and γ, indeed have very good error-correction capabilities.
Passive quantum error correction of linear optics networks through error averaging
NASA Astrophysics Data System (ADS)
Marshman, Ryan J.; Lund, Austin P.; Rohde, Peter P.; Ralph, Timothy C.
2018-02-01
We propose and investigate a method of error detection and noise correction for bosonic linear networks using a method of unitary averaging. The proposed error averaging does not rely on ancillary photons or control and feedforward correction circuits, remaining entirely passive in its operation. We construct a general mathematical framework for this technique and then give a series of proof of principle examples including numerical analysis. Two methods for the construction of averaging are then compared to determine the most effective manner of implementation and probe the related error thresholds. Finally we discuss some of the potential uses of this scheme.
Errata report on Herbert Goldstein's Classical Mechanics: Second edition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Unseren, M.A.; Hoffman, F.M.
This report describes errors in Herbert Goldstein's textbook Classical Mechanics, Second Edition (Copyright 1980, ISBN 0-201-02918-9). Some of the errors in current printings of the text were corrected in the second printing; however, after communicating with Addison Wesley, the publisher for Classical Mechanics, it was discovered that the corrected galley proofs had been lost by the printer and that no one had complained of any errors in the eleven years since the second printing. The errata sheet corrects errors from all printings of the second edition.
Entanglement renormalization, quantum error correction, and bulk causality
NASA Astrophysics Data System (ADS)
Kim, Isaac H.; Kastoryano, Michael J.
2017-04-01
Entanglement renormalization can be viewed as an encoding circuit for a family of approximate quantum error correcting codes. The logical information becomes progressively more well-protected against erasure errors at larger length scales. In particular, an approximate variant of holographic quantum error correcting code emerges at low energy for critical systems. This implies that two operators that are largely separated in scales behave as if they are spatially separated operators, in the sense that they obey a Lieb-Robinson type locality bound under a time evolution generated by a local Hamiltonian.
NASA Astrophysics Data System (ADS)
Vijayan, Rohan; Conley, Rebekah H.; Thompson, Reid C.; Clements, Logan W.; Miga, Michael I.
2016-03-01
Brain shift describes the deformation that the brain undergoes from mechanical and physiological effects typically during a neurosurgical or neurointerventional procedure. With respect to image guidance techniques, brain shift has been shown to compromise the fidelity of these approaches. In recent work, a computational pipeline has been developed to predict "brain shift" based on preoperatively determined surgical variables (such as head orientation), and subsequently correct preoperative images to more closely match the intraoperative state of the brain. However, a clinical workflow difficulty in the execution of this pipeline has been acquiring the surgical variables by the neurosurgeon prior to surgery. In order to simplify and expedite this process, an Android, Java-based application designed for tablets was developed to provide the neurosurgeon with the ability to orient 3D computer graphic models of the patient's head, determine expected location and size of the craniotomy, and provide the trajectory into the tumor. These variables are exported for use as inputs for the biomechanical models of the preoperative computing phase for the brain shift correction pipeline. The accuracy of the application's exported data was determined by comparing it to data acquired from the physical execution of the surgeon's plan on a phantom head. Results indicated good overlap of craniotomy predictions, craniotomy centroid locations, and estimates of patient's head orientation with respect to gravity. However, improvements in the app interface and mock surgical setup are needed to minimize error.
Development of a 3-D Pen Input Device
2008-09-01
...of a unistroke, which can be written on any surface or in the air while correcting integration errors from the measurements of the IMU (Inertial Measurement Unit).
ERIC Educational Resources Information Center
Rice, Bart F.; Wilde, Carroll O.
It is noted that with the prominence of computers in today's technological society, digital communication systems have become widely used in a variety of applications. Some of the problems that arise in digital communications systems are described. This unit presents the problem of correcting errors in such systems. Error correcting codes are…
Quantum cryptography: individual eavesdropping with the knowledge of the error-correcting protocol
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horoshko, D B
2007-12-31
The quantum key distribution protocol BB84 combined with the repetition protocol for error correction is analysed from the point of view of its security against individual eavesdropping relying on quantum memory. It is shown that the mere knowledge of the error-correcting protocol changes the optimal attack and provides the eavesdropper with additional information on the distributed key. (Fifth seminar in memory of D.N. Klyshko)
Towards integration of PET/MR hybrid imaging into radiation therapy treatment planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paulus, Daniel H., E-mail: daniel.paulus@imp.uni-erlangen.de; Thorwath, Daniela; Schmidt, Holger
2014-07-15
Purpose: Multimodality imaging has become an important adjunct of state-of-the-art radiation therapy (RT) treatment planning. Recently, simultaneous PET/MR hybrid imaging has become clinically available and may also contribute to target volume delineation and biological individualization in RT planning. For integration of PET/MR hybrid imaging into RT treatment planning, compatible dedicated RT devices are required for accurate patient positioning. In this study, prototype RT positioning devices intended for PET/MR hybrid imaging are introduced and tested toward PET/MR compatibility and image quality. Methods: A prototype flat RT table overlay and two radiofrequency (RF) coil holders that each fix one flexible body matrix RF coil for RT head/neck imaging have been evaluated within this study. MR image quality with the RT head setup was compared to the actual PET/MR setup with a dedicated head RF coil. PET photon attenuation and CT-based attenuation correction (AC) of the hardware components have been quantitatively evaluated by phantom scans. Clinical application of the new RT setup in PET/MR imaging was evaluated in an in vivo study. Results: The RT table overlay and RF coil holders are fully PET/MR compatible. MR phantom and volunteer imaging with the RT head setup revealed high image quality, comparable to images acquired with the dedicated PET/MR head RF coil, albeit with 25% reduced SNR. Repositioning accuracy of the RF coil holders was below 1 mm. PET photon attenuation of the RT table overlay was calculated to be 3.8%, and 13.8% for the RF coil holders. With CT-based AC of the devices, the underestimation error was reduced to 0.6% and 0.8%, respectively. Comparable results were found within the patient study. Conclusions: The newly designed RT devices for hybrid PET/MR imaging are PET and MR compatible. The mechanically rigid design and the reproducible positioning allow for straightforward CT-based AC.
The systematic evaluation within this study provides the technical basis for the clinical integration of PET/MR hybrid imaging into RT treatment planning.
Brahme, Anders; Nyman, Peter; Skatt, Björn
2008-05-01
A four-dimensional (4D) laser camera (LC) has been developed for accurate patient imaging in diagnostic and therapeutic radiology. A complementary metal-oxide semiconductor camera images the intersection of a scanned fan-shaped laser beam with the surface of the patient and allows real-time recording of movements in a three-dimensional (3D) or four-dimensional (4D) format (3D + time). The LC system was first designed as an accurate patient setup tool during diagnostic and therapeutic applications but was found to be of much wider applicability as a general 4D photon "tag" for the surface of the patient in different clinical procedures. It is presently used as a 3D or 4D optical benchmark or tag for accurate delineation of the patient surface, as demonstrated for patient auto setup, breathing, and heart motion detection. Furthermore, its future potential applications in gating, adaptive therapy, 3D or 4D image fusion between most imaging modalities, and image processing are discussed. It is shown that the LC system has a geometrical resolution of about 0.1 mm and that the rigid body repositioning accuracy is about 0.5 mm below 20 mm displacements, 1 mm below 40 mm, and better than 2 mm at 70 mm. This indicates a slight need for repeated repositioning when the initial error is larger than about 50 mm. The positioning accuracy with standard patient setup procedures for prostate cancer at Karolinska was found to be about 5-6 mm when independently measured using the LC system. The system was found valuable for positron emission tomography-computed tomography (PET-CT) in vivo tumor and dose delivery imaging, where it potentially may allow effective correction for breathing artifacts in 4D PET-CT and image fusion with lymph node atlases for accurate target volume definition in oncology.
With a LC system in all imaging and radiation therapy rooms, auto setup during repeated diagnostic and therapeutic procedures may save around 5 min per session, increase accuracy and allow efficient image fusion between all imaging modalities employed.
Autonomous Quantum Error Correction with Application to Quantum Metrology
NASA Astrophysics Data System (ADS)
Reiter, Florentin; Sorensen, Anders S.; Zoller, Peter; Muschik, Christine A.
2017-04-01
We present a quantum error correction scheme that stabilizes a qubit by coupling it to an engineered environment which protects it against spin or phase flips. Our scheme uses always-on couplings that run continuously in time and operates in a fully autonomous fashion, without the need to perform measurements or feedback operations on the system. The correction of errors takes place entirely at the microscopic level through a built-in feedback mechanism. Our dissipative error correction scheme can be implemented in a system of trapped ions and can be used for improving high-precision sensing. We show that the enhanced coherence time that results from the coupling to the engineered environment translates into a significantly enhanced precision for measuring weak fields. In a broader context, this work constitutes a stepping stone towards the paradigm of self-correcting quantum information processing.
NASA Technical Reports Server (NTRS)
Richards, W. Lance
1996-01-01
Significant strain-gage errors may exist in measurements acquired in transient-temperature environments if conventional correction methods are applied. As heating or cooling rates increase, temperature gradients between the strain-gage sensor and substrate surface increase proportionally. These temperature gradients introduce strain-measurement errors that are currently neglected in both conventional strain-correction theory and practice. Therefore, the conventional correction theory has been modified to account for these errors. A new experimental method has been developed to correct strain-gage measurements acquired in environments experiencing significant temperature transients. The new correction technique has been demonstrated through a series of tests in which strain measurements were acquired for temperature-rise rates ranging from 1 to greater than 100 degrees F/sec. Strain-gage data from these tests have been corrected with both the new and conventional methods and then compared with an analysis. Results show that, for temperature-rise rates greater than 10 degrees F/sec, the strain measurements corrected with the conventional technique produced strain errors that deviated from analysis by as much as 45 percent, whereas results corrected with the new technique were in good agreement with analytical results.
The new Heavy-ion MCP-based Ancillary Detector DANTE for the CLARA-PRISMA Setup
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valiente-Dobon, J. J.; Gadea, A.; Corradi, L.
2006-08-14
The CLARA-PRISMA setup is a powerful tool for spectroscopic studies of neutron-rich nuclei produced in multi-nucleon transfer and deep-inelastic reactions. It combines the large-acceptance spectrometer PRISMA with the γ-ray array CLARA. At present, the ancillary heavy-ion detector DANTE, based on micro-channel plates and to be installed at the CLARA-PRISMA setup, is being constructed at LNL. DANTE will open the possibility of measuring γ-γ Doppler-corrected coincidences for events outside the acceptance of PRISMA. This presentation describes the heavy-ion detector DANTE, as well as the performance of the first prototype.
Hypothesis Testing Using Factor Score Regression
Devlieger, Ines; Mayer, Axel; Rosseel, Yves
2015-01-01
In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and with structural equation modeling (SEM) by using analytic calculations and two Monte Carlo simulation studies to examine their finite sample characteristics. Several performance criteria are used, such as the bias using the unstandardized and standardized parameterization, efficiency, mean square error, standard error bias, type I error rate, and power. The results show that the bias correcting method, with the newly developed standard error, is the only suitable alternative for SEM. While it has a higher standard error bias than SEM, it has a comparable bias, efficiency, mean square error, power, and type I error rate. PMID:29795886
Small refractive errors--their correction and practical importance.
Skrbek, Matej; Petrová, Sylvie
2013-04-01
Small refractive errors present a group of specific far-sighted refractive dispositions that are compensated by enhanced accommodative exertion and are not manifested as a loss of visual acuity. This paper addresses several questions about their correction, following from the theoretical presumptions and expectations surrounding this dilemma. The main goal of this research was to confirm or refute the hypothesis about the convenience, efficiency, and frequency of a correction that does not raise visual acuity (or whose improvement is not noticeable). The next goal was to examine the connection between this correction and other factors (age, size of the refractive error, etc.). The last aim was to describe the subjective personal rating of the correction of these small refractive errors, and to determine the minimal improvement of visual acuity that is attractive enough for the client to purchase the correction (glasses, contact lenses). It was confirmed that there is a substantial group of subjects with good visual acuity for whom the correction is applicable, although it does not improve visual acuity much; its main importance is to eliminate asthenopia. The prime reason for acceptance of the correction typically changes during life as accommodation declines. Young people prefer the correction on the grounds of asthenopia, caused by a small refractive error or latent strabismus; elderly people acquire the correction because of the improvement in visual acuity. Overall, the correction was found useful in more than 30% of cases, if the gain in visual acuity was at least 0.3 on the decimal acuity scale.
Wang, L; Turaka, A; Meyer, J; Spoka, D; Jin, L; Fan, J; Ma, C
2012-06-01
To assess the reliability of soft tissue alignment by comparing pre- and post-treatment cone-beam CT (CBCT) for image guidance in stereotactic body radiotherapy (SBRT) of lung cancers. Our lung SBRT procedures require that all patients undergo a 4D CT scan in order to obtain patient-specific target motion information through reconstructed 4D data using the maximum-intensity projection (MIP) algorithm. The internal target volume (ITV) was outlined directly from the MIP images, and a 3-5 mm margin expansion was then applied to the ITV to create the PTV. Conformal treatment planning was performed on the helical images, to which the MIP images were fused. Prior to each treatment, CBCT was used for image guidance by comparison with the simulation CT and for patient relocalization based on the bony anatomy. Any displacement of the patient's bony structure would be considered a setup error and would be corrected by couch shifts. Theoretically, as the PTV definition included target internal motion, no further shifts other than setup corrections should be made. However, it is our practice to have treating physicians further check target localization within the PTV. Whenever the shifts based on the soft-tissue alignment (that is, target alignment) exceeded a certain value (e.g., 5 mm), a post-treatment CBCT was carried out to ensure that the tissue alignment is reliable by comparing pre- and post-treatment CBCT. Pre- and post-CBCT have been performed for 7 patients so far who had shifts beyond 5 mm despite bony alignment. For all patients, post-treatment CBCT confirmed that the visualized target position was kept in the same position as before treatment after adjusting for soft-tissue alignment. For the patient population studied, it is shown that soft-tissue alignment is necessary and reliable in lung SBRT for individual cases. © 2012 American Association of Physicists in Medicine.
Wang, Peng; Yin, Lingshu; Zhang, Yawei; Kirk, Maura; Song, Gang; Ahn, Peter H; Lin, Alexander; Gee, James; Dolney, Derek; Solberg, Timothy D; Maughan, Richard; McDonough, James; Teo, Boon-Keng Kevin
2016-03-08
The aim of this work is to demonstrate the feasibility of using water-equivalent thickness (WET) and virtual proton depth radiographs (PDRs) of intensity-corrected cone-beam computed tomography (CBCT) to detect anatomical change and patient setup error to trigger adaptive head and neck proton therapy. The planning CT (pCT) and linear accelerator (linac)-equipped CBCTs acquired weekly during treatment of a head and neck patient were used in this study. Deformable image registration (DIR) was used to register each CBCT with the pCT and map Hounsfield units (HUs) from the pCT onto the daily CBCT. The deformed pCT is referred to as the corrected CBCT (cCBCT). Two-dimensional virtual lateral PDRs were generated using a ray-tracing technique to project the cumulative WET from a virtual source through the cCBCT and the pCT onto a virtual plane. The PDRs were used to identify anatomic regions with large variations in the proton range between the cCBCT and pCT, using a threshold of 3 mm relative difference of WET and a 3 mm search radius. The relationship between PDR differences and the dose distribution is established. Due to weight change and tumor response during treatment, large variations in WETs were observed in the relative PDRs, which corresponded spatially with an increase in the number of failing points within the GTV, especially in the pharynx area. Failing points were also evident near the posterior neck due to setup variations. Differences in PDRs correlated spatially with differences in the distal dose distribution in the beam's eye view. Virtual PDRs generated from volumetric data, such as pCTs or CBCTs, are potentially a useful quantitative tool in proton therapy. PDRs and WET analysis may be used to detect anatomical change from baseline during treatment and trigger further analysis in adaptive proton therapy.
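At its core, the PDR comparison described above sums water-equivalent thickness along each projection ray and flags rays whose WET differs between pCT and cCBCT by more than the 3 mm threshold. A rough sketch under stated assumptions: voxels are already expressed as relative stopping power (the HU-to-stopping-power calibration is assumed and not shown), and rays run along one array axis rather than diverging from a virtual source:

```python
import numpy as np

def flag_wet_change(rsp_pct, rsp_ccbct, spacing_mm, threshold_mm=3.0):
    """Flag rays whose water-equivalent thickness changed beyond a threshold.

    rsp_pct, rsp_ccbct: arrays of relative stopping power per voxel,
    shape (n_rays, n_voxels_along_ray); last axis is the ray direction.
    WET per ray = sum of stopping power times voxel length.
    threshold_mm=3.0 matches the article's 3 mm WET criterion.
    """
    wet_pct   = rsp_pct.sum(axis=-1) * spacing_mm
    wet_ccbct = rsp_ccbct.sum(axis=-1) * spacing_mm
    return np.abs(wet_pct - wet_ccbct) > threshold_mm

# Hypothetical 3 rays x 4 voxels of relative stopping power, 2 mm voxels
rsp_pct = np.array([[1.0, 1.0, 1.0, 1.0],
                    [1.0, 0.9, 1.1, 1.0],
                    [0.2, 0.2, 1.0, 1.0]])   # air gap at treatment planning
rsp_ccbct = np.array([[1.0, 1.0, 1.0, 1.0],  # unchanged anatomy
                      [1.0, 0.9, 1.1, 1.0],  # unchanged anatomy
                      [1.0, 1.0, 1.0, 1.0]]) # tissue filled in: WET increased
flags = flag_wet_change(rsp_pct, rsp_ccbct, spacing_mm=2.0)
```

Only the third ray exceeds the threshold (WET 4.8 mm vs. 8.0 mm), which is the kind of localized change the article uses to trigger further adaptive-planning analysis.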
Improve homology search sensitivity of PacBio data by correcting frameshifts.
Du, Nan; Sun, Yanni
2016-09-01
Single-molecule, real-time (SMRT) sequencing, developed by Pacific Biosciences, produces longer reads than second-generation sequencing technologies such as Illumina. The long read length enables PacBio sequencing to close gaps in genome assembly, reveal structural variations, and identify gene isoforms with higher accuracy in transcriptomic sequencing. However, PacBio data has a high sequencing error rate, and most of the errors are insertions or deletions. During alignment-based homology search, insertion or deletion errors in genes cause frameshifts and may lead to only marginal alignment scores and short alignments. As a result, it is hard to distinguish true alignments from random alignments, and the ambiguity incurs errors in structural and functional annotation. Existing frameshift correction tools are designed for data with much lower error rates and are not optimized for PacBio data. As an increasing number of groups adopt SMRT, there is an urgent need for dedicated homology search tools for PacBio data. In this work, we introduce Frame-Pro, a profile homology search tool for PacBio reads. Our tool corrects sequencing errors and also outputs the profile alignments of the corrected sequences against characterized protein families. We applied our tool to both simulated and real PacBio data. The results showed that our method enables more sensitive homology search, especially for PacBio data sets of low sequencing coverage. In addition, we can correct more errors than a popular error correction tool that does not rely on hybrid sequencing. The source code is freely available at https://sourceforge.net/projects/frame-pro/. Contact: yannisun@msu.edu.
Repeat-aware modeling and correction of short read errors.
Yang, Xiao; Aluru, Srinivas; Dorman, Karin S
2011-02-15
High-throughput short read sequencing is revolutionizing genomics and systems biology research by enabling cost-effective deep coverage sequencing of genomes and transcriptomes. Error detection and correction are crucial to many short read sequencing applications, including de novo genome sequencing, genome resequencing, and digital gene expression analysis. Short read error detection is typically carried out by counting the observed frequencies of kmers in reads and validating those with frequencies exceeding a threshold. In the case of genomes with high repeat content, an erroneous kmer may be frequently observed if it has few nucleotide differences from valid kmers occurring multiple times in the genome. Error detection and correction have mostly been applied to genomes with low repeat content, and the problem remains challenging for genomes with high repeat content. We develop a statistical model and a computational method for error detection and correction in the presence of genomic repeats. We propose a method to infer genomic frequencies of kmers from their observed frequencies by analyzing the misread relationships among observed kmers. We also propose a method to estimate the threshold used for validating kmers whose estimated genomic frequency exceeds it. We demonstrate that superior error detection is achieved using these methods. Furthermore, we break away from the common assumption of uniformly distributed errors within a read, and provide a framework to model the position-dependent error occurrence frequencies common to many short read platforms. Lastly, we achieve better error correction in genomes with high repeat content. The software is implemented in C++ and is freely available under the GNU GPL3 license and the Boost Software V1.0 license at http://aluru-sun.ece.iastate.edu/doku.php?id=redeem.
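The threshold-based kmer validation the abstract builds on can be sketched in a few lines. This is an illustrative toy, not the paper's statistical model: a fixed count threshold stands in for the estimated genomic-frequency threshold, and the misread-relationship analysis is omitted.

```python
from collections import Counter

def kmer_counts(reads, k):
    """Count every length-k substring across all reads."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

def classify_kmers(counts, threshold):
    """kmers at or above the threshold are trusted; the rest are suspect."""
    trusted = {km for km, c in counts.items() if c >= threshold}
    suspect = set(counts) - trusted
    return trusted, suspect

reads = ["ACGTACGT", "ACGTACGA", "ACGTACGT"]   # one read ends in an error
counts = kmer_counts(reads, k=4)
trusted, suspect = classify_kmers(counts, threshold=2)
print(sorted(suspect))   # → ['ACGA']
```

The kmer containing the simulated miscall appears only once and falls below the threshold; in a repeat-rich genome, the paper's point is that such a count-only rule misclassifies erroneous kmers that resemble high-copy repeats.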
We introduce a statistical framework to model sequencing errors in next-generation reads, which led to promising results in detecting and correcting errors for genomes with high repeat content.
Double ErrP Detection for Automatic Error Correction in an ERP-Based BCI Speller.
Cruz, Aniana; Pires, Gabriel; Nunes, Urbano J
2018-01-01
Brain-computer interfaces (BCIs) are useful devices for people with severe motor disabilities. However, due to their low speed and low reliability, BCIs still have very limited application in daily real-world tasks. This paper proposes a P300-based BCI speller combined with double error-related potential (ErrP) detection to automatically correct erroneous decisions. This novel approach introduces a second error detection step to infer whether a wrong automatic correction also elicits a second ErrP. Thus, two single-trial responses, instead of one, contribute to the final selection, improving the reliability of error detection. Moreover, to increase error detection, the evoked potential detected as the target by the P300 classifier is combined with the evoked error potential at the feature level. Discriminable error and positive potentials (responses to correct feedback) were clearly identified. The proposed approach was tested on nine healthy participants and one tetraplegic participant. The online average accuracies for the first and second ErrPs were 88.4% and 84.8%, respectively. With automatic correction, we achieved an improvement of around 5%, reaching 89.9% spelling accuracy at an effective rate of 2.92 symbols/min. The proposed approach showed that double ErrP detection can improve the reliability and speed of BCI systems.
Local blur analysis and phase error correction method for fringe projection profilometry systems.
Rao, Li; Da, Feipeng
2018-05-20
We introduce a flexible error correction method for fringe projection profilometry (FPP) systems in the presence of the local blur phenomenon. Local blur caused by global light transport, such as camera defocus, projector defocus, and subsurface scattering, will cause significant systematic errors in FPP systems. Previous methods, which adopt high-frequency patterns to separate the direct and global components, fail when the global light phenomenon occurs locally. In this paper, the influence of local blur on phase quality is thoroughly analyzed, and a concise error correction method is proposed to compensate for the phase errors. For the defocus phenomenon, this method can be applied directly. With the aid of spatially varying point spread functions and a local frontal plane assumption, experiments show that the proposed method can effectively alleviate the systematic errors and improve the final reconstruction accuracy in various scenes. For a subsurface scattering scenario, if the translucent object is dominated by multiple scattering, the proposed method can also be applied to correct systematic errors once the bidirectional scattering-surface reflectance distribution function of the object material is measured.
ERIC Educational Resources Information Center
Sun, Wei; And Others
1992-01-01
Identifies types and distributions of errors in text produced by optical character recognition (OCR) and proposes a process using machine learning techniques to recognize and correct errors in OCR texts. Results of experiments indicating that this strategy can reduce human interaction required for error correction are reported. (25 references)…
NASA Technical Reports Server (NTRS)
Truong, T. K.; Hsu, I. S.; Eastman, W. L.; Reed, I. S.
1987-01-01
It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial and the error evaluator polynomial in Berlekamp's key equation needed to decode a Reed-Solomon (RS) code. A simplified procedure is developed and proved to correct erasures as well as errors by replacing the initial condition of the Euclidean algorithm by the erasure locator polynomial and the Forney syndrome polynomial. By this means, the errata locator polynomial and the errata evaluator polynomial can be obtained, simultaneously and simply, by the Euclidean algorithm only. With this improved technique the complexity of time domain RS decoders for correcting both errors and erasures is reduced substantially from previous approaches. As a consequence, decoders for correcting both errors and erasures of RS codes can be made more modular, regular, simple, and naturally suitable for both VLSI and software implementation. An example illustrating this modified decoding procedure is given for a (15, 9) RS code.
Analysis of error-correction constraints in an optical disk.
Roberts, J D; Ryley, A; Jones, D M; Burke, D
1996-07-10
The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check.
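The final CRC safeguard described above, which catches miscorrections that survive the Reed-Solomon layers, can be illustrated with a standard CRC-32 check. This is a hedged sketch: the CD-ROM EDC uses its own 32-bit polynomial rather than the zlib one, and the 2048-byte payload here is mock data.

```python
import zlib

def sector_check(payload: bytes) -> int:
    """Return the CRC-32 checksum stored alongside a data sector."""
    return zlib.crc32(payload)

sector = bytes(range(256)) * 8          # 2048-byte mock sector payload
stored_crc = sector_check(sector)

# Simulate an error burst that the Reed-Solomon layers failed to correct
# (or miscorrected): the recomputed CRC no longer matches the stored one.
corrupted = bytearray(sector)
corrupted[100] ^= 0xFF
print(sector_check(bytes(corrupted)) == stored_crc)   # False
```

Any change confined to a single byte is guaranteed to be detected by a 32-bit CRC, which is why the CRC status is a reliable indicator of miscorrection near the limits of the error-correcting codes.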
Patni, Nidhi; Burela, Nagarjuna; Pasricha, Rajesh; Goyal, Jaishree; Soni, Tej Prakash; Kumar, T Senthil; Natarajan, T
2017-01-01
To achieve the best possible therapeutic ratio, high-precision external beam radiation therapy techniques (image-guided radiation therapy/volumetric modulated arc therapy [IGRT/VMAT]) were applied in cases of carcinoma of the cervix using kilovoltage cone-beam computed tomography (kV-CBCT). One hundred and five patients with gynecological malignancies who were treated with IGRT (IGRT/VMAT) were included in the study. CBCT was done once a week for intensity-modulated radiation therapy and daily for IGRT/VMAT. These images were registered with the planning CT scan images, and translational errors were applied and recorded. In all, 2078 CBCT images were studied. The margins of the planning target volume were calculated from the variations in the setup. The setup variation was 5.8, 10.3, and 5.6 mm in the anteroposterior, superoinferior, and mediolateral directions, respectively. This allowed adequate dose delivery to the clinical target volume and the sparing of organs at risk. Daily kV-CBCT is a satisfactory method for accurate patient positioning when treating gynecological cancers with high-precision techniques, and it avoided geographic misses.
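Margin recipes of the kind used here commonly follow the van Herk formula M = 2.5Σ + 0.7σ, where Σ is the population systematic setup error and σ the random setup error. The sketch below assumes hypothetical per-axis values, since the abstract reports only total setup variations, not the systematic/random decomposition:

```python
def van_herk_margin(Sigma_mm, sigma_mm):
    """CTV-to-PTV margin recipe M = 2.5*Sigma + 0.7*sigma (all in mm)."""
    return 2.5 * Sigma_mm + 0.7 * sigma_mm

# Hypothetical per-axis systematic (Sigma) and random (sigma) errors in mm.
for axis, (Sigma, sigma) in {"AP": (2.0, 2.5),
                             "SI": (3.5, 3.0),
                             "ML": (1.8, 2.2)}.items():
    print(f"{axis}: {van_herk_margin(Sigma, sigma):.1f} mm")
```

The 2.5 weight on Σ reflects that systematic errors shift the whole dose distribution every fraction, whereas random errors merely blur it, so they carry the smaller 0.7 weight.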
Comparative evaluation of user interfaces for robot-assisted laser phonomicrosurgery.
Dagnino, Giulio; Mattos, Leonardo S; Becattini, Gabriele; Dellepiane, Massimo; Caldwell, Darwin G
2011-01-01
This research investigates the impact of three different control devices and two visualization methods on the precision, safety, and ergonomics of a new medical robotic system prototype for assistive laser phonomicrosurgery. The system allows the user to remotely control the surgical laser beam using either a flight-simulator-type joystick, a joypad, or a pen display system, in order to improve the traditional surgical setup composed of a mechanical micromanipulator coupled with a surgical microscope. The experimental setup and the protocol followed to obtain quantitative performance data from the tested control devices are fully described here. This includes sets of path-following evaluation experiments conducted with ten subjects of different skill levels, for a total of 700 trials. The data analysis method and experimental results are also presented, demonstrating an average 45% error reduction when using the joypad and up to a 60% error reduction when using the pen display system versus the standard phonomicrosurgery setup. These results demonstrate that the new system can provide important improvements in surgical precision, ergonomics, and safety. In addition, the evaluation method presented here is shown to support an objective selection of control devices for this application.
Impact of uncertainties in free stream conditions on the aerodynamics of a rectangular cylinder
NASA Astrophysics Data System (ADS)
Mariotti, Alessandro; Shoeibi Omrani, Pejman; Witteveen, Jeroen; Salvetti, Maria Vittoria
2015-11-01
The BARC benchmark deals with the flow around a rectangular cylinder with a chord-to-depth ratio equal to 5. This flow configuration is of practical interest for civil and industrial structures and is characterized by massively separated flow and unsteadiness. In a recent review of BARC results, significant dispersion was observed in both experimental and numerical predictions of some flow quantities, which are extremely sensitive to the various uncertainties present in experiments and simulations. Besides modeling and numerical errors, in simulations it is difficult to exactly reproduce the experimental conditions due to uncertainties in the set-up parameters, which sometimes cannot be exactly controlled or characterized. Probabilistic methods and URANS simulations are used to investigate the impact of uncertainties in the following set-up parameters: the angle of incidence, and the free stream longitudinal turbulence intensity and length scale. Stochastic collocation is employed to perform the probabilistic propagation of the uncertainty. The discretization and modeling errors are estimated by repeating the same analysis for different grids and turbulence models. The results obtained for different assumed PDFs of the set-up parameters are also compared.
Early screening of an infant's visual system
NASA Astrophysics Data System (ADS)
Costa, Manuel F. M.; Jorge, Jorge M.
1999-06-01
It is of the utmost importance to the development of a child's visual system that she perceives clear, focused retinal images. If refractive problems are not corrected in due time, amblyopia may occur. Myopia and hyperopia cause serious problems only when they are large; astigmatism (rather frequent in infants) and anisometropia, however, are more stringent concerns. The early evaluation of the visual status of human infants is thus of critical importance. Photorefraction is a convenient technique for subjects of this kind. Essentially, a light beam is delivered into the eyes; it is refracted by the ocular media, strikes the retina (focusing or not), reflects off, and is collected by a camera. The photorefraction setup we established, using recent technological advances in imaging devices, digital image processing, and fiber optics, allows a fast, noninvasive evaluation of children's visual status (refractive errors, accommodation, strabismus, ...). Results of the visual screening of a group of at-risk children of blind or amblyopic parents will be presented.
Laser interrogation techniques for high-sensitivity strain sensing by fiber-Bragg-grating structures
NASA Astrophysics Data System (ADS)
Gagliardi, G.; Salza, M.; Ferraro, P.; De Natale, P.
2017-11-01
Novel interrogation methods for static and dynamic measurements of mechanical deformations by fiber Bragg-grating (FBG) structures are presented. The sensor-reflected radiation gives information on the strain experienced, with a sensitivity dependent on the interrogation setup. Different approaches have been pursued, based on laser-frequency modulation techniques and near-IR lasers, to measure strain in single FBGs and in resonant high-reflectivity FBG arrays. In particular, for the fiber resonator, the laser frequency is actively locked to the cavity resonances by the Pound-Drever-Hall technique, thus tracking any frequency change due to deformations. The loop error and correction signals fed back to the laser are used as the strain monitor. Sensitivity limits are about 200 nɛ/√Hz in the quasi-static domain (0.5-2 Hz) and between 1 and 4 nɛ/√Hz in the 0.4-1 kHz range for the single-FBG scheme, while strain down to 50 pɛ can be detected using the laser-cavity-locked method.
NASA Astrophysics Data System (ADS)
Weber, Jonas H.; Kettler, Jan; Vural, Hüseyin; Müller, Markus; Maisch, Julian; Jetter, Michael; Portalupi, Simone L.; Michler, Peter
2018-05-01
As a fundamental building block for quantum computation and communication protocols, the correct verification of the two-photon interference (TPI) contrast between two independent quantum light sources is of utmost importance. Here, we experimentally demonstrate how frequently present blinking dynamics and changes in emitter brightness critically affect the Hong-Ou-Mandel-type (HOM) correlation histograms of remote TPI experiments measured via the commonly utilized setup configuration. We further exploit this qualitative and quantitative explanation of the observed correlation dynamics to establish an alternative interferometer configuration, which is overcoming the discussed temporal fluctuations, giving rise to an error-free determination of the remote TPI visibility. We prove full knowledge of the obtained correlation by reproducing the measured correlation statistics via Monte Carlo simulations. As an exemplary system, we make use of two pairs of remote semiconductor quantum dots; however, the same conclusions apply for TPI experiments with flying qubits from any kind of remote solid-state quantum emitters.
NASA Astrophysics Data System (ADS)
Liu, Wei; Yao, Kainan; Chen, Lu; Huang, Danian; Cao, Jingtai; Gu, Haijun
2018-03-01
Based on a previous study of the theory of the sequential pyramid wavefront sensor (SPWFS), in this paper the SPWFS is first applied to coherent free space optical communications (FSOC), offering more flexible spatial resolution and higher sensitivity than the Shack-Hartmann wavefront sensor, together with higher uniformity of intensity distribution and much greater simplicity than the pyramid wavefront sensor. Then, the mixing efficiency (ME) and the bit error rate (BER) of coherent FSOC are analyzed during aberration correction through numerical simulation with binary phase shift keying (BPSK) modulation. Finally, an experimental AO system based on the SPWFS is set up, and the experimental data are used to analyze the ME and BER of homodyne detection with BPSK modulation. The results show that the AO system based on the SPWFS can increase the ME and decrease the BER effectively. The conclusions of this paper provide a new method of wavefront sensing for designing the AO system of a coherent FSOC system.
Error analysis and correction of discrete solutions from finite element codes
NASA Technical Reports Server (NTRS)
Thurston, G. A.; Stein, P. A.; Knight, N. F., Jr.; Reissner, J. E.
1984-01-01
Many structures are an assembly of individual shell components. Therefore, results for stresses and deflections from finite element solutions for each shell component should agree with the equations of shell theory. This paper examines the problem of applying shell theory to the error analysis and the correction of finite element results. The general approach to error analysis and correction is discussed first. Relaxation methods are suggested as one approach to correcting finite element results for all or parts of shell structures. Next, the problem of error analysis of plate structures is examined in more detail. The method of successive approximations is adapted to take discrete finite element solutions and to generate continuous approximate solutions for postbuckled plates. Preliminary numerical results are included.
NASA Astrophysics Data System (ADS)
Welcome, Menizibeya O.; Dane, Şenol; Mastorakis, Nikos E.; Pereverzev, Vladimir A.
2017-12-01
The term "metaplasticity" is a recent one, meaning plasticity of synaptic plasticity. Correspondingly, neurometaplasticity simply means plasticity of neuroplasticity, indicating that a previous plastic event determines the current plasticity of neurons. Emerging studies suggest that neurometaplasticity underlies many neural activities and neurobehavioral disorders. In our previous work, we indicated that glucoallostasis is essential for the control of plasticity in the neural networks that control error commission, detection, and correction. Here we review recent works which suggest that task precision depends on the modulatory effects of neuroplasticity on the neural networks of error commission, detection, and correction. Furthermore, we discuss neurometaplasticity and its role in error commission, detection, and correction.
On the use of inexact, pruned hardware in atmospheric modelling
Düben, Peter D.; Joven, Jaume; Lingamneni, Avinash; McNamara, Hugh; De Micheli, Giovanni; Palem, Krishna V.; Palmer, T. N.
2014-01-01
Inexact hardware design, which advocates trading the accuracy of computations in exchange for significant savings in area, power and/or performance of computing hardware, has received increasing prominence in several error-tolerant application domains, particularly those involving perceptual or statistical end-users. In this paper, we evaluate inexact hardware for its applicability in weather and climate modelling. We expand previous studies on inexact techniques, in particular probabilistic pruning, to floating point arithmetic units and derive several simulated set-ups of pruned hardware with reasonable levels of error for applications in atmospheric modelling. The set-up is tested on the Lorenz ‘96 model, a toy model for atmospheric dynamics, using software emulation for the proposed hardware. The results show that large parts of the computation tolerate the use of pruned hardware blocks without major changes in the quality of short- and long-time diagnostics, such as forecast errors and probability density functions. This could open the door to significant savings in computational cost and to higher resolution simulations with weather and climate models. PMID:24842031
MacDonald, M. Ethan; Forkert, Nils D.; Pike, G. Bruce; Frayne, Richard
2016-01-01
Purpose: Volume flow rate (VFR) measurements based on phase contrast (PC)-magnetic resonance (MR) imaging datasets have spatially varying bias due to eddy current induced phase errors. The purpose of this study was to assess the impact of phase errors in time averaged PC-MR imaging of the cerebral vasculature and explore the effects of three common correction schemes (local bias correction (LBC), local polynomial correction (LPC), and whole brain polynomial correction (WBPC)). Methods: Measurements of the eddy current induced phase error from a static phantom were first obtained. In thirty healthy human subjects, the methods were then assessed in background tissue to determine if local phase offsets could be removed. Finally, the techniques were used to correct VFR measurements in cerebral vessels and compared statistically. Results: In the phantom, phase error was measured to be <2.1 ml/s per pixel and the bias was reduced with the correction schemes. In background tissue, the bias was significantly reduced, by 65.6% (LBC), 58.4% (LPC) and 47.7% (WBPC) (p < 0.001 across all schemes). Correction did not lead to significantly different VFR measurements in the vessels (p = 0.997). In the vessel measurements, the three correction schemes led to flow measurement differences of -0.04 ± 0.05 ml/s, 0.09 ± 0.16 ml/s, and -0.02 ± 0.06 ml/s. Although there was an improvement in background measurements with correction, there was no statistical difference between the three correction schemes (p = 0.242 in background and p = 0.738 in vessels). Conclusions: While eddy current induced phase errors can vary between hardware and sequence configurations, our results showed that the impact is small in a typical brain PC-MR protocol and does not have a significant effect on VFR measurements in cerebral vessels. PMID:26910600
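A polynomial correction such as LPC or WBPC amounts to fitting a low-order 2D polynomial to the measured velocity in static tissue and subtracting the fitted surface everywhere. A minimal sketch under simplifying assumptions (first-order fit, synthetic linear eddy-current bias; the study's actual fitting regions and polynomial orders may differ):

```python
import numpy as np

def polynomial_phase_correction(velocity_map, static_mask, order=1):
    """Fit a 2D polynomial to velocities in static tissue; subtract it."""
    ny, nx = velocity_map.shape
    y, x = np.mgrid[0:ny, 0:nx]
    # Design matrix with all monomials x^i * y^j, i + j <= order
    terms = [(x ** i) * (y ** j)
             for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.stack([t[static_mask] for t in terms], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, velocity_map[static_mask], rcond=None)
    bias = sum(c * t for c, t in zip(coeffs, terms))
    return velocity_map - bias

# Synthetic example: a linear eddy-current bias over a 64x64 field of view
ny = nx = 64
y, x = np.mgrid[0:ny, 0:nx]
velocity = 0.01 * x - 0.005 * y + 0.2    # background bias only
velocity[30:34, 30:34] += 5.0            # a "vessel" with true flow signal
mask = np.ones((ny, nx), bool)
mask[28:36, 28:36] = False               # exclude vessels from the fit
corrected = polynomial_phase_correction(velocity, mask)
print(abs(corrected[10, 10]) < 1e-6)     # background bias removed
```

Excluding vessels from the fitting mask is essential: including true flow signal would bias the polynomial and subtract part of the flow of interest.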
Pálfalvi, László; Tóth, György; Tokodi, Levente; Márton, Zsuzsanna; Fülöp, József András; Almási, Gábor; Hebling, János
2017-11-27
A hybrid-type terahertz pulse source is proposed for high energy terahertz pulse generation. It is the combination of the conventional tilted-pulse-front setup and a transmission stair-step echelon-faced nonlinear crystal with a period falling in the hundred-micrometer range. The most important advantage of the setup is the possibility of using a plane-parallel nonlinear optical crystal to produce a good-quality, symmetric terahertz beam. Another advantage of the proposed setup is the significant reduction of imaging errors, which is important in the case of the wide pump beams used in high energy experiments. A one-dimensional model was developed for determining the terahertz generation efficiency, and it was used for a quantitative comparison between the proposed new hybrid setup and previously introduced terahertz sources. With lithium niobate as the nonlinear material, calculations predict an approximately ten-fold increase in the efficiency of the presently described hybrid terahertz pulse source with respect to that of the earlier proposed setup, which utilizes a reflective stair-step echelon and a prism-shaped nonlinear optical crystal. By using pump pulses of 50 mJ pulse energy, 500 fs pulse length, and 8 mm beam spot radius, approximately 1% conversion efficiency and 0.5 mJ terahertz pulse energy can be reached with the newly proposed setup.
Commers, Tessa; Swindells, Susan; Sayles, Harlan; Gross, Alan E; Devetten, Marcel; Sandkovsky, Uriel
2014-01-01
Errors in prescribing antiretroviral therapy (ART) often occur with the hospitalization of HIV-infected patients. The rapid identification and prevention of errors may reduce patient harm and healthcare-associated costs. A retrospective review of hospitalized HIV-infected patients was carried out between 1 January 2009 and 31 December 2011. Errors were documented as omission, underdose, overdose, duplicate therapy, incorrect scheduling and/or incorrect therapy. The time to error correction was recorded. Relative risks (RRs) were computed to evaluate patient characteristics and error rates. A total of 289 medication errors were identified in 146/416 admissions (35%). The most common was drug omission (69%). At an error rate of 31%, nucleoside reverse transcriptase inhibitors were associated with an increased risk of error when compared with protease inhibitors (RR 1.32; 95% CI 1.04-1.69) and co-formulated drugs (RR 1.59; 95% CI 1.19-2.09). Of the errors, 31% were corrected within the first 24 h, but over half (55%) were never remedied. Admissions with an omission error were 7.4 times more likely to have all errors corrected within 24 h than were admissions without an omission. Drug interactions with ART were detected on 51 occasions. For the study population (n = 177), an increased risk of admission error was observed for black (43%) compared with white (28%) individuals (RR 1.53; 95% CI 1.16-2.03) but no significant differences were observed between white patients and other minorities or between men and women. Errors in inpatient ART were common, and the majority were never detected. The most common errors involved omission of medication, and nucleoside reverse transcriptase inhibitors had the highest rate of prescribing error. Interventions to prevent and correct errors are urgently needed.
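The relative risks quoted above follow the standard two-group comparison with a Wald confidence interval on log(RR). The sketch below uses hypothetical counts consistent with the reported rates (43% of black and 28% of white patients with an error, in a cohort of 177); the study's exact counts and interval method are not given in the abstract, so the computed CI need not reproduce the published one.

```python
import math

def relative_risk(events_a, n_a, events_b, n_b):
    """Relative risk of group A vs. group B with a 95% Wald CI on log(RR)."""
    p_a, p_b = events_a / n_a, events_b / n_b
    rr = p_a / p_b
    se = math.sqrt((1 - p_a) / events_a + (1 - p_b) / events_b)
    lo, hi = (rr * math.exp(s * 1.96 * se) for s in (-1, 1))
    return rr, lo, hi

# Hypothetical counts: 30/70 black vs. 30/107 white patients with an error
rr, lo, hi = relative_risk(events_a=30, n_a=70, events_b=30, n_b=107)
print(round(rr, 2))   # → 1.53
```

The log-scale interval is used because the sampling distribution of RR is right-skewed; exponentiating the symmetric interval on log(RR) restores the correct asymmetry.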
Alachiotis, Nikolaos; Vogiatzi, Emmanouella; Pavlidis, Pavlos; Stamatakis, Alexandros
2013-01-01
Automated DNA sequencers generate chromatograms that contain raw sequencing data. They also generate data that translates the chromatograms into molecular sequences of A, C, G, T, or N (undetermined) characters. Since chromatogram translation programs frequently introduce errors, a manual inspection of the generated sequence data is required. As sequence numbers and lengths increase, visual inspection and manual correction of chromatograms and corresponding sequences on a per-peak and per-nucleotide basis becomes an error-prone, time-consuming, and tedious process. Here, we introduce ChromatoGate (CG), an open-source software that accelerates and partially automates the inspection of chromatograms and the detection of sequencing errors for bidirectional sequencing runs. To provide users full control over the error correction process, a fully automated error correction algorithm has not been implemented. Initially, the program scans a given multiple sequence alignment (MSA) for potential sequencing errors, assuming that each polymorphic site in the alignment may be attributed to a sequencing error with a certain probability. The guided MSA assembly procedure in ChromatoGate detects chromatogram peaks of all characters in an alignment that lead to polymorphic sites, given a user-defined threshold. The threshold value represents the sensitivity of the sequencing error detection mechanism. After this pre-filtering, the user only needs to inspect a small number of peaks in every chromatogram to correct sequencing errors. Finally, we show that correcting sequencing errors is important, because population genetic and phylogenetic inferences can be misled by MSAs with uncorrected mis-calls. Our experiments indicate that estimates of population mutation rates can be affected two- to three-fold by uncorrected errors. PMID:24688709
ERIC Educational Resources Information Center
Straalen-Sanderse, Wilma van; And Others
1986-01-01
Following an experiment which revealed that production of grammatically correct sentences and correction of grammatically problematic sentences in French are essentially different skills, a progressive training method for finding and correcting grammatical errors was developed. (MSE)
Ding, Yi; Peng, Kai; Yu, Miao; Lu, Lei; Zhao, Kun
2017-08-01
The performance of the two selected spatial frequency phase unwrapping methods is limited by a phase error bound beyond which errors will occur in the fringe order, leading to a significant error in the recovered absolute phase map. In this paper, we propose a method to detect and correct the wrong fringe orders. Two constraints are introduced during the fringe order determination of two selected spatial frequency phase unwrapping methods. A strategy to detect and correct the wrong fringe orders is also described. Compared with the existing methods, we do not need to estimate the threshold associated with absolute phase values to determine the fringe order error, thus making the method more reliable and avoiding the search procedure in detecting and correcting successive fringe order errors. The effectiveness of the proposed method is validated by the experimental results.
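For context, the fringe-order computation that such methods protect can be sketched as follows. This is a generic two-frequency temporal unwrapping sketch, not the authors' specific method; the symbols f_h (high frequency) and f_l (low frequency) are assumed names:

```python
import math

def unwrap_two_freq(phi_h, phi_l, f_h, f_l=1.0):
    """Two-frequency temporal unwrapping: recover the absolute phase of the
    wrapped high-frequency phase phi_h from the (already absolute) low-
    frequency phase phi_l. Returns (fringe_order, absolute_phase). A phase
    error larger than the method's bound makes the rounding pick a wrong
    fringe order, which is what the paper detects and corrects."""
    k = round((f_h / f_l * phi_l - phi_h) / (2 * math.pi))   # fringe order
    return k, phi_h + 2 * math.pi * k

# Ground truth: absolute phase 25.0 rad at f_h = 8 fringes
f_h, truth = 8.0, 25.0
phi_l = truth / f_h                       # low-frequency absolute phase
phi_h = math.fmod(truth, 2 * math.pi)     # wrapped high-frequency phase
k, recovered = unwrap_two_freq(phi_h, phi_l, f_h)
print(k, round(recovered, 6))             # → 3 25.0
```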
Motion-induced phase error estimation and correction in 3D diffusion tensor imaging.
Van, Anh T; Hernando, Diego; Sutton, Bradley P
2011-11-01
A multishot data acquisition strategy is one way to mitigate B0 distortion and T2∗ blurring for high-resolution diffusion-weighted magnetic resonance imaging experiments. However, different object motions that take place during different shots cause phase inconsistencies in the data, leading to significant image artifacts. This work proposes a maximum likelihood estimation and k-space correction of motion-induced phase errors in 3D multishot diffusion tensor imaging. The proposed error estimation is robust, unbiased, and approaches the Cramer-Rao lower bound. For rigid body motion, the proposed correction effectively removes motion-induced phase errors regardless of the k-space trajectory used and gives comparable performance to the more computationally expensive 3D iterative nonlinear phase error correction method. The method has been extended to handle multichannel data collected using phased-array coils. Simulation and in vivo data are shown to demonstrate the performance of the method.
Classical simulation of quantum error correction in a Fibonacci anyon code
NASA Astrophysics Data System (ADS)
Burton, Simon; Brell, Courtney G.; Flammia, Steven T.
2017-02-01
Classically simulating the dynamics of anyonic excitations in two-dimensional quantum systems is likely intractable in general because such dynamics are sufficient to implement universal quantum computation. However, processes of interest for the study of quantum error correction in anyon systems are typically drawn from a restricted class that displays significant structure over a wide range of system parameters. We exploit this structure to classically simulate, and thereby demonstrate the success of, an error-correction protocol for a quantum memory based on the universal Fibonacci anyon model. We numerically simulate a phenomenological model of the system and noise processes on lattice sizes of up to 128 ×128 sites, and find a lower bound on the error-correction threshold of approximately 0.125 errors per edge, which is comparable to those previously known for Abelian and (nonuniversal) non-Abelian anyon models.
Adaptive optics system performance approximations for atmospheric turbulence correction
NASA Astrophysics Data System (ADS)
Tyson, Robert K.
1990-10-01
Analysis of adaptive optics system behavior often can be reduced to a few approximations and scaling laws. For atmospheric turbulence correction, the deformable mirror (DM) fitting error is most often used to determine a priori the interactuator spacing and the total number of correction zones required. This paper examines the mirror fitting error in terms of its most commonly used exponential form. The explicit constant in the error term is dependent on deformable mirror influence function shape and actuator geometry. The method of least squares fitting of discrete influence functions to the turbulent wavefront is compared to the linear spatial filtering approximation of system performance. It is found that the spatial filtering method overestimates the correctability of the adaptive optics system by a small amount. Fitting error constants, evaluated for a number of DM configurations, actuator geometries, and influence functions, verify some earlier investigations.
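The exponential fitting-error form discussed above is commonly written as sigma_fit^2 = a_F (d/r0)^(5/3). A quick sketch; the constant a_F ≈ 0.28 is a typical literature value for continuous-facesheet mirrors and is an assumption here, since the abstract's point is precisely that a_F depends on influence function and geometry:

```python
# Deformable-mirror fitting error scaling law: sigma^2 = a_F * (d / r0)^(5/3),
# with d the interactuator spacing, r0 the Fried parameter, and a_F a
# constant set by the influence-function shape and actuator geometry.
def fitting_error_rad2(d, r0, a_F=0.28):    # a_F ~ 0.28: continuous facesheet
    return a_F * (d / r0) ** (5.0 / 3.0)

# Halving the actuator spacing reduces the residual by 2^(5/3) ≈ 3.17x
r0 = 0.10                                    # 10 cm Fried parameter
ratio = fitting_error_rad2(0.10, r0) / fitting_error_rad2(0.05, r0)
print(round(ratio, 2))                       # → 3.17
```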
Error control for reliable digital data transmission and storage systems
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.; Deng, R. H.
1985-01-01
A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple bit (or byte) per chip basis. For example, some 256K-bit DRAM's are organized in 32Kx8 bit-bytes. Byte oriented codes such as Reed Solomon (RS) codes can provide efficient low overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. In this paper we present some special decoding techniques for extended single- and double-error-correcting RS codes which are capable of high speed operation. These techniques are designed to find the error locations and the error values directly from the syndrome without having to use the iterative algorithm to find the error locator polynomial. Two codes are considered: (1) a d_min = 4 single-byte-error-correcting (SBEC), double-byte-error-detecting (DBED) RS code; and (2) a d_min = 6 double-byte-error-correcting (DBEC), triple-byte-error-detecting (TBED) RS code.
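The "directly from the syndrome" idea can be illustrated for the simplest case, a single byte error over GF(2^8). This is a hedged sketch, not the paper's d_min = 4 or d_min = 6 codes; the primitive polynomial (0x11d) and the parity-check layout are assumptions:

```python
# Direct syndrome decoding of a single-byte error: with checks chosen so
# that S0 = sum(r_i) and S1 = sum(alpha^i * r_i) vanish for a valid word,
# a single error of value e at position j gives S0 = e and S1 = alpha^j * e,
# so location and value follow directly from the syndrome -- no iterative
# (error-locator polynomial) step is needed.
EXP, LOG = [0] * 512, [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= 0x11d            # primitive polynomial x^8+x^4+x^3+x^2+1
for i in range(255, 512):
    EXP[i] = EXP[i - 255]     # extend table so products need no reduction

def gmul(a, b):               # multiplication in GF(2^8) via log tables
    return 0 if 0 in (a, b) else EXP[LOG[a] + LOG[b]]

def syndromes(word):
    s0, s1 = 0, 0
    for i, r in enumerate(word):
        s0 ^= r
        s1 ^= gmul(EXP[i], r)             # alpha^i * r_i
    return s0, s1

def correct_single(word):
    s0, s1 = syndromes(word)
    if s0 == 0:
        return word                        # no single-byte error detected
    j = (LOG[s1] - LOG[s0]) % 255          # error position = log(S1/S0)
    fixed = list(word)
    fixed[j] ^= s0                         # error value = S0
    return fixed

word = [0] * 8                # the all-zero word is trivially a codeword
word[3] ^= 0x5A               # inject a single-byte error
print(correct_single(word))   # → [0, 0, 0, 0, 0, 0, 0, 0]
```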
Expert system for automatically correcting OCR output
NASA Astrophysics Data System (ADS)
Taghva, Kazem; Borsack, Julie; Condit, Allen
1994-03-01
This paper describes a new expert system for automatically correcting errors made by optical character recognition (OCR) devices. The system, which we call the post-processing system, is designed to improve the quality of text produced by an OCR device in preparation for subsequent retrieval from an information system. The system is composed of numerous parts: an information retrieval system, an English dictionary, a domain-specific dictionary, and a collection of algorithms and heuristics designed to correct as many OCR errors as possible. For the remaining errors that cannot be corrected, the system passes them on to a user-level editing program. This post-processing system can be viewed as part of a larger system that would streamline the steps of taking a document from its hard copy form to its usable electronic form, or it can be considered a stand-alone system for OCR error correction. An earlier version of this system has been used to process approximately 10,000 pages of OCR-generated text. Among the OCR errors discovered by this version, about 87% were corrected. We implement numerous new parts of the system, test this new version, and present the results.
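A minimal sketch of dictionary-backed OCR correction in the spirit described above; the confusion pairs and lexicon are illustrative assumptions, and the actual system additionally uses an information retrieval system and domain heuristics:

```python
# Try common OCR confusion substitutions cumulatively and accept a
# candidate only if it lands in the lexicon; otherwise pass the token
# through to manual (user-level) editing.
CONFUSIONS = [("0", "o"), ("1", "l"), ("vv", "w")]   # assumed pairs
LEXICON = {"hello", "wood", "line"}                  # assumed dictionary

def correct_token(token, lexicon=LEXICON):
    if token in lexicon:
        return token
    cand = token
    for wrong, right in CONFUSIONS:
        cand = cand.replace(wrong, right)
        if cand in lexicon:
            return cand
    return token              # uncorrectable: leave for manual editing

print(correct_token("he110"))   # → hello
print(correct_token("vvood"))   # → wood
```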
NASA Astrophysics Data System (ADS)
Fraser, Danielle
In radiation therapy an uncertainty in the delivered dose always exists because anatomic changes are unpredictable and patient specific. Image guided radiation therapy (IGRT) relies on imaging in the treatment room to monitor the tumour and surrounding tissue to ensure their prescribed position in the radiation beam. The goal of this thesis was to determine the dosimetric impact on the misaligned radiation therapy target for three cancer sites due to common setup errors: organ motion, tumour tissue deformation, changes in body habitus, and treatment planning errors. For this purpose, a novel 3D ultrasound system (Restitu, Resonant Medical, Inc.) was used to acquire a reference image of the target in the computed tomography simulation room at the time of treatment planning, to acquire daily images in the treatment room at the time of treatment delivery, and to compare the daily images to the reference image. The measured differences in position and volume between daily and reference geometries were incorporated into Monte Carlo (MC) dose calculations. The EGSnrc (National Research Council, Canada) family of codes was used to model Varian linear accelerators and patient specific beam parameters, as well as to estimate the dose to the target and organs at risk under several different scenarios. After validating the necessity of MC dose calculations in the pelvic region, the impact of interfraction prostate motion, and subsequent patient realignment under the treatment beams, on the delivered dose was investigated. For 32 patients it is demonstrated that using 3D conformal radiation therapy techniques and a 7 mm margin, the prescribed dose to the prostate, rectum, and bladder is recovered within 0.5% of that planned when patient setup is corrected for prostate motion, despite the beams interacting with a new external surface and internal tissue boundaries.
In collaboration with the manufacturer, the ultrasound system was adapted from transabdominal imaging to neck imaging. Two case studies of nasopharyngeal cancer are discussed. The deformation of disease-positive cervical lymph nodes was monitored throughout treatment. Node volumes shrunk to 17% of the initial volume, moved up 1.3 cm, and received up to a 12% lower dose than that prescribed. It is shown that difficulties in imaging soft tissue in the neck region are circumvented with ultrasound imaging, and after dosimetric verification it is argued that adaptive replanning may be more beneficial than patient realignment when intensity modulated radiation therapy techniques are used. Some of the largest dose delivery errors were found in external electron beam treatments for breast cancer patients who underwent breast conserving surgery. Inaccuracies in conventional treatment planning resulted in substantial target dose discrepancies of up to 88%. When patient setup errors, interfraction tumour bed motion, and tissue remodeling were considered, inadequate target coverage was exacerbated. This thesis quantifies the dose discrepancy between that prescribed and that delivered. I delve into detail for common IGRT treatment sites, and illuminate problems that have not received much attention for less common IGRT treatment sites.
A NEW GUI FOR GLOBAL ORBIT CORRECTION AT THE ALS USING MATLAB
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pachikara, J.; Portmann, G.
2007-01-01
Orbit correction is a vital procedure at particle accelerators around the world. The orbit correction routine currently used at the Advanced Light Source (ALS) is a bit cumbersome and a new Graphical User Interface (GUI) has been developed using MATLAB. The correction algorithm uses a singular value decomposition method for calculating the required corrector magnet changes for correcting the orbit. The application has been successfully tested at the ALS. The GUI display provided important information regarding the orbit including the orbit errors before and after correction, the amount of corrector magnet strength change, and the standard deviation of the orbit error with respect to the number of singular values used. The use of more singular values resulted in better correction of the orbit error but at the expense of enormous corrector magnet strength changes. The results showed an inverse relationship between the peak-to-peak values of the orbit error and the number of singular values used. The GUI helps the ALS physicists and operators understand the specific behavior of the orbit. The application is convenient to use and is a substantial improvement over the previous orbit correction routine in terms of user-friendliness and compactness.
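The truncated-SVD correction and the trade-off reported above (more singular values give a better-corrected orbit but larger corrector strengths) can be reproduced with a toy response matrix. A hedged numpy sketch with random data, not the ALS application itself:

```python
# Truncated-SVD orbit correction: corrector kicks theta = -V S^+ U^T x,
# keeping only the first n singular values. Fewer singular values give
# gentler corrector strengths but a larger residual orbit error.
import numpy as np

rng = np.random.default_rng(0)
R = rng.normal(size=(40, 12))          # orbit response matrix (BPMs x correctors)
x = rng.normal(size=40)                # measured orbit error at the BPMs

def correct(R, x, n_sv):
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:n_sv] = 1.0 / s[:n_sv]      # truncate to n_sv singular values
    theta = -Vt.T @ (s_inv * (U.T @ x))
    return theta, x + R @ theta        # corrector change, residual orbit

t4, r4 = correct(R, x, 4)
t12, r12 = correct(R, x, 12)
print(np.linalg.norm(r12) < np.linalg.norm(r4))   # → True  (better orbit)
print(np.linalg.norm(t12) > np.linalg.norm(t4))   # → True  (stronger kicks)
```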
MeCorS: Metagenome-enabled error correction of single cell sequencing reads
Bremges, Andreas; Singer, Esther; Woyke, Tanja; ...
2016-03-15
Here we present a new tool, MeCorS, to correct chimeric reads and sequencing errors in Illumina data generated from single amplified genomes (SAGs). It uses sequence information derived from accompanying metagenome sequencing to accurately correct errors in SAG reads, even from ultra-low coverage regions. In evaluations on real data, we show that MeCorS outperforms BayesHammer, the most widely used state-of-the-art approach. MeCorS performs particularly well in correcting chimeric reads, which greatly improves both accuracy and contiguity of de novo SAG assemblies.
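The core metagenome-enabled idea, trusting k-mers supported by the accompanying metagenome and repairing SAG read positions whose k-mers are unsupported, can be sketched as follows. This is an illustrative toy, not MeCorS's algorithm; K and min_count are assumed parameters:

```python
# Count k-mers in the metagenome reads, then repair SAG read positions whose
# k-mer is unsupported by substituting the single base that restores a
# trusted (sufficiently covered) k-mer.
from collections import Counter

K = 5

def kmer_counts(reads):
    counts = Counter()
    for r in reads:
        for i in range(len(r) - K + 1):
            counts[r[i:i + K]] += 1
    return counts

def fix_kmer(kmer, trusted, min_count):
    for p in range(K):                       # try every single-base fix
        for b in "ACGT":
            cand = kmer[:p] + b + kmer[p + 1:]
            if trusted[cand] >= min_count:
                return p, b
    return None

def correct_read(read, trusted, min_count=2):
    read = list(read)
    for i in range(len(read) - K + 1):
        kmer = "".join(read[i:i + K])
        if trusted[kmer] >= min_count:
            continue                         # k-mer supported: leave it
        hit = fix_kmer(kmer, trusted, min_count)
        if hit is not None:
            read[i + hit[0]] = hit[1]
    return "".join(read)

meta = ["ACGTACGTAC"] * 3                    # metagenome support
print(correct_read("ACGTACTTAC", kmer_counts(meta)))  # → ACGTACGTAC
```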
Digital Mirror Device Application in Reduction of Wave-front Phase Errors
Zhang, Yaping; Liu, Yan; Wang, Shuxue
2009-01-01
In order to correct the image distortion created by the mixing/shear layer, creative and effectual correction methods are necessary. First, a method combining adaptive optics (AO) correction with a digital micro-mirror device (DMD) is presented. Second, performance of an AO system using the Phase Diverse Speckle (PDS) principle is characterized in detail. Through combining the DMD method with PDS, a significant reduction in wavefront phase error is achieved in simulations and experiments. This kind of complex correction principle can be used to recover the degraded images caused by unforeseen error sources. PMID:22574016
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Derek; Mutanga, Theodore
Purpose: An end-to-end testing methodology was designed to evaluate the overall SRS treatment fidelity, incorporating all steps in the linac-based frameless radiosurgery treatment delivery process. The study details our commissioning experience of the Steev (CIRS, Norfolk, VA) stereotactic anthropomorphic head phantom including modification, test design, and baseline measurements. Methods: Repeated MR and CT scans were performed with interchanging inserts. MR-CT fusion accuracy was evaluated and the insert spatial coincidence was verified on CT. Five non-coplanar arcs delivered a prescription dose to a 15 mm spherical CTV with 2 mm PTV margin. Following setup, CBCT-based shifts were applied as per protocol. Sequential measurements were performed by interchanging inserts without disturbing the setup. Spatial and dosimetric accuracy was assessed by a combination of CBCT hidden target, radiochromic film, and ion chamber measurements. To facilitate film registration, the film insert was modified in-house by etching marks. Results: MR fusion error and insert spatial coincidences were within 0.3 mm. Both CBCT and film measurements showed spatial displacements of 1.0 mm in similar directions. Both coronal and sagittal films reported 2.3% higher target dose relative to the treatment plan. The corrected ion chamber measurement was similarly greater by 1.0%. The 3%/2 mm gamma pass rate was 99% for both films. Conclusions: A comprehensive end-to-end testing methodology was implemented for our SRS QA program. The Steev phantom enabled realistic evaluation of the entire treatment process. Overall spatial and dosimetric accuracy of the delivery were 1 mm and 3%, respectively.
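The 3%/2 mm gamma criterion used above combines a dose-difference tolerance with a distance-to-agreement tolerance. A one-dimensional sketch of the standard gamma-index computation, assuming global normalisation to the reference maximum:

```python
# 1-D gamma index for a dose profile: a measured point passes if some
# reference point lies inside the combined dose-difference (3%) and
# distance-to-agreement (2 mm) ellipsoid, i.e. gamma <= 1.
import math

def gamma_1d(ref, meas, dx, dose_tol=0.03, dist_tol=2.0):
    """ref, meas: dose samples on the same grid with spacing dx (mm).
    Returns per-point gamma values (global normalisation to max(ref))."""
    d_norm = dose_tol * max(ref)
    gammas = []
    for i, dm in enumerate(meas):
        best = math.inf
        for j, dr in enumerate(ref):
            dist = (i - j) * dx
            g2 = (dist / dist_tol) ** 2 + ((dm - dr) / d_norm) ** 2
            best = min(best, g2)
        gammas.append(math.sqrt(best))
    return gammas

ref = [1.0, 2.0, 3.0, 2.0, 1.0]
meas = [1.0, 2.0, 3.05, 2.0, 1.0]           # ~1.7% high at the peak
g = gamma_1d(ref, meas, dx=1.0)
print(all(v <= 1.0 for v in g))             # → True  (100% pass rate)
```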
Eguizabal, Johnny; Tufaga, Michael; Scheer, Justin K; Ames, Christopher; Lotz, Jeffrey C; Buckley, Jenni M
2010-05-07
In vitro multi-axial bending testing using pure moment loading conditions has become the standard in evaluating the effects of different types of surgical intervention on spinal kinematics. Simple, cable-driven experimental set-ups have been widely adopted because they require little infrastructure. Traditionally, "fixed ring" cable-driven experimental designs have been used; however, there have been concerns with the validity of this set-up in applying pure moment loading. This study involved directly comparing the loading state induced by a traditional "fixed ring" apparatus versus a novel "sliding ring" approach. Flexion-extension bending was performed on an artificial spine model and a single cadaveric test specimen, and the applied loading conditions to the specimen were measured with an in-line multiaxial load cell. The results showed that the fixed ring system applies flexion-extension moments that are 50-60% less than the intended values. This design also imposes non-trivial anterior-posterior shear forces, and non-uniform loading conditions were induced along the length of the specimen. The results of this study indicate that fixed ring systems have the potential to deviate from a pure moment loading state and that our novel sliding ring modification corrects this error in the original test design. This suggests that the proposed sliding ring design should be used for future in vitro spine biomechanics studies involving a cable-driven pure moment apparatus. Copyright 2010 Elsevier Ltd. All rights reserved.
Corrections of clinical chemistry test results in a laboratory information system.
Wang, Sihe; Ho, Virginia
2004-08-01
The recently released reports by the Institute of Medicine, To Err Is Human and Patient Safety, have received national attention because of their focus on the problem of medical errors. Although a small number of studies have reported on errors in general clinical laboratories, there are, to our knowledge, no reported studies that focus on errors in pediatric clinical laboratory testing. To characterize the errors that necessitated corrections to pediatric clinical chemistry results in the laboratory information system, Misys. To provide initial data on the errors detected in pediatric clinical chemistry laboratories in order to improve patient safety in pediatric health care. All clinical chemistry staff members were informed of the study and were requested to report in writing when a correction was made in the laboratory information system, Misys. Errors were detected either by the clinicians (the results did not fit the patients' clinical conditions) or by the laboratory technologists (the results were double-checked, and the worksheets were carefully examined twice a day). No incident that was discovered before or during the final validation was included. On each Monday of the study, we generated a report from Misys that listed all of the corrections made during the previous week. We then categorized the corrections according to the types and stages of the incidents that led to the corrections. A total of 187 incidents were detected during the 10-month study, representing a 0.26% error detection rate per requisition. The distribution of the detected incidents included 31 (17%) preanalytic incidents, 46 (25%) analytic incidents, and 110 (59%) postanalytic incidents. The errors related to noninterfaced tests accounted for 50% of the total incidents and for 37% of the affected tests and orderable panels, while the noninterfaced tests and panels accounted for 17% of the total test volume in our laboratory.
This pilot study provided the rate and categories of errors detected in a pediatric clinical chemistry laboratory based on the corrections of results in the laboratory information system. A direct interface of the instruments to the laboratory information system showed that it had favorable effects on reducing laboratory errors.
Experimental assessment of a 3-D plenoptic endoscopic imaging system.
Le, Hanh N D; Decker, Ryan; Krieger, Axel; Kang, Jin U
2017-01-01
An endoscopic imaging system using a plenoptic technique to reconstruct 3-D information is demonstrated and analyzed in this Letter. The proposed setup integrates a clinical surgical endoscope with a plenoptic camera to achieve a depth accuracy error of about 1 mm and a precision error of about 2 mm, within a 25 mm × 25 mm field of view, operating at 11 frames per second.
Bedi, Harleen; Goltz, Herbert C; Wong, Agnes M F; Chandrakumar, Manokaraananthan; Niechwiej-Szwedo, Ewa
2013-01-01
Errors in eye movements can be corrected during the ongoing saccade through in-flight modifications (i.e., online control), or by programming a secondary eye movement (i.e., offline control). In a reflexive saccade task, the oculomotor system can use extraretinal information (i.e., efference copy) online to correct errors in the primary saccade, and offline retinal information to generate a secondary corrective saccade. The purpose of this study was to examine the error correction mechanisms in the antisaccade task. The roles of extraretinal and retinal feedback in maintaining eye movement accuracy were investigated by presenting visual feedback at the spatial goal of the antisaccade. We found that online control for antisaccade is not affected by the presence of visual feedback; that is whether visual feedback is present or not, the duration of the deceleration interval was extended and significantly correlated with reduced antisaccade endpoint error. We postulate that the extended duration of deceleration is a feature of online control during volitional saccades to improve their endpoint accuracy. We found that secondary saccades were generated more frequently in the antisaccade task compared to the reflexive saccade task. Furthermore, we found evidence for a greater contribution from extraretinal sources of feedback in programming the secondary "corrective" saccades in the antisaccade task. Nonetheless, secondary saccades were more corrective for the remaining antisaccade amplitude error in the presence of visual feedback of the target. Taken together, our results reveal a distinctive online error control strategy through an extension of the deceleration interval in the antisaccade task. Target feedback does not improve online control, rather it improves the accuracy of secondary saccades in the antisaccade task.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erickson, Jason P.; Carlson, Deborah K.; Ortiz, Anne
Accurate location of seismic events is crucial for nuclear explosion monitoring. There are several sources of error in seismic location that must be taken into account to obtain high confidence results. Most location techniques account for uncertainties in the phase arrival times (measurement error) and the bias of the velocity model (model error), but they do not account for the uncertainty of the velocity model bias. By determining and incorporating this uncertainty in the location algorithm we seek to improve the accuracy of the calculated locations and uncertainty ellipses. In order to correct for deficiencies in the velocity model, it is necessary to apply station specific corrections to the predicted arrival times. Both master event and multiple event location techniques assume that the station corrections are known perfectly, when in reality there is an uncertainty associated with these corrections. For multiple event location algorithms that calculate station corrections as part of the inversion, it is possible to determine the variance of the corrections. The variance can then be used to weight the arrivals associated with each station, thereby giving more influence to stations with consistent corrections. We have modified an existing multiple event location program (based on PMEL, Pavlis and Booker, 1983). We are exploring weighting arrivals with the inverse of the station correction standard deviation as well as using the conditional probability of the calculated station corrections. This is in addition to the weighting already given to the measurement and modeling error terms. We re-locate a group of mining explosions that occurred at Black Thunder, Wyoming, and compare the results to those generated without accounting for station correction uncertainty.
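Weighting each arrival by the inverse variance of its station correction reduces, in the simplest case, to inverse-variance weighted least squares. A hedged one-parameter sketch with made-up numbers, not the modified PMEL implementation:

```python
# Weight each station's arrival residual by the inverse variance of its
# station correction, so stations with consistent corrections pull harder
# on the estimate. Hypothetical one-parameter problem: estimate an
# origin-time shift t from residuals r_i = t + noise.
import numpy as np

resid = np.array([0.9, 1.1, 3.0])      # station 3 is an outlier...
sigma = np.array([0.1, 0.1, 1.0])      # ...and its correction is inconsistent
w = 1.0 / sigma**2                     # inverse-variance weights
t_hat = np.sum(w * resid) / np.sum(w)  # weighted least-squares estimate
print(round(float(t_hat), 3))          # → 1.01  (outlier barely matters)
```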
Radiative corrections to elastic proton-electron scattering measured in coincidence
NASA Astrophysics Data System (ADS)
Gakh, G. I.; Konchatnij, M. I.; Merenkov, N. P.; Tomasi-Gustafsson, E.
2017-05-01
The differential cross section for elastic scattering of protons on electrons at rest is calculated, taking into account the QED radiative corrections to the leptonic part of interaction. These model-independent radiative corrections arise due to emission of the virtual and real soft and hard photons as well as to vacuum polarization. We analyze an experimental setup when both the final particles are recorded in coincidence and their energies are determined within some uncertainties. The kinematics, the cross section, and the radiative corrections are calculated and numerical results are presented.
Creating illusions of knowledge: learning errors that contradict prior knowledge.
Fazio, Lisa K; Barber, Sarah J; Rajaram, Suparna; Ornstein, Peter A; Marsh, Elizabeth J
2013-02-01
Most people know that the Pacific is the largest ocean on Earth and that Edison invented the light bulb. Our question is whether this knowledge is stable, or if people will incorporate errors into their knowledge bases, even if they have the correct knowledge stored in memory. To test this, we asked participants general-knowledge questions 2 weeks before they read stories that contained errors (e.g., "Franklin invented the light bulb"). On a later general-knowledge test, participants reproduced story errors despite previously answering the questions correctly. This misinformation effect was found even for questions that were answered correctly on the initial test with the highest level of confidence. Furthermore, prior knowledge offered no protection against errors entering the knowledge base; the misinformation effect was equivalent for previously known and unknown facts. Errors can enter the knowledge base even when learners have the knowledge necessary to catch the errors. 2013 APA, all rights reserved
Physical fault tolerance of nanoelectronics.
Szkopek, Thomas; Roychowdhury, Vwani P; Antoniadis, Dimitri A; Damoulakis, John N
2011-04-29
The error rate in complementary transistor circuits is suppressed exponentially in electron number, arising from an intrinsic physical implementation of fault-tolerant error correction. Contrariwise, explicit assembly of gates into the most efficient known fault-tolerant architecture is characterized by a subexponential suppression of error rate with electron number, and incurs significant overhead in wiring and complexity. We conclude that it is more efficient to prevent logical errors with physical fault tolerance than to correct logical errors with fault-tolerant architecture.
Loss Tolerance in One-Way Quantum Computation via Counterfactual Error Correction
NASA Astrophysics Data System (ADS)
Varnava, Michael; Browne, Daniel E.; Rudolph, Terry
2006-09-01
We introduce a scheme for fault-tolerantly dealing with losses (or other “leakage” errors) in cluster state computation that can tolerate up to 50% qubit loss. This is achieved passively using an adaptive strategy of measurement—no coherent measurements or coherent correction is required. Since the scheme relies on inferring information about what would have been the outcome of a measurement had one been able to carry it out, we call this counterfactual error correction.
Passig, Johannes; Zherebtsov, Sergey; Irsig, Robert; Arbeiter, Mathias; Peltz, Christian; Göde, Sebastian; Skruszewicz, Slawomir; Meiwes-Broer, Karl-Heinz; Tiggesbäumker, Josef; Kling, Matthias F; Fennel, Thomas
2018-02-07
The original PDF version of this Article contained an error in Equation 1. The original HTML version of this Article contained errors in Equation 2 and Equation 4. These errors have now been corrected in both the PDF and the HTML versions of the Article.
Cryosat-2 and Sentinel-3 tropospheric corrections: their evaluation over rivers and lakes
NASA Astrophysics Data System (ADS)
Fernandes, Joana; Lázaro, Clara; Vieira, Telmo; Restano, Marco; Ambrózio, Américo; Benveniste, Jérôme
2017-04-01
In the scope of the Sentinel-3 Hydrologic Altimetry PrototypE (SHAPE) project, errors that presently affect the tropospheric corrections i.e. dry and wet tropospheric corrections (DTC and WTC, respectively) given in satellite altimetry products are evaluated over inland water regions. These errors arise because both corrections, function of altitude, are usually computed with respect to an incorrect altitude reference. Several regions of interest (ROI) where CryoSat-2 (CS-2) is operating in SAR/SAR-In modes were selected for this evaluation. In this study, results for Danube River, Amazon Basin, Vanern and Titicaca lakes, and Caspian Sea, using Level 1B CS-2 data, are shown. DTC and WTC have been compared to those derived from ECMWF Operational model and computed at different altitude references: i) ECMWF orography; ii) ACE2 (Altimeter Corrected Elevations 2) and GWD-LR (Global Width Database for Large Rivers) global digital elevation models; iii) mean lake level, derived from Envisat mission data, or river profile derived in the scope of SHAPE project by AlongTrack (ATK) using Jason-2 data. Whenever GNSS data are available in the ROI, a GNSS-derived WTC was also generated and used for comparison. Overall, results show that the tropospheric corrections present in CS-2 L1B products are provided at the level of ECMWF orography, which can depart from the mean lake level or river profile by hundreds of metres. Therefore, the use of the model orography originates errors in the corrections. To mitigate these errors, both DTC and WTC should be provided at the mean river profile/lake level. For example, for the Caspian Sea with a mean level of -27 m, the tropospheric corrections provided in CS-2 products were computed at mean sea level (zero level), leading therefore to a systematic error in the corrections. In case a mean lake level is not available, it can be easily determined from satellite altimetry. 
In the absence of a mean river profile, both mentioned DEM, considered better altimetric surfaces when compared to the ECMWF orography, can be used. When using the model orography, systematic errors up to 3-5 cm are found in the DTC for most of the selected regions, which can induce significant errors in e.g. the determination of mean river profiles or lake level time series. For the Danube River, larger DTC errors up to 10 cm, due to terrain characteristics, can appear. For the WTC, with higher spatial variability, model errors of magnitude 1-3 cm are expected over inland waters. In the Danube region, the comparison of GNSS- and ECMWF-derived WTC has shown that the error in the WTC computed at orography level can be up to 3 cm. WTC errors with this magnitude have been found for all ROI. Although globally small, these errors are systematic and must be corrected prior to the generation of CS-2 Level 2 products. Once computed at the mean profile and mean lake level, the results show that tropospheric corrections have accuracy better than 1 cm. This analysis is currently being extended to S3 data and the first results are shown.
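The height dependence that drives these DTC errors can be sketched with the Saastamoinen zenith formula and a standard-atmosphere pressure profile. The constants are illustrative and the latitude term is ignored, so the numbers are indicative only:

```python
# The dry tropospheric correction (DTC) scales with surface pressure, so
# evaluating it at the model orography instead of the true water level
# introduces a height-dependent systematic error.
def pressure_hpa(h_m, p0=1013.25):
    # standard-atmosphere pressure profile (illustrative constants)
    return p0 * (1.0 - 2.26e-5 * h_m) ** 5.225

def dtc_m(h_m):
    # Saastamoinen zenith dry delay, metres (negative: path delay correction)
    return -2.277e-3 * pressure_hpa(h_m)

# Model orography 300 m above the true water level:
orography_err_m = dtc_m(300.0) - dtc_m(0.0)
print(round(orography_err_m * 100, 1))   # → 8.1  (cm of systematic DTC error)
```

A few hundred metres of height mismatch thus produces the multi-centimetre systematic DTC errors reported above.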
Response to Request for Correction 12002
Response to Artisan EHS Consulting's Request for Correction 12002 regarding notification requirements for hazardous substances, stating that the error in question was typographical and has been fixed.
Ichikawa, Tamaki; Kitanosono, Takashi; Koizumi, Jun; Ogushi, Yoichi; Tanaka, Osamu; Endo, Jun; Hashimoto, Takeshi; Kawada, Shuichi; Saito, Midori; Kobayashi, Makiko; Imai, Yutaka
2007-12-20
We evaluated the usefulness of radiological reporting that combines continuous speech recognition (CSR) and error correction by transcriptionists. Four transcriptionists (two with more than 10 years' and two with less than 3 months' transcription experience) listened to the same 100 dictation files and created radiological reports using conventional transcription and a method that combined CSR with manual error correction by the transcriptionists. We compared the two groups and the two methods for accuracy and report creation time, and evaluated how accuracy rate and report creation time varied between individual transcriptionists. We used a CSR system that did not require training to recognize the user's voice. We observed no significant difference in accuracy between the two groups or between the two methods tested, though transcriptionists with greater experience transcribed faster than those with less experience when using conventional transcription. Using the combined method, error correction speed did not differ significantly between the two groups of transcriptionists with different levels of experience. Combining CSR and manual error correction by transcriptionists enabled convenient and accurate radiological reporting.
Global embedding of fibre inflation models
NASA Astrophysics Data System (ADS)
Cicoli, Michele; Muia, Francesco; Shukla, Pramod
2016-11-01
We present concrete embeddings of fibre inflation models in globally consistent type IIB Calabi-Yau orientifolds with closed string moduli stabilisation. After performing a systematic search through the existing list of toric Calabi-Yau manifolds, we find several examples that reproduce the minimal setup to embed fibre inflation models. This involves Calabi-Yau manifolds with h 1,1 = 3 which are K3 fibrations over a ℙ1 base with an additional shrinkable rigid divisor. We then provide different consistent choices of the underlying brane set-up which generate a non-perturbative superpotential suitable for moduli stabilisation and string loop corrections with the correct form to drive inflation. For each Calabi-Yau orientifold setting, we also compute the effect of higher derivative contributions and study their influence on the inflationary dynamics.
NASA Astrophysics Data System (ADS)
Sánchez-Doblado, Francisco; Capote, Roberto; Leal, Antonio; Roselló, Joan V.; Lagares, Juan I.; Arráns, Rafael; Hartmann, Günther H.
2005-03-01
Intensity modulated radiotherapy (IMRT) has become a treatment of choice in many oncological institutions. Small fields or beamlets with sizes of 1 to 5 cm² are now routinely used in IMRT delivery. Therefore small ionization chambers (IC) with sensitive volumes ≤0.1 cm³ are generally used for dose verification of an IMRT treatment. The measurement conditions during verification may be quite different from reference conditions normally encountered in clinical beam calibration, so dosimetry of these narrow photon beams pertains to the so-called non-reference conditions for beam calibration. This work aims at estimating the error made when measuring the organ at risk's (OAR) absolute dose by a micro ion chamber (μIC) in a typical IMRT treatment. The dose error comes from the assumption that the dosimetric parameters determining the absolute dose are the same as for the reference conditions. We have selected two clinical cases, treated by IMRT, for our dose error evaluations. Detailed geometrical simulation of the μIC and the dose verification set-up was performed. The Monte Carlo (MC) simulation allows us to calculate the dose measured by the chamber as a dose averaged over the air cavity within the ion-chamber active volume (Dair). The absorbed dose to water (Dwater) is derived as the dose deposited inside the same volume, in the same geometrical position, filled and surrounded by water in the absence of the ion chamber. Therefore, the Dwater/Dair dose ratio is the MC estimator of the total correction factor needed to convert the absorbed dose in air into the absorbed dose in water. The dose ratio was calculated for the μIC located at the isocentre within the OARs for both clinical cases. The clinical impact of the calculated dose error was found to be negligible for the studied IMRT treatments.
Ciliates learn to diagnose and correct classical error syndromes in mating strategies
Clark, Kevin B.
2013-01-01
Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by “rivals” and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell–cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via “power” or “refrigeration” cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and non-modal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in social contexts. 
PMID:23966987
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahn, Charlene; Wiseman, Howard; Jacobs, Kurt
2004-08-01
It was shown by Ahn, Wiseman, and Milburn [Phys. Rev. A 67, 052310 (2003)] that feedback control could be used as a quantum error correction process for errors induced by weak continuous measurement, given one perfectly measured error channel per qubit. Here we point out that this method can be easily extended to an arbitrary number of error channels per qubit. We show that the feedback protocols generated by our method encode n-2 logical qubits in n physical qubits, thus requiring just one more physical qubit than in the previous case.
Length matters: Improved high field EEG-fMRI recordings using shorter EEG cables.
Assecondi, Sara; Lavallee, Christina; Ferrari, Paolo; Jovicich, Jorge
2016-08-30
The use of concurrent EEG-fMRI recordings has increased in recent years, allowing new avenues of medical and cognitive neuroscience research; however, currently used setups present problems with data quality and reproducibility. We propose a compact experimental setup for concurrent EEG-fMRI at 4T and compare it to a more standard reference setup. The compact setup uses short EEG cables connecting to the amplifiers, which are placed right at the back of the head RF coil on a form-fitting extension force-locked to the patient MR bed. We compare the two setups in terms of sensitivity to MR-room environmental noise, interferences between measuring devices (EEG or fMRI), and sensitivity to functional responses in a visual stimulation paradigm. The compact setup reduces the system sensitivity to both external noise and MR-induced artefacts by at least 60%, with negligible EEG noise induced from the mechanical vibrations of the cryogenic cooling compression pump. The compact setup improved EEG data quality and the overall performance of MR-artifact correction techniques. Both setups were similar in terms of the fMRI data, with higher reproducibility for cable placement within the scanner in the compact setup. This improved compact setup may be relevant to MR laboratories interested in reducing the sensitivity of their EEG-fMRI experimental setup to external noise sources, setting up an EEG-fMRI workplace for the first time, or for creating a more reproducible configuration of equipment and cables. Implications for safety and ergonomics are discussed.
NASA Technical Reports Server (NTRS)
Weiss, D. M.
1981-01-01
Error data obtained from two different software development environments are compared. To obtain complete, accurate, and meaningful data, a goal-directed data collection methodology was used. Changes made to software were monitored concurrently with its development. Similarities common to both environments include: (1) the principal error source was the design and implementation of single routines; (2) few errors were the result of changes, required more than one attempt to correct, and resulted in other errors; (3) relatively few errors took more than a day to correct.
Quantum error correction assisted by two-way noisy communication
Wang, Zhuo; Yu, Sixia; Fan, Heng; Oh, C. H.
2014-01-01
Pre-shared non-local entanglement dramatically simplifies and improves the performance of quantum error correction via entanglement-assisted quantum error-correcting codes (EAQECCs). However, even considering the noise in quantum communication only, the non-local sharing of a perfectly entangled pair is technically impossible unless additional resources are consumed, such as entanglement distillation, which actually compromises the efficiency of the codes. Here we propose an error-correcting protocol assisted by two-way noisy communication that is more easily realisable: all quantum communication is subjected to general noise and all entanglement is created locally without additional resources consumed. In our protocol the pre-shared noisy entangled pairs are purified simultaneously by the decoding process. For demonstration, we first present an easier implementation of the well-known EAQECC [[4, 1, 3; 1
Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A
2017-01-01
Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approaches of Newey (1987) and Terza (2016), and bootstrapping. In our simulations the Newey, Terza, bootstrap, and corrected two-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. PMID:29106476
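The two-stage residual inclusion procedure and a bootstrap standard error can be sketched in a few lines. The data-generating values below (instrument strength, causal effect 0.7, sample size) are made up for illustration and do not reproduce the authors' simulation design:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated Mendelian-randomization-style data (hypothetical example):
# instrument z, exposure x confounded with outcome y through u.
n = 5000
z = rng.binomial(2, 0.3, n).astype(float)   # genotype instrument (0/1/2)
u = rng.normal(size=n)                       # unobserved confounder
x = 0.5 * z + u + rng.normal(size=n)         # exposure
y = 0.7 * x + u + rng.normal(size=n)         # outcome; true causal effect = 0.7

def tsri(z, x, y):
    """Two-stage residual inclusion for a linear outcome model."""
    # Stage 1: regress the exposure on the instrument, keep residuals.
    Z = np.column_stack([np.ones_like(z), z])
    b1 = np.linalg.lstsq(Z, x, rcond=None)[0]
    r = x - Z @ b1
    # Stage 2: regress the outcome on exposure plus stage-1 residuals.
    X = np.column_stack([np.ones_like(x), x, r])
    b2 = np.linalg.lstsq(X, y, rcond=None)[0]
    return b2[1]  # coefficient on the exposure

est = tsri(z, x, y)

# Bootstrap standard error: resample individuals and re-run BOTH stages,
# so the uncertainty of the first stage is propagated.
B = 200
boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, n)
    boot[b] = tsri(z[idx], x[idx], y[idx])
se = boot.std(ddof=1)

print(f"TSRI estimate {est:.3f}, bootstrap SE {se:.3f}")
```

Re-running both stages inside the bootstrap loop is the point: a naive second-stage standard error treats the residuals as fixed regressors, which is exactly the unadjusted error the abstract warns against.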
A gamma-ray testing technique for spacecraft. [considering cosmic radiation effects
NASA Technical Reports Server (NTRS)
Gribov, B. S.; Repin, N. N.; Sakovich, V. A.; Sakharov, V. M.
1977-01-01
The simulated cosmic radiation effect on a spacecraft structure is evaluated by gamma ray testing in relation to structural thickness. A drawing of the test set-up is provided and measurement errors are discussed.
Lock-in amplifier error prediction and correction in frequency sweep measurements.
Sonnaillon, Maximiliano Osvaldo; Bonetto, Fabian Jose
2007-01-01
This article proposes an analytical algorithm for predicting errors in lock-in amplifiers (LIAs) working with time-varying reference frequency. Furthermore, a simple method for correcting such errors is presented. The reference frequency can be swept in order to measure the frequency response of a system within a given spectrum. The continuous variation of the reference frequency produces a measurement error that depends on three factors: the sweep speed, the LIA low-pass filters, and the frequency response of the measured system. The proposed error prediction algorithm is based on the final value theorem of the Laplace transform. The correction method uses a double-sweep measurement. A mathematical analysis is presented and validated with computational simulations and experimental measurements.
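The measurement principle the article builds on, dual-phase lock-in demodulation at a reference frequency, can be sketched as follows. The sampling rate, signal parameters, and the crude mean-as-low-pass filter are illustrative assumptions, not the authors' setup (which sweeps the reference frequency and corrects the resulting error):

```python
import numpy as np

fs = 100_000.0                 # sampling rate, Hz (hypothetical)
t = np.arange(0, 0.5, 1 / fs)  # 0.5 s record = 500 full reference cycles
f_ref = 1_000.0                # fixed reference frequency, Hz
A, phi = 0.8, 0.3              # true amplitude and phase of the input

noise = 0.1 * np.random.default_rng(1).normal(size=t.size)
signal = A * np.sin(2 * np.pi * f_ref * t + phi) + noise

# Demodulate with in-phase and quadrature references.
i = signal * np.sin(2 * np.pi * f_ref * t)
q = signal * np.cos(2 * np.pi * f_ref * t)

# Low-pass filter: the mean over the record stands in for the LIA's
# output filter (a one-pole RC filter in a real instrument).
X, Y = 2 * i.mean(), 2 * q.mean()
R = np.hypot(X, Y)           # recovered amplitude
theta = np.arctan2(Y, X)     # recovered phase

print(R, theta)
```

With a swept reference, the product terms no longer average to a constant over the filter's settling time, which is the error source the article's final-value-theorem analysis predicts.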
Zhang, Yingying; Lai, Jiancheng; Yin, Cheng; Li, Zhenhua
2009-03-01
The dependence of the surface plasmon resonance (SPR) phase difference curve on the complex refractive index of a sample in the Kretschmann configuration is discussed comprehensively, based on which a new method is proposed to measure the complex refractive index of turbid liquid. A corresponding experimental setup was constructed to measure the SPR phase difference curve, and the complex refractive index of turbid liquid was determined. Using the setup, the complex refractive indices of Intralipid solutions with concentrations of 5%, 10%, 15%, and 20% are obtained to be 1.3377+0.0005i, 1.3427+0.0028i, 1.3476+0.0034i, and 1.3496+0.0038i, respectively. Furthermore, the error analysis indicates that the root-mean-square errors of both the real and the imaginary parts of the measured complex refractive index are less than 5×10⁻⁵.
ERIC Educational Resources Information Center
Santos, Maria; Lopez-Serrano, Sonia; Manchon, Rosa M.
2010-01-01
Framed in a cognitively-oriented strand of research on corrective feedback (CF) in SLA, the controlled three-stage (composition/comparison-noticing/revision) study reported in this paper investigated the effects of two forms of direct CF (error correction and reformulation) on noticing and uptake, as evidenced in the written output produced by a…
Evaluating diffraction-based overlay
NASA Astrophysics Data System (ADS)
Li, Jie; Tan, Asher; Jung, JinWoo; Goelzer, Gary; Smith, Nigel; Hu, Jiangtao; Ham, Boo-Hyun; Kwak, Min-Cheol; Kim, Cheol-Hong; Nam, Suk-Woo
2012-03-01
We evaluate diffraction-based overlay (DBO) metrology using two test wafers. The test wafers have different film stacks designed to test the quality of DBO data under a range of film conditions. We present DBO results using the traditional empirical approach (eDBO). eDBO relies on a linear response of the reflectance with respect to the overlay displacement within a small range. It requires specially designed targets that consist of multiple pads with programmed shifts. It offers the convenience of quick recipe setup since there is no need to establish a model. We measure five DBO targets designed with different pitches and programmed shifts. The correlations among the five eDBO targets and the correlation of eDBO with image-based overlay are excellent. The targets of 800 nm and 600 nm pitch have better dynamic precision than targets of 400 nm pitch, which agrees with simulated signal-to-noise results. A 3σ of less than 0.1 nm is achieved for both wafers using the best-configured targets. We further investigate the linearity assumption of the eDBO algorithm. Simulation results indicate that as the pitch of DBO targets gets smaller, the nonlinearity error, i.e., the error in the overlay measurement results caused by deviation from the ideal linear response, becomes larger. We propose a nonlinearity correction (NLC) by including higher-order terms in the optical response. The new algorithm with NLC improves measurement consistency for DBO targets of the same pitch but different programmed shifts, due to improved accuracy. The results from targets with different pitches, however, are improved only marginally, indicating the presence of other error sources.
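A minimal sketch of the linear eDBO estimate the abstract describes: with two pads carrying programmed shifts +d and -d and a differential signal assumed linear in the total shift, S± = K(e ± d), the overlay e follows from the pad signals alone, with the unknown sensitivity K cancelling. The formula and numbers are a textbook-style illustration, not the authors' implementation:

```python
def edbo_overlay(s_plus, s_minus, d):
    """Estimate overlay e from the differential signals of two pads with
    programmed shifts +d and -d, assuming S = K*(e + shift) in the
    linear regime; K cancels in the ratio."""
    return d * (s_plus + s_minus) / (s_plus - s_minus)

# Synthetic check: sensitivity K = 2.5 (arbitrary), true overlay
# e = 1.2 nm, programmed shift d = 20 nm.
K, e, d = 2.5, 1.2, 20.0
s_plus, s_minus = K * (e + d), K * (e - d)
print(edbo_overlay(s_plus, s_minus, d))  # -> 1.2 (recovers e exactly when linear)
```

When the true response contains higher-order terms, the recovered value deviates from e, which is the nonlinearity error the proposed NLC addresses.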
Hydraulic correction method (HCM) to enhance the efficiency of SRTM DEM in flood modeling
NASA Astrophysics Data System (ADS)
Chen, Huili; Liang, Qiuhua; Liu, Yong; Xie, Shuguang
2018-04-01
The Digital Elevation Model (DEM) is one of the most important factors controlling the simulation accuracy of hydraulic models. However, currently available global topographic data suffer from limitations for application in 2-D hydraulic modeling, mainly due to the existence of vegetation bias, random errors and insufficient spatial resolution. A hydraulic correction method (HCM) for the SRTM DEM is proposed in this study to improve modeling accuracy. First, we employ the global vegetation-corrected DEM (i.e. Bare-Earth DEM), developed from the SRTM DEM to include both vegetation height and SRTM vegetation signal. Then, a newly released DEM, removing both vegetation bias and random errors (i.e. Multi-Error Removed DEM), is employed to overcome the limitation of height errors. Finally, an approach to correct the Multi-Error Removed DEM is presented to account for the insufficiency of spatial resolution, ensuring flow connectivity of the river networks. The approach involves: (a) extracting river networks from the Multi-Error Removed DEM using an automated algorithm in ArcGIS; (b) correcting the location and layout of extracted streams with the aid of the Google Earth platform and remote sensing imagery; and (c) removing the positive biases of the raised segments in the river networks based on bed slope to generate the hydraulically corrected DEM. The proposed HCM utilizes easily available data and tools to improve the flow connectivity of river networks without manual adjustment. To demonstrate the advantages of HCM, an extreme flood event in the Huifa River Basin (China) is simulated on the original DEM, Bare-Earth DEM, Multi-Error Removed DEM, and hydraulically corrected DEM using an integrated hydrologic-hydraulic model. A comparative analysis is subsequently performed to assess the simulation accuracy and performance of the four different DEMs, and favorable results have been obtained on the corrected DEM.
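Step (c), removing the positive biases of raised segments so the extracted river profile stays hydraulically connected, can be approximated with a running minimum along the downstream direction. This is a simplified stand-in for the paper's slope-based correction, not its actual algorithm:

```python
def remove_positive_biases(bed):
    """Lower raised segments so the bed elevation never rises in the
    downstream direction (running minimum). `bed` is a list of
    elevations ordered upstream -> downstream."""
    out, low = [], float("inf")
    for z in bed:
        low = min(low, z)   # lowest elevation seen so far
        out.append(low)     # clamp any raised segment down to it
    return out

# Raised segments at indices 2 and 4 block the simulated flow path:
profile = [105.0, 104.2, 104.9, 103.8, 104.1, 103.0]
print(remove_positive_biases(profile))
# -> [105.0, 104.2, 104.2, 103.8, 103.8, 103.0]
```

The clamped profile is monotonically non-increasing downstream, so a hydraulic model no longer sees spurious dams created by residual DEM noise.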
How do Stability Corrections Perform in the Stable Boundary Layer Over Snow?
NASA Astrophysics Data System (ADS)
Schlögl, Sebastian; Lehning, Michael; Nishimura, Kouichi; Huwald, Hendrik; Cullen, Nicolas J.; Mott, Rebecca
2017-10-01
We assess sensible heat-flux parametrizations in stable conditions over snow surfaces by testing and developing stability correction functions for two alpine and two polar test sites. Five turbulence datasets are analyzed with respect to (a) the validity of the Monin-Obukhov similarity theory, (b) the model performance of well-established stability corrections, and (c) the development of new univariate and multivariate stability corrections. Using a wide range of stability corrections reveals an overestimation of the turbulent sensible heat flux for high wind speeds and a generally poor performance of all investigated functions for large temperature differences between snow and the atmosphere above (>10 K). Applying the Monin-Obukhov bulk formulation introduces a mean absolute error in the sensible heat flux of 6 W m⁻² (compared with heat fluxes calculated directly from eddy covariance). The stability corrections produce an additional error between 1 and 5 W m⁻², with the smallest error among published stability corrections found for the Holtslag scheme. We confirm from previous studies that stability corrections need improvements for large temperature differences and wind speeds, where sensible heat fluxes are distinctly overestimated. Under these atmospheric conditions our newly developed stability corrections slightly improve the model performance. However, the differences between stability corrections are typically small when compared to the residual error, which stems from the Monin-Obukhov bulk formulation.
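A hedged sketch of a bulk sensible-heat-flux calculation with a simple stability damping for stable stratification. The Louis-type (1 - 5Ri)² factor and all coefficients below are generic textbook choices, not the schemes tested in the paper:

```python
import math

def sensible_heat_flux(u, t_air, t_surf, z=2.0, z0=1e-3,
                       rho=1.2, cp=1005.0, k=0.4):
    """Bulk sensible heat flux (W m^-2, positive upward) with a simple
    Louis-type stability damping for stable stratification over snow.
    Illustrative only; coefficients are generic textbook values."""
    g = 9.81
    dtheta = t_surf - t_air                           # K; negative when air is warmer
    ri = g * z * (t_air - t_surf) / (t_air * u ** 2)  # bulk Richardson number
    cn = k ** 2 / math.log(z / z0) ** 2               # neutral transfer coefficient
    if ri > 0:                                        # stable: damp turbulent exchange
        f = max(0.0, 1.0 - 5.0 * ri) ** 2
    else:                                             # neutral/unstable: no damping here
        f = 1.0
    return rho * cp * cn * f * u * dtheta

# Warmer air over a snow surface (stable): downward (negative) flux,
# reduced further by the stability correction.
print(sensible_heat_flux(u=3.0, t_air=278.15, t_surf=273.15))
```

The abstract's finding that corrections overestimate the flux at high wind speed and large air-snow temperature contrast corresponds to this damping factor f not decaying strongly enough under those conditions.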
Observations on Polar Coding with CRC-Aided List Decoding
2016-09-01
Polar codes are a new type of forward error correction (FEC) code, introduced by Arikan in [1], in which he...error correction (FEC) currently used and planned for use in Navy wireless communication systems. The project's results from FY14 and FY15 are...good error-correction performance. We used the Tal/Vardy method of [5]. The polar encoder uses a row vector u of length N. Let uA be the subvector
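The encoder core mentioned in the snippet (a row vector u of length N passed through the polar transform) can be sketched without the frozen-bit selection or CRC. This is the standard butterfly form of x = u·F^{⊗n} over GF(2), with F = [[1,0],[1,1]], not the project's implementation:

```python
def polar_transform(u):
    """Apply the basic polar transform x = u * F^{(x)n} over GF(2),
    where F = [[1, 0], [1, 1]] and len(u) is a power of two.
    Minimal sketch of the encoder core: no frozen-bit selection,
    no CRC, no bit-reversal permutation."""
    x = list(u)
    n = len(x)
    step = 1
    while step < n:
        for i in range(0, n, 2 * step):
            for j in range(i, i + step):
                x[j] ^= x[j + step]  # butterfly: upper branch takes the XOR
        step *= 2
    return x

print(polar_transform([1, 0, 1, 1]))  # -> [1, 1, 0, 1]
```

Because F is its own inverse mod 2, applying the transform twice returns the original vector, which makes the sketch easy to sanity-check.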
DOE Office of Scientific and Technical Information (OSTI.GOV)
Omkar, S.; Srikanth, R., E-mail: srik@poornaprajna.org; Banerjee, Subhashish
A protocol based on quantum error correction based characterization of quantum dynamics (QECCD) is developed for quantum process tomography on a two-qubit system interacting dissipatively with a vacuum bath. The method uses a 5-qubit quantum error correcting code that corrects arbitrary errors on the first two qubits, and also saturates the quantum Hamming bound. The dissipative interaction with a vacuum bath allows for both correlated and independent noise on the two-qubit system. We study the dependence of the degree of the correlation of the noise on evolution time and inter-qubit separation.
Investigation of Primary Mirror Segment's Residual Errors for the Thirty Meter Telescope
NASA Technical Reports Server (NTRS)
Seo, Byoung-Joon; Nissly, Carl; Angeli, George; MacMynowski, Doug; Sigrist, Norbert; Troy, Mitchell; Williams, Eric
2009-01-01
The primary mirror segment aberrations after shape corrections with warping harness have been identified as the single largest error term in the Thirty Meter Telescope (TMT) image quality error budget. In order to better understand the likely errors and how they will impact the telescope performance we have performed detailed simulations. We first generated unwarped primary mirror segment surface shapes that met TMT specifications. Then we used the predicted warping harness influence functions and a Shack-Hartmann wavefront sensor model to determine estimates for the 492 corrected segment surfaces that make up the TMT primary mirror. Surface and control parameters, as well as the number of subapertures were varied to explore the parameter space. The corrected segment shapes were then passed to an optical TMT model built using the Jet Propulsion Laboratory (JPL) developed Modeling and Analysis for Controlled Optical Systems (MACOS) ray-trace simulator. The generated exit pupil wavefront error maps provided RMS wavefront error and image-plane characteristics like the Normalized Point Source Sensitivity (PSSN). The results have been used to optimize the segment shape correction and wavefront sensor designs as well as provide input to the TMT systems engineering error budgets.
ERIC Educational Resources Information Center
Clayman, Deborah P. Goldweber
The ability of 100 second-grade boys and girls to self-correct oral reading errors was studied in relationship to visual-form perception, phonic skills, response speed, and reading level. Each child was tested individually with the Bender-Error Test, the Gray Oral Paragraphs, and the Roswell-Chall Diagnostic Reading Test and placed into a group of…
Noise Estimation and Adaptive Encoding for Asymmetric Quantum Error Correcting Codes
NASA Astrophysics Data System (ADS)
Florjanczyk, Jan; Brun, Todd; Center for Quantum Information Science and Technology Team
We present a technique that improves the performance of asymmetric quantum error correcting codes in the presence of biased qubit noise channels. Our study is motivated by considering what useful information can be learned from the statistics of syndrome measurements in stabilizer quantum error correcting codes (QECC). We consider the case of a qubit dephasing channel where the dephasing axis is unknown and time-varying. We are able to estimate the dephasing angle from the statistics of the standard syndrome measurements used in stabilizer QECCs. We use this estimate to rotate the computational basis of the code in such a way that the most likely type of error is covered by the highest distance of the asymmetric code. In particular, we use the [[15, 1, 3]] shortened Reed-Muller code, which can correct one phase-flip error but up to three bit-flip errors. In our simulations, we tune the computational basis to match the estimated dephasing axis, which in turn leads to a decrease in the probability of a phase-flip error. With a sufficiently accurate estimate of the dephasing axis, our memory's effective error is dominated by the much lower probability of four bit-flips. ARO MURI Grant No. W911NF-11-1-0268.
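As a toy analogue of learning noise parameters from syndrome statistics, one can estimate an unknown flip probability from the parity-check rate of a classical 3-bit repetition code: P(syndrome bit = 1) = 2p(1 - p), which inverts in closed form. This is far simpler than the paper's [[15, 1, 3]] dephasing-axis estimation, but illustrates the same idea that syndromes carry channel information for free:

```python
import random, math

random.seed(7)
p_true = 0.08   # unknown per-bit flip probability to be estimated

# Collect syndrome statistics from a 3-bit repetition code:
# parity checks s1 = e0^e1 and s2 = e1^e2 on the error pattern e.
trials, checks, s_count = 20_000, 2, 0
for _ in range(trials):
    e = [1 if random.random() < p_true else 0 for _ in range(3)]
    s_count += (e[0] ^ e[1]) + (e[1] ^ e[2])

r = s_count / (trials * checks)       # observed rate of syndrome bit = 1
# P(syndrome bit = 1) = 2p(1-p); invert for the smaller root:
p_hat = (1 - math.sqrt(1 - 2 * r)) / 2
print(p_hat)
```

No extra measurements are needed: the estimate reuses the same syndrome bits the decoder consumes, which mirrors the paper's use of standard stabilizer measurements.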
NASA Astrophysics Data System (ADS)
Gao, Cheng-Yan; Wang, Guan-Yu; Zhang, Hao; Deng, Fu-Guo
2017-01-01
We present a self-error-correction spatial-polarization hyperentanglement distribution scheme for N-photon systems in a hyperentangled Greenberger-Horne-Zeilinger state over arbitrary collective-noise channels. In our scheme, the errors of spatial entanglement can be first averted by encoding the spatial-polarization hyperentanglement into the time-bin entanglement with identical polarization and defined spatial modes before it is transmitted over the fiber channels. After transmission over the noisy channels, the polarization errors introduced by the depolarizing noise can be corrected resorting to the time-bin entanglement. Finally, the parties in quantum communication can in principle share maximally hyperentangled states with a success probability of 100%.
Reversal of photon-scattering errors in atomic qubits.
Akerman, N; Kotler, S; Glickman, Y; Ozeri, R
2012-09-07
Spontaneous photon scattering by an atomic qubit is a notable example of environment-induced error and is a fundamental limit to the fidelity of quantum operations. In the scattering process, the qubit loses its distinctive and coherent character owing to its entanglement with the photon. Using a single trapped ion, we show that by utilizing the information carried by the photon, we are able to coherently reverse this process and correct for the scattering error. We further used quantum process tomography to characterize the photon-scattering error and its correction scheme and demonstrate a correction fidelity greater than 85% whenever a photon was measured.
Strain gage measurement errors in the transient heating of structural components
NASA Technical Reports Server (NTRS)
Richards, W. Lance
1993-01-01
Significant strain-gage errors may exist in measurements acquired in transient thermal environments if conventional correction methods are applied. Conventional correction theory was modified and a new experimental method was developed to correct indicated strain data for errors created in radiant heating environments ranging from 0.6 C/sec (1 F/sec) to over 56 C/sec (100 F/sec). In some cases the new and conventional methods differed by as much as 30 percent. Experimental and analytical results were compared to demonstrate the new technique. For heating conditions greater than 6 C/sec (10 F/sec), the indicated strain data corrected with the developed technique compared much better to analysis than the same data corrected with the conventional technique.
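The conventional (static) correction that the paper modifies subtracts a temperature-dependent apparent strain from the indicated reading. A sketch with made-up polynomial coefficients; real values come from the gage manufacturer's thermal-output calibration curve, and the paper's point is that this static form breaks down under rapid transient heating:

```python
def correct_strain(eps_indicated, temp_c, apparent_poly=(-50.0, 2.0, 0.05)):
    """Conventional correction: subtract the temperature-induced
    apparent strain, eps = eps_ind - (a0 + a1*T + a2*T^2), in
    microstrain. Coefficients here are hypothetical."""
    a0, a1, a2 = apparent_poly
    eps_apparent = a0 + a1 * temp_c + a2 * temp_c ** 2
    return eps_indicated - eps_apparent

# At 100 C the (hypothetical) apparent strain is -50 + 200 + 500 = 650
# microstrain, so an indicated 1650 corrects to 1000 microstrain.
print(correct_strain(1650.0, 100.0))  # -> 1000.0
```

Under transient heating the gage and substrate temperatures diverge, so a correction keyed to a single measured temperature misstates the apparent strain, which is the regime where the paper's modified technique differs from this one by up to 30 percent.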
Automatic Correction of Adverb Placement Errors for CALL
ERIC Educational Resources Information Center
Garnier, Marie
2012-01-01
According to recent studies, there is a persistence of adverb placement errors in the written productions of francophone learners and users of English at an intermediate to advanced level. In this paper, we present strategies for the automatic detection and correction of errors in the placement of manner adverbs, using linguistic-based natural…
Controlling qubit drift by recycling error correction syndromes
NASA Astrophysics Data System (ADS)
Blume-Kohout, Robin
2015-03-01
Physical qubits are susceptible to systematic drift, above and beyond the stochastic Markovian noise that motivates quantum error correction. This parameter drift must be compensated - if it is ignored, error rates will rise to intolerable levels - but compensation requires knowing the parameters' current value, which appears to require halting experimental work to recalibrate (e.g. via quantum tomography). Fortunately, this is untrue. I show how to perform on-the-fly recalibration on the physical qubits in an error correcting code, using only information from the error correction syndromes. The algorithm for detecting and compensating drift is very simple - yet, remarkably, when used to compensate Brownian drift in the qubit Hamiltonian, it achieves a stabilized error rate very close to the theoretical lower bound. Against 1/f noise, it is less effective only because 1/f noise is (like white noise) dominated by high-frequency fluctuations that are uncompensatable. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE
On the robustness of bucket brigade quantum RAM
NASA Astrophysics Data System (ADS)
Arunachalam, Srinivasan; Gheorghiu, Vlad; Jochym-O'Connor, Tomas; Mosca, Michele; Varshinee Srinivasan, Priyaa
2015-12-01
We study the robustness of the bucket brigade quantum random access memory model introduced by Giovannetti et al (2008 Phys. Rev. Lett. 100 160501). Due to a result of Regev and Schiff (ICALP '08 733), we show that for a class of error models the error rate per gate in the bucket brigade quantum memory has to be of order o(2^{-n/2}) (where N = 2^{n} is the size of the memory) whenever the memory is used as an oracle for the quantum searching problem. We conjecture that this is the case for any realistic error model that will be encountered in practice, and that for algorithms with super-polynomially many oracle queries the error rate must be super-polynomially small, which further motivates the need for quantum error correction. By contrast, for algorithms such as matrix inversion Harrow et al (2009 Phys. Rev. Lett. 103 150502) or quantum machine learning Rebentrost et al (2014 Phys. Rev. Lett. 113 130503) that only require a polynomial number of queries, the error rate only needs to be polynomially small and quantum error correction may not be required. We introduce a circuit model for the quantum bucket brigade architecture and argue that quantum error correction for the circuit causes the quantum bucket brigade architecture to lose its primary advantage of a small number of 'active' gates, since all components have to be actively error corrected.
Shabbott, Britne A; Sainburg, Robert L
2010-05-01
Visuomotor adaptation is mediated by errors between intended and sensory-detected arm positions. However, it is not clear whether visual-based errors that are shown during the course of motion lead to qualitatively different or more efficient adaptation than errors shown after movement. For instance, continuous visual feedback mediates online error corrections, which may facilitate or inhibit the adaptation process. We addressed this question by manipulating the timing of visual error information and task instructions during a visuomotor adaptation task. Subjects were exposed to a visuomotor rotation, during which they received continuous visual feedback (CF) of hand position with instructions to correct or not correct online errors, or knowledge-of-results (KR), provided as a static hand-path at the end of each trial. Our results showed that all groups improved performance with practice, and that online error corrections were inconsequential to the adaptation process. However, in contrast to the CF groups, the KR group showed relatively small reductions in mean error with practice, increased inter-trial variability during rotation exposure, and more limited generalization across target distances and workspace. Further, although the KR group showed improved performance with practice, after-effects were minimal when the rotation was removed. These findings suggest that simultaneous visual and proprioceptive information is critical in altering neural representations of visuomotor maps, although delayed error information may elicit compensatory strategies to offset perturbations.
The Influence of Radiosonde 'Age' on TRMM Field Campaign Soundings Humidity Correction
NASA Technical Reports Server (NTRS)
Roy, Biswadev; Halverson, Jeffrey B.; Wang, Jun-Hong
2002-01-01
Hundreds of Vaisala sondes with an RS80-H Humicap thin-film capacitor humidity sensor were launched during the Tropical Rainfall Measuring Mission (TRMM) field campaigns held in Brazil (Large Scale Biosphere-Atmosphere experiment, LBA) and in the Republic of the Marshall Islands (Kwajalein experiment, KWAJEX). Using the six humidity error correction algorithms of Wang et al., the significant dry bias in the RS80-H data of these sondes was corrected. It is further shown that the sonde surface temperature error must be corrected for a better representation of the relative humidity. This error becomes prominent due to sensor arm heating in the first 50 s of data.
Iterative Correction of Reference Nucleotides (iCORN) using second generation sequencing technology.
Otto, Thomas D; Sanders, Mandy; Berriman, Matthew; Newbold, Chris
2010-07-15
The accuracy of reference genomes is important for downstream analysis but a low error rate requires expensive manual interrogation of the sequence. Here, we describe a novel algorithm (Iterative Correction of Reference Nucleotides) that iteratively aligns deep coverage of short sequencing reads to correct errors in reference genome sequences and evaluate their accuracy. Using Plasmodium falciparum (81% A + T content) as an extreme example, we show that the algorithm is highly accurate and corrects over 2000 errors in the reference sequence. We give examples of its application to numerous other eukaryotic and prokaryotic genomes and suggest additional applications. The software is available at http://icorn.sourceforge.net
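The majority-vote core of such an iterative correction loop can be sketched in a few lines. This toy keeps reads pre-aligned and skips iCORN's realignment and evaluation stages, so the function name and the (start, sequence) read format are illustrative assumptions, not the tool's actual interface:

```python
from collections import Counter

def correct_reference(reference, aligned_reads):
    """Toy iterative reference correction in the spirit of iCORN.

    `aligned_reads` are (start, sequence) pairs assumed to be already
    aligned to the reference (real iCORN realigns reads each iteration).
    Any reference base contradicted by a strict majority of covering
    read bases is replaced, and the process repeats until convergence.
    """
    ref = list(reference)
    iterations = 0
    changed = True
    while changed:
        changed = False
        iterations += 1
        # Pile up the read bases covering each reference position.
        pileup = [Counter() for _ in ref]
        for start, seq in aligned_reads:
            for offset, base in enumerate(seq):
                pos = start + offset
                if 0 <= pos < len(ref):
                    pileup[pos][base] += 1
        for pos, counts in enumerate(pileup):
            if not counts:
                continue  # uncovered position: keep the reference base
            base, votes = counts.most_common(1)[0]
            # Correct only when a strict majority of reads disagrees.
            if base != ref[pos] and votes > sum(counts.values()) / 2:
                ref[pos] = base
                changed = True
    return "".join(ref), iterations
```

For example, a reference "ACGTACGT" whose reads consistently report a C at position 2 converges to "ACCTACGT" in two passes (the second pass confirms no further changes).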
Error suppression and correction for quantum annealing
NASA Astrophysics Data System (ADS)
Lidar, Daniel
While adiabatic quantum computing and quantum annealing enjoy a certain degree of inherent robustness against excitations and control errors, there is no escaping the need for error correction or suppression. In this talk I will give an overview of our work on the development of such error correction and suppression methods. We have experimentally tested one such method combining encoding, energy penalties and decoding, on a D-Wave Two processor, with encouraging results. Mean field theory shows that this can be explained in terms of a softening of the closing of the gap due to the energy penalty, resulting in protection against excitations that occur near the quantum critical point. Decoding recovers population from excited states and enhances the success probability of quantum annealing. Moreover, we have demonstrated that using repetition codes with increasing code distance can lower the effective temperature of the annealer. References: K.L. Pudenz, T. Albash, D.A. Lidar, ``Error corrected quantum annealing with hundreds of qubits'', Nature Commun. 5, 3243 (2014). K.L. Pudenz, T. Albash, D.A. Lidar, ``Quantum annealing correction for random Ising problems'', Phys. Rev. A. 91, 042302 (2015). S. Matsuura, H. Nishimori, T. Albash, D.A. Lidar, ``Mean Field Analysis of Quantum Annealing Correction''. arXiv:1510.07709. W. Vinci et al., in preparation.
APC-PC Combined Scheme in Gilbert Two State Model: Proposal and Study
NASA Astrophysics Data System (ADS)
Bulo, Yaka; Saring, Yang; Bhunia, Chandan Tilak
2017-04-01
In an automatic repeat request (ARQ) scheme, a packet is retransmitted if it gets corrupted by transmission errors caused by the channel. However, an erroneous packet may contain both erroneous and correct bits, and hence may still carry useful information. The receiver may be able to combine this information from multiple erroneous copies to recover the correct packet. Packet combining (PC) is a simple and elegant scheme of error correction in a transmitted packet, in which two received copies are XORed to obtain the locations of the erroneous bits. Thereafter, the packet is corrected by inverting the bits located as erroneous. Aggressive packet combining (APC) is a logical extension of PC, primarily designed for wireless communication with the objective of correcting errors with low latency. PC offers higher throughput than APC, but PC cannot correct double-bit errors that occur in the same bit location of both erroneous copies of the packet. A hybrid technique is proposed to utilize the advantages of both APC and PC while attempting to remove the limitations of both. In the proposed technique, the application of APC-PC on the Gilbert two-state model has been studied. The simulation results show that the proposed technique offers better throughput than conventional APC and a lower packet error rate than the PC scheme.
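The XOR-based location step that PC relies on can be sketched as follows. The CRC-8 integrity check and the exhaustive search over candidate inversions are illustrative choices, not details taken from the paper; note how errors hitting the same bit position in both copies never appear in the XOR, which is exactly the PC limitation noted above:

```python
from itertools import product

def crc8(bits):
    """Bitwise CRC-8 (polynomial 0x07) used as the packet integrity check."""
    reg = 0
    for b in bits:
        feedback = ((reg >> 7) & 1) ^ b
        reg = (reg << 1) & 0xFF
        if feedback:
            reg ^= 0x07
    return reg

def packet_combine(copy1, copy2, expected_crc):
    """XOR two erroneous copies to locate disagreeing bit positions, then
    search over inversions of those bits until the CRC check passes."""
    diff = [i for i, (a, b) in enumerate(zip(copy1, copy2)) if a != b]
    for base in (copy1, copy2):
        for flips in product((0, 1), repeat=len(diff)):
            candidate = list(base)
            for pos, f in zip(diff, flips):
                candidate[pos] ^= f
            if crc8(candidate) == expected_crc:
                return candidate
    return None  # no combination matched: request retransmission
```

With single-bit errors at different positions of the two copies, the XOR isolates both positions and the search recovers the original packet.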
Decroos, Francis Char; Stinnett, Sandra S; Heydary, Cynthia S; Burns, Russell E; Jaffe, Glenn J
2013-11-01
To determine the impact of segmentation error correction and precision of standardized grading of time domain optical coherence tomography (OCT) scans obtained during an interventional study for macular edema secondary to central retinal vein occlusion (CRVO). A reading center team of two readers and a senior reader evaluated 1199 OCT scans. Manual segmentation error correction (SEC) was performed. The frequency of SEC, resulting change in central retinal thickness after SEC, and reproducibility of SEC were quantified. Optical coherence tomography characteristics associated with the need for SECs were determined. Reading center teams graded all scans, and the reproducibility of this evaluation for scan quality at the fovea and cystoid macular edema was determined on 97 scans. Segmentation errors were observed in 360 (30.0%) scans, of which 312 were interpretable. On these 312 scans, the mean machine-generated central subfield thickness (CST) was 507.4 ± 208.5 μm compared to 583.0 ± 266.2 μm after SEC. Segmentation error correction resulted in a mean absolute CST correction of 81.3 ± 162.0 μm from baseline uncorrected CST. Segmentation error correction was highly reproducible (intraclass correlation coefficient [ICC] = 0.99-1.00). Epiretinal membrane (odds ratio [OR] = 2.3, P < 0.0001), subretinal fluid (OR = 2.1, P = 0.0005), and increasing CST (OR = 1.6 per 100-μm increase, P < 0.001) were associated with need for SEC. Reading center teams reproducibly graded scan quality at the fovea (87% agreement, kappa = 0.64, 95% confidence interval [CI] 0.45-0.82) and cystoid macular edema (92% agreement, kappa = 0.84, 95% CI 0.74-0.94). Optical coherence tomography images obtained during an interventional CRVO treatment trial can be reproducibly graded. Segmentation errors can cause clinically meaningful deviation in central retinal thickness measurements; however, these errors can be corrected reproducibly in a reading center setting. 
Segmentation errors are common on these images, can cause clinically meaningful errors in central retinal thickness measurement, and can be corrected reproducibly in a reading center setting.
Older, Not Younger, Children Learn More False Facts from Stories
ERIC Educational Resources Information Center
Fazio, Lisa K.; Marsh, Elizabeth J.
2008-01-01
Early school-aged children listened to stories that contained correct and incorrect facts. All ages answered more questions correctly after having heard the correct fact in the story. Only the older children, however, produced story errors on a later general knowledge test. Source errors did not drive the increased suggestibility in older…
Code of Federal Regulations, 2010 CFR
2010-01-01
... Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT (CONTINUED) CIVIL SERVICE REGULATIONS (CONTINUED) CORRECTION OF RETIREMENT COVERAGE ERRORS UNDER THE FEDERAL ERRONEOUS RETIREMENT COVERAGE CORRECTIONS ACT... if your qualifying retirement coverage error was previously corrected to FERS, and you later received...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-16
... Insurance; and Program No. 93.774, Medicare-- Supplementary Medical Insurance Program) Dated: November 9...: Correction notice. SUMMARY: This document corrects a technical error that appeared in the notice published in... of July 22, 2010 (75 FR 42836), there was a technical error that we are identifying and correcting in...
Quantum error correction of continuous-variable states against Gaussian noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ralph, T. C.
2011-08-15
We describe a continuous-variable error correction protocol that can correct the Gaussian noise induced by linear loss on Gaussian states. The protocol can be implemented using linear optics and photon counting. We explore the theoretical bounds of the protocol as well as the expected performance given current knowledge and technology.
Artificial Intelligence and Second Language Learning: An Efficient Approach to Error Remediation
ERIC Educational Resources Information Center
Dodigovic, Marina
2007-01-01
While theoretical approaches to error correction vary in the second language acquisition (SLA) literature, most sources agree that such correction is useful and leads to learning. While some point out the relevance of the communicative context in which the correction takes place, others stress the value of consciousness-raising. Trying to…
A Case for Soft Error Detection and Correction in Computational Chemistry.
van Dam, Hubertus J J; Vishnu, Abhinav; de Jong, Wibe A
2013-09-10
High performance computing platforms are expected to deliver 10^18 floating-point operations per second by the year 2022 through the deployment of millions of cores. Even if every core is highly reliable, the sheer number of them means that the mean time between failures will become so short that most application runs will suffer at least one fault. In particular, soft errors caused by intermittent incorrect behavior of the hardware are a concern as they lead to silent data corruption. In this paper we investigate the impact of soft errors on optimization algorithms using Hartree-Fock as a particular example. Optimization algorithms iteratively reduce the error in the initial guess to reach the intended solution. Therefore they may intuitively appear to be resilient to soft errors. Our results show that this is true for soft errors of small magnitudes but not for large errors. We suggest error detection and correction mechanisms for different classes of data structures. The results obtained with these mechanisms indicate that we can correct more than 95% of the soft errors at a moderate increase in computational cost.
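One simple instance of the kind of data-structure protection the authors advocate is a checksum-plus-replica scheme. The per-row-sum checksum below is an assumed illustration, not the mechanism used in the paper:

```python
import numpy as np

def protect(matrix):
    """Compute per-row sums to serve as checksums (a simple illustrative
    scheme; the paper proposes mechanisms per class of data structure)."""
    return matrix.sum(axis=1)

def detect_and_correct(matrix, checksums, replica):
    """Flag rows whose sum has drifted from the stored checksum (a silent
    bit flip) and restore them from a replica; returns rows corrected."""
    bad = np.where(~np.isclose(matrix.sum(axis=1), checksums))[0]
    for r in bad:
        matrix[r] = replica[r]
    return len(bad)
```

Running the check between solver iterations lets a corrupted row be restored before the corruption propagates, at the cost of one extra copy and one sum per row.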
Beam hardening correction in CT myocardial perfusion measurement
NASA Astrophysics Data System (ADS)
So, Aaron; Hsieh, Jiang; Li, Jian-Ying; Lee, Ting-Yim
2009-05-01
This paper presents a method for correcting beam hardening (BH) in cardiac CT perfusion imaging. The proposed algorithm works with reconstructed images instead of projection data. It applies thresholds to separate low (soft tissue) and high (bone and contrast) attenuating material in a CT image. The BH error in each projection is estimated by a polynomial function of the forward projection of the segmented image. The error image is reconstructed by back-projection of the estimated errors. A BH-corrected image is then obtained by subtracting a scaled error image from the original image. Phantoms were designed to simulate the BH artifacts encountered in cardiac CT perfusion studies of humans and animals that are most commonly used in cardiac research. These phantoms were used to investigate whether BH artifacts can be reduced with our approach and to determine the optimal settings, which depend upon the anatomy of the scanned subject, of the correction algorithm for patient and animal studies. The correction algorithm was also applied to correct BH in a clinical study to further demonstrate the effectiveness of our technique.
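The pipeline described above (segment, forward-project, polynomial error estimate, back-project, subtract) can be sketched in the image domain. This toy restricts the forward projection to two parallel-beam views and uses made-up polynomial coefficients, whereas the paper tunes the correction to the anatomy of the scanned subject:

```python
import numpy as np

def bh_correct(image, threshold, coeffs=(0.0, 0.0, 1e-4), scale=1.0):
    """Sketch of an image-domain beam-hardening correction.

    The polynomial coefficients and scale factor are illustrative, not
    calibrated values; only two orthogonal views are simulated.
    """
    # 1. Threshold to isolate high-attenuation material (bone/contrast).
    seg = np.where(image >= threshold, image, 0.0)
    n_rows, n_cols = image.shape
    poly = np.polynomial.Polynomial(coeffs)
    err_img = np.zeros_like(image, dtype=float)
    # 2-4. Forward-project the segmented image, estimate the BH error in
    # each projection with the polynomial, and back-project the errors
    # (uniform smearing along each ray).
    err_img += (poly(seg.sum(axis=1)) / n_cols)[:, None]   # horizontal view
    err_img += (poly(seg.sum(axis=0)) / n_rows)[None, :]   # vertical view
    # 5. Subtract the scaled error image from the original.
    return image - scale * err_img
```

With all-zero polynomial coefficients the correction is a no-op, which makes the calibration role of the coefficients explicit: they encode how strongly each segmented projection value inflates the reconstructed attenuation.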
Desplanques, Maxime; Tagaste, Barbara; Fontana, Giulia; Pella, Andrea; Riboldi, Marco; Fattori, Giovanni; Donno, Andrea; Baroni, Guido; Orecchia, Roberto
2013-01-01
The synergy between in-room imaging and optical tracking, in co-operation with highly accurate robotic patient handling, represents a concept for patient set-up which has been implemented at CNAO (Centro Nazionale di Adroterapia Oncologica). In-room imaging is based on a double oblique X-ray projection system; optical tracking consists of detecting the position of spherical markers placed directly on the patient's skin or on the immobilization devices. These markers are used as external fiducials during patient positioning and dose delivery. This study reports the results of a comparative analysis between in-room imaging and optical tracking data for patient positioning within the framework of high-precision particle therapy. Differences between the optical tracking system (OTS) and the imaging system (IS) were on average within the expected localization accuracy. Over the first 633 fractions of head and neck (H&N) set-up procedures, the corrections applied by the IS after patient positioning using the OTS alone were mostly sub-millimetric for the translations (0.4±1.1 mm) and below one degree for the rotations (0.0°±0.8°). Over the first 236 fractions for pelvic localizations, the corrections applied by the IS after preliminary optical set-up correction were moderately larger and more dispersed (translations: 1.3±2.9 mm, rotations: 0.1°±0.9°). Although the indication of the OTS cannot replace information provided by in-room imaging devices and 2D-3D image registration, the reported data show that OTS preliminary correction can greatly support image-based patient set-up refinement and also provide a secondary, independent verification system for patient positioning. PMID:23824116
Yang, Jie; Liu, Qingquan; Dai, Wei
2017-02-01
To improve the air temperature observation accuracy, a low measurement error temperature sensor is proposed. A computational fluid dynamics (CFD) method is implemented to obtain temperature errors under various environmental conditions. Then, a temperature error correction equation is obtained by fitting the CFD results using a genetic algorithm method. The low measurement error temperature sensor, a naturally ventilated radiation shield, a thermometer screen, and an aspirated temperature measurement platform are characterized in the same environment to conduct the intercomparison. The aspirated platform served as an air temperature reference. The mean temperature errors of the naturally ventilated radiation shield and the thermometer screen are 0.74 °C and 0.37 °C, respectively. In contrast, the mean temperature error of the low measurement error temperature sensor is 0.11 °C. The mean absolute error and the root mean square error between the corrected results and the measured results are 0.008 °C and 0.01 °C, respectively. The correction equation allows the temperature error of the low measurement error temperature sensor to be reduced by approximately 93.8%. The low measurement error temperature sensor proposed in this research may be helpful to provide a relatively accurate air temperature result.
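The fitting stage can be sketched with synthetic data standing in for the CFD results. The functional form, the coefficient, and the use of ordinary least squares in place of the paper's genetic algorithm are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for CFD-derived temperature errors: the model form and the
# coefficient below are illustrative, not the paper's CFD data.
radiation = rng.uniform(0.0, 1000.0, 200)          # solar radiation, W/m^2
airflow = rng.uniform(0.5, 6.0, 200)               # airflow speed, m/s
temp_error = 8e-4 * radiation / np.sqrt(airflow)   # sensor error, deg C

# Fit the correction equation error ~ a*R/sqrt(v) + b by least squares.
A = np.column_stack([radiation / np.sqrt(airflow), np.ones_like(airflow)])
coef, *_ = np.linalg.lstsq(A, temp_error, rcond=None)

# Residuals after applying the fitted correction equation.
residual = temp_error - A @ coef
mae = np.abs(residual).mean()
rmse = np.sqrt((residual ** 2).mean())
```

Once fitted, the correction equation is applied in the field by subtracting the predicted error, given the measured radiation and airflow, from the raw sensor reading.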
Accounting for hardware imperfections in EIT image reconstruction algorithms.
Hartinger, Alzbeta E; Gagnon, Hervé; Guardo, Robert
2007-07-01
Electrical impedance tomography (EIT) is a non-invasive technique for imaging the conductivity distribution of a body section. Different types of EIT images can be reconstructed: absolute, time difference and frequency difference. Reconstruction algorithms are sensitive to many errors which translate into image artefacts. These errors generally result from incorrect modelling or inaccurate measurements. Every reconstruction algorithm incorporates a model of the physical set-up which must be as accurate as possible since any discrepancy with the actual set-up will cause image artefacts. Several methods have been proposed in the literature to improve the model realism, such as creating anatomical-shaped meshes, adding a complete electrode model and tracking changes in electrode contact impedances and positions. Absolute and frequency difference reconstruction algorithms are particularly sensitive to measurement errors and generally assume that measurements are made with an ideal EIT system. Real EIT systems have hardware imperfections that cause measurement errors. These errors translate into image artefacts since the reconstruction algorithm cannot properly discriminate genuine measurement variations produced by the medium under study from those caused by hardware imperfections. We therefore propose a method for eliminating these artefacts by integrating a model of the system hardware imperfections into the reconstruction algorithms. The effectiveness of the method has been evaluated by reconstructing absolute, time difference and frequency difference images with and without the hardware model from data acquired on a resistor mesh phantom. Results have shown that artefacts are smaller for images reconstructed with the model, especially for frequency difference imaging.
1987-03-01
…would be transcribed as L = AX - V, where L, X, and V are the vectors of constant terms, parametric corrections, and residuals, respectively. … Just as du' represents the parametric corrections in tensor notation, the necessary associated metric tensor a' corresponds to the variance… observations, n residuals, and n parametric corrections to X (an initial set of parameters), respectively. The vector L is formed as…
Interferometric correction system for a numerically controlled machine
Burleson, Robert R.
1978-01-01
An interferometric correction system for a numerically controlled machine is provided to improve the positioning accuracy of a machine tool, for example, for a high-precision numerically controlled machine. A laser interferometer feedback system is used to monitor the positioning of the machine tool which is being moved by command pulses to a positioning system to position the tool. The correction system compares the commanded position as indicated by a command pulse train applied to the positioning system with the actual position of the tool as monitored by the laser interferometer. If the tool position lags the commanded position by a preselected error, additional pulses are added to the pulse train applied to the positioning system to advance the tool closer to the commanded position, thereby reducing the lag error. If the actual tool position is leading in comparison to the commanded position, pulses are deleted from the pulse train where the advance error exceeds the preselected error magnitude to correct the position error of the tool relative to the commanded position.
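The add-pulses-on-lag logic can be sketched as a small closed-loop simulation. The tool gain and threshold values are illustrative assumptions, and the symmetric pulse-deletion branch for a leading tool is omitted for brevity:

```python
def run_machine(command_pulses, tool_gain=0.9, threshold=2.0):
    """Toy closed loop for the interferometric correction system.

    Each command pulse should move the tool one unit, but the positioning
    system realizes only `tool_gain` of it (an illustrative stand-in for
    mechanical lag). The laser-interferometer reading `actual` is compared
    with the commanded position after every pulse, and extra pulses are
    injected once the lag exceeds the preselected error threshold.
    """
    commanded = 0.0
    actual = 0.0
    injected = 0
    for pulse in command_pulses:
        commanded += pulse
        actual += tool_gain * pulse             # imperfect response
        while commanded - actual >= threshold:  # lag error detected
            actual += tool_gain                 # add a correction pulse
            injected += 1
    return actual, injected
```

After the run, the tool's position error is bounded by the preselected threshold rather than accumulating with the number of commanded pulses.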
Headache and refractive errors in children.
Roth, Zachary; Pandolfo, Katie R; Simon, John; Zobal-Ratner, Jitka
2014-01-01
To investigate the association between uncorrected or miscorrected refractive errors in children and headache, and to determine whether correction of refractive errors contributes to headache resolution. Results of ophthalmic examination, including refractive error, were recorded at initial visit for headache. If resolution of headache on subsequent visits was not documented, a telephone call was placed to their caregivers to inquire whether headache had resolved. Of the 158 patients, 75.3% had normal or unchanged eye examinations, including refractions. Follow-up data were available for 110 patients. Among those, 32 received new or changed spectacle correction and 78 did not require a change in refraction. Headaches improved in 76.4% of all patients, whether with (71.9%) or without (78.2%) a change in refractive correction. The difference between these two groups was not statistically significant (P = .38). Headaches in children usually do not appear to be caused by ophthalmic disease, including refractive error. The prognosis for improvement is favorable, regardless of whether refractive correction is required. Copyright 2014, SLACK Incorporated.
Measurement-free implementations of small-scale surface codes for quantum-dot qubits
NASA Astrophysics Data System (ADS)
Ercan, H. Ekmel; Ghosh, Joydip; Crow, Daniel; Premakumar, Vickram N.; Joynt, Robert; Friesen, Mark; Coppersmith, S. N.
2018-01-01
The performance of quantum-error-correction schemes depends sensitively on the physical realizations of the qubits and the implementations of various operations. For example, in quantum-dot spin qubits, readout is typically much slower than gate operations, and conventional surface-code implementations that rely heavily on syndrome measurements could therefore be challenging. However, fast and accurate reset of quantum-dot qubits, without readout, can be achieved via tunneling to a reservoir. Here we propose small-scale surface-code implementations for which syndrome measurements are replaced by a combination of Toffoli gates and qubit reset. For quantum-dot qubits, this enables much faster error correction than measurement-based schemes, but requires additional ancilla qubits and non-nearest-neighbor interactions. We have performed numerical simulations of two different coding schemes, obtaining error thresholds on the orders of 10^-2 for a one-dimensional architecture that only corrects bit-flip errors and 10^-4 for a two-dimensional architecture that corrects bit- and phase-flip errors.
NASA Astrophysics Data System (ADS)
Luo, Hongyuan; Wang, Deyun; Yue, Chenqiang; Liu, Yanling; Guo, Haixiang
2018-03-01
In this paper, a hybrid decomposition-ensemble learning paradigm combining error correction is proposed for improving the forecast accuracy of daily PM10 concentration. The proposed learning paradigm consists of the following two sub-models: (1) a PM10 concentration forecasting model; (2) an error correction model. In the proposed model, fast ensemble empirical mode decomposition (FEEMD) and variational mode decomposition (VMD) are applied to decompose the original PM10 concentration series and the error sequence, respectively. The extreme learning machine (ELM) model optimized by the cuckoo search (CS) algorithm is utilized to forecast the components generated by FEEMD and VMD. In order to prove the effectiveness and accuracy of the proposed model, two real-world PM10 concentration series, collected from Beijing and Harbin in China, are adopted to conduct the empirical study. The results show that the proposed model performs remarkably better than all other considered models without error correction, which indicates its superior performance.
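The two-sub-model structure (forecast first, then correct with a model of the error sequence) can be sketched with a plain AR(1) fit standing in for the FEEMD/VMD decompositions and CS-optimized ELMs; the synthetic series and all parameters are illustrative:

```python
import numpy as np

def ar1_coeffs(series):
    """Least-squares fit of y_t = a*y_{t-1} + b."""
    X = np.column_stack([series[:-1], np.ones(series.size - 1)])
    return np.linalg.lstsq(X, series[1:], rcond=None)[0]

rng = np.random.default_rng(1)
t = np.arange(400)
# Synthetic stand-in for a daily PM10 series (seasonal cycle plus noise).
pm10 = 60.0 + 20.0 * np.sin(2 * np.pi * t / 30) + rng.normal(0.0, 1.0, t.size)

n_test = 50
# Sub-model 1: base one-step-ahead forecasts from an AR(1) model.
a, b = ar1_coeffs(pm10[:-n_test])
pred = a * pm10[:-1] + b           # predictions for t = 1..N-1
err = pm10[1:] - pred              # the error sequence

# Sub-model 2: model the error sequence itself and correct the forecast.
c, d = ar1_coeffs(err[:-n_test])
base = pred[-n_test:]
corrected = base + (c * err[-n_test - 1:-1] + d)

target = pm10[-n_test:]
rmse_base = np.sqrt(np.mean((target - base) ** 2))
rmse_corr = np.sqrt(np.mean((target - corrected) ** 2))
```

Because the base model's errors are themselves autocorrelated, forecasting the error sequence and adding it back reduces the out-of-sample RMSE, which is the mechanism the paper's error correction sub-model exploits.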
DOE Office of Scientific and Technical Information (OSTI.GOV)
Di Maso, L; Forbang, R Teboh; Zhang, Y
Purpose: To explore the dosimetric consequences of uncorrected rotational setup errors during SBRT for pancreatic cancer patients. Methods: This was a retrospective study utilizing data from ten (n=10) previously treated SBRT pancreas patients. For each original planning CT, we applied rotational transformations to derive additional CT images representative of possible rotational setup errors. This resulted in 6 different sets of rotational combinations, creating a total of 60 CT planning images. The patients' clinical dosimetric plans were then applied to their corresponding rotated CT images. The 6 rotation sets encompassed a 3-, 2- and 1-degree rotation in each rotational direction, a 3-degree rotation in just the pitch, a 3-degree rotation in just the yaw, and a 3-degree rotation in just the roll. After the dosimetric plan was applied to the rotated CT images, the resulting plan was then evaluated and compared with the clinical plan for tumor coverage and normal tissue sparing. Results: PTV coverage, defined here by V33, ranged from 92-98% throughout the patients' clinical plans. After an n-degree rotation in each rotational direction, that range decreased to 68-87%, 85-92%, and 88-94% for n=3, 2 and 1, respectively. Normal tissue sparing, defined here by the proximal stomach V15, ranged from 0-8.9 cc throughout the clinical plans. After an n-degree rotation in each rotational direction, that range increased to 0-17 cc, 0-12 cc, and 0-10 cc for n=3, 2, and 1, respectively. Conclusion: For pancreatic SBRT, small rotational setup errors in the pitch, yaw and roll directions on average caused underdosage of the PTV and overdosage of proximal normal tissue. The 1-degree rotation was on average the least detrimental to normal tissue and PTV coverage. The 3-degree yaw created on average the lowest increase in normal tissue volume coverage.
This research was sponsored by the AAPM Education Council through the AAPM Education and Research Fund for the AAPM Summer Undergraduate Fellowship Program.
Developing and implementing a high precision setup system
NASA Astrophysics Data System (ADS)
Peng, Lee-Cheng
High-precision radiotherapy (HPRT) was first implemented in stereotactic radiosurgery using a rigid, invasive stereotactic head frame. Fractionated stereotactic radiotherapy (SRT) with a frameless device was developed alongside a growing interest in sophisticated treatment with tight margins and high dose gradients. This dissertation establishes complete management for HPRT in the process of frameless SRT, including image-guided localization, immobilization, and dose evaluation. An ideal, precise positioning system allows ease of relocation, real-time assessment of patient movement, high accuracy, and no additional dose in daily use. A new image-guided stereotactic positioning system (IGSPS), the Align RT3C 3D surface camera system (ART, VisionRT), which combines 3D surface images and uses a real-time tracking technique, was developed to ensure accurate positioning in the first place. Uncertainties of the current optical tracking system, which causes patient discomfort from the additional bite plates using the dental impression technique and from external markers, are identified. The accuracy and feasibility of ART are validated by comparisons with the optical tracking and cone-beam computed tomography (CBCT) systems. Additionally, an effective daily quality assurance (QA) program for the linear accelerator and multiple IGSPSs is the most important factor in ensuring system performance in daily use. Systematic errors from phantom variability and the long measurement time caused by switching phantoms were discovered. We investigated the use of a commercially available daily QA device to improve efficiency and thoroughness. A reasonable action level was established by considering dosimetric relevance and clinic flow. For intricate treatments, the effect of dose deviations caused by setup errors on tumor coverage and on toxicity to organs at risk (OARs) remains uncertain.
The lack of adequate dosimetric simulations based on the true treatment coordinates from the treatment planning system (TPS) has limited adaptive treatments. A reliable and accurate dosimetric simulation of uncorrected errors, using the TPS and in-house software, has been developed. In SRT, the calculated dose deviation is compared to the original treatment dose with the dose-volume histogram to investigate the dose effect of rotational errors. In summary, this work performed a quality assessment to investigate the overall accuracy of current setup systems. To reach the ideal HPRT, a reliable dosimetric simulation, an effective daily QA program, and precise setup systems were developed and validated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Christine H.; Gerry, Emily; Chmura, Steven J.
2015-01-01
Purpose: To calculate planning target volume (PTV) margins for chest wall and regional nodal targets using daily orthogonal kilovolt (kV) imaging and to study residual setup error after kV alignment using volumetric cone-beam computed tomography (CBCT). Methods and Materials: Twenty-one postmastectomy patients were treated with intensity modulated radiation therapy with 7-mm PTV margins. Population-based PTV margins were calculated from translational shifts after daily kV positioning and/or weekly CBCT data for each of 8 patients, whose surgical clips were used as surrogates for target volumes. Errors from kV and CBCT data were mathematically combined to generate PTV margins for 3 simulated alignment workflows: (1) skin marks alone; (2) weekly kV imaging; and (3) daily kV imaging. Results: The kV data from 613 treatment fractions indicated that a 7-mm uniform margin would account for 95% of daily shifts if patients were positioned using only skin marks. Total setup errors incorporating both kV and CBCT data were larger than those from kV alone, yielding PTV expansions of 7 mm anterior-posterior, 9 mm left-right, and 9 mm superior-inferior. Required PTV margins after weekly kV imaging were similar in magnitude to alignment to skin marks, but rotational adjustments of patients were required in 32% ± 17% of treatments. These rotations would have remained uncorrected without the use of daily kV imaging. Despite the use of daily kV imaging, CBCT data taken at the treatment position indicate that an anisotropic PTV margin of 6 mm anterior-posterior, 4 mm left-right, and 8 mm superior-inferior must be retained to account for residual errors. Conclusions: Cone-beam CT provides additional information on 3-dimensional reproducibility of treatment setup for chest wall targets. Three-dimensional data indicate that a uniform 7-mm PTV margin is insufficient in the absence of daily IGRT.
Interfraction movement is greater than suggested by 2-dimensional imaging, thus a margin of at least 4 to 8 mm must be retained despite the use of daily IGRT.
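The population-margin calculation described above can be sketched as follows. The 2.5Σ + 0.7σ recipe is the widely used van Herk formula, assumed here because the abstract does not state which recipe was applied, and the example shifts are synthetic:

```python
import numpy as np

def ptv_margin(shifts_by_patient):
    """Population-based PTV margin from daily translational shifts along
    one axis.

    Sigma (systematic error) is the SD of the per-patient mean shifts;
    sigma (random error) is the RMS of the per-patient SDs. The recipe
    M = 2.5*Sigma + 0.7*sigma is the van Herk margin formula, an assumed
    choice here, not one confirmed by the abstract.
    """
    means = np.array([np.mean(s) for s in shifts_by_patient])
    sds = np.array([np.std(s, ddof=1) for s in shifts_by_patient])
    systematic = np.std(means, ddof=1)
    random_err = np.sqrt(np.mean(sds ** 2))
    return 2.5 * systematic + 0.7 * random_err
```

Feeding in each patient's daily kV shifts along one axis (in mm) yields the margin for that axis; repeating per axis gives the anisotropic expansions quoted in the abstract.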
NASA Astrophysics Data System (ADS)
Alvarez-Garreton, C. D.; Ryu, D.; Western, A. W.; Crow, W. T.; Su, C. H.; Robertson, D. E.
2014-12-01
Flood prediction in poorly monitored catchments is among the greatest challenges faced by hydrologists. To address this challenge, an increasing number of studies in the last decade have explored methods to integrate various existing observations from ground and satellites. One approach in particular is the assimilation of satellite soil moisture (SM-DA) into rainfall-runoff models. The rationale is that satellite soil moisture (SSM) can be used to correct model soil water states, enabling more accurate prediction of catchment response to precipitation and thus better streamflow prediction. However, there is still no consensus on the most effective SM-DA scheme and how this might depend on catchment scale, climate characteristics, runoff mechanisms, the model and SSM products used, etc. In this work, an operational SM-DA scheme was set up in the poorly monitored, large (>40,000 km²), semi-arid Warrego catchment situated in eastern Australia. We assimilated passive and active SSM products into the probability distributed model (PDM) using an ensemble Kalman filter. We explored factors influencing the SM-DA framework, including relatively new techniques to remove model-observation bias, estimate observation errors and represent model errors. Furthermore, we explored the advantages of accounting for the spatial distribution of forcing and channel routing processes within the catchment by implementing and comparing lumped and semi-distributed model setups. Flood prediction is improved by SM-DA (Figure), with a 30% reduction of the average root-mean-squared difference of the ensemble prediction, a 20% reduction of the false alarm ratio and a 40% increase of the ensemble mean Nash-Sutcliffe efficiency. SM-DA skill does not significantly change with different observation error assumptions, but the skill strongly depends on the observational bias correction technique used, and more importantly, on the performance of the open-loop model before assimilation.
Our findings imply that proper pre-processing of SSM is important for the efficacy of the SM-DA and assimilation performance is critically affected by the quality of model calibration. We therefore recommend focusing efforts on these two factors, while further evaluating the trade-offs between model complexity and data availability.
High speed fault tolerant secure communication for muon chamber using FPGA based GBTx emulator
NASA Astrophysics Data System (ADS)
Sau, Suman; Mandal, Swagata; Saini, Jogender; Chakrabarti, Amlan; Chattopadhyay, Subhasis
2015-12-01
The Compressed Baryonic Matter (CBM) experiment is a part of the Facility for Antiproton and Ion Research (FAIR) at GSI in Darmstadt. The CBM experiment will investigate highly compressed nuclear matter using nucleus-nucleus collisions. It will examine heavy-ion collisions in fixed-target geometry and will be able to measure hadrons, electrons and muons. CBM requires precise time synchronization, compact hardware, radiation tolerance, self-triggered front-end electronics, efficient data aggregation schemes and the capability to handle high data rates (up to several TB/s). As part of the implementation of the readout chain of the Muon Chamber (MUCH) [1] in India, we have implemented an FPGA-based emulator of the GBTx. GBTx is a radiation-tolerant ASIC developed by CERN that can be used to implement multipurpose high-speed bidirectional optical links for high-energy physics (HEP) experiments. GBTx will be used in a highly irradiated area and is therefore prone to multi-bit errors. To mitigate this effect, instead of a single-bit error-correcting RS code we have used a double-bit error-correcting (15, 7) BCH code. This increases the redundancy, which in turn increases the reliability of the coded data, making it less susceptible to radiation-induced noise. The data travel from the detector to the PC through multiple nodes over the communication channel. The computing resources are connected to a network that should be accessible only to authorized persons; since unauthorized data access might happen if the network security is compromised, data encryption is essential. To make the data communication secure, the Advanced Encryption Standard [2] (AES, a symmetric-key cipher) and RSA [3], [4] (an asymmetric-key cipher) are applied after the channel coding. We have implemented the GBTx emulator on two Xilinx Kintex-7 boards (KC705).
One board acts as the transmitter and the other as the receiver; they are connected by optical fiber through small form-factor pluggable (SFP) ports. We have tested the setup in the runtime environment using the Xilinx ChipScope Pro Analyzer, and we also measured the resource utilization, throughput and power consumption of the implemented design.
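As a sketch of the channel coding described above, the systematic (15, 7) BCH encoder can be expressed as polynomial division over GF(2). The generator polynomial g(x) = x⁸ + x⁷ + x⁶ + x⁴ + 1 is the standard one for the double-error-correcting (15, 7) BCH code; the FPGA implementation is of course a hardware circuit, but the arithmetic it performs is the same.

```python
G = 0b111010001  # g(x) = x^8 + x^7 + x^6 + x^4 + 1, degree 8

def gf2_mod(dividend, divisor):
    """Remainder of polynomial division over GF(2), bits stored in ints."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        dividend ^= divisor << (dividend.bit_length() - dlen)
    return dividend

def bch_15_7_encode(msg):
    """Systematic (15, 7) BCH codeword: 7 message bits followed by
    8 parity bits (the remainder of msg * x^8 modulo g)."""
    assert 0 <= msg < 1 << 7
    shifted = msg << 8
    return shifted | gf2_mod(shifted, G)
```

Every valid codeword is divisible by g(x), so a received word with up to two flipped bits leaves a nonzero remainder, which a full decoder would use to locate and correct the errors.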
NASA Astrophysics Data System (ADS)
Zait, Eitan; Ben-Zvi, Guy; Dmitriev, Vladimir; Oshemkov, Sergey; Pforr, Rainer; Hennig, Mario
2006-05-01
Intra-field CD variation is, besides OPC errors, a main contributor to the total CD variation budget in IC manufacturing. It is caused mainly by mask CD errors. In advanced memory device manufacturing the minimum features are close to the resolution limit, resulting in large mask error enhancement factors and hence large intra-field CD variations. Consequently, tight CD control (CDC) of the mask features is required, which significantly increases the cost of the mask and hence the litho process costs. Alternatively, techniques are sought (1) that improve the intra-field CD control for a given moderate mask and scanner imaging performance. Recently a new technique (2) has been proposed which corrects the printed CD by applying shading elements generated in the bulk of the mask substrate by ultrashort-pulse laser exposure. The blank transmittance across a feature is controlled by changing the density of light-scattering pixels. The technique has been demonstrated to be very successful in correcting intra-field CD variations caused by the mask and the projection system (2). A key application criterion for this technique in device manufacturing is the stability of the absorbing pixels against the DUV irradiation applied during mask projection in scanners. This paper describes the procedures and results of such an investigation. To keep the effort acceptable, a special experimental setup was chosen that allows an evaluation within a reasonable time. A 193 nm excimer laser with a pulse duration of 25 ns was used for blank irradiation. An accumulated dose equivalent to 100,000 exposures of 300 mm wafers was applied to halftone PSM mask areas with and without CDC shadowing elements. This allows the discrimination of effects appearing in treated and untreated glass regions.
Several intensities were investigated to define an acceptable threshold intensity that avoids glass compaction or the generation of color centers in the glass. The impact of the irradiation on the mask transmittance of both areas was studied by measuring the printed CD on wafer with a wafer scanner before and after DUV irradiation.
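To make the role of the mask error enhancement factor (MEEF) concrete, the sketch below shows how a mask CD error is amplified on the wafer and how much local transmittance change would be needed to offset it, assuming a simple linear dose-to-CD response. All function names and numbers are illustrative assumptions, not values from the paper.

```python
def wafer_cd_error(mask_cd_error_nm, meef, magnification=4.0):
    """Wafer-level CD error caused by a mask CD error (at mask scale),
    amplified by the mask error enhancement factor (MEEF)."""
    return meef * mask_cd_error_nm / magnification

def transmittance_correction(wafer_cd_error_nm, dose_sensitivity_nm_per_pct):
    """Percent change in local mask transmittance needed to cancel a wafer
    CD error, assuming a linear dose-to-CD response (hypothetical model)."""
    return -wafer_cd_error_nm / dose_sensitivity_nm_per_pct

# example: 8 nm mask CD error, MEEF of 3, 4x reduction scanner
err = wafer_cd_error(8.0, 3.0)           # 6.0 nm error on wafer
dt = transmittance_correction(err, 2.0)  # -3.0 % local transmittance
```

With a MEEF of 3 the mask error is only partially compensated by the 4x reduction, which is why even moderate mask CD errors demand tight transmittance control of the shading pixels.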
Pella, A; Riboldi, M; Tagaste, B; Bianculli, D; Desplanques, M; Fontana, G; Cerveri, P; Seregni, M; Fattori, G; Orecchia, R; Baroni, G
2014-08-01
In an increasing number of clinical indications, radiotherapy with accelerated particles shows relevant advantages compared with high-energy X-ray irradiation. However, due to the finite range of ions, particle therapy can be severely compromised by setup errors and geometric uncertainties. The purpose of this work is to describe the commissioning and the design of the quality assurance procedures for the patient positioning and setup verification systems at the Italian National Center for Oncological Hadrontherapy (CNAO). The accuracy of the systems installed at CNAO for patient positioning and setup verification has been assessed using a laser tracking device. The accuracy of calibration and of image-based setup verification relying on the in-room X-ray imaging system was also quantified. Quality assurance tests checking the integration among all patient setup systems were designed, and records of daily QA tests since the start of clinical operation (2011) are presented. The overall accuracy of the motion of the patient positioning system and of the patient verification system was proved to be below 0.5 mm under all the examined conditions, with median values below the 0.3 mm threshold. Image-based registration in phantom studies exhibited sub-millimetric accuracy in setup verification at both cranial and extra-cranial sites. The calibration residuals of the optical tracking system (OTS) were found to be consistent with expectations, with peak values below 0.3 mm. Quality assurance tests, performed daily before clinical operation, confirm adequate integration and sub-millimetric setup accuracy. Robotic patient positioning was successfully integrated with optical tracking and stereoscopic X-ray verification for patient setup in particle therapy. Sub-millimetric setup accuracy was achieved and consistently verified in daily clinical operation.
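A daily QA check of the kind described, comparing measured against nominal positions and flagging the result against a sub-millimetric tolerance, might be summarized as follows. The function, axis layout and tolerance value are illustrative assumptions, not CNAO's actual software.

```python
import statistics

def daily_qa(measured_mm, nominal_mm, tolerance_mm=0.5):
    """Residuals between measured and nominal positions (mm, one value
    per axis or landmark), flagged against a sub-millimetric tolerance."""
    residuals = [abs(m - n) for m, n in zip(measured_mm, nominal_mm)]
    return {
        "median_mm": statistics.median(residuals),
        "peak_mm": max(residuals),
        "pass": max(residuals) <= tolerance_mm,
    }

# hypothetical morning check: three axes, residuals of 0.1-0.2 mm
report = daily_qa([0.1, 100.2, 49.9], [0.0, 100.0, 50.0])
```

Logging the median and peak residuals separately, as above, matches the way the paper reports both median values (below 0.3 mm) and worst-case values (below 0.5 mm).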
Xi, Lei; Zhang, Chen; He, Yanling
2018-05-09
To evaluate the refractive and visual outcomes of transepithelial photorefractive keratectomy (TransPRK) in the treatment of low to moderate myopic astigmatism. This retrospective study enrolled a total of 47 eyes that had undergone TransPRK. Preoperative cylinder diopters ranged from -0.75 D to -2.25 D (mean -1.11 ± 0.40 D), and the sphere was between -1.50 D and -5.75 D. Visual outcomes and vector analysis of astigmatism, including the error ratio (ER), correction ratio (CR), error of magnitude (EM) and error of angle (EA), were evaluated. At 6 months after TransPRK, all eyes had an uncorrected distance visual acuity of 20/20 or better, no eyes lost ≥2 lines of corrected distance visual acuity (CDVA), and 93.6% had residual refractive cylinder within ±0.50 D of the intended correction. On vector analysis, the mean correction ratio for refractive cylinder was 1.03 ± 0.30. The mean error of magnitude was -0.04 ± 0.36 D. The mean error of angle was 0.44° ± 7.42°, and 80.9% of eyes had an axis shift within ±10°. The absolute astigmatic error of magnitude was statistically significantly correlated with the intended cylinder correction (r = 0.48, P < 0.01). TransPRK showed safe, effective and predictable results in the correction of low to moderate astigmatism and myopia.
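The vector quantities reported above (correction ratio, error of magnitude, error of angle) come from doubled-angle astigmatism analysis. A minimal sketch, assuming the Alpins-style convention of representing each cylinder as a doubled-angle Cartesian vector; sign conventions vary between papers, so this is an illustration rather than the study's exact formulas.

```python
import math

def astig_vector(cyl_d, axis_deg):
    """Represent a cylinder (diopters, axis in degrees) as a
    doubled-angle Cartesian vector."""
    a = math.radians(2.0 * axis_deg)
    return (cyl_d * math.cos(a), cyl_d * math.sin(a))

def vector_analysis(intended_cyl, intended_axis, achieved_cyl, achieved_axis):
    """Correction ratio (CR), error of magnitude (EM) and error of
    angle (EA) from intended vs. achieved astigmatic correction."""
    tia = astig_vector(intended_cyl, intended_axis)  # target induced astigmatism
    sia = astig_vector(achieved_cyl, achieved_axis)  # surgically induced astigmatism
    tia_mag = math.hypot(*tia)
    sia_mag = math.hypot(*sia)
    cr = sia_mag / tia_mag   # 1.0 means full correction
    em = sia_mag - tia_mag   # >0: overcorrection, <0: undercorrection
    # angle between the doubled-angle vectors, halved back to axis space
    ea = math.degrees(math.atan2(sia[1], sia[0]) - math.atan2(tia[1], tia[0])) / 2.0
    return cr, em, ea
```

For a perfectly achieved correction CR is 1.0 and both EM and EA are zero, which is why the study's mean CR of 1.03 and mean EM of -0.04 D indicate near-ideal cylinder correction.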
ECHO: A reference-free short-read error correction algorithm
Kao, Wei-Chun; Chan, Andrew H.; Song, Yun S.
2011-01-01
Developing accurate, scalable algorithms to improve data quality is an important computational challenge associated with recent advances in high-throughput sequencing technology. In this study, a novel error-correction algorithm, called ECHO, is introduced for correcting base-call errors in short reads without the need for a reference genome. Unlike most previous methods, ECHO does not require the user to specify parameters whose optimal values are typically unknown a priori. ECHO automatically sets the parameters in the assumed model and estimates error characteristics specific to each sequencing run, while maintaining a running time that is within the range of practical use. ECHO is based on a probabilistic model and is able to assign a quality score to each corrected base. Furthermore, it explicitly models heterozygosity in diploid genomes and provides a reference-free method for detecting bases that originated from heterozygous sites. On both real and simulated data, ECHO improves the accuracy of previous error-correction methods by severalfold to an order of magnitude, depending on the sequence coverage depth and the position in the read. The improvement is most pronounced toward the end of the read, where previous methods become noticeably less effective. Using a whole-genome yeast data set, it is demonstrated here that ECHO is capable of coping with nonuniform coverage. It is also shown that using ECHO to perform error correction as a preprocessing step considerably facilitates de novo assembly, particularly in the case of low-to-moderate sequence coverage depth. PMID:21482625
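ECHO's own algorithm is neighborhood-based, probabilistic and estimates its parameters automatically. As a much simpler illustration of reference-free correction, the toy k-mer spectrum approach below rewrites a base only when the change turns every rare k-mer covering it into a frequent ("trusted") one. This is explicitly not ECHO's method, and `k` and `min_count` are arbitrary choices of exactly the kind ECHO is designed to avoid exposing to the user.

```python
from collections import Counter

def correct_reads(reads, k=5, min_count=2):
    """Toy k-mer spectrum correction: rewrite a base only if the change
    turns every rare k-mer covering it into a trusted (frequent) one."""
    counts = Counter(r[i:i + k] for r in reads for i in range(len(r) - k + 1))
    trusted = {km for km, c in counts.items() if c >= min_count}

    def windows(seq, i):
        # all k-mers of seq that cover position i
        lo, hi = max(0, i - k + 1), min(i, len(seq) - k)
        return [seq[j:j + k] for j in range(lo, hi + 1)]

    def fix(read):
        for i, base in enumerate(read):
            if all(w in trusted for w in windows(read, i)):
                continue  # every k-mer covering this base is trusted
            for alt in "ACGT":
                if alt == base:
                    continue
                cand = read[:i] + alt + read[i + 1:]
                if all(w in trusted for w in windows(cand, i)):
                    read = cand  # accept the substitution
                    break
        return read

    return [fix(r) for r in reads]
```

Such spectrum methods degrade near read ends, where fewer k-mers cover each base; this is the regime in which the abstract reports ECHO's largest gains over previous methods.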