Effect of patient setup errors on simultaneously integrated boost head and neck IMRT treatment plans
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siebers, Jeffrey V.; Keall, Paul J.; Wu, Qiuwen
2005-10-01
Purpose: The purpose of this study is to determine dose delivery errors that could result from random and systematic setup errors for head-and-neck patients treated using the simultaneous integrated boost (SIB)-intensity-modulated radiation therapy (IMRT) technique. Methods and Materials: Twenty-four patients who participated in an intramural Phase I/II parotid-sparing IMRT dose-escalation protocol using the SIB treatment technique had their dose distributions reevaluated to assess the impact of random and systematic setup errors. The dosimetric effect of random setup error was simulated by convolving the two-dimensional fluence distribution of each beam with the random setup error probability density distribution. Random setup errors of σ = 1, 3, and 5 mm were simulated. Systematic setup errors were simulated by randomly shifting the patient isocenter along each of the three Cartesian axes, with each shift selected from a normal distribution. Systematic setup error distributions with Σ = 1.5 and 3.0 mm along each axis were simulated. Combined systematic and random setup errors were simulated for Σ = σ = 1.5 and 3.0 mm along each axis. For each dose calculation, the gross tumor volume (GTV) dose received by 98% of the volume (D98), clinical target volume (CTV) D90, nodes D90, cord D2, and parotid D50 and parotid mean dose were evaluated with respect to the plan used for treatment, both for the structure dose and for an effective planning target volume (PTV) with a 3-mm margin. Results: Simultaneous integrated boost-IMRT head-and-neck treatment plans were found to be less sensitive to random setup errors than to systematic setup errors. For random-only errors, dose errors exceeded 3% only when the random setup error σ exceeded 3 mm. Simulated systematic setup errors with Σ = 1.5 mm resulted in approximately 10% of plans having more than a 3% dose error, whereas Σ = 3.0 mm resulted in half of the plans having more than a 3% dose error and 28% having a 5% dose error. Combined random and systematic errors with Σ = σ = 3.0 mm resulted in more than 50% of plans having at least a 3% dose error and 38% of the plans having at least a 5% dose error. Evaluation with respect to a 3-mm expanded PTV reduced the observed dose deviations greater than 5% for the Σ = σ = 3.0 mm simulations to 5.4% of the plans simulated. Conclusions: Head-and-neck SIB-IMRT dosimetric accuracy would benefit from methods to reduce patient systematic setup errors. When GTV, CTV, or nodal volumes are used for dose evaluation, plans simulated including the effects of random and systematic errors deviate substantially from the nominal plan. The use of PTVs for dose evaluation in the nominal plan improves agreement with evaluated GTV, CTV, and nodal dose values under simulated setup errors. PTV concepts should be used for SIB-IMRT head-and-neck squamous cell carcinoma patients, although the size of the margins may be less than those used with three-dimensional conformal radiation therapy.
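To make the simulation approach above concrete, the following Python sketch (entirely illustrative: the 1D grid, flat beam profile, and error magnitudes are assumptions, not the study's data) blurs a fluence profile by convolving it with a Gaussian random-error probability density and applies a systematic isocenter shift drawn from a normal distribution.

```python
# Minimal 1D sketch of simulating random (blurring) and systematic (shift) setup errors.
import numpy as np

dx = 1.0                                        # grid spacing [mm]
x = np.arange(-100, 101) * dx                   # 1D position axis [mm]
fluence = ((x > -30) & (x < 30)).astype(float)  # idealized flat beam profile (assumption)

def blur_with_random_error(profile, sigma_mm, dx):
    """Convolve a fluence profile with a Gaussian setup-error PDF (random error)."""
    k = np.arange(-4 * sigma_mm, 4 * sigma_mm + dx, dx)
    kernel = np.exp(-0.5 * (k / sigma_mm) ** 2)
    kernel /= kernel.sum()                      # normalize so total fluence is preserved
    return np.convolve(profile, kernel, mode="same")

def sample_systematic_shift(Sigma_mm, rng):
    """Draw one systematic isocenter offset per treatment course from N(0, Sigma)."""
    return rng.normal(0.0, Sigma_mm)

rng = np.random.default_rng(0)
blurred = blur_with_random_error(fluence, sigma_mm=3.0, dx=dx)  # sigma = 3 mm random error
shift = sample_systematic_shift(3.0, rng)                       # Sigma = 3 mm systematic error
shifted = np.interp(x - shift, x, blurred)                      # apply the rigid shift
```

In the two-dimensional case the same convolution would be applied to each beam's full fluence map, and the dose then recomputed on the blurred, shifted fluence.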
Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.
Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
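As a rough illustration of the ANOVA-based estimation described above (a minimal sketch with synthetic balanced data; the patient and fraction counts and error magnitudes are assumptions), the variance components can be extracted from the between-patient and within-patient mean squares and contrasted with the conventional estimate that takes the SD of per-patient means directly.

```python
# One-factor random-effects variance components: Sigma (systematic) and sigma (random).
import numpy as np

rng = np.random.default_rng(1)
p, n = 20, 5                                   # patients, fractions per patient (assumed)
true_Sigma, true_sigma = 2.0, 3.0              # mm, illustrative ground truth
patient_mean = rng.normal(0.0, true_Sigma, p)
shifts = patient_mean[:, None] + rng.normal(0.0, true_sigma, (p, n))

ms_between = n * shifts.mean(axis=1).var(ddof=1)   # mean square between patients
ms_within = shifts.var(axis=1, ddof=1).mean()      # pooled mean square within patients

sigma_random = np.sqrt(ms_within)
Sigma_anova = np.sqrt(max((ms_between - ms_within) / n, 0.0))
Sigma_conventional = shifts.mean(axis=1).std(ddof=1)   # inflated by sigma^2/n, worst for small n

print(f"ANOVA Sigma = {Sigma_anova:.2f} mm, conventional Sigma = {Sigma_conventional:.2f} mm, "
      f"sigma = {sigma_random:.2f} mm")
```

The conventional estimate contains an extra sigma^2/n term, which is why it overestimates the systematic component most severely for hypofractionated (small n) schedules.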
Helical tomotherapy setup variations in canine nasal tumor patients immobilized with a bite block.
Kubicek, Lyndsay N; Seo, Songwon; Chappell, Richard J; Jeraj, Robert; Forrest, Lisa J
2012-01-01
The purpose of our study was to compare setup variation in four degrees of freedom (vertical, longitudinal, lateral, and roll) between canine nasal tumor patients immobilized with a mattress and bite block versus a mattress alone. Our secondary aim was to define a clinical target volume (CTV) to planning target volume (PTV) expansion margin based on our mean systematic error values associated with nasal tumor patients immobilized by a mattress and bite block. We evaluated six parameters for setup corrections: systematic error, random error, patient-to-patient variation in systematic errors, the magnitude of patient-specific random errors (root mean square [RMS]), distance error, and the variation of setup corrections from zero shift. The variations in all parameters were statistically smaller in the group immobilized by a mattress and bite block. The mean setup corrections in the mattress and bite block group ranged from 0.91 mm to 1.59 mm for the translational errors, and the mean roll correction was 0.5°. Although most veterinary radiation facilities do not have access to image-guided radiotherapy (IGRT), we identified a need for more rigid fixation, established the value of adding IGRT to veterinary radiation therapy, and defined the CTV-PTV setup error margin for canine nasal tumor patients immobilized in a mattress and bite block. © 2012 Veterinary Radiology & Ultrasound.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Runxiao, L; Aikun, W; Xiaomei, F
2015-06-15
Purpose: To compare two registration methods in CBCT-guided radiotherapy for cervical carcinoma, analyze the setup errors and registration methods, and determine the margin required for extending the clinical target volume (CTV) to the planning target volume (PTV). Methods: Twenty patients with cervical carcinoma were enrolled. All patients underwent CT simulation in the supine position. The CT images were transferred to the treatment planning system, where the CTV, PTV, and organs at risk (OAR) were defined, and then transmitted to the XVI workstation. CBCT scans were performed before radiotherapy and registered to the planning CT images using bone and grey-value registration methods. The two methods were compared to obtain left-right (X), superior-inferior (Y), and anterior-posterior (Z) setup errors, and the margin required for CTV to PTV was calculated. Results: Setup errors were unavoidable in postoperative cervical carcinoma irradiation. The setup errors measured by the bone method (systematic ± random) in the X (left-right), Y (superior-inferior), and Z (anterior-posterior) directions were (0.24 ± 3.62), (0.77 ± 5.05), and (0.13 ± 3.89) mm, respectively; the setup errors measured by the grey-value method (systematic ± random) in the X, Y, and Z directions were (0.31 ± 3.93), (0.85 ± 5.16), and (0.21 ± 4.12) mm, respectively. The spatial distribution of setup error was largest in the Y direction. The margins were 4 mm in the X axis, 6 mm in the Y axis, and 4 mm in the Z axis. The two registration methods gave similar results, and both are recommended. Conclusion: Both bone and grey-value registration methods can provide an accurate measure of setup error. A CTV-to-PTV margin of 4 mm, 6 mm, and 4 mm in the X, Y, and Z directions, respectively, is suggested for postoperative radiotherapy of cervical carcinoma.
NASA Astrophysics Data System (ADS)
Jung, Jae Hong; Jung, Joo-Young; Bae, Sun Hyun; Moon, Seong Kwon; Cho, Kwang Hwan
2016-10-01
The purpose of this study was to compare patient setup deviations for different image-guided protocols (weekly vs. biweekly) used in TomoDirect three-dimensional conformal radiotherapy (TD-3DCRT) for whole-breast radiation therapy (WBRT). A total of 138 megavoltage computed tomography (MVCT) image sets from 46 breast cancer cases were divided into two groups based on the imaging acquisition times: weekly or biweekly. The mean error, three-dimensional setup displacement error (3D-error), systematic error (Σ), and random error (σ) were calculated for each group. The 3D-errors were 4.29 ± 1.11 mm and 5.02 ± 1.85 mm for the weekly and biweekly groups, respectively; the biweekly error was 14.6% higher than the weekly error. The systematic errors in the roll angle and the x, y, and z directions were 0.48°, 1.72 mm, 2.18 mm, and 1.85 mm for the weekly protocol and 0.21°, 1.24 mm, 1.39 mm, and 1.85 mm for the biweekly protocol. Random errors in the roll angle and the x, y, and z directions were 25.7%, 40.6%, 40.0%, and 40.8% higher in the biweekly group than in the weekly group. For the x, y, and z directions, the proportions of treatments with displacements of less than 5 mm were 98.6%, 91.3%, and 94.2% in the weekly group and 94.2%, 89.9%, and 82.6% in the biweekly group. Moreover, the proportions of roll angles within 0–1° were 79.7% and 89.9% in the weekly and biweekly groups, respectively. Overall, the evaluation of setup deviations for the two protocols revealed no significant differences (p > 0.05). Reducing the frequency of MVCT imaging could have promising effects on imaging doses and machine times during treatment. However, the biweekly protocol was associated with increased random setup deviations during treatment. We have demonstrated a biweekly protocol of TD-3DCRT for WBRT, and we anticipate that our method may provide an alternative approach for considering the uncertainties in patient setup.
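The population statistics reported above (Σ as the SD of per-patient mean shifts, σ as the RMS of per-patient SDs, and the 3D displacement error) can be computed as in the following sketch; the shift table here is random illustrative data, not the study's measurements.

```python
# Population systematic error, random error, and 3D setup displacement from daily shifts.
import numpy as np

rng = np.random.default_rng(2)
# shifts[axis] is a (patients x fractions) array of measured setup errors [mm] (illustrative)
shifts = {ax: rng.normal(0.0, 2.0, (10, 6)) for ax in ("x", "y", "z")}

for ax, s in shifts.items():
    Sigma = s.mean(axis=1).std(ddof=1)                      # systematic error: SD of patient means
    sigma = np.sqrt((s.std(axis=1, ddof=1) ** 2).mean())    # random error: RMS of patient SDs
    print(f"{ax}: Sigma = {Sigma:.2f} mm, sigma = {sigma:.2f} mm")

d3 = np.sqrt(shifts["x"] ** 2 + shifts["y"] ** 2 + shifts["z"] ** 2)  # per-fraction 3D error
print(f"mean 3D error = {d3.mean():.2f} mm")
```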
Verhoeven, Karolien; Weltens, Caroline; Van den Heuvel, Frank
2015-01-01
Quantification of the setup errors is vital to define appropriate setup margins preventing geographical misses. The no-action-level (NAL) correction protocol reduces the systematic setup errors and, hence, the setup margins. The manual entry of the setup corrections in the record-and-verify software, however, increases the susceptibility of the NAL protocol to human errors. Moreover, the impact of skin mobility on the anteroposterior patient setup reproducibility in whole-breast radiotherapy (WBRT) is unknown. In this study, we therefore investigated the potential of fixed vertical couch position-based patient setup in WBRT. The possibility of introducing a threshold for correction of the systematic setup errors was also explored. We measured the anteroposterior, mediolateral, and superior-inferior setup errors during fractions 1-12 and weekly thereafter with tangential angled single-modality paired imaging. These setup data were used to simulate the residual setup errors of the NAL protocol, the fixed vertical couch position protocol, and the fixed-action-level protocol with different correction thresholds. Population statistics of the setup errors of 20 breast cancer patients and 20 breast cancer patients with additional regional lymph node (LN) irradiation were calculated to determine the setup margins of each off-line correction protocol. Our data showed the potential of the fixed vertical couch position protocol to restrict the systematic and random anteroposterior residual setup errors to 1.8 mm and 2.2 mm, respectively. Compared to the NAL protocol, a correction threshold of 2.5 mm reduced the frequency of mediolateral and superior-inferior setup corrections by 40% and 63%, respectively. The implementation of the correction threshold did not deteriorate the accuracy of the off-line setup correction compared to the NAL protocol. The combination of the fixed vertical couch position protocol, for correction of the anteroposterior setup error, and the fixed-action-level protocol with a 2.5 mm correction threshold, for correction of the mediolateral and superior-inferior setup errors, proved to provide adequate and comparable patient setup accuracy in WBRT and in WBRT with additional LN irradiation. PACS numbers: 87.53.Kn, 87.57.-s
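A minimal simulation of the off-line protocols compared above might look like the following sketch; the fraction counts, error magnitudes, and the choice of three initial fractions are assumptions for illustration only.

```python
# Residual setup errors after an off-line NAL or fixed-action-level correction protocol.
import numpy as np

def residual_errors(daily, k=3, threshold=None):
    """Return residual setup errors along one axis after an off-line correction.

    daily: (n_patients, n_fractions) measured errors [mm]
    k: number of initial fractions used to estimate the systematic error
    threshold: action level in mm (None reproduces the plain NAL protocol)
    """
    correction = daily[:, :k].mean(axis=1, keepdims=True)
    if threshold is not None:
        correction = np.where(np.abs(correction) >= threshold, correction, 0.0)
    residual = daily.copy()
    residual[:, k:] -= correction            # first k fractions remain uncorrected
    return residual

rng = np.random.default_rng(3)
daily = rng.normal(0.0, 2.0, (20, 25)) + rng.normal(0.0, 2.5, (20, 1))  # random + systematic
for thr in (None, 2.5):
    res = residual_errors(daily, k=3, threshold=thr)
    Sigma = res.mean(axis=1).std(ddof=1)
    sigma = np.sqrt((res.std(axis=1, ddof=1) ** 2).mean())
    print(f"threshold={thr}: Sigma={Sigma:.2f} mm, sigma={sigma:.2f} mm")
```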
Govindarajan, R; Llueguera, E; Melero, A; Molero, J; Soler, N; Rueda, C; Paradinas, C
2010-01-01
Statistical Process Control (SPC) was applied to monitor patient set-up in radiotherapy and, when the measured set-up error values indicated a loss of process stability, its root cause was identified and eliminated to prevent set-up errors. Set-up errors were measured for the medial-lateral (ml), cranial-caudal (cc), and anterior-posterior (ap) dimensions, and the upper control limits were then calculated. Once the control limits were known and the range variability was acceptable, treatment set-up errors were monitored using subgroups of 3 patients, three times each shift. These values were plotted on a control chart in real time. Control limit values showed that the existing variation was acceptable. Set-up errors, measured and plotted on an X chart, helped monitor the set-up process stability and, if and when stability was lost, treatment was interrupted, the particular cause responsible for the non-random pattern was identified, and corrective action was taken before proceeding with the treatment. The SPC protocol focuses on controlling the variability due to assignable causes instead of focusing on patient-to-patient variability, which normally does not exist. Compared with weekly sampling of set-up error in each and every patient, which may only ensure that just those sampled sessions were set up correctly, the SPC method enables set-up error prevention in all treatment sessions for all patients and, at the same time, reduces the control costs. Copyright © 2009 SECA. Published by Elsevier Espana. All rights reserved.
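The control-chart logic described above can be sketched as follows, assuming a subgroup size of 3 and simple 3-sigma limits derived from a normal approximation (the baseline data and tolerance choices are illustrative, not the clinic's values).

```python
# Shewhart X-bar chart for subgroup means of patient set-up errors.
import numpy as np

def xbar_limits(baseline, n=3):
    """Centre line and 3-sigma control limits for subgroup means of size n."""
    centre = baseline.mean()
    half_width = 3.0 * baseline.std(ddof=1) / np.sqrt(n)
    return centre, centre - half_width, centre + half_width

rng = np.random.default_rng(4)
baseline = rng.normal(0.0, 2.0, 60)          # baseline set-up errors along one axis [mm]
centre, lcl, ucl = xbar_limits(baseline, n=3)

subgroup = np.array([1.8, 2.4, 3.1])         # three patients measured in one shift [mm]
mean = subgroup.mean()
if not lcl <= mean <= ucl:
    print(f"Out of control: subgroup mean {mean:.2f} mm outside [{lcl:.2f}, {ucl:.2f}] mm")
else:
    print(f"In control: subgroup mean {mean:.2f} mm within [{lcl:.2f}, {ucl:.2f}] mm")
```

An out-of-control signal would trigger the root-cause search and correction described in the abstract rather than a per-patient adjustment.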
Baron, Charles A.; Awan, Musaddiq J.; Mohamed, Abdallah S. R.; Akel, Imad; Rosenthal, David I.; Gunn, G. Brandon; Garden, Adam S.; Dyer, Brandon A.; Court, Laurence; Sevak, Parag R; Kocak-Uzel, Esengul; Fuller, Clifton D.
2016-01-01
The larynx may alternatively serve as a target or organ at risk (OAR) in head and neck cancer (HNC) image-guided radiotherapy (IGRT). The objective of this study was to estimate the IGRT parameters required for larynx positional error independent of isocentric alignment and to suggest population-based compensatory margins. Ten HNC patients receiving radiotherapy (RT) with daily CT-on-rails imaging were assessed. Seven landmark points were placed on each daily scan. Taking the most superior-anterior point of the C5 vertebra as a reference isocenter for each scan, residual displacement vectors to the other six points were calculated post-isocentric alignment. Subsequently, using the first scan as a reference, the magnitude of vector differences for all six points for all scans over the course of treatment was calculated. Residual systematic and random errors, and the necessary compensatory CTV-to-PTV and OAR-to-PRV margins, were calculated using both observational cohort data and a bootstrap-resampled population estimator. The grand mean displacement for all anatomical points was 5.07 mm, with a mean systematic error of 1.1 mm and a mean random setup error of 2.63 mm, while the bootstrapped POI grand mean displacement was 5.09 mm, with a mean systematic error of 1.23 mm and a mean random setup error of 2.61 mm. The required margin for CTV-to-PTV expansion was 4.6 mm for all cohort points, while the bootstrap estimate of the equivalent margin was 4.9 mm. The calculated OAR-to-PRV expansion for the observed residual setup error was 2.7 mm, with a bootstrap-estimated expansion of 2.9 mm. We conclude that interfractional larynx setup error is a significant source of RT setup/delivery error in HNC, whether the larynx is considered a CTV or an OAR. We estimate the need for a uniform expansion of 5 mm to compensate for setup error if the larynx is a target, or 3 mm if the larynx is an OAR, when using a non-laryngeal bony isocenter. PMID:25679151
DOE Office of Scientific and Technical Information (OSTI.GOV)
Velec, Michael; Waldron, John N.; O'Sullivan, Brian
2010-03-01
Purpose: To prospectively compare setup error in standard thermoplastic masks and skin-sparing masks (SSMs) modified with low neck cutouts for head-and-neck intensity-modulated radiation therapy (IMRT) patients. Methods and Materials: Twenty head-and-neck IMRT patients were randomized to be treated in a standard mask (SM) or SSM. Cone-beam computed tomography (CBCT) scans, acquired daily after both initial setup and any repositioning, were used for initial and residual interfraction evaluation, respectively. Weekly post-IMRT CBCT scans were acquired for intrafraction setup evaluation. The population random (σ) and systematic (Σ) errors were compared for SMs and SSMs. Skin toxicity was recorded weekly using Radiation Therapy Oncology Group criteria. Results: We evaluated 762 CBCT scans in 11 patients randomized to the SM and 9 to the SSM. Initial interfraction σ was ≤1.6 mm and ≤1.1° for the SM and ≤2.0 mm and 0.8° for the SSM. Initial interfraction Σ was ≤1.0 mm and ≤1.4° for the SM and ≤1.1 mm and ≤0.9° for the SSM. These errors were reduced before IMRT with CBCT image guidance, with no significant differences in residual interfraction or intrafraction uncertainties between SMs and SSMs. Intrafraction σ and Σ were less than 1 mm and less than 1° for both masks. Less severe skin reactions were observed in the cutout regions of the SSM compared with non-cutout regions. Conclusions: Interfraction and intrafraction setup error is not significantly different for SSMs and conventional masks in head-and-neck radiation therapy. Mask cutouts should be considered for these patients in an effort to reduce skin toxicity.
Batumalai, Vikneswary; Phan, Penny; Choong, Callie; Holloway, Lois; Delaney, Geoff P
2016-12-01
To compare the differences in setup errors measured with electronic portal image (EPI) and cone-beam computed tomography (CBCT) in patients undergoing tangential breast radiotherapy (RT). The relationship between setup errors, body mass index (BMI), and breast size was also assessed. Twenty-five patients undergoing postoperative RT to the breast were consented for this study. Weekly CBCT scans were acquired and retrospectively registered to the planning CT in three dimensions, first using bony anatomy for bony registration (CBCT-B) and again using the breast tissue outline for soft tissue registration (CBCT-S). Digitally reconstructed radiographs (DRR) generated from CBCT to simulate EPI were compared to the planning DRR using bony anatomy in the V (parallel to the cranio-caudal axis) and U (perpendicular to V) planes. The systematic (Σ) and random (σ) errors were calculated and correlated with BMI and breast size. The systematic and random errors for EPI (Σ V = 3.7 mm, Σ U = 2.8 mm and σ V = 2.9 mm, σ U = 2.5 mm) and CBCT-B (Σ V = 3.5 mm, Σ U = 3.4 mm and σ V = 2.8 mm, σ U = 2.8 mm) were of similar magnitude in the V and U planes. Similarly, the differences in setup errors for CBCT-B and CBCT-S in three dimensions were less than 1 mm. Only the CBCT-S setup error correlated with BMI and breast size. CBCT and EPI showed insignificant variation in their ability to detect setup error. These findings suggest no significant differences that would make one modality superior to the other, and EPI should remain the standard of care for most patients. However, there is a correlation between breast size, BMI, and setup error as detected by CBCT-S, justifying the use of CBCT-S for larger patients. © 2016 The Authors. Journal of Medical Radiation Sciences published by John Wiley & Sons Australia, Ltd on behalf of Australian Society of Medical Imaging and Radiation Therapy and New Zealand Institute of Medical Radiation Technology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alderliesten, Tanja; Sonke, Jan-Jakob; Betgen, Anja
2013-02-01
Purpose: To investigate the applicability of 3-dimensional (3D) surface imaging for image guidance in deep-inspiration breath-hold radiation therapy (DIBH-RT) for patients with left-sided breast cancer. For this purpose, setup data based on captured 3D surfaces were compared with setup data based on cone beam computed tomography (CBCT). Methods and Materials: Twenty patients treated with DIBH-RT after breast-conserving surgery (BCS) were included. Before the start of treatment, each patient underwent a breath-hold CT scan for planning purposes. During treatment, dose delivery was preceded by setup verification using CBCT of the left breast. 3D surfaces were captured by a surface imaging system concurrently with the CBCT scan. Retrospectively, surface registrations were performed for CBCT to CT and for a captured 3D surface to CT. The resulting setup errors were compared with linear regression analysis. For the differences between setup errors, the group mean, systematic error, random error, and 95% limits of agreement were calculated. Furthermore, receiver operating characteristic (ROC) analysis was performed. Results: Good correlation between setup errors was found: R² = 0.70, 0.90, and 0.82 in the left-right, craniocaudal, and anterior-posterior directions, respectively. Systematic errors were ≤0.17 cm in all directions. Random errors were ≤0.15 cm. The limits of agreement were -0.34 to 0.48, -0.42 to 0.39, and -0.52 to 0.23 cm in the left-right, craniocaudal, and anterior-posterior directions, respectively. ROC analysis showed that a threshold between 0.4 and 0.8 cm corresponds to promising true positive rates (0.78-0.95) and false positive rates (0.12-0.28). Conclusions: The results support the application of 3D surface imaging for image guidance in DIBH-RT after BCS.
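The agreement metrics used above (regression R², group mean, systematic and random error of the differences, and 95% limits of agreement) can be reproduced on paired setup data roughly as follows; the synthetic data and offsets here are assumptions for illustration.

```python
# Agreement analysis between two setup-verification modalities (e.g. CBCT vs. 3D surface).
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(5)
n_patients, n_fractions = 20, 15
cbct = rng.normal(0.0, 0.3, (n_patients, n_fractions))               # cm, illustrative
surface = cbct + rng.normal(0.05, 0.15, (n_patients, n_fractions))   # correlated second modality

fit = linregress(cbct.ravel(), surface.ravel())
print(f"R^2 = {fit.rvalue ** 2:.2f}")

diff = surface - cbct
group_mean = diff.mean()
systematic = diff.mean(axis=1).std(ddof=1)                 # SD of per-patient mean differences
random_err = np.sqrt((diff.std(axis=1, ddof=1) ** 2).mean())
loa = (group_mean - 1.96 * diff.std(ddof=1), group_mean + 1.96 * diff.std(ddof=1))
print(f"mean = {group_mean:.2f} cm, Sigma = {systematic:.2f} cm, "
      f"sigma = {random_err:.2f} cm, 95% LoA = ({loa[0]:.2f}, {loa[1]:.2f}) cm")
```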
Mock, U; Dieckmann, K; Wolff, U; Knocke, T H; Pötter, R
1999-08-01
Geometrical accuracy in patient positioning can vary substantially during external radiotherapy. This study estimated the set-up accuracy during pelvic irradiation for gynecological malignancies for the determination of safety margins (planning target volume, PTV). Based on electronic portal imaging devices (EPID), 25 patients undergoing 4-field pelvic irradiation for gynecological malignancies were analyzed with regard to set-up accuracy during the treatment course. Regularly acquired EPID images were used to assess the systematic and random components of set-up displacements. Anatomical matching of verification and simulation images was followed by measuring the corresponding distances between the central axis and anatomical features. Data analysis of set-up errors referred to the x-, y-, and z-axes. Additionally, cumulative frequencies were evaluated. A total of 50 simulation films and 313 verification images were analyzed. For the anterior-posterior (AP) beam direction, mean deviations along the x- and z-axes were 1.5 mm and -1.9 mm, respectively. Moreover, random errors of 4.8 mm (x-axis) and 3.0 mm (z-axis) were determined. Concerning the latero-lateral treatment fields, the systematic errors along the two axes were calculated to be 2.9 mm (y-axis) and -2.0 mm (z-axis), and random errors of 3.8 mm and 3.5 mm were found, respectively. The cumulative frequency of misalignments ≤5 mm was 75% (AP fields) and 72% (latero-lateral fields). With regard to cumulative frequencies ≤10 mm, quantification revealed values of 97% for both beam directions. During external pelvic irradiation for gynecological malignancies, EPID images acquired on a regular basis revealed acceptable set-up inaccuracies. Safety margins (PTV) of 1 cm appear to be sufficient, accounting for more than 95% of all deviations.
A review of setup error in supine breast radiotherapy using cone-beam computed tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batumalai, Vikneswary, E-mail: Vikneswary.batumalai@sswahs.nsw.gov.au; Liverpool and Macarthur Cancer Therapy Centres, New South Wales; Ingham Institute of Applied Medical Research, Sydney, New South Wales
2016-10-01
Measurement of setup error in breast radiotherapy (RT) with 3-dimensional cone-beam computed tomography (CBCT) is becoming more common. The purpose of this study is to review the literature relating to the magnitude of setup error in breast RT measured with CBCT. The different methods of image registration between CBCT and the planning computed tomography (CT) scan were also explored. A literature search, not limited by date, was conducted using Medline and Google Scholar with the following key words: breast cancer, RT, setup error, and CBCT. This review includes studies that reported on systematic and random errors and the methods used when registering CBCT scans with the planning CT scan. A total of 11 relevant studies were identified for inclusion in this review. The average magnitude of error is generally less than 5 mm across the studies reviewed. The common registration methods used when registering CBCT scans with the planning CT scan are based on bony anatomy, soft tissue, and surgical clips. No clear relationship between the setup errors detected and the method of registration was observed in this review. Further studies are needed to assess the benefit of CBCT over electronic portal imaging, as CBCT remains unproven to be of wide benefit in breast RT.
Boughalia, A; Marcie, S; Fellah, M; Chami, S; Mekki, F
2015-06-01
The aim of this study is to assess and quantify patients' set-up errors using an electronic portal imaging device and to evaluate their dosimetric and biological impact in terms of generalized equivalent uniform dose (gEUD) on predictive models, such as the tumour control probability (TCP) and the normal tissue complication probability (NTCP). 20 patients treated for nasopharyngeal cancer were enrolled in the radiotherapy-oncology department of HCA. Systematic and random errors were quantified. The dosimetric and biological impact of these set-up errors on the target volume and organ at risk (OAR) coverage was assessed using calculation of the dose-volume histogram, gEUD, TCP and NTCP. For this purpose, in-house software was developed and used. The standard deviations (1 SD) of the systematic and random set-up errors were calculated for the lateral and subclavicular fields and gave the following results: ∑ = 0.63 ± (0.42) mm and σ = 3.75 ± (0.79) mm, respectively. Thus a planning organ at risk volume (PRV) margin of 3 mm was defined around the OARs, and a 5-mm margin was used around the clinical target volume. The gEUD, TCP and NTCP calculations obtained with and without set-up errors showed increased values for the tumour, where ΔgEUD (tumour) = 1.94% Gy (p = 0.00721) and ΔTCP = 2.03%. The toxicity of the OARs was quantified using gEUD and NTCP. The values of ΔgEUD (OARs) vary from 0.78% to 5.95% in the case of the brainstem and the optic chiasm, respectively. The corresponding ΔNTCP varies from 0.15% to 0.53%, respectively. The quantification of set-up errors has a dosimetric and biological impact on the tumour and on the OARs. The in-house software developed using the concepts of the gEUD, TCP and NTCP biological models was successfully used in this study. It can also be used to optimize the treatment plans established for our patients. The gEUD, TCP and NTCP may be more suitable tools to assess treatment plans before treating patients.
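For reference, the gEUD used above reduces to a power-mean over the differential DVH; the following sketch implements that formula with hypothetical dose bins and volume-effect parameters (the values of a and the DVH are assumptions, not taken from the study).

```python
# gEUD = (sum_i v_i * D_i^a)^(1/a), with v_i the fractional volume receiving dose D_i.
import numpy as np

def geud(dose_bins, rel_volumes, a):
    """gEUD from a differential DVH; a > 1 for serial OARs, a < 0 for tumours."""
    v = np.asarray(rel_volumes, dtype=float)
    v = v / v.sum()                               # normalize to fractional volumes
    return (v * np.asarray(dose_bins, dtype=float) ** a).sum() ** (1.0 / a)

dose = np.array([60.0, 65.0, 70.0, 72.0])         # Gy, hypothetical DVH bins
vol = np.array([0.05, 0.15, 0.60, 0.20])          # fractional volume per bin
print(f"tumour gEUD (a = -10): {geud(dose, vol, a=-10):.1f} Gy")
print(f"OAR gEUD    (a =  12): {geud(dose, vol, a=12):.1f} Gy")
```

Recomputing gEUD on the DVHs with and without the simulated set-up errors gives the ΔgEUD values that feed the TCP/NTCP comparison described above.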
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, JY; Hong, DL
Purpose: The purpose of this study is to investigate patient set-up error and interfraction target coverage in cervical cancer using image-guided adaptive radiotherapy (IGART) with cone-beam computed tomography (CBCT). Methods: Twenty cervical cancer patients undergoing intensity-modulated radiotherapy (IMRT) were randomly selected. All patients were matched to the isocenter using lasers with skin markers. Three-dimensional CBCT projections were acquired with the Varian TrueBeam treatment system. Set-up errors were evaluated by radiation oncologists after CBCT correction. The clinical target volume (CTV) was delineated on each CBCT, and the planning target volume (PTV) coverage of each CBCT-CTV was analyzed. Results: A total of 152 CBCT scans were acquired from the twenty cervical cancer patients; the mean set-up errors in the longitudinal, vertical, and lateral directions were 3.57, 2.74, and 2.5 mm, respectively, without CBCT corrections. After corrections, these decreased to 1.83, 1.44, and 0.97 mm. For target coverage, CBCT-CTV coverage without CBCT correction was 94% (143/152), and 98% (149/152) with correction. Conclusion: The use of CBCT verification to measure patient setup errors can be applied to improve treatment accuracy. In addition, the set-up error corrections significantly improve the CTV coverage for cervical cancer patients.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keeling, V; Jin, H; Hossain, S
2014-06-15
Purpose: To evaluate setup accuracy and quantify individual systematic and random errors for the various hardware and software components of the frameless 6D BrainLAB ExacTrac system. Methods: 35 patients with cranial lesions, some with multiple isocenters (50 total lesions treated in 1, 3, or 5 fractions), were investigated. All patients were simulated with a rigid head-and-neck mask and the BrainLAB localizer. CT images were transferred to the iPlan treatment planning system, where optimized plans were generated using a stereotactic reference frame based on the localizer. The patients were set up initially with the infrared (IR) positioning ExacTrac system. Stereoscopic X-ray images (XC: X-ray correction) were registered to their corresponding digitally reconstructed radiographs, based on bony anatomy matching, to calculate 6D translational and rotational (lateral, longitudinal, vertical, pitch, roll, yaw) shifts. XC combines the systematic errors of the mask, localizer, image registration, frame, and IR. If shifts were below tolerance (0.7 mm translational and 1 degree rotational), treatment was initiated; otherwise corrections were applied and additional X-rays were acquired to verify patient position (XV: X-ray verification). Statistical analysis was used to extract systematic and random errors of the different components of the 6D ExacTrac system and to evaluate the cumulative setup accuracy. Results: Mask systematic errors (translational; rotational) were the largest and varied from one patient to another in the range (-15 to 4 mm; -2.5 to 2.5 degrees), obtained from the mean of XC for each patient. Setup uncertainty in IR positioning (0.97, 2.47, 1.62 mm; 0.65, 0.84, 0.96 degrees) was extracted from the standard deviation of XC. The combined systematic error of the frame and localizer (0.32, -0.42, -1.21 mm; -0.27, 0.34, 0.26 degrees) was extracted from the mean of means of the XC distributions. Final patient setup uncertainty was obtained from the standard deviations of XV (0.57, 0.77, 0.67 mm; 0.39, 0.35, 0.30 degrees). Conclusion: Statistical analysis was used to calculate cumulative and individual systematic errors from the different hardware and software components of the 6D ExacTrac system. Patients were treated with cumulative errors of <1 mm and <1 degree with XV image guidance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takahashi, Y; National Cancer Center, Kashiwa, Chiba; Tachibana, H
Purpose: Total body irradiation (TBI) and total marrow irradiation (TMI) using TomoTherapy have been reported. A gantry-based linear accelerator uses one isocenter during one rotational irradiation; thus, 3-5 isocenter points are needed for a whole-body VMAT-TBI plan, with the junctional dose distribution smoothed between them. IGRT provides accurate and precise patient setup at the multiple junctions; however, some setup errors will inevitably occur and affect the accuracy of the dose distribution in these areas. In this study, we evaluated the robustness of VMAT-TBI against patient setup error. Methods: VMAT-TBI planning was performed on an adult whole-body human phantom using Eclipse. Eight full arcs with four isocenter points using 6 MV X-rays were used to cover the entire body. The dose distribution was optimized using two structures, the patient's body as PTV and the lung. Two arcs shared one isocenter, and two arcs were overlapped by 5 cm with the other two arcs. Point absolute dose measurements using an ionization chamber and planar relative dose distribution measurements using film were performed in the junctional regions using a water-equivalent slab phantom. In the measurements, several setup errors (+5 to -5 mm) were introduced. Results: The chamber measurements show that the deviations were within ±3% when the setup errors were within ±3 mm. In the planar evaluation, the pass ratio of the gamma evaluation (3%/2 mm) was more than 90% if the errors were within ±3 mm. However, there were hot/cold areas at the edge of the junction even with an acceptable gamma pass ratio. A 5 mm setup error caused larger hot and cold areas, and the dosimetrically acceptable areas were decreased in the overlapped regions. Conclusion: VMAT-TBI can be clinically acceptable when the patient setup error is within ±3 mm. Averaging effects from random patient errors would help to blur the hot/cold areas in the junction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albert, J; Labarbe, R; Sterpin, E
2016-06-15
Purpose: To understand the extent to which prompt gamma camera measurements can be used to predict the residual proton range due to setup errors and errors in the calibration curve. Methods: We generated ten variations on a default calibration curve (CC) and ten corresponding range maps (RM). Starting with the default RM, we chose a square array of N beamlets, which were then rotated by a random angle θ and shifted by a random vector s. We added a 5% distal Gaussian noise to each beamlet in order to introduce the discrepancies that exist between the ranges predicted from the prompt gamma measurements and those simulated with Monte Carlo algorithms. For each RM, s and θ, along with an offset u in the CC, were optimized using a simple Euclidean distance between the default ranges and the ranges produced by the given RM. Results: The application of our method led to a maximal overrange of 2.0 mm and underrange of 0.6 mm on average. Compared to the situations where s, θ, and u were ignored, these values were larger: 2.1 mm and 4.3 mm. In order to quantify the need for setup error corrections, we also performed computations in which u was corrected for, but s and θ were not. This yielded 3.2 mm and 3.2 mm. The average computation time for 170 beamlets was 65 seconds. Conclusion: These results emphasize the necessity of correcting for setup errors and errors in the calibration curve. The simplicity and speed of our method make it a good candidate for implementation as a tool for in-room adaptive therapy. This work also demonstrates that prompt gamma range measurements can indeed be useful in the effort to reduce range errors. Given these results, and barring further refinements, this approach is a promising step towards adaptive proton radiotherapy.
Taylor, C; Parker, J; Stratford, J; Warren, M
2018-05-01
Although all systematic and random positional setup errors can be corrected in their entirety during on-line image-guided radiotherapy, the use of a specified action level, below which no correction occurs, is also an option. The following service evaluation aimed to investigate the use of a 3 mm action level for on-line image assessment and correction (online, systematic set-up error and weekly evaluation) for lower extremity sarcoma, and to understand the impact on imaging frequency and patient positioning error within one cancer centre. All patients were immobilised using a thermoplastic shell attached to a plastic base and an individually moulded footrest. A retrospective analysis of 30 patients was performed. Patient setup and correctional data derived from cone beam CT analysis were retrieved. The timing, frequency and magnitude of corrections were evaluated. The population systematic and random errors were derived. 20% of patients had no systematic corrections over the duration of treatment, and 47% had one. The maximum number of systematic corrections per course of radiotherapy was 4, which occurred for 2 patients. 34% of correction episodes occurred within the first 5 fractions. All patients had at least one observed translational error during their treatment greater than 0.3 cm, and 80% of patients had at least one observed translational error greater than 0.5 cm. The population systematic error was 0.14 cm, 0.10 cm and 0.14 cm, and the random error was 0.27 cm, 0.22 cm and 0.23 cm, in the lateral, caudocranial and anteroposterior directions, respectively. The required Planning Target Volume margin for the study population was 0.55 cm, 0.41 cm and 0.50 cm in the lateral, caudocranial and anteroposterior directions. The 3 mm action level for image assessment and correction prior to delivery reduced the imaging burden and focussed intervention on patients that exhibited greater positional variability. This strategy could be an efficient deployment of departmental resources if full daily correction of positional setup error is not possible. Copyright © 2017. Published by Elsevier Ltd.
Landeg, Steven J; Kirby, Anna M; Lee, Steven F; Bartlett, Freddie; Titmarsh, Kumud; Donovan, Ellen; Griffin, Clare L; Gothard, Lone; Locke, Imogen; McNair, Helen A
2016-12-01
The purpose of this UK study was to evaluate interfraction reproducibility and body image score when using ultraviolet (UV) tattoos (not visible in ambient lighting) for external references during breast/chest wall radiotherapy and compare with conventional dark ink. In this non-blinded, single-centre, parallel group, randomized control trial, patients were allocated to receive either conventional dark ink or UV ink tattoos using computer-generated random blocks. Participant assignment was not masked. Systematic (∑) and random (σ) setup errors were determined using electronic portal images. Body image questionnaires were completed at pre-treatment, 1 month and 6 months to determine the impact of tattoo type on body image. The primary end point was to determine whether the UV tattoo random error (σsetup) was no less accurate than with conventional dark ink tattoos, i.e. <2.8 mm. 46 patients were randomized to receive conventional dark or UV ink tattoos. 45 patients completed treatment (UV: n = 23, dark: n = 22). σsetup for the UV tattoo group was <2.8 mm in the u and v directions (p = 0.001 and p = 0.009, respectively). A larger proportion of patients reported improvement in body image score in the UV tattoo group compared with the dark ink group at 1 month [56% (13/23) vs 14% (3/22), respectively] and 6 months [52% (11/21) vs 38% (8/21), respectively]. UV tattoos were associated with interfraction setup reproducibility comparable with conventional dark ink. Patients reported a more favourable change in body image score up to 6 months following treatment. Advances in knowledge: This study is the first to evaluate UV tattoo external references in a randomized control trial.
Analysis of Prostate Patient Setup and Tracking Data: Potential Intervention Strategies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Su, Zhong, E-mail: zsu@floridaproton.org; Zhang, Lisha; Murphy, Martin
Purpose: To evaluate the setup, interfraction, and intrafraction organ motion error distributions and to simulate intrafraction intervention strategies for prostate radiotherapy. Methods and Materials: A total of 17 patients underwent treatment setup and were monitored using the Calypso system during radiotherapy. On average, prostate tracking measurements were performed for 8 min/fraction over 28 fractions for each patient. For both the patient couch shift data and the intrafraction organ motion data, the systematic and random errors were obtained from the patient population. The planning target volume margins were calculated using the van Herk formula. Two intervention strategies were simulated using the tracking data: a deviation threshold and a periodic correction. The related planning target volume margins, time costs, and prostate position 'fluctuation' are presented. Results: The required treatment margin for the left-right, superoinferior, and anteroposterior axes was 8.4, 10.8, and 14.7 mm for skin mark-only setup and 1.3, 2.3, and 2.8 mm using the on-line setup correction, respectively. Prostate motion correlated significantly between the superoinferior and anteroposterior directions. Of the 17 patients, 14 had prostate motion within 5 mm of the initial setup position for ≥91.6% of the total tracking time. The treatment margin decreased to 1.1, 1.8, and 2.3 mm with a 3-mm threshold correction and to 0.5, 1.0, and 1.5 mm with an every-2-min correction in the left-right, superoinferior, and anteroposterior directions, respectively. The periodic corrections significantly increased the treatment time and increased the number of instances in which a setup correction was made during transient excursions. Conclusions: The residual systematic and random error due to intrafraction prostate motion is small after on-line setup correction. Threshold-based and time-based intervention strategies both reduced the planning target volume margins. The time-based strategies increased the treatment time and the intrafraction position fluctuation.
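The two intervention strategies compared above can be simulated on a tracking trace roughly as in the following sketch; the random-walk motion model, sampling interval, and correction parameters are assumptions chosen only to illustrate the threshold-based and time-based logic.

```python
# Residual intrafraction displacement under threshold-based vs. periodic couch corrections.
import numpy as np

def apply_interventions(trace, dt_s, threshold_mm=None, period_s=None):
    """Return the residual displacement trace after simulated couch corrections."""
    residual = np.empty_like(trace)
    offset = 0.0                               # cumulative correction applied so far
    for i, pos in enumerate(trace):
        if period_s is not None and i > 0 and (i * dt_s) % period_s == 0:
            offset = pos                       # periodic (time-based) correction
        elif threshold_mm is not None and abs(pos - offset) > threshold_mm:
            offset = pos                       # threshold-based correction
        residual[i] = pos - offset
    return residual

rng = np.random.default_rng(6)
dt = 1.0                                       # sampling interval [s]
trace = np.cumsum(rng.normal(0.0, 0.05, 480))  # 8 min random-walk drift along one axis [mm]

for name, kwargs in (("threshold 3 mm", {"threshold_mm": 3.0}),
                     ("every 2 min", {"period_s": 120.0})):
    res = apply_interventions(trace, dt, **kwargs)
    print(f"{name}: residual SD = {res.std():.2f} mm, max = {np.abs(res).max():.2f} mm")
```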
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laaksomaa, Marko, E-mail: marko.laaksomaa@pshp.fi; Kapanen, Mika; Department of Medical Physics, Tampere University Hospital
We evaluated adequate setup margins for the radiotherapy (RT) of pelvic tumors based on the overall position errors of bony landmarks. We also estimated the difference in setup accuracy between male and female patients. Finally, we compared patient rotation for 2 immobilization devices. The study cohort included 64 consecutive male and 64 consecutive female patients. Altogether, 1794 orthogonal setup images were analyzed. Observer-related deviation in image matching and the effect of patient rotation were explicitly determined. Overall systematic and random errors were calculated in 3 orthogonal directions. Anisotropic setup margins were evaluated based on residual errors after weekly image guidance. The van Herk formula was used to calculate the margins. Overall, 100 patients were immobilized with an in-house device. Patient rotation was compared against 28 patients immobilized with CIVCO's Kneefix and Feetfix. We found that the usually applied isotropic setup margin of 8 mm covered all the uncertainties related to patient setup for most RT treatments of the pelvis. However, margins of up to 10.3 mm were needed for female patients with very large pelvic target volumes centered either in the symphysis or in the sacrum and containing both of these structures. This was because the effect of rotation (p ≤ 0.02) and the observer variation in image matching (p ≤ 0.04) were significantly larger for the female patients than for the male patients. Even with daily image guidance, the required margins remained larger for the women. Patient rotations were largest about the lateral axes. The difference between the required margins was only 1 mm for the 2 immobilization devices. The largest component of the overall systematic position error came from patient rotation. This emphasizes the need for rotation correction. Overall, larger position errors and setup margins were observed for female patients with pelvic cancer than for male patients.
Set-up uncertainties: online correction with X-ray volume imaging.
Kataria, Tejinder; Abhishek, Ashu; Chadha, Pranav; Nandigam, Janardhan
2011-01-01
To determine interfractional three-dimensional set-up errors using X-ray volumetric imaging (XVI). Between December 2007 and August 2009, 125 patients underwent image-guided radiotherapy using online XVI. After matching of the reference and acquired volume-view images, set-up errors in the three translational directions were recorded and corrected online before treatment each day. Mean displacements and population systematic (Σ) and random (σ) errors were calculated and analyzed using SPSS (v16) software. The optimum clinical target volume (CTV) to planning target volume (PTV) margin was calculated using Van Herk's formula (2.5Σ + 0.7σ) and Stroom's formula (2Σ + 0.7σ). Patients were grouped in 4 cohorts, namely brain, head and neck, thorax, and abdomen-pelvis. The mean vector displacements recorded were 0.18 cm, 0.15 cm, 0.36 cm, and 0.35 cm for brain, head and neck, thorax, and abdomen-pelvis, respectively. Analysis of individual mean set-up errors revealed good agreement with the proposed 0.3 cm isotropic margins for brain and 0.5 cm isotropic margins for head and neck. Similarly, the 0.5 cm circumferential and 1 cm craniocaudal proposed margins were in agreement with the thorax and abdomen-pelvis cases. The calculated mean displacements were well within the CTV-PTV margin estimates of Van Herk (90% population coverage to a minimum of 95% of the prescribed dose) and Stroom (99% target volume coverage by 95% of the prescribed dose). Employing these individualized margins in a particular cohort ensures target coverage comparable to that described in the literature, which is further improved if XVI-aided set-up error detection and correction is used before treatment.
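The two margin recipes quoted above translate directly into code; the per-axis Σ and σ values in the example below are hypothetical and only illustrate how the formulas are applied.

```python
# CTV-to-PTV margin recipes of Van Herk (2.5*Sigma + 0.7*sigma) and Stroom (2*Sigma + 0.7*sigma).
def van_herk_margin(Sigma_mm, sigma_mm):
    """Margin covering 90% of patients with at least 95% of the prescribed dose."""
    return 2.5 * Sigma_mm + 0.7 * sigma_mm

def stroom_margin(Sigma_mm, sigma_mm):
    """Margin giving 99% of the target volume at least 95% of the prescribed dose."""
    return 2.0 * Sigma_mm + 0.7 * sigma_mm

# Hypothetical per-axis systematic and random errors (mm)
for axis, Sigma, sigma in (("LR", 1.5, 2.0), ("SI", 2.0, 2.5), ("AP", 1.8, 2.2)):
    print(f"{axis}: van Herk = {van_herk_margin(Sigma, sigma):.1f} mm, "
          f"Stroom = {stroom_margin(Sigma, sigma):.1f} mm")
```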
Managing numerical errors in random sequential adsorption
NASA Astrophysics Data System (ADS)
Cieśla, Michał; Nowak, Aleksandra
2016-09-01
The aim of this study is to examine the influence of a finite surface size and a finite simulation time on the packing fraction estimated using random sequential adsorption simulations. Of particular interest is providing hints on simulation setup to achieve a desired level of accuracy. The analysis is based on the properties of saturated random packings of disks on continuous and flat surfaces of different sizes.
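A minimal random sequential adsorption simulation in the spirit of the study above might look like the following sketch (brute-force overlap checking, periodic boundaries, and the surface sizes and attempt counts are arbitrary choices); running it for different surface sizes and attempt numbers shows how both limit the accuracy of the estimated packing fraction.

```python
# Random sequential adsorption of equal disks on a square surface with periodic boundaries.
import numpy as np

def rsa_packing_fraction(side, radius, attempts, seed=0):
    """Estimate the RSA packing fraction of disks of given radius on a side x side surface."""
    rng = np.random.default_rng(seed)
    centers = np.empty((0, 2))
    min_d2 = (2.0 * radius) ** 2
    for _ in range(attempts):
        p = rng.uniform(0.0, side, 2)
        if centers.size:
            d = np.abs(centers - p)
            d = np.minimum(d, side - d)            # periodic minimum-image distance per axis
            if np.min((d ** 2).sum(axis=1)) < min_d2:
                continue                           # overlap: reject the trial disk
        centers = np.vstack([centers, p])
    return len(centers) * np.pi * radius ** 2 / side ** 2

for side in (20.0, 50.0):
    theta = rsa_packing_fraction(side, radius=1.0, attempts=20000, seed=7)
    print(f"L = {side:>4}: packing fraction ~ {theta:.3f}")
```

With too few attempts the packing is not yet saturated, and with too small a surface the boundary and finite-size fluctuations dominate, which is exactly the trade-off the study quantifies.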
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, H; Wang, W; Hu, W
2014-06-01
Purpose: To quantify setup errors from pretreatment kilovoltage cone-beam computed tomography (KV-CBCT) scans for middle or distal esophageal carcinoma patients. Methods: Fifty-two consecutive middle or distal esophageal carcinoma patients who underwent IMRT were included in this study. A planning CT scan using a big-bore CT simulator was performed in the treatment position and used as the reference scan for image registration with CBCT. CBCT scans (On-Board Imaging v1.5 system, Varian Medical Systems) were acquired daily during the first treatment week. A total of 260 CBCT scans (nine CBCTs per patient) was assessed, with a registration clip box defined around the PTV-thorax in the reference scan, based on bony anatomy using Offline Review software v10.0 (Varian Medical Systems). The anterior-posterior (AP), left-right (LR), and superior-inferior (SI) corrections were recorded. The systematic and random errors were calculated. The CTV-to-PTV margins for each CBCT frequency were based on the van Herk formula (2.5Σ + 0.7σ). Results: The SD of the systematic error (Σ) was 2.0 mm, 2.3 mm, and 3.8 mm in the AP, LR, and SI directions, respectively. The average random error (σ) was 1.6 mm, 2.4 mm, and 4.1 mm in the AP, LR, and SI directions, respectively. The CTV-to-PTV safety margin was 6.1 mm, 7.5 mm, and 12.3 mm in the AP, LR, and SI directions based on the van Herk formula. Conclusion: Our data recommend margins of 6 mm, 8 mm, and 12 mm for esophageal carcinoma patient setup in the AP, LR, and SI directions, respectively.
Defining robustness protocols: a method to include and evaluate robustness in clinical plans
NASA Astrophysics Data System (ADS)
McGowan, S. E.; Albertini, F.; Thomas, S. J.; Lomax, A. J.
2015-04-01
We aim to define a site-specific robustness protocol to be used during the clinical plan evaluation process. The plan robustness of 16 skull base IMPT plans to systematic range and random set-up errors has been retrospectively and systematically analysed. This was determined by calculating the error-bar dose distribution (ebDD) for all the plans and by defining metrics used to build protocols aiding the plan assessment. Additionally, an example of how to clinically use the defined robustness database is given, whereby a plan with sub-optimal brainstem robustness was identified. The advantage of using different beam arrangements to improve the plan robustness was analysed. Using the ebDD it was found that range errors had a smaller effect on the dose distribution than the corresponding set-up error in a single fraction, and that organs at risk were most robust to the range errors, whereas the target was more robust to set-up errors. A database was created to aid planners in terms of plan robustness aims in these volumes. This resulted in the definition of site-specific robustness protocols. The use of robustness constraints allowed for the identification of a specific patient that may have benefited from a treatment of greater individuality. A new beam arrangement was shown to be preferable when balancing conformality and robustness for this case. The ebDD and error-bar volume histogram proved effective in analysing plan robustness. The process of retrospective analysis could be used to establish site-specific robustness planning protocols in proton therapy. These protocols allow the planner to identify plans that, although delivering a dosimetrically adequate dose distribution, have sub-optimal robustness to these uncertainties. For these cases, the use of different beam start conditions may improve the plan robustness to set-up and range uncertainties.
Wang, He; Wang, Congjun; Tung, Samuel; Dimmitt, Andrew Wilson; Wong, Pei Fong; Edson, Mark A.; Garden, Adam S.; Rosenthal, David I.; Fuller, Clifton D.; Gunn, Gary B.; Takiar, Vinita; Wang, Xin A.; Luo, Dershan; Yang, James N.; Wong, Jennifer
2016-01-01
The purpose of this study was to investigate the setup and positioning uncertainty of a custom cushion/mask/bite-block (CMB) immobilization system and to determine the PTV margin for image-guided head and neck stereotactic ablative radiotherapy (HN-SABR). We analyzed 105 treatment sessions among 21 patients treated with HN-SABR for recurrent head and neck cancers using a custom CMB immobilization system. Initial patient setup was performed using the ExacTrac infrared (IR) tracking system, and initial setup errors were based on comparison of the ExacTrac IR tracking system to corrected online ExacTrac X-ray images registered to the treatment plans. Residual setup errors were determined using repeat verification X-rays. The online ExacTrac corrections were compared to cone-beam CT (CBCT) before treatment to assess agreement. Intrafractional positioning errors were determined using pre-beam X-rays. The systematic and random errors were analyzed. The initial translational setup errors were -0.8 ± 1.3 mm, -0.8 ± 1.6 mm, and 0.3 ± 1.9 mm in the AP, CC, and LR directions, respectively, with a three-dimensional (3D) vector of 2.7 ± 1.4 mm. The initial rotational errors were up to 2.4° if a 6D couch is not available. CBCT agreed with the ExacTrac X-ray images to within 2 mm and 2.5°. The intrafractional uncertainties were 0.1 ± 0.6 mm, 0.1 ± 0.6 mm, and 0.2 ± 0.5 mm in the AP, CC, and LR directions, respectively, and 0.0° ± 0.5°, 0.0° ± 0.6°, and -0.1° ± 0.4° in the yaw, roll, and pitch directions, respectively. The translational vector was 0.9 ± 0.6 mm. The calculated PTV margins mPTV(90,95) were within 1.6 mm when using image guidance for online setup correction. The use of image guidance for online setup correction, in combination with our customized CMB device, highly restricted target motion during treatments and provided robust immobilization to ensure a minimum dose of 95% to the target volume with a 2.0 mm PTV margin for HN-SABR. PACS number(s): 87.55.ne PMID:27167275
Schill, Matthew R.; Varela, J. Esteban; Frisella, Margaret M.; Brunt, L. Michael
2015-01-01
Background We compared performance of validated laparoscopic tasks on four commercially available single site access (SSA) devices (AD) versus an independent port (IP) SSA set-up. Methods A prospective, randomized comparison of laparoscopic skills performance on four AD (GelPOINT™, SILS™ Port, SSL Access System™, TriPort™) and one IP SSA set-up was conducted. Eighteen medical students (2nd–4th year), four surgical residents, and five attending surgeons were trained to proficiency in multi-port laparoscopy using four laparoscopic drills (peg transfer, bean drop, pattern cutting, extracorporeal suturing) in a laparoscopic trainer box. Drills were then performed in random order on each IP-SSA and AD-SSA set-up using straight laparoscopic instruments. Repetitions were timed and errors recorded. Data are mean ± SD, and statistical analysis was by two-way ANOVA with Tukey HSD post-hoc tests. Results Attending surgeons had significantly faster total task times than residents or students (p<0.001), but the difference between residents and students was NS. Pair-wise comparisons revealed significantly faster total task times for the IP-SSA set-up compared to all four AD-SSAs within the student group only (p<0.05). Total task times for residents and attending surgeons showed a similar profile, but the differences were NS. When data for the three groups were combined, the total task time was less for the IP-SSA set-up than for each of the four AD-SSA set-ups (p<0.001). Similarly, the IP-SSA set-up was significantly faster than 3 of 4 AD-SSA set-ups for peg transfer, 3 of 4 for pattern cutting, and 2 of 4 for suturing. No significant differences in error rates between IP-SSA and AD-SSA set-ups were detected. Conclusions When compared to an IP-SSA laparoscopic set-up, single site access devices are associated with longer task performance times in a trainer box model, independent of level of training. Task performance was similar across different SSA devices. PMID:21993938
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, S; Oh, S; Yea, J
Purpose: This study evaluated the setup uncertainties for brain sites when using BrainLAB's ExacTrac X-ray 6D system for daily pretreatment imaging, in order to determine the optimal planning target volume (PTV) margin. Methods: Between August 2012 and April 2015, 28 patients with brain tumors were treated by daily image-guided radiotherapy using the BrainLAB ExacTrac 6D image guidance system of the Novalis-Tx linear accelerator. DUON™ masks (Orfit Industries, Wijnegem, Belgium) were used to fix the head. The radiotherapy was fractionated into 27–33 treatments. In total, 844 image verifications were performed for the 28 patients and used for the analysis. The setup corrections, along with the systematic and random errors, were analyzed for six degrees of freedom in the translational (lateral, longitudinal, and vertical) and rotational (pitch, roll, and yaw) dimensions. Results: Optimal PTV margins were calculated based on van Herk et al.'s [margin recipe = 2.5Σ + 0.7σ − 3 mm] and Stroom et al.'s [margin recipe = 2Σ + 0.7σ] formulas. The systematic errors (Σ) were 0.72, 1.57, and 0.97 mm in the lateral, longitudinal, and vertical translational dimensions, respectively, and 0.72°, 0.87°, and 0.83° in the pitch, roll, and yaw rotational dimensions, respectively. The random errors (σ) were 0.31, 0.46, and 0.54 mm in the lateral, longitudinal, and vertical translational dimensions, respectively, and 0.28°, 0.24°, and 0.31° in the pitch, roll, and yaw rotational dimensions, respectively. According to van Herk et al.'s and Stroom et al.'s recipes, the recommended lateral PTV margins were 0.97 and 1.66 mm, respectively; the longitudinal margins were 1.26 and 3.47 mm, respectively; and the vertical margins were 0.21 and 2.31 mm, respectively. Conclusion: Daily setup verification using the BrainLAB ExacTrac 6D image guidance system is very useful for evaluating the setup uncertainties and determining the setup margin.
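As a minimal sketch, the per-axis translational margins above can be recomputed from the reported Σ and σ using the two recipes exactly as quoted in the abstract; the input values are taken from the abstract and small differences from the published margins may come from rounding of those inputs.

```python
# Minimal sketch: per-axis PTV margins from systematic (Sigma) and random (sigma)
# setup errors, using the two recipes quoted in the abstract. The values below
# are the reported translational errors (mm), used here for illustration only.

systematic = {"lateral": 0.72, "longitudinal": 1.57, "vertical": 0.97}  # Sigma, mm
random_err = {"lateral": 0.31, "longitudinal": 0.46, "vertical": 0.54}  # sigma, mm

def margin_van_herk(Sigma, sigma):
    """van Herk-type recipe as quoted: 2.5*Sigma + 0.7*sigma - 3 mm."""
    return 2.5 * Sigma + 0.7 * sigma - 3.0

def margin_stroom(Sigma, sigma):
    """Stroom-type recipe as quoted: 2*Sigma + 0.7*sigma."""
    return 2.0 * Sigma + 0.7 * sigma

for axis in systematic:
    m_vh = margin_van_herk(systematic[axis], random_err[axis])
    m_st = margin_stroom(systematic[axis], random_err[axis])
    # For very small errors the quoted van Herk variant can go negative;
    # clip at zero to obtain a usable margin.
    print(f"{axis:12s}  van Herk: {max(m_vh, 0):4.2f} mm   Stroom: {m_st:4.2f} mm")
```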
Balter, Peter; Morice, Rodolfo C.; Choi, Bum; Kudchadker, Rajat J.; Bucci, Kara; Chang, Joe Y.; Dong, Lei; Tucker, Susan; Vedam, Sastry; Briere, Tina; Starkschall, George
2008-01-01
This study aimed to validate and implement a methodology in which fiducials implanted in the periphery of lung tumors can be used to reduce uncertainties in tumor location. Alignment software that matches marker positions on two‐dimensional (2D) kilovoltage portal images to positions on three‐dimensional (3D) computed tomography data sets was validated using static and moving phantoms. This software also was used to reduce uncertainties in tumor location in a patient with fiducials implanted in the periphery of a lung tumor. Alignment of fiducial locations in orthogonal projection images with corresponding fiducial locations in 3D data sets can position both static and moving phantoms with an accuracy of 1 mm. In a patient, alignment based on fiducial locations reduced systematic errors in the left–right direction by 3 mm and random errors by 2 mm, and random errors in the superior–inferior direction by 3 mm as measured by anterior–posterior cine images. Software that matches fiducial markers on 2D and 3D images is effective for aligning both static and moving fiducials before treatment and can be implemented to reduce patient setup uncertainties. PACS number: 81.40.Wx
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Lilie L., E-mail: lin@uphs.upenn.edu; Hertan, Lauren; Rengan, Ramesh
2012-06-01
Purpose: To determine the impact of body mass index (BMI) on daily setup variations and the frequency of imaging necessary for patients with endometrial cancer treated with adjuvant intensity-modulated radiotherapy (IMRT) with daily image guidance. Methods and Materials: The daily shifts from a total of 782 orthogonal kilovoltage images from 30 patients who received pelvic IMRT between July 2008 and August 2010 were analyzed. The BMI, mean daily shifts, and random and systematic errors in each translational and rotational direction were calculated for each patient. Margin recipes were generated based on BMI. Linear regression and Spearman rank correlation analysis were performed. To simulate a less-than-daily image-guidance (IGRT) protocol, the average shift of the first five fractions was applied to subsequent setups without IGRT to assess the impact on setup error and margin requirements. Results: Median BMI was 32.9 (range, 23-62). Of the 30 patients, 16.7% (n = 5) were normal weight (BMI <25); 23.3% (n = 7) were overweight (BMI ≥25 to <30); 26.7% (n = 8) were mildly obese (BMI ≥30 to <35); and 33.3% (n = 10) were moderately to severely obese (BMI ≥35). On linear regression, mean absolute vertical, longitudinal, and lateral shifts positively correlated with BMI (p = 0.0127, p = 0.0037, and p < 0.0001, respectively). Systematic errors in the longitudinal and vertical directions were positively correlated with BMI category (p < 0.0001 for both). IGRT for the first five fractions, followed by correction of the mean error for all subsequent fractions, led to a substantial reduction in setup error and resultant margin requirement compared with no IGRT. Conclusions: Daily shifts, systematic errors, and margin requirements were greatest in obese patients. For women who are normal weight or overweight, a planning target volume margin of 7 to 10 mm may be sufficient without IGRT, but for patients who are moderately or severely obese, this is insufficient.
Sturgeon, Jared D; Cox, John A; Mayo, Lauren L; Gunn, G Brandon; Zhang, Lifei; Balter, Peter A; Dong, Lei; Awan, Musaddiq; Kocak-Uzel, Esengul; Mohamed, Abdallah Sherif Radwan; Rosenthal, David I; Fuller, Clifton David
2015-10-01
Digitally reconstructed radiographs (DRRs) are routinely used as an a priori reference for setup correction in radiotherapy. The spatial resolution of DRRs may be improved to reduce setup error in fractionated radiotherapy treatment protocols. The influence of finer CT slice thickness reconstruction (STR) and resultant increased-resolution DRRs on physician setup accuracy was prospectively evaluated. Four head and neck patient CT-simulation images were acquired and used to create DRR cohorts by varying STRs at 0.5, 1, 2, 2.5, and 3 mm. DRRs were displaced relative to a fixed isocenter using 0-5 mm random shifts in the three cardinal axes. Physician observers reviewed DRRs of varying STRs and displacements and then aligned reference and test DRRs, replicating the daily kV imaging workflow. A total of 1,064 images were reviewed by four blinded physicians. Observer errors were analyzed using nonparametric statistics (Friedman's test) to determine whether STR cohorts had detectably different displacement profiles. Post hoc bootstrap resampling was applied to evaluate potential generalizability. The observer-based trial revealed a statistically significant difference between cohort means for the observer displacement vector error and for one of the translational axes. Bootstrap analysis suggests a 15% gain in isocenter translational setup error with reduction of STR from 3 mm to ≤2 mm, though interobserver variance was a larger feature than STR-associated measurement variance. Higher resolution DRRs generated using finer CT scan STR resulted in improved observer performance at shift detection and could decrease operator-dependent geometric error. Ideally, CT STRs of ≤2 mm should be utilized for DRR generation in the head and neck.
Bertholet, Jenny; Worm, Esben; Høyer, Morten; Poulsen, Per
2017-06-01
Accurate patient positioning is crucial in stereotactic body radiation therapy (SBRT) due to a high dose regimen. Cone-beam computed tomography (CBCT) is often used for patient positioning based on radio-opaque markers. We compared six CBCT-based set-up strategies with or without rotational correction. Twenty-nine patients with three implanted markers received 3-6 fraction liver SBRT. The markers were delineated on the mid-ventilation phase of a 4D-planning-CT. One pretreatment CBCT was acquired per fraction. Set-up strategy 1 used only translational correction based on manual marker match between the CBCT and planning CT. Set-up strategy 2 used automatic 6 degrees-of-freedom registration of the vertebrae closest to the target. The 3D marker trajectories were also extracted from the projections and the mean position of each marker was calculated and used for set-up strategies 3-6. Translational correction only was used for strategy 3. Translational and rotational corrections were used for strategies 4-6 with the rotation being either vertebrae based (strategy 4), or marker based and constrained to ±3° (strategy 5) or unconstrained (strategy 6). The resulting set-up error was calculated as the 3D root-mean-square set-up error of the three markers. The set-up error of the spinal cord was calculated for all strategies. The bony anatomy set-up (2) had the largest set-up error (5.8 mm). The marker-based set-up with unconstrained rotations (6) had the smallest set-up error (0.8 mm) but the largest spinal cord set-up error (12.1 mm). The marker-based set-up with translational correction only (3) or with bony anatomy rotational correction (4) had equivalent set-up error (1.3 mm) but rotational correction reduced the spinal cord set-up error from 4.1 mm to 3.5 mm. Marker-based set-up was substantially better than bony-anatomy set-up. Rotational correction may improve the set-up, but further investigations are required to determine the optimal correction strategy.
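The per-fraction quantity used above to compare the six strategies, the 3D root-mean-square set-up error over the three markers, is straightforward to compute; the sketch below uses hypothetical marker coordinates (in mm) purely for illustration.

```python
# Minimal sketch (assumed data layout): for one fraction, compute the 3D
# root-mean-square set-up error over three implanted markers, given their
# planned positions and their positions after a candidate couch correction.
import numpy as np

planned = np.array([[12.1,  4.3, -8.0],    # marker positions from planning CT (mm, hypothetical)
                    [15.6, -2.1, -5.4],
                    [ 9.8,  1.7, -1.2]])
observed = np.array([[13.0,  4.9, -7.1],   # mean marker positions from CBCT projections (mm, hypothetical)
                     [16.2, -1.5, -4.8],
                     [10.5,  2.4, -0.6]])

residual = observed - planned                          # per-marker 3D residual after set-up
rms_3d = np.sqrt(np.mean(np.sum(residual**2, axis=1)))
print(f"3D RMS set-up error over the three markers: {rms_3d:.2f} mm")
```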
Reduction of prostate intrafraction motion using gas-release rectal balloons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Su Zhong; Zhao Tianyu; Li Zuofeng
2012-10-15
Purpose: To analyze prostate intrafraction motion using both non-gas-release (NGR) and gas-release (GR) rectal balloons and to evaluate the ability of GR rectal balloons to reduce prostate intrafraction motion. Methods: Twenty-nine patients with NGR rectal balloons and 29 patients with GR balloons were randomly selected from prostate patients treated with proton therapy at University of Florida Proton Therapy Institute (Jacksonville, FL). Their pretreatment and post-treatment orthogonal radiographs were analyzed, and both pretreatment setup residual error and intrafraction-motion data were obtained. Population histograms of intrafraction motion were plotted for both types of balloons. Population planning target-volume (PTV) margins were calculated with the van Herk formula of 2.5Σ + 0.7σ to account for setup residual errors and intrafraction motion errors. Results: Pretreatment and post-treatment radiographs indicated that the use of gas-release rectal balloons reduced prostate intrafraction motion along the superior-inferior (SI) and anterior-posterior (AP) directions. Similar patient setup residual errors were exhibited for both types of balloons. Gas-release rectal balloons resulted in PTV margin reductions from 3.9 to 2.8 mm in the SI direction, 3.1 to 1.8 mm in the AP direction, and an increase from 1.9 to 2.1 mm in the left-right direction. Conclusions: Prostate intrafraction motion is an important uncertainty source in radiotherapy after image-guided patient setup with online corrections. Compared to non-gas-release rectal balloons, gas-release balloons can reduce prostate intrafraction motion in the SI and AP directions caused by gas buildup.
Warren, Samantha; Partridge, Mike; Bolsi, Alessandra; Lomax, Anthony J.; Hurt, Chris; Crosby, Thomas; Hawkins, Maria A.
2016-01-01
Purpose Planning studies to compare x-ray and proton techniques and to select the most suitable technique for each patient have been hampered by the nonequivalence of several aspects of treatment planning and delivery. A fair comparison should compare similarly advanced delivery techniques from current clinical practice and also assess the robustness of each technique. The present study therefore compared volumetric modulated arc therapy (VMAT) and single-field optimization (SFO) spot scanning proton therapy plans created using a simultaneous integrated boost (SIB) for dose escalation in midesophageal cancer and analyzed the effect of setup and range uncertainties on these plans. Methods and Materials For 21 patients, SIB plans with a physical dose prescription of 2 Gy or 2.5 Gy/fraction in 25 fractions to planning target volume (PTV)50Gy or PTV62.5Gy (primary tumor with 0.5 cm margins) were created and evaluated for robustness to random setup errors and proton range errors. Dose–volume metrics were compared for the optimal and uncertainty plans, with P<.05 (Wilcoxon) considered significant. Results SFO reduced the mean lung dose by 51.4% (range 35.1%-76.1%) and the mean heart dose by 40.9% (range 15.0%-57.4%) compared with VMAT. Proton plan robustness to a 3.5% range error was acceptable. For all patients, the clinical target volume D98 was 95.0% to 100.4% of the prescribed dose and gross tumor volume (GTV) D98 was 98.8% to 101%. Setup error robustness was patient anatomy dependent, and the potential minimum dose per fraction was always lower with SFO than with VMAT. The clinical target volume D98 was lower by 0.6% to 7.8% of the prescribed dose, and the GTV D98 was lower by 0.3% to 2.2% of the prescribed GTV dose. Conclusions The SFO plans achieved significant sparing of normal tissue compared with the VMAT plans for midesophageal cancer. The target dose coverage in the SIB proton plans was less robust to random setup errors and might be unacceptable for certain patients. Robust optimization to ensure adequate target coverage of SIB proton plans might be beneficial. PMID:27084641
Evaluation of a head-repositioner and Z-plate system for improved accuracy of dose delivery.
Charney, Sarah C; Lutz, Wendell R; Klein, Mary K; Jones, Pamela D
2009-01-01
Radiation therapy requires accurate dose delivery to targets often identifiable only on computed tomography (CT) images. Translation between the isocenter localized on CT and laser setup for radiation treatment, and interfractional head repositioning are frequent sources of positioning error. The objective was to design a simple, accurate apparatus to eliminate these sources of error. System accuracy was confirmed with phantom and in vivo measurements. A head repositioner that fixates the maxilla via dental mold with fiducial marker Z-plates attached was fabricated to facilitate the connection between the isocenter on CT and laser treatment setup. A phantom study targeting steel balls randomly located within the head repositioner was performed. The center of each ball was marked on a transverse CT slice on which six points of the Z-plate were also visible. Based on the relative position of the six Z-plate points and the ball center, the laser setup position on each Z-plate and a top plate was calculated. Based on these setup marks, orthogonal port films, directed toward each target, were evaluated for accuracy without regard to visual setup. A similar procedure was followed to confirm accuracy of in vivo treatment setups in four dogs using implanted gold seeds. Sequential port films of three dogs were made to confirm interfractional accuracy. Phantom and in vivo measurements confirmed accuracy of 2 mm between isocenter on CT and the center of the treatment dose distribution. Port films confirmed similar accuracy for interfractional treatments. The system reliably connects CT target localization to accurate initial and interfractional radiation treatment setup.
SU-E-J-15: A Patient-Centered Scheme to Mitigate Impacts of Treatment Setup Error
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, L; Southern Medical University, Guangzhou; Tian, Z
2014-06-01
Purpose: Current intensity modulated radiation therapy (IMRT) is plan-centered: at each treatment fraction, the patient is positioned to match the setup in the treatment plan. Inaccurate setup can compromise the delivered dose distribution and hence lead to suboptimal treatments. Moreover, the current setup approach via couch shift under image guidance can correct translational errors, while rotational and deformation errors are hard to address. To overcome these problems, we propose in this abstract a patient-centered scheme to mitigate the impacts of treatment setup errors. Methods: In the patient-centered scheme, we first position the patient on the couch approximately matching the planned setup. Our Supercomputing Online Replanning Environment (SCORE) is then employed to design an optimal treatment plan based on the daily patient geometry. It hence mitigates the impacts of treatment setup error and reduces the requirements on setup accuracy. We conducted simulation studies in 10 head-and-neck (HN) patients to investigate the feasibility of this scheme. Rotational and deformation setup errors were simulated. Specifically, rotations of 1, 3, 5, and 7 degrees were applied in the pitch, roll, and yaw directions; deformation errors were simulated by splitting neck movements into four basic types: rotation, lateral bending, flexion, and extension. Setup variation ranges were based on values observed in previous studies. Dosimetric impacts of our scheme were evaluated on PTVs and OARs in comparison with the original plan dose on the original geometry and the original plan recalculated on the new setup geometries. Results: With the conventional plan-centered approach, setup error could lead to a significant PTV D99 decrease (−0.25∼+32.42%) and contralateral-parotid Dmean increase (−35.09∼+42.90%). The patient-centered approach was effective in mitigating such impacts to 0∼+0.20% and −0.03∼+5.01%, respectively. Computation time is <128 s. Conclusion: A patient-centered scheme is proposed to mitigate setup error impacts using replanning. Its superiority in terms of dosimetric impacts and feasibility has been shown through simulation studies on HN cases.
Prasad, Devleena; Das, Pinaki; Saha, Niladri S; Chatterjee, Sanjoy; Achari, Rimpa; Mallick, Indranil
2014-01-01
The aim of this study was to determine whether a less resource-intensive and established offline correction protocol, the No Action Level (NAL) protocol, was as effective as daily online correction of setup deviations in curative high-dose radiotherapy of prostate cancer. A total of 683 daily megavoltage CT (MVCT) or kilovoltage cone-beam CT (kV CBCT) images of 30 patients with localized prostate cancer treated with intensity modulated radiotherapy were evaluated. Daily image guidance was performed and setup errors in the three translational axes were recorded. The NAL protocol was simulated by using the mean shift calculated from the first five fractions and implementing it on all subsequent treatments. Using the imaging data from the remaining fractions, the daily residual error (RE) was determined. The proportion of fractions in which the RE was greater than 3, 5, and 7 mm was calculated, as well as the actual PTV margin that would be required if the offline protocol were followed. Using the NAL protocol reduced the systematic but not the random errors. Corrections made using the NAL protocol resulted in small and acceptable RE in the mediolateral (ML) and superoinferior (SI) directions, with 46/533 (8.1%) and 48/533 (5%) residual shifts above 5 mm. However, residual errors greater than 5 mm in the anteroposterior (AP) direction remained in 181/533 (34%) of fractions. The PTV margins calculated based on residual errors were 5 mm, 5 mm, and 13 mm in the ML, SI, and AP directions, respectively. Offline correction using the NAL protocol resulted in unacceptably high residual errors in the AP direction, due to random uncertainties of rectal and bladder filling. Daily online imaging and corrections remain the standard image guidance policy for highly conformal radiotherapy of prostate cancer.
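The NAL simulation described above lends itself to a simple per-patient computation; the sketch below assumes a hypothetical array of daily translational shifts (fractions × 3 axes, in mm) and a van Herk-style margin on the residuals, since the abstract does not state which margin recipe was used.

```python
# Minimal sketch of a No Action Level (NAL) simulation for one patient.
# `daily_shifts` is a hypothetical (n_fractions x 3) array of measured
# translational setup errors in mm (ML, SI, AP); real data would come from
# daily MVCT/CBCT registrations.
import numpy as np

rng = np.random.default_rng(0)
daily_shifts = rng.normal(loc=[1.0, -0.5, 3.0], scale=[2.0, 2.0, 4.0], size=(30, 3))

n_nal = 5                                      # fractions used to estimate the systematic shift
correction = daily_shifts[:n_nal].mean(axis=0)
residual = daily_shifts[n_nal:] - correction   # residual error after applying the NAL correction

for thr in (3.0, 5.0, 7.0):
    frac_above = (np.abs(residual) > thr).mean(axis=0)
    print(f"fraction of residuals > {thr:.0f} mm (ML, SI, AP):", np.round(frac_above, 2))

# A van Herk-style 2.5*Sigma + 0.7*sigma margin is assumed here, treating this single
# patient's residual mean/SD as Sigma/sigma purely for illustration.
Sigma_res = np.abs(residual.mean(axis=0))
sigma_res = residual.std(axis=0, ddof=1)
print("illustrative residual-based margin (mm):", np.round(2.5 * Sigma_res + 0.7 * sigma_res, 1))
```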
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harding, R., E-mail: ruth.harding2@wales.nhs.uk; Trnková, P.; Lomax, A. J.
Purpose: Base of skull meningioma can be treated with both intensity modulated radiation therapy (IMRT) and spot scanned proton therapy (PT). One of the main benefits of PT is better sparing of organs at risk, but due to the physical and dosimetric characteristics of protons, spot scanned PT can be more sensitive to the uncertainties encountered in the treatment process compared with photon treatment. Therefore, robustness analysis should be part of a comprehensive comparison between these two treatment methods in order to quantify and understand the sensitivity of the treatment techniques to uncertainties. The aim of this work was to benchmark a spot scanning treatment planning system for planning of base of skull meningioma and to compare the created plans and analyze their robustness to setup errors against the IMRT technique. Methods: Plans were produced for three base of skull meningioma cases: IMRT planned with a commercial TPS [Monaco (Elekta AB, Sweden)]; single field uniform dose (SFUD) spot scanning PT produced with an in-house TPS (PSI-plan); and SFUD spot scanning PT plan created with a commercial TPS [XiO (Elekta AB, Sweden)]. A tool for evaluating robustness to random setup errors was created and, for each plan, both a dosimetric evaluation and a robustness analysis to setup errors were performed. Results: It was possible to create clinically acceptable treatment plans for spot scanning proton therapy of meningioma with a commercially available TPS. However, since each treatment planning system uses different methods, this comparison showed different dosimetric results as well as different sensitivities to setup uncertainties. The results confirmed the necessity of an analysis tool for assessing plan robustness to provide a fair comparison of photon and proton plans. Conclusions: Robustness analysis is a critical part of plan evaluation when comparing IMRT plans with spot scanned proton therapy plans.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shang, K; Wang, J; Liu, D
2014-06-01
Purpose: Image-guided radiation therapy (IGRT) is one of the major treatment modalities for esophageal cancer. Gray value registration and bone registration are two kinds of image registration; the purpose of this work was to compare which one is more suitable for esophageal cancer patients. Methods: Twenty-three esophageal cancer patients were treated on an Elekta Synergy; CBCT images were acquired and automatically registered to the planning kilovoltage CT scans using either gray value or bone registration. The setup errors were measured along the X, Y, and Z axes, respectively. The two sets of setup errors were compared using a paired t-test. Results: Four hundred and five groups of CBCT images were available, and the systematic and random setup errors (cm) in the X, Y, and Z directions were 0.35, 0.63, 0.29 and 0.31, 0.53, 0.21 with gray value registration, and 0.37, 0.64, 0.26 and 0.32, 0.55, 0.20 with bone registration, respectively. Comparing bone registration with gray value registration, the setup errors in the X and Z axes showed significant differences: in the Y axis, the t value was 0.256 (p > 0.05); in the X axis, the t value was 5.287 (p < 0.05); in the Z axis, the t value was −5.138 (p < 0.05). Conclusion: Gray value registration is recommended in image-guided radiotherapy for esophageal cancer and other thoracic tumors; manual registration could be applied when necessary. Bone registration is more suitable for head and pelvic tumors, where the anatomy consists of rigid, interconnected, immobile bony tissue.
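A paired comparison of the two registration methods can be reproduced in a few lines; the sketch below assumes hypothetical per-fraction setup errors for each method along one axis and uses scipy's paired t-test, mirroring the comparison described above.

```python
# Minimal sketch: paired t-test comparing per-fraction setup errors obtained
# with gray value registration vs. bone registration along one axis.
# The arrays are hypothetical; in the study each entry would be the shift
# measured for the same CBCT with the two registration methods.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
gray_value = rng.normal(0.35, 0.31, size=405)          # cm, X-axis shifts (illustrative)
bone = gray_value + rng.normal(0.02, 0.05, size=405)   # correlated with a small offset

t_stat, p_value = stats.ttest_rel(gray_value, bone)
print(f"paired t = {t_stat:.3f}, p = {p_value:.3g}")
```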
On the predictivity of pore-scale simulations: Estimating uncertainties with multilevel Monte Carlo
NASA Astrophysics Data System (ADS)
Icardi, Matteo; Boccardo, Gianluca; Tempone, Raúl
2016-09-01
A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can be, in fact, hindered by many factors including sample heterogeneity, computational and imaging limitations, model inadequacy and not perfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not totally reproducible by another "equivalent" sample and setup). The stochastic nature can arise due to the multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can reduce drastically the computational cost needed for computing accurate statistics of effective parameters and other quantities of interest, under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error, and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A totally automatic workflow is developed in an open-source code [1] that includes rigid body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers, extrapolation and post-processing techniques. The proposed method can be efficiently used in many porous media applications for problems such as stochastic homogenization/upscaling, propagation of uncertainty from microscopic fluid and rock properties to macro-scale parameters, and robust estimation of Representative Elementary Volume size for arbitrary physics.
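As a minimal sketch of the multilevel Monte Carlo idea referenced above (not the authors' code), the estimator below combines many cheap coarse samples with progressively fewer fine corrections; `simulate(level, seed)` is a hypothetical stand-in for a pore-scale solve at a given discretization level.

```python
# Minimal multilevel Monte Carlo (MLMC) sketch for estimating the mean of a
# quantity of interest (e.g., an effective permeability). `simulate(level, seed)`
# is a hypothetical placeholder for a pore-scale solve at discretization `level`;
# higher levels are finer (more expensive, smaller bias).
import numpy as np

def simulate(level, seed):
    rng = np.random.default_rng(seed)
    z = rng.standard_normal()
    # Toy model: true mean 1.0, discretization bias ~ 2**-level, level-correlated noise.
    return 1.0 + 2.0 ** (-level) * (1.0 + 0.5 * z)

def mlmc_mean(samples_per_level):
    """Telescoping estimator: E[P_L] = E[P_0] + sum_l E[P_l - P_(l-1)]."""
    estimate = 0.0
    for level, n in enumerate(samples_per_level):
        diffs = []
        for i in range(n):
            seed = 10_000 * level + i            # same seed couples fine and coarse solves
            fine = simulate(level, seed)
            coarse = simulate(level - 1, seed) if level > 0 else 0.0
            diffs.append(fine - coarse)
        estimate += float(np.mean(diffs))
    return estimate

# Many cheap coarse samples, progressively fewer fine ones.
print("MLMC estimate:", round(mlmc_mean([4000, 800, 150]), 3))
```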
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Yong; Zhou, Yong-Kang; Chen, Yi-Xing
Objective: A comprehensive clinical evaluation was conducted, assessing the Body Pro-Lok immobilization and positioning system to facilitate hypofractionated radiotherapy of intrahepatic hepatocellular carcinoma (HCC), using helical tomotherapy to improve treatment precision. Methods: Clinical applications of the Body Pro-Lok system were investigated (as above) in terms of interfractional and intrafractional setup errors and compressive abdominal breath control. To assess interfractional setup errors, a total of 42 patients who were given 5 to 20 fractions of helical tomotherapy for intrahepatic HCC were analyzed. Overall, 15 patients were immobilized using a simple vacuum cushion (group A), and the Body Pro-Lok system was used in 27 patients (group B), performing megavoltage computed tomography (MVCT) scans 196 times and 435 times, respectively. Pretreatment MVCT scans were registered to the planning kilovoltage computed tomography (KVCT) for error determination, and group comparisons were made. To establish intrafractional setup errors, 17 patients with intrahepatic HCC were selected at random for immobilization by the Body Pro-Lok system, undergoing MVCT scans after helical tomotherapy every week. A total of 46 MVCT re-scans were analyzed for this purpose. In researching breath control, 12 patients, randomly selected, were immobilized by the Body Pro-Lok system and subjected to 2-phase 4-dimensional CT (4DCT) scans, with compressive abdominal control or in freely breathing states, respectively. Respiratory-induced liver motion was then compared. Results: Mean interfractional setup errors were as follows: (1) group A: X, 2.97 ± 2.47 mm; Y, 4.85 ± 4.04 mm; and Z, 3.77 ± 3.21 mm; pitch, 0.66 ± 0.62°; roll, 1.09 ± 1.06°; and yaw, 0.85 ± 0.82°; and (2) group B: X, 2.23 ± 1.79 mm; Y, 4.10 ± 3.36 mm; and Z, 1.67 ± 1.91 mm; pitch, 0.45 ± 0.38°; roll, 0.77 ± 0.63°; and yaw, 0.52 ± 0.49°. Between-group differences were statistically significant in 6 directions (p < 0.05). Mean intrafractional setup errors with use of the Body Pro-Lok system were as follows: X, 0.41 ± 0.46 mm; Y, 0.86 ± 0.80 mm; Z, 0.33 ± 0.44 mm; and roll, 0.12 ± 0.19°. Mean liver-induced respiratory motion determinations were as follows: (1) abdominal compression: X, 2.33 ± 1.22 mm; Y, 5.11 ± 2.05 mm; Z, 2.13 ± 1.05 mm; and 3D vector, 6.22 ± 1.94 mm; and (2) free breathing: X, 3.48 ± 1.14 mm; Y, 9.83 ± 3.00 mm; Z, 3.38 ± 1.59 mm; and 3D vector, 11.07 ± 3.16 mm. Between-group differences were statistically different in 4 directions (p < 0.05). Conclusions: The Body Pro-Lok system is capable of improving interfractional and intrafractional setup accuracy and minimizing tumor movement owing to respiration in patients with intrahepatic HCC during hypofractionated helical tomotherapy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gierga, David P., E-mail: dgierga@partners.org; Harvard Medical School, Boston, Massachusetts; Turcotte, Julie C.
2012-12-01
Purpose: Breath-hold (BH) treatments can be used to reduce cardiac dose for patients with left-sided breast cancer and unfavorable cardiac anatomy. A surface imaging technique was developed for accurate patient setup and reproducible real-time BH positioning. Methods and Materials: Three-dimensional surface images were obtained for 20 patients. Surface imaging was used to correct the daily setup for each patient. Initial setup data were recorded for 443 fractions and were analyzed to assess random and systematic errors. Real time monitoring was used to verify surface placement during BH. The radiation beam was not turned on if the BH position difference was greater than 5 mm. Real-time surface data were analyzed for 2398 BHs and 363 treatment fractions. The mean and maximum differences were calculated. The percentage of BHs greater than tolerance was calculated. Results: The mean shifts for initial patient setup were 2.0 mm, 1.2 mm, and 0.3 mm in the vertical, longitudinal, and lateral directions, respectively. The mean 3-dimensional vector shift was 7.8 mm. Random and systematic errors were less than 4 mm. Real-time surface monitoring data indicated that 22% of the BHs were outside the 5-mm tolerance (range, 7%-41%), and there was a correlation with breast volume. The mean difference between the treated and reference BH positions was 2 mm in each direction. For out-of-tolerance BHs, the average difference in the BH position was 6.3 mm, and the average maximum difference was 8.8 mm. Conclusions: Daily real-time surface imaging ensures accurate and reproducible positioning for BH treatment of left-sided breast cancer patients with unfavorable cardiac anatomy.
Radiotherapy setup displacements in breast cancer patients: 3D surface imaging experience.
Cravo Sá, Ana; Fermento, Ana; Neves, Dalila; Ferreira, Sara; Silva, Teresa; Marques Coelho, Carina; Vaandering, Aude; Roma, Ana; Quaresma, Sérgio; Bonnarens, Emmanuel
2018-01-01
In this study, we compared two different setup procedures for female breast cancer patients. Imaging in radiotherapy provides precise localization of the tumour, increasing the accuracy of treatment delivery in breast cancer. Twenty breast cancer patients who underwent whole breast radiotherapy (WBRT) were selected for this study and divided into two groups of ten. Group one (G1) was positioned by tattoos and the patient positioning was then adjusted with the aid of AlignRT (Vision RT, London, UK). In group two (G2), patients were positioned by tattoos only. For both groups, the first 15 fractions were analyzed: a daily kilovoltage (kV) cone beam computed tomography (CBCT) image was acquired, and the rotational and translational displacements and, subsequently, the systematic (Σ) and random (σ) errors were analyzed. The comparison of CBCT displacements for the two groups showed a statistically significant difference in the translational left-right (LR) direction (p = 0.03), with the AlignRT procedure showing smaller lateral displacements. The systematic (Σ) and random (σ) errors showed that, for translational displacements, the group positioned by tattoos only (G2) had higher errors than the group positioned with the aid of AlignRT (G1). AlignRT can assist the positioning of breast cancer patients; however, it should be used together with another imaging method.
NASA Astrophysics Data System (ADS)
Vile, Douglas J.
In radiation therapy, interfraction organ motion introduces a level of geometric uncertainty into the planning process. Plans, which are typically based upon a single instance of anatomy, must be robust against daily anatomical variations. For this problem, a model of the magnitude, direction, and likelihood of deformation is useful. In this thesis, principal component analysis (PCA) is used to statistically model the 3D organ motion for 19 prostate cancer patients, each with 8-13 fractional computed tomography (CT) images. Deformable image registration and the resultant displacement vector fields (DVFs) are used to quantify the interfraction systematic and random motion. By applying the PCA technique to the random DVFs, principal modes of random tissue deformation were determined for each patient, and a method for sampling synthetic random DVFs was developed. The PCA model was then extended to describe the principal modes of systematic and random organ motion for the population of patients. A leave-one-out study tested both the systematic and random motion models' ability to represent PCA training set DVFs. The random and systematic DVF PCA models allowed the reconstruction of these data with absolute mean errors between 0.5-0.9 mm and 1-2 mm, respectively. To the best of the author's knowledge, this study is the first successful effort to build a fully 3D statistical PCA model of systematic tissue deformation in a population of patients. By sampling synthetic systematic and random errors, organ occupancy maps were created for bony and prostate-centroid patient setup processes. By thresholding these maps, a PCA-based planning target volume (PTV) was created and tested against conventional margin recipes (van Herk for bony alignment and 5 mm fixed [3 mm posterior] margin for centroid alignment) in a virtual clinical trial for low-risk prostate cancer. Deformably accumulated delivered dose served as a surrogate for clinical outcome. For the bony landmark setup subtrial, the PCA PTV significantly (p<0.05) reduced D30, D20, and D5 to bladder and D50 to rectum, while increasing rectal D20 and D5. For the centroid-aligned setup, the PCA PTV significantly reduced all bladder DVH metrics and trended toward lower rectal toxicity metrics. All PTVs covered the prostate with the prescription dose.
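A minimal sketch of the PCA motion model described above is given below, assuming the DVFs have already been computed and flattened into an (n_samples × 3·n_voxels) matrix; the modes come from the sample covariance via SVD, and synthetic DVFs are drawn by sampling mode weights from standard normals scaled by the eigenvalue square roots.

```python
# Minimal sketch: PCA model of displacement vector fields (DVFs) and sampling
# of synthetic DVFs. `dvfs` is a hypothetical (n_samples x 3*n_voxels) array;
# in the thesis these come from deformable registration of fractional CTs.
import numpy as np

rng = np.random.default_rng(42)
n_samples, n_dof = 12, 3 * 500               # e.g., 12 fractions, 500 voxels (toy size)
dvfs = rng.normal(size=(n_samples, n_dof))   # placeholder data

mean_dvf = dvfs.mean(axis=0)
centered = dvfs - mean_dvf

# Principal deformation modes via SVD of the centered data matrix.
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
eigvals = S**2 / (n_samples - 1)             # variance captured by each mode
modes = Vt                                   # rows are principal deformation modes

def sample_synthetic_dvf(n_modes=3):
    """Draw one synthetic DVF from the truncated PCA model."""
    z = rng.standard_normal(n_modes)
    return mean_dvf + (z * np.sqrt(eigvals[:n_modes])) @ modes[:n_modes]

synthetic = sample_synthetic_dvf()
print("synthetic DVF shape:", synthetic.shape)
print("fraction of variance in first 3 modes:", eigvals[:3].sum() / eigvals.sum())
```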
Wei, Xiaobo; Liu, Mengjiao; Ding, Yun; Li, Qilin; Cheng, Changhai; Zong, Xian; Yin, Wenming; Chen, Jie; Gu, Wendong
2018-05-08
Breast-conserving surgery (BCS) plus postoperative radiotherapy has become the standard treatment for early-stage breast cancer. The aim of this study was to compare the setup accuracy of optical surface imaging with the Sentinel system against the cone-beam computerized tomography (CBCT) imaging currently used in our clinic for patients who received BCS. Two optical surface scans were acquired, before and immediately after couch movement correction. The correlation between the setup errors determined by the initial optical surface scan and by CBCT was analyzed. The deviation of the second optical surface scan from the reference planning CT was considered an estimate of the residual error of the new patient setup correction method. The consequences in terms of the planning target volume (PTV) margins necessary for treatment sessions without setup correction were also assessed. We analyzed 145 scans in 27 patients treated for early-stage breast cancer. The setup errors of skin-marker-based patient alignment determined by optical surface scan and by CBCT were correlated, and the residual setup errors determined by the optical surface scan after couch movement correction were reduced. Optical surface imaging provides a convenient method for improving setup accuracy for breast cancer patients without additional imaging dose.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, J; Dept of Radiation Oncology, New York Weill Cornell Medical Ctr, New York, NY
Purpose: To develop a generalized statistical model that incorporates the treatment uncertainty from the rotational error of the single-isocenter technique, and to calculate the additional PTV (planning target volume) margin required to compensate for this error. Methods: The random vectors for setup and additional rotational errors in the three-dimensional (3D) patient coordinate system were assumed to follow a 3D independent normal distribution with zero mean and standard deviations σx, σy, σz for setup error and a uniform σR for rotational error. The two random vectors were summed, normalized, and transformed to spherical coordinates to derive the chi distribution with 3 degrees of freedom for the radial distance ρ. The PTV margin was determined using the critical value of this distribution at the 0.05 significance level, so that 95% of the time the treatment target would be covered by ρ. The additional PTV margin required to compensate for the rotational error was calculated as a function of σx, σy, σz, and σR. Results: The effect of the rotational error is more pronounced for treatments that require high accuracy/precision, such as stereotactic radiosurgery (SRS) or stereotactic body radiotherapy (SBRT). With a uniform 2 mm PTV margin (or σx = σy = σz = 0.7 mm), a σR = 0.32 mm will decrease the PTV coverage from 95% to 90% of the time, or an additional 0.2 mm PTV margin is needed to prevent this loss of coverage. If we choose 0.2 mm as the threshold, any σR > 0.3 mm will lead to an additional PTV margin that cannot be ignored, and the maximal σR that can be ignored is 0.0064 rad (or 0.37°) for an iso-to-target distance of 5 cm, or 0.0032 rad (or 0.18°) for an iso-to-target distance of 10 cm. Conclusions: The rotational error cannot be ignored for high-accuracy/precision treatments such as SRS/SBRT, particularly when the distance between the isocenter and the target is large.
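In the isotropic case this model reduces to scaling the 95th percentile of a 3-degree-of-freedom chi distribution; the sketch below (assuming equal per-axis standard deviations and using scipy for the chi quantile) reproduces the roughly 2 mm margin for σ = 0.7 mm and the roughly 0.2 mm additional margin quoted for σR = 0.32 mm.

```python
# Minimal sketch: PTV margin from the 95th percentile of the radial setup error,
# modeled as a chi distribution with 3 degrees of freedom (isotropic case).
# Assumes equal per-axis setup SD `sigma` and a rotational contribution `sigma_R`
# expressed as an equivalent displacement at the target.
from math import sqrt
from scipy.stats import chi

K95 = chi.ppf(0.95, df=3)          # ~2.795: 95th percentile of a standard 3-DOF chi

def ptv_margin(sigma, sigma_R=0.0):
    """Margin (mm) covering the target 95% of the time under the isotropic model."""
    return K95 * sqrt(sigma**2 + sigma_R**2)

base = ptv_margin(0.7)             # setup error only
with_rot = ptv_margin(0.7, 0.32)   # adding the rotational error
print(f"margin without rotation: {base:.2f} mm")
print(f"margin with sigma_R = 0.32 mm: {with_rot:.2f} mm (extra {with_rot - base:.2f} mm)")
```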
Altomare, Cristina; Guglielmann, Raffaella; Riboldi, Marco; Bellazzi, Riccardo; Baroni, Guido
2015-02-01
In high precision photon radiotherapy and in hadrontherapy, it is crucial to minimize the occurrence of geometrical deviations with respect to the treatment plan in each treatment session. To this end, point-based infrared (IR) optical tracking for patient set-up quality assessment is performed. Such tracking depends on external fiducial points placement. The main purpose of our work is to propose a new algorithm based on simulated annealing and augmented Lagrangian pattern search (SAPS), which is able to take into account prior knowledge, such as spatial constraints, during the optimization process. The SAPS algorithm was tested on data related to head and neck and pelvic cancer patients, and that were fitted with external surface markers for IR optical tracking applied for patient set-up preliminary correction. The integrated algorithm was tested considering optimality measures obtained with Computed Tomography (CT) images (i.e. the ratio between the so-called target registration error and fiducial registration error, TRE/FRE) and assessing the marker spatial distribution. Comparison has been performed with randomly selected marker configuration and with the GETS algorithm (Genetic Evolutionary Taboo Search), also taking into account the presence of organs at risk. The results obtained with SAPS highlight improvements with respect to the other approaches: (i) TRE/FRE ratio decreases; (ii) marker distribution satisfies both marker visibility and spatial constraints. We have also investigated how the TRE/FRE ratio is influenced by the number of markers, obtaining significant TRE/FRE reduction with respect to the random configurations, when a high number of markers is used. The SAPS algorithm is a valuable strategy for fiducial configuration optimization in IR optical tracking applied for patient set-up error detection and correction in radiation therapy, showing that taking into account prior knowledge is valuable in this optimization process. Further work will be focused on the computational optimization of the SAPS algorithm toward fast point-of-care applications. Copyright © 2014 Elsevier Inc. All rights reserved.
Jeong, Songmi; Lee, Jong Hoon; Chung, Mi Joo; Lee, Sea Won; Lee, Jeong Won; Kang, Dae Gyu; Kim, Sung Hwan
2016-01-01
We evaluated the geometric shifts of daily setup to assess the appropriateness of treatment and to determine proper margins for the planning target volume (PTV) in prostate cancer patients. We analyzed 1200 sets of pretreatment megavoltage-CT scans that were acquired from 40 patients with intermediate- to high-risk prostate cancer. They received whole pelvic intensity-modulated radiotherapy (IMRT) and underwent daily endorectal ballooning and enema to limit intrapelvic organ movement. The mean and standard deviation (SD) of daily translational shifts in the right-to-left (X), anterior-to-posterior (Y), and superior-to-inferior (Z) directions were evaluated for systematic and random error. The mean ± SD of systematic error (Σ) in X, Y, Z, and roll was 2.21 ± 3.42 mm, -0.67 ± 2.27 mm, 1.05 ± 2.87 mm, and -0.43 ± 0.89°, respectively. The mean ± SD of random error (δ) was 1.95 ± 1.60 mm in X, 1.02 ± 0.50 mm in Y, 1.01 ± 0.48 mm in Z, and 0.37 ± 0.15° in roll. The calculated proper PTV margins that cover >95% of the target on average were 8.20 (X), 5.25 (Y), and 6.45 (Z) mm. Mean systematic geometric shifts of IMRT were not statistically different in any translational or three-dimensional shift from early to late weeks. There was no grade 3 or higher gastrointestinal or genitourinary toxicity. The whole pelvic IMRT technique is a feasible and effective modality that limits intrapelvic organ motion and reduces setup uncertainties. Proper margins for the PTV can be determined by using geometric shift data.
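The population systematic and random errors underlying such margin estimates can be computed directly from the daily shifts; the sketch below assumes hypothetical per-patient arrays of daily translational shifts and uses the common definitions (Σ as the SD of per-patient mean shifts, σ as the root-mean-square of per-patient SDs) together with a van Herk-style recipe, which the abstract does not explicitly name.

```python
# Minimal sketch: population systematic (Sigma) and random (sigma) setup errors
# from daily translational shifts, then a van Herk-style PTV margin per axis.
# `shifts[p]` is a hypothetical (n_fractions x 3) array for patient p, in mm.
import numpy as np

rng = np.random.default_rng(7)
shifts = [rng.normal(loc=rng.normal(0, 2, 3), scale=1.5, size=(30, 3)) for _ in range(40)]

patient_means = np.array([s.mean(axis=0) for s in shifts])    # per-patient mean shift
patient_sds = np.array([s.std(axis=0, ddof=1) for s in shifts])

Sigma = patient_means.std(axis=0, ddof=1)          # SD of the systematic errors
sigma = np.sqrt((patient_sds**2).mean(axis=0))      # RMS of the random errors

margin = 2.5 * Sigma + 0.7 * sigma                  # van Herk recipe (assumed)
for axis, m in zip(("X (RL)", "Y (AP)", "Z (SI)"), margin):
    print(f"{axis}: PTV margin ~ {m:.1f} mm")
```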
Yan, M; Lovelock, D; Hunt, M; Mechalakos, J; Hu, Y; Pham, H; Jackson, A
2013-12-01
To use Cone Beam CT scans obtained just prior to treatments of head and neck cancer patients to measure the setup error and cumulative dose uncertainty of the cochlea. Data from 10 head and neck patients with 10 planning CTs and 52 Cone Beam CTs taken at time of treatment were used in this study. Patients were treated with conventional fractionation using an IMRT dose painting technique, most with 33 fractions. Weekly radiographic imaging was used to correct the patient setup. The authors used rigid registration of the planning CT and Cone Beam CT scans to find the translational and rotational setup errors, and the spatial setup errors of the cochlea. The planning CT was rotated and translated such that the cochlea positions match those seen in the cone beam scans, cochlea doses were recalculated and fractional doses accumulated. Uncertainties in the positions and cumulative doses of the cochlea were calculated with and without setup adjustments from radiographic imaging. The mean setup error of the cochlea was 0.04 ± 0.33 or 0.06 ± 0.43 cm for RL, 0.09 ± 0.27 or 0.07 ± 0.48 cm for AP, and 0.00 ± 0.21 or -0.24 ± 0.45 cm for SI with and without radiographic imaging, respectively. Setup with radiographic imaging reduced the standard deviation of the setup error by roughly 1-2 mm. The uncertainty of the cochlea dose depends on the treatment plan and the relative positions of the cochlea and target volumes. Combining results for the left and right cochlea, the authors found the accumulated uncertainty of the cochlea dose per fraction was 4.82 (0.39-16.8) cGy, or 10.1 (0.8-32.4) cGy, with and without radiographic imaging, respectively; the percentage uncertainties relative to the planned doses were 4.32% (0.28%-9.06%) and 10.2% (0.7%-63.6%), respectively. Patient setup error introduces uncertainty in the position of the cochlea during radiation treatment. With the assistance of radiographic imaging during setup, the standard deviation of setup error reduced by 31%, 42%, and 54% in RL, AP, and SI direction, respectively, and consequently, the uncertainty of the mean dose to cochlea reduced more than 50%. The authors estimate that the effects of these uncertainties on the probability of hearing loss for an individual patient could be as large as 10%.
Hyperbolic Positioning with Antenna Arrays and Multi-Channel Pseudolite for Indoor Localization
Fujii, Kenjirou; Sakamoto, Yoshihiro; Wang, Wei; Arie, Hiroaki; Schmitz, Alexander; Sugano, Shigeki
2015-01-01
A hyperbolic positioning method with antenna arrays consisting of proximately located antennas and a multi-channel pseudolite is proposed in order to overcome the problems of indoor positioning with conventional pseudolites (ground-based GPS transmitters). A two-dimensional positioning experiment using actual devices was conducted. The experimental result shows that the positioning accuracy varies from centimeter- to meter-level according to the geometric relation between the pseudolite antennas and the receiver. It also shows that the bias error of the carrier-phase difference observables is more serious than their random error. Based on the size of the bias error of the carrier-phase difference that was inverse-calculated from the experimental result, three-dimensional positioning performance is evaluated by computer simulation. In addition, in the three-dimensional positioning scenario, an initial-value convergence analysis of the nonlinear least squares is conducted. Its result shows that initial values that converge to the correct position exist, at least under the proposed antenna setup. The simulated values and evaluation methods introduced in this work can be applied to various antenna setups; therefore, by using them, positioning performance can be predicted in advance of installing an actual system. PMID:26437405
Dosimetric effects of patient rotational setup errors on prostate IMRT treatments
NASA Astrophysics Data System (ADS)
Fu, Weihua; Yang, Yong; Li, Xiang; Heron, Dwight E.; Saiful Huq, M.; Yue, Ning J.
2006-10-01
The purpose of this work is to determine dose delivery errors that could result from systematic rotational setup errors (ΔΦ) for prostate cancer patients treated with three-phase sequential boost IMRT. To implement this, different rotational setup errors around the three Cartesian axes were simulated for five prostate patients, and dosimetric indices, such as the dose-volume histogram (DVH), tumour control probability (TCP), normal tissue complication probability (NTCP) and equivalent uniform dose (EUD), were employed to evaluate the corresponding dosimetric influences. Rotational setup errors were simulated by adjusting the gantry, collimator and horizontal couch angles of the treatment beams, and the dosimetric effects were evaluated by recomputing the dose distributions in the treatment planning system. Our results indicated that, for prostate cancer treatment with the three-phase sequential boost IMRT technique, rotational setup errors do not have significant dosimetric impacts on the cumulative plan. Even in the worst-case scenario with ΔΦ = 3°, the prostate EUD varied within 1.5% and TCP decreased by about 1%. For the seminal vesicles, slightly larger influences were observed. However, EUD and TCP changes were still within 2%. The influence on sensitive structures, such as rectum and bladder, is also negligible. This study demonstrates that rotational setup error degrades the dosimetric coverage of the target volume in prostate cancer treatment to a certain degree. However, the degradation was not significant for the three-phase sequential boost prostate IMRT technique and for the margin sizes used in our institution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Topolnjak, Rajko; Borst, Gerben R.; Nijkamp, Jasper
Purpose: To quantify the geometrical uncertainties for the heart during radiotherapy treatment of left-sided breast cancer patients and to determine and validate planning organ at risk volume (PRV) margins. Methods and Materials: Twenty-two patients treated in supine position in 28 fractions with regularly acquired cone-beam computed tomography (CBCT) scans for offline setup correction were included. Retrospectively, the CBCT scans were reconstructed into 10-phase respiration-correlated four-dimensional scans. The heart was registered in each breathing phase to the planning CT scan to establish the respiratory heart motion during the CBCT scan (σ_resp). The average of the respiratory motion was calculated as the heart displacement error for a fraction. Subsequently, the systematic (Σ), random (σ), and total random (σ_tot = √(σ² + σ_resp²)) errors of the heart position were calculated. Based on the errors, a PRV margin for the heart was calculated to ensure that the maximum heart dose (D_max) is not underestimated in at least 90% of the cases (M_heart = 1.3Σ − 0.5σ_tot). All analyses were performed in the left-right (LR), craniocaudal (CC), and anteroposterior (AP) directions with respect to both online and offline bony anatomy setup corrections. The PRV margin was validated by accumulating the dose to the heart based on the heart registrations and comparing the planned PRV D_max to the accumulated heart D_max. Results: For online setup correction, the cardiac geometrical uncertainties and PRV margins were Σ = 2.2/3.2/2.1 mm, σ = 2.1/2.9/1.4 mm, and M_heart = 1.6/2.3/1.3 mm for LR/CC/AP, respectively. For offline setup correction these were Σ = 2.4/3.7/2.2 mm, σ = 2.9/4.1/2.7 mm, and M_heart = 1.6/2.1/1.4 mm. Cardiac motion induced by breathing was σ_resp = 1.4/2.9/1.4 mm for LR/CC/AP. The PRV D_max underestimated the accumulated heart D_max for 9.1% of patients using online and 13.6% of patients using offline bony anatomy setup correction, which validated that the PRV margin size was adequate. Conclusion: Considerable cardiac position variability relative to the bony anatomy was observed in breast cancer patients. A PRV margin can be used during treatment planning to take these uncertainties into account.
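A short sketch of the PRV margin recipe quoted in the abstract, M_heart = 1.3Σ − 0.5σ_tot with σ_tot = √(σ² + σ_resp²); the reported online-correction LR values are plugged in as a check, with small differences from the published margins possible due to rounding:

```python
import math

def heart_prv_margin(sigma_sys, sigma_rand, sigma_resp):
    """M_heart = 1.3*Sigma - 0.5*sigma_tot, with sigma_tot = sqrt(sigma^2 + sigma_resp^2)."""
    sigma_tot = math.sqrt(sigma_rand ** 2 + sigma_resp ** 2)
    return 1.3 * sigma_sys - 0.5 * sigma_tot

# Reported online-correction LR values (mm): Sigma = 2.2, sigma = 2.1, sigma_resp = 1.4
print(round(heart_prv_margin(2.2, 2.1, 1.4), 1))  # ~1.6 mm, matching the published LR margin
```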
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirose, T; Arimura, H; Oga, S
2016-06-15
Purpose: The purpose of this study was to investigate the impact of planning target volume (PTV) margins that take into consideration clinical target volume (CTV) shape variations on treatment plans of intensity modulated radiation therapy (IMRT) for prostate cancer. Methods: The systematic and random errors for patient setup in the right-left (RL), anterior-posterior (AP), and superior-inferior (SI) directions were obtained from data of 20 patients, and those for CTV shape variations were calculated from 10 patients, who were scanned weekly using cone beam computed tomography (CBCT). The setup error was defined as the difference in prostate centers between planning CT and CBCT images after bone-based registration. CTV shape variations of the high-, intermediate- and low-risk CTVs were calculated for each patient from the variances of interfractional shape variations at each vertex of the three-dimensional CTV point distributions, which were manually obtained from CTV contours on the CBCT images. PTV margins were calculated using the setup errors with and without CTV shape variations for each risk CTV. Six treatment plans were retrospectively made using the PTV margins with and without CTV shape variations for the three risk CTVs of 5 test patients. Furthermore, the treatment plans were applied to the CBCT images to investigate the impact of shape variations on PTV margins. Results: The percentages of the population covered by the PTV, defined as satisfying a CTV D98 of 95%, with and without the shape variations were 89.7% and 74.4% for high risk, 89.7% and 76.9% for intermediate risk, and 84.6% and 76.9% for low risk, respectively. Conclusion: PTV margins taking into account CTV shape variation provide a significant improvement in the applicable percentage of the population (P < 0.05). This study suggests that CTV shape variation should be taken into consideration when determining PTV margins.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aristophanous, M; Court, L
Purpose: Despite daily image guidance, setup uncertainties can be high when treating large areas of the body. The aim of this study was to measure local uncertainties inside the PTV for patients receiving IMRT to the mediastinum region. Methods: Eleven lymphoma patients who received radiotherapy (breath-hold) to the mediastinum were included in this study. The treated region could range all the way from the neck to the diaphragm. Each patient had a CT scan with a CT-on-rails system prior to every treatment. The entire PTV region was matched to the planning CT using automatic rigid registration. The PTV was then split into 5 regions: neck, supraclavicular, superior mediastinum, upper heart, lower heart. Additional auto-registrations for each of the 5 local PTV regions were performed. The residual local setup errors were calculated as the difference between the final global PTV position and the individual final local PTV positions in the AP, SI and RL directions. For each patient 4 CT scans were analyzed (1 per week of treatment). Results: The residual mean group error (M) and standard deviation of the inter-patient (or systematic) error (Σ) were lowest in the RL direction of the superior mediastinum (0.0 mm and 0.5 mm) and highest in the RL direction of the lower heart (3.5 mm and 2.9 mm). The standard deviation of the inter-fraction (or random) error (σ) was lowest in the RL direction of the superior mediastinum (0.5 mm) and highest in the SI direction of the lower heart (3.9 mm). The directionality of local uncertainties is important; for example, a superior residual error in the lower heart keeps it within the global PTV. Conclusion: There is a complex relationship between breath-holding and positioning uncertainties that needs further investigation. Residual setup uncertainties can be significant even under daily CT image guidance when treating large regions of the body.
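The abstract reports the group mean M, systematic error Σ, and random error σ without spelling out the estimators; the sketch below uses the standard definitions (Σ as the SD of per-patient mean errors, σ as the RMS of per-patient SDs) on hypothetical residual errors, so it is an assumption about the computation rather than the authors' code:

```python
import numpy as np

# Hypothetical residual local setup errors (mm): rows = patients,
# columns = analysed fractions (one CT per treatment week).
errors = np.array([
    [ 1.2, -0.4,  0.8,  0.5],
    [-0.6,  0.3, -1.1,  0.2],
    [ 2.0,  1.5,  2.4,  1.8],
])

patient_means = errors.mean(axis=1)                 # per-patient systematic component
M = patient_means.mean()                            # group mean error
Sigma = patient_means.std(ddof=1)                   # SD of inter-patient (systematic) error
sigma = np.sqrt((errors.std(axis=1, ddof=1) ** 2).mean())  # RMS inter-fraction (random) error

print(f"M = {M:.2f} mm, Sigma = {Sigma:.2f} mm, sigma = {sigma:.2f} mm")
```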
Couch height–based patient setup for abdominal radiation therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ohira, Shingo; Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, Suita; Ueda, Yoshihiro
2016-04-01
There are 2 methods commonly used for patient positioning in the anterior-posterior (A-P) direction: one is the skin mark patient setup method (SMPS) and the other is the couch height–based patient setup method (CHPS). This study compared the setup accuracy of these 2 methods for abdominal radiation therapy. The enrollment for this study comprised 23 patients with pancreatic cancer. For treatments (539 sessions), patients were set up using isocenter skin marks, and thereafter the treatment couch was shifted so that the distance between the isocenter and the upper side of the treatment couch was equal to that indicated on the computed tomographic (CT) image. Setup deviation in the A-P direction for CHPS was measured by matching the spine of the digitally reconstructed radiograph (DRR) of a lateral beam at simulation with that of the corresponding time-integrated electronic portal image. For SMPS with no correction (SMPS/NC), setup deviation was calculated based on the couch-level difference between SMPS and CHPS. SMPS/NC was corrected using 2 off-line correction protocols: the no action level (SMPS/NAL) and extended NAL (SMPS/eNAL) protocols. Margins to compensate for deviations were calculated using the Stroom formula. A-P deviation > 5 mm was observed in 17% of SMPS/NC, 4% of SMPS/NAL, and 4% of SMPS/eNAL sessions, but in only one CHPS session. For SMPS/NC, 7 patients (30%) showed deviations increasing at a rate of > 0.1 mm/fraction, but for CHPS, no such trend was observed. The standard deviations (SDs) of systematic error (Σ) were 2.6, 1.4, 0.6, and 0.8 mm, and the root mean squares of random error (σ) were 2.1, 2.6, 2.7, and 0.9 mm for SMPS/NC, SMPS/NAL, SMPS/eNAL, and CHPS, respectively. Margins to compensate for the deviations were wide for SMPS/NC (6.7 mm), smaller for SMPS/NAL (4.6 mm) and SMPS/eNAL (3.1 mm), and smallest for CHPS (2.2 mm). Achieving a better setup with smaller margins, CHPS appears to be a reproducible method for abdominal patient setup.
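The abstract cites the Stroom formula without writing it out; assuming the usual form, margin = 2Σ + 0.7σ, the reported Σ and σ values reproduce the quoted margins within rounding:

```python
def stroom_margin(Sigma, sigma):
    # Stroom margin recipe (assumed form): 2*Sigma + 0.7*sigma, in mm
    return 2.0 * Sigma + 0.7 * sigma

setups = {"SMPS/NC": (2.6, 2.1), "SMPS/NAL": (1.4, 2.6),
          "SMPS/eNAL": (0.6, 2.7), "CHPS": (0.8, 0.9)}
for name, (Sigma, sigma) in setups.items():
    print(f"{name}: {stroom_margin(Sigma, sigma):.1f} mm")
# Prints 6.7, 4.6, 3.1 and 2.2 mm, matching the reported margins.
```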
Initial clinical experience with a video-based patient positioning system.
Johnson, L S; Milliken, B D; Hadley, S W; Pelizzari, C A; Haraf, D J; Chen, G T
1999-08-01
To report initial clinical experience with an interactive, video-based patient positioning system that is inexpensive, quick, accurate, and easy to use. System hardware includes two black-and-white CCD cameras, zoom lenses, and a PC equipped with a frame grabber. Custom software is used to acquire and archive video images, as well as to display real-time subtraction images revealing patient misalignment in multiple views. Two studies are described. In the first study, video is used to document the daily setup histories of 5 head and neck patients. Time-lapse cine loops are generated for each patient and used to diagnose and correct common setup errors. In the second study, 6 twice-daily (BID) head and neck patients are positioned according to the following protocol: at AM setups conventional treatment room lasers are used; at PM setups lasers are used initially and then video is used for 1-2 minutes to fine-tune the patient position. Lateral video images and lateral verification films are registered off-line to compare the distribution of setup errors per patient, with and without video assistance. In the first study, video images were used to determine the accuracy of our conventional head and neck setup technique, i.e., alignment of lightcast marks and surface anatomy to treatment room lasers and the light field. For this initial cohort of patients, errors ranged from sigma = 5 to 7 mm and were patient-specific. Time-lapse cine loops of the images revealed sources of the error, and as a result, our localization techniques and immobilization device were modified to improve setup accuracy. After the improvements, conventional setup errors were reduced to sigma = 3 to 5 mm. In the second study, when a stereo pair of live subtraction images was introduced to perform daily "on-line" setup correction, errors were reduced to sigma = 1 to 3 mm. Results depended on patient health and cooperation and the length of time spent fine-tuning the position. An interactive, video-based patient positioning system was shown to reduce setup errors to within 1 to 3 mm in head and neck patients, without a significant increase in overall treatment time or labor-intensive procedures. Unlike retrospective portal image analysis, the use of two live-video images provides the therapists with immediate feedback and allows for true 3-D positioning and correction of out-of-plane rotation before radiation is delivered. With significant improvement in head and neck alignment and the elimination of setup errors greater than 3 to 5 mm, margins associated with treatment volumes can potentially be reduced, thereby decreasing normal tissue irradiation.
A novel method to correct for pitch and yaw patient setup errors in helical tomotherapy.
Boswell, Sarah A; Jeraj, Robert; Ruchala, Kenneth J; Olivera, Gustavo H; Jaradat, Hazim A; James, Joshua A; Gutierrez, Alonso; Pearson, Dave; Frank, Gary; Mackie, T Rock
2005-06-01
An accurate means of determining and correcting for daily patient setup errors is important to the cancer outcome in radiotherapy. While many tools have been developed to detect setup errors, difficulty may arise in accurately adjusting the patient to account for the rotational error components. A novel, automated method to correct for rotational patient setup errors in helical tomotherapy is proposed for a treatment couch that is restricted to motion along translational axes. In tomotherapy, only a narrow superior/inferior section of the target receives a dose at any instant; thus, rotations in the sagittal and coronal planes may be approximately corrected for by very slow continuous couch motion in a direction perpendicular to the scanning direction. Results from proof-of-principle tests indicate that the method improves the accuracy of treatment delivery, especially for long and narrow targets. Rotational corrections about an axis perpendicular to the transverse plane continue to be implemented easily in tomotherapy by adjustment of the initial gantry angle.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, W; Yang, H; Wang, Y
2014-06-01
Purpose: To investigate the impact of different clipbox volumes with automated registration techniques, using commercially available software with on-board volumetric imaging (OBI), for treatment verification in cervical cancer patients. Methods: Fifty cervical cancer patients who received daily CBCT scans (On-Board Imaging v1.5 system, Varian Medical Systems) during the first treatment week and weekly thereafter were included in this analysis. A total of 450 CBCT scans were registered to the planning CT scan using a pelvic clipbox (clipbox-Pelvic) and a clipbox around the PTV (clipbox-PTV). The translational (anterior-posterior, left-right, superior-inferior) and rotational (yaw, pitch and roll) errors for each match were recorded. The setup errors and the systematic and random errors for both clipboxes were calculated. A paired-samples t-test was used to analyze the differences between clipbox-Pelvic and clipbox-PTV. Results: The SD of the systematic error (σ) was 1.0 mm, 2.0 mm, 3.2 mm and 1.9 mm, 2.3 mm, 3.0 mm in the AP, LR and SI directions for clipbox-Pelvic and clipbox-PTV, respectively. The average random error (Σ) was 1.7 mm, 2.0 mm, 4.2 mm and 1.7 mm, 3.4 mm, 4.4 mm in the AP, LR and SI directions for clipbox-Pelvic and clipbox-PTV, respectively. However, significant differences between the two image registration volumes were found only in the SI direction (p=0.002 and p=0.01 for mean and SD). For rotations, the yaw mean/SD and the pitch SD showed significant differences between clipbox-Pelvic and clipbox-PTV. Conclusion: The volume defined for image registration is important for cervical cancer when a 3D/3D match is used. The alignment clipbox can affect the setup errors obtained. Further analysis is needed to determine the optimal defined volume for image registration in cervical cancer. Conflict of interest: none.
Hansen, Helle; Nielsen, Berit Kjærside; Boejen, Annette; Vestergaard, Anne
2018-06-01
The aim of this study was to investigate if teaching patients about positioning before radiotherapy treatment would (a) reduce the residual rotational set-up errors, (b) reduce the number of repositionings and (c) improve patients' sense of control by increasing self-efficacy and reducing distress. Patients were randomized to either standard care (control group) or standard care and a teaching session combining visual aids and practical exercises (intervention group). Daily images from the treatment sessions were evaluated off-line. Both groups filled in a questionnaire before and at the end of the treatment course on various aspects of cooperation with the staff regarding positioning. Comparisons of residual rotational set-up errors showed an improvement in the intervention group compared to the control group. No significant differences were found in number of repositionings, self-efficacy or distress. Results show that it is possible to teach patients about positioning and thereby improve precision in positioning. Teaching patients about positioning did not seem to affect self-efficacy or distress scores at baseline and at the end of the treatment course.
SU-E-J-15: Automatically Detect Patient Treatment Position and Orientation in KV Portal Images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qiu, J; Yang, D
2015-06-15
Purpose: In the course of radiation therapy, the complex information processing workflow can result in potential errors, such as incorrect or inaccurate patient setups. With automatic image checks and patient identification, such errors could be effectively reduced. For this purpose, we developed a simple and rapid image processing method to automatically detect the patient position and orientation in 2D portal images, so as to allow automatic checks of positions and orientations for patients' daily RT treatments. Methods: Based on the principle of portal image formation, a set of whole-body DRR images was reconstructed from multiple whole-body CT volume datasets and fused together to be used as the matching template. To identify the patient setup position and orientation shown in a 2D portal image, the portal image was preprocessed (contrast enhancement, down-sampling and couch table detection), then matched to the template image to identify the laterality (left or right), position, orientation and treatment site. Results: Five days of clinically qualified portal images were gathered randomly and processed by the automatic detection and matching method without any additional information. The detection results were visually checked by physicists. Of 200 kV portal images, 182 were detected correctly, a success rate of 91%. Conclusion: The proposed method can detect patient setup and orientation quickly and automatically. It only requires the image intensity information in kV portal images. This method can be useful in the framework of Electronic Chart Check (ECCK) to reduce potential errors in the radiation therapy workflow and thus improve patient safety. In addition, the auto-detection results, such as the patient treatment site position and patient orientation, could be useful to guide subsequent image processing procedures, e.g., verification of patient daily setup accuracy. This work was partially supported by a research grant from Varian Medical System.
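The abstract describes matching a preprocessed portal image against a fused whole-body DRR template; a minimal sketch of one way this could be done with normalized cross-correlation is shown below. The function names, the scikit-image calls, and the down-sampling factor are illustrative choices, not the authors' implementation:

```python
import numpy as np
from skimage.feature import match_template
from skimage.transform import rescale

def locate_portal_in_template(portal, template, downsample=0.25):
    """Find the best offset of a preprocessed kV portal image inside a
    whole-body DRR template via normalized cross-correlation."""
    portal_ds = rescale(portal, downsample, anti_aliasing=True)
    template_ds = rescale(template, downsample, anti_aliasing=True)
    # The whole-body template acts as the search image; the portal image
    # is the patch being located, so it must be the smaller of the two.
    score_map = match_template(template_ds, portal_ds)
    row, col = np.unravel_index(np.argmax(score_map), score_map.shape)
    return (row / downsample, col / downsample), float(score_map.max())

# Orientation could be checked by repeating the match with the portal
# image flipped left-right and keeping the hypothesis with the higher score.
```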
Aeolus End-To-End Simulator and Wind Retrieval Algorithms up to Level 1B
NASA Astrophysics Data System (ADS)
Reitebuch, Oliver; Marksteiner, Uwe; Rompel, Marc; Meringer, Markus; Schmidt, Karsten; Huber, Dorit; Nikolaus, Ines; Dabas, Alain; Marshall, Jonathan; de Bruin, Frank; Kanitz, Thomas; Straume, Anne-Grete
2018-04-01
The first wind lidar in space, ALADIN, will be deployed on ESA's Aeolus mission. In order to assess the performance of ALADIN and to optimize the wind retrieval and calibration algorithms, an end-to-end simulator was developed. This allows realistic simulations of data downlinked by Aeolus. Together with operational processors, this setup is used to assess random and systematic error sources and perform sensitivity studies about the influence of atmospheric and instrument parameters.
Manzanilla-Granados, Héctor M; Saint-Martín, Humberto; Fuentes-Azcatl, Raúl; Alejandre, José
2015-07-02
The solubility of NaCl, an equilibrium between a saturated solution of ions and a solid with a crystalline structure, was obtained from molecular dynamics simulations using the SPC/E and TIP4P-Ew water models. Four initial setups on supersaturated systems were tested on sodium chloride (NaCl) solutions to determine the equilibrium conditions and computational performance: (1) an ionic solution confined between two crystal plates of periodic NaCl, (2) a solution with all the ions initially distributed randomly, (3) a nanocrystal immersed in pure water, and (4) a nanocrystal immersed in an ionic solution. In some cases, the equilibration of the system can take several microseconds. The results from this work showed that the solubility of NaCl was the same, within simulation error, for the four setups, and in agreement with previously reported values from simulations with the setup (1). The system of a nanocrystal immersed in supersaturated solution was found to equilibrate faster than others. In agreement with laser-Doppler droplet measurements, at equilibrium with the solution the crystals in all the setups had a slight positive charge.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Imura, K; Fujibuchi, T; Hirata, H
Purpose: Patient set-up skills in the radiotherapy treatment room have a great influence on the treatment effect of image guided radiotherapy. In this study, we developed a training system for improving practical set-up skills, including rotational correction, in a virtual environment away from the pressure of the actual treatment room, using a three-dimensional computer graphics (3DCG) engine. Methods: The treatment room for external beam radiotherapy was reproduced in the virtual environment using the 3DCG engine Unity. The viewpoints for performing patient set-up in the virtual treatment room were arranged on both sides of the virtual operable treatment couch to mimic actual performance by two clinical staff members. The positional errors relative to the mechanical isocenter, based on the alignment between the skin marker and the lasers on the virtual patient model, were displayed as numerical values in SI units and as directional arrows. The rotational errors, calculated with a point on the virtual body axis as the center of each rotation axis in the virtual environment, were corrected by adjusting the rotational position of a body phantom wrapped with a gyroscope belt placed on the table in real space. These rotational errors were evaluated using vector cross-product operations and trigonometric functions in the script for the patient set-up technique. Results: The viewpoints in the virtual environment allowed individual users to visually recognize positional discrepancies relative to the mechanical isocenter until positional errors of several millimeters were eliminated. The rotational errors between the two points calculated about the center point could be corrected efficiently, with the script mathematically indicating the minimal correction. Conclusion: By utilizing the script to correct rotational errors as well as to provide accurate positional recognition for the patient set-up technique, the training system developed for improving patient set-up skills enabled individual users to identify efficient positional correction methods easily.
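A generic illustration of the kind of cross-product/trigonometric computation the set-up script describes: the rotation angle and axis taking a reference body-axis direction to the currently measured one. Variable names and values are hypothetical:

```python
import numpy as np

def rotation_between(v_ref, v_cur):
    """Rotation angle (deg) and axis that map the reference direction onto
    the current one, from the cross product and the dot product."""
    v_ref = v_ref / np.linalg.norm(v_ref)
    v_cur = v_cur / np.linalg.norm(v_cur)
    axis = np.cross(v_ref, v_cur)
    angle = np.degrees(np.arctan2(np.linalg.norm(axis), np.dot(v_ref, v_cur)))
    return angle, axis / (np.linalg.norm(axis) + 1e-12)

angle, axis = rotation_between(np.array([0.0, 1.0, 0.0]),
                               np.array([0.05, 0.998, 0.0]))
print(f"rotational error: {angle:.2f} deg about axis {np.round(axis, 3)}")
```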
Tersi, Luca; Barré, Arnaud; Fantozzi, Silvia; Stagni, Rita
2013-03-01
Model-based mono-planar and bi-planar 3D fluoroscopy methods can quantify intact joint kinematics with a performance/cost trade-off. The aim of this study was to compare the performances of mono- and bi-planar setups against a marker-based gold standard during dynamic phantom knee acquisitions. Absolute pose errors for in-plane parameters were lower than 0.6 mm or 0.6° for both mono- and bi-planar setups. Mono-planar setups proved critical in quantifying the out-of-plane translation (error < 6.5 mm), and bi-planar setups in quantifying the rotation about the bone's longitudinal axis (error < 1.3°). These errors propagated to joint angles and translations differently depending on the alignment of the anatomical axes and the fluoroscopic reference frames. Internal-external rotation was the least accurate angle both with mono-planar (error < 4.4°) and bi-planar (error < 1.7°) setups, due to bone longitudinal symmetries. Results highlighted that the accuracy of mono-planar in-plane pose parameters is comparable to bi-planar, but with halved computational costs, halved segmentation time and halved ionizing radiation dose. Bi-planar analysis better compensated for the out-of-plane uncertainty, which is propagated differently to relative kinematics depending on the setup. To exploit its full benefits, the motion task to be investigated should be designed to keep the joint inside the visible volume, which introduces constraints with respect to mono-planar analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoiber, Eva Maria, E-mail: eva.stoiber@med.uni-heidelberg.de; Department of Medical Physics, German Cancer Research Center, Heidelberg; Giske, Kristina
Purpose: To evaluate local positioning errors of the lumbar spine during fractionated intensity-modulated radiotherapy of patients treated with craniospinal irradiation and to assess the impact of rotational error correction on these uncertainties for one patient setup correction strategy. Methods and Materials: 8 patients (6 adults, 2 children) treated with helical tomotherapy for craniospinal irradiation were retrospectively chosen for this analysis. Patients were immobilized with a deep-drawn Aquaplast head mask. In addition to daily megavoltage control computed tomography scans of the skull, positioning of the lumbar spine was assessed once a week. For this, the patient setup was corrected by a target point correction derived from a registration of the patient's skull. The residual positioning variations of the lumbar spine were evaluated by applying a rigid-registration algorithm. The impact of different rotational error corrections was simulated. Results: After target point correction, the residual local positioning errors of the lumbar spine varied considerably. Rotational error correction about the craniocaudal axis neither improved nor worsened these translational errors, whereas a simulated rotational error correction about the right-left and anterior-posterior axes increased these errors by a factor of 2 to 3. Conclusion: The patient fixation used allows for deformations between the patient's skull and spine. Therefore, for the setup correction strategy evaluated in this study, generous margins for the lumbar spinal target volume are needed to prevent a local geographic miss. With any applied correction strategy, it needs to be evaluated whether or not a rotational error correction is beneficial.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jani, S; Low, D; Lamb, J
2015-06-15
Purpose: To develop a system that can automatically detect patient identification and positioning errors using 3D computed tomography (CT) setup images and kilovoltage CT (kVCT) planning images. Methods: Planning kVCT images were collected for head-and-neck (H&N), pelvis, and spine treatments with corresponding 3D cone-beam CT (CBCT) and megavoltage CT (MVCT) setup images from TrueBeam and TomoTherapy units, respectively. Patient identification errors were simulated by registering setup and planning images from different patients. Positioning errors were simulated by misaligning the setup image by 1 cm to 5 cm in the six anatomical directions for H&N and pelvis patients. Misalignments for spine treatments were simulated by registering the setup image to adjacent vertebral bodies on the planning kVCT. A body contour of the setup image was used as an initial mask for image comparison. Images were pre-processed by image filtering and air voxel thresholding, and image pairs were assessed using commonly used image similarity metrics as well as custom-designed metrics. A linear discriminant analysis classifier was trained and tested on the datasets, and misclassification error (MCE), sensitivity, and specificity estimates were generated using 10-fold cross validation. Results: Our workflow produced MCE estimates of 0.7%, 1.7%, and 0% for H&N, pelvis, and spine TomoTherapy images, respectively. Sensitivities and specificities ranged from 98.0% to 100%. MCEs of 3.5%, 2.3%, and 2.1% were obtained for TrueBeam images of the above sites, respectively, with sensitivity and specificity estimates between 96.2% and 98.4%. MCEs for 1-cm H&N/pelvis misalignments were 1.3%/5.1% and 9.1%/8.6% for TomoTherapy and TrueBeam images, respectively. 2-cm MCE estimates were 0.4%/1.6% and 3.1%/3.2%, respectively. Vertebral misalignment MCEs were 4.8% and 4.9% for TomoTherapy and TrueBeam images, respectively. Conclusion: Patient identification and gross misalignment errors can be robustly and automatically detected using 3D setup images of two imaging modalities across three commonly-treated anatomical sites.
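A minimal scikit-learn sketch of the classification step: a linear discriminant analysis classifier evaluated with 10-fold cross-validation to estimate misclassification error, sensitivity, and specificity. The random features stand in for the (unspecified) image-similarity metrics, so the numbers it prints are not those of the study:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_predict

# X: similarity features per setup/planning image pair (synthetic here);
# y: 1 = wrong patient or gross misalignment, 0 = correct match.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (200, 4)),
               rng.normal(2.0, 1.0, (200, 4))])
y = np.r_[np.zeros(200), np.ones(200)]

pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=10)
mce = np.mean(pred != y)
sensitivity = np.mean(pred[y == 1] == 1)
specificity = np.mean(pred[y == 0] == 0)
print(f"MCE = {mce:.3f}, sensitivity = {sensitivity:.3f}, specificity = {specificity:.3f}")
```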
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prabhakar, Ramachandran; Department of Nuclear Medicine, All India Institute of Medical Sciences, New Delhi; Department of Radiology, All India Institute of Medical Sciences, New Delhi
Setup error plays a significant role in the final treatment outcome in radiotherapy. The effect of setup error on the planning target volume (PTV) and surrounding critical structures was studied, and the maximum allowed tolerance in setup error with minimal complications to the surrounding critical structures and acceptable tumor control probability was determined. Twelve patients were selected for this study after breast conservation surgery, of whom 8 had right-sided and 4 had left-sided breast cancer. Tangential fields were placed on the 3-dimensional computed tomography (3D-CT) dataset by an isocentric technique, and the doses to the PTV, ipsilateral lung (IL), contralateral lung (CLL), contralateral breast (CLB), heart, and liver were then computed from dose-volume histograms (DVHs). The planning isocenter was shifted by 3 and 10 mm in all 3 directions (X, Y, Z) to simulate the setup error encountered during treatment. Dosimetric studies were performed for each patient for the PTV according to ICRU 50 guidelines: mean doses to PTV, IL, CLL, heart, CLB, and liver; percentage of lung volume that received a dose of 20 Gy or more (V20); percentage of heart volume that received a dose of 30 Gy or more (V30); and volume of liver that received a dose of 50 Gy or more (V50) were calculated for all of the above-mentioned isocenter shifts and compared to the results with zero isocenter shift. Simulation of the different isocenter shifts in all 3 directions showed that the isocentric shifts along the posterior direction had a very significant effect on the dose to the heart, IL, CLL, and CLB, followed by the lateral direction. The setup error in the isocenter should be kept strictly below 3 mm. The study shows that isocenter verification in the case of tangential fields should be performed to reduce future complications to adjacent normal tissues.
Measurement of electromagnetic tracking error in a navigated breast surgery setup
NASA Astrophysics Data System (ADS)
Harish, Vinyas; Baksh, Aidan; Ungi, Tamas; Lasso, Andras; Baum, Zachary; Gauvin, Gabrielle; Engel, Jay; Rudan, John; Fichtinger, Gabor
2016-03-01
PURPOSE: The measurement of tracking error is crucial to ensure the safety and feasibility of electromagnetically tracked, image-guided procedures. Measurement should occur in a clinical environment because electromagnetic field distortion depends on positioning relative to the field generator and metal objects. However, we could not find an accessible and open-source system for calibration, error measurement, and visualization. We developed such a system and tested it in a navigated breast surgery setup. METHODS: A pointer tool was designed for concurrent electromagnetic and optical tracking. Software modules were developed for automatic calibration of the measurement system, real-time error visualization, and analysis. The system was taken to an operating room to test for field distortion in a navigated breast surgery setup. Positional and rotational electromagnetic tracking errors were then calculated using optical tracking as a ground truth. RESULTS: Our system is quick to set up and can be rapidly deployed. The process from calibration to visualization also only takes a few minutes. Field distortion was measured in the presence of various surgical equipment. Positional and rotational error in a clean field was approximately 0.90 mm and 0.31°. The presence of a surgical table, an electrosurgical cautery, and an anesthesia machine increased the error by up to a few tenths of a millimeter and a tenth of a degree. CONCLUSION: In a navigated breast surgery setup, measurement and visualization of tracking error defines a safe working area in the presence of surgical equipment. Our system is available as an extension for the open-source 3D Slicer platform.
Jin, Peng; van der Horst, Astrid; de Jong, Rianne; van Hooft, Jeanin E; Kamphuis, Martijn; van Wieringen, Niek; Machiels, Melanie; Bel, Arjan; Hulshof, Maarten C C M; Alderliesten, Tanja
2015-12-01
The aim of this study was to quantify interfractional esophageal tumor position variation using markers and investigate the use of markers for setup verification. Sixty-five markers placed in the tumor volumes of 24 esophageal cancer patients were identified in computed tomography (CT) and follow-up cone-beam CT. For each patient we calculated pairwise distances between markers over time to evaluate geometric tumor volume variation. We then quantified marker displacements relative to bony anatomy and estimated the variation of systematic (Σ) and random errors (σ). During bony anatomy-based setup verification, we visually inspected whether the markers were inside the planning target volume (PTV) and attempted marker-based registration. Minor time trends with substantial fluctuations in pairwise distances implied tissue deformation. Overall, Σ(σ) in the left-right/cranial-caudal/anterior-posterior direction was 2.9(2.4)/4.1(2.4)/2.2(1.8) mm; for the proximal stomach, it was 5.4(4.3)/4.9(3.2)/1.9(2.4) mm. After bony anatomy-based setup correction, all markers were inside the PTV. However, due to large tissue deformation, marker-based registration was not feasible. Generally, the interfractional position variation of esophageal tumors is more pronounced in the cranial-caudal direction and in the proximal stomach. Currently, marker-based setup verification is not feasible for clinical routine use, but markers can facilitate the setup verification by inspecting whether the PTV covers the tumor volume adequately. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Briscoe, M; Ploquin, N; Voroney, JP
2015-06-15
Purpose: To quantify the effect of patient rotation in stereotactic radiation therapy and establish a threshold at which rotational patient set-up errors have a significant impact on target coverage. Methods: To simulate rotational patient set-up errors, a Matlab code was created to rotate the patient dose distribution around the treatment isocentre, located centrally in the lesion, while keeping the structure contours in their original locations on the CT and MRI. Rotations of 1°, 3°, and 5° for each of the pitch, roll, and yaw, as well as simultaneous rotations of 1°, 3°, and 5° around all three axes, were applied to two types of brain lesions: brain metastasis and acoustic neuroma. In order to analyze multiple tumour shapes, these plans included small spherical (metastasis), elliptical (acoustic neuroma), and large irregular (metastasis) tumour structures. Dose-volume histograms and planning target volumes were compared between the planned patient positions and those with simulated rotational set-up errors. The RTOG conformity index under patient rotation was also investigated. Results: Examining the tumour volumes that received 80% of the prescription dose in the planned and rotated patient positions showed decreases in prescription dose coverage of up to 2.3%. Conformity indices for treatments with simulated rotational errors showed decreases of up to 3% compared to the original plan. For irregular lesions, degradation of 1% of the target coverage can be seen for rotations as low as 3°. Conclusions: These data show that for elliptical or spherical targets, rotational patient set-up errors of less than 3° around any or all axes do not have a significant impact on the dose delivered to the target volume or the conformity index of the plan. However, the same rotational errors would have an impact on plans for irregular tumours.
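The study used in-house Matlab code to rotate the dose distribution about the isocentre while leaving the contours fixed; the sketch below is a Python/SciPy analogue of that idea, not the authors' code, and assumes the isocentre lies at the centre of the dose grid:

```python
import numpy as np
from scipy.ndimage import rotate

def rotate_dose_about_centre(dose, yaw_deg, pitch_deg, roll_deg):
    """Rotate a 3D dose grid about its centre to mimic a rotational
    set-up error; linear interpolation, grid shape preserved."""
    rotated = rotate(dose, yaw_deg, axes=(0, 1), reshape=False, order=1)
    rotated = rotate(rotated, pitch_deg, axes=(1, 2), reshape=False, order=1)
    rotated = rotate(rotated, roll_deg, axes=(0, 2), reshape=False, order=1)
    return rotated

dose = np.random.rand(64, 64, 64)  # stand-in dose grid
dose_rotated = rotate_dose_about_centre(dose, yaw_deg=3.0, pitch_deg=0.0, roll_deg=0.0)
# DVHs of the fixed contours can then be recomputed on dose_rotated.
```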
Irradiation setup at the U-120M cyclotron facility
NASA Astrophysics Data System (ADS)
Křížek, F.; Ferencei, J.; Matlocha, T.; Pospíšil, J.; Príbeli, P.; Raskina, V.; Isakov, A.; Štursa, J.; Vaňát, T.; Vysoká, K.
2018-06-01
This paper describes parameters of the proton beams provided by the U-120M cyclotron and the related irradiation setup at the open access irradiation facility at the Nuclear Physics Institute of the Czech Academy of Sciences. The facility is suitable for testing radiation hardness of various electronic components. The use of the setup is illustrated by a measurement of an error rate for errors caused by Single Event Transients in an SRAM-based Xilinx XC3S200 FPGA. This measurement provides an estimate of a possible occurrence of Single Event Transients. Data suggest that the variation of error rate of the Single Event Effects for different clock phase shifts is not significant enough to use clock phase alignment with the beam as a fault mitigation technique.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, Elizabeth S.; Prosnitz, Robert G.; Yu Xiaoli
2006-11-15
Purpose: The aim of this study was to assess the impact of patient-specific factors, left ventricle (LV) volume, and treatment set-up errors on the rate of perfusion defects 6 to 60 months post-radiation therapy (RT) in patients receiving tangential RT for left-sided breast cancer. Methods and Materials: Between 1998 and 2005, a total of 153 patients were enrolled onto an institutional review board-approved prospective study and had pre- and serial post-RT (6-60 months) cardiac perfusion scans to assess for perfusion defects. Of the patients, 108 had normal pre-RT perfusion scans and available follow-up data. The impact of patient-specific factors on the rate of perfusion defects was assessed at various time points using univariate and multivariate analysis. The impact of set-up errors on the rate of perfusion defects was also analyzed using a one-tailed Fisher's exact test. Results: Consistent with our prior results, the volume of LV in the RT field was the most significant predictor of perfusion defects on both univariate (p = 0.0005 to 0.0058) and multivariate analysis (p = 0.0026 to 0.0029). Body mass index (BMI) was the only significant patient-specific factor on both univariate (p = 0.0005 to 0.022) and multivariate analysis (p = 0.0091 to 0.05). In patients with very small volumes of LV in the planned RT fields, the rate of perfusion defects was significantly higher when the fields were set up 'too deep' (83% vs. 30%, p = 0.059). The frequency of deep set-up errors was significantly higher among patients with BMI ≥25 kg/m² compared with patients of normal weight (47% vs. 28%, p = 0.068). Conclusions: BMI ≥25 kg/m² may be a significant risk factor for cardiac toxicity after RT for left-sided breast cancer, possibly because of more frequent deep set-up errors resulting in the inclusion of additional heart in the RT fields. Further study is necessary to better understand the impact of patient-specific factors and set-up errors on the development of RT-induced perfusion defects.
High dimensional linear regression models under long memory dependence and measurement error
NASA Astrophysics Data System (ADS)
Kaul, Abhishek
This dissertation consists of three chapters. The first chapter introduces the models under consideration and motivates the problems of interest. A brief literature review is also provided in this chapter. The second chapter investigates the properties of Lasso under long range dependent model errors. Lasso is a computationally efficient approach to model selection and estimation, and its properties are well studied when the regression errors are independent and identically distributed. We study the case where the regression errors form a long memory moving average process. We establish a finite sample oracle inequality for the Lasso solution. We then show the asymptotic sign consistency in this setup. These results are established in the high dimensional setup (p > n), where p can increase exponentially with n. Finally, we show the n^(1/2−d)-consistency of Lasso, along with the oracle property of adaptive Lasso, in the case where p is fixed. Here d is the memory parameter of the stationary error sequence. The performance of Lasso is also analysed in the present setup with a simulation study. The third chapter proposes and investigates the properties of a penalized quantile based estimator for measurement error models. Standard formulations of prediction problems in high dimension regression models assume the availability of fully observed covariates and sub-Gaussian and homogeneous model errors. This makes these methods inapplicable to measurement error models where covariates are unobservable and observations are possibly non sub-Gaussian and heterogeneous. We propose weighted penalized corrected quantile estimators for the regression parameter vector in linear regression models with additive measurement errors, where the unobservable covariates are nonrandom. The proposed estimators forgo the need for the above mentioned model assumptions. We study these estimators in both the fixed dimension and high dimensional sparse setups; in the latter setup, the dimensionality can grow exponentially with the sample size. In the fixed dimensional setting we provide the oracle properties associated with the proposed estimators. In the high dimensional setting, we provide bounds for the statistical error associated with the estimation that hold with asymptotic probability 1, thereby providing the ℓ1-consistency of the proposed estimator. We also establish model selection consistency in terms of the correctly estimated zero components of the parameter vector. A simulation study that investigates the finite sample accuracy of the proposed estimator is also included in this chapter.
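As a toy illustration of the p > n sparse-regression setting the second chapter works in, the snippet below fits a Lasso to simulated data and reads off the selected support; it uses i.i.d. errors purely for illustration, whereas the dissertation's contribution concerns what changes when the errors form a long-memory moving-average process:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, p = 100, 400                       # high dimensional: p > n
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = 2.0                        # sparse truth with 5 active coefficients
y = X @ beta + rng.normal(size=n)     # i.i.d. errors here; long memory in the thesis

fit = Lasso(alpha=0.3).fit(X, y)
support = np.flatnonzero(fit.coef_)
print("selected coefficients:", support)  # sign consistency asks this to match {0,...,4}
```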
Immobilisation precision in VMAT for oral cancer patients
NASA Astrophysics Data System (ADS)
Norfadilah, M. N.; Ahmad, R.; Heng, S. P.; Lam, K. S.; Radzi, A. B. Ahmad; John, L. S. H.
2017-05-01
A study was conducted to evaluate and quantify the precision of interfraction setup with different immobilisation devices throughout the treatment time. Local setup accuracy was analysed for 8 oral cancer patients receiving radiotherapy: 4 with a HeadFIX® mouthpiece moulded with wax (HFW) and 4 with a 10 ml/cc syringe barrel (SYR). Each patient underwent Image Guided Radiotherapy (IGRT), with a total of 209 cone-beam computed tomography (CBCT) data sets used for measurement of position setup errors. The setup variations in the mediolateral (ML), craniocaudal (CC), and anteroposterior (AP) dimensions were measured. The overall mean displacement (M), the population systematic (Σ) and random (σ) errors, and the 3D vector length were calculated. Clinical target volume to planning target volume (CTV-PTV) margins were calculated according to the van Herk formula (2.5Σ+0.7σ). The M values for both groups were < 1 mm and < 1° in all translational and rotational directions, indicating no significant imprecision in the equipment (lasers) or during the procedure. The interfraction translational 3D vectors for HFW and SYR were 1.93±0.66 mm and 3.84±1.34 mm, respectively. The interfraction average rotational errors were 0.00°±0.65° and 0.34°±0.59°, respectively. CTV-PTV margins along the 3 translational axes (Right-Left, Superior-Inferior, Anterior-Posterior) were 3.08, 2.22 and 0.81 mm for HFW and 3.76, 6.24 and 5.06 mm for SYR. The results of this study demonstrate that HFW is more precise in reproducing the patient position than the conventionally used SYR (p<0.001). None of the calculated margins exceeded the hospital protocol (5 mm) except along the S-I and A-P axes using the syringe. For this reason, daily IGRT is highly recommended to improve immobilisation precision.
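A short sketch of the van Herk margin recipe quoted above, 2.5Σ + 0.7σ; the per-axis Σ and σ values are not listed in the abstract, so the example call uses hypothetical inputs:

```python
def van_herk_margin(Sigma, sigma):
    # CTV-to-PTV margin recipe quoted in the abstract: 2.5*Sigma + 0.7*sigma (mm)
    return 2.5 * Sigma + 0.7 * sigma

# Hypothetical per-axis population errors (mm), for illustration only.
print(f"{van_herk_margin(Sigma=1.0, sigma=0.8):.2f} mm")  # -> 3.06 mm for these assumed inputs
```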
Mori, Shinichiro; Shibayama, Kouichi; Tanimoto, Katsuyuki; Kumagai, Motoki; Matsuzaki, Yuka; Furukawa, Takuji; Inaniwa, Taku; Shirai, Toshiyuki; Noda, Koji; Tsuji, Hiroshi; Kamada, Tadashi
2012-09-01
Our institute has constructed a new treatment facility for carbon ion scanning beam therapy. The first clinical trials were successfully completed at the end of November 2011. To evaluate patient setup accuracy, positional errors between the reference Computed Tomography (CT) scan and final patient setup images were calculated using 2D-3D registration software. Eleven patients with tumors of the head and neck, prostate and pelvis receiving carbon ion scanning beam treatment participated. The patient setup process takes orthogonal X-ray flat panel detector (FPD) images and the therapists adjust the patient table position in six degrees of freedom to register the reference position by manual or auto- (or both) registration functions. We calculated residual positional errors with the 2D-3D auto-registration function using the final patient setup orthogonal FPD images and treatment planning CT data. Residual error averaged over all patients in each fraction decreased from the initial to the last treatment fraction [1.09 mm/0.76° (averaged in the 1st and 2nd fractions) to 0.77 mm/0.61° (averaged in the 15th and 16th fractions)]. 2D-3D registration calculation time was 8.0 s on average throughout the treatment course. Residual errors in translation and rotation averaged over all patients as a function of date decreased with the passage of time (1.6 mm/1.2° in May 2011 to 0.4 mm/0.2° in December 2011). This retrospective residual positional error analysis shows that the accuracy of patient setup during the first clinical trials of carbon ion beam scanning therapy was good and improved with increasing therapist experience.
Lamb, James M; Agazaryan, Nzhde; Low, Daniel A
2013-10-01
To determine whether kilovoltage x-ray projection radiation therapy setup images could be used to perform patient identification and detect gross errors in patient setup using a computer algorithm. Three patient cohorts treated using a commercially available image guided radiation therapy (IGRT) system that uses 2-dimensional to 3-dimensional (2D-3D) image registration were retrospectively analyzed: a group of 100 cranial radiation therapy patients, a group of 100 prostate cancer patients, and a group of 83 patients treated for spinal lesions. The setup images were acquired using fixed in-room kilovoltage imaging systems. In the prostate and cranial patient groups, localizations using image registration were performed between computed tomography (CT) simulation images from radiation therapy planning and setup x-ray images corresponding both to the same patient and to different patients. For the spinal patients, localizations were performed to the correct vertebral body, and to an adjacent vertebral body, using planning CTs and setup x-ray images from the same patient. An image similarity measure used by the IGRT system image registration algorithm was extracted from the IGRT system log files and evaluated as a discriminant for error detection. A threshold value of the similarity measure could be chosen to separate correct and incorrect patient matches and correct and incorrect vertebral body localizations with excellent accuracy for these patient cohorts. A 10-fold cross-validation using linear discriminant analysis yielded misclassification probabilities of 0.000, 0.0045, and 0.014 for the cranial, prostate, and spinal cases, respectively. An automated measure of the image similarity between x-ray setup images and corresponding planning CT images could be used to perform automated patient identification and detection of localization errors in radiation therapy treatments. Copyright © 2013 Elsevier Inc. All rights reserved.
Jani, Shyam S; Low, Daniel A; Lamb, James M
2015-01-01
To develop an automated system that detects patient identification and positioning errors between 3-dimensional computed tomography (CT) and kilovoltage CT planning images. Planning kilovoltage CT images were collected for head and neck (H&N), pelvis, and spine treatments with corresponding 3-dimensional cone beam CT and megavoltage CT setup images from TrueBeam and TomoTherapy units, respectively. Patient identification errors were simulated by registering setup and planning images from different patients. For positioning errors, setup and planning images were misaligned by 1 to 5 cm in the 6 anatomical directions for H&N and pelvis patients. Spinal misalignments were simulated by misaligning to adjacent vertebral bodies. Image pairs were assessed using commonly used image similarity metrics as well as custom-designed metrics. Linear discriminant analysis classification models were trained and tested on the imaging datasets, and misclassification error (MCE), sensitivity, and specificity parameters were estimated using 10-fold cross-validation. For patient identification, our workflow produced MCE estimates of 0.66%, 1.67%, and 0% for H&N, pelvis, and spine TomoTherapy images, respectively. Sensitivity and specificity ranged from 97.5% to 100%. MCEs of 3.5%, 2.3%, and 2.1% were obtained for TrueBeam images of the above sites, respectively, with sensitivity and specificity estimates between 95.4% and 97.7%. MCEs for 1-cm H&N/pelvis misalignments were 1.3%/5.1% and 9.1%/8.6% for TomoTherapy and TrueBeam images, respectively. Two-centimeter MCE estimates were 0.4%/1.6% and 3.1%/3.2%, respectively. MCEs for vertebral body misalignments were 4.8% and 3.6% for TomoTherapy and TrueBeam images, respectively. Patient identification and gross misalignment errors can be robustly and automatically detected using 3-dimensional setup images of different energies across 3 commonly treated anatomical sites. Copyright © 2015 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
Grochola, Lukasz Filip; Soll, Christopher; Zehnder, Adrian; Wyss, Roland; Herzog, Pascal; Breitenstein, Stefan
2017-02-09
Recent advances in robotic technology suggest that the utilization of the da Vinci Single-Site™ platform for cholecystectomy is safe, feasible and results in a shorter learning curve compared to conventional single-incision laparoscopic cholecystectomy. Moreover, the robot-assisted technology has been shown to reduce the surgeon's stress load compared to standard single-incision laparoscopy in an experimental setup, suggesting an important advantage of the da Vinci platform. However, the above-mentioned observations are based solely on case series, case reports and experimental data, as high-quality clinical trials to demonstrate the benefits of da Vinci Single-Site™ cholecystectomy have not been performed to date. This study addresses the question of whether robot-assisted Single-Site™ cholecystectomy provides significant benefits over single-incision laparoscopic cholecystectomy in terms of the surgeon's stress load, while matching the standards of the conventional single-incision approach with regard to peri- and postoperative outcomes. It is designed as a single-centre, single-blinded randomized controlled trial that compares both surgical approaches, with the primary endpoint being the surgeon's physical and mental stress load at the time of surgery. In addition, the study aims to assess secondary endpoints such as operating time, conversion rates, additional trocar placement, intra-operative blood loss, length of hospital stay, costs of the procedure, health-related quality of life, cosmesis and complications. Patients as well as ward staff are blinded until the 1st postoperative year. Sample size calculation, based on the results of a previously published experimental setup and utilizing an estimated effect size for surgeon's comfort of 0.8 (power of 0.8, alpha-error level of 0.05, error margin of 10-15%), resulted in 30 randomized patients per arm. The study is the first randomized controlled trial that compares the da Vinci Single-Site™ platform to conventional laparoscopic approaches in cholecystectomy, one of the most frequently performed operations in general surgery. This trial is registered at clinicaltrials.gov (trial number: NCT02485392). Registered February 19, 2015.
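The quoted sample size can be reproduced approximately with a standard two-sample t-test power calculation; the two-sided test and the exact way the 10-15% margin is applied are assumptions here, not details stated in the abstract:

```python
import math
from statsmodels.stats.power import TTestIndPower

# Effect size 0.8, alpha 0.05, power 0.80 for a two-sample comparison.
n_per_arm = TTestIndPower().solve_power(effect_size=0.8, alpha=0.05, power=0.8)
print(math.ceil(n_per_arm))          # ~26 patients per arm before any allowance
print(math.ceil(n_per_arm * 1.15))   # ~30 per arm after a 15% error margin
```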
Richmond, N D; Pilling, K E; Peedell, C; Shakespeare, D; Walker, C P
2012-01-01
Stereotactic body radiotherapy for early stage non-small cell lung cancer is an emerging treatment option in the UK. Since relatively few high-dose ablative fractions are delivered to a small target volume, the consequences of a geometric miss are potentially severe. This paper presents the results of treatment delivery set-up data collected using Elekta Synergy (Elekta, Crawley, UK) cone-beam CT imaging for 17 patients immobilised using the Bodyfix system (Medical Intelligence, Schwabmuenchen, Germany). Images were acquired on the linear accelerator at initial patient treatment set-up, following any position correction adjustments, and post-treatment. These were matched to the localisation CT scan using the Elekta XVI software. In total, 71 fractions were analysed for patient set-up errors. The mean vector error at initial set-up was calculated as 5.3±2.7 mm, which was significantly reduced to 1.4±0.7 mm following image guided correction. Post-treatment the corresponding value was 2.1±1.2 mm. The use of the Bodyfix abdominal compression plate on 5 patients to reduce the range of tumour excursion during respiration produced mean longitudinal set-up corrections of −4.4±4.5 mm compared with −0.7±2.6 mm without compression for the remaining 12 patients. The use of abdominal compression led to a greater variation in set-up errors and a shift in the mean value. PMID:22665927
SU-E-J-245: Is Off-Line Adaptive Radiotherapy Sufficient for Head and Neck Cancer with IGRT?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Z; Cleveland Clinic, Cleveland, OH; Shang, Q
2014-06-01
Purpose: Radiation doses delivered to patients with head and neck cancer (HN) may deviate from the planned doses because of variations in patient setup and anatomy. This study evaluated whether off-line adaptive radiotherapy (ART) is sufficient. Methods: Ten HN patients, who received IMRT under daily imaging guidance using CT-on-rails/kV-CBCT, were randomly selected for this study. For each patient, the daily treatment setup was corrected in the translational directions only. Sixty weekly verification CTs were retrospectively analyzed. On these weekly verification CTs, the tumor volumes and OAR contours were manually delineated by a physician. With the treatment isocenter placed on the verification CTs according to the recorded clinical shifts, the treatment beams from the original IMRT plans were then applied to these CTs to calculate the delivered doses. The electron density of the planning CTs and weekly CTs was overridden to 1 g/cm³. Results: Among the 60 fractions, the D99 of the CTVs decreased by more than 5% of the planned dose in 4 fractions. The maximum dose to the spinal cord exceeded the planned value by more than 10% in 2 fractions. A close examination indicated that the dose discrepancy in these 6 fractions was due to patient rotations, especially shoulder rotations. After registering these 6 CTs with the planning CT allowing six degrees of freedom, the maximum rotations around the 3 axes were > 1.5° for these fractions. With rotational setup errors removed, 4 out of 10 patients still required off-line ART to accommodate anatomical changes. Conclusion: Significant shoulder rotations were observed in 10% of fractions, requiring patient re-setup. Off-line ART alone is not sufficient to correct for random variations of patient position, although ART is effective in adapting to patients' gradual anatomic changes. Re-setup or on-line ART may be considered for patients with large deviations detected early by daily IGRT images. The study is supported in part by Siemens Medical Solutions.
Prevention of gross setup errors in radiotherapy with an efficient automatic patient safety system.
Yan, Guanghua; Mittauer, Kathryn; Huang, Yin; Lu, Bo; Liu, Chihray; Li, Jonathan G
2013-11-04
Treatment of the wrong body part due to incorrect setup is among the leading types of errors in radiotherapy. The purpose of this paper is to report an efficient automatic patient safety system (PSS) to prevent gross setup errors. The system consists of a pair of charge-coupled device (CCD) cameras mounted in the treatment room, a single infrared reflective marker (IRRM) affixed to the patient or immobilization device, and a set of in-house developed software. Patients are CT scanned with a CT BB placed on the skin surface close to the intended treatment site. Coordinates of the CT BB relative to the treatment isocenter are used as the reference for tracking. The CT BB is replaced with an IRRM before treatment starts. The PSS evaluates setup accuracy by comparing the real-time IRRM position with the reference position. To automate the system workflow, the PSS synchronizes with the record-and-verify (R&V) system in real time and automatically loads the reference data for the patient under treatment. Special IRRMs, which can be permanently attached to the patient's face mask or body mold throughout the course of treatment, were designed to minimize the therapists' workload. Accuracy of the system was examined on an anthropomorphic phantom with a designed end-to-end test. Its performance was also evaluated on head and neck as well as abdominal-pelvic patients using cone-beam CT (CBCT) as the standard. The PSS system achieved a seamless clinical workflow by synchronizing with the R&V system. By permanently mounting specially designed IRRMs on patient immobilization devices, therapist intervention is eliminated or minimized. Overall results showed that the PSS system has sufficient accuracy to catch gross setup errors greater than 1 cm in real time. An efficient automatic PSS with sufficient accuracy has been developed to prevent gross setup errors in radiotherapy. The system can be applied to all treatment sites for independent positioning verification. It can be an ideal complement to complex image-guidance systems due to its advantages of continuous tracking ability, no radiation dose, and fully automated clinical workflow.
SU-F-T-642: Sub Millimeter Accurate Setup of More Than Three Vertebrae in Spinal SBRT with 6D Couch
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, X; Zhao, Z; Yang, J
Purpose: To assess the initial setup accuracy in treating more than 3 vertebral body levels in spinal SBRT using a 6D couch. Methods: We retrospectively analyzed the last 20 spinal SBRT patients (4 cervical, 9 thoracic, 7 lumbar/sacrum) treated in our clinic. These patients, in customized immobilization devices, were treated in 1 or 3 fractions. Initial setup used ExacTrac and a Brainlab 6D couch to align the target within 1 mm and 1 degree, followed by a cone beam CT (CBCT) for verification. Our current standard practice allows treating a maximum of three continuous vertebrae. Here we assess the possibility of achieving sub-millimeter setup accuracy for more than three vertebrae by examining the residual error in every slice of the CBCT. The CBCT had a range of 17.5 cm, which covered 5 to 9 continuous vertebrae depending on the patient and target location. In the study, the CBCT from the 1st treatment fraction was rigidly registered with the planning CT in Pinnacle. The residual setup error of a vertebra was determined by expanding the vertebra contour on the planning CT until it was large enough to enclose the corresponding vertebra on the CBCT. The margin of the expansion was taken as the setup error. Results: For the 20 patients analyzed, initial setup accuracy within 1 mm could be achieved for a span of 5 or more vertebrae from the T2 vertebra to inferior vertebral levels. In 2 cervical and 2 upper thoracic patients, it was difficult to achieve sub-millimeter accuracy across multiple levels of the cervical spine without a customized immobilization headrest. Conclusion: If the curvature of the spinal column can be reproduced in a customized immobilization device during treatment as at simulation, multiple continuous vertebrae can be set up within 1 mm with the use of a 6D couch.
Measurement of thermal conductivity and thermal diffusivity using a thermoelectric module
NASA Astrophysics Data System (ADS)
Beltrán-Pitarch, Braulio; Márquez-García, Lourdes; Min, Gao; García-Cañadas, Jorge
2017-04-01
A proof of concept of using a thermoelectric module to measure both thermal conductivity and thermal diffusivity of bulk disc samples at room temperature is demonstrated. The method involves the calculation of the integral area from an impedance spectrum, which empirically correlates with the thermal properties of the sample through an exponential relationship. This relationship was obtained employing different reference materials. The impedance spectroscopy measurements are performed in a very simple setup, comprising a thermoelectric module, which is soldered at its bottom side to a Cu block (heat sink) and thermally connected with the sample at its top side employing thermal grease. Random and systematic errors of the method were calculated for the thermal conductivity (18.6% and 10.9%, respectively) and thermal diffusivity (14.2% and 14.7%, respectively) employing a BCR724 standard reference material. Although the errors are somewhat high, the technique could be useful in its current state for screening purposes or high-throughput measurements. This new method establishes a new application for thermoelectric modules as thermal property sensors. It involves the use of a very simple setup in conjunction with a frequency response analyzer, which provides a low-cost alternative to most of the currently available apparatus on the market. In addition, impedance analyzers are reliable and widespread equipment, which eases the sometimes difficult access to thermal conductivity facilities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fredriksson, Albin, E-mail: albin.fredriksson@raysearchlabs.com; Hårdemark, Björn; Forsgren, Anders
2015-07-15
Purpose: This paper introduces a method that maximizes the probability of satisfying the clinical goals in intensity-modulated radiation therapy treatments subject to setup uncertainty. Methods: The authors perform robust optimization in which the clinical goals are constrained to be satisfied whenever the setup error falls within an uncertainty set. The shape of the uncertainty set is included as a variable in the optimization. The goal of the optimization is to modify the shape of the uncertainty set in order to maximize the probability that the setup error will fall within the modified set. Because the constraints enforce the clinical goals to be satisfied under all setup errors within the uncertainty set, this is equivalent to maximizing the probability of satisfying the clinical goals. This type of robust optimization is studied with respect to photon and proton therapy applied to a prostate case and compared to robust optimization using an a priori defined uncertainty set. Results: Slight reductions of the uncertainty sets resulted in plans that satisfied a larger number of clinical goals than optimization with respect to a priori defined uncertainty sets, both within the reduced uncertainty sets and within the a priori, nonreduced, uncertainty sets. For the prostate case, the plans taking reduced uncertainty sets into account satisfied 1.4 (photons) and 1.5 (protons) times as many clinical goals over the scenarios as the method taking a priori uncertainty sets into account. Conclusions: Reducing the uncertainty sets enabled the optimization to find better solutions with respect to the errors within the reduced as well as the nonreduced uncertainty sets and thereby achieve higher probability of satisfying the clinical goals. This shows that asking for a little less in the optimization sometimes leads to better overall plan quality.
Feedforward operation of a lens setup for large defocus and astigmatism correction
NASA Astrophysics Data System (ADS)
Verstraete, Hans R. G. W.; Almasian, MItra; Pozzi, Paolo; Bilderbeek, Rolf; Kalkman, Jeroen; Faber, Dirk J.; Verhaegen, Michel
2016-04-01
In this manuscript, we present a lens setup for large defocus and astigmatism correction. A deformable defocus lens and two rotational cylindrical lenses are used to control the defocus and astigmatism. The setup is calibrated using a simple model that allows the calculation of the lens inputs so that a desired defocus and astigmatism are actuated on the eye. The setup is tested by determining the feedforward prediction error, imaging a resolution target, and removing introduced aberrations.
Quantitative evaluation of statistical errors in small-angle X-ray scattering measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sedlak, Steffen M.; Bruetzel, Linda K.; Lipfert, Jan
A new model is proposed for the measurement errors incurred in typical small-angle X-ray scattering (SAXS) experiments, which takes into account the setup geometry and physics of the measurement process. The model accurately captures the experimentally determined errors from a large range of synchrotron and in-house anode-based measurements. Its most general formulation gives for the variance of the buffer-subtracted SAXS intensity σ²(q) = [I(q) + const.]/(kq), where I(q) is the scattering intensity as a function of the momentum transfer q; k and const. are fitting parameters that are characteristic of the experimental setup. The model gives a concrete procedure for calculating realistic measurement errors for simulated SAXS profiles. In addition, the results provide guidelines for optimizing SAXS measurements, which are in line with established procedures for SAXS experiments, and enable a quantitative evaluation of measurement errors.
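As a rough illustration of how a variance model of this form could be used in practice, the Python sketch below attaches errors of the stated form σ²(q) = [I(q) + const.]/(kq) to a simulated profile and draws one noisy realization. The fit parameters k and const. and the model intensity are made-up placeholders, not values from the paper.

import numpy as np

# Hypothetical setup-specific fit parameters (placeholders, not from the paper).
k = 5.0e3        # scales roughly with exposure/flux
const = 20.0     # background-like offset in the same units as I(q)

q = np.linspace(0.05, 5.0, 200)            # momentum transfer grid
I_model = 1.0e3 * np.exp(-(2.0 * q) ** 2)  # stand-in scattering profile

# Variance model from the abstract: sigma^2(q) = [I(q) + const.] / (k * q)
sigma = np.sqrt((I_model + const) / (k * q))

# Draw one noisy realization of the simulated profile
rng = np.random.default_rng(0)
I_noisy = I_model + rng.normal(0.0, sigma)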
NASA Astrophysics Data System (ADS)
Wu, D.; Lin, J. C.; Oda, T.; Ye, X.; Lauvaux, T.; Yang, E. G.; Kort, E. A.
2017-12-01
Urban regions are large emitters of CO2 whose emission inventories are still associated with large uncertainties. Therefore, a strong need exists to better quantify emissions from megacities using a top-down approach. Satellites, e.g., the Orbiting Carbon Observatory 2 (OCO-2), provide a platform for monitoring spatiotemporal column CO2 concentrations (XCO2). In this study, we present a Lagrangian receptor-oriented model framework and evaluate "model-retrieved" XCO2 by comparing against OCO-2-retrieved XCO2, for three megacities/regions (Riyadh, Cairo and Pearl River Delta). OCO-2 soundings indicate pronounced XCO2 enhancements (dXCO2) when crossing Riyadh, which are successfully captured by our model with a slight latitude shift. From this model framework, we can identify and compare the relative contributions to dXCO2 from anthropogenic emissions versus biospheric fluxes. In addition, to impose constraints on emissions for Riyadh through inversion methods, three uncertainty sources are addressed in this study, including 1) transport errors, 2) receptor and model setups in atmospheric models, and 3) urban emission uncertainties. For 1), we calculate transport errors by adding a wind error component to randomize particle distributions. For 2), a set of sensitivity tests using a bootstrap method is performed to describe proper ways to set up receptors in Lagrangian models. For 3), both emission uncertainties from the Fossil Fuel Data Assimilation System (FFDAS) and the spread among three emission inventories are used to approximate an overall fractional uncertainty in the modeled anthropogenic signal (dXCO2.anthro). Lastly, we investigate the definition of background (clean) XCO2 for megacities from retrieved XCO2 by means of statistical tools and our model framework.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Qinghui; Chan, Maria F.; Burman, Chandra
2013-12-15
Purpose: Setting a proper margin is crucial for not only delivering the required radiation dose to a target volume, but also reducing the unnecessary radiation to the adjacent organs at risk. This study investigated the independent one-dimensional symmetric and asymmetric margins between the clinical target volume (CTV) and the planning target volume (PTV) for linac-based single-fraction frameless stereotactic radiosurgery (SRS). Methods: The authors assumed a Dirac delta function for the systematic error of a specific machine and a Gaussian function for the residual setup errors. Margin formulas were then derived in detail to arrive at a suitable CTV-to-PTV margin for single-fraction frameless SRS. Such a margin ensured that the CTV would receive the prescribed dose in 95% of the patients. To validate our margin formalism, the authors retrospectively analyzed nine patients who were previously treated with noncoplanar conformal beams. Cone-beam computed tomography (CBCT) was used in the patient setup. The isocenter shifts between the CBCT and linac were measured for a Varian Trilogy linear accelerator for three months. For each plan, the authors shifted the isocenter of the plan in each direction by ±3 mm simultaneously to simulate the worst setup scenario. Subsequently, the asymptotic behavior of the CTV V80% for each patient was studied as the setup error approached the CTV-PTV margin. Results: The authors found that the proper margin for single-fraction frameless SRS cases with brain cancer was about 3 mm for the machine investigated in this study. The isocenter shifts between the CBCT and the linac remained almost constant over a period of three months for this specific machine. This confirmed our assumption that the machine systematic error distribution could be approximated as a delta function. This definition is especially relevant to a single-fraction treatment. The prescribed dose coverage for all the patients investigated was 96.1% ± 5.5% with an extreme 3-mm setup error in all three directions simultaneously. It was found that the effect of the setup error on dose coverage was tumor location dependent. It mostly affected the tumors located in the posterior part of the brain, resulting in a minimum coverage of approximately 72%. This was entirely due to the unique geometry of the posterior head. Conclusions: Margin expansion formulas were derived for single-fraction frameless SRS such that the CTV would receive the prescribed dose in 95% of the patients treated for brain cancer. The margins defined in this study are machine-specific and account for nonzero mean systematic error. The margin for single-fraction SRS for a group of machines was also derived in this paper.
Model-based sensor-less wavefront aberration correction in optical coherence tomography.
Verstraete, Hans R G W; Wahls, Sander; Kalkman, Jeroen; Verhaegen, Michel
2015-12-15
Several sensor-less wavefront aberration correction methods that correct nonlinear wavefront aberrations by maximizing the optical coherence tomography (OCT) signal are tested on an OCT setup. A conventional coordinate search method is compared to two model-based optimization methods. The first model-based method takes advantage of the well-known optimization algorithm (NEWUOA) and utilizes a quadratic model. The second model-based method (DONE) is new and utilizes a random multidimensional Fourier-basis expansion. The model-based algorithms achieve lower wavefront errors with up to ten times fewer measurements. Furthermore, the newly proposed DONE method outperforms the NEWUOA method significantly. The DONE algorithm is tested on OCT images and shows a significantly improved image quality.
A Noninvasive Body Setup Method for Radiotherapy by Using a Multimodal Image Fusion Technique
Zhang, Jie; Chen, Yunxia; Wang, Chenchen; Chu, Kaiyue; Jin, Jianhua; Huang, Xiaolin; Guan, Yue; Li, Weifeng
2017-01-01
Purpose: To minimize the mismatch error between patient surface and immobilization system for tumor location by a noninvasive patient setup method. Materials and Methods: The method, based on point set registration, proposes a shift for patient positioning by integrating information from the computed tomography scans and from optical surface landmarks. An evaluation of the method included 3 areas: (1) a validation on a phantom by estimating 100 known mismatch errors between patient surface and immobilization system. (2) Five patients with pelvic tumors were considered. The tumor location errors of the method were measured using the difference between the proposed shift of cone-beam computed tomography and that of our method. (3) The collected setup data from the evaluation of patients were compared with the published performance data of 2 other similar systems. Results: The phantom verification results showed that the method was capable of estimating the mismatch error between patient surface and immobilization system with a precision of <0.22 mm. For the pelvic tumor, the method had an average tumor location error of 1.303, 2.602, and 1.684 mm in the left–right, anterior–posterior, and superior–inferior directions, respectively. The performance comparison with the 2 other similar systems suggested that the method had better positioning accuracy for pelvic tumor location. Conclusion: By effectively decreasing an interfraction uncertainty source (mismatch error between patient surface and immobilization system) in radiotherapy, the method can improve patient positioning precision for pelvic tumor. PMID:29333959
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gangsaas, Anne, E-mail: a.gangsaas@erasmusmc.nl; Astreinidou, Eleftheria; Quint, Sandra
2013-10-01
Purpose: To investigate interfraction setup variations of the primary tumor, elective nodes, and vertebrae in laryngeal cancer patients and to validate protocols for cone beam computed tomography (CBCT)-guided correction. Methods and Materials: For 30 patients, CBCT-measured displacements in fractionated treatments were used to investigate population setup errors and to simulate residual setup errors for the no action level (NAL) offline protocol, the extended NAL (eNAL) protocol, and daily CBCT acquisition with online analysis and repositioning. Results: Without corrections, 12 of 26 patients treated with radical radiation therapy would have experienced a gradual change (time trend) in primary tumor setup ≥4 mm in the craniocaudal (CC) direction during the fractionated treatment (11/12 in caudal direction, maximum 11 mm). Due to these trends, correction of primary tumor displacements with NAL resulted in large residual CC errors (required margin 6.7 mm). With the weekly correction vector adjustments in eNAL, the trends could be largely compensated (CC margin 3.5 mm). Correlation between movements of the primary and nodal clinical target volumes (CTVs) in the CC direction was poor (r² = 0.15). Therefore, even with online setup corrections of the primary CTV, the required CC margin for the nodal CTV was as large as 6.8 mm. Also for the vertebrae, large time trends were observed for some patients. Because of poor CC correlation (r² = 0.19) between displacements of the primary CTV and the vertebrae, even with daily online repositioning of the vertebrae, the required CC margin around the primary CTV was 6.9 mm. Conclusions: Laryngeal cancer patients showed substantial interfraction setup variations, including large time trends, and poor CC correlation between primary tumor displacements and motion of the nodes and vertebrae (internal tumor motion). These trends and nonrigid anatomy variations have to be considered in the choice of setup verification protocol and planning target volume margins. eNAL could largely compensate time trends with minor prolongation of fraction time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kapanen, Mika; Department of Medical Physics, Tampere University Hospital; Laaksomaa, Marko, E-mail: Marko.Laaksomaa@pshp.fi
2016-04-01
Residual position errors of the lymph node (LN) surrogates and humeral head (HH) were determined for 2 different arm fixation devices in radiotherapy (RT) of breast cancer: a standard wrist-hold (WH) and a house-made rod-hold (RH). The effect of arm position correction (APC) based on setup images was also investigated. A total of 113 consecutive patients with early-stage breast cancer with LN irradiation were retrospectively analyzed (53 and 60 using the WH and RH, respectively). Residual position errors of the LN surrogates (Th1-2 and clavicle) and the HH were investigated to compare the 2 fixation devices. The position errors and setup margins were determined before and after the APC to investigate the efficacy of the APC in the treatment situation. A threshold of 5 mm was used for the residual errors of the clavicle and Th1-2 to perform the APC, and a threshold of 7 mm was used for the HH. The setup margins were calculated with the van Herk formula. Irradiated volumes of the HH were determined from RT treatment plans. With the WH and the RH, setup margins up to 8.1 and 6.7 mm should be used for the LN surrogates, and margins up to 4.6 and 3.6 mm should be used to spare the HH, respectively, without the APC. After the APC, the margins of the LN surrogates were equal to or less than 7.5/6.0 mm with the WH/RH, but margins up to 4.2/2.9 mm were required for the HH. The APC was needed at least once with both the devices for approximately 60% of the patients. With the RH, irradiated volume of the HH was approximately 2 times more than with the WH, without any dose constraints. Use of the RH together with the APC resulted in minimal residual position errors and setup margins for all the investigated bony landmarks. Based on the obtained results, we prefer the house-made RH. However, more attention should be given to minimize the irradiation of the HH with the RH than with the WH.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, A; Foster, J; Chu, W
2015-06-15
Purpose: Many cancer centers treat colorectal patients in the prone position on a belly board to minimize dose to the small bowel. That may potentially result in patient setup instability, with a corresponding impact on dose delivery accuracy for highly conformal techniques such as IMRT/VMAT. The two aims of this work are 1) to investigate the setup accuracy of rectum patients treated in the prone position on a belly board using CBCT and 2) to evaluate the dosimetric impact on bladder and small bowel of treating rectum patients in the supine vs. prone position. Methods: For the setup accuracy study, 10 patients were selected. Weekly CBCTs were acquired and matched to bone. The CBCT-determined shifts were recorded. For the dosimetric study, 7 prone-setup patients and 7 supine-setup patients were randomly selected from our clinical database. Various clinically relevant dose volume histogram values were recorded for the small bowel and bladder. Results: The CBCT-determined rotational shifts had a wide variation. For the dataset acquired at the time of this writing, the ranges of rotational setup errors for pitch, roll, and yaw were [−3.6°, 4.7°], [−4.3°, 3.2°], and [−1.4°, 1.4°]. For the dosimetric study: the small bowel V(45Gy) and mean dose for the prone position were 5.6±12.1% and 18.4±6.2Gy (± values indicate standard deviations); for the supine position the corresponding values were 12.9±15.8% and 24.7±8.8Gy. For the bladder, the V(30Gy) and mean dose for the prone position were 68.7±12.7% and 38.4±3.3Gy; for the supine position these values were 77.1±13.7% and 40.7±3.1Gy. Conclusion: There is evidence of significant rotational instability in the prone position. The OAR dosimetry study indicates that some patients may still benefit from the prone position, though many patients can be safely treated supine.
Chang, Jenghwa
2017-06-01
To develop a statistical model that incorporates the treatment uncertainty from the rotational error of the single isocenter for multiple targets technique, and calculates the extra PTV (planning target volume) margin required to compensate for this error. The random vector for modeling the setup (S) error in the three-dimensional (3D) patient coordinate system was assumed to follow a 3D normal distribution with a zero mean and standard deviations of σx, σy, σz. It was further assumed that the rotation of the clinical target volume (CTV) about the isocenter happens randomly and follows a three-dimensional (3D) independent normal distribution with a zero mean and a uniform standard deviation of σδ. This rotation leads to a rotational random error (R), which also has a 3D independent normal distribution with a zero mean and a uniform standard deviation σR equal to the product of σδ·π/180 and d_I⇔T, the distance between the isocenter and CTV. Both (S and R) random vectors were summed, normalized, and transformed to spherical coordinates to derive the Chi distribution with three degrees of freedom for the radial coordinate of S+R. The PTV margin was determined using the critical value of this distribution for a 0.05 significance level so that 95% of the time the treatment target would be covered by the prescription dose. The additional PTV margin required to compensate for the rotational error was calculated as a function of σR and d_I⇔T. The effect of the rotational error is more pronounced for treatments that require high accuracy/precision like stereotactic radiosurgery (SRS) or stereotactic body radiotherapy (SBRT). With a uniform 2-mm PTV margin (or σx = σy = σz = 0.715 mm), a σR = 0.328 mm will decrease the CTV coverage probability from 95.0% to 90.9%, or an additional 0.2-mm PTV margin is needed to prevent this loss of coverage. If we choose 0.2 mm as the threshold, any σR > 0.328 mm will lead to an extra PTV margin that cannot be ignored, and the maximal σδ that can be ignored is 0.45° (or 0.0079 rad) for d_I⇔T = 50 mm or 0.23° (or 0.004 rad) for d_I⇔T = 100 mm. The rotational error cannot be ignored for high-accuracy/-precision treatments like SRS/SBRT, particularly when the distance between the isocenter and target is large. © 2017 American Association of Physicists in Medicine.
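A minimal sketch of the margin logic described above, assuming the per-axis setup and rotation-induced errors add in quadrature and taking the 95th-percentile critical value of the chi distribution with three degrees of freedom (via scipy). The function name and the rotation/distance inputs are placeholders; only the 0.715 mm per-axis setup SD, which the abstract equates to a uniform 2-mm margin, is taken from the text.

import numpy as np
from scipy.stats import chi

def ptv_margin(sigma_xyz, sigma_delta_deg, d_mm, coverage=0.95):
    # sigma_xyz: per-axis setup SD (mm), assumed isotropic here
    # sigma_delta_deg: SD of the random rotation about the isocenter (degrees)
    # d_mm: distance between isocenter and CTV (mm)
    sigma_r = np.deg2rad(sigma_delta_deg) * d_mm        # rotation-induced SD (mm)
    sigma_total = np.sqrt(sigma_xyz**2 + sigma_r**2)    # per-axis SD of S + R
    return chi.ppf(coverage, df=3) * sigma_total         # radial margin for 95% coverage

# Illustrative numbers only:
base = ptv_margin(0.715, 0.0, 100.0)       # ~2 mm, matching the uniform 2-mm margin quoted above
with_rot = ptv_margin(0.715, 1.0, 100.0)   # add a 1 deg rotational SD at 100 mm from the isocenter
extra = with_rot - base                     # ~3 mm of extra margin: clearly not negligible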
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chong, Irene; Hawkins, Maria; Hansen, Vibeke
2011-11-15
Purpose: There has been no previously published data related to the quantification of rectal motion using cone-beam computed tomography (CBCT) during standard conformal long-course chemoradiotherapy. The purpose of the present study was to quantify the interfractional changes in rectal movement and dimensions and rectal and bladder volume using CBCT and to quantify the bony anatomy displacements to calculate the margins required to account for systematic (Σ) and random (σ) setup errors. Methods and Materials: CBCT images were acquired from 16 patients on the first 3 days of treatment and weekly thereafter. The rectum and bladder were outlined on all CBCT images. The interfraction movement was measured using fixed bony landmarks as references to define the rectal location (upper, mid, and low). The maximal rectal diameter at the three rectal locations was also measured. The bony anatomy displacements were quantified, allowing the calculation of systematic (Σ) and random (σ) setup errors. Results: A total of 123 CBCT data sets were analyzed. Analysis of variance for standard deviation from planning scans showed that rectal anterior and lateral wall movement differed significantly by rectal location. Anterior and lateral rectal wall movements were larger in the mid and upper rectum compared with the low rectum. The posterior rectal wall movement did not change significantly with the rectal location. The rectal diameter changed more in the mid and upper than in the low rectum. No consistent relationship was found between the rectal and bladder volume and time, nor was a significant relationship found between the rectal volume and bladder volume. Conclusions: In the present study, the anterior and lateral rectal movement and rectal diameter were found to change most in the upper rectum, followed by the mid rectum, with the smallest changes seen in the low rectum. Asymmetric margins are warranted to ensure phase 2 coverage.
Karlsson, Kristin; Lax, Ingmar; Lindbäck, Elias; Poludniowski, Gavin
2017-09-01
Geometrical uncertainties can result in a delivered dose to the tumor different from that estimated in the static treatment plan. The purpose of this project was to investigate the accuracy of the dose calculated to the clinical target volume (CTV) with the dose-shift approximation, in stereotactic body radiation therapy (SBRT) of lung tumors considering setup errors and breathing motion. The dose-shift method was compared with a beam-shift method with dose recalculation. Included were 10 patients (10 tumors) selected to represent a variety of SBRT-treated lung tumors in terms of tumor location, CTV volume, and tumor density. An in-house developed toolkit within a treatment planning system allowed the shift of either the dose matrix or a shift of the beam isocenter with dose recalculation, to simulate setup errors and breathing motion. Setup shifts of different magnitudes (up to 10 mm) and directions as well as breathing with different peak-to-peak amplitudes (up to 10:5:5 mm) were modeled. The resulting dose-volume histograms (DVHs) were recorded and dose statistics were extracted. Generally, both the dose-shift and beam-shift methods resulted in calculated doses lower than the static planned dose, although the minimum (D98%) dose exceeded the prescribed dose in all cases, for setup shifts up to 5 mm. The dose-shift method also generally underestimated the dose compared with the beam-shift method. For clinically realistic systematic displacements of less than 5 mm, the results demonstrated that in the minimum dose region within the CTV, the dose-shift method was accurate to 2% (root-mean-square error). Breathing motion only marginally degraded the dose distributions. Averaged over the patients and shift directions, the dose-shift approximation was determined to be accurate to approximately 2% (RMS) within the CTV, for clinically relevant geometrical uncertainties for SBRT of lung tumors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gupta, N; DiCostanzo, D; Fullenkamp, M
2015-06-15
Purpose: To determine appropriate couch tolerance values for modern radiotherapy linac R&V systems with indexed patient setup. Methods: Treatment table tolerance values have been the most difficult to lower, due to many factors including variations in patient positioning and differences in table tops between machines. We recently installed nine linacs with similar tables and started indexing every patient in our clinic. In this study we queried our R&V database and analyzed the deviation of couch position values from the values acquired at verification simulation for all patients treated with indexed positioning. Means and standard deviations of daily setup deviations were computed in the longitudinal, lateral and vertical directions for 343 patient plans. The mean, median and standard error of the standard deviations across the whole patient population and for some disease sites were computed to determine tolerance values. Results: The plot of our couch deviation values showed a Gaussian distribution, with some small deviations, corresponding to setup uncertainties on non-imaging days and SRS/SRT/SBRT patients, as well as some large deviations which were spot checked and found to correspond to indexing errors that were overridden. Setting our tolerance values based on the median + 1 standard error resulted in tolerance values of 1 cm lateral and longitudinal, and 0.5 cm vertical for all non-SRS/SRT/SBRT cases. Re-analyzing the data, we found that about 92% of the treated fractions would be within these tolerance values (ignoring the mis-indexed patients). We also analyzed data for disease-site-based subpopulations and found no difference in the tolerance values that needed to be used. Conclusion: With the use of automation, auto-setup and other workflow efficiency tools being introduced into the radiotherapy workflow, it is essential to set table tolerances that allow safe treatments, but flag setup errors that need to be reassessed before treatments.
A Study of Vicon System Positioning Performance.
Merriaux, Pierre; Dupuis, Yohan; Boutteau, Rémi; Vasseur, Pascal; Savatier, Xavier
2017-07-07
Motion capture setups are used in numerous fields. Studies based on motion capture data can be found in biomechanical, sport or animal science. Clinical science studies include gait analysis as well as balance, posture and motor control. Robotic applications encompass object tracking. Everyday applications include entertainment or augmented reality. Still, few studies investigate the positioning performance of motion capture setups. In this paper, we study the positioning performance of one player in marker-based optoelectronic motion capture: the Vicon system. Our protocol includes evaluations of static and dynamic performances. Mean error as well as positioning variabilities are studied with calibrated ground truth setups that are not based on other motion capture modalities. We introduce a new setup that enables directly estimating the absolute positioning accuracy for dynamic experiments, contrary to state-of-the-art works that rely on inter-marker distances. The system performs well in static experiments, with a mean absolute error of 0.15 mm and a variability lower than 0.025 mm. Our dynamic experiments were carried out at speeds found in real applications. Our work suggests that the system error is less than 2 mm. We also found that marker size and Vicon sampling rate must be carefully chosen with respect to the speeds encountered in the application in order to reach optimal positioning performance, which can go down to 0.3 mm in our dynamic study.
Decoy-state quantum key distribution with biased basis choice
Wei, Zhengchao; Wang, Weilong; Zhang, Zhen; Gao, Ming; Ma, Zhi; Ma, Xiongfeng
2013-01-01
We propose a quantum key distribution scheme that combines a biased basis choice with the decoy-state method. In this scheme, Alice sends all signal states in the Z basis and decoy states in the X and Z basis with certain probabilities, and Bob measures received pulses with optimal basis choice. This scheme simplifies the system and reduces the random number consumption. From the simulation result taking into account of statistical fluctuations, we find that in a typical experimental setup, the proposed scheme can increase the key rate by at least 45% comparing to the standard decoy-state scheme. In the postprocessing, we also apply a rigorous method to upper bound the phase error rate of the single-photon components of signal states. PMID:23948999
Hijazi, Bilal; Cool, Simon; Vangeyte, Jürgen; Mertens, Koen C; Cointault, Frédéric; Paindavoine, Michel; Pieters, Jan G
2014-11-13
A 3D imaging technique using a high speed binocular stereovision system was developed in combination with corresponding image processing algorithms for accurate determination of the parameters of particles leaving the spinning disks of centrifugal fertilizer spreaders. Validation of the stereo-matching algorithm using a virtual 3D stereovision simulator indicated an error of less than 2 pixels for 90% of the particles. The setup was validated using the cylindrical spread pattern of an experimental spreader. A 2D correlation coefficient of 90% and a relative error of 27% were found between the experimental results and the (simulated) spread pattern obtained with the developed setup. In combination with a ballistic flight model, the developed image acquisition and processing algorithms can enable fast determination and evaluation of the spread pattern, which can be used as a tool for spreader design and precise machine calibration.
Zumsteg, Zachary; DeMarco, John; Lee, Steve P; Steinberg, Michael L; Lin, Chun Shu; McBride, William; Lin, Kevin; Wang, Pin-Chieh; Kupelian, Patrick; Lee, Percy
2012-06-01
On-board cone-beam computed tomography (CBCT) is currently available for alignment of patients with head-and-neck cancer before radiotherapy. However, daily CBCT is time intensive and increases the overall radiation dose. We assessed the feasibility of using the average couch shifts from the first several CBCTs to estimate and correct for the presumed systematic setup error. 56 patients with head-and-neck cancer who received daily CBCT before intensity-modulated radiation therapy had recorded shift values in the medial-lateral, superior-inferior, and anterior-posterior dimensions. The average displacements in each direction were calculated for each patient based on the first five or 10 CBCT shifts and were presumed to represent the systematic setup error. The residual error after this correction was determined by subtracting the calculated shifts from the shifts obtained using daily CBCT. The magnitude of the average daily residual three-dimensional (3D) error was 4.8 ± 1.4 mm, 3.9 ± 1.3 mm, and 3.7 ± 1.1 mm for uncorrected, five CBCT corrected, and 10 CBCT corrected protocols, respectively. With no image guidance, 40.8% of fractions would have been >5 mm off target. Using the first five CBCT shifts to correct subsequent fractions, this percentage decreased to 19.0% of all fractions delivered and decreased the percentage of patients with average daily 3D errors >5 mm from 35.7% to 14.3% vs. no image guidance. Using an average of the first 10 CBCT shifts did not significantly improve this outcome. Using the first five CBCT shift measurements as an estimation of the systematic setup error improves daily setup accuracy for a subset of patients with head-and-neck cancer receiving intensity-modulated radiation therapy and primarily benefited those with large 3D correction vectors (>5 mm). Daily CBCT is still necessary until methods are developed that more accurately determine which patients may benefit from alternative imaging strategies. Copyright © 2012 Elsevier Inc. All rights reserved.
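A minimal sketch of the correction scheme evaluated above: average the first five daily CBCT shifts, treat that mean as the systematic setup error, and measure the residual 3D error of the remaining fractions. All numbers and variable names here are synthetic placeholders that only illustrate the bookkeeping, not the study's data.

import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily CBCT couch shifts (mm) for one patient: a fixed systematic
# component plus day-to-day random variation (hypothetical values).
true_systematic = np.array([3.0, -2.0, 1.5])               # ML, SI, AP
daily_shifts = true_systematic + rng.normal(0.0, 2.0, (35, 3))

# Estimate the systematic error from the first five fractions ...
estimate = daily_shifts[:5].mean(axis=0)

# ... and apply it as a fixed correction to all later fractions.
residual = daily_shifts[5:] - estimate
residual_3d = np.linalg.norm(residual, axis=1)

print(f"mean residual 3D error: {residual_3d.mean():.1f} mm")
print(f"fractions off target by >5 mm: {(residual_3d > 5).mean():.0%}")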
SU-F-J-130: Margin Determination for Hypofractionated Partial Breast Irradiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geady, C; Keller, B; Hahn, E
2016-06-15
Purpose: To determine the Planning Target Volume (PTV) margin for Hypofractionated Partial Breast Irradiation (HPBI) using the van Herk formalism (M = 2.5Σ + 0.7σ). HPBI is a novel technique intended to provide local control in breast cancer patients not eligible for surgical resection, using 40 Gy in 5 fractions prescribed to the gross disease. Methods: Setup uncertainties were quantified through retrospective analysis of cone-beam computed tomography (CBCT) data sets, collected prior to (prefraction) and after (postfraction) treatment delivery. During simulation and treatment, patients were immobilized using a wing board and an evacuated bag. The prefraction CBCT was rigidly registered to the planning 4-dimensional computed tomography (4DCT) using the chest wall and tumor, and translational couch shifts were applied as needed. This clinical workflow was faithfully reproduced in Pinnacle (Philips Medical Systems) to yield residual setup and intrafractional error through translational shifts and rigid registrations (ribs and sternum) of prefraction CBCT to 4DCT and postfraction CBCT to prefraction CBCT, respectively. All ten patients included in this investigation were medically inoperable; the median age was 84 (range, 52–100) years. Results: Systematic (and random) setup uncertainties (in mm) detected for the left-right, craniocaudal and anteroposterior directions were 0.4 (1.5), 0.8 (1.8) and 0.4 (1.0); the net uncertainty was determined to be 0.7 (1.5). Rotations >2° in any axis occurred in 8/72 (11.1%) registrations. Conclusion: Preliminary results suggest a non-uniform setup margin (in mm) of 2.2, 3.3 and 1.7 for the left-right, craniocaudal and anteroposterior directions is required for HPBI, given its immobilization techniques and online setup verification protocol. This investigation is ongoing, though published results from similar studies are consistent with the above findings. Determination of margins in breast radiotherapy is a paradigm shift, but a necessary step in moving towards hypofractionated regimens, which may ultimately redefine the standard of care for this select patient population.
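For reference, applying the quoted van Herk recipe to the per-direction uncertainties reported above reproduces margins close to the stated 2.2, 3.3 and 1.7 mm (the small left-right difference presumably comes from rounding of the published Σ and σ). A minimal sketch:

def van_herk_margin(Sigma_mm, sigma_mm):
    # Population margin recipe quoted in the abstract: M = 2.5*Sigma + 0.7*sigma
    return 2.5 * Sigma_mm + 0.7 * sigma_mm

# Per-direction systematic (random) uncertainties reported above, in mm.
directions = {
    "left-right": (0.4, 1.5),        # -> ~2.1 mm (2.2 mm reported)
    "craniocaudal": (0.8, 1.8),      # -> ~3.3 mm
    "anteroposterior": (0.4, 1.0),   # -> ~1.7 mm
}
for name, (Sigma, sigma) in directions.items():
    print(f"{name}: {van_herk_margin(Sigma, sigma):.1f} mm")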
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwa, Stefan L.S., E-mail: s.kwa@erasmusmc.nl; Al-Mamgani, Abrahim; Osman, Sarah O.S.
2015-09-01
Purpose: The purpose of this study was to verify clinical target volume–planning target volume (CTV-PTV) margins in single vocal cord irradiation (SVCI) of T1a larynx tumors and characterize inter- and intrafraction target motion. Methods and Materials: For 42 patients, a single vocal cord was irradiated using intensity modulated radiation therapy at a total dose of 58.1 Gy (16 fractions × 3.63 Gy). A daily cone beam computed tomography (CBCT) scan was performed to correct the setup of the thyroid cartilage online after patient positioning with in-room lasers (interfraction motion correction). To monitor intrafraction motion, CBCT scans were also acquired just after patient repositioning and after dose delivery. A mixed online-offline setup correction protocol ("O2 protocol") was designed to compensate for both inter- and intrafraction motion. Results: Observed interfraction systematic (Σ) and random (σ) setup errors in the left-right (LR), craniocaudal (CC), and anteroposterior (AP) directions were 0.9, 2.0, and 1.1 mm and 1.0, 1.6, and 1.0 mm, respectively. After correction of these errors, the following intrafraction movements were derived from the CBCT acquired after dose delivery: Σ = 0.4, 1.3, and 0.7 mm, and σ = 0.8, 1.4, and 0.8 mm. More than half of the patients showed a systematic non-zero intrafraction shift in target position (ie, the mean intrafraction displacement over the treatment fractions was statistically significantly different from zero; P<.05). With the applied CTV-PTV margins (for most patients 3, 5, and 3 mm in the LR, CC, and AP directions, respectively), the minimum CTV dose, estimated from the target displacements observed in the last CBCT, was at least 94% of the prescribed dose for all patients and more than 98% for most patients (37 of 42). The proposed O2 protocol could effectively reduce the systematic intrafraction errors observed after dose delivery to almost zero (Σ = 0.1, 0.2, 0.2 mm). Conclusions: With adequate image guidance and CTV-PTV margins in the LR, CC, and AP directions of 3, 5, and 3 mm, respectively, excellent target coverage in SVCI could be ensured.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Voort, Sebastian van der; Section of Nuclear Energy and Radiation Applications, Department of Radiation, Science and Technology, Delft University of Technology, Delft; Water, Steven van de
Purpose: We aimed to derive a "robustness recipe" giving the range robustness (RR) and setup robustness (SR) settings (ie, the error values) that ensure adequate clinical target volume (CTV) coverage in oropharyngeal cancer patients for given Gaussian distributions of systematic setup, random setup, and range errors (characterized by standard deviations of Σ, σ, and ρ, respectively) when used in minimax worst-case robust intensity modulated proton therapy (IMPT) optimization. Methods and Materials: For the analysis, contoured computed tomography (CT) scans of 9 unilateral and 9 bilateral patients were used. An IMPT plan was considered robust if, for at least 98% of the simulated fractionated treatments, 98% of the CTV received 95% or more of the prescribed dose. For fast assessment of the CTV coverage for given error distributions (ie, different values of Σ, σ, and ρ), polynomial chaos methods were used. Separate recipes were derived for the unilateral and bilateral cases using one patient from each group, and all 18 patients were included in the validation of the recipes. Results: Treatment plans for bilateral cases are intrinsically more robust than those for unilateral cases. The required RR only depends on ρ, and SR can be fitted by second-order polynomials in Σ and σ. The formulas for the derived robustness recipes are as follows: Unilateral patients need SR = −0.15Σ² + 0.27σ² + 1.85Σ − 0.06σ + 1.22 and RR = 3% for ρ = 1% and ρ = 2%; bilateral patients need SR = −0.07Σ² + 0.19σ² + 1.34Σ − 0.07σ + 1.17 and RR = 3% and 4% for ρ = 1% and 2%, respectively. For the recipe validation, 2 plans were generated for each of the 18 patients corresponding to Σ = σ = 1.5 mm and ρ = 0% and 2%. Thirty-four plans had adequate CTV coverage in 98% or more of the simulated fractionated treatments; the remaining 2 had adequate coverage in 97.8% and 97.9%. Conclusions: Robustness recipes were derived that can be used in minimax robust optimization of IMPT treatment plans to ensure adequate CTV coverage for oropharyngeal cancer patients.
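A minimal sketch of how the fitted recipes quoted above could be evaluated for a given error model; the function names are placeholders, and the range-robustness rule simply restates the cases listed in the abstract.

def setup_robustness_mm(Sigma, sigma, bilateral=False):
    # Second-order polynomial recipes from the abstract (Sigma, sigma in mm).
    if bilateral:
        return -0.07 * Sigma**2 + 0.19 * sigma**2 + 1.34 * Sigma - 0.07 * sigma + 1.17
    return -0.15 * Sigma**2 + 0.27 * sigma**2 + 1.85 * Sigma - 0.06 * sigma + 1.22

def range_robustness_pct(rho, bilateral=False):
    # Range-robustness setting: 3% in all quoted cases except bilateral with rho = 2%.
    return 4.0 if (bilateral and rho >= 2.0) else 3.0

# Validation scenario from the abstract: Sigma = sigma = 1.5 mm
sr_unilateral = setup_robustness_mm(1.5, 1.5)            # ~4.2 mm
sr_bilateral = setup_robustness_mm(1.5, 1.5, True)       # ~3.3 mm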
Optical digital to analog conversion performance analysis for indoor set-up conditions
NASA Astrophysics Data System (ADS)
Dobesch, Aleš; Alves, Luis Nero; Wilfert, Otakar; Ribeiro, Carlos Gaspar
2017-10-01
In visible light communication (VLC), the optical digital-to-analog conversion (ODAC) approach was proposed as a suitable driving technique able to overcome the light-emitting diode's (LED) non-linear characteristic. This concept is analogous to an electrical digital-to-analog converter (EDAC). In other words, digital bits are binary weighted to represent an analog signal. The method supports elementary on-off based modulations able to exploit the LED's non-linear characteristic, allowing simultaneous lighting and communication. In the ODAC concept, the reconstruction error does not depend simply on the converter bit depth, as in the case of an EDAC; rather, it also depends on the communication system set-up and the geometrical relation between emitter and receiver. The paper describes simulation results presenting the ODAC's error performance taking into account the optical channel, the LED's half power angle (HPA) and the receiver field of view (FOV). The set-up under consideration examines indoor conditions for a square room with 4 m length and 3 m height, operating with one dominant wavelength (blue) and having walls with a reflection coefficient of 0.8. The achieved results reveal that the reconstruction error increases for higher data rates as a result of interference due to multipath propagation.
Patient motion tracking in the presence of measurement errors.
Haidegger, Tamás; Benyó, Zoltán; Kazanzides, Peter
2009-01-01
The primary aim of computer-integrated surgical systems is to provide physicians with superior surgical tools for better patient outcomes. Robotic technology is capable of both minimally invasive surgery and microsurgery, offering remarkable advantages for the surgeon and the patient. Current systems allow for sub-millimeter intraoperative spatial positioning; however, certain limitations remain. Measurement noise and unintended changes in the operating room environment can result in major errors. Positioning errors are a significant danger to patients in procedures involving robots and other automated devices. We have developed a new robotic system at the Johns Hopkins University to support cranial drilling in neurosurgery procedures. The robot provides advanced visualization and safety features. The generic algorithm described in this paper allows for automated compensation of patient motion through optical tracking and Kalman filtering. When applied to the neurosurgery setup, preliminary results show that it is possible to identify patient motion within 700 ms and apply the appropriate compensation with an average positioning error of 1.24 mm after 2 s of setup time.
Automatic image registration performance for two different CBCT systems; variation with imaging dose
NASA Astrophysics Data System (ADS)
Barber, J.; Sykes, J. R.; Holloway, L.; Thwaites, D. I.
2014-03-01
The performance of an automatic image registration algorithm was compared on image sets collected with two commercial CBCT systems, and the relationship with imaging dose was explored. CBCT images of a CIRS Virtually Human Male Pelvis phantom (VHMP) were collected on Varian TrueBeam/OBI and Elekta Synergy/XVI linear accelerators, across a range of mAs settings. Each CBCT image was registered 100 times, with random initial offsets introduced. Image registration was performed using the grey value correlation ratio algorithm in the Elekta XVI software, to a mask of the prostate volume with 5 mm expansion. Residual registration errors were calculated after correcting for the initial introduced phantom set-up error. Registration performance with the OBI images was similar to that of XVI. There was a clear dependence on imaging dose for the XVI images with residual errors increasing below 4 mGy. It was not possible to acquire images with doses lower than ~5 mGy with the OBI system and no evidence of reduced performance was observed at this dose. Registration failures (maximum target registration error > 3.6 mm on the surface of a 30 mm sphere) occurred in 5% to 9% of registrations except for the lowest dose XVI scan (31%). The uncertainty in automatic image registration with both OBI and XVI images was found to be adequate for clinical use within a normal range of acquisition settings.
Hyde, Derek; Lochray, Fiona; Korol, Renee; Davidson, Melanie; Wong, C Shun; Ma, Lijun; Sahgal, Arjun
2012-03-01
To evaluate the residual setup error and intrafraction motion following kilovoltage cone-beam CT (CBCT) image guidance, for immobilized spine stereotactic body radiotherapy (SBRT) patients, with positioning corrected for in all six degrees of freedom. Analysis is based on 42 consecutive patients (48 thoracic and/or lumbar metastases) treated with a total of 106 fractions and 307 image registrations. Following initial setup, a CBCT was acquired for patient alignment and a pretreatment CBCT taken to verify shifts and determine the residual setup error, followed by a midtreatment and posttreatment CBCT image. For 13 single-fraction SBRT patients, two midtreatment CBCT images were obtained. Initially, a 1.5-mm and 1° tolerance was used to reposition the patient following couch shifts, which was subsequently reduced to 1 mm and 1° after the first 10 patients. Small positioning errors after the initial CBCT setup were observed, with 90% occurring within 1 mm and 97% within 1°. In analyzing the impact of the time interval for verification imaging (10 ± 3 min) and subsequent image acquisitions (17 ± 4 min), the residual setup error was not significantly different (p > 0.05). A significant difference (p = 0.04) in the average three-dimensional intrafraction positional deviations favoring a more strict tolerance in translation (1 mm vs. 1.5 mm) was observed. The absolute intrafraction motion averaged over all patients and all directions along the x, y, and z axes (± SD) was 0.7 ± 0.5 mm and 0.5 ± 0.4 mm for the 1.5 mm and 1 mm tolerance, respectively. Based on a 1-mm and 1° correction threshold, the target was localized to within 1.2 mm and 0.9° with 95% confidence. Near-rigid body immobilization, intrafraction CBCT imaging approximately every 15-20 min, and strict repositioning thresholds in six degrees of freedom yield minimal intrafraction motion, allowing for safe spine SBRT delivery. Copyright © 2012 Elsevier Inc. All rights reserved.
Optimized linear motor and digital PID controller setup used in Mössbauer spectrometer
NASA Astrophysics Data System (ADS)
Kohout, Pavel; Kouřil, Lukáš; Navařík, Jakub; Novák, Petr; Pechoušek, Jiří
2014-10-01
Optimization of a linear motor and digital PID controller setup used in a Mössbauer spectrometer is presented. A velocity driving system with a digital PID feedback subsystem was developed in the LabVIEW graphical environment and deployed on the sbRIO real-time hardware device (National Instruments). The most important data acquisition processes are performed as real-time deterministic tasks on an FPGA chip. A velocity transducer of the double-loudspeaker type with a power amplifier circuit is driven by the system. A series of calibration measurements was performed to find the optimal settings of the P, I, D parameters, together with an analysis of the velocity error signal. The shape and characteristics of the velocity error signal are analyzed in detail. Remote applications for controlling and monitoring the PID system from a computer or smartphone, respectively, were also developed. With the best setup and P, I, D parameters, a calibration spectrum of an α-Fe sample with an average nonlinearity of the velocity scale below 0.08% was collected. Furthermore, a spectral line width below 0.30 mm/s was observed. A powerful and complex velocity driving system was designed.
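Although the controller described above runs in LabVIEW on an FPGA, the discrete PID update it tunes has the familiar textbook form. The Python sketch below is only illustrative, with placeholder gains and sample time rather than the optimized values found in the study.

class DiscretePID:
    # Minimal textbook PID step; gains and sample time are placeholders.
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# e.g., nudging the transducer velocity toward the reference waveform each sample:
pid = DiscretePID(kp=0.8, ki=20.0, kd=0.001, dt=1e-4)
drive = pid.step(setpoint=1.0, measured=0.93)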
Al-Hammadi, Noora; Caparrotti, Palmira; Naim, Carole; Hayes, Jillian; Rebecca Benson, Katherine; Vasic, Ana; Al-Abdulla, Hissa; Hammoud, Rabih; Divakar, Saju; Petric, Primoz
2018-03-01
During radiotherapy of left-sided breast cancer, parts of the heart are irradiated, which may lead to late toxicity. We report on the experience of single institution with cardiac-sparing radiotherapy using voluntary deep inspiration breath hold (V-DIBH) and compare its dosimetric outcome with free breathing (FB) technique. Left-sided breast cancer patients, treated at our department with postoperative radiotherapy of breast/chest wall +/- regional lymph nodes between May 2015 and January 2017, were considered for inclusion. FB-computed tomography (CT) was obtained and dose-planning performed. Cases with cardiac V25Gy ≥ 5% or risk factors for heart disease were coached for V-DIBH. Compliant patients were included. They underwent additional CT in V-DIBH for planning, followed by V-DIBH radiotherapy. Dose volume histogram parameters for heart, lung and optimized planning target volume (OPTV) were compared between FB and BH. Treatment setup shifts and systematic and random errors for V-DIBH technique were compared with FB historic control. Sixty-three patients were considered for V-DIBH. Nine (14.3%) were non-compliant at coaching, leaving 54 cases for analysis. When compared with FB, V-DIBH resulted in a significant reduction of mean cardiac dose from 6.1 +/- 2.5 to 3.2 +/- 1.4 Gy (p < 0.001), maximum cardiac dose from 51.1 +/- 1.4 to 48.5 +/- 6.8 Gy (p = 0.005) and cardiac V25Gy from 8.5 +/- 4.2 to 3.2 +/- 2.5% (p < 0.001). Heart volumes receiving low (10-20 Gy) and high (30-50 Gy) doses were also significantly reduced. Mean dose to the left anterior coronary artery was 23.0 (+/- 6.7) Gy and 14.8 (+/- 7.6) Gy on FB and V-DIBH, respectively (p < 0.001). Differences between FB- and V-DIBH-derived mean lung dose (11.3 +/- 3.2 vs. 10.6 +/- 2.6 Gy), lung V20Gy (20.5 +/- 7 vs. 19.5 +/- 5.1 Gy) and V95% for the OPTV (95.6 +/- 4.1 vs. 95.2 +/- 6.3%) were non-significant. V-DIBH-derived mean shifts for initial patient setup were ≤ 2.7 mm. Random and systematic errors were ≤ 2.1 mm. These results did not differ significantly from historic FB controls. When compared with FB, V-DIBH demonstrated high setup accuracy and enabled significant reduction of cardiac doses without compromising the target volume coverage. Differences in lung doses were non-significant.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carson, M; Molineu, A; Taylor, P
Purpose: To analyze the most recent results of IROC Houston’s anthropomorphic H&N phantom to determine the nature of failing irradiations and the feasibility of altering pass/fail credentialing criteria. Methods: IROC Houston’s H&N phantom, used for IMRT credentialing for NCI-sponsored clinical trials, requires that an institution’s treatment plan must agree with measurement within 7% (TLD doses) and ≥85% pixels must pass 7%/4 mm gamma analysis. 156 phantom irradiations (November 2014 – October 2015) were re-evaluated using tighter criteria: 1) 5% TLD and 5%/4 mm, 2) 5% TLD and 5%/3 mm, 3) 4% TLD and 4%/4 mm, and 4) 3% TLD and 3%/3 mm. Failure/poor performance rates were evaluated with respect to individual film and TLD performance by location in the phantom. Overall poor phantom results were characterized qualitatively as systematic (dosimetric) errors, setup errors/positional shifts, global but non-systematic errors, and errors affecting only a local region. Results: The pass rate for these phantoms using current criteria is 90%. Substituting criteria 1-4 reduces the overall pass rate to 77%, 70%, 63%, and 37%, respectively. Statistical analyses indicated the probability of noise-induced TLD failure at the 5% criterion was <0.5%. Using criteria 1, TLD results were most often the cause of failure (86% failed TLD while 61% failed film), with most failures identified in the primary PTV (77% cases). Other criteria posed similar results. Irradiations that failed from film only were overwhelmingly associated with phantom shifts/setup errors (≥80% cases). Results failing criteria 1 were primarily diagnosed as systematic: 58% of cases. 11% were setup/positioning errors, 8% were global non-systematic errors, and 22% were local errors. Conclusion: This study demonstrates that 5% TLD and 5%/4 mm gamma criteria may be both practically and theoretically achievable. Further work is necessary to diagnose and resolve dosimetric inaccuracy in these trials, particularly for systematic dose errors. This work is funded by NCI Grant CA180803.
The effect of divided attention on novices and experts in laparoscopic task performance.
Ghazanfar, Mudassar Ali; Cook, Malcolm; Tang, Benjie; Tait, Iain; Alijani, Afshin
2015-03-01
Attention is important for the skilful execution of surgery. The surgeon's attention during surgery is divided between the surgery and outside distractions. The effect of this divided attention has not been well studied previously. We aimed to compare the effect of dividing the attention of novices and experts on laparoscopic task performance. Following ethical approval, 25 novices and 9 expert surgeons performed a standardised peg transfer task in a laboratory setup under three randomly assigned conditions: silent as the control condition and two standardised auditory distracting tasks requiring a response (easy and difficult) as the study conditions. Human reliability assessment was used for surgical task analysis. Primary outcome measures were correct auditory responses, task time, number of surgical errors and instrument movements. Secondary outcome measures included error rate, error probability and hand-specific differences. Non-parametric statistics were used for data analysis. A total of 21,109 movements and 9,036 errors were analysed. Novices had increased mean task completion time (171 ± 44 vs. 149 ± 34 s, p < 0.05), number of total movements (227 ± 27 vs. 213 ± 26, p < 0.05) and number of errors (127 ± 51 vs. 96 ± 28, p < 0.05) during difficult study conditions compared to control. Correct responses to auditory stimuli were less frequent in experts (68%) than in novices (80%). There was a positive correlation between error rate and error probability in novices (r² = 0.533, p < 0.05) but not in experts (r² = 0.346, p > 0.05). Divided attention conditions in the theatre environment require careful consideration during surgical training, as junior surgeons are less able to focus their attention under these conditions.
Current-limiting and ultrafast system for the characterization of resistive random access memories.
Diaz-Fortuny, J; Maestro, M; Martin-Martinez, J; Crespo-Yepes, A; Rodriguez, R; Nafria, M; Aymerich, X
2016-06-01
A new system for the ultrafast characterization of the resistive switching phenomenon is developed to acquire the current during the Set and Reset processes on a microsecond time scale. A new electronic circuit has been developed as part of the main setup, which is capable of (i) applying a hardware current limit ranging from nanoamperes up to milliamperes and (ii) converting the exponential Set and Reset gate current range into an equivalent linear voltage. The complete setup allows measurements with microsecond resolution. Examples demonstrate that, with the developed setup, an in-depth analysis of the resistive switching phenomenon and random telegraph noise can be made.
Single-ping ADCP measurements in the Strait of Gibraltar
NASA Astrophysics Data System (ADS)
Sammartino, Simone; García Lafuente, Jesús; Naranjo, Cristina; Sánchez Garrido, José Carlos; Sánchez Leal, Ricardo
2016-04-01
In most Acoustic Doppler Current Profiler (ADCP) user manuals, it is widely recommended to apply ensemble averaging of the single-ping measurements in order to obtain reliable observations of the current speed. The random error of a single-ping measurement is typically too high for it to be used directly, while the averaging operation reduces the ensemble error by a factor of approximately √N, with N the number of averaged pings. A 75 kHz ADCP moored at the western exit of the Strait of Gibraltar, included in the long-term monitoring of the Mediterranean outflow, has recently served as a test setup for a different approach to current measurements. The ensemble averaging was disabled, while maintaining the internal coordinate conversion made by the instrument, and a series of single-ping measurements was collected every 36 seconds during a period of approximately 5 months. The huge amount of data was handled by the instrument without difficulty, and no abnormal battery consumption was recorded. On the other hand, a long and unique series of very high frequency current measurements has been collected. Results of this novel approach have been exploited in a dual way. From a statistical point of view, the availability of single-ping measurements allows an empirical (a posteriori) estimate of the ensemble average error of both current and ancillary variables. While the theoretical random error for horizontal velocity is estimated a priori as ˜2 cm s-1 for a 50-ping ensemble, the value obtained by the a posteriori averaging is ˜15 cm s-1, with asymptotic behavior starting from an averaging size of 10 pings per ensemble. This result suggests the presence of external sources of random error (e.g., turbulence), of higher magnitude than the internal sources (ADCP intrinsic precision), which cannot be reduced by the ensemble averaging. On the other hand, although the instrumental configuration is clearly not suitable for a precise estimation of turbulent parameters, some hints of the turbulent structure of the flow can be obtained by the empirical computation of the zonal Reynolds stress (along the predominant direction of the current) and the rates of production and dissipation of turbulent kinetic energy. All the parameters show a clear correlation with tidal fluctuations of the current, with maximum values coinciding with flood tides, during the maxima of the Mediterranean outflow current.
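The a posteriori check described above can be reproduced from any single-ping record: group the pings into ensembles of size N, take the within-ensemble scatter of each ensemble mean, and compare it with the a priori instrument noise divided by √N. The sketch below uses synthetic data (the noise levels are made-up assumptions, not the Gibraltar values) and models the external variability as white noise for simplicity, whereas in the real record it is correlated and does not average down in the same way.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pings = 12000
sigma_instr = 0.10   # assumed single-ping instrumental std (m/s), illustrative only
sigma_turb = 0.12    # assumed "external" variability (e.g. turbulence), also made up

t = np.linspace(0.0, 40.0 * np.pi, n_pings)
u = 0.5 + 0.2 * np.sin(t)                        # slowly varying tidal-like current
u += rng.normal(0.0, sigma_turb, n_pings)        # external random variability
u += rng.normal(0.0, sigma_instr, n_pings)       # instrumental single-ping noise

for N in (5, 10, 25, 50):
    m = n_pings // N
    ens = u[:m * N].reshape(m, N)
    # a posteriori error of the ensemble mean: mean within-ensemble std / sqrt(N)
    empirical = np.mean(ens.std(axis=1, ddof=1)) / np.sqrt(N)
    theoretical = sigma_instr / np.sqrt(N)       # a priori, instrument noise only
    print(f"N={N:3d}  a priori {theoretical:.3f} m/s   a posteriori {empirical:.3f} m/s")
```

With these assumptions the a posteriori values exceed the a priori ones, reflecting the external error sources that the abstract identifies.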
Dinges, Eric; Felderman, Nicole; McGuire, Sarah; Gross, Brandie; Bhatia, Sudershan; Mott, Sarah; Buatti, John; Wang, Dongxu
2015-01-01
Background and Purpose This study evaluates the potential efficacy and robustness of functional bone marrow sparing (BMS) using intensity-modulated proton therapy (IMPT) for cervical cancer, with the goal of reducing hematologic toxicity. Material and Methods IMPT plans with a prescription dose of 45 Gy were generated for ten patients who had received BMS intensity-modulated x-ray therapy (IMRT). Functional bone marrow was identified by 18F-fluorothymidine positron emission tomography. IMPT plans were designed to minimize the volume of functional bone marrow receiving 5–40 Gy while maintaining similar target coverage and healthy organ sparing as IMRT. IMPT robustness was analyzed with ±3% range uncertainty errors and/or ±3 mm translational setup errors in all three principal dimensions. Results In the static scenario, the median dose-volume reductions for functional bone marrow by IMPT were: 32% for V5Gy, 47% for V10Gy, 54% for V20Gy, and 57% for V40Gy, all with p<0.01 compared to IMRT. With the assumed errors, even the worst-case reductions by IMPT were: 23% for V5Gy, 37% for V10Gy, 41% for V20Gy, and 39% for V40Gy, all with p<0.01. Conclusions The potential sparing of functional bone marrow by IMPT for cervical cancer is significant and robust under realistic systematic range uncertainties and clinically relevant setup errors. PMID:25981130
Inoue, Tatsuya; Widder, Joachim; van Dijk, Lisanne V; Takegawa, Hideki; Koizumi, Masahiko; Takashina, Masaaki; Usui, Keisuke; Kurokawa, Chie; Sugimoto, Satoru; Saito, Anneyuko I; Sasai, Keisuke; Van't Veld, Aart A; Langendijk, Johannes A; Korevaar, Erik W
2016-11-01
To investigate the impact of setup and range uncertainties, breathing motion, and interplay effects using scanning pencil beams in robustly optimized intensity modulated proton therapy (IMPT) for stage III non-small cell lung cancer (NSCLC). Three-field IMPT plans were created using a minimax robust optimization technique for 10 NSCLC patients. The plans accounted for 5- or 7-mm setup errors with ±3% range uncertainties. The robustness of the IMPT nominal plans was evaluated considering (1) isotropic 5-mm setup errors with ±3% range uncertainties; (2) breathing motion; (3) interplay effects; and (4) a combination of items 1 and 2. The plans were calculated using 4-dimensional and average intensity projection computed tomography images. The target coverage (TC, volume receiving 95% of prescribed dose) and homogeneity index (D2 - D98, where D2 and D98 are the least doses received by 2% and 98% of the volume) for the internal clinical target volume, and dose indexes for lung, esophagus, heart and spinal cord were compared with that of clinical volumetric modulated arc therapy plans. The TC and homogeneity index for all plans were within clinical limits when considering the breathing motion and interplay effects independently. The setup and range uncertainties had a larger effect when considering their combined effect. The TC decreased to <98% (clinical threshold) in 3 of 10 patients for robust 5-mm evaluations. However, the TC remained >98% for robust 7-mm evaluations for all patients. The organ at risk dose parameters did not significantly vary between the respective robust 5-mm and robust 7-mm evaluations for the 4 error types. Compared with the volumetric modulated arc therapy plans, the IMPT plans showed better target homogeneity and mean lung and heart dose parameters reduced by about 40% and 60%, respectively. In robustly optimized IMPT for stage III NSCLC, the setup and range uncertainties, breathing motion, and interplay effects have limited impact on target coverage, dose homogeneity, and organ-at-risk dose parameters. Copyright © 2016 Elsevier Inc. All rights reserved.
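The two target metrics used above follow directly from the voxel doses in the internal clinical target volume: target coverage is the fractional volume receiving at least 95% of the prescription, and the homogeneity index is D2 − D98, where Dx is the minimum dose received by the hottest x% of the volume. A minimal sketch with made-up voxel doses (not the study's data):

```python
import numpy as np

def target_coverage(doses_gy, prescription_gy):
    """Fraction of target voxels receiving >= 95% of the prescription dose."""
    return np.mean(doses_gy >= 0.95 * prescription_gy)

def d_x(doses_gy, x_percent):
    """Dose received by at least x% of the volume (the (100 - x)th dose percentile)."""
    return np.percentile(doses_gy, 100.0 - x_percent)

rng = np.random.default_rng(1)
ctv_dose = rng.normal(60.0, 1.0, 5000)          # hypothetical voxel doses in the ICTV (Gy)
print("TC      :", target_coverage(ctv_dose, prescription_gy=60.0))
print("D2 - D98:", d_x(ctv_dose, 2) - d_x(ctv_dose, 98), "Gy")
```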
NASA Astrophysics Data System (ADS)
Meng, Bowen; Xing, Lei; Han, Bin; Koong, Albert; Chang, Daniel; Cheng, Jason; Li, Ruijiang
2013-11-01
Non-coplanar beams are important for treatment of both cranial and noncranial tumors. Treatment verification of such beams with couch rotation/kicks, however, is challenging, particularly for the application of cone beam CT (CBCT). In this situation, only limited and unconventional imaging angles are feasible to avoid collision between the gantry, couch, patient, and on-board imaging system. The purpose of this work is to develop a CBCT verification strategy for patients undergoing non-coplanar radiation therapy. We propose an image reconstruction scheme that integrates a prior image constrained compressed sensing (PICCS) technique with image registration. Planning CT or CBCT acquired at the neutral position is rotated and translated according to the nominal couch rotation/translation to serve as the initial prior image. Here, the nominal couch movement is chosen to have a rotational error of 5° and a translational error of 8 mm from the ground truth in one or more axes or directions. The proposed reconstruction scheme alternates between two major steps. First, an image is reconstructed using the PICCS technique implemented with total-variation minimization and simultaneous algebraic reconstruction. Second, the rotational/translational setup errors are corrected and the prior image is updated by applying rigid image registration between the reconstructed image and the previous prior image. The PICCS algorithm and rigid image registration are alternated iteratively until the registration results fall below a predetermined threshold. The proposed reconstruction algorithm is evaluated with an anthropomorphic digital phantom and a physical head phantom. The proposed algorithm provides useful volumetric images for patient setup using projections with an angular range as small as 60°. It reduced the translational setup errors from 8 mm to generally <1 mm and the rotational setup errors from 5° to <1°. Compared with the PICCS algorithm alone, the integration of rigid registration significantly improved the reconstructed image quality, with a reduction of typically 2- to 3-fold (and up to 100-fold) in root-mean-square image error. The proposed algorithm provides a remedy for the problem of non-coplanar CBCT reconstruction from a limited angle of projections by combining the PICCS technique and rigid image registration in an iterative framework. In this proof-of-concept study, non-coplanar beams with couch rotations of 45° can be effectively verified with the CBCT technique.
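The alternation described above reduces to a small outer loop: reconstruct with the current prior, rigidly register the result to that prior, update the prior with the estimated rotation/translation, and stop once the estimated correction is small. The sketch below shows only that control flow; `piccs_reconstruct`, `rigid_register`, and `transform` are trivial placeholders (so the script runs), not implementations of the PICCS solver or of a real registration routine.

```python
import numpy as np

def piccs_reconstruct(projections, prior, n_iter=10):
    # Placeholder for PICCS (SART + total-variation + prior-image constraint).
    # Blends the prior with a projection-derived scalar so the loop is executable.
    return 0.5 * prior + 0.5 * projections.mean() * np.ones_like(prior)

def rigid_register(moving, fixed):
    # Placeholder rigid registration: returns (rotation_deg, translation_mm).
    # A real implementation would optimize a similarity metric between the volumes.
    return np.zeros(3), np.zeros(3)

def transform(volume, rotation_deg, translation_mm):
    # Placeholder rigid transform applied to the prior image.
    return volume

def noncoplanar_cbct(projections, prior0, tol_deg=0.1, tol_mm=0.1, max_outer=10):
    """Alternate PICCS reconstruction and rigid registration until the
    estimated setup correction falls below the tolerance."""
    prior = prior0.copy()
    for _ in range(max_outer):
        recon = piccs_reconstruct(projections, prior)
        rot, trans = rigid_register(recon, prior)
        if np.all(np.abs(rot) < tol_deg) and np.all(np.abs(trans) < tol_mm):
            return recon, rot, trans
        prior = transform(prior, rot, trans)     # update the prior with the estimated correction
    return recon, rot, trans

proj = np.random.rand(60, 128)                   # toy limited-angle projection set
prior = np.random.rand(64, 64, 64)               # toy planning CT resampled to the CBCT grid
recon, rot, trans = noncoplanar_cbct(proj, prior)
print("final correction:", rot, trans)
```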
NASA Astrophysics Data System (ADS)
Cheremkhin, Pavel A.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.
2016-11-01
Applications of optical methods for encryption purposes have been attracting the interest of researchers for decades. The most popular are coherent techniques such as double random phase encoding. Its main advantage is high security: the first random phase mask transforms the spectrum of the image to be encrypted into a white spectrum, so the encrypted image also has a white spectrum. Downsides are the need for a holographic registration scheme and the speckle noise that arises under coherent illumination. These disadvantages can be eliminated by using incoherent illumination. In this case, phase registration no longer matters, which means that there is no need for a holographic setup, and the speckle noise is gone. Recently, encryption of digital information in the form of binary images has become quite popular. Advantages of using a quick response (QR) code as a data container for optical encryption include: 1) any data represented as a QR code will have a close-to-white (excluding the zero spatial frequency) Fourier spectrum, which overlaps well with the encryption key spectrum; 2) a built-in algorithm for image scale and orientation correction simplifies decoding of decrypted QR codes; 3) the embedded error correction code allows successful decryption of information even in the case of partial corruption of the decrypted image. Optical encryption of digital data in the form of QR codes using spatially incoherent illumination was experimentally implemented. Two liquid crystal spatial light modulators were used in the experimental setup for QR code and encrypting kinoform imaging, respectively. Decryption was conducted digitally. Successful decryption of encrypted QR codes is demonstrated.
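Double random phase encoding itself is compact to state digitally: multiply the input by a random phase mask in the object plane, Fourier transform, multiply by a second random phase mask in the Fourier plane, and transform back; decryption applies the conjugate masks in reverse order. The sketch below simulates the classical coherent (4f) version with a toy binary input; the incoherent QR-code setup of the abstract is an experimental variant and is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

def drpe_encrypt(img, phase1, phase2):
    """Classical double random phase encoding in a simulated 4f system."""
    field = img * np.exp(1j * 2 * np.pi * phase1)                 # object-plane random phase
    spec = np.fft.fft2(field) * np.exp(1j * 2 * np.pi * phase2)   # Fourier-plane random phase
    return np.fft.ifft2(spec)                                     # complex-valued ciphertext

def drpe_decrypt(cipher, phase1, phase2):
    spec = np.fft.fft2(cipher) * np.exp(-1j * 2 * np.pi * phase2)
    field = np.fft.ifft2(spec) * np.exp(-1j * 2 * np.pi * phase1)
    return np.abs(field)

# Toy binary "QR-like" input and the two keys (random phase masks).
img = (rng.random((64, 64)) > 0.5).astype(float)
p1, p2 = rng.random((64, 64)), rng.random((64, 64))

cipher = drpe_encrypt(img, p1, p2)
recovered = drpe_decrypt(cipher, p1, p2)
print("max reconstruction error:", np.abs(recovered - img).max())
```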
TH-EF-BRB-11: Volumetric Modulated Arc Therapy for Total Body Irradiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ouyang, L; Folkerts, M; Hrycushko, B
Purpose: To develop a modern, patient-comfortable total body irradiation (TBI) technique suitable for standard-sized linac vaults. Methods: An indexed rotatable immobilization system (IRIS) was developed to make total-body CT imaging and radiation delivery possible on conventional couches. Treatment consists of multi-isocentric volumetric modulated arc therapy (VMAT) to the upper body and parallel-opposed fields to the lower body. Each isocenter is indexed to the couch, and a 180° IRIS rotation is included between the upper and lower body fields. VMAT fields are optimized to satisfy lung dose objectives while achieving a uniform therapeutic dose to the torso. End-to-end tests with a Rando phantom were used to verify dosimetric characteristics. Treatment plan robustness regarding setup uncertainty was assessed by simulating global and regional isocenter setup shifts on patient data sets. Dosimetric comparisons were made with conventional extended-distance, standing TBI (cTBI) plans using a Monte Carlo-based calculation. Treatment efficiency was assessed for eight courses of patient treatment. Results: The IRIS system is level and orthogonal to the scanned CT image plane, with lateral shifts <2 mm following rotation. End-to-end tests showed surface doses within ±10% of the prescription dose and field junction doses within ±15% of the prescription dose. Plan robustness tests showed <15% changes in dose with global setup errors up to 5 mm in each direction. Local 5 mm relative setup errors in the chest resulted in <5% dose changes. Local 5 mm shift errors in the pelvic and upper leg junction resulted in <10% dose changes, while a 10 mm shift error caused dose changes up to 25%. Dosimetric comparison with cTBI showed VMAT-TBI has advantages in preserving chest wall dose, with flexibility in balancing the PTV-body and PTV-lung dose. Conclusion: VMAT-TBI with the IRIS system was shown to be clinically feasible as a cost-effective approach to TBI for standard-sized linac vaults.
Electrical Evaluation of RCA MWS5001D Random Access Memory, Volume 5, Appendix D
NASA Technical Reports Server (NTRS)
Klute, A.
1979-01-01
The electrical characterization and qualification test results are presented for the RCA MWS 5001D random access memory. The tests included functional tests, AC and DC parametric tests, AC parametric worst-case pattern selection test, determination of worst-case transition for setup and hold times, and a series of schmoo plots. Average input high current, worst case input high current, output low current, and data setup time are some of the results presented.
Electrical Evaluation of RCA MWS5501D Random Access Memory, Volume 2, Appendix A
NASA Technical Reports Server (NTRS)
Klute, A.
1979-01-01
The electrical characterization and qualification test results are presented for the RCA MWS5001D random access memory. The tests included functional tests, AC and DC parametric tests, AC parametric worst-case pattern selection test, determination of worst-case transition for setup and hold times, and a series of schmoo plots. The address access time, address readout time, the data hold time, and the data setup time are some of the results surveyed.
Empirical parameterization of setup, swash, and runup
Stockdon, H.F.; Holman, R.A.; Howd, P.A.; Sallenger, A.H.
2006-01-01
Using shoreline water-level time series collected during 10 dynamically diverse field experiments, an empirical parameterization for extreme runup, defined by the 2% exceedence value, has been developed for use on natural beaches over a wide range of conditions. Runup, the height of discrete water-level maxima, depends on two dynamically different processes: time-averaged wave setup and total swash excursion, each of which is parameterized separately. Setup at the shoreline was best parameterized using a dimensional form of the more common Iribarren-based setup expression that includes foreshore beach slope, offshore wave height, and deep-water wavelength. Significant swash can be decomposed into the incident and infragravity frequency bands. Incident swash is also best parameterized using a dimensional form of the Iribarren-based expression. Infragravity swash is best modeled dimensionally using offshore wave height and wavelength and shows no statistically significant linear dependence on either foreshore or surf-zone slope. On infragravity-dominated dissipative beaches, the magnitudes of both setup and swash, modeling both incident and infragravity frequency components together, are dependent only on offshore wave height and wavelength. Statistics of predicted runup averaged over all sites indicate a −17 cm bias and an rms error of 38 cm; the mean observed runup elevation for all experiments was 144 cm. On intermediate and reflective beaches with complex foreshore topography, the use of an alongshore-averaged beach slope in practical applications of the runup parameterization may result in a relative runup error equal to 51% of the fractional variability between the measured and the averaged slope.
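For orientation, the dimensional Iribarren-based form described here combines the foreshore slope βf with the deep-water wave height H0 and wavelength L0. The sketch below uses the coefficient values usually quoted for this parameterization on intermediate and reflective beaches; treat the exact numbers as an assumption to be checked against the original paper rather than a definitive statement of the formula.

```python
import numpy as np

def runup_2pct(h0, t0, beta_f, g=9.81):
    """Empirical 2% exceedence runup (m) from deep-water wave height h0 (m),
    peak period t0 (s) and foreshore slope beta_f.  Stockdon-type dimensional
    Iribarren form; coefficients as commonly quoted, treat as illustrative."""
    l0 = g * t0 ** 2 / (2.0 * np.pi)                         # deep-water wavelength
    setup = 0.35 * beta_f * np.sqrt(h0 * l0)                 # wave setup at the shoreline
    swash = np.sqrt(h0 * l0 * (0.563 * beta_f ** 2 + 0.004)) / 2.0  # incident + infragravity swash
    return 1.1 * (setup + swash)

print(f"R2% = {runup_2pct(h0=2.0, t0=10.0, beta_f=0.08):.2f} m")
```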
Practical considerations for coil-wrapped Distributed Temperature Sensing setups
NASA Astrophysics Data System (ADS)
Solcerova, Anna; van Emmerik, Tim; Hilgersom, Koen; van de Giesen, Nick
2015-04-01
Fiber-optic Distributed Temperature Sensing (DTS) has been applied widely in hydrological and meteorological systems. For example, DTS has been used to measure streamflow, groundwater, soil moisture and temperature, air temperature, and lake energy fluxes. Many of these applications require a spatial monitoring resolution smaller than the minimum resolution of the DTS device. Measuring at these resolutions therefore requires a custom-made setup. To obtain temperature measurements with both high temporal and high spatial resolution, the fiber-optic cable is often wrapped around, and glued to, a coil, for example a PVC conduit. For these setups, it is often assumed that the construction characteristics (e.g., the coil material, shape, diameter) do not influence the DTS temperature measurements significantly. This study compares DTS datasets obtained during four measurement campaigns. The datasets were acquired using different setups, allowing the influence of the construction characteristics on the monitoring results to be investigated. This comparative study suggests that the construction material, shape, diameter, and way of attachment can have a significant influence on the results. We present a qualitative and quantitative approximation of the errors introduced through the selection of the construction, e.g., the choice of coil material, influence of solar radiation, coil diameter, and cable attachment method. Our aim is to provide insight into the factors that influence DTS measurements, which designers of future DTS measurement setups can take into account. Moreover, we present a number of solutions to minimize these errors for improved temperature retrieval using DTS.
Precision assessment of model-based RSA for a total knee prosthesis in a biplanar set-up.
Trozzi, C; Kaptein, B L; Garling, E H; Shelyakova, T; Russo, A; Bragonzoni, L; Martelli, S
2008-10-01
Model-based Roentgen Stereophotogrammetric Analysis (RSA) was recently developed for the measurement of prosthesis micromotion. Its main advantage is that markers do not need to be attached to the implants as traditional marker-based RSA requires. Model-based RSA has only been tested in uniplanar radiographic set-ups. A biplanar set-up would theoretically facilitate the pose estimation algorithm, since the radiographic projections would show more distinct shape features of the implants than uniplanar images. We tested the precision of model-based RSA and compared it with that of the traditional marker-based method in a biplanar set-up. Micromotions of both the tibial and femoral components were measured with both techniques from double examinations of patients participating in a clinical study. The results showed that in the biplanar set-up model-based RSA presents a homogeneous distribution of precision for all translation directions, but an inhomogeneous error for rotations; in particular, internal-external rotation presented higher errors than rotations about the transverse and sagittal axes. Model-based RSA was less precise than the marker-based method, although the differences were not significant for the translations and rotations of the tibial component, with the exception of the internal-external rotations. For both prosthesis components the precision of model-based RSA was below 0.2 mm for all translations, and below 0.3 degrees for rotations about the transverse and sagittal axes. These values are still acceptable for clinical studies aimed at evaluating total knee prosthesis micromotion. In a biplanar set-up, model-based RSA is a valid alternative to traditional marker-based RSA where marking of the prosthesis is an enormous disadvantage.
Amiri, Shahram; Wilson, David R; Masri, Bassam A; Sharma, Gulshan; Anglin, Carolyn
2011-06-03
Determining the 3D pose of the patella after total knee arthroplasty is challenging. The commonly used single-plane fluoroscopy is prone to large errors in the clinically relevant mediolateral direction. A conventional fixed bi-planar setup is limited in the minimum angular distance between the imaging planes necessary for visualizing the patellar component, and requires a highly flexible setup to adjust for subject-specific geometries. As an alternative solution, this study investigated the use of a novel multi-planar imaging setup that consists of a C-arm tracked by an external optoelectronic tracking system, to acquire calibrated radiographs from multiple orientations. To determine the accuracies, a knee prosthesis was implanted on artificial bones and imaged in simulated 'Supine' and 'Weightbearing' configurations. The results were compared with measures from a coordinate measuring machine as the ground-truth reference. The weightbearing configuration was the preferred imaging direction, with RMS errors of 0.48 mm and 1.32° for mediolateral shift and tilt of the patella, respectively, the two most clinically relevant measures. The 'imaging accuracies' of the system, defined as the accuracies in 3D reconstruction of a cylindrical ball-bearing phantom (so as to avoid the influence of the shape and orientation of the imaged object), showed an order of magnitude (11.5 times) reduction in the out-of-plane RMS errors in comparison to single-plane fluoroscopy. With this new method, the complete 3D pose of the patellofemoral and tibiofemoral joints during quasi-static activities can be determined with a many-fold (up to 8 times, or 3.4 mm) improvement in the out-of-plane accuracies compared to a conventional single-plane fluoroscopy setup. Copyright © 2011 Elsevier Ltd. All rights reserved.
Quantum State Transfer via Noisy Photonic and Phononic Waveguides
NASA Astrophysics Data System (ADS)
Vermersch, B.; Guimond, P.-O.; Pichler, H.; Zoller, P.
2017-03-01
We describe a quantum state transfer protocol, where a quantum state of photons stored in a first cavity can be faithfully transferred to a second distant cavity via an infinite 1D waveguide, while being immune to arbitrary noise (e.g., thermal noise) injected into the waveguide. We extend the model and protocol to a cavity QED setup, where atomic ensembles, or single atoms representing quantum memory, are coupled to a cavity mode. We present a detailed study of sensitivity to imperfections, and apply a quantum error correction protocol to account for random losses (or additions) of photons in the waveguide. Our numerical analysis is enabled by matrix product state techniques to simulate the complete quantum circuit, which we generalize to include thermal input fields. Our discussion applies both to photonic and phononic quantum networks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dowdell, S; Grassberger, C; Paganetti, H
2014-06-01
Purpose: Evaluate the sensitivity of intensity-modulated proton therapy (IMPT) lung treatments to systematic and random setup uncertainties combined with motion effects. Methods: Treatment plans with single-field homogeneity restricted to ±20% (IMPT-20%) were compared to plans with no restriction (IMPT-full). 4D Monte Carlo simulations were performed for 10 lung patients using the patient CT geometry with either ±5mm systematic or random setup uncertainties applied over a 35 × 2.5Gy(RBE) fractionated treatment course. Intra-fraction, inter-field and inter-fraction motions were investigated. 50 fractionated treatments with systematic or random setup uncertainties applied to each fraction were generated for both IMPT delivery methods and three energy-dependent spot sizes (big spots - BS σ=18-9mm, intermediate spots - IS σ=11-5mm, small spots - SS σ=4-2mm). These results were compared to a Monte Carlo recalculation of the original treatment plan, with results presented as the difference in EUD (ΔEUD), V95 (ΔV95) and target homogeneity (ΔD1-D99) between the 4D simulations and the Monte Carlo calculation on the planning CT. Results: The standard deviations in the ΔEUD were 1.95±0.47 (BS), 1.85±0.66 (IS) and 1.31±0.35 (SS) times higher in IMPT-full compared to IMPT-20% when ±5mm systematic setup uncertainties were applied. The ΔV95 variations were also 1.53±0.26 (BS), 1.60±0.50 (IS) and 1.38±0.38 (SS) times higher for IMPT-full. For random setup uncertainties, the standard deviations of the ΔEUD from 50 simulated fractionated treatments were 1.94±0.90 (BS), 2.13±1.08 (IS) and 1.45±0.57 (SS) times higher in IMPT-full compared to IMPT-20%. For all spot sizes considered, the ΔD1-D99 coincided within the uncertainty limits for the two IMPT delivery methods, with the mean value always higher for IMPT-full. Statistical analysis showed significant differences between the IMPT-full and IMPT-20% dose distributions for the majority of scenarios studied. Conclusion: Lung IMPT-full treatments are more sensitive to both systematic and random setup uncertainties compared to IMPT-20%. This work was supported by the NIH R01 CA111590.
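The equivalent uniform dose reported above is the generalized mean of the voxel doses (gEUD). A minimal sketch follows; the volume-effect parameter a is tissue-specific, and both the value and the toy dose array below are placeholders, not values from this study.

```python
import numpy as np

def eud(doses_gy, a):
    """Generalized equivalent uniform dose: EUD = (mean(d_i**a))**(1/a).
    Large positive a emphasizes hot spots (serial organs); a -> 1 gives the
    mean dose; negative a emphasizes cold spots (targets)."""
    d = np.asarray(doses_gy, dtype=float)
    return np.mean(d ** a) ** (1.0 / a)

target_dose = np.random.default_rng(2).normal(66.0, 2.0, 10000)  # toy voxel doses (Gy)
print("EUD (a=-10, target-like):", round(eud(target_dose, a=-10), 2), "Gy")
print("mean dose               :", round(target_dose.mean(), 2), "Gy")
```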
NASA Astrophysics Data System (ADS)
Baek, Jong Geun; Jang, Hyun Soo; Oh, Young Kee; Lee, Hyun Jeong; Kim, Eng Chan
2015-07-01
The purpose of this study was to evaluate the setup uncertainties for single-fraction stereotactic radiosurgery (SF-SRS) based on clinical data for two different mask-creation methods using pretreatment cone-beam computed tomography imaging guidance. Dedicated frameless fixation BrainLAB masks for 23 patients were created with the routine mask (R-mask) making method, as explained in the BrainLAB user manual. Alternative masks (A-masks), which were created by modifying the coverage of the R-masks over the patient's head, were used for 23 patients. The systematic errors, including those for each mask and the stereotactic target localizer, were analyzed, and the errors were calculated as the means ± standard deviations (SD) of the left-right (LR), superior-inferior (SI), anterior-posterior (AP), and yaw setup corrections. In addition, the frequencies of the three-dimensional (3D) vector length were analyzed. The mean setup corrections for the R-mask in all directions were < 0.7 mm and < 0.1°, whereas the magnitudes of the SDs were relatively large compared to the mean values. In contrast, the means and SDs for the A-mask were smaller than those for the R-mask, with the exception of the SD in the AP direction. The means and SDs in the yaw rotational direction for the R-mask and the A-mask were comparable. 3D vector shifts of larger magnitude occurred more frequently for the R-mask than for the A-mask. The setup uncertainties for each mask with the stereotactic localizing system had an asymmetric offset towards the positive AP direction. The A-mask-creation method, which covers the top of the patient's head, is superior to the R-mask method, so the use of the A-mask is encouraged for SF-SRS to reduce setup uncertainties. Moreover, careful mask-making is required to prevent possible setup uncertainties.
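Population statistics of the kind reported here are conventionally summarized from the per-patient correction logs: the systematic component Σ is the standard deviation of the per-patient mean shifts and the random component σ is the root mean square of the per-patient SDs, after which a CTV-to-PTV margin is often estimated with the 2.5Σ + 0.7σ recipe. The sketch below uses made-up shift data, and both the recipe and its relevance to a single-fraction radiosurgery workflow are assumptions to be judged clinically rather than conclusions of this study.

```python
import numpy as np

def setup_error_summary(shifts_mm):
    """shifts_mm: array (n_patients, n_sessions) of setup corrections along one axis."""
    patient_means = shifts_mm.mean(axis=1)
    sigma_sys = patient_means.std(ddof=1)                               # Sigma: SD of patient means
    sigma_rand = np.sqrt(np.mean(shifts_mm.std(axis=1, ddof=1) ** 2))   # sigma: RMS of patient SDs
    group_mean = patient_means.mean()                                   # overall systematic offset
    return group_mean, sigma_sys, sigma_rand

rng = np.random.default_rng(3)
# 23 patients x 5 imaging sessions, toy AP shifts in mm
shifts = rng.normal(0.5, 1.0, size=(23, 1)) + rng.normal(0.0, 1.5, size=(23, 5))
m, s_sys, s_rand = setup_error_summary(shifts)
print(f"mean {m:.2f} mm, Sigma {s_sys:.2f} mm, sigma {s_rand:.2f} mm")
print(f"van Herk-type margin: {2.5 * s_sys + 0.7 * s_rand:.1f} mm")
```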
Shibayama, Yusuke; Arimura, Hidetaka; Hirose, Taka-Aki; Nakamoto, Takahiro; Sasaki, Tomonari; Ohga, Saiji; Matsushita, Norimasa; Umezu, Yoshiyuki; Nakamura, Yasuhiko; Honda, Hiroshi
2017-05-01
The setup errors and organ motion errors pertaining to the clinical target volume (CTV) have been considered as two major causes of uncertainties in the determination of the CTV-to-planning target volume (PTV) margins for prostate cancer radiation treatment planning. We based our study on the assumption that interfractional target shape variations are not negligible as another source of uncertainty for the determination of precise CTV-to-PTV margins. Thus, we investigated the interfractional shape variations of CTVs based on a point distribution model (PDM) for prostate cancer radiation therapy. To quantitate the shape variations of CTVs, the PDM was applied to the contours of 4 types of CTV regions (low-risk, intermediate-risk, and high-risk CTVs, and prostate plus entire seminal vesicles), which were delineated by considering prostate cancer risk groups on planning computed tomography (CT) and cone beam CT (CBCT) images of 73 fractions of 10 patients. The standard deviations (SDs) of the interfractional random errors for shape variations were obtained from covariance matrices based on the PDMs, which were generated from vertices of triangulated CTV surfaces. The correspondences between CTV surface vertices were determined based on a thin-plate spline robust point matching algorithm. The systematic error for shape variations was defined as the average deviation between the surfaces of an average CTV and the planning CTVs, and the random error as the average deviation of CTV surface vertices for fractions from an average CTV surface. The means of the SDs of the systematic errors for the four types of CTVs ranged from 1.0 to 2.0 mm along the anterior direction, 1.2 to 2.6 mm along the posterior direction, 1.0 to 2.5 mm along the superior direction, 0.9 to 1.9 mm along the inferior direction, 0.9 to 2.6 mm along the right direction, and 1.0 to 3.0 mm along the left direction. Concerning the random errors, the means of the SDs ranged from 0.9 to 1.2 mm along the anterior direction, 1.0 to 1.4 mm along the posterior direction, 0.9 to 1.3 mm along the superior direction, 0.8 to 1.0 mm along the inferior direction, 0.8 to 0.9 mm along the right direction, and 0.8 to 1.0 mm along the left direction. Since the shape variations were not negligible for intermediate- and high-risk CTVs, they should be taken into account for the determination of the CTV-to-PTV margins in radiation treatment planning of prostate cancer. © 2017 American Association of Physicists in Medicine.
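A point distribution model of this kind stacks the corresponded surface-vertex coordinates of each fraction into one vector per shape and examines the covariance of those vectors; the leading principal components then describe the dominant interfractional shape-variation modes. The sketch below uses synthetic, already-corresponded surfaces (establishing the correspondence itself, e.g. with thin-plate-spline robust point matching, is not reproduced).

```python
import numpy as np

rng = np.random.default_rng(4)
n_fractions, n_vertices = 8, 200

# Synthetic corresponded CTV surfaces: a mean shape plus random per-fraction deformation (mm).
mean_shape = rng.normal(0.0, 10.0, size=(n_vertices, 3))
shapes = mean_shape + rng.normal(0.0, 1.5, size=(n_fractions, n_vertices, 3))

X = shapes.reshape(n_fractions, -1)            # one row per fraction: (x1, y1, z1, x2, ...)
X_centered = X - X.mean(axis=0)

# Principal components of the point distribution model via SVD of the centered data matrix.
U, s, Vt = np.linalg.svd(X_centered, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)
print("variance explained by first 3 modes:", np.round(explained[:3], 3))

# Per-vertex random error: average deviation of vertices from the mean surface.
devs = np.linalg.norm(shapes - shapes.mean(axis=0), axis=2)   # (fractions, vertices)
print("mean vertex deviation from average CTV: %.2f mm" % devs.mean())
```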
Genetic evolutionary taboo search for optimal marker placement in infrared patient setup
NASA Astrophysics Data System (ADS)
Riboldi, M.; Baroni, G.; Spadea, M. F.; Tagaste, B.; Garibaldi, C.; Cambria, R.; Orecchia, R.; Pedotti, A.
2007-09-01
In infrared patient setup adequate selection of the external fiducial configuration is required for compensating inner target displacements (target registration error, TRE). Genetic algorithms (GA) and taboo search (TS) were applied in a newly designed approach to optimal marker placement: the genetic evolutionary taboo search (GETS) algorithm. In the GETS paradigm, multiple solutions are simultaneously tested in a stochastic evolutionary scheme, where taboo-based decision making and adaptive memory guide the optimization process. The GETS algorithm was tested on a group of ten prostate patients, to be compared to standard optimization and to randomly selected configurations. The changes in the optimal marker configuration, when TRE is minimized for OARs, were specifically examined. Optimal GETS configurations ensured a 26.5% mean decrease in the TRE value, versus 19.4% for conventional quasi-Newton optimization. Common features in GETS marker configurations were highlighted in the dataset of ten patients, even when multiple runs of the stochastic algorithm were performed. Including OARs in TRE minimization did not considerably affect the spatial distribution of GETS marker configurations. In conclusion, the GETS algorithm proved to be highly effective in solving the optimal marker placement problem. Further work is needed to embed site-specific deformation models in the optimization process.
NASA Astrophysics Data System (ADS)
Mundermann, Lars; Mundermann, Annegret; Chaudhari, Ajit M.; Andriacchi, Thomas P.
2005-01-01
Anthropometric parameters are fundamental for a wide variety of applications in biomechanics, anthropology, medicine and sports. Recent technological advancements provide methods for constructing 3D surfaces directly. Of these new technologies, visual hull construction may be the most cost-effective yet sufficiently accurate method. However, the conditions influencing the accuracy of anthropometric measurements based on visual hull reconstruction are unknown. The purpose of this study was to evaluate the conditions that influence the accuracy of 3D shape-from-silhouette reconstruction of body segments as a function of the number of cameras, camera resolution and object contours. The results demonstrate that the visual hulls lacked accuracy in concave regions and narrow spaces, but setups with a high number of cameras reconstructed a human form with an average accuracy of 1.0 mm. In general, setups with fewer than 8 cameras yielded largely inaccurate visual hull constructions, while setups with 16 or more cameras provided good volume estimations. Body segment volumes were obtained with an average error of 10% at a 640×480 resolution using 8 cameras. Changes in resolution did not significantly affect the average error. However, substantial decreases in error were observed with an increasing number of cameras (33.3% using 4 cameras; 10.5% using 8 cameras; 4.1% using 16 cameras; 1.2% using 64 cameras).
NASA Astrophysics Data System (ADS)
Saga, R. S.; Jauhari, W. A.; Laksono, P. W.
2017-11-01
This paper presents an integrated inventory model consisting of a single vendor and a single buyer. The buyer manages its inventory periodically and orders products from the vendor to satisfy the end customer's demand, where the annual demand and the ordering cost are treated as fuzzy quantities. The buyer uses a service level constraint instead of a stock-out cost term, so that the stock-out level per cycle is bounded. The vendor then produces and delivers the products to the buyer. The vendor has the option of committing an investment to reduce the setup cost. However, the vendor's production process is imperfect, so the delivered lot contains some defective products. Moreover, the buyer's inspection process is not error-free, since the inspector may misclassify the products' quality. The objective is to find the optimal values of the review period, the setup cost, and the number of deliveries in one production cycle that minimize the joint total cost. An algorithm and a numerical example are provided to illustrate the application of the model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, S; Charpentier, P; Sayler, E
2015-06-15
Purpose: Isocenter shifts and rotations to correct patient setup errors and organ motion cannot remedy some shape changes of large targets. We are investigating new methods for quantifying target deformation for real-time IGRT of breast and chest wall cancer. Methods: Ninety-five patients with breast or chest wall cancer were accrued in an IRB-approved clinical trial of IGRT using 3D surface images acquired at daily setup and at beam-on time via an in-room camera. Shifts and rotations relative to the planned reference surface were determined using iterative-closest-point alignment. Local surface displacements and target deformation were measured via ray-surface intersection and principal component analysis (PCA) of the external surface, respectively. Isocenter shift, upper-abdominal displacement, and vectors of the surface projected onto the two principal components, PC1 and PC2, were evaluated for sensitivity and accuracy in detection of target deformation. Setup errors for some deformed targets were estimated by separately registering the target volume, inner surface, or external surface in weekly CBCT, or their outlines on weekly EPI. Results: Setup differences according to the inner surface, external surface, or target volume could be up to 1.5 cm. Video surface-guided setup agreed with EPI results to within <0.5 cm, while CBCT results sometimes (∼20%) differed from those of EPI (>0.5 cm) due to target deformation for some large breasts and some chest walls undergoing deep-breath-hold irradiation. The square root of PC1 and PC2 is very sensitive to external surface deformation and irregular breathing. Conclusion: PCA of external surfaces is a quick and simple way to detect target deformation in IGRT of breast and chest wall cancer. Setup corrections based on the target volume, inner surface, and external surface could be significantly different. Thus, checking for target shape changes is essential for accurate image-guided patient setup and motion tracking of large deformable targets. NIH grant for the first author as consultant and the last author as the PI.
NASA Astrophysics Data System (ADS)
Ammazzalorso, F.; Bednarz, T.; Jelen, U.
2014-03-01
We demonstrate acceleration on graphics processing units (GPUs) of the automatic identification of robust particle therapy beam setups, minimizing the negative dosimetric effects of Bragg peak displacement caused by treatment-time patient positioning errors. Our particle therapy research toolkit, RobuR, was extended with OpenCL support and used to implement calculation on the GPU of the Port Homogeneity Index, a metric scoring irradiation port robustness through analysis of tissue density patterns prior to dose optimization and computation. Results were benchmarked against an independent native CPU implementation. Numerical results were in agreement between the GPU and native CPU implementations. For 10 skull base cases, the GPU-accelerated implementation was employed to select beam setups for proton and carbon ion treatment plans, which proved to be dosimetrically robust when recomputed in the presence of various simulated positioning errors. From the point of view of performance, the average running time on the GPU decreased by at least one order of magnitude compared to the CPU, rendering the GPU-accelerated analysis a feasible step in an interactive clinical treatment planning session. In conclusion, selection of robust particle therapy beam setups can be effectively accelerated on a GPU and become an unintrusive part of the particle therapy treatment planning workflow. Additionally, the speed gain opens new usage scenarios, like interactive analysis manipulation (e.g., constraining of some setups) and re-execution. Finally, through OpenCL portable parallelism, the new implementation is suitable also for CPU-only use, taking advantage of multiple cores, and can potentially exploit types of accelerators other than GPUs.
TH-A-9A-03: Dosimetric Effect of Rotational Errors for Lung Stereotactic Body Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, J; Kim, H; Park, J
2014-06-15
Purpose: To evaluate the dosimetric effects on the target volume and organs at risk (OARs) due to roll rotational errors in treatment setup of stereotactic body radiation therapy (SBRT) for lung cancer. Methods: A total of 23 volumetric modulated arc therapy (VMAT) plans for lung SBRT were examined in this retrospective study. The CT image of each VMAT plan was intentionally rotated by ±1°, ±2°, and ±3° to simulate roll rotational setup errors. The axis of rotation was set at the center of the T-spine. The target volume and OARs in the rotated CT images were re-defined by deformable registration of the original contours. The dose distributions on each set of rotated images were re-calculated to cover the planning target volume (PTV) with the prescription dose before and after the couch translational correction. The dose-volumetric changes of the PTVs and spinal cords were analyzed. Results: The differences in D95% of the PTVs for −3°, −2°, −1°, 1°, 2°, and 3° roll rotations before the couch translational correction were on average −11.3±11.4%, −5.46±7.24%, −1.11±1.38%, −3.34±3.97%, −9.64±10.3%, and −16.3±14.7%, respectively. After the couch translational correction, those values were −0.195±0.544%, −0.159±0.391%, −0.188±0.262%, −0.310±0.270%, −0.407±0.331%, and −0.433±0.401%, respectively. The maximum dose difference of the spinal cord among the 23 plans, even after the couch translational correction, was 25.9% at −3° rotation. Conclusions: Roll rotational setup errors in lung SBRT significantly influenced the coverage of the target volume with the VMAT technique. This could be in part compensated by the translational couch correction. However, in spite of the translational correction, the delivered doses to the spinal cord could be higher than the calculated doses. Therefore, if rotational setup errors exist during lung SBRT with the VMAT technique, rotational correction, rather than translational correction alone, should be considered to prevent over-irradiation of normal tissues.
Cryptographic salting for security enhancement of double random phase encryption schemes
NASA Astrophysics Data System (ADS)
Velez Zea, Alejandro; Fredy Barrera, John; Torroba, Roberto
2017-10-01
Security in optical encryption techniques is a subject of great importance, especially in light of recent reports of successful attacks. We propose a new procedure to reinforce the ciphertexts generated in double random phase encrypting experimental setups. This ciphertext is protected by multiplexing with a ‘salt’ ciphertext coded with the same setup. We present an experimental implementation of the ‘salting’ technique. Thereafter, we analyze the resistance of the ‘salted’ ciphertext under some of the commonly known attacks reported in the literature, demonstrating the validity of our proposal.
Modelling and Predicting Backstroke Start Performance Using Non-Linear and Linear Models.
de Jesus, Karla; Ayala, Helon V H; de Jesus, Kelly; Coelho, Leandro Dos S; Medeiros, Alexandre I A; Abraldes, José A; Vaz, Mário A P; Fernandes, Ricardo J; Vilas-Boas, João Paulo
2018-03-01
Our aim was to compare non-linear and linear mathematical model responses for backstroke start performance prediction. Ten swimmers randomly completed eight 15 m backstroke starts with feet over the wedge, four with hands on the highest horizontal handgrip and four on the vertical handgrip. Swimmers were videotaped using a dual-media camera set-up, with the starts being performed over an instrumented block with four force plates. Artificial neural networks were applied to predict the 5 m start time from kinematic and kinetic variables, and accuracy was determined using the mean absolute percentage error. Artificial neural networks predicted start time more robustly than the linear model when changing from the training to the validation dataset for the vertical handgrip (3.95 ± 1.67 vs. 5.92 ± 3.27%). Artificial neural networks obtained a smaller mean absolute percentage error than the linear model for the horizontal (0.43 ± 0.19 vs. 0.98 ± 0.19%) and vertical handgrip (0.45 ± 0.19 vs. 1.38 ± 0.30%) using all input data. The best artificial neural network validation revealed a smaller mean absolute error than the linear model for the horizontal (0.007 vs. 0.04 s) and vertical handgrip (0.01 vs. 0.03 s). Artificial neural networks should be used for backstroke 5 m start time prediction due to the small differences among elite-level performances.
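The accuracy comparison above rests on the mean absolute (percentage) error of predicted versus measured 5 m start times. The sketch below shows the two metrics together with a plain least-squares linear baseline on synthetic predictors; the neural-network counterpart would be fitted analogously and is not reproduced here, and none of the numbers relate to the study's data.

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error (%)."""
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

def mae(y_true, y_pred):
    """Mean absolute error (same units as y)."""
    return np.mean(np.abs(y_true - y_pred))

rng = np.random.default_rng(5)
X = rng.normal(size=(40, 4))                        # toy kinematic/kinetic predictors
w_true = np.array([0.05, -0.03, 0.02, 0.01])
y = 1.6 + X @ w_true + rng.normal(0.0, 0.02, 40)    # toy 5 m start times (s)

# Linear baseline fitted on the first 30 starts, validated on the last 10.
A = np.c_[np.ones(30), X[:30]]
coef, *_ = np.linalg.lstsq(A, y[:30], rcond=None)
pred = np.c_[np.ones(10), X[30:]] @ coef

print(f"validation MAPE: {mape(y[30:], pred):.2f} %")
print(f"validation MAE : {mae(y[30:], pred):.3f} s")
```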
Lei, Yu; Wu, Qiuwen
2010-04-21
Offline adaptive radiotherapy (ART) has been used to effectively correct and compensate for prostate motion and reduce the required margin. The efficacy depends on the characteristics of the patient setup error and interfraction motion through the whole treatment; specifically, systematic errors are corrected and random errors are compensated for through the margins. In online image-guided radiation therapy (IGRT) of prostate cancer, the translational setup error and inter-fractional prostate motion are corrected through pre-treatment imaging and couch correction at each fraction. However, the rotation and deformation of the target are not corrected and only accounted for with margins in treatment planning. The purpose of this study was to investigate whether the offline ART strategy is necessary for an online IGRT protocol and to evaluate the benefit of the hybrid strategy. First, to investigate the rationale of the hybrid strategy, 592 cone-beam-computed tomography (CBCT) images taken before and after each fraction for an online IGRT protocol from 16 patients were analyzed. Specifically, the characteristics of prostate rotation were analyzed. It was found that there exist systematic inter-fractional prostate rotations, and they are patient specific. These rotations, if not corrected, are persistent through the treatment fraction, and rotations detected in early fractions are representative of those in later fractions. These findings suggest that the offline adaptive replanning strategy is beneficial to the online IGRT protocol with further margin reductions. Second, to quantitatively evaluate the benefit of the hybrid strategy, 412 repeated helical CT scans from 25 patients during the course of treatment were included in the replanning study. Both low-risk patients (LRP, clinical target volume, CTV = prostate) and intermediate-risk patients (IRP, CTV = prostate + seminal vesicles) were included in the simulation. The contours of prostate and seminal vesicles were delineated on each CT. The benefit of margin reduction to compensate for both rotation and deformation in the hybrid strategy was evaluated geometrically. With the hybrid strategy, the planning margins can be reduced by 1.4 mm for LRP, and 2.0 mm for IRP, compared with the standard online IGRT only, to maintain the same 99% target volume coverage. The average relative reduction in planning target volume (PTV) based on the internal target volume (ITV) from PTV based on CTV is 19% for LRP, and 27% for IRP.
ERIC Educational Resources Information Center
Taylor, David P.
1995-01-01
Presents an experiment that demonstrates conservation of momentum and energy using a box on the ground moving backwards as it is struck by a projectile. Discusses lab calculations, setup, management, errors, and improvements. (JRH)
Comparison of 2c- and 3cLIF droplet temperature imaging
NASA Astrophysics Data System (ADS)
Palmer, Johannes; Reddemann, Manuel A.; Kirsch, Valeri; Kneer, Reinhold
2018-06-01
This work presents "pulsed 2D-3cLIF-EET" as a measurement setup for micro-droplet internal temperature imaging. The setup relies on a third color channel that allows correcting spatially changing energy transfer rates between the two applied fluorescent dyes. First measurement results are compared with results of two slightly different versions of the recent "pulsed 2D-2cLIF-EET" method. The results reveal a higher temperature measurement accuracy for the recent 2cLIF setup. The average droplet temperature is determined by the 2cLIF setup with an uncertainty of less than 1 K and a spatial deviation of about 3.7 K. The new 3cLIF approach would become competitive if the existing droplet-size dependency were accounted for by an additional calibration and if the processing algorithm included spatial measurement errors more appropriately.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malyapa, Robert; Lowe, Matthew; Christie Medical Physics and Engineering, The Christie NHS Foundation Trust, Manchester
Purpose: To evaluate the robustness of head and neck plans for treatment with intensity modulated proton therapy to range and setup errors, and to establish robustness parameters for the planning of future head and neck treatments. Methods and Materials: Ten previously treated patients were evaluated in terms of robustness to range and setup errors. Error bar dose distributions were generated for each plan, from which several metrics were extracted and used to define a robustness database of acceptable parameters over all analyzed plans. The patients were treated in sequentially delivered series, and plans were evaluated both for the first series and for the combined error over the whole treatment. To demonstrate the application of such a database in the head and neck, an alternative treatment plan was generated for 1 patient using a simultaneous integrated boost (SIB) approach, along with plans with differing numbers of fields. Results: The robustness database for the treatment of head and neck patients is presented. In an example case, comparison of single- and multiple-field plans against the database shows clear improvements in robustness from using multiple fields. A comparison of sequentially delivered series and an SIB approach for this patient shows both to be of comparable robustness, although the SIB approach shows a slightly greater sensitivity to uncertainties. Conclusions: A robustness database was created for the treatment of head and neck patients with intensity modulated proton therapy based on previous clinical experience. This will allow the identification of future plans that may benefit from alternative planning approaches to improve robustness.
Inter- and Intrafraction Uncertainty in Prostate Bed Image-Guided Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Kitty; Palma, David A.; Department of Oncology, University of Western Ontario, London
2012-10-01
Purpose: The goals of this study were to measure inter- and intrafraction setup error and prostate bed motion (PBM) in patients undergoing post-prostatectomy image-guided radiotherapy (IGRT) and to propose appropriate population-based three-dimensional clinical target volume to planning target volume (CTV-PTV) margins in both non-IGRT and IGRT scenarios. Methods and Materials: In this prospective study, 14 patients underwent adjuvant or salvage radiotherapy to the prostate bed under image guidance using linac-based kilovoltage cone-beam CT (kV-CBCT). Inter- and intrafraction uncertainty/motion was assessed by offline analysis of three consecutive daily kV-CBCT images of each patient: (1) after initial setup to skin marks, (2) after correction for positional error/immediately before radiation treatment, and (3) immediately after treatment. Results: The magnitude of interfraction PBM was 2.1 mm, and intrafraction PBM was 0.4 mm. The maximum inter- and intrafraction prostate bed motion was primarily in the anterior-posterior direction. Margins of at least 3-5 mm with IGRT and 4-7 mm without IGRT (aligning to skin marks) will ensure 95% of the prescribed dose to the clinical target volume in 90% of patients. Conclusions: PBM is a predominant source of intrafraction error compared with setup error and has implications for appropriate PTV margins. Based on inter- and estimated intrafraction motion of the prostate bed using pre- and post-kV-CBCT images, CBCT IGRT to correct for day-to-day variances can potentially reduce CTV-PTV margins by 1-2 mm. CTV-PTV margins for prostate bed treatment in the IGRT and non-IGRT scenarios are proposed; however, in cases with more uncertainty of target delineation and image guidance accuracy, larger margins are recommended.
Giske, Kristina; Stoiber, Eva M; Schwarz, Michael; Stoll, Armin; Muenter, Marc W; Timke, Carmen; Roeder, Falk; Debus, Juergen; Huber, Peter E; Thieke, Christian; Bendl, Rolf
2011-06-01
To evaluate the local positioning uncertainties during fractionated radiotherapy of head-and-neck cancer patients immobilized using a custom-made fixation device and discuss the effect of possible patient correction strategies for these uncertainties. A total of 45 head-and-neck patients underwent regular control computed tomography scanning using an in-room computed tomography scanner. The local and global positioning variations of all patients were evaluated by applying a rigid registration algorithm. One bounding box around the complete target volume and nine local registration boxes containing relevant anatomic structures were introduced. The resulting uncertainties for a stereotactic setup and the deformations referenced to one anatomic local registration box were determined. Local deformations of the patients immobilized using our custom-made device were compared with previously published results. Several patient positioning correction strategies were simulated, and the residual local uncertainties were calculated. The patient anatomy in the stereotactic setup showed local systematic positioning deviations of 1-4 mm. The deformations referenced to a particular anatomic local registration box were similar to the reported deformations assessed from patients immobilized with commercially available Aquaplast masks. A global correction, including the rotational error compensation, decreased the remaining local translational errors. Depending on the chosen patient positioning strategy, the remaining local uncertainties varied considerably. Local deformations in head-and-neck patients occur even if an elaborate, custom-made patient fixation method is used. A rotational error correction decreased the required margins considerably. None of the considered correction strategies achieved perfect alignment. Therefore, weighting of anatomic subregions to obtain the optimal correction vector should be investigated in the future. Copyright © 2011 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keeling, V; Jin, H; Ali, I
2014-06-01
Purpose: To determine the dosimetric impact of positioning errors in stereotactic hypo-fractionated treatment of intracranial lesions using 3D-translational and 3D-rotational (6D) corrections with the frameless BrainLAB ExacTrac X-Ray system. Methods: 20 cranial lesions, treated in 3 or 5 fractions, were selected. An infrared (IR) optical positioning system was employed for initial patient setup, followed by stereoscopic kV X-ray radiographs for position verification. 6D translational and rotational shifts were determined to correct patient position. If these shifts were above tolerance (0.7 mm translational and 1° rotational), corrections were applied and another set of X-rays was taken to verify patient position. The dosimetric impact (D95, Dmin, Dmax, and Dmean of the planning target volume (PTV) compared to the original plans) of positioning errors for the initial IR setup (XC: X-ray Correction) and post-correction (XV: X-ray Verification) was determined in a treatment planning system using the method proposed by Yue et al. (Med. Phys. 33, 21-31 (2006)), with 3D-translational errors only and with 6D translational and rotational errors. Results: Absolute mean translational errors (±standard deviation) for a total of 92 fractions (XC/XV) were 0.79±0.88/0.19±0.15 mm (lateral), 1.66±1.71/0.18±0.16 mm (longitudinal), 1.95±1.18/0.15±0.14 mm (vertical), and rotational errors were 0.61±0.47/0.17±0.15° (pitch), 0.55±0.49/0.16±0.24° (roll), and 0.68±0.73/0.16±0.15° (yaw). The average changes (loss of coverage) in D95, Dmin, Dmax, and Dmean were 4.5±7.3/0.1±0.2%, 17.8±22.5/1.1±2.5%, 0.4±1.4/0.1±0.3%, and 0.9±1.7/0.0±0.1% using 6D shifts, and 3.1±5.5/0.0±0.1%, 14.2±20.3/0.8±1.7%, 0.0±1.2/0.1±0.3%, and 0.7±1.4/0.0±0.1% using 3D-translational shifts only. The setup corrections (XC-XV) improved PTV coverage by 4.4±7.3% (D95) and 16.7±23.5% (Dmin) using the 6D adjustment. Strong correlations were observed between translational errors and deviations in dose coverage for XC. Conclusion: The initial BrainLAB IR setup, based on the rigidity of the mask-frame system, is not sufficient for accurate stereotactic positioning; however, with X-ray image guidance, sub-millimeter accuracy is achieved with negligible deviations in dose coverage. The angular corrections (mean angle summation = 1.84°) are important and can cause considerable deviations in dose coverage.
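A 6D correction of this kind amounts to applying a rigid transform built from three rotations (pitch, roll, yaw) plus a translation. The sketch below, which is not the authors' code, shows one way to compose such a transform in homogeneous coordinates; the rotation order and axis assignments are modelling assumptions, and the numbers plugged in are the approximate post-correction (XV) residuals quoted above.

```python
import numpy as np

# Sketch (not the authors' implementation): 4x4 rigid transform that a 6D
# couch correction would apply, from pitch/roll/yaw in degrees plus a
# translation in mm. Rotation order Rz @ Ry @ Rx is a modelling choice.
def rigid_6d(pitch_deg, roll_deg, yaw_deg, t_mm):
    p, r, y = np.radians([pitch_deg, roll_deg, yaw_deg])
    Rx = np.array([[1, 0, 0], [0, np.cos(p), -np.sin(p)], [0, np.sin(p), np.cos(p)]])
    Ry = np.array([[np.cos(r), 0, np.sin(r)], [0, 1, 0], [-np.sin(r), 0, np.cos(r)]])
    Rz = np.array([[np.cos(y), -np.sin(y), 0], [np.sin(y), np.cos(y), 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = t_mm
    return T

# Example using roughly the mean XV residuals reported above (~0.2 mm, ~0.17 deg)
print(rigid_6d(0.17, 0.16, 0.16, [0.19, 0.18, 0.15]))
```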
SU-E-J-117: Verification Method for the Detection Accuracy of Automatic Winston Lutz Test
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, A; Chan, K; Fee, F
2014-06-01
Purpose: The Winston-Lutz test (WLT) is a standard QA procedure performed prior to SRS treatment to verify mechanical iso-center setup accuracy under different gantry/couch movements. Several detection algorithms exist for analyzing the ball-radiation field alignment automatically; however, the accuracy of these algorithms has not been fully addressed. Here, we examine the possible errors arising from each step in the WLT and verify the software detection accuracy with the Rectilinear Phantom Pointer (RLPP), a tool commonly used for aligning the treatment plan coordinates with the mechanical iso-center. Methods: WLT was performed with the radio-opaque ball mounted on a MIS and irradiated onto EDR2 films. The films were scanned and processed with an in-house Matlab program for automatic iso-center detection. Tests were also performed to identify the errors arising from setup, film development, and the scanning process. The radio-opaque ball was then mounted onto the RLPP and manually offset laterally and longitudinally to 7 known positions (0, ±0.2, ±0.5, ±0.8 mm) for irradiation. The gantry and couch were set to zero degrees for all irradiations. The same scanned images were processed repeatedly to check the repeatability of the software. Results: Minimal discrepancies (mean = 0.05 mm) were detected when two films were overlapped and irradiated together but developed separately, revealing the error arising from the film processor and scanner alone. Maximum setup errors were found to be around 0.2 mm, by analyzing data collected from 10 irradiations over 2 months. For the known shifts introduced using the RLPP, the results agree with the manual offsets and fit linearly (R{sup 2}>0.99) when plotted relative to the first ball position with zero shift. Conclusion: We systematically examine the possible errors arising from each step in the WLT and introduce a simple method to verify the detection accuracy of our in-house software using a clinically available tool.
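The linearity check described above (known RLPP offsets versus software-detected positions, R{sup 2}>0.99) can be reproduced with a simple linear fit. The detected values in this sketch are made up for illustration; only the known offsets match the positions listed in the abstract.

```python
import numpy as np

# Sketch of the linearity check: known manual RLPP offsets (mm) versus
# software-detected ball positions relative to the zero-shift film.
# The "detected" values are illustrative, not measured data.
known = np.array([-0.8, -0.5, -0.2, 0.0, 0.2, 0.5, 0.8])
detected = np.array([-0.78, -0.52, -0.21, 0.01, 0.19, 0.51, 0.79])

slope, intercept = np.polyfit(known, detected, 1)
pred = slope * known + intercept
r2 = 1.0 - np.sum((detected - pred) ** 2) / np.sum((detected - detected.mean()) ** 2)
print(f"slope={slope:.3f}, intercept={intercept:.3f} mm, R^2={r2:.4f}")
```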
SU-F-BRD-05: Robustness of Dose Painting by Numbers in Proton Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Montero, A Barragan; Sterpin, E; Lee, J
Purpose: Proton range uncertainties may cause important dose perturbations within the target volume, especially when steep dose gradients are present, as in dose painting. The aim of this study is to assess the robustness against setup and range errors of highly heterogeneous dose prescriptions (i.e., dose painting by numbers) delivered by proton pencil beam scanning. Methods: An automatic workflow, based on MATLAB functions, was implemented through scripting in RayStation (RaySearch Laboratories). It performs a gradient-based segmentation of the dose painting volume from 18FDG-PET images (GTVPET) and calculates the dose prescription as a linear function of the FDG-uptake value in each voxel. The workflow was applied to two patients with head and neck cancer. Robustness against setup and range errors of the conventional PTV margin strategy (prescription dilated by 2.5 mm) versus CTV-based (minimax) robust optimization (2.5 mm setup, 3% range error) was assessed by comparing the prescription with the planned dose for a set of error scenarios. Results: In order to ensure dose coverage above 95% of the prescribed dose in more than 95% of the GTVPET voxels while compensating for the uncertainties, the plans with a PTV generated a high overdose. For the nominal case, up to 35% of the GTVPET received doses 5% beyond prescription. For the worst of the evaluated error scenarios, the volume with 5% overdose increased to 50%. In contrast, for CTV-based plans this 5% overdose was present only in a small fraction of the GTVPET, which ranged from 7% in the nominal case to 15% in the worst of the evaluated scenarios. Conclusion: The use of a PTV leads to non-robust dose distributions with excessive overdose in the painted volume. In contrast, robust optimization yields robust dose distributions with limited overdose. RaySearch Laboratories is sincerely acknowledged for providing us with the RayStation treatment planning system and for the support provided.
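The "dose painting by numbers" prescription described above maps FDG uptake linearly to a voxel-wise prescribed dose. The sketch below shows one common form of such a linear mapping; the dose levels, uptake range, and array shapes are illustrative assumptions, not the study's values.

```python
import numpy as np

# Sketch of a voxel-wise linear dose-painting prescription (assumed form;
# the paper only states that the prescription is linear in FDG uptake).
def prescription(uptake, d_min=70.0, d_max=86.0):
    u_lo, u_hi = uptake.min(), uptake.max()
    return d_min + (d_max - d_min) * (uptake - u_lo) / (u_hi - u_lo)

uptake = np.random.rand(32, 32, 16) * 10.0   # stand-in for GTVPET uptake values
dose_rx = prescription(uptake)
print(dose_rx.min(), dose_rx.max())           # spans d_min .. d_max
```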
Errors in radiation oncology: A study in pathways and dosimetric impact
Drzymala, Robert E.; Purdy, James A.; Michalski, Jeff
2005-01-01
As complexity for treating patients increases, so does the risk of error. Some publications have suggested that record and verify (R&V) systems may contribute to propagating errors. Direct data transfer has the potential to eliminate most, but not all, errors. Although the dosimetric consequences may be obvious in some cases, a detailed study does not exist. In this effort, we examined potential errors in terms of scenarios, pathways of occurrence, and dosimetry. Our goal was to prioritize error prevention according to likelihood of event and dosimetric impact. For conventional photon treatments, we investigated errors of incorrect source-to-surface distance (SSD), energy, omitted wedge (physical, dynamic, or universal) or compensating filter, incorrect wedge or compensating filter orientation, improper rotational rate for arc therapy, and geometrical misses due to incorrect gantry, collimator or table angle, reversed field settings, and setup errors. For electron beam therapy, errors investigated included incorrect energy and incorrect SSD, along with geometric misses. For special procedures we examined errors for total body irradiation (TBI: incorrect field size, dose rate, treatment distance) and LINAC radiosurgery (incorrect collimation setting, incorrect rotational parameters). Likelihood of error was determined and subsequently rated according to our history of detecting such errors. Dosimetric evaluation was conducted by using dosimetric data, treatment plans, or measurements. We found geometric misses to have the highest error probability. They most often occurred due to improper setup via coordinate shift errors or incorrect field shaping. The dosimetric impact is unique for each case and depends on the proportion of fields in error and the volume mistreated. These errors were short-lived due to rapid detection via port films. The most significant dosimetric error was related to a reversed wedge direction. This may occur due to an incorrect collimator angle or wedge orientation. For parallel-opposed 60° wedge fields, this error could be as high as 80% to a point off-axis. Other examples of dosimetric impact included the following: SSD, ~2%/cm for photons or electrons; photon energy (6 MV vs. 18 MV), on average 16% depending on depth; electron energy, ~0.5 cm of depth coverage per MeV (mega-electron volt). Of these examples, incorrect distances were most likely but rapidly detected by in vivo dosimetry. Errors were categorized by occurrence rate, methods and timing of detection, longevity, and dosimetric impact. Solutions were devised according to these criteria. To date, no one has studied the dosimetric impact of global errors in radiation oncology. Although there is heightened awareness that with increased use of ancillary devices and automation there must be a parallel increase in quality check systems and processes, errors do and will continue to occur. This study has helped us identify and prioritize potential errors in our clinic according to frequency and dosimetric impact. For example, to reduce the use of an incorrect wedge direction, our clinic employs off-axis in vivo dosimetry. To avoid a treatment distance setup error, we use both vertical table settings and optical distance indicator (ODI) values to properly set up fields. As R&V systems become more automated, more accurate and efficient data transfer will occur. This will require further analysis. Finally, we have begun examining potential intensity-modulated radiation therapy (IMRT) errors according to the same criteria.
PACS numbers: 87.53.Xd, 87.53.St PMID:16143793
Haefner, Matthias Felix; Giesel, Frederik Lars; Mattke, Matthias; Rath, Daniel; Wade, Moritz; Kuypers, Jacob; Preuss, Alan; Kauczor, Hans-Ulrich; Schenk, Jens-Peter; Debus, Juergen; Sterzing, Florian; Unterhinninghofen, Roland
2018-01-01
We developed a new approach to produce individual immobilization devices for the head based on MRI data and 3D printing technologies. The purpose of this study was to determine positioning accuracy with healthy volunteers. 3D MRI data of the head were acquired for 8 volunteers. In-house developed software processed the image data to generate a surface mesh model of the immobilization mask. After adding an interface for the couch, the fixation setup was materialized using a 3D printer with acrylonitrile butadiene styrene (ABS). Repeated MRI datasets (n=10) were acquired for all volunteers wearing their masks thus simulating a setup for multiple fractions. Using automatic image-to-image registration, displacements of the head were calculated relative to the first dataset (6 degrees of freedom). The production process has been described in detail. The absolute lateral (x), vertical (y) and longitudinal (z) translations ranged between −0.7 and 0.5 mm, −1.8 and 1.4 mm, and −1.6 and 2.4 mm, respectively. The absolute rotations for pitch (x), yaw (y) and roll (z) ranged between −0.9 and 0.8°, −0.5 and 1.1°, and −0.6 and 0.8°, respectively. The mean 3D displacement was 0.9 mm with a standard deviation (SD) of the systematic and random error of 0.2 mm and 0.5 mm, respectively. In conclusion, an almost entirely automated production process of 3D printed immobilization masks for the head derived from MRI data was established. A high level of setup accuracy was demonstrated in a volunteer cohort. Future research will have to focus on workflow optimization and clinical evaluation. PMID:29464087
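The systematic and random error figures quoted above (0.2 mm and 0.5 mm) are the usual group-level decomposition of repeated per-subject setup measurements: the systematic component is the spread of per-subject means, and the random component is the pooled within-subject spread. The paper does not spell out its exact estimator, so the sketch below uses a common decomposition on simulated stand-in data with the same layout (8 volunteers, 10 repeat scans each).

```python
import numpy as np

# Sketch of a common population decomposition of setup error from repeated
# per-subject 3-D displacement measurements (rows = subjects, cols = repeats).
# The data are simulated, not the study's measurements.
rng = np.random.default_rng(0)
d3d = np.abs(rng.normal(0.9, 0.5, size=(8, 10)))   # 8 volunteers x 10 repeat scans (mm)

subject_means = d3d.mean(axis=1)
group_mean = subject_means.mean()                              # overall mean displacement
sigma_systematic = subject_means.std(ddof=1)                   # SD of per-subject means
sigma_random = np.sqrt((d3d.std(axis=1, ddof=1) ** 2).mean())  # RMS of per-subject SDs
print(group_mean, sigma_systematic, sigma_random)
```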
High performance interconnection between high data rate networks
NASA Technical Reports Server (NTRS)
Foudriat, E. C.; Maly, K.; Overstreet, C. M.; Zhang, L.; Sun, W.
1992-01-01
The bridge/gateway system needed to interconnect a wide range of computer networks to support a wide range of user quality-of-service requirements is discussed. The bridge/gateway must handle a wide range of message types, including synchronous and asynchronous traffic, large, bursty messages, short, self-contained messages, time-critical messages, etc. It is shown that messages can be classified into three basic classes: synchronous messages, and large and small asynchronous messages. The first two require call setup so that packet identification, buffer handling, etc. can be supported in the bridge/gateway; this identification enables resequencing across differences in packet size. The third class is for messages which do not require call setup. Resequencing hardware designed to handle two types of resequencing problems is presented. The first is for a virtual parallel circuit which can scramble channel bytes. The second system is effective in handling both synchronous and asynchronous traffic between networks with highly differing packet sizes and data rates. The two other major needs for the bridge/gateway are congestion and error control. A dynamic, lossless congestion control scheme which can easily support effective error correction is presented. Results indicate that the congestion control scheme provides close to optimal capacity under congested conditions. Under conditions where errors may develop due to intervening networks which are not lossless, intermediate error recovery and correction takes 1/3 less time than equivalent end-to-end error correction under similar conditions.
Patni, Nidhi; Burela, Nagarjuna; Pasricha, Rajesh; Goyal, Jaishree; Soni, Tej Prakash; Kumar, T Senthil; Natarajan, T
2017-01-01
To achieve the best possible therapeutic ratio in carcinoma of the cervix using high-precision external beam radiation therapy techniques (image-guided radiation therapy/volumetric modulated arc therapy [IGRT/VMAT]) with kilovoltage cone-beam computed tomography (kV-CBCT). One hundred and five patients with gynecological malignancies treated with high-precision techniques (IGRT/VMAT) were included in the study. CBCT was done once a week for intensity-modulated radiation therapy and daily for IGRT/VMAT. These images were registered with the planning CT images, and translational errors were applied and recorded. In all, 2078 CBCT images were studied. The planning target volume margins were calculated from the variations in setup. The setup variation was 5.8, 10.3, and 5.6 mm in the anteroposterior, superoinferior, and mediolateral directions. This allowed adequate dose delivery to the clinical target volume and sparing of organs at risk. Daily kV-CBCT is a satisfactory method for accurate patient positioning when treating gynecological cancers with high-precision techniques. This resulted in avoiding geographic misses.
Comparative evaluation of user interfaces for robot-assisted laser phonomicrosurgery.
Dagnino, Giulio; Mattos, Leonardo S; Becattini, Gabriele; Dellepiane, Massimo; Caldwell, Darwin G
2011-01-01
This research investigates the impact of three different control devices and two visualization methods on the precision, safety and ergonomics of a new medical robotic system prototype for assistive laser phonomicrosurgery. This system allows the user to remotely control the surgical laser beam using either a flight-simulator-type joystick, a joypad, or a pen display system, in order to improve on the traditional surgical setup composed of a mechanical micromanipulator coupled with a surgical microscope. The experimental setup and protocol followed to obtain quantitative performance data from the control devices tested are fully described here. This includes sets of path-following evaluation experiments conducted with ten subjects with different skills, for a total of 700 trials. The data analysis method and experimental results are also presented, demonstrating an average 45% error reduction when using the joypad and up to a 60% error reduction when using the pen display system versus the standard phonomicrosurgery setup. These results demonstrate that the new system can provide important improvements in terms of surgical precision, ergonomics and safety. In addition, the evaluation method presented here is shown to support an objective selection of control devices for this application.
Impact of uncertainties in free stream conditions on the aerodynamics of a rectangular cylinder
NASA Astrophysics Data System (ADS)
Mariotti, Alessandro; Shoeibi Omrani, Pejman; Witteveen, Jeroen; Salvetti, Maria Vittoria
2015-11-01
The BARC benchmark deals with the flow around a rectangular cylinder with a chord-to-depth ratio equal to 5. This flow configuration is of practical interest for civil and industrial structures, and it is characterized by massively separated flow and unsteadiness. In a recent review of BARC results, significant dispersion was observed in both experimental and numerical predictions of some flow quantities, which are extremely sensitive to the various uncertainties that may be present in experiments and simulations. Besides modeling and numerical errors, in simulations it is difficult to exactly reproduce the experimental conditions due to uncertainties in the set-up parameters, which sometimes cannot be exactly controlled or characterized. Probabilistic methods and URANS simulations are used to investigate the impact of uncertainties in the following set-up parameters: the angle of incidence, the free-stream longitudinal turbulence intensity, and the turbulence length scale. Stochastic collocation is employed to perform the probabilistic propagation of the uncertainty. The discretization and modeling errors are estimated by repeating the same analysis for different grids and turbulence models. The results obtained for different assumed PDFs of the set-up parameters are also compared.
On the use of inexact, pruned hardware in atmospheric modelling
Düben, Peter D.; Joven, Jaume; Lingamneni, Avinash; McNamara, Hugh; De Micheli, Giovanni; Palem, Krishna V.; Palmer, T. N.
2014-01-01
Inexact hardware design, which advocates trading the accuracy of computations in exchange for significant savings in area, power and/or performance of computing hardware, has received increasing prominence in several error-tolerant application domains, particularly those involving perceptual or statistical end-users. In this paper, we evaluate inexact hardware for its applicability in weather and climate modelling. We expand previous studies on inexact techniques, in particular probabilistic pruning, to floating point arithmetic units and derive several simulated set-ups of pruned hardware with reasonable levels of error for applications in atmospheric modelling. The set-up is tested on the Lorenz ‘96 model, a toy model for atmospheric dynamics, using software emulation for the proposed hardware. The results show that large parts of the computation tolerate the use of pruned hardware blocks without major changes in the quality of short- and long-time diagnostics, such as forecast errors and probability density functions. This could open the door to significant savings in computational cost and to higher resolution simulations with weather and climate models. PMID:24842031
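The Lorenz '96 system mentioned above is a standard toy model for atmospheric dynamics, dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F with cyclic indices. The sketch below integrates it with a crude forward-Euler step and emulates reduced-precision ("pruned") arithmetic by casting the state to float16 each step; this cast is my stand-in for hardware emulation, not the authors' emulator, and the step size and forcing are illustrative.

```python
import numpy as np

# Sketch: Lorenz '96 toy model with a crude forward-Euler step, comparing a
# double-precision run against one whose state is truncated to float16 each
# step as a stand-in for inexact/pruned hardware.
def lorenz96_step(x, F=8.0, dt=0.005):
    d = (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F
    return x + dt * d

x_ref = np.full(40, 8.0)
x_ref[0] += 0.01                      # small perturbation to start the dynamics
x_low = x_ref.copy()
for _ in range(2000):
    x_ref = lorenz96_step(x_ref)
    x_low = lorenz96_step(x_low.astype(np.float16).astype(np.float64))
print("RMS difference after 2000 steps:", np.sqrt(np.mean((x_ref - x_low) ** 2)))
```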
Pálfalvi, László; Tóth, György; Tokodi, Levente; Márton, Zsuzsanna; Fülöp, József András; Almási, Gábor; Hebling, János
2017-11-27
A hybrid-type terahertz pulse source is proposed for high energy terahertz pulse generation. It is the combination of the conventional tilted-pulse-front setup and a transmission stair-step echelon-faced nonlinear crystal with a period falling in the hundred-micrometer range. The most important advantage of the setup is the possibility of using plane parallel nonlinear optical crystal for producing good-quality, symmetric terahertz beam. Another advantage of the proposed setup is the significant reduction of imaging errors, which is important in the case of wide pump beams that are used in high energy experiments. A one dimensional model was developed for determining the terahertz generation efficiency, and it was used for quantitative comparison between the proposed new hybrid setup and previously introduced terahertz sources. With lithium niobate nonlinear material, calculations predict an approximately ten-fold increase in the efficiency of the presently described hybrid terahertz pulse source with respect to that of the earlier proposed setup, which utilizes a reflective stair-step echelon and a prism shaped nonlinear optical crystal. By using pump pulses of 50 mJ pulse energy, 500 fs pulse length and 8 mm beam spot radius, approximately 1% conversion efficiency and 0.5 mJ terahertz pulse energy can be reached with the newly proposed setup.
Lau, Billy T; Ji, Hanlee P
2017-09-21
RNA-Seq measures gene expression by counting sequence reads belonging to unique cDNA fragments. Molecular barcodes, commonly in the form of random nucleotides, were recently introduced to improve gene expression measures by detecting amplification duplicates, but they are susceptible to errors generated during PCR and sequencing. This results in false positive counts, leading to inaccurate transcriptome quantification, especially at low-input and single-cell RNA amounts where the total number of molecules present is minuscule. To address this issue, we demonstrated the systematic identification of molecular species using transposable error-correcting barcodes that are exponentially expanded to tens of billions of unique labels. We experimentally showed that random-mer molecular barcodes suffer from substantial and persistent errors that are difficult to resolve. To assess our method's performance, we applied it to the analysis of known reference RNA standards. By including an inline random-mer molecular barcode, we systematically characterized the presence of sequence errors in random-mer molecular barcodes. We observed that such errors are extensive and become more dominant at low input amounts. We describe the first study to use transposable molecular barcodes and their use for studying random-mer molecular barcode errors. The extensive errors found in random-mer molecular barcodes may warrant the use of error-correcting barcodes for transcriptome analysis as input amounts decrease.
NASA Astrophysics Data System (ADS)
Taherkhani, Mohammand Amin; Navi, Keivan; Van Meter, Rodney
2018-01-01
Quantum-aided Byzantine agreement is an important distributed quantum algorithm with unique features in comparison to classical deterministic and randomized algorithms, requiring only a constant expected number of rounds in addition to giving a higher level of security. In this paper, we analyze details of the high-level multi-party algorithm, and propose elements of the design for the quantum architecture and circuits required at each node to run the algorithm on a quantum repeater network (QRN). Our optimization techniques have reduced the quantum circuit depth by 44% and the number of qubits in each node by 20% for a minimum five-node setup compared to the design based on the standard arithmetic circuits. These improvements lead to a quantum system architecture with 160 qubits per node, a space-time product (an estimate of the required fidelity) KQ ≈ 1.3×10^5 per node, and an error threshold of 1.1×10^-6 for the total nodes in the network. The evaluation of the designed architecture shows that to execute the algorithm once on the minimum setup, we need to successfully distribute a total of 648 Bell pairs across the network, spread evenly between all pairs of nodes. This framework can be considered a starting point for establishing a road-map for a light-weight demonstration of a distributed quantum application on QRNs.
Experimental assessment of a 3-D plenoptic endoscopic imaging system.
Le, Hanh N D; Decker, Ryan; Krieger, Axel; Kang, Jin U
2017-01-01
An endoscopic imaging system using a plenoptic technique to reconstruct 3-D information is demonstrated and analyzed in this Letter. The proposed setup integrates a clinical surgical endoscope with a plenoptic camera to achieve a depth accuracy error of about 1 mm and a precision error of about 2 mm, within a 25 mm × 25 mm field of view, operating at 11 frames per second.
NASA Astrophysics Data System (ADS)
Xia, Zhiye; Xu, Lisheng; Chen, Hongbin; Wang, Yongqian; Liu, Jinbao; Feng, Wenlan
2017-06-01
Extended range forecasting of 10-30 days, which lies between medium-term and climate prediction in terms of timescale, plays a significant role in decision-making processes for the prevention and mitigation of disastrous meteorological events. The sensitivity of initial error, model parameter error, and random error in a nonlinear cross-prediction error (NCPE) model, and their stability over the prediction validity period in 10-30-day extended range forecasting, are analyzed quantitatively. The associated sensitivity of precipitable water, temperature, and geopotential height during cases of heavy rain and hurricane is also discussed. The results are summarized as follows. First, the initial error and random error interact. When the ratio of random error to initial error is small (10^-6 to 10^-2), minor variation in random error cannot significantly change the dynamic features of a chaotic system, and therefore random error has minimal effect on the prediction. When the ratio is in the range of 10^-1 to 2 (i.e., random error dominates), attention should be paid to the random error instead of only the initial error. When the ratio is around 10^-2 to 10^-1, both influences must be considered. Their mutual effects may bring considerable uncertainty to extended range forecasting, and de-noising is therefore necessary. Second, in terms of model parameter error, the embedding dimension m should be determined by the actual nonlinear time series. The dynamic features of a chaotic system cannot be depicted because of the incomplete structure of the attractor when m is small. When m is large, prediction indicators can vanish because of the scarcity of phase points in phase space. A method for overcoming the cut-off effect (m > 4) is proposed. Third, for heavy rains, precipitable water is more sensitive to the prediction validity period than temperature or geopotential height; however, for hurricanes, geopotential height is most sensitive, followed by precipitable water.
A gamma-ray testing technique for spacecraft. [considering cosmic radiation effects
NASA Technical Reports Server (NTRS)
Gribov, B. S.; Repin, N. N.; Sakovich, V. A.; Sakharov, V. M.
1977-01-01
The simulated cosmic radiation effect on a spacecraft structure is evaluated by gamma ray testing in relation to structural thickness. A drawing of the test set-up is provided and measurement errors are discussed.
Yingying, Zhang; Jiancheng, Lai; Cheng, Yin; Zhenhua, Li
2009-03-01
The dependence of the surface plasmon resonance (SPR) phase difference curve on the complex refractive index of a sample in the Kretschmann configuration is discussed comprehensively, based on which a new method is proposed to measure the complex refractive index of turbid liquids. A corresponding experimental setup was constructed to measure the SPR phase difference curve, and the complex refractive index of turbid liquid was determined. Using the setup, the complex refractive indices of Intralipid solutions with concentrations of 5%, 10%, 15%, and 20% were obtained as 1.3377 + 0.0005i, 1.3427 + 0.0028i, 1.3476 + 0.0034i, and 1.3496 + 0.0038i, respectively. Furthermore, the error analysis indicates that the root-mean-square errors of both the real and the imaginary parts of the measured complex refractive index are less than 5×10^-5.
Accounting for optical errors in microtensiometry.
Hinton, Zachary R; Alvarez, Nicolas J
2018-09-15
Drop shape analysis (DSA) techniques measure interfacial tension subject to error in image analysis and the optical system. While considerable efforts have been made to minimize image analysis errors, very little work has treated optical errors. There are two main sources of error when considering the optical system: the angle of misalignment and the choice of focal plane. Due to the convoluted nature of these sources, small angles of misalignment can lead to large errors in measured curvature. We demonstrate using microtensiometry the contributions of these sources to measured errors in radius and, more importantly, deconvolute the effects of misalignment and focal plane. Our findings are expected to have broad implications for all optical techniques measuring interfacial curvature. A geometric model is developed to analytically determine the contributions of misalignment angle and choice of focal plane to measurement error for spherical cap interfaces. This work utilizes a microtensiometer to validate the geometric model and to quantify the effect of both sources of error. For the case of a microtensiometer, an empirical calibration is demonstrated that corrects for optical errors and drastically simplifies implementation. The combination of geometric modeling and experimental results reveals a convoluted relationship between the true and measured interfacial radius as a function of the misalignment angle and choice of focal plane. The validated geometric model produces a full operating window that is strongly dependent on the capillary radius and spherical cap height. In all cases, the contribution of optical errors is minimized when the height of the spherical cap is equivalent to the capillary radius, i.e. a hemispherical interface. The understanding of these errors allows for correct measurement of interfacial curvature and interfacial tension regardless of experimental setup. For the case of microtensiometry, this greatly decreases the time for experimental setup and increases experimental accuracy. In a broad sense, this work outlines the importance of optical errors in all DSA techniques. More specifically, these results have important implications for all microscale and microfluidic measurements of interface curvature. Copyright © 2018 Elsevier Inc. All rights reserved.
Modelling and Predicting Backstroke Start Performance Using Non-Linear and Linear Models
de Jesus, Karla; Ayala, Helon V. H.; de Jesus, Kelly; Coelho, Leandro dos S.; Medeiros, Alexandre I.A.; Abraldes, José A.; Vaz, Mário A.P.; Fernandes, Ricardo J.; Vilas-Boas, João Paulo
2018-01-01
Our aim was to compare non-linear and linear mathematical model responses for backstroke start performance prediction. Ten swimmers randomly completed eight 15 m backstroke starts with feet over the wedge, four with hands on the highest horizontal and four on the vertical handgrip. Swimmers were videotaped using a dual media camera set-up, with the starts being performed over an instrumented block with four force plates. Artificial neural networks were applied to predict 5 m start time using kinematic and kinetic variables and to determine the accuracy of the mean absolute percentage error. Artificial neural networks predicted start time more robustly than the linear model with respect to changing training to the validation dataset for the vertical handgrip (3.95 ± 1.67 vs. 5.92 ± 3.27%). Artificial neural networks obtained a smaller mean absolute percentage error than the linear model in the horizontal (0.43 ± 0.19 vs. 0.98 ± 0.19%) and vertical handgrip (0.45 ± 0.19 vs. 1.38 ± 0.30%) using all input data. The best artificial neural network validation revealed a smaller mean absolute error than the linear model for the horizontal (0.007 vs. 0.04 s) and vertical handgrip (0.01 vs. 0.03 s). Artificial neural networks should be used for backstroke 5 m start time prediction due to the quite small differences among the elite level performances. PMID:29599857
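The accuracy metrics compared above, mean absolute percentage error (MAPE) and mean absolute error (MAE), are straightforward to compute from predicted and measured start times. The sketch below uses illustrative times, not study data.

```python
import numpy as np

# Sketch of the error metrics used above: MAPE and MAE between predicted and
# measured 5 m start times. Times are illustrative stand-ins.
measured = np.array([2.31, 2.45, 2.28, 2.52, 2.40])
predicted = np.array([2.33, 2.42, 2.30, 2.49, 2.44])

mape = 100.0 * np.mean(np.abs((measured - predicted) / measured))
mae = np.mean(np.abs(measured - predicted))
print(f"MAPE = {mape:.2f}%, MAE = {mae:.3f} s")
```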
NASA Astrophysics Data System (ADS)
Beavis, Andrew W.; Ward, James W.
2014-03-01
Purpose: In recent years there has been interest in using Computer Simulation within Medical training. The VERT (Virtual Environment for Radiotherapy Training) system is a Flight Simulator for Radiation Oncology professionals, wherein fundamental concepts, techniques and problematic scenarios can be safely investigated. Methods: The system provides detailed simulations of several Linacs and the ability to display DICOM treatment plans. Patients can be mis-positioned with 'set-up errors' which can be explored visually, dosimetrically and using IGRT. Similarly, a variety of Linac calibration and configuration parameters can be altered manually or randomly via controlled errors in the simulated 3D Linac and its component parts. The implications of these can be investigated by following through a treatment scenario or using QC devices available within a Physics software module. Results: One resultant exercise is a systematic mis-calibration of the 'lateral laser height' by 2 mm. The offset in patient alignment is easily identified using IGRT and is then corrected by reference to the 'in-room monitor'. The dosimetric implication is demonstrated to be 0.4% by setting up a dosimetry phantom by the lasers (and ignoring TSD information). Finally, the need for recalibration can be shown with the Laser Alignment Phantom or by reference to the front pointer. Conclusions: The VERT system provides a realistic environment for training and enhancing understanding of radiotherapy concepts and techniques. Linac error conditions can be explored in this context and valuable experience gained in a controlled manner in a compressed period of time.
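The 0.4% figure quoted for the 2 mm laser mis-calibration is consistent with a simple inverse-square distance error when the phantom is set by the lasers rather than the TSD. The quick check below assumes a nominal 100 cm treatment distance, which the abstract does not state.

```python
# Quick inverse-square check of the ~0.4% dose effect quoted above for a
# 2 mm distance error (100 cm nominal SSD is an assumption).
ssd_nominal_cm = 100.0
ssd_error_cm = 0.2
dose_ratio = (ssd_nominal_cm / (ssd_nominal_cm + ssd_error_cm)) ** 2
print(f"relative dose change ~ {100 * (1 - dose_ratio):.2f}%")   # about 0.4%
```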
More irregular eye shape in low myopia than in emmetropia.
Tabernero, Juan; Schaeffel, Frank
2009-09-01
To improve the description of the peripheral eye shape in myopia and emmetropia by using a new method for continuous measurement of the peripheral refractive state. A scanning photorefractor was designed to record refractive errors in the vertical pupil meridian across the horizontal visual field (up to ±45 degrees). The setup consists of a hot mirror that continuously projects the infrared light from a photoretinoscope under different angles of eccentricity into the eye. The movement of the mirror is controlled by using two stepping motors. Refraction in a group of 17 emmetropic subjects and 11 myopic subjects (mean, -4.3 D; SD, 1.7) was measured without spectacle correction. For the analysis of eye shape, the refractive error versus the eccentricity angles was fitted with different polynomials (from second to tenth order). The new setup presents some important advantages over previous techniques: The subject does not have to change gaze during the measurements, and a continuous profile is obtained rather than discrete points. There was a significant difference in the fitting errors between the subjects with myopia and those with emmetropia. Tenth-order polynomials were required in myopic subjects to achieve a quality of fit similar to that in emmetropic subjects fitted with only sixth-order polynomials. Apparently, the peripheral shape of the myopic eye is more "bumpy." A new setup is presented for obtaining continuous peripheral refraction profiles. It was found that the peripheral retinal shape is more irregular even in only moderately myopic eyes, perhaps because the sclera lost some rigidity even at the early stage of myopia.
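The polynomial-order comparison described above can be illustrated with a simple fit of a refraction-versus-eccentricity profile at increasing order. The profile below is synthetic (a toy "bumpy" myopic profile), not study data; the point is only that a bumpier profile needs a higher order for the same residual.

```python
import numpy as np

# Sketch: fit a synthetic peripheral refraction profile with 6th- and
# 10th-order polynomials and compare the RMS fit error.
angle = np.linspace(-45, 45, 181)                                # eccentricity (degrees)
profile = -4.3 + 0.001 * angle**2 + 0.2 * np.sin(angle / 5.0)    # toy refraction (D)

x = angle / 45.0                                                  # normalize for a stable fit
for order in (6, 10):
    coeffs = np.polyfit(x, profile, order)
    rms = np.sqrt(np.mean((np.polyval(coeffs, x) - profile) ** 2))
    print(f"order {order}: RMS fit error = {rms:.3f} D")
```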
Branching random walk with step size coming from a power law
NASA Astrophysics Data System (ADS)
Bhattacharya, Ayan; Subhra Hazra, Rajat; Roy, Parthanil
2015-09-01
In their seminal work, Brunet and Derrida made predictions on the random point configurations associated with branching random walks. We shall discuss the limiting behavior of such point configurations when the displacement random variables come from a power law. In particular, we establish that two predictions of Brunet and Derrida remain valid in this setup and investigate various other issues mentioned in their paper.
Kentgens, Anne-Christianne; Guidi, Marisa; Korten, Insa; Kohler, Lena; Binggeli, Severin; Singer, Florian; Latzin, Philipp; Anagnostopoulou, Pinelopi
2018-05-01
Multiple breath washout (MBW) is a sensitive test to measure lung volumes and ventilation inhomogeneity from infancy on. The commonly used setup for infant MBW, based on an ultrasonic flowmeter, requires extensive signal processing, which may reduce robustness. A new setup may overcome some previous limitations, but formal validation is lacking. We assessed the feasibility of infant MBW testing with the new setup and compared functional residual capacity (FRC) values of the old and the new setup in vivo and in vitro. We performed MBW in four healthy infants and four infants with cystic fibrosis, as well as in a Plexiglas lung simulator using realistic lung volumes and breathing patterns, with the new (Exhalyzer D, Spiroware 3.2.0, Ecomedics) and the old setup (Exhalyzer D, WBreath 3.18.0, ndd) in random sequence. The technical feasibility of MBW with the new device setup was 100%. Intra-subject variability in FRC was low in both setups, but differences in FRC between the setups were considerable (mean relative difference 39.7%, range 18.9 to 65.7, P = 0.008). Corrections of software settings decreased the FRC differences (14.0%, range -6.4 to 42.3, P = 0.08). These results were confirmed in vitro. MBW measurements with the new setup were feasible in infants. However, despite attempts to correct software settings, outcomes between setups were not interchangeable. Further work is needed before widespread application of the new setup can be recommended. © 2018 Wiley Periodicals, Inc.
Accounting for hardware imperfections in EIT image reconstruction algorithms.
Hartinger, Alzbeta E; Gagnon, Hervé; Guardo, Robert
2007-07-01
Electrical impedance tomography (EIT) is a non-invasive technique for imaging the conductivity distribution of a body section. Different types of EIT images can be reconstructed: absolute, time difference and frequency difference. Reconstruction algorithms are sensitive to many errors which translate into image artefacts. These errors generally result from incorrect modelling or inaccurate measurements. Every reconstruction algorithm incorporates a model of the physical set-up which must be as accurate as possible since any discrepancy with the actual set-up will cause image artefacts. Several methods have been proposed in the literature to improve the model realism, such as creating anatomical-shaped meshes, adding a complete electrode model and tracking changes in electrode contact impedances and positions. Absolute and frequency difference reconstruction algorithms are particularly sensitive to measurement errors and generally assume that measurements are made with an ideal EIT system. Real EIT systems have hardware imperfections that cause measurement errors. These errors translate into image artefacts since the reconstruction algorithm cannot properly discriminate genuine measurement variations produced by the medium under study from those caused by hardware imperfections. We therefore propose a method for eliminating these artefacts by integrating a model of the system hardware imperfections into the reconstruction algorithms. The effectiveness of the method has been evaluated by reconstructing absolute, time difference and frequency difference images with and without the hardware model from data acquired on a resistor mesh phantom. Results have shown that artefacts are smaller for images reconstructed with the model, especially for frequency difference imaging.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Di Maso, L; Forbang, R Teboh; Zhang, Y
Purpose: To explore the dosimetric consequences of uncorrected rotational setup errors during SBRT for pancreatic cancer patients. Methods: This was a retrospective study utilizing data from ten (n=10) previously treated SBRT pancreas patients. For each original planning CT, we applied rotational transformations to derive additional CT images representative of possible rotational setup errors. This resulted in 6 different sets of rotational combinations, creating a total of 60 CT planning images. The patients' clinical dosimetric plans were then applied to their corresponding rotated CT images. The 6 rotation sets encompassed a 3-, 2-, and 1-degree rotation in each rotational direction, plus a 3-degree rotation in just the pitch, a 3-degree rotation in just the yaw, and a 3-degree rotation in just the roll. After the dosimetric plan was applied to the rotated CT images, the resulting plan was evaluated and compared with the clinical plan for tumor coverage and normal tissue sparing. Results: PTV coverage, defined here by V33, ranged from 92-98% throughout the patients' clinical plans. After an n-degree rotation in each rotational direction, that range decreased to 68-87%, 85-92%, and 88-94% for n=3, 2, and 1, respectively. Normal tissue sparing, defined here by the proximal stomach V15, ranged from 0-8.9 cc throughout the patients' clinical plans. After an n-degree rotation in each rotational direction, that range increased to 0-17 cc, 0-12 cc, and 0-10 cc for n=3, 2, and 1, respectively. Conclusion: For pancreatic SBRT, small rotational setup errors in the pitch, yaw, and roll directions on average caused underdosage of the PTV and overdosage of proximal normal tissue. The 1-degree rotation was on average the least detrimental to normal tissue sparing and PTV coverage. The 3-degree yaw created on average the lowest increase in the volume of normal tissue receiving dose. This research was sponsored by the AAPM Education Council through the AAPM Education and Research Fund for the AAPM Summer Undergraduate Fellowship Program.
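The plan-evaluation metrics used above, V33 for the PTV and V15 for the proximal stomach, are dose-volume metrics of the form VD (volume receiving at least D Gy). The sketch below shows a generic way to compute them from a dose grid and binary structure masks; the arrays and voxel size are illustrative, not the study's data or software.

```python
import numpy as np

# Sketch of generic VD metrics from a dose array and structure masks.
def v_dose_cc(dose_gy, mask, threshold_gy, voxel_cc=0.027):
    """Absolute volume (cc) of `mask` receiving >= threshold_gy."""
    return np.count_nonzero((dose_gy >= threshold_gy) & mask) * voxel_cc

def v_dose_percent(dose_gy, mask, threshold_gy):
    """Percentage of `mask` receiving >= threshold_gy."""
    return 100.0 * np.count_nonzero((dose_gy >= threshold_gy) & mask) / np.count_nonzero(mask)

dose = np.random.rand(50, 50, 50) * 40.0                  # stand-in dose grid (Gy)
ptv = np.zeros_like(dose, bool); ptv[20:30, 20:30, 20:30] = True
stomach = np.zeros_like(dose, bool); stomach[10:18, 20:30, 20:30] = True
print("PTV V33 =", v_dose_percent(dose, ptv, 33.0), "%")
print("Stomach V15 =", v_dose_cc(dose, stomach, 15.0), "cc")
```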
Developing and implementing a high precision setup system
NASA Astrophysics Data System (ADS)
Peng, Lee-Cheng
The demand for high-precision radiotherapy (HPRT) was first addressed in stereotactic radiosurgery using a rigid, invasive stereotactic head frame. Fractionated stereotactic radiotherapy (SRT) with a frameless device was developed alongside a growing interest in sophisticated treatments with tight margins and high dose gradients. This dissertation establishes the complete management process for HPRT in frameless SRT, including image-guided localization, immobilization, and dose evaluation. The ideal, precise positioning system allows ease of relocation, real-time patient movement assessment, and high accuracy, with no additional dose in daily use. A new image-guided stereotactic positioning system (IGSPS), the Align RT3C 3D surface camera system (ART, VisionRT), which combines 3D surface images and uses a real-time tracking technique, was developed to ensure accurate positioning in the first place. Uncertainties of the current optical tracking system, which causes patient discomfort because it requires bite plates made with a dental impression technique and external markers, are identified. The accuracy and feasibility of ART are validated by comparisons with the optical tracking and cone-beam computed tomography (CBCT) systems. Additionally, an effective daily quality assurance (QA) program for the linear accelerator and multiple IGSPSs is the most important factor in ensuring system performance in daily use. Systematic errors arising from the variety of phantoms, and the long measurement time caused by switching phantoms, were identified. We investigated the use of a commercially available daily QA device to improve efficiency and thoroughness. Reasonable action levels were established by considering dosimetric relevance and clinical workflow. For intricate treatments, the effect of dose deviations caused by setup errors on tumor coverage and on toxicity to OARs remains uncertain. The lack of adequate dosimetric simulations based on the true treatment coordinates from the treatment planning system (TPS) has limited adaptive treatments. A reliable and accurate dosimetric simulation of uncorrected errors, using the TPS and in-house software, has been developed. In SRT, the calculated dose deviation is compared to the original treatment dose with the dose-volume histogram to investigate the dose effect of rotational errors. In summary, this work performed a quality assessment to investigate the overall accuracy of current setup systems. To achieve ideal HPRT, a reliable dosimetric simulation, an effective daily QA program, and effective, precise setup systems were developed and validated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Christine H.; Gerry, Emily; Chmura, Steven J.
2015-01-01
Purpose: To calculate planning target volume (PTV) margins for chest wall and regional nodal targets using daily orthogonal kilovolt (kV) imaging and to study residual setup error after kV alignment using volumetric cone-beam computed tomography (CBCT). Methods and Materials: Twenty-one postmastectomy patients were treated with intensity modulated radiation therapy with 7-mm PTV margins. Population-based PTV margins were calculated from translational shifts after daily kV positioning and/or weekly CBCT data for each of 8 patients, whose surgical clips were used as surrogates for target volumes. Errors from kV and CBCT data were mathematically combined to generate PTV margins for 3 simulated alignment workflows: (1) skin marks alone; (2) weekly kV imaging; and (3) daily kV imaging. Results: The kV data from 613 treatment fractions indicated that a 7-mm uniform margin would account for 95% of daily shifts if patients were positioned using only skin marks. Total setup errors incorporating both kV and CBCT data were larger than those from kV alone, yielding PTV expansions of 7 mm anterior–posterior, 9 mm left–right, and 9 mm superior–inferior. Required PTV margins after weekly kV imaging were similar in magnitude to those for alignment to skin marks, but rotational adjustments of patients were required in 32% ± 17% of treatments. These rotations would have remained uncorrected without the use of daily kV imaging. Despite the use of daily kV imaging, CBCT data taken at the treatment position indicate that an anisotropic PTV margin of 6 mm anterior–posterior, 4 mm left–right, and 8 mm superior–inferior must be retained to account for residual errors. Conclusions: Cone-beam CT provides additional information on the 3-dimensional reproducibility of treatment setup for chest wall targets. Three-dimensional data indicate that a uniform 7-mm PTV margin is insufficient in the absence of daily IGRT. Interfraction movement is greater than suggested by 2-dimensional imaging; thus a margin of at least 4 to 8 mm must be retained despite the use of daily IGRT.
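The abstract states that the kV and CBCT errors were "mathematically combined" but does not give the formula. One common approach, shown here only as an assumption, is to add independent error sources in quadrature per error type and then apply a standard margin recipe; the per-axis values below are hypothetical.

```python
import numpy as np

# Sketch (assumed method, not the paper's stated formula): combine two
# independent residual-error sources in quadrature, then apply the
# van Herk recipe to obtain per-axis PTV margins.
def combine(sigma_a, sigma_b):
    return np.sqrt(np.asarray(sigma_a) ** 2 + np.asarray(sigma_b) ** 2)

Sigma = combine([1.2, 1.5, 1.8], [0.8, 1.0, 1.5])   # systematic, per axis (mm)
sigma = combine([1.5, 1.8, 2.0], [1.0, 1.2, 1.4])   # random, per axis (mm)
margins = 2.5 * Sigma + 0.7 * sigma
print(np.round(margins, 1), "mm (LR, SI, AP)")
```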
Pella, A; Riboldi, M; Tagaste, B; Bianculli, D; Desplanques, M; Fontana, G; Cerveri, P; Seregni, M; Fattori, G; Orecchia, R; Baroni, G
2014-08-01
In an increasing number of clinical indications, radiotherapy with accelerated particles shows relevant advantages when compared with high-energy X-ray irradiation. However, due to the finite range of ions, particle therapy can be severely compromised by setup errors and geometric uncertainties. The purpose of this work is to describe the commissioning and the design of the quality assurance procedures for the patient positioning and setup verification systems at the Italian National Center for Oncological Hadrontherapy (CNAO). The accuracy of the systems installed at CNAO and devoted to patient positioning and setup verification has been assessed using a laser tracking device. The accuracy of calibration and of image-based setup verification relying on the in-room X-ray imaging system was also quantified. Quality assurance tests to check the integration among all patient setup systems were designed, and records of daily QA tests since the start of clinical operation (2011) are presented. The overall accuracy of the patient positioning system and patient verification system motion was proved to be below 0.5 mm under all the examined conditions, with median values below the 0.3 mm threshold. Image-based registration in phantom studies exhibited sub-millimetric accuracy in setup verification at both cranial and extra-cranial sites. The calibration residuals of the optical tracking system (OTS) were found to be consistent with expectations, with peak values below 0.3 mm. Quality assurance tests, performed daily before clinical operation, confirm adequate integration and sub-millimetric setup accuracy. Robotic patient positioning was successfully integrated with optical tracking and stereoscopic X-ray verification for patient setup in particle therapy. Sub-millimetric setup accuracy was achieved and consistently verified in daily clinical operation.
Li, Winnie; Purdie, Thomas G; Taremi, Mojgan; Fung, Sharon; Brade, Anthony; Cho, B C John; Hope, Andrew; Sun, Alexander; Jaffray, David A; Bezjak, Andrea; Bissonnette, Jean-Pierre
2011-12-01
To assess the intrafractional geometric accuracy of lung stereotactic body radiation therapy (SBRT) patients treated with volumetric image guidance. Treatment setup accuracy was analyzed in 133 SBRT patients treated via research ethics board-approved protocols. For each fraction, a localization cone-beam computed tomography (CBCT) scan was acquired for soft-tissue registration to the internal target volume, followed by a couch adjustment for positional discrepancies greater than 3 mm, verified with a second CBCT scan. CBCT scans were also performed at intrafraction and end of fraction. Patient positioning data from 2047 CBCT scans were recorded to determine systematic (Σ) and random (σ) uncertainties, as well as planning target volume margins. Data were further stratified and analyzed by immobilization method (evacuated cushion [n=75], evacuated cushion plus abdominal compression [n=33], or chest board [n=25]) and by patients' Eastern Cooperative Oncology Group performance status (PS): 0 (n=31), 1 (n=70), or 2 (n=32). Using CBCT, the internal target volume was matched within ±3 mm in 16% of all fractions at localization, 89% at verification, 72% during treatment, and 69% after treatment. Planning target volume margins required to encompass residual setup errors after couch corrections (verification CBCT scans) were 4 mm, and they increased to 5 mm with target intrafraction motion (post-treatment CBCT scans). Small differences (<1 mm) in the cranial-caudal direction of target position were observed between the immobilization cohorts in the localization, verification, intrafraction, and post-treatment CBCT scans (p<0.01). Positional drift varied according to patient PS, with the PS 1 and 2 cohorts drifting out of position by mid-treatment more than the PS 0 cohort in the cranial-caudal direction (p=0.04). Image guidance ensures high geometric accuracy for lung SBRT irrespective of immobilization method or PS. A 5-mm setup margin suffices to address intrafraction motion. This setup margin may be further reduced by strategies such as frequent image guidance or volumetric arc therapy to correct or limit intrafraction motion. Copyright © 2011 Elsevier Inc. All rights reserved.
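The per-stage compliance figures quoted above (e.g., 89% of fractions within ±3 mm at verification) are simply the fraction of recorded target positions lying within a tolerance in all three directions. The sketch below illustrates that tally on simulated stand-in shifts; the data are not the study's 2047 recorded scans.

```python
import numpy as np

# Sketch: fraction of CBCT-measured target positions within +/-3 mm of the
# planned position in all three directions. Shifts are simulated stand-ins.
rng = np.random.default_rng(1)
shifts = rng.normal(0.0, 2.5, size=(2047, 3))        # mm, (LR, SI, AP)

within = np.all(np.abs(shifts) <= 3.0, axis=1)
print(f"{100 * within.mean():.0f}% of scans within ±3 mm in all directions")
```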
DOE Office of Scientific and Technical Information (OSTI.GOV)
Falco, Maria Daniela, E-mail: mdanielafalco@hotmail.co; Fontanarosa, Davide; Miceli, Roberto
2011-04-01
Cone-beam X-ray volumetric imaging in the treatment room allows online correction of set-up errors and offline assessment of residual set-up errors and organ motion. In this study, the registration algorithm of the X-ray volume imaging software (XVI, Elekta, Crawley, United Kingdom), which manages a commercial cone-beam computed tomography (CBCT)-based positioning system, has been tested using a homemade and an anthropomorphic phantom to: (1) assess its performance in detecting known translational and rotational set-up errors and (2) transfer the transformation matrix of its registrations into a commercial treatment planning system (TPS) for offline organ motion analysis. Furthermore, the CBCT dose index has been measured for a particular site (prostate: 120 kV, 1028.8 mAs, approximately 640 frames) using a standard Perspex cylindrical body phantom (diameter 32 cm, length 15 cm) and a 10-cm-long pencil ionization chamber. We found that known displacements were correctly calculated by the registration software to within 1.3 mm and 0.4{sup o}. For the anthropomorphic phantom, only translational displacements were considered. Both studies showed errors within the intrinsic uncertainty of our system for translational displacements (estimated as 0.87 mm) and rotational displacements (estimated as 0.22{sup o}). The resulting table translations proposed by the system to correct the displacements were also checked with portal images and found to place the isocenter of the plan on the linac isocenter within an error of 1 mm, which is the dimension of the spherical lead marker inserted at the center of the homemade phantom. The registration matrix translated into the TPS image fusion module correctly reproduced the alignment between planning CT scans and CBCT scans. Finally, measurements of the CBCT dose index indicate that CBCT acquisition delivers less dose than conventional CT scans and electronic portal imaging device portals. The registration software was found to be accurate, its registration matrix can be easily translated into the TPS, and a low dose is delivered to the patient during image acquisition. These results can help in designing imaging protocols for offline evaluations.
Errors in causal inference: an organizational schema for systematic error and random error.
Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji
2016-11-01
To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.
Measuring a Fiber-Optic Delay Line Using a Mode-Locked Laser
NASA Technical Reports Server (NTRS)
Tu, Meirong; McKee, Michael R.; Pak, Kyung S.; Yu, Nan
2010-01-01
The figure schematically depicts a laboratory setup for determining the optical length of a fiber-optic delay line at a precision greater than that obtainable by use of optical time-domain reflectometry or of mechanical measurement of length during the delay-line-winding process. In this setup, the delay line becomes part of the resonant optical cavity that governs the frequency of oscillation of a mode-locked laser. The length can then be determined from frequency-domain measurements, as described below. The laboratory setup is basically an all-fiber ring laser in which the delay line constitutes part of the ring. Another part of the ring - the laser gain medium - is an erbium-doped fiber amplifier pumped by a diode laser at a wavelength of 980 nm. The loop also includes an optical isolator, two polarization controllers, and a polarizing beam splitter. The optical isolator enforces unidirectional lasing. The polarization beam splitter allows light in only one polarization mode to pass through the ring; light in the orthogonal polarization mode is rejected from the ring and utilized as a diagnostic output, which is fed to an optical spectrum analyzer and a photodetector. The photodetector output is fed to a radio-frequency spectrum analyzer and an oscilloscope. The fiber ring laser can generate continuous-wave radiation in non-mode-locked operation or ultrashort optical pulses in mode-locked operation. The mode-locked operation exhibited by this ring is said to be passive in the sense that no electro-optical modulator or other active optical component is used to achieve it. Passive mode locking is achieved by exploiting optical nonlinearity of passive components in such a manner as to obtain ultra-short optical pulses. In this setup, the particular nonlinear optical property exploited to achieve passive mode locking is nonlinear polarization rotation. This or any ring laser can support oscillation in multiple modes as long as sufficient gain is present to overcome losses in the ring. When mode locking is achieved, oscillation occurs in all the modes having the same phase and same polarization. The frequency interval between modes, often denoted the free spectral range (FSR), is given by c/(nL), where c is the speed of light in vacuum, n is the effective index of refraction of the fiber, and L is the total length of optical path around the ring. Therefore, the length of the fiber-optic delay line, as part of the length around the ring, can be calculated from the FSRs measured with and without the delay line incorporated into the ring. For this purpose, the FSR measurements are made by use of the optical and radio-frequency spectrum analyzers. In experimentation on a 10-km-long fiber-optic delay line, it was found that this setup made it possible to measure the length to within a fractional error of about 3 × 10^-6, corresponding to a length error of 3 cm. In contrast, measurements by optical time-domain reflectometry and mechanical measurement were found to be much less precise: for optical time-domain reflectometry, the fractional error was found to be no less than 10^-4 (corresponding to a length error of 1 m), and for mechanical measurement, the fractional error was found to be about 10^-2 (corresponding to a length error of 100 m).
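A short worked example of the length calculation described above, using the FSR = c/(nL) relation; the FSR readings and effective index are illustrative assumptions, not the experiment's values:

```python
# Sketch: delay-line length from the free spectral range (FSR) measured with
# and without the delay line in the ring. The FSR values and the effective
# refractive index n are placeholders for illustration only.

c = 299_792_458.0      # speed of light in vacuum (m/s)
n = 1.468              # assumed effective refractive index of the fiber

fsr_without = 2.0e6    # Hz, ring without the delay line (placeholder)
fsr_with    = 20.4e3   # Hz, ring with the ~10 km delay line (placeholder)

ring_only  = c / (n * fsr_without)           # optical path without delay line
ring_total = c / (n * fsr_with)              # optical path with delay line
delay_line_length = ring_total - ring_only   # length of the delay line (m)
print(f"delay line ~ {delay_line_length / 1e3:.3f} km")
```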
NASA Astrophysics Data System (ADS)
Wahl, N.; Hennig, P.; Wieser, H. P.; Bangert, M.
2017-07-01
The sensitivity of intensity-modulated proton therapy (IMPT) treatment plans to uncertainties can be quantified and mitigated with robust/min-max and stochastic/probabilistic treatment analysis and optimization techniques. Those methods usually rely on sparse random, importance, or worst-case sampling. Inevitably, this imposes a trade-off between computational speed and accuracy of the uncertainty propagation. Here, we investigate analytical probabilistic modeling (APM) as an alternative for uncertainty propagation and minimization in IMPT that does not rely on scenario sampling. APM propagates probability distributions over range and setup uncertainties via a Gaussian pencil-beam approximation into moments of the probability distributions over the resulting dose in closed form. It supports arbitrary correlation models and allows for efficient incorporation of fractionation effects regarding random and systematic errors. We evaluate the trade-off between run-time and accuracy of APM uncertainty computations on three patient datasets. Results are compared against reference computations facilitating importance and random sampling. Two approximation techniques to accelerate uncertainty propagation and minimization based on probabilistic treatment plan optimization are presented. Runtimes are measured on CPU and GPU platforms; dosimetric accuracy is quantified in comparison to a sampling-based benchmark (5000 random samples). APM accurately propagates range and setup uncertainties into dose uncertainties at competitive run-times (GPU ≤ 5 min). The resulting standard deviation (expectation value) of dose shows average global γ(3%/3 mm) pass rates between 94.2% and 99.9% (98.4% and 100.0%). All investigated importance sampling strategies provided less accuracy at higher run-times considering only a single fraction. Considering fractionation, APM uncertainty propagation and treatment plan optimization was proven to be possible at constant time complexity, while run-times of sampling-based computations are linear in the number of fractions. Using sum sampling within APM, uncertainty propagation can only be accelerated at the cost of reduced accuracy in variance calculations. For probabilistic plan optimization, we were able to approximate the necessary pre-computations within seconds, yielding treatment plans of similar quality as gained from exact uncertainty propagation. APM is suited to enhance the trade-off between speed and accuracy in uncertainty propagation and probabilistic treatment plan optimization, especially in the context of fractionation. This brings fully-fledged APM computations within reach of clinical application.
Lovelock, D Michael; Hua, Chiaho; Wang, Ping; Hunt, Margie; Fournier-Bidoz, Nathalie; Yenice, Kamil; Toner, Sean; Lutz, Wendell; Amols, Howard; Bilsky, Mark; Fuks, Zvi; Yamada, Yoshiya
2005-08-01
Because of the proximity of the spinal cord, effective radiotherapy of paraspinal tumors to high doses requires highly conformal dose distributions, accurate patient setup, setup verification, and patient immobilization. An immobilization cradle has been designed to facilitate the rapid setup and radiation treatment of patients with paraspinal disease. For all treatments, patients were set up to within 2.5 mm of the design using an amorphous silicon portal imager. Setup reproducibility of the target using the cradle and associated clinical procedures was assessed by measuring the setup error prior to any correction. From 350 anterior/posterior images and 303 lateral images, the standard deviations, as determined by the imaging procedure, were 1.3 mm, 1.6 mm, and 2.1 mm in the ant/post, right/left, and superior/inferior directions. Immobilization was assessed by measuring patient shifts between localization images taken before and after treatment. From 67 ant/post image pairs and 49 lateral image pairs, the standard deviations were found to be less than 1 mm in all directions. Careful patient positioning and immobilization has enabled us to develop a successful clinical program of high-dose, conformal radiotherapy of paraspinal disease using a conventional linac equipped with dynamic multileaf collimation and an amorphous silicon portal imager.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hadley, Austin; Ding, George X., E-mail: george.ding@vanderbilt.edu
2014-01-01
Craniospinal irradiation (CSI) requires abutting fields at the cervical spine. Junction shifts are conventionally used to prevent setup error–induced overdosage/underdosage from occurring at the same location. This study compared the dosimetric differences at the cranial-spinal junction between a single-gradient junction technique and conventional multiple-junction shifts and evaluated the effect of setup errors on the dose distributions between both techniques for a treatment course and single fraction. Conventionally, 2 lateral brain fields and a posterior spine field(s) are used for CSI with weekly 1-cm junction shifts. We retrospectively replanned 4 CSI patients using a single-gradient junction between the lateral brain fields and the posterior spine field. The fields were extended to allow a minimum 3-cm field overlap. The dose gradient at the junction was achieved using dose painting and intensity-modulated radiation therapy planning. The effect of positioning setup errors on the dose distributions for both techniques was simulated by applying shifts of ± 3 and 5 mm. The resulting cervical spine doses across the field junction for both techniques were calculated and compared. Dose profiles were obtained for both a single fraction and entire treatment course to include the effects of the conventional weekly junction shifts. Compared with the conventional technique, the gradient-dose technique resulted in higher dose uniformity and conformity to the target volumes, lower organ at risk (OAR) mean and maximum doses, and diminished hot spots from systematic positioning errors over the course of treatment. Single-fraction hot and cold spots were improved for the gradient-dose technique. The single-gradient junction technique provides improved conformity, dose uniformity, diminished hot spots, lower OAR mean and maximum dose, and one plan for the entire treatment course, which reduces the potential human error associated with conventional 4-shifted plans.
Chang, C M; Fang, K M; Huang, T W; Wang, C T; Cheng, P W
2013-12-01
Studies on the performance of surface registration with electromagnetic tracking systems are lacking in both live surgery and the laboratory setting. This study presents the efficiency in time of the system preparation as well as the navigational accuracy of surface registration using electromagnetic tracking systems. Forty patients with bilateral chronic paranasal pansinusitis underwent endoscopic sinus surgery after undergoing sinus computed tomography scans. The surgeries were performed under electromagnetic navigation guidance after the surface registration had been carried out on all of the patients. The intraoperative measurements indicate the time taken for equipment set-up, surface registration and surgical procedure, as well as the degree of navigation error along 3 axes. The time taken for equipment set-up, surface registration and the surgical procedure was 179 ± 23 seconds, 39 ± 4.8 seconds and 114 ± 36 minutes, respectively. A comparison of the navigation error along the 3 axes showed that the deviation in the medial-lateral direction was significantly less than that in the anterior-posterior and cranial-caudal directions. The procedures of equipment set-up and surface registration in electromagnetic navigation tracking are efficient, convenient and easy to manipulate. The system accuracy is within the acceptable ranges, especially on the medial-lateral axis.
High-resolution smile measurement and control of wavelength-locked QCW and CW laser diode bars
NASA Astrophysics Data System (ADS)
Rosenkrantz, Etai; Yanson, Dan; Klumel, Genady; Blonder, Moshe; Rappaport, Noam; Peleg, Ophir
2018-02-01
High-power linewidth-narrowed applications of laser diode arrays demand high beam quality in the fast, or vertical, axis. This requires very high fast-axis collimation (FAC) quality with sub-mrad angular errors, especially where laser diode bars are wavelength-locked by a volume Bragg grating (VBG) to achieve high pumping efficiency in solid-state and fiber lasers. The micron-scale height deviation of emitters in a bar against the FAC lens causes the so-called smile effect with variable beam pointing errors and wavelength locking degradation. We report a bar smile imaging setup allowing FAC-free smile measurement in both QCW and CW modes. By Gaussian beam simulation, we establish optimum smile imaging conditions to obtain high resolution and accuracy with well-resolved emitter images. We then investigate the changes in the smile shape and magnitude under thermal stresses such as variable duty cycles in QCW mode and, ultimately, CW operation. Our smile measurement setup provides useful insights into the smile behavior and correlation between the bar collimation in QCW mode and operating conditions under CW pumping. With relaxed alignment tolerances afforded by our measurement setup, we can screen bars for smile compliance and potential VBG lockability prior to assembly, with benefits in both lower manufacturing costs and higher yield.
On the assimilation set-up of ASCAT soil moisture data for improving streamflow catchment simulation
NASA Astrophysics Data System (ADS)
Loizu, Javier; Massari, Christian; Álvarez-Mozos, Jesús; Tarpanelli, Angelica; Brocca, Luca; Casalí, Javier
2018-01-01
Assimilation of remotely sensed surface soil moisture (SSM) data into hydrological catchment models has been identified as a means to improve streamflow simulations, but reported results vary markedly depending on the particular model, catchment and assimilation procedure used. In this study, the influence of key aspects, such as the type of model, re-scaling technique and SSM observation error considered, were evaluated. For this aim, Advanced SCATterometer ASCAT-SSM observations were assimilated through the ensemble Kalman filter into two hydrological models of different complexity (namely MISDc and TOPLATS) run on two Mediterranean catchments of similar size (750 km2). Three different re-scaling techniques were evaluated (linear re-scaling, variance matching and cumulative distribution function matching), and SSM observation error values ranging from 0.01% to 20% were considered. Four different efficiency measures were used for evaluating the results. Increases in Nash-Sutcliffe efficiency (0.03-0.15) and efficiency indices (10-45%) were obtained, especially when linear re-scaling and observation errors within 4-6% were considered. This study found out that there is a potential to improve streamflow prediction through data assimilation of remotely sensed SSM in catchments of different characteristics and with hydrological models of different conceptualizations schemes, but for that, a careful evaluation of the observation error and re-scaling technique set-up utilized is required.
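For illustration, a minimal sketch of one of the re-scaling techniques named above (variance matching), applied to a synthetic satellite soil-moisture series before assimilation; the series and their statistics are invented for the example:

```python
import numpy as np

# Variance matching: the satellite SSM time series is rescaled so that its
# mean and standard deviation match the model's surface soil moisture
# climatology before assimilation. Synthetic series, for illustration only.

rng = np.random.default_rng(0)
model_ssm = 0.25 + 0.05 * rng.standard_normal(1000)   # model climatology
ascat_ssm = 0.40 + 0.12 * rng.standard_normal(1000)   # satellite retrievals

def variance_matching(obs, model):
    """Map observations to the model's mean and variance."""
    return model.mean() + (obs - obs.mean()) * (model.std() / obs.std())

rescaled = variance_matching(ascat_ssm, model_ssm)
print(rescaled.mean(), rescaled.std())   # approximately the model mean and std
```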
NASA Astrophysics Data System (ADS)
Möhler, Christian; Russ, Tom; Wohlfahrt, Patrick; Elter, Alina; Runz, Armin; Richter, Christian; Greilich, Steffen
2018-01-01
An experimental setup for consecutive measurement of ion and x-ray absorption in tissue or other materials is introduced. With this setup using a 3D-printed sample container, the reference stopping-power ratio (SPR) of materials can be measured with an uncertainty of below 0.1%. A total of 65 porcine and bovine tissue samples were prepared for measurement, comprising five samples each of 13 tissue types representing about 80% of the total body mass (three different muscle and fatty tissues, liver, kidney, brain, heart, blood, lung and bone). Using a standard stoichiometric calibration for single-energy CT (SECT) as well as a state-of-the-art dual-energy CT (DECT) approach, SPR was predicted for all tissues and then compared to the measured reference. With the SECT approach, the SPRs of all tissues were predicted with a mean error of (-0.84 ± 0.12)% and a mean absolute error of (1.27 ± 0.12)%. In contrast, the DECT-based SPR predictions were overall consistent with the measured reference with a mean error of (-0.02 ± 0.15)% and a mean absolute error of (0.10 ± 0.15)%. Thus, in this study, the potential of DECT to decrease range uncertainty could be confirmed in biological tissue.
Study on Network Error Analysis and Locating based on Integrated Information Decision System
NASA Astrophysics Data System (ADS)
Yang, F.; Dong, Z. H.
2017-10-01
Integrated information decision system (IIDS) integrates multiple sub-systems developed by many facilities, including almost a hundred kinds of software, and provides various services such as email, short messages, drawing and sharing. Because the underlying protocols differ and user standards are not unified, many errors occur during setup, configuration, and operation, which seriously affects usage. Because these errors are varied and may occur in different operation phases and stages, TCP/IP communication protocol layers, and sub-system software, it is necessary to design a network error analysis and locating tool for IIDS to solve the above problems. This paper studies network error analysis and locating based on IIDS, which provides strong theoretical and technical support for the running and communication of IIDS.
Some practical problems in implementing randomization.
Downs, Matt; Tucker, Kathryn; Christ-Schmidt, Heidi; Wittes, Janet
2010-06-01
While often theoretically simple, implementing randomization to treatment in a masked, but confirmable, fashion can prove difficult in practice. At least three categories of problems occur in randomization: (1) bad judgment in the choice of method, (2) design and programming errors in implementing the method, and (3) human error during the conduct of the trial. This article focuses on these latter two types of errors, dealing operationally with what can go wrong after trial designers have selected the allocation method. We offer several case studies and corresponding recommendations for lessening the frequency of problems in allocating treatment or for mitigating the consequences of errors. Recommendations include: (1) reviewing the randomization schedule before starting a trial, (2) being especially cautious of systems that use on-demand random number generators, (3) drafting unambiguous randomization specifications, (4) performing thorough testing before entering a randomization system into production, (5) maintaining a dataset that captures the values investigators used to randomize participants, thereby allowing the process of treatment allocation to be reproduced and verified, (6) resisting the urge to correct errors that occur in individual treatment assignments, (7) preventing inadvertent unmasking to treatment assignments in kit allocations, and (8) checking a sample of study drug kits to allow detection of errors in drug packaging and labeling. Although we performed a literature search of documented randomization errors, the examples that we provide and the resultant recommendations are based largely on our own experience in industry-sponsored clinical trials. We do not know how representative our experience is or how common errors of the type we have seen occur. Our experience underscores the importance of verifying the integrity of the treatment allocation process before and during a trial. Clinical Trials 2010; 7: 235-245. http://ctj.sagepub.com.
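In the spirit of recommendations (1), (3) and (5) (a schedule that can be reviewed up front and regenerated for verification), here is a hypothetical sketch of a seeded permuted-block allocation list; it is not taken from the article, and all names and values are illustrative:

```python
import random

# Seeded permuted-block schedule: the fixed seed makes the allocation list
# reproducible, so it can be reviewed before the trial and re-derived later
# to verify the treatment assignments actually used.

def permuted_block_schedule(n_blocks, block_size=4, arms=("A", "B"), seed=20100601):
    rng = random.Random(seed)                 # fixed seed => verifiable schedule
    per_arm = block_size // len(arms)
    schedule = []
    for _ in range(n_blocks):
        block = list(arms) * per_arm
        rng.shuffle(block)                    # permute within each block
        schedule.extend(block)
    return schedule

print(permuted_block_schedule(3))   # e.g. ['B', 'A', 'A', 'B', ...]
```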
NASA Technical Reports Server (NTRS)
Blucker, T. J.; Ferry, W. W.
1971-01-01
An error model is described for the Apollo 15 sun compass, a contingency navigational device. Field test data are presented along with significant results of the test. The errors reported include a random error resulting from tilt in leveling the sun compass, a random error because of observer sighting inaccuracies, a bias error because of mean tilt in compass leveling, a bias error in the sun compass itself, and a bias error because the device is leveled to the local terrain slope.
Evaluation of RSA set-up from a clinical biplane fluoroscopy system for 3D joint kinematic analysis.
Bonanzinga, Tommaso; Signorelli, Cecilia; Bontempi, Marco; Russo, Alessandro; Zaffagnini, Stefano; Marcacci, Maurilio; Bragonzoni, Laura
2016-01-01
Dynamic roentgen stereophotogrammetric analysis (RSA), a technique currently based only on customized radiographic equipment, has been shown to be a very accurate method for detecting three-dimensional (3D) joint motion. The aim of the present work was to evaluate the applicability of an innovative RSA set-up for in vivo knee kinematic analysis, using a biplane fluoroscopic image system. To this end, the authors describe the set-up as well as a possible protocol for clinical knee joint evaluation. The accuracy of the kinematic measurements is assessed. The authors evaluated the accuracy of 3D kinematic analysis of the knee in a new RSA set-up, based on a commercial biplane fluoroscopy system integrated into the clinical environment. The study was organized in three main phases: an in vitro test under static conditions, an in vitro test under dynamic conditions reproducing a flexion-extension range of motion (ROM), and an in vivo analysis of the flexion-extension ROM. For each test, the following were calculated as an indication of the tracking accuracy: mean, minimum and maximum values, and standard deviation of the error of rigid body fitting. In terms of rigid body fitting, in vivo test errors were found to be 0.10±0.05 mm. Phantom tests in static and kinematic conditions showed precision levels, for translations and rotations, of below 0.1 mm/0.2° and below 0.5 mm/0.3°, respectively, for all directions. The results of this study suggest that kinematic RSA can be successfully performed using a standard clinical biplane fluoroscopy system for the acquisition of slow movements of the lower limb. A kinematic RSA set-up using a clinical biplane fluoroscopy system is potentially applicable and provides a useful method for obtaining better characterization of joint biomechanics.
Roshan, N M; Sakeenabi, B
2012-01-01
To evaluate anxiety in children during occlusal atraumatic restorative treatment (ART) in primary molars, and to compare the anxiety for the ART procedure performed in a school environment and in a hospital dental setup. A randomized controlled trial where one dentist placed 120 ART restorations in 60 five- to seven-year-olds who had bilateral matched pairs of occlusal carious primary molars. A split-mouth design was used to place restorations in school and in the hospital dental setup, which were assigned randomly to contralateral sides. Anxiety was evaluated by the modified Venham score and the heart rate of the children at five fixed moments during dental treatment. At the entrance of the children into the treatment room, a statistically significant difference between treatment in the school environment and treatment in the hospital dental setup could be found for Venham score and heart rate (P = 0.023 and P = 0.037, respectively). At the start of the treatment procedure, higher Venham scores and heart rates were observed in children treated in the hospital dental setup in comparison with the children treated in the school environment; this finding was statistically significant (P = 0.011 and P = 0.029, respectively). During all other three points of treatment, the Venham scores of the children treated in school were lower than those of the children treated in the hospital dental setup, but not statistically significantly so (P > 0.05). A positive correlation between Venham scores and heart rate was established. No statistically significant relation could be established between boys and girls. Overall, anxiety in children for ART treatment was found to be low and the procedure was well accepted irrespective of the environment where treatment was performed. The hospital dental setup by itself made children anxious at the entrance and the start of treatment when compared with children treated in the school environment.
Accuracy of off-line bioluminescence imaging to localize targets in preclinical radiation research.
Tuli, Richard; Armour, Michael; Surmak, Andrew; Reyes, Juvenal; Iordachita, Iulian; Patterson, Michael; Wong, John
2013-04-01
In this study, we investigated the accuracy of using off-line bioluminescence imaging (BLI) and tomography (BLT) to guide irradiation of small soft tissue targets on a small animal radiation research platform (SARRP) with on-board cone beam CT (CBCT) capability. A small glass bulb containing BL cells was implanted as a BL source in the abdomen of 11 mouse carcasses. Bioluminescence imaging and tomography were acquired for each carcass. Six carcasses were setup visually without immobilization and 5 were restrained in position with tape. All carcasses were setup in treatment position on the SARRP where the centroid position of the bulb on CBCT was taken as "truth". In the 2D visual setup, the carcass was setup by aligning the point of brightest luminescence with the vertical beam axis. In the CBCT assisted setup, the pose of the carcass on CBCT was aligned with that on the 2D BL image for setup. For both 2D setup methods, the offset of the bulb centroid on CBCT from the vertical beam axis was measured. In the BLT-CBCT fusion method, the 3D torso on BLT and CBCT was registered and the 3D offset of the respective source centroids was calculated. The setup results were independent of the carcass being immobilized or not due to the onset of rigor mortis. The 2D offset of the perceived BL source position from the CBCT bulb position was 2.3 mm ± 1.3 mm. The 3D offset between BLT and CBCT was 1.5 mm ± 0.9 mm. Given the rigidity of the carcasses, the setup results represent the best that can be achieved with off-line 2D BLI and 3D BLT. The setup uncertainty would require the use of undesirably large margin of 4-5 mm. The results compel the implementation of on-board BLT capability on the SARRP to eliminate setup error and to improve BLT accuracy.
Ensemble-type numerical uncertainty information from single model integrations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rauser, Florian, E-mail: florian.rauser@mpimet.mpg.de; Marotzke, Jochem; Korn, Peter
2015-07-01
We suggest an algorithm that quantifies the discretization error of time-dependent physical quantities of interest (goals) for numerical models of geophysical fluid dynamics. The goal discretization error is estimated using a sum of weighted local discretization errors. The key feature of our algorithm is that these local discretization errors are interpreted as realizations of a random process. The random process is determined by the model and the flow state. From a class of local error random processes we select a suitable specific random process by integrating the model over a short time interval at different resolutions. The weights of the influences of the local discretization errors on the goal are modeled as goal sensitivities, which are calculated via automatic differentiation. The integration of the weighted realizations of local error random processes yields a posterior ensemble of goal approximations from a single run of the numerical model. From the posterior ensemble we derive the uncertainty information of the goal discretization error. This algorithm bypasses the requirement of detailed knowledge about the model's discretization to generate numerical error estimates. The algorithm is evaluated for the spherical shallow-water equations. For two standard test cases we successfully estimate the error of regional potential energy, track its evolution, and compare it to standard ensemble techniques. The posterior ensemble shares linear-error-growth properties with ensembles of multiple model integrations when comparably perturbed. The posterior ensemble numerical error estimates are of comparable size as those of a stochastic physics ensemble.
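A schematic and heavily simplified illustration of the idea described above: local discretization errors are drawn as realizations of a random process and combined with goal sensitivities to form a posterior ensemble of the goal error. The distributions and weights below are invented; the actual algorithm derives them from short multi-resolution integrations and automatic differentiation.

```python
import numpy as np

# Schematic only: goal error modeled as a weighted sum of local errors, where
# each local error is one realization of a random process and the weights are
# stand-ins for goal sensitivities.

rng = np.random.default_rng(1)
n_cells, n_members = 500, 100

sensitivities = rng.normal(0.0, 1.0, n_cells)                      # stand-in for AD/adjoint weights
local_error_scale = 1e-3 * np.abs(rng.normal(1.0, 0.3, n_cells))   # per-cell error process scale

# Each ensemble member is one realization of the local-error random process
local_errors = rng.normal(0.0, local_error_scale, size=(n_members, n_cells))
goal_error_ensemble = local_errors @ sensitivities                 # weighted sum per member

print("estimated goal-error std:", goal_error_ensemble.std())
```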
Hoffmans-Holtzer, Nienke A; Hoffmans, Daan; Dahele, Max; Slotman, Ben J; Verbakel, Wilko F A R
2015-03-01
The purpose of this work was to investigate whether adapting gantry and collimator angles can compensate for roll and pitch setup errors during volumetric modulated arc therapy (VMAT) delivery. Previously delivered clinical plans for locally advanced head-and-neck (H&N) cancer (n = 5), localized prostate cancer (n = 2), and whole brain with simultaneous integrated boost to 5 metastases (WB + 5M, n = 1) were used for this study. Known rigid rotations were introduced in the planning CT scans. To compensate for these, in-house software was used to adapt gantry and collimator angles in the plan. Doses to planning target volumes (PTV) and critical organs at risk (OAR) were calculated with and without compensation and compared with the original clinical plan. Measurements in the sagittal plane in a polystyrene phantom using radiochromic film were compared by gamma (γ) evaluation for 2 H&N cancer patients. For H&N plans, the introduction of 2°-roll and 3°-pitch rotations reduced mean PTV coverage from 98.7 to 96.3%. This improved to 98.1% with gantry and collimator compensation. For prostate plans respective figures were 98.4, 97.5, and 98.4%. For WB + 5M, compensation worked less well, especially for smaller volumes and volumes farther from the isocenter. Mean comparative γ evaluation (3%, 1 mm) between original and pitched plans resulted in 86% γ < 1. The corrected plan restored the mean comparison to 96% γ < 1. Preliminary data suggest that adapting gantry and collimator angles is a promising way to correct roll and pitch set-up errors of < 3° during VMAT for H&N and prostate cancer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maxim, Peter G.; Loo, Billy W.; Murphy, James D.
2011-11-15
Purpose: To evaluate the positioning accuracy of an optical positioning system for stereotactic radiosurgery in a pilot experience of optically guided, conventionally fractionated, radiotherapy for paranasal sinus and skull base tumors. Methods and Materials: Before each daily radiotherapy session, the positioning of 28 patients was set up using an optical positioning system. After this initial setup, the patients underwent standard on-board imaging that included daily orthogonal kilovoltage images and weekly cone beam computed tomography scans. Daily translational shifts were made after comparing the on-board images with the treatment planning computed tomography scans. These daily translational shifts represented the daily positional error in the optical tracking system and were recorded during the treatment course. For 13 patients treated with smaller fields, a three-degree of freedom (3DOF) head positioner was used for more accurate setup. Results: The mean positional error for the optically guided system in patients with and without the 3DOF head positioner was 1.4 ± 1.1 mm and 3.9 ± 1.6 mm, respectively (p < .0001). The mean positional error drifted 0.11 mm/wk upward during the treatment course for patients using the 3DOF head positioner (p = .057). No positional drift was observed in the patients without the 3DOF head positioner. Conclusion: Our initial clinical experience with optically guided head-and-neck fractionated radiotherapy was promising and demonstrated clinical feasibility. The optically guided setup was especially useful when used in conjunction with the 3DOF head positioner and when it was recalibrated to the shifts using the weekly portal images.
Evaluation of wave runup predictions from numerical and parametric models
Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.
2014-01-01
Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
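For context, a widely used empirical runup parameterization of the kind described (Stockdon et al., 2006) can be evaluated as below; whether this exact formula is the one used in the study is an assumption here, and the inputs are illustrative.

```python
import math

# 2% exceedance runup from offshore wave height H0 (m), peak period T (s),
# and foreshore beach slope beta_f, following the Stockdon et al. (2006) form.
# Input values are arbitrary examples.

def runup_2pct(H0, T, beta_f, g=9.81):
    L0 = g * T**2 / (2.0 * math.pi)              # deep-water wavelength
    setup = 0.35 * beta_f * math.sqrt(H0 * L0)   # wave-induced setup component
    swash = math.sqrt(H0 * L0 * (0.563 * beta_f**2 + 0.004)) / 2.0
    return 1.1 * (setup + swash)

print(runup_2pct(H0=3.0, T=12.0, beta_f=0.08))   # runup in metres, ~2 m here
```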
NASA Astrophysics Data System (ADS)
Sperling, A.; Meyer, M.; Pendsa, S.; Jordan, W.; Revtova, E.; Poikonen, T.; Renoux, D.; Blattner, P.
2018-04-01
Proper characterization of test setups used in industry for testing and traceable measurement of lighting devices by the substitution method is an important task. According to new standards for testing LED lamps, luminaires and modules, uncertainty budgets are requested because in many cases the properties of the device under test differ from those of the transfer standard used, which may cause significant errors, for example if an LED-based lamp is tested or calibrated in an integrating sphere which was calibrated with a tungsten lamp. This paper introduces a multiple transfer standard, which was designed not only to transfer a single calibration value (e.g. luminous flux) but also to characterize test setups used for LED measurements, with additional calibrated output features provided to enable the application of the new standards.
Koch, Cosima; Posch, Andreas E; Herwig, Christoph; Lendl, Bernhard
2016-12-01
The performances of a fiber optic and an optical conduit in-line attenuated total reflection mid-infrared (IR) probe during in situ monitoring of Penicillium chrysogenum fermentation were compared. The fiber optic probe was connected to a sealed, portable, Fourier transform infrared (FT-IR) process spectrometer via a plug-and-play interface. The optical conduit, on the other hand, was connected to a FT-IR process spectrometer via a knuckled probe with mirrors that had to be adjusted prior to each fermentation and were purged with dry air. Penicillin V (PenV) and its precursor phenoxyacetic acid (POX) concentrations were determined by online high-performance liquid chromatography, and the obtained concentrations were used as reference to build partial least squares regression models. Cross-validated root-mean-square errors of prediction were found to be 0.2 g L⁻¹ (POX) and 0.19 g L⁻¹ (PenV) for the fiber optic setup and 0.17 g L⁻¹ (both POX and PenV) for the conduit setup. Higher noise levels and spectrum-to-spectrum variations of the fiber optic setup led to higher noise of estimated (i.e., unknown) POX and PenV concentrations than was found for the conduit setup. It seems that a trade-off has to be made between ease of handling (fiber optic setup) and measurement accuracy (optical conduit setup) when choosing one of these systems for bioprocess monitoring. © The Author(s) 2016.
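A minimal sketch of the chemometric step described above: a partial least squares (PLS) model relating spectra to reference concentrations, scored by a cross-validated root-mean-square error. The spectra and concentrations are synthetic; this is not the authors' calibration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Synthetic IR-like spectra generated from a single "pure component" band plus
# noise; concentrations play the role of the HPLC reference values.
rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 60, 200
conc = rng.uniform(0.0, 5.0, n_samples)                       # g/L, reference
pure_band = np.exp(-0.5 * ((np.arange(n_wavenumbers) - 80) / 10.0) ** 2)
spectra = np.outer(conc, pure_band) + 0.02 * rng.standard_normal((n_samples, n_wavenumbers))

pls = PLSRegression(n_components=3)
pred = cross_val_predict(pls, spectra, conc, cv=5).ravel()
rmsecv = np.sqrt(np.mean((pred - conc) ** 2))
print(f"cross-validated RMSE: {rmsecv:.3f} g/L")
```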
Random errors in interferometry with the least-squares method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Qi
2011-01-20
This investigation analyzes random errors in interferometric surface profilers using the least-squares method when random noise is present. Two types of random noise are considered here: intensity noise and position noise. Two formulas have been derived for estimating the standard deviations of the surface height measurements: one is for estimating the standard deviation when only intensity noise is present, and the other is for estimating the standard deviation when only position noise is present. Measurements on simulated noisy interferometric data have been performed, and standard deviations of the simulated measurements have been compared with those theoretically derived. The relationships have also been discussed between random error and the wavelength of the light source and between random error and the amplitude of the interference fringe.
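An illustrative sketch of this type of analysis: phase-shifted interferogram samples are fitted by least squares, intensity noise is added, and the spread of the recovered surface height is estimated by repetition. The noise level and fringe parameters are assumptions, not the paper's values.

```python
import numpy as np

# Least-squares phase-shifting fit: I_k = A + B*cos(phi + delta_k)
#                                       = A + c1*cos(delta_k) - c2*sin(delta_k)
# with c1 = B*cos(phi), c2 = B*sin(phi), so phi = atan2(c2, c1).

rng = np.random.default_rng(0)
wavelength = 632.8e-9                                    # HeNe, metres
deltas = np.linspace(0, 2 * np.pi, 8, endpoint=False)    # phase shifts
A, B, true_phase = 1.0, 0.8, 0.7                         # bias, modulation, phase (rad)
design = np.column_stack([np.ones_like(deltas), np.cos(deltas), -np.sin(deltas)])

heights = []
for _ in range(2000):
    intensity = A + B * np.cos(true_phase + deltas) + 0.01 * rng.standard_normal(deltas.size)
    _, c1, c2 = np.linalg.lstsq(design, intensity, rcond=None)[0]
    phase = np.arctan2(c2, c1)
    heights.append(phase * wavelength / (4 * np.pi))     # reflection: h = phi*lambda/(4*pi)

print("std of height estimate (nm):", 1e9 * np.std(heights))
```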
NASA Astrophysics Data System (ADS)
Wee, Loo Kang
2012-05-01
We develop an Easy Java Simulation (EJS) model for students to experience the physics of idealized one-dimensional collision carts. The physics model is described and simulated by both continuous dynamics and discrete transition during collision. In designing the simulations, we discuss briefly three pedagogical considerations namely (1) a consistent simulation world view with a pen and paper representation, (2) a data table, scientific graphs and symbolic mathematical representations for ease of data collection and multiple representational visualizations and (3) a game for simple concept testing that can further support learning. We also suggest using a physical world setup augmented by simulation by highlighting three advantages of real collision carts equipment such as a tacit 3D experience, random errors in measurement and the conceptual significance of conservation of momentum applied to just before and after collision. General feedback from the students has been relatively positive, and we hope teachers will find the simulation useful in their own classes.
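As a worked example of the conservation check mentioned above (momentum just before and just after a perfectly elastic 1D collision), with arbitrary illustrative masses and velocities:

```python
# Elastic 1D collision between two carts: post-collision velocities from
# conservation of momentum and kinetic energy, then a momentum check.

m1, v1 = 0.5, 0.8    # kg, m/s
m2, v2 = 1.0, -0.2

v1_after = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
v2_after = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)

p_before = m1 * v1 + m2 * v2
p_after = m1 * v1_after + m2 * v2_after
print(p_before, p_after)   # equal (up to rounding): momentum is conserved
```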
Binny, Diana; Lancaster, Craig M; Trapp, Jamie V; Crowe, Scott B
2017-09-01
This study utilizes process control techniques to identify action limits for TomoTherapy couch positioning quality assurance tests. A test was introduced to monitor accuracy of the applied couch offset detection in the TomoTherapy Hi-Art treatment system using the TQA "Step-Wedge Helical" module and MVCT detector. Individual X-charts, process capability (cp), probability (P), and acceptability (cpk) indices were used to monitor 4 years of couch IEC offset data to detect systematic and random errors in the couch positional accuracy for different action levels. Process capability tests were also performed on the retrospective data to define tolerances based on user-specified levels. A second study was carried out whereby physical couch offsets were applied using the TQA module and the MVCT detector was used to detect the observed variations. Random and systematic variations were observed for the SPC-based upper and lower control limits, and investigations were carried out to maintain the ongoing stability of the process for a 4-year and a three-monthly period. Local trend analysis showed mean variations up to ±0.5 mm in the three-monthly analysis period for all IEC offset measurements. Variations were also observed in the detected versus applied offsets using the MVCT detector in the second study, largely in the vertical direction, and actions were taken to remediate this error. Based on the results, it was recommended that imaging shifts in each coordinate direction be applied only after assessing the machine for applied versus detected test results using the step helical module. User-specified tolerance levels of at least ±2 mm were recommended for a test frequency of once every 3 months to improve couch positional accuracy. SPC enables detection of systematic variations prior to reaching machine tolerance levels. Couch encoding system recalibrations reduced variations to user-specified levels, and a 3-month monitoring period using SPC facilitated the detection of systematic and random variations. SPC analysis for couch positional accuracy enabled greater control in the identification of errors, thereby increasing confidence levels in daily treatment setups. © 2017 Royal Brisbane and Women's Hospital, Metro North Hospital and Health Service. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
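A hedged sketch of the process-control quantities named above: an individuals (X) chart with 3-sigma limits estimated from the moving range, plus cp and cpk against a ±2 mm tolerance. The couch offsets are synthetic, and the exact formulas used in the study may differ.

```python
import numpy as np

# Individuals chart limits from the average moving range (d2 = 1.128 for n = 2)
# and capability indices against a user-specified +/-2 mm tolerance.

rng = np.random.default_rng(0)
offsets = rng.normal(0.2, 0.4, 60)            # daily couch offsets (mm), synthetic

mean = offsets.mean()
mr = np.abs(np.diff(offsets)).mean()          # average moving range
sigma_hat = mr / 1.128                        # short-term sigma estimate
ucl, lcl = mean + 3 * sigma_hat, mean - 3 * sigma_hat

usl, lsl = 2.0, -2.0                          # +/-2 mm tolerance
cp = (usl - lsl) / (6 * sigma_hat)
cpk = min(usl - mean, mean - lsl) / (3 * sigma_hat)
print(f"UCL={ucl:.2f} mm, LCL={lcl:.2f} mm, cp={cp:.2f}, cpk={cpk:.2f}")
```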
Performance of a visuomotor walking task in an augmented reality training setting.
Haarman, Juliet A M; Choi, Julia T; Buurke, Jaap H; Rietman, Johan S; Reenalda, Jasper
2017-12-01
Visual cues can be used to train walking patterns. Here, we studied the performance and learning capacities of healthy subjects executing a high-precision visuomotor walking task, in an augmented reality training set-up. A beamer was used to project visual stepping targets on the walking surface of an instrumented treadmill. Two speeds were used to manipulate task difficulty. All participants (n = 20) had to change their step length to hit visual stepping targets with a specific part of their foot, while walking on a treadmill over seven consecutive training blocks, each block composed of 100 stepping targets. Distance between stepping targets was varied between short, medium and long steps. Training blocks could either be composed of random stepping targets (no fixed sequence was present in the distance between the stepping targets) or sequenced stepping targets (repeating fixed sequence was present). Random training blocks were used to measure non-specific learning and sequenced training blocks were used to measure sequence-specific learning. Primary outcome measures were performance (% of correct hits), and learning effects (increase in performance over the training blocks: both sequence-specific and non-specific). Secondary outcome measures were the performance and stepping-error in relation to the step length (distance between stepping target). Subjects were able to score 76% and 54% at first try for lower speed (2.3 km/h) and higher speed (3.3 km/h) trials, respectively. Performance scores did not increase over the course of the trials, nor did the subjects show the ability to learn a sequenced walking task. Subjects were better able to hit targets while increasing their step length, compared to shortening it. In conclusion, augmented reality training by use of the current set-up was intuitive for the user. Suboptimal feedback presentation might have limited the learning effects of the subjects. Copyright © 2017 Elsevier B.V. All rights reserved.
Feuerstein, Marco; Reichl, Tobias; Vogel, Jakob; Traub, Joerg; Navab, Nassir
2009-06-01
Electromagnetic tracking is currently one of the most promising means of localizing flexible endoscopic instruments such as flexible laparoscopic ultrasound transducers. However, electromagnetic tracking is also susceptible to interference from ferromagnetic material, which distorts the magnetic field and leads to tracking errors. This paper presents new methods for real-time online detection and reduction of dynamic electromagnetic tracking errors when localizing a flexible laparoscopic ultrasound transducer. We use a hybrid tracking setup to combine optical tracking of the transducer shaft and electromagnetic tracking of the flexible transducer tip. A novel approach of modeling the poses of the transducer tip in relation to the transducer shaft allows us to reliably detect and significantly reduce electromagnetic tracking errors. For detecting errors of more than 5 mm, we achieved a sensitivity and specificity of 91% and 93%, respectively. The initial 3-D rms error of 6.91 mm was reduced to 3.15 mm.
Contributions to the problem of piezoelectric accelerometer calibration. [using lock-in voltmeter
NASA Technical Reports Server (NTRS)
Jakab, I.; Bordas, A.
1974-01-01
After discussing the principal calibration methods for piezoelectric accelerometers, an experimental setup for accelerometer calibration by the reciprocity method is described. It is shown how the use of a lock-in voltmeter eliminates errors due to viscous damping and electrical loading.
Bian, Liheng; Suo, Jinli; Chung, Jaebum; Ou, Xiaoze; Yang, Changhuei; Chen, Feng; Dai, Qionghai
2016-06-10
Fourier ptychographic microscopy (FPM) is a novel computational coherent imaging technique for high space-bandwidth product imaging. Mathematically, Fourier ptychographic (FP) reconstruction can be implemented as a phase retrieval optimization process, in which we only obtain low resolution intensity images corresponding to the sub-bands of the sample's high resolution (HR) spatial spectrum, and aim to retrieve the complex HR spectrum. In real setups, the measurements always suffer from various degenerations such as Gaussian noise, Poisson noise, speckle noise and pupil location error, which would largely degrade the reconstruction. To efficiently address these degenerations, we propose a novel FP reconstruction method under a gradient descent optimization framework in this paper. The technique utilizes Poisson maximum likelihood for better signal modeling, and truncated Wirtinger gradient for effective error removal. Results on both simulated data and real data captured using our laser-illuminated FPM setup show that the proposed method outperforms other state-of-the-art algorithms. Also, we have released our source code for non-commercial use.
Driving Performance Under Alcohol in Simulated Representative Driving Tasks
Kenntner-Mabiala, Ramona; Kaussner, Yvonne; Jagiellowicz-Kaufmann, Monika; Hoffmann, Sonja; Krüger, Hans-Peter
2015-01-01
Comparing drug-induced driving impairments with the effects of benchmark blood alcohol concentrations (BACs) is an approved approach to determine the clinical relevance of findings for traffic safety. The present study aimed to collect alcohol calibration data to validate findings of clinical trials that were derived from a representative test course in a dynamic driving simulator. The driving performance of 24 healthy volunteers under placebo and with 0.05% and 0.08% BACs was measured in a double-blind, randomized, crossover design. Trained investigators assessed the subjects’ driving performance and registered their driving errors. Various driving parameters that were recorded during the simulation were also analyzed. Generally, the participants performed worse on the test course (P < 0.05 for the investigators’ assessment) under the influence of alcohol. Consistent with the relevant literature, lane-keeping performance parameters were sensitive to the investigated BACs. There were significant differences between the alcohol and placebo conditions in most of the parameters analyzed. However, the total number of errors was the only parameter discriminating significantly between all three BAC conditions. In conclusion, the data show that the present experimental setup is suitable for future psychopharmacological research. For each drug to be investigated, we recommend assessing a profile of various parameters that address different levels of driving. On the basis of this performance profile, the total number of driving errors is recommended as the primary endpoint. However, this overall endpoint should be complemented by a specifically sensitive parameter that is chosen depending on the effect known to be induced by the tested drug. PMID:25689289
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tomic, N; Bekerat, H; Seuntjens, J
Purpose: Both kVp settings and the geometric distribution of various materials lead to significant changes in HU values, showing the largest discrepancy for high-Z materials and for the lowest CT scanning kVp setting. On the other hand, the dose distributions around low-energy brachytherapy sources are highly dependent on the architecture and composition of tissue heterogeneities in and around the implant. Both measurements and Monte Carlo calculations show that improper tissue characterization may lead to calculated dose errors of 90% for low energy and around 10% for higher energy photons. We investigated the ability of dual-energy CT (DECT) to characterize tissue-equivalent materials more accurately. Methods: We used the RMI-467 heterogeneity phantom scanned in DECT mode with 3 different set-ups: first, we placed high electron density (ED) plugs within the outer ring of the phantom; then we arranged high ED plugs within the inner ring; and finally ED plugs were randomly distributed. All three setups were scanned with the same DECT technique using a single-source DECT scanner with fast kVp switching (Discovery CT750HD; GE Healthcare). Images were transferred to a GE Advantage workstation for DECT analysis. Spectral Hounsfield unit curves (SHUACs) were then generated from 50 to 140 keV, in 10-keV increments, for each plug. Results: The dynamic range of Hounsfield units shrinks with increased photon energy as the attenuation coefficients decrease. Our results show that the spread of HUs for the three different geometrical setups is the smallest at 80 keV. Furthermore, among all the energies and all materials presented, the largest difference appears at high-Z tissue-equivalent plugs. Conclusion: Our results suggest that dose calculations at both megavoltage and low photon energies could benefit in the vicinity of bony structures if the 80 keV reconstructed monochromatic CT image is used with the DECT protocol utilized in this work.
Martínez-Martínez, F; Rupérez-Moreno, M J; Martínez-Sober, M; Solves-Llorens, J A; Lorente, D; Serrano-López, A J; Martínez-Sanchis, S; Monserrat, C; Martín-Guerrero, J D
2017-11-01
This work presents a data-driven method to simulate, in real-time, the biomechanical behavior of the breast tissues in some image-guided interventions such as biopsies or radiotherapy dose delivery as well as to speed up multimodal registration algorithms. Ten real breasts were used for this work. Their deformation due to the displacement of two compression plates was simulated off-line using the finite element (FE) method. Three machine learning models were trained with the data from those simulations. Then, they were used to predict in real-time the deformation of the breast tissues during the compression. The models were a decision tree and two tree-based ensemble methods (extremely randomized trees and random forest). Two different experimental setups were designed to validate and study the performance of these models under different conditions. The mean 3D Euclidean distance between nodes predicted by the models and those extracted from the FE simulations was calculated to assess the performance of the models in the validation set. The experiments proved that extremely randomized trees performed better than the other two models. The mean error committed by the three models in the prediction of the nodal displacements was under 2 mm, a threshold usually set for clinical applications. The time needed for breast compression prediction is sufficiently short to allow its use in real-time (<0.2 s). Copyright © 2017 Elsevier Ltd. All rights reserved.
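A sketch of the data-driven step described above, assuming a scikit-learn ExtraTreesRegressor as the extremely randomized trees model and synthetic stand-ins for the finite-element data; accuracy is reported as the mean 3D Euclidean node error, the metric used in the study.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import train_test_split

# Toy data: a compression parameter plus rest-state node coordinates are mapped
# to deformed coordinates by an invented rule standing in for the FE results.
rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([rng.uniform(0, 40, n),           # plate displacement (mm)
                     rng.uniform(-60, 60, (n, 3))])   # rest-state node position
y = X[:, 1:4] + 0.3 * X[:, [0]] * np.tanh(X[:, 1:4] / 50.0)   # toy deformation

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = ExtraTreesRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
mean_3d_error = np.mean(np.linalg.norm(pred - y_te, axis=1))
print(f"mean 3D Euclidean error: {mean_3d_error:.2f} mm")
```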
Comparison of Oral Reading Errors between Contextual Sentences and Random Words among Schoolchildren
ERIC Educational Resources Information Center
Khalid, Nursyairah Mohd; Buari, Noor Halilah; Chen, Ai-Hong
2017-01-01
This paper compares the oral reading errors between the contextual sentences and random words among schoolchildren. Two sets of reading materials were developed to test the oral reading errors in 30 schoolchildren (10.00±1.44 years). Set A was comprised contextual sentences while Set B encompassed random words. The schoolchildren were asked to…
2012-01-01
Background: To investigate geometric and dosimetric accuracy of frame-less image-guided radiosurgery (IG-RS) for brain metastases. Methods and materials: Single fraction IG-RS was practiced in 72 patients with 98 brain metastases. Patient positioning and immobilization used either double- (n = 71) or single-layer (n = 27) thermoplastic masks. Pre-treatment set-up errors (n = 98) were evaluated with cone-beam CT (CBCT) based image-guidance (IG) and were corrected in six degrees of freedom without an action level. CBCT imaging after treatment measured intra-fractional errors (n = 64). Pre- and post-treatment errors were simulated in the treatment planning system and target coverage and dose conformity were evaluated. Three scenarios of 0 mm, 1 mm and 2 mm GTV-to-PTV (gross tumor volume, planning target volume) safety margins (SM) were simulated. Results: Errors prior to IG were 3.9 mm ± 1.7 mm (3D vector) and the maximum rotational error was 1.7° ± 0.8° on average. The post-treatment 3D error was 0.9 mm ± 0.6 mm. No differences between double- and single-layer masks were observed. Intra-fractional errors were significantly correlated with the total treatment time with 0.7 mm ± 0.5 mm and 1.2 mm ± 0.7 mm for treatment times ≤23 minutes and >23 minutes (p < 0.01), respectively. Simulation of RS without image-guidance reduced target coverage and conformity to 75% ± 19% and 60% ± 25% of planned values. Each 3D set-up error of 1 mm decreased target coverage and dose conformity by 6% and 10% on average, respectively, with a large inter-patient variability. Pre-treatment correction of translations only but not rotations did not affect target coverage and conformity. Post-treatment errors reduced target coverage by >5% in 14% of the patients. A 1 mm safety margin fully compensated intra-fractional patient motion. Conclusions: IG-RS with online correction of translational errors achieves high geometric and dosimetric accuracy. Intra-fractional errors decrease target coverage and conformity unless compensated with appropriate safety margins. PMID:22531060
Random measurement error: Why worry? An example of cardiovascular risk factors.
Brakenhoff, Timo B; van Smeden, Maarten; Visseren, Frank L J; Groenwold, Rolf H H
2018-01-01
With the increased use of data not originally recorded for research, such as routine care data (or 'big data'), measurement error is bound to become an increasingly relevant problem in medical research. A common view among medical researchers on the influence of random measurement error (i.e. classical measurement error) is that its presence leads to some degree of systematic underestimation of studied exposure-outcome relations (i.e. attenuation of the effect estimate). For the common situation where the analysis involves at least one exposure and one confounder, we demonstrate that the direction of effect of random measurement error on the estimated exposure-outcome relations can be difficult to anticipate. Using three example studies on cardiovascular risk factors, we illustrate that random measurement error in the exposure and/or confounder can lead to underestimation as well as overestimation of exposure-outcome relations. We therefore advise medical researchers to refrain from making claims about the direction of effect of measurement error in their manuscripts, unless the appropriate inferential tools are used to study or alleviate the impact of measurement error from the analysis.
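A small simulation makes this point concrete: classical (random) error in an exposure attenuates its estimated effect, while random error in a confounder leaves residual confounding that can push the estimate in either direction. The data-generating values below are illustrative assumptions, not taken from the cited cardiovascular studies.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# True model: confounder C affects both exposure X and outcome Y; true effect of X on Y is 0.2.
C = rng.normal(size=n)
X = 0.8 * C + rng.normal(size=n)
Y = 0.2 * X + 1.0 * C + rng.normal(size=n)

def adjusted_effect(x, c, y):
    """OLS coefficient of x when regressing y on [1, x, c]."""
    design = np.column_stack([np.ones_like(x), x, c])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta[1]

print("no measurement error     :", round(adjusted_effect(X, C, Y), 3))
# Classical random error in the exposure attenuates its own coefficient...
X_err = X + rng.normal(scale=1.0, size=n)
print("error in exposure only   :", round(adjusted_effect(X_err, C, Y), 3))
# ...but random error in the confounder leaves residual confounding, which in this
# setup inflates the estimated exposure effect above the true value of 0.2.
C_err = C + rng.normal(scale=1.0, size=n)
print("error in confounder only :", round(adjusted_effect(X, C_err, Y), 3))
```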
DOE Office of Scientific and Technical Information (OSTI.GOV)
Algan, Ozer, E-mail: oalgan@ouhsc.edu; Jamgade, Ambarish; Ali, Imad
2012-01-01
The purpose of this study was to evaluate the impact of daily setup error and interfraction organ motion on the overall dosimetric radiation treatment plans. Twelve patients undergoing definitive intensity-modulated radiation therapy (IMRT) treatments for prostate cancer were evaluated in this institutional review board-approved study. Each patient had fiducial markers placed into the prostate gland before treatment planning computed tomography scan. IMRT plans were generated using the Eclipse treatment planning system. Each patient was treated to a dose of 8100 cGy given in 45 fractions. In this study, we retrospectively created a plan for each treatment day that had amore » shift available. To calculate the dose, the patient would have received under this plan, we mathematically 'negated' the shift by moving the isocenter in the exact opposite direction of the shift. The individualized daily plans were combined to generate an overall plan sum. The dose distributions from these plans were compared with the treatment plans that were used to treat the patients. Three-hundred ninety daily shifts were negated and their corresponding plans evaluated. The mean isocenter shift based on the location of the fiducial markers was 3.3 {+-} 6.5 mm to the right, 1.6 {+-} 5.1 mm posteriorly, and 1.0 {+-} 5.0 mm along the caudal direction. The mean D95 doses for the prostate gland when setup error was corrected and uncorrected were 8228 and 7844 cGy (p < 0.002), respectively, and for the planning target volume (PTV8100) was 8089 and 7303 cGy (p < 0.001), respectively. The mean V95 values when patient setup was corrected and uncorrected were 99.9% and 87.3%, respectively, for the PTV8100 volume (p < 0.0001). At an individual patient level, the difference in the D95 value for the prostate volume could be >1200 cGy and for the PTV8100 could approach almost 2000 cGy when comparing corrected against uncorrected plans. There was no statistically significant difference in the D35 parameter for the surrounding normal tissue except for the dose received by the penile bulb and the right hip. Our dosimetric evaluation suggests significant underdosing with inaccurate target localization and emphasizes the importance of accurate patient setup and target localization. Further studies are needed to evaluate the impact of intrafraction organ motion, rotation, and deformation on doses delivered to target volumes.« less
A Simple and Reliable Setup for Monitoring Corrosion Rate of Steel Rebars in Concrete
Jibran, Mohammed Abdul Azeem; Azad, Abul Kalam
2014-01-01
The accuracy in the measurement of the rate of corrosion of steel in concrete depends on many factors. The high resistivity of concrete makes the polarization data erroneous due to the Ohmic drop. The other source of error is the use of an arbitrarily assumed value of the Stern-Geary constant for calculating corrosion current density. This paper presents the outcomes of a research work conducted to develop a reliable and low-cost experimental setup and a simple calculation procedure that can be utilised to calculate the corrosion current density considering the Ohmic drop compensation and the actual value of the Stern-Geary constants calculated using the polarization data. The measurements conducted on specimens corroded to different levels indicate the usefulness of the developed setup to determine the corrosion current density with and without Ohmic drop compensation. PMID:24526907
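The calculation this setup supports can be sketched as follows: the corrosion current density follows from the Stern-Geary relation, with the apparent polarization resistance corrected for the Ohmic (IR) drop of the concrete cover. The Tafel slopes, resistances, and rebar area below are illustrative assumptions, not values from the paper.

```python
def corrosion_current_density(beta_a, beta_c, r_polarization, r_ohmic, area_cm2):
    """Stern-Geary estimate of corrosion current density (A/cm^2).

    beta_a, beta_c  : anodic/cathodic Tafel slopes in V/decade
    r_polarization  : apparent polarization resistance from the sweep, in ohm
    r_ohmic         : Ohmic (IR-drop) resistance of the concrete cover, in ohm
    area_cm2        : exposed steel area in cm^2
    """
    b = (beta_a * beta_c) / (2.303 * (beta_a + beta_c))   # Stern-Geary constant, V
    rp_corrected = (r_polarization - r_ohmic) * area_cm2  # ohm*cm^2, Ohmic drop compensated
    return b / rp_corrected

# Illustrative numbers only (not from the paper):
i_corr = corrosion_current_density(beta_a=0.12, beta_c=0.12,
                                   r_polarization=850.0, r_ohmic=250.0, area_cm2=50.0)
print(f"i_corr = {i_corr * 1e6:.2f} uA/cm^2")
```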
NASA Technical Reports Server (NTRS)
Deloach, Richard; Obara, Clifford J.; Goodman, Wesley L.
2012-01-01
This paper documents a check standard wind tunnel test conducted in the Langley 0.3-Meter Transonic Cryogenic Tunnel (0.3M TCT) that was designed and analyzed using the Modern Design of Experiments (MDOE). The test was designed to partition the unexplained variance of typical wind tunnel data samples into two constituent components, one attributable to ordinary random error, and one attributable to systematic error induced by covariate effects. Covariate effects in wind tunnel testing are discussed, with examples. The impact of systematic (non-random) unexplained variance on the statistical independence of sequential measurements is reviewed. The corresponding correlation among experimental errors is discussed, as is the impact of such correlation on experimental results generally. The specific experiment documented herein was organized as a formal test for the presence of unexplained variance in representative samples of wind tunnel data, in order to quantify the frequency with which such systematic error was detected, and its magnitude relative to ordinary random error. Levels of systematic and random error reported here are representative of those quantified in other facilities, as cited in the references.
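The variance partition described above can be sketched with a simple balanced one-way layout: replicates of the same set point within a group reflect ordinary random error, while group-to-group shifts reflect systematic (covariate-induced) error. The group counts and error magnitudes below are illustrative assumptions, not values from the test.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed layout: the same set point is measured in g groups (e.g., different days or
# tunnel conditions) with r replicates each.  Covariate effects shift whole groups;
# random error scatters replicates within a group.
g, r = 12, 8
sigma_systematic, sigma_random = 0.5, 0.3
group_offsets = rng.normal(scale=sigma_systematic, size=g)
data = group_offsets[:, None] + rng.normal(scale=sigma_random, size=(g, r))

group_means = data.mean(axis=1)
grand_mean = data.mean()
ms_between = r * np.sum((group_means - grand_mean) ** 2) / (g - 1)
ms_within = np.sum((data - group_means[:, None]) ** 2) / (g * (r - 1))

var_random = ms_within                                   # random (within-group) component
var_systematic = max((ms_between - ms_within) / r, 0.0)  # systematic (between-group) component
print(f"estimated random SD     : {np.sqrt(var_random):.3f} (true {sigma_random})")
print(f"estimated systematic SD : {np.sqrt(var_systematic):.3f} (true {sigma_systematic})")
```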
2016-01-01
Background It is often thought that random measurement error has a minor effect upon the results of an epidemiological survey. Theoretically, errors of measurement should always increase the spread of a distribution. Defining an illness by having a measurement outside an established healthy range will lead to an inflated prevalence of that condition if there are measurement errors. Methods and results A Monte Carlo simulation was conducted of anthropometric assessment of children with malnutrition. Random errors of increasing magnitude were imposed upon the populations; the standard deviation increased with each error, and the increase grew exponentially with the magnitude of the error. The potential magnitude of the resulting error in reported malnutrition prevalence was compared with published international data and found to be of sufficient magnitude to make a number of surveys, and the numerous reports and analyses that used these data, unreliable. Conclusions The effect of random error in public health surveys and the data upon which diagnostic cut-off points are derived to define “health” has been underestimated. Even quite modest random errors can more than double the reported prevalence of conditions such as malnutrition. Increasing sample size does not address this problem, and may even result in less accurate estimates. More attention needs to be paid to the selection, calibration and maintenance of instruments, measurer selection, training & supervision, routine estimation of the likely magnitude of errors using standardization tests, use of statistical likelihood of error to exclude data from analysis and full reporting of these procedures in order to judge the reliability of survey reports. PMID:28030627
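The mechanism behind that conclusion is easy to reproduce: adding zero-mean random error to a measurement widens the observed distribution, so more individuals fall past a fixed diagnostic cutoff. The z-score cutoff and error magnitudes below are illustrative assumptions in the spirit of the paper's Monte Carlo simulation, not its actual setup.

```python
import numpy as np

rng = np.random.default_rng(3)
n_children = 100_000

# Illustrative weight-for-height style z-scores: standard normal in the reference population.
true_z = rng.normal(size=n_children)
cutoff = -2.0  # children below -2 z-scores classified as malnourished

print("true prevalence:", round(float(np.mean(true_z < cutoff)), 3))
for sd_error in (0.0, 0.2, 0.4, 0.6):
    observed_z = true_z + rng.normal(scale=sd_error, size=n_children)
    # Random error widens the observed distribution, pushing more children past the cutoff.
    print(f"measurement error SD {sd_error:.1f} -> reported prevalence "
          f"{np.mean(observed_z < cutoff):.3f}")
```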
Sine-Bar Attachment For Machine Tools
NASA Technical Reports Server (NTRS)
Mann, Franklin D.
1988-01-01
Sine-bar attachment for collets, spindles, and chucks helps machinists set up quickly for precise angular cuts that require greater precision than provided by graduations of machine tools. Machinist uses attachment to index head, carriage of milling machine or lathe relative to table or turning axis of tool. Attachment accurate to 1 minute of arc depending on length of sine bar and precision of gauge blocks in setup. Attachment installs quickly and easily on almost any type of lathe or mill. Requires no special clamps or fixtures, and eliminates many trial-and-error measurements. More stable than improvised setups and not jarred out of position readily.
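The sine-bar principle behind the attachment is simple trigonometry: the set angle is the arcsine of the gauge-block stack height divided by the sine-bar length. The 5-inch bar length and block heights below are assumptions for illustration.

```python
import math

def sine_bar_angle_deg(block_stack_height, sine_bar_length):
    """Angle set by a sine bar, in degrees."""
    return math.degrees(math.asin(block_stack_height / sine_bar_length))

def block_stack_for_angle(angle_deg, sine_bar_length):
    """Gauge-block stack height needed to set a desired angle."""
    return sine_bar_length * math.sin(math.radians(angle_deg))

# Example with an assumed 5-inch sine bar:
print(f"{block_stack_for_angle(30.0, 5.0):.4f} in stack sets 30 degrees")
print(f"a 1.2500 in stack gives {sine_bar_angle_deg(1.25, 5.0):.3f} degrees")
```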
Broadband microwave spectroscopy in Corbino geometry at 3He temperatures
NASA Astrophysics Data System (ADS)
Steinberg, Katrin; Scheffler, Marc; Dressel, Martin
2012-02-01
A broadband microwave spectrometer has been constructed to determine the complex conductivity of thin metal films at frequencies from 45 MHz to 20 GHz working in the temperature range from 0.45 K to 2 K (in a 3He cryostat). The setup follows the Corbino approach: a vector network analyzer measures the complex reflection coefficient of a microwave signal hitting the sample as termination of a coaxial transmission line. As the calibration of the setup limits the achievable resolution, we discuss the sources of error hampering different types of calibration. Test measurements of the complex conductivity of a heavy-fermion material demonstrate the applicability of the calibration procedures.
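The analysis chain behind such a Corbino measurement can be sketched as follows: the calibrated reflection coefficient gives the sample impedance through the standard transmission-line relation, and for a Corbino disk the impedance maps to the complex conductivity via the contact radii and film thickness. The radii, thickness, and reflection value below are illustrative assumptions, not the instrument's actual parameters, and calibration of the reflection coefficient itself is the hard part the abstract discusses.

```python
import numpy as np

def corbino_conductivity(gamma, r_outer, r_inner, thickness, z0=50.0):
    """Complex conductivity (S/m) of a thin film terminating a coax in Corbino geometry.

    gamma     : calibrated complex reflection coefficient at the sample plane
    r_outer   : outer contact radius (m); r_inner: inner contact radius (m)
    thickness : film thickness (m); z0: line impedance (ohm)
    """
    z_sample = z0 * (1 + gamma) / (1 - gamma)  # impedance from the reflection coefficient
    # Resistance of an annular (Corbino) film: Z = ln(r_outer/r_inner) / (2*pi*t*sigma)
    return np.log(r_outer / r_inner) / (2 * np.pi * thickness * z_sample)

# Illustrative numbers only:
sigma = corbino_conductivity(gamma=0.90 - 0.05j, r_outer=1.15e-3, r_inner=0.4e-3,
                             thickness=50e-9, z0=50.0)
print(f"sigma1 = {sigma.real:.3e} S/m, sigma2 = {sigma.imag:.3e} S/m")
```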
Tucker, Neil; Reid, Duncan; McNair, Peter
2007-01-01
The slump test is a tool to assess the mechanosensitivity of the neuromeningeal structures within the vertebral canal. While some studies have investigated the reliability of aspects of this test within the same day, few have assessed the reliability across days. Therefore, the purpose of this pilot study was to investigate reliability when measuring active knee extension range of motion (AROM) in a modified slump test position within trials on a single day and across days. Ten male and ten female asymptomatic subjects, ages 20–49 (mean age 30.1, SD 6.4) participated in the study. Knee extension AROM in a modified slump position with the cervical spine in a flexed position and then in an extended position was measured via three trials on two separate days. Across three trials, knee extension AROM increased significantly with a mean magnitude of 2° within days for both cervical spine positions (P>0.05). The findings showed that there was no statistically significant difference in knee extension AROM measurements across days (P>0.05). The intraclass correlation coefficients for the mean of the three trials across days were 0.96 (lower limit 95% CI: 0.90) with the cervical spine flexed and 0.93 (lower limit 95% CI: 0.83) with cervical extension. Measurement error was calculated by way of the typical error and 95% limits of agreement, and visually represented in Bland and Altman plots. The typical error for the cervical flexed and extended positions averaged across trials was 2.6° and 3.3°, respectively. The limits of agreement were narrow, and the Bland and Altman plots also showed minimal bias in the joint angles across days with a random distribution of errors across the range of measured angles. This study demonstrated that knee extension AROM could be reliably measured across days in subjects without pathology and that the measurement error was acceptable. Implications of variability over multiple trials are discussed. The modified set-up for the test using the Kincom dynamometer and elevated thigh position may be useful to clinical researchers in determining the mechanosensitivity of the nervous system. PMID:19066666
Accurate initial conditions in mixed dark matter-baryon simulations
NASA Astrophysics Data System (ADS)
Valkenburg, Wessel; Villaescusa-Navarro, Francisco
2017-06-01
We quantify the error in the results of mixed baryon-dark-matter hydrodynamic simulations, stemming from outdated approximations for the generation of initial conditions. The error at redshift 0 in contemporary large simulations is of the order of few to 10 per cent in the power spectra of baryons and dark matter, and their combined total-matter power spectrum. After describing how to properly assign initial displacements and peculiar velocities to multiple species, we review several approximations: (1) using the total-matter power spectrum to compute displacements and peculiar velocities of both fluids, (2) scaling the linear redshift-zero power spectrum back to the initial power spectrum using the Newtonian growth factor ignoring homogeneous radiation, (3) using a mix of general-relativistic gauges so as to approximate Newtonian gravity, namely longitudinal-gauge velocities with synchronous-gauge densities and (4) ignoring the phase-difference in the Fourier modes for the offset baryon grid, relative to the dark-matter grid. Three of these approximations do not take into account that dark matter and baryons experience a scale-dependent growth after photon decoupling, which results in directions of velocity that are not the same as their direction of displacement. We compare the outcome of hydrodynamic simulations with these four approximations to our reference simulation, all set up with the same random seed and simulated using gadget-III.
Site‐specific tolerance tables and indexing device to improve patient setup reproducibility
James, Joshua A.; Cetnar, Ashley J.; McCullough, Mark A.; Wang, Brian
2015-01-01
While the implementation of tools such as image‐guidance and immobilization devices have helped to prevent geometric misses in radiation therapy, many treatments remain prone to error if these items are not available, not utilized for every fraction, or are misused. The purpose of this project is to design a set of site‐specific treatment tolerance tables to be applied to the treatment couch for use in a record and verify (R&V) system that will ensure accurate patient setup with minimal workflow interruption. This project also called for the construction of a simple indexing device to help ensure reproducible patient setup for patients that could not be indexed with existing equipment. The tolerance tables were created by retrospective analysis on a total of 66 patients and 1,308 treatments, separating them into five categories based on disease site: lung, head and neck (H&N), breast, pelvis, and abdomen. Couch parameter tolerance tables were designed to encompass 95% of treatments, and were generated by calculating the standard deviation of couch vertical, longitudinal, and lateral values using the first day of treatment as a baseline. We also investigated an alternative method for generating the couch tolerances by updating the baseline values when patient position was verified with image guidance. This was done in order to adapt the tolerances to any gradual changes in patient setup that would not correspond with a mistreatment. The tolerance tables and customizable indexing device were then implemented for a trial period in order to determine the feasibility of the system. During this trial period we collected data from 1,054 fractions from 65 patients. We then analyzed the number of treatments that would have been out of tolerance, as well as whether or not the tolerances or setup techniques should be adjusted. When the couch baseline values were updated with every imaging fraction, the average rate of tolerance violations was 10% for the lung, H&N, abdomen, and pelvis treatments. Using the indexing device, tolerances for patients with pelvic disease decreased (e.g., from 5.3 cm to 4.3 cm longitudinally). Unfortunately, the results from breast patients were highly variable due to the complexity of the setup technique, making the couch an inadequate surrogate for measuring setup accuracy. In summary, we have developed a method to turn the treatment couch parameters within the R&V system into a useful alert tool, which can be implemented at other institutions, in order to identify potential errors in patient setup. PACS numbers: 87.53Kn, 87.55.kh, 87.55.ne, 87.55.km, 87.55K‐, 87.55.Qr PMID:26103475
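The core calculation described above (a per-axis couch tolerance that encompasses roughly 95% of treatments, with day 1 as the baseline) can be sketched as below. The couch log, the normality assumption behind the 1.96-sigma coverage factor, and the example values are illustrative assumptions, not the paper's data or exact recipe.

```python
import numpy as np

def couch_tolerance(couch_positions, coverage_z=1.96):
    """Per-axis tolerance encompassing ~95% of treatments, relative to a day-1 baseline.

    couch_positions : array of shape (n_fractions, 3) with couch vertical,
                      longitudinal, and lateral values for one patient (cm)
    """
    deviations = couch_positions - couch_positions[0]      # first fraction as baseline
    return coverage_z * deviations[1:].std(axis=0, ddof=1)  # ~95% coverage if roughly normal

# Illustrative couch log for one pelvis patient (cm); values are assumptions.
rng = np.random.default_rng(4)
log = np.array([10.0, 95.0, 2.0]) + rng.normal(scale=[0.4, 1.8, 0.6], size=(25, 3))
tol_vrt, tol_lng, tol_lat = couch_tolerance(log)
print(f"tolerances (cm): vertical {tol_vrt:.1f}, longitudinal {tol_lng:.1f}, lateral {tol_lat:.1f}")
```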
NASA Technical Reports Server (NTRS)
Moore, J. T.
1985-01-01
Data input for the AVE-SESAME I experiment are utilized to describe the effects of random errors in rawinsonde data on the computation of ageostrophic winds. Computer-generated random errors for wind direction and speed and temperature are introduced into the station soundings at 25 mb intervals from which isentropic data sets are created. Except for the isallobaric and the local wind tendency, all winds are computed for Apr. 10, 1979 at 2000 GMT. Divergence fields reveal that the isallobaric and inertial-geostrophic-advective divergences are less affected by rawinsonde random errors than the divergence of the local wind tendency or inertial-advective winds.
Yao, Lihong; Zhu, Lihong; Wang, Junjie; Liu, Lu; Zhou, Shun; Jiang, ShuKun; Cao, Qianqian; Qu, Ang; Tian, Suqing
2015-04-26
The aim was to improve the delivery of radiotherapy in gynecologic malignancies and to minimize the irradiation of unaffected tissues by using daily kilovoltage cone beam computed tomography (kV-CBCT) to reduce setup errors. Thirteen patients with gynecologic cancers were treated with postoperative volumetric-modulated arc therapy (VMAT). All patients had a planning CT scan and daily CBCT during treatment. Automatic bone anatomy matching was used to determine initial inter-fraction positioning error. Positional correction on a six-degrees-of-freedom (6DoF) couch was followed by a second scan to calculate the residual inter-fraction error, and a post-treatment scan assessed intra-fraction motion. The margins of the planning target volume (MPTV) were calculated from these setup variations and the effect of margin size on normal tissue sparing was evaluated. In total, 573 CBCT scans were acquired. Mean absolute pre-/post-correction errors were obtained along all six axes. With 6DoF couch correction, the MPTV accounting for intra-fraction errors was reduced by 3.8-5.6 mm. This permitted a reduction in the maximum dose to the small intestine, bladder and femoral head (P=0.001, 0.035 and 0.032, respectively), the average dose to the rectum, small intestine, bladder and pelvic marrow (P=0.003, 0.000, 0.001 and 0.000, respectively) and markedly reduced irradiated normal tissue volumes. A 6DoF couch in combination with daily kV-CBCT can considerably improve positioning accuracy during VMAT treatment in gynecologic malignancies, reducing the MPTV. The reduced margin size permits improved normal tissue sparing and a smaller total irradiated volume.
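For context, a widely used population margin recipe (van Herk et al.) converts estimates of systematic (Σ) and random (σ) setup error into a CTV-to-PTV margin of 2.5Σ + 0.7σ per axis; the sketch below applies it to illustrative numbers and is not necessarily the margin formalism used in this study.

```python
def van_herk_margin(sigma_systematic_mm, sigma_random_mm):
    """CTV-to-PTV margin (mm) from the van Herk recipe: 2.5*Sigma + 0.7*sigma."""
    return 2.5 * sigma_systematic_mm + 0.7 * sigma_random_mm

# Illustrative per-axis error estimates (mm); not the values reported in the study.
for axis, big_sigma, small_sigma in [("LR", 1.5, 2.0), ("CC", 2.0, 2.5), ("AP", 1.2, 1.8)]:
    print(f"{axis}: margin = {van_herk_margin(big_sigma, small_sigma):.1f} mm")
```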
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dellamonica, D.; Luo, G.; Ding, G.
Purpose: Setup errors on the order of millimeters may cause under-dosing of targets and significant changes in dose to critical structures especially when planning with tight margins in stereotactic radiosurgery. This study evaluates the effects of these types of patient positioning uncertainties on planning target volume (PTV) coverage and cochlear dose for stereotactic treatments of acoustic neuromas. Methods: Twelve acoustic neuroma patient treatment plans were retrospectively evaluated in Brainlab iPlan RT Dose 4.1.3. All treatment beams were shaped by HDMLC from a Varian TX machine. Seven patients had planning margins of 2mm, five had 1–1.5mm. Six treatment plans were created for each patient simulating a 1mm setup error in six possible directions: anterior-posterior, lateral, and superior-inferior. The arcs and HDMLC shapes were kept the same for each plan. Change in PTV coverage and mean dose to the cochlea was evaluated for each plan. Results: The average change in PTV coverage for the 72 simulated plans was −1.7% (range: −5 to +1.1%). The largest average change in coverage was observed for shifts in the patient's superior direction (−2.9%). The change in mean cochlear dose was highly dependent upon the direction of the shift. Shifts in the anterior and superior direction resulted in an average increase in dose of 13.5 and 3.8%, respectively, while shifts in the posterior and inferior direction resulted in an average decrease in dose of 17.9 and 10.2%. The average change in dose to the cochlea was 13.9% (range: 1.4 to 48.6%). No difference was observed based on the size of the planning margin. Conclusion: This study indicates that if the positioning uncertainty is kept within 1mm the setup errors may not result in significant under-dosing of the acoustic neuroma target volumes. However, the change in mean cochlear dose is highly dependent upon the direction of the shift.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Worm, Esben S., E-mail: esbeworm@rm.dk; Department of Medical Physics, Aarhus University Hospital, Aarhus; Hoyer, Morten
2012-05-01
Purpose: To develop and evaluate accurate and objective on-line patient setup based on a novel semiautomatic technique in which three-dimensional marker trajectories were estimated from two-dimensional cone-beam computed tomography (CBCT) projections. Methods and Materials: Seven treatment courses of stereotactic body radiotherapy for liver tumors were delivered in 21 fractions in total to 6 patients by a linear accelerator. Each patient had two to three gold markers implanted close to the tumors. Before treatment, a CBCT scan with approximately 675 two-dimensional projections was acquired during a full gantry rotation. The marker positions were segmented in each projection. From this, the three-dimensional marker trajectories were estimated using a probability based method. The required couch shifts for patient setup were calculated from the mean marker positions along the trajectories. A motion phantom moving with known tumor trajectories was used to examine the accuracy of the method. Trajectory-based setup was retrospectively used off-line for the first five treatment courses (15 fractions) and on-line for the last two treatment courses (6 fractions). Automatic marker segmentation was compared with manual segmentation. The trajectory-based setup was compared with setup based on conventional CBCT guidance on the markers (first 15 fractions). Results: Phantom measurements showed that trajectory-based estimation of the mean marker position was accurate within 0.3 mm. The on-line trajectory-based patient setup was performed within approximately 5 minutes. The automatic marker segmentation agreed with manual segmentation within 0.36 ± 0.50 pixels (mean ± SD; pixel size, 0.26 mm in isocenter). The accuracy of conventional volumetric CBCT guidance was compromised by motion smearing (≤21 mm) that induced an absolute three-dimensional setup error of 1.6 ± 0.9 mm (maximum, 3.2) relative to trajectory-based setup. Conclusions: The first on-line clinical use of trajectory estimation from CBCT projections for precise setup in stereotactic body radiotherapy was demonstrated. Uncertainty in the conventional CBCT-based setup procedure was eliminated with the new method.« less
The detection error of thermal test low-frequency cable based on M sequence correlation algorithm
NASA Astrophysics Data System (ADS)
Wu, Dongliang; Ge, Zheyang; Tong, Xin; Du, Chunlin
2018-04-01
The low accuracy and low efficiency of off-line fault detection on thermal-test low-frequency cables can be addressed by a cable fault detection system in which an FPGA outputs an M-sequence code (linear feedback shift register sequence) as the pulse signal source. The design principle of the SSTDR (spread spectrum time-domain reflectometry) reflection method and the hardware setup for on-line monitoring are discussed in this paper. Testing data show that the detection error increases with the fault location along the thermal-test low-frequency cable.
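The SSTDR idea summarized above can be sketched in a few lines: generate a maximal-length (M) sequence with a linear feedback shift register, inject it into the cable, and cross-correlate the received signal with the injected code; the correlation peak lag gives the round-trip delay and hence the fault distance. The LFSR taps, chip rate, propagation velocity, and simulated fault below are illustrative assumptions, not the paper's hardware parameters.

```python
import numpy as np

def m_sequence(taps=(7, 6), length=127):
    """Generate a +/-1 maximal-length sequence from a simple Fibonacci LFSR.

    taps : 1-based feedback tap positions (the default (7, 6) yields a 127-chip m-sequence)
    """
    n = max(taps)
    state = [1] * n          # any non-zero seed works
    out = []
    for _ in range(length):
        out.append(state[-1])
        feedback = 0
        for t in taps:
            feedback ^= state[t - 1]
        state = [feedback] + state[:-1]
    return np.array(out) * 2 - 1  # map {0,1} -> {-1,+1}

# SSTDR-style fault location: correlate the received signal with the injected m-sequence.
chip_rate = 50e6                 # chips per second (assumed)
v_prop = 2.0e8                   # propagation velocity in the cable, m/s (assumed)
code = m_sequence()
fault_delay_chips = 23           # simulated reflection delay (assumed)
received = 0.4 * np.roll(code, fault_delay_chips) \
           + 0.05 * np.random.default_rng(5).normal(size=code.size)

# Circular cross-correlation; the peak lag gives the round-trip delay in chips.
corr = np.array([np.dot(received, np.roll(code, k)) for k in range(code.size)])
lag = int(np.argmax(corr))
distance = 0.5 * (lag / chip_rate) * v_prop   # one-way distance to the fault
print(f"estimated fault distance: {distance:.1f} m (lag {lag} chips)")
```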
NASA Technical Reports Server (NTRS)
Gejji, Raghvendra, R.
1992-01-01
Network transmission errors such as collisions, CRC errors, misalignment, etc. are statistical in nature. Although errors can vary randomly, a high level of errors does indicate specific network problems, e.g. equipment failure. In this project, we have studied the random nature of collisions theoretically as well as by gathering statistics, and established a numerical threshold above which a network problem is indicated with high probability.
Reference-free error estimation for multiple measurement methods.
Madan, Hennadii; Pernuš, Franjo; Špiclin, Žiga
2018-01-01
We present a computational framework to select the most accurate and precise method of measurement of a certain quantity, when there is no access to the true value of the measurand. A typical use case is when several image analysis methods are applied to measure the value of a particular quantitative imaging biomarker from the same images. The accuracy of each measurement method is characterized by systematic error (bias), which is modeled as a polynomial in true values of measurand, and the precision as random error modeled with a Gaussian random variable. In contrast to previous works, the random errors are modeled jointly across all methods, thereby enabling the framework to analyze measurement methods based on similar principles, which may have correlated random errors. Furthermore, the posterior distribution of the error model parameters is estimated from samples obtained by Markov chain Monte-Carlo and analyzed to estimate the parameter values and the unknown true values of the measurand. The framework was validated on six synthetic and one clinical dataset containing measurements of total lesion load, a biomarker of neurodegenerative diseases, which was obtained with four automatic methods by analyzing brain magnetic resonance images. The estimates of bias and random error were in a good agreement with the corresponding least squares regression estimates against a reference.
Errors in radial velocity variance from Doppler wind lidar
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, H.; Barthelmie, R. J.; Doubrawa, P.
A high-fidelity lidar turbulence measurement technique relies on accurate estimates of radial velocity variance that are subject to both systematic and random errors determined by the autocorrelation function of radial velocity, the sampling rate, and the sampling duration. Our paper quantifies the effect of the volumetric averaging in lidar radial velocity measurements on the autocorrelation function and the dependence of the systematic and random errors on the sampling duration, using both statistically simulated and observed data. For current-generation scanning lidars and sampling durations of about 30 min and longer, during which the stationarity assumption is valid for atmospheric flows, the systematic error is negligible but the random error exceeds about 10%.« less
Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stynes, J. K.; Ihas, B.
2012-04-01
The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work using the image of the reflection of the absorber finds the reflector slope errors from the reflection of the absorber and an independent measurement of the absorber location. The accuracy of the reflector slope error measurement is thus dependent on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors. Measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required for this measurement technique make it ideal for field measurements.« less
Interoperative efficiency in minimally invasive surgery suites.
van Det, M J; Meijerink, W J H J; Hoff, C; Pierie, J P E N
2009-10-01
Performing minimally invasive surgery (MIS) in a conventional operating room (OR) requires additional specialized equipment otherwise stored outside the OR. Before the procedure, the OR team must collect, prepare, and connect the equipment, then take it away afterward. These extra tasks pose a threat to OR efficiency and may lengthen turnover times. The dedicated MIS suite has permanently installed laparoscopic equipment that is operational on demand. This study presents two experiments that quantify the superior efficiency of the MIS suite in the interoperative period. Preoperative setup and postoperative breakdown times in the conventional OR and the MIS suite in an experimental setting and in daily practice were analyzed. In the experimental setting, randomly chosen OR teams simulated the setup and breakdown for a standard laparoscopic cholecystectomy (LC) and a complex laparoscopic sigmoid resection (LS). In the clinical setting, the interoperative period for 66 LCs randomly assigned to the conventional OR or the MIS suite were analyzed. In the experimental setting, the setup and breakdown times were significantly shorter in the MIS suite. The difference between the two types of OR increased for the complex procedure: 2:41 min for the LC (p < 0.001) and 10:47 min for the LS (p < 0.001). In the clinical setting, the setup and breakdown times as a whole were not reduced in the MIS suite. Laparoscopic setup and breakdown times were significantly shorter in the MIS suite (mean difference, 5:39 min; p < 0.001). Efficiency during the interoperative period is significantly improved in the MIS suite. The OR nurses' tasks are relieved, which may reduce mental and physical workload and improve job satisfaction and patient safety. Due to simultaneous tasks of other disciplines, an overall turnover time reduction could not be achieved.
Simulation of wave propagation in three-dimensional random media
NASA Astrophysics Data System (ADS)
Coles, Wm. A.; Filice, J. P.; Frehlich, R. G.; Yadlowsky, M.
1995-04-01
Quantitative error analyses for the simulation of wave propagation in three-dimensional random media, when narrow angular scattering is assumed, are presented for plane-wave and spherical-wave geometry. This includes the errors that result from finite grid size, finite simulation dimensions, and the separation of the two-dimensional screens along the propagation direction. Simple error scalings are determined for power-law spectra of the random refractive indices of the media. The effects of a finite inner scale are also considered. The spatial spectra of the intensity errors are calculated and compared with the spatial spectra of
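The building block of such simulations, a two-dimensional random screen whose spatial power spectrum follows the assumed power law of the refractive-index fluctuations, can be sketched as below. The grid size, spacing, exponent, and (arbitrary) normalization are illustrative assumptions; a full split-step propagation code stacks many such screens along the propagation direction and calibrates the spectrum carefully, which is exactly where the errors analyzed in the paper arise.

```python
import numpy as np

def power_law_phase_screen(n=256, delta=0.01, exponent=-11.0 / 3.0, strength=1.0, seed=0):
    """Random 2-D screen whose spatial power spectrum follows f**exponent.

    A single screen like this is the building block of split-step (multi-phase-screen)
    propagation codes; the normalization here is arbitrary and set by `strength`.
    """
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n, d=delta)
    fxx, fyy = np.meshgrid(fx, fx)
    f = np.hypot(fxx, fyy)
    f[0, 0] = f[0, 1]                     # avoid the singularity at zero frequency
    amplitude = strength * f ** (exponent / 2.0)   # filter white noise in the Fourier domain
    spectrum = amplitude * (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    screen = np.real(np.fft.ifft2(spectrum))
    return screen - screen.mean()

screen = power_law_phase_screen()
print("screen shape:", screen.shape, "RMS:", float(screen.std()))
```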
SU-E-J-34: Setup Accuracy in Spine SBRT Using CBCT 6D Image Guidance in Comparison with 6D ExacTrac
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, Z; Yip, S; Lewis, J
2015-06-15
Purpose Volumetric information of the spine captured on CBCT can potentially improve the accuracy in spine SBRT setup that has been commonly performed through 2D radiographs. This work evaluates the setup accuracy in spine SBRT using 6D CBCT image guidance that recently became available on Varian systems. Methods ExacTrac radiographs have been commonly used for spine SBRT setup. The setup process involves first positioning patients with lasers followed by localization imaging, registration, and repositioning. Verification images are then taken providing the residual errors (ExacTracRE) before beam on. CBCT verification is also acquired in our institute. The availability of both ExacTrac and CBCT verifications allows a comparison study. 41 verification CBCT of 16 patients were retrospectively registered with the planning CT enabling 6D corrections, giving CBCT residual errors (CBCTRE) which were compared with ExacTracRE. Results The RMS discrepancies between CBCTRE and ExacTracRE are 1.70mm, 1.66mm, 1.56mm in vertical, longitudinal and lateral directions and 0.27°, 0.49°, 0.35° in yaw, roll and pitch respectively. The corresponding mean discrepancies (and standard deviation) are 0.62mm (1.60mm), 0.00mm (1.68mm), −0.80mm (1.36mm) and 0.05° (0.58°), 0.11° (0.48°), −0.16° (0.32°). Of the 41 CBCT, 17 had high-Z surgical implants. No significant difference in ExacTrac-to-CBCT discrepancy was observed between patients with and without the implants. Conclusion Multiple factors can contribute to the discrepancies between CBCT and ExacTrac: 1) the imaging iso-centers of the two systems, while calibrated to coincide, can be different; 2) the ROI used for registration can be different especially if ribs were included in ExacTrac images; 3) small patient motion can occur between the two verification image acquisitions; 4) the algorithms can be different between CBCT (volumetric) and ExacTrac (radiographic) registrations.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, B; Maquilan, G; Anders, M
Purpose: Full face and neck thermoplastic masks provide standard-of-care immobilization for patients receiving H&N IMRT. However, these masks are uncomfortable and increase skin dose. The purpose of this pilot study was to investigate the feasibility and setup accuracy of open face and neck mask immobilization with OIG. Methods: Ten patients were consented and enrolled to this IRB-approved protocol. Patients were immobilized with open masks securing only forehead and chin. Standard IMRT to 60–70 Gy in 30 fractions were delivered in all cases. Patient simulation information, including isocenter location and CT skin contours, was imported to a commercial OIG system. On the first day of treatment, patients were initially set up to surface markings and then OIG referenced to face and neck skin regions of interest (ROI) localized on simulation CT images, followed by in-room CBCT. CBCTs were acquired at least weekly while planar OBI was acquired on the days without CBCT. Following 6D robotic couch correction with kV imaging, a new optical real-time surface image was acquired to track intrafraction motion and to serve as a reference surface for setup at the next treatment fraction. Therapists manually recorded total treatment time as well as couch shifts based on kV imaging. Intrafractional ROI motion tracking was automatically recorded. Results: Setup accuracy of OIG was compared with CBCT results. The setup error based on OIG was represented as a 6D shift (vertical/longitudinal/lateral/rotation/pitch/roll). Mean error values were −0.70±3.04mm, −0.69±2.77mm, 0.33±2.67 mm, −0.14±0.94°, −0.15±1.10° and 0.12±0.82°, respectively for the cohort. Average treatment time was 24.1±9.2 minutes, comparable to standard immobilization. The amplitude of intrafractional ROI motion was 0.69±0.36 mm, driven primarily by respiratory neck motion. Conclusion: OIG can potentially provide accurate setup and treatment tracking for open face and neck immobilization. Study accrual and patient/provider satisfaction survey collection remain ongoing. This study is supported by VisionRT, Ltd.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, G; Qin, A; Zhang, J
Purpose: With the implementation of Cone-beam Computed-Tomography (CBCT) in proton treatment, we introduce a quick and effective tool to verify the patient's daily setup and geometry changes based on the Water-Equivalent-Thickness Projection-Image (WETPI) from individual beam angles. Methods: A bilateral head and neck cancer (HNC) patient previously treated via VMAT was used in this study. The patient received 35 daily CBCT during the whole treatment and there is no significant weight change. The CT numbers of daily CBCTs were corrected by mapping the CT numbers from simulation CT via Deformable Image Registration (DIR). An IMPT plan was generated using 4-field IMPT robust optimization (3.5% range and 3 mm setup uncertainties) with beam angles of 60, 135, 300, and 225 degrees. WETPI within the CTV through all beam directions was calculated. A 3%/3 mm gamma index (GI) was used to provide a quantitative comparison between the initial sim-CT and the mapped daily CBCT. To simulate an extreme case where human error is involved, a couch bar was manually inserted in front of the 225-degree beam in one CBCT. WETPI was compared in this scenario. Results: The average GI passing rate for this patient across beam angles throughout the treatment course is 91.5 ± 8.6. In the cases with low passing rate, it was found that the difference between shoulder and neck angle as well as the head rest often caused major deviations. This indicates that the greatest challenge in treating HNC is the setup around the neck area. In the extreme case where a couch bar is accidentally inserted in the beam line, the GI passing rate drops to 52 from 95. Conclusion: WETPI and quantitative gamma analysis give clinicians, therapists and physicists quick feedback on the patient's setup accuracy or geometry changes. The tool could effectively avoid some human errors. Furthermore, this tool could be used potentially as an initial signal to trigger plan adaptation.« less
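The underlying quantity is straightforward: the water-equivalent thickness (WET) along a ray is the sum over traversed voxels of relative (proton) stopping power times path length, and a geometry or setup change shows up as a difference in the resulting 2D projection image, which is then scored with a gamma analysis. The sketch below projects along a cardinal axis only; the RSP values and phantom geometry are illustrative assumptions, not the patient data from the study.

```python
import numpy as np

def wet_projection(rsp_volume, voxel_size_mm, axis=0):
    """Water-equivalent-thickness projection image (mm) along one cardinal beam direction.

    rsp_volume    : 3-D array of relative (proton) stopping power per voxel
    voxel_size_mm : voxel length along the projection axis
    """
    # WET along a ray = sum over traversed voxels of RSP * path length in that voxel.
    return rsp_volume.sum(axis=axis) * voxel_size_mm

# Illustrative phantom: water block with a bone-like insert and an air cavity (assumed RSP values).
vol = np.ones((60, 40, 40))          # water, RSP = 1.0
vol[20:30, 10:20, 10:20] = 1.6       # bone-like insert
vol[35:40, 25:35, 25:35] = 0.001     # air cavity
wetpi_day1 = wet_projection(vol, voxel_size_mm=2.0)

# A geometry change (e.g., the cavity filling in) appears as a WETPI difference map,
# which could then be scored with a 3%/3 mm gamma analysis as in the abstract.
vol_changed = vol.copy()
vol_changed[35:40, 25:35, 25:35] = 1.0
diff = wet_projection(vol_changed, voxel_size_mm=2.0) - wetpi_day1
print("max WET change (mm):", float(diff.max()))
```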
NASA Technical Reports Server (NTRS)
Ricks, Douglas W.
1993-01-01
There are a number of sources of scattering in binary optics: etch depth errors, line edge errors, quantization errors, roughness, and the binary approximation to the ideal surface. These sources of scattering can be systematic (deterministic) or random. In this paper, scattering formulas for both systematic and random errors are derived using Fourier optics. These formulas can be used to explain the results of scattering measurements and computer simulations.
Machiels, Mélanie; Jin, Peng; van Gurp, Christianne H; van Hooft, Jeanin E; Alderliesten, Tanja; Hulshof, Maarten C C M
2018-03-21
To investigate the feasibility and geometric accuracy of carina-based registration for CBCT-guided setup verification in esophageal cancer IGRT, compared with current practice bony anatomy-based registration. Included were 24 esophageal cancer patients with 65 implanted fiducial markers, visible on planning CTs and follow-up CBCTs. All available CBCT scans (n = 236) were rigidly registered to the planning CT with respect to the bony anatomy and the carina. Target coverage was visually inspected and marker position variation was quantified relative to both registration approaches; the variation of systematic (Σ) and random errors (σ) was estimated. Automatic carina-based registration was feasible in 94.9% of the CBCT scans, with an adequate target coverage in 91.1% compared to 100% after bony anatomy-based registration. Overall, Σ (σ) in the LR/CC/AP direction was 2.9(2.4)/4.1(2.4)/2.2(1.8) mm using the bony anatomy registration compared to 3.3(3.0)/3.6(2.6)/3.9(3.1) mm for the carina. Mid-thoracic placed markers showed a non-significant but smaller Σ in CC and AP direction when using the carina-based registration. Compared with a bony anatomy-based registration, carina-based registration for esophageal cancer IGRT results in inadequate target coverage in 8.9% of cases. Furthermore, large Σ and σ, requiring larger anisotropic margins, were seen after carina-based registration. Only for tumors entirely confined to the mid-thoracic region the carina-based registration might be slightly favorable.
NASA Astrophysics Data System (ADS)
Gourdji, S. M.; Yadav, V.; Karion, A.; Mueller, K. L.; Conley, S.; Ryerson, T.; Nehrkorn, T.; Kort, E. A.
2018-04-01
Urban greenhouse gas (GHG) flux estimation with atmospheric measurements and modeling, i.e. the ‘top-down’ approach, can potentially support GHG emission reduction policies by assessing trends in surface fluxes and detecting anomalies from bottom-up inventories. Aircraft-collected GHG observations also have the potential to help quantify point-source emissions that may not be adequately sampled by fixed surface tower-based atmospheric observing systems. Here, we estimate CH4 emissions from a known point source, the Aliso Canyon natural gas leak in Los Angeles, CA from October 2015–February 2016, using atmospheric inverse models with airborne CH4 observations from twelve flights ≈4 km downwind of the leak and surface sensitivities from a mesoscale atmospheric transport model. This leak event has been well-quantified previously using various methods by the California Air Resources Board, thereby providing high confidence in the mass-balance leak rate estimates of (Conley et al 2016), used here for comparison to inversion results. Inversions with an optimal setup are shown to provide estimates of the leak magnitude, on average, within a third of the mass balance values, with remaining errors in estimated leak rates predominantly explained by modeled wind speed errors of up to 10 m s‑1, quantified by comparing airborne meteorological observations with modeled values along the flight track. An inversion setup using scaled observational wind speed errors in the model-data mismatch covariance matrix is shown to significantly reduce the influence of transport model errors on spatial patterns and estimated leak rates from the inversions. In sum, this study takes advantage of a natural tracer release experiment (i.e. the Aliso Canyon natural gas leak) to identify effective approaches for reducing the influence of transport model error on atmospheric inversions of point-source emissions, while suggesting future potential for integrating surface tower and aircraft atmospheric GHG observations in top-down urban emission monitoring systems.
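The core estimation step can be reduced to a sketch: with a known source location, the transport model supplies the modeled enhancement per unit emission rate at each observation (the sensitivity h), and the leak rate follows from a weighted least-squares fit in which the model-data mismatch variances can be inflated where the modeled winds disagree with the aircraft winds, in the spirit of the wind-scaled covariance described above. All numbers below are illustrative assumptions, not values from the Aliso Canyon analysis.

```python
import numpy as np

rng = np.random.default_rng(6)

# h: modeled enhancement (ppm) per unit emission rate at each flight observation point,
#    as a transport model would provide; y: observed enhancements.  Illustrative only.
n_obs = 120
h = np.abs(rng.normal(0.02, 0.01, size=n_obs))
true_rate = 40.0                                      # e.g., tonnes CH4 per hour
wind_speed_error = rng.uniform(0.5, 8.0, size=n_obs)  # m/s, model-vs-aircraft wind mismatch
obs_noise_sd = 0.05 + 0.03 * wind_speed_error         # noisier where winds are poorly modeled
y = true_rate * h + rng.normal(scale=obs_noise_sd)

def point_source_rate(y, h, noise_sd):
    """Weighted least-squares estimate of a single source rate with a diagonal R matrix."""
    w = 1.0 / noise_sd**2
    return np.sum(w * h * y) / np.sum(w * h * h)

# Scaling R with the wind error down-weights observations taken under poorly modeled winds.
print("uniform-R estimate     :", round(point_source_rate(y, h, np.ones(n_obs)), 1))
print("wind-scaled-R estimate :", round(point_source_rate(y, h, obs_noise_sd), 1))
```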
Automated body weight prediction of dairy cows using 3-dimensional vision.
Song, X; Bokkers, E A M; van der Tol, P P J; Groot Koerkamp, P W G; van Mourik, S
2018-05-01
The objectives of this study were to quantify the error of body weight prediction using automatically measured morphological traits in a 3-dimensional (3-D) vision system and to assess the influence of various sources of uncertainty on body weight prediction. In this case study, an image acquisition setup was created in a cow selection box equipped with a top-view 3-D camera. Morphological traits of hip height, hip width, and rump length were automatically extracted from the raw 3-D images taken of the rump area of dairy cows (n = 30). These traits combined with days in milk, age, and parity were used in multiple linear regression models to predict body weight. To find the best prediction model, an exhaustive feature selection algorithm was used to build intermediate models (n = 63). Each model was validated by leave-one-out cross-validation, giving the root mean square error and mean absolute percentage error. The model consisting of hip width (measurement variability of 0.006 m), days in milk, and parity was the best model, with the lowest errors of 41.2 kg of root mean square error and 5.2% mean absolute percentage error. Our integrated system, including the image acquisition setup, image analysis, and the best prediction model, predicted the body weights with a performance similar to that achieved using semi-automated or manual methods. Moreover, the variability of our simplified morphological trait measurement showed a negligible contribution to the uncertainty of body weight prediction. We suggest that dairy cow body weight prediction can be improved by incorporating more predictive morphological traits and by improving the prediction model structure. The Authors. Published by FASS Inc. and Elsevier Inc. on behalf of the American Dairy Science Association®. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).
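The model structure and validation scheme described above (a multiple linear regression on hip width, days in milk, and parity, evaluated with leave-one-out cross-validation and scored by RMSE and MAPE) can be sketched as below. The synthetic cow data and coefficients are assumptions for illustration, not the study's measurements.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(7)
n_cows = 30

# Synthetic stand-ins for the predictors used in the study's best model.
hip_width_m = rng.normal(0.60, 0.03, n_cows)
days_in_milk = rng.uniform(5, 300, n_cows)
parity = rng.integers(1, 6, n_cows)
X = np.column_stack([hip_width_m, days_in_milk, parity])
body_weight = (650 + 900 * (hip_width_m - 0.60) - 0.1 * days_in_milk + 15 * parity
               + rng.normal(scale=40, size=n_cows))

# Leave-one-out cross-validated predictions, scored with RMSE and MAPE as in the study.
pred = cross_val_predict(LinearRegression(), X, body_weight, cv=LeaveOneOut())
rmse = np.sqrt(np.mean((pred - body_weight) ** 2))
mape = 100 * np.mean(np.abs(pred - body_weight) / body_weight)
print(f"RMSE = {rmse:.1f} kg, MAPE = {mape:.1f} %")
```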
Lemaire, E D; Lamontagne, M; Barclay, H W; John, T; Martel, G
1991-01-01
A balance platform setup was defined for use in the determination of the center of gravity in the sagittal plane for a wheelchair and patient. Using the center of gravity information, measurements from the wheelchair and patient (weight, tire coefficients of friction), and various assumptions (constant speed, level-concrete surface, patient-wheelchair system is a rigid body), a method for estimating the rolling resistance for a wheelchair was outlined. The center of gravity and rolling resistance techniques were validated against criterion values (center of gravity error = 1 percent, rolling resistance root mean square error = 0.33 N, rolling resistance Pearson correlation coefficient = 0.995). Consistent results were also obtained from a test dummy and five subjects. Once the center of gravity is known, it is possible to evaluate the stability of a wheelchair (in terms of tipping over) and the interaction between the level of stability and rolling resistance. These quantitative measures are expected to be of use in the setup of wheelchairs with a variable seat angle and variable wheelbase length or when making comparisons between different wheelchairs.
Korucu, M Kemal; Kaplan, Özgür; Büyük, Osman; Güllü, M Kemal
2016-10-01
In this study, we investigate the usability of sound recognition for source separation of packaging wastes in reverse vending machines (RVMs). For this purpose, an experimental setup equipped with a sound recording mechanism was prepared. Packaging waste sounds generated by three physical impacts such as free falling, pneumatic hitting and hydraulic crushing were separately recorded using two different microphones. To classify the waste types and sizes based on sound features of the wastes, a support vector machine (SVM) and a hidden Markov model (HMM) based sound classification systems were developed. In the basic experimental setup in which only free falling impact type was considered, SVM and HMM systems provided 100% classification accuracy for both microphones. In the expanded experimental setup which includes all three impact types, material type classification accuracies were 96.5% for dynamic microphone and 97.7% for condenser microphone. When both the material type and the size of the wastes were classified, the accuracy was 88.6% for the microphones. The modeling studies indicated that hydraulic crushing impact type recordings were very noisy for an effective sound recognition application. In the detailed analysis of the recognition errors, it was observed that most of the errors occurred in the hitting impact type. According to the experimental results, it can be said that the proposed novel approach for the separation of packaging wastes could provide a high classification performance for RVMs. Copyright © 2016 Elsevier Ltd. All rights reserved.
Vrijheid, Martine; Deltour, Isabelle; Krewski, Daniel; Sanchez, Marie; Cardis, Elisabeth
2006-07-01
This paper examines the effects of systematic and random errors in recall and of selection bias in case-control studies of mobile phone use and cancer. These sensitivity analyses are based on Monte-Carlo computer simulations and were carried out within the INTERPHONE Study, an international collaborative case-control study in 13 countries. Recall error scenarios simulated plausible values of random and systematic, non-differential and differential recall errors in amount of mobile phone use reported by study subjects. Plausible values for the recall error were obtained from validation studies. Selection bias scenarios assumed varying selection probabilities for cases and controls, mobile phone users, and non-users. Where possible these selection probabilities were based on existing information from non-respondents in INTERPHONE. Simulations used exposure distributions based on existing INTERPHONE data and assumed varying levels of the true risk of brain cancer related to mobile phone use. Results suggest that random recall errors of plausible levels can lead to a large underestimation in the risk of brain cancer associated with mobile phone use. Random errors were found to have larger impact than plausible systematic errors. Differential errors in recall had very little additional impact in the presence of large random errors. Selection bias resulting from underselection of unexposed controls led to J-shaped exposure-response patterns, with risk apparently decreasing at low to moderate exposure levels. The present results, in conjunction with those of the validation studies conducted within the INTERPHONE study, will play an important role in the interpretation of existing and future case-control studies of mobile phone use and cancer risk, including the INTERPHONE study.
Generation of Rayleigh waves into mortar and concrete samples.
Piwakowski, B; Fnine, Abdelilah; Goueygou, M; Buyle-Bodin, F
2004-04-01
The paper deals with a non-destructive method for characterizing the degraded cover of concrete structures using high-frequency ultrasound. In a preliminary study, the authors emphasized the interest of using higher frequency Rayleigh waves (within the 0.2-1 MHz frequency band) for on-site inspection of concrete structures with subsurface damage. The present study represents a continuation of the previous work and aims at optimizing the generation and reception of Rayleigh waves into mortar and concrete by means of wedge transducers. This is performed experimentally by checking the influence of the wedge material and coupling agent on the surface wave parameters. The selection of the best wedge/coupling combination is performed by searching separately for the best wedge material and the best coupling material. Three wedge materials and five coupling agents were tested. For each setup, the five parameters obtained from the surface wave measurement, i.e., the frequency band, the maximal available central frequency, the group velocity error and its standard deviation, and finally the error in the velocity dispersion characteristic, were investigated and classed as a function of the wedge material and the coupling agent. The selection criteria were chosen so as to minimize the absorption of both materials, the randomness of measurements and the systematic error of the group velocity and of the dispersion characteristic. Among the three tested wedge materials, Teflon was found to be the best. The investigation on the coupling agent shows that the gel type materials are the best solutions. The "thick" materials displaying higher viscosity were found to be the worst. The results show also that the use of a thin plastic film combined with the coupling agent even increases the bandwidth and decreases the uncertainty of measurements.
Deveau, Michael A.; Gutiérrez, Alonso N.; Mackie, Thomas R.; Tomé, Wolfgang A.; Forrest, Lisa J.
2009-01-01
Intensity-modulated radiation therapy (IMRT) can be employed to yield precise dose distributions that tightly conform to targets and reduce high doses to normal structures by generating steep dose gradients. Because of these sharp gradients, daily setup variations may have an adverse effect on clinical outcome such that an adjacent normal structure may be overdosed and/or the target may be underdosed. This study provides a detailed analysis of the impact of daily setup variations on optimized IMRT canine nasal tumor treatment plans when variations are not accounted for due to the lack of image guidance. Setup histories of ten patients with nasal tumors previously treated using helical tomotherapy were replanned retrospectively to study the impact of daily setup variations on IMRT dose distributions. Daily setup shifts were applied to IMRT plans on a fraction-by-fraction basis. Using mattress immobilization and laser alignment, mean setup error magnitude in any single dimension was at least 2.5mm (0-10.0mm). With inclusions of all three translational coordinates, mean composite offset vector was 5.9±3.3mm. Due to variations, a loss of equivalent uniform dose (EUD) for target volumes of up to 5.6% was noted which corresponded to a potential loss in TCP of 39.5%. Overdosing of eyes and brain was noted by increases in mean normalized total dose (NTDmean) and highest normalized dose given to 2% of the volume (NTD2%). Findings suggest that successful implementation of canine nasal IMRT requires daily image guidance to ensure accurate delivery of precise IMRT distributions when non-rigid immobilization techniques are utilized. Unrecognized geographical misses may result in tumor recurrence and/or radiation toxicities to the eyes and brain. PMID:20166402
Yue, Ning J; Goyal, Sharad; Kim, Leonard H; Khan, Atif; Haffty, Bruce G
2014-01-01
This study investigated the patterns of intrafractional motion and accuracy of treatment setup strategies in 3-dimensional conformal radiation therapy for accelerated partial breast irradiation (APBI) of right- and left-sided breast cancers. Sixteen right-sided and 17 left-sided breast cancer patients were enrolled in an institutional APBI trial in which gold fiducial markers were strategically sutured to the surgical cavity walls. Daily pre- and post-radiation therapy kV imaging was performed and matched to digitally reconstructed radiographs based on bony anatomy and fiducial markers, respectively, to determine the intrafractional motion. The positioning differences of the laser-tattoo and the bony anatomy-based setups with respect to the marker-based setup (benchmark) were determined to evaluate their accuracy. Statistical differences were found between the right- and left-sided APBI treatments in vector directions of intrafractional motion and treatment setup errors in the reference systems, but less so in their overall magnitudes. The directional difference was more pronounced in the lateral direction. It was found that the intrafractional motion and setup reference systems tended to deviate in the right direction for the right-sided breast treatments and in the left direction for the left-sided breast treatments. It appears that the fiducial markers placed in the seroma cavity exhibit side-dependent directional intrafractional motion, although additional data may be needed to further validate the conclusion. The bony anatomy-based treatment setup improves accuracy over the laser-tattoo setup, but it is inadequate to rely on bony anatomy to assess intrafractional target motion in both magnitude and direction. Copyright © 2014 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
On the use of programmable hardware and reduced numerical precision in earth-system modeling.
Düben, Peter D; Russell, Francis P; Niu, Xinyu; Luk, Wayne; Palmer, T N
2015-09-01
Programmable hardware, in particular Field Programmable Gate Arrays (FPGAs), promises a significant increase in computational performance for simulations in geophysical fluid dynamics compared with CPUs of similar power consumption. FPGAs allow adjusting the representation of floating-point numbers to specific application needs. We analyze the performance-precision trade-off on FPGA hardware for the two-scale Lorenz '95 model. We scale the size of this toy model to that of a high-performance computing application in order to make meaningful performance tests. We identify the minimal level of precision at which changes in model results are not significant compared with a maximal precision version of the model and find that this level is very similar for cases where the model is integrated for very short or long intervals. It is therefore a useful approach to investigate model errors due to rounding errors for very short simulations (e.g., 50 time steps) to obtain a range for the level of precision that can be used in expensive long-term simulations. We also show that an approach to reduce precision with increasing forecast time, when model errors are already accumulated, is very promising. We show that a speed-up of 1.9 times is possible in comparison to FPGA simulations in single precision if precision is reduced with no strong change in model error. The single-precision FPGA setup shows a speed-up of 2.8 times in comparison to our model implementation on two 6-core CPUs for large model setups.
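The precision-error trade-off described above can be illustrated with a short software sketch. The code below is not the authors' FPGA implementation: it emulates reduced floating-point precision by truncating mantissa bits in a single-scale Lorenz '96 model (a simplification of the two-scale Lorenz '95 system used in the paper) and compares a short 50-step integration against a full double-precision reference. The grid size, forcing, time step, and truncation routine are illustrative assumptions.

```python
import numpy as np

def truncate_mantissa(x, bits):
    """Emulate reduced floating-point precision by keeping only `bits`
    mantissa bits of each double-precision value (crude software emulation)."""
    m, e = np.frexp(x)                  # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2.0 ** bits
    return np.ldexp(np.round(m * scale) / scale, e)

def lorenz96_rhs(x, forcing=8.0):
    """Single-scale Lorenz '96 tendencies: dX_k/dt = (X_{k+1}-X_{k-2})X_{k-1} - X_k + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def integrate(x0, steps=50, dt=0.01, bits=52):
    """Fourth-order Runge-Kutta integration with every intermediate result
    truncated to the requested number of mantissa bits."""
    t = lambda v: truncate_mantissa(v, bits)
    x = t(x0.copy())
    for _ in range(steps):
        k1 = t(lorenz96_rhs(x))
        k2 = t(lorenz96_rhs(t(x + 0.5 * dt * k1)))
        k3 = t(lorenz96_rhs(t(x + 0.5 * dt * k2)))
        k4 = t(lorenz96_rhs(t(x + dt * k3)))
        x = t(x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4))
    return x

rng = np.random.default_rng(0)
x0 = 8.0 + rng.standard_normal(40)          # 40 grid points, perturbed around the forcing
reference = integrate(x0, bits=52)          # full double-precision mantissa
for bits in (52, 23, 16, 10):               # double, single-like, half-like, aggressive
    err = np.sqrt(np.mean((integrate(x0, bits=bits) - reference) ** 2))
    print(f"{bits:2d} mantissa bits -> RMS difference from reference: {err:.2e}")
```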
Fottrell, Edward; Byass, Peter; Berhane, Yemane
2008-03-25
As in any measurement process, a certain amount of error may be expected in routine population surveillance operations such as those in demographic surveillance sites (DSSs). Vital events are likely to be missed and errors made no matter what method of data capture is used or what quality control procedures are in place. The extent to which random errors in large, longitudinal datasets affect overall health and demographic profiles has important implications for the role of DSSs as platforms for public health research and clinical trials. Such knowledge is also of particular importance if the outputs of DSSs are to be extrapolated and aggregated with realistic margins of error and validity. This study uses the first 10-year dataset from the Butajira Rural Health Project (BRHP) DSS, Ethiopia, covering approximately 336,000 person-years of data. Simple programmes were written to introduce random errors and omissions into new versions of the definitive 10-year Butajira dataset. Key parameters of sex, age, death, literacy and roof material (an indicator of poverty) were selected for the introduction of errors based on their obvious importance in demographic and health surveillance and their established significant associations with mortality. Defining the original 10-year dataset as the 'gold standard' for the purposes of this investigation, population, age and sex compositions and Poisson regression models of mortality rate ratios were compared between each of the intentionally erroneous datasets and the original 'gold standard' 10-year data. The composition of the Butajira population was well represented despite introducing random errors, and differences between population pyramids based on the derived datasets were subtle. Regression analyses of well-established mortality risk factors were largely unaffected even by relatively high levels of random errors in the data. The low sensitivity of parameter estimates and regression analyses to significant amounts of randomly introduced errors indicates a high level of robustness of the dataset. This apparent inertia of population parameter estimates to simulated errors is largely due to the size of the dataset. Tolerable margins of random error in DSS data may exceed 20%. While this is not an argument in favour of poor quality data, reducing the time and valuable resources spent on detecting and correcting random errors in routine DSS operations may be justifiable as the returns from such procedures diminish with increasing overall accuracy. The money and effort currently spent on endlessly correcting DSS datasets would perhaps be better spent on increasing the surveillance population size and geographic spread of DSSs and analysing and disseminating research findings.
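A toy version of this error-injection experiment is sketched below. It is not the Butajira analysis itself: it builds a small synthetic surveillance dataset, randomly corrupts a stated fraction of the literacy field, and compares crude mortality rate ratios before and after corruption. Variable names, effect sizes, and the error fractions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000                                   # person-years in the synthetic dataset

# Synthetic "gold standard" records: sex, literacy, and a death indicator whose
# rate depends on literacy (illiterate individuals get twice the baseline rate).
sex = rng.integers(0, 2, n)                   # 0 = female, 1 = male
literate = rng.integers(0, 2, n)
death_rate = np.where(literate == 1, 0.01, 0.02)
died = rng.random(n) < death_rate

def rate_ratio(exposure, outcome):
    """Crude mortality rate ratio comparing exposure==0 with exposure==1."""
    return outcome[exposure == 0].mean() / outcome[exposure == 1].mean()

def corrupt(flags, error_fraction, rng):
    """Flip a random subset of a binary field to simulate recording errors."""
    flipped = flags.copy()
    idx = rng.random(flags.size) < error_fraction
    flipped[idx] = 1 - flipped[idx]
    return flipped

print(f"gold-standard rate ratio (illiterate vs literate): "
      f"{rate_ratio(literate, died):.2f}")
for frac in (0.05, 0.10, 0.20):
    noisy = corrupt(literate, frac, rng)
    print(f"{frac:.0%} random errors in literacy field -> "
          f"rate ratio {rate_ratio(noisy, died):.2f}")
```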
SU-E-J-29: Automatic Image Registration Performance of Three IGRT Systems for Prostate Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barber, J; University of Sydney, Sydney, NSW; Sykes, J
Purpose: To compare the performance of an automatic image registration algorithm on image sets collected on three commercial image guidance systems, and explore its relationship with imaging parameters such as dose and sharpness. Methods: Images of a CIRS Virtually Human Male Pelvis phantom (VHMP) were collected on the CBCT systems of Varian TrueBeam/OBI and Elekta Synergy/XVI linear accelerators, across a range of mAs settings; and MVCT on a Tomotherapy Hi-ART accelerator with a range of pitch. Using the 6D correlation ratio algorithm of XVI, each image was registered to a mask of the prostate volume with a 5 mm expansion. Registrations were repeated 100 times, with random initial offsets introduced to simulate daily matching. Residual registration errors were calculated by correcting for the initial phantom set-up error. Automatic registration was also repeated after reconstructing images with different sharpness filters. Results: All three systems showed good registration performance, with residual translations <0.5 mm (1σ) for typical clinical dose and reconstruction settings. Residual rotational error had a larger range, with 0.8°, 1.2° and 1.9° for 1σ in XVI, OBI and Tomotherapy respectively. The registration accuracy of XVI images showed a strong dependence on imaging dose, particularly below 4 mGy. No evidence of reduced performance was observed at the lowest dose settings for OBI and Tomotherapy, but these were above 4 mGy. Registration failures (maximum target registration error > 3.6 mm on the surface of a 30 mm sphere) occurred in 5% to 10% of registrations. Changing the sharpness of image reconstruction had no significant effect on registration performance. Conclusions: Using the present automatic image registration algorithm, all IGRT systems tested provided satisfactory registrations for clinical use, within a normal range of acquisition settings.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Okura, Yuki; Futamase, Toshifumi, E-mail: yuki.okura@nao.ac.jp, E-mail: tof@astr.tohoku.ac.jp
This is the third paper on the improvement of systematic errors in weak lensing analysis using an elliptical weight function, referred to as E-HOLICs. In previous papers, we succeeded in avoiding errors that depend on the ellipticity of the background image. In this paper, we investigate the systematic error that depends on the signal-to-noise ratio of the background image. We find that the origin of this error is the random count noise that comes from the Poisson noise of sky counts. The random count noise introduces additional moments and a centroid shift error; those first-order effects are canceled in averaging, but the second-order effects are not canceled. We derive the formulae that correct this systematic error due to the random count noise in measuring the moments and ellipticity of the background image. The correction formulae obtained are expressed as combinations of complex moments of the image, and thus can correct the systematic errors caused by each object. We test their validity using a simulated image and find that the systematic error becomes less than 1% in the measured ellipticity for objects with an IMCAT significance threshold of ν ≈ 11.7.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, D; Chen, J; Hao, Y
Purpose: This work employs a retraction method to compute and evaluate the margin from CTV to PTV, based on the influence of setup errors on target dosimetry during treatment of cervical carcinoma patients. Methods: Sixteen patients with cervical cancer were treated on an Elekta Synergy and received a total of 305 kV-CBCT images. The isocenter of the initial plans was shifted according to the setup errors to simulate radiotherapy, and the dose distribution was then recalculated using the leaf sequences and MUs of the individual plans. The margin from CTV to PTV was derived both by the retraction method (with the PTV of the original plan fixed, the PTV is retracted by a given distance to define a surrogate structure CTVnx; the minimum PTV-to-CTVnx distance for which at least 99% of the CTV volume still receives 95% of the prescribed dose is taken as the CTV-to-PTV margin) and by the conventional formula method. Results: (1) The setup errors of the 16 patients in the X, Y and Z directions were (1.13 ± 2.94) mm, (−1.63 ± 7.13) mm and (−0.65 ± 2.25) mm. (2) The distance between CTVx and PTV was 5, 9 and 3 mm in the X, Y and Z directions according to the 2.5Σ + 0.7σ recipe. (3) The transplanted plans showed that 99% of CTVx10-CTVx7 received 95% of the prescription dose, but CTVx6-CTVx3 departed from the clinical standard. In order to protect normal tissues, we selected 7 mm as the minimum value of the margin from CTV to PTV. Conclusion: We have tested a retraction method for evaluating the margin from CTV to PTV. The retraction method is more reliable than the formula method for calculating the margin from the CTV to the PTV, because it represents the treatment actually delivered, and it adds a new method in this field.
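The recipe quoted in result (2) above is the widely used margin formula M = 2.5Σ + 0.7σ, with Σ and σ the standard deviations of the systematic and random setup error per axis. A minimal sketch of applying it is shown below; reading Σ as the absolute group-mean error and σ as the reported SD is one plausible interpretation of the numbers above, and it approximately reproduces the quoted 5, 9 and 3 mm margins.

```python
# Van Herk-style CTV-to-PTV margin: M = 2.5*Sigma + 0.7*sigma per axis.
# Sigma is read here as the absolute group-mean setup error and sigma as the
# reported SD, which is one plausible reading of the numbers quoted above.
setup_errors_mm = {         # axis: (mean, SD) from the abstract
    "X": (1.13, 2.94),
    "Y": (-1.63, 7.13),
    "Z": (-0.65, 2.25),
}

def margin_mm(systematic_sd, random_sd):
    """CTV-to-PTV margin from the 2.5*Sigma + 0.7*sigma recipe."""
    return 2.5 * systematic_sd + 0.7 * random_sd

for axis, (mean, sd) in setup_errors_mm.items():
    print(f"{axis}: margin = {margin_mm(abs(mean), sd):.1f} mm")
# -> roughly 4.9, 9.1 and 3.2 mm, consistent with the 5, 9 and 3 mm quoted above.
```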
Kenney, Terry A.
2010-01-01
Operational procedures at U.S. Geological Survey gaging stations include periodic leveling checks to ensure that gages are accurately set to the established gage datum. Differential leveling techniques are used to determine elevations for reference marks, reference points, all gages, and the water surface. The techniques presented in this manual provide guidance on instruments and methods that ensure gaging-station levels are run to both a high precision and accuracy. Levels are run at gaging stations whenever differences in gage readings are unresolved, stations may have been damaged, or according to a pre-determined frequency. Engineer's levels, both optical levels and electronic digital levels, are commonly used for gaging-station levels. Collimation tests should be run at least once a week for any week that levels are run, and the absolute value of the collimation error cannot exceed 0.003 foot per 100 feet (ft). An acceptable set of gaging-station levels consists of a minimum of two foresights, each from a different instrument height, taken on at least two independent reference marks, all reference points, all gages, and the water surface. The initial instrument height is determined from another independent reference mark, known as the origin, or base reference mark. The absolute value of the closure error of a leveling circuit must be less than or equal to a limit that depends on n, the total number of instrument setups, and may not exceed |0.015| ft regardless of the number of instrument setups. Closure error for a leveling circuit is distributed by instrument setup and adjusted elevations are determined. Side shots in a level circuit are assessed by examining the differences between the adjusted first and second elevations for each objective point in the circuit. The absolute value of these differences must be less than or equal to 0.005 ft. Final elevations for objective points are determined by averaging the valid adjusted first and second elevations. If final elevations indicate that the reference gage is off by |0.015| ft or more, it must be reset.
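The closure-error adjustment described above (distribute the circuit misclosure by instrument setup, then combine the adjusted first and second elevations for each objective point) can be sketched as follows. The circuit geometry, readings, and the proportional per-setup distribution rule are illustrative assumptions rather than the manual's worked example; only the 0.005-ft side-shot and 0.015-ft closure tolerances are taken from the text above.

```python
# Illustrative leveling-circuit adjustment: distribute the circuit closure error
# in proportion to instrument setup number, then average the adjusted first and
# second elevations for each objective point. Readings are hypothetical.
CLOSURE_TOLERANCE_FT = 0.015
SIDE_SHOT_TOLERANCE_FT = 0.005

def adjust(raw_elevation_ft, setup_number, total_setups, closure_error_ft):
    """Apply the share of the closure error accumulated by this instrument setup."""
    return raw_elevation_ft - closure_error_ft * setup_number / total_setups

def final_elevation(first_ft, second_ft):
    """Average the adjusted first and second elevations if they agree."""
    if abs(first_ft - second_ft) > SIDE_SHOT_TOLERANCE_FT:
        raise ValueError("adjusted elevations disagree; rerun the circuit")
    return (first_ft + second_ft) / 2.0

total_setups = 4
closure_error_ft = 0.008          # observed when closing back on the origin reference mark
assert abs(closure_error_ft) <= CLOSURE_TOLERANCE_FT

# (point, raw first elevation, setup of first shot, raw second elevation, setup of second shot)
observations = [
    ("RM2",        101.234, 1, 101.240, 3),
    ("staff gage",  98.512, 2,  98.516, 4),
]
for name, first, s1, second, s2 in observations:
    e1 = adjust(first, s1, total_setups, closure_error_ft)
    e2 = adjust(second, s2, total_setups, closure_error_ft)
    print(f"{name}: final elevation {final_elevation(e1, e2):.3f} ft")
```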
NASA Astrophysics Data System (ADS)
Zou, Guang'an; Wang, Qiang; Mu, Mu
2016-09-01
Sensitive areas for prediction of the Kuroshio large meander using a 1.5-layer, shallow-water ocean model were investigated using the conditional nonlinear optimal perturbation (CNOP) and first singular vector (FSV) methods. A series of sensitivity experiments were designed to test the sensitivity of sensitive areas within the numerical model. The following results were obtained: (1) the effect of initial CNOP and FSV patterns in their sensitive areas is greater than that of the same patterns in randomly selected areas, with the effect of the initial CNOP patterns in CNOP sensitive areas being the greatest; (2) both CNOP- and FSV-type initial errors grow more quickly than random errors; (3) the effect of random errors superimposed on the sensitive areas is greater than that of random errors introduced into randomly selected areas, and initial errors in the CNOP sensitive areas have greater effects on final forecasts. These results reveal that the sensitive areas determined using the CNOP are more sensitive than those of FSV and other randomly selected areas. In addition, ideal hindcasting experiments were conducted to examine the validity of the sensitive areas. The results indicate that reduction (or elimination) of CNOP-type errors in CNOP sensitive areas at the initial time has a greater forecast benefit than the reduction (or elimination) of FSV-type errors in FSV sensitive areas. These results suggest that the CNOP method is suitable for determining sensitive areas in the prediction of the Kuroshio large-meander path.
Portable and Error-Free DNA-Based Data Storage.
Yazdi, S M Hossein Tabatabaei; Gabrys, Ryan; Milenkovic, Olgica
2017-07-10
DNA-based data storage is an emerging nonvolatile memory technology of potentially unprecedented density, durability, and replication efficiency. The basic system implementation steps include synthesizing DNA strings that contain user information and subsequently retrieving them via high-throughput sequencing technologies. Existing architectures enable reading and writing but do not offer random-access and error-free data recovery from low-cost, portable devices, which is crucial for making the storage technology competitive with classical recorders. Here we show for the first time that a portable, random-access platform may be implemented in practice using nanopore sequencers. The novelty of our approach is to design an integrated processing pipeline that encodes data to avoid costly synthesis and sequencing errors, enables random access through addressing, and leverages efficient portable sequencing via new iterative alignment and deletion error-correcting codes. Our work represents the only known random access DNA-based data storage system that uses error-prone nanopore sequencers, while still producing error-free readouts with the highest reported information rate/density. As such, it represents a crucial step towards practical employment of DNA molecules as storage media.
Simulation of the Effects of Random Measurement Errors
ERIC Educational Resources Information Center
Kinsella, I. A.; Hannaidh, P. B. O.
1978-01-01
Describes a simulation method for measurement of errors that requires calculators and tables of random digits. Each student simulates the random behaviour of the component variables in the function and by combining the results of all students, the outline of the sampling distribution of the function can be obtained. (GA)
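The classroom simulation described above translates directly into a few lines of code: each simulated "student" draws random errors for the component variables, evaluates the function, and the pooled results outline the sampling distribution. The pendulum example and the error magnitudes below are illustrative choices, not those of the original exercise.

```python
import numpy as np

rng = np.random.default_rng(7)

def measured_quantity(length_cm, period_s):
    """Example derived quantity: g estimated from a pendulum, g = 4*pi^2*L/T^2."""
    return 4.0 * np.pi**2 * (length_cm / 100.0) / period_s**2

true_length_cm, true_period_s = 99.0, 2.0
n_students, n_repeats = 30, 50                  # each student simulates 50 measurements

samples = []
for _ in range(n_students):
    lengths = true_length_cm + rng.normal(0.0, 0.2, n_repeats)   # ±0.2 cm reading error
    periods = true_period_s + rng.normal(0.0, 0.01, n_repeats)   # ±0.01 s timing error
    samples.append(measured_quantity(lengths, periods))

pooled = np.concatenate(samples)
print(f"mean g  = {pooled.mean():.3f} m/s^2")
print(f"spread  = {pooled.std(ddof=1):.3f} m/s^2  (approximate sampling-distribution width)")
```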
Development of multiple-eye PIV using mirror array
NASA Astrophysics Data System (ADS)
Maekawa, Akiyoshi; Sakakibara, Jun
2018-06-01
In order to reduce particle image velocimetry measurement error, we manufactured an ellipsoidal polyhedral mirror and placed it between a camera and the flow target to capture n images of identical particles from n (=80 maximum) different directions. The 3D particle positions were determined from the ensemble average of the nC2 intersecting points of a pair of line-of-sight back-projected points from a particle found in any combination of two images among the n images. The method was then applied to a rigid-body rotating flow and a turbulent pipe flow. In the former measurement, bias error and random error fell in a range of ±0.02 pixels and 0.02-0.05 pixels, respectively; additionally, random error decreased as the number of viewing directions n increased. In the latter measurement, in which the measured value was compared to direct numerical simulation, bias error was reduced and random error again decreased with increasing n.
Tagaste, Barbara; Riboldi, Marco; Spadea, Maria F; Bellante, Simone; Baroni, Guido; Cambria, Raffaella; Garibaldi, Cristina; Ciocca, Mario; Catalano, Gianpiero; Alterio, Daniela; Orecchia, Roberto
2012-04-01
To compare infrared (IR) optical vs. stereoscopic X-ray technologies for patient setup in image-guided stereotactic radiotherapy. Retrospective data analysis of 233 fractions in 127 patients treated with hypofractionated stereotactic radiotherapy was performed. Patient setup at the linear accelerator was carried out by means of combined IR optical localization and stereoscopic X-ray image fusion in 6 degrees of freedom (6D). Data were analyzed to evaluate the geometric and dosimetric discrepancy between the two patient setup strategies. Differences between IR optical localization and 6D X-ray image fusion parameters were on average within the expected localization accuracy, as limited by CT image resolution (3 mm). A disagreement between the two systems below 1 mm in all directions was measured in patients treated for cranial tumors. In extracranial sites, larger discrepancies and higher variability were observed as a function of the initial patient alignment. The compensation of IR-detected rotational errors resulted in a significantly improved agreement with 6D X-ray image fusion. On the basis of the bony anatomy registrations, the measured differences were found not to be sensitive to patient breathing. The related dosimetric analysis showed that IR-based patient setup caused limited variations in three cases, with 7% maximum dose reduction in the clinical target volume and no dose increase in organs at risk. In conclusion, patient setup driven by IR external surrogates localization in 6D featured comparable accuracy with respect to procedures based on stereoscopic X-ray imaging. Copyright © 2012 Elsevier Inc. All rights reserved.
Automated patient setup and gating using cone beam computed tomography projections
NASA Astrophysics Data System (ADS)
Wan, Hanlin; Bertholet, Jenny; Ge, Jiajia; Poulsen, Per; Parikh, Parag
2016-03-01
In radiation therapy, fiducial markers are often implanted near tumors and used for patient positioning and respiratory gating purposes. These markers are then used to manually align the patients by matching the markers in the cone beam computed tomography (CBCT) reconstruction to those in the planning CT. This step is time-intensive and user-dependent, and often results in a suboptimal patient setup. We propose a fully automated, robust method based on dynamic programming (DP) for segmenting radiopaque fiducial markers in CBCT projection images, which are then used to automatically optimize the treatment couch position and/or gating window bounds. The mean of the absolute 2D segmentation error of our DP algorithm is 1.3 ± 1.0 mm for 87 markers on 39 patients. Intrafraction images were acquired every 3 s during treatment at two different institutions. For gated patients from Institution A (8 patients, 40 fractions), the DP algorithm increased the delivery accuracy (96 ± 6% versus 91 ± 11%, p < 0.01) compared to the manual setup using kV fluoroscopy. For non-gated patients from Institution B (6 patients, 16 fractions), the DP algorithm performed similarly (1.5 ± 0.8 mm versus 1.6 ± 0.9 mm, p = 0.48) compared to the manual setup matching the fiducial markers in the CBCT to the mean position. Our proposed automated patient setup algorithm only takes 1-2 s to run, requires no user intervention, and performs as well as or better than the current clinical setup.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mori, Shinichiro, E-mail: shinshin@nirs.go.jp; Karube, Masataka; Shirai, Toshiyuki
Purpose: Having implemented amplitude-based respiratory gating for scanned carbon-ion beam therapy, we sought to evaluate its effect on positional accuracy and throughput. Methods and Materials: A total of 10 patients with tumors of the lung and liver participated in the first clinical trials at our center. Treatment planning was conducted with 4-dimensional computed tomography (4DCT) under free-breathing conditions. The planning target volume (PTV) was calculated by adding a 2- to 3-mm setup margin outside the clinical target volume (CTV) within the gating window. The treatment beam was on when the CTV was within the PTV. Tumor position was detected in real time with a markerless tumor tracking system using paired x-ray fluoroscopic imaging units. Results: The patient setup error (mean ± SD) was 1.1 ± 1.2 mm/0.6 ± 0.4°. The mean internal gating accuracy (95% confidence interval [CI]) was 0.5 mm. If external gating had been applied to this treatment, the mean gating accuracy (95% CI) would have been 4.1 mm. The fluoroscopic radiation doses (mean ± SD) were 23.7 ± 21.8 mGy per beam and less than 487.5 mGy total throughout the treatment course. The setup, preparation, and irradiation times (mean ± SD) were 8.9 ± 8.2 min, 9.5 ± 4.6 min, and 4.0 ± 2.4 min, respectively. The treatment room occupation time was 36.7 ± 67.5 min. Conclusions: Internal gating had a much higher accuracy than external gating. By the addition of a setup margin of 2 to 3 mm, internal gating positional error was less than 2.2 mm at 95% CI.
NASA Astrophysics Data System (ADS)
Rittersdorf, I. M.; Antonsen, T. M., Jr.; Chernin, D.; Lau, Y. Y.
2011-10-01
Random fabrication errors may have detrimental effects on the performance of traveling-wave tubes (TWTs) of all types. A new scaling law for the modification in the average small signal gain and in the output phase is derived from the third order ordinary differential equation that governs the forward wave interaction in a TWT in the presence of random error that is distributed along the axis of the tube. Analytical results compare favorably with numerical results, in both gain and phase modifications as a result of random error in the phase velocity of the slow wave circuit. Results on the effect of the reverse-propagating circuit mode will be reported. This work supported by AFOSR, ONR, L-3 Communications Electron Devices, and Northrop Grumman Corporation.
Isospin Breaking Corrections to the HVP with Domain Wall Fermions
NASA Astrophysics Data System (ADS)
Boyle, Peter; Guelpers, Vera; Harrison, James; Juettner, Andreas; Lehner, Christoph; Portelli, Antonin; Sachrajda, Christopher
2018-03-01
We present results for the QED and strong isospin breaking corrections to the hadronic vacuum polarization using Nf = 2 + 1 Domain Wall fermions. QED is included in an electro-quenched setup using two different methods, a stochastic and a perturbative approach. Results and statistical errors from both methods are directly compared with each other.
At least some errors are randomly generated (Freud was wrong)
NASA Technical Reports Server (NTRS)
Sellen, A. J.; Senders, J. W.
1986-01-01
An experiment was carried out to expose something about human error generating mechanisms. In the context of the experiment, an error was made when a subject pressed the wrong key on a computer keyboard or pressed no key at all in the time allotted. These might be considered, respectively, errors of substitution and errors of omission. Each of seven subjects saw a sequence of three digital numbers, made an easily learned binary judgement about each, and was to press the appropriate one of two keys. Each session consisted of 1,000 presentations of randomly permuted, fixed numbers broken into 10 blocks of 100. One of two keys should have been pressed within one second of the onset of each stimulus. These data were subjected to statistical analyses in order to probe the nature of the error generating mechanisms. Goodness of fit tests for a Poisson distribution for the number of errors per 50 trial interval and for an exponential distribution of the length of the intervals between errors were carried out. There is evidence for an endogenous mechanism that may best be described as a random error generator. Furthermore, an item analysis of the number of errors produced per stimulus suggests the existence of a second mechanism operating on task driven factors producing exogenous errors. Some errors, at least, are the result of constant probability generating mechanisms with error rate idiosyncratically determined for each subject.
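The statistical check described above, a goodness-of-fit test of error counts per fixed block of trials against a Poisson distribution, can be sketched as follows with synthetic data standing in for the experiment; the constant error probability, block size, and number of blocks are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic stand-in for the experiment: error counts per 50-trial interval
# generated by a constant-probability ("random generator") mechanism.
n_intervals, p_error, trials_per_interval = 20, 0.04, 50
counts = rng.binomial(trials_per_interval, p_error, n_intervals)

# Chi-square goodness of fit against a Poisson distribution with the observed mean.
lam = counts.mean()
observed = np.array([np.sum(counts == 0), np.sum(counts == 1),
                     np.sum(counts == 2), np.sum(counts >= 3)])
poisson_p = np.array([stats.poisson.pmf(k, lam) for k in (0, 1, 2)])
poisson_p = np.append(poisson_p, 1.0 - poisson_p.sum())   # pool counts >= 3
expected = poisson_p * n_intervals

chi2, p_value = stats.chisquare(observed, expected, ddof=1)   # 1 dof lost estimating lambda
print(f"chi-square = {chi2:.2f}, p = {p_value:.2f}  "
      f"(large p: no evidence against a constant-probability error generator)")
```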
Quantum cryptography using entangled photons in energy-time bell states
Tittel; Brendel; Zbinden; Gisin
2000-05-15
We present a setup for quantum cryptography based on photon pairs in energy-time Bell states and show its feasibility in a laboratory experiment. Our scheme combines the advantages of using photon pairs instead of faint laser pulses and the possibility to preserve energy-time entanglement over long distances. Moreover, using four-dimensional energy-time states, no fast random change of bases is required in our setup: Nature itself decides whether to measure in the energy or in the time base, thus rendering eavesdropper attacks based on "photon number splitting" less efficient.
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2016-01-01
This chapter discusses the ongoing development of combined uncertainty and error bound estimates for computational fluid dynamics (CFD) calculations subject to imposed random parameters and random fields. An objective of this work is the construction of computable error bound formulas for output uncertainty statistics that guide CFD practitioners in systematically determining how accurately CFD realizations should be approximated and how accurately uncertainty statistics should be approximated for output quantities of interest. Formal error bounds formulas for moment statistics that properly account for the presence of numerical errors in CFD calculations and numerical quadrature errors in the calculation of moment statistics have been previously presented in [8]. In this past work, hierarchical node-nested dense and sparse tensor product quadratures are used to calculate moment statistics integrals. In the present work, a framework has been developed that exploits the hierarchical structure of these quadratures in order to simplify the calculation of an estimate of the quadrature error needed in error bound formulas. When signed estimates of realization error are available, this signed error may also be used to estimate output quantity of interest probability densities as a means to assess the impact of realization error on these density estimates. Numerical results are presented for CFD problems with uncertainty to demonstrate the capabilities of this framework.
To image analysis in computed tomography
NASA Astrophysics Data System (ADS)
Chukalina, Marina; Nikolaev, Dmitry; Ingacheva, Anastasia; Buzmakov, Alexey; Yakimchuk, Ivan; Asadchikov, Victor
2017-03-01
The presence of errors in a tomographic image may lead to misdiagnosis when computed tomography (CT) is used in medicine, or to wrong decisions about the parameters of technological processes when CT is used in industrial applications. Two main sources produce these errors. First, errors occur at the measurement step, e.g. from incorrect calibration and estimation of the geometric parameters of the set-up. The second source is the nature of the tomographic reconstruction step. At that stage, a mathematical model used to calculate the projection data is created. The optimization and regularization methods applied, along with their numerical implementations, have their own specific errors. Nowadays, many research teams try to analyze these errors and establish the relations between error sources. In this paper, we do not analyze the nature of the final error, but present a new approach for calculating its distribution in the reconstructed volume. We hope that the visualization of the error distribution will allow experts to clarify the medical report impression or expert summary they give after analyzing CT results. To illustrate the efficiency of the proposed approach we present both simulation and real data processing results.
Using Audit Information to Adjust Parameter Estimates for Data Errors in Clinical Trials
Shepherd, Bryan E.; Shaw, Pamela A.; Dodd, Lori E.
2013-01-01
Background Audits are often performed to assess the quality of clinical trial data, but beyond detecting fraud or sloppiness, the audit data is generally ignored. In earlier work using data from a non-randomized study, Shepherd and Yu (2011) developed statistical methods to incorporate audit results into study estimates, and demonstrated that audit data could be used to eliminate bias. Purpose In this manuscript we examine the usefulness of audit-based error-correction methods in clinical trial settings where a continuous outcome is of primary interest. Methods We demonstrate the bias of multiple linear regression estimates in general settings with an outcome that may have errors and a set of covariates for which some may have errors and others, including treatment assignment, are recorded correctly for all subjects. We study this bias under different assumptions including independence between treatment assignment, covariates, and data errors (conceivable in a double-blinded randomized trial) and independence between treatment assignment and covariates but not data errors (possible in an unblinded randomized trial). We review moment-based estimators to incorporate the audit data and propose new multiple imputation estimators. The performance of estimators is studied in simulations. Results When treatment is randomized and unrelated to data errors, estimates of the treatment effect using the original error-prone data (i.e., ignoring the audit results) are unbiased. In this setting, both moment and multiple imputation estimators incorporating audit data are more variable than standard analyses using the original data. In contrast, in settings where treatment is randomized but correlated with data errors and in settings where treatment is not randomized, standard treatment effect estimates will be biased. And in all settings, parameter estimates for the original, error-prone covariates will be biased. Treatment and covariate effect estimates can be corrected by incorporating audit data using either the multiple imputation or moment-based approaches. Bias, precision, and coverage of confidence intervals improve as the audit size increases. Limitations The extent of bias and the performance of methods depend on the extent and nature of the error as well as the size of the audit. This work only considers methods for the linear model. Settings much different than those considered here need further study. Conclusions In randomized trials with continuous outcomes and treatment assignment independent of data errors, standard analyses of treatment effects will be unbiased and are recommended. However, if treatment assignment is correlated with data errors or other covariates, naive analyses may be biased. In these settings, and when covariate effects are of interest, approaches for incorporating audit results should be considered. PMID:22848072
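A small simulation makes the central point above concrete: with randomized treatment and data errors unrelated to treatment, the treatment-effect estimate stays unbiased while the coefficient of an error-prone covariate is attenuated. The data-generating values below are illustrative, and the audit-based correction estimators themselves are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 2_000, 500
beta_treat, beta_x = 1.0, 2.0               # true effects

def ols(design, y):
    """Ordinary least squares coefficients."""
    return np.linalg.lstsq(design, y, rcond=None)[0]

est_treat, est_x = [], []
for _ in range(reps):
    treat = rng.integers(0, 2, n)           # randomized assignment, recorded correctly
    x_true = rng.normal(0, 1, n)
    y = beta_treat * treat + beta_x * x_true + rng.normal(0, 1, n)
    x_observed = x_true + rng.normal(0, 1, n)   # covariate recorded with random error
    design = np.column_stack([np.ones(n), treat, x_observed])
    coef = ols(design, y)
    est_treat.append(coef[1])
    est_x.append(coef[2])

print(f"treatment effect: true {beta_treat}, mean estimate {np.mean(est_treat):.2f}")
print(f"covariate effect: true {beta_x}, mean estimate {np.mean(est_x):.2f} (attenuated)")
```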
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teboh, Forbang R; Agee, M; Rowe, L
2014-06-01
Purpose: Immobilization devices combine rigid patient fixation with comfort and play a key role in providing the stability required for accurate radiation delivery. In the setup step, the couch re-positioning needed to align the patient is derived via registration of the acquired image against the reference image. For subsequent fractions, replicating the initial setup should yield identical alignment errors when compared to the reference. This is not always the case and further couch re-positioning can be needed. An important quality assurance measure is to set couch tolerances beyond which additional investigations are needed. The purpose of this work was to study the inter-fraction couch changes needed to re-align the patient and the intra-fraction stability of the alignment as a guide to establishing couch tolerances. Methods: Data from twelve patients treated on the Accuray CyberKnife (CK) system for fractionated intracranial radiotherapy and immobilized with Aquaplast RT, U-frame, F-Head-Support (Qfix, PA, USA) were used. Each fraction involved image acquisitions and registration with the reference to re-align the patient. The absolute couch position corresponding to the approved setup alignment was recorded per fraction. Intra-fraction setup corrections were recorded throughout the treatment. Results: The average approved setup alignment was 0.03 ± 0.28 mm, 0.15 ± 0.22 mm, 0.06 ± 0.31 mm in the L/R, A/P, S/I directions respectively and 0.00 ± 0.35 degrees, 0.03 ± 0.32 degrees, 0.08 ± 0.45 degrees for roll, pitch and yaw respectively. The inter-fraction reproducibility of the couch position was 6.65 mm, 10.55 mm, and 4.77 mm in the L/R, A/P and S/I directions respectively and 0.82 degrees, 0.71 degrees for roll and pitch respectively. Intra-fraction monitoring showed small average errors of 0.21 ± 0.21 mm, 0.00 ± 0.08 mm, 0.23 ± 0.22 mm in the L/R, A/P, S/I directions respectively and 0.03 ± 0.12 degrees, 0.04 ± 0.25 degrees, and 0.13 ± 0.15 degrees in the roll, pitch and yaw respectively. Conclusion: The inter-fraction reproducibility should serve as a guide to couch tolerances, specific to the site and immobilization device. More patients need to be included to make general conclusions.
Modeling and Implementation of Multi-Position Non-Continuous Rotation Gyroscope North Finder.
Luo, Jun; Wang, Zhiqian; Shen, Chengwu; Kuijper, Arjan; Wen, Zhuoman; Liu, Shaojin
2016-09-20
Even when the Global Positioning System (GPS) signal is blocked, a rate gyroscope (gyro) north finder is capable of providing the required azimuth reference information to a certain extent. In order to measure the azimuth between the observer and the north direction very accurately, we propose a multi-position non-continuous rotation gyro north finding scheme. Our new generalized mathematical model analyzes the elements that affect the azimuth measurement precision and can thus provide high precision azimuth reference information. Based on the gyro's principle of detecting a projection of the earth rotation rate on its sensitive axis and the proposed north finding scheme, we are able to deduce an accurate mathematical model of the gyro outputs against azimuth with the gyro and shaft misalignments. Combining the gyro output model and the theory of propagation of uncertainty, some approaches to optimize north finding are provided, including reducing the gyro bias error, constraining the gyro random error, increasing the number of rotation points, improving rotation angle measurement precision, and decreasing the gyro and shaft misalignment angles. Following these approaches, a north finder setup was built and an azimuth uncertainty of 18" was obtained. This paper provides systematic theory for analyzing the details of the gyro north finder scheme from simulation to implementation. The proposed theory can guide both applied researchers in academia and advanced practitioners in industry in designing high-precision, robust north finders based on different types of rate gyroscopes.
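The underlying measurement model is simple: with the sensitive axis indexed to rotation angle θ_i about the vertical, the gyro reads Ω_E cos(latitude) cos(θ_i + A) plus bias and noise, where A is the azimuth being sought. The sketch below fits that sinusoid at several discrete positions with least squares; the latitude, noise levels, and number of positions are illustrative assumptions, not the paper's setup.

```python
import numpy as np

EARTH_RATE = 7.292115e-5            # rad/s
rng = np.random.default_rng(5)

latitude = np.deg2rad(45.0)
true_azimuth = np.deg2rad(30.0)     # angle we want to recover
bias = 1.0e-6                       # constant gyro bias, rad/s
noise_sd = 2.0e-7                   # random gyro noise, rad/s

# Non-continuous rotation: the sensitive axis is indexed to N discrete positions.
positions = np.deg2rad(np.arange(0, 360, 45))
measured = (EARTH_RATE * np.cos(latitude) * np.cos(positions + true_azimuth)
            + bias + rng.normal(0.0, noise_sd, positions.size))

# Model: w_i = C*cos(theta_i) + S*sin(theta_i) + b, so azimuth A = atan2(-S, C).
design = np.column_stack([np.cos(positions), np.sin(positions), np.ones_like(positions)])
C, S, b = np.linalg.lstsq(design, measured, rcond=None)[0]
estimate = np.arctan2(-S, C)

print(f"estimated azimuth: {np.rad2deg(estimate):.3f} deg "
      f"(true {np.rad2deg(true_azimuth):.1f} deg)")
```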
NASA Astrophysics Data System (ADS)
Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina
2012-03-01
Currently, the performance of overlay metrology is evaluated mainly based on random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to < 0.5nm, it becomes crucial to include also systematic error contributions which affect the accuracy of the metrology. Here we discuss fundamental aspects of overlay accuracy and a methodology to improve accuracy significantly. We identify overlay mark imperfections and their interaction with the metrology technology, as the main source of overlay inaccuracy. The most important type of mark imperfection is mark asymmetry. Overlay mark asymmetry leads to a geometrical ambiguity in the definition of overlay, which can be ~1nm or less. It is shown theoretically and in simulations that the metrology may enhance the effect of overlay mark asymmetry significantly and lead to metrology inaccuracy ~10nm, much larger than the geometrical ambiguity. The analysis is carried out for two different overlay metrology technologies: Imaging overlay and DBO (1st order diffraction based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than the sensitivity of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of measurement quality metric, results in optimal overlay accuracy.
Regoui, Chaouki; Durand, Guillaume; Belliveau, Luc; Léger, Serge
2013-01-01
This paper presents a novel hybrid DNA encryption (HyDEn) approach that uses randomized assignments of unique error-correcting DNA Hamming code words for single characters in the extended ASCII set. HyDEn relies on custom-built quaternary codes and a private key used in the randomized assignment of code words and the cyclic permutations applied on the encoded message. Along with its ability to detect and correct errors, HyDEn equals or outperforms existing cryptographic methods and represents a promising in silico DNA steganographic approach. PMID:23984392
Experimental nonlocality-based randomness generation with nonprojective measurements
NASA Astrophysics Data System (ADS)
Gómez, S.; Mattar, A.; Gómez, E. S.; Cavalcanti, D.; Farías, O. Jiménez; Acín, A.; Lima, G.
2018-04-01
We report on an optical setup generating more than one bit of randomness from one entangled bit (i.e., a maximally entangled state of two qubits). The amount of randomness is certified through the observation of Bell nonlocal correlations. To attain this result we implemented a high-purity entanglement source and a nonprojective three-outcome measurement. Our implementation achieves a gain of 27% of randomness as compared with the standard methods using projective measurements. Additionally, we estimate the amount of randomness certified in a one-sided device-independent scenario, through the observation of Einstein-Podolsky-Rosen steering. Our results prove that nonprojective quantum measurements allow extending the limits for nonlocality-based certified randomness generation using current technology.
NASA Technical Reports Server (NTRS)
Prive, Nikki C.; Errico, Ronald M.
2013-01-01
A series of experiments that explore the roles of model and initial condition error in numerical weather prediction are performed using an observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). The use of an OSSE allows the analysis and forecast errors to be explicitly calculated, and different hypothetical observing networks can be tested with ease. In these experiments, both a full global OSSE framework and an 'identical twin' OSSE setup are utilized to compare the behavior of the data assimilation system and evolution of forecast skill with and without model error. The initial condition error is manipulated by varying the distribution and quality of the observing network and the magnitude of observation errors. The results show that model error has a strong impact on both the quality of the analysis field and the evolution of forecast skill, including both systematic and unsystematic model error components. With a realistic observing network, the analysis state retains a significant quantity of error due to systematic model error. If errors of the analysis state are minimized, model error acts to rapidly degrade forecast skill during the first 24-48 hours of forward integration. In the presence of model error, the impact of observation errors on forecast skill is small, but in the absence of model error, observation errors cause a substantial degradation of the skill of medium range forecasts.
Random Error in Judgment: The Contribution of Encoding and Retrieval Processes
ERIC Educational Resources Information Center
Pleskac, Timothy J.; Dougherty, Michael R.; Rivadeneira, A. Walkyria; Wallsten, Thomas S.
2009-01-01
Theories of confidence judgments have embraced the role random error plays in influencing responses. An important next step is to identify the source(s) of these random effects. To do so, we used the stochastic judgment model (SJM) to distinguish the contribution of encoding and retrieval processes. In particular, we investigated whether dividing…
Trofimov, Alexei; Unkelbach, Jan; DeLaney, Thomas F; Bortfeld, Thomas
2012-01-01
Dose-volume histograms (DVH) are the most common tool used in the appraisal of the quality of a clinical treatment plan. However, when delivery uncertainties are present, the DVH may not always accurately describe the dose distribution actually delivered to the patient. We present a method, based on DVH formalism, to visualize the variability in the expected dosimetric outcome of a treatment plan. For a case of chordoma of the cervical spine, we compared 2 intensity modulated proton therapy plans. Treatment plan A was optimized based on dosimetric objectives alone (ie, desired target coverage, normal tissue tolerance). Plan B was created employing a published probabilistic optimization method that considered the uncertainties in patient setup and proton range in tissue. Dose distributions and DVH for both plans were calculated for the nominal delivery scenario, as well as for scenarios representing deviations from the nominal setup, and a systematic error in the estimate of range in tissue. The histograms from various scenarios were combined to create DVH bands to illustrate possible deviations from the nominal plan for the expected magnitude of setup and range errors. In the nominal scenario, the DVH from plan A showed superior dose coverage, higher dose homogeneity within the target, and improved sparing of the adjacent critical structure. However, when the dose distributions and DVH from plans A and B were recalculated for different error scenarios (eg, proton range underestimation by 3 mm), the plan quality, reflected by DVH, deteriorated significantly for plan A, while plan B was only minimally affected. In the DVH-band representation, plan A produced wider bands, reflecting its higher vulnerability to delivery errors, and uncertainty in the dosimetric outcome. The results illustrate that comparison of DVH for the nominal scenario alone does not provide any information about the relative sensitivity of dosimetric outcome to delivery uncertainties. Thus, such comparison may be misleading and may result in the selection of an inferior plan for delivery to a patient. A better-informed decision can be made if additional information about possible dosimetric variability is presented; for example, in the form of DVH bands. Copyright © 2012 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
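The DVH-band idea above amounts to recomputing a cumulative DVH for each error scenario and taking, at each dose level, the envelope across scenarios. A minimal sketch with synthetic per-voxel doses is shown below; the dose values and the set of scenarios are invented for illustration and are not the chordoma plans discussed.

```python
import numpy as np

rng = np.random.default_rng(11)

def cumulative_dvh(voxel_doses_gy, dose_axis_gy):
    """Fraction of the structure receiving at least each dose level."""
    return np.array([(voxel_doses_gy >= d).mean() for d in dose_axis_gy])

# Nominal target doses for one structure plus a handful of error scenarios
# (e.g., setup shifts, a range error), each represented here by a synthetic
# perturbation of the per-voxel dose.
nominal = rng.normal(70.0, 1.0, 5_000)                     # Gy, ~5000 voxels
scenarios = [nominal + rng.normal(shift, 1.5, nominal.size)
             for shift in (-3.0, -1.5, 0.0, +1.5, +3.0)]

dose_axis = np.linspace(60.0, 75.0, 31)
dvhs = np.vstack([cumulative_dvh(s, dose_axis) for s in scenarios])

lower, upper = dvhs.min(axis=0), dvhs.max(axis=0)          # the DVH "band"
nominal_dvh = cumulative_dvh(nominal, dose_axis)
for d, lo, nom, hi in zip(dose_axis[::10], lower[::10], nominal_dvh[::10], upper[::10]):
    print(f"D >= {d:4.1f} Gy: band {lo:.2f}-{hi:.2f} (nominal {nom:.2f})")
```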
A Robust and Affordable Table Indexing Approach for Multi-isocenter Dosimetrically Matched Fields.
Yu, Amy; Fahimian, Benjamin; Million, Lynn; Hsu, Annie
2017-05-23
Purpose: Radiotherapy treatment planning of extended volumes typically necessitates the use of multiple field isocenters and abutting dosimetrically matched fields in order to enable coverage beyond the field size limits. A common example is total lymphoid irradiation (TLI), which is conventionally planned using dosimetric matching of the mantle, para-aortic/spleen, and pelvic fields. Due to the large irradiated volume and system limitations, such as field size and couch extension, a combination of couch shifts and sliding of patients must be correctly executed for accurate delivery of the plan. However, shifting of patients presents a substantial safety issue and has been shown to be prone to errors ranging from minor deviations to geometrical misses warranting a medical event. To address this complex setup and mitigate the safety issues relating to delivery, a practical technique for couch indexing of TLI treatments has been developed and evaluated through a retrospective analysis of couch position. Methods: The indexing technique is based on the modification of the commonly available slide board to enable indexing of the patient position. Modifications include notching to enable coupling with indexing bars, and the addition of a headrest used to fixate the head of the patient relative to the slide board. For the clinical setup, a Varian Exact Couch™ (Varian Medical Systems, Inc, Palo Alto, CA) was utilized. Two groups of patients were treated: 20 patients with table indexing and 10 patients without. The standard deviations (SDs) of the couch positions in the longitudinal, lateral, and vertical directions through the entire treatment cycle for each patient were calculated, and differences between the two groups were analyzed with Student's t-test. Results: The longitudinal direction showed the largest improvement. In the non-indexed group, the positioning SD ranged from 2.0 to 7.9 cm. With the indexing device, the positioning SD was reduced to a range of 0.4 to 1.3 cm (p < 0.05 at the 95% confidence level). The lateral positioning was slightly improved (p < 0.05 at the 95% confidence level), while no improvement was observed in the vertical direction. Conclusions: The conventional matched-field TLI treatment is prone to geometrical setup errors. The feasibility of fully indexing TLI treatments was validated and shown to result in a significant reduction of positioning and shifting errors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, S; Suh, T; Park, S
2015-06-15
Purpose: The dose-related effects of patient setup errors on biophysical indices were evaluated for conventional wedge (CW) and field-in-field (FIF) whole breast irradiation techniques. Methods: The treatment plans for 10 patients receiving whole left breast irradiation were retrospectively selected. Radiobiological and physical effects caused by dose variations were evaluated by shifting the isocenters and gantry angles of the treatment plans. Dose-volume histograms of the planning target volume (PTV), heart, and lungs were generated, and conformity index (CI), homogeneity index (HI), tumor control probability (TCP), and normal tissue complication probability (NTCP) were determined. Results: For the “isocenter shift plan” in the posterior direction, the D95 of the PTV decreased by approximately 15%, and the TCP of the PTV decreased by approximately 50% for the FIF technique and by 40% for CW; however, the NTCPs of the lungs and heart increased by about 13% and 1%, respectively, for both techniques. Increasing the gantry angle decreased the TCPs of the PTV by 24.4% (CW) and by 34% (FIF). The NTCPs for the two techniques differed by only 3%. The CIs and HIs for CW were much higher than those for FIF in all cases, with a significant difference between the two techniques (p<0.01). According to our results, however, the FIF technique responded more sensitively to setup errors than CW in biophysical terms. Conclusions: Radiobiologically based analysis can detect significant dosimetric errors and can thus provide a practical patient quality assurance method that accounts for radiobiological and physical effects.
The random coding bound is tight for the average code.
NASA Technical Reports Server (NTRS)
Gallager, R. G.
1973-01-01
The random coding bound of information theory provides a well-known upper bound to the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upperbounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upperbounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.
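For reference, the bound discussed above is usually written in the following standard Gallager form (the textbook statement, not a formula quoted from this note):

$$\bar{P}_e \le \exp\!\left[-N\,E_r(R)\right], \qquad E_r(R) = \max_{0 \le \rho \le 1}\,\max_{Q}\,\bigl[E_0(\rho, Q) - \rho R\bigr],$$

$$E_0(\rho, Q) = -\ln \sum_{j}\Bigl[\sum_{k} Q(k)\, P(j \mid k)^{1/(1+\rho)}\Bigr]^{1+\rho},$$

where N is the block length, R the rate in nats per channel use, Q an input distribution, P(j|k) the channel transition probabilities, and the bar denotes averaging the error probability over the random code ensemble.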
Generation of a tunable environment for electrical oscillator systems.
León-Montiel, R de J; Svozilík, J; Torres, Juan P
2014-07-01
Many physical, chemical, and biological systems can be modeled by means of random-frequency harmonic oscillator systems. Even though the noise-free evolution of harmonic oscillator systems can be easily implemented, the way to experimentally introduce, and control, noise effects due to a surrounding environment remains a subject of lively interest. Here, we experimentally demonstrate a setup that provides a unique tool to generate a fully tunable environment for classical electrical oscillator systems. We illustrate the operation of the setup by implementing the case of a damped random-frequency harmonic oscillator. The high degree of tunability and control of our scheme is demonstrated by gradually modifying the statistics of the oscillator's frequency fluctuations. This tunable system can readily be used to experimentally study interesting noise effects, such as noise-induced transitions in systems driven by multiplicative noise, and noise-induced transport, a phenomenon that takes place in quantum and classical coupled oscillator networks.
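The damped random-frequency oscillator used above, x'' + γx' + ω(t)²x = 0 with ω(t)² fluctuating about a mean value, is straightforward to simulate numerically; the sketch below is a plain software analogue of the electrical implementation, with all parameter values and the white multiplicative noise model chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(gamma, omega0, noise_sd, dt=1e-3, steps=20_000):
    """Semi-implicit Euler integration of x'' + gamma*x' + omega(t)^2 * x = 0,
    where omega(t)^2 = omega0^2 * (1 + noise) and the noise is resampled each
    step (white multiplicative frequency noise)."""
    x, v = 1.0, 0.0
    trace = np.empty(steps)
    for i in range(steps):
        omega_sq = omega0**2 * (1.0 + noise_sd * rng.standard_normal())
        v += (-gamma * v - omega_sq * x) * dt
        x += v * dt
        trace[i] = x
    return trace

for noise_sd in (0.0, 0.2, 0.5):           # increasing frequency-fluctuation strength
    trace = simulate(gamma=0.5, omega0=2.0 * np.pi, noise_sd=noise_sd)
    print(f"noise_sd = {noise_sd:.1f}: late-time amplitude ~ "
          f"{np.abs(trace[-2000:]).max():.3e}")
```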
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamezawa, H; Fujimoto General Hospital, Miyakonojo, Miyazaki; Arimura, H
Purpose: To investigate the possibility of reducing the exposure dose from cone-beam computed tomography (CBCT) in an image-guided patient positioning system by using 6 noise suppression filters. Methods: First, reference-dose (RD) and low-dose (LD) CBCT (X-ray volume imaging system, Elekta Co.) images were acquired with a reference dose of 86.2 mGy (weighted CT dose index: CTDIw) and various low doses of 1.4 to 43.1 mGy, respectively. Second, an automated rigid registration for three axes was performed to estimate setup errors between a planning CT image and the LD-CBCT images, which were processed by 6 noise suppression filters, i.e., an averaging filter (AF), median filter (MF), Gaussian filter (GF), bilateral filter (BF), edge-preserving smoothing filter (EPF) and adaptive partial median filter (AMF). Third, residual errors representing the patient positioning accuracy were calculated as the Euclidean distance between the setup error vectors estimated using the LD-CBCT image and the RD-CBCT image. Finally, the relationships between the residual error and CTDIw were obtained for the 6 noise suppression filters, and the CTDIw for LD-CBCT images processed by the noise suppression filters was measured at the same residual error as that obtained with the RD-CBCT. This approach was applied to an anthropomorphic pelvic phantom and two cancer patients. Results: For the phantom, the exposure dose could be reduced from 61% (GF) to 78% (AMF) by applying the noise suppression filters to the CBCT images. The exposure dose in a prostate cancer case could be reduced from 8% (AF) to 61% (AMF), and the exposure dose in a lung cancer case could be reduced from 9% (AF) to 37% (AMF). Conclusion: Using noise suppression filters, particularly an adaptive partial median filter, could make it feasible to decrease the additional exposure dose to patients in image-guided patient positioning systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sayler, E; Harrison, A; Eldredge-Hindy, H
Purpose: Valencia and Leipzig applicators (VLAs) are single-channel brachytherapy surface applicators used to treat skin lesions up to 2 cm in diameter. Source dwell times can be calculated and entered manually after clinical set-up or ultrasound. This procedure differs dramatically from CT-based planning; the novelty and unfamiliarity could lead to severe errors. To build layers of safety and ensure quality, a multidisciplinary team created a protocol and applied Failure Modes and Effects Analysis (FMEA) to the clinical procedure for HDR VLA skin treatments. Methods: A team including physicists, physicians, nurses, therapists, residents, and administration developed a clinical procedure for VLA treatment. The procedure was evaluated using FMEA. Failure modes were identified and scored by severity, occurrence, and detection. The clinical procedure was revised to address high-scoring process nodes. Results: Several key components were added to the clinical procedure to minimize risk priority numbers (RPNs): (1) Treatments are reviewed at weekly QA rounds, where physicians discuss diagnosis, prescription, applicator selection, and set-up; peer review reduces the likelihood of an inappropriate treatment regime. (2) A template for HDR skin treatments was established in the clinical EMR system to standardize treatment instructions; this reduces the chances of miscommunication between the physician and planning physicist, and increases the detectability of an error during the physics second check. (3) A screen check was implemented during the second check to increase detectability of an error. (4) To reduce error probability, the treatment plan worksheet was designed to display plan parameters in a format visually similar to the treatment console display; this facilitates data entry and verification. (5) VLAs are color-coded and labeled to match the EMR prescriptions, which simplifies in-room selection and verification. Conclusion: Multidisciplinary planning and FMEA increased detectability and reduced error probability during VLA HDR brachytherapy. This clinical model may be useful to institutions implementing similar procedures.
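FMEA scoring as used above multiplies three ordinal ratings into a risk priority number, RPN = severity × occurrence × detectability, and attention is focused on the highest-scoring failure modes. The failure modes and scores below are invented placeholders showing only the bookkeeping, not the team's actual worksheet.

```python
# Minimal FMEA bookkeeping: RPN = severity * occurrence * detectability,
# each scored on a 1-10 ordinal scale. Failure modes and scores are hypothetical.
failure_modes = [
    # (description,                                  severity, occurrence, detectability)
    ("wrong applicator size selected",                      8,          3,             4),
    ("dwell time typed incorrectly at console",             9,          2,             3),
    ("prescription miscommunicated to physicist",           9,          2,             5),
    ("applicator mispositioned on the skin lesion",         7,          4,             4),
]

scored = [(desc, s * o * d) for desc, s, o, d in failure_modes]
for desc, rpn in sorted(scored, key=lambda item: item[1], reverse=True):
    print(f"RPN {rpn:3d}  {desc}")
```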
Calculating radiotherapy margins based on Bayesian modelling of patient specific random errors
NASA Astrophysics Data System (ADS)
Herschtal, A.; te Marvelde, L.; Mengersen, K.; Hosseinifard, Z.; Foroudi, F.; Devereux, T.; Pham, D.; Ball, D.; Greer, P. B.; Pichler, P.; Eade, T.; Kneebone, A.; Bell, L.; Caine, H.; Hindson, B.; Kron, T.
2015-02-01
Collected real-life clinical target volume (CTV) displacement data show that some patients undergoing external beam radiotherapy (EBRT) demonstrate significantly more fraction-to-fraction variability in their displacement (‘random error’) than others. This contrasts with the common assumption made by historical recipes for margin estimation for EBRT, that the random error is constant across patients. In this work we present statistical models of CTV displacements in which random errors are characterised by an inverse gamma (IG) distribution in order to assess the impact of random error variability on CTV-to-PTV margin widths, for eight real world patient cohorts from four institutions, and for different sites of malignancy. We considered a variety of clinical treatment requirements and penumbral widths. The eight cohorts consisted of a total of 874 patients and 27 391 treatment sessions. Compared to a traditional margin recipe that assumes constant random errors across patients, for a typical 4 mm penumbral width, the IG based margin model mandates that in order to satisfy the common clinical requirement that 90% of patients receive at least 95% of prescribed RT dose to the entire CTV, margins be increased by a median of 10% (range over the eight cohorts -19% to +35%). This substantially reduces the proportion of patients for whom margins are too small to satisfy clinical requirements.
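A compact way to see the effect described above is to draw each simulated patient's random-error SD from an inverse gamma distribution rather than fixing it, and then check by simulation what fraction of patients keep the CTV edge at 95% dose for a given margin. The sketch below does this in one dimension for a single CTV edge with an error-function penumbra; the IG parameters, penumbra width, fraction number, and dose model are illustrative stand-ins for the paper's fitted models, so the absolute margins it prints should not be read as clinical values.

```python
import numpy as np
from scipy.stats import norm, invgamma

rng = np.random.default_rng(9)

N_PATIENTS, N_FRACTIONS = 2_000, 30
SYSTEMATIC_SD_MM = 2.0            # Sigma: SD of per-patient systematic error
PENUMBRA_SD_MM = 3.2              # error-function dose falloff near the field edge
IG_SHAPE, IG_SCALE = 5.0, 8.0     # inverse-gamma model of per-patient random-error variance

def fraction_of_patients_covered(margin_mm, random_sds_mm):
    """Fraction of simulated patients whose CTV-edge dose, accumulated over all
    fractions, stays at or above 95% of prescription (1D, one CTV edge only).
    The planned 95% isodose is placed `margin_mm` beyond the CTV edge."""
    covered = 0
    for sd in random_sds_mm:
        shift = rng.normal(0.0, SYSTEMATIC_SD_MM) + rng.normal(0.0, sd, N_FRACTIONS)
        dose = norm.cdf((margin_mm - shift) / PENUMBRA_SD_MM + norm.ppf(0.95)).mean()
        covered += dose >= 0.95
    return covered / len(random_sds_mm)

# Patient-specific random-error SDs: constant versus inverse-gamma-distributed
# variances with the same mean variance.
ig_variances = invgamma.rvs(IG_SHAPE, scale=IG_SCALE, size=N_PATIENTS, random_state=rng)
ig_sds = np.sqrt(ig_variances)
constant_sds = np.full(N_PATIENTS, np.sqrt(ig_variances.mean()))

for margin in (3.0, 4.0, 5.0, 6.0, 7.0):
    c = fraction_of_patients_covered(margin, constant_sds)
    v = fraction_of_patients_covered(margin, ig_sds)
    print(f"margin {margin:.0f} mm: {c:.2f} of patients covered (constant sigma) "
          f"vs {v:.2f} (inverse-gamma sigma)")
```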
Roshan, Noor-Mohammed; Sakeenabi, Basha
2011-11-01
The objectives of this clinical study were to: evaluate the survival of occlusal atraumatic restorative treatment (ART) restorations, on a longitudinal basis, in the primary molars of children; and compare the success rate of ART restorations placed in a school environment and in a hospital dental setup. One dentist placed 120 ART restorations in 60 five- to seven-year-olds who had bilateral matched pairs of carious primary molars. A split-mouth design was used to place restorations in school and in hospital dental setup, which were assigned randomly to contralateral sides. Restorations were evaluated after 6 and 12 months using the ART criteria. The survival rate of ART restorations placed in the school environment was 82.2% at the 6-month assessment and 77.77% at the 12-month assessment. The success rates of ART restorations placed in the hospital dental setup in the 2 assessments were 87.7% and 81.48%, respectively. There was no statistically significant difference between the ART restorations placed in the school environment and the hospital dental setup in either assessment (P > 0.05). The main cause of failure was loss of the restoration. The one-year success rate of occlusal ART restorations in primary molars was moderate. ART restorations done in a hospital dental setup were not proven to be better than those placed in a school environment.
Particle Tracking on the BNL Relativistic Heavy Ion Collider
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dell, G. F.
1986-08-07
Tracking studies including the effects of random multipole errors, as well as the combined effects of random and systematic multipole errors, have been made for RHIC. Initial results for operating at an off-diagonal working point are discussed.
Simulation of wave propagation in three-dimensional random media
NASA Technical Reports Server (NTRS)
Coles, William A.; Filice, J. P.; Frehlich, R. G.; Yadlowsky, M.
1993-01-01
A quantitative error analysis for simulation of wave propagation in three-dimensional random media, assuming narrow angular scattering, is presented for the plane-wave and spherical-wave geometries. This includes the errors resulting from finite grid size, finite simulation dimensions, and the separation of the two-dimensional screens along the propagation direction. Simple error scalings are determined for power-law spectra of the random refractive index of the media. The effects of a finite inner scale are also considered. The spatial spectra of the intensity errors are calculated and compared to the spatial spectra of intensity. The numerical requirements for a simulation of given accuracy are determined for realizations of the field. The numerical requirements for accurate estimation of higher moments of the field are less stringent.
Measurement-device-independent quantum key distribution for Scarani-Acin-Ribordy-Gisin 04 protocol
Mizutani, Akihiro; Tamaki, Kiyoshi; Ikuta, Rikizo; Yamamoto, Takashi; Imoto, Nobuyuki
2014-01-01
The measurement-device-independent quantum key distribution (MDI QKD) was proposed to make BB84 completely free from any side channel in detectors. As in prepare-and-measure QKD, the use of other protocols in the MDI setting would be advantageous in some practical situations. In this paper, we consider the SARG04 protocol in the MDI setting. The prepare-and-measure SARG04 protocol is proven to be able to generate a key up to two-photon emission events. In the MDI setting, we show that key generation is possible from events with single- or two-photon emission by one party and single-photon emission by the other party, but the two-photon emission event by both parties cannot contribute to the key generation. In contrast to the prepare-and-measure SARG04 protocol, where the experimental setup is exactly the same as BB84, the measurement setup for SARG04 in the MDI setting cannot be the same as that for BB84, since the measurement setup for BB84 in the MDI setting induces too many bit errors. To overcome this problem, we propose two alternative experimental setups, and we simulate the resulting key rate. Our study highlights the requirements that MDI QKD poses on us regarding the implementation of a variety of QKD protocols. PMID:24913431
NASA Astrophysics Data System (ADS)
Sandri, P.
2017-12-01
The paper describes the alignment technique developed for the wavefront error measurement of ellipsoidal mirrors presenting a central hole. When the mirrors are uncoated, a good alignment in a classic setup at the finite conjugates cannot be based on identifying the retro-reflected spot from the mirror under test by eye, as the intensity of the retro-reflected spot is ≈1E-3 of the intensity of the injected laser beam of the interferometer. We present the technique developed to achieve an accurate alignment in the finite-conjugate setup even under low-intensity conditions, based on the use of an autocollimator adjustable in focus position and a small polished flat surface on the rear side of the mirror. The alignment technique has been used successfully for the optical test of the concave ellipsoidal mirrors of the METIS coronagraph of the ESA Solar Orbiter mission. The presented method is advantageous in terms of precision and of time saving, also when the mirrors are reflective-coated and integrated into their mechanical hardware.
Is ExacTrac x-ray system an alternative to CBCT for positioning patients with head and neck cancers?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clemente, Stefania; Chiumento, Costanza; Fiorentino, Alba
Purpose: To evaluate the usefulness of a six-degrees-of-freedom (6D) correction using the ExacTrac robotics system in patients with head-and-neck (HN) cancer receiving radiation therapy. Methods: Local setup accuracy was analyzed for 12 patients undergoing intensity-modulated radiation therapy (IMRT). Patient position was imaged daily under two different protocols: cone-beam computed tomography (CBCT) and ExacTrac (ET) image correction. Setup data from either approach were compared in terms of both residual errors after correction and punctual displacement of selected regions of interest (mandible, C2, and C6 vertebral bodies). Results: On average, both protocols achieved reasonably low residual errors after initial correction. The observed differences in shift vectors between the two protocols showed that CBCT tends to weight C2 and C6 more heavily at the expense of the mandible, while ET tends to average differences among the different ROIs. Conclusions: CBCT, even without 6D correction capabilities, seems preferable to ET because of more consistent alignment and the capability to see soft tissues. Therefore, in our experience, CBCT represents a benchmark for positioning head and neck cancer patients.
Short-Range Six-Axis Interferometer Controlled Positioning for Scanning Probe Microscopy
Lazar, Josef; Klapetek, Petr; Valtr, Miroslav; Hrabina, Jan; Buchta, Zdenek; Cip, Onrej; Cizek, Martin; Oulehla, Jindrich; Sery, Mojmir
2014-01-01
We present a design of a nanometrology measuring setup which is a part of the national standard instrumentation for nanometrology operated by the Czech Metrology Institute (CMI) in Brno, Czech Republic. The system employs a full six-axis interferometric position measurement of the sample holder consisting of six independent interferometers. Here we describe the alignment issues and the accurate adjustment of orthogonality of the measuring axes. Consequently, suppression of cosine errors and reduced sensitivity to Abbe offset are achieved through full control in all six degrees of freedom. Owing to the geometric configuration, including a wide basis of the two units measuring in the y-direction and the three measuring in the z-direction, the angular resolution of the whole setup is reduced to tens of nanoradians. Moreover, the servo control of all six degrees of freedom allows guidance errors to be kept below 100 nrad. This small-range system is based on a commercial nanopositioning stage driven by piezoelectric transducers with a range of (200 × 200 × 10) μm. Thermally compensated miniature interferometric units with fiber-optic light delivery and an integrated homodyne detection system were developed especially for this system and serve as sensors for orthogonality alignment. PMID:24451463
Reyes, Jeanette M; Xu, Yadong; Vizuete, William; Serre, Marc L
2017-01-01
The regulatory Community Multiscale Air Quality (CMAQ) model is a means of understanding the sources, concentrations and regulatory attainment of air pollutants within a model's domain. Substantial resources are allocated to the evaluation of model performance. The Regionalized Air quality Model Performance (RAMP) method introduced here explores novel ways of visualizing and evaluating CMAQ model performance and errors for daily Particulate Matter ≤ 2.5 micrometers (PM2.5) concentrations across the continental United States. The RAMP method performs a non-homogeneous, non-linear, non-homoscedastic model performance evaluation at each CMAQ grid. This work demonstrates that CMAQ model performance, for a well-documented 2001 regulatory episode, is non-homogeneous across space/time. The RAMP correction of systematic errors outperforms other model evaluation methods, as demonstrated by a 22.1% reduction in Mean Square Error compared to a constant domain-wide correction. The RAMP method is able to accurately reproduce simulated performance with a correlation of r = 76.1%. Most of the error coming from CMAQ is random error, with only a minority of the error being systematic. Areas of high systematic error are collocated with areas of high random error, implying both error types originate from similar sources. Therefore, addressing underlying causes of systematic error will have the added benefit of also addressing underlying causes of random error.
Kobler, Jan-Philipp; Schoppe, Michael; Lexow, G Jakob; Rau, Thomas S; Majdani, Omid; Kahrs, Lüder A; Ortmaier, Tobias
2014-11-01
Minimally invasive cochlear implantation is a surgical technique which requires drilling a canal from the mastoid surface toward the basal turn of the cochlea. The choice of an appropriate drilling strategy is hypothesized to have significant influence on the achievable targeting accuracy. Therefore, a method is presented to analyze the contribution of the drilling process and drilling tool to the targeting error isolated from other error sources. The experimental setup to evaluate the borehole accuracy comprises a drill handpiece attached to a linear slide as well as a highly accurate coordinate measuring machine (CMM). Based on the specific requirements of the minimally invasive cochlear access, three drilling strategies, mainly characterized by different drill tools, are derived. The strategies are evaluated by drilling into synthetic temporal bone substitutes containing air-filled cavities to simulate mastoid cells. Deviations from the desired drill trajectories are determined based on measurements using the CMM. Using the experimental setup, a total of 144 holes were drilled for accuracy evaluation. Errors resulting from the drilling process depend on the specific geometry of the tool as well as the angle at which the drill contacts the bone surface. Furthermore, there is a risk of the drill bit deflecting due to synthetic mastoid cells. A single-flute gun drill combined with a pilot drill of the same diameter provided the best results for simulated minimally invasive cochlear implantation, based on an experimental method that may be used for testing further drilling process improvements.
A framework for multi-criteria assessment of model enhancements
NASA Astrophysics Data System (ADS)
Francke, Till; Foerster, Saskia; Brosinsky, Arlena; Delgado, José; Güntner, Andreas; López-Tarazón, José A.; Bronstert, Axel
2016-04-01
Modellers are often faced with unsatisfactory model performance for a specific setup of a hydrological model. In these cases, the modeller may try to improve the setup by addressing selected causes of the model errors (i.e. data errors, structural errors). This leads to adding certain "model enhancements" (MEs), e.g. climate data based on more monitoring stations, improved calibration data, or modifications in process formulations. However, deciding on which MEs to implement remains a matter of expert knowledge, guided by some sensitivity analysis at best. When multiple MEs have been implemented, a resulting improvement in model performance is not easily attributed, especially when considering different aspects of this improvement (e.g. better performance dynamics vs. reduced bias). In this study we present an approach for comparing the effect of multiple MEs in the face of multiple improvement aspects. A stepwise selection approach and structured plots help in addressing the multidimensionality of the problem. The approach is applied to a case study, which employs the meso-scale hydrosedimentological model WASA-SED for a sub-humid catchment. The results suggest that the effect of the MEs is quite diverse, with some MEs (e.g. augmented rainfall data) causing improvements for almost all aspects, while the effect of other MEs is restricted to a few aspects or even deteriorates some. These specific results may not be generalizable. However, we suggest that, based on studies like this, identifying the most promising MEs to implement may be facilitated.
Error Distribution Evaluation of the Third Vanishing Point Based on Random Statistical Simulation
NASA Astrophysics Data System (ADS)
Li, C.
2012-07-01
POS, integrating GPS/INS (Inertial Navigation Systems), has allowed rapid and accurate determination of the position and attitude of remote sensing equipment for MMS (Mobile Mapping Systems). However, INS not only has system errors but is also very expensive. Therefore, in this paper the error distributions of vanishing points are studied and tested in order to substitute for INS in MMS in some special land-based scenes, such as ground façades, where usually only two vanishing points can be detected. Thus, the traditional calibration approach based on three orthogonal vanishing points is being challenged. In this article, firstly, the line clusters, which are parallel to each other in object space and correspond to the vanishing points, are detected based on RANSAC (Random Sample Consensus) and a parallelism geometric constraint. Secondly, condition adjustment with parameters is utilized to estimate the nonlinear error equations of two vanishing points (VX, VY). How to set initial weights for the adjustment solution of single-image vanishing points is presented. Vanishing points are solved and their error distributions estimated based on an iterative method with variable weights, the co-factor matrix, and error ellipse theory. Thirdly, under the condition of known error ellipses of two vanishing points (VX, VY), and on the basis of the triangular geometric relationship of the three vanishing points, the error distribution of the third vanishing point (VZ) is calculated and evaluated by random statistical simulation, ignoring camera distortion. Moreover, the Monte Carlo methods utilized for random statistical estimation are presented. Finally, experimental results for the vanishing point coordinates and their error distributions are shown and analyzed.
NASA Astrophysics Data System (ADS)
Smith, V.
2000-11-01
This report documents the development of analytical techniques required for interpreting and comparing space systems electromagnetic interference test data with commercial electromagnetic interference test data using NASA Specification SSP 30237A "Space Systems Electromagnetic Emission and Susceptibility Requirements for Electromagnetic Compatibility." The PSpice computer simulation results and the laboratory measurements for the test setups under study compare well. The study results, however, indicate that the transfer function required to translate test results of one setup to another is highly dependent on cables and their actual layout in the test setup. Since cables are equipment specific and are not specified in the test standards, developing a transfer function that would cover all cable types (random, twisted, or coaxial), sizes (gauge number and length), and layouts (distance from the ground plane) is not practical.
NASA Technical Reports Server (NTRS)
Smith, V.; Minor, J. L. (Technical Monitor)
2000-01-01
This report documents the development of analytical techniques required for interpreting and comparing space systems electromagnetic interference test data with commercial electromagnetic interference test data using NASA Specification SSP 30237A "Space Systems Electromagnetic Emission and Susceptibility Requirements for Electromagnetic Compatibility." The PSpice computer simulation results and the laboratory measurements for the test setups under study compare well. The study results, however, indicate that the transfer function required to translate test results of one setup to another is highly dependent on cables and their actual layout in the test setup. Since cables are equipment specific and are not specified in the test standards, developing a transfer function that would cover all cable types (random, twisted, or coaxial), sizes (gauge number and length), and layouts (distance from the ground plane) is not practical.
A Vision-Based Self-Calibration Method for Robotic Visual Inspection Systems
Yin, Shibin; Ren, Yongjie; Zhu, Jigui; Yang, Shourui; Ye, Shenghua
2013-01-01
A vision-based robot self-calibration method is proposed in this paper to evaluate the kinematic parameter errors of a robot using a visual sensor mounted on its end-effector. This approach can be performed in the industrial field without external, expensive apparatus or an elaborate setup. A robot Tool Center Point (TCP) is defined in the structural model of a line-structured laser sensor and aligned to a reference point fixed in the robot workspace. A mathematical model is established to formulate the misalignment errors in terms of kinematic parameter errors and TCP position errors. Based on the fixed-point constraints, the kinematic parameter errors and TCP position errors are identified with an iterative algorithm. Compared to conventional methods, the proposed method eliminates the need for robot base-frame and hand-eye calibrations, shortens the error propagation chain, and makes the calibration process more accurate and convenient. A validation experiment is performed on an ABB IRB2400 robot. An optimal configuration for the number and distribution of fixed points in the robot workspace is obtained based on the experimental results. Comparative experiments reveal a significant improvement in the measuring accuracy of the robotic visual inspection system. PMID:24300597
A two-factor error model for quantitative steganalysis
NASA Astrophysics Data System (ADS)
Böhme, Rainer; Ker, Andrew D.
2006-02-01
Quantitative steganalysis refers to the exercise not only of detecting the presence of hidden stego messages in carrier objects, but also of estimating the secret message length. This problem is well studied, with many detectors proposed but only a sparse analysis of errors in the estimators. A deep understanding of the error model, however, is a fundamental requirement for the assessment and comparison of different detection methods. This paper presents a rationale for a two-factor model for sources of error in quantitative steganalysis, and shows evidence from a dedicated large-scale nested experimental set-up with a total of more than 200 million attacks. Apart from general findings about the distribution functions found in both classes of errors, their respective weight is determined, and implications for statistical hypothesis tests in benchmarking scenarios or regression analyses are demonstrated. The results are based on a rigorous comparison of five different detection methods under many different external conditions, such as size of the carrier, previous JPEG compression, and colour channel selection. We include analyses demonstrating the effects of local variance and cover saturation on the different sources of error, as well as presenting the case for a relative bias model for between-image error.
Accuracy of Robotic Radiosurgical Liver Treatment Throughout the Respiratory Cycle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winter, Jeff D.; Wong, Raimond; Swaminath, Anand
Purpose: To quantify random uncertainties in robotic radiosurgical treatment of liver lesions with real-time respiratory motion management. Methods and Materials: We conducted a retrospective analysis of 27 liver cancer patients treated with robotic radiosurgery over 118 fractions. The robotic radiosurgical system uses orthogonal x-ray images to determine internal target position and correlates this position with an external surrogate to provide robotic corrections of linear accelerator positioning. Verification and update of this internal–external correlation model was achieved using periodic x-ray images collected throughout treatment. To quantify random uncertainties in targeting, we analyzed logged tracking information and isolated x-ray images collected immediately before beam delivery. For translational correlation errors, we quantified the difference between correlation model–estimated target position and actual position determined by periodic x-ray imaging. To quantify prediction errors, we computed the mean absolute difference between the predicted coordinates and actual modeled position calculated 115 milliseconds later. We estimated overall random uncertainty by quadratically summing correlation, prediction, and end-to-end targeting errors. We also investigated relationships between tracking errors and motion amplitude using linear regression. Results: The 95th percentile absolute correlation errors in each direction were 2.1 mm left–right, 1.8 mm anterior–posterior, 3.3 mm cranio–caudal, and 3.9 mm 3-dimensional radial, whereas 95th percentile absolute radial prediction errors were 0.5 mm. Overall 95th percentile random uncertainty was 4 mm in the radial direction. Prediction errors were strongly correlated with modeled target amplitude (r=0.53-0.66, P<.001), whereas only weak correlations existed for correlation errors. Conclusions: Study results demonstrate that model correlation errors are the primary random source of uncertainty in Cyberknife liver treatment and, unlike prediction errors, are not strongly correlated with target motion amplitude. Aggregate 3-dimensional radial position errors presented here suggest the target will be within 4 mm of the target volume for 95% of the beam delivery.
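The overall uncertainty quoted above comes from summing the error components in quadrature. A minimal sketch (Python; the end-to-end component below is an assumed placeholder, not a value reported in the abstract):

```python
import math

# Hypothetical per-component 95th-percentile errors (mm); only the first two come from the abstract.
correlation_error = 3.9   # correlation-model error, 3D radial
prediction_error = 0.5    # predictor error, 3D radial
end_to_end_error = 0.95   # system end-to-end targeting error (assumed for illustration)

# Quadrature (root-sum-square) combination of independent random error sources
overall = math.sqrt(correlation_error**2 + prediction_error**2 + end_to_end_error**2)
print(f"overall random uncertainty ≈ {overall:.1f} mm")  # ≈ 4 mm with these inputs
```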
Gossip and Distributed Kalman Filtering: Weak Consensus Under Weak Detectability
NASA Astrophysics Data System (ADS)
Kar, Soummya; Moura, José M. F.
2011-04-01
The paper presents the gossip interactive Kalman filter (GIKF) for distributed Kalman filtering for networked systems and sensor networks, where inter-sensor communication and observations occur at the same time-scale. The communication among sensors is random; each sensor occasionally exchanges its filtering state information with a neighbor depending on the availability of the appropriate network link. We show that under a weak distributed detectability condition: 1. the GIKF error process remains stochastically bounded, irrespective of the instability properties of the random process dynamics; and 2. the network achieves weak consensus, i.e., the conditional estimation error covariance at a (uniformly) randomly selected sensor converges in distribution to a unique invariant measure on the space of positive semi-definite matrices (independent of the initial state). To prove these results, we interpret the filtered states (estimates and error covariances) at each node in the GIKF as stochastic particles with local interactions. We analyze the asymptotic properties of the error process by studying as a random dynamical system the associated switched (random) Riccati equation, the switching being dictated by a non-stationary Markov chain on the network graph.
MEMS deformable mirror for wavefront correction of large telescopes
NASA Astrophysics Data System (ADS)
Manhart, Sigmund; Vdovin, Gleb; Collings, Neil; Sodnik, Zoran; Nikolov, Susanne; Hupfer, Werner
2017-11-01
A 50 mm diameter membrane mirror was designed and manufactured at TU Delft. It is made from bulk silicon by micromachining - a technology primarily used for micro-electromechanical systems (MEMS). The mirror unit is equipped with 39 actuator electrodes and can be electrostatically deformed to correct wavefront errors in optical imaging systems. Performance tests on the deformable mirror were carried out at Astrium GmbH using a breadboard setup with a wavefront sensor and a closed-loop control system. It was found that the deformable membrane mirror is well suited for correction of low-order wavefront errors such as must be expected in lightweighted space telescopes.
What errors do peer reviewers detect, and does training improve their ability to detect them?
Schroter, Sara; Black, Nick; Evans, Stephen; Godlee, Fiona; Osorio, Lyda; Smith, Richard
2008-10-01
To analyse data from a trial and report the frequencies with which major and minor errors are detected at a general medical journal, the types of errors missed and the impact of training on error detection. 607 peer reviewers at the BMJ were randomized to two intervention groups receiving different types of training (face-to-face training or a self-taught package) and a control group. Each reviewer was sent the same three test papers over the study period, each of which had nine major and five minor methodological errors inserted. BMJ peer reviewers. The quality of review, assessed using a validated instrument, and the number and type of errors detected before and after training. The number of major errors detected varied over the three papers. The interventions had small effects. At baseline (Paper 1) reviewers found an average of 2.58 of the nine major errors, with no notable difference between the groups. The mean number of errors reported was similar for the second and third papers, 2.71 and 3.0, respectively. Biased randomization was the error detected most frequently in all three papers, with over 60% of reviewers who rejected the papers identifying this error. Reviewers who did not reject the papers found fewer errors, and the proportion finding biased randomization was less than 40% for each paper. Editors should not assume that reviewers will detect most major errors, particularly those concerned with the context of the study. Short training packages have only a slight impact on improving error detection.
An analytic technique for statistically modeling random atomic clock errors in estimation
NASA Technical Reports Server (NTRS)
Fell, P. J.
1981-01-01
Minimum variance estimation requires that the statistics of random observation errors be modeled properly. If measurements are derived through the use of atomic frequency standards, then one source of error affecting the observable is random fluctuation in frequency. This is the case, for example, with range and integrated Doppler measurements from satellites of the Global Positioning System and with baseline determination for geodynamic applications. An analytic method is presented which approximates the statistics of this random process. The procedure starts with a model of the Allan variance for a particular oscillator and develops the statistics of range and integrated Doppler measurements. A series of five first-order Markov processes is used to approximate the power spectral density obtained from the Allan variance.
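The final step, approximating correlated clock noise by a sum of first-order Markov processes, can be sketched as follows (Python). The correlation times and amplitudes below are arbitrary placeholders, whereas the paper derives them from the oscillator's Allan variance:

```python
import numpy as np

rng = np.random.default_rng(1)

dt = 1.0                                 # sample interval (s)
n = 10000                                # number of samples
taus = [1e1, 1e2, 1e3, 1e4, 1e5]         # correlation times of the five processes (assumed)
sigmas = [1.0, 0.8, 0.6, 0.4, 0.2]       # steady-state SDs of each process (assumed)

clock_error = np.zeros(n)
for tau, sigma in zip(taus, sigmas):
    phi = np.exp(-dt / tau)              # AR(1) coefficient of a first-order Markov process
    q = sigma * np.sqrt(1.0 - phi**2)    # driving-noise SD that preserves the steady-state variance
    x = np.zeros(n)
    for k in range(1, n):
        x[k] = phi * x[k - 1] + q * rng.normal()
    clock_error += x                     # superpose the five processes

# clock_error now mimics time-correlated clock error that could be added to range observables
print(clock_error[:5])
```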
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matney, J; Lian, J; Chera, B
2015-06-15
Introduction: Geometric uncertainties in daily patient setup can lead to variations in the planned dose, especially when using highly conformal techniques such as helical Tomotherapy. To account for the potential effect of geometric uncertainty, our clinical practice is to expand critical structures by a 3 mm margin into planning risk volumes (PRV). The PRV concept assumes the spatial dose cloud is insensitive to patient positioning. However, no tools currently exist to determine if a Tomotherapy plan is robust to the effects of daily setup variation. We objectively quantified the impact of geometric uncertainties on the 3D doses to critical normal tissues during helical Tomotherapy. Methods: Using a Matlab-based program created and validated by Accuray (Madison, WI), the planned Tomotherapy delivery sinogram was used to recalculate dose on shifted CT datasets. Ten head and neck patients were selected for analysis. To simulate setup uncertainty, the patient anatomy was shifted ±3 mm along the longitudinal, lateral and vertical axes. For each potential shift, the recalculated doses to various critical normal tissues were compared to the doses delivered to the PRV in the original plan. Results: 18 shifted scenarios created from Tomotherapy plans for three patients with head and neck cancers were analyzed. For all simulated setup errors, the maximum doses to the brainstem, spinal cord, parotids and cochlea were within 0.6 Gy of the respective original PRV maximum. Despite 3 mm setup shifts, the minimum dose delivered to 95% of the CTVs and PTVs was always within 0.4 Gy of the original plan. Conclusions: For head and neck sites treated with Tomotherapy, the use of a 3 mm PRV expansion provides a reasonable estimate of the dosimetric effects of 3 mm setup uncertainties. Similarly, target coverage appears minimally affected by a 3 mm setup uncertainty. Data from a larger number of patients will be presented. Future work will include other anatomical sites.
SU-F-P-23: Setup Uncertainties for the Lung Stereotactic Body Radiation Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Q; Vigneri, P; Madu, C
2016-06-15
Purpose: The ExacTrac X-ray system with six-degree-of-freedom (6DoF) adjustment capability can be used for setup of lung stereotactic body radiation therapy. The setup uncertainties from the ExacTrac 6D system were analyzed. Methods: The ExacTrac X-ray 6D image-guided radiotherapy system is used in our clinic. The system is an integration of 2 subsystems: (1) an infrared-based optical positioning system and (2) a radiographic kV x-ray imaging system. The infrared system monitors reflective body markers on the patient's skin to assist in the initial setup. The radiographic kV devices were used for patient position verification and adjustment. The position verification was made by fusing the radiographs with the digitally reconstructed radiograph (DRR) images generated from the simulation CT images using 6DoF fusion algorithms. Those results were recorded in our system. Gaussian functions were used to fit the data. Results: For 37 lung SBRT patients, the image registration results for the initial setup using surface markers and for the verifications were measured. The results were analyzed for 143 treatments. The mean values for the lateral, longitudinal and vertical directions were 0.1, 0.3 and 0.3 mm, respectively. The standard deviations for the lateral, longitudinal and vertical directions were 0.62, 0.78 and 0.75 mm, respectively. The mean values for the rotations around the lateral, longitudinal and vertical directions were 0.1, 0.2 and 0.4 degrees, respectively, with standard deviations of 0.36, 0.34, and 0.42 degrees. Conclusion: The setup uncertainties for lung SBRT cases using the ExacTrac 6D system were analyzed. The standard deviations of the setup errors were within 1 mm for all three directions, and the standard deviations of the rotations were within 0.5 degree.
NASA Technical Reports Server (NTRS)
Ingels, F. M.; Mo, C. D.
1978-01-01
An empirical study of the performance of Viterbi decoders in bursty channels was carried out, and an improved algebraic decoder for nonsystematic codes was developed. The hybrid algorithm was simulated for the (2,1), k = 7 code on a computer using 20 channels having various error statistics, ranging from purely random errors to purely bursty channels. The hybrid system outperformed both the algebraic and the Viterbi decoders in every case except the 1% random-error channel, where the Viterbi decoder had one fewer bit decoding error.
Error threshold for color codes and random three-body Ising models.
Katzgraber, Helmut G; Bombin, H; Martin-Delgado, M A
2009-08-28
We study the error threshold of color codes, a class of topological quantum codes that allow a direct implementation of quantum Clifford gates suitable for entanglement distillation, teleportation, and fault-tolerant quantum computation. We map the error-correction process onto a statistical mechanical random three-body Ising model and study its phase diagram via Monte Carlo simulations. The obtained error threshold of p(c) = 0.109(2) is very close to that of Kitaev's toric code, showing that enhanced computational capabilities do not necessarily imply lower resistance to noise.
Effect of random errors in planar PIV data on pressure estimation in vortex dominated flows
NASA Astrophysics Data System (ADS)
McClure, Jeffrey; Yarusevych, Serhiy
2015-11-01
The sensitivity of pressure estimation techniques from Particle Image Velocimetry (PIV) measurements to random errors in measured velocity data is investigated using the flow over a circular cylinder as a test case. Direct numerical simulations are performed for ReD = 100, 300 and 1575, spanning laminar, transitional, and turbulent wake regimes, respectively. A range of random errors typical for PIV measurements is applied to synthetic PIV data extracted from numerical results. A parametric study is then performed using a number of common pressure estimation techniques. Optimal temporal and spatial resolutions are derived based on the sensitivity of the estimated pressure fields to the simulated random error in velocity measurements, and the results are compared to an optimization model derived from error propagation theory. It is shown that the reductions in spatial and temporal scales at higher Reynolds numbers lead to notable changes in the optimal pressure evaluation parameters. The effect of smaller scale wake structures is also quantified. The errors in the estimated pressure fields are shown to depend significantly on the pressure estimation technique employed. The results are used to provide recommendations for the use of pressure and force estimation techniques from experimental PIV measurements in vortex dominated laminar and turbulent wake flows.
ERIC Educational Resources Information Center
Quarm, Daisy
1981-01-01
Findings for couples (N=119) show that the low between-spouse correlations for wife's work, money, and spare time are due in part to random measurement error. Suggests that increasing the reliability of measures by creating multi-item indices can also increase correlations. Car purchase, vacation, and child discipline were not accounted for by random measurement…
Sensitivity analysis of periodic errors in heterodyne interferometry
NASA Astrophysics Data System (ADS)
Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony
2011-03-01
Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors.
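For readers unfamiliar with periodic error, a minimal illustrative model (Python) of first- and second-order periodic errors superimposed on a true displacement is shown below. This is not the analytical model used in the study; the amplitudes, phases, wavelength, and single-pass fringe period are assumed purely for illustration:

```python
import numpy as np

lam = 0.633e-6                         # laser wavelength (m); HeNe assumed
x_true = np.linspace(0, 2e-6, 2000)    # true displacement (m)

# Assumed periodic-error amplitudes (m); real values depend on the optical imperfections listed above.
A1, A2 = 2e-9, 0.5e-9
phase1, phase2 = 0.3, 1.1              # arbitrary phases

# First-order error cycles once per displacement fringe (lambda/2 for a single-pass interferometer),
# second-order error cycles twice per fringe.
fringe = lam / 2.0
x_meas = (x_true
          + A1 * np.sin(2 * np.pi * x_true / fringe + phase1)
          + A2 * np.sin(4 * np.pi * x_true / fringe + phase2))

periodic_error = x_meas - x_true       # cyclical, non-cumulative deviation about the true value
print(f"peak periodic error ≈ {periodic_error.max() * 1e9:.2f} nm")
```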
Quantum random number generator based on quantum nature of vacuum fluctuations
NASA Astrophysics Data System (ADS)
Ivanova, A. E.; Chivilikhin, S. A.; Gleim, A. V.
2017-11-01
A quantum random number generator (QRNG) allows obtaining true random bit sequences. In a QRNG based on the quantum nature of vacuum, an optical beam splitter with two inputs and two outputs is normally used. We compare the mathematical descriptions of a spatial beam splitter and a fiber Y-splitter in the quantum model for a QRNG based on homodyne detection. These descriptions are identical, which allows fiber Y-splitters to be used in practical QRNG schemes, simplifying the setup. We also derive relations between the input radiation and the resulting differential current in the homodyne detector. We experimentally demonstrate the possibility of true random bit generation using a QRNG based on homodyne detection with a Y-splitter.
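A toy sketch of the bit-extraction idea (Python): for vacuum input, the homodyne differential current is Gaussian-distributed around zero, so raw bits can be taken from its sign and then debiased. The von Neumann debiasing step is an assumption for illustration, not necessarily the post-processing used by the authors:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for the homodyne differential current: for vacuum input the measured quadrature
# is Gaussian around zero (detector gain and units are arbitrary here).
samples = rng.normal(loc=0.0, scale=1.0, size=100000)

# Simplest extraction: one raw bit per sample from the sign of the quadrature.
raw_bits = (samples > 0).astype(np.uint8)

# Von Neumann debiasing: keep the first bit of each unequal pair, discard equal pairs.
pairs = raw_bits[: len(raw_bits) // 2 * 2].reshape(-1, 2)
keep = pairs[:, 0] != pairs[:, 1]
unbiased_bits = pairs[keep, 0]

print(f"raw bits: {raw_bits.size}, unbiased bits: {unbiased_bits.size}")
```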
Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?
NASA Technical Reports Server (NTRS)
Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan
2013-01-01
The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as non-constant variance resulting from systematic errors leaking into random errors, and a lack of prediction capability. Therefore, the multiplicative error model is a better choice.
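To make the additive/multiplicative distinction concrete, the sketch below (Python, synthetic data only) generates precipitation estimates with a multiplicative error structure and characterizes them under both models: the additive fit exhibits the non-constant variance noted above, while the log-space multiplicative fit recovers clean systematic parameters. The functional form Y = α X^β exp(ε) is a common choice for multiplicative precipitation error models and is assumed here, not taken from the letter:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic "truth" for daily precipitation (mm/day), spanning a large dynamic range.
truth = rng.gamma(shape=0.5, scale=8.0, size=5000)

# Multiplicative error structure: estimate = alpha * truth^beta * exp(eps).
alpha_true, beta_true, sigma_eps = 1.2, 0.9, 0.4
est = alpha_true * truth**beta_true * np.exp(rng.normal(0.0, sigma_eps, truth.size))

wet = truth > 0.1                       # log-space fit needs strictly positive values

# Additive model: est = truth + e  ->  systematic part = mean(e), random part = std(e)
e_add = est[wet] - truth[wet]

# Multiplicative model: log(est) = log(alpha) + beta*log(truth) + eps, fit by least squares
X = np.vstack([np.ones(wet.sum()), np.log(truth[wet])]).T
coef, *_ = np.linalg.lstsq(X, np.log(est[wet]), rcond=None)
resid = np.log(est[wet]) - X @ coef

print(f"additive model:       bias={e_add.mean():6.2f} mm, random SD={e_add.std():6.2f} mm")
print(f"multiplicative model: alpha={np.exp(coef[0]):.2f}, beta={coef[1]:.2f}, residual SD={resid.std():.2f}")
```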
Quantum-classical boundary for precision optical phase estimation
NASA Astrophysics Data System (ADS)
Birchall, Patrick M.; O'Brien, Jeremy L.; Matthews, Jonathan C. F.; Cable, Hugo
2017-12-01
Understanding the fundamental limits on the precision to which an optical phase can be estimated is of key interest for many investigative techniques utilized across science and technology. We study the estimation of a fixed optical phase shift due to a sample which has an associated optical loss, and compare phase estimation strategies using classical and nonclassical probe states. These comparisons are based on the attainable (quantum) Fisher information calculated per number of photons absorbed or scattered by the sample throughout the sensing process. We find that for a given number of incident photons upon the unknown phase, nonclassical techniques in principle provide less than a 20 % reduction in root-mean-square error (RMSE) in comparison with ideal classical techniques in multipass optical setups. Using classical techniques in a different optical setup that we analyze, which incorporates additional stages of interference during the sensing process, the achievable reduction in RMSE afforded by nonclassical techniques falls to only ≃4 % . We explain how these conclusions change when nonclassical techniques are compared to classical probe states in nonideal multipass optical setups, with additional photon losses due to the measurement apparatus.
Fundamental Limits of Delay and Security in Device-to-Device Communication
2013-01-01
systematic MDS (maximum distance separable) codes and random binning strategies that achieve a Pareto optimal delay-reconstruction tradeoff. The erasure MD ... file, and a coding scheme based on erasure compression and Slepian-Wolf binning is presented. The coding scheme is shown to provide a Pareto optimal ... The erasure MD setup is then used to propose a
NASA Astrophysics Data System (ADS)
Šiaudinytė, Lauryna; Molnar, Gabor; Köning, Rainer; Flügge, Jens
2018-05-01
The versatility of interferometric encoders in industrial applications increases the need to measure several degrees of freedom. A novel grating interferometer containing a commercially available, miniaturized Michelson interferometer and three fibre-fed measurement heads is presented in this paper. Moreover, the arrangement is designed for simultaneous displacement measurements in two perpendicular planes. In the proposed setup, the beam splitters are located in the fibre heads, so the grating is separated from the light source and the photodetector, which would otherwise influence measurement results through generated heat. The operating principle of the proposed system, as well as the error sources influencing the measurement results, are discussed in this paper. Further, the benefits and shortcomings of the setup are presented. A simple Littrow-configuration-based design leads to a compact interferometric encoder suitable for multidimensional measurements.
One-step random mutagenesis by error-prone rolling circle amplification
Fujii, Ryota; Kitaoka, Motomitsu; Hayashi, Kiyoshi
2004-01-01
In vitro random mutagenesis is a powerful tool for altering properties of enzymes. We describe here a novel random mutagenesis method using rolling circle amplification, named error-prone RCA. This method consists of only one DNA amplification step followed by transformation of the host strain, without treatment with any restriction enzymes or DNA ligases, and results in a randomly mutated plasmid library with 3–4 mutations per kilobase. Specific primers or special equipment, such as a thermal-cycler, are not required. This method permits rapid preparation of randomly mutated plasmid libraries, enabling random mutagenesis to become a more commonly used technique. PMID:15507684
An, Zhao; Wen-Xin, Zhang; Zhong, Yao; Yu-Kuan, Ma; Qing, Liu; Hou-Lang, Duan; Yi-di, Shang
2016-06-29
To optimize and simplify the survey method for Oncomelania hupensis snails in marshland endemic regions of schistosomiasis, and to increase the precision, efficiency and economy of the snail survey. A 50 m × 50 m quadrat experimental field was selected in the Chayegang marshland near Henghu farm in the Poyang Lake region, and a whole-coverage method was adopted to survey the snails. The simple random sampling, systematic sampling and stratified random sampling methods were applied to calculate the minimum sample size, relative sampling error and absolute sampling error. The minimum sample sizes for the simple random sampling, systematic sampling and stratified random sampling methods were 300, 300 and 225, respectively. The relative sampling errors of the three methods were all less than 15%. The absolute sampling errors were 0.221 7, 0.302 4 and 0.047 8, respectively. Spatial stratified sampling with altitude as the stratum variable is an efficient approach, with lower cost and higher precision, for the snail survey.
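A small simulation can illustrate why stratifying on altitude reduces sampling error at a smaller sample size (Python; the snail densities and strata below are synthetic, chosen only to mimic a density gradient, while the sample sizes 300 and 225 echo those reported above):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic 50 m x 50 m field divided into 2500 one-metre quadrats, with snail density
# increasing along a hypothetical altitude gradient (purely illustrative numbers).
altitude_band = np.repeat(np.arange(5), 500)       # 5 strata, 500 quadrats each
density = rng.poisson(lam=1 + 2 * altitude_band)   # snails per quadrat

def srs_error(n):
    """Empirical SD of the mean-density estimate under simple random sampling."""
    means = [density[rng.choice(density.size, n, replace=False)].mean() for _ in range(2000)]
    return np.std(means)

def stratified_error(n_per_stratum):
    """Empirical SD of the estimate under stratified random sampling (equal allocation)."""
    means = []
    for _ in range(2000):
        est = 0.0
        for s in range(5):
            idx = np.flatnonzero(altitude_band == s)
            est += density[rng.choice(idx, n_per_stratum, replace=False)].mean() / 5
        means.append(est)
    return np.std(means)

print(f"SRS (n=300):        SD of estimate = {srs_error(300):.3f}")
print(f"Stratified (n=225): SD of estimate = {stratified_error(45):.3f}")
```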
Why a simulation system doesn't match the plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sowell, R.
1998-03-01
Process simulations, or mathematical models, are widely used by plant engineers and planners to obtain a better understanding of a particular process. These simulations are used to answer questions such as: how can the feed rate be increased, how can yields be improved, how can energy consumption be decreased, or how should the available independent variables be set to maximize profit? Although current process simulations are greatly improved over those of the '70s and '80s, there are many reasons why a process simulation doesn't match the plant. Understanding these reasons can assist in using simulations to maximum advantage. The reasons simulations do not match the plant may be placed in three main categories: simulation effects or inherent error, sampling and analysis effects or measurement error, and misapplication effects or set-up error.
Experimental implementation of the Bacon-Shor code with 10 entangled photons
NASA Astrophysics Data System (ADS)
Gimeno-Segovia, Mercedes; Sanders, Barry C.
The number of qubits that can be effectively controlled in quantum experiments is growing, reaching a regime where small quantum error-correcting codes can be tested. The Bacon-Shor code is a simple quantum code that protects against the effect of an arbitrary single-qubit error. In this work, we propose an experimental implementation of said code in a post-selected linear optical setup, similar to the recently reported 10-photon GHZ generation experiment. In the procedure we propose, an arbitrary state is encoded into the protected Shor code subspace, and after undergoing a controlled single-qubit error, is successfully decoded. BCS appreciates financial support from Alberta Innovates, NSERC, China's 1000 Talent Plan and the Institute for Quantum Information and Matter, which is an NSF Physics Frontiers Center(NSF Grant PHY-1125565) with support of the Moore Foundation(GBMF-2644).
The Effect of Defense Contracting Requirements on Just-In-Time Implementation
1988-12-01
and purchasing efforts negatively impacted. The role of I11 contract uncertainty was weakest and had mixed effects. Difficult negotiations prior to ... they recommend differs somewhat. Shingo stresses the use of setup reduction and layout changes early in his sequence, with production leveling occurring ... consciousness toward quality improvement, and use of foolproof mechanisms to prevent errors), higher-level government quality standards stress separate
Balter, James M; Antonuk, Larry E
2008-01-01
In-room radiography is not a new concept for image-guided radiation therapy. Rapid advances in technology, however, have made this positioning method convenient, and thus radiograph-based positioning has propagated widely. The paradigms for quality assurance of radiograph-based positioning include imager performance, systems integration, infrastructure, procedure documentation and testing, and support for positioning strategy implementation.
Stochastic goal-oriented error estimation with memory
NASA Astrophysics Data System (ADS)
Ackmann, Jan; Marotzke, Jochem; Korn, Peter
2017-11-01
We propose a stochastic dual-weighted error estimator for the viscous shallow-water equation with boundaries. For this purpose, previous work on memory-less stochastic dual-weighted error estimation is extended by incorporating memory effects. The memory is introduced by describing the local truncation error as a sum of time-correlated random variables. The random variables themselves represent the temporal fluctuations in local truncation errors and are estimated from high-resolution information at near-initial times. The resulting error estimator is evaluated experimentally in two classical ocean-type experiments, the Munk gyre and the flow around an island. In these experiments, the stochastic process is adapted locally to the respective dynamical flow regime. Our stochastic dual-weighted error estimator is shown to provide meaningful error bounds for a range of physically relevant goals. We prove, as well as show numerically, that our approach can be interpreted as a linearized stochastic-physics ensemble.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, M; Feigenberg, S
Purpose: To evaluate the effectiveness of using 3D surface imaging to guide breath-holding (BH) left-side breast treatment. Methods: Two 3D-surface-image-guided BH procedures were implemented and evaluated: normal BH, taking a BH at a comfortable level, and deep-inspiration breath-holding (DIBH). A total of 20 patients (10 normal-BH and 10 DIBH) were recruited. Patients received a BH evaluation using a commercial 3D surface tracking system (VisionRT, London, UK) to quantify the reproducibility of BH positions prior to CT scan. Tangential 3D/IMRT plans were generated. Patients were initially set up under free-breathing (FB) conditions using the FB surface obtained from the untagged CT to ensure a correct patient position. Patients were then guided to reach the planned BH position using the BH surface obtained from the BH CT. Action levels were set at each phase of the treatment process, based on the information provided by the 3D surface tracking system, for proper interventions (eliminate/re-setup/re-coaching). We reviewed the frequency of interventions to evaluate its effectiveness. FB CBCT and port films were utilized to evaluate the accuracy of 3D-surface-guided setups. Results: 25% of BH candidates with BH positioning uncertainty > 2 mm were eliminated prior to CT scan. For >90% of fractions, based on the setup deltas from the 3D surface tracking system, adjustments of the patient setup were needed after the initial laser-based setup. The accuracy of 3D-surface-guided setup is comparable to CBCT. For the BH guidance, the frequency of interventions (re-coaching/re-setup) was 40% (normal-BH)/91% (DIBH) of treatments for the first 5 fractions and then dropped to 16% (normal-BH)/46% (DIBH). The necessity of re-setup was highly patient-specific for normal BH but highly random among patients for DIBH. Overall, an accuracy of −0.8 ± 2.4 mm was achieved for the anterior pericardial shadow position. Conclusion: 3D surface imaging provides effective intervention in the treatment process and ensures favorable day-to-day setup accuracy. DIBH setup appears to be more uncertain, and this is the patient group that will benefit most from the extra information provided by 3D surface setup.
Amonkar, Priyanka; Mankar, Madhavi Jogesh; Thatkar, Pandurang; Sawardekar, Pradeep; Goel, Rajesh; Anjenaya, Seema
2018-01-01
The traditional concept of the family in India providing support to the elderly is changing with the disintegration of joint families. In this scenario, the concept of old age homes (OAHs) is gaining momentum and the number of people seeking OAH care is rapidly increasing. However, not much is known about the quality of life (QOL) of Indian elderly staying in the OAH setup. The objective was to assess and compare the health status, quality of life and depression of elderly people living in OAHs and within a family setup, using the WHOQOL-OLD questionnaire and the Geriatric Depression Scale (GDS). A cross-sectional study was conducted in elderly people above 60 years of age. After obtaining written consent and matching for age, sex and socioeconomic status, 60 elderly from OAHs and 120 elderly living within a family setup were selected randomly. The WHOQOL-OLD standard questionnaire and the GDS were used to assess quality of life and depression in the elderly. The QOL of the elderly in the domains of autonomy, past, present and future activities, social participation and intimacy was better in the family setup (60.62, 70.62, 66.14 and 58.43) as compared with OAHs (51.35, 62.91, 59.47 and 41.16) (p<0.05). There was a statistically significant difference in the mean geriatric depression scores of the two groups (3.96 within the family setup and 5.76 in OAHs). The quality of life of the elderly within a family setup was better as compared with that of the elderly in OAHs.
NASA Astrophysics Data System (ADS)
Bhattacharyya, Kaustuve; den Boef, Arie; Noot, Marc; Adam, Omer; Grzela, Grzegorz; Fuchs, Andreas; Jak, Martin; Liao, Sax; Chang, Ken; Couraudon, Vincent; Su, Eason; Tzeng, Wilson; Wang, Cathy; Fouquet, Christophe; Huang, Guo-Tsai; Chen, Kai-Hsiung; Wang, Y. C.; Cheng, Kevin; Ke, Chih-Ming; Terng, L. G.
2017-03-01
The optical coupling between gratings in diffraction-based overlay triggers a swing-curve-like response [1, 6] of the target's signal contrast and overlay sensitivity through measurement wavelengths and polarizations. This means there are distinct measurement recipes (wavelength and polarization combinations) for a given target where signal contrast and overlay sensitivity are located on the optimal parts of the swing-curve and can provide accurate and robust measurements. Some of these optimal recipes can be the ideal choices of settings for production. The user has to stay away from the non-optimal recipe choices (located on the undesirable parts of the swing-curve) to avoid overlay measurement errors that can sometimes (depending on the amount of asymmetry and the stack) be on the order of several nm. To accurately identify these optimal operating areas of the swing-curve during an experimental setup, one needs full flexibility in wavelength and polarization choices. In this technical publication, a diffraction-based overlay (DBO) measurement tool with many choices of wavelengths and polarizations is utilized on advanced production stacks to study swing-curves. Results show that, depending on the stack and the presence of asymmetry, the swing behavior can vary significantly, and a solid procedure is needed to identify a recipe during setup that is robust against variations in stack and grating asymmetry. An approach is discussed for using this knowledge of the swing-curve to identify a recipe that is not only accurate at setup but also robust over the wafer and wafer-to-wafer. KPIs are reported at run-time to ensure the quality/accuracy of the reading (basically acting as an error bar on the overlay measurement).
Wetmore, Douglas; Goldberg, Andrew; Gandhi, Nishant; Spivack, John; McCormick, Patrick; DeMaria, Samuel
2016-10-01
Anaesthesiologists work in a high-stress, high-consequence environment in which missed steps in preparation may lead to medical errors and potential patient harm. The pre-anaesthetic induction period has been identified as a time in which medical errors can occur. The Anesthesia Patient Safety Foundation has developed a Pre-Anesthetic Induction Patient Safety (PIPS) checklist. We conducted this study to test the effectiveness of this checklist, when embedded in our institutional Anesthesia Information Management System (AIMS), on resident performance in a simulated environment. Using a randomised, controlled, observer-blinded design, we compared the performance of anaesthesiology residents completing a thorough pre-anaesthetic induction evaluation and setup in a simulated operating room under production pressure, with and without the checklist. The checklist was embedded in the simulated operating room's electronic medical record. Data for 38 anaesthesiology residents show a statistically significant difference in performance in pre-anaesthetic setup and evaluation as scored by blinded raters (maximum score 22 points), with the checklist group performing better by 7.8 points (p<0.01). The effects of gender and year of residency on total score were not significant. Simulation duration (time to anaesthetic agent administration) was increased significantly by the use of the checklist. Required use of a pre-induction checklist improves anaesthesiology resident performance in a simulated environment. The PIPS checklist, as an integrated part of a departmental AIMS, warrants further investigation as a quality measure.
Entropy of space-time outcome in a movement speed-accuracy task.
Hsieh, Tsung-Yu; Pacheco, Matheus Maia; Newell, Karl M
2015-12-01
The experiment reported here was set up to investigate the space-time entropy of movement outcome as a function of a range of spatial (10, 20 and 30 cm) and temporal (250-2500 ms) criteria in a discrete aiming task. Considered separately, the variability and information entropy of the spatial and temporal movement errors respectively increased and decreased on their own dimensions as movement velocity increased. However, the joint space-time entropy was lowest when the relative contributions of the spatial and temporal task criteria were comparable (i.e., the mid-range of space-time constraints), and it increased with a greater trade-off between spatial or temporal task demands, revealing a U-shaped function across the space-time task criteria. The traditional speed-accuracy functions of spatial error and temporal error considered independently mapped onto this joint space-time U-shaped entropy function. The trade-off in movement tasks with joint space-time criteria is between spatial error and timing error, rather than movement speed and accuracy.
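Joint space-time entropy of this kind can be computed directly from binned error samples. A minimal sketch (Python; the per-condition error standard deviations are hypothetical, chosen only to mimic the reported speed-accuracy trade-off, and the bin widths are assumed):

```python
import numpy as np

rng = np.random.default_rng(11)

def joint_entropy(spatial_err, temporal_err):
    """Shannon entropy (bits) of the joint error distribution on a fixed space-time grid."""
    s_bins = np.arange(-45, 46, 2)      # 2 mm spatial bins
    t_bins = np.arange(-250, 251, 10)   # 10 ms temporal bins
    hist, _, _ = np.histogram2d(spatial_err, temporal_err, bins=[s_bins, t_bins])
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Hypothetical error SDs for three speed conditions: faster movements trade timing
# precision for spatial precision (values invented for illustration only).
conditions = {"slow": (2.0, 70.0), "medium": (5.0, 18.0), "fast": (14.0, 9.0)}
for name, (sd_space, sd_time) in conditions.items():
    s = rng.normal(0, sd_space, 5000)   # spatial error (mm)
    t = rng.normal(0, sd_time, 5000)    # temporal error (ms)
    print(f"{name:6s}: joint space-time entropy = {joint_entropy(s, t):.2f} bits")
```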
Digital implementation of a laser frequency stabilisation technique in the telecommunications band
NASA Astrophysics Data System (ADS)
Jivan, Pritesh; van Brakel, Adriaan; Manuel, Rodolfo Martínez; Grobler, Michael
2016-02-01
Laser frequency stabilisation in the telecommunications band was realised using the Pound-Drever-Hall (PDH) error signal. The transmission spectrum of the Fabry-Perot cavity was used as opposed to the traditionally used reflected spectrum. A comparison was done using an analogue as well as a digitally implemented system. This study forms part of an initial step towards developing a portable optical time and frequency standard. The frequency discriminator used in the experimental setup was a fibre-based Fabry-Perot etalon. The phase-sensitive system made use of the optical heterodyne technique to detect changes in the phase of the system. A lock-in amplifier was used to filter and mix the input signals to generate the error signal. This error signal may then be used to generate a control signal via a PID controller. An error signal was realised at a wavelength of 1556 nm, which corresponds to an optical frequency of approximately 192.6 THz. An implementation of the analogue PDH technique yielded an error signal with a bandwidth of 6.134 GHz, while a digital implementation yielded a bandwidth of 5.774 GHz.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jan, Nuzhat; Balik, Salim; Hugo, Geoffrey D.
Purpose: To analyze primary tumor (PT) and lymph node (LN) position changes relative to each other and relative to anatomic landmarks during conventionally fractionated radiation therapy for patients with locally advanced lung cancer. Methods and Materials: In 12 patients with locally advanced non-small cell lung cancer, PT, LN, carina, and 1 thoracic vertebra were manually contoured on weekly 4-dimensional fan-beam CT scans. Systematic and random interfraction displacements of all contoured structures were identified in the 3 cardinal directions, and resulting setup margins were calculated. Time trends and the effect of volume changes on displacements were analyzed. Results: Three-dimensional displacement vectors and systematic/random interfraction displacements were smaller for carina than for vertebra both for PT and LN. For PT, mean (SD) 3-dimensional displacement vectors with carina-based alignment were 7 (4) mm versus 9 (5) mm with bony anatomy (P<.0001). For LN, smaller displacements were found with carina- (5 [3] mm, P<.0001) and vertebra-based (6 [3] mm, P=.002) alignment compared with using PT for setup (8 [5] mm). Primary tumor and LN displacements relative to bone and carina were independent (P>.05). Displacements between PT and bone (P=.04) and between PT and LN (P=.01) were significantly correlated with PT volume regression. Displacements between LN and carina were correlated with LN volume change (P=.03). Conclusions: Carina-based setup results in a more reproducible PT and LN alignment than bony anatomy setup. Considering the independence of PT and LN displacement and the impact of volume regression on displacements over time, repeated CT imaging even with PT-based alignment is recommended in locally advanced disease.
QUANTIFYING UNCERTAINTY DUE TO RANDOM ERRORS FOR MOMENT ANALYSES OF BREAKTHROUGH CURVES
The uncertainty in moments calculated from breakthrough curves (BTCs) is investigated as a function of random measurement errors in the data used to define the BTCs. The method presented assumes moments are calculated by numerical integration using the trapezoidal rule, and is t...
Random Versus Nonrandom Peer Review: A Case for More Meaningful Peer Review.
Itri, Jason N; Donithan, Adam; Patel, Sohil H
2018-05-10
Random peer review programs are not optimized to discover cases with diagnostic error and thus have inherent limitations with respect to educational and quality improvement value. Nonrandom peer review offers an alternative approach in which diagnostic error cases are targeted for collection during routine clinical practice. The objective of this study was to compare error cases identified through random and nonrandom peer review approaches at an academic center. During the 1-year study period, the number of discrepancy cases and score of discrepancy were determined from each approach. The nonrandom peer review process collected 190 cases, of which 60 were scored as 2 (minor discrepancy), 94 as 3 (significant discrepancy), and 36 as 4 (major discrepancy). In the random peer review process, 1,690 cases were reviewed, of which 1,646 were scored as 1 (no discrepancy), 44 were scored as 2 (minor discrepancy), and none were scored as 3 or 4. Several teaching lessons and quality improvement measures were developed as a result of analysis of error cases collected through the nonrandom peer review process. Our experience supports the implementation of nonrandom peer review as a replacement to random peer review, with nonrandom peer review serving as a more effective method for collecting diagnostic error cases with educational and quality improvement value. Copyright © 2018 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Roghani, Taybeh; Khalkhali Zavieh, Minoo; Rahimi, Abbas; Talebian, Saeed; Manshadi, Farideh Dehghan; Akbarzadeh Baghban, Alireza; King, Nicole; Katzman, Wendy
2018-01-25
The purpose of this study was to investigate the intra-rater reliability and validity of a designed load cell setup for the measurement of back extensor muscle force and endurance. The study sample included 19 older women with hyperkyphosis, mean age 67.0 ± 5.0 years, and 14 older women without hyperkyphosis, mean age 63.0 ± 6.0 years. Maximum back extensor force and endurance were measured in a sitting position with a designed load cell setup. Tests were performed by the same examiner on two separate days within a 72-hour interval. The intra-rater reliability of the measurements was analyzed using intraclass correlation coefficient (ICC), standard errors of measurement (SEM), and minimal detectable change (MDC). The validity of the setup was determined using Pearson correlation analysis and independent t-test. Using our designed load cell, the values of ICC indicated very high reliability of force measurement (hyperkyphosis group: 0.96, normal group: 0.97) and high reliability of endurance measurement (hyperkyphosis group: 0.82, normal group: 0.89). For all tests, the values of SEM and MDC were low in both groups. A significant correlation between two documented forces (load cell force and target force) and significant differences in the muscle force and endurance among the two groups were found. The measurements of static back muscle force and endurance are reliable and valid with our designed setup in older women with and without hyperkyphosis.
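For reference, the reliability statistics reported above are commonly related as follows; this is one standard convention, stated as an assumption since the abstract does not give the exact variant the authors used.

```latex
% Common relations between the reported reliability statistics
% (standard convention; the authors' exact variant is not stated)
\mathrm{SEM} = \mathrm{SD}\,\sqrt{1-\mathrm{ICC}}, \qquad
\mathrm{MDC}_{95} = 1.96\,\sqrt{2}\,\mathrm{SEM}
```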
Bathymetric surveying with GPS and heave, pitch, and roll compensation
Work, P.A.; Hansen, M.; Rogers, W.E.
1998-01-01
Field and laboratory tests of a shipborne hydrographic survey system were conducted. The system consists of two 12-channel GPS receivers (one on-board, one fixed on shore), a digital acoustic fathometer, and a digital heave-pitch-roll (HPR) recorder. Laboratory tests of the HPR recorder and fathometer are documented. Results of field tests of the isolated GPS system and then of the entire suite of instruments are presented. A method for data reduction is developed to account for vertical errors introduced by roll and pitch of the survey vessel, which can be substantial (decimeters). The GPS vertical position data are found to be reliable to 2-3 cm and the fathometer to 5 cm in the laboratory. The field test of the complete system in shallow water (<2 m) indicates absolute vertical accuracy of 10-20 cm. Much of this error is attributed to the fathometer. Careful surveying and equipment setup can minimize systematic error and yield much smaller average errors.
An analysis of temperature-induced errors for an ultrasound distance measuring system. M. S. Thesis
NASA Technical Reports Server (NTRS)
Wenger, David Paul
1991-01-01
The presentation of research is provided in the following five chapters. Chapter 2 presents the necessary background information and definitions for general work with ultrasound and acoustics. It also discusses the basis for errors in the slant range measurements. Chapter 3 presents a method of problem solution and an analysis of the sensitivity of the equations to slant range measurement errors. It also presents various methods by which the error in the slant range measurements can be reduced to improve overall measurement accuracy. Chapter 4 provides a description of a type of experiment used to test the analytical solution and provides a discussion of its results. Chapter 5 discusses the setup of a prototype collision avoidance system, discusses its accuracy, and demonstrates various methods of improving the accuracy along with the improvements' ramifications. Finally, Chapter 6 provides a summary of the work and a discussion of conclusions drawn from it. Additionally, suggestions for further research are made to improve upon what has been presented here.
NASA Astrophysics Data System (ADS)
Zehe, E.; Klaus, J.
2011-12-01
Rapid flow in connected preferential flow paths is crucial for fast transport of water and solutes through soils, especially at tile-drained field sites. The present study tests whether an explicit treatment of worm burrows is feasible for modeling water flow, bromide and pesticide transport in structured heterogeneous soils with a 2-dimensional Richards-based model. The essence is to represent worm burrows as morphologically connected paths of low flow resistance and low retention capacity in the spatially highly resolved model domain. The underlying extensive database to test this approach was collected during an irrigation experiment, which investigated transport of bromide and the herbicide Isoproturon at a 900 m² tile-drained field site. In a first step we investigated whether the inherent uncertainty in key data causes equifinality, i.e., whether there are several spatial model setups that reproduce tile-drain event discharge in an acceptable manner. We found considerable equifinality in the spatial setup of the model when key parameters such as the area density of worm burrows and the maximum volumetric water flows inside these macropores were varied within the ranges of either our measurement errors or measurements reported in the literature. Thirteen model runs yielded a Nash-Sutcliffe coefficient of more than 0.9. Also, the flow volumes were in good accordance and peak timing errors were less than or equal to 20 min. In the second step, we therefore investigated whether this "equifinality" in spatial model setups may be reduced by including the bromide tracer data in the model falsification process. We simulated transport of bromide for the 13 spatial model setups that performed best at reproducing tile-drain event discharge, without any further calibration. Four of these 13 model setups allowed bromide transport to be modeled within fixed limits of acceptability. Parameter uncertainty and equifinality could thus be reduced. Thirdly, we selected one of those four setups for simulating transport of Isoproturon, which was applied the day before the irrigation experiment, and tested different parameter combinations to characterise adsorption according to the footprint database. Simulations could, however, only reproduce the observed event-based leaching behaviour when we allowed for retardation coefficients that were very close to one. This finding is consistent with various field observations. We conclude: a) A realistic representation of dominating structures and their topology is of key importance for predicting preferential water and mass flows at tile-drained hillslopes. b) Parameter uncertainty and equifinality could be reduced, but a system-inherent equifinality in a 2-dimensional Richards-based model has to be accepted.
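The acceptance criterion quoted above (Nash-Sutcliffe coefficient > 0.9) has a standard definition; a minimal sketch, assuming observed and simulated tile-drain discharge series of equal length (variable names illustrative).

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the model is no
    better than predicting the observed mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

# e.g. a spatial model setup would be accepted if nash_sutcliffe(q_obs, q_sim) > 0.9
```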
An On-Demand Optical Quantum Random Number Generator with In-Future Action and Ultra-Fast Response
Stipčević, Mario; Ursin, Rupert
2015-01-01
Random numbers are essential for our modern information-based society, e.g. in cryptography. Unlike frequently used pseudo-random generators, physical random number generators do not depend on complex algorithms but rather on a physical process to provide true randomness. Quantum random number generators (QRNG) rely on a process which, even in principle, can be described only by a probabilistic theory. Here we present a conceptually simple implementation, which offers 100% efficiency in producing a random bit upon request and simultaneously exhibits ultra-low latency. A careful technical and statistical analysis demonstrates its robustness against imperfections of the actually implemented technology and enables quick estimation of the randomness of very long sequences. Generated random numbers pass standard statistical tests without any post-processing. The setup described, as well as the theory presented here, demonstrates the maturity and overall understanding of the technology. PMID:26057576
A generator for unique quantum random numbers based on vacuum states
NASA Astrophysics Data System (ADS)
Gabriel, Christian; Wittmann, Christoffer; Sych, Denis; Dong, Ruifang; Mauerer, Wolfgang; Andersen, Ulrik L.; Marquardt, Christoph; Leuchs, Gerd
2010-10-01
Random numbers are a valuable component in diverse applications that range from simulations over gambling to cryptography. The quest for true randomness in these applications has engendered a large variety of different proposals for producing random numbers based on the foundational unpredictability of quantum mechanics. However, most approaches do not consider that a potential adversary could have knowledge about the generated numbers, so the numbers are not verifiably random and unique. Here we present a simple experimental setup based on homodyne measurements that uses the purity of a continuous-variable quantum vacuum state to generate unique random numbers. We use the intrinsic randomness in measuring the quadratures of a mode in the lowest energy vacuum state, which cannot be correlated to any other state. The simplicity of our source, combined with its verifiably unique randomness, are important attributes for achieving high-reliability, high-speed and low-cost quantum random number generators.
Efficient Measurement of Quantum Gate Error by Interleaved Randomized Benchmarking
NASA Astrophysics Data System (ADS)
Magesan, Easwar; Gambetta, Jay M.; Johnson, B. R.; Ryan, Colm A.; Chow, Jerry M.; Merkel, Seth T.; da Silva, Marcus P.; Keefe, George A.; Rothwell, Mary B.; Ohki, Thomas A.; Ketchen, Mark B.; Steffen, M.
2012-08-01
We describe a scalable experimental protocol for estimating the average error of individual quantum computational gates. This protocol consists of interleaving random Clifford gates between the gate of interest and provides an estimate as well as theoretical bounds for the average error of the gate under test, so long as the average noise variation over all Clifford gates is small. This technique takes into account both state preparation and measurement errors and is scalable in the number of qubits. We apply this protocol to a superconducting qubit system and find a bounded average error of 0.003 [0,0.016] for the single-qubit gates Xπ/2 and Yπ/2. These bounded values provide better estimates of the average error than those extracted via quantum process tomography.
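A minimal sketch of how an interleaved gate error is typically extracted once the reference and interleaved survival probabilities have each been fit to a decay of the form A·p^m + B. This is the standard single-qubit estimate, shown as an illustration; it is not claimed to be the exact analysis used in the paper.

```python
def interleaved_rb_error(p_ref, p_int, d=2):
    """Estimate the average error of an interleaved gate from randomized
    benchmarking decay parameters (standard formula; d = 2**n for n qubits).
    p_ref: decay of the reference Clifford sequences,
    p_int: decay with the gate of interest interleaved."""
    return (d - 1) * (1.0 - p_int / p_ref) / d

# Example with illustrative decay parameters: roughly 3e-3 for a single qubit.
print(interleaved_rb_error(p_ref=0.995, p_int=0.989))
```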
ERIC Educational Resources Information Center
Byun, Tara McAllister
2017-01-01
Purpose: This study documented the efficacy of visual-acoustic biofeedback intervention for residual rhotic errors, relative to a comparison condition involving traditional articulatory treatment. All participants received both treatments in a single-subject experimental design featuring alternating treatments with blocked randomization of…
Statistical Analysis Experiment for Freshman Chemistry Lab.
ERIC Educational Resources Information Center
Salzsieder, John C.
1995-01-01
Describes a laboratory experiment dissolving zinc from galvanized nails in which data can be gathered very quickly for statistical analysis. The data have sufficient significant figures and the experiment yields a nice distribution of random errors. Freshman students can gain an appreciation of the relationships between random error, number of…
NASA Astrophysics Data System (ADS)
Xu, Chong-yu; Tunemar, Liselotte; Chen, Yongqin David; Singh, V. P.
2006-06-01
Sensitivity of hydrological models to input data errors has been reported in the literature for particular models on a single or a few catchments. A more important issue, i.e., how a model's response to input data errors changes as catchment conditions change, has not been addressed previously. This study investigates the seasonal and spatial effects of precipitation data errors on the performance of conceptual hydrological models. For this study, a monthly conceptual water balance model, NOPEX-6, was applied to 26 catchments in the Mälaren basin in Central Sweden. Both systematic and random errors were considered. For the systematic errors, 5-15% of mean monthly precipitation values were added to the original precipitation to form the corrupted input scenarios. Random values were generated by Monte Carlo simulation and were assumed to be (1) independent between months, and (2) distributed according to a Gaussian law of zero mean and constant standard deviation that was taken as 5, 10, 15, 20, and 25% of the mean monthly standard deviation of precipitation. The results show that the response of the model parameters and model performance depends, among other factors, on the type of error, the magnitude of the error, the physical characteristics of the catchment, and the season of the year. In particular, the model appears less sensitive to the random error than to the systematic error. The catchments with smaller values of runoff coefficients were more influenced by input data errors than were the catchments with higher values. Dry months were more sensitive to precipitation errors than were wet months. Recalibration of the model with erroneous data compensated in part for the data errors by altering the model parameters.
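A minimal sketch of how the corrupted precipitation scenarios described above could be generated, assuming a monthly precipitation series is available. The fractional offsets and noise levels follow the percentages in the abstract, but the function and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt_precipitation(p_monthly, systematic_frac=0.0, random_frac=0.0):
    """Return a corrupted copy of a monthly precipitation series.
    systematic_frac: constant offset as a fraction of the monthly mean
                     (e.g. 0.05-0.15 in the study).
    random_frac: std. dev. of independent Gaussian noise as a fraction of the
                 monthly standard deviation (e.g. 0.05-0.25 in the study)."""
    p = np.asarray(p_monthly, dtype=float)
    corrupted = p + systematic_frac * p.mean()
    corrupted += rng.normal(0.0, random_frac * p.std(), size=p.shape)
    return np.clip(corrupted, 0.0, None)   # precipitation cannot be negative
```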
Haworth, Annette; Kearvell, Rachel; Greer, Peter B; Hooton, Ben; Denham, James W; Lamb, David; Duchesne, Gillian; Murray, Judy; Joseph, David
2009-03-01
A multi-centre clinical trial for prostate cancer patients provided an opportunity to introduce conformal radiotherapy with dose escalation. To verify adequate treatment accuracy prior to patient recruitment, centres submitted details of a set-up accuracy study (SUAS). We report the results of the SUAS, the variation in clinical practice and the strategies used to help centres improve treatment accuracy. The SUAS required each of the 24 participating centres to collect data on at least 10 pelvic patients imaged on a minimum of 20 occasions. Software was provided for data collection and analysis. Support to centres was provided through educational lectures, the trial quality assurance team and an information booklet. Only two centres had recently carried out a SUAS prior to the trial opening. Systematic errors were generally smaller than those previously reported in the literature. The questionnaire identified many differences in patient set-up protocols. As a result of participating in this QA activity more than 65% of centres improved their treatment delivery accuracy. Conducting a pre-trial SUAS has led to improvement in treatment delivery accuracy in many centres. Treatment techniques and set-up accuracy varied greatly, demonstrating a need to ensure an on-going awareness for such studies in future trials and with the introduction of dose escalation or new technologies.
Electrical Evaluation of RCA MWS5001D Random Access Memory, Volume 4, Appendix C
NASA Technical Reports Server (NTRS)
Klute, A.
1979-01-01
The electrical characterization and qualification test results are presented for the RCA MWS5001D random access memory. The tests included functional tests, AC and DC parametric tests, AC parametric worst-case pattern selection test, determination of worst-case transition for setup and hold times, and a series of schmoo plots. Statistical analysis data is supplied along with write pulse width, read cycle time, write cycle time, and chip enable time data.
Yang, Xiao-Xing; Critchley, Lester A; Joynt, Gavin M
2011-01-01
Thermodilution cardiac output using a pulmonary artery catheter is the reference method against which all new methods of cardiac output measurement are judged. However, thermodilution lacks precision and has a quoted precision error of ± 20%. There is uncertainty about its true precision and this causes difficulty when validating new cardiac output technology. Our aim in this investigation was to determine the current precision error of thermodilution measurements. A test rig through which water circulated at different constant rates with ports to insert catheters into a flow chamber was assembled. Flow rate was measured by an externally placed transonic flowprobe and meter. The meter was calibrated by timed filling of a cylinder. Arrow and Edwards 7Fr thermodilution catheters, connected to a Siemens SC9000 cardiac output monitor, were tested. Thermodilution readings were made by injecting 5 mL of ice-cold water. Precision error was divided into random and systematic components, which were determined separately. Between-readings (random) variability was determined for each catheter by taking sets of 10 readings at different flow rates. Coefficient of variation (CV) was calculated for each set and averaged. Between-catheter systems (systematic) variability was derived by plotting calibration lines for sets of catheters. Slopes were used to estimate the systematic component. Performances of 3 cardiac output monitors were compared: Siemens SC9000, Siemens Sirecust 1261, and Philips MP50. Five Arrow and 5 Edwards catheters were tested using the Siemens SC9000 monitor. Flow rates between 0.7 and 7.0 L/min were studied. The CV (random error) for Arrow was 5.4% and for Edwards was 4.8%. The random precision error was ± 10.0% (95% confidence limits). CV (systematic error) was 5.8% and 6.0%, respectively. The systematic precision error was ± 11.6%. The total precision error of a single thermodilution reading was ± 15.3% and ± 13.0% for triplicate readings. Precision error increased by 45% when using the Sirecust monitor and 100% when using the Philips monitor. In vitro testing of pulmonary artery catheters enabled us to measure both the random and systematic error components of thermodilution cardiac output measurement, and thus calculate the precision error. Using the Siemens monitor, we established a precision error of ± 15.3% for single and ± 13.0% for triplicate reading, which was similar to the previous estimate of ± 20%. However, this precision error was significantly worsened by using the Sirecust and Philips monitors. Clinicians should recognize that the precision error of thermodilution cardiac output is dependent on the selection of catheter and monitor model.
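The quoted totals are consistent with adding independent random and systematic components in quadrature, with triplicate readings reducing only the random component by a factor of √3. A reconstruction of that arithmetic, assuming independence (which the abstract implies but does not state explicitly):

```latex
% Independent components added in quadrature (reconstruction of the quoted totals)
\sqrt{10.0^{2} + 11.6^{2}} \approx 15.3\%, \qquad
\sqrt{\left(\tfrac{10.0}{\sqrt{3}}\right)^{2} + 11.6^{2}} \approx 13.0\%
% triplicate readings reduce only the random component, by a factor of sqrt(3)
```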
Force estimation from OCT volumes using 3D CNNs.
Gessert, Nils; Beringhoff, Jens; Otte, Christoph; Schlaefer, Alexander
2018-07-01
Estimating the interaction forces of instruments and tissue is of interest, particularly to provide haptic feedback during robot-assisted minimally invasive interventions. Different approaches based on external and integrated force sensors have been proposed. These are hampered by friction, sensor size, and sterilizability. We investigate a novel approach to estimate the force vector directly from optical coherence tomography image volumes. We introduce a novel Siamese 3D CNN architecture. The network takes an undeformed reference volume and a deformed sample volume as input and outputs the three components of the force vector. We employ a deep residual architecture with bottlenecks for increased efficiency. We compare the Siamese approach to methods using difference volumes and two-dimensional projections. Data were generated using a robotic setup to obtain ground-truth force vectors for silicone tissue phantoms as well as porcine tissue. Our method achieves a mean average error of [Formula: see text] when estimating the force vector. Our novel Siamese 3D CNN architecture outperforms single-path methods that achieve a mean average error of [Formula: see text]. Moreover, the use of volume data leads to significantly higher performance compared to processing only surface information, which achieves a mean average error of [Formula: see text]. Based on the tissue dataset, our method shows good generalization between different subjects. We propose a novel image-based force estimation method using optical coherence tomography. We illustrate that capturing the deformation of subsurface structures substantially improves force estimation. Our approach can provide accurate force estimates in surgical setups when using intraoperative optical coherence tomography.
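An illustrative PyTorch sketch of a Siamese 3D CNN that maps a reference and a deformed volume to a three-component force vector. The layer sizes and depths are arbitrary assumptions for illustration only; this is not the authors' residual bottleneck architecture.

```python
import torch
import torch.nn as nn

class Siamese3DForceNet(nn.Module):
    """Two weight-sharing 3D conv branches; their features are concatenated
    and regressed to (Fx, Fy, Fz). Layer widths are illustrative only."""
    def __init__(self):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 3))

    def forward(self, reference, deformed):
        f_ref = self.branch(reference)      # shared weights: same module applied twice
        f_def = self.branch(deformed)
        return self.head(torch.cat([f_ref, f_def], dim=1))

# Usage with dummy volumes (batch, channel, depth, height, width):
net = Siamese3DForceNet()
force = net(torch.randn(2, 1, 32, 32, 32), torch.randn(2, 1, 32, 32, 32))
print(force.shape)  # torch.Size([2, 3])
```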
Adequate margins for random setup uncertainties in head-and-neck IMRT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Astreinidou, Eleftheria; Bel, Arjan; Raaijmakers, Cornelis P.J.
2005-03-01
Purpose: To investigate the effect of random setup uncertainties on the highly conformal dose distributions produced by intensity-modulated radiotherapy (IMRT) for clinical head-and-neck cancer patients and to determine adequate margins to account for those uncertainties. Methods and materials: We have implemented in our clinical treatment planning system the possibility of simulating normally distributed patient setup displacements, translations, and rotations. The planning CT data of 8 patients with Stage T1-T3N0M0 oropharyngeal cancer were used. The clinical target volumes of the primary tumor (CTV{sub primary}) and of the lymph nodes (CTV{sub elective}) were expanded by 0.0, 1.5, 3.0, and 5.0 mm in all directions, creating the planning target volumes (PTVs). We performed IMRT dose calculation using our class solution for each PTV margin, resulting in the conventional static plans. Then, the system recalculated the plan for each positioning displacement derived from a normal distribution with {sigma} = 2 mm and {sigma} = 4 mm (standard deviation) for translational deviations and {sigma} = 1 deg for rotational deviations. The dose distributions of the 30 fractions were summed, resulting in the actual plan. The CTV dose coverage of the actual plans was compared with that of the static plans. Results: Random translational deviations of {sigma} = 2 mm and rotational deviations of {sigma} = 1 deg did not affect the CTV{sub primary} volume receiving 95% of the prescribed dose (V{sub 95}) regardless of the PTV margin used. A V{sub 95} reduction of 3% and 1% for a 0.0-mm and 1.5-mm PTV margin, respectively, was observed for {sigma} = 4 mm. The V{sub 95} of the CTV{sub elective} contralateral was approximately 1% and 5% lower than that of the static plan for {sigma} = 2 mm and {sigma} = 4 mm, respectively, and for PTV margins < 5.0 mm. An additional reduction of 1% was observed when rotational deviations were included. The same effect was observed for the CTV{sub elective} ipsilateral but with smaller dose differences than those for the contralateral side. The effect of the random uncertainties on the mean dose to the parotid glands was not significant. The maximal dose to the spinal cord increased by a maximum of 3 Gy. Conclusions: The margins to account for random setup uncertainties, in our clinical IMRT solution, should be 1.5 mm and 3.0 mm in the case of {sigma} = 2 mm and {sigma} = 4 mm, respectively, for the CTV{sub primary}. Larger margins (5.0 mm), however, should be applied to the CTV{sub elective}, if the goal of treatment is a V{sub 95} value of at least 99%.
Integration of system identification and robust controller designs for flexible structures in space
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Lew, Jiann-Shiun
1990-01-01
An approach is developed using experimental data to identify a reduced-order model and its model error for a robust controller design. There are three steps involved in the approach. First, an approximately balanced model is identified using the Eigensystem Realization Algorithm, which is an identification algorithm. Second, the model error is calculated and described in the frequency domain in terms of the H(infinity) norm. Third, a pole placement technique in combination with an H(infinity) control method is applied to design a controller for the considered system. A set of experimental data from an existing setup, namely the Mini-Mast system, is used to illustrate and verify the approach.
Adaptive reduction of constitutive model-form error using a posteriori error estimation techniques
Bishop, Joseph E.; Brown, Judith Alice
2018-06-15
In engineering practice, models are typically kept as simple as possible for ease of setup and use, computational efficiency, maintenance, and overall reduced complexity to achieve robustness. In solid mechanics, a simple and efficient constitutive model may be favored over one that is more predictive, but is difficult to parameterize, is computationally expensive, or is simply not available within a simulation tool. In order to quantify the modeling error due to the choice of a relatively simple and less predictive constitutive model, we adopt the use of a posteriori model-form error-estimation techniques. Based on local error indicators in the energy norm, an algorithm is developed for reducing the modeling error by spatially adapting the material parameters in the simpler constitutive model. The resulting material parameters are not material properties per se, but depend on the given boundary-value problem. As a first step to the more general nonlinear case, we focus here on linear elasticity in which the “complex” constitutive model is general anisotropic elasticity and the chosen simpler model is isotropic elasticity. As a result, the algorithm for adaptive error reduction is demonstrated using two examples: (1) A transversely-isotropic plate with hole subjected to tension, and (2) a transversely-isotropic tube with two side holes subjected to torsion.
On-board error correction improves IR earth sensor accuracy
NASA Astrophysics Data System (ADS)
Alex, T. K.; Kasturirangan, K.; Shrivastava, S. K.
1989-10-01
Infra-red earth sensors are used in satellites for attitude sensing. Their accuracy is limited by systematic and random errors. The sources of errors in a scanning infra-red earth sensor are analyzed in this paper. The systematic errors arising from the seasonal variation of infra-red radiation, the oblate shape of the earth, the ambient temperature of the sensor, and changes in scan/spin rates have been analyzed. Simple relations are derived using least-squares curve fitting for on-board correction of these errors. Random errors arising from detector and amplifier noise, alignment instability, and localized radiance anomalies are analyzed, and possible correction methods are suggested. Sun and Moon interference on earth sensor performance has seriously affected a number of missions. The on-board processor detects Sun/Moon interference and corrects the errors on-board. It is possible to obtain an eightfold improvement in sensing accuracy, which is comparable with ground-based post facto attitude refinement.
NASA Technical Reports Server (NTRS)
Olson, William S.; Kummerow, Christian D.; Yang, Song; Petty, Grant W.; Tao, Wei-Kuo; Bell, Thomas L.; Braun, Scott A.; Wang, Yansen; Lang, Stephen E.; Johnson, Daniel E.;
2006-01-01
A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and nonconvective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud-resolving model simulations, and from the Bayesian formulation itself. Synthetic rain-rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in TMI instantaneous rain-rate estimates at 0.5° resolution range from approximately 50% at 1 mm/h to 20% at 14 mm/h. Errors in collocated spaceborne radar rain-rate estimates are roughly 50%-80% of the TMI errors at this resolution. The estimated algorithm random error in TMI rain rates at monthly, 2.5° resolution is relatively small (less than 6% at 5 mm day⁻¹) in comparison with the random error resulting from infrequent satellite temporal sampling (8%-35% at the same rain rate). Percentage errors resulting from sampling decrease with increasing rain rate, and sampling errors in latent heating rates follow the same trend. Averaging over 3 months reduces sampling errors in rain rates to 6%-15% at 5 mm day⁻¹, with proportionate reductions in latent heating sampling errors.
Zhu, Jian; Bai, Tong; Gu, Jiabing; Sun, Ziwen; Wei, Yumei; Li, Baosheng; Yin, Yong
2018-04-27
To evaluate the effect of pretreatment megavoltage computed tomographic (MVCT) scan methodology on setup verification and adaptive dose calculation in helical TomoTherapy, anthropomorphic heterogeneous chest and pelvic phantoms were planned with virtual targets on the TomoTherapy Physicist Station and scanned with the TomoTherapy megavoltage image-guided radiotherapy (IGRT) system using six groups of options: three different acquisition pitches (APs) of 'fine', 'normal' and 'coarse', each combined with 2 corresponding reconstruction intervals (RIs). To mimic patient setup variations, each phantom was manually shifted 5 mm in each of three orthogonal directions. The effect of the MVCT scan options was analyzed in terms of image quality (CT number and noise), adaptive dose calculation deviations, and positional correction variations. MVCT scanning time with the 'fine' pitch was approximately twice that of 'normal' and more than 3 times that of 'coarse'; none of these times is affected by the choice of RI. MVCT with different APs delivered almost identical CT numbers and image noise inside 7 selected regions of various densities. DVH curves from adaptive dose calculations on serial MVCT images acquired with different pitches overlapped, and there were no significant differences in the p values for intercept and slope of the simulated spinal cord (p = 0.761 & 0.277), heart (p = 0.984 & 0.978), lungs (p = 0.992 & 0.980), soft tissue (p = 0.319 & 0.951) and bony structures (p = 0.960 & 0.929) between the most finely and the most coarsely sampled MVCT series. Furthermore, gamma index analysis showed that, compared with the dose distribution calculated on the 'fine' MVCT, only 0.2% or 1.1% of the points analyzed on the 'normal' or 'coarse' MVCT failed the defined gamma criterion. On the chest phantom, all registration errors larger than 1 mm appeared along the superior-inferior axis and could not be avoided even with the smallest AP and RI. On the pelvic phantom, craniocaudal errors were much smaller than on the chest phantom; however, the 'coarse' AP produced larger registration errors, which could be reduced from 2.90 mm to 0.22 mm using the 'full image' registration technique. A 'coarse' AP with an RI of 6 mm is recommended for adaptive radiotherapy (ART) planning to provide a craniocaudally longer and faster MVCT scan, while the 'full image' registration technique should be used to avoid large residual errors. Considering the trade-off between IGRT and ART, a 'normal' AP with an RI of 2 mm is highly recommended for daily practice.
NASA Astrophysics Data System (ADS)
Gholipour Peyvandi, R.; Islami Rad, S. Z.
2017-12-01
The determination of the volume fraction percentage of the different phases flowing in vessels using transmission gamma rays is a conventional method in the petroleum and oil industries. In some cases, with access to only one side of the vessel, attention has turned to backscattered gamma rays as a desirable choice. In this research, the volume fraction percentage was measured precisely in water-gasoil-air three-phase flows using the backscatter gamma-ray technique and a multilayer perceptron (MLP) neural network. Volume fraction determination in three-phase flows normally requires two gamma radioactive sources or a dual-energy source (with different energies), while in this study we used just a 137Cs source (with a single energy) and a NaI detector to analyze the backscattered gamma rays. The experimental set-up provided the required data for training and testing the network. Using the presented method, the volume fraction was predicted with a mean relative error of less than 6.47%. Also, the root mean square error was calculated as 1.60. The presented set-up is applicable in industries with limited access. In addition, with this technique the cost, radiation safety and shielding requirements are reduced compared with the other proposed methods.
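A minimal sketch of the regression step, assuming features extracted from the backscattered gamma spectrum (e.g. counts in a few energy windows) and known volume fractions from the experimental set-up are available. scikit-learn's MLPRegressor stands in for the authors' MLP, and the placeholder data, feature choice, and network size are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# X: per-measurement features from the backscattered 137Cs spectrum
#    (e.g. counts in selected energy windows) -- illustrative placeholder data.
# y: water and gasoil volume fractions in percent (air is the remainder).
rng = np.random.default_rng(1)
X = rng.random((120, 4))
y = rng.random((120, 2)) * 100.0

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=1)
mlp.fit(X_train, y_train)

pred = mlp.predict(X_test)
mre = np.mean(np.abs(pred - y_test) / np.maximum(y_test, 1e-9)) * 100.0
rmse = np.sqrt(np.mean((pred - y_test) ** 2))
print(f"mean relative error {mre:.1f} %, RMSE {rmse:.2f}")
```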
Vidovic, Luka; Majaron, Boris
2014-02-01
Diffuse reflectance spectra (DRS) of biological samples are commonly measured using an integrating sphere (IS). To account for the incident light spectrum, measurement begins by placing a highly reflective white standard against the IS sample opening and collecting the reflected light. After replacing the white standard with the test sample of interest, DRS of the latter is determined as the ratio of the two values at each involved wavelength. However, such a substitution may alter the fluence rate inside the IS. This leads to distortion of measured DRS, which is known as single-beam substitution error (SBSE). Barring the use of more complex experimental setups, the literature states that only approximate corrections of the SBSE are possible, e.g., by using look-up tables generated with calibrated low-reflectivity standards. We present a practical method for elimination of SBSE when using IS equipped with an additional reference port. Two additional measurements performed at this port enable a rigorous elimination of SBSE. Our experimental characterization of SBSE is replicated by theoretical derivation. This offers an alternative possibility of computational removal of SBSE based on advance characterization of a specific DRS setup. The influence of SBSE on quantitative analysis of DRS is illustrated in one application example.
Experiments on robot-assisted navigated drilling and milling of bones for pedicle screw placement.
Ortmaier, T; Weiss, H; Döbele, S; Schreiber, U
2006-12-01
This article presents experimental results for robot-assisted navigated drilling and milling for pedicle screw placement. The preliminary study was carried out in order to gain first insights into positioning accuracies and machining forces during hands-on robotic spine surgery. Additionally, the results formed the basis for the development of a new robot for surgery. A simplified anatomical model is used to derive the accuracy requirements. The experimental set-up consists of a navigation system and an impedance-controlled light-weight robot holding the surgical instrument. The navigation system is used to position the surgical instrument and to compensate for pose errors during machining. Holes are drilled in artificial bone and bovine spine. A quantitative comparison of the drill-hole diameters was achieved using a computer. The interaction forces and pose errors are discussed with respect to the chosen machining technology and control parameters. Within the technological boundaries of the experimental set-up, it is shown that the accuracy requirements can be met and that milling is superior to drilling. It is expected that robot assisted navigated surgery helps to improve the reliability of surgical procedures. Further experiments are necessary to take the whole workflow into account. Copyright 2006 John Wiley & Sons, Ltd.
What Randomized Benchmarking Actually Measures
Proctor, Timothy; Rudinger, Kenneth; Young, Kevin; ...
2017-09-28
Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r. For Clifford gates with arbitrarily small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. Here, these theories allow explicit computation of the error rate that RB measures (r), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
Quantifying errors without random sampling.
Phillips, Carl V; LaPole, Luwanna M
2003-06-12
All quantifications of mortality, morbidity, and other health measures involve numerous sources of error. The routine quantification of random sampling error makes it easy to forget that other sources of error can and should be quantified. When a quantification does not involve sampling, error is almost never quantified and results are often reported in ways that dramatically overstate their precision. We argue that the precision implicit in typical reporting is problematic and sketch methods for quantifying the various sources of error, building up from simple examples that can be solved analytically to more complex cases. There are straightforward ways to partially quantify the uncertainty surrounding a parameter that is not characterized by random sampling, such as limiting reported significant figures. We present simple methods for doing such quantifications, and for incorporating them into calculations. More complicated methods become necessary when multiple sources of uncertainty must be combined. We demonstrate that Monte Carlo simulation, using available software, can estimate the uncertainty resulting from complicated calculations with many sources of uncertainty. We apply the method to the current estimate of the annual incidence of foodborne illness in the United States. Quantifying uncertainty from systematic errors is practical. Reporting this uncertainty would more honestly represent study results, help show the probability that estimated values fall within some critical range, and facilitate better targeting of further research.
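A minimal sketch of the Monte Carlo approach the authors describe, applied to a made-up incidence calculation with two uncertain inputs. The distributions and numbers are illustrative assumptions, not those used for the foodborne-illness estimate.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical calculation: incidence = reported_cases * underreporting_factor,
# with each input's uncertainty expressed as a (non-sampling) distribution.
reported_cases = rng.normal(50_000, 5_000, n)      # measurement uncertainty
underreporting = rng.triangular(10, 25, 40, n)     # expert-judgement range

incidence = reported_cases * underreporting

low, mid, high = np.percentile(incidence, [2.5, 50, 97.5])
print(f"median {mid:,.0f}, 95% uncertainty interval {low:,.0f} - {high:,.0f}")
```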
MIMO equalization with adaptive step size for few-mode fiber transmission systems.
van Uden, Roy G H; Okonkwo, Chigo M; Sleiffer, Vincent A J M; de Waardt, Hugo; Koonen, Antonius M J
2014-01-13
Optical multiple-input multiple-output (MIMO) transmission systems generally employ minimum mean squared error time or frequency domain equalizers. Using an experimental 3-mode dual polarization coherent transmission setup, we show that the convergence time of the MMSE time domain equalizer (TDE) and frequency domain equalizer (FDE) can be reduced by approximately 50% and 30%, respectively. The criterion used to estimate the system convergence time is the time it takes for the MIMO equalizer to reach an average output error which is within a margin of 5% of the average output error after 50,000 symbols. The convergence reduction difference between the TDE and FDE is attributed to the limited maximum step size for stable convergence of the frequency domain equalizer. The adaptive step size requires a small overhead in the form of a lookup table. It is highlighted that the convergence time reduction is achieved without sacrificing optical signal-to-noise ratio performance.
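A toy sketch of an adaptive-step-size LMS update for a single-tap 2×2 MIMO equalizer: large steps during acquisition, smaller steps near convergence. This is a generic illustration of the idea with made-up channel and noise values and a simple decaying-step schedule; it is not the authors' MMSE TDE/FDE implementation or their overhead/lookup-table scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sym = 5000

# Toy 2x2 mixing channel and QPSK symbols (illustrative only).
H = np.array([[1.0 + 0.1j, 0.3 - 0.2j],
              [0.2 + 0.3j, 0.9 - 0.1j]])
tx = (rng.choice([-1, 1], (2, n_sym)) + 1j * rng.choice([-1, 1], (2, n_sym))) / np.sqrt(2)
rx = H @ tx + 0.05 * (rng.standard_normal((2, n_sym)) + 1j * rng.standard_normal((2, n_sym)))

W = np.eye(2, dtype=complex)          # single-tap 2x2 equalizer matrix
mu_max, mu_min, decay = 0.1, 0.005, 0.999
mu = mu_max
for k in range(n_sym):
    y = W @ rx[:, k]
    e = tx[:, k] - y                  # data-aided error (training symbols)
    W += mu * np.outer(e, np.conj(rx[:, k]))   # LMS update
    mu = max(mu_min, mu * decay)      # adaptive step: shrink as convergence proceeds
```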
Zi, Fei; Wu, Xuejian; Zhong, Weicheng; Parker, Richard H; Yu, Chenghui; Budker, Simon; Lu, Xuanhui; Müller, Holger
2017-04-01
We present a hybrid laser frequency stabilization method combining modulation transfer spectroscopy (MTS) and frequency modulation spectroscopy (FMS) for the cesium D2 transition. In a typical pump-probe setup, the error signal is a combination of the DC-coupled MTS error signal and the AC-coupled FMS error signal. This combines the long-term stability of the former with the high signal-to-noise ratio of the latter. In addition, we enhance the long-term frequency stability with laser intensity stabilization. By measuring the frequency difference between two independent hybrid spectroscopies, we investigate the short- and long-term stability. We find a long-term stability of 7.8 kHz characterized by a standard deviation of the beating frequency drift over the course of 10 h and a short-term stability of 1.9 kHz characterized by an Allan deviation of that at 2 s of integration time.
Observation of non-classical correlations in sequential measurements of photon polarization
NASA Astrophysics Data System (ADS)
Suzuki, Yutaro; Iinuma, Masataka; Hofmann, Holger F.
2016-10-01
A sequential measurement of two non-commuting quantum observables results in a joint probability distribution for all output combinations that can be explained in terms of an initial joint quasi-probability of the non-commuting observables, modified by the resolution errors and back-action of the initial measurement. Here, we show that the error statistics of a sequential measurement of photon polarization performed at different measurement strengths can be described consistently by an imaginary correlation between the statistics of resolution and back-action. The experimental setup was designed to realize variable strength measurements with well-controlled imaginary correlation between the statistical errors caused by the initial measurement of diagonal polarizations, followed by a precise measurement of the horizontal/vertical polarization. We perform the experimental characterization of an elliptically polarized input state and show that the same complex joint probability distribution is obtained at any measurement strength.
NASA Technical Reports Server (NTRS)
Chang, Alfred T. C.; Chiu, Long S.; Wilheit, Thomas T.
1993-01-01
Global averages and random errors associated with the monthly oceanic rain rates derived from the Special Sensor Microwave/Imager (SSM/I) data using the technique developed by Wilheit et al. (1991) are computed. Accounting for the beam-filling bias, a global annual average rain rate of 1.26 m is computed. The error estimation scheme is based on the existence of independent (morning and afternoon) estimates of the monthly mean. Calculations show overall random errors of about 50-60 percent for each 5 deg x 5 deg box. The results are insensitive to different sampling strategy (odd and even days of the month). Comparison of the SSM/I estimates with raingage data collected at the Pacific atoll stations showed a low bias of about 8 percent, a correlation of 0.7, and an rms difference of 55 percent.
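The error-estimation scheme above relies on having two independent estimates of the same monthly mean. Under the usual assumption of equal, independent random errors, the error can be inferred from the difference of the two estimates; this is a standard argument stated as an illustration, not the authors' exact formulation.

```latex
% A and B: independent (morning / afternoon) estimates of the same monthly mean,
% each with random error sigma (assumed equal and independent). Then
\operatorname{Var}(A-B) = 2\sigma^{2}
\;\;\Rightarrow\;\;
\hat{\sigma} = \sqrt{\tfrac{1}{2}\,\overline{(A-B)^{2}}},
\qquad
\text{error of } \tfrac{A+B}{2} = \tfrac{\sigma}{\sqrt{2}}.
```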
Large aluminium convex mirror for the cryo-optical test of the Planck primary reflector
NASA Astrophysics Data System (ADS)
Gloesener, P.; Flébus, C.; Cola, M.; Roose, S.; Stockman, Y.; de Chambure, D.
2017-11-01
In the frame of the PLANCK mission telescope development, the changes in the reflector's surface figure error (SFE) with respect to the best ellipsoid must be measured between 293 K and 50 K with 1 μm RMS accuracy. To achieve this, infrared interferometry has been selected and a dedicated thermo-mechanical set-up has been constructed. In order to realise the test set-up for this reflector, a large aluminium convex mirror with a radius of 19500 mm has been manufactured. The mirror has to operate in a cryogenic environment below 30 K and must contribute less than 1 μm RMS WFE between room temperature and cryogenic temperature. This paper summarises the design, manufacturing and characterisation of this mirror, showing that it has fulfilled its requirements.
NASA Astrophysics Data System (ADS)
Jacobsen, M. K.; Liu, W.; Li, B.
2012-09-01
In this paper, a high pressure setup is presented for performing simultaneous measurements of Seebeck coefficient and thermal diffusivity in multianvil apparatus for the purpose of enhancing the study of transport phenomena. Procedures for the derivation of Seebeck coefficient and thermal diffusivity/conductivity, as well as their associated sources of errors, are presented in detail, using results obtained on the filled skutterudite, Ce0.8Fe3CoSb12, up to 12 GPa at ambient temperature. Together with recent resistivity and sound velocity measurements in the same apparatus, these developments not only provide the necessary data for a self-consistent and complete characterization of the figure of merit of thermoelectric materials under pressure, but also serve as an important tool for furthering our knowledge of the dynamics and interplay between these transport phenomena.
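The "figure of merit" these transport measurements feed into is the standard dimensionless thermoelectric ZT, which makes clear why the Seebeck coefficient, resistivity, and thermal conductivity are all needed. This is the textbook definition, stated for context rather than taken from the paper.

```latex
% Dimensionless thermoelectric figure of merit (standard definition)
ZT = \frac{S^{2}\,\sigma\,T}{\kappa} = \frac{S^{2}\,T}{\rho\,\kappa}
% S: Seebeck coefficient, sigma = 1/rho: electrical conductivity,
% kappa: thermal conductivity (obtainable from diffusivity), T: temperature
```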
Elongation measurement using 1-dimensional image correlation method
NASA Astrophysics Data System (ADS)
Phongwisit, Phachara; Kamoldilok, Surachart; Buranasiri, Prathan
2016-11-01
The aim of this paper was to study, set up, and calibrate an elongation measurement using the 1-dimensional image correlation (1-DIC) method. To confirm the correctness of our method and setup, calibration against another method is needed. In this paper, we used a small spring as a sample so that the result can be expressed as a spring constant. Following the fundamentals of the image correlation method, images of the undeformed and deformed samples were compared to capture the deformation process. By comparing the pixel locations of reference points in the two images, the spring's elongation was calculated. The results were then compared with the spring constant obtained from Hooke's law, and an error of about 5 percent was found. This DIC method would then be applied to measure the elongation of different kinds of small fiber samples.
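A minimal sketch of the 1-D image-correlation step and the Hooke's-law cross-check, assuming two grey-level line profiles (reference and deformed) and known applied loads are available. The sub-pixel refinement is omitted, and the pixel calibration factor, loads, and shifts are illustrative numbers.

```python
import numpy as np

def displacement_1d(reference, deformed):
    """Integer-pixel displacement between two 1-D intensity profiles,
    found as the lag that maximizes their cross-correlation."""
    ref = reference - reference.mean()
    dfm = deformed - deformed.mean()
    corr = np.correlate(dfm, ref, mode="full")
    return np.argmax(corr) - (len(ref) - 1)   # positive: deformed shifted right

# Hooke's-law cross-check (illustrative numbers):
pixel_size_mm = 0.05                   # assumed calibration factor
forces_N = np.array([0.5, 1.0, 1.5])   # applied loads
shifts_px = np.array([4, 8, 12])       # displacements from displacement_1d()
elongation_m = shifts_px * pixel_size_mm * 1e-3
k_dic = np.polyfit(elongation_m, forces_N, 1)[0]   # slope of F vs x, in N/m
print(f"spring constant from DIC: {k_dic:.0f} N/m")
```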
Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A
2017-01-01
Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. PMID:29106476
Small, J R
1993-01-01
This paper is a study into the effects of experimental error on the estimated values of flux control coefficients obtained using specific inhibitors. Two possible techniques for analysing the experimental data are compared: a simple extrapolation method (the so-called graph method) and a non-linear function fitting method. For these techniques, the sources of systematic errors are identified and the effects of systematic and random errors are quantified, using both statistical analysis and numerical computation. It is shown that the graph method is very sensitive to random errors and, under all conditions studied, that the fitting method, even under conditions where the assumptions underlying the fitted function do not hold, outperformed the graph method. Possible ways of designing experiments to minimize the effects of experimental errors are analysed and discussed. PMID:8257434
Evaluation of random errors in Williams’ series coefficients obtained with digital image correlation
NASA Astrophysics Data System (ADS)
Lychak, Oleh V.; Holyns'kiy, Ivan S.
2016-03-01
The use of the Williams' series parameters for fracture analysis requires valid information about their error values. The aim of this investigation is to develop a method for estimating the standard deviation of random errors of the Williams' series parameters obtained from measured components of the stress field. A criterion for choosing the optimal number of terms in the truncated Williams' series, so that its parameters are derived with minimal errors, is also proposed. The method was used to evaluate the Williams' parameters obtained from data measured by the digital image correlation technique on a three-point bending specimen.
Large Uncertainty in Estimating pCO2 From Carbonate Equilibria in Lakes
NASA Astrophysics Data System (ADS)
Golub, Malgorzata; Desai, Ankur R.; McKinley, Galen A.; Remucal, Christina K.; Stanley, Emily H.
2017-11-01
Most estimates of carbon dioxide (CO2) evasion from freshwaters rely on calculating partial pressure of aquatic CO2 (pCO2) from two out of three CO2-related parameters using carbonate equilibria. However, the pCO2 uncertainty has not been systematically evaluated across multiple lake types and equilibria. We quantified random errors in pH, dissolved inorganic carbon, alkalinity, and temperature from the North Temperate Lakes Long-Term Ecological Research site in four lake groups across a broad gradient of chemical composition. These errors were propagated onto pCO2 calculated from three carbonate equilibria, and for overlapping observations, compared against uncertainties in directly measured pCO2. The empirical random errors in CO2-related parameters were mostly below 2% of their median values. Resulting random pCO2 errors ranged from ±3.7% to ±31.5% of the median depending on alkalinity group and choice of input parameter pairs. Temperature uncertainty had a negligible effect on pCO2. When compared with direct pCO2 measurements, all parameter combinations produced biased pCO2 estimates with less than one third of total uncertainty explained by random pCO2 errors, indicating that systematic uncertainty dominates over random error. Multidecadal trend of pCO2 was difficult to reconstruct from uncertain historical observations of CO2-related parameters. Given poor precision and accuracy of pCO2 estimates derived from virtually any combination of two CO2-related parameters, we recommend direct pCO2 measurements where possible. To achieve consistently robust estimates of CO2 emissions from freshwater components of terrestrial carbon balances, future efforts should focus on improving accuracy and precision of CO2-related parameters (including direct pCO2) measurements and associated pCO2 calculations.
NASA Astrophysics Data System (ADS)
Krupka, M.; Kalal, M.; Dostal, J.; Dudzak, R.; Juha, L.
2017-08-01
Classical interferometry has become a widely used method of active optical diagnostics. Its more advanced version, which allows reconstruction of three sets of data from just one specially designed interferogram (a so-called complex interferogram), was developed in the past and became known as complex interferometry. Along with the phase shift, which can also be retrieved using classical interferometry, the amplitude modification of the probing part of the diagnostic beam caused by the object under study (to be called the signal amplitude), as well as the contrast of the interference fringes, can be retrieved using the complex interferometry approach. In order to partially compensate for errors in the reconstruction due to imperfections in the diagnostic beam intensity structure, as well as for errors caused by a non-ideal optical setup of the interferometer itself (including the quality of its optical components), a reference interferogram can be put to good use. This method of interferogram analysis of experimental data has been successfully implemented in practice. However, in the majority of interferometer setups (especially those employing wavefront division) the probe and the reference parts of the diagnostic beam feature different intensity distributions over their respective cross sections. This introduces an additional error into the reconstruction of the signal amplitude and the fringe contrast, which cannot be resolved using the reference interferogram alone. In order to deal with this error, it was found that additional, separately recorded images of the intensity distribution of the probe and the reference parts of the diagnostic beam (with no signal present) are needed. For the best results, sufficient shot-to-shot stability of the whole diagnostic system is required. In this paper, the efficiency of the complex interferometry approach for obtaining the highest possible accuracy of the signal amplitude reconstruction is verified using computer-generated complex and reference interferograms containing artificially introduced intensity variations in the probe and the reference parts of the diagnostic beam. These sets of data are subsequently analyzed and the errors of the signal amplitude reconstruction are evaluated.
Löpprich, Martin; Krauss, Felix; Ganzinger, Matthias; Senghas, Karsten; Riezler, Stefan; Knaup, Petra
2016-08-05
In the Multiple Myeloma clinical registry at Heidelberg University Hospital, most data are extracted from discharge letters. Our aim was to analyze whether the manual documentation process can be made more efficient by using methods of natural language processing for multiclass classification of free-text diagnostic reports, in order to automatically document the diagnosis and state of disease of myeloma patients. The first objective was to create a corpus consisting of free-text diagnosis paragraphs of patients with multiple myeloma from German diagnostic reports, with manual annotation of the relevant data elements by documentation specialists. The second objective was to construct and evaluate a framework using different NLP methods to enable automatic multiclass classification of relevant data elements from free-text diagnostic reports. The main diagnosis paragraph was extracted from the clinical reports of one third of the patients in the multiple myeloma research database of Heidelberg University Hospital, selected at random (737 patients in total). An EDC system was set up, and two data entry specialists independently performed manual documentation of at least nine specific data elements for multiple myeloma characterization. Both data entries were compared and assessed by a third specialist, and an annotated text corpus was created. A framework was constructed, consisting of a self-developed package to split multiple diagnosis sequences into several subsequences, four different preprocessing steps to normalize the input data, and two classifiers: a maximum entropy classifier (MEC) and a support vector machine (SVM). In total, 15 different pipelines were examined and assessed by ten-fold cross-validation, repeated 100 times. As quality indicators, the average error rate and the average F1-score were computed. For significance testing, the approximate randomization test was used. The created annotated corpus consists of 737 diagnosis paragraphs with a total of 865 coded diagnoses. The dataset is publicly available in the supplementary online files for training and testing of further NLP methods. Both classifiers showed low average error rates (MEC: 1.05; SVM: 0.84) and high F1-scores (MEC: 0.89; SVM: 0.92). However, the results varied widely depending on the classified data element. Preprocessing methods increased this effect and had a significant impact on the classification, both positive and negative. The automatic diagnosis splitter increased the average error rate significantly, even though the F1-score decreased only slightly. The low average error rates and high average F1-scores of each pipeline demonstrate the suitability of the investigated NLP methods. However, it was also shown that there is no single best practice for the automatic classification of data elements from free-text diagnostic reports.
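A minimal sketch of one of the evaluated pipeline types: a support vector machine on TF-IDF features assessed by ten-fold cross-validation with the macro F1-score. The toy texts and labels below are invented stand-ins for the diagnosis paragraphs and annotated data elements, and scikit-learn is assumed as the toolkit rather than taken from the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-ins for free-text diagnosis paragraphs and one annotated
# data element (the real corpus and label set are not reproduced here).
texts = [
    "multiple myeloma IgG kappa, stage III",
    "multiple myeloma IgA lambda, stage II",
    "monoclonal gammopathy of undetermined significance",
    "plasmocytoma, solitary lesion",
] * 25
labels = ["myeloma", "myeloma", "mgus", "plasmocytoma"] * 25

# SVM classifier on TF-IDF features, as one of the two compared classifier types.
pipeline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())

# Ten-fold cross-validation; average the macro F1-score across folds.
scores = cross_val_score(pipeline, texts, labels, cv=10, scoring="f1_macro")
print(f"mean F1 over 10 folds: {scores.mean():.3f} (+/- {scores.std():.3f})")
```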
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zamora, D; Moirano, J; Kanal, K
Purpose: A fundamental measure performed during an annual physics CT evaluation confirms that system displayed CTDIvol nearly matches the independently measured value in phantom. For wide-beam (z-direction) CT scanners, AAPM Report 111 defined an ideal measurement method; however, the method often lacks practicality. The purpose of this preliminary study is to develop a set of conversion factors for a wide-beam CT scanner, relating the CTDIvol measured with a conventional setup (single CTDI phantom) versus the AAPM Report 111 approach (three abutting CTDI phantoms). Methods: For both the body CTDI and head CTDI, two acquisition setups were used: A) conventional singlemore » phantom and B) triple phantom. Of primary concern were the larger nominal beam widths for which a standard CTDI phantom setup would not provide adequate scatter conditions. Nominal beam width (160 or 120 mm) and kVp (100, 120, 140) were modulated based on the underlying clinical protocol. Exposure measurements were taken using a CT pencil ion chamber in the center and 12 o’clock position, and CTDIvol was calculated with ‘nT’ limited to 100 mm. A conversion factor (CF) was calculated as the ratio of CTDIvol measured in setup B versus setup A. Results: For body CTDI, the CF ranged from 1.04 up to 1.10, indicating a 4–10% difference between usage of one and three phantoms. For a nominal beam width of 160 mm, the CF did vary with selected kVp. For head CTDI at nominal beam widths of 120 and 160 mm, the CF was 1.00 and 1.05, respectively, independent of the kVp used (100, 120, and 140). Conclusions: A clear understanding of the manufacturer method of estimating the displayed CTDIvol is important when interpreting annual test results, as the acquisition setup may lead to an error of up to 10%. With appropriately defined CF, single phantom use is feasible.« less
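A small sketch of the conversion-factor calculation, assuming the standard weighted-CTDI definition (one third centre plus two thirds periphery, divided by pitch); the CTDI100 readings below are hypothetical, not the study's measurements.

```python
def ctdi_vol(ctdi100_center, ctdi100_periphery, pitch=1.0):
    """Weighted CTDI (1/3 centre + 2/3 periphery) divided by pitch."""
    ctdi_w = ctdi100_center / 3.0 + 2.0 * ctdi100_periphery / 3.0
    return ctdi_w / pitch

# Hypothetical CTDI100 values (mGy) from the pencil chamber, integration
# length limited to 100 mm, for the same wide-beam protocol measured with
# a single phantom (setup A) and with three abutting phantoms (setup B).
ctdivol_single = ctdi_vol(ctdi100_center=10.0, ctdi100_periphery=12.0)
ctdivol_triple = ctdi_vol(ctdi100_center=10.8, ctdi100_periphery=12.9)

# Conversion factor relating the conventional setup to the three-phantom setup.
cf = ctdivol_triple / ctdivol_single
print(f"CTDIvol single = {ctdivol_single:.2f} mGy, "
      f"triple = {ctdivol_triple:.2f} mGy, CF = {cf:.3f}")
```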
Impacts of wave-induced circulation in the surf zone on wave setup
NASA Astrophysics Data System (ADS)
Guérin, Thomas; Bertin, Xavier; Coulombier, Thibault; de Bakker, Anouk
2018-03-01
Wave setup corresponds to the increase in mean water level along the coast associated with the breaking of short-waves and is of key importance for coastal dynamics, as it contributes to storm surges and the generation of undertows. Although overall well explained by the divergence of the momentum flux associated with short waves in the surf zone, several studies reported substantial underestimations along the coastline. This paper investigates the impacts of the wave-induced circulation that takes place in the surf zone on wave setup, based on the analysis of 3D modelling results. A 3D phase-averaged modelling system using a vortex force formalism is applied to hindcast an unpublished field experiment, carried out at a dissipative beach under moderate to very energetic wave conditions (Hm 0 = 6m at breaking and Tp = 22s). When using an adaptive wave breaking parameterisation based on the beach slope, model predictions for water levels, short waves and undertows improved by about 30%, with errors reducing to 0.10 m, 0.10 m and 0.09 m/s, respectively. The analysis of model results suggests a very limited impact of the vertical circulation on wave setup at this dissipative beach. When extending this analysis to idealized simulations for different beach slopes ranging from 0.01 to 0.05, it shows that the contribution of the vertical circulation (horizontal and vertical advection and vertical viscosity terms) becomes more and more relevant as the beach slope increases. In contrast, for a given beach slope, the wave height at the breaking point has a limited impact on the relative contribution of the vertical circulation on the wave setup. For a slope of 0.05, the contribution of the terms associated with the vertical circulation accounts for up to 17% (i.e. a 20% increase) of the total setup at the shoreline, which provides a new explanation for the underestimations reported in previously published studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Juan; Beltran, Chris J., E-mail: beltran.chris@mayo.edu; Herman, Michael G.
Purpose: To quantitatively and systematically assess dosimetric effects induced by spot positioning error as a function of spot spacing (SS) on intensity-modulated proton therapy (IMPT) plan quality and to facilitate evaluation of safety tolerance limits on spot position. Methods: Spot position errors (PE) ranging from 1 to 2 mm were simulated. Simple plans were created on a water phantom, and IMPT plans were calculated on two pediatric patients with a brain tumor of 28 and 3 cc, respectively, using a commercial planning system. For the phantom, a uniform dose was delivered to targets located at different depths from 10 tomore » 20 cm with various field sizes from 2{sup 2} to 15{sup 2} cm{sup 2}. Two nominal spot sizes, 4.0 and 6.6 mm of 1 σ in water at isocenter, were used for treatment planning. The SS ranged from 0.5 σ to 1.5 σ, which is 2–6 mm for the small spot size and 3.3–9.9 mm for the large spot size. Various perturbation scenarios of a single spot error and systematic and random multiple spot errors were studied. To quantify the dosimetric effects, percent dose error (PDE) depth profiles and the value of percent dose error at the maximum dose difference (PDE [ΔDmax]) were used for evaluation. Results: A pair of hot and cold spots was created per spot shift. PDE[ΔDmax] is found to be a complex function of PE, SS, spot size, depth, and global spot distribution that can be well defined in simple models. For volumetric targets, the PDE [ΔDmax] is not noticeably affected by the change of field size or target volume within the studied ranges. In general, reducing SS decreased the dose error. For the facility studied, given a single spot error with a PE of 1.2 mm and for both spot sizes, a SS of 1σ resulted in a 2% maximum dose error; a SS larger than 1.25 σ substantially increased the dose error and its sensitivity to PE. A similar trend was observed in multiple spot errors (both systematic and random errors). Systematic PE can lead to noticeable hot spots along the field edges, which may be near critical structures. However, random PE showed minimal dose error. Conclusions: Dose error dependence for PE was quantitatively and systematically characterized and an analytic tool was built to simulate systematic and random errors for patient-specific IMPT. This information facilitates the determination of facility specific spot position error thresholds.« less
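A deliberately simplified one-dimensional illustration of the single-spot perturbation scenario: a row of Gaussian spots with spacing SS delivers a nominally flat dose, one spot is shifted by the positioning error, and the resulting percent dose error is evaluated. Absolute numbers from this toy model are not expected to match the clinical 3D results quoted above.

```python
import numpy as np

# Illustrative 1-D model (not the clinical system): a row of Gaussian proton
# spots of size sigma placed with spacing ss delivers a nominally flat dose.
sigma = 4.0          # spot size (mm, 1 sigma at isocenter)
ss = 1.0 * sigma     # spot spacing
pe = 1.2             # positioning error applied to a single spot (mm)

x = np.linspace(-60.0, 60.0, 2401)
centers = np.arange(-50.0, 50.0 + ss, ss)

def dose(spot_centers):
    return sum(np.exp(-0.5 * ((x - c) / sigma) ** 2) for c in spot_centers)

nominal = dose(centers)

# Shift one central spot by the positioning error.
perturbed_centers = centers.copy()
perturbed_centers[len(centers) // 2] += pe
perturbed = dose(perturbed_centers)

# Percent dose error relative to the nominal central-axis dose, evaluated
# away from the field edges; the shift creates a paired hot and cold spot.
core = (x > -30) & (x < 30)
pde = 100.0 * (perturbed - nominal) / nominal[np.abs(x).argmin()]
print(f"max |percent dose error| in the field core: {np.abs(pde[core]).max():.2f}%")
```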
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cho, M; Kim, T; Kang, S
Purpose: The purpose of this work is to develop a new patient set-up monitoring system using force sensing resistor (FSR) sensors that can measure the pressure of the contact surface, and to evaluate its feasibility. Methods: In this study, we focused on developing a patient set-up monitoring system that compensates for the limitations of existing optical-based monitoring systems, so that the developed system can report motion during radiation therapy. The set-up monitoring system was designed to consist of sensor units (FSR sensors), signal conditioning devices (USB cable/interface electronics), a control PC, and in-house analysis software. Each sensor unit was made by attaching an FSR sensor to a pressure-dispersing sponge to prevent errors caused by pressure concentrating at a single point. The signal from each FSR sensor was sampled by an Arduino Mega 2560 microcontroller and transferred to the control PC over a serial connection. The measured data went through a normalization process, and the normalized data were displayed through the developed graphical user interface (GUI) software. The software was designed to display either the intensity of a single sensor unit (up to 16 sensors) or a 2D pressure distribution (using 16 sensors), according to the purpose. Results: Changes in pressure according to motion were confirmed by the developed set-up monitoring system. Very small movements, such as slight physical changes in posture, could be detected using a single unit or the 2D pressure distribution. The set-up monitoring system can also observe the patient in real time. Conclusion: In this study, we developed a new set-up monitoring system using FSR sensors. In particular, we expect the new set-up monitoring system to be suitable for motion monitoring of blind areas that are hard to cover with existing optical systems, and thus to complement existing optical-based monitoring systems. As a further study, an integrated system will be constructed through correlation with the existing optical monitoring system. This work was supported by the Industrial R&D program of MOTIE/KEIT [10048997, Development of the core technology for integrated therapy devices based on real-time MRI guided tumor tracking] and the Mid-career Researcher Program (2014R1A2A1A10050270) through the National Research Foundation of Korea funded by the Ministry of Science, ICT & Future Planning.
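A minimal sketch of the control-PC side of such a system: reading comma-separated FSR readings from the microcontroller over a serial link and normalizing them for display. The port name, baud rate, message format and 10-bit ADC range are assumptions, not details taken from the abstract.

```python
import serial  # pyserial

# Assumed serial port and baud rate for the Arduino Mega 2560; adjust to the
# actual setup.  The sketch expects one comma-separated line of raw 10-bit
# ADC readings (0-1023), one value per FSR sensor unit.
PORT, BAUD, N_SENSORS = "/dev/ttyACM0", 115200, 16

def normalize(raw, adc_max=1023.0):
    """Scale raw ADC readings to the 0-1 range used for display."""
    return [min(max(v / adc_max, 0.0), 1.0) for v in raw]

with serial.Serial(PORT, BAUD, timeout=1.0) as ser:
    for _ in range(100):                      # read 100 samples
        line = ser.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue
        raw = [int(v) for v in line.split(",")][:N_SENSORS]
        print(normalize(raw))
```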
Testing the Recognition and Perception of Errors in Context
ERIC Educational Resources Information Center
Brandenburg, Laura C.
2015-01-01
This study tests the recognition of errors in context and whether the presence of errors affects the reader's perception of the writer's ethos. In an experimental, posttest only design, participants were randomly assigned a memo to read in an online survey: one version with errors and one version without. Of the six intentional errors in version…
Exploring Measurement Error with Cookies: A Real and Virtual Approach via Interactive Excel
ERIC Educational Resources Information Center
Sinex, Scott A; Gage, Barbara A.; Beck, Peggy J.
2007-01-01
A simple, guided-inquiry investigation using stacked sandwich cookies is employed to develop a simple linear mathematical model and to explore measurement error by incorporating errors as part of the investigation. Both random and systematic errors are presented. The model and errors are then investigated further by engaging with an interactive…
Reducing the overlay metrology sensitivity to perturbations of the measurement stack
NASA Astrophysics Data System (ADS)
Zhou, Yue; Park, DeNeil; Gutjahr, Karsten; Gottipati, Abhishek; Vuong, Tam; Bae, Sung Yong; Stokes, Nicholas; Jiang, Aiqin; Hsu, Po Ya; O'Mahony, Mark; Donini, Andrea; Visser, Bart; de Ruiter, Chris; Grzela, Grzegorz; van der Laan, Hans; Jak, Martin; Izikson, Pavel; Morgan, Stephen
2017-03-01
Overlay metrology setup today faces a continuously changing landscape of process steps. During Diffraction Based Overlay (DBO) metrology setup, many different metrology target designs are evaluated in order to cover the full process window. The standard method for overlay metrology setup consists of single-wafer optimization in which the performance of all available metrology targets is evaluated. Without the availability of external reference data or multiwafer measurements it is hard to predict the metrology accuracy and robustness against process variations which naturally occur from wafer-to-wafer and lot-to-lot. In this paper, the capabilities of the Holistic Metrology Qualification (HMQ) setup flow are outlined, in particular with respect to overlay metrology accuracy and process robustness. The significance of robustness and its impact on overlay measurements is discussed using multiple examples. Measurement differences caused by slight stack variations across the target area, called grating imbalance, are shown to cause significant errors in the overlay calculation in case the recipe and target have not been selected properly. To this point, an overlay sensitivity check on perturbations of the measurement stack is presented for improvement of the overlay metrology setup flow. An extensive analysis on Key Performance Indicators (KPIs) from HMQ recipe optimization is performed on µDBO measurements of product wafers. The key parameters describing the sensitivity to perturbations of the measurement stack are based on an intra-target analysis. Using advanced image analysis, which is only possible for image plane detection of μDBO instead of pupil plane detection of DBO, the process robustness performance of a recipe can be determined. Intra-target analysis can be applied for a wide range of applications, independent of layers and devices.
Fabrication of ф 160 mm convex hyperbolic mirror for remote sensing instrument
NASA Astrophysics Data System (ADS)
Kuo, Ching-Hsiang; Yu, Zong-Ru; Ho, Cheng-Fang; Hsu, Wei-Yao; Chen, Fong-Zhi
2012-10-01
In this study, efficient polishing processes with inspection procedures for a large convex hyperbolic mirror of a Cassegrain optical system are presented. The polishing process combines the techniques of conventional lapping and CNC polishing. We apply the conventional spherical lapping process to quickly remove the sub-surface damage (SSD) layer caused by the grinding process and, at the same time, to obtain the accurate radius of the best-fit sphere (BFS) of the aspheric surface with fine surface texture. Thus the material removed for the aspherization process can be minimized and the polishing time for SSD removal can also be reduced substantially. The inspection procedure was carried out using a phase-shift interferometer with a CGH and a stitching technique. To acquire the real surface form error of each sub-aperture, the wavefront errors of the reference flat and the CGH flat due to the gravity effect of the vertical setup are calibrated in advance. Subsequently, we stitch 10 calibrated sub-aperture surface form errors to establish the whole irregularity of the mirror over its 160 mm diameter for correction polishing. The final surface form error of the ф160 mm convex hyperbolic mirror is 0.15 μm PV and 17.9 nm RMS.
Shabbir, Javid
2018-01-01
In the present paper we propose an improved class of estimators in the presence of measurement error and non-response under stratified random sampling for estimating the finite population mean. The theoretical and numerical studies reveal that the proposed class of estimators performs better than other existing estimators. PMID:29401519
Perceptions of Randomness: Why Three Heads Are Better than Four
ERIC Educational Resources Information Center
Hahn, Ulrike; Warren, Paul A.
2009-01-01
A long tradition of psychological research has lamented the systematic errors and biases in people's perception of the characteristics of sequences generated by a random mechanism such as a coin toss. It is proposed that once the likely nature of people's actual experience of such processes is taken into account, these "errors" and "biases"…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elliott, C.J.; McVey, B.; Quimby, D.C.
The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of thesemore » errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of {plus minus}25{mu}m, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.« less
Statistical model for speckle pattern optimization.
Su, Yong; Zhang, Qingchuan; Gao, Zeren
2017-11-27
Image registration is the key technique of optical metrologies such as digital image correlation (DIC), particle image velocimetry (PIV), and speckle metrology. Its performance depends critically on the quality of image pattern, and thus pattern optimization attracts extensive attention. In this article, a statistical model is built to optimize speckle patterns that are composed of randomly positioned speckles. It is found that the process of speckle pattern generation is essentially a filtered Poisson process. The dependence of measurement errors (including systematic errors, random errors, and overall errors) upon speckle pattern generation parameters is characterized analytically. By minimizing the errors, formulas of the optimal speckle radius are presented. Although the primary motivation is from the field of DIC, we believed that scholars in other optical measurement communities, such as PIV and speckle metrology, will benefit from these discussions.
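A small sketch of the speckle-pattern generation process described above as a filtered Poisson process: a Poisson-distributed number of speckles, uniformly random positions, and a Gaussian kernel per speckle. The density and speckle radius are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(42)

def speckle_pattern(height=256, width=256, density=0.02, radius=2.0):
    """Generate a synthetic speckle image: the number of speckles follows a
    Poisson law, positions are uniform, and each speckle is a Gaussian blob
    of the given radius (a filtered Poisson process)."""
    n_speckles = rng.poisson(density * height * width)
    xs = rng.uniform(0, width, n_speckles)
    ys = rng.uniform(0, height, n_speckles)
    yy, xx = np.mgrid[0:height, 0:width]
    image = np.zeros((height, width))
    for x0, y0 in zip(xs, ys):
        image += np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2.0 * radius**2))
    return image / image.max()

pattern = speckle_pattern()
print(pattern.shape, f"mean intensity = {pattern.mean():.3f}")
```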
Biaxial Anisotropic Material Development and Characterization using Rectangular to Square Waveguide
2015-03-26
Figure 29: Measurement setup with test port cables and network analyzer. The VNA and the waveguide adapters are torqued to specification with calibrated torque wrenches, and the waveguide flanges are aligned using precision alignment pins. A TRL calibration is performed prior to measuring the sample. … set to 0.0001. This enables the frequency domain solver to refine the mesh until the tolerance is achieved. Tightening the error tolerance results in…
The decline and fall of Type II error rates
Steve Verrill; Mark Durst
2005-01-01
For general linear models with normally distributed random errors, the probability of a Type II error decreases exponentially as a function of sample size. This potentially rapid decline reemphasizes the importance of performing power calculations.
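A minimal numerical illustration of this decline, using a one-sided z-test with known variance rather than the general linear model treated in the paper: the Type II error shrinks roughly exponentially as the sample size grows.

```python
import numpy as np
from scipy.stats import norm

# Type II error of a one-sided z-test for a shift delta (in units of the
# error standard deviation), illustrating the rapid decline with n.
alpha, delta = 0.05, 0.5
z_crit = norm.ppf(1.0 - alpha)

for n in (5, 10, 20, 40, 80):
    beta = norm.cdf(z_crit - np.sqrt(n) * delta)
    print(f"n = {n:3d}  Type II error = {beta:.2e}")
```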
Asymmetric Memory Circuit Would Resist Soft Errors
NASA Technical Reports Server (NTRS)
Buehler, Martin G.; Perlman, Marvin
1990-01-01
Some nonlinear error-correcting codes are more efficient in the presence of asymmetry. A combination of circuit-design and coding concepts is expected to make integrated-circuit random-access memories more resistant to "soft" errors (temporary bit errors, also called "single-event upsets," due to ionizing radiation). An integrated circuit of the new type is made deliberately more susceptible to one kind of bit error than to the other, and the associated error-correcting code is adapted to exploit this asymmetry in error probabilities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalapurakal, John A., E-mail: j-kalapurakal@northwestern.edu; Zafirovski, Aleksandar; Smith, Jeffery
Purpose: This report describes the value of a voluntary error reporting system and the impact of a series of quality assurance (QA) measures including checklists and timeouts on reported error rates in patients receiving radiation therapy. Methods and Materials: A voluntary error reporting system was instituted with the goal of recording errors, analyzing their clinical impact, and guiding the implementation of targeted QA measures. In response to errors committed in relation to treatment of the wrong patient, wrong treatment site, and wrong dose, a novel initiative involving the use of checklists and timeouts for all staff was implemented. The impactmore » of these and other QA initiatives was analyzed. Results: From 2001 to 2011, a total of 256 errors in 139 patients after 284,810 external radiation treatments (0.09% per treatment) were recorded in our voluntary error database. The incidence of errors related to patient/tumor site, treatment planning/data transfer, and patient setup/treatment delivery was 9%, 40.2%, and 50.8%, respectively. The compliance rate for the checklists and timeouts initiative was 97% (P<.001). These and other QA measures resulted in a significant reduction in many categories of errors. The introduction of checklists and timeouts has been successful in eliminating errors related to wrong patient, wrong site, and wrong dose. Conclusions: A comprehensive QA program that regularly monitors staff compliance together with a robust voluntary error reporting system can reduce or eliminate errors that could result in serious patient injury. We recommend the adoption of these relatively simple QA initiatives including the use of checklists and timeouts for all staff to improve the safety of patients undergoing radiation therapy in the modern era.« less
Effects of monetary reward and punishment on information checking behaviour: An eye-tracking study.
Li, Simon Y W; Cox, Anna L; Or, Calvin; Blandford, Ann
2018-07-01
The aim of the present study was to investigate the effect of error consequence, as reward or punishment, on individuals' checking behaviour following data entry. This study comprised two eye-tracking experiments that replicate and extend the investigation of Li et al. (2016) into the effect of monetary reward and punishment on data-entry performance. The first experiment adopted the same experimental setup as Li et al. (2016) but additionally used an eye tracker. The experiment validated Li et al. (2016) finding that, when compared to no error consequence, both reward and punishment led to improved data-entry performance in terms of reducing errors, and that no performance difference was found between reward and punishment. The second experiment extended the earlier study by associating error consequence to each individual trial by providing immediate performance feedback to participants. It was found that gradual increment (i.e. reward feedback) also led to significantly more accurate performance than no error consequence. It is unclear whether gradual increment is more effective than gradual decrement because of the small sample size tested. However, this study reasserts the effectiveness of reward on data-entry performance. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Khallaf, Haitham S.; Elfiqi, Abdulaziz E.; Shalaby, Hossam M. H.; Sampei, Seiichi; Obayya, Salah S. A.
2018-06-01
We investigate the performance of hybrid L-ary quadrature-amplitude modulation-multi-pulse pulse-position modulation (LQAM-MPPM) techniques over exponentiated Weibull (EW) fading free-space optical (FSO) channel, considering both weather and pointing-error effects. Upper bound and approximate-tight upper bound expressions for the bit-error rate (BER) of LQAM-MPPM techniques over EW FSO channels are obtained, taking into account the effects of fog, beam divergence, and pointing-error. Setup block diagram for both the transmitter and receiver of the LQAM-MPPM/FSO system are introduced and illustrated. The BER expressions are evaluated numerically and the results reveal that LQAM-MPPM technique outperforms ordinary LQAM and MPPM schemes under different fading levels and weather conditions. Furthermore, the effect of modulation-index is investigated and it turned out that a modulation-index greater than 0.4 is required in order to optimize the system performance. Finally, the effect of pointing-error introduces a great power penalty on the LQAM-MPPM system performance. Specifically, at a BER of 10-9, pointing-error introduces power penalties of about 45 and 28 dB for receiver aperture sizes of DR = 50 and 200 mm, respectively.
NASA Technical Reports Server (NTRS)
Olson, William S.; Kummerow, Christian D.; Yang, Song; Petty, Grant W.; Tao, Wei-Kuo; Bell, Thomas L.; Braun, Scott A.; Wang, Yansen; Lang, Stephen E.; Johnson, Daniel E.
2004-01-01
A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating/drying profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and non-convective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud resolving model simulations, and from the Bayesian formulation itself. Synthetic rain rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in instantaneous rain rate estimates at 0.5 deg resolution range from approximately 50% at 1 mm/h to 20% at 14 mm/h. These errors represent about 70-90% of the mean random deviation between collocated passive microwave and spaceborne radar rain rate estimates. The cumulative algorithm error in TMI estimates at monthly, 2.5 deg resolution is relatively small (less than 6% at 5 mm/day) compared to the random error due to infrequent satellite temporal sampling (8-35% at the same rain rate).
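A toy sketch of the database-search step described above: candidate profiles are weighted by their radiative consistency with the observed brightness temperatures (a Gaussian kernel with an assumed diagonal observation-error covariance) and composited to give a best estimate and a spread. The database here is random filler, not a cloud-resolving-model simulation.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy database of simulated profiles: each entry has a brightness-temperature
# vector (what the radiometer would see) and an associated surface rain rate.
n_db, n_channels = 10000, 9
db_tb = rng.normal(250.0, 15.0, size=(n_db, n_channels))
db_rain = rng.gamma(shape=1.2, scale=2.0, size=n_db)      # mm/h

# Assumed diagonal observation-error standard deviations for the radiances.
obs_sigma = np.full(n_channels, 2.0)

def bayesian_retrieval(observed_tb):
    """Composite the database entries weighted by their radiative consistency
    with the observation (Gaussian kernel in radiance space)."""
    chi2 = np.sum(((db_tb - observed_tb) / obs_sigma) ** 2, axis=1)
    weights = np.exp(-0.5 * chi2)
    weights /= weights.sum()
    estimate = np.sum(weights * db_rain)
    spread = np.sqrt(np.sum(weights * (db_rain - estimate) ** 2))
    return estimate, spread  # best estimate and a random-error proxy

obs = db_tb[0] + rng.normal(0.0, 2.0, n_channels)
rate, err = bayesian_retrieval(obs)
print(f"retrieved rain rate = {rate:.2f} mm/h +/- {err:.2f}")
```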
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yueqi; Lava, Pascal; Reu, Phillip
This study presents a theoretical uncertainty quantification of displacement measurements by subset-based 2D-digital image correlation. A generalized solution to estimate the random error of displacement measurement is presented. The obtained solution suggests that the random error of displacement measurements is determined by the image noise, the summation of the intensity gradient in a subset, the subpixel part of displacement, and the interpolation scheme. The proposed method is validated with virtual digital image correlation tests.
Wang, Yueqi; Lava, Pascal; Reu, Phillip; ...
2015-12-23
This study presents a theoretical uncertainty quantification of displacement measurements by subset-based 2D-digital image correlation. A generalized solution to estimate the random error of displacement measurement is presented. The obtained solution suggests that the random error of displacement measurements is determined by the image noise, the summation of the intensity gradient in a subset, the subpixel part of displacement, and the interpolation scheme. The proposed method is validated with virtual digital image correlation tests.
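As a rough illustration of the dependence described above (image noise over the summed squared intensity gradients of the subset), the sketch below evaluates a commonly cited leading-order estimate of the displacement random error; the exact expression and correction terms derived in the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic reference image and a chosen correlation subset.
image = rng.uniform(0.0, 255.0, size=(200, 200))
subset = image[80:111, 80:111]            # 31 x 31 pixel subset
noise_sigma = 2.0                          # grey-level noise std of the camera

# x-direction intensity gradients inside the subset (central differences).
grad_x = np.gradient(subset, axis=1)

# Leading-order estimate of the random error of the horizontal displacement:
# proportional to the image noise and inversely proportional to the square
# root of the summed squared gradients in the subset.
sigma_u = np.sqrt(2.0) * noise_sigma / np.sqrt(np.sum(grad_x**2))
print(f"predicted displacement random error ~ {sigma_u:.4f} pixel")
```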
Qualified measurement setup of polarization extinction ratio for Panda PMF with LC/UPC connector
NASA Astrophysics Data System (ADS)
Thongdaeng, Rutsuda; Worasucheep, Duang-rudee; Ngiwprom, Adisak
2018-03-01
Polarization Extinction Ratio (PER) is one of the key parameters for Polarization Maintaining Fiber (PMF) connectors. Based on our previous studies, a fiber bending radius greater than 1.5 cm does not affect the insertion loss of PMF [1]. Moreover, the measured PER of Panda PMF with LC/UPC connectors is more stable when the PMF is coiled around a hot rod of at least 3 cm in diameter at a temperature of 75°C [2]. Hence, a less constrained hot rod of 6 cm in diameter at a constant 75°C was selected for this PER measurement. Two PER setups were verified and compared for measuring LC/UPC PMF connectors. The Polarized Laser Source (PLS) at 1550 nm wavelength and the PER meter from OZ Optics were used in both setups, in which the measured connector was connected to the PLS at a 0° angle while the other end was connected to the PER meter. In order to qualify our setups, the percentage of Repeatability and Reproducibility (%R&R) was tested and calculated. In each setup, the PER measurement was repeated for 3 trials by 3 appraisers using 10 LC/UPC PMF connectors (on 5 LC/UPC PMF patchcords, 3.5 +/- 0.5 meters in length) in random order. In the 1st setup, the PMF was coiled at a larger 20-cm diameter for 3 to 5 loops and left at room temperature during the test. In the 2nd setup, the PMF was coiled around a hot rod at a constant 75°C with 6-cm diameter for 8 to 10 loops for at least 5 minutes before testing. There are 3 ranges in the %R&R acceptance guideline: <10% is acceptable, 10%-30% is marginal, and >30% is unacceptable. According to our results, the %R&R of the 1st PER test setup was 16.2% (marginal), and that of the 2nd PER test setup was 8.9% (acceptable). Thus, providing better repeatability and reproducibility, the 2nd PER test setup, with the PMF coiled around a hot rod at a constant 75°C with 6-cm diameter, was selected for our next study of the impact of hot temperature on the PER of LC/UPC PMF connectors.
Chan, Mark; Chiang, Chi Leung; Lee, Venus; Cheung, Steven; Leung, Ronnie; Wong, Matthew; Lee, Frankle; Blanck, Oliver
2017-01-01
The aim of this study was to comparatively evaluate the accuracy of respiration-correlated (4D) and uncorrelated (3D) cone beam computed tomography (CBCT) in localizing lipiodolized hepatocellular carcinomas (HCC) during stereotactic body radiotherapy (SBRT). 4D-CBCT scans of eighteen HCCs were acquired during free-breathing SBRT following trans-arterial chemo-embolization (TACE) with lipiodol. Approximately 1320 x-ray projections per 4D-CBCT were collected and phase-sorted into ten bins. A 4D registration workflow was followed to register the reconstructed time-weighted average CBCT with the planning mid-ventilation (MidV) CT, by an initial bone registration of the vertebrae and then a tissue registration of the lipiodol. For comparison, the projections of each 4D-CBCT were combined to synthesize a 3D-CBCT without phase-sorting. Using the lipiodolized tumor, uncertainties of the treatment setup estimated from the absolute and relative lipiodol position with respect to bone were analyzed separately for 4D- and 3D-CBCT. Qualitatively, 3D-CBCT showed better lipiodol contrast than 4D-CBCT, primarily because of the tenfold increase in the number of projections used for reconstruction. Motion artifact was observed to subside in 4D-CBCT compared to 3D-CBCT. Group mean, systematic and random errors estimated from 4D- and 3D-CBCT agreed to within 1 mm in the cranio-caudal (CC) and 0.5 mm in the anterior-posterior (AP) and left-right (LR) directions. Systematic and random errors are largest in the CC direction, amounting to 4.7 mm and 3.7 mm from 3D-CBCT and 5.6 mm and 3.8 mm from 4D-CBCT, respectively. The safety margins calculated from 3D-CBCT and 4D-CBCT differed by 2.1, 0.1 and 0.0 mm in the CC, AP, and LR directions. 3D-CBCT is an adequate alternative to 4D-CBCT when lipiodol is used for localizing HCC during free-breathing SBRT. Similar margins are anticipated with 3D- and 4D-CBCT.
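The abstract does not state which margin recipe was applied; the sketch below simply assumes the widely used van Herk formula M = 2.5*Sigma + 0.7*sigma per axis and applies it to the cranio-caudal values quoted above, so the printed margins are illustrative and need not match the reported 2.1 mm difference exactly.

```python
def van_herk_margin(sigma_systematic, sigma_random):
    """PTV margin recipe M = 2.5*Sigma + 0.7*sigma (values in mm)."""
    return 2.5 * sigma_systematic + 0.7 * sigma_random

# Cranio-caudal errors reported above for the two localization methods (mm).
cc_3d = van_herk_margin(4.7, 3.7)
cc_4d = van_herk_margin(5.6, 3.8)
print(f"CC margin, 3D-CBCT: {cc_3d:.1f} mm; 4D-CBCT: {cc_4d:.1f} mm; "
      f"difference: {cc_4d - cc_3d:.1f} mm")
```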
DOE Office of Scientific and Technical Information (OSTI.GOV)
Udrescu, Corina; Mornex, Francoise, E-mail: francoise.mornex@chu-lyon.fr; Tanguy, Ronan
2013-01-01
Purpose: The intrafraction verification provided by ExacTrac X-ray 6D Snap Verification (ET-SV) allows the tracking of potential isocenter displacements throughout patient position and treatment. The aims of this study were (1) to measure the intrafraction variations of the isocenter position (random errors); (2) to study the amplitude of the variation related to the fraction duration; and (3) to assess the impact of the table movement on positioning uncertainties. Methods and Materials: ET-SV uses images acquired before or during treatment delivery or both to detect isocenter displacement. Twenty patients treated with stereotactic body radiation therapy (SBRT) for lung tumors underwent SVmore » before or during each beam. Noncoplanar beams were sometimes necessary. The time between the setup of the patient and each SV was noted, and values of deviations were compiled for 3 SV time groups: SV performed at {<=}10 min (group 1), between 11 and 20 min (group 2), and {>=}21 min (group 3). Random errors in positioning during the use of noncoplanar fields were noted. Results: The mean isocenter deviation {+-}SD was 2 {+-} 0.5 mm (range, 1-8 mm). The average deviations {+-}SD increased significantly from 1.6 {+-} 0.5 mm to 2.1 {+-} 0.8 mm and 2.2 {+-} 0.6 mm for groups 1, 2, and 3 (P=.002), respectively. Percentages of deviation {>=}3 mm were 7.06%, 22.83%, and 28.07% and 1.08%, 4.15%, and 8.4% for {>=}5 mm (P<.0001). For 11 patients, table rotation was necessary. The mean isocenter deviation {+-}SD increased significantly from 1.9 {+-} 0.5 mm before table rotation to 2.7 {+-} 0.5 mm (P=.001) for the first beam treated after rotation. Conclusions: SV detects isocenter deviations, which increase in amplitude and frequency with the fraction duration, and enables intrafraction verification for SBRT (taking into account clinical condition and technical issues). SV gives accurate targeting at any time during irradiation and may raise confidence to escalate the dose. SV appears to be an important tool for ensuring the quality control of SBRT.« less
NASA Astrophysics Data System (ADS)
Semenov, Z. V.; Labusov, V. A.
2017-11-01
Results of studying the errors of indirect monitoring by means of computer simulations are reported. The monitoring method is based on measuring spectra of reflection from additional monitoring substrates in a wide spectral range. Special software (Deposition Control Simulator) is developed, which allows one to estimate the influence of the monitoring system parameters (noise of the photodetector array, operating spectral range of the spectrometer and errors of its calibration in terms of wavelengths, drift of the radiation source intensity, and errors in the refractive index of deposited materials) on the random and systematic errors of deposited layer thickness measurements. The direct and inverse problems of multilayer coatings are solved using the OptiReOpt library. Curves of the random and systematic errors of measurements of the deposited layer thickness as functions of the layer thickness are presented for various values of the system parameters. Recommendations are given on using the indirect monitoring method for the purpose of reducing the layer thickness measurement error.
NASA Astrophysics Data System (ADS)
Sun, Hong; Wu, Qian-zhong
2013-09-01
In order to improve the precision of an optical-electric tracking device, an improved device based on MEMS sensors is proposed to address the tracking error and random drift of the gyroscope sensor. Following the principles of time-series analysis of random sequences, an AR model of the gyro random error is established and the gyro output signals are repeatedly filtered with a Kalman filter. An ARM microcontroller controls the servo motor using a fuzzy PID full closed-loop control algorithm, with lead-correction and feed-forward links added to reduce the response lag to angle inputs: the feed-forward path allows the output to follow the input closely, while the lead-compensation link shortens the response to input signals and thereby reduces errors. A wireless video monitoring module and remote monitoring software (Visual Basic 6.0) are used to observe the servo motor state in real time: the video module gathers video signals and transmits them wirelessly to the host computer, which displays the motor running state in the Visual Basic 6.0 window. At the same time, the main error sources are analyzed in detail. A quantitative analysis of the errors arising from the bandwidth and the gyro sensor makes the proportion of each error in the total error more intuitive and, consequently, allows the system error to be decreased. Simulation and experimental results show that the system has good tracking characteristics and is valuable for engineering applications.
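A minimal sketch of the drift-filtering idea: an AR(1) model (the simplest AR model; the paper's actual model order and parameters are not given) drives a scalar Kalman filter applied to simulated gyro output at rest.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated gyro output at rest: an AR(1) random drift plus white measurement noise.
n, a, q, r = 2000, 0.98, 0.0004, 0.04   # AR coefficient, process and measurement variances
drift = np.zeros(n)
for k in range(1, n):
    drift[k] = a * drift[k - 1] + rng.normal(0.0, np.sqrt(q))
gyro = drift + rng.normal(0.0, np.sqrt(r), n)

# Scalar Kalman filter built on the AR(1) drift model.
x_est, p_est = 0.0, 1.0
filtered = np.empty(n)
for k in range(n):
    # Predict.
    x_pred = a * x_est
    p_pred = a * a * p_est + q
    # Update with the current gyro sample.
    gain = p_pred / (p_pred + r)
    x_est = x_pred + gain * (gyro[k] - x_pred)
    p_est = (1.0 - gain) * p_pred
    filtered[k] = x_est

print(f"raw RMS error      : {np.sqrt(np.mean((gyro - drift) ** 2)):.4f}")
print(f"filtered RMS error : {np.sqrt(np.mean((filtered - drift) ** 2)):.4f}")
```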
Calibrating page sized Gafchromic EBT3 films
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crijns, W.; Maes, F.; Heide, U. A. van der
2013-01-15
Purpose: The purpose is the development of a novel calibration method for dosimetry with Gafchromic EBT3 films. The method should be applicable for pretreatment verification of volumetric modulated arc, and intensity modulated radiotherapy. Because the exposed area on film can be large for such treatments, lateral scan errors must be taken into account. The correction for the lateral scan effect is obtained from the calibration data itself. Methods: In this work, the film measurements were modeled using their relative scan values (Transmittance, T). Inside the transmittance domain a linear combination and a parabolic lateral scan correction described the observed transmittancemore » values. The linear combination model, combined a monomer transmittance state (T{sub 0}) and a polymer transmittance state (T{sub {infinity}}) of the film. The dose domain was associated with the observed effects in the transmittance domain through a rational calibration function. On the calibration film only simple static fields were applied and page sized films were used for calibration and measurements (treatment verification). Four different calibration setups were considered and compared with respect to dose estimation accuracy. The first (I) used a calibration table from 32 regions of interest (ROIs) spread on 4 calibration films, the second (II) used 16 ROIs spread on 2 calibration films, the third (III), and fourth (IV) used 8 ROIs spread on a single calibration film. The calibration tables of the setups I, II, and IV contained eight dose levels delivered to different positions on the films, while for setup III only four dose levels were applied. Validation was performed by irradiating film strips with known doses at two different time points over the course of a week. Accuracy of the dose response and the lateral effect correction was estimated using the dose difference and the root mean squared error (RMSE), respectively. Results: A calibration based on two films was the optimal balance between cost effectiveness and dosimetric accuracy. The validation resulted in dose errors of 1%-2% for the two different time points, with a maximal absolute dose error around 0.05 Gy. The lateral correction reduced the RMSE values on the sides of the film to the RMSE values at the center of the film. Conclusions: EBT3 Gafchromic films were calibrated for large field dosimetry with a limited number of page sized films and simple static calibration fields. The transmittance was modeled as a linear combination of two transmittance states, and associated with dose using a rational calibration function. Additionally, the lateral scan effect was resolved in the calibration function itself. This allows the use of page sized films. Only two calibration films were required to estimate both the dose and the lateral response. The calibration films were used over the course of a week, with residual dose errors Less-Than-Or-Slanted-Equal-To 2% or Less-Than-Or-Slanted-Equal-To 0.05 Gy.« less
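The calibration function itself is not given in the abstract; the sketch below assumes a generic rational form D(T) = a + b/(T - c) purely to illustrate how a rational dose-versus-transmittance calibration can be fitted to a handful of calibration ROIs with known doses.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)

def dose_from_T(T, a, b, c):
    """Assumed rational calibration function mapping transmittance to dose."""
    return a + b / (T - c)

# Synthetic calibration points: known delivered doses (Gy) and the
# corresponding measured transmittances of the ROIs, with a little noise.
true = (-0.5, 0.4, 0.05)
dose_levels = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0, 8.0])
T_meas = true[2] + true[1] / (dose_levels - true[0])
T_meas += rng.normal(0.0, 0.002, T_meas.size)

# Fit the rational calibration function to the calibration ROIs.
params, _ = curve_fit(dose_from_T, T_meas, dose_levels, p0=(-0.5, 0.4, 0.0))

# Apply the calibration to a measured film transmittance.
print("fitted (a, b, c):", np.round(params, 3))
print(f"dose at T = 0.25: {dose_from_T(0.25, *params):.2f} Gy")
```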
Quantum stopwatch: how to store time in a quantum memory.
Yang, Yuxiang; Chiribella, Giulio; Hayashi, Masahito
2018-05-01
Quantum mechanics imposes a fundamental trade-off between the accuracy of time measurements and the size of the systems used as clocks. When the measurements of different time intervals are combined, the errors due to the finite clock size accumulate, resulting in an overall inaccuracy that grows with the complexity of the set-up. Here, we introduce a method that, in principle, eludes the accumulation of errors by coherently transferring information from a quantum clock to a quantum memory of the smallest possible size. Our method could be used to measure the total duration of a sequence of events with enhanced accuracy, and to reduce the amount of quantum communication needed to stabilize clocks in a quantum network.
Calvo-Ortega, Juan-Francisco; Hermida-López, Marcelino; Moragues-Femenía, Sandra; Pozo-Massó, Miquel; Casals-Farran, Joan
2017-03-01
To evaluate the spatial accuracy of a frameless cone-beam computed tomography (CBCT)-guided cranial radiosurgery (SRS) using an end-to-end (E2E) phantom test methodology. Five clinical SRS plans were mapped to an acrylic phantom containing a radiochromic film. The resulting phantom-based plans (E2E plans) were delivered four times. The phantom was setup on the treatment table with intentional misalignments, and CBCT-imaging was used to align it prior to E2E plan delivery. Comparisons (global gamma analysis) of the planned and delivered dose to the film were performed using a commercial triple-channel film dosimetry software. The necessary distance-to-agreement to achieve a 95% (DTA95) gamma passing rate for a fixed 3% dose difference provided an estimate of the spatial accuracy of CBCT-guided SRS. Systematic (∑) and random (σ) error components, as well as 95% confidence levels were derived for the DTA95 metric. The overall systematic spatial accuracy averaged over all tests was 1.4mm (SD: 0.2mm), with a corresponding 95% confidence level of 1.8mm. The systematic (Σ) and random (σ) spatial components of the accuracy derived from the E2E tests were 0.2mm and 0.8mm, respectively. The E2E methodology used in this study allowed an estimation of the spatial accuracy of our CBCT-guided SRS procedure. Subsequently, a PTV margin of 2.0mm is currently used in our department. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Error Sources in Asteroid Astrometry
NASA Technical Reports Server (NTRS)
Owen, William M., Jr.
2000-01-01
Asteroid astrometry, like any other scientific measurement process, is subject to both random and systematic errors, not all of which are under the observer's control. To design an astrometric observing program or to improve an existing one requires knowledge of the various sources of error, how different errors affect one's results, and how various errors may be minimized by careful observation or data reduction techniques.
Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A
2017-11-01
Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.
Accuracy of UTE-MRI-based patient setup for brain cancer radiation therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Yingli; Cao, Minsong; Kaprealian, Tania
2016-01-15
Purpose: Radiation therapy simulations solely based on MRI have advantages compared to CT-based approaches. One feature readily available from computed tomography (CT) that would need to be reproduced with MR is the ability to compute digitally reconstructed radiographs (DRRs) for comparison against on-board radiographs commonly used for patient positioning. In this study, the authors generate MR-based bone images using a single ultrashort echo time (UTE) pulse sequence and quantify their 3D and 2D image registration accuracy to CT and radiographic images for treatments in the cranium. Methods: Seven brain cancer patients were scanned at 1.5 T using a radial UTEmore » sequence. The sequence acquired two images at two different echo times. The two images were processed using an in-house software to generate the UTE bone images. The resultant bone images were rigidly registered to simulation CT data and the registration error was determined using manually annotated landmarks as references. DRRs were created based on UTE-MRI and registered to simulated on-board images (OBIs) and actual clinical 2D oblique images from ExacTrac™. Results: UTE-MRI resulted in well visualized cranial, facial, and vertebral bones that quantitatively matched the bones in the CT images with geometric measurement errors of less than 1 mm. The registration error between DRRs generated from 3D UTE-MRI and the simulated 2D OBIs or the clinical oblique x-ray images was also less than 1 mm for all patients. Conclusions: UTE-MRI-based DRRs appear to be promising for daily patient setup of brain cancer radiotherapy with kV on-board imaging.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Velec, Michael, E-mail: michael.velec@rmp.uhn.on.ca; Institute of Medical Science, University of Toronto, Toronto, ON; Moseley, Joanne L.
2012-07-15
Purpose: To investigate the accumulated dose deviations to tumors and normal tissues in liver stereotactic body radiotherapy (SBRT) and investigate their geometric causes. Methods and Materials: Thirty previously treated liver cancer patients were retrospectively evaluated. Stereotactic body radiotherapy was planned on the static exhale CT for 27-60 Gy in 6 fractions, and patients were treated in free-breathing with daily cone-beam CT guidance. Biomechanical model-based deformable image registration accumulated dose over both the planning four-dimensional (4D) CT (predicted breathing dose) and also over each fraction's respiratory-correlated cone-beam CT (accumulated treatment dose). The contribution of different geometric errors to changes between themore » accumulated and predicted breathing dose were quantified. Results: Twenty-one patients (70%) had accumulated dose deviations relative to the planned static prescription dose >5%, ranging from -15% to 5% in tumors and -42% to 8% in normal tissues. Sixteen patients (53%) still had deviations relative to the 4D CT-predicted dose, which were similar in magnitude. Thirty-two tissues in these 16 patients had deviations >5% relative to the 4D CT-predicted dose, and residual setup errors (n = 17) were most often the largest cause of the deviations, followed by deformations (n = 8) and breathing variations (n = 7). Conclusion: The majority of patients had accumulated dose deviations >5% relative to the static plan. Significant deviations relative to the predicted breathing dose still occurred in more than half the patients, commonly owing to residual setup errors. Accumulated SBRT dose may be warranted to pursue further dose escalation, adaptive SBRT, and aid in correlation with clinical outcomes.« less
Health plan auditing: 100-percent-of-claims vs. random-sample audits.
Sillup, George P; Klimberg, Ronald K
2011-01-01
The objective of this study was to examine the relative efficacy of two different methodologies for auditing self-funded medical claim expenses: 100-percent-of-claims auditing versus random-sampling auditing. Multiple data sets of claim errors or 'exceptions' from two Fortune-100 corporations were analysed and compared to 100 simulated audits of 300- and 400-claim random samples. Random-sample simulations failed to identify a significant number and amount of the errors that ranged from $200,000 to $750,000. These results suggest that health plan expenses of corporations could be significantly reduced if they audited 100% of claims and embraced a zero-defect approach.
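A small Monte Carlo sketch of the comparison, with an entirely hypothetical claim population and error distribution: it shows how little of the total error value a single 300-claim random sample can be expected to surface relative to a 100-percent audit.

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical claim population: 60,000 claims, 2% contain a payment error
# with a lognormal dollar amount (all figures are illustrative).
n_claims, error_rate = 60_000, 0.02
is_error = rng.random(n_claims) < error_rate
error_amount = np.where(is_error, rng.lognormal(mean=5.0, sigma=1.2, size=n_claims), 0.0)
total_error_dollars = error_amount.sum()

# 100 simulated audits of a 300-claim random sample each: how much of the
# total error value does a sample audit actually see?
sample_size, n_audits = 300, 100
seen_fraction = []
for _ in range(n_audits):
    sample = rng.choice(n_claims, size=sample_size, replace=False)
    seen_fraction.append(error_amount[sample].sum() / total_error_dollars)

print(f"total error in population: ${total_error_dollars:,.0f}")
print(f"share of error dollars found per 300-claim audit: "
      f"median {np.median(seen_fraction):.2%}, max {np.max(seen_fraction):.2%}")
```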
Using GPU parallelization to perform realistic simulations of the LPCTrap experiments
NASA Astrophysics Data System (ADS)
Fabian, X.; Mauger, F.; Quéméner, G.; Velten, Ph.; Ban, G.; Couratin, C.; Delahaye, P.; Durand, D.; Fabre, B.; Finlay, P.; Fléchard, X.; Liénard, E.; Méry, A.; Naviliat-Cuncic, O.; Pons, B.; Porobic, T.; Severijns, N.; Thomas, J. C.
2015-11-01
The LPCTrap setup is a sensitive tool to measure the β-ν angular correlation coefficient, a_{βν}, which can yield the mixing ratio ρ of a β decay transition. The latter enables the extraction of the Cabibbo-Kobayashi-Maskawa (CKM) matrix element V_{ud}. In such a measurement, the most relevant observable is the energy distribution of the recoiling daughter nuclei following the nuclear β decay, which is obtained using a time-of-flight technique. In order to maximize the precision, one can reduce the systematic errors through a thorough simulation of the whole setup, especially with a correct model of the trapped ion cloud. This paper presents such a simulation package and focuses on the ion cloud features; particular attention is therefore paid to realistic descriptions of trapping field dynamics, buffer gas cooling and the N-body space charge effects.
External cavity diode laser setup with two interference filters
NASA Astrophysics Data System (ADS)
Martin, Alexander; Baus, Patrick; Birkl, Gerhard
2016-12-01
We present an external cavity diode laser setup using two identical, commercially available interference filters operated in the blue wavelength range around 450 nm. The combination of the two filters decreases the transmission width, while increasing the edge steepness without a significant reduction in peak transmittance. Due to the broad spectral transmission of these interference filters compared to the internal mode spacing of blue laser diodes, an additional locking scheme, based on Hänsch-Couillaud locking to a cavity, has been added to improve the stability. The laser is stabilized to a line in the tellurium spectrum via saturation spectroscopy, and single-frequency operation for a duration of two days is demonstrated by monitoring the error signal of the lock and the piezo drive compensating the length change of the external resonator due to air pressure variations. Additionally, transmission curves of the filters and the spectra of a sample of diodes are given.
Physics, ballistics, and psychology: a history of the chronoscope in/as context, 1845-1890.
Schmidgen, Henning
2005-02-01
In Wilhelm Wundt's (1832-1920) Leipzig laboratory and at numerous other research sites, the chronoscope was used to conduct reaction time experiments. The author argues that the history of the chronoscope is the history not of an instrument but of an experimental setup. This setup was initially devised by the English physicist and instrument maker Charles Wheatstone (1802-1875) in the early 1840s. Shortly thereafter, it was improved by the German clockmaker and mechanic Matthäus Hipp (1813-1893). In the 1850s, the chronoscope was introduced to ballistic research. In the early 1860s, Neuchâtel astronomer Adolphe Hirsch (1830-1901) applied it to the problem of physiological time. The extensions and variations of chronoscope use within the contexts of ballistics, physiology, and psychology presented special challenges. These challenges were met with specific attempts to reduce the errors in chronoscopic experiments on shooting stands and in the psychological laboratory.
The Adiabatic Theorem and Linear Response Theory for Extended Quantum Systems
NASA Astrophysics Data System (ADS)
Bachmann, Sven; De Roeck, Wojciech; Fraas, Martin
2018-03-01
The adiabatic theorem refers to a setup where an evolution equation contains a time-dependent parameter whose change is very slow, measured by a vanishing parameter ɛ. Under suitable assumptions the solution of the time-inhomogeneous equation stays close to an instantaneous fixpoint. In the present paper, we prove an adiabatic theorem with an error bound that is independent of the number of degrees of freedom. Our setup is that of quantum spin systems where the manifold of ground states is separated from the rest of the spectrum by a spectral gap. One important application is the proof of the validity of linear response theory for such extended, genuinely interacting systems. In general, this is a long-standing mathematical problem, which can be solved in the present particular case of a gapped system, relevant e.g. for the integer quantum Hall effect.
Magnetostriction measurement by four probe method
NASA Astrophysics Data System (ADS)
Dange, S. N.; Radha, S.
2018-04-01
The present paper describes the design and setting up of an indigenously developed magnetostriction (MS) measurement setup using the four probe method at room temperature. A standard strain gauge is pasted with a special glue on the sample, and its change in resistance with applied magnetic field is measured using a Keithley nanovoltmeter and current source. An electromagnet with a field of up to 1.2 tesla is used to source the magnetic field. The sample is placed between the magnet poles using a self-designed and developed wooden probe stand, capable of moving in three mutually perpendicular directions. The nanovoltmeter and current source are interfaced with a PC using an RS232 serial interface. Software has been developed for logging and processing of the data. Proper optimization of the measurement has been done through software to reduce the noise due to thermal emf and electromagnetic induction. The data acquired for some standard magnetic samples are presented. The sensitivity of the setup is 1 microstrain, with an error in measurement of up to 5%.
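As a quick illustration of how such a strain-gauge reading is reduced to microstrain, the sketch below applies the usual gauge-factor relation ΔR/R = GF·ε; the gauge factor and resistance values are assumed for the example, not taken from the paper.

```python
def microstrain(delta_r_ohm, r_gauge_ohm, gauge_factor=2.0):
    """Convert a strain-gauge resistance change to strain in microstrain.
    The gauge factor GF relates fractional resistance change to strain:
        dR/R = GF * (dL/L)  =>  strain = (dR/R) / GF
    """
    strain = (delta_r_ohm / r_gauge_ohm) / gauge_factor
    return strain * 1e6  # parts per million, i.e. microstrain

# Example: a 120-ohm gauge whose resistance changes by 2.4 milliohm at full field.
print(f"magnetostriction ~ {microstrain(2.4e-3, 120.0):.1f} microstrain")
```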
NASA Astrophysics Data System (ADS)
Liu, Yonghuai; Rodrigues, Marcos A.
2000-03-01
This paper describes research on the application of machine vision techniques to a real time automatic inspection task of air filter components in a manufacturing line. A novel calibration algorithm is proposed based on a special camera setup where defective items would show a large calibration error. The algorithm makes full use of rigid constraints derived from the analysis of geometrical properties of reflected correspondence vectors which have been synthesized into a single coordinate frame and provides a closed form solution to the estimation of all parameters. For a comparative study of performance, we also developed another algorithm based on this special camera setup using epipolar geometry. A number of experiments using synthetic data have shown that the proposed algorithm is generally more accurate and robust than the epipolar geometry based algorithm and that the geometric properties of reflected correspondence vectors provide effective constraints to the calibration of rigid body transformations.
Enhanced orbit determination filter sensitivity analysis: Error budget development
NASA Technical Reports Server (NTRS)
Estefan, J. A.; Burkhart, P. D.
1994-01-01
An error budget analysis is presented which quantifies the effects of different error sources in the orbit determination process when the enhanced orbit determination filter, recently developed, is used to reduce radio metric data. The enhanced filter strategy differs from more traditional filtering methods in that nearly all of the principal ground system calibration errors affecting the data are represented as filter parameters. Error budget computations were performed for a Mars Observer interplanetary cruise scenario for cases in which only X-band (8.4-GHz) Doppler data were used to determine the spacecraft's orbit, X-band ranging data were used exclusively, and a combined set in which the ranging data were used in addition to the Doppler data. In all three cases, the filter model was assumed to be a correct representation of the physical world. Random nongravitational accelerations were found to be the largest source of error contributing to the individual error budgets. Other significant contributors, depending on the data strategy used, were solar-radiation pressure coefficient uncertainty, random earth-orientation calibration errors, and Deep Space Network (DSN) station location uncertainty.
Chen, Mingshi; Senay, Gabriel B.; Singh, Ramesh K.; Verdin, James P.
2016-01-01
Evapotranspiration (ET) is an important component of the water cycle – ET from the land surface returns approximately 60% of the global precipitation back to the atmosphere. ET also plays an important role in energy transport among the biosphere, atmosphere, and hydrosphere. Current regional to global and daily to annual ET estimation relies mainly on surface energy balance (SEB) ET models or statistical and empirical methods driven by remote sensing data and various climatological databases. These models have uncertainties due to inevitable input errors, poorly defined parameters, and inadequate model structures. The eddy covariance measurements on water, energy, and carbon fluxes at the AmeriFlux tower sites provide an opportunity to assess the ET modeling uncertainties. In this study, we focused on uncertainty analysis of the Operational Simplified Surface Energy Balance (SSEBop) model for ET estimation at multiple AmeriFlux tower sites with diverse land cover characteristics and climatic conditions. The 8-day composite 1-km MODerate resolution Imaging Spectroradiometer (MODIS) land surface temperature (LST) was used as input land surface temperature for the SSEBop algorithms. The other input data were taken from the AmeriFlux database. Results of statistical analysis indicated that the SSEBop model performed well in estimating ET with an R2 of 0.86 between estimated ET and eddy covariance measurements at 42 AmeriFlux tower sites during 2001–2007. It was encouraging to see that the best performance was observed for croplands, where R2 was 0.92 with a root mean square error of 13 mm/month. The uncertainties or random errors from input variables and parameters of the SSEBop model led to monthly ET estimates with relative errors less than 20% across multiple flux tower sites distributed across different biomes. This uncertainty of the SSEBop model lies within the error range of other SEB models, suggesting systematic error or bias of the SSEBop model is within the normal range. This finding implies that the simplified parameterization of the SSEBop model did not significantly affect the accuracy of the ET estimate while increasing the ease of model setup for operational applications. The sensitivity analysis indicated that the SSEBop model is most sensitive to input variables, land surface temperature (LST) and reference ET (ETo); and parameters, differential temperature (dT), and maximum ET scalar (Kmax), particularly during the non-growing season and in dry areas. In summary, the uncertainty assessment verifies that the SSEBop model is a reliable and robust method for large-area ET estimation. The SSEBop model estimates can be further improved by reducing errors in two input variables (ETo and LST) and two key parameters (Kmax and dT).
Does McRuer's Law Hold for Heart Rate Control via Biofeedback Display?
NASA Technical Reports Server (NTRS)
Courter, B. J.; Jex, H. R.
1984-01-01
Some persons can control their pulse rate with the aid of a biofeedback display. If the biofeedback display is modified to show the error between a command pulse-rate and the measured rate, a compensatory (error-correcting) heart rate tracking control loop can be created. The dynamic response characteristics of this control loop when subjected to step and quasi-random disturbances were measured. The control loop includes a beat-to-beat cardiotachometer differenced with a forcing function from a quasi-random input generator; the resulting pulse-rate error is displayed as feedback. The subject acts to null the displayed pulse-rate error, thereby closing a compensatory control loop. McRuer's Law should hold for this case. A few subjects already skilled in voluntary pulse-rate control were tested for heart-rate control response. Control-law properties such as crossover frequency, stability margins, and closed-loop bandwidth are derived and evaluated for a range of forcing functions and for step as well as random disturbances.
Synthesis of hover autopilots for rotary-wing VTOL aircraft
NASA Technical Reports Server (NTRS)
Hall, W. E.; Bryson, A. E., Jr.
1972-01-01
The practical situation is considered where imperfect information on only a few rotor and fuselage state variables is available. Filters are designed to estimate all the state variables from noisy measurements of fuselage pitch/roll angles and from noisy measurements of both fuselage and rotor pitch/roll angles. The mean square response of the vehicle to a very gusty, random wind is computed using various filter/controllers and is found to be quite satisfactory although, of course, not so good as when one has perfect information (idealized case). The second part of the report considers precision hover over a point on the ground. A vehicle model without rotor dynamics is used and feedback signals in position and integral of position error are added. The mean square response of the vehicle to a very gusty, random wind is computed, assuming perfect information feedback, and is found to be excellent. The integral error feedback gives zero position error for a steady wind, and smaller position error for a random wind.
NASA Technical Reports Server (NTRS)
Duda, David P.; Minnis, Patrick
2009-01-01
Straightforward application of the Schmidt-Appleman contrail formation criteria to diagnose persistent contrail occurrence from numerical weather prediction data is hindered by significant bias errors in the upper tropospheric humidity. Logistic models of contrail occurrence have been proposed to overcome this problem, but basic questions remain about how random measurement error may affect their accuracy. A set of 5000 synthetic contrail observations is created to study the effects of random error in these probabilistic models. The simulated observations are based on distributions of temperature, humidity, and vertical velocity derived from Advanced Regional Prediction System (ARPS) weather analyses. The logistic models created from the simulated observations were evaluated using two common statistical measures of model accuracy, the percent correct (PC) and the Hanssen-Kuipers discriminant (HKD). To convert the probabilistic results of the logistic models into a dichotomous yes/no choice suitable for the statistical measures, two critical probability thresholds are considered. The HKD scores are higher when the climatological frequency of contrail occurrence is used as the critical threshold, while the PC scores are higher when the critical probability threshold is 0.5. For both thresholds, typical random errors in temperature, relative humidity, and vertical velocity are found to be small enough to allow for accurate logistic models of contrail occurrence. The accuracy of the models developed from synthetic data is over 85 percent for both the prediction of contrail occurrence and non-occurrence, although in practice, larger errors would be anticipated.
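The sketch below illustrates, on synthetic data, the kind of evaluation described above: fit a logistic model of occurrence, then score dichotomized predictions with the percent correct (PC) and the Hanssen-Kuipers discriminant (HKD) at both a 0.5 threshold and the climatological frequency. The predictor distributions and coefficients are invented for the illustration, and the fitting is plain gradient ascent rather than the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
T   = rng.normal(-55.0, 5.0, n)        # temperature (deg C)
RHi = rng.normal(90.0, 15.0, n)        # relative humidity over ice (%)
w   = rng.normal(0.0, 2.0, n)          # vertical velocity (cm/s)
logit_true = -0.3 * (T + 55) + 0.08 * (RHi - 100) + 0.2 * w
y = (rng.random(n) < 1 / (1 + np.exp(-logit_true))).astype(float)   # "observed" contrails

# Standardized predictors keep plain gradient ascent on the log-likelihood well behaved.
X = np.column_stack([np.ones(n)] + [(v - v.mean()) / v.std() for v in (T, RHi, w)])
beta = np.zeros(X.shape[1])
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ beta))
    beta += 1.0 * X.T @ (y - p) / n

p = 1 / (1 + np.exp(-X @ beta))
for thr in (0.5, y.mean()):            # 0.5 vs. the climatological frequency threshold
    pred = p >= thr
    hits = np.sum(pred & (y == 1)); miss = np.sum(~pred & (y == 1))
    fa   = np.sum(pred & (y == 0)); cn   = np.sum(~pred & (y == 0))
    pc  = (hits + cn) / n
    hkd = hits / (hits + miss) - fa / (fa + cn)
    print(f"threshold={thr:.2f}  PC={pc:.3f}  HKD={hkd:.3f}")
```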
Yago, Martín
2017-05-01
QC planning based on risk management concepts can reduce the probability of harming patients due to an undetected out-of-control error condition. It does this by selecting appropriate QC procedures to decrease the number of erroneous results reported. The selection can be easily made by using published nomograms for simple QC rules when the out-of-control condition results in increased systematic error. However, increases in random error also occur frequently and are difficult to detect, which can result in erroneously reported patient results. A statistical model was used to construct charts for the 1ks and X̄/χ² rules. The charts relate the increase in the number of unacceptable patient results reported due to an increase in random error with the capability of the measurement procedure. They thus allow for QC planning based on the risk of patient harm due to the reporting of erroneous results. 1ks rules are simple, all-around rules. Their ability to deal with increases in within-run imprecision is minimally affected by the possible presence of significant, stable, between-run imprecision. X̄/χ² rules perform better when the number of controls analyzed during each QC event is increased to improve QC performance. Using nomograms simplifies the selection of statistical QC procedures to limit the number of erroneous patient results reported due to an increase in analytical random error. The selection largely depends on the presence or absence of stable between-run imprecision. © 2017 American Association for Clinical Chemistry.
Meta-analysis in evidence-based healthcare: a paradigm shift away from random effects is overdue.
Doi, Suhail A R; Furuya-Kanamori, Luis; Thalib, Lukman; Barendregt, Jan J
2017-12-01
Each year up to 20 000 systematic reviews and meta-analyses are published whose results influence healthcare decisions, thus making the robustness and reliability of meta-analytic methods one of the world's top clinical and public health priorities. The evidence synthesis makes use of either fixed-effect or random-effects statistical methods. The fixed-effect method has largely been replaced by the random-effects method as heterogeneity of study effects led to poor error estimation. However, despite the widespread use and acceptance of the random-effects method to correct this, it too remains unsatisfactory and continues to suffer from defective error estimation, posing a serious threat to decision-making in evidence-based clinical and public health practice. We discuss here the problem with the random-effects approach and demonstrate that there exist better estimators under the fixed-effect model framework that can achieve optimal error estimation. We argue for an urgent return to the earlier framework with updates that address these problems and conclude that doing so can markedly improve the reliability of meta-analytical findings and thus decision-making in healthcare.
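For readers unfamiliar with the two frameworks being contrasted, the sketch below computes the conventional inverse-variance fixed-effect estimate and the DerSimonian-Laird random-effects estimate on invented study data; it does not implement the improved fixed-effect-framework estimator the authors advocate.

```python
import numpy as np

# Illustrative per-study effect estimates (e.g., log odds ratios) and variances.
y = np.array([0.10, 0.35, -0.05, 0.42, 0.20, 0.55])
v = np.array([0.04, 0.09,  0.02, 0.12, 0.03, 0.15])

# Fixed-effect (inverse-variance) pooling.
w_fe = 1.0 / v
mu_fe = np.sum(w_fe * y) / np.sum(w_fe)
se_fe = np.sqrt(1.0 / np.sum(w_fe))

# DerSimonian-Laird random-effects pooling.
k = len(y)
Q = np.sum(w_fe * (y - mu_fe) ** 2)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w_fe) - np.sum(w_fe ** 2) / np.sum(w_fe)))
w_re = 1.0 / (v + tau2)
mu_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))

print(f"fixed-effect       : {mu_fe:.3f} +/- {1.96 * se_fe:.3f}")
print(f"random-effects (DL): {mu_re:.3f} +/- {1.96 * se_re:.3f}, tau^2 = {tau2:.3f}")
```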
Li, Ruijiang; Fahimian, Benjamin P; Xing, Lei
2011-07-01
Monoscopic x-ray imaging with on-board kV devices is an attractive approach for real-time image guidance in modern radiation therapy such as VMAT or IMRT, but it falls short in providing reliable information along the direction of the imaging x-ray. By effectively taking into consideration projection data at prior times and/or angles through a Bayesian formalism, the authors develop an algorithm for real-time and full 3D tumor localization with a single x-ray imager during treatment delivery. First, a prior probability density function is constructed using the 2D tumor locations on the projection images acquired during patient setup. Whenever an x-ray image is acquired during the treatment delivery, the corresponding 2D tumor location on the imager is used to update the likelihood function. The unresolved third dimension is obtained by maximizing the posterior probability distribution. The algorithm can also be used in a retrospective fashion when all the projection images during the treatment delivery are used for 3D localization purposes. The algorithm does not involve complex optimization of any model parameter and therefore can be used in a "plug-and-play" fashion. The authors validated the algorithm using (1) simulated 3D linear and elliptic motion and (2) 3D tumor motion trajectories of a lung and a pancreas patient reproduced by a physical phantom. Continuous kV images were acquired over a full gantry rotation with the Varian TrueBeam on-board imaging system. Three scenarios were considered: fluoroscopic setup, cone beam CT setup, and retrospective analysis. For the simulation study, the RMS 3D localization error is 1.2 and 2.4 mm for the linear and elliptic motions, respectively. For the phantom experiments, the 3D localization error is < 1 mm on average and < 1.5 mm at the 95th percentile in the lung and pancreas cases for all three scenarios. The difference in 3D localization error for different scenarios is small and is not statistically significant. The proposed algorithm eliminates the need for any population-based model parameters in monoscopic image guided radiotherapy and allows accurate and real-time 3D tumor localization on current standard LINACs with a single x-ray imager.
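A minimal sketch of the core idea, under simplifying assumptions that are mine rather than the authors' (a Gaussian prior on the 3D tumor position and a noise-free 2D measurement treated as a ray constraint): the unresolved dimension is obtained by maximizing the posterior along the source-to-pixel ray, which reduces to a one-dimensional quadratic problem.

```python
import numpy as np

def map_3d_from_ray(mu, Sigma, p, d):
    """MAP 3D position given a Gaussian prior N(mu, Sigma) on tumor position and a
    kV measurement that constrains the tumor to the ray x(t) = p + t*d from the
    x-ray source through the detected 2D location. Minimizing the Mahalanobis
    distance to the prior mean along the ray has a closed-form solution in t."""
    d = d / np.linalg.norm(d)
    Sinv = np.linalg.inv(Sigma)
    t = d @ Sinv @ (mu - p) / (d @ Sinv @ d)
    return p + t * d

# Illustrative numbers (mm): prior built from setup images, ray from one kV frame.
mu    = np.array([2.0, -1.0, 0.5])        # prior mean tumor position
Sigma = np.diag([4.0, 9.0, 25.0])         # larger prior uncertainty along one axis
p     = np.array([0.0, -1000.0, 0.0])     # x-ray source position
d     = np.array([0.002, 1.0, 0.0005])    # direction toward the detected pixel
print("MAP estimate (mm):", np.round(map_3d_from_ray(mu, Sigma, p, d), 2))
```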
NASA Astrophysics Data System (ADS)
Langford, B.; Acton, W.; Ammann, C.; Valach, A.; Nemitz, E.
2015-10-01
All eddy-covariance flux measurements are associated with random uncertainties which are a combination of sampling error due to natural variability in turbulence and sensor noise. The former is the principal error for systems where the signal-to-noise ratio of the analyser is high, as is usually the case when measuring fluxes of heat, CO2 or H2O. Where signal is limited, which is often the case for measurements of other trace gases and aerosols, instrument uncertainties dominate. Here, we are applying a consistent approach based on auto- and cross-covariance functions to quantify the total random flux error and the random error due to instrument noise separately. As with previous approaches, the random error quantification assumes that the time lag between wind and concentration measurement is known. However, if combined with commonly used automated methods that identify the individual time lag by looking for the maximum in the cross-covariance function of the two entities, analyser noise additionally leads to a systematic bias in the fluxes. Combining data sets from several analysers and using simulations, we show that the method of time-lag determination becomes increasingly important as the magnitude of the instrument error approaches that of the sampling error. The flux bias can be particularly significant for disjunct data, whereas using a prescribed time lag eliminates these effects (provided the time lag does not fluctuate unduly over time). We also demonstrate that when sampling at higher elevations, where low frequency turbulence dominates and covariance peaks are broader, both the probability and magnitude of bias are magnified. We show that the statistical significance of noisy flux data can be increased (limit of detection can be decreased) by appropriate averaging of individual fluxes, but only if systematic biases are avoided by using a prescribed time lag. Finally, we make recommendations for the analysis and reporting of data with low signal-to-noise and their associated errors.
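A sketch of the flux and noise-error computation on synthetic 20 Hz data: the flux is the wind-scalar covariance at a prescribed lag, and the instrument-noise variance is estimated from the jump of the scalar autocovariance at lag 0 above its extrapolation from neighboring lags (a Lenschow-type estimate; the paper's exact procedure may differ).

```python
import numpy as np

def flux_and_noise_error(w, c, lag=0, max_lag=5):
    """Covariance flux at a prescribed time lag, plus an estimate of the random flux
    error caused by white instrument noise in the scalar signal. The noise variance
    is the jump of the scalar autocovariance at lag 0 above its linear extrapolation
    from lags 1..max_lag; this is a sketch, not the paper's exact procedure."""
    w = w - w.mean()
    c = c - c.mean()
    n = len(w)
    flux = np.mean(w[: n - lag] * c[lag:]) if lag > 0 else np.mean(w * c)

    acov = np.array([np.mean(c[: n - k] * c[k:]) for k in range(max_lag + 1)])
    coef = np.polyfit(np.arange(1, max_lag + 1), acov[1:], 1)
    noise_var = max(0.0, acov[0] - np.polyval(coef, 0.0))
    noise_flux_err = np.sqrt(noise_var * np.var(w) / n)
    return flux, noise_flux_err

rng = np.random.default_rng(3)
n = 36000                                                   # e.g. 30 min of 20 Hz data
w = np.convolve(rng.normal(0, 1, n + 19), np.ones(20) / 20, mode="valid")  # correlated wind
c = 10.0 * w + rng.normal(0, 2.0, n)                        # scalar: turbulent signal + analyser noise
flux, err = flux_and_noise_error(w, c)
print(f"flux = {flux:.3f}, random error from instrument noise = {err:.4f}")
```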
ON NONSTATIONARY STOCHASTIC MODELS FOR EARTHQUAKES.
Safak, Erdal; Boore, David M.
1986-01-01
A seismological stochastic model for earthquake ground-motion description is presented. Seismological models are based on the physical properties of the source and the medium and have significant advantages over the widely used empirical models. The model discussed here provides a convenient form for estimating structural response by using random vibration theory. A commonly used random process for ground acceleration, filtered white noise multiplied by an envelope function, introduces some errors in response calculations for structures whose periods are longer than the faulting duration. An alternative random process, the filtered shot-noise process, eliminates these errors.
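The sketch below generates a sample of the first, "filtered white noise times envelope" type of process: white noise passed through a damped second-order filter and shaped by a deterministic envelope. The filter frequency, damping, and envelope are generic engineering choices, not the seismological model of the paper.

```python
import numpy as np

def filtered_white_noise_accel(duration=20.0, dt=0.01, f0=2.5, zeta=0.6, seed=0):
    """Common engineering ground-motion surrogate (not the model of the paper):
    white noise passed through a damped oscillator filter and shaped by an envelope
    so the motion builds up and decays."""
    rng = np.random.default_rng(seed)
    n = int(duration / dt)
    t = np.arange(n) * dt
    excitation = rng.normal(0.0, 1.0, n)            # white-noise input

    omega = 2 * np.pi * f0
    x = v = 0.0
    a = np.empty(n)
    for i in range(n):                              # semi-implicit Euler integration
        acc = -2 * zeta * omega * v - omega**2 * x + excitation[i]
        v += acc * dt
        x += v * dt
        a[i] = 2 * zeta * omega * v + omega**2 * x  # filtered acceleration output

    envelope = (t / 4.0) ** 2 * np.exp(-t / 4.0)    # build-up then exponential decay
    signal = a * envelope
    return t, signal / np.max(np.abs(signal))

t, accel = filtered_white_noise_accel()
print(f"peak normalized acceleration at t = {t[np.argmax(np.abs(accel))]:.2f} s")
```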
Statistical process control analysis for patient-specific IMRT and VMAT QA.
Sanghangthum, Taweap; Suriyapee, Sivalee; Srisatit, Somyot; Pawlicki, Todd
2013-05-01
This work applied statistical process control to establish the control limits of the % gamma pass of patient-specific intensity modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT) quality assurance (QA), and to evaluate the efficiency of the QA process by using the process capability index (Cpml). A total of 278 IMRT QA plans in nasopharyngeal carcinoma were measured with MapCHECK, while 159 VMAT QA plans were undertaken with ArcCHECK. Six-MV beams with nine fields were used for the IMRT plans, and 2.5 arcs were used to generate the VMAT plans. The gamma (3%/3 mm) criteria were used to evaluate the QA plans. The % gamma passes were plotted on a control chart. The first 50 data points were employed to calculate the control limits. The Cpml was calculated to evaluate the capability of the IMRT/VMAT QA process. The results showed higher systematic errors in IMRT QA than in VMAT QA due to the more complicated setup used in IMRT QA. The variation of random errors was also larger in IMRT QA than in VMAT QA because the VMAT plan has more continuity of dose distribution. The average % gamma pass was 93.7% ± 3.7% for IMRT and 96.7% ± 2.2% for VMAT. The Cpml value of IMRT QA was 1.60 and that of VMAT QA was 1.99, which implied that the VMAT QA process was more accurate than the IMRT QA process. Our lower control limit for the % gamma pass of IMRT is 85.0%, while the limit for VMAT is 90%. Both the IMRT and VMAT QA processes are of good quality because the Cpml values are higher than 1.0.
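A compact sketch of the two computations named above on simulated gamma-pass data: individuals-chart control limits estimated from the first 50 points via the average moving range, and a one-sided capability index. The Cpml formula used here, (mean - LSL) / (3 * sqrt(variance + (mean - target)^2)), is a common one-sided Cpm-type definition and is assumed rather than taken from the paper, as are the specification limit and target.

```python
import numpy as np

def spc_summary(gamma_pass, lsl=90.0, target=100.0, baseline=50):
    """Individuals control chart limits from the first `baseline` points (sigma from
    the average moving range, MR-bar / 1.128) and a one-sided capability index
    Cpml = (mean - LSL) / (3*sqrt(var + (mean - target)^2)); formula assumed here."""
    base = np.asarray(gamma_pass[:baseline], dtype=float)
    mr = np.abs(np.diff(base))
    sigma = mr.mean() / 1.128
    center = base.mean()
    lcl, ucl = center - 3 * sigma, min(100.0, center + 3 * sigma)
    cpml = (center - lsl) / (3 * np.sqrt(base.var(ddof=1) + (center - target) ** 2))
    return center, lcl, ucl, cpml

rng = np.random.default_rng(7)
passes = np.clip(rng.normal(96.7, 2.2, 159), 0, 100)   # illustrative VMAT-like data
print("mean=%.1f  LCL=%.1f  UCL=%.1f  Cpml=%.2f" % spc_summary(passes))
```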
Fresnel diffraction by spherical obstacles
NASA Technical Reports Server (NTRS)
Hovenac, Edward A.
1989-01-01
Lommel functions were used to solve the Fresnel-Kirchhoff diffraction integral for the case of a spherical obstacle. Comparisons were made between Fresnel diffraction theory and Mie scattering theory. Fresnel theory is then compared to experimental data. Experiment and theory typically deviated from one another by less than 10 percent. A unique experimental setup using mercury spheres suspended in a viscous fluid significantly reduced optical noise. The major source of error was due to the Gaussian-shaped laser beam.
User's guide to Monte Carlo methods for evaluating path integrals
NASA Astrophysics Data System (ADS)
Westbroek, Marise J. E.; King, Peter R.; Vvedensky, Dimitri D.; Dürr, Stephan
2018-04-01
We give an introduction to the calculation of path integrals on a lattice, with the quantum harmonic oscillator as an example. In addition to providing an explicit computational setup and corresponding pseudocode, we pay particular attention to the existence of autocorrelations and the calculation of reliable errors. The over-relaxation technique is presented as a way to counter strong autocorrelations. The simulation methods can be extended to compute observables for path integrals in other settings.
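A self-contained sketch of the setup described: Metropolis updates of a periodic lattice path for the harmonic oscillator (m = ω = ħ = 1), with thinning as a crude guard against autocorrelation and a naive error bar on ⟨x²⟩. Lattice size, spacing, and step width are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sites, a = 120, 0.25                      # lattice points and spacing
n_sweeps, n_therm, step = 4000, 500, 0.7
x = np.zeros(n_sites)

def action_diff(x, i, new):
    """Change in the lattice Euclidean action when site i moves to `new` (periodic b.c.):
    S = sum_i [ (x_{i+1} - x_i)^2 / (2a) + a * x_i^2 / 2 ]  with m = omega = 1."""
    left, right, old = x[(i - 1) % n_sites], x[(i + 1) % n_sites], x[i]
    kin = lambda s: ((s - left) ** 2 + (right - s) ** 2) / (2 * a)
    pot = lambda s: 0.5 * a * s ** 2
    return (kin(new) + pot(new)) - (kin(old) + pot(old))

x2_samples = []
for sweep in range(n_sweeps):
    for i in range(n_sites):
        proposal = x[i] + rng.uniform(-step, step)
        if rng.random() < np.exp(-action_diff(x, i, proposal)):
            x[i] = proposal
    if sweep >= n_therm and sweep % 5 == 0:   # thin to reduce autocorrelation
        x2_samples.append(np.mean(x ** 2))

x2 = np.array(x2_samples)
# Naive error bar; a blocking/binning analysis would handle residual autocorrelation.
print(f"<x^2> = {x2.mean():.3f} +/- {x2.std(ddof=1) / np.sqrt(len(x2)):.3f} (continuum ~0.5)")
```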
Vertical high-precision Michelson wavemeter
NASA Astrophysics Data System (ADS)
Morales, A.; de Urquijo, J.; Mendoza, A.
1993-01-01
We have designed and tested a traveling, Michelson-type vertical wavemeter for the wavelength measurement of tunable continuous-wave lasers in the visible part of the spectrum. The interferometer has two movable corner cubes, suspended vertically from a driving setup resembling Atwood's machine. To reduce the fraction-of-fringe error, a vernier-type coincidence circuit was used. Although simple, this wavemeter has a relative precision of 3.2 parts in 10⁹ for an overall fringe count of about 7×10⁶.
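The basic fringe-counting relation behind such a wavemeter, for orientation; the reference wavelength and counts below are illustrative, and the fraction-of-fringe vernier that gives the quoted 10⁻⁹-level precision is not modeled.

```python
def unknown_wavelength(n_ref_fringes, n_unknown_fringes, lambda_ref_nm=632.9914):
    """Scanning Michelson wavemeter relation: both lasers see the same optical path
    change, so lambda_unknown = lambda_ref * N_ref / N_unknown. The HeNe reference
    wavelength here is illustrative; the actual value depends on the instrument."""
    return lambda_ref_nm * n_ref_fringes / n_unknown_fringes

# With ~7e6 reference fringes, a miscount of a single fringe shifts the result by
# about 1 part in 7 million, which is what motivates the fraction-of-fringe vernier.
print(f"{unknown_wavelength(7_000_000, 7_654_321):.6f} nm")
print(f"{unknown_wavelength(7_000_001, 7_654_321):.6f} nm")
```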
Wu, Jian; Murphy, Martin J
2010-06-01
To assess the precision and robustness of patient setup corrections computed from 3D/3D rigid registration methods using image intensity, when no ground truth validation is possible. Fifteen pairs of male pelvic CTs were rigidly registered using four different in-house registration methods. Registration results were compared for different resolutions and image content by varying the image down-sampling ratio and by thresholding out soft tissue to isolate bony landmarks. Intrinsic registration precision was investigated by comparing the different methods and by reversing the source and the target roles of the two images being registered. The translational reversibility errors for successful registrations ranged from 0.0 to 1.69 mm. Rotations were less than 1 degree. Mutual information failed in most registrations that used only bony landmarks. The magnitude of the reversibility error was strongly correlated with the success/failure of each algorithm to find the global minimum. Rigid image registrations have an intrinsic uncertainty and robustness that depends on the imaging modality, the registration algorithm, the image resolution, and the image content. In the absence of an absolute ground truth, the variation in the shifts calculated by several different methods provides a useful estimate of that uncertainty. The difference observed by reversing the source and target images can be used as an indication of robust convergence.
Classification of echolocation clicks from odontocetes in the Southern California Bight.
Roch, Marie A; Klinck, Holger; Baumann-Pickering, Simone; Mellinger, David K; Qui, Simon; Soldevilla, Melissa S; Hildebrand, John A
2011-01-01
This study presents a system for classifying echolocation clicks of six species of odontocetes in the Southern California Bight: Visually confirmed bottlenose dolphins, short- and long-beaked common dolphins, Pacific white-sided dolphins, Risso's dolphins, and presumed Cuvier's beaked whales. Echolocation clicks are represented by cepstral feature vectors that are classified by Gaussian mixture models. A randomized cross-validation experiment is designed to provide conditions similar to those found in a field-deployed system. To prevent matched conditions from inappropriately lowering the error rate, echolocation clicks associated with a single sighting are never split across the training and test data. Sightings are randomly permuted before assignment to folds in the experiment. This allows different combinations of the training and test data to be used while keeping data from each sighting entirely in the training or test set. The system achieves a mean error rate of 22% across 100 randomized three-fold cross-validation experiments. Four of the six species had mean error rates lower than the overall mean, with the presumed Cuvier's beaked whale clicks showing the best performance (<2% error rate). Long-beaked common and bottlenose dolphins proved the most difficult to classify, with mean error rates of 53% and 68%, respectively.
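The key experimental-design point, keeping all clicks from a sighting on the same side of the train/test split, can be sketched as follows; the sighting labels and fold count are illustrative.

```python
import numpy as np

def sighting_folds(sighting_ids, n_folds=3, seed=0):
    """Assign whole sightings (not individual clicks) to cross-validation folds, so
    clicks from one sighting never appear in both the training and test data."""
    rng = np.random.default_rng(seed)
    sightings = np.unique(sighting_ids)
    rng.shuffle(sightings)                                 # random permutation of sightings
    fold_of_sighting = {s: i % n_folds for i, s in enumerate(sightings)}
    return np.array([fold_of_sighting[s] for s in sighting_ids])

# Illustrative: 12 clicks from 5 sightings.
clicks_sighting = np.array([0, 0, 0, 1, 1, 2, 2, 2, 3, 3, 4, 4])
folds = sighting_folds(clicks_sighting)
for k in range(3):
    test = folds == k
    print(f"fold {k}: test sightings {sorted(set(clicks_sighting[test]))}")
```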
Effects of random tooth profile errors on the dynamic behaviors of planetary gears
NASA Astrophysics Data System (ADS)
Xun, Chao; Long, Xinhua; Hua, Hongxing
2018-02-01
In this paper, a nonlinear random model is built to describe the dynamics of planetary gear trains (PGTs), in which the time-varying mesh stiffness, tooth profile modification (TPM), tooth contact loss, and random tooth profile errors are considered. A stochastic method based on the method of multiple scales (MMS) is extended to analyze the statistical properties of the dynamic performance of PGTs. With the proposed multiple-scales-based stochastic method, the distributions of the dynamic transmission errors (DTEs) are investigated, and the lower and upper bounds are determined based on the 3σ principle. The Monte Carlo method is employed to verify the proposed method. Results indicate that the proposed method can determine the distribution of the DTE of PGTs with high efficiency and provides a link between manufacturing precision and dynamic response. In addition, the effects of tooth profile modification on the distributions of vibration amplitudes and the probability of tooth contact loss with different manufacturing tooth profile errors are studied. The results show that the manufacturing precision affects the distribution of dynamic transmission errors dramatically and that appropriate TPMs are helpful to decrease the nominal value and the deviation of the vibration amplitudes.
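A deliberately oversimplified Monte Carlo surrogate for the kind of result reported above: a single-degree-of-freedom gear mesh driven by a harmonic transmission error whose amplitude carries a random manufacturing component, with the 3σ bounds of the resulting dynamic transmission error amplitude reported. Stiffness, mass, damping, and error levels are invented, and this is neither the planetary-gear model nor the multiple-scales method of the paper.

```python
import numpy as np

def dte_amplitude(e_profile, k=4e8, m=2.0, c=3e3, omega=2 * np.pi * 800):
    """Steady-state dynamic transmission error amplitude of a single-DOF mesh model
    forced by a harmonic static transmission error of amplitude e_profile (m).
    A simplified surrogate, not the planetary-gear model of the paper."""
    return k * e_profile / np.sqrt((k - m * omega**2) ** 2 + (c * omega) ** 2)

rng = np.random.default_rng(1)
n = 20000
e_nominal = 10e-6                                   # design profile deviation (m)
e_random = rng.normal(0.0, 2e-6, n)                 # manufacturing error, sigma = 2 um
amp = dte_amplitude(np.abs(e_nominal + e_random))

mean, sd = amp.mean(), amp.std(ddof=1)
print(f"DTE amplitude: mean = {mean * 1e6:.2f} um, 3-sigma bounds = "
      f"[{(mean - 3 * sd) * 1e6:.2f}, {(mean + 3 * sd) * 1e6:.2f}] um")
```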
A multi-site analysis of random error in tower-based measurements of carbon and energy fluxes
Andrew D. Richardson; David Y. Hollinger; George G. Burba; Kenneth J. Davis; Lawrence B. Flanagan; Gabriel G. Katul; J. William Munger; Daniel M. Ricciuto; Paul C. Stoy; Andrew E. Suyker; Shashi B. Verma; Steven C. Wofsy; Steven C. Wofsy
2006-01-01
Measured surface-atmosphere fluxes of energy (sensible heat, H, and latent heat, LE) and CO2 (FCO2) represent the "true" flux plus or minus potential random and systematic measurement errors. Here, we use data from seven sites in the AmeriFlux network, including five forested sites (two of which include "tall tower" instrumentation), one grassland site, and one...
Statistical error model for a solar electric propulsion thrust subsystem
NASA Technical Reports Server (NTRS)
Bantell, M. H.
1973-01-01
The solar electric propulsion thrust subsystem statistical error model was developed as a tool for investigating the effects of thrust subsystem parameter uncertainties on navigation accuracy. The model is currently being used to evaluate the impact of electric engine parameter uncertainties on navigation system performance for a baseline mission to Encke's Comet in the 1980s. The data given represent the next generation in statistical error modeling for low-thrust applications. Principal improvements include the representation of thrust uncertainties and random process modeling in terms of random parametric variations in the thrust vector process for a multi-engine configuration.
NASA Technical Reports Server (NTRS)
Kwon, Jin H.; Lee, Ja H.
1989-01-01
The far-field beam pattern and the power-collection efficiency are calculated for a multistage laser-diode-array amplifier consisting of about 200,000 5-W laser diode arrays with random distributions of phase and orientation errors and random diode failures. The numerical calculation shows that the far-field beam pattern is little affected by random failures of up to 20 percent of the laser diodes, with reference to an 80 percent receiving efficiency in the center spot. The random phase differences among laser diodes due to probable manufacturing errors can be tolerated up to about 0.2 times the wavelength. The maximum allowable orientation error is about 20 percent of the diffraction angle of a single laser diode aperture (about 1 cm). The preliminary results indicate that the amplifier could be used for space beam-power transmission with an efficiency of about 80 percent for a moderate-size (3-m-diameter) receiver placed at a distance of less than 50,000 km.
An Analysis of Computational Errors in the Use of Division Algorithms by Fourth-Grade Students.
ERIC Educational Resources Information Center
Stefanich, Greg P.; Rokusek, Teri
1992-01-01
Presents a study that analyzed errors made by randomly chosen fourth grade students (25 of 57) while using the division algorithm and investigated the effect of remediation on identified systematic errors. Results affirm that error pattern diagnosis and directed remediation lead to new learning and long-term retention. (MDH)
ERIC Educational Resources Information Center
Shear, Benjamin R.; Zumbo, Bruno D.
2013-01-01
Type I error rates in multiple regression, and hence the chance for false positive research findings, can be drastically inflated when multiple regression models are used to analyze data that contain random measurement error. This article shows the potential for inflated Type I error rates in commonly encountered scenarios and provides new…
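The inflation mechanism can be reproduced in a few lines: when the truly predictive variable x1 is measured with error, a correlated but truly irrelevant variable x2 starts to absorb part of its effect and is rejected at well above the nominal 5% rate. Sample size, correlation, and effect size below are illustrative, not taken from the article.

```python
import numpy as np

def rejection_rate(reliability, n=200, rho=0.5, trials=2000, seed=0):
    """Fraction of trials in which x2 is (wrongly) declared significant when y depends
    only on the true x1, but x1 is observed with measurement error. `reliability` is
    var(true x1) / var(observed x1)."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(trials):
        x1 = rng.normal(size=n)
        x2 = rho * x1 + np.sqrt(1 - rho**2) * rng.normal(size=n)   # correlated predictor
        y = 0.6 * x1 + rng.normal(size=n)                          # x2 has no true effect
        err_var = (1 - reliability) / reliability
        x1_obs = x1 + rng.normal(scale=np.sqrt(err_var), size=n)   # unreliable measure

        X = np.column_stack([np.ones(n), x1_obs, x2])
        beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
        sigma2 = res[0] / (n - 3)
        se_b2 = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[2, 2])
        rejections += abs(beta[2] / se_b2) > 1.97                  # ~5% two-sided cutoff
    return rejections / trials

for rel in (1.0, 0.8, 0.6):
    print(f"reliability {rel:.1f}: Type I error rate for x2 = {rejection_rate(rel):.3f}")
```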
An automatic dose verification system for adaptive radiotherapy for helical tomotherapy
NASA Astrophysics Data System (ADS)
Mo, Xiaohu; Chen, Mingli; Parnell, Donald; Olivera, Gustavo; Galmarini, Daniel; Lu, Weiguo
2014-03-01
Purpose: During a typical 5-7 week course of external beam radiotherapy, differences can arise between the planned and actual patient anatomy and positioning, such as patient weight loss or changes in treatment setup. The discrepancies between planned and delivered doses resulting from these differences could be significant, especially in IMRT, where dose distributions conform tightly to target volumes while avoiding organs-at-risk. We developed an automatic system to monitor delivered dose using daily imaging. Methods: For each treatment, a merged image is generated by registering the daily pre-treatment setup image and the planning CT using treatment position information extracted from the Tomotherapy archive. The treatment dose is then computed on this merged image using our in-house convolution-superposition based dose calculator implemented on GPU. The deformation field between the merged and planning CT is computed using the Morphon algorithm. The planning structures and treatment doses are subsequently warped for analysis and dose accumulation. All results are saved in DICOM format with private tags and organized in a database. Due to the overwhelming amount of information generated, a customizable tolerance system is used to flag potential treatment errors or significant anatomical changes. A web-based system and a DICOM-RT viewer were developed for reporting and reviewing the results. Results: More than 30 patients were analysed retrospectively. Our in-house dose calculator achieved a 97% gamma pass rate, evaluated with 2% dose difference and 2 mm distance-to-agreement, compared with the Tomotherapy-calculated dose, which is considered sufficient for adaptive radiotherapy purposes. Evaluation of the deformable registration through visual inspection showed acceptable and consistent results, except for cases with large or unrealistic deformation. Our automatic flagging system was able to catch significant patient setup errors or anatomical changes. Conclusions: We developed an automatic dose verification system that quantifies treatment doses and provides the necessary information for adaptive planning without impeding clinical workflows.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ueda, Yoshihiro, E-mail: ueda-yo@mc.pref.osaka.jp; Miyazaki, Masayoshi; Nishiyama, Kinji
2012-07-01
Purpose: To evaluate setup error and interfractional changes in tumor motion magnitude using an electronic portal imaging device in cine mode (EPID cine) during the course of stereotactic body radiation therapy (SBRT) for non-small-cell lung cancer (NSCLC), and to calculate margins to compensate for these variations. Materials and Methods: Subjects were 28 patients with Stage I NSCLC who underwent SBRT. Respiratory-correlated four-dimensional computed tomography (4D-CT) at simulation was binned into 10 respiratory phases, which provided average intensity projection CT data sets (AIP). On 4D-CT, peak-to-peak motion of the tumor (M-4DCT) in the craniocaudal direction was assessed and the tumor center (mean tumor position, MTP) of the AIP (MTP-4DCT) was determined. At treatment, the tumor on cone beam CT was registered to that on AIP for patient setup. During three sessions of irradiation, peak-to-peak motion of the tumor (M-cine) and the mean tumor position (MTP-cine) were obtained using EPID cine and in-house software. Based on changes in tumor motion magnitude (ΔM) and patient setup error (ΔMTP), defined as the differences between M-4DCT and M-cine and between MTP-4DCT and MTP-cine, a margin to compensate for these variations was calculated with Stroom's formula. Results: The means (±standard deviation, SD) of M-4DCT and M-cine were 3.1 (±3.4) and 4.0 (±3.6) mm, respectively. The means (±SD) of ΔM and ΔMTP were 0.9 (±1.3) and 0.2 (±2.4) mm, respectively. Internal target volume-planning target volume (ITV-PTV) margins to compensate for ΔM, ΔMTP, and both combined were 3.7, 5.2, and 6.4 mm, respectively. Conclusion: EPID cine is a useful modality for assessing interfractional variations of tumor motion. The ITV-PTV margins to compensate for these variations can be calculated.
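For reference, Stroom's recipe combines the systematic and random standard deviations as margin = 2Σ + 0.7σ; the sketch below just evaluates that formula, with the input values being placeholders rather than the groupings used in the paper.

```python
def stroom_margin(sigma_systematic_mm, sigma_random_mm):
    """Stroom's margin recipe: margin = 2*Sigma + 0.7*sigma, where Sigma is the SD of
    systematic errors and sigma is the SD of random errors (both in mm)."""
    return 2.0 * sigma_systematic_mm + 0.7 * sigma_random_mm

# Illustrative values only; the paper's grouping of the measured variations into
# systematic and random components is described in its Methods.
print(f"margin = {stroom_margin(1.5, 2.0):.1f} mm")
```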
SU-F-T-463: Light-Field Based Dynalog Verification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Atwal, P; Ramaseshan, R
2016-06-15
Purpose: To independently verify leaf positions in so-called dynalog files for a Varian iX linac with a Millennium 120 MLC. This verification provides a measure of confidence that the files can be used directly as part of a more extensive intensity modulated radiation therapy / volumetric modulated arc therapy QA program. Methods: Initial testing used white paper placed at the collimator plane and a standard hand-held digital camera to image the light and shadow of a static MLC field through the paper. Known markings on the paper allow for image calibration. Noise reduction was attempted with removal of ‘inherent noise’ from an open-field light image through the paper, but the method was found to be inconsequential. This is likely because the environment could not be controlled to the precision required for the sort of reproducible characterization of the quantum noise needed in order to meaningfully characterize and account for it. A multi-scale iterative edge detection algorithm was used for localizing the leaf ends. These were compared with the planned locations from the treatment console. Results: With a very basic setup, the image of the central bank A leaves 15–45, which are arguably the most important for beam modulation, differed from the planned location by [0.38±0.28] mm. Similarly, bank B leaves 15–45 had a difference of [0.42±0.28] mm. Conclusion: It should be possible to determine leaf position accurately with not much more than a modern hand-held camera and some software. This means we can have a periodic and independent verification of the dynalog file information. This is indicated by the precision already achieved using a basic setup and analysis methodology. Currently, work is being done to reduce imaging and setup errors, which will bring the leaf position error down further and allow meaningful analysis over the full range of leaves.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balik, S; Weiss, E; Sleeman, W
Purpose: To evaluate the potential impact of several setup error correction strategies on a proposed image-guided adaptive radiotherapy strategy for locally advanced lung cancer. Methods: Daily 4D cone-beam CT and weekly 4D fan-beam CT images were acquired from 9 lung cancer patients undergoing concurrent chemoradiation therapy. The initial planning CT was deformably registered to the daily CBCT images to generate synthetic treatment courses. An adaptive radiation therapy course was simulated using the weekly CT images with replanning twice and a hypofractionated, simultaneous integrated boost to a total dose of 66 Gy to the original PTV and either a 66 Gy (no boost) or 82 Gy (boost) dose to the boost PTV (ITV + 3 mm) in 33 fractions with IMRT or VMAT. Lymph nodes (LN) were not boosted (prescribed to 66 Gy in both plans). Synthetic images were rigidly registered to the corresponding plan CT using bony anatomy (BN) or tumor and carina (TC), dose was computed on these images from the adaptive replans (PLAN), and dose was deformably accumulated back to the original planning CT. Cumulative D98% of the CTV of the primary tumor (ITV for 82 Gy) and LN, and normal tissue dose changes, were analyzed. Results: Two patients were removed from the study due to large registration errors. For the remaining 7 patients, D98% for CTV-PT (ITV-PT for 82 Gy) and CTV-LN was within 1 Gy of PLAN for both the 66 Gy and 82 Gy plans with both setup techniques. Overall, TC-based setup provided better results, especially for LN coverage (p = 0.1 for the 66 Gy plan and p = 0.2 for the 82 Gy plan, comparing BN and TC), though not significantly. Normal tissue dose constraints were violated for some patients if the constraint was barely achieved in PLAN. Conclusion: The hypofractionated adaptive strategy appears to be deliverable with soft tissue alignment for the evaluated margins and planning parameters. Research was supported by NIH P01CA116602.
Evaluation of analytical errors in a clinical chemistry laboratory: a 3 year experience.
Sakyi, As; Laing, Ef; Ephraim, Rk; Asibey, Of; Sadique, Ok
2015-01-01
Proficient laboratory service is the cornerstone of modern healthcare systems and has an impact on over 70% of medical decisions on admission, discharge, and medications. In recent years, there has been increasing awareness of the importance of errors in laboratory practice and their possible negative impact on patient outcomes. We retrospectively analyzed data spanning a period of 3 years on analytical errors observed in our laboratory. The data covered errors over the whole testing cycle including pre-, intra-, and post-analytical phases, and we discuss strategies pertinent to our settings to minimize their occurrence. We described the occurrence of pre-analytical, analytical, and post-analytical errors observed at the Komfo Anokye Teaching Hospital clinical biochemistry laboratory during a 3-year period from January 2010 to December 2012. Data were analyzed with GraphPad Prism 5 (GraphPad Software Inc., CA, USA). A total of 589,510 tests were performed on 188,503 outpatients and hospitalized patients. The overall error rate for the 3 years was 4.7% (27,520/58,950). Pre-analytical, analytical, and post-analytical errors contributed 3.7% (2210/58,950), 0.1% (108/58,950), and 0.9% (512/58,950), respectively. The number of tests decreased significantly over the 3-year period, but this did not correspond with a reduction in the overall error rate over the years (P = 0.90). Errors are embedded within our total process setup, especially the pre-analytical and post-analytical phases. Strategic measures, including quality assessment programs for staff involved in pre-analytical processes, should be intensified.
Physical layer one-time-pad data encryption through synchronized semiconductor laser networks
NASA Astrophysics Data System (ADS)
Argyris, Apostolos; Pikasis, Evangelos; Syvridis, Dimitris
2016-02-01
Semiconductor lasers (SL) have been proven to be a key device in the generation of ultrafast true random bit streams. Their potential to emit chaotic signals under conditions with desirable statistics establishes them as a low-cost solution to cover various needs, from large-volume key generation to real-time encrypted communications. Usually, only undemanding post-processing is needed to convert the acquired analog time series to digital sequences that pass all established tests of randomness. A novel architecture that can generate and exploit these true random sequences is a fiber network in which the nodes are semiconductor lasers that are coupled and synchronized to a central hub laser. In this work we show experimentally that laser nodes in such a star network topology can synchronize with each other through complex broadband signals that are the seed to true random bit sequences (TRBS) generated at several Gb/s. The ability of each node to access, through the fiber-optic network, random bit streams that are generated in real time and synchronized with the rest of the nodes allows a one-time-pad encryption protocol to be implemented that mixes the synchronized true random bit sequence with real data at Gb/s rates. Forward-error-correction methods are used to reduce the errors in the TRBS and the final error rate at the data decoding level. An appropriate selection of the sampling methodology and properties, as well as of the physical properties of the chaotic seed signal through which the network locks in synchronization, allows error-free performance.
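The encryption step itself is the textbook one-time pad, an XOR of data with key bits, sketched below with a software random source standing in for the laser-generated, synchronized bit stream.

```python
import secrets

def otp_encrypt(data: bytes, key: bytes) -> bytes:
    """One-time pad: XOR each data byte with a key byte. Decryption is the same
    operation; security requires the key to be truly random, at least as long as
    the data, and never reused, which is why a multi-Gb/s physical random-bit
    source matters for encrypting data at line rate."""
    assert len(key) >= len(data)
    return bytes(d ^ k for d, k in zip(data, key))

message = b"physical-layer OTP demo"
key = secrets.token_bytes(len(message))   # stand-in for the laser-generated bit stream
ciphertext = otp_encrypt(message, key)
print(otp_encrypt(ciphertext, key))       # XOR with the same key recovers the message
```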
A predictability study of Lorenz's 28-variable model as a dynamical system
NASA Technical Reports Server (NTRS)
Krishnamurthy, V.
1993-01-01
The dynamics of error growth in a two-layer nonlinear quasi-geostrophic model has been studied to gain an understanding of the mathematical theory of atmospheric predictability. The growth of random errors of varying initial magnitudes has been studied, and the relation between this classical approach and the concepts of the nonlinear dynamical systems theory has been explored. The local and global growths of random errors have been expressed partly in terms of the properties of an error ellipsoid and the Liapunov exponents determined by linear error dynamics. The local growth of small errors is initially governed by several modes of the evolving error ellipsoid but soon becomes dominated by the longest axis. The average global growth of small errors is exponential with a growth rate consistent with the largest Liapunov exponent. The duration of the exponential growth phase depends on the initial magnitude of the errors. The subsequent large errors undergo a nonlinear growth with a steadily decreasing growth rate and attain saturation that defines the limit of predictability. The degree of chaos and the largest Liapunov exponent show considerable variation with change in the forcing, which implies that the time variation in the external forcing can introduce variable character to the predictability.
Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan
2014-01-01
Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as in GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if they were of additive random errors. We simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM. PMID:24434880
Liquid Medication Dosing Errors by Hispanic Parents: Role of Health Literacy and English Proficiency
Harris, Leslie M.; Dreyer, Benard; Mendelsohn, Alan; Bailey, Stacy C.; Sanders, Lee M.; Wolf, Michael S.; Parker, Ruth M.; Patel, Deesha A.; Kim, Kwang Youn A.; Jimenez, Jessica J.; Jacobson, Kara; Smith, Michelle; Yin, H. Shonna
2016-01-01
Objective: Hispanic parents in the US are disproportionately affected by low health literacy and limited English proficiency (LEP). We examined associations between health literacy, LEP, and liquid medication dosing errors in Hispanic parents. Methods: Cross-sectional analysis of data from a multisite randomized controlled experiment to identify best practices for the labeling/dosing of pediatric liquid medications (SAFE Rx for Kids study) at 3 urban pediatric clinics. Analyses were limited to Hispanic parents of children <8 years with health literacy and LEP data (n=1126). Parents were randomized to 5 groups that varied by the pairing of units of measurement on the label/dosing tool. Each parent measured 9 doses [3 amounts (2.5, 5, 7.5 mL) using 3 tools (2 syringes (0.2 and 0.5 mL increments) and 1 cup)] in random order. Dependent variable: dosing error, defined as >20% dose deviation. Predictor variables: health literacy (Newest Vital Sign) [limited=0–3; adequate=4–6], LEP (speaks English less than "very well"). Results: 83.1% made dosing errors (mean (SD) errors/parent=2.2 (1.9)). Parents with limited health literacy and LEP had the greatest odds of making a dosing error compared to parents with adequate health literacy who were English proficient (% trials with errors/parent=28.8 vs. 12.9%; AOR=2.2 [1.7–2.8]). Parents with limited health literacy who were English proficient were also more likely to make errors (% trials with errors/parent=18.8%; AOR=1.4 [1.1–1.9]). Conclusion: Dosing errors are common among Hispanic parents; those with both LEP and limited health literacy are at particular risk. Further study is needed to examine how the redesign of medication labels and dosing tools could reduce literacy- and language-associated disparities in dosing errors. PMID:28477800
Combinatorial neural codes from a mathematical coding theory perspective.
Curto, Carina; Itskov, Vladimir; Morrison, Katherine; Roth, Zachary; Walker, Judy L
2013-07-01
Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure serves not only error correction, but must also reflect relationships between stimuli.
Effects of learning climate and registered nurse staffing on medication errors.
Chang, Yunkyung; Mark, Barbara
2011-01-01
Despite increasing recognition of the significance of learning from errors, little is known about how learning climate contributes to error reduction. The purpose of this study was to investigate whether learning climate moderates the relationship between error-producing conditions and medication errors. A cross-sectional descriptive study was done using data from 279 nursing units in 146 randomly selected hospitals in the United States. Error-producing conditions included work environment factors (work dynamics and nurse mix), team factors (communication with physicians and nurses' expertise), personal factors (nurses' education and experience), patient factors (age, health status, and previous hospitalization), and medication-related support services. Poisson models with random effects were used with the nursing unit as the unit of analysis. A significant negative relationship was found between learning climate and medication errors. It also moderated the relationship between nurse mix and medication errors: When learning climate was negative, having more registered nurses was associated with fewer medication errors. However, no relationship was found between nurse mix and medication errors at either positive or average levels of learning climate. Learning climate did not moderate the relationship between work dynamics and medication errors. The way nurse mix affects medication errors depends on the level of learning climate. Nursing units with fewer registered nurses and frequent medication errors should examine their learning climate. Future research should be focused on the role of learning climate as related to the relationships between nurse mix and medication errors.
The (mis)reporting of statistical results in psychology journals.
Bakker, Marjan; Wicherts, Jelte M
2011-09-01
In order to study the prevalence, nature (direction), and causes of reporting errors in psychology, we checked the consistency of reported test statistics, degrees of freedom, and p values in a random sample of high- and low-impact psychology journals. In a second study, we established the generality of reporting errors in a random sample of recent psychological articles. Our results, on the basis of 281 articles, indicate that around 18% of statistical results in the psychological literature are incorrectly reported. Inconsistencies were more common in low-impact journals than in high-impact journals. Moreover, around 15% of the articles contained at least one statistical conclusion that proved, upon recalculation, to be incorrect; that is, recalculation rendered the previously significant result insignificant, or vice versa. These errors were often in line with researchers' expectations. We classified the most common errors and contacted authors to shed light on the origins of the errors.
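As an illustration of the kind of consistency check described above, the sketch below recomputes a two-sided p value from a reported t statistic and degrees of freedom and flags a gross inconsistency at alpha = .05; the reported numbers are hypothetical, and this is not the authors' actual procedure:

```python
# Illustrative sketch (assumptions: two-sided test, df exactly as reported):
# recompute p from a reported t statistic and flag a "gross" inconsistency when
# the reported and recomputed p values fall on opposite sides of alpha = .05.
from scipy import stats

def check_t_report(t: float, df: int, reported_p: float, alpha: float = 0.05):
    recomputed_p = 2 * stats.t.sf(abs(t), df)        # two-sided p value
    inconsistent = (reported_p < alpha) != (recomputed_p < alpha)
    return recomputed_p, inconsistent

# Hypothetical reported result: t(28) = 2.20, p = .04
p, gross = check_t_report(2.20, 28, 0.04)
print(round(p, 3), gross)
```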
Random synaptic feedback weights support error backpropagation for deep learning
NASA Astrophysics Data System (ADS)
Lillicrap, Timothy P.; Cownden, Daniel; Tweed, Douglas B.; Akerman, Colin J.
2016-11-01
The brain processes information through multiple layers of neurons. This deep architecture is representationally powerful, but complicates learning because it is difficult to identify the responsible neurons when a mistake is made. In machine learning, the backpropagation algorithm assigns blame by multiplying error signals with all the synaptic weights on each neuron's axon and further downstream. However, this involves a precise, symmetric backward connectivity pattern, which is thought to be impossible in the brain. Here we demonstrate that this strong architectural constraint is not required for effective error propagation. We present a surprisingly simple mechanism that assigns blame by multiplying errors by even random synaptic weights. This mechanism can transmit teaching signals across multiple layers of neurons and performs as effectively as backpropagation on a variety of tasks. Our results help reopen questions about how the brain could use error signals and dispel long-held assumptions about algorithmic constraints on learning.
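A minimal sketch of the mechanism described above ("feedback alignment"): the hidden-layer error signal is computed with a fixed random matrix instead of the transpose of the forward weights. The toy task, network sizes, and learning rate are arbitrary choices, not the paper's setup:

```python
# Feedback alignment on a toy regression task: W1 is updated using a fixed random
# feedback matrix B in place of W2.T. The output error typically decreases as the
# forward weights adapt, even though B never changes.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, lr = 10, 20, 1, 0.01
W1 = rng.normal(0, 0.5, (n_hid, n_in))            # forward weights, layer 1
W2 = rng.normal(0, 0.5, (n_out, n_hid))           # forward weights, layer 2
B  = rng.normal(0, 0.5, (n_hid, n_out))           # fixed random feedback weights
w_teacher = 0.3 * rng.normal(size=(n_out, n_in))  # toy linear teacher

for _ in range(5000):
    x = rng.normal(size=(n_in, 1))
    y = w_teacher @ x                             # target
    h = np.tanh(W1 @ x)                           # forward pass
    e = W2 @ h - y                                # output error
    W2 -= lr * (e @ h.T)                          # usual delta rule at the output
    W1 -= lr * ((B @ e) * (1 - h**2)) @ x.T       # random feedback replaces W2.T @ e

X = rng.normal(size=(n_in, 500))                  # held-out evaluation inputs
mse = np.mean((W2 @ np.tanh(W1 @ X) - w_teacher @ X) ** 2)
print("mean squared test error:", round(float(mse), 4))
```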
Pricing Employee Stock Options (ESOs) with Random Lattice
NASA Astrophysics Data System (ADS)
Chendra, E.; Chin, L.; Sukmana, A.
2018-04-01
Employee Stock Options (ESOs) are stock options granted by companies to their employees. Unlike standard options, which can be traded by typical institutional or individual investors, ESOs cannot be sold or transferred to other investors. These sale restrictions may induce the ESO holder to exercise earlier. In a much-cited paper, Hull and White proposed a binomial lattice for valuing ESOs which assumes that employees voluntarily exercise their ESOs once the stock price reaches a horizontal psychological barrier. Due to nonlinearity errors, the numerical pricing results oscillate significantly, which can lead to large pricing errors. In this paper, we use the random lattice method to price the Hull-White ESO model. This method reduces the nonlinearity error by aligning a layer of nodes of the random lattice with the psychological barrier.
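The sketch below illustrates the Hull-White-style voluntary-exercise rule on a plain CRR binomial lattice, not the random-lattice method proposed in the paper; the exercise multiple, vesting period, and market parameters are illustrative assumptions:

```python
# Simplified sketch: value an ESO on a standard CRR binomial lattice assuming the
# holder exercises voluntarily once the stock reaches a "psychological barrier"
# of M times the strike (after vesting). Forfeiture before vesting is ignored.
import math

def eso_binomial(S0, K, T, r, sigma, M=2.0, vest=1.0, steps=200):
    """Value of an ESO on a CRR lattice with a voluntary-exercise barrier M*K."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)             # risk-neutral up probability
    disc = math.exp(-r * dt)
    # option values at maturity
    values = [max(S0 * u**j * d**(steps - j) - K, 0.0) for j in range(steps + 1)]
    for i in range(steps - 1, -1, -1):               # backward induction in time
        t = i * dt
        for j in range(i + 1):
            S = S0 * u**j * d**(i - j)
            cont = disc * (p * values[j + 1] + (1.0 - p) * values[j])
            if t >= vest and S >= M * K:
                values[j] = S - K                    # holder exercises at the barrier
            else:
                values[j] = cont
    return values[0]

print(round(eso_binomial(S0=100.0, K=100.0, T=10.0, r=0.03, sigma=0.3), 2))
```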
The influence of random element displacement on DOA estimates obtained with (Khatri-Rao-)root-MUSIC.
Inghelbrecht, Veronique; Verhaevert, Jo; van Hecke, Tanja; Rogier, Hendrik
2014-11-11
Although a wide range of direction of arrival (DOA) estimation algorithms has been described for a diverse range of array configurations, no specific stochastic analysis framework has been established to assess the probability density function of the error on DOA estimates due to random errors in the array geometry. Therefore, we propose a stochastic collocation method that relies on a generalized polynomial chaos expansion to connect the statistical distribution of random position errors to the resulting distribution of the DOA estimates. We apply this technique to the conventional root-MUSIC and the Khatri-Rao-root-MUSIC methods. According to Monte-Carlo simulations, this novel approach yields a speedup by a factor of more than 100 in terms of CPU-time for a one-dimensional case and by a factor of 56 for a two-dimensional case.
Phase Retrieval System for Assessing Diamond Turning and Optical Surface Defects
NASA Technical Reports Server (NTRS)
Dean, Bruce; Maldonado, Alex; Bolcar, Matthew
2011-01-01
An optical design is presented for a measurement system used to assess the impact of surface errors originating from diamond turning artifacts. Diamond turning artifacts are common by-products of optical surface shaping using the diamond turning process (a diamond-tipped cutting tool used in a lathe configuration). Assessing and evaluating the errors imparted by diamond turning (including other surface errors attributed to optical manufacturing techniques) can be problematic and generally requires the use of an optical interferometer. Commercial interferometers can be expensive when compared to the simple optical setup developed here, which is used in combination with an image-based sensing technique (phase retrieval). Phase retrieval is a general term used in optics to describe the estimation of optical imperfections or aberrations. This turnkey system uses only image-based data and has minimal hardware requirements. The system is straightforward to set up, easy to align, and can provide nanometer accuracy on the measurement of optical surface defects.
NASA Astrophysics Data System (ADS)
Pan, Xingchen; Liu, Cheng; Zhu, Jianqiang
2018-02-01
Coherent modulation imaging, which provides fast convergence and high resolution from a single diffraction pattern, is a promising technique for meeting the urgent demand for on-line multi-parameter diagnostics with a single setup in high-power laser facilities (HPLF). However, the influence of noise on the final calculated parameters has not yet been investigated. Using a series of simulations with twenty different sampling beams generated from the practical parameters and performance of an HPLF, we performed a quantitative, statistics-based analysis that considers five different error sources. We found that detector background noise and high quantization error seriously affect the final accuracy, and that different parameters have different sensitivities to different noise sources. The simulation results and the corresponding analysis indicate potential directions for further improving the accuracy of parameter diagnostics, which is critically important for routine application in HPLF.
Writing executable assertions to test flight software
NASA Technical Reports Server (NTRS)
Mahmood, A.; Andrews, D. M.; Mccluskey, E. J.
1984-01-01
An executable assertion is a logical statement about the variables or a block of code. If there is no error during execution, the assertion statement evaluates to a true value. Executable assertions can be used for dynamic testing of software: they can be employed for validation during the design phase, and for exception handling and error detection during the operation phase. The present investigation is concerned with the problem of writing executable assertions, taking into account the use of assertions for testing flight software. The digital flight control system and the flight control software are discussed. The considered system provides autopilot and flight director modes of operation for automatic and manual control of the aircraft during all phases of flight. Attention is given to techniques for writing and using assertions to test flight software, an experimental setup to test flight software, and language features to support efficient use of assertions.
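A minimal sketch of what an executable assertion looks like in practice; the variable names and limits are invented for illustration and are not taken from the flight software discussed above:

```python
# Illustrative sketch of an "executable assertion": a logical statement about
# program variables that evaluates to True during a correct execution and flags
# an error otherwise. The altitude/pitch limits here are made-up values, not
# taken from any real flight control system.
def update_autopilot(altitude_ft: float, pitch_deg: float) -> None:
    # executable assertions: reasonableness checks on the state variables
    assert 0.0 <= altitude_ft <= 60000.0, f"altitude out of range: {altitude_ft}"
    assert -25.0 <= pitch_deg <= 25.0, f"pitch command out of range: {pitch_deg}"
    # ... control-law computation would go here ...

update_autopilot(35000.0, 2.5)      # passes silently
# update_autopilot(35000.0, 40.0)   # would raise AssertionError (error detected)
```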
Uncertainties in extracted parameters of a Gaussian emission line profile with continuum background.
Minin, Serge; Kamalabadi, Farzad
2009-12-20
We derive analytical equations for uncertainties in parameters extracted by nonlinear least-squares fitting of a Gaussian emission function with an unknown continuum background component in the presence of additive white Gaussian noise. The derivation is based on the inversion of the full curvature matrix (equivalent to Fisher information matrix) of the least-squares error, chi(2), in a four-variable fitting parameter space. The derived uncertainty formulas (equivalent to Cramer-Rao error bounds) are found to be in good agreement with the numerically computed uncertainties from a large ensemble of simulated measurements. The derived formulas can be used for estimating minimum achievable errors for a given signal-to-noise ratio and for investigating some aspects of measurement setup trade-offs and optimization. While the intended application is Fabry-Perot spectroscopy for wind and temperature measurements in the upper atmosphere, the derivation is generic and applicable to other spectroscopy problems with a Gaussian line shape.
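A numerical sketch of the same idea, assuming illustrative parameter values: build the Fisher information (curvature) matrix for a Gaussian line plus constant background from the model Jacobian and invert it to obtain Cramer-Rao lower bounds:

```python
# Fisher information for the model A*exp(-(x-x0)^2/(2 w^2)) + b under additive
# white Gaussian noise, and the resulting Cramer-Rao lower bounds on (A, x0, w, b).
# All numbers are illustrative, not taken from the paper.
import numpy as np

x = np.linspace(-5.0, 5.0, 200)                  # sample grid (arbitrary)
A, x0, w, b, sigma_n = 10.0, 0.0, 1.0, 2.0, 0.5  # true parameters and noise std
g = np.exp(-(x - x0) ** 2 / (2 * w ** 2))

# Jacobian of the model with respect to (A, x0, w, b)
J = np.column_stack([
    g,                                  # d/dA
    A * g * (x - x0) / w ** 2,          # d/dx0
    A * g * (x - x0) ** 2 / w ** 3,     # d/dw
    np.ones_like(x),                    # d/db
])
fisher = J.T @ J / sigma_n ** 2         # Fisher information (curvature) matrix
crlb = np.sqrt(np.diag(np.linalg.inv(fisher)))   # 1-sigma lower bounds
print(dict(zip(["A", "x0", "w", "b"], crlb.round(4))))
```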
Llorca, David F; Sotelo, Miguel A; Parra, Ignacio; Ocaña, Manuel; Bergasa, Luis M
2010-01-01
This paper presents an analytical study of the depth estimation error of a stereo vision-based pedestrian detection sensor for automotive applications such as pedestrian collision avoidance and/or mitigation. The sensor comprises two synchronized and calibrated low-cost cameras. Pedestrians are detected by combining a 3D clustering method with Support Vector Machine-based (SVM) classification. The influence of the sensor parameters in the stereo quantization errors is analyzed in detail providing a point of reference for choosing the sensor setup according to the application requirements. The sensor is then validated in real experiments. Collision avoidance maneuvers by steering are carried out by manual driving. A real time kinematic differential global positioning system (RTK-DGPS) is used to provide ground truth data corresponding to both the pedestrian and the host vehicle locations. The performed field test provided encouraging results and proved the validity of the proposed sensor for being used in the automotive sector towards applications such as autonomous pedestrian collision avoidance.
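A back-of-the-envelope sketch of the stereo quantization error analysis referred to above, using the standard relation Z = fB/d with hypothetical camera parameters (not those of the sensor in the paper):

```python
# Depth Z = f*B/d for focal length f (pixels), baseline B (m) and disparity d (px),
# so a disparity error of +/- 0.5 px maps to a depth error that grows roughly as Z^2.
def depth_error(Z_m: float, f_px: float, baseline_m: float, d_err_px: float = 0.5) -> float:
    """Approximate depth error (m) at range Z for a given disparity error."""
    return (Z_m ** 2) * d_err_px / (f_px * baseline_m)

for Z in (5.0, 15.0, 30.0):     # hypothetical pedestrian ranges
    print(Z, "m ->", round(depth_error(Z, f_px=800.0, baseline_m=0.3), 2), "m")
```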
Flight-deck automation - Promises and problems
NASA Technical Reports Server (NTRS)
Wiener, E. L.; Curry, R. E.
1980-01-01
The paper analyzes the role of human factors in flight-deck automation, identifies problem areas, and suggests design guidelines. Flight-deck automation using microprocessor technology and display systems improves performance and safety while leading to a decrease in size, cost, and power consumption. On the other hand, negative factors such as failure of automatic equipment, automation-induced error compounded by crew error, crew error in equipment set-up, failure to heed automatic alarms, and loss of proficiency must also be taken into account. Among the problem areas discussed are automation of control tasks, monitoring of complex systems, psychosocial aspects of automation, and alerting and warning systems. Guidelines are suggested for designing, utilizing, and improving control and monitoring systems. Investigation into flight-deck automation systems is important as the knowledge gained can be applied to other systems such as air traffic control and nuclear power generation, but the many problems encountered with automated systems need to be analyzed and overcome in future research.
Real-time soft error rate measurements on bulk 40 nm SRAM memories: a five-year dual-site experiment
NASA Astrophysics Data System (ADS)
Autran, J. L.; Munteanu, D.; Moindjie, S.; Saad Saoud, T.; Gasiot, G.; Roche, P.
2016-11-01
This paper reports five years of real-time soft error rate experimentation conducted with the same setup at mountain altitude for three years and then at sea level for two years. More than 7 Gbit of SRAM memories manufactured in CMOS bulk 40 nm technology have been subjected to the natural radiation background. The intensity of the atmospheric neutron flux has been continuously measured on site during these experiments using dedicated neutron monitors. As a result, the neutron and alpha components of the soft error rate (SER) have been very accurately extracted from these measurements, refining the first SER estimations performed in 2012 for this SRAM technology. Data obtained at sea level evidence, for the first time, a possible correlation between the neutron flux changes induced by the daily atmospheric pressure variations and the measured SER. Finally, all of the experimental data are compared with results obtained from accelerated tests and numerical simulation.
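For orientation, the sketch below shows how an observed upset count and exposure convert to a soft error rate in FIT/Mbit; the counts used are hypothetical and are not the paper's data:

```python
# Illustrative calculation (hypothetical counts): convert an observed soft-error
# count and exposure into a soft error rate in FIT/Mbit, i.e. failures per 1e9
# device-hours normalised per megabit of SRAM.
def ser_fit_per_mbit(n_errors: int, capacity_mbit: float, hours: float) -> float:
    return n_errors / (capacity_mbit * hours) * 1e9

# e.g. 90 upsets observed on 7000 Mbit over 3 years of continuous exposure
print(round(ser_fit_per_mbit(90, 7000.0, 3 * 365 * 24), 1), "FIT/Mbit")
```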
NASA Astrophysics Data System (ADS)
Duan, Yaxuan; Xu, Songbo; Yuan, Suochao; Chen, Yongquan; Li, Hongguang; Da, Zhengshang; Gao, Limin
2018-01-01
The ISO 12233 slanted-edge method suffers errors when the fast Fourier transform (FFT) is used for camera modulation transfer function (MTF) measurement, because tilt-angle errors in the knife edge result in nonuniform sampling of the edge spread function (ESF). To resolve this problem, a modified slanted-edge method using the nonuniform fast Fourier transform (NUFFT) is proposed for camera MTF measurement. Theoretical simulations on noisy images at different nonuniform sampling rates of the ESF are performed using the proposed modified slanted-edge method. It is shown that the proposed method successfully eliminates the error due to the nonuniform sampling of the ESF. An experimental setup for camera MTF measurement is established to verify the accuracy of the proposed method. The experimental results show that, under different nonuniform sampling rates of the ESF, the proposed modified slanted-edge method is more accurate for camera MTF measurement than the ISO 12233 slanted-edge method.
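For context, a minimal sketch of the standard slanted-edge pipeline (ESF differentiated to an LSF, then Fourier transformed to an MTF) on a uniformly sampled synthetic edge; it does not implement the NUFFT variant proposed above:

```python
# Standard ESF -> LSF -> MTF pipeline on a synthetic, uniformly sampled edge.
# The edge width and grid are purely illustrative.
import numpy as np

x = np.linspace(-2.0, 2.0, 512)                    # uniform ESF sample positions
esf = 0.5 * (1.0 + np.tanh(x / 0.15))              # synthetic edge spread function
lsf = np.gradient(esf, x)                          # line spread function
lsf *= np.hanning(lsf.size)                        # window against spectral leakage
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                                      # normalise to 1 at zero frequency
freq = np.fft.rfftfreq(lsf.size, d=x[1] - x[0])    # spatial frequency (cycles per unit)
mtf50 = freq[np.argmax(mtf < 0.5)]                 # first frequency where MTF drops below 0.5
print("MTF50 ~", round(float(mtf50), 2), "cycles/unit")
```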
Olofsson, Madelen A; Bylund, Dan
2015-10-01
A liquid chromatography with electrospray ionization mass spectrometry method was developed to quantitatively and qualitatively analyze 13 hydroxamate siderophores (ferrichrome, ferrirubin, ferrirhodin, ferrichrysin, ferricrocin, ferrioxamine B, D1 , E and G, neocoprogen I and II, coprogen and triacetylfusarinine C). Samples were preconcentrated on-line by a switch-valve setup prior to analyte separation on a Kinetex C18 column. Gradient elution was performed using a mixture of an ammonium formate buffer and acetonitrile. Total analysis time including column conditioning was 20.5 min. Analytes were fragmented by applying collision-induced dissociation, enabling structural identification by tandem mass spectrometry. Limit of detection values for the selected ion monitoring method ranged from 71 pM to 1.5 nM with corresponding values of two to nine times higher for the multiple reaction monitoring method. The liquid chromatography with mass spectrometry method resulted in a robust and sensitive quantification of hydroxamate siderophores as indicated by retention time stability, linearity, sensitivity, precision and recovery. The analytical error of the methods, assessed through random-order, duplicate analysis of soil samples extracted with a mixture of 10 mM phosphate buffer and methanol, appears negligible in relation to between-sample variations. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Blaya, J A; Shin, S S; Yale, G; Suarez, C; Asencios, L; Contreras, C; Rodriguez, P; Kim, J; Cegielski, P; Fraser, H S F
2010-08-01
To evaluate the impact of the e-Chasqui laboratory information system in reducing reporting errors compared to the current paper system. Cluster randomized controlled trial in 76 health centers (HCs) between 2004 and 2008. Baseline data were collected every 4 months for 12 months. HCs were then randomly assigned to intervention (e-Chasqui) or control (paper). Further data were collected for the same months the following year. Comparisons were made between intervention and control HCs, and before and after the intervention. Intervention HCs had respectively 82% and 87% fewer errors in reporting results for drug susceptibility tests (2.1% vs. 11.9%, P = 0.001, OR 0.17, 95%CI 0.09-0.31) and cultures (2.0% vs. 15.1%, P < 0.001, OR 0.13, 95%CI 0.07-0.24), than control HCs. Preventing missing results through online viewing accounted for at least 72% of all errors. e-Chasqui users sent on average three electronic error reports per week to the laboratories. e-Chasqui reduced the number of missing laboratory results at point-of-care health centers. Clinical users confirmed viewing electronic results not available on paper. Reporting errors to the laboratory using e-Chasqui promoted continuous quality improvement. The e-Chasqui laboratory information system is an important part of laboratory infrastructure improvements to support multidrug-resistant tuberculosis care in Peru.
Robust adaptive uniform exact tracking control for uncertain Euler-Lagrange system
NASA Astrophysics Data System (ADS)
Yang, Yana; Hua, Changchun; Li, Junpeng; Guan, Xinping
2017-12-01
This paper offers a solution to robust adaptive uniform exact tracking control for uncertain nonlinear Euler-Lagrange (EL) systems. An adaptive finite-time tracking control algorithm is designed by proposing a novel nonsingular integral terminal sliding-mode surface. Moreover, a new adaptive parameter tuning law is developed by making good use of the system tracking errors and the adaptive parameter estimation errors. Thus, both trajectory tracking and parameter estimation can be achieved simultaneously within a guaranteed time that can be adjusted arbitrarily according to practical demands. Additionally, the control result proposed here for the EL system can easily be extended to high-order nonlinear systems. Finally, a test-bed 2-DOF robot arm is set up to demonstrate the performance of the new control algorithm.
Lateral velocity estimation bias due to beamforming delay errors (Conference Presentation)
NASA Astrophysics Data System (ADS)
Rodriguez-Molares, Alfonso; Fadnes, Solveig; Swillens, Abigail; Løvstakken, Lasse
2017-03-01
An artefact has recently been reported [1,2] in the estimation of lateral blood velocity using speckle tracking. This artefact appears as a net velocity bias in the presence of strong spatial velocity gradients, such as those that occur at the edges of the filling jets in the heart. Even though this artefact has been found both in vitro and in simulated data, its causes remain undescribed. Here we demonstrate that a potential source of this artefact can be traced to small errors in the beamforming setup. By inserting a small offset in the beamforming delay, one can artificially create a net lateral movement of the speckle in areas of high velocity gradient. That offset does not have a strong impact on image quality and can easily go undetected.
[Exploration of the concept of genetic drift in genetics teaching of undergraduates].
Wang, Chun-ming
2016-01-01
Genetic drift is one of the difficult concepts in teaching genetics because of its randomness and probabilistic nature, which can easily cause conceptual misunderstanding. The "sampling error" in its definition is often misunderstood as referring to the research method of "sampling", i.e., as an artefact that disturbs the results, rather than as the random sampling process that causes the random changes in allele frequency. I analyzed and compared the definitions of genetic drift in domestic and international genetics textbooks, and found that definitions containing "sampling error" are widely adopted but are interpreted correctly in only a few textbooks. Here, the history of research on genetic drift, i.e., the contributions of Wright, Fisher and Kimura, is introduced. Moreover, I describe two representative articles recently published on the teaching of genetic drift to undergraduates, which point out that misconceptions are inevitable for undergraduates during the learning process and also provide a preliminary solution. Combining this with my own teaching practice, I suggest that the definition of genetic drift containing "sampling error" can be adopted with further interpretation: the "sampling error" is the random sampling among gametes when the next generation of alleles is generated, which is equivalent to a random sampling of all gametes participating in mating in the gamete pool, and has no relationship with artificial sampling in general genetics studies. This article may provide some help in genetics teaching.
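A toy Wright-Fisher simulation of the "random sampling among gametes" interpretation given above; the population size and number of generations are arbitrary teaching values:

```python
# Each generation, 2N gametes are drawn binomially from the current allele
# frequency, so the frequency drifts even though no selection acts.
import numpy as np

def drift_trajectory(N=50, p0=0.5, generations=100, seed=1):
    rng = np.random.default_rng(seed)
    p = p0
    traj = [p]
    for _ in range(generations):
        p = rng.binomial(2 * N, p) / (2 * N)   # random gamete sampling
        traj.append(p)
    return traj

traj = drift_trajectory()
print("start:", traj[0], "end:", traj[-1])      # end frequency varies run to run
```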
Edgren, Gustaf; Anderson, Jacqueline; Dolk, Anders; Torgerson, Jarl; Nyberg, Svante; Skau, Tommy; Forsberg, Birger C; Werr, Joachim; Öhlen, Gunnar
2016-10-01
A small group of frequent visitors to Emergency Departments accounts for a disproportionally large fraction of healthcare consumption including unplanned hospitalizations and overall healthcare costs. In response, several case and disease management programs aimed at reducing healthcare consumption in this group have been tested; however, results vary widely. To investigate whether a telephone-based, nurse-led case management intervention can reduce healthcare consumption for frequent Emergency Department visitors in a large-scale setup. A total of 12 181 frequent Emergency Department users in three counties in Sweden were randomized using Zelen's design or a traditional randomized design to receive either a nurse-led case management intervention or no intervention, and were followed for healthcare consumption for up to 2 years. The traditional design showed an overall 12% (95% confidence interval 4-19%) decreased rate of hospitalization, which was mostly driven by effects in the last year. Similar results were achieved in the Zelen studies, with a significant reduction in hospitalization in the last year, but mixed results in the early development of the project. Our study provides evidence that a carefully designed telephone-based intervention with accurate and systematic patient selection and appropriate staff training in a centralized setup can lead to significant decreases in healthcare consumption and costs. Further, our results also show that the effects are sensitive to the delivery model chosen.
Technological Advancements and Error Rates in Radiation Therapy Delivery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Margalit, Danielle N., E-mail: dmargalit@partners.org; Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA; Chen, Yu-Hui
2011-11-15
Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)-conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women's Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher's exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01-0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08-0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique. There was a lower error rate with IMRT compared with 3D/conventional RT, highlighting the need for sustained vigilance against errors common to more traditional treatment techniques.
Optimizing dynamic downscaling in one-way nesting using a regional ocean model
NASA Astrophysics Data System (ADS)
Pham, Van Sy; Hwang, Jin Hwan; Ku, Hyeyun
2016-10-01
Dynamical downscaling with nested regional oceanographic models has been demonstrated to be an effective approach both for operational forecasting of regional sea weather and for projecting future climate change and its impact on the ocean. However, when nesting procedures are carried out in dynamic downscaling from a larger-scale model or set of observations to a smaller scale, errors are unavoidable due to the differences in grid sizes and updating intervals. The present work assesses the impact of errors produced by nesting procedures on the downscaled results from Ocean Regional Circulation Models (ORCMs). Errors are identified and evaluated based on their sources and characteristics by employing the Big-Brother Experiment (BBE). The BBE uses the same model to produce both nesting and nested simulations, so it addresses those error sources separately (i.e., without combining the contributions of errors from different sources). Here, we focus on discussing errors resulting from differences in spatial grids, updating times and domain sizes. After the BBE was separately run for diverse cases, a Taylor diagram was used to analyze the results and recommend an optimal combination of grid size, updating period and domain size. Finally, suggested setups for the downscaling were evaluated by examining the spatial correlations of variables and the relative magnitudes of variances between the nested model and the original data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bai, Sen; Li, Guangjun; Wang, Maojie
The purpose of this study was to investigate the effect of multileaf collimator (MLC) leaf position, collimator rotation angle, and accelerator gantry rotation angle errors on intensity-modulated radiotherapy plans for nasopharyngeal carcinoma. To compare dosimetric differences between the simulated plans and the clinical plans with evaluation parameters, 6 patients with nasopharyngeal carcinoma were selected for simulation of systematic and random MLC leaf position errors, collimator rotation angle errors, and accelerator gantry rotation angle errors. The dose distribution was highly sensitive to systematic MLC leaf position errors, with the sensitivity depending on field size. When the systematic MLC position errors were 0.5, 1, and 2 mm, the maximum values of the mean dose deviation, observed in the parotid glands, were 4.63%, 8.69%, and 18.32%, respectively. The dosimetric effect was comparatively small for systematic MLC shift errors. For random MLC errors up to 2 mm and collimator and gantry rotation angle errors up to 0.5°, the dosimetric effect was negligible. We suggest that quality control be regularly conducted for MLC leaves, so as to ensure that systematic MLC leaf position errors are within 0.5 mm. Because the dosimetric effect of 0.5° collimator and gantry rotation angle errors is negligible, it can be concluded that setting a proper threshold for allowed errors of collimator and gantry rotation angle may increase treatment efficacy and reduce treatment time.
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2014-01-01
This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: Dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data, Sparse tensorization methods[2] utilizing node-nested hierarchies, Sampling methods[4] for high-dimensional random variable spaces.
Evaluation of Bayesian Sequential Proportion Estimation Using Analyst Labels
NASA Technical Reports Server (NTRS)
Lennington, R. K.; Abotteen, K. M. (Principal Investigator)
1980-01-01
The author has identified the following significant results. A total of ten Large Area Crop Inventory Experiment Phase 3 blind sites and analyst-interpreter labels were used in a study to compare proportional estimates obtained by the Bayes sequential procedure with estimates obtained from simple random sampling and from Procedure 1. The analyst error rate using the Bayes technique was shown to be no greater than that for the simple random sampling. Also, the segment proportion estimates produced using this technique had smaller bias and mean squared errors than the estimates produced using either simple random sampling or Procedure 1.
Sivanandy, Palanisamy; Maharajan, Mari Kannan; Rajiah, Kingston; Wei, Tan Tyng; Loon, Tan Wee; Yee, Lim Chong
2016-01-01
Background Patient safety is a major public health issue, and the knowledge, skills, and experience of health professionals are essential for improving patient safety. Patient safety and medication error are closely associated. Pharmacists play a significant role in patient safety, and their function in the medication use process is very different from that of medical and nursing colleagues. Medication dispensing accuracy is a vital element in ensuring the safety and quality of medication use. Objective To evaluate the attitude and perception of pharmacists toward patient safety in the retail pharmacy setup in Malaysia. Methods The Pharmacy Survey on Patient Safety Culture questionnaire, developed by the Agency for Healthcare Research and Quality, was used to assess patient safety culture, and a convenience sampling method was adopted. Results The overall positive response rate ranged from 31.20% to 87.43%, and the average positive response rate was 67%. Among the eleven domains pertaining to patient safety culture, the scores for "staff training and skills" were lowest. Communication openness and patient counseling are common, but are not practiced as regularly in the Malaysian retail pharmacy setup as in the USA. The overall perception of patient safety was at an acceptable level in the current retail pharmacy setup. Conclusion The study revealed that staff training, skills, communication in patient counseling, and communication across shifts and about mistakes are lacking in the current retail pharmacy setup. The overall perception of patient safety should be improved by educating pharmacists about the significance and essentials of patient safety. PMID:27524887
Kenney, Laurence P; Heller, Ben W; Barker, Anthony T; Reeves, Mark L; Healey, Jamie; Good, Timothy R; Cooper, Glen; Sha, Ning; Prenton, Sarah; Liu, Anmin; Howard, David
2016-11-01
Functional electrical stimulation has been shown to be a safe and effective means of correcting foot drop of central neurological origin. Current surface-based devices typically consist of a single channel stimulator, a sensor for determining gait phase and a cuff, within which is housed the anode and cathode. The cuff-mounted electrode design reduces the likelihood of large errors in electrode placement, but the user is still fully responsible for selecting the correct stimulation level each time the system is donned. Researchers have investigated different approaches to automating aspects of setup and/or use, including recent promising work based on iterative learning techniques. This paper reports on the design and clinical evaluation of an electrode array-based FES system for the correction of drop foot, ShefStim. The paper reviews the design process from proof of concept lab-based study, through modelling of the array geometry and interface layer to array search algorithm development. Finally, the paper summarises two clinical studies involving patients with drop foot. The results suggest that the ShefStim system with automated setup produces results which are comparable with clinician setup of conventional systems. Further, the final study demonstrated that patients can use the system without clinical supervision. When used unsupervised, setup time was 14min (9min for automated search plus 5min for donning the equipment), although this figure could be reduced significantly with relatively minor changes to the design. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
Torralba, Marta; Díaz-Pérez, Lucía C.
2017-01-01
This article presents a self-calibration procedure and the experimental results for the geometrical characterisation of a 2D laser system operating along a large working range (50 mm × 50 mm) with submicrometre uncertainty. Its purpose is to correct the geometric errors of the 2D laser system setup generated when positioning the two laser heads and the plane mirrors used as reflectors. The non-calibrated artefact used in this procedure is a commercial grid encoder that is also a measuring instrument. Therefore, the self-calibration procedure also allows the determination of the geometrical errors of the grid encoder, including its squareness error. The precision of the proposed algorithm is tested using virtual data. Actual measurements are subsequently registered, and the algorithm is applied. Once the laser system is characterised, the error of the grid encoder is calculated along the working range, resulting in an expanded submicrometre calibration uncertainty (k = 2) for the X and Y axes. The results of the grid encoder calibration are comparable to the errors provided by the calibration certificate for its main central axes. It is, therefore, possible to confirm the suitability of the self-calibration methodology proposed in this article. PMID:28858239
Henrion, Sebastian; Spoor, Cees W; Pieters, Remco P M; Müller, Ulrike K; van Leeuwen, Johan L
2015-07-07
Images of underwater objects are distorted by refraction at the water-glass-air interfaces, and these distortions can lead to substantial errors when reconstructing the objects' position and shape. So far, aquatic locomotion studies have minimized refraction in their experimental setups and used the direct linear transform algorithm (DLT) to reconstruct position information, which does not model refraction explicitly. Here we present a refraction corrected ray-tracing algorithm (RCRT) that reconstructs position information using Snell's law. We validated this reconstruction by calculating the 3D reconstruction error, i.e., the difference between the actual and reconstructed position of a marker. We found that the reconstruction error is small (typically less than 1%). Compared with the DLT algorithm, the RCRT has overall lower reconstruction errors, especially outside the calibration volume, and errors are essentially insensitive to camera position and orientation and the number and position of the calibration points. To demonstrate the effectiveness of the RCRT, we tracked an anatomical marker on a seahorse recorded with four cameras to reconstruct the swimming trajectory for six different camera configurations. The RCRT algorithm is accurate and robust, allows cameras to be oriented at large angles of incidence, and facilitates the development of accurate tracking algorithms to quantify aquatic manoeuvres.
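A minimal 2D sketch of the refraction step that underlies such a ray-tracing reconstruction: bending a ray at flat air-glass and glass-water interfaces with Snell's law, using textbook refractive indices and an arbitrary incidence angle (not the authors' implementation):

```python
# Snell's law at flat interfaces; for parallel interfaces the final angle matches
# the direct air-to-water refraction, as expected.
import math

def refract_angle(theta_in_deg: float, n1: float, n2: float) -> float:
    """Angle of the refracted ray (degrees from the interface normal)."""
    s = n1 / n2 * math.sin(math.radians(theta_in_deg))
    if abs(s) > 1.0:
        raise ValueError("total internal reflection")
    return math.degrees(math.asin(s))

theta_air = 30.0
theta_glass = refract_angle(theta_air, 1.000, 1.517)    # air -> glass
theta_water = refract_angle(theta_glass, 1.517, 1.333)  # glass -> water
print(round(theta_glass, 2), round(theta_water, 2))
```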
Rosetta Navigation at its Mars Swing-By
NASA Technical Reports Server (NTRS)
Budnik, Frank; Morley, Trevor
2007-01-01
This paper reports on the navigation activities during Rosetta's Mars swing-by. It covers the Mars approach phase starting after a deterministic deep-space maneuver in September 2006, the swing-by proper on 25 February 2007, and ends with another deterministic deep-space maneuver in April 2007, which was also foreseen to compensate for any navigation error. Emphasis is put on the orbit determination and prediction setup, and on the evolution of the targeting estimates in the B-plane and their adjustment by trajectory correction maneuvers.
Quantum key distribution with passive decoy state selection
NASA Astrophysics Data System (ADS)
Mauerer, Wolfgang; Silberhorn, Christine
2007-05-01
We propose a quantum key distribution scheme which closely matches the performance of a perfect single photon source. It nearly attains the physical upper bound in terms of key generation rate and maximally achievable distance. Our scheme relies on a practical setup based on a parametric downconversion source and present day, nonideal photon-number detection. Arbitrary experimental imperfections which lead to bit errors are included. We select decoy states by classical postprocessing. This allows one to improve the effective signal statistics and achievable distance.
Physical-Layer Network Coding for VPN in TDM-PON
NASA Astrophysics Data System (ADS)
Wang, Qike; Tse, Kam-Hon; Chen, Lian-Kuan; Liew, Soung-Chang
2012-12-01
We experimentally demonstrate a novel optical physical-layer network coding (PNC) scheme over a time-division multiplexing (TDM) passive optical network (PON). Full-duplex error-free communications between optical network units (ONUs) at 2.5 Gb/s are shown for all-optical virtual private network (VPN) applications. Compared to the conventional half-duplex communication setup, our scheme can increase the capacity by 100% with a power penalty smaller than 3 dB. Synchronization of the two ONUs is not required for the proposed VPN scheme.
Center of mass perception and inertial frames of reference.
Bingham, G P; Muchisky, M M
1993-11-01
Center of mass perception was investigated by varying the shape, size, and orientation of planar objects. Shape was manipulated to investigate symmetries as information. The number of reflective symmetry axes, the amount of rotational symmetry, and the presence of radial symmetry were varied. Orientation affected systematic errors. Judgments tended to undershoot the center of mass. Random errors increased with size and decreased with symmetry. Size had no effect on random errors for maximally symmetric objects, although orientation did. The spatial distributions of judgments were elliptical. Distribution axes were found to align with the principal moments of inertia. Major axes tended to align with gravity in maximally symmetric objects. A functional and physical account was given in terms of the repercussions of error. Overall, judgments were very accurate.
NASA Astrophysics Data System (ADS)
Brask, Jonatan Bohr; Martin, Anthony; Esposito, William; Houlmann, Raphael; Bowles, Joseph; Zbinden, Hugo; Brunner, Nicolas
2017-05-01
An approach to quantum random number generation based on unambiguous quantum state discrimination is developed. We consider a prepare-and-measure protocol, where two nonorthogonal quantum states can be prepared, and a measurement device aims at unambiguously discriminating between them. Because the states are nonorthogonal, this necessarily leads to a minimal rate of inconclusive events whose occurrence must be genuinely random and which provide the randomness source that we exploit. Our protocol is semi-device-independent in the sense that the output entropy can be lower bounded based on experimental data and a few general assumptions about the setup alone. It is also practically relevant, which we demonstrate by realizing a simple optical implementation, achieving rates of 16.5 Mbit/s. Combining ease of implementation, a high rate, and a real-time entropy estimation, our protocol represents a promising approach intermediate between fully device-independent protocols and commercial quantum random number generators.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Y; Hieken, T; Mutter, R
2015-06-15
Purpose To investigate the feasibility of utilizing carbon fiducials to increase localization accuracy of the lumpectomy cavity for partial breast irradiation (PBI). Methods Carbon fiducials were placed intraoperatively in the lumpectomy cavity following resection of breast cancer in 11 patients. The patients were scheduled to receive whole breast irradiation (WBI) with a boost or 3D-conformal PBI. WBI patients were initially set up to skin tattoos using lasers, followed by orthogonal kV on-board-imaging (OBI) matching to bone per clinical practice. Cone beam CT (CBCT) was acquired weekly for offline review. For the boost component of WBI and PBI, patients were set up with lasers, followed by OBI matching to fiducials, with final alignment by CBCT matching to fiducials. Using carbon fiducials as a surrogate for the lumpectomy cavity and CBCT matching to fiducials as the gold standard, setup uncertainties to lasers, OBI bone, OBI fiducials, and CBCT breast were compared. Results Minimal imaging artifacts were introduced by fiducials on the planning CT and CBCT. The fiducials were sufficiently visible on OBI for online localization. The mean magnitude and standard deviation of setup errors were 8.4 mm ± 5.3 mm (n=84), 7.3 mm ± 3.7 mm (n=87), 2.2 mm ± 1.6 mm (n=40) and 4.8 mm ± 2.6 mm (n=87) for lasers, OBI bone, OBI fiducials and CBCT breast tissue, respectively. Significant migration occurred in one of 39 implanted fiducials in a patient with a large postoperative seroma. Conclusion OBI carbon fiducial-based setup can improve localization accuracy with minimal imaging artifacts. With increased localization accuracy, setup uncertainties can be reduced from 8 mm using OBI bone matching to 3 mm using OBI fiducial matching for PBI treatment. This work demonstrates the feasibility of utilizing carbon fiducials to increase localization accuracy to the lumpectomy cavity for PBI. This may be particularly attractive for localization in the setting of proton therapy and other scenarios in which metal clips are contraindicated.
SU-F-J-21: Clinical Evaluation of Surface Scanning Systems in Different Treatment Locations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moser, T; Karger, C; Stefanowicz, S
Purpose: To reduce imaging dose in fractionated IGRT, the ability of optical surface imaging systems (OSIS) to detect setup errors was tested. Therefore, clinical studies were performed to evaluate, for different treatment locations, the setup corrections derived by OSIS in comparison with x-ray image guidance in fractionated radiation therapy. Methods: The setup correction accuracy of an OSIS system (AlignRT, VisionRT, London, UK) will be analysed for the 4 tumour locations pelvis, upper abdomen, thorax and breast, with 20 patients for each location, in comparison to a different system (Sentinel, C-RAD, SE). For each patient, the setup corrections of the cone-beam computed tomography (CBCT) of an Elekta Versa HD linear accelerator (Elekta, Crawley, UK) are considered the gold standard and then compared retrospectively with those of the OSIS for the first ten fractions. No clinical decisions were made based on the surrogate system. For the OSIS, the reference surface is highly important as it represents the actual ground truth. It can be obtained either with the system itself, or the surface structure delineated in the planning CT can be imported via a DICOM interface. In this paper, the first results for the treatment region thorax are presented and the reference image modalities are compared. Results: Table 1 displays the difference between the setup corrections obtained with OSIS and CBCT in the lateral (LAT), longitudinal (LNG) and vertical (VRT) directions for the DICOM reference image. While the median deviations are within a few millimeters, some outliers showed large deviations. Generally, the mean deviation as well as the spread was smallest in the lateral and largest in the vertical direction. Conclusion: Although the system allows fast, simple and non-invasive determination of setup corrections, it should be evaluated separately for each treatment region; therefore, the study is ongoing. The application of OSIS may help to reduce the imaging dose for the patient. We gratefully acknowledge the support by our colleagues from the Radiological University Clinic Heidelberg, where the study was performed. This work was funded by the Federal Ministry of Education and Research (BMBF) Germany, grant number 01IB13001B.
Seo, Hogyu David; Lee, Daeyoup
2018-05-15
Random mutagenesis of a target gene is commonly used to identify mutations that yield the desired phenotype. Of the methods that may be used to achieve random mutagenesis, error-prone PCR is a convenient and efficient strategy for generating a diverse pool of mutants (i.e., a mutant library). Error-prone PCR is the method of choice when a researcher seeks to mutate a pre-defined region, such as the coding region of a gene while leaving other genomic regions unaffected. After the mutant library is amplified by error-prone PCR, it must be cloned into a suitable plasmid. The size of the library generated by error-prone PCR is constrained by the efficiency of the cloning step. However, in the fission yeast, Schizosaccharomyces pombe, the cloning step can be replaced by the use of a highly efficient one-step fusion PCR to generate constructs for transformation. Mutants of desired phenotypes may then be selected using appropriate reporters. Here, we describe this strategy in detail, taking as an example, a reporter inserted at centromeric heterochromatin.
Smooth empirical Bayes estimation of observation error variances in linear systems
NASA Technical Reports Server (NTRS)
Martz, H. F., Jr.; Lian, M. W.
1972-01-01
A smooth empirical Bayes estimator was developed for estimating the unknown random scale component of each of a set of observation error variances. It is shown that the estimator possesses a smaller average squared error loss than other estimators for a discrete time linear system.
Within-Tunnel Variations in Pressure Data for Three Transonic Wind Tunnels
NASA Technical Reports Server (NTRS)
DeLoach, Richard
2014-01-01
This paper compares the results of pressure measurements made on the same test article with the same test matrix in three transonic wind tunnels. A comparison is presented of the unexplained variance associated with polar replicates acquired in each tunnel. The impact of a significant component of systematic (not random) unexplained variance is reviewed, and the results of analyses of variance are presented to assess the degree of significant systematic error in these representative wind tunnel tests. Total uncertainty estimates are reported for 140 samples of pressure data, quantifying the effects of within-polar random errors and between-polar systematic bias errors.
The accuracy of the measurements in Ulugh Beg's star catalogue
NASA Astrophysics Data System (ADS)
Krisciunas, K.
1992-12-01
The star catalogue compiled by Ulugh Beg and his collaborators in Samarkand (ca. 1437) is the only catalogue primarily based on original observations between the times of Ptolemy and Tycho Brahe. Evans (1987) has given convincing evidence that Ulugh Beg's star catalogue was based on measurements made with a zodiacal armillary sphere graduated to 15′, with interpolation to 0.2 units. He and Shevchenko (1990) were primarily interested in the systematic errors in ecliptic longitude. Shevchenko's analysis of the random errors was limited to the twelve zodiacal constellations. We have analyzed all 843 ecliptic longitudes and latitudes attributed to Ulugh Beg by Knobel (1917). This required multiplying all the longitude errors by the respective values of the cosine of the celestial latitudes. We find a random error of ±17.7′ for ecliptic longitude and ±16.5′ for ecliptic latitude. On the whole, the random errors are largest near the ecliptic, decreasing towards the ecliptic poles. For all of Ulugh Beg's measurements (excluding outliers) the mean systematic error is -10.8′ ± 0.8′ for ecliptic longitude and 7.5′ ± 0.7′ for ecliptic latitude, with the errors in the sense "computed minus Ulugh Beg". For the brighter stars (those designated alpha, beta, and gamma in the respective constellations), the mean systematic errors are -11.3′ ± 1.9′ for ecliptic longitude and 9.4′ ± 1.5′ for ecliptic latitude. Within the errors this matches the systematic error in both coordinates for alpha Vir. With greater confidence we may conclude that alpha Vir was the principal reference star in the catalogues of Ulugh Beg and Ptolemy. Evans, J. 1987, J. Hist. Astr. 18, 155. Knobel, E. B. 1917, Ulugh Beg's Catalogue of Stars, Washington, D. C.: Carnegie Institution. Shevchenko, M. 1990, J. Hist. Astr. 21, 187.
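A small sketch of the error statistics described above, with made-up residuals: longitude residuals are scaled by the cosine of the ecliptic latitude before the mean (systematic) and standard deviation (random) errors are computed:

```python
# Hypothetical residuals (computed minus catalogue), in arcminutes; latitudes in degrees.
import numpy as np

dlon_arcmin = np.array([-12.0, 25.0, -30.0, 5.0, -18.0])
dlat_arcmin = np.array([10.0, -20.0, 15.0, -8.0, 22.0])
beta_deg = np.array([5.0, 40.0, -60.0, 20.0, -10.0])

dlon_gc = dlon_arcmin * np.cos(np.radians(beta_deg))   # great-circle longitude error
print("systematic (mean) error:", dlon_gc.mean().round(1), dlat_arcmin.mean().round(1))
print("random (std) error:     ", dlon_gc.std(ddof=1).round(1), dlat_arcmin.std(ddof=1).round(1))
```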
Holmes, John B; Dodds, Ken G; Lee, Michael A
2017-03-02
An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitudes smaller than the number of random effect levels, the computational requirements for our method should be reduced.
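A small numeric sketch, under simplifying assumptions (toy data, one random effect, identity relationship matrix), of obtaining prediction error variances from the mixed model equations, the quantity on which such connectedness measures are based:

```python
# Mixed model equations: C = [[X'X, X'Z], [Z'X, Z'Z + lambda*I]], with
# lambda = sigma_e^2 / sigma_a^2; the PEV of the random effects is sigma_e^2
# times the corresponding diagonal block of C^-1.
import numpy as np

# 6 records, 2 contemporary groups (fixed), 3 animals (random)
X = np.array([[1, 0], [1, 0], [1, 0], [0, 1], [0, 1], [0, 1]], float)   # group incidence
Z = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
              [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)                  # animal incidence
sigma2_e, sigma2_a = 4.0, 1.0
lam = sigma2_e / sigma2_a

C = np.block([[X.T @ X,            X.T @ Z],
              [Z.T @ X, Z.T @ Z + lam * np.eye(3)]])
Cinv = np.linalg.inv(C)
pev = sigma2_e * np.diag(Cinv)[2:]      # prediction error variances of the 3 animals
print(pev.round(3))
```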
Random access to mobile networks with advanced error correction
NASA Technical Reports Server (NTRS)
Dippold, Michael
1990-01-01
A random access scheme for unreliable data channels is investigated in conjunction with an adaptive Hybrid-II Automatic Repeat Request (ARQ) scheme using Rate Compatible Punctured Codes (RCPC) Forward Error Correction (FEC). A simple scheme with fixed frame length and equal slot sizes is chosen, and reservation is implicit in the first packet transmitted randomly in a free slot, similar to Reservation Aloha. This allows the further transmission of redundancy if the last decoding attempt failed. Results show that high channel utilization and superior throughput can be achieved with this scheme, which has quite low implementation complexity. For the example of an interleaved Rayleigh channel with soft decision decoding, utilization and mean delay are calculated. A utilization of 40 percent may be achieved under high traffic load for a frame whose number of slots equals half the number of stations. The effects of feedback channel errors and some countermeasures are discussed.
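A rough Monte Carlo sketch of the implicit-reservation idea is given below. It is a simplification of mine, not the scheme analyzed in the report: RCPC/FEC, channel errors, and feedback errors are ignored, parameters are hypothetical, and only the slot count (half the station number, as in the example above) echoes the abstract.

```python
# A minimal sketch, assuming: a station reserves a slot by being the only one to
# transmit its first packet there, then keeps the slot until its message ends.
import random

def simulate(n_stations=20, n_slots=10, msg_len=8, p_new=0.1,
             n_frames=10_000, seed=1):
    random.seed(seed)
    remaining = [0] * n_stations   # packets left in each station's message
    reserved = [None] * n_slots    # station currently holding each slot
    useful = 0

    for _ in range(n_frames):
        # New messages arrive at idle stations.
        for s in range(n_stations):
            if remaining[s] == 0 and random.random() < p_new:
                remaining[s] = msg_len

        # Backlogged stations without a reservation contend on a random free slot.
        free = [i for i, owner in enumerate(reserved) if owner is None]
        contenders = {}
        for s in range(n_stations):
            if remaining[s] > 0 and s not in reserved and free:
                slot = random.choice(free)
                contenders.setdefault(slot, []).append(s)

        # A lone contender succeeds and implicitly reserves the slot.
        for slot, stations in contenders.items():
            if len(stations) == 1:
                reserved[slot] = stations[0]

        # Reserved slots carry one packet per frame until the message ends.
        for slot, owner in enumerate(reserved):
            if owner is not None:
                remaining[owner] -= 1
                useful += 1
                if remaining[owner] == 0:
                    reserved[slot] = None

    return useful / (n_frames * n_slots)   # channel utilization

print(simulate())
```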
Predicting the random drift of MEMS gyroscope based on K-means clustering and OLS RBF Neural Network
NASA Astrophysics Data System (ADS)
Wang, Zhen-yu; Zhang, Li-jie
2017-10-01
Measurement error of a sensor can be effectively compensated through prediction. Aiming at the large random drift error of a MEMS (Micro Electro Mechanical System) gyroscope, an improved learning algorithm for a Radial Basis Function (RBF) Neural Network (NN) based on K-means clustering and Orthogonal Least Squares (OLS) is proposed in this paper. The algorithm first selects typical samples as the initial cluster centers of the RBF NN, then refines the candidate centers with the K-means algorithm, and finally optimizes the candidate centers with the OLS algorithm, which yields a simpler network structure and better prediction performance. Experimental results show that the proposed K-means clustering OLS learning algorithm can predict the random drift of a MEMS gyroscope effectively, with a prediction error of 9.8019e-7 deg/s and a prediction time of 2.4169e-6 s.
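The three-stage pipeline described above can be sketched as follows. This is a simplified rendering of mine, not the authors' implementation: it assumes Gaussian basis functions, uses scikit-learn's KMeans for the candidate centers, and substitutes a greedy residual-reduction forward selection for the full OLS procedure.

```python
# A minimal sketch, assuming training data (X, y) for one-step-ahead drift
# prediction; widths, counts, and names are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

def rbf_design(X, centers, width):
    # Gaussian basis functions evaluated for every sample/center pair.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_rbf_kmeans_ols(X, y, n_candidates=20, n_select=8, width=1.0, seed=0):
    # Step 1-2: candidate centers from K-means clustering of the inputs.
    centers = KMeans(n_clusters=n_candidates, n_init=10,
                     random_state=seed).fit(X).cluster_centers_
    Phi = rbf_design(X, centers, width)

    # Step 3 (simplified): greedily add the candidate column that most reduces
    # the residual sum of squares, as a stand-in for OLS subset selection.
    selected = []
    for _ in range(n_select):
        best, best_rss = None, np.inf
        for j in range(n_candidates):
            if j in selected:
                continue
            cols = Phi[:, selected + [j]]
            w, *_ = np.linalg.lstsq(cols, y, rcond=None)
            rss = np.sum((y - cols @ w) ** 2)
            if rss < best_rss:
                best, best_rss = j, rss
        selected.append(best)

    # Output weights for the selected centers by linear least squares.
    cols = Phi[:, selected]
    weights, *_ = np.linalg.lstsq(cols, y, rcond=None)
    return centers[selected], weights, width

def predict(X, centers, weights, width):
    return rbf_design(X, centers, width) @ weights
```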
Quantifying Errors in TRMM-Based Multi-Sensor QPE Products Over Land in Preparation for GPM
NASA Technical Reports Server (NTRS)
Peters-Lidard, Christa D.; Tian, Yudong
2011-01-01
Determining uncertainties in satellite-based multi-sensor quantitative precipitation estimates over land is of fundamental importance to both data producers and hydroclimatological applications. Evaluating TRMM-era products also lays the groundwork and sets the direction for algorithm and applications development for future missions, including GPM. QPE uncertainties result mostly from the interplay of systematic errors and random errors. In this work, we will synthesize our recent results quantifying the error characteristics of satellite-based precipitation estimates. Both systematic errors and total uncertainties have been analyzed for six different TRMM-era precipitation products (3B42, 3B42RT, CMORPH, PERSIANN, NRL, and GSMaP). For systematic errors, we devised an error decomposition scheme to separate errors in precipitation estimates into three independent components: hit biases, missed precipitation, and false precipitation. This decomposition scheme reveals hydroclimatologically relevant error features and provides a better link to the error sources than conventional analysis, because in the latter these error components tend to cancel one another when aggregated or averaged in space or time. For the random errors, we calculated the measurement spread from the ensemble of these six quasi-independent products, and thus produced a global map of measurement uncertainties. The map yields a global view of the error characteristics and their regional and seasonal variations, reveals many undocumented error features over areas with no validation data available, and provides better guidance for global assimilation of satellite-based precipitation data. Insights gained from these results and how they could help with GPM will be highlighted.
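The error decomposition described above can be written schematically as follows. This is my reading of the scheme, not the authors' code; the rain threshold, variable names, and the exact zero-threshold identity are assumptions.

```python
# A minimal sketch, assuming gridded estimate and reference precipitation fields
# on a common grid with non-negative values.
import numpy as np

def decompose_error(estimate, reference, rain_threshold=0.0):
    est_rain = estimate > rain_threshold
    ref_rain = reference > rain_threshold

    hit_bias = np.where(est_rain & ref_rain, estimate - reference, 0.0)
    missed = np.where(~est_rain & ref_rain, -reference, 0.0)   # negative errors
    false = np.where(est_rain & ~ref_rain, estimate, 0.0)      # positive errors

    # With a zero threshold and non-negative fields, the three components sum
    # back to the total error at every grid point, so they do not cancel when
    # examined separately (unlike a single aggregated bias).
    total = estimate - reference
    assert np.allclose(hit_bias + missed + false, total)
    return hit_bias, missed, false
```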