Modeling of polychromatic attenuation using computed tomography reconstructed images
NASA Technical Reports Server (NTRS)
Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Yen, S. Y.; Napel, S.
1999-01-01
This paper presents a procedure for estimating an accurate model of the CT imaging process, including spectral effects. As raw projection data are typically unavailable to the end-user, we adopt a post-processing approach that utilizes the reconstructed images themselves. This approach folds errors from x-ray scatter and the nonidealities of the built-in soft tissue correction into the estimated beam characteristics, which is crucial for beam hardening correction algorithms designed to be applied directly to CT reconstructed images. We formulate the estimation as a quadratic programming problem and propose two different methods, dimension reduction and regularization, to overcome ill conditioning in the model. For the regularization method we use a statistical procedure, cross-validation, to select the regularization parameter. We constructed step-wedge phantoms to estimate the effective beam spectrum of a GE CT-I scanner. Using the derived spectrum, we computed the attenuation ratios for the wedge phantoms and found that the worst-case modeling error is less than 3% of the corresponding attenuation ratio. We also built two test (hybrid) phantoms to evaluate the effective spectrum; based on these test phantoms, we have shown that the effective beam spectrum provides an accurate model of the CT imaging process. Finally, we used a simple beam hardening correction experiment to demonstrate the effectiveness of the estimated beam profile for removing beam hardening artifacts. We hope that this estimation procedure will encourage more independent research on beam hardening corrections and will lead to the development of application-specific beam hardening correction algorithms.
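A minimal Python sketch of the general idea described above, estimating an effective spectrum from step-wedge transmission data as a regularized non-negative least-squares problem with cross-validation of the regularization parameter. This is not the authors' implementation; the energy grid, attenuation coefficients, wedge thicknesses, and "measured" transmissions are all invented placeholders.

```python
import numpy as np
from scipy.optimize import nnls

energies = np.arange(20, 121, 2, dtype=float)          # keV bins of the unknown spectrum (assumed grid)
mu_wedge = 0.3 * (30.0 / energies) ** 2.7 + 0.17       # crude mu(E) stand-in for the wedge material, 1/cm
thicknesses = np.linspace(1.0, 20.0, 12)               # step-wedge thicknesses, cm (assumed)

A = np.exp(-np.outer(thicknesses, mu_wedge))           # transmission model: T_i = sum_j A_ij * w_j
w_true = np.exp(-0.5 * ((energies - 60.0) / 15.0) ** 2)
w_true /= w_true.sum()
T_meas = A @ w_true + 1e-3 * np.random.default_rng(0).normal(size=thicknesses.size)

# Second-difference operator penalizing spectral roughness (Tikhonov-type regularization).
L = np.diff(np.eye(energies.size), 2, axis=0)

def fit_spectrum(A, T, lam):
    """Solve min ||A w - T||^2 + lam * ||L w||^2 subject to w >= 0 by row stacking."""
    A_aug = np.vstack([A, np.sqrt(lam) * L])
    b_aug = np.concatenate([T, np.zeros(L.shape[0])])
    w, _ = nnls(A_aug, b_aug)
    return w

def loo_cv_error(A, T, lam):
    """Leave-one-out prediction error used to choose the regularization parameter."""
    err = 0.0
    for i in range(T.size):
        keep = np.arange(T.size) != i
        w = fit_spectrum(A[keep], T[keep], lam)
        err += (A[i] @ w - T[i]) ** 2
    return err

lams = 10.0 ** np.arange(-6, 2)
best_lam = min(lams, key=lambda lam: loo_cv_error(A, T_meas, lam))
w_est = fit_spectrum(A, T_meas, best_lam)
print("chosen lambda:", best_lam, " estimated spectrum sum:", w_est.sum())
```

The non-negativity constraint and the smoothness penalty stand in for the paper's two remedies against ill conditioning; the actual formulation in the paper may differ.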
Saito, Masatoshi
2009-08-01
Dual-energy computed tomography (DECT) has the potential for measuring the electron density distribution in a human body to predict the range of particle beams for treatment planning in proton or heavy-ion radiotherapy. However, thus far, a practical dual-energy method that can precisely determine electron density for treatment planning in particle radiotherapy has not been developed. In this article, a DECT technique based on a balanced filter method using a conventional x-ray tube is described. For the spectral optimization of DECT using balanced filters, the author calculates the beam-hardening error and the air kerma required to achieve a desired noise level in the electron density and effective atomic number images of a cylindrical water phantom with 50 cm diameter. The calculation enables the selection of beam parameters such as tube voltage, balanced filter material, and filter thickness. The optimized parameters were then applied in calculations for phantom diameters ranging from 5 to 50 cm. The author predicts that the optimal combination of tube voltages would be 80 and 140 kV with Tb/Hf and Bi/Mo filter pairs for the 50-cm-diameter water phantom. When a single phantom calibration at a diameter of 25 cm was employed to cover all phantom sizes, the maximum absolute beam-hardening errors were 0.3% and 0.03% for electron density and effective atomic number, respectively, over the full range of water phantom diameters. The beam-hardening errors were one-tenth or less of those obtained with conventional DECT, although the dose was twice that of the conventional DECT case. From the viewpoint of beam hardening and tube-loading efficiency, the present DECT using balanced filters would be significantly more effective for measuring electron density than conventional DECT. Nevertheless, further development of low-exposure imaging technology, as well as x-ray tubes with higher outputs, will be necessary before DECT coupled with the balanced filter method can be applied clinically.
Beam hardening correction in CT myocardial perfusion measurement
NASA Astrophysics Data System (ADS)
So, Aaron; Hsieh, Jiang; Li, Jian-Ying; Lee, Ting-Yim
2009-05-01
This paper presents a method for correcting beam hardening (BH) in cardiac CT perfusion imaging. The proposed algorithm works with reconstructed images instead of projection data. It applies thresholds to separate low (soft tissue) and high (bone and contrast) attenuating material in a CT image. The BH error in each projection is estimated by a polynomial function of the forward projection of the segmented image. The error image is reconstructed by back-projection of the estimated errors. A BH-corrected image is then obtained by subtracting a scaled error image from the original image. Phantoms were designed to simulate the BH artifacts encountered in cardiac CT perfusion studies of humans and animals that are most commonly used in cardiac research. These phantoms were used to investigate whether BH artifacts can be reduced with our approach and to determine the optimal settings, which depend upon the anatomy of the scanned subject, of the correction algorithm for patient and animal studies. The correction algorithm was also applied to correct BH in a clinical study to further demonstrate the effectiveness of our technique.
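A compact Python sketch of the image-domain correction flow described in the abstract above: segment the high-attenuating material, forward project it, map that projection through a polynomial to estimate the per-ray beam hardening error, reconstruct the error image, and subtract a scaled version of it. This is an assumption-laden illustration, not the published implementation; the phantom, threshold, polynomial coefficients, and scale factor are placeholders.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

image = shepp_logan_phantom()                  # stands in for the artifact-affected reconstruction
theta = np.linspace(0.0, 180.0, 180, endpoint=False)

high_threshold = 0.5                           # separates "bone/contrast" from soft tissue (assumed value)
high = np.where(image > high_threshold, image, 0.0)

p_high = radon(high, theta=theta)              # forward projection of the dense material only

# Per-ray BH error modeled as a polynomial of the dense-material projection
# (quadratic here; in practice the coefficients would be calibrated on phantoms).
poly = np.array([0.0, 0.0, 5e-4])              # c0 + c1*p + c2*p^2, assumed coefficients
err_sino = poly[0] + poly[1] * p_high + poly[2] * p_high ** 2

err_image = iradon(err_sino, theta=theta)      # reconstruct the error image
scale = 1.0                                    # anatomy-dependent tuning factor per the abstract (assumed)
corrected = image - scale * err_image
print("mean correction applied (arbitrary units):", float(np.mean(scale * err_image)))
```

The abstract notes that the optimal settings depend on the anatomy of the scanned subject; in this sketch that dependence is captured only by the placeholder polynomial coefficients and scale factor.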
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saenz, D; Gutierrez, A
Purpose: The ScandiDos Discover has obtained FDA clearance and is now clinically released. We studied its essential attenuation and beam hardening characteristics and tested the diode array's ability to detect changes in absolute dose and MLC leaf positions. Methods: The ScandiDos Discover was mounted on the heads of an Elekta VersaHD and a Varian 23EX. Beam attenuation measurements were made at 10 cm depth for 6 MV and 18 MV beam energies. The PDD(10) was measured as a metric for the effect on beam quality. Next, a plan consisting of two orthogonal 10 × 10 cm² fields was used to adjust the dose per fraction by scaling monitor units, testing the absolute dose detection sensitivity of the Discover. A second plan (conformal arc) was then delivered several times independently on the Elekta VersaHD. MLC position errors in the four central leaves were then artificially introduced, incrementally increased from 1 mm to 4 mm and back across seven control points. Results: With the Discover in place, the absolute dose measured at 10 cm depth decreased by 1.2% and 0.7% for the 6 MV and 18 MV beams, respectively. Attenuation depended only slightly on field size, changing by 0.1% between 5 × 5 cm² and 20 × 20 cm² fields. The change in PDD(10) for a 10 × 10 cm² field was +0.1% and +0.6% for 6 MV and 18 MV, respectively. Changes in monitor units from −5.0% to 5.0% were faithfully detected. Detected leaf errors were within 1.0 mm of the intended errors. Conclusion: A novel in-vivo dosimeter that monitors the radiation beam during treatment was examined through its attenuation and beam hardening characteristics. The device tracked changes in absolute dose as well as introduced leaf position deviations.
Zhou, Bo; Wen, Di; Nye, Katelyn; Gilkeson, Robert C; Eck, Brendan; Jordan, David; Wilson, David L
2017-10-01
We have demonstrated the ability to identify coronary calcium, a reliable biomarker of coronary artery disease, using nongated, 2-shot, dual energy (DE) chest x-ray imaging. Here we will use digital simulations, backed up by measurements, to characterize DE calcium signals and the role of potential confounds such as beam hardening, x-ray scatter, cardiac motion, and pulmonary artery pulsation. For the DE calcium signal, we will consider quantification, as compared to CT calcium score, and visualization. We created stylized and anatomical digital 3D phantoms including heart, lung, coronary calcium, spine, ribs, pulmonary artery, and adipose. We simulated high and low kVp x-ray acquisitions with x-ray spectra, energy dependent attenuation, scatter, ideal detector, and automatic exposure control (AEC). Phantoms allowed us to vary adipose thickness, cardiac motion, etc. We used specialized dual energy coronary calcium (DECC) processing that includes corrections for scatter and beam hardening. Beam hardening over a wide range of adipose thickness (0-30 cm) reduced the change in intensity of a coronary artery calcification (ΔI_CAC) by less than 3% in DECC images. Scatter correction errors of ±50% affected the calcium signal (ΔI_CAC) in DECC images by ±9%. If a simulated pulmonary artery fills with blood between exposures, it can give rise to a residual signal in DECC images, explaining pulmonary artery visibility in some clinical images. Residual misregistration can be mostly compensated by integrating signals in an enlarged region encompassing registration artifacts. DECC calcium score compared favorably to CT mass and volume scores over a number of phantom perturbations. Simulations indicate that proper DECC processing can faithfully recover coronary calcium signals. Beam hardening, errors in scatter estimation, cardiac motion, residual calcium misregistration, etc., are all manageable. Simulations are valuable as we continue to optimize DE coronary calcium image processing and quantitative analysis. © 2017 American Association of Physicists in Medicine.
NASA Astrophysics Data System (ADS)
Chen, Siyu; Zhang, Hanming; Li, Lei; Xi, Xiaoqi; Han, Yu; Yan, Bin
2016-10-01
X-ray computed tomography (CT) has been extensively applied in industrial non-destructive testing (NDT). In practical applications, however, the polychromaticity of the X-ray beam often causes beam hardening problems in image reconstruction. Beam hardening artifacts, which manifest as cupping, streaks and flares, not only degrade image quality but also disturb subsequent analyses. Unfortunately, conventional CT scanning requires that the scanned object be completely covered by the field of view (FOV); state-of-the-art beam hardening correction methods consider only this ideal scanning configuration and often fail in interior tomography because of projection truncation. To address this problem, this paper proposes a beam hardening correction method based on the Radon inversion transform for interior tomography. Experimental results show that, compared with conventional correction algorithms, the proposed approach achieves excellent performance in both beam hardening artifact reduction and truncation artifact suppression. The presented method is therefore of both theoretical and practical significance for artifact correction in industrial CT.
Simulating the influence of scatter and beam hardening in dimensional computed tomography
NASA Astrophysics Data System (ADS)
Lifton, J. J.; Carmignato, S.
2017-10-01
Cone-beam x-ray computed tomography (XCT) is a radiographic scanning technique that allows the non-destructive dimensional measurement of an object’s internal and external features. XCT measurements are influenced by a number of different factors that are poorly understood. This work investigates how non-linear x-ray attenuation caused by beam hardening and scatter influences XCT-based dimensional measurements through the use of simulated data. For the measurement task considered, both scatter and beam hardening are found to influence dimensional measurements when evaluated using the ISO50 surface determination method. On the other hand, only beam hardening is found to influence dimensional measurements when evaluated using an advanced surface determination method. Based on the results presented, recommendations on the use of beam hardening and scatter correction for dimensional XCT are given.
Hachouf, N; Kharfi, F; Boucenna, A
2012-10-01
An ideal neutron radiograph, for quantification and 3D tomographic image reconstruction, should be a transmission image that exactly obeys the exponential attenuation law for a monochromatic neutron beam. There are many reasons why this assumption does not hold for highly neutron-absorbing materials; the main deviations from the ideal are due essentially to the neutron beam hardening effect. The main challenges of this work are the characterization of neutron transmission through boron-enriched steel materials and the observation of beam hardening. The influence of the beam hardening effect on neutron tomographic images of samples based on these materials is then studied. MCNP and FBP simulations are performed to adjust the linear attenuation coefficient data and to perform 2D tomographic image reconstruction with and without beam hardening corrections. A beam hardening correction procedure is developed and applied based on qualitative and quantitative analyses of the projection data. Results from the original and corrected 2D reconstructed images show the efficiency of the proposed correction procedure. Copyright © 2012 Elsevier Ltd. All rights reserved.
Lifton, Joseph J; Malcolm, Andrew A; McBride, John W
2015-01-01
X-ray computed tomography (CT) is a radiographic scanning technique for visualising cross-sectional images of an object non-destructively. From these cross-sectional images it is possible to evaluate internal dimensional features of a workpiece which may otherwise be inaccessible to tactile and optical instruments. Beam hardening is a physical process that degrades the quality of CT images and has previously been suggested to influence dimensional measurements. Using a validated simulation tool, the influence of spectrum pre-filtration and beam hardening correction are evaluated for internal and external dimensional measurements. Beam hardening is shown to influence internal and external dimensions in opposition, and to have a greater influence on outer dimensions compared to inner dimensions. The results suggest the combination of spectrum pre-filtration and a local gradient-based surface determination method are able to greatly reduce the influence of beam hardening in X-ray CT for dimensional metrology.
NASA Astrophysics Data System (ADS)
Lopez-Rendon, X.; Zhang, G.; Bosmans, H.; Oyen, R.; Zanca, F.
2014-03-01
Purpose: To estimate the consequences for dosimetric applications when a CT bowtie filter is modeled by means of full beam hardening versus partial beam hardening. Method: A model of source and filtration for a CT scanner as developed by Turner et al. [1] was implemented. Specific exposures were measured with the stationary CT x-ray tube in order to assess the equivalent thickness of Al of the bowtie filter as a function of the fan angle. Using these thicknesses, the primary beam attenuation factors were calculated from the energy-dependent photon mass attenuation coefficients and used to include beam hardening in the spectrum. This was compared to a potentially less computationally intensive approach, which accounts only partially for beam hardening by giving the photon spectrum a global (energy-independent), fan-angle-specific weighting factor. Percentage differences between the two methods were quantified by calculating the dose in air after passing through several water-equivalent thicknesses representative of patients with different BMI. Specifically, the maximum water-equivalent thickness of the lateral and anterior-posterior dimensions and of the corresponding (half) effective diameter were assessed. Results: The largest percentage differences were found for the thickest part of the bowtie filter and they increased with patient size. For a normal-size patient they ranged from 5.5% at half effective diameter to 16.1% for the lateral dimension; for the most obese patient they ranged from 7.7% to 19.3%, respectively. For a complete simulation of one rotation of the x-ray tube, the proposed method was 12% faster than the complete simulation of the bowtie filter. Conclusion: The need for simulating the beam hardening of the bowtie filter in Monte Carlo platforms for CT dosimetry will depend on the required accuracy.
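A small illustrative comparison, in Python, of the two bowtie modeling strategies contrasted above: the full model attenuates each energy bin by the Al-equivalent thickness at a given fan angle, while the partial model scales the unfiltered spectrum by a single energy-independent weight. The spectrum, attenuation coefficients, and thicknesses below are crude placeholders, not measured data from the paper.

```python
import numpy as np

E = np.arange(20.0, 121.0, 1.0)                       # keV
spectrum = np.exp(-0.5 * ((E - 65.0) / 18.0) ** 2)    # unfiltered spectrum stand-in
mu_al = 1.5 * (40.0 / E) ** 2.8 + 0.05                # Al attenuation, 1/cm (placeholder)
mu_w  = 0.25 * (40.0 / E) ** 2.6 + 0.17               # water attenuation, 1/cm (placeholder)

t_al = 2.5                                            # Al-equivalent bowtie thickness at one fan angle, cm (assumed)
t_water = 30.0                                        # patient water-equivalent thickness, cm (assumed)

# Full beam hardening: energy-by-energy attenuation through the bowtie filter.
spec_full = spectrum * np.exp(-mu_al * t_al)

# Partial beam hardening: one global weight equal to the total transmitted fraction.
weight = spec_full.sum() / spectrum.sum()
spec_partial = spectrum * weight

def air_dose_proxy(spec):
    """Energy fluence surviving the water path, used as a crude dose-in-air surrogate."""
    return float(np.sum(spec * E * np.exp(-mu_w * t_water)))

d_full, d_partial = air_dose_proxy(spec_full), air_dose_proxy(spec_partial)
print("percentage difference: %.1f%%" % (100.0 * (d_partial - d_full) / d_full))
```

As in the abstract, the discrepancy between the two models grows with the water thickness, because the partial model ignores the shift of the transmitted spectrum toward higher energies.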
Surface hardening of steels with a strip-shaped beam of a high-power CO₂ laser
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dubovskii, P.E.; Kovsh, I.B.; Strekalova, M.S.
1994-12-01
A comparative analysis was made of the surface hardening of steel 45 by high-power CO₂ laser beams with a rectangular strip-like cross section and a traditional circular cross section. This was done under various conditions. The treatment with the strip-like beam ensured a higher homogeneity of the hardened layer and made it possible to increase the productivity by a factor of 2-4 compared with the treatment by a beam of the same power but with a circular cross section. 6 refs., 5 figs.
NASA Astrophysics Data System (ADS)
Dubovskii, P. E.; Kovsh, Ivan B.; Strekalova, M. S.; Sisakyan, I. N.
1994-12-01
A comparative analysis was made of the surface hardening of steel 45 by high-power CO2 laser beams with a rectangular strip-like cross section and a traditional circular cross section. This was done under various conditions. The treatment with the strip-like beam ensured a higher homogeneity of the hardened layer and made it possible to increase the productivity by a factor of 2-4 compared with the treatment by a beam of the same power but with a circular cross section.
Shi, Hongli; Yang, Zhi; Luo, Shuqian
2017-01-01
The beam hardening artifact is one of the most important forms of metal artifact in polychromatic X-ray computed tomography (CT) and can seriously impair image quality. An iterative approach is proposed to reduce beam hardening artifacts caused by metallic components in polychromatic X-ray CT. According to the Lambert-Beer law, the (detected) projections can be expressed as monotonic nonlinear functions of the element geometry projections, which are the theoretical projections produced only by the pixel intensities (image grayscale) of a given element (component). With the help of prior knowledge of the spectral distribution of the X-ray source and the energy-dependent attenuation coefficients, these functions have explicit expressions. The Newton-Raphson algorithm is employed to solve them. The solutions are termed the synthetical geometry projections, which are nearly linear weighted sums of the element geometry projections with respect to the mean of each attenuation coefficient. In this process, the attenuation coefficients are modified so that the Newton-Raphson iteration satisfies the convergence conditions of fixed point iteration (FPI) and the solutions approach the true synthetical geometry projections stably. The underlying images are obtained from these projections by general reconstruction algorithms such as filtered back projection (FBP). The image gray values are then adjusted according to the attenuation coefficient means to obtain proper CT numbers. Several examples demonstrate that the proposed approach is efficient in reducing beam hardening artifacts and performs satisfactorily with respect to several general criteria. In a simulation example, the normalized root mean square difference (NRMSD) is reduced by 17.52% compared with a recent algorithm. Since the element geometry projections are free from the effect of beam hardening, their nearly linear weighted sum, the synthetical geometry projections, is almost free from it as well; by computing the synthetical geometry projections, the proposed approach becomes quite efficient in reducing beam hardening artifacts.
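A minimal single-material Python sketch of the core numerical step named above: given a measured polychromatic projection p = -ln( sum_j w_j exp(-mu_j L) ), recover the beam-hardening-free line integral L with Newton-Raphson. This is a deliberately simplified stand-in for the multi-element formulation in the paper; the spectrum weights and attenuation coefficients are placeholders.

```python
import numpy as np

E = np.arange(20.0, 121.0, 1.0)                              # keV bins (assumed)
w = np.exp(-0.5 * ((E - 60.0) / 15.0) ** 2); w /= w.sum()    # normalized spectrum (assumed)
mu = 0.3 * (30.0 / E) ** 2.7 + 0.17                          # mu(E) of the material, 1/cm (assumed)

def poly_projection(L):
    """Forward model: polychromatic projection for path length L (cm)."""
    return -np.log(np.sum(w * np.exp(-mu * L)))

def invert_projection(p_meas, L0=0.0, iters=20):
    """Newton-Raphson solve of poly_projection(L) = p_meas; the forward model is monotonic in L."""
    L = L0
    for _ in range(iters):
        t = w * np.exp(-mu * L)
        f = -np.log(t.sum()) - p_meas
        dfdL = np.sum(mu * t) / t.sum()       # derivative of the forward model w.r.t. L
        L -= f / dfdL
    return L

L_true = 12.0
p = poly_projection(L_true)
print("recovered path length: %.4f cm (true %.1f cm)" % (invert_projection(p), L_true))
```

Because the forward model is monotonic and concave in L, Newton's method started from zero approaches the solution from below and converges stably, which parallels the convergence argument sketched in the abstract.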
Multi-species beam hardening calibration device for x-ray microtomography
NASA Astrophysics Data System (ADS)
Evershed, Anthony N. Z.; Mills, David; Davis, Graham
2012-10-01
Impact-source X-ray microtomography (XMT) is a widely-used benchtop alternative to synchrotron radiation microtomography. Since X-rays from a tube are polychromatic, however, greyscale 'beam hardening' artefacts are produced by the preferential absorption of low-energy photons in the beam path. A multi-material 'carousel' test piece was developed to offer a wider range of X-ray attenuations from well-characterised filters than single-material step wedges can produce practically, and optimization software was developed to produce a beam hardening correction by use of the Nelder-Mead optimization method, tuned for specimens composed of other materials (such as hydroxyapatite [HA] or barium for dental applications). The carousel test piece produced calibration polynomials reliably and with a significantly smaller discrepancy between the calculated and measured attenuations than the calibration step wedge previously in use. An immersion tank was constructed and used to simplify multi-material samples in order to negate the beam hardening effect of low atomic number materials within the specimen when measuring mineral concentration of higher-Z regions. When scanned in water at an acceleration voltage of 90 kV a Scanco AG hydroxyapatite / poly(methyl methacrylate) calibration phantom closely approximates a single-material system, producing accurate hydroxyapatite concentration measurements. This system can then be corrected for beam hardening for the material of interest.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Casar, B; Carot, I Mendez; Peterlin, P
2016-06-15
Purpose: The aim of this multi-centre study was to analyse the beam hardening effect of the Integral Quality Monitor (IQM) for high-energy photon beams used in radiotherapy with linear accelerators. Generic values for the attenuation coefficient k(IQM) of the IQM system were additionally investigated. Methods: The beam hardening effect of the IQM system was studied for a set of standard nominal photon energies (6 MV–18 MV) and two flattening filter free (FFF) energies (6 MV FFF and 10 MV FFF). PDD curves were measured and analysed for various square radiation fields, with and without the IQM in place. Differences between PDD curves were statistically analysed through comparison of the respective PDD(20,10) values. Attenuation coefficients k(IQM) were determined for the same range of photon energies. Results: Statistically significant differences in beam quality were found for all evaluated high-energy photon beams when comparing PDD(20,10) values derived from PDD curves with and without the IQM in place. The significance of the beam hardening effect was established with high confidence (p < 0.01) for all analysed photon beams except 15 MV (p = 0.078), although the relative differences in beam quality were minimal, ranging from 0.1% to 0.5%. The attenuation of the IQM system showed negligible dependence on radiation field size. However, a clinically important dependence of k(IQM) on TPR(20,10) was found, from 0.941 for 6 MV photon beams to 0.959 for 18 MV photon beams, with the highest uncertainty below 0.006. k(IQM) versus TPR(20,10) values were tabulated and a polynomial equation for the determination of k(IQM) is suggested for clinical use. Conclusion: There was no clinically relevant beam hardening when the IQM system was mounted on the linear accelerators. Consequently, no additional commissioning regarding the determination of beam quality is needed for the IQM system. Generic values for k(IQM) are proposed and can be used as tray factors for the complete range of examined photon beam energies.
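A tiny Python sketch of how a clinic might tabulate k(IQM) against beam quality and fit the suggested polynomial for interpolation. Apart from the two endpoint transmission values quoted in the abstract, the (TPR(20,10), k) pairs below are invented placeholders, and the polynomial order is an assumption.

```python
import numpy as np

tpr2010 = np.array([0.670, 0.690, 0.730, 0.760, 0.780])   # assumed beam-quality indices
k_iqm   = np.array([0.941, 0.945, 0.951, 0.956, 0.959])   # transmission factors (interior values assumed)

coeffs = np.polyfit(tpr2010, k_iqm, 2)                     # quadratic fit, k as a function of TPR(20,10)
k_interp = np.polyval(coeffs, 0.745)
print("fitted k(IQM) at TPR(20,10) = 0.745: %.4f" % k_interp)
```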
Characterization and correction of cupping effect artefacts in cone beam CT
Hunter, AK; McDavid, WD
2012-01-01
Objective The purpose of this study was to demonstrate and correct the cupping effect artefact that occurs owing to the presence of beam hardening and scatter radiation during image acquisition in cone beam CT (CBCT). Methods A uniform aluminium cylinder (6061) was used to demonstrate the cupping effect artefact on the Planmeca Promax 3D CBCT unit (Planmeca OY, Helsinki, Finland). The cupping effect was studied using a line profile plot of the grey level values using ImageJ software (National Institutes of Health, Bethesda, MD). A hardware-based correction method using copper pre-filtration was used to address this artefact caused by beam hardening and a software-based subtraction algorithm was used to address scatter contamination. Results The hardware-based correction used to address the effects of beam hardening suppressed the cupping effect artefact but did not eliminate it. The software-based correction used to address the effects of scatter resulted in elimination of the cupping effect artefact. Conclusion Compensating for the presence of beam hardening and scatter radiation improves grey level uniformity in CBCT. PMID:22378754
NASA Astrophysics Data System (ADS)
Hermus, James; Szczykutowicz, Timothy P.; Strother, Charles M.; Mistretta, Charles
2014-03-01
When performing Computed Tomographic (CT) image reconstruction on digital subtraction angiography (DSA) projections, loss of vessel contrast has been observed behind highly attenuating anatomy, such as dental implants and large contrast-filled aneurysms. Because this typically occurs only in a limited range of projection angles, the observed contrast time course can potentially be altered. In this work, we have developed a model for acquiring DSA projections that captures both the polychromatic nature of the x-ray spectrum and the x-ray scattering interactions to investigate this problem. In our simulation framework, scatter and beam hardening contributions to vessel dropout can be analyzed separately. We constructed digital phantoms with large, clearly defined regions containing iodine contrast, bone, soft tissue, titanium (dental implants) or combinations of these materials. As the regions containing the materials were large and rectangular, the forward-projected phantoms contained uniform regions of interest (ROI) and enabled accurate vessel dropout analysis. Two phantom models were used, one to model the case of a vessel behind a large contrast-filled aneurysm and the other to model a vessel behind a dental implant. Cases in which both beam hardening and scatter were turned off, only scatter was turned on, only beam hardening was turned on, and both scatter and beam hardening were turned on were simulated for both phantom models. The analysis of these data showed that the contrast degradation is primarily due to scatter. In the aneurysm case, 90.25% of the vessel contrast was lost in the polychromatic scatter image, whereas only 50.5% of the vessel contrast was lost in the beam-hardening-only image. In the teeth case, 44.2% of the vessel contrast was lost in the polychromatic scatter image and only 26.2% of the vessel contrast was lost in the beam-hardening-only image.
Application of Polychromatic µCT for Mineral Density Determination
Zou, W.; Hunter, N.; Swain, M.V.
2011-01-01
Accurate assessment of mineral density (MD) provides information critical to the understanding of mineralization processes of calcified tissues, including bones and teeth. High-resolution three-dimensional assessment of the MD of teeth has been demonstrated by relatively inaccessible synchrotron radiation microcomputed tomography (SRµCT). While conventional desktop µCT (CµCT) technology is widely available, the polychromatic source and cone-shaped beam geometry confound MD assessment. Recently, considerable attention has been given to optimizing quantitative data from CµCT systems with polychromatic x-ray sources. In this review, we focus on the approaches that minimize inaccuracies arising from beam hardening, in particular, beam filtration during the scan, beam-hardening correction during reconstruction, and mineral density calibration. Filtration along with the lowest possible source voltage results in a narrow, near-single-peak spectrum, favoring high contrast and minimal beam-hardening artifacts. More effective beam monochromatization approaches are described. We also examine the significance of beam-hardening correction in determining the accuracy of mineral density estimation. In addition, standards for the calibration of reconstructed grey-scale attenuation values against MD, including the K2HPO4 liquid phantom and polymer-hydroxyapatite and solid hydroxyapatite (HA) phantoms, are discussed. PMID:20858779
Surface hardening of 30CrMnSiA steel using continuous electron beam
NASA Astrophysics Data System (ADS)
Fu, Yulei; Hu, Jing; Shen, Xianfeng; Wang, Yingying; Zhao, Wansheng
2017-11-01
30CrMnSiA high strength low alloy (HSLA) carbon structural steel is typically applied in equipment manufacturing and aerospace industries. In this work, the effects of continuous electron beam treatment on the surface hardening and microstructure modification of 30CrMnSiA are investigated experimentally via a multi-purpose Pro-beam electron beam machine. The microhardness in the electron beam treated area increases by a factor of two to three, from 208 HV0.2 on the base metal to 520 HV0.2 in the irradiated area, while the surface roughness is relatively unchanged. Surface hardening parameters and mechanisms are clarified by investigation of the microstructural modification and the phase transformation both pre and post irradiation. The base metal is composed of ferrite and troostite. After continuous electron beam irradiation, the microstructure of the electron beam hardened area is composed of acicular lower bainite, feathered upper bainite and some lath martensite. The optimal input energy density for 30CrMnSiA steel in this study is 2.5 kJ/cm², attaining the proper hardened depth and peak hardness without deterioration of the surface quality. When the input irradiation energy exceeds 2.5 kJ/cm², convective mixing of the melted zone becomes dominant; in the area with convective mixing, the cooling rate is relatively lower, so the microhardness is lower and the surface quality deteriorates. Chemical composition and surface roughness pre and post electron beam treatment are also compared. The technology discussed gives a picture of the potential of electron beam surface treatment for improving the service life and reliability of 30CrMnSiA steel.
Reducing beam hardening effects and metal artefacts in spectral CT using Medipix3RX
NASA Astrophysics Data System (ADS)
Rajendran, K.; Walsh, M. F.; de Ruiter, N. J. A.; Chernoglazov, A. I.; Panta, R. K.; Butler, A. P. H.; Butler, P. H.; Bell, S. T.; Anderson, N. G.; Woodfield, T. B. F.; Tredinnick, S. J.; Healy, J. L.; Bateman, C. J.; Aamir, R.; Doesburg, R. M. N.; Renaud, P. F.; Gieseg, S. P.; Smithies, D. J.; Mohr, J. L.; Mandalika, V. B. H.; Opie, A. M. T.; Cook, N. J.; Ronaldson, J. P.; Nik, S. J.; Atharifard, A.; Clyne, M.; Bones, P. J.; Bartneck, C.; Grasset, R.; Schleich, N.; Billinghurst, M.
2014-03-01
This paper discusses methods for reducing beam hardening effects and metal artefacts using spectral x-ray information in biomaterial samples. A small-animal spectral scanner was operated in the 15 to 80 keV x-ray energy range for this study. We use the photon-processing features of a CdTe-Medipix3RX ASIC in charge summing mode to reduce beam hardening and associated artefacts. We present spectral data collected for metal alloy samples, their analysis using algebraic 3D reconstruction software, and volume visualisation using custom volume rendering software. The cupping effect and streak artefacts are quantified in the spectral datasets. The results show a reduction in beam hardening effects and metal artefacts in the narrow high-energy range acquired using the spectroscopic detector. A post-reconstruction comparison between CdTe-Medipix3RX and Si-Medipix3.1 is discussed. The raw data and processed data are made available (http://hdl.handle.net/10092/8851) for testing with other software routines.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krivezhenko, Dina S., E-mail: dinylkaa@yandex.ru; Drobyaz, Ekaterina A., E-mail: ekaterina.drobyaz@yandex.ru; Bataev, Ivan A., E-mail: ivanbataev@ngs.ru
2015-10-27
An investigation of surface-hardened materials obtained by cladding with an electron beam injected into the air atmosphere was carried out. Structural investigations of the coatings revealed that an increase in the boron carbide concentration in the saturating mixture raised the volume fraction of iron borides in the coatings. The maximum hardened depth reached 2 mm. The hardened layers were characterized by a heterogeneous structure consisting of iron borides and titanium carbides distributed uniformly in a eutectic matrix. Areas of titanium boride conglomerations were detected. It was found that an increase in the boron carbide content led to an enhancement in the hardness of the investigated materials. Friction testing against loosely fixed abrasive particles showed that electron-beam cladding of powder mixtures containing boron carbide, titanium, and iron in the air atmosphere approximately doubled the wear resistance of the hardened materials.
NASA Astrophysics Data System (ADS)
Krivezhenko, Dina S.; Drobyaz, Ekaterina A.; Bataev, Ivan A.; Chuchkova, Lyubov V.
2015-10-01
An investigation of surface-hardened materials obtained by cladding with an electron beam injected into the air atmosphere was carried out. Structural investigations of the coatings revealed that an increase in the boron carbide concentration in the saturating mixture raised the volume fraction of iron borides in the coatings. The maximum hardened depth reached 2 mm. The hardened layers were characterized by a heterogeneous structure consisting of iron borides and titanium carbides distributed uniformly in a eutectic matrix. Areas of titanium boride conglomerations were detected. It was found that an increase in the boron carbide content led to an enhancement in the hardness of the investigated materials. Friction testing against loosely fixed abrasive particles showed that electron-beam cladding of powder mixtures containing boron carbide, titanium, and iron in the air atmosphere approximately doubled the wear resistance of the hardened materials.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gill, K; Aldoohan, S; Collier, J
Purpose: To study image optimization and radiation dose reduction in the pediatric shunt CT scanning protocol through the use of different beam-hardening filters. Methods: A 64-slice CT scanner at OU Children's Hospital was used to evaluate CT image contrast-to-noise ratio (CNR) and to measure effective doses based on the CT dose index (CTDIvol) for the pediatric head shunt scanning protocol. The routine axial pediatric head shunt scanning protocol, optimized for the intrinsic x-ray tube filter, was used to evaluate CNR by acquiring images of the ACR-approved CT phantom; a radiation dose CT phantom was used to measure CTDIvol. These results were set as reference points to evaluate the effects on image quality and radiation dose of adding different filtering materials (tungsten, tantalum, titanium, nickel and copper filters) to the existing filter. To ensure optimal image quality, the scanner's routine air calibration was run for each added filter. The image CNR was evaluated for different kVp settings and a wide range of mAs values using the above-mentioned beam-hardening filters. These scanning protocols were run with both axial and helical techniques. The CTDIvol and the effective dose were measured and calculated for all scanning protocols and added filtration, including the intrinsic x-ray tube filter. Results: The beam-hardening filters shape the energy spectrum, which reduces the dose by 27%, with no noticeable change in low-contrast detectability. Conclusion: The effective dose depends strongly on CTDIvol, which in turn depends strongly on the beam-hardening filtration. A substantial reduction in effective dose is realized using beam-hardening filters compared with the intrinsic filter. This phantom study showed that significant radiation dose reduction can be achieved in pediatric shunt CT scanning protocols without compromising the diagnostic value of image quality.
NASA Astrophysics Data System (ADS)
Schumacher, David; Sharma, Ravi; Grager, Jan-Carl; Schrapp, Michael
2018-07-01
Photon counting detectors (PCDs) offer new possibilities for x-ray micro computed tomography (CT) in the field of non-destructive testing. For large and/or dense objects with high atomic numbers, scattered radiation and beam hardening severely degrade image quality. This work shows that an energy-discriminating PCD based on CdTe makes it possible to address these problems by intrinsically reducing the influence of both scattering and beam hardening. Based on 2D radiographic measurements, it is shown that by energy thresholding the influence of scattered radiation can be reduced by up to in the case of a PCD compared to a conventional energy-integrating detector (EID). To demonstrate the capabilities of a PCD in reducing beam hardening, cupping artefacts are analyzed quantitatively. The PCD results show that the higher the energy threshold is set, the smaller the cupping effect. Since numerous beam hardening correction algorithms exist, the PCD results are also compared with EID results corrected by common techniques; nevertheless, the highest energy thresholds yield lower cupping artefacts than any of the applied correction algorithms. As an example of a potential industrial CT application, a turbine blade is investigated by CT. The inner structure of the turbine blade allows the image quality of the PCD and EID to be compared in terms of absolute contrast as well as normalized signal-to-noise and contrast-to-noise ratios. While the absolute contrast can be improved by raising the energy thresholds of the PCD, the normalized contrast-to-noise ratio could not be improved compared with the EID because of the lower counting statistics. These results might change when pre-filtering of the x-ray spectra is omitted, allowing more low-energy photons to reach the detectors. Despite still being in an early phase of technological development, PCDs already allow CT image quality to be improved compared with conventional detectors in terms of scatter and beam hardening reduction.
Segmentation-free empirical beam hardening correction for CT.
Schüller, Sören; Sawall, Stefan; Stannigel, Kai; Hülsbusch, Markus; Ulrici, Johannes; Hell, Erich; Kachelrieß, Marc
2015-02-01
The polychromatic nature of the x-ray beams and their effects on the reconstructed image are often disregarded during standard image reconstruction. This leads to cupping and beam hardening artifacts inside the reconstructed volume. To correct for a general cupping, methods like water precorrection exist. They correct the hardening of the spectrum during the penetration of the measured object only for the major tissue class. In contrast, more complex artifacts like streaks between dense objects need other correction techniques. If only the information of a single energy scan is used, there are two types of corrections. The first is a physical approach, in which artifacts can be reproduced and corrected within the original reconstruction by using assumptions in a polychromatic forward projector. These assumptions could be the spectrum used, the detector response, and the physical attenuation and scatter properties of the intersected materials. The second is an empirical approach, which does not rely on much prior knowledge. This so-called empirical beam hardening correction (EBHC) and the previously mentioned physics-based technique both rely on a segmentation of the tissues present inside the patient. The difficulty is that beam hardening itself, scatter, and other effects that diminish image quality also disturb the correct tissue classification and thereby reduce the accuracy of the two known classes of correction techniques. The method proposed here works similarly to the empirical beam hardening correction but does not require a tissue segmentation, and therefore shows improvements on image data that are highly degraded by noise and artifacts. Furthermore, the new algorithm is designed so that no additional calibration or parameter fitting is needed. To avoid segmenting the tissues, the authors propose a histogram deformation of the primary reconstructed CT image. This step is essential for the proposed algorithm to be segmentation-free (sf). The deformation leads to a nonlinear accentuation of higher CT values. The original volume and the gray-value-deformed volume are monochromatically forward projected. The two projection sets are then monomially combined and reconstructed to generate sets of basis volumes which are used for correction. This is done by maximizing image flatness when a weighted sum of these basis images is added. sfEBHC is evaluated on polychromatic simulations, phantom measurements, and patient data. The raw data sets were acquired with a dual source spiral CT scanner, a digital volume tomograph, and a dual source micro CT. Different phantom and patient data were used to illustrate the performance and wide range of usability of sfEBHC across different scanning scenarios. The artifact correction capabilities are compared to EBHC. All investigated cases show equal or improved image quality compared to the standard EBHC approach. The artifact correction is capable of correcting beam hardening artifacts for different scan parameters and scan scenarios. sfEBHC generates beam-hardening-reduced images and is furthermore capable of dealing with images that are affected by high noise and strong artifacts. The algorithm can be used to recover structures that are hardly visible inside the beam-hardening-affected regions.
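A highly simplified Python sketch of the segmentation-free idea summarized above, not the published sfEBHC: deform the gray values of the first-pass reconstruction to accentuate high CT values (instead of segmenting), forward project both volumes, combine the projections monomially, reconstruct basis images, and pick weights that maximize image flatness (here approximated by minimizing total variation). The phantom, the deformation, the chosen monomials, and the flatness measure are all assumptions.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon
from scipy.optimize import minimize

f0 = shepp_logan_phantom()                       # stands in for the artifact-affected reconstruction
theta = np.linspace(0.0, 180.0, 180, endpoint=False)

# Segmentation-free step: nonlinear gray-value deformation accentuating high values.
f1 = f0 ** 2

p0, p1 = radon(f0, theta=theta), radon(f1, theta=theta)

# Monomial combinations of the two projection sets -> basis volumes for correction.
basis = [iradon(p0 * p1, theta=theta), iradon(p1 ** 2, theta=theta)]

def total_variation(img):
    return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

def objective(c):
    corrected = f0 + c[0] * basis[0] + c[1] * basis[1]
    return total_variation(corrected)

res = minimize(objective, x0=np.zeros(2), method="Nelder-Mead")
print("correction weights:", res.x)
```

On real data the deformation, the set of monomials, and the flatness criterion would follow the published method; this sketch only illustrates how the basis-image weighting replaces an explicit tissue segmentation.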
Segmentation-free empirical beam hardening correction for CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schüller, Sören; Sawall, Stefan; Stannigel, Kai
2015-02-15
Purpose: The polychromatic nature of the x-ray beams and their effects on the reconstructed image are often disregarded during standard image reconstruction. This leads to cupping and beam hardening artifacts inside the reconstructed volume. To correct for a general cupping, methods like water precorrection exist. They correct the hardening of the spectrum during the penetration of the measured object only for the major tissue class. In contrast, more complex artifacts like streaks between dense objects need other correction techniques. If only the information of a single energy scan is used, there are two types of corrections. The first is a physical approach, in which artifacts can be reproduced and corrected within the original reconstruction by using assumptions in a polychromatic forward projector. These assumptions could be the spectrum used, the detector response, and the physical attenuation and scatter properties of the intersected materials. The second is an empirical approach, which does not rely on much prior knowledge. This so-called empirical beam hardening correction (EBHC) and the previously mentioned physics-based technique both rely on a segmentation of the tissues present inside the patient. The difficulty is that beam hardening itself, scatter, and other effects that diminish image quality also disturb the correct tissue classification and thereby reduce the accuracy of the two known classes of correction techniques. The method proposed here works similarly to the empirical beam hardening correction but does not require a tissue segmentation, and therefore shows improvements on image data that are highly degraded by noise and artifacts. Furthermore, the new algorithm is designed so that no additional calibration or parameter fitting is needed. Methods: To avoid segmenting the tissues, the authors propose a histogram deformation of the primary reconstructed CT image. This step is essential for the proposed algorithm to be segmentation-free (sf). The deformation leads to a nonlinear accentuation of higher CT values. The original volume and the gray-value-deformed volume are monochromatically forward projected. The two projection sets are then monomially combined and reconstructed to generate sets of basis volumes which are used for correction. This is done by maximizing image flatness when a weighted sum of these basis images is added. sfEBHC is evaluated on polychromatic simulations, phantom measurements, and patient data. The raw data sets were acquired with a dual source spiral CT scanner, a digital volume tomograph, and a dual source micro CT. Different phantom and patient data were used to illustrate the performance and wide range of usability of sfEBHC across different scanning scenarios. The artifact correction capabilities are compared to EBHC. Results: All investigated cases show equal or improved image quality compared to the standard EBHC approach. The artifact correction is capable of correcting beam hardening artifacts for different scan parameters and scan scenarios. Conclusions: sfEBHC generates beam-hardening-reduced images and is furthermore capable of dealing with images that are affected by high noise and strong artifacts. The algorithm can be used to recover structures that are hardly visible inside the beam-hardening-affected regions.
Reduction of metal artifacts: beam hardening and photon starvation effects
NASA Astrophysics Data System (ADS)
Yadava, Girijesh K.; Pal, Debashish; Hsieh, Jiang
2014-03-01
The presence of metal artifacts in CT imaging can obscure relevant anatomy and interfere with disease diagnosis. The cause and occurrence of metal artifacts are primarily due to beam hardening, scatter, partial volume and photon starvation; however, the contribution to the artifacts from each of them depends on the type of hardware. A comparison of CT images obtained with different metallic hardware in various applications, along with acquisition and reconstruction parameters, helps understand methods for reducing or overcoming such artifacts. In this work, a metal beam hardening correction (BHC) algorithm and a projection-completion based metal artifact reduction (MAR) algorithm were developed and applied to phantom and clinical CT scans with various metallic implants. Stainless steel and titanium were used to model and correct for the metal beam hardening effect. In the MAR algorithm, the corrupted projection samples are replaced by a combination of the original projections and in-painted data obtained by forward projecting a prior image. The data included spine fixation screws, hip implants, dental fillings, and body extremity fixations, covering a range of clinically used metal implants. Comparison of BHC and MAR on different metallic implants was used to characterize the dominant source of the artifacts and conceivable methods to overcome them. Results of the study indicate that beam hardening could be a dominant source of artifact in many spine and extremity fixations, whereas dental and hip implants could be a dominant source of photon starvation. The BHC algorithm could significantly improve image quality in CT scans with metallic screws, whereas the MAR algorithm could alleviate artifacts from hip implants and dental fillings.
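A compact Python sketch of the projection-completion idea described above: identify the metal trace in the sinogram, build a prior image with the metal replaced by a soft-tissue value, and substitute the corrupted samples with the forward projection of that prior before reconstructing. The phantom, thresholds, and replacement value are assumed for illustration; the published algorithm blends original and in-painted data rather than replacing samples outright.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

image = shepp_logan_phantom()
image[180:190, 180:190] = 4.0                        # insert a synthetic "metal" implant
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sino = radon(image, theta=theta)

metal_mask = image > 3.0                             # threshold for metal (assumed)
trace = radon(metal_mask.astype(float), theta=theta) > 1e-6   # rays passing through metal

prior = image.copy()
prior[metal_mask] = 1.0                              # soft-tissue stand-in value (assumed)
sino_prior = radon(prior, theta=theta)

sino_mar = np.where(trace, sino_prior, sino)         # in-paint the corrupted samples
recon_mar = iradon(sino_mar, theta=theta)
recon_mar[metal_mask] = image[metal_mask]            # reinsert the metal for display
print("corrupted ray fraction: %.1f%%" % (100.0 * trace.mean()))
```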
Comparisons of single event vulnerability of GaAs SRAMS
NASA Astrophysics Data System (ADS)
Weatherford, T. R.; Hauser, J. R.; Diehl, S. E.
1986-12-01
A GaAs MESFET/JFET model incorporated into SPICE has been used to accurately describe C-EJFET, E/D MESFET and D MESFET/resistor GaAs memory technologies. These cells have been evaluated for critical charges due to gate-to-drain and drain-to-source charge collection. Low gate-to-drain critical charges limit conventional GaAs SRAM soft error rates to approximately 1E-6 errors/bit-day. SEU hardening approaches including decoupling resistors, diodes, and FETs have been investigated. Results predict GaAs RAM cell critical charges can be increased to over 0.1 pC. Soft error rates in such hardened memories may approach 1E-7 errors/bit-day without significantly reducing memory speed. Tradeoffs between hardening level, performance and fabrication complexity are discussed.
A Quatro-Based 65-nm Flip-Flop Circuit for Soft-Error Resilience
NASA Astrophysics Data System (ADS)
Li, Y.-Q.; Wang, H.-B.; Liu, R.; Chen, L.; Nofal, I.; Shi, S.-T.; He, A.-L.; Guo, G.; Baeg, S. H.; Wen, S.-J.; Wong, R.; Chen, M.; Wu, Q.
2017-06-01
A flip-flop circuit hardened against soft errors is presented in this paper. This design is an improved version of Quatro for further enhanced soft-error resilience by integrating the guard-gate technique. The proposed design, as well as reference Quatro and regular flip-flops, was implemented and manufactured in a 65-nm CMOS bulk technology. Experimental characterization results of their alpha and heavy ions soft-error rates verified the superior hardening performance of the proposed design over the other two circuits.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, H; Dolly, S; Zhao, T
Purpose: A prototype reconstruction algorithm that can provide direct electron density (ED) images from single-energy CT scans is currently being developed by Siemens Healthcare GmbH. This feature can eliminate the need for a kV-specific calibration curve in radiation treatment planning. An added benefit is that beam-hardening artifacts are also reduced in direct-ED images owing to the underlying material decomposition. This study quantitatively analyzes the reduction of beam-hardening artifacts in direct-ED images and suggests additional clinical uses. Methods: HU and direct-ED images were reconstructed for a head phantom scanned on a Siemens Definition AS CT scanner at five tube potentials of 70 kV, 80 kV, 100 kV, 120 kV and 140 kV. From these images, the mean, standard deviation (SD), and local NPS were calculated for regions of interest (ROIs) of the same locations and sizes. A complete analysis of beam-hardening artifact reduction and image quality improvement was conducted. Results: With increasing tube potential, ROI means and SDs decrease in both HU and direct-ED images. The mean value differences between HU and direct-ED images are up to 8%, with an absolute value of 2.9. Compared to the HU images, the SDs are lower in direct-ED images, with differences of up to 26%. Interestingly, the local NPS calculated from direct-ED images shows consistent values in the low spatial frequency domain for images acquired at all tube potential settings, whereas it varied dramatically for HU images. This also confirms the beam-hardening artifact reduction in ED images. Conclusions: The lower SDs in direct-ED images and the relatively consistent NPS values in the low spatial frequency domain indicate a reduction of beam-hardening artifacts. The direct-ED image has the potential to assist in more accurate organ contouring, and is a better fit for the desired purpose of CT simulations for radiotherapy.
Determination of shielding requirements for mammography.
Okunade, Akintunde Akangbe; Ademoroti, Olalekan Albert
2004-05-01
Shielding requirements for mammography when consideration is given to attenuation by the compression paddle, breast tissue, grid and image receptor (intervening materials) have been investigated. By matching attenuation and hardening properties, comparisons are made between the shielding afforded by breast tissue materials (water, Lucite and 50%-50% adipose-glandular tissue) and some materials considered for shielding diagnostic x-ray beams, namely lead, steel and gypsum wallboard. Results show that significant differences exist between the thickness required to produce equal attenuation and that required to produce equal hardening of a given incident beam. While an attenuation-equivalent thickness produces equal exposure, it does not produce equal hardening. For shielding purposes, equivalence in exposure reduction without equivalence in the penetrating power of the emerging beam does not amount to equivalence in the shielding afforded by two different materials. Models and results of sample calculations of additional shielding requirements, beyond that provided by the intervening materials, are presented. The shielding requirements for the integrated beam emerging from the intervening materials differ from those for the integrated beam emerging from materials (lead/steel/gypsum wallboard) with attenuation-equivalent thicknesses of these intervening materials.
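A worked Python illustration of the distinction drawn above: a shield thickness matched for equal attenuation of the incident spectrum does not, in general, produce equal hardening of the emergent beam. The spectrum and attenuation coefficients are crude placeholders, not tabulated values, so only the qualitative behaviour is meaningful.

```python
import numpy as np
from scipy.optimize import brentq

E = np.arange(10.0, 41.0, 0.5)                            # keV, mammographic range (assumed)
spec = np.exp(-0.5 * ((E - 20.0) / 5.0) ** 2)             # incident spectrum stand-in
mu_water = 0.6 * (15.0 / E) ** 3 + 0.2                    # 1/cm, placeholder for breast-tissue-like material
mu_pb    = 60.0 * (15.0 / E) ** 3 + 1.0                   # 1/cm, placeholder for lead

def transmission(mu, t):
    """Fraction of the incident spectrum transmitted through thickness t (cm)."""
    return np.sum(spec * np.exp(-mu * t)) / spec.sum()

def mean_energy(mu, t):
    """Mean energy of the emergent beam, a simple hardening indicator."""
    s = spec * np.exp(-mu * t)
    return np.sum(E * s) / s.sum()

t_water = 4.0                                             # cm of intervening material (assumed)
target = transmission(mu_water, t_water)

# Lead thickness giving the same overall attenuation (equal exposure reduction).
t_pb = brentq(lambda t: transmission(mu_pb, t) - target, 1e-6, 1.0)

print("equal attenuation: %.3f mm Pb vs %.1f cm water" % (10 * t_pb, t_water))
print("mean emergent energy: water %.1f keV, lead %.1f keV"
      % (mean_energy(mu_water, t_water), mean_energy(mu_pb, t_pb)))
```

The two emergent mean energies differ even though the total transmission is identical, which is exactly why attenuation equivalence alone does not establish shielding equivalence for the beam that continues past the intervening materials.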
Process for hardening the surface of polymers
Mansur, Louis K.; Lee, Eal H.
1992-01-01
Hard-surfaced polymers and the method for making them are described. Polymers are subjected to simultaneous multiple ion beam bombardment, which results in hardening of the surface and improved wear resistance.
NASA Astrophysics Data System (ADS)
Grafe, S.; Hengst, P.; Buchwalder, A.; Zenker, R.
2018-06-01
The electron beam hardening (EBH) process is one of today’s most innovative industrial technologies. Due to the almost inertia-free deflection of the EB (up to 100 kHz), the energy transfer function can be adapted locally to the component geometry and/or loading conditions. The current state-of-the-art technology is that of EBH with continuous workpiece feed. Due to the large range of parameters, the potentials and limitations of EBH using the flash technique (without workpiece feed) have not been investigated sufficiently to date. The aim of this research was to generate surface isothermal energy transfer within the flash field. This paper examines the effects of selected process parameters on the EBH surface layer microstructure and the properties achieved when treating hardened and tempered C45E steel. When using constant point distribution within the flash field and a constant beam current, surface isothermal energy input was not generated. However, by increasing the deflection frequency, point density and beam current, a more homogeneous EBH surface layer microstructure could be achieved, along with higher surface hardness and greater surface hardening depths. Furthermore, using temperature-controlled power regulation, surface isothermal energy transfer could be realised over a larger area in the centre of the sample.
Nd-glass laser for deep-penetration welding and hardening
NASA Astrophysics Data System (ADS)
Kayukov, Serguei V.; Yaresko, Sergey I.; Mikheyev, Pavel A.
2000-04-01
Pulsed Nd-glass lasers usually have low beam quality (200 - 300 mm-mrad) and are used only for surface hardening of metals. However, their high pulse energy makes them feasible for deep-penetration welding if the beam quality can be improved. We investigated the beam properties of an Nd-glass laser with an unstable resonator with a semitransparent output coupler (URSOC). We found that the beam divergence of the laser with the URSOC was an order of magnitude smaller than that of the laser with a stable resonator. The achieved beam quality (40 - 50 mm-mrad) permitted deep-penetration welding with an aspect ratio of approximately 8. For a beam divergence of 3 mrad, a melt depth of 6.3 mm was achieved with a ratio of depth to pulse energy of 0.27 mm/J.
SU-E-J-125: Classification of CBCT Noises in Terms of Their Contribution to Proton Range Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brousmiche, S; Orban de Xivry, J; Macq, B
2014-06-01
Purpose: This study assesses the potential use of CBCT images in adaptive proton therapy by estimating the contribution of the main sources of noise and calibration errors to the proton range uncertainty. Methods: Measurements intended to highlight each particular source were performed by adapting either the testbench configuration, e.g. use of filtration, fan-beam collimation, beam stop arrays, phantoms and detector reset light, or the sequence of correction algorithms, including water precorrection. Additional Monte Carlo simulations were performed to complement these measurements, especially for the beam hardening and scatter cases. Simulations of proton beam penetration through the resulting images were then carried out to quantify the range change due to these effects. The particular case of a brain irradiation is considered, mainly because of the multiple effects that the skull bones have on the internal soft tissues. Results: At the top of the range error sources is the undercorrection of scatter. Its influence was analyzed from a comparison of fan-beam and full axial FOV acquisitions. In this case, large range errors of about 12 mm can be reached if it is assumed that the scatter has only a constant contribution over the projection images. Even the detector lag, which a priori induces a much smaller effect, was shown to contribute up to 2 mm to the overall error if its correction only aims at reducing the skin artefact. This last result can partially be understood by the larger interface between tissues and bones inside the skull. Conclusion: This study has laid the basis for a more systematic analysis of the effect of CBCT noise on range uncertainties based on a combination of measurements, simulations and theoretical results. With our method, even more subtle effects such as the cone-beam artifact or the detector lag can be assessed. SBR and JOR are financed by iMagX, a public-private partnership between the Walloon Region of Belgium and IBA under convention #1217662.
Process for hardening the surface of polymers
Mansur, L.K.; Lee, E.H.
1992-07-14
Hard-surfaced polymers and the method for making them are described. Polymers are subjected to simultaneous multiple ion beam bombardment, which results in hardening of the surface and improved wear resistance. 1 figure.
Laser Transformation Hardening of Firing Zone Cutout Cams.
1981-06-01
…salt bath nitriding to case harden firing zone cutout cams for the Mk 10 Guided Missile Launcher System (GMLS). These cams, machined of 4340 steel…
Stenner, Philip; Schmidt, Bernhard; Allmendinger, Thomas; Flohr, Thomas; Kachelrie, Marc
2010-06-01
In cardiac perfusion examinations with computed tomography (CT), large concentrations of iodine in the ventricle and in the descending aorta cause beam hardening artifacts that can lead to incorrect perfusion parameters. The aim of this study is to reduce these artifacts by performing an iterative correction and by accounting for the three materials soft tissue, bone, and iodine. Beam hardening corrections are either implemented as simple precorrections, which cannot account for higher order beam hardening effects, or as iterative approaches that are based on segmenting the original image into material distribution images. Conventional segmentation algorithms fail to clearly distinguish between iodine and bone. Our new algorithm, DIBHC, calculates the time-dependent iodine distribution by analyzing the voxel changes of a cardiac perfusion examination (typically N ≈ 15 electrocardiogram-correlated scans distributed over a total scan time of up to T ≈ 30 s). These voxel dynamics are due to changes in contrast agent. This prior information allows bone and iodine to be precisely distinguished and is key to DIBHC, where each iteration consists of a multimaterial (soft tissue, bone, iodine) polychromatic forward projection, a raw data comparison and a filtered backprojection. Simulations with a semi-anthropomorphic dynamic phantom and clinical scans using a dual source CT scanner with 2 × 128 slices, a tube voltage of 100 kV, a tube current-time product of 180 mAs, and a rotation time of 0.28 s have been carried out. The uncorrected images suffer from beam hardening artifacts that appear as dark bands connecting large concentrations of iodine in the ventricle, aorta, and bony structures. The CT values of the affected tissue are usually underestimated by roughly 20 HU, although deviations of up to 61 HU have been observed. For a quantitative evaluation, circular regions of interest were analyzed. After application of DIBHC, the mean values obtained deviate by only 1 HU for the simulations, and the corrected values show an increase of up to 61 HU for the measurements. One iteration of DIBHC greatly reduces the beam hardening artifacts induced by the contrast agent dynamics (and those due to bone), allowing an improved assessment of contrast agent uptake in the myocardium, which is essential for determining myocardial perfusion.
Nakashima, Yoshito; Nakano, Tsukasa
2014-01-01
Iodine is commonly used as a contrast agent in nonmedical science and engineering, for example, to visualize Darcy flow in porous geological media using X-ray computed tomography (CT). Undesirable beam hardening artifacts occur when a polychromatic X-ray source is used, which makes the quantitative analysis of CT images difficult. To optimize the chemistry of a contrast agent in terms of beam hardening reduction, we performed computer simulations and generated synthetic CT images of a homogeneous cylindrical sand-pack (diameter, 28 or 56 mm; porosity, 39 vol.%) saturated with aqueous suspensions of heavy elements, assuming the use of a polychromatic medical CT scanner. The degree of cupping caused by beam hardening was assessed using the reconstructed CT images to find the suspension chemistry that induced the least cupping. The results showed that (i) the degree of cupping depended on the position of the K absorption edge of the heavy element relative to the peak of the polychromatic incident X-ray spectrum, (ii) (53)I was not an ideal contrast agent because it causes marked cupping, and (iii) a single element much heavier than (53)I ((64)Gd to (79)Au) reduced the cupping artifact significantly, and a mixture of four heavy elements from (64)Gd to (79)Au reduced the artifact most significantly.
NASA Astrophysics Data System (ADS)
Yuan, Fusong; Lv, Peijun; Yang, Huifang; Wang, Yong; Sun, Yuchun
2015-07-01
Objectives: To establish a beam-hardening artifact index for cone-beam CT tomographic images based on pixel gray value measurements, and to preliminarily evaluate its applicability. Methods: A 5 mm-diameter metal ball and a resin ball were each fixed on a light-cured resin base plate, with four in vitro molars fixed above, below, to the left and to the right of the ball at a distance of 10 mm. Cone beam CT was used to scan the fixed base plate twice. The same tomographic slice was selected from the two data sets and imported into the Photoshop software. A circular boundary was constructed by determining the center and radius of the circle from the artifact-free image sections. Grayscale measurement tools were used to measure the gray value G0 just inside the boundary, the gray values G1 and G2 of artifacts 1 mm and 20 mm outside the circular boundary, the arc length L1 of the boundary affected by artifacts, and the circumference L2. The hardening artifact index was defined as A = (G1 / G0) * 0.5 + (G2 / G1) * 0.4 + (L2 / L1) * 0.1. The A values of the metal and resin materials were then calculated. Results: The A value of the cobalt-chromium alloy material is 1, and that of the resin material is 0. Conclusion: The A value comprehensively reflects the three factors by which hardening artifacts degrade the sharpness of normal oral tissue in cone beam CT images: the relative gray value, the decay rate, and the range of the artifacts.
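As a worked illustration of the index defined in the abstract above, the following sketch simply evaluates A from the five measured quantities; the function and variable names are ours, and the sample values are hypothetical.

```python
def hardening_artifact_index(g0, g1, g2, l1, l2):
    """Beam-hardening artifact index A exactly as defined in the abstract.

    g0 : gray value just inside the circular boundary
    g1 : gray value of artifacts 1 mm outside the boundary
    g2 : gray value of artifacts 20 mm outside the boundary
    l1 : arc length of the boundary affected by artifacts
    l2 : circumference of the circular boundary
    """
    return (g1 / g0) * 0.5 + (g2 / g1) * 0.4 + (l2 / l1) * 0.1

# Hypothetical example: gray values and lengths measured on one tomographic slice.
a_metal = hardening_artifact_index(g0=120.0, g1=60.0, g2=90.0, l1=40.0, l2=80.0)
```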
NASA Astrophysics Data System (ADS)
Losinskaya, A. A.; Lozhkina, E. A.; Bardin, A. I.
2017-12-01
A topical problem in present-day materials science is improving the performance characteristics of steels. In this paper, some mechanical properties of case-hardened materials obtained by non-vacuum electron-beam cladding of carbon fibers are determined. The depth of the hardened layers varies from 1.5 to 3 mm. The impact strength of the samples exceeds 50 J/cm2. The wear resistance of the coatings obtained exceeds that of steel 20 after cementation and quenching with low tempering. Results of a study of the microhardness and microstructure of the resulting layers are also given. The hardness of the surface layers exceeds 5700 MPa.
Laser Surface Hardening of Groove Edges
NASA Astrophysics Data System (ADS)
Hussain, A.; Hamdani, A. H.; Akhter, R.; Aslam, M.
2013-06-01
Surface hardening of groove edges made of 3Cr13 stainless steel was carried out using a 500 W CO2 laser with a rectangular beam of 2.5×3 mm2. The processing speed was varied from 150-500 mm/min. The hardened depth increased with increasing laser interaction time, and a maximum hardened depth of around 1 mm was achieved. The microhardness of the transformed zone was 2.5 times the hardness of the base metal. XRD and microstructural analyses are also reported.
Artifact Reduction in X-Ray CT Images of Al-Steel-Perspex Specimens Mimicking a Hip Prosthesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Madhogarhia, Manish; Munshi, P.; Lukose, Sijo
2008-09-26
X-ray Computed Tomography (CT) is a relatively new technique, developed in the late 1970s, which enables nondestructive visualization of the internal structure of objects. Beam hardening caused by the polychromatic spectrum is an important problem in X-ray computed tomography (X-CT). It leads to various artifacts in reconstructed images and reduces image quality. In the present work we consider artifact reduction in total hip prosthesis CT scans, a problem of medical imaging. We aim to reduce the cupping artifact induced by beam hardening as well as the metal artifact, as they exist in the CT scan of a human hip after the femur is replaced by a metal implant. The correction method for beam hardening used here is based on a previous work. The simulation study for the present problem includes a phantom consisting of mild steel, aluminium and perspex mimicking the photon attenuation properties of a human hip cross section with a metal implant.
Accurate Measurement of Bone Density with QCT
NASA Technical Reports Server (NTRS)
Cleek, Tammy M.; Beaupre, Gary S.; Matsubara, Miki; Whalen, Robert T.; Dalton, Bonnie P. (Technical Monitor)
2002-01-01
The objective of this study was to determine the accuracy of bone density measurement with a new QCT technology. A phantom was fabricated using two materials, a water-equivalent compound and hydroxyapatite (HA), combined in precise proportions (QRM GmbH, Germany). The phantom was designed to have the approximate physical size and range in bone density of a human calcaneus, with regions of 0, 50, 100, 200, 400, and 800 mg/cc HA. The phantom was scanned at 80, 120 and 140 kVp with a GE CT/i HiSpeed Advantage scanner. A ring of highly attenuating material (polyvinyl chloride or teflon) was slipped over the phantom to alter the image by introducing non-axisymmetric beam hardening. Images were corrected with the new QCT technology using an estimate of the effective X-ray beam spectrum to eliminate beam hardening artifacts. The algorithm computes the volume fraction of HA and water-equivalent matrix in each voxel. We found excellent agreement between expected and computed HA volume fractions. Results were insensitive to beam hardening ring material, HA concentration, and scan voltage settings. Data from all three voltages with a best-fit linear regression are displayed.
Multiscale characterization and mechanical modeling of an Al-Zn-Mg electron beam weld
NASA Astrophysics Data System (ADS)
Puydt, Quentin; Flouriot, Sylvain; Ringeval, Sylvain; Parry, Guillaume; De Geuser, Frédéric; Deschamps, Alexis
Welding of precipitation hardening alloys results in multi-scale microstructural heterogeneities, from the hardening nano-scale precipitates to the micron-scale solidification structures and to the component geometry. This heterogeneity results in a complex mechanical response, with gradients in strength, stress triaxiality and damage initiation sites.
Simulations of x-ray speckle-based dark-field and phase-contrast imaging with a polychromatic beam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zdora, Marie-Christine, E-mail: marie-christine.zdora@diamond.ac.uk; Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0DE; Department of Physics & Astronomy, University College London, London WC1E 6BT
2015-09-21
Following the first experimental demonstration of x-ray speckle-based multimodal imaging using a polychromatic beam [I. Zanette et al., Phys. Rev. Lett. 112(25), 253903 (2014)], we present a simulation study on the effects of a polychromatic x-ray spectrum on the performance of this technique. We observe that the contrast of the near-field speckles is only mildly influenced by the bandwidth of the energy spectrum. Moreover, using a homogeneous object with simple geometry, we characterize the beam hardening artifacts in the reconstructed transmission and refraction angle images, and we describe how the beam hardening also affects the dark-field signal provided by speckle tracking. This study is particularly important for further implementations and developments of coherent speckle-based techniques at laboratory x-ray sources.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ai, H; Wendt, R
2016-06-15
Purpose: To assess the effect of beam hardening on measured CT HU values. Methods: An anthropomorphic knee phantom was scanned with the CT component of a GE Discovery 690 PET/CT scanner (120 kVp, 300 mAs, 40 × 0.625 mm collimation, pitch = 0.984, FOV = 500 mm, matrix = 512 × 512) with four different scan setups, each of which induces different degrees of beam hardening by introducing additional attenuation media into the field of view. Homogeneous voxels representing “soft tissue” and “bone” were segmented by HU thresholding followed by a 3D morphological erosion operation which removes the non-homogeneous voxels located at the interface of the thresholded tissue mask. HU values of segmented “soft tissue” and “bone” were compared. Additionally, whole-body CT data with coverage from the skull apex to the end of the toes were retrospectively retrieved from seven PET/CT exams to evaluate the effect of beam hardening in vivo. Homogeneous bone voxels were segmented with the same method previously described. The Total In-Slice Attenuation (TISA) for each CT slice, defined as the summation of HU values over all voxels within a CT slice, was calculated for all slices of the seven whole-body CT datasets and evaluated against the mean HU values of homogeneous bone voxels within that slice. Results: HU values measured from the phantom showed that while “soft tissue” HU values were unaffected, added attenuation within the FOV caused noticeable decreases in the measured HU values of “bone” voxels. A linear relationship was observed between bone HU and TISA for slices of the torso and legs, but not of the skull. Conclusion: The beam hardening effect is not an issue of concern for voxels with HU in the soft tissue range, but should not be neglected for bone voxels. A linear relationship exists between bone HU and the associated TISA in non-skull CT slices, which can be exploited to develop a correction strategy.
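A minimal sketch of the slice-wise analysis described above, assuming the CT volume is already available as a NumPy array in HU and that homogeneous bone voxels have been segmented into a boolean mask; the function name and the use of np.polyfit for the linear fit are our illustrative choices.

```python
import numpy as np

def tisa_and_bone_hu(ct_volume_hu, bone_mask):
    """Per-slice Total In-Slice Attenuation (TISA) and mean bone HU.

    ct_volume_hu : 3D array (slices, rows, cols) of HU values
    bone_mask    : boolean array of the same shape marking homogeneous bone voxels
    """
    tisa = ct_volume_hu.sum(axis=(1, 2))  # summation of HU over all voxels in each slice
    bone_hu = np.array([ct_volume_hu[i][bone_mask[i]].mean() if bone_mask[i].any() else np.nan
                        for i in range(ct_volume_hu.shape[0])])
    return tisa, bone_hu

# Linear relationship between bone HU and TISA (non-skull slices only, per the abstract):
# valid = ~np.isnan(bone_hu); slope, intercept = np.polyfit(tisa[valid], bone_hu[valid], 1)
```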
SU-E-I-38: Improved Metal Artifact Correction Using Adaptive Dual Energy Calibration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, X; Elder, E; Roper, J
2015-06-15
Purpose: The empirical dual energy calibration (EDEC) method corrects for beam-hardening artifacts but shows limited performance on metal artifact correction. In this work, we propose an adaptive dual energy calibration (ADEC) method to correct for metal artifacts. Results: Highly attenuating copper rods cause severe streaking artifacts on standard CT images. EDEC improves the image quality but cannot eliminate the streaking artifacts. Compared to EDEC, the proposed ADEC method further reduces the streaking resulting from metallic inserts and beam-hardening effects and obtains material decomposition images with significantly improved accuracy. Conclusion: We propose an adaptive dual energy calibration method to correct for metal artifacts. ADEC is evaluated with the Shepp-Logan phantom and shows superior metal artifact correction performance. In the future, we will further evaluate the performance of the proposed method with phantom and patient data.
A general framework of noise suppression in material decomposition for dual-energy CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petrongolo, Michael; Dong, Xue; Zhu, Lei, E-mail: leizhu@gatech.edu
Purpose: As a general problem of dual-energy CT (DECT), noise amplification in material decomposition severely reduces the signal-to-noise ratio on the decomposed images compared to that on the original CT images. In this work, the authors propose a general framework of noise suppression in material decomposition for DECT. The method is based on an iterative algorithm recently developed in their group for image-domain decomposition of DECT, with an extension to include nonlinear decomposition models. The generalized framework of iterative DECT decomposition enables beam-hardening correction with simultaneous noise suppression, which improves the clinical benefits of DECT. Methods: The authors propose to suppress noise on the decomposed images of DECT using convex optimization, which is formulated in the form of least-squares estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance–covariance matrix of the decomposed images as the penalty weight in the least-squares term. Analytical formulas are derived to compute the variance–covariance matrix for decomposed images with general-form numerical or analytical decomposition. As a demonstration, the authors implement the proposed algorithm on phantom data using an empirical polynomial function of decomposition measured on a calibration scan. The polynomial coefficients are determined from the projection data acquired on a wedge phantom, and the signal decomposition is performed in the projection domain. Results: On the Catphan®600 phantom, the proposed noise suppression method reduces the average noise standard deviation of basis material images by one to two orders of magnitude, with a superior performance on spatial resolution as shown in comparisons of line-pair images and modulation transfer function measurements. On the synthesized monoenergetic CT images, the noise standard deviation is reduced by a factor of 2–3. By using nonlinear decomposition on projections, the authors’ method effectively suppresses the streaking artifacts of beam hardening and obtains more uniform images than their previous approach based on a linear model. Similar performance of noise suppression is observed in the results of an anthropomorphic head phantom and a pediatric chest phantom generated by the proposed method. With beam-hardening correction enabled by their approach, the image spatial nonuniformity on the head phantom is reduced from around 10% on the original CT images to 4.9% on the synthesized monoenergetic CT image. On the pediatric chest phantom, their method suppresses the image noise standard deviation by a factor of around 7.5, and compared with linear decomposition, it reduces the estimation error of electron densities from 33.3% to 8.6%. Conclusions: The authors propose a general framework of noise suppression in material decomposition for DECT. Phantom studies have shown that the proposed method improves the image uniformity and the accuracy of electron density measurements by effective beam-hardening correction and reduces the noise level without noticeable resolution loss.
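In generic form (our notation, not necessarily the authors' exact formulation), the penalized least-squares estimate described in the Methods can be written as

```latex
\hat{\mathbf{x}} \;=\; \arg\min_{\mathbf{x}}\;
(\mathbf{x}-\mathbf{x}_{\mathrm{dec}})^{\mathsf{T}}\,\boldsymbol{\Sigma}^{-1}\,(\mathbf{x}-\mathbf{x}_{\mathrm{dec}})
\;+\; \beta\, R(\mathbf{x}),
```

where x_dec is the noisy decomposed image, Σ is the estimated variance–covariance matrix of the decomposition (its inverse acting as the penalty weight of the least-squares term), R is the smoothness regularizer and β controls the regularization strength.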
DYCAST: A finite element program for the crash analysis of structures
NASA Technical Reports Server (NTRS)
Pifko, A. B.; Winter, R.; Ogilvie, P.
1987-01-01
DYCAST is a nonlinear structural dynamic finite element computer code developed for crash simulation. The element library contains stringers, beams, membrane skin triangles, plate bending triangles and spring elements. Changing stiffnesses in the structure are accounted for by plasticity and very large deflections. Material nonlinearities are accommodated by one of three options: elastic-perfectly plastic, elastic-linear hardening plastic, or elastic-nonlinear hardening plastic of the Ramberg-Osgood type. Geometric nonlinearities are handled in an updated Lagrangian formulation by reforming the structure into its deformed shape after small time increments while accumulating deformations, strains, and forces. The nonlinearities due to combined loadings are maintained, and stiffness variation due to structural failures is computed. Numerical time integrators available are fixed-step central difference, modified Adams, Newmark-beta, and Wilson-theta. The last three have a variable time step capability, which is controlled internally by a solution convergence error measure. Other features include: multiple time-load history tables to subject the structure to time dependent loading; gravity loading; initial pitch, roll, yaw, and translation of the structural model with respect to the global system; a bandwidth optimizer as a pre-processor; and deformed plots and graphics as post-processors.
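For reference, the Ramberg-Osgood type hardening mentioned above is commonly written in a form such as

```latex
\varepsilon \;=\; \frac{\sigma}{E} \;+\; K\!\left(\frac{\sigma}{E}\right)^{n},
```

where E is Young's modulus and K and n are hardening constants fitted to the material's stress-strain curve; the exact constants and variant used in DYCAST are not given in this summary.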
Prefabricated Roof Beams for Hardened Shelters
1993-08-01
…beam with a composite concrete slab. Based on the results of the concept evaluation, a test program was designed and conducted to validate the steel… ultimate strength. The results of these tests showed that the design procedure accurately predicts the response of the steel-confined concrete composite…
A new approach for beam hardening correction based on the local spectrum distributions
NASA Astrophysics Data System (ADS)
Rasoulpour, Naser; Kamali-Asl, Alireza; Hemmati, Hamidreza
2015-09-01
Energy dependence of material absorption and the polychromatic nature of x-ray beams in Computed Tomography (CT) cause a phenomenon called "beam hardening". The purpose of this study is to provide a novel approach for Beam Hardening (BH) correction. The approach is based on the linear attenuation coefficients of Local Spectrum Distributions (LSDs) at various depths in a phantom. The proposed method includes two steps. First, the hardened spectra at various depths in the phantom (the LSDs) are estimated using an Expectation Maximization (EM) algorithm for an arbitrary thickness interval of known materials in the phantom; the performance of the LSD estimation technique is evaluated by applying random Gaussian noise to the transmission data. The linear attenuation coefficients corresponding to the mean energies of the LSDs are then obtained. Second, a correction function based on the calculated attenuation coefficients is derived in order to correct the polychromatic raw data. Since a correction function is used to convert the polychromatic data to monochromatic data, the effect of BH in the proposed reconstruction is reduced in comparison with the polychromatic reconstruction. The approach was assessed in phantoms which involve less than two materials, but the correction function has been extended for use in phantoms constructed with more than two materials. The relative mean energy difference in the LSD estimates based on the noise-free transmission data was less than 1.5%, and it remains acceptable when random Gaussian noise is applied to the transmission data. The cupping artifact in the proposed reconstruction method is effectively reduced, and the proposed reconstruction profile is more uniform than the polychromatic reconstruction profile.
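The correction step amounts to remapping each polychromatic line integral to an equivalent monochromatic one before reconstruction. A minimal sketch, assuming the correction function has been fitted as a polynomial from the LSD-derived attenuation coefficients (the polynomial form is a common choice rather than a detail stated in the abstract):

```python
import numpy as np

def correct_sinogram(poly_sino, coeffs):
    """Map polychromatic line integrals to equivalent monochromatic ones.

    poly_sino : measured polychromatic sinogram, i.e. -ln(I/I0) per ray
    coeffs    : coefficients of the fitted correction polynomial
                (highest order first, as expected by numpy.polyval)
    """
    return np.polyval(coeffs, poly_sino)

# The corrected sinogram can then be reconstructed with any standard FBP routine;
# an effective correction should largely remove the cupping seen in the
# polychromatic reconstruction profile.
```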
Rega, Giuseppe
2016-01-01
The nonlinear free oscillations of a straight planar Timoshenko beam are investigated analytically by means of the asymptotic development method. Attention is focused for the first time, to the best of our knowledge, on the nonlinear coupling between the axial and the transversal oscillations of the beam, which are decoupled in the linear regime. The existence of coupled and uncoupled motion is discussed. Furthermore, the softening versus hardening nature of the backbone curves is investigated in depth. The results are summarized by means of behaviour charts that illustrate the different possible classes of motion in the parameter space. New, and partially unexpected, phenomena, such as the changing of the nonlinear behaviour from softening to hardening by adding/removing the axial vibrations, are highlighted. PMID:27436974
Impact of Temporal Masking of Flip-Flop Upsets on Soft Error Rates of Sequential Circuits
NASA Astrophysics Data System (ADS)
Chen, R. M.; Mahatme, N. N.; Diggins, Z. J.; Wang, L.; Zhang, E. X.; Chen, Y. P.; Liu, Y. N.; Narasimham, B.; Witulski, A. F.; Bhuva, B. L.; Fleetwood, D. M.
2017-08-01
Reductions in single-event (SE) upset (SEU) rates for sequential circuits due to temporal masking effects are evaluated. The impacts of supply voltage, combinational-logic delay, flip-flop (FF) SEU performance, and particle linear energy transfer (LET) values are analyzed for SE cross sections of sequential circuits. Alpha particles and heavy ions with different LET values are used to characterize the circuits fabricated at the 40-nm bulk CMOS technology node. Experimental results show that increasing the delay of the logic circuit present between FFs and decreasing the supply voltage are two effective ways of reducing SE error rates for sequential circuits for particles with low LET values due to temporal masking. SEU-hardened FFs benefit less from temporal masking than conventional FFs. Circuit hardening implications for SEU-hardened and unhardened FFs are discussed.
Simulating and Detecting Radiation-Induced Errors for Onboard Machine Learning
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri L.; Bornstein, Benjamin; Granat, Robert; Tang, Benyang; Turmon, Michael
2009-01-01
Spacecraft processors and memory are subjected to high radiation doses and therefore employ radiation-hardened components. However, these components are orders of magnitude more expensive than typical desktop components, and they lag years behind in terms of speed and size. We have integrated algorithm-based fault tolerance (ABFT) methods into onboard data analysis algorithms to detect radiation-induced errors, which ultimately may permit the use of spacecraft memory that need not be fully hardened, reducing cost and increasing capability at the same time. We have also developed a lightweight software radiation simulator, BITFLIPS, that permits evaluation of error detection strategies in a controlled fashion, including the specification of the radiation rate and selective exposure of individual data structures. Using BITFLIPS, we evaluated our error detection methods when using a support vector machine to analyze data collected by the Mars Odyssey spacecraft. We found ABFT error detection for matrix multiplication is very successful, while error detection for Gaussian kernel computation still has room for improvement.
WE-AB-207A-12: HLCC Based Quantitative Evaluation Method of Image Artifact in Dental CBCT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Y; Wu, S; Qi, H
Purpose: Image artifacts are usually evaluated qualitatively via visual observation of the reconstructed images, which is susceptible to subjective factors due to the lack of an objective evaluation criterion. In this work, we propose a Helgason-Ludwig consistency condition (HLCC) based evaluation method to quantify the severity level of different image artifacts in dental CBCT. Methods: Our evaluation method consists of four steps: 1) Acquire cone beam CT (CBCT) projections; 2) Convert the 3D CBCT projection to a fan-beam projection by extracting its central plane projection; 3) Convert the fan-beam projection to a parallel-beam projection utilizing a sinogram-based rebinning algorithm or detail-based rebinning algorithm; 4) Obtain the HLCC profile by integrating the parallel-beam projection per view and calculate the wave percentage and variance of the HLCC profile, which can be used to describe the severity level of image artifacts. Results: Several sets of dental CBCT projections, each containing only one type of artifact (i.e. geometry, scatter, beam hardening, lag and noise artifact), were simulated using gDRR, a GPU tool developed for efficient, accurate, and realistic simulation of CBCT projections. These simulated CBCT projections were used to test our proposed method. The HLCC profile wave percentage and variance induced by geometry distortion are about 3∼21 times and 16∼393 times as large as those of the artifact-free projection, respectively. The increase factors of wave percentage and variance are 6 and 133 times for beam hardening, 19 and 1184 times for scatter, and 4 and 16 times for lag artifacts, respectively. In contrast, for the noisy projection the wave percentage, variance and inconsistency level are almost the same as those of the noise-free one. Conclusion: We have proposed a quantitative evaluation method of image artifact based on HLCC theory. According to our simulation results, the severity of different artifact types is found to be in the following order: Scatter > Geometry > Beam hardening > Lag > Noise > Artifact-free in dental CBCT.
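A minimal sketch of step 4, assuming the rebinned parallel-beam projections are available as a 2D array (views × detector bins); the wave-percentage definition used below ((max − min)/mean of the profile) is our assumption, since the abstract does not give the exact formula.

```python
import numpy as np

def hlcc_profile_metrics(parallel_sino):
    """Zeroth-order HLCC profile and simple consistency metrics.

    parallel_sino : 2D array, shape (n_views, n_bins), parallel-beam line integrals
    """
    profile = parallel_sino.sum(axis=1)  # mass integral per view; ideally constant over views
    wave_percentage = (profile.max() - profile.min()) / profile.mean() * 100.0
    variance = profile.var()
    return profile, wave_percentage, variance
```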
Compression of pulsed electron beams for material tests
NASA Astrophysics Data System (ADS)
Metel, Alexander S.
2018-03-01
In order to strengthen the surface of machine parts and to investigate the behavior of their materials when exposed to highly dense energy fluxes, an electron gun has been developed that produces pulsed electron beams with energies up to 300 keV and currents up to 250 A at a pulse width of 100-200 µs. Electrons are extracted into the accelerating gap from the hollow-cathode glow discharge plasma through a flat or a spherical grid. The flat grid produces 16-cm-diameter beams with an energy density per pulse not exceeding 15 J·cm-2, which is not enough even for surface hardening. The spherical grid enables compression of the beams and regulation of the energy density from 15 J·cm-2 up to 15 kJ·cm-2, thus allowing hardening, pulsed melting of the machine-part surface with subsequent high-speed recrystallization, as well as explosive ablation of the surface layer.
Fiber-Reinforced Concrete For Hardened Shelter Construction
1993-02-01
…reduced cost and weight versus the symmetrically rebar-reinforced beam design using normal-weight, standard-strength concrete currently used by the… while possibly reducing their cost and weight. Emphasis is placed on modular construction using prefabricated fiber- and rebar-reinforced concrete… fiber- and rebar-reinforced concrete structural members into U.S. Air Force hardened structure designs.
Impact of Scaled Technology on Radiation Testing and Hardening
NASA Technical Reports Server (NTRS)
LaBel, Kenneth A.; Cohn, Lewis M.
2005-01-01
This presentation gives a brief overview of some of the radiation challenges facing emerging scaled digital technologies, with implications for using consumer-grade electronics and next-generation hardening schemes. Commercial semiconductor manufacturers are recognizing some of these issues as issues for terrestrial performance and are looking at means of dealing with soft errors. Thinned oxides have indicated improved TID tolerance in commercial products hardened by "serendipity", which does not guarantee hardness or indicate whether the trend will continue. The presentation also focuses on the reliability implications of thinned oxides.
Iterative CT shading correction with no prior information
NASA Astrophysics Data System (ADS)
Wu, Pengwei; Sun, Xiaonan; Hu, Hongjie; Mao, Tingyu; Zhao, Wei; Sheng, Ke; Cheung, Alice A.; Niu, Tianye
2015-11-01
Shading artifacts in CT images are caused by scatter contamination, the beam-hardening effect and other non-ideal imaging conditions. The purpose of this study is to propose a novel and general correction framework to eliminate low-frequency shading artifacts in CT images (e.g. cone-beam CT, low-kVp CT) without relying on prior information. The method is based on the general knowledge of the relatively uniform CT number distribution within one tissue component. The CT image is first segmented to construct a template image where each structure is filled with the same CT number of a specific tissue type. Then, by subtracting the ideal template from the CT image, the residual image from various error sources is generated. Since forward projection is an integration process, non-continuous shading artifacts in the image become continuous signals in a line integral. Thus, the residual image is forward projected and its line integral is low-pass filtered in order to estimate the error that causes shading artifacts. A compensation map is reconstructed from the filtered line integral error using a standard FDK algorithm and added back to the original image for shading correction. As the segmented image does not accurately depict a shaded CT image, the proposed scheme is iterated until the variation of the residual image is minimized. The proposed method is evaluated using cone-beam CT images of a Catphan©600 phantom and a pelvis patient, and low-kVp CT angiography images for carotid artery assessment. Compared with the CT image without correction, the proposed method reduces the overall CT number error from over 200 HU to less than 30 HU and increases the spatial uniformity by a factor of 1.5. Low-contrast objects are faithfully retained after the proposed correction. An effective iterative algorithm for shading correction in CT imaging is proposed that is assisted only by general anatomical information, without relying on prior knowledge. The proposed method is thus practical and attractive as a general solution to CT shading correction.
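A condensed sketch of one pass of the correction loop described above; segment_to_template, forward_project, low_pass and fdk_reconstruct are hypothetical helpers standing in for components the abstract does not specify.

```python
def shading_correction_iteration(image, segment_to_template,
                                 forward_project, low_pass, fdk_reconstruct):
    """One iteration of the prior-free shading correction scheme."""
    template = segment_to_template(image)       # each structure filled with its nominal CT number
    residual = image - template                 # shading plus segmentation error
    residual_sino = forward_project(residual)   # shading becomes a smooth signal in line integrals
    error_sino = low_pass(residual_sino)        # keep only the low-frequency error estimate
    compensation = fdk_reconstruct(error_sino)  # map the error estimate back to image space
    # Subtracting the reconstructed error (equivalently, adding its negative back to the
    # image) yields the corrected image used as input to the next iteration.
    return image - compensation
```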
Residual aneurysm after metal coils treatment detected by spectral CT
Wang, Yang; Gao, Xiaolei; Lu, Aixun; Zhou, Zhengyang; Li, Baoxin
2012-01-01
Digital subtraction angiography (DSA) is currently the gold standard for diagnosing the residue or recurrence of an aneurysm after treatment, especially in the presence of metal coils. However, DSA is an invasive procedure which may cause additional trauma and economic burden to patients. Spectral CT imaging, a newly introduced CT imaging mode, produces monochromatic image sets that are able to reduce beam-hardening and other metal-related artifacts, and has found use in several clinical applications including brain imaging to reduce beam-hardening artifacts. In this study, we describe a case of spectral CT imaging in the follow-up of metal coil treatment and the detection of a small leaf of residual aneurysm after metal coil treatment. PMID:23256074
Rodriguez-Granillo, Gaston A; Carrascosa, Patricia; Cipriano, Silvina; de Zan, Macarena; Deviggiano, Alejandro; Capunay, Carlos; Cury, Ricardo C
2015-01-01
The assessment of myocardial perfusion using single-energy (SE) imaging is influenced by beam-hardening artifacts (BHA). We sought to explore the ability of dual-energy (DE) imaging to attenuate the presence of BHA. Myocardial signal density (SD) was evaluated in 2240 myocardial segments (112 for each energy level) and in 320 American Heart Association segments among the SE group. Compared to DE reconstructions at the best energy level, SE acquisitions showed no significant differences overall regarding myocardial SD or signal-to-noise ratio. The segments most commonly affected by BHA showed significantly lower myocardial SD at the lowest energy levels, progressively normalizing at higher energy levels. Copyright © 2015 Elsevier Inc. All rights reserved.
Strengthening and repair of RC beams with sugarcane bagasse fiber reinforced cement mortar
NASA Astrophysics Data System (ADS)
Syamir Senin, Mohamad; Shahidan, Shahiron; Maarof, M. Z. Md; Syazani Leman, Alif; Zuki, S. S. Mohd; Azmi, M. A. Mohammad
2017-11-01
The use of a jacket made of fiber reinforced cement mortar with tensile hardening behaviour for strengthening RC beams was investigated in this study. A full-scale test was conducted on beams measuring 1000mm in length. A 25mm jacket was directly applied to the surface of the beams to test its ability to repair and strengthen the beams. The beams were initially damaged and eventually repaired. Three types of beams which included unrepaired beams, beams repaired with normal mortar jacket and beams repaired with 10% sugarcane bagasse fiber mortar jacket were studied. The jacket containing 10% of sugarcane bagasse fiber enhanced the flexural strength of the beams.
NASA Astrophysics Data System (ADS)
Hubert, Christian; Voss, Kay Obbe; Bender, Markus; Kupka, Katharina; Romanenko, Anton; Severin, Daniel; Trautmann, Christina; Tomut, Marilena
2015-12-01
Due to its excellent thermo-physical properties and radiation hardness, isotropic graphite is presently the most promising material candidate for new high-power ion accelerators which will provide highest beam intensities and energies. Under these extreme conditions, specific accelerator components including production targets and beam protection modules are facing the risk of degradation due to radiation damage. Ion-beam induced damage effects were tested by irradiating polycrystalline, isotropic graphite samples at the UNILAC (GSI, Darmstadt) with 4.8 MeV per nucleon 132Xe, 150Sm, 197Au, and 238U ions applying fluences between 1 × 1011 and 1 × 1014 ions/cm2. The overall damage accumulation and its dependence on energy loss of the ions were studied by in situ 4-point resistivity measurements. With increasing fluence, the electric resistivity increases due to disordering of the graphitic structure. Irradiated samples were also analyzed off-line by means of micro-indentation in order to characterize mesoscale effects such as beam-induced hardening and stress fields within the specimen. With increasing fluence and energy loss, hardening becomes more pronounced.
NASA Astrophysics Data System (ADS)
Ahmad, M.; Ali, G.; Ahmed, Ejaz; Haq, M. A.; Akhter, J. I.
2011-06-01
Electron beam melting is being used to modify the microstructure of material surfaces due to its ability to cause localized melting and supercooling of the melt. This article presents an experimental study on the surface modification of a Ni-based superalloy (Inconel 625) reinforced with SiC ceramic particles under electron beam melting. Scanning electron microscopy, energy dispersive spectroscopy and X-ray diffraction techniques were applied to characterize the resulting microstructure. The results revealed growth of novel structures with wire, rod, tubular, pyramid, bamboo and tweezers type morphologies in the modified surface. In addition, a fibrous-like structure was also observed. Formation of a thin carbon sheet was found in the regions of decomposed SiC. The electron-beam-modified surface of the Inconel 625 alloy was hardened to twice the hardness of the as-received samples. The surface hardening effect may be attributed to both the formation of the novel structures and the introduction of Si and C atoms into the lattice of the Inconel 625 alloy.
2D beam hardening correction for micro-CT of immersed hard tissue
NASA Astrophysics Data System (ADS)
Davis, Graham; Mills, David
2016-10-01
Beam hardening artefacts arise in tomography and microtomography with polychromatic sources. Typically, specimens appear to be less dense in the center of reconstructions because as the path length through the specimen increases, so the X-ray spectrum is shifted towards higher energies due to the preferential absorption of low energy photons. Various approaches have been taken to reduce or correct for these artefacts. Pre-filtering the X-ray beam with a thin metal sheet will reduce soft energy X-rays and thus narrow the spectrum. Correction curves can be applied to the projections prior to reconstruction which transform measured attenuation with polychromatic radiation to predicted attenuation with monochromatic radiation. These correction curves can be manually selected, iteratively derived from reconstructions (this generally works where density is assumed to be constant) or derived from a priori information about the X-ray spectrum and specimen composition. For hard tissue specimens, the latter approach works well if the composition is reasonably homogeneous. In the case of an immersed or embedded specimen (e.g., tooth or bone) the relative proportions of mineral and "organic" (including medium and plastic container) species varies considerably for different ray paths and simple beam hardening correction does not give accurate results. By performing an initial reconstruction, the total path length through the container can be determined. By modelling the X-ray properties of the specimen, a 2D correction transform can then be created such that the predicted monochromatic attenuation can be derived as a function of both the measured polychromatic attenuation and the container path length.
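A minimal sketch of how such a 2D correction might be applied, assuming the transform has already been tabulated on a grid of (polychromatic attenuation, container path length) pairs; the use of scipy's RegularGridInterpolator is purely illustrative.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def apply_2d_bh_correction(poly_atten, container_path, atten_grid, path_grid, mono_table):
    """Look up predicted monochromatic attenuation for each ray.

    poly_atten     : array of measured polychromatic attenuation values
    container_path : array (same shape) of container path lengths for the same rays
    atten_grid     : 1D grid of polychromatic attenuation values of the table
    path_grid      : 1D grid of container path lengths of the table
    mono_table     : 2D table of predicted monochromatic attenuation,
                     shape (len(atten_grid), len(path_grid))
    """
    lut = RegularGridInterpolator((atten_grid, path_grid), mono_table,
                                  bounds_error=False, fill_value=None)
    pts = np.stack([np.ravel(poly_atten), np.ravel(container_path)], axis=-1)
    return lut(pts).reshape(np.shape(poly_atten))
```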
Fine-Scale Volume Heterogeneity in a Mixed Sand/Mud Sediment Off Fort Walton Beach, FL
2010-07-01
…by Vaughan et al. [4]. Subsequent to the mud drape, wind-wave activity mobilized sediment and some of the mud layer was resuspended, and sand from… hardening effects, which is a common issue with polychromatic energy sources, such as the HD-500 and medical CT systems. Beam hardening is a process… provides a convenient characterization of levels of heterogeneity. The CV is defined as the standard deviation divided by the mean and multiplied by…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chow, J; Owrangi, A; Grigorov, G
Purpose: This study investigates the spectra of surface photon energy and energy fluence with bone heterogeneity and beam obliquity using flattened and unflattened photon beams. The spectra were calculated in a bone and water phantom using Monte Carlo simulation (the EGSnrc code). Methods: Spectra of energy, energy fluence and mean energy of the 6 MV flattened and unflattened photon beams (field size = 10 × 10 cm²) produced by a Varian TrueBEAM linear accelerator were calculated at the surfaces of a bone and water phantom using Monte Carlo simulations. The spectral calculations were repeated with the beam angles turned from 0° to 15°, 30° and 45° in the phantoms. Results: It was found that the unflattened photon beams contained more photons in the low-energy range of 0 - 2 MeV than the flattened beams with a flattening filter. Compared to the water phantom, both the flattened and unflattened beams had slightly fewer photons in the energy range < 0.4 MeV when a bone layer of 1 cm was present under the phantom surface. This shows that the presence of the bone decreased the number of low-energy photons backscattered to the phantom surface. When the photon beams were rotated from 0° to 45°, the number of photons and the mean photon energy increased with the beam angle. This is because both the flattened and unflattened beams became more hardened as the beam angle increased. With the bone heterogeneity, the mean energies of both photon beams increased correspondingly. This is due to the absorption of low-energy photons by the bone, resulting in more significant beam hardening. Conclusion: The photon spectral information is important for studies of patient surface dose enhancement when using unflattened photon beams in radiotherapy.
WE-E-18C-01: Multi-Energy CT: Current Status and Recent Innovations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pelc, N; McCollough, C; Yu, L
2014-06-15
Conventional computed tomography (CT) uses a single polychromatic x-ray spectrum and energy-integrating detectors, and produces images whose contrast depends on the effective attenuation coefficient of the broad-spectrum beam. This can introduce errors from beam hardening and does not produce the optimal contrast-to-noise ratio. In addition, multiple materials can have the same effective attenuation coefficient, causing different materials to be indistinguishable in conventional CT images. If transmission measurements at two or more energies are obtained, even with polychromatic beams, more specific information about the object can be obtained. If the object does not contain materials with K-edges in the spectrum, the x-ray attenuation can be well-approximated by a linear combination of two processes (photoelectric absorption and Compton scattering) or, equivalently, two basis materials. For such cases, two spectral measurements suffice, although additional measurements can provide higher precision. If K-edge materials are present, additional spectral measurements can allow these materials to be isolated. Current commercial implementations use varied approaches, including two sources operating at different kVp, one source whose kVp is rapidly switched in a single scan, and a dual-layer detector that can provide spectral information in every reading. Processing of the spectral information can be performed in the raw data domain or in the image domain. The process of calculating the amounts of the two basis functions implicitly corrects for beam hardening and therefore can lead to improvements in quantitative accuracy. Information can be extracted to provide material-specific information beyond that of conventional CT. This additional information has been shown to be important in several clinical applications, and can also lead to more efficient clinical protocols. Recent innovations in x-ray sources, detectors, and systems have made multi-energy CT much more practical and improved its performance. In addition, this is a very active area of research and further improvements are expected through further technological advances. Learning Objectives: 1. Basic principles of multi-energy CT. 2. Current implementations of multi-energy CT. 3. Data and image analysis methods in multi-energy CT. 4. Current clinical applications of dual energy CT. 5. Recent innovations and anticipated advances in multi-energy CT.
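In the two-process (or two-basis-material) model referred to above, the attenuation and the spectral measurements can be written (in our generic notation) as

```latex
\mu(\mathbf{r},E) \;\approx\; a_{1}(\mathbf{r})\,f_{1}(E) + a_{2}(\mathbf{r})\,f_{2}(E),
\qquad
I_{k} \;=\; \int S_{k}(E)\,
  \exp\!\big(-A_{1}\,f_{1}(E) - A_{2}\,f_{2}(E)\big)\,\mathrm{d}E,
\quad k = 1,2,
```

where f1 and f2 are the basis energy dependences (photoelectric/Compton or two materials), A_i are the line integrals of the basis coefficients a_i along the ray, and S_k are the two effective spectra (including detector response). Solving the two measurements I_1, I_2 for (A_1, A_2) on each ray is what implicitly corrects for beam hardening.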
Leong, David L; Rainford, Louise; Zhao, Wei; Brennan, Patrick C
2016-01-01
In the course of performance acceptance testing, benchmarking or quality control of X-ray imaging systems, it is sometimes necessary to harden the X-ray beam spectrum. IEC 61267 specifies materials and methods to accomplish beam hardening and, unfortunately, requires the use of 99.9% pure aluminium (Alloy 1190) for the RQA beam quality, which is expensive and difficult to obtain. Less expensive and more readily available filters, such as Alloy 1100 (99.0% pure) aluminium and copper/aluminium combinations, have been used clinically to produce RQA series without rigorous scientific investigation to support their use. In this paper, simulation and experimental methods are developed to determine the differences in beam quality using Alloy 1190 and Alloy 1100. Additional simulation investigated copper/aluminium combinations to produce RQA5 and outputs from this simulation are verified with laboratory tests using different filter samples. The results of the study demonstrate that although Alloy 1100 produces a harder beam spectrum compared to Alloy 1190, it is a reasonable substitute. A combination filter of 0.5 mm copper and 2 mm aluminium produced a spectrum closer to that of Alloy 1190 than Alloy 1100 with the added benefits of lower exposures and lower batch variability. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Zink, F E; McCollough, C H
1994-08-01
The unique geometry of electron-beam CT (EBCT) scanners produces radiation dose profiles with widths which can be considerably different from the corresponding nominal scan width. Additionally, EBCT scanners produce both complex (multiple-slice) and narrow (3 mm) radiation profiles. This work describes the measurement of the axial dose distribution from EBCT within a scattering phantom using film dosimetry methods, which offer increased convenience and spatial resolution compared to thermoluminescent dosimetry (TLD) techniques. Therapy localization film was cut into 8 × 220 mm strips and placed within specially constructed light-tight holders for placement within the cavities of a CT Dose Index (CTDI) phantom. The film was calibrated using a conventional overhead x-ray tube with spectral characteristics matched to the EBCT scanner (130 kVp, 10 mm Al HVL). The films were digitized at five samples per mm and calibrated dose profiles plotted as a function of z-axis position. Errors due to angle-of-incidence and beam hardening were estimated to be less than 5% and 10%, respectively. The integral exposure under the film dose profiles agreed with ion-chamber measurements to within 15%. Exposures measured along the radiation profile differed from TLD measurements by an average of 5%. The film technique provided acceptable accuracy and convenience in comparison to conventional TLD methods, and allowed high spatial-resolution measurement of EBCT radiation dose profiles.
NASA Astrophysics Data System (ADS)
Slot Thing, Rune; Bernchou, Uffe; Mainegra-Hing, Ernesto; Hansen, Olfred; Brink, Carsten
2016-08-01
A comprehensive artefact correction method for clinical cone beam CT (CBCT) images acquired for image guided radiation therapy (IGRT) on a commercial system is presented. The method is demonstrated to reduce artefacts and recover CT-like Hounsfield units (HU) in reconstructed CBCT images of five lung cancer patients. Projection image based artefact corrections of image lag, detector scatter, body scatter and beam hardening are described and applied to CBCT images of five lung cancer patients. Image quality is evaluated through visual appearance of the reconstructed images, HU-correspondence with the planning CT images, and total volume HU error. Artefacts are reduced and CT-like HUs are recovered in the artefact corrected CBCT images. Visual inspection confirms that artefacts are indeed suppressed by the proposed method, and the HU root mean square difference between reconstructed CBCTs and the reference CT images are reduced by 31% when using the artefact corrections compared to the standard clinical CBCT reconstruction. A versatile artefact correction method for clinical CBCT images acquired for IGRT has been developed. HU values are recovered in the corrected CBCT images. The proposed method relies on post processing of clinical projection images, and does not require patient specific optimisation. It is thus a powerful tool for image quality improvement of large numbers of CBCT images.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, S; Chao, C; Columbia University, NY, NY
2014-06-01
Purpose: This study investigates the calibration error of detector sensitivity for MapCheck due to inaccurate positioning of the device, which is not taken into account by the current commercial iterative calibration algorithm. We hypothesize that the calibration is more vulnerable to positioning error for flattening filter free (FFF) beams than for conventional flattened beams. Methods: MapCheck2 was calibrated with 10 MV conventional and FFF beams, with careful alignment and with a 1 cm positioning error during calibration, respectively. Open fields of 37 cm × 37 cm were delivered to gauge the impact of the resultant calibration errors. The local calibration error was modeled as a detector-independent multiplication factor, with which the propagation error was estimated for positioning errors from 1 mm to 1 cm. The calibrated sensitivities, without positioning error, were compared between the conventional and FFF beams to evaluate the dependence on beam type. Results: The 1 cm positioning error leads to 0.39% and 5.24% local calibration error in the conventional and FFF beams, respectively. After propagating to the edges of MapCheck, the calibration errors become 6.5% and 57.7%, respectively. The propagation error increases almost linearly with respect to the positioning error. The difference in sensitivities between the conventional and FFF beams was small (0.11 ± 0.49%). Conclusion: The results demonstrate that the positioning error is not handled by the current commercial calibration algorithm of MapCheck. In particular, the calibration errors for the FFF beams are ~9 times greater than those for the conventional beams with identical positioning error, and a small 1 mm positioning error might lead to up to 8% calibration error. Since the sensitivities are only slightly dependent on beam type and the conventional beam is less affected by the positioning error, it is advisable to cross-check the sensitivities between the conventional and FFF beams to detect potential calibration errors due to inaccurate positioning. This work was partially supported by DOD Grant No. W81XWH1010862.
Efficient production by laser materials processing integrated into metal cutting machines
NASA Astrophysics Data System (ADS)
Wiedmaier, M.; Meiners, E.; Dausinger, Friedrich; Huegel, Helmut
1994-09-01
Beam guidance of high-power YAG lasers (cw, pulsed, Q-switched) with average powers up to 2000 W through flexible glass fibers facilitates the integration of the laser beam as an additional tool into metal cutting machines. Hence, technologies like laser cutting, joining, hardening, caving, structuring of surfaces and laser marking can be applied directly inside machining centers in one setting, thereby reducing the flow of workpieces and lowering costs and production time. Furthermore, materials with restricted machinability--especially hard materials like ceramics, hard metals or sintered alloys--can be shaped by laser caving or laser-assisted machining. Altogether, the flexibility of laser-integrated machining centers is substantially increased, and the efficiency of a production line is raised by time savings or extended feasibilities with techniques like hardening, welding or caving.
Recent developments in the MuCAT microtomography facility
NASA Astrophysics Data System (ADS)
Davis, Graham R.; Evershed, Anthony N. Z.; Mills, David
2012-10-01
The goal of the MuCAT scanner development at Queen Mary University of London is to provide highly accurate maps of a specimen's X-ray linear attenuation coefficient; speed of data acquisition and spatial resolution having a lower priority. The reason for this approach is that the primary application is to accurately map the mineral concentration in teeth. Synchrotron tomography would generally be considered more appropriate for such a task, but many of the dental applications involve repeated scans with long intervening periods (from hours to weeks) and the management of synchrotron facilities does not readily allow such research. Development work is concentrated in two areas: beam hardening correction algorithms and novel scanning methodology. Beam hardening correction is combined with calibration, such that the raw X-ray projection data is corrected for beam hardening prior to reconstruction. Recent developments include the design of a multi-element calibration carousel. This has nine calibration pieces, five aluminium, three titanium and one copper. Development of the modelling algorithm is also yielding improved accuracy. A time-delay integration CCD camera is used to avoid ring artefacts. The original prototype averaged out inhomogeneities in both the detector array and the X-ray field; later designs used only software correction for the latter. However, at lower X-ray energies, the effect of deposits on the X-ray window (for example) becomes more conspicuous and so a new scanning methodology has been designed whereby the specimen moves in an arc about the source and equiangular data is acquired, thus overcoming this problem.
Megavoltage cargo radiography with dual energy material decomposition
NASA Astrophysics Data System (ADS)
Shikhaliev, Polad M.
2018-02-01
Megavoltage (MV) radiography has important applications in imaging large cargos for detecting illicit materials. A useful feature of MV radiography is the possibility of decomposing and quantifying materials with different atomic numbers. This can be achieved by imaging cargo at two different X-ray energies, or dual energy (DE) radiography. The performance of both single energy and DE radiography depends on beam energy, beam filtration, radiation dose, object size, and object content. The purpose of this work was to perform comprehensive qualitative and quantitative investigations of image quality in MV radiography as a function of the above parameters. A digital phantom was designed including an Fe background with thicknesses of 2 cm, 6 cm, and 18 cm, and material samples of polyethylene, Fe, Pb, and U. The single energy images were generated at x-ray beam energies of 3.5 MV, 6 MV, and 9 MV. The DE material decomposed images were generated using interlaced low and high energy beams of 3.5/6 MV and 6/9 MV. The X-ray beams were filtered by low-Z (polyethylene) and high-Z (Pb) filters with variable thicknesses. The radiation output of the accelerator was kept constant for all beam energies. The image quality metric was the signal-to-noise ratio (SNR) of a particular sample over a particular background. It was found that the SNR depends on the above parameters in a complex way, but can be optimized by selecting a particular set of parameters. For some imaging setups, increased filter thicknesses, while strongly absorbing the beams, increased the SNR of material decomposed images. Beam hardening due to the polyenergetic x-ray spectra resulted in material decomposition errors, but this could be addressed using region-of-interest decomposition. It was shown that it is not feasible to separate materials with close atomic numbers using the DE method. In particular, Pb and U were difficult to decompose, at least at the dose levels allowed by the radiation source and safety requirements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Altunbas, Cem, E-mail: caltunbas@gmail.com; Lai, Chao-Jen; Zhong, Yuncheng
Purpose: In using flat panel detectors (FPD) for cone beam computed tomography (CBCT), pixel gain variations may lead to structured nonuniformities in projections and ring artifacts in CBCT images. Such gain variations can be caused by changes in detector entrance exposure levels or beam hardening, and they are not accounted for by conventional flat field correction methods. In this work, the authors presented a method to identify isolated pixel clusters that exhibit gain variations and proposed a pixel gain correction (PGC) method to suppress both beam hardening and exposure level dependent gain variations. Methods: To modulate both the beam spectrum and entrance exposure, flood field FPD projections were acquired using beam filters with varying thicknesses. "Ideal" pixel values were estimated by performing polynomial fits in both raw and flat field corrected projections. Residuals were calculated by taking the difference between measured and ideal pixel values to identify clustered image and FPD artifacts in flat field corrected and raw images, respectively. To correct clustered image artifacts, the ratio of ideal to measured pixel values in filtered images was utilized as a pixel-specific gain correction factor, referred to as the PGC method, and these factors were tabulated as a function of pixel value in a look-up table. Results: 0.035% of detector pixels led to clustered image artifacts in flat field corrected projections, and 80% of these pixels were traced back and linked to artifacts in the FPD. The performance of the PGC method was tested in a variety of imaging conditions and phantoms. The PGC method reduced clustered image artifacts and fixed pattern noise in projections, and ring artifacts in CBCT images. Conclusions: Clustered projection image artifacts that lead to ring artifacts in CBCT can be better identified with our artifact detection approach. When compared to the conventional flat field correction method, the proposed PGC method enables characterization of nonlinear pixel gain variations as a function of changes in x-ray spectrum and intensity. Hence, it can better suppress image artifacts due to beam hardening as well as artifacts that arise from detector entrance exposure variation.
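The gain-correction step described above lends itself to a compact illustration. The sketch below is not the authors' implementation: it assumes a row-wise polynomial fit is an adequate stand-in for their "ideal" pixel estimate, and the function names (build_pgc_lut, apply_pgc) and parameters are invented for illustration. Flood images taken behind filters of different thickness give, for every pixel, a gain factor (ideal over measured) tabulated against pixel value; a projection is then corrected by interpolating that per-pixel table.

```python
import numpy as np

def build_pgc_lut(flood_images, poly_order=4):
    """Build a per-pixel gain-correction table from flood-field projections
    acquired behind filters of different thickness (one 2-D image per filter)."""
    n_rows, n_cols = flood_images[0].shape
    cols = np.arange(n_cols)
    mean_values, gain_maps = [], []
    for img in flood_images:
        ideal = np.empty_like(img, dtype=float)
        for r in range(n_rows):
            coeff = np.polyfit(cols, img[r].astype(float), poly_order)
            ideal[r] = np.polyval(coeff, cols)       # smooth "ideal" row response
        mean_values.append(float(img.mean()))
        gain_maps.append(ideal / img)                # pixel-specific gain factors
    order = np.argsort(mean_values)                  # np.interp needs ascending x values
    return np.asarray(mean_values)[order], np.asarray(gain_maps)[order]

def apply_pgc(projection, lut_values, lut_gains):
    """Correct a projection by interpolating each pixel's gain table at its value."""
    corrected = np.empty(projection.shape, dtype=float)
    for idx in np.ndindex(projection.shape):
        gains = lut_gains[(slice(None),) + idx]      # gain-vs-value curve for this pixel
        corrected[idx] = projection[idx] * np.interp(projection[idx], lut_values, gains)
    return corrected
```

A vectorized implementation would replace the per-pixel loop; the loop is kept here only to make the lookup logic explicit.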
Wang, Hui; Xu, Yanan; Shi, Hongli
2018-03-15
Metal artifacts severely degrade CT image quality in clinical diagnosis and are difficult to remove, especially beam hardening artifacts. Metal artifact reduction (MAR) methods based on prior images are the most frequently used. However, most prior images contain considerable misclassification caused by the absence of prior information, such as the spectral distribution of the X-ray source, especially when multiple or large metal objects are included. This work aims to obtain a more accurate prior image to improve image quality. The proposed method includes four steps. First, the metal image is segmented by thresholding an initial image, and the metal traces are identified in the initial projection data using the forward projection of the metal image. Second, an accurate absorption model of the metal image is calculated according to the spectral distribution of the X-ray source and the energy-dependent attenuation coefficients of the metal. Third, a new metal image is reconstructed by a standard analytical reconstruction algorithm such as filtered back projection (FBP). The prior image is obtained by segmenting the difference image between the initial image and the new metal image into air, tissue and bone. Fourth, the initial projection data are normalized by dividing them, pixel by pixel, by the projection data of the prior image. The final corrected image is obtained by interpolation, denormalization and reconstruction. Several clinical images with dental fillings and knee prostheses were used to compare the proposed algorithm with the normalized metal artifact reduction (NMAR) and linear interpolation (LI) methods. The results demonstrate that the artifacts were reduced efficiently by the proposed method. The proposed method can obtain an accurate prior image using prior information about the X-ray source spectrum and the energy-dependent attenuation coefficients of the metal. As a result, better performance in reducing beam hardening artifacts can be achieved. Moreover, the process is rather simple and requires little extra computational burden. It has advantages over other algorithms when multiple and/or large implants are included.
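For orientation, the following is a heavily simplified, illustrative sketch of the prior-image normalization workflow (segment metal, build a prior, normalize the sinogram, interpolate across the metal trace, denormalize, reconstruct). It is not the authors' code: it operates on a reconstructed image rather than raw projections, uses fixed CT-number thresholds for the air/tissue/bone prior, and omits the spectral absorption model of the metal; the threshold values and the use of skimage.transform.radon/iradon are assumptions.

```python
import numpy as np
from skimage.transform import radon, iradon

def prior_normalized_mar(image, metal_threshold=2000.0, theta=np.arange(0.0, 180.0)):
    """Simplified prior-image normalized metal artifact reduction (toy sketch)."""
    metal_mask = image > metal_threshold                     # step 1: segment metal
    metal_trace = radon(metal_mask.astype(float), theta=theta) > 0
    sino = radon(image + 1000.0, theta=theta)                # offset keeps projections positive

    prior = np.zeros_like(image)                             # steps 2-3: build a prior image
    prior[image <= -500.0] = -1000.0                         # air
    prior[image > 300.0] = 800.0                             # bone-like
    prior[metal_mask] = 0.0                                  # replace metal with soft tissue
    prior_sino = radon(prior + 1000.0, theta=theta)
    prior_sino[prior_sino <= 0] = 1e-6

    norm = sino / prior_sino                                 # step 4: normalize
    bins = np.arange(norm.shape[0])
    for j in range(norm.shape[1]):                           # interpolate across the metal trace
        t = metal_trace[:, j]
        if t.any() and (~t).any():
            norm[t, j] = np.interp(bins[t], bins[~t], norm[~t, j])
    corrected = iradon(norm * prior_sino, theta=theta) - 1000.0
    corrected[metal_mask] = image[metal_mask]                # re-insert the metal
    return corrected
```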
NASA Astrophysics Data System (ADS)
Lander, Michael L.
2003-05-01
The Laser Hardened Materials Evaluation Laboratory (LHMEL) has been characterizing material responses to laser energy in support of national defense programs and the aerospace industry for the past 26 years. This paper reviews the overall resources available at LHMEL to support fundamental materials testing relating to impulse coupling measurement and to explore beamed energy launch concepts. Located at Wright-Patterson Air Force Base, Ohio, LHMEL is managed by the Air Force Research Laboratory Materials Directorate AFRL/MLPJ and operated by Anteon Corporation. The facility's advanced hardware is centered around carbon dioxide lasers producing output power up to 135kW and neodymium glass lasers producing up to 10 kilojoules of repetitively pulsed output. The specific capabilities of each laser device and related optical systems are discussed. Materials testing capabilities coupled with the laser systems are also described including laser output and test specimen response diagnostics. Environmental simulation capabilities including wind tunnels and large-volume vacuum chambers relevant to beamed energy propulsion are also discussed. This paper concludes with a summary of the procedures and methods by which the facility can be accessed.
NASA Technical Reports Server (NTRS)
Tasca, D. M.
1981-01-01
Single event upset phenomena are discussed, taking into account cosmic ray induced errors in IIL microprocessors and logic devices, single event upsets in NMOS microprocessors, a prediction model for bipolar RAMs in a high energy ion/proton environment, the search for neutron-induced hard errors in VLSI structures, soft errors due to protons in the radiation belt, and the use of an ion microbeam to study single event upsets in microcircuits. Basic mechanisms in materials and devices are examined, giving attention to gamma induced noise in CCD's, the annealing of MOS capacitors, an analysis of photobleaching techniques for the radiation hardening of fiber optic data links, a hardened field insulator, the simulation of radiation damage in solids, and the manufacturing of radiation resistant optical fibers. Energy deposition and dosimetry is considered along with SGEMP/IEMP, radiation effects in devices, space radiation effects and spacecraft charging, EMP/SREMP, and aspects of fabrication, testing, and hardness assurance.
SU-F-I-41: Calibration-Free Material Decomposition for Dual-Energy CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, W; Xing, L; Zhang, Q
2016-06-15
Purpose: To eliminate the tedious phantom calibration or manual region of interest (ROI) selection required in dual-energy CT material decomposition, we establish a new projection-domain material decomposition framework that incorporates the energy spectrum. Methods: Similar to the case of dual-energy CT, the integral of the basis material image in our model is expressed as a linear combination of basis functions, which are polynomials of the high- and low-energy raw projection data. To yield the unknown coefficients of the linear combination, the proposed algorithm minimizes the quadratic error between the high- and low-energy raw projection data and the projections calculated using the material images. We evaluate the algorithm with an iodine concentration numerical phantom at different dose and iodine concentration levels. The x-ray energy spectra of the high and low energies are estimated using an indirect transmission method. The derived monochromatic images are compared with the high- and low-energy CT images to demonstrate beam hardening artifact reduction. Quantitative results were measured and compared to the true values. Results: The differences between the true density values used for simulation and those obtained from the monochromatic images are 1.8%, 1.3%, 2.3%, and 2.9% for dose levels from standard dose to 1/8 dose, and 0.4%, 0.7%, 1.5%, and 1.8% for the four iodine concentration levels from 6 mg/mL to 24 mg/mL. For all of the cases, beam hardening artifacts, especially streaks between dense inserts, are almost completely removed in the monochromatic images. Conclusion: The proposed algorithm provides an effective way to yield material images and artifact-free monochromatic images at different dose levels without the need for phantom calibration or ROI selection. Furthermore, the approach also yields accurate results when the concentration of the iodine insert is very low, suggesting that the algorithm is robust in low-contrast scenarios.
Proton irradiation damage of an annealed Alloy 718 beam window
Bach, H. T.; Anderoglu, O.; Saleh, T. A.; ...
2015-04-01
Mechanical testing and microstructural analysis were performed on an Alloy 718 window that was in use at the Los Alamos Neutron Science Center (LANSCE) Isotope Production Facility (IPF) for approximately 5 years. It was replaced as part of the IPF preventive maintenance program. The window was transported to the Wing 9 hot cells at the Chemical and Metallurgical Research (CMR) LANL facility, visually inspected, and 3-mm diameter samples were trepanned from the window for mechanical testing and microstructural analysis. Shear punch testing and optical metallography were performed at the CMR hot cells. The 1-mm diameter shear punch disks were cut into smaller samples, to further reduce the radiation exposure dose rate, using a focused ion beam (FIB), and microstructure changes were analyzed using transmission electron microscopy (TEM). Irradiation doses were determined to be ~0.2–0.7 dpa (edge) to 11.3 dpa (peak of beam intensity) using autoradiography and MCNPX calculations. The corresponding irradiation temperatures were calculated to be ~34–120 °C, with short excursions to ~47–220 °C, using ANSYS. Mechanical properties and microstructure analysis results with respect to calculated dpa and temperatures show that significant work hardening occurs but useful ductility still remains. The hardening in the lowest dose region (~0.2–0.7 dpa) was the highest and attributed to the formation of γ" precipitates and irradiation defect clusters/bubbles, whereas the hardening in the highest dose region (~11.3 dpa) was lower and attributed mainly to irradiation defect clusters and some thermal annealing.
NASA Astrophysics Data System (ADS)
Brousmiche, S.; Souris, K.; Orban de Xivry, J.; Lee, J. A.; Macq, B.; Seco, J.
2017-11-01
Proton range random and systematic uncertainties are the major factors undermining the advantages of proton therapy, namely, a sharp dose falloff and a better dose conformality for lower doses in normal tissues. The influence of CT artifacts such as beam hardening or scatter can easily be understood and estimated due to their large-scale effects on the CT image, like cupping and streaks. In comparison, the effects of weakly-correlated stochastic noise are more insidious and less attention is paid to them, partly due to the common belief that they only contribute to proton range uncertainties and not to systematic errors thanks to some averaging effects. A new source of systematic errors on the range and relative stopping powers (RSP) has been highlighted and proved not to be negligible compared to the 3.5% uncertainty reference value used for safety margin design. Hence, we demonstrate that the angular points in the HU-to-RSP calibration curve are an intrinsic source of proton range systematic error for typical levels of zero-mean stochastic CT noise. Systematic errors on RSP of up to 1% have been computed for these levels. We also show that the range uncertainty does not generally vary linearly with the noise standard deviation. We define a noise-dependent effective calibration curve that better describes, for a given material, the RSP value that is actually used. The statistics of the RSP and the range continuous slowing down approximation (CSDA) have been analytically derived for the general case of a calibration curve obtained by the stoichiometric calibration procedure. These models have been validated against actual CSDA simulations for homogeneous and heterogeneous synthetic objects as well as on actual patient CTs for prostate and head-and-neck treatment planning situations.
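The mechanism behind this noise-induced bias can be reproduced in a few lines. The sketch below is a toy illustration, not the stoichiometric calibration of the paper: the piecewise-linear HU-to-RSP curve, its kink location, and the noise level are invented, and serve only to show that zero-mean CT noise fed through a curve with an angular point yields a non-zero mean (systematic) RSP error.

```python
import numpy as np

def hu_to_rsp(hu):
    """Toy piecewise-linear calibration curve with an angular point (kink) at 0 HU."""
    return np.where(hu < 0.0, 1.0 + 1.0e-3 * hu, 1.0 + 0.5e-3 * hu)

rng = np.random.default_rng(0)
sigma_hu = 20.0                                  # zero-mean stochastic CT noise (HU)
for true_hu in (-100.0, 0.0, 100.0):             # far from and right at the kink
    noisy = true_hu + sigma_hu * rng.standard_normal(200_000)
    bias = float(hu_to_rsp(noisy).mean() - hu_to_rsp(np.asarray(true_hu)))
    print(f"HU = {true_hu:6.1f}   mean RSP error = {bias:+.5f}")
# Away from the kink the bias is ~0; at the kink it is systematically non-zero,
# because averaging across the angular point does not commute with the conversion.
```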
Technology, design, simulation, and evaluation for SEP-hardened circuits
NASA Technical Reports Server (NTRS)
Adams, J. R.; Allred, D.; Barry, M.; Rudeck, P.; Woodruff, R.; Hoekstra, J.; Gardner, H.
1991-01-01
This paper describes the technology, design, simulation, and evaluation for improvement of the Single Event Phenomena (SEP) hardness of gate-array and SRAM cells. Through the use of design and processing techniques, it is possible to achieve an SEP error rate less than 1.0 x 10(exp -10) errors/bit-day for a 90 percent worst-case geosynchronous orbit environment.
Radiation effects in advanced microelectronics technologies
NASA Astrophysics Data System (ADS)
Johnston, A. H.
1998-06-01
The pace of device scaling has increased rapidly in recent years. Experimental CMOS devices have been produced with feature sizes below 0.1 μm, demonstrating that devices with feature sizes between 0.1 and 0.25 μm will likely be available in mainstream technologies after the year 2000. This paper discusses how the anticipated changes in device dimensions and design are likely to affect their radiation response in space environments. Traditional problems, such as total dose effects, SEU and latchup are discussed, along with new phenomena. The latter include hard errors from heavy ions (microdose and gate-rupture errors), and complex failure modes related to advanced circuit architecture. The main focus of the paper is on commercial devices, which are displacing hardened device technologies in many space applications. However, the impact of device scaling on hardened devices is also discussed.
Ion implantation method for preparing polymers having oxygen erosion resistant surfaces
Lee, Eal H.; Mansur, Louis K.; Heatherly, Jr., Lee
1995-01-01
Hard surfaced polymers and the method for making them are generally described. Polymers are subjected to simultaneous multiple ion beam bombardment, that results in a hardening of the surface, improved wear resistance, and improved oxygen erosion resistance.
WE-G-207-07: Iterative CT Shading Correction Method with No Prior Information
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, P; Mao, T; Niu, T
2015-06-15
Purpose: Shading artifacts are caused by scatter contamination, beam hardening effects and other non-ideal imaging conditions. Our purpose is to propose a novel and general correction framework to eliminate low-frequency shading artifacts in CT imaging (e.g., cone-beam CT, low-kVp CT) without relying on prior information. Methods: Our method applies the general knowledge that the CT number distribution within one tissue component is relatively uniform. Image segmentation is applied to construct a template image where each structure is filled with the same CT number of that specific tissue. By subtracting the ideal template from the CT image, the residual from various error sources is generated. Since forward projection is an integration process, the non-continuous low-frequency shading artifacts in the image become continuous, low-frequency signals in the line integral. The residual image is thus forward projected and its line integral is filtered using a Savitzky-Golay filter to estimate the error. A compensation map is reconstructed from the error using the standard FDK algorithm and added to the original image to obtain the shading-corrected one. Since the segmentation is not accurate on a shaded CT image, the proposed scheme is iterated until the variation of the residual image is minimized. Results: The proposed method is evaluated on a Catphan600 phantom, a pelvic patient scan and a CT angiography scan for carotid artery assessment. Compared to the uncorrected images, our method reduces the overall CT number error from >200 HU to <35 HU and increases the spatial uniformity by a factor of 1.4. Conclusion: We propose an effective iterative algorithm for shading correction in CT imaging. Differing from existing algorithms, our method is assisted only by general anatomical and physical knowledge of CT imaging, without relying on prior information. Our method is thus practical and attractive as a general solution to CT shading correction. This work is supported by the National Science Foundation of China (NSFC Grant No. 81201091), National High Technology Research and Development Program of China (863 program, Grant No. 2015AA020917), and the Fund Project for Excellent Abroad Scholar Personnel in Science and Technology.
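A condensed, illustrative version of the template/residual loop is sketched below. It is not the authors' implementation: the segmentation thresholds, Savitzky-Golay window, number of iterations, and sign conventions are assumptions, and 2-D parallel-beam radon/iradon from scikit-image stands in for the forward projection and FDK reconstruction.

```python
import numpy as np
from scipy.signal import savgol_filter
from skimage.transform import radon, iradon

def shading_correction(image, n_iter=3, theta=np.arange(0.0, 180.0)):
    """Iterative low-frequency shading correction (toy 2-D sketch)."""
    corrected = image.astype(float).copy()
    for _ in range(n_iter):
        # segment into air / soft tissue / bone and fill each class with one CT number
        template = np.full_like(corrected, -1000.0)          # air
        template[corrected > -500.0] = 0.0                   # soft tissue
        template[corrected > 300.0] = 800.0                  # bone
        residual = corrected - template                      # shading error plus anatomy mismatch
        # in the line integrals the shading error becomes a smooth, low-frequency signal
        res_sino = radon(residual, theta=theta)
        res_sino = savgol_filter(res_sino, window_length=51, polyorder=3, axis=0)
        error_map = iradon(res_sino, theta=theta)            # reconstruct the compensation map
        corrected = corrected - error_map                    # remove the estimated shading
    return corrected
```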
Method for beam hardening correction in quantitative computed X-ray tomography
NASA Technical Reports Server (NTRS)
Yan, Chye Hwang (Inventor); Whalen, Robert T. (Inventor); Napel, Sandy (Inventor)
2001-01-01
Each voxel is assumed to contain exactly two distinct materials, with the volume fraction of each material being iteratively calculated. According to the method, the spectrum of the X-ray beam must be known, and the attenuation spectra of the materials in the object must be known, and be monotonically decreasing with increasing X-ray photon energy. Then, a volume fraction is estimated for the voxel, and the spectrum is iteratively calculated.
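The iteration described in the claim can be illustrated with a toy polychromatic model. In the sketch below the beam spectrum and the bone/marrow attenuation curves are invented placeholders (both monotonically decreasing with energy, as the method requires), and the root finder simply stands in for the patent's iterative update of the volume fraction.

```python
import numpy as np
from scipy.optimize import brentq

E = np.linspace(20.0, 120.0, 101)                    # keV
spectrum = np.exp(-0.5 * ((E - 70.0) / 20.0) ** 2)   # toy beam spectrum
spectrum /= spectrum.sum()
mu_bone = 4.0 * (60.0 / E) ** 3 + 0.20               # 1/cm, decreasing with energy
mu_marrow = 0.3 * (60.0 / E) ** 3 + 0.18             # 1/cm, decreasing with energy

def poly_transmission(f_bone, thickness_cm):
    """Polychromatic transmission through a two-material (bone/marrow) mixture."""
    mu_mix = f_bone * mu_bone + (1.0 - f_bone) * mu_marrow
    return float(np.sum(spectrum * np.exp(-mu_mix * thickness_cm)))

# "Measured" transmission for a voxel path that is truly 35% bone over 2 cm:
measured = poly_transmission(0.35, 2.0)

# Iteratively solve the polychromatic model for the unknown volume fraction.
f_est = brentq(lambda f: poly_transmission(f, 2.0) - measured, 0.0, 1.0)
print(f"estimated bone volume fraction = {f_est:.3f}")    # recovers 0.350
```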
NASA Astrophysics Data System (ADS)
Meserve, Justin
Cold drawn AISI 4140 beams were LASER surface hardened with a 2 kW CO2 LASER. Specimens were treated in the free state and while restrained in a bending fixture inducing surface tensile stresses of 94 and 230 MPa. Knoop hardness indentation was used to evaluate the through thickness hardness distribution, and a layer removal methodology was used to evaluate the residual stress distribution. Results showed the maximum surface hardness attained was not affected by pre-stress during hardening, and ranged from 513 to 676 kg/mm2. The depth of effective hardening varied at different magnitudes of pre-stress, but did not vary proportionately to the pre-stress. The surface residual stress, coinciding with the maximum compressive residual stress, increased as pre-stress was increased, from 1040 MPa for the nominally treated specimens to 1270 MPa for specimens pre-stressed to 230 MPa. The maximum tensile residual stress observed in the specimens decreased from 1060 MPa in the nominally treated specimens to 760 MPa for specimens pre-stressed to 230 MPa. Similarly, thickness of the compressive residual stress region increased and the depth at which maximum tensile residual stress occurred increased as the pre-stress during treatment was increased Overall, application of tensile elastic pre-stress during LASER hardening is beneficial to the development of compressive residual stress in AISI 4140, with minimal impact to the hardness attained from the treatment. The newly developed approach for LASER hardening may support efforts to increase both the wear and fatigue resistance of parts made from hardenable steels.
A compact x-ray system for two-phase flow measurement
NASA Astrophysics Data System (ADS)
Song, Kyle; Liu, Yang
2018-02-01
In this paper, a compact x-ray densitometry system consisting of a 50 kV, 1 mA x-ray tube and several linear detector arrays is developed for two-phase flow measurement. The system is capable of measuring void fraction and velocity distributions with a spatial resolution of 0.4 mm per pixel and a frequency of 1000 Hz. A novel measurement model has been established for the system which takes account of the energy spectrum of x-ray photons and the beam hardening effect. An improved measurement accuracy has been achieved with this model compared with the conventional log model that has been widely used in the literature. Using this system, void fraction and velocity distributions are measured for a bubbly and a slug flow in a 25.4 mm I.D. air-water two-phase flow test loop. The measured superficial gas velocities show an error within ±4% when compared with the gas flowmeter for both conditions.
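The benefit of a spectral measurement model over the conventional log model can be shown with a toy calculation. The spectrum and water attenuation curve below are invented (they are not the parameters of the system described), but the structure is the same: the log model uses a single effective attenuation coefficient calibrated on the full pipe, while the spectral model solves the polychromatic transmission equation for the water chord, which removes the beam-hardening bias.

```python
import numpy as np
from scipy.optimize import brentq

E = np.linspace(10.0, 50.0, 81)                     # keV, toy 50 kV spectrum
S = np.exp(-0.5 * ((E - 30.0) / 8.0) ** 2)
S /= S.sum()
mu_w = 5.0 * (20.0 / E) ** 3 + 0.2                  # toy water attenuation, 1/cm

def intensity(water_cm):
    """Detected intensity behind a given water chord (polychromatic model)."""
    return float(np.sum(S * np.exp(-mu_w * water_cm)))

pipe_d = 2.54                                       # 25.4 mm inner diameter, in cm
I_full, I_empty = intensity(pipe_d), intensity(0.0)
mu_eff = -np.log(I_full / I_empty) / pipe_d         # single effective mu for the log model

I_meas = intensity(0.6 * pipe_d)                    # true void fraction is 0.40
alpha_log = 1.0 + np.log(I_meas / I_empty) / (mu_eff * pipe_d)        # log-model estimate
water = brentq(lambda t: intensity(t) - I_meas, 0.0, pipe_d)          # spectral-model solve
alpha_spectral = 1.0 - water / pipe_d
print(f"log model: {alpha_log:.3f}   spectral model: {alpha_spectral:.3f}")   # biased vs. 0.400
```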
Reconstruction algorithm for polychromatic CT imaging: application to beam hardening correction
NASA Technical Reports Server (NTRS)
Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Yen, S. Y.; Napel, S.
2000-01-01
This paper presents a new reconstruction algorithm for both single- and dual-energy computed tomography (CT) imaging. By incorporating the polychromatic characteristics of the X-ray beam into the reconstruction process, the algorithm is capable of eliminating beam hardening artifacts. The single-energy version of the algorithm assumes that each voxel in the scan field can be expressed as a mixture of two known substances, for example, a mixture of trabecular bone and marrow, or a mixture of fat and flesh. These assumptions are easily satisfied in a quantitative computed tomography (QCT) setting. We have compared our algorithm to three commonly used single-energy correction techniques. Experimental results show that our algorithm is much more robust and accurate. We have also shown that QCT measurements obtained using our algorithm are five times more accurate than those from current QCT systems (using calibration). The dual-energy mode does not require any prior knowledge of the object in the scan field, and can be used to estimate the attenuation coefficient function of unknown materials. We have tested the dual-energy setup to obtain an accurate estimate of the attenuation coefficient function of K2HPO4 solution.
The effects of error augmentation on learning to walk on a narrow balance beam.
Domingo, Antoinette; Ferris, Daniel P
2010-10-01
Error augmentation during training has been proposed as a means to facilitate motor learning due to the human nervous system's reliance on performance errors to shape motor commands. We studied the effects of error augmentation on short-term learning of walking on a balance beam to determine whether it had beneficial effects on motor performance. Four groups of able-bodied subjects walked on a treadmill-mounted balance beam (2.5-cm wide) before and after 30 min of training. During training, two groups walked on the beam with a destabilization device that augmented error (Medium and High Destabilization groups). A third group walked on a narrower beam (1.27-cm) to augment error (Narrow). The fourth group practiced walking on the 2.5-cm balance beam (Wide). Subjects in the Wide group had significantly greater improvements after training than the error augmentation groups. The High Destabilization group had significantly less performance gains than the Narrow group in spite of similar failures per minute during training. In a follow-up experiment, a fifth group of subjects (Assisted) practiced with a device that greatly reduced catastrophic errors (i.e., stepping off the beam) but maintained similar pelvic movement variability. Performance gains were significantly greater in the Wide group than the Assisted group, indicating that catastrophic errors were important for short-term learning. We conclude that increasing errors during practice via destabilization and a narrower balance beam did not improve short-term learning of beam walking. In addition, the presence of qualitatively catastrophic errors seems to improve short-term learning of walking balance.
Aliasing errors in measurements of beam position and ellipticity
NASA Astrophysics Data System (ADS)
Ekdahl, Carl
2005-09-01
Beam position monitors (BPMs) are used in accelerators and ion experiments to measure currents, position, and azimuthal asymmetry. These usually consist of discrete arrays of electromagnetic field detectors, with detectors located at several equally spaced azimuthal positions at the beam tube wall. The discrete nature of these arrays introduces systematic errors into the data, independent of uncertainties resulting from signal noise, lack of recording dynamic range, etc. Computer simulations were used to understand and quantify these aliasing errors. If required, aliasing errors can be significantly reduced by employing more than the usual four detectors in the BPMs. These simulations show that the error in measurements of the centroid position of a large beam is indistinguishable from the error in the position of a filament. The simulations also show that aliasing errors in the measurement of beam ellipticity are very large unless the beam is accurately centered. The simulations were used to quantify the aliasing errors in beam parameter measurements during early experiments on the DARHT-II accelerator, demonstrating that they affected the measurements only slightly, if at all.
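The flavour of these systematic errors can be reproduced with a small simulation. The sketch below is not the DARHT-II analysis: it models four ideal point detectors on a circular pipe, uses the textbook image-current (Poisson-kernel) wall distribution for an off-axis filament, and the standard first-order difference-over-sum position estimate; the pipe radius and displacements are arbitrary.

```python
import numpy as np

R = 10.0                                                # beam-pipe radius (arbitrary units)
probe_angles = np.array([0.0, 0.5, 1.0, 1.5]) * np.pi   # four equally spaced detectors

def wall_signal(phi, r0, th0):
    """Image-current density at wall angle phi for a filament at radius r0, angle th0."""
    return (R**2 - r0**2) / (R**2 + r0**2 - 2.0 * R * r0 * np.cos(phi - th0))

def four_detector_x(r0, th0):
    """First-order difference-over-sum estimate of the horizontal beam position."""
    right, top, left, bottom = wall_signal(probe_angles, r0, th0)
    return 0.5 * R * (right - left) / (right + left)

for x_true in (0.5, 2.0, 4.0):                          # beam displaced along x
    x_est = four_detector_x(x_true, 0.0)
    print(f"x_true = {x_true:4.1f}   x_est = {x_est:5.2f}   systematic error = {x_est - x_true:+.2f}")
```

For small displacements the estimate is nearly exact; as the displacement grows, the higher azimuthal harmonics that the four-point array cannot resolve show up as a systematic (aliasing-type) error.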
Ion implantation method for preparing polymers having oxygen erosion resistant surfaces
Lee, E.H.; Mansur, L.K.; Heatherly, L. Jr.
1995-04-18
Hard surfaced polymers and the method for making them are generally described. Polymers are subjected to simultaneous multiple ion beam bombardment, that results in a hardening of the surface, improved wear resistance, and improved oxygen erosion resistance. 8 figs.
Casar, Bozidar; Pasler, Marlies; Wegener, Sonja; Hoffman, David; Talamonti, Cinzia; Qian, Jianguo; Mendez, Ignasi; Brojan, Denis; Perrin, Bruce; Kusters, Martijn; Canters, Richard; Pallotta, Stefania; Peterlin, Primoz
2017-09-01
The influence of the Integral Quality Monitor (IQM) transmission detector on photon beam properties was evaluated in a preclinical phase, using data from nine participating centres: (i) the change of beam quality (beam hardening), (ii) the influence on surface dose, and (iii) the attenuation of the IQM detector. For 6 different nominal photon energies (4 standard, 2 FFF) and square field sizes from 1×1 cm² to 20×20 cm², the effect of the IQM on beam quality was assessed from the PDD20,10 values obtained from percentage depth dose (PDD) curves measured with and without the IQM in the beam path. The change in surface dose with/without the IQM was assessed for all available energies and field sizes from 4×4 cm² to 20×20 cm². The transmission factor was calculated from the absorbed dose measured at 10 cm depth for all available energies and field sizes. (i) A small (0.11-0.53%) yet statistically significant beam hardening effect was observed, depending on photon beam energy. (ii) The increase in surface dose correlated with field size (p<0.01) for all photon energies except 18 MV. The change in surface dose was smaller than 3.3% in all cases except for the 20×20 cm² field and 10 MV FFF beam, where it reached 8.1%. (iii) For standard beams, transmission of the IQM showed a weak dependence on field size and a pronounced dependence on beam energy (0.9412 for 6 MV to 0.9578 for 18 MV; 0.9440 for 6 MV FFF and 0.9533 for 10 MV FFF). The effects of the IQM detector on photon beam properties were found to be small yet statistically significant. The magnitudes of the changes justify treating the IQM either as a tray factor within the treatment planning system (TPS) for a particular energy or alternatively as a modified output for a specific beam energy of the linear accelerator, which eases the introduction of the IQM into clinical practice. Copyright © 2017. Published by Elsevier GmbH.
Radiation Production by Charged Particle Beams Ejected from a Plasma Focus.
1981-02-01
The scope of this investigation concerns the development of a pulsed radiation source using the charged particle beam ejected from a plasma focus device...satellite components for radiation hardening and survivability. The plasma focus is operated in a modified geometry such that electron bursts which...a radiation facility. The plasma focus , identified as the Mark IV, is nominally rated at 34 kJ with a capacitance of 168 micro F at 20 kV. The
Robar, James L; Connell, Tanner; Huang, Weihong; Kelly, Robin G
2009-09-01
The purpose of this study is to investigate the improvement of megavoltage planar and cone-beam CT (CBCT) image quality with the use of low atomic number (Z) external targets in the linear accelerator. In this investigation, two experimental megavoltage imaging beams were generated by using either 3.5 or 7.0 MeV electrons incident on aluminum targets installed above the level of the carousel in a linear accelerator (2100EX, Varian Medical, Inc., Palo Alto, CA). Images were acquired using an amorphous silicon detector panel. Contrast-to-noise ratio (CNR) in planar and CBCT images was measured as a function of dose and a comparison was made between the imaging beams and the standard 6 MV therapy beam. Phantoms of variable diameter were used to examine the loss of contrast due to beam hardening. Porcine imaging was conducted to examine qualitatively the advantages of the low-Z target approach in CBCT. In CBCT imaging CNR increases by factors as high as 2.4 and 4.3 for the 7.0 and 3.5 MeV/Al beams, respectively, compared to images acquired with 6 MV. Similar factors of improvement are observed in planar imaging. For the imaging beams, beam hardening causes a significant loss of the contrast advantage with increasing phantom diameter; however, for the 3.5 MeV/Al beam and a phantom diameter of 25 cm, a contrast advantage remains, with increases of contrast by factors of 1.5 and 3.4 over 6 MV for bone and lung inhale regions, respectively. The spatial resolution is improved slightly in CBCT images for the imaging beams. CBCT images of a porcine cranium demonstrate qualitatively the advantages of the low-Z target approach, showing greater contrast between tissues and improved visibility of fine detail. The use of low-Z external targets in the linear accelerator improves megavoltage planar and CBCT image quality significantly. CNR may be increased by a factor of 4 or greater. Improvement of the spatial resolution is also apparent.
Tetraglycidyl epoxy resins and graphite fiber composites cured with flexibilized aromatic diamines
NASA Technical Reports Server (NTRS)
Delvigs, P.
1986-01-01
Studies were performed to synthesize new ether modified, flexibilized aromatic diamine hardeners for curing epoxy resins. The effect of moisture absorption on the glass transition temperatures of a tetraglycidyl epoxy, MY 720, cured with flexibilized hardeners and a conventional aromatic diamine was studied. Unidirectional composites, using epoxy-sized Celion 6000 graphite fiber as the reinforcement, were fabricated. The room temperature and 300 F mechanical properties of the composites, before and after moisture exposure, were determined. The Mode I interlaminar fracture toughness of the composites was characterized using a double cantilever beam technique to calculate the critical strain energy release rate.
Bae, Youngchul
2016-05-23
An optical sensor such as a laser range finder (LRF) or laser displacement meter (LDM) uses a laser beam reflected and returned from a target. Such optical sensors are mainly used to measure the distance between a launch position and the target. However, optical-sensor-based LRFs and LDMs suffer from numerous errors such as statistical errors, drift errors, cyclic errors, alignment errors and slope errors. Among these, the alignment error, which includes the measurement error associated with the strength of radiation of the laser beam returned from the target, is the most serious error in industrial optical sensors. It is caused by the dependence of the measurement offset on the strength of radiation of the returned beam incident upon the focusing lens from the target. In this paper, in order to solve these problems, we propose a novel method for measuring the output direct current (DC) voltage that is proportional to the strength of radiation of the returned laser beam in the receiving avalanche photodiode (APD) circuit. We implemented a measuring circuit that is able to provide an exact measurement of the reflected laser beam. By using the proposed method, we can measure the intensity or strength of radiation of the laser beam in real time and with a high degree of precision.
High Birefringence Liquid Crystals for Laser Hardening and IR Countermeasure
2004-09-24
A fast-switching and scattering-free phase modulator using a polymer network liquid crystal (PNLC) is demonstrated at λ = 1.55 μm for laser beam steering applications. The strong polymer network anchoring greatly reduces the visco-elastic coefficient of the liquid crystal. As a result, the PNLC
Space Qualified High Speed Reed Solomon Encoder
NASA Technical Reports Server (NTRS)
Gambles, Jody W.; Winkert, Tom
1993-01-01
This paper reports a Class S CCSDS recommendation Reed Solomon encoder circuit baselined for several NASA programs. The chip is fabricated using United Technologies Microelectronics Center's UTE-R radiation-hardened gate array family, contains 64,000 p-n transistor pairs, and operates at a sustained output data rate of 200 MBits/s. The chip features a pin selectable message interleave depth of from 1 to 8 and supports output block lengths of 33 to 255 bytes. The UTE-R process is reported to produce parts that are radiation hardened to 16 Rads (Si) total dose and 1.0(exp -10) errors/bit-day.
Nitriding of Polymer by Low Energy Nitrogen Neutral Beam Source
NASA Astrophysics Data System (ADS)
Hara, Yasuhiro; Takeda, Keigo; Yamakawa, Koji; Den, Shoji; Toyoda, Hirotaka; Sekine, Makoto; Hori, Masaru
2012-03-01
Nitriding of polyethylene naphthalate (PEN) has been carried out at room temperature using a nitrogen neutral beam with kinetic energy of less than 100 eV. The surface hardness of nitrided samples increased to two times that of the untreated sample, when the acceleration voltage was between 30 and 50 V. The thickness of the hardened polymer layer was estimated to be 1 µm. It was concluded that the hardness enhancement was caused by the diffusion of nitrogen atoms into the polymer.
NASA Astrophysics Data System (ADS)
Zand, Ramtin; DeMara, Ronald F.
2017-12-01
In this paper, we have developed a radiation-hardened non-volatile lookup table (LUT) circuit utilizing spin Hall effect (SHE)-magnetic random access memory (MRAM) devices. The design is motivated by modeling the effect of radiation particles striking hybrid complementary metal oxide semiconductor/spin based circuits, and the resistive behavior of SHE-MRAM devices via established and precise physics equations. The models developed are leveraged in the SPICE circuit simulator to verify the functionality of the proposed design. The proposed hardening technique is based on using feedback transistors, as well as increasing the radiation capacity of the sensitive nodes. Simulation results show that our proposed LUT circuit can achieve multiple node upset (MNU) tolerance with more than 38% and 60% power-delay product improvement as well as 26% and 50% reduction in device count compared to the previous energy-efficient radiation-hardened LUT designs. Finally, we have performed a process variation analysis showing that the MNU immunity of our proposed circuit is realized at the cost of increased susceptibility to transistor and MRAM variations compared to an unprotected LUT design.
NASA Astrophysics Data System (ADS)
Bergner, F.; Pareige, C.; Hernández-Mayoral, M.; Malerba, L.; Heintze, C.
2014-05-01
An attempt is made to quantify the contributions of different types of defect-solute clusters to the total irradiation-induced yield stress increase in neutron-irradiated (300 °C, 0.6 dpa), industrial-purity Fe-Cr model alloys (target Cr contents of 2.5, 5, 9 and 12 at.% Cr). Former work based on the application of transmission electron microscopy, atom probe tomography, and small-angle neutron scattering revealed the formation of dislocation loops, NiSiPCr-enriched clusters and α′-phase particles, which act as obstacles to dislocation glide. The values of the dimensionless obstacle strength are estimated in the framework of a three-feature dispersed-barrier hardening model. Special attention is paid to the effect of measuring errors, experimental details and model details on the estimates. The three families of obstacles and the hardening model are well capable of reproducing the observed yield stress increase as a function of Cr content, suggesting that the nanostructural features identified experimentally are the main, if not the only, causes of irradiation hardening in these model alloys.
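As a rough illustration of the three-feature superposition, the sketch below evaluates the standard dispersed-barrier hardening expression, Δσ = α M μ b √(N d), for three obstacle families and combines them by root-sum-square superposition. The obstacle strengths, number densities and sizes are placeholders, not the values reported in the paper, and linear superposition is an equally common alternative to the root-sum-square rule used here.

```python
import numpy as np

M, mu, b = 3.06, 83.0e9, 0.248e-9   # Taylor factor, shear modulus (Pa), Burgers vector (m) for bcc Fe

# Illustrative obstacle families (values are placeholders, not measured data).
features = {
    "dislocation loops": dict(N=3.0e22, d=3.0e-9, alpha=0.30),
    "NiSiPCr clusters":  dict(N=5.0e23, d=2.0e-9, alpha=0.10),
    "alpha-prime":       dict(N=1.0e24, d=2.0e-9, alpha=0.05),
}

def dbh_increment(N, d, alpha):
    """Dispersed-barrier hardening increment: alpha * M * mu * b * sqrt(N * d)."""
    return alpha * M * mu * b * np.sqrt(N * d)

contributions = {name: dbh_increment(**p) for name, p in features.items()}
total = np.sqrt(sum(v**2 for v in contributions.values()))   # root-sum-square superposition
for name, value in contributions.items():
    print(f"{name:18s} {value / 1e6:6.1f} MPa")
print(f"{'total (RSS)':18s} {total / 1e6:6.1f} MPa")
```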
Sensitivity of inelastic response to numerical integration of strain energy. [for cantilever beam
NASA Technical Reports Server (NTRS)
Kamat, M. P.
1976-01-01
The exact solution to the quasi-static, inelastic response of a cantilever beam of rectangular cross section subjected to a bending moment at the tip is obtained. The material of the beam is assumed to be linearly elastic-linearly strain-hardening. This solution is then compared with three different numerical solutions of the same problem obtained by minimizing the total potential energy using Gaussian quadratures of two different orders and a Newton-Cotes scheme for integrating the strain energy of deformation. Significant differences between the exact dissipative strain energy and its numerical counterpart are emphasized. The consequence of this on the nonlinear transient responses of a beam with solid cross section and that of a thin-walled beam on elastic supports under impulsive loads are examined.
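The sensitivity described can be reproduced with a one-dimensional toy calculation: the strain-energy density of a bilinear (linearly elastic-linearly hardening) material has a kink at the yield strain, and low-order Gauss quadrature over a partially yielded cross section misses part of the dissipative energy. The material constants, section size and curvature below are arbitrary; this is not a reproduction of the paper's cantilever solution.

```python
import numpy as np

E, Et, eps_y = 200.0e3, 10.0e3, 0.002      # MPa: elastic modulus, tangent modulus; yield strain

def strain_energy_density(eps):
    """Exact strain-energy density of a linearly elastic-linearly hardening material."""
    eps = np.abs(eps)
    elastic = 0.5 * E * eps**2
    plastic = 0.5 * E * eps_y**2 + E * eps_y * (eps - eps_y) + 0.5 * Et * (eps - eps_y)**2
    return np.where(eps <= eps_y, elastic, plastic)

# Strain energy per unit length of a rectangular section (width b, depth h) at
# curvature kappa, with eps(y) = kappa * y; reference value from a fine midpoint sum.
b, h, kappa = 10.0, 20.0, 4.0e-4
y_mid = (np.arange(200000) + 0.5) / 200000 * h - h / 2.0
U_ref = b * np.sum(strain_energy_density(kappa * y_mid)) * (h / 200000)

for n in (2, 3, 5):                         # Gauss-Legendre quadrature of increasing order
    xg, wg = np.polynomial.legendre.leggauss(n)
    y = 0.5 * h * xg                        # map [-1, 1] onto [-h/2, h/2]
    U_n = b * 0.5 * h * np.sum(wg * strain_energy_density(kappa * y))
    print(f"{n}-point Gauss: {U_n:9.4f}   reference: {U_ref:9.4f}   error: {U_n - U_ref:+.4f}")
```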
NASA Technical Reports Server (NTRS)
O'Brien, T. Kevin; Czabaj, Michael W.; Hinkley, Jeffrey A.; Tsampas, Spiros; Greenhalgh, Emile S.; McCombe, Gregory; Bond, Ian P.; Trask, Richard
2013-01-01
A study was undertaken to develop a prototype method for adding through-thickness hollow glass tubes infused with uncured resin and hardener in a carbon Z-pin through-thickness reinforcement field embedded in a composite laminate. Two types of tube insertion techniques were attempted in an effort to ensure the glass tubes survived the panel manufacturing process. A self-healing resin was chosen with a very low viscosity, two component, liquid epoxy resin system designed to be mixed at a 2-to-1 ratio of epoxy to hardener. IM7/8552 carbon epoxy double cantilever beam (DCB) specimens were cut from the hybrid Z-pin and glass tube reinforced panels and tested. In-situ injection of resin and hardener directly into glass tubes, in a staggered pattern to allow for 2-to-1 ratio mixing, resulted in partial healing of the fracture plane, but only if the injection was performed while the specimen was held at maximum load after initial fracture. Hence, there is some potential for healing delamination via resin and hardener delivered through a network of through-thickness glass tubes, but only if the tubes are connected to a reservoir where additional material may be injected as needed.
NASA Astrophysics Data System (ADS)
Khan, F.; Enzmann, F.; Kersten, M.
2015-12-01
In X-ray computed microtomography (μXCT), image processing is the most important operation prior to image analysis. Such processing mainly involves artefact reduction and image segmentation. We propose a new two-stage post-reconstruction procedure for an image of a geological rock core obtained by polychromatic cone-beam μXCT technology. In the first stage, the beam hardening (BH) is removed by applying a best-fit quadratic surface algorithm to a given image data set (reconstructed slice), which minimizes the BH offsets of the attenuation data points from that surface. The final BH-corrected image is extracted from the residual data, or the difference between the surface elevation values and the original grey-scale values. For the second stage, we propose using a least squares support vector machine (a non-linear classifier algorithm) to segment the BH-corrected data as a pixel-based multi-classification task. A combination of the two approaches was used to classify a complex multi-mineral rock sample. The Matlab code for this approach is provided in the Appendix. A minor drawback is that the proposed segmentation algorithm may become computationally demanding in the case of a high-dimensional training data set.
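The first stage can be pictured with a short least-squares sketch: fit a quadratic surface f(x, y) = a + bx + cy + dx² + exy + fy² to a reconstructed slice and keep the residual as the BH-corrected image. This is a minimal stand-in for the paper's algorithm, not the Matlab code from its Appendix; the optional mask, the choice to restore the mean grey level, and the function name are assumptions.

```python
import numpy as np

def remove_beam_hardening(slice2d, mask=None):
    """Fit a best-fit quadratic surface to a reconstructed slice and subtract it."""
    ny, nx = slice2d.shape
    y, x = np.mgrid[0:ny, 0:nx].astype(float)
    if mask is None:
        mask = np.ones_like(slice2d, dtype=bool)        # fit to every voxel by default
    # design matrix for f(x, y) = a + b*x + c*y + d*x^2 + e*x*y + f*y^2
    A = np.column_stack([np.ones(mask.sum()), x[mask], y[mask],
                         x[mask]**2, (x * y)[mask], y[mask]**2])
    coeff, *_ = np.linalg.lstsq(A, slice2d[mask].astype(float), rcond=None)
    surface = (coeff[0] + coeff[1] * x + coeff[2] * y
               + coeff[3] * x**2 + coeff[4] * x * y + coeff[5] * y**2)
    # residual = data minus surface; adding the mean back keeps the grey-level scale
    return slice2d - surface + surface.mean()
```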
Adaptive control for accelerators
Eaton, Lawrie E.; Jachim, Stephen P.; Natter, Eckard F.
1991-01-01
An adaptive feedforward control loop is provided to stabilize accelerator beam loading of the radio frequency field in an accelerator cavity during successive pulses of the beam into the cavity. A digital signal processor enables an adaptive algorithm to generate a feedforward error correcting signal functionally determined by the feedback error obtained by a beam pulse loading the cavity after the previous correcting signal was applied to the cavity. Each cavity feedforward correcting signal is successively stored in the digital processor and modified by the feedback error resulting from its application to generate the next feedforward error correcting signal. A feedforward error correcting signal is generated by the digital processor in advance of the beam pulse to enable a composite correcting signal and the beam pulse to arrive concurrently at the cavity.
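The pulse-to-pulse adaptation can be illustrated with a toy iterative-learning loop: the feedforward waveform stored for the next pulse is the previous waveform plus a gain times the feedback error measured on the current pulse. The plant model, gain and disturbance below are invented for illustration and are not the DSP implementation described in the abstract.

```python
import numpy as np

def cavity(drive, beam_loading):
    """Toy cavity-field response: the field follows the drive minus the beam loading."""
    return drive - beam_loading

n = 256
setpoint = np.ones(n)                                    # desired field during the pulse
beam_loading = 0.3 * np.sin(np.linspace(0.0, np.pi, n))  # repeatable pulse-to-pulse disturbance
u_ff = np.zeros(n)                                       # stored feedforward waveform
gain = 0.7                                               # adaptation gain (< 1 for a stable update)

for pulse in range(8):
    field = cavity(setpoint + u_ff, beam_loading)
    error = setpoint - field                             # feedback error from this pulse
    u_ff = u_ff + gain * error                           # correction applied ahead of the next pulse
    print(f"pulse {pulse}: max |error| = {np.abs(error).max():.4f}")
```

With this update the residual error shrinks geometrically from pulse to pulse, which is the behaviour the stored-and-modified feedforward signal is meant to achieve.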
Excimer laser beam delivery systems for medical applications
NASA Astrophysics Data System (ADS)
Kubo, Uichi; Hashishin, Yuichi; Okada, Kazuyuki; Tanaka, Hiroyuki
1993-05-01
We have been conducting basic experiments on UV laser beam and biotissue interaction with both KrF and XeCl lasers. However, conventional optical fiber cannot be used to deliver high-power UV beams, so we have been investigating UV power beam delivery systems. These experiments were carried out with element-doped quartz fibers and a hollow tube; the doped elements are OH ion, chlorine and fluorine. In our latest work, we have studied ArF excimer laser and biotissue interactions, and beam delivery experiments. From our experimental results, we found that the ArF laser beam has a high incision ability for hard biotissue. For example, in the case of cow's bone incision, the incision depth with the ArF laser was ca. 15 times that of the KrF laser. Therefore, the ArF laser would be expected to suit hard-biotissue therapy as a non-thermal method. However, its beam delivery remains difficult at this time. We will develop ArF laser beam delivery systems.
Wu, Rongli; Watanabe, Yoshiyuki; Satoh, Kazuhiko; Liao, Yen-Peng; Takahashi, Hiroto; Tanaka, Hisashi; Tomiyama, Noriyuki
2018-05-21
The aim of this study was to quantitatively compare the reduction in beam hardening artifact (BHA) and variance in computed tomography (CT) numbers of virtual monochromatic energy (VME) images obtained with 3 dual-energy computed tomography (DECT) systems at a given radiation dose. Five different iodine concentrations were scanned using dual-energy and single-energy (120 kVp) modes. The BHA and CT number variance were evaluated. For higher iodine concentrations, 40 and 80 mgI/mL, BHA on VME imaging was significantly decreased when the energy was higher than 50 keV (P = 0.003) and 60 keV (P < 0.001) for GE, higher than 80 keV (P < 0.001) and 70 keV (P = 0.002) for Siemens, and higher than 40 keV (P < 0.001) and 60 keV (P < 0.001) for Toshiba, compared with single-energy CT imaging. Virtual monochromatic energy imaging can decrease BHA and improve CT number accuracy in different dual-energy computed tomography systems, depending on energy levels and iodine concentrations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halls, B. R.; Roy, S.; Gord, J. R.
Flash x-ray radiography is used to capture quantitative, two-dimensional, line-of-sight averaged, single-shot liquid distribution measurements in impinging jet sprays. The accuracy of utilizing broadband x-ray radiation from compact flash tube sources is investigated for a range of conditions by comparing the data with radiographic high-speed measurements from a narrowband, high-intensity synchrotron x-ray facility at the Advanced Photon Source (APS) of Argonne National Laboratory. The path length of the liquid jets is varied to evaluate the effects of energy-dependent x-ray attenuation, also known as spectral beam hardening. The spatial liquid distributions from flash x-ray and synchrotron-based radiography are compared, along with spectral characteristics using Taylor's hypothesis. The results indicate that quantitative, single-shot imaging of liquid distributions can be achieved using broadband x-ray sources with nanosecond temporal resolution. Practical considerations for optimizing the imaging system performance are discussed, including the coupled effects of x-ray bandwidth, contrast, sensitivity, spatial resolution, temporal resolution, and spectral beam hardening.
Hercules X-1: Spectral Variability of an X-Ray Pulsar in a Stellar Binary System. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Pravdo, S. H.
1976-01-01
A cosmic X-ray spectroscopy experiment onboard the Orbiting Solar Observatory 8 (OSO-8) observed Her X-1 continuously for approximately 8 days. Spectral-temporal correlations of the X-ray emission were obtained. The major results concern observations of: (1) iron band emission, (2) spectral hardening (increase in effective x-ray temperature) within the X-ray pulse, and (3) a transition from an X-ray low state to a high state. The spectrum obtained prior to the high state can be interpreted as reflected emission from a hot coronal gas surrounding an accretion disk, which itself shields the primary X-ray source from the line of sight during the low state. The spectral hardening within the X-ray pulse is indicative of the beaming mechanism at the neutron star surface. The hardest spectrum by pulse phase was identified with the line of sight close to the Her X-1 magnetic dipole axis, and the X-ray pencil beam became harder with decreasing angle between the line of sight and the dipole axis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, X; Petrongolo, M; Wang, T
Purpose: A general problem of dual-energy CT (DECT) is that the decomposition is sensitive to noise in the two sets of dual-energy projection data, resulting in severely degraded quality of the decomposed images. We have previously proposed an iterative denoising method for DECT. Using a linear decomposition function, that method does not gain the full benefit of DECT for beam-hardening correction. In this work, we expand the framework of our iterative method to include non-linear decomposition models for noise suppression in DECT. Methods: We first obtain decomposed projections, which are free of beam-hardening artifacts, using a lookup table pre-measured on a calibration phantom. First-pass material images with high noise are reconstructed from the decomposed projections using standard filtered-backprojection reconstruction. Noise in the decomposed images is then suppressed by an iterative method, which is formulated as a least-squares estimation with smoothness regularization. Following the design principles of a best linear unbiased estimator, we include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-squares term. Analytical formulae are derived to compute the variance-covariance matrix from the measured decomposition lookup table. Results: We have evaluated the proposed method via phantom studies. Using non-linear decomposition, our method effectively suppresses the streaking artifacts of beam hardening and obtains more uniform images than our previous approach based on a linear model. The proposed method reduces the average noise standard deviation of the two basis materials by one order of magnitude without sacrificing spatial resolution. Conclusion: We propose a general framework of iterative denoising for material decomposition of DECT. Preliminary phantom studies have shown that the proposed method improves image uniformity and reduces noise level without resolution loss. In the future, we will perform more phantom studies to further validate the performance of the proposed method. This work is supported by a Varian MRA grant.
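The penalized least-squares step can be illustrated in one dimension. The sketch below is not the authors' formulation: it uses a diagonal weight (the inverse of a per-sample noise variance) instead of the full variance-covariance matrix, a first-difference smoothness penalty, and an arbitrary regularization strength, and it solves the resulting quadratic problem in closed form rather than iteratively.

```python
import numpy as np

def pwls_denoise(x_noisy, variance, beta=50.0):
    """Minimize (x - x_noisy)^T W (x - x_noisy) + beta * ||D x||^2 with W = diag(1/variance)."""
    n = x_noisy.size
    W = np.diag(1.0 / variance)
    D = np.diff(np.eye(n), axis=0)                 # (n-1) x n first-difference operator
    return np.linalg.solve(W + beta * D.T @ D, W @ x_noisy)

rng = np.random.default_rng(1)
truth = np.concatenate([np.zeros(40), np.ones(40), 0.3 * np.ones(40)])   # toy material profile
variance = np.where(truth > 0.5, 0.05, 0.01)       # noise variance differs by region
noisy = truth + rng.normal(0.0, np.sqrt(variance))
denoised = pwls_denoise(noisy, variance)
print(f"noise std before: {np.std(noisy - truth):.3f}   after: {np.std(denoised - truth):.3f}")
```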
TU-E-217BCD-04: Spectral Breast CT: Effect of Adaptive Filtration on CT Numbers, CT Noise, and CNR.
Silkwood, J; Matthews, K; Shikhaliev, P
2012-06-01
Photon counting spectral breast CT is feasible in part due to the use of an adaptive filter. An adaptive filter provides a flat x-ray intensity profile and a constant x-ray energy spectrum across the detector surface, decreases the required detector count rate, and eliminates beam hardening artifacts. However, the altered x-ray exposure profiles at the breast and detector surface may influence the distribution of CT noise, CT numbers, and contrast-to-noise ratio (CNR) across the CT images. The purpose of this work was to investigate these effects. Images of a CT phantom with and without the adaptive filter were simulated at 60 kVp, 90 kVp, and 120 kVp tube voltages and 660 mR total skin exposure. The water-equivalent CT phantom had a 14 cm diameter, with contrast elements representing adipose tissue and 2.5 mg/cc iodine contrast located at 1 cm, 3.5 cm, and 6 cm from the center of the phantom. The CT numbers, CT noise, and CNR were measured at multiple locations for several filter/exposure combinations: (1) without the adaptive filter for 660 mR skin exposure; (2) with the adaptive filter for 660 mR skin exposure along the central axis (mean skin exposure across the breast was <660 mR); and (3) with the adaptive filter for scaled exposure (mean skin exposure was 660 mR). Beam hardening (cupping) artifacts had a magnitude of 47 HU without the adaptive filter but were eliminated with the adaptive filter. CNR of the contrast elements was comparable for (1) and (2) over the central parts but was higher by 20-30% for (1) near the edge of the phantom. CNR was higher by 20-30% in (3) compared to (2) over the central parts and comparable near the edges. The adaptive filter provided a uniform distribution of CT noise, CNR, and CT numbers across the CT images, comparable or better CNR with no dose penalty to the breast, and eliminated beam hardening artifacts. © 2012 American Association of Physicists in Medicine.
A simple and robust method for artifacts correction on X-ray microtomography images
NASA Astrophysics Data System (ADS)
Timofey, Sizonenko; Marina, Karsanina; Dina, Gilyazetdinova; Irina, Bayuk; Kirill, Gerke
2017-04-01
X-ray microtomography images of rock material often suffer from several kinds of distortion due to different causes such as X-ray attenuation, beam hardening, and irregular distribution of liquid/solid phases. Additional distortions can arise from further image processing and from stitching together images from different measurements. Beam hardening is a well-known and well-studied distortion which is relatively easy to describe, fit and correct using a number of equations. This is not the case, however, for other grey-scale intensity distortions: shading caused by an irregular distribution of liquid phases, an incorrect choice of scanner operating parameters, numerous artefacts from mathematical reconstruction from projections, and stitching of separate scans cannot be described by a single mathematical model. To correct grey-scale intensities in large 3D images we developed a software package. The traditional method for removing beam hardening [1] has been modified in order to find the center of the distortion. The main contribution of this work is the development of a method for arbitrary image correction. This method is based on fitting the distortion with Bezier curves using the image histogram. The distortion along the image is represented by a number of Bezier curves and one base line that characterizes the natural distribution of grey values along the image. All of these curves are set manually by the operator. We have tested our approaches on different X-ray microtomography images of porous media. Arbitrary correction removes all principal distortions. After correction the images were binarized and pore networks were subsequently extracted. An even distribution of pore-network elements along the image was the criterion used to verify the proposed technique for correcting grey-scale intensities. [1] Iassonov, P. and Tuller, M., 2010. Application of segmentation for correction of intensity bias in X-ray computed tomography images. Vadose Zone Journal, 9(1), pp.187-191.
NASA Astrophysics Data System (ADS)
Louis, Alfred K.
2016-11-01
We derive unified inversion formulae for the cone beam transform similar to the Radon transform. Reinterpreting Grangeat’s formula we find a relation between the Radon transform of the gradient of the searched-for function and a quantity computable from cone beam data. This gives a uniqueness result for the cone beam transform of compactly supported functions under much weaker assumptions than the Tuy-Kirillov condition. Furthermore this relation leads to an exact formula for the direct calculation of derivatives of the density distribution; but here, similar to the classical Radon transform, complete Radon data are needed, hence the Tuy-Kirillov condition has to be imposed. Numerical experiments reported in Hahn B N et al (2013 Meas. Sci. Technol. 24 125601) indicate that these calculations are less corrupted by beam-hardening noise. Finally, we present flat detector versions for these results, which are mathematically less attractive but important for applications.
First-order approximation error analysis of Risley-prism-based beam directing system.
Zhao, Yanyan; Yuan, Yan
2014-12-01
To improve the performance of a Risley-prism system for optical detection and measuring applications, it is necessary to be able to determine the direction of the outgoing beam with high accuracy. In previous works, error sources and their impact on the performance of the Risley-prism system have been analyzed, but their numerical approximation accuracy was not high. Besides, pointing error analysis of the Risley-prism system has provided results for the case when the component errors, prism orientation errors, and assembly errors are certain. In this work, the prototype of a Risley-prism system was designed. The first-order approximations of the error analysis were derived and compared with the exact results. The directing errors of a Risley-prism system associated with wedge-angle errors, prism mounting errors, and bearing assembly errors were analyzed based on the exact formula and the first-order approximation. The comparisons indicated that our first-order approximation is accurate. In addition, the combined errors produced by the wedge-angle errors and mounting errors of the two prisms together were derived and in both cases were proved to be the sum of errors caused by the first and the second prism separately. Based on these results, the system error of our prototype was estimated. The derived formulas can be implemented to evaluate beam directing errors of any Risley-prism beam directing system with a similar configuration.
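A first-order picture of such a system is the thin-prism model: each prism deflects the beam by (n − 1)α in the direction set by its rotation angle, and the two deflections add as vectors, so to first order the pointing error from a wedge-angle error on one prism is independent of the other prism. The toy sketch below uses that model with invented values; it is not the prototype's exact ray-trace formulas.

```python
import numpy as np

n_glass = 1.517                                   # assumed refractive index

def pointing(alpha1, alpha2, th1, th2):
    """Thin-prism (first-order) pointing vector: deflections (n-1)*alpha add as vectors."""
    d1 = (n_glass - 1.0) * alpha1 * np.array([np.cos(th1), np.sin(th1)])
    d2 = (n_glass - 1.0) * alpha2 * np.array([np.cos(th2), np.sin(th2)])
    return d1 + d2                                # small-angle deviation (radians)

alpha = np.deg2rad(10.0)                          # nominal wedge angle of both prisms
th1, th2 = np.deg2rad(30.0), np.deg2rad(200.0)    # prism rotation angles
nominal = pointing(alpha, alpha, th1, th2)

d_alpha = np.deg2rad(0.01)                        # wedge-angle error on prism 1 only
error_combined = pointing(alpha + d_alpha, alpha, th1, th2) - nominal
error_prism1 = pointing(d_alpha, 0.0, th1, th2)   # error attributed to prism 1 alone
print(error_combined, error_prism1)               # identical to first order: errors simply add
```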
NASA Astrophysics Data System (ADS)
Kunieda, Minoru; Shimizu, Kosuke; Eguchi, Teruyuki; Ueda, Naoshi; Nakamura, Hikaru
This paper presents the fundamental properties of Ultra High Performance-Strain Hardening Cementitious Composites (UHP-SHCC), which were developed for repair applications. In particular, mechanical properties such as tensile response, shrinkage and bond strength were investigated experimentally. The protective performance of the material, such as air permeability, water permeability and penetration of chloride ions, was also confirmed in comparison with ordinary concrete. This paper also introduces the use of the material in the repair of concrete structures. Laboratory tests concerning deterioration induced by corrosion were conducted. The UHP-SHCC covering the RC beam resisted not only crack opening along the rebar due to corrosion but also crack opening due to loading tests.
Interceptive Beam Diagnostics - Signal Creation and Materials Interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plum, Michael; Spallation Neutron Source, Oak Ridge National Laboratory, P.O. Box 2008, Oak Ridge, TN
2004-11-10
The focus of this tutorial will be on interceptive beam diagnostics such as wire scanners, screens, and harps. We will start with an overview of the various ways beams interact with materials to create signals useful for beam diagnostics systems. We will then discuss the errors in a harp or wire scanner profile measurement caused by errors in wire position, number of samples, and signal errors. Finally we will apply our results to two design examples: the SNS wire scanner system and the SNS target harp.
Sensitivity analysis of periodic errors in heterodyne interferometry
NASA Astrophysics Data System (ADS)
Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony
2011-03-01
Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors.
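The variance-based global sensitivity analysis described above can be reproduced in outline with a pick-and-freeze (Saltelli-type) Monte Carlo estimator of first-order Sobol' indices. The following Python sketch is illustrative only: the periodic-error model, parameter names and misalignment ranges are placeholder assumptions, not the model used in the study.

    import numpy as np

    def sobol_first_order(model, bounds, n=100_000, seed=0):
        # Saltelli/Jansen pick-and-freeze estimator of first-order Sobol' indices
        # for a model with independent, uniformly distributed inputs.
        rng = np.random.default_rng(seed)
        k = len(bounds)
        lo = np.array([b[0] for b in bounds])
        hi = np.array([b[1] for b in bounds])
        A = lo + (hi - lo) * rng.random((n, k))
        B = lo + (hi - lo) * rng.random((n, k))
        fA, fB = model(A), model(B)
        total_var = np.var(np.concatenate([fA, fB]))
        S = np.empty(k)
        for i in range(k):
            ABi = A.copy()
            ABi[:, i] = B[:, i]                  # resample only the i-th input
            S[i] = np.mean(fB * (model(ABi) - fA)) / total_var
        return S

    # Placeholder stand-in for a first-order periodic-error amplitude model
    # (beam-splitter misalignment, polarizer misalignment, frequency
    # non-orthogonality, ellipticity); not the authors' model.
    def periodic_error_amplitude(x):
        alpha, beta, dtheta, eps = x.T           # all in radians
        return np.abs(np.sin(2 * alpha) * np.cos(beta) * dtheta + 0.5 * eps**2)

    bounds = [(-0.02, 0.02)] * 4                 # assumed misalignment ranges
    print(sobol_first_order(periodic_error_amplitude, bounds))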
Absolute calibration of optical flats
Sommargren, Gary E.
2005-04-05
The invention uses the phase shifting diffraction interferometer (PSDI) to provide a true point-by-point measurement of absolute flatness over the surface of optical flats. Beams exiting the fiber optics in a PSDI have perfect spherical wavefronts. The measurement beam is reflected from the optical flat and passed through an auxiliary optic to then be combined with the reference beam on a CCD. The combined beams include phase errors due to both the optic under test and the auxiliary optic. Standard phase extraction algorithms are used to calculate this combined phase error. The optical flat is then removed from the system and the measurement fiber is moved to recombine the two beams. The newly combined beams include only the phase errors due to the auxiliary optic. When the second phase measurement is subtracted from the first phase measurement, the absolute phase error of the optical flat is obtained.
NASA Technical Reports Server (NTRS)
Wang, Qinglin; Gogineni, S. P.
1991-01-01
A numerical procedure is presented for estimating the true scattering coefficient, σ0, from measurements made with wide-beam antennas. The use of wide-beam antennas results in an inaccurate estimate of σ0 if the narrow-beam approximation is used in the retrieval process. To reduce this error, a correction procedure is proposed that estimates the error resulting from the narrow-beam approximation and uses it to obtain a more accurate estimate of σ0. An exponential model is assumed for the variation of σ0 with incidence angle, and the model parameters are estimated from measured data. Based on this model and knowledge of the antenna pattern, the procedure calculates the error due to the narrow-beam approximation. The procedure is shown to provide a significant improvement in the estimation of σ0 obtained with wide-beam antennas and to be insensitive to the assumed σ0 model.
NASA Astrophysics Data System (ADS)
Erofeev, M. V.; Shulepov, M. A.; Ivanov, Yu. F.; Oskomov, K. V.; Tarasenko, V. F.
2016-03-01
Effect of volume discharge plasma initiated by an avalanche electron beam on the composition, structure, and properties of the surface steel layer is investigated. Voltage pulses with incident wave amplitude up to 30 kV, full width at half maximum of about 4 ns, and wave front of about 2.5 ns were applied to the gap with an inhomogeneous electric field. Changes indicating the hardening effect of the volume discharge initiated by an avalanche electron beam are revealed in St3-grade steel specimens treated by the discharge of this type.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yadava, G; Imai, Y; Hsieh, J
2014-06-15
Purpose: The quantitative accuracy of iodine Hounsfield units (HU) in conventional single-kVp scanning is susceptible to the beam-hardening effect. Dual-energy CT has unique capabilities for quantification using monochromatic CT images, but this scanning mode requires the availability of a state-of-the-art CT scanner and is therefore limited in routine clinical practice. The purpose of this work was to develop a beam-hardening correction (BHC) for single-kVp CT that can linearize iodine projections at any nominal energy, to apply this approach to study the iodine response with respect to keV, and to compare it with dual-energy based monochromatic images obtained from material decomposition using 80 kVp and 140 kVp. Methods: Tissue characterization phantoms (Gammex Inc.), containing solid-iodine inserts of different concentrations, were scanned using a GE multi-slice CT scanner at 80, 100, 120, and 140 kVp. A model-based BHC algorithm was developed in which iodine was estimated using re-projection of the image volume and corrected through an iterative process. In the correction, the re-projected iodine was linearized using a polynomial mapping between monochromatic path lengths at various nominal energies (40 to 140 keV) and physically modeled polychromatic path lengths. The beam-hardening-corrected 80 kVp and 140 kVp images (linearized approximately at the effective energy of the beam) were used for dual-energy material decomposition in a water-iodine basis pair, followed by generation of monochromatic images. Iodine HU and noise were characterized in the images obtained from single-kVp with BHC at various nominal keV and in the corresponding dual-energy monochromatic images. Results: The iodine HU vs. keV response from single-kVp with BHC and from dual-energy monochromatic images were found to be very similar, indicating that single-kVp data may be used to create material-specific monochromatic equivalents using model-based projection linearization. Conclusion: This approach may enable quantification of iodine contrast enhancement and a potential reduction in injected contrast without dual-energy scanning. However, in general, dual-energy scanning has unique value in material characterization and quantification, and its value cannot be discounted. GE Healthcare Employee.
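The projection-linearization step described above, mapping physically modeled polychromatic path lengths onto monochromatic path lengths with a polynomial, can be sketched as follows. The spectrum weights, iodine attenuation values and energy grid below are illustrative placeholders, not the scanner model used in the study.

    import numpy as np

    # Hypothetical beam spectrum and iodine attenuation (illustrative values only).
    energies  = np.array([40., 60., 80., 100., 120., 140.])     # keV
    weights   = np.array([0.05, 0.25, 0.35, 0.20, 0.10, 0.05])  # normalized fluence
    mu_iodine = np.array([5.0, 2.0, 1.1, 0.7, 0.5, 0.4])        # 1/cm, illustrative

    def polychromatic_projection(t):
        # -ln of the detected fraction for iodine thickness t (cm) under the polychromatic beam.
        t = np.atleast_1d(t)
        frac = (weights * np.exp(-np.outer(t, mu_iodine))).sum(axis=1)
        return -np.log(frac)

    def linearization_polynomial(energy_index, thicknesses, order=3):
        # Fit a polynomial mapping polychromatic path lengths to monochromatic
        # path lengths at one nominal energy.
        p_poly = polychromatic_projection(thicknesses)
        p_mono = mu_iodine[energy_index] * thicknesses           # ideal line integral
        return np.polyfit(p_poly, p_mono, order)

    t = np.linspace(0.0, 2.0, 50)                                # cm of iodine
    coeffs = linearization_polynomial(energy_index=2, thicknesses=t)   # ~80 keV bin
    corrected = np.polyval(coeffs, polychromatic_projection(t))        # linearized projections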
Elbakri, Idris A; Fessler, Jeffrey A
2003-08-07
This paper describes a statistical image reconstruction method for x-ray CT that is based on a physical model that accounts for the polyenergetic x-ray source spectrum and the measurement nonlinearities caused by energy-dependent attenuation. Unlike our earlier work, the proposed algorithm does not require pre-segmentation of the object into the various tissue classes (e.g., bone and soft tissue) and allows mixed pixels. The attenuation coefficient of each voxel is modelled as the product of its unknown density and a weighted sum of energy-dependent mass attenuation coefficients. We formulate a penalized-likelihood function for this polyenergetic model and develop an iterative algorithm for estimating the unknown density of each voxel. Applying this method to simulated x-ray CT measurements of objects containing both bone and soft tissue yields images with significantly reduced beam hardening artefacts relative to conventional beam hardening correction methods. We also apply the method to real data acquired from a phantom containing various concentrations of potassium phosphate solution. The algorithm reconstructs an image with accurate density values for the different concentrations, demonstrating its potential for quantitative CT applications.
Tsai, I-Chen; Lin, Yung-Kai; Chang, Yen; Fu, Yun-Ching; Wang, Chung-Chi; Hsieh, Shih-Rong; Wei, Hao-Ji; Tsai, Hung-Wen; Jan, Sheng-Ling; Wang, Kuo-Yang; Chen, Min-Chi; Chen, Clayton Chi-Chang
2009-04-01
The purpose was to compare the findings of multi-detector computed tomography (MDCT) in prosthetic valve disorders using the operative findings as a gold standard. In a 3-year period, we prospectively enrolled 25 patients with 31 prosthetic heart valves. MDCT and transthoracic echocardiography (TTE) were performed to evaluate pannus formation, prosthetic valve dysfunction, suture loosening (paravalvular leak) and pseudoaneurysm formation. Patients indicated for surgery received an operation within 1 week. The MDCT findings were compared with the operative findings. One patient with a Björk-Shiley valve could not be evaluated by MDCT due to a severe beam-hardening artifact; thus, the exclusion rate for MDCT was 3.2% (1/31). Prosthetic valve disorders were suspected in 12 patients by either MDCT or TTE. Six patients received an operation that included three redo aortic valve replacements, two redo mitral replacements and one Amplatzer ductal occluder occlusion of a mitral paravalvular leak. The concordance between the MDCT findings and the surgical findings for diagnosing and localizing prosthetic valve disorders was 100%. Except for images impaired by severe beam-hardening artifacts, MDCT provides excellent delineation of prosthetic valve disorders.
Technical Note: On maximizing Cherenkov emissions from medical linear accelerators.
Shrock, Zachary; Yoon, Suk W; Gunasingha, Rathnayaka; Oldham, Mark; Adamson, Justus
2018-04-19
Cherenkov light during MV radiotherapy has recently found imaging and therapeutic applications but is challenged by relatively low fluence. Our purpose is to investigate the feasibility of increasing Cherenkov light production during MV radiotherapy by increasing photon energy and applying specialized beam-hardening filtration. GAMOS 5.0.0, a GEANT4-based framework for Monte Carlo simulations, was used to model standard clinical linear accelerator primary photon beams. The photon source was incident upon a 17.8 cm³ cubic water phantom with a 94 cm source-to-surface distance. Dose and Cherenkov production were determined at depths of 3-9 cm. Filtration was simulated 15 cm below the photon beam source. Filter materials included aluminum, iron, and copper with thicknesses of 2-20 cm. The number of histories used depended on the level of attenuation from the filter, ranging from 100 million to 2 billion. Comparing average dose per history also allowed for evaluation of dose-rate reduction for different filters. Overall, increasing photon beam energy is more effective at improving Cherenkov production per unit dose than is filtration, with a standard 18 MV beam yielding 3.3-4.0× more photons than 6 MV. Introducing an aluminum filter into an unfiltered 2400 cGy/min 10 MV beam increases the Cherenkov production by 1.6-1.7×, while maintaining a clinical dose rate of 300 cGy/min, compared to increases of ~1.5× for iron and copper. Aluminum was also more effective than the standard flattening filter, with the increase over the unfiltered beam being 1.4-1.5× (maintaining a 600 cGy/min dose rate) vs 1.3-1.4× for the standard flattening filter. Applying a 10 cm aluminum filter to a standard 18 MV photon beam increased the Cherenkov production per unit dose to 3.9-4.3× beyond that of 6 MV (vs 3.3-4.0× for 18 MV with no aluminum filter). Through a combination of increasing photon energy and applying specialized beam-hardening filtration, the amount of Cherenkov photons per unit radiotherapy dose can be increased substantially. © 2018 American Association of Physicists in Medicine.
Chen, Dongmei; Zhu, Shouping; Cao, Xu; Zhao, Fengjun; Liang, Jimin
2015-01-01
X-ray luminescence computed tomography (XLCT) has become a promising imaging technology for biological applications based on phosphor nanoparticles. There are mainly three kinds of XLCT imaging systems: pencil beam XLCT, narrow beam XLCT and cone beam XLCT. Narrow beam XLCT can be regarded as a balance between the pencil-beam and cone-beam modes in terms of imaging efficiency and image quality. The collimated X-ray beams are assumed to be parallel in traditional narrow beam XLCT. However, we observe that in our prototype narrow beam XLCT the cone beam X-rays are collimated into beams with fan-shaped broadening rather than parallel beams. Hence we incorporate the distribution of the X-ray beams in the physical model and collect the optical data from only two perpendicular directions to further reduce the scanning time. We also propose a depth-related adaptive regularized split Bregman (DARSB) method for reconstruction. The simulation experiments show that the proposed physical model and method achieve better results in location error, Dice coefficient, mean square error and intensity error than the traditional split Bregman method, and validate the feasibility of the method. The phantom experiment achieves a location error of less than 1.1 mm and validates that incorporating fan-shaped X-ray beams in our model achieves better results than assuming parallel X-rays. PMID:26203388
Chen, Benyong; Cheng, Liang; Yan, Liping; Zhang, Enzheng; Lou, Yingtian
2017-03-01
The laser beam drift seriously influences the accuracy of straightness or displacement measurement in laser interferometers, especially for the long travel measurement. To solve this problem, a heterodyne straightness and displacement measuring interferometer with laser beam drift compensation is proposed. In this interferometer, the simultaneous measurement of straightness error and displacement is realized by using heterodyne interferometry, and the laser beam drift is determined to compensate the measurement results of straightness error and displacement in real time. The optical configuration of the interferometer is designed. The principle of the simultaneous measurement of straightness, displacement, and laser beam drift is depicted and analyzed in detail. And the compensation of the laser beam drift for the straightness error and displacement is presented. Several experiments were performed to verify the feasibility of the interferometer and the effectiveness of the laser beam drift compensation. The experiments of laser beam stability show that the position stability of the laser beam spot can be improved by more than 50% after compensation. The measurement and compensation experiments of straightness error and displacement by testing a linear stage at different distances show that the straightness and displacement obtained from the interferometer are in agreement with those obtained from a compared interferometer and the measured stage. These demonstrate that the merits of this interferometer are not only eliminating the influence of laser beam drift on the measurement accuracy but also having the abilities of simultaneous measurement of straightness error and displacement as well as being suitable for long-travel linear stage metrology.
NASA Astrophysics Data System (ADS)
Hashemi, Sayed Masoud; Lee, Young; Eriksson, Markus; Nordström, Hâkan; Mainprize, James; Grouza, Vladimir; Huynh, Christopher; Sahgal, Arjun; Song, William Y.; Ruschin, Mark
2017-03-01
A Contrast and Attenuation-map (CT-number) Linearity Improvement (CALI) framework is proposed for cone-beam CT (CBCT) images used for brain stereotactic radiosurgery (SRS). The proposed framework is used together with our high spatial resolution iterative reconstruction algorithm and is tailored for the Leksell Gamma Knife ICON (Elekta, Stockholm, Sweden). The CBCT system incorporated in the ICON facilitates frameless SRS planning and treatment delivery. The ICON employs a half-cone geometry to accommodate the existing treatment couch. This geometry increases the amount of artifacts and, together with other physical imperfections, causes image inhomogeneity and contrast reduction. Our proposed framework includes a preprocessing step, involving a shading and beam-hardening artifact correction, and a post-processing step to correct the dome/capping artifact caused by the spatial variations in x-ray energy generated by the bowtie filter. Our shading correction algorithm relies solely on the acquired projection images (i.e. no prior information is required) and utilizes filtered-back-projection (FBP) reconstructed images to generate a segmented bone and soft-tissue map. Ideal projections are estimated from the segmented images, and a smoothed version of the difference between the ideal and measured projections is used in the correction. The proposed beam-hardening and dome artifact corrections are segmentation free. The CALI was tested on CatPhan as well as patient images acquired on the ICON system. The resulting clinical brain images show substantial improvements in soft-contrast visibility, revealing structures such as ventricles and lesions which were otherwise undetectable in FBP-reconstructed images. The linearity of the reconstructed attenuation map was also improved, resulting in more accurate CT numbers.
NASA Technical Reports Server (NTRS)
Berman, P. A.
1973-01-01
In order to improve reliability and the useful lifetime of solar cell arrays for space use, a program was undertaken to develop radiation-hardened lithium-doped silicon solar cells. These cells were shown to be significantly more resistant to degradation by ionized particles than the presently used n-p nonlithium-doped silicon solar cells. The results of various analyses performed to develop a more complete understanding of the physics of the interaction among lithium, silicon, oxygen, and radiation-induced defects are presented. A discussion is given of those portions of the previous model of radiation damage annealing which were found to be in error and those portions which were upheld by these extensive investigations.
A stoichiometric calibration method for dual energy computed tomography
NASA Astrophysics Data System (ADS)
Bourque, Alexandra E.; Carrier, Jean-François; Bouchard, Hugo
2014-04-01
The accuracy of radiotherapy dose calculation relies crucially on patient composition data. The computed tomography (CT) calibration methods based on the stoichiometric calibration of Schneider et al (1996 Phys. Med. Biol. 41 111-24) are the most reliable to determine electron density (ED) with commercial single energy CT scanners. Along with the recent developments in dual energy CT (DECT) commercial scanners, several methods were published to determine ED and the effective atomic number (EAN) for polyenergetic beams without the need for CT calibration curves. This paper intends to show that with a rigorous definition of the EAN, the stoichiometric calibration method can be successfully adapted to DECT with significant accuracy improvements with respect to the literature without the need for spectrum measurements or empirical beam hardening corrections. Using a theoretical framework of ICRP human tissue compositions and the XCOM photon cross sections database, the revised stoichiometric calibration method yields Hounsfield unit (HU) predictions within less than ±1.3 HU of the theoretical HU calculated from XCOM data averaged over the spectra used (e.g., 80 kVp, 100 kVp, 140 kVp and 140/Sn kVp). A fit of mean excitation energy (I-value) data as a function of EAN is provided in order to determine the ion stopping power of human tissues from ED-EAN measurements. Analysis of the calibration phantom measurements with the Siemens SOMATOM Definition Flash dual source CT scanner shows that the present formalism yields mean absolute errors of (0.3 ± 0.4)% and (1.6 ± 2.0)% on ED and EAN, respectively. For ion therapy, the mean absolute errors for calibrated I-values and proton stopping powers (216 MeV) are (4.1 ± 2.7)% and (0.5 ± 0.4)%, respectively. In all clinical situations studied, the uncertainties in ion ranges in water for therapeutic energies are found to be less than 1.3 mm, 0.7 mm and 0.5 mm for protons, helium and carbon ions respectively, using a generic reconstruction algorithm (filtered back projection). With a more advanced method (sinogram affirmed iterative technique), the values become 1.0 mm, 0.5 mm and 0.4 mm for protons, helium and carbon ions, respectively. These results allow one to conclude that the present adaptation of the stoichiometric calibration yields a highly accurate method for characterizing tissue with DECT for ion beam therapy and potentially for photon beam therapy.
Beam masking to reduce cyclic error in beam launcher of interferometer
NASA Technical Reports Server (NTRS)
Ames, Lawrence L. (Inventor); Bell, Raymond Mark (Inventor); Dutta, Kalyan (Inventor)
2005-01-01
Embodiments of the present invention are directed to reducing cyclic error in the beam launcher of an interferometer. In one embodiment, an interferometry apparatus comprises a reference beam directed along a reference path, and a measurement beam spatially separated from the reference beam and being directed along a measurement path contacting a measurement object. The reference beam and the measurement beam have a single frequency. At least a portion of the reference beam and at least a portion of the measurement beam overlapping along a common path. One or more masks are disposed in the common path or in the reference path and the measurement path to spatially isolate the reference beam and the measurement beam from one another.
SU-E-T-613: Dosimetric Consequences of Systematic MLC Leaf Positioning Errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kathuria, K; Siebers, J
2014-06-01
Purpose: The purpose of this study is to determine the dosimetric consequences of systematic MLC leaf positioning errors for clinical IMRT patient plans so as to establish detection tolerances for quality assurance programs. Materials and Methods: Dosimetric consequences were simulated by extracting MLC delivery instructions from the TPS, altering the file by the specified error, reloading the delivery instructions into the TPS, recomputing dose, and extracting dose-volume metrics for one head-and-neck and one prostate patient. Machine error was simulated by offsetting MLC leaves in Pinnacle in a systematic way. Three different algorithms were followed for these systematic offsets, and are as follows: a systematic sequential one-leaf offset (one leaf offset in one segment per beam), a systematic uniform one-leaf offset (same one leaf offset per segment per beam) and a systematic offset of a given number of leaves picked uniformly at random from a given number of segments (5 out of 10 total). Dose to the PTV and normal tissue was simulated. Results: A systematic 5 mm offset of 1 leaf for all delivery segments of all beams resulted in a maximum PTV D98 deviation of 1%. Results showed very low dose error in all reasonably possible machine configurations, rare or otherwise, which could be simulated. Very low error in dose to PTV and OARs was shown in all possible cases of one leaf per beam per segment being offset (<1%), or that of only one leaf per beam being offset (<0.2%). The errors resulting from a high number of adjacent leaves (maximum of 5 out of 60 total leaf-pairs) being simultaneously offset in many (5) of the control points (total 10–18 in all beams) per beam, in both the PTV and the OARs analyzed, were similarly low (<2–3%). Conclusions: The above results show that patient shifts and anatomical changes are the main source of errors in dose delivered, not machine delivery. These two sources of error are “visually complementary” and uncorrelated (albeit not additive in the final error), and one can easily incorporate error resulting from machine delivery in an error model based purely on tumor motion.
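Systematic leaf offsets of the kind simulated above can also be scripted outside the TPS by editing the MLC positions in an exported DICOM RT Plan. The sketch below, using pydicom, applies the "systematic uniform one-leaf offset" idea; the leaf indexing, bank layout and file names are assumptions, and the study itself applied the offsets inside Pinnacle rather than with this code.

    import pydicom

    def offset_one_leaf_per_segment(plan_path, out_path, offset_mm=5.0, leaf_index=30):
        # Apply the same offset to one MLC leaf in every control point of every beam.
        # Assumes an MLCX device whose LeafJawPositions list the A bank then the B bank.
        ds = pydicom.dcmread(plan_path)
        for beam in ds.BeamSequence:
            for cp in beam.ControlPointSequence:
                for dev in getattr(cp, "BeamLimitingDevicePositionSequence", []):
                    if dev.RTBeamLimitingDeviceType == "MLCX":
                        positions = list(dev.LeafJawPositions)
                        n_pairs = len(positions) // 2
                        positions[n_pairs + leaf_index] = float(positions[n_pairs + leaf_index]) + offset_mm
                        dev.LeafJawPositions = positions
        ds.save_as(out_path)

    offset_one_leaf_per_segment("plan.dcm", "plan_offset.dcm")   # hypothetical file names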
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, D; Dyer, B; Kumaran Nair, C
Purpose: The Integral Quality Monitor (IQM), developed by iRT Systems GmbH (Koblenz, Germany), is a large-area, linac-mounted ion chamber used to monitor photon fluence during patient treatment. Our previous work evaluated the change in the ion chamber’s response to deviations from static 1×1 cm2 and 10×10 cm2 photon beams and other characteristics integral to its use in external beam detection. The aim of this work is to simulate two external beam radiation delivery errors, quantify the detection of the simulated errors and evaluate the reduction in patient harm resulting from detection. Methods: Two well documented radiation oncology delivery errors were selected for simulation. The first error was recreated by modifying a wedged whole breast treatment, removing the physical wedge and calculating the planned dose with Pinnacle TPS (Philips Radiation Oncology Systems, Fitchburg, WI). The second error was recreated by modifying a static-gantry IMRT pharyngeal tonsil plan to be delivered in 3 unmodulated fractions. A radiation oncologist evaluated the dose for the simulated errors and predicted morbidity and mortality commensurate with the originally reported toxicity, indicating that the reported errors were approximately simulated. The ion chamber signal of the unmodified treatments was compared to the simulated error signal and evaluated in Pinnacle TPS, again with radiation oncologist prediction of simulated patient harm. Results: Previous work established that transmission detector system measurements are stable within 0.5% standard deviation (SD). Errors causing a signal change greater than 20 SD (10%) were considered detected. The simulated whole-breast and pharyngeal-tonsil IMRT errors increased the signal by 215% and 969%, respectively, indicating error detection after the first fraction and IMRT segment, respectively. Conclusion: The transmission detector system demonstrated utility in detecting clinically significant errors and reducing patient toxicity/harm in simulated external beam delivery. Future work will evaluate detection of other, smaller-magnitude delivery errors.
Gradient boride layers formed by diffusion carburizing and laser boriding
NASA Astrophysics Data System (ADS)
Kulka, M.; Makuch, N.; Dziarski, P.; Mikołajczak, D.; Przestacki, D.
2015-04-01
Laser boriding, instead of diffusion boriding, was proposed for the formation of gradient borocarburized layers. The microstructure and properties of these layers were compared to those obtained after typical diffusion borocarburizing. The first method of treatment consisted of diffusion carburizing followed by laser boriding only. Three zones are present in the microstructure: a laser-borided zone, a hardened carburized zone and a carburized layer without heat treatment. However, a sharp decrease in microhardness was observed below the laser-borided zone. Additionally, these layers were characterized by a variable mass wear intensity factor and thus by a variable abrasive wear resistance. Although very low values of the mass wear intensity factor Imw were obtained at the beginning of friction, these values increased during the later stages of friction. This can be caused by the fluctuations in the microhardness of the hardened carburized zone (HAZ). The use of through hardening after carburizing and laser boriding eliminated these fluctuations. The microstructure of this layer consisted of two zones: a laser-borided zone and a hardened carburized zone. The mass wear intensity factor was constant for this layer and comparable to that obtained with diffusion borocarburizing and through hardening. Therefore, diffusion boriding could be replaced by laser boriding when high abrasive wear resistance is required. However, the applicability of laser boriding in place of the diffusion process is limited. For elements requiring high fatigue strength, substituting laser boriding for diffusion boriding is not advisable. The surface cracks formed during laser re-melting led to relatively early initiation of the first fatigue crack. Preheating the surface before laser beam action would prevent these surface cracks and improve the fatigue strength. Although the cohesion of the laser-borided carburized layer was sufficient, the diffusion borocarburized layer showed better cohesion.
Stitching-error reduction in gratings by shot-shifted electron-beam lithography
NASA Technical Reports Server (NTRS)
Dougherty, D. J.; Muller, R. E.; Maker, P. D.; Forouhar, S.
2001-01-01
Calculations of the grating spatial-frequency spectrum and the filtering properties of multiple-pass electron-beam writing demonstrate a tradeoff between stitching-error suppression and minimum pitch separation. High-resolution measurements of optical-diffraction patterns show a 25-dB reduction in stitching-error side modes.
Pacheco, Shaun; Brand, Jonathan F.; Zaverton, Melissa; Milster, Tom; Liang, Rongguang
2015-01-01
A method to design one-dimensional beam-splitting phase gratings with low sensitivity to fabrication errors is described. The method optimizes the phase function of a grating by minimizing the integrated variance of the energy of each output beam over a range of fabrication errors. Numerical results for three 1×9 beam-splitting phase gratings are given. Two optimized gratings with low sensitivity to fabrication errors were compared with a grating designed for optimal efficiency. These three gratings were fabricated using gray-scale photolithography. The standard deviation of the nine outgoing beam energies in the optimized gratings was 2.3 and 3.4 times lower than in the optimal-efficiency grating. PMID:25969268
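The design idea, minimizing the integrated variance of the output-beam energies over a range of fabrication errors, can be illustrated with a small optimization in which the fabrication error is modeled as a uniform scaling of the etch depth and the order efficiencies come from the Fourier series of the grating transmission. This is a generic sketch of the cost function, not the authors' optimizer or error model.

    import numpy as np
    from scipy.optimize import minimize

    N = 256                                   # samples per grating period
    orders = np.arange(-4, 5)                 # the nine output orders

    def order_efficiencies(phase, scale=1.0):
        # Efficiency of each order for a phase profile scaled by a depth error.
        field = np.exp(1j * scale * phase)
        c = np.fft.fft(field) / N             # Fourier coefficients over one period
        return np.abs(c[orders]) ** 2         # negative indices wrap to negative orders

    def robust_cost(params, scales=np.linspace(0.9, 1.1, 11)):
        # Integrated variance of the nine beam energies over +/-10% depth errors.
        phase = np.interp(np.linspace(0, 1, N, endpoint=False),
                          np.linspace(0, 1, len(params), endpoint=False), params)
        return sum(np.var(order_efficiencies(phase, s)) for s in scales)

    x0 = np.random.default_rng(1).uniform(0, 2 * np.pi, 16)   # coarse phase control points
    result = minimize(robust_cost, x0, method="Nelder-Mead",
                      options={"maxiter": 5000, "xatol": 1e-4})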
Single event upset vulnerability of selected 4K and 16K CMOS static RAM's
NASA Technical Reports Server (NTRS)
Kolasinski, W. A.; Koga, R.; Blake, J. B.; Brucker, G.; Pandya, P.; Petersen, E.; Price, W.
1982-01-01
Upset thresholds for bulk CMOS and CMOS/SOS RAMs were deduced after bombardment of the devices with 140 MeV Kr, 160 MeV Ar, and 33 MeV O beams in a cyclotron. The trials were performed to test prototype devices intended for space applications, to relate feature size to the critical upset charge, and to check the validity of computer simulation models. The tests were run on 4 K and 16 K memories with six-transistor cells, in either hardened or unhardened configurations. The upset cross sections were calculated to determine the critical charge for upset from the soft errors observed in the irradiated cells. Computer simulations of the critical charge were found to deviate from the experimentally observed variation of the critical charge as the square of the feature size. Modeling of the series resistors decoupling the inverter pairs of memory cells showed that, above some minimum resistance value, a small increase in resistance produces a large increase in the critical charge; the experimental data showed this to be of questionable validity unless the resistance value is made dependent on the maximum allowed read-write time.
NASA Astrophysics Data System (ADS)
He, Yingwei; Li, Ping; Feng, Guojin; Cheng, Li; Wang, Yu; Wu, Houping; Liu, Zilong; Zheng, Chundi; Sha, Dingguo
2010-11-01
For measuring the transmittance of large-aperture optical systems, a novel sub-aperture scanning machine with double rotating arms (SSMDA) was designed to produce a sub-aperture beam spot. Full-aperture transmittance measurements of an optical system can be achieved by applying sub-aperture beam-spot scanning technology. A mathematical model of the SSMDA based on homogeneous coordinate transformation matrices is established to develop a detailed methodology for analyzing the beam-spot scanning errors. The error analysis methodology considers two fundamental sources of scanning errors, namely (1) systematic length errors and (2) systematic rotational errors. With the systematic errors of the parameters given beforehand, the computed scanning errors lie between -0.007 and 0.028 mm for scanning radii no larger than 400.000 mm. The results provide a theoretical and data basis for research on the transmission characteristics of large optical systems.
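The homogeneous-coordinate error analysis outlined above amounts to propagating small length and rotation errors through the chain of transforms that positions the beam spot. A minimal sketch follows; the two-arm geometry, nominal angles and error magnitudes are assumed for illustration and are not the SSMDA parameters.

    import numpy as np

    def rot_z(theta):
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

    def trans_x(length):
        T = np.eye(4)
        T[0, 3] = length
        return T

    def spot_position(th1, th2, L1, L2):
        # Beam-spot position of a double-rotating-arm scanner as a chain of transforms.
        T = rot_z(th1) @ trans_x(L1) @ rot_z(th2) @ trans_x(L2)
        return (T @ np.array([0.0, 0.0, 0.0, 1.0]))[:3]

    nominal = spot_position(np.deg2rad(30), np.deg2rad(45), 250.0, 150.0)   # mm, illustrative
    perturbed = spot_position(np.deg2rad(30) + 5e-5,                        # arm-1 angle error (rad)
                              np.deg2rad(45) - 5e-5,                        # arm-2 angle error (rad)
                              250.0 + 0.005, 150.0 - 0.005)                 # length errors (mm)
    print(np.linalg.norm(perturbed - nominal))                              # scanning error (mm)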
Sub-nanometer periodic nonlinearity error in absolute distance interferometers
NASA Astrophysics Data System (ADS)
Yang, Hongxing; Huang, Kaiqi; Hu, Pengcheng; Zhu, Pengfei; Tan, Jiubin; Fan, Zhigang
2015-05-01
Periodic nonlinearity, which can result in errors at the nanometer scale, has become a main problem limiting absolute distance measurement accuracy. In order to eliminate this error, a new integrated interferometer with a non-polarizing beam splitter is developed. This eliminates frequency and/or polarization mixing. Furthermore, the strict requirement on the laser source polarization is greatly relaxed. By combining a retro-reflector and an angle prism, the reference and measuring beams can be spatially separated so that their optical paths do not overlap. Thus, the main cause of the periodic nonlinearity error, i.e., frequency and/or polarization mixing and beam leakage, is eliminated. Experimental results indicate that the periodic phase error is kept within 0.0018°.
Multiple-mode nonlinear free and forced vibrations of beams using finite element method
NASA Technical Reports Server (NTRS)
Mei, Chuh; Decha-Umphai, Kamolphan
1987-01-01
Multiple-mode nonlinear free and forced vibration of a beam is analyzed by the finite element method. The geometric nonlinearity is investigated. In-plane displacement and inertia (IDI) are also considered in the formulation. The harmonic force matrix is derived and explained. Nonlinear free vibration can be treated simply as a special case of the general forced vibration by setting the harmonic force matrix equal to zero. The effect of the higher modes is more pronounced for the clamped-supported beam than for the simply supported one. Beams without IDI show a stronger higher-mode effect than those with IDI. The effect of IDI is to reduce the nonlinearity. For beams with end supports restrained from axial movement (immovable cases), only the hardening type of nonlinearity is observed. However, for beams of small slenderness ratio (L/R = 20) with movable end supports, the softening type of nonlinearity is found. The concentrated force case yields a more severe response than the uniformly distributed force case. Finite element results are in good agreement with the simple elliptic response solution, the harmonic balance method, the Runge-Kutta method, and experiment.
Reconstruction of radial thermal conductivity depth profile in case hardened steel rods
NASA Astrophysics Data System (ADS)
Celorrio, Ricardo; Mendioroz, Arantza; Apiñaniz, Estibaliz; Salazar, Agustín; Wang, Chinhua; Mandelis, Andreas
2009-04-01
In this work the surface thermal-wave field (ac temperature) of a solid cylinder illuminated by a modulated light beam is calculated first in two cases: a multilayered cylinder and a cylinder the radial thermal conductivity of which varies continuously. It is demonstrated numerically that, using a few layers of different thicknesses, the surface thermal-wave field of a cylindrical sample with continuously varying radial thermal conductivity can be calculated with high accuracy. Next, an inverse procedure based on the multilayered model is used to reconstruct the radial thermal conductivity profile of hardened C1018 steel rods, the surface temperature of which was measured by photothermal radiometry. The reconstructed thermal conductivity depth profile has a similar shape to those found for flat samples of this material and shows a qualitative anticorrelation with the hardness depth profile.
NASA Astrophysics Data System (ADS)
Aróztegui, Juan J.; Urcola, José J.; Fuentes, Manuel
1989-09-01
Commercial electric arc melted low-carbon steels, provided as I beams, were characterized both microstructurally and mechanically in the as-rolled, copper precipitation, and plastically pre-deformed conditions. Inclusion size distribution, ferrite grain size, pearlite volume fraction, precipitated volume fraction of copper, and size distribution of these precipitates were determined by conventional quantitative optical and electron metallographic techniques. From the tensile tests conducted at a strain rate of 10⁻³ s⁻¹ and the impact Charpy V-notched tests carried out, stress/strain curves, yield stress, and impact-transition temperature were obtained. The specific fractographic features of the fracture surfaces also were quantitatively characterized. The increases in yield stress and transition temperature experienced upon either aging or work hardening were related through empirical relationships. These dependences were analyzed semiquantitatively by combining microscopic and macroscopic fracture criteria based on measured fundamental properties (fracture stress and yield stress) and observed fractographic parameters (crack nucleation distance and nuclei size). The rationale developed from these fracture criteria allows the semiquantitative prediction of the temperature transition shifts produced upon aging and work hardening. The values obtained are of the right order of magnitude.
SU-G-TeP4-12: Individual Beam QA for a Robotic Radiosurgery System Using a Scintillator Cone
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGuinness, C; Descovich, M; Sudhyadhom, A
2016-06-15
Purpose: The targeting accuracy of the CyberKnife system is measured by end-to-end tests delivering multiple isocentric beams to a point in space. While the targeting accuracy of two representative beams can be determined by a Winston-Lutz-type test, no test is available today to determine the targeting accuracy of each clinical beam. We used a scintillator cone to measure the accuracy of each individual beam. Methods: The XRV-124 from Logos Systems Int’l is a scintillator cone with an imaging system that is able to measure individual beam vectors and a resulting error between planned and measured beam coordinates. We measured the targeting accuracy of isocentric and non-isocentric beams for a number of test cases using the Iris and the fixed collimator. The average difference between planned and measured beam position was 0.8–1.2 mm across the collimator sizes and plans considered here. The maximum error for a single beam was 2.5 mm for the isocentric plans and 1.67 mm for the non-isocentric plans. The standard deviation of the differences was 0.5 mm or less. Conclusion: The CyberKnife system is specified to have an overall targeting accuracy for static targets of less than 0.95 mm. In E2E tests using the XRV-124 system we measure average beam accuracy between 0.8 and 1.23 mm, with a maximum of 2.5 mm. We plan to investigate correlations between beam position error and robot position, and to quantify the effect of beam position errors on patient-specific plans. Martina Descovich has received research support and speaker honoraria from Accuray.
Use Of Lasers In Seam Welding Of Engine Parts For Cars
NASA Astrophysics Data System (ADS)
Luttke, A.
1986-11-01
The decision in favour of active research into laser technology was taken in our company in 1978. In the following years we started with the setting-up of a laser laboratory charged with the task of performing basic manufacturing technology experiments in order to examine the applications of laser technology for cutting, welding, hardening, remelting and secondary alloys. The first laboratory laser - a 2.5 kW fast axial flow CO2 laser - is connected with a CNC-controlled workpiece manipulation unit, which is designed in such a way that workpieces from the smallest component of a car gearbox up to crankcases for commercial vehicles can be manipulated at speeds considered theoretically feasible for laser machining. The use of the laser beam for cutting, hardening and welding tasks has been under investigation in our company, in this laboratory, for some 6 years. Laser cutting is now no longer a question of development, but is instead standard practice and is already used in various sections of our production division for pilot-series manufacturing and for small batches. Laser hardening has, in our opinion, great possibilities for tasks which, for distortion and accessibility reasons, cannot be satisfactorily performed using present-day processes, for instance induction hardening. However, a great deal of development work is still necessary before economically reasonable and quality-assured production installation can be undertaken. Laser welding is now used in series production in our company for two engine components. More details are given below.
Worldwide Ocean Optics Database (WOOD)
2001-09-30
The user can obtain values computed from empirical algorithms (e.g., beam attenuation estimated from diffuse attenuation and backscatter data). Error estimates will also be provided for these properties, including diffuse attenuation, beam attenuation, and scattering. The database shall be easy to use, Internet accessible, and frequently updated.
A mathematical approach to beam matching
Manikandan, A; Nandy, M; Gossman, M S; Sureka, C S; Ray, A; Sujatha, N
2013-01-01
Objective: This report provides the mathematical commissioning instructions for the evaluation of beam matching between two different linear accelerators. Methods: Test packages were first obtained including an open beam profile, a wedge beam profile and a depth–dose curve, each from a 10×10 cm2 beam. From these plots, a spatial error (SE) and a percentage dose error were introduced to form new plots. These three test package curves and the associated error curves were then differentiated in space with respect to dose for a first and second derivative to determine the slope and curvature of each data set. The derivatives, also known as bandwidths, were analysed to determine the level of acceptability for the beam matching test described in this study. Results: The open and wedged beam profiles and depth–dose curve in the build-up region were determined to match within 1% dose error and 1-mm SE at 71.4% and 70.8% for of all points, respectively. For the depth–dose analysis specifically, beam matching was achieved for 96.8% of all points at 1%/1 mm beyond the depth of maximum dose. Conclusion: To quantify the beam matching procedure in any clinic, the user needs to merely generate test packages from their reference linear accelerator. It then follows that if the bandwidths are smooth and continuous across the profile and depth, there is greater likelihood of beam matching. Differentiated spatial and percentage variation analysis is appropriate, ideal and accurate for this commissioning process. Advances in knowledge: We report a mathematically rigorous formulation for the qualitative evaluation of beam matching between linear accelerators. PMID:23995874
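The differentiation-based comparison described above can be prototyped with simple numerical derivatives of the measured curves plus a 1%/1 mm agreement check. The sketch below is a rough illustration with synthetic placeholder profiles, not the report's test packages or exact scoring rule.

    import numpy as np

    def bandwidths(position_mm, dose_pct):
        # First and second spatial derivatives ("bandwidths") of a profile or depth-dose curve.
        d1 = np.gradient(dose_pct, position_mm)
        d2 = np.gradient(d1, position_mm)
        return d1, d2

    def fraction_matching(pos, ref_dose, test_dose, dose_tol=1.0, dist_tol=1.0):
        # Fraction of reference points with a test point within 1 mm whose dose agrees within 1%.
        passed = 0
        for x, d in zip(pos, ref_dose):
            dose_ok = np.abs(test_dose - d) <= dose_tol
            dist_ok = np.abs(pos - x) <= dist_tol
            passed += np.any(dose_ok & dist_ok)
        return passed / len(pos)

    pos = np.linspace(-100, 100, 401)                         # mm
    ref  = 100 / (1 + np.exp((np.abs(pos) - 50) / 3))         # placeholder reference profile
    test = 100 / (1 + np.exp((np.abs(pos + 0.4) - 50) / 3))   # placeholder profile from second linac
    print(fraction_matching(pos, ref, test))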
Resistance welding graphite-fiber composites
NASA Technical Reports Server (NTRS)
Lamoureux, R. T.
1980-01-01
High-strength joints are welded in seconds in carbon-reinforced thermoplastic beams. The resistance-welding electrode applies heat and pressure to the joint and is spring-loaded to follow the softening material and maintain contact; it also holds the parts together for cooling and hardening. Both transverse and longitudinal configurations can be welded. Adhesive bonding and encapsulation are more time-consuming methods and introduce additional material into the joint, while ultrasonic heating can damage the graphite fibers in the composite.
NASA Astrophysics Data System (ADS)
Krasnov, P. S.; Metel, A. S.; Nay, H. A.
2017-05-01
Before the synthesis of a superhard coating, the product surface is hardened by means of plasma nitriding, which prevents surface deformation and brittle rupture of the coating. Heating the product with ions accelerated from the plasma by a bias voltage applied to the product leads to overheating and blunting of the product's sharp edges. To prevent this blunting, it is proposed to heat the products with a broad beam of fast nitrogen molecules. Injection of the beam into the working vacuum chamber fills the chamber with a fairly homogeneous plasma suitable for nitriding. Immersing an electrode in the plasma and raising its potential to 50-100 V initiates a non-self-sustained glow discharge between the electrode and the chamber. This enhances the plasma density by an order of magnitude and reduces its spatial nonuniformity to 5-10%. When a cutting tool is isolated from the chamber, it is bombarded by plasma ions with an energy corresponding to its floating potential, which is lower than the sputtering threshold. Hence, the sharp edges are sputtered only by fast nitrogen molecules, at the same rate as other parts of the tool surface. This leads to sharpening of the cutting tools instead of blunting.
Pessis, Eric; Campagna, Raphaël; Sverzut, Jean-Michel; Bach, Fabienne; Rodallec, Mathieu; Guerini, Henri; Feydy, Antoine; Drapé, Jean-Luc
2013-01-01
With arthroplasty being increasingly used to relieve joint pain, imaging of patients with metal implants can represent a significant part of the clinical work load in the radiologist's daily practice. Computed tomography (CT) plays an important role in the postoperative evaluation of patients who are suspected of having metal prosthesis-related problems such as aseptic loosening, bone resorption or osteolysis, infection, dislocation, metal hardware failure, or periprosthetic bone fracture. Despite advances in detector technology and computer software, artifacts from metal implants can seriously degrade the quality of CT images, sometimes to the point of making them diagnostically unusable. Several factors may help reduce the number and severity of artifacts at multidetector CT, including decreasing the detector collimation and pitch, increasing the kilovolt peak and tube charge, and using appropriate reconstruction algorithms and section thickness. More recently, dual-energy CT has been proposed as a means of reducing beam-hardening artifacts. The use of dual-energy CT scanners allows the synthesis of virtual monochromatic spectral (VMS) images. Monochromatic images depict how the imaged object would look if the x-ray source produced x-ray photons at only a single energy level. For this reason, VMS imaging is expected to provide improved image quality by reducing beam-hardening artifacts.
Impact of neutron irradiation on mechanical performance of FeCrAl alloy laser-beam weldments
NASA Astrophysics Data System (ADS)
Gussev, M. N.; Cakmak, E.; Field, K. G.
2018-06-01
Oxidation-resistant iron-chromium-aluminum (FeCrAl) alloys demonstrate better performance in Loss-of-Coolant Accidents, compared with austenitic- and zirconium-based alloys. However, further deployment of FeCrAl-based materials requires detailed characterization of their performance under irradiation; moreover, since welding is one of the key operations in fabrication of light water reactor fuel cladding, FeCrAl alloy weldment performance and properties also should be determined prior to and after irradiation. Here, advanced C35M alloy (Fe-13%Cr-5%Al) and variants with aluminum (+2%) or titanium carbide (+1%) additions were characterized after neutron irradiation in Oak Ridge National Laboratory's High Flux Isotope Reactor at 1.8-1.9 dpa in a temperature range of 195-559 °C. Specimen sets included as-received (AR) materials and specimens after controlled laser-beam welding. Tensile tests with digital image correlation (DIC), scanning electron microscopy-electron back scatter diffraction analysis, fractography, and x-ray tomography analysis were performed. DIC allowed for investigating local yield stress in the weldments, deformation hardening behavior, and plastic anisotropy. Both AR and welded material revealed a high degree of radiation-induced hardening for low-temperature irradiation; however, irradiation at high-temperatures (i.e., 559 °C) had little overall effect on the mechanical performance.
Improving the accuracy of CT dimensional metrology by a novel beam hardening correction method
NASA Astrophysics Data System (ADS)
Zhang, Xiang; Li, Lei; Zhang, Feng; Xi, Xiaoqi; Deng, Lin; Yan, Bin
2015-01-01
The powerful nondestructive capabilities of computed tomography (CT) are attracting increasing research interest in its use for dimensional metrology, where it offers a practical alternative to common measurement methods. However, inaccuracy and uncertainty severely limit the further use of CT for dimensional metrology; among the many contributing factors, the beam hardening (BH) effect plays a major role. This paper focuses on eliminating the influence of the BH effect on the accuracy of CT dimensional metrology. To correct the BH effect, a novel exponential correction model is proposed. The parameters of the model are determined by minimizing the gray entropy of the reconstructed volume. In order to maintain the consistency and contrast of the corrected volume, a penalty term is added to the cost function, enabling more accurate measurement results to be obtained with a simple global threshold method. The proposed method is efficient, and especially suited to cases where there is a large difference in gray value between material and background. Spheres with known diameters are used to verify the accuracy of dimensional measurement. Both simulation and real experimental results demonstrate the improvement in measurement precision. Moreover, a more complex workpiece is also tested to show that the proposed method is generally applicable.
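The parameter-selection idea, choosing correction parameters that minimize the gray entropy of the reconstructed volume, can be sketched in 2D with scikit-image. The single-parameter exponential correction form, the simulated hardening model and the optimizer below are assumptions for illustration, not the paper's model.

    import numpy as np
    from scipy.optimize import minimize_scalar
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon

    theta = np.linspace(0., 180., 180, endpoint=False)
    mono = radon(shepp_logan_phantom(), theta=theta)
    mono *= 3.0 / mono.max()                           # rescale line integrals for the toy model
    poly = 2.5 * (1.0 - np.exp(-mono / 2.5))           # simulated beam-hardened projections

    def gray_entropy(image, bins=256):
        hist, _ = np.histogram(image, bins=bins)
        p = hist[hist > 0] / hist.sum()
        return -np.sum(p * np.log(p))

    def corrected_volume(c):
        # Assumed single-parameter exponential correction applied to the projections.
        corr = -c * np.log(np.clip(1.0 - poly / c, 1e-6, None))
        return iradon(corr, theta=theta)

    res = minimize_scalar(lambda c: gray_entropy(corrected_volume(c)),
                          bounds=(1.8, 10.0), method="bounded")
    best_reconstruction = corrected_volume(res.x)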
Li, Jun; Shi, Wenyin; Andrews, David; Werner-Wasik, Maria; Lu, Bo; Yu, Yan; Dicker, Adam; Liu, Haisong
2017-06-01
The study was aimed to compare online 6 degree-of-freedom image registrations of TrueBeam cone-beam computed tomography and BrainLab ExacTrac X-ray imaging systems for intracranial radiosurgery. Phantom and patient studies were performed on a Varian TrueBeam STx linear accelerator (version 2.5), which is integrated with a BrainLab ExacTrac imaging system (version 6.1.1). The phantom study was based on a Rando head phantom and was designed to evaluate isocenter location dependence of the image registrations. Ten isocenters at various locations representing clinical treatment sites were selected in the phantom. Cone-beam computed tomography and ExacTrac X-ray images were taken when the phantom was located at each isocenter. The patient study included 34 patients. Cone-beam computed tomography and ExacTrac X-ray images were taken at each patient's treatment position. The 6 degree-of-freedom image registrations were performed on cone-beam computed tomography and ExacTrac, and residual errors calculated from cone-beam computed tomography and ExacTrac were compared. In the phantom study, the average residual error differences (absolute values) between cone-beam computed tomography and ExacTrac image registrations were 0.17 ± 0.11 mm, 0.36 ± 0.20 mm, and 0.25 ± 0.11 mm in the vertical, longitudinal, and lateral directions, respectively. The average residual error differences in the rotation, roll, and pitch were 0.34° ± 0.08°, 0.13° ± 0.09°, and 0.12° ± 0.10°, respectively. In the patient study, the average residual error differences in the vertical, longitudinal, and lateral directions were 0.20 ± 0.16 mm, 0.30 ± 0.18 mm, 0.21 ± 0.18 mm, respectively. The average residual error differences in the rotation, roll, and pitch were 0.40°± 0.16°, 0.17° ± 0.13°, and 0.20° ± 0.14°, respectively. Overall, the average residual error differences were <0.4 mm in the translational directions and <0.5° in the rotational directions. ExacTrac X-ray image registration is comparable to TrueBeam cone-beam computed tomography image registration in intracranial treatments.
Pointing error analysis of Risley-prism-based beam steering system.
Zhou, Yuan; Lu, Yafei; Hei, Mo; Liu, Guangcan; Fan, Dapeng
2014-09-01
Based on the vector form Snell's law, ray tracing is performed to quantify the pointing errors of Risley-prism-based beam steering systems, induced by component errors, prism orientation errors, and assembly errors. Case examples are given to elucidate the pointing error distributions in the field of regard and evaluate the allowances of the error sources for a given pointing accuracy. It is found that the assembly errors of the second prism will result in more remarkable pointing errors in contrast with the first one. The pointing errors induced by prism tilt depend on the tilt direction. The allowances of bearing tilt and prism tilt are almost identical if the same pointing accuracy is planned. All conclusions can provide a theoretical foundation for practical works.
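The ray-tracing step at the heart of this analysis, refraction through the vector form of Snell's law, can be written compactly as below. The refraction function is standard; the single-surface example, wedge-error magnitude and refractive index are illustrative assumptions rather than the paper's prism geometry.

    import numpy as np

    def refract(d, n, n1, n2):
        # Vector form of Snell's law: refract unit ray d at a surface with unit normal n
        # (n pointing against the incoming ray). Returns None on total internal reflection.
        d = d / np.linalg.norm(d)
        n = n / np.linalg.norm(n)
        eta = n1 / n2
        cos_i = -np.dot(d, n)
        sin2_t = eta ** 2 * (1.0 - cos_i ** 2)
        if sin2_t > 1.0:
            return None
        cos_t = np.sqrt(1.0 - sin2_t)
        return eta * d + (eta * cos_i - cos_t) * n

    # Single tilted surface: a wedge-angle error tilts the surface normal slightly.
    d_in = np.array([0.0, 0.0, 1.0])
    wedge_err = np.deg2rad(0.01)                              # assumed wedge-angle error
    normal = np.array([np.sin(wedge_err), 0.0, -np.cos(wedge_err)])
    d_out = refract(d_in, normal, 1.0, 1.517)                 # air into BK7-like glass
    pointing_change_deg = np.degrees(np.arccos(np.clip(np.dot(d_out, d_in), -1.0, 1.0)))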
Forward scattering in two-beam laser interferometry
NASA Astrophysics Data System (ADS)
Mana, G.; Massa, E.; Sasso, C. P.
2018-04-01
A fractional error as large as 25 pm mm-1 at the zero optical-path difference has been observed in an optical interferometer measuring the displacement of an x-ray interferometer used to determine the lattice parameter of silicon. Detailed investigations have brought to light that the error was caused by light forward-scattered from the beam feeding the interferometer. This paper reports on the impact of forward-scattered light on the accuracy of two-beam optical interferometry applied to length metrology, and supplies a model capable of explaining the observed error.
Smith, Peter D [Santa Fe, NM; Claytor, Thomas N [White Rock, NM; Berry, Phillip C [Albuquerque, NM; Hills, Charles R [Los Alamos, NM
2010-10-12
An x-ray detector is disclosed that has had all unnecessary material removed from the x-ray beam path, and all of the remaining material in the beam path made as light and as low in atomic number as possible. The resulting detector is essentially transparent to x-rays and, thus, has greatly reduced internal scatter. The result of this is that x-ray attenuation data measured for the object under examination are much more accurate and have an increased dynamic range. The benefits of this improvement are that beam hardening corrections can be made accurately, that computed tomography reconstructions can be used for quantitative determination of material properties including density and atomic number, and that lower exposures may be possible as a result of the increased dynamic range.
NASA Astrophysics Data System (ADS)
Campo, Adriaan; Dudzik, Grzegorz; Apostolakis, Jason; Waz, Adam; Nauleau, Pierre; Abramski, Krzysztof; Dirckx, Joris; Konofagou, Elisa
2017-10-01
The aim of this work was to compare pulse wave velocity (PWV) measurements using laser Doppler vibrometry (LDV) and the more established ultrasound-based pulse wave imaging (PWI) in smooth vessels. Additionally, it was tested whether changes in phantom structure can be detected using LDV in vessels containing a local hardening of the vessel wall. Results from both methods showed good agreement, illustrated by the non-parametric Spearman correlation analysis (Spearman ρ = 1, p < 0.05) and the Bland-Altman analysis (mean bias of -0.63 m/s and limits of agreement between -0.35 and -0.90 m/s). The PWV in soft phantoms as measured with LDV was 1.30±0.40 m/s and the PWV in stiff phantoms was 3.6±1.4 m/s. The PWV values in phantoms with inclusions were in between those of the soft and stiff phantoms. However, using LDV, given the low number of measurement beams, the exact locations of inclusions could not be determined, and the PWV in the inclusions could not be measured. In conclusion, this study indicates that the PWV as measured with PWI is in good agreement with the PWV measured with LDV, although the latter technique has lower spatial resolution, fewer markers and larger distances between beams. In further studies, more LDV beams will be used to allow detection of local changes in arterial wall dynamics due to, e.g., small inclusions or local hardenings of the vessel wall.
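The agreement statistics quoted above (Spearman correlation plus Bland-Altman bias and limits of agreement) are straightforward to compute; the sketch below uses placeholder paired PWV readings, not the study's data.

    import numpy as np
    from scipy.stats import spearmanr

    def bland_altman(a, b):
        # Mean bias and 95% limits of agreement between two measurement methods.
        a, b = np.asarray(a, float), np.asarray(b, float)
        diff = a - b
        bias = diff.mean()
        half_width = 1.96 * diff.std(ddof=1)
        return bias, bias - half_width, bias + half_width

    pwv_ldv = [1.1, 1.4, 1.6, 3.1, 3.8, 4.0]    # m/s, placeholder values
    pwv_pwi = [1.6, 1.9, 2.3, 3.8, 4.4, 4.7]    # m/s, placeholder values
    rho, p = spearmanr(pwv_ldv, pwv_pwi)
    bias, lo, hi = bland_altman(pwv_ldv, pwv_pwi)
    print(f"Spearman rho={rho:.2f} (p={p:.3f}); bias={bias:.2f} m/s, LoA [{lo:.2f}, {hi:.2f}]")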
NASA Astrophysics Data System (ADS)
Li, Lei; Hu, Jianhao
2010-12-01
Notice of Violation of IEEE Publication Principles: "Joint Redundant Residue Number Systems and Module Isolation for Mitigating Single Event Multiple Bit Upsets in Datapath" by Lei Li and Jianhao Hu, in the IEEE Transactions on Nuclear Science, vol. 57, no. 6, Dec. 2010, pp. 3779-3786. After careful and considered review of the content and authorship of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles. This paper contains substantial duplication of original text from the papers cited below. The original text was copied without attribution (including appropriate references to the original author(s) and/or paper title) and without permission. Due to the nature of this violation, reasonable effort should be made to remove all past references to this paper, and future references should be made to the following articles: "Multiple Error Detection and Correction Based on Redundant Residue Number Systems" by Vik Tor Goh and M. U. Siddiqi, in the IEEE Transactions on Communications, vol. 56, no. 3, March 2008, pp. 325-330; "A Coding Theory Approach to Error Control in Redundant Residue Number Systems. I: Theory and Single Error Correction" by H. Krishna, K-Y. Lin, and J-D. Sun, in the IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, vol. 39, no. 1, Jan 1992, pp. 8-17. In this paper, we propose a joint scheme which combines redundant residue number systems (RRNS) with module isolation (MI) for mitigating single event multiple bit upsets (SEMBUs) in the datapath. The proposed hardening scheme employs redundant residues to improve the fault tolerance of the datapath and module spacings to guarantee that SEMBUs caused by charge sharing do not propagate among the operation channels of different moduli. The features of RRNS, such as independence, parallelism, and error correction, are exploited to establish the radiation hardening architecture for the datapath in radiation environments. In the proposed scheme, all of the residues can be processed independently, and most of the soft errors in the datapath can be corrected with the redundant relationship of the residues at the correction module, which is allocated at the end of the datapath. In the back-end implementation, the module isolation technique is used to improve the soft error rate performance of RRNS by physically separating the operation channels of different moduli. The case studies show at least an order of magnitude decrease in the soft error rate (SER) as compared to non-RHBD designs, and demonstrate that RRNS+MI can reduce the SER from 10⁻¹² to 10⁻¹⁷ when the number of processing steps of the datapath is 10⁶. The proposed scheme can even achieve lower area and latency overheads than the design without radiation hardening, since RRNS can reduce the operational complexity in the datapath.
Multimodal assessment of visual attention using the Bethesda Eye & Attention Measure (BEAM).
Ettenhofer, Mark L; Hershaw, Jamie N; Barry, David M
2016-01-01
Computerized cognitive tests measuring manual response time (RT) and errors are often used in the assessment of visual attention. Evidence suggests that saccadic RT and errors may also provide valuable information about attention. This study was conducted to examine a novel approach to multimodal assessment of visual attention incorporating concurrent measurements of saccadic eye movements and manual responses. A computerized cognitive task, the Bethesda Eye & Attention Measure (BEAM) v.34, was designed to evaluate key attention networks through concurrent measurement of saccadic and manual RT and inhibition errors. Results from a community sample of n = 54 adults were analyzed to examine effects of BEAM attention cues on manual and saccadic RT and inhibition errors, internal reliability of BEAM metrics, relationships between parallel saccadic and manual metrics, and relationships of BEAM metrics to demographic characteristics. Effects of BEAM attention cues (alerting, orienting, interference, gap, and no-go signals) were consistent with previous literature examining key attention processes. However, corresponding saccadic and manual measurements were weakly related to each other, and only manual measurements were related to estimated verbal intelligence or years of education. This study provides preliminary support for the feasibility of multimodal assessment of visual attention using the BEAM. Results suggest that BEAM saccadic and manual metrics provide divergent measurements. Additional research will be needed to obtain comprehensive normative data, to cross-validate BEAM measurements with other indicators of neural and cognitive function, and to evaluate the utility of these metrics within clinical populations of interest.
NASA Astrophysics Data System (ADS)
Li, Xin; Zhou, Shihong; Ma, Jing; Tan, Liying; Shen, Tao
2013-08-01
CMOS sensors are good candidate tracking detectors for satellite optical communication systems, owing to the sub-windowing capability enabled by the development of APS (Active Pixel Sensor) technology. For inter-satellite optical communications it is critical to estimate the direction of the incident laser beam precisely by measuring the centroid position of the incident beam spot. The presence of detector noise results in measurement error, which degrades the tracking performance of the system. In this research, the centroid measurement error of a CMOS sensor is derived taking detector noise into consideration. It is shown that the measurement error depends on the pixel noise, the size of the tracking sub-window (number of pixels), the intensity of the incident laser beam, and the relative size of the beam spot. The influences of these factors are analyzed by numerical simulation. We hope the results obtained in this research will be helpful in the design of CMOS-based tracking detectors for satellite optical communication systems.
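As a rough illustration of how the factors listed above enter the centroid measurement, the following sketch simulates a Gaussian beam spot on a CMOS sub-window with additive pixel noise and estimates the resulting centroid error; the spot model, noise level, and window size are assumptions, not values from the paper.

```python
# Hedged numerical sketch of how pixel noise maps into centroid measurement
# error on a CMOS tracking sub-window; spot model, noise level and window
# size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 16                                # sub-window size in pixels (assumed)
y, x = np.mgrid[0:n, 0:n]
x0, y0, w = 7.3, 8.1, 2.0             # true spot centre and 1/e radius (assumed)
spot = 200.0 * np.exp(-((x - x0)**2 + (y - y0)**2) / w**2)   # incident beam spot

errors = []
for _ in range(2000):
    frame = spot + rng.normal(0.0, 5.0, spot.shape)   # additive pixel noise
    frame = np.clip(frame, 0.0, None)
    total = frame.sum()
    xc = (frame * x).sum() / total                    # intensity-weighted centroid
    yc = (frame * y).sum() / total
    errors.append(np.hypot(xc - x0, yc - y0))

print(f"RMS centroid error: {np.sqrt(np.mean(np.square(errors))):.3f} pixels")
```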
Sommargren, Gary E.; Campbell, Eugene W.
2004-03-09
To measure a convex mirror, a reference beam and a measurement beam are both provided through a single optical fiber. A positive auxiliary lens is placed in the system to give a converging wavefront onto the convex mirror under test. A measurement is taken that includes the aberrations of the convex mirror as well as the errors due to two transmissions through the positive auxiliary lens. A second measurement provides the information to eliminate this error. A negative lens can also be measured in a similar way. Again, there are two measurement set-ups. A reference beam is provided from a first optical fiber and a measurement beam is provided from a second optical fiber. A positive auxiliary lens is placed in the system to provide a converging wavefront from the reference beam onto the negative lens under test. The measurement beam is combined with the reference wavefront and is analyzed by standard methods. This measurement includes the aberrations of the negative lens, as well as the errors due to a single transmission through the positive auxiliary lens. A second measurement provides the information to eliminate this error.
Sommargren, Gary E.; Campbell, Eugene W.
2005-06-21
To measure a convex mirror, a reference beam and a measurement beam are both provided through a single optical fiber. A positive auxiliary lens is placed in the system to give a converging wavefront onto the convex mirror under test. A measurement is taken that includes the aberrations of the convex mirror as well as the errors due to two transmissions through the positive auxiliary lens. A second measurement provides the information to eliminate this error. A negative lens can also be measured in a similar way. Again, there are two measurement set-ups. A reference beam is provided from a first optical fiber and a measurement beam is provided from a second optical fiber. A positive auxiliary lens is placed in the system to provide a converging wavefront from the reference beam onto the negative lens under test. The measurement beam is combined with the reference wavefront and is analyzed by standard methods. This measurement includes the aberrations of the negative lens, as well as the errors due to a single transmission through the positive auxiliary lens. A second measurement provides the information to eliminate this error.
Scatter correction for x-ray conebeam CT using one-dimensional primary modulation
NASA Astrophysics Data System (ADS)
Zhu, Lei; Gao, Hewei; Bennett, N. Robert; Xing, Lei; Fahrig, Rebecca
2009-02-01
Recently, we developed an efficient scatter correction method for x-ray imaging using primary modulation. A two-dimensional (2D) primary modulator with spatially variant attenuating materials is inserted between the x-ray source and the object to separate primary and scatter signals in the Fourier domain. Due to the high modulation frequency in both directions, the 2D primary modulator has a strong scatter correction capability for objects with arbitrary geometries. However, signal processing on the modulated projection data requires knowledge of the modulator position and attenuation. In practical systems, mainly due to system gantry vibration, beam hardening effects and the ramp-filtering in the reconstruction, the insertion of the 2D primary modulator results in artifacts such as rings in the CT images, if no post-processing is applied. In this work, we eliminate the source of artifacts in the primary modulation method by using a one-dimensional (1D) modulator. The modulator is aligned parallel to the ramp-filtering direction to avoid error magnification, while sufficient primary modulation is still achieved for scatter correction on a quasi-cylindrical object, such as a human body. The scatter correction algorithm is also greatly simplified for convenience and stability in practical implementations. The method is evaluated on a clinical CBCT system using the Catphan© 600 phantom. The result shows effective scatter suppression without introducing additional artifacts. In the selected regions of interest, the reconstruction error is reduced from 187.2 HU to 10.0 HU if the proposed method is used.
Half-value-layer increase owing to tungsten buildup in the x-ray tube: fact or fiction.
Stears, J G; Felmlee, J P; Gray, J E
1986-09-01
The half-value layer (HVL) of an x-ray beam is generally believed to increase with x-ray tube use. This increase in HVL has previously been attributed to the hardening of the x-ray beam as a result of a buildup of tungsten on the x-ray tube glass window. Radiographs and HVL measurements were obtained to determine the effect of tungsten deposited on the x-ray tube windows. This work, along with the HVL data from approximately 200 functioning x-ray tubes used for all applications that were monitored for more than 8 years, indicated there is no significant increase in HVL with diagnostic x-ray tube use.
Holistic approach for overlay and edge placement error to meet the 5nm technology node requirements
NASA Astrophysics Data System (ADS)
Mulkens, Jan; Slachter, Bram; Kubis, Michael; Tel, Wim; Hinnen, Paul; Maslow, Mark; Dillen, Harm; Ma, Eric; Chou, Kevin; Liu, Xuedong; Ren, Weiming; Hu, Xuerang; Wang, Fei; Liu, Kevin
2018-03-01
In this paper, we discuss the metrology methods and error budget that describe the edge placement error (EPE). EPE quantifies the pattern fidelity of a device structure made in a multi-patterning scheme. Here the pattern is the result of a sequence of lithography and etching steps, and consequently the contour of the final pattern contains error sources of the different process steps. EPE is computed by combining optical and e-beam metrology data. We show that a high-NA optical scatterometer can be used to densely measure in-device CD and overlay errors. A large-field e-beam system enables massive CD metrology, which is used to characterize the local CD error. The local CD distribution needs to be characterized beyond 6 sigma, which requires a high-throughput e-beam system. We present in this paper the first images of a multi-beam e-beam inspection system. We discuss our holistic patterning optimization approach to understand and minimize the EPE of the final pattern. As a use case, we evaluated a 5-nm logic patterning process based on Self-Aligned Quadruple Patterning (SAQP) using ArF lithography, combined with line cut exposures using EUV lithography.
Control of secondary electrons from ion beam impact using a positive potential electrode
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crowley, T. P., E-mail: tpcrowley@xanthotechnologies.com; Demers, D. R.; Fimognari, P. J.
2016-11-15
Secondary electrons emitted when an ion beam impacts a detector can amplify the ion beam signal, but also introduce errors if electrons from one detector propagate to another. A potassium ion beam and a detector comprised of ten impact wires, four split-plates, and a pair of biased electrodes were used to demonstrate that a low-voltage, positive electrode can be used to maintain the beneficial amplification effect while greatly reducing the error introduced from the electrons traveling between detector elements.
NASA Astrophysics Data System (ADS)
Dhote, Sharvari; Zu, Jean; Zhu, Yang
2015-04-01
In this paper, a nonlinear wideband multi-mode piezoelectric vibration-based energy harvester (PVEH) is proposed based on a compliant orthoplanar spring (COPS), which has the advantage of providing multiple vibration modes at relatively low frequencies. The PVEH is made of a tri-leg COPS flexible structure, where three fixed-guided beams are capable of generating strong nonlinear oscillations under certain base excitations. A prototype harvester was fabricated and investigated through both finite-element analysis and experiments. The frequency response shows multiple resonances, which correspond to a hardening type of nonlinear resonance. By adding masses at different locations on the COPS structure, the first three vibration modes are brought close to each other, where the three hardening nonlinear resonances provide a wide bandwidth for the PVEH. The proposed PVEH shows enhanced energy-harvesting performance in terms of a wide frequency bandwidth and a high voltage output under base excitation.
NASA Astrophysics Data System (ADS)
Deka, A. J.; Bharathi, P.; Pandya, K.; Bandyopadhyay, M.; Bhuyan, M.; Yadav, R. K.; Tyagi, H.; Gahlaut, A.; Chakraborty, A.
2018-01-01
The Doppler Shift Spectroscopy (DSS) diagnostic is in the conceptual stage to estimate beam divergence, stripping losses, and beam uniformity of the 100 keV hydrogen Diagnostic Neutral Beam of the International Thermonuclear Experimental Reactor. This DSS diagnostic is used to measure the above-mentioned parameters with an error of less than 10%. To aid the design calculations and to establish a methodology for estimation of the beam divergence, DSS measurements were carried out on the existing prototype ion source RF Operated Beam Source in India for Negative ion Research. Emissions of the fast-excited neutrals that are generated from the extracted negative ions were collected in the target tank, and the line broadening of these emissions was used for estimating beam divergence. The observed broadening is a convolution of broadenings due to beam divergence, collection optics, voltage ripple, beam focusing, and instrumental broadening. Hence, for estimating the beam divergence from the observed line broadening, a systematic line profile analysis was performed. To minimize the error in the divergence measurements, a study on error propagation in the beam divergence measurements was carried out and the error was estimated. The measurements of beam divergence were done at a constant RF power of 50 kW and a source pressure of 0.6 Pa by varying the extraction voltage from 4 kV to 10 kV and the acceleration voltage from 10 kV to 15 kV. These measurements were then compared with the calorimetric divergence, and the results agree within 10%. A minimum beam divergence of ˜3° was obtained when the source was operated at an extraction voltage of ˜5 kV and at a ˜10 kV acceleration voltage, i.e., at a total applied voltage of 15 kV. This is in agreement with the values reported in experiments carried out on similar sources elsewhere.
Symmetry limit theory for cantilever beam-columns subjected to cyclic reversed bending
NASA Astrophysics Data System (ADS)
Uetani, K.; Nakamura, Tsuneyoshi
The behavior of a linear strain-hardening cantilever beam-column subjected to a new idealized program of completely reversed plastic bending under constant axial compression consists of three stages: a sequence of symmetric steady states, a subsequent sequence of asymmetric steady states and a divergent behavior involving unbounded growth of an anti-symmetric deflection mode. A new concept, the "symmetry limit," is introduced here as the smallest critical value of the tip-deflection amplitude at which transition from a symmetric steady state to an asymmetric steady state can occur in the response of a beam-column. A new theory is presented for predicting the symmetry limits. Although this transition phenomenon is phenomenologically and conceptually different from the branching phenomenon on an equilibrium path, it is shown that a symmetry limit may theoretically be regarded as a branching point on a newly defined "steady-state path." The symmetry limit theory and the fundamental hypotheses are verified through numerical analysis of hysteretic responses of discretized beam-column models.
Shading correction assisted iterative cone-beam CT reconstruction
NASA Astrophysics Data System (ADS)
Yang, Chunlin; Wu, Pengwei; Gong, Shutao; Wang, Jing; Lyu, Qihui; Tang, Xiangyang; Niu, Tianye
2017-11-01
Recent advances in total variation (TV) technology enable accurate CT image reconstruction from highly under-sampled and noisy projection data. The standard iterative reconstruction algorithms, which work well in conventional CT imaging, fail to perform as expected in cone beam CT (CBCT) applications, wherein the non-ideal physics issues, including scatter and beam hardening, are more severe. These physics issues result in large areas of shading artifacts and cause deterioration of the piecewise constant property assumed in reconstructed images. To overcome this obstacle, we incorporate a shading correction scheme into low-dose CBCT reconstruction and propose a clinically acceptable and stable three-dimensional iterative reconstruction method that is referred to as the shading correction assisted iterative reconstruction. In the proposed method, we modify the TV regularization term by adding a shading compensation image to the reconstructed image to compensate for the shading artifacts while leaving the data fidelity term intact. This compensation image is generated empirically, using image segmentation and low-pass filtering, and updated in the iterative process whenever necessary. When the compensation image is determined, the objective function is minimized using the fast iterative shrinkage-thresholding algorithm accelerated on a graphics processing unit. The proposed method is evaluated using CBCT projection data of the Catphan© 600 phantom and two pelvis patients. Compared with the iterative reconstruction without shading correction, the proposed method reduces the overall CT number error from around 200 HU to around 25 HU and increases the spatial uniformity by 20 percent, given the same number of sparsely sampled projections. A clinically acceptable and stable iterative reconstruction algorithm for CBCT is proposed in this paper. Differing from the existing algorithms, this algorithm incorporates a shading correction scheme into the low-dose CBCT reconstruction and achieves a more stable optimization path and a more clinically acceptable reconstructed image. The proposed method does not rely on prior information and is thus practically attractive for low-dose CBCT imaging applications in the clinic.
Optical Testing of Retroreflectors for Cryogenic Applications
NASA Technical Reports Server (NTRS)
Ohl, Raymond G.; Frey, Bradley J.; Stock, Joseph M.; McMann, Joseph C.; Zukowiski, Tmitri J.
2010-01-01
A laser tracker (LT) is an important coordinate metrology tool that uses laser interferometry to determine precise distances to objects, points, or surfaces defined by an optical reference, such as a retroreflector. A retroreflector is a precision optic consisting of three orthogonal faces that returns an incident laser beam nearly exactly parallel to the incident beam. Commercial retroreflectors are designed for operation at room temperature and are specified by the divergence, or beam deviation, of the returning laser beam, usually a few arcseconds or less. When a retroreflector goes to extreme cold (~35 K), however, it could be anticipated that the precision alignment between the three faces and the surface figure of each face would be compromised, resulting in wavefront errors and beam divergence, degrading the accuracy of the LT position determination. Controlled tests must be done beforehand to determine survivability and these LT coordinate errors. Since conventional interferometer systems and laser trackers do not operate in vacuum or at cold temperatures, measurements must be done through a vacuum window, and care must be taken to ensure window-induced errors are negligible, or can be subtracted out. Retroreflector holders must be carefully designed to minimize thermally induced stresses. Changes in the path length and refractive index of the retroreflector have to be considered. Cryogenic vacuum testing was done on commercial solid glass retroreflectors for use on cryogenic metrology tasks. The capabilities to measure wavefront errors, measure beam deviations, and acquire laser tracker coordinate data were demonstrated. Measurable but relatively small increases in beam deviation were shown, and further tests are planned to make an accurate determination of coordinate errors.
Single event upset in avionics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taber, A.; Normand, E.
1993-04-01
Data from military/experimental flights and laboratory testing indicate that typical non-radiation-hardened 64K and 256K static random access memories (SRAMs) can experience a significant soft upset rate at aircraft altitudes due to energetic neutrons created by cosmic ray interactions in the atmosphere. It is suggested that error detection and correction (EDAC) circuitry be considered for all avionics designs containing large amounts of semiconductor memory.
Design, fabrication, testing, and delivery of improved beam steering devices
NASA Technical Reports Server (NTRS)
1973-01-01
The development, manufacture, and testing of an optical steerer intended for use in spaceborne optical radar systems are described. Included are design principles and design modifications made to harden the device against launch and space environments, the quality program and procedures developed to insure consistent product quality throughout the manufacturing phase, and engineering qualification model testing and evaluation. The delivered hardware design is considered conditionally qualified pending action on further recommended design modifications.
NASA Astrophysics Data System (ADS)
Galdos, L.; Saenz de Argandoña, E.; Mendiguren, J.; Silvestre, E.
2017-09-01
Roll levelling is a flattening process used to remove the residual stresses and imperfections of metal strips by means of plastic deformations. During the process, the metal sheet is subjected to cyclic tension-compression deformations leading to a flat product. The process is especially important to avoid final geometrical errors when coils are cold formed or when thick plates are cut by laser. In recent years, due to the appearance of high strength materials such as Ultra High Strength Steels, machine design engineers are demanding reliable tools for the dimensioning of the levelling facilities. As in other metal forming fields, finite element analysis seems to be the most widely used solution to understand the occurring phenomena and to calculate the processing loads. In this paper, the roll levelling process of the third generation Fortiform 1050 steel is numerically analysed. The process has been studied using the MSC MARC software and two different material laws. A pure isotropic hardening law has been used and set as the baseline study. In the second part, tension-compression tests have been carried out to analyse the cyclic behaviour of the steel. With the obtained data, a new material model using a combined isotropic-kinematic hardening formulation has been fitted. Finally, the influence of the material model on the numerical results has been analysed by comparing the pure isotropic model and the latter combined isotropic-kinematic hardening model.
Keystroke Dynamics-Based Credential Hardening Systems
NASA Astrophysics Data System (ADS)
Bartlow, Nick; Cukic, Bojan
Keystroke dynamics are becoming a well-known method for strengthening username- and password-based credential sets. The familiarity and ease of use of these traditional authentication schemes, combined with the increased trustworthiness associated with biometrics, make them prime candidates for application in many web-based scenarios. Our keystroke dynamics system uses Breiman's random forests algorithm to classify keystroke input sequences as genuine or imposter. The system is capable of operating at various points on a traditional ROC curve depending on application-specific security needs. As a username/password authentication scheme, our approach decreases the system penetration rate associated with compromised passwords by up to 99.15%. Beyond presenting results demonstrating the credential hardening effect of our scheme, we look into the notion that a user's familiarity with components of a credential set can non-trivially impact error rates.
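A minimal sketch of the classification step described above, using a random forest to label keystroke-timing feature vectors as genuine or imposter, is given below; the feature layout and synthetic data are placeholders and do not reproduce the paper's feature extraction or dataset.

```python
# Hedged sketch: random forest classification of keystroke-timing features
# as genuine or imposter, with a tunable acceptance threshold (operating
# point on the ROC curve). Data below are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# Hypothetical features: hold times and inter-key latencies (ms) per credential entry.
genuine = rng.normal(loc=120.0, scale=15.0, size=(200, 20))
imposter = rng.normal(loc=150.0, scale=40.0, size=(200, 20))
X = np.vstack([genuine, imposter])
y = np.array([1] * 200 + [0] * 200)          # 1 = genuine, 0 = imposter

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
scores = clf.predict_proba(X)[:, 1]          # genuine-class probability

# The acceptance threshold can be tuned to application-specific security needs.
threshold = 0.7
accepted = scores >= threshold
print(f"Accepted {accepted.sum()} of {len(scores)} attempts at threshold {threshold}")
```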
Quantum Error Correction with a Globally-Coupled Array of Neutral Atom Qubits
2013-02-01
This report describes the development and implementation of an array of neutral atom qubits in optical traps for studies of quantum error correction; the apparatus includes a magneto-optical trap located at the center of the science cell.
Dual energy approach for cone beam artifacts correction
NASA Astrophysics Data System (ADS)
Han, Chulhee; Choi, Shinkook; Lee, Changwoo; Baek, Jongduk
2017-03-01
Cone beam computed tomography systems generate 3D volumetric images, which provide further morphological information compared to radiography and tomosynthesis systems. However, images reconstructed by the FDK algorithm contain cone beam artifacts when the cone angle is large. To reduce the cone beam artifacts, a two-pass algorithm has been proposed. The two-pass algorithm assumes that the cone beam artifacts are mainly caused by high-density materials and proposes an effective method to estimate the error images (i.e., cone beam artifact images) produced by those materials. While this approach is simple and effective with a small cone angle (i.e., 5-7 degrees), the correction performance is degraded as the cone angle increases. In this work, we propose a new method to reduce the cone beam artifacts using a dual energy technique. The basic idea of the proposed method is to estimate the error images generated by the high-density materials more reliably. To do this, projection data of the high-density materials are extracted from dual energy CT projection data using a material decomposition technique, and then reconstructed by iterative reconstruction using total-variation regularization. The reconstructed high-density materials are used to estimate the error images from the original FDK images. The performance of the proposed method is compared with the two-pass algorithm using root mean square errors. The results show that the proposed method reduces the cone beam artifacts more effectively, especially with a large cone angle.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bakhtiari, M; Schmitt, J
2014-06-01
Purpose: Cylindrical and rectangular scanning water tanks are examined with different scanning speeds to investigate the TG-106 criteria and the errors induced in the measurements. Methods: Beam profiles were measured at a depth of R50 for a low-energy electron beam (6 MeV) using rectangular and cylindrical tanks. The speeds of the measurements (arm movement) were varied in different profile measurements. Each profile was measured at a certain speed to obtain the average and standard deviation as a parameter for investigating the reproducibility and errors. Results: At arm speeds of ∼0.8 mm/s the errors were as large as 2% and 1% with rectangular and cylindrical tanks, respectively. The errors for electron beams and for photon beams at other depths were within the TG-106 criterion of 1% for both tank shapes. Conclusion: The measurements of low-energy electron beams at a depth of R50, as an extreme case scenario, are sensitive to the speed of the measurement arms for both rectangular and cylindrical tanks. The measurements at other depths, for electron beams and photon beams, with arm speeds of less than 1 cm/s are within the TG-106 criteria. An arm speed of 5 mm/s appeared to be optimal for fast and accurate measurements for both cylindrical and rectangular tanks.
WE-FG-207B-06: Plaque Composition Measurement with Dual Energy Computed Tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, C; Ding, H; Malkasian, S
Purpose: To investigate the feasibility of characterizing arterial plaque composition in terms of water, lipid and protein or calcium using dual energy computed tomography. Characterization of plaque composition can potentially help distinguish vulnerable from stable plaques. Methods: Simulation studies were performed with a CT simulator based on the ASTRA tomography toolbox. The beam energies for dual energy images were selected to be 80 kVp and 135 kVp. The radiation dose and energy spectrum for the CT simulator were carefully calibrated with respect to a 320-slice CT scanner. A digital chest phantom was constructed using Matlab for calibration and plaque measurement. Pure water, lipid, protein or calcium was used for calibration, and mixtures of different volume percentages of these materials were used for validation purposes. Non-calcified plaque was simulated using water, lipid and protein with volumetric percentage ranges of 35%∼65%, 5%∼60% and 5%∼40%, respectively. Calcified plaque was simulated using water, lipid and calcium with volumetric percentage ranges of 50%∼80%, 8%∼45% and 3%∼13%, respectively. We employed iterative sinogram processing (ISP) to reduce the beam hardening effect in the simulation to improve the decomposition results. Results: The simulated known composition and dual energy decomposition results were in good agreement. Water, lipid and protein (calcium) mixtures were decomposed into water, lipid and protein (calcium) contents. The RMS errors of volumetric percentage for the water, lipid and protein (non-calcified plaque) decomposition, as compared to known values, were estimated to be approximately 5.74%, 2.54%, and 0.95%, respectively. The RMS errors of volumetric percentage for the water, lipid and calcium (calcified plaque) decomposition, as compared to known values, were estimated to be approximately 7.4%, 8.64%, and 0.08%, respectively. Conclusion: The results of this study suggest that the dual energy decomposition can potentially be used to quantify the water, lipid, and protein or calcium composition of a plaque with relatively good accuracy. Grant funding from Toshiba Medical Systems and Philips Medical Systems.
Ion beam machining error control and correction for small scale optics.
Xie, Xuhui; Zhou, Lin; Dai, Yifan; Li, Shengyi
2011-09-20
Ion beam figuring (IBF) technology for small scale optical components is discussed. Since a small removal function can be obtained in IBF, computer-controlled optical surfacing technology can machine precision centimeter- or millimeter-scale optical components deterministically. When using a small ion beam to machine small optical components, some key problems must be seriously considered, such as positioning the small ion beam on the optical surface, the material removal rate, and the control of the ion beam scanning pitch on the optical surface. A small ion beam is more sensitive to these problems than a large one because of its smaller beam diameter and lower material removal rate. In this paper, we discuss these problems and their influences on machining small optical components in detail. Based on the identification-compensation principle, an iterative machining compensation method is deduced for correcting the positioning error of the ion beam, with the material removal rate estimated at a selected optimal scanning pitch. Experiments were performed on ϕ10 mm Zerodur planar and spherical samples, and the final surface errors, measured with a Zygo GPI interferometer, are both smaller than λ/100.
Online beam energy measurement of Beijing electron positron collider II linear accelerator
NASA Astrophysics Data System (ADS)
Wang, S.; Iqbal, M.; Liu, R.; Chi, Y.
2016-02-01
This paper describes the online beam energy measurement of the upgraded Beijing Electron Positron Collider II linear accelerator (linac). It presents the calculation formula, gives the error analysis in detail, discusses the realization in practice, and provides some verification. The method measures the beam energy by acquiring the horizontal beam position with three beam position monitors (BPMs), which eliminates the effect of orbit fluctuation and is much better than using a single BPM. The error analysis indicates that this online measurement has further potential uses, such as forming part of a beam energy feedback system. The reliability of this method is also discussed and demonstrated in this paper.
Radiation-induced refraction artifacts in the optical CT readout of polymer gel dosimeters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campbell, Warren G.; Jirasek, Andrew, E-mail: jirasek@uvic.ca; Wells, Derek M.
2014-11-01
Purpose: The objective of this work is to demonstrate imaging artifacts that can occur during the optical computed tomography (CT) scanning of polymer gel dosimeters due to radiation-induced refractive index (RI) changes in polyacrylamide gels. Methods: A 1 L cylindrical polyacrylamide gel dosimeter was irradiated with 3 × 3 cm² square beams of 6 MV photons. A prototype fan-beam optical CT scanner was used to image the dosimeter. Investigative optical CT scans were performed to examine two types of rayline bending: (i) bending within the plane of the fan-beam and (ii) bending out of the plane of the fan-beam. To address structured errors, an iterative Savitzky–Golay (ISG) filtering routine was designed to filter 2D projections in sinogram space. For comparison, 2D projections were alternatively filtered using an adaptive-mean (AM) filter. Results: In-plane rayline bending was most notably observed in optical CT projections where rays of the fan-beam confronted a sustained dose gradient that was perpendicular to their trajectory but within the fan-beam plane. These errors caused distinct streaking artifacts in image reconstructions due to the refraction of higher intensity rays toward more opaque regions of the dosimeter. Out-of-plane rayline bending was observed in slices of the dosimeter that featured dose gradients perpendicular to the plane of the fan-beam. These errors caused widespread, severe overestimations of dose in image reconstructions due to the higher-than-actual opacity that is perceived by the scanner when light is bent off of the detector array. The ISG filtering routine outperformed AM filtering for both in-plane and out-of-plane rayline errors caused by radiation-induced RI changes. For in-plane rayline errors, streaks in an irradiated region (>7 Gy) were as high as 49% for unfiltered data, 14% for AM, and 6% for ISG. For out-of-plane rayline errors, overestimations of dose in a low-dose region (∼50 cGy) were as high as 13 Gy for unfiltered data, 10 Gy for AM, and 3.1 Gy for ISG. The ISG routine also addressed unrelated artifacts that previously needed to be manually removed in sinogram space. However, the ISG routine blurred reconstructions, causing losses in spatial resolution of ∼5 mm in the plane of the fan-beam and ∼8 mm perpendicular to the fan-beam. Conclusions: This paper reveals a new category of imaging artifacts that can affect the optical CT readout of polyacrylamide gel dosimeters. Investigative scans show that radiation-induced RI changes can cause significant rayline errors when rays confront a prolonged dose gradient that runs perpendicular to their trajectory. In fan-beam optical CT, these errors manifested in two ways: (1) distinct streaking artifacts caused by in-plane rayline bending and (2) severe overestimations of opacity caused by rays bending out of the fan-beam plane and missing the detector array. Although the ISG filtering routine mitigated these errors better than an adaptive-mean filtering routine, it caused unacceptable losses in spatial resolution.
Radiation-induced refraction artifacts in the optical CT readout of polymer gel dosimeters.
Campbell, Warren G; Wells, Derek M; Jirasek, Andrew
2014-11-01
The objective of this work is to demonstrate imaging artifacts that can occur during the optical computed tomography (CT) scanning of polymer gel dosimeters due to radiation-induced refractive index (RI) changes in polyacrylamide gels. A 1 L cylindrical polyacrylamide gel dosimeter was irradiated with 3 × 3 cm² square beams of 6 MV photons. A prototype fan-beam optical CT scanner was used to image the dosimeter. Investigative optical CT scans were performed to examine two types of rayline bending: (i) bending within the plane of the fan-beam and (ii) bending out of the plane of the fan-beam. To address structured errors, an iterative Savitzky-Golay (ISG) filtering routine was designed to filter 2D projections in sinogram space. For comparison, 2D projections were alternatively filtered using an adaptive-mean (AM) filter. In-plane rayline bending was most notably observed in optical CT projections where rays of the fan-beam confronted a sustained dose gradient that was perpendicular to their trajectory but within the fan-beam plane. These errors caused distinct streaking artifacts in image reconstructions due to the refraction of higher intensity rays toward more opaque regions of the dosimeter. Out-of-plane rayline bending was observed in slices of the dosimeter that featured dose gradients perpendicular to the plane of the fan-beam. These errors caused widespread, severe overestimations of dose in image reconstructions due to the higher-than-actual opacity that is perceived by the scanner when light is bent off of the detector array. The ISG filtering routine outperformed AM filtering for both in-plane and out-of-plane rayline errors caused by radiation-induced RI changes. For in-plane rayline errors, streaks in an irradiated region (>7 Gy) were as high as 49% for unfiltered data, 14% for AM, and 6% for ISG. For out-of-plane rayline errors, overestimations of dose in a low-dose region (∼50 cGy) were as high as 13 Gy for unfiltered data, 10 Gy for AM, and 3.1 Gy for ISG. The ISG routine also addressed unrelated artifacts that previously needed to be manually removed in sinogram space. However, the ISG routine blurred reconstructions, causing losses in spatial resolution of ∼5 mm in the plane of the fan-beam and ∼8 mm perpendicular to the fan-beam. This paper reveals a new category of imaging artifacts that can affect the optical CT readout of polyacrylamide gel dosimeters. Investigative scans show that radiation-induced RI changes can cause significant rayline errors when rays confront a prolonged dose gradient that runs perpendicular to their trajectory. In fan-beam optical CT, these errors manifested in two ways: (1) distinct streaking artifacts caused by in-plane rayline bending and (2) severe overestimations of opacity caused by rays bending out of the fan-beam plane and missing the detector array. Although the ISG filtering routine mitigated these errors better than an adaptive-mean filtering routine, it caused unacceptable losses in spatial resolution.
A Novel Method for Characterizing Beam Hardening Artifacts in Cone-beam Computed Tomographic Images.
Fox, Aaron; Basrani, Bettina; Kishen, Anil; Lam, Ernest W N
2018-05-01
The beam hardening (BH) artifact produced by root filling materials in cone-beam computed tomographic (CBCT) images is influenced by their radiologic K absorption edge values. The purpose of this study was to describe a novel technique to characterize BH artifacts in CBCT images produced by 3 root canal filling materials and to evaluate the effects of a zirconium (Zr)-based root filling material with a lower K edge (17.99 keV) on the production of BH artifacts. The palatal root canals of 3 phantom model teeth were prepared and root filled with gutta-percha (GP), a Zr root filling material, and calcium hydroxide paste. Each phantom tooth was individually imaged using the CS 9000 CBCT unit (Carestream, Atlanta, GA). The "light" and "dark" components of the BH artifacts were quantified separately using ImageJ software (National Institutes of Health, Bethesda, MD) in 3 regions of the root. Mixed-design analysis of variance was used to evaluate differences in the artifact area for the light and dark elements of the BH artifacts. A statistically significant difference in the area of the dark portion of the BH artifact was found between all fill materials and in all regions of the phantom tooth root (P < .05). GP generated a significantly greater dark but not light artifact area compared with Zr (P < .05). Moreover, statistically significant differences between the areas of both the light and dark artifacts were observed within all regions of the tooth root, with the greatest artifact being generated in the coronal third of the root (P < .001). Root canal filling materials with lower K edge material properties reduce BH artifacts along the entire length of the root canal and reduce the contribution of the dark artifact.
Correcting systematic errors in high-sensitivity deuteron polarization measurements
NASA Astrophysics Data System (ADS)
Brantjes, N. P. M.; Dzordzhadze, V.; Gebel, R.; Gonnella, F.; Gray, F. E.; van der Hoek, D. J.; Imig, A.; Kruithof, W. L.; Lazarus, D. M.; Lehrach, A.; Lorentz, B.; Messi, R.; Moricciani, D.; Morse, W. M.; Noid, G. A.; Onderwater, C. J. G.; Özben, C. S.; Prasuhn, D.; Levi Sandri, P.; Semertzidis, Y. K.; da Silva e Silva, M.; Stephenson, E. J.; Stockhorst, H.; Venanzoni, G.; Versolato, O. O.
2012-02-01
This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Jülich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10⁻⁵ for deliberately large errors. This may facilitate the real time observation of vector polarization changes smaller than 10⁻⁶ in a search for an electric dipole moment using a storage ring.
Investigation of chaos and its control in a Duffing-type nano beam model
NASA Astrophysics Data System (ADS)
Jha, Abhishek Kumar; Dasgupta, Sovan Sundar
2018-04-01
The prediction of chaos of a nano beam with harmonic excitation is investigated. Using the Galerkin method, the nonlinear lumped model of a clamped-clamped nano beam with nonlinear cubic stiffness is obtained. This is a Duffing system with hardening type of nonlinearity. Based on the energy function and the phase portrait of the system, the resonator dynamics is categorized into four situations. Using the Melnikov function, an analytical criterion for homoclinic intersection is written, in the form of an inequality, in terms of the system parameters. A numerical study including the largest Lyapunov exponent, Poincaré diagram, and phase portrait confirms the analytical prediction of chaos and the effect of forcing amplitude. Subsequently, a linear velocity feedback controller is introduced into the system to successfully control the chaotic motion, with faster suppression at larger values of the gain parameter.
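The lumped model described above is a forced Duffing oscillator with hardening cubic stiffness; the sketch below integrates such an oscillator and samples a Poincaré section, with illustrative coefficients rather than the nano-beam parameters derived in the paper.

```python
# Hedged sketch of a hardening Duffing oscillator with harmonic forcing, the
# lumped model family discussed above. Parameter values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

delta, alpha, beta = 0.1, 1.0, 5.0   # damping, linear and cubic (hardening) stiffness
F, omega = 2.0, 1.2                  # forcing amplitude and frequency (assumed)

def duffing(t, s):
    x, v = s
    return [v, -delta * v - alpha * x - beta * x**3 + F * np.cos(omega * t)]

t_end = 400.0
sol = solve_ivp(duffing, (0.0, t_end), [0.1, 0.0], max_step=0.01, dense_output=True)

# Poincaré section: sample the state once per forcing period after transients.
T = 2 * np.pi / omega
t_samples = np.arange(50 * T, t_end, T)
x_p, v_p = sol.sol(t_samples)
print("First Poincaré points (x, v):")
print(np.column_stack([x_p, v_p])[:5])
```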
NASA Astrophysics Data System (ADS)
Pankhurst, M. J.; Fowler, R.; Courtois, L.; Nonni, S.; Zuddas, F.; Atwood, R. C.; Davis, G. R.; Lee, P. D.
2018-01-01
We present new software allowing significantly improved quantitative mapping of the three-dimensional density distribution of objects using laboratory source polychromatic X-rays via a beam characterisation approach (cf. filtering or comparison to phantoms). One key advantage is that a precise representation of the specimen material is not required. The method exploits well-established, widely available, non-destructive and increasingly accessible laboratory-source X-ray tomography. Beam characterisation is performed in two stages: (1) projection data are collected through a range of known materials utilising a novel hardware design integrated into the rotation stage; and (2) a Python code optimises a spectral response model of the system. We provide hardware designs for use with a rotation stage able to be tilted, yet the concept is easily adaptable to virtually any laboratory system and sample, and implicitly corrects the image artefact known as beam hardening.
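The spectral-response optimisation described in stage (2) can be illustrated with a small non-negative least-squares fit: given transmissions measured through known thicknesses of a calibration material, recover discretised spectral weights. The energy grid, attenuation model, and data below are placeholders, not the published implementation.

```python
# Hedged sketch of beam characterisation: fit a discretised spectral response
# w(E) so that predicted transmissions through known calibration thicknesses
# match the measured ones. All values are illustrative.
import numpy as np
from scipy.optimize import nnls

energies = np.linspace(20.0, 120.0, 11)            # keV grid (assumed)
thicknesses = np.linspace(0.5, 10.0, 12)           # mm of a known calibration material

def mu_calib(E):
    # Toy energy-dependent attenuation coefficient (1/mm); a real fit would
    # use tabulated values (e.g. from standard attenuation databases).
    return 0.5 * (30.0 / E) ** 3 + 0.02

# Forward model: transmission_j = sum_i w_i * exp(-mu(E_i) * t_j)
A = np.exp(-np.outer(thicknesses, mu_calib(energies)))

true_w = np.exp(-0.5 * ((energies - 60.0) / 20.0) ** 2)
true_w /= true_w.sum()
measured = (A @ true_w) * (1 + 0.01 * np.random.default_rng(2).normal(size=len(thicknesses)))

w_fit, residual = nnls(A, measured)                # non-negative spectral weights
print("Fitted spectral weights:", np.round(w_fit / w_fit.sum(), 3))
```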
NASA Astrophysics Data System (ADS)
Husain, Riyasat; Ghodke, A. D.
2017-08-01
Estimation and correction of the optics errors in an operational storage ring is always vital to achieve the design performance. To achieve this task, the most suitable and widely used technique, called linear optics from closed orbit (LOCO), is used in almost all storage ring based synchrotron radiation sources. In this technique, based on the response matrix fit, errors in the quadrupole strengths, beam position monitor (BPM) gains, orbit corrector calibration factors, etc. can be obtained. For correction of the optics, suitable changes in the quadrupole strengths can be applied through the driving currents of the quadrupole power supplies to achieve the desired optics. The LOCO code has been used at the Indus-2 storage ring for the first time. The estimation of linear beam optics errors and their correction, to minimize the distortion of linear beam dynamical parameters using the installed quadrupole power supplies, are discussed. After the optics correction, the performance of the storage ring is improved in terms of better beam injection/accumulation, reduced beam loss during energy ramping, and improvement in beam lifetime. It is also useful in controlling the leakage in the orbit bump required for machine studies or for commissioning of new beamlines.
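The core of a LOCO-type fit is a linearised least-squares solution for quadrupole strength errors from the measured orbit response matrix; a hedged sketch with synthetic stand-ins for the lattice model follows.

```python
# Hedged sketch of a LOCO-type fit: linearise the measured orbit response
# matrix about the model and solve for quadrupole strength errors in a
# least-squares sense. Matrices are synthetic stand-ins for a lattice model.
import numpy as np

rng = np.random.default_rng(3)
n_bpm, n_corr, n_quad = 20, 10, 8

R_model = rng.normal(size=(n_bpm, n_corr))             # model response matrix (stand-in)
# dR/dK_q: sensitivity of each response-matrix element to each quadrupole strength.
dR_dK = rng.normal(size=(n_quad, n_bpm, n_corr))

dK_true = 0.01 * rng.normal(size=n_quad)                # "real" quadrupole errors
R_meas = R_model + np.tensordot(dK_true, dR_dK, axes=1) # measured response matrix

# Stack the residual R_meas - R_model and solve J * dK = residual by least squares.
J = dR_dK.reshape(n_quad, -1).T                         # (n_bpm*n_corr, n_quad) Jacobian
resid = (R_meas - R_model).ravel()
dK_fit, *_ = np.linalg.lstsq(J, resid, rcond=None)

# The correction applied to the machine is the negative of the fitted errors.
print("Fitted quadrupole strength errors:", np.round(dK_fit, 4))
```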
Cai, C; Rodet, T; Legoupil, S; Mohammad-Djafari, A
2013-11-01
Dual-energy computed tomography (DECT) makes it possible to get two fractions of basis materials without segmentation. One is the soft-tissue equivalent water fraction and the other is the hard-matter equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models that account for the beam polychromaticity show great potential for giving accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative log. Referring to Bayesian inferences, the decomposition fractions and observation variance are estimated by using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also necessary to have accurate spectrum information about the source-detector system. When dealing with experimental data, the spectrum can be predicted by a Monte Carlo simulator. For the materials between water and bone, less than 5% separation errors are observed on the estimated decomposition fractions. The proposed approach is a statistical reconstruction approach based on a nonlinear forward model accounting for the full beam polychromaticity and applied directly to the projections without taking the negative log. Compared to the approaches based on linear forward models and the BHA correction approaches, it has advantages in noise robustness and reconstruction accuracy.
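The nonlinear forward model that accounts for beam polychromaticity can be written compactly: each projection is a spectrally weighted sum of exponentials in the water and bone path lengths. The sketch below illustrates this model with assumed spectrum and attenuation values; it is not the authors' implementation.

```python
# Minimal sketch of the polychromatic forward model underlying the approach
# above: a spectrally weighted sum of exponentials in the water and bone
# path lengths, used without taking the negative log. Values are illustrative.
import numpy as np

energies = np.array([40.0, 60.0, 80.0, 100.0])          # keV bins (assumed)
spectrum = np.array([0.2, 0.4, 0.3, 0.1])                # normalised beam spectrum (assumed)
mu_water = np.array([0.027, 0.021, 0.018, 0.017])        # 1/mm, illustrative
mu_bone = np.array([0.090, 0.055, 0.042, 0.037])         # 1/mm, illustrative

def polychromatic_projection(a_water, a_bone):
    """Expected transmitted intensity for given water/bone path lengths (mm)."""
    return np.sum(spectrum * np.exp(-mu_water * a_water - mu_bone * a_bone))

# A monoenergetic (linear) model would predict exp(-mu_eff * t); the spectral
# sum below is what produces beam hardening if that linear model is assumed.
print(polychromatic_projection(a_water=100.0, a_bone=20.0))
```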
Sakamoto, S; Kiger, W S; Harling, O K
1999-09-01
Sensitivity studies of epithermal neutron beam performance in boron neutron capture therapy are presented for realistic neutron beams with varying filter/moderator and collimator/delimiter designs to examine the relative importance of neutron beam spectrum, directionality, and size. Figures of merit for in-air and in-phantom beam performance are calculated via the Monte Carlo technique for different well-optimized designs of a fission converter-based epithermal neutron beam with head phantoms as the irradiation target. It is shown that increasing J/φ, a measure of beam directionality, does not always lead to corresponding monotonic improvements in beam performance. Due to the relatively low significance, for most configurations, of its effect on in-phantom performance and the large intensity losses required to produce beams with very high J/φ, beam directionality should not be considered an important figure of merit in epithermal neutron beam design except in terms of its consequences on patient positioning and collateral dose. Hardening the epithermal beam spectrum, while maintaining the specific fast neutron dose well below the inherent hydrogen capture dose, improves beam penetration and advantage depth and, as a desirable by-product, significantly increases beam intensity. Beam figures of merit are shown to be strongly dependent on beam size relative to target size. Beam designs with J/φ approximately 0.65-0.7, specific fast neutron doses of 2-2.6×10⁻¹³ Gy cm²/n and beam sizes equal to or larger than the size of the head target produced the deepest useful penetration, highest therapeutic ratios, and highest intensities.
Five-Year Wilkinson Microwave Anisotropy Probe (WMAP)Observations: Beam Maps and Window Functions
NASA Technical Reports Server (NTRS)
Hill, R.S.; Weiland, J.L.; Odegard, N.; Wollack, E.; Hinshaw, G.; Larson, D.; Bennett, C.L.; Halpern, M.; Kogut, A.; Page, L.;
2008-01-01
Cosmology and other scientific results from the WMAP mission require an accurate knowledge of the beam patterns in flight. While the degree of beam knowledge for the WMAP one-year and three-year results was unprecedented for a CMB experiment, we have significantly improved the beam determination as part of the five-year data release. Physical optics fits are done on both the A and the B sides for the first time. The cutoff scale of the fitted distortions on the primary mirror is reduced by a factor of approximately 2 from previous analyses. These changes enable an improvement in the hybridization of Jupiter data with beam models, which is optimized with respect to error in the main beam solid angle. An increase in main-beam solid angle of approximately 1% is found for the V2 and W1-W4 differencing assemblies. Although the five-year results are statistically consistent with previous ones, the errors in the five-year beam transfer functions are reduced by a factor of approximately 2 as compared to the three-year analysis. We present radiometry of the planet Jupiter as a test of the beam consistency and as a calibration standard; for an individual differencing assembly, errors in the measured disk temperature are approximately 0.5%.
Lee, It Ee; Ghassemlooy, Zabih; Ng, Wai Pang; Khalighi, Mohammad-Ali
2013-02-01
Joint beam width and spatial coherence length optimization is proposed to maximize the average capacity in partially coherent free-space optical links, under the combined effects of atmospheric turbulence and pointing errors. An optimization metric is introduced to enable feasible translation of the joint optimal transmitter beam parameters into an analogous level of divergence of the received optical beam. Results show that near-ideal average capacity is best achieved through the introduction of a larger receiver aperture and the joint optimization technique.
NASA Astrophysics Data System (ADS)
Zhang, Hao; Yuan, Yan; Su, Lijuan; Huang, Fengzhen; Bai, Qing
2016-09-01
The Risley-prism-based light beam steering apparatus delivers superior pointing accuracy and is used in imaging LIDAR and imaging microscopes. A general model for pointing error analysis of Risley prisms is proposed in this paper, based on ray direction deviation in light refraction. This model captures incident beam deviation, assembly deflections, and prism rotational error. We first derive the transmission matrices of the model. Then, the independent and cumulative effects of different errors are analyzed through this model. An accuracy study of the model shows that the prediction deviation of the pointing error is less than 4.1×10⁻⁵° for each error source when the error amplitude is 0.1°. Detailed analyses of the errors indicate that different error sources affect the pointing accuracy to varying degrees, and the major error source is the incident beam deviation. Prism tilt has a relatively large effect on the pointing accuracy when the prism tilts in the principal section. The cumulative-effect analyses of multiple errors show that the pointing error can be reduced by tuning the bearing tilts in the same direction. The cumulative effect of rotational error is relatively large when the difference between the two prism rotational angles equals 0 or π, while it is relatively small when the difference equals π/2. These results suggest that our analysis can help uncover the error distribution and aid in measurement calibration of Risley-prism systems.
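The elementary operation in such ray-tracing models is refraction in vector form; a hedged sketch of a vector Snell's law routine is given below, with illustrative indices and directions.

```python
# Hedged sketch of refraction in vector form, the basic operation used by
# ray-tracing models of Risley-prism pointing error. Values are illustrative.
import numpy as np

def refract(d, n, n1, n2):
    """Vector-form Snell's law.

    d : unit incident ray direction; n : unit surface normal pointing into
    the incident medium; n1/n2 : refractive indices. Returns the refracted
    unit direction, or None for total internal reflection.
    """
    d = d / np.linalg.norm(d)
    n = n / np.linalg.norm(n)
    r = n1 / n2
    cos_i = -np.dot(n, d)
    sin2_t = r**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None                          # total internal reflection
    return r * d + (r * cos_i - np.sqrt(1.0 - sin2_t)) * n

# Example: a ray hitting a glass surface (n = 1.517, assumed) at ~10 degrees.
d_in = np.array([np.sin(np.radians(10.0)), 0.0, np.cos(np.radians(10.0))])
normal = np.array([0.0, 0.0, -1.0])
print(refract(d_in, normal, 1.0, 1.517))
```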
NASA Astrophysics Data System (ADS)
Humphries, T.; Winn, J.; Faridani, A.
2017-08-01
Recent work in CT image reconstruction has seen increasing interest in the use of total variation (TV) and related penalties to regularize problems involving reconstruction from undersampled or incomplete data. Superiorization is a recently proposed heuristic which provides an automatic procedure to ‘superiorize’ an iterative image reconstruction algorithm with respect to a chosen objective function, such as TV. Under certain conditions, the superiorized algorithm is guaranteed to find a solution that is as satisfactory as any found by the original algorithm with respect to satisfying the constraints of the problem; this solution is also expected to be superior with respect to the chosen objective. Most work on superiorization has used reconstruction algorithms which assume a linear measurement model, which in the case of CT corresponds to data generated from a monoenergetic x-ray beam. Many CT systems generate x-rays from a polyenergetic spectrum, however, in which the measured data represent an integral of object attenuation over all energies in the spectrum. This inconsistency with the linear model produces the well-known beam hardening artifacts, which impair analysis of CT images. In this work we superiorize an iterative algorithm for reconstruction from polyenergetic data, using both TV and an anisotropic TV (ATV) penalty. We apply the superiorized algorithm in numerical phantom experiments modeling both sparse-view and limited-angle scenarios. In our experiments, the superiorized algorithm successfully finds solutions which are as constraints-compatible as those found by the original algorithm, with significantly reduced TV and ATV values. The superiorized algorithm thus produces images with greatly reduced sparse-view and limited angle artifacts, which are also largely free of the beam hardening artifacts that would be present if a superiorized version of a monoenergetic algorithm were used.
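A minimal sketch of the superiorization loop described above is given here: TV-reducing perturbations with summable step sizes are interleaved with the algorithmic (feasibility) step. The Landweber-style `feasibility_step` and the tiny test problem are placeholder assumptions standing in for the authors' polyenergetic reconstruction algorithm.

```python
import numpy as np

def tv_gradient(x, eps=1e-8):
    """Gradient of a smoothed isotropic total-variation penalty for a 2-D image."""
    gx = np.diff(x, axis=0, append=x[-1:, :])
    gy = np.diff(x, axis=1, append=x[:, -1:])
    mag = np.sqrt(gx**2 + gy**2 + eps)
    div_x = np.diff(gx / mag, axis=0, prepend=np.zeros((1, x.shape[1])))
    div_y = np.diff(gy / mag, axis=1, prepend=np.zeros((x.shape[0], 1)))
    return -(div_x + div_y)

def feasibility_step(x, A, b, lam=0.5):
    """Placeholder algorithmic operator: one relaxed Landweber step toward Ax = b."""
    r = b - A @ x.ravel()
    step = lam * A.T @ r / (np.linalg.norm(A, ord=2)**2 + 1e-12)
    return (x.ravel() + step).reshape(x.shape)

def superiorize(x0, A, b, n_iter=50, a=0.5, kernel=0.995, n_perturb=3):
    """Superiorized iteration: nonascending TV perturbations with summable step
    sizes, each followed by the original algorithm's feasibility step."""
    x, ell = x0.copy(), 0
    for _ in range(n_iter):
        for _ in range(n_perturb):
            g = tv_gradient(x)
            nrm = np.linalg.norm(g)
            if nrm > 0:
                x = x - a * kernel**ell * g / nrm
            ell += 1
        x = feasibility_step(x, A, b)
    return x

# tiny illustrative problem: recover an 8x8 piecewise-constant image from random projections
rng = np.random.default_rng(0)
truth = np.zeros((8, 8)); truth[2:6, 2:6] = 1.0
A = rng.normal(size=(40, 64))
b = A @ truth.ravel()
rec = superiorize(np.zeros((8, 8)), A, b)
print("reconstruction error:", np.linalg.norm(rec - truth))
```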
NASA Astrophysics Data System (ADS)
Sigurdardottir, Dorotea H.; Stearns, Jett; Glisic, Branko
2017-07-01
The deformed shape is a consequence of loading the structure and it is defined by the shape of the centroid line of the beam after deformation. The deformed shape is a universal parameter of beam-like structures. It is correlated with the curvature of the cross-section; therefore, any unusual behavior that affects the curvature is reflected through the deformed shape. Excessive deformations cause user discomfort, damage to adjacent structural members, and may ultimately lead to issues in structural safety. However, direct long-term monitoring of the deformed shape in real-life settings is challenging, and an alternative is indirect determination of the deformed shape based on curvature monitoring. The challenge of the latter is an accurate evaluation of error in the deformed shape determination, which is directly correlated with the number of sensors needed to achieve the desired accuracy. The aim of this paper is to study the deformed shape evaluated by numerical double integration of the monitored curvature distribution along the beam, and create a method to predict the associated errors and suggest the number of sensors needed to achieve the desired accuracy. The error due to the accuracy in the curvature measurement is evaluated within the scope of this work. Additionally, the error due to the numerical integration is evaluated. This error depends on the load case (i.e., the shape of the curvature diagram), the magnitude of curvature, and the density of the sensor network. The method is tested on a laboratory specimen and a real structure. In a laboratory setting, the double integration is in excellent agreement with the beam theory solution which was within the predicted error limits of the numerical integration. Consistent results are also achieved on a real structure—Streicker Bridge on Princeton University campus.
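In essence, the indirect approach amounts to numerically integrating the sampled curvature twice and enforcing the support boundary conditions. The following sketch illustrates this for a simply supported beam under uniform load; the sensor count and loading are chosen purely for illustration and are not the instrumentation used on Streicker Bridge.

```python
import numpy as np

def deflection_from_curvature(x, kappa):
    """Double-integrate a sampled curvature kappa(x) with the trapezoidal rule and
    enforce zero deflection at both ends, as for a simply supported beam."""
    # first integration: rotation theta(x), up to an unknown constant
    theta = np.concatenate(([0.0], np.cumsum(0.5 * (kappa[1:] + kappa[:-1]) * np.diff(x))))
    # second integration: deflection w(x), up to a linear term
    w = np.concatenate(([0.0], np.cumsum(0.5 * (theta[1:] + theta[:-1]) * np.diff(x))))
    # boundary conditions w(0) = w(L) = 0 fix the linear term
    c1 = -(w[-1] - w[0]) / (x[-1] - x[0])
    return w + c1 * (x - x[0])

# illustration: uniformly loaded simply supported beam, curvature = M/EI = q x (L - x) / (2 EI)
L, q_over_EI, n_sensors = 10.0, 1e-4, 9
x = np.linspace(0.0, L, n_sensors)
kappa = q_over_EI * x * (L - x) / 2.0
w = deflection_from_curvature(x, kappa)
w_exact = -q_over_EI * x * (L**3 - 2 * L * x**2 + x**3) / 24.0   # beam theory solution
print("max |numerical - beam theory| =", np.abs(w - w_exact).max())
```

Denser sensor networks shrink the numerical-integration error term, which is the trade-off between accuracy and sensor count discussed in the abstract.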
USSR Report Machine Tools and Metalworking Equipment.
1986-04-22
directors decided to teach the Bulat a new trade. This generator is now used to strengthen high-speed cutting mills by hardening them in a medium of...modules (GPM) and flexible production complexes (GPK). The flexible automated line is usually used for mass production of components. Here the...of programmable coordinates (without grip) 5 4 Method of programming teaching Memory capacity of robot system, points 300 Positioning error, mm
Improvements on the accuracy of beam bugs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Y.J.; Fessenden, T.
1998-08-17
At LLNL resistive wall monitors are used to measure the current and position of intense electron beams in electron induction linacs and beam transport lines. These, known locally as "beam bugs", have been used throughout linear induction accelerators as essential diagnostics of beam current and location. Recently, the development of a fast beam kicker has required improvement in the accuracy of measuring the position of beams. By picking off signals at more than the usual four positions around the monitor, beam position measurement error can be greatly reduced. A second significant source of error is the mechanical variation of the resistor around the bug. The beam bugs used on ETA-II show a droop in signal due to a fast redistribution time constant of the signals. This paper presents the analysis and experimental test of the beam bugs used for beam current and position measurements in and after the fast kicker. It concludes with an outline of present and future changes that can be made to improve the accuracy of these beam bugs.
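The gain from using more than four pickoffs can be illustrated with a first-harmonic (difference-over-sum) position estimate applied to a simple wall-current model; the monitor radius, beam offset, and per-probe gain errors below are assumptions, and the random gain error is only a crude stand-in for the resistor variation mentioned above.

```python
import numpy as np

rng = np.random.default_rng(1)
b = 5.0                      # assumed monitor (pipe) radius, cm
x_true, y_true = 0.4, -0.2   # assumed true beam offset, cm

def pickup_signals(n_probes, gain_err_rms=0.02):
    """Wall-current signal at n equally spaced azimuthal pickups for an offset beam,
    each with a random gain error standing in for resistor variation."""
    phi = 2 * np.pi * np.arange(n_probes) / n_probes
    r0, phi0 = np.hypot(x_true, y_true), np.arctan2(y_true, x_true)
    s = (b**2 - r0**2) / (b**2 + r0**2 - 2 * b * r0 * np.cos(phi - phi0))
    gains = 1.0 + gain_err_rms * rng.normal(size=n_probes)
    return phi, s * gains

def estimate_position(phi, s):
    """First-harmonic (difference-over-sum) estimate of the beam centroid."""
    return b * np.sum(s * np.cos(phi)) / np.sum(s), b * np.sum(s * np.sin(phi)) / np.sum(s)

for n in (4, 8, 16):
    errs = []
    for _ in range(2000):
        phi, s = pickup_signals(n)
        xe, ye = estimate_position(phi, s)
        errs.append((xe - x_true)**2 + (ye - y_true)**2)
    print(f"{n:2d} pickups: rms position error = {np.sqrt(np.mean(errs)):.4f} cm")
```

With the same per-probe gain error, the rms position error falls as the number of pickups grows, which is the effect exploited in the abstract.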
Improvements on the accuracy of beam bugs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Y J; Fessenden, T
1998-09-02
At LLNL resistive wall monitors are used to measure the current and position of intense electron beams in electron induction linacs and beam transport lines. These, known locally as "beam bugs", have been used throughout linear induction accelerators as essential diagnostics of beam current and location. Recently, the development of a fast beam kicker has required improvement in the accuracy of measuring the position of beams. By picking off signals at more than the usual four positions around the monitor, beam position measurement error can be greatly reduced. A second significant source of error is the mechanical variation of the resistor around the bug. The beam bugs used on ETA-II show a droop in signal due to a fast redistribution time constant of the signals. This paper presents the analysis and experimental test of the beam bugs used for beam current and position measurements in and after the fast kicker. It concludes with an outline of present and future changes that can be made to improve the accuracy of these beam bugs.
An advanced SEU tolerant latch based on error detection
NASA Astrophysics Data System (ADS)
Xu, Hui; Zhu, Jianwei; Lu, Xiaoping; Li, Jingzhao
2018-05-01
This paper proposes a latch that can mitigate SEUs via an error detection circuit. The error detection circuit is hardened by a C-element and a stacked PMOS. In the hold state, a particle strike on the latch or the error detection circuit may cause a faulty logic state of the circuit. The error detection circuit can detect the upset node in the latch, and the faulty output will be corrected. An upset node in the error detection circuit itself can be corrected by the C-element. The power dissipation and propagation delay of the proposed latch are analyzed by HSPICE simulations. The proposed latch consumes about 77.5% less energy and has 33.1% less propagation delay than the triple modular redundancy (TMR) latch. Simulation results demonstrate that the proposed latch can mitigate SEUs effectively. Project supported by the National Natural Science Foundation of China (Nos. 61404001, 61306046), the Anhui Province University Natural Science Research Major Project (No. KJ2014ZD12), the Huainan Science and Technology Program (No. 2013A4011), and the National Natural Science Foundation of China (No. 61371025).
Saito, Masatoshi
2010-08-01
This article describes the spectral optimization of dual-energy computed tomography using balanced filters (bf-DECT) to reduce the tube loadings and dose by dedicating to the acquisition of electron density information, which is essential for treatment planning in radiotherapy. For the spectral optimization of bf-DECT, the author calculated the beam-hardening error and air kerma required to achieve a desired noise level in an electron density image of a 50-cm-diameter cylindrical water phantom. The calculation enables the selection of beam parameters such as tube voltage, balanced filter material, and its thickness. The optimal combination of tube voltages was 80 kV/140 kV in conjunction with Tb/Hf and Bi/Mo filter pairs; this combination agrees with that obtained in a previous study [M. Saito, "Spectral optimization for measuring electron density by the dual-energy computed tomography coupled with balanced filter method," Med. Phys. 36, 3631-3642 (2009)], although the thicknesses of the filters that yielded a minimum tube output were slightly different from those obtained in the previous study. The resultant tube loading of a low-energy scan of the present bf-DECT significantly decreased from 57.5 to 4.5 times that of a high-energy scan for conventional DECT. Furthermore, the air kerma of bf-DECT could be reduced to less than that of conventional DECT, while obtaining the same figure of merit for the measurement of electron density and effective atomic number. The tube-loading and dose efficiencies of bf-DECT were considerably improved by sacrificing the quality of the noise level in the images of effective atomic number.
Ion beam figuring of Φ520mm convex hyperbolic secondary mirror
NASA Astrophysics Data System (ADS)
Meng, Xiaohui; Wang, Yonggang; Li, Ang; Li, Wenqing
2016-10-01
The convex hyperbolic secondary mirror is a Φ520-mm Zerodur lightweight hyperbolic convex mirror. Typically, conventional methods such as CCOS and stressed-lap polishing are used to manufacture this kind of secondary mirror. Nevertheless, the required surface accuracy cannot be achieved with conventional polishing methods because of the unpredictable behavior of the polishing tools, which leads to an unstable removal rate. Ion beam figuring is an optical fabrication method that provides highly controlled correction of the figure error of previously polished surfaces, using a directed, inert, and neutralized ion beam to physically sputter material from the optic surface. Several iterations with different ion beam sizes are selected and optimized to fit different stages of the surface figure error and its spatial frequency components. Before ion beam figuring, the surface figure error of the secondary mirror was 2.5λ p-v, 0.23λ rms; it was improved to 0.12λ p-v, 0.014λ rms in several process iterations. The demonstration clearly shows that ion beam figuring can not only be used for the final correction of aspheric surfaces, but is also suitable for polishing the coarse surface of large, complex mirrors.
Interferometric phase measurement techniques for coherent beam combining
NASA Astrophysics Data System (ADS)
Antier, Marie; Bourderionnet, Jérôme; Larat, Christian; Lallier, Eric; Primot, Jérôme; Brignon, Arnaud
2015-03-01
Coherent beam combining of fiber amplifiers provides an attractive means of reaching high laser power. In an interferometric phase measurement, the beams issued from each combined fiber are imaged onto a sensor and interfere with a reference plane wave. Registration of the interference patterns on a camera allows the exact phase error of each fiber beam to be measured in a single shot. Therefore, this method is a promising candidate for combining a very large number of fibers. Based on this technique, several architectures can be proposed to coherently combine a high number of fibers. The first, based on digital holography, transfers the camera image directly to a spatial light modulator (SLM). The generated hologram is used to compensate the phase errors induced by the amplifiers, so this architecture performs collective phase measurement and correction. Unlike previous digital holography techniques, the probe beams measuring the phase errors between the fibers are co-propagating with the phase-locked signal beams. This architecture is compatible with the use of multi-stage isolated amplifying fibers. In that case, only 20 pixels per fiber on the SLM are needed to obtain a residual phase shift error below λ/10 rms. The second proposed architecture calculates the correction applied to each fiber channel by tracking the relative position of the interference fringes. In this case, a phase modulator is placed on each channel. In that configuration, only 8 pixels per fiber on the camera are required for stable closed-loop operation with a residual phase error of λ/20 rms, which demonstrates the scalability of this concept.
NASA Technical Reports Server (NTRS)
Kummerow, Christian; Poyner, Philip; Berg, Wesley; Thomas-Stahle, Jody
2007-01-01
Passive microwave rainfall estimates that exploit the emission signal of raindrops in the atmosphere are sensitive to the inhomogeneity of rainfall within the satellite field of view (FOV). In particular, the concave nature of the brightness temperature (Tb) versus rainfall relations at frequencies capable of detecting the blackbody emission of raindrops causes retrieval algorithms to systematically underestimate precipitation unless the rainfall is homogeneous within a radiometer FOV, or the inhomogeneity is accounted for explicitly. This problem has a long history in the passive microwave community and has been termed the beam-filling error. While not a true error, correcting for it requires a priori knowledge about the actual distribution of the rainfall within the satellite FOV, or at least a statistical representation of this inhomogeneity. This study first examines the magnitude of this beam-filling correction when slant-path radiative transfer calculations are used to account for the oblique incidence of current radiometers. Because of the horizontal averaging that occurs away from the nadir direction, the beam-filling error is found to be only a fraction of what has been reported previously in the literature based upon plane-parallel calculations. For a FOV representative of the 19-GHz radiometer channel (18 km × 28 km) aboard the Tropical Rainfall Measuring Mission (TRMM), the mean beam-filling correction computed in this study for tropical atmospheres is 1.26 instead of 1.52 computed from plane-parallel techniques. The slant-path solution is also less sensitive to finescale rainfall inhomogeneity and is, thus, able to make use of 4-km radar data from the TRMM Precipitation Radar (PR) in order to map regional and seasonal distributions of observed rainfall inhomogeneity in the Tropics. The data are examined to assess the expected errors introduced into climate rainfall records by unresolved changes in rainfall inhomogeneity. Results show that global mean monthly errors introduced by not explicitly accounting for rainfall inhomogeneity do not exceed 0.5% if the beam-filling error is allowed to be a function of rainfall rate and freezing level and do not exceed 2% if a universal beam-filling correction is applied that depends only upon the freezing level. Monthly regional errors can be significantly larger. Over the Indian Ocean, errors as large as 8% were found if the beam-filling correction is allowed to vary with rainfall rate and freezing level while errors of 15% were found if a universal correction is used.
Planck 2013 results. VII. HFI time response and beams
NASA Astrophysics Data System (ADS)
Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bock, J. J.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Bowyer, J. W.; Bridges, M.; Bucher, M.; Burigana, C.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chary, R.-R.; Chiang, H. C.; Chiang, L.-Y.; Christensen, P. R.; Church, S.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.-M.; Désert, F.-X.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Dunkley, J.; Dupac, X.; Efstathiou, G.; Enßlin, T. A.; Eriksen, H. K.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Galeotta, S.; Ganga, K.; Giard, M.; Giraud-Héraud, Y.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Haissinski, J.; Hansen, F. K.; Hanson, D.; Harrison, D.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hou, Z.; Hovest, W.; Huffenberger, K. M.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lamarre, J.-M.; Lasenby, A.; Laureijs, R. J.; Lawrence, C. R.; Leonardi, R.; Leroy, C.; Lesgourgues, J.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; MacTavish, C. J.; Maffei, B.; Mandolesi, N.; Maris, M.; Marshall, D. J.; Martin, P. G.; Martínez-González, E.; Masi, S.; Massardi, M.; Matarrese, S.; Matsumura, T.; Matthai, F.; Mazzotta, P.; McGehee, P.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Osborne, S.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Pasian, F.; Patanchon, G.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polegre, A. M.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Roudier, G.; Rowan-Robinson, M.; Rusholme, B.; Sandri, M.; Santos, D.; Sauvé, A.; Savini, G.; Scott, D.; Shellard, E. P. S.; Spencer, L. D.; Starck, J.-L.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Tavagnacco, D.; Terenzi, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L. A.; Wandelt, B. D.; Yvon, D.; Zacchei, A.; Zonca, A.
2014-11-01
This paper characterizes the effective beams, the effective beam window functions and the associated errors for the Planck High Frequency Instrument (HFI) detectors. The effective beam is the angular response including the effect of the optics, detectors, data processing and the scan strategy. The window function is the representation of this beam in the harmonic domain which is required to recover an unbiased measurement of the cosmic microwave background angular power spectrum. The HFI is a scanning instrument and its effective beams are the convolution of: a) the optical response of the telescope and feeds; b) the processing of the time-ordered data and deconvolution of the bolometric and electronic transfer function; and c) the merging of several surveys to produce maps. The time response transfer functions are measured using observations of Jupiter and Saturn and by minimizing survey difference residuals. The scanning beam is the post-deconvolution angular response of the instrument, and is characterized with observations of Mars. The main beam solid angles are determined to better than 0.5% at each HFI frequency band. Observations of Jupiter and Saturn limit near sidelobes (within 5°) to about 0.1% of the total solid angle. Time response residuals remain as long tails in the scanning beams, but contribute less than 0.1% of the total solid angle. The bias and uncertainty in the beam products are estimated using ensembles of simulated planet observations that include the impact of instrumental noise and known systematic effects. The correlation structure of these ensembles is well-described by five error eigenmodes that are sub-dominant to sample variance and instrumental noise in the harmonic domain. A suite of consistency tests provide confidence that the error model represents a sufficient description of the data. The total error in the effective beam window functions is below 1% at 100 GHz up to multipole ℓ ~ 1500, and below 0.5% at 143 and 217 GHz up to ℓ ~ 2000.
Summary of Cosmic Ray Spectrum and Composition Below 10¹⁸ eV
NASA Astrophysics Data System (ADS)
Chiavassa, Andrea
In this contribution I will review the main results recently obtained in the study of the cosmic ray spectrum and composition below 10¹⁸ eV. Interest in this range is growing, as it is related to the search for the knee of the iron component of cosmic rays and to the study of the transition between galactic and extra-galactic primaries. The all-particle spectrum measured in this energy range is more structured than previously thought, showing some faint features: a hardening slightly above 10¹⁶ eV and a steepening below 10¹⁷ eV. Studies of the primary chemical composition are quickly evolving towards measurements of the primary spectra of different mass groups: light and heavy primaries. A steepening of the heavy primary spectrum and a hardening of the light one have been claimed. I will review these measurements and discuss the main sources of systematic errors still affecting them.
Electron Beam Focusing in the Linear Accelerator (linac)
NASA Astrophysics Data System (ADS)
Jauregui, Luis
2015-10-01
To produce consistent data with an electron accelerator, it is critical to have a well-focused beam. To keep the beam focused, quadrupoles (quads) are employed. Quads are magnets, which focus the beam in one direction (x or y) and defocus in the other. When two or more quads are used in series, a net focusing effect is achieved in both vertical and horizontal directions. At start up there is a 5% calibration error in the linac at Thomas Jefferson National Accelerator Facility. This means that the momentum of particles passing through the quads isn't always what is expected, which affects the focusing of the beam. The objective is to find exactly how sensitive the focusing in the linac is to this 5% error. A linac was simulated, which contained 290 RF Cavities with random electric fields (to simulate the 5% calibration error), and a total momentum kick of 1090 MeV. National Science Foundation, Department of Energy, Jefferson Lab, Old Dominion University.
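The sensitivity being probed can be sketched with a thin-lens FODO model in which the quadrupole focal length scales with the (mis-calibrated) beam momentum; the focal length and drift length below are illustrative values, not the Jefferson Lab linac optics.

```python
import numpy as np

def thin_quad(f):
    """Thin-lens quadrupole transfer matrix in one transverse plane."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def drift(L):
    """Field-free drift of length L."""
    return np.array([[1.0, L], [0.0, 1.0]])

def fodo_cell(f, L):
    """One FODO cell in the thin-lens model: focusing quad, drift, defocusing quad, drift."""
    return drift(L) @ thin_quad(-f) @ drift(L) @ thin_quad(f)

f_design, L = 2.0, 1.0            # illustrative focal length and drift length, m
for dp in (0.0, 0.05):            # 0% and 5% momentum (calibration) error
    # the quad focal length scales linearly with momentum: f -> f * (1 + dp/p)
    M = fodo_cell(f_design * (1.0 + dp), L)
    cos_mu = 0.5 * np.trace(M)    # stability condition and phase advance per cell
    mu = np.degrees(np.arccos(np.clip(cos_mu, -1.0, 1.0)))
    print(f"dp/p = {dp:.2f}: phase advance per cell = {mu:.2f} deg")
```

Comparing the phase advance (and hence the beam envelope) with and without the 5% error gives a first-order picture of how strongly the focusing depends on the calibration.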
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nuhn, Heinz-Dieter.
The Visual to Infrared SASE Amplifier (VISA) [1] FEL is designed to achieve saturation at radiation wavelengths between 800 and 600 nm with a 4-m pure permanent magnet undulator. The undulator comprises four 99-cm segments each of which has four FODO focusing cells superposed on the beam by means of permanent magnets in the gap alongside the beam. Each segment will also have two beam position monitors and two sets of x-y dipole correctors. The trajectory walk-off in each segment will be reduced to a value smaller than the rms beam radius by means of magnet sorting, precise fabrication, and post-fabrication shimming and trim magnets. However, this leaves possible inter-segment alignment errors. A trajectory analysis code has been used in combination with the FRED3D [2] FEL code to simulate the effect of the shimming procedure and segment alignment errors on the electron beam trajectory and to determine the sensitivity of the FEL gain process to trajectory errors. The paper describes the technique used to establish tolerances for the segment alignment.
NASA Technical Reports Server (NTRS)
Kwon, Jin H.; Lee, Ja H.
1989-01-01
The far-field beam pattern and the power-collection efficiency are calculated for a multistage laser-diode-array amplifier consisting of about 200,000 5-W laser diode arrays with random distributions of phase and orientation errors and random diode failures. From the numerical calculation it is found that the far-field beam pattern is little affected by random failures of up to 20 percent of the laser diodes, with reference to a receiving efficiency of 80 percent in the center spot. The random phase differences among laser diodes due to probable manufacturing errors are allowed to be about 0.2 times the wavelength. The maximum allowable orientation error is about 20 percent of the diffraction angle of a single laser diode aperture (about 1 cm). The preliminary results indicate that the amplifier could be used for space beam-power transmission with an efficiency of about 80 percent for a moderate-size (3-m-diameter) receiver placed at a distance of less than 50,000 km.
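The dependence of the central-spot power on random phase errors and diode failures can be reproduced with a simple phasor-sum Monte Carlo, sketched below for a scaled-down array; the emitter count and error levels are assumptions chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)

def central_spot_efficiency(n_emitters, phase_rms_waves, failure_fraction, n_trials=500):
    """Monte Carlo estimate of the on-axis (central-spot) intensity relative to an
    ideal, error-free array: eta = |sum of unit phasors|^2 / N^2."""
    eta = []
    for _ in range(n_trials):
        alive = rng.random(n_emitters) > failure_fraction        # random diode failures
        phases = 2 * np.pi * phase_rms_waves * rng.normal(size=n_emitters)
        field = np.sum(alive * np.exp(1j * phases))              # coherent phasor sum
        eta.append(np.abs(field)**2 / n_emitters**2)
    return np.mean(eta)

N = 2000   # scaled-down array (the paper considers ~200,000 emitters)
for fail in (0.0, 0.2):
    for sigma in (0.0, 0.1, 0.2):   # rms phase error in units of the wavelength
        print(f"failures {fail:.0%}, phase rms {sigma:.2f} lambda: "
              f"relative central intensity = {central_spot_efficiency(N, sigma, fail):.3f}")
```

The Monte Carlo reproduces the familiar scaling: failures reduce the central intensity roughly as the square of the surviving fraction, while random phases reduce it by approximately exp(-sigma_phi^2).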
Statistics of the radiated field of a space-to-earth microwave power transfer system
NASA Technical Reports Server (NTRS)
Stevens, G. H.; Leininger, G.
1976-01-01
Statistics such as the average power density pattern, the variance of the power density pattern, and the variance of the beam pointing error are related to hardware parameters such as transmitter rms phase error and rms amplitude error. A limitation on the spectral width of the phase reference for phase control was also established. A 1 km diameter transmitter appears feasible provided the total rms insertion phase errors of the phase control modules do not exceed 10 deg, amplitude errors do not exceed 10% rms, and the phase reference spectral width does not exceed approximately 3 kHz. With these conditions the expected radiation pattern is virtually the same as the error-free pattern, and the rms beam pointing error would be insignificant (approximately 10 meters).
Neural network approximation of nonlinearity in laser nano-metrology system based on TLMI
NASA Astrophysics Data System (ADS)
Olyaee, Saeed; Hamedi, Samaneh
2011-02-01
In this paper, an approach based on a neural network (NN) for modeling nonlinearity in a nano-metrology system using a three-longitudinal-mode laser heterodyne interferometer (TLMI) for length and displacement measurements is presented. We model nonlinearity errors that arise from elliptically and non-orthogonally polarized laser beams, rotational error in the alignment of the laser head with respect to the polarizing beam splitter, rotational error in the alignment of the mixing polarizer, and unequal transmission coefficients in the polarizing beam splitter. We use a neural network algorithm based on the multi-layer perceptron (MLP) architecture. The simulation results show that a multi-layer feed-forward perceptron network is successfully applicable to real, noisy interferometer signals.
Radiation-Hardened Solid-State Drive
NASA Technical Reports Server (NTRS)
Sheldon, Douglas J.
2010-01-01
A method is provided for a radiation-hardened (rad-hard) solid-state drive for space mission memory applications by combining rad-hard and commercial off-the-shelf (COTS) non-volatile memories (NVMs) into a hybrid architecture. The architecture is controlled by a rad-hard ASIC (application specific integrated circuit) or a FPGA (field programmable gate array). Specific error handling and data management protocols are developed for use in a rad-hard environment. The rad-hard memories are smaller in overall memory density, but are used to control and manage radiation-induced errors in the main, and much larger density, non-rad-hard COTS memory devices. Small amounts of rad-hard memory are used as error buffers and temporary caches for radiation-induced errors in the large COTS memories. The rad-hard ASIC/FPGA implements a variety of error-handling protocols to manage these radiation-induced errors. The large COTS memory is triplicated for protection, and CRC-based counters are calculated for sub-areas in each COTS NVM array. These counters are stored in the rad-hard non-volatile memory. Through monitoring, rewriting, regeneration, triplication, and long-term storage, radiation-induced errors in the large NV memory are managed. The rad-hard ASIC/FPGA also interfaces with the external computer buses.
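A toy sketch of the voting-and-scrubbing idea is shown below: the COTS memory is triplicated, reads are bitwise majority-voted, the voted value is scrubbed back into all copies, and a CRC of the voted data is what would be kept in rad-hard memory. The array sizes and upset injection are illustrative assumptions, not the flight architecture.

```python
import zlib
import numpy as np

rng = np.random.default_rng(3)

def majority_vote(copies):
    """Bitwise 2-of-3 majority vote over three byte arrays (triplicated COTS memory)."""
    a, b, c = copies
    return (a & b) | (a & c) | (b & c)

def scrub(copies):
    """Rewrite every copy with the voted value, clearing accumulated upsets."""
    voted = majority_vote(copies)
    return [voted.copy() for _ in copies], voted

# a small 1 kB 'memory', triplicated, with a few random single-event upsets injected
data = rng.integers(0, 256, size=1024, dtype=np.uint8)
copies = [data.copy() for _ in range(3)]
for copy in copies:
    idx = rng.integers(0, data.size, size=5)                      # 5 upset locations per copy
    bits = (1 << rng.integers(0, 8, size=5)).astype(np.uint8)     # one flipped bit each
    copy[idx] ^= bits

copies, voted = scrub(copies)
print("voted data matches original:", bool(np.all(voted == data)))
print("CRC32 to be stored in rad-hard memory:", hex(zlib.crc32(voted.tobytes())))
```

Periodic scrubbing keeps the per-copy upset count low enough that two copies are very unlikely to be corrupted at the same bit, which is the condition under which the majority vote fails.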
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vedam, S.; Docef, A.; Fix, M.
2005-06-15
The synchronization of dynamic multileaf collimator (DMLC) response with respiratory motion is critical to ensure the accuracy of DMLC-based four dimensional (4D) radiation delivery. In practice, however, a finite time delay (response time) between the acquisition of tumor position and multileaf collimator response necessitates predictive models of respiratory tumor motion to synchronize radiation delivery. Predicting a complex process such as respiratory motion introduces geometric errors, which have been reported in several publications. However, the dosimetric effect of such errors on 4D radiation delivery has not yet been investigated. Thus, our aim in this work was to quantify the dosimetric effects of geometric error due to prediction under several different conditions. Conformal and intensity modulated radiation therapy (IMRT) plans for a lung patient were generated for anterior-posterior/posterior-anterior (AP/PA) beam arrangements at 6 and 18 MV energies to provide planned dose distributions. Respiratory motion data was obtained from 60 diaphragm-motion fluoroscopy recordings from five patients. A linear adaptive filter was employed to predict the tumor position. The geometric error of prediction was defined as the absolute difference between predicted and actual positions at each diaphragm position. Distributions of geometric error of prediction were obtained for all of the respiratory motion data. Planned dose distributions were then convolved with distributions for the geometric error of prediction to obtain convolved dose distributions. The dosimetric effect of such geometric errors was determined as a function of several variables: response time (0-0.6 s), beam energy (6/18 MV), treatment delivery (3D/4D), treatment type (conformal/IMRT), beam direction (AP/PA), and breathing training type (free breathing/audio instruction/visual feedback). Dose difference and distance-to-agreement analysis was employed to quantify results. Based on our data, the dosimetric impact of prediction (a) increased with response time, (b) was larger for 3D radiation therapy as compared with 4D radiation therapy, (c) was relatively insensitive to change in beam energy and beam direction, (d) was greater for IMRT distributions as compared with conformal distributions, (e) was smaller than the dosimetric impact of latency, and (f) was greatest for respiration motion with audio instructions, followed by visual feedback and free breathing. Geometric errors of prediction that occur during 4D radiation delivery introduce dosimetric errors that are dependent on several factors, such as response time, treatment-delivery type, and beam energy. Even for relatively small response times of 0.6 s into the future, dosimetric errors due to prediction could approach delivery errors when respiratory motion is not accounted for at all. To reduce the dosimetric impact, better predictive models and/or shorter response times are required.
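The prediction step can be sketched with a normalized-LMS adaptive linear filter running on a synthetic breathing trace, as below; the filter length, step size, sampling rate, and trace are assumptions for illustration and are not the filter or patient data used in the study.

```python
import numpy as np

def nlms_predict(signal, horizon, n_taps=10, mu=0.5, eps=1e-6):
    """Normalized-LMS adaptive linear predictor of the sample 'horizon' steps ahead.
    Weights are updated causally, using only samples available at the current time."""
    w = np.zeros(n_taps)
    preds = np.full(signal.size, np.nan)
    for k in range(n_taps + horizon, signal.size - horizon):
        # update with the prediction error that has just become observable
        x_old = signal[k - horizon - n_taps:k - horizon][::-1]
        err = signal[k] - w @ x_old
        w += mu * err * x_old / (x_old @ x_old + eps)
        # predict 'horizon' samples into the future from the newest window
        x_new = signal[k - n_taps:k][::-1]
        preds[k + horizon] = w @ x_new
    return preds

# synthetic breathing trace: ~4 s period sampled at 30 Hz, with slow drift and noise (mm)
fs, T = 30.0, 120.0
t = np.arange(0.0, T, 1.0 / fs)
breath = 10 * np.sin(2 * np.pi * t / 4.0) + 2 * np.sin(2 * np.pi * t / 40.0) \
         + 0.3 * np.random.default_rng(4).normal(size=t.size)

for response_time in (0.2, 0.4, 0.6):          # seconds, within the study's 0-0.6 s range
    h = int(round(response_time * fs))
    p = nlms_predict(breath, h)
    ok = ~np.isnan(p)
    rmse = np.sqrt(np.mean((p[ok] - breath[ok])**2))
    print(f"response time {response_time:.1f} s: prediction RMSE = {rmse:.2f} mm")
```

The growth of the prediction residual with the look-ahead horizon is the geometric error distribution that the study convolves with the planned dose.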
Development and Testing of a Hydropneumatic Suspension System on a USMC AAV7A1
1991-07-30
original material, SAE 4140 steel alloy hardened to 30/34 Rc, has a yield strength of 130,000 psi. All of the ISU's were disassembled and were reassembled...plugged and welded in place. Aluminum I-beams were welded in place in the water jet tunnels to act as jounce stops for the aft suspension units. The...following is a tabulation of components attributed to the vehicle: 1000 Hull, Welded & machined 1100 Bow Plane 2000 Powertrain 3000 Transmission 4000
Simulation of Shear and Bending Cracking in RC Beam: Material Model and its Application to Impact
NASA Astrophysics Data System (ADS)
Mokhatar, S. N.; Sonoda, Y.; Zuki, S. S. M.; Kamarudin, A. F.; Noh, M. S. Md
2018-04-01
This paper presents a simple and reliable non-linear numerical analysis incorporating a fully Lagrangian method, namely Smoothed Particle Hydrodynamics (SPH), to predict the response of a reinforced concrete (RC) beam under impact loading. The analysis includes the simulation of the effects of a high-mass, low-velocity impact load falling on beam structures. Three basic ideas are used to represent the localized failure of structural elements: (1) for the strength of concrete and steel reinforcement over the short (dynamic) loading period, a Dynamic Increase Factor (DIF) is employed to capture the effect of strain rate on the compressive and tensile strengths; (2) a linear pressure-sensitive yield criterion (Drucker-Prager type) with a new volume-dependent Plane-Cap (PC) hardening in the pre-peak regime is assumed for the concrete, while the shear-strain energy criterion (von Mises) is applied to the steel reinforcement; (3) two kinds of constitutive equations are introduced to simulate the crushing and bending cracking of the beam elements. These numerical analysis results are then compared with the experimental test results.
Development of a heterogeneous laminating resin system
NASA Technical Reports Server (NTRS)
Biermann, T. F.; Hopper, L. C.
1985-01-01
The factors which affect the impact resistance of laminating resin systems, while retaining performance equivalent to conventional 450 K curing epoxy matrix systems in other areas, were studied. Formulation work was conducted on two systems, an all-epoxy and an epoxy/bismaleimide, to gain fundamental information on the effect formulation changes have upon neat resin and composite properties. The all-epoxy work involved formulations with various amounts and combinations of eight different epoxy resins, four different hardeners, fifteen different toughening agents, a filler, and a catalyst. The epoxy/bismaleimide effort involved formulations with various amounts and combinations of nine different resins, four different hardeners, eight different toughening agents, four different catalysts, and a filler. When a formulation appeared to offer the proper combination of properties required for a laminating resin, Celion 3K-70P fabric was prepregged. Initial screening tests on composites primarily involved Gardner-type impact and measurement of short beam shear strengths under dry and hot/wet conditions.
NASA Astrophysics Data System (ADS)
Morelle, X. P.; Chevalier, J.; Bailly, C.; Pardoen, T.; Lani, F.
2017-08-01
The nonlinear deformation and fracture of RTM6 epoxy resin is characterized as a function of strain rate and temperature under various loading conditions involving uniaxial tension, notched tension, uniaxial compression, torsion, and shear. The parameters of the hardening law depend on the strain-rate and temperature. The pressure-dependency and hardening law, as well as four different phenomenological failure criteria, are identified using a subset of the experimental results. Detailed fractography analysis provides insight into the competition between shear yielding and maximum principal stress driven brittle failure. The constitutive model and a stress-triaxiality dependent effective plastic strain based failure criterion are readily introduced in the standard version of Abaqus, without the need for coding user subroutines, and can thus be directly used as an input in multi-scale modeling of fibre-reinforced composite material. The model is successfully validated against data not used for the identification and through the full simulation of the crack propagation process in the V-notched beam shear test.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dhote, Sharvari, E-mail: sharvari.dhote@mail.utoronto.ca; Zu, Jean; Zhu, Yang
2015-04-20
In this paper, a nonlinear wideband multi-mode piezoelectric vibration-based energy harvester (PVEH) is proposed based on a compliant orthoplanar spring (COPS), which has the advantage of providing multiple vibration modes at relatively low frequencies. The PVEH is made of a tri-leg COPS flexible structure, where three fixed-guided beams are capable of generating strong nonlinear oscillations under certain base excitations. A prototype harvester was fabricated and investigated through both finite-element analysis and experiments. The frequency response shows multiple resonances corresponding to a hardening type of nonlinear resonance. By adding masses at different locations on the COPS structure, the first three vibration modes are brought close to each other, where the three hardening nonlinear resonances provide a wide bandwidth for the PVEH. The proposed PVEH has enhanced performance in terms of a wide frequency bandwidth and a high voltage output under base excitations.
Fellin, Francesco; Righetto, Roberto; Fava, Giovanni; Trevisan, Diego; Amelio, Dante; Farace, Paolo
2017-03-01
To investigate the range errors made in treatment planning due to the presence of immobilization devices along the proton beam path. The water equivalent thickness (WET) of selected devices was measured with a high-energy spot and a multi-layer ionization chamber and compared with that predicted by the treatment planning system (TPS). Two treatment couches, two thermoplastic masks (both un-stretched and stretched) and one headrest were selected. In the TPS, every immobilization device was modelled as being part of the patient. The following parameters were assessed: CT acquisition protocol, dose-calculation grid-sizes (1.5 and 3.0mm) and beam entrance with respect to the devices (coplanar and non-coplanar). Finally, the potential errors produced by a wrong manual separation between the treatment couch and the CT table (not present during treatment) were investigated. In the thermoplastic mask, there was a clear effect due to beam entrance, a moderate effect due to the CT protocols and almost no effect due to TPS grid-size, with 1mm errors observed only when thick un-stretched portions were crossed by non-coplanar beams. In the treatment couches the WET errors were negligible (<0.3mm) regardless of the grid-size and CT protocol. The potential range errors produced by the manual separation between treatment couch and CT table were small with a 1.5mm grid-size, but could be >0.5mm with a 3.0mm grid-size. In the headrest, WET errors were negligible (0.2mm). With only one exception (un-stretched mask, non-coplanar beams), the WET of all the immobilization devices was properly modelled by the TPS. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balcazar, Mario D.; Yonehara, Katsuya; Moretti, Alfred
An intense neutrino beam is a unique probe for research beyond the standard model. Fermilab is the main institution producing the most powerful and wide-spectrum neutrino beam. From that perspective, a radiation-robust beam diagnostic system is a critical element in order to maintain the quality of the neutrino beam. Within this context, a novel radiation-resistive beam profile monitor based on a gas-filled RF cavity is proposed. The goal of this measurement is to study a tunable Q-factor RF cavity to determine the accuracy of the RF signal as a function of the quality factor. Specifically, the measurement error of the Q-factor in the RF calibration is investigated. The RF system will then be improved to minimize signal error.
NASA Astrophysics Data System (ADS)
Zhao, Chen-Guang; Tan, Jiu-Bin; Liu, Tao
2010-09-01
The mechanism of a non-polarizing beam splitter (NPBS) with asymmetrical transfer coefficients causing the rotation of polarization direction is explained in principle, and the measurement nonlinear error caused by the NPBS is analyzed based on Jones matrix theory. Theoretical calculations show that the nonlinear error changes periodically, and that the error period and peak values increase with the deviation between the transmissivities of the p-polarization and s-polarization states. When the transmissivity of p-polarization is 53% and that of s-polarization is 48%, the maximum error reaches 2.7 nm. The imperfection of the NPBS is one of the main error sources in a simultaneous phase-shifting polarization interferometer, and its influence cannot be neglected in nanoscale ultra-precision measurement.
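The shape and scale of such a periodic error can be illustrated with a first-order polarization/frequency-mixing model in which a small leakage phasor is added to the ideal measurement phasor; the mixing ratio below is tied to the transmissivity imbalance only as a rough assumption and is not the Jones-matrix model of the paper.

```python
import numpy as np

def measured_phase(phi_true, leak_ratio):
    """Phase recovered from a signal containing a small leakage component of the
    'wrong' polarization/frequency: S = exp(i*phi) + k (first-order mixing model)."""
    return np.angle(np.exp(1j * phi_true) + leak_ratio)

wavelength_nm = 633.0                    # assumed He-Ne wavelength
phi = np.linspace(0.0, 4 * np.pi, 2000)  # two interference orders of displacement

for tp2, ts2 in ((0.50, 0.50), (0.53, 0.48)):
    tp, ts = np.sqrt(tp2), np.sqrt(ts2)
    k = (tp - ts) / (tp + ts)            # illustrative mixing ratio from the p/s imbalance
    err_rad = np.unwrap(measured_phase(phi, k)) - phi
    # phase-to-displacement conversion assuming a double-pass (lambda/(4*pi)) configuration
    err_nm = err_rad * wavelength_nm / (4 * np.pi)
    print(f"Tp={tp2:.2f}, Ts={ts2:.2f}: peak-to-peak nonlinearity ≈ {np.ptp(err_nm):.2f} nm")
```

The error is periodic in the measured displacement (one cycle per interference order) and vanishes when the two transmissivities are equal, which is the qualitative behaviour described in the abstract.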
Model studies of the beam-filling error for rain-rate retrieval with microwave radiometers
NASA Technical Reports Server (NTRS)
Ha, Eunho; North, Gerald R.
1995-01-01
Low-frequency (less than 20 GHz) single-channel microwave retrievals of rain rate encounter the problem of beam-filling error. This error stems from the fact that the relationship between microwave brightness temperature and rain rate is nonlinear, coupled with the fact that the field of view is large or comparable to important scales of variability of the rain field. This means that one may not simply insert the area average of the brightness temperature into the formula for rain rate without incurring both bias and random error. The statistical heterogeneity of the rain-rate field in the footprint of the instrument is key to determining the nature of these errors. This paper makes use of a series of random rain-rate fields to study the size of the bias and random error associated with beam filling. A number of examples are analyzed in detail: the binomially distributed field, the gamma, the Gaussian, the mixed gamma, the lognormal, and the mixed lognormal ('mixed' here means there is a finite probability of no rain rate at a point of space-time). Of particular interest are the applicability of a simple error formula due to Chiu and collaborators and a formula that might hold in the large field of view limit. It is found that the simple formula holds for Gaussian rain-rate fields but begins to fail for highly skewed fields such as the mixed lognormal. While not conclusively demonstrated here, it is suggested that the notion of climatologically adjusting the retrievals to remove the beam-filling bias is a reasonable proposition.
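The basic mechanism is that inverting the footprint-averaged brightness temperature through a concave (saturating) relation underestimates the true mean rain rate. The Monte Carlo sketch below uses an illustrative saturating Tb-rain relation and a mixed-lognormal field; all parameters are assumptions, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(5)

def tb_of_rain(R, T_clear=220.0, T_sat=280.0, k=0.25):
    """Illustrative concave brightness-temperature vs rain-rate relation."""
    return T_sat - (T_sat - T_clear) * np.exp(-k * R)

def rain_of_tb(Tb, T_clear=220.0, T_sat=280.0, k=0.25):
    """Inverse of the relation above, applied to a footprint-averaged Tb."""
    return -np.log(np.clip((T_sat - Tb) / (T_sat - T_clear), 1e-12, 1.0)) / k

# mixed-lognormal rain field inside one footprint: 70% of pixels are rain free
n_pix, p_rain, mu, sigma = 400, 0.3, 0.5, 1.0
bias_factors = []
for _ in range(5000):
    R = np.where(rng.random(n_pix) < p_rain,
                 rng.lognormal(mean=mu, sigma=sigma, size=n_pix), 0.0)
    R_true = R.mean()
    R_retrieved = rain_of_tb(tb_of_rain(R).mean())   # invert the beam-averaged Tb
    if R_true > 0:
        bias_factors.append(R_true / R_retrieved)

print("mean beam-filling correction factor:", round(np.mean(bias_factors), 3))
```

Because the relation is concave, the retrieved rate is systematically low and the correction factor exceeds unity; its size grows with the skewness of the sub-footprint rain distribution, which is why the simple formula quoted above degrades for mixed-lognormal fields.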
Inelastic behavior of structural components
NASA Technical Reports Server (NTRS)
Hussain, N.; Khozeimeh, K.; Toridis, T. G.
1980-01-01
A more accurate procedure was developed for the determination of the inelastic behavior of structural components. The actual stress-strain curve for the mathematical model of the structure was utilized to generate the force-deformation relationships for the structural elements, rather than using simplified models such as elastic-plastic, bilinear and trilinear approximations. Force-deformation relationships were generated for beam elements with various types of cross sections. In the generation of these curves, stress or load reversals, kinematic hardening and hysteretic behavior were taken into account. Intersections between loading and unloading branches were determined through an iterative process. Using the inelastic properties obtained, the plastic static response of some simple structural systems composed of beam elements was computed. Results were compared with known solutions, indicating a considerable improvement over response predictions obtained by means of simplified approximations used in previous investigations.
Worldwide Ocean Optics Database (WOOD)
2002-09-30
...empirical algorithms (e.g., beam attenuation estimated from diffuse attenuation and backscatter data). Error estimates will also be provided for the computed results. Extensive algorithm...properties, including diffuse attenuation, beam attenuation, and scattering. Data from ONR-funded bio-optical cruises will be given priority for loading
Evaluation of three lidar scanning strategies for turbulence measurements
NASA Astrophysics Data System (ADS)
Newman, J. F.; Klein, P. M.; Wharton, S.; Sathe, A.; Bonin, T. A.; Chilson, P. B.; Muschinski, A.
2015-11-01
Several errors occur when a traditional Doppler-beam swinging (DBS) or velocity-azimuth display (VAD) strategy is used to measure turbulence with a lidar. To mitigate some of these errors, a scanning strategy was recently developed which employs six beam positions to independently estimate the u, v, and w velocity variances and covariances. In order to assess the ability of these different scanning techniques to measure turbulence, a Halo scanning lidar, WindCube v2 pulsed lidar and ZephIR continuous wave lidar were deployed at field sites in Oklahoma and Colorado with collocated sonic anemometers. Results indicate that the six-beam strategy mitigates some of the errors caused by VAD and DBS scans, but the strategy is strongly affected by errors in the variance measured at the different beam positions. The ZephIR and WindCube lidars overestimated horizontal variance values by over 60 % under unstable conditions as a result of variance contamination, where additional variance components contaminate the true value of the variance. A correction method was developed for the WindCube lidar that uses variance calculated from the vertical beam position to reduce variance contamination in the u and v variance components. The correction method reduced WindCube variance estimates by over 20 % at both the Oklahoma and Colorado sites under unstable conditions, when variance contamination is largest. This correction method can be easily applied to other lidars that contain a vertical beam position and is a promising method for accurately estimating turbulence with commercially available lidars.
Evaluation of three lidar scanning strategies for turbulence measurements
NASA Astrophysics Data System (ADS)
Newman, Jennifer F.; Klein, Petra M.; Wharton, Sonia; Sathe, Ameya; Bonin, Timothy A.; Chilson, Phillip B.; Muschinski, Andreas
2016-05-01
Several errors occur when a traditional Doppler beam swinging (DBS) or velocity-azimuth display (VAD) strategy is used to measure turbulence with a lidar. To mitigate some of these errors, a scanning strategy was recently developed which employs six beam positions to independently estimate the u, v, and w velocity variances and covariances. In order to assess the ability of these different scanning techniques to measure turbulence, a Halo scanning lidar, WindCube v2 pulsed lidar, and ZephIR continuous wave lidar were deployed at field sites in Oklahoma and Colorado with collocated sonic anemometers. Results indicate that the six-beam strategy mitigates some of the errors caused by VAD and DBS scans, but the strategy is strongly affected by errors in the variance measured at the different beam positions. The ZephIR and WindCube lidars overestimated horizontal variance values by over 60 % under unstable conditions as a result of variance contamination, where additional variance components contaminate the true value of the variance. A correction method was developed for the WindCube lidar that uses variance calculated from the vertical beam position to reduce variance contamination in the u and v variance components. The correction method reduced WindCube variance estimates by over 20 % at both the Oklahoma and Colorado sites under unstable conditions, when variance contamination is largest. This correction method can be easily applied to other lidars that contain a vertical beam position and is a promising method for accurately estimating turbulence with commercially available lidars.
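The six-beam idea rests on the fact that each beam's radial-velocity variance is a quadratic form of the velocity covariance tensor in that beam's unit vector, so six well-conditioned beams give a solvable linear system. The sketch below uses an assumed five-beams-plus-vertical geometry and a synthetic covariance tensor purely for illustration.

```python
import numpy as np

def beam_unit_vector(elevation_deg, azimuth_deg):
    """Unit pointing vector (east, north, up) for a given elevation and azimuth."""
    el, az = np.radians(elevation_deg), np.radians(azimuth_deg)
    return np.array([np.cos(el) * np.sin(az), np.cos(el) * np.cos(az), np.sin(el)])

def design_matrix(beams):
    """Row per beam: sigma_r^2 = n^T S n expressed in the six components of S."""
    return np.array([[n[0]**2, n[1]**2, n[2]**2,
                      2 * n[0] * n[1], 2 * n[0] * n[2], 2 * n[1] * n[2]] for n in beams])

def six_beam_variances(radial_variances, beams):
    """Solve for S = [uu, vv, ww, uv, uw, vw] from six radial-velocity variances."""
    return np.linalg.solve(design_matrix(beams), radial_variances)

# assumed geometry: five beams at 45 deg elevation, evenly spaced azimuths, plus one vertical
beams = [beam_unit_vector(45.0, az) for az in (0, 72, 144, 216, 288)] + \
        [beam_unit_vector(90.0, 0.0)]

# synthetic 'true' covariance tensor (m^2/s^2): uu, vv, ww, uv, uw, vw
S_true = np.array([1.0, 0.8, 0.3, 0.1, -0.05, 0.02])
radial_var = design_matrix(beams) @ S_true          # what the six beams would measure

print("recovered covariances:", np.round(six_beam_variances(radial_var, beams), 4))
```

In practice the radial variances carry measurement errors, and these propagate through the inverse of the design matrix, which is why the abstract notes the strategy's sensitivity to per-beam variance errors.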
Self-referenced locking of optical coherence by single-detector electronic-frequency tagging
NASA Astrophysics Data System (ADS)
Shay, T. M.; Benham, Vincent; Spring, Justin; Ward, Benjamin; Ghebremichael, F.; Culpepper, Mark A.; Sanchez, Anthony D.; Baker, J. T.; Pilkington, D.; Berdine, Richard
2006-02-01
We report a novel coherent beam combining technique. This is the first actively phase locked optical fiber array that eliminates the need for a separate reference beam. In addition, only a single photodetector is required. The far-field central spot of the array is imaged onto the photodetector to produce the phase control loop signals. Each leg of the fiber array is phase modulated with a separate RF frequency, thus tagging the optical phase shift for each leg by a separate RF frequency. The optical phase errors for the individual array legs are separated in the electronic domain. In contrast with previous active phase locking techniques, in our system the reference beam is spatially overlapped with all the RF modulated fiber leg beams onto a single detector. The phase shift between the optical wave in the reference leg and in the RF modulated legs is measured separately in the electronic domain, and the phase error signal is fed back to the LiNbO3 phase modulator for that leg to minimize the phase error for that leg relative to the reference leg. The advantages of this technique are 1) the elimination of the reference beam and beam combination optics and 2) the electronic separation of the phase error signals without any degradation of the phase locking accuracy. We will present the first theoretical model for self-referenced LOCSET and describe experimental results for a 3 × 3 array.
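A toy time-domain simulation of the single-detector, RF-tagged phase-locking idea is sketched below: each leg carries a small phase dither at its own RF frequency, the detector intensity is demodulated at each tag frequency to form that leg's error signal, and the error is fed back. The tag frequencies, dither depth, and loop gain are assumptions, not the experimental parameters.

```python
import numpy as np

rng = np.random.default_rng(6)
n_legs = 3
rf = np.array([100e3, 123e3, 151e3])         # assumed RF tagging frequencies, Hz
beta = 0.1                                    # phase-dither depth, rad
fs, T = 2e6, 2e-3                             # sample rate and demodulation window, s
t = np.arange(0.0, T, 1.0 / fs)

phases = rng.uniform(-np.pi, np.pi, n_legs)   # unknown piston errors to be locked out
gain = 1.5

for _ in range(60):
    # intensity of the combined, RF-tagged beams on the single photodetector
    field = np.sum(np.exp(1j * (phases[:, None]
                                + beta * np.sin(2 * np.pi * rf[:, None] * t))), axis=0)
    intensity = np.abs(field)**2
    # demodulating at each leg's tag frequency isolates that leg's phase-error signal
    err = np.array([np.mean(intensity * np.sin(2 * np.pi * f * t)) for f in rf])
    phases += gain * err                      # feedback to the per-leg phase modulators

print("residual leg-to-leg phases (rad):", np.round(phases - phases.mean(), 3))
```

The common piston phase is unobservable (and irrelevant for combining), so only the leg-to-leg differences are driven to zero; the demodulated error signal is proportional to the sine of each leg's phase relative to the combined beam, which is what gives the loop its self-referenced character.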
Five-Year Wilkinson Microwave Anisotropy Probe Observations: Beam Maps and Window Functions
NASA Astrophysics Data System (ADS)
Hill, R. S.; Weiland, J. L.; Odegard, N.; Wollack, E.; Hinshaw, G.; Larson, D.; Bennett, C. L.; Halpern, M.; Page, L.; Dunkley, J.; Gold, B.; Jarosik, N.; Kogut, A.; Limon, M.; Nolta, M. R.; Spergel, D. N.; Tucker, G. S.; Wright, E. L.
2009-02-01
Cosmology and other scientific results from the Wilkinson Microwave Anisotropy Probe (WMAP) mission require an accurate knowledge of the beam patterns in flight. While the degree of beam knowledge for the WMAP one-year and three-year results was unprecedented for a CMB experiment, we have significantly improved the beam determination as part of the five-year data release. Physical optics fits are done on both the A and the B sides for the first time. The cutoff scale of the fitted distortions on the primary mirror is reduced by a factor of ~2 from previous analyses. These changes enable an improvement in the hybridization of Jupiter data with beam models, which is optimized with respect to error in the main beam solid angle. An increase in main-beam solid angle of ~1% is found for the V2 and W1-W4 differencing assemblies. Although the five-year results are statistically consistent with previous ones, the errors in the five-year beam transfer functions are reduced by a factor of ~2 as compared to the three-year analysis. We present radiometry of the planet Jupiter as a test of the beam consistency and as a calibration standard; for an individual differencing assembly, errors in the measured disk temperature are ~0.5%. WMAP is the result of a partnership between Princeton University and NASA's Goddard Space Flight Center. Scientific guidance is provided by the WMAP Science Team.
FEL Trajectory Analysis for the VISA Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nuhn, Heinz-Dieter
1998-10-06
The Visual to Infrared SASE Amplifier (VISA) [1] FEL is designed to achieve saturation at radiation wavelengths between 800 and 600 nm with a 4-m pure permanent magnet undulator. The undulator comprises four 99-cm segments each of which has four FODO focusing cells superposed on the beam by means of permanent magnets in the gap alongside the beam. Each segment will also have two beam position monitors and two sets of x-y dipole correctors. The trajectory walk-off in each segment will be reduced to a value smaller than the rms beam radius by means of magnet sorting, precise fabrication, and post-fabrication shimming and trim magnets. However, this leaves possible inter-segment alignment errors. A trajectory analysis code has been used in combination with the FRED3D [2] FEL code to simulate the effect of the shimming procedure and segment alignment errors on the electron beam trajectory and to determine the sensitivity of the FEL gain process to trajectory errors. The paper describes the technique used to establish tolerances for the segment alignment.
Conditions for the optical wireless links bit error ratio determination
NASA Astrophysics Data System (ADS)
Kvíčala, Radek
2017-11-01
To determine the quality of Optical Wireless Links (OWL), it is necessary to establish the availability and the probability of interruption. This quality can be characterized by the bit error rate (BER) of the optical beam. The bit error rate represents the fraction of transmitted bits that are received in error. In practice, BER measurement runs into the problem of determining the integration time (measuring time). For measuring and recording the BER of OWL, a bit error ratio tester (BERT) has been developed. An integration time of 1 second for 64 kbps radio links is mentioned in the accessible literature. However, this integration time cannot be used directly because of the singular character of coherent beam propagation.
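One common way to frame the integration-time question is to ask how long one must count bits at a given rate before enough errors accumulate for a statistically meaningful BER estimate; the sketch below uses a Poisson rule of thumb and assumed bit rates purely for illustration.

```python
def measurement_time(ber, bit_rate, n_errors=100):
    """Time needed to accumulate n_errors bit errors at a given BER and bit rate.
    Counting ~100 errors gives roughly +/-20% relative uncertainty (Poisson, 2 sigma)."""
    bits_needed = n_errors / ber
    return bits_needed / bit_rate

for bit_rate in (64e3, 10e6, 155e6):            # assumed link rates, bit/s
    for ber in (1e-6, 1e-9):
        t = measurement_time(ber, bit_rate)
        print(f"R_b = {bit_rate/1e6:g} Mbps, BER = {ber:.0e}: "
              f"measure for ~{t:.0f} s ({t/3600:.2f} h)")
```

The same arithmetic run backwards shows what BER can credibly be resolved within a fixed 1-second window, which is why a single fixed integration time does not transfer between link rates or channel conditions.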
Measurement system and model for simultaneously measuring 6DOF geometric errors.
Zhao, Yuqiong; Zhang, Bin; Feng, Qibo
2017-09-04
A measurement system to simultaneously measure six degree-of-freedom (6DOF) geometric errors is proposed. The measurement method is based on a combination of mono-frequency laser interferometry and laser fiber collimation. A simpler and more integrated optical configuration is designed. To compensate for the measurement errors introduced by error crosstalk, element fabrication error, laser beam drift, and non-parallelism of the two measurement beams, a unified measurement model, which can improve the measurement accuracy, is deduced and established using the ray-tracing method. A numerical simulation using the optical design software Zemax is conducted, and the results verify the correctness of the model. Several experiments are performed to demonstrate the feasibility and effectiveness of the proposed system and measurement model.
WE-G-BRA-04: Common Errors and Deficiencies in Radiation Oncology Practice
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kry, S; Dromgoole, L; Alvarez, P
Purpose: Dosimetric errors in radiotherapy dose delivery lead to suboptimal treatments and outcomes. This work reviews the frequency and severity of dosimetric and programmatic errors identified by on-site audits performed by the IROC Houston QA center. Methods: IROC Houston on-site audits evaluate absolute beam calibration, relative dosimetry data compared to the treatment planning system data, and processes such as machine QA. Audits conducted from 2000-present were abstracted for recommendations, including type of recommendation and magnitude of error when applicable. Dosimetric recommendations corresponded to absolute dose errors >3% and relative dosimetry errors >2%. On-site audits of 1020 accelerators at 409 institutions were reviewed. Results: A total of 1280 recommendations were made (average 3.1/institution). The most common recommendation was for inadequate QA procedures per TG-40 and/or TG-142 (82% of institutions) with the most commonly noted deficiency being x-ray and electron off-axis constancy versus gantry angle. Dosimetrically, the most common errors in relative dosimetry were in small-field output factors (59% of institutions), wedge factors (33% of institutions), off-axis factors (21% of institutions), and photon PDD (18% of institutions). Errors in calibration were also problematic: 20% of institutions had an error in electron beam calibration, 8% had an error in photon beam calibration, and 7% had an error in brachytherapy source calibration. Almost all types of data reviewed included errors up to 7% although 20 institutions had errors in excess of 10%, and 5 had errors in excess of 20%. The frequency of electron calibration errors decreased significantly with time, but all other errors show non-significant changes. Conclusion: There are many common and often serious errors made during the establishment and maintenance of a radiotherapy program that can be identified through independent peer review. Physicists should be cautious, particularly in areas highlighted herein that show a tendency for errors.
SRAM Based Re-programmable FPGA for Space Applications
NASA Technical Reports Server (NTRS)
Wang, J. J.; Sun, J. S.; Cronquist, B. E.; McCollum, J. L.; Speers, T. M.; Plants, W. C.; Katz, R. B.
1999-01-01
An SRAM (static random access memory)-based reprogrammable FPGA (field programmable gate array) is investigated for space applications. A new commercial prototype, named the RS family, was used as an example for the investigation. The device is fabricated in a 0.25 micrometer CMOS technology. Its architecture is reviewed to provide a better understanding of the impact of single event upset (SEU) on the device during operation. The SEU effect of the different memories available on the device is evaluated. Heavy ion test data and SPICE simulations are used integrally to extract the threshold LET (linear energy transfer). Together with the saturation cross-section measurement from the layout, a rate prediction is done on each memory type. The SEU in the configuration SRAM is identified as the dominant failure mode and is discussed in detail. The single event transient error in combinational logic is also investigated and simulated by SPICE. SEU mitigation by hardening the memories and employing EDAC (error detection and correction) at the device level are presented. For the configuration SRAM (CSRAM) cell, the trade-off between resistor de-coupling and redundancy hardening techniques is investigated, with interesting results. Preliminary heavy ion test data show no sign of SEL (single event latch-up). With regard to ionizing radiation effects, the increase in static leakage current (static ICC) measured indicates a device tolerance of approximately 50 krad(Si).
Beam collimation and focusing and error analysis of LD and fiber coupling system based on ZEMAX
NASA Astrophysics Data System (ADS)
Qiao, Lvlin; Zhou, Dejian; Xiao, Lei
2017-10-01
Laser diodes have many advantages, such as high efficiency, small volume, low cost and easy integration, so they are widely used. However, their poor beam quality has seriously hampered the application of semiconductor lasers. To address this, the ZEMAX optical design software is used to simulate the far-field characteristics of the semiconductor laser beam, and a coupling module between the semiconductor laser and an optical fiber is designed and optimized. The beam is coupled into an optical fiber with core diameter d = 200 µm and numerical aperture NA = 0.22, and the coupled output power reaches 95%. Finally, the influence of three docking (alignment) errors on the coupling efficiency during installation is analyzed.
Analysis of frequency mixing error on heterodyne interferometric ellipsometry
NASA Astrophysics Data System (ADS)
Deng, Yuan-long; Li, Xue-jin; Wu, Yu-bin; Hu, Ju-guang; Yao, Jian-quan
2007-11-01
A heterodyne interferometric ellipsometer, with no moving parts and a transverse Zeeman laser, is demonstrated. The modified Mach-Zehnder interferometer, characterized by a separate-frequency, common-path configuration, is designed and theoretically analyzed. The experimental data show a fluctuation resulting mainly from the frequency mixing error, which is caused by the imperfection of the polarizing beam splitters (PBS) and by the elliptical polarization and non-orthogonality of the light beams. The mechanism producing the frequency mixing error and its influence on the measurement are analyzed with the Jones matrix method; the calculation indicates that it produces an error of up to several nanometres in the thickness measurement of thin films. The non-orthogonality contributes negligibly to the phase difference error when it is small; the elliptical polarization and the imperfection of the PBS have the dominant effect on the error.
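As a rough illustration of how such mixing terms distort the recovered phase, the sketch below (an editor's toy model, not the authors' Jones-matrix calculation) represents the beat signal as an ideal term plus small spurious components at zero and at twice the measurement phase; the leakage amplitudes a1 and a2 are hypothetical stand-ins for the PBS imperfection and polarization effects discussed above.

```python
# Toy model of heterodyne phase-extraction error caused by frequency mixing.
import numpy as np

def measured_phase(true_phase, a1=0.01, a2=0.002):
    """Phase recovered from a beat signal contaminated by spurious mixing terms."""
    # Ideal beat: exp(i*phi); spurious beats at 0 and at 2*phi model first- and
    # second-order mixing with hypothetical relative amplitudes a1 and a2.
    signal = np.exp(1j * true_phase) + a1 + a2 * np.exp(2j * true_phase)
    return np.angle(signal)

phi = np.linspace(0.0, 2.0 * np.pi, 1000)
error = np.unwrap(measured_phase(phi)) - phi
print(f"peak-to-peak phase error: {np.ptp(error):.4f} rad "
      f"({np.degrees(np.ptp(error)):.2f} deg)")
```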
Chang, Suyon; Han, Kyunghwa; Youn, Jong-Chan; Im, Dong Jin; Kim, Jin Young; Suh, Young Joo; Hong, Yoo Jin; Hur, Jin; Kim, Young Jin; Choi, Byoung Wook; Lee, Hye-Jeong
2018-05-01
Purpose To investigate the diagnostic utility of dual-energy computed tomography (CT)-based monochromatic imaging for myocardial delayed enhancement (MDE) assessment in patients with cardiomyopathy. Materials and Methods The institutional review board approved this prospective study, and informed consent was obtained from all participants who were enrolled in the study. Forty patients (27 men and 13 women; mean age, 56 years ± 15 [standard deviation]; age range, 22-81 years) with cardiomyopathy underwent cardiac magnetic resonance (MR) imaging and dual-energy CT. Conventional (120-kV) and monochromatic (60-, 70-, and 80-keV) images were reconstructed from the dual-energy CT acquisition. Subjective quality score, contrast-to-noise ratio (CNR), and beam-hardening artifacts were compared pairwise with the Friedman test at post hoc analysis. With cardiac MR imaging as the reference standard, diagnostic performance of dual-energy CT in MDE detection and its predictive ability for pattern classification were compared pairwise by using logistic regression analysis with the generalized estimating equation in a per-segment analysis. The Bland-Altman method was used to find agreement between cardiac MR imaging and CT in MDE quantification. Results Among the monochromatic images, 70-keV CT images resulted in higher subjective quality (mean score, 3.38 ± 0.54 vs 3.15 ± 0.43; P = .0067), higher CNR (mean, 4.26 ± 1.38 vs 3.93 ± 1.33; P = .0047), and a lower value for beam-hardening artifacts (mean, 3.47 ± 1.56 vs 4.15 ± 1.67; P < .0001) when compared with conventional CT. When compared with conventional CT, 70-keV CT showed improved diagnostic performance for MDE detection (sensitivity, 94.6% vs 90.4% [P = .0032]; specificity, 96.0% vs 94.0% [P = .0031]; and accuracy, 95.6% vs 92.7% [P < .0001]) and improved predictive ability for pattern classification (subendocardial, 91.5% vs 84.3% [P = .0111]; epicardial, 94.3% vs 73.5% [P = .0001]; transmural, 93.0% vs 77.7% [P = .0018]; mesocardial, 85.4% vs 69.2% [P = .0047]; and patchy, 84.4% vs 78.4% [P = .1514]). For MDE quantification, 70-keV CT showed a small bias of 0.1534% (95% limits of agreement: -4.7013, 5.0080). Conclusion Dual-energy CT-based 70-keV monochromatic images improve MDE assessment in patients with cardiomyopathy via improved image quality and CNR and reduced beam-hardening artifacts when compared with conventional CT images. © RSNA, 2017. Online supplemental material is available for this article.
Paudel, M; MacKenzie, M; Fallone, B; Rathee, S
2012-06-01
To evaluate the performance of a model-based image reconstruction in reducing metal artifacts in MVCT systems, and to compare it with the filtered back-projection (FBP) technique. The iterative maximum likelihood polychromatic algorithm for CT (IMPACT) is used with the pair/triplet production process and the energy dependent response of the detectors. The beam spectra for the in-house bench-top and TomoTherapy™ MVCT are modelled for use in IMPACT. The energy dependent gain of the detectors is calculated using a constrained optimization technique and the measured attenuation produced by 0-24 cm thick solid water slabs. A cylindrical (19 cm diameter) plexiglass phantom containing various central cylindrical inserts (relative electron density of 0.28-1.69) between two steel rods (2 cm diameter) is scanned in the bench-top [the bremsstrahlung radiation from a 6 MeV electron beam passed through 4 cm solid water on the Varian Clinac 2300C] and TomoTherapy™ MVCTs. The FBP reconstructs images from the raw signal normalised to an air scan and corrected for beam hardening using a uniform plexiglass cylinder (20 cm diameter). IMPACT starts with the FBP reconstructed seed image and reconstructs the final image at 1.25 MeV in 150 iterations. FBP produces a visible dark shading in the image between the two steel rods that becomes darker with higher density central inserts, causing a 5-8% underestimation of electron density compared to the case without the steel rods. In the IMPACT image the dark shading connecting the steel rods is nearly removed and the uniform background restored. The average attenuation coefficients of the inserts and the background are very close to the corresponding theoretical values at 1.25 MeV. The dark shading metal artifact due to beam hardening can be removed in MVCT using an iterative reconstruction algorithm such as IMPACT. However, accurate modelling of the detectors' energy dependent response and of the physical processes is crucial for successful implementation. Funding support for the research is obtained from "Vanier Canada Graduate Scholarship" and "Canadian Institute of Health Research". © 2012 American Association of Physicists in Medicine.
NASA Astrophysics Data System (ADS)
Cauchi, Marija; Assmann, R. W.; Bertarelli, A.; Carra, F.; Lari, L.; Rossi, A.; Mollicone, P.; Sammut, N.
2015-02-01
The correct functioning of a collimation system is crucial to safely and successfully operate high-energy particle accelerators, such as the Large Hadron Collider (LHC). However, the requirements to handle high-intensity beams can be demanding, and accident scenarios must be well studied in order to assess if the collimator design is robust against possible error scenarios. One of the catastrophic, though not very probable, accident scenarios identified within the LHC is an asynchronous beam dump. In this case, one (or more) of the 15 precharged kicker circuits fires out of time with the abort gap, spraying beam pulses onto LHC machine elements before the machine protection system can fire the remaining kicker circuits and bring the beam to the dump. If a proton bunch directly hits a collimator during such an event, severe beam-induced damage such as magnet quenches and other equipment damage might result, with consequent downtime for the machine. This study investigates a number of newly defined jaw error cases, which include angular misalignment errors of the collimator jaw. A numerical finite element method approach is presented in order to precisely evaluate the thermomechanical response of tertiary collimators to beam impact. We identify the most critical and interesting cases, and show that a tilt of the jaw can actually mitigate the effect of an asynchronous dump on the collimators. Relevant collimator damage limits are taken into account, with the aim to identify optimal operational conditions for the LHC.
Optical ground station optimization for future optical geostationary satellite feeder uplinks
NASA Astrophysics Data System (ADS)
Camboulives, A.-R.; Velluet, M.-T.; Poulenard, S.; Saint-Antonin, L.; Michau, V.
2017-02-01
An optical link based on a multiplex of wavelengths at 1.55 μm is foreseen to be a valuable alternative to conventional radio frequencies for the feeder link of the next generation of high throughput geostationary satellites. Considering the limited power of the lasers envisioned for feeder links, the beam divergence has to be dramatically reduced. Consequently, beam pointing becomes a key issue. During its propagation between the ground station and a geostationary satellite, the optical beam is deflected (beam wandering), and possibly distorted (beam spreading), by atmospheric turbulence. This induces strong fluctuations of the detected telecom signal, thus increasing the bit error rate (BER). A steering mirror using a measurement from a beam coming from the satellite is used to pre-compensate the deflection. Because of the point-ahead angle between the downlink and the uplink, the turbulence effects experienced by the two beams are slightly different, inducing an error in the correction. This error is characterized as a function of the turbulence characteristics as well as of the terminal characteristics, such as the servo-loop bandwidth or the beam diameter, and is included in the link budget. From this result, it is possible to predict statistically the intensity fluctuations detected by the satellite (mean intensity, scintillation index, probability of fade, etc.). The final objective is to optimize the different parameters of an optical ground station capable of mitigating the impact of atmospheric turbulence on the uplink in order to be compliant with the targeted capacity (1 Terabit/s by 2025).
Self-Nulling Beam Combiner Using No External Phase Inverter
NASA Technical Reports Server (NTRS)
Bloemhof, Eric E.
2010-01-01
A self-nulling beam combiner is proposed that completely eliminates the phase inversion subsystem from the nulling interferometer, and instead uses the intrinsic phase shifts in the beam splitters. Simplifying the flight instrument in this way will be a valuable enhancement of mission reliability. The tighter tolerances on R = T (R being reflection and T being transmission coefficients) required by the self-nulling configuration actually impose no new constraints on the architecture, as two adaptive nullers must be situated between beam splitters to correct small errors in the coatings. The new feature is exploiting the natural phase shifts in beam combiners to achieve the 180° phase inversion necessary for nulling. The advantage over prior art is that an entire subsystem, the field-flipping optics, can be eliminated. For ultimate simplicity in the flight instrument, one might fabricate coatings to very high tolerances and dispense with the adaptive nullers altogether, with all their moving parts, along with the field flipper subsystem. A single adaptive nuller upstream of the beam combiner may be required to correct beam train errors (systematic noise), but in some circumstances phase chopping reduces these errors substantially, and there may be ways to further reduce the chop residuals. Though such coatings are beyond the current state of the art, the mechanical simplicity and robustness of a flight system without a field flipper or adaptive nullers would perhaps justify considerable effort on coating fabrication.
Thin film absorption characterization by focus error thermal lensing
NASA Astrophysics Data System (ADS)
Domené, Esteban A.; Schiltz, Drew; Patel, Dinesh; Day, Travis; Jankowska, E.; Martínez, Oscar E.; Rocca, Jorge J.; Menoni, Carmen S.
2017-12-01
A simple, highly sensitive technique for measuring absorbed power in thin film dielectrics based on thermal lensing is demonstrated. Absorption of an amplitude modulated or pulsed incident pump beam by a thin film acts as a heat source that induces thermal lensing in the substrate. A second, continuous wave, collimated probe beam defocuses after passing through the sample. Determination of absorption is achieved by quantifying the change of the probe beam profile at the focal plane using a four-quadrant detector and cylindrical lenses to generate a focus error signal. This signal is inherently insensitive to deflection, which removes the noise contribution from beam pointing stability. A linear dependence of the focus error signal on the absorbed power is shown for a dynamic range of over 10⁵. This technique was used to measure absorption loss in dielectric thin films deposited on fused silica substrates. In pulsed configuration, a single shot sensitivity of about 20 ppm is demonstrated, providing a unique technique for the characterization of moving targets as found in thin film growth instrumentation.
A method for photon beam Monte Carlo multileaf collimator particle transport
NASA Astrophysics Data System (ADS)
Siebers, Jeffrey V.; Keall, Paul J.; Kim, Jong Oh; Mohan, Radhe
2002-09-01
Monte Carlo (MC) algorithms are recognized as the most accurate methodology for patient dose assessment. For intensity-modulated radiation therapy (IMRT) delivered with dynamic multileaf collimators (DMLCs), accurate dose calculation, even with MC, is challenging. Accurate IMRT MC dose calculations require inclusion of the moving MLC in the MC simulation. Due to its complex geometry, full transport through the MLC can be time consuming. The aim of this work was to develop an MLC model for photon beam MC IMRT dose computations. The basis of the MC MLC model is that the complex MLC geometry can be separated into simple geometric regions, each of which readily lends itself to simplified radiation transport. For photons, only attenuation and first Compton scatter interactions are considered. The amount of attenuation material an individual particle encounters while traversing the entire MLC is determined by adding the individual amounts from each of the simplified geometric regions. Compton scatter is sampled based upon the total thickness traversed. Pair production and electron interactions (scattering and bremsstrahlung) within the MLC are ignored. The MLC model was tested for 6 MV and 18 MV photon beams by comparing it with measurements and MC simulations that incorporate the full physics and geometry for fields blocked by the MLC and with measurements for fields with the maximum possible tongue-and-groove and tongue-or-groove effects, for static test cases and for sliding windows of various widths. The MLC model predicts the field size dependence of the MLC leakage radiation within 0.1% of the open-field dose. The entrance dose and beam hardening behind a closed MLC are predicted within +/-1% or 1 mm. Dose undulations due to differences in inter- and intra-leaf leakage are also correctly predicted. The MC MLC model predicts leaf-edge tongue-and-groove dose effect within +/-1% or 1 mm for 95% of the points compared at 6 MV and 88% of the points compared at 18 MV. The dose through a static leaf tip is also predicted generally within +/-1% or 1 mm. Tests with sliding windows of various widths confirm the accuracy of the MLC model for dynamic delivery and indicate that accounting for a slight leaf position error (0.008 cm for our MLC) will improve the accuracy of the model. The MLC model developed is applicable to both dynamic MLC and segmental MLC IMRT beam delivery and will be useful for patient IMRT dose calculations, pre-treatment verification of IMRT delivery and IMRT portal dose transmission dosimetry.
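The following sketch illustrates, in very simplified form, the accumulate-then-attenuate idea described in the abstract: a photon's total path through leaf material is summed over simple geometric regions, it survives with probability exp(-mu*t), and interacting photons are split between a single Compton scatter and absorption. The thicknesses, attenuation coefficient, and Compton fraction are made-up illustration values, not the authors' MLC data.

```python
# Simplified sketch of MLC photon transport: sum the attenuating thickness a
# photon crosses in each simple region, then classify it as primary, singly
# Compton-scattered, or absorbed. All numerical values are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

MU_TOTAL = 0.45          # cm^-1, hypothetical attenuation coefficient of the leaf alloy
COMPTON_FRACTION = 0.85  # hypothetical share of interactions that are Compton scatters

def transport_photon(region_thicknesses_cm):
    """Return 'primary', 'compton', or 'absorbed' for one photon."""
    t = float(np.sum(region_thicknesses_cm))    # total path length in leaf material
    if rng.random() < np.exp(-MU_TOTAL * t):    # unscattered transmission
        return "primary"
    # Interacting photons: one Compton scatter with a fixed probability,
    # everything else treated as absorbed in this toy model.
    return "compton" if rng.random() < COMPTON_FRACTION else "absorbed"

# Example: a ray clipping a leaf tip (thin region) plus a leaf body (thick region).
counts = {"primary": 0, "compton": 0, "absorbed": 0}
for _ in range(100_000):
    counts[transport_photon([0.4, 5.6])] += 1
print(counts)
```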
SU-F-T-372: Surface and Peripheral Dose in Compensator-Based FFF Beam IMRT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, D; Feygelman, V; Moros, E
2016-06-15
Purpose: Flattening filter free (FFF) beams produce higher dose rates. Combined with compensator IMRT techniques, the dose delivery for each beam can be much shorter compared to the flattened beam MLC-based or compensator-based IMRT. This ‘snap shot’ IMRT delivery is beneficial to patients for tumor motion management. Due to softer energy, surface doses in FFF beam treatment are usually higher than those from flattened beams. Because of less scattering due to no flattening filter, peripheral doses are usually lower in FFF beam treatment. However, in compensator-based IMRT using FFF beams, the compensator is in the beam pathway. Does it introduce beam hardening effects and scattering such that the surface dose is lower and peripheral dose is higher compared to FFF beam MLC-based IMRT? Methods: This study applied Monte Carlo techniques to investigate the surface and peripheral doses in compensator-based IMRT using FFF beams and compared it to the MLC-based IMRT using FFF beams and flattened beams. Besides various thicknesses of copper slabs to simulate various thicknesses of compensators, a simple cone-shaped compensator was simulated to mimic a clinical application. The dose distribution in a water phantom by the cone-shaped compensator was then simulated by multiple MLC-defined FFF and flattened beams with various openings. After normalization to Dmax, the surface and peripheral dose was compared between the FFF beam compensator-based IMRT and FFF/flattened beam MLC-based IMRT. Results: The surface dose at the central 0.5 mm depth was close between the compensator and 6FFF MLC dose distributions, and about 8% (of Dmax) higher than the flattened 6MV MLC dose. At 8 cm off axis at dmax, the peripheral dose between the 6FFF and flattened 6MV MLC demonstrated similar doses, while the compensator dose was about 1% higher. Conclusion: The compensator does not reduce the surface doses but slightly increases the peripheral doses due to scatter inside the compensator.
Methods for multiple-telescope beam imaging and guiding in the near-infrared
NASA Astrophysics Data System (ADS)
Anugu, N.; Amorim, A.; Gordo, P.; Eisenhauer, F.; Pfuhl, O.; Haug, M.; Wieprecht, E.; Wiezorrek, E.; Lima, J.; Perrin, G.; Brandner, W.; Straubmeier, C.; Le Bouquin, J.-B.; Garcia, P. J. V.
2018-05-01
Atmospheric turbulence and precise measurement of the astrometric baseline vector between any two telescopes are two major challenges in implementing phase-referenced interferometric astrometry and imaging. They limit the performance of a fibre-fed interferometer by degrading the instrument sensitivity and the precision of astrometric measurements and by introducing image reconstruction errors due to inaccurate phases. A multiple-beam acquisition and guiding camera was built to meet these challenges for a recently commissioned four-beam combiner instrument, GRAVITY, at the European Southern Observatory Very Large Telescope Interferometer. For each telescope beam, it measures (a) field tip-tilts by imaging stars in the sky, (b) telescope pupil shifts by imaging pupil reference laser beacons installed on each telescope using a 2 × 2 lenslet and (c) higher-order aberrations using a 9 × 9 Shack-Hartmann. The telescope pupils are imaged to provide visual monitoring while observing. These measurements enable active field and pupil guiding by actuating a train of tip-tilt mirrors placed in the pupil and field planes, respectively. The Shack-Hartmann measured quasi-static aberrations are used to focus the auxiliary telescopes and allow the possibility of correcting the non-common path errors between the adaptive optics systems of the unit telescopes and GRAVITY. The guiding stabilizes the light injection into single-mode fibres, increasing sensitivity and reducing the astrometric and image reconstruction errors. The beam guiding enables us to achieve an astrometric error of less than 50 μas. Here, we report on the data reduction methods and laboratory tests of the multiple-beam acquisition and guiding camera and its performance on-sky.
Evaluation of three lidar scanning strategies for turbulence measurements
Newman, Jennifer F.; Klein, Petra M.; Wharton, Sonia; ...
2016-05-03
Several errors occur when a traditional Doppler beam swinging (DBS) or velocity–azimuth display (VAD) strategy is used to measure turbulence with a lidar. To mitigate some of these errors, a scanning strategy was recently developed which employs six beam positions to independently estimate the u, v, and w velocity variances and covariances. In order to assess the ability of these different scanning techniques to measure turbulence, a Halo scanning lidar, WindCube v2 pulsed lidar, and ZephIR continuous wave lidar were deployed at field sites in Oklahoma and Colorado with collocated sonic anemometers. Results indicate that the six-beam strategy mitigates some of the errors caused by VAD and DBS scans, but the strategy is strongly affected by errors in the variance measured at the different beam positions. The ZephIR and WindCube lidars overestimated horizontal variance values by over 60% under unstable conditions as a result of variance contamination, where additional variance components contaminate the true value of the variance. A correction method was developed for the WindCube lidar that uses variance calculated from the vertical beam position to reduce variance contamination in the u and v variance components. The correction method reduced WindCube variance estimates by over 20% at both the Oklahoma and Colorado sites under unstable conditions, when variance contamination is largest. This correction method can be easily applied to other lidars that contain a vertical beam position and is a promising method for accurately estimating turbulence with commercially available lidars.
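For readers unfamiliar with the six-beam idea, the sketch below shows how the u, v, and w variances and covariances can be recovered by solving a 6 x 6 linear system built from the beam unit vectors; the assumed geometry (five beams at 45 degrees elevation plus one vertical beam) and the radial-velocity variances are illustrative values, not the campaign data.

```python
# Six-beam retrieval sketch: each beam's radial-velocity variance is a known
# linear combination of the six velocity (co)variances, so six independent
# beam orientations give a solvable linear system.
import numpy as np

def radial_weights(azimuth_deg, elevation_deg):
    """Row of weights mapping (uu, vv, ww, uv, uw, vw) to one radial variance."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    nx, ny, nz = np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)
    return [nx**2, ny**2, nz**2, 2 * nx * ny, 2 * nx * nz, 2 * ny * nz]

# Assumed example geometry: five beams at 45 deg elevation, evenly spaced in
# azimuth, plus one vertical beam.
beams = [(0, 45), (72, 45), (144, 45), (216, 45), (288, 45), (0, 90)]
A = np.array([radial_weights(az, el) for az, el in beams])

# Measured radial-velocity variances for the six beams (m^2/s^2, made up).
sigma_r2 = np.array([1.10, 0.95, 0.80, 0.90, 1.05, 0.25])
uu, vv, ww, uv, uw, vw = np.linalg.solve(A, sigma_r2)
print(f"u-variance={uu:.2f}, v-variance={vv:.2f}, w-variance={ww:.2f}")
```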
[Accurate 3D free-form registration between fan-beam CT and cone-beam CT].
Liang, Yueqiang; Xu, Hongbing; Li, Baosheng; Li, Hongsheng; Yang, Fujun
2012-06-01
Because of X-ray scatter, the CT numbers in cone-beam CT do not correspond exactly to electron densities. This results in registration errors when an intensity-based registration algorithm is used to register planning fan-beam CT and cone-beam CT. In order to reduce the registration error, we have developed an accurate gradient-based registration algorithm. The gradient-based deformable registration problem is described as the minimization of an energy functional. Through the calculus of variations and the Gauss-Seidel finite difference method, we derived the iterative update formula of the deformable registration. The algorithm was implemented on the GPU through the OpenCL framework, which greatly reduced the registration time. Our experimental results showed that the proposed gradient-based registration algorithm registers clinical cone-beam CT and fan-beam CT images more accurately than the intensity-based algorithm. The GPU-accelerated algorithm meets the real-time requirement of online adaptive radiotherapy.
Experimental investigation of optimum beam size for FSO uplink
NASA Astrophysics Data System (ADS)
Kaushal, Hemani; Kaddoum, Georges; Jain, Virander Kumar; Kar, Subrat
2017-10-01
In this paper, the effect of transmitter beam size on the performance of free space optical (FSO) communication has been determined experimentally. The irradiance profile for varying turbulence strength is obtained using an optical turbulence generating (OTG) chamber inside a laboratory environment. Based on the results, an optimum beam size is investigated using the semi-analytical method. Moreover, the combined effects of atmospheric scintillation and beam-wander-induced pointing errors are considered in order to determine the optimum beam size that minimizes the bit error rate (BER) of the system for a fixed transmitter power and link length. The results show that the optimum beam size for the FSO uplink depends upon the Fried parameter and the outer scale of the turbulence. Further, it is observed that the optimum beam size increases with zenith angle but is nearly independent of the fade threshold level at low turbulence levels and only marginally dependent on it at high turbulence levels. Finally, the obtained outcome is useful for FSO system design and BER performance analysis.
Quantization of liver tissue in dual kVp computed tomography using linear discriminant analysis
NASA Astrophysics Data System (ADS)
Tkaczyk, J. Eric; Langan, David; Wu, Xiaoye; Xu, Daniel; Benson, Thomas; Pack, Jed D.; Schmitz, Andrea; Hara, Amy; Palicek, William; Licato, Paul; Leverentz, Jaynne
2009-02-01
Linear discriminant analysis (LDA) is applied to dual kVp CT and used for tissue characterization. The potential to quantitatively model both malignant and benign, hypo-intense liver lesions is evaluated by analysis of portal-phase, intravenous CT scan data obtained on human patients. Masses with an a priori classification are mapped to a distribution of points in basis material space. The degree of localization of tissue types in the material basis space is related to both quantum noise and real compositional differences. The density maps are analyzed with LDA and studied with system simulations to differentiate these factors. The discriminant analysis is formulated so as to incorporate the known statistical properties of the data. Effective kVp separation and mAs relate to the precision of tissue localization. Bias in the material position is related to the degree of X-ray scatter and the partial-volume effect. Experimental data and simulations demonstrate that for single energy (HU) imaging or image-based decomposition, pixel values of water-like tissues depend on proximity to other iodine-filled bodies. Beam-hardening errors cause a shift in image value on the scale of the difference sought between cancerous and cystic lesions. In contrast, projection-based decomposition, or its equivalent when implemented on a carefully calibrated system, can provide accurate data. On such a system, LDA may provide novel quantitative capabilities for tissue characterization in dual energy CT.
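The sketch below illustrates the basic LDA step on synthetic clusters in a two-dimensional basis-material space; the cluster centers, covariance, and class labels are invented for illustration and do not represent the patient data or the scanner calibration described above.

```python
# Minimal LDA illustration: separate two tissue classes in a two-dimensional
# basis-material space (e.g., water/iodine equivalent densities). All numbers
# below are synthetic stand-ins, not the study's measurements.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
n = 200
shared_cov = [[4e-4, 1e-4], [1e-4, 4e-4]]   # hypothetical quantum-noise covariance
cyst   = rng.multivariate_normal([1.000, 0.000], shared_cov, n)
lesion = rng.multivariate_normal([1.010, 0.004], shared_cov, n)

X = np.vstack([cyst, lesion])
y = np.array([0] * n + [1] * n)              # 0 = benign cyst, 1 = hypo-intense lesion

lda = LinearDiscriminantAnalysis()           # pooled covariance matches the shared-noise model
lda.fit(X, y)
print(f"training accuracy: {lda.score(X, y):.2f}")
print("discriminant direction in basis-material space:", lda.coef_[0])
```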
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lyazgin, Alexander, E-mail: lyazgin@list.ru; Shugurov, Artur, E-mail: shugurov@ispms.tsc.ru; Sergeev, Viktor, E-mail: retc@ispms.tsc.ru
The effect of bombardment of the Ni-B sublayer by Zr ion beams on the surface morphology and tribomechanical properties of Au-Ni coatings was investigated. It was found that the treatment has no significant effect on the surface roughness and grain size of the Au-Ni coatings, while it substantially reduces their friction coefficient and improves their wear resistance. It is shown that the increased wear resistance of these coatings is caused by strain hardening resulting from localization of plastic strain. The optimal Zr fluences that provide the maximum reduction of linear wear of the coatings were determined.
NASA Astrophysics Data System (ADS)
Konovalenko, Igor S.; Shilko, Evgeny V.; Ovcharenko, Vladimir E.; Psakhie, Sergey G.
2017-12-01
The paper presents numerical models, based on the movable cellular automaton method, of the surface layers of the metal-ceramic composite NiCr-TiC modified by electron beam irradiation in inert gas plasmas. The models take into account different geometric, concentration and mechanical parameters of the ceramic and metallic components. The authors study the contributions of key structural factors to the mechanical properties of the surface layers and determine the ranges of their variation that provide the optimum balance of strength, strain hardening and fracture toughness.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Li; Gu, Chun; Xu, Lixin, E-mail: xulixin@ustc.edu.cn
The self-adapting algorithms are improved to optimize the beam configuration in a direct-drive laser fusion system with solid-state lasers. A configuration of 32 laser beams is proposed for achieving highly uniform illumination, with a root-mean-square deviation at the 10⁻⁴ level. In our optimization, parameters such as beam number, beam arrangement, and beam intensity profile are taken into account. The robustness of the illumination uniformity versus parameters such as intensity profile deviations, power imbalance, intensity profile noise, the pointing error, and the target position error is also discussed. In this study, the model assumes solid-sphere illumination, and refraction effects of the incident light on the corona are not considered. Our results may have a potential application in the design of the direct-drive laser fusion of the Shen Guang-II Upgrading facility (SG-II-U, China).
Jani, Shyam S; Low, Daniel A; Lamb, James M
2015-01-01
To develop an automated system that detects patient identification and positioning errors between 3-dimensional computed tomography (CT) and kilovoltage CT planning images. Planning kilovoltage CT images were collected for head and neck (H&N), pelvis, and spine treatments with corresponding 3-dimensional cone beam CT and megavoltage CT setup images from TrueBeam and TomoTherapy units, respectively. Patient identification errors were simulated by registering setup and planning images from different patients. For positioning errors, setup and planning images were misaligned by 1 to 5 cm in the 6 anatomical directions for H&N and pelvis patients. Spinal misalignments were simulated by misaligning to adjacent vertebral bodies. Image pairs were assessed using commonly used image similarity metrics as well as custom-designed metrics. Linear discriminant analysis classification models were trained and tested on the imaging datasets, and misclassification error (MCE), sensitivity, and specificity parameters were estimated using 10-fold cross-validation. For patient identification, our workflow produced MCE estimates of 0.66%, 1.67%, and 0% for H&N, pelvis, and spine TomoTherapy images, respectively. Sensitivity and specificity ranged from 97.5% to 100%. MCEs of 3.5%, 2.3%, and 2.1% were obtained for TrueBeam images of the above sites, respectively, with sensitivity and specificity estimates between 95.4% and 97.7%. MCEs for 1-cm H&N/pelvis misalignments were 1.3%/5.1% and 9.1%/8.6% for TomoTherapy and TrueBeam images, respectively. Two-centimeter MCE estimates were 0.4%/1.6% and 3.1%/3.2%, respectively. MCEs for vertebral body misalignments were 4.8% and 3.6% for TomoTherapy and TrueBeam images, respectively. Patient identification and gross misalignment errors can be robustly and automatically detected using 3-dimensional setup images of different energies across 3 commonly treated anatomical sites. Copyright © 2015 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Peng, Te; Yang, Yangyang; Ma, Lina; Yang, Huayong
2016-10-01
A sensor system based on fiber Bragg gratings (FBG) is presented to estimate the deflection of a lightweight flexible beam, including the tip position and the tip rotation angle. In this paper, the classical problem of the deflection of a lightweight flexible beam of linear elastic material is analysed. We present the differential equation governing the behavior of the physical system and show that this equation, although straightforward in appearance, is in fact rather difficult to solve due to the presence of a non-linear term. We used epoxy glue to attach the FBG sensors at specific locations on the upper and lower surfaces of the beam in order to obtain local strain measurements. A quasi-distributed FBG static strain sensor network is designed and established. The estimates from the FBG sensors are compared to reference displacements from ANSYS simulations and to experimental results obtained in the laboratory in the static case. The errors of the FBG-based estimates are analysed for further error correction and design optimization. When the load weight is 20 g, the precision is the highest: the position errors e_x and e_y are 0.19% and 0.14%, respectively, and the rotation error e_θ is 1.23%.
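A minimal sketch of the kind of strain-to-deflection reconstruction described above is given below, assuming the usual surface-strain-to-curvature relation kappa = eps/(h/2) and trapezoidal integration of the rod kinematics; the sensor positions, strains, and beam thickness are invented example values, not the paper's FBG measurements.

```python
# Reconstruct tip position and rotation of a flexible beam from discrete
# surface-strain samples: kappa = eps/(h/2), theta' = kappa, x' = cos(theta),
# y' = sin(theta). All numerical values are illustrative.
import numpy as np

def tip_from_strain(s, eps, thickness):
    """s: sensor arc-length positions (m); eps: surface strains; thickness: beam h (m)."""
    kappa = 2.0 * np.asarray(eps, dtype=float) / thickness   # curvature at each sensor
    ds = np.diff(np.asarray(s, dtype=float))
    theta = np.concatenate(([0.0], np.cumsum(0.5 * (kappa[1:] + kappa[:-1]) * ds)))
    x_tip = np.sum(0.5 * (np.cos(theta[1:]) + np.cos(theta[:-1])) * ds)
    y_tip = np.sum(0.5 * (np.sin(theta[1:]) + np.sin(theta[:-1])) * ds)
    return x_tip, y_tip, theta[-1]                           # tip position and rotation

s = np.linspace(0.0, 0.30, 7)                # 7 hypothetical FBG stations on a 30 cm beam
eps = 500e-6 * (1.0 - s / 0.30)              # surface strain tapering to zero at the free end
x_tip, y_tip, theta_tip = tip_from_strain(s, eps, thickness=0.002)
print(f"tip x = {1e3 * x_tip:.1f} mm, deflection y = {1e3 * y_tip:.2f} mm, "
      f"rotation = {np.degrees(theta_tip):.3f} deg")
```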
SU-E-T-186: Cloud-Based Quality Assurance Application for Linear Accelerator Commissioning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rogers, J
2015-06-15
Purpose: To identify anomalies and safety issues during data collection and modeling for treatment planning systems. Methods: A cloud-based quality assurance system (AQUIRE - Automated QUalIty REassurance) has been developed to allow the uploading and analysis of beam data acquired during the treatment planning system commissioning process. In addition to comparing and aggregating measured data, tools have also been developed to extract dose from the treatment planning system for end-to-end testing. A gamma index is performed on the data to give a dose difference and distance-to-agreement for validation that a beam model is generating plans consistent with the beam data collection. Results: Over 20 linear accelerators have been commissioned using this platform, and a variety of errors and potential safety issues have been caught through the validation process. For example, the gamma index of 2% dose, 2 mm DTA is quite sufficient to see curves not corrected for effective point of measurement. Also, data imported into the database is analyzed against an aggregate of similar linear accelerators to show data points that are outliers. The resulting curves in the database exhibit a very small standard deviation and imply that a preconfigured beam model based on aggregated linear accelerators will be sufficient in most cases. Conclusion: With the use of this new platform for beam data commissioning, errors in beam data collection and treatment planning system modeling are greatly reduced. With the reduction in errors during acquisition, the resulting beam models are quite similar, suggesting that a common beam model may be possible in the future. Development is ongoing to create routine quality assurance tools to compare back to the beam data acquired during commissioning. I am a medical physicist for Alzyen Medical Physics, and perform commissioning services.
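As an illustration of the dose-difference/distance-to-agreement comparison mentioned above, the following one-dimensional gamma-index sketch uses a 2%/2 mm criterion on toy depth-dose curves; it is an editor's example, not part of the AQUIRE platform.

```python
# One-dimensional global gamma index (dose difference + distance-to-agreement).
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dose_crit=0.02, dist_crit_mm=2.0):
    """Gamma for each reference point; dose criterion as a fraction of the reference maximum."""
    d_norm = dose_crit * d_ref.max()
    gammas = np.empty_like(d_ref)
    for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
        dose_term = (d_eval - dr) / d_norm
        dist_term = (x_eval - xr) / dist_crit_mm
        gammas[i] = np.sqrt(dose_term**2 + dist_term**2).min()
    return gammas

x = np.arange(0.0, 100.0, 0.5)                 # depth in mm
reference = 100.0 * np.exp(-0.005 * x)         # toy percent-depth-dose curve
evaluated = 100.0 * np.exp(-0.005 * (x - 0.3)) # same curve shifted by 0.3 mm
g = gamma_1d(x, reference, x, evaluated)
print(f"gamma pass rate (2%/2 mm): {100.0 * np.mean(g <= 1.0):.1f}%")
```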
Qibo, Feng; Bin, Zhang; Cunxing, Cui; Cuifang, Kuang; Yusheng, Zhai; Fenglin, You
2013-11-04
A simple method for simultaneously measuring the 6DOF geometric motion errors of the linear guide was proposed. The mechanisms for measuring straightness and angular errors and for enhancing their resolution are described in detail. A common-path method for measuring the laser beam drift was proposed and it was used to compensate the errors produced by the laser beam drift in the 6DOF geometric error measurements. A compact 6DOF system was built. Calibration experiments with certain standard measurement meters showed that our system has a standard deviation of 0.5 µm in a range of ± 100 µm for the straightness measurements, and standard deviations of 0.5", 0.5", and 1.0" in the range of ± 100" for pitch, yaw, and roll measurements, respectively.
High Power High Efficiency Diode Laser Stack for Processing
NASA Astrophysics Data System (ADS)
Gu, Yuanyuan; Lu, Hui; Fu, Yueming; Cui, Yan
2018-03-01
High-power diode lasers based on GaAs semiconductor bars are well established as reliable and highly efficient laser sources. Because diode lasers are simple in structure, small in size, long-lived and inexpensive, they are widely used in industrial processing, such as heat treating, welding, hardening, cladding and so on. In particular, the rectangular beam patterns of diode lasers are well suited to producing fine beads with less power, which makes such practical applications possible. At this power level, they have many important applications, such as surgery, welding of polymers, soldering, coatings and surface treatment of metals. But some applications require much higher power and brightness, e.g. hardening, keyhole welding, cutting and metal welding, and current diode lasers fall short here mainly because of their limited performance. In addition, high-power diode lasers have important applications in the military field, so all developed countries have attached great importance to high-power diode laser systems and their applications. In this paper we introduce the structure and the operating principle of the high-power diode stack.
Read disturb errors in a CMOS static RAM chip [radiation hardened for spacecraft]
NASA Technical Reports Server (NTRS)
Wood, Steven H.; Marr, James C., IV; Nguyen, Tien T.; Padgett, Dwayne J.; Tran, Joe C.; Griswold, Thomas W.; Lebowitz, Daniel C.
1989-01-01
Results are reported from an extensive investigation into pattern-sensitive soft errors (read disturb errors) in the TCC244 CMOS static RAM chip. The TCC244, also known as the SA2838, is a radiation-hard single-event-upset-resistant 4 x 256 memory chip. This device is being used by the Jet Propulsion Laboratory in the Galileo and Magellan spacecraft, which will have encounters with Jupiter and Venus, respectively. Two aspects of the part's design are shown to result in the occurrence of read disturb errors: the transparence of the signal path from the address pins to the array of cells, and the large resistance in the Vdd and Vss lines of the cells in the center of the array. Probe measurements taken during a read disturb failure illustrate how address skews and the data pattern in the chip combine to produce a bit flip. A capacitive charge pump formed by the individual cell capacitances and the resistance in the supply lines pumps down both the internal cell voltage and the local supply voltage until a bit flip occurs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feygelman, Vladimir; Department of Physics, University of Manitoba, Winnipeg, MB; Mandelzweig, Yuri
2015-01-15
Matching electron beams without secondary collimators (applicators) were used for treatment of extensive, recurrent chest-wall carcinoma. Due to the wide penumbra of such beams, the homogeneity of the dose distribution at and around the junction point is clinically acceptable and relatively insensitive to positional errors. Specifically, the dose around the junction point is homogeneous to within ±4% as calculated from beam profiles, while a positional error of 1 cm leaves this number essentially unchanged. The experimental isodose distribution in an anthropomorphic phantom supports this conclusion. Two electron beams with wide penumbra were used to cover the desired treatment area with satisfactory dose homogeneity. The technique is relatively simple yet clinically useful and can be considered a viable alternative for treatment of extensive chest-wall disease. Steps are suggested to make this technique more universal.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neben, Abraham R.; Hewitt, Jacqueline N.; Dillon, Joshua S.
2016-03-20
Accurate antenna beam models are critical for radio observations aiming to isolate the redshifted 21 cm spectral line emission from the Dark Ages and the Epoch of Reionization (EOR) and unlock the scientific potential of 21 cm cosmology. Past work has focused on characterizing mean antenna beam models using either satellite signals or astronomical sources as calibrators, but antenna-to-antenna variation due to imperfect instrumentation has remained unexplored. We characterize this variation for the Murchison Widefield Array (MWA) through laboratory measurements and simulations, finding typical deviations of the order of ±10%–20% near the edges of the main lobe and in the sidelobes. We consider the ramifications of these results for image- and power spectrum-based science. In particular, we simulate visibilities measured by a 100 m baseline and find that using an otherwise perfect foreground model, unmodeled beam-forming errors severely limit foreground subtraction accuracy within the region of Fourier space contaminated by foreground emission (the “wedge”). This region likely contains much of the cosmological signal, and accessing it will require measurement of per-antenna beam patterns. However, unmodeled beam-forming errors do not contaminate the Fourier space region expected to be free of foreground contamination (the “EOR window”), showing that foreground avoidance remains a viable strategy.
Zhang, Shuangyue; Han, Dong; Politte, David G; Williamson, Jeffrey F; O'Sullivan, Joseph A
2018-05-01
The purpose of this study was to assess the performance of a novel dual-energy CT (DECT) approach for proton stopping power ratio (SPR) mapping that integrates image reconstruction and material characterization using a joint statistical image reconstruction (JSIR) method based on a linear basis vector model (BVM). A systematic comparison between the JSIR-BVM method and previously described DECT image- and sinogram-domain decomposition approaches is also carried out on synthetic data. The JSIR-BVM method was implemented to estimate the electron densities and mean excitation energies (I-values) required by the Bethe equation for SPR mapping. In addition, image- and sinogram-domain DECT methods based on three available SPR models including BVM were implemented for comparison. The intrinsic SPR modeling accuracy of the three models was first validated. Synthetic DECT transmission sinograms of two 330 mm diameter phantoms each containing 17 soft and bony tissues (for a total of 34) of known composition were then generated with spectra of 90 and 140 kVp. The estimation accuracy of the reconstructed SPR images were evaluated for the seven investigated methods. The impact of phantom size and insert location on SPR estimation accuracy was also investigated. All three selected DECT-SPR models predict the SPR of all tissue types with less than 0.2% RMS errors under idealized conditions with no reconstruction uncertainties. When applied to synthetic sinograms, the JSIR-BVM method achieves the best performance with mean and RMS-average errors of less than 0.05% and 0.3%, respectively, for all noise levels, while the image- and sinogram-domain decomposition methods show increasing mean and RMS-average errors with increasing noise level. The JSIR-BVM method also reduces statistical SPR variation by sixfold compared to other methods. A 25% phantom diameter change causes up to 4% SPR differences for the image-domain decomposition approach, while the JSIR-BVM method and sinogram-domain decomposition methods are insensitive to size change. Among all the investigated methods, the JSIR-BVM method achieves the best performance for SPR estimation in our simulation phantom study. This novel method is robust with respect to sinogram noise and residual beam-hardening effects, yielding SPR estimation errors comparable to intrinsic BVM modeling error. In contrast, the achievable SPR estimation accuracy of the image- and sinogram-domain decomposition methods is dominated by the CT image intensity uncertainties introduced by the reconstruction and decomposition processes. © 2018 American Association of Physicists in Medicine.
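Once the relative electron density and mean excitation energy have been estimated, the final SPR step is a direct application of the Bethe equation. The sketch below shows that step for a single tissue; the proton energy, I-values, and tissue parameters are assumed example numbers, not values from the study.

```python
# Proton stopping-power ratio to water from the (uncorrected) Bethe equation,
# given a relative electron density and a mean excitation energy I.
import numpy as np

M_P_C2 = 938.272e6   # proton rest energy, eV
M_E_C2 = 0.511e6     # electron rest energy, eV

def bethe_spr(rel_electron_density, i_tissue_ev, kinetic_energy_mev=175.0,
              i_water_ev=75.0):
    """SPR relative to water; I_water = 75 eV is an assumed reference value."""
    e_total = kinetic_energy_mev * 1e6 + M_P_C2
    beta2 = 1.0 - (M_P_C2 / e_total) ** 2
    def bracket(i_ev):
        return np.log(2.0 * M_E_C2 * beta2 / (i_ev * (1.0 - beta2))) - beta2
    return rel_electron_density * bracket(i_tissue_ev) / bracket(i_water_ev)

# Example: a bone-like tissue with relative electron density 1.70 and I = 112 eV.
print(f"SPR = {bethe_spr(1.70, 112.0):.3f}")
```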
Multi-kW coherent combining of fiber lasers seeded with pseudo random phase modulated light
NASA Astrophysics Data System (ADS)
Flores, Angel; Ehrehreich, Thomas; Holten, Roger; Anderson, Brian; Dajani, Iyad
2016-03-01
We report efficient coherent beam combining of five kilowatt-class fiber amplifiers with a diffractive optical element (DOE). Based on a master oscillator power amplifier (MOPA) configuration, the amplifiers were seeded with pseudo-random phase modulated light. Each non-polarization-maintaining fiber amplifier was optically path-length matched and provides approximately 1.2 kW of near diffraction-limited output power (measured M2 < 1.1). Because the fibers are not polarization maintaining, a low power sample of each laser was utilized for active linear polarization control. A low power sample of the combined beam after the DOE provided an error signal for active phase locking, which was performed via Locking of Optical Coherence by Single-Detector Electronic-Frequency Tagging (LOCSET). After phase stabilization, the beams were coherently combined via the 1x5 DOE. A total combined output power of 4.9 kW was achieved with 82% combining efficiency and excellent beam quality (M2 < 1.1). The intrinsic DOE splitter loss was 5%. Additional losses due in part to non-ideal polarization, ASE content, uncorrelated wavefront errors, and misalignment errors contributed to the efficiency reduction.
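The sensitivity of combining efficiency to residual piston phase error can be illustrated with a short Monte Carlo, shown below for five equal-amplitude beams; the 100 mrad RMS phase error is an assumed example and is not the residual reported for this system.

```python
# Combining efficiency of N equal-amplitude beams with Gaussian piston phase
# errors; the expected value is 1/N + (1 - 1/N) * exp(-sigma^2).
import numpy as np

def combining_efficiency(n_beams, sigma_rad, n_trials=20000, seed=0):
    rng = np.random.default_rng(seed)
    phases = rng.normal(0.0, sigma_rad, size=(n_trials, n_beams))
    eta = np.abs(np.exp(1j * phases).sum(axis=1)) ** 2 / n_beams**2
    return eta.mean()

n, sigma = 5, 0.100                              # 5 beams, 100 mrad RMS phase error (assumed)
mc = combining_efficiency(n, sigma)
analytic = 1.0 / n + (1.0 - 1.0 / n) * np.exp(-sigma**2)
print(f"Monte Carlo: {mc:.4f}, analytic: {analytic:.4f}")
```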
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tarazona, David; Berz, Martin; Hipple, Robert
The main goal of the Muon g-2 Experiment (g-2) at Fermilab is to measure the muon anomalous magnetic moment to unprecedented precision. This new measurement will make it possible to test the completeness of the Standard Model (SM) and to validate other theoretical models beyond the SM. The close interplay between the understanding of particle beam dynamics, the preparation of the beam properties, and the experimental measurement is essential to the reduction of systematic errors in the determination of the muon anomalous magnetic moment. We describe progress in developing detailed calculations and modeling of the muon beam delivery system in order to obtain a better understanding of spin-orbit correlations, nonlinearities, and more realistic aspects that contribute to the systematic errors of the g-2 measurement. Our simulation is meant to provide, among other things, statistical studies of error effects and quick analyses of running conditions while g-2 is taking beam. We are using COSY, a differential algebra solver developed at Michigan State University, which will also serve as an alternative for comparison with results obtained by other simulation teams of the g-2 Collaboration.
NASA Astrophysics Data System (ADS)
Henry, William; Jefferson Lab Hall A Collaboration
2017-09-01
Jefferson Lab's cutting-edge parity-violating electron scattering program has increasingly stringent requirements for systematic errors. Beam polarimetry is often one of the dominant systematic errors in these experiments. A new Møller polarimeter in Hall A of Jefferson Lab (JLab) was installed in 2015 and has taken first measurements for a polarized scattering experiment. Upcoming parity violation experiments in Hall A include CREX, PREX-II, MOLLER and SOLID, with the latter two requiring <0.5% precision on beam polarization measurements. The polarimeter measures the Møller scattering rates of the polarized electron beam incident upon an iron target placed in a saturating magnetic field. The spectrometer consists of four focusing quadrupoles and one momentum selection dipole. The detector is designed to measure the scattered and knocked-out target electrons in coincidence. Beam polarization is extracted by constructing an asymmetry from the scattering rates when the incident electron spin is parallel and anti-parallel to the target electron spin. Initial data will be presented. Sources of systematic error include target magnetization, spectrometer acceptance, the Levchuk effect, and radiative corrections, which will be discussed. Supported by the National Science Foundation.
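The final extraction step described above reduces to dividing the measured rate asymmetry by the target polarization and the acceptance-averaged analyzing power. The sketch below shows that arithmetic with invented rates and nominal values (including A_zz = -7/9, the value at 90 degrees in the center of mass); it is not the Hall A analysis code.

```python
# Beam polarization from a Moller coincidence-rate asymmetry:
# epsilon = (R_par - R_anti) / (R_par + R_anti) = P_beam * P_target * <A_zz>.
def beam_polarization(rate_parallel, rate_antiparallel,
                      target_polarization=0.080, analyzing_power=-7.0 / 9.0):
    # target_polarization and analyzing_power are assumed nominal values.
    asym = (rate_parallel - rate_antiparallel) / (rate_parallel + rate_antiparallel)
    return asym / (target_polarization * analyzing_power)

# Hypothetical coincidence rates (arbitrary units) for the two spin configurations.
p = beam_polarization(rate_parallel=0.947e5, rate_antiparallel=1.053e5)
print(f"extracted beam polarization: {100 * p:.1f}%")
```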
Dasgupta, Subhashish; Banerjee, Rupak K; Hariharan, Prasanna; Myers, Matthew R
2011-02-01
Experimental studies of thermal effects in high-intensity focused ultrasound (HIFU) procedures are often performed with the aid of fine wire thermocouples positioned within tissue phantoms. Thermocouple measurements are subject to several types of error which must be accounted for before reliable inferences can be made on the basis of the measurements. Thermocouple artifact due to viscous heating is one source of error. A second is the uncertainty regarding the position of the beam relative to the target location or the thermocouple junction, due to the error in positioning the beam at the junction. This paper presents a method for determining the location of the beam relative to a fixed pair of thermocouples. The localization technique reduces the uncertainty introduced by positioning errors associated with very narrow HIFU beams. The technique is presented in the context of an investigation into the effect of blood flow through large vessels on the efficacy of HIFU procedures targeted near the vessel. Application of the beam localization method allowed conclusions regarding the effects of blood flow to be drawn from previously inconclusive (because of localization uncertainties) data. Comparison of the position-adjusted transient temperature profiles for flow rates of 0 and 400 ml/min showed that blood flow can reduce temperature elevations by more than 10% when the HIFU focus is within 2 mm of the vessel wall. At acoustic power levels of 17.3 and 24.8 W there is a 20- to 70-fold decrease in thermal dose due to the convective cooling effect of blood flow, implying a shrinkage in lesion size. The beam-localization technique also revealed the level of thermocouple artifact as a function of sonication time, providing investigators with an indication of the quality of thermocouple data for a given exposure time. The maximum artifact was found to be double the measured temperature rise during the initial few seconds of sonication. Copyright © 2010 Elsevier B.V. All rights reserved.
Accuracy and Landmark Error Calculation Using Cone-Beam Computed Tomography–Generated Cephalograms
Grauer, Dan; Cevidanes, Lucia S. H.; Styner, Martin A.; Heulfe, Inam; Harmon, Eric T.; Zhu, Hongtu; Proffit, William R.
2010-01-01
Objective To evaluate systematic differences in landmark position between cone-beam computed tomography (CBCT)–generated cephalograms and conventional digital cephalograms and to estimate how much variability should be taken into account when both modalities are used within the same longitudinal study. Materials and Methods Landmarks on homologous cone-beam computed tomographic–generated cephalograms and conventional digital cephalograms of 46 patients were digitized, registered, and compared via the Hotelling T2 test. Results There were no systematic differences between modalities in the position of most landmarks. Three landmarks showed statistically significant differences but did not reach clinical significance. A method for error calculation while combining both modalities in the same individual is presented. Conclusion In a longitudinal follow-up for assessment of treatment outcomes and growth of one individual, the error due to the combination of the two modalities might be larger than previously estimated. PMID:19905853
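For reference, a paired Hotelling T2 comparison of landmark coordinates between two modalities can be computed as in the sketch below; the synthetic landmark differences are illustrative and are not the study's measurements.

```python
# Paired Hotelling T^2 test on per-patient landmark-coordinate differences
# (p coordinates per case, n cases), with an F-distribution p-value.
import numpy as np
from scipy import stats

def hotelling_t2_paired(diff):
    """diff: (n, p) array of per-patient coordinate differences (e.g., in mm)."""
    n, p = diff.shape
    mean = diff.mean(axis=0)
    cov = np.cov(diff, rowvar=False)
    t2 = n * mean @ np.linalg.solve(cov, mean)
    f_stat = (n - p) / (p * (n - 1)) * t2
    p_value = stats.f.sf(f_stat, p, n - p)
    return t2, p_value

rng = np.random.default_rng(2)
# Synthetic example: 46 cases, 3 coordinates per landmark, small mean offsets.
differences = rng.normal(loc=[0.1, -0.05, 0.0], scale=0.5, size=(46, 3))
t2, p_val = hotelling_t2_paired(differences)
print(f"T^2 = {t2:.2f}, p = {p_val:.3f}")
```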
Reduction of Non-uniform Beam Filling Effects by Vertical Decorrelation: Theory and Simulations
NASA Technical Reports Server (NTRS)
Short, David; Nakagawa, Katsuhiro; Iguchi, Toshio
2013-01-01
Algorithms for estimating precipitation rates from spaceborne radar observations of apparent radar reflectivity depend on attenuation correction procedures. The algorithm suite for the Ku-band precipitation radar aboard the Tropical Rainfall Measuring Mission satellite is one such example. The well-known problem of nonuniform beam filling is a source of error in the estimates, especially in regions where intense deep convection occurs. The error is caused by unresolved horizontal variability in precipitation characteristics such as specific attenuation, rain rate, and effective reflectivity factor. This paper proposes the use of vertical decorrelation for correcting the nonuniform beam filling error developed under the assumption of a perfect vertical correlation. Empirical tests conducted using ground-based radar observations in the current simulation study show that decorrelation effects are evident in tilted convective cells. However, the problem of obtaining reasonable estimates of a governing parameter from the satellite data remains unresolved.
Bachman, Daniel; Chen, Zhijiang; Wang, Christopher; ...
2016-11-29
Phase errors caused by fabrication variations in silicon photonic integrated circuits are an important problem, which negatively impacts device yield and performance. This study reports our recent progress in the development of a method for permanent, postfabrication phase error correction of silicon photonic circuits based on femtosecond laser irradiation. Using a beam shaping technique, we achieve a 14-fold enhancement in the phase tuning resolution of the method with a Gaussian-shaped beam compared to a top-hat beam. The large improvement in the tuning resolution makes the femtosecond laser method potentially useful for very fine phase trimming of silicon photonic circuits. Finally, we also show that femtosecond laser pulses can directly modify silicon photonic devices through a SiO2 cladding layer, making it the only permanent post-fabrication method that can tune silicon photonic circuits protected by an oxide cladding.
Minimum constitutive relation error based static identification of beams using force method
NASA Astrophysics Data System (ADS)
Guo, Jia; Takewaki, Izuru
2017-05-01
A new static identification approach based on the minimum constitutive relation error (CRE) principle for beam structures is introduced. The exact stiffness and the exact bending moment are shown to minimize the CRE for given measured displacements of the damaged beam. A two-step substitution algorithm, with a force-method step for the bending moment and a constitutive-relation step for the stiffness, is developed and its convergence is rigorously derived. Identifiability is further discussed, and the stiffness in the undeformed region is found to be unidentifiable. An extra set of static measurements is added to remedy this drawback. Convergence and robustness are finally verified through numerical examples.
Cullen, Jared; Lobo, Charlene J; Ford, Michael J; Toth, Milos
2015-09-30
Electron-beam-induced deposition (EBID) is a direct-write chemical vapor deposition technique in which an electron beam is used for precursor dissociation. Here we show that Arrhenius analysis of the deposition rates of nanostructures grown by EBID can be used to deduce the diffusion energies and corresponding preexponential factors of EBID precursor molecules. We explain the limitations of this approach, define growth conditions needed to minimize errors, and explain why the errors increase systematically as EBID parameters diverge from ideal growth conditions. Under suitable deposition conditions, EBID can be used as a localized technique for analysis of adsorption barriers and prefactors.
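The Arrhenius analysis itself amounts to a linear fit of ln(rate) against 1/(k_B T). The sketch below shows that fit on synthetic data; the temperatures, rates, and extracted barrier are invented example values, and whether the barrier corresponds to precursor diffusion or another process depends on the growth regime, as the abstract cautions.

```python
# Arrhenius fit: rate = A * exp(-E_a / (k_B * T)), so ln(rate) is linear in
# 1/(k_B * T) with slope -E_a and intercept ln(A). Data below are synthetic.
import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_fit(temps_k, rates):
    """Return (activation energy in eV, preexponential factor) from a linear fit."""
    slope, intercept = np.polyfit(1.0 / (K_B * np.asarray(temps_k)),
                                  np.log(np.asarray(rates)), 1)
    return -slope, np.exp(intercept)

temps = np.array([260.0, 280.0, 300.0, 320.0, 340.0])   # K, assumed substrate temperatures
true_ea, true_a = 0.25, 3.0e3                           # eV and prefactor, made up
noise = 1.0 + 0.02 * np.random.default_rng(3).normal(size=temps.size)
rates = true_a * np.exp(-true_ea / (K_B * temps)) * noise
ea, prefactor = arrhenius_fit(temps, rates)
print(f"E_a = {ea:.3f} eV, prefactor = {prefactor:.2e}")
```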
Glaser, Adam K; Andreozzi, Jacqueline M; Zhang, Rongxiao; Pogue, Brian W; Gladstone, David J
2015-07-01
To test the use of a three-dimensional (3D) optical cone beam computed tomography reconstruction algorithm, for estimation of the imparted 3D dose distribution from megavoltage photon beams in a water tank for quality assurance, by imaging the induced Cherenkov-excited fluorescence (CEF). An intensified charge-coupled device coupled to a standard nontelecentric camera lens was used to tomographically acquire two-dimensional (2D) projection images of CEF from a complex multileaf collimator (MLC) shaped 6 MV linear accelerator x-ray photon beam operating at a dose rate of 600 MU/min. The resulting projections were used to reconstruct the 3D CEF light distribution, a potential surrogate of imparted dose, using a Feldkamp-Davis-Kress cone beam back reconstruction algorithm. Finally, the reconstructed light distributions were compared to the expected dose values from one-dimensional diode scans, 2D film measurements, and the 3D distribution generated from the clinical Varian ECLIPSE treatment planning system using a gamma index analysis. A Monte Carlo derived correction was applied to the Cherenkov reconstructions to account for beam hardening artifacts. 3D light volumes were successfully reconstructed over a 400 × 400 × 350 mm(3) volume at a resolution of 1 mm. The Cherenkov reconstructions showed agreement with all comparative methods and were also able to recover both inter- and intra-MLC leaf leakage. Based upon a 3%/3 mm criterion, the experimental Cherenkov light measurements showed an 83%-99% pass fraction depending on the chosen threshold dose. The results from this study demonstrate the use of optical cone beam computed tomography using CEF for the profiling of the imparted dose distribution from large area megavoltage photon beams in water.
Advanced Microwave Radiometer (AMR) for SWOT mission
NASA Astrophysics Data System (ADS)
Chae, C. S.
2015-12-01
The objective of the SWOT (Surface Water & Ocean Topography) satellite mission is to measure wide-swath, high resolution ocean topography and terrestrial surface waters. Since the main payload radar will use interferometric SAR technology, a conventional microwave radiometer system with a single nadir-looking antenna beam (e.g., OSTM/Jason-2 AMR) is not ideally suited to the mission's wet tropospheric delay correction. Therefore, the SWOT AMR incorporates two antenna beams along the cross-track direction. In addition to the cross-track design of the AMR radiometer, the wet tropospheric error requirement is expressed in the spatial frequency domain (in cy/km), in other words, as a power spectral density (PSD). Thus, instrument error allocation and design are being done in terms of PSD, which is not the conventional approach for microwave radiometer requirement allocation and design. Several novel analyses include: 1. the effects of antenna beam size on PSD error and land/ocean contamination; 2. receiver error allocation and the contributions of radiometric count averaging, NEDT, gain variation, etc.; 3. the effect of the thermal design in the frequency domain. In the presentation, the detailed AMR design and analysis results will be discussed.
Surface treatment with linearly polarized laser beam at oblique incidence
NASA Astrophysics Data System (ADS)
Gutu, I.; Petre, C.; Mihailescu, I. N.; Taca, M.; Alexandrescu, E.; Ivanov, I.
2002-07-01
An effective method for surface heat treatment with a 10.6 μm linearly polarized laser beam at oblique incidence is reported. A circular focused laser spot on the workpiece surface, together with a 2.2-4 times increase in absorption, is obtained in the 70-80° range of incidence angles. The main element of the experimental setup is the astigmatic focusing head, which focuses the laser beam into an elliptical spot of ellipticity ɛ>3 at normal incidence. At a proper incidence angle (obtained by tilting the focusing head) the focused laser spot on the workpiece surface becomes circular and a p-state of polarization is achieved. We performed laser heat treatment (transformation hardening, surface remelting) of uncoated surfaces, as well as alloying and cladding processes by powder injection. An enhancement of the processing efficiency was obtained; in this way the investment and operating costs for surface treatment with a CO2 laser can be significantly reduced. Several technical advantages concerning contamination of the focusing optical components, powder jet flow, and radiation reflected by the workpiece surface are obtained.
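The geometry behind the circular spot can be stated compactly. Assuming the elliptical spot's minor axis lies in the plane of incidence, tilting the surface by the incidence angle θ stretches that axis by 1/cos θ, so the footprint becomes circular when (an illustrative relation under this stated assumption, not a derivation from the paper):

```latex
% Illustrative geometry only; the axis orientation is an assumption.
\frac{1}{\cos\theta} = \varepsilon
\quad\Longrightarrow\quad
\theta = \arccos\!\left(\frac{1}{\varepsilon}\right),
\qquad \varepsilon = 3 \;\Rightarrow\; \theta \approx 70.5^{\circ},
\qquad \varepsilon \approx 5.8 \;\Rightarrow\; \theta \approx 80^{\circ},
```

which is consistent with the reported ellipticity ɛ>3 and the 70-80° working range of incidence angles.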
Interferometer for Measuring Displacement to Within 20 pm
NASA Technical Reports Server (NTRS)
Zhao, Feng
2003-01-01
An optical heterodyne interferometer that can be used to measure linear displacements with an error <=20 pm has been developed. The remarkable accuracy of this interferometer is achieved through a design that includes (1) a wavefront split that reduces (relative to amplitude splits used in other interferometers) self interference and (2) a common-optical-path configuration that affords common-mode cancellation of the interference effects of thermal-expansion changes in optical-path lengths. The most popular method of displacement-measuring interferometry involves two beams, the polarizations of which are meant to be kept orthogonal upstream of the final interference location, where the difference between the phases of the two beams is measured. Polarization leakages (deviations from the desired perfect orthogonality) contaminate the phase measurement with periodic nonlinear errors. In commercial interferometers, these phase-measurement errors result in displacement errors in the approximate range of 1 to 10 nm. Moreover, because prior interferometers lack compensation for thermal-expansion changes in optical-path lengths, they are subject to additional displacement errors characterized by a temperature sensitivity of about 100 nm/K. Because the present interferometer does not utilize polarization in the separation and combination of the two interfering beams and because of the common-mode cancellation of thermal-expansion effects, the periodic nonlinear errors and the sensitivity to temperature changes are much smaller than in other interferometers.
Impact of spot charge inaccuracies in IMPT treatments.
Kraan, Aafke C; Depauw, Nicolas; Clasie, Ben; Giunta, Marina; Madden, Tom; Kooy, Hanne M
2017-08-01
Spot charge is a parameter of pencil-beam scanning dose delivery systems whose accuracy is typically high but whose required tolerance has not been investigated. In this work we quantify the impact of spot charge inaccuracies on the dose distribution in patients. Knowing the effect of charge errors is relevant for conventional proton machines, as well as for new-generation proton machines, where ensuring accurate charge may be challenging. Through perturbation of spot charge in treatment plans for seven patients and a phantom, we evaluated the dose impact of absolute (up to 5 × 10^6 protons) and relative (up to 30%) charge errors. We investigated the dependence on beam width by studying scenarios with small, medium and large beam sizes. Treatment plan statistics included the Γ passing rate, dose-volume histograms and dose differences. The allowable absolute charge error for small-spot plans was about 2 × 10^6 protons; larger limits would be allowed if larger spots were used. For relative errors, the maximum allowable error for small, medium and large spots was about 13%, 8% and 6%, respectively. Dose distributions turned out to be surprisingly robust against random spot charge perturbation. Our study suggests that ensuring spot charge errors as small as 1-2%, as is commonly aimed at in conventional proton therapy machines, is clinically not strictly needed. © 2017 American Association of Physicists in Medicine.
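A toy version of this kind of perturbation experiment is sketched below: a 1D field built from Gaussian spots on a regular grid, with random relative charge errors applied to each spot and the resulting dose deviation evaluated in the field core. The beam model and all numbers are placeholders, not the clinical system studied above.

```python
# Toy 1D illustration of random relative spot-charge errors.
# Gaussian pencil beams on a regular grid; all parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)
sigma = 5.0                      # spot sigma, mm (assumed "medium" spot)
grid = 5.0                       # spot spacing, mm
x = np.linspace(-40, 40, 801)    # evaluation axis, mm
centers = np.arange(-25, 25 + grid, grid)

def field(weights):
    """Sum of Gaussian spots with the given relative weights (charges)."""
    return sum(w * np.exp(-(x - c) ** 2 / (2 * sigma ** 2))
               for w, c in zip(weights, centers))

nominal = field(np.ones_like(centers))
perturbed = field(1.0 + 0.10 * rng.standard_normal(centers.size))  # 10% relative errors

core = np.abs(x) < 15.0          # look at the flat central region only
dev = (perturbed[core] - nominal[core]) / nominal[core]
print(f"max |dose deviation| in core: {100 * np.abs(dev).max():.2f}%")
```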
Measurement of bow tie profiles in CT scanners using a real-time dosimeter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whiting, Bruce R., E-mail: whitingbrucer@gmail.com; Evans, Joshua D.; Williamson, Jeffrey F.
2014-10-15
Purpose: Several areas of computed tomography (CT) research require knowledge about the intensity profile of the x-ray fan beam that is introduced by a bow tie filter. This information is considered proprietary by CT manufacturers, so noninvasive measurement methods are required. One method using real-time dosimeters has been proposed in the literature. A commercially available dosimeter was used to apply that method, and analysis techniques were developed to extract fan beam profiles from measurements. Methods: A real-time ion chamber was placed near the periphery of an empty CT gantry and the dose rate versus time waveform was recorded as the x-ray source rotated about the isocenter. In contrast to previously proposed analysis methods that assumed a pointlike detector, the finite-size ion chamber received varying amounts of coverage by the collimated x-ray beam during rotation, precluding a simple relationship between the source intensity as a function of fan beam angle and measured intensity. A two-parameter model for measurement intensity was developed that included both effective collimation width and source-to-detector distance, which then was iteratively solved to minimize the error between duplicate measurements at corresponding fan beam angles, allowing determination of the fan beam profile from measured dose-rate waveforms. Measurements were performed on five different scanner systems while varying parameters such as collimation, kVp, and bow tie filters. On one system, direct measurements of the bow tie profile were collected for comparison with the real-time dosimeter technique. Results: The data analysis method for a finite-size detector was found to produce a fan beam profile estimate with a relative error between duplicate measurement intensities of <5%. It was robust over a wide range of collimation widths (e.g., 1–40 mm), producing fan beam profiles that agreed with a relative error of 1%–5%. Comparison with a direct measurement technique on one system produced agreement with a relative error of 2%–6%. Fan beam profiles were found to differ for different filter types on a given system and between different vendors. Conclusions: A commercially available real-time dosimeter probe was found to be a convenient and accurate instrument for measuring fan beam profiles. An analysis method was developed that could handle a wide range of collimation widths by explicitly considering the finite width of the ion chamber. Relative errors in the profiles were found to be less than 5%. Measurements of five different clinical scanners demonstrate the variation in bow tie designs, indicating that generic bow tie models will not be adequate for CT system research.
Laser beam welding of new ultra-high strength and supra-ductile steels
NASA Astrophysics Data System (ADS)
Dahmen, Martin
2015-03-01
Ultra-high strength and supra-ductile steels are entering new fields of application. These materials are excellent candidates for modern light-weight construction and functional integration. The stainless martensitic grade 1.4034 and the bainitic steel UNS 53835 are investigated as ultra-high strength steels; two high-manganese austenitic steels with 18 and 28% manganese represent the supra-ductile steels. As no processing windows exist, an approach starting from the metallurgical basis is required. To adjust the weld microstructure, the Q+P and QT steels require weld heat treatment, whereas the HSD steel is weldable without it. Owing to their applications, the ultra-high strength steels are welded in the as-rolled and strengthened condition; the response of the weld to hot stamping is also considered for the martensitic grades. The supra-ductile steels are welded both solution annealed and work hardened by 50%. The results show the general suitability for laser beam welding.
Development of Biological Acoustic Impedance Microscope and its Error Estimation
NASA Astrophysics Data System (ADS)
Hozumi, Naohiro; Nakano, Aiko; Terauchi, Satoshi; Nagao, Masayuki; Yoshida, Sachiko; Kobayashi, Kazuto; Yamamoto, Seiji; Saijo, Yoshifumi
This report deals with a scanning acoustic microscope for imaging the cross-sectional acoustic impedance of biological soft tissues. A focused acoustic beam was transmitted to the tissue object mounted on the "rear surface" of a plastic substrate. Rat cerebellum tissue and a reference material were observed at the same time under the same conditions. As the incidence is not normal, not only a longitudinal wave but also a transversal wave is generated in the substrate. The error in acoustic impedance under the assumption of normal incidence was estimated. It was shown that the error can be precisely compensated if the beam pattern and the acoustic parameters of the coupling medium and substrate are known.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, S
2015-06-15
Purpose: To evaluate the ability of statistical process control methods to detect systematic errors when using a two dimensional (2D) detector array for routine electron beam energy verification. Methods: Electron beam energy constancy was measured using an aluminum wedge and a 2D diode array on four linear accelerators. Process control limits were established. Measurements were recorded in control charts and compared with both calculated process control limits and TG-142 recommended specification limits. The data was tested for normality, process capability and process acceptability. Additional measurements were recorded while systematic errors were intentionally introduced. Systematic errors included shifts in the alignment of the wedge, incorrect orientation of the wedge, and incorrect array calibration. Results: Control limits calculated for each beam were smaller than the recommended specification limits. Process capability and process acceptability ratios were greater than one in all cases. All data was normally distributed. Shifts in the alignment of the wedge were most apparent for low energies. The smallest shift (0.5 mm) was detectable using process control limits in some cases, while the largest shift (2 mm) was detectable using specification limits in only one case. The wedge orientation tested did not affect the measurements as this did not affect the thickness of aluminum over the detectors of interest. Array calibration dependence varied with energy and selected array calibration. 6 MeV was the least sensitive to array calibration selection while 16 MeV was the most sensitive. Conclusion: Statistical process control methods demonstrated that the data distribution was normally distributed, the process was capable of meeting specifications, and that the process was centered within the specification limits. Though not all systematic errors were distinguishable from random errors, process control limits increased the ability to detect systematic errors using routine measurement of electron beam energy constancy.
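The control-chart machinery referred to above can be illustrated with a standard individuals/moving-range chart; the measurements below are synthetic, and the 2.66 moving-range factor and Cp/Cpk definitions are the usual textbook ones rather than values from this abstract.

```python
# Individuals (X) control chart limits and capability ratios for a
# synthetic series of electron-beam energy constancy measurements.
import numpy as np

rng = np.random.default_rng(1)
x = 100.0 + 0.3 * rng.standard_normal(30)   # e.g. energy-constancy metric, % of baseline

mean = x.mean()
mr_bar = np.abs(np.diff(x)).mean()          # average moving range
ucl, lcl = mean + 2.66 * mr_bar, mean - 2.66 * mr_bar   # individuals-chart control limits

usl, lsl = 102.0, 98.0                      # assumed specification limits (illustrative)
sigma_hat = mr_bar / 1.128                  # short-term sigma estimate
cp = (usl - lsl) / (6 * sigma_hat)                       # process capability
cpk = min(usl - mean, mean - lsl) / (3 * sigma_hat)      # process acceptability

print(f"control limits: [{lcl:.2f}, {ucl:.2f}], Cp={cp:.2f}, Cpk={cpk:.2f}")
out_of_control = np.where((x > ucl) | (x < lcl))[0]
print("out-of-control points:", out_of_control)
```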
NASA Astrophysics Data System (ADS)
Meng, Bowen; Xing, Lei; Han, Bin; Koong, Albert; Chang, Daniel; Cheng, Jason; Li, Ruijiang
2013-11-01
Non-coplanar beams are important for treatment of both cranial and noncranial tumors. Treatment verification of such beams with couch rotation/kicks, however, is challenging, particularly for the application of cone beam CT (CBCT). In this situation, only limited and unconventional imaging angles are feasible to avoid collision between the gantry, couch, patient, and on-board imaging system. The purpose of this work is to develop a CBCT verification strategy for patients undergoing non-coplanar radiation therapy. We propose an image reconstruction scheme that integrates a prior image constrained compressed sensing (PICCS) technique with image registration. Planning CT or CBCT acquired at the neutral position is rotated and translated according to the nominal couch rotation/translation to serve as the initial prior image. Here, the nominal couch movement is chosen to have a rotational error of 5° and translational error of 8 mm from the ground truth in one or more axes or directions. The proposed reconstruction scheme alternates between two major steps. First, an image is reconstructed using the PICCS technique implemented with total-variation minimization and simultaneous algebraic reconstruction. Second, the rotational/translational setup errors are corrected and the prior image is updated by applying rigid image registration between the reconstructed image and the previous prior image. The PICCS algorithm and rigid image registration are alternated iteratively until the registration results fall below a predetermined threshold. The proposed reconstruction algorithm is evaluated with an anthropomorphic digital phantom and a physical head phantom. The proposed algorithm provides useful volumetric images for patient setup using projections with an angular range as small as 60°. It reduced the translational setup errors from 8 mm to generally <1 mm and the rotational setup errors from 5° to <1°. Compared with the PICCS algorithm alone, the integration of rigid registration significantly improved the reconstructed image quality, with a reduction of typically 2-3-fold (and up to 100-fold) in root-mean-square image error. The proposed algorithm provides a remedy for the problem of non-coplanar CBCT reconstruction from a limited angular range of projections by combining the PICCS technique and rigid image registration in an iterative framework. In this proof of concept study, non-coplanar beams with couch rotations of 45° can be effectively verified with the CBCT technique.
SEU/SET Tolerant Phase-Locked Loops
NASA Technical Reports Server (NTRS)
Shuler, Robert L., Jr.
2010-01-01
The phase-locked loop (PLL) is an old and widely used circuit for frequency and phase demodulation, carrier and clock recovery, and frequency synthesis [1]. Its implementations range from discrete components to fully integrated circuits and even to firmware or software. Often the PLL is a highly critical component of a system, as for example when it is used to derive the on-chip clock, but as of this writing no definitive single-event upset (SEU)/single-event transient (SET) tolerant PLL circuit has been described. This chapter hopes to rectify that situation, at least in regard to PLLs that are used to generate clocks. Older literature on fault-tolerant PLLs deals with detection of a hard failure, which is recovered by replacement, repair, or manual restart of discrete component systems. Several patents exist along these lines (6349391, 6272647, and 7089442). A newer approach is to harden the parts of a PLL system, to one degree or another, such as by using a voltage-based charge pump or a triple modular redundant (TMR) voted voltage-controlled oscillator (VCO). A more comprehensive approach is to harden by triplication and voting (TMR) all the digital pieces (primarily the divider) of a frequency synthesis PLL, but this still leaves room for errors in the VCO and the loop filter. Instead of hardening or voting pieces of a system, such as a frequency synthesis system (i.e., clock multiplier), we will show how the entire system can be voted. There are two main ways of doing this, each with advantages and drawbacks. We will show how each has advantages in certain areas, depending on the lock acquisition and tracking characteristics of the PLL. Because of this dependency on PLL characteristics, we will briefly revisit the theory of PLLs. But first we will describe the characteristics of voters and their correct application, as some literature does not follow the voting procedure that guarantees elimination of errors. Additionally, we will find that voting clocks is a bit trickier than voting data where an infallible clock is assumed. It is our job here to produce (or recover) that assumed infallible clock!
PARTICLE BEAM TRACKING CIRCUIT
Anderson, O.A.
1959-05-01
A particle-beam tracking and correcting circuit is described. Beam induction electrodes are placed on either side of the beam, and potentials induced by the beam are compared in a voltage comparator or discriminator. This comparison produces an error signal which modifies the FM curve via the voltage applied to the drift tube, thereby returning the orbit to the preferred position. The arrangement serves also to synchronize accelerating frequency and magnetic field growth. (T.R.H.)
Experiment in Onboard Synthetic Aperture Radar Data Processing
NASA Technical Reports Server (NTRS)
Holland, Matthew
2011-01-01
Single event upsets (SEUs) are a threat to any computing system running on hardware that has not been physically radiation hardened. In addition to mandating the use of performance-limited, hardened heritage equipment, prior techniques for dealing with the SEU problem often involved hardware-based error detection and correction (EDAC). With limited computing resources, software-based EDAC, or any more elaborate recovery methods, were often not feasible. Synthetic aperture radars (SARs), when operated in the space environment, are interesting due to their relevance to NASA's objectives, but problematic in the sense of producing prodigious amounts of raw data. Prior implementations of the SAR data processing algorithm have been too slow, too computationally intensive, and require too much application memory for onboard execution to be a realistic option when using the type of heritage processing technology described above. This standard C-language implementation of SAR data processing is distributed over many cores of a Tilera Multicore Processor, and employs novel Radiation Hardening by Software (RHBS) techniques designed to protect the component processes (one per core) and their shared application memory from the sort of SEUs expected in the space environment. The source code includes calls to Tilera APIs, and a specialized Tilera compiler is required to produce a Tilera executable. The compiled application reads input data describing the position and orientation of a radar platform, as well as its radar-burst data, over time and writes out processed data in a form that is useful for analysis of the radar observations.
Cross sections for H(-) and Cl(-) production from HCl by dissociative electron attachment
NASA Technical Reports Server (NTRS)
Orient, O. J.; Srivastava, S. K.
1985-01-01
A crossed target beam-electron beam collision geometry and a quadrupole mass spectrometer have been used to conduct dissociative electron attachment cross section measurements for the case of H(-) and Cl(-) production from HCl. The relative flow technique is used to determine the absolute values of cross sections. A tabulation is given of the attachment energies corresponding to various cross section maxima. Error sources contributing to total errors are also estimated.
NASA Astrophysics Data System (ADS)
Huang, Kuo-Ting; Chen, Hsi-Chao; Lin, Ssu-Fan; Lin, Ke-Ming; Syue, Hong-Ye
2012-09-01
While tin-doped indium oxide (ITO) has been extensively applied in flexible electronics, the problem of residual stress still presents many obstacles. This study investigated the residual stress of flexible electronics with a double-beam shadow moiré interferometer and focused on improving the precision with phase shifting interferometry (PSI). According to the out-of-plane displacement equation, the theoretical error depends on the grating pitch and the angle between the incident light and the CCD. The angle error could be reduced to 0.03% for an angle shift of 10°, because the double-beam interferometer is a symmetrical system. However, the experimental error of the double-beam moiré interferometer still reached 2.2% owing to vibration noise and interferogram noise. In order to improve the measurement precision, PSI was introduced into the double-beam shadow moiré interferometer. The wavefront phase was reconstructed from five interferograms with the Hariharan algorithm. Measurement results for a standard cylinder indicate that the error could be reduced from 2.2% to less than 1% with PSI. The deformation of the flexible electronics could be reconstructed quickly and the residual stress calculated with the Stoney correction formula. This shadow moiré interferometer with PSI can improve the precision of residual stress measurements for flexible electronics.
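The five-frame (Hariharan) phase-shifting step referred to above reduces to a simple arctangent of intensity combinations; the sketch below applies it to synthetic interferograms with π/2 phase steps, using a test wavefront kept within ±π so no unwrapping is needed.

```python
# Hariharan five-step phase-shifting algorithm on synthetic fringe images.
import numpy as np

yy, xx = np.mgrid[0:128, 0:128]
true_phase = 1.2 * np.sin(2 * np.pi * xx / 128) * np.cos(2 * np.pi * yy / 128)  # rad

bias, mod = 1.0, 0.6
# Five frames with pi/2 phase steps centred on zero: -pi, -pi/2, 0, +pi/2, +pi.
I1, I2, I3, I4, I5 = (bias + mod * np.cos(true_phase + (k - 2) * np.pi / 2)
                      for k in range(5))

# Hariharan estimator: tan(phi) = 2*(I2 - I4) / (2*I3 - I1 - I5)
phi = np.arctan2(2 * (I2 - I4), 2 * I3 - I1 - I5)
print(f"rms phase error: {(phi - true_phase).std():.2e} rad")
```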
Estimation of the optical errors on the luminescence imaging of water for proton beam
NASA Astrophysics Data System (ADS)
Yabe, Takuya; Komori, Masataka; Horita, Ryo; Toshito, Toshiyuki; Yamamoto, Seiichi
2018-04-01
Although luminescence imaging of water during proton-beam irradiation can be applied to range estimation, the height of the Bragg peak in the luminescence image was smaller than that measured with an ionization chamber. We hypothesized that the difference was attributable to optical phenomena: parallax errors of the optical system and reflection of the luminescence from the water phantom. We estimated the errors caused by these optical phenomena affecting the luminescence image of water. To estimate the parallax error on the luminescence images, we measured the luminescence images during proton-beam irradiation using a cooled charge-coupled-device camera while changing the height of the optical axis of the camera relative to that of the Bragg peak. When the height of the optical axis matched the depth of the Bragg peak, the Bragg peak heights in the depth profiles were the highest. The reflection of the luminescence of water with a black-walled phantom was slightly smaller than that with a transparent phantom and changed the shapes of the depth profiles. We conclude that the parallax error significantly affects the heights of the Bragg peak and that reflection from the phantom affects the shapes of the depth profiles of the luminescence images of water.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, J; Hu, W; Xing, Y
Purpose: Different particle scanning beam delivery systems have different delivery accuracies. This study was performed to determine, for our particle treatment system, an appropriate ratio (n = FWHM/GS) of spot size (FWHM) to grid size (GS) that can provide homogeneous delivered dose distributions for both proton and heavy ion scanning beam radiotherapy. Methods: We analyzed the delivery errors of our beam delivery system using log files from the treatment of 28 patients. We used a homemade program to simulate square fields for different n values with and without the delivery errors and analyzed the homogeneity. All spots were located on a rectilinear grid with equal spacing in the x and y directions. After that, we selected 7 energy levels for both proton and carbon ions. For each energy level, we made 6 square-field plans with different n values (1, 1.5, 2, 2.5, 3, 3.5). We then delivered those plans and used films to measure the homogeneity of each field. Results: For the program simulation without delivery errors, the homogeneity was within ±3% when n≥1.1. For both the proton and carbon program simulations with delivery errors and the film measurements, the homogeneity was within ±3% when n≥2.5. Conclusion: For our facility, with its system errors, n≥2.5 is appropriate for maintaining homogeneity within ±3%.
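A minimal version of the error-free square-field simulation described above simply sums Gaussian spots on a rectilinear grid and evaluates the dose homogeneity in the field core for different n = FWHM/GS; the field size and spot model below are assumptions for illustration, not the clinic's beam data.

```python
# Homogeneity of a square field built from Gaussian spots on a regular grid,
# as a function of n = FWHM / grid size (idealised, no delivery errors).
import numpy as np

fwhm = 10.0                               # spot FWHM, mm (assumed)
sigma = fwhm / 2.355
x = np.linspace(-60, 60, 241)
X, Y = np.meshgrid(x, x)

for n in (1.0, 1.5, 2.0, 2.5, 3.0):
    gs = fwhm / n                         # grid spacing, mm
    c = np.arange(-30, 30 + 1e-9, gs)     # spot centres of a 60 mm square field
    dose = np.zeros_like(X)
    for cx in c:
        for cy in c:
            dose += np.exp(-((X - cx) ** 2 + (Y - cy) ** 2) / (2 * sigma ** 2))
    core = (np.abs(X) < 20) & (np.abs(Y) < 20)     # central region of the field
    d = dose[core]
    homogeneity = (d.max() - d.min()) / (d.max() + d.min())
    print(f"n = {n:3.1f}:  +/-{100 * homogeneity:.2f}%")
```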
Synthetic Hounsfield units from spectral CT data
NASA Astrophysics Data System (ADS)
Bornefalk, Hans
2012-04-01
Beam-hardening-free synthetic images with absolute CT numbers that radiologists are used to can be constructed from spectral CT data by forming ‘dichromatic’ images after basis decomposition. The CT numbers are accurate for all tissues and the method does not require additional reconstruction. This method prevents radiologists from having to relearn new rules-of-thumb regarding absolute CT numbers for various organs and conditions as conventional CT is replaced by spectral CT. Displaying the synthetic Hounsfield unit images side-by-side with images reconstructed for optimal detectability for a certain task can ease the transition from conventional to spectral CT.
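One way to read the 'dichromatic' construction is sketched below: basis coefficients from a two-basis decomposition are combined into a blend of two virtual monoenergetic attenuations and converted to Hounsfield units. The basis functions, energies, blending weight, and coefficient values are generic assumptions for illustration, not the paper's calibrated formulation.

```python
# Hedged sketch: "dichromatic" synthetic CT numbers from two-basis data.
# Basis functions, energies and the blending weight are generic assumptions.
import numpy as np

def klein_nishina(E_keV):
    """Klein-Nishina total-cross-section energy dependence (unnormalised)."""
    a = E_keV / 510.999
    return ((1 + a) / a**2 * (2 * (1 + a) / (1 + 2 * a) - np.log(1 + 2 * a) / a)
            + np.log(1 + 2 * a) / (2 * a) - (1 + 3 * a) / (1 + 2 * a) ** 2)

def mu(a_pe, a_c, E_keV):
    """Attenuation from photoelectric and Compton basis coefficients."""
    return a_pe * E_keV ** -3 + a_c * klein_nishina(E_keV)

def synthetic_hu(a_pe, a_c, a_pe_w, a_c_w, E_lo=60.0, E_hi=80.0, w=0.5):
    """Blend two virtual monoenergetic images and convert to HU against water."""
    mu_t = w * mu(a_pe, a_c, E_lo) + (1 - w) * mu(a_pe, a_c, E_hi)
    mu_w = w * mu(a_pe_w, a_c_w, E_lo) + (1 - w) * mu(a_pe_w, a_c_w, E_hi)
    return 1000.0 * (mu_t - mu_w) / mu_w

# Example: a voxel with a 5% higher Compton (density-like) coefficient than water.
a_pe_w, a_c_w = 1.0, 1.0          # arbitrary "water" basis coefficients (illustrative)
print(f"{synthetic_hu(1.0, 1.05, a_pe_w, a_c_w):.1f} HU")
```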
The advances and characteristics of high-power diode laser materials processing
NASA Astrophysics Data System (ADS)
Li, Lin
2000-10-01
This paper presents a review of the direct applications of high-power diode lasers for materials processing including soldering, surface modification (hardening, cladding, glazing and wetting modifications), welding, scribing, sheet metal bending, marking, engraving, paint stripping, powder sintering, synthesis, brazing and machining. The specific advantages and disadvantages of diode laser materials processing are compared with those of CO2, Nd:YAG and excimer lasers. An effort is made to identify the fundamental differences in their beam/material interaction characteristics and materials behaviour. Also an appraisal of the future prospects of high-power diode lasers for materials processing is given.
Identification of nonlinear normal modes of engineering structures under broadband forcing
NASA Astrophysics Data System (ADS)
Noël, Jean-Philippe; Renson, L.; Grappasonni, C.; Kerschen, G.
2016-06-01
The objective of the present paper is to develop a two-step methodology integrating system identification and numerical continuation for the experimental extraction of nonlinear normal modes (NNMs) under broadband forcing. The first step processes acquired input and output data to derive an experimental state-space model of the structure. The second step converts this state-space model into a model in modal space from which NNMs are computed using shooting and pseudo-arclength continuation. The method is demonstrated using noisy synthetic data simulated on a cantilever beam with a hardening-softening nonlinearity at its free end.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lessard, Francois; Archambault, Louis; Plamondon, Mathieu
Purpose: Photon dosimetry in the kilovolt (kV) energy range represents a major challenge for diagnostic and interventional radiology and superficial therapy. Plastic scintillation detectors (PSDs) are potentially good candidates for this task. This study proposes a simple way to obtain accurate correction factors to compensate for the response of PSDs to photon energies between 80 and 150 kVp. The performance of PSDs is also investigated to determine their potential usefulness in the diagnostic energy range. Methods: A 1-mm-diameter, 10-mm-long PSD was irradiated by a Therapax SXT 150 unit using five different beam qualities made of tube potentials ranging from 80 to 150 kVp and filtration thicknesses ranging from 0.8 to 0.2 mmAl + 1.0 mmCu. The light emitted by the detector was collected using an 8-m-long optical fiber and a polychromatic photodiode, which converted the scintillation photons to an electrical current. The PSD response was compared with the reference free air dose rate measured with a calibrated Farmer NE2571 ionization chamber. PSD measurements were corrected using spectra-weighted corrections, accounting for mass energy-absorption coefficient differences between the sensitive volumes of the ionization chamber and the PSD, as suggested by large cavity theory (LCT). Beam spectra were obtained from x-ray simulation software and validated experimentally using a CdTe spectrometer. Correction factors were also obtained using Monte Carlo (MC) simulations. Percent depth dose (PDD) measurements were compensated for beam hardening using the LCT correction method. These PDD measurements were compared with uncorrected PSD data, PDD measurements obtained using Gafchromic films, Monte Carlo simulations, and previous data. Results: For each beam quality used, the authors observed an increase of the energy response with effective energy when no correction was applied to the PSD response. Using the LCT correction, the PSD response was almost energy independent, with a residual 2.1% coefficient of variation (COV) over the 80-150-kVp energy range. Monte Carlo corrections reduced the COV to 1.4% over this energy range. All PDD measurements were in good agreement with one another except for the uncorrected PSD data, in which an over-response was observed with depth (13% at 10 cm with a 100 kVp beam), showing that beam hardening had a non-negligible effect on the PSD response. A correction based on LCT compensated very well for this effect, reducing the over-response to 3%. Conclusion: In the diagnostic energy range, PSDs show a strong energy dependence, which can be corrected using spectra-weighted mass energy-absorption coefficients, showing no considerable sign of quenching between these energies. Correction factors obtained by Monte Carlo simulations confirm that the approximations made by LCT corrections are valid. Thus, PSDs could be useful for real-time dosimetry in radiology applications.
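The spectrum-weighted correction referred to above amounts to a ratio of spectrum-averaged mass energy-absorption coefficients between the reference medium (air) and the scintillator; the sketch below shows the bookkeeping with made-up spectrum and coefficient tables, not the tabulated data or calibrated spectra used in the study.

```python
# Large-cavity-theory style correction factor: ratio of spectrum-weighted
# mass energy-absorption coefficients. Spectrum and coefficients below are
# placeholder toy curves for illustration only.
import numpy as np

E = np.linspace(10, 150, 141)                          # photon energy grid, keV
fluence = np.exp(-(E - 60.0) ** 2 / (2 * 20.0 ** 2))   # toy beam spectrum (unnormalised)

# Toy mass energy-absorption coefficients (cm^2/g) with different E-dependence.
mu_en_air = 0.15 * (E / 60.0) ** -2.5 + 0.025
mu_en_scint = 0.10 * (E / 60.0) ** -2.8 + 0.028        # polystyrene-like, assumed

def spectrum_average(coeff):
    # Energy-fluence weighted average on a uniform energy grid.
    return (fluence * E * coeff).sum() / (fluence * E).sum()

correction = spectrum_average(mu_en_air) / spectrum_average(mu_en_scint)
print(f"spectral correction factor: {correction:.3f}")
```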
NASA Astrophysics Data System (ADS)
Krupka, M.; Kalal, M.; Dostal, J.; Dudzak, R.; Juha, L.
2017-08-01
Classical interferometry has become a widely used method of active optical diagnostics. Its more advanced version, which allows reconstruction of three sets of data from just one specially designed interferogram (a so-called complex interferogram), was developed in the past and became known as complex interferometry. Along with the phase shift, which can also be retrieved using classical interferometry, the amplitude modification of the probing part of the diagnostic beam caused by the object under study (to be called the signal amplitude) as well as the contrast of the interference fringes can be retrieved using the complex interferometry approach. In order to partially compensate for errors in the reconstruction due to imperfections in the diagnostic beam intensity structure, as well as for errors caused by a non-ideal optical setup of the interferometer itself (including the quality of its optical components), a reference interferogram can be put to good use. This method of interferogram analysis of experimental data has been successfully implemented in practice. However, in the majority of interferometer setups (especially those employing wavefront division), the probe and the reference part of the diagnostic beam feature different intensity distributions over their respective cross sections. This introduces an additional error into the reconstruction of the signal amplitude and the fringe contrast, which cannot be resolved using the reference interferogram only. In order to deal with this error it was found that additional, separately recorded images of the intensity distribution of the probe and the reference part of the diagnostic beam (with no signal present) are needed. For the best results a sufficient shot-to-shot stability of the whole diagnostic system is required. In this paper, the efficiency of the complex interferometry approach for obtaining the highest possible accuracy of the signal amplitude reconstruction is verified using computer-generated complex and reference interferograms containing artificially introduced intensity variations in the probe and the reference part of the diagnostic beam. These sets of data are subsequently analyzed and the errors of the signal amplitude reconstruction are evaluated.
System-on-Chip Data Processing and Data Handling Spaceflight Electronics
NASA Technical Reports Server (NTRS)
Kleyner, I.; Katz, R.; Tiggeler, H.
1999-01-01
This paper presents a methodology and a tool set which implements automated generation of moderate-size blocks of customized intellectual property (IP), thus effectively reusing prior work and minimizing the labor intensive, error-prone parts of the design process. Customization of components allows for optimization for smaller area and lower power consumption, which is an important factor given the limitations of resources available in radiation-hardened devices. The effects of variations in HDL coding style on the efficiency of synthesized code for various commercial synthesis tools are also discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jani, S; Low, D; Lamb, J
2015-06-15
Purpose: To develop a system that can automatically detect patient identification and positioning errors using 3D computed tomography (CT) setup images and kilovoltage CT (kVCT) planning images. Methods: Planning kVCT images were collected for head-and-neck (H&N), pelvis, and spine treatments with corresponding 3D cone-beam CT (CBCT) and megavoltage CT (MVCT) setup images from TrueBeam and TomoTherapy units, respectively. Patient identification errors were simulated by registering setup and planning images from different patients. Positioning errors were simulated by misaligning the setup image by 1cm to 5cm in the six anatomical directions for H&N and pelvis patients. Misalignments for spine treatments were simulated by registering the setup image to adjacent vertebral bodies on the planning kVCT. A body contour of the setup image was used as an initial mask for image comparison. Images were pre-processed by image filtering and air voxel thresholding, and image pairs were assessed using commonly-used image similarity metrics as well as custom-designed metrics. A linear discriminant analysis classifier was trained and tested on the datasets, and misclassification error (MCE), sensitivity, and specificity estimates were generated using 10-fold cross validation. Results: Our workflow produced MCE estimates of 0.7%, 1.7%, and 0% for H&N, pelvis, and spine TomoTherapy images, respectively. Sensitivities and specificities ranged from 98.0% to 100%. MCEs of 3.5%, 2.3%, and 2.1% were obtained for TrueBeam images of the above sites, respectively, with sensitivity and specificity estimates between 96.2% and 98.4%. MCEs for 1cm H&N/pelvis misalignments were 1.3/5.1% and 9.1/8.6% for TomoTherapy and TrueBeam images, respectively. 2cm MCE estimates were 0.4%/1.6% and 3.1/3.2%, respectively. Vertebral misalignment MCEs were 4.8% and 4.9% for TomoTherapy and TrueBeam images, respectively. Conclusion: Patient identification and gross misalignment errors can be robustly and automatically detected using 3D setup images of two imaging modalities across three commonly-treated anatomical sites.
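The classification step can be reproduced in outline with scikit-learn: a linear discriminant analysis classifier evaluated by 10-fold cross-validation on a feature matrix of image-similarity metrics. The feature values below are random placeholders standing in for the similarity metrics, so the printed numbers mean nothing clinically.

```python
# Sketch of the error-detection classifier: LDA with 10-fold cross-validation.
# X stands in for image-similarity metrics; labels mark error vs. no-error cases.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_predict, StratifiedKFold
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(42)
n_per_class = 100
# Two synthetic clusters of similarity-metric vectors (5 metrics per image pair).
X_ok = rng.normal(loc=1.0, scale=0.3, size=(n_per_class, 5))
X_err = rng.normal(loc=0.4, scale=0.3, size=(n_per_class, 5))
X = np.vstack([X_ok, X_err])
y = np.array([0] * n_per_class + [1] * n_per_class)   # 1 = simulated error

clf = LinearDiscriminantAnalysis()
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
y_pred = cross_val_predict(clf, X, y, cv=cv)

tn, fp, fn, tp = confusion_matrix(y, y_pred).ravel()
mce = (fp + fn) / y.size
print(f"MCE={100*mce:.1f}%  sensitivity={100*tp/(tp+fn):.1f}%  "
      f"specificity={100*tn/(tn+fp):.1f}%")
```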
High Energy Rate Forming Induced Phase Transition in Austenitic Steel
NASA Astrophysics Data System (ADS)
Kovacs, T.; Kuzsella, L.
2017-02-01
In this study, the effects of explosion hardening on the microstructure and hardness of austenitic stainless steel have been investigated, and the optimum explosion hardening technology for austenitic stainless steel was sought. A new idea for explosive hardening, an indirect hardening setup, was used. Austenitic stainless steels have high plasticity and can be cold formed easily; however, during cold processing the hardening phenomenon always occurs. Upon the explosion impact, the deformation mechanism indicates a plastic deformation, and this deformation induces a phase transformation (martensite). Explosion hardening enhances the mechanical properties of the material, including the wear resistance and hardness [1]. In the case of indirect hardening, the hardening increased differently as a function of the setup parameters, specifically the flyer plate position. A relationship was found between the explosion hardening setup and the hardening level.
Microstructure and mechanical properties of FeCrAl alloys under heavy ion irradiations
NASA Astrophysics Data System (ADS)
Aydogan, E.; Weaver, J. S.; Maloy, S. A.; El-Atwani, O.; Wang, Y. Q.; Mara, N. A.
2018-05-01
FeCrAl ferritic alloys are excellent cladding candidates for accident tolerant fuel systems due to their high resistance to oxidation as a result of formation of a protective Al2O3 scale at high temperatures in steam. In this study, we report the irradiation response of the 10Cr and 13Cr FeCrAl cladding tubes under Fe2+ ion irradiation up to ∼16 dpa at 300 °C. Dislocation loop size, density and characteristics were determined using both two-beam bright field transmission electron microscopy and on-zone scanning transmission electron microscopy techniques. The 10Cr (C06M2) tube has a lower dislocation density, larger grain size and a slightly weaker texture compared to the 13Cr (C36M3) tube before irradiation. After irradiation to 0.7 dpa and 16 dpa, the fraction of <100> type sessile dislocations decreases with increasing Cr amount in the alloys. It has been found that there is neither void formation nor α′ precipitation as a result of ion irradiations in either alloy. Therefore, dislocation loops were determined to be the only irradiation-induced defects contributing to the hardening. Nanoindentation testing before the irradiation revealed that the average nanohardness of the C36M3 tube is higher than that of the C06M2 tube. The average nanohardness of irradiated tube samples saturated at 1.6-2.0 GPa hardening for both tubes between ∼3.4 dpa and ∼16 dpa. The hardening calculated based on transmission electron microscopy was found to be consistent with nanohardness measurements.
Microstructure and mechanical properties of FeCrAl alloys under heavy ion irradiations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aydogan, E.; Weaver, J. S.; Maloy, S. A.
FeCrAl ferritic alloys are excellent cladding candidates for accident tolerant fuel systems due to their high resistance to oxidation as a result of formation of a protective Al2O3 scale at high temperatures in steam. In this study, we report the irradiation response of the 10Cr and 13Cr FeCrAl cladding tubes under Fe2+ ion irradiation up to ~16 dpa at 300 °C. Dislocation loop size, density and characteristics were determined using both two beam bright field transmission electron microscopy and on-zone scanning transmission electron microscopy techniques. 10Cr (C06M2) tube has a lower dislocation density, larger grain size and a slightly weaker texture compared to the 13Cr (C36M3) tube before irradiation. After irradiation to 0.7 dpa and 16 dpa, the fraction of <100> type sessile dislocations decreases with increasing Cr amount in the alloys. It has been found that there is neither void formation nor α' precipitation as a result of ion irradiations in either alloy. Therefore, dislocation loops were determined to be the only irradiation induced defects contributing to the hardening. Nanoindentation testing before the irradiation revealed that the average nanohardness of the C36M3 tube is higher than that of the C06M2 tube. The average nanohardness of irradiated tube samples saturated at 1.6-2.0 GPa hardening for both tubes between ~3.4 dpa and ~16 dpa. The hardening calculated based on transmission electron microscopy was found to be consistent with nanohardness measurements.
Microstructure and mechanical properties of FeCrAl alloys under heavy ion irradiations
Aydogan, E.; Weaver, J. S.; Maloy, S. A.; ...
2018-03-02
FeCrAl ferritic alloys are excellent cladding candidates for accident tolerant fuel systems due to their high resistance to oxidation as a result of formation of a protective Al2O3 scale at high temperatures in steam. In this study, we report the irradiation response of the 10Cr and 13Cr FeCrAl cladding tubes under Fe2+ ion irradiation up to ~16 dpa at 300 °C. Dislocation loop size, density and characteristics were determined using both two beam bright field transmission electron microscopy and on-zone scanning transmission electron microscopy techniques. 10Cr (C06M2) tube has a lower dislocation density, larger grain size and a slightly weaker texture compared to the 13Cr (C36M3) tube before irradiation. After irradiation to 0.7 dpa and 16 dpa, the fraction of <100> type sessile dislocations decreases with increasing Cr amount in the alloys. It has been found that there is neither void formation nor α' precipitation as a result of ion irradiations in either alloy. Therefore, dislocation loops were determined to be the only irradiation induced defects contributing to the hardening. Nanoindentation testing before the irradiation revealed that the average nanohardness of the C36M3 tube is higher than that of the C06M2 tube. The average nanohardness of irradiated tube samples saturated at 1.6-2.0 GPa hardening for both tubes between ~3.4 dpa and ~16 dpa. The hardening calculated based on transmission electron microscopy was found to be consistent with nanohardness measurements.
Jones, Kevin C; Seghal, Chandra M; Avery, Stephen
2016-03-21
The unique dose deposition of proton beams generates a distinctive thermoacoustic (protoacoustic) signal, which can be used to calculate the proton range. To identify the expected protoacoustic amplitude, frequency, and arrival time for different proton pulse characteristics encountered at hospital-based proton sources, the protoacoustic pressure emissions generated by 150 MeV, pencil-beam proton pulses were simulated in a homogeneous water medium. Proton pulses with Gaussian widths ranging up to 200 μs were considered. The protoacoustic amplitude, frequency, and time-of-flight (TOF) range accuracy were assessed. For TOF calculations, the acoustic pulse arrival time was determined based on multiple features of the wave. Based on the simulations, Gaussian proton pulses can be categorized as Dirac-delta-function-like (FWHM < 4 μs) and longer. For the δ-function-like irradiation, the protoacoustic spectrum peaks at 44.5 kHz and the systematic error in determining the Bragg peak range is <2.6 mm. For longer proton pulses, the spectrum shifts to lower frequencies, and the range calculation systematic error increases (⩽ 23 mm for FWHM of 56 μs). By mapping the protoacoustic peak arrival time to range with simulations, the residual error can be reduced. Using a proton pulse with FWHM = 2 μs results in a maximum signal-to-noise ratio per total dose. Simulations predict that a 300 nA, 150 MeV, FWHM = 4 μs Gaussian proton pulse (8.0 × 10^6 protons, 3.1 cGy dose at the Bragg peak) will generate a 146 mPa pressure wave at 5 cm beyond the Bragg peak. There is an angle dependent systematic error in the protoacoustic TOF range calculations. Placing detectors along the proton beam axis and beyond the Bragg peak minimizes this error. For clinical proton beams, protoacoustic detectors should be sensitive to <400 kHz (for -20 dB). Hospital-based synchrocyclotrons and cyclotrons are promising sources of proton pulses for generating clinically measurable protoacoustic emissions.
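The time-of-flight range estimate itself is a one-line conversion once an arrival-time feature has been picked off the pressure trace; the sketch below shows that conversion and its sensitivity to a timing bias, with an assumed detector geometry and speed of sound rather than the simulation setup above.

```python
# Time-of-flight range bookkeeping for a protoacoustic measurement (illustrative).
c_water = 1.48  # speed of sound in water, mm/us (approximate, temperature dependent)

detector_to_bragg_mm = 50.0                   # assumed detector position past the peak
t_true = detector_to_bragg_mm / c_water       # ideal arrival time, us

# Suppose the chosen waveform feature arrives 2 us late (e.g. a long proton pulse):
t_measured = t_true + 2.0
range_error_mm = c_water * (t_measured - t_true)
print(f"ideal TOF ~ {t_true:.1f} us, 2 us timing bias -> {range_error_mm:.1f} mm range error")
```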
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, J; Shi, W; Andrews, D
2016-06-15
Purpose: To compare online image registrations of TrueBeam cone-beam CT (CBCT) and BrainLab ExacTrac x-ray imaging systems for cranial radiotherapy. Method: Phantom and patient studies were performed on a Varian TrueBeam STx linear accelerator (Version 2.5), which is integrated with a BrainLab ExacTrac imaging system (Version 6.1.1). The phantom study was based on a Rando head phantom, which was designed to evaluate isocenter-location dependence of the image registrations. Ten isocenters were selected at various locations in the phantom, which represented clinical treatment sites. CBCT and ExacTrac x-ray images were taken when the phantom was located at each isocenter. The patient study included thirteen patients. CBCT and ExacTrac x-ray images were taken at each patient's treatment position. Six-dimensional image registrations were performed on CBCT and ExacTrac, and residual errors calculated from CBCT and ExacTrac were compared. Results: In the phantom study, the average residual-error differences between CBCT and ExacTrac image registrations were: 0.16±0.10 mm, 0.35±0.20 mm, and 0.21±0.15 mm, in the vertical, longitudinal, and lateral directions, respectively. The average residual-error differences in the rotation, roll, and pitch were: 0.36±0.11 degree, 0.14±0.10 degree, and 0.12±0.10 degree, respectively. In the patient study, the average residual-error differences in the vertical, longitudinal, and lateral directions were: 0.13±0.13 mm, 0.37±0.21 mm, 0.22±0.17 mm, respectively. The average residual-error differences in the rotation, roll, and pitch were: 0.30±0.10 degree, 0.18±0.11 degree, and 0.22±0.13 degree, respectively. Larger residual-error differences (up to 0.79 mm) were observed in the longitudinal direction in the phantom and patient studies where isocenters were located in or close to frontal lobes, i.e., located superficially. Conclusion: Overall, the average residual-error differences were within 0.4 mm in the translational directions and were within 0.4 degree in the rotational directions.
High-fidelity artifact correction for cone-beam CT imaging of the brain
NASA Astrophysics Data System (ADS)
Sisniega, A.; Zbijewski, W.; Xu, J.; Dang, H.; Stayman, J. W.; Yorkston, J.; Aygun, N.; Koliatsos, V.; Siewerdsen, J. H.
2015-02-01
CT is the frontline imaging modality for diagnosis of acute traumatic brain injury (TBI), involving the detection of fresh blood in the brain (contrast of 30-50 HU, detail size down to 1 mm) in a non-contrast-enhanced exam. A dedicated point-of-care imaging system based on cone-beam CT (CBCT) could benefit early detection of TBI and improve direction to appropriate therapy. However, flat-panel detector (FPD) CBCT is challenged by artifacts that degrade contrast resolution and limit application in soft-tissue imaging. We present and evaluate a fairly comprehensive framework for artifact correction to enable soft-tissue brain imaging with FPD CBCT. The framework includes a fast Monte Carlo (MC)-based scatter estimation method complemented by corrections for detector lag, veiling glare, and beam hardening. The fast MC scatter estimation combines GPU acceleration, variance reduction, and simulation with a low number of photon histories and reduced number of projection angles (sparse MC) augmented by kernel de-noising to yield a runtime of ~4 min per scan. Scatter correction is combined with two-pass beam hardening correction. Detector lag correction is based on temporal deconvolution of the measured lag response function. The effects of detector veiling glare are reduced by deconvolution of the glare response function representing the long range tails of the detector point-spread function. The performance of the correction framework is quantified in experiments using a realistic head phantom on a testbench for FPD CBCT. Uncorrected reconstructions were non-diagnostic for soft-tissue imaging tasks in the brain. After processing with the artifact correction framework, image uniformity was substantially improved, and artifacts were reduced to a level that enabled visualization of ~3 mm simulated bleeds throughout the brain. Non-uniformity (cupping) was reduced by a factor of 5, and contrast of simulated bleeds was improved from ~7 to 49.7 HU, in good agreement with the nominal blood contrast of 50 HU. Although noise was amplified by the corrections, the contrast-to-noise ratio (CNR) of simulated bleeds was improved by nearly a factor of 3.5 (CNR = 0.54 without corrections and 1.91 after correction). The resulting image quality motivates further development and translation of the FPD-CBCT system for imaging of acute TBI.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolf, Z.; Ruland, R.; Dix, B.
The Stanford Linear Accelerator Center is evaluating the feasibility of placing a free electron laser (FEL) at the end of the linear accelerator. The proposal is to inject electrons two thirds of the way down the linac, accelerate the electrons for the last one third of the linac, and then send the electrons into the FEL. This project is known as the LCLS (Linac Coherent Light Source). To test the feasibility of the LCLS, a smaller experiment VISA (Visual to Infrared SASE (Self Amplified Stimulated Emission) Amplifier) is being performed at Brookhaven National Laboratory. VISA consists of four wiggler segments, each 0.99 m long. The four segments are required to be aligned to the beam axis with an rms error less than 50 μm [1]. This very demanding alignment is carried out in two steps [2]. First the segments are fiducialized using a pulsed wire system. Then the wiggler segments are placed along a reference laser beam which coincides with the electron beam axis. In the wiggler segment fiducialization, a wire is stretched through a wiggler segment and a current pulse is sent down the wire. The deflection of the wire is monitored. The deflection gives information about the electron beam trajectory. The wire is moved until its x position, the coordinate without wire sag, is on the ideal beam trajectory. (The y position is obtained by rotating the wiggler 90°.) Once the wire is on the ideal beam trajectory, the wire's location is measured relative to tooling balls on the wiggler segment. To locate the wire, a device was constructed which measures the wire position relative to tooling balls on the device. The device is called the wire finder. It will be discussed in this paper. To place the magnets along the reference laser beam, the position of the laser beam must be determined. A device which can locate the laser beam relative to tooling balls was constructed and is also discussed in this paper. This device is called the laser finder. With a total alignment error budget less than 50 μm, both the fiducialization and magnet placement must be performed with errors much smaller than 50 μm. It is desired to keep the errors from the wire finder and laser finder at the few μm level.
NASA Technical Reports Server (NTRS)
Kaufmann, D. C.
1976-01-01
The fine frequency setting of a cesium beam frequency standard is accomplished by adjusting the C field control with the appropriate Zeeman frequency applied to the harmonic generator. A novice operator in the field, even when using the correct Zeeman frequency input, may mistakenly set the C field to any one of seven major Beam I peaks (fingers) represented by the Ramsey curve. This can result in frequency offset errors of as much as 2.5 parts in ten to the tenth. The effects of maladjustment are demonstrated and suggestions are discussed on how to avoid the subtle traps associated with C field adjustments.
NASA Astrophysics Data System (ADS)
Shi, Zhaoyao; Song, Huixu; Chen, Hongfang; Sun, Yanqiang
2018-02-01
This paper presents a novel experimental approach for confirming that the spherical mirror of a laser tracking system can reduce the influence of rotation errors of the gimbal mount axes on the measurement accuracy. By simplifying the optical system model of a laser tracking system based on a spherical mirror, the laser ranging measurement error caused by rotation errors of the gimbal mount axes can be extracted from the positions of the spherical mirror, biconvex lens, cat's eye reflector, and measuring beam. The motions of the polarization beam splitter and biconvex lens along the optical axis and perpendicular to the optical axis are driven by the error motions of the gimbal mount axes. In order to simplify the experimental process, the motion of the biconvex lens is substituted by the motion of the spherical mirror according to the principle of relative motion. The laser ranging measurement error caused by the rotation errors of the gimbal mount axes can be recorded in the readings of the laser interferometer. The experimental results showed that the laser ranging measurement error caused by rotation errors was less than 0.1 μm if the radial error motion and axial error motion were within ±10 μm. The experimental method simplifies the experimental procedure, and the spherical mirror can reduce the influence of rotation errors of the gimbal mount axes on the measurement accuracy of the laser tracking system.
Focussed Ion Beam Milling and Scanning Electron Microscopy of Brain Tissue
Knott, Graham; Rosset, Stéphanie; Cantoni, Marco
2011-01-01
This protocol describes how biological samples, like brain tissue, can be imaged in three dimensions using the focussed ion beam/scanning electron microscope (FIB/SEM). The samples are fixed with aldehydes and heavy-metal stained using osmium tetroxide and uranyl acetate. They are then dehydrated with alcohol and infiltrated with resin, which is then hardened. Using a light microscope and an ultramicrotome with glass knives, a small block containing the region of interest close to the surface is made. The block is then placed inside the FIB/SEM, and the ion beam is used to roughly mill a vertical face along one side of the block, close to this region. Using backscattered electrons to image the underlying structures, a smaller face is then milled with a finer ion beam and the surface scrutinised more closely to determine the exact area of the face to be imaged and milled. The parameters of the microscope are then set so that the face is repeatedly milled and imaged, collecting serial images through a volume of the block. The image stack will typically contain isotropic voxels with dimensions as small as 4 nm in each direction. This image quality in any imaging plane enables the user to analyse cell ultrastructure at any viewing angle within the image stack. PMID:21775953
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paziresh, M.; Kingston, A. M., E-mail: andrew.kingston@anu.edu.au; Latham, S. J.
Dual-energy computed tomography and the Alvarez and Macovski [Phys. Med. Biol. 21, 733 (1976)] transmitted intensity (AMTI) model were used in this study to estimate the maps of density (ρ) and atomic number (Z) of mineralogical samples. In this method, the attenuation coefficients are represented [Alvarez and Macovski, Phys. Med. Biol. 21, 733 (1976)] in the form of the two most important interactions of X-rays with atoms, that is, photoelectric absorption (PE) and Compton scattering (CS). This enables material discrimination as PE and CS are, respectively, dependent on the atomic number (Z) and density (ρ) of materials [Alvarez and Macovski, Phys. Med. Biol. 21, 733 (1976)]. Dual-energy imaging is able to identify sample materials even if the materials have similar attenuation coefficients at a single-energy spectrum. We use the full model rather than applying one of several applied simplified forms [Alvarez and Macovski, Phys. Med. Biol. 21, 733 (1976); Siddiqui et al., SPE Annual Technical Conference and Exhibition (Society of Petroleum Engineers, 2004); Derzhi, U.S. patent application 13/527,660 (2012); Heismann et al., J. Appl. Phys. 94, 2073–2079 (2003); Park and Kim, J. Korean Phys. Soc. 59, 2709 (2011); Abudurexiti et al., Radiol. Phys. Technol. 3, 127–135 (2010); and Kaewkhao et al., J. Quant. Spectrosc. Radiat. Transfer 109, 1260–1265 (2008)]. This paper describes the tomographic reconstruction of ρ and Z maps of mineralogical samples using the AMTI model. The full model requires precise knowledge of the X-ray energy spectra and calibration of PE and CS constants and exponents of atomic number and energy, which were estimated based on fits to simulations and calibration measurements. The estimated ρ and Z images of the samples used in this paper yield average relative errors of 2.62% and 1.19% and maximum relative errors of 2.64% and 7.85%, respectively. Furthermore, we demonstrate that the method accounts for the beam hardening effect in density (ρ) and atomic number (Z) reconstructions to a significant extent.
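In the idealised monoenergetic limit, an Alvarez-Macovski style decomposition reduces to solving a 2x2 linear system per voxel for the photoelectric and Compton coefficients, from which density- and Z-like quantities follow. The sketch below is only that idealisation: the effective energies, basis functions, exponent n, and constant K are generic assumptions, not the calibrated spectra and constants used in the paper.

```python
# Idealised two-energy Alvarez-Macovski style decomposition for one voxel.
# mu(E) = a_pe * E**-3 + a_c * f_KN(E); energies and constants are assumptions.
import numpy as np

def f_kn(E_keV):
    """Klein-Nishina total-cross-section energy dependence (unnormalised)."""
    a = E_keV / 510.999
    return ((1 + a) / a**2 * (2 * (1 + a) / (1 + 2 * a) - np.log(1 + 2 * a) / a)
            + np.log(1 + 2 * a) / (2 * a) - (1 + 3 * a) / (1 + 2 * a) ** 2)

E_lo, E_hi = 50.0, 80.0                       # effective energies, keV (assumed)

# Make a synthetic voxel with known coefficients, then recover them.
a_pe_true, a_c_true = 4.0e4, 0.30
mu_lo = a_pe_true * E_lo ** -3 + a_c_true * f_kn(E_lo)
mu_hi = a_pe_true * E_hi ** -3 + a_c_true * f_kn(E_hi)

A = np.array([[E_lo ** -3, f_kn(E_lo)],
              [E_hi ** -3, f_kn(E_hi)]])
a_pe, a_c = np.linalg.solve(A, np.array([mu_lo, mu_hi]))

# a_c tracks electron density; a_pe/a_c gives a Z-like quantity, Z ~ (a_pe/(K*a_c))**(1/n)
n, K = 3.8, 9.8e2                             # illustrative constants only
Z_like = (a_pe / (K * a_c)) ** (1.0 / n)
print(f"recovered a_pe={a_pe:.3g}, a_c={a_c:.3g}, Z-like value ~ {Z_like:.2f}")
```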
Influence of Cooling Condition on the Performance of Grinding Hardened Layer in Grind-hardening
NASA Astrophysics Data System (ADS)
Wang, G. C.; Chen, J.; Xu, G. Y.; Li, X.
2018-02-01
45# steel was ground and hardened on a surface grinding machine to study the effect of three different cooling media, namely emulsion, dry air and liquid nitrogen, on the microstructure and properties of the hardened layer. The results show that the surface microstructure of material hardened with emulsion is pearlite with no hardened layer; the surface roughness is small and the residual stress is compressive. With liquid nitrogen or dry air cooling, the specimen surface is hardened and the microstructure is martensite, with the surface roughness essentially unchanged, but the highest hardness of the hardened layer and the largest surface compressive stress are obtained when grinding with liquid nitrogen. Grinding with dry air produces a deeper hardened layer, but the surface residual stress is tensile. This study provides an experimental basis for choosing the appropriate cooling mode to effectively control the performance of the grinding hardened layer.
Liu, Chao; Yao, Yong; Sun, Yun Xu; Xiao, Jun Jun; Zhao, Xin Hui
2010-10-01
A model is proposed to study the average capacity optimization in free-space optical (FSO) channels, accounting for effects of atmospheric turbulence and pointing errors. For a given transmitter laser power, it is shown that both transmitter beam divergence angle and beam waist can be tuned to maximize the average capacity. Meanwhile, their optimum values strongly depend on the jitter and operation wavelength. These results can be helpful for designing FSO communication systems.
So, Aaron; Imai, Yasuhiro; Nett, Brian; Jackson, John; Nett, Liz; Hsieh, Jiang; Wisenberg, Gerald; Teefy, Patrick; Yadegari, Andrew; Islam, Ali; Lee, Ting-Yim
2016-08-01
The authors investigated the performance of a recently introduced 160-mm/256-row CT system for low-dose quantitative myocardial perfusion (MP) imaging of the whole heart. This platform is equipped with a gantry capable of rotating at 280 ms per full cycle, a second generation of adaptive statistical iterative reconstruction (ASiR-V) to correct for image noise arising from low tube voltage/tube current dynamic scanning, and image reconstruction algorithms to tackle beam-hardening, cone-beam, and partial-scan effects. Phantom studies were performed to investigate the effectiveness of image noise and artifact reduction with a GE Healthcare Revolution CT system for three acquisition protocols used in quantitative CT MP imaging: 100, 120, and 140 kVp/25 mAs. The heart chambers of an anthropomorphic chest phantom were filled with iodinated contrast solution at different concentrations (contrast levels) to simulate the circulation of contrast through the heart in quantitative CT MP imaging. To evaluate beam-hardening correction, the phantom was scanned at each contrast level to measure the changes in CT number (in Hounsfield units, HU) in the water-filled region surrounding the heart chambers with respect to baseline. To evaluate cone-beam artifact correction, differences in mean water HU between the central and peripheral slices were compared. Partial-scan artifact correction was evaluated from the fluctuation of mean water HU in successive partial scans. To evaluate image noise reduction, a small hollow region adjacent to the heart chambers was filled with diluted contrast, and the contrast-to-noise ratio (CNR) in the region before and after noise correction with ASiR-V was compared. The quality of MP maps acquired with the CT system was also evaluated in porcine CT MP studies. Myocardial infarct was induced in a farm pig by a transient occlusion of the distal left anterior descending (LAD) artery with a catheter-based interventional procedure. MP maps were generated from the dynamic contrast-enhanced (DCE) heart images taken at baseline and three weeks after the ischemic insult. Their results showed that the phantom and animal images acquired with the CT platform were minimally affected by image noise and artifacts. For the beam-hardening phantom study, changes in water HU in the wall surrounding the heart chambers were greatly reduced from >±30 HU to ≤±5 HU at all kVp settings except one region at 100 kVp (7 HU). For the cone-beam phantom study, differences in mean water HU from the central slice were less than 5 HU at two peripheral slices, each 4 cm from the central slice. These findings were reproducible in the pig DCE images at two peripheral slices 6 cm from the central slice. For the partial-scan phantom study, standard deviations of the mean water HU in 10 successive partial scans were less than 5 HU at the central slice. Similar observations were made in the pig DCE images at two peripheral slices, each 6 cm from the central slice. For the image noise phantom study, CNRs in the ASiR-V images were statistically higher (p < 0.05) than in the non-ASiR-V images at all kVp settings. MP maps generated from the porcine DCE images were of excellent quality, with the ischemia in the LAD territory clearly seen in the three orthogonal views. The study demonstrates that this CT system can provide accurate and reproducible CT numbers during cardiac gated acquisitions across a wide axial field of view.
This CT number fidelity will enable this imaging tool to assess contrast enhancement, potentially providing valuable added information beyond anatomic evaluation of coronary stenoses. Furthermore, their results collectively suggested that the 100 kVp/25 mAs protocol run on this CT system provides sufficient image accuracy at a low radiation dose (<3 mSv) for whole-heart quantitative CT MP imaging.
Simultaneous phase-shifting interferometry study based on the common-path Fizeau interferometer
NASA Astrophysics Data System (ADS)
Liu, Feng-wei; Wu, Yong-qian
2014-09-01
A simultaneous phase-shifting interferometry (SPSI) scheme based on the common-path Fizeau interferometer is discussed. In this system, two orthogonally polarized beams, used as the reference beam and the test beam, are separated by a particular Wollaston prism at a very small angle; four equal sub-beams are then obtained by a combination of three non-polarizing beam splitters (NPBS), and the phase shifts are introduced by four polarizers whose polarization azimuths are 0°, 45°, 90° and 135° with respect to the horizontal direction. The four phase-shifted interferograms are collected simultaneously by triggering the CCDs at the same time. The SPSI principle is studied first, followed by an error analysis; finally, we simulate the surface recovery process with a four-step phase-shifting algorithm. The results indicate that, to ensure the feasibility of the SPSI system, the polarization azimuth error of the polarizers must be kept within ±0.5°.
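For reference, a minimal sketch of the standard four-step phase-shifting reconstruction is given below; the assignment of the polarizer azimuths to quarter-wave phase shifts is an assumption based on the description above, and the simulation in the paper may differ in detail.

```python
import numpy as np

def four_step_phase(i0, i45, i90, i135):
    """Wrapped phase from four simultaneously acquired interferograms whose
    phase shifts are 0, pi/2, pi and 3*pi/2 (here assumed to correspond to
    polarizer azimuths of 0, 45, 90 and 135 degrees)."""
    return np.arctan2(i135 - i45, i0 - i90)
```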
Measurement of Systematic Error Effects for a Sensitive Storage Ring EDM Polarimeter
NASA Astrophysics Data System (ADS)
Imig, Astrid; Stephenson, Edward
2009-10-01
The Storage Ring EDM Collaboration used the Cooler Synchrotron (COSY) and the EDDA detector at the Forschungszentrum Jülich to explore systematic errors in very sensitive storage-ring polarization measurements. Polarized deuterons of 235 MeV were used. The analyzer target was a 17 mm thick carbon block placed close to the beam so that white noise applied to upstream electrostatic plates increases the vertical phase space of the beam, allowing deuterons to strike the front face of the block. For a detector acceptance that covers laboratory angles larger than 9°, the efficiency for particles to scatter into the polarimeter detectors was about 0.1% (all directions) and the vector analyzing power was about 0.2. Measurements were made of the sensitivity of the polarization measurement to beam position and angle. Both vector and tensor asymmetries were measured using beams with both vector and tensor polarization. Effects were seen that depend upon both the beam geometry and the data rate in the detectors.
Yan, M; Lovelock, D; Hunt, M; Mechalakos, J; Hu, Y; Pham, H; Jackson, A
2013-12-01
To use Cone Beam CT scans obtained just prior to treatments of head and neck cancer patients to measure the setup error and cumulative dose uncertainty of the cochlea. Data from 10 head and neck patients with 10 planning CTs and 52 Cone Beam CTs taken at time of treatment were used in this study. Patients were treated with conventional fractionation using an IMRT dose painting technique, most with 33 fractions. Weekly radiographic imaging was used to correct the patient setup. The authors used rigid registration of the planning CT and Cone Beam CT scans to find the translational and rotational setup errors, and the spatial setup errors of the cochlea. The planning CT was rotated and translated such that the cochlea positions match those seen in the cone beam scans, cochlea doses were recalculated and fractional doses accumulated. Uncertainties in the positions and cumulative doses of the cochlea were calculated with and without setup adjustments from radiographic imaging. The mean setup error of the cochlea was 0.04 ± 0.33 or 0.06 ± 0.43 cm for RL, 0.09 ± 0.27 or 0.07 ± 0.48 cm for AP, and 0.00 ± 0.21 or -0.24 ± 0.45 cm for SI with and without radiographic imaging, respectively. Setup with radiographic imaging reduced the standard deviation of the setup error by roughly 1-2 mm. The uncertainty of the cochlea dose depends on the treatment plan and the relative positions of the cochlea and target volumes. Combining results for the left and right cochlea, the authors found the accumulated uncertainty of the cochlea dose per fraction was 4.82 (0.39-16.8) cGy, or 10.1 (0.8-32.4) cGy, with and without radiographic imaging, respectively; the percentage uncertainties relative to the planned doses were 4.32% (0.28%-9.06%) and 10.2% (0.7%-63.6%), respectively. Patient setup error introduces uncertainty in the position of the cochlea during radiation treatment. With the assistance of radiographic imaging during setup, the standard deviation of setup error reduced by 31%, 42%, and 54% in RL, AP, and SI direction, respectively, and consequently, the uncertainty of the mean dose to cochlea reduced more than 50%. The authors estimate that the effects of these uncertainties on the probability of hearing loss for an individual patient could be as large as 10%.
Properties and Commercial Application of Manual Plasma Hardening
NASA Astrophysics Data System (ADS)
Korotkov, V. A.
2016-11-01
A new method and a device for plasma hardening of various parts are considered. Installation of the new device does not require a large investment (existing machine shops are suitable for housing it) or specially selected personnel (welders learn to use it without difficulty). Plasma hardening neither deforms the part nor degrades the smoothness of the surface, which makes it possible to use many hardened parts without the finishing mechanical treatment required after bulk or induction hardening. The hardened layer (about 1 mm) produced by plasma hardening exhibits better wear resistance than that obtained after bulk hardening with tempering, which prolongs the service life of the parts.
NASA Astrophysics Data System (ADS)
Druzhinina, A. A.; Laptenok, V. D.; Murygin, A. V.; Laptenok, P. V.
2016-11-01
Positioning along the joint during electron beam welding is a difficult scientific and technical problem for achieving high weld quality, and a final solution has not yet been found. This is caused by the weak interference protection of joint-position sensors operating directly in the welding process. During electron beam welding, magnetic fields frequently deflect the electron beam from the optical axis of the electron beam gun. A collimated X-ray sensor is used to monitor the beam deflection caused by the action of magnetic fields. The signal of the X-ray sensor is processed by the method of synchronous detection. Analysis of the spectral characteristics of the X-ray sensor showed that the displacement of the joint from the optical axis of the gun affects the output signal of the sensor. The authors propose a dual-circuit system for automatic positioning of the electron beam on the joint during electron beam welding under magnetic interference. This system includes a joint-tracking contour and a magnetic-field-compensation contour. The proposed system is stable. Calculation of the dynamic error of the system showed that the positioning error does not exceed the permissible deviation of the electron beam from the joint plane.
A new multiple air beam approach for in-process form error optical measurement
NASA Astrophysics Data System (ADS)
Gao, Y.; Li, R.
2018-07-01
In-process measurement can provide feedback for the control of workpiece precision in terms of size, roughness and, in particular, mid-spatial frequency form error. Optical measurement methods are of the non-contact type and possess the high precision required for in-process form error measurement. In precision machining, coolant is commonly used to reduce heat generation and thermal deformation on the workpiece surface. However, the use of coolant creates an opaque coolant barrier if optical measurement methods are used. In this paper, a new multiple air beam approach is proposed. The new approach permits the displacement of coolant arriving from any direction and with a large thickness, i.e. with a large amount of coolant. The model, the working principle, and the key features of the new approach are presented. Based on the proposed approach, a new in-process form error optical measurement system is developed. The coolant removal capability and the performance of this new multiple air beam approach are assessed. The experimental results show that the workpiece surface y(x, z) can be measured successfully with a standard deviation of up to 0.3011 µm even under a large amount of coolant (a coolant thickness of 15 mm). This corresponds to a relative uncertainty (2σ) of up to 4.35% while the workpiece surface is deeply immersed in the opaque coolant. The results also show that, in terms of coolant removal capability, air supply and air velocity, the proposed approach improves on the previous single air beam approach by factors of 3.3, 1.3 and 5.3, respectively. The results demonstrate the significant improvements brought by the new multiple air beam method together with the developed measurement system.
Refractive optics to compensate x-ray mirror shape-errors
NASA Astrophysics Data System (ADS)
Laundy, David; Sawhney, Kawal; Dhamgaye, Vishal; Pape, Ian
2017-08-01
Elliptically profiled mirrors operating at glancing angle are frequently used at X-ray synchrotron sources to focus X-rays into sub-micrometer sized spots. Mirror figure error, defined as the height difference function between the actual mirror surface and the ideal elliptical profile, causes a perturbation of the X-ray wavefront for X-rays reflecting from the mirror. When propagated to the focal plane, this perturbation results in an increase in the size of the focused beam. At Diamond Light Source we are developing refractive optics that can be used to locally cancel out the wavefront distortion caused by figure error from nano-focusing elliptical mirrors. These optics could be used to correct existing optical components on synchrotron radiation beamlines in order to give focused X-ray beam sizes approaching the theoretical diffraction limit. We present our latest results showing measurement of the X-ray wavefront error after reflection from X-ray mirrors and the translation of the measured wavefront into a design for refractive optical elements for correction of the X-ray wavefront. We show measurements of the focused beam with and without the corrective optics inserted, demonstrating the reduction in the size of the focus resulting from the correction to the wavefront.
Momentum Flux Determination Using the Multi-beam Poker Flat Incoherent Scatter Radar
NASA Technical Reports Server (NTRS)
Nicolls, M. J.; Fritts, D. C.; Janches, Diego; Heinselman, C. J.
2012-01-01
In this paper, we develop an estimator for the vertical flux of horizontal momentum with arbitrary beam pointing, applicable to the case of arbitrary but fixed beam pointing with systems such as the Poker Flat Incoherent Scatter Radar (PFISR). This method uses information from all available beams to resolve the variances of the wind field in addition to the vertical flux of both meridional and zonal momentum, targeted for high-frequency wave motions. The estimator utilises the full covariance of the distributed measurements, which provides a significant reduction in errors over the direct extension of previously developed techniques and allows for the calculation of an error covariance matrix of the estimated quantities. We find that for the PFISR experiment, we can construct an unbiased and robust estimator of the momentum flux if sufficient and proper beam orientations are chosen, which can in the future be optimized for the expected frequency distribution of momentum-containing scales. However, there is a potential trade-off between biases and standard errors introduced with the new approach, which must be taken into account when assessing the momentum fluxes. We apply the estimator to PFISR measurements on 23 April 2008 and 21 December 2007, from 60-85 km altitude, and show expected results as compared to mean winds and in relation to the measured vertical velocity variances.
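The basic idea of inverting beam-wise radial-velocity variances for the wind-field variances and momentum fluxes can be sketched as an ordinary least-squares fit, as below. This is a simplified illustration only: the estimator developed in the paper uses the full covariance of the distributed measurements and propagates an error covariance matrix, which this plain least-squares version omits.

```python
import numpy as np

def momentum_flux_lsq(azimuth_deg, elevation_deg, vr_variance):
    """Simplified multi-beam estimate of wind variances and vertical fluxes of
    horizontal momentum from radial-velocity variances in fixed beams.
    Model: var(v_r) = a^2<u'^2> + b^2<v'^2> + c^2<w'^2> + 2ac<u'w'> + 2bc<v'w'>,
    where (a, b, c) are the beam direction cosines (east, north, up)."""
    az = np.radians(np.asarray(azimuth_deg))
    el = np.radians(np.asarray(elevation_deg))
    a = np.cos(el) * np.sin(az)
    b = np.cos(el) * np.cos(az)
    c = np.sin(el)
    design = np.column_stack([a**2, b**2, c**2, 2 * a * c, 2 * b * c])
    x, *_ = np.linalg.lstsq(design, np.asarray(vr_variance), rcond=None)
    return dict(zip(["u2", "v2", "w2", "uw", "vw"], x))
```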
Algorithm for ion beam figuring of low-gradient mirrors.
Jiao, Changjun; Li, Shengyi; Xie, Xuhui
2009-07-20
Ion beam figuring technology for low-gradient mirrors is discussed. Ion beam figuring is a noncontact machining technique in which a beam of high-energy ions is directed toward a target workpiece to remove material in a predetermined and controlled fashion. Owing to this noncontact mode of material removal, problems associated with tool wear and edge effects, which are common in conventional contact polishing processes, are avoided. Based on the Bayesian principle, an iterative dwell time algorithm for planar mirrors is derived from the computer-controlled optical surfacing (CCOS) principle. Given the properties of the removal function, the shaping process of low-gradient mirrors can be approximated by the linear model for planar mirrors. On this basis, an error surface figuring technology for low-gradient mirrors with a linear path is established. With the near-Gaussian property of the removal function, the figuring process with a spiral path can be described by the conventional linear CCOS principle, and a Bayesian-based iterative algorithm can be used to deconvolve the dwell time. Moreover, the selection criterion for the spiral parameter is given. Ion beam figuring with a spiral scan path based on these methods can be used to figure mirrors with non-axis-symmetrical errors. Experiments on SiC chemical vapor deposition planar and Zerodur paraboloid samples were performed, and the final surface errors are all below λ/100.
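The abstract does not spell out the Bayesian iteration, but its structure is that of a multiplicative (Richardson-Lucy type) deconvolution of dwell time from the error map. A sketch under the linear, shift-invariant removal-function assumption for planar or low-gradient surfaces is given below; the spiral-path parameter selection discussed in the paper is not included.

```python
import numpy as np
from scipy.signal import fftconvolve

def iterative_dwell_time(error_map, removal_function, n_iter=100):
    """Iterative (Richardson-Lucy style) deconvolution of the dwell-time map t
    from the desired removal map e = t (*) b, with b the beam removal function.
    Inputs are non-negative 2D float arrays sampled on the same grid."""
    b = removal_function / removal_function.sum()
    b_flipped = b[::-1, ::-1]
    t = np.full_like(error_map, max(error_map.mean(), 1e-12))  # flat initial guess
    for _ in range(n_iter):
        estimate = fftconvolve(t, b, mode="same")
        ratio = error_map / np.maximum(estimate, 1e-12)
        t *= fftconvolve(ratio, b_flipped, mode="same")
    return t
```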
NASA Astrophysics Data System (ADS)
Jeong, S. W.; Kang, U. G.; Choi, J. Y.; Nam, W. J.
2012-09-01
Strain aging and hardening behaviors of a 304 stainless steel containing deformation-induced martensite were investigated by examining mechanical properties and microstructural evolution for different aging temperatures and times. The age hardening mechanisms identified in cold rolled 304 stainless steel were the additional formation of α'-martensite, hardening of α'-martensite, and hardening of deformed austenite. The increased amount of α'-martensite at an aging temperature of 450 °C confirmed the additional formation of α'-martensite as a hardening mechanism in cold rolled 304 stainless steel. Additionally, the increased hardness of both the α'-martensite and austenite phases with aging temperature showed that hardening of both phases acts as an effective hardening mechanism in cold rolled and aged 304 stainless steels. The results suggest that, among these mechanisms, hardening of the α'-martensite phase, involving the diffusion of interstitial solute carbon atoms to dislocations and the precipitation of fine carbide particles, is the major hardening mechanism during aging of cold rolled 304 stainless steels.
Calibration free beam hardening correction for cardiac CT perfusion imaging
NASA Astrophysics Data System (ADS)
Levi, Jacob; Fahmi, Rachid; Eck, Brendan L.; Fares, Anas; Wu, Hao; Vembar, Mani; Dhanantwari, Amar; Bezerra, Hiram G.; Wilson, David L.
2016-03-01
Myocardial perfusion imaging using CT (MPI-CT) and coronary CTA have the potential to make CT an ideal noninvasive gatekeeper for invasive coronary angiography. However, beam hardening artifacts (BHA) prevent accurate blood flow calculation in MPI-CT. Beam hardening correction (BHC) methods require either energy-sensitive CT, which is not widely available, or, typically, a calibration-based method. We developed a calibration-free, automatic BHC (ABHC) method suitable for MPI-CT. The algorithm works with any BHC method and iteratively determines the model parameters using a proposed BHA-specific cost function. In this work, we use the polynomial BHC extended to three materials. The image is segmented into soft tissue, bone, and iodine images based on mean HU and temporal enhancement. Forward projections of the bone and iodine images are obtained, and in each iteration a polynomial correction is applied. The corrections are then back projected and combined to obtain the current iteration's BHC image. This process is iterated until the cost is minimized. We evaluate the algorithm on simulated and physical phantom images and on preclinical MPI-CT data. The scans were obtained on a prototype spectral detector CT (SDCT) scanner (Philips Healthcare). Mono-energetic reconstructed images were used as the reference. In the simulated phantom, BH streak artifacts were reduced from 12 ± 2 HU to 1 ± 1 HU and cupping was reduced by 81%. Similarly, in the physical phantom, BH streak artifacts were reduced from 48 ± 6 HU to 1 ± 5 HU and cupping was reduced by 86%. In preclinical MPI-CT images, BHA was reduced from 28 ± 6 HU to less than 4 ± 4 HU at peak enhancement. The results suggest that the algorithm can be used to reduce BHA in conventional CT and improve MPI-CT accuracy.
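A minimal sketch of two ingredients described above, the polynomial correction applied to forward-projected bone and iodine path lengths and a BHA-specific cost evaluated on the corrected image, is given below. The coefficient structure and the cost terms are illustrative assumptions; the ABHC loop in the paper additionally back-projects the correction and iterates the coefficients (e.g. with a generic optimizer) until the cost is minimized.

```python
import numpy as np

def polynomial_bh_correction(path_bone, path_iodine, coeffs):
    """Second-order polynomial beam-hardening correction term computed from
    forward-projected bone and iodine path lengths (coeffs is a hypothetical
    triple of coefficients to be optimized)."""
    c_bb, c_ii, c_bi = coeffs
    return (c_bb * path_bone**2 + c_ii * path_iodine**2
            + c_bi * path_bone * path_iodine)

def bha_cost(image, streak_mask, uniform_mask):
    """Example BHA-specific cost: variance in a streak-prone region plus
    cupping (mean absolute deviation from the mean in a nominally uniform
    region). The exact cost function used in the paper may differ."""
    streak = float(np.var(image[streak_mask]))
    cupping = float(np.mean(np.abs(image[uniform_mask] - image[uniform_mask].mean())))
    return streak + cupping
```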
Park, Eun-Ah; Lee, Whal; Chung, Se-Young; Yin, Yong Hu; Chung, Jin Wook; Park, Jae Hyung
2010-01-01
To determine the optimal scan timing and adequate intravenous route for patients having undergone the Fontan operation. A total of 88 computed tomographic images in 49 consecutive patients who underwent the Fontan operation were retrospectively evaluated and divided into 7 groups: group 1, bolus-tracking method with either intravenous route (n = 20); group 2, 1-minute-delay scan with single antecubital route (n = 36); group 3, 1-minute-delay scan with both antecubital routes (n = 2); group 4, 1-minute-delay scan with foot vein route (n = 3); group 5, 1-minute-delay scan with simultaneous infusion via both antecubital and foot vein routes (n = 2); group 6, 3-minute-delay scan with single antecubital route (n = 22); and group 7, 3-minute-delay scan with foot vein route (n = 3). The presence of beam-hardening artifact, uniform enhancement, and optimal enhancement was evaluated at the right pulmonary artery (RPA), left pulmonary artery (LPA), and Fontan tract. Optimal enhancement was determined when evaluation of thrombus was possible. Standard deviation was measured at the RPA, LPA, and Fontan tract. Beam-hardening artifacts of the RPA, LPA, and Fontan tract were frequently present in groups 1, 4, and 5. The success rate of uniform and optimal enhancement was highest (100%) in groups 6 and 7, followed by group 2 (75%). An SD of less than 30 Hounsfield unit for the pulmonary artery and Fontan tract was found in groups 3, 6, and 7. The optimal enhancement of the pulmonary arteries and Fontan tract can be achieved by a 3-minute-delay scan irrespective of the intravenous route location.
Kalra, Mannudeep K; Maher, Michael M; Blake, Michael A; Lucey, Brian C; Karau, Kelly; Toth, Thomas L; Avinash, Gopal; Halpern, Elkan F; Saini, Sanjay
2004-09-01
To assess the effect of noise reduction filters on detection and characterization of lesions on low-radiation-dose abdominal computed tomographic (CT) images. Low-dose CT images of abdominal lesions in 19 consecutive patients (11 women, eight men; age range, 32-78 years) were obtained at reduced tube currents (120-144 mAs). These baseline low-dose CT images were postprocessed with six noise reduction filters; the resulting postprocessed images were then randomly assorted with baseline images. Three radiologists performed independent evaluation of randomized images for presence, number, margins, attenuation, conspicuity, calcification, and enhancement of lesions, as well as image noise. Side-by-side comparison of baseline images with postprocessed images was performed by using a five-point scale for assessing lesion conspicuity and margins, image noise, beam hardening, and diagnostic acceptability. Quantitative noise and contrast-to-noise ratio were obtained for all liver lesions. Statistical analysis was performed by using the Wilcoxon signed rank test, Student t test, and kappa test of agreement. Significant reduction of noise was observed in images postprocessed with filter F compared with the noise in baseline nonfiltered images (P =.004). Although the number of lesions seen on baseline images and that seen on postprocessed images were identical, lesions were less conspicuous on postprocessed images than on baseline images. A decrease in quantitative image noise and contrast-to-noise ratio for liver lesions was noted with all noise reduction filters. There was good interobserver agreement (kappa = 0.7). Although the use of currently available noise reduction filters improves image noise and ameliorates beam-hardening artifacts at low-dose CT, such filters are limited by a compromise in lesion conspicuity and appearance in comparison with lesion conspicuity and appearance on baseline low-dose CT images. Copyright RSNA, 2004
Ott, Sabine; Gölitz, Philipp; Adamek, Edyta; Royalty, Kevin; Doerfler, Arnd; Struffert, Tobias
2015-08-01
We compared flat-detector computed tomography angiography (FD-CTA) to multislice computed tomography angiography (MS-CTA) and digital subtraction angiography (DSA) for the visualization of experimental aneurysms treated with stents, coils or a combination of both. In 20 rabbits, aneurysms were created using the rabbit elastase aneurysm model. Seven aneurysms were treated with coils, seven with coils and stents, and six with self-expandable stents alone. Imaging was performed by DSA, MS-CTA and FD-CTA immediately after treatment. Multiplanar reconstruction (MPR) was performed and two experienced reviewers compared aneurysm/coil package size, aneurysm occlusion, stent diameters and artifacts for each modality. In aneurysms treated with stents alone, the visualization of the aneurysms was identical in all three imaging modalities. Residual aneurysm perfusion was present in two cases and visible in DSA and FD-CTA but not in MS-CTA. The diameter of coil packages was overestimated by 56% in MS-CTA and by only 16% in FD-CTA compared to DSA (p < 0.05). The diameter of stents was identical for DSA and FD-CTA and was significantly overestimated in MS-CTA (p < 0.05). Beam/metal hardening artifacts impaired image quality more severely in MS-CTA than in FD-CTA. MS-CTA is impaired by blooming and beam/metal hardening artifacts in the visualization of implanted devices. There was no significant difference between measurements made with noninvasive FD-CTA and the gold standard DSA after stenting and after coiling/stent-assisted coiling of aneurysms. FD-CTA may be considered as a non-invasive alternative to the gold standard 2D DSA in selected patients who require follow-up imaging after stenting. © The Author(s) 2015.
Technology of Strengthening Steel Details by Surfacing Composite Coatings
NASA Astrophysics Data System (ADS)
Burov, V. G.; Bataev, A. A.; Rakhimyanov, Kh M.; Mul, D. O.
2016-04-01
The article considers the problem of forming wear-resistant metal-ceramic coatings on steel surfaces, using the results of the authors' own investigations and an analysis of achievements made domestically and abroad. Increased wear resistance of the surface layers of steel parts is achieved by surfacing composite coatings with metal carbides or borides as the disperse particles of the strengthening phase. Surfacing of wearing machine parts and mechanisms has a history of more than 100 years, yet engineering investigations in this field are still being conducted. The use of heating sources that provide a high power density makes it possible to ensure temperature and time conditions of surfacing under which composites with particular service and functional properties are formed. The high concentration of energy in the melt zone, which is created from powder mixtures and the hardened surface layer, allows a transition zone to be produced between the base material and the surfaced coating. Surfacing by an electron beam directed from vacuum into the atmosphere offers considerable technological advantages: it makes it possible to strengthen the surface layers of large-sized parts by surfacing powder mixtures without their preliminary compacting. A modified layer of the base metal with ceramic particles distributed in it is created as a result of heating the surfaced powders and the surface layer of the part by the electron beam. The surfacing technology allows powders of refractory metals and graphite to be used in the powder mixtures; they interact with one another and form the particles of the hardening phase of the composite coating. The chemical composition of the base and surfaced materials is considered to be the main factor determining the character of the metallurgical processes in local melt zones as well as the structure and properties of the surfaced composite.
Analytical N beam position monitor method
NASA Astrophysics Data System (ADS)
Wegscheider, A.; Langner, A.; Tomás, R.; Franchi, A.
2017-11-01
Measurement and correction of focusing errors is of great importance for the performance and machine protection of circular accelerators. Furthermore, the LHC needs to provide equal luminosities to the experiments ATLAS and CMS. High demands are also set on the speed of optics commissioning, as the foreseen operation with β*-leveling on luminosity will require many operational optics. A fast measurement of the β-function around a storage ring is usually done using the measured phase advances between three consecutive beam position monitors (BPMs). A recent extension of this established technique, called the N-BPM method, was successfully applied for optics measurements at CERN, ALBA, and ESRF. We present here an improved algorithm that uses analytical calculations for both random and systematic errors and takes into account the presence of quadrupole, sextupole, and BPM misalignments, in addition to quadrupolar field errors. This new scheme, called the analytical N-BPM method, is much faster, further improves the measurement accuracy, and is applicable to very pushed beam optics where the existing numerical N-BPM method tends to fail.
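For context, the underlying three-BPM relation that the (analytical) N-BPM method generalizes can be sketched as below. This is the textbook formula only; the analytical error propagation and the combination over N BPMs described in the paper are not reproduced here.

```python
import numpy as np

def beta_from_three_bpms(beta_model_1, phi12_meas, phi13_meas,
                         phi12_model, phi13_model):
    """Classic three-BPM estimate of the beta function at BPM 1 from measured
    and model phase advances (in radians) to BPMs 2 and 3."""
    cot = lambda x: 1.0 / np.tan(x)
    return beta_model_1 * (cot(phi12_meas) - cot(phi13_meas)) / (
        cot(phi12_model) - cot(phi13_model))
```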
Research on accuracy analysis of laser transmission system based on Zemax and Matlab
NASA Astrophysics Data System (ADS)
Chen, Haiping; Liu, Changchun; Ye, Haixian; Xiong, Zhao; Cao, Tingfen
2017-05-01
Laser transmission systems are important in high-power solid-state laser facilities; their function is to transfer and focus the light beam in accordance with the physical function of the facility. The system is mainly composed of transmission mirror modules and a wedge lens module. To achieve precision alignment, the overall alignment precision of the system must be decomposed into allowable ranges of calibration error for each module. The traditional method is to analyze the error factors of the modules separately and then combine them linearly to obtain the influence of multiple modules and multiple factors. In order to analyze the effect of the alignment error of each module on the beam center and focus more accurately, this paper combines Monte Carlo random trials with ray tracing to analyze the influence of multiple modules and factors on the center of the beam, and to evaluate and optimize the results of the accuracy decomposition.
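The combination of Monte Carlo random trials with a per-module sensitivity model (in the paper, derived from Zemax ray tracing and driven from Matlab) can be sketched in Python as follows; the linear sensitivity model and the uniform error distributions are illustrative assumptions, not the paper's actual tolerance budget.

```python
import numpy as np

def monte_carlo_beam_shift(sensitivities, tolerances, n_trials=10000, seed=0):
    """Monte Carlo tolerance analysis sketch: draw random alignment errors for
    each module within its allowed range and accumulate the beam-centre shift
    through a linear sensitivity model (sensitivities stand in for the
    ray-tracing-derived per-module coefficients)."""
    rng = np.random.default_rng(seed)
    s = np.asarray(sensitivities, dtype=float)
    tol = np.asarray(tolerances, dtype=float)
    errors = rng.uniform(-tol, tol, size=(n_trials, tol.size))
    shifts = errors @ s                      # total beam-centre shift per trial
    return shifts.mean(), shifts.std()
```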
Mori, Shinichiro; Shibayama, Kouichi; Tanimoto, Katsuyuki; Kumagai, Motoki; Matsuzaki, Yuka; Furukawa, Takuji; Inaniwa, Taku; Shirai, Toshiyuki; Noda, Koji; Tsuji, Hiroshi; Kamada, Tadashi
2012-09-01
Our institute has constructed a new treatment facility for carbon ion scanning beam therapy. The first clinical trials were successfully completed at the end of November 2011. To evaluate patient setup accuracy, positional errors between the reference Computed Tomography (CT) scan and final patient setup images were calculated using 2D-3D registration software. Eleven patients with tumors of the head and neck, prostate and pelvis receiving carbon ion scanning beam treatment participated. The patient setup process takes orthogonal X-ray flat panel detector (FPD) images and the therapists adjust the patient table position in six degrees of freedom to register the reference position by manual or auto- (or both) registration functions. We calculated residual positional errors with the 2D-3D auto-registration function using the final patient setup orthogonal FPD images and treatment planning CT data. Residual error averaged over all patients in each fraction decreased from the initial to the last treatment fraction [1.09 mm/0.76° (averaged in the 1st and 2nd fractions) to 0.77 mm/0.61° (averaged in the 15th and 16th fractions)]. 2D-3D registration calculation time was 8.0 s on average throughout the treatment course. Residual errors in translation and rotation averaged over all patients as a function of date decreased with the passage of time (1.6 mm/1.2° in May 2011 to 0.4 mm/0.2° in December 2011). This retrospective residual positional error analysis shows that the accuracy of patient setup during the first clinical trials of carbon ion beam scanning therapy was good and improved with increasing therapist experience.
Surface Fatigue Resistance with Induction Hardening
NASA Technical Reports Server (NTRS)
Townsend, Dennis; Turza, Alan; Chapman, Mike
1996-01-01
Induction hardening has been used for some years to harden the surface and improve the strength and service life of gears and other components. Many applications that employ induction hardening require a relatively long time to finish the hardening process, and controlling the hardness of the surface layer and its depth was often a problem. Other surface hardening methods, i.e., carburizing, take a very long time and tend to cause deformations of the toothing, whose elimination requires supplementary finishing work. In double-frequency induction hardening, a low frequency is used to preheat the toothed wheel and a much higher frequency is then used to rapidly heat the surface for hardening.
Status of Multi-beam Long Trace-profiler Development
NASA Technical Reports Server (NTRS)
Gubarev, Mikhail V.; Merthe, Daniel J.; Kilaru, Kiranmayee; Kester, Thomas; Ramsey, Brian; McKinney, Wayne R.; Takacs, Peter Z.; Dahir, A.; Yashchuk, Valeriy V.
2013-01-01
The multi-beam long trace profiler (MB-LTP) is under development at NASA's Marshall Space Flight Center. The traditional LTP scans the surface under test with a single laser beam, directly measuring the surface figure slope errors. While capable of exceptional surface slope accuracy, single-beam LTP scanning has a slow measuring speed. Metrology efficiency can be increased by replacing the single laser beam with multiple beams that can scan a section of the test surface in a single instance. The increase in speed with such a system would be almost proportional to the number of laser beams. Progress on the development of the multi-beam long trace profiler is presented.
High-speed reference-beam-angle control technique for holographic memory drive
NASA Astrophysics Data System (ADS)
Yamada, Ken-ichiro; Ogata, Takeshi; Hosaka, Makoto; Fujita, Koji; Okuyama, Atsushi
2016-09-01
We developed a holographic memory drive for next-generation optical memory. In this study, we present the key technology for achieving a high-speed transfer rate during reproduction, namely a high-speed control technique for the reference beam angle. During reproduction in a holographic memory drive, the optimum reference beam angle varies owing to distortion of the medium caused, for example, by temperature variation, beam irradiation, and moisture absorption. Therefore, a reference-beam-angle control technique that positions the reference beam at the optimum angle is crucial. We developed a new optical system that generates an angle-error signal to detect the optimum reference beam angle. To achieve high-speed control with this optical system, we developed a new control technique called adaptive final-state control (AFSC), which adds a second control input to the first one derived from conventional final-state control (FSC) at the time of angle-error-signal detection. We built an experimental system employing AFSC to achieve moving control between pages (Page Seek) within 300 µs. In sequential multiple Page Seeks, we were able to position the reference beam at the optimum angles that maximize the diffracted beam intensity. We expect that applying the new control technique to the holographic memory drive will enable a gigabit/s-class transfer rate.
NASA Technical Reports Server (NTRS)
Thibodeaux, J. J.
1977-01-01
The results of a simulation study performed to determine the effects of gyro verticality error on lateral autoland tracking and landing performance are presented. A first-order vertical gyro error model was used to generate the measurement of the roll attitude feedback signal normally supplied by an inertial navigation system. The lateral autoland law used was an inertially smoothed control design. The effects of initial angular gyro tilt errors (2 deg, 3 deg, 4 deg, and 5 deg), introduced prior to localizer capture, were investigated using a small-perturbation aircraft simulation. These errors represent the deviations which could occur in a conventional attitude sensor as a result of maneuver-induced spin-axis misalignment and drift. Results showed that for a 1.05 deg per minute erection rate and a 5 deg initial tilt error, ON COURSE autoland control logic was not satisfied. Failure to attain the ON COURSE mode precluded high control loop gains and localizer beam path integration and resulted in unacceptable beam standoff at touchdown.
A new polishing process for large-aperture and high-precision aspheric surface
NASA Astrophysics Data System (ADS)
Nie, Xuqing; Li, Shengyi; Dai, Yifan; Song, Ci
2013-07-01
A high-precision aspheric surface is hard to achieve owing to mid-spatial frequency (MSF) error in the finishing step. The influence of MSF error is studied through simulations and experiments. In this paper, a new polishing process based on magnetorheological finishing (MRF), smooth polishing (SP) and ion beam figuring (IBF) is proposed. A 400 mm aperture parabolic surface was polished with this new process. Smooth polishing is applied after rough machining to control the MSF error. In the middle finishing step, most of the low-spatial frequency error is removed rapidly by MRF, then the MSF error is restricted by SP, and finally ion beam figuring is used to finish the surface. The surface accuracy was improved from the initial 37.691 nm (rms, 95% aperture) to a final 4.195 nm. The results show that the new polishing process is effective for manufacturing large-aperture, high-precision aspheric surfaces.
Characterization of the International Linear Collider damping ring optics
NASA Astrophysics Data System (ADS)
Shanks, J.; Rubin, D. L.; Sagan, D.
2014-10-01
A method is presented for characterizing the emittance dilution and dynamic aperture for an arbitrary closed lattice that includes guide field magnet errors, multipole errors and misalignments. This method, developed and tested at the Cornell Electron Storage Ring Test Accelerator (CesrTA), has been applied to the damping ring lattice for the International Linear Collider (ILC). The effectiveness of beam based emittance tuning is limited by beam position monitor (BPM) measurement errors, number of corrector magnets and their placement, and correction algorithm. The specifications for damping ring magnet alignment, multipole errors, number of BPMs, and precision in BPM measurements are shown to be consistent with the required emittances and dynamic aperture. The methodology is then used to determine the minimum number of position monitors that is required to achieve the emittance targets, and how that minimum depends on the location of the BPMs. Similarly, the maximum tolerable multipole errors are evaluated. Finally, the robustness of each BPM configuration with respect to random failures is explored.
de Freitas, Carolina P.; Cabot, Florence; Manns, Fabrice; Culbertson, William; Yoo, Sonia H.; Parel, Jean-Marie
2015-01-01
Purpose. To assess if a change in refractive index of the anterior chamber during femtosecond laser-assisted cataract surgery can affect the laser beam focus position. Methods. The index of refraction and chromatic dispersion of six ophthalmic viscoelastic devices (OVDs) was measured with an Abbe refractometer. Using the Gullstrand eye model, the index values were used to predict the error in the depth of a femtosecond laser cut when the anterior chamber is filled with OVD. Two sources of error produced by the change in refractive index were evaluated: the error in anterior capsule position measured with optical coherence tomography biometry and the shift in femtosecond laser beam focus depth. Results. The refractive indices of the OVDs measured ranged from 1.335 to 1.341 in the visible light (at 587 nm). The error in depth measurement of the refilled anterior chamber ranged from −5 to +7 μm. The OVD produced a shift of the femtosecond laser focus ranging from −1 to +6 μm. Replacement of the aqueous humor with OVDs with the densest compound produced a predicted error in cut depth of 13 μm anterior to the expected cut. Conclusions. Our calculations show that the change in refractive index due to anterior chamber refilling does not sufficiently shift the laser beam focus position to cause the incomplete capsulotomies reported during femtosecond laser–assisted cataract surgery. PMID:25626971
NASA Technical Reports Server (NTRS)
Lauenstein, Jean-Marie; Casey, Megan
2017-01-01
Silicon carbide power device technology has the potential to enable a new generation of aerospace power systems that demand high efficiency, rapid switching, and reduced mass and volume in order to expand space-based capabilities. For this potential to be realized, SiC devices must be capable of withstanding the harsh space radiation environment. Commercial SiC components exhibit high tolerance to total ionizing dose but to date, have not performed well under exposure to heavy ion radiation representative of the on-orbit galactic cosmic rays. Insertion of SiC power device technology into space applications to achieve breakthrough performance gains will require intentional development of components hardened to the effects of these highly-energetic heavy ions. This work presents heavy-ion test data obtained by the authors over the past several years for discrete SiC power MOSFETs, JFETs, and diodes in order to increase the body of knowledge and understanding that will facilitate hardening of this technology to space radiation effects. Specifically, heavy-ion irradiation data taken under different bias, temperature, and ion beam conditions is presented for devices from different manufacturers, and the emerging patterns discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Liang; Wang, Lu; Nie, Zhihua
Laser shock peening (LSP) with different cycles was performed on Ti-based bulk metallic glasses (BMGs). The sub-surface residual stress of the LSPed specimens was measured by high-energy X-ray diffraction (HEXRD) and the near-surface residual stress was measured with a scanning electron microscope/focused ion beam (SEM/FIB) instrument. The sub-surface residual stress in the LSP impact direction (about −170 MPa) is much lower than that perpendicular to the impact direction (about −350 MPa), exhibiting anisotropy. The depth of the compressive stress zone increases from 400 μm to 500 μm with increasing LSP cycles. The highest near-surface residual stress is about −750 MPa. LSP caused the free volume to increase, and the maximum increase appeared after the first LSP process. Compared with the hardness (567 ± 7 HV) of the as-cast BMG, the hardness (590 ± 9 HV) on the shocked surface shows a hardening effect attributed to the compressive residual stress, while the hardness (420 ± 9 HV) on the longitudinal section shows a softening effect attributed to the increased free volume.
Generation of dark hollow beam via coherent combination based on adaptive optics.
Zheng, Yi; Wang, Xiaohua; Shen, Feng; Li, Xinyang
2010-12-20
A novel method for generating a dark hollow beam (DHB) is proposed and studied both theoretically and experimentally. A coherent combination technique for laser arrays is implemented based on adaptive optics (AO). A beam arraying structure and an active segmented mirror are designed and described. Piston errors are extracted by a zero-order interference detection system with the help of a custom-made photodetector array. An algorithm called the extremum approach is adopted to calculate the feedback control signals. A dynamic piston error is introduced by LiNbO3 to test the capability of the AO servo. In closed loop, a stable and clear DHB is obtained. The experimental results confirm the feasibility of the concept.
SU-E-T-484: In Vivo Dosimetry Tolerances in External Beam Fast Neutron Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, L; Gopan, O
Purpose: Optically stimulated luminescence (OSL) dosimetry with Landauer Al2O3:C nanodots was developed at our institution as a passive in vivo dosimetry (IVD) system for patients treated with fast neutron therapy. The purpose of this study was to establish clinically relevant tolerance limits for detecting treatment errors requiring further investigation. Methods: Tolerance levels were estimated by conducting a series of IVD expected dose calculations for square field sizes ranging between 2.8 and 28.8 cm. For each field size evaluated, doses were calculated for open and internal wedged fields with angles of 30°, 45°, or 60°. Theoretical errors were computed for variations of incorrect beam configurations. Dose errors, defined as the percent difference from the expected dose calculation, were measured with groups of three nanodots placed in a 30 x 30 cm solid water phantom, at beam isocenter (150 cm SAD, 1.7 cm Dmax). The tolerances were applied to IVD patient measurements. Results: The overall accuracy of the nanodot measurements is 2–3% for open fields. Measurement errors agreed with calculated errors to within 3%. Theoretical estimates of dosimetric errors showed that IVD measurements with OSL nanodots will detect the absence of an internal wedge or a wrong wedge angle. Incorrect nanodot placement on a wedged field is more likely to be caught if the offset is in the direction of the "toe" of the wedge, where the dose difference is about 12%. Errors caused by an incorrect flattening filter size produced a 2% measurement error that is not detectable by IVD measurement alone. Conclusion: IVD with nanodots will detect treatment errors associated with incorrect implementation of the internal wedge. The results of this study will streamline the physicists' investigations in determining the root cause of an IVD reading that is outside normally accepted tolerances.
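As defined above, the dose error is the percent difference of the reading from the expected dose; a trivial helper to flag out-of-tolerance readings is sketched below. The 5% default is a placeholder only, not the clinically derived tolerance that is the subject of the study.

```python
def dose_error_percent(measured_dose, expected_dose):
    """Percent difference of an in vivo reading from the expected dose."""
    return 100.0 * (measured_dose - expected_dose) / expected_dose

def flag_for_review(measured_dose, expected_dose, tolerance_percent=5.0):
    """Flag a reading whose error exceeds the clinical tolerance
    (placeholder default)."""
    return abs(dose_error_percent(measured_dose, expected_dose)) > tolerance_percent
```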
Lee, It Ee; Ghassemlooy, Zabih; Ng, Wai Pang; Khalighi, Mohammad-Ali; Liaw, Shien-Kuei
2016-01-01
Joint effects of aperture averaging and beam width on the performance of free-space optical communication links, under the impairments of atmospheric loss, turbulence, and pointing errors (PEs), are investigated from an information theory perspective. The propagation of a spatially partially coherent Gaussian-beam wave through a random turbulent medium is characterized, taking into account the diverging and focusing properties of the optical beam as well as the scintillation and beam wander effects. Results show that a noticeable improvement in the average channel capacity can be achieved with an enlarged receiver aperture in the moderate-to-strong turbulence regime, even without knowledge of the channel state information. In particular, it is observed that the optimum beam width can be reduced to improve the channel capacity, albeit the presence of scintillation and PEs, given that either one or both of these adverse effects are least dominant. We show that, under strong turbulence conditions, the beam width increases linearly with the Rytov variance for a relatively smaller PE loss but changes exponentially with steeper increments for higher PE losses. Our findings conclude that the optimal beam width is dependent on the combined effects of turbulence and PEs, and this parameter should be adjusted according to the varying atmospheric channel conditions. Therefore, we demonstrate that the maximum channel capacity is best achieved through the introduction of a larger receiver aperture and a beam-width optimization technique.
Beam-specific planning volumes for scattered-proton lung radiotherapy
NASA Astrophysics Data System (ADS)
Flampouri, S.; Hoppe, B. S.; Slopsema, R. L.; Li, Z.
2014-08-01
This work describes the clinical implementation of a beam-specific planning treatment volume (bsPTV) calculation for lung cancer proton therapy and its integration into the treatment planning process. Uncertainties incorporated in the calculation of the bsPTV included setup errors, machine delivery variability, breathing effects, inherent proton range uncertainties and combinations of the above. Margins were added for translational and rotational setup errors and breathing motion variability during the course of treatment as well as for their effect on the proton range of each treatment field. The effect of breathing motion and deformation on the proton range was calculated from 4D computed tomography data. Range uncertainties were considered taking into account the individual voxel HU uncertainty along each proton beamlet. Beam-specific treatment volumes generated for 12 patients were used: a) as planning targets, b) for routine plan evaluation, c) to aid beam angle selection and d) to create beam-specific margins for organs at risk to ensure sparing. The alternative planning technique based on the bsPTVs produced similar target coverage as the conventional proton plans while better sparing the surrounding tissues. Conventional proton plans were evaluated by comparing the dose distributions per beam with the corresponding bsPTV. The bsPTV volume as a function of beam angle revealed some unexpected sources of uncertainty and could help the planner choose more robust beams. A beam-specific planning volume for the spinal cord was used for dose distribution shaping to ensure organ sparing laterally and distally to the beam.
Defining robustness protocols: a method to include and evaluate robustness in clinical plans
NASA Astrophysics Data System (ADS)
McGowan, S. E.; Albertini, F.; Thomas, S. J.; Lomax, A. J.
2015-04-01
We aim to define a site-specific robustness protocol to be used during the clinical plan evaluation process. The plan robustness of 16 skull base IMPT plans to systematic range and random set-up errors has been retrospectively and systematically analysed. This was determined by calculating the error-bar dose distribution (ebDD) for all the plans and by defining metrics used to establish protocols aiding the plan assessment. Additionally, an example of how to clinically use the defined robustness database is given, whereby a plan with sub-optimal brainstem robustness was identified. The advantage of using different beam arrangements to improve the plan robustness was analysed. Using the ebDD it was found that range errors had a smaller effect on the dose distribution than the corresponding set-up error in a single fraction, and that organs at risk were most robust to the range errors, whereas the target was more robust to set-up errors. A database was created to aid planners in terms of plan robustness aims in these volumes. This resulted in the definition of site-specific robustness protocols. The use of robustness constraints allowed the identification of a specific patient who may have benefited from a treatment of greater individuality. A new beam arrangement was shown to be preferable when balancing conformality and robustness for this case. The ebDD and error-bar volume histogram proved effective in analysing plan robustness. The process of retrospective analysis could be used to establish site-specific robustness planning protocols in proton therapy. These protocols allow the planner to identify plans that, although delivering a dosimetrically adequate dose distribution, have sub-optimal robustness to these uncertainties. For such cases the use of different beam start conditions may improve the plan robustness to set-up and range uncertainties.
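The exact definition of the error-bar dose distribution is not given in the abstract; as a rough sketch, one voxel-wise realisation of the idea is the spread of dose over the recomputed range and set-up error scenarios, as below.

```python
import numpy as np

def error_bar_dose_distribution(dose_scenarios):
    """Per-voxel dose spread over a set of error scenarios (nominal plan
    recomputed under range and set-up perturbations).
    dose_scenarios: array of shape (n_scenarios, ...voxel grid...)."""
    doses = np.asarray(dose_scenarios)
    return doses.max(axis=0) - doses.min(axis=0)
```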
Flat-panel cone-beam CT: a novel imaging technology for image-guided procedures
NASA Astrophysics Data System (ADS)
Siewerdsen, Jeffrey H.; Jaffray, David A.; Edmundson, Gregory K.; Sanders, W. P.; Wong, John W.; Martinez, Alvaro A.
2001-05-01
The use of flat-panel imagers for cone-beam CT signals the emergence of an attractive technology for volumetric imaging. Recent investigations demonstrate volume images with high spatial resolution and soft-tissue visibility and point to a number of logistical characteristics (e.g., open geometry, volume acquisition in a single rotation about the patient, and separation of the imaging and patient support structures) that are attractive to a broad spectrum of applications. Considering application to image-guided (IG) procedures - specifically IG therapies - this paper examines the performance of flat-panel cone-beam CT in relation to numerous constraints and requirements, including time (i.e., speed of image acquisition), dose, and field-of-view. The imaging and guidance performance of a prototype flat panel cone-beam CT system is investigated through the construction of procedure-specific tasks that test the influence of image artifacts (e.g., x-ray scatter and beam-hardening) and volumetric imaging performance (e.g., 3D spatial resolution, noise, and contrast) - taking two specific examples in IG brachytherapy and IG vertebroplasty. For IG brachytherapy, a procedure-specific task is constructed which tests the performance of flat-panel cone-beam CT in measuring the volumetric distribution of Pd-103 permanent implant seeds in relation to neighboring bone and soft-tissue structures in a pelvis phantom. For IG interventional procedures, a procedure-specific task is constructed in the context of vertebroplasty performed on a cadaverized ovine spine, demonstrating the volumetric image quality in pre-, intra-, and post-therapeutic images of the region of interest and testing the performance of the system in measuring the volumetric distribution of bone cement (PMMA) relative to surrounding spinal anatomy. Each of these tasks highlights numerous promising and challenging aspects of flat-panel cone-beam CT applied to IG procedures.
Spectral imaging using clinical megavoltage beams and a novel multi-layer imager
NASA Astrophysics Data System (ADS)
Myronakis, Marios; Fueglistaller, Rony; Rottmann, Joerg; Hu, Yue-Houng; Wang, Adam; Baturin, Paul; Huber, Pascal; Morf, Daniel; Star-Lack, Josh; Berbeco, Ross
2017-12-01
We assess the feasibility of clinical megavoltage (MV) spectral imaging for material and bone separation with a novel multi-layer imager (MLI) prototype. The MLI provides higher detective quantum efficiency and lower noise than conventional electronic portal imagers. Simulated experiments were performed using a validated Monte Carlo model of the MLI to estimate energy absorption and energy separation between the MLI components. Material separation was evaluated experimentally using solid water and aluminum (Al), copper (Cu) and gold (Au) for 2.5 MV, 6 MV and 6 MV flattening filter free (FFF) clinical photon beams. An anthropomorphic phantom with implanted gold fiducials was utilized to further demonstrate bone/gold separation. Weighted subtraction imaging was employed for material and bone separation. The weighting factor (w) was iteratively estimated, with the optimal w value determined by minimization of the relative signal difference (ΔS_R) and signal-difference-to-noise ratio (SDNR) between material (or bone) and the background. Energy separation between layers of the MLI was mainly the result of beam hardening between components with an average energy separation between 34 and 47 keV depending on the x-ray beam energy. The minimum average energy of the detected spectrum in the phosphor layer was 123 keV in the top layer of the MLI with the 2.5 MV beam. The w values that minimized ΔS_R and SDNR for Al, Cu and Au were 0.89, 0.76 and 0.64 for 2.5 MV; for 6 MV FFF, w was 0.98, 0.93 and 0.77 respectively. Bone suppression in the anthropomorphic phantom resulted in improved visibility of the gold fiducials with the 2.5 MV beam. Optimization of the MLI design is required to achieve optimal separation at clinical MV beam energies.
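As a rough illustration of the weighted-subtraction step described above (not the authors' implementation), the sketch below forms I_top − w·I_bottom from two detector-layer images and scans w for the value that minimizes the relative signal difference between a material region and the background; the arrays, ROIs, and contrast values are synthetic placeholders.

```python
import numpy as np

def relative_signal_difference(img, roi_material, roi_background):
    """|S_mat - S_bkg| / |S_bkg| for mean signals in the two regions of interest."""
    s_mat = img[roi_material].mean()
    s_bkg = img[roi_background].mean()
    return abs(s_mat - s_bkg) / abs(s_bkg)

def best_weight(top, bottom, roi_material, roi_background,
                weights=np.linspace(0.0, 1.5, 301)):
    """Scan w and return the weight minimizing the relative signal difference
    of the weighted-subtraction image top - w * bottom (suppressing the material)."""
    scores = [relative_signal_difference(top - w * bottom, roi_material, roi_background)
              for w in weights]
    return weights[int(np.argmin(scores))]

# Synthetic illustration: a uniform background with a square insert whose contrast
# differs between the two detector layers (mimicking energy separation).
rng = np.random.default_rng(0)
top = 1.00 + 0.01 * rng.standard_normal((64, 64))
bottom = 1.00 + 0.01 * rng.standard_normal((64, 64))
top[20:40, 20:40] += 0.20      # insert signal in the top (softer-spectrum) layer
bottom[20:40, 20:40] += 0.25   # slightly different contrast in the bottom layer

roi_mat = (slice(22, 38), slice(22, 38))
roi_bkg = (slice(2, 18), slice(2, 18))
w = best_weight(top, bottom, roi_mat, roi_bkg)
print(f"weight minimizing the relative signal difference: w = {w:.2f}")
```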
High performance Si immersion gratings patterned with electron beam lithography
NASA Astrophysics Data System (ADS)
Gully-Santiago, Michael A.; Jaffe, Daniel T.; Brooks, Cynthia B.; Wilson, Daniel W.; Muller, Richard E.
2014-07-01
Infrared spectrographs employing silicon immersion gratings can be significantly more compact than spectrographs using front-surface gratings. The Si gratings can also offer continuous wavelength coverage at high spectral resolution. The grooves in Si gratings are made with semiconductor lithography techniques, to date almost entirely using contact mask photolithography. Planned near-infrared astronomical spectrographs require either finer groove pitches or higher positional accuracy than standard UV contact mask photolithography can reach. A collaboration between the University of Texas at Austin Silicon Diffractive Optics Group and the Jet Propulsion Laboratory Microdevices Laboratory has experimented with direct writing silicon immersion grating grooves with electron beam lithography. The patterning process involves depositing positive e-beam resist on 1 to 30 mm thick, 100 mm diameter monolithic crystalline silicon substrates. We then use the facility JEOL 9300FS e-beam writer at JPL to produce the linear pattern that defines the gratings. There are three key challenges to produce high-performance e-beam written silicon immersion gratings. (1) E-beam field and subfield stitching boundaries cause periodic cross-hatch structures along the grating grooves. The structures manifest themselves as spectral and spatial dimension ghosts in the diffraction limited point spread function (PSF) of the diffraction grating. In this paper, we show that the effects of e-beam field boundaries must be mitigated. We have significantly reduced ghost power with only minor increases in write time by using four or more field sizes of less than 500 μm. (2) The finite e-beam stage drift and run-out error cause large-scale structure in the wavefront error. We deal with this problem by applying a mark detection loop to check for and correct out minuscule stage drifts. We measure the level and direction of stage drift and show that mark detection reduces peak-to-valley wavefront error by a factor of 5. (3) The serial write process for typical gratings yields write times of about 24 hours; this makes prototyping costly. We discuss work with negative e-beam resist to reduce the fill factor of exposure, and therefore limit the exposure time. We also discuss the tradeoffs of long write-time serial write processes like e-beam with UV photomask lithography. We show the results of experiments on small pattern size prototypes on silicon wafers. Current prototypes now exceed 30 dB of suppression on spectral and spatial dimension ghosts compared to monochromatic spectral purity measurements of the backside of Si echelle gratings in reflection at 632 nm. We perform interferometry at 632 nm in reflection with a 25 mm circular beam on a grating with a blaze angle of 71.6°. The measured wavefront error is 0.09 waves peak to valley.
NASA Astrophysics Data System (ADS)
Jiang, YuXiao; Guo, PengLiang; Gao, ChengYan; Wang, HaiBo; Alzahrani, Faris; Hobiny, Aatef; Deng, FuGuo
2017-12-01
We present an original self-error-rejecting photonic qubit transmission scheme for both the polarization and spatial states of photon systems transmitted over collective noise channels. In our scheme, we use simple linear-optical elements, including half-wave plates, 50:50 beam splitters, and polarization beam splitters, to convert spatial-polarization modes into different time bins. By using postselection in different time bins, the success probability of obtaining the uncorrupted states approaches 1/4 for single-photon transmission, which is not influenced by the coefficients of noisy channels. Our self-error-rejecting transmission scheme can be generalized to hyperentangled n-photon systems and is useful in practical high-capacity quantum communications with photon systems in two degrees of freedom.
The Effect of Grain Size on the Strain Hardening Behavior for Extruded ZK61 Magnesium Alloy
NASA Astrophysics Data System (ADS)
Zhang, Lixin; Zhang, Wencong; Chen, Wenzhen; Duan, Junpeng; Wang, Wenke; Wang, Erde
2017-12-01
The effects of grain size on the tensile and compressive strain hardening behaviors of extruded ZK61 alloys have been investigated by uniaxial tensile and compressive tests along the extrusion direction. Cylindrical tension and compression specimens of extruded ZK61 alloys with various grain sizes were fabricated by annealing treatments. Tensile and compressive tests at ambient temperature were conducted at a strain rate of 0.5 × 10⁻³ s⁻¹. The results indicate that both tensile strain hardening and compressive strain hardening of ZK61 alloys with different grain sizes have an athermal regime of dislocation accumulation in early deformation. The threshold stress value that causes dynamic recovery is predominantly related to grain size in tensile strain hardening, but the threshold stress values for different grain sizes are almost identical in compressive strain hardening. There are obvious transition points on the tensile strain hardening curves which indicate the occurrence of dynamic recrystallization (DRX). The tensile strain hardening rate of the coarse-grained alloy decreases noticeably faster than that of the fine-grained alloys before DRX, and the tensile strain hardening curves of different grain sizes tend to become parallel after DRX. The compressive strain hardening rate of the fine-grained alloy increases noticeably faster than that of the coarse-grained alloy owing to twin-induced strain hardening, but the compressive strain hardening curves also tend to become parallel after twinning is exhausted.
Electron Beam Welding of Gear Wheels by Splitted Beam
NASA Astrophysics Data System (ADS)
Dřímal, Daniel
2014-06-01
This contribution deals with the issue of electron beam welding of high-accuracy gear wheels composed of a spur gearing and a fluted shaft joined with a face weld for the automotive industry. Both parts, made of high-strength low-alloy steel, are welded in the condition after final machining and heat treatment, performed by case hardening, whereas it is required that the run-out in the critical point of the weldment after welding, i.e. after the final operation, be 0.04 mm max. In the case of the common welding procedure, cracks were formed in the weld, initiated by spiking in the weld root. Crack formation was prevented by the use of an interlocking joint with a rounded recess and suitable welding parameters, eliminating crack initiation by spiking in the weld root. Minimisation of the welding distortions was achieved by the application of tack welding with simultaneous splitting of one beam into two parts in the opposite sections of the circumferential face weld, attained on the principle of a new system of controlled deflection with digital scanning of the beam. This welding procedure assured that the weldment temperature after welding would not be higher than 400 °C. Thus, this procedure allowed achieving final run-outs in the critical point of the gearwheels within the maximum of 0.04 mm, which is acceptable for the given application. Accurate optical measurements did not reveal any changes in the teeth dimensions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hashii, Haruko, E-mail: haruko@pmrc.tsukuba.ac.jp; Hashimoto, Takayuki; Okawa, Ayako
2013-03-01
Purpose: Radiation therapy for cancer may be required for patients with implantable cardiac devices. However, the influence of secondary neutrons or scattered irradiation from high-energy photons (≥10 MV) on implantable cardioverter-defibrillators (ICDs) is unclear. This study was performed to examine this issue in 2 ICD models. Methods and Materials: ICDs were positioned around a water phantom under conditions simulating clinical radiation therapy. The ICDs were not irradiated directly. A control ICD was positioned 140 cm from the irradiation isocenter. Fractional irradiation was performed with 18-MV and 10-MV photon beams to give cumulative in-field doses of 600 Gy and 1600 Gy, respectively. Errors were checked after each fraction. Soft errors were defined as severe (change to safety back-up mode), moderate (memory interference, no changes in device parameters), and minor (slight memory change, undetectable by computer). Results: Hard errors were not observed. For the older ICD model, the incidences of severe, moderate, and minor soft errors at 18 MV were 0.75, 0.5, and 0.83/50 Gy at the isocenter. The corresponding data for 10 MV were 0.094, 0.063, and 0/50 Gy. For the newer ICD model at 18 MV, these data were 0.083, 2.3, and 5.8/50 Gy. Moderate and minor errors occurred at 18 MV in control ICDs placed 140 cm from the isocenter. The error incidences were 0, 1, and 0/600 Gy at the isocenter for the newer model, and 0, 1, and 6/600 Gy for the older model. At 10 MV, no errors occurred in control ICDs. Conclusions: ICD errors occurred more frequently at 18 MV irradiation, which suggests that the errors were mainly caused by secondary neutrons. Soft errors of ICDs were observed with high energy photon beams, but most were not critical in the newer model. These errors may occur even when the device is far from the irradiation field.
2005-12-01
hardening exponent and Cimp is the impression strain-rate hardening coefficient. The strain-rate hardening exponent m is a parameter that is related to the creep
Combined electron beam imaging and ab initio modeling of T1 precipitates in Al-Li-Cu alloys
NASA Astrophysics Data System (ADS)
Dwyer, C.; Weyland, M.; Chang, L. Y.; Muddle, B. C.
2011-05-01
Among the many considerable challenges faced in developing a rational basis for advanced alloy design, establishing accurate atomistic models is one of the most fundamental. Here we demonstrate how advanced imaging techniques in a double-aberration-corrected transmission electron microscope, combined with ab initio modeling, have been used to determine the atomic structure of embedded 1 nm thick T1 precipitates in precipitation-hardened Al-Li-Cu aerospace alloys. The results provide an accurate determination of the controversial T1 structure, and demonstrate how next-generation techniques permit the characterization of embedded nanostructures in alloys and other nanostructured materials.
Self-ion irradiation effects on mechanical properties of nanocrystalline zirconium films
Wang, Baoming; Haque, M. A.; Tomar, Vikas; ...
2017-07-13
Zirconium thin films were irradiated at room temperature with an 800 keV Zr+ beam using a 6 MV HVE Tandem accelerator to a damage level of 1.36 displacements per atom. Freestanding tensile specimens, 100 nm thick and with 10 nm grain size, were tested in situ inside a transmission electron microscope. Significant grain growth (>300%), texture evolution, and displacement damage defects were observed. Here, stress-strain profiles were mostly linear elastic below 20 nm grain size, but above this limit the samples demonstrated yielding and strain hardening. Experimental results support the hypothesis that grain boundaries in nanocrystalline metals act as very effective defect sinks.
Challenges and Plans for Injection and Beam Dump
NASA Astrophysics Data System (ADS)
Barnes, M.; Goddard, B.; Mertens, V.; Uythoven, J.
The injection and beam dumping systems of the LHC will need to be upgraded to comply with the requirements of operation with the HL-LHC beams. The elements of the injection system concerned are the fixed and movable absorbers which protect the LHC in case of an injection kicker error and the injection kickers themselves. The beam dumping system elements under study are the absorbers which protect the aperture in case of an asynchronous beam dump and the beam absorber block. The operational limits of these elements and the new developments in the context of the HL-LHC project are described.
NASA Astrophysics Data System (ADS)
Cho, Byungchul; Poulsen, Per; Ruan, Dan; Sawant, Amit; Keall, Paul J.
2012-11-01
The goal of this work was to experimentally quantify the geometric accuracy of a novel real-time 3D target localization method using sequential kV imaging combined with respiratory monitoring for clinically realistic arc and static field treatment delivery and target motion conditions. A general method for real-time target localization using kV imaging and respiratory monitoring was developed. Each dimension of internal target motion T(x, y, z; t) was estimated from the external respiratory signal R(t) through the correlation between R(ti) and the projected marker positions p(xp, yp; ti) on kV images by a state-augmented linear model: T(x, y, z; t) = aR(t) + bR(t - τ) + c. The model parameters, a, b, c, were determined by minimizing the squared fitting error ∑‖p(xp, yp; ti) - P(θi) · (aR(ti) + bR(ti - τ) + c)‖2 with the projection operator P(θi). The model parameters were first initialized based on acquired kV arc images prior to MV beam delivery. This method was implemented on a trilogy linear accelerator consisting of an OBI x-ray imager (operating at 1 Hz) and real-time position monitoring (RPM) system (30 Hz). Arc and static field plans were delivered to a moving phantom programmed with measured lung tumour motion from ten patients. During delivery, the localization method determined the target position and the beam was adjusted in real time via dynamic multileaf collimator (DMLC) adaptation. The beam-target alignment error was quantified by segmenting the beam aperture and a phantom-embedded fiducial marker on MV images and analysing their relative position. With the localization method, the root-mean-squared errors of the ten lung tumour traces ranged from 0.7-1.3 mm and 0.8-1.4 mm during the single arc and five-field static beam delivery, respectively. Without the localization method, these errors ranged from 3.1-7.3 mm. In summary, a general method for real-time target localization using kV imaging and respiratory monitoring has been experimentally investigated for arc and static field delivery. The average beam-target error was 1 mm.
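The state-augmented linear model above is linear in the parameters a, b, and c (each a 3-vector), so they can be estimated by ordinary least squares from the projected marker positions and the respiratory signal. The sketch below illustrates this on synthetic data; the 2×3 projection operator, the geometry, and the motion trace are placeholders, not the clinical implementation.

```python
import numpy as np

def fit_correlation_model(proj_pts, R, R_delay, P_list):
    """Least-squares fit of T(t) = a*R(t) + b*R(t - tau) + c from 2D marker
    projections p_i and 2x3 projection operators P(theta_i).
    Unknowns: a, b, c in R^3 (9 parameters); each kV image gives 2 equations."""
    rows, rhs = [], []
    for p, r, rd, P in zip(proj_pts, R, R_delay, P_list):
        rows.append(np.hstack([r * P, rd * P, P]))   # 2 x 9 block
        rhs.append(p)
    A = np.vstack(rows)
    y = np.concatenate(rhs)
    x, *_ = np.linalg.lstsq(A, y, rcond=None)
    return x[0:3], x[3:6], x[6:9]                     # a, b, c

def projector(theta):
    """Toy 2x3 projection operator for gantry angle theta (placeholder geometry)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s, 0.0], [0.0, 0.0, 1.0]])

# Synthetic data: internal motion driven by an external respiratory signal R(t).
rng = np.random.default_rng(1)
t = np.linspace(0, 30, 60)
R = np.sin(2 * np.pi * t / 4.0)
R_delay = np.sin(2 * np.pi * (t - 0.3) / 4.0)
a_true = np.array([2.0, 0.5, 4.0])
b_true = np.array([0.5, 0.1, 1.0])
c_true = np.array([0.0, 1.0, -2.0])
T = np.outer(R, a_true) + np.outer(R_delay, b_true) + c_true
thetas = np.linspace(0, 2 * np.pi, len(t), endpoint=False)
P_list = [projector(th) for th in thetas]
proj_pts = [P @ Ti + 0.2 * rng.standard_normal(2) for P, Ti in zip(P_list, T)]

a, b, c = fit_correlation_model(proj_pts, R, R_delay, P_list)
print("fitted a:", np.round(a, 2), " b:", np.round(b, 2), " c:", np.round(c, 2))
```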
Dynamic Target Definition: a novel approach for PTV definition in ion beam therapy.
Cabal, Gonzalo A; Jäkel, Oliver
2013-05-01
To present a beam-arrangement-specific approach for PTV definition in ion beam therapy. By means of a Monte Carlo error propagation analysis, a criterion is formulated to assess whether a voxel is safely treated. Based on this, a non-isotropic expansion rule is proposed aiming to minimize the impact of uncertainties on the dose delivered. The method is exemplified in two cases: a head and neck case and a prostate case. In both cases the modality used is proton beam irradiation and the sources of uncertainty taken into account are positioning (set-up) errors and range uncertainties. It is shown how different beam arrangements have an impact on plan robustness, leading to different target expansions necessary to assure a predefined level of plan robustness. The relevance of appropriate beam angle arrangements as a way to minimize uncertainties is demonstrated. A novel method for PTV definition in ion beam therapy is presented. The method shows promising results, improving the probability of correct CTV dose coverage while reducing the size of the PTV. In a clinical scenario this translates into an enhanced tumor control probability while reducing the volume of healthy tissue being irradiated. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Toramatsu, Chie; Inaniwa, Taku
2016-12-01
In charged particle therapy with pencil beam scanning (PBS), localization of the dose in the Bragg peak makes dose distributions sensitive to lateral tissue heterogeneities. The sensitivity of a PBS plan to lateral tissue heterogeneities can be reduced by selecting appropriate beam angles. The purpose of this study is to develop a fast and accurate method of beam angle selection for PBS. The lateral tissue heterogeneity surrounding the path of the pencil beams at a given angle was quantified with the heterogeneity number, representing the variation of the Bragg peak depth across the cross section of the beams, using the stopping power ratio of body tissues with respect to water. To shorten the computation time, one-dimensional dose optimization was conducted along the central axis of the pencil beams as they were directed by the scanning magnets. The heterogeneity numbers were derived for all possible beam angles for treatment. The angles leading to the minimum mean heterogeneity number were selected as the optimal beam angles. Three clinical cases of head and neck cancer were used to evaluate the developed method. Dose distributions and their robustness to setup and range errors were evaluated for all tested angles, and their relation to the heterogeneity numbers was investigated. The mean heterogeneity number varied from 1.2 mm to 10.6 mm in the evaluated cases. By selecting a field with a low mean heterogeneity number, target dose coverage and robustness against setup and range errors were improved. The developed method is simple, fast, accurate and applicable for beam angle selection in charged particle therapy with PBS.
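A schematic of how such a heterogeneity metric could be computed and used for angle selection is sketched below: for each candidate angle, rays across the beam cross-section are traced back from the target through a relative-stopping-power map, and the spread of the accumulated water-equivalent depth serves as the heterogeneity score. The ray marching, phantom, and angle grid are simplified placeholders, not the authors' implementation.

```python
import numpy as np

def wepl_to_surface(rsp_map, start_rc, direction, step=1.0):
    """Accumulate water-equivalent path length from a point inside the phantom
    back along 'direction' until the ray leaves the grid (schematic ray marching)."""
    pos = np.array(start_rc, dtype=float)
    wepl = 0.0
    while (0 <= int(round(pos[0])) < rsp_map.shape[0]
           and 0 <= int(round(pos[1])) < rsp_map.shape[1]):
        wepl += rsp_map[int(round(pos[0])), int(round(pos[1]))] * step
        pos += step * np.asarray(direction, dtype=float)
    return wepl

def mean_heterogeneity_number(rsp_map, target_mask, angle, n_rays=21, half_width=10.0):
    """Spread (std) of surface-to-target WEPL across the lateral beam cross-section."""
    d = np.array([np.cos(angle), np.sin(angle)])      # beam direction, source -> target
    lateral = np.array([-d[1], d[0]])                 # direction across the beam
    center = np.argwhere(target_mask).mean(axis=0)
    offsets = np.linspace(-half_width, half_width, n_rays)
    wepls = [wepl_to_surface(rsp_map, center + o * lateral, -d) for o in offsets]
    return float(np.std(wepls))

# Schematic phantom: water (RSP = 1) with a low-density slab beside the target.
rsp = np.ones((100, 100))
rsp[30:70, 10:45] = 0.3
target = np.zeros((100, 100), dtype=bool)
target[45:55, 45:55] = True

angles = np.deg2rad(np.arange(0, 360, 30))
scores = {int(round(np.rad2deg(a))): mean_heterogeneity_number(rsp, target, a) for a in angles}
best_angle = min(scores, key=scores.get)
print("heterogeneity score per angle (deg):", {k: round(v, 1) for k, v in scores.items()})
print("selected beam angle:", best_angle, "deg")
```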
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, L.; Li, Y.
2015-02-03
This paper analyzes the longitudinal space charge impedances of a round uniform beam inside rectangular and parallel-plate chambers using the image charge method. The analysis is valid for arbitrary wavelengths, and the calculations converge rapidly. The research shows that only a few of the image beams are needed to obtain a relative error of less than 0.1%. The beam offset effect is also discussed in the analysis.
Coherent beam combiner for a high power laser
Dane, C. Brent; Hackel, Lloyd A.
2002-01-01
A phase conjugate laser mirror employing Brillouin-enhanced four wave mixing allows multiple independent laser apertures to be phase locked producing an array of diffraction-limited beams with no piston phase errors. The beam combiner has application in laser and optical systems requiring high average power, high pulse energy, and low beam divergence. A broad range of applications exist in laser systems for industrial processing, especially in the field of metal surface treatment and laser shot peening.
NASA Astrophysics Data System (ADS)
Diederich, M.; Ryzhkov, A.; Simmer, C.; Mühlbauer, K.
2011-12-01
The amplitude of a radar wave reflected by meteorological targets can be misjudged due to several factors. At X-band wavelengths, attenuation of the radar beam by hydrometeors reduces the signal strength enough to be a significant source of error for quantitative precipitation estimation. Depending on the surrounding orography, the radar beam may be partially blocked when scanning at low elevation angles, and knowledge of the exact amount of signal loss through beam blockage becomes necessary. The phase shift between the radar signals at horizontal and vertical polarizations is affected by the hydrometeors that the beam travels through, but remains unaffected by variations in signal strength. This has allowed for several ways of compensating for the attenuation of the signal, and for consistency checks between these variables. In this study, we make use of several weather radars and a gauge network measuring in the same area to examine the effectiveness of several methods of attenuation and beam blockage correction. The methods include consistency checks of radar reflectivity and specific differential phase, calculation of beam blockage using a topography map, estimation of attenuation using the differential propagation phase, and the ZPHI method proposed by Testud et al. in 2000. Results show the high effectiveness of differential phase in estimating attenuation, and the potential of the ZPHI method to compensate for attenuation, beam blockage, and calibration errors.
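As a minimal sketch of one of the differential-phase techniques mentioned above, the snippet below applies the widely used linear ΦDP-based correction, in which path-integrated attenuation is approximated as a coefficient α times the accumulated differential propagation phase; the α value and the toy ray are assumptions for illustration, not results from this study.

```python
import numpy as np

def correct_attenuation_with_phidp(Z_measured_dbz, phidp_deg, alpha_db_per_deg=0.28):
    """Add back the path-integrated attenuation estimated from differential phase:
    A_cum(r) ~ alpha * (PhiDP(r) - PhiDP(0)).
    alpha is an assumed X-band coefficient (dB per degree); tune per radar/climate."""
    pia_db = alpha_db_per_deg * (phidp_deg - phidp_deg[0])
    return Z_measured_dbz + pia_db

# Toy ray: constant true reflectivity, attenuated along the path while PhiDP accumulates.
r = np.arange(0, 50.0, 0.5)                      # range gates [km]
phidp = np.concatenate([np.zeros(20), np.linspace(0.0, 40.0, len(r) - 20)])
Z_true = np.full_like(r, 35.0)
Z_meas = Z_true - 0.28 * phidp                   # simulated attenuation consistent with alpha
Z_corr = correct_attenuation_with_phidp(Z_meas, phidp)
print("max residual error after correction [dB]:", float(np.max(np.abs(Z_corr - Z_true))))
```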
Optimal error functional for parameter identification in anisotropic finite strain elasto-plasticity
NASA Astrophysics Data System (ADS)
Shutov, A. V.; Kaygorodtseva, A. A.; Dranishnikov, N. S.
2017-10-01
A problem of parameter identification for a model of finite strain elasto-plasticity is discussed. The utilized phenomenological material model accounts for nonlinear isotropic and kinematic hardening; the model kinematics is described by a nested multiplicative split of the deformation gradient. A hierarchy of optimization problems is considered. First, following the standard procedure, the material parameters are identified through minimization of a certain least-squares error functional. Next, the focus is placed on finding optimal weighting coefficients which enter the error functional. Toward that end, a stochastic noise with systematic and non-systematic components is introduced into the available measurement results; a superordinate optimization problem seeks to minimize the sensitivity of the resulting material parameters to the introduced noise. The advantage of this approach is that no additional experiments are required; it also provides an insight into the robustness of the identification procedure. As an example, experimental data for the steel 42CrMo4 are considered and a set of weighting coefficients is found, which is optimal in a certain class.
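A schematic of the first stage, the weighted least-squares error functional, is sketched below. The full finite-strain elasto-plastic model is replaced here by a simple Voce-type hardening curve as a stand-in, and the weighting coefficients, data, and starting values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def model_response(params, strain):
    """Placeholder for the material model: a saturating (Voce-type) hardening curve
    standing in for the full finite-strain elasto-plastic simulation."""
    sigma0, q, b = params
    return sigma0 + q * (1.0 - np.exp(-b * strain))

def weighted_residuals(params, experiments, weights):
    """Residuals of the error functional sum_k w_k * ||model_k - data_k||^2."""
    res = []
    for (strain, stress_exp), w in zip(experiments, weights):
        res.append(np.sqrt(w) * (model_response(params, strain) - stress_exp))
    return np.concatenate(res)

# Synthetic 'experiments' generated from known parameters plus noise.
rng = np.random.default_rng(2)
true = np.array([350.0, 200.0, 15.0])
experiments = []
for n in (30, 60):
    eps = np.linspace(0.0, 0.2, n)
    experiments.append((eps, model_response(true, eps) + 5.0 * rng.standard_normal(n)))

weights = [1.0, 0.5]                                 # weighting coefficients of the functional
fit = least_squares(weighted_residuals, x0=[200.0, 100.0, 5.0],
                    args=(experiments, weights))
print("identified parameters:", np.round(fit.x, 1))
```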
Yousefi, Masoud; Golmohammady, Shole; Mashal, Ahmad; Kashani, Fatemeh Dabbagh
2015-11-01
In this paper, on the basis of the extended Huygens-Fresnel principle, a semianalytical expression describing the on-axis scintillation index of a partially coherent flat-topped (PCFT) laser beam in weak to moderate oceanic turbulence is derived; consequently, by using the log-normal intensity probability density function, the bit error rate (BER) is evaluated. The effects of source factors (such as wavelength, order of flatness, and beam width) and turbulent ocean parameters (such as Kolmogorov microscale, relative strengths of temperature and salinity fluctuations, rate of dissipation of the mean squared temperature, and rate of dissipation of the turbulent kinetic energy per unit mass of fluid) on the propagation behavior of the scintillation index, and hence on the BER, are studied in detail. Results indicate that, in comparison with a Gaussian beam, a PCFT laser beam with a higher order of flatness is found to have lower scintillations. In addition, the scintillation index and BER are most affected when salinity fluctuations in the ocean dominate temperature fluctuations.
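For the BER evaluation step, one commonly used form averages the conditional error probability over a unit-mean log-normal intensity distribution parameterized by the scintillation index. The sketch below implements that generic expression numerically; the detection model, SNR value, and integration limits are assumptions and may differ from the paper's exact formulation.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

def mean_ber_lognormal(scint_index, snr):
    """Average BER for intensity-modulated detection over log-normal fading:
    <BER> = 1/2 * int p(I) * erfc(snr * I / (2*sqrt(2))) dI, with unit-mean intensity.
    This is one commonly used form; the paper's exact detection model may differ."""
    sigma2 = np.log(1.0 + scint_index)               # log-intensity variance
    sigma = np.sqrt(sigma2)

    def integrand(I):
        pdf = (np.exp(-(np.log(I) + sigma2 / 2.0) ** 2 / (2.0 * sigma2))
               / (I * sigma * np.sqrt(2.0 * np.pi)))
        return 0.5 * pdf * erfc(snr * I / (2.0 * np.sqrt(2.0)))

    value, _ = quad(integrand, 1e-9, 50.0, points=[1.0])
    return value

for si in (0.05, 0.2, 0.5):                          # lower scintillation -> lower BER
    print(f"scintillation index {si:4.2f} -> BER ~ {mean_ber_lognormal(si, snr=10.0):.2e}")
```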
Ion beam figuring of high-slope surfaces based on figure error compensation algorithm.
Dai, Yifan; Liao, Wenlin; Zhou, Lin; Chen, Shanyong; Xie, Xuhui
2010-12-01
In a deterministic figuring process, it is critical to guarantee high stability of the removal function as well as the accuracy of the dwell time solution, which directly influence the convergence of the figuring process. Hence, when figuring steep optics, the ion beam is required to maintain perpendicular incidence, and a five-axis figuring machine is typically utilized. In this paper, however, a method for high-precision figuring of high-slope optics is proposed with a linear three-axis machine, allowing for inclined beam incidence. First, the variation of the removal function and the normal removal rate with the incidence angle is analyzed according to the removal characteristics of ion beam figuring (IBF). Then, we propose to reduce the influence of the varying removal function and projection distortion on the dwell time solution by means of figure error compensation. Consequently, the incident ion beam is allowed to remain parallel to the optical axis. Simulations and experiments are given to verify the removal analysis. Finally, a figuring experiment is conducted on a linear three-axis IBF machine, which proves the validity of the method for high-slope surfaces. It takes two iterations and about 9 min to successfully figure a fused silica sample, whose aperture is 21.3 mm and radius of curvature is 16 mm. The root-mean-square figure error of the convex surface is reduced from 13.13 to 5.86 nm.
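The dwell-time solution referred to above rests on the standard IBF model in which the predicted removal is the convolution of the removal function with the dwell-time map. The sketch below illustrates that model in one dimension with a Gaussian removal function and a damped iterative solution; it is a generic illustration, not the compensation algorithm of this paper.

```python
import numpy as np

def gaussian_removal_function(x, peak_rate=1.0, sigma=2.0):
    """1-D section of the beam removal function [depth removed per unit dwell time]."""
    return peak_rate * np.exp(-x ** 2 / (2.0 * sigma ** 2))

def solve_dwell_time(error_profile, removal_kernel, iterations=200, relaxation=0.2):
    """Damped fixed-point iteration for E ~ R (conv) T with the constraint T >= 0."""
    T = np.zeros_like(error_profile)
    for _ in range(iterations):
        predicted = np.convolve(T, removal_kernel, mode="same")
        T = np.maximum(T + relaxation * (error_profile - predicted) / removal_kernel.max(), 0.0)
    return T

x = np.arange(-20, 21, 1.0)                      # mm grid along a scan line
kernel = gaussian_removal_function(x)            # removal per unit dwell time
surface_error = 10.0 * np.exp(-((x - 3.0) ** 2) / (2.0 * 5.0 ** 2))   # nm to remove

dwell = solve_dwell_time(surface_error, kernel)
residual = surface_error - np.convolve(dwell, kernel, mode="same")
print("peak dwell time (a.u.):", round(float(dwell.max()), 2))
print("residual RMS after deconvolution (nm):", round(float(np.sqrt(np.mean(residual ** 2))), 3))
```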
Effect of preheating on fatigue resistance of gears in spin induction coil hardening process
NASA Astrophysics Data System (ADS)
Kumar, Pawan; Aggarwal, M. L.
2018-02-01
Spin hardening inductors are typically used for fine-toothed gear geometries. With the proper selection of several design parameters, only the gear teeth can be case surface hardened without affecting the other surfaces of the gear. Preheating may be done to reach an adapted high austenitizing temperature in the root circle and to avoid overheating of the tooth tip during final heating. The effect of gear preheating on the control of compressive residual stresses and case hardening is discussed experimentally in this paper. The present work analyses the single-frequency mode, the preheat hardening treatment and the compressive residual stress field for the hardening of a spur gear using spin hardening inductors.
Optimizing X-ray mirror thermal performance using matched profile cooling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Lin; Cocco, Daniele; Kelez, Nicholas
2015-08-07
To cover a large photon energy range, the length of an X-ray mirror is often longer than the beam footprint length for much of the applicable energy range. To limit thermal deformation of such a water-cooled X-ray mirror, a technique using side cooling with a cooled length shorter than the beam footprint length is proposed. This cooling length can be optimized by using finite-element analysis. For the Kirkpatrick–Baez (KB) mirrors at LCLS-II, the thermal deformation can be reduced by a factor of up to 30, compared with full-length cooling. Furthermore, a second, alternative technique, based on a similar principle, is presented: using a long, single-length cooling block on each side of the mirror and adding electric heaters between the cooling blocks and the mirror substrate. The electric heaters consist of a number of cells, located along the mirror length. The total effective length of the electric heater can then be adjusted by choosing which cells to energize, using electric power supplies. The residual height error can be minimized to 0.02 nm RMS by using optimal heater parameters (length and power density). Compared with a case without heaters, this residual height error is reduced by a factor of up to 45. The residual height error in the LCLS-II KB mirrors, due to free-electron laser beam heat load, can be reduced by a factor of ~11 below the requirement. The proposed techniques are also effective in reducing thermal slope errors and are, therefore, applicable to white beam mirrors in synchrotron radiation beamlines.
Tensile and compressive behavior of Borsic/aluminum
NASA Technical Reports Server (NTRS)
Herakovich, C. T.; Davis, J. G., Jr.; Viswanathan, C. N.
1977-01-01
The results of an experimental investigation of the mechanical behavior of Borsic/aluminum are presented. Composite laminates were tested in tension and compression for monotonically increasing load and also for variable loading cycles in which the maximum load was increased in each successive cycle. It is shown that significant strain-hardening, and corresponding increase in yield stress, is exhibited by the metal matrix laminates. For matrix dominated laminates, the current yield stress is essentially identical to the previous maximum stress, and unloading is essentially linear with large permanent strains after unloading. For laminates with fiber dominated behavior, the yield stress increases with increase in the previous maximum stress, but the increase in yield stress does not keep pace with the previous maximum stress. These fiber dominated laminates exhibit smaller nonlinear strains, reversed nonlinear behavior during unloading, and smaller permanent strains after unloading. Compression results from sandwich beams and flat coupons are shown to differ considerably. Results from beam specimens tend to exhibit higher values for modulus, yield stress, and strength.
NASA Astrophysics Data System (ADS)
Liu, Jing; Gao, Xiao-Long; Zhang, Lin-Jie; Zhang, Jian-Xun
2015-01-01
The aim of this investigation was to evaluate the effect of microstructure heterogeneity on the tensile and low cycle fatigue properties of electron beam welded (EBW) Ti6Al4V sheets. To achieve this goal, the tensile and low cycle fatigue properties of the EBW joints and base metal (BM) specimens are compared. During the tensile testing, digital image correlation technology was used to measure the plastic strain field evolution within the specimens. The experimental results showed that the tensile ductility and low cycle fatigue strength of EBW joints are lower than those of BM specimens, mainly because of the effect of microstructure heterogeneity of the welded joint. Moreover, the EBW joints exhibit cyclic hardening behavior during the low cycle fatigue process, while BM specimens exhibit cyclic softening behavior. Compared with the BM specimens with uniform microstructure, the heterogeneity of microstructure in the EBW joint is found to decrease the mechanical properties of the welded joint.
Status of a Power Processor for the Prometheus-1 Electric Propulsion System
NASA Technical Reports Server (NTRS)
Pinero, Luis R.; Hill, Gerald M.; Aulisio, Michael; Gerber, Scott; Griebeler, Elmer; Hewitt, Frank; Scina, Joseph
2006-01-01
NASA is developing technologies for nuclear electric propulsion for proposed deep space missions in support of the Exploration initiative under Project Prometheus. Electrical power produced by the combination of a fission-based power source and a Brayton power conversion and distribution system is used by a high specific impulse ion propulsion system to propel the spacecraft. The ion propulsion system includes the thruster, power processor and propellant feed system. A power processor technology development effort was initiated under Project Prometheus to develop high performance and lightweight power-processing technologies suitable for the application. This effort faces multiple challenges, including developing radiation-hardened power modules and converters with very high power capability and efficiency to minimize the impact on the power conversion and distribution system as well as the heat rejection system. This paper documents the design and test results of the first version of the beam supply, the design of a second version of the beam supply and the design and test results of the ancillary supplies.
Neldam, Camilla Albeck; Pinholt, Else Marie
2014-09-01
Today, X-ray micro-computed tomography (μCT) imaging is used to investigate bone microarchitecture. μCT imaging is obtained with polychromatic X-ray beams, resulting in images with beam hardening artifacts, resolution levels of about 10 μm, geometrical blurring, and lack of contrast. When μCT is coupled to synchrotron sources (SRμCT), a spatial resolution of up to one tenth of a μm may be achieved. A review of the literature concerning SRμCT was performed to investigate its usability and its strength in visualizing fine bone structures, vessels, and the microarchitecture of bone. Although mainly limited to in vitro examinations, SRμCT is considered a gold standard for imaging trabecular bone microarchitecture since it is possible to visualize, in a 3D manner, fine structural elements within mineralized tissue such as osteon boundaries, rod and plate structures, cement lines, and differences in mineralization. Copyright © 2013 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Surface modification of steels and magnesium alloy by high current pulsed electron beam
NASA Astrophysics Data System (ADS)
Hao, Shengzhi; Gao, Bo; Wu, Aimin; Zou, Jianxin; Qin, Ying; Dong, Chuang; An, Jian; Guan, Qingfeng
2005-11-01
High current pulsed electron beam (HCPEB) treatment is now developing into a useful tool for surface modification of materials. When the concentrated electron flux transfers its energy into a very thin surface layer within a short pulse time, superfast processes such as heating, melting, evaporation and subsequent solidification, together with the dynamic stresses induced, may impart improved physico-chemical and mechanical properties to the surface layer. This paper presents our research work on surface modification of steels and a magnesium alloy with HCPEB at working parameters of electron energy 27 keV, pulse duration ∼1 μs and energy density ∼2.2 J/cm2 per pulse. Investigations performed on carbon steel T8, mold steel D2 and magnesium alloy AZ91HP have shown that the most pronounced changes of phase-structure state and properties occur in the near-surface layers, while the thickness of the modified layer with improved microhardness (several hundreds of micrometers) is significantly greater than that of the heat-affected zone. The formation mechanisms of surface cratering and the non-stationary hardening effect in depth are discussed based on the elucidation of the non-equilibrium temperature field and the different kinds of stresses formed during pulsed electron beam melting treatment. After the pulsed electron beam treatments, samples show significant improvements in wear and corrosion resistance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marous, L; Muryn, J; Liptak, C
2016-06-15
Purpose: Monte Carlo simulation is a frequently used technique for assessing patient dose in CT. The accuracy of a Monte Carlo program is often validated using the standard CT dose index (CTDI) phantoms by comparing simulated and measured CTDI100. To achieve good agreement, many input parameters in the simulation (e.g., energy spectrum and effective beam width) need to be determined. However, not all the parameters have equal importance. Our aim was to assess the relative importance of the various factors that influence the accuracy of simulated CTDI100. Methods: A Monte Carlo program previously validated for a clinical CT system was used to simulate CTDI100. For the standard CTDI phantoms (32 and 16 cm in diameter), CTDI100 values from central and four peripheral locations at 70 and 120 kVp were first simulated using a set of reference input parameter values (treated as the truth). To emulate the situation in which the input parameter values used by the researcher may deviate from the truth, additional simulations were performed in which intentional errors were introduced into the input parameters, the effects of which on simulated CTDI100 were analyzed. Results: At 38.4-mm collimation, errors in effective beam width up to 5.0 mm showed negligible effects on simulated CTDI100 (<1.0%). Likewise, errors in acrylic density of up to 0.01 g/cm³ resulted in small CTDI100 errors (<2.5%). In contrast, errors in spectral HVL produced more significant effects: slight deviations (±0.2 mm Al) produced errors up to 4.4%, whereas more extreme deviations (±1.4 mm Al) produced errors as high as 25.9%. Lastly, ignoring the CT table introduced errors up to 13.9%. Conclusion: Monte Carlo simulated CTDI100 is insensitive to errors in effective beam width and acrylic density. However, it is sensitive to errors in spectral HVL. To obtain accurate results, the CT table should not be ignored. This work was supported by a Faculty Research and Development Award from Cleveland State University.
Barnes, M P; Ebert, M A
2008-03-01
The concept of electron pencil-beam dose distributions is central to pencil-beam algorithms used in electron beam radiotherapy treatment planning. The Hogstrom algorithm, which is a common algorithm for electron treatment planning, models large electron field dose distributions by the superposition of a series of pencil-beam dose distributions. This means that the accurate characterisation of an electron pencil beam is essential for the accuracy of the dose algorithm. The aim of this study was to evaluate a measurement-based approach for obtaining electron pencil-beam dose distributions. The primary incentive for the study was the accurate calculation of dose distributions for narrow fields, as traditional electron algorithms are generally inaccurate for such geometries. Kodak X-Omat radiographic film was used in a solid water phantom to measure the dose distribution of circular 12 MeV beams from a Varian 21EX linear accelerator. Measurements were made for beams of diameter 1.5, 2, 4, 8, 16 and 32 mm. A blocked-field technique was used to subtract photon contamination in the beam. The "error function" derived from Fermi-Eyges Multiple Coulomb Scattering (MCS) theory for corresponding square fields was used to fit the resulting dose distributions so that extrapolation down to a pencil-beam distribution could be made. The Monte Carlo codes BEAM and EGSnrc were used to simulate the experimental arrangement. The 8 mm beam dose distribution was also measured with TLD-100 microcubes. Agreement between film, TLD and Monte Carlo simulation results was found to be consistent with the spatial resolution used. The study has shown that it is possible to extrapolate narrow electron beam dose distributions down to a pencil-beam dose distribution using the error function. However, due to experimental uncertainties and measurement difficulties, Monte Carlo is recommended as the method of choice for characterising electron pencil-beam dose distributions.
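A sketch of the error-function fit is given below: in Gaussian pencil-beam (Fermi-Eyges) theory, the lateral profile of a square field of half-width a is proportional to erf((a − x)/(√2 σ)) + erf((a + x)/(√2 σ)), and fitting σ to a measured profile provides the pencil-beam spread. The profile data here are synthetic stand-ins for the film measurements.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def square_field_profile(x, sigma, a, d0):
    """Lateral dose profile of a square field of half-width a built from Gaussian
    pencil beams of spread sigma (Fermi-Eyges / error-function form)."""
    return 0.5 * d0 * (erf((a - x) / (np.sqrt(2.0) * sigma))
                       + erf((a + x) / (np.sqrt(2.0) * sigma)))

# Synthetic "film" profile for a field of 8 mm half-width with measurement noise.
rng = np.random.default_rng(3)
x = np.linspace(-30.0, 30.0, 121)                     # mm
true_sigma, a_true = 3.2, 8.0
dose = square_field_profile(x, true_sigma, a_true, 1.0) + 0.02 * rng.standard_normal(x.size)

popt, _ = curve_fit(square_field_profile, x, dose, p0=[2.0, 8.0, 1.0])
sigma_fit = popt[0]
print(f"fitted pencil-beam sigma: {sigma_fit:.2f} mm (true {true_sigma} mm)")
```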
Deterministic ion beam material adding technology for high-precision optical surfaces.
Liao, Wenlin; Dai, Yifan; Xie, Xuhui; Zhou, Lin
2013-02-20
Although ion beam figuring (IBF) provides a highly deterministic method for the precision figuring of optical components, several problems still need to be addressed, such as the limited correcting capability for mid-to-high spatial frequency surface errors and the low machining efficiency for pit defects on surfaces. We propose a figuring method named deterministic ion beam material adding (IBA) technology to solve those problems in IBF. The current deterministic optical figuring mechanism, which is dedicated to removing local protuberances on optical surfaces, is enriched and developed by the IBA technology. Compared with IBF, this method can achieve uniform convergence of surface errors, where the particle transferring effect generated in the IBA process can effectively correct the mid-to-high spatial frequency errors. In addition, IBA can rapidly correct pit defects on the surface and greatly improve the machining efficiency of the figuring process. Verification experiments are carried out on our experimental installation to validate the feasibility of the IBA method. First, a fused silica sample with a rectangular pit defect is figured by using IBA. Through two iterations within only 47.5 min, this highly steep pit is effectively corrected, and the surface error is improved from the original 24.69 nm root mean square (RMS) to the final 3.68 nm RMS. Then another experiment is carried out to demonstrate the correcting capability of IBA for mid-to-high spatial frequency surface errors, and the final results indicate that the surface accuracy and surface quality can be simultaneously improved.
Maritime Adaptive Optics Beam Control
2010-09-01
An adaptive optics system comprises a sensor to determine how the beam is distorted, a control computer to calculate the correction to be applied, and a corrective element, usually a deformable mirror. An overview of the system modification made during this research is provided here, using additional mirrors and reflecting the beam to and from an ...
Surface hardening of titanium alloys with melting depth controlled by heat sink
Oden, Laurance L.; Turner, Paul C.
1995-01-01
A process for forming a hard surface coating on titanium alloys includes providing a piece of material containing titanium having at least a portion of one surface to be hardened. The piece having a portion of a surface to be hardened is contacted on the backside by a suitable heat sink such that the melting depth of said surface to be hardened may be controlled. A hardening material is then deposited as a slurry. Alternate methods of deposition include flame, arc, or plasma spraying, electrodeposition, vapor deposition, or any other deposition method known by those skilled in the art. The surface to be hardened is then selectively melted to the desired depth, dependent on the desired coating thickness, such that a molten pool is formed of the piece surface and the deposited hardening material. Upon cooling a hardened surface is formed.
Automatic Alignment of Displacement-Measuring Interferometer
NASA Technical Reports Server (NTRS)
Halverson, Peter; Regehr, Martin; Spero, Robert; Alvarez-Salazar, Oscar; Loya, Frank; Logan, Jennifer
2006-01-01
A control system strives to maintain the correct alignment of a laser beam in an interferometer dedicated to measuring the displacement or distance between two fiducial corner-cube reflectors. The correct alignment of the laser beam is parallel to the line between the corner points of the corner-cube reflectors: Any deviation from parallelism changes the length of the optical path between the reflectors, thereby introducing a displacement or distance measurement error. On the basis of the geometrical optics of corner-cube reflectors, the length of the optical path can be shown to be L = L₀ cos θ, where L₀ is the distance between the corner points and θ is the misalignment angle. Therefore, the measurement error is given by ΔL = L₀(cos θ − 1). In the usual case in which the misalignment is small, this error can be approximated as ΔL ≈ −L₀θ²/2. The control system is implemented partly in hardware and partly in software. The control system includes three piezoelectric actuators for rapid, fine adjustment of the direction of the laser beam. The voltages applied to the piezoelectric actuators include components designed to scan the beam in a circular pattern so that the beam traces out a narrow cone (60 microradians wide in the initial application) about the direction in which it is nominally aimed. This scan is performed at a frequency (2.5 Hz in the initial application) well below the resonance frequency of any vibration of the interferometer. The laser beam makes a round trip to both corner-cube reflectors and then interferes with the launched beam. The interference is detected on a photodiode. The length of the optical path is measured by a heterodyne technique: A 100-kHz frequency shift between the launched beam and a reference beam imposes, on the detected signal, an interferometric phase shift proportional to the length of the optical path. A phase meter comprising analog filters and specialized digital circuitry converts the phase shift to an indication of displacement, generating a digital signal proportional to the path length.
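Plugging representative numbers into the relations above shows the size of the cosine error for a few misalignment angles (the values of L₀ and θ are illustrative):

```python
import numpy as np

L0 = 1.0                                  # metres between corner points (illustrative)
for theta_urad in (10.0, 60.0, 200.0):    # misalignment angles in microradians
    theta = theta_urad * 1e-6
    exact = L0 * (np.cos(theta) - 1.0)    # exact path-length error, DeltaL = L0*(cos(theta) - 1)
    approx = -L0 * theta ** 2 / 2.0       # small-angle approximation
    print(f"theta = {theta_urad:6.1f} urad: DeltaL = {exact:.3e} m, approx {approx:.3e} m")
```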
NASA Astrophysics Data System (ADS)
Pengvanich, P.; Chernin, D. P.; Lau, Y. Y.; Luginsland, J. W.; Gilgenbach, R. M.
2007-11-01
Motivated by the current interest in mm-wave and THz sources, which use miniature, difficult-to-fabricate circuit components, we evaluate the statistical effects of random fabrication errors on a helix traveling wave tube amplifier's small signal characteristics. The small signal theory is treated in a continuum model in which the electron beam is assumed to be monoenergetic, and axially symmetric about the helix axis. Perturbations that vary randomly along the beam axis are introduced in the dimensionless Pierce parameters b, the beam-wave velocity mismatch, C, the gain parameter, and d, the cold tube circuit loss. Our study shows, as expected, that perturbation in b dominates the other two. The extensive numerical data have been confirmed by our analytic theory. They show in particular that the standard deviation of the output phase is linearly proportional to standard deviation of the individual perturbations in b, C, and d. Simple formulas have been derived which yield the output phase variations in terms of the statistical random manufacturing errors. This work was supported by AFOSR and by ONR.
Optics measurement algorithms and error analysis for the proton energy frontier
NASA Astrophysics Data System (ADS)
Langner, A.; Tomás, R.
2015-03-01
Optics measurement algorithms have been improved in preparation for the commissioning of the LHC at higher energy, i.e., with an increased damage potential. Due to machine protection considerations the higher energy sets tighter limits on the maximum excitation amplitude and the total beam charge, reducing the signal-to-noise ratio of optics measurements. Furthermore, the precision in 2012 (4 TeV) was insufficient to understand beam size measurements and determine interaction point (IP) β-functions (β*). A new, more sophisticated algorithm has been developed which takes into account both the statistical and systematic errors involved in this measurement. This makes it possible to combine more beam position monitor measurements for deriving the optical parameters and is shown to significantly improve the accuracy and precision. Measurements from the 2012 run have been reanalyzed; due to the improved algorithms, this results in a significantly higher precision of the derived optical parameters and decreases the average error bars by a factor of three to four. This allowed the calculation of β* values and proved fundamental in understanding the emittance evolution during the energy ramp.
Irradiation setup at the U-120M cyclotron facility
NASA Astrophysics Data System (ADS)
Křížek, F.; Ferencei, J.; Matlocha, T.; Pospíšil, J.; Príbeli, P.; Raskina, V.; Isakov, A.; Štursa, J.; Vaňát, T.; Vysoká, K.
2018-06-01
This paper describes parameters of the proton beams provided by the U-120M cyclotron and the related irradiation setup at the open-access irradiation facility of the Nuclear Physics Institute of the Czech Academy of Sciences. The facility is suitable for testing the radiation hardness of various electronic components. The use of the setup is illustrated by a measurement of the error rate due to Single Event Transients in an SRAM-based Xilinx XC3S200 FPGA. This measurement provides an estimate of the possible occurrence of Single Event Transients. The data suggest that the variation in the Single Event Effect error rate for different clock phase shifts is not significant enough to use clock phase alignment with the beam as a fault mitigation technique.
A review of setup error in supine breast radiotherapy using cone-beam computed tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batumalai, Vikneswary, E-mail: Vikneswary.batumalai@sswahs.nsw.gov.au; Liverpool and Macarthur Cancer Therapy Centres, New South Wales; Ingham Institute of Applied Medical Research, Sydney, New South Wales
2016-10-01
Measurement of setup error in breast radiotherapy (RT) with 3-dimensional cone-beam computed tomography (CBCT) is becoming more common. The purpose of this study is to review the literature relating to the magnitude of setup error in breast RT measured with CBCT. The different methods of image registration between CBCT and the planning computed tomography (CT) scan were also explored. A literature search, not limited by date, was conducted using Medline and Google Scholar with the following key words: breast cancer, RT, setup error, and CBCT. This review includes studies that reported on systematic and random errors, and the methods used when registering CBCT scans with the planning CT scan. A total of 11 relevant studies were identified for inclusion in this review. The average magnitude of error is generally less than 5 mm across the studies reviewed. The common registration methods used when registering CBCT scans with the planning CT scan are based on bony anatomy, soft tissue, and surgical clips. No clear relationships between the setup errors detected and the methods of registration were observed from this review. Further studies are needed to assess the benefit of CBCT over electronic portal imaging, as CBCT remains unproven to be of wide benefit in breast RT.
A chevron beam-splitter interferometer
NASA Technical Reports Server (NTRS)
Breckinridge, J. B.
1979-01-01
A fully tilt-compensated double-pass chevron beam splitter, which removes channelling effects and permits optical phase tuning, is wavelength independent and tolerates small alignment errors that are not tolerated in Michelson, Mach-Zehnder, or Sagnac interferometers. The device is very useful in experiments where background vibration affects conventional interferometers.
Alternative stitching method for massively parallel e-beam lithography
NASA Astrophysics Data System (ADS)
Brandt, Pieter; Tranquillin, Céline; Wieland, Marco; Bayle, Sébastien; Milléquant, Matthieu; Renault, Guillaume
2015-03-01
In this study a novel stitching method other than Soft Edge (SE) and Smart Boundary (SB) is introduced and benchmarked against SE. The method is based on locally enhanced Exposure Latitude without throughput cost, making use of the fact that the two beams that pass through the stitching region can deposit up to 2x the nominal dose. The method requires a complex Proximity Effect Correction that takes a preset stitching dose profile into account. On a Metal clip at a minimum half-pitch of 32 nm for MAPPER FLX 1200 tool specifications, the novel stitching method effectively mitigates Beam-to-Beam (B2B) position errors such that they do not induce an increase in CD uniformity (CDU). In other words, the same CDU can be realized inside the stitching region as outside the stitching region. For the SE method, the CDU inside is 0.3 nm higher than outside the stitching region. The 5 nm direct overlay impact from B2B position errors cannot be reduced by a stitching strategy.
Doppler Global Velocimetry at NASA Glenn Research Center: System Discussion and Results
NASA Technical Reports Server (NTRS)
Lant, Christian T.
2003-01-01
A ruggedized Doppler Global Velocimetry system has been built and tested at NASA Glenn Research Center. One-component planar velocity measurements of subsonic and supersonic flows from an under-expanded free jet are reported, which agree well with predicted values. An error analysis evaluates geometric and spectral error terms, and characterizes speckle noise in isotropic data. A multimode, fused fiber optic bundle is demonstrated to couple up to 650 mJ/pulse of laser light without burning or fiber ablation, and without evidence of Stimulated Brillouin Scattering or other spectral-broadening problems. Comparisons are made between spinning wheel data using illumination by free-space beam propagation and fiber optic beam delivery. The fiber bundle illumination is found to provide more spatially even and stable illumination than is typically available from pulsed Nd:YAG laser beams. The fiber bundle beam delivery is also a step toward making remote measurements and automatic real-time plume sectioning feasible in wind tunnel environments.
Electron beams scanning: A novel method
NASA Astrophysics Data System (ADS)
Askarbioki, M.; Zarandi, M. B.; Khakshournia, S.; Shirmardi, S. P.; Sharifian, M.
2018-06-01
In this research, a spatial electron beam scanning method is reported. There are various methods for ion and electron beam scanning. The best known of these is wire scanning, wherein the parameters of the beam are measured by one or more conductive wires. This article suggests a novel method for e-beam scanning without the errors of traditional wire scanning. In this method, techniques of atomic physics are applied so that a knife edge acts as the scanner and the wires act as detectors. The 2D e-beam profile can then be determined readily once the positions of the scanner and detectors are specified.
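The paper's specific detection scheme is not reproduced here, but the generic principle of knife-edge scanning can be sketched as follows: the detected signal versus knife position is the integral of the beam profile beyond the edge, so differentiating the scan (with a sign change) recovers a one-dimensional current-density profile. The Gaussian beam and noise level below are assumptions for illustration.

```python
import numpy as np

def beam_profile_from_knife_scan(knife_pos, signal):
    """Recover a 1-D beam profile from a knife-edge scan: the unblocked signal is the
    integral of the profile beyond the edge, so the profile is -d(signal)/d(position)."""
    return -np.gradient(signal, knife_pos)

# Synthetic Gaussian electron beam scanned by a knife edge.
x = np.linspace(-10.0, 10.0, 401)                    # knife position [mm]
sigma = 1.5
true_profile = np.exp(-x ** 2 / (2.0 * sigma ** 2))
dx = x[1] - x[0]
signal = np.cumsum(true_profile[::-1])[::-1] * dx    # current passing the edge at each position
signal += 0.002 * np.random.default_rng(4).standard_normal(x.size)

recovered = beam_profile_from_knife_scan(x, signal)
fwhm = dx * np.count_nonzero(recovered > 0.5 * recovered.max())
print(f"recovered FWHM ~ {fwhm:.2f} mm (expected {2.355 * sigma:.2f} mm)")
```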
Measurement of the cosmic ray spectrum and chemical composition in the 10¹⁵-10¹⁸ eV energy range
NASA Astrophysics Data System (ADS)
Chiavassa, Andrea
2018-01-01
Cosmic rays in the 10¹⁵-10¹⁸ eV energy range can only be detected with ground-based experiments sampling Extensive Air Shower (EAS) particles. The interest in this energy interval is related to the search for the knee of the iron component of cosmic rays and to the study of the transition between galactic and extra-galactic primaries. The energy and mass calibration of these arrays can only be performed with complete EAS simulations, as no sources are available for an absolute calibration. The systematic error on the energy assignment can be estimated at around 30 ± 10%. The all-particle spectrum measured in this energy range is more structured than previously thought, showing some faint features: a hardening slightly above 10¹⁶ eV and a steepening below 10¹⁷ eV. The studies of the primary chemical composition are quickly evolving towards measurements of the primary spectra of different mass groups: up to now we are able to separate (on an event-by-event basis) light and heavy primaries. Above the knee a steepening of the heavy primary spectrum and a hardening of the light one have been detected.
Welding and brazing of nickel and nickel-base alloys
NASA Technical Reports Server (NTRS)
Mortland, J. E.; Evans, R. M.; Monroe, R. E.
1972-01-01
The joining of four types of nickel-base materials is described: (1) high-nickel, nonheat-treatable alloys, (2) solid-solution-hardening nickel-base alloys, (3) precipitation-hardening nickel-base alloys, and (4) dispersion-hardening nickel-base alloys. The high-nickel and solid-solution-hardening alloys are widely used in chemical containers and piping. These materials have excellent resistance to corrosion and oxidation, and retain useful strength at elevated temperatures. The precipitation-hardening alloys have good properties at elevated temperature. They are important in many aerospace applications. Dispersion-hardening nickel also is used for elevated-temperature service.
Constitutive Modeling of High-Temperature Flow Behavior of an Nb Micro-alloyed Hot Stamping Steel
NASA Astrophysics Data System (ADS)
Zhang, Shiqi; Feng, Ding; Huang, Yunhua; Wei, Shizhong; Mohrbacher, Hardy; Zhang, Yue
2016-03-01
The thermal deformation behavior and constitutive models of an Nb micro-alloyed 22MnB5 steel were investigated by conducting isothermal uniaxial tensile tests in the temperature range of 873-1223 K with strain rates of 0.1-10 s^-1. The results indicated that the investigated steel showed typical work hardening and dynamic recovery behavior during hot deformation, and the flow stress decreased with a decrease in strain rate and/or an increase in temperature. On the basis of the experimental data, the modified Johnson-Cook (modified JC), modified Norton-Hoff (modified NH), and Arrhenius-type (AT) constitutive models were established for the subject steel. However, the flow stress values predicted by these three models revealed some remarkable deviations from the experimental values for certain experimental conditions. Therefore, a new combined modified Norton-Hoff and Arrhenius-type constitutive model (combined modified NH-AT model), which accurately reflects both the work hardening and dynamic recovery behavior of the subject steel, was developed by introducing the modified parameter k_ε. Furthermore, the accuracy of these constitutive models was assessed by the correlation coefficient, the average absolute relative error, and the root mean square error, which indicated that the flow stress values computed by the combined modified NH-AT model were highly consistent with the experimental values (R = 0.998, AARE = 1.63%, RMSE = 3.85 MPa). The result confirmed that the combined modified NH-AT model is suitable for the studied Nb micro-alloyed hot stamping steel. Additionally, the practicability of the new model was also verified using finite element simulations in ANSYS/LS-DYNA, and the results confirmed that the new model is practical and highly accurate.
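For reference, the three goodness-of-fit metrics quoted above (R, AARE, and RMSE) are standard quantities and easy to reproduce; the short Python sketch below computes them for hypothetical arrays of experimental and predicted flow stresses (the numerical values are illustrative, not the paper's data).

```python
import numpy as np

def flow_stress_metrics(sigma_exp, sigma_pred):
    """Correlation coefficient R, AARE (%), and RMSE between experimental
    and predicted flow stresses (same units as the input stresses)."""
    sigma_exp = np.asarray(sigma_exp, dtype=float)
    sigma_pred = np.asarray(sigma_pred, dtype=float)
    r = np.corrcoef(sigma_exp, sigma_pred)[0, 1]            # Pearson R
    aare = 100.0 * np.mean(np.abs((sigma_exp - sigma_pred) / sigma_exp))
    rmse = np.sqrt(np.mean((sigma_exp - sigma_pred) ** 2))
    return r, aare, rmse

# Hypothetical flow-stress values in MPa
sigma_exp = [120.0, 95.0, 210.0, 160.0]
sigma_pred = [118.5, 97.0, 205.0, 162.5]
print(flow_stress_metrics(sigma_exp, sigma_pred))
```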
SU-E-T-439: Fundamental Verification of Respiratory-Gated Spot Scanning Proton Beam Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamano, H; Yamakawa, T; Hayashi, N
Purpose: Spot-scanning proton beam irradiation with a respiratory gating technique provides a very good dose distribution and requires both dosimetric and geometric verification prior to clinical implementation. The purpose of this study is to evaluate the impact of gated irradiation as a fundamental verification. Methods: We evaluated field width, flatness, symmetry, and penumbra in gated and non-gated proton beams. The respiratory motion was set to three amplitudes, 10, 20, and 30 mm, and these quantities were compared between the gated and non-gated beams. A 200 MeV proton beam from a PROBEAT-III unit (Hitachi Co., Ltd.) was used in this study. Respiratory gating irradiation was performed with a Quasar phantom (MODUS Medical Devices) in combination with a dedicated respiratory gating system (ANZAI Medical Corporation). For radiochromic film dosimetry, the calibration curve was created with Gafchromic EBT3 film (Ashland) using FilmQA Pro 2014 (Ashland) as the film analysis software. Results: The film was calibrated at the middle of the spread-out Bragg peak in a passive proton beam. The field width, flatness, and penumbra in non-gated proton irradiation with respiratory motion were larger than those of the reference beam without respiratory motion: the maximum errors of the field width, flatness, and penumbra for a respiratory motion of 30 mm were 1.75%, 40.3%, and 39.7%, respectively. The errors of flatness and penumbra in the gated beam (motion: 30 mm, gating rate: 25%) were 0.0% and 2.91%, respectively. Symmetry in all proton beams with the gating technique was within 0.6%. Conclusion: The field width, flatness, symmetry, and penumbra were improved with the gating technique in the proton beam. Spot-scanning proton beam irradiation with the gating technique is feasible for a moving target.
Feedback stabilization system for pulsed single longitudinal mode tunable lasers
Esherick, Peter; Raymond, Thomas D.
1991-10-01
A feedback stabilization system for pulsed single longitudinal mode tunable lasers having an excited laser medium contained within an adjustable-length cavity and producing a laser beam through the use of an internal dispersive element, including detection of angular deviation in the output laser beam resulting from detuning between the cavity mode frequency and the passband of the internal dispersive element, and generation of an error signal based thereon. The error signal can be integrated and amplified and then applied as a correcting signal to a piezoelectric transducer mounted on a mirror of the laser cavity for controlling the cavity length.
NASA Astrophysics Data System (ADS)
Peckerar, Martin C.; Marrian, Christie R.
1995-05-01
Standard matrix inversion methods of e-beam proximity correction are compared with a variety of pseudoinverse approaches based on gradient descent. It is shown that the gradient descent methods can be modified using 'regularizers' (terms added to the cost function minimized during gradient descent). This modification solves the 'negative dose' problem in a mathematically sound way. Different techniques are contrasted using a weighted error measure approach. It is shown that the regularization approach leads to the highest quality images. In some cases, ignoring negative doses yields results which are worse than employing an uncorrected dose file.
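As a rough illustration of the regularization idea (not the authors' exact formulation), the sketch below adds a quadratic penalty on negative doses to a least-squares proximity-correction cost and minimizes it by plain gradient descent; the 1D kernel, target pattern, and penalty weight are all hypothetical.

```python
import numpy as np

def solve_doses(K, target, lam=1.0, lr=0.1, n_iter=3000):
    """Least-squares dose assignment with a penalty on negative doses:
    minimize ||K d - target||^2 + lam * ||min(d, 0)||^2."""
    d = np.zeros_like(target)
    for _ in range(n_iter):
        residual = K @ d - target
        # Gradient of the data term plus the negative-dose regularizer
        grad = 2.0 * K.T @ residual + 2.0 * lam * np.minimum(d, 0.0)
        d -= lr * grad
    return d

# Hypothetical 1D proximity kernel: sharp exposure plus a broad backscatter tail
n = 64
x = np.arange(n)
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 1.0 ** 2))
K += 0.3 * np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 8.0 ** 2))
K /= K.sum(axis=1, keepdims=True)

target = np.zeros(n)
target[24:40] = 1.0            # desired deposited-energy profile
d = solve_doses(K, target)
print(d.min(), np.abs(K @ d - target).max())
```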
Alsmadi, A M; Alatas, A; Zhao, J Y; Hu, M Y; Yan, L; Alp, E E
2014-05-01
Synchrotron radiation from third-generation high-brilliance storage rings is an ideal source for X-ray microbeams. The aim of this paper is to describe a microfocusing scheme that combines both a toroidal mirror and Kirkpatrick-Baez (KB) mirrors for upgrading the existing optical system for inelastic X-ray scattering experiments at sector 3 of the Advanced Photon Source. SHADOW ray-tracing simulations that neglect the slope errors of both the toroidal mirror and the KB mirrors show that this combination can provide a beam size of 4.5 µm (H) × 0.6 µm (V) (FWHM) at the end of the existing D-station (66 m from the source) with a full-beam transmission of up to 59%, and a beam size of 3.7 µm (H) × 0.46 µm (V) (FWHM) at the front-end of the proposed E-station (68 m from the source) with a transmission of up to 52%. By using high-quality mirrors (with slope errors of less than 0.5 µrad r.m.s.), a beam size of about 5 µm (H) × 1 µm (V) can be obtained, which is close to the ideal case. Considering the slope errors of the existing toroidal and KB mirrors (5 and 2.9 µrad r.m.s., respectively), the beam size grows to about 13.5 µm (H) × 6.3 µm (V) at the end of the D-station and to 12.0 µm (H) × 6.0 µm (V) at the front-end of the proposed E-station. The simulations presented here are compared with experimental measurements, which are significantly larger than the theoretical values even when slope errors are included in the simulations; this is because the experimental set-up could not yet be fully optimized.
Wooten, H. Omar; Green, Olga; Li, Harold H.; Liu, Shi; Li, Xiaoling; Rodriguez, Vivian; Mutic, Sasa; Kashani, Rojano
2016-01-01
The aims of this study were to develop a method for automatic and immediate verification of treatment delivery after each treatment fraction in order to detect and correct errors, and to develop a comprehensive daily report which includes delivery verification results, daily image‐guided radiation therapy (IGRT) review, and information for weekly physics reviews. After systematically analyzing the requirements for treatment delivery verification and understanding the available information from a commercial MRI‐guided radiotherapy treatment machine, we designed a procedure to use 1) treatment plan files, 2) delivery log files, and 3) beam output information to verify the accuracy and completeness of each daily treatment delivery. The procedure verifies the correctness of delivered treatment plan parameters including beams, beam segments and, for each segment, the beam‐on time and MLC leaf positions. For each beam, composite primary fluence maps are calculated from the MLC leaf positions and segment beam‐on time. Error statistics are calculated on the fluence difference maps between the plan and the delivery. A daily treatment delivery report is designed to include all required information for IGRT and weekly physics reviews including the plan and treatment fraction information, daily beam output information, and the treatment delivery verification results. A computer program was developed to implement the proposed procedure of the automatic delivery verification and daily report generation for an MRI guided radiation therapy system. The program was clinically commissioned. Sensitivity was measured with simulated errors. The final version has been integrated into the commercial version of the treatment delivery system. The method automatically verifies the EBRT treatment deliveries and generates the daily treatment reports. Already in clinical use for over one year, it is useful to facilitate delivery error detection, and to expedite physician daily IGRT review and physicist weekly chart review. PACS number(s): 87.55.km PMID:27167269
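The core of the verification step (reconstructing a composite fluence map from leaf positions and beam-on times, then computing error statistics on the plan-versus-delivery difference) can be sketched in a few lines. The following Python toy model is an assumption-laden simplification (one leaf pair per row, idealized apertures, made-up numbers), not the vendor's log-file format or the authors' implementation.

```python
import numpy as np

def fluence_map(segments, grid=np.linspace(-10.0, 10.0, 200)):
    """Composite primary fluence from a list of segments.
    Each segment is (beam_on_time, left_edges, right_edges), one leaf pair
    per row; fluence in a row is the beam-on time where the aperture is open."""
    n_rows = len(segments[0][1])
    fmap = np.zeros((n_rows, grid.size))
    for mu, left, right in segments:
        for i, (l, r) in enumerate(zip(left, right)):
            fmap[i] += mu * ((grid >= l) & (grid <= r))
    return fmap

def delivery_error_stats(planned, delivered):
    diff = delivered - planned
    return {"max_abs": np.abs(diff).max(),
            "rms": np.sqrt(np.mean(diff ** 2)),
            "mean": diff.mean()}

# Hypothetical plan: one segment, 10 leaf rows, 2 cm wide aperture
plan = [(1.0, [-1.0] * 10, [1.0] * 10)]
# Delivered: same segment with a small leaf-position error in one row
dlvd = [(1.0, [-1.0] * 9 + [-1.2], [1.0] * 10)]
print(delivery_error_stats(fluence_map(plan), fluence_map(dlvd)))
```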
Catching errors with patient-specific pretreatment machine log file analysis.
Rangaraj, Dharanipathy; Zhu, Mingyao; Yang, Deshan; Palaniswaamy, Geethpriya; Yaddanapudi, Sridhar; Wooten, Omar H; Brame, Scott; Mutic, Sasa
2013-01-01
A robust, efficient, and reliable quality assurance (QA) process is highly desired for modern external beam radiation therapy treatments. Here, we report the results of a semiautomatic, pretreatment, patient-specific QA process based on dynamic machine log file analysis, clinically implemented for intensity modulated radiation therapy (IMRT) treatments delivered by high energy linear accelerators (Varian 2100/2300 EX, Trilogy, iX-D, Varian Medical Systems Inc, Palo Alto, CA). The multileaf collimator (MLC) machine log files are called Dynalog by Varian. Using an in-house developed computer program called "Dynalog QA," we automatically compare the beam delivery parameters in the log files that are generated during pretreatment point dose verification measurements with the treatment plan to determine any discrepancies in IMRT deliveries. Fluence maps are constructed and compared between the delivered and planned beams. Since clinical introduction in June 2009, 912 machine log file QA analyses were performed by the end of 2010. Among these, 14 errors causing dosimetric deviation were detected and required further investigation and intervention. These errors were the result of human operating mistakes, flawed treatment planning, and data modification during plan file transfer. Minor errors were also reported in 174 other log file analyses, some of which stemmed from false positives and unreliable results; the origins of these are discussed herein. It has been demonstrated that machine log file analysis is a robust, efficient, and reliable QA process capable of detecting errors originating from human mistakes, flawed planning, and data transfer problems. The possibility of detecting these errors is low using point and planar dosimetric measurements. Copyright © 2013 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rojacz, H., E-mail: rojacz@ac2t.at
2016-08-15
Strain hardening is commonly used to reach the full potential of materials and can be beneficial in tribological contacts. Two-body abrasive wear was simulated in a scratch test aimed at strain hardening effects in various steels. Different working conditions were examined at various temperatures and velocities. Strain hardening effects and microstructural changes were analysed with high resolution scanning electron microscopy (HRSEM), electron backscatter diffraction (EBSD), micro hardness measurements and nanoindentation. Statistical analysis was performed to quantify the influence of different parameters on the microstructures. The results show a crucial influence of temperature and velocity on strain hardening in tribological contacts. Increased velocity leads to more highly deformed microstructures and a greater increase in surface hardness, with a smaller depth of the deformed zones, for all materials investigated. An optimised surface hardness can be achieved by knowing the influence of velocity (strain rate) and temperature, enabling a "tailor-made" surface hardening in tribological systems aimed at increased wear resistance. - Highlights: •Hardening mechanisms and their intensity in tribological contacts depend on relative velocity and temperature. •Beneficial surface hardened zones are formed at certain running-in conditions; the scientific background is presented here. •Ferritic-pearlitic steels strain harden via grain size reduction and decreasing interlamellar distances in pearlite. •Austenitic steels show excellent surface hardening (120% hardness increase) by twinning and martensitic transformation. •Ferritic steels with hard phases harden in the ferrite phase according to the Hall-Petch equation and the degree of deformation.
SU-F-T-434: Development of a Fan-Beam Optical Scanner Using CMOS Array for Small Field Dosimetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brost, E; Warmington, L; Watanabe, Y
Purpose: To design and construct a second generation optical computed tomography (OCT) system using a fan-beam with a CMOS array detector for 3D dosimetry with polymer gel and radiochromic solid dosimeters. The system was specifically designed for small field dosimetry. Methods: The optical scanner used a fan-beam laser, which was produced from a collimated red laser beam (λ=620 nm) with a 15-degree laser-line generating lens. The fan-beam was sent through an index-matching bath which holds the sample stage and a sample. The emerging laser light was detected with a 2.54 cm-long CMOS array detector (512 elements). The sample stage rotated through the full 360 degrees of projection angles in 0.9-degree increments. Each projection was normalized to the unirradiated sample at the same projection angle to correct for imperfections in the dosimeter. A larger sample could be scanned by using a motorized mirror and linearly translating the CMOS detector. The height of the sample stage was varied for full 3D scanning. The image acquisition and motor motion were controlled by a computer. The 3D image reconstruction was accomplished by a fan-beam reconstruction algorithm. All the software was developed in-house with MATLAB. Results: The scanner was used on both PRESAGE and PAGAT gel dosimeters. Irreconcilable refraction errors were seen with PAGAT because the fan-beam laser line refracted away from the detector when the field was highly varying in 3D. With PRESAGE, this type of error was not seen. Conclusion: We could acquire tomographic images of dose distributions with the new OCT system with both polymer gel and radiochromic solid dosimeters. Preliminary results showed that the system was better suited for radiochromic solid dosimeters, since they exhibited minimal refraction and scattering errors. We are currently working on improving the image quality by thorough characterization of the OCT system.
Heidari, Mohammad; Heidari, Ali; Homaei, Hadi
2014-01-01
The static pull-in instability of beam-type microelectromechanical systems (MEMS) is theoretically investigated. Two engineering cases, cantilever and double cantilever microbeams, are considered. Considering the midplane stretching as the source of the nonlinearity in the beam behavior, a nonlinear size-dependent Euler-Bernoulli beam model based on a modified couple stress theory, capable of capturing the size effect, is used. By selecting a range of geometric parameters such as beam length, width, thickness, gap, and size effect, we identify the static pull-in instability voltage. A MAPLE package is employed to solve the nonlinear governing differential equations and obtain the static pull-in instability voltage of the microbeams. A radial basis function artificial neural network with two functions has been used for modeling the static pull-in instability of the microcantilever beam. The network has four inputs, length, width, gap, and the ratio of height to the scale parameter of the beam, as the independent process variables, and the output is the static pull-in voltage of the microbeam. Numerical data were employed for training the network, and the capabilities of the model in predicting the pull-in instability behavior were verified. The output obtained from the neural network model was compared with the numerical results, and the relative error was calculated. Based on this verification error, it is shown that the radial basis function neural network has an average error of 4.55% in predicting the pull-in voltage of the cantilever microbeam. Pull-in instability of the beam under different input conditions has been further investigated, and comparison of the model results with the numerical calculations shows good agreement, which also proves the feasibility and effectiveness of the adopted approach. The results reveal significant influences of the size effect and geometric parameters on the static pull-in instability voltage of MEMS. PMID:24860602
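For illustration only, a Gaussian radial-basis-function regression with four inputs can be set up as below; the training data here are synthetic stand-ins, not the paper's MAPLE results, and the functional form of the fake pull-in voltage is invented purely to exercise the fit.

```python
import numpy as np

def rbf_fit(X, y, centers, sigma=1.0):
    """Fit linear output weights of a Gaussian radial-basis-function network."""
    G = np.exp(-np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=2)
               / (2.0 * sigma ** 2))
    w, *_ = np.linalg.lstsq(G, y, rcond=None)
    return w

def rbf_predict(X, centers, w, sigma=1.0):
    G = np.exp(-np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=2)
               / (2.0 * sigma ** 2))
    return G @ w

# Synthetic stand-in data: inputs are (length, width, gap, h/l ratio), output
# is a pull-in voltage; the functional form below is illustrative only.
rng = np.random.default_rng(0)
X = rng.uniform(0.5, 1.5, size=(40, 4))
y = 10.0 / X[:, 0] ** 2 + 2.0 * X[:, 2] + 0.5 * X[:, 3]
centers = X[::4]                      # use a subset of samples as RBF centers
w = rbf_fit(X, y, centers)
pred = rbf_predict(X, centers, w)
print(np.mean(np.abs((y - pred) / y)) * 100, "% mean relative error")
```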
Topographical optimization of structures for use in musical instruments and other applications
NASA Astrophysics Data System (ADS)
Kirkland, William Brandon
Mallet percussion instruments such as the xylophone, marimba, and vibraphone have been produced and tuned since their inception by arduously grinding the keys to achieve harmonic ratios between their 1st, 2nd, and 3rd transverse modes. In consideration of this, it would be preferable to have defined mathematical models such that the keys of these instruments can be produced quickly and reliably. Additionally, physical modeling of these keys or beams provides a useful application of non-uniform beam vibrations as studied by Euler-Bernoulli and Timoshenko beam theories. This thesis work presents a literature review of previous studies regarding mallet percussion instrument design and optimization of non-uniform keys. The progression of previous research from strictly mathematical approaches to finite element methods is shown, ultimately arriving at the most current optimization techniques used by other authors. However, previous research varies slightly in the relative degree of accuracy to which a non-uniform beam can be modeled. Typically, accuracies are reported in the literature as 1% to 2% error. While this seems attractive, musical tolerances require 0.25% error, and beams are otherwise unsuitable. This research seeks to build on and add to the previous field research by optimizing beam topology and machining keys within tolerances such that no further tuning is required. The optimization methods relied on finite element analysis and used harmonic modal frequencies as constraints rather than as arguments of an error function to be optimized. Instead, the beam mass was minimized while the modal frequency constraints were required to be satisfied within a 0.25% tolerance. The final optimized and machined keys of an A4 vibraphone were shown to be accurate within the required musical tolerances, with strong resonance at the designed frequencies. The findings establish a systematic method for designing musical structures for accuracy and repeatability upon manufacture.
Infrared tracker for a portable missile launcher
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlson, J.J.
1993-07-13
An infrared beam tracker is described for attachment to a housing that is unitary with a portable missile launcher, comprising: a rotating beam splitter positioned to intercept the infrared beam, passing a first portion of the beam through the beam splitter along a first direction and reflecting the remaining portion along a different direction; a first infrared detector for receiving the reflected beam portion from the beam splitter and producing electric signals responsive thereto; a second infrared detector for receiving the beam portion that passes through the beam splitter and providing electric signals responsive thereto; and means interconnected to the first and second infrared detectors and responsive to the electric signals generated by said detectors for determining errors in missile flight direction and communicating course correction information to the missile.
Dedicated Cone-Beam CT System for Extremity Imaging
Al Muhit, Abdullah; Zbijewski, Wojciech; Thawait, Gaurav K.; Stayman, J. Webster; Packard, Nathan; Senn, Robert; Yang, Dong; Foos, David H.; Yorkston, John; Siewerdsen, Jeffrey H.
2014-01-01
Purpose To provide initial assessment of image quality and dose for a cone-beam computed tomographic (CT) scanner dedicated to extremity imaging. Materials and Methods A prototype cone-beam CT scanner has been developed for imaging the extremities, including the weight-bearing lower extremities. Initial technical assessment included evaluation of radiation dose measured as a function of kilovolt peak and tube output (in milliampere seconds), contrast resolution assessed in terms of the signal difference–to-noise ratio (SDNR), spatial resolution semiquantitatively assessed by using a line-pair module from a phantom, and qualitative evaluation of cadaver images for potential diagnostic value and image artifacts by an expert CT observer (musculoskeletal radiologist). Results The dose for a nominal scan protocol (80 kVp, 108 mAs) was 9 mGy (absolute dose measured at the center of a CT dose index phantom). SDNR was maximized with the 80-kVp scan technique, and contrast resolution was sufficient for visualization of muscle, fat, ligaments and/or tendons, cartilage joint space, and bone. Spatial resolution in the axial plane exceeded 15 line pairs per centimeter. Streaks associated with x-ray scatter (in thicker regions of the patient—eg, the knee), beam hardening (about cortical bone—eg, the femoral shaft), and cone-beam artifacts (at joint space surfaces oriented along the scanning plane—eg, the interphalangeal joints) presented a slight impediment to visualization. Cadaver images (elbow, hand, knee, and foot) demonstrated excellent visibility of bone detail and good soft-tissue visibility suitable to a broad spectrum of musculoskeletal indications. Conclusion A dedicated extremity cone-beam CT scanner capable of imaging upper and lower extremities (including weight-bearing examinations) provides sufficient image quality and favorable dose characteristics to warrant further evaluation for clinical use. © RSNA, 2013 Online supplemental material is available for this article. PMID:24475803
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salvador Palau, A.; Eder, S. D., E-mail: sabrina.eder@uib.no; Kaltenbacher, T.
Time-of-flight (TOF) is a standard experimental technique for determining, among others, the speed ratio S (velocity spread) of a molecular beam. The speed ratio is a measure for the monochromaticity of the beam and an accurate determination of S is crucial for various applications, for example, for characterising chromatic aberrations in focussing experiments related to helium microscopy or for precise measurements of surface phonons and surface structures in molecular beam scattering experiments. For both of these applications, it is desirable to have as high a speed ratio as possible. Molecular beam TOF measurements are typically performed by chopping the beam using a rotating chopper with one or more slit openings. The TOF spectra are evaluated using a standard deconvolution method. However, for higher speed ratios, this method is very sensitive to errors related to the determination of the slit width and the beam diameter. The exact sensitivity depends on the beam diameter, the number of slits, the chopper radius, and the chopper rotation frequency. We present a modified method suitable for the evaluation of TOF measurements of high speed ratio beams. The modified method is based on a systematic variation of the chopper convolution parameters so that a set of independent measurements that can be fitted with an appropriate function are obtained. We show that with this modified method, it is possible to reduce the error by typically one order of magnitude compared to the standard method.
Optimized radiation-hardened erbium doped fiber amplifiers for long space missions
NASA Astrophysics Data System (ADS)
Ladaci, A.; Girard, S.; Mescia, L.; Robin, T.; Laurent, A.; Cadier, B.; Boutillier, M.; Ouerdane, Y.; Boukenter, A.
2017-04-01
In this work, we developed and exploited simulation tools to optimize the performance of rare earth doped fiber amplifiers (REDFAs) for space missions. To describe these systems, a state-of-the-art model based on the rate equations and the particle swarm optimization technique is developed, in which we also consider the main radiation effect on REDFAs: the radiation induced attenuation (RIA). After validation of this tool set by comparison of theoretical and experimental results, we investigate how the deleterious radiation effects on the amplifier performance can be mitigated by adequate strategies in designing the REDFA architecture. The tool set was validated by comparing the calculated Erbium-doped fiber amplifier (EDFA) gain degradation under X-rays at ~300 krad(SiO2) with the corresponding experimental results. Two versions of the same fibers were used in this work, a standard optical fiber and a radiation hardened fiber, obtained by loading the previous fiber with hydrogen gas. Based on these fibers, standard and radiation hardened EDFAs were manufactured and tested in different operating configurations, and the obtained data were compared with simulations performed considering the same EDFA structure and fiber properties. This comparison reveals a good agreement between simulated gain and experimental data (<10% maximum error for the highest doses). Compared to our previous results obtained on Er/Yb amplifiers, these results reveal the importance of the photo-bleaching mechanism competing with the RIA, which cannot be neglected in the modeling of the radiation-induced gain degradation of EDFAs. This implies measuring, under representative conditions, the RIA at the pump and signal wavelengths, which is used as an input parameter for the simulation. The validated numerical codes have then been used to evaluate the potential of some EDFA architecture evolutions for the amplifier performance during the space mission. Optimization of both the fiber length and the EDFA pumping scheme allows us to strongly reduce its radiation vulnerability in terms of gain. The presented approach is a complementary and effective tool for hardening by device techniques and opens new perspectives for the applications of REDFAs and lasers in harsh environments.
NASA Technical Reports Server (NTRS)
Davidson, Frederic M.
1992-01-01
Performance measurements are reported concerning a coherent optical communication receiver that contained an iron-doped indium phosphide photorefractive beam combiner, rather than a conventional optical beam splitter. The system obtained a bit error probability of 10^-6 at received signal powers corresponding to less than 100 detected photons per bit. The system used phase modulated Nd:YAG laser light at a wavelength of 1.06 microns.
Errors and optics study of a permanent magnet quadrupole system
NASA Astrophysics Data System (ADS)
Schillaci, F.; Maggiore, M.; Rifuggiato, D.; Cirrone, G. A. P.; Cuttone, G.; Giove, D.
2015-05-01
Laser-based accelerators have been gaining interest in recent years as an alternative to conventional machines [1]. Nowadays, the energy and angular spread of laser-driven beams are the main issues for applications, and different solutions for dedicated beam-transport lines have been proposed [2,3]. In this context a system of permanent magnet quadrupoles (PMQs) is going to be realized by INFN researchers [2], in collaboration with the SIGMAPHI company in France [3], to be used as a collection and pre-selection system for laser-driven proton beams. The definition of well specified characteristics of the magnetic lenses, both in terms of performance and field quality, is crucial for the system realization, for an accurate study of the beam dynamics, and for proper matching with a magnetic selection system already realized [6,7]. Hence, different series of simulations have been used to study the PMQ harmonic content and to set the mechanical and magnetic tolerances needed to obtain reasonably good beam quality downstream of the system. This paper reports the method used for the analysis of the PMQ errors and its validation. A preliminary optics characterization is also presented, in which the effects of an ideal PMQ system and of a perturbed system on monochromatic proton beams are compared.
Structural heredity influence upon principles of strain wave hardening
NASA Astrophysics Data System (ADS)
Kiricheck, A. V.; Barinov, S. V.; Yashin, A. V.
2017-02-01
It was established experimentally that, when a strain wave penetrates the material being hardened, not only the technological processing modes but also the technological heredity (the direction of the fibers of the original macrostructure) influences the microhardness distribution. When the strain wave propagates along the fibers, the degree of hardening is lower; however, the product is hardened throughout its entire section, mainly along the fibers. When the strain wave propagates across the fibers of the original material structure, the degree of hardening is much higher, and the depth of the hardened layer with a degree of hardening of at least 50% is at least 3 mm. It was found that under certain conditions the strain wave can completely change the original structure of the material. Thus, a heterogeneously hardened structure characterized by the alternation of harder and more viscous areas is formed, which is beneficial for ensuring high operational properties of the material.
Discrete distributed strain sensing of intelligent structures
NASA Technical Reports Server (NTRS)
Anderson, Mark S.; Crawley, Edward F.
1992-01-01
Techniques are developed for the design of discrete highly distributed sensor systems for use in intelligent structures. First the functional requirements for such a system are presented. Discrete spatially averaging strain sensors are then identified as satisfying the functional requirements. A variety of spatial weightings for spatially averaging sensors are examined, and their wave number characteristics are determined. Preferable spatial weightings are identified. Several numerical integration rules used to integrate such sensors in order to determine the global deflection of the structure are discussed. A numerical simulation is conducted using point and rectangular sensors mounted on a cantilevered beam under static loading. Gage factor and sensor position uncertainties are incorporated to assess the absolute error and standard deviation of the error in the estimated tip displacement found by numerically integrating the sensor outputs. An experiment is carried out using a statically loaded cantilevered beam with five point sensors. It is found that in most cases the actual experimental error is within one standard deviation of the absolute error as found in the numerical simulation.
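A minimal version of the integration step, estimating cantilever tip deflection from discrete surface strain readings by converting strain to curvature and integrating twice with the trapezoidal rule, is sketched below; the sensor layout, section properties, and loading are hypothetical, and the exact closed-form tip deflection is printed alongside for comparison.

```python
import numpy as np

def tip_displacement(x, strain, c):
    """Estimate cantilever tip deflection from discrete surface strains.
    Curvature kappa = strain / c (c = distance from neutral axis to surface);
    integrate twice with the trapezoidal rule using w(0) = w'(0) = 0."""
    kappa = np.asarray(strain) / c
    slope = np.concatenate(
        ([0.0], np.cumsum(np.diff(x) * 0.5 * (kappa[1:] + kappa[:-1]))))
    w = np.concatenate(
        ([0.0], np.cumsum(np.diff(x) * 0.5 * (slope[1:] + slope[:-1]))))
    return w[-1]

# Hypothetical check: cantilever with a tip load P; the surface strain decreases
# linearly to zero at the tip, and the exact tip deflection is P L^3 / (3 E I).
E, I, L, P, c = 70e9, 1e-8, 1.0, 10.0, 0.005
x = np.linspace(0.0, L, 5)                      # five point sensors
strain = P * (L - x) * c / (E * I)              # surface strain from bending moment
print(tip_displacement(x, strain, c), P * L**3 / (3 * E * I))
```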
Effects of energy chirp on bunch length measurement in linear accelerator beams
NASA Astrophysics Data System (ADS)
Sabato, L.; Arpaia, P.; Giribono, A.; Liccardo, A.; Mostacci, A.; Palumbo, L.; Vaccarezza, C.; Variola, A.
2017-08-01
The effects of assumptions about bunch properties on the accuracy of the bunch length measurement method based on radio frequency deflectors (RFDs) in electron linear accelerators (LINACs) are investigated. In particular, when the electron bunch at the RFD has a non-negligible energy chirp (i.e. a correlation between the longitudinal positions and energies of the particles), the measurement is affected by a deterministic intrinsic error, which is directly related to the RFD phase offset. A case study of this effect in the electron LINAC of a gamma beam source at the Extreme Light Infrastructure-Nuclear Physics (ELI-NP) is reported. The relative error is estimated by using the electron generation and tracking (ELEGANT) code to define the reference measurements of the bunch length. The relative error is shown to increase linearly with the RFD phase offset. In particular, for an offset of 7°, corresponding to a vertical centroid offset at the screen of about 1 mm, the relative error is 4.5%.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bachman, Daniel; Chen, Zhijiang; Wang, Christopher
Phase errors caused by fabrication variations in silicon photonic integrated circuits are an important problem, which negatively impacts device yield and performance. This study reports our recent progress in the development of a method for permanent, post-fabrication phase error correction of silicon photonic circuits based on femtosecond laser irradiation. Using a beam shaping technique, we achieve a 14-fold enhancement in the phase tuning resolution of the method with a Gaussian-shaped beam compared to a top-hat beam. The large improvement in tuning resolution makes the femtosecond laser method potentially useful for very fine phase trimming of silicon photonic circuits. Finally, we also show that femtosecond laser pulses can directly modify silicon photonic devices through a SiO2 cladding layer, making it the only permanent post-fabrication method that can tune silicon photonic circuits protected by an oxide cladding.
Kourtis, Lampros C; Carter, Dennis R; Beaupre, Gary S
2014-08-01
Three-point bending tests are often used to determine the apparent or effective elastic modulus of long bones. The use of beam theory equations to interpret such tests can result in a substantial underestimation of the true effective modulus. In this study three-dimensional, nonlinear finite element analysis is used to quantify the errors inherent in beam theory and to create plots that can be used to correct the elastic modulus calculated from beam theory. Correction plots are generated for long bones representative of a variety of species commonly used in research studies. For a long bone with dimensions comparable to the mouse femur, the majority of the error in the effective elastic modulus results from deformations to the bone cross section that are not accounted for in the equations from beam theory. In some cases, the effective modulus calculated from beam theory can be less than one-third of the true effective modulus. Errors are larger: (1) for bones having short spans relative to bone length; (2) for bones with thin vs. thick cortices relative to periosteal diameter; and (3) when using a small radius or "knife-edge" geometry for the center loading ram and the outer supports in the three-point testing system. The use of these correction plots will enable researchers to compare results for long bones from different animal strains and to compare results obtained using testing systems that differ with regard to length between the outer supports and the radius used for the loading ram and outer supports.
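The beam-theory estimate being corrected is the standard three-point-bending formula E = F L^3 / (48 δ I). A minimal sketch, assuming a hollow circular cross section and a hypothetical correction factor standing in for the paper's plots, is shown below.

```python
import math

def apparent_modulus_three_point(load, deflection, span, d_outer, d_inner):
    """Beam-theory (Euler-Bernoulli) effective modulus from a three-point
    bending test of a hollow circular cross section:
        E = F * L^3 / (48 * delta * I),  I = pi * (Do^4 - Di^4) / 64."""
    I = math.pi * (d_outer ** 4 - d_inner ** 4) / 64.0
    return load * span ** 3 / (48.0 * deflection * I)

# Hypothetical small-long-bone test values (SI units)
E_beam = apparent_modulus_three_point(load=10.0, deflection=0.1e-3,
                                      span=8e-3, d_outer=1.5e-3, d_inner=0.9e-3)

# The study's point is that E_beam underestimates the true modulus; a
# geometry-dependent correction factor read off the published plots
# (placeholder value here) would then be applied:
correction_factor = 1.8          # hypothetical, depends on span, diameter, cortex
E_corrected = correction_factor * E_beam
print(E_beam, E_corrected)
```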
Materials science. Modeling strain hardening the hard way.
Gumbsch, Peter
2003-09-26
The plastic deformation of metals results in strain hardening, that is, an increase in the stress with increasing strain. Materials engineers can provide a simple approximate description of such deformation and hardening behavior. In his perspective, Gumbsch discusses work by Madec et al. who have undertaken the formidable task of computing the physical basis for the development of strain hardening by individually following the fate of all the dislocations involved. Their simulations show that the collinear dislocation interaction makes a substantial contribution to strain hardening. It is likely that such simulations will play an important role in guiding the development of future engineering descriptions of deformation and hardening.
NASA Astrophysics Data System (ADS)
Xiao, Xiazi; Yu, Long
2018-05-01
Linear and square superposition hardening models are compared for the surface nanoindentation of ion-irradiated materials. Hardening mechanisms of both dislocations and defects within the plasticity affected region (PAR) are considered. Four sets of experimental data for ion-irradiated materials are adopted to compare with theoretical results of the two hardening models. It is indicated that both models describe experimental data equally well when the PAR is within the irradiated layer; whereas, when the PAR is beyond the irradiated region, the square superposition hardening model performs better. Therefore, the square superposition model is recommended to characterize the hardening behavior of ion-irradiated materials.
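The two superposition rules being compared amount to adding the hardening contributions either linearly or in quadrature; a tiny sketch with hypothetical hardness increments makes the difference explicit.

```python
import numpy as np

def linear_superposition(dh_dislocation, dh_defect):
    """Hardening contributions added linearly."""
    return dh_dislocation + dh_defect

def square_superposition(dh_dislocation, dh_defect):
    """Hardening contributions added in quadrature (root-sum-square)."""
    return np.sqrt(dh_dislocation ** 2 + dh_defect ** 2)

# Hypothetical hardness increments (GPa) within the plasticity affected region
dh_disl, dh_def = 0.8, 1.2
h0 = 2.0                                   # unirradiated hardness, GPa
print("linear :", h0 + linear_superposition(dh_disl, dh_def))
print("square :", h0 + square_superposition(dh_disl, dh_def))
```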
Efficient machining of ultra precise steel moulds with freeform surfaces
NASA Astrophysics Data System (ADS)
Bulla, B.; Robertson, D. J.; Dambon, O.; Klocke, F.
2013-09-01
Ultra precision diamond turning of hardened steel to produce optical quality surfaces can be realized by applying an ultrasonic assisted process. With this technology optical moulds used typically for injection moulding can be machined directly from steel without the requirement to overcoat the mould with a diamond machinable material such as Nickel Phosphor. This has both the advantage of increasing the mould tool lifetime and also reducing manufacture costs by dispensing with the relatively expensive plating process. This publication will present results we have obtained for generating free form moulds in hardened steel by means of ultrasonic assisted diamond turning with a vibration frequency of 80 kHz. To provide a baseline with which to characterize the system performance we perform plane cutting experiments on different steel alloys with different compositions. The baseline machining results provides us information on the surface roughness and on tool wear caused during machining and we relate these to material composition. Moving on to freeform surfaces, we will present a theoretical background to define the machine program parameters for generating free forms by applying slow slide servo machining techniques. A solution for optimal part generation is introduced which forms the basis for the freeform machining experiments. The entire process chain, from the raw material through to ultra precision machining is presented, with emphasis on maintaining surface alignment when moving a component from CNC pre-machining to final machining using ultrasonic assisted diamond turning. The free form moulds are qualified on the basis of the surface roughness measurements and a form error map comparing the machined surface with the originally defined surface. These experiments demonstrate the feasibility of efficient free form machining applying ultrasonic assisted diamond turning of hardened steel.
Superhard Nanocrystalline Homometallic Stainless Steel on Steel for Seamless Coatings
NASA Technical Reports Server (NTRS)
Tobin, Eric J.; Hafley, R. (Technical Monitor)
2002-01-01
The objective of this work is to deposit nanocrystalline stainless steel onto steel substrates (homometallic) for enhanced wear and corrosion resistance. Homometallic coatings provide superior adhesion, and it has been shown that ultrafine-grained materials exhibit the increased hardness and decreased permeability desired for protective coatings. Nanocrystals will be produced by controlling nucleation and growth and by use of an ion beam during deposition by e-beam evaporation or sputtering. Phase I is depositing 316L nanocrystalline stainless steel onto 316L stainless steel substrates. These coatings exhibit hardnesses comparable to those normally obtained for ceramic coatings such as ZrO2, and possess the superior adhesion of seamless, homometallic coatings. Hardening the surface with a similar material also enhances adhesion by avoiding problems associated with thermal and lattice mismatch. So far we have deposited nanocrystalline homometallic 316L stainless steel coatings by varying the ions and the current density of the ion beams. For all deposition conditions we have produced smooth, uniform, superhard coatings. All coatings exhibit a hardness at least 200% greater than that of the bulk material. Our measurements indicate that there is a direct relationship between nanohardness and the current density of the ion beam. Stress measurements indicate that the stress in the films increases in proportion to the current density of the ion beam. TEM, XPS, and XRD results indicate that the coated layers consist of FCC-structure nanocrystallites with a dimension of about 10 to 20 nm. The Ni and Mo concentrations of these coatings are lower than those of bulk 316L, but the concentration of Cr is higher.
Effect of shot peening on the microstructure of laser hardened 17-4PH
NASA Astrophysics Data System (ADS)
Wang, Zhou; Jiang, Chuanhai; Gan, Xiaoyan; Chen, Yanhua
2010-12-01
In order to investigate the influence of shot peening on the microstructure of laser hardened steel, and to clarify how strongly the initial microstructure induced by the laser hardening treatment affects the final microstructure after shot peening, measurements of retained austenite, measurements of microhardness, and microstructural analysis were carried out on three typical areas of laser hardened 17-4PH steel: the laser hardened area, the transitional area, and the matrix area. The results showed that shot peening is an efficient cold working method for eliminating the retained austenite on the surface of laser hardened samples. The surface hardness increased dramatically when shot peening treatments were carried out. The microstructure of laser hardened 17-4PH after shot peening was analysed in the matrix area and the laser hardened area via the Voigt method. With increasing peening intensity, the depth of influence of shot peening on hardness and microstructure increased, but the surface hardness and microstructure did not change once a certain peening intensity was reached. The depth of influence on hardness was larger than that on microstructure, owing to the kinetic energy loss with depth during the shot peening treatment. From the microstructural results, it can be shown that the shot peening treatment influences both the domain size and the microstrain of the treated samples, whereas the laser hardening treatment influences only the microstrain.
TH-B-BRC-01: How to Identify and Resolve Potential Clinical Errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Das, I.
2016-06-15
Radiation treatment consists of a chain of events influenced by the quality of machine operation, beam data commissioning, machine calibration, patient specific data, simulation, treatment planning, imaging and treatment delivery. There is always a chance that the clinical medical physicist may make, or fail to detect, an error in one of these events that may impact the patient's treatment. In the clinical scenario, errors may be systematic and, without peer review, may have a low detectability because they are not part of routine QA procedures. During treatment, there might be errors on the machine that need attention. External reviews of some of the treatment delivery components by independent reviewers, like IROC, can detect errors, but may not be timely. The goal of this session is to help junior clinical physicists identify potential errors, as well as the quality assurance approach of performing a root cause analysis to find and eliminate an error and to continually monitor for errors. A compilation of potential errors will be presented, with examples of the thought process required to spot the error and determine the root cause. Examples may include unusual machine operation, erratic electrometer readings, consistently lower electron output, variation in photon output, body parts inadvertently left in the beam, unusual treatment plans, poor normalization, hot spots, etc. Awareness of the possibility and detection of errors in any link of the treatment process chain will help improve the safe and accurate delivery of radiation to patients. Four experts will discuss how to identify errors in four areas of clinical treatment. D. Followill, NIH grant CA 180803.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marshall, S; Currier, B; Hodgdon, A
Purpose: The design of a new Portable Faraday Cup (PFC) used to calibrate proton accelerators was evaluated for energies between 50 and 220 MeV. Monte Carlo simulations performed in Geant4-10.0 were used to evaluate experimental results and reduce the relative detector error for this vacuum-less and low mass system, and to invalidate current MCNP releases. Methods: The detector construction consisted of a copper conductor coated with an insulator and grounded with silver. Monte Carlo calculations in Geant4 were used to determine the net charge per proton input (gain) as a function of insulator thickness and beam energy. Kapton was chosen as the insulating material and was designed to capture backscattered electrons. Charge displacement from/into the Kapton was assumed to follow a linear proportionality to the origin/terminus depth toward the outer ground layer. Kapton thicknesses ranged from 0 to 200 microns; proton energies were set to match empirical studies, ranging from 70 to 250 MeV. Each setup was averaged over 1 million events using the FTFP-BERT 2.0 physics list. Results: With increasing proton energy, the gain of Cu+KA gradually converges to the limit of pure copper, with a relative error between 1.52% and 0.72%. The Ag layer created a more diverging behavior, accelerating the flux of negative charge into the device and increasing the relative error compared to pure copper from 1.21% to 1.63%. Conclusion: Gain vs. beam energy signatures were acquired for each device. Further analysis reveals a proportionality between insulator thickness and measured gain, albeit an inverse proportionality between beam energy and the in-flux of electrons. An increased silver grounding layer thickness also decreases the gain, though the relative error grows with beam energy, contrary to the Kapton layer.
Magnetic field errors tolerances of Nuclotron booster
NASA Astrophysics Data System (ADS)
Butenko, Andrey; Kazinova, Olha; Kostromin, Sergey; Mikhaylov, Vladimir; Tuzikov, Alexey; Khodzhibagiyan, Hamlet
2018-04-01
Generation of the magnetic field in the units of the booster synchrotron for the NICA project is one of the most important conditions for achieving the required parameters and high-quality accelerator operation. The linear and nonlinear dynamics of the ^197Au^31+ ion beam in the booster have been studied with the MADX program. An analytical estimation of the magnetic field error tolerances and a numerical computation of the dynamic aperture of the booster DFO magnetic lattice are presented. The closed orbit distortion due to random magnetic field errors and errors in the layout of booster units was evaluated.
An analysis of the least-squares problem for the DSN systematic pointing error model
NASA Technical Reports Server (NTRS)
Alvarez, L. S.
1991-01-01
A systematic pointing error model is used to calibrate antennas in the Deep Space Network. The least squares problem is described and analyzed, along with the solution methods used to determine the model's parameters. Specifically studied are the rank degeneracy problems resulting from beam pointing error measurement sets that incorporate inadequate sky coverage. A least squares parameter subset selection method is described, and its applicability to the systematic error modeling process is demonstrated on a Voyager 2 measurement distribution.
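One simple way the rank-degeneracy issue shows up, and one simple subset-style remedy (truncating poorly determined singular directions), can be demonstrated with synthetic data; the design matrix below is a made-up pointing model with deliberately narrow sky coverage, not the DSN model itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic pointing-model design matrix: 6 parameters, but poor sky coverage
# (narrow azimuth and elevation ranges) makes several columns nearly
# collinear, so the least-squares problem is ill conditioned.
az = rng.uniform(0.0, 0.5, 50)          # narrow azimuth range (radians)
el = rng.uniform(0.7, 0.9, 50)          # narrow elevation range (radians)
A = np.column_stack([np.ones_like(az), np.sin(az), np.cos(az),
                     np.sin(el), np.cos(el), np.sin(az) * np.sin(el)])
x_true = np.array([0.01, 0.02, -0.015, 0.005, 0.0, 0.01])
y = A @ x_true + 1e-4 * rng.standard_normal(az.size)

print("condition number:", np.linalg.cond(A))

# Keep only the directions that the data actually constrain: truncate small
# singular values before inverting (one simple form of parameter selection).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
keep = s > 1e-3 * s[0]
x_trunc = Vt[keep].T @ ((U[:, keep].T @ y) / s[keep])
print("estimated parameters:", x_trunc)
```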
Finite element modeling of light propagation in fruit under illumination of continuous-wave beam
USDA-ARS?s Scientific Manuscript database
Spatially-resolved spectroscopy provides a means for measuring the optical properties of biological tissues, based on analytical solutions to diffusion approximation for semi-infinite media under the normal illumination of infinitely small size light beam. The method is, however, prone to error in m...
Analysis of errors detected in external beam audit dosimetry program at Mexican radiotherapy centers
NASA Astrophysics Data System (ADS)
Álvarez-Romero, José T.; Tovar-Muñoz, Víctor M.
2012-10-01
The causes of the deviations observed in the pilot postal dosimetry audit program to verify the absorbed dose to water Dw in external beams from 60Co teletherapy units and/or linear accelerators at Mexican radiotherapy centers during the years 2007-2011 are presented and analyzed.
Fully Mechanically Controlled Automated Electron Microscopic Tomography
Liu, Jinxin; Li, Hongchang; Zhang, Lei; ...
2016-07-11
Knowledge of the three-dimensional (3D) structure of each individual particle of asymmetric and flexible proteins is essential for understanding those proteins' functions, but their structures are difficult to determine. Electron tomography (ET) provides a tool for imaging a single and unique biological object from a series of tilt angles, but it is challenging to image a single protein for three-dimensional (3D) reconstruction due to the imperfect mechanical control capability of the specimen goniometer under both a medium to high magnification (approximately 50,000-160,000×) and an optimized beam coherence condition. Here, we report a fully mechanical control method for automating ET data acquisition without using beam tilt/shift processes. This method avoids the accumulation of beam tilt/shift that was previously used to compensate for errors in the mechanical control but degraded the beam coherence. Our method was developed by minimizing the error of the target object center during the tilting process through a closed-loop proportional-integral (PI) control algorithm. Validations by both negative staining (NS) and cryo-electron microscopy (cryo-EM) suggest that this method has a capability comparable to other ET methods in tracking target proteins while maintaining optimized beam coherence conditions for imaging.
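A minimal sketch of the closed-loop idea, a proportional-integral controller driving the target-center error toward zero at each tilt step, is given below; the gains, the constant mechanical offset, and the single-axis geometry are all assumptions for illustration.

```python
class PIController:
    """Minimal proportional-integral controller for one stage axis."""
    def __init__(self, kp, ki):
        self.kp, self.ki = kp, ki
        self.integral = 0.0

    def update(self, error):
        self.integral += error
        return self.kp * error + self.ki * self.integral

# Hypothetical tracking loop: at each tilt step the goniometer introduces a
# constant mechanical offset, and the controller corrects the stage position
# so that the target centre returns toward the origin.
pi = PIController(kp=0.5, ki=0.2)
stage = 0.0
for step in range(10):
    mechanical_drift = 0.3            # offset introduced by the tilt (arb. units)
    target_center = stage + mechanical_drift
    error = 0.0 - target_center       # we want the target at the origin
    stage += pi.update(error)
    print(f"step {step}: centre error = {target_center:+.4f}")
```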
Evaluation of mean velocity and turbulence measurements with ADCPs
Nystrom, E.A.; Rehmann, C.R.; Oberg, K.A.
2007-01-01
To test the ability of acoustic Doppler current profilers (ADCPs) to measure turbulence, profiles measured with two pulse-to-pulse coherent ADCPs in a laboratory flume were compared to profiles measured with an acoustic Doppler velocimeter, and time series measured in the acoustic beam of the ADCPs were examined. A four-beam ADCP was used at a downstream station, while a three-beam ADCP was used at a downstream station and an upstream station. At the downstream station, where the turbulence intensity was low, both ADCPs reproduced the mean velocity profile well away from the flume boundaries; errors near the boundaries were due to transducer ringing, flow disturbance, and sidelobe interference. At the upstream station, where the turbulence intensity was higher, errors in the mean velocity were large. The four-beam ADCP measured the Reynolds stress profile accurately away from the bottom boundary, and these measurements can be used to estimate shear velocity. Estimates of Reynolds stress with a three-beam ADCP and turbulent kinetic energy with both ADCPs cannot be computed without further assumptions, and they are affected by flow inhomogeneity. Neither ADCP measured integral time scales to within 60%. © 2007 ASCE.
Aligning the magnetic field of a linear induction accelerator with a low-energy electron beam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, J.C.; Deadrick, F.J.; Kallman, J.S.
1989-03-10
The Experimental Test Accelerator II (ETA-II) linear induction accelerator at Lawrence Livermore National Laboratory uses a solenoid magnet in each acceleration cell to focus and transport an electron beam over the length of the accelerator. To control growth of the corkscrew mode, the magnetic field must be precisely aligned over the full length of the accelerator. Concentric with each solenoid magnet is a sine/cosine-wound correction coil to steer the beam and correct field errors. A low-energy electron probe traces the central flux line through the accelerator, referenced to a mechanical axis that is defined by a copropagating laser beam. Correction coils are activated to force the central flux line to cross the mechanical axis at the end of each acceleration cell. The ratios of correction coil currents determined by the low-energy electron probe are then kept fixed to correct for field errors during normal operation with an accelerated beam. We describe the construction of the low-energy electron probe and report the results of experiments we conducted to measure magnetic alignment with and without the correction coils activated. 5 refs., 3 figs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laloum, D., E-mail: david.laloum@cea.fr; CEA, LETI, MINATEC Campus, 17 rue des Martyrs, 38054 Grenoble Cedex 9; STMicroelectronics, 850 rue Jean Monnet, 38926 Crolles
2015-01-15
X-ray tomography is widely used in materials science. However, X-ray scanners are often based on polychromatic radiation that creates artifacts such as dark streaks. We show that this artifact is not always due to beam hardening. It may appear when scanning samples with high-Z elements inside a low-Z matrix because of the high-Z element's absorption edge: X-rays whose energy is above this edge are strongly absorbed, violating the exponential decay assumption of reconstruction algorithms and generating dark streaks. A method is proposed to limit the absorption edge effect and is applied to a microelectronic case to suppress dark streaks between interconnections.
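To see why a strongly absorbed spectral component violates the exponential-decay assumption, consider a toy two-component spectrum in which the above-edge component is attenuated an order of magnitude more strongly; the weights and attenuation coefficients below are invented for illustration only.

```python
import numpy as np

# Toy two-energy spectrum: below-edge and above-edge components with very
# different attenuation coefficients in a high-Z inclusion (values hypothetical).
weights = np.array([0.5, 0.5])        # spectral weights
mu = np.array([5.0, 50.0])            # attenuation coefficients (1/cm)

thickness = np.linspace(0.0, 0.2, 6)  # path length through the inclusion (cm)
I = (weights[None, :] * np.exp(-np.outer(thickness, mu))).sum(axis=1)
p = -np.log(I)                        # measured projection value

# For a monochromatic beam p would be proportional to thickness; here the
# above-edge component is absorbed first, so the effective attenuation drops
# with thickness and the reconstruction's exponential-decay assumption fails.
print(np.round(p / np.where(thickness > 0, thickness, 1.0), 3))
```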
Plasticity - Theory and finite element applications.
NASA Technical Reports Server (NTRS)
Armen, H., Jr.; Levine, H. S.
1972-01-01
A unified presentation is given of the development and distinctions associated with various incremental solution procedures used to solve the equations governing the nonlinear behavior of structures, and this is discussed within the framework of the finite-element method. Although the primary emphasis here is on material nonlinearities, consideration is also given to geometric nonlinearities acting separately or in combination with nonlinear material behavior. The methods discussed here are applicable to a broad spectrum of structures, ranging from simple beams to general three-dimensional bodies. The finite-element analysis methods for material nonlinearity are general in the sense that any of the available plasticity theories can be incorporated to treat strain hardening or ideally plastic behavior.
7 CFR 58.622 - Hardening and storage rooms.
Code of Federal Regulations, 2013 CFR
2013-01-01
Section 58.622 Hardening and storage rooms. Hardening and storage rooms for frozen desserts shall be constructed... insure adequate storage temperature (−10° or lower). Air shall be circulated to maintain uniform...
7 CFR 58.622 - Hardening and storage rooms.
Code of Federal Regulations, 2011 CFR
2011-01-01
Section 58.622 Hardening and storage rooms. Hardening and storage rooms for frozen desserts shall be constructed... insure adequate storage temperature (−10° or lower). Air shall be circulated to maintain uniform...
7 CFR 58.622 - Hardening and storage rooms.
Code of Federal Regulations, 2010 CFR
2010-01-01
Section 58.622 Hardening and storage rooms. Hardening and storage rooms for frozen desserts shall be constructed... insure adequate storage temperature (−10° or lower). Air shall be circulated to maintain uniform...
Yousefi, Masoud; Kashani, Fatemeh Dabbagh; Golmohammady, Shole; Mashal, Ahmad
2017-12-01
In this paper, the performance of underwater wireless optical communication (UWOC) links employing partially coherent flat-topped (PCFT) array laser beams has been investigated in detail. Because they provide high power, array laser beams are employed to increase the range of UWOC links. To characterize the effects of oceanic turbulence on the propagation behavior of the considered beam, an analytical expression for the cross-spectral density matrix elements and a semi-analytical one for the fourth-order statistical moment have been derived using the extended Huygens-Fresnel principle. Then, based on these expressions, the on-axis scintillation index of the considered beam propagating through weak oceanic turbulence has been calculated. Furthermore, in order to quantify the performance of the UWOC link, the average bit error rate (BER) has also been evaluated. The effects of several source factors and turbulent ocean parameters on the behavior of the scintillation index and the BER have been studied in detail. The results indicate that, in comparison with the Gaussian array beam, when the source size of the beamlets is larger than the first Fresnel zone, the PCFT array laser beam with higher flatness order has a lower scintillation index and hence a lower BER. Specifically, in terms of scintillation index reduction, PCFT array laser beams offer a considerable benefit over single PCFT or Gaussian laser beams and also over Gaussian array beams. All simulation results are presented graphically and analyzed in detail.
NASA Astrophysics Data System (ADS)
Gupta, R. K.; Anil Kumar, V.; Sukumaran, Arjun; Kumar, Vinod
2018-05-01
Electron beam welding of Ni-20Cr-9Mo-4Nb alloy sheets was carried out, and the high-temperature tensile behavior of the base metal and weldments was studied. Tensile properties were evaluated at ambient temperature, at elevated temperatures of 625 °C to 1025 °C, and at strain rates of 0.1 to 0.001 s-1. The microstructure of the weld consisted of a columnar dendritic structure and revealed an epitaxial mode of solidification. A weld efficiency of 90 pct in terms of strength (UTS) was observed at ambient temperature and up to an elevated temperature of 850 °C. Reduction in strength continued with further increase of test temperature (up to 1025 °C); however, a significant improvement in pct elongation was found up to 775 °C, which was sustained even at higher test temperatures. The tensile behaviors of base metal and weldments were similar at the elevated temperatures at the respective strain rates. The strain hardening exponent `n' of the base metal and weldment was 0.519. The activation energy `Q' of the base metal and EB weldments was 420 to 535 kJ mol-1 as determined through isothermal tensile tests and 625 to 662 kJ mol-1 through jump-temperature tensile tests. The strain rate sensitivity `m' was low (< 0.119) for the base metal and (< 0.164) for the weldment. The δ phase was revealed in specimens annealed at 700 °C, whereas twins and fully recrystallized grains were observed in specimens annealed at 1025 °C. Low-angle misorientation and strain localization in the welds and the HAZ during tensile testing at higher temperatures and strain rates indicate subgrain formation and recrystallization. The higher elongation in the weldment (at test temperatures > 775 °C) is attributed to the presence of recrystallized grains. Up to 700 °C, the deformation is through slip, where strain hardening is predominant and the effect of strain rate is minimal. Between 775 °C and 850 °C, strain hardening is counterbalanced by flow softening, where cavitation limits the deformation (predominantly at lower strain rates). Above 925 °C, flow softening is predominant, resulting in a significant reduction in strength. The presence of precipitates/accumulated strain at high strain rate results in high strength, but when the precipitates were coarsened at lower strain rates or dissolved at higher temperature, the result was a reduction in strength. Further, the accumulated strain assisted recrystallization, which also resulted in a reduction in strength.
1976-07-01
heating to temperatures below the Ac1 precipitates a copper-rich phase within the martensite, increasing hardness and strength. The stress relieving effect...experimental approach varied the heat treatment of two precipitation hardening martensitic alloys, 17-4 PH and 15-5 PH. Fatigue-crack growth data was...hardenable by precipitation hardening. Alloys that do harden by this mechanism have only one thing in common, that is, a decreasing solubility for one phase
Liskowitz, J.W.; Wecharatana, M.; Jaturapitakkul, C.; Cerkanowicz, A.E.
1997-10-28
The present invention relates to concrete, mortar and other hardenable mixtures comprising cement and fly ash for use in construction. The invention provides a method for increasing the rate of strength gain of a hardenable mixture containing fly ash by exposing the fly ash to an aqueous slurry of calcium oxide (lime) prior to its incorporation into the hardenable mixture. The invention further relates to such hardenable mixtures, e.g., concrete and mortar, that contain fly ash pre-reacted with calcium oxide. In particular, the fly ash is added to a slurry of calcium oxide in water, prior to incorporating the fly ash in a hardenable mixture. The hardenable mixture may be concrete or mortar. In a specific embodiment, mortar containing fly ash treated by exposure to an aqueous lime slurry is prepared and tested for compressive strength at early time points. 2 figs.
Liskowitz, John W.; Wecharatana, Methi; Jaturapitakkul, Chai; Cerkanowicz, deceased, Anthony E.
1997-01-01
The present invention relates to concrete, mortar and other hardenable mixtures comprising cement and fly ash for use in construction. The invention provides a method for increasing the rate of strength gain of a hardenable mixture containing fly ash by exposing the fly ash to an aqueous slurry of calcium oxide (lime) prior to its incorporation into the hardenable mixture. The invention further relates to such hardenable mixtures, e.g., concrete and mortar, that contain fly ash pre-reacted with calcium oxide. In particular, the fly ash is added to a slurry of calcium oxide in water, prior to incorporating the fly ash in a hardenable mixture. The hardenable mixture may be concrete or mortar. In a specific embodiment, mortar containing fly ash treated by exposure to an aqueous lime slurry is prepared and tested for compressive strength at early time points.
Akino, Yuichi; Sumida, Iori; Shiomi, Hiroya; Higashinaka, Naokazu; Murashima, Yoshiichi; Hayashida, Miori; Mabuchi, Nobuhisa; Ogawa, Kazuhiko
2018-06-01
The Synchrony™ Respiratory Tracking System of the CyberKnife® Robotic Radiosurgery System (Accuray, Inc., Sunnyvale CA) enables real-time tracking of moving targets such as lung and liver tumors during radiotherapy. Although film measurements have been used for quality assurance of the tracking system, they cannot evaluate the temporal tracking accuracy. We have developed a verification system using a plastic scintillator that can evaluate the temporal accuracy of the CyberKnife Synchrony. A phantom consisting of a U-shaped plastic frame with three fiducial markers was used. The phantom was moved on a plastic scintillator plate. To identify the phantom position on the recorded video in darkness, four pieces of fluorescent tape representing the corners of a 10 cm × 10 cm square around an 8 cm × 8 cm window were attached to the phantom. For a stable respiration model, the phantom was moved with the fourth power of a sinusoidal wave with breathing cycles of 4, 3, and 2 s and an amplitude of 1 cm. To simulate irregular breathing, the respiratory cycle was varied with Gaussian random numbers. A virtual target was generated at the center of the fluorescent markers using the MultiPlan™ treatment planning system. Photon beams were delivered using a fiducial tracking technique. In a dark room, the fluorescent light of the markers and the scintillation light of the beam position were recorded using a camera. For each video frame, a homography matrix was calculated from the four fluorescent marker positions, and the beam position derived from the scintillation light was corrected. To correct the displacement of the beam position due to oblique irradiation angles and other systematic measurement errors, offset values were derived from measurements with the phantom held stationary. The average SDs of the beam position measured without phantom motion were 0.16 mm and 0.20 mm for the lateral and longitudinal directions, respectively. For the stable respiration model, the tracking errors (mean ± SD) were 0.40 ± 0.64 mm, -0.07 ± 0.79 mm, and 0.45 ± 1.14 mm for breathing cycles of 4, 3, and 2 s, respectively. The tracking errors showed significant linear correlation with the phantom velocity. The correlation coefficients were 0.897, 0.913, and 0.957 for breathing cycles of 4, 3, and 2 s, respectively. The unstable respiration model also showed linear correlation between tracking errors and phantom velocity. The probability of tracking error incidents increased with decreasing length of the respiratory cycles. Although the tracking error incidents increased with larger variations in respiratory cycle, the effect on the cumulative probability was insignificant. For a respiratory cycle of 4 s, the maximum tracking error was 1.10 mm and 1.43 mm at probabilities of 10% and 5%, respectively. Large tracking errors were observed when there was a phase shift between the tumor and the LED marker. This technique allows evaluation of the motion tracking accuracy of the Synchrony™ system over time by measurement of the photon beam. The velocity of the target and phase shift have significant effects on accuracy.
A Method to Improve Electron Density Measurement of Cone-Beam CT Using Dual Energy Technique
Men, Kuo; Dai, Jian-Rong; Li, Ming-Hui; Chen, Xin-Yuan; Zhang, Ke; Tian, Yuan; Huang, Peng; Xu, Ying-Jie
2015-01-01
Purpose. To develop a dual energy imaging method to improve the accuracy of electron density measurement with a cone-beam CT (CBCT) device. Materials and Methods. The imaging system is the XVI CBCT system on an Elekta Synergy linac. Projection data were acquired with the high and low energy X-ray beams, respectively, to set up a basis material decomposition model. Virtual phantom simulations and phantom experiments were carried out for quantitative evaluation of the method. Phantoms were scanned twice, with the high and low energy X-ray beams, respectively. The data were decomposed into projections of the two basis material coefficients according to the model set up earlier. The two sets of decomposed projections were used to reconstruct CBCT images of the basis material coefficients. Then, the images of electron densities were calculated from these CBCT images. Results. The difference between the calculated and theoretical values was within 2%, and their correlation coefficient was about 1.0. The dual energy imaging method obtained more accurate electron density values and markedly reduced the beam hardening artifacts. Conclusion. A novel dual energy CBCT imaging method to calculate electron densities was developed. It can acquire more accurate values and potentially provide a platform for dose calculation. PMID:26346510
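The basis material decomposition step described above can be pictured with a minimal numerical sketch. This is not the authors' implementation: the per-energy attenuation values and electron densities below are illustrative placeholders, and a real calibration would derive them from scans of the basis materials at the two beam energies.

```python
import numpy as np

# Hypothetical basis-material attenuation coefficients (1/cm) at the two beam
# energies; real values would come from calibration scans of the basis materials.
MU_BASIS = np.array([[0.40, 0.22],    # water-like basis:  [mu_low, mu_high]
                     [1.10, 0.48]])   # bone-like basis:   [mu_low, mu_high]

# Electron densities of the basis materials (electrons/cm^3), illustrative only.
RHO_E_BASIS = np.array([3.34e23, 5.91e23])

def decompose(mu_low, mu_high):
    """Solve the 2x2 system for the two basis-material fractions."""
    A = MU_BASIS.T                      # rows: energies, columns: basis materials
    b = np.array([mu_low, mu_high])
    return np.linalg.solve(A, b)

def electron_density(mu_low, mu_high):
    """Electron density as the basis-weighted sum of the basis electron densities."""
    return decompose(mu_low, mu_high) @ RHO_E_BASIS

# A voxel measuring (0.45, 0.24) 1/cm at the two energies decomposes to mostly water
print(electron_density(0.45, 0.24))
```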
McColl, G.; Hoffmann, A. A.; McKechnie, S. W.
1996-01-01
To identify genes involved in stress resistance and heat hardening, replicate lines of Drosophila melanogaster were selected for increased resistance to knockdown by a 39° heat stress. Two selective regimes were used, one with and one without prior hardening. Mean knockdown times were increased from ~5 min to >20 min after 18 generations. Initial realized heritabilities were as high as 10% for lines selected without hardening, and crosses between lines indicated simple additive gene effects for the selected phenotypes. To survey allelic variation and correlated selection responses in two candidate stress genes, hsr-omega and hsp68, we applied denaturing gradient gel electrophoresis to amplified DNA sequences from small regions of these genes. After eight generations of selection, allele frequencies at both loci showed correlated responses for selection following hardening, but not without hardening. The hardening process itself was associated with a hsp68 frequency change in the opposite direction to that associated with selection that followed hardening. These stress loci are closely linked on chromosome III, and the hardening selection established a disequilibrium, suggesting an epistatic effect on resistance. The data indicate that molecular variation in both hsr-omega and hsp68 contribute to natural heritable variation for hardened heat resistance. PMID:8844150
Villar-Salvador, Pedro; Planelles, Rosa; Oliet, Juan; Peñuelas-Rubira, Juan L; Jacobs, Douglass F; González, Magdalena
2004-10-01
Drought stress is the main cause of mortality of holm oak (Quercus ilex L.) seedlings in forest plantations. We therefore assessed if drought hardening, applied in the nursery at the end of the growing season, enhanced the drought tolerance and transplanting performance of holm oak seedlings. Seedlings were subjected to three drought hardening intensities (low, moderate and severe) for 2.5 and 3.5 months, and compared with control seedlings. At the end of the hardening period, water relations, gas exchange and morphological attributes were determined, and survival and growth under mesic and xeric transplanting conditions were assessed. Drought hardening increased drought tolerance primarily by affecting physiological traits, with no effect on shoot/root ratio or specific leaf mass. Drought hardening reduced osmotic potential at saturation and at the turgor loss point, stomatal conductance, residual transpiration (RT) and new root growth capacity (RGC), but enhanced cell membrane stability. Among treated seedlings, the largest response occurred in seedlings subjected to moderate hardening. Severe hardening reduced shoot soluble sugar concentration and increased shoot starch concentration. Increasing the duration of hardening had no effect on water relations but reduced shoot mineral and starch concentrations. Variation in cell membrane stability, RT and RGC were negatively related to osmotic adjustment. Despite differences in drought tolerance, no differences in mortality and relative growth rate were observed between hardening treatments when the seedlings were transplanted under either mesic or xeric conditions.
High spatial precision nano-imaging of polarization-sensitive plasmonic particles
NASA Astrophysics Data System (ADS)
Liu, Yunbo; Wang, Yipei; Lee, Somin Eunice
2018-02-01
Precise polarimetric imaging of polarization-sensitive nanoparticles is essential for resolving their accurate spatial positions beyond the diffraction limit. However, conventional technologies currently suffer from beam deviation errors which cannot be corrected beyond the diffraction limit. To overcome this issue, we experimentally demonstrate a spatially stable nano-imaging system for polarization-sensitive nanoparticles. In this study, we show that by integrating a voltage-tunable imaging variable polarizer with optical microscopy, we are able to suppress beam deviation errors. We expect that this nano-imaging system should allow for acquisition of accurate positional and polarization information from individual nanoparticles in applications where real-time, high precision spatial information is required.
Detection of IMRT delivery errors based on a simple constancy check of transit dose by using an EPID
NASA Astrophysics Data System (ADS)
Baek, Tae Seong; Chung, Eun Ji; Son, Jaeman; Yoon, Myonggeun
2015-11-01
Beam delivery errors during intensity modulated radiotherapy (IMRT) were detected based on a simple constancy check of the transit dose by using an electronic portal imaging device (EPID). Twenty-one IMRT plans were selected from various treatment sites, and the transit doses during treatment were measured by using an EPID. Transit doses were measured 11 times for each course of treatment, and the constancy check was based on gamma index (3%/3 mm) comparisons between a reference dose map (the first measured transit dose) and test dose maps (the following ten measured dose maps). In a simulation using an anthropomorphic phantom, the average passing rate of the tested transit dose was 100% for three representative treatment sites (head & neck, chest, and pelvis), indicating that IMRT was highly constant for normal beam delivery. The average passing rate of the transit dose for 1224 IMRT fields from 21 actual patients was 97.6% ± 2.5%, with the lower rate possibly being due to inaccuracies of patient positioning or anatomic changes. An EPID-based simple constancy check may provide information about IMRT beam delivery errors during treatment.
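A brute-force sketch of the kind of global 3%/3 mm gamma comparison described above is shown below. The low-dose cut-off and the global normalization to the reference maximum are common conventions, not details taken from this abstract, and a clinical implementation would be considerably faster and more careful about interpolation.

```python
import numpy as np

def gamma_pass_rate(ref, test, pixel_mm=1.0, dose_crit=0.03, dist_crit_mm=3.0):
    """Global 3%/3 mm gamma passing rate between two 2-D dose maps (brute force)."""
    ny, nx = ref.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    dose_norm = dose_crit * ref.max()          # global dose criterion
    passed = evaluated = 0
    for iy in range(ny):
        for ix in range(nx):
            if ref[iy, ix] < 0.1 * ref.max():  # skip low-dose region (convention)
                continue
            dist2 = ((yy - iy) ** 2 + (xx - ix) ** 2) * pixel_mm ** 2
            dose2 = (test - ref[iy, ix]) ** 2
            gamma2 = dist2 / dist_crit_mm ** 2 + dose2 / dose_norm ** 2
            evaluated += 1
            passed += gamma2.min() <= 1.0      # gamma <= 1 means this point passes
    return 100.0 * passed / max(evaluated, 1)

# Identical reference and test maps pass 100%
ref = np.random.rand(64, 64) + 1.0
print(gamma_pass_rate(ref, ref))
```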
NASA Astrophysics Data System (ADS)
Yang, Yanqiu; Yu, Lin; Zhang, Yixin
2017-04-01
A model of the average capacity of an optical wireless communication link with pointing errors for ground-to-train communication along a curved track is established based on non-Kolmogorov turbulence. By adopting the gamma-gamma distribution model, we derive the average capacity expression for this channel. The numerical analysis reveals that heavier fog reduces the average capacity of the link. The strength of atmospheric turbulence, the variance of the pointing errors, and the covered track length need to be reduced for a larger average capacity, while the normalized beamwidth and the average signal-to-noise ratio (SNR) of the turbulence-free link need to be increased. We can increase the transmit aperture to expand the beamwidth and enhance the signal intensity, thereby decreasing the impact of beam wander accordingly. If the system adopts automatic beam tracking at a receiver positioned on the roof of the train, eliminating the pointing errors caused by beam wander and train vibration, the equivalent average capacity of the channel reaches a maximum value. The impact of variations in the non-Kolmogorov spectral index on the average capacity of the link can be ignored.
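The basic ingredient of such an average-capacity calculation is the expectation of the instantaneous capacity over the irradiance fading distribution. The sketch below uses only the unit-mean gamma-gamma turbulence term and omits the pointing-error and fog attenuation factors of the full model; the α, β and SNR values are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv, gamma as G

def gamma_gamma_pdf(I, alpha, beta):
    """Gamma-gamma irradiance pdf with unit mean (turbulence-only fading)."""
    ab = alpha * beta
    return (2 * ab ** ((alpha + beta) / 2) / (G(alpha) * G(beta))
            * I ** ((alpha + beta) / 2 - 1) * kv(alpha - beta, 2 * np.sqrt(ab * I)))

def average_capacity(snr0, alpha, beta):
    """<C> = E_I[log2(1 + snr0 * I)] in bits/s/Hz, averaged over the fading."""
    integrand = lambda I: np.log2(1 + snr0 * I) * gamma_gamma_pdf(I, alpha, beta)
    val, _ = quad(integrand, 1e-9, 50)
    return val

# Illustrative turbulence parameters and turbulence-free SNR
print(average_capacity(snr0=100.0, alpha=4.0, beta=2.0))
```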
Investigation of beam self-polarization in the future e + e - circular collider
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gianfelice-Wendt, E.
The use of resonant depolarization has been suggested for precise beam energy measurements (better than 100 keV) in the e +e - Future Circular Collider (FCC-e +e -) for Z and WW physics at 45 and 80 GeV beam energy, respectively. Longitudinal beam polarization would benefit the Z peak physics program; however, it is not essential and therefore is not investigated here. In this paper the possibility of self-polarized leptons is considered, and preliminary results of simulations in the presence of quadrupole misalignments and beam position monitor (BPM) errors for a simplified FCC-e +e - ring are presented.
Investigation of beam self-polarization in the future e + e - circular collider
Gianfelice-Wendt, E.
2016-10-24
The use of resonant depolarization has been suggested for precise beam energy measurements (better than 100 keV) in the e +e - Future Circular Collider (FCC-e +e -) for Z and WW physics at 45 and 80 GeV beam energy, respectively. Longitudinal beam polarization would benefit the Z peak physics program; however, it is not essential and therefore is not investigated here. In this paper the possibility of self-polarized leptons is considered, and preliminary results of simulations in the presence of quadrupole misalignments and beam position monitor (BPM) errors for a simplified FCC-e +e - ring are presented.
Investigation of beam self-polarization in the future e+e- circular collider
NASA Astrophysics Data System (ADS)
Gianfelice-Wendt, E.
2016-10-01
The use of resonant depolarization has been suggested for precise beam energy measurements (better than 100 keV) in the e+e- Future Circular Collider (FCC-e+e-) for Z and W W physics at 45 and 80 GeV beam energy, respectively. Longitudinal beam polarization would benefit the Z peak physics program; however, it is not essential and therefore is not investigated here. In this paper the possibility of self-polarized leptons is considered. Preliminary results of simulations in the presence of quadrupole misalignments and beam position monitor (BPM) errors for a simplified FCC-e+e- ring are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
So, Aaron, E-mail: aso@robarts.ca
Purpose: The authors investigated the performance of a recently introduced 160-mm/256-row CT system for low dose quantitative myocardial perfusion (MP) imaging of the whole heart. This platform is equipped with a gantry capable of rotating at 280 ms per full cycle, a second generation of adaptive statistical iterative reconstruction (ASiR-V) to correct for image noise arising from low tube voltage potential/tube current dynamic scanning, and image reconstruction algorithms to tackle beam-hardening, cone-beam, and partial-scan effects. Methods: Phantom studies were performed to investigate the effectiveness of image noise and artifact reduction with a GE Healthcare Revolution CT system for three acquisition protocols used in quantitative CT MP imaging: 100, 120, and 140 kVp/25 mAs. The heart chambers of an anthropomorphic chest phantom were filled with iodinated contrast solution at different concentrations (contrast levels) to simulate the circulation of contrast through the heart in quantitative CT MP imaging. To evaluate beam-hardening correction, the phantom was scanned at each contrast level to measure the changes in CT number (in Hounsfield unit or HU) in the water-filled region surrounding the heart chambers with respect to baseline. To evaluate cone-beam artifact correction, differences in mean water HU between the central and peripheral slices were compared. Partial-scan artifact correction was evaluated from the fluctuation of mean water HU in successive partial scans. To evaluate image noise reduction, a small hollow region adjacent to the heart chambers was filled with diluted contrast, and contrast-to-noise ratio in the region before and after noise correction with ASiR-V was compared. The quality of MP maps acquired with the CT system was also evaluated in porcine CT MP studies. Myocardial infarct was induced in a farm pig from a transient occlusion of the distal left anterior descending (LAD) artery with a catheter-based interventional procedure. MP maps were generated from the dynamic contrast-enhanced (DCE) heart images taken at baseline and three weeks after the ischemic insult. Results: Their results showed that the phantom and animal images acquired with the CT platform were minimally affected by image noise and artifacts. For the beam-hardening phantom study, changes in water HU in the wall surrounding the heart chambers greatly reduced from >±30 to ≤ ± 5 HU at all kVp settings except one region at 100 kVp (7 HU). For the cone-beam phantom study, differences in mean water HU from the central slice were less than 5 HU at two peripheral slices with each 4 cm away from the central slice. These findings were reproducible in the pig DCE images at two peripheral slices that were 6 cm away from the central slice. For the partial-scan phantom study, standard deviations of the mean water HU in 10 successive partial scans were less than 5 HU at the central slice. Similar observations were made in the pig DCE images at two peripheral slices with each 6 cm away from the central slice. For the image noise phantom study, CNRs in the ASiR-V images were statistically higher (p < 0.05) than the non-ASiR-V images at all kVp settings. MP maps generated from the porcine DCE images were in excellent quality, with the ischemia in the LAD territory clearly seen in the three orthogonal views. Conclusions: The study demonstrates that this CT system can provide accurate and reproducible CT numbers during cardiac gated acquisitions across a wide axial field of view.
This CT number fidelity will enable this imaging tool to assess contrast enhancement, potentially providing valuable added information beyond anatomic evaluation of coronary stenoses. Furthermore, their results collectively suggested that the 100 kVp/25 mAs protocol run on this CT system provides sufficient image accuracy at a low radiation dose (<3 mSv) for whole-heart quantitative CT MP imaging.
Real-Time Phase Correction Based on FPGA in the Beam Position and Phase Measurement System
NASA Astrophysics Data System (ADS)
Gao, Xingshun; Zhao, Lei; Liu, Jinxin; Jiang, Zouyi; Hu, Xiaofang; Liu, Shubin; An, Qi
2016-12-01
A fully digital beam position and phase measurement (BPPM) system was designed for the linear accelerator (LINAC) in Accelerator Driven Sub-critical System (ADS) in China. Phase information is obtained from the summed signals from four pick-ups of the Beam Position Monitor (BPM). Considering that the delay variations of different analog circuit channels would introduce phase measurement errors, we propose a new method to tune the digital waveforms of four channels before summation and achieve real-time error correction. The process is based on the vector rotation method and implemented within one single Field Programmable Gate Array (FPGA) device. Tests were conducted to evaluate this correction method and the results indicate that a phase correction precision better than ± 0.3° over the dynamic range from -60 dBm to 0 dBm is achieved.
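The vector-rotation correction described above can be pictured with a small off-line sketch: each channel's digitized waveform is rotated by a calibration phase before the four channels are summed, so inter-channel delay differences cancel. The calibration phases, the four-channel layout, and the floating-point arithmetic below are illustrative assumptions; the real system performs this step in fixed point inside the FPGA.

```python
import numpy as np

# Hypothetical per-channel calibration phases (radians), modeling the delay
# differences of the four analog front-ends feeding the digitizer.
CAL_PHASE = np.array([0.00, 0.12, -0.08, 0.05])

def corrected_sum(iq_channels):
    """Rotate each channel's IQ samples by its calibration phase, then sum.

    iq_channels: complex array of shape (4, n_samples) from the four BPM pick-ups.
    """
    rot = np.exp(-1j * CAL_PHASE)[:, None]   # vector-rotation correction factors
    return (iq_channels * rot).sum(axis=0)

# Beam phase is the argument of the summed vector; after correction all four
# channels add coherently and the true phase (0.7 rad here) is recovered.
iq = np.exp(1j * (0.7 + CAL_PHASE))[:, None] * np.ones((4, 8))
print(np.angle(corrected_sum(iq))[0])        # ~0.7 rad
```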
Investigation of Fiber Optics Based Phased Locked Diode Lasers
NASA Technical Reports Server (NTRS)
Burke, Paul D.; Gregory, Don A.
1997-01-01
Optical power beaming requires a high intensity source and a system to address beam phase and location. A synthetic aperture array of phased locked sources can provide the necessary power levels as well as a means to correct for phase errors. A fiber optic phase modulator with a master oscillator and power amplifier (MOPA) using an injection-locking semiconductor optical amplifier has proven to be effective in correcting phase errors as large as 4pi in an interferometer system. Phase corrections with the piezoelectric fiber stretcher were made from 0 - 10 kHz, with most application oriented corrections requiring only 1 kHz. The amplifier did not lose locked power output while the phase was changed, however its performance was below expectation. Results of this investigation indicate fiber stretchers and amplifiers can be incorporated into a MOPA system to achieve successful earth based power beaming.
NASA Astrophysics Data System (ADS)
Han, Jianguang; Wang, Yun; Yu, Changqing; Chen, Peng
2017-02-01
An approach for extracting angle-domain common-image gathers (ADCIGs) from anisotropic Gaussian beam prestack depth migration (GB-PSDM) is presented in this paper. The propagation angle is calculated in the process of migration using the real-value traveltime information of Gaussian beam. Based on the above, we further investigate the effects of anisotropy on GB-PSDM, where the corresponding ADCIGs are extracted to assess the quality of migration images. The test results of the VTI syncline model and the TTI thrust sheet model show that anisotropic parameters ɛ, δ, and tilt angle 𝜃, have a great influence on the accuracy of the migrated image in anisotropic media, and ignoring any one of them will cause obvious imaging errors. The anisotropic GB-PSDM with the true anisotropic parameters can obtain more accurate seismic images of subsurface structures in anisotropic media.
Temperature feedback control for long-term carrier-envelope phase locking
Chang, Zenghu [Manhattan, KS; Yun, Chenxia [Manhattan, KS; Chen, Shouyuan [Manhattan, KS; Wang, He [Manhattan, KS; Chini, Michael [Manhattan, KS
2012-07-24
A feedback control module for stabilizing a carrier-envelope phase of an output of a laser oscillator system comprises a first photodetector, a second photodetector, a phase stabilizer, an optical modulator, and a thermal control element. The first photodetector may generate a first feedback signal corresponding to a first portion of a laser beam from an oscillator. The second photodetector may generate a second feedback signal corresponding to a second portion of the laser beam filtered by a low-pass filter. The phase stabilizer may divide the frequency of the first feedback signal by a factor and generate an error signal corresponding to the difference between the frequency-divided first feedback signal and the second feedback signal. The optical modulator may modulate the laser beam within the oscillator corresponding to the error signal. The thermal control unit may change the temperature of the oscillator corresponding to a signal operable to control the optical modulator.
Design and testing of focusing magnets for a compact electron linac
NASA Astrophysics Data System (ADS)
Chen, Qushan; Qin, Bin; Liu, Kaifeng; Liu, Xu; Fu, Qiang; Tan, Ping; Hu, Tongning; Pei, Yuanji
2015-10-01
Solenoid field errors have a great influence on electron beam quality. In this paper, the design and testing of high-precision solenoids for a compact electron linac are presented. We propose an efficient and practical method, based on the reduced envelope equation, to solve for the peak solenoid field for relativistic electron beams. Beam dynamics simulations including space charge forces were performed to predict the focusing effects. Detailed optimization methods are introduced to achieve an ultra-compact configuration as well as high accuracy, with the help of the POISSON and OPERA packages. Efforts were made to suppress systematic errors in the off-line testing, which showed that the short lens and the main solenoid produced peak fields of 0.13 T and 0.21 T, respectively. Data analysis of the central and off-axis fields was carried out and demonstrated that the test results agreed well with the design.
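As a rough illustration of the kind of estimate a reduced envelope equation provides: for a matched (constant-radius) beam with space charge neglected, the smooth envelope equation r'' + k_L^2 r - eps^2/r^3 = 0 gives k_L = eps/r^2 with k_L = B/(2*Brho). This is a generic matched-beam relation, not the authors' exact formulation, and the numbers in the example are illustrative.

```python
import numpy as np

C = 299_792_458.0     # speed of light (m/s)
ME_EV = 0.511e6       # electron rest energy (eV)

def solenoid_peak_field(kinetic_energy_ev, geo_emittance_m_rad, beam_radius_m):
    """Matched-beam solenoid field estimate, neglecting space charge.

    From r'' + kL^2 r - eps^2 / r^3 = 0, a matched beam needs kL = eps / r^2,
    and kL = B / (2 * Brho) where Brho = p/q is the beam rigidity in T*m.
    """
    gamma = 1.0 + kinetic_energy_ev / ME_EV
    beta = np.sqrt(1.0 - 1.0 / gamma ** 2)
    brho = beta * gamma * ME_EV / C            # rigidity of an electron beam (T*m)
    return 2.0 * brho * geo_emittance_m_rad / beam_radius_m ** 2

# e.g. 5 MeV beam, 5 mm-mrad geometric emittance, 3 mm radius (illustrative values)
print(solenoid_peak_field(5e6, 5e-6, 3e-3))
```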
Configuration study for a 30 GHz monolithic receive array, volume 1
NASA Technical Reports Server (NTRS)
Nester, W. H.; Cleaveland, B.; Edward, B.; Gotkis, S.; Hesserbacker, G.; Loh, J.; Mitchell, B.
1984-01-01
Gregorian, Cassegrain, and single reflector systems were analyzed in configuration studies for communications satellite receive antennas. Parametric design and performance curves were generated. A preliminary design of each reflector/feed system was derived including radiating elements, beam-former network, beamsteering system, and MMIC module architecture. Performance estimates and component requirements were developed for each design. A recommended design was selected for both the scanning beam and the fixed beam case. Detailed design and performance analysis results are presented for the selected Cassegrain configurations. The final design point is characterized in detail and performance measures evaluated in terms of gain, sidelobe level, noise figure, carrier-to-interference ratio, prime power, and beamsteering. The effects of mutual coupling and excitation errors (including phase and amplitude quantization errors) are evaluated. Mechanical assembly drawings are given for the final design point. Thermal design requirements are addressed in the mechanical design.
Parity Nonconservation in Proton-Proton and Proton-Water Scattering at 1.5 GeV/c
DOE R&D Accomplishments Database
Mischke, R. E.; Bowman, J. D.; Carlini, R.; MacArthur, D.; Nagle, D. E.; Frauenfelder, H.; Harper, R. W.; Yuan, V.; McDonald, A. B.; Talaga, R. L.
1984-07-01
Experiments searching for parity nonconservation in the scattering of 1.5 GeV/c (800 MeV) polarized protons from an unpolarized water target and a liquid hydrogen target are described. The intensity of the incident proton beam was measured upstream and downstream of the target by a pair of ionization detectors. The beam helicity was reversed at a 30-Hz rate. Auxiliary detectors monitored beam properties that could give rise to false effects. The result for the longitudinal asymmetry from the water is A{sub L} = (1.7 +- 3.3 +- 1.4) x 10{sup -7}, where the first error is statistical and the second is an estimate of systematic effects. The hydrogen data yield a preliminary result of A{sub L} = (1.0 +- 1.6) x 10{sup -7}. The systematic errors for p-p are expected to be < 1 x 10{sup -7}.
NASA Technical Reports Server (NTRS)
Beck, S. M.
1975-01-01
A mobile self-contained Faraday cup system for beam current measurements of nominal 600 MeV protons was designed, constructed, and used at the NASA Space Radiation Effects Laboratory. The cup is of reentrant design with a length of 106.7 cm and an outside diameter of 20.32 cm. The inner diameter is 15.24 cm and the base thickness is 30.48 cm. The primary absorber is commercially available lead hermetically sealed in a 0.32-cm-thick copper jacket. Several possible systematic errors in using the cup are evaluated. The largest source of error arises from high-energy electrons which are ejected from the entrance window and enter the cup. A total systematic error of -0.83 percent is calculated to be the decrease from the true current value. From data obtained in calibrating helium-filled ion chambers with the Faraday cup, the mean energy required to produce one ion pair in helium is found to be 30.76 ± 0.95 eV for nominal 600 MeV protons. This value agrees well, within experimental error, with reported values of 29.9 eV and 30.2 eV.
Comprehensive surface treatment of high-speed steel tool
NASA Astrophysics Data System (ADS)
Fedorov, Sergey V.; Aleshin, Sergey V.; Swe, Min Htet; Abdirova, Raushan D.; Kapitanov, Alexey V.; Egorov, Sergey B.
2018-03-01
One of the promising directions for hardening high-speed steel tools is the creation on their surfaces of layered structures with a gradient of physico-chemical properties from the wear-resistant coating to the base material. Among the methods of such surface modification, a special place is held by processes based on the use of pulsed high-intensity charged particle beams. The high speed of heating and cooling allows structural-phase transformations in the surface layer that cannot be realized in a stationary mode. The treatment was conducted in a RITM-SP unit, which is a combination of a source of low-energy high-current electron beams "RITM" and two magnetron sputtering systems on a single vacuum chamber. The unit enables deposition of films on the surface of the desired product and subsequent liquid-phase mixing of the materials of the film and the substrate by an intense pulsed electron beam. The article discusses features of the structure of the subsurface layer of high-speed steel M2, modified by surface alloying with a low-energy high-current electron beam, and its effect on the wear resistance of the tool when dry cutting a hard-to-machine nickel alloy. A significant decrease in the wear intensity of the high-speed steel with the combined treatment occurs due to the displacement of the wear zone and a decrease in the cutting-edge rounding radius because of changes in the conditions of interaction with the machined material.
Lundquist, J. K.; Churchfield, M. J.; Lee, S.; ...
2015-02-23
Wind-profiling lidars are now regularly used in boundary-layer meteorology and in applications such as wind energy and air quality. Lidar wind profilers exploit the Doppler shift of laser light backscattered from particulates carried by the wind to measure a line-of-sight (LOS) velocity. The Doppler beam swinging (DBS) technique, used by many commercial systems, considers measurements of this LOS velocity in multiple radial directions in order to estimate horizontal and vertical winds. The method relies on the assumption of homogeneous flow across the region sampled by the beams. Using such a system in inhomogeneous flow, such as wind turbine wakes or complex terrain, will result in errors. To quantify the errors expected from such violation of the assumption of horizontal homogeneity, we simulate inhomogeneous flow in the atmospheric boundary layer, notably stably stratified flow past a wind turbine, with a mean wind speed of 6.5 m s-1 at the turbine hub-height of 80 m. This slightly stable case results in 15° of wind direction change across the turbine rotor disk. The resulting flow field is sampled in the same fashion that a lidar samples the atmosphere with the DBS approach, including the lidar range weighting function, enabling quantification of the error in the DBS observations. The observations from the instruments located upwind have small errors, which are ameliorated with time averaging. However, the downwind observations, particularly within the first two rotor diameters downwind from the wind turbine, suffer from errors due to the heterogeneity of the wind turbine wake. Errors in the stream-wise component of the flow approach 30% of the hub-height inflow wind speed close to the rotor disk. Errors in the cross-stream and vertical velocity components are also significant: cross-stream component errors are on the order of 15% of the hub-height inflow wind speed (1.0 m s-1) and errors in the vertical velocity measurement exceed the actual vertical velocity. By three rotor diameters downwind, DBS-based assessments of wake wind speed deficits based on the stream-wise velocity can be relied on even within the near wake to within 1.0 m s-1 (or 15% of the hub-height inflow wind speed), and the cross-stream velocity error is reduced to 8%, while vertical velocity estimates are compromised. Furthermore, measurements of inhomogeneous flow such as wind turbine wakes are susceptible to these errors, and interpretations of field observations should account for this uncertainty.
NASA Astrophysics Data System (ADS)
Lundquist, J. K.; Churchfield, M. J.; Lee, S.; Clifton, A.
2015-02-01
Wind-profiling lidars are now regularly used in boundary-layer meteorology and in applications such as wind energy and air quality. Lidar wind profilers exploit the Doppler shift of laser light backscattered from particulates carried by the wind to measure a line-of-sight (LOS) velocity. The Doppler beam swinging (DBS) technique, used by many commercial systems, considers measurements of this LOS velocity in multiple radial directions in order to estimate horizontal and vertical winds. The method relies on the assumption of homogeneous flow across the region sampled by the beams. Using such a system in inhomogeneous flow, such as wind turbine wakes or complex terrain, will result in errors. To quantify the errors expected from such violation of the assumption of horizontal homogeneity, we simulate inhomogeneous flow in the atmospheric boundary layer, notably stably stratified flow past a wind turbine, with a mean wind speed of 6.5 m s-1 at the turbine hub-height of 80 m. This slightly stable case results in 15° of wind direction change across the turbine rotor disk. The resulting flow field is sampled in the same fashion that a lidar samples the atmosphere with the DBS approach, including the lidar range weighting function, enabling quantification of the error in the DBS observations. The observations from the instruments located upwind have small errors, which are ameliorated with time averaging. However, the downwind observations, particularly within the first two rotor diameters downwind from the wind turbine, suffer from errors due to the heterogeneity of the wind turbine wake. Errors in the stream-wise component of the flow approach 30% of the hub-height inflow wind speed close to the rotor disk. Errors in the cross-stream and vertical velocity components are also significant: cross-stream component errors are on the order of 15% of the hub-height inflow wind speed (1.0 m s-1) and errors in the vertical velocity measurement exceed the actual vertical velocity. By three rotor diameters downwind, DBS-based assessments of wake wind speed deficits based on the stream-wise velocity can be relied on even within the near wake within 1.0 m s-1 (or 15% of the hub-height inflow wind speed), and the cross-stream velocity error is reduced to 8% while vertical velocity estimates are compromised. Measurements of inhomogeneous flow such as wind turbine wakes are susceptible to these errors, and interpretations of field observations should account for this uncertainty.
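The DBS retrieval discussed in the two abstracts above amounts to a small geometric least-squares problem: each beam measures the projection of the same (u, v, w) wind vector onto its pointing direction, and the homogeneity assumption is exactly the claim that one vector fits all beams. The sketch below assumes a typical four-beam geometry with azimuths measured clockwise from north and a 75° elevation; these are common conventions, not parameters taken from this study.

```python
import numpy as np

def dbs_winds(v_los, azimuths_deg, elevation_deg=75.0):
    """Least-squares retrieval of (u, v, w) from DBS line-of-sight velocities.

    v_los: radial velocities for each beam (positive away from the lidar).
    Relies on the homogeneity assumption: every beam sees the same wind vector.
    """
    az = np.radians(azimuths_deg)
    el = np.radians(elevation_deg)
    # Unit vectors along each beam: (east, north, up) components
    G = np.column_stack([np.sin(az) * np.cos(el),
                         np.cos(az) * np.cos(el),
                         np.full_like(az, np.sin(el))])
    uvw, *_ = np.linalg.lstsq(G, np.asarray(v_los), rcond=None)
    return uvw

# Four beams at N/E/S/W recover a synthetic homogeneous wind exactly
u, v, w = 6.0, 2.0, 0.1
az = np.array([0.0, 90.0, 180.0, 270.0])
el = 75.0
los = (u * np.sin(np.radians(az)) * np.cos(np.radians(el))
       + v * np.cos(np.radians(az)) * np.cos(np.radians(el))
       + w * np.sin(np.radians(el)))
print(dbs_winds(los, az, el))   # ~[6.0, 2.0, 0.1]
```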
USDA-ARS?s Scientific Manuscript database
Spatially-resolved spectroscopy provides a means for measuring the optical properties of biological tissues, based on analytical solutions to diffusion approximation for semi-infinite media under the normal illumination of infinitely small size light beam. The method is, however, prone to error in m...
Optimization of pencil beam f-theta lens for high-accuracy metrology
NASA Astrophysics Data System (ADS)
Peng, Chuanqian; He, Yumei; Wang, Jie
2018-01-01
Pencil beam deflectometric profilers are common instruments for high-accuracy surface slope metrology of x-ray mirrors in synchrotron facilities. An f-theta optical system is a key optical component of deflectometric profilers and is used to perform the linear angle-to-position conversion. Traditional optimization procedures for f-theta systems are not directly related to the angle-to-position conversion relation and are performed with stops of large size and a fixed working distance, which means they may not be suitable for the design of f-theta systems working with a small-sized pencil beam over a working distance range for ultra-high-accuracy metrology. If an f-theta system is not well designed, its aberrations will introduce systematic errors into the measurement. A least-squares fitting procedure was used to optimize the configuration parameters of an f-theta system. Simulations using ZEMAX software showed that the optimized f-theta system significantly suppressed the angle-to-position conversion errors caused by aberrations. Any pencil-beam f-theta optical system can be optimized with the help of this optimization method.
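The quantity being minimized is the departure from the ideal linear x = f*theta relation. The sketch below fits that relation to a set of (angle, spot position) pairs and returns the residuals, which represent the systematic angle-to-position conversion errors a profiler would inherit from lens aberrations. The cubic distortion term in the example data is purely illustrative, not taken from the paper.

```python
import numpy as np

def ftheta_residuals(angles_rad, positions_mm):
    """Least-squares fit of x = f * theta and the residual conversion errors."""
    theta = np.asarray(angles_rad, dtype=float)
    x = np.asarray(positions_mm, dtype=float)
    f = theta @ x / (theta @ theta)     # effective focal length (mm) from the fit
    residuals = x - f * theta           # systematic angle-to-position errors (mm)
    return f, residuals

# Hypothetical ray-trace output: 1000 mm focal length plus a slight cubic distortion
theta = np.linspace(-0.01, 0.01, 21)
x = 1000.0 * theta + 5e3 * theta ** 3
f, res = ftheta_residuals(theta, x)
print(f, res.max())
```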
Infrared/microwave (IR/MW) micromirror array beam combiner design and analysis.
Tian, Yi; Lv, Lijun; Jiang, Liwei; Wang, Xin; Li, Yanhong; Yu, Haiming; Feng, Xiaochen; Li, Qi; Zhang, Li; Li, Zhuo
2013-08-01
We investigated the design method of an infrared (IR)/microwave (MW) micromirror array type of beam combiner. The micromirrors are microscopic in size and comparable to MW wavelengths, so the MW will not react at these dimensions, whereas the much shorter optical wavelengths will be reflected by them. Hence, the MW multilayered substrate was simplified and designed using transmission line theory. The beam combiner uses an IR wavefront-division imaging technique to reflect the IR radiation image to the unit under test (UUT)'s pupil in a parallel light path. In addition, the boresight error detected by a phase monopulse radar was analyzed using the method of moments (MoM) with multilevel fast multipole method (MLFMM) acceleration. The boresight error introduced by the finite size of the beam combiner was less than 1°. Finally, in order to verify the wavefront-division imaging technique, a prototype micromirror array was fabricated and IR images were tested. The IR images obtained by the thermal imager verified the correctness of the wavefront-division imaging technique.
Automatic alignment of double optical paths in excimer laser amplifier
NASA Astrophysics Data System (ADS)
Wang, Dahui; Zhao, Xueqing; Hua, Hengqi; Zhang, Yongsheng; Hu, Yun; Yi, Aiping; Zhao, Jun
2013-05-01
A beam automatic alignment method used for double-path amplification in an electron-pumped excimer laser system is demonstrated. In this way, the beams from the amplifiers can be transferred along the designated direction and accordingly irradiate the target with high stability and accuracy. However, because there are no natural alignment references in excimer laser amplifiers, a two-cross-hair structure is used to align the beams. Here, one cross-hair placed in the input beam is regarded as the near-field reference, while the other, placed in the output beam, is regarded as the far-field reference. The two cross-hairs are imaged onto charge-coupled devices (CCDs) by separate image-relaying structures. The errors between the intersection points of the two cross-hair images and the centroid coordinates of the actual beam are recorded automatically and sent to a closed-loop feedback control mechanism. Negative feedback keeps running until the preset accuracy is reached. On the basis of the above design, the alignment optical path was built and the software was written, after which the experiment on double-path automatic alignment in the electron-pumped excimer laser amplifier was carried out. Meanwhile, the related influencing factors and the alignment precision were analyzed. Experimental results indicate that the alignment system can automatically align the beams to the aiming direction in a short time. The analysis shows that the accuracy of the alignment system is 0.63 μrad and the maximum beam restoration error is 13.75 μm. Furthermore, the larger the distance between the two cross-hairs, the higher the precision of the system. Therefore, the automatic alignment system has been used in an angular-multiplexing excimer Master Oscillator Power Amplification (MOPA) system and satisfies the overall requirement for beam alignment precision.
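The error signal that such a closed loop feeds on is simply the offset between the intensity-weighted beam centroid and the cross-hair reference point. The sketch below is a minimal version of that computation; the crude background suppression and the synthetic Gaussian spot are illustrative assumptions, since the abstract does not describe the actual image processing.

```python
import numpy as np

def centroid_error(image, reference_xy):
    """Intensity-weighted beam centroid and its offset from the cross-hair reference.

    image: 2-D array of beam intensity from the CCD.
    reference_xy: (x, y) pixel coordinates of the cross-hair intersection.
    """
    img = np.asarray(image, dtype=float)
    img = np.clip(img - img.mean(), 0, None)    # crude background suppression
    total = img.sum()
    ys, xs = np.indices(img.shape)
    cx = (xs * img).sum() / total
    cy = (ys * img).sum() / total
    return np.array([cx, cy]) - np.asarray(reference_xy)

# A synthetic Gaussian spot offset from a reference at the image centre
ys, xs = np.indices((64, 64))
spot = np.exp(-((xs - 35.0) ** 2 + (ys - 30.0) ** 2) / 50.0)
print(centroid_error(spot, (32.0, 32.0)))       # ~[3, -2] pixels
```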
Laser surface processing on sintered PM alloys
NASA Astrophysics Data System (ADS)
Reiter, Wilfred; Daurelio, Giuseppe; Ludovico, Antonio D.
1997-08-01
Usually, P.M. alloys are heat treated, e.g., by case hardening, gas nitriding or plasma nitriding, for better wear resistance of the product surface. An additional method for obtaining better tribological properties is surface hardening (or remelting, or alloying) of the P.M. alloy by laser treatment of a localized part of the product, without heating the whole sample. This work presents a careful experimental study of suitable sintered powder alloys for laser surface processing from the point of view of wear, fatigue life and surface quality. Concerning the materials, three different basic alloy groups with graduated carbon contents were prepared. Of these sintered powder alloys, one group contains Fe, Mo and C, another group contains Fe, Ni, Mo and C, and the last one contains Fe, Ni, Cu, Mo and C. Obviously each group has a different surface hardness, different porosity distribution, different density and diverse metallurgical structures (pearlite or ferrite-pearlite, etc.). On the sample surfaces a colloidal graphite coating, in different thicknesses, was sprayed to increase the surface absorption of laser energy. On some other samples a Mo coating, in different thicknesses, was produced (on the bulk alloy) by diverse deposition techniques (D.C. sputtering, P.V.D. and flame spraying). Only a few samples have both a Mo coating and an absorber coating, that is, bulk material, Mo coating and colloidal graphite coating. All these sintered alloys were tested by laser processing, so that many laser working parameters (covering gas, work speed, focused and defocused spot, rastered and integrated beam spots, square and rectangular beam shapes, and so on) were investigated for two different processes: at constant laser power and at constant surface temperature (using a surface temperature sensor and a closed control loop). For all experiments a transverse fast axial flow CO2 2.5 kW c.w. laser source was employed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woods, K; DiCostanzo, D; Gupta, N
Purpose: To test the efficacy of a retrospective metal artifact reduction (MAR) reconstruction algorithm for a commercial computed tomography (CT) scanner for radiation therapy purposes. Methods: High Z geometric integrity and artifact reduction analysis was performed with three phantoms using General Electric’s (GE) Discovery CT. The three phantoms included: a Computerized Imaging Reference Systems (CIRS) electron density phantom (Model 062) with a 6.5 mm diameter titanium rod insert, a custom spine phantom using Synthes Spine hardware submerged in water, and a dental phantom with various high Z fillings submerged in water. Each phantom was reconstructed using MAR and compared against the original scan. Furthermore, each scenario was tested using standard and extended Hounsfield Unit (HU) ranges. High Z geometric integrity was performed using the CIRS phantom, while the artifact reduction was performed using all three phantoms. Results: Geometric integrity of the 6.5 mm diameter rod was slightly overestimated for non-MAR scans for both standard and extended HU. With MAR reconstruction, the rod was underestimated for both standard and extended HU. For artifact reduction, the mean and standard deviation was compared in a volume of interest (VOI) in the surrounding material (water and water equivalent material, ∼0HU). Overall, the mean value of the VOI was closer to 0 HU for the MAR reconstruction compared to the non-MAR scan for most phantoms. Additionally, the standard deviations for all phantoms were greatly reduced using MAR reconstruction. Conclusion: GE’s MAR reconstruction algorithm improves image quality with the presence of high Z material with minimal degradation of its geometric integrity. High Z delineation can be carried out with proper contouring techniques. The effects of beam hardening artifacts are greatly reduced with MAR reconstruction. Tissue corrections due to these artifacts can be eliminated for simple high Z geometries and greatly reduced for more complex geometries.
NASA Astrophysics Data System (ADS)
Jeon, Pil-Hyun; Kim, Hee-Joung; Lee, Chang-Lae; Kim, Dae-Hong; Lee, Won-Hyung; Jeon, Sung-Su
2012-06-01
For a considerable number of emergency computed tomography (CT) scans, patients are unable to position their arms above their head due to traumatic injuries. The arms-down position has been shown to reduce image quality with beam-hardening artifacts in the dorsal regions of the liver, spleen, and kidneys, rendering these images non-diagnostic. The purpose of this study was to evaluate the effect of arm position on the image quality in patients undergoing whole-body CT. We acquired CT scans with various acquisition parameters at voltages of 80, 120, and 140 kVp and an increasing tube current from 200 to 400 mAs in 50 mAs increments. The image noise and the contrast assessment were considered for quantitative analyses of the CT images. The image noise (IN), the contrast-to-noise ratio (CNR), the signal-to-noise ratio (SNR), and the coefficient of variation (COV) were evaluated. Quantitative analyses of the experiments were performed with CT scans representative of five different arm positions. Results of the CT scans acquired at 120 kVp and 250 mAs showed high image quality in patients with both arms raised above the head (SNR: 12.4, CNR: 10.9, and COV: 8.1) and both arms flexed at the elbows on the chest (SNR: 11.5, CNR: 10.2, and COV: 8.8) while the image quality significantly decreased with both arms in the down position (SNR: 9.1, CNR: 7.6, and COV: 11). Both arms raised, one arm raised, and both arms flexed improved the image quality compared to arms in the down position by reducing beam-hardening and streak artifacts caused by the arms being at the side of body. This study provides optimal methods for achieving higher image quality and lower noise in abdominal CT for trauma patients.
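The image-quality metrics quoted above (SNR, CNR, COV) are simple ROI statistics. The sketch below uses one common set of definitions; the exact ROI placement and the noise region used by the authors are not specified in this abstract, so treat the formulas as a plausible convention rather than their protocol.

```python
import numpy as np

def roi_metrics(roi, background):
    """SNR, CNR and COV from two regions of interest (arrays of HU values).

    SNR = mean/SD of the ROI, CNR = |mean difference| / background SD,
    COV = 100 * SD/mean of the ROI (percent).
    """
    roi = np.asarray(roi, dtype=float)
    bg = np.asarray(background, dtype=float)
    snr = roi.mean() / roi.std()
    cnr = abs(roi.mean() - bg.mean()) / bg.std()
    cov = 100.0 * roi.std() / roi.mean()
    return snr, cnr, cov

# Illustrative liver-like ROI (~60 HU) against a muscle-like background (~40 HU)
rng = np.random.default_rng(0)
print(roi_metrics(rng.normal(60, 5, 500), rng.normal(40, 5, 500)))
```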
Effect of beam hardening on transmural myocardial perfusion quantification in myocardial CT imaging
NASA Astrophysics Data System (ADS)
Fahmi, Rachid; Eck, Brendan L.; Levi, Jacob; Fares, Anas; Wu, Hao; Vembar, Mani; Dhanantwari, Amar; Bezerra, Hiram G.; Wilson, David L.
2016-03-01
The detection of subendocardial ischemia exhibiting an abnormal transmural perfusion gradient (TPG) may help identify ischemic conditions due to micro-vascular dysfunction. We evaluated the effect of beam hardening (BH) artifacts on TPG quantification using myocardial CT perfusion (CTP). We used a prototype spectral detector CT scanner (Philips Healthcare) to acquire dynamic myocardial CTP scans in a porcine ischemia model with partial occlusion of the left anterior descending (LAD) coronary artery guided by pressure wire-derived fractional flow reserve (FFR) measurements. Conventional 120 kVp and 70 keV projection-based mono-energetic images were reconstructed from the same projection data and used to compute myocardial blood flow (MBF) using the Johnson-Wilson model. Under moderate LAD occlusion (FFR~0.7), we used three 5 mm short axis slices and divided the myocardium into three LAD segments and three remote segments. For each slice and each segment, we characterized TPG as the mean "endo-to-epi" transmural flow ratio (TFR). BH-induced hypoenhancement on the ischemic anterior wall at 120 kVp resulted in significantly lower mean TFR value as compared to the 70 keV TFR value (0.29+/-0.01 vs. 0.55+/-0.01 p<1e-05). No significant difference was measured between 120 kVp and 70 keV mean TFR values on segments moderately affected or unaffected by BH. In the entire ischemic LAD territory, 120 kVp mean endocardial flow was significantly reduced as compared to mean epicardial flow (15.80+/-10.98 vs. 40.85+/-23.44 ml/min/100g; p<1e-04). At 70 keV, BH was effectively minimized resulting in mean endocardial MBF of 40.85+/-15.3407 ml/min/100g vs. 74.09+/-5.07 ml/min/100g (p=0.0054) in the epicardium. We also found that BH artifact in the conventional 120 kVp images resulted in falsely reduced MBF measurements even under non-ischemic conditions.
Carrascosa, Patricia; Cipriano, Silvina; De Zan, Macarena; Deviggiano, Alejandro; Capunay, Carlos; Cury, Ricardo C.
2015-01-01
Background Myocardial computed tomography perfusion (CTP) using conventional single energy (SE) imaging is influenced by the presence of beam hardening artifacts (BHA), occasionally resembling perfusion defects and commonly observed at the left ventricular posterobasal wall (PB). We therefore sought to explore the ability of dual energy (DE) CTP to attenuate the presence of BHA. Methods Consecutive patients without history of coronary artery disease who were referred for computed tomography coronary angiography (CTCA) due to atypical chest pain and a normal stress-rest SPECT and had absence or mild coronary atherosclerosis constituted the study population. The study group was acquired using DE and the control group using SE imaging. Results Demographical characteristics were similar between groups, as well as the heart rate and the effective radiation dose. Myocardial signal density (SD) levels were evaluated in 280 basal segments among the DE group (140 PB segments for each energy level from 40 to 100 keV; and 140 reference segments), and in 40 basal segments (at the same locations) among the SE group. Among the DE group, myocardial SD levels and myocardial SD ratio evaluated at the reference segment were higher at low energy levels, with significantly lower SD levels at increasing energy levels. Myocardial signal-to-noise ratio was not significantly influenced by the energy level applied, although 70 keV was identified as the energy level with the best overall signal-to-noise ratio. Significant differences were identified between the PB segment and the reference segment among the lower energy levels, whereas at ≥70 keV myocardial SD levels were similar. Compared to DE reconstructions at the best energy level (70 keV), SE acquisitions showed no significant differences overall regarding myocardial SD levels among the reference segments. Conclusions BHA that influence the assessment of myocardial perfusion can be attenuated using DE at 70 keV or higher. PMID:25774354
Quality correction factors of composite IMRT beam deliveries: theoretical considerations.
Bouchard, Hugo
2012-11-01
In the scope of intensity modulated radiation therapy (IMRT) dosimetry using ionization chambers, quality correction factors of plan-class-specific reference (PCSR) fields are theoretically investigated. The symmetry of the problem is studied to provide recommendable criteria for composite beam deliveries where correction factors are minimal, and also to establish a theoretical limit for PCSR delivery k(Q) factors. The concept of a virtual symmetric collapsed (VSC) beam, associated with a given modulated composite delivery, is defined in the scope of this investigation. Under symmetrical measurement conditions, any composite delivery has the property of having a k(Q) factor identical to that of its associated VSC beam. Using this concept of VSC, a fundamental property of IMRT k(Q) factors is demonstrated in the form of a theorem. The sensitivity to the conditions required by the theorem is thoroughly examined. The theorem states that if a composite modulated beam delivery produces a uniform dose distribution in a volume V(cyl) which is symmetric with the cylindrical delivery, and all beams fulfil two conditions in V(cyl): (1) the dose modulation function is unchanged along the beam axis, and (2) the dose gradient in the beam direction is constant for a given lateral position; then its associated VSC beam produces no lateral dose gradient in V(cyl), no matter what beam modulation or gantry angles are being used. The examination of the conditions required by the theorem leads to the following results. The effect of the depth-dose gradient not being perfectly constant with depth on the VSC beam lateral dose gradient is found to be negligible. The effect of the dose modulation function being degraded with depth on the VSC beam lateral dose gradient is found to be related only to scatter and beam hardening, as the theorem also holds for diverging beams. The use of the symmetry of the problem in the present paper leads to a valuable theorem showing that k(Q) factors of composite IMRT beam deliveries are close to unity under specific conditions. The theoretical limit k_(Q_pcsr,Q_msr)^(f_pcsr,f_msr) = 1 is determined based on the property of PCSR deliveries to provide a uniform dose in the target volume. The present approach explains recent experimental observations and proposes ideal conditions for IMRT reference dosimetry. The results of this study could potentially serve as a theoretical basis for reference dosimetry of composite IMRT beam deliveries or for routine IMRT quality assurance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilson, Erik P.; Davidson, Ronald C.; Efthimion, Philip C.
Transverse dipole and quadrupole modes have been excited in a one-component cesium ion plasma trapped in the Paul Trap Simulator Experiment (PTSX) in order to characterize their properties and understand the effect of their excitation on equivalent long-distance beam propagation. The PTSX device is a compact laboratory Paul trap that simulates the transverse dynamics of a long, intense charge bunch propagating through an alternating-gradient transport system by putting the physicist in the beam's frame of reference. A pair of arbitrary function generators was used to apply trapping voltage waveform perturbations with a range of frequencies and, by changing which electrodes were driven with the perturbation, with either a dipole or quadrupole spatial structure. The results presented in this paper explore the dependence of the perturbation voltage's effect on the perturbation duration and amplitude. Perturbations were also applied that simulate the effect of random lattice errors that exist in an accelerator with quadrupole magnets that are misaligned or have variance in their field strength. The experimental results quantify the growth in the equivalent transverse beam emittance that occurs due to the applied noise and demonstrate that the random lattice errors interact with the trapped plasma through the plasma's internal collective modes. Coherent periodic perturbations were applied to simulate the effects of magnet errors in circular machines such as storage rings. The trapped one-component plasma is strongly affected when the perturbation frequency is commensurate with a plasma mode frequency. The experimental results, which help to understand the physics of quiescent intense beam propagation over large distances, are compared with analytic models.
Baek, Tae Seong; Chung, Eun Ji; Son, Jaeman; Yoon, Myonggeun
2014-12-04
The aim of this study is to evaluate the ability of transit dosimetry, using a commercial treatment planning system (TPS) and an electronic portal imaging device (EPID) with a simple calibration method, to verify beam delivery by detecting large errors in the treatment room. Twenty-four fields of intensity modulated radiotherapy (IMRT) plans were selected from four lung cancer patients and used in the irradiation of an anthropomorphic phantom. The proposed method was evaluated by comparing the calculated dose map from the TPS and the EPID measurement on the same plane using a gamma index method with a 3% dose difference and 3 mm distance-to-agreement tolerance. In a simulation using a homogeneous plastic water phantom, performed to verify the effectiveness of the proposed method, the average gamma passing rate of the transit dose was 94.2% when there was no error during beam delivery. The passing rate of the transit dose for the 24 IMRT fields was lower with the anthropomorphic phantom, averaging 86.8% ± 3.8%, a reduction partially due to the inaccuracy of TPS calculations for inhomogeneities. Compared with the TPS, the absolute value of the transit dose at the beam center differed by -0.38% ± 2.1%. The simulation study indicated that the passing rate of the gamma index was significantly reduced, to less than 40%, when a wrong field was erroneously delivered to the patient in the treatment room. This feasibility study suggests that transit dosimetry based on commercial TPS calculations and EPID measurements with a simple calibration can provide information about large errors in treatment beam delivery.
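For readers unfamiliar with the gamma index criterion used above, the following is a minimal brute-force sketch of a global 2D gamma analysis with 3%/3 mm tolerances; the grid size, pixel spacing and test data are illustrative assumptions, not the study's EPID geometry.

```python
import numpy as np

def gamma_pass_rate(ref, eval_, pixel_mm=1.0, dose_crit=0.03, dta_mm=3.0, threshold=0.1):
    """Brute-force global 2D gamma analysis (3%/3 mm by default).

    ref, eval_: 2D dose maps on the same grid (e.g. TPS calculation vs. EPID transit dose).
    Pixels below `threshold` * max(ref) are excluded from the statistic.
    """
    norm = dose_crit * ref.max()                     # global dose criterion
    ny, nx = ref.shape
    yy, xx = np.mgrid[0:ny, 0:nx] * pixel_mm
    search = int(np.ceil(2 * dta_mm / pixel_mm))     # limit the search window for speed
    gammas = []
    for iy in range(ny):
        for ix in range(nx):
            if ref[iy, ix] < threshold * ref.max():
                continue
            y0, y1 = max(0, iy - search), min(ny, iy + search + 1)
            x0, x1 = max(0, ix - search), min(nx, ix + search + 1)
            dist2 = ((yy[y0:y1, x0:x1] - iy * pixel_mm) ** 2 +
                     (xx[y0:y1, x0:x1] - ix * pixel_mm) ** 2) / dta_mm ** 2
            ddiff2 = ((eval_[y0:y1, x0:x1] - ref[iy, ix]) / norm) ** 2
            gammas.append(np.sqrt(np.min(dist2 + ddiff2)))
    gammas = np.asarray(gammas)
    return 100.0 * np.mean(gammas <= 1.0)            # percentage of points with gamma <= 1

ref = np.random.default_rng(0).random((40, 40)) * 2.0
print(gamma_pass_rate(ref, ref * 1.01, pixel_mm=2.0))  # small global offset -> high pass rate
```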
Simulator for beam-based LHC collimator alignment
NASA Astrophysics Data System (ADS)
Valentino, Gianluca; Aßmann, Ralph; Redaelli, Stefano; Sammut, Nicholas
2014-02-01
In the CERN Large Hadron Collider, collimators need to be set up to form a multistage hierarchy to ensure efficient multiturn cleaning of halo particles. Automatic algorithms were introduced during the first run to reduce the beam time required for beam-based setup, improve the alignment accuracy, and reduce the risk of human errors. Simulating the alignment procedure would allow for off-line tests of alignment policies and algorithms. A simulator was developed based on a diffusion beam model to generate the characteristic beam loss signal spike and decay produced when a collimator jaw touches the beam, which is observed in a beam loss monitor (BLM). Empirical models derived from the available measurement data are used to simulate the steady-state beam loss and crosstalk between multiple BLMs. The simulator design is presented, together with simulation results and comparison to measurement data.
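The abstract does not give the functional form of the loss-spike model, so the following is only a toy illustration, under the assumption that a jaw touching the halo can be caricatured as an instantaneous spike that decays exponentially toward a steady-state loss level with measurement noise; the actual simulator uses a diffusion beam model and empirical BLM models.

```python
import numpy as np

def blm_signal(t, t_touch, spike=50.0, tau=0.4, steady=1.0, noise=0.05, rng=None):
    """Toy beam-loss-monitor signal for a collimator jaw touching the beam halo.

    An instantaneous spike at t_touch decays with time constant tau toward a
    steady-state loss level; this form is an assumption for illustration only.
    """
    rng = rng or np.random.default_rng(0)
    s = np.full_like(t, steady, dtype=float)
    after = t >= t_touch
    s[after] += spike * np.exp(-(t[after] - t_touch) / tau)
    return s + noise * rng.standard_normal(t.shape)

t = np.linspace(0.0, 5.0, 501)        # seconds
print(blm_signal(t, t_touch=1.0)[:5])  # quiet baseline before the jaw touches the beam
```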
NASA Astrophysics Data System (ADS)
Sacha, Jan; Snehota, Michal; Jelinkova, Vladimira
2016-04-01
Information on the spatial and temporal distribution of water and air in a soil sample during hydrological processes is important for evaluating current water transport models and for developing new ones. Modern imaging techniques such as neutron imaging (NI) allow relatively short acquisition times and high image resolution. At the same time, appropriate data processing has to be applied to obtain results free of bias and artifacts. In this study, ponded infiltration experiments were conducted on two soil samples packed into quartz glass columns with inner diameters of 29 and 34 mm, respectively. The first sample was prepared by packing fine and coarse sand fractions, and the second was packed using coarse sand and disks of fine porous ceramic. The ponded infiltration experiments conducted on both samples were monitored by neutron radiography to produce two-dimensional (2D) projection images during the transient phase of infiltration. During the steady-state flow stage of the experiments, neutron tomography was used to obtain three-dimensional (3D) information on gradual water redistribution. The acquired radiographic images were normalized for background noise, spatial inhomogeneity of the detector, fluctuations of the neutron flux in time, and spatial inhomogeneity of the neutron beam. The radiograms of the dry sample were subtracted from all subsequent radiograms to determine the water thickness in the 2D projection images. All projections were corrected for beam hardening and neutron scattering by the empirical method of Kang et al. (2013). Parameters of the correction method were identified by two different approaches. The first approach was based on fitting the NI-derived water thickness in the ponded water layer above the sample surface to the actual water thickness. In the second approach, the NI-derived volume of water in the entire sample at a given time was fitted to the corresponding gravimetrically determined amount of water in the sample. Tomography images were reconstructed from both the corrected and uncorrected water thickness maps to obtain the 3D spatial distribution of water content within the sample. Without the correction, beam hardening and scattering effects overestimated water content values close to the sample perimeter and underestimated values close to the center of the sample; the total water content of the whole sample was, however, the same in both cases.
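A minimal sketch of the normalization and dry-image subtraction step described above, assuming Beer-Lambert attenuation; the effective attenuation coefficient and the image arrays are placeholders, and the Kang et al. (2013) beam hardening/scattering correction would still be applied on top of this uncorrected estimate.

```python
import numpy as np

def water_thickness_map(img_wet, img_dry, img_dark, img_flat, sigma_w=3.5):
    """Per-pixel water thickness (cm) from neutron radiograms via Beer-Lambert.

    img_wet / img_dry : radiograms of the wet and dry sample
    img_dark / img_flat: detector dark field and open-beam (flat) field
    sigma_w            : effective attenuation coefficient of water (1/cm);
                         3.5 is a placeholder, not the value used in the study.
    """
    wet = (img_wet - img_dark) / (img_flat - img_dark)   # normalized transmission, wet state
    dry = (img_dry - img_dark) / (img_flat - img_dark)   # normalized transmission, dry state
    transmission_ratio = np.clip(wet / dry, 1e-6, None)  # dry-image subtraction in log space
    return -np.log(transmission_ratio) / sigma_w
```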
Effect of cutting parameters on strain hardening of nickel–titanium shape memory alloy
NASA Astrophysics Data System (ADS)
Wang, Guijie; Liu, Zhanqiang; Ai, Xing; Huang, Weimin; Niu, Jintao
2018-07-01
Nickel–titanium shape memory alloy (SMA) has been widely used as an implant material due to its good biocompatibility, shape memory property and super-elasticity. However, severe strain hardening caused by cutting forces and temperatures during machining remains a major challenge. An orthogonal milling experiment on nickel–titanium SMA with different cutting parameter combinations was conducted in this paper. On the one hand, the effect of cutting parameters on work hardening is obtained. It is found that the cutting speed has the most important effect on work hardening. The depth of the machining-induced layer and the degree of hardening become smaller with increasing cutting speed when the cutting speed is less than 200 m min⁻¹ and then become larger with further increases of cutting speed. The relative intensity of the diffraction peak increases as the cutting speed increases. In addition, the depth of the machining-induced layer, the degree of hardening and the relative intensity of the diffraction peak all increase when the feed rate increases. On the other hand, it is found that the depth of the machining-induced layer is closely related to the degree of hardening and phase transition. The higher the austenite content in the machined surface, the higher the degree of hardening. The depth of the machining-induced layer increases with increasing degree of hardening.
Stress Corrosion Cracking Behavior of Hardening-Treated 13Cr Stainless Steel
NASA Astrophysics Data System (ADS)
Niu, Li-Bin; Ishitake, Hisamitsu; Izumi, Sakae; Shiokawa, Kunio; Yamashita, Mitsuo; Sakai, Yoshihiro
2018-03-01
Stress corrosion cracking (SCC) behavior of hardening-treated 13Cr stainless steel materials was examined with slow strain rate (SSRT) tests and constant-load tests. In simulated geothermal water, and even in test water without added impurities, the hardening-treated materials showed brittle intergranular fracture due to sensitization caused by the hardening treatments applied.
NASA Astrophysics Data System (ADS)
Paziresh, M.; Kingston, A. M.; Latham, S. J.; Fullagar, W. K.; Myers, G. M.
2016-06-01
Dual-energy computed tomography and the Alvarez and Macovski [Phys. Med. Biol. 21, 733 (1976)] transmitted intensity (AMTI) model were used in this study to estimate maps of density (ρ) and atomic number (Z) of mineralogical samples. In this method, the attenuation coefficients are represented [Alvarez and Macovski, Phys. Med. Biol. 21, 733 (1976)] in terms of the two most important interactions of X-rays with atoms, that is, photoelectric absorption (PE) and Compton scattering (CS). This enables material discrimination, as PE and CS are, respectively, dependent on the atomic number (Z) and density (ρ) of materials [Alvarez and Macovski, Phys. Med. Biol. 21, 733 (1976)]. Dual-energy imaging is able to identify sample materials even if the materials have similar attenuation coefficients in a single-energy spectrum. We use the full model rather than one of the several simplified forms applied elsewhere [Alvarez and Macovski, Phys. Med. Biol. 21, 733 (1976); Siddiqui et al., SPE Annual Technical Conference and Exhibition (Society of Petroleum Engineers, 2004); Derzhi, U.S. patent application 13/527,660 (2012); Heismann et al., J. Appl. Phys. 94, 2073-2079 (2003); Park and Kim, J. Korean Phys. Soc. 59, 2709 (2011); Abudurexiti et al., Radiol. Phys. Technol. 3, 127-135 (2010); and Kaewkhao et al., J. Quant. Spectrosc. Radiat. Transfer 109, 1260-1265 (2008)]. This paper describes the tomographic reconstruction of ρ and Z maps of mineralogical samples using the AMTI model. The full model requires precise knowledge of the X-ray energy spectra and calibration of the PE and CS constants and of the exponents of atomic number and energy, which were estimated from fits to simulations and calibration measurements. The estimated ρ and Z images of the samples used in this paper yield average relative errors of 2.62% and 1.19% and maximum relative errors of 2.64% and 7.85%, respectively. Furthermore, we demonstrate that the method accounts for the beam hardening effect in density (ρ) and atomic number (Z) reconstructions to a significant extent.
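As a rough illustration of the two-interaction basis decomposition behind the AMTI model, the following sketch evaluates an attenuation coefficient as a photoelectric term (approximately E⁻³) plus a Compton term given by the Klein-Nishina function; the coefficients in the example call are hypothetical, and the calibrated constants and exponents of the study are not reproduced here.

```python
import numpy as np

E_e = 510.975  # electron rest energy, keV

def f_pe(E_keV):
    """Approximate energy dependence of the photoelectric cross section (~E^-3)."""
    return E_keV ** -3.0

def f_kn(E_keV):
    """Klein-Nishina function: energy dependence of Compton scattering."""
    a = E_keV / E_e
    return ((1 + a) / a**2 * (2 * (1 + a) / (1 + 2 * a) - np.log(1 + 2 * a) / a)
            + np.log(1 + 2 * a) / (2 * a) - (1 + 3 * a) / (1 + 2 * a) ** 2)

def mu(E_keV, a_pe, a_cs):
    """Two-basis decomposition: mu(E) = a_pe * f_PE(E) + a_cs * f_KN(E).

    a_pe scales with density and a power of atomic number, a_cs with electron
    density; both are obtained from calibration, as the abstract describes.
    """
    return a_pe * f_pe(E_keV) + a_cs * f_kn(E_keV)

print(mu(np.array([60.0, 120.0]), a_pe=5.0e4, a_cs=0.18))  # hypothetical coefficients
```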
NASA Astrophysics Data System (ADS)
Bindschadler, Michael; Modgil, Dimple; Branch, Kelley R.; La Riviere, Patrick J.; Alessio, Adam M.
2014-04-01
Myocardial blood flow (MBF) can be estimated from dynamic contrast-enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivate this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 ml (min g)⁻¹, cardiac output = 3, 5, 8 L min⁻¹). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features including heterogeneous microvascular flow, permeability and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. CT acquisitions that reduce radiation exposure were implemented by varying both temporal sampling (1, 2, and 3 s sampling intervals) and tube currents (140, 70, and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (a two-compartment model, an axially distributed model, and the adiabatic approximation to the tissue homogeneous model) and a qualitative slope-based method. In total, over 11 000 time attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam hardening correction, the slope method consistently underestimated flow by 47.5% on average, and the quantitative models provided estimates with less than 6.5% average bias and increasing variance with increasing dose reductions. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared to the quantitative methods. MBF estimate error was equal at matched dose reductions for all quantitative methods across the range of techniques evaluated. This suggests that there is no particular advantage of one quantitative estimation method over another, nor of performing dose reduction via tube current reduction rather than via temporal sampling reduction. These data are important for optimizing the implementation of cardiac dynamic CT in clinical practice and in prospective CT MBF trials.
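A minimal sketch of the qualitative slope-based estimator mentioned above (maximum tissue upslope divided by the peak of the arterial input function); the synthetic time-attenuation curves are assumptions for illustration, and the compartmental and distributed models used in the study are not reproduced here.

```python
import numpy as np

def mbf_slope_method(t, tissue_hu, aif_hu):
    """Qualitative upslope estimate of myocardial blood flow.

    MBF ~ (maximum upslope of the tissue time-attenuation curve) /
          (peak of the arterial input function).
    The abstract reports that this estimator systematically underestimates
    flow relative to the quantitative models.
    """
    upslope = np.max(np.gradient(tissue_hu, t))
    return upslope / np.max(aif_hu)

t = np.arange(0, 30, 1.0)                                   # s
aif = 400 * np.exp(-0.5 * ((t - 10) / 3.0) ** 2)            # hypothetical arterial input (HU)
tissue = 60 * (1 - np.exp(-np.clip(t - 8, 0, None) / 6.0))  # hypothetical tissue enhancement (HU)
print(mbf_slope_method(t, tissue, aif))                     # flow index per second of enhancement
```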
Modeling Single-Event Transient Propagation in a SiGe BiCMOS Direct-Conversion Receiver
NASA Astrophysics Data System (ADS)
Ildefonso, Adrian; Song, Ickhyun; Tzintzarov, George N.; Fleetwood, Zachary E.; Lourenco, Nelson E.; Wachter, Mason T.; Cressler, John D.
2017-08-01
The propagation of single-event transient (SET) signals in a silicon-germanium direct-conversion receiver carrying modulated data is explored. A theoretical analysis of transient propagation, verified by simulation, is presented. A new methodology to characterize and quantify the impact of SETs in communication systems carrying modulated data is proposed. The proposed methodology uses a pulsed radiation source to induce distortions in the signal constellation. The error vector magnitude due to SETs can then be calculated to quantify errors. Two different modulation schemes were simulated: QPSK and 16-QAM. The distortions in the constellation diagram agree with the presented circuit theory. Furthermore, the proposed methodology was applied to evaluate the improvements in the SET response due to a known radiation-hardening-by-design (RHBD) technique, where the common-base device of the low-noise amplifier was operated in inverse mode. The proposed methodology can be a valid technique to determine the most sensitive parts of a system carrying modulated data.
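A small sketch of the RMS error vector magnitude metric used to quantify transient-induced constellation distortion; the QPSK symbols and the distortion applied below are hypothetical stand-ins for the pulsed-radiation-induced errors described in the abstract.

```python
import numpy as np

def evm_rms_percent(received, reference):
    """RMS error vector magnitude, in percent, between received and ideal symbols."""
    received = np.asarray(received, dtype=complex)
    reference = np.asarray(reference, dtype=complex)
    err_power = np.mean(np.abs(received - reference) ** 2)
    ref_power = np.mean(np.abs(reference) ** 2)
    return 100.0 * np.sqrt(err_power / ref_power)

# Hypothetical QPSK constellation distorted by a transient
ideal = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j] * 64) / np.sqrt(2)
rng = np.random.default_rng(1)
hit = ideal + 0.1 * (rng.standard_normal(ideal.shape) + 1j * rng.standard_normal(ideal.shape))
print(f"EVM = {evm_rms_percent(hit, ideal):.1f}% rms")
```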
Spaceflight Ka-Band High-Rate Radiation-Hard Modulator
NASA Technical Reports Server (NTRS)
Jaso, Jeffery M.
2011-01-01
A document discusses the creation of a Ka-band modulator developed specifically for the NASA/GSFC Solar Dynamics Observatory (SDO). This flight design consists of a high-bandwidth, Quadriphase Shift Keying (QPSK) vector modulator with radiation-hardened, high-rate driver circuitry that receives I and Q channel data. The radiation-hard design enables SDO's Ka-band communications downlink system to transmit 130 Mbps (300 Msps after data encoding) of science instrument data to the ground system continuously throughout the mission's minimum life of five years. The low error vector magnitude (EVM) of the modulator lowers the implementation loss of the transmitter in which it is used, thereby increasing the overall communication system link margin. The modulator is a component within the SDO transmitter, and meets the following specifications over a 0 to 40 °C operational temperature range: QPSK/OQPSK modulation, 300-Msps symbol rate, 26.5-GHz center frequency, error vector magnitude less than or equal to 10 percent rms, and compliance with the NTIA (National Telecommunications and Information Administration) spectral mask.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barbee, D; McCarthy, A; Galavis, P
Purpose: Errors found during initial physics plan checks frequently require replanning and reprinting, resulting decreased departmental efficiency. Additionally, errors may be missed during physics checks, resulting in potential treatment errors or interruption. This work presents a process control created using the Eclipse Scripting API (ESAPI) enabling dosimetrists and physicists to detect potential errors in the Eclipse treatment planning system prior to performing any plan approvals or printing. Methods: Potential failure modes for five categories were generated based on available ESAPI (v11) patient object properties: Images, Contours, Plans, Beams, and Dose. An Eclipse script plugin (PlanCheck) was written in C# tomore » check errors most frequently observed clinically in each of the categories. The PlanCheck algorithms were devised to check technical aspects of plans, such as deliverability (e.g. minimum EDW MUs), in addition to ensuring that policy and procedures relating to planning were being followed. The effect on clinical workflow efficiency was measured by tracking the plan document error rate and plan revision/retirement rates in the Aria database over monthly intervals. Results: The number of potential failure modes the PlanCheck script is currently capable of checking for in the following categories: Images (6), Contours (7), Plans (8), Beams (17), and Dose (4). Prior to implementation of the PlanCheck plugin, the observed error rates in errored plan documents and revised/retired plans in the Aria database was 20% and 22%, respectively. Error rates were seen to decrease gradually over time as adoption of the script improved. Conclusion: A process control created using the Eclipse scripting API enabled plan checks to occur within the planning system, resulting in reduction in error rates and improved efficiency. Future work includes: initiating full FMEA for planning workflow, extending categories to include additional checks outside of ESAPI via Aria database queries, and eventual automated plan checks.« less
Electron Beam Propagation Through a Magnetic Wiggler with Random Field Errors
1989-08-21
Another quantity of interest is the vector potential δA_w(z) associated with the field error δB_w(z). Defining the normalized vector potential δa ∝ e δA_w, it then follows that the correlation of the normalized vector potential errors, ⟨δa_x(z₁) δa_x(z₂)⟩, is given by a double integral over z' and z'' of the field-error correlation ⟨δB_x(z') δB_x(z'')⟩. Throughout the following, higher-order terms are neglected. A similar expression holds for the y-component of the normalized vector potential errors.
Technical aspects of real time positron emission tracking for gated radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chamberland, Marc; Xu, Tong, E-mail: txu@physics.carleton.ca; McEwen, Malcolm R.
2016-02-15
Purpose: Respiratory motion can lead to treatment errors in the delivery of radiotherapy treatments. Respiratory gating can assist in better conforming the beam delivery to the target volume. We present a study of the technical aspects of a real time positron emission tracking system for potential use in gated radiotherapy. Methods: The tracking system, called PeTrack, uses implanted positron emission markers and position sensitive gamma ray detectors to track breathing motion in real time. PeTrack uses an expectation–maximization algorithm to track the motion of fiducial markers. A normalized least mean squares adaptive filter predicts the location of the markers a short time ahead to account for system response latency. The precision and data collection efficiency of a prototype PeTrack system were measured under conditions simulating gated radiotherapy. The lung insert of a thorax phantom was translated in the inferior–superior (IS) direction with regular sinusoidal motion and simulated patient breathing motion (maximum amplitude of motion ±10 mm, period 4 s). The system tracked the motion of a ²²Na fiducial marker (0.34 MBq) embedded in the lung insert every 0.2 s. The position of the marker was predicted 0.2 s ahead. For sinusoidal motion, the equation used to model the motion was fitted to the data. The precision of the tracking was estimated as the standard deviation of the residuals. Software was also developed to communicate with a Linac and toggle beam delivery. In a separate experiment involving a Linac, 500 monitor units of radiation were delivered to the phantom with a 3 × 3 cm photon beam at 6 and 10 MV accelerating potential. Radiochromic films were inserted in the phantom to measure the spatial dose distribution. In this experiment, the period of motion was set to 60 s to account for beam turn-on latency. The beam was turned off when the marker moved outside of a 5-mm gating window. Results: The precision of the tracking in the IS direction was 0.53 mm for a sinusoidally moving target, with an average count rate of ∼250 cps. The average prediction error was 1.1 ± 0.6 mm when the marker moved according to irregular patient breathing motion. Across all beam deliveries during the radiochromic film measurements, the average prediction error was 0.8 ± 0.5 mm. The maximum error was 2.5 mm and the 95th percentile error was 1.5 mm. Clear improvement of the dose distribution was observed between gated and nongated deliveries. The full width at half-maximum of the dose profiles of gated deliveries differed by 3 mm or less from the static reference dose distribution. Monitoring of the beam on/off times showed synchronization with the location of the marker within the latency of the system. Conclusions: PeTrack can track the motion of internal fiducial positron emission markers with submillimeter precision. The system can be used to gate the delivery of a Linac beam based on the position of a moving fiducial marker. This highlights the potential of the system for use in respiratory-gated radiotherapy.
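A minimal sketch of one-step-ahead prediction with a normalized least mean squares (NLMS) adaptive filter, as used above to compensate system latency; the filter order, step size and the sinusoidal test trajectory are assumptions for illustration, not PeTrack's actual settings.

```python
import numpy as np

def nlms_predict(positions, order=4, mu=0.5, eps=1e-6):
    """One-step-ahead prediction of marker position with an NLMS adaptive filter.

    positions: 1D array of sampled marker positions (e.g. one sample every 0.2 s).
    Returns the predicted next sample at each step (NaN until enough history exists).
    """
    w = np.zeros(order)
    preds = np.full(len(positions), np.nan)
    for n in range(order, len(positions)):
        x = positions[n - order:n][::-1]          # most recent samples first
        preds[n] = w @ x                          # prediction of positions[n]
        e = positions[n] - preds[n]               # prediction error
        w += mu * e * x / (eps + x @ x)           # normalized LMS weight update
    return preds

t = np.arange(0, 20, 0.2)                         # 0.2 s sampling
pos = 10 * np.sin(2 * np.pi * t / 4.0)            # sinusoidal motion, ±10 mm, 4 s period
pred = nlms_predict(pos)
print(np.nanmean(np.abs(pred[50:] - pos[50:])))   # residual prediction error (mm) after convergence
```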
Kinematic hardening of a porous limestone
NASA Astrophysics Data System (ADS)
Cheatham, J. B.; Allen, M. B.; Celle, C. C.
1984-10-01
A concept for a kinematic hardening yield surface in stress space for Cordova Cream limestone (Austin Chalk) developed by Celle and Cheatham (1981) has been improved using Ziegler's modification of Prager's hardening rule (Ziegler, 1959). Data to date agree with the formulated concepts. It is shown how kinematic hardening can be used to approximate the yield surface for a wide range of stress states past the initial yield surface. The particular difficulty of identifying the yield surface under conditions of unloading or extension is noted. A yield condition and hardening rule which account for the strain induced anisotropy in Cordova Cream Limestone were developed. Although the actual yield surface appears to involve some change of size and shape, it is concluded that true kinematic hardening provides a basis for engineering calculations.
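The essence of kinematic hardening is that the yield surface translates in stress space by a backstress, as in the Prager/Ziegler rules discussed above. A minimal 1D return-mapping sketch with linear kinematic hardening is given below; the elastic modulus, yield stress and hardening modulus are arbitrary illustrative values, not properties of Cordova Cream limestone.

```python
import numpy as np

def uniaxial_kinematic_hardening(strain_path, E=30e3, sigma_y=35.0, H=5e3):
    """1D elastoplastic return mapping with linear kinematic hardening.

    The yield surface translates via the backstress alpha (f = |sigma - alpha| - sigma_y),
    which captures the strain-induced anisotropy idea behind kinematic hardening.
    Units are MPa; constants are illustrative only.
    """
    stress, alpha, eps_p = [], 0.0, 0.0
    for eps in strain_path:
        sig_trial = E * (eps - eps_p)              # elastic trial stress
        f = abs(sig_trial - alpha) - sigma_y       # yield function
        if f > 0.0:                                # plastic step
            dgamma = f / (E + H)                   # consistency condition
            sign = np.sign(sig_trial - alpha)
            eps_p += dgamma * sign
            alpha += H * dgamma * sign             # backstress translation (yield surface moves)
            sig_trial = E * (eps - eps_p)
        stress.append(sig_trial)
    return np.array(stress)

path = np.concatenate([np.linspace(0, 0.004, 50), np.linspace(0.004, -0.004, 100)])
print(uniaxial_kinematic_hardening(path)[[49, -1]])   # stress at load reversal and at the end
```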
Radiation Hardened Electronics for Extreme Environments
NASA Technical Reports Server (NTRS)
Keys, Andrew S.; Watson, Michael D.
2007-01-01
The Radiation Hardened Electronics for Space Environments (RHESE) project consists of a series of tasks designed to develop and mature a broad spectrum of radiation hardened and low temperature electronics technologies. Three approaches are being taken to address radiation hardening: improved material hardness, design techniques to improve radiation tolerance, and software methods to improve radiation tolerance. Within these approaches various technology products are being addressed including Field Programmable Gate Arrays (FPGA), Field Programmable Analog Arrays (FPAA), MEMS Serial Processors, Reconfigurable Processors, and Parallel Processors. In addition to radiation hardening, low temperature extremes are addressed with a focus on material and design approaches.
Brassey, Charlotte A.; Margetts, Lee; Kitchener, Andrew C.; Withers, Philip J.; Manning, Phillip L.; Sellers, William I.
2013-01-01
Classic beam theory is frequently used in biomechanics to model the stress behaviour of vertebrate long bones, particularly when creating intraspecific scaling models. Although methodologically straightforward, classic beam theory requires complex irregular bones to be approximated as slender beams, and the errors associated with simplifying complex organic structures to such an extent are unknown. Alternative approaches, such as finite element analysis (FEA), while much more time-consuming to perform, require no such assumptions. This study compares the results obtained using classic beam theory with those from FEA to quantify the beam theory errors and to provide recommendations about when a full FEA is essential for reasonable biomechanical predictions. High-resolution computed tomographic scans of eight vertebrate long bones were used to calculate diaphyseal stress owing to various loading regimes. Under compression, FEA values of minimum principal stress (σ_min) were on average 142 per cent (±28% s.e.) larger than those predicted by beam theory, with deviation between the two models correlated with shaft curvature (two-tailed p = 0.03, r² = 0.56). Under bending, FEA values of maximum principal stress (σ_max) and beam theory values differed on average by 12 per cent (±4% s.e.), with deviation between the models significantly correlated with cross-sectional asymmetry at midshaft (two-tailed p = 0.02, r² = 0.62). In torsion, assuming maximum stress values occurred at the location of minimum cortical thickness brought beam theory and FEA values closest in line, and in this case FEA values of τ_torsion were on average 14 per cent (±5% s.e.) higher than beam theory. Therefore, FEA is the preferred modelling solution when estimates of absolute diaphyseal stress are required, although values calculated by beam theory for bending may be acceptable in some situations. PMID:23173199
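To make the beam theory side of the comparison concrete, the following sketch computes the outer-fibre bending stress of a diaphysis idealized as a hollow elliptical beam (σ = M·c/I); the hollow-ellipse idealization, the dimensions and the bending moment are hypothetical, not taken from the scanned bones.

```python
import numpy as np

def bending_stress_hollow_ellipse(M, a_out, b_out, a_in, b_in):
    """Classic beam theory bending stress at the outer fibre of a hollow elliptical section.

    M          : bending moment (N*mm)
    a_*, b_*   : outer/inner semi-axes (mm); b is measured in the bending plane.
    I for a hollow ellipse bending about the a-axis is pi/4 * (a_out*b_out^3 - a_in*b_in^3).
    """
    I = np.pi / 4.0 * (a_out * b_out**3 - a_in * b_in**3)   # second moment of area, mm^4
    c = b_out                                               # distance to the outer fibre, mm
    return M * c / I                                        # N*mm * mm / mm^4 = MPa

# Hypothetical mid-shaft geometry (mm) and bending moment (N*mm)
print(f"sigma_max ~ {bending_stress_hollow_ellipse(5.0e4, 12, 10, 8, 6):.1f} MPa")
```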
Boluda-Ruiz, Rubén; García-Zambrana, Antonio; Castillo-Vázquez, Carmen; Castillo-Vázquez, Beatriz
2014-06-30
In this paper, a novel adaptive cooperative protocol with multiple relays using detect-and-forward (DF) over atmospheric turbulence channels with pointing errors is proposed. The adaptive DF cooperative protocol here analyzed is based on the selection of the optical path, source-destination or different source-relay links, with a greater value of fading gain or irradiance, maintaining a high diversity order. Closed-form asymptotic bit error-rate (BER) expressions are obtained for a cooperative free-space optical (FSO) communication system with Nr relays, when the irradiance of the transmitted optical beam is susceptible to either a wide range of turbulence conditions, following a gamma-gamma distribution of parameters α and β, or pointing errors, following a misalignment fading model where the effect of beam width, detector size and jitter variance is considered. A greater robustness for different link distances and pointing errors is corroborated by the obtained results if compared with similar cooperative schemes or equivalent multiple-input multiple-output (MIMO) systems. Simulation results are further demonstrated to confirm the accuracy and usefulness of the derived results.
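As background for the turbulence model named above, the gamma-gamma irradiance fading used in the analysis is the product of two independent unit-mean gamma variates; the sketch below only draws samples from that distribution, and the particular α and β values are illustrative, not those of the paper (the pointing-error component is omitted).

```python
import numpy as np

def gamma_gamma_samples(alpha, beta, n, rng=None):
    """Samples of irradiance fading h ~ GammaGamma(alpha, beta) with E[h] = 1.

    Product of two independent unit-mean gamma variates representing the
    large- and small-scale turbulence eddies.
    """
    rng = rng or np.random.default_rng(0)
    x = rng.gamma(shape=alpha, scale=1.0 / alpha, size=n)
    y = rng.gamma(shape=beta, scale=1.0 / beta, size=n)
    return x * y

h = gamma_gamma_samples(alpha=4.0, beta=1.9, n=100_000)   # example turbulence condition
print(h.mean(), h.var())   # mean ~1; variance = 1/alpha + 1/beta + 1/(alpha*beta)
```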
NASA Astrophysics Data System (ADS)
Khan, Faisal; Enzmann, Frieder; Kersten, Michael
2016-03-01
Image processing of X-ray-computed polychromatic cone-beam micro-tomography (μXCT) data of geological samples mainly involves artefact reduction and phase segmentation. For the former, the main beam-hardening (BH) artefact is removed by applying a best-fit quadratic surface algorithm to a given image data set (reconstructed slice), which minimizes the BH offsets of the attenuation data points from that surface. A Matlab code for this approach is provided in the Appendix. The final BH-corrected image is extracted from the residual data or from the difference between the surface elevation values and the original grey-scale values. For the segmentation, we propose a novel least-squares support vector machine (LS-SVM, an algorithm for pixel-based multi-phase classification) approach. A receiver operating characteristic (ROC) analysis was performed on BH-corrected and uncorrected samples to show that BH correction is in fact an important prerequisite for accurate multi-phase classification. The combination of the two approaches was thus used to classify successfully three different more or less complex multi-phase rock core samples.
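A minimal Python sketch of the quadratic-surface BH correction described above (the original code is provided in Matlab in the paper's Appendix): a surface z = c0 + c1·x + c2·y + c3·x² + c4·y² + c5·x·y is least-squares fitted to a reconstructed slice and subtracted; the synthetic cupping image in the usage example is an assumption for illustration.

```python
import numpy as np

def beam_hardening_correct(slice_img, mask=None):
    """Remove the cupping-type BH offset by subtracting a best-fit quadratic surface."""
    ny, nx = slice_img.shape
    y, x = np.mgrid[0:ny, 0:nx].astype(float)
    if mask is None:
        mask = np.ones_like(slice_img, dtype=bool)   # optionally restrict the fit to sample voxels
    A = np.column_stack([np.ones(mask.sum()), x[mask], y[mask],
                         x[mask] ** 2, y[mask] ** 2, (x * y)[mask]])
    coeff, *_ = np.linalg.lstsq(A, slice_img[mask], rcond=None)
    surface = (coeff[0] + coeff[1] * x + coeff[2] * y +
               coeff[3] * x ** 2 + coeff[4] * y ** 2 + coeff[5] * x * y)
    return slice_img - surface                        # residual (BH-corrected) image

yy, xx = np.mgrid[0:64, 0:64]
cupped = 100.0 - 0.01 * ((xx - 32.0) ** 2 + (yy - 32.0) ** 2)   # synthetic cupping artefact
print(beam_hardening_correct(cupped).std())                      # ~0 once the quadratic trend is removed
```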
MEMS-based wide-bandwidth electromagnetic energy harvester with electroplated nickel structure
NASA Astrophysics Data System (ADS)
Sun, Shi; Dai, Xuhan; Sun, Yunna; Xiang, Xiaojian; Ding, Guifu; Zhao, Xiaolin
2017-11-01
A novel nickel-based nonlinear electromagnetic energy harvester has been designed, fabricated, and characterized in this work. Electroplated nickel is well suited to a stretching-based bandwidth-broadening mechanism due to its good processability and mechanical properties. A strong hardening nonlinearity is induced by the large deformation of the thin nickel-based guided-beam structure. Combining the merits of both the mechanical properties and the guided-beam structure, the energy harvester shows good bandwidth performance. It is found that increasing the thickness of the central platform can guarantee the nonlinearity. Static and dynamic models of the energy harvester are simulated and validated. Test results show that the energy harvester has good repeatability without any destruction under large-deformation conditions. At an acceleration of 0.5 g, comparatively large bandwidths of 129 and 59 Hz are obtained for displacement and RMS output voltage, respectively. A power output of 3.4 µW and a normalized power density of 125.92 µW cm⁻³ g⁻² are achieved with a load resistance of 38 Ω.
Sanchez, Sophie; Fernandez, Vincent; Pierce, Stephanie E; Tafforeau, Paul
2013-09-01
Propagation phase-contrast synchrotron radiation microtomography (PPC-SRμCT) has proved to be very successful for examining fossils. Because fossils range widely in taphonomic preservation, size, shape and density, X-ray computed tomography protocols are constantly being developed and refined. Here we present a 1-h procedure that combines a filtered high-energy polychromatic beam with long-distance PPC-SRμCT (sample to detector: 4-16 m) and an attenuation protocol normalizing the absorption profile (tested on 13-cm-thick and 5.242 g cm⁻³ locally dense samples but applicable to 20-cm-thick samples). This approach provides high-quality imaging results, which show marked improvement relative to results from images obtained without the attenuation protocol in apparent transmission, contrast and signal-to-noise ratio. The attenuation protocol involves immersing samples in a tube filled with aluminum or glass balls in association with a U-shaped aluminum profiler. This technique therefore provides access to a larger dynamic range of the detector used for tomographic reconstruction. This protocol homogenizes beam-hardening artifacts, thereby rendering it effective for use with conventional μCT scanners.
A nonlinear analysis of the terahertz serpentine waveguide traveling-wave amplifier
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Ke, E-mail: like.3714@163.com; Cao, Miaomiao, E-mail: mona486@yeah.net; Institute of Electronics, University of Chinese Academy of Sciences, Beijing 100190
A nonlinear model for the numerical simulation of a terahertz serpentine waveguide traveling-wave tube (SW-TWT) is described. In this model, the electromagnetic wave transmission in the SW is represented as an infinite set of space harmonics that interact with an electron beam. Analytical expressions for the axial electric fields in the axisymmetric interaction gaps of SW-TWTs are derived and compared with results from CST simulation. The continuous beam is treated as discrete macro-particles with different initial phases. The beam-tunnel field equations, space-charge field equations, and motion equations are combined to solve the beam-wave interaction. The influence of the backward wave and relativistic effects is also considered in the set of equations. The nonlinear model is used to design a 340 GHz SW-TWT. Several favorable comparisons of model predictions with results from the 3-D particle-in-cell simulation code CHIPIC are presented, in which the output power versus beam voltage and interaction periods is illustrated. The relative error of the predicted output power is less than 15% in the 3 dB bandwidth, and the relative error of the saturated length is less than 8%. The results show that the 1-D nonlinear analysis model is appropriate for solving the terahertz SW-TWT operation characteristics.
NASA Astrophysics Data System (ADS)
Almeida, Isabel P.; Schyns, Lotte E. J. R.; Vaniqui, Ana; van der Heyden, Brent; Dedes, George; Resch, Andreas F.; Kamp, Florian; Zindler, Jaap D.; Parodi, Katia; Landry, Guillaume; Verhaegen, Frank
2018-06-01
Proton beam ranges derived from dual-energy computed tomography (DECT) images from a dual-spiral radiotherapy (RT)-specific CT scanner were assessed using Monte Carlo (MC) dose calculations. Images from a dual-source and a twin-beam DECT scanner were also used to establish a comparison with the RT-specific scanner. Proton range calculations based on conventional single-energy CT (SECT) were additionally performed to benchmark against literature values. Using two phantoms, a DECT methodology was tested as input for GEANT4 MC proton dose calculations. Proton ranges were calculated for different mono-energetic proton beams irradiating both phantoms; the results were compared to the ground truth based on the phantom compositions. The same methodology was applied to a head-and-neck cancer patient using both SECT and dual-spiral DECT scans from the RT-specific scanner. A pencil-beam-scanning plan was designed and subsequently optimized by MC dose calculations, and differences in proton range for the different image-based simulations were assessed. For the phantoms, the DECT method yielded overall better material segmentation, with >86% of the voxels correctly assigned for the dual-spiral and dual-source scanners, but only 64% for the twin-beam scanner. For the calibration phantom, the dual-spiral scanner yielded range errors below 1.2 mm (0.6% of range), similar to the errors yielded by the dual-source scanner (<1.1 mm, <0.5%). With the validation phantom, the dual-spiral scanner yielded errors below 0.8 mm (0.9%), whereas SECT yielded errors up to 1.6 mm (2%). For the patient case, where the absolute truth was missing, proton range differences between DECT and SECT were on average ‑1.2 ± 1.2 mm (‑0.5% ± 0.5%). MC dose calculations were successfully performed on DECT images, where the dual-spiral scanner resulted in media segmentation and range accuracy as good as the dual-source CT. In the patient, the various methods showed relevant range differences.
Villar-Salvador, Pedro; Peñuelas, Juan L; Jacobs, Douglass F
2013-02-01
Functional attributes determine the survival and growth of planted seedlings in reforestation projects. Nitrogen (N) and water are important resources in the cultivation of forest species, which have a strong effect on plant functional traits. We analyzed the influence of N nutrition on drought acclimation of Pinus pinea L. seedlings. Specifically, we addressed if high N fertilization reduces drought and frost tolerance of seedlings and whether drought hardening reverses the effect of high N fertilization on stress tolerance. Seedlings were grown under two N fertilization regimes (6 and 100 mg N per plant) and subjected to three drought-hardening levels (well-watered, moderate and strong hardening). Water relations, gas exchange, frost damage, N concentration and growth at the end of the drought-hardening period, and survival and growth of seedlings under controlled xeric and mesic outplanting conditions were measured. Relative to low-N plants, high-N plants were larger, had higher stomatal conductance (27%), residual transpiration (11%) and new root growth capacity and closed stomata at higher water potential. However, high N fertilization also increased frost damage (24%) and decreased plasmalemma stability to dehydration (9%). Drought hardening reversed to a great extent the reduction in stress tolerance caused by high N fertilization as it decreased frost damage, stomatal conductance and residual transpiration by 21, 31 and 24%, respectively, and increased plasmalemma stability to dehydration (8%). Drought hardening increased tissue non-structural carbohydrates and N concentration, especially in high-fertilized plants. Frost damage was positively related to the stability of plasmalemma to dehydration (r = 0.92) and both traits were negatively related to the concentration of reducing soluble sugars. No differences existed between moderate and strong drought-hardening treatments. Neither N nutrition nor drought hardening had any clear effect on seedling performance under xeric outplanting conditions. However, fertilization increased growth under mesic conditions, whereas drought hardening decreased growth. We conclude that drought hardening and N fertilization applied under typical container nursery operational conditions exert opposite effects on the physiological stress tolerance of P. pinea seedlings. While drought hardening increases overall stress tolerance, N nutrition reduces it and yet has no effect on the drought acclimation capacity of seedlings.
Xu, Xiaochao; Kim, Joshua; Laganis, Philip; Schulze, Derek; Liang, Yongguang; Zhang, Tiezhi
2011-10-01
To demonstrate the feasibility of Tetrahedron Beam Computed Tomography (TBCT) using a carbon nanotube (CNT) multiple pixel field emission x-ray (MPFEX) tube. A multiple pixel x-ray source facilitates the creation of novel x-ray imaging modalities. In a previous publication, the authors proposed a TBCT imaging system which comprises a linear source array and a linear detector array that are orthogonal to each other. TBCT is expected to reduce scatter compared with Cone Beam Computed Tomography (CBCT) and to have better detector performance. Therefore, it may produce improved image quality for image guided radiotherapy. In this study, a TBCT benchtop system has been developed with an MPFEX tube. The tube has 75 CNT cold cathodes, which generate 75 x-ray focal spots on an elongated anode, and has 4 mm pixel spacing. An in-house-developed, 5-row CT detector array using silicon photodiodes and CdWO₄ scintillators was employed in the system. Hardware and software were developed for tube control and detector data acquisition. The raw data were preprocessed for beam hardening and detector response linearity and were reconstructed with an FDK-based image reconstruction algorithm. The focal spots were measured at about 1 × 2 mm² using a star phantom. Each cathode generates around 3 mA cathode current with 2190 V gate voltage. The benchtop system is able to perform TBCT scans with a prolonged scanning time. Images of a commercial CT phantom were successfully acquired. A prototype system was developed, and preliminary phantom images were successfully acquired. MPFEX is a promising x-ray source for TBCT. Further improvement of tube output is needed in order for it to be used in clinical TBCT systems.
NASA Astrophysics Data System (ADS)
He, Cunfu; Yang, Meng; Liu, Xiucheng; Wang, Xueqian; Wu, Bin
2017-11-01
The magnetic hysteresis behaviours of ferromagnetic materials vary with heat treatment-induced microstructural changes. In this study, the minor hysteresis loop measurement technique was used to quantitatively characterise the case depth in two types of medium carbon steel. Firstly, high-frequency induction quenching was applied to rod samples to increase the volume fraction of hard martensite relative to the soft ferrite/pearlite (or sorbite) in the sample surface. In order to determine the effective and total case depth, a complementary error function was employed to fit the measured hardness-depth profiles of the induction-hardened samples. The cluster of minor hysteresis loops together with the tangential magnetic field (TMF) were recorded from all the samples, and a comparative study was conducted among three kinds of magnetic parameters that are sensitive to the variation of case depth. Compared to the parameters extracted from an individual minor loop and the distortion factor of the TMF, the magnitude of the third harmonic of the TMF was more suitable for indicating the variation in case depth. Two new minor-loop coefficients were introduced by combining two magnetic parameters with cumulative statistics of the cluster of minor loops. The experimental results showed that the two coefficients varied linearly and monotonically with case depth within the carefully selected magnetisation region.
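A minimal sketch of fitting a complementary error function to a hardness-depth profile, as done above to define the case depth; the parameterization, the hardness data and the initial guesses are assumptions for illustration (SciPy is assumed available), not the study's measurements.

```python
import numpy as np
from scipy.special import erfc
from scipy.optimize import curve_fit

def hardness_profile(depth, h_core, h_surf, d_c, w):
    """Assumed erfc parameterization of the hardness-depth profile of an
    induction-hardened rod: surface hardness h_surf decaying to core hardness
    h_core around a transition depth d_c of width w."""
    return h_core + 0.5 * (h_surf - h_core) * erfc((depth - d_c) / w)

# Hypothetical measured hardness (HV) vs. depth (mm) data
depth = np.array([0.2, 0.6, 1.0, 1.4, 1.8, 2.2, 2.6, 3.0])
hv    = np.array([680, 675, 650, 520, 340, 300, 295, 290])

popt, _ = curve_fit(hardness_profile, depth, hv, p0=[290.0, 680.0, 1.5, 0.3])
h_core, h_surf, d_c, w = popt
print(f"transition centre of the hardened case ~ {d_c:.2f} mm (width ~ {w:.2f} mm)")
```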
Coupling efficiency of laser beam to multimode fiber
NASA Astrophysics Data System (ADS)
Niu, Jinfu; Xu, Jianqiu
2007-06-01
The coupling efficiency of a laser beam into a multimode fiber is derived from geometrical optics, and the relation between the maximum coupling efficiency and the beam propagation factor M² is analyzed. An equivalent factor M_F² for the multimode fiber is introduced to characterize its coupling capability. The coupling efficiency of the laser beam into the multimode fiber is calculated as a function of the ratio M²/M_F² using the overlap integral theory. The optimal coupling efficiency can be roughly estimated from the ratio of M² to M_F², but with a large error range. The deviation arises from the lack of information on the detailed phase and intensity profiles contained in the beam factor M².
Work Hardening Behavior of 1020 Steel During Cold-Beating Simulation
NASA Astrophysics Data System (ADS)
CUI, Fengkui; LING, Yuanfei; XUE, Jinxue; LIU, Jia; LIU, Yuhui; LI, Yan
2017-03-01
Research on cold-beating formation has mainly focused on roller design and manufacture, kinematics, constitutive relations, metal flow laws, thermo-mechanical coupling, surface micro-topography and microstructure evolution. However, research on the surface quality and performance of workpieces during cold-beating is rare. A cold-beating simulation experiment on 1020 steel was conducted at room temperature and strain rates ranging from 2000 to 4000 s⁻¹, based on the laws of plastic forming. From the experimental data, a strain-hardening model of 1020 steel is established, scanning electron microscopy (SEM) is performed, and the work-hardening mechanism of 1020 steel is clarified by analyzing its microstructure variation. It is found that the strain-rate hardening effect of 1020 steel is stronger than the softening effect induced by increasing temperature; the simulated cold-beating process significantly changes the grain shape of 1020 steel, elongating the microstructure into a fibrous texture parallel to the deformation direction; and the higher the strain rate, the more obvious the grain refinement and the stronger the hardening effect. Additionally, the evolution of the work-hardening rate is investigated, and the relationships between dislocation density and strain, and between work-hardening rate and dislocation density, are obtained. Results show that the work-hardening rate of 1020 steel evolves in two stages: it decreases dramatically in the first stage and decreases slowly in the second, finally tending toward zero. Dislocation density increases with increasing strain and strain rate, and the work-hardening rate decreases with increasing dislocation density. These results provide a basis for improving the surface quality and performance of 1020 steel workpieces formed by cold-beating.
Effects of Ce additions on the age hardening response of Mg–Zn alloys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Langelier, Brian, E-mail: langelb@mcmaster.ca; Esmaeili, Shahrzad
2015-03-15
The effects of Ce additions on the precipitation hardening behaviour of Mg–Zn are examined for a series of alloys, with Ce additions at both alloying and microalloying levels. The alloys are artificially aged, and studied using hardness measurement and X-ray diffraction, as well as optical and transmission electron microscopy. It is found that the age-hardening effect is driven by the formation of fine precipitates, the number density of which is related to the Zn content of the alloy. Conversely, the Ce content is found to slightly reduce hardening. When the alloy content of Ce is high, large secondary phase particles containing both Ce and Zn are present, and remain stable during solutionizing. These particles effectively reduce the amount of Zn available as solute for precipitation, and thereby reduce hardening. Combining hardness results with thermodynamic analysis of alloy solute levels also suggests that Ce can have a negative effect on hardening when present as a solute at the onset of ageing. This effect is confirmed by designing a pre-ageing heat treatment to preferentially remove Ce solutes, which is found to restore the hardening capability of an Mg–Zn–Ce alloy to the level of the Ce-free alloy. - Highlights: • The effects of Ce additions on precipitation in Mg–Zn alloys are examined. • Additions of Ce to Mg–Zn slightly reduce the age-hardening response. • Ce-rich secondary phase particles deplete the matrix of Zn solute. • Hardening is also decreased when Ce is present in solution. • Pre-ageing to preferentially precipitate out Ce restores hardening capabilities.
Template For Aiming An X-Ray Machine
NASA Technical Reports Server (NTRS)
Morphet, W. J.
1994-01-01
A relatively inexpensive template helps in aligning an x-ray machine with a phenolic ring to be inspected for flaws. In the original application, the phenolic ring is part of a rocket nozzle. The concept is also applicable to x-ray inspection of other rings. The template contains alignment holes for adjusting the orientation, plus a target spot for adjusting the lateral position, of a laser spotting beam. (The laser spotting beam coincides with the x-ray beam, which is turned on later, after alignment is completed.) Use of the template decreases positioning time and error, providing consistent sensitivity for detection of flaws.
SU-F-T-465: Two Years of Radiotherapy Treatments Analyzed Through MLC Log Files
DOE Office of Scientific and Technical Information (OSTI.GOV)
Defoor, D; Kabat, C; Papanikolaou, N
Purpose: To present treatment statistics of a Varian Novalis Tx using more than 90,000 Varian Dynalog files collected over the past 2 years. Methods: Varian Dynalog files are recorded for every patient treated on our Varian Novalis Tx. The files are collected and analyzed daily to check the interfraction agreement of treatment deliveries. This is accomplished by creating fluence maps from the data contained in the Dynalog files. From the Dynalog files we have also compiled statistics for treatment delivery times, MLC errors, gantry errors and collimator errors. Results: The mean treatment time for VMAT patients was 153 ± 86 seconds, while the mean treatment time for step & shoot was 256 ± 149 seconds. Patients' treatment times showed a variation of 0.4% over their treatment course for VMAT and 0.5% for step & shoot. The average field sizes were 40 cm² and 26 cm² for VMAT and step & shoot, respectively. VMAT beams contained an average overall leaf travel of 34.17 meters, and step & shoot beams averaged less than half of that at 15.93 meters. When comparing planned and delivered fluence maps generated using the Dynalog files, VMAT plans showed an average gamma passing percentage of 99.85 ± 0.47. Step & shoot plans showed an average gamma passing percentage of 97.04 ± 0.04. 5.3% of beams contained an MLC error greater than 1 mm and 2.4% had an error greater than 2 mm. The mean gantry speed for VMAT plans was 1.01 degrees/s with a maximum of 6.5 degrees/s. Conclusion: Varian Dynalog files are useful for monitoring machine performance and treatment parameters. The Dynalog files have shown that the performance of the Novalis Tx is consistent over the course of a patient's treatment, with only slight variations in patient treatment times and a low rate of MLC errors.
Research on SEU hardening of heterogeneous Dual-Core SoC
NASA Astrophysics Data System (ADS)
Huang, Kun; Hu, Keliu; Deng, Jun; Zhang, Tao
2017-08-01
Single-Event Upset (SEU) hardening can be implemented with various schemes. However, some of them require substantial human, material and financial resources. This paper proposes a simple SEU-hardening scheme for a Heterogeneous Dual-Core SoC (HD SoC) that combines three techniques. First, an automatic Triple Modular Redundancy (TMR) technique is adopted to harden the register files of the processor and the instruction-fetching module. Second, Hamming codes are used to harden the random access memory (RAM). Last, a software signature technique is applied to check the programs running on the CPU. The scheme does not consume additional resources and has little influence on CPU performance. These techniques are mature, easy to implement and low in cost. According to the simulation results, the scheme satisfies the basic requirements of SEU hardening.
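Two of the techniques named above have very compact textbook forms; the sketch below shows a bitwise TMR majority voter and a Hamming(7,4) encoder as minimal illustrations, not the SoC's actual RTL or memory-protection implementation.

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority vote over three redundant register copies (TMR)."""
    return (a & b) | (a & c) | (b & c)

def hamming74_encode(nibble: int) -> int:
    """Hamming(7,4) encoder: 4 data bits -> 7-bit codeword (single-error correcting),
    a minimal illustration of the RAM protection scheme mentioned above."""
    d = [(nibble >> i) & 1 for i in range(4)]           # data bits d0..d3
    p1 = d[0] ^ d[1] ^ d[3]                             # parity over positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]                             # parity over positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]                             # parity over positions 4,5,6,7
    bits = [p1, p2, d[0], p3, d[1], d[2], d[3]]         # codeword positions 1..7
    return sum(b << i for i, b in enumerate(bits))

# A single upset in one register copy is masked by the voter:
assert tmr_vote(0b1011, 0b1011, 0b0011) == 0b1011
print(f"codeword for 0b1010: {hamming74_encode(0b1010):07b}")
```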
NASA Astrophysics Data System (ADS)
Daneyko, O. I.; Kulaeva, N. A.; Kovalevskaya, C. A.; Kolupaeva, S. N.
2015-07-01
A mathematical model of the plastic deformation of dispersion-hardened materials with an fcc matrix containing strengthening particles with an L1₂ superstructure coherent with the matrix is presented. The model is based on balance equations for deformation defects of different types, taking into account their transformation during plastic deformation. The influence of the scale characteristics of the hardening phase, temperature, and deformation rate on the evolution of the dislocation subsystem and the strain hardening of an alloy with an fcc matrix hardened by particles with an L1₂ superstructure is studied. A temperature anomaly of the mechanical properties is found for materials with different fcc matrices (Al, Cu, Ni). It is shown that the temperature anomaly is more pronounced for materials with a larger volume fraction of the hardening phase.
Beck, Erwin H; Heim, Richard; Hansen, Jens
2004-12-01
This introductory overview shows that cold, in particular frost, stresses a plant in manifold ways and that the plant's response, whether injurious or adaptive, must be considered a syndrome rather than a single reaction. In the course of the year, perennial plants of the temperate climate zones undergo frost hardening in autumn and dehardening in spring. Using Scots pine (Pinus sylvestris L.) as a model plant, the environmental signals inducing frost hardening and dehardening, respectively, were investigated. Over two years, the changes in frost resistance of Scots pine needles were recorded together with the annual courses of day-length and ambient temperature. Both act as environmental signals for frost hardening and dehardening. Climate chamber experiments showed that short day-length as a signal triggering frost hardening could be replaced by irradiation with far-red light, while red light inhibited hardening. The involvement of phytochrome as a signal receptor could be corroborated by the respective night-break experiments. More rapid frost hardening than that achieved by short-day or far-red treatment was obtained by applying a short period (6 h) of mild frost which did not exceed the plant's cold resistance. Both types of signals were independently effective, but the rates of frost hardening were not additive. The maximal rate of hardening was −0.93 °C per day, and a frost tolerance below −72 °C was achieved. For dehardening, temperature was an even more effective signal than day-length.